#-------------------------------------------------------------------------------
#
# Define classes for (uni/multi)-variate kernel density estimation.
#
# Currently, only Gaussian kernels are implemented.
#
# Written by: Robert Kern
#
# Date: 2004-08-09
#
# Modified: 2005-02-10 by Robert Kern.
# Contributed to SciPy
# 2005-10-07 by Robert Kern.
# Some fixes to match the new scipy_core
#
# Copyright 2004-2005 by Enthought, Inc.
#
#-------------------------------------------------------------------------------
from __future__ import division, print_function, absolute_import
# Standard library imports.
import warnings
# SciPy imports.
from scipy import linalg, special
from scipy.special import logsumexp
from scipy._lib._util import check_random_state
from numpy import (asarray, atleast_2d, reshape, zeros, newaxis, dot, exp, pi,
sqrt, ravel, power, atleast_1d, squeeze, sum, transpose,
ones, cov)
import numpy as np
# Local imports.
from . import mvn
__all__ = ['gaussian_kde']
class gaussian_kde(object):
"""Representation of a kernel-density estimate using Gaussian kernels.
Kernel density estimation is a way to estimate the probability density
function (PDF) of a random variable in a non-parametric way.
`gaussian_kde` works for both uni-variate and multi-variate data. It
includes automatic bandwidth determination. The estimation works best for
a unimodal distribution; bimodal or multi-modal distributions tend to be
oversmoothed.
Parameters
----------
dataset : array_like
Datapoints to estimate from. In case of univariate data this is a 1-D
array, otherwise a 2-D array with shape (# of dims, # of data).
bw_method : str, scalar or callable, optional
The method used to calculate the estimator bandwidth. This can be
'scott', 'silverman', a scalar constant or a callable. If a scalar,
this will be used directly as `kde.factor`. If a callable, it should
take a `gaussian_kde` instance as only parameter and return a scalar.
If None (default), 'scott' is used. See Notes for more details.
weights : array_like, optional
Weights of datapoints. This must be the same shape as `dataset`.
If None (default), the samples are assumed to be equally weighted.
Attributes
----------
dataset : ndarray
The dataset with which `gaussian_kde` was initialized.
d : int
Number of dimensions.
n : int
Number of datapoints.
neff : int
Effective number of datapoints.
.. versionadded:: 1.2.0
factor : float
The bandwidth factor, obtained from `kde.covariance_factor`, with which
the covariance matrix is multiplied.
covariance : ndarray
The covariance matrix of `dataset`, scaled by the calculated bandwidth
(`kde.factor`).
inv_cov : ndarray
The inverse of `covariance`.
Methods
-------
evaluate
__call__
integrate_gaussian
integrate_box_1d
integrate_box
integrate_kde
pdf
logpdf
resample
set_bandwidth
covariance_factor
Notes
-----
Bandwidth selection strongly influences the estimate obtained from the KDE
(much more so than the actual shape of the kernel). Bandwidth selection
can be done by a "rule of thumb", by cross-validation, by "plug-in
methods" or by other means; see [3]_, [4]_ for reviews. `gaussian_kde`
uses a rule of thumb, the default is Scott's Rule.
Scott's Rule [1]_, implemented as `scotts_factor`, is::
n**(-1./(d+4)),
with ``n`` the number of data points and ``d`` the number of dimensions.
In the case of unequally weighted points, `scotts_factor` becomes::
neff**(-1./(d+4)),
with ``neff`` the effective number of datapoints.
Silverman's Rule [2]_, implemented as `silverman_factor`, is::
(n * (d + 2) / 4.)**(-1. / (d + 4)),
or, in the case of unequally weighted points::
(neff * (d + 2) / 4.)**(-1. / (d + 4)).
Good general descriptions of kernel density estimation can be found in [1]_
and [2]_, the mathematics for this multi-dimensional implementation can be
found in [1]_.
With a set of weighted samples, the effective number of datapoints ``neff``
is defined by::
neff = sum(weights)^2 / sum(weights^2)
as detailed in [5]_.
References
----------
.. [1] D.W. Scott, "Multivariate Density Estimation: Theory, Practice, and
Visualization", John Wiley & Sons, New York, Chichester, 1992.
.. [2] B.W. Silverman, "Density Estimation for Statistics and Data
Analysis", Vol. 26, Monographs on Statistics and Applied Probability,
Chapman and Hall, London, 1986.
.. [3] B.A. Turlach, "Bandwidth Selection in Kernel Density Estimation: A
Review", CORE and Institut de Statistique, Vol. 19, pp. 1-33, 1993.
.. [4] D.M. Bashtannyk and R.J. Hyndman, "Bandwidth selection for kernel
conditional density estimation", Computational Statistics & Data
Analysis, Vol. 36, pp. 279-298, 2001.
.. [5] P.G. Gray, Journal of the Royal Statistical Society, Series A
(General), Vol. 132, p. 272, 1969.
Examples
--------
Generate some random two-dimensional data:
>>> from scipy import stats
>>> def measure(n):
... "Measurement model, return two coupled measurements."
... m1 = np.random.normal(size=n)
... m2 = np.random.normal(scale=0.5, size=n)
... return m1+m2, m1-m2
>>> m1, m2 = measure(2000)
>>> xmin = m1.min()
>>> xmax = m1.max()
>>> ymin = m2.min()
>>> ymax = m2.max()
Perform a kernel density estimate on the data:
>>> X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
>>> positions = np.vstack([X.ravel(), Y.ravel()])
>>> values = np.vstack([m1, m2])
>>> kernel = stats.gaussian_kde(values)
>>> Z = np.reshape(kernel(positions).T, X.shape)
Plot the results:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.imshow(np.rot90(Z), cmap=plt.cm.gist_earth_r,
... extent=[xmin, xmax, ymin, ymax])
>>> ax.plot(m1, m2, 'k.', markersize=2)
>>> ax.set_xlim([xmin, xmax])
>>> ax.set_ylim([ymin, ymax])
>>> plt.show()
"""
def __init__(self, dataset, bw_method=None, weights=None):
self.dataset = atleast_2d(asarray(dataset))
if not self.dataset.size > 1:
raise ValueError("`dataset` input should have multiple elements.")
self.d, self.n = self.dataset.shape
if weights is not None:
self._weights = atleast_1d(weights).astype(float)
self._weights /= sum(self._weights)
if self.weights.ndim != 1:
raise ValueError("`weights` input should be one-dimensional.")
if len(self._weights) != self.n:
raise ValueError("`weights` input should be of length n")
self._neff = 1/sum(self._weights**2)
self.set_bandwidth(bw_method=bw_method)
def evaluate(self, points):
"""Evaluate the estimated pdf on a set of points.
Parameters
----------
points : (# of dimensions, # of points)-array
Alternatively, a (# of dimensions,) vector can be passed in and
treated as a single point.
Returns
-------
values : (# of points,)-array
The values at each point.
Raises
------
ValueError : if the dimensionality of the input points is different than
the dimensionality of the KDE.
"""
points = atleast_2d(asarray(points))
d, m = points.shape
if d != self.d:
if d == 1 and m == self.d:
# points was passed in as a row vector
points = reshape(points, (self.d, 1))
m = 1
else:
msg = "points have dimension %s, dataset has dimension %s" % (d,
self.d)
raise ValueError(msg)
result = zeros((m,), dtype=float)
whitening = linalg.cholesky(self.inv_cov)
scaled_dataset = dot(whitening, self.dataset)
scaled_points = dot(whitening, points)
if m >= self.n:
# there are more points than data, so loop over data
for i in range(self.n):
diff = scaled_dataset[:, i, newaxis] - scaled_points
energy = sum(diff * diff, axis=0) / 2.0
result += self.weights[i]*exp(-energy)
else:
# loop over points
for i in range(m):
diff = scaled_dataset - scaled_points[:, i, newaxis]
energy = sum(diff * diff, axis=0) / 2.0
result[i] = sum(exp(-energy)*self.weights, axis=0)
result = result / self._norm_factor
return result
__call__ = evaluate
def integrate_gaussian(self, mean, cov):
"""
Multiply estimated density by a multivariate Gaussian and integrate
over the whole space.
Parameters
----------
mean : array_like
A 1-D array, specifying the mean of the Gaussian.
cov : array_like
A 2-D array, specifying the covariance matrix of the Gaussian.
Returns
-------
result : scalar
The value of the integral.
Raises
------
ValueError
If the mean or covariance of the input Gaussian differs from
the KDE's dimensionality.
"""
mean = atleast_1d(squeeze(mean))
cov = atleast_2d(cov)
if mean.shape != (self.d,):
raise ValueError("mean does not have dimension %s" % self.d)
if cov.shape != (self.d, self.d):
raise ValueError("covariance does not have dimension %s" % self.d)
# make mean a column vector
mean = mean[:, newaxis]
sum_cov = self.covariance + cov
# This will raise LinAlgError if the new cov matrix is not
# symmetric positive definite. cho_factor returns (ndarray, bool),
# where the bool flags whether the ndarray is upper or lower triangular.
sum_cov_chol = linalg.cho_factor(sum_cov)
diff = self.dataset - mean
tdiff = linalg.cho_solve(sum_cov_chol, diff)
sqrt_det = np.prod(np.diagonal(sum_cov_chol[0]))
norm_const = power(2 * pi, sum_cov.shape[0] / 2.0) * sqrt_det
energies = sum(diff * tdiff, axis=0) / 2.0
result = sum(exp(-energies)*self.weights, axis=0) / norm_const
return result
def integrate_box_1d(self, low, high):
"""
Computes the integral of a 1D pdf between two bounds.
Parameters
----------
low : scalar
Lower bound of integration.
high : scalar
Upper bound of integration.
Returns
-------
value : scalar
The result of the integral.
Raises
------
ValueError
If the KDE is over more than one dimension.
"""
if self.d != 1:
raise ValueError("integrate_box_1d() only handles 1D pdfs")
stdev = ravel(sqrt(self.covariance))[0]
normalized_low = ravel((low - self.dataset) / stdev)
normalized_high = ravel((high - self.dataset) / stdev)
value = np.sum(self.weights*(
special.ndtr(normalized_high) -
special.ndtr(normalized_low)))
return value
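`integrate_box_1d` reduces each kernel's contribution to a difference of normal CDFs. A standalone sanity check of that idea using only the standard library (`phi` is an illustrative helper, not part of scipy):

```python
import math

def phi(z):
    # Standard normal CDF, expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Mass of a single unit-bandwidth Gaussian kernel centered at 0 on [-1, 1]:
print(phi(1.0) - phi(-1.0))  # ≈ 0.6827
```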
def integrate_box(self, low_bounds, high_bounds, maxpts=None):
"""Computes the integral of a pdf over a rectangular interval.
Parameters
----------
low_bounds : array_like
A 1-D array containing the lower bounds of integration.
high_bounds : array_like
A 1-D array containing the upper bounds of integration.
maxpts : int, optional
The maximum number of points to use for integration.
Returns
-------
value : scalar
The result of the integral.
"""
if maxpts is not None:
extra_kwds = {'maxpts': maxpts}
else:
extra_kwds = {}
value, inform = mvn.mvnun_weighted(low_bounds, high_bounds,
self.dataset, self.weights,
self.covariance, **extra_kwds)
if inform:
msg = ('An integral in mvn.mvnun requires more points than %s' %
(self.d * 1000))
warnings.warn(msg)
return value
def integrate_kde(self, other):
"""
Computes the integral of the product of this kernel density estimate
with another.
Parameters
----------
other : gaussian_kde instance
The other kde.
Returns
-------
value : scalar
The result of the integral.
Raises
------
ValueError
If the KDEs have different dimensionality.
"""
if other.d != self.d:
raise ValueError("KDEs are not the same dimensionality")
# we want to iterate over the smallest number of points
if other.n < self.n:
small = other
large = self
else:
small = self
large = other
sum_cov = small.covariance + large.covariance
sum_cov_chol = linalg.cho_factor(sum_cov)
result = 0.0
for i in range(small.n):
mean = small.dataset[:, i, newaxis]
diff = large.dataset - mean
tdiff = linalg.cho_solve(sum_cov_chol, diff)
energies = sum(diff * tdiff, axis=0) / 2.0
result += sum(exp(-energies)*large.weights, axis=0)*small.weights[i]
sqrt_det = np.prod(np.diagonal(sum_cov_chol[0]))
norm_const = power(2 * pi, sum_cov.shape[0] / 2.0) * sqrt_det
result /= norm_const
return result
def resample(self, size=None, seed=None):
"""
Randomly sample a dataset from the estimated pdf.
Parameters
----------
size : int, optional
The number of samples to draw. If not provided, then the size is
the same as the effective number of samples in the underlying
dataset.
seed : {None, int, `~np.random.RandomState`, `~np.random.Generator`}, optional
This parameter defines the object to use for drawing random
variates.
If `seed` is `None` the `~np.random.RandomState` singleton is used.
If `seed` is an int, a new ``RandomState`` instance is used, seeded
with seed.
If `seed` is already a ``RandomState`` or ``Generator`` instance,
then that object is used.
Default is None.
Specify `seed` for reproducible drawing of random variates.
Returns
-------
resample : (self.d, `size`) ndarray
The sampled dataset.
"""
if size is None:
size = int(self.neff)
random_state = check_random_state(seed)
norm = transpose(random_state.multivariate_normal(
zeros((self.d,), float), self.covariance, size=size
))
indices = random_state.choice(self.n, size=size, p=self.weights)
means = self.dataset[:, indices]
return means + norm
def scotts_factor(self):
"""Compute Scott's factor.
Returns
-------
s : float
Scott's factor.
"""
return power(self.neff, -1./(self.d+4))
def silverman_factor(self):
"""Compute the Silverman factor.
Returns
-------
s : float
The Silverman factor.
"""
return power(self.neff*(self.d+2.0)/4.0, -1./(self.d+4))
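Both rules above reduce to simple power laws in the (effective) sample size and dimension. A quick numeric sketch of the formulas (standalone helpers, not the class methods):

```python
def scotts(n, d):
    # Scott's rule: n**(-1 / (d + 4))
    return n ** (-1.0 / (d + 4))

def silverman(n, d):
    # Silverman's rule: (n * (d + 2) / 4)**(-1 / (d + 4))
    return (n * (d + 2) / 4.0) ** (-1.0 / (d + 4))

print(scotts(100, 1))     # 100**(-0.2) ≈ 0.3981
print(silverman(100, 1))  # 75**(-0.2) ≈ 0.4217
```

For univariate data the two rules differ only by the constant `(d + 2) / 4 = 3/4`, so they give similar bandwidths.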
# Default method to calculate bandwidth; can be overridden by a subclass
covariance_factor = scotts_factor
covariance_factor.__doc__ = """Computes the coefficient (`kde.factor`) that
multiplies the data covariance matrix to obtain the kernel covariance
matrix. The default is `scotts_factor`. A subclass can overwrite this
method to provide a different method, or set it through a call to
`kde.set_bandwidth`."""
def set_bandwidth(self, bw_method=None):
"""Compute the estimator bandwidth with given method.
The new bandwidth calculated after a call to `set_bandwidth` is used
for subsequent evaluations of the estimated density.
Parameters
----------
bw_method : str, scalar or callable, optional
The method used to calculate the estimator bandwidth. This can be
'scott', 'silverman', a scalar constant or a callable. If a
scalar, this will be used directly as `kde.factor`. If a callable,
it should take a `gaussian_kde` instance as only parameter and
return a scalar. If None (default), nothing happens; the current
`kde.covariance_factor` method is kept.
Notes
-----
.. versionadded:: 0.11
Examples
--------
>>> import scipy.stats as stats
>>> x1 = np.array([-7, -5, 1, 4, 5.])
>>> kde = stats.gaussian_kde(x1)
>>> xs = np.linspace(-10, 10, num=50)
>>> y1 = kde(xs)
>>> kde.set_bandwidth(bw_method='silverman')
>>> y2 = kde(xs)
>>> kde.set_bandwidth(bw_method=kde.factor / 3.)
>>> y3 = kde(xs)
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.plot(x1, np.full(x1.shape, 1 / (4. * x1.size)), 'bo',
... label='Data points (rescaled)')
>>> ax.plot(xs, y1, label='Scott (default)')
>>> ax.plot(xs, y2, label='Silverman')
>>> ax.plot(xs, y3, label='Const (1/3 * Silverman)')
>>> ax.legend()
>>> plt.show()
"""
if bw_method is None:
pass
elif bw_method == 'scott':
self.covariance_factor = self.scotts_factor
elif bw_method == 'silverman':
self.covariance_factor = self.silverman_factor
elif np.isscalar(bw_method) and not isinstance(bw_method, str):
self._bw_method = 'use constant'
self.covariance_factor = lambda: bw_method
elif callable(bw_method):
self._bw_method = bw_method
self.covariance_factor = lambda: self._bw_method(self)
else:
msg = "`bw_method` should be 'scott', 'silverman', a scalar " \
"or a callable."
raise ValueError(msg)
self._compute_covariance()
def _compute_covariance(self):
"""Computes the covariance matrix for each Gaussian kernel using
covariance_factor().
"""
self.factor = self.covariance_factor()
# Cache covariance and inverse covariance of the data
if not hasattr(self, '_data_inv_cov'):
self._data_covariance = atleast_2d(cov(self.dataset, rowvar=1,
bias=False,
aweights=self.weights))
self._data_inv_cov = linalg.inv(self._data_covariance)
self.covariance = self._data_covariance * self.factor**2
self.inv_cov = self._data_inv_cov / self.factor**2
self._norm_factor = sqrt(linalg.det(2*pi*self.covariance))
def pdf(self, x):
"""
Evaluate the estimated pdf on a provided set of points.
Notes
-----
This is an alias for `gaussian_kde.evaluate`. See the ``evaluate``
docstring for more details.
"""
return self.evaluate(x)
def logpdf(self, x):
"""
Evaluate the log of the estimated pdf on a provided set of points.
"""
points = atleast_2d(x)
d, m = points.shape
if d != self.d:
if d == 1 and m == self.d:
# points was passed in as a row vector
points = reshape(points, (self.d, 1))
m = 1
else:
msg = "points have dimension %s, dataset has dimension %s" % (d,
self.d)
raise ValueError(msg)
if m >= self.n:
# there are more points than data, so loop over data
energy = zeros((self.n, m), dtype=float)
for i in range(self.n):
diff = self.dataset[:, i, newaxis] - points
tdiff = dot(self.inv_cov, diff)
energy[i] = sum(diff*tdiff, axis=0) / 2.0
result = logsumexp(-energy.T,
b=self.weights / self._norm_factor, axis=1)
else:
# loop over points
result = zeros((m,), dtype=float)
for i in range(m):
diff = self.dataset - points[:, i, newaxis]
tdiff = dot(self.inv_cov, diff)
energy = sum(diff * tdiff, axis=0) / 2.0
result[i] = logsumexp(-energy, b=self.weights /
self._norm_factor)
return result
@property
def weights(self):
try:
return self._weights
except AttributeError:
self._weights = ones(self.n)/self.n
return self._weights
@property
def neff(self):
try:
return self._neff
except AttributeError:
self._neff = 1/sum(self.weights**2)
return self._neff
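The `neff = sum(weights)^2 / sum(weights^2)` formula from the Notes section can be checked directly. A minimal numpy sketch (`effective_sample_size` is an illustrative helper, not part of scipy):

```python
import numpy as np

def effective_sample_size(weights):
    # neff = sum(w)^2 / sum(w^2); equals n when all weights are equal,
    # and shrinks toward 1 as the weight concentrates on few points.
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

print(effective_sample_size(np.ones(100)))        # 100.0
print(effective_sample_size([10.0] + [1.0] * 9))  # 361 / 109 ≈ 3.31
```

Note that scaling all weights by a constant leaves `neff` unchanged, which is why `gaussian_kde` can normalize `weights` to sum to 1 without affecting the bandwidth rules.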
# =============================================================================
# Source: arokem/scipy — scipy/stats/kde.py (Python, BSD-3-Clause)
# =============================================================================
#!/usr/bin/env python
"""
Reads a list of intervals and a maf. Produces a new maf containing the
blocks or parts of blocks in the original that overlapped the intervals.
If a MAF file, not UID, is provided the MAF file is indexed before being processed.
NOTE: If two intervals overlap the same block it will be written twice.
usage: %prog maf_file [options]
-d, --dbkey=d: Database key, ie hg17
-c, --chromCol=c: Column of Chr
-s, --startCol=s: Column of Start
-e, --endCol=e: Column of End
-S, --strandCol=S: Column of Strand
-t, --mafType=t: Type of MAF source to use
-m, --mafFile=m: Path of source MAF file, if not using cached version
-I, --mafIndex=I: Path of precomputed source MAF file index, if not using cached version
-i, --interval_file=i: Input interval file
-o, --output_file=o: Output MAF file
-p, --species=p: Species to include in output
-l, --indexLocation=l: Override default maf_index.loc file
-z, --mafIndexFile=z: Directory of local maf index file ( maf_index.loc or maf_pairwise.loc )
"""
#Dan Blankenberg
from galaxy import eggs
import pkg_resources; pkg_resources.require( "bx-python" )
from bx.cookbook import doc_optparse
import bx.align.maf
import bx.intervals.io
from galaxy.tools.util import maf_utilities
import sys
assert sys.version_info[:2] >= ( 2, 4 )
def __main__():
index = index_filename = None
mincols = 0
#Parse Command Line
options, args = doc_optparse.parse( __doc__ )
if options.dbkey: dbkey = options.dbkey
else: dbkey = None
if dbkey in [None, "?"]:
print >>sys.stderr, "You must specify a proper build in order to extract alignments. You can specify your genome build by clicking on the pencil icon associated with your interval file."
sys.exit()
species = maf_utilities.parse_species_option( options.species )
if options.chromCol: chromCol = int( options.chromCol ) - 1
else:
print >>sys.stderr, "Chromosome column not set, click the pencil icon in the history item to set the metadata attributes."
sys.exit()
if options.startCol: startCol = int( options.startCol ) - 1
else:
print >>sys.stderr, "Start column not set, click the pencil icon in the history item to set the metadata attributes."
sys.exit()
if options.endCol: endCol = int( options.endCol ) - 1
else:
print >>sys.stderr, "End column not set, click the pencil icon in the history item to set the metadata attributes."
sys.exit()
if options.strandCol: strandCol = int( options.strandCol ) - 1
else:
strandCol = -1
if options.interval_file: interval_file = options.interval_file
else:
print >>sys.stderr, "Input interval file has not been specified."
sys.exit()
if options.output_file: output_file = options.output_file
else:
print >>sys.stderr, "Output file has not been specified."
sys.exit()
#Finish parsing command line
#Open indexed access to MAFs
if options.mafType:
if options.indexLocation:
index = maf_utilities.maf_index_by_uid( options.mafType, options.indexLocation )
else:
index = maf_utilities.maf_index_by_uid( options.mafType, options.mafIndexFile )
if index is None:
print >> sys.stderr, "The MAF source specified (%s) appears to be invalid." % ( options.mafType )
sys.exit()
elif options.mafFile:
index, index_filename = maf_utilities.open_or_build_maf_index( options.mafFile, options.mafIndex, species = [dbkey] )
if index is None:
print >> sys.stderr, "Your MAF file appears to be malformed."
sys.exit()
else:
print >>sys.stderr, "Desired source MAF type has not been specified."
sys.exit()
#Create MAF writer
out = bx.align.maf.Writer( open(output_file, "w") )
#Iterate over input regions
num_blocks = 0
num_regions = None
for num_regions, region in enumerate( bx.intervals.io.NiceReaderWrapper( open( interval_file, 'r' ), chrom_col = chromCol, start_col = startCol, end_col = endCol, strand_col = strandCol, fix_strand = True, return_header = False, return_comments = False ) ):
src = "%s.%s" % ( dbkey, region.chrom )
for block in maf_utilities.get_chopped_blocks_for_region( index, src, region, species, mincols ):
out.write( block )
num_blocks += 1
#Close output MAF
out.close()
#remove index file if created during run
maf_utilities.remove_temp_index_file( index_filename )
if num_blocks:
print "%i MAF blocks extracted for %i regions." % ( num_blocks, ( num_regions + 1 ) )
elif num_regions is not None:
print "No MAF blocks could be extracted for %i regions." % ( num_regions + 1 )
else:
print "No valid regions have been provided."
if __name__ == "__main__": __main__()
# =============================================================================
# Source: dbcls/dbcls-galaxy — tools/maf/interval2maf.py (Python, MIT)
# =============================================================================
import numpy as np
def gaussian_kernel(x1, x2, sigma):
x1 = x1.flatten()
x2 = x2.flatten()
sim = 0
# ===================== Your Code Here =====================
# Instructions : Fill in this function to return the similarity between x1
# and x2 computed using a Gaussian kernel with bandwidth sigma
#
# Gaussian (RBF) kernel: exp(-||x1 - x2||^2 / (2 * sigma^2))
sim = np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))
# ==========================================================
return sim
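For reference, a self-contained sketch of the Gaussian (RBF) kernel the exercise asks for, with hand-worked values (assuming the standard `exp(-||x1 - x2||^2 / (2 * sigma^2))` form; `rbf_kernel` is an illustrative name):

```python
import numpy as np

def rbf_kernel(x1, x2, sigma):
    # Gaussian similarity: 1.0 for identical points, tends to 0 as they
    # move apart; sigma controls how fast the similarity decays.
    x1, x2 = np.asarray(x1, float).ravel(), np.asarray(x2, float).ravel()
    return float(np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2)))

print(rbf_kernel([1, 2, 1], [1, 2, 1], 2.0))   # 1.0
print(rbf_kernel([1, 2, 1], [0, 4, -1], 2.0))  # exp(-9/8) ≈ 0.3247
```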
# =============================================================================
# Source: nsoojin/coursera-ml-py — machine-learning-ex6/ex6/gaussianKernel.py
# (Python, MIT)
# =============================================================================
# $Id$
#
# Copyright (C) 2006 Greg Landrum
#
# @@ All Rights Reserved @@
# This file is part of the RDKit.
# The contents are covered by the terms of the BSD license
# which is included in the file license.txt, found at the root
# of the RDKit source tree.
#
from contextlib import closing
import unittest
from io import StringIO
from rdkit.Chem.FeatMaps import FeatMaps, FeatMapParser
def feq(n1, n2, tol=1e-5):
return abs(n1 - n2) <= tol
class TestCase(unittest.TestCase):
data = """
ScoreMode=Best
DirScoreMode=DotFullRange
BeginParams
family=Aromatic radius=2.5 width=1.0 profile=Triangle
family=Acceptor radius=1.5
EndParams
# optional
BeginPoints
family=Acceptor pos=(1.0, 0.0, 5.0) weight=1.25 dir=(1, 1, 0)
family=Aromatic pos=(0.0,1.0,0.0) weight=2.0 dir=(0,0,1) dir=(0,0,-1)
family=Acceptor pos=(1.0,1.0,2.0) weight=1.25
EndPoints
"""
def test1Basics(self):
p = FeatMapParser.FeatMapParser()
p.SetData(self.data)
fm = p.Parse()
self.assertTrue(fm.scoreMode == FeatMaps.FeatMapScoreMode.Best)
self.assertTrue(fm.dirScoreMode == FeatMaps.FeatDirScoreMode.DotFullRange)
self.assertTrue(fm.GetNumFeatures() == 3)
feats = fm.GetFeatures()
self.assertTrue(feq(feats[0].weight, 1.25))
self.assertTrue(feq(feats[1].weight, 2.0))
self.assertTrue(feq(feats[2].weight, 1.25))
self.assertTrue(len(feats[0].featDirs) == 1)
self.assertTrue(len(feats[1].featDirs) == 2)
self.assertTrue(len(feats[2].featDirs) == 0)
fams = [x.GetFamily() for x in feats]
self.assertTrue(fams == ['Acceptor', 'Aromatic', 'Acceptor'])
def test_FeatMapParser(self):
# We can use a string
p = FeatMapParser.FeatMapParser(data=self.data)
fm = p.Parse()
self.assertEqual(fm.GetNumFeatures(), 3)
self.assertEqual([x.GetFamily() for x in fm.GetFeatures()],
['Acceptor', 'Aromatic', 'Acceptor'])
# We can use a list of strings
p = FeatMapParser.FeatMapParser(data=self.data.split('\n'))
fm = p.Parse()
self.assertEqual(fm.GetNumFeatures(), 3)
self.assertEqual([x.GetFamily() for x in fm.GetFeatures()],
['Acceptor', 'Aromatic', 'Acceptor'])
# and a stream
with closing(StringIO(self.data)) as file:
p = FeatMapParser.FeatMapParser(file=file)
fm = p.Parse()
self.assertEqual(fm.GetNumFeatures(), 3)
self.assertEqual([x.GetFamily() for x in fm.GetFeatures()],
['Acceptor', 'Aromatic', 'Acceptor'])
def test_ParseErrors(self):
# Typos in scoreMode or dirscoreMode section
data = "scoreMode = typo\nbeginParams\nfamily=Acceptor radius=1.5\nEndParams"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
data = "dirscoremode = typo\nbeginParams\nfamily=Acceptor radius=1.5\nEndParams"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
data = "typo = All\nbeginParams\nfamily=Acceptor radius=1.5\nEndParams"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
# Typos in paramBlock
data = "beginTypo\nfamily=Acceptor radius=1.5\nEndParams"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
data = "beginParams\nfamily=Acceptor radius=1.5\nEndTypo"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
data = "beginParams\ntypo=Acceptor radius=1.5\nEndParams"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
data = "beginParams\nprofile=Typo\nEndParams"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
# Typos in points block
data = "BeginPoints\npos=(1.0, 0.0, 5.0, 4.0)\nEndPoints"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(ValueError, p.Parse)
data = "BeginPoints\npos=(1.0, 0.0, 5.0) typo=Acceptor\nEndPoints"
p = FeatMapParser.FeatMapParser(data=data)
self.assertRaises(FeatMapParser.FeatMapParseError, p.Parse)
if __name__ == '__main__': # pragma: nocover
unittest.main()
# =============================================================================
# Source: bp-kelley/rdkit — rdkit/Chem/FeatMaps/UnitTestFeatMapParser.py
# (Python, BSD-3-Clause)
# =============================================================================
# -*- coding: utf-8 -*-
# Copyright (C) 2012, Almar Klein, Ant1, Marius van Voorden
#
# This code is subject to the (new) BSD license:
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the <organization> nor the
# names of its contributors may be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
""" Module images2gif
Provides functionality for reading and writing animated GIF images.
Use writeGif to write a series of numpy arrays or PIL images as an
animated GIF. Use readGif to read an animated gif as a series of numpy
arrays.
Note that since July 2004, all patents on the LZW compression algorithm have
expired. Therefore the GIF format may now be used freely.
Acknowledgements
----------------
Many thanks to Ant1 for:
* noting the use of "palette=PIL.Image.ADAPTIVE", which significantly
improves the results.
* the modifications to save each image with its own palette, or optionally
the global palette (if it's the same).
Many thanks to Marius van Voorden for porting the NeuQuant quantization
algorithm of Anthony Dekker to Python (See the NeuQuant class for its
license).
Many thanks to Alex Robinson for implementing the concept of subrectangles,
which (depending on image content) can give a very significant reduction in
file size.
This code is based on gifmaker (in the scripts folder of the source
distribution of PIL)
Useful links
-------------
* http://tronche.com/computer-graphics/gif/
* http://en.wikipedia.org/wiki/Graphics_Interchange_Format
* http://www.w3.org/Graphics/GIF/spec-gif89a.txt
"""
# todo: This module should be part of imageio (or at least based on)
import os
import time
try:
import PIL
from PIL import Image
from PIL.GifImagePlugin import getheader, getdata
except ImportError:
PIL = None
try:
import numpy as np
except ImportError:
np = None
def get_cKDTree():
try:
from scipy.spatial import cKDTree
except ImportError:
cKDTree = None
return cKDTree
# getheader gives a 87a header and a color palette (two elements in a list).
# getdata()[0] gives the Image Descriptor up to (including) "LZW min code size"
# getdata()[1:] is the image data itself in chunks of 256 bytes (well,
# technically the first byte says how many bytes follow, after which that
# amount (max 255) follows).
def checkImages(images):
""" checkImages(images)
Check numpy images and correct intensity range etc.
The same for all movie formats.
"""
# Init results
images2 = []
for im in images:
if PIL and isinstance(im, PIL.Image.Image):
# We assume PIL images are all right
images2.append(im)
elif np and isinstance(im, np.ndarray):
# Check and convert dtype
if im.dtype == np.uint8:
images2.append(im) # Ok
elif im.dtype in [np.float32, np.float64]:
im = im.copy()
im[im < 0] = 0
im[im > 1] = 1
im *= 255
images2.append(im.astype(np.uint8))
else:
im = im.astype(np.uint8)
images2.append(im)
# Check size
if im.ndim == 2:
pass # ok
elif im.ndim == 3:
if im.shape[2] not in [3, 4]:
raise ValueError('This array can not represent an image.')
else:
raise ValueError('This array can not represent an image.')
else:
raise ValueError('Invalid image type: ' + str(type(im)))
# Done
return images2
def intToBin(i):
""" Integer to two bytes """
# divide in two parts (bytes)
i1 = i % 256
i2 = int(i / 256)
# make string (little endian)
return chr(i1) + chr(i2)
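`intToBin` packs a 16-bit value low byte first, as the GIF format requires. A Python 3 sketch of the same encoding, cross-checked against `struct` (`int_to_le16` is an illustrative helper, not part of this module):

```python
import struct

def int_to_le16(i):
    # Low byte first, then high byte (little-endian 16-bit).
    return bytes([i % 256, i // 256])

# 300 = 0x012C -> b'\x2c\x01'; a 640-pixel width encodes as b'\x80\x02'.
assert int_to_le16(300) == struct.pack('<H', 300)
print(int_to_le16(640))  # b'\x80\x02'
```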
class GifWriter:
""" GifWriter()
Class that contains methods for helping write the animated GIF file.
"""
def getheaderAnim(self, im):
""" getheaderAnim(im)
Get animation header. To replace PILs getheader()[0]
"""
bb = "GIF89a"
bb += intToBin(im.size[0])
bb += intToBin(im.size[1])
bb += "\x87\x00\x00"
return bb
def getImageDescriptor(self, im, xy=None):
""" getImageDescriptor(im, xy=None)
Used for the local color table properties per image.
Otherwise the global color table applies to all frames, irrespective of
whether additional colors come into play that would require a redefined
palette. Still a maximum of 256 colors per frame, obviously.
Written by Ant1 on 2010-08-22
Modified by Alex Robinson in January 2011 to implement subrectangles.
"""
# Default: use full image and place at upper left
if xy is None:
xy = (0, 0)
# Image separator,
bb = '\x2C'
# Image position and size
bb += intToBin(xy[0]) # Left position
bb += intToBin(xy[1]) # Top position
bb += intToBin(im.size[0]) # image width
bb += intToBin(im.size[1]) # image height
# packed field: local color table flag1, interlace0, sorted table0,
# reserved00, lct size111=7=2^(7+1)=256.
bb += '\x87'
# LZW minimum size code now comes later, at the beginning of the
# [image data] blocks
return bb
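The image descriptor assembled above is always exactly 10 bytes. A compact sketch of the same layout (the helper name is an illustration, not the module's API):

```python
import struct

def image_descriptor(x, y, w, h):
    # 0x2C image separator, four little-endian 16-bit fields (left, top,
    # width, height), then the packed byte 0x87: local color table
    # present, 2**(7+1) = 256 entries.
    return b'\x2C' + struct.pack('<4H', x, y, w, h) + b'\x87'

desc = image_descriptor(0, 0, 16, 8)
assert len(desc) == 10 and desc[0:1] == b','  # 0x2C is ','
```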
def getAppExt(self, loops=float('inf')):
""" getAppExt(loops=float('inf'))
Application extension. This part specifies the number of loops.
If loops is 0 or inf, the animation repeats indefinitely.
"""
if loops == 0 or loops == float('inf'):
loops = 2**16 - 1
# bb = "" # application extension should not be used
# (the extension interprets zero loops
# to mean an infinite number of loops)
# Mmm, does not seem to work
if True:
bb = "\x21\xFF\x0B" # application extension
bb += "NETSCAPE2.0"
bb += "\x03\x01"
bb += intToBin(loops)
bb += '\x00' # end
return bb
def getGraphicsControlExt(
self, duration=0.1, dispose=2, transparent_flag=0,
transparency_index=0):
""" getGraphicsControlExt(duration=0.1, dispose=2)
Graphics Control Extension. A sort of header at the start of
each image. Specifies duration and transparency.
Dispose
-------
* 0 - No disposal specified.
* 1 - Do not dispose. The graphic is to be left in place.
* 2 - Restore to background color. The area used by the graphic
must be restored to the background color.
* 3 - Restore to previous. The decoder is required to restore the
area overwritten by the graphic with what was there prior to
rendering the graphic.
* 4-7 - To be defined.
"""
bb = '\x21\xF9\x04'
# Packed byte: bit 0 = transparency flag, bit 1 = user input,
# bits 2-4 = disposal method (only the low two bits are used here).
bb += chr(((dispose & 3) << 2) | (transparent_flag & 1))
bb += intToBin(int(duration * 100)) # in 100th of seconds
bb += chr(transparency_index) # transparency index
bb += '\x00' # end
return bb
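The packed byte of the Graphics Control Extension combines the disposal method and the transparency flag. A minimal sketch of the same bit arithmetic used above:

```python
def gce_packed(dispose, transparent_flag):
    # bits 2-4 carry the disposal method (only the low two bits are
    # used here), bit 0 the transparency flag -- the same expression
    # as in getGraphicsControlExt.
    return ((dispose & 3) << 2) | (transparent_flag & 1)

assert gce_packed(2, 0) == 0b1000   # restore to background, opaque
assert gce_packed(1, 1) == 0b0101   # do not dispose, transparent
```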
def handleSubRectangles(self, images, subRectangles):
""" handleSubRectangles(images)
Handle the sub-rectangle logic. If the rectangles are given by the
user, the values are checked. Otherwise the subrectangles are
calculated automatically.
"""
image_info = [im.info for im in images]
if isinstance(subRectangles, (tuple, list)):
# xy given directly
# Check xy
xy = subRectangles
if xy is None:
xy = (0, 0)
if hasattr(xy, '__len__'):
if len(xy) == len(images):
xy = [xxyy for xxyy in xy]
else:
raise ValueError("len(xy) doesn't match amount of images.")
else:
xy = [xy for im in images]
xy[0] = (0, 0)
else:
# Calculate xy using some basic image processing
# Check Numpy
if np is None:
raise RuntimeError("Need Numpy to use auto-subRectangles.")
# First make numpy arrays if required
for i in range(len(images)):
im = images[i]
if isinstance(im, Image.Image):
tmp = im.convert() # Make without palette
a = np.asarray(tmp)
if len(a.shape) == 0:
raise MemoryError(
"Too little memory to convert PIL image to array")
images[i] = a
# Determine the sub rectangles
images, xy = self.getSubRectangles(images)
# Done
return images, xy, image_info
def getSubRectangles(self, ims):
""" getSubRectangles(ims)
Calculate the minimal rectangles that need updating each frame.
Returns a two-element tuple containing the cropped images and a
list of x-y positions.
Calculating the subrectangles takes extra time, obviously. However,
if the image sizes are reduced, the actual writing of the GIF
goes faster. In some cases the GIF is produced faster overall.
"""
# Check image count
if len(ims) < 2:
return ims, [(0, 0) for i in ims]
# We need numpy
if np is None:
raise RuntimeError("Need Numpy to calculate sub-rectangles.")
# Prepare
ims2 = [ims[0]]
xy = [(0, 0)]
t0 = time.time()
# Iterate over images
prev = ims[0]
for im in ims[1:]:
# Get difference; cast to a signed type to avoid uint8 wrap-around,
# then sum over colors
diff = np.abs(im.astype(np.int16) - prev.astype(np.int16))
if diff.ndim == 3:
diff = diff.sum(2)
# Get begin and end for both dimensions
X = np.argwhere(diff.sum(0))
Y = np.argwhere(diff.sum(1))
# Get rect coordinates
if X.size and Y.size:
x0, x1 = X[0], X[-1] + 1
y0, y1 = Y[0], Y[-1] + 1
else: # No change ... make it minimal
x0, x1 = 0, 2
y0, y1 = 0, 2
# Cut out and store
im2 = im[y0:y1, x0:x1]
prev = im
ims2.append(im2)
xy.append((x0, y0))
# Done
# print('%1.2f seconds to determine subrectangles of %i images' %
# (time.time()-t0, len(ims2)) )
return ims2, xy
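The bounding-box logic above (argwhere on the row/column sums of the diff) can be sketched without numpy. This pure-Python version (hypothetical helper, operating on lists of rows) mirrors the same "minimal changed rectangle, or a 2x2 patch when nothing changed" behavior:

```python
def changed_bbox(prev, cur):
    # Find the minimal (x0, y0, x1, y1) rectangle in which two
    # equal-sized 2-D frames differ.
    rows = [y for y, (r1, r2) in enumerate(zip(prev, cur)) if r1 != r2]
    if not rows:
        return (0, 0, 2, 2)  # no change: emit a minimal 2x2 patch
    cols = [x for x in range(len(prev[0]))
            if any(prev[y][x] != cur[y][x] for y in rows)]
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
b = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
assert changed_bbox(a, b) == (1, 1, 2, 2)
```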
def convertImagesToPIL(self, images, dither, nq=0, images_info=None):
""" convertImagesToPIL(images, nq=0)
Convert images to paletted PIL images, which can then be
written to a single animated GIF.
"""
# Convert to PIL images
images2 = []
for im in images:
if isinstance(im, Image.Image):
images2.append(im)
elif np and isinstance(im, np.ndarray):
if im.ndim == 3 and im.shape[2] == 3:
im = Image.fromarray(im, 'RGB')
elif im.ndim == 3 and im.shape[2] == 4:
# im = Image.fromarray(im[:,:,:3],'RGB')
self.transparency = True
im = Image.fromarray(im[:, :, :4], 'RGBA')
elif im.ndim == 2:
im = Image.fromarray(im, 'L')
images2.append(im)
# Convert to paletted PIL images
images, images2 = images2, []
if nq >= 1:
# NeuQuant algorithm
for im in images:
im = im.convert("RGBA") # NQ assumes RGBA
nqInstance = NeuQuant(im, int(nq)) # Learn colors from image
if dither:
im = im.convert("RGB").quantize(
palette=nqInstance.paletteImage(),
colors=255)
else:
im = nqInstance.quantize(
im,
colors=255) # Use to quantize the image itself
self.transparency = True # since NQ assumes transparency
if self.transparency:
alpha = im.split()[3]
mask = Image.eval(alpha, lambda a: 255 if a <= 128 else 0)
im.paste(255, mask=mask)
images2.append(im)
else:
# Adaptive PIL algorithm
AD = Image.ADAPTIVE
# for index,im in enumerate(images):
for i in range(len(images)):
im = images[i].convert('RGB').convert(
'P',
palette=AD,
dither=dither,
colors=255)
if self.transparency:
alpha = images[i].split()[3]
mask = Image.eval(alpha, lambda a: 255 if a <= 128 else 0)
im.paste(255, mask=mask)
images2.append(im)
# Done
return images2
def writeGifToFile(self, fp, images, durations, loops, xys, disposes):
""" writeGifToFile(fp, images, durations, loops, xys, disposes)
Given a set of images writes the bytes to the specified stream.
"""
# Obtain palette for all images and count each occurrence
palettes, occur = [], []
for im in images:
palettes.append(im.palette.getdata()[1])
for palette in palettes:
occur.append(palettes.count(palette))
# Select the most-used palette as the global one (or the first in case of a tie)
globalPalette = palettes[occur.index(max(occur))]
# Init
frames = 0
firstFrame = True
for im, palette in zip(images, palettes):
if firstFrame:
# Write header
# Gather info
header = self.getheaderAnim(im)
appext = self.getAppExt(loops)
# Write
fp.write(header)
fp.write(globalPalette)
fp.write(appext)
# Next frame is not the first
firstFrame = False
if True:
# Write palette and image data
# Gather info
data = getdata(im)
imdes, data = data[0], data[1:]
transparent_flag = 0
if self.transparency:
transparent_flag = 1
graphext = self.getGraphicsControlExt(
durations[frames], disposes[frames],
transparent_flag=transparent_flag, transparency_index=255)
# Make image descriptor suitable for using 256 local color
# palette
lid = self.getImageDescriptor(im, xys[frames])
# Write local header
if (palette != globalPalette) or (disposes[frames] != 2):
# Use local color palette
fp.write(graphext)
fp.write(lid) # write suitable image descriptor
fp.write(palette) # write local color table
fp.write('\x08') # LZW minimum size code
else:
# Use global color palette
fp.write(graphext)
fp.write(imdes) # write suitable image descriptor
# Write image data
for d in data:
fp.write(d)
# Prepare for next round
frames = frames + 1
fp.write(";") # end gif
return frames
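The global-palette selection at the top of writeGifToFile picks the palette shared by the most frames. A one-liner sketch of the same occurrence-count logic (helper name is illustrative):

```python
def pick_global_palette(palettes):
    # The palette used by the most frames becomes the global color
    # table; max() with a count() key mirrors the occur-list logic,
    # returning the first palette in case of a tie.
    return max(palettes, key=palettes.count)

assert pick_global_palette(['p1', 'p2', 'p2', 'p3']) == 'p2'
```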
# Exposed functions
def writeGif(filename, images, duration=0.1, repeat=True, dither=False,
nq=0, subRectangles=True, dispose=None):
""" writeGif(filename, images, duration=0.1, repeat=True, dither=False,
nq=0, subRectangles=True, dispose=None)
Write an animated gif from the specified images.
Parameters
----------
filename : string
The name of the file to write the image to.
images : list
Should be a list consisting of PIL images or numpy arrays.
The latter should be between 0 and 255 for integer types, and
between 0 and 1 for float types.
duration : scalar or list of scalars
The duration for all frames, or (if a list) for each frame.
repeat : bool or integer
The number of loops. If True, loops indefinitely.
dither : bool
Whether to apply dithering
nq : integer
If nonzero, applies the NeuQuant quantization algorithm to create
the color palette. This algorithm is superior, but slower than
the standard PIL algorithm. The value of nq is the quality
parameter. 1 represents the best quality. 10 is in general a
good tradeoff between quality and speed. When using this option,
better results are usually obtained when subRectangles is False.
subRectangles : False, True, or a list of 2-element tuples
Whether to use sub-rectangles. If True, the minimal rectangle that
is required to update each frame is automatically detected. This
can give significant reductions in file size, particularly if only
a part of the image changes. You can also give a list of x-y
coordinates if you want to do the cropping yourself. The default
is True.
dispose : int
How to dispose each frame. 1 means that each frame is to be left
in place. 2 means the background color should be restored after
each frame. 3 means the decoder should restore the previous frame.
If subRectangles==False, the default is 2, otherwise it is 1.
"""
# Check PIL
if PIL is None:
raise RuntimeError("Need PIL to write animated gif files.")
# Check images
images = checkImages(images)
# Instantiate writer object
gifWriter = GifWriter()
# init transparency flag used in GifWriter functions
gifWriter.transparency = False
# Check loops
if repeat is False:
loops = 1
elif repeat is True:
loops = 0 # zero means infinite
else:
loops = int(repeat)
# Check duration
if hasattr(duration, '__len__'):
if len(duration) == len(images):
duration = [d for d in duration]
else:
raise ValueError("len(duration) doesn't match amount of images.")
else:
duration = [duration for im in images]
# Check subrectangles
if subRectangles:
images, xy, images_info = gifWriter.handleSubRectangles(
images, subRectangles)
defaultDispose = 1 # Leave image in place
else:
# Normal mode
xy = [(0, 0) for im in images]
defaultDispose = 2 # Restore to background color.
# Check dispose
if dispose is None:
dispose = defaultDispose
if hasattr(dispose, '__len__'):
if len(dispose) != len(images):
raise ValueError("len(dispose) doesn't match amount of images.")
else:
dispose = [dispose for im in images]
# Make images in a format that we can write easy
images = gifWriter.convertImagesToPIL(images, dither, nq)
# Write
fp = open(filename, 'wb')
try:
gifWriter.writeGifToFile(fp, images, duration, loops, xy, dispose)
finally:
fp.close()
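writeGif normalizes `duration` and `dispose` the same way: a scalar is broadcast to every frame, while a sequence must match the frame count exactly. A small standalone sketch of that pattern (function name is hypothetical):

```python
def broadcast_per_frame(value, n_frames, name):
    # Scalar -> repeated per frame; sequence -> length must match.
    if hasattr(value, '__len__'):
        if len(value) != n_frames:
            raise ValueError(
                "len(%s) doesn't match amount of images." % name)
        return list(value)
    return [value] * n_frames

assert broadcast_per_frame(0.1, 3, 'duration') == [0.1, 0.1, 0.1]
assert broadcast_per_frame([1, 2], 2, 'dispose') == [1, 2]
```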
def readGif(filename, asNumpy=True):
""" readGif(filename, asNumpy=True)
Read images from an animated GIF file. Returns a list of numpy
arrays, or, if asNumpy is False, a list of PIL images.
"""
# Check PIL
if PIL is None:
raise RuntimeError("Need PIL to read animated gif files.")
# Check Numpy
if np is None:
raise RuntimeError("Need Numpy to read animated gif files.")
# Check whether it exists
if not os.path.isfile(filename):
raise IOError('File not found: ' + str(filename))
# Load file using PIL
pilIm = PIL.Image.open(filename)
pilIm.seek(0)
# Read all images inside
images = []
try:
while True:
# Get image as numpy array
tmp = pilIm.convert() # Make without palette
a = np.asarray(tmp)
if len(a.shape) == 0:
raise MemoryError(
"Too little memory to convert PIL image to array")
# Store, and next
images.append(a)
pilIm.seek(pilIm.tell() + 1)
except EOFError:
pass
# Convert to normal PIL images if needed
if not asNumpy:
images2 = images
images = []
for index, im in enumerate(images2):
tmp = PIL.Image.fromarray(im)
images.append(tmp)
# Done
return images
class NeuQuant:
""" NeuQuant(image, samplefac=10, colors=256)
samplefac should be an integer of 1 or higher, 1
being the highest quality but the slowest performance.
With a value of 10, one tenth of all pixels are used during
training. This value seems a nice tradeoff between speed
and quality.
colors is the number of colors to reduce the image to. This
should preferably be a power of two.
See also:
http://members.ozemail.com.au/~dekker/NEUQUANT.HTML
License of the NeuQuant Neural-Net Quantization Algorithm
---------------------------------------------------------
Copyright (c) 1994 Anthony Dekker
Ported to python by Marius van Voorden in 2010
NEUQUANT Neural-Net quantization algorithm by Anthony Dekker, 1994.
See "Kohonen neural networks for optimal colour quantization"
in "network: Computation in Neural Systems" Vol. 5 (1994) pp 351-367.
for a discussion of the algorithm.
See also http://members.ozemail.com.au/~dekker/NEUQUANT.HTML
Any party obtaining a copy of these files from the author, directly or
indirectly, is granted, free of charge, a full and unrestricted
irrevocable, world-wide, paid up, royalty-free, nonexclusive right and
license to deal in this software and documentation files (the "Software"),
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software, and
to permit persons who receive copies from any such party to do so, with
the only requirement being that this copyright notice remain intact.
"""
NCYCLES = None # Number of learning cycles
NETSIZE = None # Number of colours used
SPECIALS = None # Number of reserved colours used
BGCOLOR = None # Reserved background colour
CUTNETSIZE = None
MAXNETPOS = None
INITRAD = None # For 256 colours, radius starts at 32
RADIUSBIASSHIFT = None
RADIUSBIAS = None
INITBIASRADIUS = None
RADIUSDEC = None # Factor of 1/30 each cycle
ALPHABIASSHIFT = None
INITALPHA = None # biased by 10 bits
GAMMA = None
BETA = None
BETAGAMMA = None
network = None # The network itself
colormap = None # The color map derived from the network
netindex = None # For network lookup - really 256
bias = None # Bias and freq arrays for learning
freq = None
pimage = None
# Four primes near 500 - assume no image has a length so large
# that it is divisible by all four primes
PRIME1 = 499
PRIME2 = 491
PRIME3 = 487
PRIME4 = 503
MAXPRIME = PRIME4
pixels = None
samplefac = None
a_s = None
def setconstants(self, samplefac, colors):
self.NCYCLES = 100 # Number of learning cycles
self.NETSIZE = colors # Number of colours used
self.SPECIALS = 3 # Number of reserved colours used
self.BGCOLOR = self.SPECIALS - 1 # Reserved background colour
self.CUTNETSIZE = self.NETSIZE - self.SPECIALS
self.MAXNETPOS = self.NETSIZE - 1
self.INITRAD = self.NETSIZE / 8 # For 256 colours, radius starts at 32
self.RADIUSBIASSHIFT = 6
self.RADIUSBIAS = 1 << self.RADIUSBIASSHIFT
self.INITBIASRADIUS = self.INITRAD * self.RADIUSBIAS
self.RADIUSDEC = 30 # Factor of 1/30 each cycle
self.ALPHABIASSHIFT = 10 # Alpha starts at 1
self.INITALPHA = 1 << self.ALPHABIASSHIFT # biased by 10 bits
self.GAMMA = 1024.0
self.BETA = 1.0 / 1024.0
self.BETAGAMMA = self.BETA * self.GAMMA
self.network = np.empty(
(self.NETSIZE, 3), dtype='float64') # The network itself
self.colormap = np.empty(
(self.NETSIZE, 4), dtype='int32') # The color map
self.netindex = np.empty(
256,
dtype='int32') # For network lookup - really 256
self.bias = np.empty(
self.NETSIZE,
dtype='float64') # Bias and freq arrays for learning
self.freq = np.empty(self.NETSIZE, dtype='float64')
self.pixels = None
self.samplefac = samplefac
self.a_s = {}
def __init__(self, image, samplefac=10, colors=256):
# Check Numpy
if np is None:
raise RuntimeError("Need Numpy for the NeuQuant algorithm.")
# Check image
if image.size[0] * image.size[1] < NeuQuant.MAXPRIME:
raise IOError("Image is too small")
if image.mode != "RGBA":
raise IOError("Image mode should be RGBA.")
# Initialize
self.setconstants(samplefac, colors)
self.pixels = np.fromstring(image.tostring(), np.uint32)
self.setUpArrays()
self.learn()
self.fix()
self.inxbuild()
def writeColourMap(self, rgb, outstream):
for i in range(self.NETSIZE):
bb = self.colormap[i, 0]
gg = self.colormap[i, 1]
rr = self.colormap[i, 2]
outstream.write(rr if rgb else bb)
outstream.write(gg)
outstream.write(bb if rgb else rr)
return self.NETSIZE
def setUpArrays(self):
self.network[0, 0] = 0.0 # Black
self.network[0, 1] = 0.0
self.network[0, 2] = 0.0
self.network[1, 0] = 255.0 # White
self.network[1, 1] = 255.0
self.network[1, 2] = 255.0
# RESERVED self.BGCOLOR # Background
for i in range(self.SPECIALS):
self.freq[i] = 1.0 / self.NETSIZE
self.bias[i] = 0.0
for i in range(self.SPECIALS, self.NETSIZE):
p = self.network[i]
p[:] = (255.0 * (i - self.SPECIALS)) / self.CUTNETSIZE
self.freq[i] = 1.0 / self.NETSIZE
self.bias[i] = 0.0
# Omitted: setPixels
def altersingle(self, alpha, i, b, g, r):
"""Move neuron i towards biased (b,g,r) by factor alpha"""
n = self.network[i] # Alter hit neuron
n[0] -= (alpha * (n[0] - b))
n[1] -= (alpha * (n[1] - g))
n[2] -= (alpha * (n[2] - r))
def geta(self, alpha, rad):
try:
return self.a_s[(alpha, rad)]
except KeyError:
length = rad * 2 - 1
mid = length / 2
q = np.array(list(range(mid - 1, -1, -1)) + list(range(-1, mid)))
a = alpha * (rad * rad - q * q) / (rad * rad)
a[mid] = 0
self.a_s[(alpha, rad)] = a
return a
def alterneigh(self, alpha, rad, i, b, g, r):
if i - rad >= self.SPECIALS - 1:
lo = i - rad
start = 0
else:
lo = self.SPECIALS - 1
start = (self.SPECIALS - 1 - (i - rad))
if i + rad <= self.NETSIZE:
hi = i + rad
end = rad * 2 - 1
else:
hi = self.NETSIZE
end = (self.NETSIZE - (i + rad))
a = self.geta(alpha, rad)[start:end]
p = self.network[lo + 1:hi]
p -= np.transpose(np.transpose(p - np.array([b, g, r])) * a)
def contest(self, b, g, r):
""" Search for biased BGR values.
Finds the closest neuron (min dist) and updates self.freq;
finds the best neuron (min dist - self.bias) and returns its position.
For frequently chosen neurons, self.freq[i] is high and
self.bias[i] is negative:
self.bias[i] = self.GAMMA*((1/self.NETSIZE)-self.freq[i])"""
i, j = self.SPECIALS, self.NETSIZE
dists = abs(self.network[i:j] - np.array([b, g, r])).sum(1)
bestpos = i + np.argmin(dists)
biasdists = dists - self.bias[i:j]
bestbiaspos = i + np.argmin(biasdists)
self.freq[i:j] *= (1 - self.BETA)
self.bias[i:j] += self.BETAGAMMA * self.freq[i:j]
self.freq[bestpos] += self.BETA
self.bias[bestpos] -= self.BETAGAMMA
return bestbiaspos
def specialFind(self, b, g, r):
for i in range(self.SPECIALS):
n = self.network[i]
if n[0] == b and n[1] == g and n[2] == r:
return i
return -1
def learn(self):
biasRadius = self.INITBIASRADIUS
alphadec = 30 + ((self.samplefac - 1) / 3)
lengthcount = self.pixels.size
samplepixels = lengthcount / self.samplefac
delta = max(1, samplepixels / self.NCYCLES) # guard against modulo-by-zero below
alpha = self.INITALPHA
i = 0
rad = biasRadius >> self.RADIUSBIASSHIFT
if rad <= 1:
rad = 0
print("Beginning 1D learning: samplepixels = %1.2f rad = %i" %
(samplepixels, rad))
step = 0
pos = 0
if lengthcount % NeuQuant.PRIME1 != 0:
step = NeuQuant.PRIME1
elif lengthcount % NeuQuant.PRIME2 != 0:
step = NeuQuant.PRIME2
elif lengthcount % NeuQuant.PRIME3 != 0:
step = NeuQuant.PRIME3
else:
step = NeuQuant.PRIME4
i = 0
printed_string = ''
while i < samplepixels:
if i % 100 == 99:
tmp = '\b' * len(printed_string)
printed_string = str((i + 1) * 100 / samplepixels) + "%\n"
print(tmp + printed_string)
p = self.pixels[pos]
r = (p >> 16) & 0xff
g = (p >> 8) & 0xff
b = (p) & 0xff
if i == 0: # Remember background colour
self.network[self.BGCOLOR] = [b, g, r]
j = self.specialFind(b, g, r)
if j < 0:
j = self.contest(b, g, r)
if j >= self.SPECIALS: # Don't learn for specials
a = (1.0 * alpha) / self.INITALPHA
self.altersingle(a, j, b, g, r)
if rad > 0:
self.alterneigh(a, rad, j, b, g, r)
pos = (pos + step) % lengthcount
i += 1
if i % delta == 0:
alpha -= alpha / alphadec
biasRadius -= biasRadius / self.RADIUSDEC
rad = biasRadius >> self.RADIUSBIASSHIFT
if rad <= 1:
rad = 0
finalAlpha = (1.0 * alpha) / self.INITALPHA
print("Finished 1D learning: final alpha = %1.2f!" % finalAlpha)
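The prime-stepping in learn() relies on the chosen step being coprime with the pixel count, so that `pos = (pos + step) % lengthcount` eventually visits every pixel. A small sketch verifying that property with the same four primes (helper name is illustrative):

```python
def prime_step_visits_all(length, primes=(499, 491, 487, 503)):
    # Pick the first prime that does not divide the pixel count and
    # check that stepping by it modulo the length visits every index.
    step = next(p for p in primes if length % p != 0)
    seen, pos = set(), 0
    for _ in range(length):
        seen.add(pos)
        pos = (pos + step) % length
    return len(seen) == length

assert prime_step_visits_all(1000)
assert prime_step_visits_all(499 * 2)  # 499 divides, falls through to 491
```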
def fix(self):
for i in range(self.NETSIZE):
for j in range(3):
x = int(0.5 + self.network[i, j])
x = max(0, x)
x = min(255, x)
self.colormap[i, j] = x
self.colormap[i, 3] = i
def inxbuild(self):
previouscol = 0
startpos = 0
for i in range(self.NETSIZE):
p = self.colormap[i]
q = None
smallpos = i
smallval = p[1] # Index on g
# Find smallest in i..self.NETSIZE-1
for j in range(i + 1, self.NETSIZE):
q = self.colormap[j]
if q[1] < smallval: # Index on g
smallpos = j
smallval = q[1] # Index on g
q = self.colormap[smallpos]
# Swap p (i) and q (smallpos) entries
if i != smallpos:
p[:], q[:] = q, p.copy()
# smallval entry is now in position i
if smallval != previouscol:
self.netindex[previouscol] = (startpos + i) >> 1
for j in range(previouscol + 1, smallval):
self.netindex[j] = i
previouscol = smallval
startpos = i
self.netindex[previouscol] = (startpos + self.MAXNETPOS) >> 1
for j in range(previouscol + 1, 256): # Really 256
self.netindex[j] = self.MAXNETPOS
def paletteImage(self):
""" PIL has a weird interface for making a paletted image: create an
image that already has the palette, and use that in Image.quantize. This
function returns that palette image. """
if self.pimage is None:
palette = []
for i in range(self.NETSIZE):
palette.extend(self.colormap[i][:3])
palette.extend([0] * (256 - self.NETSIZE) * 3)
# a palette image to use for quant
self.pimage = Image.new("P", (1, 1), 0)
self.pimage.putpalette(palette)
return self.pimage
def quantize(self, image, colors=None):
""" Use a kd-tree to quickly find the closest palette colors for the
pixels. The colors argument is accepted for compatibility with the
call in convertImagesToPIL; the palette size is fixed at creation. """
if get_cKDTree():
return self.quantize_with_scipy(image)
else:
print('Scipy not available, falling back to slower version.')
return self.quantize_without_scipy(image)
def quantize_with_scipy(self, image):
w, h = image.size
px = np.asarray(image).copy()
px2 = px[:, :, :3].reshape((w * h, 3))
cKDTree = get_cKDTree()
kdtree = cKDTree(self.colormap[:, :3], leafsize=10)
result = kdtree.query(px2)
colorindex = result[1]
print("Distance: %1.2f" % (result[0].sum() / (w * h)))
px2[:] = self.colormap[colorindex, :3]
return Image.fromarray(px).convert(
"RGB").quantize(palette=self.paletteImage())
def quantize_without_scipy(self, image):
""" This function can be used if scipy is not available.
It's about 7 times slower, though.
"""
w, h = image.size
px = np.asarray(image).copy()
memo = {}
for j in range(w):
for i in range(h):
key = (px[i, j, 0], px[i, j, 1], px[i, j, 2])
try:
val = memo[key]
except KeyError:
val = self.convert(*key)
memo[key] = val
px[i, j, 0], px[i, j, 1], px[i, j, 2] = val
return Image.fromarray(px).convert(
"RGB").quantize(palette=self.paletteImage())
def convert(self, *color):
i = self.inxsearch(*color)
return self.colormap[i, :3]
def inxsearch(self, r, g, b):
"""Search for BGR values 0..255 and return colour index"""
dists = (self.colormap[:, :3] - np.array([r, g, b]))
a = np.argmin((dists * dists).sum(1))
return a
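inxsearch is an argmin over squared color distances. The same lookup can be sketched in pure Python (hypothetical helper over a list-of-tuples palette), which is what quantize_without_scipy effectively does one pixel at a time:

```python
def nearest_color(colormap, rgb):
    # Linear scan for the palette entry with minimal squared distance,
    # the pure-Python counterpart of inxsearch's argmin.
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, rgb))
    return min(range(len(colormap)), key=lambda i: dist2(colormap[i]))

palette = [(0, 0, 0), (255, 0, 0), (255, 255, 255)]
assert nearest_color(palette, (250, 10, 5)) == 1
```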
if __name__ == '__main__':
im = np.zeros((200, 200), dtype=np.uint8)
im[10:30, :] = 100
im[:, 80:120] = 255
im[-50:-40, :] = 50
images = [im * 1.0, im * 0.8, im * 0.6, im * 0.4, im * 0]
writeGif('lala3.gif', images, duration=0.5, dither=0)
| curtiszimmerman/deepdreamer | deepdreamer/images2gif.py | Python | gpl-3.0 | 36,702 | [
"NEURON"
] | d335e144cf87605521268454c2ceefe98a9dbbfddc586ed079cc6537ceb2eb27 |
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
# these are system modules
import commands
import numpy
import os
import sys
# these are my local modules
from env import gidgetConfigVars
import miscIO
import miscTCGA
import path
import tsvIO
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
NA_VALUE = -999999
debugON = 0
## debugON = 1
# NOTE: this is a modified script that handles ONLY the microRNAseq data
# from BCGSC
platformStrings = [
'bcgsc.ca/illuminaga_mirnaseq/mirnaseq/',
'bcgsc.ca/illuminahiseq_mirnaseq/mirnaseq/']
dataTypeDict = {}
dataTypeDict["IlluminaGA_miRNASeq"] = ["N", "MIRN"]
dataTypeDict["IlluminaHiSeq_miRNASeq"] = ["N", "MIRN"]
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
# from Timo's resegmentation code:
class AutoVivification(dict):
"""Implementation of perl's autovivification feature."""
def __getitem__(self, item):
try:
return dict.__getitem__(self, item)
except KeyError:
value = self[item] = type(self)()
return value
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
def getLastBit(aName):
ii = len(aName) - 1
while (aName[ii] != '/'):
ii -= 1
# print ' <%s> <%s> ' % ( aName, aName[ii+1:] )
return (aName[ii + 1:])
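getLastBit scans backwards for the final '/'. The same result comes from rsplit or os.path.basename; a minimal equivalent sketch:

```python
import os

def last_path_component(name):
    # Everything after the final '/', as getLastBit computes.
    return name.rsplit('/', 1)[-1]

p = '/a/b/archive_name'
assert last_path_component(p) == 'archive_name'
assert last_path_component(p) == os.path.basename(p)
```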
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
def loadNameMap(mapFilename):
metaData = {}
fh = open(mapFilename)
for aLine in fh:
aLine = aLine.strip()
tokenList = aLine.split('\t')
metaData[tokenList[1]] = tokenList[0]
fh.close()
return (metaData)
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
# hsa-let-7a-2 MIMAT0010195 N:MIRN:hsa-let-7a-2:::::MIMAT0010195
def makeFeatureName(tok0, tok1, metaData):
try:
featName = "N:MIRN:" + metaData[tok1] + ":::::" + tok1
print " all good : ", tok0, tok1, featName
except:
featName = "N:MIRN:" + tok0 + ":::::" + tok1
print " BAD ??? ", tok0, tok1, featName
return (featName)
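The try/except above is just a dictionary lookup with a fallback to the raw token. A sketch using dict.get (helper name and sample data are illustrative; the MIMAT mapping shown matches the example in the comment above makeFeatureName):

```python
def make_feature_name(name, mimat_id, meta):
    # Build "N:MIRN:<name>:::::<MIMAT id>", preferring the mapped
    # miRBase name when the MIMAT accession is in the metadata map.
    return "N:MIRN:" + meta.get(mimat_id, name) + ":::::" + mimat_id

meta = {'MIMAT0010195': 'hsa-let-7a-2'}
assert (make_feature_name('hsa-let-7a-2', 'MIMAT0010195', meta)
        == 'N:MIRN:hsa-let-7a-2:::::MIMAT0010195')
```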
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
def makeOutputFilename(outDir, tumorList, zString, outSuffix):
if (len(tumorList) == 1):
zCancer = tumorList[0]
else:
tumorList.sort()
zCancer = tumorList[0]
for aCancer in tumorList[1:]:
zCancer = zCancer + '_' + aCancer
print " --> combined multi-cancer name : <%s> " % zCancer
# start by pasting together the outDir, cancer sub-dir, then '/'
# and then the cancer name again, followed by a '.'
outFilename = outDir + zCancer + "/" + zCancer + "."
# now we are just going to assume that we are writing to the current
# working directory (21dec12)
outFilename = outDir + zCancer + "."
# next we want to replace all '/' in the platform string with '__'
i1 = 0
while (i1 >= 0):
i2 = zString.find('/', i1)
if (i1 > 0 and i2 > 0):
outFilename += "__"
if (i2 > 0):
outFilename += zString[i1:i2]
i1 = i2 + 1
else:
i1 = i2
# and finally we add on the suffix (usually something like '25jun')
if (not outSuffix.startswith(".")):
outFilename += "."
outFilename += outSuffix
return (outFilename)
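The find() loop above translates the '/'-separated platform string into a '__'-joined filename fragment. A compact equivalent using split/join (helper name is hypothetical):

```python
def platform_to_suffix(z):
    # Replace the '/' separators in a platform string with '__',
    # dropping the trailing empty segment, as the find() loop does.
    return '__'.join(p for p in z.split('/') if p)

assert (platform_to_suffix('bcgsc.ca/illuminaga_mirnaseq/mirnaseq/')
        == 'bcgsc.ca__illuminaga_mirnaseq__mirnaseq')
```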
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
if __name__ == "__main__":
# list of cancer directory names
cancerDirNames = [
'acc', 'blca', 'brca', 'cesc', 'cntl', 'coad', 'dlbc', 'esca', 'gbm',
'hnsc', 'kich', 'kirc', 'kirp', 'laml', 'lcll', 'lgg', 'lihc', 'lnnh',
'luad', 'lusc', 'ov', 'paad', 'prad', 'read', 'sarc', 'skcm', 'stad',
'thca', 'ucec', 'lcml', 'pcpg', 'meso', 'tgct', 'ucs' ]
if (1):
if (len(sys.argv) < 4):
print " Usage: %s <outSuffix> <platformID> <tumorType#1> [tumorType#2 ...] [snapshot-name]" % sys.argv[0]
print " currently supported platforms : ", platformStrings
print " currently supported tumor types : ", cancerDirNames
print " ERROR -- bad command line arguments "
sys.exit(-1)
else:
# output suffix ...
outSuffix = sys.argv[1]
# specified platform ...
platformID = sys.argv[2]
if (platformID[-1] != '/'):
platformID += '/'
if (platformID not in platformStrings):
print " platform <%s> is not supported " % platformID
print " currently supported platforms are: ", platformStrings
sys.exit(-1)
platformStrings = [platformID]
# assume that the default snapshotName is "dcc-snapshot"
snapshotName = "dcc-snapshot"
# specified tumor type(s) ...
argList = sys.argv[3:]
# print argList
tumorList = []
for aType in argList:
tumorType = aType.lower()
if (tumorType in cancerDirNames):
tumorList += [tumorType]
elif (tumorType.find("snap") >= 0):
snapshotName = tumorType
print " using this snapshot : <%s> " % snapshotName
else:
print " ERROR ??? tumorType <%s> not in list of known tumors ??? " % tumorType
print cancerDirNames
if (len(tumorList) < 1):
print " ERROR ??? have no tumor types in list ??? ", tumorList
sys.exit(-1)
print " tumor type(s) list : ", tumorList
# --------------------------------------
# HERE is where the real work starts ...
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# now we need to get set up for writing the output ...
# NEW: 21dec12 ... assuming that we will write to current working directory
outDir = "./"
outFilename = makeOutputFilename(
outDir, tumorList, platformID, outSuffix)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# initialize a bunch of things ...
sampleList = []
gotFiles = []
geneList = []
numGenes = 0
numProc = 0
iS = 0
# and then loop over tumor types ...
for zCancer in tumorList:
print ' '
print ' ********************************** '
print ' LOOP over %d CANCER TYPES ... %s ' % (len(tumorList), zCancer)
# piece together the directory name ...
## topDir = gidgetConfigVars['TCGAFMP_DCC_REPOSITORIES'] + "/dcc-snapshot/public/tumor/" + zCancer + "/cgcc/" + platformID
topDir = gidgetConfigVars['TCGAFMP_DCC_REPOSITORIES'] + "/" + \
snapshotName + "/public/tumor/" + zCancer + "/cgcc/" + platformID
print ' starting from top-level directory ', topDir
dMatch = "Level_3"
if (not os.path.exists(topDir)):
print ' --> <%s> does not exist ' % topDir
continue
d1 = path.path(topDir)
for dName in d1.dirs():
print dName
if (dName.find(dMatch) >= 0):
print ' '
print ' found a <%s> directory : <%s> ' % (dMatch, dName)
archiveName = getLastBit(dName)
print ' archiveName : ', archiveName
if (dName.find("IlluminaHiSeq") > 0):
zPlat = "IlluminaHiSeq_miRNASeq"
elif (dName.find("IlluminaGA") > 0):
zPlat = "IlluminaGA_miRNASeq"
else:
print " not a valid platform: %s ??? !!! " % (dName)
sys.exit(-1)
cmdString = "%s/shscript/expression_matrix_mimat.pl " % gidgetConfigVars['TCGAFMP_ROOT_DIR']
cmdString += "-m " + gidgetConfigVars['TCGAFMP_DCC_REPOSITORIES'] + "/mirna_bcgsc/tcga_mirna_bcgsc_hg19.adf "
cmdString += "-o %s " % outDir
cmdString += "-p %s " % topDir
cmdString += "-n %s " % zPlat
print " "
print cmdString
print " "
(status, output) = commands.getstatusoutput(cmdString)
normMatFilename = outDir + "/expn_matrix_mimat_norm_%s.txt" % (zPlat)
print " normMatFilename = <%s> " % normMatFilename
# make sure that we can open this file ...
try:
fh = open(normMatFilename, 'r')
gotFiles += [normMatFilename]
fh.close()
except:
print " "
print " Not able to open expn_matrix_mimat_norm file ??? "
print " "
sys.exit(-1)
print " "
print " "
if (len(gotFiles) == 0):
print " ERROR in new_Level3_miRNAseq ... no data files found "
sys.exit(-1)
if (len(gotFiles) > 1):
print " ERROR ??? we should have only one file at this point "
print gotFiles
sys.exit(-1)
# if we get this far, we should make sure that the output directory we
# want exists
print " --> testing that we have an output directory ... <%s> " % outDir
tsvIO.createDir(outDir)
print " output file name will be called <%s> " % outFilename
# we also need to read in the mapping file ...
metaData = loadNameMap(
gidgetConfigVars['TCGAFMP_DCC_REPOSITORIES'] + "/mirna_bcgsc/mature.fa.flat.human.mirbase_v19.txt")
if (1):
fh = open(gotFiles[0], 'r')
numRow = miscIO.num_lines(fh) - 1
numCol = miscIO.num_cols(fh, '\t') - 1
rowLabels = []
dataMatrix = [0] * numRow
for iR in range(numRow):
dataMatrix[iR] = [0] * numCol
hdrLine = fh.readline()
hdrLine = hdrLine.strip()
hdrTokens = hdrLine.split('\t')
if (len(hdrTokens) != (numCol + 1)):
print " ERROR #1 ??? "
sys.exit(-1)
done = 0
iR = 0
numNA = 0
while (not done):
aLine = fh.readline()
aLine = aLine.strip()
tokenList = aLine.split('\t')
if (len(tokenList) != (numCol + 1)):
done = 1
else:
aLabel = tokenList[0]
# print " label = <%s> " % aLabel
labelTokens = aLabel.split('.')
# print labelTokens
featName = makeFeatureName(
labelTokens[0], labelTokens[1], metaData)
# print featName
rowLabels += [featName]
for iC in range(numCol):
try:
fVal = float(tokenList[iC + 1])
dataMatrix[iR][iC] = fVal
except ValueError:
dataMatrix[iR][iC] = NA_VALUE
numNA += 1
iR += 1
print " iR=%d numNA=%d " % (iR, numNA)
dataD = {}
dataD['rowLabels'] = rowLabels
dataD['colLabels'] = hdrTokens[1:]
dataD['dataMatrix'] = dataMatrix
dataD['dataType'] = "N:MIRN"
print ' writing out data matrix to ', outFilename
newFeatureName = "C:SAMP:mirnPlatform:::::seq"
newFeatureValue = zPlat
dataD = tsvIO.addConstFeature(dataD, newFeatureName, newFeatureValue)
sortRowFlag = 0
sortColFlag = 0
tsvIO.writeTSV_dataMatrix(
dataD, sortRowFlag, sortColFlag, outFilename)
print ' '
print ' DONE !!! '
print ' '
# -#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#-#
| cancerregulome/gidget | commands/feature_matrix_construction/main/new_Level3_miRNAseq.py | Python | mit | 11,860 | [
"ADF"
] | d76bc066092a26e711f9e35cad83f2523a6d235f340e738edba5c1fa081007b7 |
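The gidget snippet above fills its data matrix cell by cell, substituting a sentinel for values that fail float conversion. Below is a minimal Python 3 sketch of that NA-substitution pattern; the `NA_VALUE` sentinel value and the tab-separated row layout are assumptions here, since the original defines them elsewhere in the file:

```python
NA_VALUE = -999999  # assumed sentinel; the original script defines it elsewhere

def parse_matrix_line(line, num_cols, na_value=NA_VALUE):
    """Split one tab-separated row into (label, values), replacing
    unparseable cells with the NA sentinel and counting how many occurred."""
    tokens = line.rstrip("\n").split("\t")
    label = tokens[0]
    values, num_na = [], 0
    for tok in tokens[1:num_cols + 1]:
        try:
            values.append(float(tok))
        except ValueError:
            values.append(na_value)
            num_na += 1
    return label, values, num_na

label, vals, n_na = parse_matrix_line("hsa-mir-21\t1.5\tNA\t0.25", 3)
```

Only the per-cell fallback is shown; the surrounding file handling and the `makeFeatureName` label mapping are omitted.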
#!/usr/bin/python
# -*- encoding: utf-8; py-indent-offset: 4 -*-
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright Mathias Kettner 2014 mk@mathias-kettner.de |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with GNU Make; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
register_notification_parameters(
"mail",
Dictionary(
elements = [
( "from",
TextAscii(
title = _("From: Address"),
size = 40,
allow_empty = False,
)
),
( "reply_to",
TextAscii(
title = _("Reply-To: Address"),
size = 40,
allow_empty = False,
)
),
( "host_subject",
TextUnicode(
title = _("Subject for host notifications"),
help = _("Here you are allowed to use all macros that are defined in the "
"notification context."),
default_value = "Check_MK: $HOSTNAME$ - $EVENT_TXT$",
size = 64,
)
),
( "service_subject",
TextUnicode(
title = _("Subject for service notifications"),
help = _("Here you are allowed to use all macros that are defined in the "
"notification context."),
default_value = "Check_MK: $HOSTNAME$/$SERVICEDESC$ $EVENT_TXT$",
size = 64,
)
),
( "elements",
ListChoice(
title = _("Information to be displayed in the email body"),
choices = [
( "address", _("IP Address of Host") ),
( "abstime", _("Absolute Time of Alert") ),
( "reltime", _("Relative Time of Alert") ),
( "longoutput", _("Additional Plugin Output") ),
( "ack_author", _("Acknowledgement Author") ),
( "ack_comment", _("Acknowledgement Comment") ),
( "perfdata", _("Performance Data") ),
( "graph", _("Performance Graphs") ),
( "context", _("Complete variable list (for testing)" ) ),
],
default_value = [ "perfdata", "graph", "abstime", "address", "longoutput" ],
)
),
( "url_prefix",
TextAscii(
title = _("URL prefix for links to Check_MK"),
help = _("If you specify a URL prefix here, then several parts of the "
"email body are armed with hyperlinks to your Check_MK GUI, so "
"that the recipient of the email can directly visit the host or "
"service in question in Check_MK. Specify an absolute URL including "
"the <tt>.../check_mk/</tt>"),
regex = "^(http|https)://.*/check_mk/$",
regex_error = _("The URL must begin with <tt>http</tt> or "
"<tt>https</tt> and end with <tt>/check_mk/</tt>."),
size = 64,
default_value = "http://" + socket.gethostname() + "/" + (
defaults.omd_site and defaults.omd_site + "/" or "") + "check_mk/",
)
),
( "no_floating_graphs",
FixedValue(
True,
title = _("Display graphs among each other"),
totext = _("Graphs are shown among each other"),
help = _("By default all multiple graphs in emails are displayed floating "
"nearby. You can enable this option to show the graphs among each "
"other."),
)
),
('bulk_sort_order',
DropdownChoice(
choices = [
('oldest_first', _('Oldest first')),
('newest_first', _('Newest first')),
],
help = _("With this option you can specify whether the oldest (default) or "
"the newest notification should get shown at the top of the notification mail."),
title = _("Notification sort order for bulk notifications"),
default = "oldest_first"
)
)
]
)
)
register_notification_parameters(
"asciimail",
Dictionary(
elements = [
( "from",
EmailAddress(
title = _("From: Address"),
size = 40,
allow_empty = False,
)
),
( "reply_to",
EmailAddress(
title = _("Reply-To: Address"),
size = 40,
allow_empty = False,
)
),
( "host_subject",
TextUnicode(
title = _("Subject for host notifications"),
help = _("Here you are allowed to use all macros that are defined in the "
"notification context."),
default_value = "Check_MK: $HOSTNAME$ - $EVENT_TXT$",
size = 64,
)
),
( "service_subject",
TextUnicode(
title = _("Subject for service notifications"),
help = _("Here you are allowed to use all macros that are defined in the "
"notification context."),
default_value = "Check_MK: $HOSTNAME$/$SERVICEDESC$ $EVENT_TXT$",
size = 64,
)
),
( "common_body",
TextAreaUnicode(
title = _("Body head for both host and service notifications"),
rows = 7,
cols = 58,
monospaced = True,
default_value =
"""Host: $HOSTNAME$
Alias: $HOSTALIAS$
Address: $HOSTADDRESS$
""",
)
),
( "host_body",
TextAreaUnicode(
title = _("Body tail for host notifications"),
rows = 9,
cols = 58,
monospaced = True,
default_value =
"""Event: $EVENT_TXT$
Output: $HOSTOUTPUT$
Perfdata: $HOSTPERFDATA$
$LONGHOSTOUTPUT$
""",
)
),
( "service_body",
TextAreaUnicode(
title = _("Body tail for service notifications"),
rows = 11,
cols = 58,
monospaced = True,
default_value =
"""Service: $SERVICEDESC$
Event: $EVENT_TXT$
Output: $SERVICEOUTPUT$
Perfdata: $SERVICEPERFDATA$
$LONGSERVICEOUTPUT$
""",
)
),
('bulk_sort_order',
DropdownChoice(
choices = [
('oldest_first', _('Oldest first')),
('newest_first', _('Newest first')),
],
help = _("With this option you can specify whether the oldest (default) or "
"the newest notification should get shown at the top of the notification mail."),
title = _("Notification sort order for bulk notifications"),
default = "oldest_first"
)
)
]
)
)
register_notification_parameters(
"mkeventd",
Dictionary(
elements = [
( "facility",
DropdownChoice(
title = _("Syslog Facility to use"),
help = _("The notifications will be converted into syslog messages with "
"the facility that you choose here. In the Event Console you can "
"later create a rule matching this facility."),
choices = syslog_facilities,
)
),
( "remote",
IPv4Address(
title = _("IP Address of remote Event Console"),
help = _("If you set this parameter then the notifications will be sent via "
"syslog/UDP (port 514) to a remote Event Console or syslog server."),
)
),
]
)
)
register_notification_parameters(
"spectrum",
Dictionary(
optional_keys = None,
elements = [
( "destination",
IPv4Address(
title = _("Destination IP"),
help = _("IP Address of the Spectrum server receiving the SNMP trap")
),
),
( "community",
TextAscii(
title = _("SNMP Community"),
help = _("SNMP Community for the SNMP trap")
)
),
( "baseoid",
TextAscii(
title = _("Base OID"),
help = _("The base OID for the trap content"),
default_value = "1.3.6.1.4.1.1234"
),
),
]
)
)
| xorpaul/check_mk | web/plugins/wato/notifications.py | Python | gpl-2.0 | 10,848 | [
"VisIt"
] | d417c710c8d035b845da19e8672d3a469bf93bbb0b687d079679836eefcfcd3e |
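The `url_prefix` parameter in the "mail" valuespec above constrains its value with the regex `^(http|https)://.*/check_mk/$`. A standalone sketch of that check, outside the WATO `TextAscii` machinery (the helper name here is hypothetical, not Check_MK API):

```python
import re

URL_PREFIX_RE = re.compile(r"^(http|https)://.*/check_mk/$")

def valid_url_prefix(value):
    """Mirror the regex check applied to url_prefix: the URL must use
    http(s) and end with /check_mk/."""
    return URL_PREFIX_RE.match(value) is not None

valid_url_prefix("http://monitor.example.com/mysite/check_mk/")  # accepted
valid_url_prefix("ftp://monitor.example.com/check_mk/")          # rejected: wrong scheme
valid_url_prefix("https://monitor.example.com/check_mk")         # rejected: missing trailing slash
```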
"""
Tests to try and ensure that important mayavi imports work with no UI.
"""
# Author: Prabhu Ramachandran <prabhu@aero.iitb.ac.in>
# Copyright (c) 2009, Enthought, Inc.
# License: BSD Style.
import sys
import unittest
from traits.etsconfig.api import ETSConfig
class TestNoUIToolkit(unittest.TestCase):
"""Test if any important mayavi imports work with no UI
whatsoever."""
def setUp(self):
self.orig_tk = ETSConfig.toolkit
ETSConfig._toolkit = 'null'
# Import something from Pyface to force any potential imports
# from a UI toolkit. Why did I pick Pyface? Well, adder_node
# imports ImageResource and this seems to trigger some UI
# toolkit import and this makes life difficult as far as the
# testing goes. Forcing the issue here should let us test
# safely since the Pyface imports will be done.
from pyface.api import GUI
# Remove any references to wx and Qt
saved = {}
for mod in ['wx', 'PyQt4', 'PySide']:
saved[mod] = sys.modules.pop(mod, None)
self.saved = saved
def tearDown(self):
ETSConfig._toolkit = self.orig_tk
# Add back any references to wx and Qt
for mod in ['wx', 'PyQt4', 'PySide']:
m = self.saved[mod]
if m is not None:
sys.modules[mod] = m
def test_no_ui(self):
"""Test if mayavi imports work without any UI (wx or PyQt4)."""
# These imports should work without any UI.
from mayavi import mlab
from mayavi.api import Engine
from mayavi.sources.api import VTKDataSource
from mayavi.filters.api import Optional
from mayavi.modules.api import Outline
from mayavi.preferences.api import preference_manager
# Should not have triggered an import wx or PyQt4.
self.assertEqual('wx' in sys.modules, False)
self.assertEqual('PyQt4' in sys.modules, False)
if __name__ == '__main__':
unittest.main()
| liulion/mayavi | mayavi/tests/test_no_ui_toolkit.py | Python | bsd-3-clause | 2,037 | [
"Mayavi"
] | 1fc1c09fe3c20dd2d97769e0476c2be49c8faac7ce07ace4c02d6fc85b90e230 |
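The setUp/tearDown pair above pops GUI toolkit modules out of `sys.modules` so that fresh imports can be detected, then restores them afterwards. The same pattern can be packaged as a reusable context manager; this is a generic sketch, not part of the Mayavi test suite:

```python
import sys
from contextlib import contextmanager

@contextmanager
def hidden_modules(*names):
    """Temporarily remove modules from sys.modules so that any import of
    them inside the block is visible as a new entry; restore on exit."""
    saved = {name: sys.modules.pop(name, None) for name in names}
    try:
        yield
    finally:
        for name, mod in saved.items():
            if mod is not None:
                sys.modules[name] = mod

with hidden_modules("wx", "PyQt4", "PySide"):
    assert "wx" not in sys.modules  # an import here would re-add it
```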
from .estimator_base import H2OEstimator
class H2ODeepLearningEstimator(H2OEstimator):
def __init__(self, model_id=None, overwrite_with_best_model=None, checkpoint=None,
use_all_factor_levels=None, activation=None, hidden=None, epochs=None,
train_samples_per_iteration=None, seed=None, adaptive_rate=None, rho=None,
epsilon=None, rate=None, rate_annealing=None, rate_decay=None,
momentum_start=None, momentum_ramp=None, momentum_stable=None,
nesterov_accelerated_gradient=None, input_dropout_ratio=None,
hidden_dropout_ratios=None, l1=None, l2=None, max_w2=None,
initial_weight_distribution=None, initial_weight_scale=None, loss=None,
distribution=None, tweedie_power=None, score_interval=None,
score_training_samples=None, score_validation_samples=None,
score_duty_cycle=None, classification_stop=None, regression_stop=None,
quiet_mode=None, max_confusion_matrix_size=None, max_hit_ratio_k=None,
balance_classes=None, class_sampling_factors=None,
max_after_balance_size=None, score_validation_sampling=None,
diagnostics=None, variable_importances=None, fast_mode=None,
ignore_const_cols=None, force_load_balance=None,
replicate_training_data=None, single_node_mode=None,
shuffle_training_data=None, sparse=None, col_major=None,
average_activation=None, sparsity_beta=None, max_categorical_features=None,
reproducible=None, export_weights_and_biases=None, nfolds=None,
fold_assignment=None, keep_cross_validation_predictions=None,
stopping_rounds=None, stopping_metric=None, stopping_tolerance=None):
"""
Build a supervised Deep Learning model
Performs Deep Learning neural networks on an H2OFrame
Parameters
----------
model_id : str, optional
The unique id assigned to the resulting model. If none is given, an id will
automatically be generated.
overwrite_with_best_model : bool
If True, overwrite the final model with the best model found during training.
Defaults to True.
checkpoint : H2ODeepLearningModel, optional
Model checkpoint (either key or H2ODeepLearningModel) to resume training with.
use_all_factor_levels : bool
Use all factor levels of categorical variance. Otherwise the first factor level is
omitted (without loss of accuracy). Useful for variable importances and auto-enabled
for autoencoder.
activation : str
A string indicating the activation function to use.
Must be either "Tanh", "TanhWithDropout", "Rectifier", "RectifierWithDropout",
"Maxout", or "MaxoutWithDropout"
hidden : list
Hidden layer sizes (e.g. [100,100])
epochs : float
How many times the dataset should be iterated (streamed), can be fractional
train_samples_per_iteration : int
Number of training samples (globally) per MapReduce iteration.
Special values are: 0 one epoch; -1 all available data
(e.g., replicated training data); or -2 auto-tuning (default)
seed : int
Seed for random numbers (affects sampling) - Note: only reproducible when
running single threaded
adaptive_rate : bool
Adaptive learning rate (ADADELTA)
rho : float
Adaptive learning rate time decay factor (similarity to prior updates)
epsilon : float
Adaptive learning rate parameter, similar to learn rate annealing during initial
training phase. Typical values are between 1.0e-10 and 1.0e-4
rate : float
Learning rate (higher => less stable, lower => slower convergence)
rate_annealing : float
Learning rate annealing: rate / (1 + rate_annealing*samples)
rate_decay : float
Learning rate decay factor between layers (N-th layer: rate*alpha^(N-1))
momentum_start : float
Initial momentum at the beginning of training (try 0.5)
momentum_ramp : float
Number of training samples for which momentum increases
momentum_stable : float
Final momentum after the ramp is over (try 0.99)
nesterov_accelerated_gradient : bool
Logical. Use Nesterov accelerated gradient (recommended)
input_dropout_ratio : float
A fraction of the features for each training row to be omitted from training in
order to improve generalization (dimension sampling).
hidden_dropout_ratios : float
Hidden layer dropout ratios (can improve generalization); specify one value per hidden
layer, defaults to 0.5
l1 : float
L1 regularization (can add stability and improve generalization,
causes many weights to become 0)
l2 : float
L2 regularization (can add stability and improve generalization,
causes many weights to be small)
max_w2 : float
Constraint for squared sum of incoming weights per unit (e.g. Rectifier)
initial_weight_distribution : str
Can be "Uniform", "UniformAdaptive", or "Normal"
initial_weight_scale : str
Uniform: -value ... value, Normal: stddev
loss : str
Loss function: "Automatic", "CrossEntropy" (for classification only),
"Quadratic", "Absolute" (experimental) or "Huber" (experimental)
distribution : str
A character string. The distribution function of the response.
Must be "AUTO", "bernoulli", "multinomial", "poisson", "gamma",
"tweedie", "laplace", "huber" or "gaussian"
tweedie_power : float
Tweedie power (only for Tweedie distribution, must be between 1 and 2)
score_interval : int
Shortest time interval (in secs) between model scoring
score_training_samples : int
Number of training set samples for scoring (0 for all)
score_validation_samples : int
Number of validation set samples for scoring (0 for all)
score_duty_cycle : float
Maximum duty cycle fraction for scoring (lower: more training, higher: more scoring)
classification_stop : float
Stopping criterion for classification error fraction on training data
(-1 to disable)
regression_stop : float
Stopping criterion for regression error (MSE) on training data (-1 to disable)
stopping_rounds : int
Early stopping based on convergence of stopping_metric.
Stop if simple moving average of length k of the stopping_metric does not improve
(by stopping_tolerance) for k=stopping_rounds scoring events.
Can only trigger after at least 2k scoring events. Use 0 to disable.
stopping_metric : str
Metric to use for convergence checking, only for _stopping_rounds > 0
Can be one of "AUTO", "deviance", "logloss", "MSE", "AUC", "r2", "misclassification".
stopping_tolerance : float
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)
quiet_mode : bool
Enable quiet mode for less output to standard output
max_confusion_matrix_size : int
Max. size (number of classes) for confusion matrices to be shown
max_hit_ratio_k : float
Max number (top K) of predictions to use for hit ratio computation
(for multi-class only, 0 to disable)
balance_classes : bool
Balance training data class counts via over/under-sampling (for imbalanced data)
class_sampling_factors : list
Desired over/under-sampling ratios per class (in lexicographic order).
If not specified, sampling factors will be automatically computed to obtain class
balance during training. Requires balance_classes.
max_after_balance_size : float
Maximum relative size of the training data after balancing class counts
(can be less than 1.0)
score_validation_sampling :
Method used to sample validation dataset for scoring
diagnostics : bool
Enable diagnostics for hidden layers
variable_importances : bool
Compute variable importances for input features (Gedeon method) - can be slow
for large networks)
fast_mode : bool
Enable fast mode (minor approximations in back-propagation)
ignore_const_cols : bool
Ignore constant columns (no information can be gained anyway)
force_load_balance : bool
Force extra load balancing to increase training speed for small datasets
(to keep all cores busy)
replicate_training_data : bool
Replicate the entire training dataset onto every node for faster training
single_node_mode : bool
Run on a single node for fine-tuning of model parameters
shuffle_training_data : bool
Enable shuffling of training data (recommended if training data is replicated and
train_samples_per_iteration is close to numRows*numNodes)
sparse : bool
Sparse data handling (Experimental)
col_major : bool
Use a column major weight matrix for input layer. Can speed up forward propagation,
but might slow down back propagation (Experimental)
average_activation : float
Average activation for sparse auto-encoder (Experimental)
sparsity_beta : float
Sparsity regularization (Experimental)
max_categorical_features : int
Max. number of categorical features, enforced via hashing (Experimental)
reproducible : bool
Force reproducibility on small data (will be slow - only uses 1 thread)
export_weights_and_biases : bool
Whether to export Neural Network weights and biases to H2O Frames
nfolds : int, optional
Number of folds for cross-validation. If nfolds >= 2, then validation must remain
empty.
fold_assignment : str
Cross-validation fold assignment scheme, if fold_column is not specified
Must be "AUTO", "Random" or "Modulo"
keep_cross_validation_predictions : bool
Whether to keep the predictions of the cross-validation models
Examples
--------
>>> import h2o as ml
>>> from h2o.estimators.deeplearning import H2ODeepLearningEstimator
>>> ml.init()
>>> rows=[[1,2,3,4,0],[2,1,2,4,1],[2,1,4,2,1],[0,1,2,34,1],[2,3,4,1,0]]*50
>>> fr = ml.H2OFrame(rows)
>>> fr[4] = fr[4].asfactor()
>>> model = H2ODeepLearningEstimator()
>>> model.train(x=range(4), y=4, training_frame=fr)
"""
super(H2ODeepLearningEstimator, self).__init__()
self._parms = locals()
self._parms = {k: v for k, v in self._parms.items() if k != "self"}
self._parms["autoencoder"] = isinstance(self, H2OAutoEncoderEstimator)
@property
def overwrite_with_best_model(self):
return self._parms["overwrite_with_best_model"]
@overwrite_with_best_model.setter
def overwrite_with_best_model(self, value):
self._parms["overwrite_with_best_model"] = value
@property
def checkpoint(self):
return self._parms["checkpoint"]
@checkpoint.setter
def checkpoint(self, value):
self._parms["checkpoint"] = value
@property
def use_all_factor_levels(self):
return self._parms["use_all_factor_levels"]
@use_all_factor_levels.setter
def use_all_factor_levels(self, value):
self._parms["use_all_factor_levels"] = value
@property
def activation(self):
return self._parms["activation"]
@activation.setter
def activation(self, value):
self._parms["activation"] = value
@property
def hidden(self):
return self._parms["hidden"]
@hidden.setter
def hidden(self, value):
self._parms["hidden"] = value
@property
def epochs(self):
return self._parms["epochs"]
@epochs.setter
def epochs(self, value):
self._parms["epochs"] = value
@property
def train_samples_per_iteration(self):
return self._parms["train_samples_per_iteration"]
@train_samples_per_iteration.setter
def train_samples_per_iteration(self, value):
self._parms["train_samples_per_iteration"] = value
@property
def seed(self):
return self._parms["seed"]
@seed.setter
def seed(self, value):
self._parms["seed"] = value
@property
def adaptive_rate(self):
return self._parms["adaptive_rate"]
@adaptive_rate.setter
def adaptive_rate(self, value):
self._parms["adaptive_rate"] = value
@property
def rho(self):
return self._parms["rho"]
@rho.setter
def rho(self, value):
self._parms["rho"] = value
@property
def epsilon(self):
return self._parms["epsilon"]
@epsilon.setter
def epsilon(self, value):
self._parms["epsilon"] = value
@property
def rate(self):
return self._parms["rate"]
@rate.setter
def rate(self, value):
self._parms["rate"] = value
@property
def rate_annealing(self):
return self._parms["rate_annealing"]
@rate_annealing.setter
def rate_annealing(self, value):
self._parms["rate_annealing"] = value
@property
def rate_decay(self):
return self._parms["rate_decay"]
@rate_decay.setter
def rate_decay(self, value):
self._parms["rate_decay"] = value
@property
def momentum_start(self):
return self._parms["momentum_start"]
@momentum_start.setter
def momentum_start(self, value):
self._parms["momentum_start"] = value
@property
def momentum_ramp(self):
return self._parms["momentum_ramp"]
@momentum_ramp.setter
def momentum_ramp(self, value):
self._parms["momentum_ramp"] = value
@property
def momentum_stable(self):
return self._parms["momentum_stable"]
@momentum_stable.setter
def momentum_stable(self, value):
self._parms["momentum_stable"] = value
@property
def nesterov_accelerated_gradient(self):
return self._parms["nesterov_accelerated_gradient"]
@nesterov_accelerated_gradient.setter
def nesterov_accelerated_gradient(self, value):
self._parms["nesterov_accelerated_gradient"] = value
@property
def input_dropout_ratio(self):
return self._parms["input_dropout_ratio"]
@input_dropout_ratio.setter
def input_dropout_ratio(self, value):
self._parms["input_dropout_ratio"] = value
@property
def hidden_dropout_ratios(self):
return self._parms["hidden_dropout_ratios"]
@hidden_dropout_ratios.setter
def hidden_dropout_ratios(self, value):
self._parms["hidden_dropout_ratios"] = value
@property
def l1(self):
return self._parms["l1"]
@l1.setter
def l1(self, value):
self._parms["l1"] = value
@property
def l2(self):
return self._parms["l2"]
@l2.setter
def l2(self, value):
self._parms["l2"] = value
@property
def max_w2(self):
return self._parms["max_w2"]
@max_w2.setter
def max_w2(self, value):
self._parms["max_w2"] = value
@property
def initial_weight_distribution(self):
return self._parms["initial_weight_distribution"]
@initial_weight_distribution.setter
def initial_weight_distribution(self, value):
self._parms["initial_weight_distribution"] = value
@property
def initial_weight_scale(self):
return self._parms["initial_weight_scale"]
@initial_weight_scale.setter
def initial_weight_scale(self, value):
self._parms["initial_weight_scale"] = value
@property
def loss(self):
return self._parms["loss"]
@loss.setter
def loss(self, value):
self._parms["loss"] = value
@property
def distribution(self):
return self._parms["distribution"]
@distribution.setter
def distribution(self, value):
self._parms["distribution"] = value
@property
def tweedie_power(self):
return self._parms["tweedie_power"]
@tweedie_power.setter
def tweedie_power(self, value):
self._parms["tweedie_power"] = value
@property
def score_interval(self):
return self._parms["score_interval"]
@score_interval.setter
def score_interval(self, value):
self._parms["score_interval"] = value
@property
def score_training_samples(self):
return self._parms["score_training_samples"]
@score_training_samples.setter
def score_training_samples(self, value):
self._parms["score_training_samples"] = value
@property
def score_validation_samples(self):
return self._parms["score_validation_samples"]
@score_validation_samples.setter
def score_validation_samples(self, value):
self._parms["score_validation_samples"] = value
@property
def score_duty_cycle(self):
return self._parms["score_duty_cycle"]
@score_duty_cycle.setter
def score_duty_cycle(self, value):
self._parms["score_duty_cycle"] = value
@property
def classification_stop(self):
return self._parms["classification_stop"]
@classification_stop.setter
def classification_stop(self, value):
self._parms["classification_stop"] = value
@property
def regression_stop(self):
return self._parms["regression_stop"]
@regression_stop.setter
def regression_stop(self, value):
self._parms["regression_stop"] = value
@property
def stopping_rounds(self):
return self._parms["stopping_rounds"]
@stopping_rounds.setter
def stopping_rounds(self, value):
self._parms["stopping_rounds"] = value
@property
def stopping_metric(self):
return self._parms["stopping_metric"]
@stopping_metric.setter
def stopping_metric(self, value):
self._parms["stopping_metric"] = value
@property
def stopping_tolerance(self):
return self._parms["stopping_tolerance"]
@stopping_tolerance.setter
def stopping_tolerance(self, value):
self._parms["stopping_tolerance"] = value
@property
def quiet_mode(self):
return self._parms["quiet_mode"]
@quiet_mode.setter
def quiet_mode(self, value):
self._parms["quiet_mode"] = value
@property
def max_confusion_matrix_size(self):
return self._parms["max_confusion_matrix_size"]
@max_confusion_matrix_size.setter
def max_confusion_matrix_size(self, value):
self._parms["max_confusion_matrix_size"] = value
@property
def max_hit_ratio_k(self):
return self._parms["max_hit_ratio_k"]
@max_hit_ratio_k.setter
def max_hit_ratio_k(self, value):
self._parms["max_hit_ratio_k"] = value
@property
def balance_classes(self):
return self._parms["balance_classes"]
@balance_classes.setter
def balance_classes(self, value):
self._parms["balance_classes"] = value
@property
def class_sampling_factors(self):
return self._parms["class_sampling_factors"]
@class_sampling_factors.setter
def class_sampling_factors(self, value):
self._parms["class_sampling_factors"] = value
@property
def max_after_balance_size(self):
return self._parms["max_after_balance_size"]
@max_after_balance_size.setter
def max_after_balance_size(self, value):
self._parms["max_after_balance_size"] = value
@property
def score_validation_sampling(self):
return self._parms["score_validation_sampling"]
@score_validation_sampling.setter
def score_validation_sampling(self, value):
self._parms["score_validation_sampling"] = value
@property
def diagnostics(self):
return self._parms["diagnostics"]
@diagnostics.setter
def diagnostics(self, value):
self._parms["diagnostics"] = value
@property
def variable_importances(self):
return self._parms["variable_importances"]
@variable_importances.setter
def variable_importances(self, value):
self._parms["variable_importances"] = value
@property
def fast_mode(self):
return self._parms["fast_mode"]
@fast_mode.setter
def fast_mode(self, value):
self._parms["fast_mode"] = value
@property
def ignore_const_cols(self):
return self._parms["ignore_const_cols"]
@ignore_const_cols.setter
def ignore_const_cols(self, value):
self._parms["ignore_const_cols"] = value
@property
def force_load_balance(self):
return self._parms["force_load_balance"]
@force_load_balance.setter
def force_load_balance(self, value):
self._parms["force_load_balance"] = value
@property
def replicate_training_data(self):
return self._parms["replicate_training_data"]
@replicate_training_data.setter
def replicate_training_data(self, value):
self._parms["replicate_training_data"] = value
@property
def single_node_mode(self):
return self._parms["single_node_mode"]
@single_node_mode.setter
def single_node_mode(self, value):
self._parms["single_node_mode"] = value
@property
def shuffle_training_data(self):
return self._parms["shuffle_training_data"]
@shuffle_training_data.setter
def shuffle_training_data(self, value):
self._parms["shuffle_training_data"] = value
@property
def sparse(self):
return self._parms["sparse"]
@sparse.setter
def sparse(self, value):
self._parms["sparse"] = value
@property
def col_major(self):
return self._parms["col_major"]
@col_major.setter
def col_major(self, value):
self._parms["col_major"] = value
@property
def average_activation(self):
return self._parms["average_activation"]
@average_activation.setter
def average_activation(self, value):
self._parms["average_activation"] = value
@property
def sparsity_beta(self):
return self._parms["sparsity_beta"]
@sparsity_beta.setter
def sparsity_beta(self, value):
self._parms["sparsity_beta"] = value
@property
def max_categorical_features(self):
return self._parms["max_categorical_features"]
@max_categorical_features.setter
def max_categorical_features(self, value):
self._parms["max_categorical_features"] = value
@property
def reproducible(self):
return self._parms["reproducible"]
@reproducible.setter
def reproducible(self, value):
self._parms["reproducible"] = value
@property
def export_weights_and_biases(self):
return self._parms["export_weights_and_biases"]
@export_weights_and_biases.setter
def export_weights_and_biases(self, value):
self._parms["export_weights_and_biases"] = value
@property
def nfolds(self):
return self._parms["nfolds"]
@nfolds.setter
def nfolds(self, value):
self._parms["nfolds"] = value
@property
def fold_assignment(self):
return self._parms["fold_assignment"]
@fold_assignment.setter
def fold_assignment(self, value):
self._parms["fold_assignment"] = value
@property
def keep_cross_validation_predictions(self):
return self._parms["keep_cross_validation_predictions"]
@keep_cross_validation_predictions.setter
def keep_cross_validation_predictions(self, value):
self._parms["keep_cross_validation_predictions"] = value
class H2OAutoEncoderEstimator(H2ODeepLearningEstimator):
"""
Examples
--------
>>> import h2o as ml
>>> from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
>>> ml.init()
>>> rows=[[1,2,3,4,0]*50,[2,1,2,4,1]*50,[2,1,4,2,1]*50,[0,1,2,34,1]*50,[2,3,4,1,0]*50]
>>> fr = ml.H2OFrame(rows)
>>> fr[4] = fr[4].asfactor()
>>> model = H2OAutoEncoderEstimator()
>>> model.train(x=range(4), training_frame=fr)
"""
pass | madmax983/h2o-3 | h2o-py/h2o/estimators/deeplearning.py | Python | apache-2.0 | 22,893 | [
"Gaussian"
] | 4e88d00becf16ed8211e9cb4cd0226752bc1572769249aeff794ff421feea178 |
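The estimator above hand-writes an identical getter/setter pair for every parameter, each proxying one key of the `_parms` dict. The same effect can be achieved by synthesizing the properties in a loop; this is a generic sketch with a hypothetical parameter subset, not the actual H2O implementation:

```python
def _parm_property(name):
    """Build a property that proxies one key of the instance's _parms dict."""
    def getter(self):
        return self._parms.get(name)
    def setter(self, value):
        self._parms[name] = value
    return property(getter, setter)

class Estimator(object):
    # hypothetical subset of the dozens of parameters defined above
    PARAM_NAMES = ("epochs", "hidden", "seed")

    def __init__(self):
        self._parms = {}

# attach one synthesized property per parameter name
for _name in Estimator.PARAM_NAMES:
    setattr(Estimator, _name, _parm_property(_name))

m = Estimator()
m.epochs = 10
```

Because `_parm_property` closes over its `name` argument, each property binds the right key without the classic late-binding loop pitfall.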
import chainerx
from chainerx import _docs
def set_docs():
_docs_creation()
_docs_evaluation()
_docs_indexing()
_docs_linalg()
_docs_logic()
_docs_loss()
_docs_manipulation()
_docs_math()
_docs_sorting()
_docs_statistics()
_docs_connection()
_docs_normalization()
_docs_pooling()
_docs_rnn()
def _docs_creation():
_docs.set_doc(
chainerx.empty,
"""empty(shape, dtype, device=None)
Returns an array without initializing the elements.
Args:
shape (tuple of ints): Shape of the array.
dtype: Data type of the array.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
:class:`~chainerx.ndarray`: New array with elements not initialized.
.. seealso:: :func:`numpy.empty`
""")
_docs.set_doc(
chainerx.empty_like,
"""empty_like(a, device=None)
Returns a new array with same shape and dtype of a given array.
Args:
a (~chainerx.ndarray): Prototype array.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
:class:`~chainerx.ndarray`: New array with same shape and dtype as ``a`` \
with elements not initialized.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the prototype array.
.. seealso:: :func:`numpy.empty_like`
""")
_docs.set_doc(
chainerx.eye,
"""eye(N, M=None, k=0, dtype=float64, device=None)
Returns a 2-D array with ones on the diagonals and zeros elsewhere.
Args:
N (int): Number of rows.
M (int): Number of columns. M == N by default.
k (int): Index of the diagonal. Zero indicates the main diagonal,
a positive index an upper diagonal, and a negative index a lower
diagonal.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: A 2-D array with given diagonals filled with ones and
zeros elsewhere.
.. seealso:: :func:`numpy.eye`
""")
_docs.set_doc(
chainerx.tri,
"""tri(N, M=None, k=0, dtype=float32, device=None)
Returns a 2-D array with ones at and below the given diagonal
and zeros elsewhere.
Args:
N (int): Number of rows.
M (int): Number of columns. M == N by default.
k (int): Index of the diagonal. Zero indicates the main diagonal,
a positive index an upper diagonal, and a negative index a lower
diagonal.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: A 2-D array with given diagonals filled ones at and
below the given diagonal and zeros elsewhere.
.. seealso:: :func:`numpy.tri`
""")
_docs.set_doc(
chainerx.tril,
"""tril(m, k=0)
Lower triangle of an array.
Returns a copy of an array with elements above the k-th diagonal zeroed.
Args:
m (~chainerx.ndarray): Input array.
k (int): Index of the diagonal. Zero indicates the main diagonal,
a positive index an upper diagonal, and a negative index a lower
diagonal.
Returns:
~chainerx.ndarray: Lower triangle of ``m``.
.. seealso:: :func:`numpy.tril`
""")
_docs.set_doc(
chainerx.triu,
"""triu(m, k=0)
Upper triangle of an array.
Returns a copy of an array with elements below the k-th diagonal zeroed.
Args:
m (~chainerx.ndarray): Input array.
k (int): Index of the diagonal. Zero indicates the main diagonal,
a positive index an upper diagonal, and a negative index a lower
diagonal.
Returns:
~chainerx.ndarray: Upper triangle of ``m``.
.. seealso:: :func:`numpy.triu`
""")
_docs.set_doc(
chainerx.identity,
"""identity(n, dtype=None, device=None)
Returns a 2-D identity array.
It is equivalent to ``eye(n, n, dtype)``.
Args:
n (int): Number of rows and columns.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: A 2-D identity array.
.. seealso:: :func:`numpy.identity`
""")
_docs.set_doc(
chainerx.ones,
"""ones(shape, dtype, device=None)
Returns a new array of given shape and dtype, filled with ones.
Args:
shape (tuple of ints): Shape of the array.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
.. seealso:: :func:`numpy.ones`
""")
_docs.set_doc(
chainerx.ones_like,
"""ones_like(a, device=None)
Returns an array of ones with same shape and dtype as a given array.
Args:
a (~chainerx.ndarray): Prototype array.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the prototype array.
.. seealso:: :func:`numpy.ones_like`
""")
_docs.set_doc(
chainerx.zeros,
"""zeros(shape, dtype, device=None)
Returns a new array of given shape and dtype, filled with zeros.
Args:
shape (tuple of ints): Shape of the array.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
.. seealso:: :func:`numpy.zeros`
""")
_docs.set_doc(
chainerx.zeros_like,
"""zeros_like(a, device=None)
Returns an array of zeros with same shape and dtype as a given array.
Args:
a (~chainerx.ndarray): Prototype array.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the prototype array.
.. seealso:: :func:`numpy.zeros_like`
""")
_docs.set_doc(
chainerx.full,
"""full(shape, fill_value, dtype, device=None)
Returns a new array of given shape and dtype, filled with a given value.
Args:
shape (tuple of ints): Shape of the array.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
.. seealso:: :func:`numpy.full`
""")
_docs.set_doc(
chainerx.full_like,
"""full_like(a, fill_value, dtype=None, device=None)
Returns a full array with same shape and dtype as a given array.
Args:
a (~chainerx.ndarray): Prototype array.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the prototype array.
.. seealso:: :func:`numpy.full_like`
""")
_docs.set_doc(
chainerx.array,
"""array(object, dtype=None, copy=True, device=None)
Creates an array.
Args:
object: A :class:`~chainerx.ndarray` object or any other object that can be
passed to :func:`numpy.array`.
dtype: Data type. If omitted, it's inferred from the input.
copy (bool): If ``True``, the object is always copied. Otherwise, a copy
will only be made if it is needed to satisfy any of the other
requirements (dtype, device, etc.).
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: New array.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the input array.
.. seealso:: :func:`numpy.array`
""")
_docs.set_doc(
chainerx.asarray,
"""asarray(a, dtype=None, device=None)
Converts an object to an array.
Args:
a: The source object.
dtype: Data type. If omitted, it's inferred from the input.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: Array interpretation of ``a``. If ``a`` is already an \
ndarray on the given device with matching dtype, no copy is performed.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the input array.
.. seealso:: :func:`numpy.asarray`
""")
_docs.set_doc(
chainerx.ascontiguousarray,
"""ascontiguousarray(a, dtype=None, device=None)
Returns a C-contiguous array.
Args:
a (~chainerx.ndarray): Source array.
dtype: Data type.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: C-contiguous array. A copy will be made only if needed.
Warning:
If ``device`` argument is omitted, the new array is created on the default
device, not the device of the input array.
.. seealso:: :func:`numpy.ascontiguousarray`
""")
_docs.set_doc(
chainerx.copy,
"""copy(a)
Creates a copy of a given array.
Args:
a (~chainerx.ndarray): Source array.
Returns:
~chainerx.ndarray: A copy array on the same device as ``a``.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.copy`
""")
_docs.set_doc(
chainerx.frombuffer,
"""frombuffer(buffer, dtype=float, count=-1, offset=0, device=None)
Returns a 1-D array interpretation of a buffer.
The given ``buffer`` memory must be usable on the given device, otherwise,
an error is raised.
Note:
The ``native`` backend requires a buffer of main memory, and
the ``cuda`` backend requires a buffer of CUDA memory.
No copy is performed.
Args:
buffer: An object that exposes the buffer interface.
dtype: Data type of the returned array.
count (int): Number of items to read. -1 means all data in the buffer.
offset (int): Start reading the buffer from this offset (in bytes).
device (~chainerx.Device): Device of the returned array.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: 1-D array interpretation of ``buffer``.
.. seealso:: :func:`numpy.frombuffer`
""")
_docs.set_doc(
chainerx.arange,
"""arange([start=0, ]stop, [step=1, ]dtype=None, device=None)
Returns an array with evenly spaced values within a given interval.
Values are generated within the half-open interval [``start``, ``stop``).
The first three arguments are mapped like the ``range`` built-in function,
i.e. ``start`` and ``step`` are optional.
Args:
start: Start of the interval.
stop: End of the interval.
step: Step width between each pair of consecutive values.
dtype: Data type specifier. It is inferred from other arguments by
default.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: The 1-D array of range values.
.. seealso:: :func:`numpy.arange`
""")
_docs.set_doc(
chainerx.linspace,
"""linspace(start, stop, num=50, endpoint=True, dtype=None, device=None)
Returns an array with evenly spaced numbers over a specified interval.
Instead of specifying the step width like :func:`chainerx.arange()`,
this function requires the total number of elements to be specified.
Args:
start: Start of the interval.
stop: End of the interval.
num: Number of elements.
endpoint (bool): If ``True``, the stop value is included as the last
element. Otherwise, the stop value is omitted.
dtype: Data type specifier. It is inferred from the start and stop
arguments by default.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: The 1-D array of ranged values.
.. seealso:: :func:`numpy.linspace`
""") # NOQA
_docs.set_doc(
chainerx.diag,
"""diag(v, k=0, device=None)
Returns a diagonal or a diagonal array.
Args:
v (~chainerx.ndarray): Array object.
k (int): Index of diagonals. Zero indicates the main diagonal, a
positive value an upper diagonal, and a negative value a lower
diagonal.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: If ``v`` is a 1-D array, then it returns a 2-D
array with the specified diagonal filled by ``v``. If ``v`` is a
2-D array, then it returns the specified diagonal of ``v``. In the latter
case, if ``v`` is a :class:`chainerx.ndarray` object, then its view is
returned.
Note:
The argument ``v`` does not support array-like objects yet.
.. seealso:: :func:`numpy.diag`
""")
_docs.set_doc(
chainerx.diagflat,
"""diagflat(v, k=0, device=None)
Creates a diagonal array from the flattened input.
Args:
v (~chainerx.ndarray): Array object.
k (int): Index of diagonals. See :func:`chainerx.diag`.
device (~chainerx.Device): Device on which the array is allocated.
If omitted, :ref:`the default device <chainerx_device>` is chosen.
Returns:
~chainerx.ndarray: A 2-D diagonal array with the diagonal copied
from ``v``.
Note:
The argument ``v`` does not support array-like objects yet.
.. seealso:: :func:`numpy.diagflat`
""")
_docs.set_doc(
chainerx.meshgrid,
"""meshgrid(xi, indexing='xy')
Returns coordinate matrices from coordinate vectors.
Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector
fields over N-D grids, given one-dimensional coordinate arrays x1, x2, ..., xn.
Args:
    xi (sequence of :class:`~chainerx.ndarray`\\ s): 1-D arrays
        representing the coordinates of a grid.
    indexing (str): {'xy', 'ij'}, optional.
        Cartesian ('xy', default) or matrix ('ij') indexing of output.
Returns:
    list of :class:`~chainerx.ndarray`\\ s: For vectors x1, x2, ..., xn with
    lengths Ni=len(xi), returns (N1, N2, N3, ..., Nn) shaped arrays if
    indexing='ij' or (N2, N1, N3, ..., Nn) shaped arrays if indexing='xy',
    with the elements of xi repeated to fill the matrix along the first
    dimension for x1, the second for x2 and so on.
.. seealso:: :func:`numpy.meshgrid`
""")
def _docs_evaluation():
_docs.set_doc(
chainerx.accuracy,
"""accuracy(y, t, ignore_label=None)
Computes multiclass classification accuracy of the minibatch.
Args:
y (~chainerx.ndarray):
Array whose (i, j, k, ...)-th element indicates the score of
the class j at the (i, k, ...)-th sample.
The prediction label :math:`\\hat t` is calculated by the formula
:math:`\\hat t(i, k, ...) = \\operatorname{\\mathrm{argmax}}_j \
y(i, j, k, ...)`.
t (~chainerx.ndarray):
Array of ground truth labels.
ignore_label (int or None): Skip calculating accuracy
if the true label is ``ignore_label``.
Returns:
:func:`~chainerx.ndarray`: A variable holding a scalar \
array of the accuracy.
Note:
This function is non-differentiable.
.. seealso:: :func:`chainer.functions.accuracy`
.. admonition:: Example
We show the most common case, when ``y`` is the two dimensional array.
>>> y = chainerx.array([[0.1, 0.7, 0.2], # prediction label is 1
... [8.0, 1.0, 2.0], # prediction label is 0
... [-8.0, 1.0, 2.0], # prediction label is 2
... [-8.0, -1.0, -2.0]]) # prediction label is 1
>>> t = chainerx.array([1, 0, 2, 1], chainerx.int32)
>>> chainerx.accuracy(y, t) \
# 100% accuracy because all samples are correct
array(1., shape=(), dtype=float64, device='native:0')
>>> t = chainerx.array([1, 0, 0, 0], chainerx.int32)
>>> chainerx.accuracy(y, t) \
# 50% accuracy because 1st and 2nd samples are correct
array(0.5, shape=(), dtype=float64, device='native:0')
>>> chainerx.accuracy(y, t, ignore_label=0) \
# 100% accuracy because of ignoring the 2nd, 3rd and 4th samples.
array(1., shape=(), dtype=float64, device='native:0')
""")
def _docs_indexing():
_docs.set_doc(
chainerx.take,
"""take(a, indices, axis)
Takes elements from an array along an axis.
Args:
a (~chainerx.ndarray): Source array.
indices (~chainerx.ndarray):
The indices of the values to extract. When indices are out of bounds,
they are wrapped around.
axis (int): The axis over which to select values.
mode (str): Specifies how out-of-bounds indices will behave.
'raise' - raise an error
'wrap' - wrap around
'clip' - clip to the range
Returns:
:func:`~chainerx.ndarray`: Output array.
Note:
This function currently does not support ``axis=None``.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
Note:
The default mode for the native backend is 'raise', while for the cuda
backend it is 'wrap' in order to prevent device synchronization.
'raise' mode is currently not supported in the CUDA backend.
.. seealso:: :func:`numpy.take`
""")
_docs.set_doc(
chainerx.where,
"""where(condition, x, y)
Return elements chosen from ``x`` or ``y`` depending on condition.
Args:
condition (~chainerx.ndarray): Where True, yield ``x``, otherwise
yield ``y``.
x (~chainerx.ndarray): Values from which to choose.
y (~chainerx.ndarray): Values from which to choose.
Returns:
:func:`~chainerx.ndarray`: An array with elements
from ``x`` where condition is True, and elements from ``y`` elsewhere.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x`` and ``y``.
.. seealso:: :func:`numpy.where`
""")
_docs.set_doc(
chainerx.nonzero,
"""nonzero(a)
Return the indices of the elements that are non-zero.
Args:
a (~chainerx.ndarray): Input array.
Returns:
tuple of :func:`~chainerx.ndarray`: Indices of elements that are non-zero.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :func:`numpy.nonzero`
""")
def _docs_linalg():
_docs.set_doc(
chainerx.dot,
"""dot(a, b)
Returns a dot product of two arrays.
For arrays with more than one axis, it computes the dot product along the last
axis of ``a`` and the second-to-last axis of ``b``. This is just a matrix
product if both arrays are 2-D. For 1-D arrays, it uses their unique axis
as an axis to take dot product over.
Args:
a (~chainerx.ndarray): The left argument.
b (~chainerx.ndarray): The right argument.
Returns:
:class:`~chainerx.ndarray`: Output array.
Note:
This function currently does not support N > 2 dimensional arrays.
Note:
During backpropagation, this function propagates the gradient of the
output array to input arrays ``a`` and ``b``.
.. seealso:: :func:`numpy.dot`
""")
_docs.set_doc(
chainerx.linalg.solve,
"""solve(a, b)
Solves a linear matrix equation, or system of linear scalar equations.
It computes the exact solution of ``x`` in ``ax = b``,
where ``a`` is a square and full rank matrix,
``b`` can be a vector, or a rectangular matrix.
When ``b`` is a matrix, its columns are treated as separate vectors
representing multiple right-hand sides.
Args:
a (~chainerx.ndarray): Coefficient matrix.
b (~chainerx.ndarray): "dependent variable" values.
Returns:
:class:`~chainerx.ndarray`:
Solution to the system ``ax = b``.
Shape is identical to ``b``.
Note:
The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
.. seealso:: :func:`numpy.linalg.solve`
""")
_docs.set_doc(
chainerx.linalg.inv,
"""inv(a)
Computes the inverse of a matrix.
This function computes matrix ``a_inv`` from square matrix
``a`` such that ``dot(a, a_inv) = dot(a_inv, a) = eye(a.shape[0])``.
Args:
a (~chainerx.ndarray): The matrix to be inverted.
Returns:
:class:`~chainerx.ndarray`: The inverse of a matrix.
Note:
The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
.. seealso:: :func:`numpy.linalg.inv`
""")
_docs.set_doc(
chainerx.linalg.svd,
"""svd(a, full_matrices=True, compute_uv=True)
Singular Value Decomposition.
Factorizes the matrix ``a`` into two unitary matrices ``U`` and ``Vt``, and
a 1-D array ``s`` of singular values such that
``a == U * S * Vt``, where ``S`` is a suitably shaped matrix of zeros with
main diagonal ``s`` and ``*`` represents a dot product.
Args:
a (~chainerx.ndarray): The input matrix with dimension ``(M, N)``.
full_matrices (bool): If True, it returns u and v with dimensions
``(M, M)`` and ``(N, N)``. Otherwise, the dimensions of u and v
are respectively ``(M, K)`` and ``(K, N)``, where
``K = min(M, N)``.
compute_uv (bool): If False, only singular values are computed.
Returns:
tuple of :class:`chainerx.ndarray`:
A tuple of ``(U, s, Vt)`` such that ``a = U * diag(s) * Vt``.
When ``compute_uv`` is False only singular values ``s`` are returned.
Note:
* The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
* The SVD is commonly written as `a = U * diag(s) * V^T`.
The ``Vt`` returned by this function is `V^T`.
* During backpropagation, this function requires ``U`` and ``Vt`` computed,
therefore differentiation does not work for ``compute_uv=False``.
* Backpropagation is not implemented for ``full_matrices=True``.
.. seealso:: :func:`numpy.linalg.svd`
""")
_docs.set_doc(
chainerx.linalg.pinv,
"""pinv(a, rcond=1e-15)
Compute the (Moore-Penrose) pseudo-inverse of a matrix.
Calculate the generalized inverse of a matrix using its singular-value
decomposition (SVD) and including all large singular values.
Args:
a (~chainerx.ndarray): The input matrix to be pseudo-inverted.
rcond (float): Cutoff for small singular values.
Returns:
:class:`~chainerx.ndarray`: The pseudo-inverse of ``a``.
Note:
The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
.. seealso:: :func:`numpy.linalg.pinv`
""")
_docs.set_doc(
chainerx.linalg.qr,
"""qr(a, mode='reduced')
Compute the qr factorization of a matrix.
Factor the matrix ``a`` as *qr*, where ``q`` is orthonormal and ``r`` is
upper-triangular.
Args:
a (~chainerx.ndarray): Matrix to be factored.
mode (str): The mode of decomposition.
'reduced' : returns q, r with dimensions (M, K), (K, N) (default)
'complete' : returns q, r with dimensions (M, M), (M, N)
'r' : returns r only with dimensions (K, N)
'raw' : returns h, tau with dimensions (N, M), (K,),
where ``(M, N)`` is the shape of the input matrix and ``K = min(M, N)``
Returns:
q (~chainerx.ndarray): A matrix with orthonormal columns.
r (~chainerx.ndarray): The upper-triangular matrix.
Note:
* The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
* Backpropagation is not implemented for non-square output matrix ``r``.
* Backpropagation is not implemented for 'r' or 'raw' modes.
.. seealso:: :func:`numpy.linalg.qr`
""")
_docs.set_doc(
chainerx.linalg.cholesky,
"""cholesky(a)
Computes the Cholesky decomposition of a matrix.
Returns the Cholesky decomposition, :math:`A = L L^T`,
for the square matrix ``a``.
Args:
a (~chainerx.ndarray): Symmetric positive-definite input matrix.
Returns:
:class:`~chainerx.ndarray`: Output array. Cholesky factor of ``a``.
Note:
The forward computation does not necessarily check if the input matrix is
symmetric (e.g. the native backend relying on LAPACK does not). However,
both the forward and the backward computations assume that it is and their
results are unspecified otherwise. The computed gradient is always a
symmetric matrix. More specifically, the gradient is computed as if the
function is restricted to a Riemannian submanifold of
:math:`R^{n \\times n}` consisting just of positive-definite symmetric
matrices and is faithful to the mathematical definition of the Cholesky
decomposition.
Note:
* GPU implementation of the Cholesky decomposition routine is based on
cuSOLVER library. Older versions (<10.1) of it might not raise an error
for some non positive-definite matrices.
* The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
.. seealso:: :func:`numpy.linalg.cholesky`
""")
_docs.set_doc(
chainerx.linalg.eigh,
"""eigh(a, UPLO='L')
Compute the eigenvalues and eigenvectors of a real symmetric matrix.
Args:
a (~chainerx.ndarray): Real symmetric matrix whose eigenvalues
and eigenvectors are to be computed.
UPLO (str): Specifies whether the calculation is done with the lower
triangular part of a ('L', default) or the upper triangular part ('U').
Returns:
tuple of :class:`~chainerx.ndarray`:
Returns a tuple ``(w, v)``. ``w`` contains eigenvalues and
``v`` contains eigenvectors. ``v[:, i]`` is an eigenvector
corresponding to an eigenvalue ``w[i]``.
Note:
Although ``UPLO`` can be specified to ignore either the strictly lower or
upper part of the input matrix, the backward computation assumes that the
input is symmetric and the computed gradient is always a symmetric matrix
with respect to ``UPLO``. More specifically, the gradient is computed as if
the function is restricted to a Riemannian submanifold of
:math:`R^{n \\times n}` consisting just of symmetric matrices and is
faithful to the mathematical definition of the eigenvalue decomposition of
symmetric matrices.
Note:
The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
.. seealso:: :func:`numpy.linalg.eigh`
""")
_docs.set_doc(
chainerx.linalg.eigvalsh,
"""eigvalsh(a, UPLO='L')
Compute the eigenvalues of a real symmetric matrix.
The main difference from ``eigh`` is that the eigenvectors are not computed.
Args:
a (~chainerx.ndarray): Real symmetric matrix whose eigenvalues
and eigenvectors are to be computed.
UPLO (str): Specifies whether the calculation is done with the lower
triangular part of a ('L', default) or the upper triangular part ('U')
(optional).
Returns:
:class:`~chainerx.ndarray`: Returns eigenvalues as a vector.
Note:
* The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
* Backpropagation requires eigenvectors and, therefore, is not implemented
for this function. ``linalg.eigh`` should be used instead.
.. seealso:: :func:`numpy.linalg.eigvalsh`
""")
def _docs_logic():
_docs.set_doc(
chainerx.all,
"""all(x)
Test whether all array elements along a given axis evaluate to True.
Args:
x (~chainerx.ndarray): Input array.
axis (None or int or tuple of ints):
Axis or axes along which AND reduction is performed.
The flattened array is used by default.
keepdims (bool):
If this is set to ``True``, the reduced axes are left in the result
as dimensions with size one.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.all`
""")
_docs.set_doc(
chainerx.any,
"""any(x)
Test whether any array element along a given axis evaluate to True.
Args:
x (~chainerx.ndarray): Input array.
axis (None or int or tuple of ints):
Axis or axes along which OR reduction is performed.
The flattened array is used by default.
keepdims (bool):
If this is set to ``True``, the reduced axes are left in the result
as dimensions with size one.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.any`
""")
_docs.set_doc(
chainerx.logical_not,
"""logical_not(x)
Returns an array of NOT x element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.logical_not`
""")
_docs.set_doc(
chainerx.logical_and,
"""logical_and(x1, x2)
Returns an array of x1 AND x2 element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.logical_and`
""")
_docs.set_doc(
chainerx.logical_or,
"""logical_or(x1, x2)
Returns an array of x1 OR x2 element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.logical_or`
""")
_docs.set_doc(
chainerx.logical_xor,
"""logical_xor(x1, x2)
Returns an array of x1 XOR x2 element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.logical_xor`
""")
_docs.set_doc(
chainerx.greater,
"""greater(x1, x2)
Returns an array of (x1 > x2) element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.greater`
""")
_docs.set_doc(
chainerx.greater_equal,
"""greater_equal(x1, x2)
Returns an array of (x1 >= x2) element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.greater_equal`
""")
_docs.set_doc(
chainerx.less,
"""less(x1, x2)
Returns an array of (x1 < x2) element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.less`
""")
_docs.set_doc(
chainerx.less_equal,
"""less_equal(x1, x2)
Returns an array of (x1 <= x2) element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.less_equal`
""")
_docs.set_doc(
chainerx.equal,
"""equal(x1, x2)
Returns an array of (x1 == x2) element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.equal`
""")
_docs.set_doc(
chainerx.not_equal,
"""not_equal(x1, x2)
Returns an array of (x1 != x2) element-wise.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Output array of type bool.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.not_equal`
""")
def _docs_loss():
_docs.set_doc(
chainerx.absolute_error,
"""Element-wise absolute error function.
Computes the element-wise absolute error :math:`L` between two inputs
:math:`x_1` and :math:`x_2` defined as follows.
.. math::
L = |x_1 - x_2|
Args:
x1 (~chainerx.ndarray): Input variable.
x2 (~chainerx.ndarray): Input variable.
Returns:
:class:`~chainerx.ndarray`: A variable holding an array representing
the absolute error of two inputs.
.. seealso:: :func:`chainer.functions.absolute_error`
""")
_docs.set_doc(
chainerx.squared_error,
"""Element-wise squared error function.
Computes the element-wise squared error :math:`L` between two inputs
:math:`x_1` and :math:`x_2` defined as follows.
.. math::
L = (x_1 - x_2)^2
Can be used to compute mean squared error by just calling ``mean()``
on the output array.
Args:
    x1 (~chainerx.ndarray): Input variable.
    x2 (~chainerx.ndarray): Input variable.
Returns:
:class:`~chainerx.ndarray`: A variable holding an array representing
the squared error of two inputs.
.. seealso:: :func:`chainer.functions.squared_error`
""")
_docs.set_doc(
chainerx.huber_loss,
"""Element-wise Huber loss.
The Huber loss is similar to the squared error but is less sensitive to
outliers in the data. It is defined as
.. math::
L_{\\delta}(a) = \\left \\{ \\begin{array}{cc}
\\frac{1}{2} a^2 & {\\rm if~|a| \\leq \\delta} \\\\
\\delta (|a| - \\frac{1}{2} \\delta) & {\\rm otherwise,}
\\end{array} \\right.
where :math:`a = x - t` is the difference between the input :math:`x`
and the target :math:`t`.
See: `Huber loss - Wikipedia <https://en.wikipedia.org/wiki/Huber_loss>`_.
Args:
x (~chainerx.ndarray): Input variable.
t (~chainerx.ndarray): Target variable for regression.
delta (float): Constant variable for Huber loss function as used in
definition.
Returns:
:class:`~chainerx.ndarray`:
A variable object holding an array representing the Huber loss
:math:`L_{\\delta}` of the two inputs.
.. seealso:: :func:`chainer.functions.huber_loss`
""")
_docs.set_doc(
chainerx.gaussian_kl_divergence,
"""Element-wise KL-divergence of Gaussian variables from the standard one.
Given two variable ``mean`` representing :math:`\\mu` and ``ln_var``
representing :math:`\\log(\\sigma^2)`, this function calculates
the element-wise KL-divergence between the given multi-dimensional
Gaussian :math:`N(\\mu, S)` and the standard Gaussian :math:`N(0, I)`
.. math::
D_{\\mathbf{KL}}(N(\\mu, S) \\| N(0, I)),
where :math:`S` is a diagonal matrix such that :math:`S_{ii} = \\sigma_i^2`
and :math:`I` is an identity matrix.
Args:
mean (~chainerx.ndarray):
    A variable representing the mean of the given
    Gaussian distribution, :math:`\\mu`.
ln_var (~chainerx.ndarray):
    A variable representing the logarithm of the
    variance of the given Gaussian distribution, :math:`\\log(\\sigma^2)`.
Returns:
    :class:`~chainerx.ndarray`:
    A variable representing the KL-divergence between the
    given Gaussian distribution and the standard Gaussian.
.. seealso:: :func:`chainer.functions.gaussian_kl_divergence`
""")
_docs.set_doc(
chainerx.sigmoid_cross_entropy,
"""sigmoid_cross_entropy(x1, x2)
Element-wise cross entropy loss for pre-sigmoid activations.
Args:
x1 (~chainerx.ndarray): An array whose (i, j)-th element indicates the
unnormalized log probability of the j-th unit at the i-th example.
x2 (~chainerx.ndarray): An array of signed integer ground truth labels
    whose (i, j)-th element is 0 or 1. If ``x2[i, j] == -1``, the
    corresponding ``x1[i, j]`` is ignored. The loss is zero if all ground
    truth labels are -1.
Returns:
:class:`~chainerx.ndarray`: An array of the cross entropy.
Note:
During backpropagation, this function propagates the gradient of the output
array to the input array ``x1`` only.
""")
_docs.set_doc(
chainerx.softmax_cross_entropy,
"""softmax_cross_entropy(x1, x2)
Element-wise cross entropy loss for pre-softmax activations.
Args:
x1 (~chainerx.ndarray): An array whose element indicates unnormalized log
probability: the first axis of the array represents the number of
samples, and the second axis represents the number of classes.
x2 (~chainerx.ndarray): A signed integer vector of ground truth labels. If
``x2[i] == -1``, corresponding ``x1[i]`` is ignored.
Returns:
:class:`~chainerx.ndarray`: An array of the cross entropy.
Note:
During backpropagation, this function propagates the gradient of the output
array to the input array ``x1`` only.
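For reference, the computation (including the ``-1`` ignore label) can be
sketched with NumPy; the name ``softmax_cross_entropy_ref`` below is
illustrative and not part of the ChainerX API:

```python
import numpy as np

def softmax_cross_entropy_ref(x1, x2):
    # Numerically stable log-softmax over the class axis.
    m = x1.max(axis=1, keepdims=True)
    log_softmax = x1 - m - np.log(np.exp(x1 - m).sum(axis=1, keepdims=True))
    loss = np.zeros(len(x2))
    valid = x2 != -1  # labels of -1 are ignored (loss stays 0)
    loss[valid] = -log_softmax[valid, x2[valid]]
    return loss

x1 = np.log(np.array([[0.25, 0.75], [0.5, 0.5]]))
x2 = np.array([1, -1])
print(softmax_cross_entropy_ref(x1, x2))  # first: -log(0.75); second ignored, 0
```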
""")
def _docs_manipulation():
_docs.set_doc(
chainerx.reshape,
"""reshape(a, newshape)
Returns a reshaped array.
Args:
a (~chainerx.ndarray): Array to be reshaped.
newshape (int or tuple of ints): The new shape of the array to return.
If it is an integer, then it is treated as a tuple of length one.
It should be compatible with ``a.size``. One of the elements can be
-1, which is automatically replaced with the appropriate value to
make the shape compatible with ``a.size``.
Returns:
:class:`~chainerx.ndarray`: A reshaped view of ``a`` if possible,
otherwise a copy.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.reshape`
""")
_docs.set_doc(
chainerx.ravel,
"""ravel(a)
Returns a flattened array.
Args:
a (~chainerx.ndarray): Array to be flattened.
Returns:
:class:`~chainerx.ndarray`: A flattened view of ``a`` if possible,
otherwise a copy.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.ravel`
""")
_docs.set_doc(
chainerx.transpose,
"""transpose(a, axes=None)
Permutes the dimensions of an array.
Args:
a (~chainerx.ndarray): Array to permute the dimensions.
axes (tuple of ints): Permutation of the dimensions. This function reverses
the shape by default.
Returns:
~chainerx.ndarray: A view of ``a`` with the dimensions permuted.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.transpose`
""")
_docs.set_doc(
chainerx.broadcast_to,
"""broadcast_to(array, shape)
Broadcasts an array to a given shape.
Args:
array (~chainerx.ndarray): Array to broadcast.
shape (tuple of ints): The shape of the desired array.
Returns:
~chainerx.ndarray: Broadcasted view.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``array``.
.. seealso:: :func:`numpy.broadcast_to`
""")
_docs.set_doc(
chainerx.squeeze,
"""squeeze(a, axis=None)
Removes size-one axes from the shape of an array.
Args:
a (~chainerx.ndarray): Array to be reshaped.
axis (int or tuple of ints): Axes to be removed. This function removes all
size-one axes by default. If one of the specified axes is not of size
one, an exception is raised.
Returns:
~chainerx.ndarray: An array without (specified) size-one axes.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.squeeze`
""")
_docs.set_doc(
chainerx.concatenate,
"""concatenate(arrays, axis=0)
Joins arrays along an axis.
Args:
arrays (sequence of :class:`~chainerx.ndarray`\\ s): Arrays to be joined.
All of these should have the same shape, except along the specified
axis.
axis (int): The axis to join arrays along.
Returns:
~chainerx.ndarray: Joined array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays in ``arrays``.
.. seealso:: :func:`numpy.concatenate`
""")
_docs.set_doc(
chainerx.stack,
"""stack(arrays, axis=0)
Stacks arrays along a new axis.
Args:
arrays (sequence of :class:`~chainerx.ndarray`\\ s): Arrays to be stacked.
axis (int): Axis along which the arrays are stacked.
Returns:
~chainerx.ndarray: Stacked array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays in ``arrays``.
.. seealso:: :func:`numpy.stack`
""")
_docs.set_doc(
chainerx.hstack,
"""hstack(arrays)
Stack arrays in sequence horizontally (column wise).
Args:
arrays (sequence of :class:`~chainerx.ndarray`\\ s): Arrays to be stacked.
Returns:
~chainerx.ndarray: Stacked array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays in ``arrays``.
.. seealso:: :func:`numpy.hstack`
""")
_docs.set_doc(
chainerx.vstack,
"""vstack(arrays)
Stack arrays in sequence vertically (row wise).
Args:
arrays (sequence of :class:`~chainerx.ndarray`\\ s): Arrays to be stacked.
Returns:
~chainerx.ndarray: Stacked array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays in ``arrays``.
.. seealso:: :func:`numpy.vstack`
""")
_docs.set_doc(
chainerx.dstack,
"""dstack(arrays)
Stack arrays in sequence depth wise (along third axis).
Args:
arrays (sequence of :class:`~chainerx.ndarray`\\ s): Arrays to be stacked.
Returns:
~chainerx.ndarray: Stacked array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays in ``arrays``.
.. seealso:: :func:`numpy.dstack`
""")
_docs.set_doc(
chainerx.atleast_2d,
"""atleast_2d(a)
View inputs as arrays with at least two dimensions.
Args:
a (~chainerx.ndarray): Array.
Returns:
~chainerx.ndarray: An array with a.ndim >= 2.
Copies are avoided where possible, and views with
two or more dimensions are returned.
Note:
* Arrays that already have two or more dimensions are preserved.
* During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.atleast_2d`
""")
_docs.set_doc(
chainerx.atleast_3d,
"""atleast_3d(a)
View inputs as arrays with at least three dimensions.
Args:
a (~chainerx.ndarray): Array.
Returns:
~chainerx.ndarray: An array with a.ndim >= 3.
Copies are avoided where possible, and views with
three or more dimensions are returned.
Note:
* Arrays that already have three or more dimensions are preserved.
* During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.atleast_3d`
""")
_docs.set_doc(
chainerx.split,
"""split(ary, indices_or_sections, axis=0)
Splits an array into multiple sub arrays along a given axis.
Args:
ary (~chainerx.ndarray): Array to split.
indices_or_sections (int or sequence of ints): A value indicating how to
divide the axis. If it is an integer, then it is treated as the number of
sections, and the axis is evenly divided. Otherwise, the integers
indicate indices to split at. Note that a sequence on the device
memory is not allowed.
axis (int): Axis along which the array is split.
Returns:
list of :class:`~chainerx.ndarray`\\ s: A list of sub arrays. Each array \
is a partial view of the input array.
Note:
During backpropagation, this function propagates the gradients of the
output arrays to the input array ``ary``.
.. seealso:: :func:`numpy.split`
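The two forms of ``indices_or_sections`` behave as in NumPy; a quick NumPy
sketch (NumPy used here for illustration):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
halves = np.split(a, 2, axis=0)       # an int gives equal sections
pieces = np.split(a, [1, 2], axis=1)  # a sequence gives split indices
print([p.shape for p in halves])  # [(1, 3), (1, 3)]
print([p.shape for p in pieces])  # [(2, 1), (2, 1), (2, 1)]
```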
""")
_docs.set_doc(
chainerx.dsplit,
"""dsplit(ary, indices_or_sections)
Split array into multiple sub-arrays along the 3rd axis (depth).
Args:
ary (~chainerx.ndarray): Array to split.
indices_or_sections (int or sequence of ints): A value indicating how to
divide the axis. If it is an integer, then it is treated as the number of
sections, and the axis is evenly divided. Otherwise, the integers
indicate indices to split at. Note that a sequence on the device
memory is not allowed.
Returns:
list of :class:`~chainerx.ndarray`\\ s: A list of sub arrays. Each array \
is a partial view of the input array.
Note:
During backpropagation, this function propagates the gradients of the
output arrays to the input array ``ary``.
.. seealso:: :func:`numpy.dsplit`
""")
_docs.set_doc(
chainerx.vsplit,
"""vsplit(ary, indices_or_sections)
Splits an array into multiple sub-arrays vertically (row-wise).
Args:
ary (~chainerx.ndarray): Array to split.
indices_or_sections (int or sequence of ints): A value indicating how to
divide the axis. If it is an integer, then it is treated as the number of
sections, and the axis is evenly divided. Otherwise, the integers
indicate indices to split at. Note that a sequence on the device
memory is not allowed.
Returns:
list of :class:`~chainerx.ndarray`\\ s: A list of sub arrays. Each array \
is a partial view of the input array.
Note:
During backpropagation, this function propagates the gradients of the
output arrays to the input array ``ary``.
.. seealso:: :func:`numpy.vsplit`
""")
_docs.set_doc(
chainerx.hsplit,
"""hsplit(ary, indices_or_sections)
Split an array into multiple sub-arrays horizontally (column-wise).
Args:
ary (~chainerx.ndarray): Array to split.
indices_or_sections (int or sequence of ints): A value indicating how to
divide the axis. If it is an integer, then it is treated as the number of
sections, and the axis is evenly divided. Otherwise, the integers
indicate indices to split at. Note that a sequence on the device
memory is not allowed.
Returns:
list of :class:`~chainerx.ndarray`\\ s: A list of sub arrays. Each array \
is a partial view of the input array.
Note:
During backpropagation, this function propagates the gradients of the
output arrays to the input array ``ary``.
.. seealso:: :func:`numpy.hsplit`
""")
_docs.set_doc(
chainerx.swapaxes,
"""swapaxes(a, axis1, axis2)
Interchange two axes of an array.
Args:
a (~chainerx.ndarray): Array whose axes are to be swapped.
axis1 (int): First axis.
axis2 (int): Second axis.
Returns:
~chainerx.ndarray: Array with the two axes swapped.
Note:
* Output array is a view of the input array.
* During backpropagation, this function propagates the gradients of the
output arrays to the input array ``a``.
.. seealso:: :func:`numpy.swapaxes`
""")
_docs.set_doc(
chainerx.repeat,
"""repeat(a, repeats, axis=None)
Constructs an array by repeating a given array.
Args:
a (~chainerx.ndarray): Array to repeat.
repeats (int or tuple of ints): The number of times which each
element of a is repeated.
axis (int): The axis along which to repeat values.
Returns:
~chainerx.ndarray: The repeated output array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.repeat`
""")
_docs.set_doc(
chainerx.expand_dims,
"""expand_dims(a, axis)
Expand the shape of an array.
Args:
a (~chainerx.ndarray): Input Array.
axis (int): Position in the expanded axes where the new axis is placed.
Returns:
~chainerx.ndarray: Output array.
Note:
* Output array may or may not be a view of the input array.
* During backpropagation, this function propagates the gradients of the
output arrays to the input array ``a``.
.. seealso:: :func:`numpy.expand_dims`
""")
_docs.set_doc(
chainerx.flip,
"""flip(m, axis)
Reverse the order of elements in an array along the given axis.
Args:
m (~chainerx.ndarray): Input Array.
axis (int or tuple of ints): Axis or axes along which to flip over.
The default, axis=None, will flip over all of the axes of the input array.
If axis is negative it counts from the last to the first axis.
If axis is a tuple of ints, flipping is performed on all of the
axes specified in the tuple.
Returns:
~chainerx.ndarray: A view of m with the entries of axis reversed.
Since a view is returned, this operation is done in constant time.
Note:
* Output array is a view of the input array.
* During backpropagation, this function propagates the gradients of the
output arrays to the input array ``m``.
.. seealso:: :func:`numpy.flip`
""")
_docs.set_doc(
chainerx.fliplr,
"""fliplr(m)
Flip array in the left/right direction.
Args:
m (~chainerx.ndarray): Input Array.
Returns:
~chainerx.ndarray: A view of m with the columns reversed.
Since a view is returned, this operation is done in constant time.
Note:
* Output array is a view of the input array.
* During backpropagation, this function propagates the gradients of the
output arrays to the input array ``m``.
.. seealso:: :func:`numpy.fliplr`
""")
_docs.set_doc(
chainerx.flipud,
"""flipud(m)
Flip array in the up/down direction.
Args:
m (~chainerx.ndarray): Input Array.
Returns:
~chainerx.ndarray: A view of m with the rows reversed.
Since a view is returned, this operation is done in constant time.
Note:
* Output array is a view of the input array.
* During backpropagation, this function propagates the gradients of the
output arrays to the input array ``m``.
.. seealso:: :func:`numpy.flipud`
""")
_docs.set_doc(
chainerx.moveaxis,
"""moveaxis(a, source, destination)
Move axes of an array to new positions.
Other axes remain in their original order.
Args:
a (~chainerx.ndarray): Input Array.
source (int or tuple of ints): Original positions of the axes to move.
These must be unique.
destination (int or tuple of ints): Destination positions for each of
the original axes. These must also be unique.
Returns:
~chainerx.ndarray: Array with moved axes. This array is a view of the
input array.
Note:
* During backpropagation, this function propagates the gradients of the
output arrays to the input array ``a``.
.. seealso:: :func:`numpy.moveaxis`
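The axis bookkeeping mirrors :func:`numpy.moveaxis`; a quick NumPy sketch
(NumPy used here for illustration):

```python
import numpy as np

a = np.zeros((3, 4, 5))
print(np.moveaxis(a, 0, -1).shape)           # (4, 5, 3)
# Axis 0 moves to position 2, axis 1 to position 0;
# the remaining axis fills the leftover slot.
print(np.moveaxis(a, (0, 1), (2, 0)).shape)  # (4, 5, 3)
```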
""")
def _docs_math():
_docs.set_doc(
chainerx.negative,
"""negative(x)
Numerical negative, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = -x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.negative`
""")
_docs.set_doc(
chainerx.add,
"""add(x1, x2)
Add arguments, element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = x_1 + x_2`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.add`
""")
_docs.set_doc(
chainerx.subtract,
"""subtract(x1, x2)
Subtract arguments, element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = x_1 - x_2`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.subtract`
""")
_docs.set_doc(
chainerx.multiply,
"""multiply(x1, x2)
Multiply arguments, element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = x_1 \\times x_2`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.multiply`
""")
_docs.set_doc(
chainerx.divide,
"""divide(x1, x2)
Divide arguments, element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\frac{x_1}{x_2}`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.divide`
""")
_docs.set_doc(
chainerx.sum,
"""sum(a, axis=None, keepdims=False)
Sum of array elements over a given axis.
Args:
a (~chainerx.ndarray): Input array.
axis (None or int or tuple of ints):
Axis or axes along which a sum is performed.
The flattened array is used by default.
keepdims (bool):
If this is set to ``True``, the reduced axes are left in the result
as dimensions with size one.
Returns:
:class:`~chainerx.ndarray`: The sum of input elements over a given axis.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.sum`
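The ``axis`` and ``keepdims`` semantics mirror NumPy; a quick NumPy sketch
(NumPy used here for illustration):

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
print(np.sum(a))                               # 21, over the flattened array
print(np.sum(a, axis=1))                       # [ 6 15]
print(np.sum(a, axis=1, keepdims=True).shape)  # (2, 1), reduced axis kept
```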
""")
_docs.set_doc(
chainerx.maximum,
"""maximum(x1, x2)
Element-wise maximum of the inputs.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`:
Returned array: :math:`y = \\max(x_1, x_2)`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.maximum`
""")
_docs.set_doc(
chainerx.minimum,
"""minimum(x1, x2)
Element-wise minimum of the inputs.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`:
Returned array: :math:`y = \\min(x_1, x_2)`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.minimum`
""")
_docs.set_doc(
chainerx.remainder,
"""remainder(x1, x2)
Return element-wise remainder of division.
Args:
x1 (~chainerx.ndarray or scalar): Input array.
x2 (~chainerx.ndarray or scalar): Input array.
Returns:
:class:`~chainerx.ndarray`:
Returned array: The element-wise remainder of
the quotient ``floor_divide(x1, x2)``.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x1`` and ``x2``.
.. seealso:: :data:`numpy.remainder`
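As in NumPy, the result takes the sign of the divisor, and together with
``floor_divide`` it reconstructs the dividend; a NumPy sketch (NumPy used
here for illustration):

```python
import numpy as np

x1 = np.array([5, -5, 5, -5])
x2 = np.array([3, 3, -3, -3])
r = np.remainder(x1, x2)
print(r)  # [ 2  1 -1 -2], sign follows the divisor x2
# floor_divide(x1, x2) * x2 + remainder(x1, x2) recovers x1 element-wise.
assert np.array_equal(np.floor_divide(x1, x2) * x2 + r, x1)
```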
""")
_docs.set_doc(
chainerx.exp,
"""exp(x)
Numerical exponential, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\exp x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.exp`
""")
_docs.set_doc(
chainerx.log,
"""log(x)
Natural logarithm, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\ln x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.log`
""")
_docs.set_doc(
chainerx.log10,
"""log10(x)
Base 10 logarithm, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\log_{10} x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.log10`
""")
_docs.set_doc(
chainerx.log2,
"""log2(x)
Base 2 logarithm, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\log_{2} x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.log2`
""")
_docs.set_doc(
chainerx.log1p,
"""log1p(x)
Natural logarithm of one plus the input, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\log(1 + x)`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.log1p`
""")
_docs.set_doc(
chainerx.logsumexp,
"""logsumexp(x, axis=None, keepdims=False)
The log of the sum of exponentials of input array.
Args:
x (~chainerx.ndarray): Input array.
axis (None or int or tuple of ints):
Axis or axes along which a sum is performed.
The flattened array is used by default.
keepdims (bool):
If this is set to ``True``, the reduced axes are left in the result
as dimensions with size one.
Returns:
:class:`~chainerx.ndarray`: The log of the sum of exponentials of
input elements over a given axis.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
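A numerically stable reference can be sketched in NumPy by subtracting the
maximum before exponentiating (``logsumexp_ref`` is an illustrative name,
not part of the ChainerX API):

```python
import numpy as np

def logsumexp_ref(x, axis=None, keepdims=False):
    # Subtract the maximum first so that exp() cannot overflow.
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

# The naive log(sum(exp(x))) would overflow here; the stable form does not.
print(logsumexp_ref(np.array([1000.0, 1000.0])))  # about 1000.693 (1000 + log 2)
```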
""")
_docs.set_doc(
chainerx.log_softmax,
"""log_softmax(x, axis=None)
The log of the softmax of input array.
Args:
x (~chainerx.ndarray): Input array.
axis (None or int or tuple of ints):
Axis or axes along which a sum is performed.
The flattened array is used by default.
Returns:
:class:`~chainerx.ndarray`: The log of the softmax of input elements
over a given axis.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
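The log-softmax is equivalent to subtracting ``logsumexp`` along the same
axis; a NumPy sketch (``log_softmax_ref`` is an illustrative name, not part
of the ChainerX API):

```python
import numpy as np

def log_softmax_ref(x, axis=-1):
    # x - logsumexp(x), computed stably via the max trick.
    m = np.max(x, axis=axis, keepdims=True)
    return x - m - np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))

y = log_softmax_ref(np.array([[1.0, 2.0, 3.0]]))
print(np.isclose(np.exp(y).sum(), 1.0))  # True: exp(log_softmax) sums to 1
```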
""")
_docs.set_doc(
chainerx.square,
"""square(x)
Returns the element-wise square of the input.
Args:
x (~chainerx.ndarray or scalar): Input data.
Returns:
~chainerx.ndarray: Returned array: :math:`y = x * x`.
A scalar is returned if ``x`` is a scalar.
Note:
During backpropagation, this function propagates the gradient
of the output array to the input array ``x``.
.. seealso:: :data:`numpy.square`
""")
_docs.set_doc(
chainerx.sqrt,
"""sqrt(x)
Non-negative square root, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\sqrt x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.sqrt`
""")
_docs.set_doc(
chainerx.sinh,
"""sinh(x)
Hyperbolic sine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\sinh x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.sinh`
""")
_docs.set_doc(
chainerx.cosh,
"""cosh(x)
Hyperbolic cosine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\cosh x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.cosh`
""")
_docs.set_doc(
chainerx.tanh,
"""tanh(x)
Element-wise hyperbolic tangent function.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\tanh x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.tanh`
""")
_docs.set_doc(
chainerx.sigmoid,
"""sigmoid(x)
Element-wise sigmoid logistic function.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array:
:math:`f(x) = (1 + \\exp(-x))^{-1}`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :func:`chainer.functions.sigmoid`
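A numerically stable NumPy sketch of this function (illustrative only; the
name ``sigmoid_ref`` is not part of the ChainerX API):

```python
import numpy as np

def sigmoid_ref(x):
    # Evaluate via exp(-|x|) so the exponential never overflows.
    e = np.exp(-np.abs(x))
    return np.where(x >= 0, 1.0 / (1.0 + e), e / (1.0 + e))

print(sigmoid_ref(np.array([-1000.0, 0.0, 1000.0])))  # approximately [0, 0.5, 1]
```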
""")
_docs.set_doc(
chainerx.sin,
"""sin(x)
Sine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\sin x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.sin`
""")
_docs.set_doc(
chainerx.cos,
"""cos(x)
Cosine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\cos x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.cos`
""")
_docs.set_doc(
chainerx.ceil,
"""ceil(x)
Return the ceiling of the input, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: The ceiling of each element in array.
.. seealso:: :data:`numpy.ceil`
""")
_docs.set_doc(
chainerx.tan,
"""tan(x)
Tangent, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\tan x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.tan`
""")
_docs.set_doc(
chainerx.relu,
"""Rectified Linear Unit function.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\max (0, x)`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
""")
_docs.set_doc(
chainerx.tree_lstm,
"""tree_lstm(*inputs)
TreeLSTM unit as an activation function.
This function implements TreeLSTM units both for
N-ary TreeLSTM and Child-Sum TreeLSTM.
Let the child cell states be
:math:`c_{\\text{1}}, c_{\\text{2}}, \\dots, c_{\\text{N}}`,
and the incoming signal be :math:`x`.
First, the incoming signal :math:`x` is split into (3 + N) arrays
:math:`a, i, o, f_{\\text{1}}, f_{\\text{2}}, ..., f_{\\text{N}}`
of the same shapes along the second axis.
This means that the second axis of :math:`x` must be (3 + N) times
the length of each :math:`c_{n}`.
The split input signals correspond to
- :math:`a` : sources of cell input
- :math:`i` : sources of input gate
- :math:`o` : sources of output gate
- :math:`f_{n}` : sources of forget gate for n-th ary
Second, it computes outputs as
.. math::
c &= \\tanh(a) \\text{sigmoid}(i) \\\\
& + c_{\\text{1}} \\text{sigmoid}(f_{\\text{1}}) \\\\
& + c_{\\text{2}} \\text{sigmoid}(f_{\\text{2}}) \\\\
& + \\cdots \\\\
& + c_{\\text{N}} \\text{sigmoid}(f_{\\text{N}}), \\\\
h &= \\tanh(c) \\text{sigmoid}(o).
These are returned as a tuple of (N + 1) variables.
Args:
inputs (list of :class:`~chainerx.array`): Variable arguments which
include all cell vectors from child-nodes, and an input vector.
Each of the cell vectors and the input vector is
:class:`~chainerx.array`.
The input vector must have the second dimension whose size
is (3 + N) times that of each cell,
where N denotes the total number of cells.
Returns:
tuple: Two :class:`~chainerx.array` objects ``c`` and ``h``. ``c`` is
the updated cell state. ``h`` indicates the outgoing signal.
See the papers for details: `Improved Semantic Representations From
Tree-Structured Long Short-Term Memory Networks
<https://www.aclweb.org/anthology/P15-1150>`_ and
`A Fast Unified Model for Parsing and Sentence Understanding
<https://arxiv.org/pdf/1603.06021.pdf>`_.
Tai et al.'s N-ary TreeLSTM is slightly extended in
Bowman et al., and this function is based on
the variant by Bowman et al.
Specifically, eq. 10 in Tai et al. only has one :math:`W` matrix
to be applied to :math:`x`, consistently for all children.
On the other hand, Bowman et al.'s model has multiple matrices,
each of which affects the forget gate for each child's cell individually.
.. admonition:: Example
Assuming ``y`` is the current input signal, ``c`` is the previous cell
state, and ``h`` is the previous output signal from an
:meth:`~chainerx.tree_lstm` function.
Each of ``y``, ``c`` and ``h`` has ``n_units`` channels.
Using 2-ary (binary) TreeLSTM,
most typical preparation of ``x`` is
>>> c1 = chainerx.ones((4, 10), dtype=chainerx.float32)
>>> c2 = chainerx.ones((4, 10), dtype=chainerx.float32)
>>> x = chainerx.ones((4, 50), dtype=chainerx.float32)
>>> c, h = chainerx.tree_lstm(c1, c2, x)
""")
_docs.set_doc(
chainerx.slstm,
"""slstm(c_prev1, c_prev2, x1, x2)
S-LSTM units as an activation function.
This function implements the S-LSTM unit, an extension of the LSTM unit
applied to tree structures.
The function is applied to binary trees. Each node has two child nodes.
It gets four arguments, previous cell states ``c_prev1`` and ``c_prev2``,
and input arrays ``x1`` and ``x2``.
First, both input arrays ``x1`` and ``x2`` are split into eight arrays
:math:`a_1, i_1, f_1, o_1`, and :math:`a_2, i_2, f_2, o_2`. They have the
same shape along the second axis.
It means that ``x1`` and ``x2`` 's second axis must have 4 times
the length of ``c_prev1`` and ``c_prev2``.
The split input arrays correspond to
- :math:`a_i` : sources of cell input
- :math:`i_i` : sources of input gate
- :math:`f_i` : sources of forget gate
- :math:`o_i` : sources of output gate
It computes the updated cell state ``c`` and the outgoing signal
``h`` as
.. math::
c &= \\tanh(a_1 + a_2) \\sigma(i_1 + i_2)
+ c_{\\text{prev}1} \\sigma(f_1)
+ c_{\\text{prev}2} \\sigma(f_2), \\\\
h &= \\tanh(c) \\sigma(o_1 + o_2),
where :math:`\\sigma` is the elementwise sigmoid function.
The function returns ``c`` and ``h`` as a tuple.
Args:
c_prev1 (:class:`~chainerx.array`):
Variable that holds the previous cell state of the first child
node. The cell state should be a zero array or the output of
the previous call of LSTM.
c_prev2 (:class:`~chainerx.array`):
Variable that holds the previous cell state of the second child
node.
x1 (:class:`~chainerx.array`):
Variable that holds the sources of cell input, input gate, forget
gate and output gate from the first child node. It must have the
second dimension whose size is four times that of the cell
state.
x2 (:class:`~chainerx.array`):
Variable that holds the input sources from the second child node.
Returns:
tuple: Two :class:`~chainerx.array` objects ``c`` and ``h``. ``c`` is
the cell state. ``h`` indicates the outgoing signal.
See details in the paper: `Long Short-Term Memory Over Tree Structures
<https://arxiv.org/abs/1503.04881>`_.
.. admonition:: Example
Assuming ``c1`` and ``c2`` are the previous cell states of the children,
and ``h1`` and ``h2`` are the previous outgoing signals from the children.
Each of ``c1``, ``c2``, ``h1`` and ``h2`` has ``n_units`` channels.
The most typical preparation of ``x1`` and ``x2`` is:
>>> n_units = 100
>>> c1 = chainerx.ones((1, n_units), chainerx.float32)
>>> c2 = chainerx.ones((1, n_units), chainerx.float32)
>>> x1 = chainerx.ones((1, 4 * n_units), chainerx.float32)
>>> x2 = chainerx.ones((1, 4 * n_units), chainerx.float32)
>>> c, h = chainerx.slstm(c1, c2, x1, x2)
""")
_docs.set_doc(
chainerx.arcsin,
"""arcsin(x)
Inverse sine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\arcsin x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.arcsin`
""")
_docs.set_doc(
chainerx.arccos,
"""arccos(x)
Trigonometric inverse cosine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\arccos x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.arccos`
""")
_docs.set_doc(
chainerx.arctan,
"""arctan(x)
Trigonometric inverse tangent, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\arctan x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.arctan`
""")
_docs.set_doc(
chainerx.arctan2,
"""arctan2(x1, x2)
Element-wise arc tangent of :math:`\\frac{x_1}{x_2}` choosing the quadrant
correctly.
Args:
x1 (~chainerx.ndarray): Input array.
x2 (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returns an array where each element
represents :math:`\\theta` in the range :math:`[-\\pi, \\pi]`, such
that :math:`x_1 = r \\sin(\\theta)` and :math:`x_2 = r \\cos(\\theta)`
for some :math:`r > 0`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x1`` and/or ``x2``.
.. seealso:: :data:`numpy.arctan2`
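A quick NumPy illustration of the quadrant handling that plain
``arctan(x1 / x2)`` would lose (NumPy used here for illustration):

```python
import numpy as np

print(np.arctan2(1.0, 1.0))    # pi/4, first quadrant
print(np.arctan2(-1.0, -1.0))  # -3*pi/4, third quadrant
print(np.arctan2(1.0, 0.0))    # pi/2, even though x1/x2 would divide by zero
```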
""")
_docs.set_doc(
chainerx.arcsinh,
"""arcsinh(x)
Inverse hyperbolic sine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\operatorname{arcsinh} x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.arcsinh`
""")
_docs.set_doc(
chainerx.arccosh,
"""arccosh(x)
Inverse hyperbolic cosine, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = \\operatorname{arccosh} x`.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. seealso:: :data:`numpy.arccosh`
""")
_docs.set_doc(
chainerx.fabs,
"""fabs(x)
Compute the absolute values element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: The absolute values of x, the returned values
are always floats.
.. seealso:: :data:`numpy.fabs`
""")
_docs.set_doc(
chainerx.sign,
"""sign(x)
Returns an element-wise indication of the sign of a number.
The sign function returns :math:`-1 if x < 0, 0 if x==0, 1 if x > 0`.
``nan`` is returned for ``nan`` inputs.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: The sign of x.
.. seealso:: :data:`numpy.sign`
""")
_docs.set_doc(
chainerx.floor,
"""floor(x)
Return the floor of the input, element-wise.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: The floor of each element in array.
.. seealso:: :data:`numpy.floor`
""")
_docs.set_doc(
chainerx.isnan,
"""isnan(x)
Test element-wise for NaN and return result as a boolean array.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: True where ``x`` is NaN, false otherwise
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.isnan`
""")
_docs.set_doc(
chainerx.isfinite,
"""isfinite(x)
Test element-wise for finiteness (not infinity and not Not a Number).
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: True where x is not positive infinity,
negative infinity, or NaN; false otherwise.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.isfinite`
""")
_docs.set_doc(
chainerx.isinf,
"""isinf(x)
Test element-wise for positive or negative infinity.
Args:
x (~chainerx.ndarray): Input array.
Returns:
:class:`~chainerx.ndarray`: True where ``x`` is positive or negative
infinity, false otherwise.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.isinf`
""")
_docs.set_doc(
chainerx.bitwise_and,
"""bitwise_and(x1, x2)
Compute the bit-wise AND of two arrays element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array of integers.
x2 (~chainerx.ndarray or scalar): Input array of integers.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = x_1 \\& x_2`
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.bitwise_and`
""")
_docs.set_doc(
chainerx.bitwise_or,
"""bitwise_or(x1, x2)
Compute the bit-wise OR of two arrays element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array of integers.
x2 (~chainerx.ndarray or scalar): Input array of integers.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = x_1 | x_2`
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.bitwise_or`
""")
_docs.set_doc(
chainerx.bitwise_xor,
"""bitwise_xor(x1, x2)
Compute the bit-wise XOR of two arrays element-wise.
Args:
x1 (~chainerx.ndarray or scalar): Input array of integers.
x2 (~chainerx.ndarray or scalar): Input array of integers.
Returns:
:class:`~chainerx.ndarray`: Returned array: :math:`y = x_1 \\oplus x_2`
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.bitwise_xor`
""")
_docs.set_doc(
chainerx.left_shift,
"""left_shift(x1, x2)
Shift the bits of an integer to the left.
Args:
x1 (~chainerx.ndarray or scalar): Input array of integers.
x2 (~chainerx.ndarray or scalar): Input array of integers.
Returns:
:class:`~chainerx.ndarray`: Return ``x1`` with bits shifted ``x2`` times to the left.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.left_shift`
""") # NOQA
_docs.set_doc(
chainerx.right_shift,
"""right_shift(x1, x2)
Shift the bits of an integer to the right.
Args:
x1 (~chainerx.ndarray or scalar): Input array of integers.
x2 (~chainerx.ndarray or scalar): Input array of integers.
Returns:
:class:`~chainerx.ndarray`: Return ``x1`` with bits shifted ``x2`` times to the right.
Note:
During backpropagation, this function does not propagate gradients.
.. seealso:: :data:`numpy.right_shift`
""") # NOQA
def _docs_sorting():
_docs.set_doc(
chainerx.argmax,
"""argmax(a, axis=None)
Returns the indices of the maximum along an axis.
Args:
a (~chainerx.ndarray): Array to take the indices of the maximum of.
axis (None or int): Along which axis to compute the maximum. The flattened
array is used by default.
Returns:
:class:`~chainerx.ndarray`: The indices of the maximum of ``a``, along the
axis if specified.
.. seealso:: :func:`numpy.argmax`
""")
_docs.set_doc(
chainerx.argmin,
"""argmin(a, axis=None)
Returns the indices of the minimum along an axis.
Args:
a (~chainerx.ndarray): Array to take the indices of the minimum of.
axis (None or int): Along which axis to compute the minimum. The flattened
array is used by default.
Returns:
:class:`~chainerx.ndarray`: The indices of the minimum of ``a``, along the
axis if specified.
.. seealso:: :func:`numpy.argmin`
""")
def _docs_statistics():
_docs.set_doc(
chainerx.amax,
"""amax(a, axis=None, keepdims=False)
Returns the maximum of an array or the maximum along an axis.
Note:
When at least one element is NaN, the corresponding max value will be NaN.
Args:
a (~chainerx.ndarray): Array to take the maximum.
axis (None or int or tuple of ints): Along which axis to take the maximum.
The flattened array is used by default.
If this is a tuple of ints, the maximum is selected over multiple
axes, instead of a single axis or all the axes.
keepdims (bool): If ``True``, the reduced axes are retained in the result as dimensions of size one.
Returns:
:class:`~chainerx.ndarray`: The maximum of ``a``, along the axis if
specified.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.amax`
""")
_docs.set_doc(
chainerx.amin,
"""amin(a, axis=None, keepdims=False)
Returns the minimum of an array or the minimum along an axis.
Note:
When at least one element is NaN, the corresponding min value will be NaN.
Args:
a (~chainerx.ndarray): Array to take the minimum.
axis (None or int or tuple of ints): Along which axis to take the minimum.
The flattened array is used by default.
If this is a tuple of ints, the minimum is selected over multiple
axes, instead of a single axis or all the axes.
keepdims (bool): If ``True``, the reduced axes are retained in the result as dimensions of size one.
Returns:
:class:`~chainerx.ndarray`: The minimum of ``a``, along the axis if
specified.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``a``.
.. seealso:: :func:`numpy.amin`
""")
_docs.set_doc(
chainerx.mean,
"""mean(a, axis=None, keepdims=False)
Compute the arithmetic mean along the specified axis.
Returns the average of the array elements. The average is taken over the
flattened array by default, otherwise over the specified axis.
Args:
a (~chainerx.ndarray): Array to take the mean of.
axis (None or int or tuple of ints): Along which axis or axes to compute
the mean. The flattened array is used by default.
keepdims (bool): If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
Returns:
:class:`~chainerx.ndarray`: The mean of ``a``, along the axis or axes if
specified.
.. seealso:: :func:`numpy.mean`
""")
_docs.set_doc(
chainerx.var,
"""var(a, axis=None, keepdims=False)
Compute the arithmetic var along the specified axis.
Returns the var of the array elements. The var is taken over the flattened
array by default, otherwise over the specified axis.
Args:
a (~chainerx.ndarray): Array to take the var of.
axis (None or int or tuple of ints): Along which axis or axes to compute
the var. The flattened array is used by default.
keepdims (bool): If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
Returns:
:class:`~chainerx.ndarray`: The var of ``a``, along the axis or axes if
specified.
.. seealso:: :func:`numpy.var`
""")
def _docs_connection():
_docs.set_doc(
chainerx.conv,
"""conv(x, w, b=None, stride=1, pad=0, cover_all=False)
N-dimensional convolution.
This is an implementation of N-dimensional convolution, a generalization of
the two-dimensional convolution used in ConvNets. It takes three arrays: the
input ``x``, the filter weight ``w`` and the bias vector ``b``.
Notation: here is a notation for dimensionalities.
- :math:`N` is the number of spatial dimensions.
- :math:`n` is the batch size.
- :math:`c_I` and :math:`c_O` are the number of the input and output
channels, respectively.
- :math:`d_1, d_2, ..., d_N` are the size of each axis of the input's
spatial dimensions, respectively.
- :math:`k_1, k_2, ..., k_N` are the size of each axis of the filters,
respectively.
- :math:`l_1, l_2, ..., l_N` are the size of each axis of the output's
spatial dimensions, respectively.
- :math:`p_1, p_2, ..., p_N` are the size of each axis of the spatial
padding size, respectively.
Then the ``conv`` function computes correlations between filters
and patches of size :math:`(k_1, k_2, ..., k_N)` in ``x``.
Note that correlation here is equivalent to the inner product between
expanded tensors.
Patches are extracted at positions shifted by multiples of ``stride`` from
the first position ``(-p_1, -p_2, ..., -p_N)`` for each spatial axis.
Let :math:`(s_1, s_2, ..., s_N)` be the stride of filter application.
Then, the output size :math:`(l_1, l_2, ..., l_N)` is determined by the
following equations:
.. math::
l_n = (d_n + 2p_n - k_n) / s_n + 1 \\ \\ (n = 1, ..., N)
If the ``cover_all`` option is ``True``, the filter will cover all the
spatial locations. So, if the last application of the filter does not
reach the end of the spatial locations, an additional stride will be
applied to the end part of the spatial locations. In this case, the
output size is determined by
the following equations:
.. math::
l_n = (d_n + 2p_n - k_n + s_n - 1) / s_n + 1 \\ \\ (n = 1, ..., N)
Args:
x (:class:`~chainerx.ndarray`):
Input array of shape :math:`(n, c_I, d_1, d_2, ..., d_N)`.
w (:class:`~chainerx.ndarray`):
Weight array of shape :math:`(c_O, c_I, k_1, k_2, ..., k_N)`.
b (None or :class:`~chainerx.ndarray`):
One-dimensional bias array with length :math:`c_O` (optional).
stride (:class:`int` or :class:`tuple` of :class:`int` s):
Stride of filter applications :math:`(s_1, s_2, ..., s_N)`.
``stride=s`` is equivalent to ``(s, s, ..., s)``.
pad (:class:`int` or :class:`tuple` of :class:`int` s):
Spatial padding width for input arrays
:math:`(p_1, p_2, ..., p_N)`. ``pad=p`` is equivalent to
``(p, p, ..., p)``.
cover_all (bool): If ``True``, all spatial locations are convoluted
into some output pixels. It may make the output size larger.
``cover_all`` needs to be ``False`` if you want to use the ``cuda`` backend.
Returns:
~chainerx.ndarray:
Output array of shape :math:`(n, c_O, l_1, l_2, ..., l_N)`.
Note:
In ``cuda`` backend, this function uses cuDNN implementation for its
forward and backward computation.
Note:
In the ``cuda`` backend, this function currently has the following limitations:
- The ``cover_all=True`` option is not supported yet.
- The ``dtype`` must be ``float32`` or ``float64`` (``float16`` is not
supported yet.)
Note:
During backpropagation, this function propagates the gradient of the
output array to input arrays ``x``, ``w``, and ``b``.
.. seealso:: :func:`chainer.functions.convolution_nd`
.. admonition:: Example
>>> n = 10
>>> c_i, c_o = 3, 1
>>> d1, d2, d3 = 30, 40, 50
>>> k1, k2, k3 = 10, 10, 10
>>> p1, p2, p3 = 5, 5, 5
>>> x = chainerx.random.uniform(0, 1, (n, c_i, d1, d2, d3)).\
astype(np.float32)
>>> x.shape
(10, 3, 30, 40, 50)
>>> w = chainerx.random.uniform(0, 1, (c_o, c_i, k1, k2, k3)).\
astype(np.float32)
>>> w.shape
(1, 3, 10, 10, 10)
>>> b = chainerx.random.uniform(0, 1, (c_o)).astype(np.float32)
>>> b.shape
(1,)
>>> s1, s2, s3 = 2, 4, 6
>>> y = chainerx.conv(x, w, b, stride=(s1, s2, s3),\
pad=(p1, p2, p3))
>>> y.shape
(10, 1, 16, 11, 9)
>>> l1 = int((d1 + 2 * p1 - k1) / s1 + 1)
>>> l2 = int((d2 + 2 * p2 - k2) / s2 + 1)
>>> l3 = int((d3 + 2 * p3 - k3) / s3 + 1)
>>> y.shape == (n, c_o, l1, l2, l3)
True
>>> y = chainerx.conv(x, w, b, stride=(s1, s2, s3),\
pad=(p1, p2, p3), cover_all=True)
>>> y.shape == (n, c_o, l1, l2, l3 + 1)
True
""")
_docs.set_doc(
chainerx.conv_transpose,
"""conv_transpose(x, w, b=None, stride=1, pad=0, outsize=None)
N-dimensional transposed convolution.
This is an implementation of N-dimensional transposed convolution, which was
previously known as **deconvolution** in Chainer.
.. _Deconvolutional Networks: \
https://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf
It takes three arrays: the input ``x``, the filter weight ``w``, and the
bias vector ``b``.
Notation: here is a notation for dimensionalities.
- :math:`N` is the number of spatial dimensions.
- :math:`n` is the batch size.
- :math:`c_I` and :math:`c_O` are the number of the input and output
channels, respectively.
- :math:`d_1, d_2, ..., d_N` are the size of each axis of the input's
spatial dimensions, respectively.
- :math:`k_1, k_2, ..., k_N` are the size of each axis of the filters,
respectively.
- :math:`p_1, p_2, ..., p_N` are the size of each axis of the spatial
padding size, respectively.
- :math:`s_1, s_2, ..., s_N` are the stride of each axis of filter
application, respectively.
If ``outsize`` option is ``None``, the output size
:math:`(l_1, l_2, ..., l_N)` is determined by the following equations with
the items in the above list:
.. math::
l_n = s_n (d_n - 1) + k_n - 2 p_n \\ \\ (n = 1, ..., N)
If ``outsize`` option is given, the output size is determined by
``outsize``. In this case, the ``outsize`` :math:`(l_1, l_2, ..., l_N)`
must satisfy the following equations:
.. math::
d_n = \\lfloor (l_n + 2p_n - k_n) / s_n \\rfloor + 1 \\ \\ \
(n = 1, ..., N)
Args:
x (:class:`~chainerx.ndarray`):
Input array of shape :math:`(n, c_I, d_1, d_2, ..., d_N)`.
w (:class:`~chainerx.ndarray`):
Weight array of shape :math:`(c_I, c_O, k_1, k_2, ..., k_N)`.
b (None or :class:`~chainerx.ndarray`):
One-dimensional bias array with length :math:`c_O` (optional).
stride (:class:`int` or :class:`tuple` of :class:`int` s):
Stride of filter applications :math:`(s_1, s_2, ..., s_N)`.
``stride=s`` is equivalent to ``(s, s, ..., s)``.
pad (:class:`int` or :class:`tuple` of :class:`int` s):
Spatial padding width for input arrays
:math:`(p_1, p_2, ..., p_N)`. ``pad=p`` is equivalent to
``(p, p, ..., p)``.
outsize (None or :class:`tuple` of :class:`int` s):
Expected output size of deconvolutional operation. It should be a
tuple of ints :math:`(l_1, l_2, ..., l_N)`. Default value is
``None`` and the outsize is estimated by input size, stride and
pad.
Returns:
~chainerx.ndarray:
Output array of shape :math:`(n, c_O, l_1, l_2, ..., l_N)`.
Note:
During backpropagation, this function propagates the gradient of the
output array to input arrays ``x``, ``w``, and ``b``.
.. seealso:: :func:`chainer.functions.deconvolution_nd`
.. admonition:: Example
**Example1**: the case when ``outsize`` is not given.
>>> n = 10
>>> c_i, c_o = 3, 1
>>> d1, d2, d3 = 5, 10, 15
>>> k1, k2, k3 = 10, 10, 10
>>> p1, p2, p3 = 5, 5, 5
>>> x = chainerx.random.uniform(0, 1, (n, c_i, d1, d2, d3)).\
astype(np.float32)
>>> x.shape
(10, 3, 5, 10, 15)
>>> w = chainerx.random.uniform(0, 1, (c_i, c_o, k1, k2, k3)).\
astype(np.float32)
>>> w.shape
(3, 1, 10, 10, 10)
>>> b = chainerx.random.uniform(0, 1, (c_o)).astype(np.float32)
>>> b.shape
(1,)
>>> s1, s2, s3 = 2, 4, 6
>>> y = chainerx.conv_transpose(x, w, b, stride=(s1, s2, s3), \
pad=(p1, p2, p3))
>>> y.shape
(10, 1, 8, 36, 84)
>>> l1 = s1 * (d1 - 1) + k1 - 2 * p1
>>> l2 = s2 * (d2 - 1) + k2 - 2 * p2
>>> l3 = s3 * (d3 - 1) + k3 - 2 * p3
>>> y.shape == (n, c_o, l1, l2, l3)
True
**Example2**: the case when ``outsize`` is given.
>>> n = 10
>>> c_i, c_o = 3, 1
>>> d1, d2, d3 = 5, 10, 15
>>> k1, k2, k3 = 10, 10, 10
>>> p1, p2, p3 = 5, 5, 5
>>> x = chainerx.array(np.random.uniform(0, 1, (n, c_i, d1, d2, d3)).\
astype(np.float32))
>>> x.shape
(10, 3, 5, 10, 15)
>>> w = chainerx.array(np.random.uniform(0, 1, (c_i, c_o, k1, k2, k3)).\
astype(np.float32))
>>> w.shape
(3, 1, 10, 10, 10)
>>> b = chainerx.array(np.random.uniform(0, 1, (c_o)).astype(np.float32))
>>> b.shape
(1,)
>>> s1, s2, s3 = 2, 4, 6
>>> l1, l2, l3 = 9, 38, 87
>>> d1 == int((l1 + 2 * p1 - k1) / s1) + 1
True
>>> d2 == int((l2 + 2 * p2 - k2) / s2) + 1
True
>>> d3 == int((l3 + 2 * p3 - k3) / s3) + 1
True
>>> y = chainerx.conv_transpose(x, w, b, stride=(s1, s2, s3), \
pad=(p1, p2, p3), outsize=(l1, l2, l3))
>>> y.shape
(10, 1, 9, 38, 87)
>>> y.shape == (n, c_o, l1, l2, l3)
True
""")
_docs.set_doc(
chainerx.linear,
"""linear(x, W, b=None, n_batch_axis=1)
Linear function, or affine transformation.
It accepts two or three arguments: an input minibatch ``x``, a weight
matrix ``W``, and optionally a bias vector ``b``. It computes
.. math:: Y = xW^\\top + b.
Args:
x (~chainerx.ndarray):
Input array, which is a :math:`(s_1, s_2, ..., s_n)`-shaped array.
W (~chainerx.ndarray):
Weight variable of shape :math:`(M, N)`,
where :math:`(N = s_{\\rm n\\_batch\\_axes} * ... * s_n)`.
b (~chainerx.ndarray):
Bias variable (optional) of shape :math:`(M,)`.
n_batch_axes (int):
The number of batch axes. The default is 1. The input variable is
reshaped into (:math:`{\\rm n\\_batch\\_axes} + 1`)-dimensional
tensor. This should be greater than 0.
Returns:
:class:`~chainerx.ndarray`:
Output array with shape of
:math:`(s_1, ..., s_{\\rm n\\_batch\\_axes}, M)`.
Note:
During backpropagation, this function propagates the gradient of the
output array to input arrays ``x``, ``W`` and ``b``.
""")
_docs.set_doc(
chainerx.lstm,
"""lstm(c_prev, x)
Long Short-Term Memory units as an activation function.
This function implements LSTM units with forget gates. It takes the
previous cell state ``c_prev`` and the input array ``x``.
First, the input array ``x`` is split into four arrays
:math:`a, i, f, o` of the same shape along the second axis. This means that
the size of ``x`` 's second axis must be four times that of ``c_prev`` 's.
The split input arrays correspond to:
- :math:`a` : sources of cell input
- :math:`i` : sources of input gate
- :math:`f` : sources of forget gate
- :math:`o` : sources of output gate
Second, it computes the updated cell state ``c`` and the outgoing signal
``h`` as
.. math::
c &= \\tanh(a) \\sigma(i)
+ c_{\\text{prev}} \\sigma(f), \\\\
h &= \\tanh(c) \\sigma(o),
where :math:`\\sigma` is the elementwise sigmoid function.
These are returned as a tuple of two variables.
This function supports variable-length inputs. The mini-batch size of
the current input must be equal to or smaller than that of the previous
one. When the mini-batch size of ``x`` is smaller than that of ``c``, this
function only updates ``c[0:len(x)]`` and leaves the rest, ``c[len(x):]``,
unchanged. Therefore, sort input sequences in descending order of their
lengths before applying the function.
Args:
c_prev (:class:`~chainerx.array`):
Variable that holds the previous cell state. The cell state
should be a zero array or the output of the previous call of LSTM.
x (:class:`~chainerx.array`):
Variable that holds the sources of cell input, input gate, forget
gate and output gate. It must have the second dimension whose size
is four times of that of the cell state.
Returns:
tuple: Two :class:`~chainerx.array` objects ``c`` and ``h``.
``c`` is the updated cell state. ``h`` indicates the outgoing signal.
See the original paper proposing LSTM with forget gates:
`Long Short-Term Memory in Recurrent Neural Networks
<http://www.felixgers.de/papers/phd.pdf>`_.
.. admonition:: Example
Assuming ``y`` is the current incoming signal, ``c`` is the previous
cell state, and ``h`` is the previous outgoing signal from an ``lstm``
function. Each of ``y``, ``c`` and ``h`` has ``n_units`` channels.
Most typical preparation of ``x`` is
>>> n_units = 100
>>> c_prev = chainerx.zeros((1, n_units), chainerx.float32)
>>> x = chainerx.zeros((1, 4 * n_units), chainerx.float32)
>>> c, h = chainerx.lstm(c_prev, x)
This corresponds to computing the input array ``x``, i.e. the input
sources :math:`a, i, f, o`, from the current incoming signal ``y`` and
the previous outgoing signal ``h``. Different parameters are used for
different kinds of input sources.
""")
def _docs_normalization():
_docs.set_doc(
chainerx.batch_norm,
"""batch_norm(x, gamma, beta, running_mean, running_var, eps=2e-5, \
decay=0.9, axis=None)
Batch normalization function.
It takes the input array ``x`` and two parameter arrays ``gamma`` and
``beta``. The parameter arrays must both have the same size.
Args:
x (~chainerx.ndarray): Input array.
gamma (~chainerx.ndarray): Scaling parameter of normalized data.
beta (~chainerx.ndarray): Shifting parameter of scaled normalized data.
running_mean (~chainerx.ndarray):
Running average of the mean. This is a running average of
the mean over several mini-batches using the decay parameter.
The function takes a previous running average, and updates
the array in-place by the new running average.
running_var (~chainerx.ndarray):
Running average of the variance. This is a running average of
the variance over several mini-batches using the decay parameter.
The function takes a previous running average, and updates
the array in-place by the new running average.
eps (float): Epsilon value for numerical stability.
decay (float): Decay rate of moving average. It is used during training.
axis (int, tuple of int or None):
Axis over which normalization is performed. When axis is ``None``,
the first axis is treated as the batch axis and will be reduced
during normalization.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input arrays ``x``, ``gamma`` and ``beta``.
See: `Batch Normalization: Accelerating Deep Network Training by Reducing\
Internal Covariate Shift <https://arxiv.org/abs/1502.03167>`_
""")
_docs.set_doc(
chainerx.fixed_batch_norm,
"""fixed_batch_norm(x, gamma, beta, mean, var, eps=2e-5, axis=None)
Batch normalization function with fixed statistics.
This is a variant of :func:`~chainerx.batch_norm`, where the mean
and variance statistics are given by the caller as fixed variables.
Args:
x (~chainerx.ndarray): Input array.
gamma (~chainerx.ndarray): Scaling parameter of normalized data.
beta (~chainerx.ndarray): Shifting parameter of scaled normalized data.
mean (~chainerx.ndarray): Shifting parameter of input.
var (~chainerx.ndarray): Square of scaling parameter of input.
eps (float): Epsilon value for numerical stability.
axis (int, tuple of int or None):
Axis over which normalization is performed. When axis is ``None``,
the first axis is treated as the batch axis and will be reduced
during normalization.
Note:
During backpropagation, this function does not propagate gradients.
""")
def _docs_pooling():
_docs.set_doc(
chainerx.max_pool,
"""max_pool(x, ksize, stride=None, pad=0, cover_all=False)
Spatial max pooling function.
This acts similarly to :func:`~chainerx.conv`, but instead of computing
inner products it takes the maximum of each input spatial patch for each
channel, without any parameters.
Args:
x (~chainerx.ndarray): Input array.
ksize (int or tuple of ints): Size of pooling window. ``ksize=k`` and
``ksize=(k, k, ..., k)`` are equivalent.
stride (int or tuple of ints or None): Stride of pooling applications.
``stride=s`` and ``stride=(s, s, ..., s)`` are equivalent. If
``None`` is specified, the same stride as the pooling window size
is used.
pad (int or tuple of ints): Spatial padding width for the input array.
``pad=p`` and ``pad=(p, p, ..., p)`` are equivalent.
cover_all (bool): If ``True``, all spatial locations are pooled into
some output pixels. It may make the output size larger.
Returns:
:class:`~chainerx.ndarray`: Output array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``. This function is only
differentiable up to the second order.
.. note::
In ``cuda`` backend, only 2 and 3 dim arrays are supported as ``x``
because cuDNN pooling supports 2 and 3 spatial dimensions.
""")
_docs.set_doc(
chainerx.average_pool,
"""average_pool(x, ksize, stride=None, pad=0, pad_mode='ignore')
Spatial average pooling function.
This acts similarly to :func:`~chainerx.conv`, but instead of computing
inner products it takes the average of each input spatial patch for each
channel, without any parameters.
Args:
x (~chainerx.ndarray): Input array.
ksize (int or tuple of ints): Size of pooling window. ``ksize=k`` and
``ksize=(k, k, ..., k)`` are equivalent.
stride (int or tuple of ints or None): Stride of pooling applications.
``stride=s`` and ``stride=(s, s, ..., s)`` are equivalent. If
``None`` is specified, the same stride as the pooling window size
is used.
pad (int or tuple of ints): Spatial padding width for the input array.
``pad=p`` and ``pad=(p, p, ..., p)`` are equivalent.
pad_mode ({'zero', 'ignore'}): Specifies how padded region is treated.
* 'zero' -- the values in the padded region are treated as 0
* 'ignore' -- padded region is ignored (default)
Returns:
:class:`~chainerx.ndarray`: Output array.
Note:
During backpropagation, this function propagates the gradient of the
output array to the input array ``x``.
.. note::
In ``cuda`` backend, only 2 and 3 dim arrays are supported as ``x``
because cuDNN pooling supports 2 and 3 spatial dimensions.
""")
def _docs_rnn():
_docs.set_doc(
chainerx.n_step_lstm,
"""n_step_lstm(n_layers, hx, cx, ws, bs, xs)
Stacked Uni-directional Long Short-Term Memory function.
This function calculates stacked Uni-directional LSTM with sequences.
This function gets an initial hidden state :math:`h_0`, an initial cell
state :math:`c_0`, an input sequence :math:`x`, weight matrices :math:`W`,
and bias vectors :math:`b`.
This function calculates hidden states :math:`h_t` and :math:`c_t` for each
time :math:`t` from input :math:`x_t`.
.. math::
i_t &= \\sigma(W_0 x_t + W_4 h_{t-1} + b_0 + b_4) \\\\
f_t &= \\sigma(W_1 x_t + W_5 h_{t-1} + b_1 + b_5) \\\\
o_t &= \\sigma(W_2 x_t + W_6 h_{t-1} + b_2 + b_6) \\\\
a_t &= \\tanh(W_3 x_t + W_7 h_{t-1} + b_3 + b_7) \\\\
c_t &= f_t \\cdot c_{t-1} + i_t \\cdot a_t \\\\
h_t &= o_t \\cdot \\tanh(c_t)
As the function accepts a sequence, it calculates :math:`h_t` for all
:math:`t` with one call. Eight weight matrices and eight bias vectors are
required for each layer. So, when :math:`S` layers exist, you need to
prepare :math:`8S` weight matrices and :math:`8S` bias vectors.
If the number of layers ``n_layers`` is greater than :math:`1`, the input
of the ``k``-th layer is the hidden state ``h_t`` of the ``k-1``-th layer.
Note that all input variables except those of the first layer may have a
different shape from the first layer's.
Args:
n_layers(int): The number of layers.
hx (:class:`~chainerx.array`):
Variable holding stacked hidden states.
Its shape is ``(S, B, N)`` where ``S`` is the number of layers and
is equal to ``n_layers``, ``B`` is the mini-batch size, and ``N``
is the dimension of the hidden units.
cx (:class:`~chainerx.array`): Variable holding stacked cell states.
It has the same shape as ``hx``.
ws (list of list of :class:`~chainerx.array`): Weight matrices.
``ws[i]`` represents the weights for the i-th layer.
Each ``ws[i]`` is a list containing eight matrices.
``ws[i][j]`` corresponds to :math:`W_j` in the equation.
Only ``ws[0][j]`` where ``0 <= j < 4`` are ``(N, I)``-shaped as
they are multiplied with input variables, where ``I`` is the size
of the input and ``N`` is the dimension of the hidden units. All
other matrices are ``(N, N)``-shaped.
bs (list of list of :class:`~chainerx.array`): Bias vectors.
``bs[i]`` represents the biases for the i-th layer.
Each ``bs[i]`` is a list containing eight vectors.
``bs[i][j]`` corresponds to :math:`b_j` in the equation.
The shape of each matrix is ``(N,)`` where ``N`` is the dimension
of the hidden units.
xs (list of :class:`~chainerx.array`):
A list of :class:`~chainerx.array`
holding input values. Each element ``xs[t]`` holds input value
for time ``t``. Its shape is ``(B_t, I)``, where ``B_t`` is the
mini-batch size for time ``t``.
When sequences have different lengths, they must be
sorted in descending order of their lengths.
So ``xs`` needs to satisfy
``xs[t].shape[0] >= xs[t + 1].shape[0]``.
Returns:
tuple: This function returns a tuple containing three elements,
``hy``, ``cy`` and ``ys``.
- ``hy`` is the updated hidden states, whose shape is the same as
``hx``.
- ``cy`` is the updated cell states, whose shape is the same as
``cx``.
- ``ys`` is a list of :class:`~chainerx.array` . Each element
``ys[t]`` holds hidden states of the last layer corresponding
to an input ``xs[t]``. Its shape is ``(B_t, N)`` where ``B_t`` is
the mini-batch size for time ``t``, and ``N`` is the size of the hidden
units. Note that ``B_t`` is the same value as ``xs[t].shape[0]``.
.. note::
The dimension of hidden units is limited to only one size ``N``. If you
want to use variable dimension of hidden units, please use
:class:`chainerx.lstm`.
.. seealso::
:func:`chainerx.lstm`
.. admonition:: Example
>>> import chainerx as chx
>>> batchs = [3, 2, 1] # support variable length sequences
>>> in_size, out_size, n_layers = 3, 2, 2
>>> xs = [chx.ones((b, in_size)).astype(chx.float32) for b in batchs]
>>> [x.shape for x in xs]
[(3, 3), (2, 3), (1, 3)]
>>> h_shape = (n_layers, batchs[0], out_size)
>>> hx = chx.ones(h_shape).astype(chx.float32)
>>> cx = chx.ones(h_shape).astype(chx.float32)
>>> w_in = lambda i, j: in_size if i == 0 and j < 4 else out_size
>>> ws = []
>>> bs = []
>>> for n in range(n_layers):
... ws.append([chx.ones((out_size, w_in(n, i))).\
astype(np.float32) for i in range(8)])
... bs.append([chx.ones((out_size,)).astype(chx.float32) \
for _ in range(8)])
...
>>> ws[0][0].shape # ws[0][:4].shape are (out_size, in_size)
(2, 3)
>>> ws[1][0].shape # others are (out_size, out_size)
(2, 2)
>>> bs[0][0].shape
(2,)
>>> hy, cy, ys = chx.n_step_lstm(
... n_layers, hx, cx, ws, bs, xs)
>>> hy.shape
(2, 3, 2)
>>> cy.shape
(2, 3, 2)
>>> [y.shape for y in ys]
[(3, 2), (2, 2), (1, 2)]
""")
_docs.set_doc(
chainerx.n_step_bilstm,
"""n_step_bilstm(n_layers, hx, cx, ws, bs, xs)
Stacked Bi-directional Long Short-Term Memory function.
This function calculates stacked Bi-directional LSTM with sequences.
This function gets an initial hidden state :math:`h_0`, an initial cell
state :math:`c_0`, an input sequence :math:`x`, weight matrices :math:`W`,
and bias vectors :math:`b`.
This function calculates hidden states :math:`h_t` and :math:`c_t` for each
time :math:`t` from input :math:`x_t`.
.. math::
i^{f}_t &=& \\sigma(W^{f}_0 x_t + W^{f}_4 h_{t-1} + b^{f}_0 + b^{f}_4),
\\\\
f^{f}_t &=& \\sigma(W^{f}_1 x_t + W^{f}_5 h_{t-1} + b^{f}_1 + b^{f}_5),
\\\\
o^{f}_t &=& \\sigma(W^{f}_2 x_t + W^{f}_6 h_{t-1} + b^{f}_2 + b^{f}_6),
\\\\
a^{f}_t &=& \\tanh(W^{f}_3 x_t + W^{f}_7 h_{t-1} + b^{f}_3 + b^{f}_7),
\\\\
c^{f}_t &=& f^{f}_t \\cdot c^{f}_{t-1} + i^{f}_t \\cdot a^{f}_t,
\\\\
h^{f}_t &=& o^{f}_t \\cdot \\tanh(c^{f}_t),
\\\\
i^{b}_t &=& \\sigma(W^{b}_0 x_t + W^{b}_4 h_{t-1} + b^{b}_0 + b^{b}_4),
\\\\
f^{b}_t &=& \\sigma(W^{b}_1 x_t + W^{b}_5 h_{t-1} + b^{b}_1 + b^{b}_5),
\\\\
o^{b}_t &=& \\sigma(W^{b}_2 x_t + W^{b}_6 h_{t-1} + b^{b}_2 + b^{b}_6),
\\\\
a^{b}_t &=& \\tanh(W^{b}_3 x_t + W^{b}_7 h_{t-1} + b^{b}_3 + b^{b}_7),
\\\\
c^{b}_t &=& f^{b}_t \\cdot c^{b}_{t-1} + i^{b}_t \\cdot a^{b}_t, \\\\
h^{b}_t &=& o^{b}_t \\cdot \\tanh(c^{b}_t), \\\\
h_t &=& [h^{f}_t; h^{b}_t]
where :math:`W^{f}` is the weight matrices for forward-LSTM, :math:`W^{b}`
is weight matrices for backward-LSTM.
As the function accepts a sequence, it calculates :math:`h_t` for all
:math:`t` with one call. Eight weight matrices and eight bias vectors are
required for each layer of each direction. So, when :math:`S` layers
exist, you need to prepare :math:`16S` weight matrices and :math:`16S`
bias vectors.
If the number of layers ``n_layers`` is greater than :math:`1`, the input
of the ``k``-th layer is the hidden state ``h_t`` of the ``(k-1)``-th layer.
Note that all input variables except those of the first layer may have a
different shape from the first layer.
Args:
n_layers(int): The number of layers.
hx (:class:`~chainerx.array`):
Variable holding stacked hidden states.
Its shape is ``(2S, B, N)`` where ``S`` is the number of layers and
is equal to ``n_layers``, ``B`` is the mini-batch size, and ``N``
is the dimension of the hidden units. Because of bi-direction, the
first dimension length is ``2S``.
cx (:class:`~chainerx.array`): Variable holding stacked cell states.
It has the same shape as ``hx``.
ws (list of list of :class:`~chainerx.array`): Weight matrices.
``ws[2 * l + m]`` represents the weights for the l-th layer of
the m-th direction. (``m == 0`` means the forward direction and
``m == 1`` means the backward direction.) Each ``ws[i]`` is a
list containing eight matrices. ``ws[i][j]`` corresponds to
:math:`W_j` in the equation. ``ws[0][j]`` and ``ws[1][j]`` where
``0 <= j < 4`` are ``(N, I)``-shaped because they are multiplied
with input variables, where ``I`` is the size of the input.
``ws[i][j]`` where ``2 <= i`` and ``0 <= j < 4`` are
``(N, 2N)``-shaped because they are multiplied with two hidden
layers :math:`h_t = [h^{f}_t; h^{b}_t]`. All other matrices are
``(N, N)``-shaped.
bs (list of list of :class:`~chainerx.array`): Bias vectors.
``bs[2 * l + m]`` represents the biases for the l-th layer of the
m-th direction. (``m == 0`` means the forward direction and
``m == 1`` means the backward direction.)
Each ``bs[i]`` is a list containing eight vectors.
``bs[i][j]`` corresponds to :math:`b_j` in the equation.
The shape of each vector is ``(N,)``.
xs (list of :class:`~chainerx.array`):
A list of :class:`~chainerx.array`
holding input values. Each element ``xs[t]`` holds input value
for time ``t``. Its shape is ``(B_t, I)``, where ``B_t`` is the
mini-batch size for time ``t``.
When sequences have different lengths, they must be
sorted in descending order of their lengths.
So ``xs`` needs to satisfy
``xs[t].shape[0] >= xs[t + 1].shape[0]``.
Returns:
tuple: This function returns a tuple containing three elements,
``hy``, ``cy`` and ``ys``.
- ``hy`` is the updated hidden states whose shape is the same as
``hx``.
- ``cy`` is the updated cell states whose shape is the same as
``cx``.
- ``ys`` is a list of :class:`~chainerx.array` . Each element
``ys[t]`` holds hidden states of the last layer corresponding
to an input ``xs[t]``. Its shape is ``(B_t, 2N)`` where ``B_t``
is the mini-batch size for time ``t``, and ``N`` is the size of
hidden units. Note that ``B_t`` is the same value as ``xs[t].shape[0]``.
.. admonition:: Example
>>> import chainerx as chx
>>> batchs = [3, 2, 1] # support variable length sequences
>>> in_size, out_size, n_layers = 3, 2, 2
>>> dropout_ratio = 0.0
>>> xs = [chx.ones((b, in_size)).astype(chx.float32) for b in batchs]
>>> [x.shape for x in xs]
[(3, 3), (2, 3), (1, 3)]
>>> h_shape = (n_layers * 2, batchs[0], out_size)
>>> hx = chx.ones(h_shape).astype(chx.float32)
>>> cx = chx.ones(h_shape).astype(chx.float32)
>>> def w_in(i, j):
... if i == 0 and j < 4:
... return in_size
... elif i > 0 and j < 4:
... return out_size * 2
... else:
... return out_size
...
>>> ws = []
>>> bs = []
>>> for n in range(n_layers):
... for direction in (0, 1):
... ws.append([chx.ones((out_size, w_in(n, i))).\
astype(chx.float32) for i in range(8)])
... bs.append([chx.ones((out_size,)).astype(chx.float32) \
for _ in range(8)])
...
>>> ws[0][0].shape # ws[0:2][:4].shape are (out_size, in_size)
(2, 3)
>>> ws[2][0].shape # ws[2:][:4].shape are (out_size, 2 * out_size)
(2, 4)
>>> ws[0][4].shape # others are (out_size, out_size)
(2, 2)
>>> bs[0][0].shape
(2,)
>>> hy, cy, ys = chx.n_step_bilstm(
... n_layers, hx, cx, ws, bs, xs)
>>> hy.shape
(4, 3, 2)
>>> cy.shape
(4, 3, 2)
>>> [y.shape for y in ys]
[(3, 4), (2, 4), (1, 4)]
""")
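The eight-gate arithmetic documented above can be sanity-checked with a small NumPy sketch of a single forward-direction LSTM step. The names (`step_lstm`, `W`, `b`) and the toy shapes are illustrative only, not part of the chainerx API; the weight slots follow the documented ordering (`W[0..3]` multiply `x_t`, `W[4..7]` multiply `h_{t-1}`).

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def step_lstm(x, h, c, W, b):
    """One LSTM step with the gate ordering used in the equations:
    W[0..3] multiply x_t, W[4..7] multiply h_{t-1}."""
    i = sigmoid(W[0] @ x + W[4] @ h + b[0] + b[4])   # input gate
    f = sigmoid(W[1] @ x + W[5] @ h + b[1] + b[5])   # forget gate
    o = sigmoid(W[2] @ x + W[6] @ h + b[2] + b[6])   # output gate
    a = np.tanh(W[3] @ x + W[7] @ h + b[3] + b[7])   # candidate cell input
    c_new = f * c + i * a
    h_new = o * np.tanh(c_new)
    return h_new, c_new

I_, N_ = 3, 2                                        # toy input/hidden sizes
rng = np.random.default_rng(0)
W = [rng.standard_normal((N_, I_)) for _ in range(4)] + \
    [rng.standard_normal((N_, N_)) for _ in range(4)]
b = [np.zeros(N_) for _ in range(8)]
h, c = step_lstm(np.ones(I_), np.zeros(N_), np.zeros(N_), W, b)
```

A bidirectional stack would run one such step per direction per layer, concatenating the two hidden states, which is why `hx` has leading dimension `2S`.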
_docs.set_doc(
chainerx.n_step_gru,
"""n_step_gru(n_layers, hx, ws, bs, xs)
Stacked Uni-directional Gated Recurrent Unit function.
This function calculates stacked Uni-directional GRU with sequences.
This function gets an initial hidden state :math:`h_0`, an input
sequence :math:`x`, weight matrices :math:`W`, and bias vectors :math:`b`.
This function calculates hidden states :math:`h_t` for each time :math:`t`
from input :math:`x_t`.
.. math::
r_t &= \\sigma(W_0 x_t + W_3 h_{t-1} + b_0 + b_3) \\\\
z_t &= \\sigma(W_1 x_t + W_4 h_{t-1} + b_1 + b_4) \\\\
h'_t &= \\tanh(W_2 x_t + b_2 + r_t \\cdot (W_5 h_{t-1} + b_5)) \\\\
h_t &= (1 - z_t) \\cdot h'_t + z_t \\cdot h_{t-1}
As the function accepts a sequence, it calculates :math:`h_t` for all
:math:`t` with one call. Six weight matrices and six bias vectors are
required for each layer. So, when :math:`S` layers exist, you need to
prepare :math:`6S` weight matrices and :math:`6S` bias vectors.
If the number of layers ``n_layers`` is greater than :math:`1`, the input
of the ``k``-th layer is the hidden state ``h_t`` of the ``(k-1)``-th layer.
Note that all input variables except those of the first layer may have a
different shape from the first layer.
Args:
n_layers(int): Number of layers.
hx (~chainerx.array):
Variable holding stacked hidden states.
Its shape is ``(S, B, N)`` where ``S`` is number of layers and is
equal to ``n_layers``, ``B`` is mini-batch size, and ``N`` is
dimension of hidden units.
ws (list of list of :class:`~chainerx.array`): Weight matrices.
``ws[i]`` represents the weights for the i-th layer.
Each ``ws[i]`` is a list containing six matrices.
``ws[i][j]`` corresponds to :math:`W_j` in the equation.
Only ``ws[0][j]`` where ``0 <= j < 3`` are ``(N, I)``-shaped, as they
are multiplied with input variables. All other matrices have
``(N, N)`` shape.
bs (list of list of :class:`~chainerx.array`): Bias vectors.
``bs[i]`` represents the biases for the i-th layer.
Each ``bs[i]`` is a list containing six vectors.
``bs[i][j]`` corresponds to :math:`b_j` in the equation.
The shape of each vector is ``(N,)`` where ``N`` is the dimension of
hidden units.
xs (list of :class:`~chainerx.array`):
A list of :class:`~chainerx.array`
holding input values. Each element ``xs[t]`` holds input value
for time ``t``. Its shape is ``(B_t, I)``, where ``B_t`` is
mini-batch size for time ``t``, and ``I`` is size of input units.
Note that this function supports variable length sequences.
When sequences have different lengths, sort them in descending
order by length.
So ``xs`` needs to satisfy
``xs[t].shape[0] >= xs[t + 1].shape[0]``.
Returns:
tuple: This function returns a tuple containing two elements,
``hy`` and ``ys``.
- ``hy`` is the updated hidden states whose shape is the same as ``hx``.
- ``ys`` is a list of :class:`~chainerx.array` . Each element
``ys[t]`` holds hidden states of the last layer corresponding
to an input ``xs[t]``. Its shape is ``(B_t, N)`` where ``B_t`` is the
mini-batch size for time ``t``, and ``N`` is the size of hidden
units. Note that ``B_t`` is the same value as ``xs[t].shape[0]``.
""")
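As a cross-check of the three-gate arithmetic above, here is a minimal single-step GRU in NumPy. The helper name `step_gru` and the toy shapes are illustrative, not part of the chainerx API; the indexing follows the documented layout (`W[0..2]` multiply `x_t`, `W[3..5]` act on `h_{t-1}`).

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def step_gru(x, h, W, b):
    """One GRU step matching the documented equations."""
    r = sigmoid(W[0] @ x + W[3] @ h + b[0] + b[3])            # reset gate
    z = sigmoid(W[1] @ x + W[4] @ h + b[1] + b[4])            # update gate
    h_tilde = np.tanh(W[2] @ x + b[2] + r * (W[5] @ h + b[5]))
    return (1.0 - z) * h_tilde + z * h                        # interpolate

I_, N_ = 3, 2                                                 # toy sizes
rng = np.random.default_rng(0)
W = [rng.standard_normal((N_, I_)) for _ in range(3)] + \
    [rng.standard_normal((N_, N_)) for _ in range(3)]
b = [np.zeros(N_) for _ in range(6)]
h = step_gru(np.ones(I_), np.zeros(N_), W, b)
```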
_docs.set_doc(
chainerx.n_step_bigru,
"""n_step_bigru(n_layers, hx, ws, bs, xs)
Stacked Bi-directional Gated Recurrent Unit function.
This function calculates stacked Bi-directional GRU with sequences.
This function gets an initial hidden state :math:`h_0`, an input
sequence :math:`x`, weight matrices :math:`W`, and bias vectors :math:`b`.
This function calculates hidden states :math:`h_t` for each time :math:`t`
from input :math:`x_t`.
.. math::
r^{f}_t &= \\sigma(W^{f}_0 x_t + W^{f}_3 h_{t-1} + b^{f}_0 + b^{f}_3)
\\\\
z^{f}_t &= \\sigma(W^{f}_1 x_t + W^{f}_4 h_{t-1} + b^{f}_1 + b^{f}_4)
\\\\
h^{f'}_t &= \\tanh(W^{f}_2 x_t + b^{f}_2 + r^{f}_t \\cdot (W^{f}_5
h_{t-1} + b^{f}_5)) \\\\
h^{f}_t &= (1 - z^{f}_t) \\cdot h^{f'}_t + z^{f}_t \\cdot h_{t-1}
\\\\
r^{b}_t &= \\sigma(W^{b}_0 x_t + W^{b}_3 h_{t-1} + b^{b}_0 + b^{b}_3)
\\\\
z^{b}_t &= \\sigma(W^{b}_1 x_t + W^{b}_4 h_{t-1} + b^{b}_1 + b^{b}_4)
\\\\
h^{b'}_t &= \\tanh(W^{b}_2 x_t + b^{b}_2 + r^{b}_t \\cdot (W^{b}_5
h_{t-1} + b^{b}_5)) \\\\
h^{b}_t &= (1 - z^{b}_t) \\cdot h^{b'}_t + z^{b}_t \\cdot h_{t-1}
\\\\
h_t &= [h^{f}_t; h^{b}_t] \\\\
where :math:`W^{f}` denotes the weight matrices for the forward GRU and
:math:`W^{b}` the weight matrices for the backward GRU.
As the function accepts a sequence, it calculates :math:`h_t` for all
:math:`t` with one call. Six weight matrices and six bias vectors are
required for each layer of each direction. So, when :math:`S` layers
exist, you need to prepare :math:`12S` weight matrices and :math:`12S`
bias vectors.
If the number of layers ``n_layers`` is greater than :math:`1`, the input
of the ``k``-th layer is the hidden state ``h_t`` of the ``(k-1)``-th layer.
Note that all input variables except those of the first layer may have a
different shape from the first layer.
Args:
n_layers(int): Number of layers.
hx (:class:`~chainerx.array`):
Variable holding stacked hidden states.
Its shape is ``(2S, B, N)`` where ``S`` is number of layers and is
equal to ``n_layers``, ``B`` is mini-batch size, and ``N`` is
dimension of hidden units.
ws (list of list of :class:`~chainerx.array`): Weight matrices.
``ws[2 * l + m]`` represents the weights for the l-th layer of the
m-th direction. (``m == 0`` means the forward direction and
``m == 1`` means the backward direction.) Each ``ws[i]`` is a list
containing six matrices. ``ws[i][j]`` corresponds to :math:`W_j` in
the equation. ``ws[0][j]`` and ``ws[1][j]`` where ``0 <= j < 3`` are
``(N, I)``-shaped, as they are multiplied with input variables.
All other matrices have ``(N, N)`` shape.
bs (list of list of :class:`~chainerx.array`): Bias vectors.
``bs[2 * l + m]`` represents the biases for the l-th layer of the
m-th direction. (``m == 0`` means the forward direction and
``m == 1`` means the backward direction.)
Each ``bs[i]`` is a list containing six vectors.
``bs[i][j]`` corresponds to :math:`b_j` in the equation.
The shape of each vector is ``(N,)`` where ``N`` is the dimension of
hidden units.
xs (list of :class:`~chainerx.array`):
A list of :class:`~chainerx.array` holding input values.
Each element ``xs[t]`` holds input value
for time ``t``. Its shape is ``(B_t, I)``, where ``B_t`` is
mini-batch size for time ``t``, and ``I`` is size of input units.
Note that this function supports variable length sequences.
When sequences have different lengths, sort them in descending
order by length.
So ``xs`` needs to satisfy
``xs[t].shape[0] >= xs[t + 1].shape[0]``.
Returns:
tuple: This function returns a tuple containing two elements,
``hy`` and ``ys``.
- ``hy`` is the updated hidden states whose shape is the same as ``hx``.
- ``ys`` is a list of :class:`~chainerx.array` . Each element
``ys[t]`` holds hidden states of the last layer corresponding
to an input ``xs[t]``. Its shape is ``(B_t, N)`` where ``B_t`` is the
mini-batch size for time ``t``, and ``N`` is the size of hidden
units. Note that ``B_t`` is the same value as ``xs[t].shape[0]``.
""")
_docs.set_doc(
chainerx.n_step_rnn,
"""n_step_rnn(n_layers, hx, ws, bs, xs, activation='tanh')
Stacked Uni-directional RNN function for sequence inputs.
This function calculates stacked Uni-directional RNN with sequences.
This function gets an initial hidden state :math:`h_0`,
an initial cell state :math:`c_0`, an input sequence :math:`x`,
weight matrices :math:`W`, and bias vectors :math:`b`.
This function calculates hidden states :math:`h_t` and :math:`c_t` for each
time :math:`t` from input :math:`x_t`.
.. math::
h_t = f(W_0 x_t + W_1 h_{t-1} + b_0 + b_1)
where :math:`f` is an activation function.
Weight matrices :math:`W` contain two matrices :math:`W_0` and
:math:`W_1`. :math:`W_0` is a parameter for an input sequence.
:math:`W_1` is a parameter for a hidden state.
Bias vectors :math:`b` contain two vectors :math:`b_0` and :math:`b_1`.
:math:`b_0` is a parameter for an input sequence.
:math:`b_1` is a parameter for a hidden state.
As the function accepts a sequence, it calculates :math:`h_t` for all
:math:`t` with one call. Two weight matrices and two bias vectors are
required for each layer. So, when :math:`S` layers exist, you need to
prepare :math:`2S` weight matrices and :math:`2S` bias vectors.
If the number of layers ``n_layers`` is greater than :math:`1`, the input
of the ``k``-th layer is the hidden state ``h_t`` of the ``(k-1)``-th layer.
Note that all input variables except those of the first layer may have a
different shape from the first layer.
Args:
n_layers(int): Number of layers.
hx (:class:`~chainerx.array`):
Variable holding stacked hidden states.
Its shape is ``(S, B, N)`` where ``S`` is number of layers and is
equal to ``n_layers``, ``B`` is mini-batch size, and ``N`` is
dimension of hidden units.
ws (list of list of :class:`~chainerx.array`): Weight matrices.
``ws[i]`` represents the weights for the i-th layer.
Each ``ws[i]`` is a list containing two matrices.
``ws[i][j]`` corresponds to :math:`W_j` in the equation.
Only ``ws[0][0]`` is ``(N, I)``-shaped, as it is multiplied with the
input variables. All other matrices have ``(N, N)`` shape.
bs (list of list of :class:`~chainerx.array`): Bias vectors.
``bs[i]`` represents the biases for the i-th layer.
Each ``bs[i]`` is a list containing two vectors.
``bs[i][j]`` corresponds to :math:`b_j` in the equation.
The shape of each vector is ``(N,)`` where ``N`` is the dimension of
hidden units.
xs (list of :class:`~chainerx.array`):
A list of :class:`~chainerx.array` holding input values.
Each element ``xs[t]`` holds input value for time ``t``.
Its shape is ``(B_t, I)``, where ``B_t`` is
mini-batch size for time ``t``, and ``I`` is size of input units.
Note that this function supports variable length sequences.
When sequences have different lengths, sort them in descending
order by length.
So ``xs`` needs to satisfy
``xs[t].shape[0] >= xs[t + 1].shape[0]``.
activation (str): Activation function name.
Please select ``tanh`` or ``relu``.
Returns:
tuple: This function returns a tuple containing two elements,
``hy`` and ``ys``.
- ``hy`` is the updated hidden states whose shape is the same as ``hx``.
- ``ys`` is a list of :class:`~chainerx.array` . Each element
``ys[t]`` holds hidden states of the last layer corresponding
to an input ``xs[t]``. Its shape is ``(B_t, N)`` where ``B_t`` is the
mini-batch size for time ``t``, and ``N`` is the size of hidden
units. Note that ``B_t`` is the same value as ``xs[t].shape[0]``.
""")
_docs.set_doc(
chainerx.n_step_birnn,
"""n_step_birnn(n_layers, hx, ws, bs, xs, activation='tanh')
Stacked Bi-directional RNN function for sequence inputs.
This function calculates stacked Bi-directional RNN with sequences.
This function gets an initial hidden state :math:`h_0`, an initial
cell state :math:`c_0`, an input sequence :math:`x`,
weight matrices :math:`W`, and bias vectors :math:`b`.
This function calculates hidden states :math:`h_t` and :math:`c_t` for each
time :math:`t` from input :math:`x_t`.
.. math::
h^{f}_t &=& f(W^{f}_0 x_t + W^{f}_1 h_{t-1} + b^{f}_0 + b^{f}_1), \\\\
h^{b}_t &=& f(W^{b}_0 x_t + W^{b}_1 h_{t-1} + b^{b}_0 + b^{b}_1), \\\\
h_t &=& [h^{f}_t; h^{b}_t], \\\\
where :math:`f` is an activation function.
Weight matrices :math:`W` contain two sets :math:`W^{f}` and
:math:`W^{b}`. :math:`W^{f}` holds the weight matrices for the forward
RNN and :math:`W^{b}` those for the backward RNN.
:math:`W^{f}` contains :math:`W^{f}_0` for an input sequence and
:math:`W^{f}_1` for a hidden state.
:math:`W^{b}` contains :math:`W^{b}_0` for an input sequence and
:math:`W^{b}_1` for a hidden state.
Bias vectors :math:`b` contain two sets :math:`b^{f}` and
:math:`b^{b}`. :math:`b^{f}` contains :math:`b^{f}_0` for an input sequence
and :math:`b^{f}_1` for a hidden state.
:math:`b^{b}` contains :math:`b^{b}_0` for an input sequence and
:math:`b^{b}_1` for a hidden state.
As the function accepts a sequence, it calculates :math:`h_t` for all
:math:`t` with one call. Two weight matrices and two bias vectors are
required for each layer of each direction. So, when :math:`S` layers
exist, you need to prepare :math:`4S` weight matrices and :math:`4S`
bias vectors.
If the number of layers ``n_layers`` is greater than :math:`1`, the input
of the ``k``-th layer is the hidden state ``h_t`` of the ``(k-1)``-th layer.
Note that all input variables except those of the first layer may have a
different shape from the first layer.
Args:
n_layers(int): Number of layers.
hx (:class:`~chainerx.array`):
Variable holding stacked hidden states.
Its shape is ``(2S, B, N)`` where ``S`` is number of layers and is
equal to ``n_layers``, ``B`` is mini-batch size, and ``N`` is
dimension of hidden units. Because of bi-direction, the
first dimension length is ``2S``.
ws (list of list of :class:`~chainerx.array`): Weight matrices.
``ws[2 * i + di]`` represents the weights for the i-th layer.
Note that ``di = 0`` for the forward RNN and ``di = 1`` for the
backward RNN.
Each ``ws[2 * i + di]`` is a list containing two matrices.
``ws[2 * i + di][j]`` corresponds to :math:`W^{f}_j` if ``di = 0``
and to :math:`W^{b}_j` if ``di = 1`` in the equation.
Only ``ws[0][0]`` and ``ws[1][0]`` are ``(N, I)``-shaped, as they are
multiplied with input variables. All other matrices have ``(N, N)``
shape.
bs (list of list of :class:`~chainerx.array`): Bias vectors.
``bs[2 * i + di]`` represents the biases for the i-th layer.
Note that ``di = 0`` for the forward RNN and ``di = 1`` for the
backward RNN.
Each ``bs[2 * i + di]`` is a list containing two vectors.
``bs[2 * i + di][j]`` corresponds to :math:`b^{f}_j` if ``di = 0``
and to :math:`b^{b}_j` if ``di = 1`` in the equation.
The shape of each vector is ``(N,)`` where ``N`` is the dimension of
hidden units.
xs (list of :class:`~chainerx.array`):
A list of :class:`~chainerx.array` holding input values.
Each element ``xs[t]`` holds input value
for time ``t``. Its shape is ``(B_t, I)``, where ``B_t`` is
mini-batch size for time ``t``, and ``I`` is size of input units.
Note that this function supports variable length sequences.
When sequences have different lengths, sort them in descending
order by length.
So ``xs`` needs to satisfy
``xs[t].shape[0] >= xs[t + 1].shape[0]``.
activation (str): Activation function name.
Please select ``tanh`` or ``relu``.
Returns:
tuple: This function returns a tuple containing two elements,
``hy`` and ``ys``.
- ``hy`` is the updated hidden states whose shape is the same as ``hx``.
- ``ys`` is a list of :class:`~chainerx.array` . Each element
``ys[t]`` holds hidden states of the last layer corresponding
to an input ``xs[t]``. Its shape is ``(B_t, N)`` where ``B_t``
is the mini-batch size for time ``t``, and ``N`` is the size of
hidden units. Note that ``B_t`` is the same value as ``xs[t].shape[0]``.
""")
| hvy/chainer | chainerx/_docs/routines.py | Python | mit | 127,369 | [
"Gaussian"
] | 39c6c047602574d716bc3211414de4c45c21397d2fdf357e82569c360490d028 |
#
# @BEGIN LICENSE
#
# Psi4: an open-source quantum chemistry software package
#
# Copyright (c) 2007-2021 The Psi4 Developers.
#
# The copyrights for code used from other parties are included in
# the corresponding files.
#
# This file is part of Psi4.
#
# Psi4 is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, version 3.
#
# Psi4 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with Psi4; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# @END LICENSE
#
import math
import collections
import numpy as np
import qcelemental as qcel
def BFS(geom, elem, seed_atoms=None, bond_threshold=1.20):
"""Detect fragments among real atoms through a breadth-first search (BFS) algorithm.
Parameters
----------
geom : ndarray of float
(nat x 3) Cartesian coordinates [a0] of real atoms.
elem : ndarray of str or int
(nat) Either element symbols or atomic numbers corresponding to `geom`.
Used for selecting the covalent radius.
seed_atoms : list (optional)
List of lists of atoms (0-indexed) belonging to independent fragments.
Useful to prompt algorithm or to define intramolecular fragments through
border atoms. Example: `[[1, 0], [2]]`.
bond_threshold : float (optional)
Factor applied to the sum of covalent radii to determine the bond cutoff.
Returns
-------
list of lists
Array of atom indices (0-indexed) of detected fragments. See example
below for how to transform inputs.
Notes
-----
Relies upon covalent radii and so can be faulty for close (especially
hydrogen-bonded) fragments. `seed_atoms` can help.
Authors
-------
Original code from Michael S. Marshall, linear-scaling algorithm from
Trent M. Parker, revamped by Lori A. Burns
Usage
-----
>>> # [1] BFS on large array of jumbled coordinates `geom` and element
>>> # symbols `elem`. Use the output `fragments` to form list of small
>>> # per-fragment arrays.
>>> fragments = BFS(geom, elem)
>>> frag_geoms = [geom[fr] for fr in fragments]
>>> frag_elems = [elem[fr] for fr in fragments]
"""
radii = _get_covalent_radii(elem)
max_covalent_radius = np.max(radii)
blocksize = int(math.ceil(2.0 * bond_threshold * max_covalent_radius))
allblocks = _get_blocks(geom, blocksize)
bond_tree = _get_bond_tree(radii, geom, allblocks, blocksize, bond_threshold)
if seed_atoms is None:
seed_atoms = []
allfragments = seed_atoms
# bare queues
new_list = []
break_list = []
unfound_list = list(range(geom.shape[0]))
# seed queues from intrafrag atom hints
for ifr, fr in enumerate(allfragments):
new_list.append([])
for at in fr:
new_list[ifr].append(at)
break_list.append(at)
unfound_list.remove(at)
# perform BFS
while len(unfound_list) > 0:
for ifr, fr in enumerate(new_list):
while len(fr) > 0:
for at1 in reversed(fr):
for at2 in bond_tree[at1]:
if at2 in unfound_list and at2 not in break_list:
allfragments[ifr].append(at2)
new_list[ifr].append(at2)
unfound_list.remove(at2)
new_list[ifr].remove(at1)
if len(unfound_list) > 0:
at_new = unfound_list[0]
allfragments.append([at_new])
new_list.append([at_new])
unfound_list.remove(at_new)
for fr in range(len(allfragments)):
allfragments[fr] = sorted(allfragments[fr])
return allfragments
def _get_covalent_radii(elem):
"""Return covalent radii [a0] for all atoms
Look-up values for covalent (or ionic) radii by atomic element [A] from
"Inorganic Chemistry" 3rd ed, Housecroft, Appendix 6, pgs 1013-1014
"""
covalent_radii_lookup = {
'H' : 0.37, 'He': 0.30,
'Li': 1.02, 'Be': 0.27, 'B' : 0.88, 'C' : 0.77, 'O' : 0.73, 'N' : 0.75, 'F' : 0.71, 'Ne': 0.84,
'Na': 1.02, 'Mg': 0.72, 'Al': 1.30, 'Si': 1.18, 'P' : 1.10, 'S' : 1.03, 'Cl': 0.99, 'Ar': 1.00,
'K' : 1.38, 'Ca': 1.00,
'Sc': 0.75, 'Ti': 0.86, 'V' : 0.79, 'Cr': 0.73, 'Mn': 0.67,
'Fe': 0.61, 'Co': 0.64, 'Ni': 0.55, 'Cu': 0.46, 'Zn': 0.60,
'Ga': 1.22, 'Ge': 1.22, 'As': 1.22, 'Se': 1.17, 'Br': 1.14, 'Kr': 1.03,
'I' : 1.33,
'X' : 0.00} # yapf: disable
#'RN': 2.40 / 1.5, # extrapolation
#'H': 1.06 / 1.5, # Bondi JPC 68 441 (1964)
#'SN': 2.16 / 1.5, # Bondi JPC 68 441 (1964)
#'SB': 2.12 / 1.5, # Bondi JPC 68 441 (1964)
#'TE': 2.08 / 1.5, # Bondi JPC 68 441 (1964)
#'XE': 2.05 / 1.5} # Bondi JPC 68 441 (1964)
nat = elem.shape[0]
try:
caps = [el.capitalize() for el in elem]
except AttributeError:
caps = [qcel.periodictable.to_E(z) for z in elem]
covrad = np.fromiter((covalent_radii_lookup[caps[at]] for at in range(nat)), dtype=float, count=nat)
return np.divide(covrad, qcel.constants.bohr2angstroms)
def _get_key(x, y, z, b):
"""Return key string from point values and block resolution"""
return "{},{},{}".format(x - x % b, y - y % b, z - z % b)
def _distance2(v, u):
"""Compute the square distance between points defined by vectors *v* and *u*."""
return sum(((v[i] - u[i]) * (v[i] - u[i]) for i in range(len(v))))
def _get_blocks(geom, blocksize):
"""Partition atoms into spatial blocks"""
allblocks = collections.defaultdict(list)
for at in range(geom.shape[0]):
x, y, z = (int(math.floor(geom[at][j])) for j in range(3))
xyz_key = _get_key(x, y, z, blocksize)
allblocks[xyz_key].append(at)
return allblocks
def _get_bond_tree(radii, geom, allblocks, blocksize, bond_threshold):
"""Create bond tree from atomic coordinates"""
bond_tree = [[] for at in range(geom.shape[0])]
for blk in allblocks:
atom_list = _get_atoms_from_blocks(_get_neighbor_blocks(blk, blocksize, allblocks), allblocks)
for at1 in allblocks[blk]:
for at2 in atom_list:
r2_ij = _distance2(geom[at1], geom[at2])
r2_thresh = bond_threshold * (radii[at1] + radii[at2])**2
if at1 != at2 and r2_ij <= r2_thresh:
if at2 not in bond_tree[at1]:
bond_tree[at1].append(at2)
if at1 not in bond_tree[at2]:
bond_tree[at2].append(at1)
return bond_tree
def _get_neighbor_blocks(block, blocksize, allblocks):
"""Find occupied blocks which neighbor `block`, including self"""
x, y, z = (int(block.split(',')[j]) for j in range(3))
neighbor_blocks = [_get_key(x + blocksize * (i - 1),
y + blocksize * (j - 1),
z + blocksize * (k - 1),
blocksize)
for i in range(3)
for j in range(3)
for k in range(3)] # yapf: disable
active_blocks = list(set(neighbor_blocks) & set(allblocks))
return active_blocks
def _get_atoms_from_blocks(blocks, master_blocks):
"""Get list of atoms in a set of blocks"""
atoms_nested = [master_blocks[blk] for blk in blocks]
atoms = [at for sublist in atoms_nested for at in sublist]
return atoms
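The full implementation above adds covalent radii and spatial blocking for linear scaling, but the underlying connected-component idea can be shown in a self-contained miniature. The distance `cutoff` below is a plain illustrative threshold, not the radii-based criterion used by `_get_bond_tree`.

```python
import collections
import numpy as np

def fragments_bfs(geom, cutoff=1.6):
    """Group atoms into fragments: atoms closer than `cutoff` (same units
    as `geom`) count as bonded; connected components are fragments."""
    nat = len(geom)
    # adjacency list from the pairwise distance criterion
    bonded = [[] for _ in range(nat)]
    for i in range(nat):
        for j in range(i + 1, nat):
            if np.linalg.norm(np.asarray(geom[i]) - np.asarray(geom[j])) <= cutoff:
                bonded[i].append(j)
                bonded[j].append(i)
    seen, frags = set(), []
    for start in range(nat):
        if start in seen:
            continue
        queue, frag = collections.deque([start]), []
        seen.add(start)
        while queue:                      # breadth-first traversal
            at = queue.popleft()
            frag.append(at)
            for nb in bonded[at]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        frags.append(sorted(frag))
    return frags

# two well-separated diatomics -> two fragments
geom = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0), (6.0, 0.0, 0.0)]
```

With a cutoff large enough to bridge the gap, the same routine returns a single fragment, which is what `seed_atoms` is meant to prevent in the hydrogen-bonded case.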
| ashutoshvt/psi4 | psi4/driver/qcdb/bfs.py | Python | lgpl-3.0 | 8,260 | [
"Psi4"
] | d4852bf6eb6175fe568e316d8cbd1c752eb1de86719399c64719c5417c1d1ecd |
#! /usr/bin/env python
################################################################################
# Ruth Lunt 2013 #
################################################################################
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from pylab import *
from optparse import OptionParser
######################## Set up optional arguments #############################
# specify input file
parser = OptionParser()
parser.add_option("-f", "--file",
action = "store", type = "string", dest = "file", default = "GULP.gin",
help = "Path to input file [default: ./geometry.in]")
(options, args) = parser.parse_args()
file = options.file
########################### Begin main program #################################
print "A program to calculate the effective spring constants, C6 Lennard-Jones parameter and bsm values for oxygen in a mixed metal oxide, and to plot the potentials for the metal oxide."
print "Ruth Lunt 2013\nDate last edited: 20/08/2013"
# Open the file and split into Lines
f = open(file,"r")
lines = f.readlines()
f.close()
# lists to be appended
buck = []
coul = []
lenn_fix = []
lenn_mix = []
spring_fix = []
spring_mix = []
bsm_mix = []
p_fix = []
copy_lenn_fix = 0
copy_lenn_mix = 0
copy_spring_fix = 0
copy_spring_mix = 0
copy_bsm = 0
copy_p = 0
# determination of metal oxide composition
composition = raw_input("What is the metal composition of the mixed metal oxide?")
elements = composition.split()
# start reading lines
for element in elements:
for line in lines:
inp = line.split()
if inp == []:
continue
if len(inp) == 12 and inp[0] == element:
buck.append(inp)
if len(inp) == 3 and inp[0] == element and (inp[1] == "core" or inp[1] == "shel"):
coul.append(inp)
if copy_spring_fix == 1 and inp[0] == element:
spring_fix.append(inp)
copy_spring_fix = 0
if len(inp) == 2 and (inp[0] == "spring" and inp[1] == element):
copy_spring_fix = 1
if copy_spring_mix == 1 and inp[0] == "O":
copy_spring_mix = 0
# addition of 'element label' allows for distinction between multiple oxygen spring constants
inp = element.split() + inp
spring_mix.append(inp)
if len(inp) ==2 and (inp[0] == "spring" and inp[1] == element):
copy_spring_mix = 1
# copy_lenn_fix is different as it needs to read in multiple lines for lennard
if copy_lenn_fix <= 2 and copy_lenn_fix > 0 and (inp[0] == element or inp[2] == element):
copy_lenn_fix = copy_lenn_fix + 1
lenn_fix.append(inp)
if len(inp) == 2 and (inp[0] == "lennard" and inp[1] == element):
copy_lenn_fix = 1
if copy_lenn_mix == 1 and inp[0] == "O" and inp[2] == "O":
copy_lenn_mix = 0
# addition of 'element label' allows for distinction between multiple oxygen C6 values
inp = element.split() + inp
lenn_mix.append(inp)
if len(inp) == 2 and (inp[0] == 'lennard' and inp[1] == element):
copy_lenn_mix = 1
if copy_bsm == 1 and inp[0] == "O":
copy_bsm = 0
# addition of 'element label' allows for distinction between multiple oxygen bsm values
inp = element.split() + inp
bsm_mix.append(inp)
if len(inp) == 2 and (inp[0] == "bsm" and inp[1] == element):
copy_bsm = 1
if copy_p == 1 and len(inp) == 1:
copy_p = 0
# allows for distinction between multiple oxygen P values (electron number - effective number of electrons contributing to polarizability)
# these values are obtained from the static dipole polarizability (alpha) and C6 values for the specific metal oxides
inp = element.split() + inp
p_fix.append(inp)
if len(inp) == 2 and (inp[0] == "P" and inp[1] == element):
copy_p = 1
# getting the k2 values for oxygen
for spring in spring_mix:
if spring[0] == "Zn":
k2_Zn = spring[2]
elif spring[0] == "Sn":
k2_Sn = spring[2]
elif spring[0] == "In":
k2_In = spring[2]
# getting the k4 values for oxygen
for spring in spring_mix:
if spring[0] == "Zn":
k4_Zn = spring[3]
elif spring[0] == "Sn":
k4_Sn = spring[3]
elif spring[0] == "In":
k4_In = spring[3]
# getting bsm values for oxygen
for bsm in bsm_mix:
if bsm[0] == "Zn":
bsm_Zn = bsm[3]
elif bsm[0] == "Sn":
bsm_Sn = bsm[3]
elif bsm[0] == "In":
bsm_In = bsm[3]
# getting p values for oxygen
for p in p_fix:
if p[0] == "Zn":
P_Zn = p[1]
elif p[0] == "Sn":
P_Sn = p[1]
elif p[0] == "In":
P_In = p[1]
# determination of metal oxide formula
formula = raw_input("What is the elemental ratio of the mixed metal oxide (same order as composition)?")
ratio = formula.split()
# values for interpolation formula
x = 0
y = 0
z = 0
# on addition of other elements into the code this section must be modified
if elements[0] == "Zn":
x = ratio[0]
if elements[1] == "Sn":
z = ratio[1]
elif elements[2] == "Sn":
z = ratio[2]
if elements[1] == "In":
y = ratio[1]
elif elements [2] == "In":
y = ratio[2]
elif elements[0] == "In":
y = ratio[0]
if elements[1] == "Zn":
x = ratio[1]
elif elements[2] == "Zn":
x = ratio[2]
if elements[1] == "Sn":
z = ratio[1]
elif elements [2] == "Sn":
z = ratio[2]
elif elements[0] == "Sn":
z = ratio[0]
if elements[1] == "Zn":
x = ratio[1]
elif elements[2] == "Zn":
x = ratio[2]
if elements[1] == "In":
y = ratio[1]
elif elements [2] == "In":
y = ratio[2]
# to avoid zero-division errors: if a metal is not present in the oxide, its k2, k4, C6, P and bsm values are set to 1; the corresponding term in the interpolation formula is still zero because its weight is zero
if x == 0:
k2_Zn = 1
if y == 0:
k2_In = 1
if z == 0:
k2_Sn = 1
if x == 0:
k4_Zn = 1
if y == 0:
k4_In = 1
if z == 0:
k4_Sn = 1
if x == 0:
bsm_Zn = 1
if y == 0:
bsm_In = 1
if z == 0:
bsm_Sn = 1
if x == 0:
P_Zn = 1
if y == 0:
P_In = 1
if z == 0:
P_Sn = 1
# calculation of effective oxygen k2 spring constant
def spring_k2(x, y, z, k2_Zn, k2_Sn, k2_In):
k2_eff = (float(x) + float(y) + float(z))/((float(x)/float(k2_Zn)) + (float(y)/float(k2_In)) + (float(z)/float(k2_Sn)))
return k2_eff
k2 = spring_k2(x, y, z, k2_Zn, k2_Sn, k2_In)
# calculation of effective oxygen k4 spring constant
def spring_k4(x, y, z, k4_Zn, k4_Sn, k4_In):
k4_eff = (float(x) + float(y) + float(z))/((float(x)/float(k4_Zn)) + (float(y)/float(k4_In)) + (float(z)/float(k4_Sn)))
return k4_eff
k4 = spring_k4(x, y, z, k4_Zn, k4_Sn, k4_In)
# calculation of effective oxygen bsm
def bsm(x, y, z, bsm_Zn, bsm_Sn, bsm_In):
bsm_eff = (float(x) + float(y) + float(z))/((float(x)/float(bsm_Zn)) + (float(y)/float(bsm_In)) + (float(z)/float(bsm_Sn)))
return bsm_eff
bsm = bsm(x, y, z, bsm_Zn, bsm_Sn, bsm_In)
print "effective oxygen k2 = " + "{0:6.4f}".format(k2)
print "effective oxygen k4 = " + "{0:6.4f}".format(k4)
print "effective oxygen bsm = " + "{0:6.4f}".format(bsm)
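Each of the effective-oxygen formulas above (`spring_k2`, `spring_k4`, `bsm`, and later `P_eff`) is the same composition-weighted harmonic mean. A standalone sketch of that single underlying formula; the weights and k2 values below are made-up illustrative numbers, not fitted parameters:

```python
def harmonic_mean(weights, values):
    """Composition-weighted harmonic mean:
    sum(w) / sum(w_i / v_i), the form used for every effective
    oxygen parameter in this script."""
    assert len(weights) == len(values)
    return sum(weights) / sum(w / float(v) for w, v in zip(weights, values))

# e.g. a 2:1:1 Zn:In:Sn oxide with hypothetical k2 values 30.0, 40.0, 60.0
k2_eff = harmonic_mean([2, 1, 1], [30.0, 40.0, 60.0])
```

Factoring the formula out like this would also avoid repeating the three-term expression once per parameter.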
# oxygen shell charge
for i in coul:
if i[0] == "O" and i[1] == "shel":
Y = float(i[2])
# calculation of alpha (static dipole polarizability) for the metal oxide (to later calculate C6)
def alpha_eff(Y, k2):
alpha = (Y**2)/k2
return alpha
alpha = alpha_eff(Y, k2)
print "effective oxygen alpha = " + "{0:6.4f}".format(alpha)
# calculation of P for metal oxide
def P_eff(x, y, z, P_Zn, P_Sn, P_In):
P = (float(x) + float(y) + float(z))/((float(x)/float(P_Zn)) + (float(y)/float(P_In)) + (float(z)/float(P_Sn)))
return P
P = P_eff(x, y, z, P_Zn, P_Sn, P_In)
print "effective oxygen P = " + "{0:6.4f}".format(P)
# calculation of C6 for mixed metal oxide
def C6_eff(alpha, P):
C6 = (0.75) * (alpha**(1.5) * (P**0.5))
return C6
C6 = C6_eff(alpha, P)
print "effective oxygen C6 = " + "{0:6.4f}".format(C6)
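The α and C6 steps above follow α = Y²/k2 and C6 = 0.75·α^(3/2)·P^(1/2), which looks like a Slater–Kirkwood-type combination rule. A small sketch of the same arithmetic (function names are mine):

```python
def polarizability(shell_charge, k2):
    # alpha = Y**2 / k2, from the oxygen shell charge and the effective k2
    return shell_charge ** 2 / k2

def c6_coefficient(alpha, p):
    # C6 = 0.75 * alpha**1.5 * P**0.5
    return 0.75 * alpha ** 1.5 * p ** 0.5

alpha = polarizability(2.0, 2.0)   # -> 2.0
print(c6_coefficient(alpha, 4.0))
```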
# addition of the effective oxygen k2 and k4 values to the spring_fix list
# only need one O spring term - can remove the rest from spring_mix list
spring_mix = spring_mix[0]
spring_mix[2] = k2
spring_mix[3] = k4
# to remove 'element label' from list
spring_mix = spring_mix[1:6]
spring_fix.append(spring_mix)
#Addition of effective oxygen bsm value
bsm_mix = bsm_mix[0]
bsm_mix[3] = bsm
bsm_mix = bsm_mix[1:7]
# Addition of oxygen C6 value to lenn_fix list
lenn_mix = lenn_mix[0]
lenn_mix[6] = C6
lenn_mix = lenn_mix[1:11]
lenn_fix.append(lenn_mix)
# Buckingham potential - for plot
def buck_pot(a, rho, c, cut):
array_size = int(cut/0.01)
buck = np.zeros(shape= (array_size,2))
for x in range (1, array_size):
buck[x, 0] = x*0.01
buck[x, 1] = a*np.exp(-buck[x, 0]/rho) - c/buck[x, 0]**6
return buck
# coulomb potential - for plot
def coulomb_pot(q1, q2, cut):
array_size = int(cut/0.01)
coulomb = np.zeros(shape = (array_size, 2))
for x in range (1, array_size):
coulomb[x, 0] = x*0.01
# convert Hartree to eV and Angstrom to Bohr
coulomb[x, 1] = 27.211396132*(q1*q2)/(x*0.01*1.889725989)
return coulomb
# Lennard-Jones potential - for plot
def lennard_pot(A, B, cut):
array_size = int(cut/0.01)
lenn = np.zeros(shape = (array_size, 2))
for x in range (1, array_size):
lenn[x, 0] = x*0.01
lenn[x, 1] = A/((x*0.01)**12) - B/((x*0.01)**6)
return lenn
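The three plotting helpers above tabulate standard pair potentials on a 0.01 Å grid. The same energies can be computed point-wise; a sketch (conversion constants copied from the Coulomb helper, function names are mine, and the Buckingham dispersion term is written in its standard C/r⁶ form):

```python
import math

HARTREE_TO_EV = 27.211396132
ANGSTROM_TO_BOHR = 1.889725989

def buckingham(a, rho, c, r):
    # A*exp(-r/rho) - C/r**6
    return a * math.exp(-r / rho) - c / r ** 6

def coulomb(q1, q2, r):
    # q1*q2/r in atomic units, converted to eV and Angstrom
    return HARTREE_TO_EV * q1 * q2 / (r * ANGSTROM_TO_BOHR)

def lennard_jones(a, b, r):
    # A/r**12 - B/r**6
    return a / r ** 12 - b / r ** 6

r = 2.0
print(buckingham(1000.0, 0.3, 10.0, r) + coulomb(2.0, -2.0, r) + lennard_jones(0.0, 0.0, r))
```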
# print out potentials
print "species"
coul_input = '\n'.join(str(i) for i in coul)
print coul_input
print "buck"
buckingham = '\n'.join(str(i) for i in buck)
print buckingham
print "lennard"
lennard = '\n'.join(str(i) for i in lenn_fix)
print lennard
print "spring"
spring = '\n'.join(str(i) for i in spring_fix)
print spring
print "bsm"
print bsm_mix
# Lennard-Jones parameters set to zero in case there are no values in the input file
A = 0
B = 0
# Getting values to plot potentials
for i in buck:
buck = buck_pot(float(i[4]), float(i[5]), float(i[6]), float(i[8]))
for j in coul:
if j[0] == i[0] and j[1] == i[1]:
q1 = j[2]
if j[0] == i[2] and j[1] == i[3]:
q2 = j[2]
coulomb = coulomb_pot(float(q1), float(q2), float(i[8]))
for k in lenn_fix:
if k[0] == i[0] and k[2] == i[2]:
A = k[4]
B = k[5]
# in case element order is reversed in input file
elif k[2] == i[0] and k[0] == i[2]:
A = k[4]
B = k[5]
lenn_j = lennard_pot(float(A), float(B), float(i[8]))
#total_pot = np.add(buck,coul,lenn_j)
total_pot = buck + coulomb + lenn_j
plt.plot(buck[1:, 0],buck[1:, 1], label = 'Buckingham potential')
plt.plot(coulomb[1:, 0], coulomb[1:, 1], label = 'Coulombic interaction')
plt.plot(lenn_j[1:, 0], lenn_j[1:, 1], label = 'Lennard potential')
plt.plot(buck[1:, 0], total_pot[1:, 1], label = 'Total potential')
plt.legend(('Buckingham potential', 'Coulombic interaction', 'Lennard potential', 'Total potential'))
plt.axis([0.00, 5.00, -150, 150])
plt.xlabel('Interatomic distance, r', fontsize = 16)
plt.ylabel('Potential Energy, eV', fontsize = 16)
plt.title("%s" % (str(i[0] + " (" + i[1] + ")" + " - " + i[2] + " (" + i[1] + ")")), fontsize = 18)
xticklines = getp(gca(), 'xticklines')
yticklines = getp(gca(), 'yticklines')
xgridlines = getp(gca(), 'xgridlines')
ygridlines = getp(gca(), 'ygridlines')
xticklabels = getp(gca(), 'xticklabels')
yticklabels = getp(gca(), 'yticklabels')
setp(xticklines, 'linewidth', 3)
setp(yticklines, 'linewidth', 3)
#setp(xgridlines, 'linestyle', '-')
#setp(ygridlines, 'linestyle', '-')
setp(yticklabels, 'color', 'Black', fontsize='medium')
setp(xticklabels, 'color', 'Black', fontsize='medium')
plt.grid(True)
plt.savefig('%s.eps' % (str(i[0] + i[2])))
plt.show()
| WMD-group/mixed_metal_oxide_potentials | metal_oxide_mix.py | Python | gpl-3.0 | 11,526 | [
"GULP"
] | 94068d8850e4a54a652eb0c698c29fc5a3f5473058ecd544c2704d40ac26bdb7 |
import unittest
from hypothesis import given
from paraview import servermanager
from paraview.simple import Disconnect
from simphony.core.cuba import CUBA
from simphony_paraview.show import show
from simphony_paraview.core.testing import (
cuds_containers,
create_example_mesh, create_example_lattice, create_example_particles)
class TestShow(unittest.TestCase):
def setUp(self):
if servermanager.ActiveConnection is not None:
Disconnect()
self.closed = False
def tearDown(self):
if servermanager.ActiveConnection is not None:
raise RuntimeError('There is still an active connection')
@given(cuds_containers)
def test_valid_cuds_containers(self, setup):
# XXX This is a very basic test.
# given
cuds, kind = setup
def close(obj, event):
obj.TerminateApp()
show(cuds, testing=close)
def test_lattice_showing_point_data(self):
cuds = create_example_lattice()
def close(obj, event):
obj.TerminateApp()
show(cuds, select=(CUBA.TEMPERATURE, 'nodes'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'particles'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'points'), testing=close)
def test_mesh_showing_point_data(self):
cuds = create_example_mesh()
def close(obj, event):
obj.TerminateApp()
show(cuds, select=(CUBA.TEMPERATURE, 'points'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'nodes'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'particles'), testing=close)
def test_particles_showing_point_data(self):
cuds = create_example_particles()
def close(obj, event):
obj.TerminateApp()
show(cuds, select=(CUBA.TEMPERATURE, 'particles'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'nodes'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'points'), testing=close)
def test_mesh_showing_cell_data(self):
cuds = create_example_mesh()
def close(obj, event):
obj.TerminateApp()
show(cuds, select=(CUBA.TEMPERATURE, 'elements'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'bonds'), testing=close)
def test_particles_showing_cell_data(self):
cuds = create_example_particles()
def close(obj, event):
obj.TerminateApp()
show(cuds, select=(CUBA.TEMPERATURE, 'bonds'), testing=close)
with self.assertRaises(ValueError):
show(cuds, select=(CUBA.TEMPERATURE, 'elements'), testing=close)
def test_unknown_container(self):
container = object()
with self.assertRaises(TypeError):
show(container)
| simphony/simphony-paraview | simphony_paraview/tests/test_show.py | Python | bsd-2-clause | 3,134 | [
"ParaView"
] | 5156368ebb7e2eaa7386bde69931427acf3fcb2583dbba5f9891cecbb9cc64a6 |
#!/usr/bin/env python
import vtk
from vtk.test import Testing
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
# create a rendering window and renderer
ren1 = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren1)
renWin.StereoCapableWindowOn()
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
reader = vtk.vtkGenericEnSightReader()
# Make sure all algorithms use the composite data pipeline
cdp = vtk.vtkCompositeDataPipeline()
reader.SetDefaultExecutivePrototype(cdp)
del cdp
reader.SetCaseFileName("" + str(VTK_DATA_ROOT) + "/Data/EnSight/elements6.case")
geom = vtk.vtkGeometryFilter()
geom.SetInputConnection(reader.GetOutputPort())
calc = vtk.vtkArrayCalculator()
calc.SetInputConnection(geom.GetOutputPort())
calc.SetAttributeModeToUsePointData()
calc.SetFunction("pointTensors_XZ - pointTensors_YZ")
calc.AddScalarVariable("pointTensors_XZ","pointTensors", 5)
calc.AddScalarVariable("pointTensors_YZ","pointTensors", 4)
calc.SetResultArrayName("test")
mapper = vtk.vtkHierarchicalPolyDataMapper()
mapper.SetInputConnection(calc.GetOutputPort())
mapper.SetColorModeToMapScalars()
mapper.SetScalarModeToUsePointFieldData()
mapper.ColorByArrayComponent("test",0)
mapper.SetScalarRange(-0.1,0.1)
actor = vtk.vtkActor()
actor.SetMapper(mapper)
# assign our actor to the renderer
ren1.AddActor(actor)
# enable user interface interactor
iren.Initialize()
renWin.Render()
# prevent the tk window from showing up then start the event loop
reader.SetDefaultExecutivePrototype(None)
# --- end of script --
| hlzz/dotfiles | graphics/VTK-7.0.0/IO/EnSight/Testing/Python/EnSightTensorsInversion.py | Python | bsd-3-clause | 1,611 | [
"VTK"
] | 002c5054ad725b7d0721480e1e8f0e4e57341ea226f1d5d966291f595b0b4bc0 |
# -*- coding: utf-8 -*-
"""
Created on Wed Feb 25 201 - 04:24:26
@author: Yoan BOUZIN email : yoan.bouzin@gmail.com
"""
try:
    import time
    import subprocess
    import platform
    import os
    import re
    import sys
    import Tkinter  # Python 2
    import ttk
    from tkFileDialog import askopenfilename
    from Tkinter import *
    from tkMessageBox import *
    import threading
except ImportError:
    from tkinter import *  # Python 3
    from tkinter import ttk
    from tkinter.filedialog import askopenfilename
    from tkinter.messagebox import *
    import tkinter.font
    import time
    import subprocess
    import platform
    import os
    import re
    import sys
    import threading
###########
# CREDITS #
###########
print("BlastLP v1.0 - 16.02.2015 - 17:43 \n")
print("This software was made to facilitate and automatise local Blast analysis")
print("Please install NCBI Blast+ v2.2.29 or a later version")
print("Contact Yoan BOUZIN at yoan.bouzin@gmail.com, Master 1 graduate in Bioinformatics, Rennes 1 \n")
###########
# PROCESS #
###########
if platform.system() == 'Linux' or platform.system() == 'Darwin':
chemin = os.getcwd()
#subprocess.call("cd "+chemin)
subprocess.call("chmod +x makeblastdb tblastn blastp seqfetch.def.pl", shell=True)
##################
# Graphical User Interface #
##################
# main window
fenetre = Tk()
fenetre.resizable(0,0)
# Title
fenetre.title("BLAST-LP v1.0")
#Logo
img = PhotoImage(file="logoGUI.png")
Label(fenetre, image=img).grid(rowspan=5, column=0)
# Title next to the logo
Label(fenetre, text="BLAST Local Pipeline").grid(row=0, column=1, columnspan=4)
Label(fenetre, text="version 1.0").grid(row=1, column=1, columnspan=4)
Label(fenetre, text="Application to automatise Blast+ v2.2.30 command").grid(row=2, column=1, columnspan=4)
Label(fenetre, text="By Yoan BOUZIN").grid(row=3, column=1, columnspan=4)
Label(fenetre, text="yoan.bouzin@gmail.com",pady=3).grid(row=4, column=1, columnspan=4)
# Progress bar shared by parts 1 and 2
progressbar = ttk.Progressbar(orient=HORIZONTAL, length=400, mode='indeterminate', variable=0, maximum=10)
#######
# Part 1 #
#######
# Title
a = Label(fenetre, text="Create the library database (makeblastdb)",pady=5).grid(row=5, columnspan=3, sticky="W")
ttk.Separator(fenetre, orient=HORIZONTAL).grid(row=5, columnspan=5, sticky="NEW")
# Variable and radio button creation
var = IntVar()
for item in [1,2]:
Label(fenetre, text="Type :").grid(row=6, column=0, sticky="W",padx=30)
if item == 1:
rb = Radiobutton(fenetre, text='Nucleotides',value=item,variable=var).grid(row=6, column=0,sticky="E")
if item == 2:
rb = Radiobutton(fenetre, text='Proteins',value=item,variable=var).grid(row=6, column=1,sticky="W")
# Get the radio button value (Nucleotides/Proteins)
def typeNP():
return var.get()
# StringVar() object holding the Entry value
textEntry1 = StringVar()
pathTextEntry1 = StringVar()
# File label
Label(fenetre, text="Files : ").grid(row=7, column=0, sticky="E")
#Entry file
entry1 = Entry(fenetre, textvariable=textEntry1,state='disabled')
entry1.grid(row=7, column=1)
#entry path
pathEntry1 = Entry(fenetre, textvariable=pathTextEntry1)
# Browse button callback: asks for the file and stores its name and path
def GetFileToMakeLibraryDatabase():
import os
pathfile = askopenfilename(title='Open the Library Datafile')
textEntry1.set(os.path.split(pathfile)[1])
pathTextEntry1.set(os.path.split(pathfile)[0])
# Browse button
Button(fenetre, text="Browse",command=GetFileToMakeLibraryDatabase).grid(row=7, column=2)
# Get the value of entry1
def callback():
return entry1.get()
def pathCallback():
return pathEntry1.get()
# database creation function
def makeblastdb():
"""
create the local library database with your input file
"""
import subprocess
import os
import platform
from time import strftime, gmtime
OS = platform.system()
if OS == 'Linux' or OS == 'Darwin':
path = pathCallback()+'/'
print(path)
if OS == 'Windows':
path = pathCallback()+'\\'
DB = callback()
if os.path.isfile(path+DB) != True:
progressbar.grid_forget()
showerror('Error : Missing File !', "You must choose a valid file")
typ = str(typeNP())
if typ != '1' and typ != '2':
progressbar.grid_forget()
showerror('Error : Missing Type !', "You did not choose your type\n(nucleotides or proteins)")
t0 = time.time()
if os.path.isfile(path+DB) and typ in ('1', '2'):
if OS == 'Windows':
if typ == '1':
process = subprocess.Popen("makeblastdb -in "+path+DB+" -dbtype nucl")
process.communicate()
t1 = time.time()
print("Finish in "+str(strftime("%H hour(s) %M minute(s) %S second(s)", gmtime(t1-t0))))
progressbar.stop()
showinfo('Information', "Your job finish in\n"+str(round(t1-t0,2))+" seconds")
if typ == '2':
process = subprocess.Popen("makeblastdb -in "+path+DB+" -dbtype prot")
process.communicate()
t1 = time.time()
print("Finish in "+str(strftime("%H hour(s) %M minute(s) %S second(s)", gmtime(t1-t0))))
progressbar.stop()
showinfo('Information', "Your job finish in\n"+str(round(t1-t0,2))+" seconds")
if OS == 'Linux' or OS == 'Darwin':
if typ == '1':
subprocess.call("makeblastdb -in "+path+DB+" -dbtype nucl", shell=True)
t1 = time.time()
print("Finish in "+str(strftime("%H hour(s) %M minute(s) %S second(s)", gmtime(t1-t0))))
progressbar.stop()
showinfo('Information', "Your job finish in\n"+str(round(t1-t0,2))+" seconds")
if typ == '2':
subprocess.call("makeblastdb -in "+path+DB+" -dbtype prot", shell=True)
t1 = time.time()
print("Finish in "+str(strftime("%H hours %M minute(s) %S second(s)", gmtime(t1-t0))))
progressbar.stop()
showinfo('Information', "Your job finish in\n"+str(round(t1-t0,2))+" seconds")
progressbar.grid_forget()
### Threading progress bar makeblastdb ###
def foo():
makeblastdb() # run makeblastdb in a worker thread
def start_foo_thread():
global foo_thread
foo_thread = threading.Thread(target=foo)
foo_thread.daemon = True
progressbar.grid(row=25,columnspan=4,pady=2,sticky=W+E)
#progressbar.step(100)
progressbar.start()
foo_thread.start()
fenetre.after(20, check_foo_thread)
def check_foo_thread():
if foo_thread.is_alive():
fenetre.after(20, check_foo_thread)
else:
progressbar.stop()
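The start/check pattern above runs the blocking job in a daemon thread and re-schedules `check_foo_thread` with `fenetre.after(20, ...)` so the Tk event loop (and the progress bar) stays responsive. A GUI-free sketch of the same polling idea (names are mine):

```python
import threading
import time

def run_in_background(work, poll=0.01):
    """Run `work` in a daemon thread and poll until it finishes,
    standing in for the Tk `after(20, check)` re-scheduling loop."""
    worker = threading.Thread(target=work)
    worker.daemon = True
    worker.start()
    while worker.is_alive():
        time.sleep(poll)   # the GUI version repaints the window here instead of sleeping

results = []
run_in_background(lambda: results.append("done"))
print(results)  # -> ['done']
```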
Button(fenetre, text="Run",command=start_foo_thread).grid(row=7,column=3)
######
# Part 2 #
######
# Variable and radio button creation
var2 = IntVar()
for item in [1,2]:
Label(fenetre, text="Type :").grid(row=9, column=0, sticky="W",padx=30)
if item == 1:
rb = Radiobutton(fenetre, text='Nucleotides',value=item,variable=var2).grid(row=9, column=0,sticky="E")
if item == 2:
rb = Radiobutton(fenetre, text='Proteins',value=item,variable=var2).grid(row=9, column=1,sticky="W")
# Get the radio button value (Nucleotides/Proteins)
def typeNP2():
return var2.get()
# StringVar object
Label(fenetre, text="Blast a file to the library database (Blastp/Blastn)",pady=5).grid(row=8, columnspan=3, sticky="WN")
ttk.Separator(fenetre, orient=HORIZONTAL).grid(row=8, columnspan=5, sticky="NEW")
Label(fenetre, text="Files with sequence(s) : ").grid(row=11, column=0, sticky="E")
textEntry2 = StringVar()
entry2 = Entry(fenetre, textvariable=textEntry2,state='disabled')
entry2.grid(row=10, column=1)
textEntry3 = StringVar()
entry3 = Entry(fenetre, textvariable=textEntry3,state='disabled')
entry3.grid(row=11, column=1)
pathTextEntry2 = StringVar()
pathEntry2 = Entry(fenetre, textvariable=pathTextEntry2)
pathTextEntry3 = StringVar()
pathEntry3 = Entry(fenetre, textvariable=pathTextEntry3)
# Browse button callbacks: ask for the files and store their names and paths
def GetLibraryDatabaseFile():
import os
pathfile = askopenfilename(title='Open Library Datafile')
textEntry2.set(os.path.split(pathfile)[1])
pathTextEntry2.set(os.path.split(pathfile)[0])
def GetSequenceFile():
import os
pathfile = askopenfilename(title='Open File with Sequences')
textEntry3.set(os.path.split(pathfile)[1])
pathTextEntry3.set(os.path.split(pathfile)[0])
# Get the value of entry2
def callback2():
return entry2.get()
def pathCallback2():
return pathEntry2.get()
# Get the value of entry3
def callback3():
return entry3.get()
def pathCallback3():
return pathEntry3.get()
Button(fenetre, text="Browse", command=GetLibraryDatabaseFile).grid(row=10, column=2)
Button(fenetre, text="Browse",command=GetSequenceFile).grid(row=11, column=2)
Label(fenetre, text="Library Database : ").grid(row=10, column=0,sticky="E")
# OneFile function
def OneFile():
"""
Take all the sequences in the input file and blast them against the database.
input : file with your sequences
output : file with the blast results, in the same folder
"""
import subprocess
import platform
import time
OS = platform.system()
if OS == 'Linux' or OS == 'Darwin':
pathLibrary = pathCallback2()+'/'
pathSequence = pathCallback3()+'/'
if OS == 'Windows':
pathLibrary = pathCallback2()+'\\'
pathSequence = pathCallback3()+'\\'
typ = str(typeNP2())
if typ != '1' and typ != '2':
progressbar.stop()
progressbar.grid_forget()
showerror('Error : Missing Type !', "You did not choose your type\n(nucleotides or proteins)")
else:
library = callback2()
if os.path.isfile(pathLibrary+library) != True:
progressbar.stop()
progressbar.grid_forget()
showerror('Error : Missing File !', "You must choose a Library Database file")
else:
filename = callback3()
if os.path.isfile(pathSequence+filename) != True:
progressbar.stop()
progressbar.grid_forget()
showerror('Error : Missing File !', "You must choose your sequence file")
else:
#evalue = input("Choose your e-value limit : ")
#if os.path.isfile(pathLibrary+library) == True and os.path.isfile(pathSequence+filename) == True and typ == '1' or typ == '2':
if typ =="1":
typ = "tblastn"
if typ == "2":
typ = "blastp"
#filename = input("Write the filename : ")
if OS == 'Linux' or OS == 'Darwin':
t0 = time.time()
query = str(filename)
blast = str(filename)+'_Blast.txt'
seqs = str(filename)+'_seqs.txt'
subprocess.call(typ+" -query "+pathSequence+query+" -db "+pathLibrary+library+" -evalue 1e-10 -out "+pathSequence+blast, shell=True)
print('File no. '+str(1)+' '+str(filename))
subprocess.call("grep '\(Sbjct\|>\)' "+pathSequence+blast+" > "+pathSequence+seqs, shell=True)
t1 = time.time()
progressbar.stop()
print('Job finish in '+str(round(t1-t0,2))+' seconds')
showinfo('Information', "Your job finish in\n"+str(round(t1-t0,2))+" seconds")
showinfo('Information', "The "+blast+" and "+seqs+" have been created in the \n"+pathSequence)
if OS == 'Windows':
t0 = time.time()
query = str(filename)
blast = str(filename)+'_Blast.txt'
seqs = str(filename)+'_seqs.txt'
subprocess.call(typ+' -query '+pathSequence+query+' -db '+pathLibrary+library+' -evalue 1e-10 -out '+pathSequence+blast, shell=True)
print('File no. '+str(1)+' '+str(filename))
subprocess.Popen('findstr "Sbjct >" '+pathSequence+blast+' > '+pathSequence+seqs, shell=True)
t1 = time.time()
progressbar.stop()
print('Job finish in '+str(round(t1-t0,2))+' seconds')
showinfo('Information', "Your job finish in\n"+str(round(t1-t0,2))+" seconds")
showinfo('Information','The files '+blast+' and '+seqs+"\nhave been created in :\n"+pathSequence)
progressbar.grid_forget()
### Threading progress bar makeblastdb ###
def foo2():
OneFile() # run OneFile in a worker thread
def start_foo_thread2():
global foo_thread
foo_thread = threading.Thread(target=foo2)
foo_thread.daemon = True
progressbar.grid(row=25,columnspan=4,pady=2,sticky=W+E)
progressbar.start()
foo_thread.start()
fenetre.after(20, check_foo_thread2)
def check_foo_thread2():
if foo_thread.is_alive():
fenetre.after(20, check_foo_thread2)
else:
progressbar.stop()
#Bouton Run pour la fonction OneFile
Button(fenetre, text="Run",command=start_foo_thread2).grid(row=9,column=3,rowspan=2)
# #################
# DISPLAY ALIGNMENT
# #################
def BlastFile():
listeFile = []
if os.path.isfile(pathCallback3()+'/'+callback3()) == True:
if os.path.isfile(pathCallback3()+'/'+callback3()+"_Blast.txt") == True:
blastFile = open(pathCallback3()+'/'+callback3()+"_Blast.txt",'r')
for ligne in blastFile:
listeFile.append(ligne)
blastFile.close()
if os.path.isfile(pathCallback3()+'/'+callback3()+"_Blast.txt") == False:
blastFile = open(pathCallback3()+'/'+callback3(),'r')
for ligne in blastFile:
listeFile.append(ligne)
blastFile.close()
return listeFile
def getNameSequence():
## listeFile = []
listeFile = BlastFile()
listeName = []
## blastFile = open(pathCallback3()+'/'+callback3(),'r')
## for ligne in blastFile:
## listeFile.append(ligne)
for ligne in range(len(listeFile)):
if listeFile[ligne].startswith("Query="):
if listeFile[ligne].startswith("Query=") and listeFile[ligne+1] != "\n":
a = re.sub("[\n]","",listeFile[ligne])
b = re.sub("[\n]","",listeFile[ligne+1])
listeName.append(a+" "+b)
else:
a = re.sub("[\n]","",listeFile[ligne])
listeName.append(a)
if listeFile[ligne].startswith(">"):
if listeFile[ligne].startswith(">") and listeFile[ligne+1] != "\n":
a = re.sub("[\n]","",listeFile[ligne])
b = re.sub("[\n]","",listeFile[ligne+1])
listeName.append(a+" "+b)
else:
a = re.sub("[\n]","",listeFile[ligne])
listeName.append(a)
if listeFile[ligne].startswith("*"):
a = re.sub("[\n]","",listeFile[ligne])
listeName.append(a)
if listeFile[ligne].startswith(" Frame = "):
if not listeFile[ligne-6].startswith(">") or listeFile[ligne-5].startswith(">"):
a = re.sub("[\n]","",listeFile[ligne])
listeName.append(a)
## blastFile.close()
#print("getNameSequence() : ",listeName)
return listeName
def chercheQueryLength(blastFile):
"""
create a list of the Query sequence lengths
"""
lenghtListe = []
## blastFile = open(pathCallback3()+'/'+callback3(),'r')
repere = ""
for ligne in blastFile:
if ligne.startswith("Query="):
repere = "Query="
if ligne.startswith("Length=") and repere == "Query=":
lenght = ''.join(re.findall("[0-9]",ligne))
lenghtListe.append(int(lenght))
repere = ""
## blastFile.close()
#print("chercheQueryLength() : ", lenghtListe)
return lenghtListe
def chercheScore(blastFile):
"""
create a list of the bit scores
"""
lenghtListe = []
## blastFile = open(pathCallback3()+'/'+callback3(),'r')
for ligne in blastFile:
if ligne.startswith(" Score ="):
lenght = float(re.findall(".*?(\\d+)",ligne)[0])
lenghtListe.append(lenght)
## blastFile.close()
if lenghtListe == []:
lenghtListe.append(0)
#print("chercheScore() : ", lenghtListe)
return lenghtListe
def getEValues(blastfile):
import re
evalues = []
for ligne in blastfile:
if ' Expect = ' in ligne:
num = re.findall('.*?(\\d+)',ligne)
evalue = float(num[len(num)-2]+"e-"+num[len(num)-1])
evalues.append(evalue)
if evalues == []:
evalues.append(0)
#print("getEValues : ", evalues)
return evalues
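getEValues reassembles each `Expect =` value from individual digit groups, which can misread values such as `0.001` or `2e+05`. A more direct regex sketch (hypothetical helper, not from this script):

```python
import re

_EXPECT = re.compile(r"Expect\s*=\s*([0-9.]+(?:[eE][+-]?\d+)?)")

def parse_expect(line):
    """Return the e-value from a BLAST 'Expect =' line, or None if absent."""
    m = _EXPECT.search(line)
    return float(m.group(1)) if m else None

print(parse_expect(" Score = 120 bits (300),  Expect = 3e-52"))  # -> 3e-52
```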
def getPosition2(blastFile):
"""
create a list usable by getSbjctWidth
"""
## blastFile = open(pathCallback3()+'/'+callback3(),'r')
liste = []
for ligne in blastFile:
if not ligne == "\n":
if ligne.startswith("*"):
liste.append(ligne)
if ligne.startswith("Query="):
liste.append(ligne)
if ligne.startswith(" Frame = "):
liste.append(ligne)
if ligne.startswith(">"):
liste.append(ligne)
if ligne.startswith("Query "):
a = int(re.findall(".*?(\\d+)",ligne)[0])
b = int(re.findall(".*?(\\d+)",ligne)[1])
liste.append([a,b])
## blastFile.close()
#print("getPosition2() : ", liste)
return liste
def getSbjctWidth2(blastFile):
"""
return a list of lists with the sizes of the Sbjct matches found for each Query sequence
"""
liste = getPosition2(blastFile)
listeOneSeq = []
listeOneSeqCopy=[]
listeTotal = []
debut = 0
fin = 0
for i in range(len(liste)):
a = liste[i][0]
if a == " " and liste[i-1][0] == ">" and liste[i-2][0] == "Q":
debut = liste[i+1][0] # start: first alignment of a sequence file
if a == "Q" and type(liste[i-1][0]) == int and i-1>0:
# Q marks the start of the new Query, so the previous value is always the end value
fin = liste[i-1][1]
if a == " " and type(liste[i-1][0]) == int and type(liste[i+1][0]) == int:
# with several result sequences we already have a start; this is the end
fin = liste[i-1][1]
# so add both values to create the tuple
listeOneSeq.append((debut,fin))
# reset to 0
debut = 0
fin = 0
# start of the nth alignment of a sequence file
debut = liste[i+1][0]
if a == " " and liste[i-1][0] == ">" and type(liste[i-2][0]) == int:
fin = liste[i-2][1]
listeOneSeq.append((debut,fin))
debut = 0
fin = 0
debut = liste[i+1][0]
if a == '*':
listeTotal.append([(0,0)])
if i == len(liste)-1:
fin = liste[i][1]
if debut != 0 and fin !=0:
listeOneSeq.append((debut,fin))
debut = 0
fin = 0
if i == len(liste)-1:
# modified here at 12:19 on 2015-03-02
if listeOneSeq != []:
listeTotal.append(listeOneSeq)
## else:
## listeTotal.append([(0,0)])
#test listeTotal
if a == "Q" and i-1>0 and len(listeOneSeq) != 0:
listeOneSeqCopy=listeOneSeqCopy+listeOneSeq
listeTotal.append(listeOneSeq)
listeOneSeq = []
#print(listeOneSeqCopy)
#print("getSbjctWidth2() : ", listeTotal)
return listeTotal
def calculdistance(blastFile):
liste = getSbjctWidth2(blastFile)
liste1 = []
liste2 = []
for i in liste:
for j in i:
width = j[1]-j[0]
#print(width)
liste2.append(width)
liste1.append(liste2)
liste2=[]
#print("calculdistance() : ", liste1)
return liste1
def sizePadx():
#print("Padx running : \n")
listePadx = []
longueur = 1000
blastFile = BlastFile()
ListeDesPadxSbjct = getSbjctWidth2(blastFile)
ListeDesLongueurQuery = chercheQueryLength(blastFile)
ListeDesLongueurSbjct = calculdistance(blastFile)
ListeDesScores = chercheScore(blastFile)
ListeEvalues = getEValues(blastFile)
try:
MAX = max(ListeDesLongueurQuery)
except ValueError:
MAX = longueur
cpt = 0
for i in range(len(ListeDesPadxSbjct)):
#print("boucle : ",cpt)
listePadx.append([int(round(ListeDesLongueurQuery[i]*longueur/MAX,0)),0])
for j in range(len(ListeDesPadxSbjct[i])):
score = ListeDesScores[cpt]
evalue = ListeEvalues[cpt]
padx = int(round(ListeDesPadxSbjct[i][j][0]*longueur/MAX,0))
width = int(round(ListeDesLongueurSbjct[i][j]*longueur/MAX,0))
if padx != 0 and width != 0:
listePadx.append([width,padx,score,evalue])
cpt = cpt + 1
else:
listePadx.append([width,padx,0,""])
#print("sizePadx : ", listePadx)
return listePadx
###################### END DISPLAY ALIGNMENT
def alignement():
if callback3() == "":
showerror("Error : Missing File !","Choose your Blast Result")
if sys.version[0] == '2':
execfile("displayAlignement.py")
if sys.version[0] == '3':
exec(compile(open("displayAlignement.py", "rb").read(), "displayAlignement.py", 'exec'))
Button(fenetre, text="Show\nAlignment",command=alignement).grid(row=11,column=3)
######
# Part 3 ################################################################
######
Label(fenetre, text="Create individual query files with a file",pady=5).grid(row=12, columnspan=3, sticky="W")
ttk.Separator(fenetre, orient=HORIZONTAL).grid(row=12, columnspan=5, sticky="NEW")
Label(fenetre, text="Files : ").grid(row=13, column=0,sticky="E")
textEntry4 = StringVar()
entry4 = Entry(fenetre, textvariable=textEntry4,state='disabled')
entry4.grid(row=13, column=1)
pathTextEntry4 = StringVar()
pathEntry4 = Entry(fenetre, textvariable=pathTextEntry4)
#Get the value of entry4
def GetSequenceFile2():
import os
pathfile = askopenfilename(title='Open File with Sequences')
textEntry4.set(os.path.split(pathfile)[1])
pathTextEntry4.set(os.path.split(pathfile)[0])
# Get the value of entry4
def callback4():
return entry4.get()
def pathCallback4():
return pathEntry4.get()
Button(fenetre, text="Browse",command=GetSequenceFile2).grid(row=13, column=2)
# check button command
folderVar = StringVar()
folderEntry = Entry(fenetre,width=30, textvariable=folderVar)
def checkButtonEntry():
if checkButtonVar.get() == 1:
folderEntry.grid(row = 14, column=1, columnspan=2, sticky="W")
folderEntry.insert(0, "FolderName")
else :
folderEntry.delete(0, END)
folderEntry.grid_forget()
# check button IntVar
checkButtonVar = IntVar()
checkButton = Checkbutton(fenetre, text="Create a Folder ?", variable=checkButtonVar, command=checkButtonEntry)
checkButton.grid(row=14,column=0)
# get the folder name
def getFolderName():
return folderEntry.get()
# createQueryFile function
def createQueryFile():
"""
create individual sequence files from a single file containing all the sequences
"""
import re
import time
liste = []
seq = []
fichier = []
position = []
OS = platform.system()
if OS == 'Linux' or OS == 'Darwin':
#path = os.getcwd()+'/'
path = pathCallback4()+'/'
if OS == 'Windows':
path = pathCallback4()+'\\'
# sequence file name
name = callback4()
if os.path.isfile(path+name) != True:
showwarning('Warning', "You must choose your sequence file")
if os.path.isfile(path+name) == True:
f = open(path+name,'r')
for i in f:
fichier.append(i)
f.close()
for i in range(len(fichier)):
if fichier[i][0] == ">":
position.append(i)
print("\n There are "+str(len(position))+" sequences in the file \n")
showinfo('Number of sequences', "There are "+str(len(position))+" sequences in the file")
for i in range(len(position)):
if i == len(position)-1:
for j in range(position[i],len(fichier)):
seq.append(fichier[j])
liste.append(seq)
seq = []
else:
for j in range(position[i],position[i+1]):
seq.append(fichier[j])
liste.append(seq)
seq = []
choice = checkButtonVar.get()
if choice == 0:
if OS == "Windows":
tfile = time.time()
for i in range(len(liste)):
a = ''.join(re.findall("[^|]+$",liste[i][0]))
b = re.sub('[:/,\s\n]','',a)
giN = re.findall("[^|^>]+(?=\|)",liste[i][0])
gi = giN[0]+giN[1]
seq = open(path+b+"_"+gi+".txt","a")
for j in range(len(liste[i])):
seq.write(liste[i][j])
seq.close()
t1d = time.time()
showinfo('Number of sequences', str(len(position))+" files have been created in "+str(round(t1d-tfile,3))+" seconds")
if OS == "Linux" or OS == "Darwin":
tfile = time.time()
for i in range(len(liste)):
a = ''.join(re.findall("[^|]+$",liste[i][0]))
b = re.sub('[:/,\s\n]','',a)
giN = re.findall("[^|^>]+(?=\|)",liste[i][0])
gi = giN[0]+giN[1]
seq = open(path+b+"_"+gi,"a")
for j in range(len(liste[i])):
seq.write(liste[i][j])
seq.close()
t1d = time.time()
showinfo('Number of sequences', str(len(position))+" files have been created in "+str(round(t1d-tfile,3))+" seconds")
if choice == 1:
# GET THE FOLDER NAME
folder = getFolderName()
if OS == "Windows":
path = path+"\\"+folder
if os.path.isdir(path) == True:
showwarning('Warning', "The folder already exists\nor no folder name was given!\nChange or set the folder name")
else:
os.mkdir(path)
tfile = time.time()
for i in range(len(liste)):
a = ''.join(re.findall("[^|]+$",liste[i][0]))
b = re.sub('[:/,\s\n]','',a)
giN = re.findall("[^|^>]+(?=\|)",liste[i][0])
gi = giN[0]+giN[1]
seq = open(path+"\\"+b+"_"+gi+".txt","a")
for j in range(len(liste[i])):
seq.write(liste[i][j])
seq.close()
t1d = time.time()
showinfo('Number of sequences', str(len(position))+" files have been created in "+str(round(t1d-tfile,3))+" seconds")
if OS == "Linux" or OS == "Darwin":
path = path+"/"+folder
if os.path.isdir(path) == True:
showwarning('Warning', "The folder already exists\nor no folder name was given!\nChange or set the folder name")
else:
os.mkdir(path)
tfile = time.time()
for i in range(len(liste)):
a = ''.join(re.findall("[^|]+$",liste[i][0]))
b = re.sub('[:/,\s\n]','',a)
giN = re.findall("[^|^>]+(?=\|)",liste[i][0])
gi = giN[0]+giN[1]
seq = open(path+"/"+b+"_"+gi,"a")
for j in range(len(liste[i])):
seq.write(liste[i][j])
seq.close()
t1d = time.time()
showinfo('Number of sequences', str(len(position))+" files have been created in "+str(round(t1d-tfile,3))+" seconds")
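createQueryFile splits a multi-FASTA file by first collecting the positions of the `>` headers and then slicing between them. The same split can be expressed more compactly; a sketch (function name is mine):

```python
def split_fasta(text):
    """Split multi-FASTA text into one string per record."""
    records = []
    for line in text.splitlines():
        if line.startswith(">"):
            records.append([line])        # a new record starts at each header
        elif records:
            records[-1].append(line)      # body lines join the current record
    return ["\n".join(r) for r in records]

print(split_fasta(">gi|1|ref|A|\nACGT\n>gi|2|ref|B|\nGGTT"))
```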
run = Button(fenetre, text="Run",command=createQueryFile).grid(row=13,column=3)
######
# Part 4 #
######
Label(fenetre, text="Extract Reference and Sequences",pady=5).grid(row=15, columnspan=3, sticky="W")
ttk.Separator(fenetre, orient=HORIZONTAL).grid(row=15, columnspan=5, sticky="NEW")
Label(fenetre, text="Blast Result File : ").grid(row=16, column=0,sticky="E")
textEntry5 = StringVar()
entry5 = Entry(fenetre, textvariable=textEntry5,state='disabled')
entry5.grid(row=16, column=1)
pathTextEntry5 = StringVar()
pathEntry5 = Entry(fenetre, textvariable=pathTextEntry5)
textEntry6 = StringVar()
entry6 = Entry(fenetre, textvariable=textEntry6,state='disabled')
entry6.grid(row=17, column=1)
pathTextEntry6 = StringVar()
pathEntry6 = Entry(fenetre, textvariable=pathTextEntry6)
textEntry7 = StringVar()
entry7 = Entry(fenetre, textvariable=textEntry7)
entry7.grid(row=18, column=1)
#Get the value of entry5 blast result
def GetBlastFile():
import os
pathfile = askopenfilename(title='Open the file with Blast results')
textEntry5.set(os.path.split(pathfile)[1])
pathTextEntry5.set(os.path.split(pathfile)[0])
#Get the value of entry5 blast result
def callback5():
return entry5.get()
def pathCallback5():
return pathEntry5.get()
#Get the value of entry6 library database
def GetLibraryDatabaseFile2():
import os
pathfile = askopenfilename(title='Open Library Database')
textEntry6.set(os.path.split(pathfile)[1])
pathTextEntry6.set(os.path.split(pathfile)[0])
#Get the value of entry6 library database
def callback6():
return entry6.get()
def pathCallback6():
return pathEntry6.get()
#Get the value of entry7
def callback7():
return entry7.get()
Button(fenetre, text="Browse",command=GetBlastFile).grid(row=16, column=2, columnspan=2)
Label(fenetre, text="Library Database : ").grid(row=17, column=0,sticky="E")
Button(fenetre, text="Browse",command=GetLibraryDatabaseFile2).grid(row=17, column=2, columnspan=2)
Label(fenetre, text="Output Filename : ").grid(row=18, column=0,sticky="E")
## Function ##
def extractSbjctSeqAndRef():
"""
input : file with blast result
output : file with the reference sequences "Sbjct"
output2 : file with the sequence of reference "Sbjct"
"""
import subprocess
import os
import re
import platform
OS = platform.system()
if OS == 'Windows':
path = os.getcwd()+"\\"
pathBlastResult = pathCallback5()+"\\"
pathLibrary = pathCallback6()+"\\"
if OS == 'Linux' or OS == 'Darwin':
path = os.getcwd()+'/'
pathBlastResult = pathCallback5()+'/'
pathLibrary = pathCallback6()+'/'
liste = []
file1 = callback5()
if os.path.isfile(pathBlastResult+file1) != True:
showerror('Error : Missing File !', "You must choose a Blast-result file")
else:
f = open(pathBlastResult+file1,"r")
for line in f:
if line[0] == ">":
if line not in liste:
liste.append(line)
f.close()
if OS =="Windows":
file2 = entry7.get()+".txt"
else:
file2 = entry7.get()
if os.path.isfile(pathLibrary+file2) == True:
showerror('Error',"Your output filename already exists, please change your filename")
if file2 == '':
showerror('Error: Missing Filename',"You did not choose the name of the output file")
else:
file3 = callback6()
if file3 == '':
showerror('Error : Missing File !', "You must choose the Library Database file")
else:
f2 = open(pathBlastResult+file2,"a")
for i in liste:
a = re.sub('[>][\s]','',i)
f2.write(a)
f2.close()
if OS == 'Linux' or OS == 'Darwin':
t0 = time.time()
process = subprocess.Popen(["perl", "seqfetch.def.pl" , pathBlastResult+file2 , pathLibrary+file3], stdout=subprocess.PIPE)
process.communicate()
t1 = time.time()
showinfo('Time',"Your job finished in "+str(round(t1-t0,3))+" seconds")
if OS == 'Windows':
t0 = time.time()
process = subprocess.Popen(["perl", path+"seqfetch.def.pl" , pathBlastResult+file2 , pathLibrary+file3], stdout=subprocess.PIPE)
process.communicate()
t1 = time.time()
showinfo('Time',"Your job finished in "+str(round(t1-t0,3))+" seconds")
showinfo('Information','The files '+file2+' and '+file2+".seq.txt\nhave been created in :\n"+pathBlastResult)
Button(fenetre, text="Run",command=extractSbjctSeqAndRef).grid(row=18,column=2, columnspan=2)
######
#Part 5#
######
#Variable and creation of the radio buttons
var5 = IntVar()
for item in [1,2]:
Label(fenetre, text="Type :").grid(row=20, column=0, sticky="W",padx=30)
if item == 1:
rb = Radiobutton(fenetre, text='Nucleotides',value=item,variable=var5).grid(row=20, column=0,sticky="E")
if item == 2:
rb = Radiobutton(fenetre, text='Proteins',value=item,variable=var5).grid(row=20, column=1,sticky="W")
#Get the value of the radio button (Nucleotides/Proteins)
def typeNP5():
return var5.get()
Label(fenetre, text="Blast all individual Query File",pady=5).grid(row=19, columnspan=3, sticky="W")
ttk.Separator(fenetre, orient=HORIZONTAL).grid(row=19, columnspan=5, sticky="NEW")
Label(fenetre, text="Library Database : ").grid(row=21, column=0,sticky="E")
Label(fenetre, text="Files with sequences : ").grid(row=22, column=0,sticky="E")
textEntry8 = StringVar()
entry8 = Entry(fenetre, textvariable=textEntry8,state='disabled')
entry8.grid(row=21, column=1)
pathTextEntry8 = StringVar()
pathEntry8 = Entry(fenetre, textvariable=pathTextEntry8)
textEntry9 = StringVar()
entry9 = Entry(fenetre, textvariable=textEntry9,state='disabled')
entry9.grid(row=22, column=1)
pathTextEntry9 = StringVar()
pathEntry9 = Entry(fenetre, textvariable=pathTextEntry9)
#Browse-button function: finds and returns the file name
def GetLibraryDatabaseFile3():
import os
pathfile = askopenfilename(title='Open Library Datafile')
textEntry8.set(os.path.split(pathfile)[1])
pathTextEntry8.set(os.path.split(pathfile)[0])
Button(fenetre, text="Browse",command=GetLibraryDatabaseFile3).grid(row=21, column=2)
def GetSequenceFile3():
import os
pathfile = askopenfilename(title='Open File with Sequences')
textEntry9.set(os.path.split(pathfile)[1])
pathTextEntry9.set(os.path.split(pathfile)[0])
Button(fenetre, text="Browse",command=GetSequenceFile3).grid(row=22, column=2)
#Get the value of entry8
def callback8():
return entry8.get()
def pathCallback8():
return pathEntry8.get()
#Get the value of entry9
def callback9():
return entry9.get()
def pathCallback9():
return pathEntry9.get()
#Check button command
folderVar2 = StringVar()
folderEntry2 = Entry(fenetre,width=30, textvariable=folderVar2)
def checkButtonEntry2():
if checkButtonVar2.get() == 1:
folderEntry2.grid(row = 23, column=1, columnspan=2, sticky="W")
previousFolder = getFolderName()
folderEntry2.insert(0, previousFolder)
else :
folderEntry2.delete(0, END)
folderEntry2.grid_forget()
#IntVar of the check button
checkButtonVar2 = IntVar()
checkButton2 = Checkbutton(fenetre, text="In a folder ?", variable=checkButtonVar2, command=checkButtonEntry2)
checkButton2.grid(row=23,column=0)
#Get the folder name
def getFolderName2():
return folderEntry2.get()
#####
#functions
#####
def Filename(OS):
import re
filename1 = []
filename2 = []
name = callback9()
if OS == 'Linux' or OS == 'Darwin':
#path = os.getcwd()+'/'
path = pathCallback9()+'/'
if OS == 'Windows':
#path = os.getcwd()+'\\'
path = pathCallback9()+'\\'
if os.path.isfile(path+name) != True:
showerror('Error : Missing File !', "You must choose your Sequences file")
else:
#f = open(name,'r')
f = open(path+name,'r')
for i in f:
if i[0] == ">":
filename1.append(i)
f.close()
for i in range(len(filename1)):
a = ''.join(re.findall("[^|]+$",filename1[i]))
b = re.sub('[:/,\s\n]','',a)
giN = re.findall("[^|^>]+(?=\|)",filename1[i])
gi = giN[0]+giN[1]
filename2.append(b+"_"+gi)
return filename2
def Blast():
"""
Blast the Query file into the local database librairy
"""
import subprocess
import platform
import time
OS = platform.system()
if OS == 'Linux' or OS == 'Darwin':
pathLibrary = pathCallback8()+'/'
pathQuery = pathCallback9()+'/'
extention = ""
if OS == 'Windows':
pathLibrary = pathCallback8()+'\\'
pathQuery = pathCallback9()+'\\'
extention = ".txt"
typ = str(typeNP5())
if typ != '1' and typ != '2':
showerror('Error : Missing Type !', "You did not choose your type\n(nucleotides or proteins)")
else:
#evalue = input("Choose your e-value limit : ")
if typ =="1":
typ = "tblastn"
else:
typ = "blastp"
DB = callback8()
if os.path.isfile(pathLibrary+DB) != True:
showerror('Error : Missing File !', "You must choose the Library Database file")
else:
filename = Filename(OS)
if filename != None:
if os.path.isfile(pathQuery+filename[0]+extention) != True:
showerror('Error : Missing File !', "Query files corresponding to the sequences were not found.\nChoose a Query file or create Query files from your Sequence file")
else:
#Start of progress bar
lab = Label(fenetre, text="Blast in progress...")
lab.grid(row=24, columnspan=4)
progressbarBlast = ttk.Progressbar(orient=HORIZONTAL, length=400, mode='determinate')
progressbarBlast.grid(row=25,columnspan=4,pady=2,sticky=W+E)
progressbarBlast["maximum"]=len(filename)
progressbarBlast["value"]=0
progressbarBlast.update()
File = StringVar()
nFile = StringVar()
nFileLabel = Label(fenetre, textvariable=nFile)
nFileLabel.grid(row=26,columnspan=4)
fileLabel = Label(fenetre, textvariable=File)
fileLabel.grid(row=27,columnspan=4)
#end
if OS == 'Linux' or OS == 'Darwin':
Dir = checkButtonVar2.get()
if Dir == 0:
t0 = time.time()
if not os.path.exists(pathQuery+"out-blast"):
os.mkdir(pathQuery+"out-blast")
if not os.path.exists(pathQuery+"out-seqs"):
os.mkdir(pathQuery+"out-seqs")
pathBlast = pathQuery+"out-blast/"
pathSeqs = pathQuery+"out-seqs/"
print(str(len(filename))+" files are being analyzed")
for i in range(len(filename)):
query = filename[i]
blast = filename[i]+'_Blast'
seqs = filename[i]+'_seqs'
sub1 = time.time()
#PROGRESS BAR
nFile.set("File "+str(i+1)+"/"+str(len(filename)))
File.set(filename[i])
progressbarBlast.update()
subprocess.call(typ+" -query "+pathQuery+query+" -db "+pathLibrary+DB+" -evalue 1e-10 -out "+pathBlast+blast, shell=True)
val = i+1
progressbarBlast["value"]= val
progressbarBlast.update()
#END PROGRESS BAR
subprocess.call("grep '\(Sbjct\|>\)' "+pathBlast+blast+" > "+pathSeqs+seqs, shell=True)
sub2 = time.time()
print('File no. '+str(i+1)+' '+str(filename[i])+' in '+str(sub2-sub1)+' seconds')
t1 = time.time()
showinfo('Information',"Your job finished in\n"+str(round(t1-t0,3))+" seconds")
# QUERY FILES ARE IN A FOLDER
# has been fixed:
if Dir == 1:
t0 = time.time()
folder = getFolderName2()
pathQueryFolder = pathQuery+folder+"/"
if not os.path.exists(pathQueryFolder+"out-blast"):
os.mkdir(pathQueryFolder+"out-blast")
if not os.path.exists(pathQueryFolder+"out-seqs"):
os.mkdir(pathQueryFolder+"out-seqs")
pathBlast = pathQueryFolder+"out-blast/"
pathSeqs = pathQueryFolder+"out-seqs/"
for i in range(len(filename)):
query = filename[i]
blast = filename[i]+'_Blast'
seqs = filename[i]+'_seqs'
sub1 = time.time()
#PROGRESS BAR
nFile.set("File "+str(i+1)+"/"+str(len(filename)))
File.set(filename[i])
progressbarBlast.update()
subprocess.call(typ+" -query "+pathQueryFolder+query+" -db "+pathLibrary+DB+" -evalue 1e-10 -out "+pathBlast+blast, shell=True)
val = i+1
progressbarBlast["value"] = val
progressbarBlast.update()
#END PROGRESS BAR
subprocess.call("grep '\(Sbjct\|>\)' "+pathBlast+blast+" > "+pathSeqs+seqs, shell=True)
sub2 = time.time()
print('File no. '+str(i+1)+' '+str(filename[i])+' in '+str(sub2-sub1)+' seconds')
t1 = time.time()
showinfo('Information',"Your job finished in\n"+str(round(t1-t0,3))+" seconds")
if OS == 'Windows':
Dir = checkButtonVar2.get()
if Dir == 0:
t0 = time.time()
if not os.path.exists(pathQuery+"out-blast"):
os.mkdir(pathQuery+"out-blast")
if not os.path.exists(pathQuery+"out-seqs"):
os.mkdir(pathQuery+"out-seqs")
pathBlast = pathQuery+"out-blast\\"
pathSeqs = pathQuery+"out-seqs\\"
print(str(len(filename))+" files are being analyzed")
for i in range(len(filename)):
query = filename[i]+'.txt'
blast = filename[i]+'_Blast.txt'
seqs = filename[i]+'_seqs.txt'
sub1 = time.time()
#PROGRESS BAR
nFile.set("File "+str(i+1)+"/"+str(len(filename)))
File.set(filename[i])
progressbarBlast.update()
process1 = subprocess.Popen(typ+' -query '+pathQuery+query+' -db '+pathLibrary+DB+' -evalue 1e-10 -out '+pathBlast+blast, shell=True)
process1.communicate()
val = i+1
progressbarBlast["value"]= val
progressbarBlast.update()
#END PROGRESS BAR
process2 = subprocess.Popen('findstr "Sbjct >" '+pathBlast+blast+' > '+pathSeqs+seqs, shell=True)
process2.communicate()
sub2 = time.time()
print('File no. '+str(i+1)+' '+str(filename[i])+' in '+str(sub2-sub1)+' seconds')
t1 = time.time()
showinfo('Information',"Your job finished in\n"+str(round(t1-t0,3))+" seconds")
if Dir == 1:
t0 = time.time()
folder = getFolderName2()
pathQueryFolder = pathQuery+folder+"\\"
if not os.path.exists(pathQueryFolder+"out-blast"):
os.mkdir(pathQueryFolder+"out-blast")
if not os.path.exists(pathQueryFolder+"out-seqs"):
os.mkdir(pathQueryFolder+"out-seqs")
pathBlast = pathQueryFolder+"out-blast\\"
pathSeqs = pathQueryFolder+"out-seqs\\"
for i in range(len(filename)):
query = filename[i]+'.txt'
blast = filename[i]+'_Blast.txt'
seqs = filename[i]+'_seqs.txt'
sub1 = time.time()
#PROGRESS BAR
nFile.set("File "+str(i+1)+"/"+str(len(filename)))
File.set(filename[i])
progressbarBlast.update()
process1 = subprocess.Popen(typ+' -query '+pathQueryFolder+query+' -db '+pathLibrary+DB+' -evalue 1e-10 -out '+pathBlast+blast, shell=True)
process1.communicate()
val = i+1
progressbarBlast["value"]= val
progressbarBlast.update()
#END PROGRESS BAR
process2 = subprocess.Popen('findstr "Sbjct >" '+pathBlast+blast+' > '+pathSeqs+seqs, shell=True)
process2.communicate()
sub2 = time.time()
print('File no. '+str(i+1)+' '+str(filename[i])+' in '+str(sub2-sub1)+' seconds')
t1 = time.time()
print('Job finished in '+str(t1-t0)+' seconds')
showinfo('Information',"Your job finished in\n"+str(round(t1-t0,3))+" seconds")
lab.grid_forget()
progressbarBlast.grid_forget()
nFileLabel.grid_forget()
fileLabel.grid_forget()
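# The Linux branch above shells out to grep '\(Sbjct\|>\)' while the Windows
# branch uses findstr "Sbjct >" (findstr treats the space as OR); a portable
# pure-Python sketch of the same filtering, keeping only the alignment lines
# containing "Sbjct" and the hit headers containing ">" (the helper name is
# illustrative, not used elsewhere in this script):
def keep_sbjct_and_headers(lines):
    # keep a line if it contains "Sbjct" or ">" anywhere, like the grep call
    return [l for l in lines if "Sbjct" in l or ">" in l]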
Button(fenetre, text="Run",command=Blast).grid(row=21,column=3, rowspan=2)
fenetre.mainloop()
| Tuisto59/Blast-LP | BLAST-LP v1.2.py | Python | gpl-2.0 | 48,327 | [
"BLAST"
] | eacd89b5b2b0b363b97fc0af8ea9f28daf0ac4c4801cbabc571e96aa034445bd |
#! /usr/bin/env python
import sys, shutil
from numpy import *
from os import path, makedirs
from glob import glob
import subprocess
class color:
"""
define colors in the terminal
"""
purple = '\033[95m'
cyan = '\033[96m'
darkcyan = '\033[36m'
blue = '\033[94m'
green = '\033[92m'
yellow = '\033[93m'
red = '\033[91m'
bold = '\033[1m'
underline = '\033[4m'
end = '\033[0m'
grid_nx = 261
grid_ny = 261
grid_nrap = 101
grid_dx = 0.1
grid_dy = 0.1
grid_drap = 0.1
rand_flag = 1
# the width of the Gaussian in the transverse plane
sigma_perp = 0.5; delta_sigma_perp = 0.3
# peak position in the longitudinal direction
eta_0 = 2.0; fluct_eta_0 = 1.0
# the width of the Gaussian in the longitudinal direction
sigma_beam_in = 1.0; delta_sigma_beam_in = 0.5
sigma_beam_out = 0.5; delta_sigma_beam_out = 0.3
# wounded nucleon/binary collision mixing ratio
alpha = 0.0
def get_density(
eta, x, y, participant_trans_list, participant_eta_list,
binary_list, alpha):
eps = 1e-8
distance_trans = (
( (x - participant_trans_list[:, 0])**2.
+ (y - participant_trans_list[:, 1])**2.)
/(2.*participant_trans_list[:, 2]**2.)
)
idx = distance_trans < 25.
dis_cut = distance_trans[idx]
sigma_trans = participant_trans_list[idx, 2]
sigma_eta = participant_eta_list[idx, 1:3]
eta_0_cut = participant_eta_list[idx, 0]
idx_left = eta_0_cut > eta
idx_right = eta_0_cut <= eta
dis_eta_left = (
(eta - eta_0_cut[idx_left])**2./(2.*sigma_eta[idx_left, 0]**2.))
dis_eta_right = (
(eta - eta_0_cut[idx_right])**2./(2.*sigma_eta[idx_right, 1]**2.))
rho_part = (
sum(exp(-dis_cut[idx_left])/(2*pi*sigma_trans[idx_left]**2.)
*exp(-dis_eta_left)/(sqrt(pi*sigma_eta[idx_left, 0]**2./2.)
+ sqrt(pi*sigma_eta[idx_left, 1]**2./2.)))
+ sum(exp(-dis_cut[idx_right])/(2.*pi*sigma_trans[idx_right]**2.)
*exp(-dis_eta_right)/(sqrt(pi*sigma_eta[idx_right, 0]**2./2.)
+ sqrt(pi*sigma_eta[idx_right, 1]**2./2.)))
)
rho_binary = 0.0
if abs(alpha) > eps:
# NOTE: sigma_beam, prefactor_beam and prefactor_perp are assumed to be
# module-level constants; the original loop referenced the undefined names
# participant_x/participant_y, corrected here to the binary-collision point
for ibin in range(len(binary_list)):
binary_x = binary_list[ibin, 0]
binary_y = binary_list[ibin, 1]
rho_binary += (
(exp( - (eta - eta_0)**2./sigma_beam**2.) +
exp( - (eta + eta_0)**2./sigma_beam**2.))*0.5
*exp( - ((x - binary_x)**2. + (y - binary_y)**2.)
/sigma_perp**2.)
)
rho_binary = prefactor_beam*prefactor_perp*rho_binary
rho = rho_part*(1. - alpha)/2. + rho_binary*alpha
return(rho)
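# The longitudinal profile above is an asymmetric Gaussian: width sigma_L to
# the left of eta_0 and sigma_R to the right, normalized by
# sqrt(pi*sigma_L**2/2) + sqrt(pi*sigma_R**2/2). A self-contained scalar
# sketch (function name illustrative; math aliases avoid shadowing the
# numpy star import) that can be used to check the normalization:
from math import exp as m_exp, sqrt as m_sqrt, pi as m_pi
def asym_gauss(eta, eta0, sigma_l, sigma_r):
    # one shared normalization so the two half-Gaussians integrate to 1
    norm = m_sqrt(m_pi*sigma_l**2./2.) + m_sqrt(m_pi*sigma_r**2./2.)
    width = sigma_l if eta < eta0 else sigma_r
    return m_exp(-(eta - eta0)**2./(2.*width**2.))/norm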
def generate_3d_profile(data_path):
random.seed()
event_list = glob(path.join(data_path, 'ParticipantTable_event*.dat'))
for iev in range(1, len(event_list)+1):
participant_list = loadtxt(
path.join(data_path, "ParticipantTable_event_%d.dat" % iev))
participant_trans_list = zeros([len(participant_list[:, 0]), 3])
participant_eta_list = zeros([len(participant_list[:, 0]), 3])
participant_trans_list[:, 0:2] = participant_list[:, 0:2]
for ipart in range(len(participant_list)):
if rand_flag == 0:
participant_trans_list[ipart, 2] = sigma_perp
else:
participant_trans_list[ipart, 2] = (
random.uniform(sigma_perp - delta_sigma_perp,
sigma_perp + delta_sigma_perp))
if participant_list[ipart, 2] == 1:
if rand_flag == 0:
participant_eta_list[ipart, 0] = eta_0
participant_eta_list[ipart, 1] = sigma_beam_in # left
participant_eta_list[ipart, 2] = sigma_beam_out # right
else:
participant_eta_list[ipart, 0] = (
random.normal(eta_0, fluct_eta_0))
participant_eta_list[ipart, 1] = ( #left
random.uniform(sigma_beam_in - delta_sigma_beam_in,
sigma_beam_in + delta_sigma_beam_in))
participant_eta_list[ipart, 2] = ( #right
random.uniform(sigma_beam_out - delta_sigma_beam_out,
sigma_beam_out + delta_sigma_beam_out))
else:
if rand_flag == 0:
participant_eta_list[ipart, 0] = -eta_0
participant_eta_list[ipart, 1] = sigma_beam_out # left
participant_eta_list[ipart, 2] = sigma_beam_in # right
else:
participant_eta_list[ipart, 0] = (
random.normal(-eta_0, fluct_eta_0))
participant_eta_list[ipart, 1] = ( #left
random.uniform(sigma_beam_out - delta_sigma_beam_out,
sigma_beam_out + delta_sigma_beam_out))
participant_eta_list[ipart, 2] = ( #right
random.uniform(sigma_beam_in - delta_sigma_beam_in,
sigma_beam_in + delta_sigma_beam_in))
binary_list = loadtxt(
path.join(data_path, "BinaryCollisionTable_event_%d.dat" % iev))
entropy_density = zeros([grid_nrap, grid_nx, grid_ny])
grid_eta = linspace( -(grid_nrap - 1.)/2.*grid_drap,
(grid_nrap - 1.)/2.*grid_drap, grid_nrap)
grid_x = linspace( -(grid_nx - 1.)/2.*grid_dx,
(grid_nx - 1.)/2.*grid_dx, grid_nx)
grid_y = linspace( -(grid_ny - 1.)/2.*grid_dy,
(grid_ny - 1.)/2.*grid_dy, grid_ny)
for ieta in range(len(grid_eta)):
eta_local = grid_eta[ieta]
print(eta_local)
for ix in range(len(grid_x)):
x_local = grid_x[ix]
for iy in range(len(grid_y)):
y_local = grid_y[iy]
entropy_density[ieta, ix, iy] = get_density(
eta_local, x_local, y_local,
participant_trans_list, participant_eta_list,
binary_list, alpha)
with file('sd_event_%d_block_3d.dat' % iev, 'w') as outfile:
for slice_2d in entropy_density:
savetxt(outfile, slice_2d)
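# The file written above stores the 3-D grid as grid_nrap stacked 2-D blocks
# of shape (grid_nx, grid_ny), so loadtxt returns a (nrap*nx, ny) array; a
# sketch of reading such a file back (helper name illustrative):
from numpy import loadtxt as np_loadtxt
def load_3d_profile(filename, nrap, nx, ny):
    # reshape restores the (nrap, nx, ny) grid written slice by slice
    return np_loadtxt(filename).reshape(nrap, nx, ny)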
def print_help_message():
print "Usage : "
print(color.bold
+ "./generateAvgprofile.py -ecm ecm "
+ "-cen cen_bounds"
+ "[-model model -collision_system collsys -cut_type cut_type]"
+ color.end)
print "Usage of generateAvgprofile.py command line arguments: "
print(color.bold + "-cen" + color.end
+ " centrality bounds(%): "
+ color.purple + "20-30" + color.end)
print(color.bold + "-ecm" + color.end
+ " collision energy (GeV): "
+ color.purple + "7.7, 11.5, 19.6, 27, 39, 62.4, 200, 2760, 5500"
+ color.end)
print(color.bold + "-cut_type" + color.end
+ " centrality cut type: "
+ color.purple + color.bold + "total_entropy[default]" + color.end
+ color.purple + ", Npart" + color.end)
print(color.bold + "-model" + color.end + " initial condition model: "
+ color.purple + color.bold + " MCGlb[default]" + color.end
+ color.purple + ", MCKLN" + color.end)
print(color.bold + "-collision_system" + color.end
+ " type of collision system: "
+ color.purple + color.bold + " Pb+Pb[default]" + color.end
+ color.purple + ", Au+Au, Cu+Au, U+U, p+Pb, p+Au, d+Au, He+Au"
+ color.end)
if __name__ == "__main__":
data_path = path.abspath(str(sys.argv[1]))
generate_3d_profile(data_path)
| chunshen1987/superMC | scripts/generate_3d_profiles/generate_ebe_3d_profiles.py | Python | gpl-3.0 | 7,993 | [
"Gaussian"
] | f8d5a3b99032eccc36a02a0c71e685f035112b00e3e13226b2891f03055d1bf1 |
#!/usr/bin/env python
# copy LAMMPS src/libliggghts.so and liggghts.py to system dirs
instructions = """
Syntax: python install.py [-h] [libdir] [pydir]
libdir = target dir for src/libliggghts.so, default = /usr/local/lib
pydir = target dir for liggghts.py, default = Python site-packages dir
"""
import sys,os
if sys.version_info[0] == 2:
import commands
else:
import subprocess as commands
if (len(sys.argv) > 1 and sys.argv[1] == "-h") or len(sys.argv) > 3:
print(instructions)
sys.exit()
if len(sys.argv) >= 2: libdir = sys.argv[1]
else: libdir = "/usr/local/lib"
if len(sys.argv) == 3: pydir = sys.argv[2]
else: pydir = ""
# copy C lib to libdir if it exists
# warn if not in LD_LIBRARY_PATH or LD_LIBRARY_PATH is undefined
if not os.path.isdir(libdir):
print("ERROR: libdir %s does not exist" % libdir)
sys.exit()
if "LD_LIBRARY_PATH" not in os.environ:
print("WARNING: LD_LIBRARY_PATH undefined, cannot check libdir %s" % libdir)
else:
libpaths = os.environ['LD_LIBRARY_PATH'].split(':')
if libdir not in libpaths:
print("WARNING: libdir %s not in LD_LIBRARY_PATH" % libdir)
str = "cp ../src/libliggghts.so %s" % libdir
print(str)
outstr = commands.getoutput(str)
if len(outstr.strip()): print(outstr)
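# The "cp" shell-out above is POSIX-only; a portable sketch of the same copy
# using only the standard library (the helper name copy_into is illustrative
# and not part of this installer):
import os as _os, shutil as _shutil
def copy_into(src, dstdir):
    # copy src into dstdir, preserving metadata, and return the target path
    target = _os.path.join(dstdir, _os.path.basename(src))
    _shutil.copy2(src, target)
    return target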
# copy liggghts.py to pydir if it exists
# if pydir not specified, install in site-packages via distutils setup()
if pydir:
if not os.path.isdir(pydir):
print("ERROR: pydir %s does not exist" % pydir)
sys.exit()
str = "cp ../python/liggghts.py %s" % pydir
print(str)
outstr = commands.getoutput(str)
if len(outstr.strip()): print(outstr)
sys.exit()
print("installing liggghts.py in Python site-packages dir")
os.chdir('../python') # in case invoked via make in src dir
from distutils.core import setup
sys.argv = ["setup.py","install"] # as if had run "python setup.py install"
setup(name = "liggghts",
version = "3.8.0",
author = "Christoph Kloss",
author_email = "office@dcs-computing.com",
url = "http://www.cfdem.com",
description = "LIGGGHTS - LAMMPS improved for general granular and granular heat transfer simulations",
py_modules = ["liggghts"])
| schrummy14/LIGGGHTS_Flexible_Fibers | python/install.py | Python | gpl-2.0 | 2,198 | [
"LAMMPS"
] | 845e62c6bd26aeebeec8aa935d872dfb4ab20d6167db11bda88466ef28d20656 |
""" :mod: OperationTests
====================
.. module: OperationTests
:synopsis: Operation test cases
.. moduleauthor:: Krzysztof.Ciba@NOSPAMgmail.com
Operation test cases
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
__RCSID__ = "$Id $"
import pytest
from DIRAC.RequestManagementSystem.Client.File import File
from DIRAC.RequestManagementSystem.Client.Operation import Operation
def test_ctor():
"""test constructors and (de)serialisation"""
assert isinstance(Operation(), Operation), "empty ctor failed"
# # using fromDict
fromDict = {
"Type": "replicateAndRegister",
"TargetSE": "CERN-USER,PIC-USER",
"SourceSE": None,
}
operation = Operation(fromDict)
assert isinstance(operation, Operation), "fromDict ctor failed"
for key, value in fromDict.items():
assert getattr(operation, key) == value, "wrong attr value %s (%s) %s" % (key, getattr(operation, key), value)
# # same with file
operation = Operation(fromDict)
operation.addFile(
File(
{
"LFN": "/lhcb/user/c/cibak/testFile",
"Checksum": "1234567",
"ChecksumType": "ADLER32",
"Size": 1024,
"Status": "Waiting",
}
)
)
for key, value in fromDict.items():
assert getattr(operation, key) == value, "wrong attr value %s (%s) %s" % (key, getattr(operation, key), value)
toJSON = operation.toJSON()
assert toJSON["OK"], "JSON serialization failed"
def test_valid_properties():
operation = Operation()
operation.Arguments = "foobar"
assert operation.Arguments == b"foobar", "wrong Arguments"
operation.SourceSE = "CERN-RAW"
assert operation.SourceSE == "CERN-RAW", "wrong SourceSE"
operation.TargetSE = "CERN-RAW"
assert operation.TargetSE == "CERN-RAW", "wrong TargetSE"
operation.Catalog = ""
assert operation.Catalog == "", "wrong Catalog"
operation.Catalog = "BookkeepingDB"
assert operation.Catalog == "BookkeepingDB", "wrong Catalog"
operation.Error = "error"
assert operation.Error == "error", "wrong Error"
toJSON = operation.toJSON()
assert toJSON["OK"]
def test_StateMachine():
"""state machine"""
op = Operation()
assert op.Status == "Queued", "1. wrong status %s" % op.Status
op.addFile(File({"Status": "Waiting"}))
assert op.Status == "Queued", "2. wrong status %s" % op.Status
op.addFile(File({"Status": "Scheduled"}))
assert op.Status == "Scheduled", "3. wrong status %s" % op.Status
op.addFile(File({"Status": "Done"}))
assert op.Status == "Scheduled", "4. wrong status %s" % op.Status
op.addFile(File({"Status": "Failed"}))
assert op.Status == "Scheduled", "5. wrong status %s" % op.Status
op[3].Status = "Scheduled"
assert op.Status == "Scheduled", "6. wrong status %s" % op.Status
op[0].Status = "Scheduled"
assert op.Status == "Scheduled", "7. wrong status %s" % op.Status
op[0].Status = "Waiting"
assert op.Status == "Scheduled", "8. wrong status %s" % op.Status
for f in op:
f.Status = "Done"
assert op.Status == "Done", "9. wrong status %s" % op.Status
for f in op:
f.Status = "Failed"
assert op.Status == "Failed", "10. wrong status %s" % op.Status
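# A compact sketch of the status-aggregation rule the assertions above
# exercise (an illustration of the implied precedence, not the actual
# Operation implementation): any Scheduled file pins the operation to
# Scheduled; otherwise a Waiting file keeps it Queued; all Done -> Done;
# all Failed -> Failed.
def aggregate_status(file_statuses):
    if "Scheduled" in file_statuses:
        return "Scheduled"
    if "Waiting" in file_statuses:
        return "Queued"
    if file_statuses and all(s == "Done" for s in file_statuses):
        return "Done"
    if file_statuses and all(s == "Failed" for s in file_statuses):
        return "Failed"
    return "Queued"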
def test_List():
"""getitem, setitem, delitem and dirty"""
op = Operation()
files = []
for _ in range(5):
f = File()
files.append(f)
op += f
for i in range(len(op)):
assert op[i] == files[i], "__getitem__ failed"
for i in range(len(op)):
op[i] = File({"LFN": "/%s" % i})
assert op[i].LFN == "/%s" % i, "__setitem__ failed"
del op[0]
assert len(op) == 4, "__delitem__ failed"
# opID set
op.OperationID = 1
del op[0]
| ic-hep/DIRAC | src/DIRAC/RequestManagementSystem/Client/test/Test_Operation.py | Python | gpl-3.0 | 3,935 | [
"DIRAC"
] | 8e080d03ac3600330705f7ee08ce4ca3aafa8a0b5228ffa714ea11b2a0fbed89 |
## This file is part of Invenio.
## Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2014 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
__revision__ = "$Id$"
import urllib
import cgi
from invenio.base.wrappers import lazy_import
from invenio.config import \
CFG_CERN_SITE, \
CFG_SITE_LANG, \
CFG_SITE_NAME, \
CFG_SITE_NAME_INTL, \
CFG_SITE_SUPPORT_EMAIL, \
CFG_SITE_SECURE_URL, \
CFG_SITE_URL, \
CFG_WEBSESSION_RESET_PASSWORD_EXPIRE_IN_DAYS, \
CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS, \
CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS, \
CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS, \
CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS, \
CFG_SITE_RECORD
CFG_EXTERNAL_AUTH_USING_SSO = lazy_import('invenio.modules.access.local_config:CFG_EXTERNAL_AUTH_USING_SSO')
CFG_EXTERNAL_AUTH_LOGOUT_SSO = lazy_import('invenio.modules.access.local_config:CFG_EXTERNAL_AUTH_LOGOUT_SSO')
CFG_OPENID_PROVIDERS = lazy_import('invenio.modules.access.local_config:CFG_OPENID_PROVIDERS')
CFG_OAUTH2_PROVIDERS = lazy_import('invenio.modules.access.local_config:CFG_OAUTH2_PROVIDERS')
CFG_OAUTH1_PROVIDERS = lazy_import('invenio.modules.access.local_config:CFG_OAUTH1_PROVIDERS')
CFG_OPENID_AUTHENTICATION = lazy_import('invenio.modules.access.local_config:CFG_OPENID_AUTHENTICATION')
CFG_OAUTH2_AUTHENTICATION = lazy_import('invenio.modules.access.local_config:CFG_OAUTH2_AUTHENTICATION')
CFG_OAUTH1_AUTHENTICATION = lazy_import('invenio.modules.access.local_config:CFG_OAUTH1_AUTHENTICATION')
from invenio.utils.url import make_canonical_urlargd, create_url, create_html_link
from invenio.utils.html import escape_html, nmtoken_from_string
from invenio.base.i18n import gettext_set_language, language_list_long
from invenio.modules.apikeys.models import WebAPIKey
from invenio.legacy.websession.websession_config import CFG_WEBSESSION_GROUP_JOIN_POLICY
class Template:
def tmpl_back_form(self, ln, message, url, link):
"""
A standard one-message-go-back-link page.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'message' *string* - The message to display
- 'url' *string* - The url to go back to
- 'link' *string* - The link text
"""
out = """
<table>
<tr>
<td align="left">%(message)s
<a href="%(url)s">%(link)s</a></td>
</tr>
</table>
"""% {
'message' : message,
'url' : url,
'link' : link,
'ln' : ln
}
return out
def tmpl_external_setting(self, ln, key, value):
_ = gettext_set_language(ln)
out = """
<tr>
<td align="right"><strong>%s:</strong></td>
<td><i>%s</i></td>
</tr>""" % (key, value)
return out
def tmpl_external_user_settings(self, ln, html_settings):
_ = gettext_set_language(ln)
out = """
<p><big><strong class="headline">%(external_user_settings)s</strong></big></p>
<table>
%(html_settings)s
</table>
<p><big><strong class="headline">%(external_user_groups)s</strong></big></p>
<p>%(consult_external_groups)s</p>
""" % {
'external_user_settings' : _('External account settings'),
'html_settings' : html_settings,
'consult_external_groups' : _('You can consult the list of your external groups directly in the %(x_url_open)sgroups page%(x_url_close)s.', **{
'x_url_open' : '<a href="../yourgroups/display?ln=%s#external_groups">' % ln,
'x_url_close' : '</a>'
}),
'external_user_groups' : _('External user groups'),
}
return out
def tmpl_user_api_key(self, ln=CFG_SITE_LANG, keys_info=None, csrf_token=''):
"""
Displays all the API key that the user owns the user
Parameters:
- 'ln' *string* - The language to display the interface in
- 'key_info' *tuples* - Contains the tuples with the key data (id, desciption, status)
- 'csrf_token' *string* - The CSRF token to verify the form origin.
"""
# load the right message language
_ = gettext_set_language(ln)
out = """
<script type="text/javascript">
$(document).ready(function(){
$(".key_value").hide();
$(".key_label").click(function(){
$(this).next(".key_value").slideToggle("slow");
});
});
</script>
<p><big><strong class="headline">%(user_api_key)s</strong></big></p>
""" % {
'user_api_key' : _("API keys")
}
if keys_info and len(keys_info) != 0:
out += "<p>%(user_keys)s</p>" % {'user_keys': _("These are your current API keys")}
out += """
<table>
"""
for key_info in keys_info:
out += """
<tr><td>%(key_description)s</td>
<td>%(key_status)s</td>
</tr><tr>
<td class = "key_label">
<a name="%(index)s" href="#%(index)s"> %(key_label)s</a>
</td>
<td class="key_value"><code>%(key_id)s</code></td>
</tr><tr>
<td></td>
<td align="left">
<form method="post" action="%(sitesecureurl)s/youraccount/apikey" name="api_key_remove">
<input type="hidden" name="key_id" value="%(key_id)s" />
<code class="blocknote"><input class="formbutton" type="%(input_type)s" value="%(remove_key)s" /></code>
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>
</td>
</tr>
""" % {
'key_description': _("Description:") + " " + cgi.escape(key_info[1]),
'key_status': _("Status:") + " " + key_info[2],
'key_id': key_info[0],
'index': keys_info.index(key_info),
'key_label': _("API key"),
'remove_key' : _("Delete key"),
'csrf_token': cgi.escape(csrf_token, True),
'sitesecureurl': CFG_SITE_SECURE_URL,
'input_type': ("submit", "hidden")[key_info[2] == WebAPIKey.CFG_WEB_API_KEY_STATUS['REVOKED']]
}
out += "</table>"
out += """
<form method="post" action="%(sitesecureurl)s/youraccount/apikey" name="api_key_create">
<p>%(create_new_key)s</p>
<table>
<tr><td align="right" valign="top"><strong>
<label for="key_description">%(new_key_description_label)s:</label></strong><br />
<small class="important">(%(mandatory)s)</small>
</td><td valign="top">
<input type="text" size="50" name="key_description" id="key_description" value=""/><br />
<small><span class="quicknote">%(note)s:</span>
%(new_key_description_note)s
</small>
</td>
</tr>
<tr><td></td><td align="left">
<code class="blocknote"><input class="formbutton" type="submit" value="%(create_new_key_button)s" /></code>
</td></tr>
</table>
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>
""" % {
'create_new_key' : _("If you want to create a new API key, please enter a description for it"),
'new_key_description_label' : _("Description for the new API key"),
'mandatory' : _("mandatory"),
'note' : _("Note"),
'new_key_description_note': _("The description should be something meaningful for you to recognize the API key"),
'create_new_key_button' : _("Create new key"),
'csrf_token': cgi.escape(csrf_token, True),
'sitesecureurl': CFG_SITE_SECURE_URL
}
return out
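# Every user-supplied value interpolated into the HTML above (key
# descriptions, ids, CSRF tokens) must be escaped first, as this method does
# with cgi.escape. A minimal stand-alone sketch of the same pattern, using
# the stdlib html.escape (the Python 3 replacement for the deprecated
# cgi.escape); the helper name is illustrative, not part of this module:

```python
import html

def render_key_row(description, status):
    """Render one API-key table row with user-supplied fields escaped."""
    return '<tr><td>%s</td><td>%s</td></tr>' % (
        html.escape(description, quote=True),
        html.escape(status, quote=True),
    )

# Angle brackets and quotes are neutralised before reaching the page.
row = render_key_row('my "test" key', '<REVOKED>')
```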
def tmpl_user_preferences(self, ln, email, email_disabled, password_disabled, nickname, csrf_token=''):
"""
Displays a form for the user to change their email and/or password.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'email' *string* - The email of the user
- 'email_disabled' *boolean* - Whether the user is forbidden from editing the email
- 'password_disabled' *boolean* - Whether the user is forbidden from editing the password
- 'nickname' *string* - The nickname of the user (empty string if user does not have it)
- 'csrf_token' *string* - The CSRF token to verify the form origin.
"""
# load the right message language
_ = gettext_set_language(ln)
out = """
<p><big><strong class="headline">%(edit_params)s</strong></big></p>
<form method="post" action="%(sitesecureurl)s/youraccount/change" name="edit_logins_settings">
<p>%(change_user)s</p>
<table>
<tr><td align="right" valign="top"><strong>
<label for="nickname">%(nickname_label)s:</label></strong><br />
<small class="important">(%(mandatory)s)</small>
</td><td valign="top">
%(nickname_prefix)s%(nickname)s%(nickname_suffix)s<br />
<small><span class="quicknote">%(note)s:</span>
%(fixed_nickname_note)s
</small>
</td>
</tr>
<tr><td align="right"><strong>
<label for="email">%(new_email)s:</label></strong><br />
<small class="important">(%(mandatory)s)</small>
</td><td>
<input type="text" size="25" name="email" id="email" %(email_disabled)s value="%(email)s" /><br />
<small><span class="quicknote">%(example)s:</span>
<span class="example">john.doe@example.com</span>
</small>
</td>
</tr>
<tr><td></td><td align="left">
<input class="formbutton" type="submit" value="%(set_values)s" />
</td></tr>
</table>
<input type="hidden" name="action" value="edit" />
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>
""" % {
'change_user' : _("If you want to change your email or set for the first time your nickname, please set new values in the form below."),
'edit_params' : _("Edit login credentials"),
'nickname_label' : _("Nickname"),
'nickname' : nickname,
'csrf_token': cgi.escape(csrf_token, True),
'nickname_prefix' : nickname=='' and '<input type="text" size="25" name="nickname" id="nickname" value=""' or '',
'nickname_suffix' : nickname=='' and '" /><br /><small><span class="quicknote">'+_("Example")+':</span><span class="example">johnd</span></small>' or '',
'new_email' : _("New email address"),
'mandatory' : _("mandatory"),
'example' : _("Example"),
'note' : _("Note"),
'set_values' : _("Set new values"),
'email' : email,
'email_disabled' : email_disabled and "readonly" or "",
'sitesecureurl': CFG_SITE_SECURE_URL,
'fixed_nickname_note' : _('Since this is considered a signature for comments and reviews, it cannot be changed once set.')
}
if not password_disabled and not CFG_EXTERNAL_AUTH_USING_SSO:
out += """
<form method="post" action="%(sitesecureurl)s/youraccount/change" name="edit_password">
<p>%(change_pass)s</p>
<table>
<tr>
<td align="right"><strong><label for="old_password">%(old_password)s:</label></strong><br />
</td><td align="left">
<input type="password" size="25" name="old_password" id="old_password" %(password_disabled)s /><br />
<small><span class="quicknote">%(note)s:</span>
%(old_password_note)s
</small>
</td>
</tr>
<tr>
<td align="right"><strong><label for="new_password">%(new_password)s:</label></strong><br />
</td><td align="left">
<input type="password" size="25" name="password" id="new_password" %(password_disabled)s /><br />
<small><span class="quicknote">%(note)s:</span>
%(password_note)s
</small>
</td>
</tr>
<tr>
<td align="right"><strong><label for="new_password2">%(retype_password)s:</label></strong></td>
<td align="left">
<input type="password" size="25" name="password2" id="new_password2" %(password_disabled)s value="" />
</td>
</tr>
<tr><td></td><td align="left">
<input class="formbutton" type="submit" value="%(set_values)s" />
</td></tr>
</table>
<input type="hidden" name="action" value="edit" />
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>
""" % {
'change_pass' : _("If you want to change your password, please enter the old one and set the new value in the form below."),
'mandatory' : _("mandatory"),
'old_password' : _("Old password"),
'new_password' : _("New password"),
'csrf_token': cgi.escape(csrf_token, True),
'optional' : _("optional"),
'note' : _("Note"),
'password_note' : _("The password phrase may contain punctuation, spaces, etc."),
'old_password_note' : _("You must fill the old password in order to set a new one."),
'retype_password' : _("Retype password"),
'set_values' : _("Set new password"),
'password_disabled' : password_disabled and "disabled" or "",
'sitesecureurl': CFG_SITE_SECURE_URL,
}
elif not CFG_EXTERNAL_AUTH_USING_SSO and CFG_CERN_SITE:
out += "<p>" + _("""If you are using a lightweight CERN account you can %(x_url_open)sreset the password%(x_url_close)s.""",
**{'x_url_open' : '<a href="http://cern.ch/LightweightRegistration/ResetPassword.aspx%s">'
% (make_canonical_urlargd({'email': email,
'returnurl': CFG_SITE_SECURE_URL + '/youraccount/edit' + make_canonical_urlargd({'lang' : ln}, {})}, {})),
'x_url_close' : '</a>'}) + "</p>"
elif CFG_EXTERNAL_AUTH_USING_SSO and CFG_CERN_SITE:
out += "<p>" + _("""You can change or reset your CERN account password by means of the %(x_url_open)sCERN account system%(x_url_close)s.""") % \
{'x_url_open' : '<a href="https://cern.ch/login/password.aspx">', 'x_url_close' : '</a>'} + "</p>"
return out
def tmpl_user_bibcatalog_auth(self, bibcatalog_username="", bibcatalog_password="", ln=CFG_SITE_LANG, csrf_token=''):
"""Template for setting the username and password for the BibCatalog backend."""
_ = gettext_set_language(ln)
out = """
<form method="post" action="%(sitesecureurl)s/youraccount/change" name="edit_bibcatalog_settings">
<p><big><strong class="headline">%(edit_bibcatalog_settings)s</strong></big></p>
<table>
<tr>
<td> %(username)s: <input type="text" size="25" name="bibcatalog_username" value="%(bibcatalog_username)s" id="bibcatuid" /></td>
<td> %(password)s: <input type="password" size="25" name="bibcatalog_password" value="%(bibcatalog_password)s" id="bibcatpw" /></td>
</tr>
<tr>
<td><input class="formbutton" type="submit" value="%(update_settings)s" /></td>
</tr>
</table>
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>
""" % {
'sitesecureurl' : CFG_SITE_SECURE_URL,
'bibcatalog_username' : bibcatalog_username,
'bibcatalog_password' : bibcatalog_password,
'edit_bibcatalog_settings' : _("Edit cataloging interface settings"),
'username' : _("Username"),
'password' : _("Password"),
'update_settings' : _('Update settings'),
'csrf_token': cgi.escape(csrf_token, True),
}
return out
def tmpl_user_lang_edit(self, ln, preferred_lang, csrf_token=''):
_ = gettext_set_language(ln)
out = """
<form method="post" action="%(sitesecureurl)s/youraccount/change" name="edit_lang_settings">
<p><big><strong class="headline">%(edit_lang_settings)s</strong></big></p>
<table>
<tr><td align="right"><select name="lang" id="lang">
""" % {
'sitesecureurl' : CFG_SITE_SECURE_URL,
'edit_lang_settings' : _("Edit language-related settings"),
}
for short_ln, long_ln in language_list_long():
out += """<option %(selected)s value="%(short_ln)s">%(long_ln)s</option>""" % {
'selected' : preferred_lang == short_ln and 'selected="selected"' or '',
'short_ln' : short_ln,
'long_ln' : escape_html(long_ln)
}
out += """</select></td><td valign="top"><strong><label for="lang">%(select_lang)s</label></strong></td></tr>
<tr><td></td><td><input class="formbutton" type="submit" value="%(update_settings)s" /></td></tr>
</table><input type="hidden" name="csrf_token" value="%(csrf_token)s" /></form>""" % {
'select_lang' : _('Select desired language of the web interface.'),
'update_settings' : _('Update settings'),
'csrf_token': cgi.escape(csrf_token, True),
}
return out
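# The loop above marks the preferred language with selected="selected". The
# same idiom, isolated as a small helper for clarity (names are
# illustrative; the language list is assumed to yield (code, name) pairs, as
# language_list_long() does in the method above):

```python
import html

def build_lang_options(languages, preferred):
    """Return a string of <option> tags, marking the preferred one selected."""
    parts = []
    for short_ln, long_ln in languages:
        selected = ' selected="selected"' if short_ln == preferred else ''
        parts.append('<option%s value="%s">%s</option>'
                     % (selected, short_ln, html.escape(long_ln)))
    return ''.join(parts)

opts = build_lang_options([('en', 'English'), ('fr', 'French')], 'fr')
```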
def tmpl_user_profiling_settings(self, ln, enable_profiling, csrf_token=''):
_ = gettext_set_language(ln)
out = """
<form method="post" action="%(sitesecureurl)s/youraccount/change" name="edit_profiling_settings">
<p><big><strong class="headline">%(edit_settings)s</strong></big></p>
<table>
<tr><td align="right"><select name="profiling">
""" % {
'sitesecureurl' : CFG_SITE_SECURE_URL,
'edit_settings' : _("Edit profiling settings"),
}
out += """<option %(selected)s value="0">%(desc)s</option>""" % {
'selected' : 'selected="selected"' if enable_profiling is False else '',
'desc' : _("Disabled")
}
out += """<option %(selected)s value="1">%(desc)s</option>""" % {
'selected' : 'selected="selected"' if enable_profiling is True else '',
'desc' : _("Enabled")
}
out += """</select></td><td valign="top"></td></tr>
<tr><td></td><td><input class="formbutton" type="submit" value="%(update_settings)s" /></td></tr>
</table><input type="hidden" name="csrf_token" value="%(csrf_token)s" /></form>""" % {
'update_settings' : _('Update settings'),
'csrf_token': cgi.escape(csrf_token, True),
}
return out
def tmpl_user_websearch_edit(self, ln, current = 10, show_latestbox = True, show_helpbox = True, csrf_token=''):
_ = gettext_set_language(ln)
out = """
<form method="post" action="%(sitesecureurl)s/youraccount/change" name="edit_websearch_settings">
<p><big><strong class="headline">%(edit_websearch_settings)s</strong></big></p>
<table>
<tr><td align="right"><input type="checkbox" %(checked_latestbox)s value="1" name="latestbox" id="latestbox"/></td>
<td valign="top"><b><label for="latestbox">%(show_latestbox)s</label></b></td></tr>
<tr><td align="right"><input type="checkbox" %(checked_helpbox)s value="1" name="helpbox" id="helpbox"/></td>
<td valign="top"><b><label for="helpbox">%(show_helpbox)s</label></b></td></tr>
<tr><td align="right"><select name="group_records" id="group_records">
""" % {
'sitesecureurl' : CFG_SITE_SECURE_URL,
'edit_websearch_settings' : _("Edit search-related settings"),
'show_latestbox' : _("Show the latest additions box"),
'checked_latestbox' : show_latestbox and 'checked="checked"' or '',
'show_helpbox' : _("Show collection help boxes"),
'checked_helpbox' : show_helpbox and 'checked="checked"' or '',
}
for i in 10, 25, 50, 100, 250, 500:
if i <= CFG_WEBSEARCH_MAX_RECORDS_IN_GROUPS:
out += """<option %(selected)s>%(i)s</option>
""" % {
'selected' : current == i and 'selected="selected"' or '',
'i' : i
}
out += """</select></td><td valign="top"><strong><label for="group_records">%(select_group_records)s</label></strong></td></tr>
<tr><td></td><td><input class="formbutton" type="submit" value="%(update_settings)s" /></td></tr>
</table>
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>""" % {
'update_settings' : _("Update settings"),
'select_group_records' : _("Number of search results per page"),
'csrf_token': cgi.escape(csrf_token, True),
}
return out
def tmpl_user_external_auth(self, ln, methods, current, method_disabled, csrf_token=''):
"""
Displays a form for the user to change his authentication method.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'methods' *array* - The available authentication methods
- 'current' *string* - The currently selected method
- 'method_disabled' *boolean* - Whether the user is forbidden from changing the method
- 'csrf_token' *string* - The CSRF token to verify the form origin.
"""
# load the right message language
_ = gettext_set_language(ln)
out = """
<form method="post" action="%(sitesecureurl)s/youraccount/change">
<big><strong class="headline">%(edit_method)s</strong></big>
<p>%(explain_method)s:</p>
<table>
<tr><td valign="top"><b>%(select_method)s:</b></td><td>
""" % {
'edit_method' : _("Edit login method"),
'explain_method' : _("Please select which login method you would like to use to authenticate yourself"),
'select_method' : _("Select method"),
'sitesecureurl': CFG_SITE_SECURE_URL,
}
for system in methods:
out += """<input type="radio" name="login_method" value="%(system)s" id="%(id)s" %(disabled)s %(selected)s /><label for="%(id)s">%(system)s</label><br />""" % {
'system' : system,
'disabled' : method_disabled and 'disabled="disabled"' or "",
'selected' : current == system and 'checked="checked"' or "",
'id' : nmtoken_from_string(system),
}
out += """ </td></tr>
<tr><td> </td>
<td><input class="formbutton" type="submit" value="%(select_method)s" /></td></tr></table>
<input type="hidden" name="csrf_token" value="%(csrf_token)s" />
</form>""" % {
'select_method' : _("Select method"),
'csrf_token': cgi.escape(csrf_token, True),
}
return out
def tmpl_lost_password_form(self, ln):
"""
Displays a form for the user to request a password reset link by email.
Parameters:
- 'ln' *string* - The language to display the interface in
"""
# load the right message language
_ = gettext_set_language(ln)
out = "<p>" + _("If you have lost the password for your %(sitename)s %(x_fmt_open)sinternal account%(x_fmt_close)s, then please enter your email address in the following form in order to have a password reset link emailed to you.", **{'x_fmt_open' : '<em>', 'x_fmt_close' : '</em>', 'sitename' : CFG_SITE_NAME_INTL[ln]}) + "</p>"
out += """
<blockquote>
<form method="post" action="../youraccount/send_email">
<table>
<tr>
<td align="right"><strong><label for="p_email">%(email)s:</label></strong></td>
<td><input type="text" size="25" name="p_email" id="p_email" value="" />
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="action" value="lost" />
</td>
</tr>
<tr><td> </td>
<td><input class="formbutton" type="submit" value="%(send)s" /></td>
</tr>
</table>
</form>
</blockquote>
""" % {
'ln': ln,
'email' : _("Email address"),
'send' : _("Send password reset link"),
}
if CFG_CERN_SITE:
out += "<p>" + _("If you have been using the %(x_fmt_open)sCERN login system%(x_fmt_close)s, then you can recover your password through the %(x_url_open)sCERN authentication system%(x_url_close)s.",
**{'x_fmt_open' : '<em>',
'x_fmt_close' : '</em>',
'x_url_open' : '<a href="https://cern.ch/lightweightregistration/ResetPassword.aspx%s">' % make_canonical_urlargd(
{'lf': 'auth', 'returnURL': CFG_SITE_SECURE_URL + '/youraccount/login?ln='+ln}, {}),
'x_url_close' : '</a>'}) + " "
else:
out += "<p>" + _("Note that if you have been using an external login system, we cannot reset your password here; please ask for assistance there.") + " "
out += _("Alternatively, you can ask %(x_name)s to change your login system from external to internal.",
x_name=("""<a href="mailto:%(email)s">%(email)s</a>""" % { 'email' : CFG_SITE_SUPPORT_EMAIL })) + "</p>"
return out
def tmpl_account_info(self, ln, uid, guest, CFG_CERN_SITE):
"""
Displays the account information
Parameters:
- 'ln' *string* - The language to display the interface in
- 'uid' *string* - The user id
- 'guest' *boolean* - If the user is guest
- 'CFG_CERN_SITE' *boolean* - If the site is a CERN site
"""
# load the right message language
_ = gettext_set_language(ln)
out = """<p>%(account_offer)s</p>
<blockquote>
<dl>
""" % {
'account_offer' : _("%(x_name)s offers you the possibility to personalize the interface, to set up your own personal library of documents, or to set up an automatic alert query that would run periodically and would notify you of search results by email.",
x_name=CFG_SITE_NAME_INTL[ln]),
}
if not guest:
out += """
<dt>
<a href="./edit?ln=%(ln)s">%(your_settings)s</a>
</dt>
<dd>%(change_account)s</dd>""" % {
'ln' : ln,
'your_settings' : _("Your Settings"),
'change_account' : _("Set or change your account email address or password. Specify your preferences about the look and feel of the interface.")
}
out += """
<dt><a href="../youralerts/display?ln=%(ln)s">%(your_searches)s</a></dt>
<dd>%(search_explain)s</dd>""" % {
'ln' : ln,
'your_searches' : _("Your Searches"),
'search_explain' : _("View all the searches you performed during the last 30 days."),
}
out += """
<dt><a href="../yourbaskets/display?ln=%(ln)s">%(your_baskets)s</a></dt>
<dd>%(basket_explain)s""" % {
'ln' : ln,
'your_baskets' : _("Your Baskets"),
'basket_explain' : _("With baskets you can define specific collections of items, store interesting records you want to access later or share with others."),
}
if not guest:
out += """
<dt><a href="../yourcomments/?ln=%(ln)s">%(your_comments)s</a></dt>
<dd>%(comments_explain)s""" % {
'ln' : ln,
'your_comments' : _("Your Comments"),
'comments_explain' : _("Display all the comments you have submitted so far."),
}
if guest and CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS:
out += self.tmpl_warning_guest_user(ln = ln, type = "baskets")
out += """</dd>
<dt><a href="../youralerts/list?ln=%(ln)s">%(your_alerts)s</a></dt>
<dd>%(explain_alerts)s""" % {
'ln' : ln,
'your_alerts' : _("Your Alerts"),
'explain_alerts' : _("Subscribe to a search which will be run periodically by our service. The result can be sent to you via Email or stored in one of your baskets."),
}
if guest and CFG_WEBSESSION_DIFFERENTIATE_BETWEEN_GUESTS:
out += self.tmpl_warning_guest_user(type="alerts", ln = ln)
out += "</dd>"
if CFG_CERN_SITE:
out += """
<dt><a href="%(CFG_SITE_SECURE_URL)s/yourloans/display?ln=%(ln)s">%(your_loans)s</a></dt>
<dd>%(explain_loans)s</dd>""" % {
'your_loans' : _("Your Loans"),
'explain_loans' : _("Check out the books you have on loan, submit borrowing requests, etc. Requires CERN ID."),
'ln': ln,
'CFG_SITE_SECURE_URL': CFG_SITE_SECURE_URL
}
out += """
</dl>
</blockquote>"""
return out
def tmpl_warning_guest_user(self, ln, type):
"""
Displays a warning message about the specified type
Parameters:
- 'ln' *string* - The language to display the interface in
- 'type' *string* - The type of data that will get lost in case of guest account (for the moment: 'alerts' or 'baskets')
"""
# load the right message language
_ = gettext_set_language(ln)
if (type=='baskets'):
msg = _("You are logged in as a guest user, so your baskets will disappear at the end of the current session.") + ' '
elif (type=='alerts'):
msg = _("You are logged in as a guest user, so your alerts will disappear at the end of the current session.") + ' '
else:
msg = ''
msg += _("If you wish you can %(x_url_open)slogin or register here%(x_url_close)s.", **{'x_url_open': '<a href="' + CFG_SITE_SECURE_URL + '/youraccount/login?ln=' + ln + '">',
'x_url_close': '</a>'})
return """<table class="errorbox" summary="">
<tr>
<th class="errorboxheader">%s</th>
</tr>
</table>""" % msg
def tmpl_account_body(self, ln, user):
"""
Displays the body of the actions of the user
Parameters:
- 'ln' *string* - The language to display the interface in
- 'user' *string* - The username (nickname or email)
"""
# load the right message language
_ = gettext_set_language(ln)
out = _("You are logged in as %(x_user)s. You may want to a) %(x_url1_open)slogout%(x_url1_close)s; b) edit your %(x_url2_open)saccount settings%(x_url2_close)s.") %\
{'x_user': user,
'x_url1_open': '<a href="' + CFG_SITE_SECURE_URL + '/youraccount/logout?ln=' + ln + '">',
'x_url1_close': '</a>',
'x_url2_open': '<a href="' + CFG_SITE_SECURE_URL + '/youraccount/edit?ln=' + ln + '">',
'x_url2_close': '</a>',
}
return out + "<br /><br />"
def tmpl_account_template(self, title, body, ln, url):
"""
Displays a block of the "Your Account" page
Parameters:
- 'ln' *string* - The language to display the interface in
- 'title' *string* - The title of the block
- 'body' *string* - The body of the block
- 'url' *string* - The URL to go to the proper section
"""
out ="""
<table class="youraccountbox" width="90%%" summary="" >
<tr>
<th class="youraccountheader"><a href="%s">%s</a></th>
</tr>
<tr>
<td class="youraccountbody">%s</td>
</tr>
</table>""" % (url, title, body)
return out
def tmpl_account_page(self, ln, warnings, warning_list, accBody, baskets, alerts, searches, messages, loans, groups, submissions, approvals, tickets, administrative, comments):
"""
Displays the "Your Account" page.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'warnings' *string* - "1" if the warning list should be displayed
- 'warning_list' *array* - The warnings to display on top of the page
- 'accBody' *string* - The body of the heading block
- 'baskets' *string* - The body of the baskets block
- 'alerts' *string* - The body of the alerts block
- 'searches' *string* - The body of the searches block
- 'messages' *string* - The body of the messages block
- 'loans' *string* - The body of the loans block
- 'groups' *string* - The body of the groups block
- 'submissions' *string* - The body of the submissions block
- 'approvals' *string* - The body of the approvals block
- 'tickets' *boolean* - Whether the tickets block should be displayed
- 'administrative' *string* - The body of the administrative block
- 'comments' *string* - The body of the comments block
"""
# load the right message language
_ = gettext_set_language(ln)
out = ""
if warnings == "1":
out += self.tmpl_general_warnings(warning_list)
out += self.tmpl_account_template(_("Your Account"), accBody, ln, '/youraccount/edit?ln=%s' % ln)
if messages:
out += self.tmpl_account_template(_("Your Messages"), messages, ln, '/yourmessages/display?ln=%s' % ln)
if loans:
out += self.tmpl_account_template(_("Your Loans"), loans, ln, '/yourloans/display?ln=%s' % ln)
if baskets:
out += self.tmpl_account_template(_("Your Baskets"), baskets, ln, '/yourbaskets/display?ln=%s' % ln)
if comments:
comments_description = _("You can consult the list of %(x_url_open)syour comments%(x_url_close)s submitted so far.")
comments_description %= {'x_url_open': '<a href="' + CFG_SITE_URL + '/yourcomments/?ln=' + ln + '">',
'x_url_close': '</a>'}
out += self.tmpl_account_template(_("Your Comments"), comments_description, ln, '/yourcomments/?ln=%s' % ln)
if alerts:
out += self.tmpl_account_template(_("Your Alert Searches"), alerts, ln, '/youralerts/list?ln=%s' % ln)
if searches:
out += self.tmpl_account_template(_("Your Searches"), searches, ln, '/youralerts/display?ln=%s' % ln)
if groups:
groups_description = _("You can consult the list of %(x_url_open)syour groups%(x_url_close)s you are administering or are a member of.")
groups_description %= {'x_url_open': '<a href="' + CFG_SITE_URL + '/yourgroups/display?ln=' + ln + '">',
'x_url_close': '</a>'}
out += self.tmpl_account_template(_("Your Groups"), groups_description, ln, '/yourgroups/display?ln=%s' % ln)
if submissions:
submission_description = _("You can consult the list of %(x_url_open)syour submissions%(x_url_close)s and inquire about their status.")
submission_description %= {'x_url_open': '<a href="' + CFG_SITE_URL + '/yoursubmissions.py?ln=' + ln + '">',
'x_url_close': '</a>'}
out += self.tmpl_account_template(_("Your Submissions"), submission_description, ln, '/yoursubmissions.py?ln=%s' % ln)
if approvals:
approval_description = _("You can consult the list of %(x_url_open)syour approvals%(x_url_close)s with the documents you approved or refereed.")
approval_description %= {'x_url_open': '<a href="' + CFG_SITE_URL + '/yourapprovals.py?ln=' + ln + '">',
'x_url_close': '</a>'}
out += self.tmpl_account_template(_("Your Approvals"), approval_description, ln, '/yourapprovals.py?ln=%s' % ln)
#check if this user might have tickets
if tickets:
ticket_description = _("You can consult the list of %(x_url_open)syour tickets%(x_url_close)s.")
ticket_description %= {'x_url_open': '<a href="' + CFG_SITE_URL + '/yourtickets?ln=' + ln + '">',
'x_url_close': '</a>'}
out += self.tmpl_account_template(_("Your Tickets"), ticket_description, ln, '/yourtickets?ln=%s' % ln)
if administrative:
out += self.tmpl_account_template(_("Your Administrative Activities"), administrative, ln, '/admin')
return out
def tmpl_account_emailMessage(self, ln, msg):
"""
Displays a link to retrieve the lost password
Parameters:
- 'ln' *string* - The language to display the interface in
- 'msg' *string* - Explicative message on top of the form.
"""
# load the right message language
_ = gettext_set_language(ln)
out =""
out +="""
<body>
%(msg)s <a href="../youraccount/lost?ln=%(ln)s">%(try_again)s</a>
</body>
""" % {
'ln' : ln,
'msg' : msg,
'try_again' : _("Try again")
}
return out
def tmpl_account_reset_password_email_body(self, email, reset_key, ip_address, ln=CFG_SITE_LANG):
"""
The body of the email that sends a password reset link
for internal accounts to users.
"""
_ = gettext_set_language(ln)
out = """
%(intro)s
%(intro2)s
<%(link)s>
%(outro)s
%(outro2)s""" % {
'intro': _("Somebody (possibly you) coming from %(x_ip_address)s "
"has asked\nfor a password reset at %(x_sitename)s\nfor "
"the account \"%(x_email)s\"." % {
'x_sitename' :CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME),
'x_email' : email,
'x_ip_address' : ip_address,
}
),
'intro2' : _("If you want to reset the password for this account, please go to:"),
'link' : "%s/youraccount/resetpassword%s" %
(CFG_SITE_SECURE_URL, make_canonical_urlargd({
'ln' : ln,
'k' : reset_key
}, {})),
'outro' : _("in order to confirm the validity of this request."),
'outro2' : _("Please note that this URL will remain valid for about %(days)s days only.", days=CFG_WEBSESSION_RESET_PASSWORD_EXPIRE_IN_DAYS),
}
return out
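# The reset link above is assembled with Invenio's make_canonical_urlargd;
# the underlying idea can be sketched with the stdlib urllib.parse.urlencode
# (the helper name and exact argument encoding here are illustrative
# assumptions, not the module's actual behaviour):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_reset_url(base_url, ln, reset_key):
    """Compose the password-reset URL with its query arguments."""
    return '%s/youraccount/resetpassword?%s' % (
        base_url, urlencode({'ln': ln, 'k': reset_key}))

url = build_reset_url('https://example.org', 'en', 'abc123')
```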
def tmpl_account_address_activation_email_body(self, email, address_activation_key, ip_address, ln=CFG_SITE_LANG):
"""
The body of the email that sends the email address
activation link to users.
"""
_ = gettext_set_language(ln)
out = """
%(intro)s
%(intro2)s
<%(link)s>
%(outro)s
%(outro2)s""" % {
'intro': _("Somebody (possibly you) coming from %(x_ip_address)s "
"has asked\nto register a new account at %(x_sitename)s\nfor the "
"email address \"%(x_email)s\"." % {
'x_sitename' :CFG_SITE_NAME_INTL.get(ln, CFG_SITE_NAME),
'x_email' : email,
'x_ip_address' : ip_address,
}
),
'intro2' : _("If you want to complete this account registration, please go to:"),
'link' : "%s/youraccount/access%s" %
(CFG_SITE_SECURE_URL, make_canonical_urlargd({
'ln' : ln,
'mailcookie' : address_activation_key
}, {})),
'outro' : _("in order to confirm the validity of this request."),
'outro2' : _("Please note that this URL will remain valid for about %(days)s days only.", days=CFG_WEBSESSION_ADDRESS_ACTIVATION_EXPIRE_IN_DAYS),
}
return out
def tmpl_account_emailSent(self, ln, email):
"""
Displays a confirmation message for an email sent
Parameters:
- 'ln' *string* - The language to display the interface in
- 'email' *string* - The email to which the message has been sent
"""
# load the right message language
_ = gettext_set_language(ln)
out =""
out += _("Okay, a password reset link has been emailed to %(x_email)s.", x_email=email)
return out
def tmpl_account_delete(self, ln):
"""
Displays a confirmation message about deleting the account
Parameters:
- 'ln' *string* - The language to display the interface in
"""
# load the right message language
_ = gettext_set_language(ln)
out = "<p>" + _("""Deleting your account""") + '</p>'
return out
def tmpl_account_logout(self, ln):
"""
Displays a confirmation message about logging out
Parameters:
- 'ln' *string* - The language to display the interface in
"""
# load the right message language
_ = gettext_set_language(ln)
out = _("You are no longer recognized by our system.") + ' '
if CFG_EXTERNAL_AUTH_USING_SSO and CFG_EXTERNAL_AUTH_LOGOUT_SSO:
out += _("""You are still recognized by the centralized
%(x_fmt_open)sSSO%(x_fmt_close)s system. You can
%(x_url_open)slogout from SSO%(x_url_close)s, too.""") % \
{'x_fmt_open' : '<strong>', 'x_fmt_close' : '</strong>',
'x_url_open' : '<a href="%s">' % CFG_EXTERNAL_AUTH_LOGOUT_SSO,
'x_url_close' : '</a>'}
out += '<br />'
out += _("If you wish you can %(x_url_open)slogin here%(x_url_close)s.") % \
{'x_url_open': '<a href="./login?ln=' + ln + '">',
'x_url_close': '</a>'}
return out
def tmpl_login_form(self, ln, referer, internal, register_available, methods, selected_method, msg=None):
"""
Displays a login form
Parameters:
- 'ln' *string* - The language to display the interface in
- 'referer' *string* - The referer URL - will be redirected upon after login
- 'internal' *boolean* - If we are producing an internal authentication
- 'register_available' *boolean* - If users can register freely in the system
- 'methods' *array* - The available authentication methods
- 'selected_method' *string* - The default authentication method
- 'msg' *string* - The message to print before the form, if needed
"""
# load the right message language
_ = gettext_set_language(ln)
out = "<div style='float:left'>"
if not msg:
out += "<p>%(please_login)s</p>" % {
'please_login' : cgi.escape(_("If you already have an account, please login using the form below."))
}
if CFG_CERN_SITE:
out += "<p>" + _("If you don't own a CERN account yet, you can register a %(x_url_open)snew CERN lightweight account%(x_url_close)s.", **{'x_url_open' : '<a href="https://www.cern.ch/lightweightregistration/RegisterAccount.aspx">', 'x_url_close' : '</a>'}) + "</p>"
else:
if register_available:
out += "<p>"+_("If you don't own an account yet, please %(x_url_open)sregister%(x_url_close)s an internal account.") %\
{'x_url_open': '<a href="../youraccount/register?ln=' + ln + '">',
'x_url_close': '</a>'} + "</p>"
else:
# users cannot register accounts, so advise them
# how to get one, or be silent about register
# facility if account level is more than 4:
if CFG_ACCESS_CONTROL_LEVEL_ACCOUNTS < 5:
out += "<p>" + _("If you don't own an account yet, please contact %(x_name)s.",
x_name=('<a href="mailto:%s">%s</a>' % (cgi.escape(CFG_SITE_SUPPORT_EMAIL, True), cgi.escape(CFG_SITE_SUPPORT_EMAIL)))) + "</p>"
else:
out += "<p>%s</p>" % msg
out += """<form method="post" action="%(CFG_SITE_SECURE_URL)s/youraccount/login">
<table>
""" % {'CFG_SITE_SECURE_URL': CFG_SITE_SECURE_URL}
if len(methods) - CFG_OPENID_AUTHENTICATION - CFG_OAUTH2_AUTHENTICATION - CFG_OAUTH1_AUTHENTICATION > 1:
# more than one method, must make a select
login_select = """<select name="login_method" id="login_method">"""
for method in methods:
# OpenID/OAuth shouldn't be shown in this list.
if method not in ['openid', 'oauth1', 'oauth2']:
login_select += """<option value="%(method)s" %(selected)s>%(method)s</option>""" % {
'method' : cgi.escape(method, True),
'selected' : (method == selected_method and 'selected="selected"' or "")
}
login_select += "</select>"
out += """
<tr>
<td align="right"><strong><label for="login_method">%(login_title)s</label></strong></td>
<td>%(login_select)s</td>
</tr>""" % {
'login_title' : cgi.escape(_("Login method:")),
'login_select' : login_select,
}
else:
# only one login method available
out += """<input type="hidden" name="login_method" value="%s" />""" % cgi.escape(methods[0], True)
out += """<tr>
<td align="right">
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="referer" value="%(referer)s" />
<strong><label for="p_un">%(username)s:</label></strong>
</td>
<td><input type="text" size="25" name="p_un" id="p_un" value="" /></td>
</tr>
<tr>
<td align="right"><strong><label for="p_pw">%(password)s:</label></strong></td>
<td align="left"><input type="password" size="25" name="p_pw" id="p_pw" value="" /></td>
</tr>
<tr>
<td></td>
<td align="left"><input type="checkbox" name="remember_me" id="remember_me"/><em><label for="remember_me">%(remember_me)s</label></em></td>
</tr>
<tr>
<td></td>
<td align="center" colspan="3"><input class="formbutton" type="submit" name="action" value="%(login)s" />""" % {
'ln': cgi.escape(ln, True),
'referer' : cgi.escape(referer, True),
'username' : cgi.escape(_("Username")),
'password' : cgi.escape(_("Password")),
'remember_me' : cgi.escape(_("Remember login on this computer.")),
'login' : cgi.escape(_("login")),
}
if internal:
out += """ (<a href="./lost?ln=%(ln)s">%(lost_pass)s</a>)""" % {
'ln' : cgi.escape(ln, True),
'lost_pass' : cgi.escape(_("Lost your password?"))
}
out += """</td>
</tr>
</table></form>"""
out += """<p><strong>%(note)s:</strong> %(note_text)s</p>""" % {
'note' : cgi.escape(_("Note")),
'note_text': cgi.escape(_("You can use your nickname or your email address to login."))}
out += "</div>"
if CFG_OPENID_AUTHENTICATION or \
CFG_OAUTH2_AUTHENTICATION or \
CFG_OAUTH1_AUTHENTICATION:
# If OpenID or OAuth authentication is enabled, we put the login
# forms of providers.
out += self.tmpl_external_login_panel(ln, referer)
return out
def tmpl_lost_your_password_teaser(self, ln=CFG_SITE_LANG):
        """Displays a short sentence reminding the user that they may
        have lost their password. Used by the registration page.
        """
_ = gettext_set_language(ln)
out = ""
out += """<a href="./lost?ln=%(ln)s">%(maybe_lost_pass)s</a>""" % {
'ln' : ln,
                 'maybe_lost_pass': _("Maybe you have lost your password?")
}
return out
def tmpl_reset_password_form(self, ln, email, reset_key, msg=''):
"""Display a form to reset the password."""
_ = gettext_set_language(ln)
        out = "<p>%s</p>" % _("Your request is valid. Please set the new "
"desired password in the following form.")
if msg:
out += """<p class='warning'>%s</p>""" % msg
out += """
<form method="post" action="../youraccount/resetpassword?ln=%(ln)s">
<input type="hidden" name="k" value="%(reset_key)s" />
<input type="hidden" name="e" value="%(email)s" />
<input type="hidden" name="reset" value="1" />
<table>
<tr><td align="right"><strong>%(set_password_for)s</strong>:</td><td><em>%(email)s</em></td></tr>
<tr><td align="right"><strong><label for="password">%(type_new_password)s:</label></strong></td>
            <td><input type="password" name="password" id="password" value="" /></td></tr>
<tr><td align="right"><strong><label for="password2">%(type_it_again)s:</label></strong></td>
<td><input type="password" name="password2" id="password2" value="" /></td></tr>
<tr><td align="center" colspan="2">
<input class="formbutton" type="submit" name="action" value="%(set_new_password)s" />
</td></tr>
</table>
</form>""" % {
'ln' : ln,
'reset_key' : reset_key,
'email' : email,
'set_password_for' : _('Set a new password for'),
'type_new_password' : _('Type the new password'),
'type_it_again' : _('Type again the new password'),
'set_new_password' : _('Set the new password')
}
return out
def tmpl_register_page(self, ln, referer, level):
"""
        Displays a registration form
Parameters:
- 'ln' *string* - The language to display the interface in
          - 'referer' *string* - The referer URL, to redirect to after login
- 'level' *int* - Login level (0 - all access, 1 - accounts activated, 2+ - no self-registration)
"""
# load the right message language
_ = gettext_set_language(ln)
out = ""
if level <= 1:
out += _("Please enter your email address and desired nickname and password:")
if level == 1:
                out += " " + _("It will not be possible to use the account before it has been verified and activated.")
out += """
<form method="post" action="../youraccount/register">
<input type="hidden" name="referer" value="%(referer)s" />
<input type="hidden" name="ln" value="%(ln)s" />
<table>
<tr>
<td align="right"><strong><label for="p_email">%(email_address)s:</label></strong><br /><small class="important">(%(mandatory)s)</small></td>
<td><input type="text" size="25" name="p_email" id="p_email" value="" /><br />
<small><span class="quicknote">%(example)s:</span>
<span class="example">john.doe@example.com</span></small>
</td>
<td></td>
</tr>
<tr>
<td align="right"><strong><label for="p_nickname">%(nickname)s:</label></strong><br /><small class="important">(%(mandatory)s)</small></td>
<td><input type="text" size="25" name="p_nickname" id="p_nickname" value="" /><br />
<small><span class="quicknote">%(example)s:</span>
<span class="example">johnd</span></small>
</td>
<td></td>
</tr>
<tr>
<td align="right"><strong><label for="p_pw">%(password)s:</label></strong><br /><small class="quicknote">(%(optional)s)</small></td>
<td align="left"><input type="password" size="25" name="p_pw" id="p_pw" value="" /><br />
<small><span class="quicknote">%(note)s:</span> %(password_contain)s</small>
</td>
<td></td>
</tr>
<tr>
<td align="right"><strong><label for="p_pw2">%(retype)s:</label></strong></td>
<td align="left"><input type="password" size="25" name="p_pw2" id="p_pw2" value="" /></td>
<td></td>
</tr>
<tr>
<td></td>
<td align="left" colspan="3"><input class="formbutton" type="submit" name="action" value="%(register)s" /></td>
</tr>
</table>
</form>
          <p><strong>%(note)s:</strong> %(explain_acc)s</p>""" % {
              'referer' : cgi.escape(referer, True),
              'ln' : cgi.escape(ln, True),
'email_address' : _("Email address"),
'nickname' : _("Nickname"),
'password' : _("Password"),
'mandatory' : _("mandatory"),
'optional' : _("optional"),
'example' : _("Example"),
'note' : _("Note"),
'password_contain' : _("The password phrase may contain punctuation, spaces, etc."),
'retype' : _("Retype Password"),
'register' : _("register"),
'explain_acc' : _("Please do not use valuable passwords such as your Unix, AFS or NICE passwords with this service. Your email address will stay strictly confidential and will not be disclosed to any third party. It will be used to identify you for personal services of %(x_name)s. For example, you may set up an automatic alert search that will look for new preprints and will notify you daily of new arrivals by email.", x_name=CFG_SITE_NAME),
}
else:
# level >=2, so users cannot register accounts
            out += "<p>" + _("It is not possible to create an account yourself. Contact %(x_name)s if you want an account.",
                x_name=('<a href="mailto:%s">%s</a>' % (cgi.escape(CFG_SITE_SUPPORT_EMAIL, True), cgi.escape(CFG_SITE_SUPPORT_EMAIL)))) + "</p>"
return out
def tmpl_account_adminactivities(self, ln, uid, guest, roles, activities):
"""
Displays the admin activities block for this user
Parameters:
- 'ln' *string* - The language to display the interface in
          - 'uid' *string* - The user id
- 'guest' *boolean* - If the user is guest
- 'roles' *array* - The current user roles
- 'activities' *array* - The user allowed activities
"""
# load the right message language
_ = gettext_set_language(ln)
out = ""
# guest condition
if guest:
return _("You seem to be a guest user. You have to %(x_url_open)slogin%(x_url_close)s first.",
x_url_open='<a href="' + CFG_SITE_SECURE_URL + '/youraccount/login?ln=' + ln + '">',
                     x_url_close='</a>')
# no rights condition
if not roles:
return "<p>" + _("You are not authorized to access administrative functions.") + "</p>"
# displaying form
out += "<p>" + _("You are enabled to the following roles: %(x_role)s.",
x_role=('<em>' + ", ".join(roles) + "</em>")) + '</p>'
if activities:
# print proposed links:
            activities.sort(key=lambda x: x.lower())  # sort case-insensitively
tmp_out = ''
for action in activities:
if action == "runbibedit":
tmp_out += """<br /> <a href="%s/%s/edit/">%s</a>""" % (CFG_SITE_URL, CFG_SITE_RECORD, _("Run Record Editor"))
if action == "runbibeditmulti":
tmp_out += """<br /> <a href="%s/%s/multiedit/">%s</a>""" % (CFG_SITE_URL, CFG_SITE_RECORD, _("Run Multi-Record Editor"))
if action == "runauthorlist":
tmp_out += """<br /> <a href="%s/authorlist/">%s</a>""" % (CFG_SITE_URL, _("Run Author List Manager"))
if action == "runbibcirculation":
tmp_out += """<br /> <a href="%s/admin/bibcirculation/bibcirculationadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Run BibCirculation"))
if action == "runbibmerge":
tmp_out += """<br /> <a href="%s/%s/merge/">%s</a>""" % (CFG_SITE_URL, CFG_SITE_RECORD, _("Run Record Merger"))
if action == "runbibswordclient":
tmp_out += """<br /> <a href="%s/%s/bibsword/">%s</a>""" % (CFG_SITE_URL, CFG_SITE_RECORD, _("Run BibSword Client"))
if action == "runbatchuploader":
tmp_out += """<br /> <a href="%s/batchuploader/metadata?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Run Batch Uploader"))
if action == "cfgbibformat":
tmp_out += """<br /> <a href="%s/admin/bibformat/bibformatadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure BibFormat"))
if action == "cfgbibknowledge":
tmp_out += """<br /> <a href="%s/kb?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure BibKnowledge"))
if action == "cfgoaiharvest":
tmp_out += """<br /> <a href="%s/admin/oaiharvest/oaiharvestadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure OAI Harvest"))
if action == "cfgoairepository":
tmp_out += """<br /> <a href="%s/admin/oairepository/oairepositoryadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure OAI Repository"))
if action == "cfgbibindex":
tmp_out += """<br /> <a href="%s/admin/bibindex/bibindexadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure BibIndex"))
if action == "cfgbibrank":
tmp_out += """<br /> <a href="%s/admin/bibrank/bibrankadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure BibRank"))
if action == "cfgwebaccess":
tmp_out += """<br /> <a href="%s/admin/webaccess/webaccessadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure WebAccess"))
if action == "cfgwebcomment":
tmp_out += """<br /> <a href="%s/admin/webcomment/webcommentadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure WebComment"))
if action == "cfgweblinkback":
tmp_out += """<br /> <a href="%s/admin/weblinkback/weblinkbackadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure WebLinkback"))
if action == "cfgwebjournal":
tmp_out += """<br /> <a href="%s/admin/webjournal/webjournaladmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure WebJournal"))
if action == "cfgwebsearch":
tmp_out += """<br /> <a href="%s/admin/websearch/websearchadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure WebSearch"))
if action == "cfgwebsubmit":
tmp_out += """<br /> <a href="%s/admin/websubmit/websubmitadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure WebSubmit"))
if action == "runbibdocfile":
tmp_out += """<br /> <a href="%s/%s/managedocfiles?ln=%s">%s</a>""" % (CFG_SITE_URL, CFG_SITE_RECORD, ln, _("Run Document File Manager"))
if action == "cfgbibsort":
tmp_out += """<br /> <a href="%s/admin/bibsort/bibsortadmin.py?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Configure BibSort"))
if action == "runinfomanager":
tmp_out += """<br /> <a href="%s/info/manage?ln=%s">%s</a>""" % (CFG_SITE_URL, ln, _("Run Info Space Manager"))
if tmp_out:
out += _("Here are some interesting web admin links for you:") + tmp_out
out += "<br />" + _("For more admin-level activities, see the complete %(x_url_open)sAdmin Area%(x_url_close)s.",
x_url_open='<a href="' + CFG_SITE_URL + '/help/admin?ln=' + ln + '">',
x_url_close='</a>')
return out
def tmpl_create_userinfobox(self, ln, url_referer, guest, username, submitter, referee, admin, usebaskets, usemessages, usealerts, usegroups, useloans, usestats):
"""
Displays the user block
Parameters:
- 'ln' *string* - The language to display the interface in
- 'url_referer' *string* - URL of the page being displayed
- 'guest' *boolean* - If the user is guest
- 'username' *string* - The username (nickname or email)
- 'submitter' *boolean* - If the user is submitter
- 'referee' *boolean* - If the user is referee
- 'admin' *boolean* - If the user is admin
- 'usebaskets' *boolean* - If baskets are enabled for the user
- 'usemessages' *boolean* - If messages are enabled for the user
- 'usealerts' *boolean* - If alerts are enabled for the user
- 'usegroups' *boolean* - If groups are enabled for the user
- 'useloans' *boolean* - If loans are enabled for the user
- 'usestats' *boolean* - If stats are enabled for the user
@note: with the update of CSS classes (cds.cds ->
invenio.css), the variables useloans etc are not used in
this function, since they are in the menus. But we keep
them in the function signature for backwards
compatibility.
"""
# load the right message language
_ = gettext_set_language(ln)
out = """<img src="%s/img/user-icon-1-20x20.gif" border="0" alt=""/> """ % CFG_SITE_URL
if guest:
out += """%(guest_msg)s ::
<a class="userinfo" href="%(sitesecureurl)s/youraccount/login?ln=%(ln)s%(referer)s">%(login)s</a>""" % {
'sitesecureurl': CFG_SITE_SECURE_URL,
'ln' : ln,
'guest_msg' : _("guest"),
'referer' : url_referer and ('&referer=%s' % urllib.quote(url_referer)) or '',
'login' : _('login')
}
else:
out += """
<a class="userinfo" href="%(sitesecureurl)s/youraccount/display?ln=%(ln)s">%(username)s</a> :: """ % {
'sitesecureurl' : CFG_SITE_SECURE_URL,
'ln' : ln,
'username' : username
}
out += """<a class="userinfo" href="%(sitesecureurl)s/youraccount/logout?ln=%(ln)s">%(logout)s</a>""" % {
'sitesecureurl' : CFG_SITE_SECURE_URL,
'ln' : ln,
'logout' : _("logout"),
}
return out
def tmpl_warning(self, warnings, ln=CFG_SITE_LANG):
"""
Display len(warnings) warning fields
        @param warnings: list of strings
        @param ln: language
@return: html output
"""
        if not isinstance(warnings, (list, tuple)):
            warnings = [warnings]
        warningbox = ""
        if warnings:
warningbox = "<div class=\"warningbox\">\n <b>Warning:</b>\n"
for warning in warnings:
lines = warning.split("\n")
warningbox += " <p>"
for line in lines[0:-1]:
warningbox += line + " <br />\n"
warningbox += lines[-1] + " </p>"
warningbox += "</div><br />\n"
return warningbox
def tmpl_error(self, error, ln=CFG_SITE_LANG):
"""
Display error
@param error: string
        @param ln: language
@return: html output
"""
_ = gettext_set_language(ln)
errorbox = ""
if error != "":
errorbox = "<div class=\"errorbox\">\n <b>Error:</b>\n"
errorbox += " <p>"
errorbox += error + " </p>"
errorbox += "</div><br />\n"
return errorbox
def tmpl_display_all_groups(self,
infos,
admin_group_html,
member_group_html,
external_group_html = None,
warnings=[],
ln=CFG_SITE_LANG):
"""
Displays the 3 tables of groups: admin, member and external
Parameters:
- 'ln' *string* - The language to display the interface in
- 'admin_group_html' *string* - HTML code for displaying all the groups
the user is the administrator of
- 'member_group_html' *string* - HTML code for displaying all the groups
the user is member of
- 'external_group_html' *string* - HTML code for displaying all the
external groups the user is member of
"""
_ = gettext_set_language(ln)
group_text = self.tmpl_infobox(infos)
group_text += self.tmpl_warning(warnings)
if external_group_html:
group_text += """
<table>
<tr>
<td>%s</td>
</tr>
<tr>
<td><br />%s</td>
</tr>
<tr>
<td><br /><a name='external_groups'></a>%s</td>
</tr>
</table>""" %(admin_group_html, member_group_html, external_group_html)
else:
group_text += """
<table>
<tr>
<td>%s</td>
</tr>
<tr>
<td><br />%s</td>
</tr>
</table>""" %(admin_group_html, member_group_html)
return group_text
def tmpl_display_admin_groups(self, groups, ln=CFG_SITE_LANG):
"""
Display the groups the user is admin of.
Parameters:
- 'ln' *string* - The language to display the interface in
         - 'groups' *list* - All the groups the user is admin of
"""
_ = gettext_set_language(ln)
img_link = """
<a href="%(siteurl)s/yourgroups/%(action)s?grpID=%(grpID)s&ln=%(ln)s">
<img src="%(siteurl)s/img/%(img)s" alt="%(text)s" style="border:0" width="25"
height="25" /><br /><small>%(text)s</small>
</a>"""
out = self.tmpl_group_table_title(img="/img/group_admin.png",
text=_("You are an administrator of the following groups:") )
out += """
<table class="mailbox">
<thead class="mailboxheader">
<tr class="inboxheader">
<td>%s</td>
<td>%s</td>
<td style="width: 20px;" > </td>
<td style="width: 20px;"> </td>
</tr>
</thead>
<tfoot>
<tr style="height:0px;">
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tfoot>
<tbody class="mailboxbody">""" %(_("Group"), _("Description"))
if len(groups) == 0:
out += """
<tr class="mailboxrecord" style="height: 100px;">
<td colspan="4" style="text-align: center;">
<small>%s</small>
</td>
</tr>""" %(_("You are not an administrator of any groups."),)
for group_data in groups:
(grpID, name, description) = group_data
edit_link = img_link % {'siteurl' : CFG_SITE_URL,
'grpID' : grpID,
'ln': ln,
'img':"webbasket_create_small.png",
'text':_("Edit group"),
'action':"edit"
}
members_link = img_link % {'siteurl' : CFG_SITE_URL,
'grpID' : grpID,
'ln': ln,
'img':"webbasket_usergroup.png",
'text':_("Edit %(x_num)s members", x_num=''),
'action':"members"
}
out += """
<tr class="mailboxrecord">
<td>%s</td>
<td>%s</td>
<td style="text-align: center;" >%s</td>
<td style="text-align: center;" >%s</td>
</tr>""" % (cgi.escape(name), cgi.escape(description), edit_link, members_link)
out += """
<tr class="mailboxfooter">
<td colspan="2">
<form name="newGroup" action="create?ln=%(ln)s" method="post">
<input type="submit" name="create_group" value="%(write_label)s" class="formbutton" />
</form>
</td>
<td> </td>
<td> </td>
<td> </td>
</tr>
</tbody>
</table>""" % {'ln': ln,
'write_label': _("Create new group"),
}
return out
def tmpl_display_member_groups(self, groups, ln=CFG_SITE_LANG):
"""
Display the groups the user is member of.
Parameters:
- 'ln' *string* - The language to display the interface in
         - 'groups' *list* - All the groups the user is a member of
"""
_ = gettext_set_language(ln)
group_text = self.tmpl_group_table_title(img="/img/webbasket_us.png", text=_("You are a member of the following groups:"))
group_text += """
<table class="mailbox">
<thead class="mailboxheader">
<tr class="inboxheader">
<td>%s</td>
<td>%s</td>
</tr>
</thead>
<tfoot>
<tr style="height:0px;">
<td></td>
<td></td>
</tr>
</tfoot>
<tbody class="mailboxbody">""" % (_("Group"), _("Description"))
if len(groups) == 0:
group_text += """
<tr class="mailboxrecord" style="height: 100px;">
<td colspan="2" style="text-align: center;">
<small>%s</small>
</td>
</tr>""" %(_("You are not a member of any groups."),)
for group_data in groups:
            (group_id, name, description) = group_data  # avoid shadowing the built-in id()
group_text += """
<tr class="mailboxrecord">
<td>%s</td>
<td>%s</td>
</tr>""" % (cgi.escape(name), cgi.escape(description))
group_text += """
<tr class="mailboxfooter">
<td>
<form name="newGroup" action="join?ln=%(ln)s" method="post">
<input type="submit" name="join_group" value="%(join_label)s" class="formbutton" />
</form>
</td>
<td>
<form name="newGroup" action="leave?ln=%(ln)s" method="post">
<input type="submit" name="leave" value="%(leave_label)s" class="formbutton" />
</form>
</td>
</tr>
</tbody>
</table>
""" % {'ln': ln,
'join_label': _("Join new group"),
'leave_label':_("Leave group")
}
return group_text
def tmpl_display_external_groups(self, groups, ln=CFG_SITE_LANG):
"""
Display the external groups the user is member of.
Parameters:
- 'ln' *string* - The language to display the interface in
         - 'groups' *list* - All the external groups the user is a member of
"""
_ = gettext_set_language(ln)
group_text = self.tmpl_group_table_title(img="/img/webbasket_us.png", text=_("You are a member of the following external groups:"))
group_text += """
<table class="mailbox">
<thead class="mailboxheader">
<tr class="inboxheader">
<td>%s</td>
<td>%s</td>
</tr>
</thead>
<tfoot>
<tr style="height:0px;">
<td></td>
<td></td>
</tr>
</tfoot>
<tbody class="mailboxbody">""" % (_("Group"), _("Description"))
if len(groups) == 0:
group_text += """
<tr class="mailboxrecord" style="height: 100px;">
<td colspan="2" style="text-align: center;">
<small>%s</small>
</td>
</tr>""" %(_("You are not a member of any external groups."),)
for group_data in groups:
            (group_id, name, description) = group_data  # avoid shadowing the built-in id()
group_text += """
<tr class="mailboxrecord">
<td>%s</td>
<td>%s</td>
</tr>""" % (cgi.escape(name), cgi.escape(description))
group_text += """
</tbody>
</table>
"""
return group_text
def tmpl_display_input_group_info(self,
group_name,
group_description,
join_policy,
act_type="create",
grpID=None,
warnings=[],
ln=CFG_SITE_LANG):
"""
Display group data when creating or updating a group:
Name, description, join_policy.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'group_name' *string* - name of the group
- 'group_description' *string* - description of the group
- 'join_policy' *string* - join policy
- 'act_type' *string* - info about action : create or edit(update)
         - 'grpID' *int* - ID of the group (not None when editing a group)
- 'warnings' *list* - Display warning if values are not correct
"""
_ = gettext_set_language(ln)
#default
hidden_id =""
form_name = "create_group"
action = CFG_SITE_URL + '/yourgroups/create'
button_label = _("Create new group")
button_name = "create_button"
label = _("Create new group")
delete_text = ""
if act_type == "update":
form_name = "update_group"
action = CFG_SITE_URL + '/yourgroups/edit'
button_label = _("Update group")
button_name = "update"
label = _('Edit group %(x_name)s', x_name=cgi.escape(group_name))
delete_text = """<input type="submit" value="%s" class="formbutton" name="%s" />"""
delete_text %= (_("Delete group"),"delete")
if grpID is not None:
hidden_id = """<input type="hidden" name="grpID" value="%s" />"""
hidden_id %= grpID
out = self.tmpl_warning(warnings)
out += """
<form name="%(form_name)s" action="%(action)s" method="post">
<input type="hidden" name="ln" value="%(ln)s" />
<div style="padding:10px;">
<table class="bskbasket">
<thead class="bskbasketheader">
<tr>
<td class="bskactions">
<img src="%(logo)s" alt="%(label)s" />
</td>
<td class="bsktitle">
<b>%(label)s</b><br />
</td>
</tr>
</thead>
<tfoot>
<tr><td colspan="2"></td></tr>
</tfoot>
<tbody>
<tr>
<td colspan="2">
<table>
<tr>
<td><label for="group_name">%(name_label)s</label></td>
<td>
<input type="text" name="group_name" id="group_name" value="%(group_name)s" />
</td>
</tr>
<tr>
<td><label for="group_description">%(description_label)s</label></td>
<td>
<input type="text" name="group_description" id="group_description" value="%(group_description)s" />
</td>
</tr>
<tr>
<td>%(join_policy_label)s</td>
<td>
%(join_policy)s
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
%(hidden_id)s
<table>
<tr>
<td>
<input type="submit" value="%(button_label)s" class="formbutton" name="%(button_name)s" />
</td>
<td>
%(delete_text)s
</td>
<td>
<input type="submit" value="%(cancel_label)s" class="formbutton" name="cancel" />
</td>
</tr>
</table>
</div>
</form>
"""
out %= {'action' : action,
'logo': CFG_SITE_URL + '/img/webbasket_create.png',
'label': label,
'form_name' : form_name,
'name_label': _("Group name:"),
'delete_text': delete_text,
'description_label': _("Group description:"),
'join_policy_label': _("Group join policy:"),
'group_name': cgi.escape(group_name, 1),
'group_description': cgi.escape(group_description, 1),
'button_label': button_label,
'button_name':button_name,
'cancel_label':_("Cancel"),
'hidden_id':hidden_id,
'ln': ln,
'join_policy' :self.__create_join_policy_selection_menu("join_policy",
join_policy,
ln)
}
return out
def tmpl_display_input_join_group(self,
group_list,
group_name,
group_from_search,
search,
warnings=[],
ln=CFG_SITE_LANG):
"""
Display the groups the user can join.
        The user can use the default select list or the search box.
Parameters:
- 'ln' *string* - The language to display the interface in
         - 'group_list' *list* - All the groups the user can join
         - 'group_name' *string* - Name of the group the user is looking for
         - 'group_from_search' *list* - List of the groups matching group_name that the user can join
         - 'search' *int* - Whether the user is looking for a group using group_name
         - 'warnings' *list* - Display warning if two groups are selected
"""
_ = gettext_set_language(ln)
out = self.tmpl_warning(warnings)
search_content = ""
if search:
search_content = """<tr><td> </td><td>"""
if group_from_search != []:
search_content += self.__create_select_menu('grpID', group_from_search, _("Please select:"))
else:
search_content += _("No matching group")
search_content += """</td><td> </td></tr>"""
out += """
<form name="join_group" action="%(action)s" method="post">
<input type="hidden" name="ln" value="%(ln)s" />
<div style="padding:10px;">
<table class="bskbasket">
<thead class="bskbasketheader">
<tr>
<td class="bskactions">
<img src="%(logo)s" alt="%(label)s" />
</td>
<td class="bsktitle">
<b>%(label)s</b><br />
</td>
</tr>
</thead>
<tfoot>
<tr><td colspan="2"></td></tr>
</tfoot>
<tbody>
<tr>
<td colspan="2">
<table>
<tr>
<td>%(list_label)s</td>
<td>
%(group_list)s
</td>
<td>
</td>
</tr>
<tr>
<td><br /><label for="group_name">%(label2)s</label></td>
<td><br /><input type="text" name="group_name" id="group_name" value="%(group_name)s" /></td>
<td><br />
<input type="submit" name="find_button" value="%(find_label)s" class="nonsubmitbutton" />
</td>
</tr>
%(search_content)s
</table>
</td>
</tr>
</tbody>
</table>
<table>
<tr>
<td>
<input type="submit" name="join_button" value="%(label)s" class="formbutton" />
</td>
<td>
<input type="submit" value="%(cancel_label)s" class="formbutton" name="cancel" />
</td>
</tr>
</table>
</div>
</form>
"""
out %= {'action' : CFG_SITE_URL + '/yourgroups/join',
'logo': CFG_SITE_URL + '/img/webbasket_create.png',
'label': _("Join group"),
'group_name': cgi.escape(group_name, 1),
'label2':_("or find it") + ': ',
'list_label':_("Choose group:"),
'ln': ln,
'find_label': _("Find group"),
'cancel_label':_("Cancel"),
'group_list' :self.__create_select_menu("grpID",group_list, _("Please select:")),
'search_content' : search_content
}
return out
def tmpl_display_manage_member(self,
grpID,
group_name,
members,
pending_members,
infos=[],
warnings=[],
ln=CFG_SITE_LANG):
"""Display current members and waiting members of a group.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'grpID *int* - ID of the group
- 'group_name' *string* - Name of the group
- 'members' *list* - List of the current members
- 'pending_members' *list* - List of the waiting members
         - 'infos' *tuple of 2 lists* - Messages informing the user about the last action
         - 'warnings' *list* - Display warning if two groups are selected
"""
_ = gettext_set_language(ln)
out = self.tmpl_warning(warnings)
out += self.tmpl_infobox(infos)
out += """
<form name="member" action="%(action)s" method="post">
<p>%(title)s</p>
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="grpID" value="%(grpID)s"/>
<table>
<tr>
<td>
<table class="bskbasket">
<thead class="bskbasketheader">
<tr>
<td class="bskactions">
<img src="%(imgurl)s/webbasket_usergroup.png" alt="%(img_alt_header1)s" />
</td>
<td class="bsktitle">
%(header1)s<br />
</td>
</tr>
</thead>
<tfoot>
<tr><td colspan="2"></td></tr>
</tfoot>
<tbody>
<tr>
<td colspan="2">
<table>
<tr>
%(member_text)s
</tr>
</table>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td>
<table class="bskbasket">
<thead class="bskbasketheader">
<tr>
<td class="bskactions">
<img src="%(imgurl)s/webbasket_usergroup_gray.png" alt="%(img_alt_header2)s" />
</td>
<td class="bsktitle">
%(header2)s<br />
</td>
</tr>
</thead>
<tfoot>
<tr><td colspan="2"></td></tr>
</tfoot>
<tbody>
<tr>
<td colspan="2">
<table>
<tr>
%(pending_text)s
</tr>
</table>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td>
<table class="bskbasket" style="width: 400px">
<thead class="bskbasketheader">
<tr>
<td class="bskactions">
<img src="%(imgurl)s/iconpen.gif" alt="%(img_alt_header3)s" />
</td>
<td class="bsktitle">
<b>%(header3)s</b><br />
</td>
</tr>
</thead>
<tfoot>
<tr><td colspan="2"></td></tr>
</tfoot>
<tbody>
<tr>
<td colspan="2">
<table>
<tr>
                    <td colspan="2" style="padding: 0 5px 10px 5px;">%(invite_text)s</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td>
<input type="submit" value="%(cancel_label)s" class="formbutton" name="cancel" />
</td>
</tr>
</table>
</form>
"""
        if members:
            member_list = self.__create_select_menu("member_id", members, _("Please select:"))
            member_text = """
            <td style="padding: 0 5px 10px 5px;">%s</td>
            <td style="padding: 0 5px 10px 5px;">
              <input type="submit" name="remove_member" value="%s" class="nonsubmitbutton"/>
            </td>""" % (member_list, _("Remove member"))
        else:
            member_text = """<td style="padding: 0 5px 10px 5px;" colspan="2">%s</td>""" % _("No members.")
        if pending_members:
            pending_list = self.__create_select_menu("pending_member_id", pending_members, _("Please select:"))
            pending_text = """
            <td style="padding: 0 5px 10px 5px;">%s</td>
            <td style="padding: 0 5px 10px 5px;">
              <input type="submit" name="add_member" value="%s" class="nonsubmitbutton"/>
            </td>
            <td style="padding: 0 5px 10px 5px;">
              <input type="submit" name="reject_member" value="%s" class="nonsubmitbutton"/>
            </td>""" % (pending_list, _("Accept member"), _("Reject member"))
        else:
            pending_text = """<td style="padding: 0 5px 10px 5px;" colspan="2">%s</td>""" % _("No members awaiting approval.")
header1 = self.tmpl_group_table_title(text=_("Current members"))
header2 = self.tmpl_group_table_title(text=_("Members awaiting approval"))
header3 = _("Invite new members")
write_a_message_url = create_url(
"%s/yourmessages/write" % CFG_SITE_URL,
{
'ln' : ln,
'msg_subject' : _('Invitation to join "%(x_name)s" group', x_name=escape_html(group_name)),
'msg_body' : _("""\
Hello:
I think you might be interested in joining the group "%(x_name)s".
You can join by clicking here: %(x_url)s.
Best regards.
""", **{'x_name': group_name,
'x_url': create_html_link("%s/yourgroups/join" % CFG_SITE_URL, { 'grpID' : grpID,
'join_button' : "1",
},
link_label=group_name, escape_urlargd=True, escape_linkattrd=True)})})
link_open = '<a href="%s">' % escape_html(write_a_message_url)
invite_text = _("If you want to invite new members to join your group, please use the %(x_url_open)sweb message%(x_url_close)s system.",
**{'x_url_open': link_open, 'x_url_close': '</a>'})
action = CFG_SITE_URL + '/yourgroups/members?ln=' + ln
out %= {'title':_('Group: %(x_name)s', x_name=escape_html(group_name)),
'member_text' : member_text,
'pending_text' :pending_text,
'action':action,
'grpID':grpID,
'header1': header1,
'header2': header2,
'header3': header3,
'img_alt_header1': _("Current members"),
'img_alt_header2': _("Members awaiting approval"),
'img_alt_header3': _("Invite new members"),
'invite_text': invite_text,
'imgurl': CFG_SITE_URL + '/img',
'cancel_label':_("Cancel"),
'ln':ln
}
return out
def tmpl_display_input_leave_group(self,
groups,
warnings=[],
ln=CFG_SITE_LANG):
"""Display groups the user can leave.
Parameters:
- 'ln' *string* - The language to display the interface in
- 'groups' *list* - List of groups the user is currently member of
- 'warnings' *list* - Display warning if no group is selected
"""
_ = gettext_set_language(ln)
out = self.tmpl_warning(warnings)
out += """
<form name="leave" action="%(action)s" method="post">
<input type="hidden" name="ln" value="%(ln)s" />
<div style="padding:10px;">
<table class="bskbasket">
<thead class="bskbasketheader">
<tr>
<td class="bskactions">
<img src="%(logo)s" alt="%(label)s" />
</td>
<td class="bsktitle">
<b>%(label)s</b><br />
</td>
</tr>
</thead>
<tfoot>
<tr><td colspan="2"></td></tr>
</tfoot>
<tbody>
<tr>
<td colspan="2">
<table>
<tr>
<td>%(list_label)s</td>
<td>
%(groups)s
</td>
<td>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
<table>
<tr>
<td>
%(submit)s
</td>
<td>
<input type="submit" value="%(cancel_label)s" class="formbutton" name="cancel" />
</td>
</tr>
</table>
</div>
</form>
"""
if groups:
groups = self.__create_select_menu("grpID", groups, _("Please select:"))
list_label = _("Group list")
submit = """<input type="submit" name="leave_button" value="%s" class="formbutton"/>""" % _("Leave group")
        else:
            groups = _("You are not a member of any group.")
list_label = ""
submit = ""
action = CFG_SITE_URL + '/yourgroups/leave?ln=%s'
action %= (ln)
out %= {'groups' : groups,
'list_label' : list_label,
'action':action,
'logo': CFG_SITE_URL + '/img/webbasket_create.png',
'label' : _("Leave group"),
'cancel_label':_("Cancel"),
'ln' :ln,
'submit' : submit
}
return out
def tmpl_confirm_delete(self, grpID, ln=CFG_SITE_LANG):
"""
        Display a confirmation message when deleting a group
@param grpID *int* - ID of the group
@param ln: language
@return: html output
"""
_ = gettext_set_language(ln)
action = CFG_SITE_URL + '/yourgroups/edit'
out = """
<form name="delete_group" action="%(action)s" method="post">
<table class="confirmoperation">
<tr>
<td colspan="2" class="confirmmessage">
%(message)s
</td>
</tr>
<tr>
<td>
<input type="hidden" name="confirmed" value="1" />
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="grpID" value="%(grpID)s" />
<input type="submit" name="delete" value="%(yes_label)s" class="formbutton" />
</td>
<td>
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="grpID" value="%(grpID)s" />
<input type="submit" value="%(no_label)s" class="formbutton" />
</td>
</tr>
</table>
</form>"""% {'message': _("Are you sure you want to delete this group?"),
'ln':ln,
'yes_label': _("Yes"),
'no_label': _("No"),
'grpID':grpID,
'action': action
}
return out
def tmpl_confirm_leave(self, uid, grpID, ln=CFG_SITE_LANG):
"""
display a confirm message
@param grpID *int* - ID of the group
@param ln: language
@return: html output
"""
_ = gettext_set_language(ln)
action = CFG_SITE_URL + '/yourgroups/leave'
out = """
<form name="leave_group" action="%(action)s" method="post">
<table class="confirmoperation">
<tr>
<td colspan="2" class="confirmmessage">
%(message)s
</td>
</tr>
<tr>
<td>
<input type="hidden" name="confirmed" value="1" />
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="grpID" value="%(grpID)s" />
<input type="submit" name="leave_button" value="%(yes_label)s" class="formbutton" />
</td>
<td>
<input type="hidden" name="ln" value="%(ln)s" />
<input type="hidden" name="grpID" value="%(grpID)s" />
<input type="submit" value="%(no_label)s" class="formbutton" />
</td>
</tr>
</table>
</form>"""% {'message': _("Are you sure you want to leave this group?"),
'ln':ln,
'yes_label': _("Yes"),
'no_label': _("No"),
'grpID':grpID,
'action': action
}
return out
def __create_join_policy_selection_menu(self, name, current_join_policy, ln=CFG_SITE_LANG):
        """Private function. Create a drop-down menu for selecting the join policy.
@param current_join_policy: join policy as defined in CFG_WEBSESSION_GROUP_JOIN_POLICY
@param ln: language
"""
_ = gettext_set_language(ln)
elements = [(CFG_WEBSESSION_GROUP_JOIN_POLICY['VISIBLEOPEN'],
_("Visible and open for new members")),
(CFG_WEBSESSION_GROUP_JOIN_POLICY['VISIBLEMAIL'],
_("Visible but new members need approval"))
]
select_text = _("Please select:")
return self.__create_select_menu(name, elements, select_text, selected_key=current_join_policy)
def __create_select_menu(self, name, elements, select_text, multiple=0, selected_key=None):
        """ private function, returns an HTML select menu
@param name: name of HTML control
@param elements: list of (key, value)
"""
if multiple :
out = """
<select name="%s" multiple="multiple" style="width:100%%">"""% (name)
else :
out = """<select name="%s" style="width:100%%">""" % name
out += '<option value="-1">%s</option>' % (select_text)
for (key, label) in elements:
selected = ''
if key == selected_key:
selected = ' selected="selected"'
out += '<option value="%s"%s>%s</option>'% (key, selected, label)
out += '</select>'
return out
def tmpl_infobox(self, infos, ln=CFG_SITE_LANG):
"""Display len(infos) information fields
@param infos: list of strings
        @param ln: language
@return: html output
"""
_ = gettext_set_language(ln)
        if not isinstance(infos, (list, tuple)):
infos = [infos]
infobox = ""
for info in infos:
infobox += '<div><span class="info">'
lines = info.split("\n")
for line in lines[0:-1]:
infobox += line + "<br />\n"
infobox += lines[-1] + "</span></div>\n"
return infobox
def tmpl_navtrail(self, ln=CFG_SITE_LANG, title=""):
"""
display the navtrail, e.g.:
Your account > Your group > title
@param title: the last part of the navtrail. Is not a link
@param ln: language
        @return: html formatted navtrail
"""
_ = gettext_set_language(ln)
nav_h1 = '<a class="navtrail" href="%s/youraccount/display">%s</a>'
nav_h2 = ""
if (title != ""):
nav_h2 = ' > <a class="navtrail" href="%s/yourgroups/display">%s</a>'
nav_h2 = nav_h2 % (CFG_SITE_URL, _("Your Groups"))
return nav_h1 % (CFG_SITE_URL, _("Your Account")) + nav_h2
def tmpl_group_table_title(self, img="", text="", ln=CFG_SITE_LANG):
"""
display the title of a table:
- 'img' *string* - img path
- 'text' *string* - title
- 'ln' *string* - The language to display the interface in
"""
out = "<div>"
if img:
out += """
<img src="%s" alt="" />
""" % (CFG_SITE_URL + img)
out += """
<b>%s</b>
</div>""" % text
return out
def tmpl_admin_msg(self, group_name, grpID, ln=CFG_SITE_LANG):
"""
return message content for joining group
- 'group_name' *string* - name of the group
- 'grpID' *int* - ID of the group
- 'ln' *string* - The language to display the interface in
"""
_ = gettext_set_language(ln)
subject = _("Group %(x_name)s: New membership request", x_name=group_name)
url = CFG_SITE_URL + "/yourgroups/members?grpID=%s&ln=%s"
url %= (grpID, ln)
        # FIXME: which user? We should show their nickname.
body = (_("A user wants to join the group %(x_name)s.", x_name=group_name)) + '<br />'
body += _("Please %(x_url_open)saccept or reject%(x_url_close)s this user's request.",
x_url_open='<a href="' + url + '">',
x_url_close='</a>')
body += '<br />'
return subject, body
def tmpl_member_msg(self,
group_name,
accepted=0,
ln=CFG_SITE_LANG):
"""
        return message content when a new member is accepted or rejected
- 'group_name' *string* - name of the group
- 'accepted' *int* - 1 if new membership has been accepted, 0 if it has been rejected
- 'ln' *string* - The language to display the interface in
"""
_ = gettext_set_language(ln)
if accepted:
subject = _("Group %(x_name)s: Join request has been accepted", x_name=group_name)
body = _("Your request for joining group %(x_name)s has been accepted.", x_name=group_name)
else:
subject = _("Group %(x_name)s: Join request has been rejected", x_name=group_name)
body = _("Your request for joining group %(x_name)s has been rejected.", x_name=group_name)
url = CFG_SITE_URL + "/yourgroups/display?ln=" + ln
body += '<br />'
body += _("You can consult the list of %(x_url_open)syour groups%(x_url_close)s.",
x_url_open='<a href="' + url + '">',
x_url_close='</a>')
body += '<br />'
return subject, body
def tmpl_delete_msg(self,
group_name,
ln=CFG_SITE_LANG):
"""
        return message content when a group has been deleted
- 'group_name' *string* - name of the group
- 'ln' *string* - The language to display the interface in
"""
_ = gettext_set_language(ln)
subject = _("Group %(x_name)s has been deleted", x_name=group_name)
url = CFG_SITE_URL + "/yourgroups/display?ln=" + ln
body = _("Group %(x_name)s has been deleted by its administrator.", x_name=group_name)
body += '<br />'
body += _("You can consult the list of %(x_url_open)syour groups%(x_url_close)s.", **{'x_url_open': '<a href="' + url + '">',
'x_url_close': '</a>'})
body += '<br />'
return subject, body
def tmpl_group_info(self, nb_admin_groups=0, nb_member_groups=0, nb_total_groups=0, ln=CFG_SITE_LANG):
"""
display infos about groups (used by myaccount.py)
        @param nb_admin_groups: number of groups the user administers
        @param nb_member_groups: number of groups the user is a member of
        @param nb_total_groups: total number of groups the user belongs to
        @param ln: language
        @return: html output
"""
_ = gettext_set_language(ln)
out = _("You can consult the list of %(x_url_open)s%(x_nb_total)i groups%(x_url_close)s you are subscribed to (%(x_nb_member)i) or administering (%(x_nb_admin)i).")
out %= {'x_url_open': '<a href="' + CFG_SITE_URL + '/yourgroups/display?ln=' + ln + '">',
'x_nb_total': nb_total_groups,
'x_url_close': '</a>',
'x_nb_admin': nb_admin_groups,
'x_nb_member': nb_member_groups}
return out
def tmpl_general_warnings(self, warning_list, ln=CFG_SITE_LANG):
"""
display information to the admin user about possible
        security problems in the system.
"""
message = ""
_ = gettext_set_language(ln)
#Try and connect to the mysql database with the default invenio password
if "warning_mysql_password_equal_to_invenio_password" in warning_list:
message += "<p><font color=red>"
message += _("Warning: The password set for MySQL root user is the same as the default Invenio password. For security purposes, you may want to change the password.")
message += "</font></p>"
#Try and connect to the invenio database with the default invenio password
if "warning_invenio_password_equal_to_default" in warning_list:
message += "<p><font color=red>"
message += _("Warning: The password set for the Invenio MySQL user is the same as the shipped default. For security purposes, you may want to change the password.")
message += "</font></p>"
#Check if the admin password is empty
if "warning_empty_admin_password" in warning_list:
message += "<p><font color=red>"
message += _("Warning: The password set for the Invenio admin user is currently empty. For security purposes, it is strongly recommended that you add a password.")
message += "</font></p>"
#Check if the admin email has been changed from the default
if "warning_site_support_email_equal_to_default" in warning_list:
message += "<p><font color=red>"
message += _("Warning: The email address set for support email is currently set to info@invenio-software.org. It is recommended that you change this to your own address.")
message += "</font></p>"
#Check for a new release
if "note_new_release_available" in warning_list:
message += "<p><font color=red>"
message += _("A newer version of Invenio is available for download. You may want to visit ")
message += "<a href=\"http://invenio-software.org/wiki/Installation/Download\">http://invenio-software.org/wiki/Installation/Download</a>"
message += "</font></p>"
#Error downloading release notes
if "error_cannot_download_release_notes" in warning_list:
message += "<p><font color=red>"
message += _("Cannot download or parse release notes from http://invenio-software.org/repo/invenio/tree/RELEASE-NOTES")
message += "</font></p>"
if "email_auto_generated" in warning_list:
message += "<p><font color=red>"
message += _("Your e-mail is auto-generated by the system. Please change your e-mail from <a href='%(x_site)s/youraccount/edit?ln=%(x_link)s'>account settings</a>.",
x_site=CFG_SITE_SECURE_URL, x_link=ln)
message += "</font></p>"
return message
def tmpl_external_login_button(self, provider, referer = '', icon_size = 48,
classes = ""):
"""
        Template of the login button for providers which don't need a username.
@param provider: The name of the provider
@type provider: str
        @param referer: The referer URL; the user is redirected there after login
@type referer: str
@param icon_size: The size of the icon of the provider
@type icon_size: int
@param classes: Additional classes for the login form
@type classes: str
@rtype: str
"""
login_url = CFG_SITE_SECURE_URL + "/youraccount/"
if provider in CFG_OPENID_PROVIDERS:
login_url += 'openid'
elif provider in CFG_OAUTH2_PROVIDERS:
login_url += 'oauth2'
elif provider in CFG_OAUTH1_PROVIDERS:
login_url += 'oauth1'
login_url += '?'
if referer:
            if 'youraccount/login' not in referer:
login_url += "referer=" + referer + "&"
out = ""
out += """
<div class="login_button %(class)s" id="%(provider)s_login_button">
<div class="provider_img" id="%(provider)s_img">
<a class="openid_url" id="%(provider)s_login" href="%(loginurl)s\
provider=%(provider)s">
<img class="external_provider %(class)s" src="%(imgurl)s/\
%(provider)s_icon_%(icon_size)s.png" />
</a>
</div>
</div>""" % {
'loginurl': login_url,
'imgurl': CFG_SITE_SECURE_URL + "/img",
'provider': provider,
'class': classes,
'icon_size': icon_size
}
return out
def tmpl_external_login_form(self, provider, referer = '', icon_size = 48,
classes = "", label = "%(provider)s username"):
"""
        Template of the login form for providers which need a username for
        verification.
@param provider: The name of the provider
@type provider: str
        @param referer: The referer URL; the user is redirected there after login
@type referer: str
@param icon_size: The size of the icon of the provider
@type icon_size: int
@param classes: Additional classes for the login form
@type classes: str
@param label: The label for text input.
        @type label: str
@rtype: str
"""
login_url = CFG_SITE_SECURE_URL + "/youraccount/"
if provider in CFG_OPENID_PROVIDERS:
login_url += 'openid'
elif provider in CFG_OAUTH2_PROVIDERS:
login_url += 'oauth2'
elif provider in CFG_OAUTH1_PROVIDERS:
login_url += 'oauth1'
label %= {'provider': provider}
out = ""
out += """
<div class="login_button %(class)s login_form" id="%(provider)s_verify_form">
<div class="provider_img with_login_form" id="%(provider)s_login_img" \
onclick="show_username_form(this)">
<img class="external_provider %(class)s" src="%(imgurl)s/\
%(provider)s_icon_%(icon_size)s.png" />
</div>
<div class="login_content with_label" id="%(provider)s_verifier" hidden=\
"hidden">
<form method="get" accept-charset="UTF-8" action="%(loginurl)s">
<input type="hidden" name="provider" value="%(provider)s">
<input type="hidden" name="referer" value="%(referer)s">
<label class="openid_label" for="%(provider)s">%(label)s:</label>
            <br />
<input class="openid_input" id="%(provider)s_username_field" \
type="text" name="identifier" value="" >
<input type="submit" value=" Login ">
</form>
</div>
</div>
""" % {
'loginurl': login_url,
'imgurl': CFG_SITE_SECURE_URL + "/img",
'provider': provider,
'label': label,
'referer': referer,
'class': classes,
'icon_size': icon_size
}
return out
def tmpl_external_login_panel(self, ln, referer):
"""
Template for external login buttons
"""
from invenio.legacy.websession.websession_config import CFG_EXTERNAL_LOGIN_LARGE
from invenio.legacy.websession.websession_config import CFG_EXTERNAL_LOGIN_BUTTON_ORDER
from invenio.legacy.websession.websession_config import CFG_EXTERNAL_LOGIN_FORM_LABELS
from invenio.modules.access.local_config import CFG_OPENID_CONFIGURATIONS
def construct_button(provider, size, button_class):
"""
Constructs a button for given provider.
@param provider: the name of the provider.
@type provider: str
@param size: the size of the login button
@type size: int
@param button_class: the additional class for the login button
@type button_class: str
@rtype str
"""
_ = gettext_set_language(ln)
# Look if the login button needs a form.
config = CFG_OPENID_CONFIGURATIONS.get(provider, {})
identifier = config.get('identifier', '')
if "{0}" in identifier:
label = CFG_EXTERNAL_LOGIN_FORM_LABELS.get(provider,
"%(provider)s username")
return self.tmpl_external_login_form(provider,
referer = referer,
icon_size = size,
classes = button_class,
label = _(label))
else:
return self.tmpl_external_login_button(provider,
referer = referer,
icon_size = size,
classes = button_class)
        # Multiplying a provider list by its boolean *_AUTHENTICATION flag
        # keeps the list (True == 1) or empties it (False == 0).
        activated_providers = CFG_OPENID_PROVIDERS * CFG_OPENID_AUTHENTICATION \
+ CFG_OAUTH1_PROVIDERS * CFG_OAUTH1_AUTHENTICATION \
+ CFG_OAUTH2_PROVIDERS * CFG_OAUTH2_AUTHENTICATION
        if not activated_providers:
return ""
out = ""
out += "<div id='buttons'>"
out += "<strong>You may login with:</strong>"
out += "<div id='big_buttons'>"
for provider in CFG_EXTERNAL_LOGIN_LARGE:
if provider in activated_providers:
out += construct_button(provider, 48, "login_button_big")
out += "</div>"
out += "<div id='small_buttons'>"
providers = CFG_EXTERNAL_LOGIN_BUTTON_ORDER
if (len(activated_providers) - len(CFG_EXTERNAL_LOGIN_LARGE)) != \
len(providers):
# Not all the providers ordered. Add the unsorted ones to the end.
for provider in sorted(activated_providers):
                if provider not in providers:
providers.append(provider)
for provider in providers:
            if provider not in CFG_EXTERNAL_LOGIN_LARGE:
out += construct_button(provider, 24, "login_button_small")
out += "</div>"
out += "<div id='form_field'>"
out += "</div>"
out += "</div>"
out += """
<script type="text/javascript">
function show_username_form(element) {
form_field = document.getElementById('form_field');
form_field.innerHTML = element.nextSibling.nextSibling.innerHTML;
}
</script>"""
return out
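The template helpers above build their markup by plain string concatenation, as `__create_select_menu` does for drop-down menus. The following is a self-contained sketch of that pattern; the function name and sample data are illustrative and not part of Invenio:

```python
def create_select_menu(name, elements, select_text, selected_key=None):
    """Build an HTML <select> with a placeholder option, mirroring the
    concatenation approach used by the template helper above."""
    out = '<select name="%s" style="width:100%%">' % name
    out += '<option value="-1">%s</option>' % select_text
    for key, label in elements:
        # Mark the pre-selected entry, if any.
        selected = ' selected="selected"' if key == selected_key else ''
        out += '<option value="%s"%s>%s</option>' % (key, selected, label)
    out += '</select>'
    return out

html = create_select_menu("grpID", [(1, "Readers"), (2, "Editors")],
                          "Please select:", selected_key=2)
print(html)
```

The real helper also supports a `multiple` flag that switches to a multi-select control; the sketch keeps only the common path.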
# -*- coding: utf-8 -*-
# This program is free software; you can redistribute it and/or modify
# it under the terms of the (LGPL) GNU Lesser General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Library Lesser General Public License for more details at
# ( http://www.gnu.org/licenses/lgpl.html ).
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# written by: Jurko Gospodnetić ( jurko.gospodnetic@pke.hr )
"""
Suds Python library document caching unit tests.
Implemented using the 'pytest' testing framework.
"""
if __name__ == "__main__":
try:
import pytest
pytest.main(["--pyargs", __file__])
except ImportError:
print("'py.test' unit testing framework not available. Can not run "
"'%s' directly as a script." % (__file__,))
import sys
sys.exit(-2)
import suds
import suds.cache
import suds.sax.parser
import pytest
import os
import tempfile
class InvisibleMan:
"""Dummy class used for pickling related tests."""
def __init__(self, x):
self.x = x
# Hardcoded values used in different caching test cases.
value_empty = suds.byte_str("")
value_f2 = suds.byte_str("fifi2")
value_f22 = suds.byte_str("fifi22")
value_f3 = suds.byte_str("fifi3")
value_p1 = suds.byte_str("pero1")
value_p11 = suds.byte_str("pero11")
value_p111 = suds.byte_str("pero111")
value_p2 = suds.byte_str("pero2")
value_p22 = suds.byte_str("pero22")
value_unicode = suds.byte_str(u"€ 的 čćžšđČĆŽŠĐ")
def test_Cache():
cache = suds.cache.Cache()
pytest.raises(Exception, cache.get, "id")
pytest.raises(Exception, cache.put, "id", "object")
pytest.raises(Exception, cache.purge, "id")
pytest.raises(Exception, cache.clear)
def test_DocumentCache(tmpdir):
cacheFolder = tmpdir.join("puffy").strpath
cache = suds.cache.DocumentCache(cacheFolder)
assert isinstance(cache, suds.cache.FileCache)
assert cache.get("unga1") is None
# TODO: DocumentCache class interface seems silly. Its get() operation
# returns an XML document while its put() operation takes an XML element.
# The put() operation also silently ignores passed data of incorrect type.
# TODO: Update this test to no longer depend on the exact input XML data
# formatting. We currently expect it to be formatted exactly as what gets
# read back from the DocumentCache.
content = suds.byte_str("""\
<xsd:element name="Elemento">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="alfa"/>
<xsd:enumeration value="beta"/>
<xsd:enumeration value="gamma"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:element>""")
xml = suds.sax.parser.Parser().parse(suds.BytesIO(content))
cache.put("unga1", xml.getChildren()[0])
readXML = cache.get("unga1")
assert isinstance(readXML, suds.sax.document.Document)
readXMLElements = readXML.getChildren()
assert len(readXMLElements) == 1
readXMLElement = readXMLElements[0]
assert isinstance(readXMLElement, suds.sax.element.Element)
assert suds.byte_str(str(readXMLElement)) == content
def test_FileCache():
cache = suds.cache.FileCache()
assert isinstance(cache, suds.cache.Cache)
def test_FileCache_clear(tmpdir):
cacheFolder1 = tmpdir.join("fungus").strpath
cache1 = suds.cache.FileCache(cacheFolder1)
cache1.put("unga1", value_p1)
cache1.put("unga2", value_p2)
assert cache1.get("unga1") == value_p1
assert cache1.get("unga2") == value_p2
cache1.clear()
assert _isEmptyCacheFolder(cacheFolder1)
assert cache1.get("unga1") is None
assert cache1.get("unga2") is None
cache1.put("unga1", value_p11)
cache1.put("unga2", value_p2)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") == value_p2
cacheFolder2 = tmpdir.join("broccoli").strpath
cache2 = suds.cache.FileCache(cacheFolder2)
cache2.put("unga2", value_f2)
assert cache2.get("unga2") == value_f2
cache2.clear()
assert not _isEmptyCacheFolder(cacheFolder1)
assert _isEmptyCacheFolder(cacheFolder2)
assert cache2.get("unga2") is None
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") == value_p2
cache2.put("unga2", value_p22)
assert cache2.get("unga2") == value_p22
def test_FileCache_location(tmpdir):
defaultLocation = os.path.join(tempfile.gettempdir(), "suds")
cache = suds.cache.FileCache()
assert os.path.isdir(cache.location)
assert cache.location == defaultLocation
assert suds.cache.FileCache().location == defaultLocation
assert cache.location == defaultLocation
cacheFolder1 = tmpdir.join("flip-flop1").strpath
assert not os.path.isdir(cacheFolder1)
assert suds.cache.FileCache(location=cacheFolder1).location == cacheFolder1
assert _isEmptyCacheFolder(cacheFolder1)
cacheFolder2 = tmpdir.join("flip-flop2").strpath
assert not os.path.isdir(cacheFolder2)
assert suds.cache.FileCache(cacheFolder2).location == cacheFolder2
assert _isEmptyCacheFolder(cacheFolder2)
def test_FileCache_close_leaves_cached_files_behind(tmpdir):
cacheFolder1 = tmpdir.join("ana").strpath
cache1 = suds.cache.FileCache(cacheFolder1)
cache1.put("unga1", value_p1)
cache1.put("unga2", value_p2)
cacheFolder2 = tmpdir.join("nan").strpath
cache2 = suds.cache.FileCache(cacheFolder2)
cache2.put("unga2", value_f2)
cache2.put("unga3", value_f3)
del cache1
cache11 = suds.cache.FileCache(cacheFolder1)
assert cache11.get("unga1") == value_p1
assert cache11.get("unga2") == value_p2
assert cache2.get("unga2") == value_f2
assert cache2.get("unga3") == value_f3
def test_FileCache_get_put(tmpdir):
cacheFolder1 = tmpdir.join("firefly").strpath
cache1 = suds.cache.FileCache(cacheFolder1)
assert _isEmptyCacheFolder(cacheFolder1)
assert cache1.get("unga1") is None
cache1.put("unga1", value_p1)
assert not _isEmptyCacheFolder(cacheFolder1)
assert cache1.get("unga1") == value_p1
assert cache1.get("unga2") is None
cache1.put("unga1", value_p11)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") is None
cache1.put("unga2", value_p2)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") == value_p2
cacheFolder2 = tmpdir.join("semper fi").strpath
cache2 = suds.cache.FileCache(cacheFolder2)
assert _isEmptyCacheFolder(cacheFolder2)
assert cache2.get("unga2") is None
cache2.put("unga2", value_f2)
assert not _isEmptyCacheFolder(cacheFolder2)
assert cache2.get("unga2") == value_f2
assert cache2.get("unga3") is None
cache2.put("unga2", value_f22)
assert cache2.get("unga2") == value_f22
assert cache2.get("unga3") is None
cache2.put("unga3", value_f3)
assert cache2.get("unga2") == value_f22
assert cache2.get("unga3") == value_f3
assert not _isEmptyCacheFolder(cacheFolder1)
assert not _isEmptyCacheFolder(cacheFolder2)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") == value_p2
assert cache1.get("unga3") is None
assert cache2.get("unga1") is None
assert cache2.get("unga2") == value_f22
assert cache2.get("unga3") == value_f3
def test_FileCache_purge(tmpdir):
cacheFolder1 = tmpdir.join("flamenco").strpath
cache1 = suds.cache.FileCache(cacheFolder1)
cache1.put("unga1", value_p1)
assert cache1.get("unga1") == value_p1
cache1.purge("unga1")
assert _isEmptyCacheFolder(cacheFolder1)
assert cache1.get("unga1") is None
cache1.put("unga1", value_p11)
cache1.put("unga2", value_p2)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") == value_p2
cache1.purge("unga1")
assert cache1.get("unga1") is None
assert cache1.get("unga2") == value_p2
cache1.put("unga1", value_p111)
cacheFolder2 = tmpdir.join("shadow").strpath
cache2 = suds.cache.FileCache(cacheFolder2)
cache2.put("unga2", value_f2)
cache2.purge("unga2")
assert _isEmptyCacheFolder(cacheFolder2)
assert cache1.get("unga1") == value_p111
assert cache1.get("unga2") == value_p2
assert cache2.get("unga2") is None
def test_FileCache_reused_cache_folder(tmpdir):
cacheFolder = tmpdir.strpath
cache1 = suds.cache.FileCache(cacheFolder)
assert _isEmptyCacheFolder(cacheFolder)
assert cache1.get("unga1") is None
cache1.put("unga1", value_p1)
assert cache1.get("unga1") == value_p1
assert cache1.get("unga2") is None
cache1.put("unga1", value_p11)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") is None
cache1.put("unga2", value_p2)
assert cache1.get("unga1") == value_p11
assert cache1.get("unga2") == value_p2
cache2 = suds.cache.FileCache(cacheFolder)
assert cache2.get("unga1") == value_p11
assert cache2.get("unga2") == value_p2
cache2.put("unga3", value_f3)
assert cache1.get("unga3") == value_f3
def test_FileCache_version(tmpdir):
fakeVersionInfo = "--- fake version info ---"
assert suds.__version__ != fakeVersionInfo
cacheFolder = tmpdir.join("hitori")
versionFile = cacheFolder.join("version")
cache = suds.cache.FileCache(cacheFolder.strpath)
assert versionFile.read() == suds.__version__
cache.put("unga1", value_p1)
versionFile.write(fakeVersionInfo)
assert cache.get("unga1") == value_p1
cache2 = suds.cache.FileCache(cacheFolder.strpath)
assert _isEmptyCacheFolder(cacheFolder.strpath)
assert cache.get("unga1") is None
assert cache2.get("unga1") is None
assert versionFile.read() == suds.__version__
cache.put("unga1", value_p11)
cache.put("unga2", value_p22)
versionFile.remove()
assert cache.get("unga1") == value_p11
assert cache.get("unga2") == value_p22
cache3 = suds.cache.FileCache(cacheFolder.strpath)
assert _isEmptyCacheFolder(cacheFolder.strpath)
assert cache.get("unga1") is None
assert cache.get("unga2") is None
assert cache2.get("unga1") is None
assert versionFile.read() == suds.__version__
def test_FileCache_with_empty_cached_content(tmpdir):
cacheFolder = tmpdir.strpath
cache = suds.cache.FileCache(cacheFolder)
cache.put("unga1", value_empty)
assert cache.get("unga1") == value_empty
assert not _isEmptyCacheFolder(cacheFolder)
def test_FileCache_with_random_utf_character_cached_content(tmpdir):
cacheFolder = tmpdir.strpath
cache = suds.cache.FileCache(cacheFolder)
cache.put("unga1", value_unicode)
assert cache.get("unga1") == value_unicode
assert not _isEmptyCacheFolder(cacheFolder)
def test_NoCache():
cache = suds.cache.NoCache()
assert isinstance(cache, suds.cache.Cache)
    assert cache.get("id") is None
cache.put("id", "something")
    assert cache.get("id") is None
# TODO: It should not be an error to call purge() or clear() on a NoCache
# instance.
pytest.raises(Exception, cache.purge, "id")
pytest.raises(Exception, cache.clear)
def test_ObjectCache(tmpdir):
cacheFolder = tmpdir.join("george carlin").strpath
cache = suds.cache.ObjectCache(cacheFolder)
assert isinstance(cache, suds.cache.FileCache)
assert cache.get("unga1") is None
assert cache.get("unga2") is None
cache.put("unga1", InvisibleMan(1))
cache.put("unga2", InvisibleMan(2))
read1 = cache.get("unga1")
read2 = cache.get("unga2")
assert read1.__class__ is InvisibleMan
assert read2.__class__ is InvisibleMan
assert read1.x == 1
assert read2.x == 2
def _isEmptyCacheFolder(folder):
assert os.path.isdir(folder)
def walkError(error):
pytest.fail("Error attempting to walk through cache folder contents.")
count = 0
for root, folders, files in os.walk(folder, onerror=walkError):
assert root == folder
return len(folders) == 0 and len(files) == 1 and files[0] == 'version'
    # Unreachable fallback: os.walk() always yields the top folder first.
    return False
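The tests above exercise a file-backed cache through `get`/`put`/`purge`/`clear`. A minimal sketch of a cache with those semantics (a toy, not the real `suds.cache.FileCache`, which additionally writes a `version` stamp file and handles cache durations):

```python
import os
import shutil
import tempfile

class MiniFileCache:
    """Toy file-backed cache: one file per cached id inside a folder."""

    def __init__(self, location):
        self.location = location
        if not os.path.isdir(location):
            os.makedirs(location)

    def _path(self, id):
        return os.path.join(self.location, id)

    def put(self, id, data):
        with open(self._path(id), "wb") as f:
            f.write(data)

    def get(self, id):
        try:
            with open(self._path(id), "rb") as f:
                return f.read()
        except IOError:
            return None  # cache miss

    def purge(self, id):
        try:
            os.remove(self._path(id))
        except OSError:
            pass

    def clear(self):
        shutil.rmtree(self.location)
        os.makedirs(self.location)

folder = tempfile.mkdtemp()
cache = MiniFileCache(folder)
cache.put("unga1", b"pero1")
result = cache.get("unga1")
cache.purge("unga1")
missing = cache.get("unga1")
shutil.rmtree(folder)
```

Because cached entries live on disk, a second instance pointed at the same folder sees them, which is exactly what `test_FileCache_close_leaves_cached_files_behind` verifies for the real class.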
# Hidden Markov Model Implementation
import pylab as pyl
import numpy as np
import matplotlib.pyplot as pp
#from enthought.mayavi import mlab
import scipy as scp
import scipy.ndimage as ni
import roslib; roslib.load_manifest('sandbox_tapo_darpa_m3')
import rospy
#import hrl_lib.mayavi2_util as mu
import hrl_lib.viz as hv
import hrl_lib.util as ut
import hrl_lib.matplotlib_util as mpu
import pickle
import ghmm
import sys
sys.path.insert(0, '/home/tapo/svn/robot1_data/usr/tapo/data_code/Classification/Data/Single_Contact_HMM/Variable_Stiffness_Variable_Velocity/with_padding_3s/')
from data_padding_hshv_3s import Fmat_original_hshv
from data_padding_hslv_3s import Fmat_original_hslv
from data_padding_lshv_3s import Fmat_original_lshv
from data_padding_lslv_3s import Fmat_original_lslv
# Returns mu,sigma for 20 hidden-states from feature-vectors(123,35) for RF,SF,RM,SM models
def feature_to_mu_sigma(fvec):
index = 0
m,n = np.shape(fvec)
#print m,n
mu = np.matrix(np.zeros((20,1)))
sigma = np.matrix(np.zeros((20,1)))
DIVS = m/20
while (index < 20):
m_init = index*DIVS
temp_fvec = fvec[(m_init):(m_init+DIVS),0:]
#if index == 1:
#print temp_fvec
mu[index] = scp.mean(temp_fvec)
sigma[index] = scp.std(temp_fvec)
index = index+1
return mu,sigma
# Returns sequence given raw data
def create_seq(fvec):
m,n = np.shape(fvec)
#print m,n
seq = np.matrix(np.zeros((20,n)))
DIVS = m/20
for i in range(n):
index = 0
while (index < 20):
m_init = index*DIVS
temp_fvec = fvec[(m_init):(m_init+DIVS),i]
#if index == 1:
#print temp_fvec
seq[index,i] = scp.mean(temp_fvec)
index = index+1
return seq
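`feature_to_mu_sigma` and `create_seq` both slice their input into 20 equal segments (integer division drops any remainder) and reduce each segment to a mean or a (mean, std) pair. The same binning can be sketched without the matrix bookkeeping; this pure-Python version is illustrative only and assumes `len(values) >= n_bins`:

```python
from statistics import mean, pstdev

def bin_stats(values, n_bins=20):
    """Split a 1-D sequence into n_bins equal chunks (integer division,
    as in feature_to_mu_sigma) and return (mean, population std) per chunk."""
    step = len(values) // n_bins
    stats = []
    for i in range(n_bins):
        chunk = values[i * step:(i + 1) * step]
        stats.append((mean(chunk), pstdev(chunk)))
    return stats

pairs = bin_stats(list(range(100)), n_bins=20)
```

`pstdev` matches `scipy.std`'s default population (ddof=0) normalization, so the numbers agree with the helper above on the segments it keeps.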
if __name__ == '__main__':
# HMM - Implementation:
F = ghmm.Float() # emission domain of this model
# A - Transition Matrix
A = [[0.1, 0.25, 0.15, 0.15, 0.1, 0.05, 0.05, 0.03, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
[0.0, 0.1, 0.25, 0.25, 0.2, 0.1, 0.05, 0.03, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
[0.0, 0.0, 0.1, 0.25, 0.25, 0.2, 0.05, 0.03, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
[0.0, 0.0, 0.0, 0.1, 0.3, 0.30, 0.20, 0.09, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
[0.0, 0.0, 0.0, 0.0, 0.1, 0.30, 0.30, 0.15, 0.04, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.1, 0.35, 0.30, 0.10, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.1, 0.30, 0.20, 0.10, 0.05, 0.05, 0.05, 0.03, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.1, 0.30, 0.20, 0.10, 0.05, 0.05, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.1, 0.30, 0.20, 0.15, 0.05, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.1, 0.30, 0.20, 0.15, 0.10, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.1, 0.30, 0.30, 0.10, 0.10, 0.02, 0.02, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.1, 0.40, 0.30, 0.10, 0.02, 0.02, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.20, 0.40, 0.20, 0.10, 0.04, 0.02, 0.02, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.20, 0.40, 0.20, 0.10, 0.05, 0.03, 0.02],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.20, 0.40, 0.20, 0.10, 0.05, 0.05],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.00, 0.20, 0.40, 0.20, 0.10, 0.10],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.20, 0.40, 0.20, 0.20],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.30, 0.50, 0.20],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.40, 0.60],
[0.0, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.0, 0.0, 0.0, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00]]
# pi - initial probabilities per state
pi = [0.05] * 20
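For a left-to-right HMM like this one, every row of the transition matrix `A` must sum to 1, and so must the initial distribution `pi`. A small sanity-check helper (illustrative; not part of the original script):

```python
def is_row_stochastic(matrix, tol=1e-9):
    """Return True if every row of a transition matrix sums to 1."""
    return all(abs(sum(row) - 1.0) <= tol for row in matrix)

# The 20-state matrix A above should pass this check, as should the
# uniform initial distribution pi = [0.05] * 20.
ok = is_row_stochastic([[0.5, 0.5], [0.0, 1.0]])
pi_ok = abs(sum([0.05] * 20) - 1.0) <= 1e-9
```

Running such a check before handing `A` and `pi` to `ghmm.HMMFromMatrices` catches typos in the hand-written probabilities early.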
# Confusion Matrix
cmat = np.zeros((4,4))
#############################################################################################################################################
# HSHV as testing set and Rest as training set
# Checking the Data-Matrix
mu_rf_hshv,sigma_rf_hshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hslv[0:301,0:15], Fmat_original_lshv[0:301,0:15], Fmat_original_lslv[0:301,0:15]))))
mu_rm_hshv,sigma_rm_hshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hslv[0:301,15:30], Fmat_original_lshv[0:301,15:30], Fmat_original_lslv[0:301,15:30]))))
mu_sf_hshv,sigma_sf_hshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hslv[0:301,30:45], Fmat_original_lshv[0:301,30:45], Fmat_original_lslv[0:301,30:45]))))
mu_sm_hshv,sigma_sm_hshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hslv[0:301,45:60], Fmat_original_lshv[0:301,45:60], Fmat_original_lslv[0:301,45:60]))))
# B - Emission Matrix, parameters of emission distributions in pairs of (mu, sigma)
B_rf_hshv = np.zeros((20,2))
B_rm_hshv = np.zeros((20,2))
B_sf_hshv = np.zeros((20,2))
B_sm_hshv = np.zeros((20,2))
for num_states in range(20):
B_rf_hshv[num_states,0] = mu_rf_hshv[num_states]
B_rf_hshv[num_states,1] = sigma_rf_hshv[num_states]
B_rm_hshv[num_states,0] = mu_rm_hshv[num_states]
B_rm_hshv[num_states,1] = sigma_rm_hshv[num_states]
B_sf_hshv[num_states,0] = mu_sf_hshv[num_states]
B_sf_hshv[num_states,1] = sigma_sf_hshv[num_states]
B_sm_hshv[num_states,0] = mu_sm_hshv[num_states]
B_sm_hshv[num_states,1] = sigma_sm_hshv[num_states]
B_rf_hshv = B_rf_hshv.tolist()
B_rm_hshv = B_rm_hshv.tolist()
B_sf_hshv = B_sf_hshv.tolist()
B_sm_hshv = B_sm_hshv.tolist()
# generate RF, RM, SF, SM models from parameters
model_rf_hshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rf_hshv, pi) # Will be Trained
model_rm_hshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rm_hshv, pi) # Will be Trained
model_sf_hshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sf_hshv, pi) # Will be Trained
model_sm_hshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sm_hshv, pi) # Will be Trained
# For Training
total_seq_rf_hshv = np.matrix(np.column_stack((Fmat_original_hslv[0:301,0:15], Fmat_original_lshv[0:301,0:15], Fmat_original_lslv[0:301,0:15])))
total_seq_rm_hshv = np.matrix(np.column_stack((Fmat_original_hslv[0:301,15:30], Fmat_original_lshv[0:301,15:30], Fmat_original_lslv[0:301,15:30])))
total_seq_sf_hshv = np.matrix(np.column_stack((Fmat_original_hslv[0:301,30:45], Fmat_original_lshv[0:301,30:45], Fmat_original_lslv[0:301,30:45])))
total_seq_sm_hshv = np.matrix(np.column_stack((Fmat_original_hslv[0:301,45:60], Fmat_original_lshv[0:301,45:60], Fmat_original_lslv[0:301,45:60])))
train_seq_rf_hshv = (np.array(total_seq_rf_hshv).T).tolist()
train_seq_rm_hshv = (np.array(total_seq_rm_hshv).T).tolist()
train_seq_sf_hshv = (np.array(total_seq_sf_hshv).T).tolist()
train_seq_sm_hshv = (np.array(total_seq_sm_hshv).T).tolist()
#print train_seq_rf_hshv
final_ts_rf_hshv = ghmm.SequenceSet(F,train_seq_rf_hshv)
final_ts_rm_hshv = ghmm.SequenceSet(F,train_seq_rm_hshv)
final_ts_sf_hshv = ghmm.SequenceSet(F,train_seq_sf_hshv)
final_ts_sm_hshv = ghmm.SequenceSet(F,train_seq_sm_hshv)
model_rf_hshv.baumWelch(final_ts_rf_hshv)
model_rm_hshv.baumWelch(final_ts_rm_hshv)
model_sf_hshv.baumWelch(final_ts_sf_hshv)
model_sm_hshv.baumWelch(final_ts_sm_hshv)
# For Testing
total_seq_obj_hshv = Fmat_original_hshv[0:301,:]
rf_hshv = np.matrix(np.zeros(np.size(total_seq_obj_hshv,1)))
rm_hshv = np.matrix(np.zeros(np.size(total_seq_obj_hshv,1)))
sf_hshv = np.matrix(np.zeros(np.size(total_seq_obj_hshv,1)))
sm_hshv = np.matrix(np.zeros(np.size(total_seq_obj_hshv,1)))
k = 0
while (k < np.size(total_seq_obj_hshv,1)):
test_seq_obj_hshv = (np.array(total_seq_obj_hshv[0:301,k]).T).tolist()
new_test_seq_obj_hshv = np.array(sum(test_seq_obj_hshv,[])) # flatten the list of per-frame lists into one observation sequence
#print new_test_seq_obj_hshv
ts_obj_hshv = new_test_seq_obj_hshv
#print np.shape(ts_obj_hshv)
final_ts_obj_hshv = ghmm.EmissionSequence(F,ts_obj_hshv.tolist())
# Find Viterbi Path
path_rf_obj_hshv = model_rf_hshv.viterbi(final_ts_obj_hshv)
path_rm_obj_hshv = model_rm_hshv.viterbi(final_ts_obj_hshv)
path_sf_obj_hshv = model_sf_hshv.viterbi(final_ts_obj_hshv)
path_sm_obj_hshv = model_sm_hshv.viterbi(final_ts_obj_hshv)
obj_hshv = max(path_rf_obj_hshv[1],path_rm_obj_hshv[1],path_sf_obj_hshv[1],path_sm_obj_hshv[1]) # highest Viterbi log-likelihood wins
if obj_hshv == path_rf_obj_hshv[1]:
rf_hshv[0,k] = 1
elif obj_hshv == path_rm_obj_hshv[1]:
rm_hshv[0,k] = 1
elif obj_hshv == path_sf_obj_hshv[1]:
sf_hshv[0,k] = 1
else:
sm_hshv[0,k] = 1
k = k+1
#print rf_hshv.T
# accumulate the confusion matrix: row = predicted class, column = true class
indicators_hshv = [rf_hshv, rm_hshv, sf_hshv, sm_hshv]
for i in range(4):
    for j in range(4):
        cmat[i][j] = cmat[i][j] + np.sum(indicators_hshv[i][0, 15*j:15*(j+1)])
#print cmat
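Each test trial is scored under the four trained HMMs and assigned to the class whose Viterbi log-likelihood is highest; the `max` plus `if/elif` chain above is equivalent to an argmax in the order RF, RM, SF, SM (ties go to the earlier model in both forms, since `np.argmax` returns the first maximum). A compact sketch of that decision rule:

```python
import numpy as np

def classify_by_loglik(logliks):
    """Index of the best-scoring model.

    `logliks` holds one Viterbi log-likelihood per candidate HMM
    (here: RF, RM, SF, SM). First index wins on ties.
    """
    return int(np.argmax(logliks))

# Example: the RM model (index 1) scores highest
print(classify_by_loglik([-120.5, -98.2, -150.0, -130.3]))  # 1
```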
#############################################################################################################################################
# HSLV as testing set and Rest as training set
mu_rf_hslv,sigma_rf_hslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,0:15], Fmat_original_lshv[0:301,0:15], Fmat_original_lslv[0:301,0:15]))))
mu_rm_hslv,sigma_rm_hslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,15:30], Fmat_original_lshv[0:301,15:30], Fmat_original_lslv[0:301,15:30]))))
mu_sf_hslv,sigma_sf_hslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,30:45], Fmat_original_lshv[0:301,30:45], Fmat_original_lslv[0:301,30:45]))))
mu_sm_hslv,sigma_sm_hslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,45:60], Fmat_original_lshv[0:301,45:60], Fmat_original_lslv[0:301,45:60]))))
# B - Emission Matrix, parameters of emission distributions in pairs of (mu, sigma)
B_rf_hslv = np.zeros((20,2))
B_rm_hslv = np.zeros((20,2))
B_sf_hslv = np.zeros((20,2))
B_sm_hslv = np.zeros((20,2))
for num_states in range(20):
B_rf_hslv[num_states,0] = mu_rf_hslv[num_states]
B_rf_hslv[num_states,1] = sigma_rf_hslv[num_states]
B_rm_hslv[num_states,0] = mu_rm_hslv[num_states]
B_rm_hslv[num_states,1] = sigma_rm_hslv[num_states]
B_sf_hslv[num_states,0] = mu_sf_hslv[num_states]
B_sf_hslv[num_states,1] = sigma_sf_hslv[num_states]
B_sm_hslv[num_states,0] = mu_sm_hslv[num_states]
B_sm_hslv[num_states,1] = sigma_sm_hslv[num_states]
B_rf_hslv = B_rf_hslv.tolist()
B_rm_hslv = B_rm_hslv.tolist()
B_sf_hslv = B_sf_hslv.tolist()
B_sm_hslv = B_sm_hslv.tolist()
model_rf_hslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rf_hslv, pi) # Will be Trained
model_rm_hslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rm_hslv, pi) # Will be Trained
model_sf_hslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sf_hslv, pi) # Will be Trained
model_sm_hslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sm_hslv, pi) # Will be Trained
# For Training
total_seq_rf_hslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,0:15], Fmat_original_lshv[0:301,0:15], Fmat_original_lslv[0:301,0:15])))
total_seq_rm_hslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,15:30], Fmat_original_lshv[0:301,15:30], Fmat_original_lslv[0:301,15:30])))
total_seq_sf_hslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,30:45], Fmat_original_lshv[0:301,30:45], Fmat_original_lslv[0:301,30:45])))
total_seq_sm_hslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,45:60], Fmat_original_lshv[0:301,45:60], Fmat_original_lslv[0:301,45:60])))
train_seq_rf_hslv = (np.array(total_seq_rf_hslv).T).tolist()
train_seq_rm_hslv = (np.array(total_seq_rm_hslv).T).tolist()
train_seq_sf_hslv = (np.array(total_seq_sf_hslv).T).tolist()
train_seq_sm_hslv = (np.array(total_seq_sm_hslv).T).tolist()
#print train_seq_rf_hslv
final_ts_rf_hslv = ghmm.SequenceSet(F,train_seq_rf_hslv)
final_ts_rm_hslv = ghmm.SequenceSet(F,train_seq_rm_hslv)
final_ts_sf_hslv = ghmm.SequenceSet(F,train_seq_sf_hslv)
final_ts_sm_hslv = ghmm.SequenceSet(F,train_seq_sm_hslv)
model_rf_hslv.baumWelch(final_ts_rf_hslv)
model_rm_hslv.baumWelch(final_ts_rm_hslv)
model_sf_hslv.baumWelch(final_ts_sf_hslv)
model_sm_hslv.baumWelch(final_ts_sm_hslv)
# For Testing
total_seq_obj_hslv = Fmat_original_hslv[0:301,:]
rf_hslv = np.matrix(np.zeros(np.size(total_seq_obj_hslv,1)))
rm_hslv = np.matrix(np.zeros(np.size(total_seq_obj_hslv,1)))
sf_hslv = np.matrix(np.zeros(np.size(total_seq_obj_hslv,1)))
sm_hslv = np.matrix(np.zeros(np.size(total_seq_obj_hslv,1)))
k = 0
while (k < np.size(total_seq_obj_hslv,1)):
test_seq_obj_hslv = (np.array(total_seq_obj_hslv[0:301,k]).T).tolist()
new_test_seq_obj_hslv = np.array(sum(test_seq_obj_hslv,[]))
#print new_test_seq_obj_hslv
ts_obj_hslv = new_test_seq_obj_hslv
#print np.shape(ts_obj_hslv)
final_ts_obj_hslv = ghmm.EmissionSequence(F,ts_obj_hslv.tolist())
# Find Viterbi Path
path_rf_obj_hslv = model_rf_hslv.viterbi(final_ts_obj_hslv)
path_rm_obj_hslv = model_rm_hslv.viterbi(final_ts_obj_hslv)
path_sf_obj_hslv = model_sf_hslv.viterbi(final_ts_obj_hslv)
path_sm_obj_hslv = model_sm_hslv.viterbi(final_ts_obj_hslv)
obj_hslv = max(path_rf_obj_hslv[1],path_rm_obj_hslv[1],path_sf_obj_hslv[1],path_sm_obj_hslv[1])
if obj_hslv == path_rf_obj_hslv[1]:
rf_hslv[0,k] = 1
elif obj_hslv == path_rm_obj_hslv[1]:
rm_hslv[0,k] = 1
elif obj_hslv == path_sf_obj_hslv[1]:
sf_hslv[0,k] = 1
else:
sm_hslv[0,k] = 1
k = k+1
#print rf_hslv.T
# accumulate the confusion matrix: row = predicted class, column = true class
indicators_hslv = [rf_hslv, rm_hslv, sf_hslv, sm_hslv]
for i in range(4):
    for j in range(4):
        cmat[i][j] = cmat[i][j] + np.sum(indicators_hslv[i][0, 15*j:15*(j+1)])
#print cmat
############################################################################################################################################
# LSHV as testing set and Rest as training set
mu_rf_lshv,sigma_rf_lshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,0:15], Fmat_original_hslv[0:301,0:15], Fmat_original_lslv[0:301,0:15]))))
mu_rm_lshv,sigma_rm_lshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,15:30], Fmat_original_hslv[0:301,15:30], Fmat_original_lslv[0:301,15:30]))))
mu_sf_lshv,sigma_sf_lshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,30:45], Fmat_original_hslv[0:301,30:45], Fmat_original_lslv[0:301,30:45]))))
mu_sm_lshv,sigma_sm_lshv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,45:60], Fmat_original_hslv[0:301,45:60], Fmat_original_lslv[0:301,45:60]))))
# B - Emission Matrix, parameters of emission distributions in pairs of (mu, sigma)
B_rf_lshv = np.zeros((20,2))
B_rm_lshv = np.zeros((20,2))
B_sf_lshv = np.zeros((20,2))
B_sm_lshv = np.zeros((20,2))
for num_states in range(20):
B_rf_lshv[num_states,0] = mu_rf_lshv[num_states]
B_rf_lshv[num_states,1] = sigma_rf_lshv[num_states]
B_rm_lshv[num_states,0] = mu_rm_lshv[num_states]
B_rm_lshv[num_states,1] = sigma_rm_lshv[num_states]
B_sf_lshv[num_states,0] = mu_sf_lshv[num_states]
B_sf_lshv[num_states,1] = sigma_sf_lshv[num_states]
B_sm_lshv[num_states,0] = mu_sm_lshv[num_states]
B_sm_lshv[num_states,1] = sigma_sm_lshv[num_states]
B_rf_lshv = B_rf_lshv.tolist()
B_rm_lshv = B_rm_lshv.tolist()
B_sf_lshv = B_sf_lshv.tolist()
B_sm_lshv = B_sm_lshv.tolist()
model_rf_lshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rf_lshv, pi) # Will be Trained
model_rm_lshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rm_lshv, pi) # Will be Trained
model_sf_lshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sf_lshv, pi) # Will be Trained
model_sm_lshv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sm_lshv, pi) # Will be Trained
# For Training
total_seq_rf_lshv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,0:15], Fmat_original_hslv[0:301,0:15], Fmat_original_lslv[0:301,0:15])))
total_seq_rm_lshv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,15:30], Fmat_original_hslv[0:301,15:30], Fmat_original_lslv[0:301,15:30])))
total_seq_sf_lshv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,30:45], Fmat_original_hslv[0:301,30:45], Fmat_original_lslv[0:301,30:45])))
total_seq_sm_lshv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,45:60], Fmat_original_hslv[0:301,45:60], Fmat_original_lslv[0:301,45:60])))
train_seq_rf_lshv = (np.array(total_seq_rf_lshv).T).tolist()
train_seq_rm_lshv = (np.array(total_seq_rm_lshv).T).tolist()
train_seq_sf_lshv = (np.array(total_seq_sf_lshv).T).tolist()
train_seq_sm_lshv = (np.array(total_seq_sm_lshv).T).tolist()
#print train_seq_rf_lshv
final_ts_rf_lshv = ghmm.SequenceSet(F,train_seq_rf_lshv)
final_ts_rm_lshv = ghmm.SequenceSet(F,train_seq_rm_lshv)
final_ts_sf_lshv = ghmm.SequenceSet(F,train_seq_sf_lshv)
final_ts_sm_lshv = ghmm.SequenceSet(F,train_seq_sm_lshv)
model_rf_lshv.baumWelch(final_ts_rf_lshv)
model_rm_lshv.baumWelch(final_ts_rm_lshv)
model_sf_lshv.baumWelch(final_ts_sf_lshv)
model_sm_lshv.baumWelch(final_ts_sm_lshv)
# For Testing
total_seq_obj_lshv = Fmat_original_lshv[0:301,:]
rf_lshv = np.matrix(np.zeros(np.size(total_seq_obj_lshv,1)))
rm_lshv = np.matrix(np.zeros(np.size(total_seq_obj_lshv,1)))
sf_lshv = np.matrix(np.zeros(np.size(total_seq_obj_lshv,1)))
sm_lshv = np.matrix(np.zeros(np.size(total_seq_obj_lshv,1)))
k = 0
while (k < np.size(total_seq_obj_lshv,1)):
test_seq_obj_lshv = (np.array(total_seq_obj_lshv[0:301,k]).T).tolist()
new_test_seq_obj_lshv = np.array(sum(test_seq_obj_lshv,[]))
#print new_test_seq_obj_lshv
ts_obj_lshv = new_test_seq_obj_lshv
#print np.shape(ts_obj_lshv)
final_ts_obj_lshv = ghmm.EmissionSequence(F,ts_obj_lshv.tolist())
# Find Viterbi Path
path_rf_obj_lshv = model_rf_lshv.viterbi(final_ts_obj_lshv)
path_rm_obj_lshv = model_rm_lshv.viterbi(final_ts_obj_lshv)
path_sf_obj_lshv = model_sf_lshv.viterbi(final_ts_obj_lshv)
path_sm_obj_lshv = model_sm_lshv.viterbi(final_ts_obj_lshv)
obj_lshv = max(path_rf_obj_lshv[1],path_rm_obj_lshv[1],path_sf_obj_lshv[1],path_sm_obj_lshv[1])
if obj_lshv == path_rf_obj_lshv[1]:
rf_lshv[0,k] = 1
elif obj_lshv == path_rm_obj_lshv[1]:
rm_lshv[0,k] = 1
elif obj_lshv == path_sf_obj_lshv[1]:
sf_lshv[0,k] = 1
else:
sm_lshv[0,k] = 1
k = k+1
#print rf_lshv.T
# accumulate the confusion matrix: row = predicted class, column = true class
indicators_lshv = [rf_lshv, rm_lshv, sf_lshv, sm_lshv]
for i in range(4):
    for j in range(4):
        cmat[i][j] = cmat[i][j] + np.sum(indicators_lshv[i][0, 15*j:15*(j+1)])
#print cmat
#############################################################################################################################################
# LSLV as testing set and Rest as training set
mu_rf_lslv,sigma_rf_lslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,0:15], Fmat_original_hslv[0:301,0:15], Fmat_original_lshv[0:301,0:15]))))
mu_rm_lslv,sigma_rm_lslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,15:30], Fmat_original_hslv[0:301,15:30], Fmat_original_lshv[0:301,15:30]))))
mu_sf_lslv,sigma_sf_lslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,30:45], Fmat_original_hslv[0:301,30:45], Fmat_original_lshv[0:301,30:45]))))
mu_sm_lslv,sigma_sm_lslv = feature_to_mu_sigma(np.matrix(np.column_stack((Fmat_original_hshv[0:301,45:60], Fmat_original_hslv[0:301,45:60], Fmat_original_lshv[0:301,45:60]))))
# B - Emission Matrix, parameters of emission distributions in pairs of (mu, sigma)
B_rf_lslv = np.zeros((20,2))
B_rm_lslv = np.zeros((20,2))
B_sf_lslv = np.zeros((20,2))
B_sm_lslv = np.zeros((20,2))
for num_states in range(20):
B_rf_lslv[num_states,0] = mu_rf_lslv[num_states]
B_rf_lslv[num_states,1] = sigma_rf_lslv[num_states]
B_rm_lslv[num_states,0] = mu_rm_lslv[num_states]
B_rm_lslv[num_states,1] = sigma_rm_lslv[num_states]
B_sf_lslv[num_states,0] = mu_sf_lslv[num_states]
B_sf_lslv[num_states,1] = sigma_sf_lslv[num_states]
B_sm_lslv[num_states,0] = mu_sm_lslv[num_states]
B_sm_lslv[num_states,1] = sigma_sm_lslv[num_states]
B_rf_lslv = B_rf_lslv.tolist()
B_rm_lslv = B_rm_lslv.tolist()
B_sf_lslv = B_sf_lslv.tolist()
B_sm_lslv = B_sm_lslv.tolist()
model_rf_lslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rf_lslv, pi) # Will be Trained
model_rm_lslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_rm_lslv, pi) # Will be Trained
model_sf_lslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sf_lslv, pi) # Will be Trained
model_sm_lslv = ghmm.HMMFromMatrices(F,ghmm.GaussianDistribution(F), A, B_sm_lslv, pi) # Will be Trained
# For Training
total_seq_rf_lslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,0:15], Fmat_original_hslv[0:301,0:15], Fmat_original_lshv[0:301,0:15])))
total_seq_rm_lslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,15:30], Fmat_original_hslv[0:301,15:30], Fmat_original_lshv[0:301,15:30])))
total_seq_sf_lslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,30:45], Fmat_original_hslv[0:301,30:45], Fmat_original_lshv[0:301,30:45])))
total_seq_sm_lslv = np.matrix(np.column_stack((Fmat_original_hshv[0:301,45:60], Fmat_original_hslv[0:301,45:60], Fmat_original_lshv[0:301,45:60])))
train_seq_rf_lslv = (np.array(total_seq_rf_lslv).T).tolist()
train_seq_rm_lslv = (np.array(total_seq_rm_lslv).T).tolist()
train_seq_sf_lslv = (np.array(total_seq_sf_lslv).T).tolist()
train_seq_sm_lslv = (np.array(total_seq_sm_lslv).T).tolist()
#print train_seq_rf_lslv
final_ts_rf_lslv = ghmm.SequenceSet(F,train_seq_rf_lslv)
final_ts_rm_lslv = ghmm.SequenceSet(F,train_seq_rm_lslv)
final_ts_sf_lslv = ghmm.SequenceSet(F,train_seq_sf_lslv)
final_ts_sm_lslv = ghmm.SequenceSet(F,train_seq_sm_lslv)
model_rf_lslv.baumWelch(final_ts_rf_lslv)
model_rm_lslv.baumWelch(final_ts_rm_lslv)
model_sf_lslv.baumWelch(final_ts_sf_lslv)
model_sm_lslv.baumWelch(final_ts_sm_lslv)
# For Testing
total_seq_obj_lslv = Fmat_original_lslv[0:301,:]
rf_lslv = np.matrix(np.zeros(np.size(total_seq_obj_lslv,1)))
rm_lslv = np.matrix(np.zeros(np.size(total_seq_obj_lslv,1)))
sf_lslv = np.matrix(np.zeros(np.size(total_seq_obj_lslv,1)))
sm_lslv = np.matrix(np.zeros(np.size(total_seq_obj_lslv,1)))
k = 0
while (k < np.size(total_seq_obj_lslv,1)):
test_seq_obj_lslv = (np.array(total_seq_obj_lslv[0:301,k]).T).tolist()
new_test_seq_obj_lslv = np.array(sum(test_seq_obj_lslv,[]))
#print new_test_seq_obj_lslv
ts_obj_lslv = new_test_seq_obj_lslv
#print np.shape(ts_obj_lslv)
final_ts_obj_lslv = ghmm.EmissionSequence(F,ts_obj_lslv.tolist())
# Find Viterbi Path
path_rf_obj_lslv = model_rf_lslv.viterbi(final_ts_obj_lslv)
path_rm_obj_lslv = model_rm_lslv.viterbi(final_ts_obj_lslv)
path_sf_obj_lslv = model_sf_lslv.viterbi(final_ts_obj_lslv)
path_sm_obj_lslv = model_sm_lslv.viterbi(final_ts_obj_lslv)
obj_lslv = max(path_rf_obj_lslv[1],path_rm_obj_lslv[1],path_sf_obj_lslv[1],path_sm_obj_lslv[1])
if obj_lslv == path_rf_obj_lslv[1]:
rf_lslv[0,k] = 1
elif obj_lslv == path_rm_obj_lslv[1]:
rm_lslv[0,k] = 1
elif obj_lslv == path_sf_obj_lslv[1]:
sf_lslv[0,k] = 1
else:
sm_lslv[0,k] = 1
k = k+1
#print rf_lslv.T
# accumulate the confusion matrix: row = predicted class, column = true class
indicators_lslv = [rf_lslv, rm_lslv, sf_lslv, sm_lslv]
for i in range(4):
    for j in range(4):
        cmat[i][j] = cmat[i][j] + np.sum(indicators_lslv[i][0, 15*j:15*(j+1)])
#print cmat
############################################################################################################################################
# Plot Confusion Matrix
Nlabels = 4
fig = pp.figure()
ax = fig.add_subplot(111)
figplot = ax.matshow(cmat, interpolation = 'nearest', origin = 'upper', extent=[0, Nlabels, 0, Nlabels])
ax.set_title('Performance of HMM Models')
pp.xlabel("Targets")
pp.ylabel("Predictions")
ax.set_xticks([0.5,1.5,2.5,3.5])
ax.set_xticklabels(['Rigid-Fixed', 'Rigid-Movable', 'Soft-Fixed', 'Soft-Movable'])
ax.set_yticks([3.5,2.5,1.5,0.5])
ax.set_yticklabels(['Rigid-Fixed', 'Rigid-Movable', 'Soft-Fixed', 'Soft-Movable'])
figbar = fig.colorbar(figplot)
i = 0
while (i < 4):
j = 0
while (j < 4):
pp.text(j+0.5,3.5-i,cmat[i][j])
j = j+1
i = i+1
pp.savefig('results_force_20_states.png')
pp.show()
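Given the axis labels above (columns are targets, rows are predictions), the diagonal of `cmat` counts the correctly classified trials, so overall accuracy is the trace divided by the total. A small helper, assuming the 4x4 layout used here:

```python
import numpy as np

def confusion_accuracy(cmat):
    """Overall accuracy: correctly classified trials / all trials."""
    cmat = np.asarray(cmat, dtype=float)
    return np.trace(cmat) / cmat.sum()

demo = np.array([[10,  2,  0,  0],
                 [ 1, 12,  1,  0],
                 [ 0,  0, 14,  3],
                 [ 0,  1,  0, 16]])
print(confusion_accuracy(demo))  # 0.8666... (52 of 60 trials correct)
```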
| tapomayukh/projects_in_python | classification/Classification_with_HMM/Single_Contact_Classification/Variable_Stiffness_Variable_Velocity/HMM/with padding 3s/hmm_crossvalidation_force_20_states.py | Python | mit | 29,288 | [
"Mayavi"
] | 9e1f70c6ed37637e65e52327de6fda193ef47bfbe4cc39b0c79ffca8e4a7a376 |
import inspect
import logging
import os
from six import with_metaclass
from kalliope.core.ConfigurationManager import SettingLoader
from kalliope.core.ConfigurationManager.ConfigurationChecker import ConfigurationChecker
from kalliope.core.Models import Singleton
from kalliope.core.Models.Brain import Brain
from kalliope.core.Models.Neuron import Neuron
from kalliope.core.Models.Signal import Signal
from kalliope.core.Models.Synapse import Synapse
from kalliope.core.Utils import Utils
from .YAMLLoader import YAMLLoader
logging.basicConfig()
logger = logging.getLogger("kalliope")
FILE_NAME = "brain.yml"
class BrainNotFound(Exception):
pass
class BrainLoader(with_metaclass(Singleton, object)):
"""
This class loads the brain YAML file and exposes both the raw YAML and the parsed Brain object
"""
def __init__(self, file_path=None):
sl = SettingLoader()
self.settings = sl.settings
self.file_path = file_path
if self.file_path is None: # we don't provide a file path, so search for the default one
self.file_path = Utils.get_real_file_path(FILE_NAME)
else:
self.file_path = Utils.get_real_file_path(file_path)
# if the returned file path is none, the file doesn't exist
if self.file_path is None:
raise BrainNotFound("brain file not found")
self.yaml_config = self.get_yaml_config()
self.brain = self.load_brain()
def get_yaml_config(self):
"""
Load the default or the provided YAML file and return its content
:return: The loaded brain YAML
:rtype: String
:Example:
brain_yaml = BrainLoader.get_yaml_config("/var/tmp/brain.yml")
.. warnings:: Class Method
"""
if self.file_path is None:
brain_file_path = self._get_root_brain_path()
else:
brain_file_path = self.file_path
return YAMLLoader.get_config(brain_file_path)
def load_brain(self):
"""
Load the default or the provided YAML file and return the parsed Brain
:return: The loaded Brain
:rtype: Brain
:Example:
brain = BrainLoader.load_brain(file_path="/var/tmp/brain.yml")
.. seealso:: Brain
.. warnings:: Class Method
"""
# Instantiate a brain
brain = Brain()
# get the brain with dict
dict_brain = self.get_yaml_config()
brain.brain_yaml = dict_brain
# create list of Synapse
synapses = list()
for synapses_dict in dict_brain:
if "includes" not in synapses_dict: # we don't need to check includes as it's not a synapse
if ConfigurationChecker().check_synape_dict(synapses_dict):
name = synapses_dict["name"]
neurons = self.get_neurons(synapses_dict["neurons"], self.settings)
signals = self.get_signals(synapses_dict["signals"])
new_synapse = Synapse(name=name, neurons=neurons, signals=signals)
synapses.append(new_synapse)
brain.synapses = synapses
if self.file_path is None:
brain.brain_file = self._get_root_brain_path()
else:
brain.brain_file = self.file_path
# check that no two synapses share the same name
if not ConfigurationChecker().check_synapes(synapses):
brain = None
return brain
@classmethod
def get_neurons(cls, neurons_dict, settings):
"""
Get a list of Neuron objects from a neuron dict
:param neurons_dict: Neuron name or dictionary of Neuron_name/Neuron_parameters
:type neurons_dict: String or dict
:param settings: The Settings with the global variables
:return: A list of Neurons
:rtype: List
:Example:
neurons = cls._get_neurons(synapses_dict["neurons"])
.. seealso:: Neuron
.. warnings:: Static and Private
"""
neurons = list()
for neuron_dict in neurons_dict:
if ConfigurationChecker().check_neuron_dict(neuron_dict):
if isinstance(neuron_dict, dict):
for neuron_name in neuron_dict:
new_neuron = Neuron(name=neuron_name, parameters=neuron_dict[neuron_name])
neurons.append(new_neuron)
else:
new_neuron = Neuron(name=neuron_dict)
neurons.append(new_neuron)
return neurons
@classmethod
def get_signals(cls, signals_dict):
"""
Get a list of Signal objects from a signals dict
:param signals_dict: Signal name or dictionary of Signal_name/Signal_parameters
:type signals_dict: String or dict
:return: A list of Event and/or Order
:rtype: List
:Example:
signals = cls._get_signals(synapses_dict["signals"])
.. seealso:: Event, Order
.. warnings:: Class method and Private
"""
signals = list()
for signal_dict in signals_dict:
if ConfigurationChecker().check_signal_dict(signal_dict):
for signal_name in signal_dict:
new_signal = Signal(name=signal_name, parameters=signal_dict[signal_name])
signals.append(new_signal)
return signals
@staticmethod
def _get_root_brain_path():
"""
Return the full path of the default brain file
:Example:
brain.brain_file = cls._get_root_brain_path()
.. raises:: IOError
.. warnings:: Static method and Private
"""
# get current script directory path. We are in /an/unknown/path/kalliope/core/ConfigurationManager
cur_script_directory = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
# get parent dir. Now we are in /an/unknown/path/kalliope
parent_dir = os.path.normpath(cur_script_directory + os.sep + os.pardir + os.sep + os.pardir)
brain_path = parent_dir + os.sep + "brain.yml"
logger.debug("Real brain.yml path: %s" % brain_path)
if os.path.isfile(brain_path):
return brain_path
raise IOError("Default brain.yml file not found")
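`BrainLoader` is declared with `with_metaclass(Singleton, object)`, so instantiating it repeatedly returns the same cached object and the brain file is parsed only once. A minimal sketch of such a metaclass in Python 3 syntax (the actual `kalliope.core.Models.Singleton` may differ in detail; the original uses `six.with_metaclass` only for Python 2/3 compatibility):

```python
class Singleton(type):
    """Metaclass: the first call to the class caches the instance."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in Singleton._instances:
            Singleton._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return Singleton._instances[cls]

class Demo(metaclass=Singleton):
    pass

a, b = Demo(), Demo()
print(a is b)  # True
```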
| kalliope-project/kalliope | kalliope/core/ConfigurationManager/BrainLoader.py | Python | gpl-3.0 | 6,353 | [
"NEURON"
] | b603f6941843a9df623d0e6256fd0dc6d8479c8ae721b02ac9fba3d6483958c8 |
from Biskit import *
import Biskit.tools as T
import numpy as N
###############################
## Creating a Trajectory
## Example 1:
## starting from several single structures
###############################
m = PDBModel( '3TGI' )
t = Trajectory( 20 * [ m ] ) ## trajectory of 20 identical structures
t.ref ## reference PDBModel
t.lenFrames() ## number of frames, same as len(t)
t.lenAtoms() ## number of atoms, same as len( t.ref )
## kick out non-protein atoms (ions, solvent, etc.)
t = t.compressAtoms( t.ref.maskProtein() )
## shift each frame by incrementing delta-x
for i in range( len( t ) ):
t.frames[i] += [ i, 0, 0 ]
pm = Pymoler()
## t[i] returns a PDBModel instance for frame i
pm.addMovie( [ t[i] for i in range(len(t)) ], 'traj' )
pm.add( 'mplay' )
pm.show()
###################################
## Load and split a real trajectory
###################################
## converted from amber crd using:
## amber2traj.py -i sim.crd -r frame0.pdb
t = T.load( 'traj_0.dat' )
## kick out frames: take every 4th frame only
t_short = t.takeFrames( range(0, len(t), 4) )
len( t_short )
## split system into three trajectories ...
## ... containing *roughly* one spectrin repeat each
third = t.lenAtoms() / 3
t0 = t.takeAtoms( range(0, third) )
t1 = t.takeAtoms( range(third, 2*third) )
t2 = t.takeAtoms( range(2*third, 3*third) )
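The three-way split above generalizes to any number of contiguous chunks. A helper sketch (not part of Biskit) that reproduces the same integer arithmetic, dropping any remainder atoms exactly as `t.lenAtoms() / 3` does under Python 2 integer division:

```python
def split_indices(n_atoms, n_chunks):
    """Split range(n_atoms) into n_chunks contiguous index lists."""
    size = n_atoms // n_chunks
    return [list(range(i * size, (i + 1) * size)) for i in range(n_chunks)]

chunks = split_indices(10, 3)
print([len(c) for c in chunks])  # [3, 3, 3]
print(chunks[1])                 # [3, 4, 5]
```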
###################################
## RMS fits and plotting
###################################
## fit trajectory to average
t0.fit()
## fit to average using CA only
t0.fit( t0.ref.maskCA(), prof='rms_ca' )
## iterative fit to CA, kicks out outlier regions
t0.fit( t0.ref.maskCA(), prof='rms_ca_it', n_it=5 )
## plot the rms versus frame for the 3 above fits
## (uses ProfileCollection.plot -- also available for ...
## ... PDBModel.atoms, .residues & Trajectory.profiles)
p = t0.profiles.plot( 'rms', 'rms_ca', 'rms_ca_it' )
p.show()
##################################
## divide, fit, and re-connect MD
##################################
## t0 and t1 have the same core sequence but different head and tail peptides...
## align residue/atom content of first and second trajectory
i0, i1 = t0.ref.compareAtoms( t1.ref )
t0 = t0.takeAtoms( i0 )
t1 = t1.takeAtoms( i1 )
## now t0 and t1 have exactly the same atom content and order
t0.fit( n_it=3 )
t1.fit( ref=t0.avgModel(), n_it=3 ) ## fit t1 to average of t0
t0.ref.writePdb( 't0.pdb' ) ## write ref PDB
t0.writeCrd( 't0.crd' ) ## write Amber CRD
t1.writeCrd( 't1.crd' )
t_01 = t0.concat( t1 ) ## concat t0 and t1 in time
p = t_01.profiles.plot( 'rms' )
p.show() ## repeat the show if the plot gets hidden behind other windows
## Note: do not close the xplot window during the python session!
p.title = 'RMS versus t'
p.write_eps( 'plot.eps' ) ## see biggles documentation
## t_01.ref.writePdb( 't_0_1.pdb' )
## t_01.writeCrd( 't_0_1.crd')
| ostrokach/biskit | doc/tutorial_md/tutorial_md.py | Python | gpl-3.0 | 2,885 | [
"Amber"
] | c1eb7822891b999b54e1585362699049e9db75192aba76c95a5468e0b3c74d48 |
# coding: utf-8
# Copyright (c) Pymatgen Development Team.
# Distributed under the terms of the MIT License.
"""
This module implements various transmuter classes.
Transmuters are essentially classes that generate TransformedStructures from
various data sources. They enable the high-throughput generation of new
structures and input files.
It also includes the helper function, batch_write_vasp_input to generate an
entire directory of vasp input files for running.
"""
__author__ = "Shyue Ping Ong, Will Richards"
__copyright__ = "Copyright 2012, The Materials Project"
__version__ = "0.1"
__maintainer__ = "Shyue Ping Ong"
__email__ = "shyuep@gmail.com"
__date__ = "Mar 4, 2012"
import os
import re
from multiprocessing import Pool
from pymatgen.alchemy.materials import TransformedStructure
from pymatgen.io.vasp.sets import MPRelaxSet
class StandardTransmuter:
"""
An example of a Transmuter object, which performs a sequence of
transformations on many structures to generate TransformedStructures.
.. attribute: transformed_structures
List of all transformed structures.
"""
def __init__(
self,
transformed_structures,
transformations=None,
extend_collection=0,
ncores=None,
):
"""
Initializes a transmuter from an initial list of
:class:`pymatgen.alchemy.materials.TransformedStructure`.
Args:
transformed_structures ([TransformedStructure]): Input transformed
structures
transformations ([Transformations]): New transformations to be
applied to all structures.
extend_collection (int): Whether to use more than one output
structure from one-to-many transformations. extend_collection
can be an int, which determines the maximum branching for each
transformation.
ncores (int): Number of cores to use for applying transformations.
Uses multiprocessing.Pool. Default is None, which implies
serial.
"""
self.transformed_structures = transformed_structures
self.ncores = ncores
if transformations is not None:
for trans in transformations:
self.append_transformation(trans, extend_collection=extend_collection)
def __getitem__(self, index):
return self.transformed_structures[index]
def __getattr__(self, name):
return [getattr(x, name) for x in self.transformed_structures]
def undo_last_change(self):
"""
Undo the last transformation in the TransformedStructure.
Raises:
IndexError if already at the oldest change.
"""
for x in self.transformed_structures:
x.undo_last_change()
def redo_next_change(self):
"""
Redo the last undone transformation in the TransformedStructure.
Raises:
IndexError if already at the latest change.
"""
for x in self.transformed_structures:
x.redo_next_change()
def __len__(self):
return len(self.transformed_structures)
def append_transformation(self, transformation, extend_collection=False, clear_redo=True):
"""
Appends a transformation to all TransformedStructures.
Args:
transformation: Transformation to append
extend_collection: Whether to use more than one output structure
from one-to-many transformations. extend_collection can be a
number, which determines the maximum branching for each
transformation.
clear_redo (bool): Whether to clear the redo list. By default,
this is True, meaning any append clears the history of
undos. However, when using append_transformation to do a
redo, the redo list should not be cleared to allow multiple
redos.
Returns:
List of booleans corresponding to the initial transformed structures;
each boolean describes whether the transformation altered the
structure.
"""
if self.ncores and transformation.use_multiprocessing:
with Pool(self.ncores) as p:
# need to condense arguments into single tuple to use map
z = map(
lambda x: (x, transformation, extend_collection, clear_redo),
self.transformed_structures,
)
new_tstructs = p.map(_apply_transformation, z, 1)
self.transformed_structures = []
for ts in new_tstructs:
self.transformed_structures.extend(ts)
else:
new_structures = []
for x in self.transformed_structures:
new = x.append_transformation(transformation, extend_collection, clear_redo=clear_redo)
if new is not None:
new_structures.extend(new)
self.transformed_structures.extend(new_structures)
def extend_transformations(self, transformations):
"""
Extends a sequence of transformations to the TransformedStructure.
Args:
transformations: Sequence of Transformations
"""
for t in transformations:
self.append_transformation(t)
def apply_filter(self, structure_filter):
"""
Applies a structure_filter to the list of TransformedStructures
in the transmuter.
Args:
structure_filter: StructureFilter to apply.
"""
def test_transformed_structure(ts):
return structure_filter.test(ts.final_structure)
self.transformed_structures = list(filter(test_transformed_structure, self.transformed_structures))
for ts in self.transformed_structures:
ts.append_filter(structure_filter)
def write_vasp_input(self, **kwargs):
r"""
Batch write vasp input for a sequence of transformed structures to
output_dir, following the format output_dir/{formula}_{number}.
Args:
\\*\\*kwargs: All kwargs supported by batch_write_vasp_input.
"""
batch_write_vasp_input(self.transformed_structures, **kwargs)
def set_parameter(self, key, value):
"""
Add parameters to the transmuter. Additional parameters are stored in
the as_dict() output.
Args:
key: The key for the parameter.
value: The value for the parameter.
"""
for x in self.transformed_structures:
x.other_parameters[key] = value
def add_tags(self, tags):
"""
Add tags for the structures generated by the transmuter.
Args:
tags: A sequence of tags. Note that this should be a sequence of
strings, e.g., ["My awesome structures", "Project X"].
"""
self.set_parameter("tags", tags)
def __str__(self):
output = ["Current structures", "------------"]
for x in self.transformed_structures:
output.append(str(x.final_structure))
return "\n".join(output)
def append_transformed_structures(self, tstructs_or_transmuter):
"""
Method is overloaded to accept either a list of transformed structures
or a transmuter, in which case it appends the second transmuter's
structures.
Args:
tstructs_or_transmuter: A list of transformed structures or a
transmuter.
"""
if isinstance(tstructs_or_transmuter, self.__class__):
self.transformed_structures.extend(tstructs_or_transmuter.transformed_structures)
else:
for ts in tstructs_or_transmuter:
assert isinstance(ts, TransformedStructure)
self.transformed_structures.extend(tstructs_or_transmuter)
@staticmethod
def from_structures(structures, transformations=None, extend_collection=0):
"""
Alternative constructor from structures rather than
TransformedStructures.
Args:
structures: Sequence of structures
transformations: New transformations to be applied to all
structures
extend_collection: Whether to use more than one output structure
from one-to-many transformations. extend_collection can be a
number, which determines the maximum branching for each
transformation.
Returns:
StandardTransmuter
"""
tstruct = [TransformedStructure(s, []) for s in structures]
return StandardTransmuter(tstruct, transformations, extend_collection)
class CifTransmuter(StandardTransmuter):
"""
Generates a Transmuter from a cif string, possibly containing multiple
structures.
"""
def __init__(self, cif_string, transformations=None, primitive=True, extend_collection=False):
"""
Generates a Transmuter from a cif string, possibly
containing multiple structures.
Args:
cif_string: A string containing a cif or a series of cifs
transformations: New transformations to be applied to all
structures
primitive: Whether to generate the primitive cell from the cif.
extend_collection: Whether to use more than one output structure
from one-to-many transformations. extend_collection can be a
number, which determines the maximum branching for each
transformation.
"""
transformed_structures = []
lines = cif_string.split("\n")
structure_data = []
read_data = False
for line in lines:
if re.match(r"^\s*data", line):
structure_data.append([])
read_data = True
if read_data:
structure_data[-1].append(line)
for data in structure_data:
tstruct = TransformedStructure.from_cif_string("\n".join(data), [], primitive)
transformed_structures.append(tstruct)
super().__init__(transformed_structures, transformations, extend_collection)
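The parsing loop above can be exercised in isolation. This hedged sketch (`split_cif_blocks` is an invented helper, not a pymatgen function) reproduces how lines before the first `data` block are skipped and each block collects its own lines:

```python
import re

def split_cif_blocks(cif_string):
    """Group a multi-structure CIF string into one list of lines per data_ block."""
    blocks = []
    reading = False
    for line in cif_string.split("\n"):
        if re.match(r"^\s*data", line):
            blocks.append([])
            reading = True
        if reading:
            blocks[-1].append(line)
    return blocks

example = "# header\ndata_NaCl\n_cell_length_a 5.64\ndata_Si\n_cell_length_a 5.43"
print([b[0] for b in split_cif_blocks(example)])  # ['data_NaCl', 'data_Si']
```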
@staticmethod
def from_filenames(filenames, transformations=None, primitive=True, extend_collection=False):
"""
Generates a TransformedStructureCollection from a cif, possibly
containing multiple structures.
Args:
filenames: List of strings of the cif files
transformations: New transformations to be applied to all
structures
primitive: Same meaning as in __init__.
extend_collection: Same meaning as in __init__.
"""
allcifs = []
for fname in filenames:
with open(fname, "r") as f:
allcifs.append(f.read())
return CifTransmuter(
"\n".join(allcifs),
transformations,
primitive=primitive,
extend_collection=extend_collection,
)
class PoscarTransmuter(StandardTransmuter):
"""
Generates a transmuter from a sequence of POSCARs.
"""
def __init__(self, poscar_string, transformations=None, extend_collection=False):
"""
Args:
poscar_string: List of POSCAR strings
transformations: New transformations to be applied to all
structures.
extend_collection: Whether to use more than one output structure
from one-to-many transformations.
"""
tstruct = TransformedStructure.from_poscar_string(poscar_string, [])
super().__init__([tstruct], transformations, extend_collection=extend_collection)
@staticmethod
def from_filenames(poscar_filenames, transformations=None, extend_collection=False):
"""
Convenience constructor that generates a POSCAR transmuter from a list of
POSCAR filenames.
Args:
poscar_filenames: List of POSCAR filenames
transformations: New transformations to be applied to all
structures.
extend_collection:
Same meaning as in __init__.
"""
tstructs = []
for filename in poscar_filenames:
with open(filename, "r") as f:
tstructs.append(TransformedStructure.from_poscar_string(f.read(), []))
return StandardTransmuter(tstructs, transformations, extend_collection=extend_collection)
def batch_write_vasp_input(
transformed_structures,
vasp_input_set=MPRelaxSet,
output_dir=".",
create_directory=True,
subfolder=None,
include_cif=False,
**kwargs,
):
"""
Batch write vasp input for a sequence of transformed structures to
output_dir, following the format output_dir/{group}/{formula}_{number}.
Args:
transformed_structures: Sequence of TransformedStructures.
vasp_input_set: pymatgen.io.vasp.sets.VaspInputSet used to create
vasp input files from structures.
output_dir: Directory to output files
create_directory (bool): Create the directory if not present.
Defaults to True.
subfolder: Function to create subdirectory name from
transformed_structure.
e.g., lambda x: x.other_parameters["tags"][0] to use the first
tag.
include_cif (bool): Boolean indicating whether to output a CIF as
well. CIF files are generally better supported in visualization
programs.
"""
for i, s in enumerate(transformed_structures):
formula = re.sub(r"\s+", "", s.final_structure.formula)
if subfolder is not None:
subdir = subfolder(s)
dirname = os.path.join(output_dir, subdir, "{}_{}".format(formula, i))
else:
dirname = os.path.join(output_dir, "{}_{}".format(formula, i))
s.write_vasp_input(vasp_input_set, dirname, create_directory=create_directory, **kwargs)
if include_cif:
from pymatgen.io.cif import CifWriter
writer = CifWriter(s.final_structure)
writer.write_file(os.path.join(dirname, "{}.cif".format(formula)))
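The directory naming used above (whitespace stripped from the formula, index appended, optional subfolder) can be isolated as follows; `output_dirname` is an illustrative helper, not a pymatgen function:

```python
import os
import re

def output_dirname(formula, index, output_dir=".", subdir=None):
    # mirrors the scheme output_dir/{group}/{formula}_{number}
    formula = re.sub(r"\s+", "", formula)
    parts = [output_dir] + ([subdir] if subdir else []) + ["{}_{}".format(formula, index)]
    return os.path.join(*parts)

print(output_dirname("Fe2 O3", 0, "runs", "project_x"))  # runs/project_x/Fe2O3_0
```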
def _apply_transformation(inputs):
"""
Helper method for multiprocessing of apply_transformation. Must not be
in the class so that it can be pickled.
Args:
inputs: Tuple containing the transformed structure, the transformation
to be applied, a boolean indicating whether to extend the
collection, and a boolean indicating whether to clear the redo list.
Returns:
List of output structures (the modified initial structure, plus
any new structures created by a one-to-many transformation)
"""
ts, transformation, extend_collection, clear_redo = inputs
new = ts.append_transformation(transformation, extend_collection, clear_redo=clear_redo)
o = [ts]
if new:
o.extend(new)
return o
| gmatteo/pymatgen | pymatgen/alchemy/transmuters.py | Python | mit | 15,098 | [
"VASP",
"pymatgen"
] | 5d039b6a2108d60e96d034101b8a007cb3e6f6b2873b597ac7e8955710c56fd0 |
# Copyright 2021 Lawrence Livermore National Security, LLC
"""
This script is used by cmec-driver to run the ASoP-Spectral metrics.
It is based on the workflow in ASoP1_spectral_main.py and
can be called with the aruments listed below. Keys that can be set
in the config or settings dictionary are: region, timescale-all, mask,
dates-all, and season-all.
Arguments:
* model_dir:
directory containing model data
* obs_dir:
directory containing obs data
* wk_dir:
output directory
* config_path:
JSON config file (optional)
* settings:
dictionary of settings (optional)
Author: Ana Ordonez
"""
import argparse
from datetime import datetime, timezone
import glob
import itertools
import json
import os
from platform import python_version
import iris
import make_hist_maps
import plot_hist_maps
import plot_hist1d
from ASoP_Spectral_metric import plot_metric
from set_descriptive_text import set_descriptive_text
# set date once for provenance
current_date = datetime.now(timezone.utc).strftime("%b %d %Y %H:%M:%S")+" UTC"
# setting output directory names
figure_dir_name = "asop_figures"
metrics_dir_name = "asop_metrics"
def main(model_dir, obs_dir, wk_dir, config_path=None, settings=None):
"""
Read in data and create histogram cubes, save these to netcdf files.
Then plot histogram maps and some regional 1d histograms
Arguments:
* model_dir
Directory containing model precipitation time series and/or
pre-calculated histogram cubes
* obs_dir
Directory containing observational precipitation time series
and/or pre-calculated histogram cubes.
* wk_dir
Path to output directory
* config_path (optional)
Path to configuration JSON (for CMEC driver)
* settings (optional)
Dictionary containing choices for region and timescale
"""
# Load CMEC config
if config_path is not None:
print("Loading configuration file")
with open (config_path,"r") as fname:
settings=json.load(fname)["ASoP/Spectral"]
print("Settings from configuration file:\n",json.dumps(settings, indent=4))
elif settings is None:
settings={
"regions": {"default":[-10.0, 10.0, 60.0, 160.0]},
"figure_type": "png",
"timescale-all": "",
"mask": None,
"dates-all": "",
"season-all": ""}
print("Using default settings")
# Re-order the regions from Coherence to Spectral format
for r in settings["regions"]:
settings["regions"][r][:]=[settings["regions"][r][i] for i in [2,0,3,1]]
# Clean up extension in case there is a leading '.'
ext = '.'+settings.get('figure_type','png').replace(".","")
# Set up output files and directories
json_filename=os.path.join(wk_dir,"output.json")
initialize_descriptive_json(json_filename,wk_dir,model_dir,obs_dir)
os.mkdir(os.path.join(wk_dir,figure_dir_name))
os.mkdir(os.path.join(wk_dir,metrics_dir_name))
# Get input file lists and separate histogram cubes from timeseries
hist_input_model,model_filenames=get_filename_lists(model_dir)
hist_input_obs,obs_filenames=get_filename_lists(obs_dir)
# Make and save histogram cubes if they don't already exist
# for the timeseries files
make_hist_model,new_hist_model=check_histogram_files(model_filenames)
new_hist_model=[os.path.join(wk_dir,f) for f in new_hist_model]
make_hist_obs,new_hist_obs=check_histogram_files(obs_filenames)
new_hist_obs=[os.path.join(wk_dir,f) for f in new_hist_obs]
for hlist in [make_hist_model,make_hist_obs]:
if hlist:
print("Making histograms")
making_histogram_files(hlist,wk_dir)
# Combine input and newly made histogram files into one list
hist_filenames_model=sorted(hist_input_model+new_hist_model)
hist_filenames_obs=(hist_input_obs+new_hist_obs)
if len(hist_filenames_obs) > 1:
raise RuntimeError("More than one benchmark dataset found.")
elif len(hist_filenames_obs) == 0:
raise RuntimeError("No control datasets provided")
# Want obs to go first in list for diffs
hist_filenames=hist_filenames_obs+hist_filenames_model
runtitles_long=make_runtitle(hist_filenames,settings)
runtitles_short=make_runtitle(hist_filenames,settings,model_only=True)
region_dict=settings.get("regions",{"default":[60.0,-10.0,160.0,10.0]})
for region in region_dict:
# Plot histogram maps
print("Plotting histogram maps")
myregion=region_dict[region]
for item in hist_filenames_model:
title1=runtitles_long[item]
title2=runtitles_long[hist_filenames_obs[0]]
plotname_root=figure_dir_name+'/compare_{0}_{1}_{2}'.format(title1,title2,region)
filenames=[item,hist_filenames_obs[0]]
plot_histogram_maps(filenames,plotname_root,wk_dir,myregion,ext,settings)
# 1d histogram plots
print("Plotting 1d histograms")
timescale=settings.get("timescale-all",None)
plottitle='All datasets'
plotname_root=figure_dir_name+'/compare_as_1dhistograms_{0}'.format(region)
# Plot 1d histograms of model data with obs overplotted
runtitles_model=[runtitles_short[f] for f in hist_filenames_model]
runtitles_obs=[runtitles_short[hist_filenames_obs[0]]]
plot_1d_histograms(
hist_filenames_model,runtitles_model, \
hist_filenames_obs,runtitles_obs, \
timescale,myregion,plottitle,plotname_root,wk_dir,ext)
# 1d histogram DIFFERENCE plots
print("Plotting 1d histogram differences")
title_long=runtitles_long[hist_filenames_obs[0]]
title_short=runtitles_short[hist_filenames_obs[0]]
titles=[[title_short,runtitles_short[f]] for f in hist_filenames_model]
filenames = [[hist_filenames_obs[0],f] for f in hist_filenames_model]
plottitle='Differences between datasets'
plotname_root=figure_dir_name+'/compare_as_1dhist_differences_{0}_{1}_{2}'.format(title_long,"all_models",region)
# Plot differences between 1d histograms from 1 model datasets
plot_1d_histogram_diffs(
filenames,titles,timescale, \
myregion,plottitle,plotname_root,wk_dir,ext)
# plot histogram metric
mask=settings.get("mask",None)
print("Mask: " + str(mask))
dates=settings.get("dates-all","")
season=settings.get("season-all","")
# Mask file must be present for this metric
if (mask is not None) and (timescale is not None):
if os.path.exists(mask):
print("Making histogram metrics")
json_filename=wk_dir+"/"+metrics_dir_name+"/histogram_metric.json"
model_combo=[[f,hist_filenames_obs[0]] for f in hist_filenames_model]
initialize_metrics_json(json_filename,hist_filenames_obs[0],hist_filenames_model,settings)
make_histogram_metrics(model_combo,season,timescale,dates,mask, \
wk_dir,json_filename,settings,ext)
else:
raise RuntimeError("Mask file not found.")
else:
for keyword, val in zip(["mask", "timescale-all"], [mask, timescale]):
if val is None:
raise RuntimeError("Keyword not found: {0}".format(keyword))
# output html page
write_index_html(wk_dir,region_dict,ext)
print('Processing completed OK!')
return
def check_histogram_files(filename_list):
"""
For the timeseries files in model_filenames, check if an
equivalent histogram file already exists.
Arguments:
* filename_list
List of precipitation timeseries files
"""
make_hist=[]
new_hist=[]
check_for_hist=[".".join(f.split(".")[:-1])+"_hist.nc" for f in filename_list]
for data,hist in zip(filename_list,check_for_hist):
if not os.path.exists(hist):
make_hist.append(data)
new_hist.append(os.path.basename(hist))
return make_hist, new_hist
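The `_hist.nc` naming convention checked above can be sketched on its own; `histogram_name` is an illustrative helper (the real function also keeps the directory for its existence test):

```python
import os

def histogram_name(timeseries_path):
    # drop the final extension and append the _hist.nc suffix,
    # as check_histogram_files does when looking for pre-made cubes
    stem = ".".join(os.path.basename(timeseries_path).split(".")[:-1])
    return stem + "_hist.nc"

print(histogram_name("/data/gpcp_3hr_precip.nc"))  # gpcp_3hr_precip_hist.nc
```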
def making_histogram_files(filename_list,wk_dir):
"""
Read in data and create histogram cubes, save these to netcdf files.
Arguments:
* filename_list
List of precipitation timeseries files
* wk_dir
Path to output directory
"""
desc = {}
for fname in filename_list:
print("Loading cube for",fname)
fname_tmp = os.path.basename(fname)
hname = os.path.join(wk_dir,".".join(fname_tmp.split(".")[:-1])+"_hist.nc")
ppndata1=make_hist_maps.read_data_cube(fname)
ppn_hist_cube=make_hist_maps.make_hist_ppn(ppndata1)
iris.save(ppn_hist_cube, hname)
desc.update({os.path.relpath(hname,start=wk_dir): {
"long_name": "iris histogram cubes",
"description": "histograms saved individually for model and obs data"}})
update_json("data",desc,wk_dir+"/output.json")
return
def plot_histogram_maps(hist_filenames,plotname_root,wk_dir,region,ext,settings):
"""
Plot histogram maps
"""
hist_filename1=hist_filenames[0]
hist_filename2=hist_filenames[1]
ppn_hist_cube1=make_hist_maps.read_data_cube(hist_filename1)
ppn_hist_cube2=make_hist_maps.read_data_cube(hist_filename2)
avg_rain_bins_a,avg_rain_bins_frac_a=make_hist_maps.calc_rain_contr(ppn_hist_cube1)
avg_rain_bins_b,avg_rain_bins_frac_b=make_hist_maps.calc_rain_contr(ppn_hist_cube2)
ppn_names=make_runtitle([hist_filename1,hist_filename2],settings)
ppn1_name=ppn_names[hist_filename1].replace("_"," ")
ppn2_name=ppn_names[hist_filename2].replace("_"," ")
names=make_runtitle([hist_filename1,hist_filename2],settings,model_only=True)
runtitle="{0} vs {1}".format(names[hist_filename1].replace("_"," "),names[hist_filename2].replace("_"," "))
# (optional) Define how you want to lump the bins together (below is the default)
all_ppn_bounds = [(0.005, 10.), (10., 50.), (50., 100.), (100., 3000.)]
# Plot as actual contributions for specific region, e.g. 60 to 160E,10S to 10N
desc={}
plotname='{0}_actual_contributions{1}'.format(plotname_root,ext)
plotname=os.path.join(wk_dir,plotname)
plot_hist_maps.plot_rain_contr(avg_rain_bins_a,avg_rain_bins_b,plotname,
runtitle,ppn1_name,ppn2_name,all_ppn_bounds,region=region)
desc.update({os.path.relpath(plotname,start=wk_dir): {
"description": "Actual contribution of each timescale for region {0}".format(region)}})
# Plot as fractional contributions
plotname='{0}_fractional_contributions{1}'.format(plotname_root,ext)
plotname=os.path.join(wk_dir,plotname)
plot_hist_maps.plot_rain_contr(avg_rain_bins_frac_a,avg_rain_bins_frac_b,plotname,
runtitle,ppn1_name,ppn2_name,all_ppn_bounds,region=region,frac=1)
desc.update({os.path.relpath(plotname,start=wk_dir): {
"description": "Fractional contribution of each timescale for region {0}".format(region)}})
update_json("plots",desc, wk_dir+"/output.json")
return
def plot_1d_histograms(filenames,runtitles,filenames_obs,runtitles_obs,timescale,
myregion,plottitle,plotname_root,wk_dir,ext):
"""
Plot 1d histograms for a small region.
This example uses histogram cubes pre-calculated from two different model datasets
on the same timescale, and compares with those from two observational datasets.
NOTE that the region and the timescale will appear automatically in the plot title
"""
desc={}
plotname='{0}_actual{1}'.format(plotname_root,ext)
plotname=os.path.join(wk_dir,plotname)
plot_hist1d.plot_1dhist(plotname,myregion,filenames,runtitles,plottitle,timescale=timescale,
filenames_obs=filenames_obs,runtitles_obs=runtitles_obs,log=1)
desc.update({os.path.relpath(plotname,start=wk_dir): {
"description": "Actual histogram"}})
plotname='{0}_fractional{1}'.format(plotname_root,ext)
plotname=os.path.join(wk_dir,plotname)
plot_hist1d.plot_1dhist(plotname,myregion,filenames,runtitles,plottitle,timescale=timescale,
filenames_obs=filenames_obs,runtitles_obs=runtitles_obs,frac=1,log=1)
desc.update({os.path.relpath(plotname,start=wk_dir): {
"description": "Fractional histogram"}})
update_json("plots",desc,wk_dir+"/output.json")
return
def plot_1d_histogram_diffs(filenames,runtitles,timescale,
myregion,plottitle,plotname_root,wk_dir,ext):
"""
Plot 1d histograms for a small region.
This example uses histogram cubes pre-calculated from two different model datasets
on the same timescale, and compares with those from two observational datasets.
NOTE that the region and the timescale will appear automatically in the plot title
"""
desc={}
plotname='{0}_actual{1}'.format(plotname_root,ext)
plotname=os.path.join(wk_dir,plotname)
plot_hist1d.plot_1dhist(plotname,myregion,filenames,runtitles,plottitle,timescale,log=1)
desc.update({os.path.relpath(plotname,start=wk_dir): {
"description": "Actual 1d histogram for region "+str(myregion)}})
plotname='{0}_fractional{1}'.format(plotname_root,ext)
plotname=os.path.join(wk_dir,plotname)
plot_hist1d.plot_1dhist(plotname,myregion,filenames,runtitles,plottitle,timescale,frac=1,log=1)
desc.update({os.path.relpath(plotname,start=wk_dir): {
"description": "Fractional 1d histogram for "+str(myregion)}})
update_json("plots",desc,wk_dir+"/output.json")
return
def make_histogram_metrics(hist_combo,season,timescale,dates,mask,wk_dir,json_filename,settings,ext):
"""Set up and run the histogram metrics and difference plot."""
for ppn1,ppn2 in hist_combo:
titles=make_runtitle([ppn1,ppn2],settings,model_only=True)
name1=titles[ppn1]
name2=titles[ppn2]
tmp_list = [x for x in [timescale,season,dates] if x != ""]
plotname=wk_dir+"_".join(["/"+figure_dir_name+"/histogram_metric",name1,name2]+tmp_list)+ext
index_list=plot_metric(ppn1,ppn2,name1,name2,season,timescale,dates,mask,plotname)
result_list=[index_list[x].data.item() for x in range(6)]
# Add metrics to file. Use full name as key.
json_title=make_runtitle([ppn1,ppn2],settings)[ppn1]
results={json_title: {
"histogram overlap": {
"global": result_list[0],
"land": result_list[1],
"sea": result_list[2],
"tropics": result_list[3],
"NH mid-lat": result_list[4],
"SH mid-lat": result_list[5]
}
}
}
update_json("RESULTS",results,json_filename)
# Write figure metadata
desc={os.path.relpath(plotname,start=wk_dir): {
"description": "histogram metric global plot"}}
update_json("plots",desc,wk_dir+"/output.json")
# Write metrics file metadata
desc={os.path.relpath(json_filename,start=wk_dir): {
"description": "Histogram overlap metrics"}}
update_json("metrics",desc,wk_dir+"/output.json")
return
def get_filename_lists(directory):
"""Return lists of files in the directory, separating histogram cubes
ending with '_hist.nc' from timeseries files."""
hist_list=[]
tseries_list=[]
if (directory is not None) and (directory != 'None'):
file_list=sorted(glob.glob(directory+"/*"))
hist_list = [f for f in file_list if f.endswith("_hist.nc")]
tseries_list = [f for f in file_list if f not in set(hist_list)]
return hist_list, tseries_list
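The partition above (histogram cubes vs. raw timeseries) reduces to a suffix test; a minimal stdlib sketch with an invented helper name:

```python
def partition_by_suffix(file_list, suffix="_hist.nc"):
    # first list collects matches, second the remainder, order preserved
    matches = [f for f in file_list if f.endswith(suffix)]
    rest = [f for f in file_list if f not in set(matches)]
    return matches, rest

print(partition_by_suffix(["a_hist.nc", "a.nc", "b.nc"]))
# (['a_hist.nc'], ['a.nc', 'b.nc'])
```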
def get_cube_name(data_cube,default_name="no name"):
# Return data set name obtained by checking common name variables
cube_name=default_name
for key in ["source_id","short_name","name","source","model"]:
if key in data_cube.attributes:
cube_name=data_cube.attributes[key]
break
if "variant_label" in data_cube.attributes:
cube_name+=("_"+data_cube.attributes["variant_label"])
return cube_name
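The attribute-priority lookup in `get_cube_name` can be demonstrated on a plain dict (`cube_name_from_attrs` is an invented name for the sketch):

```python
def cube_name_from_attrs(attrs, default="no name"):
    # first matching key wins, mirroring get_cube_name's priority order
    name = default
    for key in ("source_id", "short_name", "name", "source", "model"):
        if key in attrs:
            name = attrs[key]
            break
    if "variant_label" in attrs:
        name += "_" + attrs["variant_label"]
    return name

print(cube_name_from_attrs({"model": "X", "source_id": "CESM2", "variant_label": "r1i1p1f1"}))
# CESM2_r1i1p1f1
```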
def make_runtitle(data_cube_names,settings,model_only=False,return_timescale=False):
"""
Return a list of names for each data cube for use in figure titles. Option to
return timescale dictionary for histogram map headings.
"""
cube_name={}
extra_params = ["timescale","dates","season"]
timescale = dict.fromkeys(data_cube_names,"")
for fname in data_cube_names:
fbasename=os.path.basename(fname)
tmp_fname="_".join(fbasename.split("_")[:-1])+".nc"
if "name" in settings.get(fbasename,{}):
cube_name[fname]=settings[fbasename]["name"].replace(" ","_")
elif "name" in settings.get(tmp_fname,{}):
cube_name[fname]=settings[tmp_fname]["name"].replace(" ","_")
else:
data_cube=iris.load_cube(fname)
cube_name[fname]=get_cube_name(data_cube).replace(" ","_")
# Get season, dates, timescale if available in settings
for item in extra_params:
tmp="unknown"
# First see if 'all' setting exists
if settings.get(item+"-all",False):
tmp=settings[item+"-all"]
# Check for setting under histogram or regular filename
elif item in settings.get(fbasename,{}):
tmp=settings[fbasename][item]
elif item in settings.get(tmp_fname,{}):
tmp=settings[tmp_fname][item]
if tmp!="unknown" and not model_only:
cube_name[fname]=cube_name[fname]+"_"+tmp
if return_timescale and item=="timescale":
timescale[fname]=tmp
if return_timescale:
return cube_name,timescale
return cube_name
def initialize_descriptive_json(json_filename,wk_dir,model_dir,obs_dir):
"""
Create metadata JSON file that describes package outputs.
"""
from platform import python_version
output = {"provenance":{},"index": "index.html","data":{},"metrics":{},"plots":{},"html":"index.html"}
log_path = wk_dir + "/asop_spectral.log.txt"
output["provenance"] = {
"environment": {'iris':iris.__version__,'python':python_version()},
"modeldata": model_dir,
"obsdata": obs_dir,
"log": log_path,
"date": current_date}
with open(json_filename,"w") as output_json:
json.dump(output,output_json, indent=2)
return
def initialize_metrics_json(json_filename,control,test,settings):
"""
Initialize histogram metrics json for writing metrics
from ASoP_Spectral_metric.py
"""
schema = {"name": "CMEC", "version": "v1", "package": "ASoP"}
dims = {
"json_structure": ["test dataset","metric","region"],
"dimensions": {
"test dataset": {},
"metric": {
"histogram overlap": "area under the fractional histogram that is covered by overlap between two individual histograms"},
"region": {
"global": "global region",
"land": "masked land area from -30 to 30 degrees latitude",
"sea": "masked ocean area from -30 to 30 degrees latitude",
"tropics": "-15 to 15 degrees latitude",
"NH mid-lat": "30 to 60 degrees north",
"SH mid-lat": "30 to 60 degrees south"}}}
titles = make_runtitle(test,settings)
for item in titles:
dims["dimensions"]["test dataset"].update({titles[item]: {}})
con_name = make_runtitle([control],settings)[control]
prov = {"environment":{'iris':iris.__version__,'python':python_version()},
"date":current_date}
data={"SCHEMA": schema, "DIMENSIONS": dims, "CONTROL": con_name, "RESULTS": {}, "PROVENANCE": prov}
with open(json_filename,"w") as output_json:
json.dump(data,output_json,indent=2)
return
def update_json(json_key, data_description, json_filename):
"""
Add the dictionary 'data_description' under the key 'json_key' in
the descriptive output json if it exists
"""
if os.path.exists(json_filename):
with open(json_filename,"r") as output_json:
output=json.load(output_json)
output[json_key].update(data_description)
with open(json_filename,"w") as output_json:
json.dump(output,output_json,indent=2)
return
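The read-modify-write cycle in `update_json` can be verified against a temporary file; the function is repeated here verbatim so the sketch is self-contained and everything else is stdlib:

```python
import json
import os
import tempfile

def update_json(json_key, data_description, json_filename):
    # same pattern as above: load, merge under one key, rewrite
    if os.path.exists(json_filename):
        with open(json_filename, "r") as f:
            output = json.load(f)
        output[json_key].update(data_description)
        with open(json_filename, "w") as f:
            json.dump(output, f, indent=2)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "output.json")
    with open(path, "w") as f:
        json.dump({"plots": {}}, f)
    update_json("plots", {"fig.png": {"description": "demo"}}, path)
    with open(path) as f:
        print(json.load(f)["plots"])  # {'fig.png': {'description': 'demo'}}
```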
def write_index_html(wk_dir,region_dict,ext):
"""Create an html page that links users to the metrics json and
plots created by ASoP-Spectral. Results must be located in the
output directory "wk_dir".
Arguments:
* wk_dir: output directory
* region_dict: dictionary of region names and coordinates
* ext: figure file extension
"""
metric_file=metrics_dir_name+'/histogram_metric.json'
fig_list=[figure_dir_name+'/'+f for f in os.listdir(wk_dir+'/'+figure_dir_name) if f.endswith(ext)]
hist_metric_exists=os.path.exists(os.path.join(wk_dir,metric_file))
# Extensive descriptions are set in another function.
intr_txt,mtrc_txt,hst_mp_txt,hst_txt,hst_df_txt=set_descriptive_text()
# list unique keyword to identify plots for each category
fig_keys=["contributions","1dhistograms","differences"]
subtitle_list=["Histogram Maps","All Histograms","Histogram Difference"]
subheading_list=["actual","fractional"]
text_list=[hst_mp_txt,hst_txt,hst_df_txt]
# Initialize html text
html_text=[
'<html>\n','<body>','<head><title>ASoP-Spectral</title></head>\n',
'<br><h1>ASoP-Spectral results</h1>\n',intr_txt]
contents = [
'<h2>Contents</h2>\n',
'<dl>\n','<dt><a href="#Figures">Figures</a></dt>\n',
'<dd><a href="#Histogram-Maps">Histogram Maps</a></dd>\n',
'<dd><a href="#All-Histograms">All Histograms</a></dd>\n',
'<dd><a href="#Histogram-Difference">Histogram Difference</a></dd>\n',
'</dl>\n']
if hist_metric_exists:
contents.insert(2,'<dt><a href="#Metrics">Metrics</a></dt>\n')
contents.insert(4,'<dd><a href="#Histogram-Metric-Maps">Histogram Metric Maps</a></dd>\n')
html_text.extend(contents)
# Check for optional histogram metric files
if hist_metric_exists:
html_text.extend([
'<section id="Metrics">\n',
'<h2>Metrics</h2>\n',
mtrc_txt,
'<br><a href="{0}" target="_blank" >{0}</a>\n'.format(metric_file),
'</section>\n',
'<section id="Figures">\n',
'<h2>Figures</h2>\n',
'<section id="Histogram-Metric-Maps">\n',
'<h3>Histogram Metric Maps</h3>'])
sub_list=[f for f in fig_list if ('histogram_metric' in f)]
for fig in sub_list:
html_text.append(
'<p><a href="{0}" target="_blank" alt={0}><img src="{0}" '.format(fig)
+'width="647" alt="{0}"></a></p>\n'.format(fig))
else:
html_text.append('<section id="Figures">\n')
html_text.append('<h2>Figures</h2>\n')
# Build the rest of the titles, subtitles, text, and figures.
for title,kword,desc in zip(subtitle_list,fig_keys,text_list):
html_text.extend([
'<section id="'+title.replace(' ','-')+'">\n',
'<h3>{0}</h3>\n'.format(title),
'<p>{0}</p>'.format(desc)])
plot_list=[f for f in fig_list if (kword in f)]
for region in region_dict:
html_text.append('<h4>{0}</h4>\n'.format(region.replace('_',' ')))
for heading in subheading_list:
html_text.append('<h5>{0} contribution</h5>\n'.format(heading.capitalize()))
sub_list=[f for f in plot_list if ((heading in f) and (region in f))]
for fig in sub_list:
html_text.append('<p><a href="{0}" target="_blank" alt={0}><img src="{0}" width="647" alt="{0}"></a></p>\n'.format(fig))
html_text.append('</section>\n')
html_text.append('</section>\n')
html_text.append('</body>\n</html>\n')
filename=wk_dir+"/index.html"
with open(filename,"w") as html_page:
html_page.writelines(html_text)
return
if __name__ == '__main__':
parser=argparse.ArgumentParser(description='Process model '
'precipitation datasets and compare them with each other and with an '
'observational dataset.')
parser.add_argument('model_dir', help='model directory')
parser.add_argument('wk_dir', help='output directory')
parser.add_argument('--obs_dir', help='observations directory', default=None, required=False)
parser.add_argument('--config', help='configuration file', default=None, required=False)
args=parser.parse_args()
model_dir=args.model_dir
obs_dir=args.obs_dir
wk_dir=args.wk_dir
config=args.config
if obs_dir=="None": obs_dir=None
main(model_dir, obs_dir, wk_dir, config_path=config, settings=None)
| nick-klingaman/ASoP | ASoP-Spectral/ASoP1_Spectral/ASoP1_spectral_cmec_workflow.py | Python | apache-2.0 | 25,029 | [
"NetCDF"
] | 08bb1a7d1abf95fc299ad193d304eda938cbc611f5894cc731facb92d18aa571 |
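The workflow above assembles its report by appending HTML fragments to a list and writing them out in one pass with `writelines`. A minimal, self-contained sketch of that pattern — the figure name and section title here are illustrative placeholders, not part of the original script:

```python
import os
import tempfile

def build_index(fig_list, title="Histogram Metric Maps"):
    # Collect HTML fragments in a list, mirroring the html_text pattern above.
    html = ['<html>\n<body>\n', '<h3>{0}</h3>\n'.format(title)]
    for fig in fig_list:
        html.append('<p><a href="{0}" target="_blank"><img src="{0}" '
                    'width="647" alt="{0}"></a></p>\n'.format(fig))
    html.append('</body>\n</html>\n')
    return html

# Write the page exactly as the workflow does: one writelines() call.
wk_dir = tempfile.mkdtemp()
filename = os.path.join(wk_dir, "index.html")
with open(filename, "w") as html_page:
    html_page.writelines(build_index(["histogram_metric_map.png"]))
```

Building the page as a list of fragments keeps each section's markup local to the loop that produces it, and defers all I/O to a single write at the end.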
"""
Put Canvas specific configuration here.
Note that this project has three different configuration files:
/var/canvas/website/settings.py
That is where we keep all Django specific settings.
/var/canvas/common/configuration.py
That is where we keep AWS and infrastructure related settings. Note that this one is outside the /website
package, which means that you need some pythonpath magic to use it inside canvas
/var/canvas/website/canvas/knobs.py
That is where we keep static vars that you can use around Canvas.
"""
from drawquest.knobs import *
# How many times a user gets to sticker before he or she is shown a sticker prompt.
LOGGED_OUT_STICKER_LIMIT = 4
EPIC_STICKER_COST_THRESHOLD = 5
# This allows you to override the default template filename for specific notifications.
OVERRIDE_NOTIFICATION_TEMPLATE = {
"EmailChannel": {
"newsletter": {
"text": "email/canvas_shutdown.django.txt",
"body": "email/canvas_shutdown.django.html",
"subject": "email/canvas_shutdown_subject.django.txt"
}
}
}
FLAG_RATE_LIMITS = {
'm': (15, 2*60,),
'h': (50, 60*60,),
}
# The number of (#1) stickers users get when they visit every day. This is a retention award.
DAILY_FREE_STICKERS = 3
SIGNUP_FREE_STICKERS = 10
# The number of stickers required to reach each level.
STICKER_SCHEDULE = [5,10,15,20,25,30,40,50,60,70,80,90,100]
# The award (in #1 stickers) a user gets when she achieves a level.
STICKER_REWARDS = [3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5]
TWENTYFOUR_HOUR_EMAIL_COMMENT_COUNT = 9
TAGLINE = 'Share and play with images!'
FACEBOOK_SHARE_IMAGE_TYPE = ['small_column', 'stream', 'thumbnail']
# The max filesize in KB before we try the next smaller image type, from the list above.
FACEBOOK_SHARE_IMAGE_SIZE_CUTOFF = 60
VIEW_THREAD_PAGE_NUM_TOP = 8
COMMENTS_PER_PAGE = 50
DEFAULT_FOOTER_STICKER = 'smiley'
POST_TEXT_TRUNCATION_LENGTH = 140
FOOTER_UPDATE_ATTEMPTS = 3
POST_TITLE_MAX_LENGTH = 140
STICKER_MESSAGE_MAX_LENGTH = 140
# How many points given for one of your posts being remixed.
REMIX_POINTS = 1
PUBLIC_API_RATE_LIMIT = 1000
PUBLIC_API_MAX_ITEMS = 100
PUBLIC_API_PAGINATION_SIZE = 100
FOLLOWING_MENU_ROWS = 15
FOLLOWING_MENU_COLUMNS = FOLLOWING_MENU_ROWS * 4
REMIX_IMAGES_STAFF_PICKS = [
# This is the abcde from http://example.com/p/abcde
'2hdv9',
'1km2r',
'2ypcj',
'1f1a9',
'25gna',
'1umn4',
'222zn',
'8wfp8',
'89bkc',
'qix8v',
'lakze',
'4uqym',
'4luij',
'42k6w',
'awg15',
'ocmpt',
'pkztj',
'2f6zm',
'21ypq',
'1ese3',
'221qd',
'1i8xo',
'6v79z',
'78ykf',
'u2zw9',
'qydyh',
'tif0q',
'rc328',
'piusb',
]
FEED_PROMOTION_STICKER_COST_THRESHOLD = 5
FEED_ITEMS_PER_PAGE = 50
FOLLOWED_TAGS_SHOWN = 100
FOLLOWED_TAGS_REALTIME_THRESHOLD = 10
ACTIVITY_STREAM_PER_PAGE = 20
SUGGESTED_USERS = [
'Enin',
'Tati5001',
'calhaus',
'RedmonBray',
'Jiakko',
'CyberTaco',
'Harbltron',
'lollajames',
'TmsT',
'Sunset',
'Xeno_Mezphy',
'AngelOsario',
'ravenunknown',
'abeeiamnot',
'Coutoon',
'nicepunk',
'GrogMalBlood',
'ZombieLincolnFP',
'TrueBlue',
'mradmack',
'jerm',
'the7thcolumn',
'BrettZki',
'francesco9001',
'sanamkan',
'Grga',
'nsbarr',
'dmauro',
'moobraz',
'dagfooyo',
'echapa',
'bhudapop',
'ChasM',
'metaknight',
'Photocopier',
'lukebn',
'Zoucas',
'AvengerOfBoredom',
'mikshaw',
'Anominous',
]
SUGGESTED_TOPICS = [
'abstract',
'art',
'canvas',
'cats',
'challenges',
'cute',
'drawing',
'exploitable',
'funny',
'games',
'gif_bin',
'glitch_art',
'photography',
'pop_culture',
'request',
'video_games',
]
OFFLINE_SUGGESTED_TOPICS = [
'photography',
'drawing',
'abstract',
'cute',
'challenges',
]
SUGGESTED_TOPIC_PREVIEWS = {
"abstract" : "cd12831f5c633ed00c4f483dc3006eb3c0cca345",
"art" : "bd457cc102df633df440c96dc2aaae107de3979a",
"canvas" : "41eb1025e73b62b297e48e7736098457da32d16c",
"cats" : "5c4279694ef21e9be365d6f9d7f6900e48edaba6",
"challenges" : "c28e1df3b622ec88203949620b23b82eeacfa6e5",
"cute" : "dd2871c89dec7e589425bdfc8b6de1e4b8eafa75",
"drawing" : "eddd46ab6992e867a7f45f3e56aa9e95122ae419",
"exploitable" : "853e684737772002f3dc99a628b14a60db133fa6",
"funny" : "9823b39e77698f7371071310094567d4542e82d0",
"games" : "5be3b62cae5538e5457bc24574849af46c02a009",
"gif_bin" : "14aba9e1d8a126a7dd2bfad5c9fbc803e0d314c6",
"glitch_art" : "bbf5af5e5580dbfb7db2bc73c5ae1172ad281a19",
"photography" : "b28d0a7931c11cc5909f05c1bf5e7368ea1bfb32",
"pop_culture" : "0d04b9d7ae641a31ea12e50b98e156912f2ad5ef",
"request" : "299071ee0d48065c76bd940caa252680d210183f",
"video_games" : "91096f74bc169f67c8c62279103eebf73babad0b",
}
SUGGESTED_USERS_TO_FOLLOW_COUNT = 3
| drawquest/drawquest-web | website/canvas/knobs.py | Python | bsd-3-clause | 5,123 | [
"VisIt"
] | 4699092ae353005e6b7dfb43dbd058ddccda13e167c3dda3fdfb2c67b343dc07 |
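`STICKER_SCHEDULE` above lists the sticker counts tied to each successive level, with `STICKER_REWARDS` giving the per-level award. A hedged sketch of how such a schedule might be consumed, reading the entries as incremental requirements — `level_for_stickers` is a hypothetical helper, not part of knobs.py:

```python
# Same values as the knob above; copied here so the sketch is self-contained.
STICKER_SCHEDULE = [5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100]

def level_for_stickers(total):
    # Hypothetical helper: walk the schedule, accumulating the cumulative
    # requirement, and stop at the first level the user cannot yet reach.
    level = 0
    needed = 0
    for step in STICKER_SCHEDULE:
        needed += step
        if total < needed:
            break
        level += 1
    return level
```

Under this reading, level 1 costs 5 stickers, level 2 costs 15 cumulative (5 + 10), and the cap is `len(STICKER_SCHEDULE)` levels.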
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2007 Donald N. Allingham
# Copyright (C) 2009 Brian G. Matherly
# Copyright (C) 2009 Gary Burton
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""
Provide the management of databases. This includes opening, renaming,
creating, and deleting of databases.
"""
#-------------------------------------------------------------------------
#
# Standard python modules
#
#-------------------------------------------------------------------------
import os
import time
import copy
import subprocess
from urllib.parse import urlparse
import logging
import re
#-------------------------------------------------------------------------
#
# GTK/Gnome modules
#
#-------------------------------------------------------------------------
from gi.repository import Gdk
from gi.repository import Gtk
from gi.repository import Pango
#-------------------------------------------------------------------------
#
# gramps modules
#
#-------------------------------------------------------------------------
from .display import display_help
from gramps.gen.const import URL_WIKISTRING, URL_MANUAL_PAGE
from .user import User
from .dialog import ErrorDialog, QuestionDialog, QuestionDialog2, ICON
from .pluginmanager import GuiPluginManager
from gramps.cli.clidbman import CLIDbManager, NAME_FILE, time_val, UNAVAILABLE
from .managedwindow import ManagedWindow
from .ddtargets import DdTargets
from gramps.gen.recentfiles import rename_filename, remove_filename
from .glade import Glade
from gramps.gen.db.exceptions import DbException
from gramps.gen.db.utils import make_database, open_database
from gramps.gen.config import config
from .listmodel import ListModel
from gramps.gen.constfunc import win
from gramps.gen.plug import BasePluginManager
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
#-------------------------------------------------------------------------
#
# set up logging
#
#-------------------------------------------------------------------------
LOG = logging.getLogger(".DbManager")
#-------------------------------------------------------------------------
#
# constants
#
#-------------------------------------------------------------------------
if win():
_RCS_FOUND = os.system("rcs -V >nul 2>nul") == 0
if _RCS_FOUND and "TZ" not in os.environ:
# RCS requires the "TZ" variable be set.
os.environ["TZ"] = str(time.timezone)
else:
_RCS_FOUND = os.system("rcs -V >/dev/null 2>/dev/null") == 0
_RETURN = Gdk.keyval_from_name("Return")
_KP_ENTER = Gdk.keyval_from_name("KP_Enter")
WIKI_HELP_PAGE = _('%s_-_Manage_Family_Trees') % URL_MANUAL_PAGE
WIKI_HELP_SEC = _('Family_Trees_manager_window')
ARCHIVE = "rev.gramps"
ARCHIVE_V = "rev.gramps,v"
NAME_COL = 0
PATH_COL = 1
FILE_COL = 2
DATE_COL = 3
DSORT_COL = 4
OPEN_COL = 5
ICON_COL = 6
BACKEND_COL = 7
RCS_BUTTON = {True : _('_Extract'), False : _('_Archive')}
class Information(ManagedWindow):
def __init__(self, uistate, data, track):
super().__init__(uistate, track, self, modal=True)
self.window = Gtk.Dialog()
self.set_window(self.window, None, _("Database Information"))
self.setup_configs('interface.information', 600, 400)
self.ok = self.window.add_button(_('_OK'), Gtk.ResponseType.OK)
self.ok.connect('clicked', self.on_ok_clicked)
s = Gtk.ScrolledWindow()
titles = [
(_('Setting'), 0, 150),
(_('Value'), 1, 400)
]
treeview = Gtk.TreeView()
model = ListModel(treeview, titles)
for key, value in sorted(data.items()):
model.add((key, str(value),), key)
s.add(treeview)
self.window.vbox.pack_start(s, True, True, 0)
self.show()
def on_ok_clicked(self, obj):
self.window.close()
def build_menu_names(self, obj):
return (_('Database Information'), None)
class DbManager(CLIDbManager, ManagedWindow):
"""
Database Manager. Opens a database manager window that allows users to
create, rename, delete and open databases.
"""
ICON_MAP = {
CLIDbManager.ICON_NONE : None,
CLIDbManager.ICON_RECOVERY : 'dialog-error',
CLIDbManager.ICON_LOCK : 'gramps-lock',
CLIDbManager.ICON_OPEN : 'document-open',
}
BUSY_CURSOR = Gdk.Cursor.new_for_display(Gdk.Display.get_default(),
Gdk.CursorType.WATCH)
def __init__(self, uistate, dbstate, viewmanager, parent=None):
"""
Create the top level window from the glade description, and extracts
the GTK widgets that are needed.
"""
window_id = self
ManagedWindow.__init__(self, uistate, [], window_id, modal=True)
CLIDbManager.__init__(self, dbstate)
self.glade = Glade(toplevel='dbmanager')
self.top = self.glade.toplevel
self.set_window(self.top, None, None)
self.setup_configs('interface.dbmanager', 780, 350)
self.viewmanager = viewmanager
for attr in ['connect_btn', 'cancel_btn', 'new_btn', 'remove_btn',
'info_btn', 'dblist', 'rename_btn', 'convert_btn',
'repair_btn', 'rcs_btn', 'msg', 'close_btn']:
setattr(self, attr, self.glade.get_object(attr))
self.model = None
self.column = None
self.lock_file = None
self.data_to_delete = None
self.selection = self.dblist.get_selection()
# For already loaded database:
self._current_node = None
self.__connect_signals()
self.__build_interface()
self._populate_model()
self.before_change = ""
self.after_change = ""
self._select_default()
self.user = User(error=ErrorDialog, parent=parent,
callback=self.uistate.pulse_progressbar,
uistate=self.uistate)
def build_menu_names(self, obj):
''' This window can have children, but they are modal so no submenu
is visible'''
submenu_label = " "
menu_label = _('Family Trees')
return (menu_label, submenu_label)
def _select_default(self):
"""
Select the current, or latest, tree.
"""
# If already loaded database, center on it:
if self._current_node:
store, node = self.selection.get_selected()
tree_path = store.get_path(self._current_node)
self.selection.select_path(tree_path)
self.dblist.scroll_to_cell(tree_path, None, 1, 0.5, 0)
def __connect_signals(self):
"""
Connects the signals to the buttons on the interface.
"""
ddtarget = DdTargets.URI_LIST
self.top.drag_dest_set(Gtk.DestDefaults.ALL,
[DdTargets.URI_LIST.target()],
Gdk.DragAction.COPY)
self.remove_btn.connect('clicked', self.__remove_db)
self.new_btn.connect('clicked', self.__new_db)
self.rename_btn.connect('clicked', self.__rename_db)
self.convert_btn.connect('clicked', self.__convert_db_ask)
self.info_btn.connect('clicked', self.__info_db)
self.close_btn.connect('clicked', self.__close_db)
self.repair_btn.connect('clicked', self.__repair_db)
self.selection.connect('changed', self.__selection_changed)
self.dblist.connect('button-press-event', self.__button_press)
self.dblist.connect('key-press-event', self.__key_press)
self.top.connect('drag_data_received', self.__drag_data_received)
self.top.connect('drag_motion', drag_motion)
self.top.connect('drag_drop', drop_cb)
self.define_help_button(
self.glade.get_object('help_btn'), WIKI_HELP_PAGE, WIKI_HELP_SEC)
if _RCS_FOUND:
self.rcs_btn.connect('clicked', self.__rcs)
def define_help_button(self, button, webpage='', section=''):
button.connect('clicked', lambda x: display_help(webpage, section))
def __button_press(self, obj, event):
"""
Checks for a double click event. In the tree view, we want to
treat a double click as if it was OK button press. However, we have
to make sure that an item was selected first.
"""
if (event.type == Gdk.EventType.DOUBLE_BUTTON_PRESS
and event.button == 1):
if self.connect_btn.get_property('sensitive'):
self.top.response(Gtk.ResponseType.OK)
return True
return False
def __key_press(self, obj, event):
"""
Grab ENTER so it does not start editing the cell, but behaves
like double click instead
"""
if event.keyval in (_RETURN, _KP_ENTER):
if self.connect_btn.get_property('sensitive'):
self.top.response(Gtk.ResponseType.OK)
return True
return False
def __selection_changed(self, selection):
"""
Called when the selection is changed in the TreeView.
"""
self.__update_buttons(selection)
def __update_buttons(self, selection):
"""
What we are trying to detect is the selection or unselection of a row.
When a row is unselected, the Open, Rename, and Remove buttons
are set insensitive. If a row is selected, the Rename and Remove
buttons are enabled, and the Open button is disabled if the
row represents an already open database.
"""
# Get the current selection
store, node = selection.get_selected()
if not _RCS_FOUND: # the rcs command is not available
self.rcs_btn.set_visible(False)
# if nothing is selected
if not node:
self.connect_btn.set_sensitive(False)
self.rename_btn.set_sensitive(False)
self.convert_btn.set_sensitive(False)
self.info_btn.set_sensitive(False)
self.close_btn.set_sensitive(False)
self.rcs_btn.set_sensitive(False)
self.repair_btn.set_sensitive(False)
self.remove_btn.set_sensitive(False)
return
path = self.model.get_path(node)
if path is None:
return
is_rev = len(path.get_indices()) > 1
self.rcs_btn.set_label(RCS_BUTTON[is_rev])
if store.get_value(node, ICON_COL) == 'document-open':
self.close_btn.set_sensitive(True)
self.convert_btn.set_sensitive(False)
self.connect_btn.set_sensitive(False)
if _RCS_FOUND:
self.rcs_btn.set_sensitive(True)
elif store.get_value(node, BACKEND_COL) == UNAVAILABLE:
self.close_btn.set_sensitive(False)
self.convert_btn.set_sensitive(False)
self.connect_btn.set_sensitive(False)
self.rcs_btn.set_sensitive(False)
self.repair_btn.set_sensitive(False)
else:
self.close_btn.set_sensitive(False)
dbid = config.get('database.backend')
backend_type = self.get_backend_name_from_dbid(dbid)
if (store.get_value(node, ICON_COL) in [None, ""] and
store.get_value(node, BACKEND_COL) != backend_type):
self.convert_btn.set_sensitive(True)
else:
self.convert_btn.set_sensitive(False)
self.connect_btn.set_sensitive(not is_rev)
if _RCS_FOUND and is_rev:
self.rcs_btn.set_sensitive(True)
else:
self.rcs_btn.set_sensitive(False)
if store.get_value(node, ICON_COL) == 'dialog-error':
path = store.get_value(node, PATH_COL)
backup = os.path.join(path, "person.gbkp")
self.repair_btn.set_sensitive(os.path.isfile(backup))
else:
self.repair_btn.set_sensitive(False)
self.rename_btn.set_sensitive(True)
self.info_btn.set_sensitive(True)
self.remove_btn.set_sensitive(True)
self.new_btn.set_sensitive(True)
def __build_interface(self):
"""
Builds the columns for the TreeView. The columns are:
Icon, Database Name, Last Modified, Backend Type
The Icon column gets its data from column 6 of the database model.
It is expecting either None, or a GTK stock icon name
The Database Name column is an editable column. We connect to the
'edited' signal, so that we can change the name when the user changes
the column.
The last accessed column simply displays the last time the family
tree was opened.
The Backend Type column is a string based on database backend.
"""
# Put some help on the buttons:
dbid = config.get('database.backend')
backend_type = self.get_backend_name_from_dbid(dbid)
if backend_type == UNAVAILABLE:
dbid = 'sqlite'
config.set('database.backend', dbid)
backend_type = self.get_backend_name_from_dbid(dbid)
self.new_btn.set_tooltip_text(backend_type)
# build the database name column
render = Gtk.CellRendererText()
render.set_property('ellipsize', Pango.EllipsizeMode.END)
render.connect('edited', self.__change_name)
render.connect('editing-canceled', self.__stop_edit)
render.connect('editing-started', self.__start_edit)
self.column = Gtk.TreeViewColumn(_('Family Tree name'), render,
text=NAME_COL)
self.column.set_sort_column_id(NAME_COL)
self.column.set_sort_indicator(True)
self.column.set_resizable(True)
self.column.set_min_width(250)
self.dblist.append_column(self.column)
self.name_renderer = render
# build the icon column
render = Gtk.CellRendererPixbuf()
#icon_column = Gtk.TreeViewColumn(_('Status'), render,
#icon_name=ICON_COL)
icon_column = Gtk.TreeViewColumn(_('Status'), render)
icon_column.set_cell_data_func(render, bug_fix)
icon_column.set_sort_column_id(ICON_COL)
self.dblist.append_column(icon_column)
# build the backend column
render = Gtk.CellRendererText()
column = Gtk.TreeViewColumn(_('Database Type'), render,
text=BACKEND_COL)
column.set_sort_column_id(BACKEND_COL)
column.set_sort_indicator(True)
column.set_resizable(True)
self.dblist.append_column(column)
# build the last accessed column
render = Gtk.CellRendererText()
column = Gtk.TreeViewColumn(_('Last accessed'), render, text=DATE_COL)
column.set_sort_column_id(DSORT_COL)
self.dblist.append_column(column)
def __populate(self):
"""
Builds the data and the display model.
"""
self._populate_cli()
self._populate_model()
def _populate_model(self):
"""
Builds the display model.
"""
self.model = Gtk.TreeStore(str, str, str, str, int, bool, str, str)
#use current names to set up the model
self._current_node = None
last_accessed_node = None
last_accessed = 0
for items in self.current_names:
data = list(items[:8])
backend_type = self.get_backend_name_from_dbid(data[BACKEND_COL])
node = self.model.append(None, data[:-1] + [backend_type])
# For already loaded database, set current_node:
if self.dbstate.is_open() and \
self.dbstate.db.get_save_path() == data[1]:
self._current_node = node
if data[DSORT_COL] > last_accessed:
last_accessed = data[DSORT_COL]
last_accessed_node = node
for rdata in find_revisions(os.path.join(items[1], ARCHIVE_V)):
data = [rdata[2], rdata[0], items[1], rdata[1], 0, False, "",
backend_type]
self.model.append(node, data)
if self._current_node is None:
self._current_node = last_accessed_node
self.model.set_sort_column_id(NAME_COL, Gtk.SortType.ASCENDING)
self.dblist.set_model(self.model)
def existing_name(self, name, skippath=None):
"""
Return true if a name is present in the model already.
If skippath given, the name of skippath is not considered
"""
iter = self.model.get_iter_first()
while iter:
path = self.model.get_path(iter)
if path == skippath:
pass
else:
itername = self.model.get_value(iter, NAME_COL)
if itername.strip() == name.strip():
return True
iter = self.model.iter_next(iter)
return False
def run(self):
"""
Runs the dialog, returning None if nothing has been chosen,
or the path and name if something has been selected
"""
self.show()
self.__update_buttons(self.selection)
while True:
value = self.top.run()
if value == Gtk.ResponseType.OK:
store, node = self.selection.get_selected()
# don't open a locked file
if store.get_value(node, ICON_COL) == 'gramps-lock':
self.__ask_to_break_lock(store, node)
continue
# don't open a version
if len(store.get_path(node).get_indices()) > 1:
continue
if node:
del self.selection
del self.name_renderer
self.close()
path = store.get_value(node, PATH_COL)
return (path, store.get_value(node, NAME_COL))
else:
del self.selection
del self.name_renderer
if value != Gtk.ResponseType.DELETE_EVENT:
self.close()
return None
def __ask_to_break_lock(self, store, node):
"""
Prompts the user for permission to break the lock file that another
process has set on the file.
"""
path = store.get_path(node)
self.lock_file = store[path][PATH_COL]
QuestionDialog(
_("Break the lock on the '%s' database?") % store[path][0],
_("Gramps believes that someone else is actively editing "
"this database. You cannot edit this database while it "
"is locked. If no one is editing the database you may "
"safely break the lock. However, if someone else is editing "
"the database and you break the lock, you may corrupt the "
"database."),
_("Break lock"),
self.__really_break_lock, parent=self.top)
def __really_break_lock(self):
"""
Deletes the lock file associated with the selected database,
then updates the display appropriately.
"""
try:
self.break_lock(self.lock_file)
store, node = self.selection.get_selected()
dbpath = store.get_value(node, PATH_COL)
(tval, last) = time_val(dbpath)
store.set_value(node, OPEN_COL, 0)
store.set_value(node, ICON_COL, "") # see bug_fix
store.set_value(node, DATE_COL, last)
store.set_value(node, DSORT_COL, tval)
except IOError:
return
def __stop_edit(self, *args):
self.name_renderer.set_property('editable', False)
self.__update_buttons(self.selection)
def __start_edit(self, *args):
"""
Do not allow clicking Load while the name is being changed, forcing
users to finish renaming. This works around the fact that clicking a
button sends an 'editing-canceled' signal, losing the new name.
"""
self.connect_btn.set_sensitive(False)
self.rename_btn.set_sensitive(False)
self.convert_btn.set_sensitive(False)
self.info_btn.set_sensitive(False)
self.rcs_btn.set_sensitive(False)
self.repair_btn.set_sensitive(False)
self.remove_btn.set_sensitive(False)
self.new_btn.set_sensitive(False)
def __change_name(self, renderer_sel, path, new_text):
"""
Change the name of the database. This is a callback from the
column, which has been marked as editable.
If the new string is empty, do nothing. Otherwise, renaming the
database is simply changing the contents of the name file.
"""
# Remove special characters so the name can be used as a file name in backups.
new_text = re.sub(r"[':<>|,;=\"\[\]\.\+\*\/\?\\]", "_", new_text)
#path is a string, convert to TreePath first
path = Gtk.TreePath(path=path)
if len(new_text) > 0:
node = self.model.get_iter(path)
old_text = self.model.get_value(node, NAME_COL)
if self.model.get_value(node, ICON_COL) == 'document-open':
# this database is loaded. We must change the title
# in case we change the name several times before quitting,
# we save the first old name.
if self.before_change == "":
self.before_change = old_text
self.after_change = new_text
if not old_text.strip() == new_text.strip():
if len(path.get_indices()) > 1:
self.__rename_revision(path, new_text)
else:
self.__rename_database(path, new_text)
self.name_renderer.set_property('editable', False)
self.__update_buttons(self.selection)
def __rename_revision(self, path, new_text):
"""
Renames the RCS revision using the rcs command. The rcs command
is in the format of:
rcs -mREV:NEW_NAME archive
"""
node = self.model.get_iter(path)
db_dir = self.model.get_value(node, FILE_COL)
rev = self.model.get_value(node, PATH_COL)
archive = os.path.join(db_dir, ARCHIVE_V)
cmd = ["rcs", "-x,v", "-m%s:%s" % (rev, new_text), archive]
# Open stderr in text mode so readlines() yields str, not bytes.
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True)
status = proc.wait()
message = "\n".join(proc.stderr.readlines())
proc.stderr.close()
del proc
if status != 0:
ErrorDialog(_("Rename failed"),
_("An attempt to rename a version failed "
"with the following message:\n\n%s") % message,
parent=self.top)
else:
self.model.set_value(node, NAME_COL, new_text)
#scroll to new position
store, node = self.selection.get_selected()
tree_path = store.get_path(node)
self.dblist.scroll_to_cell(tree_path, None, False, 0.5, 0.5)
def __rename_database(self, path, new_text):
"""
Renames the database by writing the new value to the name.txt file
"""
new_text = new_text.strip()
node = self.model.get_iter(path)
filename = self.model.get_value(node, FILE_COL)
if self.existing_name(new_text, skippath=path):
ErrorDialog(_("Could not rename the Family Tree."),
_("Family Tree already exists, choose a unique name."),
parent=self.top)
return
old_text, new_text = self.rename_database(filename, new_text)
if old_text is not None:
rename_filename(old_text, new_text)
self.model.set_value(node, NAME_COL, new_text)
#scroll to new position
store, node = self.selection.get_selected()
tree_path = store.get_path(node)
self.dblist.scroll_to_cell(tree_path, None, False, 0.5, 0.5)
def __rcs(self, obj):
"""
Callback for the RCS button. If the tree path is > 1, then we are
on an RCS revision, in which case we can check out. If not, then
we can only check in.
"""
store, node = self.selection.get_selected()
tree_path = store.get_path(node)
if len(tree_path.get_indices()) > 1:
parent_node = store.get_iter((tree_path[0],))
parent_name = store.get_value(parent_node, NAME_COL)
name = store.get_value(node, NAME_COL)
revision = store.get_value(node, PATH_COL)
db_path = store.get_value(node, FILE_COL)
self.__checkout_copy(parent_name, name, revision, db_path)
else:
base_path = self.dbstate.db.get_save_path()
archive = os.path.join(base_path, ARCHIVE)
_check_in(self.dbstate.db, archive, self.user,
self.__start_cursor, parent=self.window)
self.__end_cursor()
self.__populate()
self._select_default()
def __checkout_copy(self, parent_name, name, revision, db_path):
"""
Create a new database, then extracts a revision from RCS and
imports it into the db
"""
dbid = config.get('database.backend')
new_path, newname = self._create_new_db("%s : %s" % (parent_name, name),
dbid=dbid)
self.__start_cursor(_("Extracting archive..."))
dbase = make_database(dbid)
dbase.load(new_path)
self.__start_cursor(_("Importing archive..."))
check_out(dbase, revision, db_path, self.user)
self.__end_cursor()
dbase.close(user=self.user)
def __remove_db(self, obj):
"""
Callback associated with the Remove button. Get the selected
row and data, then call the verification dialog.
"""
store, node = self.selection.get_selected()
path = store.get_path(node)
self.data_to_delete = store[path]
if len(path.get_indices()) == 1:
QuestionDialog(
_("Remove the '%s' Family Tree?") % self.data_to_delete[0],
_("Removing this Family Tree will permanently destroy "
"the data."),
_("Remove Family Tree"),
self.__really_delete_db, parent=self.top)
else:
rev = self.data_to_delete[0]
parent = store[(path[0],)][0]
QuestionDialog(_("Remove the '%(revision)s' version "
"of '%(database)s'"
) % {'revision' : rev,
'database' : parent},
_("Removing this version will prevent you from "
"extracting it in the future."),
_("Remove version"),
self.__really_delete_version, parent=self.top)
def __really_delete_db(self):
"""
Delete the selected database. If the database is open, close it first.
Then scan the database directory, deleting the files, and finally
removing the directory.
"""
# close the database if the user has requested to delete the
# active database
if self.data_to_delete[PATH_COL] == self.active:
self.uistate.viewmanager.close_database()
store, node = self.selection.get_selected()
path = store.get_path(node)
node = self.model.get_iter(path)
filename = self.model.get_value(node, FILE_COL)
try:
with open(filename, "r", encoding='utf-8') as name_file:
file_name_to_delete = name_file.read()
remove_filename(file_name_to_delete)
directory = self.data_to_delete[1]
for (top, dirs, files) in os.walk(directory):
for filename in files:
os.unlink(os.path.join(top, filename))
os.rmdir(directory)
except (IOError, OSError) as msg:
ErrorDialog(_("Could not delete Family Tree"),
str(msg),
parent=self.top)
# rebuild the display
self.__populate()
self._select_default()
def __really_delete_version(self):
"""
Delete the selected revision from the RCS archive by running the
rcs command with the -o (outdate) option, then rebuild the display.
"""
db_dir = self.data_to_delete[FILE_COL]
rev = self.data_to_delete[PATH_COL]
archive = os.path.join(db_dir, ARCHIVE_V)
cmd = ["rcs", "-x,v", "-o%s" % rev, "-q", archive]
# Open stderr in text mode so readlines() yields str, not bytes.
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True)
status = proc.wait()
message = "\n".join(proc.stderr.readlines())
proc.stderr.close()
del proc
if status != 0:
ErrorDialog(_("Deletion failed"),
_("An attempt to delete a version failed "
"with the following message:\n\n%s") % message,
parent=self.top)
# rebuild the display
self.__populate()
self._select_default()
def __convert_db_ask(self, obj):
"""
Ask to convert a closed family tree into the default database backend.
"""
store, node = self.selection.get_selected()
name = store[node][0]
dirname = store[node][1]
dbid = config.get('database.backend')
backend_type = self.get_backend_name_from_dbid(dbid)
QuestionDialog(
_("Convert the '%s' database?") % name,
_("Do you wish to convert this family tree into a "
"%(database_type)s database?") % {'database_type': backend_type},
_("Convert"),
lambda: self.__convert_db(name, dirname), parent=self.top)
def __convert_db(self, name, dirname):
"""
Actually convert the family tree into the default database backend.
"""
try:
db = open_database(name)
except Exception:
ErrorDialog(_("Opening the '%s' database") % name,
_("An attempt to convert the database failed. "
"Perhaps it needs updating."), parent=self.top)
return
plugin_manager = GuiPluginManager.get_instance()
export_function = None
for plugin in plugin_manager.get_export_plugins():
if plugin.get_extension() == "gramps":
export_function = plugin.get_export_function()
break
## Next, get an XML dump:
if export_function is None:
ErrorDialog(_("Converting the '%s' database") % name,
_("An attempt to export the database failed."),
parent=self.top)
db.close(user=self.user)
return
self.__start_cursor(_("Converting data..."))
xml_file = os.path.join(dirname, "backup.gramps")
export_function(db, xml_file, self.user)
db.close(user=self.user)
count = 1
new_text = "%s %s" % (name, _("(Converted #%d)") % count)
while self.existing_name(new_text):
count += 1
new_text = "%s %s" % (name, _("(Converted #%d)") % count)
dbid = config.get('database.backend')
new_path, newname = self._create_new_db(new_text, dbid=dbid,
edit_entry=False)
## Create a new database of correct type:
dbase = make_database(dbid)
dbase.load(new_path)
## import from XML
import_function = None
for plugin in plugin_manager.get_import_plugins():
if plugin.get_extension() == "gramps":
import_function = plugin.get_import_function()
if import_function is None:
ErrorDialog(_("Converting the '%s' database") % name,
_("An attempt to import into the database failed."),
parent=self.top)
else:
import_function(dbase, xml_file, self.user)
self.__end_cursor()
dbase.close(user=self.user)
self.__populate()
self._select_default()
def __rename_db(self, obj):
"""
Start the rename process by calling the start_editing option on
the line with the cursor.
"""
store, node = self.selection.get_selected()
path = self.model.get_path(node)
self.name_renderer.set_property('editable', True)
self.dblist.set_cursor(path, self.column, True)
def __close_db(self, obj):
"""
Close the database. Set the displayed line correctly, set the dbstate to
no_database, update the sensitivity of the buttons in this dialogue box
and get viewmanager to manage the main window and pluggable views.
"""
store, node = self.selection.get_selected()
dbpath = store.get_value(node, PATH_COL)
(tval, last) = time_val(dbpath)
store.set_value(node, OPEN_COL, 0)
store.set_value(node, ICON_COL, "") # see bug_fix
store.set_value(node, DATE_COL, last)
store.set_value(node, DSORT_COL, tval)
self.dbstate.no_database()
self.__update_buttons(self.selection)
self.viewmanager.post_close_db()
def __info_db(self, obj):
"""
Show info on this database.
"""
store, node = self.selection.get_selected()
name = store[node][0]
dirname = store[node][1]
# if this is open, get info from there, otherwise, temp open?
summary = self.get_dbdir_summary(dirname, name)
Information(self.uistate, summary, track=self.track)
def __repair_db(self, obj):
"""
Start the repair process for the Family Tree on the line
with the cursor, after asking the user for confirmation.
"""
store, node = self.selection.get_selected()
dirname = store[node][1]
# First ask the user if they are really sure :-)
yes_no = QuestionDialog2(
_("Repair Family Tree?"),
_("If you click %(bold_start)sProceed%(bold_end)s, Gramps will "
"attempt to recover your Family Tree from the last good "
"backup. There are several ways this can cause unwanted "
"effects, so %(bold_start)sbackup%(bold_end)s the "
"Family Tree first.\nThe Family Tree you have selected "
"is stored in %(dirname)s.\n\n"
"Before doing a repair, verify that the Family Tree can "
"really no longer be opened, as the database back-end can "
"recover from some errors automatically.\n\n"
"%(bold_start)sDetails:%(bold_end)s Repairing a Family Tree "
"actually uses the last backup of the Family Tree, which "
"Gramps stored on last use. If you have worked for "
"several hours/days without closing Gramps, then all "
"this information will be lost! If the repair fails, then "
"the original Family Tree will be lost forever, hence "
"a backup is needed. If the repair fails, or too much "
"information is lost, you can fix the original "
"Family Tree manually. For details, see the webpage\n"
"%(gramps_wiki_recover_url)s\n"
"Before doing a repair, try to open the Family Tree "
"in the normal manner. Several errors that trigger the "
"repair button can be fixed automatically. "
"If this is the case, you can disable the repair button "
"by removing the file %(recover_file)s in the "
"Family Tree directory."
) % {'bold_start': '<b>',
'bold_end': '</b>',
'recover_file': '<i>need_recover</i>',
'gramps_wiki_recover_url':
URL_WIKISTRING + 'Recover_corrupted_family_tree',
'dirname': dirname},
_("Proceed, I have taken a backup"),
_("Stop"),
parent=self.top)
prompt = yes_no.run()
if not prompt:
return
opened = store[node][OPEN_COL]
if opened:
self.dbstate.no_database()
# delete files that are not backup files or the .txt file
for filename in os.listdir(dirname):
if os.path.splitext(filename)[1] not in (".gbkp", ".txt"):
fname = os.path.join(dirname, filename)
os.unlink(fname)
dbase = make_database("sqlite")
dbase.load(dirname, None)
self.__start_cursor(_("Rebuilding database from backup files"))
try:
dbase.restore()
except DbException as msg:
ErrorDialog(_("Error restoring backup data"), msg,
parent=self.top)
self.__end_cursor()
dbase.close(user=self.user)
self.dbstate.no_database()
self.__populate()
self._select_default()
def __start_cursor(self, msg):
"""
Set the cursor to the busy state and display the associated
message.
"""
self.msg.set_label(msg)
self.top.get_window().set_cursor(self.BUSY_CURSOR)
while Gtk.events_pending():
Gtk.main_iteration()
def __end_cursor(self):
"""
Set the cursor back to normal and clear the message.
"""
self.top.get_window().set_cursor(None)
self.msg.set_label("")
def __new_db(self, obj):
"""
Callback wrapper around the actual routine that creates the
new database. Catch OSError and IOError and display a warning
message.
"""
self.new_btn.set_sensitive(False)
dbid = config.get('database.backend')
if dbid:
try:
self._create_new_db(dbid=dbid)
except (OSError, IOError) as msg:
ErrorDialog(_("Could not create Family Tree"),
str(msg),
parent=self.top)
self.new_btn.set_sensitive(True)
def _create_new_db(self, title=None, create_db=True, dbid=None,
edit_entry=True):
"""
Create a new database, append to model
"""
new_path, title = self.create_new_db_cli(title, create_db, dbid)
path_name = os.path.join(new_path, NAME_FILE)
(tval, last) = time_val(new_path)
backend_type = self.get_backend_name_from_dbid(dbid)
node = self.model.append(None, [title, new_path, path_name,
last, tval, False, '', backend_type])
self.selection.select_iter(node)
path = self.model.get_path(node)
if edit_entry:
self.name_renderer.set_property('editable', True)
self.dblist.set_cursor(path, self.column, True)
return new_path, title
def __drag_data_received(self, widget, context, xpos, ypos, selection,
info, rtime):
"""
Handle the reception of drag data
"""
drag_value = selection.get_data().decode().strip(' \r\n\x00')
fname = None
type = None
title = None
# Allow any type of URL ("file://", "http://", etc):
if drag_value and urlparse(drag_value).scheme != "":
fname, title = [], []
for treename in [v.strip() for v in drag_value.split("\n")
if v.strip() != '']:
f, t = self.import_new_db(treename, self.user)
fname.append(f)
title.append(t)
return fname, title
def drag_motion(wid, context, xpos, ypos, time_stamp):
"""
DND callback that is called on a DND drag motion begin
"""
Gdk.drag_status(context, Gdk.DragAction.COPY, time_stamp)
return True
def drop_cb(wid, context, xpos, ypos, time_stamp):
"""
DND callback that finishes the DND operation
"""
Gtk.drag_finish(context, True, False, time_stamp)
return True
def find_revisions(name):
"""
Finds all the revisions of the specified RCS archive.
"""
import re
rev = re.compile(r"\s*revision\s+([\d\.]+)")
date = re.compile(r"date:\s+(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d)[-+]\d\d;")
if not os.path.isfile(name) or not _RCS_FOUND:
return []
rlog = ["rlog", "-x,v", "-zLT", name]
proc = subprocess.Popen(rlog, stdout=subprocess.PIPE)
proc.wait()
revlist = []
date_str = ""
rev_str = ""
com_str = ""
get_next = False
if os.path.isfile(name):
for line in proc.stdout:
if not isinstance(line, str):
# we assume utf-8 ...
line = line.decode('utf-8')
match = rev.match(line)
if match:
rev_str = copy.copy(match.groups()[0])
continue
match = date.match(line)
if match:
date_str = time.strftime(
'%x %X', time.strptime(match.groups()[0],
'%Y-%m-%d %H:%M:%S'))
get_next = True
continue
if get_next:
get_next = False
com_str = line.strip()
revlist.append((rev_str, date_str, com_str))
proc.stdout.close()
del proc
return revlist
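The two regular expressions in `find_revisions` can be exercised directly against sample lines in the shape of `rlog` output (the sample lines below are illustrative, not captured from a real archive):

```python
import re

# Same patterns as in find_revisions above
rev = re.compile(r"\s*revision\s+([\d\.]+)")
date = re.compile(r"date:\s+(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d)[-+]\d\d;")

m = rev.match("revision 1.4")
assert m is not None and m.group(1) == "1.4"

# The date pattern requires a trailing timezone offset and semicolon
m = date.match("date: 2015-06-01 12:30:00-05;  author: alice;  state: Exp;")
assert m is not None and m.group(1) == "2015-06-01 12:30:00"
```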
def check_out(dbase, rev, path, user):
"""
Checks out the revision from rcs, and loads the resulting XML file
into the database.
"""
co_cmd = ["co", "-x,v", "-q%s" % rev] + [os.path.join(path, ARCHIVE),
os.path.join(path, ARCHIVE_V)]
proc = subprocess.Popen(co_cmd, stderr=subprocess.PIPE)
status = proc.wait()
message = "\n".join(proc.stderr.readlines())
proc.stderr.close()
del proc
if status != 0:
user.notify_error(
_("Retrieve failed"),
_("An attempt to retrieve the data failed "
"with the following message:\n\n%s") % message
)
return
pmgr = GuiPluginManager.get_instance()
for plugin in pmgr.get_import_plugins():
if plugin.get_extension() == "gramps":
rdr = plugin.get_import_function()
xml_file = os.path.join(path, ARCHIVE)
rdr(dbase, xml_file, user)
os.unlink(xml_file)
def _check_in(dbase, filename, user, cursor_func=None, parent=None):
"""
Checks in the specified file into RCS
"""
init = ["rcs", '-x,v', '-i', '-U', '-q', '-t-"Gramps database"']
ci_cmd = ["ci", '-x,v', "-q", "-f"]
archive_name = filename + ",v"
glade = Glade(toplevel='comment')
top = glade.toplevel
text = glade.get_object('description')
top.set_transient_for(parent)
top.run()
comment = text.get_text()
top.destroy()
if not os.path.isfile(archive_name):
cmd = init + [archive_name]
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE)
status = proc.wait()
message = "\n".join(proc.stderr.readlines())
proc.stderr.close()
del proc
if status != 0:
ErrorDialog(_("Archiving failed"),
_("An attempt to create the archive failed "
"with the following message:\n\n%s") % message,
parent=parent)
if cursor_func:
cursor_func(_("Creating data to be archived..."))
plugin_manager = GuiPluginManager.get_instance()
for plugin in plugin_manager.get_export_plugins():
if plugin.get_extension() == "gramps":
export_function = plugin.get_export_function()
export_function(dbase, filename, user)
if cursor_func:
cursor_func(_("Saving archive..."))
cmd = ci_cmd + ['-m%s' % comment, filename, archive_name]
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE)
status = proc.wait()
message = "\n".join(proc.stderr.readlines())
proc.stderr.close()
del proc
if status != 0:
ErrorDialog(_("Archiving failed"),
_("An attempt to archive the data failed "
"with the following message:\n\n%s") % message,
parent=parent)
def bug_fix(column, renderer, model, iter_, data):
"""
Cell data function to set the status column.
There is a bug in pygobject which prevents us from setting a value to
None using the TreeModel set_value method. Instead we set it to an empty
string and convert it to None here.
"""
icon_name = model.get_value(iter_, ICON_COL)
if icon_name == '':
icon_name = None
renderer.set_property('icon-name', icon_name)
| gramps-project/gramps | gramps/gui/dbman.py | Python | gpl-2.0 | 45,626 | [
"Brian"
] | 498bc381cefe082547efe382877cf295f7936c6fa83d0b8f4aa6f56bc052ca3a |
import errors
import executer
from phpbuiltins import constants
from scope import scope
class PHPFunction():
def __init__(self, name, modifiers, params, body, context = None, filename = None, line_num = 0):
self.name = name
self.modifiers = modifiers
self.params = params
self.body = body
self.context = context
self.filename = filename
self.line_num = line_num
def __repr__(self):
return '<phpfunction %s(%s) defined in %s on line %d>'%(self.name, ', '.join([
'%s%s%s'%('%s '%x[0] if x[0] else '', x[1], ' = %r'%x[2] if len(x) > 2 else '')
for x in self.params
]), self.filename, self.line_num)
def __call__(self, *args, **kwargs):
if 'context' in kwargs:
context = kwargs['context']
else:
context = self.context
caller_filename = kwargs['filename'] if 'filename' in kwargs else None
caller_line_num = kwargs['line_num'] if 'line_num' in kwargs else None
# print "Calling %r with %r"%(self, args)
call_context = scope({
'%func_args' : args,
'__FUNCTION__' : self.name
}, self.context, name='fncall')
executer = call_context['%executer']
arglen = len(args)
for i, par in enumerate(self.params):
# print '\n\n==\n', par, '\n==\n'
if i < arglen:
val = args[i]
elif len(par) > 2:
val = par[2]
else:
val = None
executer.report_error(
constants.E_WARNING,
"Missing argument %d for %s()%s defined in %s on line %d"%(i+1, self.name,
', called in %s on line %d and'%(caller_filename, caller_line_num) if caller_filename is not None and caller_line_num is not None else '',
self.filename, self.line_num)
# "Warning: Missing argument 2 for f(), called in /Users/giovanyvega/langdev/php/test/err_func_missing_arg.php on line 7 and defined in /Users/giovanyvega/langdev/php/test/err_func_missing_arg.php on line 3"
)
# raise errors.ExecuteError("Missing required argument %d for %r"%(i, self))
call_context[par[1]] = val
if self.name == 'library':
print ('='*20 +'\n')*5
print self.body.prepr()
print ('='*20 +'\n')*5
# print executer
# print self.body
# print call_context
try:
return executer.visit(self.body, call_context)
except errors.ReturnError, rerr:
return rerr.retval
# raise errors.ExecuteError("Can't execute yet.")
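The argument-binding loop in `__call__` above (positional args first, then declared defaults, then `None` for anything still missing) can be sketched in isolation. This is a hypothetical Python 3 standalone version that ignores the PHP scope and error-reporting machinery:

```python
def bind_params(params, args):
    # params: tuples of (type_hint, name) or (type_hint, name, default),
    # matching the shape of self.params in PHPFunction above.
    bound = {}
    missing = []          # 1-based indices of arguments with no value
    for i, par in enumerate(params):
        if i < len(args):
            val = args[i]          # positional argument supplied
        elif len(par) > 2:
            val = par[2]           # declared default value
        else:
            val = None             # missing: PHP warns and passes NULL
            missing.append(i + 1)
        bound[par[1]] = val
    return bound, missing

bound, missing = bind_params([(None, 'a'), (None, 'b', 10), (None, 'c')], [1])
assert bound == {'a': 1, 'b': 10, 'c': None}
assert missing == [3]
```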
def is_static(self):
return 'static' in self.modifiers
def bind(self, context):
self.context = context
return self | g-i-o-/pyphp | pyphp/phpfunction.py | Python | mit | 2,442 | [
"VisIt"
] | b6a9de753560206d8a46a7c1a9ce9463170080039d395a30a18c1ac6fb6feceb |
#!/usr/env/bin/python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
from Cython.Distutils import build_ext
import pysam
import numpy
import glob
import os
def two_dot(version):
v = version.split('.')
return '.'.join(v[0:min(3,len(v))])
def get_version():
"""Extract version number from source file."""
from ast import literal_eval
with open('fusorsv/fusion_utils.pyx') as f:
for line in f:
if line.startswith('__version__'):
return literal_eval(line.partition('=')[2].lstrip())
raise ValueError("__version__ not found")
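The parsing step inside `get_version` can be demonstrated on a sample line without any file I/O (the version string below is made up for illustration):

```python
from ast import literal_eval

# Same split-and-eval step as get_version above
line = "__version__ = '0.1.3'\n"
assert line.startswith('__version__')
# partition('=')[2] keeps everything after the first '='; literal_eval
# safely evaluates the quoted string literal (trailing newline included)
assert literal_eval(line.partition('=')[2].lstrip()) == '0.1.3'
```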
cythonize('fusorsv/fusion_utils.pyx')
extensions = [Extension('fusion_utils',
sources=['fusorsv/fusion_utils.pyx'],
include_dirs=pysam.get_include()+[numpy.get_include()],
define_macros=pysam.get_defines(),
extra_compile_args=['-ffast-math'])]
setup(
name = 'fusorsv',
version=get_version(),
author='Timothy Becker',
author_email='timothyjamesbecker@gmail.com',
url='https://github.com/timothyjamesbecker/FusorSV',
license='GPL 3 License',
description='SV calling data fusion framework',
classifiers=['Intended Audience :: Developers',
'License :: GPL 3 License',
'Programming Language :: Python :: 2.7',
'Programming Language :: Cython',
'Programming Language :: C',
'Operating System :: POSIX',
'Topic :: Software Development :: Libraries :: Python Modules'],
cmdclass = { 'build_ext': build_ext },
ext_modules = extensions,
packages = ['fusorsv','cmmodule'],
package_data = {'fusorsv':['data/*.json','data/*.vcf','data/liftover/*.gz','data/models/*.gz']},
scripts = ['bin/FusorSV.py'])#,
#install_requires = [])
#'cython>=0.24.0,<0.25.0',
#'numpy>=0.10.0,<0.12.0',
#'pysam>=0.9.0,<0.9.2',
#'bx-python>=0.5.0,<0.7.3', #now optional
#'mygene>=3.0.0'] #now optional
| timothyjamesbecker/FusorSV | setup.py | Python | gpl-3.0 | 2,213 | [
"pysam"
] | b662d5d3afad9416d704ab098dd50c14ecc94f4473d541281d9da10aa8673f0a |
"""General Tools for use throughout RLPy"""
def module_exists(module_name):
try:
__import__(module_name)
except ImportError:
return False
else:
return True
import sys
import numpy as np
# print "Numpy version:", numpy.__version__
# print "Python version:", sys.version_info
import os
__copyright__ = "Copyright 2013, RLPy http://acl.mit.edu/RLPy"
__credits__ = ["Alborz Geramifard", "Robert H. Klein", "Christoph Dann",
"William Dabney", "Jonathan P. How"]
__license__ = "BSD 3-Clause"
__author__ = "Alborz Geramifard"
__rlpy_location__ = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if os.name == 'nt':
# Anaconda is built with QT4 backend support on Windows
matplotlib_backend = 'qt4agg'
else:
matplotlib_backend = 'tkagg' # 'WX' 'QTAgg' 'QT4Agg'
def available_matplotlib_backends():
def is_backend_module(fname):
"""Identifies if a filename is a matplotlib backend module"""
return fname.startswith('backend_') and fname.endswith('.py')
def backend_fname_formatter(fname):
"""Removes the extension of the given filename, then takes away the leading 'backend_'."""
return os.path.splitext(fname)[0][8:]
# get the directory where the backends live
backends_dir = os.path.dirname(matplotlib.backends.__file__)
# filter all files in that directory to identify all files which provide a
# backend
backend_fnames = filter(is_backend_module, os.listdir(backends_dir))
backends = [backend_fname_formatter(fname) for fname in backend_fnames]
return backends
if module_exists('matplotlib'):
import matplotlib
import matplotlib.backends
import matplotlib.pyplot as plt
mpl_backends = available_matplotlib_backends()
if matplotlib_backend in mpl_backends:
plt.switch_backend(matplotlib_backend)
else:
print "Warning: Matplotlib backend", matplotlib_backend, "not available"
print "Available backends:", mpl_backends
from matplotlib import pylab as pl
import matplotlib.ticker as ticker
from matplotlib import rc, colors
import matplotlib.patches as mpatches
import matplotlib.path as mpath
import matplotlib.cm as cm
from matplotlib import lines
from mpl_toolkits.mplot3d import axes3d
from matplotlib import lines # for plotting lines in pendulum and PST
from matplotlib.patches import ConnectionStyle # for cartpole
pl.ion()
else:
print 'matplotlib is not available => No Graphics'
if module_exists('networkx'):
import networkx as nx
else:
print 'networkx is not available => No Graphics on SystemAdmin domain'
if module_exists('sklearn'):
from sklearn import svm
else:
print 'sklearn is not available => No BEBF representation available'
from scipy import stats
from scipy import misc
from scipy import linalg
from scipy.sparse import linalg as slinalg
from scipy import sparse as sp
from time import clock
from hashlib import sha1
import datetime
import csv
from string import lower
# from Sets import ImmutableSet
# from heapq import *
import multiprocessing
from os import path
from decimal import Decimal
# If running on an older version of numpy, check to make sure we have
# defined all required functions.
import numpy as np # We need to be able to reference numpy by name
from select import select
from itertools import combinations, chain
def discrete_sample(p):
cp = np.cumsum(p)
return np.sum(cp <= np.random.rand(1))
def cartesian(arrays, out=None):
"""
Generate a cartesian product of input arrays.
Parameters
----------
arrays : list of array-like
1-D arrays to form the cartesian product of.
out : ndarray
Array to place the cartesian product in.
Returns
-------
out : ndarray
2-D array of shape (M, len(arrays)) containing cartesian products
formed of input arrays.
Examples
--------
>>> cartesian(([1, 2, 3], [4, 5], [6, 7]))
array([[1, 4, 6],
[1, 4, 7],
[1, 5, 6],
[1, 5, 7],
[2, 4, 6],
[2, 4, 7],
[2, 5, 6],
[2, 5, 7],
[3, 4, 6],
[3, 4, 7],
[3, 5, 6],
[3, 5, 7]])
"""
arrays = [np.asarray(x) for x in arrays]
dtype = arrays[0].dtype
n = np.prod([x.size for x in arrays])
if out is None:
out = np.zeros([n, len(arrays)], dtype=dtype)
m = n / arrays[0].size
out[:, 0] = np.repeat(arrays[0], m)
if arrays[1:]:
cartesian(arrays[1:], out=out[0:m, 1:])
for j in xrange(1, arrays[0].size):
out[j * m:(j + 1) * m, 1:] = out[0:m, 1:]
return out
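The doctest in `cartesian` can be cross-checked against `itertools.product`, which enumerates the same rows in the same (first-axis-slowest) order:

```python
import numpy as np
from itertools import product

# itertools.product yields the same 12 rows as the cartesian() doctest above
arrays = ([1, 2, 3], [4, 5], [6, 7])
out = np.array(list(product(*arrays)))
assert out.shape == (12, 3)
assert out[0].tolist() == [1, 4, 6]
assert out[-1].tolist() == [3, 5, 7]
```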
# if numpy.version.version < '2.6.0': # Missing count_nonzero
def count_nonzero(arr):
"""
Custom ``nnz()`` method, moves recursively through any sublists within
*arr*, such that only individual elements are examined. \n
Some versions of numpy's count_nonzero only strictly compare each element;
e.g. ``numpy.count_nonzero([[1,2,3,4,5], [6,7,8,9]])`` returns 2, while
``Tools.count_nonzero([[1,2,3,4,5], [6,7,8,9]])`` returns 9.
"""
nnz = 0
# Is this an instance of a matrix? Use inbuilt nonzero() method and count # of indices returned.
# NOT TESTED with high-dimensional matrices (only 2-dimensional matrices)
if sp.issparse(arr):
return arr.getnnz()
if isinstance(arr, np.matrixlib.defmatrix.matrix):
# Tuple of length = # dimensions (usu. 2) containing indices of nonzero
# elements
nonzero_indices = arr.nonzero()
# Find # of indices in the vector corresponding to any of the
# dimensions (all have same length)
nnz = np.size(nonzero_indices[0])
return nnz
if isinstance(arr, np.ndarray):
# return sum([1 for x in arr.ravel() if x != 0])
return np.count_nonzero(arr.ravel())
if isinstance(arr, list):
for el in arr:
if isinstance(el, list):
nnz += np.count_nonzero(el)
elif el != 0:
nnz += 1
return nnz
print "In tools.py attempted count_nonzero with unsupported type of", type(arr)
return None
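The docstring claim above (ragged nested lists are counted element by element, giving 9 rather than 2) can be checked with a plain flattening count:

```python
# The ragged list from the count_nonzero docstring above: the recursive
# counting strategy sees 9 nonzero scalars, not 2 nonzero sublists.
ragged = [[1, 2, 3, 4, 5], [6, 7, 8, 9]]
flat_count = sum(1 for row in ragged for v in row if v != 0)
assert flat_count == 9
```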
def randint(low, high, m=1, n=1):
"""
:param low: Lower bound on possible random ints
:param high: Max possible random int (INCLUSIVE)
:param m: number of rows in output
:param n: number of cols in output
Generates an ``m x n`` whose elements are integers selected uniform random
in the range [low, high].
"""
return np.random.randint(low, high + 1, size=(m, n))
def randSet(x):
"""
:param x: a list, array, or other iterable datatype
Accepts a 1-D vector (list, array, etc) and returns an element from the list
selected uniform random.
"""
# i = random.random_integers(0,size(x)-1)
i = np.random.randint(0, len(x) - 1)
return x[i]
def closestDiscretization(s, num_bins, limits):
"""
:param s: a state. (possibly multidimensional) ndarray, with dimension d =
dimensionality of state space.
:param num_bins: Number of discrete elements in
:param limits: 2 x d ndarray, where row[0] is a row vector of the lower
limit of each discrete dimension, and row[1] are corresponding upper
limits.
Returns the closest point to the state ``s`` based on the discretization
defined by the number of bins and limits. \n
( equivalent to state2bin(x) / (num_bins-1) * width + limits[0] )
"""
# width = limits[1]-limits[0]
# return round((s-limits[0])*num_bins/(width*1.)) / num_bins * width + limits[0]
return bin2state(state2bin(s, num_bins, limits), num_bins, limits)
def bin2state(bin, num_bins, limits):
"""
:param bin: index in the discretization
:param num_bins: the total number of bins in the discretization
:param limits: 2 x d ndarray, where row[0] is a row vector of the lower
limit of each discrete dimension, and row[1] are corresponding upper
limits.
.. note::
This is the inverse of state2bin function.
Given an index ``bin``, the number of the bins ``num_bins``, and the limits
on a single state dimension, this function returns the corresponding value
in the middle of the bin (ie, the average of the discretizations around it)
"""
bin_width = (limits[1] - limits[0]) / (num_bins * 1.)
return bin * bin_width + bin_width / 2.0 + limits[0]
def state2bin(s, num_bins, limits):
"""
:param s: a state. (possibly multidimensional) ndarray, with dimension d =
dimensionality of state space.
:param num_bins: the total number of bins in the discretization
:param limits: 2 x d ndarray, where row[0] is a row vector of the lower
limit of each discrete dimension, and row[1] are corresponding upper
limits.
Returns the bin number (index) corresponding to state s given a
discretization num_bins between each column of limits[0] and limits[1].
The return value has same dimensionality as ``s``. \n
Note that ``s`` may be continuous. \n
\n
Examples: \n
s = 0, limits = [-1,5], num_bins = 6 => 1 \n
s = .001, limits = [-1,5], num_bins = 6 => 1 \n
s = .4, limits = [-.5,.5], num_bins = 3 => 2 \n
"""
if s == limits[1]:
return num_bins - 1
width = limits[1] - limits[0]
if s > limits[1]:
print 'Tools.py: WARNING: ', s, ' > ', limits[1], '. Using the chopped value of s'
print 'Ignoring', limits[1] - s
s = limits[1]
elif s < limits[0]:
print 'Tools.py: WARNING: ', s, ' < ', limits[0], '. Using the chopped value of s'
# print("WARNING: %s is out of limits of %s . Using the chopped value of s" %(str(s),str(limits)))
s = limits[0]
return int((s - limits[0]) * num_bins / (width * 1.))
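The discretization pair can be sketched in Python 3 and checked against the examples in the `state2bin` docstring (this sketch truncates out-of-range values silently instead of printing warnings):

```python
def state2bin(s, num_bins, limits):
    # Python 3 sketch of state2bin above
    if s == limits[1]:
        return num_bins - 1
    s = min(max(s, limits[0]), limits[1])   # chop to limits
    width = limits[1] - limits[0]
    return int((s - limits[0]) * num_bins / width)

def bin2state(b, num_bins, limits):
    # Python 3 sketch of bin2state above: midpoint of bin b
    bin_width = (limits[1] - limits[0]) / num_bins
    return b * bin_width + bin_width / 2.0 + limits[0]

# The docstring examples:
assert state2bin(0, 6, [-1, 5]) == 1
assert state2bin(.001, 6, [-1, 5]) == 1
assert state2bin(.4, 3, [-.5, .5]) == 2
# bin2state returns the middle of the bin:
assert abs(bin2state(2, 3, [-.5, .5]) - 1/3) < 1e-12
```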
def deltaT(start_time):
""" Returns the time elapsed since ``start_time`` in seconds. """
return clock() - start_time
def hhmmss(t):
"""
:param t: time elapsed (in seconds)
Returns the string representation of ``t`` in format: ``hhmmss``
"""
return str(datetime.timedelta(seconds=round(t)))
def className(obj):
""" Return the name of a class as a string. """
return obj.__class__.__name__
def createColorMaps():
"""
Create and register the colormaps to be used in domain visualizations.
"""
# Make Grid World ColorMap
mycmap = colors.ListedColormap(
['w', '.75', 'b', 'g', 'r', 'k'], 'GridWorld')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(
['w', '.75', 'b', 'g', 'r', 'k', 'c'], 'GridWorldInter')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(['r', 'k'], 'fiftyChainActions')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(['b', 'r'], 'FlipBoard')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(
['w', '.75', 'b', 'r'], 'IntruderMonitoring')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(
['w', 'b', 'g', 'r', 'm', (1, 1, 0), 'k'], 'BlocksWorld')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(['.5', 'k'], 'Actions')
cm.register_cmap(cmap=mycmap)
# mycmap = make_colormap({0:(.8,.7,0), 1: 'w', 2:(0,0,1)}) # orange to
# blue
mycmap = make_colormap({0: 'r', 1: 'w', 2: 'g'}) # red to blue
cm.register_cmap(cmap=mycmap, name='ValueFunction')
mycmap = colors.ListedColormap(['r', 'w', 'k'], 'InvertedPendulumActions')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(['r', 'w', 'k'], 'MountainCarActions')
cm.register_cmap(cmap=mycmap)
mycmap = colors.ListedColormap(['r', 'w', 'k', 'b'], '4Actions')
cm.register_cmap(cmap=mycmap)
def make_colormap(colors):
"""
Define a new color map based on values specified in the dictionary
colors, where colors[z] is the color that value z should be mapped to,
with linear interpolation between the given values of z.
The z values (dictionary keys) are real numbers and the values
colors[z] can be either an RGB list, e.g. [1,0,0] for red, or an
html hex string, e.g. "#ff0000" for red.
"""
from matplotlib.colors import LinearSegmentedColormap, ColorConverter
z = np.sort(colors.keys())
n = len(z)
z1 = min(z)
zn = max(z)
x0 = (z - z1) / ((zn - z1) * 1.)
CC = ColorConverter()
R = []
G = []
B = []
for i in xrange(n):
# i'th color at level z[i]:
Ci = colors[z[i]]
if isinstance(Ci, str):
# a hex string of form '#ff0000' for example (for red)
RGB = CC.to_rgb(Ci)
else:
# assume it's an RGB triple already:
RGB = Ci
R.append(RGB[0])
G.append(RGB[1])
B.append(RGB[2])
cmap_dict = {}
cmap_dict['red'] = [(x0[i], R[i], R[i]) for i in xrange(len(R))]
cmap_dict['green'] = [(x0[i], G[i], G[i]) for i in xrange(len(G))]
cmap_dict['blue'] = [(x0[i], B[i], B[i]) for i in xrange(len(B))]
mymap = LinearSegmentedColormap('mymap', cmap_dict)
return mymap
def showcolors(cmap):
"""
:param cmap: A colormap.
Debugging tool: displays all possible values of a colormap.
"""
plt.clf()
x = np.linspace(0, 1, 21)
X, Y = np.meshgrid(x, x)
plt.pcolor(X, Y, 0.5 * (X + Y), cmap=cmap, edgecolors='k')
plt.axis('equal')
plt.colorbar()
plt.title('Plot of x+y using colormap')
def schlieren_colormap(color=[0, 0, 0]):
"""
Creates and returns a colormap suitable for schlieren plots.
"""
if color == 'k':
color = [0, 0, 0]
if color == 'r':
color = [1, 0, 0]
if color == 'b':
color = [0, 0, 1]
if color == 'g':
color = [0, 0.5, 0]
if color == 'y':
color = [1, 1, 0]
color = np.array([1, 1, 1]) - np.array(color)
s = np.linspace(0, 1, 20)
colors = {}
for key in s:
colors[key] = np.array([1, 1, 1]) - key ** 10 * color
schlieren_colors = make_colormap(colors)
return schlieren_colors
def make_amrcolors(nlevels=4):
"""
:param nlevels: maximum number of AMR levels expected.
Make lists of colors useful for distinguishing different grids when
plotting AMR results.
Returns the tuple (linecolors, bgcolors):\n
linecolors = list of nlevels colors for grid lines, contour lines. \n
bgcolors = list of nlevels pale colors for grid background.
"""
# For 4 or less levels:
linecolors = ['k', 'b', 'r', 'g']
# Set bgcolors to white, then light shades of blue, red, green:
bgcolors = ['#ffffff', '#ddddff', '#ffdddd', '#ddffdd']
# Set bgcolors to light shades of yellow, blue, red, green:
# bgcolors = ['#ffffdd','#ddddff','#ffdddd','#ddffdd']
if nlevels > 4:
linecolors = 4 * linecolors # now has length 16
bgcolors = 4 * bgcolors
if nlevels <= 16:
linecolors = linecolors[:nlevels]
bgcolors = bgcolors[:nlevels]
else:
print "*** Warning, suggest nlevels <= 16"
return (linecolors, bgcolors)
def linearMap(x, a, b, A=0, B=1):
"""
.. warning::
``x`` *MUST* be a scalar for truth values to make sense.
This function takes scalar ``x`` in range [a,b] and linearly maps it to
the range [A,B].
Note that ``x`` is truncated to lie in possible boundaries.
"""
if a == b:
res = B
else:
res = (x - a) / (1. * (b - a)) * (B - A) + A
if res < A:
res = A
if res > B:
res = B
return res
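The mapping-with-truncation behaviour of `linearMap` can be sketched in Python 3:

```python
def linear_map(x, a, b, A=0, B=1):
    # Python 3 sketch of linearMap above: map [a,b] -> [A,B], truncating
    if a == b:
        return B
    res = (x - a) / (b - a) * (B - A) + A
    return min(max(res, A), B)

assert linear_map(5, 0, 10) == 0.5
assert linear_map(15, 0, 10) == 1            # truncated at the top
assert linear_map(-5, 0, 10) == 0            # truncated at the bottom
assert linear_map(0, 0, 10, A=-1, B=1) == -1
```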
def l_norm(x, norm=2):
''' Returns the p-norm of a vector ``x``, for p = ``norm`` (L2 by default). '''
return np.linalg.norm(x, norm)
def generalDot(x, y):
"""
Takes the inner product of the inputs x and y.
Defined because of inconsistent or confusing definition of the "dot"
operator for numpy ndarray, matrix, and sparse.matrix.
"""
if sp.issparse(x):
# active_indices = x.nonzero()[0].flatten()
return x.multiply(y).sum()
else:
return np.dot(x, y)
def normpdf(x, mu, sigma):
""" Returns the scalar probability density of Gaussian (mu,sigma) at x. """
return stats.norm.pdf(x, mu, sigma)
def factorial(x):
return misc.factorial(x)
def nchoosek(n, k):
""" Returns combination n choose k. """
return misc.comb(n, k)
def findElemArray1D(x, arr):
"""
:param x: a scalar
:param arr: a 1-dimensional numpy ndarray
Returns an array of indices i in arr where x == arr[i]
or [] if x not in arr.
"""
res = np.where(arr == x)
if len(res[0]):
return res[0].flatten()
else:
return []
def findElemArray2D(x, arr2d):
"""
:param x: a scalar
:param arr2d: a 2-dimensional numpy ndarray or matrix
Returns a tuple of arrays (rVec, cVec), where the corresponding elements in
each are the rows and cols where arr2d[r,c] == x.
Returns [] if x not in arr2d. \n
Example: \n
arr2d = np.array([[1,2],[3,1]]), x = 1
findElemArray2D(x, arr2d) --> ([0, 1], [0, 1]).
i.e., arr2d[0][0] and arr2d[1][1] both == x.
.. note::
The type of each tuple member is the same as type(arr2d)
"""
res = np.where(arr2d == x)
if len(res[0]):
return res[0].flatten(), res[1].flatten()
else:
return [], []
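The example in the `findElemArray2D` docstring reduces to a single `numpy.where` call, which is what the function wraps:

```python
import numpy as np

# The docstring example: arr2d[0][0] and arr2d[1][1] both equal 1
arr2d = np.array([[1, 2], [3, 1]])
rows, cols = np.where(arr2d == 1)
assert rows.tolist() == [0, 1]
assert cols.tolist() == [0, 1]
```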
# CURRENTLY not used by any algs
def findRow(rowVec, X):
"""
:param rowVec: a 1-dimensional numpy ndarray
:param X: a 2-d numpy ndarray
Return the indices of the rows of X that are equal to rowVec. \n
NOTE: rowVec and X must have the same number of columns
"""
# return nonzero(any(logical_and.reduce([X[:, i] == r[i] for i in arange(len(r))])))
# return any(logical_and(X[:, 0] == r[0], X[:, 1] == r[1]))
ind = np.nonzero(np.logical_and.reduce([X[:, i] == rowVec[i] for i in xrange(len(rowVec))]))
return ind[0]
def perms(X):
"""
:param X: an iterable type (ndarray, matrix, list).
If a 1-D array, each element e is treated as the number of discrete
elements to use for permutations, [0, e).
If a >1-D array, take permutations between the elements themselves
between dimensions.
Returns all permutations *in numpy array format*. For example: \n
X = [2 3] \n
res = [[0,0],[0,1],[0,2],[1,0],[1,1],[1,2] \n
X = [[1,3],[2,3]] \n
res = [[1,2],[1,3],[3,2],[3,3] \n
"""
allPerms, _ = perms_r(X, perm_sample=np.array([]), allPerms=None, ind=0)
return allPerms
######################################################
def perms_r(X, perm_sample=np.array([]), allPerms=None, ind=0):
""" Recursive helper function for perms(). """
if allPerms is None:
# Get memory
if isinstance(X[0], list):
size = np.prod([len(x) for x in X])
else:
size = np.prod(X, dtype=np.int)
allPerms = np.zeros((size, len(X)))
if len(X) == 0:
allPerms[ind, :] = perm_sample
perm_sample = np.array([])
ind = ind + 1
else:
if isinstance(X[0], list):
for x in X[0]:
allPerms, ind = perms_r(
X[1:], np.hstack((perm_sample, [x])), allPerms, ind)
else:
for x in xrange(X[0]):
allPerms, ind = perms_r(
X[1:], np.hstack((perm_sample, [x])), allPerms, ind)
return allPerms, ind
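Both behaviours described in the `perms` docstring (integer counts vs. explicit element lists) match `itertools.product`:

```python
from itertools import product

# perms([2, 3]) enumerates range(2) x range(3):
assert list(product(range(2), range(3))) == [
    (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
# perms([[1, 3], [2, 3]]) enumerates the elements themselves:
assert list(product([1, 3], [2, 3])) == [(1, 2), (1, 3), (3, 2), (3, 3)]
```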
######################################################
def vec2id2(x, limits):
"""
:param x: A discrete (multidimensional) quantity (often the state vector)
:param limits: The limits of the discrete quantity (often statespace_limits)
Returns a unique id by determining the number of possible values of ``x``
that lie within ``limits``, and then seeing where this particular value of
``x`` falls in that spectrum.
.. warning::
This function assumes that (elements of) ``x`` takes integer values,
and that ``limits`` are the lower and upper bounds on ``x``.
.. note::
This implementation is half as fast
as :py:meth:`~rlpy.Tools.GeneralTools.vec2id`.
"""
if isinstance(x, int):
return x
lim_prod = np.cumprod(limits[:-1])
return x[0] + sum(map(lambda x_y: x_y[0] * x_y[1], zip(x[1:], lim_prod)))
def vec2id(x, limits):
"""
:param x: A discrete (multidimensional) quantity (often the state vector)
:param limits: The limits of the discrete quantity (often statespace_limits)
Returns a unique id by determining the number of possible values of ``x``
that lie within ``limits``, and then seeing where this particular value of
``x`` falls in that spectrum.
.. note::
See :py:meth:`~rlpy.Tools.GeneralTools.id2vec`, the inverse function.
.. warning::
This function assumes that (elements of) ``x`` takes integer values,
and that ``limits`` are the lower and upper bounds on ``x``.
"""
if isinstance(x, int):
return x
_id = 0
for d in xrange(len(x) - 1, -1, -1):
_id *= limits[d]
_id += x[d]
return _id
######################################################
def id2vec(_id, limits):
"""
:param _id: a unique id, presumably generated using ``vec2id()``.
:param limits: The limits of the discrete quantity (often statespace_limits)
Returns the vector corresponding to the unique ``_id`` by determining the
number of possible values of ``x`` that lie within ``limits``, and then
seeing which particular vector ``x`` lies at the index ``_id``.
.. note::
See :py:meth:`~rlpy.Tools.GeneralTools.vec2id`, the inverse function.
"""
prods = np.cumprod(limits)
s = [0] * len(limits)
for d in xrange(len(prods) - 1, 0, -1):
# s[d] = _id / prods[d-1]
# _id %= prods[d-1]
s[d], _id = divmod(_id, prods[d - 1])
s[0] = _id
return s
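The ``vec2id`` / ``id2vec`` pair above implements a mixed-radix encoding and should round-trip exactly. A standalone Python 3 sketch (no rlpy or numpy dependency; ``xrange`` replaced with ``range``, and the cumulative product computed inline):

```python
# Standalone copies of vec2id/id2vec above (names mirror the originals).
def vec2id(x, limits):
    _id = 0
    for d in range(len(x) - 1, -1, -1):
        _id *= limits[d]
        _id += x[d]
    return _id

def id2vec(_id, limits):
    # cumulative products of the limits, e.g. [3, 4, 5] -> [3, 12, 60]
    prods, p = [], 1
    for lim in limits:
        p *= lim
        prods.append(p)
    s = [0] * len(limits)
    for d in range(len(prods) - 1, 0, -1):
        s[d], _id = divmod(_id, prods[d - 1])
    s[0] = _id
    return s

limits = [3, 4, 5]  # dimension d takes values 0 .. limits[d]-1
assert vec2id([2, 3, 4], limits) == 59        # the largest id: 3*4*5 - 1
assert id2vec(59, limits) == [2, 3, 4]
assert id2vec(vec2id([1, 2, 3], limits), limits) == [1, 2, 3]
```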
def bound_vec(X, limits):
"""
:param X: any (multidimensional) iterable type, e.g. ndarray or list, len = n.
:param limits: n x 2 iterable type, where limits[i,0] is the minimum possible
value for dimension i, and limits[i,1] is the maximum possible.
Returns ``X`` with any dimensions that lie outside the bounds of ``limits``
appropriately truncated. \n
i.e. limits[i,0] <= output[i] <= limits[i,1]
"""
MIN = limits[:, 0]
MAX = limits[:, 1]
X = np.vstack((X, MIN))
X = np.amax(X, axis=0)
X = np.vstack((X, MAX))
X = np.amin(X, axis=0)
return X
def bound(x, m, M=None):
"""
:param x: scalar
Either have m as scalar, so bound(x,m,M) which returns m <= x <= M *OR*
have m as length 2 vector, bound(x,m, <IGNORED>) returns m[0] <= x <= m[1].
"""
if M is None:
M = m[1]
m = m[0]
# bound x between min (m) and Max (M)
return min(max(x, m), M)
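A quick illustration of the two calling conventions ``bound()`` accepts, using a standalone copy:

```python
# Standalone copy of bound() above: bound(x, m, M) or bound(x, (m, M)).
def bound(x, m, M=None):
    if M is None:
        m, M = m[0], m[1]
    return min(max(x, m), M)

assert bound(5, 0, 3) == 3     # truncated to the maximum
assert bound(-1, (0, 3)) == 0  # range given as a length-2 pair
assert bound(2, [0, 3]) == 2   # already in range: unchanged
```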
def wrap(x, m, M):
"""
:param x: a scalar
:param m: minimum possible value in range
:param M: maximum possible value in range
Wraps ``x`` so m <= x <= M; but unlike ``bound()`` which
truncates, ``wrap()`` wraps x around the coordinate system defined by m,M.\n
For example, m = -180, M = 180 (degrees), x = 360 --> returns 0.
"""
diff = M - m
while x > M:
x = x - diff
while x < m:
x = x + diff
return x
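The contrast with ``bound()`` can be checked directly; where ``bound()`` truncates, ``wrap()`` folds the value back into the range. A standalone copy with the docstring's degree example:

```python
# Standalone copy of wrap() above: wraps around [m, M] instead of truncating.
def wrap(x, m, M):
    diff = M - m
    while x > M:
        x -= diff
    while x < m:
        x += diff
    return x

assert wrap(360, -180, 180) == 0     # the docstring example
assert wrap(-190, -180, 180) == 170  # wraps from below
assert wrap(90, -180, 180) == 90     # already in range: unchanged
```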
def powerset(iterable, ascending=1):
"""
:param iterable: an iterable type (list, ndarray)
:param ascending: (boolean) if true, return powerset in ascending order,
else return in descending order.
"""
s = list(iterable)
if ascending:
return (
chain.from_iterable(combinations(s, r) for r in xrange(len(s) + 1))
)
else:
return (
chain.from_iterable(combinations(s, r)
for r in xrange(len(s), -1, -1))
)
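The subset enumeration above can be checked with a standalone Python 3 copy (``range`` instead of ``xrange``; the descending case starts at ``r = len(s)`` so no empty leading ``combinations`` call is produced):

```python
from itertools import chain, combinations

# Standalone copy of powerset() above.
def powerset(iterable, ascending=True):
    s = list(iterable)
    rs = range(len(s) + 1) if ascending else range(len(s), -1, -1)
    return chain.from_iterable(combinations(s, r) for r in rs)

assert list(powerset([1, 2])) == [(), (1,), (2,), (1, 2)]
assert list(powerset([1, 2], ascending=False)) == [(1, 2), (1,), (2,), ()]
```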
def printClass(obj):
""" Print class name and all attributes of object ``obj``. """
print className(obj)
print '======================================='
for property, value in vars(obj).iteritems():
print property, ": ", value
def addNewElementForAllActions(weight_vec, actions_num, newElem=None):
"""
:param weight_vec: The weight vector (often feature weights from
representation) used for s-a pairs
(i.e, len(weight_vec) = actions_num * numFeats)
:param actions_num: The total number of possible actions
:param newElem: (Optional) The weights associated with each action of the
feature to insert (often newElem = const * np.ones(actions_num, 1)).
If not specified or = None, assume 0 weight on new features.
Adds new elements into ``weight_vec`` in the correct location based on
the number of possible actions.
[[Since the new element (usually a feature) is added for all actions,
weight_vec should expand by the number of possible actions, as for each
action the feature vector phi(s) is expanded by 1 element.]]\n
Example: \n
x = [1,2,3,4], a = 2, newElem = None => [1,2,0,3,4,0] \n
x = [1,2,3], a = 3, newElem = [1,1,1] => [1,1,2,1,3,1] \n
"""
if newElem is None:
newElem = np.zeros((actions_num, 1))
if len(weight_vec) == 0:
return newElem.flatten()
else:
weight_vec = weight_vec.reshape(actions_num, -1) # -1 means figure the other dimension yourself
weight_vec = np.hstack((weight_vec, newElem))
weight_vec = weight_vec.reshape(1, -1).flatten()
return weight_vec
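The interleaving can be verified against the docstring examples with a plain-list sketch (no numpy; the helper name is ours, behavior inferred from the docstring and reshape/hstack logic above):

```python
# Plain-Python sketch of addNewElementForAllActions: append one new weight
# per action at the end of each action's per-action weight block.
def add_new_element(weight_vec, actions_num, new_elem=None):
    if new_elem is None:
        new_elem = [0] * actions_num  # default: zero weight on new features
    if not weight_vec:
        return list(new_elem)
    per_action = len(weight_vec) // actions_num
    out = []
    for a in range(actions_num):
        out.extend(weight_vec[a * per_action:(a + 1) * per_action])
        out.append(new_elem[a])
    return out

assert add_new_element([1, 2, 3, 4], 2) == [1, 2, 0, 3, 4, 0]
assert add_new_element([1, 2, 3], 3, [1, 1, 1]) == [1, 1, 2, 1, 3, 1]
```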
def solveLinear(A, b):
""" Solve the linear equation Ax=b. Return tuple (x, time to solve). """
error = np.inf # just to be safe, initialize error variable here
if sp.issparse(A):
# print 'sparse', type(A)
start_log_time = clock()
result = slinalg.spsolve(A, b)
solve_time = deltaT(start_log_time)
error = linalg.norm((A * result.reshape(-1, 1) - b.reshape(-1, 1))[0])
# For an extensive comparison of methods refer to InversionComparison.txt
else:
# print 'not sparse, type',type(A)
if sp.issparse(A):
A = A.todense()
# Regularize A
# result = linalg.lstsq(A,b); result = result[0] # Extract just the
# answer
start_log_time = clock()
result = linalg.solve(A, b)
solve_time = deltaT(start_log_time)
# use numpy matrix multiplication
if isinstance(A, np.matrixlib.defmatrix.matrix):
error = np.linalg.norm(
(A * result.reshape(-1, 1) - b.reshape(-1, 1))[0])
elif isinstance(A, np.ndarray): # use array multiplication
error = np.linalg.norm(
(np.dot(A, result.reshape(-1, 1)) - b.reshape(-1, 1))[0])
else:
print 'Attempted to solve linear equation Ax=b in solveLinear() of Tools.py with a non-numpy (array / matrix) type.'
sys.exit(1)
if error > RESEDUAL_THRESHOLD:
print "||Ax-b|| = %0.1f" % error
return result.ravel(), solve_time
def rank(A, eps=1e-12):
"""
:param A: numpy arrayLike (ndarray, matrix).
:param eps: threshold above which a singular value is considered nonzero.
Returns the rank of matrix ``A``, i.e. the number of singular values > ``eps``.
"""
u, s, v = linalg.svd(A)
return len([x for x in s if abs(x) > eps])
def fromAtoB(x1, y1, x2, y2, color='k', connectionstyle="arc3,rad=-0.4",
shrinkA=10, shrinkB=10, arrowstyle="fancy", ax=None):
"""
Draws an arrow from point A=(x1,y1) to point B=(x2,y2) on the (optional)
axis ``ax``.
.. note::
See matplotlib documentation.
"""
if ax is None:
return pl.annotate("",
xy=(x2, y2), xycoords='data',
xytext=(x1, y1), textcoords='data',
arrowprops=dict(
arrowstyle=arrowstyle, # linestyle="dashed",
color=color,
shrinkA=shrinkA, shrinkB=shrinkB,
patchA=None,
patchB=None,
connectionstyle=connectionstyle),
)
else:
return ax.annotate("",
xy=(x2, y2), xycoords='data',
xytext=(x1, y1), textcoords='data',
arrowprops=dict(
arrowstyle=arrowstyle, # linestyle="dashed",
color=color,
shrinkA=shrinkA, shrinkB=shrinkB,
patchA=None,
patchB=None,
connectionstyle=connectionstyle),
)
def drawHist(data, bins=50, fig=101):
"""
:param data: Data to use in histogram.
:param bins: number of bins to use in histogram
:param fig: The figure number for the plot
Draws a histogram in its own figure using specified parameters.
"""
hist, bins = np.histogram(data, bins=bins)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.figure(fig)
plt.bar(center, hist, align='center', width=width)
def nonZeroIndex(arr):
"""
:param arr: a numpy 1-D array.
Returns the list of indices of nonzero elements in ``arr``. \n
Example: [0,0,0,1] => [3]
"""
return arr.nonzero()[0]
def sp_matrix(m, n=1, dtype='float'):
"""
:param m: number of rows in matrix
:param n: number of cols in matrix
:param dtype: datatype of sparse matrix
Returns an empty sparse matrix with m rows and n columns, with the dtype.
"""
return sp.csr_matrix((m, n), dtype=dtype)
def sp_dot_array(sp_m, arr):
"""
:param sp_m: a sparse 1-D array/matrix (created
with :py:meth:`~rlpy.Tools.GeneralTools.sp_matrix`)
:param arr: a (possibly dense) 1-D iterable type (ndarray, list, matrix)
Returns dot product of 1-by-p matrix ``sp_m`` and length-p array arr.
"""
assert sp_m.shape[1] == len(arr)
ind = sp_m.nonzero()[1]
if len(ind) == 0:
return 0
if sp_m.dtype == 'bool':
# Just sum the corresponding indexes of theta
return sum(arr[ind])
else:
# Multiply by feature values since they are not binary
return sum([arr[i] * sp_m[0, i] for i in ind])
def sp_dot_sp(sp_1, sp_2):
"""
:param sp_1: a sparse 1-D array/matrix (created
with :py:meth:`~rlpy.Tools.GeneralTools.sp_matrix`)
:param sp_2: another sparse 1-D array/matrix, len(sp_2) = len(sp_1).
Returns the dot product of p-by-1 matrices ``sp_1`` and ``sp_2``.
"""
assert sp_1.shape[
0] == sp_2.shape[
0] and sp_1.shape[
1] == 1 and sp_2.shape[
1] == 1
ind_1 = sp_1.nonzero()[0]
ind_2 = sp_2.nonzero()[0]
if len(ind_1) * len(ind_2) == 0:
return 0
ind = np.intersect1d(ind_1, ind_2)
# See if they are boolean
if sp_1.dtype == bool and sp_2.dtype == bool:
return len(ind)
sp_bool = None
if sp_1.dtype == bool:
sp_bool = sp_1
sp = sp_2
if sp_2.dtype == bool:
sp_bool = sp_2
sp = sp_1
if sp_bool is None:
# Multiply by feature values since they are not binary
return sum([sp_1[i, 0] * sp_2[i, 0] for i in ind])
else:
return sum([sp[i, 0] for i in ind])
def sp_add2_array(sp, arr):
"""
:param sp: sparse matrix p-by-1 (created
with :py:meth:`~rlpy.Tools.GeneralTools.sp_matrix`)
:param arr: a 1-D iterable type (ndarray, list, matrix) of length p.
Returns ret = arr + sp (with type(ret) = type(arr))
"""
ind = sp.nonzero()[0]
for i in ind:
arr[i] += sp[i, 0]
return arr
def checkNCreateDirectory(fullfilename):
"""
:param fullfilename: root path to desired file/folder.
See if all directories in ``fullfilename`` exist; if not create as required.
"""
path_, _, _ = fullfilename.rpartition('/')
if not os.path.exists(path_):
os.makedirs(path_)
def hasFunction(object, methodname):
""" Test if class of ``object`` has a method called ``methodname``. """
method = getattr(object, methodname, None)
return callable(method)
def pretty(X, format='%0.3f'):
"""
Returns a formatted string for a numpy array ``X``. \n
Example: [1,2,3], %0.3f => 1.000 2.000 3.000
"""
format = format + '\t'
return ''.join(format % x for x in X)
def regularize(A):
""" Regularize the numpy arrayLike object ``A``.
Adds REGULARIZATION*I To A, where I is identity matrix and REGULARIZATION
is defined in GeneralTools.py.\n
This is often done before calling the linearSolver.
.. note::
``A`` must be a square matrix.
"""
x, y = A.shape
assert x == y # Square matrix
if sp.issparse(A):
A = A + REGULARIZATION * sp.eye(x, x)
# print 'REGULARIZE', type(A)
else:
# print 'REGULARIZE', type(A)
for i in xrange(x):
A[i, i] += REGULARIZATION
return A
def sparsity(A):
""" Returns the percentage of nonzero elements in ``A``. """
return (1 - np.count_nonzero(A) / (np.prod(A.shape) * 1.)) * 100
# CURRENTLY UNUSED
def incrementalAverageUpdate(avg, sample, sample_number):
"""
:param avg: the old average
:param sample: the new sample to update the average with
:param sample_number: the current sample number (#samples observed so far+1)
Updates an average incrementally.
"""
return avg + (sample - avg) / (sample_number * 1.)
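The incremental update above should reproduce the batch mean exactly; a quick check with a standalone copy:

```python
# Standalone copy of incrementalAverageUpdate() above.
def incremental_average_update(avg, sample, sample_number):
    return avg + (sample - avg) / float(sample_number)

data = [2.0, 4.0, 9.0, 5.0]
avg = 0.0
for n, sample in enumerate(data, start=1):
    avg = incremental_average_update(avg, sample, n)
# Matches the batch mean sum(data)/len(data) = 5.0
assert abs(avg - sum(data) / len(data)) < 1e-12
```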
def padZeros(X, L):
"""
:param X: a 1-D numpy array
:param L: the desired length of ``X`` (integer)
if ``len(X) < L`` pad zeros to X so it will have length ``L``, otherwise
do nothing and return the original ``X``.
"""
if len(X) < L:
new_X = np.zeros(L)
new_X[:len(X)] = X
return new_X
else:
return X
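The same padding behavior, sketched with plain lists (no numpy):

```python
# Plain-list sketch of padZeros() above.
def pad_zeros(xs, L):
    return xs + [0] * (L - len(xs)) if len(xs) < L else xs

assert pad_zeros([1, 2], 4) == [1, 2, 0, 0]
assert pad_zeros([1, 2, 3], 2) == [1, 2, 3]  # already long enough: unchanged
```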
# UNUSED
def expectedPhiNS(p_vec, ns_vec, representation):
# Primarily for use with domain.expectedStep()
# Takes p_vec, probability of each state outcome in ns_vec,
# Returns a vector of length features_num which is the expectation
# over all possible outcomes.
expPhiNS = np.zeros(representation.features_num)
for i, ns in enumerate(ns_vec):
expPhiNS += p_vec[i] * representation.phi_nonTerminal(ns)
return expPhiNS
# p: k-by-1 probability of each transition
# r: k-by-1 rewards
# ns: k-by-|s| next state
# t: k-by-1 terminal values
# UNUSED
def allExpectedPhiNS(domain, representation, policy, allStates=None):
# Returns Phi' matrix with dimensions n x k,
# n: number of possible states, and
# k: number of features
if allStates is None:
allStates = domain.allStates()
allExpPhiNS = np.zeros((len(allStates), representation.features_num))
for i, s in enumerate(allStates):
# print s
# print policy.pi(s)
# print 'looping',i, policy.pi(s)
# print policy.pi(s)
p_vec, r_vec, ns_vec, t_vec = domain.expectedStep(s, policy.pi(s))
allExpPhiNS[i][:] = expectedPhiNS(p_vec, ns_vec, representation)
return allExpPhiNS
def rk4(derivs, y0, t, *args, **kwargs):
"""
Integrate 1D or ND system of ODEs using 4-th order Runge-Kutta.
This is a toy implementation which may be useful if you find
yourself stranded on a system w/o scipy. Otherwise use
:func:`scipy.integrate`.
*y0*
initial state vector
*t*
sample times
*derivs*
returns the derivative of the system and has the
signature ``dy = derivs(yi, ti)``
*args*
additional arguments passed to the derivative function
*kwargs*
additional keyword arguments passed to the derivative function
Example 1 ::
## 2D system
def derivs6(x,t):
d1 = x[0] + 2*x[1]
d2 = -3*x[0] + 4*x[1]
return (d1, d2)
dt = 0.0005
t = arange(0.0, 2.0, dt)
y0 = (1,2)
yout = rk4(derivs6, y0, t)
Example 2::
## 1D system
alpha = 2
def derivs(x,t):
return -alpha*x + exp(-t)
y0 = 1
yout = rk4(derivs, y0, t)
If you have access to scipy, you should probably be using the
scipy.integrate tools rather than this function.
"""
try:
Ny = len(y0)
except TypeError:
yout = np.zeros((len(t),), np.float_)
else:
yout = np.zeros((len(t), Ny), np.float_)
yout[0] = y0
i = 0
for i in np.arange(len(t) - 1):
thist = t[i]
dt = t[i + 1] - thist
dt2 = dt / 2.0
y0 = yout[i]
k1 = np.asarray(derivs(y0, thist, *args, **kwargs))
k2 = np.asarray(derivs(y0 + dt2 * k1, thist + dt2, *args, **kwargs))
k3 = np.asarray(derivs(y0 + dt2 * k2, thist + dt2, *args, **kwargs))
k4 = np.asarray(derivs(y0 + dt * k3, thist + dt, *args, **kwargs))
yout[i + 1] = y0 + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
return yout
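The docstring's 1D case can be exercised with a standalone scalar copy of the same RK4 update (plain Python, no numpy; ``rk4_scalar`` is our name). Integrating dy/dt = -y from y(0) = 1 should closely match exp(-t):

```python
import math

# Scalar RK4 matching the update rule in rk4() above.
def rk4_scalar(deriv, y0, ts):
    ys = [y0]
    for i in range(len(ts) - 1):
        t, dt = ts[i], ts[i + 1] - ts[i]
        y = ys[-1]
        k1 = deriv(y, t)
        k2 = deriv(y + dt / 2.0 * k1, t + dt / 2.0)
        k3 = deriv(y + dt / 2.0 * k2, t + dt / 2.0)
        k4 = deriv(y + dt * k3, t + dt)
        ys.append(y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ys

ts = [i * 0.01 for i in range(101)]        # t in [0, 1], dt = 0.01
ys = rk4_scalar(lambda y, t: -y, 1.0, ts)  # dy/dt = -y, y(0) = 1
assert abs(ys[-1] - math.exp(-1.0)) < 1e-6
```

With dt = 0.01 the fourth-order global error is far below the 1e-6 tolerance used here.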
# # NOT USED
# def findElem(x, lis):
# """
# Searches for the element ``x`` in the list (python built-in type) ``A``
# Returns the index of the first occurrence of ``x``.
#
# .. warning::
#
# ``A`` *MUST* be a list (python built-in type)
#
# """
# if type(lis) is not list:
# print 'ERROR: Tools.findElem() only accepts python lists.'
# return []
# elif x in lis:
# return lis.index(x)
# else:
# return []
# def matrix_mult(A, B):
# """
# Multiples the inputs A and B using matrix multiplication.
# Defined because of inconsistent or confusing definition of the "*"
# operator for numpy ndarray, matrix, and sparse.matrix.
#
# """
# if len(A.shape) == 1:
# A = A.reshape(1, -1)
# if len(B.shape) == 1:
# B = B.reshape(1, -1)
# n1, m1 = A.shape
# n2, m2 = B.shape
# if m1 != n2:
# print "Incompatible dimensions: %dx%d and %dx%d" % (n1, m2, n2, m2)
# return None
# else:
# return A.dot(B)
# Setup the latdex path
# if sys.platform == 'darwin':
# os.environ['PATH'] += ':' + TEXPATH
# if sys.platform == 'win32':
# print os.environ['PATH']
# os.environ['PATH'] += ';' + TEXPATH
# def isLatexConfigured():
# return False
# try:
# pl.subplot(1,3,2)
# pl.xlabel(r"$\theta$")
# pl.show()
# pl.draw()
# pl.close()
# print "Latex tested and functioning"
# except:
# print "Matplotlib failed to plot, likely due to a Latex problem."
# print "Check that your TEXPATH is set correctly in config.py,"
# print "and that latex is installed correctly."
# print "\nDisabling latex functionality, using matplotlib native fonts."
if module_exists('matplotlib'):
createColorMaps()
rc('font', family='serif', size=15,
weight="bold", **{"sans-serif": ["Helvetica"]})
rc("axes", labelsize=15)
rc("xtick", labelsize=15)
rc("ytick", labelsize=15)
# rc('text',usetex=False)
# Try to use latex fonts, if available
# rc('text',usetex=True)
# Colors
PURPLE = '\033[95m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
NOCOLOR = '\033[0m'
RESEDUAL_THRESHOLD = 1e-7
REGULARIZATION = 1e-6
FONTSIZE = 15
SEP_LINE = "=" * 60
# Tips:
# array.astype(float) => convert elements
# matplotlib initializes the mapping from the values to
# colors the first time it draws, unless bounds are set manually.
# Hence you may update color values later but not see any updates!
# In specifying dimensions for reshape you can put -1 so it will be automatically inferred.
# [2,2,2] = [2]*3
# [1,2,2,1,2,2,1,2,2] = ([1]+[2]*2)*3
# [[1,2],[1,2],[1,2]] = array([[1,2],]*3)
# Apply function foo to all elements of array A: vectorize(foo)(A) (the operation may be unstable; take care!)
# Set a property of a class: vars(self)['prop'] = 2
# Don't use a = b = zeros((2,3)) because a and b will point to the same array!
# b = A[:,1] does NOT create a new matrix; it is simply a view of that column, so if you change b you change A.
# DO NOT USE A = B = array() unless you know what you are doing. They will point to the same object!
# Todo:
# Replace vstack and hstack with the trick mentioned here:
# http://stackoverflow.com/questions/4923617/efficient-numpy-2d-array-construction-from-1d-array
# if undo/redo does not work in eclipse, you may have an unfinished
# process. Kill all
| BerkeleyAutomation/rlpy | rlpy/Tools/GeneralTools.py | Python | bsd-3-clause | 40,682 | [
"Gaussian"
] | e870d2f220ca344571b7bcd7edeb65f60c1e6fac3bc9184aadb4323e0bbc665e |
"""
Functions related to creating the engine or the figures.
"""
# Author: Gael Varoquaux <gael.varoquaux@normalesup.org>
# Copyright (c) 2007, Enthought, Inc.
# License: BSD Style.
# Standard library imports.
import gc
import warnings
import copy
import numpy as np
# Enthought imports
from pyface.timer.api import do_later
# imports
from tvtk.api import tvtk
from mayavi.core.scene import Scene
from mayavi.core.registry import registry
from .camera import view
from .engine_manager import get_engine, options, set_engine
######################################################################
# A list to store the allocated scene numbers
__scene_number_list = set((0,))
def figure(figure=None, bgcolor=None, fgcolor=None, engine=None,
size=(400, 350)):
""" Creates a new scene or retrieves an existing scene. If the mayavi
engine is not running this also starts it.
**Keyword arguments**
:figure: The name of the figure, or handle to it.
:bgcolor: The color of the background (None is default).
:fgcolor: The color of the foreground, that is the color of all text
annotation labels (axes, orientation axes, scalar bar
labels). It should be sufficiently far from `bgcolor`
to see the annotation texts. (None is default).
:engine: The mayavi engine that controls the figure.
:size: The size of the scene created, in pixels. May not apply
for certain scene viewers.
"""
if isinstance(figure, Scene):
if figure.scene is None:
engine = registry.find_scene_engine(figure)
else:
engine = registry.find_scene_engine(figure.scene)
set_engine(engine)
engine.current_scene = figure
else:
if engine is None:
engine = get_engine()
if figure is None:
name = max(__scene_number_list) + 1
__scene_number_list.update((name,))
name = 'Mayavi Scene %d' % name
engine.new_scene(name=name, size=size)
engine.current_scene.name = name
else:
if type(figure) in (int, np.int, np.int0, np.int8,
np.int16, np.int32, np.int64):
name = int(figure)
__scene_number_list.update((name,))
name = 'Mayavi Scene %d' % name
else:
name = str(figure)
# Go looking in the engine to see if the scene is not already
# running
for scene in engine.scenes:
if scene.name == name:
engine.current_scene = scene
return scene
else:
engine.new_scene(name=name, size=size)
engine.current_scene.name = name
figure = engine.current_scene
scene = figure.scene
if scene is not None:
if hasattr(scene, 'isometric_view'):
scene.isometric_view()
else:
# Not every viewer might implement this method
view(40, 50)
scene = figure.scene
if scene is not None:
if bgcolor is None:
bgcolor = options.background_color
scene.background = bgcolor
if fgcolor is None:
fgcolor = options.foreground_color
scene.foreground = fgcolor
return figure
def gcf(engine=None):
"""Return a handle to the current figure.
You can supply the engine from which you want to retrieve the
current figure, if you have several mayavi engines.
"""
if engine is None:
engine = get_engine()
scene = engine.current_scene
if scene is None:
return figure(engine=engine)
return scene
def clf(figure=None):
"""Clear the current figure.
You can also supply the figure that you want to clear.
"""
try:
if figure is None:
scene = gcf()
else:
scene = figure
disable_render = scene.scene.disable_render
scene.scene.disable_render = True
scene.children[:] = []
scene._mouse_pick_dispatcher.clear_callbacks()
scene.scene.disable_render = disable_render
except AttributeError:
pass
gc.collect()
def close(scene=None, all=False):
""" Close a figure window
close() by itself closes the current figure.
close(num) closes figure number num.
close(name) closes figure named name.
close(figure), where figure is a scene instance, closes that
figure.
close(all=True) closes all figures controlled by mlab
"""
if all is True:
engine = get_engine()
# We need the copy, as the list gets pruned as we close scenes
for scene in copy.copy(engine.scenes):
engine.close_scene(scene)
return
if not isinstance(scene, Scene):
engine = get_engine()
if scene is None:
scene = engine.current_scene
else:
if type(scene) in (int, np.int, np.int0, np.int8,
np.int16, np.int32, np.int64):
scene = int(scene)
name = 'Mayavi Scene %d' % scene
else:
name = str(scene)
# Go looking in the engine to see if the scene is not already
# running
for scene in engine.scenes:
if scene.name == name:
break
else:
warnings.warn('Scene %s not managed by mlab' % name)
return
else:
if scene.scene is None:
engine = registry.find_scene_engine(scene)
else:
engine = registry.find_scene_engine(scene.scene)
engine.close_scene(scene)
def draw(figure=None):
""" Forces a redraw of the current figure.
"""
if figure is None:
figure = gcf()
figure.render()
def savefig(filename, size=None, figure=None, magnification='auto',
**kwargs):
""" Save the current scene.
The output format is deduced from the extension of the filename.
Possibilities are png, jpg, bmp, tiff, ps, eps, pdf, rib (renderman),
oogl (geomview), iv (OpenInventor), vrml, obj (wavefront)
**Parameters**
:size: the size of the image created (unless magnification is
set, in which case it is the size of the window used
for rendering).
:figure: the figure instance to save to a file.
:magnification: the magnification is the scaling between the
pixels on the screen, and the pixels in the
file saved. If you do not specify it, it will be
calculated so that the file is saved with the
specified size. If you specify a magnification,
Mayavi will use the given size as a screen size,
and the file size will be 'magnification * size'.
**Notes**
If the size specified is larger than the window size, and no
magnification parameter is passed, the magnification of the scene
is changed so that the image created has the requested size.
Please note that if you are trying to save images with sizes
larger than the window size, there will be additional computation
cost.
Any extra keyword arguments are passed along to the respective
image format's save method.
"""
if figure is None:
figure = gcf()
current_mag = figure.scene.magnification
try:
if size is not None:
current_x, current_y = tuple(figure.scene.get_size())
target_x, target_y = size
if magnification == 'auto':
magnification = max(target_x // current_x,
target_y // current_y) + 1
target_x = int(target_x / magnification)
target_y = int(target_y / magnification)
size = target_x, target_y
elif magnification == 'auto':
magnification = 1
figure.scene.magnification = int(magnification)
figure.scene.save(filename,
size=size,
**kwargs)
finally:
figure.scene.magnification = int(current_mag)
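When ``magnification`` is 'auto' and a ``size`` is given, ``savefig()`` picks the smallest integer magnification that covers the requested size and renders at the reduced size. The arithmetic can be checked in isolation (``auto_magnification`` is a hypothetical helper, not part of mlab; the formula is copied from the function above):

```python
# Sketch of savefig()'s 'auto' magnification rule.
def auto_magnification(current, target):
    current_x, current_y = current
    target_x, target_y = target
    magnification = max(target_x // current_x, target_y // current_y) + 1
    render = (int(target_x / magnification), int(target_y / magnification))
    return magnification, render

# A 400x350 window asked to save a 1200x900 image:
mag, render_size = auto_magnification((400, 350), (1200, 900))
assert mag == 4 and render_size == (300, 225)
# magnification * render size covers the requested size
assert mag * render_size[0] >= 1200 and mag * render_size[1] >= 900
```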
def sync_camera(reference_figure, target_figure):
""" Synchronise the camera of the target_figure on the camera of the
reference_figure.
"""
reference_figure.scene._renderer.sync_trait('active_camera',
target_figure.scene._renderer)
target_figure.scene._renderer.active_camera.on_trait_change(
lambda: do_later(target_figure.scene.render))
def screenshot(figure=None, mode='rgb', antialiased=False):
""" Return the current figure pixmap as an array.
**Parameters**
:figure: a figure instance or None, optional
If specified, the figure instance to capture the view of.
:mode: {'rgb', 'rgba'}
The color mode of the array captured.
:antialiased: {True, False}
Use anti-aliasing for rendering the screenshot.
Uses the number of aa frames set by
figure.scene.anti_aliasing_frames
**Notes**
On most systems, this works similarly to taking a screenshot of
the rendering window. Thus if it is hidden by another window, you
will capture the other window. This limitation is due to the
heavy use of the hardware graphics system.
**Examples**
This function can be useful for integrating 3D plotting with
Mayavi in a 2D plot created by matplotlib.
>>> from mayavi import mlab
>>> mlab.test_plot3d()
>>> arr = mlab.screenshot()
>>> import pylab as pl
>>> pl.imshow(arr)
>>> pl.axis('off')
>>> pl.show()
"""
if figure is None:
figure = gcf()
x, y = tuple(figure.scene.get_size())
# Try to lift the window
figure.scene._lift()
if mode == 'rgb':
out = tvtk.UnsignedCharArray()
shape = (y, x, 3)
pixel_getter = figure.scene.render_window.get_pixel_data
pg_args = (0, 0, x - 1, y - 1, 1, out)
elif mode == 'rgba':
out = tvtk.FloatArray()
shape = (y, x, 4)
pixel_getter = figure.scene.render_window.get_rgba_pixel_data
pg_args = (0, 0, x - 1, y - 1, 1, out)
else:
raise ValueError('mode type not understood')
if antialiased:
# save the current aa value to restore it later
old_aa = figure.scene.render_window.aa_frames
figure.scene.render_window.aa_frames = figure.scene.anti_aliasing_frames
figure.scene.render()
pixel_getter(*pg_args)
figure.scene.render_window.aa_frames = old_aa
figure.scene.render()
else:
pixel_getter(*pg_args)
# Return the array in a way that pylab.imshow plots it right:
out = out.to_array()
out.shape = shape
out = np.flipud(out)
return out
| dmsurti/mayavi | mayavi/tools/figure.py | Python | bsd-3-clause | 11,152 | [
"Mayavi"
] | fc3940e7837efd7756dfc9042cef26c854700947ff42c30b44e8b84aee2a2440 |
#! /usr/bin/env python
sources = """
eNrsvWuT40iSINZ3ku501Eq7J51Op8edMEzlAqhiojKzH7NLa9ZsTXX1TN30dLdVVc/WWnYuB0ki
MzFJEiwAzGROW49Jpi/6Z/dH9Av0C+SvCEQEAiCr+jVrdjVjnSAQ4RHh4eHh4eGP//Off/vmvej1
v3jvvffW93VW1W/+2ev/75+9995B8OU/vPr1F59Pn7z41dMvfvvlZ89ePZt+8ZvBcDgccMFxsFnl
dZCu5sHlZjWr82KVLgL8kq+ugru8vg6+vK+vi1VClabTdLGYToNJcBYu03wVng8G+WUwna7SZYbv
J0E4neKX6TQcBwcBfCw3qyCtgjSoZmW+roOiDC7ug3BNYIOjZcA9CQcB/DsI7rKgLvOrq6wM6uss
uMgWxV0wzBZVNgxmxWqeYx8RAn69LBbwGbuaL9dFWRMMfhSw9KZM8yoLXt5XdbZ8ts3riL8l2NEo
jgeDgwAbwLbTMhMA2XwwuCyLZTCV0tD6Za5aCrDuKPiqSq+yZ2VZlCMoV0Jn0/l6sbnKV9UomC3n
i3yVWVBU9en0NisrGMp0Ohg4NaMY8FAXwbpYbxZpnclQkgcBorlap7MsqIrgOlusZSRxcFeUN9Vg
8Oafv/4roALV5TorlznM6Jv/4vWDz957D6YwUK+CMsOOIO6KS0bmZtFM/bosZllVJYPBq+u8CnKc
wKuimEPDm3KGmC+DRVHcYNm0pvq3aZkXm8oAfA0FKqYce1L0L/VU3UPn59mlFJim83mxxpmO1mlZ
ZWU8pom8KovNGqiPXyZXWU1vomF7VMNRMDR/pJdQZjK8ylZZmS6GcQMvMVoLj27DURAeHcHsXBRV
Bj9SWhWT4azYrOrhiOoZ/+bQ3clQikM7MIh0s6gnxyOaoMkwX83KLAXq4jJ5fZ8M41FX82+4+Teb
PKv3bZwK+5qeZ2/TdBka8FXDVV2UPCxsiRE6uwb0G+19XqyyUbDM6hRIAPrKnxtY3JnqGhZytq3L
lIgsqDbLZVreB/nqskAGUa2zWX6ZZ3Nc3AQjiC7jNF9kc5hKZ+TD6Fmc8bKLqri6yddrLBZt40up
Eb2O12lVAbjoLr5LyxWQQZV0z/uCEY+9XBSzdFF1YmNal5sGJU0FAyOfws/MjwEuC6MGNgcL+SKd
3cBA53mVXix47AIk7unsEU/EvhPWPVcA0TNV0TwDjjQD1gOI3AD9HJVxT2fqi9AAWdX3i6xNqv7+
1RdSXHcwTDd1Ebaqz66LHBjS5Ey+B+GiWF3hX8AqYiIIV/waOC79Suv8NgvPW5B4iBr5wOnyFbDz
Yp4FEcJ8RAAfIZxHDOTRquidDGScBI9JCH8eye9eLDSE5Kcas7/zYhXWwWyDe/W9TTpcG1k07BlQ
orevs2JRlOZ00Yt9p0sK7z9Z91nVTA0V7poQAt3sTsWmXsNgIwAA6H+EVWlc5ibBG/KmzCJ+kk2C
fyQ85kQ4c3A0cT4Q12TxgBYJCByT4JW0/0JeCeQRblBJVc+hW7HZCO/Yy3QFYkCZlNlVXmEtBREG
rUakXoVcHwQjuzvz7GJzhcKR/Zpmml+NBw3bvwyW93dljm3V6RXIGml5VcVjC7XL6gpGNAyGyR8K
kHOW6Tqq6lKKxlZR1bmEYE6R9qPhmdH2OcB5iBCbetJPKpSURVEnVVaL2FCU0VAETC4GNKM6LHMI
eze3CgO150+/rrD7zPnh12262GSBO4tcWKFUFWsQsb5PLjb5AoSAhBb6NBp+8uzLF8+ePnn17JOx
MDfgxjBt6RwlIcVbA2nAWRqXsLtMACjQQiI0AftQgxVvJ6giTC0giOQiYP6qUFKtFyCQDkdDZ/bw
nyo/UU/QHMjQUdwqCc3qwoA02RCHbZAOgh9i4WGrVLZwAG55W90P4HZozCPv5P5po29q7oxXTSuI
NXyFKPMWUGsJy6yKuimHvWn31u4p1pKu1pvSrGnxGX49xeeqTutNJetbZkz3PWGBo2l1ARgktjJM
GCWEVyksc+QrXbVLM/59hT8dmsQnxe+us1XwM/gKwsbCmbWm6uWwPfoEWNysWIKQwOVGzvtkA70u
gQQHswUMN/h7Fq2YYXJDiLrpNIeD5XQaVdniEo5CsL3iplPheWkEMzXP8rkII5cVykS03eALYx1g
3QSrEv3MM/uDQINv8mR/5jbgKz/YH5s2oUDzY6BG5e4DPQPj/YH4gr/7dGpUC8D+qOVyWuK6tLN3
2XVQirwGZkVT6EB5PAmO26Uv4cgIB/k9S6MMxKdAkMvbddwq09VmCXv3IpuBtBggRAckrBhc/998
23pf1vNcD2K2KfkXMFfsbkKCcmRxVkQyyjmI53GLLXNdgzEDxTp9re9Uc7wXcZW8SNR8/z3vqQjO
atkzOSywoMYDhRy7O6q15Dqt4JBzQ4fWVyDt7Q0TJKY9QJLE6NDbpiyzVa3nHHFll7D5sm8ftsub
DbZ7MdArA14hDFkXAN5YCsSfgQhC2UjCcRBuSXpnLoi/q/BbPNFHWFYADBq+TSxKbQXuOJpOsADD
owfGXYGUKj3id8jOTEEJqYpR9bOJB4GeOXARzA9WuQN7xSlSTy5gWZUg7cK3iIvE/jkmAcz+diAV
3x0ooUaKgDAHgmE86CiEOHJxmq0qELVRVXWZbwWn/GPEx/rJEKS8Bw9u7hxZVJO6hThAN9ce740C
L/oZSN+IuYS1nKm/475KVMIYzu6OHJ02GBNcrbI7GgaW34WQ74gGWub2jDU7VA0lcSy8Yt0dqhl0
u6hLBHo4IB7kuHl7gMLwUBIzJO8pnoq2dYRVHDEXXzEXVoV1SSAr1PBUpMVYwCnDpVcHyx5qpvdO
R5shlZmJp87xSBdhlfEAurA3/LrEQ1Jnc4zBKltLg/A0Cuq8FsGhe4J2j5OBKnDe1quMTvTStpSD
OpPhRJatjy7agKFcA9QgBzgbdoOhck2JBoBI2HAwy0rYfmnCBWC2ncG8m5sI6ZxxMmAHcAlGlVbn
qa9X7oGq2fblfPv881fPXnz+5LNnL1588eJxoOautemctLq7KK5Eo2iJt40c2RJ1jb4oXaTByOsK
z86iVImGqgTMy9l57FaEepbMHWHjE7cHE6Mz3jObpSngzk5c+Vz63up6ksKevZpH8rs9n6wWmSp9
SDZX+wW997HC3UoPQ6Xx5Wdf/er550EDfhwcVsPgMIi4gZGzdQavX78OarzHyG4zVPal9yCt4BiC
+aZEnAL6N+tHdZaW8+JuBdS+zBwQNP7rfHYdbEAwLuvNKq2zxX0wS9dAJVkVFJtSaa2uoU+d1a/S
8mJhlwcs3GWki3CItJeASRvjYn6eQTGSxQXlUHxZuWuyRXNhUy8kqktgUeEUc/VWM+VmpVYCCSLS
GBPMKFCkZ7R7IJsizIPcF9HOBXNCehlUfWeAV/yeGZVOqpqvC4rLIEUt7QYEfWqywptFJL+GP6jN
kLuhWMF4PIzPjs9bNOcedrwbE7MwNR7CuxIkH+hRdk2TLTLxxjA0djFRs1jHtE6mZcu0qhMIrmdy
WEaW2bG0FqINgSEaKjR+WTmnUbzGS3boQibwx9ANpnWjP7gryjk1U+0gQqqFtCfsxQIpewm2Oy3T
lec8hUKH6DbwPht/YtNjZyWuy+IivYCVK9dD9tJ31KK4AbQIpzkNfwzn3ZYuChuGs1Ba16QIJvX3
PAtj6lXvbPfOOIBK1KzzMG26wxvsDnCNjOJWbFeCAeQV6kPT1SyLEIGw+W/Wi8yjn+Sv+nCIP/fo
EmupWsoyi88rkN+EQMLZKhzjZH/r11MSYhz1WAewEk+aO0C19HIdsO4ztD3wgetmHRWThMxjDDwE
f/n5SA8l9VKNj+nwRDVSoVv9wCGTvvI9dNYpqBpj2m88PvH67LDCa4hDmiWsmFzBFnyX3iemjNJR
excG3MY84uDepz99PUVaMSV0t09IjnZFswf/BdbjSXDStTlEQ9UYyDJJkgQgOl4Ui/kEiTPu6ln/
5tCte+7m4EMS4IcuF0fAA3eheXXgPaDVnUYvcBJX0DxpS6eFbXMfkDAbdfjbdqSMa55Dzfi8T7f5
cAJ8d6UkIndT0PPo7Dd9/JyQKyw9fBZ65AjZbqUTkXV4tb6Ijgt1mHyPvFPIVnT1sbuFya7XTBid
wgMevTE5qKsL6SOLjMZezVPVUUWp/JxKpN+F7nvFsGEzDcOezcspjathaJ/jH/JBvj29ManEaHKt
CxUevacZvOkKHgWHcymCnImfLMT7KNyprygbAMjjvtTlx5ndBJyEfXQl8gBtDwar6Mate/xg5tjN
9KYgrOeX94RRlwH6aNtgVX+n7fPKGzgM3i9SMeQzGqrgrAqtmKcPedVSQJhF8WTXXBTgL/qPfc+g
mXNz1zLuEwwN1KDChDQqQzZy4qblxDL0ohoWoty1DBP3rh5vMpShIBY6G79vrhk5D68XaQ3sbgmn
4ODoSCw21ZEYQagCI9WYNVy1JUPJERpmru+nZpvu/ryrvy0ATqdVx5E6z7Dw0WEF/z+n3grwLkjv
n9uDJwqHIRMQfkC8Hyl1wH1iGFpqXm+8i3uk+8fBcUC39i3eqY01vjZHBdyN8diqgGgFGs4u8hWw
Peea0B0LSh4VTwecwrPZpkbTsHjQqwcw1/0e5zcm6sjqA5eeGDVHgbplmFh3Dk5zsJIRn+YiMlV1
l0B8NewDVDZ+F6Zid9q8eDXAsepHKNNrnqPUUvOcD/zm1De1HVaNwsR5y5IDQSh1FqkjO6rjPzTa
BShYJYHD5x+A2ZG9tE86xveM5gqNr5UVzdGwQ0YW2Pjn7Ofj8/Y2oAQlLBF7eBgyDzI6Vio0ZF28
qBe93B32nby67uS8nfebWB9OQ/e++w7SBOkdWe0cBDpxxC5LkdPIFnK7GPcchIlD/wyH2YwmwHqo
CPF83Q1wJyc57te8WwQYiSSgMHikyIEy0OApUc2r9Q3Lq2gNEdG2WnJOmHU1AeHlsp4O8Jlu6G0L
m5v1P3/DIvGXbLdcqq2/weIJPfokXpvE6JBckMj4rqwn4tRYCAYdI3nc6aJoPnCv1rLguFWWuJ/Y
G2s1PU820BJ9blKsEGspaT9ULKLLEZqCZAriLChoyw/lDYx+cokevldFg9oi7IZz+ADKZELPy7ot
jUXSOq1Ue3/CT14v5gIfPq8vkmk9oVu4TIBlREzDkeLyy9CU+cTCuFJzwX658kMJP3i9wVVheYmx
DrLMOCP4r/Tlk2onoB/9AQSPeaCuQloTFotuv8QwaNZJ0hE5tUhmMNxLnRWAdXrqgXQAFu4VLbxz
FIiw6JTMNed4yhzDeZuJGZtuGfGORqXTIMMLHlPQNY9oLJcR50N8OEmpAVZ2dAmnsFkyj5d1s3w0
XWNuouCiRjIWIWOCPz+dPhSbhg9jJKzSFoKyoJVTvCkkN0CrEQ+SkRvqAl/qKh+i3bCkZ8yWDClX
iOMPbmZ1BNFHDhSWI7VnuR9L15Vx7HJeXsvWMMmafOqJ0XR46lSzJFh0bSLhg69axouE9KHOQ6HA
gBe7pWN8lzEMt39m5Pyi7GPtyMxDd/yq+3lgRIgl115u1tP6ammpYezaK1w6tgmdX90ylhIE29Gp
+vPMhkPDu5FBpH0BEjASmGlI/xVAnsvMBwmYahAAlp+qBmQPQkl79dPJ4ZuT8bPD795+71u54MiR
bcbnv3exCCIY22qUZwMUweBreYFq6mY9+31/h7yd/CUWP6abFXncU2uqsZH6I+q43hGcHPjs6Pkh
YSVH22jelusrFnQ0/LQVZqzokfiVQr0fJo41wEhraOWANSJDwJgFFjxlyUuk52qazQ05xR8kdAwX
RBwo7c/pMoKDMJSHO99Tri8LvkV3lddUtuWimFzCF5sbPlgQxRpv3xGWpVozZsP9ok3KAoEN4A8B
+itzS2P/glGgG9S/1SDwWziGOR2Z8SEgo0Lhg2GVJWhXHrTBLRGOFl9SzOUoPXpIhqUWRauRVDLp
UF+RrctIZPybvhhXI8SDTlRUxnXOWdzv4hEk1akjkbUYUHb9UHMeVbGslzeLeoNHwTGR//dsg5ts
2jUokqIlp0Ox0o0iqY7GZJ3z8Az5jfk8bk39wyvFYuiIO+69g5FhaiFjzXVekXgsUqrebvozqDcI
GootCStTl9fDwEeKK+VoNuxIb2bVHsgnHCwctmdyF9jAD42RqErmylRhWkVQANCgOECqIOEuQ7iW
AmFK+DMJrVJu7iYPKDyczdxyKykl2XRqClo9AH20BhK3nhAh7u+jrI23v6uNhchxQH013yB4PW5+
yO4LWLlugMvjxvFa0c2cUOIVgZxmE7CCFy0IQub9e23nff8+YS1H4NjwrtW+gsOsLAWpFFD1YHVz
gIzi4L0YlHUzuvw3rmkof+UV/PY9o68LhEx7CYc4XedY4EsgY2PQSvWplN7zexkoTsz79/IVmYP3
7+OACThdugl7JwWUyGvIsfmjJI4EoODusaixzYmJ3d7aASF27O1Kx3Fo7/YhNEi9RdApsuUN7H5n
kY6bbjP53BJTYjenHAwqvDanR4H0x6PVW85cEAhwLPwi3FgULdoAZ4PlG4IoCwyD9t7CVg7UfiKW
M1QuPhTHS15XuJOEW9p7Bj98QFRr8rgpWCMkG6L2sfQ6A4LXZx1vY3u+3iuzNN4pN0XjgOp8c7le
rw7u3wdFF++UUfvO6+bi/qP7Uvi+qp1frhfzb9+P7drFBKWf1hr1EzKLOTOjfExkmAyf9Kmo5gwS
zPtATAu8w3RrssU/VQWMWHafTPTLJz8cwrivyHr6/r38BIVt027QHy2x4QPOb0gaYxTi9++RN0Nh
PbND9rxgMHBdyeIoeHOES0WXRHgdhJ+DH3meZ1371DsprWsoj76sM4hx04kIPcRixz6LGWTR4DMv
16XJL3FjP3a7AR/kzapwJjAymJ+46Z7sJgZu1Sicm3md0z+U23ZAiu21BXuAT89snE/zeQ+C59Zx
okDTqodGL7BH+MMNM65X1hdc3taaeeCf90AZd9frLM5wt7JaofCQWyHbRG2FStkHp2q6I694c6FY
sZln3PhKzuPuD7DFMHSHzlX+jOp4dw/JUYC/g7uL2FUkhIp6ZYoRAXcVk847HR7cXRLh215c50V7
maALHZmyZWmVVDEF7SEZsN0Mt2//vqXUxSdl8PzoxeH4+PX42dFrFKBQKe/fBZ0Q/ehH6F46RrmA
9IsxwyAOtyI34yVXrklu1EGBsit0n7xtPQZ+5JwybKkJREwqLJxCUSnzBTcbtnJPGKoMnkunqs6t
mSNEcRugvoreXMtRbrWURVLhUFm7+WFiTbLPwcYfliABMSuIMrPd2JFtJQuYSee050EX6OOnD86c
Ax+0AEfCRolLKIRVINsZoFMEWBodKzz/Q9kaC38qmgoPPlMOP3pAn1blVBktL4BsWKwv0TD3/v0Q
DyYYEhxgMCPv3yMZ8xtHXKfeH6hhKCIh32Ovz3ir1YLKiDIsnJ8fNyCJY2ydI3tzfYreNGNoVf8E
mQO98eCoLlhtXpvAICNkG58FXAFylQnk66d2mW87jlextdO0GRDsVM0TBj/gLKUSXkLPsriWeKrx
WQwVaALIOtRHZcsXhzmVHVN4UOdlCv37mqIXJbBRVdEWeFwEhU5CCky1ZP9K8cSvlhSk3lKciHVx
qL7QpeAv2osIv6FRwSsM2WovArGBPDqmEtUSKKsdOHpKTdb14yAVQn5WBh231lB3Lrsd4E8VDZZC
Tz0/JjPQolpUE3b3X5aoioOgC3SMuCz1pmFXUjJwK7aSs0Rh0lAAjaA3FxoYesa7YrnuHyCQhDlP
+mxgh8fwh3n8V5WAZklCMF8mt/UCtUN0Yl7BHqyntu2bLYVsUDz56dXhGB1Z4PMBKr2bA0P0jqgu
Q35PyLIXLR0fTfWJs4A1DBxF/gjLxD5bIwIutovpJ/Ff7x2pOailRK08KFB3vZxyNmBqVGXgaNf9
LCAxPRU7YJKQZUEaPcW0MKv+WSxIvH9H9j4FAWC5uLwkNKJMfGzO65QkxHEKGPBVQg4eWrFKUvXB
VAEGnZf97oYk2IFinNI7A8WoYAv1uwWZ55Sw5kfua3cxYEHoyfmpnI8eRw3e/1Te3AbVEl0A2DBm
7pHDWnO/tWmn2XD773FYDIdcSbwNZiTgS+sCQ8rn86FAmV0VmCAazbprsQhviXCETn5GJhL0IcYh
jpL+5LKuJmU/TjS3EO6X0xeRBh2H+iYPgbi01pnhuaIv/O40eX97U8lzTlfTluiLQpOMUUfUWDEn
AzPlG0RQqzliJt/WniZt6AX289YODJS2goiDBCVJS8Jeu9tF9h2pfTvFWxB3yoJSXq/I9YZHwK5b
awG3CxCr3TgmW8BjHOty7oZ/ONqTvVHQCxeh8eQh01fbPwOhNuxxHKXzv3bS+6+WePyFVGJJrkSI
U/j3LILZQbZxcpFp13wbjfch+99aD8bC+jE7ZdnELxtoWboYbze92HXhny1Y0caMxopEKSIsR+bh
CESeH0ducqwDn/UU3VN1BH/JKSHSR0myx1iwTcTb3/ed0OJGCG3jSikdcySSilcYdubpo4Mz4/m+
T17U/V0AdaOyS/x7RsDCDypnlBi+WgeuhOse4mov3QIIBVgXYrxulDWb9ltf5R7QEimbkuJI24GY
eM+fSEtpJ1uT1w5sE4rBIAgZy1KO6FHnN7G7QyWvKgIjgy4JeX3t4+UTQUgeLJiKCqjLuNYZRlNY
ctkdxFNLnj7F/55Z/g1iRgnRFG5PhxXsDk3alnRN4/ekmkKTWbfEjGUKQ+VacAZitwRTj+z3/Fqw
TBdcBf8Kykc6gQ3EiYn1EKMP+VxA3+x6ytC0QqwABXSs9D5jBGEBUyAmjdJYtFo0wklmodxcuCB+
acLukZ5ZFikjZCsYHVgu8ezKkm+SR13Bxzts1FRbDVhz1YfvQVcIwmLT0uBAP56XiCm0vqpR1QCF
FDVP2u6opUODfoIRg/KjOg9y8KOokExJUqAIwhtDoXQ/JVESnzw8S35DT7JfgUnp8dvc6fZZcGZC
SWt1s0j2r4fJ4Jr4DOYfmaLRGicnU1PSrVIFUxXTkBUzqVfrXQ4JZyYPHslcWpP56G8ymdZ95efM
JUMGkDFrf38oIf2cLkZN5i+aRn1cOLNoGUUQpz2a7wHRhOEwUPMw6Ids0qxRYHjC2vcwsbBVBmTP
e5iNWLJzWK8IzX6YRD7RIWTYXzBF/A/oN0H7Vn06wryadGRF+qTQeZhVssyYdfaMzdduy/hsh5bV
adHduLZj+z3nkyXyCa58uv/ogLP9wmtykC/mV8UNM7GkXNabi0vk1GSRGaR6+dMki3VlpJo92H90
FpverO+fRvxSWyMtV4pOL824r494WcT8sCTtgnN3qFZWXzGO7BZj9t2Y/U1fz/h3WTFfiW02RqyA
Vm7hGTjleFxeeI5IaApnTl16Mf2Fw3NQEwQNvJqhwaFinxsrpR4ZIzA6VlSny5Iku9zqdn9/H861
q7qZtkgb9Guff9o3XxgA0CrzH7n00O2jXZwQWIZ0bWLaJKCanusfAZ2BuVlO796lHuEt5wJDMUj0
rCTNfNWqu61ctRW999A3g5pmI0vg22jV6McmLapc49Gcw6QBhTFkjsU1P2ut/3b94mCaSLdMLdtd
LzKmwGHelAh9zY1sRteoXdzfePuJJAMqniXJ5FVLucYGW49f2x+fik8QW4I+qfYmphdpPpXTNFRL
VpZXaGT35q5Lo7q8iXEA69jkCx7hWp4nhViFtOFaLfUrOz47GuJnvbfcID0m4xmfOj6TR/1WZcjs
FOoKEXh/QaHZAuzGQWIg0nJCndGzpl69IdbTvABW80co+lwVCa62vUttuSzW8VaR62JKZZCQakmZ
y/BySbIkoRlV3d85d5vkGoABJo4HgH+pbX3XhfGDiiFdoymNOcynKCVThCImSmguDEXHiZevC/1l
Gg/6myUG2Fwg8OvUDA1NcP2s26RBpnpMDQMiYF/h/WA/vSwxShnmsdsOCBpuECv5JzEWVwS9ba0D
8nYpQd8jyjHFJ8PMc+Lj+8VpNZuR4gf6JPo9Xq+bIrm8WV2WS9FN95MJCMHFCrb73bvYABwOThPo
Jac8V5Waz15qdlvcFB2JSEzsAmgMcotipcVDesNpNdpNQ6ck+cNNp0KLMm/2aFT7Ku1INROEO+wd
jQ51RO3Yh9xvqdSmhRh9Wf0SqNt9yfLBByw6/0mfxdPPUqcpU8XUaKOPEkws0zoX6cRrJOiTGxqb
mES1TSfejXcN9LBuIgTsrHe+Y7tOhAu3TPNAf+JG+g1vJBBBacLMBYraHrHbbK5uyyOWMYi/jPZE
xcqmYS/7EXzPpmyjd+eNlcTXyITcBzGKoazktCWaPpZlbZ+hW63HeDvBb3CPWM8fyvMssCRbCW6g
Tn//cijO0P2Efu8X5xOS2grX/M+jz7fNhx7QTtONTzX+8pk3OXLxH66Ht92gp3/5q0sgpn40SZ3T
vh25QEII7UznueviF3Xuc5bKrAf/PmXDATDdHTDnLPlJhCMLBhLWpp/tZq7qWxzG2gdkHsD0t2jG
OEhO77Rn/dvc8uh/d9RdTxYzS11fa8lUjfkgAEVThZQFGPetAPKZJYlYbLXCMjJNuCb/fhSG3lSU
r1gLDIvzc2zchACni53qJs4yyTOsetAhcUbrWh23bOroomD1CCMszks48WYVsWRcJFLkSjosUIUm
0w6acSxOsqe1nqsy+VCWK6pBuOOKaHXYBxldbUeTW0g82ft1iNza6b4MHA3rORWzgpkdsp93LH32
GR80lclmMerbQbc7sTesxrKT9ZFQGrq9LS3+K1iHAZ/OFiQASSaEyZdQr1nCnAjwiQScSRAaHKb4
CQuTYl1fFagrK9gUwVhTaRPZTyaakyuiXNRL9gyPR6qhP+Aj/A58oo2Fg6nqyB/kT6+E6iU1xVB8
QStYBo4thCMQYH/d3ZbcGBF5wzLzsNDuWkoIT0KKmqkhb2NBjCoEtxPNG6luJnVzbT9p/cqL4obM
MLBo0D8MTopB2jiN2Am0dUwhMvnFzT4qyOiqhvPRz3Bz890I58fx3NhnqHOJwvOJYhCfYJQUu9QQ
gTxupgTud6NiuzmQCr/tYgUjB6Lse1MMaL4qP1F8ck2DqTVrMDBOOAvowoYefYSa23OdwZqOjLST
TdMwkrcLYpRtW3OpxPtPL01uyoDAeN6yLSPO3wPzfqQRRqV2EnUpymzFEMhVCO6tbsZQx+Kkm+W0
WE6AdWm/QSfIXoAvdYvedaCq6QyVwXhwaA+zMFcp+hUPuCJhKV4JujG7OrIPspuqz51ijEjBJdd7
UDUWCXawB0fZnj2ZxSoRMZlEPiXjtE1PYSHRBWSILoXAhlDlhMEF3d+Dc2vdB045J0KW3FNY4y7C
d2D0g6PZU/Ut4KHGSZ5L4hEA5QaUS+BuGvdbu97S4G0DunZDV/3XBrEp25I1z7nAdTj2Kf7a0QVk
EsrrhrQb9rB2UEqjkxHsbeRR/kM4rOfresDNdsxYVH5Wwd9qj7HxS++m1AISS6OSo115G9CrCx1L
/efDzZC31VLcODOxpRRoKHYrH1ktZBFOPhIb6IhemCxdCA1CwTjShHMT4pGNEhJGncToDgUXHoeP
aMp4WMCpOZCS2Q7uGAYq1yTvoB9hwtYdvTACimDtSFGzCiVQk+Otl4F/0Ku265byJZhT+8GuDlYf
LnTKRYsEVzfyIswZp2p0p2MX4JOmXNSfQBBtb1pen4H7CQtfdwdAD6Z4r5M8l4PPhfDgBYpD29lf
GHrwZrGdsn0RMPyidjPWCFSrPxiN4OrPtypfRTnpQTQ+kDtPsnG8Wjx8zF5l/CTlFMXOMV+UnlBe
CR7CSPWu2+VUAnpw+UQpxu93FqfOdfEmzceVSN/lbaXeh01YSNkCINVJrfCPjUBtE+W0nCd2bmkp
etbbut/IXUvpYIsp38G0oRZ2Wc+nGgbOmGFbyxMu36I5efk4zayo0EHBN1a5Kndwp6C4nm9Mn5M7
zbdpcmfgtDvUwHMv67VOZb9Tyy+PT94cnnyb9npLqkraFv4B28G6EEBt2BKB+5Y+W12E88iKK6eM
YbQd1qlZjR16gFZE7ByUQyBIt+CquBholuvigfPdQCHz1tGuN5jCLJUTjBDLCETDCrC70GGm2wLr
NHAFXqGWxTR2K4gwbob4zO4Yo/ueNdldt3pyaejsKr6sS+8oT7/kVKj2jAx+8p88z88oHmxcYG5t
z13bAqfx4VRiqLQ4z94KOlh35tbUgy1zQeRCECd9N+eAeVn4TmgZSh1A6vA0EKw8trIoS4bLm1RR
rwlCPIp2Ad+EUYt88RKzgKzWj8SrOLRrzMtiSQHogYdIHG+NaT5luouP3a2J/pmq9hjvfBDIU/iF
fu6juJk5JGisaSQOzqvrbEaR4SwMLfiuFIxPqVUgVmmg2k0NdjnxhjTuz/aXvtM8RgcG3/xrZL0V
SJ2g6RnUV+mAANINd7Kbq4NUsOzCHlhw4BXCF2wbsLFG0aUHnvwWMZF0MvEuJmwQZD84HRgOvxhi
xWzXOWdmgl3oMxtSMytc6a+hE4VFRZ46ueVTug59in/Jp/5qSfDT2oNvszEgDVao349tZO5YNgK6
MF8TxFjPFx86tFne1qXP6Y5rRQr6Y3/M6dEWhhCiIeruUxSS0oMQhFkA0wIQLA2krvDYUNali2XG
rkYzdb3MQ7wWu/Pz+kKqM0ab88mR/LvbVpu1c7GtK4cRaKuassnaMaCMFSLuWKABtwROuPOGEApM
ralqA4QpcnVIs3y8vgqhBR2oy4i0EgA3WBSCMn0M/HWYxFrKFXRx6Kfk4VpYAww6HJKT1TfOkyeS
jjq4POsYsXevCGHmZn45m+xgj1azGwtVmgcowNLK78y/f9B3RsorDCkOk5aRhOCnWLyZ49mcko/9
lmhXXRC9Bj+Vqa27o+hs8M2RHshhFsVCclVC1uGSK7o1km27HV14gCB9864/5G3uiWHUlRH9NyZ2
NcZSY28qug4C0mSfH+n1SP79rHMrUSMdBVDwYvG1YneL5c2gCew+NpY44htINxTYMybgjCi+BpVU
UhekRy9PDl+/fPKCIMK/VZjgnOx7S+3ZfNOiHs+U9gfSURi4TpMeXoih0kQ+X5O5BowhTSrYnpyt
AITbZjNZcwA/wQtIYlmRk/EWxmZ8O8P1KuMlITzHtreP5jlW11XissUndKs8Tb3SPnCNaJ/WgF3b
Htm53bpRdGApEnSYbO0yUugfmrJ9ayWPygiVvkjuGeVYy/fVadrv6N+JV1deWYZZHrgPbbzlmCyq
uh6YcuX7VoctpEntI2ujbVk5juCNF4pHTzpCE1zVQSOoYBV80It5/VPgkHH6V3wxHn3JyVZNuECu
OJOqEduZVrRcUF79JaguDhbZAvEA4KBXeO48V7PWs7KCRKim2iCIaZN9YOHL2X5g7R4DhX96bWh4
22Heha6lrm6Urczu2lnwvXtBZyPW85mXwc6Bym6Dw5juweS6Wrerpo6wfsfBGe11wroatyvkxs91
7Hm4bv12LqDASkw2S0xFNeKTxzX5Qn2soN0tdaEDS8qBLDeVc/gCGp1CoE3UvjXfvMn1z/xoWSmk
/Hg7gVlTpV9KCA5dtdOWkw5cN775gYIz/LiucRqmcbolFly3Yal1EfnEKYpENlN5Q05FecLXlNW8
vlZ/Ah2AHAZF07Msak0SXOhBX04QdHVhR9iYJwO+xW/2b2kMZGQJNMEakvd3fgOE/3FTNSbhkCrk
MtbSR2s05xvd8h+EJhiiwYCEpJZ32w6lTtHXSF92Yd1UyqYY+UR+zd7OwgtB6e5gt0PEOaJ3q0LL
ta7reTumNG6UTajd8Xvl8pNfMrbd0Zvhc7djcB0UUemiPMtmPqSgDrsQDRne34G+Vb0LEsjekhSH
Sm5NjKNOwjZnTOQG/5TVPYggRlyVHszNCu89l2vxob2/v6/iquDdVd18iLTR1smfMGJzXq5VTDMb
vBvMGGUU6kKnVY+zC3f1NQy6urHpjhf18uv4/lBtMrjTZpxzR1Lu7MDBJI+C3r3OPvHBy6J2KeYV
+iqdOByHzjtcJHC55rduj7ghfqNUjdy77l/ccCmFhDpW6fXGXTXwY1Tpm+RTDDGZ7zKlPSvjTzwY
/E57cGd6oFhhK/X277Tk+L6BFdFP+ijBdjuNyFwpKZxvS60XyLnrmTOPHfgsXGIYmZNsNxDdPX4p
3nTq3sw4uAGJS3yiHNd8U5THjHZeKCufN8MknIgUMRLJ9Z8/bn0WL3iSRY3OjksGfZMrKvlwcGtQ
N9WFyCkR7t/Byjnrjrkyolurgy1GRGUWUs3ZjNHPcaL4YpxVG8xTyUtjBPNAW6fsaMEZqBiGwuMj
vIrJ1XTg60j2tHhAn1Y6Cdu9hPTzSLAqyxII4xDCexIWpph1KLKdXCTRNdG68gVFHqUHDAw1qBE3
JUEK4J0lovmRfyVF+5EbNX7xvARhpmwd+G4DFqXsFBXQKQ3IVeKvPUnBFR6Fbi1ws2tJJmQaUDNN
08J5jzHMllES6fJoyonoqiUWcayWLppqAJdqKdPO1aq4fS+Tg6acHbyHVtgn8BuRgNtv3+fJkQto
bsJ/SXGFbYgnHjmHWknw1pcNhXijx2xVNxEgUkcWTL7pQvgnBFLn+iEZKIRx7r0oSLD3CyueEw9P
4rhTd522O+qoa7pT4yRwq2NOrCYQA4eta3TdkCasj1Oguq0Pd65n0H0nIYADnOpzBBdtdbexhVGy
myWJfzFqkpxVnKkg2yIWMu05R/nndAovs2SIsMi4t+PuTSoDNZf1m2Dg2nZ73X4/tlynTu5dBuka
WdC33gqtwmT18XM56/AtmsedKhuQKcXfKJIOjLGkw5HNFZ+ZrlSwwtw4EZPmFKXRec8BmtGjJpDf
6Jbq7FRHSxomONraKceksWNSsRrMi8X5tEiuD4Bz6vSRzJgt63FGK3S2RWN2cIxb17fb3aKORcOy
34zVwm9zngw2Abkywtyjp0LEmBV6ZlrN7rh5uefOiKAh5fbqUCRayyJJynsdJCjkh31n904eRtSI
aB16Uq47MZ/pOrc9Un/p6Jz0fupVz9tylW1pQYasiZ17oIm9F90cUi4wxs2dY9dBZ1DoIQxbzu5R
Q0qaGaAgOKdy6HjEVGwhmWNoeFemoDxJfqo3HGOCTux8Pt+4fpok4mAc0Dx5/35///jVCaKXq6gu
8kBSraZoq03ttCm525Gt0OYqrmzGQdRLDp7GMASvFZyXA6yEi+DqRvjKur2b1iVzeW7HjddsOP0Q
Ebmat1hMiSC1uFZz0h952dyjOXZieWBwuY3cHyiCWF6YCfuv42i7gjMdr+XtQsJnODyHp+TScvbb
/ZyMd73becRYiWkhMc8lyAVqMSkIJCYkxKUNQ1GcRkuSYqrK9l6kxQ84cCx9gOW9x857lste5lCN
LRuaQp6buPp23dB4f2EPiF/wKbxjV4THCEoL9kjifvhUry6WdVOODjnzo44CjjnLqnPbhDY4eTS5
paC4OH6pABRtbwb5RnmNKxuAyihx7SaT0KgwlGjCjmpG+LSzXqeSqtrz4sTOeiHmhlsrOP/gmXu5
iV2XqhKuIjEMvnFZVbCGK3cDevQRPEjoAL7mQ8kqG405UevZ5QS+w+1G1uU+/rn3Gd7id11oyOu/
/FWct6NZXwYOmDdJbZzoAB4q2INTuUSgn4SLAm3n7eZ80PT/8PP0HgX6DvVrJwsxVOYJpreZdIW9
2+rzP7FrmyQcDvhZJL8LFA/d8V3McipLDfv3oxwWod088PPcvGuyVJZEjJvpUk7EiR3dUITTcvqG
aA4j/q7K1A5Sbpv/QBeOJ4horVzno24SvtN8JEZCvgCyI3rrjUNFxb4WtgYduPvDO9WIm9kjdEwk
yTnQJMKbsOCjXp2dP7uHwsj5nxBJhOfbx80wR94uQ+BkPuJd6fTs4//67t999dVXymuc86t/vPPu
//3vvvoKzw1EV6McXBjX0dSfKkrJBBQJTxEMlTFU64asRgKupxLTKTd5h9JRlO45Kb/sZO5S1Hqk
4gZOSuraH0G2mStbd8T1Pn7/rJmV57TieHYLI2JvbnLk5tlIrZOXPRVx+PI9Zh24/dxASu/MzRSD
CGfuhuNlkFMFEjRbrQV1s2C7HPSvmuNvPCNgho/WVgo8L3waUW1wWTZidOzziPomd6CB4kDzpO90
D8tNMf0J9Hp9k2yW1cdNub8qm326CLCwCPRovABi4ikXm6IpluuSoXLPS24uLjbvAd+Y1xf5uFhV
5PyZfvswx0y5KQ2C+h92P826xBY85mi9MnEzHdhLBv9vre7ig7Wy6nyEDUNHRzn1z2HVtGWHNV/z
bd2qES/ETk5lrJMvPmDcbnguh5wmsO/ki+KDOuPKKQbADID0ZtX1SA8j4JGIzzF6MExQ68S5UP0Y
JtDkhzFCL9UbwViIujfxdKFyHHqQOMKqnhXv3s1SSkn4iWyOzzGlqs98js7ir6HrXBa0KAuqnkeS
lJr2XHFPvYiKW5oERAaOC0G6FMejRdyEPFLa5Z60q8PdFCVi134Usttqbm3TdlMW84id0SanZXml
ywNRraMqqRVJPAPG3176NGW3yCVS0VE6HOalLnXZOneko+RZ5Bw9A9uv2XcsFlC6UxgxaJ01jCLn
ToiYCM/Vcax8p/tjZmqX3DwmLRySXoqJ6C23SauqOkn75jjCarn5GbTQ+4OctrCCmOqCZoQ/PMAb
X+NdB/tN4cZ0n1YWhAPfa+ERRKcDcviywFQ7tiiQmKOCGZHKdkamFJCtrQ+IVcjlCD0Jj1Vcwc0O
LVByhD733iFjSV5rddpFlxGpVqYgR6XBJOaUlyLJihKQnv784xmeR+PUNsX88OTdPz95AaUfP+hZ
+ESc7u5beX3gW4fwn9MDfslS7LXVG0VdDoHkckrFziZbFbnuffy7d/89yHbQfQxswFP1Y//df/sf
SbDrwfRWEwRJuaBgCVmnArSWG5r6KSH1oRAw3UzK5j5682Aq56QFHoUWuh5eMVeYEpOzv+Ad+pNX
RwfJYFHcwBmPOa6rNV+wt/gdvCK++cfMGBnhIQz2Bb0adOiZs3okAsGbk2fHb0/ibgLT8nxzsUtB
sX16hxrWIl+c9LKcz2tc3Ku6mdsArFhEKkdKxYdEvZe/Ef1kjf6mI0wg3OH1vvMwvHE4QjYFpms0
Vw5ZceNfI2IzcD3CHw78J9hbS97C0NSfO9zEI/YBggJ3BPYmKWvKeLOCifJMS45rB505XW1Q7CjZ
59IDac7pcHZ74kInuoY/lqmcJGoEBh33lWyHMKDYEfZh0fXOwU0hTBIkMZhECSNqRRamrTG1gLJg
1/XVbusL0xqaROBLdsxqGXATN9d6CnKaIRQONRgyEtQwKdeTPHmLG5oOpxZUivkNwvC/unl1sw+y
tZdGUkgGkyrLX5jNqmYgUUlvMdm063pR/bnQKHJqxh9pPkFuo7fTm/y1KAi8mUOlMErdtXco1xtT
r08WFh+A0KLU9QZYk66gYP5895ptpA3fdXoXN4+4RUbKqD12nse/quu4ZXePjRdWoagyudNi6H3i
bgY9RmcziFnIQaJ357w/JhNd6Nu8wyWCJn51AcLAy/I1a9rvJYFdvnUS4IhZW9oLFQ15YQ+Nwjg1
4kEkYIZieGCbyU5UyIYw+00NmwvjCBUZD1rXKI4HpR6Us2ziXy+1vIm3ppVa6PQiwLdcW7Futylh
0BpO4Z/4+Q+FQTYPtlsXO+/aq4hnvy7W5TZWGbQBzHJ1M8hsDDzTCNpJ4O9bO5KjHN0EzsJBMYkT
4kZ9NCqzAobPxJXOQi9zosJBLLOLZh09D4FGbF0g2BPAHrvztZtqTZFXuklgsVfF/APZGuiCQaFe
BmYejH8fypUpegbOLP5o3MHCvhYxRkRuSEowHsxgL2BR/AT5BzPHCDId++mKKrpkx5yjhO+r53GY
PBgm+w8/A00pvoSn6snpQXV29nlZR0NQpa2EKblkVZYqxZ/V+8zBbeygn6GefY+S0LnPIiO6AaKH
1iK6NgzEWm0WIGj/mRkPHlNth23ffB5kl9uOOjjvMXqYj9Dg3PNcYCIHquYxWw5V3ew4OC2pWc/7
wiJj7GIWO0fwUkDuXqE4XtwKc9TZI9EHB6uToKebzByuqlfA4qxkfvSiQvFswG06iKxnJAxNlYLp
oUOjG/DYDKPDu7qxElryGqm+4ABRWNufVx94kPqz3nW1xbCfV/YgbmGYeredsZ2bHrJVxN8OCMrv
ob6qS30zGUYCtc9QFkTRWZISp1Chvjk+gyBavHzFr90jvynqTig5QYedM26Q6Ts13eXOXRptMb+l
qm5fzi/r5Opszy7ac85Qqel6PISSgxTu9fZ6e+gRTl1p4Zec87TcgdKB6qamiasGWqNjgahpQOSU
KdvIFlCoCCCEov+ZIXzDwELX4hlrG4QBYm8frF6vymUQlkU+v6Nk1im2GSq8RWjDKQhTlGP7HpWB
qMl0FlErTCXoMMVo+xvdalbHcItKiFbN7StDjkk4DxxYQ/1qWdPD2lvXRqIChwnf70lGk17EdjAv
bsrpmA1rKvHJ+QazUhg7oB+hxo2i6k5/RGZRuTHhagYRimLzVR9Bg6/627//cPsXjywul0S4tp6i
H3qy6KKeGu8qGQXObb/gvAFXxo12RkHK5XLgDopuhMt/O8KMBDijERE/mGa3zMcXEbfBmdGLcjtN
44jZ8uQdBC4xGxIWrk4o/i6jh0fdjB4/cvj69ed9BI6O3U8T3p1vbtD2ePsXCN2cyibTokRw6a0A
bxhhgJZSRv529Wb9MmousKrS/L84/n589PL5sRdJaUqpP399ilTe0jRs+WfgfJ7kTppVmFCUbcf4
BlWD9PCHw9ffJwiAcZI8fX10ksBqJphh/Ojl98nL45Ojp4cJjit5dvjd2+9TJYdyR7mZUZLi6FMg
XnoQuocpUwCv4pCLGWxWZwDyNnMcyLqu/ghu8ePg3X8Q+zhdGKhF/pi9S75i94fNkgNPiC3Ad9bl
AlhlRdP48e67f2+5TkCT6/K8Wn689+7f95TzxPkCs8RzHCjJsRYWEVm3EZFfwjOLRDWRYP6vaiLx
PnHL7qaZ83DongSm8nK9Xh3cv39OjeTLcs0Lfb2YN6uJXMUhm7jPT+7LNrqsr+yX+Bte2RdmJiec
k9SL8vnpTF4ow9ATA0WUMBYRsB3hS5wBcOwk5lND7iPM0rr4VDSjFPmyhaLGyQpG/XZdNyUpcO16
lKqKqXGAJqnaBO5c1jCF7ei0j/NfTilEfD7vn5kilCIsRatswmX+FYN9cFnINqtnMlyYzjvF8XgB
Xal4g3ruKLKQeHuD9MQ7wS6fl9flZLNWHF+wny2sE9UNioKYz62wBQ2/IkXkHFPfyk/URRoJkv2r
e33b99h4v3QBTPV9gCmrfj2fMseB7zUIO8Uc2QqPm0FHyrHw6SgKqd2OUySG6+KM0mL/rRMCZHpC
EyHfd9atE13Kkg3NPa39VVs+jPaqLcsPgwf20U2bni+IojVAMJoGsDtemcm8bm2BAMFmYwVtQ+z1
TSV+gPRjkAFJX73CwgN05rsk9DzTvcxmktQgsBpkMHda9vkbwO8c+cRQtW6NMgBKsrYOfWEfj2OF
l2R/4tejR5wUWX6FLHQqWsyZ9hqWudDLjT1XwcXEAb5JBo+HyQMbOge91ph7zqtztZ3fYK7G5hU2
1wvdLKw6sHgVQdLH64mGaD2miea6Ge/91qFejRsHEs0CtvbAnxMzOv9N3sFYMH80s8LUt/v29NL4
jamPKEZLGSUadKq243OCLtCKjNsSjowRXqa+AT6MSoR3LBKl0rQA43Ul3LU/DrpA9UyT8c9SaHDK
k4tHaUK0bc24eyOBa0IU42wkLzhpxak0vO/RNLARUuZnF+Mp51uERnJEY8JrIIxHasixBW259Auh
OKqyPd1/eEa/cefP68mXANXz93DXQE9FYLkEhkTz1PjhTusrcfyu4WDh8f4o2IVksqvqSKQTDmZd
q/kZrK88VyD8/lWuGsh1gEUWC56j1FiZ768orM5aryiv8+p9PpuTA8YmpTttsr//rZARzObQ4W1Z
7+Pw3f9iSYs8DKRZ6CAh5MFyrj/uv/s/H7F/xvNKZVJalFO6kipxOta8L6xsxQqgEQ1IjGmgIqFV
yp0nb07y3gn6Z4gFUiLb8MDVn85XNwlBh5AfaU4+BUr0RLFT/iwQ6EtLpK4jrx6TYnGbdTXvKAL0
qj+tin/H/X2iyhCp9uBzzJ2hz2vkDE60xp9AZFTq8x5wiTJRcvDmos3/REue183F/aptN+XDr//h
dyx2lderBilukH5X1/NjdDhLv6uW/MfbJXBT/vMFxXziX0ezw2t69AxOlADkN31RteunNXmupd8z
GmndSI2fqnI+xT+wQEEQsulTlN+CVl7DLsa3LzcLYolr+qU3Lz3bnHP8NZUrFmW8L/j2hKzmTLbj
dr1Y84ifi7PPs3JGPUF9Qv5+TayeRlnOS/4grEd1sQy/8mRzoV4l6Ss09uIfz2vq8o8IzcnTRj9h
sah9tJWHTZ00N5ySgXqNQuoSqVC+DtRALRGVmL+eA2GFTR2CCE1rMK/Pizn+BYtAXQI20NIyg6D1
gVeD75HUDCFNjNEDjINj1wOd7VG5zWcOfB4TkTW9n1WZ1sNC2K/aMRSlNgfYTnjxSs5p+iZd94C/
GjSE7e/ekOl+zwgwO/bLMgQvCTAXyudYPvucTkVbwfLG+0bl4dDpO4zpiBEYONMUw8Jq9oIyBbLA
bQYd6O284LTsozQNwgeKTRviNrllrBYIkVX/khgg3Z1BW2+aCSalbjiUjnjyGM9cx0jLUTQFWRYI
3oaq8Vc/wWStKe3VM5R4jlTbcABTq1loSpEqOf1rcl+IeBBAVSmbxTawKlk4IzAM5F+tvlpjC9Z7
kA7MgcGS0ZAAVK7Qx6lCSCu8yN5vNkvaOGkIs0M3TCwyWuekSuFQYVJmvgDE5vMkebO5wNSVnE4y
0t4E2DPmUJYjFUlYTDXnJXSBYyjkJWOtwpnPv0ew3tUyU3YB6PSgns1YoiUJQdbbjs1EZ5CG2IFv
KOTHJGIhBHX+HH8PcCVIPKNfDzPH3dPQV/S7mVZ81FqpXriEi0d2LmmQrTeD0iZwIULB1Ch5fyjK
wGUHtueDQWBfWndnqIC7n+37ISqHUXf3QBZLkm++SUQc4zg3G4jA7jc2wj5P1IBsgPJ6zSLdgWpF
dfbBGTBA5MVopQmGRhrT9drJduuKJQeKDmxpXr6H/5w+/O2BE5eID3u9esWCgcr3yzGqsMe/q9bH
TQJU+a9ypMnDdzU9/Rf36RPgcvD076ynL95cVrM1Pv3mG+vxa/3422+tx0+m1MA96xHIFfho33r0
A976wrO71rNn1Sd8dN969Hxe1416br/4oaav3LEeHX7EJ6OR9ehlveanv7GfvuCxOE8O6ZFd6nse
mvOESn1rl3pVX9Ew7HEctfioap1H0BV+ilzDfrOkx0u31/yU0VjT3l97vQ3KjcHSSqNY7o7zOYQ2
o1f/yXn+Vq2E+1QtGTzFb6ns9j7/5y9Oy39mfm9OSF0ID8OEJRVQry/mZbFAVjbbzNkB+cJTwvOt
VyHMXbwDU/Ev+tf2zwK5uJqM+QwS/z5XGNjD4G5KCMLngAY45MSkOu+9djznLtpWgW0Si3uwHmoG
z+U8W6MFkbJYVZz1KdvF76m1sJBoCnLU3QaT+suSkSmxx69t8z/rWwQBz52IhtBYFYfSWdeJZ5uw
5k4gh/UOTrHQ2S7TB1I32rjTHabRnj2oMv6Vp8+Cqhm6gDVhnJQ4vDoHwABB/oguMSpD9H4SXB2n
czV28YIU0Q9vOEYpEkUaCsK6ihROv7EUa7WJafmA78kltv4gba0xEraRbJsOH07ehlggl2hgrxlZ
XrlmBOJ3YcSRPiRpW35eT2NIrLLTWYp3G39JIk3M9zhCoCbE0uYgrjMCq/8VRw8tRa7iWDbDnWhO
SSZA/AR+3Q4y5TZ2QVojPNCgH3k1HdpG+YCqbbk8Ssz0jVvYwXZa3mPuh3eELd5eU1wNguBFv+MQ
M4Yh0XOFDSMeOlZnYLevm005iCEjKYVRJsJnF1s4irfaYvnQUdoLG1piXs7WZPJfrHL823kxdlqn
Jy5NMC3gi8xLIQ+UBVLXuCbD7J+r1YC+UK9a7gEG9ECnUB7zr6KonvNhehL7sHzCC/JfjdubxXk9
Z98TLfOd1iujeJ9t4edoT6T/I0IM50F/YPcEGv6YPPOr3hljOjlRABjTwkA3rEcNwltSp0wXbuX8
/h75krNzmHgdG1m08BlpRPyxjKyV/WUHTOfcRodqe+i4G1H60hEEjNarDn/p87popiTnNZvVugsT
OawbiQoOvnLbVty6Y5zcEKRiUmQL77+yOxqCM8oEFSzSs3vhfDLAN2liqDjupmssR+zdWRsbkBVq
APyobd4Gi0qGGsu8xWVy5E8oRB03UaW2DfHviJPQtYB8R9JNHHQxEv/4ZI4lzXSKkxqBt5OgeTqp
4RGP8DYSEjdCng0cQ4onkfyHfnco9ukA/YPoACL13u4nub1l6ResmVj4ZdFIb3PSQyBwBplDlEZ3
Sn/lcfYtE+pzaH4YWwHVmLcO7shVL4jVhh/YKpRYdW1+XpIpfBtQ6q4MGP8YuUPcVZrZgWl+xubD
Oxu196pl7YsVO0oPVDV3ZQg6Hdz6/Ki7AXpve0PERAAuGqckn/aVFJB1iAGfJwMEI8p6X378B2f/
l8jGf+PzPjjr7QX8z0KveO2nhGD409aoNsuJu7h2rCFTGVbJ8bGDxD7uPjXo91/sNcXaaXJAjf/V
bkUA8YNDRwDt6NNRwHa30/ggtkEcuDtVU2GKWf3TEu9yAAzeEy2X7amqdsbx5mNPI3EGo84xVSfb
1nWndHSVcTZ0kJ3MiIof+cJZkep5BCfsbzc/8tGxpQy3ozvSsOpRbL3cWfWagbrq237YUtdMx1tw
Djk3yAtnHG3yv4AOnTZ2mnEsnP4aZJjelTn+3HlyKt4yPew2+UsmJ+Z42TE1H66m7a80NV8+NztM
DiUaonfVkpLY4vU/y5N+u13SGOyRgTqHQ1btfiDIk+F8jkd+y9FLceDqe4Ru+jc8aO/edcf9C09D
Iz6n6Of5lzs4BfjXX21RfbWD1bpTIIbSeMKuInd3u57HaOUWDykiuLgt0h5LozLvaSshaS7mubIm
puHaWn3XxjT545curDHofpFx0cXUdaUV7XikkuusnTDkMOAXC+ST9TVrti/qYpp1d9c15lLb3sR5
wi4/i0oX+F0foNPfv4JZOoi1TQ3EuuDtS7KWK4aj6vzCvfmLLGGkiDkzk32J/+de8gOhNpGHgyrL
WvuyLKcKrGZRLC/m5fQfu6xZekocJ73xOKXQUPMWgXDMu7ilK7RWuauiHKz81dFqFfrsyx0/YfZm
aQRKYCdbJRWlRm1U9y9fFLpF46kc4xiIyNDOYX3kXmLma8ssdNCm3fow+yXj/jUI8RcfKcFeNsdK
zicLbez4cfKZOaaTX6uZ3djMXvIUQR/VxRWRQ9XqPEmKtXbfYXXsA6J+jn4A6v/LX2OnkiV4/0q8
DLs9Vn3+WxKN/yH/1st9b91+RQzuTtngwFrteL2+i9jztxZq5Nim+2J1Zrdts3aclFrPiERPoqcq
Vs3xdedOFC8o4TpB2/GAZHf2rc/vBCtD3CLuK/6r397E5tzqb2Tiq4ulmXj4YQ2Jk9I5bfGjjrmH
2rfINHmeE40ZB7mO2Zdji5yDUNYNjcJaYBtsYXToW7esR3bfcn7WXWdSz8f1bNaWa7eeeW51s7wa
cyHprEyoVASRAqEy7Gx+pje39aO7P7GeRHxcdN/OtrLIqJdLPE+t690SMkabOv7G1kr7U72P+bv/
KRqr0pQU8vLx/rujCYWpvC4l1lx7qKBzGeWhJVy2ZFnBf0pyzla4h3ZkCYaTyJ9QaFmrHxhLROmG
9YPFSv25KJr2spirn7Uu05Q6YAUYy2QdCV+hbASfGcDS6+2ptNCUbw+9knHUa4RivJlQEM4YmqCX
43Hes4JHK4TiRUl1vC4ulKD76qeTwzcn45Mn3+ON2GKVy3sCnkz3+XVq+eHbSCU3QP7p6mZ1M7aj
FVM3m+2cEzWsbtKe9uqyQ2gogJhCZ8JqHDOTRmIZVYnJyioiKXK96MlwnHAm7mNolRoecixscIgt
nD44438fnqmYgXlCiV+xSK/36qen48N3J8TyYFCIGgiyDTorjsd8ezjhq0SYCCp88uToBZXGslY/
8Ac11eu9Pvzx9dHJ4fjl4Y8vjl4evomM4vTg0RmGRA4eDZPfsbLfESb6KOs9efP06Gh89Gb87PD5
k7cvTsaHL58eP0M0hEjDD86g4mPlxKlPMd5O1fLij3X9IQCVeXX46vGDRwlHv1BKV0EFlW3ZyjZs
t4Q6+J6aCovAE6DonSTjcBPAM7NTeTc5wcrqA1n5SIxrBy6g2li+oDzg+FfgMLoEOYgjZ+i9iutU
SdCZ4girpPVT5JohyF8ORPh0vLC92LqzJOEWsZuLonfEEtxhBXUKqB4YLNZiXUrPNJyzrBK+cgta
uOWq22TURQAKdDk3OVfck70VJOm80Ymih8lDy6ce2KwFOE0Bo8bCRbn5fPGZ0zx2+WXsJW8UoCES
QPKKlid5nD8e6ppFQi5+lNqdMJKyYSTJxF5C/vgtHhjn83KhMzPBZKF7NKeMQFjE3Be2sduUAzFM
CS7g2/QCed3DSD4kTkkIJWAzOiPn7oeDjkqjMzjsZ0sCpZgIN7eJTs28n0XROsd5O3eYZWL0phSc
6fZcpLNpgBZAo1gRRm07OX0UpJ3HdzyGVz+Nnx7/8OroxeGzqOeSe76JAkdqFJ2CXSad2VLmKKgx
mC1D00M8Y4FuaLY8PbApWR91MI7f6HG8OX77+ulhzOL1jBzLMbksEGYhEOZVm++0CmHfqE91m5vU
jyvMhsYbE5O1VBRJg1ncYe7xqMezzLpeXIIQEU2Vac/NXnLUck8LgdgHnviPoZoF/AbtstWasi7M
lpm/g38stVfpeYmH7Q1sxYbCAC7LhnJTU9gbi0Ik5Uie9ClypaXX3LqpEOgcwTBvJvMyD6XdjqOm
e2thyqGCDOvWGXHQlVZMTx8UJIRU3P5Zdzoxh+NSBeQ1akZV3r/Z0r+I2u4mt51su7bzjONdO5zg
3QNuWwKq7UMaYESdSaYNLUre90zyKHeO05uGPcm+QFkKoCU5quYgkbQJog7atIl2eyWhgFyP+dRp
m1mtaRDdRXGBUDiU2kjlgYCDAaSK9gCaLxhvVznGq0btXOF7SgYakrim6xiRnWvnSP+UvZTYj1vG
ak2dwkj0GGyvA1BA3qc2KCU8bKDiCj0ZVZ9rWLHzipIp1DOrOTjVJpsGgRuUSiHD01oFAThKzynL
E2gYMGd1Unyqq2nP2XCTDzcJJa+CZqfEXWDYV2ilr5brgjZsCdsIgds5pudT0VTFcn2A62f3ilO9
w6fo4J5fFTctodpikM2ak4VVUx7yMbkykadJTWGaMgH2CiA2PBR9dfzm6F2/ld8wPpJBOOM9dA4j
J6wkUAqph9gXyMvTerlmyJ7xOUwVroCVlB1VLcwE5XFcwwTgJR+8qaWbufAh1PgOZzx8gfPp6M9G
D/LjNx2HeDSKNmd9N8LIKDU1vc0PD98dvTmJ85K95BBEL0KyK23908qkgigkxfRGsjkmA3ujeEJY
vUDMu0nRMohntYZ1O8c8UEAX59ASLMtyHyecYseTo2XS3dicTJrUGJJjHzG1KJEn5wNJZFWRnKIt
EDhbyFRpaqD6qczNy+PDlyfDRP86eXb0+qxrro6XZnPiEYu0DVwYPZ6QKzLesJ65IXExL52aaUwd
jTC2RtjCn6sVcbv4iBRt8yVrx9DMqj95+vTwTXwkDotvKInycm6l5xFOngQbIfuijnUeZM7dHFOe
aBjmyyr1GEhpj89E9UbF3AIgnPgbWG+xodWqffK8rNdoUiL4RsQroY1s1gGnZN+dkmFy1F8kF7UN
SkoZLSjZlkB2GxYoBxRmt0aqOOes14tinue5a/cb48eQjA3bweyGEw+YOq5NeAupWH+n2KG+yHsH
jw6GGx6aCQ90oK5v8wy8Ao2rOJ/j5n5zA+fFNWMoyLFBBrHP0Uci3JRcMCmPmD5aabr8jtOsYUxO
tqtww0APsmiOOY7WMzqLtk3jVDkoodkVasXi44wZAc/i0IzgIMANhZjtj+SrehXEqB2Bxtxewj8T
guL504bMinOKaM3pQ2JInpIYzpAIY7J8AG/H1OPrumezIkKbx8QVZTO/Ic6stfFH94aiKkD7Vwp8
nCMWYfbqBneIxeX2kNU7fcg9g7ido1VNIXYcjeMyQbfkNefUrTIcnP98UmtMXk+neElA+fMbNQrt
lwA/cHR5pGWmCWrbXldTAMdXNlSAVtk5eQxQvLHKczW+SfWt8O7Y0GoZzFCvm28qYguq9AztVe14
VUw+qEwgAemFN7DbLRJeaslbbBCOc9COxoeI4SFqdFDACMbo8E/fj+EMP3x6cvz6J56BP5CBFFNn
SRZgMmhusz5O5l6yD/33IWXHS2Q2E10FlVlOxASEMeWLd6I3ppQcPWPwPVvizIZA3E6QjxNMlInq
BuPh10o+xgmqpgi7iTmk4slOg8VTFxP2kLxNUUxQX0DXAGs/sOiGIkw5AyEChsEtcUIEjHKHPrik
bn9CY2F8JmH07ODbCi/aRavB/0Afm1JpE2jIQFVRKSUGT3fo6W/JM4ZzfcXpZ2kAVmYeM5HuCDRN
yP7Gmx4khqFXzmtdpeg2Mqk6nXDri5nKnOwKaOyknFwuCbX1ZuhhFIBgwf+ipooikRxKoI8DjUj9
wdOMuTR8g+YNSsP6pajtpQqmbYnp35nDw9kGrFl4k7TBK5wnf6yvSrorQW2g7KMGvl7PkcCLFjPz
lp8qGhBqlUfJJRCoNEA56YB4xQREqSgkWXB4GKEVVpTpRZ4M3pSqFRwnqg8Ebbm0z0posf5U5pK+
kr40wgj2gTWtOT1Xebmc/TBbKeB2kg7Sq/PUwSw6OvboEuHBb9e0HDmCbyaV8EUrhZRZoEfDAYvj
Iw7tpz7AA8cMAisvmk2xvNGYSTBV2jAh5gUHk3WPV4q2xwIr8kohqSI4IsOkTstlhYhItmZ3XrpZ
f/a4t5jhrdfJsb05FVxWde1IFhc7LZouwXeoObLLQfoNIqnRUllF5SI2n24WKzopZxIFHZizoFH7
SJA+IkRkr/f6JV7U/dz8vEzzcolq/iDdrGf7v4fV5leRF71JXX+oUK6ksM1cXb436b+cJj+vf56d
3d3L70IdmLDTg9EZPjy7e7r/81V+dg/qf3f8w/jtyfPfE3LQdTn7+fr8HP5/1hduEJezzVXcCWjX
65q0DdpXyhR1d7a8Kw/4cLsU25HkHlXM30U5V+5YsyWDAKeNR+qHy09VUy9xj3k07wvkcDZ3XkTa
KCtUCFmrneCFZpktRUqIfJQniN2H8P5oEJq6StS0hPO5ERcCVQME3PmcjUdogwImgJWXmHNwXbb2
CWC1BGVa4Mf0ivtGyafy5E0x1QLgeQmctYINO61Llj44364rEitaYJdTjZ+mrp8opSSnRTcIlJSF
64avUHuOJaNa7j8EfvcEIUeLlkreaHFboX+gK0U1qdYyQwlTZ5tnyYndMzqHGzobYDTMe9SY6Bhh
nQAOi0lp91xxZZoYx9RJHdfLBkdEtaSrZMoC43QmT94ipM96swRi5hkt1rahk42QmxW51UheWbQ8
Xm7YqKgON9aagWRBHP1U0n0dLC7lNrWag/F1LKkiABJYlrVFdK2eR0cnYSIDXSR5jl4kyG0pMef1
Gq+TURloN2Wy9+i3/5AnP4GohgqgUoG8G8c9ysvD00+hgIZvAhk91B6RJDEP3IwaUOBRpMCQa96z
71thSgZkxuSyliu04jvkTODp98LQcr684LqnDw6w+bNMezDuWE91Cqs/MtXDbFfasYSZXArn5pSM
rbE7PIvb4GSqpWuVtDkFMk2KC9j2llQkvDGizOtvYZLtZrPDnVD0oeGhOTc3SIt2UlVp5EaR+elb
kN2g3DMq3WG1ZXp5UVIWYzzaKSNeydhTKHj9Ojc8pDO6s9FTJktFxuI60goDQ7kByetR/ju+KkB9
ALbzEhle+XEDXYR9+jh/aETrPUnTgDsMzqE2ubusrtHdjtDycnV0+D4xB10uw7lACA9evxwmL9Fv
6WV4rq2bsuyEsJSVsCxP9vH0opTr2FW9Qp5ENKU4R3QRXPNQoWRW+vBB3CDkr5U69OUGaYD9j8il
ZIlThwyWQdEAcatZTbBd/7YP8giN7HxVNBeLpnUwKB9wlNJB8r+Q0Qc3XQwHLSIsm4uOZOfR1XY9
w4RgeHSBIkapYMr5/NZ5k/HtPHOCXCXi0zabnzH2+dIUypGWMY8YAzy+C+XvaslpGxj0VbVMHSy5
H2FL1VdtcgEnzxqPl8l8gwmQlRINraOCwamSaS+hZGHg5ujiymqvkcxRbc2GO7nUXEqCHdrk3jVA
XLGUacgiXnbQawS+ww0OK3vFIxgq5Ce5RKtafcv2DGfNkzPtg3hJqWVY4ea73wknBCLDQC3GhKm0
h0xK44NX1v7ComO2ceJ/74lvHUJ91gRvv6qmAzcedcvQpTE/AQiq8RSLpwrILCmhXMzt6gLYtIDe
cbyRbkQo1/er0FkUF4B/oJDhkpekdmXUa5/0kNI2RCfoEwE7ko49drAwObNvVVub29RWfzN5PNS3
nwZ6tKtC63OlWBeoUqxYpfh96D/cqVN0nWMqUoXywy75ThhRaTZ0V48ZebUG7QAyUrA69idDt5zf
Yx38dXrw9Zny07FU0aSOhDuxLrpZWtooNfH1wVmGzpTQDGunt4+C2LdSXdFaMZitsi0hjUiz5Dac
PwWqwDSfgSvNCbDufsvqBOILLy9y+7KoqyeGa96mNAuosHc+GUQ+yxPbgii2UgIISp8iWM/hFLNq
5ghZTACXvd64LUBDKleNzvKgHnSh+aOjtDJehujBqKiMI48xFe7HQna3YEVTxnMOsOWwXOBobg5N
O4sevKQbfrZeptnWlK/YknxMOsMoSg6m0JAxitxPMgjRYAt6UdaNTsRfpAxPLoZVq+J3Wo74wX+K
yeXYjJh2GpXiEgrHziRmhlayzG9n2/XAFJkYe1nYcFfbgwqkkiOjw06ZBrZ/4xC1O9DXBE5SIKo4
zld7Wp2ZafF+oIJz5t62cltbAMV4y1GxnhcM0duC4NsN4BvF743C91rovYQBFAeBjmJARyGgowjQ
cQDoCP5zCP8cQX8OwZ87sJ8j0M+I/HwHGDXoie2kWLETs8rpzBtwrYwGDA0dRYYOgaFDXOgQFjpE
hY6AQoeY0FFI6DgidBQQ2sWDVomEx3T7oWCGVeDS0ApFMnz8DWg+WCpRdZx8SE052TQowc5vcidr
7Ky6vq1x2SIpF0iRl3AiCH2P5DlX0lsuTa7W+IfTkvnErq2ZGqRDqR8OxM7ksppPObPNOsfDaUxP
ODtFDI+YBk9loqPv6TKdE+QeL6CBd0RmRPG19SLgCRrDyoWFomw+eAsfPZrvQvm7jh2Zblki2WPj
ULvkFYMGONKi6pjkYaudS+4C6pUWnDY5+RZsCl2D1i3eUnKVg2plMcP7xGLpGOfqCe9oViLG49kG
PoeXFdKk6QzIi0VLsQqnOIn0c2Cd0/JvawHzp39Y3SiQjjaNByaattLOgLWUm+J0lk2aOYBroC+N
9TD8Uw7Uc3j0oOdGQDqP6JDGxKCdUMgoAwcfIiBHI2ZWdBWqUMlD+6DBk3SqcDAlV3yzbrKI0Q4+
ifqCLpy3Ma/DVKKknh2/PBmLHYh2NVTvMo6dGPpAx5Rp1aKcNY0ZJrZZy3rxMFPMAEGaQz3Jkv3k
YQy801+70I+NnOsGnkxvJtukEhK/R5ileQn8Nfk2eRBTRRIuI8P+DSEwKJqPmUw1wXDTLhPt9iNH
uoPxP+y5ngF693C/B6dE+Wc6FjbkbaMHUc8/8nrDusRoeWee2c4+RMan0I0D+H9x88EO2NpgPZ9j
ankVddazWT31FNoxjXJaHnoXngwkkk7FZ8rtscqahMnQS+9koCfRQ0HI2k5Sj2WHFDLUERawLK9c
/Du/GyCL69OpXGIeaWDj3PCWSAO3G3JQSXRz2WyLUZBtBuJ+S2GbPNP51vIwiJyvxQY2shd+dUsk
Q7erp92uYG5xY1tLd4/5zcktA9YUtMPnVIJoPtmZSqCjWdzHdhBSg+pSJ781gUIse4m5z+QGauUa
DsNnsJgYmzobC3aLP/J1Eu8mIfl3Ua49X0yPFjYBev2fi++iK5d8X+K9LtK9KuRIHXvJW/IqmlwW
TTEhbyKx+1TobEmXE9UUxBiMKGnInkphCoRjc+mwfYUehyc6T4EyHyJos5Cq9GE8qTdoS/GdN9V7
PVLXmc0CqzeDL2xIBi8ZAk4AWoPv4vO7OBEYq2VPgA33o6fRjz/VspxGkDTIBacacoGJk8/pGtNR
ZGfSoWAEkTqMzGAtKhy2q3lx0zkwBBr1jDdoT6b7Yk27zlhtaC7MjA3CbqqqpvIJ830pYUfV3vVy
OqteFFJYh5Dx4Zm7uejHAUaFJbG5k6BFP4ZOounWSGXS0lAQlcIpjEw2obIS+g1HlNJAhnASDK0E
IdbgRSSNe2SSMU57f6gknBLHexcL33WGboPQ+eM3wu+tYzAzYDc4TLbQkGWtGmMihEWcmlTIWMOo
Ap+/b+3v6MbaU/3nGecFa/xhpXfI7ma+jwa11gptX23aS2XLE5+AaJR7vAORwHbYy5MPahtvqWpN
IggssS6wKW7sTaUK/hrZn3NFHh2Hju8ioCGdYzG1nPjuD+WNlhpBQ0DQ1UxB2ZLjKHcJkWzbQeZg
/+KY0O9ZqBITn2L1VnaJqskg9EBZDlTuQmoxQreejKEyD6FDjvWNLHpScIHPpzjECmBvvM/nyfjN
L+PJO+S3go3/R+0zpCUI14NUjJh0CQWdcZiF2A9uSbkpuG2KKOQMdHNcNerSqx2GhGW4Qa4OGiiY
ZbchTcGPsWtukof5or0Ir03YX1WFqRUKIiYXiQt/MsBAtY5ep5xK62ddJ7IrznMyz3o5HYNAU2FA
7aBLwohU9GmOdTqGqcnpWUQymHyItGSoM/IyxtNsn1MXckfd755vlpNLtNlY0oi5BAAWpUNEh7HM
YRpzitbKRfwCRQ/WB76o+qxWyZxgoCqqlszYLFSnC/UxMoVL7gUxmiIfcDq4k3h1NBuoZof0fTyt
s9tgypCR2KlZDSYaOiuu1fGLbDKKnMRZ0nmZQravWrHY4EKzeiVRhddg6FnbWmDbZdOEsoALdNYh
DZTXE6lEAo1qCKYGOnIWyjP2kRNBqPl2lDyOgEuN5RuUyxnmaeI315FAraOeXxsXU60013NIcV4W
Da1X3QAZJWbDnt+gNkiuj2g9RV0EG86Dg1VXcfvobP0tJ0S37mqnd3a/5FoUJjQGB+BMlxxaa47d
d1c6i6AqBNuDmrfn7Hl17Tp1RrO4ea35kod1iYE1NLdXuGv6dwxNzVYtzCd2zwgHKjifQuL/hbqM
nedtxgHLheQ6U7Np2+/2um+YM9RKlh9aaWSCV6CkChurbz1p7Z1FA1FKQMqwz6koDJ2bbGm3Ihec
itsoKOihWPeWA5RETrGGdcE5reVS3tXR4sNKhx5SELnDiEzGKYs4+xAy4FPu3VA+ceYwUkWrR7PD
6xXBNYrEoEQD+o62QevBRDXzTmHD0y93ya0EIsn4U9FsUdBJ7ERh1ROFaE+hDIurFd1gsJkC1VK3
Bsolb7YORqyTPoFMsDXxUxacoSgAjoKdojcsHK1ju0zkuCXzNRIaGs+dPFGuGX0naeOVeIjhpbtK
aUQ5y5v1/qRqJhvOvz0jlGg7XlTMpZ9cU6nbneCWpIqgwOCIq+WS5K2IaZYwNSgaES8VKQZ81WCM
wLzGvH9rFY13Xs7rqzi+SlxZAAkDWx5aPWAZQ7mt3NIWOl/pmiHf3j7xQt3i9eEIaZ/cUvZxGdVx
ZJMEag7L8z4wrXRpi5RjC0h+jDpaeyw2Cd2CKbOa3cIeb1kNymhKGXOCKPcq+UZo/iCCc72ceond
HAv1uMvzWyp2yKv4NhQEtlDpdkmW6cwisdvqw0eEX7jUFRZDXtG7hUKdMrRaljwcE155F6fDxGKI
NDubBftIeYL4FmpyPtetdDtELNx220kyz/5tMr7RHrWzs+2U5y1UzW06U5WcFHJZXHHvyBE3zD4n
T5pOU6a9krbmKFNZyJqhybD2GdnUmqGVh2yHJGo7pkTTudCs1m+bcZ4Uazxq2FYjTcfM22rkzjm+
ECYBM7sMTbavnRN8+Zm9EI6Ys8t4Tzl/T/gcPeUpUacU8YHYbk/21XEiBamiVN+0Gle28WQ+TqbJ
L0m01dGjLWm1zAzZTFgq2NmvYidk0G+rPBqnR8rAsGvaKrUusUHckqnK6Ud6t/vLsYxQehY6P9yV
Bcr7bOS70mA0YdIwwUwcbuYle5KxX74UYTaNoquhtYrD2y6tZX51ja0WUvm8U8bZpjsZSfUpsUEf
BTMfblIAhvxvZV7MXz7nMjzHtPdfTqIe8T8y4PO3J+aJM2MzBOXSpFNSxDW0X7Q0K5JxY4lEfDrG
sX6K8yLjEfvpn374T7/61a/2D5M5ruJP//mH/+5/+NWvYOdxzkHzeFef22y0ODoO7Hv9DezduuW8
iYMBfMaL3T6Y27UmDLZdLS4qDB27e1819Q7otsYtPQXP9qydnLXZGUKaZRbTbD7H6udz84vWeD5/
1E4QfcmI5R/83apenvDZUGD6AAW0PTQOD68tike/fmIRTCe+Lazrf9rxc94eElOKqLiXZ+2VBw/3
QQP7xWOKTBrtYDC/Ay+lBgTGvT3G5krBZ5NXX3/z6us3VOenqYcX/2ifvvrh9ffy1JX93R+//9cx
OFZs94eHbLnKVk313pxlMLaair569fL1H78CmdfIs8cdgilWEMRC/dAdefPy9XdU/fNn6ce/+cfk
83+wT7948eLV92PkYHYPpOuglGu/NWRHH5avFnzDGPKoLpnN4i8VGIbJH7x1pNo7oBi3u68NzQDk
yG+/+f71D3werXVn0dqEMHtyZ89GWGTEzEIBEc2GgUWFqCgTWp2BhXsbHOqJh5wnqFAOCGqOrTiq
/bPesDZYRGIsfODGisxYl8+ukG7Oh6nMwj47G7ibW3dhvESxK9yKY5Y3rY4rxjJkcDIfa48o7f5A
fIbHvfAON898sAVVIX0L1Z6a5U1N3qX57kRPc9MHdl2jbYgkX8wWOWJnDv9IF8grKIDoifg8UPHS
hzhBh4d9mRMZz7n+cUYPxh004K8jR6ZHU4Ajxfq6TB4jpkTmfq5bgIKklNAwXT+ncqiFcyO4nbAC
iQm171y6YsgCZKQ+zPHE3mZoiv6Y/KutR4KPikFenUFuXIWxbK5Ry9yhomC/N9Iuosd6KBmPJgbq
2hrkJRr0MdwZOEdd6bijRFzffM8ZuDrubLWzxuRnBvCrbIh0JvFUkKCYYfkrpjpdSVF/QcpRrJDS
iR7LiU/3Or0S4gA47t+fjKxS37Wpwft052tzAhyZCybUvkwFLgH0Q0M5uc0O4QbHwGPdUagcuAjV
QKZ5WWIZs5tmwc9T4qeSs9Jft765L/FHIp6olzD5Mxj7K9Jz0wMAdT1rKHCD/E868sycdJDgf4NB
qe8mum8NYzz94b8nxhgkgp/++Yf/2/zleNtBBdcUtQ13EFC0RbUDIkX8PpUkeYKiSvBAVIfcYgYZ
vsl02K2C5pyDD9mkfynQzaMrDE/BMoMneNejo/4X37421whk/9obtgy0IiWpSkcYKTZSXt9gQN1Z
b+/J/mHw02c//K8sC+CdQOomU/Sn2Q//1//oj16GNxj8jnzjvpDCctMrtzn+t534hQzLB1wtgHMu
UFmIEvIT8hk7jMPM6QBeSQowwAwz+2+P46bPqxbMdHu2us4JfXVupMrNYoduorn67fgoxJClwpi0
VhWi6+BreAcJcYG2l9ubEhNu78o7Ao2GfpftcrEHrM239R3koIT5bvAks0avMWvgpPpp9uPur2Pz
n59JB7z7twkBypJ54nBXM4znwUGEEjbLjgw2qo9oHA2ysTvgU13Q2pbL+8V2b7ZSlk8QmWb+AhMZ
jgmnZm6ZJrM1uV+YMwB1F66WO9DdwTPVhvOka/dmKi0+sWBxQ+4O+I6u2ao1Hz4A3wvKv9bnFpvF
3VxQufXCkVt/wQjSox937NnyhBfBLQxhrG/ApI87x8w4zfe/USiYZH6wLRle8soOb0OWd3l1Mb3y
eM+NB8H/1xG6WvgPf049/LfwdvNybW36HBypI5DI4RNDbH40I4cossHAc6SjQlMGoLKZKp6pv5eY
0VQeaZjujqES+rY32kghKjVD/xJudKbBEUy94eROQa2E4nd4hEZxUgs7WeWOPeYCmq/74jsK6bdS
zbOUD7e8NFMMszwy/f4YG6TsHOcXEMjXMjQ5zJheuNSc/RzOGdOxVMHEkGNTnB1G/IrTS1PmDzMH
OAnS0T4cz44u/duof4rM3LipieZEanUlZrIkzLPCISYdAi+nzTnkXxZ58p7pTg3wccCgJz+fJD8a
+Ma89oSYPpV9BgI/0Y//2VXS4VS7k5urtLw/heGlb3vbHX52edPU78qd1aFdgY5sccjOnt2vPu9M
f8l9dYofCMmyI+h1ulYhsl+CdSsHVgWTIeNfYQ6V9MxJsO2aKlofNxt6lmLNufSjPvudTKzXJMfn
QvYYiUfN1xPJImHZnyDhQGJNX4MZo4et9VqNHFNOM8O6XrvI4lVpY4snqRwzbsshj8v6sp5FSPaN
kyqCIQ6R6erl8thkK8oDILxYA2oQ9EZE3iANi7Cs5zZRBgILHoBdGH7WHHefDyM/eOpV76ZXrfOc
GFF9zLCxc4iWSixex72jj5JLn7UqKetGqTH9Px+mBFqr++yonMOTcmlpbGgdDNMcGkhdDuBLCN/7
eZZ/Cv5tVE1EsURzAh0VzprZJjcb83oDrn9D+CdcIuysSigDHr8Pnw6VJIK2GysHzO1JgNqY9bdV
gpNk0HB6VGbkz8fZb8gvFV4btu4AU+ohyv7ZdGxYFP39QKfh/n6o0ain/pbBvg5++vyH/5nFnqqe
tAuwRO+bn377w//5n6zMYwQdJf0MeNqRzmKVAFQP/91UN77oYyTQB/oyH0GB0TgbccGRTTnyvWkT
s97wm4nKegOKJatjrdfYFr6nsJNNta0YooTuT0rC9ZdSyiqKQAk+lpvjqvQzVNQ7JZ+wLVLOt2BK
+gw6OV7uRet5H1tqSE8AX0IqodIfG2D0srEAPh54dc6Pu2opaVTN6zFliPBCCb83xG4JkVHtvlxQ
yA0ACMFIdgtzTu4yWAAVXmo7fIzZ1+FoKN41x1SYCo7HW9P6ADH8w9FZOxoW2Vl2jO3io+HoF1U6
Gp61w1Gq0hTROqlzFhHZDHScNcMfdfx+y/s3v7+cEmFa3FOeniuPEUIWDpMUeoWCyQRMwMU9uCvn
frnzT4unT5/7dPjPrnRY+LwKGF7Vy8qwtNk9wlndF+d/nl7FRVsuNZpMJsDCtwR+haUTlidv91m9
X+f269rhmir421qXYhBo8wArD/nURLbwNw3lO3m/2FWbzQK7yUmFBNLXwr+6qEpM3VMFoFoth1vl
9xGzexp7mtISJzTBJ2iB4TEbI8QMg5+ORK0GpJKt2aMTGElphjmk8t+f+VYt2o6m1PKjzy7PUHFp
jiMTV1AvIHFGHp1TB5klfHZ/dv/5CLSbydbIuCLtmv2Tsl8D+37fy7gnj7O5M6ZJIKnoQJuSqePc
daSxeHigP+ik+qcVHaf5lib3F25l9vzXz2I/nAVeh+d4UYJLnfkS5/6ck7phzjPBeealoMvuS7AA
mxIiTynntBrTUDTOHmqVlVAUE2iy6g6D2hbAkTaHTJl/YN/AGR7JUEYZYYmAsvFgDjbdXUvQBRoe
eYeaRXa3cXc1pf6ecDIsl2uTIPVASG842IQiW8044f59KryKfLHYVIcH8aKFIa0WzSp7PvlNBle2
d/WD7/j7yjBgbjCSoJvUTMwUWaamcI8diad9AisTvAXWpeOdeAdf/OaZl063EY6CwBp/+hfLz23q
28ndotmZBn/64of/R/NzY+LkiK96CYtGyCB/ouJ5/Khb14DZDXXmwxCSwuWlDVA90FEN/vFfWDwr
lSLwNO+Y6dnKc+egXqlKP7kY2z4Vj7mI6JsOs+pi2cW+gknNkW1nHl9SPIKSB+/L2fOx3V1+wCcl
3MCYvxZYuBfffPv61Uvh9Q9v693zya+f8rK1FAsK7WKbou/9L+UBHSoR0MmxsgOLMOSucdsl6J42
3aGWgi8prbVwSNrwlZdsh9zqZ/TpZE1yiZJqIW5+pZoswCU0guHGr4Nbsw5qjZxO6/52ZtBOGtwY
+twZU8kVYFig5GEL/0eZZE9AnuLZiWxs1ESkjZUFkHGDoa4/rtP1lmpied3coblZ8aXhkWg7miNQ
wBmQTWn3KS0ubqVww7qfcEbs3nXRxQO7C8NPL9J7nVNCCUmOVVYqa6ghV+1hNVFhpfAykcrY0oa+
iFPaK36OUX9CIx6KEweEx0HtsZBVdG6Hfk2C/MpVzcFDnAOko0H0xfG6eqhvZofwo3RdtlSQWBWk
MctUYkwz1eK3xnnhZ/L20n3Tpze3nw0/o5tN6YbU8nKlFJMtOXpG1jdUSqqtsduoXLITjE3NPYnM
lJiUuxXrTQydXA5RtxI9r0M4n0SyWkwS7psx/Hr+N9hucFmeUJkVOEf2K2DmgZR3IMLJLIIzFxvS
hyeo3VXbsNWMQPNeTFvBXu3LfsQWRDHuWvUQZK1UKaOkBbK53hxvsyf/9Ol/vviHi75uOb+AEHkx
XvLgU+WrL4yCl/BJijqSRnxVgpmJGRTGr0SaY69a+IE282pZHXJ+DLLqobytm4cZVzeONvgMJC4u
j11Ujl/U4EzesnZdsRiA/GQqDzsjdENDC3tTBYANXFgqQdTHv7JTmCHYP/3uh//FsILsHc/JkMrm
pxc/vLxGJ2dMX7K4YfcCbqYwMvUe2USUDTizPef+xThy+ilpoShZpuIs2THD94SWP4FQL+1fTSm/
MIUBOc3vMB5INJz8J72bK+8RKfA9OYWPs1c/vH4z/+a/DkRtyqKudUYx5Okw9+tZgtcIv/99Xb/7
rtwsJMQCo6CON2Y3zAnMY+Okq292hnMTv3IM7sFqCc8QLehANiDx7cLc/osVzOARQvVaaNFGNnAN
l/c41RiAs4EDcg+unR+ha+cV94ZB4BerVY1SR45pA4SfuzUCzp7w8VrKmYVP8uH+wUgwrO/CRxNX
w+j8fFefH+p6057Xu3PYuL4P4oKu9CEmcp4fDFtj9tcKwAuH5gjDd/UOvsKnuHcY1VBXAnqfWT60
KXbwO9BUfvvFm9+jBAlYk0b2uq1J4Wp6efsWr/ftCu2xQ7uhk2nFvIRiT0gUJIx15zzjZ4ueZLh+
IGa6GGIS8+iikv0xB3OclU88Gpws0e3lFhY3a1W3RM4WNxj5I45RcjtKWvFhbS62oq+i8JFTeLpb
a5wNMUmdN4c2owDXkPuwdbw7v8V3XzS39rXIjfZNp2gY4+ChMCVRMhZpzcOnhtyDhoRjuBvXA49U
JbqEqYROLn8A7ybzvQrO1KXJ2DKnP+FuoOLwX+A8w9yOSXhXrz5CCnMVRo7PurSdODimK3A96pw5
0nXhVaf1o+xRMEKSZFXsVCzAgZQL5LgHmMScq+ooBJ4UI3g0irJbuhwKj0MurayLpFQchL4CjOLq
krsfKaOHn7mZyc6a/OOPz5ricxecQZOyshtQL3znTHrpBCO0TO8tUFD9d4hZtdkEmUSoDlne0JE/
3GtuL9NeS8BRW7ndlhpnOUTUjglSpijC3CxcDPpl//BTnchj2Cm2SNL9xZbkfRUML1B0wwrQ7MN6
Jcy+c3VGoaoIaYnWkwCgoL33ZvXtyIt0BGoQAPFRIgAijE1QnZSDJQjy70pYVbo2c+pIEXr82DGo
evKU572dKLsShGLJhz/+oCMPtXwRHEVMCTtzfIreI/vtLFGlH1FAuT0EJXo+jOBgg60aAOHbiYg3
xgRdllPBwqlxVq1KxGnH2uE1EHx73D02S6pXS8N6gdSocwB4C26zgsGyT73kxpTeyzy3lgUNN4Te
/o398e7OckBJXQYUNL9lg4XHHj0jKCi7lT2N6IkR6kYr+SieGYEyINdrPLlR86QclB709j0ITT2i
st3UeWPmDP4wz+4WO3RaQZHQzC6wUQ97nRGRo2ENg0c2aURLcYHqY1OtNss9IUPcUwnYgBti8dCS
zw1IG2CYaLlW3YzhM9FNUqllTFctEBn4RYMGSUeWyDY4o2io9izHOSvaHrcjb0vRFSDef71BiO4i
y7m5Rq7zvFAkBqSws9QLQVPgN5eUUUPzSdgZxS4EF4q6TDg1qK//lZ3e8u3esl9xkcyTlcIMBvfz
TZugB50U2xL5pcfV/BJtaxz7C/OBjZw17AB9RJePDCMm8mR2qHFENBDjQi/EJk4ehqFOMuxlfPlV
lLSCmaf07GHVspWgdLz0KiSLUOxdqNwBcpioGXw01gNMmc9U/B9WgCaWFkLfqD77HuqhnecISmgn
AE2jz3eSep8rizfamCLLIB6NiiDX+SzOoFntVmM7xw6GyM32ZTXtAF5U0z6jbd2BaozKgHz49Rdf
vfrqizcvfj+0wYveUgTVl+aiy3EUYzVHY26Wed6im5xIsy9+/+rFf331nbSM1justjCC2vnnw75u
9PtJ2oF9099GbxOdeB1uN30CGJCr7JPs4sRT7HcunvchaEySvUoPWGefGTqHRcS3MgSAthrHSqnd
p1MCme2nYhyDINtHTxTDZfXt06KPLHTuTwU1j3v9qsvvaanTKtuYjnAyliRMjU0np242iiv/S8F6
uxxmZEBFN5L7wmmmXJeUy5G3Cj/uOB4eawyyTnXP9EZlqqU7zY9EVl70ZOm7GMuSJogmaMDTWg7o
FbAFaTXHH8zbF/C2Q0cC778CTSSynh0VcIF0DfAXpJv0vz5s9/wCNth2/yYo5aGK2rLgb3k/r4+H
Zb0lBEPH8eQ/rj4psvzHu08Kw+OznHbcfYe+UT2qGiNjmgolXqlp+Nfq2CwCwE3WjZACJ8htIVWA
hot/+gWkZopLThQwAwTN6UzPaC51FVFZgsXXZaXaoKwMBHBB+WeQhZXns02JPxIo1IArimHtcm+0
8Uk37Lkhoat2JNFF8UGX5nAa7WqicRsOC3yURLqT79K3QIB5Gsncxy3aNDDBd29FWNmlKXqFgL+H
3EtVn6BHK9HLuG38t2sGvdcvUNerMjSQ8tf5gYXanYeDV9vEAXSjyJoLdNBQyz6Ayd2U29qIOKsj
BqnjtqEMHNs9HFWSShW/1JbmTaJjVB5dhIz8UO7AX/FQGjIz5JOsWma5U6pQONuBBIpXj4PHvXj2
7JTMW9z1mfR2sn0HhGZHyJ1A7qvikQAKhBY5EdvLZwFwPWwP6EfgDsQZIRMKNsN6gmKb1yQusHwL
AykATmRzB9IjPjhBlSvJHvDP4mRI+eFnboebq/VzvFbVCMe+agVrT9GUvZOdvCH6M6z4b1bvQ5l8
7+kR/IwUI/DtNx0JI++oLXons6Y2vuE8zOZk02tLbrUo6d8AFOqmXoCBGC1dwQIqZR5p8lCs0Gjs
rMuL6aQpkLBs75RfrCmBfrHsNZBKvGx9AlBlsN55uW15Vnhtig72HLkuFWsey9tmS9TNIbBM+Cme
1UKga+aIyMAomVga0mPT++7vQcXX/TW8Ddkk/3tn2RgV2Fm1lcFOjlpNGJa1n3gZn6nQJLC3aNo6
CQ02Xd+zC3tQmz8EC4cN5TQgCWzU4PxQhI9a2Gg/B0Abp58MmF0IjfHUhuiPxOL3PSU6aiUpRzGN
UJHxANBbu/f7Iyt13s0onMGw1gXx1onE2Ps8/kQQIQxl76rPp9aL1vqSaHLZfQnZ0SqdZC7VjHk4
vkIrDLFzZANv4w5Y9n2wzjgNKCJA+oTcjGdm/l90TqZOYortFJFGcPsgMwOs7858Ue1uZ8PjYX3+
T8O0pkHPatXewGHIu3RabXpVdVvyo+gS9trE+CRWCPyvrPB16QaDHKTHvXLkJPKsV8XJ+89vrhiE
l3+JcfVcOYdiTnBIhsXiWTT3CyWA9eM73Of8SwzfRar4foJZKXMuC2qDm2ERpxA6dKenp324j4nO
YeCOfOrEd8MfeSEx9tOYTPj1SwZrboMPTIf7sSm/f8Bq5bMZf+BXapjsU+vD2kbm9FbmTj3U93GF
kE7JLPn6tqtKezRt01xFHKEFFRFP/HpXER9vWBiAdCFEF38YPFYVl/4BE4+udr1Tf7g//E31m++7
G2BWjqkiG6HNgzDPJxAR8OBI7lT27VB0b9DFDzryC6wO+p8VfQy0KJ1VaTc57+SqjdQ0ei6YEjsB
wqti/9BZidsw6lN3uEk+MlKZw+UZxpuJ2mK/L4i1ULCgvG13LoKSdh2ulidtBl+LH1HkajGdWj8L
BjcovOH4bone63scLVbP+L6TmxIQ3Dbk8aPQ9uhooDcK6434Q+RkJZ3LTNK6BHlypRFO8TJf1ptN
uTzkl/dXYwiswVuVk2prF9HuZm2CUK9doIQVwgUc2xn726Xyc3kkBMaq1gO33DTogmwLVMM43y4f
dfnxNXtswr3q/n8859S4PQAbnXnHn0l5CRgYEbyuIOuE5i/6BvHC+fNUYkNRKctE2W7xR8Wjmd2a
I37Rdfs8ARM1C6SmqM5Mvdig52rr8eVKz0zVBpcVpUhEjRHpsQiHh3HpJNMStAZmcMR6YgRSCj7F
LrgQA1MPVmNTkASsdZTBkscwiz/FKAd8m8eTRs8pIYGdumoHPN7cvKRRenNo7jp2rgzuO+8+lzs4
mKWNSDe6CsOrXe6vkneF60n+8aanhxcd0DWMG4FN+jG7lmNFKcsHA/qgkaSHwPJxahCqkJGdWyfb
WjVBm4ufjfkG/93Ut/RqmDRt8Gdo4Ph0bGvm7FpPYfc9lexaCYEZCl9eXEXTG9CCj3sYnGpndvPS
HFYcpj8xo/Nzpm/1bvMwukquY0cbnGvCm03dkr+odkhL3YQuT60Q88VKw5C1orYU2RKxXD2iI/6C
RjWNRMLIz/heWIkURqdZeuGqmMIE2iL+tOhqBzrIM8y2BUoFkBI+UehO5iGR1IM0KwFHmdKvihM/
18azkUdiu0ZtdDvDzT//W8THx/xX3358TmLjkXcUlgopS20Ku5N8/ZBiGvppyGVo04Tn/t1mHydg
3uBbDx3s/BxV6eV2nwJu9RXgCcwxoFo88UNX1YzNr4rPNv9H/mXkmlNrTuELLqTB0yha9sp/P5mb
KQTcSZ42Ok1aD5/MUwBR/uZWZLQFvCkx8hvSUVF+ciPMvF1s1ueUqzDozBMb64/R2McWanYwDtUB
2EWAH7enbsWQrxvKHrR70PlXzGRkOYZF3yE+6mJ/INQWsoC3Bw5KpxTmYw0AWXjQKCR2zFW7CVdG
0SPyBPqOltmQKUVKW5T6YrKqvfZCTrbzO+5r+/Z4gKsgV9mmu+0c8QBjgG5aqtShgv6lz1Un854H
8Bo8FDNoNcUndHyz2N6sFtNEPd5iPToay/LhMCwbBOwiRnANral+mFDmaq6xjdiIR7JKcsa/iTh9
SJudcDZQOiSLRrgfnnEznm8bmSumP+7O2h/pppcA+IQmUpp2fCBniQvnqfUnqtCJdlfmmlJzQh0w
z7p0P1yQ2ZBLquDKb1NV4i8QrSPspRmgtAKRAc0Ai1WuW+/uwOrdr7mehspg7rAoLH8epJQvin8E
BZF310p3EqYE9ZkoM0BL+2QY7Hp7gh6Vg4Fpt7OhjMak/5XlQEFbVB6+vKvyoGjPQrzx5zcPOEvi
Eo2LFGlvAuGPSk3m23Jbi/SbcHmiDyb9Tk/23GJhz0erFJbPCBx8T6GHCP5bNg1p7lTD5e49xVeV
hNoeBqGYx5ejb//1ze+/+RqC0BBNmqOx2nJPWnSzcuBkTqnRLoNA/CavMZZ5ebcytAe0+O8ptFlV
OgaoXo0g+e7ucmQKYmvmX3eRoVdcNjTPhmPvjVOeor7veMNc8ORbnBSZjlMzwtOszfzJm3lz6Ine
QvO5nS61KMpEUkYZzXpr4CcpnoxfKR71ImUAGtI41HnaP+8qZwaoy7EXIgjOhlsYqo01XB4b8+Vs
OA7UejpD9gWeVghjndDmBD3qBXhV3g0D69E/qWbXz1PfPX/0u8iXA3iwGUYyTuA/QSgKHhhLRryN
IjtgfWFXf/18nEDRr9tyvl4ZkqtxCjEs4a7affp8GIXIoIgAbU3uFr4LOdzkm2AA64sJthH0fP08
etwz180vnOvmg+aaHNFMhw0vb846ucij9SyPUoJAwee9Bf/G2aDztjpu94TMndM5fmjZZa6nJB5x
Lml+R5eC9Q/MxQ8QXQDHepudm42nD7iqns45Owyu932AbhHodsq11xog+axCwTHG28/W+wiJ449k
lnyF9sYEwgFWwk7CIMUQwT3UAD4msS312u4KSq+wHxc+Odw/3FRC0dql4RgOxEqFPDGE4TJts9ki
8Ft4mLsvQa8Cj5JUFYmq1FT4mXNUZUFn4ohCjyNi5tlUwc5vfhR3ESLvgR/cHWB6Gemr3nE75s+1
2RmcnhrxMeumAaGudDlSgoruQDjbQX9gqoe5smBB1M7xABgEY6q/COzDboXAAw9zghBMF/XGyJsI
rb/IODcE1JiCTutoNIqeHheYaPenRY/kzqJ0+w7is3lkZsXA7xzF4DiwHiV6t07BxgLUJ28xxxCs
B4oBwu/1bYb0ZhrDPspuA2sfFYpTz6TQebFJNqy3bHUHj0B6TgSt6Nym4YTyMvp7VnUp8OORIMFf
tkkxi5tDmvjn2Ch61hT/LO7sAIqGxkmrUekFCsTKh32ImbJ486W1tWyNoL7yRGR80rU+Zp65ROcE
d2zc4fly6BoMO+X8Cj5O6Pg95gZllfnuuL2BiMw52GUlVNTWdT70+YR3ZbmfEXPc1DX4Qc60j12o
eNOqstlZCxCP+3EWUcAnHRq4J+JTDVro3dpMAXhwiUX2vghKxsq3J2Hu3kzl9bVPVGddM7PJaBx1
VOlS74MICXnlR+DdU/TdlXeQU1pa14f9yKpXAYr+kduCbqchE8GhxTd1Xgn7xd1u7u0MQmwac/bR
OdzywB5ePJs8+7sdUI9UxnQRqNl7TGDM3Hi2p96cY3/JiDcsfEfrchu6hJE3wZC/VeWr3fv6HfpH
pXM1+vepnT4d/2e4ZwZZ8/YwR8nk1MLY9kymNU0wcVTAkEez7v8ZLpyTgB9bLp4Dp7Mn6lg3tAQy
RYZ+fDp5Nkz7EANkz2j/sH+YawzzUUGQnb/5NR49JksWxny7WL41bFretwGgzvPf/Dq7qQ7EnhDS
ULnyO+KJHYCKZeQXc/cPkzXfU/CRjBuSC2Pdd3XzDngXyGKJ/AtV8tuPutvyoNjXTVnetKuODX1y
q7YaZ3asb4GTTQursNgTlGtJcPH928wMo9srN6u3E9c6439PUq5SUZIzgkYmvL1AWeHtt0jTCqUt
YpKRuAwPvgLJ5IRkjqbUhL6wXoIdvP3LsoO3F4/811+/efXd11/8ARbhHKS6c6qY7kowngOO14oP
J+pLh0mgYRZAFBY+DmZsQUdif2mXmzQR6BJA12gkB8/3WV0eDo1EwGx6SjnYCGSsToy50zTJ88eW
rwI2tTdUsrMy+Sqo7JHYdYVevMEE4WsVTa8whxEXTED6Cb8Y7xLE4PNSSj42Woo4lNvySYALLDBo
YmZPBAV4Q5l1Wv6dURmVpeIOEM3k5f2EP/B5i6C7ns+kmQv6RozHUGZvyNlsqHi3X9RBSmoS2tg9
kOkDellhFXdvDTUxBPEAx02w1+Cqxz6addMrkwiuN9XoAcsEU8/DvdIZ7MTyL2ZjwUyk5X6Cf4AW
CyZxaB8DI1h2QMSK+4V4XiDXSIlfyD/jPNuUh5Fh9253NSLR1206ZDoZI3UiPKPHqortQUYmYTKm
R2NMHbaTEBkYK/x9YsCWgBDJxskwFF/+IMz2CVgPqhVDRQyn05S908IamA8i34RNKlmxB+Pghy+r
k+dOPGRA39XeqQT8eLOsHxmqHnUot8MYF1Fahw3gwF881isyuG1pjReU/U9teq9nnfY3Pdm2R9mm
6IhiVmQGuI1jU/p0ZvRBx3iUoDNw1h47cwV51O8xyLtcxf0qV9wOeE72YWjrcYzSXXSVA+nvisel
ExsEAhoGcx8+pO6dTGN8gM1UD7Nup65pHIXQTVuqtX9uh3BOh3JuZ0KjOsArsNLkIbPwtdA4T0lC
40MvHq+B5i8Rn4fPk9+L0Qr7OJamxvyJNjyaKexaYNmflOUkuAW9naEvQbKV8yvrbQz9mD2zPYGf
1JfZM6/FxUb6DL9tv+EPu40S7YeWetmbM3ZlkmqjgnbDupL8KCoq7buS9EQdF8K6CqYxZngIUmJ6
RUfCfq1gtVJVdGJwEcvZ3VyEK2a5aIFb6OCbwx4Q214x8Ldhzt+U94fX32hwL5qruUDlxJaJ5wHz
wpPLiaSgAPn5UgVmvnNyb908FNZYcTFJSR8Ly6iaXjULgjfAG2OiWzwdW+T9wrpYyMidwqVIT81E
hBwNqeOXaMvynX5LQ6Ic4XKle5GBMbQFYXsA+g/h8vI08wTr1eUPOgWjLAROsGkaWknmqqEUm65Y
bBv45r4PlM+9O8HX09HrFFQW75AoolBG3l+f/1FX1fTDBtD52BhWJIIyKr5cr8O8MRxhve0aq23I
6r781eu00T2PbXiaHieteJwKFqjWPQGzY0dBFYpxif3ATCQsrKaZQBfhN8XfGTIMOxjChRnavj8e
EJYNETg0v8PTvFgfbAT4ehcMh4GBZAIDYKAOuBCqBUk8/Kvmi22fu45J0wwWJTGuPtG5bj940NxS
xw7r2lp4SN/Wd+BJeoHe2c8jxVlo1XVF2barLB2hnfrUrWtJmpoQNRs7Q6Bpov1IZEW3PWbxJBSo
vqOyq3HycMTIQqDnmleEYNOo4/H6qh7zDZRATQvOm3zTEY0MyzQs7xdLllumv+iMWUZRNqq02nu+
qXH+5JSG6YOMUknZD6StD+xtF3Iay6b+YvWgQmKfuPQpg0juhChPTDRkkO9PGbLbvcKS28IfiqTW
lODjDz7YlgKfNZTzgR2pQGuKy2w68tPLH/4nlzyUHLDLn179cH9uc0056H/IOBUlFAWTCZz9wcCq
cQhKv7b4/si9vP5mINmG8bHSlGB5KRV9lczG+hnkmBX2nzGLiKGU7wLSxsZxpH2rxWERE/AAvxkK
jTMJmE+AvJn3Zg9Isk8qLiKhQFWIRwo4E/7xzZfn/wQgHiOGdw+yAkrHJ1FXVS5YGiQEUNrp6Zr2
38Gu6Jl1mjUu9cun7QOmjG6yNw97ucgQOzcD6AXCj5gy0idWVJw8PYM97HSzRwC+EdyIn02zEWD7
gCLtgn6b69H88Zz+MKdg9LOwul++fEHb3qXe4weZmelD/RTndSEJ9tpzSuEEe35Vkq9E3XASjQ5O
GdB7y8PaCKMHOixsed/Vd+gfgo6+OILILxrz5wKCj9SRuUbHqD5Ed8GFuTLufKVqCQu+aB6wp6Bi
b8pJlr1eo96NumEWz/eX2JfLal0Z4raw59oIavQjL2ySQGiyXPk7AUy99wdwKS7TMhPZymQUMzsg
zXi7fuHtTmEm8slHs+xZnMHd6ycO+EsM8b+7+SRMu8mVA3ocjWitPQnxfAZQFOu0C5+riX+F0fpm
wXCMdTsxbeXe0IsAPwDXvQPLxrxROEpo1tV7/VLqvAq0tHeJ+igmugjQ7UOhLHagbCdriF3O9dAi
K98336dMexHnCl+vgqODhNcc/8ocj2FEM4arapU91EdCweQxZIe7aln+NrAD+/vLbBbcPtaJjpcq
2EG8SmiqXZXvd+JEBg+/mX/38puv//CvRTghZk2f53Ccn0WvaL/oGerG9zKrOErCpncttzfEq3H2
st4dvisXqy8NhXoNMN397lDScz0dE8yBVENsRscu/Xfsv+6I2pgrc/TDfQl08LjD2nBZQV22y477
MctP7FOoTydmmOKAtSJJkrwJ4a3dOQ92cdMHIdqBMbny5j1U5Zw8xR84vVxL0QUDQrybd9/XzW36
yodFwBI2O7O4kNZNdQtuCXTZuMOdnHYk2ftu2l30UqPEilGFdisX0UW4FtghYhh63bm5vBB+8oxQ
NwbccHx/3xzX6xKmAXTSQO3gJ2VQcheLsrp6FldzYdPdrbYp6jJHkP1WcCvRsXk90OFbeD3a9yvh
XZE/MdvGdmqEmtCmVgDf9A4zCmc5Jw3gVTRcgf2S/YR4RCPNEaKEffe2xFy25gSqlMCtZIdmy6bh
CQDSdnGobtQNqUdrPmFtu2EfcjBZUIvwgqeR0jup7+VTzBys0hbDrIrmXTabt32Q1K/tJvFndEb/
gNYGAATLwQnGXdffvhzjvN6UO8WsueYLhH6kJJzPZ1rE4VKyqdLpOIPRuDRSN+AtFuLFyWKHgqy4
9ph7f4XXInaa9r3a8m6HjyngZM2+zokMiWsacm99xYeNk3tJIQOrLwNuroinn7l9/cFjnj+QVLhc
bFW1YfIhKgDB2fQrAEqWUcxsDQGl7cKgVEIVwkCmZSqHI8lIbV6rhZ8OM6wyVpB7TgoJ7gFbaw8B
QqSeCCay+N4faKh6BHOLFjVJmB6NHPw4lvBb0SRcrRmzx17mrdjMoEXziV1ZdmxiKAkS++J9gT5Q
S8gTAI5FfXhjHoXPG4imkVCcghJAZX7SE4lRAoILIUg2/zpdq6jUfYrgvfyHjz9B4fI2Sy+5l0PM
bJQbVzlZ2eKU3B2yt9BAfEdmhJ4nQdjrBzdqKHCqPU/kw7S2XrKlhCchiDjQpTyc1v57WuYVcQsn
MGmHPAVI5FZgYOdiRtoPSufDyTuc/z1WleA/8YUK13rK4OXkh8SI/jhhK56xbNEyo9T6Dm0Jbg9r
D5m9WIS6M1UuNhDT8UAdGsb5k1qM5/OouukP3Hw4D9p5lXhtw0ex5mGQmN4hXh+671ybFbD4b2Ka
VqkIPClBwShhAfexD/8qneXTgb8/mkXN86tk8zQEKZFo3vs43kF28zi5+dim4mDU0V8emwaOcLtb
7Nu3poO8LVrIVbU1vPJfSsv7BhvDNMd7mrx+Fyt6khenLWWq/9h7pnHfH1ZM5r58mfMvxZ1i8k0q
aQ4FhmrVlgog/cUnX768wMn/8uXzgaYX28UDMJ07hAjJvv7jH/7A2if45FmWY/ZZChjQjhv/L3tv
tuXGda4J6hqrq+qmVt/V6hDSPBEgA2Am6REWpCNTtM02RalIyqIrmQdGApGZUQkgQASQg4dz0e/R
T9A3/SL9Av00/Y97ih1IJCWfqupuLZsJBPYUe/j3P37/TNKvytEqlz3WVE2Wt8j+IuU6zI/yJ6F0
4aCDYRYjUiIgCAlxhbgxTgtzIgceZxi57dEVn5VxGFfLnxblTTELc6KWy3GoteOvqs5rsAmcr5Wy
C/oeSugnPqQt5j9HNf2Q4jf95zQe+IX+Br/BsOAX+Dd4roOEH/VjUALG3UV3vGv7/O8RjVa2r5IJ
b0v3hqbKYdTYcmxsgTI9xyk8dHJKMuEIiqAe10n0bY+KLYLaXVuEpqpZiB47xXRqmiX1l9SL/YGh
7uHEKmBZ+KpGzYw4n001cDPvbhIObaQfohj/UQ1cg61j2j1s0Yt6FtXgmiI6mBIJiCl9bCNVwKO0
TwwvsJ2YIzsx8ncnNIKdtrumqHESZR/pYmMYQ6jJ/ZiphZ3XsI8oc0cbbte87R7J3FkKiuveZymU
59ltPtyx/u1rxyfPrt2Tf8O1owOuMwZfPn7tfF24xkfE8wEIn4Y0KgY9D8+Nir29Nm67WG1c97tr
44vHauMcNNT7iMW5KFr4WPjFgdsiOcCo8Cg8L8KatBJ4o6/NyVQV3oEOY5SFvAtrmupNtbLDaTBE
djfH9mnLJPvKf+dctTGp4TEM2KmW1fC7cTZjGzPqdoPF7+wmsuECooFbr9kMLkXMrBRc5ZZN9BbG
3UeGB23neA2ny3kAqrPEF5HM4t4lBxFL1ItJECrcALconWXh0u40flAjO7smlJmY9NDWtbPcu7pW
KarhHa2yxYmLuhP2kide1pmG4UAE6FC0KZZTx3RIr6fwGGECFla4xcTySCYBkmIK1IJ0Rb/owrZE
fVhjY7agxw3B5OPFEiua/032HXDFfRK5brueoEISAgvXNt8tp9SEdnAqA4lkkLwAMXc5v3X1xmgr
r5u9/o1qJBPVG3MeXI0wtaqQaju9SLbLGaIjIhlGpWTylZVukgxhM1yJjUPZLiab3keJM1ZycQWb
HXLLXBK52VdslpAjYlz+miVgLmwJJ+mWwx25eV6Y08RDlUYJgfEcjzCT7ghbGLIWRoxfI2Reie9y
B3cH78WtxEdoNBGdiHgA/36cdLWDc9F5CVoya+ncdo1qjZmzC+zcXo1qDbHIrHri3Fb4OWK7/u+I
ozBbqsFFxK715pTK0bmDx4iqwSKGYbN9GszGjtF4svF81pAFGqxIVCvWNpodCz1yT/59+Ix/K371
H8rRSMh9eOW3Hkf/HMaCRDwhIXa/+lJE+/ZpZW1wJ+3qmPicXR2TABK/463+lO/vxrk3t7xxAoCZ
3p7ydY96wFmByn+2KvMOw8zyUwRSxPiuzUUhin3dHrBsGIF6WiRsFIbFO2Po0WRV1XWJwA5oBSCr
k+oHO2KwOJusE7a/VApTVRRw/TMPYptWbE/BJ4Dn5xcs8ireFbIA2w06DRcaBbtFXBmye58WG9iA
xMasJ/UFxaRSbfh7jpmZ0dSOhrPb5k1PnF4LHDyxNC++EWMBliQDEfEyNHfsrC7mCeSq5P27mqh2
MhNvdfzoPavdh2NEPF+TPz09c/IlkoE9DPZpGDKgz3JNaOnI4vCqrupiO6uYQMCNi04Jaq+X8RGa
PF7Fty3hRNY9n6Pq1GMniD51/GYdty8xzfODzg77P7IutUFETZeb1PWbddtLX333Mo3YxINSj+H7
Y3yQdj789t1/sD7ZytZ9+N27/wRPkUKVeGweyxJeFPMVLjFukA+/f9eRmjeL+Xmx/PDi3f/9v1Cl
DiP0EgMJP2A8C9aiu7JYlwgogt+hGj272CzQ2269nZLdonN6m2zJM6MuEX07YdQsccaoB51ONu0l
F9UcUS8u18VlMc/162STLArM57u9SYrtIHlyePirDo7IepWjE1fcz9v3gqBIxywGZOoEQrI/9k0u
jH0kaZPycjeUlVJqjBtStrR9M3DKNAVnBrByXLRbB6lDEwWXTf0mn4wo9AqjtlcgWX0N0hs9yja3
q8JzVLjD6I0sJDw5Hh6d0AYdpzG7or+zg2ROcnFgdTOgu4yTpqAIa0ioT2uMwHHNlJvJOXoaG0Zh
LA/GHguspRwXEOZ/8NBJPicpc+93o9Hh/PmplvWuHI/rTTm9vBXYad+WrlWPUzgsWIJ0kfjBaR5J
Ja6ZpDnP7ItKnuO8l9umXEHBDSCQtPfzpvcePDO5mifnmeNOwQ+/lAyi1o+hRUyM+DBY2W2MboXj
8WC7muHN30iUFrbV6hZRb1fFOoORslNiz1C1MCEMdivrQp/pPaxDgNOzPZI7AAv0yOF9utyMhDvB
5eWby2mm0yAhCsNCVZ8E+e6dELM3RBIF5+iPZV2C7J4phIY20Btc4S9BwLVShqyr2UHneyQxdtN8
msTZzfyeJtfxgzUeFPWcezDjdMe8uUqxY0O39vCOYiQoNc9S2NDcWJ7ANmabZuoc43RIxzV3fpBR
wi+4C/QH95jBT2IE+7vxyPk9XEO4v+H/7hVwzwXCVtrWJUeWbo0ixOZWXCDusVIHiK1F+GfJaj4B
gkT35lInSt4CH2Zm9mSc7qzAcOVNzf608+J6cjhzjGEMJWYrg2vsyEFYsBgY6SSfnJ6u88l0XS1v
F/lkNoOrvM5hAubFJp8A25af5qezKj8tz/NTuKEvc+vrmZ7Oq+nlh221KfLTanabQ0tATjfVMkeW
EfOaTWH+inU+BUYwxwXJEXLeaQG+nq+r7SpHjEuc69ksnwFnMDtb5phke1Ze5TP4usmLBfzvFLh5
p/ZZWcxnQA/zM5AYckR3y8/WOInw6OIov3iSXzzNL36aX/wsv/h5fgE8aI4T7TZR5iVVycvFeU5c
E/xb55ens3w+OYWRzItz3AvzMqe3RzKKDo1OE4vJKl9M1h+2RZHDO2zhn80kX6DfLL7tsoJpWVY8
+GXFA3TrLysBhJUDA3Wq1YanhQ1a+SpfrYv8Q14LgK5bHTYh1qoXIK/ksH2WOboTXRb4p4KR1ptb
BN7dnsL/VzkDizrVN7Rym1mOqlta8M1ZVW3yzQX8D2dsU26gxmadbzb5Nt/O85vFytsEEziQ+A8v
Ak3mxTpHZ8JZcZOvJvAkrydQ6Wqy5noCnZ2leUrIMzeaKkL1vLeh6+WOq6kXiVvPE8olc3mteZmb
ajhMI3hj/F9TzDme9tNee6gCdYgt9wwXtp5c+8MEnvW/bmuKWKtuGHViOlkq3AQ8Vo5OXeOxhDgo
s9CwnM63mIGdBR9OiWPd/HYoc6Fl30dUbzd2yuQPRpMeu4/CN8G0TltgtK+K5IqLoCzKzv3yHoNd
6mUqaYjvYY4+T/aLQ1NjimXuZsSNBOk0JtOLwufK6DkNkhw5/vp3BHhDP8Riw5hFqJyR13GSH4qa
asa5iflD0JcOGf0B9XMAnDJZE7ipj1REiiXzingV2y+IOIbfEv6K9DrhICd7sdsLJqdsHoEGalbW
K8beQKsEFkjF+vEYj2KdNAHymOPEopYjaPe5pP7Z+VH5CJr3Y2jmJAzt+kNxGzGh4wIA2RE2nxhS
6HmxrkJ+udmfHySrjRj+JbROOcy5206rZ/h9QSUikzEe83FpssR2mhB0xdbsRFrLaGkVkhDPuURc
JGdAzRGRAJU9szksc4lgiOtLypZVXk3kUBzgkZxcVeWMVh+h9yRJV0EMHfGBzkj5mPKDqGf5geNG
3XTi5udFDdd7kSmPhcU8HH2girGm/ZaFNAVoYNGKAuIDfCg+N4l6nTMfJQTHUgHX4cgDc4Abk5Lb
wK8RmYaPHpZxBocspkbqnkfHhs93jq1xxqCGUA8nJRDSkuP+UeOMvcDbNHLKvEY8G0U4FzhAfy7g
iQgKeuA2KH8ZqZVeNiI/NFXHlk569ocxcgGoF8ykq5gMyVtCAOm3WZoAU/AwaLYXiP2RZuwQHo1c
yt7WIfT02YP6Qf05dAeyjgwwtwImG2Zp2noRhbaslaIxYLE4gE5juMFOu9nRuJ/wNPIGj/UFdIJ3
TUw/PjFNooctiYwt7T6KTUsj8YCsOaoBQX6oli3LHl2Jx7IQ2ncY3R8nzY1mPrdTYpsys+Niidl3
iR3sA6P0LJyi9tzDIwIEQxmLzs3AV9NogUFdeS5bETRUOoPiDYBVYm4W4usNsr4ZjOlUY1Qa2IT1
7ivQwfB0FCxzm9OZkRNJgOBjsKdWIhinJHOF73W7FvLJCbrtp+NQDcm2NUuh3GYa+BzI9MDQJXXn
OGZ+F4LHnbp0thlXRV1j1OZ1LEMrJZ9gXLVtIbfZHhvW1AtuUWqmmXQJZzlNHtSj7oO6mzpKGWrG
B67ihYptZjfmSJeF8ezqbUkiIXNr0EDDFt64tqgbwdHALUjfjSYwvLd24d6qk9PJrgQd2LqctOOb
R+kQpuNRcity3i0niJABqbR3Eu0FrxYqynOJFAIe/RquG8GW055cvEmPmpmpDTYxTBeKeZm6DfUI
AeMKU+jMgNLSGIWHLeo20GorIHi9y/35j+qabWGOLk2lwZiICP2yDZRmxFUvJY5+qQnNm56uSb9C
6gVWCKBm5GLNqhJSrJAaIY2y6SnrZUi1kLq6A8G0KAVXbe/hTBLUeiWi9UpOE1VfJKezKjktz0Ey
SFBnRfnFktnZEkS1hApERpiWCbwcm9ySy9NZQoqj5ENSJ6j2SFhBk5CCBjM9kEGovCyibbHSBtcM
NeKJKmWSzSbZJqhA0deHbds7+UE0l6w+zNr9AJrLZQWNpQ26rrHhSdnvbDdV+gdv4XV8vzMpPSmL
q0w5V7zPCWtpiGsZVdCYyfmeUKdcuBHsA+QI6dIQP/wT6lV/jWBQyNeYp3Pz7HPz7JyehS39k/kd
NqFU6qZd83BV1Y1qgUZlegF772y8Lm7IhD/AWH2MioaG/qb3vvM+g8viFqhvz8sQyQo2FeUJcKzF
EsONHFORAak/s0OPFI8xCtMqvCQHi3/LgejSl9m10ay+1u2O603adYC7tmG2lz3zVrS2BCIWpcJI
jQU6AvjlTo1dCEwRmqmzMU+tnYpOx2ws2Y/AeH74Xw1+G+WDqq+W19MPf3j3xf/BVn/43n8mCayQ
Ds4QBwyEX5AQJqo6myRvtqdidU++r9aX0NuzanWbfIst4hK9uVp+/0yawYcUb4crgFBz4qjTIZt+
x4DG5WzYx6QjaN7PMfAWuLfJugkhZxOjdshRRd9GkczQfFAtO52D/sf/1zlInk0Y4X0+MbDzRJ4w
jhaFtcokGqgotVENdTKgkhtJUbHAt67nmEBdtTAa6Fl6kwr89kGHs+9RbvTFpo+pSH7Y+IUUkQKK
dyImVhfeSbSl5DJhvsFwyKHZPMAK/M3JhPJ86TjCNBXP6zmu3hUvJLDOi1VDC71GqQf+DRPWXBFd
uQqgs7QZyYBKn10yUG/WrTZd9DivEJ7/1w/qRFD6dQy56TYPevISvzyz88eO5QU9ZrUzA2TXqICd
IptVzNxdwntCPchh+cu12Tk1B6wC417OEB1Hk+/QJsG2qR4ZAzD61dw7F4XdfBLpiunYfJ8vm8nn
ySGJrtMKxKCEMGzRn2t6gQkagSWCzSpWhyK5xCyz2B68+sZ00tnvCqO3DiDG2wHKY6XRobd1C0V8
cnC7RPmMgMdADZHZQtEK4Q7THLu+khB3vbnu6QUajuK0sUdcFHdYxNubfopv6XjZ5qkwKgnJIo2x
+zgeynILVTq79dqRaBN6tZE93FnsEDc1STQJqjSgRvxCOkAuIUfDjNIVVe3Cu04LrUmP77Eebn4v
sy69luQx18lnjRV6ZGlDwxM3fFHeeQR1sViACIpgTM836+Vt69I4pMqMLrcr37m7bP+ooYWRxx1D
iZimG2JGGFVAFa6Wmo0tYW+yL1++/Ob751+Nn/3+y9dv0Bt5nPQfv38/+sngXx896CYHk9ksQd5j
MgU6UiNUxbLASxg9cMnZFq/JYtaJ5XPjNNI8f34/j+DXYdfvfPz7b968hREEJZP0n4cp42tRVs2r
pXAhGfwdHZ/IwnpMnMwKFMAMK3eo0K8kU6ZkXZ8uZghYknVxrvofkn5f+nN8064UvN1rJB0IMs2V
SmiDtIdqLadYsdbDc9Xg867kLZlvG6PPJnDNKP/pOyJ2JqPr0FN1MnUAtd003inNvyPYhfWt7fsn
aPt+//4nqfr38sUDhWRMsIWml2NoeXw6mY1xQ9TURs6hVoU8G3mLJ8PGozvVNJ7O68jop4OynsyX
20Voo2+kL8LCpCV1u7yjjryKcY7xlR50ZeGrwVvxS1lSAXzdxeqSDToftqgKqDGQbHKqVzZja6+2
azhywLqeb8tZlVwPvlA2alMheSuZ7zG5lB4/7qJywqwdlqN0VejD7KSCuajqDdUX51/4pNvqcWpa
4IFiyBm/B002vAyd2VpdB5aYhxM7mlbzBNMReQHpWXR1uf/mCe0l7/3QhHV8d8jIgyZ6d0LX6OBx
7A8Yh5qoW48o2PiH/We8H66WKKL8BoSdjAWHgX53vB/wa8C10/xK7ZCht1JlXazIXTv1M1TAe+zy
Q4TjgTPGfr/ELyMvXqw3t5nUZqu3/tvFQsQPoshxxalWB93enYyyhz8lEqqBlOLXzNgNhX+56nPa
4F5ruj0dutPzxaS+aOXR8cfMrekMellcx317cNhT4GsIWVICK2eJyqQ6ETQJyZcIoH2VIm7VtpGq
vFzOCBWxFoBF5Xt9PCwCFayQ1OAsXBa319V6ZhqseQS3qJErqy33y2drOOwEl/jFZrOCg49HCjUo
j/GWfowVHlP4FJJZv8LfWgS8vzmoXuXacye+67+/aQso4d+3JvcLpK64d23TgvuWXiyUAtRhqtPx
GBZEto01O/fcwsIeX16jZSqjZbZinV9ysiXSqUXxq5bFz7awTGZuJif3Xhaz8W7crBSnt+hfGSR/
62orpprXBjThYxGn+lPKnmoNXjZ160shpLcpNBWtE6eoinkkwN1TPuFMW+GA7ZASVqc8ezWmfCWv
lMwfkz9NIeTMplkbR06z2dAiY3ED1XWzcZl45KOGsXQIzHQBjxieoMvrYzu76PEAb8KlOraINzBZ
OxicfOq1lQTirpuoDjIf+t02kBGV2FGi6Uz7qVE9scI/Zh5rXprdOIstrQVVNcAOPf/cu0h2sLhX
wceA0KKWwCK9E11T3QNT2CQFUphS8IbcIX66E9R+Udp6vA0na4pvszRyQEoVbhbTcSk0awgXTzwW
MDcMHzxDZzcehLeZBz7BRSRwBBnjMrcBJWca7de4J4H+0cnzxxLnH0qadxLmdZgmh5dr5F35wpM2
j4Pr1oAL4Rjtmp4mYo5viFWhrUpPaczKZNQSOhYWxmjQZOfvRdIfmpbNsd3ZNDeGbe1h59cmkfxo
vaYSAn7VgoM15kIlchdTVpQ40P5RS0atyHVlv6fpPdLk7GrqeFie2FvyuByeRNUqOqveZTFsUYXY
2W29S5rrhRfJnQ027pndL85XJ8ZgyM35VbVMN6wkpey4fGf6w3NRSiz3W3ywGBuIgh3nvjdrNJye
MV1ks8KViwhSTKaSUDnGeWcmXR7OCGHuUlfYUvh+mVW3j3hE9AU9PmI/eAY8YMlaX0aGou5+phFb
3ZgIm3HIDaRzkqoyYzipcbGVOe+JEhujkasFitz+S+q9gKOgdpjiw61zekt3VZ3yPFPtJfnWRcme
YHlgjaivCvZgbw8shipLRO3NjuGb0DBDEIkQwnNSXEBhh3QoPfUkIZB0gVXxgHZhauSi99niGKXr
NUwcPqerC8ZNOqp4EDWtVrbpYuIaI1DJggYtmXK2sKqs6ttW1Y+AB66dRE8QFfUHZPyBg7GQQQt9
KbRTjqYKvN1llG7n1GAzTpJLOnAp5V+iGBOv7RwQOr5MAEFkBJNCZtFWeRltcFlvgI042d1J5b27
W7LTsOwrfDxWcgdyZ6fUjTqc12hWnlF8eMIO6S4ZEwsy74npYtY8+IGyEst0pG3RG5bz2WJyA9vR
fbODYFdBiXKxXVgzFysc8L2ohTrJXFJFR1R+sUqJAx7XlfWVvLLPlR3J2f6pRoQx7gjgKsitmSfI
GaFtEAaY0ZHCMgPWQszGRuA88G6CK3cGMIfhhHOI87vtmgfH6tw3c8FwTka14b0u64ka6qMDgQSN
0AyjRpKR2/IHaBFcFpTYaY4uSNeT+SVGF3CGdrE69nFwelmVJpiGWmDYiKNgBo1q/sCnpt680rt5
26XXa1ZCWGDStowS/10dGNQL+XEAy4p/s6AdMQVkgn+RJ4Hmf0CPe5EhW6PaQZtB8iCKsZEnTr57
jEhaL9BdDYiwrnbirTYZrfmiad1dgkaPmtdiXasWU78HQXPooBwQllbkTpdk0DwK3bjEXSBMeBpD
0VzdDsiXZdACpuk2rOfuulpf1q7dlc6M9+O+47YDlupZb/coX33z/NXb1mE2M3i2pJt1DAzeW1Aq
mB9t0gnp7Ie/jzfC4gYmqv6hY/whg4rvBbr+J+u6GE9W6zHdimyc1VNZGiP6OpSZYnJSUyVmTyP1
IwdOO+lYD4G17QlFn46wbcB80r0rtmokve4w4bw/+FP/waL/YPb2we+HD74ePnjT9U1rWG1xSZVs
e8YJ5VvgVV7CrHyF7qzo0+pYJSYJPgVSwSZY5InPCgwErJmFgrvyBSzMm6ul+nSpOzvclfPJX8r5
7Y6QTWZBLwuKK/fISEnqWa8wOdrSXWJjXbTqSa/TOJlKtn2IEnTZNk1CdadTZR+l81jhzn6eqB4j
SrtXmVGvEedN62ZnOTfTxrryqb+ZV+jFpoxEaBiPGWbVy//ls/GXL1+OnolfsD5OyXRfIaIJsH9o
6dsuL4k3QpdGhLSs5leFlSKRKQB2VC0j+AhdSTlVTg07pPPi5cvnv/vypbH6pw+TvyXvk8fJMPks
+Tz5Inm/Sd4vk/c3h6f4zzR577kddxjZCM1csOJeY/xS3iNgxBbVVZFxjV7nxZvvX7z66pvv3wje
o+szIFPTAdbqfEx23vGsrC9959N1+i8gavX/cvJ++P5974vjfxmePEILNhR50XPt1XT9k3lJ1mI+
L84nyDF5A9TU0fVKWQeXl4J3NSN2DNfclL5bOkx7oQQZvMOA8yfXq7tMoCktJCowsRUbOT6t5miZ
G/akK7Kri6W0Xvk2dXyskYjs8kr+KhTn4VSTt7hrQDpvxhEEE4tDdRwoRaTgDxitJaR7czHeVOOz
2sx/nkxms8nGjfNuLNHuJaD6tJXpV2S8PvWpPFVNH9T//KCmMdWr3JTVfFPaUKTW759/+ZXW80h1
veLXglM1Rs/Txq7i95RxN17cyZhCOBnsbYL+GtDgvDwd0NMdO431P6OW7cR9OWpXHQx/sC4e79+j
j8djf5u6ntZHIYxCbWDHMEiS5sYvH2k8v1P5TK8rwz6eF8ugzd7wZGd6SjMqt51oOrH2gqJxi20i
+9J2I/Gz5maK9uZtJanpbSfi5IaPH/uN9xzPhC+3sHkaUAxCB+DskUIJLZvIf60Xjl+C9dDeBRNR
F2LsRIg7NGrn7H06xkYNCDCccjjqZRNXlX1ppRF0TJGP4XXPbdOx4I8RDIexWGjtlxCZwQyD4BnM
N0drMrksQHIT3P+Qmd26MEg6Urtvu+z31E17Hu7f9cxyCjz2nVXWlEnp2Ev9QR039IeqmE77fR2M
DeujKm7MK+1THM2udnSEth2uEzRkArLtVO9qdVn1sUifSqfxlpzl2N3Usu8UbcY3AEfBCkz43tvb
y/szOSlm/40e1MlgMPjc+nvrRu+hX+TN+HTOe8HjJN7XD7P3s0c9+vvmUS/JBg/xgrXH0Qtq2OEt
tGq6BCGmKKdPZCTSx77mriJ/zGsOpoADvioLRyf9gvCkVSuHmIrlfLJWxFFNz0Y+XsBZGebPL+do
Q+kdjBEXe57OSxip70bOrkvMqvk2AHTLoPR211PsTNLSEM0IPLXZEhC6dHgIdX6YLbfIIHYNLwj+
0WqwkE2Egq0yJZf3jdFToc7SFnHoXqmyxccRrSpcaY+rzf4Xc4Pcz+VNhsoCqZmYXiBMmRcQ31Xz
UjrYACyPqaxHXxvWDOtOYNzO4MCczibJzZABLG23vcATTZzI8Ccj517FW7phtQEcTqIto8Me2yq8
9lQb5juz7TCqufoEZ3JsACNVAMptfkxVC3qXi155FoDRpGMopdXj0qbE2Dhq9whwIutuSaGMHw/E
rTDTJ6Js9ryaFGzEnh3TVu4lYiAMoCBozVO6YCzOx7kafv8sQcHYW5uPtBQY/eR2sYoCvxBhLaeX
TjrgJVA2Wl1cSiSBbfYOs9QDat6HmgH6F1nrSHo+dfY0Wvjvn/Xx9YMUJe0LLu3pMZWOdZH3QHLU
G4/C9rIH6x6RBs97MjlIcruue1+iakB2DrZTFTkrJBktrJWifTAr2h6V4zofuMuD9QYB9+Z2j7/D
K9s9lZNVNLxsxABLf/D+IxMfGZpR0HWYMnzmuYg0XrDXclRwFNT/wxBgNBhfaOAmpI8Ur1rm9qHQ
SThwTsiuBwh3FivT8JOxGTehOXxIVIPjkSoX6LXq7RXXcRzjac7KG4ldTFhzZbDJrzEsDT0giHhe
47VFukp7uWh0q6P1Sh4l3aRrEEHaoY9iKubLAoOSUA/29fM3b7783fM3TceVi2ou2bGL5VW5BmYs
qsUjnwBT5hh+Rz/A9FmzQY6ai4SAhPSTBL1YllmFjIaRxT1LmgPBsvdwTZkV87CRTlPnHrVkBYFf
onSDSmMmVMeBVxJaKjkJCSH2ow/FuumSxaUGrHxHkeCs2i5naS8UqH2uJ7ALBPCpRkfjNt59/uQQ
/vvVsPuD28ZYB2/cZLhnK4iOHIhG07LUqKO5de9Z9/roZ/AqT4b7Vujy8ROndwaExCxFUr1391Q8
f/fiTWwqounnMdmIvSCvS1JXRqL08Jbkn1HKYPeP716/jCTWMEQ85fLANR1DWy5qBXHdlRsIKpIF
8D0hrSc9iJTndAXIl8AYiPgod+8NQ6msL5xDreiN5ZjT3Vhl39KL94o4TIW7EXfY7mismHMW/RcE
t5FPV3o0eBpzfcZhYggdaZq6O7Rl5Vl7uzuafTBDFiMMRoxTJ/daTftr9MZGtoKdyGKFiEFp2SXT
KuVb3tkggivO2wM3RUr6tZ06KtwoXM8Vr2/5UjV71sR9JBmta//zBJvuBblaVgztiS9HAzgJtSGt
ihCsqq/dbWhCuq3TsEUPc6zsTAMjnAmzwb5GOXoCFaP0Og1enaFT1SOJOGY+IZQA57bGWbieRs6r
ZZm5O68jl18X14o7/foI0FLpFvvliQJC+evdjlKBr5vD0ssIuG/RQzj+bo5ossS4PShfN1N8uMiL
vruI0b9hkQERHfSxHx0Gx1EKuL1EsBFbq9PPi0vfW6EhmpOxnjq4E0Qf556LcpD/hDnIPvJj6oHP
t1aSnd4asNFJ7Q1rQjsFDgK8AXrTqyf9pBEHRe4gvLx0CNMwhmppYgaQfgI1nMIw8AyeYhiX3Ru0
qlGHzJWuEPGUD33OF9dI5repOtJfhPhpMrsIRV0NJrMw57sKbn6EB5EyCidCP5g8OYwnBFm1bAmz
51aRDbdzM60UJrPrPvOHbYZs1eOX6vsTP6fCW/xT/MDKriGPXHrx0FN2p5esu2CNTR7BeTPCVkqF
095u91uCcJzNYjI8Rq1X2zUpIkXN2uRNqCcomXoYWmTO1ctmOjrKec+OjhoEDkvKSUGWwN3MwMMV
AwxBnKaCn136B6w8X1boeIkSKxBZBAShr/PryW3NfuGZimHVmc+j2FyQFM5fLCbLTTlt8WYWhRGM
JCcNAkp0eGfJ8PFKMiDf89tu3GQQHKLgsmVRknymZwj5IBOeTZa3C3jJL4A6EyQ6d+lTT093qQBg
tAS9XQAfZ/NJhK2jhQrMhVjQMUZQkbQX2wncL5zoh1TJZVGBdTDgYIifEp4hZC2IwqHSnUoMPALW
gi5A9R7UilGj3vw5B8pzTx6sMkVG7BoJl7Deos6Akn1GRHtCZuk+I8MsELFziDglE/o1yUDinG9x
m/UojSqB2tRwSKGnZmq8gCHCFhD3LwgPkk3aAOk4SIBMEygHooawB6bkcSVZ2UWAj6vkt0uM+lhy
ZfKigNmh9xBDiqv93C7b3n+75BlQv9X5LU0JNXTnS3Oz8dc2qcAlQhIqDNPejzELz/U3TJp5/NOh
J6vNi8lyu4prTZkcLm/p7WoWz1pXmYGvbKK8M8y3Kkro+e2nn366A3yUsxrQlIeAbCFvFmRv32wV
6ZeFgxrTECCVP6Q4JzQXzmuPR3OTiE6rOcI40g5+Q42pTtrohpsO+AcaWwgrdFpVl0DeZv1TmEaK
M6QnmJDkAOP3pxf9p/0aGuz/dPB0cOS04f735MnhEX84+tUTffhftwsC+Kg3/hR3/AhbfsO77FG4
NHJNwHKQACuT10u6u61g3Wpp+iFpq05uCzfuuXntHxwNnigoTT20o0RtXb/PF2XfPA19YJ3CqS+v
N3JVT70yzYGsKduYWKzNpZh2gk2rKZpJskRSRqCe1vVC/tr7/kDIVGT+DxovEXtjT3XBGzdQW/BD
qr/d9YpOQafZxhGDOwGL8KIn/Su4EjDnH7kF8PD4chC1dhbdE9JXzryHeZ1ea9Z2JXxkGvoo5WZk
3P/2I6atVFUbGcUo+f7ZG0t6egMkjKxZ5iTWvnIusi3dtt59/fJezWnUgGnDleHPzhytSkTVZmLz
sGgot7PDwfkEDZE2egH1YpkIlWFIuHguUAgTdtbCsMYUdqJ9iwOZu8qlbn+dGO1Vb/f9im9l1E27
VKHkOLITRIUC0nCCMAxQfCxABKA0Exhp6A840+nKiZvAFKl5gliyvThAhjN4GjiNx/OjqVGG9yaN
ymg+VuIW0HuXHnjyIhAvKRQYKFUJigy5ZHYN4NjKHGQU7UZegGL2l9sFpg4tmqfsL8DH2JHltv8Q
F2hBfjrisyOujqZiMz4c9sViHygMSYdKYNEPKLwM+Tp2DDJ0XQJu73b36BY3K85Ny354ZJflyWjA
a6Ma8TnI5EmaaeULnZjdir0MTmp4xDM8hEOSv4uhgXMgf2H5h363Ceo9zPFFpw2J63acGNK30+98
xJqJvl8WjQSwbApxJ5qd09MwT6sFoDE55pe5FdDGonUIZ66jILqadrCXuzB1LfEuSTHe7+8z+TiO
2uLDKJzisWzJERTyHpsPb3B071WrOuYhjmQVqOowZBjoKAjr86v2Bui7URNnz3sumsekGPf1yO+3
nPn3O/CTGfVPPv6sTw76j2Fyf+HDPTnY7QL+m+Dr1xrUMMgzRAQeTaCCFvBNiGUUaxtfG1uazfW4
/dSg33+xa4q1O5j1Hv7+1bbCBjvpoYMBQipJPJm1IAoHjS9yGwTe95OamtXRjM+xt6seEPiIj1w1
51rtAnUJrq+4ZZ6MnmNap79v6EHp7CpL9EiycFKIaDqLj4SKVI8T+v2y8JFOx0aqbUYPpWEdUW69
QqhGzUBd7XvQPw7S+RaCQy4NzI/a9Z+Ah0EbR0EcC3d+DjTsfCowvi+cgooHwNNi5ncP4IiB5TGg
weSoPxNoPh42RwAHJ8TfqhW5/HmLx7jd1gujh01Pz+GUVIcdhB3H3fHMDxy9lANV+yPT11/woP30
03DeP/E09OxzBz3z//IQQYBPf7Ws+voI9XMrQwylKbZj5hbu2PMY1dVqyL5qVSrauWzYQWXl1X0k
ufj3qhbspGtrxu40Z/LwUxfWa2Y/SksYOu+G3IozQxLgUGigfakCscBwur1lyfbbejLb484camWp
7QhwEbPL77LcBfYbZ8WN968kh+3l2qYGckOI9iWpvZXgaJ1/JSaYZK4ACD/jpk2g5TfukPeuSwOc
btj75QQsip+rmeMW8kHxtQ2BrJGvNJSlIm+7ul8Nu6K1IPNfdrToDoq//DW37w1r8zNhCw57rGP+
JZEm7ig2PA++xxi9PvJy8Ziz4pc+CYTW0W2ZErqm2WwDG40mkrw5h2yOFGHVIX4OmLO4hcBKhIgv
zJ7+FiOxf/0x2Q5JI8ABSRM/JlH/48JkYxK+On/yd2enT1vVD2KsIuQugUFitmNgckQ+ZiZKeRvh
n13nnsMDM9wMMlTXK48M8MNAlw6aEB34VQs+QO0Dh9NwOCS895ZLLZAWJpzMNZBpSbV77uTt7aGn
nCNsZMemecNa60zrxbiez5tyG9bz780wyw9jLiSDFYBKRUB9TBU+kKzX4WgOjaN9PLmRZKwO3Ngu
9lLirN1B6rWZ2huk9Ndixy+sdrJdnbz772/+wyeffCIZ84aYXgcf3o3f/F//ibwPilnVTDEA0x1R
nM1uRUFuajTjm50uqqsNZgftaMUO5/PlFD1s4OM8BPAdRmO1rgQnD0hpst1MVs28JP9PSSF6YlP5
DSVKrNTU4mMtK/a0zVjHYdz80USpIc+ViY+IDuPXokMM0f71pCl/ox4n+iUM6YlB8npd/dZ1Fn2u
dFtcT4wHU6MdA8VgpRv3GUz1PYUnjsaQGuk6U2gdO3vwJDUD2909d33+ji++2wv5fJsKcn2n4f4x
1gRGMeqZYPq8RT3Aq7aVMJ38Ab7qyHvcAN7wrbY2f4DaogWFBSHIQ0HaXdUoDIskjUwIglyCRCN2
zatbTPi8nKyuyw0uo7xoybbgmEITYJoCTwKPVExvqsUMBlrMak7kslsTSuoIWlvzMX8oaUScSmrh
sjhhfALrvafJBdHfCthJhStnqES+0lqIp6gnkbCNHR3mGhlF7cLbP6wJpt04XVQlQ26PFxOnwXAp
TuJe8MM38KG1I1dzT18AoiDfmtYxctluDe8t1nCAv75L2OHDgjC2Hb8cZMxjJwXnKy0HulJHaQ78
QsQ5DMk2EPP/KGJOEDPJqH8+QeSvysZdPfJ6R874Czi/yw1flAMlHgb04Fv62AvOuN2i9MZOW95s
upn44OsHcyW/CQxNsr1hr/zI89X4rfBYsFmFN3IePIFhuP3xv1uDF1pE1CqheAFbikN5aOlbdobH
D4A9t7HBRHgo9KT+gD0hGQRhlTt0wip00OoDFdKjMBBmFjIcNixCkrBUhBSS6TkHhy58xBEl20Ov
1aXu/ghVubOrs/2AsQtmw+2mmiwcxnRyHenZtrURoLXrX43ggzt4hjLgvDlKC4jNLAM4n9jk9kEN
QdXnYVSCsffTJFvL/VRWsyM12+mkcduBB4DQ52UIMZHrhHgYtiPRQ8NqQYDJsDTnAx9z0oduS0Yx
Ljy0RXtuKK3dIDnCqNq4rNmG5SOSHkwfR+Sn6WVyq2Wyp7TOx6WbOmpKUWk7K796KPoiCsistHKg
d21M1rQxHAmCClJlM/mgnrbWCny3+rCZrIt1vYU1R4NvE2cL8aiHaXlloxS0UdwRf1Uu6g/WgfuD
Rz7FDP8SD2v/qxsMqF1Y8C2qAZRj/inBdq4R4flctIUjJOvsSw1gJyeK8wUfOucfzUL0aW4YYdS5
akXKksfokPr962dnxYuVCUDmI6y91PwRIp10WqW6DkgiIMbcIc1FWeZ9ubg7+3GFao1sHZtvpqOu
PH2fDoLZpozBF8fONdX9GgyKtlw0h03p9pnTJeN99vLl9y/PgPPlDDQtwNsDrE0AV4ATx9c3+Lkf
FD/daLDIQjBnYbcf5dsyO5uMrt2x32YmqatDUZuol43sQ7LSQi24P0NZYrSXJlUE/+mN2lYx50va
JKUi2heSR6BO+Rek9El+YcIsMzmabYlq4PIZgfR2DTx0eQQcdAKhQ23bHG5pL1Dyep5I20xu9+HY
vabyh1Upk3nFwSlaViDjEQySmdaVwBZkMIqlw3XNNtx6sjXben2fk1CYtP2sj2jalelyp+gaapeO
8DtPjxiHI15suL+eWWfN8TPSj/aUqRboZxYaFVOoheL2jIL5DqPoM6yR8SJxN7R2cE1nj9ihHY17
Pjn5R5twCjCTQpVYZYgAmFQhkv2Q07Mac60wvCJ/jjnaKBojZvRUMhYl50WbC08lg5IU2enxEZbj
GKyOGsoYWxwRmMvj2YM8d5SD0HpTb2uQsAUA4/ESeq44tuZHA6qbl2Io8JWReMLhx/LQuWuFFYtD
8eLthu4VQtDUM7EguUdeulQJ4VpR4krdZmPnTdiMabiSLjOQzX3IqCnsdUc+kUKRqlbjeMLvLbB4
9ucV/0gzjE7WyNj/0yQN/Gkje2oP+9Nei+Bgud0D6UrzsT5NrziP/Z0KWnJJ2J/wp+fG2z92nD1t
woBTQdc/nimO0CBeIL822vaeKxKzMua5lQ1tYeo/emT9aFsk2MykmRDxGM8b0jIGG3vIeY5it8hs
gMr2MUQTk4RLx26uGeDVCBhBR2NmGb28Dzn953pdDskDco6xkOSSgfQQL1wfvnhIgComg1LpxUsZ
gw/67JvoxZRt4MsLQwnjVO71Sf/k3eWb/9nczgjCvJu8+b9/zaGh1uXmlFQyHM3vEWe0ESyjy5Ry
ejNZVQ3mlMUKfKkynu+2BF0dNiUud3GjcsGearmQgRJrCjgo71+X+HeC8XkWpQ0HFVz7nLjoUFXt
a95uX3x/ommMRjbpM0c1WGNmNxgDMvxopfz4rOjCDKsVsPlP+BkmDT+e8g9MbvXX8OYC+K2aZVZS
rqpt0vWmJiU4v0Q+gt70OmRxhN6zvtjQNOJQqHt6KqsB3U+mHCSGdk1nUNgMDiifjDrzWcdqUGrU
G892606faGXRgUmasEXLcjt5P9mMOqwhgRanN3UFTOPovDufYf47KI9/VnX3wlfDWNGjjkMJiwTY
DCaTLfHiaz77F6j/L6t6eMQ0m3h+42m9IqUh/R11VjVGjUUFdUcg0omHRMlmpzsOJO8AN+IBHMFx
oX6ZCAFucuh2TreLvXKyWdzR7+p6IEvJhpZ5XoNM3W2lIQnXYzLNpEBya0wkTKwOJfUUvXxcK/io
FGbV6PZEjyeESpbeQBmoCC1+zYV/z630fPV+OgDg2a6R5Gx6XB2QRwpLgU5fY4lTLPhmpzdWYTE8
cjkf2+IObVxKDJfJlLlobnZb1Mplege8GOtnGcGQao8dijWZASxqjvjg0rXxfXVpkmBLjiVJRU3S
DlAR4ATHUnmMkTe165AP7EiRzl7Gz/J8rsbFkIZi0nHb2409k98zOgcAOeg0D2eh+Eq3JAgPjmdU
UUpO3aE9CkX3Xz7vGyQZ0tHnQdzLRHBqM/Y8dCJnT2OToU5G0OwaPIoEi3r97LWxyefGkVDhqR9/
xTxrEid1s4mCYDqdf7gnWnObMTlLZFt+jRmZ6SGKaCxTaGkBE3RKddi8QK2zUMWwNZUMEmfJmQER
ZPIk30bPv/nabmYXAMP0gRT/p3Xy6q452EtAhvZ1QujPvchjtRrHesG2ABb/hLymBK/QnNnp6UMK
SAX9iclcGmJ5pLfQ6J9BTi2+6HCVOrGK0HeuJjsRCii2tFZk4zv/216RxLQvuTyeONtGp6xEMd2O
OHP3jdVaL/XgI7YxhrWCNRpv63EN9CXisbEQqotah70DdtBAIdlCfvpcON9QRBV4OTy6RI3OSmQf
3gNrOVYbCRuPfrKO7tXClb0vnBg+TMEoRj6/yAGKZ2GmiFOA/2XDuXAzRo+XzInHvr7DF6jHCBFC
ihGXqyUy2CHFWqatjezDEikz5IDV0bzdKvvh+B9+crklS3Hct9GC6j/fZDSLAX3Ps4s6dmEZkQ0Z
q9GSaOl4hYKjcVB4c6N9XgvG2kl1n9XiGFOPcNPtNfHHjfBTz202L8md2jIvKJv0O2zKaeAx1evI
0TATubJDWUND7Sp0eN+WoEqHU49mshauM6t7U9dv8bp2XW7iBVYdo7/nHxTRttu7AGSXEJnlBHAb
k5kbayQ71Eun/3FDnHqP3vuMkC2dko1ITbRM40E7xSXlLyyDJLduwZBDcMBGPhYMgWXBL7lY2lHn
PgTjrVx0jiu96Wy9/dChlhF3fnxn1Afs5NLcRd6zn/QgZhh44A4o0J1p73g6gYcTCj5MOMeyq3vY
HsDXE4Rja/iNf3JCc8YJAQc8n5WSIxeDniHNpYSJ9I3MjfEMmKmc2KC52bZalh2jNBAbFtYPUsWe
2Lh4899yhYlfGsOT1nPOjEaRCr0A88hLK9Q9niLNidHbzjSf++T9pOLMMu+rSXF5yV0HHMflpXLi
uGu4GZD7pjdK87Bez2VjhuJbjM0dpjFHQZeAQGKut96RfR3nnNYC9IuxKwNvjRPLZTnyc8gUeAXF
c4ZvL5Y2NKZc60rMZ/deCNIrzkoOZY5Jx57QKjy9/1rMZ7/YUqDgdmgt7rEKUb5g1Ax2Z7t1N71t
pRt9Rga1NqY8o01RN0NSJn7EskaSql/VUCSX4q0iuSzJ1Bhlm0jZ/hsznO6nkdGZXd0jlaCG+KBs
7KSQox0FVZpOhpoXDct9giGzqfeUBst7yzhiMGclnZ8sGrq9Eo1+rzexg0lW0Do+3ZOdkKjvm8m8
HOMtwBgwm/PCcUjKgSrVxy7PYecPr5//QydNNDKhvGocPZlIC2e4Js1oF86XAppeAK+9Za14UVNu
EjfA59+cLsr35QLXQ1NrhhRBR2AkWBRf9XWwsAEg56immqPFQ7mqA7VaztcHCc9Vx7mWm56WnA+r
I+6ASkSCQfTjkI8fMHt6uUSCK7FvCULNFlZxiRGQEDh/wkzjO8qinlux+UmqD1qVH2heTI1681m/
ZQY46iTwa0nRua+8Rducm5rPKDkeNa4Y8LjPPjXF1W7OTkOWeD3DmZez54I0bpGAz4hRx7uY2Eou
EXoLseNefcsxweLvmMeOHqKcAB5n9NHvQ81Ohb2EdvOZfJ2aXy6X+xIzalLzou7yaTU7kvEl1pnx
WFWDGxjj0juOw85Dw2fMcmy6nk22E8xB1vUh06lE2Am3jmWt0dFYUNoBOIqwECWFM8NVrwFdxfB8
sMtYkwUIE0w4JBJz8WiJ286WUAObyam6GmdoNXeo43q8XyBnTwAartZ4sl/w5hWmqWmNp7EttaH7
qWG3DjI/9GGsSdIarUNvrdE6dFfDS0rpiRYNHCk9lCqmuw3a7xfNCrirmxrdtLaP/C14SaRxvtg1
N0jysCWhdU2YSi0Wj7KH2r4VY/SuN9fhDci+JfNVwmsRp3SkESUSX7Saee3rEcvErd1voXJ1EMEp
H9MBRErqenYGvstnCr/jXWq86vowDrdPlVu410xzVeIrCDvqrMMFTjtpSO3x0pmHZp/tE0bcxxJG
kiLvXLrT9e8EzwM0T2bRHVO12GsjvdrB5CuTBW6LO+p8jXM+if1LON97HAriwPKg3cu9FqelQjvu
SYUjWF7DQTqSokYqbC7Hzs85e5uIpGhaG7/joEEMCu4BYtl8MQbpDPIx62n7hQ1g55kG+nogflfL
iaAJ/pyBleazRLjgT8ZADnG7owgCYntLaVXP5LiUnEvfBK0iwORF8eJ7ILKPOHBKcV1hWvm6ET4a
2V0v3e/P/sD+/8jybdneKD5safpaCi0E5fGAH74WGzfI+jpmNfiWiDDfv8rILqxCEygyrM7CeP/+
CoxhnC2U3qmi5bKb16h4nJrMmTtRgc7AKdBw+8lLsuoqyGAqdRTRMpJ8GuFQvl/tFosoCXU43bvG
C+NeI9RLIdduBorTUxOyfRagyP8HxmUtnto0yXm7FamfaKtMOep8uPqsk3WAdCsBcos1R9tjKnsI
SA6l72M4m7bstnbuOPWTlqfWImPePfzDS6RmX6InkezLlCnvfOGIAcaBrRez+Wz0sPnSx4Ly0043
YMyl5kjxK9piRmO48ouya9jAYAlc3Z/RIl2mERDhhBCgVIm2jr3cgGI3sFB9MVBacPCwxNZmEcmj
Hct5kCeruyUaC/oOmRI97UWLE4EtvqpxGJFy0HRURACdK3MbowU6pZdvrWzC0dTncaL5XJz0kqLQ
HlSBWIYYalCcci/TbjA/GyVk72dD/IcGopJGnmtBa1nBlie23aACdFvGbhLplIP7yCTQe7drbsiR
mzjIk1EgKbSZVEZs4Dh7kELI7E73MxFxs7blW0TnGhVPFbn6/bnc1BFeR0dZRsUYFMngWvC9FdOg
NGs2W7ZKjIkRb2bxLbzJNxJNVvNrsTsRHcL9sncikRxxqK+EXz92a4YaE5TqmJkhhUeIKFRC5VHC
hOjyYxgz7Rb50RgUkD9q1+hh8I8qgDq77fwfOv3iQfHmzRvS6XmV2Nz3zyhq5+y1UTF8RYMj7KA/
41pVZnsYOYlXZQ/X8/nsIva/X8ysndBdEweMcC74mGoCk4Mdy2/gBQxVQ6M84pXO9jBL39Sr7Uug
iJho9sVqvYstbPInuK/PRu8tu8Mc1/vPxcbCwc18EDT2sSeB0n2ANUk71vokS0tzR4ejoXsp5t5p
4JL3T1JxuY4tAe5PWg72epBg3Av8+6mB6zbQjqZfSYPV87suwUQnjDmuGZBnd8VOKMOieDEryTCZ
0t8RpuOGmJAzKuZZ1bsPoz7gHOFXQKhgPfHCdQ7jJ4ZbLkTuCmiSiY7eyp5wdMtiPtkU6H+ORI5u
76ebslzBOGz6cWiajep3G7zAxUFgvnI+Ca9KTrKLmc1323o5IadDNKUGLGw4VxUUwkCP2Ml0M0EV
382ETbDgL4alhwVASlRiHnN7d6M6AFkUknL78XX6i++Fm8OSCBG2niHYsQm3XEFXjZu/u4adzFAL
zj7As+BdY1+O0W5jQzI5vXOjE1485rsTThP6rDbkdgy8sqzquil3s5olw5sJKgi0uX7H2t6iHuiu
hbWP4v1mLzfJJfTd1Zv/aLyUlvXqbXlH9Pzd9M3/+79yGDnzVrVVy3r6loAq3lDAGG/vhrxE3u1o
QF5I6j6kiew1qfzVpIE+6QrIeozInfd4bLpNzRFe35QyWYDd5aUpe3nprtjXm/p9RTktb0rxEECv
FsA5NikgywWQIqv5nVwwwpDxFMMZYRwnvHiC07Rcva829ers7MREZXIdDpWmuFhIEqhvQCuO15xh
/CdbF4hjXPdwJeiQDIaWk/WaeFXTaWs3VOO+3cDMe8GM1sCqA2GOzcujvpJqR3R218DDzVg6wCQ3
N/mS0xvYNPKdCnwFdIxWES+EySfkQwWvgEDtVqS/mcyJztw4Ixu9diSKpDhMuw0pTnNTzoYFYtjl
pQz88pL9SyboxY6NzfDfJREE4EImGGyDc5bD/4dyJ1usyICIBMw0WDEA+BFCi3rnNDI6Dtj1zLiF
pG9JIEBHCbM5QuORIMAalx8iHAIrEX5/Ijf2HNJwzBsTAdvjxzEBWTwxA5/AvKmtqTWw+7tPmaCG
7hbZlIupo3Oa7XWWeI8NMJtcNfUC83jrECakGmMpjTLoondCqxIpGJXLs00RvCQZCgMabxd9SbUz
wniY9HmPXQXf244FfsA4c42BSSRKseZm9XQ87iR6ihdULaOmxKt3D7R4GPrfeOwiINwAsaPTKFDh
Z4PUTOkMp6YZqpPVHYbYRBTfSoLsvXY4FkNCb2OZvDD0gsUbygjeg/kMiiex2I5Yy0bHoWCQcuSi
Pt0oDu51ceblw2QqWL5ffFk8yWvoKGj4iAeCwndeWxnGESNKGoYizzR3/vgi+XyghSMjOnzcsosZ
zcONEpmJdiM+PmZUWeUKxThHUPhwb9/V20YR8yi144oqfAmcAj+RRhQfeu7yw5A3f1Ehg39blmvi
Q6DFKabrqefK1j/CQ+4RnD6P5Dgp0IX6ujR3O4mEm/Dz0laYLMl9wh5yn6YfZrlAduayzm4MORhH
DIDofIz0UuTqqaskt1gr15poUBEgeKSgsM2Rm10jv71T4xlNWhyglo2cz33MKqYM5W0FiON5nuc1
uUS+L1cVulIWd/WOwuJS/m1goSZClWGXwsnJAwRujKw1T7zOn09Cl7UYjsMJWXTVW3w2RF7kFRwc
IMWWSJNr56rUKfuBU86Nonh2O0G/fQ/GgD907Fqnpig0sHBAk1y0Gkxj35fjnsZLhJKuZogh4BrC
ZPLBPZdPdYPGquSxaqAW8RGygiH/IMwAoZ2IV7hfA0VIMFO5ONQrXYkXVvRkoYmxDagBm8wuywlL
X7QKhA39ocW2k6PJi6wTnP0YNykIoekCk6wMZWh30FFMtgxDWxh4wx6geKlLmdlazHPkCHknriXl
GdmYF2DsckiZb0cjLWwjR0YXRT7WZ6nObOBZjpT14k99D1HY0u8pifN1drYM5FDLz8hGkprUblmN
HEHodVqPh6Drvhnjg2LyvgaywdTb2oEvKnQqx1B2Uxa6HlEZcfQNDFEIk4ZVw6GyBQ4Rq6WwELbC
6cwxnFQeHgFRN7ENIzByy/1+oinK4Zb1t1u0kPhjaDsx/3ZfX15iVaQuKDR7IorXKyEVd+3E1Hzb
Qs2BRryv6l2zuEsI+wvck75v1PEQSa+QNq9mlpij2kkw3niQ3IeqO6Lu4yErcW8n6sFYFb1xnCUp
qebs95zQyMliUX/g2xgAgVd9bMpTgcHH0b65yyX/i5M4RbBg57XStijqZlD3/hTueHJ3f6oWOU8E
00vAKSveBrKIfh0SIo6jCMd5Ieylyf3Ebiyda8AtGjcwoGshzUm5Qqepgh29AgnI72DAd2YCA7wO
eFg7b9MXPKa01MwDvp9r/j/qIiCF+fEfJoAh2cN5DWRpnV6F4uxRQJJVV1nT6JqN9qRIy9DGfbFI
uzoef34CHPVixMEzwAPUn3l76kT5Fl3DKT6ImhLDQEGNTUV+VRYnLi+pocvLIcHj8lIaNOwu0XcQ
moAAbtFIG/lLoqtqqpvthDHByb9wZEjLwhlLt8VkhibmQG4cY2/G4LsNFlY5KYwlGmk48WpV+sE+
Naa90dhmeTL++5mr+5mpIYsRxfznjelL7eEG4qW7L/Ib+KZ7YOvlsmaYbG2n5E3GqYOQv5boRGpX
CYXg9X7OXFuABQsGHxWf0HFSfqPPAMpCckxgukT2J49vw9N4IF26m5ah5EOQmwISxBAfz88ugmg1
+M5sSEQZmop19CLtces0WV8QYPyHekN3HXxVUwOd1WsxxyQ5jSEriVHvAHIx6i5VJMZcy3gJQiOk
VJKizshyIGojSlqF7NW3VTo4OTZv8MyidIcmHhs603AYJPsoHpqRxFp3BCbqlXO2NPjeMZxeB8P3
h+jHSRHIdku+mnFVqRqwTNO3Q/TuWyyIy9siFEuoMCMPYhVNyvkctUG71aJsvMEP6iQwvwtaaCUX
V+4agDqikKoW6hjiLL7DoQhVll9IThUr7WbZQStMxM0fY4GQXBG1yDawGdIE355HaJ8WEo2908pO
qTXkphxr0m1EUNMY5XtCZOIVJXq9GeN1pplov1OjNxx5weGKgshZcaiKybY8BqjRAB07kz3kE9Bm
LPH3kTBPrpyxlyne4lxgS7RShpZQOrpfXdl+K/UgleW72Zu/+eSTT4BAYeRfNCvAzAFvyxlqud6V
bxb/0yefnOheeU5fMBCmv2GdwCoRYdzs0HvTKccq+kUVCmn5RHmEaw5iV20b1Q0Ln8MW+uyWwlb6
6i7ewKIvWC8Gz5jmDSQ/KgiH4hYXf9fsmuGJvYf2AS3rJhMdcznZNDeThVyFwZjGuxW7qJWzcVX3
0P6JLuC9ZDJX02s8YNw3oLEfOs4zfD7DXCveUfQsMUnz39DIqu+jjhZf7bb1c7QZObPhKAMXw4z9
2DxrhcJf1ALFNnfQeS/jwDe3FEi+udEaCxePItzcszcvXr96/dXrP7waP3vz9bMfXr/4/jsA4ucn
rV4EgEEcEFKCJnPicvmxqjCSH3omjB5nBDVK9DWuV+yZLnXcS8SUjP8f5SDLnrWSnSzc/jyefAX5
Nir+8teQnsCk0EBrF0UaktJBDjL64Brih8h4qlyuYY9T5k99CjiM4fLtDD/FJmsvn73+p6++9fWA
W8Soj73uhuzGulHxV6+/+f4PrzPFJYZrWvzZy5f54hjl1egA1pWwL0ggQuYFPp2hWpFs94kNs9Qj
6JBbgX9DBkZq07Ifqpz3/RhT3Z5HtkGEWRFSWdN7rirb6ugGbLabTXV9gwzFBzznMFgEGUoh7+TJ
64AiUlKcGlPKNLKcrMnMC317MKjokhq9odvWTBRIErPkIaWETwYWIXJxIrk+P6T1nw4shgQ2h4B2
zqqR0XNIBLbz4cpomjwVgbKP2y/kUchwME/PfkA4/GwW9ogg/BjZIVi7/Vm8UjKUuVXmibtkyrtV
71NHDwbFp58aGtBWWYm+nGJwsizXGEZnGyqzc7NAjLvHJLB4L3dV/VPSwjgUsmFG8VTVHAl4RuF1
MlkI9vqZKQVIQRDLHTb7w7BJDFQ1Q82MLPdJVyD+5ozun2TePQ3eMVT9gK3p6aTasjWSkBF8UW5G
aN0NT0C4rDFfNSNjO6C5zAQBLLh8T8kcxl4I2EhTOtk+0MkfXzx/9eJ333317bNverZsP7feynsx
Of/j62cvfw+Vw3rFZ8WTp/9wBG+eNOfhE7a4R2IN2vCkgjlG1iB4WP1t8fj27+exjsI0Qdol9NCj
6mcn7ZvYEq/u5qrbPyoSPzQwFvN//hW77uwNwy9V2lEU7cWEPmAkY6UP45BTzISI1drB+eiPCn8Y
xOP1x4En+EkZ8Tpe1u/LiE0RzvIl5Tvp+YUYyAoMZGgqKgykQ3sCm4ZbXM6VQaHcBL2c2klLcFuh
bx0IaqkFiTaNbIUlrXgW16vFHTD45WRVYEbWFawJcjdtnEkIGWWsBSSH4pzcA2SR+UuwcfyPSCPq
thE9xGHvBGv44SQTHYOHcJKJgsGDOnk3f/PvbPaEelO+u37zd/9ZUifQ2+IHivYtIaeBFZo01bSQ
WPPVn9mWEncu3uZhYNwTk9LaCIBqwuDFxQfo8tDdhjaPyH7JJXe1me4Wk418b05OxCwNxefx+3KD
GWIxy5mavPXPz55eFF+CCNF9grH/f929GBS9Dl0PLhY8UMCDq0W5PJObuM5Dki22dY0307h4iAyo
RtytrzeTWVl013ddstgKe+07G6rXk+vXeI/ZFo07zTA1uX6K6JgRXOhg3iSUgD30Z8ieI5V1/eDN
x754MG5or3ZXUpBvSqymVlIXLmF3wAK7G3ZkjSKL/zDCDn48P31ywabc/fhcAGwQWQsL5QQ9+nZm
P+bOF24HAGXSzK+2DIpOIUkLl+i9s5XUBjYIqwKtA0tdfGoh6dtTf4PzoGfADMCN84fNxY8rNMng
WgPtfVB0zqRzhJXp8+IkyQBstIE0oSEqu5qYFNI41EITERP+e9igkSiNoHBDsIrNhJhTKyZIDNO9
41bX4qCNmYC7O4zfZ+HGXHWIRaaHlORyDz0bCckCMB+WwG2bc2z7ItvDXm1nlDOu3MowhOujH/Fe
dfuRH4LqAlmfKZEG5N627hsuB9gaLT++R3EewypSISVsZ53+Eba6SQ/UUP9kHxBRctDxJuQMaEbr
ubep6y3POAnGCl/wTKrrbRyIgqc3Ef0Kt8phnFs8jFyDwwwWD2MEkGVZ3sULk1sS33SyksN4IY8m
t+JeTHCUSQ0ZUh4CnxkKzAAPjteDHAdGEcZztMlG+6AcLU85PUfmeBnj5SEqGKcTYMZyBbhqk7O8
Bfl4hfFbMt/IjgetdxR3en1Op20ThSSBcPjjeFahowiJsakhsKQOyXxDQMDr/w3+vCwXk7uegwye
3udwLK2XcfLGWT1WXzolhfw70AihQRJMFhgQSShbodeVPX4wxU3QVmheMtZkNITeJt2j/jwueYO2
orUykYTCAhTpkh+tf6YkxpHNQEAX0znW2Ia+RumBYJHKGFsQ/qFP8OmTvUHejccxroyquHUg3TG1
Px53+SK5msmnfj8TvKjR6ZSznp3KwWATvMvc/ZZvBs/XER6x5IvRO2COJHnvpecEPGbID/zOQAdE
7o7jPdhG+u0b2N2Y8U9Dba4zCSESbMnETMWarZoo+BiCNBbUBU9yZ7OQDWVe4qm12fZoNW9KENdU
RjaIdbZbZZFaLy0Us0NUFqKYvUhwBJOGBZDlFz22L81cBevARWCNR53h+5Dy9tLtJIxgv+VeeKSL
n89W2oIyQS5dk5pJL3ymqS+GlNEFpEK+Gb5aCFsybbADUNhSdGe2o/gnUWepDxCW6/WPP5MsKFHk
3ySVgxPPlTHuvYagtFDIDEkMUaS/b1N5/LXYQbbiq8wsaN0T0cDj74gu0FvzTIYbBZafTkWZGZ6O
RO7n1e2oI37ASZJmrEGZ7qSqrcR/+oEEKsvT4oZk8F4K2rPybbWu5suqaXycxoiaC+3hkU0aC/UI
RInjZOSZxcHBBYwPScCRnlG4ioxEfVf7GL6ruua0FTFC5CDUPj+xcW7FnkAZsD/cdEoLjpKN2tpg
Lk7VQx3gXs3Zr+ncvvrhBeE0QGMLJzg7S/M3z8WWq/cOFZAsbiI4sCprfYc3XENvWUcDkNKZA827
XLRDD3W2LEl1B93kZAmCk9erppoB26bGhTYYCEwQt4LftTqjzg///PrZq9fjH779w+9efPcqH+NS
vDgFEaCdXLeUQmULONWMydB4DdLxtjkYb5wd/99eY2TJereZoq+TqNy2NBmoMea2BsU3VcNG3FW9
+q7ePq93q9nRnrICMw6TQ/n4yCCF5IIAUiUpXpPue13GpCdPYtMdYRLL9TDx5JEdwtG7GrRo7eVo
V4bdPP/7s4skrBb3kCW+hRzb2Y9pT6gJqla7I5IAO/4Cesebgez9XnZpjuw1K1C5DEPQK75z3FVG
JeMYquD0i4iiQ1TkBDGXqOzmSGwn+7719skA/nmKsPwzEF6+ayWV35Ozi/R4wwoUYOh03TnLh1b0
3dMYJxhFGHrIDtCVcCNMrFAChFrVZ508SkLJ889TPLofG+DmYFhXw62GLOZ+YeD0yf7AmNHAcObt
USQzxAkrZCCqOTqN2Iw/2Q0zwyH5fKxBSZlw+HI4HlO8sHGOdLoRcNmovdxQpaDksQySWIrDscpO
8BFEJyEmwp6A3NQLJc+sOWrkecT+2r0FbTPWvGVok3TPpSPO15wvaYfHHSTRR51/dNKm+kIpw0rJ
PbyutnXoyG0/qpaUfVT8lcJGM4wKZZt6ocmmMhL+fUI8SD/HHiOWIwrhKg3h4dI/OKToNsOzOGjM
g/nmKzLUOI9uSNqZ2ciwn0tgumvgbrvksBZhZjmwDLC3EGlzjMuQIdX8ubsJbGmNlqbKTEski55H
pXK4bK77/YM6AEcnaC8myNBKBpZBhlSV8pTcw6NVsjSt+oBmn0LAXW4obXBl3pZ3uo9ZmwucMSvu
nZBziF83glEgo55D2xf3udJw1zGhgEzensEbST7XpNbrqhtZ6cTPDvM06FlsSKnyDQqSjwq2keVt
jA049gm4bjLpdTNorPNUdMVasQNA2qgmosu1iLC8d2tYKduYUXflGsuZ6izyvS+AhGzxNfaVea2Q
2KsUIYTD2NuIkIv0PtGcsHhZE5wuuuo4LN5wYkaXiMqSd2Sx6ElMME4n5Taub0G37Dk/XPRTy2Mx
UuY/AzbM3JDFBvt/mbSgbG4enDhiToBnp2S9Z07Pv0/TEfHBlUjCxpfZRSVa2pzJeoyYHOvw9+Lk
0KFl2jPD3desL3YR5XpZLHxsFwEMOh+ivSoIZXVB+VSBduE5cVOvnNVr80iWysSh54b/CJL5Nd68
/JERrJc7VbC7Cd1X6Y70ee6vSvIomdYb9PVa3IXOa3kz9SmiGd3/RZvKdT7cWyk9U0QriX/Sj1KN
kqXT08n+ZOuN4n6Ij9mEzxoWTzad/OzfwyidW8/pRE3nODf/63DAcmeP1HlIiTKwh0EBzzjAhkwQ
MMqTHUB/ULhXOo1+GorIYSLwCc7H4OEGY6D3nP2UtcL1qmfZydFQVxij3sVfi+CQnFdGQS2DzJ9n
tMltsZYwVW5plEHmF704Y3rLEUZShznEcvynBHwYybh6n37aZqCs4DCHcYXmKfguG1zcspJYb2+O
gbbWx2NagvHeHu7Vi7+3I6gmNEYgdziAQkJtkK2YkIcUZcnt5AfLUX1xrL12IBucc+oVXVL83r9P
xgLu8eACczz5wybkdl/q4OBn6+LH5KJ9aTIR32Pd/N52rDAlA9xvKI5MqYJ1zpzppkTbu3LWi5ah
fw+XSaiVWVs5h19t6/WLrQSHbHeUvM/6/pzYrEhMHrcYwrdAyyxF6OC+QaiRPZ0MlHJeUwhveKvK
T1G4O4J2hDjArZ5LI87/Mnp9ci/fVNbTudDjQeb3Th7k+WHEB5KcqMQaurmai1GbIXNzvaMgDDQM
kvEYWsY5SDGVmSOM3obmplcuPNmLFbxpKCnShAsNyBuzo4xER08fdNsWQbecDdUL1IfRvSkxrNZi
3jEhUoUuVKvpYoeZwdhTmzy0J4AqiLOqnkeT6NUV6pW1QwoQ8QPxfZ8XPY5eJo7fzW5NHCVNN6wm
NEV5NrUmct6LxHd1NGgUHBF/+Wsmxadm0iUhpDPWhegE3HGKHW4TZoJZRZfb6VWJ5/gcs/gR2Vit
D13jGAd3yRtSBFUWuqEq4vBondODLyAnQjQZM/4Q0F01o+vK7X5ucPnW2b908oEyd3k3InQG4pem
WADMW7Tx4yJQxa3duTZ/Zj7CFpoC8myDdW2DMjbcy3SYYgkUvLUtuoBOGVSRdbtVychZjR22s0M7
sr2X5qky2NSNcoDif95ODZhW9yO2bAhKxR/XGBpzvcwb4a2X/MjmjXSBi411QvMUbR/JvesrZ6Pj
DAG0lALDemlEhgYWeFz2rG16yqi4F36Y0CglOuQo66ltjdJMOq8a00LevDpzj5gMzjMKVohy1wVC
57vmM+y5KEa3W+epmCei3ONMlXjoVkI0z/3WgBa28s00Ewt3ZpJCxh8frDcVRq3EiIwYkxvzENMa
dVosvai1QxZ1GvAOmKoZtfhQsKoglXDWmI4BbhGk731uPKz27soN7tsMIHN2LFQWwOKeDydDOSBE
Z7dc2B+/vkd+MD/xQmVilzEj8IUKraU/TXRtXq8QjQgoxjDQkQ/DdQ/sP2bo1O70dE6lZ3wK6LvV
ADY/x3C86i9Rot9vfFoiq55JEiSShaYZRCyG+VJMUcWn5LNR4W9il7jXExVnXiMUI9jeqwISEKdG
n5kKhi03hWJ5yq6tnSAJS+f09MsOZd7qH/DXzU/91E7dZrJ5d/PmfzHuZDy/RX39rnrzj/+ek1zA
r2I5wUBB5SnZEFBgM3L8aoBtRRct2Xd4M4Y+JS68yXoB8s+Jy1WfRB+x2S2AgtWkE+xRJ2r6f72p
d2tKpYQv8XCkN70ORwmeLKBzbI7tdDtuAnpDwm02cpZS3aHvqnt66mqg/5n+POXfE+L6Rh3Kd2Yy
mAK2TOAEA+ZissWVkvijbM7qSmFGDS7C0rHAEM2ZCYQCNehqiPKggYU3Uw+M3f3kRmrTzvPw6yYG
XhhhCtcdvW3x+PA1YWmaBWpyVzVGp+ndzlxGc0ZOKac8rmrBuD+8tsTaFebmsVclUMVm3nTtQIUP
XQxsD4OizDEa9iA2zh/bybEv57f1tetW2u/H1ay7wiY0YQgaDeELHPJhCCsP4WY+tqjiVWgWbPao
kFdDGXrihYzmwMk498/P2KLEM7suV+Wmmo45SQVwUzKj6c2E7JgoIiNScHohw7iWqHHn9PL88cXQ
y/xzDSnnvs19TCZ8pByxntMjLg+FEgotglWenF1Y9e8H1yIWs605paMrMpLuE36QO04JKM1EOdPu
WTfltzJD3q8PCpscdvddjIdlH0Xdu64DXk9ZGQRGnNyNzBUeXyAsu73zbq6zUM8dDCDkFDzYFb5W
wFKXvmsJYGedqXEDHvJuijdodFbzZ0evwo+eaOgTUS6gDxPMwhXSizDz2BhqsKGi+vLBvuZ4fgtK
noWiLI5qdY2snA0I4VMOEpM9JsfNh3pZ4utqi32OcjWyQw4lMxooegxIb2y8Sk6SsUiT6V36xuLZ
nrzJBEy53m2n9VJNpfjoOzhlnYkhalqzi+hXzboZS11XK3sPYNrktuIN7b0xzVodWCXDsDKh3uxW
9Bda4F6CiYcivQyDMsH9aoS5oxeLjmSQpy+oW0iEpND8aePdY4UAk229jobaGeMzX4EJGEcyHEO8
Z5wjrbFmQhhihj6Miu5ttKcVBBLOVUbsARNYLrhW3rS30u2GdfbAwNQyV7q2FqZc2FPr8Ijlq1gh
3aeh86cX/YRquG2gaHwIj6Y1yGrT7UEcQo5nD6T2Q8SsfOd5Gjb8WIC1n0gux48FZusQXu0dAhC9
s4czsQIrosGkIsXHAJ1C3gOHXpL2QcNvTCPi5GIYGeokpYA8ua8JhUoJGhdE81DmyhOK1k7NpLHO
9MPsjCESxIuLfZRissYkrfsrcYjUSfZP3v3pzX8w4pUqUd69ffOfeixdkbWFxMvVJG1k4MWhNZyw
Xq3e1xgqklwHMD4KECUibpo8UB3cJNXgg+L05/oP2noRxA75WRuPZEBWSgZsr7/iQZKO+RMULgWa
5nPsW4xBgjkNUBNmFWZN8b6aqPGMRIDLCw3csxgpU/jbYGykw16jhO7HIzwUXprgVREWOf1SkjHB
gK5KWCnYjNeL+oqylILQWNE1UyHOoXg9JCvt2tdowIQZEhW4xKyic7xHhB24xGUnnkOEW74popsj
vG/1ifECmXJJaTrH7AYQgCDwB/DTcgFjypmwbxKZeUAdaUIKkxEDWmG7oFyfkYqO/d5yA1RXBWUt
w8H1ZrC1SszXO+trisoAKHr/1yjIGEo65mECmRbNAwGCUQPbpG+nzfZu4eDO2d5XlbwVGLE/oFwk
0kq6mK+auK/CLEw4xnLGipIz4HrPLgWpv+C/9WZWbr68lJTycm1HqFBjjqkJpzO5giGuVpL3euIy
yUr3Z5Tsk2Z1VryucXNkMYgd3R2RO1vfneGgYUisdvEgGg6H/eILH36ImKQfolJfXvr8IdIrgolY
fAYP78dcP1CQOnENtHcGRbEnKvm9LAmGF19McM0wyLjkAQZIbTAFr4upjR2TpHN2KasW9/I1/QHw
K8oD4uJ1XPW+RJ0btXJqJiDV4DgSQLHdfb5ZX+zLSx+WErbdpiolbLOkM6P753TBhvsGgBA81DuB
jqISS6cyhol0TNkN6KyZrAyKsxZPMLsMqQKl6CWmCbYLXT0jxmP2RGUENJUskWTOcSAjkzwLPkUN
XclToHgdIKXNkcQINCmtCYO6xSTFFEleFpxaw14CaqE1Xaf7KSWesqF6iOLg8x6kM7rcoMJTCSFF
LQtJEW17zpgqOkJNGcr7n3WllCOID3iOr6/KKTnBSIYRSQg7WdT1OktniS84QGbxyBwLaXc+Qo14
e5WTzeJurIQ3Jod+3LwS2JRQHmnQeRkRpsDIbzQjdD3P4XE7UW5R0JkV4KD1OWrGwc+vynIlJ2KQ
gAMJj3oLCAHPDZ2q4wzbDtRWJaIZo5xAzMpxeBrcJhg9DjPpJm3KKuMC90S5fjzqSeUCa1OdnoKI
hB8Etybx7QfoY3rNYs/Py0ua4RB6/2LcpO8oA0sBoOw3NyJYpG0NvwnI+O0aCP5KLzrCTedqHeRs
XMkxMyx82y2Neh0avW5HdA+2iaCnTAKj/i8nd4i4WBJGvilPiX1wTCY1Dah+ul4gi5nuNjdCF1m1
BfuOGI0kQp8RV5hu7eoaeJtS+xThKdlBJjwCnlN6o6HGVoz5dOTI/YpZD896OSYas2ostrJ7kAQI
gYKRavoSTGu+3lS48BxPWo2wyIBPZTRGWk8SspM6EiPGLnOKQIEjerfTkc2EwhlyfjKXdSWcPc8s
h6q+u2PHh021DU2NwXzH36HmAcZCEejCDUSydPEVMAMomKHSkf04ypJzzbt0FV6awHw62CVmR8I0
CvxrKNItCN/XCAc+IUMIZGbCgablR23PNPeuYDMT107uVMJtJN/Lmb1mwZY+lMWfkMF3BZQfp7uX
omXPiU4q0EaF41KivXdks7IpzbCa/LHhhkOpKKUCZlB8W959AIqRtrucvHVYLQrQLBiVnl5euq9D
3eH9y0vRw7KBZvE1f3jJWieLqZnu/hWOJDanjJLeo+EDQueXPaTWdzpbnLq4Su7fcZPi91TMoMie
TRfRQs1HaZCinGACXM0FREDgcQQNlDnawG26XQzt7pD/r+ZkzPphstqqeDMFiWJb2tZ51zIZphD8
M8pFySPhLW9KJwQ3B7hDZC2sQynBHAxdutOrP2WIHOxrkIUoibbZmJK7ipBHhSBcH/Wcl2UZ7hk4
0YaDw0akJFuiHj9HlIeEaZC4SuDe3RoqIg/Dzl1Th3qlG2VYM77vQAuZSWRwrSWKnjqk9dXKecNa
I+iGEGMSDekXUFHKBbjjgn+57RudB3y04NOADjbruYxQMiLoALdH5m6LmugP28SBsXKl2kt5u40w
IJLMtCLFp3mEy/BoC7IdBWa0LK7jD5lguONhIMbpiEzTyRr2AD6V6rbHfJNlkZwRjrJHXv1ESbR8
07Rr2GzdjNSPClk6ZSRtMzrpM2YTpnAgwh6bnW7r06vylByWfB89JYiUH7xqcp4HFYMKU/ctgYcC
9m+FchHTSudE4JUaeEDUuYaMFitcNgW56qjOTBCqcrI6Y+aWrp1ga2xgNBthWAP9QKOZzymNMF0D
JNQwxpRDmztGvh7fAw9cdkKDWxLTnMBO+T1Alof6K2ZDkRtVe7A8oxMgYy9DujyDexmDkGgeVcHc
nG0t+1JJw0B81IMWJ0DDvLzEsvsa1JVr33CBNJQd9uXlx6Ov4q5HjBzi+QqY8FmbTHFYNGl0NOdR
WLm38naCVxcye7x2GsLpJRududtyRZpU786Va6+p+UjXWXOIf8cn0HlF6HDqXJfzShav9npbCvfJ
S4JN5Fgjo9l1Ro2Aq+Vm+BqemeO8tBkmQ5bJVJfaLxBVkN7FiuOoeTQjxRwul61708/hHruzTTxQ
bdJELTFTQs91+CAkrqB25N4rs4v1jTg68TbhTU0T+yUuEiOz0X8NDlt6YzKXqjWE8qiGh5S+fX/v
ltIIKRnqSWxiCSsS3tRKGzH9n0hy6p7/M8PWiMLErlbuik4MB8TI+Ze7vqVuyHSeAjFv/B1evUaT
nzneo2CireSusbxdLyYrVoezMI/1qwZPP2Cp0dyCMJMnAqU3sqpCY1/6uItiiag8u2kZhoGk9AP6
RTiCgTaRqqHmlLScj7XUX05kkgzNV6TD4FD6V5SM0b+ghj6tVp/6/LZau2yAkSKl73c1HUZAHpEM
YhO4RpyrjNTYWIVaogN/xveEzaK63t4s7gasz6OUBmRo3TgSZppgYQwGsVsuJ5s7Q1x/KZyrVvPF
rlxN2dxYucFeYLcgJHOsttv9XwwVxVQLrx9Kb8JLNGBWbXIHhywXA25WNYA1d3zpxI2QdbsoUWLL
80Sf2mYplnYs9uYoHlzXmIsWeL3Nlq3jiMt+X26uakBYVM9Q92GvbR0eOmJ0EmPBkJ6+4JaCax+8
v8XrVsoLiwe3y2DqQCGtmMGlZzmcZhKXqicBOtSGSJnPeZNhP/1JJ7WNjhD24qRwjU3Y9olXku3w
J9I2gdI3X/SugBTglmdJBLXnsOwubCuJDaXrUBx2txUzlKSL3FaY6AZbf+QG/UvItbN6SufHL3tI
Si9ifIEqOCIwPfmb4q2sCpuGUHIX1oYwGyXNGQRt6eBfQblH5mfFDSAAHyF05ScX0rPyamf0yL+c
lo/jfGZC4Sc2Toiy5YfQ/qu4JrtErZzRSwemds58TnIF5nXBDghUqUkbfSvx/Lh1DOjX29ugli9c
+bRNp0LgRkHaIMmT+fnIJpOUJ3B8l5uKZOHFgCAD4Hg6/HVfe0Z7X3WVd61TZmxygJzJDkcZaF3T
oUmmWFey0WUUZNbBSOLUDsEFEuco4y4xykG+M2GwJy59JFlPKJxdhUX1FsMCBNH9OuaMfrd48+8l
6bS4ZM/Kd8s3/8+fPvkkSd2M+brocBV/cY1E+PX343/66uVXL3/3aiDP/+3ZP//x+5ffvDo5EZvB
sdiCksnnCf67qK7YqVyN1MfbzR032CNbzK7YZEJBzFfL2RKHmmMLcQPdFr4cFZ9LQmgcGfmjUboL
GZukHvLIgNA98UavHB9ZZw9cw9MjmxD3ha/RDz7wXCBmXOJyiHWKKP3pyFDDFI2PkEugw+7xqb2w
ekhJgUHRmdZjzX3dSdJc7PHc1y4O5FXStkOHfn2bKw8kFrnCVR1X8R9OjYNg7G1rqgQ9HBN+gZyQ
naM/hu3d6jnNQD8Tz2MFXz+ytYX3fgjWtbZ859aG2PqW7D5Uf8RF8IdtYlUeakKj5bgmuPo/wvEH
qCQ5Zoi+od9V5C4c8Jd0Ty2XBBRbW7hNDgDOEAFaqKJB5R2Xl5LvWOoRB7JC26IdUTIQPBtqjNxF
wrhs6zjzdU9hYlGmH+RFvkP+l7L8AfrxHTvlgbQRwzW/SD5DJEcb55awjZsJ2iIw1SO2qqzglAB2
sJpG9ZClZDXjaoYnCdKjc8RQTgaxW8+AT6bnCwoW9ZvQilt9N6L5xYu6blvE+W6x4PXYv5SydV/x
0smiOilsh8ot/kR3ZPVcrExmpW3vJIwGL9RO6Rs34Mvs8Hgc4wTpwxB9+M1QccqJ8zVWMqkbfurE
zKbtNjpDIru5WT1wu9AXlKbQflv0yAFQdHmCUZjp2YhJJl4SeuLGQfuCaXEMIAqcQ9oQGyCoyU3v
JPb66EIfXdKyllvW2kN/tfaAtbktWWooXZGZ92QV+mfINbuxUSZ3JdRmYILDPByRK2GkQtFwuuMV
uZbIEEGkQSaWCrETk76R4rkwIbURXsLUDBHxdtU+c/XmC0yJ9reGu7hfDceDxM7oXNhFpTnzIWjk
TH+O94rJof7H8Eyf6Kk+x9LAvC3oCms+JuLH11bzMdvjS0iilTCIRqekCvMSbWFRQzl00TRzgS43
YbRuiZIqxy19HcIA0mOWD2cdjSlrx2cKyhxMm/Qid1xSidgTM2A7iEXShpjraCGKqE+gy6gc6XAf
+aBgsOPdwzZAYo1dR117Krs3h0pEBjqdfHCJqEnkp8IRy1rYCJXvJwvnvooc26efYjyaaGq6+kUX
C3X1bGLQBgNG6tB0yQODLVCB2Tf6ElomRxOak8xkG9XKoxgsRET6R3um7Knh8SHADwDI+s44LLn3
fHj2aKYxJAkiDIsQKweufhThc3wEAKHYxwDvv+4D3i8HCiP18AT3w6Po24x9a3VwCwhUchJ1m8kc
oAF83up0U053IDa9RzuRVXmKWNpXLSQJZkjlu3LeZuftN0pVD7FlGoeMwGbMAiluV7YNUFqxJ99H
nLGoRifMxbO2kZSZkvHW2Vy6Y+/YA1ZC5CWHqh6k8eFz7KHqMnFHYdXJXsESFoUCdH5MTEVJOK9x
RqHqIERJjHN40e/fL6oiGiAIL8DsOIaSnAL28ER3dM/zQ3M3rdMYL5RoXBPFqhD9jLwWEwHZXeaT
VyObMnnB252BHByJHUJM3ms8QCUrKL3bI0hT+0mEMf3AXAo9thyq+nm4vbIHa5TRUwfp82pkkmos
JWRFr0PcFcaQQY/WTOZOM+elE0UzMgTxKfnAUXqkPXdl/KxxLsxKtLW8KfEavq1tC6FTQTo63vmA
9KL+PcJchQhTkMuwj3ZlmmeDRTOEz558HD9xQALZbdAk0nAddlMLTzFKJ++4gwRoR/EMx0v7JORv
axZijWCfodo5EAbUlwnGvtXO8IHuINQh91xL7BCt/3akZD0Xw8EJWQ5Sk53gaCP9JRDT3Hq99F/5
fJd7Wr9A3u4Xb3BmQE4rUqH6UFGW3E2uahBvPqCEiMMX8ydybXU2YTm+0tGeLEPpEUODl7nx9Yd4
4sYRw28NozwOIEAtDQz4B2KcNcapcy6DA4mHbnOpxLO6Mg9bhmCJIZlWp8IjkNqsgOZy2cEMMb7N
4o4WCFDOUYm8ruAB2sD972IrTey+USCKzHT6pIDTjyyXjWJAel5Obnt7KNOgeBwK+WYYAyDYW9L8
xJlhDBvi0M9vvWEkTH8gOyh2vGa7PjzmpGncuhICAv3G8P12Uxqu8wFpESqr/CcoUyyXIKVeoMFW
aYdVO5babCar63I8MSmD7kPFKtXqHJ+MkvPgQIcrlgeDj3ixoaBoDV7OLWTJV4T4WHJvDHQ/DO2W
op9CvTh6Axse+vD9Mabu0ZlLJ4NiPCBjz1F+ASzdHwhYDwf7zv0nHY7kb5IZ6NXdaju5zfB6PDp7
kH9WHMgddwyIg5wxMqj8QXhOYD6DcUhGYoeM9jjhl4GAcVPNZuVqj25RLV3tKS46GrL0B/4a2BHH
bEJ7pYnzLKF56sV7UZr7pLk+3Qvgw114sblr8grXvQlZw0P1vJuMqntkhtY2ASHpSUTL/V0dJShI
6xzhzLB5cGKkXF4aantlj0iOBxZ1DRSNbouHmQFQ9e5vfvObbhziLKYVgU4+GQbZL+XP6kV8WH98
5rlsnx0YfcKodYriOarxH26A+6VIuc2Pq4L+RVZ4vooYX3UkIYHer8KqXSyI4SPKUAcmAaPbgz61
ruPmpHjMzmFznX4i//VMkGvcpO6DuwgtV9PJutmhfSaxcDVGYStuqmu0UV+U70sfLIJ97nlXajNI
WKvS2PRS1h8R7kKpok1M3F5FlMSHI+HTVXychB2JZjCM1Y96S7u9wlxfV5QLJc71ReJbD8hTLkUZ
JaRBmftgVg/K8MCLyoI2tpgtic2N8N+hjChCymabJEiCCexN0ZerBFVMSrCdBsDCHc4hSwvGXflh
ODJ5A1sG1TCuQpsSyCOAcrt86Y6oqDc+xDi6giERv7pDJH8/YzsdiutyVYlVaM1Sl46VVfmWd9TY
HkFEcDlk8J6VOAjyASCTe8dSG9MC9IpkkzYNOOioFBnUY4ChBiG4Vcta7kTnloUA8DTDJFUKSz9b
1vVNyMy1tymX9XtmX2HIuxWdYyXf7V5VW/WSKyeLMFUS3ms58yZVHit/+shNr5/XncJgblXnFaKS
3BjcGtKUfNcYoW0sYK9ng25RWA9Xa0QL2k8uySh0gUE5WzsMScvfySRlsa0pmWa6zzifEBcdUkHb
eL+lf8Ey0/Wt0z+NBAdbqlqpKKifl3oo2qn/2W9Pberp963XG2VvRezhj9Fe4AB11AAYwSSLe7PD
DDeufT7R+kNbGatZgmokWuLXtyC39ppFBb8f9+NJSC+k8BrTYQQtvp+kuQtJW+loMcUzUMq3Gi0m
y6vZpLg9ozW9HTq+s38fgoTbZQrn6ITznS+bgjZevOPnZFrtk8bw7kP21+tJVeM8sF2hw0i4DaTr
AdEs8eWyfDFpcTVKg3H2oqD+IRli1ZJZKOL1uAUASkJOJeoM0y+eJ9GxsJkXBAa+B0VlC4M1aAtW
ofSMfh81Ne/L/r5rCY+tso7KKfVDIZ+i9vlQrvmlpNyoqPzgARiizSpjWpyFBpsxoPKCOsfH0nrD
VmpYUe4qFAt7p8DMnWq2vD79etKPZTZmarDEeXWR0/2wetfDrnV/hypvs53PT59cWJUcXRzVK/QN
uN0DNHYUgDJ6KhABehQsO6LOpvRtBmOrNxUmZd9iLrM56nWAA91U8HtQGA81V5cvJWxwbwtbzTkU
JbGqBv66oVzBdkKP4nhSYh00C4w16K4bubGynLFh9od681ZMAaKqbFVEq0rR22Em1wAMCqAjd5Oz
clpD3xTP90oVOOsqaog3iXo+N6H5IGEhhbKAM+rmEYfSKt/tgGvd3oUNoYUU+8Oxt1VGxcJ4k2jZ
q1kv+YLmMQJHOaVOMulEOmrHJX4EumwTvPaAPkNGEo91Usk05VbICFP684tExbnIJGQJZ5CJRUp5
nVIzBoscZHKHJTHW41lbWrz5UK8450O5yR4T1Nv1N3j1IdOnScogxk9G8HD/ak9HOtJ+/2BSvhCl
3G2hXVR/6Z7T5Z3o/IjzWq4xSEu3dUbIX7SOu5uda/c3aOeLoOw64dEZKqOTY964tinCHOluXzgJ
sgmSI0hmoPfV9cQx1BGBVgIyJsl/y2ktJbZxTmjcrZ3IwvrtTB5xKNQS05oyTYRziDE9vmmQ8IyO
F4I2zp9cDIqv6HoRwEWakgxSGA29xrDVut1lc91ty9KZGUMe40wHjWt8f3s4F/1hsy515bK029+T
xzJconD+Z0U3uk8VTwIoCgPz+vWz6NAm3IsSuwCjydXOH1+019QVCSszSebaT/bUxqPFoWLU/5XU
f7qnPg0yzVKEr61SDH8DR4yvjLozbc1xOz29lko521z+Gyrg77I8SyYz6d/jMjggAMVDOO2ugDUa
8Y1w0QvmZ3IMe/kzsCCaoj2o7trNHV2+77MwCQFCOmO+BzuJ8gYDc9jVBruiNS4bVRozDx5hCjQW
B+nnq78QiQdsw0ompGR8bdoQDUCkMwgwqhSDlIl26jQQvabObJgiClshUoq4VNjQr3k2nD1UR3lP
Cb9cA7MxzC2KUlrODkFtJV/xz3AT62SRwgr8k+sKqmEIS0Q4cplhfVfnDqdsjYsYU/G1wbTNxlBn
F4T88C1FiB+enrCr/DQn8UQdMUy5O7PP0OFDg5XHm032hZIH3SZug4T3y1b28PtsGMpUYcR0pz7L
sJeR1XwLPylzfQlTQLX4t+L72LOtoxJcxh7KesZQDib0gbkQ3vZ4K363KEdsfhOyJZMryi2hBbdX
LFGOeEejhE7pyfeQDzwB+5J1KdiHqpEML3RDhZ0f6hk9q5UFGh5HgmhYDyd0RhHy/4XW719W9b+g
OvO94XO4VEg4ZH5nKIiXqkzHZK8oqiW3RBTJhhAjUju4HLYM2RF37eGn+giZHLbDYQxneem4kpYo
9wQ31s1epCFyhPwivnGnz3dUNZNr8A5ow8xgcky6eu3cB5Gy9s+8FfdU1z4x0ZHTraUV+n2hl+E2
7hvYz5eUZYgVvOXsGTM6PYPv/lGRnv7N47z8NVivDwbz9SF1AVliUjc410sdBhKcg3eDecpCaohQ
fkvIANMMHrGlH3iVPQipWj9VXcIuRRnKjm+3qpA6/f9mjDIeGac6J8SrnchHQnRY3+DT5LHBk9qt
PNdgOLQzf8dKBuebywoI0oDEF23sNbmoP4yXk83bkhJ8fMk1sG3z9lm7I8MBiuwwkqnusUSY73A9
kRmZfqJCYl0ZEkThVIVijVy/kXeEdC+pogOnGCogg0cLiKtMoiJn9kL6qeCw5ggcCiO2fbA2URKq
G9dRsi2Sew7HDQ7tdVJ/Tr3oztghErfD3Z0+6f/8d95Z+4RwQLiZ2myV93WeDqBtEGZjPY532q9h
PzIU+sUpCxTOAqAfcVS0jQNzscTGWE1ZeMeH1mVu/ft526r8ha831eLlm5WCK/28eY0ZsrPWd0b5
WWvhyK4/cjkYGwP9eLpqGam7IfHsitJUGwt4Zv7ggYXlahWwisYintnRjM0R5gExNvDcoHDuOh1t
vx+ZL1r+Esb+FfYVZYuFIYyDtVazTAqCTxrv0ekTF/5ANEIHRcy5EnGOX40kc1bs1rrMJAMNsyLW
+cWxJnneoCrybkLLk35ik8KTgeKPbQfmyxfF47O2Wp+NCkND/EbAJDNjzuuMJwJBIRh/Krrx3GVJ
gwY+U+SnIue++wu1rN1nrWCz5wXtnJmGUgTbNxw2u2kvTZvCnJSfEQQ6eybSP27IvsZnQebL+w+a
VHJT1fDlaY8cNSOgkrpkBF/Pneq5FS2/HwrIyQ4iuKWU55G+B9LHiP8MaEdMFmLknJA4jn8V7NFQ
yxM3+2vfYrwpc/tK59uB//tUfppT+Fos0DdloJexvBac0OZA1D5kWC2yNqdy7HV/XJmcnsyuyXgY
oQyT9ZkMDvOfZtNESt97dpjpIIfQcq8o0MtcvdjxRZDKrxadJsbhYCAibuZAMcJwOoH0THlb3tFb
ZMUJCHKZIxInZrWcYsgQzNj4j5207rDB2CPpFiQtKjSEZVIIqJpXjDh9VtsYjKSLBWl2PBbfv2Y8
7uaV3MEKdWwF6OgL/fVlJ1Wx57P5Md5ygihvDTSly/8Jx4yTeDR4kR9bN/kWSG/b6ztDhYHcUWrQ
GgnXMsQTVfOOpa3MquZ6VxHTT1Tnfbm5c4loUHEyzAvPID1qUsPwfI8UikFvuOx4MknlPhxjf/9Y
zXuMFm2P1P5gnwU22TcO2BdxQOnZ2q7lwkV9ePrkMWIrpQAUM0s3yJa57Ftcd9tBflva/I8/ksKc
mm9r1YXRaP8s2pI13e3KH4EYDrqcLEcqyiKBo5yFm1Ze61ve/KLoDQmDEzeDjLusIQmYrBy77c4j
y6JHlku/ONOULtIDLwckvgTel2VgrHEeo/fLn8jctL3LQJ1wSg4v7f14f5cWAqoSMTdhfBBTlVwe
cCpRO/E6UzHj+3gWqz/Z90J1f1I8QqX1RqQKjkPg5BNeaz20DKcSuJ444VO5EWXwE9becBJ8UNO/
+xic5PTMcLjkQ8CBBJHJIDNvJAIM+jBKTjtN9y0oA8dXqsxiuNs/oHSdTi4BKW8WDBRMHoZeURUd
gNWihKNNCG+LKt6iIirkZWwRoGjbS0PJgtg1O8tZcPi6ZtWFj5D+IzuGWEtMBIWWa+BwaGCaHtjJ
ikYqxymf/QQG99cpN5uMzVrbizzJQ/EyvVs2b16dRguzyqgj9DGrNIJTf3hVIn1fUF8pcoiK5vtX
+51Ecta6eCiv1ngcy/HMzfdzLumcUn59kmu25fgIBYHAFcXfVbvNkty+Kwfqrkci8Pk2zA0Cmx7a
9bXGcxkHOCcPBBeHL3crTDObM+ZIWvS9R5Z6fhRkzmhKBlprve8JjgLA5c1E3FSCIbPHE9PF1I7O
375HZotifNHjkcgR07fkSrOXJGQ+ULxnGQRlDWiZE3PJynNcrrmERAswRxF4c3Y0HjCdX/3qV7B1
1a4NjfkpwVGvQaorMsffFuu6ofAi/TRD9hXwTW9zxMCbTsgUBr5nd4vjjtmYg7K3L7kNgIUsAmdA
62JvZjNnK9ZG11InBy/mgp4Hvk3vNCQhfrHq2aF7o8Zbeft7m5DPy4SL/EKjRQbNDMsVXbR0d9v5
6T90U53qUbdED4rn//yCr3h9LmUx7kS7y9NbkHwonlm58Rmu5ZKf4XRi49M5dYKXh6qaaXRVD18D
Trz43nqjfjDfGJB/JEYcjaLLUVXHVju1hlTubdOAbtDKNUX02AH36DzXjzfhedh4M4bJtnj4+NZH
c3B2+WRhqnbaggQp3li06J+1xvpow67ofijG/+B3WlSx3j1HF0ml3KIRBXVjm8xmjcsHiaNjAU8i
EaOCej3qnHaSuytpzWm002r2NsKsoFg3fdg727YlVyVH2JOLTqOjig7BD/ABU4eueXb9+CsJohrU
2a6up2+ZlWXRrURepDMu2m8QlZxnyWMOCv6oM79OokgGnr67530Xh0euxAPJ/sMD4BibZLo9q9BS
lcOM3NmQU3hUpmFvDtux25n2E5MoZpO9bEUSWMYOStau00kP1Ls2HNKzVBm6eDAYNSfDcwKYx8kh
L3XOq8+eZFVp2XkgD/FjToURl2aJktgP3/1hqAQ4j8giiGrEah8iRHDnLLsnwjL5rSF2K8G7dtq3
vWJlylkeWTtsQNPZtynOnRgiXWubF+17RCzEPdk21TNbOxnOPaejWgv5fOzuA7CTQgrwmS0MfWP9
fjL83MCZmzuCKBnhUi9D+Zea4GQEzJZJ+hwfwYIZ4wP7My3oTCD8j0xrPBy0ZPAD+xkInlPgR5su
E2HaDGMfaQmuwhycchLnTIBnbT87z/Derd9K6fjouKoXMzExgWZG8L+wxoM2wshMTzJ7u0BtM5fP
+07mQ9M+fsrHT9dOIXcD88BSQrc9BkWHdbst/cZwi7poA0IAT0aV5GjbhxMHu2/Bv0MSiPK6GJ6B
/0fK9s6Pq5TQHAzvEsHi+PIy+ICSBSq649hna1PrFXgRcVLVoWprWmLmRWa89NGrK+XpeObJBaHH
I1s6qXfbtUQnppyykQ3lA4lUOFmZkiBMsdscBnApylmF9m4F5j/bUBRwHwW9uVZ2RAfrUA0n0FxT
TGxa6dBRGe8WTyM+h1uDf8/PjNOmQ0qMdNeciQrYQTkI9DHA2uFZJXcox63t3tP0XmfpMfTIUpkQ
K9XG5rhRkyVONGg5+ujIu+dZlFJhvh+ZT8hYsZO99pWwom66aSPExeKYYCGj67yWa0lVfgJP/Bk9
uXHgi6fFlwhBjMD1oZrFitvIModqtTsB2pXgDtovJgUOMJd73CkfNwzf/mcApgEZwKyavCNZa1fx
QKMG9o/kACDsAQH/wbmoV+xioylxGKc37ia+N1EfGjlCxTnS2qmRLe5uq5+IfIWuN4XrZ1tLOPFu
4156AzAKliYVz/YH83flTqx3rZmS9bGNPIG6ocOtcxc14YpdK5HTYS60cUtZigwmL6JPOlmB2NlR
c5DC9xm8VNk/al02fxcsbxgfdquPwwj2rToKKWiBJcr8sUhxFPwNKM9jHLgYrmv1scotxX5QBS3r
ymiTJ+qaI0lo6qs/kUfe1Jl6pdnQvc+7sXUO840nN1o+zDSqGmsX3+1AEhkobzxkaXDdajnGzrps
N7u3KGc7xxDkxxQ+uqTOIC7Lvm+U3pZUGSZaDVcMs+50U3VQWHxMmYShn75rT8w+JNvTsGroLO+F
FsT63y2P3KztUJv0i8zWd7kjUQdze3Dk0SrfZpb95OTd6s2/++STTzRxLtqvvavfjPuffOKyyVdT
YEWnN5NV1XCmcCykadvgtEetJwaTkiwPLnkw4ZzPjRXkHKOMEmhvqaE3ZEx/6WLj3bPi9/DHOVv0
+n8Nq09mMw5M36OgoMocXW9qchfnl6iXpze9DseGWAgTSi+HphEHme7pW+N9P6GJjDoNDKGE83gG
XY86EuacfhNlHXW7AwpggebXnWdvfnj57NWrF99/Zxzm0LN+1GHTrx3ndG9cmsOtpF0X94TdlQST
9dkvhkV0K9b5amU+c1Acgb+kKCC/OlMkboD91nBTcJYUDDHjuj6lQaHR9fUELyc4Wy4XjBsS391q
U3CEWXYPR6vsZ7cTzJl+Vpy+LbqcbZmS3RaUrxMajJuieA1dhoiGxMFy88AFR9rHPL4N+3MnDaEy
HQcedNvFfvkFd5SBqstxAAsl68ygUYBgomzpn4KduZG1DIHXETa7QI/jmgCwuvR2LJ2MZc5dDJM+
SGEMsy2RfjWmR0YfUo/LEjaYFR4GOas2sCfRLZri7SyH3J4I6u3Y3zlddg5hPxnUAloZ9O90DPr/
HvPJwBY4hPw0YYQOYz02GyB8AoPSYdOyICLxxJFf/PW0m5tldpKnbBTcoPuUnSOFK0nGTc6lUqPo
yZk90CSQOIR1uTldb2o6emsUIMORCDHC/BmrqtfxneujJLC1iA5fu0jU0WYSw31Y2jddzsjwirJi
szWokD/iQvD3kCc9lA78sSCfZ/VYzUjLoy5G84o2aY3s2s3MomOIZTwMmue0b2S/3TmDWT5JBAxR
Qf+jOZCGIPejxohbcuq4vGyCHeW/9DNgAIBnAOG8nzIgH8IRzREGNTVqsDj1YsG3gONlPavmdxTN
rEf/DopgtWTrI9JTzBi7cDaVBpOGXCndiyfGlsm26veHtBBzfieiL4ARoy0NCg8cp1YaNI2dnz45
uyAN9Vkn0DWYugQOYyhupmdbOnORs4DFFxrp7GqAylCbpKuVd5x8fEGR2DRAXKC3aJ20fOhJ7YEt
mQQa1Y5VIJJK/f3Rng0IUvYrBELOli+zNolLAZTApfazcJVaOL4j5hIGjUQn2F1sVSRrkzSh2OF7
SQgMZmgdapZhV4y3wci/MEw6fjlHDPMdO0sH5AV/zyFLndzzw6Z+X83w1JN0REuJaaoMKhJWZnII
bzgirmBdGH9Eo0uIrwgwU+IYPNyXe1TZA7MEyztsgu1tthHNpChdTkemlYdMHlKRIzG4JyCglQcG
1Gz5/I1GS8ughYwNj6EwbKSYt7qhy9O+aJWrMLmapuUU74+gQbeG/41nfK9ldFCi0r8jRsFnNSKm
B/fCmuLSer4ZvlF4T1rVRrkjWewwhFVmXal0kgeH2xjx132wgWFE4LGeMaaxlJBwTSyGf/dEHN0l
oeIlujidRFl6wQeFVxu8pttsgAmLH+j1DjCUHSCQBTFzokyvl0+kUZfyw2Yr034GxV/+OrDbVocy
dLumb0d7BJlOx8zD0eyNRtSBEm6oSGdwv2DfvGx/rBYLP2tNE8l5L6V3znCl0c9cFkkM98syjmDk
98jSqrRiioWBO8sKpQ1A3DPaCWeXX+OfS9zGHOJI32ukgUtu3QojA9dNvQokiX1yhMY+wpXliRk5
ghtxkoKTHXCtA1Y03DCtpzpesS1Qnu+cImuLB5dI27ijy5nbQUwaJXACzDHN1Km5CJhuYwZJmIAI
lwRcPoBUseAiNc5loZk1ZXxDYjG9QU45Nfs2RJTBIyznCw3eFV1PmXkQGaUOVj47VDwdChWLsGYB
0MmTKEPKFOtNOjFLL+w0qBVtxE4nGdi+MUV72uV98BF1QXC6vhHMELxYqrei0T+53dLVut0WoqeT
cMrI8Rjdv8bjA/A109BTYRSdIz1byali0R2VFnhlMdTelKKeroLThoLxYO5mjCJOg24qlDJBSnxL
uWjY/Al/PRl+3klz59Aozk0vzLGC4BZwpYHBAe8Nbjg70vNfW19lk/U60x+WvUgIsSnAtFhqxqKk
k31COTLZWolkiUftdGuV3V5ak2aduBLwcU6n5zXYzydIozlPiNLCgJ25dEErT5HO1w1tK6ZtxeWl
6fvyUsJAb020wWHhVFFnJhyT10y6V4HQ2QBKhFFg6Q4V4aaYHDNuiDHcwwc8ZZAGY35JaagbzA55
OZ0YVZHT6PIy6OLyMsieEHIeuBHbuDLxpEVbbpTaxp1cEvhQw9/r/N4zAMX/V9zXPbmRJPetHxyO
gC3b4XD4QQ+KPtB0N5YAOOSudHvQzt7xuKSO0i7JILl3oxiNcD1Az0xrMACIBubjTid//gH+Dxz+
8/zmd784f5lZ1VXV1QCGd7IvpOWg67sqKysrK/OXVxs6Bl6/+aAo/YzRzXEsqgkUQr1YJAfpSaoU
ECqkRfqR8OS+JazSrrfq7v7nEUvB6GAbfjrGj7iWR0NVSItN0o6nD7vKgfisF47DZBaV/j1TgPF+
ehMLJbhbbYInBZsdaFmaNeOsLiSnvvgRd73thaRiONGWCYtTzUNxEs5pLs9xUMHQSop7TrUdiUsE
8gZ1Z/iPr/03ZIQUEiLMs4gyy9po1Tz82HwC1gnB42sBpP+GysB93f50eZAlsZoHPXOgppvqOAGo
xCer3E5+BVRL4qqzUrG1y7Xs+smqQISWtr1e6/3vaM1EAFwVZ6Nf0xyQgHstruW4QEOjKwKNFR2+
Zne59Woxqy7L5TcqKXpjsmxPFNeLszUkYe4TjIIuC3a8cDmhKFMPPR7++tn3L0L/Zj0avda8Sp5G
KuHVf3KoQWcfQNyhDzTS31B3/KpUWJ2yq+8a0wu8fZlhTIVdoaoppbrj+Rn3pfMJXFvWtGU+HV8V
WurpopAA0WcLyA72TpuQoJC8OjM5mWva2ARCgryqCgRg3J7yOhCvarDqLjqRlxux6vvoiwp1XC3t
XSvbcWuADnDOMAmWYCZ1s17Ac2bCDx4iBymd53jiOOMA7qXGWeD6jIvN0/ZRWllyj4HWHUP4mL0H
mlt1iHnzljHX1dXIC85LjAmyUEclzme0M6Z3CT8ATG2Ystmdh8qAMh5RmFn4Yig0Y+hCrtH8vMRT
yL3WMJnodXpGEjx6RPLTRoRmLutYMZZDEljKtROgc17ctJEkTzx6p1EBnCg0Xom0Mh4ricTZntZF
68nQ4Is8lS7QLXqot+fXi3UxSj4A1GWDmNIccgL6smvU7ffS8Du2zsAEs3IGxCD33pBcoOVlkiV2
CGcPOERV5m0xQj48At0oEdJZREtVOpAdGiCxqlOUqssb+yNUDClwdBNV2jHAQ0+yQOEmVeIKfWOy
GIy7ZmRb8LNdocq4I4JECBcIqnU6gFkJ7dvs6fDL4RM6XKcaMomOzj081Ka136Bc18LQ7Tr2KRut
pPjVdAjsfu3TxEP83ze1WJBMvZ7Iea/r8Lm4BXz+ucxRGHrszNBrzV6ptEQj18UemTOjkNN5fSEP
jU04Wd6yNxBuiJKk5cemYco+YN2G6J60vHE5DNFYuYR5hJAKAnwtoneljFAyU4tHHi2kdnnizEKZ
KRpSqfk/X5oCnA6YOgxXeqNtliXN/K1GjUEROfzZgmNLyIWZAkIO6+yteZuqmlk/cSLltf0vqJ/R
jsRU+KS13HabSs9WThBb719JpFvxPrVXc7GYTRl00xeCLRNoi1FgVkyKb41S4DVjDtqss9PK3sEU
lK3rsLrW0r1Oe50tA5S+fcr8S0m+/8S72WvTuqMT9dvmjeGNUirkjJc3Qzlms7BWF46F/36UeLPj
svQ6Kpczdj6JpNt6LF3e9DwNyyuLvAE2+b3aaqmYZC4CdMVoUbXY68NwvzPS9MP1RBhZnXpuroq7
j00qFTvlBTaquFqu74Qb0E2a1XdnpRPUqGH07tQaypIMPoVWGA+hrnjN9igtdQcHdnD/p0/8LEX7
OXMnRQmjd3IPh3Ar2T5cyWrTv7rU5ujsfNrW63l+1473QGQNQTs4Ch1J20qM8rpiDzodqncQetPi
4o7WbfUii/coWD13jwU7yqVMEtqbE4oRSNjP4K5QiSfMqphpiBu6eRibxgHLuo34Y27HaxWJDs9n
N36T4da1o+98XB79y88++wxm6TWQzMePR/9wx6aXHXkOlViLuHdfGXj8YQfJLFSzTH+7Rmws1Wbq
l6t8np8XK2N9Wd1Rwwt2scSzXnFLN8Ja/WnN401jUtHV1WLeUdmdpEj9/n786v13r/+qz398++qd
/PHuxV90JO+iGrrVWKT0OQlMGiuroo9wRaepw7WumpYr/APvJfw7K+eXnU5Z0Z36i6ca+4fErzVE
LlZlcgK7MWfWAriCwMIiqIbf6wkgPltPRSAukm+S7Iv+geNUc5Uvx3k1xoLqicPYbKMmSiNloMxu
pl6nPnacevCani8Nb35PXfUxyPfS42qzZqRCeYsKa7Iqqs0M5oLVesy+IoHCNOa45pZs8VprInF7
7cF43fnZdmlZ3Mw1TqSvlNaFjekV6XL5ypB6MVXlIoKyyYo7Ckul3ZtpA3neRsNmbXEx5StFRjmh
bl3ebMqp8kb6q3Fb4UoglLeMSayJ24P7if2xOftA0I3rwR9w+Oer5Z7Dp5xiDH1uh3++c/iiraXN
2XJSKQOIUOQQNreLqR+OHrOxpSbiIPvWBAbRALmxsK/CTFFF06BPmdf2hmS3viVR5PYt0LyEEw7x
989zC4nLYZsviMh1d20qBH9gAgCbnRweBCQCA/rzQrZFdVEufasKFOfDVOxWPerw01RxXMspkOXy
ZL65OqVswc1QkgARzsRH/Uo40vOmaPQnhp91Xczu4mFHaPfIFQSdK6cZ/qmn+9ykco8pmf/1Hhyo
hSbstA16PLymG986wzzWAWZvhajhnHZJ0xu/xAaZ4leB+C6ho4tXlKE6s9teP2Fu4e2UfUoyabqF
HdypfBqjXoeFiIkYx0Wt7q5OF7Nygtefy2EsuFFrb0xDfQMvsBI35Pq0u0Ty2qBf0bUoOHbYJw97
QNEoLvIV+xRfsrg0FxWnUJbbsdYeaW9wVmtbzmR5HaNhe31TAzcSGhYzukgcPgk3lry3BPMVSqsy
r9lyUc6t0OeMotfU42h7o85elKO9lkHJa0c4+e1XVChaDP8SdhOD6n0gY5JVqcXDKkkfp+wCMrvJ
76Awliq41mBXzwIIWDfuM5C1JRQ3CobADbNpUfPYtmxzNt/krDSOjUboGDL4i2eRTSJwYWujZIEB
yNLhkOS33udzRJmwvaUvvXsugjTQoH51q3N5lm/xoAJFU+sk7J1oxyE+5suHiZUtiI6kThOF0ESn
oW+2bZ8jeq2b87zZuh4rfvPG2cke7mhfqg06wB87nZfvfy50JrWLfI1zxZ51EgveO+7Mech+9Hwe
SjWOB6KqFharkgUXeUQ5U32puQ9wuFHebkK5EAdIKHfjcvOFjMsHgG5mZ0pPXvFsfV9WbOsmYpJ8
C5Egma1CtJrCuGJ5J/Oc9dTgd1WI49SVVgWuO+ZQa+NxWllrC4GzAXUVS6cbz5n2VpWRD8xvV4PO
UUUgiURCJ0ehRT31D0pyAJQYlOg2A4XW6muB3tbtiU2B1BQ0abfdi+/evHl7/9pnLdW3DNqbxogY
2iqKSjNDR240tcRk0FY5tL0aubpuqcitwC8aO/33kF9jMux6GEjGkfteHXiX9h/4UDTWyisqxjiL
4mWtDRmLbGYM1paqHg5/V1Ru0aVcF6FZoVgRLlaOFPnqTJUaGqsXb+niSFaFpRE6HGd1vJZ6UEam
Jb5f0uEz4BTjGKmvr278PDxiypO2NzZ7gk7yFV2m8rmVAbjHQXmax4VUgmcojVmeO+ey4YtiCgYd
zl1obCQxLXPKxGCoTqlh8gMs14sb4lawZ2ZLMjwEQ6HYGgnQndiI6ZUehK3XROJxiFN5M3VlFd8Z
QZQI8StRW1vCWVaeWiFer+MyL49PbNAfweezS9+KgqznzrDOGoFlDnqqyqrM6KoUaXmL4CYX919C
eNL7OjaEGDzgkJDtF1BZd/vjXZdjLswHopmWKeCtYY/grrfpEda+VZ18YQKaRu4A43Hx0XILPpvd
C/WTGqPeX7rqqet6j2KsgeJMXVPRTpWH1P9kOFvceP6FrWcXN1s9jRbY80z0nCVCls0x2aun7vTM
i9bpcYxyOQcK68idCmbrXRV4FPj17ml19Yb3q/ybe1QObmTPymb1kei8qfpuW9MNMY5itnbG8B10
TUFtaZx5iWh4P7oSMZY2bRZ20K3RbOpmeW/doue452rTqrxzqEoQTFcan87O43739ga9Gh5mqvHv
M81lE8QUi/s6OXzS1+CxY26uLRov46zksk7EfOxxm2R4+nB+r1dFocobuoYPvalx26kPZP19c1Fy
mF9qyT3Q4c+D47FT27zujnbLl2TW8VA1GGJU1RNTMSlK2gI3YB31QpQyzLH5SoKbAi3ydHFTxdRL
UQrwjpPJBYll2ZdffqVLAKPHxWQNqeDgxwcHnf10UYooUl1sSKQZrq4w88Hyx1/4veX2fu0TK6Nd
oXTFrzT7KjS2zZQ/S9umZ4t6C4vXqtyC3ROdAvbs6/MxiEgfh92r6Z8SM5lcbOaXFQm8h3/69Mun
X30VZ24Xxe20PC/E3RBViBKJ7YM5RFRDwd84uaJHmd73USOe7og10mTEDjTnWttmscPDUuTQ/EkL
HG+dj7M1BRBqn9LoAsEN0rliSvhEcyEQ34ZJU6l6antZIxxq5h/I/SR+TY8LUt8u5uk6uZwvbhIg
MMBpSpYV7+JoFPhxpnE3ZtOZEVwWy2KepatTxyqtsRzCmCKAy6cb1HPGmtTMkkuvzUmbsm8F30FH
h5aeIpdgzqBP3FSZM5yS7VEDZflwMltUhQs0j7uC2s19fnnTpiNlUAI8GuhjqI217t9wzOXEGjc3
DXKlqrvkOl+Vi41UAOvutWeLrVYvo8fV4qp4jDyP14vH+WPeOsXtOsh4e7tFMJ6u6IbZKBD8zytQ
rjwLk7b/OWVx69i7jGFTm1WxdzlTmHfJOi4LMbiV3gMhhd4YEyCrr23IQZc3gfRz+nfOxSYqxwdI
UN4s983c9e2M9L1x9qnzbiCx0zu8RwUiTFfqMlXZsmFFvi9RapJSdou7aZwsqVteMzG2D1UVLRNn
MOWc2Hk5NQ9pAgJLzAUWNpc320635SnbAXngel6f/Lna43Ijs3l5c8xjOGkJ79xyEHj+SO1nO9+b
19Y0lf52vRLTYZvJpvSN0pNHSbhj0WM7aoA30NRILndFs1RJwCxWbR57bJNOgua9NJgZe7u/OcRg
PUxRS8medZ6bsyqWaT9pPlT4W8jqBLxGuw8zU30FUNol/rHrXoWk5AKv2YtcvXsUQYD+DNh3VbAu
iCWPG6icUmRKVfsWPg0GUYqYJwdMQB3E6kGHBooao0mjalFjJuhqP+05Jpti2MWvPiYWR9QFuGkF
7Qb65KWNRPg0wJQYwPFBGPV85lWhy91aif/SJFUCiKa3j1hsVlSg5ajk4Mn20KV2T8S3lPYpziB2
IMnC1U/LDVeMroyt22buW6I7gyftFrEeL9a9Xv9O008wst1WJaCd6zPluBydtPXczqXHWdtbNcTS
ynjjhAN+u7NSyvQp1sbtZw6oKHrwRN1iVy7kiAVM3+UgYdX44BzMxWANwgMyvlApiqW4gHiSAQRt
2vhwljFP4YdPcKff4CkNAFvoPpv7rRZQwpzB7almnGfGr4gVAe16dN9gdOm8Bzt2PSaC5bG5ddBv
TzFEv3vGSNPwGjdc51YhCG76Yl5K1WcpdTQ8Bmn2TA9OGgYr2iTcn1Y0N5l2N26aIroqdHfUFrfX
KKJX5205tC/aq2guP15aMA6AlBardXYgE9eJjEZb8TuJNO6YhrhqvOlvu/I/UICFzdzak4jlAtzG
HI1LM55x2Gz6OMLr6kwGhISySRCoTnyCzV+PmO4eeRN+L/G7RUhoqEhiJz/fT2Un4yXvMAWudDGv
iIO1KOpkQ8+5JEKm4e4a2LmgJueRDu9kXGOtktO7oOKOuKo48MWimG5RvkldkeceqoYH3htKFtHN
9fY0WKLhhLrN8G3zLK/WDusLrJUmdI2f7r9qnL3lfsQ0IVxITWrDQXDpXQaJJjpcnRhTVDmIBsGt
veVeZyw16xfopjpOvcioOvYha8hdXVqbbus9qX7YIAmcs54kf+sYgjbbM9WCHPetl/NKxfahYdi2
a6rNkmRR+/wlpNJTJTDPm0wpDNjhvzu5yFcGaqj7+U+PYRttdemGKchrfLU24+KYQ/pejvj2obsn
diAbddfqcHWppSN4uaiq8lR0yNQDDQ5vQLjLme8+ZPzcbSm0t2XXoQLzXM6Yy9AeRp95DcjSFjM8
HnGbJtfHtrH28LKPzBY0GGmdOF4d9Xb/l1wbftdbuyGbEJnAgFRh5MQ0W97vG2eOX315x3g7fzte
9jHnPWmj5FAaYARMfRh/+fp7xuFa+d35fVencbVzr1gRULmWKWqKxgGNAQOjnMlc9WJb2cZdlCyB
swDIk8lmBRS5ygsEHpNooVjdYhGLZCNTMv7M7I4BAvm1SkwsIqaxOoNswoMaHAtTBA3e0h5H3xU9
qYrmKGB6wExkV4PchPsesbwzEVfUMJEP/Nghj7wircMYhnMP21/AOGDpk+a20oLOM1lkHa0Vpuau
QzHFL1ASRIGb/9GhlPHJjbrOmvJi6o22QWsYfGRDU3GktJRtXrXkyXOSLXvtAbV9a/C9DM5XnvrF
MU5UYeFWzV/jPsAkeNwGU6pF43l3Sk5ttu1PWrgXV+paUd9aa/Csdw/8V3tVvt1Ga7G1v+1zJ3rb
q2ujSjuGndMRJyOPlGxXnDdyi6RgaSxkA8xS7auIsxH9tz6zfczF1Oxt524aEvFuu33HkdPKwqYW
O4jp5mppbDMQk+K0nDfM4pfl5LLmkHSgLmQ0Mw135rk4+k9mN1ufzLY+WEurQ3RQ+3bG3bv/e9bV
ZS2ZfR5Rb+g15j+YOQVzrkUzbDwLTNPwTrWqBt6fn/su2q2Lwz3q2+Veeiu97DVXuR6MgJrLYKb5
Ojc3vpvtNz4uxgXqBRzGbnU6GzUaUykmic1L3h/+asdPNqzZb25IlfBcabCsToF/kGFUvX3eaWqL
utUCGM4Jl+/uMD9oNgu/2LZW799PXlksjET541z3UONqUbdBRGfTHvbZX/XcPlJw1HXAXbY+c8t1
uXXPnimsvt/NfbfiWBae1r1h9KwEdhiQim+UKlkkT6u9dLOIc1AcNCC7OIPbsd4WI6GDfWzvJYwh
c52odWFt/v7i6NX7DzEtFwIQ4zQVLKIRy6iPqUIXdsF4Ha0vcCw/VqIeRmqDGnaWA/AeoMJiUrYW
e6wYAe8Ys7fBYsJrvd66z/dRMisbYjvoXHSvA+W9fNrItSvJaOBKywa500qVPEu9IYZwt9ioVhx+
iw3cBrzycOSHNDRUmFvZHNPEFl6MYwUAOedUaFc/bz0RAiUx1Uds+yBu+r9sIcrIK7LDWeNlXM60
SwhbmrM77dkNHOuec+0yjil9pgliL25oMbPA72qlY6XXm2S9WUY8m7USbHTUE/ewqNtsd+KiPMEN
eAtna1Zs+tlSQ/2cfrrIV9NX0DSsNst1xB4pLPOC//GiyDldkCDfVrUU8/tpTOisfUbvMWOz6JTV
C12s6ztvX+6/Me0W4Gqb916xdjO6LJVAcOZzPalRGtiaMhbEZK/36r0Jwk0raRxl0JgDhI3Pjtq6
9tBAWQ7YsOEuoTdOyxYXYlpAH4EzoU2+sS03tWa7pPINijYU1Pi4mzDvV2k2eGJqblBsff68ev3L
Z9/9IVoT4BWmjV7drqOejrhdOb7sjmHwQqypHfehxWzadAPabiy5cLRV1jN1yzy8fvPi9YdYFR5B
bnHZ3qprM6PoKBpEAK1ikXSq8eRmukWfpOUSLajwrzKFHhg8BIXpZqVxBxxkVZSbAveEbhqmumHy
Zg4Pjjlj37iAkQwHOeMoYC229vU0ywpvoWJBs8EsbBUaKYOty/V2n8lrWOvcOI/TOi0mchus511/
7mrYpnOrScXo+L2GYxwx30cFKBFj7q/8y33l3w68p5QdxrKHq16aPPTfqHeFVXd1pBL5BC1x5Brh
39rrtzt6HbS2vFtentvpo2Pn8hxKidiJ4Vy9NfrqMp9cInq5sTeYLRaXJpxL7os/Uu1QPALlh9VA
o5g9ebAancCuU9nJZgmgSwOpXK0Zf9mQUD63LpFDF+VdX18kfqA17cFpwmMYiBE0gCR9qfxdfc6z
yUOi88RmWGwEjINPQmy3veaYIgGPOnMCQcyNH+hqXWVqWRBKZuae5E9b9F7r3aV26PBa1IC1Q7qd
mUyrtZri0T1MEepesbSdOquUmme9NuQOO39Sw/0GYk5JqWQ/m8lgoBBvddq3vh4F3Qtarrc2bVhI
LnTfd7ec/zhVqZKh8cxbAeOc8bEo5fjgxO/RgyUxhXXS1SYU9cJk79LJ5yt7NMGxEKk8YQAeG7W5
wlaUmyU2AUfjqXycGzZPwI6/4lDc2KQ+UdN5hkOyXNexaVj7oQ5li2pgzKa4iuDiHGDcqO/vbjib
JuSCDNFHXKj1Ux/ulkY9xQN5uJJwCbhvzs0gGKqP6+l9GvRN/CViizzDDhkGvMZXCe1TSoFrAtML
i5tg194eB0Z5qdQbuz/a7QBFLPNkqQ0KAjkyqMJNyGpFu+/ytr5/Jjig3nrkNPi7RdlmxcRksljx
FpDmBLrGKwNQa01kl21ay+Vm/RjNUmc3S14g2iOSp9pKSI4uJko/gQwbvCWZjUvSmG5ambNu39eU
tRwmzckbdeI8VA4ae973IueL0algVK1njEcF24AhAm6n9deKkKYK1fFJMrnbzTsphwIG6iIdh2Wa
z/l1XCoANRY3MPY8TFODhbNHF73YcA3L6hbTVsWIxl+wJpYYLXoOtnhxSf110PnAgUsNlLvD7tA+
8+/l9PcAwrYR3KrJYomL4lrCpF0yQIMaqhR/yLX3FcbbRiRU2uZk4xCIFvHrEtc2lyY020knpkB1
z/muI5h0W9+Y0cCD5IZkP/ZY410Pe1iBk+WQa8lVeX6xbrMT5sIMHmGjotvlGA6HtP+JAUwu+ZbN
tYfjY1OIQ/w1NPg1EW6KhOPBl6MTtJWlNKZJ2k/wbzRyvFcvlx2FVu/8nqupQyyteJL8e+JufCXb
t9qfnCSPuCNpS7fryh0zLirjLVD8Wcdf1B/tXNTWoT896ezh2VlVDs1aN3qtZsezQkSd4dXp4xZ4
YksVDxjJZw13JQaapPugb0apdnWdOI1v0WRvhTLatfF2ujfx9uAw53kyISFrcZXYnk8B3busis10
ode2FtdWfZ6Gr2g1/J77AxkuzjK0mN1MKn7HDT5iAxTK2R+TyXkGLG6LiVhEegKZDcLQhs7UFgh4
tnsF4jrvCAXUGuW7Cv10Xoiu8UK0xBPEeLFcV21aiulis05YG8beA6hkw3B8QO8DGJjarOmrWD/w
1S34qalQJAl1OQWiH98ypLbmvR2CnXk7Lp2Yl+X8mkU7441mQh/avlQXBQKfxOQ8wUXenNp6xaX7
rZgyv3319oXrUXUtSMC1PeV6xeb31458bufuOJV5En86/zNxCv7sNYA+4Bs/AB1bujkRu/ZwZRzP
CLSCox7Voi2qh+EHN3O8AAQB0U2Gm7xcBw+8kVdzqbwBcsXrH332tr3Z+fDNN6g1WPlBUyES7wqN
L3I61AOPdofS9umOJ9Sb19zJ1RTUOBRGu4JVEP+n3tT7OQrHVqrftI/mdG97smPYZFYZpH659a30
1sbOF1sUerkHYCVRoBxlnj7LyC7xlQiOgebbZx9+4fs48cWfb2/SG/de4a/k2l7BzCal/a221XyC
ycWPRiH6wxw2RvnKV/JN8rlq5XgEfdXcVTrKwmfQJrJRWUkE8MVpDtNs1GBwyKDf5Ad5gU6Mjf90
s2aAXhRB/sndOWVuvSuKS1BEgbYUsqyR4eNWuzufhe2z71ZjGXHG2aZF3Ip7IsXtIUsyWjG/LleL
+XEK3XNq4hGmf97uqJimJvyg1AamDJ9j9+MWl0MmBTVKbvWHbD1/9Ryn1WR/NkYPs0N4/9fvP7z4
/t2bNx/SkxYH6R0STKuj9p7+lDq9x6tiSEdOlj58z319R319mPadnqvmcDdvEX0zo5JJ9feJQLNt
uWnP18s9Shuu7wgp0fCk20ZfWuaRvyZo58XRB9uUXguaQLNc2hBGt9vrxBVvLeTFrxbTKeQVBMHj
ylrmpLFfb3v1NZzRi0U7hkOZa7w/he6z34P8ymhHOylCbxKaP2pSfS9N+1YetPUW9Oz58xfv99xD
ruWF7mEcfHDEwP3zqlhfQGctX3u+z/3F4gr2NjgktwWSvQ24wS/efP/C4QNb9350NYMKu6jw23ev
fvmieyJuSl5TsqHud2EKZ8V1XZtVJvCrMwfBfDkpJoq4ObYe2MDZjIFrVazA+OI4twG2r2wsvw4F
Qh5T8hJMK1iB4F1R6klJwC8gYtMZXz//eSPP9HpA2QTIC9FPN9WGgzka+zvXEDmOp+7sYHNn1Boh
B+JvfuWWatxhBJPoJvlWzfgqohnSYNK2TRB76whiHng5XUGK6oLfz/eYGjyP3hR2HhablfjQxQUT
ucyYgXuSt3S5xY5GEzETLDbXa9yLUWL8TcLOt84VI3gcQnLW+ns6+5oeTLx+dec8vyzGEh+B2tA9
30fQyrPy9pDukvxQNUj9Beknl0WxPPxim6ROdHI5xnO/XGue/PjpVwcHvRErLdY3i2Sa31WxZaUL
1seNaz+jAVQ1iMM5rxIeOiQQrrEQ8fV++W15tbkiIRNv6Ljjamk8qlXV5kqEZvHVt3fe/AwVy9Ab
Ty4YMIqvV07nBKPP7d6MzSLQt4w6QR8HKOgfvEZ8F6S2dtfGT6cnD1Z5mRM9Y42ziF8fe0MgA8+l
CZSxWUuUBRaDMjGx5auFvjTKHPW8DrOEPzeYtlF1NFV0OneBdbSeeyB0G9fh+To7nR/DldfUcdKK
zV0bzLfJbCbGcW1bvFnLjBhK0plRYmMFCRAfrIe5U1w6ZD/AQoNqCR+iBFytaQwqVJMAmKTTJpXq
2huXyJg9gNQRLH30vYUybn3C8jpFfyhd91EyEBkfyKMIUxKx3jjDjS7tRkjZDEwszWUi4XMOPD1u
9dGT3h/K/jxua25PAXWZiVRUCt+5KvI5W2YSg2F/542cP/l5Xs5jM20J4VDnc3QPTWNNRVK2s5e0
KSYmLm1uYCSPM/xSnER1MN5yiXFpwKWSfO3RuMwWR0tzx+gwfX94SNGHAyy2WrdwT4Kb0NXdkkPk
COI24kE0rvYmtKqptJ+kjldf7GnF5PS8/5iu0NpeQEe2CnVYainMIdVXd2OZt7EpFdukD+Rli7b2
ZdZLqnK9ySUsN7v0GPMuO9nVxWIzm8ZIW/BaUYAntO+E7GXDECZ0rYZY+Goz9wOC1zWV1SVYf1UU
ampJ+9IToej/K1wZ8xUR/kuG476JP0uEvTK0prZcPMaMI3rfFHoqRyqyhs6s/F4xZMC8JOHutKB5
K5wae8PYrnOoCGpRXrFtR8v+R5GlB4UW3iM2xI4zyLPS4EiEVPc5vLNWWYSket7OpkW1tq1GOOkE
Aszo/8eRIm8I9O3rQyMUJQPuTstFGsjTIkW0M4m9lAJr4KjPzowngbqet+YGiLpd1P3KqGrAk3Wh
FzmtsvXTwfpJL/l6C09s4+G8oNVlufQETTE7QG3FdD91wW6NG7cke092GgQ9Otgqu+8sXnc6O0vv
vwRqc8wbRCCfd6gJtzme7D4mnTc/5h3UdTYvYIMlb176ya8EV4l/wZZgu1qlEwg5HHfMKdSYBYRV
UCMNV3Hxw/sX79ITl8VRTZvbfoIoFrPfQ3eypb3Xz6CXQVsx9O6dOhOn5lQF4LSej2o1SfQReMNa
kfoglMAtq8nxiP5jAPkGKb++0b/0X1P1Fr+HariZs4M+6ms4PLx5H+m0x0xjNaoIkFG3+km03kwr
7ichTnMkEmUv0nx4p98YYbJx4w7v6GG6xuaqISBsp6VZHw69zobntNVEQ8lxfqcmi0URVGbRnKlX
Av9NBH+Rw32FGMM5ZAN+OeTMZ1h7XuEQT9qb9DOlBA59to8X/S7gaaam3xN7Om59zV1VAW8PpGkJ
5uZ4FDaycV+t07BAYjmW4s7rltq+NVCuOM/xwcmQ5K7Z8iInOUaRbegjwzSO01571AUPYEfs8CwS
XHfcBYxlLxa5QwK1ahhDNI0jv9f5uDr6ZxqZmYjuY3X0P/7JZ5/VkZTreL5TP5wv9AygbBgjLAez
4rqY+TauVZJf5+UMs6K2sDY8ej9QvqlkCvUOTfjKuqPUAdHiYZ/CkL7GbMQ3gevsH364QbhXPlC8
/yy5C7deHsp9dpN1ZXePcL+ciSmGzvbDqhUT86rToTLUFywDrdn66I9pzcZLbFwGNp5P8xlJZhDs
EVbu4+bof9GqPvhR8nhTrR6flvPHdH7o+lBdHGkQr1qY25+9f/PDu+cv3v+MJ9pZev0Tmp8/+9L8
+s2sPDVE8S3Ntoye5NlIsGcvQpc2Gq5Z3Rf9q14uHC9jWUQDWraZzRSe28T78lHkTAY2aMxX5yzc
dgVHrBEUm6TXp/0fB1uNbzo3kJLOi3mxYtUB64S+ltl7OvyxUTEvqbFKQTi5oTCCJd7HcRWiykiY
Z/ttIPwvcEek9UQ4BKmJrr8kqGGOWAcFUW3Y2fXK4Q7XmNPrHO6FO2ALwxzRLFZ6/6riHsGLvGXt
nPnGJYdtv53cI+JgNq9vgcT2bGa31GZt7bu3BoKWsRybeoMX2WosttQ+f90qRrVW7c9mS0McasdB
oBUwtSVbwrlUnsKuxmHnarHvsTcX3rsu6Vj92YkP63FN/roPq8fCgDLz1VmyZkGsGPtiHjboSkY5
Cu12pRz2LZdy1qIWMBdjtrSbLPp1CbUEZMPtj3lni8VlUKN5epMVaos3Hy4kox74g2YbrKrNm7q1
Ap8S4qhwJUfeosyYEzbXvcrLuTWKL8/ibOsLH5GBJ63LWEnODNLFsjdKvJ9/M4/ElWcQo5oph/x4
KB5JWTevJmXZxUIoDkYtN/oFFRYJJFJlOC+G0wLkTSOuMjlN+Mu0UJs1PRmUQH1lnfZy8lYQnoDe
7Hd41+CxvYin0Qdv9J/SXXE8dfvrRLTl+FveiWjydazlLEkf48DrzBTWukxo++7PXrz+8O6vf6av
yDoyTu3Lw3OV9eym+Hh99M8dgWA5Pf14c3T9Lz77DFIaw0vmE45YSVO+OT/HQcRn2ttvf953HVq/
5eRiJQ605siXWgHJ4EoHHXGXQtI4n04XjBWR8Tlo7k8maK98HJqAwFlXjtaZLodEgXcqSQcDGkLa
eIzMGSLzsFutEegJD3dduescdunYpCL8k1mhIEw1argoZkuUxzMhxu3Ojc7BVOcAx7TEkhp2e95o
reOCUarpjv5tWuHVDu8S6Ugz0wwDk9F8/51X0WQxPyvPgT4jf9UXB/mNGeMoTJkZn4sCLFmWsw2t
p7rb19rFt9PTV2xMCKyDlIqKaWGqZCZe8RbHbHpad7Ff93w49mrvuZKZq1C8XzXQZ6jWO1aQbYtE
rjWfNG9rdbGp6DhzNJ7MinxONKamUWewjFAZ1lZbX2vesok9NocozaYI82ddO+Fvn6DH9rG00R9f
FLLDiLm0y7JwW/UM2F1K5LxaLJf8CjW/S169SSb5cs2oCcPAQls9MYjL4JLDv4nwzsanueNCQ6Wp
lxEXOnVa8sbR+ohoa4mUYt08fwCWGfpaaEq38egj9USwMfn7sNpUWC2tJaNjUYzKfM3ZjdiilYvh
B/ZFz2e/wmW/EWXzZjgr50XzK+2rrPsNMY6uvwbZqzcDO9mJmNAmi7OzXrcXo3vab+4+z3gBiMiU
yux2HDmepswDCgOzMza8yEgq7JMLIxGYU4PtupvfLAFyDaOsYJ+FaFmEfRdAATlo/Fnd2yGNCIKK
7XUnHDIPlBaKuauOlgoBz4H/QGlnrGxqVc4LRevPNGtPrSG7ONZHDeduMR9XzU/31esP0Jp+9+Ld
uzfvvqGrNhVKHqLWXlvJs9mmcr1E14iDNF4uqvUVDuwrWWhsr8z02fFvoGFKtmx9CjJQZDl/utyZ
0iE8SI6OjuSxbbDR1z5D2O8KERXYPkw+8fBWWvK0mORShiGHiivmWfn1AnGfFsQfjEW/DT6/ppP3
dKZC3ANBWIZhJLDYinUBWzLinCvwvGoxV30Mb7m96M70cqUd7/aG4/VNp7kfvX1o51UJlUoP4agg
dWXrm0gZnlm2w//251ps3/UK18o0iXWqLhY3c3Nf82T3pVnTrQ2YNX1GEtyEz7Qf5sXtsphA0Ww2
vmG0tHR0g5FHvLpPQ63iByUGx+egqGuY06GfT6344Lj2a1eGLEf0Dcva0p0maotXBbY3X0SOn55E
xPWgyFj/QGadMNbq4Bl8PCemd1FOp8V8LOcVdXlyaYQgNU4hERlGQZKUDBJ9bxKNcimaHaQdlyfQ
qZ6NRTjmWxhdp+yaUEN0ySJaCaFHy2RwmDxxV7fUrnqUYXTz09PaGJx+iJhheHymR4FTvb2FopNG
FUJCd8NbKMczP0atdQyjpRpI5lscGUD/O2c7qt7RznSMsTefbvJrCMydtf1lBWgS28WQkPr48Zav
I9Ap6/X3493R//53fB3p0FeQnsQEuKPr1ihZ5TAQwVeRdKY0a9fFbLFkIES8gtB1Q97+RAGygdED
FVpeEve5kxMi/83dAHc4VFFtTo0WusPoCFigolLTQXSJSg6gWaB+ww6CeCdCtqgDD5WCjD/4RkKg
XuUsglXqkycfJRqZNTqyd4KOUysejuf2ns1wjJ1s0kt+sZhBwPqrVXFZzHi8XEnVT54eHHw5eHrw
5IsOpmo8Njd/aEzSJ8Mvh0+fpp0Oa8No3FrzWGai03kABSKLrPZp9HMnFDv0kHzZhU7evcq5SpTU
FGXXO616+GxW5pXolFiDzTnAeLHG5kcq5ei4M8V0pjOj4+jzG8Dhb1PNQNck/et3OA3RIdov1eFv
lekZVKoFB3ASPfdqmkBxapYXGeEsSFWl/IQxwo9+tALa0OUtLnXzRQq7UUMXUon0nqvhP0fyoS/y
S0qXa2BhJpxhjKs8GNtIvkp7qS4XKlnejZzlS/s4n2lXAcGJz9d8XZ6WJL/cqdG5CkeDp8MDVgfn
yRmd0TVd9SEU4HTPsRTe8B9gfGzOimudE5TctKPakhQN8ACkLdNrPgkmi9mMDoLW5CuAqhd+srZ+
AawShmzGAVU/B7nrtTL+aqlaLaGm31rmk47HdCRB/az/41nWrCOTWF/nU/U5jOY3Dokjk8kpd1nO
ZqnD8bxySMTfI87llHq5WF0W05ebuTbolTrjRMQ8GTn5pPTvDPUovfuD1u2Rmhol08h8djrwbFnK
7ku9nPXnoDmqQoSzoMVX8/K5fq8HYjOP6mSn7bfQ3rAKPI2VcZKDToDj7bXMwNKIrHF1Pb+ZpOFa
MSACUkbvr+e/ev58cUWS55Rd7/yym5Wz0l5ZSkHhlqIsQESb5ZSRDc3jFqLqnm0w3Pa+cro/RQ/M
kykx6aUEgHkMReNA354g2z17+0qmEwk7ptM0jKzRXSOPDNH8qr8eaR6v3HtOcos1y2kep9Rz7nAS
L8Vd5BzuPltx3NQtJSSHU8TKra+IG6exIn4Op+gHIxumba3VOZxiUK8g0FUxX6Rtk+Hm8Yuu8puJ
Oy9Bi04Op9wSpvpjdduu0ki5IIdTdjNvlA7KNnI4pcfPOA4KzZ7lAHXp3KSNglxuBauCZcQlyZZj
Op/SeAVhrpYa0nDSojUEpZcrPghXxZbSXja3OCDVcqhpljO64CJvGq8gkjHc7KAkyDji8JtPxXK9
YhscnXvZ6vprH+apWWO7vZhvrvgFO43krxOdEia0YxprwSa6bJbBzYL9YwpoopM9n981mYjJjkQ3
r39QB3n987mylBHrhk8QJDz/pphXPi2ZvHWiU+LnOR1whomkQQk/0Sn1F8LCF6sXt+U6LOUnuvTG
aCMtE6qJLmPA+/m4Jbsm+puBjVKi62sT3QJAZeOLQhopUCe6VEcS1zhtWQtJdBuo46WlzQacRK9T
DFvRsg800c2v4BjxUZtEv8CWBjTRzU9cu7wSmJZm/joxKAKREbfFNFbEJgaF3MOjUSg8N7wTIywQ
Y/cGkCdtWzxOdCUK2om4TkYL2ES3S60r0VgGdw28nM7818y1nC8368Fis4ZLDN75iGpUGF7sFpuM
WLuIMdLpZnkWiE02/1DV4yOTyRUwqJ+v3sREIKecZnL5DSYiLBcWM5lc6enb55KYbilXZ3Llu/W0
WTQs6WSKFn35bbq7KGXyJsh9skn9wr6OexTk9U6Vqhwzs4v0PqjFyevLZWOTcXxTTlmQb6khktc9
iXLcvperNLZ2JnFkc4VEXF1BTwHpAEb7ye3V7PHF+mqW1PcBIWlK2IOmuV3KSqVjZI2aA+L0inC6
u1r5eZjdy490V5jIb7ZmR7qT/bXRdKTx7HW6y68qIrHYxtRCmh5cTGeL4F78IKFPbBZBt60kg45i
upmQtJMKNAtc/yEu0W/6c84K8+S6tFFbHKCp9oWgJmKrgLv8Tb6ap5H8QyRQt0Y2k3sv105GC6Ix
m8GXlMxg0mghN4MnlxRrONA0VscpJ+nBebW10HmkEN+sY2RjhxVevT98++aHD2l7Ac3gF3nx7t32
IsjgFrmrmGzai0iGmtR+1+t8/M3Rv3bscv5uMy/XRJkff3v0f/5EjHPkaYx137AI28zWbFH6lz9Q
zsHR998lcq3os47b+if/YjOtYEUKpKoNIOQYhFkVBnQjgaA+7HQgm04TVi/QTQNAFoBvZu3xu0VV
Jd/lN7PibtjxjX3MXwtrAbwqImbBAvz/wJjNPB0ecX++oH99/aY1ywTS2sUq+7M/7XXUthJ72jGu
lAww6LhY+cVggJY+S7cVxNDYa6UuOVvMz7MnbYWQSiVo6uwz/V9igWAVQ4s0tAzHPP2wFwiN+FcF
Rw6H1hUxERm4Y3OKGC8ag6CcX+ezcmq7tZLoFzUs83pB1WB1nwwP1AQ4l5eFcipelRagXxZ4mCS/
gE6XYS3y2WSD99uOOkRN7+b5VTlh7A99CURPLvLVVN97y7U+fUDLJN1BDm6PapEQ0mU1Sp7TX8lo
dJg8uP1J8vf032f832/pv8cPbp8eDOjvH798eSK/Xxwc4MvLly+/PaFqIv/jbE8OJN+TA8r58qQz
nhXndHaawNXZwe3BT/oJ/fcZ/3faMzl03igLLwBlfHqALD9+oZBy9OUr/oJO1d/QL3xFx+qv3A18
ln5Qgm2IlptuP7BKNcTWfVgNHlbdHgxilW5ni5te3xDxRXl+EQ0win2KrP2Es2A1vdF0oj6HeHz6
WmwE81vtw0m8dxcAPK/BnNzJhBuWV6ZTzoIqVtD5iKWx3VfHf/uwOklpqFt9GW32VLGevJZoLgC/
6fbG/aBjd75oB/mh9bSc8285rbN8de64EcC3JWOsisXp33nvtocJbSebNBTbQh87sYSPKgghasZu
h/Tg9uHB0yNMgT56xp3GY8W+dIvVyDpgICTgZP4CMKIZBtQ3eZwh93q/l02lkUn1SKFt75lXetaV
5iACJLL+GshPz8Sy69lPiqElZdOACngCu85Xh139bSwv+Rm40zC61BCmtjFi1XezwhyAjCSQr93A
Wzu6L2AS7gjsly2DsJ2mg2J3nxXFWxFAGJIa54RAmtPmqwcj987AULTFvlOnsDZclJEN9XvHoo5c
s1s99QA8nFJJvsBpXs2ADzznMBHZ7bQ03o6IN6xVG5RXg6sgDdFccVm+KqdNW1JQKfXqu8U5nU2Z
1tUPeulMfm9PY1Snen+CNvP2KXJiDdvej4VI2e/HGbJrbTFzB7O9f5u57aH0rCMASvPzWTFG/3id
FVpe2jDo98e3tc8doLX7SbfrMGUlEOocIArSrKeOIIKADy8/+2ddz2M8oRs0R+O4h1xGOtF12eVo
RVQiEBoNMB5NEYdi3mPF7ZJIBU67mf+J9kiVaf5eEPO+Wc2chFSGtjcf8lPBvW+pQffTYQitw4mY
+crHTdQYQRXEWeNtcVkul/zzwM92llODNptYcHOuer6wpdDKhORjExQytLA0K90gB8k5xPYrp8YG
cTTqOmN0mIRZaA9O3hjcyugD20tb1g1NU+fuRSbL2DWz+Dq0I4vXe2hjJtRNBQi9nC04lG3whiAv
x1c0G1Xmpg+N2UqfQBInEL0blWzM6pSxaoimY2GgbauBnUVZja9blgIyWixOQl9ZEoM5ahwH6g3g
rM7U/7WfmCzlXBsTcyt5g62yrqqrYFw2Zb+o+0AYu4a0LfGinF4+OjS9CY276kwRQK383OGQvPJg
7wwgOEgfmZbjgX+VXqiOLJS9bJM9L4acFLBR2EMvTiZDUIepGnnC4mPs4LYVdrf4I2OQV3v2xqml
YWGsDYEH1EF9G209uKrOFeFAFx93QSjkwI1X1maQeCH97VsJO2BFhtq7N3l1iyYbdsjOXDcWQjar
srGMNldFhxLJV6hocJWvGAcaagGelIqkTmOqObvrBlcPjyN6s9cUYlE/DV6aNxNlm+cW9WtgsY+v
ZjQB1QSzGOudcb2gSiKpyrR/74VXK6Z/TAJwR+NP4o5J2QcF3a6DDoSjwJrVCPqgRxvPWds8GPL6
fzIPprF/pHnQ6hvz4NF9OBGuj0Gc53hDkOy/9wD87VSRLLf8pGXcsXyfyov2WS5Tz97g/XbMhk0l
Pi/ZFh0cMqKcy2Iz0+fFFq8DVlp4C9AIZWTzulCU3fcyvlESC5bj1R/MQJ12/JMQhfIeLL37sBrR
/ymuQ7ZtjDsnGa7vdB1Vvzgq191VwixH3UijwNaN9ElcWK91qw2L11CNS5YtBKxrKyf/qBnfUIgR
gbXhHw0lZxdIUOX5HGBxvLUer4t8NV3czOPCji/umz5vkYtETgkzcvwn7Y8cV5ENtrutYFA/soPa
2qNCQ2s064tDG7plzfG0bUC69J80IrctQ/ttlKHsfCdVQHexhTLuPdexBQv7HsoOnzbb4cn7CU5w
4QnlnhBW0ZGv8ybnMhVsuSN2mgwrfmH080g30Wq/5vRmDOIn3W3hYs7NU8bewrski6mz22vOmYJX
M5OPAgBVG7ApTmeAPpohRnSVoO9t9Qn+ZFihBNhhIK4AggEKnK+TL0I4VKMU0VJQEU8qgcdy1SZ0
S7tJ+9YX7zDdrM8GX6XbjkdH4bJ3bfXE6JwslrEp8XPh43hazJi2woKD+Bx3HDhJo7rxLnOuhN8J
dVHqoJl+/VOo/HSWD7tPhgfdelBdHlT3p984s+SXrwmZu5c1uQWnRdQccZqULXfobL9+4y6EQJCH
ztj8HNj/mqysINCboD+HZs4iOpXuw+EXZ5AcwqWp8/aG5iminE+JFx4e9JoTZLHKAuK3NizV5oqu
m3fKg0KfTRcdK0iSuR+zF+YAKtAaMQmLidat9OPSLHXy498f/ZHxEDOuNB9/d/Tjzyz22PKuw641
jVAjXEYQanp1SM3Ox384+jfOUzv096L0/fgfj/7nH8tju5JXIk63yGJ4WUUDp8sO9OxGI61BnNjN
vvE2Lq3UL+V9Y8HPAWrEQct0ZQJBRXNe5ytV6X76S0/K8ADq3tH6RmJckfpR1IxOExRjWlbLWX6n
nWIfKzNfeE3QAWDqYJCAuWmD7+gO8BzUHQxQcbe/DbZDckR6A89bb4VMlON6bbgv7RAiS2focvbZ
VuUZoKrff1jb3DiV7FsRlqzbgiVCoudMHA/N2xVXnmRXVG85INFucVNMe8OkGz30uh+Mh7bxXVyc
aRVEVMYD/NfzxejXqiwnZj88b60OOdW599dbXtD4Zq8kxC9o/GEwCZ1/tgOv7EZaWRuQ9qqcqgEb
m7ujIYGquZMAKkMjPUQ6y/R+LzQYLrE3GAzvTyPKGDbAVbjkDlWxur5xGrhZytAw3pOf+MmNedNm
xNxABRDwiVuNHEwSxhV0EocSz9EcyTUEjL66cZMO4oq8H4bPLd2gf44G78xID/KmR1JD4/VuzOVU
0qghvc4MiIKyA4OZMniIuCJ3+g9sVQbAVZjcTA/xb746r/gPhlrwT2TljI6DJD+Ie7/tM8VVvpQY
kqEc1gsETkEGp/aBeWPGtFiV5+hKrzFcps0hcLKB2yHWnZkO1hGMGlgS+Icj8tVz7dHJemHwJZwA
ZSYQDDfd6fxMZwB6XurIHTy093oYdVQ+9auoXbfYm65N9HBN24a2WBd7DKwTgFwHDfkvD9umW15y
ozsIQGhtuEi6KZQYnG0RkRaU0lRi2DJ4Gw1RDkBz+IG8FaRrKrZzsUnIoiTtEJ3wdEZw42cb+a2Z
8dmM1buA2EJNoHcDidKWJzpILvQo6XpQaEqbBx2rFfCnGXyyQVfThWPe4EwrndvI3xhNXcyl7Ugn
mBbCakadfcB+1jc6TEOSKjrROPQvddLiqnu9OBKJ8+uBgowk3UNINMq6mUMSDYMLVF2/XPdYCAFx
x8uBTGBVB+W4XQcHt9HpUe6/Xy9u+V9Wag0nZ9LSqBv2rOM+ZaLuYLgwB2Aps140DLjPWkRX5omU
g8vJsQ+RSfSAgnGgCU45TLripVOPjQ0NGVYyyR5WPb4DiPYTJZxbCgiS8iXJw8HTLyu5LmQoLQK6
g7+j48d/jkf0C3CPbH9/olNiZyj585Z1pGUMlgtBo4rCgSrmlzegfVE3AeImYBWDgX7fWZ4kNOhI
mxWYhLCGTEBmXONQi0MxZQqgM4xhxRc2XBHH7QgISbVmtqgBqgKhqBm5Qyu+JcxYMzgUY/tnTNDN
TU5/93vQ3E0dRCgNt9HpGOFubMifpGsxwlT5SSHDKoieiE9Ry7xqdkFTAHE4pQP+RBjCNo6p5rzU
W8dsxOO38UHLzzEMqJCvs4XpcuU2siZv0PViMUN0XbEgoqXSQY1coWvB4VWAnSjD28KvZxzX2jzo
Sy4c7npypawoNQnBC6tj8MB2q7Ck0+VC84jUDJuhscaRxyedR0HK9DvijZUoiw4MYxPgyjKczzu6
RTFA3BXYsFtXJy7fMoV7n517SttibCqmTiuTCqjMoGm0ETuifaGz56jNroEnda/DWkuMIpNJfB99
1RwGNvTs/oNVqMoIqZWIztxK6lj4p/JtyDkzn0gVVNsSKecZtUVpahJnLBgzzmpp0WTbQ5W+kuc+
S+dbKZMPjQPzxKbni7E+8Km08yA5/JT/UTm2kofpqk5PdTdf57esjwDySfXJVbtbR3lRzU4yXTuz
KjrDElNCFpofGpwFkXFznyj9t7+z7J8/4UCm3huSwLfh2KS4ogI+DTdLDDlzGzF5zS6RI79cODIZ
PBmzGib0Lfzsss/5+uUp1vElxL+SyowwnoSXP7nDOYJmUIAl2o5j4FKDeQtWmM6cb+HX188w9JP0
4XKxBPU7G+SBopRz4E5IOQIj7WNes+6znGCSYiHOG9HcNDqECfnNk9vYcEg0J4L0jzadcIt8hiJp
E8/M/E8mvyvx7DfzyzmkDJQZaf+bb6A6cxbtLr5NI9MhfesEyjP6NMayIQoTUXVmlJ86lF7jsS71
VCIMRe9U0xyik2hCEgVV+E0wUNGhzPVxEwce37W/tqv41mvcvSjXzs4JmbvrG8/nrBVl2DDk2UPB
YbRSJcRj+Aj0k15vawWT9SafYe+xxxeiIgprlHsPfbdzv70eK87ylJl+GU671fQAFyXZu2bMLW3F
qM39XzOgywPdEigZRm2AyCd0yQ4hkHYNsTcG70VBaNSlkZEikpiAmKWWJUeifbzllF/KkWHwUNgU
5G/mKpEJ37H8y2I8s8GlQglGawkhGjnUq5xJaj93bUsogrfEp2nhTY6NORtX+MJ5F9LY33SCifEs
bLQAoAponcfjro1HVJ8atbFofXQ5R1O+LM0tFlHqAhefM5MeazY069H6j7XIibOGVG3fVOVJB1pG
ul3TCUbkTxKtoC5fLTdRpqFFtqs7LQ9BqmoA/dPP7HPzGV3Sb9yKdIcm7uN/4gcsSKSsv5rkxGCB
ffjxPx/90z8S7MQPDgYiTpScak2Y30nQ0wEcyxMuCZ2VxLQSrfuw03k2myXPc475e5FfK0I19Wex
Yh/D7LK464u7YY31PWPIeSRRF80KMnoNHjaCBxm2aBG8wXWZm7A+2h+mbPE1lWgeeFN2fEmBx4eg
4+xWKtvg53lVTrjHO50ervJb9LQsqsMnT78K7QLqVEEV1R+hU8JmXkhMRDBBp8zAKfP4q8C6ggNU
CFHbzjFqbjxEEXIPJd21hqeRy0zLcDDfDVDVuoJjSneiXIwR3dgv3ZdIAo3meZB4kqrWNwUC6WWx
4XD9NCauo27HkoPTDNge3uhDMaQRjAWuOeEI6hgOW0KtwIpHGolcFzhooBvuwxCz00VD0ls6Z8Iu
SAftYjDZ79VJv4K2/RSsxaZuxqxXA0gXXwV5113vxipGoOU1gOhmnRg6Zt9xKZZIOR9Ffs6nvG6T
mbE04fXyzkwn3zeH4Q4LD+biGvvDU2/xoNJK++Avtl5jjyXWxVCyyIZoFTtwkNRzaCMYyV7T6+5J
s5khAKrCWMTzaQHHJGeIA589hLQpJb5JDprUyaq3egD2Yn084kIncWGQgwh//6EOHywh2gvh5nX4
y3MXw79BXS7rDnep5a+gzcmiWj9jmFThtDXTdcSMZ5L3AzHnx5J5YONjx44bJ4zmRaFzIBHoBT23
rExQ9in4voSgB1oFcvkxyGnKBwLjWlSDxdkgH0gVn/OpMVgvBrzFBlTHwNkn+B883vlTZcO6E1kj
AnO1mVxotwQXl6nUCcNanjlHl3MU4JytLoCzCoz1zbJYTYppMa3HiwjK/lwkZ7PiVsEQ6PDmEMf5
PEEgXTnV6uhKZmm1N1Dj5WKuZSPhKeArDkzID+ZhDpYCILcrGp6ELNBZKa+8CHh7MEeJjnJoDmNn
g9BcwuZQSjjf4SIby6887Fc8z8X0uVLMC6ZLqgwblmQWtFeHSojn3nX+K7w5TaqsauPgE6I5lIx+
mpKnYd62DryYm7/rKRRhPXa2a6lHjYriB3ndE9M32gJEUGuem029TZ8BmWbL5mSpUCiVmVTFTF8+
1HQLWgLtLRaACfYJaqdAxb8rKCen9Ptg6Eaf4l2Q1Z0Ud8Wexf52JKleQyzTOkUs0x/7ikX+mduU
KmgPyyFCMsLtsoRSLeYU4Qu5zSudOfD9dPd8vt/+at9FMhyeSWeP2H1lTCnrmWrrjkM399o+PE3M
tNBiSN/7bCG/hnpi6iUIds56G+dYW/lCN9LH/3L0rxwrP7V0+vhfj/77vxUTv2lZTRbXxYrvKLQZ
TOAFfoO1gUnnxjlFTI/isa48Iz6D0yu5Xspb3rvi44ZzA/FZvwFk1gC0OyCrpqQTy2PVT/Dfl9QH
QAozuNfvAwCht+4Wo6/uYKBTMdBpcCza7mPfJVZT4dQCtgzGXM261SJL8prknX08ny1O2zro9I0d
rD4frm/XASBFN+yw7Sy/3lsdEWWF4ZetcZQ4NcbHwF0LgR7UdQC4KmJgBSDZ+Tq0+ZKvGjjFPjiK
2/2alTBAE/A0IME7mTePUXiSbyWLovV7vaktPDLbKLtTY7x4jR2uqnXak9gM0lW1kR9CfizXbGXG
Vk09Xz0kKgtohLOzOc/uYSOGmTd/zeut9huvCM15NDyNd0xezjRz5u6ndvaGF6WZ7jHxv2rEoXWz
6BuU+RmgH+hLpjwsmQadADVqR33TFsLIVjKKRqiKeFm6nQlD4ZhQvDIfr/ByoSwLf7dPiQmji9nt
c3jaYiUwKMkUxcPgunLKO800jnm3xnAAXD2mlf/wE7k5RBNnk+oa90ecyKKSllSDf7K6CkedggkL
fICbcaWUHGuPDf3gyQ8ax6ZYrZyYOpkJn0PT8YH+fSkt7eMpuS3wTiCcaFYdBp+tThweX/twm+PK
gUn0Cg01oeF50Mzpjdz1yYTDCHpsfof3X04Ul8b2WDSaHgSea38Gtvnd2h+ZcdYfnnQiPo/OVMEI
x+poA2bp7ffwJI64a2rtgeUFWF6xahDR8A27SD6X5EDJ8O7F2zfvPox/+PbVy5fNkm5qY0UM9/Gt
HE1ne0NogTlPNlkdHkSez31gFQM/ZpcvHvHP4G0hIJOzIn0boUnKD5InBxyU9Ojo6KdRbYhhgnYo
x+VICrcoQ/wn/4cHX0yNYUv56Ik03PK8VPoOc9tpzTaRvjh69v3b714k3715/uzDqzevkx9e/9Xr
N7963RfrqovFDccoInFHxAmG/KO7lZJm2uyM+OYggM8333yTbp0WQ98SRUDAZWQ1e3tMT/rTn/6U
Zof+L+UJ4na3z5Ht2nA4TEOqiDO/OO/rtcwro5vIJhiK1zBd1M40Qk2m421nnAGTOl/QeNz94QUB
3GOtS5wbJiiZbCMW0b0YEVk84FmvfYTH3R9evzh6++L5hxffJi+Onr94C9LZ57WUbV28Xkmrgb7S
a035RY3OYEMP6CCyz/cZgYpdoUQVk5V2Y+K1igfeUWwD77knNYPV0awHB72rKzmrRBgU+aR7rGRx
oryA8/DZ5EtDVpT0+qXiEbi9CzMYFzd2CgkPkoqmqjq7S37t3wp/zW4vdJea0J2FtleAUiVvps5r
KPOCKwtvpWL3WM1G3ai2psuoI2QMDFLqa760BqX4sytI5M43jkV5qCHmqMZD/Kd/T7ijyaxS4dEM
zKia68OLWxyvZHZgoONNl0x8W3aMYjYzJrSu8Zq4XkL+rmUaFmZ44UMbo8whp54JMj/mcJjldRHz
E5J719kspyGZ6l98992rt+9fvQ98RoGRgssNZSwn66ye5sNwNIrvT/Mkm73fVDuNF3Px+mcL2z6s
C08XVYEz3Sdyc8+LUrVeTO9N1QahTGZqiDgM1kzA8d0KUAv0ZVpJ2AZfNvlFBWJ+ekuxhcH4lWqH
lhp4wrXS+kNuxNjOCZQ8GbyMjk/6VOyTqVynfQyqIdIDBd+Tcpzhc1BE68dmllinD9oiCC5XxeQi
n5fVldNlRBR1No852Pm7q5MzlzibETa77/hrZsmzv3XH+Fan3EGWBtHUEP9k0l+zNSP2FW3/c7ef
P629UfTOosJHBTgRuGgnxdVyfWe1Wo0G78piNvUu2lyNmvOInoEno5/IdfTjf9sM/y9BCVLB
"""
import sys
import base64
import zlib


class DictImporter(object):
    def __init__(self, sources):
        self.sources = sources

    def find_module(self, fullname, path=None):
        if fullname == "argparse" and sys.version_info >= (2, 7):
            # we were generated with <python2.7 (which pulls in argparse)
            # but we are running now on a stdlib which has it, so use that.
            return None
        if fullname in self.sources:
            return self
        if fullname + '.__init__' in self.sources:
            return self
        return None

    def load_module(self, fullname):
        # print("load_module:", fullname)
        from types import ModuleType
        try:
            s = self.sources[fullname]
            is_pkg = False
        except KeyError:
            s = self.sources[fullname + '.__init__']
            is_pkg = True

        co = compile(s, fullname, 'exec')
        module = sys.modules.setdefault(fullname, ModuleType(fullname))
        module.__file__ = "%s/%s" % (__file__, fullname)
        module.__loader__ = self
        if is_pkg:
            module.__path__ = [fullname]

        do_exec(co, module.__dict__)  # noqa
        return sys.modules[fullname]

    def get_source(self, name):
        res = self.sources.get(name)
        if res is None:
            res = self.sources.get(name + '.__init__')
        return res


if __name__ == "__main__":
    if sys.version_info >= (3, 0):
        exec("def do_exec(co, loc): exec(co, loc)\n")
        import pickle
        sources = sources.encode("ascii")  # ensure bytes
        sources = pickle.loads(zlib.decompress(base64.decodebytes(sources)))
    else:
        import cPickle as pickle
        exec("def do_exec(co, loc): exec co in loc\n")
        sources = pickle.loads(zlib.decompress(base64.decodestring(sources)))

    importer = DictImporter(sources)
    sys.meta_path.insert(0, importer)
    entry = "import pytest; raise SystemExit(pytest.cmdline.main())"
    do_exec(entry, locals())  # noqa
| s-m-i-t-a/sales_flatpages | runtests.py | Python | bsd-3-clause | 228,439 | ["EPW"] | 0bb38163d8d23ba3b7e758297d40860ee5038ecaa962e5eed409c42bb39265ef |
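The `DictImporter` in the record above uses the legacy `find_module`/`load_module` import hooks. As an illustrative aside (not part of the original script), the same idea — serving modules from an in-memory source dict via `sys.meta_path` — can be sketched with the modern `importlib` API; the module name `inmem_demo` and its source are made up for the demonstration:

```python
# Minimal sketch of a meta_path importer serving modules from a dict,
# using importlib's find_spec/exec_module protocol (Python 3).
import importlib.abc
import importlib.util
import sys

# Hypothetical in-memory "filesystem": module name -> source text.
SOURCES = {"inmem_demo": "ANSWER = 41 + 1\n"}


class DictFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path=None, target=None):
        # Claim only the modules we actually have source for.
        if fullname in SOURCES:
            return importlib.util.spec_from_loader(fullname, self)
        return None

    def exec_module(self, module):
        # Compile the stored source and run it in the new module's namespace.
        code = compile(SOURCES[module.__name__], module.__name__, "exec")
        exec(code, module.__dict__)


sys.meta_path.insert(0, DictFinder())

import inmem_demo
print(inmem_demo.ANSWER)  # → 42
```

Unlike the legacy hooks, `exec_module` lets the import machinery create and register the module object itself, so the loader never touches `sys.modules` directly.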
"""
Simple GUI for Quantum ESPRESSO
"""
from PyQt5.QtWidgets import (QApplication, QCheckBox, QComboBox, QDialog,
                             QDialogButtonBox, QFormLayout, QGridLayout,
                             QGroupBox, QHBoxLayout, QLabel, QLineEdit, QMenu,
                             QMenuBar, QPushButton, QScrollArea, QSpinBox,
                             QTextEdit, QVBoxLayout, QWidget, QPlainTextEdit)
from PyQt5.QtCore import pyqtSlot, Qt
import sys
class Dialog(QDialog):
    def __init__(self, input_file):
        super(Dialog, self).__init__()
        self.input_file = input_file
        self.central_widget = QWidget()
        self.setWindowTitle("Quantum ESPRESSO Input Form")
        self.main_layout = QVBoxLayout(self.central_widget)
        # list of all associated group boxes
        self.group_boxes = []
        # inside of the main layout is a scroll area
        self.scroll_area = QScrollArea(self.central_widget)
        self.scroll_area.setWidgetResizable(True)
        self.main_layout.addWidget(self.scroll_area)
        # inside of the scroll area is another widget that will contain everything else
        self.boxes_widget = QWidget()
        self.boxes_layout = QVBoxLayout(self.boxes_widget)
        # create the box for basic information
        basic_box = self.create_box('basic')
        self.boxes_layout.addWidget(basic_box)
        self.scroll_area.setWidget(self.boxes_widget)
        self.setLayout(self.main_layout)
        # set the dimensions of the form
        # self.setGeometry(10, 10, 500, 500)

    def create_box(self, group_name):
        group_box = InputBox(group_name)
        group_box.initialize_widgets()
        group_box.setLayout(group_box.layout)
        self.group_boxes.append(group_box)
        self.boxes_layout.addWidget(group_box)
        group_box.update_visibility()
        # if the new group box is not visible, create the next one
        if not group_box.shown:
            self.create_box(group_box.next_group_box)
        return group_box

    def on_window_update(self):
        # print("Window Updating")
        for group_box in self.group_boxes:
            group_box.update_layout()
#not included:
#nat
#ntyp
#london, xdm
#ortho_para
#probably shouldn't include:
#title
#space_group, uniqueb, origin_choice, rhombohedral
#ion_positions
#not sure where to place:
#no_t_rev
#force_symmorphic
#use_all_frac
#one_atom_occupations
#q2sigma, ecfixed, qcutz
class InputBox(QGroupBox):
    """
    This class represents a collection of input widgets that
    correspond to a single type of input parameter
    """
    def __init__(self, group_name):
        self.group_name = group_name
        self.label = self.group_name + " Information"
        super(InputBox, self).__init__(self.label)
        self.layout = QFormLayout()
        self.layout.setFormAlignment(Qt.AlignLeft)
        # self.layout.setSpacing(0)
        self.widgets = []
        # NOTE: reads the module-level input_file global, not a constructor argument
        self.input_file = input_file
        # conditions under which this group box should be shown
        self.show_conditions = []
        self.shown = True

    def initialize_widgets(self):
        """
        Add GUI elements for each of the input parameters associated with self.group_name
        """
        # for w in self.widgets:
        #     self.layout.removeWidget(w)
        # reset any widgets
        # NOTE: need to check whether this clears memory correctly
        # self.widgets = []
        # self.layout = QFormLayout()
        # start with a fresh layout
        # self.clear_layout()
        # print("start of initialize_widgets")
        # print(self.window())
        if self.group_name == 'basic':
            self.create_basic_box()
        elif self.group_name == 'cell':
            self.create_cell_box()
        elif self.group_name == 'hubbard':
            self.create_hubbard_box()
            self.show_conditions.append([["GUI_exx_corr", "==", "dft+u"],
                                         "or", ["GUI_exx_corr", "==", "dft+u+j"]])
        elif self.group_name == 'system':
            self.create_system_box()
        elif self.group_name == 'vdw':
            self.create_vdw_box()
            self.show_conditions.append([[["vdw_corr", "==", "grimme-d2"], "or",
                                          ["vdw_corr", "==", "tkatchenko-scheffler"]], "or",
                                         ["vdw_corr", "==", "xdm"]])
        elif self.group_name == 'md':
            self.create_md_box()
            self.show_conditions.append(["calculation", "==", "md"])
        elif self.group_name == 'relaxation':
            self.create_ions_box()
            self.show_conditions.append(["calculation", "==", "relax"])
        elif self.group_name == 'cell dynamics':
            self.create_celld_box()
            self.show_conditions.append([[["calculation", "==", "relax"], "and",
                                          ["GUI_variable_cell", "==", 2]], "or",
                                         [["calculation", "==", "md"], "and",
                                          ["GUI_variable_cell", "==", 2]]])
        elif self.group_name == 'magnetization':
            self.create_magnetization_box()
            self.show_conditions.append([["nspin", "==", "2"],
                                         "or", ["nspin", "==", "4"]])
        elif self.group_name == 'noncollinear':
            self.create_noncollinear_box()
            self.show_conditions.append(["nspin", "==", "4"])
        elif self.group_name == 'efield':
            self.create_efield_box()
            self.show_conditions.append([["GUI_efield_type", "==", "tefield"],
                                         "or", ["GUI_efield_type", "==", "lefield"]])
        elif self.group_name == 'monopole':
            self.create_monopole_box()
            self.show_conditions.append(["GUI_charge_type", "==", "monopole"])
        elif self.group_name == 'kpoint':
            self.create_kpoint_box()
        elif self.group_name == 'electrons':
            self.create_electrons_box()
        elif self.group_name == 'print':
            self.create_print_box()
        else:
            raise LookupError('Group name not recognized: ' + str(self.group_name))

        self.apply_layout()
        self.update_layout()
        # print("end of initialize_widgets")
        # print(self.window())

    def update_visibility(self):
        # if this group should not be shown, hide it and then initialize the next box
should_show = self.check_show_conditions(self)
if should_show:
self.setVisible(True)
self.shown = True
else:
self.setVisible(False)
self.shown = False
#self.window().create_box(self.next_group_box)
def apply_layout(self):
for w in self.widgets:
try:
if w.label:
self.layout.addRow( w.label, w.widget )
else:
self.layout.addRow( w.widget )
except AttributeError: #legacy code - delete when possible
try: #check if the widget has a label
self.layout.addRow( w.label, w)
except AttributeError:
self.layout.addRow( w )
w.shown = True
def clear_layout(self):
"""
Remove all objects from layout
"""
for i in reversed( range( self.layout.count() ) ):
w = self.layout.itemAt(i).widget()
self.layout.removeWidget( w )
#w.setParent( None )
w.deleteLater()
self.widgets = []
def update_layout(self):
self.update_visibility()
for w in self.widgets:
should_show = self.check_show_conditions(w)
if should_show and not w.shown:
w.set_visible(True)
elif not should_show and w.shown:
w.set_visible(False)
def check_show_conditions(self, widget):
show = True
for condition in widget.show_conditions:
if condition[0] == "no_next_box": #show only if the next group box has not been initialized
if self.window() is not self: #confirm that the group_box has been initialized
for box in self.window().group_boxes:
if box.group_name == self.next_group_box:
show = False
else:
if self.evaluate_condition(condition) == False:
show = False
return show
    def evaluate_condition(self, condition):
        try:
            #evaluate a single condition triple: [input_name, operator, value]
            try:
                value = input_file.inputs[ condition[0] ]
            except KeyError:
                value = None
            if condition[1] == "==":
                return value == condition[2]
            elif condition[1] == "!=":
                return value != condition[2]
        except TypeError: #the condition is a nested list of conditions
            #evaluate each sub-condition, then combine with the connective
            c1 = self.evaluate_condition(condition[0])
            c2 = self.evaluate_condition(condition[2])
            if condition[1] == "or":
                return (c1 or c2)
            elif condition[1] == "and":
                return (c1 and c2)
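The nested condition lists used throughout this file, e.g. `[ ["nspin","==","2"], "or", ["nspin","==","4"] ]`, are evaluated recursively by evaluate_condition above, with missing inputs treated as None. A standalone sketch of the same recursion, assuming a plain dict in place of `input_file.inputs` and using an explicit type check instead of the TypeError dispatch:

```python
def eval_condition(condition, inputs):
    """Evaluate a show-condition: either a leaf [name, op, value] or a
    branch [cond_a, "and"/"or", cond_b], as used by InputBox above."""
    if isinstance(condition[0], str):
        value = inputs.get(condition[0])  # absent inputs compare as None
        if condition[1] == "==":
            return value == condition[2]
        elif condition[1] == "!=":
            return value != condition[2]
        raise ValueError("unknown operator: " + str(condition[1]))
    # Branch: recurse on both sides, then apply the connective.
    a = eval_condition(condition[0], inputs)
    b = eval_condition(condition[2], inputs)
    return (a and b) if condition[1] == "and" else (a or b)
```

For example, the 'cell dynamics' box's doubly nested condition evaluates each (calculation, GUI_variable_cell) pair before OR-ing the results.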
def create_basic_box(self):
group_box = self
#title
# widget = InputText( group_box, input_name="title" )
# widget.label = QLabel("Title:")
# widget.setToolTip('Enter a title for the calculation.\nThis has no impact on the results.')
# widget.textChanged.connect( widget.on_text_changed )
# self.widgets.append(widget)
#calculation
widget = InputField( group_box, "combo", label_name = "Calculation:", input_name = "calculation")
widget.add_combo_choice( "SCF (Self-Consistent Field)", "scf" )
widget.add_combo_choice( "NSCF (Non-Self-Consistent Field)", "nscf" ) #replace with maximum_iterations?
widget.add_combo_choice( "Bands", "bands" ) #how is this different from NSCF?
widget.add_combo_choice( "Geometry Relaxation", "relax" ) #note: includes vc-relax
widget.add_combo_choice( "Molecular Dynamics", "md" ) #note: includes vc-md
#GUI_charge_type
widget = InputField( group_box, "combo", label_name = "Charge Type:", input_name = "GUI_charge_type")
widget.add_combo_choice( "Neutral", "neutral" )
widget.add_combo_choice( "Charged (Counter With Homogenous Background)", "homogeneous" )
widget.add_combo_choice( "Charged (Counter With Charged Plate)", "monopole" )
#tot_charge
widget = InputField( group_box, "text", label_name = "System Charge:", input_name = "tot_charge")
widget.show_conditions.append( ["GUI_charge_type","!=","neutral"] )
#GUI_exx_corr (custom)
widget = InputField( group_box, "combo", label_name = "Exchange Correction:", input_name = "GUI_exx_corr")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "LDA+U", "dft+u" )
widget.add_combo_choice( "LDA+U+J", "dft+u+j" )
widget.add_combo_choice( "Hybrid Functional", "hybrid" )
#vdw_corr
widget = InputField( group_box, "combo", label_name = "Van der Waals Correction:", input_name = "vdw_corr")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "Grimme-D2", "grimme-d2" )
widget.add_combo_choice( "Tkatchenko-Scheffler", "tkatchenko-scheffler" )
widget.add_combo_choice( "XDM", "xdm" )
#nspin
widget = InputField( group_box, "combo", label_name = "Spin Polarization:", input_name = "nspin")
widget.add_combo_choice( "None", "1" )
widget.add_combo_choice( "Spin-Polarized", "2" )
widget.add_combo_choice( "Noncollinear Spin-Polarized", "4" )
#GUI_efield_type
widget = InputField( group_box, "combo", label_name = "Electric Field:", input_name = "GUI_efield_type")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "Saw-Like", "tefield" )
widget.add_combo_choice( "Homogeneous", "lefield" )
#widget = InputField( group_box, "button", label_name = " ", input_name = "Next")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'cell'
#--------------------------------------------------------#
# Cell Inputs
#--------------------------------------------------------#
def create_cell_box(self):
group_box = self
#ibrav
widget = InputField( group_box, "combo", label_name = "Lattice Type:", input_name = "ibrav")
widget.add_combo_choice( "Custom", "0" )
widget.add_combo_choice( "Simple Cubic", "1" )
widget.add_combo_choice( "Face-Centered Cubic", "2" )
widget.add_combo_choice( "Body-Centered Cubic", "3" )
widget.add_combo_choice( "Hexagonal and Trigonal P", "4" )
widget.add_combo_choice( "Trigonal R, 3-fold axis c", "5" )
widget.add_combo_choice( "Trigonal R, 3-fold axis <111>", "-5" )
widget.add_combo_choice( "Tetragonal P", "6" )
widget.add_combo_choice( "Tetragonal I", "7" )
widget.add_combo_choice( "Orthorhombic P", "8" )
widget.add_combo_choice( "Base-Centered Orthorhombic", "9" )
widget.add_combo_choice( "Face-Centered Orthorhombic", "10" )
widget.add_combo_choice( "Body-Centered Orthorhombic", "11" )
widget.add_combo_choice( "Monoclinic P, unique axis c", "12" )
widget.add_combo_choice( "Monoclinic P, unique axis b", "-12" )
widget.add_combo_choice( "Base-Centered Monoclinic", "13" )
widget.add_combo_choice( "Triclinic", "14" )
#GUI_lattice_vector
widget = InputField( group_box, "plain_text", label_name = "Lattice Vector:", input_name = "GUI_lattice_vector")
widget.widget.setMaximumHeight(60)
#v1
#widget = InputField( group_box, "text", label_name = "v1:", input_name = "v1")
#v2
#widget = InputField( group_box, "text", label_name = "v2:", input_name = "v2")
#v3
#widget = InputField( group_box, "text", label_name = "v3:", input_name = "v3")
#GUI_variable_cell
widget = InputField( group_box, "check", label_name = "Cell Relaxation:", input_name = "GUI_variable_cell")
widget.show_conditions.append( ["calculation","==","relax"] )
#GUI_variable_cell
widget = InputField( group_box, "check", label_name = "Cell Dynamics:", input_name = "GUI_variable_cell")
widget.show_conditions.append( ["calculation","==","md"] )
#assume_isolated
widget = InputField( group_box, "combo", label_name = "Cell Periodicity:", input_name = "assume_isolated")
widget.add_combo_choice( "Periodic", "none" )
widget.add_combo_choice( "ESM (Effective Screening Medium)", "esm" )
widget.add_combo_choice( "Vacuum (Makov-Payne Method)", "makov-payne" )
widget.add_combo_choice( "Vacuum (Martyna-Tuckerman Method)", "martyna-tuckerman" )
#esm_bc
widget = InputField( group_box, "combo", label_name = "ESM Boundary Conditions:", input_name = "esm_bc")
widget.add_combo_choice( "Periodic", "pbc" )
widget.add_combo_choice( "Vacuum-Slab-Vacuum", "bc1" )
widget.add_combo_choice( "Metal-Slab-Metal", "bc2" )
widget.add_combo_choice( "Vacuum-Slab-Metal", "bc3" )
widget.show_conditions.append( ["assume_isolated","==","esm"] )
#esm_w
widget = InputField( group_box, "text", label_name = "Effective Screening Region Offset:", input_name = "esm_w")
widget.show_conditions.append( ["assume_isolated","==","esm"] )
#esm_efield
widget = InputField( group_box, "text", label_name = "ESM Electric Field (Ry/a.u.):", input_name = "esm_efield")
widget.show_conditions.append( ["assume_isolated","==","esm"] )
widget.show_conditions.append( ["esm_bc","==","bc2"] )
#esm_nfit
widget = InputField( group_box, "text", label_name = "Number of ESM Grid Points:", input_name = "esm_nfit")
widget.show_conditions.append( ["assume_isolated","==","esm"] )
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'cell dynamics'
#--------------------------------------------------------#
# Cell Dynamics Inputs
#--------------------------------------------------------#
def create_celld_box(self):
group_box = self
#cell_dynamics
widget = InputField( group_box, "combo", label_name = "cell_dynamics:", input_name = "cell_dynamics")
widget.add_combo_choice( "none", "none" )
widget.add_combo_choice( "sd", "sd" )
widget.add_combo_choice( "damp-pr", "damp-pr" )
widget.add_combo_choice( "damp-w", "damp-w" )
widget.add_combo_choice( "bfgs", "bfgs" )
widget.add_combo_choice( "none", "none" )
widget.add_combo_choice( "pr", "pr" )
widget.add_combo_choice( "w", "w" )
#press
widget = InputField( group_box, "text", label_name = "press:", input_name = "press")
#wmass
widget = InputField( group_box, "text", label_name = "wmass:", input_name = "wmass")
#cell_factor
widget = InputField( group_box, "text", label_name = "cell_factor:", input_name = "cell_factor")
#press_conv_thr
widget = InputField( group_box, "text", label_name = "press_conv_thr:", input_name = "press_conv_thr")
#cell_dofree
widget = InputField( group_box, "combo", label_name = "cell_dofree:", input_name = "cell_dofree")
widget.add_combo_choice( "all", "all" )
widget.add_combo_choice( "x", "x" )
widget.add_combo_choice( "y", "y" )
widget.add_combo_choice( "z", "z" )
widget.add_combo_choice( "xy", "xy" )
widget.add_combo_choice( "xz", "xz" )
widget.add_combo_choice( "yz", "yz" )
widget.add_combo_choice( "xyz", "xyz" )
widget.add_combo_choice( "shape", "shape" )
widget.add_combo_choice( "volume", "volume" )
widget.add_combo_choice( "2Dxy", "2Dxy" )
widget.add_combo_choice( "2Dshape", "2Dshape" )
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'system'
#--------------------------------------------------------#
# System Inputs
#--------------------------------------------------------#
def create_system_box(self):
group_box = self
#ecutwfc
widget = InputField( group_box, "text", label_name = "ecutwfc:", input_name = "ecutwfc")
#input_dft
widget = InputField( group_box, "combo", label_name = "DFT Functional:", input_name = "input_dft")
widget.add_combo_choice( "BLYP", "blyp" )
widget.add_combo_choice( "PBE", "pbe" )
widget.add_combo_choice( "PBE0", "pbe0" )
widget.add_combo_choice( "HSE", "hse" )
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#etot_conv_thr
widget = InputField( group_box, "text", label_name = "Energy Convergence:", input_name = "etot_conv_thr")
#forc_conv_thr
widget = InputField( group_box, "text", label_name = "Force Convergence:", input_name = "forc_conv_thr")
widget.show_conditions.append( ["calculation","==","relax"] )
#nstep
widget = InputField( group_box, "text", label_name = "Maximum Relaxation Steps:", input_name = "nstep")
widget.show_conditions.append( ["calculation","==","relax"] )
#nstep
widget = InputField( group_box, "text", label_name = "Number of Timesteps:", input_name = "nstep")
widget.show_conditions.append( ["calculation","==","md"] )
#nbnd
widget = InputField( group_box, "text", label_name = "Number of Bands:", input_name = "nbnd")
#ecutrho
widget = InputField( group_box, "text", label_name = "ecutrho:", input_name = "ecutrho")
#nr1, nr2, and nr3
#nr1s, nr2s, and nr3s
#occupations
widget = InputField( group_box, "combo", label_name = "occupations:", input_name = "occupations")
widget.add_combo_choice( "Gaussian Smearing", "smearing" )
widget.add_combo_choice( "Tetrahedron (Bloechl Method)", "tetrahedra" )
widget.add_combo_choice( "Tetrahedron (Linear Method)", "tetrahedra_lin" )
widget.add_combo_choice( "Tetrahedron (Kawamura Method)", "tetrahedra_opt" )
widget.add_combo_choice( "Fixed", "fixed" )
widget.add_combo_choice( "Custom", "from_input" )
#NOTE: for occupations, default to 'smearing', unless doing DOS or phonons, in which case use 'tetrahedra_opt' - the Kawamura Method
#smearing
widget = InputField( group_box, "combo", label_name = "Smearing Method:", input_name = "smearing")
widget.add_combo_choice( "Ordinary Gaussian", "gaussian" )
widget.add_combo_choice( "Methfessel-Paxton", "methfessel-paxton" )
widget.add_combo_choice( "Marzari-Vanderbilt", "marzari-vanderbilt" )
widget.add_combo_choice( "Fermi-Dirac", "Fermi-Dirac" )
widget.show_conditions.append( ["occupations","==","smearing"] )
#NOTE: default to Marzari-Vanderbilt 'cold smearing'
#degauss
widget = InputField( group_box, "text", label_name = "degauss:", input_name = "degauss")
widget.show_conditions.append( ["occupations","==","smearing"] )
#NOTE: degauss has suggested values of 0.06-0.10 Ry
#exx_fraction
widget = InputField( group_box, "text", label_name = "exx_fraction:", input_name = "exx_fraction")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#ecutfock
widget = InputField( group_box, "text", label_name = "ecutfock:", input_name = "ecutfock")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#screening_parameter
widget = InputField( group_box, "text", label_name = "screening_parameter:", input_name = "screening_parameter")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#exxdiv_treatment
widget = InputField( group_box, "text", label_name = "exxdiv_treatment:", input_name = "exxdiv_treatment")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#x_gamma_extrapolation
widget = InputField( group_box, "text", label_name = "x_gamma_extrapolation:", input_name = "x_gamma_extrapolation")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#ecutvcut
widget = InputField( group_box, "text", label_name = "ecutvcut:", input_name = "ecutvcut")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
#nqx1, nqx2, nqx3
widget = InputField( group_box, "text", label_name = "nqx1, nqx2, nqx3:", input_name = "nqx1")
widget.show_conditions.append( ["GUI_exx_corr","==","hybrid"] )
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'hubbard'
#--------------------------------------------------------#
# Hubbard Inputs
#--------------------------------------------------------#
def create_hubbard_box(self):
group_box = self
#lda_plus_u
widget = InputField( group_box, "check", label_name = "DFT+U:", input_name = "lda_plus_u")
#NOTE: Instead of having a checkbox, just turn DFT+U on if a non-zero U is applied to any species
#lda_plus_u_kind
widget = InputField( group_box, "check", label_name = "DFT+U+J:", input_name = "lda_plus_u_kind")
#NOTE: Instead of having a checkbox, just turn DFT+U+J on if a non-zero J is applied to any species
#U_projection_type
widget = InputField( group_box, "combo", label_name = "U Projection Type:", input_name = "U_projection_type")
widget.add_combo_choice( "Atomic", "atomic" )
widget.add_combo_choice( "Ortho-Atomic", "ortho-atomic" )
widget.add_combo_choice( "Norm-Atomic", "norm-atomic" )
widget.add_combo_choice( "File", "file" )
widget.add_combo_choice( "Pseudo", "pseudo" )
#starting_ns_eigenvalue(m,ispin,l)
widget = InputField( group_box, "text", label_name = "starting_ns_eigenvalue:", input_name = "starting_ns_eigenvalue")
#--------------------------------------------------------#
# Per-species information
#--------------------------------------------------------#
#Hubbard_U
widget = InputField( group_box, "text", label_name = "U:", input_name = "U")
#Hubbard_J0
widget = InputField( group_box, "text", label_name = "J0:", input_name = "J0")
#Hubbard_alpha
widget = InputField( group_box, "text", label_name = "alpha:", input_name = "alpha")
#Hubbard_beta
widget = InputField( group_box, "text", label_name = "beta:", input_name = "beta")
#Hubbard_J
widget = InputField( group_box, "text", label_name = "J:", input_name = "J")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'vdw'
#--------------------------------------------------------#
# VdW Inputs
#--------------------------------------------------------#
def create_vdw_box(self):
group_box = self
#london_rcut
widget = InputField( group_box, "text", label_name = "london_rcut:", input_name = "london_rcut")
#ts_vdw_econv_thr
widget = InputField( group_box, "text", label_name = "ts_vdw_econv_thr:", input_name = "ts_vdw_econv_thr")
#ts_vdw_isolated
widget = InputField( group_box, "text", label_name = "ts_vdw_isolated:", input_name = "ts_vdw_isolated")
#london_s6
widget = InputField( group_box, "text", label_name = "london_s6:", input_name = "london_s6")
#xdm_a1
widget = InputField( group_box, "text", label_name = "xdm_a1:", input_name = "xdm_a1")
#xdm_a2
widget = InputField( group_box, "text", label_name = "xdm_a2:", input_name = "xdm_a2")
#--------------------------------------------------------#
# Per-species information
#--------------------------------------------------------#
#london_c6
widget = InputField( group_box, "text", label_name = "london_c6:", input_name = "london_c6")
#london_rvdw
widget = InputField( group_box, "text", label_name = "london_rvdw:", input_name = "london_rvdw")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'md'
#--------------------------------------------------------#
# MD Inputs
#--------------------------------------------------------#
def create_md_box(self):
group_box = self
#dt
widget = InputField( group_box, "text", label_name = "Timestep:", input_name = "dt")
#ion_dynamics
widget = InputField( group_box, "combo", label_name = "ion_dynamics:", input_name = "ion_dynamics")
widget.add_combo_choice( "verlet", "verlet" )
widget.add_combo_choice( "langevin", "langevin" )
widget.add_combo_choice( "langevin-smc", "langevin-smc" )
widget.show_conditions.append( ["GUI_variable_cell","==",0] )
#ion_dynamics
widget = InputField( group_box, "combo", label_name = "ion_dynamics:", input_name = "ion_dynamics")
widget.add_combo_choice( "beeman", "beeman" )
widget.show_conditions.append( ["GUI_variable_cell","==",2] )
#pot_extrapolation
widget = InputField( group_box, "combo", label_name = "Potential Extrapolation:", input_name = "pot_extrapolation")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "Atomic", "atomic" )
widget.add_combo_choice( "First-Order", "first_order" )
widget.add_combo_choice( "Second-Order", "first_order" )
#wfc_extrapolation
widget = InputField( group_box, "combo", label_name = "Wavefunction Extrapolation:", input_name = "wfc_extrapolation")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "First-Order", "first_order" )
widget.add_combo_choice( "Second-Order", "first_order" )
#remove_rigid_rot
widget = InputField( group_box, "check", label_name = "remove_rigid_rot:", input_name = "remove_rigid_rot")
widget.show_conditions.append( ["assume_isolated","!=","none"] )
#ion_temperature
widget = InputField( group_box, "combo", label_name = "ion_temperature:", input_name = "ion_temperature")
widget.add_combo_choice( "rescaling", "rescaling" )
widget.add_combo_choice( "rescale-v", "rescale-v" )
widget.add_combo_choice( "rescale-T", "rescale-T" )
widget.add_combo_choice( "reduce-T", "reduce-T" )
widget.add_combo_choice( "berendsen", "berendsen" )
widget.add_combo_choice( "andersen", "andersen" )
widget.add_combo_choice( "initial", "initial" )
widget.add_combo_choice( "not_controlled", "not_controlled" )
#tempw
widget = InputField( group_box, "text", label_name = "tempw:", input_name = "tempw")
#tolp
widget = InputField( group_box, "text", label_name = "tolp:", input_name = "tolp")
#delta_t
widget = InputField( group_box, "text", label_name = "delta_t:", input_name = "delta_t")
#nraise
widget = InputField( group_box, "text", label_name = "nraise:", input_name = "nraise")
#refold_pos
widget = InputField( group_box, "check", label_name = "refold_pos:", input_name = "refold_pos")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'relaxation'
#--------------------------------------------------------#
# Ions Inputs NOTE: ONLY FOR RELAXATION CALCULATIONS
#--------------------------------------------------------#
def create_ions_box(self):
group_box = self
#ion_dynamics
widget = InputField( group_box, "combo", label_name = "ion_dynamics:", input_name = "ion_dynamics")
widget.add_combo_choice( "bfgs", "bfgs" )
widget.add_combo_choice( "damp", "damp" )
#ion_positions
# widget = InputField( group_box, "combo", label_name = "ion_positions:", input_name = "ion_positions")
# widget.add_combo_choice( "default", "default" )
# widget.add_combo_choice( "from_input", "from_input" )
#pot_extrapolation
widget = InputField( group_box, "combo", label_name = "Potential Extrapolation:", input_name = "pot_extrapolation")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "Atomic", "atomic" )
widget.add_combo_choice( "First-Order", "first_order" )
widget.add_combo_choice( "Second-Order", "first_order" )
#wfc_extrapolation
widget = InputField( group_box, "combo", label_name = "Wavefunction Extrapolation:", input_name = "wfc_extrapolation")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "First-Order", "first_order" )
widget.add_combo_choice( "Second-Order", "first_order" )
#remove_rigid_rot
widget = InputField( group_box, "check", label_name = "remove_rigid_rot:", input_name = "remove_rigid_rot")
widget.show_conditions.append( ["assume_isolated","!=","none"] )
#upscale
widget = InputField( group_box, "text", label_name = "upscale:", input_name = "upscale")
#bfgs_ndim
widget = InputField( group_box, "text", label_name = "bfgs_ndim:", input_name = "bfgs_ndim")
#trust_radius_min
widget = InputField( group_box, "text", label_name = "trust_radius_min:", input_name = "trust_radius_min")
#trust_radius_ini
widget = InputField( group_box, "text", label_name = "trust_radius_ini:", input_name = "trust_radius_ini")
#w_1
widget = InputField( group_box, "text", label_name = "w_1:", input_name = "w_1")
#w_2
widget = InputField( group_box, "text", label_name = "w_2:", input_name = "w_2")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'magnetization'
#--------------------------------------------------------#
# Magnetization Inputs
#--------------------------------------------------------#
def create_magnetization_box(self):
group_box = self
#tot_magnetization
widget = InputField( group_box, "text", label_name = "tot_magnetization:", input_name = "tot_magnetization")
#starting_spin_angle
widget = InputField( group_box, "check", label_name = "starting_spin_angle:", input_name = "starting_spin_angle")
        #constrained_magnetization
widget = InputField( group_box, "combo", label_name = "Magnetization Constraint:", input_name = "constrained_magnetization")
widget.add_combo_choice( "None", "none" )
widget.add_combo_choice( "Total", "total" )
widget.add_combo_choice( "Atomic", "atomic" )
widget.add_combo_choice( "Total Direction", "total_direction" )
widget.add_combo_choice( "Atomic Direction", "atomic_direction" )
#fixed_magnetization
widget = InputField( group_box, "text", label_name = "fixed_magnetization:", input_name = "fixed_magnetization")
#lambda
widget = InputField( group_box, "text", label_name = "lambda:", input_name = "lambda")
#report
widget = InputField( group_box, "text", label_name = "report:", input_name = "report")
#--------------------------------------------------------#
# Per-species information
#--------------------------------------------------------#
#starting_magnetization
widget = InputField( group_box, "text", label_name = "starting_magnetization:", input_name = "starting_magnetization")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'noncollinear'
#--------------------------------------------------------#
# Noncollinear Inputs
#--------------------------------------------------------#
def create_noncollinear_box(self):
group_box = self
#lspinorb
widget = InputField( group_box, "check", label_name = "lspinorb:", input_name = "lspinorb")
#--------------------------------------------------------#
# Per-species information
#--------------------------------------------------------#
#angle1
widget = InputField( group_box, "text", label_name = "angle1:", input_name = "angle1")
#angle2
widget = InputField( group_box, "text", label_name = "angle2:", input_name = "angle2")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'efield'
#--------------------------------------------------------#
# Electric Field Inputs
#--------------------------------------------------------#
def create_efield_box(self):
group_box = self
#tefield
widget = InputField( group_box, "check", label_name = "Saw-Like Electric Field:", input_name = "tefield")
#edir
widget = InputField( group_box, "text", label_name = "edir:", input_name = "edir")
#emaxpos
widget = InputField( group_box, "text", label_name = "emaxpos:", input_name = "emaxpos")
#eopreg
widget = InputField( group_box, "text", label_name = "eopreg:", input_name = "eopreg")
#eamp
widget = InputField( group_box, "text", label_name = "eamp:", input_name = "eamp")
#dipfield
widget = InputField( group_box, "check", label_name = "Dipole Correction:", input_name = "dipfield")
#lefield
widget = InputField( group_box, "check", label_name = "Homogeneous Electric Field:", input_name = "lefield")
#efield
widget = InputField( group_box, "text", label_name = "efield:", input_name = "efield")
#efield_cart(3)
widget = InputField( group_box, "text", label_name = "efield_cart:", input_name = "efield_cart")
#efield_phase
widget = InputField( group_box, "combo", label_name = "efield_phase:", input_name = "efield_phase")
widget.add_combo_choice( "Read", "read" )
widget.add_combo_choice( "Write", "write" )
widget.add_combo_choice( "None", "none" )
#nberrycyc
widget = InputField( group_box, "text", label_name = "nberrycyc:", input_name = "nberrycyc")
#lorbm
widget = InputField( group_box, "check", label_name = "lorbm:", input_name = "lorbm")
#lberry
widget = InputField( group_box, "check", label_name = "lberry:", input_name = "lberry")
#gdir
widget = InputField( group_box, "combo", label_name = "gdir:", input_name = "gdir")
widget.add_combo_choice( "First Reciprocal Lattice Vector", "1" )
widget.add_combo_choice( "First Reciprocal Lattice Vector", "2" )
widget.add_combo_choice( "First Reciprocal Lattice Vector", "3" )
#nppstr
widget = InputField( group_box, "text", label_name = "nppstr:", input_name = "nppstr")
#lfcpopt
widget = InputField( group_box, "check", label_name = "lfcpopt:", input_name = "lfcpopt")
#fcp_mu
widget = InputField( group_box, "text", label_name = "fcp_mu:", input_name = "fcp_mu")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'monopole'
#--------------------------------------------------------#
# Monopole Inputs
#--------------------------------------------------------#
def create_monopole_box(self):
group_box = self
#monopole
widget = InputField( group_box, "check", label_name = "monopole:", input_name = "monopole")
#zmon
widget = InputField( group_box, "text", label_name = "zmon:", input_name = "zmon")
#realxz
widget = InputField( group_box, "check", label_name = "realxz:", input_name = "realxz")
#block
widget = InputField( group_box, "check", label_name = "block:", input_name = "block")
#block_1
widget = InputField( group_box, "text", label_name = "block_1:", input_name = "block_1")
#block_2
widget = InputField( group_box, "text", label_name = "block_2:", input_name = "block_2")
#block_height
widget = InputField( group_box, "text", label_name = "block_height:", input_name = "block_height")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'kpoint'
#--------------------------------------------------------#
# K-Point Inputs
#--------------------------------------------------------#
def create_kpoint_box(self):
group_box = self
#nosym
widget = InputField( group_box, "text", label_name = "nosym:", input_name = "nosym")
#nosym_evc
widget = InputField( group_box, "text", label_name = "nosym_evc:", input_name = "nosym_evc")
#noinv
widget = InputField( group_box, "text", label_name = "noinv:", input_name = "noinv")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'electrons'
#--------------------------------------------------------#
# Electrons Inputs
#--------------------------------------------------------#
def create_electrons_box(self):
group_box = self
#GUI_convergence_standards
widget = InputField( group_box, "combo", label_name = "Convergence Standards:", input_name = "GUI_convergence_standards")
widget.add_combo_choice( "Low", "low" )
widget.add_combo_choice( "Medium", "medium" )
widget.add_combo_choice( "High", "high" )
widget.add_combo_choice( "Custom", "custom" )
#electron_maxstep
widget = InputField( group_box, "text", label_name = "electron_maxstep:", input_name = "electron_maxstep")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#scf_must_converge
widget = InputField( group_box, "check", label_name = "scf_must_converge:", input_name = "scf_must_converge")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#conv_thr
widget = InputField( group_box, "text", label_name = "conv_thr:", input_name = "conv_thr")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#adaptive_thr
widget = InputField( group_box, "check", label_name = "adaptive_thr:", input_name = "adaptive_thr")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#conv_thr_init
widget = InputField( group_box, "text", label_name = "conv_thr_init:", input_name = "conv_thr_init")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#conv_thr_multi
widget = InputField( group_box, "text", label_name = "conv_thr_multi:", input_name = "conv_thr_multi")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#diago_thr_init
widget = InputField( group_box, "text", label_name = "diago_thr_init:", input_name = "diago_thr_init")
widget.show_conditions.append( ["GUI_convergence_standards","==","custom"] )
#GUI_convergence_acceleration
widget = InputField( group_box, "combo", label_name = "Convergence Acceleration:", input_name = "GUI_convergence_acceleration")
widget.add_combo_choice( "Default", "default" )
widget.add_combo_choice( "Custom", "custom" )
#mixing_mode
widget = InputField( group_box, "combo", label_name = "mixing_mode:", input_name = "mixing_mode")
widget.add_combo_choice( "Plain", "plain" )
widget.add_combo_choice( "TF", "TF" )
widget.add_combo_choice( "Local-TF", "local-TF" )
widget.show_conditions.append( ["GUI_convergence_acceleration","==","custom"] )
#mixing_beta
widget = InputField( group_box, "text", label_name = "mixing_beta:", input_name = "mixing_beta")
widget.show_conditions.append( ["GUI_convergence_acceleration","==","custom"] )
#mixing_ndim
widget = InputField( group_box, "text", label_name = "mixing_ndim:", input_name = "mixing_ndim")
widget.show_conditions.append( ["GUI_convergence_acceleration","==","custom"] )
#mixing_fixed_ns
widget = InputField( group_box, "text", label_name = "mixing_fixed_ns:", input_name = "mixing_fixed_ns")
widget.show_conditions.append( ["GUI_convergence_acceleration","==","custom"] )
#diagonalization
widget = InputField( group_box, "combo", label_name = "diagonalization:", input_name = "diagonalization")
widget.add_combo_choice( "david", "david" )
widget.add_combo_choice( "cg", "cg" )
#diago_cg_maxiter
widget = InputField( group_box, "text", label_name = "diago_cg_maxiter:", input_name = "diago_cg_maxiter")
widget.show_conditions.append( ["diagonalization","==","cg"] )
#diago_david_ndim
widget = InputField( group_box, "text", label_name = "diago_david_ndim:", input_name = "diago_david_ndim")
widget.show_conditions.append( ["diagonalization","==","david"] )
#diago_full_acc
widget = InputField( group_box, "text", label_name = "diago_full_acc:", input_name = "diago_full_acc")
#startingpot
widget = InputField( group_box, "combo", label_name = "startingpot:", input_name = "startingpot")
widget.add_combo_choice( "atomic", "atomic" )
widget.add_combo_choice( "file", "file" )
#startingwfc
widget = InputField( group_box, "combo", label_name = "startingwfc:", input_name = "startingwfc")
widget.add_combo_choice( "atomic", "atomic" )
widget.add_combo_choice( "atomic+random", "atomic+random" )
widget.add_combo_choice( "random", "random" )
widget.add_combo_choice( "file", "file" )
#tqr
widget = InputField( group_box, "check", label_name = "tqr:", input_name = "tqr")
widget = InputField( group_box, "button", input_name = "Next")
widget.show_conditions.append( ["no_next_box"] )
group_box.next_group_box = 'print'
#--------------------------------------------------------#
# Print Inputs
#--------------------------------------------------------#
def create_print_box(self):
group_box = self
#disk_io
widget = InputField( group_box, "combo", label_name = "disk_io:", input_name = "disk_io")
widget.add_combo_choice( "High", "high" )
widget.add_combo_choice( "Medium", "medium" )
widget.add_combo_choice( "Low", "low" )
widget.add_combo_choice( "None", "none" )
#verbosity
widget = InputField( group_box, "check", label_name = "Verbosity:", input_name = "verbosity")
#restart_mode
widget = InputField( group_box, "check", label_name = "restart_mode:", input_name = "restart_mode")
#wf_collect - just set to .true.
widget = InputField( group_box, "check", label_name = "wf_collect:", input_name = "wf_collect")
#max_seconds
widget = InputField( group_box, "text", label_name = "Checkpoint Time (hrs):", input_name = "max_seconds")
#iprint
widget = InputField( group_box, "text", label_name = "iprint:", input_name = "iprint")
#outdir
widget = InputField( group_box, "text", label_name = "Output Directory:", input_name = "outdir")
#wfcdir
widget = InputField( group_box, "text", label_name = "Scratch Directory:", input_name = "wfcdir")
#pseudo_dir
widget = InputField( group_box, "text", label_name = "Pseudopotential Directory:", input_name = "pseudo_dir")
#prefix
widget = InputField( group_box, "text", label_name = "Prefix:", input_name = "prefix")
#tstress
widget = InputField( group_box, "check", label_name = "tstress:", input_name = "tstress")
#tprnfor
widget = InputField( group_box, "check", label_name = "tprnfor:", input_name = "tprnfor")
#lkpoint_dir
widget = InputField( group_box, "check", label_name = "lkpoint_dir:", input_name = "lkpoint_dir")
widget = InputField( group_box, "button", input_name = "Next")
group_box.next_group_box = '???'
def on_update(self):
#print("Box updating")
#print(self)
#print(self.window())
self.window().on_window_update()
#self.update_layout()
# @pyqtSlot()
# def on_click(self):
#
# #create the next group box
# self.window().create_box(self.next_group_box)
class InputField():
"""
This class manages input fields of all types
"""
def __init__(self, parent_, type, label_name = None, input_name = None):
self.type = type
self.label_name = label_name
self.input_name = input_name
#is this widget currently being shown to the user?
self.shown = False
#conditions under which this text box should be shown
self.show_conditions = []
#list of all possible combo choices
self.combo_choices = []
self.group_box = parent_
self.group_box.widgets.append(self)
self.initialize_widget()
self.initialize_value()
def initialize_widget(self):
if self.label_name:
self.label = QLabel(self.label_name)
#self.label.setAlignment(Qt.AlignLeft)
else:
self.label = None
if self.type == "text":
self.widget = InputText(self.group_box, self.input_name)
self.widget.textChanged.connect( self.widget.on_text_changed )
elif self.type == "plain_text":
self.widget = InputPlainText(self.group_box, self.input_name)
self.widget.textChanged.connect( self.widget.on_text_changed )
elif self.type == "combo":
self.widget = InputCombo(self.group_box, self.input_name)
self.widget.currentIndexChanged.connect( self.widget.on_index_changed )
elif self.type == "check":
self.widget = InputCheck(self.group_box, self.input_name)
self.widget.stateChanged.connect( self.widget.on_state_changed )
elif self.type == "button":
self.widget = InputButton(self.group_box, self, self.input_name)
# self.widget.clicked.connect( self.group_box.on_click )
self.widget.clicked.connect( self.widget.on_click )
def initialize_value(self):
if self.type == "text":
self.group_box.input_file.inputs[self.input_name] = ""
elif self.type == "plain_text":
self.group_box.input_file.inputs[self.input_name] = ""
elif self.type == "combo":
pass #must wait until an item is added before initializing this value
elif self.type == "check":
self.group_box.input_file.inputs[self.input_name] = 0
elif self.type == "button":
pass
def add_combo_choice(self, label, name):
self.widget.currentIndexChanged.disconnect()
self.widget.addItem( label, userData = name )
self.widget.currentIndexChanged.connect( self.widget.on_index_changed )
#self.combo_choices.append( (label,name) )
self.combo_choices.append( (label,name) )
#if this is the first choice, initialize the value associated with this widget
if len(self.combo_choices) == 1:
self.group_box.input_file.inputs[self.input_name] = self.widget.itemData(0)
def set_visible(self, visible):
if visible:
#create a new widget
self.initialize_widget()
self.shown = True
#determine where this new row needs to be inserted
index = self.group_box.widgets.index(self)
if self.label:
self.group_box.layout.insertRow( index, self.label, self.widget )
else:
self.group_box.layout.insertRow( index, self.widget )
if self.type == "combo":
#add all of the combo choices to the new widget
temp_choices = list(self.combo_choices)
self.combo_choices = []
for combo_choice in temp_choices:
self.add_combo_choice(combo_choice[0], combo_choice[1])
else:
#delete this widget
self.widget.deleteLater()
self.widget = None
self.shown = False
if self.label:
self.label.deleteLater()
self.label = None
class InputText(QLineEdit):
"""
This class represents a text box in the GUI
"""
def __init__(self, parent_, input_name = None):
#super(QLineEdit, ...) would skip QLineEdit.__init__ in the MRO; start from this class
super(InputText, self).__init__(parent=parent_)
self.input_name = input_name
#initialize the input text
try:
text = self.parent().input_file.inputs[self.input_name]
self.setText(text)
except KeyError:
pass
@pyqtSlot(str)
def on_text_changed(self, string):
self.parent().input_file.inputs[self.input_name] = string
self.parent().on_update()
#print(input_file.inputs)
class InputPlainText(QPlainTextEdit):
"""
This class represents a multi-line text box in the GUI
"""
def __init__(self, parent_, input_name = None):
super(InputPlainText, self).__init__(parent=parent_)
self.input_name = input_name
#initialize the input text
try:
text = self.parent().input_file.inputs[self.input_name]
self.setPlainText(text)
except KeyError:
pass
@pyqtSlot()
def on_text_changed(self):
self.parent().input_file.inputs[self.input_name] = self.toPlainText()
self.parent().on_update()
class InputCombo(QComboBox):
"""
This class represents a drop-down box in the GUI
"""
def __init__(self, parent_, input_name = None):
super(InputCombo, self).__init__(parent=parent_)
self.input_name = input_name
@pyqtSlot(int)
def on_index_changed(self, index):
self.parent().input_file.inputs[self.input_name] = self.itemData(index)
self.parent().on_update()
class InputCheck(QCheckBox):
"""
This class represents a check box in the GUI
"""
def __init__(self, parent_, input_name = None):
super(InputCheck, self).__init__(parent=parent_)
self.input_name = input_name
@pyqtSlot(int)
def on_state_changed(self, value):
self.parent().input_file.inputs[self.input_name] = value
self.parent().on_update()
class InputButton(QPushButton):
"""
This class represents a button in the GUI
"""
def __init__(self, parent_, controller, input_name = None):
super(InputButton, self).__init__(input_name, parent=parent_)
self.input_name = input_name
#this is the InputField that controls this button
self.controller = controller
@pyqtSlot()
def on_click(self):
#create the next group box
self.parent().window().create_box(self.parent().next_group_box)
self.parent().on_update()
class QuantumEspressoInputFile():
"""
This class holds all of the information associated with a QE input file
"""
def __init__(self):
self.inputs = {}
def set_input(self, name, value):
self.inputs[name] = value
if __name__ == '__main__':
app = QApplication(sys.argv)
input_file = QuantumEspressoInputFile()
dialog = Dialog(input_file)
sys.exit(dialog.exec_())
| taylor-a-barnes/test-gui | window.py | Python | bsd-3-clause | 55,650 | [
"DIRAC",
"Gaussian",
"Quantum ESPRESSO"
] | 638e1ec75d7524c94897cc03d5c358d28685d702b47845a63b2549b711679d7d |
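The `show_conditions` lists attached to the widgets above (single flags like `["no_next_box"]`, or triples like `["GUI_convergence_standards", "==", "custom"]`) drive widget visibility. A framework-free sketch of how such rules can be evaluated against the current input values — the function name and the flag handling are illustrative assumptions, not code from the repo:

```python
def is_shown(show_conditions, inputs):
    """Return True if every condition in show_conditions holds.

    Each condition is a list like ["field", "==", "value"]; the
    single-element form (e.g. ["no_next_box"]) is treated here as a
    boolean flag looked up in `inputs` (an assumption for this sketch).
    An empty condition list means "always shown".
    """
    for cond in show_conditions:
        if len(cond) == 1:  # flag-style condition
            if not inputs.get(cond[0], False):
                return False
        else:
            field, op, expected = cond
            actual = inputs.get(field)
            if op == "==" and actual != expected:
                return False
            if op == "!=" and actual == expected:
                return False
    return True

# A widget guarded by GUI_convergence_standards == "custom" stays
# hidden until the combo box holds that value:
inputs = {"GUI_convergence_standards": "low"}
conds = [["GUI_convergence_standards", "==", "custom"]]
print(is_shown(conds, inputs))            # False
inputs["GUI_convergence_standards"] = "custom"
print(is_shown(conds, inputs))            # True
```

Because the loop only rejects, a widget with no conditions is always shown, which matches how the unconditioned fields above behave.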
from flask import jsonify, Flask, request, Response
#flask.ext.* was removed in Flask 1.0; import the extensions directly
from flask_sqlalchemy import SQLAlchemy
from flask_httpauth import HTTPBasicAuth
import time
import json
auth = HTTPBasicAuth()
import os
"""Provides an API for models, intended for use with javascript and electron"""
app = Flask(__name__)
app.config.from_object('config.DevelopmentConfig')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
from models import *
value_types=None
@app.route('/', methods = ['GET'])
def home():
return("server up")
@app.route('/rollback', methods = ['GET'])
def rollback():
responseString=[]
try:
db.session.rollback()
except Exception as e:
responseString.append(e.args)
return str(responseString), 500
return str(responseString)
@app.route('/products', methods = ['GET'])
def get_products():
start = time.time()
#TODO: log visits automatically
visit = Visits("/products")
db.session.add(visit)
db.session.commit()
result=[]
edbString = request.args.get('edb')
if edbString == "true":
allProducts = EdbProduct.query.all()
elif edbString == "false":
allProducts = TemplateProduct.query.all()
else:
allProducts = Product.query.all()
fieldsString=request.args.get('fields')
if fieldsString:
fields=fieldsString.split(",")
else:
fields=None
result = [a.toDict(fields=fields) for a in allProducts]
end = time.time()
print (end-start)
return jsonify(result)
@app.route('/products/<id>', methods = ['GET'])
def get_product(id):
visit = Visits("/products")
db.session.add(visit)
db.session.commit()
try:
product = Product.query.get(id)
except Exception as e:
product = None
if product is None:
db.session.rollback()
return "resource "+str(id)+" not found", 404
else:
return jsonify(product.toDict())
@app.route('/products', methods = ['POST'])
def post_product():
if 'edb' in request.json:
if request.json['edb']:
product = EdbProduct()
else:
product = TemplateProduct()
else:
product = TemplateProduct()
# if 'id' in request.json and Product.query.get(request.json['id']) is None:
# product.id=request.json['id']
if 'name' not in request.json:
return "name missing", 400
else:
product.name = request.json['name']
db.session.add(product)
db.session.commit()
editProduct(product.id, request.json)
return jsonify(product.toDict()),201
@app.route('/products/<id>', methods = ['PUT'])
def put_product(id):
try:
product = Product.query.get(id)
except:
product = None
if not product:
return "resource "+str(id)+" not found", 404
else:
editProduct(id,request.json)
return jsonify(product.toDict())
@app.route('/products/<id>', methods = ['DELETE'])
def delete_product(id):
try:
product = Product.query.get(id)
except:
product = None
if not product:
return "resource "+str(id)+" not found", 404
else:
db.session.delete(product)
db.session.commit()
return "",204
@app.route('/values/<id>', methods = ['GET'])
def get_value(id):
try:
value = Value.query.get(id)
except:
value = None
if not value:
return "resource "+str(id)+" not found", 404
else:
return jsonify(value.toDict())
@app.route('/values', methods=['POST'])
def post_value():
value_types={
'ProductNutrientAssociation':ProductNutrientAssociation,
'ProductAllergeneAssociation': ProductAllergeneAssociation,
'Co2Value':Co2Value,
'FoodWasteData':FoodWasteData,
'ProductDensity':ProductDensity,
'ProductProcessNutrientAssociation':ProductProcessNutrientAssociation,
'ProductProcessCO2Association':ProductProcessCO2Association,
'ProductUnitWeight':ProductUnitWeight}
if 'product' in request.json and 'type' in request.json:
try:
product=Product.query.get(request.json['product'])
except:
product=None
if not product:
print("product "+str(request.json['product'])+" not found")
return "product "+str(request.json['product'])+" not found", 404
elif request.json['type'] in value_types:
value=value_types[request.json['type']]()
value.product=product
db.session.add(value)
try:
editValue(value, request.json)
except TypeError as e:
db.session.rollback()
return str(e.args), 400
db.session.commit()
return jsonify(value.toDict()), 201
return "must provide product and type",400
@app.route('/values/<id>', methods = ['PUT'])
def put_value(id):
try:
value = Value.query.get(id)
except:
value = None
if not value:
return "resource "+str(id)+" not found", 404
else:
editValue(value,request.json)
db.session.add(value)
return jsonify(value.toDict())
@app.route('/values/<id>', methods = ['DELETE'])
def delete_value(id):
try:
value = Value.query.get(id)
except:
value = None
if not value:
return "resource "+str(id)+" not found", 404
else:
db.session.delete(value)
db.session.commit()
return "",204
@app.route('/values', methods = ['GET'])
def get_values():
return jsonify([a.toDict() for a in Value.query.all()])
@app.route('/references', methods = ['GET'])
def get_references():
fieldsString=request.args.get('fields')
if fieldsString:
fields=fieldsString.split(",")
else:
fields=None
return jsonify([a.toDict(fields=fields) for a in Reference.query.all()])
@app.route('/references', methods = ['POST'])
def post_reference():
if 'name' in request.json:
reference=Reference()
reference.name=request.json['name']
if 'comment' in request.json:
reference.comment=request.json['comment']
db.session.add(reference)
db.session.commit()
return jsonify(reference.toDict()), 201
else:
return ("name required"), 400
@app.route('/references/<id>', methods = ['PUT'])
def put_reference(id):
try:
reference = Reference.query.get(id)
except:
reference = None
if not reference:
return "resource "+str(id)+" not found", 404
else:
editReference(reference.id,request.json)
return jsonify(reference.toDict())
@app.route('/allergenes', methods = ['GET'])
def get_allergenes():
return jsonify([a.toDict() for a in Allergene.query.all()])
@app.route('/nutrients', methods = ['GET'])
def get_nutrients():
return jsonify([a.toDict() for a in Nutrient.query.all()])
@app.route('/processes', methods = ['GET'])
def get_processes():
return jsonify([a.toDict() for a in Process.query.all()])
@app.route('/processes', methods = ['POST'])
def post_process():
if 'name' in request.json:
process=Process()
process.name=request.json['name']
if 'type' in request.json:
process.type=request.json['type']
if 'description' in request.json:
process.description=request.json['description']
db.session.add(process)
db.session.commit()
return jsonify(process.toDict()), 201
else:
return "name required", 400
def editProduct(id,jsonData):
product = Product.query.get(id)
if 'allergenes' in jsonData:
for allergeneDict in jsonData['allergenes']:
if 'id' in allergeneDict:
try:
association = ProductAllergeneAssociation.query.get(allergeneDict['id'])
editValue(association, allergeneDict)
except:
raise TypeError('no allergene association with id '+str(allergeneDict['id']))
if 'alternatives' in jsonData:
product.alternatives=[]
for alternative in jsonData['alternatives']:
try:
alternative = Product.query.get(alternative['id'])
if not alternative in product.alternatives:
product.alternatives.append(alternative)
except:
pass
if 'co2Values' in jsonData:
for co2Dict in jsonData['co2Values']:
if 'id' in co2Dict:
try:
value=Co2Value.query.get(co2Dict['id'])
editValue(value, co2Dict)
except:
value=None
if 'commentsOnDensityAndUnitWeight' in jsonData:
product.commentsOnDensityAndUnitWeight=jsonData['commentsOnDensityAndUnitWeight']
if 'densities' in jsonData:
for densityDict in jsonData['densities']:
if 'id' in densityDict:
try:
value=ProductDensity.query.get(densityDict['id'])
editValue(value, densityDict)
except:
value=None
if 'endOfLocalSeason' in jsonData:
#TODO: parse date, check, add
pass
if 'englishName' in jsonData:
product.englishName = jsonData['englishName']
if 'foodWasteData' in jsonData:
for foodWasteDict in jsonData['foodWasteData']:
if 'id' in foodWasteDict:
try:
value=FoodWasteData.query.get(foodWasteDict['id'])
editValue(value, foodWasteDict)
except:
value=None
if 'frenchName' in jsonData:
product.frenchName = jsonData['frenchName']
if 'infoTextForCook' in jsonData:
product.infoTextForCook = jsonData['infoTextForCook']
if 'name' in jsonData:
product.name = jsonData['name']
#add ProductProcessNutrientAssociation (if not existing) and if necessary create respective Nutrient and Process (side effect)
if 'nutrientProcesses' in jsonData:
for processDict in jsonData['nutrientProcesses']:
if 'id' in processDict:
try:
value=ProductProcessNutrientAssociation.query.get(processDict['id'])
editValue(value, processDict)
except:
value=None
#add ProductNutrientAssociation (if not existing) and if necessary create respective Nutrient (side effect)
if 'nutrients' in jsonData:
for nutrientDict in jsonData['nutrients']:
if 'id' in nutrientDict:
try:
value=ProductNutrientAssociation.query.get(nutrientDict['id'])
editValue(value, nutrientDict)
except:
value=None
if 'possibleOrigins' in jsonData:
for originName in jsonData['possibleOrigins']:
try:
location = Location.query.filter(Location.name == originName).all()[0]
except:
location = None
if not location:
location = Location()
location.name = originName
db.session.add(location)
if not location in product.possibleOrigins:
product.possibleOrigins.append(location)
if 'processes' in jsonData:
product.processes=[]
for processDict in jsonData['processes']:
try:
process = Process.query.filter(Process.name == processDict['name']).all()[0]
except:
try:
process = Process.query.get(processDict['id'])
except:
process = None
if not process:
process = Process()
process.name = processDict['name']
db.session.add(process)
product.processes.append(process)
if 'processesCo2' in jsonData:
for processDict in jsonData['processesCo2']:
if 'id' in processDict:
try:
value=ProductProcessCO2Association.query.get(processDict['id'])
editValue(value, processDict)
except:
value=None
if 'specification' in jsonData:
product.specification = jsonData['specification']
if 'standardOrigin' in jsonData:
try:
location = Location.query.filter(Location.name == jsonData['standardOrigin']).all()[0]
except:
location = None
if not location:
location = Location()
location.name = jsonData['standardOrigin']
db.session.add(location)
product.standardOrigin = location
if 'startOfLocalSeason' in jsonData:
product.startOfLocalSeason = jsonData['startOfLocalSeason']
if 'synonyms' in jsonData:
for synonymName in jsonData['synonyms']:
try:
synonym = Synonym.query.get(synonymName)
except:
synonym = None
if not synonym:
synonym = Synonym(synonymName)
#side effect
db.session.add(synonym)
product.synonyms.append(synonym)
if 'tags' in jsonData:
for tagName in jsonData['tags']:
try:
tag = Tag.query.get(tagName)
except:
tag = None
if not tag:
tag = Tag()
tag.name = tagName
#side effect
db.session.add(tag)
product.tags.append(tag)
if 'texture' in jsonData:
product.texture = jsonData['texture']
if 'unitWeights' in jsonData:
for unitWeightDict in jsonData['unitWeights']:
if 'id' in unitWeightDict:
try:
value=ProductUnitWeight.query.get(unitWeightDict['id'])
editValue(value, unitWeightDict)
except:
value=None
db.session.add(product)
db.session.commit()
return product.id
def editValue(value,valueDict):
#common fields for values
if 'amount' in valueDict:
value.amount = valueDict['amount']
value.baseValue = None
if 'comment' in valueDict:
value.comment=valueDict['comment']
if 'unit' in valueDict:
value.unit = valueDict['unit']
if 'referenceId' in valueDict:
try:
reference = Reference.query.get(valueDict['referenceId'])
except:
reference = None
if reference:
value.reference=reference
elif 'reference' in valueDict:
try:
reference = Reference.query.filter(Reference.name == valueDict['reference']).all()[0]
except:
reference = None
if not reference:
reference = Reference()
reference.name = valueDict['reference']
value.reference = reference
db.session.add(reference)
if 'validCountries' in valueDict:
value.validCountries=[]
for countryName in valueDict['validCountries']:
try:
location=Location.query.filter(Location.name==countryName).all()[0]
except:
location=None
if not location:
location=Location()
location.name=countryName
db.session.add(location)
value.validCountries.append(location)
if 'baseValue' in valueDict:
try:
baseValue = Value.query.get(valueDict['baseValue'])
if not baseValue==value:
value.baseValue=baseValue
except:
pass
#type specific fields
if 'type' not in valueDict:
db.session.rollback()
raise TypeError('value type missing')
elif valueDict['type']=='Co2Value' and type(value)==Co2Value:
# no additional fields
pass
elif valueDict['type']=='FoodWasteData' and type(value)==FoodWasteData and 'field' in valueDict:
try:
field=FoodWasteField.query.get(valueDict['field'])
except:
field=None
if not field:
field=FoodWasteField()
field.name=valueDict['field']
db.session.add(field)
value.field=field
elif valueDict['type']=='ProductDensity' and type(value)==ProductDensity:
# no additional fields
pass
elif valueDict['type']=='ProductAllergeneAssociation' and type(value)==ProductAllergeneAssociation and 'allergeneName' in valueDict:
try:
allergene = Allergene.query.filter(Allergene.name == valueDict['allergeneName']).all()[0]
except:
allergene = None
if not allergene:
allergene = Allergene()
allergene.name = valueDict['allergeneName']
db.session.add(allergene)
value.allergene=allergene
elif valueDict['type']=='ProductProcessCO2Association' and type(value)==ProductProcessCO2Association and 'processName' in valueDict:
try:
process = Process.query.filter(Process.name == valueDict['processName']).all()[0]
except:
process = None
if not process:
process = Process()
process.name = valueDict['processName']
db.session.add(process)
value.process=process
elif valueDict['type']=='ProductNutrientAssociation' and type(value)==ProductNutrientAssociation and 'nutrientName' in valueDict:
try:
nutrient = Nutrient.query.filter(Nutrient.name == valueDict['nutrientName']).all()[0]
except:
nutrient = None
if not nutrient:
nutrient = Nutrient()
nutrient.name = valueDict['nutrientName']
db.session.add(nutrient)
value.nutrient=nutrient
elif valueDict['type']=='ProductProcessNutrientAssociation' and type(value)==ProductProcessNutrientAssociation and 'nutrientName' in valueDict and 'processName' in valueDict:
try:
process = Process.query.filter(Process.name == valueDict['processName']).all()[0]
except:
process = None
if not process:
process = Process()
process.name = valueDict['processName']
db.session.add(process)
value.process=process
try:
nutrient = Nutrient.query.filter(Nutrient.name == valueDict['nutrientName']).all()[0]
except:
nutrient = None
if not nutrient:
nutrient = Nutrient()
nutrient.name = valueDict['nutrientName']
db.session.add(nutrient)
value.nutrient=nutrient
elif valueDict['type']=='ProductUnitWeight' and type(value)==ProductUnitWeight:
# no additional fields
pass
else:
raise TypeError('missing arguments for type '+value.type)
db.session.commit()
return value.id
def editReference(id, jsonData):
reference = Reference.query.get(id)
if 'name' in jsonData:
reference.name = jsonData['name']
if 'comment' in jsonData:
reference.comment = jsonData['comment']
db.session.add(reference)
db.session.commit()
return reference.id
if __name__ == '__main__':
app.run(threaded=True) | schinke/solid-fortnight-ba | flask/app.py | Python | mit | 19,346 | [
"VisIt"
] | 60c53bc3818775d3fa16a32bf3c789759c5b1a3eb6929240159a2aee499e4bfd |
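`editProduct` and `editValue` above repeat one idiom many times: query a row by name (Location, Process, Nutrient, Tag, ...), and create-and-add it when the query comes back empty. A database-free sketch of that get-or-create pattern, using a plain dict in place of the SQLAlchemy session (all names here are illustrative):

```python
def get_or_create(registry, name, factory=dict):
    """Return the object registered under `name`, creating and
    registering a fresh one when it is missing -- the same idiom the
    endpoints above apply to Location, Process and Nutrient rows."""
    obj = registry.get(name)
    if obj is None:
        obj = factory()
        obj["name"] = name
        registry[name] = obj   # stands in for db.session.add(obj)
    return obj

locations = {}
ch = get_or_create(locations, "Switzerland")
again = get_or_create(locations, "Switzerland")
print(ch is again)   # True: the second call reuses the first row
```

With a real session the lookup would be `Model.query.filter(Model.name == name).first()` followed by `db.session.add(...)`; the dict keeps the sketch self-contained.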
import ispyb.model
class Sample(ispyb.model.DBCache):
def __init__(self, sample_id, db_conn, preload=None):
"""Create a Sample object for a defined sample_id. Requires
a database connection object exposing further data access methods.
:param sample_id: bLSampleId
:param db_conn: ISPyB database connection object
:return: A Sample object representing the database entry for
the specified bLSampleId
"""
self._db = db_conn
self._id = int(sample_id)
self._cache_container = None
if preload:
self._data = preload
def reload(self):
"""Load/update information from the database."""
raise NotImplementedError()
@property
def id(self):
"Returns the sampleId"
return self._id
def __str__(self):
"""Returns a pretty-printed object representation."""
if not self.cached:
return "Sample #%d (not yet loaded from database)" % self._id
return "\n".join(
(
"Sample #%s" % self._id,
" Name : %s" % self.name,
" Crystal id : %s"
% (self.crystal_id if self.crystal_id else "None"),
" Container id : %s"
% (self.container_id if self.container_id else "None"),
" DCIDs : %s"
% (",".join(str(i) for i in self.dcids) if self.dcids else "None"),
)
)
@property
def container(self):
"""Returns the container information for the sample"""
if self._cache_container is None:
self.load()
if not self._data["blSampleId"]:
# Can not have a container without a sample
self._cache_container = False
return self._cache_container
container = self._db.shipping.retrieve_container_for_sample_id(
self._data["blSampleId"]
)
if not container:
self._cache_container = False
else:
self._cache_container = ispyb.model.container.Container(
container[0]["containerId"], self._db, preload=container[0]
)
return self._cache_container
ispyb.model.add_properties(
Sample,
(
("name", "name", "The sample name"),
("crystal_id", "crystalId", "The crystal id for this sample"),
("container_id", "containerId", "The container id for this sample"),
("location", "location", "The location of this sample within its container"),
("dcids", "dcids", "The data collection ids associated with this sample"),
),
)
| DiamondLightSource/ispyb-api | src/ispyb/model/sample.py | Python | apache-2.0 | 2,810 | [
"CRYSTAL"
] | c8f72095655347b626a1dd38613c531d30a71c43f3f4c109a038b6e963719c8e |
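The `container` property above memoises its database lookup in `_cache_container`, using `None` for "not looked up yet" and `False` for "looked up, no container", so even a negative result is never re-queried. A standalone sketch of that tri-state caching pattern (class and attribute names are invented for illustration):

```python
class Memo:
    """Tri-state cache: None = not looked up yet, False = looked up
    and absent, anything else = the cached value."""

    def __init__(self, lookup):
        self._lookup = lookup
        self._cache = None
        self.calls = 0

    @property
    def value(self):
        if self._cache is None:          # only hit the backend once
            self.calls += 1
            result = self._lookup()
            # Map "absent" (None) to the False sentinel so a missing
            # row is cached just like a found one.
            self._cache = result if result is not None else False
        return self._cache

m = Memo(lambda: None)      # backend has nothing for this sample
print(m.value, m.calls)     # False 1
print(m.value, m.calls)     # False 1  (negative result is cached too)
```

The False sentinel is what lets callers distinguish "no container" from "not yet loaded" without an extra flag attribute.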
#User provided customizations for the gpaw setup
import os
# compiler
compiler = './gcc.py'
mpicompiler = './gcc.py'
mpilinker = 'cc'
extra_compile_args = ['-std=c99']
# libz
libraries = ['z']
# libxc
library_dirs += [os.environ['LIBXCDIR'] + '/lib']
include_dirs += [os.environ['LIBXCDIR'] + '/include']
libraries += ['xc']
# use ScaLAPACK and HDF5
scalapack = True
hdf5 = True
# GPAW defines
define_macros += [('GPAW_NO_UNDERSCORE_CBLACS', '1')]
define_macros += [('GPAW_NO_UNDERSCORE_CSCALAPACK', '1')]
define_macros += [("GPAW_ASYNC",1)]
define_macros += [("GPAW_MPI2",1)]
| mlouhivu/build-recipes | gpaw/examples/sisu-git-2016-04-01/customize-gnu.py | Python | mit | 583 | [
"GPAW"
] | 994ff2f62dcb0b86a25cbc25e8e98d5ab0960ba46beae30cf4a82f7931e55788 |
import os
import numpy as np
from pychemia.utils.periodic import atomic_symbol, covalent_radius, atomic_number
from pychemia.utils.constants import bohr_angstrom, angstrom_bohr
from pychemia.utils.mathematics import unit_vectors
from .parser import parser
from pychemia.utils.netcdf import netcdf2dict
from pychemia.core import Structure
from ..codes import CodeInput
"""
Definition of the AbinitInput class, which reads
ABINIT input files and stores their contents
in a python dictionary called 'variables'
"""
__author__ = "Guillermo Avendano-Franco"
__copyright__ = "Copyright 2016"
__version__ = "1.1"
__maintainer__ = "Guillermo Avendano-Franco"
__email__ = "guillermo.avendano@uclouvain.be"
__status__ = "Development"
__date__ = "May 13, 2016"
class AbinitInput(CodeInput):
def __init__(self, input_filename=None):
CodeInput.__init__(self)
if input_filename is None:
self.input_file = 'abinit.in'
else:
self.input_file = input_filename
if os.path.isfile(self.input_file):
if self.input_file[-6:] == 'OUT.nc':
self.variables = netcdf2dict(self.input_file)
else:
self.read()
def __import_input(self, filename):
"""
Read an ABINIT input file and return a python dictionary
with the input variables read from it. The keys are
the full-name input variables (acell, xcart21, etc). The
values are numbers or lists, except for
the value '*[NUMBER]', which is kept as a string, and
the string associated with the variable xyzfile
Args:
filename:
ABINIT input filename
"""
ans = parser(filename)
if ans is not None:
self.variables = ans
def read(self):
if not os.path.isfile(self.input_file):
raise ValueError("ERROR: Could not read %s" % self.input_file)
ans = parser(self.input_file)
if ans is not None:
self.variables = ans
def check(self):
if self.get_number_variables > 0:
print("ABINIT input is readable and has %d variables" % self.get_number_variables)
def __str__(self):
"""
String representation of the object
"""
ret = ''
thekeys = self.variables.keys()
varnames = [x for x in thekeys if not x[-1].isdigit()]
if 'ndtset' in varnames:
ret = ret + "#" + 60 * "-" + "\n#" + " MULTI DATASET\n#" + 60 * "-" + "\n\n"
ret += self.write_key('ndtset')
varnames.remove('ndtset')
if 'jdtset' in varnames:
ret += self.write_key('jdtset')
varnames.remove('jdtset')
if 'udtset' in varnames:
ret += self.write_key('udtset')
varnames.remove('udtset')
ret += '\n'
seqvarnames = [x for x in varnames if
(x[-1] == ':' or x[-1] == "+" or x[-1] == "?" or x[-2] == ':' or x[-2] == "+" or x[
-2] == "?")]
if len(seqvarnames) > 0:
ret = ret + "#" + 60 * "-" + "\n#" + " SEQUENCE\n#" + 60 * "-" + "\n\n"
seqvarnames.sort()
for i in seqvarnames:
ret += self.write_key(i)
varnames.remove(i)
ret += '\n'
if len(varnames) > 0:
varnames.sort()
ret = ret + "#" + 60 * "-" + "\n#" + " ALL DATASETS\n#" + 60 * "-" + "\n\n"
for i in varnames:
if i == 'dmatpawu' and 'lpawu' in self.variables:
if 2 in self.variables['lpawu']:
ret += self.write_key(i, ncolumns=5)
elif 3 in self.variables['lpawu']:
ret += self.write_key(i, ncolumns=7)
else:
ret += self.write_key(i)
ret += '\n'
for dtset in range(1, 100):
varnames = [x for x in thekeys if
(x[-len(str(dtset)):] == str(dtset) and not x[-len(str(dtset)) - 1:].isdigit())]
if len(varnames) > 0:
varnames.sort()
ret = ret + "#" + 60 * "-" + "\n#" + " DATASET " + str(dtset) + "\n#" + 60 * "-" + "\n\n"
for i in varnames:
if i == 'dmatpawu' and 'lpawu' in self.variables:
if 2 in self.variables['lpawu']:
ret += self.write_key(i, ncolumns=5)
elif 3 in self.variables['lpawu']:
ret += self.write_key(i, ncolumns=7)
else:
ret += self.write_key(i)
ret += '\n'
return ret
def write_key(self, varname, ncolumns=None, debug=False):
"""
Receive an input variable and write its contents
properly according to its kind and length
Args:
varname:
The name of the input variable
ncolumns:
Number of columns for the input variable
debug:
Shows contents of variable before creating string out of it
"""
ret = ''
if varname not in self.variables:
print("[ERROR] input variable: '%s' is not defined" % varname)
return ''
# Assume that the variables are integer and test if such assumption
# is true
integer = True
real = False
string = False
compact = True
if isinstance(self.variables[varname], (int, float)):
varlist = [self.variables[varname]]
elif isinstance(self.variables[varname], str):
varlist = [self.variables[varname]]
else:
varlist = self.variables[varname]
if debug:
print('varlist: %s' % varlist)
# Get the general kind of values for the input variable
for j in varlist:
try:
if not float(j).is_integer():
# This is the case of non integer values
integer = False
real = True
string = False
if len(str(float(j))) > 7:
compact = False
except ValueError:
# This is the case of '*1' that could not
# be converted because we don't know the size
# of the array
integer = False
real = False
string = True
ret = ret + (varname.rjust(15)) + " "
known_variables = {'xred': [3], 'acell': [3]}
if varname in known_variables:
for i in known_variables[varname]:
if len(varlist) % i == 0:
for j in range(int(len(varlist) / i)):
if j == 0:
ret += (i * '%17.10E ' + '\n') % tuple(varlist[j * i:j * i + i])
else:
ret += (17 * ' ' + i * '%17.10E ' + '\n') % tuple(varlist[j * i:j * i + i])
elif ncolumns is not None:
i = ncolumns
for j in range(int(len(varlist) / i)):
if j == 0:
ret += (i * '%17.10E ' + '\n') % tuple(varlist[j * i:j * i + i])
else:
ret += (17 * ' ' + i * '%17.10E ' + '\n') % tuple(varlist[j * i:j * i + i])
else:
if debug:
print("real: %s integer: %s string: %s" % (real, integer, string))
for j in range(len(varlist)):
if real:
if compact:
ret += ("%g" % varlist[j]).rjust(8)
else:
ret += "%17.10e" % varlist[j]
elif integer:
ret += "%d" % varlist[j]
elif string:
ret += "%s" % varlist[j]
# Conditions to jump to a new line
if ((j + 1) % 3) == 0 and real and j < len(varlist) - 1:
ret += "\n"
ret += 17 * " "
elif j < len(varlist) - 1:
ret += " "
ret += "\n"
return ret
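Before choosing an output format, write_key classifies a variable's values as integer, real or string (the latter covering repeat markers such as '*1'). A simplified standalone sketch of that classification (the helper name is hypothetical; the original tracks per-element flags, here any unparseable token immediately makes the whole list a string):

```python
def classify_values(varlist):
    """Classify a list of ABINIT input values the way write_key does:
    any non-integer float makes the list 'real'; any token that cannot
    be parsed as a number (e.g. the '*1' repeat marker) makes it 'string'."""
    kind = 'integer'
    for value in varlist:
        try:
            if not float(value).is_integer():
                # At least one genuinely fractional value: format as reals
                kind = 'real'
        except ValueError:
            # Unparseable token such as '*1' kept as a raw string
            return 'string'
    return kind
```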
def get_structure(self, idtset=None, units='bohr'):
"""
Return the atomic structure from the input object
for a given dataset (no dataset by default)
"""
# NATOM
natom = self.get_value('natom', idtset)
# SYMBOLS
ntypat = self.get_value('ntypat', idtset)
symbols = []
znucl = self.get_value('znucl', idtset)
typat = self.get_value('typat', idtset)
for i in range(natom):
# NOTE: znucl is a real number in OUT.nc
# Alchemical mixing is not allowed here
if ntypat == 1:
symbols.append(atomic_symbol(int(znucl)))
else:
symbols.append(atomic_symbol(int(znucl[typat[i] - 1])))
# POSITIONS
xangst = self.get_value('xangst', idtset)
xcart = self.get_value('xcart', idtset)
xred = self.get_value('xred', idtset)
rprim = self.get_value('rprim', idtset)
acell = self.get_value('acell', idtset)
# Set rprimd and acell using the default values
# if not found
if rprim is None:
rprim = np.identity(3)
else:
rprim = np.array(rprim).reshape((3, 3))
if acell is None:
acell = np.ones(3)
else:
acell = np.array(acell)
rprimd = np.zeros((3, 3))
rprimd[0] = rprim[0] * acell
rprimd[1] = rprim[1] * acell
rprimd[2] = rprim[2] * acell
if xangst is not None:
xangst = np.array(xangst)
positions = xangst.reshape((natom, 3))
if units == 'bohr':
positions = positions * angstrom_bohr
elif xcart is not None:
xcart = np.array(xcart)
positions = xcart.reshape((natom, 3))
if units == 'angstrom':
positions = positions * bohr_angstrom
elif xred is not None:
xred = np.array(xred)
xred = xred.reshape((natom, 3))
xcart = np.zeros((natom, 3))
# print rprimd
# print xred
for i in range(natom):
xcart[i] = xred[i, 0] * rprimd[0] + xred[i, 1] * rprimd[1] + xred[i, 2] * rprimd[2]
positions = xcart
if units == 'angstrom':
positions = positions * bohr_angstrom
else:
positions = None
if units == 'angstrom':
rprimd = rprimd * bohr_angstrom
# Create an object atomic_structure
structure = Structure(natom=natom, symbols=symbols, positions=positions, cell=rprimd)
return structure
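The coordinate handling in get_structure boils down to scaling each row of rprim by acell to build rprimd, then converting reduced coordinates to Cartesian ones via xcart[i] = sum_j xred[i, j] * rprimd[j]. A minimal numpy sketch of that conversion (function name and example values are illustrative):

```python
import numpy as np

def reduced_to_cartesian(xred, rprim, acell):
    """Convert reduced coordinates to Cartesian coordinates.

    Mirrors get_structure: rprimd[i] = rprim[i] * acell (row-wise,
    element-by-element scaling), then xcart = xred . rprimd.
    """
    rprimd = np.array(rprim, dtype=float) * np.array(acell, dtype=float)
    return np.dot(np.array(xred, dtype=float), rprimd)

# One atom at the body center of a cubic cell of side 10 bohr
center = reduced_to_cartesian([[0.5, 0.5, 0.5]], np.identity(3), [10.0] * 3)
```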
def from_structure(self, structure):
"""
Set input variables for a given structure
:param structure: (pychemia.Structure) Structure to set ABINIT input variables
:return:
"""
natom = structure.natom
ntypat = len(structure.species)
znucl = atomic_number(structure.species)
typat_dict = {}
index = 1
for ispec in structure.species:
typat_dict[ispec] = index
index += 1
typat = [typat_dict[i] for i in structure.symbols]
xcart = angstrom_bohr * structure.positions.flatten()
acell = angstrom_bohr * np.array(structure.lattice.lengths)
rprim = unit_vectors(structure.cell).T.flatten()
for i in ['natom', 'ntypat', 'znucl', 'typat', 'xcart', 'acell', 'rprim']:
self.set_value(i, eval(i))
def get_dtsets_keys(self):
"""
Get the list of dtset suffixes according to
the values given by ndtset, jdtset and udtset
"""
ret = None
if 'jdtset' in self.variables and 'udtset' in self.variables:
print('ERROR: udtset and jdtset cannot be used together')
return None
elif 'ndtset' in self.variables:
ndtset = self.get_value('ndtset')
if ndtset != 0:
if 'jdtset' in self.variables:
ret = list(self.variables['jdtset'])
elif 'udtset' in self.variables:
ret = []
udtset = self.get_value('udtset')
for i in range(1, udtset[0] + 1):
for j in range(1, udtset[1] + 1):
ret.append(str(i) + str(j))
else:
ret = range(1, ndtset + 1)
else:
ret = ['']
return ret
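For udtset the dataset suffixes are built as the concatenation str(i) + str(j) over a double loop, so udtset [2, 3] yields '11' through '23'. A standalone sketch of that enumeration (helper name hypothetical):

```python
def udtset_keys(udtset):
    """Enumerate dataset suffixes for a double-loop dataset, as in
    get_dtsets_keys: outer index 1..udtset[0], inner index 1..udtset[1]."""
    return [str(i) + str(j)
            for i in range(1, udtset[0] + 1)
            for j in range(1, udtset[1] + 1)]
```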
def atom_name(self, iatom, idtset=''):
"""
Return the symbol of the atom followed by its position in the
list of atoms, such as H3, N4, etc
"""
atomnumber = self.get_value('znucl', idtset=idtset)
if isinstance(atomnumber, list):
atomnumber = atomnumber[self.get_value('typat', idtset=idtset)[iatom] - 1]
return atomic_symbol(atomnumber) + str(iatom + 1)
def view_projections(self):
"""
Show the 3 projections of the molecule in a single
figure
"""
import matplotlib.patches as mpatches
from matplotlib.collections import PatchCollection
from matplotlib.pylab import subplots
fig, ax = subplots(nrows=1, ncols=3)
fig.set_size_inches(15, 4)
color = ['r', 'g', 'b']
j = 0
structure = self.get_structure()
for i in structure.cell:
ax[0].plot([0, i[0]], [0, i[1]], color[j] + '-', lw=3)
ax[1].plot([0, i[1]], [0, i[2]], color[j] + '-', lw=3)
ax[2].plot([0, i[2]], [0, i[0]], color[j] + '-', lw=3)
j += 1
proj = [[0, 1], [1, 2], [2, 0]]
labels = [['x', 'y'], ['y', 'z'], ['z', 'x']]
for j in range(3):
patches = []
for i in range(structure.natom):
radius = 0.5 * covalent_radius(atomic_number(structure.symbols[i]))
pos = structure.positions[i]
art = mpatches.Circle((pos[proj[j][0]], pos[proj[j][1]]), radius, fc='g', ec='g')
patches.append(art)
collection = PatchCollection(patches, color='k', alpha=0.5)
col = ax[j].add_collection(collection)
ax[j].set_xlim(min(structure.positions[:, proj[j][0]]) - 1, max(structure.positions[:, proj[j][0]]) + 1)
ax[j].set_ylim(min(structure.positions[:, proj[j][1]]) - 1, max(structure.positions[:, proj[j][1]]) + 1)
ax[j].set_aspect('equal', adjustable='datalim')
ax[j].set_xlabel(labels[j][0])
ax[j].set_ylabel(labels[j][1])
return fig, ax
def clean(self):
self.variables = {}
def has_variable(self, varname, section=None):
return varname in self.variables
def get_value(self, varname, idtset=None, return_iterable=False):
"""
Get the value of the input variable 'varname'
associated with the dataset 'idtset'
If 'idtset' is not given, the value is assumed
not to be dataset dependent
"""
name = ''
fact = 1
delta = 0
# Get the right key for the abinit variable
if idtset is None:
if varname in self.variables:
name = varname
else:
if (varname + str(idtset)) in self.variables:
name = varname + str(idtset)
elif idtset > 10:
if (varname + '?' + (str(idtset)[1])) in self.variables:
name = varname + '?' + (str(idtset)[1])
elif (varname + (str(idtset)[0]) + '?') in self.variables:
name = varname + (str(idtset)[0]) + '?'
elif (varname + '+?') in self.variables and (varname + ':?') in self.variables:
name = varname + ':?'
fact = int(str(idtset)[0]) - 1
delta = self.variables[varname + '+?']
elif (varname + '?+') in self.variables and (varname + '?:') in self.variables:
name = varname + '?:'
fact = int(str(idtset)[1]) - 1
delta = self.variables[varname + '?+']
if name == '' and varname in self.variables:
name = varname
# print 'varname=',varname,'name=',name
# Get the value of the abinit variable
if name != '':
if isinstance(self.variables[name], list):
npvalue = list(np.array(self.variables[name]) + fact * np.array(delta))
else:
npvalue = self.variables[name] + fact * delta
elif (varname + ":") in self.variables and (varname + "+") in self.variables:
if isinstance(self.variables[varname + ":"], list):
npvalue = list(np.array(self.variables[varname + ":"]) +
(idtset - 1) * np.array(self.variables[varname + "+"]))
else:
npvalue = self.variables[varname + ":"] + (idtset - 1) * self.variables[varname + "+"]
else:
npvalue = None
if isinstance(npvalue, (int, float)) and return_iterable:
npvalue = [npvalue]
return npvalue
def set_value(self, varname, value, idtset=''):
"""
Set the value 'value' into the dictionary
input with key 'varname'+str(idtset)
The value can be an integer, float, list
or numpy array; the internal representation
is always serializable.
"""
if isinstance(value, (int, float)):
npvalue = value
elif isinstance(value, np.ndarray):
if value[0].dtype == np.dtype('>f8'):
npvalue = [round(x, 11) for x in value.flatten()]
elif value[0].dtype == np.dtype('>i4'):
npvalue = [int(x) for x in value.flatten()]
else:
npvalue = list(value)
else:
npvalue = list(value)
if idtset == '':
self.variables[varname] = npvalue
else:
self.variables[varname + str(idtset)] = npvalue
def merge(abi_into, abi_from, filename=None):
"""
For each variable present in the file 'abi_into', try to recover
its value from 'abi_from'; if the value has changed, it is updated.
The new variables will be saved in the given filename.
If filename is None the new values will overwrite
abi_into
Example:
merge('abinit.in','abinit_xo_OUT.nc')
It will update the abinit input with the values of the output
:param abi_into: (str) Merge abinit variables using this destination
:param abi_from: (str) and this source
:param filename: (str) Storing the final input variables on file
"""
abinit_into = AbinitInput(abi_into)
abinit_from = AbinitInput(abi_from)
for i in abinit_into.variables.keys():
if i in abinit_from.variables.keys():
abinit_into.variables[i] = abinit_from.variables[i]
if filename is None:
filename = abi_into
abinit_into.write(filename)
def xyz2input(filename):
"""
Read a .xyz file and return an ABINIT input
as an AbinitInput object
"""
abiinput = AbinitInput()
atomdict = atomic_symbol()
rf = open(filename, 'r')
natom = int(rf.readline())
typat = []
znucl = []
xangst = []
ntypat = 0
rf.readline()
data = rf.readlines()
for i in range(natom):
atom = data[i].split()
atomnumber = atomdict[atom[0]]
if atomnumber not in znucl:
ntypat += 1
znucl.append(atomnumber)
typat.append(znucl.index(atomnumber) + 1)
xangst += [float(atom[1]), float(atom[2]), float(atom[3])]
abiinput.variables['natom'] = np.array([natom])
abiinput.variables['znucl'] = np.array(znucl)
abiinput.variables['ntypat'] = np.array([ntypat])
abiinput.variables['typat'] = np.array(typat)
abiinput.variables['xcart'] = angstrom_bohr * np.array(xangst)
return abiinput
| MaterialsDiscovery/PyChemia | pychemia/code/abinit/input.py | Python | mit | 28,922 | [
"ABINIT",
"NetCDF"
] | 126270ee7bbc0a7e027aab3c7960e37f9d4d6a39b4ef2f881a059c10fb9d7509 |
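xyz2input above assigns each distinct element a type index in order of first appearance, which is exactly how ABINIT's typat/znucl pair works. A standalone sketch of that bookkeeping (keyed on symbols instead of atomic numbers for brevity; helper name hypothetical):

```python
def species_to_typat(symbols):
    """Map a list of atomic symbols to a (species, typat) pair, assigning
    type numbers in order of first appearance, as xyz2input does."""
    species = []
    typat = []
    for s in symbols:
        if s not in species:
            species.append(s)
        # 1-based index of the species this atom belongs to
        typat.append(species.index(s) + 1)
    return species, typat
```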
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# --- BEGIN_HEADER ---
#
# grid_events - event handler to monitor files and trigger actions
# Copyright (C) 2003-2015 The MiG Project lead by Brian Vinter
#
# This file is part of MiG.
#
# MiG is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# MiG is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# -- END_HEADER ---
#
"""Event handler to monitor vgrid files for creation, modification and removal
and trigger any associated actions based on rule database.
Requires watchdog module (https://pypi.python.org/pypi/watchdog).
"""
import fnmatch
import glob
import logging
import logging.handlers
import os
import re
import sys
import tempfile
import time
import threading
from shared.fileio import makedirs_rec, pickle
try:
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler, \
FileModifiedEvent, FileCreatedEvent, FileDeletedEvent, \
DirModifiedEvent, DirCreatedEvent, DirDeletedEvent
except ImportError:
print 'ERROR: the python watchdog module is required for this daemon'
sys.exit(1)
from shared.conf import get_configuration_object
from shared.defaults import valid_trigger_changes, workflows_log_name, \
workflows_log_size, workflows_log_cnt
from shared.job import fill_mrsl_template, new_job
from shared.logger import daemon_logger
from shared.serial import load
from shared.vgrid import vgrid_is_owner_or_member
# Global trigger rule dictionary with rules for all VGrids
all_rules = {}
rule_hits = {}
(_rate_limit_field, _settle_time_field) = ('rate_limit', 'settle_time')
_default_period = 'm'
_default_time = '0'
_unit_periods = {
's': 1,
'm': 60,
'h': 60 * 60,
'd': 24 * 60 * 60,
'w': 7 * 24 * 60 * 60,
}
_hits_lock = threading.Lock()
(configuration, logger) = (None, None)
def get_expand_map(trigger_path, rule, state_change):
"""Generate a dictionary with the supported variables to be expanded and
the actual expanded values based on trigger_path and rule dictionary.
"""
trigger_filename = os.path.basename(trigger_path)
trigger_dirname = os.path.dirname(trigger_path)
(prefix, extension) = os.path.splitext(trigger_filename)
expand_map = {
'+TRIGGERPATH+': trigger_path,
'+TRIGGERDIRNAME+': trigger_dirname,
'+TRIGGERFILENAME+': trigger_filename,
'+TRIGGERPREFIX+': prefix,
'+TRIGGEREXTENSION+': extension,
'+TRIGGERCHANGE+': state_change,
'+TRIGGERVGRIDNAME+': rule['vgrid_name'],
'+TRIGGERRUNAS+': rule['run_as'],
}
# TODO: provide exact expanded wildcards?
return expand_map
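The dictionary returned by get_expand_map is meant to be applied by substituting each '+NAME+' placeholder in a rule's arguments; the actual substitution happens elsewhere in the daemon. A minimal sketch of that step (hypothetical helper):

```python
def expand_variables(text, expand_map):
    """Replace every '+NAME+' placeholder in text with the value found
    in expand_map (as built by get_expand_map)."""
    for key, value in expand_map.items():
        text = text.replace(key, value)
    return text
```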
def make_fake_event(path, state):
"""Create a fake state change event for path. Looks up path to see if the
change is a directory or file.
"""
file_map = {'modified': FileModifiedEvent,
'created': FileCreatedEvent,
'deleted': FileDeletedEvent}
dir_map = {'modified': DirModifiedEvent,
'created': DirCreatedEvent, 'deleted': DirDeletedEvent}
if os.path.isdir(path):
return dir_map[state](path)
else:
return file_map[state](path)
def extract_time_in_secs(rule, field):
"""Get time in seconds for the provided free-form period field. The value is
an integer or float string with an optional unit letter appended. If no unit is
given the default period is used and if all empty the default time is used.
"""
limit_str = rule.get(field, '')
if not limit_str:
limit_str = str(_default_time)
# NOTE: format is 3(s) or 52m
# extract unit suffix letter and fall back to a raw value with default unit
unit_key = _default_period
if not limit_str[-1:].isdigit():
val_str = limit_str[:-1]
if limit_str[-1] in _unit_periods.keys():
unit_key = limit_str[-1]
else:
# print "ERROR: invalid time value %s ... fall back to defaults" % \
# limit_str
(unit_key, val_str) = (_default_period, _default_time)
else:
val_str = limit_str
try:
secs = float(val_str) * _unit_periods[unit_key]
except Exception, exc:
print 'ERROR: failed to parse time %s (%s)!' % (limit_str, exc)
secs = 0.0
secs = max(secs, 0.0)
return secs
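The accepted settle-time format is thus a number with an optional unit suffix (s, m, h, d, w), defaulting to minutes. A self-contained Python 3 sketch of the same parsing without the rule-dict plumbing (helper name hypothetical):

```python
_unit_periods = {'s': 1, 'm': 60, 'h': 60 * 60,
                 'd': 24 * 60 * 60, 'w': 7 * 24 * 60 * 60}

def parse_duration(limit_str, default_unit='m'):
    """Parse '3', '3s' or '52m' into seconds; empty or invalid input
    yields 0.0, mirroring extract_time_in_secs."""
    if not limit_str:
        return 0.0
    unit = default_unit
    if not limit_str[-1:].isdigit():
        # Peel off the trailing unit letter and validate it
        unit, limit_str = limit_str[-1], limit_str[:-1]
        if unit not in _unit_periods:
            return 0.0
    try:
        return max(float(limit_str) * _unit_periods[unit], 0.0)
    except ValueError:
        return 0.0
```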
def extract_hit_limit(rule, field):
"""Get rule rate limit as (max_hits, period_length)-tuple for provided
rate limit field where the limit kicks in when more than max_hits happened
within the last period_length seconds.
"""
limit_str = rule.get(field, '')
# NOTE: format is 3(/m) or 52/h
# split string on slash and fall back to no limit and default unit
parts = (limit_str.split('/', 1) + [_default_period])[:2]
(number, unit) = parts
if not number.isdigit():
number = '-1'
if unit not in _unit_periods.keys():
unit = _default_period
return (int(number), _unit_periods[unit])
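Rate limits use a 'hits/period' notation such as '3/m' or '52/h'; a missing or malformed count means no limit (-1) and an unknown unit falls back to the default period. A standalone sketch (hypothetical helper):

```python
def parse_rate_limit(limit_str, default_unit='m'):
    """Parse '3/m' or '52/h' into (max_hits, period_seconds), as in
    extract_hit_limit; malformed input falls back to no limit (-1)."""
    units = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400, 'w': 604800}
    # Split on the first slash and pad with the default unit
    parts = (limit_str.split('/', 1) + [default_unit])[:2]
    number, unit = parts
    if not number.isdigit():
        number = '-1'
    if unit not in units:
        unit = default_unit
    return int(number), units[unit]
```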
def update_rule_hits(
rule,
path,
change,
ref,
time_stamp,
):
"""Update rule hits history with event and remove expired entries. Makes
sure to neither expire events needed for rate limit nor settle time
checking.
"""
(_, hit_period) = extract_hit_limit(rule, _rate_limit_field)
settle_period = extract_time_in_secs(rule, _settle_time_field)
logger.debug('update rule hits at %s for %s and %s %s %s'
% (time_stamp, rule, path, change, ref))
_hits_lock.acquire()
rule_history = rule_hits.get(rule['rule_id'], [])
rule_history.append((path, change, ref, time_stamp))
max_period = max(hit_period, settle_period)
period_history = [i for i in rule_history if time_stamp - i[3]
<= max_period]
rule_hits[rule['rule_id']] = period_history
_hits_lock.release()
logger.debug('updated rule hits for %s to %s' % (rule['rule_id'],
period_history))
def get_rule_hits(rule, limit_field):
"""find rule hit details"""
if limit_field == _rate_limit_field:
(hit_count, hit_period) = extract_hit_limit(rule, limit_field)
elif limit_field == _settle_time_field:
(hit_count, hit_period) = (1, extract_time_in_secs(rule,
limit_field))
_hits_lock.acquire()
rule_history = rule_hits.get(rule['rule_id'], [])
res = (rule_history, hit_count, hit_period)
_hits_lock.release()
logger.debug('get_rule_hits found %s' % (res, ))
return res
def get_path_hits(rule, path, limit_field):
"""find path hit details"""
(rule_history, hit_count, hit_period) = get_rule_hits(rule,
limit_field)
path_history = [i for i in rule_history if i[0] == path]
return (path_history, hit_count, hit_period)
def above_path_limit(
rule,
path,
limit_field,
time_stamp,
):
"""Check path trigger history against limit field and return boolean
indicating if the rate limit or settle time should kick in.
"""
(path_history, hit_count, hit_period) = get_path_hits(rule, path,
limit_field)
if hit_count <= 0 or hit_period <= 0:
logger.debug('no %s limit set' % limit_field)
return False
period_history = [i for i in path_history if time_stamp - i[3]
<= hit_period]
logger.debug('above path %s test found %s vs %d' % (limit_field,
period_history, hit_count))
if len(period_history) >= hit_count:
return True
return False
def show_path_hits(rule, path, limit_field):
"""Return path hit details for printing"""
msg = ''
(path_history, hit_count, hit_period) = get_path_hits(rule, path,
limit_field)
msg += \
'found %d entries in trigger history and limit is %d per %s s' \
% (len(path_history), hit_count, hit_period)
return msg
def wait_settled(
rule,
path,
change,
settle_secs,
time_stamp,
):
"""Lookup recent change events on path and check if settle_secs passed
since last one. Returns the number of seconds needed without further
events for changes to be considered settled.
"""
limit_field = _settle_time_field
(path_history, _, hit_period) = get_path_hits(rule, path,
limit_field)
period_history = [i for i in path_history if time_stamp - i[3]
<= hit_period]
logger.debug('wait_settled: path %s, change %s, settle_secs %s'
% (path, change, settle_secs))
if not period_history:
remain = 0.0
else:
# NOTE: the time_stamp - i[3] values are non-negative here
# since hit_period >= 0.
# Thus we can just take the smallest and subtract from settle_secs
# to always wait the remaining part of settle_secs.
remain = settle_secs - min([time_stamp - i[3] for i in
period_history])
logger.debug('wait_settled: remain %.1f , period_history %s'
% (remain, period_history))
return remain
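The remaining wait computed by wait_settled is settle_secs minus the age of the newest event still inside the window, or zero when no recent events exist. A minimal sketch of that arithmetic (hypothetical helper taking raw timestamps instead of the rule-history tuples):

```python
def remaining_settle(event_stamps, settle_secs, now):
    """Seconds left before changes count as settled: settle_secs minus
    the age of the newest event inside the window (0.0 if none)."""
    ages = [now - t for t in event_stamps if now - t <= settle_secs]
    if not ages:
        return 0.0
    return settle_secs - min(ages)
```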
def recently_modified(path, time_stamp, slack=2.0):
"""Check if path was actually recently modified and not just accessed.
If atime and mtime are the same or if mtime is within slack from time_stamp
we accept it as recently changed.
"""
try:
stat_res = os.stat(path)
result = stat_res.st_mtime == stat_res.st_atime \
or stat_res.st_mtime > time_stamp - slack
except OSError, ex:
# If we get an OSError, *path* is most likely deleted
result = True
logger.debug('OSError: %s' % str(ex))
return result
def map_args_to_vars(var_list, arg_list):
"""Map command args to backend var names - if more args than vars we
assume variable length on the first arg:
zip src1 src2 src3 dst -> src: [src1, src2, src3], dst: [dst]
"""
args_dict = dict(zip(var_list, [[] for _ in var_list]))
remain_vars = [i for i in var_list]
remain_args = [i for i in arg_list]
while remain_args:
args_dict[remain_vars[0]].append(remain_args[0])
del remain_args[0]
if len(remain_args) < len(remain_vars):
del remain_vars[0]
return args_dict
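When there are more arguments than backend variables, the first variable greedily absorbs the surplus, exactly as the docstring's zip example shows. An equivalent standalone sketch (hypothetical helper):

```python
def map_args(var_list, arg_list):
    """Greedy left mapping as in map_args_to_vars: the first variable
    collects extra arguments until only one argument remains per
    remaining variable."""
    result = {v: [] for v in var_list}
    vars_left = list(var_list)
    for idx, arg in enumerate(arg_list):
        result[vars_left[0]].append(arg)
        # Advance to the next variable once the surplus is consumed
        if len(arg_list) - idx - 1 < len(vars_left):
            del vars_left[0]
    return result
```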
def run_command(
command_list,
target_path,
rule,
configuration,
):
"""Run backend command built from command_list on behalf of user from
rule and with args mapped to the backend variables.
"""
# TODO: add all ops with effect here!
command_map = {
'pack': ['src', 'dst'],
'unpack': ['src', 'dst'],
'zip': ['src', 'dst'],
'unzip': ['src', 'dst'],
'tar': ['src', 'dst'],
'untar': ['src', 'dst'],
'cp': ['src', 'dst'],
'mv': ['src', 'dst'],
'rm': ['path'],
'rmdir': ['path'],
'truncate': ['path'],
'touch': ['path'],
'mkdir': ['path'],
'submit': ['path'],
'canceljob': ['job_id'],
'resubmit': ['job_id'],
'jobaction': ['job_id', 'action'],
'liveio': ['action', 'src', 'dst', 'job_id'],
'mqueue': ['queue', 'action', 'msg_id', 'msg'],
}
logger.info('run command for %s: %s' % (target_path, command_list))
if not command_list or not command_list[0] in command_map:
raise ValueError('unsupported command: %s' % command_list[0])
function = command_list[0]
args_form = command_map[function]
client_id = rule['run_as']
command_str = ' '.join(command_list)
logger.debug('run %s on behalf of %s' % (command_str, client_id))
user_arguments_dict = map_args_to_vars(args_form, command_list[1:])
logger.debug('import main from %s' % function)
main = id
txt_format = id
try:
exec 'from shared.functionality.%s import main' % function
exec 'from shared.output import txt_format'
logger.debug('run %s on %s and %s' % (function, client_id,
user_arguments_dict))
# Fake HTTP POST
os.environ['REQUEST_METHOD'] = 'POST'
(output_objects, (ret_code, ret_msg)) = main(client_id,
user_arguments_dict)
except Exception, exc:
logger.error('failed to run %s on %s: %s' % (function,
user_arguments_dict, exc))
raise exc
logger.info('done running command for %s: %s' % (target_path,
command_str))
logger.debug('raw output is: %s' % output_objects)
try:
txt_out = txt_format(configuration, ret_code, ret_msg,
output_objects)
except Exception, exc:
txt_out = 'internal command output text formatting failed'
logger.error('text formatting failed: %s\nraw output is: %s %s %s'
% (exc, ret_code, ret_msg, output_objects))
if ret_code != 0:
raise Exception('command error: %s' % txt_out)
logger.info('result was %s : %s:\n%s' % (ret_code, ret_msg,
txt_out))
class MiGRuleEventHandler(PatternMatchingEventHandler):
"""Rule pattern-matching event handler to take care of VGrid rule changes
and update the global rule database.
"""
def __init__(
self,
patterns=None,
ignore_patterns=None,
ignore_directories=False,
case_sensitive=False,
):
"""Constructor"""
PatternMatchingEventHandler.__init__(self, patterns,
ignore_patterns, ignore_directories, case_sensitive)
def update_rules(self, event):
"""Handle all rule updates"""
state = event.event_type
src_path = event.src_path
if event.is_directory:
logger.debug('skip rule update for directory: %s'
% src_path)
return
logger.debug('%s rule file: %s' % (state, src_path))
rel_path = \
src_path.replace(os.path.join(configuration.vgrid_home, ''
), '')
vgrid_name = rel_path.replace(os.sep
+ configuration.vgrid_triggers, '')
vgrid_prefix = os.path.join(configuration.vgrid_files_home,
vgrid_name, '')
logger.info('refresh %s rules from %s' % (vgrid_name, src_path))
try:
new_rules = load(src_path)
except Exception, exc:
new_rules = []
if state != 'deleted':
logger.error('failed to load event handler rules from %s (%s)'
% (src_path, exc))
logger.info("loaded new rules from '%s':\n%s" % (src_path,
new_rules))
# Remove all old rules for this vgrid and
# leave rules for parent and sub-vgrids
for target_path in all_rules.keys():
remain_rules = [i for i in all_rules[target_path]
if i['vgrid_name'] != vgrid_name]
if remain_rules:
all_rules[target_path] = remain_rules
logger.debug('remain_rules for: %s \n%s'
% (target_path, remain_rules))
else:
logger.debug('removing rules for: %s ' % target_path)
del all_rules[target_path]
for entry in new_rules:
rule_id = entry['rule_id']
path = entry['path']
logger.info('updating rule: %s, path: %s, entry:\n%s'
% (rule_id, path, entry))
abs_path = os.path.join(vgrid_prefix, path)
all_rules[abs_path] = all_rules.get(abs_path, []) + [entry]
logger.info('all rules:\n%s' % all_rules)
def on_modified(self, event):
"""Handle modified rule file"""
self.update_rules(event)
def on_created(self, event):
"""Handle new rule file"""
self.update_rules(event)
def on_deleted(self, event):
"""Handle deleted rule file"""
self.update_rules(event)
class MiGFileEventHandler(PatternMatchingEventHandler):
"""File pattern-matching event handler to take care of VGrid file changes
and the corresponding action triggers.
"""
def __init__(
self,
patterns=None,
ignore_patterns=None,
ignore_directories=False,
case_sensitive=False,
):
"""Constructor"""
PatternMatchingEventHandler.__init__(self, patterns,
ignore_patterns, ignore_directories, case_sensitive)
def __workflow_log(
self,
configuration,
vgrid_name,
msg,
level='info',
):
"""Wrapper to send a single msg to vgrid workflows page log file"""
log_name = '%s.%s' % (configuration.vgrid_triggers,
workflows_log_name)
log_path = os.path.join(configuration.vgrid_home, vgrid_name,
log_name)
workflows_logger = logging.getLogger('workflows')
workflows_logger.setLevel(logging.INFO)
handler = logging.handlers.RotatingFileHandler(log_path,
maxBytes=workflows_log_size,
backupCount=workflows_log_cnt - 1)
formatter = \
logging.Formatter('%(asctime)s %(levelname)s %(message)s')
handler.setFormatter(formatter)
workflows_logger.addHandler(handler)
if level == 'error':
workflows_logger.error(msg)
elif level == 'warning':
workflows_logger.warning(msg)
else:
workflows_logger.info(msg)
handler.flush()
handler.close()
workflows_logger.removeHandler(handler)
def __workflow_err(
self,
configuration,
vgrid_name,
msg,
):
"""Wrapper to send a single error msg to vgrid workflows page log"""
self.__workflow_log(configuration, vgrid_name, msg, 'error')
def __workflow_warn(
self,
configuration,
vgrid_name,
msg,
):
"""Wrapper to send a single warning msg to vgrid workflows page log"""
self.__workflow_log(configuration, vgrid_name, msg, 'warning')
def __workflow_info(
self,
configuration,
vgrid_name,
msg,
):
        """Wrapper to send a single info msg to vgrid workflows page log"""
self.__workflow_log(configuration, vgrid_name, msg, 'info')
def __add_trigger_job_ent(
self,
configuration,
event,
rule,
jobid,
):
result = True
vgrid_name = rule['vgrid_name']
        trigger_job_dir = os.path.join(configuration.vgrid_home, vgrid_name,
                                       '.%s.jobs'
                                       % configuration.vgrid_triggers,
                                       'pending_states')
trigger_job_filepath = os.path.join(trigger_job_dir, jobid)
if makedirs_rec(trigger_job_dir, configuration):
trigger_job_dict = {
'jobid': jobid,
'owner': rule['run_as'],
'rule': rule,
'event': {},
}
src_path = ''
if hasattr(event, 'src_path'):
src_path = event.src_path
dest_path = ''
if hasattr(event, 'dest_path'):
dest_path = event.dest_path
trigger_job_dict['event']['src_path'] = src_path
trigger_job_dict['event']['dest_path'] = dest_path
trigger_job_dict['event']['time_stamp'] = event.time_stamp
trigger_job_dict['event']['event_type'] = event.event_type
trigger_job_dict['event']['is_directory'] = \
event.is_directory
logger.debug('trigger_job_dict: %s' % trigger_job_dict)
if not pickle(trigger_job_dict, trigger_job_filepath,
logger):
result = False
else:
logger.error('Failed to create trigger job dir: %s'
% trigger_job_dir)
result = False
return result
def __handle_trigger(
self,
event,
target_path,
rule,
):
"""Actually handle valid trigger for a specific event and the
corresponding target_path pattern and trigger rule.
"""
state = event.event_type
src_path = event.src_path
time_stamp = event.time_stamp
_chain = getattr(event, '_chain', [(src_path, state)])
base_dir = configuration.vgrid_files_home
rel_src = src_path.replace(base_dir, '').lstrip(os.sep)
vgrid_prefix = os.path.join(base_dir, rule['vgrid_name'])
logger.info('in handling of %s for %s %s' % (rule['action'],
state, rel_src))
above_limit = False
# Run settle time check first to only trigger rate limit if settled
for (name, field) in [('settle time', _settle_time_field),
('rate limit', _rate_limit_field)]:
if above_path_limit(rule, src_path, field, time_stamp):
above_limit = True
logger.warning('skip %s due to %s: %s' % (src_path,
name, show_path_hits(rule, src_path,
field)))
self.__workflow_warn(configuration, rule['vgrid_name'],
'skip %s trigger due to %s: %s' % (rel_src,
name, show_path_hits(rule, src_path, field)))
break
# TODO: consider if we should skip modified when just created
# We receive modified events even when only atime changed - ignore them
if state == 'modified' and not recently_modified(src_path,
time_stamp):
logger.info('skip %s which only changed atime' % src_path)
self.__workflow_info(configuration, rule['vgrid_name'],
'skip %s modified access time only event'
% rel_src)
return
# Always update here to get trigger hits even for limited events
update_rule_hits(rule, src_path, state, '', time_stamp)
if above_limit:
return
logger.info('proceed with handling of %s for %s %s'
% (rule['action'], state, rel_src))
self.__workflow_info(configuration, rule['vgrid_name'],
'handle %s for %s %s' % (rule['action'],
state, rel_src))
settle_secs = extract_time_in_secs(rule, _settle_time_field)
if settle_secs > 0.0:
wait_secs = settle_secs
else:
wait_secs = 0.0
logger.debug('no settle time for %s (%s)' % (target_path,
rule))
while wait_secs > 0.0:
logger.info('wait %.1fs for %s file events to settle down'
% (wait_secs, src_path))
self.__workflow_info(configuration, rule['vgrid_name'],
'wait %.1fs for events on %s to settle'
% (wait_secs, rel_src))
time.sleep(wait_secs)
logger.debug('slept %.1fs for %s file events to settle down'
% (wait_secs, src_path))
time_stamp += wait_secs
wait_secs = wait_settled(rule, src_path, state,
settle_secs, time_stamp)
# TODO: perhaps we should discriminate on files and dirs here?
if rule['action'] in ['trigger-%s' % i for i in
valid_trigger_changes]:
change = rule['action'].replace('trigger-', '')
# Expand dynamic variables in argument once and for all
expand_map = get_expand_map(rel_src, rule, state)
for argument in rule['arguments']:
filled_argument = argument
for (key, val) in expand_map.items():
filled_argument = filled_argument.replace(key, val)
self.__workflow_info(configuration, rule['vgrid_name'],
'expanded argument %s to %s' % (argument,
filled_argument))
pattern = os.path.join(vgrid_prefix, filled_argument)
for path in glob.glob(pattern):
rel_path = \
path.replace(configuration.vgrid_files_home, '')
_chain += [(path, change)]
# Prevent obvious trigger chain cycles
if (path, change) in _chain[:-1]:
flat_chain = ['%s : %s' % pair for pair in
_chain]
chain_str = ' <-> '.join(flat_chain)
rel_chain_str = \
chain_str.replace(configuration.vgrid_files_home,
'')
logger.warning('breaking trigger cycle %s'
% chain_str)
self.__workflow_warn(configuration,
rule['vgrid_name'],
'breaking trigger cycle %s'
% rel_chain_str)
continue
fake = make_fake_event(path, change)
fake._chain = _chain
logger.info('trigger %s event on %s' % (change,
path))
self.__workflow_info(configuration,
rule['vgrid_name'], 'trigger %s event on %s'
% (change, rel_path))
self.handle_event(fake)
elif rule['action'] == 'submit':
mrsl_fd = tempfile.NamedTemporaryFile(delete=False)
mrsl_path = mrsl_fd.name
# Expand dynamic variables in argument once and for all
expand_map = get_expand_map(rel_src, rule, state)
try:
for job_template in rule['templates']:
mrsl_fd.truncate(0)
if not fill_mrsl_template(
job_template,
mrsl_fd,
rel_src,
state,
rule,
expand_map,
configuration,
):
raise Exception('fill template failed')
logger.debug('filled template for %s in %s'
% (target_path, mrsl_path))
(success, msg, jobid) = new_job(mrsl_path,
rule['run_as'], configuration, False,
returnjobid=True)
if success:
self.__add_trigger_job_ent(configuration,
event, rule, jobid)
logger.info('submitted job for %s: %s'
% (target_path, msg))
self.__workflow_info(configuration,
rule['vgrid_name'],
'submitted job for %s: %s' % (rel_src,
msg))
else:
raise Exception(msg)
except Exception, exc:
logger.error('failed to submit job(s) for %s: %s'
% (target_path, exc))
self.__workflow_err(configuration, rule['vgrid_name'],
'failed to submit job for %s: %s'
% (rel_src, exc))
try:
os.remove(mrsl_path)
except Exception, exc:
logger.warning('clean up after submit failed: %s' % exc)
elif rule['action'] == 'command':
# Expand dynamic variables in argument once and for all
expand_map = get_expand_map(rel_src, rule, state)
command_str = ''
command_list = (rule['arguments'])[:1]
for argument in (rule['arguments'])[1:]:
filled_argument = argument
for (key, val) in expand_map.items():
filled_argument = filled_argument.replace(key, val)
self.__workflow_info(configuration, rule['vgrid_name'],
'expanded argument %s to %s' % (argument,
filled_argument))
                command_list.append(filled_argument)
            command_str = ' '.join(command_list)
try:
run_command(command_list, target_path, rule,
configuration)
self.__workflow_info(configuration, rule['vgrid_name'],
'ran command: %s' % ' '.join(command_list))
except Exception, exc:
logger.error('failed to run command for %s: %s (%s)'
% (target_path, command_str, exc))
self.__workflow_err(configuration, rule['vgrid_name'],
'failed to run command for %s: %s (%s)'
% (rel_src, command_str, exc))
else:
logger.error('unsupported action: %(action)s' % rule)
def run_handler(self, event):
"""Trigger any rule actions bound to file state change"""
state = event.event_type
src_path = event.src_path
is_directory = event.is_directory
logger.info('got %s event for path: %s' % (state, src_path))
logger.debug('filter %s against %s' % (all_rules.keys(),
src_path))
# Each target_path pattern has one or more rules associated
for (target_path, rule_list) in all_rules.items():
# Do not use ordinary fnmatch as it lets '*' match anything
# including '/' which leads to greedy matching in subdirs
recursive_regexp = fnmatch.translate(target_path)
direct_regexp = recursive_regexp.replace('.*', '[^/]*')
recursive_hit = re.match(recursive_regexp, src_path)
direct_hit = re.match(direct_regexp, src_path)
if direct_hit or recursive_hit:
logger.debug('matched %s for %s and/or %s' % (src_path,
direct_regexp, recursive_regexp))
for rule in rule_list:
# user may have been removed from vgrid - log and ignore
if not vgrid_is_owner_or_member(rule['vgrid_name'],
rule['run_as'], configuration):
logger.warning('no such user in vgrid: %(run_as)s'
% rule)
continue
# Rules may listen for only file or dir events and with
# recursive directory search
if is_directory and not rule.get('match_dirs',
False):
logger.debug('skip event %s handling for dir: %s'
% (rule['rule_id'], src_path))
continue
if not is_directory and not rule.get('match_files',
True):
logger.debug('skip %s event handling for file: %s'
% (rule['rule_id'], src_path))
continue
                    if not direct_hit and not rule.get('match_recursive',
                                                       False):
logger.debug('skip %s recurse event handling for: %s'
% (rule['rule_id'], src_path))
continue
                    if state not in rule['changes']:
logger.info('skip %s %s event handling for: %s'
% (rule['rule_id'], state,
src_path))
continue
                    logger.info('trigger %s for %s: %s'
                                % (rule['action'], src_path, rule))
self.__handle_trigger(event, target_path, rule)
else:
logger.debug('skip %s with no matching rules'
% target_path)
def handle_event(self, event):
"""Handle an event in the background so that it can block without
stopping further event handling.
We add a time stamp to have a sort of precise time for when the event
was received. Still not perfect but better than comparing with 'now'
values obtained deeply in handling calls.
"""
event.time_stamp = time.time()
        worker = threading.Thread(target=self.run_handler, args=(event,))
worker.daemon = True
worker.start()
def on_modified(self, event):
"""Handle modified files"""
self.handle_event(event)
def on_created(self, event):
"""Handle created files"""
self.handle_event(event)
def on_deleted(self, event):
"""Handle deleted files"""
self.handle_event(event)
def on_moved(self, event):
"""Handle moved files: we translate a move to a created and a deleted
event since the single event with src and dst does not really fit our
model all that well.
"""
        for (change, path) in [('created', event.dest_path),
                               ('deleted', event.src_path)]:
fake = make_fake_event(path, change)
self.handle_event(fake)
if __name__ == '__main__':
print '''This is the MiG event handler daemon which monitors VGrid files
and triggers any configured events when target files are created, modified or
deleted. VGrid owners can configure rules to trigger such events based on file
changes.
Set the MIG_CONF environment variable to the server configuration path
unless it is available in mig/server/MiGserver.conf
'''
configuration = get_configuration_object()
# Use separate logger
logger = daemon_logger('events', configuration.user_events_log,
configuration.loglevel)
keep_running = True
print 'Starting Event handler daemon - Ctrl-C to quit'
logger.info('Starting Event handler daemon')
logger.info('initializing rule listener')
# Monitor rule configurations
rule_monitor = Observer()
rule_patterns = [os.path.join(configuration.vgrid_home, '*',
configuration.vgrid_triggers)]
rule_handler = MiGRuleEventHandler(patterns=rule_patterns,
ignore_directories=False, case_sensitive=True)
rule_monitor.schedule(rule_handler, configuration.vgrid_home,
recursive=True)
rule_monitor.start()
logger.info('initializing file listener - may take some time')
# monitor actual files to handle events for
file_monitor = Observer()
file_patterns = [os.path.join(configuration.vgrid_files_home, '*')]
file_handler = MiGFileEventHandler(patterns=file_patterns,
ignore_directories=False, case_sensitive=True)
file_monitor.schedule(file_handler, configuration.vgrid_files_home,
recursive=True)
file_monitor.start()
logger.info('trigger rule refresh')
# Fake touch event on all rule files to load initial rules
logger.info('trigger load on all rule files (greedy) matching %s'
% rule_patterns[0])
# We manually walk and test to get the greedy "*" directory match behaviour
# of the PatternMatchingEventHandler
all_trigger_rules = []
for (root, _, files) in os.walk(configuration.vgrid_home):
if configuration.vgrid_triggers in files:
rule_path = os.path.join(root, configuration.vgrid_triggers)
all_trigger_rules.append(rule_path)
for rule_path in all_trigger_rules:
logger.debug('trigger load on rules in %s' % rule_path)
rule_handler.dispatch(FileModifiedEvent(rule_path))
logger.debug('loaded initial rules:\n%s' % all_rules)
logger.info('ready to handle triggers')
while keep_running:
try:
# Throttle down
time.sleep(1)
except KeyboardInterrupt:
keep_running = False
rule_monitor.stop()
file_monitor.stop()
except Exception, exc:
print 'Caught unexpected exception: %s' % exc
rule_monitor.join()
file_monitor.join()
print 'Event handler daemon shutting down'
sys.exit(0)
| heromod/migrid | mig/server/grid_events.py | Python | gpl-2.0 | 37,280 | [
"Brian"
] | 6b3570c78f4e5f690c0149045a3665a0f31ccf073ede1810481855d6d3e829d8 |
# -*- coding: utf-8 -*-
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
This package defines miscellaneous units. They are also
available in the `astropy.units` namespace.
"""
from . import si
from astropy.constants import si as _si
from .core import (UnitBase, def_unit, si_prefixes, binary_prefixes,
set_enabled_units)
# To ensure si units of the constants can be interpreted.
set_enabled_units([si])
import numpy as _numpy
_ns = globals()
###########################################################################
# AREAS
def_unit(['barn', 'barn'], 10 ** -28 * si.m ** 2, namespace=_ns, prefixes=True,
doc="barn: unit of area used in HEP")
###########################################################################
# ANGULAR MEASUREMENTS
def_unit(['cycle', 'cy'], 2.0 * _numpy.pi * si.rad,
namespace=_ns, prefixes=False,
doc="cycle: angular measurement, a full turn or rotation")
def_unit(['spat', 'sp'], 4.0 * _numpy.pi * si.sr,
namespace=_ns, prefixes=False,
doc="spat: the solid angle of the sphere, 4pi sr")
##########################################################################
# PRESSURE
def_unit(['bar'], 1e5 * si.Pa, namespace=_ns,
prefixes=[(['m'], ['milli'], 1.e-3)],
doc="bar: pressure")
# The torr is almost the same as mmHg but not quite.
# See https://en.wikipedia.org/wiki/Torr
# Define the unit here despite it not being an astrophysical unit.
# It may be moved if more similar units are created later.
def_unit(['Torr', 'torr'], _si.atm.value/760. * si.Pa, namespace=_ns,
prefixes=[(['m'], ['milli'], 1.e-3)],
doc="Unit of pressure based on an absolute scale, now defined as "
"exactly 1/760 of a standard atmosphere")
###########################################################################
# MASS
def_unit(['M_p'], _si.m_p, namespace=_ns, doc="Proton mass",
format={'latex': r'M_{p}', 'unicode': 'Mₚ'})
def_unit(['M_e'], _si.m_e, namespace=_ns, doc="Electron mass",
format={'latex': r'M_{e}', 'unicode': 'Mₑ'})
# Unified atomic mass unit
def_unit(['u', 'Da', 'Dalton'], _si.u, namespace=_ns,
prefixes=True, exclude_prefixes=['a', 'da'],
doc="Unified atomic mass unit")
###########################################################################
# COMPUTER
def_unit((['bit', 'b'], ['bit']), namespace=_ns,
prefixes=si_prefixes + binary_prefixes)
def_unit((['byte', 'B'], ['byte']), 8 * bit, namespace=_ns,
format={'vounit': 'byte'},
prefixes=si_prefixes + binary_prefixes,
exclude_prefixes=['d'])
def_unit((['pix', 'pixel'], ['pixel']),
format={'ogip': 'pixel', 'vounit': 'pixel'},
namespace=_ns, prefixes=True)
def_unit((['vox', 'voxel'], ['voxel']),
format={'fits': 'voxel', 'ogip': 'voxel', 'vounit': 'voxel'},
namespace=_ns, prefixes=True)
###########################################################################
# CLEANUP
del UnitBase
del def_unit
del si
###########################################################################
# DOCSTRING
# This generates a docstring for this module that describes all of the
# standard units defined here.
from .utils import generate_unit_summary as _generate_unit_summary
if __doc__ is not None:
__doc__ += _generate_unit_summary(globals())
| pllim/astropy | astropy/units/misc.py | Python | bsd-3-clause | 3,393 | [
"Dalton"
] | 993c714f2008288a714601027ad005de4d23245e3e460918c66069eb7966a3f1 |
# -*- coding: utf-8 -*-
"""This class contains an alternate implementation of the PyBEL database manager that only stores graphs in memory."""
from dataclasses import dataclass
from typing import Iterable, List
from pybel import BELGraph
@dataclass
class _Namespace:
id: int
class DictManager:
"""A dictionary-based implementation of the PyBEL Manager."""
def __init__(self):
self.universe = None
self.networks = {}
self.disease_to_id = {}
self.hash_to_node = {}
def insert_graph(self, graph: BELGraph, **_kwargs):
"""Insert a graph and return the resulting ORM object (mocked)."""
result = _Namespace(id=len(self.networks))
self.networks[result.id] = graph
return result
def get_graph_by_id(self, network_id: int) -> BELGraph:
"""Get a graph by its identifier."""
return self.networks[network_id]
def get_graphs_by_ids(self, network_ids: Iterable[int]) -> List[BELGraph]:
"""Get several graphs by their identifiers."""
return [
self.networks[network_id]
for network_id in network_ids
]
| pybel/pybel-tools | src/pybel_tools/dict_manager.py | Python | mit | 1,150 | [
"Pybel"
] | 05c275ef77a366064f37e69db384e082c78bf77429ca8ddc3322d9890beeaa53 |
from twisted.internet import defer
from nevow import livepage, loaders, tags, rend, static, entities, util
from nevow.livepage import js
testFrameNode = js.testFrameNode
contentDocument = testFrameNode.contentDocument
gid = contentDocument.getElementById
XPathResult = js.XPathResult
null = js.null
class xpath(object):
def __init__(self, path):
self.path = path
def __repr__(self):
return 'nevow.livetest.xpath(%r)' % (self.path, )
def _asjs(self, localName):
yield livepage.var(
js.targetXPathResult,
contentDocument.evaluate(self.path,
contentDocument,
null,
XPathResult.ANY_TYPE,
null)), livepage.eol
yield livepage.var(
localName,
js.targetXPathResult.iterateNext()), livepage.eol
class Driver(object):
def __init__(self, handle, suite):
self.handle = handle
self.suite = list(suite)
self.results = {}
self.state = 0
self.iterator = self.drive()
self.nextTest()
passes = 0
failures = 0
def drive(self):
for i, test in enumerate(self.suite):
self.state = i
action, target, parameter = test
actionCallable = getattr(self, 'action_%s' % (action, ), None)
if actionCallable is not None:
test = actionCallable(target, parameter)
if test is not None:
yield test
self.handle.send(livepage.set('test-status', 'Complete'))
def nextTest(self):
try:
test = self.iterator.next()
except StopIteration:
return
self.handle.send(test)
def passed(self):
self.results[self.state] = True
self.passes += 1
self.handle.send(js.passed(self.state))
self.nextTest()
def failed(self, text=''):
self.results[self.state] = False
self.failures += 1
self.handle.send(js.failed(self.state, text))
self.nextTest()
def checkException(self):
def continueTests(ctx, status):
if status == 'passed':
self.passed()
else:
self.failed()
continuer = self.handle.transient(continueTests)
return livepage.anonymous(
[livepage.js("if (testFrameNode.contentDocument.title != 'Exception') {\n\t"),
continuer('passed'),
livepage.js("\n} else {\n\t"),
continuer('failed'),
livepage.js("\n}")])
def action_visit(self, target, param):
yield js.addLoadObserver(self.checkException()), livepage.eol
yield js.setContentLocation(target), livepage.eol
def action_assert(self, target, param):
def doAssert(ctx, actual):
if param == actual:
self.passed()
else:
self.failed("%r != %r" % (param, actual))
if isinstance(target, xpath):
yield target._asjs(js.targetNode)
else:
yield livepage.var(js.targetNode, gid(target)), livepage.eol
yield self.handle.transient(
doAssert, js.targetNode.innerHTML)
def action_value(self, target, param):
def doAssert(ctx, actual):
if param == actual:
self.passed()
else:
self.failed()
if isinstance(target, xpath):
yield target._asjs(js.targetNode)
else:
yield livepage.var(js.targetNode, gid(target)), livepage.eol
yield self.handle.transient(
doAssert, js.targetNode.value)
def action_follow(self, target, param):
if isinstance(target, xpath):
yield target._asjs(js.targetNode)
else:
yield livepage.var(js.targetNode, gid(target)), livepage.eol
yield [
js.addLoadObserver(self.checkException()),
livepage.eol,
js.setContentLocation(js.targetNode.href)]
def action_post(self, target, param):
def passed(ctx):
self.passed()
if isinstance(target, xpath):
yield target._asjs(js.targetForm)
else:
yield livepage.var(js.targetForm, contentDocument[target]), livepage.eol
yield livepage.var(js.postTarget, js.targetForm.action), livepage.eol
for key, value in param.items():
yield livepage.assign(js.targetForm[key].value, value), livepage.eol
yield js.addLoadObserver(
livepage.anonymous(
self.handle.transient(passed))), livepage.eol
yield js.sendSubmitEvent(js.targetForm, livepage.anonymous(js))
def action_submit(self, target, param):
"""This should only be used with livepage, to simulate an onsubmit.
It could be possible to make this work when not testing a livepage
app, using a monstrosity similar to that used by action_click, below.
"""
def passed(ctx):
self.passed()
if isinstance(target, xpath):
yield target._asjs(js.targetForm)
else:
yield livepage.var(js.targetForm, contentDocument[target]), livepage.eol
yield livepage.var(js.postTarget, js.targetForm.action), livepage.eol
for key, value in param.items():
yield livepage.assign(js.targetForm[key].value, value), livepage.eol
yield livepage.var(
js.inputListener,
contentDocument.defaultView.listenForInputEvents(
livepage.anonymous(
self.handle.transient(passed)))), livepage.eol
yield js.sendSubmitEvent(
js.targetForm,
livepage.anonymous(
contentDocument.defaultView.stopListening(js.inputListener)))
def action_click(self, target, param):
"""TODO: Either decide that this should only be used in the presence
of a real, honest-to-god livepage app, or figure out some way to simplify
this monstrosity.
"""
def passed(ctx):
self.passed()
if isinstance(target, xpath):
yield target._asjs(js.targetNode)
else:
yield livepage.var(js.targetNode, gid(target)), livepage.eol
## If the testee is using livepage, we don't want the test to pass
## until all input events (and the response javascript from these
## input events) have passed. To do this we use listenForInputEvents,
## passing a continuation function which will be called when all input
## event responses have been evaluated. We call stopListening
## immediately after sending the click event. This means we
## start listening for input events, simulate the click, then stop listening.
## If any input events were initiated during the click, our test only passes
## when all event responses have been processed.
## If we are not using livepage, listenForInputEvents will not be defined.
## Because it is hard to do javascript tests (if statement) from python,
## ifTesteeUsingLivePage has been defined in livetest-postscripts.
testDidPass = self.handle.transient(passed)
yield [
js.ifTesteeUsingLivePage(
## Using livepage
livepage.anonymous(
livepage.assign(
## Save the listener in a variable so we can stop listening later
js.inputListener,
contentDocument.defaultView.listenForInputEvents(
## When all observed events complete, continue running tests
livepage.anonymous(
testDidPass)))),
## Not using livepage; do nothing here.
livepage.anonymous('')), livepage.eol,
js.sendClickEvent(
## Click our node.
js.targetNode,
## Immediately after clicking the node, run this stuff.
livepage.anonymous(
js.ifTesteeUsingLivePage(
## We're done clicking the node, and we're using livepage.
## Stop listening for input events. This will fire the continuation
## immediately if no input events were observed; otherwise it
## will wait for all responses to be evaluated before firing the
## continuation.
livepage.anonymous(contentDocument.defaultView.stopListening(js.inputListener)),
## We're done clicking the node, and we are not using livepage.
## Call testDidPass.
livepage.anonymous(testDidPass))))]
def action_call(self, target, param):
# Import reactor here to avoid installing default at startup
from twisted.internet import reactor
def doit():
target(self.handle, *param).addCallback(
lambda result: self.passed()
).addErrback(
lambda result: self.failed())
reactor.callLater(0, doit)
return ''
def action_fail(self, target, param):
# Import reactor here to avoid installing default at startup
from twisted.internet import reactor
def doit():
target(self.handle, *param).addCallback(
lambda result: self.failed()
).addErrback(
lambda result: self.passed())
reactor.callLater(0, doit)
class Tester(livepage.LivePage):
addSlash = True
child_css = static.File(util.resource_filename('nevow', 'livetest.css'))
child_scripts = static.File(util.resource_filename('nevow', 'livetest.js'))
child_postscripts = static.File(util.resource_filename('nevow', 'livetest-postscripts.js'))
docFactory = loaders.stan(tags.html[
tags.head[
tags.script(src="scripts"),
tags.link(rel="stylesheet", type="text/css", href="css")],
tags.body[
tags.table(id="testprogress")[
tags.tr[
tags.th["Tests"], tags.th["Pass"], tags.th["Fail"]],
tags.tr[
tags.td(id="test-status")["Running"],
tags.td(id="test-passes", _class="test-passes")[entities.nbsp],
tags.td(id="test-failures", _class="test-failures")[entities.nbsp]]],
tags.table(id="testresults", render=tags.directive('sequence'))[
tags.tr(pattern="item", render=tags.directive('test'))[
tags.td(title=tags.slot('action'))[tags.slot('action')],
tags.td(title=tags.slot('target'))[tags.slot('target')],
tags.td(title=tags.slot('parameter'))[tags.slot('parameter')]]],
tags.iframe(id="testframe", src="asdf"),
tags.script(src="postscripts"),
livepage.glue]])
def beforeRender(self, ctx):
self.testId = 0
def render_test(self, ctx, test):
ctx.tag(id=("test-", self.testId))
action, target, parameter = test
ctx.fillSlots('action', action)
ctx.fillSlots('target', str(target))
ctx.fillSlots('parameter', str(parameter))
self.testId += 1
return ctx.tag
def goingLive(self, ctx, handle):
Driver(handle, self.original)
class ChildXPath(rend.Page):
docFactory = loaders.stan(
tags.html[
tags.body[
tags.div[
tags.span[
tags.div(id='target-node-identifier')[
'expected content']]]]])
def thingThatPasses(_):
return defer.succeed(None)
def thingThatFails(_):
return defer.fail(None)
class TestTests(rend.Page):
addSlash = True
docFactory = loaders.stan(tags.html[tags.a(href="/testtests/tests/")["Run tests"]])
    child_foo = '<html><body><div id="body">foo</div><form method="POST" name="theForm" action="postTarget"><input name="blah" /></form></body></html>'
child_bar = "bar"
child_baz = '<html><body onclick="alert(event.clientX);alert( event.clientY);"><div id="body">toot</div><a id="nextPage" href="foo" onclick="alert(\'clicked\')">Foo</a></body></html>'
child_clickHandler = """<html>
<body>
<a id="theClicker" onclick="this.innerHTML='Clicked'">Click me!</a>
</body>
</html>"""
def child_postTarget(self, ctx):
return rend.Page(
docFactory=loaders.stan(
tags.html[tags.body(id="body")[str(ctx.arg('blah'))]]))
def child_testtests(self, ctx):
return self
def child_xpath(self, ctx):
return ChildXPath()
child_tests = Tester([
('visit', '/testtests/xpath', ''),
('assert', xpath('/html/body/div/span/div[@id="target-node-identifier"]'), 'expected content'),
('visit', '/testtests/foo', ''),
('visit', '/testtests/bar', ''),
('visit', '/testtests/baz', ''),
('assert', 'body', 'toot'),
('follow', 'nextPage', ''),
('assert', 'body', 'foo'),
('post', 'theForm', dict(blah="blah")),
('assert', 'body', 'blah'),
('visit', '/testtests/clickHandler', ''),
('click', 'theClicker', ''),
('assert', 'theClicker', 'Clicked'),
('call', thingThatPasses, ()),
('fail', thingThatFails, ())
])
def createResource():
return TestTests()
| UstadMobile/exelearning-ustadmobile-work | nevow/livetest.py | Python | gpl-2.0 | 13,641 | [
"VisIt"
] | 4ca6d327ea02eaf650c3f46e46c62205674e5e45a1a17c1aee1988c4602bac0d |
# coding: utf8
{
' (leave empty to detach account)': ' (leave empty to detach account)',
' Module is the main communications hub of the Sahana system. It is used to send alerts and/or messages using SMS & Email to various groups and individuals before, during and after a disaster.': ' Module is the main communications hub of the Sahana system. It is used to send alerts and/or messages using SMS & Email to various groups and individuals before, during and after a disaster.',
' by ': ' by ',
' is envisioned to be composed of several sub-modules that work together to provide complex functionality for the management of relief and project items by an organization. This includes an intake system, a warehouse management system, commodity tracking, supply chain management, fleet management, procurement, financial tracking and other asset and resource management capabilities.': ' is envisioned to be composed of several sub-modules that work together to provide complex functionality for the management of relief and project items by an organization. This includes an intake system, a warehouse management system, commodity tracking, supply chain management, fleet management, procurement, financial tracking and other asset and resource management capabilities.',
' on ': ' on ',
'"update" is an optional expression like "field1=\'newvalue\'". You cannot update or delete the results of a JOIN': '"update" is an optional expression like "field1=\'newvalue\'". You cannot update or delete the results of a JOIN',
'# of International Staff': '# of International Staff',
'# of National Staff': '# of National Staff',
'# of People Affected': '# of People Affected',
'# of People Deceased': '# of People Deceased',
'# of People Injured': '# of People Injured',
'# of Vehicles': '# of Vehicles',
'%Y-%m-%d': '%Y-%m-%d',
'%Y-%m-%d %H:%M:%S': '%Y-%m-%d %H:%M:%S',
'%s rows deleted': '%s rows deleted',
'%s rows updated': '%s rows updated',
'(Constraints Only)': '(Constraints Only)',
') & then click on the map below to adjust the Lat/Lon fields:': ') & then click on the map below to adjust the Lat/Lon fields:',
'* Required Fields': '* Required Fields',
'0-15 minutes': '0-15 minutes',
'1 Assessment': '1 Assessment',
'1 location, shorter time, can contain multiple Tasks': '1 location, shorter time, can contain multiple Tasks',
'1-3 days': '1-3 days',
'1. Fill the necessary fields in BLOCK letters.': '1. Fill the necessary fields in BLOCK letters.',
'15-30 minutes': '15-30 minutes',
'2 different options are provided here currently:': '2 different options are provided here currently:',
'2. Always use one box per letter and leave one box space to seperate words.': '2. Always use one box per letter and leave one box space to separate words.',
'2x4 Car': '2x4 Car',
'30-60 minutes': '30-60 minutes',
'4-7 days': '4-7 days',
'4x4 Car': '4x4 Car',
'8-14 days': '8-14 days',
'A Reference Document such as a file, URL or contact person to verify this data. You can type the 1st few characters of the document name to link to an existing document.': 'A Reference Document such as a file, URL or contact person to verify this data. You can type the 1st few characters of the document name to link to an existing document.',
'A Warehouse is a physical place to store items.': 'A Warehouse is a physical place to store items.',
'A Warehouse/Site is a physical location with an address and GIS data where Items are Stored. It can be a Building, a particular area in a city or anything similar.': 'A Warehouse/Site is a physical location with an address and GIS data where Items are Stored. It can be a Building, a particular area in a city or anything similar.',
'A brief description of the group (optional)': 'A brief description of the group (optional)',
'A file downloaded from a GPS containing a series of geographic points in XML format.': 'A file downloaded from a GPS containing a series of geographic points in XML format.',
'A file in GPX format taken from a GPS whose timestamps can be correlated with the timestamps on the photos to locate them on the map.': 'A file in GPX format taken from a GPS whose timestamps can be correlated with the timestamps on the photos to locate them on the map.',
'A library of digital resources, such as photos, documents and reports': 'A library of digital resources, such as photos, documents and reports',
'A place within a Site like a Shelf, room, bin number etc.': 'A place within a Site like a Shelf, room, bin number etc.',
'A snapshot of the bin or additional documents that contain supplementary information about it can be uploaded here.': 'A snapshot of the bin or additional documents that contain supplementary information about it can be uploaded here.',
'A snapshot of the location or additional documents that contain supplementary information about the Site Location can be uploaded here.': 'A snapshot of the location or additional documents that contain supplementary information about the Site Location can be uploaded here.',
'A snapshot of the location or additional documents that contain supplementary information about the Site can be uploaded here.': 'A snapshot of the location or additional documents that contain supplementary information about the Site can be uploaded here.',
'A survey series with id %s does not exist. Please go back and create one.': 'A survey series with id %s does not exist. Please go back and create one.',
'ABOUT': 'ABOUT',
'ABOUT THIS MODULE': 'ABOUT THIS MODULE',
'ACCESS DATA': 'ACCESS DATA',
'ANY': 'ANY',
'API is documented here': 'API is documented here',
'Ability to Fill Out Surveys': 'Ability to Fill Out Surveys',
'Ability to customize the list of details tracked at a Shelter': 'Ability to customize the list of details tracked at a Shelter',
'Ability to customize the list of human resource tracked at a Shelter': 'Ability to customize the list of human resource tracked at a Shelter',
'Ability to customize the list of important facilities needed at a Shelter': 'Ability to customize the list of important facilities needed at a Shelter',
'Ability to track partial fulfillment of the request': 'Ability to track partial fulfillment of the request',
'Ability to view Results of Completed and/or partially filled out Surveys': 'Ability to view Results of Completed and/or partially filled out Surveys',
'About': 'About',
'About Sahana': 'About Sahana',
'About Sahana Eden': 'About Sahana Eden',
'About this module': 'About this module',
'Access denied': 'Access denied',
'Accessibility of Affected Location': 'Accessibility of Affected Location',
'Account registered, however registration is still pending approval - please wait until confirmation received.': 'Account registered, however registration is still pending approval - please wait until confirmation received.',
'Acronym': 'Acronym',
"Acronym of the organization's name, eg. IFRC.": "Acronym of the organization's name, eg. IFRC.",
'Actionable by all targeted recipients': 'Actionable by all targeted recipients',
'Actionable only by designated exercise participants; exercise identifier SHOULD appear in <note>': 'Actionable only by designated exercise participants; exercise identifier SHOULD appear in <note>',
'Actioned?': 'Actioned?',
'Active Problems': 'Active Problems',
'Activities': 'Activities',
'Activities matching Assessments:': 'Activities matching Assessments:',
'Activities of boys 13-17yrs before disaster': 'Activities of boys 13-17yrs before disaster',
'Activities of boys 13-17yrs now': 'Activities of boys 13-17yrs now',
'Activities of boys <12yrs before disaster': 'Activities of boys <12yrs before disaster',
'Activities of boys <12yrs now': 'Activities of boys <12yrs now',
'Activities of girls 13-17yrs before disaster': 'Activities of girls 13-17yrs before disaster',
'Activities of girls 13-17yrs now': 'Activities of girls 13-17yrs now',
'Activities of girls <12yrs before disaster': 'Activities of girls <12yrs before disaster',
'Activities of girls <12yrs now': 'Activities of girls <12yrs now',
'Activities:': 'Activities:',
'Activity': 'Activity',
'Activity Added': 'Activity Added',
'Activity Deleted': 'Activity Deleted',
'Activity Details': 'Activity Details',
'Activity Report': 'Activity Report',
'Activity Reports': 'Activity Reports',
'Activity Type': 'Activity Type',
'Activity Updated': 'Activity Updated',
'Add': 'Add',
'Add Activity': 'Add Activity',
'Add Activity Report': 'Add Activity Report',
'Add Activity Type': 'Add Activity Type',
'Add Address': 'Add Address',
'Add Assessment': 'Add Assessment',
'Add Assessment Summary': 'Add Assessment Summary',
'Add Baseline': 'Add Baseline',
'Add Baseline Type': 'Add Baseline Type',
'Add Bed Type': 'Add Bed Type',
'Add Bin Type': 'Add Bin Type',
'Add Bins': 'Add Bins',
'Add Budget': 'Add Budget',
'Add Bundle': 'Add Bundle',
'Add Catalog': 'Add Catalog',
'Add Catalog Item': 'Add Catalog Item',
'Add Catalog.': 'Add Catalog.',
'Add Category': 'Add Category',
'Add Category<>Sub-Category<>Catalog Relation': 'Add Category<>Sub-Category<>Catalog Relation',
'Add Cholera Treatment Capability Information': 'Add Cholera Treatment Capability Information',
'Add Cluster Subsector': 'Add Cluster Subsector',
'Add Config': 'Add Config',
'Add Contact': 'Add Contact',
'Add Contact Information': 'Add Contact Information',
'Add Disaster Victims': 'Add Disaster Victims',
'Add Distribution.': 'Add Distribution.',
'Add Donor': 'Add Donor',
'Add Feature Class': 'Add Feature Class',
'Add Feature Layer': 'Add Feature Layer',
'Add Flood Report': 'Add Flood Report',
'Add Group': 'Add Group',
'Add Group Member': 'Add Group Member',
'Add Hospital': 'Add Hospital',
'Add Identification Report': 'Add Identification Report',
'Add Identity': 'Add Identity',
'Add Image': 'Add Image',
'Add Impact': 'Add Impact',
'Add Impact Type': 'Add Impact Type',
'Add Incident Report': 'Add Incident Report',
'Add Item': 'Add Item',
'Add Item (s)': 'Add Item (s)',
'Add Item Catalog': 'Add Item Catalog',
'Add Item Catalog ': 'Add Item Catalog ',
'Add Item Catalog Category ': 'Add Item Catalog Category ',
'Add Item Category': 'Add Item Category',
'Add Item Packet': 'Add Item Packet',
'Add Item Sub-Category': 'Add Item Sub-Category',
'Add Key': 'Add Key',
'Add Kit': 'Add Kit',
'Add Layer': 'Add Layer',
'Add Line': 'Add Line',
'Add Location': 'Add Location',
'Add Locations': 'Add Locations',
'Add Log Entry': 'Add Log Entry',
'Add Member': 'Add Member',
'Add Membership': 'Add Membership',
'Add Message': 'Add Message',
'Add Need': 'Add Need',
'Add Need Type': 'Add Need Type',
'Add New': 'Add New',
'Add New Activity': 'Add New Activity',
'Add New Address': 'Add New Address',
'Add New Assessment': 'Add New Assessment',
'Add New Assessment Summary': 'Add New Assessment Summary',
'Add New Baseline': 'Add New Baseline',
'Add New Baseline Type': 'Add New Baseline Type',
'Add New Bin': 'Add New Bin',
'Add New Bin Type': 'Add New Bin Type',
'Add New Budget': 'Add New Budget',
'Add New Bundle': 'Add New Bundle',
'Add New Catalog Item': 'Add New Catalog Item',
'Add New Cluster Subsector': 'Add New Cluster Subsector',
'Add New Config': 'Add New Config',
'Add New Contact': 'Add New Contact',
'Add New Document': 'Add New Document',
'Add New Donor': 'Add New Donor',
'Add New Entry': 'Add New Entry',
'Add New Feature Class': 'Add New Feature Class',
'Add New Feature Layer': 'Add New Feature Layer',
'Add New Flood Report': 'Add New Flood Report',
'Add New Group': 'Add New Group',
'Add New Hospital': 'Add New Hospital',
'Add New Identity': 'Add New Identity',
'Add New Image': 'Add New Image',
'Add New Impact': 'Add New Impact',
'Add New Impact Type': 'Add New Impact Type',
'Add New Incident Report': 'Add New Incident Report',
'Add New Item': 'Add New Item',
'Add New Item Catalog': 'Add New Item Catalog',
'Add New Item Catalog Category': 'Add New Item Catalog Category',
'Add New Item Category': 'Add New Item Category',
'Add New Item Packet': 'Add New Item Packet',
'Add New Item Sub-Category': 'Add New Item Sub-Category',
'Add New Item to Kit': 'Add New Item to Kit',
'Add New Key': 'Add New Key',
'Add New Kit': 'Add New Kit',
'Add New Layer': 'Add New Layer',
'Add New Location': 'Add New Location',
'Add New Log Entry': 'Add New Log Entry',
'Add New Marker': 'Add New Marker',
'Add New Member': 'Add New Member',
'Add New Membership': 'Add New Membership',
'Add New Need': 'Add New Need',
'Add New Need Type': 'Add New Need Type',
'Add New Office': 'Add New Office',
'Add New Organization': 'Add New Organization',
'Add New Photo': 'Add New Photo',
'Add New Position': 'Add New Position',
'Add New Problem': 'Add New Problem',
'Add New Project': 'Add New Project',
'Add New Projection': 'Add New Projection',
'Add New Rapid Assessment': 'Add New Rapid Assessment',
'Add New Received Item': 'Add New Received Item',
'Add New Record': 'Add New Record',
'Add New Report': 'Add New Report',
'Add New Request': 'Add New Request',
'Add New Request Item': 'Add New Request Item',
'Add New Resource': 'Add New Resource',
'Add New River': 'Add New River',
'Add New Role': 'Add New Role',
'Add New Role to User': 'Add New Role to User',
'Add New Sector': 'Add New Sector',
'Add New Sent Item': 'Add New Sent Item',
'Add New Setting': 'Add New Setting',
'Add New Shelter': 'Add New Shelter',
'Add New Shelter Service': 'Add New Shelter Service',
'Add New Shelter Type': 'Add New Shelter Type',
'Add New Site': 'Add New Site',
'Add New Skill': 'Add New Skill',
'Add New Skill Type': 'Add New Skill Type',
'Add New Solution': 'Add New Solution',
'Add New Staff': 'Add New Staff',
'Add New Staff Type': 'Add New Staff Type',
'Add New Storage Location': 'Add New Storage Location',
'Add New Survey Answer': 'Add New Survey Answer',
'Add New Survey Question': 'Add New Survey Question',
'Add New Survey Section': 'Add New Survey Section',
'Add New Survey Series': 'Add New Survey Series',
'Add New Survey Template': 'Add New Survey Template',
'Add New Task': 'Add New Task',
'Add New Team': 'Add New Team',
'Add New Theme': 'Add New Theme',
'Add New Ticket': 'Add New Ticket',
'Add New Track': 'Add New Track',
'Add New Unit': 'Add New Unit',
'Add New User': 'Add New User',
'Add New User to Role': 'Add New User to Role',
'Add New Warehouse': 'Add New Warehouse',
'Add New Warehouse Item': 'Add New Warehouse Item',
'Add Office': 'Add Office',
'Add Organization': 'Add Organization',
'Add Peer': 'Add Peer',
'Add Person': 'Add Person',
'Add Personal Effects': 'Add Personal Effects',
'Add Photo': 'Add Photo',
'Add Point': 'Add Point',
'Add Polygon': 'Add Polygon',
'Add Position': 'Add Position',
'Add Problem': 'Add Problem',
'Add Project': 'Add Project',
'Add Projection': 'Add Projection',
'Add Question': 'Add Question',
'Add Rapid Assessment': 'Add Rapid Assessment',
'Add Recipient': 'Add Recipient',
'Add Recipient Site': 'Add Recipient Site',
'Add Recipient Site.': 'Add Recipient Site.',
'Add Record': 'Add Record',
'Add Recovery Report': 'Add Recovery Report',
'Add Reference Document': 'Add Reference Document',
'Add Report': 'Add Report',
'Add Request': 'Add Request',
'Add Request Detail': 'Add Request Detail',
'Add Request Item': 'Add Request Item',
'Add Resource': 'Add Resource',
'Add River': 'Add River',
'Add Role': 'Add Role',
'Add Section': 'Add Section',
'Add Sector': 'Add Sector',
'Add Sender Organization': 'Add Sender Organization',
'Add Sender Site': 'Add Sender Site',
'Add Sender Site.': 'Add Sender Site.',
'Add Service Profile': 'Add Service Profile',
'Add Setting': 'Add Setting',
'Add Shelter': 'Add Shelter',
'Add Shelter Service': 'Add Shelter Service',
'Add Shelter Type': 'Add Shelter Type',
'Add Shipment Transit Log': 'Add Shipment Transit Log',
'Add Shipment/Way Bills': 'Add Shipment/Way Bills',
'Add Site': 'Add Site',
'Add Skill': 'Add Skill',
'Add Skill Type': 'Add Skill Type',
'Add Skill Types': 'Add Skill Types',
'Add Solution': 'Add Solution',
'Add Staff': 'Add Staff',
'Add Staff Type': 'Add Staff Type',
'Add Status': 'Add Status',
'Add Storage Bin ': 'Add Storage Bin ',
'Add Storage Bin Type': 'Add Storage Bin Type',
'Add Storage Location': 'Add Storage Location',
'Add Storage Location ': 'Add Storage Location ',
'Add Sub-Category': 'Add Sub-Category',
'Add Subscription': 'Add Subscription',
'Add Survey Answer': 'Add Survey Answer',
'Add Survey Question': 'Add Survey Question',
'Add Survey Section': 'Add Survey Section',
'Add Survey Series': 'Add Survey Series',
'Add Survey Template': 'Add Survey Template',
'Add Task': 'Add Task',
'Add Team': 'Add Team',
'Add Theme': 'Add Theme',
'Add Ticket': 'Add Ticket',
'Add Unit': 'Add Unit',
'Add User': 'Add User',
'Add Volunteer': 'Add Volunteer',
'Add Volunteer Registration': 'Add Volunteer Registration',
'Add Warehouse': 'Add Warehouse',
'Add Warehouse Item': 'Add Warehouse Item',
'Add a Person': 'Add a Person',
'Add a Reference Document such as a file, URL or contact person to verify this data. If you do not enter a Reference Document, your email will be displayed instead.': 'Add a Reference Document such as a file, URL or contact person to verify this data. If you do not enter a Reference Document, your email will be displayed instead.',
'Add a Volunteer': 'Add a Volunteer',
'Add a new Site from where the Item is being sent.': 'Add a new Site from where the Item is being sent.',
'Add a new Site where the Item is being sent to.': 'Add a new Site where the Item is being sent to.',
'Add an Photo.': 'Add a Photo.',
'Add main Item Category.': 'Add main Item Category.',
'Add main Item Sub-Category.': 'Add main Item Sub-Category.',
'Add new Group': 'Add new Group',
'Add new Individual': 'Add new Individual',
'Add new position.': 'Add new position.',
'Add new project.': 'Add new project.',
'Add new staff role.': 'Add new staff role.',
'Add the Storage Bin Type.': 'Add the Storage Bin Type.',
'Add the Storage Location where this bin is located.': 'Add the Storage Location where this bin is located.',
'Add the Storage Location where this this Bin belongs to.': 'Add the Storage Location where this Bin belongs to.',
'Add the main Warehouse/Site information where this Bin belongs to.': 'Add the main Warehouse/Site information where this Bin belongs to.',
'Add the main Warehouse/Site information where this Item is to be added.': 'Add the main Warehouse/Site information where this Item is to be added.',
'Add the main Warehouse/Site information where this Storage location is.': 'Add the main Warehouse/Site information where this Storage location is.',
'Add the unit of measure if it doesnt exists already.': 'Add the unit of measure if it does not exist already.',
'Add to Bundle': 'Add to Bundle',
'Add to Catalog': 'Add to Catalog',
'Add to budget': 'Add to budget',
'Add/Edit/Remove Layers': 'Add/Edit/Remove Layers',
'Additional Beds / 24hrs': 'Additional Beds / 24hrs',
'Additional Comments': 'Additional Comments',
"Additional quantity quantifier – e.g. '4x5'.": "Additional quantity quantifier – e.g. '4x5'.",
'Address': 'Address',
'Address Details': 'Address Details',
'Address Type': 'Address Type',
'Address added': 'Address added',
'Address deleted': 'Address deleted',
'Address updated': 'Address updated',
'Addresses': 'Addresses',
'Adequate': 'Adequate',
'Adequate food and water available': 'Adequate food and water available',
'Adjust Item(s) Quantity': 'Adjust Item(s) Quantity',
'Adjust Items due to Theft/Loss': 'Adjust Items due to Theft/Loss',
'Admin Email': 'Admin Email',
'Admin Name': 'Admin Name',
'Admin Tel': 'Admin Tel',
'Administration': 'Administration',
'Administrator': 'Administrator',
'Admissions/24hrs': 'Admissions/24hrs',
'Adolescent (12-20)': 'Adolescent (12-20)',
'Adolescent participating in coping activities': 'Adolescent participating in coping activities',
'Adult (21-50)': 'Adult (21-50)',
'Adult ICU': 'Adult ICU',
'Adult Psychiatric': 'Adult Psychiatric',
'Adult female': 'Adult female',
'Adult male': 'Adult male',
'Adults in prisons': 'Adults in prisons',
'Advanced Bin Search': 'Advanced Bin Search',
'Advanced Catalog Search': 'Advanced Catalog Search',
'Advanced Category Search': 'Advanced Category Search',
'Advanced Item Search': 'Advanced Item Search',
'Advanced Location Search': 'Advanced Location Search',
'Advanced Site Search': 'Advanced Site Search',
'Advanced Sub-Category Search': 'Advanced Sub-Category Search',
'Advanced Unit Search': 'Advanced Unit Search',
'Advanced:': 'Advanced:',
'Advisory': 'Advisory',
'After clicking on the button, a set of paired items will be shown one by one. Please select the one solution from each pair that you prefer over the other.': 'After clicking on the button, a set of paired items will be shown one by one. Please select the one solution from each pair that you prefer over the other.',
'Age Group': 'Age Group',
'Age group': 'Age group',
'Age group does not match actual age.': 'Age group does not match actual age.',
'Aggravating factors': 'Aggravating factors',
'Aggregate Items': 'Aggregate Items',
'Agriculture': 'Agriculture',
'Air Transport Service': 'Air Transport Service',
'Air tajin': 'Air tajin',
'Aircraft Crash': 'Aircraft Crash',
'Aircraft Hijacking': 'Aircraft Hijacking',
'Airport Closure': 'Airport Closure',
'Airspace Closure': 'Airspace Closure',
'Alcohol': 'Alcohol',
'Alert': 'Alert',
'All': 'All',
'All Inbound & Outbound Messages are stored here': 'All Inbound & Outbound Messages are stored here',
'All Requested Items': 'All Requested Items',
'All Resources': 'All Resources',
'All data provided by the Sahana Software Foundation from this site is licenced under a Creative Commons Attribution licence. However, not all data originates here. Please consult the source field of each entry.': 'All data provided by the Sahana Software Foundation from this site is licenced under a Creative Commons Attribution licence. However, not all data originates here. Please consult the source field of each entry.',
'Allowed to push': 'Allowed to push',
'Allows a Budget to be drawn up': 'Allows a Budget to be drawn up',
'Allows authorized users to control which layers are available to the situation map.': 'Allows authorized users to control which layers are available to the situation map.',
'Alternative infant nutrition in use': 'Alternative infant nutrition in use',
'Alternative places for studying': 'Alternative places for studying',
'Alternative places for studying available': 'Alternative places for studying available',
'Ambulance Service': 'Ambulance Service',
'An intake system, a warehouse management system, commodity tracking, supply chain management, procurement and other asset and resource management capabilities.': 'An intake system, a warehouse management system, commodity tracking, supply chain management, procurement and other asset and resource management capabilities.',
'Analysis of Completed Surveys': 'Analysis of Completed Surveys',
'Animal Die Off': 'Animal Die Off',
'Animal Feed': 'Animal Feed',
'Animals': 'Animals',
'Answer Choices (One Per Line)': 'Answer Choices (One Per Line)',
'Anthropolgy': 'Anthropology',
'Antibiotics available': 'Antibiotics available',
'Antibiotics needed per 24h': 'Antibiotics needed per 24h',
'Apparent Age': 'Apparent Age',
'Apparent Gender': 'Apparent Gender',
'Appropriate clothing available': 'Appropriate clothing available',
'Appropriate cooking equipment/materials in HH': 'Appropriate cooking equipment/materials in HH',
'Approx. number of cases/48h': 'Approx. number of cases/48h',
'Approximately how many children under 5 with diarrhea in the past 48 hours?': 'Approximately how many children under 5 with diarrhea in the past 48 hours?',
'Archive not Delete': 'Archive not Delete',
'Arctic Outflow': 'Arctic Outflow',
'Are basic medical supplies available for health services since the disaster?': 'Are basic medical supplies available for health services since the disaster?',
'Are breast milk substitutes being used here since the disaster?': 'Are breast milk substitutes being used here since the disaster?',
'Are the areas that children, older people, and people with disabilities live in, play in and walk through on a daily basis physically safe?': 'Are the areas that children, older people, and people with disabilities live in, play in and walk through on a daily basis physically safe?',
'Are the chronically ill receiving sufficient care and assistance?': 'Are the chronically ill receiving sufficient care and assistance?',
'Are there adults living in prisons in this area?': 'Are there adults living in prisons in this area?',
'Are there alternative places for studying?': 'Are there alternative places for studying?',
'Are there cases of diarrhea among children under the age of 5?': 'Are there cases of diarrhea among children under the age of 5?',
'Are there children living in adult prisons in this area?': 'Are there children living in adult prisons in this area?',
'Are there children living in boarding schools in this area?': 'Are there children living in boarding schools in this area?',
'Are there children living in homes for disabled children in this area?': 'Are there children living in homes for disabled children in this area?',
'Are there children living in juvenile detention in this area?': 'Are there children living in juvenile detention in this area?',
'Are there children living in orphanages in this area?': 'Are there children living in orphanages in this area?',
'Are there children with chronical illnesses in your community?': 'Are there children with chronic illnesses in your community?',
'Are there health services functioning for the community since the disaster?': 'Are there health services functioning for the community since the disaster?',
'Are there older people living in care homes in this area?': 'Are there older people living in care homes in this area?',
'Are there older people with chronical illnesses in your community?': 'Are there older people with chronic illnesses in your community?',
'Are there people with chronical illnesses in your community?': 'Are there people with chronic illnesses in your community?',
'Are there separate latrines for women and men available?': 'Are there separate latrines for women and men available?',
'Are there staff present and caring for the residents in these institutions?': 'Are there staff present and caring for the residents in these institutions?',
'Area': 'Area',
'Assessment': 'Assessment',
'Assessment Details': 'Assessment Details',
'Assessment Reported': 'Assessment Reported',
'Assessment Summaries': 'Assessment Summaries',
'Assessment Summary Details': 'Assessment Summary Details',
'Assessment Summary added': 'Assessment Summary added',
'Assessment Summary deleted': 'Assessment Summary deleted',
'Assessment Summary updated': 'Assessment Summary updated',
'Assessment added': 'Assessment added',
'Assessment deleted': 'Assessment deleted',
'Assessment updated': 'Assessment updated',
'Assessments': 'Assessments',
'Assessments Needs vs. Activities': 'Assessments Needs vs. Activities',
'Assessments and Activities': 'Assessments and Activities',
'Assessments:': 'Assessments:',
'Assessor': 'Assessor',
'Assign Storage Location': 'Assign Storage Location',
'Assign to Org.': 'Assign to Org.',
'Assigned': 'Assigned',
'Assigned To': 'Assigned To',
'Assigned to': 'Assigned to',
'Assistance for immediate repair/reconstruction of houses': 'Assistance for immediate repair/reconstruction of houses',
'Assistant': 'Assistant',
'At/Visited Location (not virtual)': 'At/Visited Location (not virtual)',
'Attend to information sources as described in <instruction>': 'Attend to information sources as described in <instruction>',
'Attribution': 'Attribution',
'Audit Read': 'Audit Read',
'Audit Write': 'Audit Write',
"Authenticate system's Twitter account": "Authenticate system's Twitter account",
'Authentication Required': 'Authentication Required',
'Author': 'Author',
'Automotive': 'Automotive',
'Available Beds': 'Available Beds',
'Available Messages': 'Available Messages',
'Available Records': 'Available Records',
'Available databases and tables': 'Available databases and tables',
'Available from': 'Available from',
'Available in Viewer?': 'Available in Viewer?',
'Available until': 'Available until',
'Availablity': 'Availability',
'Avalanche': 'Avalanche',
'Avoid the subject event as per the <instruction>': 'Avoid the subject event as per the <instruction>',
'Babies who are not being breastfed, what are they being fed on?': 'Babies who are not being breastfed, what are they being fed on?',
'Baby And Child Care': 'Baby And Child Care',
'Background Colour': 'Background Colour',
'Background Colour for Text blocks': 'Background Colour for Text blocks',
'Bahai': 'Bahai',
'Baldness': 'Baldness',
'Balochi': 'Balochi',
'Banana': 'Banana',
'Bank/micro finance': 'Bank/micro finance',
'Base Layer?': 'Base Layer?',
'Base Layers': 'Base Layers',
'Base Unit': 'Base Unit',
'Baseline Number of Beds': 'Baseline Number of Beds',
'Baseline Type': 'Baseline Type',
'Baseline Type Details': 'Baseline Type Details',
'Baseline Type added': 'Baseline Type added',
'Baseline Type deleted': 'Baseline Type deleted',
'Baseline Type updated': 'Baseline Type updated',
'Baseline Types': 'Baseline Types',
'Baseline added': 'Baseline added',
'Baseline deleted': 'Baseline deleted',
'Baseline number of beds of that type in this unit.': 'Baseline number of beds of that type in this unit.',
'Baseline updated': 'Baseline updated',
'Baselines': 'Baselines',
'Baselines Details': 'Baselines Details',
'Basic Assessment': 'Basic Assessment',
'Basic Assessment Reported': 'Basic Assessment Reported',
'Basic Details': 'Basic Details',
'Basic information on the requests and donations, such as category, the units, contact details and the status.': 'Basic information on the requests and donations, such as category, the units, contact details and the status.',
'Basic medical supplies available prior to disaster': 'Basic medical supplies available prior to disaster',
'Basic medical supplies available since disaster': 'Basic medical supplies available since disaster',
'Basic reports on the Shelter and drill-down by region': 'Basic reports on the Shelter and drill-down by region',
'Baud': 'Baud',
'Baud rate to use for your modem - The default is safe for most cases': 'Baud rate to use for your modem - The default is safe for most cases',
'Bed Capacity': 'Bed Capacity',
'Bed Capacity per Unit': 'Bed Capacity per Unit',
'Bed Type': 'Bed Type',
'Bed type already registered': 'Bed type already registered',
'Bedding materials available': 'Bedding materials available',
'Beneficiary Type': 'Beneficiary Type',
'Biological Hazard': 'Biological Hazard',
'Biscuits': 'Biscuits',
'Blizzard': 'Blizzard',
'Blood Type (AB0)': 'Blood Type (AB0)',
'Blowing Snow': 'Blowing Snow',
'Boat': 'Boat',
'Bodies found': 'Bodies found',
'Bodies recovered': 'Bodies recovered',
'Body': 'Body',
'Body Recovery Reports': 'Body Recovery Reports',
'Body Recovery Request': 'Body Recovery Request',
'Body Recovery Requests': 'Body Recovery Requests',
'Bomb': 'Bomb',
'Bomb Explosion': 'Bomb Explosion',
'Bomb Threat': 'Bomb Threat',
'Border Colour for Text blocks': 'Border Colour for Text blocks',
'Boys 13-18 yrs in affected area': 'Boys 13-18 yrs in affected area',
'Boys 13-18 yrs not attending school': 'Boys 13-18 yrs not attending school',
'Boys 6-12 yrs in affected area': 'Boys 6-12 yrs in affected area',
'Boys 6-12 yrs not attending school': 'Boys 6-12 yrs not attending school',
'Breast milk substitutes in use since disaster': 'Breast milk substitutes in use since disaster',
'Breast milk substitutes used prior to disaster': 'Breast milk substitutes used prior to disaster',
'Bricks': 'Bricks',
'Bridge Closed': 'Bridge Closed',
'Bucket': 'Bucket',
'Buddhist': 'Buddhist',
'Budget': 'Budget',
'Budget Details': 'Budget Details',
'Budget Updated': 'Budget Updated',
'Budget added': 'Budget added',
'Budget deleted': 'Budget deleted',
'Budget updated': 'Budget updated',
'Budgeting Module': 'Budgeting Module',
'Budgets': 'Budgets',
'Buffer': 'Buffer',
'Building Aide': 'Building Aide',
'Building Collapsed': 'Building Collapsed',
'Built using the Template agreed by a group of NGOs working together as the': 'Built using the Template agreed by a group of NGOs working together as the',
'Bulk Uploader': 'Bulk Uploader',
'Bundle': 'Bundle',
'Bundle Contents': 'Bundle Contents',
'Bundle Details': 'Bundle Details',
'Bundle Updated': 'Bundle Updated',
'Bundle added': 'Bundle added',
'Bundle deleted': 'Bundle deleted',
'Bundle updated': 'Bundle updated',
'Bundles': 'Bundles',
'Burn': 'Burn',
'Burn ICU': 'Burn ICU',
'Burned/charred': 'Burned/charred',
'Business damaged': 'Business damaged',
'By': 'By',
'By Warehouse': 'By Warehouse',
'CBA Women': 'CBA Women',
'CSS file %s not writable - unable to apply theme!': 'CSS file %s not writable - unable to apply theme!',
'Calculate': 'Calculate',
'Camp': 'Camp',
'Camp Coordination/Management': 'Camp Coordination/Management',
'Can users register themselves for authenticated login access?': 'Can users register themselves for authenticated login access?',
"Can't import tweepy": "Can't import tweepy",
'Cancel': 'Cancel',
'Cancelled': 'Cancelled',
'Candidate Matches for Body %s': 'Candidate Matches for Body %s',
'Canned Fish': 'Canned Fish',
'Cannot be empty': 'Cannot be empty',
'Capacity (Max Persons)': 'Capacity (Max Persons)',
'Capacity (W x D X H)': 'Capacity (W x D x H)',
'Capture Information on Disaster Victim groups (Tourists, Passengers, Families, etc.)': 'Capture Information on Disaster Victim groups (Tourists, Passengers, Families, etc.)',
'Capture Information on each disaster victim': 'Capture Information on each disaster victim',
'Capturing organizational information of a relief organization and all the projects they have in the region': 'Capturing organizational information of a relief organization and all the projects they have in the region',
'Capturing the essential services each Volunteer is providing and where': 'Capturing the essential services each Volunteer is providing and where',
'Capturing the projects each organization is providing and where': 'Capturing the projects each organization is providing and where',
'Cardiology': 'Cardiology',
'Cash available to restart business': 'Cash available to restart business',
'Cassava': 'Cassava',
'Casual Labor': 'Casual Labor',
'Catalog': 'Catalog',
'Catalog Item': 'Catalog Item',
'Catalog Item added': 'Catalog Item added',
'Catalog Item deleted': 'Catalog Item deleted',
'Catalog Item updated': 'Catalog Item updated',
'Catalog Items': 'Catalog Items',
'Catalog Name': 'Catalog Name',
'Category': 'Category',
'Category<>Sub-Category<>Catalog Relation': 'Category<>Sub-Category<>Catalog Relation',
'Category<>Sub-Category<>Catalog Relation added': 'Category<>Sub-Category<>Catalog Relation added',
'Category<>Sub-Category<>Catalog Relation deleted': 'Category<>Sub-Category<>Catalog Relation deleted',
'Category<>Sub-Category<>Catalog Relation updated': 'Category<>Sub-Category<>Catalog Relation updated',
'Central point to record details on People': 'Central point to record details on People',
'Change Password': 'Change Password',
'Check for errors in the URL, maybe the address was mistyped.': 'Check for errors in the URL, maybe the address was mistyped.',
'Check if the URL is pointing to a directory instead of a webpage.': 'Check if the URL is pointing to a directory instead of a webpage.',
'Check outbox for the message status': 'Check outbox for the message status',
'Check to delete': 'Check to delete',
'Check to delete:': 'Check to delete:',
'Checklist': 'Checklist',
'Checklist created': 'Checklist created',
'Checklist deleted': 'Checklist deleted',
'Checklist of Operations': 'Checklist of Operations',
'Checklist updated': 'Checklist updated',
'Chemical Hazard': 'Chemical Hazard',
'Chemical, Biological, Radiological, Nuclear or High-Yield Explosive threat or attack': 'Chemical, Biological, Radiological, Nuclear or High-Yield Explosive threat or attack',
'Chicken': 'Chicken',
'Child': 'Child',
'Child (2-11)': 'Child (2-11)',
'Child (< 18 yrs)': 'Child (< 18 yrs)',
'Child Abduction Emergency': 'Child Abduction Emergency',
'Child headed households (<18 yrs)': 'Child headed households (<18 yrs)',
'Children (2-5 years)': 'Children (2-5 years)',
'Children (5-15 years)': 'Children (5-15 years)',
'Children (< 2 years)': 'Children (< 2 years)',
'Children in adult prisons': 'Children in adult prisons',
'Children in boarding schools': 'Children in boarding schools',
'Children in homes for disabled children': 'Children in homes for disabled children',
'Children in juvenile detention': 'Children in juvenile detention',
'Children in orphanages': 'Children in orphanages',
'Children living on their own (without adults)': 'Children living on their own (without adults)',
'Children not enrolled in new school': 'Children not enrolled in new school',
'Children orphaned by the disaster': 'Children orphaned by the disaster',
'Children separated from their parents/caregivers': 'Children separated from their parents/caregivers',
'Children that have been sent to safe places': 'Children that have been sent to safe places',
'Children who have disappeared since the disaster': 'Children who have disappeared since the disaster',
'Children with chronical illnesses': 'Children with chronic illnesses',
'Chinese (Taiwan)': 'Chinese (Taiwan)',
'Cholera Treatment': 'Cholera Treatment',
'Cholera Treatment Capability': 'Cholera Treatment Capability',
'Cholera Treatment Center': 'Cholera Treatment Center',
'Cholera-Treatment-Center': 'Cholera-Treatment-Center',
'Choosing Skill and Resources of Volunteers': 'Choosing Skills and Resources of Volunteers',
'Christian': 'Christian',
'Church': 'Church',
'Circumstances of disappearance, other victims/witnesses who last saw the missing person alive.': 'Circumstances of disappearance, other victims/witnesses who last saw the missing person alive.',
'Civil Emergency': 'Civil Emergency',
'Click on the link ': 'Click on the link ',
'Client IP': 'Client IP',
'Clinical Laboratory': 'Clinical Laboratory',
'Clinical Operations': 'Clinical Operations',
'Clinical Status': 'Clinical Status',
'Close map': 'Close map',
'Closed': 'Closed',
'Clothing': 'Clothing',
'Cluster Distance': 'Cluster Distance',
'Cluster Subsector': 'Cluster Subsector',
'Cluster Subsector Details': 'Cluster Subsector Details',
'Cluster Subsector added': 'Cluster Subsector added',
'Cluster Subsector deleted': 'Cluster Subsector deleted',
'Cluster Subsector updated': 'Cluster Subsector updated',
'Cluster Subsectors': 'Cluster Subsectors',
'Cluster Threshold': 'Cluster Threshold',
'Cluster(s)': 'Cluster(s)',
'Code': 'Code',
'Cold Wave': 'Cold Wave',
'Collective center': 'Collective center',
'Colour for Underline of Subheadings': 'Colour for Underline of Subheadings',
'Colour of Buttons when hovering': 'Colour of Buttons when hovering',
'Colour of bottom of Buttons when not pressed': 'Colour of bottom of Buttons when not pressed',
'Colour of bottom of Buttons when pressed': 'Colour of bottom of Buttons when pressed',
'Colour of dropdown menus': 'Colour of dropdown menus',
'Colour of selected Input fields': 'Colour of selected Input fields',
'Colour of selected menu items': 'Colour of selected menu items',
'Column Choices (One Per Line': 'Column Choices (One Per Line)',
'Combined Method': 'Combined Method',
'Come back later.': 'Come back later.',
'Come back later. Everyone visiting this site is probably experiencing the same problem as you.': 'Come back later. Everyone visiting this site is probably experiencing the same problem as you.',
'Comment': 'Comment',
'Comments': 'Comments',
'Commiting a changed spreadsheet to the database': 'Committing a changed spreadsheet to the database',
'Communication problems': 'Communication problems',
'Community Centre': 'Community Centre',
'Community Health Center': 'Community Health Center',
'Community Member': 'Community Member',
'Complete Unit Label for e.g. meter for m.': 'Complete unit label, e.g. "meter" for "m".',
'Completed': 'Completed',
'Complexion': 'Complexion',
'Compose': 'Compose',
'Compromised': 'Compromised',
'Config': 'Config',
'Config added': 'Config added',
'Config deleted': 'Config deleted',
'Config updated': 'Config updated',
'Configs': 'Configs',
'Configure Run-time Settings': 'Configure Run-time Settings',
'Confirmed': 'Confirmed',
'Conflict Details': 'Conflict Details',
'Conflict Resolution': 'Conflict Resolution',
'Consumable': 'Consumable',
'Contact': 'Contact',
'Contact Data': 'Contact Data',
'Contact Details': 'Contact Details',
'Contact Information': 'Contact Information',
'Contact Method': 'Contact Method',
'Contact Person': 'Contact Person',
'Contact details': 'Contact details',
'Contact information added': 'Contact information added',
'Contact information deleted': 'Contact information deleted',
'Contact information updated': 'Contact information updated',
'Contact person(s) in case of news or further questions (if different from reporting person). Include telephone number, address and email as available.': 'Contact person(s) in case of news or further questions (if different from reporting person). Include telephone number, address and email as available.',
'Contact us': 'Contact us',
'Contacts': 'Contacts',
'Contents': 'Contents',
'Contributor': 'Contributor',
'Conversion Tool': 'Conversion Tool',
'Cooking NFIs': 'Cooking NFIs',
'Cooking Oil': 'Cooking Oil',
'Coordinate Conversion': 'Coordinate Conversion',
'Copy': 'Copy',
'Copyright': 'Copyright',
'Corn': 'Corn',
'Cost Type': 'Cost Type',
'Cost per Megabyte': 'Cost per Megabyte',
'Cost per Minute': 'Cost per Minute',
"Couldn't import tweepy library": "Couldn't import tweepy library",
'Country': 'Country',
'Country of Residence': 'Country of Residence',
'Create & manage Distribution groups to receive Alerts': 'Create & manage Distribution groups to receive Alerts',
'Create Checklist': 'Create Checklist',
'Create Group Entry': 'Create Group Entry',
'Create Impact Assessment': 'Create Impact Assessment',
'Create Import Job': 'Create Import Job',
'Create Mobile Impact Assessment': 'Create Mobile Impact Assessment',
'Create New Import Job': 'Create New Import Job',
'Create PDF': 'Create PDF',
'Create Rapid Assessment': 'Create Rapid Assessment',
'Create Request': 'Create Request',
'Create Task': 'Create Task',
'Create a group entry in the registry.': 'Create a group entry in the registry.',
'Create, enter, and manage surveys.': 'Create, enter, and manage surveys.',
'Creation of Surveys': 'Creation of Surveys',
'Crime': 'Crime',
'Criteria': 'Criteria',
'Currency': 'Currency',
'Current Group Members': 'Current Group Members',
'Current Identities': 'Current Identities',
'Current Location': 'Current Location',
'Current Log Entries': 'Current Log Entries',
'Current Memberships': 'Current Memberships',
'Current Registrations': 'Current Registrations',
'Current Status': 'Current Status',
'Current Team Members': 'Current Team Members',
'Current Twitter account': 'Current Twitter account',
'Current greatest needs of vulnerable groups': 'Current greatest needs of vulnerable groups',
'Current main income sources': 'Current main income sources',
'Current major expenses': 'Current major expenses',
'Current number of patients': 'Current number of patients',
'Current problems, categories': 'Current problems, categories',
'Current problems, details': 'Current problems, details',
'Current request': 'Current request',
'Current response': 'Current response',
'Current session': 'Current session',
'Current type of health problems, adults': 'Current type of health problems, adults',
'Current type of health problems, children': 'Current type of health problems, children',
'Current type of source for drinking water': 'Current type of source for drinking water',
'Current type of source for sanitary water': 'Current type of source for sanitary water',
'Custom Database Resource (e.g., anything defined as a resource in Sahana)': 'Custom Database Resource (e.g., anything defined as a resource in Sahana)',
'Customisable category of aid': 'Customisable category of aid',
'DECISION': 'DECISION',
'DNA Profile': 'DNA Profile',
'DNA Profiling': 'DNA Profiling',
'Dam Overflow': 'Dam Overflow',
'Dangerous Person': 'Dangerous Person',
'Data uploaded': 'Data uploaded',
'Database': 'Database',
'Date': 'Date',
'Date & Time': 'Date & Time',
'Date Requested': 'Date Requested',
'Date Required': 'Date Required',
'Date and Time': 'Date and Time',
'Date and Time of Goods receipt. By default shows the current time but can be modified by editing in the drop down list.': 'Date and Time of Goods receipt. By default shows the current time but can be modified by editing in the drop down list.',
'Date and time this report relates to.': 'Date and time this report relates to.',
'Date of Birth': 'Date of Birth',
'Date of Latest Information on Beneficiaries Reached': 'Date of Latest Information on Beneficiaries Reached',
'Date of Report': 'Date of Report',
'Date/Time': 'Date/Time',
'Date/Time of Find': 'Date/Time of Find',
'Date/Time of disappearance': 'Date/Time of disappearance',
'De-duplicator': 'De-duplicator',
'Dead Body Details': 'Dead Body Details',
'Dead Body Reports': 'Dead Body Reports',
'Deaths in the past 24h': 'Deaths in the past 24h',
'Deaths/24hrs': 'Deaths/24hrs',
'Debug': 'Debug',
'Decimal Degrees': 'Decimal Degrees',
'Decomposed': 'Decomposed',
'Default Height of the map window. In Window layout the map maximises to fill the window, so no need to set a large value here.': 'Default Height of the map window. In Window layout the map maximises to fill the window, so no need to set a large value here.',
'Default Marker': 'Default Marker',
'Default Width of the map window. In Window layout the map maximises to fill the window, so no need to set a large value here.': 'Default Width of the map window. In Window layout the map maximises to fill the window, so no need to set a large value here.',
'Default synchronization policy': 'Default synchronization policy',
'Defaults': 'Defaults',
'Defaults updated': 'Defaults updated',
'Defecation area for animals': 'Defecation area for animals',
'Defines the icon used for display of features on handheld GPS.': 'Defines the icon used for display of features on handheld GPS.',
'Defines the icon used for display of features on interactive map & KML exports. A Marker assigned to an individual Location is set if there is a need to override the Marker assigned to the Feature Class. If neither are defined, then the Default Marker is used.': 'Defines the icon used for display of features on interactive map & KML exports. A Marker assigned to an individual Location is set if there is a need to override the Marker assigned to the Feature Class. If neither are defined, then the Default Marker is used.',
'Defines the marker used for display & the attributes visible in the popup.': 'Defines the marker used for display & the attributes visible in the popup.',
'Degrees must be a number between -180 and 180': 'Degrees must be a number between -180 and 180',
'Dehydration': 'Dehydration',
'Delete': 'Delete',
'Delete Assessment': 'Delete Assessment',
'Delete Assessment Summary': 'Delete Assessment Summary',
'Delete Baseline': 'Delete Baseline',
'Delete Baseline Type': 'Delete Baseline Type',
'Delete Budget': 'Delete Budget',
'Delete Bundle': 'Delete Bundle',
'Delete Catalog Item': 'Delete Catalog Item',
'Delete Cluster Subsector': 'Delete Cluster Subsector',
'Delete Config': 'Delete Config',
'Delete Document': 'Delete Document',
'Delete Donor': 'Delete Donor',
'Delete Entry': 'Delete Entry',
'Delete Feature Class': 'Delete Feature Class',
'Delete Feature Layer': 'Delete Feature Layer',
'Delete Group': 'Delete Group',
'Delete Hospital': 'Delete Hospital',
'Delete Image': 'Delete Image',
'Delete Impact': 'Delete Impact',
'Delete Impact Type': 'Delete Impact Type',
'Delete Incident Report': 'Delete Incident Report',
'Delete Item': 'Delete Item',
'Delete Item Category': 'Delete Item Category',
'Delete Item Packet': 'Delete Item Packet',
'Delete Key': 'Delete Key',
'Delete Kit': 'Delete Kit',
'Delete Layer': 'Delete Layer',
'Delete Location': 'Delete Location',
'Delete Marker': 'Delete Marker',
'Delete Membership': 'Delete Membership',
'Delete Message': 'Delete Message',
'Delete Need': 'Delete Need',
'Delete Need Type': 'Delete Need Type',
'Delete Office': 'Delete Office',
'Delete Organization': 'Delete Organization',
'Delete Peer': 'Delete Peer',
'Delete Person': 'Delete Person',
'Delete Photo': 'Delete Photo',
'Delete Project': 'Delete Project',
'Delete Projection': 'Delete Projection',
'Delete Rapid Assessment': 'Delete Rapid Assessment',
'Delete Received Item': 'Delete Received Item',
'Delete Received Shipment': 'Delete Received Shipment',
'Delete Record': 'Delete Record',
'Delete Recovery Report': 'Delete Recovery Report',
'Delete Report': 'Delete Report',
'Delete Request': 'Delete Request',
'Delete Request Item': 'Delete Request Item',
'Delete Resource': 'Delete Resource',
'Delete Section': 'Delete Section',
'Delete Sector': 'Delete Sector',
'Delete Sent Item': 'Delete Sent Item',
'Delete Sent Shipment': 'Delete Sent Shipment',
'Delete Service Profile': 'Delete Service Profile',
'Delete Setting': 'Delete Setting',
'Delete Skill': 'Delete Skill',
'Delete Skill Type': 'Delete Skill Type',
'Delete Staff Type': 'Delete Staff Type',
'Delete Status': 'Delete Status',
'Delete Subscription': 'Delete Subscription',
'Delete Survey Answer': 'Delete Survey Answer',
'Delete Survey Question': 'Delete Survey Question',
'Delete Survey Section': 'Delete Survey Section',
'Delete Survey Series': 'Delete Survey Series',
'Delete Survey Template': 'Delete Survey Template',
'Delete Unit': 'Delete Unit',
'Delete User': 'Delete User',
'Delete Volunteer': 'Delete Volunteer',
'Delete Warehouse': 'Delete Warehouse',
'Delete Warehouse Item': 'Delete Warehouse Item',
'Delete from Server?': 'Delete from Server?',
'Delivered': 'Delivered',
'Delphi Decision Maker': 'Delphi Decision Maker',
'Demographic': 'Demographic',
'Demonstrations': 'Demonstrations',
'Dental Examination': 'Dental Examination',
'Dental Profile': 'Dental Profile',
'Deployment': 'Deployment',
'Describe the condition of the roads to your hospital.': 'Describe the condition of the roads to your hospital.',
'Describe the procedure which this record relates to (e.g. "medical examination")': 'Describe the procedure which this record relates to (e.g. "medical examination")',
'Description': 'Description',
'Description of Bin Type': 'Description of Bin Type',
'Description of Contacts': 'Description of Contacts',
'Description of defecation area': 'Description of defecation area',
'Description of drinking water source': 'Description of drinking water source',
'Description of sanitary water source': 'Description of sanitary water source',
'Description of water source before the disaster': 'Description of water source before the disaster',
'Descriptive Text (e.g., Prose, etc)': 'Descriptive Text (e.g., Prose, etc)',
'Designated for': 'Designated for',
'Desire to remain with family': 'Desire to remain with family',
'Destination': 'Destination',
"Detailed address of the site for informational/logistics purpose. Please note that you can add GIS/Mapping data about this site in the 'Location' field mentioned below.": "Detailed address of the site for informational/logistics purpose. Please note that you can add GIS/Mapping data about this site in the 'Location' field mentioned below.",
'Details': 'Details',
'Dialysis': 'Dialysis',
'Diarrhea': 'Diarrhea',
'Diarrhea among children under 5': 'Diarrhea among children under 5',
'Dignitary Visit': 'Dignitary Visit',
'Dimensions of the storage bin. Input in the following format 1 x 2 x 3 for width x depth x height followed by choosing the unit from the drop down list.': 'Dimensions of the storage bin. Input in the following format 1 x 2 x 3 for width x depth x height followed by choosing the unit from the drop down list.',
'Dimensions of the storage location. Input in the following format 1 x 2 x 3 for width x depth x height followed by choosing the unit from the drop down list.': 'Dimensions of the storage location. Input in the following format 1 x 2 x 3 for width x depth x height followed by choosing the unit from the drop down list.',
'Direction': 'Direction',
'Disabled': 'Disabled',
'Disabled participating in coping activities': 'Disabled participating in coping activities',
'Disabled?': 'Disabled?',
'Disaster Victim Identification': 'Disaster Victim Identification',
'Disaster Victim Registry': 'Disaster Victim Registry',
'Disaster clean-up/repairs': 'Disaster clean-up/repairs',
'Discharge (cusecs)': 'Discharge (cusecs)',
'Discharges/24hrs': 'Discharges/24hrs',
'Discussion Forum': 'Discussion Forum',
'Discussion Forum on item': 'Discussion Forum on item',
'Disease vectors': 'Disease vectors',
'Dispatch': 'Dispatch',
'Dispatch Items': 'Dispatch Items',
'Dispensary': 'Dispensary',
'Displaced': 'Displaced',
'Displaced Populations': 'Displaced Populations',
'Display Polygons?': 'Display Polygons?',
'Display Routes?': 'Display Routes?',
'Display Tracks?': 'Display Tracks?',
'Display Waypoints?': 'Display Waypoints?',
'Dispose': 'Dispose',
'Dispose Expired/Unusable Items': 'Dispose Expired/Unusable Items',
'Distance between defecation area and water source': 'Distance between defecation area and water source',
'Distance between latrines and temporary shelter in meters': 'Distance between latrines and temporary shelter in meters',
'Distance between shelter and latrines': 'Distance between shelter and latrines',
'Distance(Kms)': 'Distance (km)',
'Distribution': 'Distribution',
'Distribution groups': 'Distribution groups',
'District': 'District',
'Do adolescent and youth in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)': 'Do adolescent and youth in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)',
'Do households each have at least 2 containers (10-20 litres each) to hold water?': 'Do households each have at least 2 containers (10-20 litres each) to hold water?',
'Do households have appropriate equipment and materials to cook their food (stove, pots, dished plates, and a mug/drinking vessel, etc)?': 'Do households have appropriate equipment and materials to cook their food (stove, pots, dishes, plates, and a mug/drinking vessel, etc)?',
'Do households have bedding materials available (tarps, plastic mats, blankets)?': 'Do households have bedding materials available (tarps, plastic mats, blankets)?',
'Do households have household water storage containers?': 'Do households have household water storage containers?',
'Do minority members in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)': 'Do minority members in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)',
'Do older people in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)': 'Do older people in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)',
'Do people have at least 2 full sets of clothing (shirts, pants/sarong, underwear)?': 'Do people have at least 2 full sets of clothing (shirts, pants/sarong, underwear)?',
'Do people have reliable access to sufficient sanitation/hygiene items (bathing soap, laundry soap, shampoo, toothpaste and toothbrush)?': 'Do people have reliable access to sufficient sanitation/hygiene items (bathing soap, laundry soap, shampoo, toothpaste and toothbrush)?',
'Do people with disabilities in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)': 'Do people with disabilities in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)',
'Do women and girls have easy access to sanitary materials?': 'Do women and girls have easy access to sanitary materials?',
'Do women in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)': 'Do women in your community participate in activities that help them cope with the disaster? (ex. meetings, religious activities, volunteer in the community clean-up, etc)',
'Do you have access to cash to restart your business?': 'Do you have access to cash to restart your business?',
'Do you know of any incidents of violence?': 'Do you know of any incidents of violence?',
'Do you know of children living on their own (without adults)?': 'Do you know of children living on their own (without adults)?',
'Do you know of children separated from their parents or caregivers?': 'Do you know of children separated from their parents or caregivers?',
'Do you know of children that have been orphaned by the disaster?': 'Do you know of children that have been orphaned by the disaster?',
'Do you know of children that have been sent to safe places?': 'Do you know of children that have been sent to safe places?',
'Do you know of children that have disappeared without explanation in the period since the disaster?': 'Do you know of children that have disappeared without explanation in the period since the disaster?',
'Do you know of older people who are primary caregivers of children?': 'Do you know of older people who are primary caregivers of children?',
'Do you know of parents/caregivers missing children?': 'Do you know of parents/caregivers missing children?',
'Do you really want to delete these records?': 'Do you really want to delete these records?',
'Do you want to receive this shipment?': 'Do you want to receive this shipment?',
'Document': 'Document',
'Document Details': 'Document Details',
'Document Scan': 'Document Scan',
'Document added': 'Document added',
'Document deleted': 'Document deleted',
'Document updated': 'Document updated',
'Documents': 'Documents',
'Documents and Photos': 'Documents and Photos',
'Does this facility provide a cholera treatment center?': 'Does this facility provide a cholera treatment center?',
'Doing nothing (no structured activity)': 'Doing nothing (no structured activity)',
'Dollars': 'Dollars',
'Domestic chores': 'Domestic chores',
'Donation Phone #': 'Donation Phone #',
'Donor': 'Donor',
'Donor Details': 'Donor Details',
'Donor added': 'Donor added',
'Donor deleted': 'Donor deleted',
'Donor updated': 'Donor updated',
'Donors': 'Donors',
'Donors Report': 'Donors Report',
'Door frame': 'Door frame',
'Draft': 'Draft',
'Draft Features': 'Draft Features',
'Drainage': 'Drainage',
'Drawing up a Budget for Staff & Equipment across various Locations.': 'Drawing up a Budget for Staff & Equipment across various Locations.',
'Drill Down by Group': 'Drill Down by Group',
'Drill Down by Incident': 'Drill Down by Incident',
'Drill Down by Shelter': 'Drill Down by Shelter',
'Driving License': 'Driving License',
'Drought': 'Drought',
'Drugs': 'Drugs',
'Dug Well': 'Dug Well',
'Duplicate?': 'Duplicate?',
'Duration': 'Duration',
'Dust Storm': 'Dust Storm',
'Dwellings': 'Dwellings',
'E-mail': 'E-mail',
'EMS Reason': 'EMS Reason',
'EMS Status': 'EMS Status',
'ER Status': 'ER Status',
'ER Status Reason': 'ER Status Reason',
'Early Recovery': 'Early Recovery',
'Earthquake': 'Earthquake',
'Easy access to sanitation items for women/girls': 'Easy access to sanitation items for women/girls',
'Edit': 'Edit',
'Edit Activity': 'Edit Activity',
'Edit Address': 'Edit Address',
'Edit Application': 'Edit Application',
'Edit Assessment': 'Edit Assessment',
'Edit Assessment Summary': 'Edit Assessment Summary',
'Edit Baseline': 'Edit Baseline',
'Edit Baseline Type': 'Edit Baseline Type',
'Edit Budget': 'Edit Budget',
'Edit Bundle': 'Edit Bundle',
'Edit Catalog Item': 'Edit Catalog Item',
'Edit Category<>Sub-Category<>Catalog Relation': 'Edit Category<>Sub-Category<>Catalog Relation',
'Edit Cluster Subsector': 'Edit Cluster Subsector',
'Edit Config': 'Edit Config',
'Edit Contact': 'Edit Contact',
'Edit Contact Information': 'Edit Contact Information',
'Edit Contents': 'Edit Contents',
'Edit Defaults': 'Edit Defaults',
'Edit Description': 'Edit Description',
'Edit Details': 'Edit Details',
'Edit Disaster Victims': 'Edit Disaster Victims',
'Edit Document': 'Edit Document',
'Edit Donor': 'Edit Donor',
'Edit Email Settings': 'Edit Email Settings',
'Edit Feature Class': 'Edit Feature Class',
'Edit Feature Layer': 'Edit Feature Layer',
'Edit Flood Report': 'Edit Flood Report',
'Edit Gateway Settings': 'Edit Gateway Settings',
'Edit Group': 'Edit Group',
'Edit Hospital': 'Edit Hospital',
'Edit Identification Report': 'Edit Identification Report',
'Edit Identity': 'Edit Identity',
'Edit Image': 'Edit Image',
'Edit Image Details': 'Edit Image Details',
'Edit Impact': 'Edit Impact',
'Edit Impact Type': 'Edit Impact Type',
'Edit Incident Report': 'Edit Incident Report',
'Edit Item': 'Edit Item',
'Edit Item Catalog': 'Edit Item Catalog',
'Edit Item Catalog Categories': 'Edit Item Catalog Categories',
'Edit Item Category': 'Edit Item Category',
'Edit Item Packet': 'Edit Item Packet',
'Edit Item Sub-Categories': 'Edit Item Sub-Categories',
'Edit Key': 'Edit Key',
'Edit Kit': 'Edit Kit',
'Edit Layer': 'Edit Layer',
'Edit Location': 'Edit Location',
'Edit Log Entry': 'Edit Log Entry',
'Edit Map Services': 'Edit Map Services',
'Edit Marker': 'Edit Marker',
'Edit Membership': 'Edit Membership',
'Edit Message': 'Edit Message',
'Edit Messaging Settings': 'Edit Messaging Settings',
'Edit Modem Settings': 'Edit Modem Settings',
'Edit Need': 'Edit Need',
'Edit Need Type': 'Edit Need Type',
'Edit Office': 'Edit Office',
'Edit Options': 'Edit Options',
'Edit Organization': 'Edit Organization',
'Edit Parameters': 'Edit Parameters',
'Edit Peer Details': 'Edit Peer Details',
'Edit Person Details': 'Edit Person Details',
'Edit Personal Effects Details': 'Edit Personal Effects Details',
'Edit Photo': 'Edit Photo',
'Edit Position': 'Edit Position',
'Edit Problem': 'Edit Problem',
'Edit Project': 'Edit Project',
'Edit Projection': 'Edit Projection',
'Edit Rapid Assessment': 'Edit Rapid Assessment',
'Edit Received Item': 'Edit Received Item',
'Edit Received Shipment': 'Edit Received Shipment',
'Edit Record': 'Edit Record',
'Edit Recovery Details': 'Edit Recovery Details',
'Edit Registration': 'Edit Registration',
'Edit Registration Details': 'Edit Registration Details',
'Edit Report': 'Edit Report',
'Edit Request': 'Edit Request',
'Edit Request Item': 'Edit Request Item',
'Edit Resource': 'Edit Resource',
'Edit River': 'Edit River',
'Edit Role': 'Edit Role',
'Edit Sector': 'Edit Sector',
'Edit Sent Item': 'Edit Sent Item',
'Edit Sent Shipment': 'Edit Sent Shipment',
'Edit Setting': 'Edit Setting',
'Edit Settings': 'Edit Settings',
'Edit Shelter': 'Edit Shelter',
'Edit Shelter Service': 'Edit Shelter Service',
'Edit Shelter Type': 'Edit Shelter Type',
'Edit Shipment Transit Log': 'Edit Shipment Transit Log',
'Edit Shipment/Way Bills': 'Edit Shipment/Way Bills',
'Edit Shipment<>Item Relation': 'Edit Shipment<>Item Relation',
'Edit Site': 'Edit Site',
'Edit Skill': 'Edit Skill',
'Edit Skill Type': 'Edit Skill Type',
'Edit Solution': 'Edit Solution',
'Edit Staff': 'Edit Staff',
'Edit Staff Type': 'Edit Staff Type',
'Edit Storage Bin Type(s)': 'Edit Storage Bin Type(s)',
'Edit Storage Bins': 'Edit Storage Bins',
'Edit Storage Location': 'Edit Storage Location',
'Edit Subscription': 'Edit Subscription',
'Edit Survey Answer': 'Edit Survey Answer',
'Edit Survey Question': 'Edit Survey Question',
'Edit Survey Section': 'Edit Survey Section',
'Edit Survey Series': 'Edit Survey Series',
'Edit Survey Template': 'Edit Survey Template',
'Edit Task': 'Edit Task',
'Edit Team': 'Edit Team',
'Edit Theme': 'Edit Theme',
'Edit Themes': 'Edit Themes',
'Edit Ticket': 'Edit Ticket',
'Edit Track': 'Edit Track',
'Edit Tropo Settings': 'Edit Tropo Settings',
'Edit Unit': 'Edit Unit',
'Edit User': 'Edit User',
'Edit Volunteer Details': 'Edit Volunteer Details',
'Edit Volunteer Registration': 'Edit Volunteer Registration',
'Edit Warehouse': 'Edit Warehouse',
'Edit Warehouse Item': 'Edit Warehouse Item',
'Edit current record': 'Edit current record',
'Edit message': 'Edit message',
'Edit the Application': 'Edit the Application',
'Edit the OpenStreetMap data for this area': 'Edit the OpenStreetMap data for this area',
'Editable?': 'Editable?',
'Education': 'Education',
'Education materials received': 'Education materials received',
'Education materials, source': 'Education materials, source',
'Effects Inventory': 'Effects Inventory',
'Eggs': 'Eggs',
'Either a shelter or a location must be specified': 'Either a shelter or a location must be specified',
'Either file upload or document URL required.': 'Either file upload or document URL required.',
'Either file upload or image URL required.': 'Either file upload or image URL required.',
'Elderly person headed households (>60 yrs)': 'Elderly person headed households (>60 yrs)',
'Electrical': 'Electrical',
'Elevated': 'Elevated',
'Email': 'Email',
'Email Settings': 'Email Settings',
'Email address verified, however registration is still pending approval - please wait until confirmation received.': 'Email address verified; however, registration is still pending approval - please wait until confirmation is received.',
'Email settings updated': 'Email settings updated',
'Embalming': 'Embalming',
'Embassy': 'Embassy',
'Emergency Capacity Building project': 'Emergency Capacity Building project',
'Emergency Department': 'Emergency Department',
'Emergency Shelter': 'Emergency Shelter',
'Emergency Support Facility': 'Emergency Support Facility',
'Emergency Support Service': 'Emergency Support Service',
'Emergency Telecommunications': 'Emergency Telecommunications',
'Enable/Disable Layers': 'Enable/Disable Layers',
'Enabled': 'Enabled',
'End date': 'End date',
'End date should be after start date': 'End date should be after start date',
'End of Period': 'End of Period',
'English': 'English',
'Enter Coordinates:': 'Enter Coordinates:',
'Enter a GPS Coord': 'Enter a GPS Coord',
'Enter a date before': 'Enter a date before',
'Enter a location': 'Enter a location',
'Enter a name for the spreadsheet you are uploading (mandatory).': 'Enter a name for the spreadsheet you are uploading (mandatory).',
'Enter a new support request.': 'Enter a new support request.',
'Enter a summary of the request here.': 'Enter a summary of the request here.',
'Enter a unique label!': 'Enter a unique label!',
'Enter a valid email': 'Enter a valid email',
'Enter tags separated by commas.': 'Enter tags separated by commas.',
'Enter the same password as above': 'Enter the same password as above',
'Enter your firstname': 'Enter your first name',
'Entering a phone number is optional, but doing so allows you to subscribe to receive SMS messages.': 'Entering a phone number is optional, but doing so allows you to subscribe to receive SMS messages.',
'Entry deleted': 'Entry deleted',
'Equipment': 'Equipment',
'Error encountered while applying the theme.': 'Error encountered while applying the theme.',
'Error in message': 'Error in message',
'Error logs for "%(app)s"': 'Error logs for "%(app)s"',
'Errors': 'Errors',
'Estimated # of households who are affected by the emergency': 'Estimated # of households who are affected by the emergency',
'Estimated # of people who are affected by the emergency': 'Estimated # of people who are affected by the emergency',
'Estimated total number of people in institutions': 'Estimated total number of people in institutions',
'Euros': 'Euros',
'Evacuating': 'Evacuating',
'Evaluate the information in this message. (This value SHOULD NOT be used in public warning applications.)': 'Evaluate the information in this message. (This value SHOULD NOT be used in public warning applications.)',
'Event type': 'Event type',
'Example': 'Example',
'Exceeded': 'Exceeded',
'Excreta disposal': 'Excreta disposal',
'Execute a pre-planned activity identified in <instruction>': 'Execute a pre-planned activity identified in <instruction>',
'Existing food stocks, main dishes': 'Existing food stocks, main dishes',
'Existing food stocks, side dishes': 'Existing food stocks, side dishes',
'Expected In': 'Expected In',
'Expected Out': 'Expected Out',
'Explosive Hazard': 'Explosive Hazard',
'Export': 'Export',
'Export Data': 'Export Data',
'Export Database as CSV': 'Export Database as CSV',
'Export in GPX format': 'Export in GPX format',
'Export in KML format': 'Export in KML format',
'Export in OSM format': 'Export in OSM format',
'Export in PDF format': 'Export in PDF format',
'Export in RSS format': 'Export in RSS format',
'Export in XLS format': 'Export in XLS format',
'External Features': 'External Features',
'Eye Color': 'Eye Color',
'Facebook': 'Facebook',
'Facial hair, color': 'Facial hair, color',
'Facial hair, type': 'Facial hair, type',
'Facial hear, length': 'Facial hair, length',
'Facility Operations': 'Facility Operations',
'Facility Status': 'Facility Status',
'Facility Type': 'Facility Type',
'Factors affecting school attendance': 'Factors affecting school attendance',
'Failed to send mail to Approver - see if you can notify them manually!': 'Failed to send mail to Approver - see if you can notify them manually!',
'Failed!': 'Failed!',
'Falling Object Hazard': 'Falling Object Hazard',
'Families/HH': 'Families/HH',
'Family': 'Family',
'Family tarpaulins received': 'Family tarpaulins received',
'Family tarpaulins, source': 'Family tarpaulins, source',
'Family/friends': 'Family/friends',
'Farmland/fishing material assistance, Rank': 'Farmland/fishing material assistance, Rank',
'Fax': 'Fax',
'Feature Class': 'Feature Class',
'Feature Class Details': 'Feature Class Details',
'Feature Class added': 'Feature Class added',
'Feature Class deleted': 'Feature Class deleted',
'Feature Class updated': 'Feature Class updated',
'Feature Classes': 'Feature Classes',
'Feature Classes are collections of Locations (Features) of the same type': 'Feature Classes are collections of Locations (Features) of the same type',
'Feature Layer Details': 'Feature Layer Details',
'Feature Layer added': 'Feature Layer added',
'Feature Layer deleted': 'Feature Layer deleted',
'Feature Layer updated': 'Feature Layer updated',
'Feature Layers': 'Feature Layers',
'Feature Namespace': 'Feature Namespace',
'Feature Type': 'Feature Type',
'Features Include': 'Features Include',
'Female': 'Female',
'Female headed households': 'Female headed households',
'Few': 'Few',
'Field Hospital': 'Field Hospital',
'File': 'File',
'Fill in Latitude': 'Fill in Latitude',
'Fill in Longitude': 'Fill in Longitude',
'Filter Field': 'Filter Field',
'Filter Value': 'Filter Value',
'Filtered search of aid pledges and requests': 'Filtered search of aid pledges and requests',
'Find': 'Find',
'Find Dead Body Report': 'Find Dead Body Report',
'Find Volunteers': 'Find Volunteers',
'Find by Name': 'Find by Name',
'Finder': 'Finder',
'Fingerprint': 'Fingerprint',
'Fingerprinting': 'Fingerprinting',
'Fingerprints': 'Fingerprints',
'Finish': 'Finish',
'Finished Jobs': 'Finished Jobs',
'Fire': 'Fire',
'Fire suppression and rescue': 'Fire suppression and rescue',
'First Name': 'First Name',
'First name': 'First name',
'Fishing': 'Fishing',
'Flash Flood': 'Flash Flood',
'Flash Freeze': 'Flash Freeze',
'Fleet Management': 'Fleet Management',
'Flexible Impact Assessments': 'Flexible Impact Assessments',
'Flood': 'Flood',
'Flood Alerts': 'Flood Alerts',
'Flood Alerts show water levels in various parts of the country': 'Flood Alerts show water levels in various parts of the country',
'Flood Report': 'Flood Report',
'Flood Report Details': 'Flood Report Details',
'Flood Report added': 'Flood Report added',
'Flood Report deleted': 'Flood Report deleted',
'Flood Report updated': 'Flood Report updated',
'Flood Reports': 'Flood Reports',
'Flow Status': 'Flow Status',
'Focal Point': 'Focal Point',
'Fog': 'Fog',
'Food': 'Food',
'Food Supply': 'Food Supply',
'Food assistance available/expected': 'Food assistance available/expected',
'Footer': 'Footer',
'Footer file %s missing!': 'Footer file %s missing!',
'For POP-3 this is usually 110 (995 for SSL), for IMAP this is usually 143 (993 for IMAP).': 'For POP-3 this is usually 110 (995 for SSL), for IMAP this is usually 143 (993 for IMAP).',
'For a country this would be the ISO2 code, for a Town, it would be the Airport Locode.': 'For a country this would be the ISO2 code, for a Town, it would be the Airport Locode.',
'For each sync partner, there is a default sync job that runs after a specified interval of time. You can also set up more sync jobs which could be customized on your needs. Click the link on the right to get started.': 'For each sync partner, there is a default sync job that runs after a specified interval of time. You can also set up more sync jobs which could be customized on your needs. Click the link on the right to get started.',
'For enhanced security, you are recommended to enter a username and password, and notify administrators of other machines in your organization to add this username and password against your UUID in Synchronization -> Sync Partners': 'For enhanced security, it is recommended that you enter a username and password, and notify administrators of other machines in your organization to add this username and password against your UUID in Synchronization -> Sync Partners',
'For live help from the Sahana community on using this application, go to': 'For live help from the Sahana community on using this application, go to',
'For messages that support alert network internal functions': 'For messages that support alert network internal functions',
'For more details on the Sahana Eden system, see the': 'For more details on the Sahana Eden system, see the',
'For more information, see ': 'For more information, see ',
'For:': 'For:',
'Forest Fire': 'Forest Fire',
'Formal camp': 'Formal camp',
'Format': 'Format',
'Forms': 'Forms',
'Found': 'Found',
'Freezing Drizzle': 'Freezing Drizzle',
'Freezing Rain': 'Freezing Rain',
'Freezing Spray': 'Freezing Spray',
'French': 'French',
'Friday': 'Friday',
'From': 'From',
'From Location': 'From Location',
'From Warehouse': 'From Warehouse',
'Frost': 'Frost',
'Full': 'Full',
'Full beard': 'Full beard',
'Fullscreen Map': 'Fullscreen Map',
'Functional Tests': 'Functional Tests',
'Functions available': 'Functions available',
'Funding Organization': 'Funding Organization',
'Funeral': 'Funeral',
'GIS Reports of Shelter': 'GIS Reports of Shelter',
'GIS integration to view location details of the Shelter': 'GIS integration to view location details of the Shelter',
'GPS': 'GPS',
'GPS Marker': 'GPS Marker',
'GPS Track': 'GPS Track',
'GPS Track File': 'GPS Track File',
'GPX Track': 'GPX Track',
'Gale Wind': 'Gale Wind',
'Gap Analysis': 'Gap Analysis',
'Gap Analysis Map': 'Gap Analysis Map',
'Gap Analysis Report': 'Gap Analysis Report',
'Gap Map': 'Gap Map',
'Gap Report': 'Gap Report',
'Gateway Settings': 'Gateway Settings',
'Gateway settings updated': 'Gateway settings updated',
'Gender': 'Gender',
'General Medical/Surgical': 'General Medical/Surgical',
'General emergency and public safety': 'General emergency and public safety',
'Generator': 'Generator',
'Geocoder Selection': 'Geocoder Selection',
'Geometry Name': 'Geometry Name',
'Geonames.org search requires Internet connectivity!': 'Geonames.org search requires Internet connectivity!',
'Geophysical (inc. landslide)': 'Geophysical (inc. landslide)',
'Geraldo module not available within the running Python - this needs installing for PDF output!': 'Geraldo module not available within the running Python - this needs installing for PDF output!',
'Girls 13-18 yrs in affected area': 'Girls 13-18 yrs in affected area',
'Girls 13-18 yrs not attending school': 'Girls 13-18 yrs not attending school',
'Girls 6-12 yrs in affected area': 'Girls 6-12 yrs in affected area',
'Girls 6-12 yrs not attending school': 'Girls 6-12 yrs not attending school',
'Give a brief description of the image, e.g. what can be seen where on the picture (optional).': 'Give a brief description of the image, e.g. what can be seen where on the picture (optional).',
'Give information about where and when you have seen the person': 'Give information about where and when you have seen the person',
'Give information about where and when you have seen them': 'Give information about where and when you have seen them',
'Global Messaging Settings': 'Global Messaging Settings',
'Goatee': 'Goatee',
'Government': 'Government',
'Government UID': 'Government UID',
'Government building': 'Government building',
'Grade': 'Grade',
'Greek': 'Greek',
'Group': 'Group',
'Group %(group_id)s created': 'Group %(group_id)s created',
'Group Details': 'Group Details',
'Group ID': 'Group ID',
'Group Member added': 'Group Member added',
'Group Members': 'Group Members',
'Group Memberships': 'Group Memberships',
'Group Title': 'Group Title',
'Group Type': 'Group Type',
'Group added': 'Group added',
'Group deleted': 'Group deleted',
'Group description': 'Group description',
'Group name': 'Group name',
'Group type': 'Group type',
'Group updated': 'Group updated',
'Groups': 'Groups',
'Groups removed': 'Groups removed',
'Guest': 'Guest',
'Hail': 'Hail',
'Hair Color': 'Hair Color',
'Hair Length': 'Hair Length',
'Hair Style': 'Hair Style',
'Has data from this Reference Document been entered into Sahana?': 'Has data from this Reference Document been entered into Sahana?',
'Has the safety and security of women and children in your community changed since the emergency?': 'Has the safety and security of women and children in your community changed since the emergency?',
'Has your business been damaged in the course of the disaster?': 'Has your business been damaged in the course of the disaster?',
'Have households received any shelter/NFI assistance or is assistance expected in the coming days?': 'Have households received any shelter/NFI assistance or is assistance expected in the coming days?',
'Have normal food sources been disrupted?': 'Have normal food sources been disrupted?',
'Have schools received or are expecting to receive any assistance?': 'Have schools received or are expecting to receive any assistance?',
'Have the people received or are you expecting any medical or food assistance in the coming days?': 'Have the people received or are you expecting any medical or food assistance in the coming days?',
'Hazard Pay': 'Hazard Pay',
'Hazardous Material': 'Hazardous Material',
'Hazardous Road Conditions': 'Hazardous Road Conditions',
'Header Background': 'Header Background',
'Header background file %s missing!': 'Header background file %s missing!',
'Headquarters': 'Headquarters',
'Health': 'Health',
'Health care assistance, Rank': 'Health care assistance, Rank',
'Health center': 'Health center',
'Health center with beds': 'Health center with beds',
'Health center without beds': 'Health center without beds',
'Health services functioning prior to disaster': 'Health services functioning prior to disaster',
'Health services functioning since disaster': 'Health services functioning since disaster',
'Healthcare Worker': 'Healthcare Worker',
'Heat Wave': 'Heat Wave',
'Heat and Humidity': 'Heat and Humidity',
'Height': 'Height',
'Height (cm)': 'Height (cm)',
'Help': 'Help',
'Helps to monitor status of hospitals': 'Helps to monitor status of hospitals',
'Helps to report and search for Missing Persons': 'Helps to report and search for Missing Persons',
'Here are the solution items related to the problem.': 'Here are the solution items related to the problem.',
'High': 'High',
'High Water': 'High Water',
'Hindu': 'Hindu',
'History': 'History',
'Hit the back button on your browser to try again.': 'Hit the back button on your browser to try again.',
'Holiday Address': 'Holiday Address',
'Home': 'Home',
'Home Address': 'Home Address',
'Home Country': 'Home Country',
'Home Crime': 'Home Crime',
'Hospital': 'Hospital',
'Hospital Details': 'Hospital Details',
'Hospital Status Report': 'Hospital Status Report',
'Hospital information added': 'Hospital information added',
'Hospital information deleted': 'Hospital information deleted',
'Hospital information updated': 'Hospital information updated',
'Hospital status assessment.': 'Hospital status assessment.',
'Hospitals': 'Hospitals',
'Hot Spot': 'Hot Spot',
'Household kits received': 'Household kits received',
'Household kits, source': 'Household kits, source',
'How did boys 13-17yrs spend most of their time prior to the disaster?': 'How did boys 13-17yrs spend most of their time prior to the disaster?',
'How did boys <12yrs spend most of their time prior to the disaster?': 'How did boys <12yrs spend most of their time prior to the disaster?',
'How did boys girls 13-17yrs spend most of their time prior to the disaster?': 'How did girls 13-17yrs spend most of their time prior to the disaster?',
'How did girls <12yrs spend most of their time prior to the disaster?': 'How did girls <12yrs spend most of their time prior to the disaster?',
'How do boys 13-17yrs spend most of their time now?': 'How do boys 13-17yrs spend most of their time now?',
'How do boys <12yrs spend most of their time now?': 'How do boys <12yrs spend most of their time now?',
'How do girls 13-17yrs spend most of their time now?': 'How do girls 13-17yrs spend most of their time now?',
'How do girls <12yrs spend most of their time now?': 'How do girls <12yrs spend most of their time now?',
'How does it work?': 'How does it work?',
'How is this person affected by the disaster? (Select all that apply)': 'How is this person affected by the disaster? (Select all that apply)',
'How long does it take you to reach the available water resources? Specify the time required to go there and back, including queuing time, by foot.': 'How long does it take you to reach the available water resources? Specify the time required to go there and back, including queuing time, by foot.',
'How long does it take you to walk to the health service?': 'How long does it take you to walk to the health service?',
'How long will the food last?': 'How long will the food last?',
'How long will this water resource last?': 'How long will this water resource last?',
'How many Boys (0-17 yrs) are Dead due to the crisis': 'How many Boys (0-17 yrs) are Dead due to the crisis',
'How many Boys (0-17 yrs) are Injured due to the crisis': 'How many Boys (0-17 yrs) are Injured due to the crisis',
'How many Boys (0-17 yrs) are Missing due to the crisis': 'How many Boys (0-17 yrs) are Missing due to the crisis',
'How many Girls (0-17 yrs) are Dead due to the crisis': 'How many Girls (0-17 yrs) are Dead due to the crisis',
'How many Girls (0-17 yrs) are Injured due to the crisis': 'How many Girls (0-17 yrs) are Injured due to the crisis',
'How many Girls (0-17 yrs) are Missing due to the crisis': 'How many Girls (0-17 yrs) are Missing due to the crisis',
'How many Men (18 yrs+) are Dead due to the crisis': 'How many Men (18 yrs+) are Dead due to the crisis',
'How many Men (18 yrs+) are Injured due to the crisis': 'How many Men (18 yrs+) are Injured due to the crisis',
'How many Men (18 yrs+) are Missing due to the crisis': 'How many Men (18 yrs+) are Missing due to the crisis',
'How many Women (18 yrs+) are Dead due to the crisis': 'How many Women (18 yrs+) are Dead due to the crisis',
'How many Women (18 yrs+) are Injured due to the crisis': 'How many Women (18 yrs+) are Injured due to the crisis',
'How many Women (18 yrs+) are Missing due to the crisis': 'How many Women (18 yrs+) are Missing due to the crisis',
'How many days will the supplies last?': 'How many days will the supplies last?',
'How many doctors in the health centers are still actively working?': 'How many doctors in the health centers are still actively working?',
'How many houses are uninhabitable (uninhabitable = foundation and structure destroyed)?': 'How many houses are uninhabitable (uninhabitable = foundation and structure destroyed)?',
'How many houses suffered damage but remain usable (usable = windows broken, cracks in walls, roof slightly damaged)?': 'How many houses suffered damage but remain usable (usable = windows broken, cracks in walls, roof slightly damaged)?',
'How many latrines are available in the village/IDP centre/Camp?': 'How many latrines are available in the village/IDP centre/Camp?',
'How many midwives in the health centers are still actively working?': 'How many midwives in the health centers are still actively working?',
'How many new cases have been admitted to this facility in the past 24h?': 'How many new cases have been admitted to this facility in the past 24h?',
'How many nurses in the health centers are still actively working?': 'How many nurses in the health centers are still actively working?',
'How many of the patients with the disease died in the past 24h at this facility?': 'How many of the patients with the disease died in the past 24h at this facility?',
'How many of the primary school age boys (6-12) in the area are not attending school?': 'How many of the primary school age boys (6-12) in the area are not attending school?',
'How many of the primary school age girls (6-12) in the area are not attending school?': 'How many of the primary school age girls (6-12) in the area are not attending school?',
'How many of the primary/secondary schools are now open and running a regular schedule of class?': 'How many of the primary/secondary schools are now open and running a regular schedule of class?',
'How many of the secondary school age boys (13-18) in the area are not attending school?': 'How many of the secondary school age boys (13-18) in the area are not attending school?',
'How many of the secondary school age girls (13-18) in the area are not attending school?': 'How many of the secondary school age girls (13-18) in the area are not attending school?',
'How many patients with the disease are currently hospitalized at this facility?': 'How many patients with the disease are currently hospitalized at this facility?',
'How many primary school age boys (6-12) are in the affected area?': 'How many primary school age boys (6-12) are in the affected area?',
'How many primary school age girls (6-12) are in the affected area?': 'How many primary school age girls (6-12) are in the affected area?',
'How many primary/secondary schools were opening prior to the disaster?': 'How many primary/secondary schools were open prior to the disaster?',
'How many secondary school age boys (13-18) are in the affected area?': 'How many secondary school age boys (13-18) are in the affected area?',
'How many secondary school age girls (13-18) are in the affected area?': 'How many secondary school age girls (13-18) are in the affected area?',
'How many teachers have been affected by the disaster (affected = unable to work)?': 'How many teachers have been affected by the disaster (affected = unable to work)?',
'How many teachers worked in the schools prior to the disaster?': 'How many teachers worked in the schools prior to the disaster?',
'How much detail is seen. A high Zoom level means lot of detail, but not a wide area. A low Zoom level means seeing a wide area, but not a high level of detail.': 'How much detail is seen. A high Zoom level means a lot of detail, but not a wide area. A low Zoom level means seeing a wide area, but not a high level of detail.',
'Humanitarian NGO': 'Humanitarian NGO',
'Hurricane': 'Hurricane',
'Hurricane Force Wind': 'Hurricane Force Wind',
'Hygiene': 'Hygiene',
'Hygiene NFIs': 'Hygiene NFIs',
'Hygiene kits received': 'Hygiene kits received',
'Hygiene kits, source': 'Hygiene kits, source',
'Hygiene practice': 'Hygiene practice',
'Hygiene problems': 'Hygiene problems',
'ID Label': 'ID Label',
'ID Tag': 'ID Tag',
'ID Tag Number': 'ID Tag Number',
'ID type': 'ID type',
'Ice Pressure': 'Ice Pressure',
'Iceberg': 'Iceberg',
'Identification': 'Identification',
'Identification Report': 'Identification Report',
'Identification Reports': 'Identification Reports',
'Identification Status': 'Identification Status',
'Identification label of the Storage bin.': 'Identification label of the Storage bin.',
'Identified as': 'Identified as',
'Identified by': 'Identified by',
'Identity': 'Identity',
'Identity Details': 'Identity Details',
'Identity added': 'Identity added',
'Identity deleted': 'Identity deleted',
'Identity updated': 'Identity updated',
'If Unit = m, Base Unit = Km, then multiplicator is 0.0001 since 1m = 0.001 km.': 'If Unit = m, Base Unit = Km, then the multiplicator is 0.001 since 1 m = 0.001 km.',
'If enabled then a log is maintained of all records a user accesses. If disabled then it can still be enabled on a per-module basis.': 'If enabled then a log is maintained of all records a user accesses. If disabled then it can still be enabled on a per-module basis.',
'If enabled then a log is maintained of all records a user edits. If disabled then it can still be enabled on a per-module basis.': 'If enabled then a log is maintained of all records a user edits. If disabled then it can still be enabled on a per-module basis.',
'If no marker defined then the system default marker is used': 'If no marker defined then the system default marker is used',
'If no, specify why': 'If no, specify why',
'If the location is a geographic area, then state at what level here.': 'If the location is a geographic area, then state at what level here.',
'If this is set to True then mails will be deleted from the server after downloading.': 'If this is set to True then mails will be deleted from the server after downloading.',
'If this record should be restricted then select which role is required to access the record here.': 'If this record should be restricted then select which role is required to access the record here.',
'If this record should be restricted then select which role(s) are permitted to access the record here.': 'If this record should be restricted then select which role(s) are permitted to access the record here.',
"If this setting is enabled then all deleted records are just flagged as deleted instead of being really deleted. They will appear in the raw database access but won't be visible to normal users.": "If this setting is enabled then all deleted records are just flagged as deleted instead of being really deleted. They will appear in the raw database access but won't be visible to normal users.",
'If yes, specify what and by whom': 'If yes, specify what and by whom',
'If yes, which and how': 'If yes, which and how',
"If you cannot find the person you want to register as a volunteer, you can add them by clicking 'Add Person' below:": "If you cannot find the person you want to register as a volunteer, you can add them by clicking 'Add Person' below:",
"If you cannot find the person you want to report missing, you can add them by clicking 'Add Person' below:": "If you cannot find the person you want to report missing, you can add them by clicking 'Add Person' below:",
"If you cannot find the record of the person you want to report missing, you can add it by clicking 'Add Person' below:": "If you cannot find the record of the person you want to report missing, you can add it by clicking 'Add Person' below:",
'If you do not enter a Reference Document, your email will be displayed to allow this data to be verified.': 'If you do not enter a Reference Document, your email will be displayed to allow this data to be verified.',
'If you know what the Geonames ID of this location is then you can enter it here.': 'If you know what the Geonames ID of this location is then you can enter it here.',
'If you know what the OSM ID of this location is then you can enter it here.': 'If you know what the OSM ID of this location is then you can enter it here.',
'If you need to add a new document then you can click here to attach one.': 'If you need to add a new document then you can click here to attach one.',
'If you would like to help, then please': 'If you would like to help, then please',
'Illegal Immigrant': 'Illegal Immigrant',
'Image': 'Image',
'Image Details': 'Image Details',
'Image Tags': 'Image Tags',
'Image Type': 'Image Type',
'Image Upload': 'Image Upload',
'Image added': 'Image added',
'Image deleted': 'Image deleted',
'Image updated': 'Image updated',
'Image/Attachment': 'Image/Attachment',
'Image/Other Attachment': 'Image/Other Attachment',
'Imagery': 'Imagery',
'Images': 'Images',
'Immediate reconstruction assistance, Rank': 'Immediate reconstruction assistance, Rank',
'Impact Assessments': 'Impact Assessments',
'Impact Details': 'Impact Details',
'Impact Type': 'Impact Type',
'Impact Type Details': 'Impact Type Details',
'Impact Type added': 'Impact Type added',
'Impact Type deleted': 'Impact Type deleted',
'Impact Type updated': 'Impact Type updated',
'Impact Types': 'Impact Types',
'Impact added': 'Impact added',
'Impact deleted': 'Impact deleted',
'Impact updated': 'Impact updated',
'Impacts': 'Impacts',
'Import': 'Import',
'Import & Export Data': 'Import & Export Data',
'Import Data': 'Import Data',
'Import Job': 'Import Job',
'Import Jobs': 'Import Jobs',
'Import and Export': 'Import and Export',
'Import from Ushahidi Instance': 'Import from Ushahidi Instance',
'Import if Master': 'Import if Master',
'Import job created': 'Import job created',
'Import multiple tables as CSV': 'Import multiple tables as CSV',
'Import/Export': 'Import/Export',
'Important': 'Important',
'Importantly where there are no aid services being provided': 'Importantly where there are no aid services being provided',
'Imported': 'Imported',
'Importing data from spreadsheets': 'Importing data from spreadsheets',
'Improper decontamination': 'Improper decontamination',
'Improper handling of dead bodies': 'Improper handling of dead bodies',
'In Inventories': 'In Inventories',
'In Process': 'In Process',
'In Progress': 'In Progress',
'In Transit': 'In Transit',
'In general, what are the greatest needs of older people, people with disabilities, children, youth and women in your community?': 'In general, what are the greatest needs of older people, people with disabilities, children, youth and women in your community?',
'Inbound Mail Settings': 'Inbound Mail Settings',
'Incident': 'Incident',
'Incident Categories': 'Incident Categories',
'Incident Report': 'Incident Report',
'Incident Report Details': 'Incident Report Details',
'Incident Report added': 'Incident Report added',
'Incident Report deleted': 'Incident Report deleted',
'Incident Report updated': 'Incident Report updated',
'Incident Reporting': 'Incident Reporting',
'Incident Reporting System': 'Incident Reporting System',
'Incident Reports': 'Incident Reports',
'Incidents': 'Incidents',
'Incomplete': 'Incomplete',
'Individuals': 'Individuals',
'Industrial Crime': 'Industrial Crime',
'Industry Fire': 'Industry Fire',
'Industry close to village/camp': 'Industry close to village/camp',
'Infant (0-1)': 'Infant (0-1)',
'Infectious Disease': 'Infectious Disease',
'Infectious Diseases': 'Infectious Diseases',
'Infestation': 'Infestation',
'Informal Leader': 'Informal Leader',
'Informal camp': 'Informal camp',
'Information gaps': 'Information gaps',
'Infusion catheters available': 'Infusion catheters available',
'Infusion catheters need per 24h': 'Infusion catheters needed per 24h',
'Infusion catheters needed per 24h': 'Infusion catheters needed per 24h',
'Infusions available': 'Infusions available',
'Infusions needed per 24h': 'Infusions needed per 24h',
'Input Job': 'Input Job',
'Instant Porridge': 'Instant Porridge',
"Instead of automatically syncing from other peers over the network, you can also sync from files, which is necessary where there's no network. You can use this page to import sync data from files and also export data to sync files. Click the link on the right to go to this page.": "Instead of automatically syncing from other peers over the network, you can also sync from files, which is necessary where there's no network. You can use this page to import sync data from files and also export data to sync files. Click the link on the right to go to this page.",
'Institution': 'Institution',
'Insufficient': 'Insufficient',
'Insufficient vars: Need module, resource, jresource, instance': 'Insufficient vars: Need module, resource, jresource, instance',
'Intake Items': 'Intake Items',
'Intergovernmental Organisation': 'Intergovernmental Organisation',
'Internal Features': 'Internal Features',
'Internal State': 'Internal State',
'International NGO': 'International NGO',
'International Organization': 'International Organization',
'Interview taking place at': 'Interview taking place at',
'Invalid': 'Invalid',
'Invalid Query': 'Invalid Query',
'Invalid email': 'Invalid email',
'Invalid login': 'Invalid login',
'Invalid request!': 'Invalid request!',
'Invalid ticket': 'Invalid ticket',
'Inventories with Item': 'Inventories with Item',
'Inventory Management': 'Inventory Management',
'Inventory Store': 'Inventory Store',
'Inventory of Effects': 'Inventory of Effects',
'Inventory/Ledger': 'Inventory/Ledger',
'Is adequate food and water available for these institutions?': 'Is adequate food and water available for these institutions?',
'Is it safe to collect water?': 'Is it safe to collect water?',
'Is there any industrial or agro-chemical production close to the affected area/village?': 'Is there any industrial or agro-chemical production close to the affected area/village?',
'Issuing Authority': 'Issuing Authority',
'Item': 'Item',
'Item Catalog Categories': 'Item Catalog Categories',
'Item Catalog Category': 'Item Catalog Category',
'Item Catalog Category Details': 'Item Catalog Category Details',
'Item Catalog Category added': 'Item Catalog Category added',
'Item Catalog Category deleted': 'Item Catalog Category deleted',
'Item Catalog Category updated': 'Item Catalog Category updated',
'Item Catalog Details': 'Item Catalog Details',
'Item Catalog added': 'Item Catalog added',
'Item Catalog deleted': 'Item Catalog deleted',
'Item Catalog updated': 'Item Catalog updated',
'Item Catalogs': 'Item Catalogs',
'Item Categories': 'Item Categories',
'Item Category': 'Item Category',
'Item Category Details': 'Item Category Details',
'Item Category added': 'Item Category added',
'Item Category deleted': 'Item Category deleted',
'Item Category updated': 'Item Category updated',
'Item Details': 'Item Details',
'Item Packet Details': 'Item Packet Details',
'Item Packet added': 'Item Packet added',
'Item Packet deleted': 'Item Packet deleted',
'Item Packet updated': 'Item Packet updated',
'Item Packets': 'Item Packets',
'Item Sub-Categories': 'Item Sub-Categories',
'Item Sub-Category': 'Item Sub-Category',
'Item Sub-Category Details': 'Item Sub-Category Details',
'Item Sub-Category added': 'Item Sub-Category added',
'Item Sub-Category deleted': 'Item Sub-Category deleted',
'Item Sub-Category updated': 'Item Sub-Category updated',
'Item added': 'Item added',
'Item already in Bundle!': 'Item already in Bundle!',
'Item already in Kit!': 'Item already in Kit!',
'Item already in budget!': 'Item already in budget!',
'Item deleted': 'Item deleted',
'Item updated': 'Item updated',
'Items': 'Items',
'Japanese': 'Japanese',
'Jerry can': 'Jerry can',
'Jew': 'Jew',
'Job Title': 'Job Title',
'Jobs': 'Jobs',
'KPIs': 'KPIs',
'Key': 'Key',
'Key Details': 'Key Details',
'Key added': 'Key added',
'Key deleted': 'Key deleted',
'Key updated': 'Key updated',
'Keys': 'Keys',
'Kit': 'Kit',
'Kit Contents': 'Kit Contents',
'Kit Details': 'Kit Details',
'Kit Updated': 'Kit Updated',
'Kit added': 'Kit added',
'Kit deleted': 'Kit deleted',
'Kit updated': 'Kit updated',
'Kits': 'Kits',
'Known Identities': 'Known Identities',
'Known incidents of violence against women/girls': 'Known incidents of violence against women/girls',
'Known incidents of violence since disaster': 'Known incidents of violence since disaster',
'LICENCE': 'LICENCE',
'LICENSE': 'LICENSE',
'LMS Administration': 'LMS Administration',
'Label': 'Label',
'Lack of material': 'Lack of material',
'Lack of school uniform': 'Lack of school uniform',
'Lack of supplies at school': 'Lack of supplies at school',
'Lack of transport to school': 'Lack of transport to school',
'Lactating women': 'Lactating women',
'Lahar': 'Lahar',
'Landslide': 'Landslide',
'Language': 'Language',
'Last Name': 'Last Name',
'Last known location': 'Last known location',
'Last name': 'Last name',
'Last synchronization time': 'Last synchronization time',
'Last updated by': 'Last updated by',
'Last updated on': 'Last updated on',
'Latitude': 'Latitude',
'Latitude & Longitude': 'Latitude & Longitude',
'Latitude is North-South (Up-Down). Latitude is zero on the equator and positive in the northern hemisphere and negative in the southern hemisphere.': 'Latitude is North-South (Up-Down). Latitude is zero on the equator and positive in the northern hemisphere and negative in the southern hemisphere.',
'Latitude should be between': 'Latitude should be between',
'Law enforcement, military, homeland and local/private security': 'Law enforcement, military, homeland and local/private security',
'Layer Details': 'Layer Details',
'Layer added': 'Layer added',
'Layer deleted': 'Layer deleted',
'Layer updated': 'Layer updated',
'Layers': 'Layers',
'Layers updated': 'Layers updated',
'Layout': 'Layout',
'Legend Format': 'Legend Format',
'Length': 'Length',
'Level': 'Level',
"Level is higher than parent's": "Level is higher than parent's",
'Library support not available for OpenID': 'Library support not available for OpenID',
'Line': 'Line',
'Link Item & Shipment': 'Link Item & Shipment',
'Link an Item & Shipment': 'Link an Item & Shipment',
'List': 'List',
'List / Add Baseline Types': 'List / Add Baseline Types',
'List / Add Impact Types': 'List / Add Impact Types',
'List / Add Services': 'List / Add Services',
'List / Add Types': 'List / Add Types',
'List Activities': 'List Activities',
'List All': 'List All',
'List All Entries': 'List All Entries',
'List All Memberships': 'List All Memberships',
'List Assessment Summaries': 'List Assessment Summaries',
'List Assessments': 'List Assessments',
'List Baseline Types': 'List Baseline Types',
'List Baselines': 'List Baselines',
'List Budgets': 'List Budgets',
'List Bundles': 'List Bundles',
'List Catalog Items': 'List Catalog Items',
'List Category<>Sub-Category<>Catalog Relation': 'List Category<>Sub-Category<>Catalog Relation',
'List Checklists': 'List Checklists',
'List Cluster Subsectors': 'List Cluster Subsectors',
'List Configs': 'List Configs',
'List Conflicts': 'List Conflicts',
'List Contacts': 'List Contacts',
'List Documents': 'List Documents',
'List Donors': 'List Donors',
'List Feature Classes': 'List Feature Classes',
'List Feature Layers': 'List Feature Layers',
'List Flood Reports': 'List Flood Reports',
'List Groups': 'List Groups',
'List Groups/View Members': 'List Groups/View Members',
'List Hospitals': 'List Hospitals',
'List Identities': 'List Identities',
'List Images': 'List Images',
'List Impact Assessments': 'List Impact Assessments',
'List Impact Types': 'List Impact Types',
'List Impacts': 'List Impacts',
'List Incident Reports': 'List Incident Reports',
'List Item Catalog Categories': 'List Item Catalog Categories',
'List Item Catalogs': 'List Item Catalogs',
'List Item Categories': 'List Item Categories',
'List Item Packets': 'List Item Packets',
'List Item Sub-Categories': 'List Item Sub-Categories',
'List Items': 'List Items',
'List Keys': 'List Keys',
'List Kits': 'List Kits',
'List Layers': 'List Layers',
'List Locations': 'List Locations',
'List Log Entries': 'List Log Entries',
'List Markers': 'List Markers',
'List Members': 'List Members',
'List Memberships': 'List Memberships',
'List Messages': 'List Messages',
'List Missing Persons': 'List Missing Persons',
'List Need Types': 'List Need Types',
'List Needs': 'List Needs',
'List Offices': 'List Offices',
'List Organizations': 'List Organizations',
'List Peers': 'List Peers',
'List Personal Effects': 'List Personal Effects',
'List Persons': 'List Persons',
'List Photos': 'List Photos',
'List Positions': 'List Positions',
'List Problems': 'List Problems',
'List Projections': 'List Projections',
'List Projects': 'List Projects',
'List Rapid Assessments': 'List Rapid Assessments',
'List Received Items': 'List Received Items',
'List Received Shipments': 'List Received Shipments',
'List Records': 'List Records',
'List Registrations': 'List Registrations',
'List Reports': 'List Reports',
'List Request Items': 'List Request Items',
'List Requests': 'List Requests',
'List Resources': 'List Resources',
'List Rivers': 'List Rivers',
'List Roles': 'List Roles',
'List Sections': 'List Sections',
'List Sector': 'List Sector',
'List Sent Items': 'List Sent Items',
'List Sent Shipments': 'List Sent Shipments',
'List Service Profiles': 'List Service Profiles',
'List Settings': 'List Settings',
'List Shelter Services': 'List Shelter Services',
'List Shelter Types': 'List Shelter Types',
'List Shelters': 'List Shelters',
'List Shipment Transit Logs': 'List Shipment Transit Logs',
'List Shipment/Way Bills': 'List Shipment/Way Bills',
'List Shipment<>Item Relation': 'List Shipment<>Item Relation',
'List Sites': 'List Sites',
'List Skill Types': 'List Skill Types',
'List Skills': 'List Skills',
'List Solutions': 'List Solutions',
'List Staff': 'List Staff',
'List Staff Types': 'List Staff Types',
'List Status': 'List Status',
'List Storage Bin Type(s)': 'List Storage Bin Type(s)',
'List Storage Bins': 'List Storage Bins',
'List Storage Location': 'List Storage Location',
'List Subscriptions': 'List Subscriptions',
'List Survey Answers': 'List Survey Answers',
'List Survey Questions': 'List Survey Questions',
'List Survey Sections': 'List Survey Sections',
'List Survey Series': 'List Survey Series',
'List Survey Templates': 'List Survey Templates',
'List Tasks': 'List Tasks',
'List Teams': 'List Teams',
'List Themes': 'List Themes',
'List Tickets': 'List Tickets',
'List Tracks': 'List Tracks',
'List Units': 'List Units',
'List Users': 'List Users',
'List Volunteers': 'List Volunteers',
'List Warehouse Items': 'List Warehouse Items',
'List Warehouses': 'List Warehouses',
'List all': 'List all',
'List of Items': 'List of Items',
'List of Missing Persons': 'List of Missing Persons',
'List of Peers': 'List of Peers',
'List of Reports': 'List of Reports',
'List of Requests': 'List of Requests',
'List of Spreadsheets': 'List of Spreadsheets',
'List of Spreadsheets uploaded': 'List of Spreadsheets uploaded',
'List of Volunteers for this skills set': 'List of Volunteers for this skills set',
'List of addresses': 'List of addresses',
'List unidentified': 'List unidentified',
'List/Add': 'List/Add',
'Lists "who is doing what & where". Allows relief agencies to coordinate their activities': 'Lists "who is doing what & where". Allows relief agencies to coordinate their activities',
'Live Help': 'Live Help',
'Livelihood': 'Livelihood',
'Load Cleaned Data into Database': 'Load Cleaned Data into Database',
'Load Raw File into Grid': 'Load Raw File into Grid',
'Local Name': 'Local Name',
'Local Names': 'Local Names',
'Location': 'Location',
'Location 1': 'Location 1',
'Location 2': 'Location 2',
'Location Details': 'Location Details',
'Location added': 'Location added',
'Location deleted': 'Location deleted',
'Location details': 'Location details',
'Location updated': 'Location updated',
'Location: ': 'Location: ',
'Locations': 'Locations',
'Locations of this level need to have a parent of level': 'Locations of this level need to have a parent of level',
'Lockdown': 'Lockdown',
'Log': 'Log',
'Log Entry Details': 'Log Entry Details',
'Log entry added': 'Log entry added',
'Log entry deleted': 'Log entry deleted',
'Log entry updated': 'Log entry updated',
'Logged in': 'Logged in',
'Logged out': 'Logged out',
'Login': 'Login',
'Logistics': 'Logistics',
'Logistics Management': 'Logistics Management',
'Logistics Management System': 'Logistics Management System',
'Logo': 'Logo',
'Logo file %s missing!': 'Logo file %s missing!',
'Logout': 'Logout',
'Long Text': 'Long Text',
'Longitude': 'Longitude',
'Longitude is West - East (sideways). Latitude is North-South (Up-Down). Latitude is zero on the equator and positive in the northern hemisphere and negative in the southern hemisphere. Longitude is zero on the prime meridian (Greenwich Mean Time) and is positive to the east, across Europe and Asia. Longitude is negative to the west, across the Atlantic and the Americas. These need to be added in Decimal Degrees.': 'Longitude is West - East (sideways). Latitude is North-South (Up-Down). Latitude is zero on the equator and positive in the northern hemisphere and negative in the southern hemisphere. Longitude is zero on the prime meridian (Greenwich Mean Time) and is positive to the east, across Europe and Asia. Longitude is negative to the west, across the Atlantic and the Americas. These need to be added in Decimal Degrees.',
'Longitude is West - East (sideways). Longitude is zero on the prime meridian (Greenwich Mean Time) and is positive to the east, across Europe and Asia. Longitude is negative to the west, across the Atlantic and the Americas.': 'Longitude is West - East (sideways). Longitude is zero on the prime meridian (Greenwich Mean Time) and is positive to the east, across Europe and Asia. Longitude is negative to the west, across the Atlantic and the Americas.',
'Longitude should be between': 'Longitude should be between',
'Looting': 'Looting',
'Lost Password': 'Lost Password',
'Low': 'Low',
'Magnetic Storm': 'Magnetic Storm',
'Main cash source': 'Main cash source',
'Main income sources before disaster': 'Main income sources before disaster',
'Major outward damage': 'Major outward damage',
'Make Pledge': 'Make Pledge',
'Make Request': 'Make Request',
'Make a Request': 'Make a Request',
'Make a Request for Aid': 'Make a Request for Aid',
'Make preparations per the <instruction>': 'Make preparations per the <instruction>',
'Male': 'Male',
'Malnutrition present prior to disaster': 'Malnutrition present prior to disaster',
'Manage': 'Manage',
'Manage Category': 'Manage Category',
'Manage Item catalog': 'Manage Item catalog',
'Manage Kits': 'Manage Kits',
'Manage Relief Item Catalogue': 'Manage Relief Item Catalogue',
'Manage Sub-Category': 'Manage Sub-Category',
'Manage Users & Roles': 'Manage Users & Roles',
'Manage Warehouses/Sites': 'Manage Warehouses/Sites',
'Manage requests of hospitals for assistance.': 'Manage requests of hospitals for assistance.',
'Manage volunteers by capturing their skills, availability and allocation': 'Manage volunteers by capturing their skills, availability and allocation',
'Manager': 'Manager',
'Managing Office': 'Managing Office',
'Managing, Storing and Distributing Catalog Items.': 'Managing, Storing and Distributing Catalog Items.',
'Managing, Storing and Distributing Items.': 'Managing, Storing and Distributing Items.',
'Managing, Storing and Distributing Relief Items': 'Managing, Storing and Distributing Relief Items',
'Mandatory. In GeoServer, this is the Layer Name. Within the WFS getCapabilities, this is the FeatureType Name part after the colon(:).': 'Mandatory. In GeoServer, this is the Layer Name. Within the WFS getCapabilities, this is the FeatureType Name part after the colon (:).',
'Mandatory. The URL to access the service.': 'Mandatory. The URL to access the service.',
'Manual': 'Manual',
'Manual Synchronization': 'Manual Synchronization',
'Many': 'Many',
'Map': 'Map',
'Map Height': 'Map Height',
'Map Service Catalogue': 'Map Service Catalogue',
'Map Settings': 'Map Settings',
'Map Viewing Client': 'Map Viewing Client',
'Map Width': 'Map Width',
'Map from Sahana Eden': 'Map from Sahana Eden',
'Map of Hospitals': 'Map of Hospitals',
'Marine Security': 'Marine Security',
'Marital Status': 'Marital Status',
'Marker': 'Marker',
'Marker Details': 'Marker Details',
'Marker added': 'Marker added',
'Marker deleted': 'Marker deleted',
'Marker updated': 'Marker updated',
'Markers': 'Markers',
'Master Message Log': 'Master Message Log',
'Master Message Log to process incoming reports & requests': 'Master Message Log to process incoming reports & requests',
'Match Percentage': 'Match Percentage',
'Match percentage indicates the % match between these two records': 'Match percentage indicates the % match between these two records',
'Matching Records': 'Matching Records',
'Matrix of Choices (Multiple Answers)': 'Matrix of Choices (Multiple Answers)',
'Matrix of Choices (Only one answer)': 'Matrix of Choices (Only one answer)',
'Matrix of Text Fields': 'Matrix of Text Fields',
'Max Persons per Dwelling': 'Max Persons per Dwelling',
'Maximum Weight': 'Maximum Weight',
'Maximum weight capacity of the Storage Location followed by choosing the unit from the drop down list.': 'Maximum weight capacity of the Storage Location followed by choosing the unit from the drop down list.',
'Maximum weight capacity of the items the storage bin can contain. followed by choosing the unit from the drop down list.': 'Maximum weight capacity of the items the storage bin can contain, followed by choosing the unit from the drop down list.',
'Measure Area: Click the points around the polygon & end with a double-click': 'Measure Area: Click the points around the polygon & end with a double-click',
'Measure Length: Click the points along the path & end with a double-click': 'Measure Length: Click the points along the path & end with a double-click',
'Medical and public health': 'Medical and public health',
'Medium': 'Medium',
'Megabytes per Month': 'Megabytes per Month',
'Members': 'Members',
'Membership': 'Membership',
'Membership Details': 'Membership Details',
'Membership added': 'Membership added',
'Membership deleted': 'Membership deleted',
'Membership updated': 'Membership updated',
'Memberships': 'Memberships',
'Message': 'Message',
'Message Details': 'Message Details',
'Message Variable': 'Message Variable',
'Message added': 'Message added',
'Message deleted': 'Message deleted',
'Message updated': 'Message updated',
'Message variable': 'Message variable',
'Messages': 'Messages',
'Messaging': 'Messaging',
'Messaging settings updated': 'Messaging settings updated',
'Meteorite': 'Meteorite',
'Meteorological (inc. flood)': 'Meteorological (inc. flood)',
'Method used': 'Method used',
'Micronutrient malnutrition prior to disaster': 'Micronutrient malnutrition prior to disaster',
'Middle Name': 'Middle Name',
'Migrants or ethnic minorities': 'Migrants or ethnic minorities',
'Military': 'Military',
'Minorities participating in coping activities': 'Minorities participating in coping activities',
'Minutes must be a number between 0 and 60': 'Minutes must be a number between 0 and 60',
'Minutes per Month': 'Minutes per Month',
'Minutes should be a number greater than 0 and less than 60': 'Minutes should be a number greater than 0 and less than 60',
'Miscellaneous': 'Miscellaneous',
'Missing': 'Missing',
'Missing Person': 'Missing Person',
'Missing Person Details': 'Missing Person Details',
'Missing Person Reports': 'Missing Person Reports',
'Missing Persons': 'Missing Persons',
'Missing Persons Registry': 'Missing Persons Registry',
'Missing Persons Report': 'Missing Persons Report',
'Missing Report': 'Missing Report',
'Missing Senior Citizen': 'Missing Senior Citizen',
'Missing Vulnerable Person': 'Missing Vulnerable Person',
'Mobile': 'Mobile',
'Mobile Assess.': 'Mobile Assess.',
'Mobile Basic Assessment': 'Mobile Basic Assessment',
'Mobile Phone': 'Mobile Phone',
'Mode': 'Mode',
'Modem Settings': 'Modem Settings',
'Modem settings updated': 'Modem settings updated',
'Moderator': 'Moderator',
'Modify Feature: Select the feature you wish to deform & then Drag one of the dots to deform the feature in your chosen manner': 'Modify Feature: Select the feature you wish to deform & then Drag one of the dots to deform the feature in your chosen manner',
'Modify Information on groups and individuals': 'Modify Information on groups and individuals',
'Modifying data in spreadsheet before importing it to the database': 'Modifying data in spreadsheet before importing it to the database',
'Module Administration': 'Module Administration',
'Module disabled!': 'Module disabled!',
'Module provides access to information on current Flood Levels.': 'Module provides access to information on current Flood Levels.',
'Monday': 'Monday',
'Monthly Cost': 'Monthly Cost',
'Monthly Salary': 'Monthly Salary',
'Months': 'Months',
'Morgue Status': 'Morgue Status',
'Morgue Units Available': 'Morgue Units Available',
'Mosque': 'Mosque',
'Motorcycle': 'Motorcycle',
'Moustache': 'Moustache',
'Move Feature: Drag feature to desired location': 'Move Feature: Drag feature to desired location',
'Movements (Filter In/Out/Lost)': 'Movements (Filter In/Out/Lost)',
'MultiPolygon': 'MultiPolygon',
'Multiple': 'Multiple',
'Multiple Choice (Multiple Answers)': 'Multiple Choice (Multiple Answers)',
'Multiple Choice (Only One Answer)': 'Multiple Choice (Only One Answer)',
'Multiple Text Fields': 'Multiple Text Fields',
'Multiplicator': 'Multiplicator',
'Muslim': 'Muslim',
'My Tasks': 'My Tasks',
'N/A': 'N/A',
'Name': 'Name',
'Name and/or ID': 'Name and/or ID',
'Name of Storage Bin Type.': 'Name of Storage Bin Type.',
'Name of the file (& optional sub-path) located in static which should be used for the background of the header.': 'Name of the file (& optional sub-path) located in static which should be used for the background of the header.',
'Name of the file (& optional sub-path) located in static which should be used for the top-left image.': 'Name of the file (& optional sub-path) located in static which should be used for the top-left image.',
'Name of the file (& optional sub-path) located in views which should be used for footer.': 'Name of the file (& optional sub-path) located in views which should be used for footer.',
'Name of the person in local language and script (optional).': 'Name of the person in local language and script (optional).',
'Names can be added in multiple languages': 'Names can be added in multiple languages',
'National ID Card': 'National ID Card',
'National NGO': 'National NGO',
'Nationality': 'Nationality',
'Nationality of the person.': 'Nationality of the person.',
'Nautical Accident': 'Nautical Accident',
'Nautical Hijacking': 'Nautical Hijacking',
'Need Type': 'Need Type',
'Need Type Details': 'Need Type Details',
'Need Type added': 'Need Type added',
'Need Type deleted': 'Need Type deleted',
'Need Type updated': 'Need Type updated',
'Need Types': 'Need Types',
"Need a 'url' argument!": "Need a 'url' argument!",
'Need added': 'Need added',
'Need deleted': 'Need deleted',
'Need to configure Twitter Authentication': 'Need to configure Twitter Authentication',
'Need to specify a Budget!': 'Need to specify a Budget!',
'Need to specify a Kit!': 'Need to specify a Kit!',
'Need to specify a Resource!': 'Need to specify a Resource!',
'Need to specify a bundle!': 'Need to specify a bundle!',
'Need to specify a group!': 'Need to specify a group!',
'Need to specify a location to search for.': 'Need to specify a location to search for.',
'Need to specify a role!': 'Need to specify a role!',
'Need to specify a table!': 'Need to specify a table!',
'Need to specify a user!': 'Need to specify a user!',
'Need updated': 'Need updated',
'Needs': 'Needs',
'Needs Details': 'Needs Details',
'Needs elaboration!!!': 'Needs elaboration!!!',
'Needs to reduce vulnerability to violence': 'Needs to reduce vulnerability to violence',
'Negative Flow Isolation': 'Negative Flow Isolation',
'Neighbourhood': 'Neighbourhood',
'Neonatal ICU': 'Neonatal ICU',
'Neonatology': 'Neonatology',
'Network': 'Network',
'Neurology': 'Neurology',
'New': 'New',
'New Assessment reported from': 'New Assessment reported from',
'New Checklist': 'New Checklist',
'New Peer': 'New Peer',
'New Record': 'New Record',
'New Report': 'New Report',
'New Request': 'New Request',
'New Solution Choice': 'New Solution Choice',
'New Synchronization Peer': 'New Synchronization Peer',
'New cases in the past 24h': 'New cases in the past 24h',
'Next': 'Next',
'Next View': 'Next View',
'No': 'No',
'No Activities Found': 'No Activities Found',
'No Addresses currently registered': 'No Addresses currently registered',
'No Assessment Summaries currently registered': 'No Assessment Summaries currently registered',
'No Assessments currently registered': 'No Assessments currently registered',
'No Baseline Types currently registered': 'No Baseline Types currently registered',
'No Baselines currently registered': 'No Baselines currently registered',
'No Budgets currently registered': 'No Budgets currently registered',
'No Bundles currently registered': 'No Bundles currently registered',
'No Catalog Items currently registered': 'No Catalog Items currently registered',
'No Category<>Sub-Category<>Catalog Relation currently registered': 'No Category<>Sub-Category<>Catalog Relation currently registered',
'No Checklist available': 'No Checklist available',
'No Cluster Subsectors currently registered': 'No Cluster Subsectors currently registered',
'No Configs currently defined': 'No Configs currently defined',
'No Details currently registered': 'No Details currently registered',
'No Documents found': 'No Documents found',
'No Donors currently registered': 'No Donors currently registered',
'No Feature Classes currently defined': 'No Feature Classes currently defined',
'No Feature Layers currently defined': 'No Feature Layers currently defined',
'No Flood Reports currently registered': 'No Flood Reports currently registered',
'No Groups currently defined': 'No Groups currently defined',
'No Groups currently registered': 'No Groups currently registered',
'No Hospitals currently registered': 'No Hospitals currently registered',
'No Identification Report Available': 'No Identification Report Available',
'No Identities currently registered': 'No Identities currently registered',
'No Image': 'No Image',
'No Images currently registered': 'No Images currently registered',
'No Impact Types currently registered': 'No Impact Types currently registered',
'No Impacts currently registered': 'No Impacts currently registered',
'No Incident Reports currently registered': 'No Incident Reports currently registered',
'No Item Catalog Category currently registered': 'No Item Catalog Category currently registered',
'No Item Catalog currently registered': 'No Item Catalog currently registered',
'No Item Categories currently registered': 'No Item Categories currently registered',
'No Item Packets currently registered': 'No Item Packets currently registered',
'No Item Sub-Category currently registered': 'No Item Sub-Category currently registered',
'No Item currently registered': 'No Item currently registered',
'No Items currently registered': 'No Items currently registered',
'No Items currently requested': 'No Items currently requested',
'No Keys currently defined': 'No Keys currently defined',
'No Kits currently registered': 'No Kits currently registered',
'No Locations currently available': 'No Locations currently available',
'No Locations currently registered': 'No Locations currently registered',
'No Markers currently available': 'No Markers currently available',
'No Members currently registered': 'No Members currently registered',
'No Memberships currently defined': 'No Memberships currently defined',
'No Memberships currently registered': 'No Memberships currently registered',
'No Messages currently in Outbox': 'No Messages currently in Outbox',
'No Need Types currently registered': 'No Need Types currently registered',
'No Needs currently registered': 'No Needs currently registered',
'No Offices currently registered': 'No Offices currently registered',
'No Offices found!': 'No Offices found!',
'No Organizations currently registered': 'No Organizations currently registered',
'No People currently registered in this shelter': 'No People currently registered in this shelter',
'No Persons currently registered': 'No Persons currently registered',
'No Persons currently reported missing': 'No Persons currently reported missing',
'No Persons found': 'No Persons found',
'No Photos found': 'No Photos found',
'No Presence Log Entries currently registered': 'No Presence Log Entries currently registered',
'No Problems currently defined': 'No Problems currently defined',
'No Projections currently defined': 'No Projections currently defined',
'No Projects currently registered': 'No Projects currently registered',
'No Rapid Assessments currently registered': 'No Rapid Assessments currently registered',
'No Received Items currently registered': 'No Received Items currently registered',
'No Received Shipments': 'No Received Shipments',
'No Records currently available': 'No Records currently available',
'No Records matching the query': 'No Records matching the query',
'No Request Items currently registered': 'No Request Items currently registered',
'No Request Shipments': 'No Request Shipments',
'No Requests have been made yet': 'No Requests have been made yet',
'No Requests match this criteria': 'No Requests match this criteria',
'No Rivers currently registered': 'No Rivers currently registered',
'No Roles currently defined': 'No Roles currently defined',
'No Sections currently registered': 'No Sections currently registered',
'No Sectors currently registered': 'No Sectors currently registered',
'No Sent Items currently registered': 'No Sent Items currently registered',
'No Sent Shipments': 'No Sent Shipments',
'No Settings currently defined': 'No Settings currently defined',
'No Shelter Services currently registered': 'No Shelter Services currently registered',
'No Shelter Types currently registered': 'No Shelter Types currently registered',
'No Shelters currently registered': 'No Shelters currently registered',
'No Shipment Transit Logs currently registered': 'No Shipment Transit Logs currently registered',
'No Shipment/Way Bills currently registered': 'No Shipment/Way Bills currently registered',
'No Shipment<>Item Relation currently registered': 'No Shipment<>Item Relation currently registered',
'No Sites currently registered': 'No Sites currently registered',
'No Skill Types currently set': 'No Skill Types currently set',
'No Solutions currently defined': 'No Solutions currently defined',
'No Staff Types currently registered': 'No Staff Types currently registered',
'No Staff currently registered': 'No Staff currently registered',
'No Storage Bin Type currently registered': 'No Storage Bin Type currently registered',
'No Storage Bins currently registered': 'No Storage Bins currently registered',
'No Storage Locations currently registered': 'No Storage Locations currently registered',
'No Subscription available': 'No Subscription available',
'No Survey Answers currently registered': 'No Survey Answers currently registered',
'No Survey Questions currently registered': 'No Survey Questions currently registered',
'No Survey Sections currently registered': 'No Survey Sections currently registered',
'No Survey Series currently registered': 'No Survey Series currently registered',
'No Survey Template currently registered': 'No Survey Template currently registered',
'No Tasks with Location Data': 'No Tasks with Location Data',
'No Themes currently defined': 'No Themes currently defined',
'No Tickets currently registered': 'No Tickets currently registered',
'No Tracks currently available': 'No Tracks currently available',
'No Units currently registered': 'No Units currently registered',
'No Users currently registered': 'No Users currently registered',
'No Volunteers currently registered': 'No Volunteers currently registered',
'No Warehouse Items currently registered': 'No Warehouse Items currently registered',
'No Warehouses currently registered': 'No Warehouses currently registered',
'No Warehouses match this criteria': 'No Warehouses match this criteria',
'No access at all': 'No access at all',
'No access to this record!': 'No access to this record!',
'No action recommended': 'No action recommended',
'No conflicts logged': 'No conflicts logged',
'No contact information available': 'No contact information available',
'No contacts currently registered': 'No contacts currently registered',
'No data in this table - cannot create PDF!': 'No data in this table - cannot create PDF!',
'No databases in this application': 'No databases in this application',
'No entries found': 'No entries found',
'No entries matching the query': 'No entries matching the query',
'No import jobs': 'No import jobs',
'No location known for this person': 'No location known for this person',
'No location known for this team': 'No location known for this team',
'No log entries matching the query': 'No log entries matching the query',
'No messages in the system': 'No messages in the system',
'No peers currently registered': 'No peers currently registered',
'No pending registrations found': 'No pending registrations found',
'No pending registrations matching the query': 'No pending registrations matching the query',
'No person record found for current user.': 'No person record found for current user.',
'No positions currently registered': 'No positions currently registered',
'No problem group defined yet': 'No problem group defined yet',
'No records matching the query': 'No records matching the query',
'No recovery reports available': 'No recovery reports available',
'No report available.': 'No report available.',
'No reports available.': 'No reports available.',
'No reports currently available': 'No reports currently available',
'No requests found': 'No requests found',
'No resources currently registered': 'No resources currently registered',
'No resources currently reported': 'No resources currently reported',
'No service profile available': 'No service profile available',
'No skills currently set': 'No skills currently set',
'No status information available': 'No status information available',
'No synchronization': 'No synchronization',
'No tasks currently registered': 'No tasks currently registered',
'No template found!': 'No template found!',
'No units currently registered': 'No units currently registered',
'No volunteer information registered': 'No volunteer information registered',
'None': 'None',
'None (no such record)': 'None (no such record)',
'Noodles': 'Noodles',
'Normal': 'Normal',
'Normal food sources disrupted': 'Normal food sources disrupted',
'Not Applicable': 'Not Applicable',
'Not Authorised!': 'Not Authorised!',
'Not Possible': 'Not Possible',
'Not Set': 'Not Set',
'Not authorised!': 'Not authorised!',
'Not installed or incorrectly configured.': 'Not installed or incorrectly configured.',
'Note that this list only shows active volunteers. To see all people registered in the system, do a search from the home screen instead.': 'Note that this list only shows active volunteers. To see all people registered in the system, do a search from the home screen instead.',
'Notice to Airmen': 'Notice to Airmen',
'Number': 'Number',
'Number of Columns': 'Number of Columns',
'Number of Patients': 'Number of Patients',
'Number of Rows': 'Number of Rows',
'Number of additional beds of that type expected to become available in this unit within the next 24 hours.': 'Number of additional beds of that type expected to become available in this unit within the next 24 hours.',
'Number of alternative places for studying': 'Number of alternative places for studying',
'Number of available/vacant beds of that type in this unit at the time of reporting.': 'Number of available/vacant beds of that type in this unit at the time of reporting.',
'Number of deaths during the past 24 hours.': 'Number of deaths during the past 24 hours.',
'Number of discharged patients during the past 24 hours.': 'Number of discharged patients during the past 24 hours.',
'Number of doctors': 'Number of doctors',
'Number of doctors actively working': 'Number of doctors actively working',
'Number of houses damaged, but usable': 'Number of houses damaged, but usable',
'Number of houses destroyed/uninhabitable': 'Number of houses destroyed/uninhabitable',
'Number of in-patients at the time of reporting.': 'Number of in-patients at the time of reporting.',
'Number of latrines': 'Number of latrines',
'Number of midwives actively working': 'Number of midwives actively working',
'Number of newly admitted patients during the past 24 hours.': 'Number of newly admitted patients during the past 24 hours.',
'Number of non-medical staff': 'Number of non-medical staff',
'Number of nurses': 'Number of nurses',
'Number of nurses actively working': 'Number of nurses actively working',
'Number of private schools': 'Number of private schools',
'Number of public schools': 'Number of public schools',
'Number of religious schools': 'Number of religious schools',
'Number of schools damaged but usable': 'Number of schools damaged but usable',
'Number of schools destroyed/uninhabitable': 'Number of schools destroyed/uninhabitable',
'Number of schools open before disaster': 'Number of schools open before disaster',
'Number of schools open now': 'Number of schools open now',
'Number of teachers affected by disaster': 'Number of teachers affected by disaster',
'Number of teachers before disaster': 'Number of teachers before disaster',
'Number of vacant/available beds in this hospital. Automatically updated from daily reports.': 'Number of vacant/available beds in this hospital. Automatically updated from daily reports.',
'Number of vacant/available units to which victims can be transported immediately.': 'Number of vacant/available units to which victims can be transported immediately.',
'Number or Label on the identification tag this person is wearing (if any).': 'Number or Label on the identification tag this person is wearing (if any).',
'Number/Percentage of affected population that is Female & Aged 0-5': 'Number/Percentage of affected population that is Female & Aged 0-5',
'Number/Percentage of affected population that is Female & Aged 13-17': 'Number/Percentage of affected population that is Female & Aged 13-17',
'Number/Percentage of affected population that is Female & Aged 18-25': 'Number/Percentage of affected population that is Female & Aged 18-25',
'Number/Percentage of affected population that is Female & Aged 26-60': 'Number/Percentage of affected population that is Female & Aged 26-60',
'Number/Percentage of affected population that is Female & Aged 6-12': 'Number/Percentage of affected population that is Female & Aged 6-12',
'Number/Percentage of affected population that is Female & Aged 61+': 'Number/Percentage of affected population that is Female & Aged 61+',
'Number/Percentage of affected population that is Male & Aged 0-5': 'Number/Percentage of affected population that is Male & Aged 0-5',
'Number/Percentage of affected population that is Male & Aged 13-17': 'Number/Percentage of affected population that is Male & Aged 13-17',
'Number/Percentage of affected population that is Male & Aged 18-25': 'Number/Percentage of affected population that is Male & Aged 18-25',
'Number/Percentage of affected population that is Male & Aged 26-60': 'Number/Percentage of affected population that is Male & Aged 26-60',
'Number/Percentage of affected population that is Male & Aged 6-12': 'Number/Percentage of affected population that is Male & Aged 6-12',
'Number/Percentage of affected population that is Male & Aged 61+': 'Number/Percentage of affected population that is Male & Aged 61+',
'Nursery Beds': 'Nursery Beds',
'Nutrition': 'Nutrition',
'OK': 'OK',
'OR Reason': 'OR Reason',
'OR Status': 'OR Status',
'OR Status Reason': 'OR Status Reason',
'Observer': 'Observer',
'Obstetrics/Gynecology': 'Obstetrics/Gynecology',
'Office': 'Office',
'Office Address': 'Office Address',
'Office Details': 'Office Details',
'Office added': 'Office added',
'Office deleted': 'Office deleted',
'Office updated': 'Office updated',
'Offices': 'Offices',
'Offline Sync': 'Offline Sync',
'Offline Sync (from USB/File Backup)': 'Offline Sync (from USB/File Backup)',
'Older people as primary caregivers of children': 'Older people as primary caregivers of children',
'Older people in care homes': 'Older people in care homes',
'Older people participating in coping activities': 'Older people participating in coping activities',
'Older people with chronical illnesses': 'Older people with chronic illnesses',
'Older person (>60 yrs)': 'Older person (>60 yrs)',
'On by default?': 'On by default?',
'On by default? (only applicable to Overlays)': 'On by default? (only applicable to Overlays)',
'One Time Cost': 'One Time Cost',
'One time cost': 'One time cost',
'One-time': 'One-time',
'One-time costs': 'One-time costs',
'Oops! Something went wrong...': 'Oops! Something went wrong...',
'Oops! something went wrong on our side.': 'Oops! something went wrong on our side.',
'Open': 'Open',
'Open area': 'Open area',
'Open recent': 'Open recent',
'Operating Rooms': 'Operating Rooms',
'Optional link to an Incident which this Assessment was triggered by.': 'Optional link to an Incident which this Assessment was triggered by.',
'Optional. In GeoServer, this is the Workspace Namespace URI. Within the WFS getCapabilities, this is the FeatureType Name part before the colon(:).': 'Optional. In GeoServer, this is the Workspace Namespace URI. Within the WFS getCapabilities, this is the FeatureType Name part before the colon (:).',
"Optional. The name of the geometry column. In PostGIS this defaults to 'the_geom'.": "Optional. The name of the geometry column. In PostGIS this defaults to 'the_geom'.",
'Options': 'Options',
'Organisation': 'Organisation',
'Organization': 'Organization',
'Organization Details': 'Organization Details',
'Organization Registry': 'Organization Registry',
'Organization added': 'Organization added',
'Organization deleted': 'Organization deleted',
'Organization updated': 'Organization updated',
'Organizations': 'Organizations',
'Origin': 'Origin',
'Origin of the separated children': 'Origin of the separated children',
'Other': 'Other',
'Other (describe)': 'Other (describe)',
'Other (specify)': 'Other (specify)',
'Other Evidence': 'Other Evidence',
'Other Faucet/Piped Water': 'Other Faucet/Piped Water',
'Other Isolation': 'Other Isolation',
'Other Name': 'Other Name',
'Other activities of boys 13-17yrs': 'Other activities of boys 13-17yrs',
'Other activities of boys 13-17yrs before disaster': 'Other activities of boys 13-17yrs before disaster',
'Other activities of boys <12yrs': 'Other activities of boys <12yrs',
'Other activities of boys <12yrs before disaster': 'Other activities of boys <12yrs before disaster',
'Other activities of girls 13-17yrs': 'Other activities of girls 13-17yrs',
'Other activities of girls 13-17yrs before disaster': 'Other activities of girls 13-17yrs before disaster',
'Other activities of girls<12yrs': 'Other activities of girls <12yrs',
'Other activities of girls<12yrs before disaster': 'Other activities of girls <12yrs before disaster',
'Other alternative infant nutrition in use': 'Other alternative infant nutrition in use',
'Other alternative places for study': 'Other alternative places for study',
'Other assistance needed': 'Other assistance needed',
'Other assistance, Rank': 'Other assistance, Rank',
'Other current health problems, adults': 'Other current health problems, adults',
'Other current health problems, children': 'Other current health problems, children',
'Other events': 'Other events',
'Other factors affecting school attendance': 'Other factors affecting school attendance',
'Other major expenses': 'Other major expenses',
'Other school assistance received': 'Other school assistance received',
'Other school assistance, details': 'Other school assistance, details',
'Other school assistance, source': 'Other school assistance, source',
'Other side dishes in stock': 'Other side dishes in stock',
'Other types of water storage containers': 'Other types of water storage containers',
'Other ways to obtain food': 'Other ways to obtain food',
'Outbound Mail settings are configured in models/000_config.py.': 'Outbound Mail settings are configured in models/000_config.py.',
'Outbox': 'Outbox',
'Outgoing SMS Handler': 'Outgoing SMS Handler',
'Outgoing SMS handler': 'Outgoing SMS handler',
'Overland Flow Flood': 'Overland Flow Flood',
'Overlays': 'Overlays',
'Owned Resources': 'Owned Resources',
'PDAM': 'PDAM',
'PIN': 'PIN',
'PIN number ': 'PIN number ',
'PL Women': 'PL Women',
'Packet': 'Packet',
'Pan Map: keep the left mouse button pressed and drag the map': 'Pan Map: keep the left mouse button pressed and drag the map',
'Parameters': 'Parameters',
'Parent': 'Parent',
'Parent Office': 'Parent Office',
"Parent level should be higher than this record's level. Parent level is": "Parent level should be higher than this record's level. Parent level is",
'Parent needs to be of the correct level': 'Parent needs to be of the correct level',
'Parent needs to be set': 'Parent needs to be set',
'Parent needs to be set for locations of level': 'Parent needs to be set for locations of level',
'Parents/Caregivers missing children': 'Parents/Caregivers missing children',
'Participant': 'Participant',
'Pashto': 'Pashto',
'Passport': 'Passport',
'Password': 'Password',
"Password fields don't match": "Password fields don't match",
'Pathology': 'Pathology',
'Patients': 'Patients',
'Pediatric ICU': 'Pediatric ICU',
'Pediatric Psychiatric': 'Pediatric Psychiatric',
'Pediatrics': 'Pediatrics',
'Peer': 'Peer',
'Peer Details': 'Peer Details',
'Peer Registration': 'Peer Registration',
'Peer Registration Details': 'Peer Registration Details',
'Peer Registration Request': 'Peer Registration Request',
'Peer Type': 'Peer Type',
'Peer UID': 'Peer UID',
'Peer added': 'Peer added',
'Peer deleted': 'Peer deleted',
'Peer not allowed to push': 'Peer not allowed to push',
'Peer registration request added': 'Peer registration request added',
'Peer registration request deleted': 'Peer registration request deleted',
'Peer registration request updated': 'Peer registration request updated',
'Peer updated': 'Peer updated',
'Peers': 'Peers',
'Pending Requests': 'Pending Requests',
'People': 'People',
'People Needing Food': 'People Needing Food',
'People Needing Shelter': 'People Needing Shelter',
'People Needing Water': 'People Needing Water',
'People Trapped': 'People Trapped',
'People with chronical illnesses': 'People with chronic illnesses',
'Person': 'Person',
'Person 1': 'Person 1',
'Person 1, Person 2 are the potentially duplicate records': 'Person 1, Person 2 are the potentially duplicate records',
'Person 2': 'Person 2',
'Person Data': 'Person Data',
'Person De-duplicator': 'Person De-duplicator',
'Person Details': 'Person Details',
'Person Finder': 'Person Finder',
'Person Registry': 'Person Registry',
'Person added': 'Person added',
'Person deleted': 'Person deleted',
'Person details updated': 'Person details updated',
'Person interviewed': 'Person interviewed',
'Person missing': 'Person missing',
'Person reporting': 'Person reporting',
'Person who has actually seen the person/group.': 'Person who has actually seen the person/group.',
'Person who is reporting about the presence.': 'Person who is reporting about the presence.',
'Person who observed the presence (if different from reporter).': 'Person who observed the presence (if different from reporter).',
'Person/Group': 'Person/Group',
'Personal Data': 'Personal Data',
'Personal Effects': 'Personal Effects',
'Personal Effects Details': 'Personal Effects Details',
'Personal impact of disaster': 'Personal impact of disaster',
'Persons': 'Persons',
'Persons with disability (mental)': 'Persons with disability (mental)',
'Persons with disability (physical)': 'Persons with disability (physical)',
'Phone': 'Phone',
'Phone 1': 'Phone 1',
'Phone 2': 'Phone 2',
"Phone number to donate to this organization's relief efforts.": "Phone number to donate to this organization's relief efforts.",
'Phone/Business': 'Phone/Business',
'Phone/Emergency': 'Phone/Emergency',
'Phone/Exchange': 'Phone/Exchange',
'Photo': 'Photo',
'Photo Details': 'Photo Details',
'Photo added': 'Photo added',
'Photo deleted': 'Photo deleted',
'Photo updated': 'Photo updated',
'Photograph': 'Photograph',
'Photos': 'Photos',
'Physical Description': 'Physical Description',
'Picture upload and finger print upload facility': 'Picture upload and finger print upload facility',
'Place for solid waste disposal': 'Place for solid waste disposal',
'Place of Recovery': 'Place of Recovery',
'Places the children have been sent to': 'Places the children have been sent to',
'Playing': 'Playing',
"Please come back after sometime if that doesn't help.": "Please come back after some time if that doesn't help.",
'Please correct all errors.': 'Please correct all errors.',
'Please enter a First Name': 'Please enter a First Name',
'Please enter a valid email address': 'Please enter a valid email address',
'Please enter the first few letters of the Person/Group for the autocomplete.': 'Please enter the first few letters of the Person/Group for the autocomplete.',
'Please enter the recipient': 'Please enter the recipient',
'Please fill this!': 'Please fill this!',
'Please report here where you are:': 'Please report here where you are:',
'Please select another level': 'Please select another level',
'Please specify any problems and obstacles with the proper handling of the disease, in detail (in numbers, where appropriate). You may also add suggestions the situation could be improved.': 'Please specify any problems and obstacles with the proper handling of the disease, in detail (in numbers, where appropriate). You may also add suggestions on how the situation could be improved.',
'Please use this field to record any additional information, including a history of the record if it is updated.': 'Please use this field to record any additional information, including a history of the record if it is updated.',
'Please use this field to record any additional information, such as Ushahidi instance IDs. Include a history of the record if it is updated.': 'Please use this field to record any additional information, such as Ushahidi instance IDs. Include a history of the record if it is updated.',
'Pledge Aid': 'Pledge Aid',
'Pledge Aid to match these Requests': 'Pledge Aid to match these Requests',
'Pledge Support': 'Pledge Support',
'Pledged': 'Pledged',
'Pledges': 'Pledges',
'Point': 'Point',
'Poisoning': 'Poisoning',
'Poisonous Gas': 'Poisonous Gas',
'Police': 'Police',
'Pollution and other environmental': 'Pollution and other environmental',
'Polygon': 'Polygon',
'Porridge': 'Porridge',
'Port': 'Port',
'Port Closure': 'Port Closure',
'Position Details': 'Position Details',
'Position added': 'Position added',
'Position deleted': 'Position deleted',
'Position type': 'Position type',
'Position updated': 'Position updated',
'Positions': 'Positions',
'Postcode': 'Postcode',
'Poultry': 'Poultry',
'Poultry restocking, Rank': 'Poultry restocking, Rank',
'Pounds': 'Pounds',
'Power Failure': 'Power Failure',
'Powered By': 'Powered By',
'Powered by Sahana Eden': 'Powered by Sahana Eden',
'Preferred Name': 'Preferred Name',
'Pregnant women': 'Pregnant women',
'Preliminary': 'Preliminary',
'Presence': 'Presence',
'Presence Condition': 'Presence Condition',
'Presence Log': 'Presence Log',
'Previous': 'Previous',
'Previous View': 'Previous View',
'Primary Name': 'Primary Name',
'Print Extent': 'Print Extent',
'Print Map': 'Print Map',
'Printed from Sahana Eden': 'Printed from Sahana Eden',
'Printing disabled since server not accessible: ': 'Printing disabled since server not accessible: ',
'Priority': 'Priority',
'Priority Level': 'Priority Level',
'Private': 'Private',
'Problem': 'Problem',
'Problem Administration': 'Problem Administration',
'Problem Details': 'Problem Details',
'Problem Group': 'Problem Group',
'Problem Title': 'Problem Title',
'Problem added': 'Problem added',
'Problem deleted': 'Problem deleted',
'Problem updated': 'Problem updated',
'Problems': 'Problems',
'Procedure': 'Procedure',
'Procurements': 'Procurements',
'Product Description': 'Product Description',
'Product Name': 'Product Name',
'Profile': 'Profile',
'Project': 'Project',
'Project Details': 'Project Details',
'Project Status': 'Project Status',
'Project Tracking': 'Project Tracking',
'Project added': 'Project added',
'Project deleted': 'Project deleted',
'Project has no Lat/Lon': 'Project has no Lat/Lon',
'Project updated': 'Project updated',
'Projection': 'Projection',
'Projection Details': 'Projection Details',
'Projection added': 'Projection added',
'Projection deleted': 'Projection deleted',
'Projection updated': 'Projection updated',
'Projections': 'Projections',
'Projects': 'Projects',
'Protected resource': 'Protected resource',
'Protection': 'Protection',
'Provide Metadata for your media files': 'Provide Metadata for your media files',
'Provide a password': 'Provide a password',
'Province': 'Province',
'Proxy-server': 'Proxy-server',
'Psychiatrics/Adult': 'Psychiatrics/Adult',
'Psychiatrics/Pediatric': 'Psychiatrics/Pediatric',
'Public': 'Public',
'Public Event': 'Public Event',
'Public and private transportation': 'Public and private transportation',
'Pull tickets from external feed': 'Pull tickets from external feed',
'Punjabi': 'Punjabi',
'Push tickets to external system': 'Push tickets to external system',
'Put a choice in the box': 'Put a choice in the box',
'Pyroclastic Flow': 'Pyroclastic Flow',
'Pyroclastic Surge': 'Pyroclastic Surge',
'Python Serial module not available within the running Python - this needs installing to activate the Modem': 'Python Serial module not available within the running Python - this needs installing to activate the Modem',
'Quantity': 'Quantity',
'Quarantine': 'Quarantine',
'Queries': 'Queries',
'Query': 'Query',
'Query Feature': 'Query Feature',
'Queryable?': 'Queryable?',
'RECORD A': 'RECORD A',
'RECORD B': 'RECORD B',
'RESPONSE': 'RESPONSE',
'Race': 'Race',
'Radiological Hazard': 'Radiological Hazard',
'Radiology': 'Radiology',
'Railway Accident': 'Railway Accident',
'Railway Hijacking': 'Railway Hijacking',
'Rain Fall': 'Rain Fall',
'Rapid Assessment': 'Rapid Assessment',
'Rapid Assessment Details': 'Rapid Assessment Details',
'Rapid Assessment added': 'Rapid Assessment added',
'Rapid Assessment deleted': 'Rapid Assessment deleted',
'Rapid Assessment updated': 'Rapid Assessment updated',
'Rapid Assessments': 'Rapid Assessments',
'Rapid Assessments & Flexible Impact Assessments': 'Rapid Assessments & Flexible Impact Assessments',
'Rapid Close Lead': 'Rapid Close Lead',
'Rating Scale': 'Rating Scale',
'Raw Database access': 'Raw Database access',
'Real World Arbitrary Units': 'Real World Arbitrary Units',
'Receive': 'Receive',
'Receive Items': 'Receive Items',
'Receive Shipment': 'Receive Shipment',
'Received': 'Received',
'Received By': 'Received By',
'Received Item Details': 'Received Item Details',
'Received Item added': 'Received Item added',
'Received Item deleted': 'Received Item deleted',
'Received Item updated': 'Received Item updated',
'Received Items': 'Received Items',
'Received Items added to Warehouse Items': 'Received Items added to Warehouse Items',
'Received Shipment Details': 'Received Shipment Details',
'Received Shipment canceled': 'Received Shipment canceled',
'Received Shipment updated': 'Received Shipment updated',
'Received Shipments': 'Received Shipments',
'Recipient': 'Recipient',
'Recipients': 'Recipients',
'Record Details': 'Record Details',
'Record ID': 'Record ID',
'Record Saved': 'Record Saved',
'Record added': 'Record added',
'Record deleted': 'Record deleted',
'Record last updated': 'Record last updated',
'Record not found!': 'Record not found!',
'Record updated': 'Record updated',
'Records': 'Records',
'Recovery': 'Recovery',
'Recovery Request': 'Recovery Request',
'Recovery Request added': 'Recovery Request added',
'Recovery Request deleted': 'Recovery Request deleted',
'Recovery Request updated': 'Recovery Request updated',
'Recovery Requests': 'Recovery Requests',
'Recovery report added': 'Recovery report added',
'Recovery report deleted': 'Recovery report deleted',
'Recovery report updated': 'Recovery report updated',
'Recurring': 'Recurring',
'Recurring Cost': 'Recurring Cost',
'Recurring cost': 'Recurring cost',
'Recurring costs': 'Recurring costs',
'Reference Document': 'Reference Document',
'Regional': 'Regional',
'Register': 'Register',
'Register Person': 'Register Person',
'Register Person into this Shelter': 'Register Person into this Shelter',
'Register them as a volunteer': 'Register them as a volunteer',
'Registered People': 'Registered People',
'Registered users can': 'Registered users can',
'Registering ad-hoc volunteers willing to contribute': 'Registering ad-hoc volunteers willing to contribute',
'Registration': 'Registration',
'Registration Details': 'Registration Details',
'Registration added': 'Registration added',
'Registration entry deleted': 'Registration entry deleted',
'Registration key': 'Registration key',
'Registration successful': 'Registration successful',
'Registration updated': 'Registration updated',
'Registry keeps track of all the relief organizations working in the disaster region. It captures not only the places where they are active, but also captures information on the range of projects they are providing in each area.': 'Registry keeps track of all the relief organizations working in the disaster region. It captures not only the places where they are active, but also captures information on the range of projects they are providing in each area.',
'Rehabilitation/Long Term Care': 'Rehabilitation/Long Term Care',
'Reliable access to sanitation/hygiene items': 'Reliable access to sanitation/hygiene items',
'Relief': 'Relief',
'Relief Item Catalog': 'Relief Item Catalog',
'Relief Team': 'Relief Team',
'Religion': 'Religion',
'Religious Leader': 'Religious Leader',
'Relocate as instructed in the <instruction>': 'Relocate as instructed in the <instruction>',
'Remember me (for 30 days)': 'Remember me (for 30 days)',
'Remove': 'Remove',
'Remove Feature: Select the feature you wish to remove & press the delete key': 'Remove Feature: Select the feature you wish to remove & press the delete key',
'Repeat your password': 'Repeat your password',
'Replace': 'Replace',
'Replace if Master': 'Replace if Master',
'Replace if Newer': 'Replace if Newer',
'Report': 'Report',
'Report Another Assessment...': 'Report Another Assessment...',
'Report Details': 'Report Details',
'Report Resource': 'Report Resource',
'Report Type': 'Report Type',
'Report Types Include': 'Report Types Include',
'Report a Problem with the Software': 'Report a Problem with the Software',
'Report added': 'Report added',
'Report deleted': 'Report deleted',
'Report my location': 'Report my location',
'Report that person missing': 'Report that person missing',
'Report the contributing factors for the current EMS status.': 'Report the contributing factors for the current EMS status.',
'Report the contributing factors for the current OR status.': 'Report the contributing factors for the current OR status.',
'Report the person as found': 'Report the person as found',
'Report them as found': 'Report them as found',
'Report them missing': 'Report them missing',
'Report updated': 'Report updated',
'ReportLab module not available within the running Python - this needs installing for PDF output!': 'ReportLab module not available within the running Python - this needs installing for PDF output!',
'Reporter': 'Reporter',
'Reporter Name': 'Reporter Name',
'Reporting on the projects in the region': 'Reporting on the projects in the region',
'Reports': 'Reports',
'Request': 'Request',
'Request Added': 'Request Added',
'Request Canceled': 'Request Canceled',
'Request Details': 'Request Details',
'Request Item': 'Request Item',
'Request Item Details': 'Request Item Details',
'Request Item added': 'Request Item added',
'Request Item deleted': 'Request Item deleted',
'Request Item updated': 'Request Item updated',
'Request Items': 'Request Items',
'Request Type': 'Request Type',
'Request Updated': 'Request Updated',
'Request added': 'Request added',
'Request deleted': 'Request deleted',
'Request for Role Upgrade': 'Request for Role Upgrade',
'Request updated': 'Request updated',
'Request, Response & Session': 'Request, Response & Session',
'Requested': 'Requested',
'Requested By Location': 'Requested By Location',
'Requested From Warehouse': 'Requested From Warehouse',
'Requested by': 'Requested by',
'Requested on': 'Requested on',
'Requester': 'Requester',
'Requestor': 'Requestor',
'Requests': 'Requests',
'Requests for Item': 'Requests for Item',
'Requires Login!': 'Requires Login!',
'Requires login': 'Requires login',
'Rescue and recovery': 'Rescue and recovery',
'Reset': 'Reset',
'Reset Password': 'Reset Password',
'Reset Password key': 'Reset Password key',
'Resize Feature: Select the feature you wish to resize & then Drag the associated dot to your desired size': 'Resize Feature: Select the feature you wish to resize & then Drag the associated dot to your desired size',
'Resolution': 'Resolution',
'Resolve': 'Resolve',
'Resolve Conflict': 'Resolve Conflict',
'Resolve link brings up a new screen which helps to resolve these duplicate records and update the database.': 'Resolve link brings up a new screen which helps to resolve these duplicate records and update the database.',
'Resource': 'Resource',
'Resource Details': 'Resource Details',
'Resource added': 'Resource added',
'Resource deleted': 'Resource deleted',
'Resource updated': 'Resource updated',
'Resources': 'Resources',
'Respiratory Infections': 'Respiratory Infections',
'Restricted Access': 'Restricted Access',
'Restrictions': 'Restrictions',
'Results': 'Results',
'Retail Crime': 'Retail Crime',
'Retrieve Password': 'Retrieve Password',
'Rice': 'Rice',
'Riot': 'Riot',
'River': 'River',
'River Details': 'River Details',
'River added': 'River added',
'River deleted': 'River deleted',
'River updated': 'River updated',
'Rivers': 'Rivers',
'Road Accident': 'Road Accident',
'Road Closed': 'Road Closed',
'Road Conditions': 'Road Conditions',
'Road Delay': 'Road Delay',
'Road Hijacking': 'Road Hijacking',
'Road Usage Condition': 'Road Usage Condition',
'Role': 'Role',
'Role Details': 'Role Details',
'Role Manager': 'Role Manager',
'Role Required': 'Role Required',
'Role Updated': 'Role Updated',
'Role added': 'Role added',
'Role deleted': 'Role deleted',
'Role updated': 'Role updated',
'Role-based': 'Role-based',
'Roles': 'Roles',
'Roles Permitted': 'Roles Permitted',
'Roof tile': 'Roof tile',
'Rotate Feature: Select the feature you wish to rotate & then Drag the associated dot to rotate to your desired location': 'Rotate Feature: Select the feature you wish to rotate & then Drag the associated dot to rotate to your desired location',
'Rotation': 'Rotation',
'Row Choices (One Per Line)': 'Row Choices (One Per Line)',
'Rows in table': 'Rows in table',
'Rows selected': 'Rows selected',
'Run Functional Tests': 'Run Functional Tests',
'Run Interval': 'Run Interval',
'Running Cost': 'Running Cost',
'SITUATION': 'SITUATION',
'Safe environment for vulnerable groups': 'Safe environment for vulnerable groups',
'Safety of children and women affected by disaster': 'Safety of children and women affected by disaster',
'Sahana Administrator': 'Sahana Administrator',
'Sahana Agasti': 'Sahana Agasti',
'Sahana Blue': 'Sahana Blue',
'Sahana Community Chat': 'Sahana Community Chat',
'Sahana Eden': 'Sahana Eden',
'Sahana Eden <=> Other': 'Sahana Eden <=> Other',
'Sahana Eden <=> Sahana Eden': 'Sahana Eden <=> Sahana Eden',
'Sahana Eden Disaster Management Platform': 'Sahana Eden Disaster Management Platform',
'Sahana Eden Open Source Disaster Management Platform': 'Sahana Eden Open Source Disaster Management Platform',
'Sahana Eden Website': 'Sahana Eden Website',
'Sahana Green': 'Sahana Green',
'Sahana Login Approval Pending': 'Sahana Login Approval Pending',
'Sahana Steel': 'Sahana Steel',
'Sahana access granted': 'Sahana access granted',
'Sahana: new request has been made. Please login to see if you can fulfil the request.': 'Sahana: new request has been made. Please login to see if you can fulfil the request.',
'Salted Fish': 'Salted Fish',
'Salvage material usable from destroyed houses': 'Salvage material usable from destroyed houses',
'Salvage material usable from destroyed schools': 'Salvage material usable from destroyed schools',
'Sanitation problems': 'Sanitation problems',
'Satellite': 'Satellite',
'Satellite Office': 'Satellite Office',
'Saturday': 'Saturday',
'Save': 'Save',
'Saved.': 'Saved.',
'Saving...': 'Saving...',
'Scale': 'Scale',
'Scale of Results': 'Scale of Results',
'Schedule': 'Schedule',
'School': 'School',
'School Closure': 'School Closure',
'School Lockdown': 'School Lockdown',
'School Teacher': 'School Teacher',
'School assistance received/expected': 'School assistance received/expected',
'School destroyed': 'School destroyed',
'School heavily damaged': 'School heavily damaged',
'School tents received': 'School tents received',
'School tents, source': 'School tents, source',
'School used for other purpose': 'School used for other purpose',
'School/studying': 'School/studying',
'Schools': 'Schools',
'Search': 'Search',
'Search & List Bin Types': 'Search & List Bin Types',
'Search & List Bins': 'Search & List Bins',
'Search & List Catalog': 'Search & List Catalog',
'Search & List Category': 'Search & List Category',
'Search & List Items': 'Search & List Items',
'Search & List Locations': 'Search & List Locations',
'Search & List Site': 'Search & List Site',
'Search & List Sub-Category': 'Search & List Sub-Category',
'Search & List Unit': 'Search & List Unit',
'Search Activities': 'Search Activities',
'Search Activity Report': 'Search Activity Report',
'Search Addresses': 'Search Addresses',
'Search Assessment Summaries': 'Search Assessment Summaries',
'Search Assessments': 'Search Assessments',
'Search Baseline Type': 'Search Baseline Type',
'Search Baselines': 'Search Baselines',
'Search Budgets': 'Search Budgets',
'Search Bundles': 'Search Bundles',
'Search Catalog Items': 'Search Catalog Items',
'Search Category<>Sub-Category<>Catalog Relation': 'Search Category<>Sub-Category<>Catalog Relation',
'Search Checklists': 'Search Checklists',
'Search Cluster Subsectors': 'Search Cluster Subsectors',
'Search Configs': 'Search Configs',
'Search Contact Information': 'Search Contact Information',
'Search Contacts': 'Search Contacts',
'Search Documents': 'Search Documents',
'Search Donors': 'Search Donors',
'Search Feature Class': 'Search Feature Class',
'Search Feature Layers': 'Search Feature Layers',
'Search Flood Reports': 'Search Flood Reports',
'Search Geonames': 'Search Geonames',
'Search Groups': 'Search Groups',
'Search Hospitals': 'Search Hospitals',
'Search Identity': 'Search Identity',
'Search Images': 'Search Images',
'Search Impact Type': 'Search Impact Type',
'Search Impacts': 'Search Impacts',
'Search Incident Reports': 'Search Incident Reports',
'Search Item Catalog Category(s)': 'Search Item Catalog Category(s)',
'Search Item Catalog(s)': 'Search Item Catalog(s)',
'Search Item Categories': 'Search Item Categories',
'Search Item Packets': 'Search Item Packets',
'Search Item Sub-Category(s)': 'Search Item Sub-Category(s)',
'Search Items': 'Search Items',
'Search Keys': 'Search Keys',
'Search Kits': 'Search Kits',
'Search Layers': 'Search Layers',
'Search Locations': 'Search Locations',
'Search Log Entry': 'Search Log Entry',
'Search Markers': 'Search Markers',
'Search Member': 'Search Member',
'Search Membership': 'Search Membership',
'Search Memberships': 'Search Memberships',
'Search Need Type': 'Search Need Type',
'Search Needs': 'Search Needs',
'Search Offices': 'Search Offices',
'Search Organizations': 'Search Organizations',
'Search Peer': 'Search Peer',
'Search Personal Effects': 'Search Personal Effects',
'Search Persons': 'Search Persons',
'Search Photos': 'Search Photos',
'Search Positions': 'Search Positions',
'Search Problems': 'Search Problems',
'Search Projections': 'Search Projections',
'Search Projects': 'Search Projects',
'Search Rapid Assessments': 'Search Rapid Assessments',
'Search Received Items': 'Search Received Items',
'Search Received Shipments': 'Search Received Shipments',
'Search Records': 'Search Records',
'Search Registations': 'Search Registrations',
'Search Registration Request': 'Search Registration Request',
'Search Report': 'Search Report',
'Search Reports': 'Search Reports',
'Search Request': 'Search Request',
'Search Request Items': 'Search Request Items',
'Search Requests': 'Search Requests',
'Search Resources': 'Search Resources',
'Search Rivers': 'Search Rivers',
'Search Roles': 'Search Roles',
'Search Sections': 'Search Sections',
'Search Sectors': 'Search Sectors',
'Search Sent Items': 'Search Sent Items',
'Search Sent Shipments': 'Search Sent Shipments',
'Search Service Profiles': 'Search Service Profiles',
'Search Settings': 'Search Settings',
'Search Shelter Services': 'Search Shelter Services',
'Search Shelter Types': 'Search Shelter Types',
'Search Shelters': 'Search Shelters',
'Search Shipment Transit Logs': 'Search Shipment Transit Logs',
'Search Shipment/Way Bills': 'Search Shipment/Way Bills',
'Search Shipment<>Item Relation': 'Search Shipment<>Item Relation',
'Search Site(s)': 'Search Site(s)',
'Search Skill Types': 'Search Skill Types',
'Search Skills': 'Search Skills',
'Search Solutions': 'Search Solutions',
'Search Staff': 'Search Staff',
'Search Staff Types': 'Search Staff Types',
'Search Status': 'Search Status',
'Search Storage Bin Type(s)': 'Search Storage Bin Type(s)',
'Search Storage Bin(s)': 'Search Storage Bin(s)',
'Search Storage Location(s)': 'Search Storage Location(s)',
'Search Subscriptions': 'Search Subscriptions',
'Search Tasks': 'Search Tasks',
'Search Teams': 'Search Teams',
'Search Themes': 'Search Themes',
'Search Tickets': 'Search Tickets',
'Search Tracks': 'Search Tracks',
'Search Twitter Tags': 'Search Twitter Tags',
'Search Units': 'Search Units',
'Search Users': 'Search Users',
'Search Volunteer Registrations': 'Search Volunteer Registrations',
'Search Volunteers': 'Search Volunteers',
'Search Warehouse Items': 'Search Warehouse Items',
'Search Warehouses': 'Search Warehouses',
'Search and Edit Group': 'Search and Edit Group',
'Search and Edit Individual': 'Search and Edit Individual',
'Search by ID Tag': 'Search by ID Tag',
'Search by Skill Types': 'Search by Skill Types',
'Search for Items': 'Search for Items',
'Search for a Person': 'Search for a Person',
'Search for a Project': 'Search for a Project',
'Search for a Request': 'Search for a Request',
'Search here for a person in order to:': 'Search here for a person in order to:',
"Search here for a person's record in order to:": "Search here for a person's record in order to:",
'Search messages': 'Search messages',
'Searching for different groups and individuals': 'Searching for different groups and individuals',
'Secondary Server (Optional)': 'Secondary Server (Optional)',
'Seconds must be a number between 0 and 60': 'Seconds must be a number between 0 and 60',
'Section Details': 'Section Details',
'Section deleted': 'Section deleted',
'Section updated': 'Section updated',
'Sections': 'Sections',
'Sector': 'Sector',
'Sector Details': 'Sector Details',
'Sector added': 'Sector added',
'Sector deleted': 'Sector deleted',
'Sector updated': 'Sector updated',
'Sectors': 'Sectors',
'Security Policy': 'Security Policy',
'Security Status': 'Security Status',
'Security problems': 'Security problems',
'Seen': 'Seen',
'Select Items from this Warehouse': 'Select Items from this Warehouse',
"Select a person in charge for status 'assigned'": "Select a person in charge for status 'assigned'",
'Select a question from the list': 'Select a question from the list',
'Select all that apply': 'Select all that apply',
'Select an Organization to see a list of offices': 'Select an Organization to see a list of offices',
'Select the overlays for Assessments and Activities relating to each Need to identify the gap.': 'Select the overlays for Assessments and Activities relating to each Need to identify the gap.',
'Select the person assigned to this role for this project.': 'Select the person assigned to this role for this project.',
'Select the person associated with this scenario.': 'Select the person associated with this scenario.',
'Selects whether to use a Modem, Tropo or other Gateway for sending out SMS': 'Selects whether to use a Modem, Tropo or other Gateway for sending out SMS',
'Self Registration': 'Self Registration',
'Self-registration': 'Self-registration',
'Send': 'Send',
'Send Alerts using Email &/or SMS': 'Send Alerts using Email &/or SMS',
'Send Notification': 'Send Notification',
'Send Shipment': 'Send Shipment',
'Send message': 'Send message',
'Send new message': 'Send new message',
'Sends & Receives Alerts via Email & SMS': 'Sends & Receives Alerts via Email & SMS',
'Senior (50+)': 'Senior (50+)',
'Sent': 'Sent',
'Sent Item': 'Sent Item',
'Sent Item Details': 'Sent Item Details',
'Sent Item added': 'Sent Item added',
'Sent Item deleted': 'Sent Item deleted',
'Sent Item updated': 'Sent Item updated',
'Sent Items': 'Sent Items',
'Sent Shipment Details': 'Sent Shipment Details',
'Sent Shipment canceled': 'Sent Shipment canceled',
'Sent Shipment updated': 'Sent Shipment updated',
'Sent Shipments': 'Sent Shipments',
'Separate latrines for women and men': 'Separate latrines for women and men',
'Seraiki': 'Seraiki',
'Series': 'Series',
'Server': 'Server',
'Service': 'Service',
'Service Catalogue': 'Service Catalogue',
'Service or Facility': 'Service or Facility',
'Service profile added': 'Service profile added',
'Service profile deleted': 'Service profile deleted',
'Service profile updated': 'Service profile updated',
'Services': 'Services',
'Services Available': 'Services Available',
'Setting Details': 'Setting Details',
'Setting added': 'Setting added',
'Setting deleted': 'Setting deleted',
'Setting updated': 'Setting updated',
'Settings': 'Settings',
'Settings updated': 'Settings updated',
'Settings were reset because authenticating with Twitter failed': 'Settings were reset because authenticating with Twitter failed',
'Severity': 'Severity',
'Severity:': 'Severity:',
'Share a common Marker (unless over-ridden at the Feature level)': 'Share a common Marker (unless over-ridden at the Feature level)',
'Shelter': 'Shelter',
'Shelter & Essential NFIs': 'Shelter & Essential NFIs',
'Shelter Details': 'Shelter Details',
'Shelter Name': 'Shelter Name',
'Shelter Registry': 'Shelter Registry',
'Shelter Service': 'Shelter Service',
'Shelter Service Details': 'Shelter Service Details',
'Shelter Service added': 'Shelter Service added',
'Shelter Service deleted': 'Shelter Service deleted',
'Shelter Service updated': 'Shelter Service updated',
'Shelter Services': 'Shelter Services',
'Shelter Type': 'Shelter Type',
'Shelter Type Details': 'Shelter Type Details',
'Shelter Type added': 'Shelter Type added',
'Shelter Type deleted': 'Shelter Type deleted',
'Shelter Type updated': 'Shelter Type updated',
'Shelter Types': 'Shelter Types',
'Shelter Types and Services': 'Shelter Types and Services',
'Shelter added': 'Shelter added',
'Shelter deleted': 'Shelter deleted',
'Shelter updated': 'Shelter updated',
'Shelter/NFI assistance received/expected': 'Shelter/NFI assistance received/expected',
'Shelters': 'Shelters',
'Shipment Received': 'Shipment Received',
'Shipment Sent': 'Shipment Sent',
'Shipment Transit Log Details': 'Shipment Transit Log Details',
'Shipment Transit Log added': 'Shipment Transit Log added',
'Shipment Transit Log deleted': 'Shipment Transit Log deleted',
'Shipment Transit Log updated': 'Shipment Transit Log updated',
'Shipment Transit Logs': 'Shipment Transit Logs',
'Shipment/Way Bill added': 'Shipment/Way Bill added',
'Shipment/Way Bills': 'Shipment/Way Bills',
'Shipment/Way Bills Details': 'Shipment/Way Bills Details',
'Shipment/Way Bills deleted': 'Shipment/Way Bills deleted',
'Shipment/Way Bills updated': 'Shipment/Way Bills updated',
'Shipment<>Item Relation added': 'Shipment<>Item Relation added',
'Shipment<>Item Relation deleted': 'Shipment<>Item Relation deleted',
'Shipment<>Item Relation updated': 'Shipment<>Item Relation updated',
'Shipment<>Item Relations': 'Shipment<>Item Relations',
'Shipment<>Item Relations Details': 'Shipment<>Item Relations Details',
'Shipments': 'Shipments',
'Shipments To': 'Shipments To',
'Shooting': 'Shooting',
'Short Assessment': 'Short Assessment',
'Short Description': 'Short Description',
'Show Checklist': 'Show Checklist',
'Show on map': 'Show on map',
'Sindhi': 'Sindhi',
'Site': 'Site',
'Site Address': 'Site Address',
'Site Administration': 'Site Administration',
'Site Description': 'Site Description',
'Site Details': 'Site Details',
'Site ID': 'Site ID',
'Site Location Description': 'Site Location Description',
'Site Location Name': 'Site Location Name',
'Site Manager': 'Site Manager',
'Site Name': 'Site Name',
'Site added': 'Site added',
'Site deleted': 'Site deleted',
'Site updated': 'Site updated',
'Site/Warehouse': 'Site/Warehouse',
'Sites': 'Sites',
'Situation Awareness & Geospatial Analysis': 'Situation Awareness & Geospatial Analysis',
'Sketch': 'Sketch',
'Skill': 'Skill',
'Skill Details': 'Skill Details',
'Skill Status': 'Skill Status',
'Skill Type Details': 'Skill Type Details',
'Skill Type added': 'Skill Type added',
'Skill Type deleted': 'Skill Type deleted',
'Skill Type updated': 'Skill Type updated',
'Skill Types': 'Skill Types',
'Skill added': 'Skill added',
'Skill deleted': 'Skill deleted',
'Skill updated': 'Skill updated',
'Skills': 'Skills',
'Skype ID': 'Skype ID',
'Small Trade': 'Small Trade',
'Smoke': 'Smoke',
'Snow Fall': 'Snow Fall',
'Snow Squall': 'Snow Squall',
'Solid waste': 'Solid waste',
'Solution': 'Solution',
'Solution Details': 'Solution Details',
'Solution Item': 'Solution Item',
'Solution added': 'Solution added',
'Solution deleted': 'Solution deleted',
'Solution updated': 'Solution updated',
'Solutions': 'Solutions',
'Some': 'Some',
'Sorry - the server has a problem, please try again later.': 'Sorry - the server has a problem, please try again later.',
'Sorry that location appears to be outside the area of the Parent.': 'Sorry that location appears to be outside the area of the Parent.',
'Sorry that location appears to be outside the area supported by this deployment.': 'Sorry that location appears to be outside the area supported by this deployment.',
'Sorry, I could not understand your request': 'Sorry, I could not understand your request',
'Sorry, only users with the MapAdmin role are allowed to edit these locations': 'Sorry, only users with the MapAdmin role are allowed to edit these locations',
'Sorry, something went wrong.': 'Sorry, something went wrong.',
'Sorry, that page is forbidden for some reason.': 'Sorry, that page is forbidden for some reason.',
'Sorry, that service is temporary unavailable.': 'Sorry, that service is temporarily unavailable.',
'Sorry, there are no addresses to display': 'Sorry, there are no addresses to display',
"Sorry, things didn't get done on time.": "Sorry, things didn't get done on time.",
"Sorry, we couldn't find that page.": "Sorry, we couldn't find that page.",
'Source': 'Source',
'Source ID': 'Source ID',
'Source Time': 'Source Time',
'Source Type': 'Source Type',
'Space Debris': 'Space Debris',
'Spanish': 'Spanish',
'Special Ice': 'Special Ice',
'Special Marine': 'Special Marine',
'Special needs': 'Special needs',
'Specialized Hospital': 'Specialized Hospital',
'Specific Area (e.g. Building/Room) within the Location that this Person/Group is seen.': 'Specific Area (e.g. Building/Room) within the Location that this Person/Group is seen.',
'Specific locations need to have a parent of level': 'Specific locations need to have a parent of level',
'Specify a descriptive title for the image.': 'Specify a descriptive title for the image.',
'Specify the bed type of this unit.': 'Specify the bed type of this unit.',
'Specify the minimum sustainability in weeks or days.': 'Specify the minimum sustainability in weeks or days.',
'Specify the number of available sets': 'Specify the number of available sets',
'Specify the number of available units (adult doses)': 'Specify the number of available units (adult doses)',
'Specify the number of available units (litres) of Ringer-Lactate or equivalent solutions': 'Specify the number of available units (litres) of Ringer-Lactate or equivalent solutions',
'Specify the number of sets needed per 24h': 'Specify the number of sets needed per 24h',
'Specify the number of units (adult doses) needed per 24h': 'Specify the number of units (adult doses) needed per 24h',
'Specify the number of units (litres) of Ringer-Lactate or equivalent solutions needed per 24h': 'Specify the number of units (litres) of Ringer-Lactate or equivalent solutions needed per 24h',
'Spherical Mercator?': 'Spherical Mercator?',
'Spreadsheet Importer': 'Spreadsheet Importer',
'Spreadsheet uploaded': 'Spreadsheet uploaded',
'Spring': 'Spring',
'Squall': 'Squall',
'Staff': 'Staff',
'Staff 2': 'Staff 2',
'Staff Details': 'Staff Details',
'Staff Type Details': 'Staff Type Details',
'Staff Type added': 'Staff Type added',
'Staff Type deleted': 'Staff Type deleted',
'Staff Type updated': 'Staff Type updated',
'Staff Types': 'Staff Types',
'Staff added': 'Staff added',
'Staff deleted': 'Staff deleted',
'Staff present and caring for residents': 'Staff present and caring for residents',
'Staff updated': 'Staff updated',
'Staffing': 'Staffing',
'Start date': 'Start date',
'Start of Period': 'Start of Period',
'Stationery': 'Stationery',
'Status': 'Status',
'Status Report': 'Status Report',
'Status added': 'Status added',
'Status deleted': 'Status deleted',
'Status of clinical operation of the facility.': 'Status of clinical operation of the facility.',
'Status of general operation of the facility.': 'Status of general operation of the facility.',
'Status of morgue capacity.': 'Status of morgue capacity.',
'Status of operations of the emergency department of this hospital.': 'Status of operations of the emergency department of this hospital.',
'Status of security procedures/access restrictions in the hospital.': 'Status of security procedures/access restrictions in the hospital.',
'Status of the operating rooms of this hospital.': 'Status of the operating rooms of this hospital.',
'Status updated': 'Status updated',
'Storage Bin': 'Storage Bin',
'Storage Bin Details': 'Storage Bin Details',
'Storage Bin Number': 'Storage Bin Number',
'Storage Bin Type': 'Storage Bin Type',
'Storage Bin Type Details': 'Storage Bin Type Details',
'Storage Bin Type added': 'Storage Bin Type added',
'Storage Bin Type deleted': 'Storage Bin Type deleted',
'Storage Bin Type updated': 'Storage Bin Type updated',
'Storage Bin Types': 'Storage Bin Types',
'Storage Bin added': 'Storage Bin added',
'Storage Bin deleted': 'Storage Bin deleted',
'Storage Bin updated': 'Storage Bin updated',
'Storage Bins': 'Storage Bins',
'Storage Location': 'Storage Location',
'Storage Location Details': 'Storage Location Details',
'Storage Location ID': 'Storage Location ID',
'Storage Location Name': 'Storage Location Name',
'Storage Location added': 'Storage Location added',
'Storage Location deleted': 'Storage Location deleted',
'Storage Location updated': 'Storage Location updated',
'Storage Locations': 'Storage Locations',
'Store spreadsheets in the Eden database': 'Store spreadsheets in the Eden database',
'Storm Force Wind': 'Storm Force Wind',
'Storm Surge': 'Storm Surge',
'Stowaway': 'Stowaway',
'Street Address': 'Street Address',
'Strong Wind': 'Strong Wind',
'Sub Category': 'Sub Category',
'Sub-type': 'Sub-type',
'Subject': 'Subject',
'Submission successful - please wait': 'Submission successful - please wait',
'Submission successful - please wait...': 'Submission successful - please wait...',
'Submit': 'Submit',
'Subscription Details': 'Subscription Details',
'Subscription added': 'Subscription added',
'Subscription deleted': 'Subscription deleted',
'Subscription updated': 'Subscription updated',
'Subscriptions': 'Subscriptions',
'Subsistence Cost': 'Subsistence Cost',
'Sufficient care/assistance for chronically ill': 'Sufficient care/assistance for chronically ill',
'Suggest not changing this field unless you know what you are doing.': 'Suggest not changing this field unless you know what you are doing.',
'Summary': 'Summary',
'Sunday': 'Sunday',
'Support Request': 'Support Request',
'Supports the decision making of large groups of Crisis Management Experts by helping the groups create ranked list.': 'Supports the decision making of large groups of Crisis Management Experts by helping the groups create a ranked list.',
'Sure you want to delete this object?': 'Sure you want to delete this object?',
'Surgery': 'Surgery',
'Survey Answer': 'Survey Answer',
'Survey Answer Details': 'Survey Answer Details',
'Survey Answer added': 'Survey Answer added',
'Survey Answer deleted': 'Survey Answer deleted',
'Survey Answer updated': 'Survey Answer updated',
'Survey Module': 'Survey Module',
'Survey Name': 'Survey Name',
'Survey Question': 'Survey Question',
'Survey Question Details': 'Survey Question Details',
'Survey Question Display Name': 'Survey Question Display Name',
'Survey Question added': 'Survey Question added',
'Survey Question deleted': 'Survey Question deleted',
'Survey Question updated': 'Survey Question updated',
'Survey Section': 'Survey Section',
'Survey Section Details': 'Survey Section Details',
'Survey Section Display Name': 'Survey Section Display Name',
'Survey Section added': 'Survey Section added',
'Survey Section deleted': 'Survey Section deleted',
'Survey Section updated': 'Survey Section updated',
'Survey Series': 'Survey Series',
'Survey Series Details': 'Survey Series Details',
'Survey Series Name': 'Survey Series Name',
'Survey Series added': 'Survey Series added',
'Survey Series deleted': 'Survey Series deleted',
'Survey Series updated': 'Survey Series updated',
'Survey Template': 'Survey Template',
'Survey Template Details': 'Survey Template Details',
'Survey Template added': 'Survey Template added',
'Survey Template deleted': 'Survey Template deleted',
'Survey Template updated': 'Survey Template updated',
'Survey Templates': 'Survey Templates',
'Switch this on to use individual CSS/Javascript files for diagnostics during development.': 'Switch this on to use individual CSS/Javascript files for diagnostics during development.',
'Symbology': 'Symbology',
'Sync Conflicts': 'Sync Conflicts',
'Sync History': 'Sync History',
'Sync Now': 'Sync Now',
'Sync Partners': 'Sync Partners',
'Sync Partners are instances or peers (SahanaEden, SahanaAgasti, Ushahidi, etc.) that you want to sync information with. Click on the link on the right to go the page where you can add sync partners, search for sync partners and modify them.': 'Sync Partners are instances or peers (SahanaEden, SahanaAgasti, Ushahidi, etc.) that you want to sync information with. Click on the link on the right to go to the page where you can add sync partners, search for sync partners and modify them.',
'Sync Pools': 'Sync Pools',
'Sync Schedule': 'Sync Schedule',
'Sync Settings': 'Sync Settings',
'Sync process already started on ': 'Sync process already started on ',
'Synchronisation': 'Synchronisation',
'Synchronization': 'Synchronization',
'Synchronization Conflicts': 'Synchronization Conflicts',
'Synchronization Details': 'Synchronization Details',
'Synchronization History': 'Synchronization History',
'Synchronization Peers': 'Synchronization Peers',
'Synchronization Settings': 'Synchronization Settings',
'Synchronization allows you to share data that you have with others and update your own database with latest data from other peers. This page provides you with information about how to use the synchronization features of Sahana Eden': 'Synchronization allows you to share data that you have with others and update your own database with latest data from other peers. This page provides you with information about how to use the synchronization features of Sahana Eden',
'Synchronization not configured.': 'Synchronization not configured.',
'Synchronization settings updated': 'Synchronization settings updated',
'Syncronisation History': 'Synchronisation History',
'System allows the General Public to Report Incidents & have these Tracked.': 'System allows the General Public to Report Incidents & have these Tracked.',
'System allows the tracking & discovery of Items stored in Locations.': 'System allows the tracking & discovery of Items stored in Locations.',
'System is a central online repository where all relief organizations, relief workers, government agents and camp sites for displaced personnel can coordinate the supply of aid with their demand. It allows users to allocate the available resources to fulfill the demands effectively and efficiently.': 'System is a central online repository where all relief organizations, relief workers, government agents and camp sites for displaced personnel can coordinate the supply of aid with their demand. It allows users to allocate the available resources to fulfill the demands effectively and efficiently.',
'System keeps track of all Volunteers working in the disaster region. It captures not only the places where they are active, but also captures information on the range of services they are providing in each area.': 'System keeps track of all Volunteers working in the disaster region. It captures not only the places where they are active, but also captures information on the range of services they are providing in each area.',
"System's Twitter account updated": "System's Twitter account updated",
'Table name': 'Table name',
'Tags': 'Tags',
'Take shelter in place or per <instruction>': 'Take shelter in place or per <instruction>',
'Task Details': 'Task Details',
'Task List': 'Task List',
'Task Status': 'Task Status',
'Task added': 'Task added',
'Task deleted': 'Task deleted',
'Task status': 'Task status',
'Task updated': 'Task updated',
'Tasks': 'Tasks',
'Team': 'Team',
'Team Description': 'Team Description',
'Team Details': 'Team Details',
'Team Head': 'Team Head',
'Team Id': 'Team Id',
'Team Leader': 'Team Leader',
'Team Member added': 'Team Member added',
'Team Members': 'Team Members',
'Team Name': 'Team Name',
'Team Type': 'Team Type',
'Team added': 'Team added',
'Team deleted': 'Team deleted',
'Team updated': 'Team updated',
'Teams': 'Teams',
'Technical testing only, all recipients disregard': 'Technical testing only, all recipients disregard',
'Telecommunications': 'Telecommunications',
'Telephone': 'Telephone',
'Telephony': 'Telephony',
'Temp folder %s not writable - unable to apply theme!': 'Temp folder %s not writable - unable to apply theme!',
'Template file %s not readable - unable to apply theme!': 'Template file %s not readable - unable to apply theme!',
'Templates': 'Templates',
'Terrorism': 'Terrorism',
'Tertiary Server (Optional)': 'Tertiary Server (Optional)',
'Test Results': 'Test Results',
'Text': 'Text',
'Text Colour for Text blocks': 'Text Colour for Text blocks',
'Text before each Text Field (One per line)': 'Text before each Text Field (One per line)',
'Text in Message': 'Text in Message',
'Text in Message: ': 'Text in Message: ',
'Thanks for your assistance': 'Thanks for your assistance',
'The': 'The',
'The "query" is a condition like "db.table1.field1==\'value\'". Something like "db.table1.field1 == db.table2.field2" results in a SQL JOIN.': 'The "query" is a condition like "db.table1.field1==\'value\'". Something like "db.table1.field1 == db.table2.field2" results in a SQL JOIN.',
"The <a href='http://en.wikipedia.org/wiki/Well-known_text' target=_blank>Well-Known Text</a> representation of the Polygon/Line.": "The <a href='http://en.wikipedia.org/wiki/Well-known_text' target=_blank>Well-Known Text</a> representation of the Polygon/Line.",
'The Area which this Site is located within.': 'The Area which this Site is located within.',
'The Assessments module allows field workers to send in assessments.': 'The Assessments module allows field workers to send in assessments.',
'The Author of this Document (optional)': 'The Author of this Document (optional)',
'The Current Location of the Person, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.': 'The Current Location of the Person, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.',
'The Current Location of the Person/Group, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.': 'The Current Location of the Person/Group, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.',
"The Donor(s) for this project. Multiple values can be selected by holding down the 'Control' key.": "The Donor(s) for this project. Multiple values can be selected by holding down the 'Control' key.",
'The Location the Person has come from, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.': 'The Location the Person has come from, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.',
'The Location the Person is going to, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.': 'The Location the Person is going to, which can be general (for Reporting) or precise (for displaying on a Map). Enter a few characters to search from available locations.',
'The Office this record is associated with.': 'The Office this record is associated with.',
'The Organization this record is associated with.': 'The Organization this record is associated with.',
'The Organization which is funding this Activity.': 'The Organization which is funding this Activity.',
'The Project Tracking module allows the creation of Activities to meet Gaps in Needs Assessments.': 'The Project Tracking module allows the creation of Activities to meet Gaps in Needs Assessments.',
'The Request this record is associated with.': 'The Request this record is associated with.',
'The Role this person plays within this Office/Project.': 'The Role this person plays within this Office/Project.',
'The Role this person plays within this hospital.': 'The Role this person plays within this hospital.',
'The Shelter this Request is from (optional).': 'The Shelter this Request is from (optional).',
'The URL for the GetCapabilities of a WMS Service whose layers you want accessible via the Map.': 'The URL for the GetCapabilities of a WMS Service whose layers you want accessible via the Map.',
"The URL of the image file. If you don't upload an image file, then you must specify its location here.": "The URL of the image file. If you don't upload an image file, then you must specify its location here.",
'The URL of your web gateway without the post parameters': 'The URL of your web gateway without the post parameters',
'The URL to access the service.': 'The URL to access the service.',
'The Unique Identifier (UUID) as assigned to this facility by the government.': 'The Unique Identifier (UUID) as assigned to this facility by the government.',
'The area is ': 'The area is ',
'The attribute within the KML which is used for the title of popups.': 'The attribute within the KML which is used for the title of popups.',
'The attribute(s) within the KML which are used for the body of popups. (Use a space between attributes)': 'The attribute(s) within the KML which are used for the body of popups. (Use a space between attributes)',
'The body height (crown to heel) in cm.': 'The body height (crown to heel) in cm.',
'The category of the Item.': 'The category of the Item.',
'The contact person for this organization.': 'The contact person for this organization.',
'The country the person usually lives in.': 'The country the person usually lives in.',
'The duplicate record will be deleted': 'The duplicate record will be deleted',
'The entered unit links to this unit. For e.g. if you are entering m for meter then choose kilometer(if it exists) and enter the value 0.001 as multiplicator.': 'The entered unit links to this unit. For example, if you are entering m for meter, choose kilometer (if it exists) and enter the value 0.001 as the multiplier.',
'The first or only name of the person (mandatory).': 'The first or only name of the person (mandatory).',
'The hospital this record is associated with.': 'The hospital this record is associated with.',
'The item is designated to be sent for specific project, population, village or other earmarking of the donation such as a Grant Code.': 'The item is designated to be sent for specific project, population, village or other earmarking of the donation such as a Grant Code.',
'The language to use for notifications.': 'The language to use for notifications.',
'The last known location of the missing person before disappearance.': 'The last known location of the missing person before disappearance.',
'The length is ': 'The length is ',
'The list of Item categories are maintained by the Administrators.': 'The list of Item categories are maintained by the Administrators.',
'The name to be used when calling for or directly addressing the person (optional).': 'The name to be used when calling for or directly addressing the person (optional).',
'The next screen will allow you to detail the number of people here & their needs.': 'The next screen will allow you to detail the number of people here & their needs.',
'The next screen will allow you to enter a detailed list of items and quantities, if appropriate...': 'The next screen will allow you to enter a detailed list of items and quantities, if appropriate...',
'The number of tiles around the visible map to download. Zero means that the 1st page loads faster, higher numbers mean subsequent panning is faster.': 'The number of tiles around the visible map to download. Zero means that the 1st page loads faster, higher numbers mean subsequent panning is faster.',
'The person at the location who is reporting this incident (optional)': 'The person at the location who is reporting this incident (optional)',
'The person reporting about the missing person.': 'The person reporting about the missing person.',
'The person reporting the missing person.': 'The person reporting the missing person.',
"The person's manager within this Office/Project.": "The person's manager within this Office/Project.",
'The post variable containing the phone number': 'The post variable containing the phone number',
'The post variable on the URL used for sending messages': 'The post variable on the URL used for sending messages',
'The post variables other than the ones containing the message and the phone number': 'The post variables other than the ones containing the message and the phone number',
'The serial port at which the modem is connected - /dev/ttyUSB0, etc on linux and com1, com2, etc on Windows': 'The serial port at which the modem is connected - /dev/ttyUSB0, etc. on Linux and com1, com2, etc. on Windows',
'The server did not receive a timely response from another server that it was accessing to fill the request by the browser.': 'The server did not receive a timely response from another server that it was accessing to fill the request by the browser.',
'The server received an incorrect response from another server that it was accessing to fill the request by the browser.': 'The server received an incorrect response from another server that it was accessing to fill the request by the browser.',
'The simple policy allows anonymous users to Read & registered users to Edit. The full security policy allows the administrator to set permissions on individual tables or records - see models/zzz.py.': 'The simple policy allows anonymous users to Read & registered users to Edit. The full security policy allows the administrator to set permissions on individual tables or records - see models/zzz.py.',
'The subject event no longer poses a threat or concern and any follow on action is described in <instruction>': 'The subject event no longer poses a threat or concern and any follow on action is described in <instruction>',
'The title of the WMS Browser panel in the Tools panel.': 'The title of the WMS Browser panel in the Tools panel.',
'The token associated with this application on': 'The token associated with this application on',
'The unique identifier which identifies this instance to other instances.': 'The unique identifier which identifies this instance to other instances.',
'The weight in kg.': 'The weight in kg.',
'Theme': 'Theme',
'Theme Details': 'Theme Details',
'Theme added': 'Theme added',
'Theme deleted': 'Theme deleted',
'Theme updated': 'Theme updated',
'Themes': 'Themes',
'There are errors': 'There are errors',
'There are multiple records at this location': 'There are multiple records at this location',
'There was a problem, sorry, please try again later.': 'There was a problem, sorry, please try again later.',
'These are settings for Inbound Mail.': 'These are settings for Inbound Mail.',
'These are the Incident Categories visible to normal End-Users': 'These are the Incident Categories visible to normal End-Users',
'These are the default settings for all users. To change settings just for you, click ': 'These are the default settings for all users. To change settings just for you, click ',
'They': 'They',
'This appears to be a duplicate of ': 'This appears to be a duplicate of ',
'This file already exists on the server as': 'This file already exists on the server as',
'This is the way to transfer data between machines as it maintains referential integrity.': 'This is the way to transfer data between machines as it maintains referential integrity.',
'This is the way to transfer data between machines as it maintains referential integrity...duplicate data should be removed manually 1st!': 'This is the way to transfer data between machines as it maintains referential integrity...duplicate data should be removed manually 1st!',
'This might be due to a temporary overloading or maintenance of the server.': 'This might be due to a temporary overloading or maintenance of the server.',
'This page shows you logs of past syncs. Click on the link below to go to this page.': 'This page shows you logs of past syncs. Click on the link below to go to this page.',
'This screen allows you to upload a collection of photos to the server.': 'This screen allows you to upload a collection of photos to the server.',
'Thunderstorm': 'Thunderstorm',
'Thursday': 'Thursday',
'Ticket': 'Ticket',
'Ticket Details': 'Ticket Details',
'Ticket added': 'Ticket added',
'Ticket deleted': 'Ticket deleted',
'Ticket updated': 'Ticket updated',
'Ticketing Module': 'Ticketing Module',
'Tickets': 'Tickets',
'Time needed to collect water': 'Time needed to collect water',
'Time of Request': 'Time of Request',
'Timestamp': 'Timestamp',
'Title': 'Title',
'To': 'To',
'To Location': 'To Location',
'To begin the sync process, click the button on the right => ': 'To begin the sync process, click the button on the right => ',
'To begin the sync process, click this button => ': 'To begin the sync process, click this button => ',
'To edit OpenStreetMap, you need to edit the OpenStreetMap settings in models/000_config.py': 'To edit OpenStreetMap, you need to edit the OpenStreetMap settings in models/000_config.py',
"To search for a body, enter the ID label of the body. You may use % as wildcard. Press 'Search' without input to list all bodies.": "To search for a body, enter the ID label of the body. You may use % as wildcard. Press 'Search' without input to list all bodies.",
"To search for a body, enter the ID tag number of the body. You may use % as wildcard. Press 'Search' without input to list all bodies.": "To search for a body, enter the ID tag number of the body. You may use % as wildcard. Press 'Search' without input to list all bodies.",
"To search for a hospital, enter any of the names or IDs of the hospital, separated by spaces. You may use % as wildcard. Press 'Search' without input to list all hospitals.": "To search for a hospital, enter any of the names or IDs of the hospital, separated by spaces. You may use % as wildcard. Press 'Search' without input to list all hospitals.",
"To search for a location, enter the name. You may use % as wildcard. Press 'Search' without input to list all locations.": "To search for a location, enter the name. You may use % as wildcard. Press 'Search' without input to list all locations.",
"To search for a person, enter any of the first, middle or last names and/or an ID number of a person, separated by spaces. You may use % as wildcard. Press 'Search' without input to list all persons.": "To search for a person, enter any of the first, middle or last names and/or an ID number of a person, separated by spaces. You may use % as wildcard. Press 'Search' without input to list all persons.",
"To search for a request, enter some of the text that you are looking for. You may use % as wildcard. Press 'Search' without input to list all requests.": "To search for a request, enter some of the text that you are looking for. You may use % as wildcard. Press 'Search' without input to list all requests.",
'To submit a new job, use the': 'To submit a new job, use the',
'To variable': 'To variable',
'Tools': 'Tools',
'Tornado': 'Tornado',
'Total # of Target Beneficiaries': 'Total # of Target Beneficiaries',
'Total # of households of site visited': 'Total # of households of site visited',
'Total Beds': 'Total Beds',
'Total Beneficiaries': 'Total Beneficiaries',
'Total Cost per Megabyte': 'Total Cost per Megabyte',
'Total Cost per Minute': 'Total Cost per Minute',
'Total Monthly': 'Total Monthly',
'Total Monthly Cost': 'Total Monthly Cost',
'Total Monthly Cost: ': 'Total Monthly Cost: ',
'Total One-time Costs': 'Total One-time Costs',
'Total Persons': 'Total Persons',
'Total Recurring Costs': 'Total Recurring Costs',
'Total Unit Cost': 'Total Unit Cost',
'Total Unit Cost: ': 'Total Unit Cost: ',
'Total Units': 'Total Units',
'Total number of beds in this hospital. Automatically updated from daily reports.': 'Total number of beds in this hospital. Automatically updated from daily reports.',
'Total number of houses in the area': 'Total number of houses in the area',
'Total number of schools in affected area': 'Total number of schools in affected area',
'Total population of site visited': 'Total population of site visited',
'Totals for Budget:': 'Totals for Budget:',
'Totals for Bundle:': 'Totals for Bundle:',
'Totals for Kit:': 'Totals for Kit:',
'Tourist Group': 'Tourist Group',
'Town': 'Town',
'Traces internally displaced people (IDPs) and their needs': 'Traces internally displaced people (IDPs) and their needs',
'Tracing': 'Tracing',
'Track': 'Track',
'Track Details': 'Track Details',
'Track deleted': 'Track deleted',
'Track updated': 'Track updated',
'Track uploaded': 'Track uploaded',
'Tracking of Projects, Activities and Tasks': 'Tracking of Projects, Activities and Tasks',
'Tracking of basic information on the location, facilities and size of the Shelters': 'Tracking of basic information on the location, facilities and size of the Shelters',
'Tracks': 'Tracks',
'Tracks requests for aid and matches them against donors who have pledged aid': 'Tracks requests for aid and matches them against donors who have pledged aid',
'Tracks the location, distibution, capacity and breakdown of victims in Shelters': 'Tracks the location, distribution, capacity and breakdown of victims in Shelters',
'Traffic Report': 'Traffic Report',
'Transit': 'Transit',
'Transition Effect': 'Transition Effect',
'Transparent?': 'Transparent?',
'Transportation assistance, Rank': 'Transportation assistance, Rank',
'Trauma Center': 'Trauma Center',
'Travel Cost': 'Travel Cost',
'Tree': 'Tree',
'Tropical Storm': 'Tropical Storm',
'Tropo Messaging Token': 'Tropo Messaging Token',
'Tropo Settings': 'Tropo Settings',
'Tropo Voice Token': 'Tropo Voice Token',
'Tropo settings updated': 'Tropo settings updated',
'Truck': 'Truck',
'Try checking the URL for errors, maybe it was mistyped.': 'Try checking the URL for errors, maybe it was mistyped.',
'Try hitting refresh/reload button or trying the URL from the address bar again.': 'Try hitting the refresh/reload button or trying the URL from the address bar again.',
'Try refreshing the page or hitting the back button on your browser.': 'Try refreshing the page or hitting the back button on your browser.',
'Tsunami': 'Tsunami',
'Tuesday': 'Tuesday',
'Twitter': 'Twitter',
'Twitter ID or #hashtag': 'Twitter ID or #hashtag',
'Twitter Settings': 'Twitter Settings',
'Type': 'Type',
'Type of cause': 'Type of cause',
'Type of latrines': 'Type of latrines',
'Type of place for defecation': 'Type of place for defecation',
'Type of water source before the disaster': 'Type of water source before the disaster',
'Types of health services available': 'Types of health services available',
'Types of water storage containers available': 'Types of water storage containers available',
'UID': 'UID',
'URL': 'URL',
'UTC Offset': 'UTC Offset',
'Unable to parse CSV file!': 'Unable to parse CSV file!',
'Understaffed': 'Understaffed',
'Unidentified': 'Unidentified',
'Unit': 'Unit',
'Unit Cost': 'Unit Cost',
'Unit Details': 'Unit Details',
'Unit Name': 'Unit Name',
'Unit Set': 'Unit Set',
'Unit Short Code for e.g. m for meter.': 'Unit Short Code, e.g. m for meter.',
'Unit added': 'Unit added',
'Unit deleted': 'Unit deleted',
'Unit updated': 'Unit updated',
'Units': 'Units',
'Units of Measure': 'Units of Measure',
'Unknown': 'Unknown',
'Unknown Peer': 'Unknown Peer',
'Unknown type of facility': 'Unknown type of facility',
'Unresolved Conflicts': 'Unresolved Conflicts',
'Unselect to disable the modem': 'Unselect to disable the modem',
'Unsent': 'Unsent',
'Unsupported data format!': 'Unsupported data format!',
'Unsupported method!': 'Unsupported method!',
'Update': 'Update',
'Update Activity Report': 'Update Activity Report',
'Update Cholera Treatment Capability Information': 'Update Cholera Treatment Capability Information',
'Update Import Job': 'Update Import Job',
'Update Request': 'Update Request',
'Update Service Profile': 'Update Service Profile',
'Update Task Status': 'Update Task Status',
'Update Unit': 'Update Unit',
'Update if Master': 'Update if Master',
'Update if Newer': 'Update if Newer',
'Update your current ordered list': 'Update your current ordered list',
'Upload Geodata': 'Upload Geodata',
'Upload Photos': 'Upload Photos',
'Upload Spreadsheet': 'Upload Spreadsheet',
'Upload Track': 'Upload Track',
'Upload a Spreadsheet': 'Upload a Spreadsheet',
"Upload an image file here. If you don't upload an image file, then you must specify its location in the URL field.": "Upload an image file here. If you don't upload an image file, then you must specify its location in the URL field.",
'Urban Fire': 'Urban Fire',
'Urban area': 'Urban area',
'Urdu': 'Urdu',
'Urgent': 'Urgent',
'Use (...)&(...) for AND, (...)|(...) for OR, and ~(...) for NOT to build more complex queries.': 'Use (...)&(...) for AND, (...)|(...) for OR, and ~(...) for NOT to build more complex queries.',
'Use default': 'Use default',
'Use these links to download data that is currently in the database.': 'Use these links to download data that is currently in the database.',
'Use this space to add a description about the Bin Type.': 'Use this space to add a description about the Bin Type.',
'Use this space to add a description about the site location.': 'Use this space to add a description about the site location.',
'Use this space to add a description about the warehouse/site.': 'Use this space to add a description about the warehouse/site.',
'Use this space to add additional comments and notes about the Site/Warehouse.': 'Use this space to add additional comments and notes about the Site/Warehouse.',
'Used to import data from spreadsheets into the database': 'Used to import data from spreadsheets into the database',
'User': 'User',
'User %(id)s Logged-in': 'User %(id)s Logged-in',
'User %(id)s Logged-out': 'User %(id)s Logged-out',
'User %(id)s Registered': 'User %(id)s Registered',
'User Details': 'User Details',
'User ID': 'User ID',
'User Management': 'User Management',
'User Profile': 'User Profile',
'User Requests': 'User Requests',
'User Updated': 'User Updated',
'User added': 'User added',
'User already has this role': 'User already has this role',
'User deleted': 'User deleted',
'User updated': 'User updated',
'Username': 'Username',
'Users': 'Users',
'Users removed': 'Users removed',
'Ushahidi': 'Ushahidi',
'Usual food sources in the area': 'Usual food sources in the area',
'Utility, telecommunication, other non-transport infrastructure': 'Utility, telecommunication, other non-transport infrastructure',
'Various Reporting functionalities': 'Various Reporting functionalities',
'Vehicle': 'Vehicle',
'Vehicle Crime': 'Vehicle Crime',
'Vehicle Types': 'Vehicle Types',
'Vendor': 'Vendor',
'Verified?': 'Verified?',
'Verify Password': 'Verify Password',
'Verify password': 'Verify password',
'Version': 'Version',
'Very High': 'Very High',
'View Alerts received using either Email or SMS': 'View Alerts received using either Email or SMS',
'View Fullscreen Map': 'View Fullscreen Map',
'View Image': 'View Image',
'View On Map': 'View On Map',
'View Outbox': 'View Outbox',
'View Requests for Aid': 'View Requests for Aid',
'View Settings': 'View Settings',
'View Tickets': 'View Tickets',
"View and/or update details of the person's record": "View and/or update details of the person's record",
'View and/or update their details': 'View and/or update their details',
'View or update the status of a hospital.': 'View or update the status of a hospital.',
'View pending requests and pledge support.': 'View pending requests and pledge support.',
'View the hospitals on a map.': 'View the hospitals on a map.',
"View/Edit the Database directly (caution: doesn't respect the framework rules!)": "View/Edit the Database directly (caution: doesn't respect the framework rules!)",
'Village': 'Village',
'Village Leader': 'Village Leader',
'Visible?': 'Visible?',
'Visual Recognition': 'Visual Recognition',
'Volcanic Ash Cloud': 'Volcanic Ash Cloud',
'Volcanic Event': 'Volcanic Event',
'Volume - Fluids': 'Volume - Fluids',
'Volume - Solids': 'Volume - Solids',
'Volume Capacity': 'Volume Capacity',
'Volume/Dimensions': 'Volume/Dimensions',
'Volunteer Data': 'Volunteer Data',
'Volunteer Details': 'Volunteer Details',
'Volunteer Management': 'Volunteer Management',
'Volunteer Project': 'Volunteer Project',
'Volunteer Registration': 'Volunteer Registration',
'Volunteer Registrations': 'Volunteer Registrations',
'Volunteer Request': 'Volunteer Request',
'Volunteer added': 'Volunteer added',
'Volunteer deleted': 'Volunteer deleted',
'Volunteer details updated': 'Volunteer details updated',
'Volunteer registration added': 'Volunteer registration added',
'Volunteer registration deleted': 'Volunteer registration deleted',
'Volunteer registration updated': 'Volunteer registration updated',
'Volunteers': 'Volunteers',
'Volunteers were notified!': 'Volunteers were notified!',
'Vote': 'Vote',
'Votes': 'Votes',
'WASH': 'WASH',
'WMS Browser Name': 'WMS Browser Name',
'WMS Browser URL': 'WMS Browser URL',
'Walking Only': 'Walking Only',
'Walking time to the health service': 'Walking time to the health service',
'Warehouse': 'Warehouse',
'Warehouse Details': 'Warehouse Details',
'Warehouse Item': 'Warehouse Item',
'Warehouse Item Details': 'Warehouse Item Details',
'Warehouse Item added': 'Warehouse Item added',
'Warehouse Item deleted': 'Warehouse Item deleted',
'Warehouse Item updated': 'Warehouse Item updated',
'Warehouse Items': 'Warehouse Items',
'Warehouse Management': 'Warehouse Management',
'Warehouse added': 'Warehouse added',
'Warehouse deleted': 'Warehouse deleted',
'Warehouse updated': 'Warehouse updated',
'Warehouse/Sites Registry': 'Warehouse/Sites Registry',
'Warehouses': 'Warehouses',
'WatSan': 'WatSan',
'Water Sanitation Hygiene': 'Water Sanitation Hygiene',
'Water gallon': 'Water gallon',
'Water storage containers available for HH': 'Water storage containers available for HH',
'Water storage containers sufficient per HH': 'Water storage containers sufficient per HH',
'Water supply': 'Water supply',
'Waterspout': 'Waterspout',
'Way Bill(s)': 'Way Bill(s)',
'We have tried': 'We have tried',
'Website': 'Website',
'Wednesday': 'Wednesday',
'Weight': 'Weight',
'Weight (kg)': 'Weight (kg)',
'Welcome': 'Welcome',
'Welcome to the Sahana Portal at ': 'Welcome to the Sahana Portal at ',
'Well-Known Text': 'Well-Known Text',
'Were basic medical supplies available for health services prior to the disaster?': 'Were basic medical supplies available for health services prior to the disaster?',
'Were breast milk substitutes used prior to the disaster?': 'Were breast milk substitutes used prior to the disaster?',
'Were there cases of malnutrition in this area prior to the disaster?': 'Were there cases of malnutrition in this area prior to the disaster?',
'Were there health services functioning for the community prior to the disaster?': 'Were there health services functioning for the community prior to the disaster?',
'Were there reports or evidence of outbreaks of any micronutrient malnutrition disorders before the emergency?': 'Were there reports or evidence of outbreaks of any micronutrient malnutrition disorders before the emergency?',
'What are the factors affecting school attendance?': 'What are the factors affecting school attendance?',
"What are the people's normal ways to obtain food in this area?": "What are the people's normal ways to obtain food in this area?",
'What are your main sources of cash to restart your business?': 'What are your main sources of cash to restart your business?',
'What are your main sources of income now?': 'What are your main sources of income now?',
'What do you spend most of your income on now?': 'What do you spend most of your income on now?',
'What food stocks exist? (main dishes)': 'What food stocks exist? (main dishes)',
'What food stocks exist? (side dishes)': 'What food stocks exist? (side dishes)',
'What is the estimated total number of people in all of these institutions?': 'What is the estimated total number of people in all of these institutions?',
'What is your major source of clean water for daily use (ex: washing, cooking, bathing)?': 'What is your major source of clean water for daily use (ex: washing, cooking, bathing)?',
'What is your major source of drinking water?': 'What is your major source of drinking water?',
"What should be done to reduce women and children's vulnerability to violence?": "What should be done to reduce women and children's vulnerability to violence?",
'What type of latrines are available in the village/IDP centre/Camp?': 'What type of latrines are available in the village/IDP centre/Camp?',
'What type of salvage material can be used from destroyed houses?': 'What type of salvage material can be used from destroyed houses?',
'What type of salvage material can be used from destroyed schools?': 'What type of salvage material can be used from destroyed schools?',
'What types of health problems do children currently have?': 'What types of health problems do children currently have?',
'What types of health problems do people currently have?': 'What types of health problems do people currently have?',
'What types of health services are still functioning in the affected area?': 'What types of health services are still functioning in the affected area?',
'What types of household water storage containers are available?': 'What types of household water storage containers are available?',
'What were your main sources of income before the disaster?': 'What were your main sources of income before the disaster?',
'Wheat': 'Wheat',
"When syncing data with others, conflicts happen in cases when two (or more) parties want to sync information which both of them have modified, i.e. conflicting information. Sync module tries to resolve such conflicts automatically but in some cases it can't. In those cases, it is up to you to resolve those conflicts manually, click on the link on the right to go to this page.": "When syncing data with others, conflicts happen in cases when two (or more) parties want to sync information which both of them have modified, i.e. conflicting information. The Sync module tries to resolve such conflicts automatically, but in some cases it can't. In those cases, it is up to you to resolve those conflicts manually; click on the link on the right to go to this page.",
'Where are the alternative places for studying?': 'Where are the alternative places for studying?',
'Where are the separated children originally from?': 'Where are the separated children originally from?',
'Where do the majority of people defecate?': 'Where do the majority of people defecate?',
'Where have the children been sent?': 'Where have the children been sent?',
'Where is solid waste disposed in the village/camp?': 'Where is solid waste disposed in the village/camp?',
'Whiskers': 'Whiskers',
'Who is doing what and where': 'Who is doing what and where',
'Who usually collects water for the family?': 'Who usually collects water for the family?',
'Width': 'Width',
'Wild Fire': 'Wild Fire',
'Wind Chill': 'Wind Chill',
'Window frame': 'Window frame',
'Winter Storm': 'Winter Storm',
'Without mentioning any names or indicating anyone, do you know of any incidents of violence against women or girls occuring since the disaster?': 'Without mentioning any names or indicating anyone, do you know of any incidents of violence against women or girls occurring since the disaster?',
'Women of Child Bearing Age': 'Women of Child Bearing Age',
'Women participating in coping activities': 'Women participating in coping activities',
'Women who are Pregnant or in Labour': 'Women who are Pregnant or in Labour',
'Womens Focus Groups': "Women's Focus Groups",
'Wooden plank': 'Wooden plank',
'Wooden poles': 'Wooden poles',
'Working hours end': 'Working hours end',
'Working hours start': 'Working hours start',
'Working or other to provide money/food': 'Working or other to provide money/food',
'X-Ray': 'X-Ray',
'XMPP': 'XMPP',
'Yes': 'Yes',
'You are attempting to delete your own account - are you sure you want to proceed?': 'You are attempting to delete your own account - are you sure you want to proceed?',
'You are currently reported missing!': 'You are currently reported missing!',
'You can change the configuration of synchronization module in the Settings section. This configuration includes your UUID (unique identification number), sync schedules, beacon service and so on. Click the following link to go to the Sync Settings page.': 'You can change the configuration of synchronization module in the Settings section. This configuration includes your UUID (unique identification number), sync schedules, beacon service and so on. Click the following link to go to the Sync Settings page.',
'You can click on the map below to select the Lat/Lon fields:': 'You can click on the map below to select the Lat/Lon fields:',
'You can select the Draw tool (': 'You can select the Draw tool (',
'You can set the modem settings for SMS here.': 'You can set the modem settings for SMS here.',
'You can use the Conversion Tool to convert from either GPS coordinates or Degrees/Minutes/Seconds.': 'You can use the Conversion Tool to convert from either GPS coordinates or Degrees/Minutes/Seconds.',
"You have personalised settings, so changes made here won't be visible to you. To change your personalised settings, click ": "You have personalised settings, so changes made here won't be visible to you. To change your personalised settings, click ",
"You have unsaved changes. Click Cancel now, then 'Save' to save them. Click OK now to discard them.": "You have unsaved changes. Click Cancel now, then 'Save' to save them. Click OK now to discard them.",
"You haven't made any calculations": "You haven't made any calculations",
'You must be logged in to register volunteers.': 'You must be logged in to register volunteers.',
'You must be logged in to report persons missing or found.': 'You must be logged in to report persons missing or found.',
'You must provide a series id to proceed.': 'You must provide a series id to proceed.',
'You should edit Twitter settings in models/000_config.py': 'You should edit Twitter settings in models/000_config.py',
'Your action is required. Please approve user %s asap: ': 'Your action is required. Please approve user %s asap: ',
'Your action is required. Please approve user joe@aidiq.com asap: ': 'Your action is required. Please approve user joe@aidiq.com asap: ',
'Your current ordered list of solution items is shown below. You can change it by voting again.': 'Your current ordered list of solution items is shown below. You can change it by voting again.',
'Your post was added successfully.': 'Your post was added successfully.',
'Your system has been assigned a unique identification (UUID), which other computers around you can use to identify you. To view your UUID, you may go to Synchronization -> Sync Settings. You can also see other settings on that page.': 'Your system has been assigned a unique identifier (UUID), which other computers around you can use to identify you. To view your UUID, you may go to Synchronization -> Sync Settings. You can also see other settings on that page.',
'Zinc roof': 'Zinc roof',
'Zoom': 'Zoom',
'Zoom In: click in the map or use the left mouse button and drag to create a rectangle': 'Zoom In: click in the map or use the left mouse button and drag to create a rectangle',
'Zoom Levels': 'Zoom Levels',
'Zoom Out: click in the map or use the left mouse button and drag to create a rectangle': 'Zoom Out: click in the map or use the left mouse button and drag to create a rectangle',
'Zoom to Current Location': 'Zoom to Current Location',
'Zoom to maximum map extent': 'Zoom to maximum map extent',
'act': 'act',
'active': 'active',
'added': 'added',
'all records': 'all records',
'allows a budget to be developed based on staff & equipment costs, including any admin overheads.': 'allows a budget to be developed based on staff & equipment costs, including any admin overheads.',
'allows for creation and management of surveys to assess the damage following a natural disaster.': 'allows for creation and management of surveys to assess the damage following a natural disaster.',
'an individual/team to do in 1-2 days': 'an individual/team to do in 1-2 days',
'approved': 'approved',
'assigned': 'assigned',
'average': 'average',
'black': 'black',
'blond': 'blond',
'blue': 'blue',
'brown': 'brown',
'c/o Name': 'c/o Name',
'can be used to extract data from spreadsheets and put them into database tables.': 'can be used to extract data from spreadsheets and put them into database tables.',
'cancelled': 'cancelled',
'caucasoid': 'caucasoid',
'check all': 'check all',
'click for more details': 'click for more details',
'completed': 'completed',
'consider': 'consider',
'constraint_id': 'constraint_id',
'crud': 'crud',
'curly': 'curly',
'currently registered': 'currently registered',
'daily': 'daily',
'dark': 'dark',
'data uploaded': 'data uploaded',
'database': 'database',
'database %s select': 'database %s select',
'db': 'db',
'delete all checked': 'delete all checked',
'deleted': 'deleted',
'denied': 'denied',
'description': 'description',
'design': 'design',
'diseased': 'deceased',
'displaced': 'displaced',
'divorced': 'divorced',
'done!': 'done!',
'edit': 'edit',
'editor': 'editor',
'embedded': 'embedded',
'enclosed area': 'enclosed area',
'export as csv file': 'export as csv file',
'fat': 'fat',
'feedback': 'feedback',
'female': 'female',
'flush latrine with septic tank': 'flush latrine with septic tank',
'forehead': 'forehead',
'form data': 'form data',
'from Twitter': 'from Twitter',
'from_id': 'from_id',
'full': 'full',
'getting': 'getting',
'green': 'green',
'grey': 'grey',
'here': 'here',
'high': 'high',
'hourly': 'hourly',
'households': 'households',
'identified': 'identified',
'ignore': 'ignore',
'in Deg Min Sec format': 'in Deg Min Sec format',
'in GPS format': 'in GPS format',
'inactive': 'inactive',
'injured': 'injured',
'insert new': 'insert new',
'insert new %s': 'insert new %s',
'invalid request': 'invalid request',
'is a central online repository where information on all the disaster victims and families, especially identified casualties, evacuees and displaced people can be stored. Information like name, age, contact number, identity card number, displaced location, and other details are captured. Picture and finger print details of the people can be uploaded to the system. People can also be captured by group for efficiency and convenience.': 'is a central online repository where information on all the disaster victims and families, especially identified casualties, evacuees and displaced people, can be stored. Information like name, age, contact number, identity card number, displaced location, and other details is captured. Picture and fingerprint details of the people can be uploaded to the system. People can also be captured by group for efficiency and convenience.',
'keeps track of all incoming tickets allowing them to be categorised & routed to the appropriate place for actioning.': 'keeps track of all incoming tickets allowing them to be categorised & routed to the appropriate place for actioning.',
'kilogram': 'kilogram',
'kit': 'kit',
'latrines': 'latrines',
'legend URL': 'legend URL',
'light': 'light',
'liter': 'liter',
'login': 'login',
'long': 'long',
'long>12cm': 'long>12cm',
'low': 'low',
'male': 'male',
'manual': 'manual',
'married': 'married',
'maxExtent': 'maxExtent',
'maxResolution': 'maxResolution',
'medium': 'medium',
'medium<12cm': 'medium<12cm',
'menu item': 'menu item',
'message_id': 'message_id',
'meter': 'meter',
'meter cubed': 'meter cubed',
'meters': 'meters',
'module allows the site administrator to configure various options.': 'module allows the site administrator to configure various options.',
'module helps monitoring the status of hospitals.': 'module helps monitoring the status of hospitals.',
'module provides a mechanism to collaboratively provide an overview of the developing disaster, using online mapping (GIS).': 'module provides a mechanism to collaboratively provide an overview of the developing disaster, using online mapping (GIS).',
'mongoloid': 'mongoloid',
'more': 'more',
'n/a': 'n/a',
'negroid': 'negroid',
'never': 'never',
'new': 'new',
'new record inserted': 'new record inserted',
'next 100 rows': 'next 100 rows',
'no': 'no',
'none': 'none',
'normal': 'normal',
'not needed': 'not needed',
'not specified': 'not specified',
'num Zoom Levels': 'num Zoom Levels',
'once': 'once',
'open defecation': 'open defecation',
'or import from csv file': 'or import from csv file',
'other': 'other',
'over one hour': 'over one hour',
'pack of 10': 'pack of 10',
'pending': 'pending',
'people': 'people',
'piece': 'piece',
'pit': 'pit',
'pit latrine': 'pit latrine',
'postponed': 'postponed',
'preliminary template or draft, not actionable in its current form': 'preliminary template or draft, not actionable in its current form',
'previous 100 rows': 'previous 100 rows',
'problem connecting to twitter.com - please refresh': 'problem connecting to twitter.com - please refresh',
'provides a catalogue of digital media.': 'provides a catalogue of digital media.',
'record does not exist': 'record does not exist',
'record id': 'record id',
'red': 'red',
'reports successfully imported.': 'reports successfully imported.',
'retired': 'retired',
'retry': 'retry',
'river': 'river',
'sack 20kg': 'sack 20kg',
'sack 50kg': 'sack 50kg',
'see comment': 'see comment',
'selected': 'selected',
'separated': 'separated',
'separated from family': 'separated from family',
'shaved': 'shaved',
'shift_end': 'shift_end',
'shift_start': 'shift_start',
'short': 'short',
'short<6cm': 'short<6cm',
'sides': 'sides',
'sign-up now': 'sign-up now',
'simple': 'simple',
'single': 'single',
'slim': 'slim',
'state': 'state',
'straight': 'straight',
'suffered financial losses': 'suffered financial losses',
'table': 'table',
'table_name': 'table_name',
'tall': 'tall',
'this': 'this',
'times and it is still not working. We give in. Sorry.': 'times and it is still not working. We give in. Sorry.',
'to access the system': 'to access the system',
'to reset your password': 'to reset your password',
'to verify your email': 'to verify your email',
'to_id': 'to_id',
'ton': 'ton',
'tonsure': 'tonsure',
'total': 'total',
'tracks all shelters and stores basic details regarding them. It collaborates with other modules to track people associated with a shelter, the services available etc.': 'tracks all shelters and stores basic details regarding them. It collaborates with other modules to track people associated with a shelter, the services available etc.',
'tweepy module not available within the running Python - this needs installing for non-Tropo Twitter support!': 'tweepy module not available within the running Python - this needs installing for non-Tropo Twitter support!',
'unable to parse csv file': 'unable to parse csv file',
'unapproved': 'unapproved',
'uncheck all': 'uncheck all',
'unidentified': 'unidentified',
'uninhabitable = foundation and structure destroyed': 'uninhabitable = foundation and structure destroyed',
'unknown': 'unknown',
'unspecified': 'unspecified',
'updated': 'updated',
'updates only': 'updates only',
'vm_action': 'vm_action',
'wavy': 'wavy',
'weekly': 'weekly',
'white': 'white',
'wider area, longer term, usually contain multiple Activities': 'wider area, longer term, usually contain multiple Activities',
'widowed': 'widowed',
'window': 'window',
'windows broken, cracks in walls, roof slightly damaged': 'windows broken, cracks in walls, roof slightly damaged',
'within human habitat': 'within human habitat',
'xlwt module not available within the running Python - this needs installing for XLS output!': 'xlwt module not available within the running Python - this needs installing for XLS output!',
'yes': 'yes',
}
| dotskapes/wikiSkapes | languages/en-gb.py | Python | mit | 231,317 | ["VisIt"] | e9451dc4a54c3ddedc447e08d6e3b022a5970af9088368d0a2cb9a24e78ed530 |
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Test data for convolution
convolutionDefinition = """
name : "ConvolutionTest"
input : "data"
input_shape {dim:1 dim :3 dim :5 dim :5}
layer {
name : "convolution"
type : "Convolution"
bottom : "data"
top : "convolution"
convolution_param {
num_output : 4
kernel_size: 2
weight_filler {
type: "xavier"
}
bias_filler {
type: "gaussian"
std: 0.02
}
}
}
"""
convolutionShapes = [{"data": (1, 3, 5, 5)}]
convolutionName = "convolution"
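Editor's note: the convolution test above uses a 1×3×5×5 input with a 2×2 kernel, default stride 1 and no padding. As a self-contained sanity check (not part of the BigDL test file), the expected spatial output size follows the standard Caffe formula:

```python
def conv_out_size(in_size, kernel, stride=1, pad=0):
    # Standard Caffe convolution output formula:
    # out = (in + 2*pad - kernel) // stride + 1
    return (in_size + 2 * pad - kernel) // stride + 1

# ConvolutionTest: 5x5 spatial input, 2x2 kernel -> 4x4 output per filter,
# so the layer maps (1, 3, 5, 5) to (1, 4, 4, 4).
print(conv_out_size(5, 2))  # 4
```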
# Test data for Relu
reluDefinition = """
name : "ReluTest"
input : "data"
input_shape{dim:2 dim :2}
layer {
name: "relu"
type: "ReLU"
bottom: "data"
top: "relu"
}
"""
reluShapes = [{"data": (2, 2)}]
reluName = "relu"
# Test Data for SpatialCrossMapLRN
crossMapLrnDefinition = """
name : "SpatialCrossMapLRNTest"
input : "data"
input_shape{dim:1 dim :3 dim:224 dim :224}
layer {
name: "crossMapLrn"
type: "LRN"
bottom: "data"
top: "crossMapLrn"
lrn_param {
local_size: 5
alpha: 1.0E-4
beta: 0.75
k: 1.0
}
}
"""
crossMapLrnShapes = [{"data": (1, 3, 224, 224)}]
crossMapLrnName = "crossMapLrn"
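Editor's note: the `lrn_param` block above configures AlexNet-style cross-channel LRN. A minimal sketch of what those parameters compute for one pixel position (following Caffe's convention of dividing `alpha` by the window size; this helper is illustrative, not BigDL code):

```python
def lrn_across_channels(x, c, local_size=5, alpha=1e-4, beta=0.75, k=1.0):
    # Cross-channel LRN for channel c at a single pixel:
    #   b_c = a_c / (k + alpha/n * sum_{i in window} a_i^2) ** beta
    # where the window spans local_size channels centred on c.
    half = local_size // 2
    lo, hi = max(0, c - half), min(len(x), c + half + 1)
    s = sum(v * v for v in x[lo:hi])
    return x[c] / (k + alpha / local_size * s) ** beta

# With the default tiny alpha the activation is only slightly damped.
print(lrn_across_channels([1.0, 2.0, 3.0], 1))
```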
# Test Data for SpatialWithinChannelLRN
withinChannelLRNDefinition = """
name : "SpatialWithinChannelLRNTest"
input : "data"
input_shape{dim:1 dim :3 dim:224 dim :224}
layer {
name: "withinChannelLRN"
type: "LRN"
bottom: "data"
top: "withinChannelLRN"
lrn_param {
local_size: 5
alpha: 1.0E-4
beta: 0.75
k: 1.0
norm_region : WITHIN_CHANNEL
}
}
"""
withinChannelLRNShapes = [{"data": (1, 3, 224, 224)}]
withinChannelLRNName = "withinChannelLRN"
# Test data for Inner product
innerProductDefinition = """
name : "InnerProductTest"
input : "data"
input_shape{dim: 2 dim: 10}
layer {
name: "innerProduct"
type: "InnerProduct"
bottom: "data"
top: "innerProduct"
inner_product_param {
num_output: 10
}
}
"""
innerProductShapes = [{"data": (2, 10)}]
innerProductName = "innerProduct"
# Test data for max pooling
maxpoolingDefinition = """
name : "MaxpoolingTest"
input : "data"
input_shape{dim: 1 dim: 3 dim: 3 dim: 3}
layer {
name: "maxpooling"
type: "Pooling"
bottom: "data"
top: "maxpooling"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
"""
maxpoolingShapes = [{"data": (1, 3, 3, 3)}]
maxpoolingName = "maxpooling"
# Test data for average pooling
avepoolingDefinition = """
name : "AvepoolingTest"
input : "data"
input_shape{dim: 1 dim: 3 dim: 3 dim: 3}
layer {
name: "avepooling"
type: "Pooling"
bottom: "data"
top: "avepooling"
pooling_param {
pool: AVE
kernel_size: 2
stride: 2
}
}
"""
avepoolingShapes = [{"data": (1, 3, 3, 3)}]
avepoolingName = "avepooling"
# Test data for SoftMax
softMaxDefinition = """
name : "SoftMaxTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "softMax"
type: "Softmax"
bottom: "data"
top: "softMax"
}
"""
softMaxShapes = [{"data": (2, 2)}]
softMaxName = "softMax"
# Test data for Tanh
tanhDefinition = """
name : "TanhTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "tanh"
type: "TanH"
bottom: "data"
top: "tanh"
}
"""
tanhShapes = [{"data": (2, 2)}]
tanhName = "tanh"
# Test data for Sigmoid
sigmoidDefinition = """
name : "SigmoidTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "sigmoid"
type: "Sigmoid"
bottom: "data"
top: "sigmoid"
}
"""
sigmoidShapes = [{"data": (2, 2)}]
sigmoidName = "sigmoid"
# Test data for Abs
absDefinition = """
name : "AbsTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "abs"
type: "AbsVal"
bottom: "data"
top: "abs"
}
"""
absShapes = [{"data": (2, 2)}]
absName = "abs"
# Test data for BatchNormalization
batchNormDefinition = """
name : "BatchNormTest"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 224
input_dim: 224
layer {
bottom: "data"
top: "conv1"
name: "conv1"
type: "Convolution"
convolution_param {
num_output: 64
kernel_size: 7
pad: 3
stride: 2
}
}
layer {
bottom: "conv1"
top: "batchNorm"
name: "batchNorm"
type: "BatchNorm"
batch_norm_param {
use_global_stats: true
}
}
"""
batchNormShapes = [{"data": (1, 3, 224, 224)}]
batchNormName = "batchNorm"
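Editor's note: `use_global_stats: true` above runs BatchNorm in inference mode, normalizing with stored statistics. Caffe's `BatchNorm` layer only normalizes; the affine scale/shift lives in the separate `Scale` layer (tested further below). A standalone sketch of the normalization step:

```python
def batch_norm(x, mean, var, eps=1e-5):
    # Inference-mode BatchNorm: (x - global_mean) / sqrt(global_var + eps).
    # No learned gamma/beta here -- in Caffe that is the Scale layer's job.
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

print(batch_norm([1.0, 3.0], 2.0, 1.0))  # approximately [-1.0, 1.0]
```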
# Test data for Concat
concatDefinition = """
name : "ConcatTest"
input : "data1"
input_shape{dim: 2 dim: 2}
input : "data2"
input_shape{dim: 2 dim: 2}
layer {
name: "abs"
type: "AbsVal"
bottom: "data1"
top: "abs"
}
layer {
name: "sigmoid"
type: "Sigmoid"
bottom: "data2"
top: "sigmoid"
}
layer {
name: "concat"
type: "Concat"
bottom: "abs"
bottom: "sigmoid"
top: "concat"
}
"""
concatShapes = [{"data1": (2, 2)}, {"data2": (2, 2)}]
concatName = "concat"
# Test data for Elu
eluDefinition = """
name : "EluTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "elu"
type: "ELU"
bottom: "data"
top: "elu"
}
"""
eluShapes = [{"data": (2, 2)}]
eluName = "elu"
# Test data for Flatten
flattenDefinition = """
name : "FlattenTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "flatten"
type: "Flatten"
bottom: "data"
top: "flatten"
}
"""
flattenShapes = [{"data": (2, 2)}]
flattenName = "flatten"
# Test data for Log
logDefinition = """
name : "LogTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "log"
type: "Log"
bottom: "data"
top: "log"
}
"""
logShapes = [{"data": (2, 2)}]
logName = "log"
# Test data for Power
powerDefinition = """
name : "PowerTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "power"
type: "Power"
bottom: "data"
top: "power"
}
"""
powerShapes = [{"data": (2, 2)}]
powerName = "power"
# Test data for PReLU
preluDefinition = """
name : "PReLUTest"
input : "data"
input_shape{dim: 2 dim: 5}
layer {
name: "prelu"
type: "PReLU"
bottom: "data"
top: "prelu"
}
"""
preluShapes = [{"data": (2, 5)}]
preluName = "prelu"
# Test data for Reshape
reshapeDefinition = """
name : "ReshapeTest"
input : "data"
input_shape{dim: 2 dim: 8}
layer {
name: "reshape"
type: "Reshape"
bottom: "data"
top: "reshape"
reshape_param { shape { dim: 0 dim: -1 dim: 4 } }
}
"""
reshapeShapes = [{"data": (2, 8)}]
reshapeName = "reshape"
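Editor's note: in the `reshape_param` above, `dim: 0` copies the corresponding input dimension and `dim: -1` is inferred so the element count is preserved, so a (2, 8) input becomes (2, 2, 4). A small illustrative helper (not BigDL code):

```python
def caffe_reshape(in_shape, spec):
    # Caffe Reshape semantics: 0 copies the input dim at that axis,
    # -1 is inferred from the total element count.
    out = [in_shape[i] if d == 0 else d for i, d in enumerate(spec)]
    total = 1
    for d in in_shape:
        total *= d
    known = 1
    for d in out:
        if d != -1:
            known *= d
    return tuple(total // known if d == -1 else d for d in out)

print(caffe_reshape((2, 8), (0, -1, 4)))  # (2, 2, 4)
```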
# Test data for Scale
scaleDefinition = """
name : "ScaleTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "scale"
type: "Scale"
bottom: "data"
top: "scale"
}
"""
scaleShapes = [{"data": (2, 2)}]
scaleName = "scale"
# Test data for Bias
biasDefinition = """
name : "BiasTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "bias"
type: "Bias"
bottom: "data"
top: "bias"
}
"""
biasShapes = [{"data": (2, 2)}]
biasName = "bias"
# Test data for Threshold
thresholdDefinition = """
name : "ThresholdTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "threshold"
type: "Threshold"
bottom: "data"
top: "threshold"
threshold_param {
threshold : 0.5
}
}
"""
thresholdShapes = [{"data": (2, 2)}]
thresholdName = "threshold"
# Test data for Exp
expDefinition = """
name : "ExpTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "exp"
type: "Exp"
bottom: "data"
top: "exp"
}
"""
expShapes = [{"data": (2, 2)}]
expName = "exp"
# Test data for Slice
sliceDefinition = """
name : "SliceTest"
input : "data"
input_shape{dim: 2 dim: 2}
layer {
name: "slice"
type: "Slice"
bottom: "data"
top: "slice"
}
"""
sliceShapes = [{"data": (2, 2)}]
sliceName = "slice"
# Test data for Tile
tileDefinition = """
name : "TileTest"
input : "data"
input_shape{dim: 2 dim : 2}
layer {
name: "tile"
type: "Tile"
bottom: "data"
top: "tile"
tile_param {
axis : 1
tiles : 2
}
}
"""
tileShapes = [{"data": (2, 2)}]
tileName = "tile"
# Test data for Eltwise MAX
eltwiseMaxDefinition = """
name : "EltwiseMaxTest"
input : "data1"
input_shape{dim: 2 dim: 2}
input : "data2"
input_shape{dim: 2 dim: 2}
layer {
name: "abs"
type: "AbsVal"
bottom: "data1"
top: "abs"
}
layer {
name: "sigmoid"
type: "Sigmoid"
bottom: "data2"
top: "sigmoid"
}
layer {
name: "eltwiseMax"
type: "Eltwise"
bottom: "abs"
bottom: "sigmoid"
top: "eltwiseMax"
eltwise_param {
operation : MAX
}
}
"""
eltwiseMaxShapes = [{"data1": (2, 2)}, {"data2": (2, 2)}]
eltwiseMaxName = "eltwiseMax"
# Test data for Eltwise Prod
eltwiseProdDefinition = """
name : "EltwiseProdTest"
input : "data1"
input_shape{dim: 2 dim: 2}
input : "data2"
input_shape{dim: 2 dim: 2}
layer {
name: "abs"
type: "AbsVal"
bottom: "data1"
top: "abs"
}
layer {
name: "sigmoid"
type: "Sigmoid"
bottom: "data2"
top: "sigmoid"
}
layer {
name: "eltwiseProd"
type: "Eltwise"
bottom: "abs"
bottom: "sigmoid"
top: "eltwiseProd"
eltwise_param {
operation : PROD
}
}
"""
eltwiseProdShapes = [{"data1": (2, 2)}, {"data2": (2, 2)}]
eltwiseProdName = "eltwiseProd"
# Test data for Eltwise SUM
eltwiseSUMDefinition = """
name : "EltwiseSUMTest"
input : "data1"
input_shape{dim: 2 dim: 2}
input : "data2"
input_shape{dim: 2 dim: 2}
layer {
name: "abs1"
type: "AbsVal"
bottom: "data1"
top: "abs1"
}
layer {
name: "abs2"
type: "AbsVal"
bottom: "data2"
top: "abs2"
}
layer {
name: "eltwiseSUM"
type: "Eltwise"
bottom: "abs1"
bottom: "abs2"
top: "eltwiseSUM"
eltwise_param {
operation : SUM
coeff: [0.5 , 1.0]
}
}
"""
eltwiseSUMShapes = [{"data1": (2, 2)}, {"data2": (2, 2)}]
eltwiseSUMName = "eltwiseSUM"
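Editor's note: the Eltwise SUM layer above combines its two (AbsVal-transformed) inputs elementwise, weighted by `coeff: [0.5, 1.0]`. Ignoring the absolute-value stage, the arithmetic is just a weighted elementwise sum:

```python
def eltwise_sum(a, b, coeff=(0.5, 1.0)):
    # Eltwise SUM with per-input coefficients, as in the prototxt above:
    # out[i][j] = coeff[0]*a[i][j] + coeff[1]*b[i][j]
    return [[coeff[0] * x + coeff[1] * y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

print(eltwise_sum([[2.0, 4.0]], [[1.0, 1.0]]))  # [[2.0, 3.0]]
```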
deconvolutionDefinition = """
name : "deconvolution"
input : "data"
input_shape {dim:1 dim :3 dim :5 dim :5}
layer {
name: "deconvolution"
type: "Deconvolution"
bottom: "data"
top: "deconvolution"
convolution_param {
num_output: 4
pad: 0
kernel_size: 2
stride: 2
weight_filler {
type: "xavier"
}
}
}
"""
deconvolutionShapes = [{"data": (1, 3, 5, 5)}]
deconvolutionName = "deconvolution"
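Editor's note: for the deconvolution (transposed convolution) test above, the 5×5 input with kernel 2, stride 2 and no padding upsamples to 10×10. The Caffe output-size formula, as a quick check (illustrative helper, not part of the test file):

```python
def deconv_out_size(in_size, kernel, stride=1, pad=0):
    # Caffe deconvolution ("transposed convolution") output formula:
    # out = (in - 1) * stride - 2*pad + kernel
    return (in_size - 1) * stride - 2 * pad + kernel

print(deconv_out_size(5, 2, stride=2))  # 10
```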
# End layer definitions
testlayers = []
class caffe_test_layer():
def __init__(self, name, definition, shapes):
self.name = name
self.definition = definition
self.shapes = shapes
def registerTestLayer(name, definition, shapes):
layer = caffe_test_layer(name, definition, shapes)
testlayers.append(layer)
registerTestLayer(convolutionName, convolutionDefinition, convolutionShapes)
registerTestLayer(reluName, reluDefinition, reluShapes)
registerTestLayer(crossMapLrnName, crossMapLrnDefinition, crossMapLrnShapes)
registerTestLayer(withinChannelLRNName, withinChannelLRNDefinition, withinChannelLRNShapes)
registerTestLayer(innerProductName, innerProductDefinition, innerProductShapes)
registerTestLayer(maxpoolingName, maxpoolingDefinition, maxpoolingShapes)
registerTestLayer(avepoolingName, avepoolingDefinition, avepoolingShapes)
registerTestLayer(softMaxName, softMaxDefinition, softMaxShapes)
registerTestLayer(tanhName, tanhDefinition, tanhShapes)
registerTestLayer(sigmoidName, sigmoidDefinition, sigmoidShapes)
registerTestLayer(absName, absDefinition, absShapes)
registerTestLayer(batchNormName, batchNormDefinition, batchNormShapes)
registerTestLayer(concatName, concatDefinition, concatShapes)
registerTestLayer(eluName, eluDefinition, eluShapes)
registerTestLayer(flattenName, flattenDefinition, flattenShapes)
registerTestLayer(logName, logDefinition, logShapes)
registerTestLayer(powerName, powerDefinition, powerShapes)
registerTestLayer(preluName, preluDefinition, preluShapes)
registerTestLayer(reshapeName, reshapeDefinition, reshapeShapes)
registerTestLayer(scaleName, scaleDefinition, scaleShapes)
registerTestLayer(biasName, biasDefinition, biasShapes)
registerTestLayer(thresholdName, thresholdDefinition, thresholdShapes)
registerTestLayer(expName, expDefinition, expShapes)
registerTestLayer(sliceName, sliceDefinition, sliceShapes)
registerTestLayer(tileName, tileDefinition, tileShapes)
registerTestLayer(eltwiseMaxName, eltwiseMaxDefinition, eltwiseMaxShapes)
registerTestLayer(eltwiseProdName, eltwiseProdDefinition, eltwiseProdShapes)
registerTestLayer(eltwiseSUMName, eltwiseSUMDefinition, eltwiseSUMShapes)
registerTestLayer(deconvolutionName, deconvolutionDefinition, deconvolutionShapes)
| intel-analytics/BigDL | python/dllib/test/bigdl/caffe/caffe_layers.py | Python | apache-2.0 | 12,755 | ["Gaussian"] | c7dd41f7660c84d9faa02bc8043982ecf9bb0f8f4dcbb06d628cc30736c0f887 |
##############################################################################
# #
# NodeBox 1.9.5 -> Pycairo wrapper #
# #
##############################################################################
import cairo, colorsys, math
class Color(object):
def __init__(self, c1, c2, c3, a, mode='rgb'):
c1 = min(max(0.0, c1), 1.0)
c2 = min(max(0.0, c2), 1.0)
c3 = min(max(0.0, c3), 1.0)
a = min(max(0.0, a), 1.0)
if mode == 'rgb':
self.r = c1
self.g = c2
self.b = c3
self.a = a
self._update_hsv()
elif mode == 'hsv':
self.h = c1
self.s = c2
self.v = c3
self.a = a
self._update_rgb()
else:
raise ValueError('Invalid color mode: ' + mode)
def __repr__(self):
return 'Color(r=%.3f, g=%.3f, b=%.3f, a=%.3f)' % (self.r, self.g, self.b, self.a)
def copy(self):
return Color(self.r, self.g, self.b, self.a)
def rgba(self):
return (self.r, self.g, self.b, self.a)
def darken(self, step=0.1):
return Color(self.h, self.s, self.v - step, self.a, mode='hsv')
def lighten(self, step=0.1):
return Color(self.h, self.s, self.v + step, self.a, mode='hsv')
def blend(self, clr, factor=0.5):
r = self.r * (1.0 - factor) + clr.r * factor
g = self.g * (1.0 - factor) + clr.g * factor
b = self.b * (1.0 - factor) + clr.b * factor
a = self.a * (1.0 - factor) + clr.a * factor
return Color(r, g, b, a)
def _update_hsv(self):
self.h, self.s, self.v = colorsys.rgb_to_hsv(self.r, self.g, self.b)
def _update_rgb(self):
self.r, self.g, self.b = colorsys.hsv_to_rgb(self.h, self.s, self.v)
def color(*args):
# Only K(A) & RGB(A) modes are supported, HSB(A) & CMYK(A) are not
n = len(args)
if n == 1:
r = g = b = args[0]
a = 1.0
elif n == 2:
r = g = b = args[0]
a = args[1]
elif n == 3:
r, g, b = args
a = 1.0
elif n == 4:
r, g, b, a = args
else:
raise ValueError("Invalid color value: '%s'" % args)
r = min(max(0.0, r), 1.0)
g = min(max(0.0, g), 1.0)
b = min(max(0.0, b), 1.0)
a = min(max(0.0, a), 1.0)
return Color(r, g, b, a)
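Editor's note: `Color.blend()` above is a straight per-channel linear interpolation. The same arithmetic on plain RGBA tuples, as a self-contained sketch:

```python
def blend_rgba(c1, c2, factor=0.5):
    # Per-channel linear interpolation, mirroring Color.blend():
    # each channel is c1*(1 - factor) + c2*factor.
    return tuple(a * (1.0 - factor) + b * factor for a, b in zip(c1, c2))

# Blending opaque red and opaque blue halfway gives a mid purple.
print(blend_rgba((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))  # (0.5, 0.0, 0.5, 1.0)
```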
#=============================================================================#
#= NODEBOX COMMANDS =#
#=============================================================================#
class Context(object):
def __init__(self):
self._backgroundcolor = None
self._fillcolor = None
self._strokecolor = None
self._strokewidth = 1.0
self._autoclosepath = True
self._fontname = 'Helvetica'
self._fontsize = 12.0
self._lineheight = 1.5
self._shadow = False
self._shadow_dx = 0
self._shadow_dy = 0
self._shadow_radius = 3
self._shadow_color = color(0, 0, 0, 1)
self._shadow_blur_passes = 2
self._bitmap_dpi = 150
# TODO call on init
def init(self):
self.font(self._fontname, self._fontsize)
self.strokewidth(self._strokewidth)
### SHAPE #################################################################
def rect(self, x, y, width, height, roundness=0.0, draw=True):
# Negative width & height behaviour not implemented
# Formula for rounded rectangle taken from NodeBox 1 source code
c = self._ctx
if roundness == 0:
c.rectangle(x, y, width, height)
else:
curve = min(width * roundness, height * roundness)
xw = x + width
yh = y + height
c.move_to(x, y + curve)
c.curve_to(x, y, x, y, x + curve, y)
c.line_to(xw - curve, y)
c.curve_to(xw, y, xw, y, xw, y + curve)
c.line_to(xw, yh - curve)
c.curve_to(xw, yh, xw, yh, xw - curve, yh)
c.line_to(x + curve, yh)
c.curve_to(x, yh, x, yh, x, yh - curve)
c.close_path()
if draw:
self._draw()
else:
path = c.copy_path()
c.new_path()
return path
def oval(self, x, y, width, height, draw=True):
c = self._ctx
# Negative width & height behaviour not implemented
if width == 0 or height == 0:
return
cx = x + width / 2.
cy = y + height / 2.
r = width / 2.
yscale = float(height) / width
c.new_path()
c.save()
c.scale(1, yscale)
c.arc(cx, cy / yscale, r, 0, 2 * math.pi)
c.restore()
if draw:
self._draw()
else:
path = c.copy_path()
c.new_path()
return path
def line(self, x1, y1, x2, y2, draw=True):
c = self._ctx
c.move_to(x1, y1)
c.line_to(x2, y2)
if draw:
self._draw_stroke()
else:
path = c.copy_path()
c.new_path()
return path
def arrow(self, x, y, width, type, draw=True):
raise NotImplementedError
def star(self, x, y, points=20, outer=100, inner=50, draw=True):
raise NotImplementedError
### PATH ##################################################################
def beginpath(self, x, y):
self._ctx.move_to(x, y)
def moveto(self, x, y):
self._ctx.move_to(x, y)
def lineto(self, x, y):
self._ctx.line_to(x, y)
def curveto(self, x1, y1, x2, y2, x3, y3):
self._ctx.curve_to(x1, y1, x2, y2, x3, y3)
def findpath(self, list, curvature=1.0):
raise NotImplementedError
def endpath(self, draw=True):
if self._autoclosepath:
self._ctx.close_path()
if draw:
self._draw()
else:
path = self._ctx.copy_path()
self._ctx.new_path()
return path
def drawpath(self, path):
self._ctx.append_path(path)
self._draw()
def beginclip(self, path):
self._ctx.save()
self._ctx.new_path()
self._ctx.append_path(path)
self._ctx.clip()
def endclip(self):
self._ctx.restore()
def autoclosepath(self, close=True):
self._autoclosepath = close
### TRANSFORM #############################################################
def transform(self, mode):
raise NotImplementedError
def translate(self, x, y):
self._ctx.translate(x, y)
def rotate(self, degrees=0.0, radians=0.0):
if degrees != 0:
radians = degrees * math.pi / 180
self._ctx.rotate(radians)
def scale(self, x, y=None):
if y is None:
y = x
self._ctx.scale(x, y)
def skew(self, x, y=None):
raise NotImplementedError
def push(self):
self._ctx.save()
def pop(self):
self._ctx.restore()
def reset(self):
self._ctx.identity_matrix()
### COLOR #################################################################
def outputmode(self, mode):
# Not implemented; always RGB
raise NotImplementedError
def colormode(self, mode):
pass
def color(self, *args):
return color(*args)
def fill(self, *args):
self._fillcolor = self._make_color_obj(*args)
def nofill(self):
self._fillcolor = None
def stroke(self, *args):
self._strokecolor = self._make_color_obj(*args)
def nostroke(self):
self._strokecolor = None
def strokewidth(self, width):
self._ctx.set_line_width(width)
def background(self, *args):
# Transparent background
if len(args) == 1 and args[0] is None:
return
col = self._make_color_obj(*args)
self._backgroundcolor = col
c = self._ctx
c.set_source_rgba(*col.rgba())
c.rectangle(0, 0, self._width, self._height)
c.fill()
### TYPOGRAPHY ############################################################
def font(self, fontname, fontsize=None):
self._ctx.select_font_face(fontname, cairo.FONT_SLANT_NORMAL,
cairo.FONT_WEIGHT_NORMAL)
self._fontname = fontname
if fontsize:
self.fontsize(fontsize)
def fontsize(self, fontsize):
self._ctx.set_font_size(fontsize)
self._fontsize = fontsize
def text(self, txt, x, y):
# width, height & outline not implemented
c = self._ctx
c.set_source_rgba(*self._fillcolor.rgba())
c.move_to(x, y)
c.show_text(txt)
def textpath(self, txt, x, y, width=None, height=1000000):
raise NotImplementedError
def textwidth(self, txt):
width, height = self.textmetrics(txt)
return width
def textheight(self, txt):
width, height = self.textmetrics(txt)
return height
def textmetrics(self, txt):
(ascent, descent, height,
max_x_advance, max_y_advance) = self._ctx.font_extents()
linewidth = self._ctx.text_extents(txt)[4]
return linewidth, height + descent
def lineheight(self, height=None):
if height:
self._lineheight = height
return self._lineheight
def align(self, align):
raise NotImplementedError
### IMAGE #################################################################
def image(self, path, x, y, width=None, height=None, alpha=1.0, data=None):
raise NotImplementedError
def imagesize(self, path):
raise NotImplementedError
### UTILITY ###############################################################
def size(self, w, h):
raise NotImplementedError
def var(self, name, type, default, min, max):
raise NotImplementedError
def random(self, v1=None, v2=None):
raise NotImplementedError
def choice(self, list):
raise NotImplementedError
def grid(self, cols, rows, colsize=1, rowsize=1):
raise NotImplementedError
def files(self, path):
raise NotImplementedError
def autotext(self, xml):
raise NotImplementedError
#=========================================================================#
#= COLORS LIBRARY =#
#=========================================================================#
def rgba_color(self, c):
return self.color(*c)
def gradientfill(self, path, clr1, clr2, dx=0.0, dy=0.0,
type='linear', spread=1.0):
c = self._ctx
c.append_path(path)
x1, y1, x2, y2 = c.fill_extents()
pat = cairo.LinearGradient(0, y1, 0, y2)
pat.add_color_stop_rgba(1, *clr1.rgba())
pat.add_color_stop_rgba(0, *clr2.rgba())
if self._shadow:
self._draw_shadow()
c.set_source(pat)
if self._strokecolor:
c.fill_preserve()
c.set_source_rgba(*self._strokecolor.rgba())
c.stroke()
else:
c.fill()
def shadow(self, dx=0.0, dy=0.0, blur=3.0, clr=color(0, 0, 0, 1)):
self._shadow_dx = dx
self._shadow_dy = dy
self._shadow_radius = blur / 2
self._shadow_color = clr
self._shadow = True
def noshadow(self):
self._shadow = False
#=========================================================================#
#= HELPER FUNCTIONS =#
#=========================================================================#
def initsurface(self, w, h, fmt, fname=None, scale=1.0):
self._width = w
self._height = h
w *= scale
h *= scale
if fmt == 'pdf':
self._surface = cairo.PDFSurface(fname, w, h)
elif fmt == 'svg':
self._surface = cairo.SVGSurface(fname, w, h)
elif fmt == 'png':
w = int(w + .5)
h = int(h + .5)
self._surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, w, h)
elif fmt == 'ps':
self._surface = cairo.PSSurface(fname, w, h)
else:
raise ValueError("Invalid output format: '%s'" % fmt)
self._format = fmt
self._filename = fname
self._ctx = cairo.Context(self._surface)
self._ctx.scale(scale, scale)
def writesurface(self):
if self._format == 'png':
self._surface.write_to_png(self._filename)
else:
self._ctx.show_page()
def _make_color_obj(self, *args):
if len(args) == 1 and isinstance(args[0], Color):
return args[0]
else:
return self.color(*args)
def _draw_stroke(self):
c = self._ctx
if self._strokecolor:
c.set_source_rgba(*self._strokecolor.rgba())
c.stroke()
def _draw(self):
c = self._ctx
if self._fillcolor:
if self._shadow:
self._draw_shadow()
c.set_source_rgba(*self._fillcolor.rgba())
if self._strokecolor:
c.fill_preserve()
c.set_source_rgba(*self._strokecolor.rgba())
c.stroke()
else:
c.fill()
else:
self._draw_stroke()
def _draw_shadow(self):
c = self._ctx
img, padding = self._render_bitmap_shadow()
x1, y1, x2, y2 = c.fill_extents()
dpi_scale = 72.0 / self._bitmap_dpi
c.save()
c.set_source_rgba(*self._shadow_color.rgba())
c.translate(x1 + self._shadow_dx, y1 + self._shadow_dy)
c.scale(dpi_scale, dpi_scale)
c.translate(-padding, -padding)
c.mask_surface(img, 0, 0)
c.restore()
def _render_bitmap_shadow(self):
# 'Moving average' subpixel resolution box filter implementation
# based on Ryg's posts on fast blurs:
#
# http://fgiesen.wordpress.com/2012/07/30/fast-blurs-1/
# http://fgiesen.wordpress.com/2012/08/01/fast-blurs-2/
#
# Note: Shadows don't work properly for SVG output, as the shadow
# bitmaps aren't translated correctly and are all drawn at the
# origin.
dpi_scale = self._bitmap_dpi / 72.0
radius = self._shadow_radius * dpi_scale
# With 3 passes we get a good approximation of Gaussian blur
# within a 3% error margin (piecewise quadratic filter), which is
# good enough for practical purposes.
# 1 - box filter
# 2 - triangle filter
# 3 - piecewise quadratic filter
# 4 - piecewise cubic filter
passes = self._shadow_blur_passes
# Integer part of radius
m = int(radius)
# Fractional part of radius
alpha = radius - m
scale = 1.0 / (2 * radius + 1)
# Calculate the padding required for the blur around the shape's
# bounding box. As we don't do any boundary checks when applying
# the filter, negative index values will wrap around to the end
# of the image buffer. Therefore, we need to make the padding a
# slightly larger than the blur radius to avoid visible wrapping
# effects around the edges, hence the 1.5 multiplier.
padding = int((m+2) * passes * 1.5 + 0.5)
# Calculate shape extents. x1, y1 will hold the offset from the
# origin.
c = self._ctx
x1, y1, x2, y2 = c.fill_extents()
# Add some extra padding (3) to the sides
width = int((x2 - x1) * dpi_scale + padding * 2 + 0.5) + 3
height = int((y2 - y1) * dpi_scale + padding * 2 + 0.5) + 3
# As we don't do any boundary checks when applying the filter,
# the buffer needs to be made N rows larger to prevent index out
# of range exceptions, where N is the maximum sampling radius
# (m+2 in this case). The buffer will be in ARGB32 format, so we
# need 4 bytes per pixel.
data = bytearray(width * (height + m+2) * 4)
# Create an image surface backed by our bytebuffer
img = cairo.ImageSurface.create_for_data(data, cairo.FORMAT_ARGB32,
width, height)
imgctx = cairo.Context(img)
# Draw the shape to be blurred offset from the origin, so
# there's space around it for the blur.
offsx = int(-x1 * dpi_scale + padding + 0.5)
offsy = int(-y1 * dpi_scale + padding + 0.5)
imgctx.translate(offsx, offsy)
imgctx.scale(dpi_scale, dpi_scale)
imgctx.append_path(c.copy_path())
# Draw the shape with full opacity; the alpha value will be used
# later when we blit the blurred image onto the target surface.
col = self._shadow_color.copy()
col.a = 1.0
imgctx.set_source_rgba(*col.rgba())
imgctx.fill()
# Horizontal passes (blur the alpha channel only)
row = bytearray(width * 4)
for y in range(0, height):
for p in range(passes):
yoffs = y * width * 4 + 3
sum_ = data[yoffs]
for x in range(m):
sum_ += data[yoffs - x*4] + data[yoffs + x*4]
sum_ += alpha * data[yoffs - m*4] + data[yoffs + m*4]
for x in range(width):
a = int(sum_ * scale)
row[x*4] = a
a = data[yoffs + (x+m+1)*4]
b = data[yoffs + (x+m+2)*4]
sum_ += a + alpha * (b - a)
a = data[yoffs + (x-m)*4]
b = data[yoffs + (x-m-1)*4]
sum_ -= a + alpha * (b - a)
data[yoffs:yoffs + width*4] = row
# Vertical passes (blur the alpha channel only)
col = bytearray(height)
for x in range(width):
for p in range(passes):
xoffs = x*4+3
sum_ = data[xoffs]
for y in range(m):
sum_ += data[xoffs - y*width*4] + data[xoffs + y*width*4]
sum_ += alpha * data[xoffs - m*width*4] + data[xoffs + m*width*4]
for y in range(0, height):
a = int(sum_ * scale)
col[y] = a
a = data[xoffs + (y+m+1)*width*4]
b = data[xoffs + (y+m+2)*width*4]
sum_ += a + alpha * (b - a)
a = data[xoffs + (y-m)*width*4]
b = data[xoffs + (y-m-1)*width*4]
sum_ -= a + alpha * (b - a)
for y in range(1, height - 1):
data[xoffs + y*width*4] = col[y]
return img, padding
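Editor's note: the moving-average filter above works on the bitmap's alpha channel with a fractional radius. As a minimal illustration of the underlying idea (integer radius only, not the actual shadow code), here is a 1-D box blur iterated three times over an impulse, which already yields a smooth, roughly Gaussian bump:

```python
def box_blur_1d(samples, m):
    # One pass of a radius-m box filter with clamped (edge-replicated)
    # borders: each output sample is the mean of a (2m + 1)-wide window.
    n = len(samples)
    out = []
    for i in range(n):
        window = [samples[max(0, min(n - 1, i + d))] for d in range(-m, m + 1)]
        out.append(sum(window) / float(len(window)))
    return out

# Repeated box-filter passes converge towards a Gaussian response.
signal = [0.0] * 7
signal[3] = 1.0
for _ in range(3):
    signal = box_blur_1d(signal, 1)
```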
context = Context()
| johnnovak/twyg | twyg/cairowrapper.py | Python | mit | 19,681 | ["Gaussian"] | 674323a71c52c0d091d68c51f24ac4f38b44788753c1a5798310709dbdf78691 |
import logging
import os
import string
import tempfile
from time import gmtime
from time import strftime
from datetime import date
from datetime import datetime
from galaxy import util
from galaxy import web
from galaxy.web.base.controller import BaseUIController
from galaxy.web.form_builder import CheckboxField
from galaxy.web.framework.helpers import grids
from galaxy.util import json
from galaxy.model.orm import and_
from tool_shed.capsule import capsule_manager
from tool_shed.dependencies.repository import relation_builder
from tool_shed.galaxy_install import dependency_display
from tool_shed.metadata import repository_metadata_manager
from tool_shed.utility_containers import ToolShedUtilityContainerManager
from tool_shed.tools import tool_validator
from tool_shed.tools import tool_version_manager
from tool_shed.util import basic_util
from tool_shed.util import common_util
from tool_shed.util import encoding_util
from tool_shed.util import hg_util
from tool_shed.util import metadata_util
from tool_shed.util import readme_util
from tool_shed.util import repository_util
from tool_shed.util import search_util
from tool_shed.util import shed_util_common as suc
from tool_shed.util import tool_util
from tool_shed.util import workflow_util
from galaxy.webapps.tool_shed.util import ratings_util
import tool_shed.grids.repository_grids as repository_grids
import tool_shed.grids.util as grids_util
import tool_shed.repository_types.util as rt_util
from galaxy import eggs
eggs.require( 'mercurial' )
from mercurial import mdiff
from mercurial import patch
log = logging.getLogger( __name__ )
malicious_error = " This changeset cannot be downloaded because it potentially produces malicious behavior or contains inappropriate content."
malicious_error_can_push = " Correct this changeset as soon as possible, it potentially produces malicious behavior or contains inappropriate content."
class RepositoryController( BaseUIController, ratings_util.ItemRatings ):
category_grid = repository_grids.CategoryGrid()
datatypes_grid = repository_grids.DatatypesGrid()
deprecated_repositories_i_own_grid = repository_grids.DeprecatedRepositoriesIOwnGrid()
email_alerts_repository_grid = repository_grids.EmailAlertsRepositoryGrid()
docker_image_grid = repository_grids.DockerImageGrid()
install_matched_repository_grid = repository_grids.InstallMatchedRepositoryGrid()
matched_repository_grid = repository_grids.MatchedRepositoryGrid()
my_writable_repositories_grid = repository_grids.MyWritableRepositoriesGrid()
my_writable_repositories_missing_tool_test_components_grid = repository_grids.MyWritableRepositoriesMissingToolTestComponentsGrid()
my_writable_repositories_with_failing_tool_tests_grid = repository_grids.MyWritableRepositoriesWithFailingToolTestsGrid()
my_writable_repositories_with_invalid_tools_grid = repository_grids.MyWritableRepositoriesWithInvalidToolsGrid()
my_writable_repositories_with_no_failing_tool_tests_grid = repository_grids.MyWritableRepositoriesWithNoFailingToolTestsGrid()
my_writable_repositories_with_skip_tests_checked_grid = repository_grids.MyWritableRepositoriesWithSkipTestsCheckedGrid()
my_writable_repositories_with_test_install_errors_grid = repository_grids.MyWritableRepositoriesWithTestInstallErrorsGrid()
repositories_by_user_grid = repository_grids.RepositoriesByUserGrid()
repositories_i_own_grid = repository_grids.RepositoriesIOwnGrid()
repositories_i_can_administer_grid = repository_grids.RepositoriesICanAdministerGrid()
repositories_in_category_grid = repository_grids.RepositoriesInCategoryGrid()
repositories_missing_tool_test_components_grid = repository_grids.RepositoriesMissingToolTestComponentsGrid()
repositories_with_failing_tool_tests_grid = repository_grids.RepositoriesWithFailingToolTestsGrid()
repositories_with_invalid_tools_grid = repository_grids.RepositoriesWithInvalidToolsGrid()
repositories_with_no_failing_tool_tests_grid = repository_grids.RepositoriesWithNoFailingToolTestsGrid()
repositories_with_skip_tests_checked_grid = repository_grids.RepositoriesWithSkipTestsCheckedGrid()
repositories_with_test_install_errors_grid = repository_grids.RepositoriesWithTestInstallErrorsGrid()
repository_dependencies_grid = repository_grids.RepositoryDependenciesGrid()
repository_grid = repository_grids.RepositoryGrid()
# The repository_metadata_grid is not currently displayed, but is sub-classed by several grids.
repository_metadata_grid = repository_grids.RepositoryMetadataGrid()
tool_dependencies_grid = repository_grids.ToolDependenciesGrid()
tools_grid = repository_grids.ToolsGrid()
valid_category_grid = repository_grids.ValidCategoryGrid()
valid_repository_grid = repository_grids.ValidRepositoryGrid()
@web.expose
def browse_categories( self, trans, **kwd ):
# The request came from the tool shed.
if 'f-free-text-search' in kwd:
# Trick to enable searching repository name, description from the CategoryGrid.
# What we've done is rendered the search box for the RepositoryGrid on the grid.mako
# template for the CategoryGrid. See ~/templates/webapps/tool_shed/category/grid.mako.
# Since we are searching repositories and not categories, redirect to browse_repositories().
            if 'id' in kwd and kwd[ 'id' ] == kwd[ 'f-free-text-search' ]:
# The value of 'id' has been set to the search string, which is a repository name.
# We'll try to get the desired encoded repository id to pass on.
try:
repository_name = kwd[ 'id' ]
repository = suc.get_repository_by_name( trans.app, repository_name )
kwd[ 'id' ] = trans.security.encode_id( repository.id )
                except Exception:
                    pass
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
**kwd ) )
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation in [ "repositories_by_category", "repositories_by_user" ]:
# Eliminate the current filters if any exist.
                for k in list( kwd.keys() ):
                    if k.startswith( 'f-' ):
                        del kwd[ k ]
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
**kwd ) )
title = trans.app.repository_grid_filter_manager.get_grid_title( trans,
trailing_string='by Category',
default='Repositories' )
self.category_grid.title = title
return self.category_grid( trans, **kwd )
@web.expose
def browse_datatypes( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd[ 'id' ]
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
changeset_revision=changeset_revision )
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**new_kwd ) )
return self.datatypes_grid( trans, **kwd )
@web.expose
def browse_deprecated_repositories_i_own( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
return self.deprecated_repositories_i_own_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
return self.my_writable_repositories_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories_missing_tool_test_components( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>you are authorized to update them</li>'
message += '<li>the latest installable revision contains at least 1 tool with no defined tests <b>OR</b>:</li>'
message += '<li>the latest installable revision contains at least 1 tool with a test that requires a missing test data file</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.my_writable_repositories_missing_tool_test_components_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories_with_failing_tool_tests( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>you are authorized to update them</li>'
message += '<li>the latest installable revision contains at least 1 tool</li>'
message += '<li>the latest installable revision is not missing any tool test components</li>'
message += '<li>the latest installable revision has no installation errors</li>'
message += '<li>the latest installable revision has at least 1 tool test that fails</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.my_writable_repositories_with_failing_tool_tests_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories_with_invalid_tools( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>you are authorized to update them</li>'
message += '<li>the latest metadata revision contains at least 1 invalid tool</li>'
message += '</ul>'
message += 'Click the tool config file name to see why the tool is invalid.'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.my_writable_repositories_with_invalid_tools_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories_with_no_failing_tool_tests( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>you are authorized to update them</li>'
message += '<li>the latest installable revision contains at least 1 tool</li>'
message += '<li>the latest installable revision is not missing any tool test components</li>'
message += '<li>the latest installable revision has no tool tests that fail</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.my_writable_repositories_with_no_failing_tool_tests_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories_with_install_errors( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>you are authorized to update them</li>'
message += '<li>the latest installable revision is not missing any tool test components</li>'
message += '<li>the latest installable revision has installation errors (the repository itself, '
message += 'repository dependencies or tool dependencies)</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.my_writable_repositories_with_test_install_errors_grid( trans, **kwd )
@web.expose
def browse_my_writable_repositories_with_skip_tool_test_checked( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>you are authorized to update them</li>'
message += '<li>the latest installable revision has <b>Skip automated testing of tools in this '
message += 'revision</b> checked if the repository type is <b>Unrestricted</b> or <b>Skip '
message += 'automated testing of this tool dependency recipe</b> checked if the repository '
message += 'type is <b>Tool dependency definition</b></li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.my_writable_repositories_with_skip_tests_checked_grid( trans, **kwd )
@web.expose
def browse_repositories( self, trans, **kwd ):
# We add params to the keyword dict in this method in order to rename the param with an "f-" prefix,
# simulating filtering by clicking a search link. We have to take this approach because the "-"
# character is illegal in HTTP requests.
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "edit_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='edit_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation == "reviewed_repositories_i_own":
return trans.response.send_redirect( web.url_for( controller='repository_review',
action='reviewed_repositories_i_own' ) )
elif operation == "repositories_by_category":
category_id = kwd.get( 'id', None )
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_in_category',
id=category_id,
message=message,
status=status ) )
elif operation == "receive email alerts":
if trans.user:
if kwd[ 'id' ]:
kwd[ 'caller' ] = 'browse_repositories'
return trans.response.send_redirect( web.url_for( controller='repository',
action='set_email_alerts',
**kwd ) )
else:
kwd[ 'message' ] = 'You must be logged in to set email alerts.'
kwd[ 'status' ] = 'error'
del kwd[ 'operation' ]
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
title = trans.app.repository_grid_filter_manager.get_grid_title( trans,
trailing_string='',
default='Repositories' )
self.repository_grid.title = title
return self.repository_grid( trans, **kwd )
@web.expose
def browse_repositories_by_user( self, trans, **kwd ):
"""Display the list of repositories owned by a specified user."""
# Eliminate the current search filters if any exist.
        for k in list( kwd.keys() ):
            if k.startswith( 'f-' ):
                del kwd[ k ]
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
user_id = kwd.get( 'user_id', None )
if user_id is None:
# The received id is the repository id, so we need to get the id of the user that owns the repository.
repository_id = kwd.get( 'id', None )
if repository_id:
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
user_id = trans.security.encode_id( repository.user.id )
kwd[ 'user_id' ] = user_id
else:
# The user selected a repository revision which results in a refresh_on_change.
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
if user_id:
user = suc.get_user( trans.app, user_id )
trailing_string = 'Owned by %s' % str( user.username )
default = 'Repositories Owned by %s' % str( user.username )
else:
trailing_string = ''
            default = 'Repositories'
title = trans.app.repository_grid_filter_manager.get_grid_title( trans,
trailing_string=trailing_string,
default=default )
self.repositories_by_user_grid.title = title
return self.repositories_by_user_grid( trans, **kwd )
@web.expose
def browse_repositories_i_can_administer( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
return self.repositories_i_can_administer_grid( trans, **kwd )
@web.expose
def browse_repositories_i_own( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
return self.repositories_i_own_grid( trans, **kwd )
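The browse_* methods above repeat an almost identical preamble that maps the `operation` request parameter to a redirect target. A hedged sketch of how that shared dispatch could be factored into one helper (the function name and (controller, action, params) return convention are hypothetical, not part of the Tool Shed code):

```python
def dispatch_operation(operation, kwd):
    """Map a grid 'operation' parameter to a (controller, action, params)
    redirect target, or return None to fall through to grid rendering."""
    operation = operation.lower()
    if operation == 'view_or_manage_repository':
        return ('repository', 'view_or_manage_repository', dict(kwd))
    if operation == 'repositories_by_user':
        return ('repository', 'browse_repositories_by_user', dict(kwd))
    if operation in ('mark as deprecated', 'mark as not deprecated'):
        # The deprecate action receives a boolean flag instead of the
        # raw operation string.
        params = dict(kwd)
        params['mark_deprecated'] = operation == 'mark as deprecated'
        return ('repository', 'deprecate', params)
    return None
```

Each controller method could then call the helper and issue `trans.response.send_redirect(web.url_for(controller=c, action=a, **p))` when it returns a target, keeping only the grid-specific message text inline.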
@web.expose
def browse_repositories_in_category( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
if operation == 'repositories_by_user':
user_id = kwd.get( 'user_id', None )
if user_id is None:
# The received id is the repository id, so we need to get the id of the user that owns the repository.
repository_id = kwd.get( 'id', None )
if repository_id:
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
user_id = trans.security.encode_id( repository.user.id )
user = suc.get_user( trans.app, user_id )
self.repositories_by_user_grid.title = "Repositories owned by %s" % user.username
kwd[ 'user_id' ] = user_id
return self.repositories_by_user_grid( trans, **kwd )
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
# The user selected a repository revision which results in a refresh_on_change.
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
category_id = kwd.get( 'id', None )
if category_id:
category = suc.get_category( trans.app, category_id )
if category:
trailing_string = 'in Category %s' % str( category.name )
else:
trailing_string = 'in Category'
else:
trailing_string = 'in Category'
title = trans.app.repository_grid_filter_manager.get_grid_title( trans,
trailing_string=trailing_string,
default='Repositories' )
self.repositories_in_category_grid.title = title
return self.repositories_in_category_grid( trans, **kwd )
@web.expose
def browse_repositories_missing_tool_test_components( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>the latest installable revision contains at least 1 tool with no defined tests <b>OR</b>:</li>'
message += '<li>the latest installable revision contains at least 1 tool with a test that requires a missing test data file</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.repositories_missing_tool_test_components_grid( trans, **kwd )
@web.expose
def browse_repositories_with_failing_tool_tests( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>the latest installable revision contains at least 1 tool</li>'
message += '<li>the latest installable revision is not missing any tool test components</li>'
message += '<li>the latest installable revision has at least 1 tool test that fails</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.repositories_with_failing_tool_tests_grid( trans, **kwd )
@web.expose
def browse_repositories_with_install_errors( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>the latest installable revision is not missing any tool test components</li>'
message += '<li>the latest installable revision has installation errors (the repository itself, '
message += 'repository dependencies or tool dependencies)</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.repositories_with_test_install_errors_grid( trans, **kwd )
@web.expose
def browse_repositories_with_invalid_tools( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>the latest metadata revision contains at least 1 invalid tool</li>'
message += '</ul>'
message += 'Click the tool config file name to see why the tool is invalid.'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.repositories_with_invalid_tools_grid( trans, **kwd )
@web.expose
def browse_repositories_with_no_failing_tool_tests( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>the latest installable revision contains at least 1 tool</li>'
message += '<li>the latest installable revision is not missing any tool test components</li>'
message += '<li>the latest installable revision has no tool tests that fail</li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.repositories_with_no_failing_tool_tests_grid( trans, **kwd )
@web.expose
def browse_repositories_with_skip_tool_test_checked( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
elif operation == "repositories_by_user":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories_by_user',
**kwd ) )
elif operation in [ 'mark as deprecated', 'mark as not deprecated' ]:
kwd[ 'mark_deprecated' ] = operation == 'mark as deprecated'
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
if 'message' not in kwd:
message = 'This list contains repositories that match the following criteria:<br>'
message += '<ul>'
message += '<li>the latest installable revision has <b>Skip automated testing of tools in this '
message += 'revision</b> checked if the repository type is <b>Unrestricted</b> or <b>Skip '
message += 'automated testing of this tool dependency recipe</b> checked if the repository '
message += 'type is <b>Tool dependency definition</b></li>'
message += '</ul>'
kwd[ 'message' ] = message
kwd[ 'status' ] = 'warning'
return self.repositories_with_skip_tests_checked_grid( trans, **kwd )
@web.expose
def browse_repository( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
commit_message = kwd.get( 'commit_message', 'Deleted selected files' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
# Update repository files for browsing.
hg_util.update_repository( repo )
changeset_revision = repository.tip( trans.app )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
id,
changeset_revision,
metadata_only=True )
repository_type_select_field = rt_util.build_repository_type_select_field( trans, repository=repository )
return trans.fill_template( '/webapps/tool_shed/repository/browse_repository.mako',
repository=repository,
changeset_revision=changeset_revision,
metadata=metadata,
commit_message=commit_message,
repository_type_select_field=repository_type_select_field,
message=message,
status=status )
@web.expose
def browse_repository_dependencies( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd[ 'id' ]
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
changeset_revision=changeset_revision )
if operation == "browse_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repository',
**new_kwd ) )
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**new_kwd ) )
return self.repository_dependencies_grid( trans, **kwd )
@web.expose
def browse_tools( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
            repository_metadata_id = kwd[ 'id' ]
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
changeset_revision=changeset_revision )
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**new_kwd ) )
return self.tools_grid( trans, **kwd )
@web.expose
def browse_tool_dependencies( self, trans, **kwd ):
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd[ 'id' ]
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
changeset_revision=changeset_revision )
if operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**new_kwd ) )
return self.tool_dependencies_grid( trans, **kwd )
@web.expose
def browse_valid_categories( self, trans, **kwd ):
"""Filter repositories per category by those that are valid for installing into Galaxy."""
# The request came from Galaxy, so restrict category links to display only valid repository changeset revisions.
galaxy_url = common_util.handle_galaxy_url( trans, **kwd )
if galaxy_url:
kwd[ 'galaxy_url' ] = galaxy_url
if 'f-free-text-search' in kwd:
if kwd[ 'f-free-text-search' ] == 'All':
# The user performed a search, then clicked the "x" to eliminate the search criteria.
new_kwd = {}
return self.valid_category_grid( trans, **new_kwd )
# Since we are searching valid repositories and not categories, redirect to browse_valid_repositories().
if 'id' in kwd and 'f-free-text-search' in kwd and kwd[ 'id' ] == kwd[ 'f-free-text-search' ]:
# The value of 'id' has been set to the search string, which is a repository name.
# We'll try to get the desired encoded repository id to pass on.
try:
name = kwd[ 'id' ]
repository = suc.get_repository_by_name( trans.app, name )
kwd[ 'id' ] = trans.security.encode_id( repository.id )
                except Exception:
pass
return self.browse_valid_repositories( trans, **kwd )
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation in [ "valid_repositories_by_category", "valid_repositories_by_user" ]:
# Eliminate the current filters if any exist.
for k, v in kwd.items():
if k.startswith( 'f-' ):
del kwd[ k ]
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_valid_repositories',
**kwd ) )
title = trans.app.repository_grid_filter_manager.get_grid_title( trans,
trailing_string='by Category',
default='Categories of Valid Repositories' )
self.valid_category_grid.title = title
return self.valid_category_grid( trans, **kwd )
@web.expose
def browse_valid_repositories( self, trans, **kwd ):
"""Filter repositories to those that are installable into Galaxy."""
galaxy_url = common_util.handle_galaxy_url( trans, **kwd )
if galaxy_url:
kwd[ 'galaxy_url' ] = galaxy_url
repository_id = kwd.get( 'id', None )
if 'f-free-text-search' in kwd:
if 'f-Category.name' in kwd:
# The user browsed to a category and then entered a search string, so get the category associated with its value.
category_name = kwd[ 'f-Category.name' ]
category = suc.get_category_by_name( trans.app, category_name )
# Set the id value in kwd since it is required by the ValidRepositoryGrid.build_initial_query method.
kwd[ 'id' ] = trans.security.encode_id( category.id )
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "preview_tools_in_changeset":
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repository_metadata = metadata_util.get_latest_repository_metadata( trans.app, repository.id, downloadable=True )
latest_installable_changeset_revision = repository_metadata.changeset_revision
return trans.response.send_redirect( web.url_for( controller='repository',
action='preview_tools_in_changeset',
repository_id=repository_id,
changeset_revision=latest_installable_changeset_revision ) )
elif operation == "valid_repositories_by_category":
# Eliminate the current filters if any exist.
for k, v in kwd.items():
if k.startswith( 'f-' ):
del kwd[ k ]
category_id = kwd.get( 'id', None )
category = suc.get_category( trans.app, category_id )
kwd[ 'f-Category.name' ] = category.name
selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='preview_tools_in_changeset',
repository_id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
url_args = dict( action='browse_valid_repositories',
operation='preview_tools_in_changeset',
repository_id=repository_id )
self.valid_repository_grid.operations = [ grids.GridOperation( "Preview and install",
url_args=url_args,
allow_multiple=False,
async_compatible=False ) ]
title = trans.app.repository_grid_filter_manager.get_grid_title( trans,
trailing_string='',
default='Valid Repositories' )
self.valid_repository_grid.title = title
return self.valid_repository_grid( trans, **kwd )
@web.expose
def check_for_updates( self, trans, **kwd ):
"""Handle a request from a local Galaxy instance."""
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
# If the request originated with the UpdateRepositoryManager, it will not include a galaxy_url.
galaxy_url = common_util.handle_galaxy_url( trans, **kwd )
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
# Default to the current changeset revision.
update_to_ctx = hg_util.get_changectx_for_changeset( repo, changeset_revision )
latest_changeset_revision = changeset_revision
from_update_manager = kwd.get( 'from_update_manager', False )
if from_update_manager:
update = 'true'
no_update = 'false'
elif galaxy_url:
# Start building up the url to redirect back to the calling Galaxy instance.
params = '?tool_shed_url=%s&name=%s&owner=%s&changeset_revision=%s&latest_changeset_revision=' % \
( web.url_for( '/', qualified=True ),
str( repository.name ),
str( repository.user.username ),
changeset_revision )
url = common_util.url_join( galaxy_url,
'admin_toolshed/update_to_changeset_revision%s' % params )
else:
message = 'Unable to check for updates due to an invalid Galaxy URL: <b>%s</b>. ' % galaxy_url
message += 'You may need to enable third-party cookies in your browser. '
return trans.show_error_message( message )
if changeset_revision == repository.tip( trans.app ):
# If changeset_revision is the repository tip, there are no additional updates.
if from_update_manager:
return no_update
# Return the same value for changeset_revision and latest_changeset_revision.
url += latest_changeset_revision
else:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
changeset_revision )
if repository_metadata:
# If changeset_revision is in the repository_metadata table for this repository, there are no
# additional updates.
if from_update_manager:
return no_update
else:
# Return the same value for changeset_revision and latest_changeset_revision.
url += latest_changeset_revision
else:
# The changeset_revision column in the repository_metadata table has been updated with a new
# changeset_revision value since the repository was installed. We need to find the changeset_revision
# to which we need to update.
update_to_changeset_hash = None
for changeset in repo.changelog:
changeset_hash = str( repo.changectx( changeset ) )
ctx = hg_util.get_changectx_for_changeset( repo, changeset_hash )
if update_to_changeset_hash:
if changeset_hash == repository.tip( trans.app ):
update_to_ctx = hg_util.get_changectx_for_changeset( repo, changeset_hash )
latest_changeset_revision = changeset_hash
break
else:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
changeset_hash )
if repository_metadata:
# We found a RepositoryMetadata record.
update_to_ctx = hg_util.get_changectx_for_changeset( repo, changeset_hash )
latest_changeset_revision = changeset_hash
break
else:
update_to_changeset_hash = changeset_hash
else:
if changeset_hash == changeset_revision:
# We've found the changeset in the changelog for which we need to get the next update.
update_to_changeset_hash = changeset_hash
if from_update_manager:
if latest_changeset_revision == changeset_revision:
return no_update
return update
url += str( latest_changeset_revision )
url += '&latest_ctx_rev=%s' % str( update_to_ctx.rev() )
        return trans.response.send_redirect( url )
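    # The changelog walk above finds the next downloadable revision after an
    # installed one. A minimal sketch of that search, using plain data
    # structures instead of the Mercurial repo API (the function name and the
    # inputs here are hypothetical stand-ins, not part of the tool shed code):

    ```python
    def next_metadata_revision( changelog, metadata_hashes, installed_hash ):
        """Return the first changeset hash after installed_hash that has a
        RepositoryMetadata record, falling back to the tip (the last hash
        seen) when no later record exists."""
        found_installed = False
        candidate = None
        for changeset_hash in changelog:
            if found_installed:
                if changeset_hash in metadata_hashes:
                    # A metadata record exists for this revision; update to it.
                    return changeset_hash
                # Remember the newest hash seen so far as a tip fallback.
                candidate = changeset_hash
            elif changeset_hash == installed_hash:
                # Found the installed revision; start looking for the update.
                found_installed = True
        return candidate if candidate else installed_hash
    ```

    # For example, with changelog [ 'a', 'b', 'c', 'd' ] and metadata records
    # for { 'a', 'c' }, a repository installed at 'a' updates to 'c', while one
    # installed at 'c' updates to the tip 'd'.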
@web.expose
def contact_owner( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
id,
repository.tip( trans.app ),
metadata_only=True )
if trans.user and trans.user.email:
return trans.fill_template( "/webapps/tool_shed/repository/contact_owner.mako",
repository=repository,
metadata=metadata,
message=message,
status=status )
else:
# Do all we can to eliminate spam.
return trans.show_error_message( "You must be logged in to contact the owner of a repository." )
@web.expose
def create_galaxy_docker_image( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository_ids = util.listify( kwd.get( 'id', '' ) )
if 'operation' in kwd:
if repository_ids:
operation = kwd[ 'operation' ].lower()
if operation == "include in docker image":
repository_tups = []
for repository_id in repository_ids:
repository = suc.get_repository_by_id( trans.app, repository_id )
repository_tups.append( ( str( repository.name ),
str( repository.user.username ),
str( repository.type ) ) )
return trans.fill_template( "/webapps/tool_shed/repository/docker_image_repositories.mako",
id=','.join( repository_ids ),
repository_tups=repository_tups,
message=message,
status=status )
else:
# This can only occur when there is a multi-select grid with check boxes and an operation,
# and the user clicked the operation button without checking any of the check boxes.
kwd[ 'message' ] = "No items were selected."
kwd[ 'status' ] = 'error'
elif kwd.get( 'create_docker_image_button', False ):
tmp_image_dir = tempfile.mkdtemp( prefix="tmp-toolshed-cdidir" )
docker_file_name = 'Dockerfile'
docker_file_path = os.path.join( tmp_image_dir, docker_file_name )
            tool_shed_url = web.url_for( '/', qualified=True )
repository_string = ''
for repository_id in repository_ids:
repository = suc.get_repository_by_id( trans.app, repository_id )
template = basic_util.SELECTED_REPOSITORIES_TEMPLATE
repository_template = \
string.Template( template ).safe_substitute( tool_shed_url=tool_shed_url,
                                                                 repository_owner=str( repository.user.username ),
repository_name=str( repository.name ) )
repository_string = '%s\n%s' % ( repository_string, repository_template )
template = basic_util.DOCKER_IMAGE_TEMPLATE
docker_image_template = \
string.Template( template ).safe_substitute( selected_repositories=repository_string )
docker_image_string = docker_image_template
            trans.response.set_content_type( 'text/plain' )
trans.response.headers[ "Content-Disposition" ] = 'attachment; filename="%s"' % docker_file_name
opened_file = open( docker_file_path, "w" )
opened_file.write( docker_image_string )
opened_file.close()
opened_file = open( docker_file_path, "r" )
# Make sure the file is removed from disk after the contents have been downloaded.
os.unlink( docker_file_path )
docker_file_path, docker_file_name = os.path.split( docker_file_path )
basic_util.remove_dir( docker_file_path )
return opened_file
return self.docker_image_grid( trans, **kwd )
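    # The Dockerfile assembly above composes two templates with the stdlib
    # string.Template. A self-contained sketch of the same pattern; the
    # template strings below are illustrative stand-ins for
    # basic_util.SELECTED_REPOSITORIES_TEMPLATE and
    # basic_util.DOCKER_IMAGE_TEMPLATE, whose real contents are not shown here:

    ```python
    import string

    # Hypothetical templates mirroring the placeholders used above.
    REPO_TEMPLATE = ( 'RUN install_repository.sh --url ${tool_shed_url} '
                      '--owner ${repository_owner} --name ${repository_name}' )
    IMAGE_TEMPLATE = 'FROM galaxy/base\n${selected_repositories}\n'

    def build_docker_image_string( tool_shed_url, repository_tups ):
        """Render one RUN line per ( name, owner ) tuple, then substitute the
        accumulated lines into the image template."""
        repository_string = ''
        for name, owner in repository_tups:
            line = string.Template( REPO_TEMPLATE ).safe_substitute(
                tool_shed_url=tool_shed_url,
                repository_owner=owner,
                repository_name=name )
            repository_string = '%s\n%s' % ( repository_string, line )
        return string.Template( IMAGE_TEMPLATE ).safe_substitute(
            selected_repositories=repository_string )
    ```

    # safe_substitute (rather than substitute) is used so that any unmatched
    # ${placeholder} in a template is left intact instead of raising KeyError.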
@web.expose
def create_repository( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
categories = suc.get_categories( trans )
if not categories:
message = 'No categories have been configured in this instance of the Galaxy Tool Shed. '
message += 'An administrator needs to create some via the Administrator control panel before creating repositories.'
status = 'error'
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
message=message,
status=status ) )
name = kwd.get( 'name', '' ).strip()
description = kwd.get( 'description', '' )
long_description = kwd.get( 'long_description', '' )
category_ids = util.listify( kwd.get( 'category_id', '' ) )
selected_categories = [ trans.security.decode_id( id ) for id in category_ids ]
repository_type = kwd.get( 'repository_type', rt_util.UNRESTRICTED )
if kwd.get( 'create_repository_button', False ):
error = False
message = repository_util.validate_repository_name( trans.app, name, trans.user )
if message:
error = True
if not description:
message = 'Enter a description.'
error = True
if error:
status = 'error'
else:
repository, message = repository_util.create_repository( trans.app,
name,
repository_type,
description,
long_description,
user_id=trans.user.id,
category_ids=category_ids )
trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
message=message,
id=trans.security.encode_id( repository.id ) ) )
repository_type_select_field = rt_util.build_repository_type_select_field( trans )
return trans.fill_template( '/webapps/tool_shed/repository/create_repository.mako',
name=name,
description=description,
long_description=long_description,
selected_categories=selected_categories,
categories=categories,
repository_type_select_field=repository_type_select_field,
message=message,
status=status )
@web.expose
@web.require_login( "deprecate repository" )
def deprecate( self, trans, **kwd ):
"""Mark a repository in the tool shed as deprecated or not deprecated."""
# Marking a repository in the tool shed as deprecated has no effect on any downloadable changeset
        # revisions that may be associated with the repository. Revisions are not marked as not downloadable
# because those that have installed the repository must be allowed to get updates.
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository_id = kwd.get( 'id', None )
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
mark_deprecated = util.string_as_bool( kwd.get( 'mark_deprecated', False ) )
repository.deprecated = mark_deprecated
trans.sa_session.add( repository )
trans.sa_session.flush()
if mark_deprecated:
# Update the repository registry.
trans.app.repository_registry.remove_entry( repository )
message = 'The repository <b>%s</b> has been marked as deprecated.' % repository.name
else:
# Update the repository registry.
trans.app.repository_registry.add_entry( repository )
message = 'The repository <b>%s</b> has been marked as not deprecated.' % repository.name
trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='repositories_i_own',
message=message,
status=status ) )
@web.expose
def display_image_in_repository( self, trans, **kwd ):
"""
        Open an image file that is contained in a repository or that is referenced by a URL for display. The image can be defined
        in either a README.rst file contained in the repository or in the help section of a Galaxy tool config contained in the
        repository. The following image definitions are all supported. The former $PATH_TO_IMAGES is no longer required and is now ignored.
.. image:: https://raw.github.com/galaxy/some_image.png
.. image:: $PATH_TO_IMAGES/some_image.png
.. image:: /static/images/some_image.gif
.. image:: some_image.jpg
.. image:: /deep/some_image.png
"""
repository_id = kwd.get( 'repository_id', None )
relative_path_to_image_file = kwd.get( 'image_file', None )
if repository_id and relative_path_to_image_file:
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository:
repo_files_dir = repository.repo_path( trans.app )
path_to_file = suc.get_absolute_path_to_file_in_repository( repo_files_dir, relative_path_to_image_file )
if os.path.exists( path_to_file ):
file_name = os.path.basename( relative_path_to_image_file )
try:
extension = file_name.split( '.' )[ -1 ]
                    except Exception:
extension = None
if extension:
mimetype = trans.app.datatypes_registry.get_mimetype_by_extension( extension )
if mimetype:
trans.response.set_content_type( mimetype )
return open( path_to_file, 'r' )
return None
@web.expose
def display_tool( self, trans, repository_id, tool_config, changeset_revision, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
render_repository_actions_for = kwd.get( 'render_repository_actions_for', 'tool_shed' )
tv = tool_validator.ToolValidator( trans.app )
repository, tool, message = tv.load_tool_from_changeset_revision( repository_id,
changeset_revision,
tool_config )
if message:
status = 'error'
tool_state = tool_util.new_state( trans, tool, invalid=False )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
repository_id,
changeset_revision,
metadata_only=True )
try:
return trans.fill_template( "/webapps/tool_shed/repository/tool_form.mako",
repository=repository,
render_repository_actions_for=render_repository_actions_for,
metadata=metadata,
changeset_revision=changeset_revision,
tool=tool,
tool_state=tool_state,
message=message,
status=status )
        except Exception as e:
message = "Error displaying tool, probably due to a problem in the tool config. The exception is: %s." % str( e )
if trans.webapp.name == 'galaxy' or render_repository_actions_for == 'galaxy':
return trans.response.send_redirect( web.url_for( controller='repository',
action='preview_tools_in_changeset',
repository_id=repository_id,
changeset_revision=changeset_revision,
message=message,
status='error' ) )
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=repository_id,
changeset_revision=changeset_revision,
message=message,
status='error' ) )
@web.expose
def download( self, trans, repository_id, changeset_revision, file_type, **kwd ):
"""Download an archive of the repository files compressed as zip, gz or bz2."""
        # FIXME: this will currently only download the repository tip, no matter which installable changeset_revision is being viewed.
# This should be enhanced to use the export method below, which accounts for the currently viewed changeset_revision.
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
# Allow hgweb to handle the download. This requires the tool shed
# server account's .hgrc file to include the following setting:
# [web]
# allow_archive = bz2, gz, zip
file_type_str = basic_util.get_file_type_str( changeset_revision, file_type )
repository.times_downloaded += 1
trans.sa_session.add( repository )
trans.sa_session.flush()
download_url = common_util.url_join( '/',
'repos',
str( repository.user.username ),
str( repository.name ),
'archive',
file_type_str )
return trans.response.send_redirect( download_url )
@web.expose
def export( self, trans, repository_id, changeset_revision, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
export_repository_dependencies = kwd.get( 'export_repository_dependencies', '' )
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if kwd.get( 'export_repository_button', False ):
# We'll currently support only gzip-compressed tar archives.
file_type = 'gz'
export_repository_dependencies = CheckboxField.is_checked( export_repository_dependencies )
tool_shed_url = web.url_for( '/', qualified=True )
erm = capsule_manager.ExportRepositoryManager( app=trans.app,
user=trans.user,
tool_shed_url=tool_shed_url,
repository=repository,
changeset_revision=changeset_revision,
export_repository_dependencies=export_repository_dependencies,
using_api=False )
repositories_archive, error_message = erm.export_repository()
repositories_archive_filename = os.path.basename( repositories_archive.name )
if error_message:
message = error_message
status = 'error'
else:
trans.response.set_content_type( 'application/x-gzip' )
trans.response.headers[ "Content-Disposition" ] = 'attachment; filename="%s"' % ( repositories_archive_filename )
opened_archive = open( repositories_archive.name )
# Make sure the file is removed from disk after the contents have been downloaded.
os.unlink( repositories_archive.name )
repositories_archive_path, file_name = os.path.split( repositories_archive.name )
basic_util.remove_dir( repositories_archive_path )
return opened_archive
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
metadata = repository_metadata.metadata
toolshed_base_url = str( web.url_for( '/', qualified=True ) ).rstrip( '/' )
# Initialize the repository dependency RelationBuilder.
rb = relation_builder.RelationBuilder( trans.app, repository, repository_metadata, toolshed_base_url )
# Work-around to ensure repositories that contain packages needed only for compiling
# a dependent package are included in the capsule.
rb.set_filter_dependencies_needed_for_compiling( False )
# Get a dictionary of all repositories upon which the contents of the current repository_metadata record depend.
repository_dependencies = rb.get_repository_dependencies_for_changeset_revision()
if repository_dependencies:
# Only display repository dependencies if they exist.
exclude = [ 'datatypes', 'invalid_repository_dependencies', 'invalid_tool_dependencies', 'invalid_tools',
'readme_files', 'tool_dependencies', 'tools', 'tool_test_results', 'workflows', 'data_manager' ]
tsucm = ToolShedUtilityContainerManager( trans.app )
containers_dict = tsucm.build_repository_containers( repository,
changeset_revision,
repository_dependencies,
repository_metadata,
exclude=exclude )
export_repository_dependencies_check_box = CheckboxField( 'export_repository_dependencies', checked=True )
else:
containers_dict = None
export_repository_dependencies_check_box = None
revision_label = hg_util.get_revision_label( trans.app, repository, changeset_revision, include_date=True )
return trans.fill_template( "/webapps/tool_shed/repository/export_repository.mako",
changeset_revision=changeset_revision,
containers_dict=containers_dict,
export_repository_dependencies_check_box=export_repository_dependencies_check_box,
repository=repository,
repository_metadata=repository_metadata,
revision_label=revision_label,
metadata=metadata,
message=message,
status=status )
@web.expose
def export_via_api( self, trans, **kwd ):
"""Return an exported gzip compressed repository archive file opened for reading."""
encoded_repositories_archive_name = kwd.get( 'encoded_repositories_archive_name', None )
if encoded_repositories_archive_name:
repositories_archive_name = encoding_util.tool_shed_decode( encoded_repositories_archive_name )
opened_archive = open( repositories_archive_name )
# Make sure the file is removed from disk after the contents have been downloaded.
os.unlink( repositories_archive_name )
return opened_archive
return ''
@web.expose
def find_tools( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
galaxy_url = common_util.handle_galaxy_url( trans, **kwd )
if 'operation' in kwd:
item_id = kwd.get( 'id', '' )
if item_id:
operation = kwd[ 'operation' ].lower()
is_admin = trans.user_is_admin()
if operation == "view_or_manage_repository":
# The received id is a RepositoryMetadata id, so we have to get the repository id.
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, item_id )
repository_id = trans.security.encode_id( repository_metadata.repository.id )
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
kwd[ 'id' ] = repository_id
kwd[ 'changeset_revision' ] = repository_metadata.changeset_revision
if trans.webapp.name == 'tool_shed' and ( is_admin or repository.user == trans.user ):
a = 'manage_repository'
else:
a = 'view_repository'
return trans.response.send_redirect( web.url_for( controller='repository',
action=a,
**kwd ) )
if operation == "install to galaxy":
# We've received a list of RepositoryMetadata ids, so we need to build a list of associated Repository ids.
encoded_repository_ids = []
changeset_revisions = []
for repository_metadata_id in util.listify( item_id ):
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
encoded_repository_ids.append( trans.security.encode_id( repository_metadata.repository.id ) )
changeset_revisions.append( repository_metadata.changeset_revision )
new_kwd = {}
new_kwd[ 'repository_ids' ] = encoded_repository_ids
new_kwd[ 'changeset_revisions' ] = changeset_revisions
return trans.response.send_redirect( web.url_for( controller='repository',
action='install_repositories_by_revision',
**new_kwd ) )
else:
# This can only occur when there is a multi-select grid with check boxes and an operation,
# and the user clicked the operation button without checking any of the check boxes.
return trans.show_error_message( "No items were selected." )
tool_ids = [ item.lower() for item in util.listify( kwd.get( 'tool_id', '' ) ) ]
tool_names = [ item.lower() for item in util.listify( kwd.get( 'tool_name', '' ) ) ]
tool_versions = [ item.lower() for item in util.listify( kwd.get( 'tool_version', '' ) ) ]
exact_matches = kwd.get( 'exact_matches', '' )
exact_matches_checked = CheckboxField.is_checked( exact_matches )
match_tuples = []
ok = True
if tool_ids or tool_names or tool_versions:
ok, match_tuples = search_util.search_repository_metadata( trans.app,
exact_matches_checked,
tool_ids=tool_ids,
tool_names=tool_names,
tool_versions=tool_versions )
if ok:
kwd[ 'match_tuples' ] = match_tuples
# Render the list view
if trans.webapp.name == 'galaxy':
# Our initial request originated from a Galaxy instance.
global_actions = [ grids.GridAction( "Browse valid repositories",
dict( controller='repository', action='browse_valid_categories' ) ),
grids.GridAction( "Search for valid tools",
dict( controller='repository', action='find_tools' ) ),
grids.GridAction( "Search for workflows",
dict( controller='repository', action='find_workflows' ) ) ]
self.install_matched_repository_grid.global_actions = global_actions
install_url_args = dict( controller='repository', action='find_tools' )
operations = [ grids.GridOperation( "Install", url_args=install_url_args, allow_multiple=True, async_compatible=False ) ]
self.install_matched_repository_grid.operations = operations
return self.install_matched_repository_grid( trans, **kwd )
else:
kwd[ 'message' ] = "tool id: <b>%s</b><br/>tool name: <b>%s</b><br/>tool version: <b>%s</b><br/>exact matches only: <b>%s</b>" % \
( basic_util.stringify( tool_ids ),
basic_util.stringify( tool_names ),
basic_util.stringify( tool_versions ),
str( exact_matches_checked ) )
self.matched_repository_grid.title = "Repositories with matching tools"
return self.matched_repository_grid( trans, **kwd )
else:
message = "No search performed - each field must contain the same number of comma-separated items."
status = "error"
exact_matches_check_box = CheckboxField( 'exact_matches', checked=exact_matches_checked )
return trans.fill_template( '/webapps/tool_shed/repository/find_tools.mako',
tool_id=basic_util.stringify( tool_ids ),
tool_name=basic_util.stringify( tool_names ),
tool_version=basic_util.stringify( tool_versions ),
exact_matches_check_box=exact_matches_check_box,
message=message,
status=status )
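    # The search above only runs when the comma-separated tool_id, tool_name
    # and tool_version fields line up (see the "same number of comma-separated
    # items" error message). A hedged sketch of that normalisation; listify
    # here mimics Galaxy's util.listify helper rather than reproducing it:

    ```python
    def listify( value ):
        """Split a comma-separated string into a list of stripped items."""
        if not value:
            return []
        return [ item.strip() for item in value.split( ',' ) if item.strip() ]

    def normalise_search_fields( tool_id='', tool_name='', tool_version='' ):
        """Lower-case each field's items and check that all non-empty fields
        contain the same number of comma-separated items."""
        fields = [ [ item.lower() for item in listify( v ) ]
                   for v in ( tool_id, tool_name, tool_version ) ]
        # Empty fields are ignored; the populated ones must agree in length.
        lengths = set( len( f ) for f in fields if f )
        ok = len( lengths ) <= 1
        return ok, fields
    ```

    # E.g. tool_id='Cut1,Trim' with tool_name='Cut,Trim' is a valid search,
    # while tool_id='Cut1,Trim' with tool_name='Cut' is rejected.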
@web.expose
def find_workflows( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
galaxy_url = common_util.handle_galaxy_url( trans, **kwd )
if 'operation' in kwd:
item_id = kwd.get( 'id', '' )
if item_id:
operation = kwd[ 'operation' ].lower()
is_admin = trans.user_is_admin()
if operation == "view_or_manage_repository":
# The received id is a RepositoryMetadata id, so we have to get the repository id.
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, item_id )
repository_id = trans.security.encode_id( repository_metadata.repository.id )
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
kwd[ 'id' ] = repository_id
kwd[ 'changeset_revision' ] = repository_metadata.changeset_revision
if trans.webapp.name == 'tool_shed' and ( is_admin or repository.user == trans.user ):
a = 'manage_repository'
else:
a = 'view_repository'
return trans.response.send_redirect( web.url_for( controller='repository',
action=a,
**kwd ) )
if operation == "install to galaxy":
# We've received a list of RepositoryMetadata ids, so we need to build a list of associated Repository ids.
encoded_repository_ids = []
changeset_revisions = []
for repository_metadata_id in util.listify( item_id ):
                        repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
encoded_repository_ids.append( trans.security.encode_id( repository_metadata.repository.id ) )
changeset_revisions.append( repository_metadata.changeset_revision )
new_kwd = {}
new_kwd[ 'repository_ids' ] = encoded_repository_ids
new_kwd[ 'changeset_revisions' ] = changeset_revisions
return trans.response.send_redirect( web.url_for( controller='repository',
action='install_repositories_by_revision',
**new_kwd ) )
else:
# This can only occur when there is a multi-select grid with check boxes and an operation,
# and the user clicked the operation button without checking any of the check boxes.
return trans.show_error_message( "No items were selected." )
if 'find_workflows_button' in kwd:
workflow_names = [ item.lower() for item in util.listify( kwd.get( 'workflow_name', '' ) ) ]
exact_matches = kwd.get( 'exact_matches', '' )
exact_matches_checked = CheckboxField.is_checked( exact_matches )
match_tuples = []
ok = True
if workflow_names:
ok, match_tuples = search_util.search_repository_metadata( trans.app,
exact_matches_checked,
workflow_names=workflow_names )
else:
ok, match_tuples = search_util.search_repository_metadata( trans.app,
exact_matches_checked,
workflow_names=[],
all_workflows=True )
if ok:
kwd[ 'match_tuples' ] = match_tuples
if trans.webapp.name == 'galaxy':
# Our initial request originated from a Galaxy instance.
global_actions = [ grids.GridAction( "Browse valid repositories",
dict( controller='repository', action='browse_valid_repositories' ) ),
grids.GridAction( "Search for valid tools",
dict( controller='repository', action='find_tools' ) ),
grids.GridAction( "Search for workflows",
dict( controller='repository', action='find_workflows' ) ) ]
self.install_matched_repository_grid.global_actions = global_actions
install_url_args = dict( controller='repository', action='find_workflows' )
operations = [ grids.GridOperation( "Install", url_args=install_url_args, allow_multiple=True, async_compatible=False ) ]
self.install_matched_repository_grid.operations = operations
return self.install_matched_repository_grid( trans, **kwd )
else:
kwd[ 'message' ] = "workflow name: <b>%s</b><br/>exact matches only: <b>%s</b>" % \
( basic_util.stringify( workflow_names ), str( exact_matches_checked ) )
self.matched_repository_grid.title = "Repositories with matching workflows"
return self.matched_repository_grid( trans, **kwd )
else:
message = "No search performed - each field must contain the same number of comma-separated items."
status = "error"
else:
exact_matches_checked = False
workflow_names = []
exact_matches_check_box = CheckboxField( 'exact_matches', checked=exact_matches_checked )
return trans.fill_template( '/webapps/tool_shed/repository/find_workflows.mako',
workflow_name=basic_util.stringify( workflow_names ),
exact_matches_check_box=exact_matches_check_box,
message=message,
status=status )
@web.expose
def generate_workflow_image( self, trans, workflow_name, repository_metadata_id=None ):
        """Return an SVG image representation of a workflow dictionary created when the workflow was exported."""
return workflow_util.generate_workflow_image( trans, workflow_name, repository_metadata_id=repository_metadata_id, repository_id=None )
@web.expose
def get_changeset_revision_and_ctx_rev( self, trans, **kwd ):
"""Handle a request from a local Galaxy instance to retrieve the changeset revision hash to which an installed repository can be updated."""
def has_galaxy_utilities( repository_metadata ):
has_galaxy_utilities_dict = dict( includes_data_managers=False,
includes_datatypes=False,
includes_tools=False,
includes_tools_for_display_in_tool_panel=False,
has_repository_dependencies=False,
has_repository_dependencies_only_if_compiling_contained_td=False,
includes_tool_dependencies=False,
includes_workflows=False )
if repository_metadata:
            has_galaxy_utilities_dict[ 'includes_tools_for_display_in_tool_panel' ] = repository_metadata.includes_tools_for_display_in_tool_panel
metadata = repository_metadata.metadata
if metadata:
if 'data_manager' in metadata:
has_galaxy_utilities_dict[ 'includes_data_managers' ] = True
if 'datatypes' in metadata:
has_galaxy_utilities_dict[ 'includes_datatypes' ] = True
if 'tools' in metadata:
has_galaxy_utilities_dict[ 'includes_tools' ] = True
if 'tool_dependencies' in metadata:
has_galaxy_utilities_dict[ 'includes_tool_dependencies' ] = True
repository_dependencies_dict = metadata.get( 'repository_dependencies', {} )
repository_dependencies = repository_dependencies_dict.get( 'repository_dependencies', [] )
has_repository_dependencies, has_repository_dependencies_only_if_compiling_contained_td = \
suc.get_repository_dependency_types( repository_dependencies )
has_galaxy_utilities_dict[ 'has_repository_dependencies' ] = has_repository_dependencies
has_galaxy_utilities_dict[ 'has_repository_dependencies_only_if_compiling_contained_td' ] = \
has_repository_dependencies_only_if_compiling_contained_td
if 'workflows' in metadata:
has_galaxy_utilities_dict[ 'includes_workflows' ] = True
return has_galaxy_utilities_dict
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
changeset_revision )
has_galaxy_utilities_dict = has_galaxy_utilities( repository_metadata )
includes_data_managers = has_galaxy_utilities_dict[ 'includes_data_managers' ]
includes_datatypes = has_galaxy_utilities_dict[ 'includes_datatypes' ]
includes_tools = has_galaxy_utilities_dict[ 'includes_tools' ]
includes_tools_for_display_in_tool_panel = has_galaxy_utilities_dict[ 'includes_tools_for_display_in_tool_panel' ]
includes_tool_dependencies = has_galaxy_utilities_dict[ 'includes_tool_dependencies' ]
has_repository_dependencies = has_galaxy_utilities_dict[ 'has_repository_dependencies' ]
has_repository_dependencies_only_if_compiling_contained_td = \
has_galaxy_utilities_dict[ 'has_repository_dependencies_only_if_compiling_contained_td' ]
includes_workflows = has_galaxy_utilities_dict[ 'includes_workflows' ]
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
# Default to the received changeset revision and ctx_rev.
update_to_ctx = hg_util.get_changectx_for_changeset( repo, changeset_revision )
ctx_rev = str( update_to_ctx.rev() )
latest_changeset_revision = changeset_revision
update_dict = dict( changeset_revision=changeset_revision,
ctx_rev=ctx_rev,
includes_data_managers=includes_data_managers,
includes_datatypes=includes_datatypes,
includes_tools=includes_tools,
includes_tools_for_display_in_tool_panel=includes_tools_for_display_in_tool_panel,
includes_tool_dependencies=includes_tool_dependencies,
has_repository_dependencies=has_repository_dependencies,
has_repository_dependencies_only_if_compiling_contained_td=has_repository_dependencies_only_if_compiling_contained_td,
includes_workflows=includes_workflows )
if changeset_revision == repository.tip( trans.app ):
# If changeset_revision is the repository tip, there are no additional updates.
return encoding_util.tool_shed_encode( update_dict )
else:
if repository_metadata:
# If changeset_revision is in the repository_metadata table for this repository, there are no additional updates.
return encoding_util.tool_shed_encode( update_dict )
else:
# The changeset_revision column in the repository_metadata table has been updated with a new changeset_revision value since the
# repository was installed. We need to find the changeset_revision to which we need to update.
update_to_changeset_hash = None
for changeset in repo.changelog:
includes_tools = False
has_repository_dependencies = False
has_repository_dependencies_only_if_compiling_contained_td = False
changeset_hash = str( repo.changectx( changeset ) )
ctx = hg_util.get_changectx_for_changeset( repo, changeset_hash )
if update_to_changeset_hash:
update_to_repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
changeset_hash )
if update_to_repository_metadata:
                        has_galaxy_utilities_dict = has_galaxy_utilities( update_to_repository_metadata )
includes_data_managers = has_galaxy_utilities_dict[ 'includes_data_managers' ]
includes_datatypes = has_galaxy_utilities_dict[ 'includes_datatypes' ]
includes_tools = has_galaxy_utilities_dict[ 'includes_tools' ]
includes_tools_for_display_in_tool_panel = has_galaxy_utilities_dict[ 'includes_tools_for_display_in_tool_panel' ]
includes_tool_dependencies = has_galaxy_utilities_dict[ 'includes_tool_dependencies' ]
has_repository_dependencies = has_galaxy_utilities_dict[ 'has_repository_dependencies' ]
has_repository_dependencies_only_if_compiling_contained_td = has_galaxy_utilities_dict[ 'has_repository_dependencies_only_if_compiling_contained_td' ]
includes_workflows = has_galaxy_utilities_dict[ 'includes_workflows' ]
# We found a RepositoryMetadata record.
if changeset_hash == repository.tip( trans.app ):
# The current ctx is the repository tip, so use it.
update_to_ctx = hg_util.get_changectx_for_changeset( repo, changeset_hash )
latest_changeset_revision = changeset_hash
else:
update_to_ctx = hg_util.get_changectx_for_changeset( repo, update_to_changeset_hash )
latest_changeset_revision = update_to_changeset_hash
break
elif not update_to_changeset_hash and changeset_hash == changeset_revision:
# We've found the changeset in the changelog for which we need to get the next update.
update_to_changeset_hash = changeset_hash
update_dict[ 'includes_data_managers' ] = includes_data_managers
update_dict[ 'includes_datatypes' ] = includes_datatypes
update_dict[ 'includes_tools' ] = includes_tools
update_dict[ 'includes_tools_for_display_in_tool_panel' ] = includes_tools_for_display_in_tool_panel
update_dict[ 'includes_tool_dependencies' ] = includes_tool_dependencies
update_dict[ 'includes_workflows' ] = includes_workflows
update_dict[ 'has_repository_dependencies' ] = has_repository_dependencies
update_dict[ 'has_repository_dependencies_only_if_compiling_contained_td' ] = has_repository_dependencies_only_if_compiling_contained_td
update_dict[ 'changeset_revision' ] = str( latest_changeset_revision )
update_dict[ 'ctx_rev' ] = str( update_to_ctx.rev() )
return encoding_util.tool_shed_encode( update_dict )
@web.expose
def get_ctx_rev( self, trans, **kwd ):
"""Given a repository and changeset_revision, return the correct ctx.rev() value."""
repository_name = kwd[ 'name' ]
repository_owner = kwd[ 'owner' ]
changeset_revision = kwd[ 'changeset_revision' ]
repository = suc.get_repository_by_name_and_owner( trans.app, repository_name, repository_owner )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
ctx = hg_util.get_changectx_for_changeset( repo, changeset_revision )
if ctx:
return str( ctx.rev() )
return ''
@web.json
def get_file_contents( self, trans, file_path ):
# Avoid caching
trans.response.headers['Pragma'] = 'no-cache'
trans.response.headers['Expires'] = '0'
return suc.get_repository_file_contents( file_path )
@web.expose
def get_functional_test_rss( self, trans, **kwd ):
        """Return an RSS feed of the functional test results for the provided user, optionally filtered by the 'status' parameter."""
owner = kwd.get( 'owner', None )
status = kwd.get( 'status', 'all' )
if owner:
user = suc.get_user_by_username( trans.app, owner )
else:
trans.response.status = 404
return 'Missing owner parameter.'
if user is None:
trans.response.status = 404
return 'No user found with username %s.' % owner
if status == 'passed':
# Return only metadata revisions where tools_functionally_correct is set to True.
metadata_filter = and_( trans.model.RepositoryMetadata.table.c.includes_tools == True,
trans.model.RepositoryMetadata.table.c.tools_functionally_correct == True,
                                    trans.model.RepositoryMetadata.table.c.time_last_tested != None )
elif status == 'failed':
# Return only metadata revisions where tools_functionally_correct is set to False.
metadata_filter = and_( trans.model.RepositoryMetadata.table.c.includes_tools == True,
trans.model.RepositoryMetadata.table.c.tools_functionally_correct == False,
                                    trans.model.RepositoryMetadata.table.c.time_last_tested != None )
else:
# Return all metadata entries for this user's repositories.
metadata_filter = and_( trans.model.RepositoryMetadata.table.c.includes_tools == True,
                                    trans.model.RepositoryMetadata.table.c.time_last_tested != None )
tool_shed_url = web.url_for( '/', qualified=True )
functional_test_results = []
for repository_metadata in trans.sa_session.query( trans.model.RepositoryMetadata ) \
.filter( metadata_filter ) \
.join( trans.model.Repository ) \
.filter( and_( trans.model.Repository.table.c.deleted == False,
trans.model.Repository.table.c.private == False,
trans.model.Repository.table.c.deprecated == False,
trans.model.Repository.table.c.user_id == user.id ) ):
repository = repository_metadata.repository
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
            latest_downloadable_changeset_revision = suc.get_latest_downloadable_changeset_revision( trans.app, repository, repo )
            if repository_metadata.changeset_revision == latest_downloadable_changeset_revision:
# We'll display only the test run for the latest installable revision in the rss feed.
tool_test_results = repository_metadata.tool_test_results
if tool_test_results is not None:
# The tool_test_results column used to contain a single dictionary, but was recently enhanced to contain
# a list of dictionaries, one for each install and test run. We'll display only the latest run in the rss
                    # feed for now.
if isinstance( tool_test_results, list ):
tool_test_results = tool_test_results[ 0 ]
current_repository_errors = []
tool_dependency_errors = []
repository_dependency_errors = []
description_lines = []
# Per the RSS 2.0 specification, all dates in RSS feeds must be formatted as specified in RFC 822
# section 5.1, e.g. Sat, 07 Sep 2002 00:00:01 UT
time_tested = repository_metadata.time_last_tested.strftime( '%a, %d %b %Y %H:%M:%S UT' )
# Generate a citable URL for this repository with owner and changeset revision.
repository_citable_url = common_util.url_join( tool_shed_url,
'view',
str( user.username ),
str( repository.name ),
str( repository_metadata.changeset_revision ) )
passed_tests = len( tool_test_results.get( 'passed_tests', [] ) )
failed_tests = len( tool_test_results.get( 'failed_tests', [] ) )
missing_test_components = len( tool_test_results.get( 'missing_test_components', [] ) )
installation_errors = tool_test_results.get( 'installation_errors', [] )
if installation_errors:
tool_dependency_errors = installation_errors.get( 'tool_dependencies', [] )
repository_dependency_errors = installation_errors.get( 'repository_dependencies', [] )
current_repository_errors = installation_errors.get( 'current_repository', [] )
description_lines.append( '%d tests passed, %d tests failed, %d tests missing test components.' % \
( passed_tests, failed_tests, missing_test_components ) )
if current_repository_errors:
description_lines.append( '\nThis repository did not install correctly. ' )
if tool_dependency_errors or repository_dependency_errors:
description_lines.append( '\n%d tool dependencies and %d repository dependencies failed to install. ' % \
( len( tool_dependency_errors ), len( repository_dependency_errors ) ) )
title = 'Revision %s of %s' % ( repository_metadata.changeset_revision, repository.name )
# The guid attribute in an RSS feed's list of items allows a feed reader to choose not to show an item as updated
# if the guid is unchanged. For functional test results, the citable URL is sufficiently unique to enable
# that behavior.
functional_test_results.append( dict( title=title,
guid=repository_citable_url,
link=repository_citable_url,
description='\n'.join( description_lines ),
pubdate=time_tested ) )
trans.response.set_content_type( 'application/rss+xml' )
return trans.fill_template( '/rss.mako',
title='Tool functional test results',
link=tool_shed_url,
description='Functional test results for repositories owned by %s.' % user.username,
pubdate=strftime( '%a, %d %b %Y %H:%M:%S UT', gmtime() ),
items=functional_test_results )
@web.json
def get_latest_downloadable_changeset_revision( self, trans, **kwd ):
"""
Return the latest installable changeset revision for the repository associated with the received
name and owner. This method is called from Galaxy when attempting to install the latest revision
of an installed repository.
"""
repository_name = kwd.get( 'name', None )
repository_owner = kwd.get( 'owner', None )
if repository_name is not None and repository_owner is not None:
repository = suc.get_repository_by_name_and_owner( trans.app, repository_name, repository_owner )
if repository:
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
return suc.get_latest_downloadable_changeset_revision( trans.app, repository, repo )
return hg_util.INITIAL_CHANGELOG_HASH
@web.json
def get_readme_files( self, trans, **kwd ):
"""
This method is called when installing or re-installing a single repository into a Galaxy instance.
If the received changeset_revision includes one or more readme files, return them in a dictionary.
"""
repository_name = kwd.get( 'name', None )
repository_owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
if repository_name is not None and repository_owner is not None and changeset_revision is not None:
repository = suc.get_repository_by_name_and_owner( trans.app, repository_name, repository_owner )
if repository:
repository_metadata = \
suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
changeset_revision )
if repository_metadata:
metadata = repository_metadata.metadata
if metadata:
return readme_util.build_readme_files_dict( trans.app,
repository,
changeset_revision,
repository_metadata.metadata )
return {}
@web.json
def get_repository_dependencies( self, trans, **kwd ):
"""
Return an encoded dictionary of all repositories upon which the contents of the received repository
        depend.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repository_id = trans.security.encode_id( repository.id )
# We aren't concerned with repositories of type tool_dependency_definition here if a
# repository_metadata record is not returned because repositories of this type will never
# have repository dependencies. However, if a readme file is uploaded, or some other change
# is made that does not create a new downloadable changeset revision but updates the existing
# one, we still want to be able to get repository dependencies.
repository_metadata = suc.get_current_repository_metadata_for_changeset_revision( trans.app,
repository,
changeset_revision )
if repository_metadata:
metadata = repository_metadata.metadata
if metadata:
toolshed_base_url = str( web.url_for( '/', qualified=True ) ).rstrip( '/' )
rb = relation_builder.RelationBuilder( trans.app, repository, repository_metadata, toolshed_base_url )
repository_dependencies = rb.get_repository_dependencies_for_changeset_revision()
if repository_dependencies:
return encoding_util.tool_shed_encode( repository_dependencies )
return ''
@web.expose
def get_repository_id( self, trans, **kwd ):
"""Given a repository name and owner, return the encoded repository id."""
repository_name = kwd[ 'name' ]
repository_owner = kwd[ 'owner' ]
repository = suc.get_repository_by_name_and_owner( trans.app, repository_name, repository_owner )
if repository:
return trans.security.encode_id( repository.id )
return ''
@web.json
def get_repository_information( self, trans, repository_ids, changeset_revisions, **kwd ):
"""
Generate a list of dictionaries, each of which contains the information about a repository that will
be necessary for installing it into a local Galaxy instance.
"""
includes_tools = False
includes_tools_for_display_in_tool_panel = False
has_repository_dependencies = False
has_repository_dependencies_only_if_compiling_contained_td = False
includes_tool_dependencies = False
repo_info_dicts = []
for tup in zip( util.listify( repository_ids ), util.listify( changeset_revisions ) ):
repository_id, changeset_revision = tup
repo_info_dict, cur_includes_tools, cur_includes_tool_dependencies, cur_includes_tools_for_display_in_tool_panel, \
cur_has_repository_dependencies, cur_has_repository_dependencies_only_if_compiling_contained_td = \
repository_util.get_repo_info_dict( trans.app, trans.user, repository_id, changeset_revision )
if cur_has_repository_dependencies and not has_repository_dependencies:
has_repository_dependencies = True
if cur_has_repository_dependencies_only_if_compiling_contained_td and not has_repository_dependencies_only_if_compiling_contained_td:
has_repository_dependencies_only_if_compiling_contained_td = True
if cur_includes_tools and not includes_tools:
includes_tools = True
if cur_includes_tool_dependencies and not includes_tool_dependencies:
includes_tool_dependencies = True
if cur_includes_tools_for_display_in_tool_panel and not includes_tools_for_display_in_tool_panel:
includes_tools_for_display_in_tool_panel = True
repo_info_dicts.append( encoding_util.tool_shed_encode( repo_info_dict ) )
return dict( includes_tools=includes_tools,
includes_tools_for_display_in_tool_panel=includes_tools_for_display_in_tool_panel,
has_repository_dependencies=has_repository_dependencies,
has_repository_dependencies_only_if_compiling_contained_td=has_repository_dependencies_only_if_compiling_contained_td,
includes_tool_dependencies=includes_tool_dependencies,
repo_info_dicts=repo_info_dicts )
@web.json
def get_required_repo_info_dict( self, trans, encoded_str=None ):
"""
Retrieve and return a dictionary that includes a list of dictionaries that each contain all of the
information needed to install the list of repositories defined by the received encoded_str.
"""
repo_info_dict = {}
if encoded_str:
encoded_required_repository_str = encoding_util.tool_shed_decode( encoded_str )
encoded_required_repository_tups = encoded_required_repository_str.split( encoding_util.encoding_sep2 )
decoded_required_repository_tups = []
for encoded_required_repository_tup in encoded_required_repository_tups:
decoded_required_repository_tups.append( encoded_required_repository_tup.split( encoding_util.encoding_sep ) )
encoded_repository_ids = []
changeset_revisions = []
for required_repository_tup in decoded_required_repository_tups:
tool_shed, name, owner, changeset_revision, prior_installation_required, only_if_compiling_contained_td = \
common_util.parse_repository_dependency_tuple( required_repository_tup )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
encoded_repository_ids.append( trans.security.encode_id( repository.id ) )
changeset_revisions.append( changeset_revision )
if encoded_repository_ids and changeset_revisions:
repo_info_dict = json.loads( self.get_repository_information( trans, encoded_repository_ids, changeset_revisions ) )
return repo_info_dict
@web.expose
def get_tool_dependencies( self, trans, **kwd ):
"""
Handle a request from a Galaxy instance to get the tool_dependencies entry from the metadata
for a specified changeset revision.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
        for downloadable_revision in repository.downloadable_revisions:
            if downloadable_revision.changeset_revision == changeset_revision:
                break
        else:
            # The received changeset_revision does not match any downloadable revision.
            return ''
        metadata = downloadable_revision.metadata
        tool_dependencies = metadata.get( 'tool_dependencies', '' ) if metadata else ''
if tool_dependencies:
return encoding_util.tool_shed_encode( tool_dependencies )
return ''
@web.expose
def get_tool_dependencies_config_contents( self, trans, **kwd ):
"""
Handle a request from a Galaxy instance to get the tool_dependencies.xml file contents for a
specified changeset revision.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
        # TODO: We're currently returning the tool_dependencies.xml file that is available on disk.  We need
        # to enhance this process to retrieve older versions of the tool_dependencies.xml file from the repository
        # manifest.
repo_dir = repository.repo_path( trans.app )
# Get the tool_dependencies.xml file from disk.
tool_dependencies_config = hg_util.get_config_from_disk( rt_util.TOOL_DEPENDENCY_DEFINITION_FILENAME, repo_dir )
# Return the encoded contents of the tool_dependencies.xml file.
if tool_dependencies_config:
            with open( tool_dependencies_config, 'rb' ) as tool_dependencies_config_file:
                return tool_dependencies_config_file.read()
return ''
@web.expose
def get_tool_versions( self, trans, **kwd ):
"""
        For each valid downloadable changeset (up to the received changeset_revision) in the repository's
        changelog, append the changeset's tool_versions dictionary to the list that will be returned.
"""
name = kwd[ 'name' ]
owner = kwd[ 'owner' ]
changeset_revision = kwd[ 'changeset_revision' ]
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
tool_version_dicts = []
for changeset in repo.changelog:
current_changeset_revision = str( repo.changectx( changeset ) )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
current_changeset_revision )
if repository_metadata and repository_metadata.tool_versions:
tool_version_dicts.append( repository_metadata.tool_versions )
if current_changeset_revision == changeset_revision:
break
if tool_version_dicts:
return json.dumps( tool_version_dicts )
return ''
@web.json
def get_updated_repository_information( self, trans, name, owner, changeset_revision, **kwd ):
"""
Generate a dictionary that contains the information about a repository that is necessary for installing
it into a local Galaxy instance.
"""
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repository_id = trans.security.encode_id( repository.id )
repository_clone_url = common_util.generate_clone_url_for_repository_in_tool_shed( trans.user, repository )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
if not repository_metadata:
# The received changeset_revision is no longer associated with metadata, so get the next changeset_revision in the repository
# changelog that is associated with metadata.
changeset_revision = suc.get_next_downloadable_changeset_revision( repository,
repo,
after_changeset_revision=changeset_revision )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
ctx = hg_util.get_changectx_for_changeset( repo, changeset_revision )
repo_info_dict = repository_util.create_repo_info_dict( app=trans.app,
repository_clone_url=repository_clone_url,
changeset_revision=changeset_revision,
ctx_rev=str( ctx.rev() ),
repository_owner=repository.user.username,
repository_name=repository.name,
repository=repository,
repository_metadata=repository_metadata,
tool_dependencies=None,
repository_dependencies=None )
includes_data_managers = False
includes_datatypes = False
includes_tools = False
includes_tools_for_display_in_tool_panel = False
includes_workflows = False
readme_files_dict = None
metadata = repository_metadata.metadata
if metadata:
if 'data_manager' in metadata:
includes_data_managers = True
if 'datatypes' in metadata:
includes_datatypes = True
if 'tools' in metadata:
includes_tools = True
# Handle includes_tools_for_display_in_tool_panel.
tool_dicts = metadata[ 'tools' ]
for tool_dict in tool_dicts:
if tool_dict.get( 'includes_tools_for_display_in_tool_panel', False ):
includes_tools_for_display_in_tool_panel = True
break
if 'workflows' in metadata:
includes_workflows = True
readme_files_dict = readme_util.build_readme_files_dict( trans.app, repository, changeset_revision, metadata )
# See if the repo_info_dict was populated with repository_dependencies or tool_dependencies.
has_repository_dependencies = False
has_repository_dependencies_only_if_compiling_contained_td = False
includes_tool_dependencies = False
for name, repo_info_tuple in repo_info_dict.items():
if not has_repository_dependencies or not has_repository_dependencies_only_if_compiling_contained_td or not includes_tool_dependencies:
description, repository_clone_url, changeset_revision, ctx_rev, repository_owner, repository_dependencies, tool_dependencies = \
suc.get_repo_info_tuple_contents( repo_info_tuple )
for rd_key, rd_tups in repository_dependencies.items():
if rd_key in [ 'root_key', 'description' ]:
continue
curr_has_repository_dependencies, curr_has_repository_dependencies_only_if_compiling_contained_td = \
suc.get_repository_dependency_types( rd_tups )
if curr_has_repository_dependencies and not has_repository_dependencies:
has_repository_dependencies = True
if curr_has_repository_dependencies_only_if_compiling_contained_td and not has_repository_dependencies_only_if_compiling_contained_td:
has_repository_dependencies_only_if_compiling_contained_td = True
if tool_dependencies and not includes_tool_dependencies:
includes_tool_dependencies = True
return dict( includes_data_managers=includes_data_managers,
includes_datatypes=includes_datatypes,
includes_tools=includes_tools,
includes_tools_for_display_in_tool_panel=includes_tools_for_display_in_tool_panel,
has_repository_dependencies=has_repository_dependencies,
has_repository_dependencies_only_if_compiling_contained_td=has_repository_dependencies_only_if_compiling_contained_td,
includes_tool_dependencies=includes_tool_dependencies,
includes_workflows=includes_workflows,
readme_files_dict=readme_files_dict,
repo_info_dict=repo_info_dict )
@web.expose
def help( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
return trans.fill_template( '/webapps/tool_shed/repository/help.mako', message=message, status=status, **kwd )
@web.expose
def import_capsule( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
capsule_file_name = kwd.get( 'capsule_file_name', None )
encoded_file_path = kwd.get( 'encoded_file_path', None )
file_path = encoding_util.tool_shed_decode( encoded_file_path )
export_info_file_path = os.path.join( file_path, 'export_info.xml' )
irm = capsule_manager.ImportRepositoryManager( trans.app,
trans.request.host,
trans.user,
trans.user_is_admin() )
export_info_dict = irm.get_export_info_dict( export_info_file_path )
manifest_file_path = os.path.join( file_path, 'manifest.xml' )
# The manifest.xml file has already been validated, so no error_message should be returned here.
repository_info_dicts, error_message = irm.get_repository_info_from_manifest( manifest_file_path )
# Determine the status for each exported repository archive contained within the capsule.
repository_status_info_dicts = irm.get_repository_status_from_tool_shed( repository_info_dicts )
if 'import_capsule_button' in kwd:
# Generate a list of repository name / import results message tuples for display after the capsule is imported.
import_results_tups = []
# Only create repositories that do not yet exist and that the current user is authorized to create. The
# status will be None for repositories that fall into the intersection of these 2 categories.
for repository_status_info_dict in repository_status_info_dicts:
# Add the capsule_file_name and encoded_file_path to the repository_status_info_dict.
repository_status_info_dict[ 'capsule_file_name' ] = capsule_file_name
repository_status_info_dict[ 'encoded_file_path' ] = encoded_file_path
import_results_tups = irm.create_repository_and_import_archive( repository_status_info_dict,
import_results_tups )
irm.check_status_and_reset_downloadable( import_results_tups )
basic_util.remove_dir( file_path )
return trans.fill_template( '/webapps/tool_shed/repository/import_capsule_results.mako',
export_info_dict=export_info_dict,
import_results_tups=import_results_tups,
message=message,
status=status )
return trans.fill_template( '/webapps/tool_shed/repository/import_capsule.mako',
encoded_file_path=encoded_file_path,
export_info_dict=export_info_dict,
repository_status_info_dicts=repository_status_info_dicts,
message=message,
status=status )
@web.expose
def index( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
# See if there are any RepositoryMetadata records since menu items require them.
repository_metadata = trans.sa_session.query( trans.model.RepositoryMetadata ).first()
current_user = trans.user
# TODO: move the following to some in-memory register so these queries can be done once
# at startup. The in-memory register can then be managed during the current session.
can_administer_repositories = False
has_reviewed_repositories = False
has_deprecated_repositories = False
if current_user:
# See if the current user owns any repositories that have been reviewed.
for repository in current_user.active_repositories:
if repository.reviews:
has_reviewed_repositories = True
break
# See if the current user has any repositories that have been marked as deprecated.
for repository in current_user.active_repositories:
if repository.deprecated:
has_deprecated_repositories = True
break
# See if the current user can administer any repositories, but only if not an admin user.
if not trans.user_is_admin():
if current_user.active_repositories:
can_administer_repositories = True
else:
for repository in trans.sa_session.query( trans.model.Repository ) \
.filter( trans.model.Repository.table.c.deleted == False ):
if trans.app.security_agent.user_can_administer_repository( current_user, repository ):
can_administer_repositories = True
break
        # The route in may have been from a shareable URL, in which case we will have a user_id and possibly a name.
        # The received user_id will be the id of the repository owner.
user_id = kwd.get( 'user_id', None )
repository_id = kwd.get( 'repository_id', None )
changeset_revision = kwd.get( 'changeset_revision', None )
return trans.fill_template( '/webapps/tool_shed/index.mako',
repository_metadata=repository_metadata,
can_administer_repositories=can_administer_repositories,
has_reviewed_repositories=has_reviewed_repositories,
has_deprecated_repositories=has_deprecated_repositories,
user_id=user_id,
repository_id=repository_id,
changeset_revision=changeset_revision,
message=message,
status=status )
@web.expose
def install_repositories_by_revision( self, trans, **kwd ):
"""
Send the list of repository_ids and changeset_revisions to Galaxy so it can begin the installation
process. If the value of repository_ids is not received, then the name and owner of a single repository
must be received to install a single repository.
"""
repository_ids = kwd.get( 'repository_ids', None )
changeset_revisions = kwd.get( 'changeset_revisions', None )
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
if not repository_ids:
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repository_ids = trans.security.encode_id( repository.id )
galaxy_url = common_util.handle_galaxy_url( trans, **kwd )
if galaxy_url:
# Redirect back to local Galaxy to perform install.
params = '?tool_shed_url=%s&repository_ids=%s&changeset_revisions=%s' % \
( web.url_for( '/', qualified=True ),
','.join( util.listify( repository_ids ) ),
','.join( util.listify( changeset_revisions ) ) )
url = common_util.url_join( galaxy_url,
'admin_toolshed/prepare_for_install%s' % params )
return trans.response.send_redirect( url )
else:
            message = 'Repository installation is not possible because a valid Galaxy URL was not received. '
message += 'You may need to enable third-party cookies in your browser. '
status = 'error'
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_valid_categories',
message=message,
status=status ) )
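The redirect above hand-assembles a query string from comma-joined id and revision lists. A minimal standalone sketch of that assembly (hypothetical helper and values, not the Galaxy implementation):

```python
# Sketch of the query-string assembly performed by
# install_repositories_by_revision; build_install_params is a
# hypothetical helper and the values below are made up.
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote        # Python 2


def build_install_params(tool_shed_url, repository_ids, changeset_revisions):
    # repository_ids / changeset_revisions are lists of strings,
    # joined with commas exactly as util.listify(...) output is joined above.
    return '?tool_shed_url=%s&repository_ids=%s&changeset_revisions=%s' % (
        quote(tool_shed_url, safe=''),
        ','.join(repository_ids),
        ','.join(changeset_revisions),
    )
```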
@web.expose
def load_invalid_tool( self, trans, repository_id, tool_config, changeset_revision, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'error' )
render_repository_actions_for = kwd.get( 'render_repository_actions_for', 'tool_shed' )
tv = tool_validator.ToolValidator( trans.app )
repository, tool, error_message = tv.load_tool_from_changeset_revision( repository_id,
changeset_revision,
tool_config )
tool_state = tool_util.new_state( trans, tool, invalid=True )
invalid_file_tups = []
if tool:
invalid_file_tups = tv.check_tool_input_params( repository.repo_path( trans.app ),
tool_config,
tool,
[] )
if invalid_file_tups:
message = tool_util.generate_message_for_invalid_tools( trans.app,
invalid_file_tups,
repository,
{},
as_html=True,
displaying_invalid_tool=True )
elif error_message:
message = error_message
try:
return trans.fill_template( "/webapps/tool_shed/repository/tool_form.mako",
repository=repository,
render_repository_actions_for=render_repository_actions_for,
changeset_revision=changeset_revision,
tool=tool,
tool_state=tool_state,
message=message,
status='error' )
        except Exception as e:
message = "Exception thrown attempting to display tool: %s." % str( e )
if trans.webapp.name == 'galaxy':
return trans.response.send_redirect( web.url_for( controller='repository',
action='preview_tools_in_changeset',
repository_id=repository_id,
changeset_revision=changeset_revision,
message=message,
status='error' ) )
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
operation='view_or_manage_repository',
id=repository_id,
changeset_revision=changeset_revision,
message=message,
status='error' ) )
@web.expose
@web.require_login( "manage email alerts" )
def manage_email_alerts( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
new_repo_alert = kwd.get( 'new_repo_alert', '' )
new_repo_alert_checked = CheckboxField.is_checked( new_repo_alert )
user = trans.user
if kwd.get( 'new_repo_alert_button', False ):
user.new_repo_alert = new_repo_alert_checked
trans.sa_session.add( user )
trans.sa_session.flush()
if new_repo_alert_checked:
message = 'You will receive email alerts for all new valid tool shed repositories.'
else:
message = 'You will not receive any email alerts for new valid tool shed repositories.'
checked = new_repo_alert_checked or ( user and user.new_repo_alert )
new_repo_alert_check_box = CheckboxField( 'new_repo_alert', checked=checked )
email_alert_repositories = []
for repository in trans.sa_session.query( trans.model.Repository ) \
.filter( and_( trans.model.Repository.table.c.deleted == False,
trans.model.Repository.table.c.email_alerts != None ) ) \
.order_by( trans.model.Repository.table.c.name ):
if user.email in repository.email_alerts:
email_alert_repositories.append( repository )
return trans.fill_template( "/webapps/tool_shed/user/manage_email_alerts.mako",
new_repo_alert_check_box=new_repo_alert_check_box,
email_alert_repositories=email_alert_repositories,
message=message,
status=status )
@web.expose
@web.require_login( "manage repository" )
def manage_repository( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
repository_type = kwd.get( 'repository_type', str( repository.type ) )
repo_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_dir, create=False )
repo_name = kwd.get( 'repo_name', repository.name )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
description = kwd.get( 'description', repository.description )
long_description = kwd.get( 'long_description', repository.long_description )
avg_rating, num_ratings = self.get_ave_item_rating_data( trans.sa_session, repository, webapp_model=trans.model )
display_reviews = util.string_as_bool( kwd.get( 'display_reviews', False ) )
alerts = kwd.get( 'alerts', '' )
alerts_checked = CheckboxField.is_checked( alerts )
skip_tool_tests = kwd.get( 'skip_tool_tests', '' )
skip_tool_tests_checked = CheckboxField.is_checked( skip_tool_tests )
skip_tool_tests_comment = kwd.get( 'skip_tool_tests_comment', '' )
category_ids = util.listify( kwd.get( 'category_id', '' ) )
if repository.email_alerts:
email_alerts = json.loads( repository.email_alerts )
else:
email_alerts = []
allow_push = kwd.get( 'allow_push', '' )
error = False
user = trans.user
if kwd.get( 'edit_repository_button', False ):
flush_needed = False
if not ( trans.user_is_admin() or trans.app.security_agent.user_can_administer_repository( user, repository ) ):
message = "You are not the owner of this repository, so you cannot administer it."
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_repository',
id=id,
message=message,
status='error' ) )
if repository_type != repository.type:
repository.type = repository_type
flush_needed = True
if description != repository.description:
repository.description = description
flush_needed = True
if long_description != repository.long_description:
repository.long_description = long_description
flush_needed = True
if repository.times_downloaded == 0 and repo_name != repository.name:
message = repository_util.validate_repository_name( trans.app, repo_name, user )
if message:
error = True
else:
# Change the entry in the hgweb.config file for the repository.
old_lhs = "repos/%s/%s" % ( repository.user.username, repository.name )
new_lhs = "repos/%s/%s" % ( repository.user.username, repo_name )
trans.app.hgweb_config_manager.change_entry( old_lhs, new_lhs, repo_dir )
# Change the entry in the repository's hgrc file.
hgrc_file = os.path.join( repo_dir, '.hg', 'hgrc' )
repository_util.change_repository_name_in_hgrc_file( hgrc_file, repo_name )
# Rename the repository's admin role to match the new repository name.
repository_admin_role = repository.admin_role
repository_admin_role.name = \
repository_util.get_repository_admin_role_name( str( repo_name ),
str( repository.user.username ) )
trans.sa_session.add( repository_admin_role )
repository.name = repo_name
flush_needed = True
elif repository.times_downloaded != 0 and repo_name != repository.name:
message = "Repository names cannot be changed if the repository has been cloned. "
if flush_needed:
trans.sa_session.add( repository )
trans.sa_session.flush()
message += "The repository information has been updated."
elif kwd.get( 'skip_tool_tests_button', False ):
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, id, changeset_revision )
skip_tool_test = repository_metadata.skip_tool_tests
if skip_tool_test:
# Handle the mapper behavior.
skip_tool_test = skip_tool_test[ 0 ]
if skip_tool_tests_checked:
if repository_metadata.tool_test_results:
repository_metadata.tool_test_results = None
trans.sa_session.add( repository_metadata )
trans.sa_session.flush()
if skip_tool_test:
comment = skip_tool_test.comment
if comment != skip_tool_tests_comment:
skip_tool_test.comment = skip_tool_tests_comment
trans.sa_session.add( skip_tool_test )
trans.sa_session.flush()
else:
skip_tool_test = trans.model.SkipToolTest( repository_metadata_id=repository_metadata.id,
initial_changeset_revision=changeset_revision,
comment=skip_tool_tests_comment )
trans.sa_session.add( skip_tool_test )
trans.sa_session.flush()
message = "Tools in this revision will not be tested by the automated test framework."
else:
if skip_tool_test:
trans.sa_session.delete( skip_tool_test )
trans.sa_session.flush()
message = "Tools in this revision will be tested by the automated test framework."
elif kwd.get( 'manage_categories_button', False ):
flush_needed = False
# Delete all currently existing categories.
for rca in repository.categories:
trans.sa_session.delete( rca )
trans.sa_session.flush()
if category_ids:
# Create category associations
for category_id in category_ids:
category = trans.sa_session.query( trans.model.Category ).get( trans.security.decode_id( category_id ) )
rca = trans.app.model.RepositoryCategoryAssociation( repository, category )
trans.sa_session.add( rca )
trans.sa_session.flush()
message = "The repository information has been updated."
elif kwd.get( 'user_access_button', False ):
if allow_push not in [ 'none' ]:
remove_auth = kwd.get( 'remove_auth', '' )
if remove_auth:
usernames = ''
else:
user_ids = util.listify( allow_push )
usernames = []
for user_id in user_ids:
user = trans.sa_session.query( trans.model.User ).get( trans.security.decode_id( user_id ) )
usernames.append( user.username )
usernames = ','.join( usernames )
repository.set_allow_push( trans.app, usernames, remove_auth=remove_auth )
message = "The repository information has been updated."
elif kwd.get( 'receive_email_alerts_button', False ):
flush_needed = False
if alerts_checked:
if user.email not in email_alerts:
email_alerts.append( user.email )
repository.email_alerts = json.dumps( email_alerts )
flush_needed = True
else:
if user.email in email_alerts:
email_alerts.remove( user.email )
repository.email_alerts = json.dumps( email_alerts )
flush_needed = True
if flush_needed:
trans.sa_session.add( repository )
trans.sa_session.flush()
message = "The repository information has been updated."
if error:
status = 'error'
current_allow_push = repository.allow_push( trans.app )
if current_allow_push:
current_allow_push_list = current_allow_push.split( ',' )
else:
current_allow_push_list = []
allow_push_select_field = repository_util.build_allow_push_select_field( trans, current_allow_push_list )
checked = alerts_checked or user.email in email_alerts
alerts_check_box = CheckboxField( 'alerts', checked=checked )
changeset_revision_select_field = grids_util.build_changeset_revision_select_field( trans,
repository,
selected_value=changeset_revision,
add_id_to_name=False,
downloadable=False )
revision_label = hg_util.get_revision_label( trans.app, repository, repository.tip( trans.app ), include_date=False )
repository_metadata = None
metadata = None
is_malicious = False
skip_tool_test = None
repository_dependencies = None
if changeset_revision != hg_util.INITIAL_CHANGELOG_HASH:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, id, changeset_revision )
if repository_metadata:
revision_label = hg_util.get_revision_label( trans.app, repository, changeset_revision, include_date=False )
metadata = repository_metadata.metadata
is_malicious = repository_metadata.malicious
else:
# There is no repository_metadata defined for the changeset_revision, so see if it was defined in a previous
# changeset in the changelog.
previous_changeset_revision = \
metadata_util.get_previous_metadata_changeset_revision( repository, repo, changeset_revision, downloadable=False )
if previous_changeset_revision != hg_util.INITIAL_CHANGELOG_HASH:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, id, previous_changeset_revision )
if repository_metadata:
revision_label = hg_util.get_revision_label( trans.app, repository, previous_changeset_revision, include_date=False )
metadata = repository_metadata.metadata
is_malicious = repository_metadata.malicious
changeset_revision = previous_changeset_revision
if repository_metadata:
skip_tool_test = repository_metadata.skip_tool_tests
if skip_tool_test:
# Handle the mapper behavior.
skip_tool_test = skip_tool_test[ 0 ]
skip_tool_tests_checked = True
metadata = repository_metadata.metadata
# Get a dictionary of all repositories upon which the contents of the current repository_metadata record depend.
toolshed_base_url = str( web.url_for( '/', qualified=True ) ).rstrip( '/' )
rb = relation_builder.RelationBuilder( trans.app, repository, repository_metadata, toolshed_base_url )
repository_dependencies = rb.get_repository_dependencies_for_changeset_revision()
if str( repository.type ) != rt_util.REPOSITORY_SUITE_DEFINITION:
# Handle messaging for resetting repository type to the optimal value.
change_repository_type_message = rt_util.generate_message_for_repository_type_change( trans.app,
repository )
if change_repository_type_message:
message += change_repository_type_message
status = 'warning'
elif str( repository.type ) != rt_util.TOOL_DEPENDENCY_DEFINITION:
# Handle messaging for resetting repository type to the optimal value.
change_repository_type_message = rt_util.generate_message_for_repository_type_change( trans.app,
repository )
if change_repository_type_message:
message += change_repository_type_message
status = 'warning'
else:
# Handle messaging for orphan tool dependency definitions.
dd = dependency_display.DependencyDisplayer( trans.app )
orphan_message = dd.generate_message_for_orphan_tool_dependencies( repository, metadata )
if orphan_message:
message += orphan_message
status = 'warning'
if is_malicious:
if trans.app.security_agent.can_push( trans.app, trans.user, repository ):
message += malicious_error_can_push
else:
message += malicious_error
status = 'error'
repository_type_select_field = rt_util.build_repository_type_select_field( trans, repository=repository )
malicious_check_box = CheckboxField( 'malicious', checked=is_malicious )
skip_tool_tests_check_box = CheckboxField( 'skip_tool_tests', checked=skip_tool_tests_checked )
categories = suc.get_categories( trans.app )
selected_categories = [ rca.category_id for rca in repository.categories ]
tsucm = ToolShedUtilityContainerManager( trans.app )
containers_dict = tsucm.build_repository_containers( repository,
changeset_revision,
repository_dependencies,
repository_metadata )
heads = hg_util.get_repository_heads( repo )
deprecated_repository_dependency_tups = \
metadata_util.get_repository_dependency_tups_from_repository_metadata( trans.app,
repository_metadata,
deprecated_only=True )
return trans.fill_template( '/webapps/tool_shed/repository/manage_repository.mako',
repo_name=repo_name,
description=description,
long_description=long_description,
current_allow_push_list=current_allow_push_list,
allow_push_select_field=allow_push_select_field,
deprecated_repository_dependency_tups=deprecated_repository_dependency_tups,
repo=repo,
heads=heads,
repository=repository,
containers_dict=containers_dict,
repository_metadata=repository_metadata,
changeset_revision=changeset_revision,
changeset_revision_select_field=changeset_revision_select_field,
revision_label=revision_label,
selected_categories=selected_categories,
categories=categories,
metadata=metadata,
avg_rating=avg_rating,
display_reviews=display_reviews,
num_ratings=num_ratings,
alerts_check_box=alerts_check_box,
skip_tool_tests_check_box=skip_tool_tests_check_box,
skip_tool_test=skip_tool_test,
malicious_check_box=malicious_check_box,
repository_type_select_field=repository_type_select_field,
message=message,
status=status )
@web.expose
@web.require_login( "manage repository administrators" )
def manage_repository_admins( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
metadata = None
if changeset_revision != hg_util.INITIAL_CHANGELOG_HASH:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, id, changeset_revision )
if repository_metadata:
metadata = repository_metadata.metadata
else:
# There is no repository_metadata defined for the changeset_revision, so see if it was defined
# in a previous changeset in the changelog.
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
previous_changeset_revision = \
metadata_util.get_previous_metadata_changeset_revision( repository,
repo,
changeset_revision,
downloadable=False )
if previous_changeset_revision != hg_util.INITIAL_CHANGELOG_HASH:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
id,
previous_changeset_revision )
if repository_metadata:
metadata = repository_metadata.metadata
role = repository.admin_role
associations_dict = repository_util.handle_role_associations( trans.app,
role,
repository,
**kwd )
in_users = associations_dict.get( 'in_users', [] )
out_users = associations_dict.get( 'out_users', [] )
in_groups = associations_dict.get( 'in_groups', [] )
out_groups = associations_dict.get( 'out_groups', [] )
message = associations_dict.get( 'message', '' )
status = associations_dict.get( 'status', 'done' )
return trans.fill_template( '/webapps/tool_shed/role/role.mako',
in_admin_controller=False,
repository=repository,
metadata=metadata,
changeset_revision=changeset_revision,
role=role,
in_users=in_users,
out_users=out_users,
in_groups=in_groups,
out_groups=out_groups,
message=message,
status=status )
@web.expose
@web.require_login( "review repository revision" )
def manage_repository_reviews_of_revision( self, trans, **kwd ):
return trans.response.send_redirect( web.url_for( controller='repository_review',
action='manage_repository_reviews_of_revision',
**kwd ) )
@web.expose
@web.require_login( "multi select email alerts" )
def multi_select_email_alerts( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "receive email alerts":
if trans.user:
if kwd[ 'id' ]:
kwd[ 'caller' ] = 'multi_select_email_alerts'
return trans.response.send_redirect( web.url_for( controller='repository',
action='set_email_alerts',
**kwd ) )
else:
kwd[ 'message' ] = 'You must be logged in to set email alerts.'
kwd[ 'status' ] = 'error'
del kwd[ 'operation' ]
elif operation == "view_or_manage_repository":
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
self.email_alerts_repository_grid.title = "Set email alerts for repository changes"
return self.email_alerts_repository_grid( trans, **kwd )
@web.expose
def next_installable_changeset_revision( self, trans, **kwd ):
"""
Handle a request from a Galaxy instance where the changeset_revision defined for a repository
in a dependency definition file is older than the changeset_revision associated with the installed
repository.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
# Get the next installable changeset_revision beyond the received changeset_revision.
changeset_revision = suc.get_next_downloadable_changeset_revision( repository, repo, changeset_revision )
if changeset_revision:
return changeset_revision
return ''
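The lookup delegated to `suc.get_next_downloadable_changeset_revision` walks the changelog past the received revision and returns the first downloadable changeset it finds. A hedged sketch of that idea, with the changelog simplified to an ordered list of `(hash, downloadable)` tuples (an assumption, not the real Mercurial API):

```python
# Simplified sketch of the "next downloadable changeset" lookup.
# The real code walks a Mercurial changelog; here the changelog is
# assumed to be an ordered list of (changeset_hash, downloadable) pairs.
def next_downloadable(changelog, current_revision):
    seen_current = False
    for changeset_hash, downloadable in changelog:
        if seen_current and downloadable:
            return changeset_hash
        if changeset_hash == current_revision:
            seen_current = True
    # Mirror the controller's behavior of returning '' when nothing newer exists.
    return ''
```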
@web.json
def open_folder( self, trans, folder_path ):
# Avoid caching
trans.response.headers['Pragma'] = 'no-cache'
trans.response.headers['Expires'] = '0'
return suc.open_repository_files_folder( folder_path )
@web.expose
def preview_tools_in_changeset( self, trans, repository_id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
if repository_metadata:
            repository_metadata_id = trans.security.encode_id( repository_metadata.id )
metadata = repository_metadata.metadata
# Get a dictionary of all repositories upon which the contents of the current repository_metadata record depend.
toolshed_base_url = str( web.url_for( '/', qualified=True ) ).rstrip( '/' )
rb = relation_builder.RelationBuilder( trans.app, repository, repository_metadata, toolshed_base_url )
repository_dependencies = rb.get_repository_dependencies_for_changeset_revision()
if metadata:
if 'repository_dependencies' in metadata and not repository_dependencies:
# See if we have an invalid repository dependency definition or if the repository dependency is required
# only for compiling the repository's tool dependency.
invalid = False
repository_dependencies_dict = metadata[ 'repository_dependencies' ]
rd_tups = repository_dependencies_dict.get( 'repository_dependencies', [] )
for rd_tup in rd_tups:
                    rd_tool_shed, \
rd_name, \
rd_owner, \
rd_changeset_revision, \
rd_prior_installation_required, \
rd_only_if_compiling_contained_td = \
common_util.parse_repository_dependency_tuple( rd_tup )
if not util.asbool( rd_only_if_compiling_contained_td ):
invalid = True
break
if invalid:
dd = dependency_display.DependencyDisplayer( trans.app )
message = dd.generate_message_for_invalid_repository_dependencies( metadata,
error_from_tuple=False )
status = 'error'
else:
repository_metadata_id = None
metadata = None
repository_dependencies = None
revision_label = hg_util.get_revision_label( trans.app, repository, changeset_revision, include_date=True )
changeset_revision_select_field = grids_util.build_changeset_revision_select_field( trans,
repository,
selected_value=changeset_revision,
add_id_to_name=False,
downloadable=False )
tsucm = ToolShedUtilityContainerManager( trans.app )
containers_dict = tsucm.build_repository_containers( repository,
changeset_revision,
repository_dependencies,
repository_metadata )
return trans.fill_template( '/webapps/tool_shed/repository/preview_tools_in_changeset.mako',
repository=repository,
containers_dict=containers_dict,
repository_metadata_id=repository_metadata_id,
changeset_revision=changeset_revision,
revision_label=revision_label,
changeset_revision_select_field=changeset_revision_select_field,
metadata=metadata,
message=message,
status=status )
@web.expose
def previous_changeset_revisions( self, trans, from_tip=False, **kwd ):
"""
Handle a request from a local Galaxy instance. This method will handle two scenarios: (1) the
        repository was previously installed using an older changeset_revision, but later the repository
was updated in the tool shed and the Galaxy admin is trying to install the latest changeset
revision of the same repository instead of updating the one that was previously installed. (2)
the admin is attempting to get updates for an installed repository that has a repository dependency
and both the repository and its dependency have available updates. In this case, the from_tip
parameter will be True because the repository dependency definition may define a changeset hash
for the dependency that is newer than the installed changeset revision of the dependency (this is
due to the behavior of "Tool dependency definition" repositories, whose metadata is always the tip),
        so the complete list of changeset hashes in the changelog must be returned.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
if name is not None and owner is not None:
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
from_tip = util.string_as_bool( from_tip )
if from_tip:
changeset_revision = repository.tip( trans.app )
else:
changeset_revision = kwd.get( 'changeset_revision', None )
if changeset_revision is not None:
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
# Get the lower bound changeset revision.
lower_bound_changeset_revision = metadata_util.get_previous_metadata_changeset_revision( repository,
repo,
changeset_revision,
downloadable=True )
# Build the list of changeset revision hashes.
changeset_hashes = []
for changeset in hg_util.reversed_lower_upper_bounded_changelog( repo,
lower_bound_changeset_revision,
changeset_revision ):
changeset_hashes.append( str( repo.changectx( changeset ) ) )
if changeset_hashes:
changeset_hashes_str = ','.join( changeset_hashes )
return changeset_hashes_str
return ''
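The loop above collects hashes between a lower-bound revision and the requested revision, then comma-joins them. A small sketch of that bounded walk under the assumption that the changelog is an ordered list of hash strings and that the lower bound itself is excluded (as the controller's use of a *previous* metadata revision suggests):

```python
# Hedged sketch of the bounded changelog walk in previous_changeset_revisions:
# collect hashes after the lower bound, up to and including the upper bound.
# The list-of-hashes changelog and the exclusive lower bound are assumptions.
def bounded_hashes(changelog, lower_bound, upper_bound):
    collecting = False
    hashes = []
    for changeset_hash in changelog:
        if changeset_hash == lower_bound:
            collecting = True
            continue
        if collecting:
            hashes.append(changeset_hash)
        if changeset_hash == upper_bound:
            break
    # Comma-join, matching the controller's response format.
    return ','.join(hashes)
```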
@web.expose
@web.require_login( "rate repositories" )
def rate_repository( self, trans, **kwd ):
""" Rate a repository and return updated rating data. """
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
id = kwd.get( 'id', None )
if not id:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
message='Select a repository to rate',
status='error' ) )
repository = suc.get_repository_in_tool_shed( trans.app, id )
changeset_revision = repository.tip( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
if repository.user == trans.user:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
message="You are not allowed to rate your own repository",
status='error' ) )
if kwd.get( 'rate_button', False ):
rating = int( kwd.get( 'rating', '0' ) )
comment = kwd.get( 'comment', '' )
rating = self.rate_item( trans, trans.user, repository, rating, comment )
avg_rating, num_ratings = self.get_ave_item_rating_data( trans.sa_session, repository, webapp_model=trans.model )
display_reviews = util.string_as_bool( kwd.get( 'display_reviews', False ) )
rra = self.get_user_item_rating( trans.sa_session, trans.user, repository, webapp_model=trans.model )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
id,
changeset_revision,
metadata_only=True )
repository_type_select_field = rt_util.build_repository_type_select_field( trans, repository=repository )
revision_label = hg_util.get_revision_label( trans.app, repository, changeset_revision, include_date=True )
return trans.fill_template( '/webapps/tool_shed/repository/rate_repository.mako',
repository=repository,
metadata=metadata,
revision_label=revision_label,
avg_rating=avg_rating,
display_reviews=display_reviews,
num_ratings=num_ratings,
rra=rra,
repository_type_select_field=repository_type_select_field,
message=message,
status=status )
@web.expose
def reset_all_metadata( self, trans, id, **kwd ):
"""Reset all metadata on the complete changelog for a single repository in the tool shed."""
# This method is called only from the ~/templates/webapps/tool_shed/repository/manage_repository.mako template.
repository = suc.get_repository_in_tool_shed( trans.app, id )
rmm = repository_metadata_manager.RepositoryMetadataManager( app=trans.app,
user=trans.user,
repository=repository )
rmm.reset_all_metadata_on_repository_in_tool_shed()
rmm_metadata_dict = rmm.get_metadata_dict()
rmm_invalid_file_tups = rmm.get_invalid_file_tups()
if rmm_invalid_file_tups:
message = tool_util.generate_message_for_invalid_tools( trans.app,
rmm_invalid_file_tups,
repository,
rmm_metadata_dict )
status = 'error'
else:
message = "All repository metadata has been reset. "
status = 'done'
return trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
id=id,
message=message,
status=status ) )
@web.expose
def reset_metadata_on_my_writable_repositories_in_tool_shed( self, trans, **kwd ):
rmm = repository_metadata_manager.RepositoryMetadataManager( trans.app, trans.user )
if 'reset_metadata_on_selected_repositories_button' in kwd:
message, status = rmm.reset_metadata_on_selected_repositories( **kwd )
else:
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repositories_select_field = rmm.build_repository_ids_select_field( name='repository_ids',
multiple=True,
display='checkboxes',
my_writable=True )
return trans.fill_template( '/webapps/tool_shed/common/reset_metadata_on_selected_repositories.mako',
repositories_select_field=repositories_select_field,
message=message,
status=status )
@web.expose
def select_files_to_delete( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
commit_message = kwd.get( 'commit_message', 'Deleted selected files' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
repo_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_dir, create=False )
selected_files_to_delete = kwd.get( 'selected_files_to_delete', '' )
if kwd.get( 'select_files_to_delete_button', False ):
if selected_files_to_delete:
selected_files_to_delete = selected_files_to_delete.split( ',' )
# Get the current repository tip.
tip = repository.tip( trans.app )
for selected_file in selected_files_to_delete:
try:
hg_util.remove_file( repo.ui, repo, selected_file, force=True )
except Exception as e:
log.debug( "Error removing the following file using the mercurial API:\n %s" % str( selected_file ) )
log.debug( "The error was: %s" % str( e ))
log.debug( "Attempting to remove the file using a different approach." )
relative_selected_file = selected_file.split( 'repo_%d' % repository.id )[1].lstrip( '/' )
repo.dirstate.remove( relative_selected_file )
repo.dirstate.write()
absolute_selected_file = os.path.abspath( selected_file )
if os.path.isdir( absolute_selected_file ):
try:
os.rmdir( absolute_selected_file )
except OSError:
# The directory is not empty
pass
elif os.path.isfile( absolute_selected_file ):
os.remove( absolute_selected_file )
dir = os.path.split( absolute_selected_file )[0]
try:
os.rmdir( dir )
except OSError:
# The directory is not empty
pass
# Commit the change set.
if not commit_message:
commit_message = 'Deleted selected files'
hg_util.commit_changeset( repo.ui,
repo,
full_path_to_changeset=repo_dir,
username=trans.user.username,
message=commit_message )
suc.handle_email_alerts( trans.app, trans.request.host, repository )
# Update the repository files for browsing.
hg_util.update_repository( repo )
# Get the new repository tip.
if tip == repository.tip( trans.app ):
message += 'No changes to repository. '
else:
rmm = repository_metadata_manager.RepositoryMetadataManager( app=trans.app,
user=trans.user,
repository=repository )
status, error_message = rmm.set_repository_metadata_due_to_new_tip( trans.request.host, **kwd )
if error_message:
message = error_message
else:
message += 'The selected files were deleted from the repository. '
else:
message = "Select at least 1 file to delete from the repository before clicking <b>Delete selected files</b>."
status = "error"
repository_type_select_field = rt_util.build_repository_type_select_field( trans, repository=repository )
changeset_revision = repository.tip( trans.app )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
id,
changeset_revision,
metadata_only=True )
return trans.fill_template( '/webapps/tool_shed/repository/browse_repository.mako',
repo=repo,
repository=repository,
changeset_revision=changeset_revision,
metadata=metadata,
commit_message=commit_message,
repository_type_select_field=repository_type_select_field,
message=message,
status=status )
@web.expose
def send_to_owner( self, trans, id, message='' ):
repository = suc.get_repository_in_tool_shed( trans.app, id )
if not message:
message = 'Enter a message'
status = 'error'
elif trans.user and trans.user.email:
smtp_server = trans.app.config.smtp_server
from_address = trans.app.config.email_from
if smtp_server is None or from_address is None:
return trans.show_error_message( "Mail is not configured for this Galaxy tool shed instance" )
to_address = repository.user.email
# Get the name of the server hosting the tool shed instance.
host = trans.request.host
# Build the email message
body = string.Template( suc.contact_owner_template ) \
.safe_substitute( username=trans.user.username,
repository_name=repository.name,
email=trans.user.email,
message=message,
host=host )
subject = "Regarding your tool shed repository named %s" % repository.name
# Send it
try:
util.send_mail( from_address, to_address, subject, body, trans.app.config )
message = "Your message has been sent"
status = "done"
except Exception as e:
message = "An error occurred sending your message by email: %s" % str( e )
status = "error"
else:
# Do all we can to eliminate spam.
return trans.show_error_message( "You must be logged in to contact the owner of a repository." )
return trans.response.send_redirect( web.url_for( controller='repository',
action='contact_owner',
id=id,
message=message,
status=status ) )
@web.expose
@web.require_login( "set email alerts" )
def set_email_alerts( self, trans, **kwd ):
"""Set email alerts for selected repositories."""
# This method is called from multiple grids, so the caller must be passed.
caller = kwd[ 'caller' ]
user = trans.user
if user:
repository_ids = util.listify( kwd.get( 'id', '' ) )
total_alerts_added = 0
total_alerts_removed = 0
flush_needed = False
for repository_id in repository_ids:
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository.email_alerts:
email_alerts = json.loads( repository.email_alerts )
else:
email_alerts = []
if user.email in email_alerts:
email_alerts.remove( user.email )
repository.email_alerts = json.dumps( email_alerts )
trans.sa_session.add( repository )
flush_needed = True
total_alerts_removed += 1
else:
email_alerts.append( user.email )
repository.email_alerts = json.dumps( email_alerts )
trans.sa_session.add( repository )
flush_needed = True
total_alerts_added += 1
if flush_needed:
trans.sa_session.flush()
message = 'Total alerts added: %d, total alerts removed: %d' % ( total_alerts_added, total_alerts_removed )
kwd[ 'message' ] = message
kwd[ 'status' ] = 'done'
del kwd[ 'operation' ]
return trans.response.send_redirect( web.url_for( controller='repository',
action=caller,
**kwd ) )
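The loop in `set_email_alerts` above implements a simple membership toggle over a JSON-encoded subscriber list: a user already present is removed, otherwise added. The pattern can be isolated into a standalone sketch; the helper name below is illustrative and not part of Galaxy.

```python
import json

# Toggle an email address in a JSON-encoded subscriber list, mirroring the
# add/remove logic in set_email_alerts. Returns the updated JSON string and
# whether the address was added (True) or removed (False).
def toggle_email_alert(email_alerts_json, email):
    alerts = json.loads(email_alerts_json) if email_alerts_json else []
    if email in alerts:
        alerts.remove(email)
        added = False
    else:
        alerts.append(email)
        added = True
    return json.dumps(alerts), added
```

Calling the helper twice with the same address returns the list to its original state, which is why the controller can expose a single endpoint for both subscribing and unsubscribing.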
@web.expose
@web.require_login( "set repository as malicious" )
def set_malicious( self, trans, id, ctx_str, **kwd ):
malicious = kwd.get( 'malicious', '' )
if kwd.get( 'malicious_button', False ):
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, id, ctx_str )
malicious_checked = CheckboxField.is_checked( malicious )
repository_metadata.malicious = malicious_checked
trans.sa_session.add( repository_metadata )
trans.sa_session.flush()
if malicious_checked:
message = "The repository tip has been defined as malicious."
else:
message = "The repository tip has been defined as <b>not</b> malicious."
status = 'done'
return trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
id=id,
changeset_revision=ctx_str,
malicious=malicious,
message=message,
status=status ) )
@web.expose
def sharable_owner( self, trans, owner ):
"""Support for sharable URL for each repository owner's tools, e.g. http://example.org/view/owner."""
try:
user = suc.get_user_by_username( trans, owner )
except:
user = None
if user:
user_id = trans.security.encode_id( user.id )
return trans.response.send_redirect( web.url_for( controller='repository',
action='index',
user_id=user_id ) )
else:
return trans.show_error_message( "The tool shed <b>%s</b> contains no repositories owned by <b>%s</b>." % \
( web.url_for( '/', qualified=True ).rstrip( '/' ), str( owner ) ) )
@web.expose
def sharable_repository( self, trans, owner, name ):
"""Support for sharable URL for a specified repository, e.g. http://example.org/view/owner/name."""
try:
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
except:
repository = None
if repository:
repository_id = trans.security.encode_id( repository.id )
return trans.response.send_redirect( web.url_for( controller='repository',
action='index',
repository_id=repository_id ) )
else:
# If the owner is valid, then show all of their repositories.
try:
user = suc.get_user_by_username( trans, owner )
except:
user = None
if user:
user_id = trans.security.encode_id( user.id )
message = "This list of repositories owned by <b>%s</b> does not include one named <b>%s</b>." % ( str( owner ), str( name ) )
return trans.response.send_redirect( web.url_for( controller='repository',
action='index',
user_id=user_id,
message=message,
status='error' ) )
else:
return trans.show_error_message( "The tool shed <b>%s</b> contains no repositories named <b>%s</b> with owner <b>%s</b>." % \
( web.url_for( '/', qualified=True ).rstrip( '/' ), str( name ), str( owner ) ) )
@web.expose
def sharable_repository_revision( self, trans, owner, name, changeset_revision ):
"""Support for sharable URL for a specified repository revision, e.g. http://example.org/view/owner/name/changeset_revision."""
try:
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
except:
repository = None
if repository:
repository_id = trans.security.encode_id( repository.id )
repository_metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
repository_id,
changeset_revision )
if not repository_metadata:
# Get updates to the received changeset_revision if any exist.
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
upper_bound_changeset_revision = suc.get_next_downloadable_changeset_revision( repository, repo, changeset_revision )
if upper_bound_changeset_revision:
changeset_revision = upper_bound_changeset_revision
repository_metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
repository_id,
changeset_revision )
if repository_metadata:
return trans.response.send_redirect( web.url_for( controller='repository',
action='index',
repository_id=repository_id,
changeset_revision=changeset_revision ) )
else:
message = "The change log for the repository named <b>%s</b> owned by <b>%s</b> does not include revision <b>%s</b>." % \
( str( name ), str( owner ), str( changeset_revision ) )
return trans.response.send_redirect( web.url_for( controller='repository',
action='index',
repository_id=repository_id,
message=message,
status='error' ) )
else:
# See if the owner is valid.
return trans.response.send_redirect( web.url_for( controller='repository',
action='sharable_owner',
owner=owner ) )
@web.expose
def status_for_installed_repository( self, trans, **kwd ):
"""
Handle a request from a local Galaxy instance, returning a dictionary of boolean values indicating
whether updates are available for the repository revision, whether newer installable revisions are
available, whether this revision is the latest installable revision, and whether the repository
is deprecated.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
repository = suc.get_repository_by_name_and_owner( trans.app, name, owner )
if repository:
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
trans.security.encode_id( repository.id ),
changeset_revision )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
tool_shed_status_dict = {}
# Handle repository deprecation.
tool_shed_status_dict[ 'repository_deprecated' ] = str( repository.deprecated )
# Handle latest installable revision.
if changeset_revision == repository.tip( trans.app ):
tool_shed_status_dict[ 'latest_installable_revision' ] = 'True'
else:
next_installable_revision = suc.get_next_downloadable_changeset_revision( repository, repo, changeset_revision )
if repository_metadata is None:
if next_installable_revision:
tool_shed_status_dict[ 'latest_installable_revision' ] = 'True'
else:
tool_shed_status_dict[ 'latest_installable_revision' ] = 'False'
else:
if next_installable_revision:
tool_shed_status_dict[ 'latest_installable_revision' ] = 'False'
else:
tool_shed_status_dict[ 'latest_installable_revision' ] = 'True'
# Handle revision updates.
if changeset_revision == repository.tip( trans.app ):
tool_shed_status_dict[ 'revision_update' ] = 'False'
else:
if repository_metadata is None:
tool_shed_status_dict[ 'revision_update' ] = 'True'
else:
tool_shed_status_dict[ 'revision_update' ] = 'False'
# Handle revision upgrades.
ordered_metadata_changeset_revisions = suc.get_ordered_metadata_changeset_revisions( repository, repo, downloadable=True )
num_metadata_revisions = len( ordered_metadata_changeset_revisions )
for index, metadata_changeset_revision in enumerate( ordered_metadata_changeset_revisions ):
# enumerate() indices run 0..n-1, so compare against the last index.
if index == num_metadata_revisions - 1:
tool_shed_status_dict[ 'revision_upgrade' ] = 'False'
break
if metadata_changeset_revision == changeset_revision:
if num_metadata_revisions - index > 1:
tool_shed_status_dict[ 'revision_upgrade' ] = 'True'
else:
tool_shed_status_dict[ 'revision_upgrade' ] = 'False'
break
return encoding_util.tool_shed_encode( tool_shed_status_dict )
return encoding_util.tool_shed_encode( {} )
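The dictionary built above stores every flag as the string 'True' or 'False' rather than as a boolean, since the whole structure is serialised with `encoding_util.tool_shed_encode` before being returned. A minimal consumer-side sketch showing how a caller would coerce the decoded flags back into booleans; the helper name is hypothetical, not part of Galaxy.

```python
# Hypothetical helper for the Galaxy side of this exchange: after decoding the
# response, the string-valued flags still need converting to real booleans.
def parse_tool_shed_status(tool_shed_status_dict):
    return dict((key, value == 'True')
                for key, value in tool_shed_status_dict.items())

status = parse_tool_shed_status({'repository_deprecated': 'False',
                                 'latest_installable_revision': 'True',
                                 'revision_update': 'False',
                                 'revision_upgrade': 'False'})
```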
@web.expose
def updated_changeset_revisions( self, trans, **kwd ):
"""
Handle a request from a local Galaxy instance to retrieve the list of changeset revisions to which an
installed repository can be updated. This method will return a string of comma-separated changeset revision
hashes for all available updates to the received changeset revision. Among other things, this method
handles the scenario where an installed tool shed repository's tool_dependency definition file defines a
changeset revision for a complex repository dependency that is outdated. In other words, a defined changeset
revision is older than the current changeset revision for the required repository, making it impossible to
discover the repository without knowledge of revisions to which it could have been updated.
"""
name = kwd.get( 'name', None )
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
if name and owner and changeset_revision:
return suc.get_updated_changeset_revisions( trans.app, name, owner, changeset_revision )
return ''
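Since the endpoint above returns either a plain comma-separated string of changeset hashes or the empty string, a client has to split the response itself. A small illustrative sketch; the function name and example hashes are made up.

```python
# Illustrative client-side parsing of the comma-separated response returned by
# updated_changeset_revisions; an empty response yields an empty list.
def split_changeset_revisions(response_text):
    return [rev for rev in response_text.split(',') if rev]
```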
@web.expose
def upload_capsule( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
url = kwd.get( 'url', '' )
if 'upload_capsule_button' in kwd:
irm = capsule_manager.ImportRepositoryManager( trans.app,
trans.request.host,
trans.user,
trans.user_is_admin() )
capsule_dict = irm.upload_capsule( **kwd )
status = capsule_dict.get( 'status', 'error' )
if status == 'error':
message = capsule_dict.get( 'error_message', '' )
else:
capsule_dict = irm.extract_capsule_files( **capsule_dict )
capsule_dict = irm.validate_capsule( **capsule_dict )
status = capsule_dict.get( 'status', 'error' )
if status == 'ok':
return trans.response.send_redirect( web.url_for( controller='repository',
action='import_capsule',
**capsule_dict ) )
else:
message = 'The capsule contents are invalid and cannot be imported:<br/>%s' % \
str( capsule_dict.get( 'error_message', '' ) )
return trans.fill_template( '/webapps/tool_shed/repository/upload_capsule.mako',
url=url,
message=message,
status=status )
@web.expose
def view_changelog( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
changesets = []
for changeset in repo.changelog:
ctx = repo.changectx( changeset )
if suc.get_repository_metadata_by_changeset_revision( trans.app, id, str( ctx ) ):
has_metadata = True
else:
has_metadata = False
change_dict = { 'ctx' : ctx,
'rev' : str( ctx.rev() ),
'date' : ctx.date(),
'display_date' : hg_util.get_readable_ctx_date( ctx ),
'description' : ctx.description(),
'files' : ctx.files(),
'user' : ctx.user(),
'parent' : ctx.parents()[0],
'has_metadata' : has_metadata }
# Insert at the front so the latest changeset is listed first.
changesets.insert( 0, change_dict )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
id,
repository.tip( trans.app ),
metadata_only=True )
return trans.fill_template( '/webapps/tool_shed/repository/view_changelog.mako',
repository=repository,
metadata=metadata,
changesets=changesets,
message=message,
status=status )
@web.expose
def view_changeset( self, trans, id, ctx_str, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
ctx = hg_util.get_changectx_for_changeset( repo, ctx_str )
if ctx is None:
message = "Repository does not include changeset revision '%s'." % str( ctx_str )
status = 'error'
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_changelog',
id=id,
message=message,
status=status ) )
ctx_parent = ctx.parents()[ 0 ]
if ctx.children():
ctx_child = ctx.children()[ 0 ]
else:
ctx_child = None
diffs = []
options_dict = hg_util.get_mercurial_default_options_dict( 'diff' )
# Not quite sure if the following settings make any difference, but with a combination of them and the size check on each
# diff, we don't run out of memory when viewing the changelog of the cisortho2 repository on the test tool shed.
options_dict[ 'maxfile' ] = basic_util.MAXDIFFSIZE
options_dict[ 'maxtotal' ] = basic_util.MAXDIFFSIZE
diffopts = mdiff.diffopts( **options_dict )
for diff in patch.diff( repo, node1=ctx_parent.node(), node2=ctx.node(), opts=diffopts ):
if len( diff ) > basic_util.MAXDIFFSIZE:
diff = util.shrink_string_by_size( diff, basic_util.MAXDIFFSIZE )
diffs.append( basic_util.to_html_string( diff ) )
modified, added, removed, deleted, unknown, ignored, clean = repo.status( node1=ctx_parent.node(), node2=ctx.node() )
anchors = modified + added + removed + deleted + unknown + ignored + clean
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans.app,
id,
ctx_str,
metadata_only=True )
# For rendering the prev button.
if ctx_parent:
ctx_parent_date = hg_util.get_readable_ctx_date( ctx_parent )
ctx_parent_rev = ctx_parent.rev()
if ctx_parent_rev < 0:
prev = None
else:
prev = "<b>%s:%s</b> <i>(%s)</i>" % ( ctx_parent_rev, ctx_parent, ctx_parent_date )
else:
prev = None
if ctx_child:
ctx_child_date = hg_util.get_readable_ctx_date( ctx_child )
ctx_child_rev = ctx_child.rev()
next = "<b>%s:%s</b> <i>(%s)</i>" % ( ctx_child_rev, ctx_child, ctx_child_date )
else:
next = None
return trans.fill_template( '/webapps/tool_shed/repository/view_changeset.mako',
repository=repository,
metadata=metadata,
prev=prev,
next=next,
ctx=ctx,
ctx_parent=ctx_parent,
ctx_child=ctx_child,
anchors=anchors,
modified=modified,
added=added,
removed=removed,
deleted=deleted,
unknown=unknown,
ignored=ignored,
clean=clean,
diffs=diffs,
message=message,
status=status )
@web.expose
def view_or_manage_repository( self, trans, **kwd ):
repository_id = kwd.get( 'id', None )
if repository_id:
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
user = trans.user
if repository:
if user is not None and ( trans.user_is_admin() or \
trans.app.security_agent.user_can_administer_repository( user, repository ) ):
return trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
**kwd ) )
else:
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_repository',
**kwd ) )
return trans.show_error_message( "Invalid repository id '%s' received." % repository_id )
return trans.show_error_message( "The repository id was not received." )
@web.expose
def view_repository( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
avg_rating, num_ratings = self.get_ave_item_rating_data( trans.sa_session, repository, webapp_model=trans.model )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
display_reviews = kwd.get( 'display_reviews', False )
alerts = kwd.get( 'alerts', '' )
alerts_checked = CheckboxField.is_checked( alerts )
if repository.email_alerts:
email_alerts = json.loads( repository.email_alerts )
else:
email_alerts = []
repository_dependencies = None
user = trans.user
if user and kwd.get( 'receive_email_alerts_button', False ):
flush_needed = False
if alerts_checked:
if user.email not in email_alerts:
email_alerts.append( user.email )
repository.email_alerts = json.dumps( email_alerts )
flush_needed = True
else:
if user.email in email_alerts:
email_alerts.remove( user.email )
repository.email_alerts = json.dumps( email_alerts )
flush_needed = True
if flush_needed:
trans.sa_session.add( repository )
trans.sa_session.flush()
checked = alerts_checked or ( user and user.email in email_alerts )
alerts_check_box = CheckboxField( 'alerts', checked=checked )
changeset_revision_select_field = grids_util.build_changeset_revision_select_field( trans,
repository,
selected_value=changeset_revision,
add_id_to_name=False,
downloadable=False )
revision_label = hg_util.get_revision_label( trans.app, repository, changeset_revision, include_date=False )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, id, changeset_revision )
if repository_metadata:
metadata = repository_metadata.metadata
# Get a dictionary of all repositories upon which the contents of the current repository_metadata record depend.
toolshed_base_url = str( web.url_for( '/', qualified=True ) ).rstrip( '/' )
rb = relation_builder.RelationBuilder( trans.app, repository, repository_metadata, toolshed_base_url )
repository_dependencies = rb.get_repository_dependencies_for_changeset_revision()
if str( repository.type ) != rt_util.TOOL_DEPENDENCY_DEFINITION:
# Handle messaging for orphan tool dependency definitions.
dd = dependency_display.DependencyDisplayer( trans.app )
orphan_message = dd.generate_message_for_orphan_tool_dependencies( repository, metadata )
if orphan_message:
message += orphan_message
status = 'warning'
else:
metadata = None
is_malicious = metadata_util.is_malicious( trans.app, id, repository.tip( trans.app ) )
if is_malicious:
if trans.app.security_agent.can_push( trans.app, trans.user, repository ):
message += malicious_error_can_push
else:
message += malicious_error
status = 'error'
tsucm = ToolShedUtilityContainerManager( trans.app )
containers_dict = tsucm.build_repository_containers( repository,
changeset_revision,
repository_dependencies,
repository_metadata )
repository_type_select_field = rt_util.build_repository_type_select_field( trans, repository=repository )
heads = hg_util.get_repository_heads( repo )
return trans.fill_template( '/webapps/tool_shed/repository/view_repository.mako',
repo=repo,
heads=heads,
repository=repository,
repository_metadata=repository_metadata,
metadata=metadata,
containers_dict=containers_dict,
avg_rating=avg_rating,
display_reviews=display_reviews,
num_ratings=num_ratings,
alerts_check_box=alerts_check_box,
changeset_revision=changeset_revision,
changeset_revision_select_field=changeset_revision_select_field,
revision_label=revision_label,
repository_type_select_field=repository_type_select_field,
message=message,
status=status )
@web.expose
def view_tool_metadata( self, trans, repository_id, changeset_revision, tool_id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
render_repository_actions_for = kwd.get( 'render_repository_actions_for', 'tool_shed' )
repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo_files_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_files_dir, create=False )
tool_metadata_dict = {}
tool_lineage = []
tool = None
guid = None
original_tool_data_path = trans.app.config.tool_data_path
revision_label = hg_util.get_revision_label( trans.app, repository, changeset_revision, include_date=False )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
if repository_metadata:
repository_metadata_id = trans.security.encode_id( repository_metadata.id )
metadata = repository_metadata.metadata
if metadata:
if 'tools' in metadata:
tv = tool_validator.ToolValidator( trans.app )
for tool_metadata_dict in metadata[ 'tools' ]:
if tool_metadata_dict[ 'id' ] == tool_id:
work_dir = tempfile.mkdtemp()
relative_path_to_tool_config = tool_metadata_dict[ 'tool_config' ]
guid = tool_metadata_dict[ 'guid' ]
full_path_to_tool_config = os.path.abspath( relative_path_to_tool_config )
full_path_to_dir, tool_config_filename = os.path.split( full_path_to_tool_config )
can_use_disk_file = tv.can_use_tool_config_disk_file( repository,
repo,
full_path_to_tool_config,
changeset_revision )
if can_use_disk_file:
trans.app.config.tool_data_path = work_dir
tool, valid, message, sample_files = \
tv.handle_sample_files_and_load_tool_from_disk( repo_files_dir,
repository_id,
full_path_to_tool_config,
work_dir )
if message:
status = 'error'
else:
tool, message, sample_files = \
tv.handle_sample_files_and_load_tool_from_tmp_config( repo,
repository_id,
changeset_revision,
tool_config_filename,
work_dir )
if message:
status = 'error'
basic_util.remove_dir( work_dir )
break
if guid:
tvm = tool_version_manager.ToolVersionManager( trans.app )
tool_lineage = tvm.get_version_lineage_for_tool( repository_id,
repository_metadata,
guid )
else:
repository_metadata_id = None
metadata = None
changeset_revision_select_field = grids_util.build_changeset_revision_select_field( trans,
repository,
selected_value=changeset_revision,
add_id_to_name=False,
downloadable=False )
trans.app.config.tool_data_path = original_tool_data_path
return trans.fill_template( "/webapps/tool_shed/repository/view_tool_metadata.mako",
render_repository_actions_for=render_repository_actions_for,
repository=repository,
repository_metadata_id=repository_metadata_id,
metadata=metadata,
tool=tool,
tool_metadata_dict=tool_metadata_dict,
tool_lineage=tool_lineage,
changeset_revision=changeset_revision,
revision_label=revision_label,
changeset_revision_select_field=changeset_revision_select_field,
message=message,
status=status )
@web.expose
def view_workflow( self, trans, workflow_name, repository_metadata_id, **kwd ):
"""Retrieve necessary information about a workflow from the database so that it can be displayed in an svg image."""
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
render_repository_actions_for = kwd.get( 'render_repository_actions_for', 'tool_shed' )
if workflow_name:
workflow_name = encoding_util.tool_shed_decode( workflow_name )
repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository = suc.get_repository_in_tool_shed( trans.app, trans.security.encode_id( repository_metadata.repository_id ) )
changeset_revision = repository_metadata.changeset_revision
metadata = repository_metadata.metadata
return trans.fill_template( "/webapps/tool_shed/repository/view_workflow.mako",
repository=repository,
render_repository_actions_for=render_repository_actions_for,
changeset_revision=changeset_revision,
repository_metadata_id=repository_metadata_id,
workflow_name=workflow_name,
metadata=metadata,
message=message,
status=status )
# End of source file: mikel-egana-aranguren/SADI-Galaxy-Docker
# (galaxy-dist/lib/galaxy/webapps/tool_shed/controllers/repository.py, Python, gpl-3.0, keywords: Galaxy)
# pylint: disable=E1103
import pandas as pd
import os.path
from . import resource_usage as ru
TRANSCRIPT_GTF_FILE = "TRANSCRIPT_GTF_FILE"
GENOME_FASTA_DIR = "GENOME_FASTA_DIR"
SIMULATED_READS = "SIMULATED_READS"
LEFT_SIMULATED_READS = "LEFT_SIMULATED_READS"
RIGHT_SIMULATED_READS = "RIGHT_SIMULATED_READS"
FASTQ_READS = "FASTQ_READS"
STRANDED_READS = "STRANDED_READS"
QUANTIFIER_DIRECTORY = "QUANTIFIER_DIRECTORY"
NUM_THREADS = "NUM_THREADS"
_QUANT_METHODS = {}
def get_quantification_methods():
return _QUANT_METHODS
def _quantifier(cls):
_QUANT_METHODS[cls.get_name()] = cls()
return cls
class _QuantifierBase(object):
def __init__(self):
self.abundances = None
@classmethod
def get_name(cls):
raise NotImplementedError
def __str__(self):
return self.__class__.get_name()
@classmethod
def _add_timed_line(cls, writer, record_usage, resource_type, line):
writer.add_line(
(ru.get_time_command(resource_type) if record_usage else "") + line)
@classmethod
def _add_timed_pipe(cls, writer, record_usage, resource_type, pipe_commands):
if record_usage:
line = "bash -c \"{pipe}\"".format(pipe=" | ".join(pipe_commands))
cls._add_timed_line(writer, True, resource_type, line)
else:
writer.add_pipe(*pipe_commands)
@classmethod
def _add_timed_prequantification_command(cls, writer, record_usage, line):
cls._add_timed_line(
writer, record_usage, ru.PREQUANT_RESOURCE_TYPE, line)
@classmethod
def _add_timed_quantification_command(cls, writer, record_usage, line):
cls._add_timed_line(
writer, record_usage, ru.QUANT_RESOURCE_TYPE, line)
@classmethod
def _add_timed_prequantification_pipe(
cls, writer, record_usage, pipe_commands):
cls._add_timed_pipe(
writer, record_usage, ru.PREQUANT_RESOURCE_TYPE, pipe_commands)
@classmethod
def _add_timed_quantification_pipe(
cls, writer, record_usage, pipe_commands):
cls._add_timed_pipe(
writer, record_usage, ru.QUANT_RESOURCE_TYPE, pipe_commands)
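The reason `_add_timed_pipe` quotes the whole pipeline into a `bash -c` invocation is that a resource-measuring prefix such as `/usr/bin/time` would otherwise apply only to the first process of a bare pipe. A minimal sketch of the command construction; the time prefix shown is a stand-in for whatever `ru.get_time_command` actually returns.

```python
# Build a single shell command so that one timing prefix covers every process
# in the pipeline, rather than just the first one.
def build_timed_pipe(time_prefix, pipe_commands):
    return '{t}bash -c "{p}"'.format(t=time_prefix, p=" | ".join(pipe_commands))
```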
@_quantifier
class _Cufflinks(_QuantifierBase):
FPKM_COLUMN = "FPKM"
CALC_BOWTIE_INDEX_DIR = \
"BOWTIE_INDEX_DIR=$(dirname {bowtie_index})"
BOWTIE_INDEX_DIR_DOESNT_EXIST = \
"! -d $BOWTIE_INDEX_DIR"
MAKE_BOWTIE_INDEX_DIR = \
"mkdir -p $BOWTIE_INDEX_DIR"
GET_GENOME_REF_FASTA_LIST = \
"REF_FILES=$(ls -1 {genome_fasta_dir}/*.fa | tr '\\n' ',')"
STRIP_LAST_COMMA_FROM_FA_LIST = \
"REF_FILES=${REF_FILES%,}"
BUILD_BOWTIE_INDEX = \
"bowtie-build $REF_FILES {bowtie_index}"
CONSTRUCT_BOWTIE_REF_FASTA = \
"bowtie-inspect {bowtie_index} > {bowtie_index}.fa"
MAP_READS_TO_GENOME_WITH_TOPHAT = \
"tophat {stranded_spec} --no-coverage-search -p {num_threads} " + \
"-o tho {bowtie_index} {reads_spec}"
QUANTIFY_ISOFORM_EXPRESSION = \
"cufflinks -o transcriptome -u -b {bowtie_index}.fa " + \
"-p {num_threads} " + "{stranded_spec} -G {transcript_gtf} " + \
"tho/accepted_hits.bam"
REMOVE_TOPHAT_OUTPUT_DIR = \
"rm -rf tho"
REMOVE_OUTPUT_EXCEPT_ABUNDANCES = \
r"find transcriptome \! -name 'isoforms.fpkm_tracking' -type f -delete"
@classmethod
def get_name(cls):
return "Cufflinks"
@classmethod
def _get_bowtie_index(cls, quantifier_dir):
return os.path.join(quantifier_dir, "bowtie-index", "index")
@classmethod
def write_preparatory_commands(cls, writer, record_usage, params):
writer.add_comment(
"Prepare the bowtie index for read mapping if it doesn't " +
"already exist. Note that this step only needs to be done " +
"once for a particular reference genome.")
bowtie_index = cls._get_bowtie_index(params[QUANTIFIER_DIRECTORY])
writer.add_line(cls.CALC_BOWTIE_INDEX_DIR.format(
bowtie_index=bowtie_index))
with writer.section():
with writer.if_block(cls.BOWTIE_INDEX_DIR_DOESNT_EXIST):
writer.add_line(cls.MAKE_BOWTIE_INDEX_DIR)
writer.add_line(
cls.GET_GENOME_REF_FASTA_LIST.format(
genome_fasta_dir=params[GENOME_FASTA_DIR]))
writer.add_line(cls.STRIP_LAST_COMMA_FROM_FA_LIST)
cls._add_timed_prequantification_command(
writer, record_usage,
cls.BUILD_BOWTIE_INDEX.format(bowtie_index=bowtie_index))
cls._add_timed_prequantification_command(
writer, record_usage,
cls.CONSTRUCT_BOWTIE_REF_FASTA.format(
bowtie_index=bowtie_index))
@classmethod
def write_quantification_commands(cls, writer, record_usage, params):
bowtie_index = cls._get_bowtie_index(params[QUANTIFIER_DIRECTORY])
reads_spec = params[SIMULATED_READS] if SIMULATED_READS in params \
else "{l} {r}".format(
l=params[LEFT_SIMULATED_READS],
r=params[RIGHT_SIMULATED_READS])
stranded_spec = "--library-type " + \
("fr-secondstrand" if params[STRANDED_READS] else "fr-unstranded")
cls._add_timed_quantification_command(
writer, record_usage,
cls.MAP_READS_TO_GENOME_WITH_TOPHAT.format(
bowtie_index=bowtie_index,
reads_spec=reads_spec,
stranded_spec=stranded_spec,
num_threads=params[NUM_THREADS]))
cls._add_timed_quantification_command(
writer, record_usage,
cls.QUANTIFY_ISOFORM_EXPRESSION.format(
bowtie_index=bowtie_index,
transcript_gtf=params[TRANSCRIPT_GTF_FILE],
stranded_spec=stranded_spec,
num_threads=params[NUM_THREADS]))
@classmethod
def write_cleanup(cls, writer):
writer.add_line(cls.REMOVE_TOPHAT_OUTPUT_DIR)
writer.add_line(cls.REMOVE_OUTPUT_EXCEPT_ABUNDANCES)
def __init__(self):
_QuantifierBase.__init__(self)
self.norm_constant = 0
def get_transcript_abundance(self, transcript_id):
if self.abundances is None:
self.abundances = pd.read_csv(
"transcriptome/isoforms.fpkm_tracking",
delim_whitespace=True, index_col="tracking_id")
self.norm_constant = \
1000000 / (self.abundances[_Cufflinks.FPKM_COLUMN].sum())
fpkm = self.abundances.ix[transcript_id][_Cufflinks.FPKM_COLUMN] \
if transcript_id in self.abundances.index else 0
return self.norm_constant * fpkm
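The FPKM-to-TPM normalisation performed by `get_transcript_abundance` above (scale each FPKM by 10^6 divided by the sum of all FPKMs) can be sketched standalone. The transcript IDs and FPKM values below are illustrative, not taken from any real Cufflinks run:

```python
# Sketch of the FPKM -> TPM conversion used above: TPM_i = FPKM_i * 1e6 / sum(FPKM).
# The FPKM values are made up for illustration.
fpkms = {"ENST0001": 10.0, "ENST0002": 30.0, "ENST0003": 60.0}

norm_constant = 1e6 / sum(fpkms.values())
tpms = {tid: norm_constant * fpkm for tid, fpkm in fpkms.items()}

# By construction, TPM values always sum to one million.
print(sum(tpms.values()))  # 1000000.0
```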
class _TranscriptomeBasedQuantifierBase(_QuantifierBase):
CALC_TRANSCRIPT_REF_DIR = \
"REF_DIR=$(dirname {ref_name})"
# Note: despite the name, this condition is true when the reference
# directory does NOT yet exist.
TRANSCRIPT_REF_DIR_EXISTS = \
"! -d $REF_DIR"
MAKE_TRANSCRIPT_REF_DIR = \
"mkdir -p $REF_DIR"
PREPARE_TRANSCRIPT_REF = \
"rsem-prepare-reference --gtf {transcript_gtf} " + \
"{bowtie_spec} {genome_fasta_dir} {ref_name}"
@classmethod
def _get_ref_name(cls, quantifier_dir):
ref_name = cls.get_name().lower()
return os.path.join(quantifier_dir, ref_name, ref_name)
@classmethod
def write_preparatory_commands(cls, writer, record_usage, params):
with writer.section():
writer.add_comment(
"Prepare the transcript reference if it doesn't already " +
"exist. We create the transcript reference using a tool " +
"from the RSEM package. Note that this step only needs to " +
"be done once for a particular set of transcripts.")
ref_name = cls._get_ref_name(params[QUANTIFIER_DIRECTORY])
bowtie_spec = "--bowtie" if cls._needs_bowtie_index() else ""
writer.add_line(
cls.CALC_TRANSCRIPT_REF_DIR.format(
ref_name=ref_name))
with writer.if_block(
cls.TRANSCRIPT_REF_DIR_EXISTS):
writer.add_line(cls.MAKE_TRANSCRIPT_REF_DIR)
cls._add_timed_prequantification_command(
writer, record_usage,
cls.PREPARE_TRANSCRIPT_REF.format(
transcript_gtf=params[TRANSCRIPT_GTF_FILE],
genome_fasta_dir=params[GENOME_FASTA_DIR],
ref_name=ref_name,
bowtie_spec=bowtie_spec))
@classmethod
def _needs_bowtie_index(cls):
raise NotImplementedError
@_quantifier
class _RSEM(_TranscriptomeBasedQuantifierBase):
QUANTIFY_ISOFORM_EXPRESSION = \
"rsem-calculate-expression --time {qualities_spec} --p " + \
"{num_threads} " + "{stranded_spec} {reads_spec} {ref_name} " + \
"rsem_sample"
REMOVE_OUTPUT_EXCEPT_ABUNDANCES = \
"find . -name \"rsem_sample*\"" + r" \! " + \
"-name rsem_sample.isoforms.results -type f -delete"
@classmethod
def get_name(cls):
return "RSEM"
@classmethod
def _needs_bowtie_index(cls):
return True
@classmethod
def write_quantification_commands(cls, writer, record_usage, params):
qualities_spec = "" if params[FASTQ_READS] else "--no-qualities"
reads_spec = params[SIMULATED_READS] if SIMULATED_READS in params \
else "--paired-end {l} {r}".format(
l=params[LEFT_SIMULATED_READS],
r=params[RIGHT_SIMULATED_READS])
stranded_spec = "--strand-specific" if params[STRANDED_READS] else ""
ref_name = cls._get_ref_name(params[QUANTIFIER_DIRECTORY])
cls._add_timed_quantification_command(
writer, record_usage,
cls.QUANTIFY_ISOFORM_EXPRESSION.format(
qualities_spec=qualities_spec,
reads_spec=reads_spec,
stranded_spec=stranded_spec,
ref_name=ref_name,
num_threads=params[NUM_THREADS]))
@classmethod
def write_cleanup(cls, writer):
writer.add_line(cls.REMOVE_OUTPUT_EXCEPT_ABUNDANCES)
def get_transcript_abundance(self, transcript_id):
if self.abundances is None:
self.abundances = pd.read_csv(
"rsem_sample.isoforms.results", delim_whitespace=True,
index_col="transcript_id")
return self.abundances.ix[transcript_id]["TPM"] \
if transcript_id in self.abundances.index else 0
@_quantifier
class _Express(_TranscriptomeBasedQuantifierBase):
MAP_READS_TO_TRANSCRIPT_REF = \
"bowtie {qualities_spec} -e 99999999 -l 25 -I 1 -X 1000 -a -S " + \
"-m 200 -p {num_threads} {stranded_spec} {ref_name} {reads_spec}"
CONVERT_SAM_TO_BAM = \
"samtools view -Sb - > hits.bam"
QUANTIFY_ISOFORM_EXPRESSION = \
"express {stranded_spec} {ref_name}.transcripts.fa hits.bam"
REMOVE_MAPPED_READS = \
"rm hits.bam"
REMOVE_OUTPUT_EXCEPT_ABUNDANCES = \
"rm params.xprs"
@classmethod
def get_name(cls):
return "Express"
@classmethod
def _needs_bowtie_index(cls):
return True
@classmethod
def write_quantification_commands(cls, writer, record_usage, params):
ref_name = cls._get_ref_name(params[QUANTIFIER_DIRECTORY])
qualities_spec = "-q" if params[FASTQ_READS] else "-f"
reads_spec = params[SIMULATED_READS] if SIMULATED_READS in params \
else "-1 {l} -2 {r}".format(
l=params[LEFT_SIMULATED_READS],
r=params[RIGHT_SIMULATED_READS])
bowtie_stranded_spec = "--norc" if params[STRANDED_READS] else ""
cls._add_timed_quantification_pipe(
writer, record_usage,
[cls.MAP_READS_TO_TRANSCRIPT_REF.format(
qualities_spec=qualities_spec,
stranded_spec=bowtie_stranded_spec,
ref_name=ref_name,
reads_spec=reads_spec,
num_threads=params[NUM_THREADS]),
cls.CONVERT_SAM_TO_BAM]
)
express_stranded_spec = \
("--f-stranded" if SIMULATED_READS in params
else "--fr-stranded") if params[STRANDED_READS] else ""
cls._add_timed_quantification_command(
writer, record_usage,
cls.QUANTIFY_ISOFORM_EXPRESSION.format(
stranded_spec=express_stranded_spec,
ref_name=ref_name))
@classmethod
def write_cleanup(cls, writer):
writer.add_line(cls.REMOVE_MAPPED_READS)
writer.add_line(cls.REMOVE_OUTPUT_EXCEPT_ABUNDANCES)
def get_transcript_abundance(self, transcript_id):
if self.abundances is None:
self.abundances = pd.read_csv(
"results.xprs", delim_whitespace=True, index_col="target_id")
return self.abundances.ix[transcript_id]["tpm"] \
if transcript_id in self.abundances.index else 0
@_quantifier
class _Sailfish(_TranscriptomeBasedQuantifierBase):
CREATE_TRANSCRIPT_INDEX = \
"sailfish index -p {num_threads} -t {ref_name}.transcripts.fa " + \
"-k 20 -o {index_dir}"
QUANTIFY_ISOFORM_EXPRESSION = \
"sailfish quant -p {num_threads} -i {index_dir} -l {library_spec} " + \
"{reads_spec} -o ."
FILTER_COMMENT_LINES = [
r"grep -v '^# \[' quant_bias_corrected.sf",
"sed -e 's/# //'i > quant_filtered.csv"
]
REMOVE_OUTPUT_EXCEPT_ABUNDANCES = \
"rm -rf logs quant_bias_corrected.sf quant.sf " + \
"reads.count_info reads.sfc"
@classmethod
def get_name(cls):
return "Sailfish"
@classmethod
def _needs_bowtie_index(cls):
return False
@classmethod
def _get_index_dir(cls, quantifier_dir):
return os.path.join(quantifier_dir, "sailfish", "index")
@classmethod
def write_preparatory_commands(cls, writer, record_usage, params):
# For convenience, we use a tool from the RSEM package to create the
# transcript reference
super(_Sailfish, cls).write_preparatory_commands(
writer, record_usage, params)
with writer.section():
writer.add_comment(
"Now create the Sailfish transcript index (this will only " +
"perform indexing if the index does not already exist).")
ref_name = cls._get_ref_name(params[QUANTIFIER_DIRECTORY])
index_dir = cls._get_index_dir(params[QUANTIFIER_DIRECTORY])
cls._add_timed_prequantification_command(
writer, record_usage,
cls.CREATE_TRANSCRIPT_INDEX.format(
ref_name=ref_name, index_dir=index_dir,
num_threads=params[NUM_THREADS]))
@classmethod
def write_quantification_commands(cls, writer, record_usage, params):
index_dir = cls._get_index_dir(params[QUANTIFIER_DIRECTORY])
library_spec = \
("\"T=SE:S=S\"" if params[STRANDED_READS]
else "\"T=SE:S=U\"") if SIMULATED_READS in params else \
("\"T=PE:O=><:S=SA\"" if params[STRANDED_READS]
else "\"T=PE:O=><:S=U\"")
reads_spec = "-r {r}".format(r=params[SIMULATED_READS]) \
if SIMULATED_READS in params \
else "-1 {l} -2 {r}".format(
l=params[LEFT_SIMULATED_READS],
r=params[RIGHT_SIMULATED_READS])
cls._add_timed_quantification_command(
writer, record_usage,
cls.QUANTIFY_ISOFORM_EXPRESSION.format(
index_dir=index_dir,
library_spec=library_spec,
reads_spec=reads_spec,
num_threads=params[NUM_THREADS]))
writer.add_pipe(*cls.FILTER_COMMENT_LINES)
@classmethod
def write_cleanup(cls, writer):
writer.add_line(cls.REMOVE_OUTPUT_EXCEPT_ABUNDANCES)
def get_transcript_abundance(self, transcript_id):
if self.abundances is None:
self.abundances = pd.read_csv(
"quant_filtered.csv", delim_whitespace=True,
index_col="Transcript")
return self.abundances.ix[transcript_id]["TPM"] \
if transcript_id in self.abundances.index else 0
@_quantifier
class _Salmon(_TranscriptomeBasedQuantifierBase):
CREATE_SALMON_TRANSCRIPT_INDEX = \
"salmon index -t {ref_name}.transcripts.fa -i {index_dir}"
QUANTIFY_ISOFORM_EXPRESSION = \
"salmon quant -p {num_threads} -i {index_dir} -l " + \
"{library_spec} {reads_spec} -o ."
FILTER_COMMENT_LINES = [
r"grep -v '^# \[\|salmon' quant.sf",
"sed -e 's/# //'i > quant_filtered.csv"
]
REMOVE_OUTPUT_EXCEPT_ABUNDANCES = \
"rm -rf logs quant.sf"
@classmethod
def get_name(cls):
return "Salmon"
@classmethod
def _needs_bowtie_index(cls):
return False
@classmethod
def _get_index_dir(cls, quantifier_dir):
return os.path.join(quantifier_dir, "salmon", "index")
@classmethod
def write_preparatory_commands(cls, writer, record_usage, params):
# We again use a tool from the RSEM package to create the transcript
# reference sequences
super(_Salmon, cls).write_preparatory_commands(
writer, record_usage, params)
with writer.section():
index_dir = cls._get_index_dir(params[QUANTIFIER_DIRECTORY])
with writer.if_block("! -d " + index_dir):
writer.add_comment("Now create the Salmon transcript index.")
ref_name = cls._get_ref_name(params[QUANTIFIER_DIRECTORY])
cls._add_timed_prequantification_command(
writer, record_usage,
cls.CREATE_SALMON_TRANSCRIPT_INDEX.format(
ref_name=ref_name, index_dir=index_dir))
@classmethod
def write_quantification_commands(cls, writer, record_usage, params):
index_dir = cls._get_index_dir(params[QUANTIFIER_DIRECTORY])
library_spec = "" if SIMULATED_READS in params else "I"
library_spec += "SF" if params[STRANDED_READS] else "U"
reads_spec = "-r {r}".format(r=params[SIMULATED_READS]) \
if SIMULATED_READS in params \
else "-1 {l} -2 {r}".format(
l=params[LEFT_SIMULATED_READS],
r=params[RIGHT_SIMULATED_READS])
cls._add_timed_quantification_command(
writer, record_usage,
cls.QUANTIFY_ISOFORM_EXPRESSION.format(
index_dir=index_dir,
library_spec=library_spec,
reads_spec=reads_spec,
num_threads=params[NUM_THREADS]))
writer.add_pipe(*cls.FILTER_COMMENT_LINES)
@classmethod
def write_cleanup(cls, writer):
writer.add_line(cls.REMOVE_OUTPUT_EXCEPT_ABUNDANCES)
def get_transcript_abundance(self, transcript_id):
if self.abundances is None:
self.abundances = pd.read_csv(
"quant_filtered.csv", delim_whitespace=True,
index_col="Name")
return self.abundances.ix[transcript_id]["TPM"] \
if transcript_id in self.abundances.index else 0
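The two-line `library_spec` construction in `_Salmon.write_quantification_commands` above assembles Salmon's library-type string from the paired-end and strandedness flags. A minimal sketch of that mapping (the helper function name is mine, not part of the original code):

```python
# Sketch of the Salmon library-type string built above: paired-end reads get
# an "I" (inward orientation) prefix, followed by "SF" (stranded, forward)
# or "U" (unstranded).
def salmon_library_spec(paired_end, stranded):
    spec = "I" if paired_end else ""
    spec += "SF" if stranded else "U"
    return spec

print(salmon_library_spec(True, True))    # ISF
print(salmon_library_spec(False, False))  # U
```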
| lweasel/piquant | piquant/quantifiers.py | Python | mit | 19,357 | [
"Bowtie"
] | a87f38ae1b01936e05102f7d09bcf45678409a06085d8ef6480fb8fdbabbc8f9 |
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have purchased from
# Numenta, Inc. a separate commercial license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""
This module provides utility classes and functions for use inside permutation
scripts.
"""
import numpy
import random
from nupic.support.configuration import Configuration
class PermuteVariable(object):
"""The base class of all PermuteXXX classes that can be used from within
a permutation script."""
def __init__(self):
pass
def getState(self):
"""Return the current state of this particle. This is used for
communicating our state into a model record entry so that it can be
instantiated on another worker."""
raise NotImplementedError
def setState(self, state):
"""Set the current state of this particle. This is counterpart to getState.
"""
raise NotImplementedError
def getPosition(self):
"""Return the current position of this variable. For integer variables,
the returned position is rounded to the nearest integer.
Parameters:
--------------------------------------------------------------
retval: current position
"""
raise NotImplementedError
def agitate(self):
"""This causes the variable to jiggle away from its current position.
It does this by increasing its velocity by a multiplicative factor.
Every time agitate() is called, the velocity will increase. In this way,
you can call agitate over and over again until the variable reaches a
new position."""
raise NotImplementedError
#=========================================================================
def newPosition(self, globalBestPosition, rng):
"""Choose a new position based on results obtained so far from other
particles and the passed in globalBestPosition.
Parameters:
--------------------------------------------------------------
globalBestPosition: global best position for this colony
rng: instance of random.Random() used for generating
random numbers
retval: new position
"""
raise NotImplementedError
def pushAwayFrom(self, otherVars, rng):
"""Choose a new position that is as far away as possible from all
'otherVars', where 'otherVars' is a list of PermuteVariable instances.
Parameters:
--------------------------------------------------------------
otherVars: list of other PermuteVariables to push away from
rng: instance of random.Random() used for generating
random numbers
"""
raise NotImplementedError
def resetVelocity(self, rng):
"""Reset the velocity to be some fraction of the total distance. This
is called usually when we start a new swarm and want to start at the
previous best position found in the previous swarm but with a
velocity which is a known fraction of the total distance between min
and max.
Parameters:
--------------------------------------------------------------
rng: instance of random.Random() used for generating
random numbers
"""
raise NotImplementedError
class PermuteFloat(PermuteVariable):
"""Define a permutation variable which can take on floating point values."""
def __init__(self, min, max, stepSize=None, inertia=None, cogRate=None,
socRate=None):
"""Construct a variable that permutes over floating point values using
the Particle Swarm Optimization (PSO) algorithm. See descriptions of
PSO (i.e. http://en.wikipedia.org/wiki/Particle_swarm_optimization)
for references to the inertia, cogRate, and socRate parameters.
Parameters:
-----------------------------------------------------------------------
min: min allowed value of position
max: max allowed value of position
stepSize: if not None, the position must be at min + N * stepSize,
where N is an integer
inertia: The inertia for the particle.
cogRate: This parameter controls how much the particle is affected
by its distance from its local best position
socRate: This parameter controls how much the particle is affected
by its distance from the global best position
"""
super(PermuteFloat, self).__init__()
self.min = min
self.max = max
self.stepSize = stepSize
# The particle's initial position and velocity.
self._position = (self.max + self.min) / 2.0
self._velocity = (self.max - self.min) / 5.0
# The inertia, cognitive, and social components of the particle
self._inertia = (float(Configuration.get("nupic.hypersearch.inertia"))
if inertia is None else inertia)
self._cogRate = (float(Configuration.get("nupic.hypersearch.cogRate"))
if cogRate is None else cogRate)
self._socRate = (float(Configuration.get("nupic.hypersearch.socRate"))
if socRate is None else socRate)
# The particle's local best position and the best global position.
self._bestPosition = self.getPosition()
self._bestResult = None
def __repr__(self):
"""See comments in base class."""
return ("PermuteFloat(min=%f, max=%f, stepSize=%s) [position=%f(%f), "
"velocity=%f, _bestPosition=%s, _bestResult=%s]" % (
self.min, self.max, self.stepSize, self.getPosition(),
self._position, self._velocity, self._bestPosition,
self._bestResult))
def getState(self):
"""See comments in base class."""
return dict(_position = self._position,
position = self.getPosition(),
velocity = self._velocity,
bestPosition = self._bestPosition,
bestResult = self._bestResult)
def setState(self, state):
"""See comments in base class."""
self._position = state['_position']
self._velocity = state['velocity']
self._bestPosition = state['bestPosition']
self._bestResult = state['bestResult']
def getPosition(self):
"""See comments in base class."""
if self.stepSize is None:
return self._position
# Find nearest step
numSteps = (self._position - self.min) / self.stepSize
numSteps = int(round(numSteps))
position = self.min + (numSteps * self.stepSize)
position = max(self.min, position)
position = min(self.max, position)
return position
def agitate(self):
"""See comments in base class."""
# Increase velocity enough that it will be higher the next time
# newPosition() is called. We know that newPosition multiplies by inertia,
# so take that into account.
self._velocity *= 1.5 / self._inertia
# Clip velocity
maxV = (self.max - self.min)/2
if self._velocity > maxV:
self._velocity = maxV
elif self._velocity < -maxV:
self._velocity = -maxV
# If we are at the max or min, reverse direction
if self._position == self.max and self._velocity > 0:
self._velocity *= -1
if self._position == self.min and self._velocity < 0:
self._velocity *= -1
def newPosition(self, globalBestPosition, rng):
"""See comments in base class."""
# First, update the velocity. The new velocity is given as:
# v = (inertia * v) + (cogRate * r1 * (localBest-pos))
# + (socRate * r2 * (globalBest-pos))
#
# where r1 and r2 are random numbers between 0 and 1.0
lb=float(Configuration.get("nupic.hypersearch.randomLowerBound"))
ub=float(Configuration.get("nupic.hypersearch.randomUpperBound"))
self._velocity = (self._velocity * self._inertia + rng.uniform(lb, ub) *
self._cogRate * (self._bestPosition - self.getPosition()))
if globalBestPosition is not None:
self._velocity += rng.uniform(lb, ub) * self._socRate * (
globalBestPosition - self.getPosition())
# update position based on velocity
self._position += self._velocity
# Clip it
self._position = max(self.min, self._position)
self._position = min(self.max, self._position)
# Return it
return self.getPosition()
def pushAwayFrom(self, otherPositions, rng):
"""See comments in base class."""
# If min and max are the same, nothing to do
if self.max == self.min:
return
# How many potential other positions to evaluate?
numPositions = len(otherPositions) * 4
if numPositions == 0:
return
# Assign a weight to each potential position based on how close it is
# to other particles.
stepSize = float(self.max-self.min) / numPositions
positions = numpy.arange(self.min, self.max + stepSize, stepSize)
# Get rid of duplicates.
numPositions = len(positions)
weights = numpy.zeros(numPositions)
# Assign a weight to each potential position, based on a gaussian falloff
# from each existing variable. The weight of a variable to each potential
# position is given as:
# e ^ -(dist^2/stepSize^2)
maxDistanceSq = -1 * (stepSize ** 2)
for pos in otherPositions:
distances = pos - positions
varWeights = numpy.exp(numpy.power(distances, 2) / maxDistanceSq)
weights += varWeights
# Put this particle at the position with smallest weight.
positionIdx = weights.argmin()
self._position = positions[positionIdx]
# Set its best position to this.
self._bestPosition = self.getPosition()
# Give it a random direction.
self._velocity *= rng.choice([1, -1])
def resetVelocity(self, rng):
"""See comments in base class."""
maxVelocity = (self.max - self.min) / 5.0
self._velocity = maxVelocity #min(abs(self._velocity), maxVelocity)
self._velocity *= rng.choice([1, -1])
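The velocity/position update that `PermuteFloat.newPosition` implements can be exercised in isolation. The sketch below uses made-up constants for the inertia, cognitive rate, social rate, and random bounds, rather than the values the class reads from `Configuration`:

```python
import random

# Standalone sketch of the particle update in PermuteFloat.newPosition:
#   v = inertia*v + cogRate*r1*(localBest - pos) + socRate*r2*(globalBest - pos)
# The constants and the (0.9, 1.1) random bounds are illustrative only.
def pso_step(pos, velocity, local_best, global_best, rng,
             inertia=0.25, cog_rate=0.25, soc_rate=1.0,
             min_val=0.0, max_val=10.0):
    velocity = (inertia * velocity
                + cog_rate * rng.uniform(0.9, 1.1) * (local_best - pos)
                + soc_rate * rng.uniform(0.9, 1.1) * (global_best - pos))
    # Clip the new position to the allowed range, as newPosition does.
    pos = min(max_val, max(min_val, pos + velocity))
    return pos, velocity

rng = random.Random(42)
pos, vel = 5.0, 2.0
for _ in range(100):
    pos, vel = pso_step(pos, vel, local_best=8.0, global_best=8.0, rng=rng)
print(round(pos, 3))  # should have converged very close to 8.0
```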
class PermuteInt(PermuteFloat):
"""Define a permutation variable which can take on integer values."""
def __init__(self, min, max, stepSize=1, inertia=None, cogRate=None,
socRate=None):
super(PermuteInt, self).__init__(min, max, stepSize, inertia=inertia,
cogRate=cogRate, socRate=socRate)
def __repr__(self):
"""See comments in base class."""
return ("PermuteInt(min=%d, max=%d, stepSize=%d) [position=%d(%f), "
"velocity=%f, _bestPosition=%s, _bestResult=%s]" % (
self.min, self.max, self.stepSize, self.getPosition(),
self._position, self._velocity, self._bestPosition,
self._bestResult))
def getPosition(self):
"""See comments in base class."""
position = super(PermuteInt, self).getPosition()
position = int(round(position))
return position
class PermuteChoices(PermuteVariable):
"""Define a permutation variable which can take on discrete choices."""
def __init__(self, choices, fixEarly=False):
super(PermuteChoices, self).__init__()
self.choices = choices
self._positionIdx = 0
# Keep track of the results obtained for each choice
self._resultsPerChoice = [[] for _ in self.choices]  # independent lists, not N aliases of one list
# The particle's local best position and the best global position
self._bestPositionIdx = self._positionIdx
self._bestResult = None
# If this is true then we only return the best position for this encoder
# after all choices have been seen.
self._fixEarly = fixEarly
# Factor that affects how quickly we asymptote to simply choosing the
# choice with the best error value
self._fixEarlyFactor = .7
def __repr__(self):
"""See comments in base class."""
return "PermuteChoices(choices=%s) [position=%s]" % (self.choices,
self.choices[self._positionIdx])
def getState(self):
"""See comments in base class."""
return dict(_position = self.getPosition(),
position = self.getPosition(),
velocity = None,
bestPosition = self.choices[self._bestPositionIdx],
bestResult = self._bestResult)
def setState(self, state):
"""See comments in base class."""
self._positionIdx = self.choices.index(state['_position'])
self._bestPositionIdx = self.choices.index(state['bestPosition'])
self._bestResult = state['bestResult']
def setResultsPerChoice(self, resultsPerChoice):
"""Setup our resultsPerChoice history based on the passed in
resultsPerChoice.
For example, if this variable has the following choices:
['a', 'b', 'c']
resultsPerChoice will have up to 3 elements, each element is a tuple
containing (choiceValue, errors) where errors is the list of errors
received from models that used the specific choice:
retval:
[('a', [0.1, 0.2, 0.3]), ('b', [0.5, 0.1, 0.6]), ('c', [0.2])]
"""
# Keep track of the results obtained for each choice.
self._resultsPerChoice = [[] for _ in self.choices]  # one independent list per choice
for (choiceValue, values) in resultsPerChoice:
choiceIndex = self.choices.index(choiceValue)
self._resultsPerChoice[choiceIndex] = list(values)
def getPosition(self):
"""See comments in base class."""
return self.choices[self._positionIdx]
def agitate(self):
"""See comments in base class."""
# Not sure what to do for choice variables....
# TODO: figure this out
pass
def newPosition(self, globalBestPosition, rng):
"""See comments in base class."""
# Compute the mean score per choice.
numChoices = len(self.choices)
meanScorePerChoice = []
overallSum = 0
numResults = 0
for i in range(numChoices):
if len(self._resultsPerChoice[i]) > 0:
data = numpy.array(self._resultsPerChoice[i])
meanScorePerChoice.append(data.mean())
overallSum += data.sum()
numResults += data.size
else:
meanScorePerChoice.append(None)
if numResults == 0:
overallSum = 1.0
numResults = 1
# For any choices we don't have a result for yet, set to the overall mean.
for i in range(numChoices):
if meanScorePerChoice[i] is None:
meanScorePerChoice[i] = overallSum / numResults
# Now, pick a new choice based on the above probabilities. Note that the
# best result is the lowest result. We want to make it more likely to
# pick the choice that produced the lowest results. So, we need to invert
# the scores (someLargeNumber - score).
meanScorePerChoice = numpy.array(meanScorePerChoice)
# Invert meaning.
meanScorePerChoice = (1.1 * meanScorePerChoice.max()) - meanScorePerChoice
# If you want the scores to quickly converge to the best choice, raise the
# results to a power. This will cause lower scores to become lower
# probability as you see more results, until it eventually should
# asymptote to only choosing the best choice.
if self._fixEarly:
meanScorePerChoice **= (numResults * self._fixEarlyFactor / numChoices)
# Normalize.
total = meanScorePerChoice.sum()
if total == 0:
total = 1.0
meanScorePerChoice /= total
# Get distribution and choose one based on those probabilities.
distribution = meanScorePerChoice.cumsum()
r = rng.random() * distribution[-1]
choiceIdx = numpy.where(r <= distribution)[0][0]
self._positionIdx = choiceIdx
return self.getPosition()
def pushAwayFrom(self, otherPositions, rng):
"""See comments in base class."""
# Get the count of how many in each position
positions = [self.choices.index(x) for x in otherPositions]
positionCounts = [0] * len(self.choices)
for pos in positions:
positionCounts[pos] += 1
self._positionIdx = numpy.array(positionCounts).argmin()
self._bestPositionIdx = self._positionIdx
def resetVelocity(self, rng):
"""See comments in base class."""
pass
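The score-inversion sampling in `PermuteChoices.newPosition` (weight each choice by `1.1 * max_score - score`, then draw from the cumulative distribution) can be sketched in pure Python, mirroring the numpy logic above. The error values are invented for illustration:

```python
import random

# Pure-Python sketch of the inverted-score sampling in PermuteChoices.newPosition:
# each score is inverted as (1.1 * max_score - score) so the choice with the
# lowest mean error gets the highest selection probability.
def sample_choice(choices, mean_errors, rng):
    top = 1.1 * max(mean_errors)
    weights = [top - e for e in mean_errors]   # low error -> high weight
    r = rng.random() * sum(weights)
    cumulative = 0.0
    for choice, w in zip(choices, weights):
        cumulative += w
        if r <= cumulative:
            return choice
    return choices[-1]  # guard against floating-point rounding

rng = random.Random(42)
counts = {"a": 0, "b": 0, "c": 0}
for _ in range(3000):
    counts[sample_choice(["a", "b", "c"], [0.1, 0.5, 0.9], rng)] += 1
# "a" (lowest error) should be drawn most often.
print(counts["a"] > counts["b"] > counts["c"])  # True
```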
class PermuteEncoder(PermuteVariable):
""" A permutation variable that defines a field encoder. This serves as
a container for the encoder constructor arguments.
"""
def __init__(self, fieldName, encoderClass, name=None, **kwArgs):
super(PermuteEncoder, self).__init__()
self.fieldName = fieldName
if name is None:
name = fieldName
self.name = name
self.encoderClass = encoderClass
# Possible values in kwArgs include: w, n, minval, maxval, etc.
self.kwArgs = dict(kwArgs)
def __repr__(self):
"""See comments in base class."""
suffix = ""
for key, value in self.kwArgs.items():
suffix += "%s=%s, " % (key, value)
return "PermuteEncoder(fieldName=%s, encoderClass=%s, name=%s, %s)" % (
(self.fieldName, self.encoderClass, self.name, suffix))
def getDict(self, encoderName, flattenedChosenValues):
""" Return a dict that can be used to construct this encoder. This dict
can be passed directly to the addMultipleEncoders() method of the
multi encoder.
Parameters:
----------------------------------------------------------------------
encoderName: name of the encoder
flattenedChosenValues: dict of the flattened permutation variables. Any
variables within this dict whose key starts
with encoderName will be substituted for
encoder constructor args which are being
permuted over.
"""
encoder = dict(fieldname=self.fieldName,
name=self.name)
# Get the position of each encoder argument
for encoderArg, value in self.kwArgs.iteritems():
# If a permuted variable, get its chosen value.
if isinstance(value, PermuteVariable):
value = flattenedChosenValues["%s:%s" % (encoderName, encoderArg)]
encoder[encoderArg] = value
# Special treatment for DateEncoder timeOfDay and dayOfWeek stuff. In the
# permutations file, the class can be one of:
# DateEncoder.timeOfDay
# DateEncoder.dayOfWeek
# DateEncoder.season
# If one of these, we need to intelligently set the constructor args.
if '.' in self.encoderClass:
(encoder['type'], argName) = self.encoderClass.split('.')
argValue = (encoder['w'], encoder['radius'])
encoder[argName] = argValue
encoder.pop('w')
encoder.pop('radius')
else:
encoder['type'] = self.encoderClass
return encoder
class Tests(object):
def _testValidPositions(self, varClass, minValue, maxValue, stepSize,
iterations=100):
"""Run a bunch of iterations on a PermuteVar and collect which positions
were visited. Verify that they were all valid.
"""
positions = set()
cogRate = 2.0
socRate = 2.0
inertia = None
gBestPosition = maxValue
lBestPosition = minValue
foundBestPosition = None
foundBestResult = None
rng = random.Random()
rng.seed(42)
var = varClass(min=minValue, max=maxValue, stepSize=stepSize,
inertia=inertia, cogRate=cogRate, socRate=socRate)
for _ in xrange(iterations):
pos = var.getPosition()
if self.verbosity >= 1:
print "pos: %f" % (pos),
if self.verbosity >= 2:
print var
positions.add(pos)
# Set the result so that the local best is at lBestPosition.
result = 1.0 - abs(pos - lBestPosition)
if foundBestResult is None or result > foundBestResult:
foundBestResult = result
foundBestPosition = pos
state = var.getState()
state['bestPosition'] = foundBestPosition
state['bestResult'] = foundBestResult
var.setState(state)
var.newPosition(gBestPosition, rng)
positions = sorted(positions)
print "Positions visited (%d):" % (len(positions)), positions
# Validate positions.
assert max(positions) <= maxValue
assert min(positions) >= minValue
assert len(positions) <= int(round((maxValue - minValue) / stepSize)) + 1
def _testConvergence(self, varClass, minValue, maxValue, targetValue,
iterations=100):
"""Test that we can converge on the right answer."""
gBestPosition = targetValue
lBestPosition = targetValue
foundBestPosition = None
foundBestResult = None
rng = random.Random()
rng.seed(42)
var = varClass(min=minValue, max=maxValue)
for _ in xrange(iterations):
pos = var.getPosition()
if self.verbosity >= 1:
print "pos: %f" % (pos),
if self.verbosity >= 2:
print var
# Set the result so that the local best is at lBestPosition.
result = 1.0 - abs(pos - lBestPosition)
if foundBestResult is None or result > foundBestResult:
foundBestResult = result
foundBestPosition = pos
state = var.getState()
state['bestPosition'] = foundBestPosition
state['bestResult'] = foundBestResult
var.setState(state)
var.newPosition(gBestPosition, rng)
# Test that we reached the target.
print "Target: %f, Converged on: %f" % (targetValue, pos)
assert abs(pos-targetValue) < 0.001
def _testChoices(self):
pc = PermuteChoices(['0', '1', '2', '3'])
counts = [0] * 4
rng = random.Random()
rng.seed(42)
# Check that, without any results, the choices are chosen uniformly.
for _ in range(1000):
pos = int(pc.newPosition(None, rng))
counts[pos] += 1
for count in counts:
assert count < 270 and count > 230
print "No results permuteChoice test passed"
# Check that with some results the choices are chosen with the lower
# errors being chosen more often.
choices = ['1', '11', '21', '31']
pc = PermuteChoices(choices)
resultsPerChoice = []
counts = dict()
for choice in choices:
resultsPerChoice.append((choice, [float(choice)]))
counts[choice] = 0
pc.setResultsPerChoice(resultsPerChoice)
rng = random.Random()
rng.seed(42)
# Now that results have been set, draw choices and count them.
for _ in range(1000):
choice = pc.newPosition(None, rng)
counts[choice] += 1
# Make sure that as the error goes up, the number of times the choice is
# seen goes down.
prevCount = 1001
for choice in choices:
assert prevCount > counts[choice]
prevCount = counts[choice]
print "Results permuteChoice test passed"
# Check that with fixEarly as you see more data points you begin heavily
# biasing the probabilities to the one with the lowest error.
choices = ['1', '11', '21', '31']
pc = PermuteChoices(choices, fixEarly=True)
resultsPerChoiceDict = dict()
counts = dict()
for choice in choices:
resultsPerChoiceDict[choice] = (choice, [])
counts[choice] = 0
# The count of the highest probability entry, this should go up as more
# results are seen.
prevLowestErrorCount = 0
for _ in range(10):
for choice in choices:
resultsPerChoiceDict[choice][1].append(float(choice))
counts[choice] = 0
pc.setResultsPerChoice(resultsPerChoiceDict.values())
rng = random.Random()
rng.seed(42)
# Draw choices using the current results and count them.
for _ in range(1000):
choice = pc.newPosition(None, rng)
counts[choice] += 1
# Make sure that as the error goes up, the number of times the choice is
# seen goes down.
assert prevLowestErrorCount < counts['1']
prevLowestErrorCount = counts['1']
print("Fix early permuteChoice test passed")
def run(self):
"""Run unit tests on this module."""
# Set the verbosity level.
self.verbosity = 0
# ------------------------------------------------------------------------
# Test that step size is handled correctly for floats
self._testValidPositions(varClass=PermuteFloat, minValue=2.1,
maxValue=5.1, stepSize=0.5)
# ------------------------------------------------------------------------
# Test that step size is handled correctly for ints
self._testValidPositions(varClass=PermuteInt, minValue=2,
maxValue=11, stepSize=3)
# ------------------------------------------------------------------------
# Test that a step size of 1 is handled correctly for ints
self._testValidPositions(varClass=PermuteInt, minValue=2,
maxValue=11, stepSize=1)
# ------------------------------------------------------------------------
# Test that we can converge on a target value
# Using Float
self._testConvergence(varClass=PermuteFloat, minValue=2.1,
maxValue=5.1, targetValue=5.0)
self._testConvergence(varClass=PermuteFloat, minValue=2.1,
maxValue=5.1, targetValue=2.2)
self._testConvergence(varClass=PermuteFloat, minValue=2.1,
maxValue=5.1, targetValue=3.5)
# Using int
self._testConvergence(varClass=PermuteInt, minValue=1,
maxValue=20, targetValue=19)
self._testConvergence(varClass=PermuteInt, minValue=1,
maxValue=20, targetValue=1)
# Test permute choices
self._testChoices()
################################################################################
if __name__ == '__main__':
# Run all tests
tests = Tests()
tests.run()
| Petr-Kovalev/nupic-win32 | py/nupic/swarming/permutationhelpers.py | Python | gpl-3.0 | 26,130 | [
"Gaussian"
] | 66e6cbfa246b66cb4c27beeefca9153ee5c953545d5f351690bf9567fc9b01b8 |
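The PermuteChoices tests above assert that choices with lower recorded error are drawn more often. The following is a minimal, self-contained sketch of such error-weighted sampling — a hypothetical stand-in for illustration, not NuPIC's actual `PermuteChoices` implementation:

```python
# Hypothetical sketch of error-weighted choice selection: lower mean error
# should translate into a higher probability of being drawn, mirroring what
# the PermuteChoices tests above assert.  Not NuPIC's actual algorithm.
import random

def weighted_choice(errors_per_choice, rng):
    # Convert mean errors into selection weights: lower error -> higher weight.
    means = {c: sum(errs) / len(errs) for c, errs in errors_per_choice.items()}
    max_err = max(means.values())
    # Add 1.0 so even the worst choice keeps a nonzero chance of being drawn.
    weights = {c: (max_err - m) + 1.0 for c, m in means.items()}
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for choice, w in weights.items():
        acc += w
        if r < acc:
            return choice
    return choice  # fallback for floating-point edge cases

rng = random.Random(42)
errors = {'1': [1.0], '11': [11.0], '21': [21.0], '31': [31.0]}
counts = {c: 0 for c in errors}
for _ in range(10000):
    counts[weighted_choice(errors, rng)] += 1
# As in the test above: counts should decrease as recorded error increases.
prev = float('inf')
for c in ['1', '11', '21', '31']:
    assert counts[c] < prev
    prev = counts[c]
```

With a fixed seed the draw sequence is deterministic, so the monotonicity check is repeatable.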
# Copyright (C) 2011-2012 CRS4.
#
# This file is part of Seal.
#
# Seal is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Seal is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Seal. If not, see <http://www.gnu.org/licenses/>.
import os
import unittest
from seal.lib.io.sam_formatter import SamFormatter
from seal.lib.aligner.bwa.bwa_aligner import BwaAligner
class MappingsCollector(object):
def __init__(self):
self.mappings = []
self.formatter = SamFormatter()
def process(self, pair):
self.mappings.extend(map(self.formatter.format, pair))
class TestBwaAligner(unittest.TestCase):
def setUp(self):
self.aligner = BwaAligner()
test_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
self.aligner.reference = os.path.join(test_dir, 'seal', 'mini_ref_fixture', 'mini_ref.fasta')
self.aligner.hit_visitor = MappingsCollector()
self.aligner.qformat = "fastq-sanger"
self.pair = (
"HWI-ST301L:236:C0EJ5ACXX:1:1101:18292:2904",
"GGGAGGTGTTAGGGACAAGCCTGGAGGCAGCATGCGTCACTCCCATGCAGAGTCCATTGGCCAATGCTGGCTCCGATGGCCACATCTCACTCCAGGGGCAG",
"?@@B?<=AADFCFH@FB?EFEGAAFGEEGEGHCGEGIGH?B?CGEFHGIIGAEEEEHEAEEEH937;;@3=;>@8;?8;9A:<A#################",
"AATAGAATGTAATATAATATATGTAAAACACCAGGTGCCTAACCTGGCACAGAGCAGGAGGGCTAAGCATGACATCCAGCACGTGGTCAGTGGAATCCAGT",
"@@@DFDDDBHDD<EHEHIFEEB<IHIEGHDFEH?B:CBEHICEGCGGIIGFGCFCE@FAFEGAAGHIIHF;A?DBDFB);@@35;?,;@35(:5:ACCC<>")
def test_pair(self):
self.aligner.load_pair_record(self.pair)
self.aligner.run_alignment()
self.aligner.clear_batch()
results = sorted(self.aligner.hit_visitor.mappings)
self.assertEqual(
"HWI-ST301L:236:C0EJ5ACXX:1:1101:18292:2904 133 chr1 24762 0 * = 24762 0 AATAGAATGTAATATAATATATGTAAAACACCAGGTGCCTAACCTGGCACAGAGCAGGAGGGCTAAGCATGACATCCAGCACGTGGTCAGTGGAATCCAGT @@@DFDDDBHDD<EHEHIFEEB<IHIEGHDFEH?B:CBEHICEGCGGIIGFGCFCE@FAFEGAAGHIIHF;A?DBDFB);@@35;?,;@35(:5:ACCC<>",
results[0])
self.assertEqual(
"HWI-ST301L:236:C0EJ5ACXX:1:1101:18292:2904 73 chr1 24762 37 101M = 24762 0 GGGAGGTGTTAGGGACAAGCCTGGAGGCAGCATGCGTCACTCCCATGCAGAGTCCATTGGCCAATGCTGGCTCCGATGGCCACATCTCACTCCAGGGGCAG ?@@B?<=AADFCFH@FB?EFEGAAFGEEGEGHCGEGIGH?B?CGEFHGIIGAEEEEHEAEEEH937;;@3=;>@8;?8;9A:<A################# XT:A:U NM:i:2 SM:i:37 AM:i:0 X0:i:1 X1:i:0 XM:i:2 XO:i:0 XG:i:0 MD:Z:7T83G9",
results[1])
def suite():
"""Get a suite with all the tests from this module"""
return unittest.TestLoader().loadTestsFromTestCase(TestBwaAligner)
if __name__ == '__main__':
unittest.TextTestRunner(verbosity=2).run(suite())
| QwertyManiac/seal-cdh4 | tests/seal/lib/aligner/test_bwa_aligner.py | Python | gpl-3.0 | 3,040 | [
"BWA"
] | 2c8b54e9117c4f5d9e1e653c662ed0898d96f2e6a35c810d2d11a055f3a314c3 |
# Define constants
import os
script_dir = os.path.dirname(os.path.realpath(__file__)).replace("/utils", "")
available_snp_callers = ["bcftools", "freebayes"]
analysis_types = ["trim",
"align",
"merge",
"snps",
"indels",
"transposons",
"test"]
tools = ["bwa",
"samtools",
"freebayes",
"bcftools",
"picard"]
if os.uname()[0] == "Darwin":
LOCAL = True
echo = "gecho"
xargs = "gxargs"
sort = "gsort"
run = "python"
output_dirs = ""
stream_fq = "gunzip -kfc"
else:
run = "sbatch"
echo = "echo"
sort = "sort"
LOCAL = False
xargs = "xargs"
stream_fq = "zcat"
| AndersenLab/pyPipeline | utils/constants.py | Python | mit | 751 | [
"BWA"
] | 53f3f38073eec11ffdfdeaaf1ad8e621931ad9ad1b3e083bd09e64c39be57c4b |
from neuron import h, gui
import math
import random
#neuron.load_mechanisms("./mod")
class cfiber(object):
'''
C-fiber class with parameters:
L: int (microns)
length of compartment
d: float
diameter of fiber
num: int
number of compartments
coordinates: dict (updated by position())
coordinates of each section
zpozition: int
z - coordinate for few cells simulation
fast_diff: bool
Is there fast diffusion?
-Yes: True
-No: False
diffs: list
list of diffusion mechanisms (NEURON objects)
recs: list
list of receptor mechanisms (NEURON objects)
'''
def __init__(self, L, d, zpozition, x_application, fast_diff, numofmodel):
self.coordinates = dict()
self.distances = dict()
self.diffusions = dict()
self.fast_diff = fast_diff
self.diffs = []
self.recs = []
self.L = L
self.diam = d
self.zpozition = zpozition
self.x_application = x_application
self.numofmodel = numofmodel
if self.numofmodel == 11 or self.numofmodel == 12:
self.num = 170
else:
self.num = 120
self.create_sections()
self.build_topology()
self.build_subsets()
self.define_geometry()
self.position()
self.distance()
self.define_biophysics()
def create_sections(self):
'''
Creates sections (compartments)
'''
self.branch = h.Section(name='branch', cell=self)
self.stimsec = [h.Section(name='stimsec[%d]' % i) for i in range(self.num)]
def build_topology(self):
'''
Connects sections
'''
self.stimsec[0].connect(self.branch(0), 1)
if self.numofmodel == 11 or self.numofmodel == 12:
for i in range(1, 70):
self.stimsec[i].connect(self.stimsec[i-1])
for i in range(70, 120):
self.stimsec[i].connect(self.stimsec[i-1])
for i in range(120, 170):
self.stimsec[i].connect(self.stimsec[i-1])
self.stimsec[70].connect(self.stimsec[69])
self.stimsec[120].connect(self.stimsec[69])
else:
for i in range(1, len(self.stimsec)):
self.stimsec[i].connect(self.stimsec[i-1])
def define_geometry(self):
'''
Adds length and diameter to sections
'''
for sec in self.stimsec:
sec.L = self.L# microns
sec.diam = self.diam # microns
self.branch.L = self.L
self.branch.diam = self.diam
self.branch.nseg = 1
h.define_shape() # Translate into 3D points.
def position(self):
'''
Adds 3D position
'''
if self.numofmodel == 11 or self.numofmodel == 12:
h.pt3dclear()
h.pt3dadd(0, 0, self.zpozition, self.diam)
h.pt3dadd(self.L, 0, self.zpozition, self.diam)
xyz = dict(x=self.L, y=0, z=0)
self.coordinates.update({self.branch: xyz})
for i in range(70):
h.pt3dclear()
h.pt3dadd(self.L*(i+1), 0, self.zpozition, self.diam)
h.pt3dadd(self.L*(i+2), 0, self.zpozition, self.diam)
xyz = dict(x=self.L*(i+2), y=0, z=0)
self.coordinates.update({self.stimsec[i]: xyz})
for i in range(70, 120):
h.pt3dclear()
h.pt3dadd(self.L*(i+1), i*8, self.zpozition, self.diam)
h.pt3dadd(self.L*(i+2), (i+1)*8, self.zpozition, self.diam)
xyz = dict(x=self.L*(i+2), y=(i+1)*8, z=0)
self.coordinates.update({self.stimsec[i]: xyz})
for i in range(120, 170):
h.pt3dclear()
h.pt3dadd(self.L*(i+1), i*(-8), self.zpozition, self.diam)
h.pt3dadd(self.L*(i+2), (i+1)*(-8), self.zpozition, self.diam)
xyz = dict(x=self.L*(i+2), y=(i+1)*(-8), z=0)
self.coordinates.update({self.stimsec[i]: xyz})
else:
i = 0
for sec in self.all:
h.pt3dclear()
h.pt3dadd(self.L*i, 0, self.zpozition, self.diam)
h.pt3dadd(self.L*(i+1), 0, self.zpozition, self.diam)
xyz = dict(x=self.L*(i+1), y=0, z=0)
self.coordinates.update({sec: xyz})
i+=1
def distance(self):
'''
Adds distances from application for every compartment
'''
#self.distances.clear()
for compartment in self.all:
distance = math.sqrt((self.x_application-self.coordinates.get(compartment).get('x'))**2 + (50-self.coordinates.get(compartment).get('y'))**2 + (0.01-self.coordinates.get(compartment).get('z'))**2)
self.distances.update({compartment: distance})
def define_biophysics(self):
'''
Adds channels and their parameters
'''
for sec in self.all: # 'all' defined in build_subsets
sec.Ra = 35 # Axial resistance in Ohm * cm
sec.cm = 1 # Membrane capacitance in micro Farads / cm^2
sec.insert('navv1p8')
sec.insert('extrapump')
sec.insert('koi')
sec.insert('naoi')
sec.insert('nakpump')
sec.insert('nattxs')
sec.insert('kdr')
sec.insert('kad')
sec.insert('kap')
sec.insert('leak')
sec.insert('Nav1_3')
sec.insert('extracellular')
if self.numofmodel == 8 or self.numofmodel >= 11:
sec.gbar_navv1p8 = 0.2
elif self.numofmodel == 7:
sec.gbar_navv1p8 = 0.1
else:
sec.gbar_navv1p8 = 0
sec.gbar_kdr = 0.01
sec.gbar_kad = 0.1
sec.gbar_kap = 0.1
if self.numofmodel == 6:
sec.gbar_nattxs = 0.2
else:
sec.gbar_nattxs = 0.1
sec.gbar_Nav1_3 = 0.2
sec.smalla_nakpump = -0.0047891
sec.theta_naoi = 0.029
sec.theta_koi = 0.029
sec.celsiusT_nattxs = 37
sec.celsiusT_navv1p8 = 37
sec.celsiusT_nakpump = 37
for sec in self.stimsec:
# self.add_P2Xreceptors(sec, 10, 12)
self.add_5HTreceptors(sec, 10, 9)
# self.add_5HTreceptors(sec, 80, 3)
def add_P2Xreceptors(self, compartment, time, g):
'''
Adds P2X3 receptors
Parameters
----------
compartment: section of NEURON cell
part of neuron
time: int (ms)
time of ATP application
g: float
receptor conductance
'''
if self.fast_diff:
diff = h.AtP_42(compartment(0.5))
diff.h = self.distances.get(compartment)
diff.tx1 = time
diff.Deff = 0.8
diff.c0cleft = 1
if self.numofmodel == 1:
diff.k = 0
else:
diff.k = 0.01
else:
diff = h.AtP_slow(compartment(0.5))
diff.h = self.distances.get(compartment)
diff.tx1 = time + 0 + (diff.h/1250)*1000
diff.c0cleft = 100
self.diffusions.update({diff: compartment})
rec = h.p2x3(compartment(0.5))
rec.gmax = g
rec.Ev = 5
# rec2 = h.p2x2(compartment(0.5))
# rec2.gmax = g/2
# rec2.Ev = -7
h.setpointer(diff._ref_atp, 'patp', rec)
# h.setpointer(diff._ref_atp, 'patp', rec2)
self.recs.append(rec)
self.diffs.append(diff)
# self.recs.append(rec2)
def add_5HTreceptors(self, compartment, time, g):
'''
Adds 5HT receptors
Parameters
----------
compartment: section of NEURON cell
part of neuron
time: int (ms)
time of serotonin application
g: float
receptor conductance
'''
if self.fast_diff:
diff = h.diff_5HT(compartment(0.5))
diff.h = self.distances.get(compartment)
diff.tx1 = time
if self.numofmodel == 14:
diff.a = 100
else:
diff.a = 0
diff.Deff = 0.004
diff.c0cleft = 3
else:
diff = h.slow_5HT(compartment(0.5))
diff.h = self.distances.get(compartment)
diff.tx1 = time + 0 + (diff.h/50)*1000
diff.c0cleft = 3
rec = h.r5ht3a(compartment(0.5))
rec.gmax = g
h.setpointer(diff._ref_serotonin, 'serotonin', rec)
self.diffs.append(diff)
self.recs.append(rec)
def build_subsets(self):
'''
adds sections in NEURON SectionList
'''
self.all = h.SectionList()
for sec in h.allsec():
self.all.append(sec=sec)
def connect2target(self, target):
'''
Adds presynapses
Parameters
----------
target: NEURON cell
target neuron
Returns
-------
nc: NEURON NetCon
connection between neurons
'''
nc = h.NetCon(self.branch(1)._ref_v, target, sec = self.branch)
nc.threshold = 10
return nc | research-team/robot-dream | Nociception/cfiber.py | Python | mit | 9,526 | [
"NEURON"
] | bd7bc511c6974ba71609543ff4cfa33d74b3fe7af1362024bf788acd374a0fac |
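The `distance()` method above computes a 3D Euclidean distance from a hard-coded application point `(x_application, 50, 0.01)` to each compartment's stored coordinate. A stand-alone sketch of that computation (with made-up example coordinates, independent of NEURON) is:

```python
# Minimal stand-alone sketch of the 3D Euclidean distance computed in
# cfiber.distance() above.  The coordinates used here are illustrative
# examples, not values from the model.
import math

def application_distance(application, coordinate):
    # sqrt of the sum of squared per-axis differences.
    return math.sqrt(sum((a - c) ** 2
                         for a, c in zip(application, coordinate)))

# A compartment located exactly at the application point is at distance 0.
d = application_distance((100.0, 50.0, 0.01), (100.0, 50.0, 0.01))
assert d == 0.0
# A classic 3-4-0 offset gives distance 5.
d2 = application_distance((0.0, 50.0, 0.01), (3.0, 54.0, 0.01))
assert abs(d2 - 5.0) < 1e-12
```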
"""
API for initiating and tracking requests for credit from a provider.
"""
import datetime
import logging
import uuid
import pytz
import six
from django.db import transaction
from edx_proctoring.api import get_last_exam_completion_date
from openedx.core.djangoapps.credit.exceptions import (
CreditProviderNotConfigured,
CreditRequestNotFound,
InvalidCreditStatus,
RequestAlreadyCompleted,
UserIsNotEligible
)
from openedx.core.djangoapps.credit.models import (
CreditEligibility,
CreditProvider,
CreditRequest,
CreditRequirementStatus
)
from openedx.core.djangoapps.credit.signature import get_shared_secret_key, signature
from student.models import CourseEnrollment, User
from util.date_utils import to_timestamp
from util.json_request import JsonResponse
# TODO: Cleanup this mess! ECOM-2908
log = logging.getLogger(__name__)
def get_credit_providers(providers_list=None):
"""Retrieve all available credit providers or filter on given providers_list.
Arguments:
providers_list (list of strings or None): contains list of ids of credit providers
or None.
Returns:
list of credit providers represented as dictionaries
Response Values:
>>> get_credit_providers(['hogwarts'])
[
{
"id": "hogwarts",
"name": "Hogwarts School of Witchcraft and Wizardry",
"url": "https://credit.example.com/",
"status_url": "https://credit.example.com/status/",
"description: "A new model for the Witchcraft and Wizardry School System.",
"enable_integration": false,
"fulfillment_instructions": "
<p>In order to fulfill credit, Hogwarts School of Witchcraft and Wizardry requires learners to:</p>
<ul>
<li>Sample instruction abc</li>
<li>Sample instruction xyz</li>
</ul>",
},
...
]
"""
return CreditProvider.get_credit_providers(providers_list=providers_list)
def get_credit_provider_info(request, provider_id): # pylint: disable=unused-argument
"""Retrieve the 'CreditProvider' model data for the given
credit provider.
Args:
provider_id (str): The identifier for the credit provider
Returns: 'CreditProvider' data dictionary
Example Usage:
>>> get_credit_provider_info("hogwarts")
{
"provider_id": "hogwarts",
"display_name": "Hogwarts School of Witchcraft and Wizardry",
"provider_url": "https://credit.example.com/",
"provider_status_url": "https://credit.example.com/status/",
"provider_description: "A new model for the Witchcraft and Wizardry School System.",
"enable_integration": False,
"fulfillment_instructions": "
<p>In order to fulfill credit, Hogwarts School of Witchcraft and Wizardry requires learners to:</p>
<ul>
<li>Sample instruction abc</li>
<li>Sample instruction xyz</li>
</ul>",
"thumbnail_url": "https://credit.example.com/logo.png"
}
"""
credit_provider = CreditProvider.get_credit_provider(provider_id=provider_id)
credit_provider_data = {}
if credit_provider:
credit_provider_data = {
"provider_id": credit_provider.provider_id,
"display_name": credit_provider.display_name,
"provider_url": credit_provider.provider_url,
"provider_status_url": credit_provider.provider_status_url,
"provider_description": credit_provider.provider_description,
"enable_integration": credit_provider.enable_integration,
"fulfillment_instructions": credit_provider.fulfillment_instructions,
"thumbnail_url": credit_provider.thumbnail_url
}
return JsonResponse(credit_provider_data)
@transaction.atomic
def create_credit_request(course_key, provider_id, username):
"""
Initiate a request for credit from a credit provider.
This will return the parameters that the user's browser will need to POST
to the credit provider, including a digital signature when provider
integration is enabled.
Only users who are eligible for credit (have satisfied all credit requirements) are allowed to make requests.
A provider can be configured either with *integration enabled* or not.
If automatic integration is disabled, this method will simply return
a URL to the credit provider and method set to "GET", so the student can
visit the URL and request credit directly. No database record will be created
to track these requests.
If automatic integration *is* enabled, then this will also return the parameters
that the user's browser will need to POST to the credit provider.
These parameters will be digitally signed using a secret key shared with the credit provider.
A database record will be created to track the request with a 32-character UUID.
The returned dictionary can be used by the user's browser to send a POST request to the credit provider.
If a pending request already exists, this function should return a request description with the same UUID.
(Other parameters, such as the user's full name may be different than the original request).
If a completed request (either accepted or rejected) already exists, this function will
raise an exception. Users are not allowed to make additional requests once a request
has been completed.
Arguments:
course_key (CourseKey): The identifier for the course.
provider_id (str): The identifier of the credit provider.
username (str): The user initiating the request.
Returns: dict
Raises:
UserIsNotEligible: The user has not satisfied eligibility requirements for credit.
CreditProviderNotConfigured: The credit provider has not been configured for this course.
RequestAlreadyCompleted: The user has already submitted a request and received a response
from the credit provider.
Example Usage:
>>> create_credit_request(course.id, "hogwarts", "ron")
{
"url": "https://credit.example.com/request",
"method": "POST",
"parameters": {
"request_uuid": "557168d0f7664fe59097106c67c3f847",
"timestamp": 1434631630,
"course_org": "HogwartsX",
"course_num": "Potions101",
"course_run": "1T2015",
"final_grade": "0.95",
"user_username": "ron",
"user_email": "ron@example.com",
"user_full_name": "Ron Weasley",
"user_mailing_address": "",
"user_country": "US",
"signature": "cRCNjkE4IzY+erIjRwOQCpRILgOvXx4q2qvx141BCqI="
}
}
"""
try:
user_eligibility = CreditEligibility.objects.select_related('course').get(
username=username,
course__course_key=course_key
)
credit_course = user_eligibility.course
credit_provider = CreditProvider.objects.get(provider_id=provider_id)
except CreditEligibility.DoesNotExist:
log.warning(
u'User "%s" tried to initiate a request for credit in course "%s", '
u'but the user is not eligible for credit',
username, course_key
)
raise UserIsNotEligible
except CreditProvider.DoesNotExist:
log.error(u'Credit provider with ID "%s" has not been configured.', provider_id)
raise CreditProviderNotConfigured
# Check if we've enabled automatic integration with the credit
# provider. If not, we'll show the user a link to a URL
# where the user can request credit directly from the provider.
# Note that we do NOT track these requests in our database,
# since the state would always be "pending" (we never hear back).
if not credit_provider.enable_integration:
return {
"url": credit_provider.provider_url,
"method": "GET",
"parameters": {}
}
else:
# If automatic credit integration is enabled, then try
# to retrieve the shared signature *before* creating the request.
# That way, if there's a misconfiguration, we won't have requests
# in our system that we know weren't sent to the provider.
shared_secret_key = get_shared_secret_key(credit_provider.provider_id)
if shared_secret_key is None:
msg = u'Credit provider with ID "{provider_id}" does not have a secret key configured.'.format(
provider_id=credit_provider.provider_id
)
log.error(msg)
raise CreditProviderNotConfigured(msg)
# Initiate a new request if one has not already been created
credit_request, created = CreditRequest.objects.get_or_create(
course=credit_course,
provider=credit_provider,
username=username,
)
# Check whether we've already gotten a response for a request,
# If so, we're not allowed to issue any further requests.
# Skip checking the status if we know that we just created this record.
if not created and credit_request.status != "pending":
log.warning(
(
u'Cannot initiate credit request because the request with UUID "%s" '
u'exists with status "%s"'
), credit_request.uuid, credit_request.status
)
raise RequestAlreadyCompleted
if created:
credit_request.uuid = uuid.uuid4().hex
# Retrieve user account and profile info
user = User.objects.select_related('profile').get(username=username)
# Retrieve the final grade from the eligibility table
try:
final_grade = CreditRequirementStatus.objects.get(
username=username,
requirement__namespace="grade",
requirement__name="grade",
requirement__course__course_key=course_key,
status="satisfied"
).reason["final_grade"]
# NOTE (CCB): Limiting the grade to seven characters is a hack for ASU.
if len(six.text_type(final_grade)) > 7:
final_grade = u'{:.5f}'.format(final_grade)
else:
final_grade = six.text_type(final_grade)
except (CreditRequirementStatus.DoesNotExist, TypeError, KeyError):
msg = u'Could not retrieve final grade from the credit eligibility table for ' \
u'user [{user_id}] in course [{course_key}].'.format(user_id=user.id, course_key=course_key)
log.exception(msg)
raise UserIsNotEligible(msg)
# Getting the students's enrollment date
course_enrollment = CourseEnrollment.get_enrollment(user, course_key)
enrollment_date = course_enrollment.created if course_enrollment else ""
# Getting the student's course completion date
completion_date = get_last_exam_completion_date(course_key, username)
parameters = {
"request_uuid": credit_request.uuid,
"timestamp": to_timestamp(datetime.datetime.now(pytz.UTC)),
"course_org": course_key.org,
"course_num": course_key.course,
"course_run": course_key.run,
"enrollment_timestamp": to_timestamp(enrollment_date) if enrollment_date else "",
"course_completion_timestamp": to_timestamp(completion_date) if completion_date else "",
"final_grade": final_grade,
"user_username": user.username,
"user_email": user.email,
"user_full_name": user.profile.name,
"user_mailing_address": "",
"user_country": (
user.profile.country.code
if user.profile.country.code is not None
else ""
),
}
credit_request.parameters = parameters
credit_request.save()
if created:
log.info(u'Created new request for credit with UUID "%s"', credit_request.uuid)
else:
log.info(
u'Updated request for credit with UUID "%s" so the user can re-issue the request',
credit_request.uuid
)
# Sign the parameters using a secret key we share with the credit provider.
parameters["signature"] = signature(parameters, shared_secret_key)
return {
"url": credit_provider.provider_url,
"method": "POST",
"parameters": parameters
}
def update_credit_request_status(request_uuid, provider_id, status):
"""
Update the status of a credit request.
Approve or reject a request for a student to receive credit in a course
from a particular credit provider.
This function does NOT check that the status update is authorized.
The caller needs to handle authentication and authorization (checking the signature
of the message received from the credit provider)
The function is idempotent; if the request has already been updated to the status,
the function does nothing.
Arguments:
request_uuid (str): The unique identifier for the credit request.
provider_id (str): Identifier for the credit provider.
status (str): Either "approved" or "rejected"
Returns: None
Raises:
CreditRequestNotFound: No request exists that is associated with the given provider.
InvalidCreditStatus: The status is not either "approved" or "rejected".
"""
if status not in [CreditRequest.REQUEST_STATUS_APPROVED, CreditRequest.REQUEST_STATUS_REJECTED]:
raise InvalidCreditStatus
try:
request = CreditRequest.objects.get(uuid=request_uuid, provider__provider_id=provider_id)
old_status = request.status
request.status = status
request.save()
log.info(
u'Updated request with UUID "%s" from status "%s" to "%s" for provider with ID "%s".',
request_uuid, old_status, status, provider_id
)
except CreditRequest.DoesNotExist:
msg = (
u'Credit provider with ID "{provider_id}" attempted to '
u'update request with UUID "{request_uuid}", but no request '
u'with this UUID is associated with the provider.'
).format(provider_id=provider_id, request_uuid=request_uuid)
log.warning(msg)
raise CreditRequestNotFound(msg)
def get_credit_requests_for_user(username):
"""
Retrieve the status of a credit request.
Returns either "pending", "approved", or "rejected"
Arguments:
username (unicode): The username of the user who initiated the requests.
Returns: list
Example Usage:
>>> get_credit_requests_for_user("bob")
[
{
"uuid": "557168d0f7664fe59097106c67c3f847",
"timestamp": 1434631630,
"course_key": "course-v1:HogwartsX+Potions101+1T2015",
"provider": {
"id": "HogwartsX",
"display_name": "Hogwarts School of Witchcraft and Wizardry",
},
"status": "pending" # or "approved" or "rejected"
}
]
"""
return CreditRequest.credit_requests_for_user(username)
def get_credit_request_status(username, course_key):
"""Get the credit request status.
This function returns the status of a user's credit request for a given course.
It returns the latest request status for any credit provider.
The valid statuses are 'pending', 'approved' or 'rejected'.
Args:
username(str): The username of user
course_key(CourseKey): The course locator key
Returns:
A dictionary of credit request user has made if any
"""
credit_request = CreditRequest.get_user_request_status(username, course_key)
return {
"uuid": credit_request.uuid,
"timestamp": credit_request.modified,
"course_key": credit_request.course.course_key,
"provider": {
"id": credit_request.provider.provider_id,
"display_name": credit_request.provider.display_name
},
"status": credit_request.status
} if credit_request else {}
| edx-solutions/edx-platform | openedx/core/djangoapps/credit/api/provider.py | Python | agpl-3.0 | 16,195 | [
"VisIt"
] | 6166a7720639f4cd3fc247ec23e6f0523d759ba492730edc135994a7a5160422 |
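`create_credit_request()` above signs its parameter dict via `signature(parameters, shared_secret_key)` from `openedx.core.djangoapps.credit.signature`. The sketch below shows one plausible way such a shared-secret signature could work — sorted keys, HMAC-SHA256, base64 output are illustrative assumptions, not edx's exact wire format:

```python
# Hedged sketch of a shared-secret parameter signature like the one appended
# in create_credit_request().  The serialization and digest choices here are
# assumptions for illustration, not edx's actual scheme.
import base64
import hashlib
import hmac

def sign_parameters(parameters, shared_secret_key):
    # Serialize parameters deterministically (sorted keys) so both sides
    # compute the same digest, then MAC with the shared key.
    message = '&'.join(
        '{}={}'.format(k, parameters[k]) for k in sorted(parameters)
    ).encode('utf-8')
    digest = hmac.new(shared_secret_key.encode('utf-8'),
                      message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode('ascii')

params = {'request_uuid': '557168d0f7664fe59097106c67c3f847',
          'timestamp': 1434631630}
sig = sign_parameters(params, 'secret')
# Verification recomputes the MAC and compares in constant time.
assert hmac.compare_digest(sig, sign_parameters(params, 'secret'))
# A different key must yield a different signature.
assert sig != sign_parameters(params, 'other-secret')
```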
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from Objects.CAdaline import AdalineGD, AdalineSGD
# Getting Iris data again...
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header=None)
df.tail()
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values
# Plotting the learning curve for adaptive linear neuron...
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 9))
ada1 = AdalineGD(n_iter=10, eta=0.01).fit(X, y)
ax[0, 0].plot(range(1, len(ada1.cost_)+1), np.log10(ada1.cost_), marker='o')
ax[0, 0].set_xlabel('Epochs')
ax[0, 0].set_ylabel('log(Sum-Squared-Error)')
ax[0, 0].set_title('Adaline - learning rate 0.01')
ax[0, 0].grid()
ada2 = AdalineGD(n_iter=10, eta = 0.0001).fit(X, y)
ax[0, 1].plot(range(1, len(ada2.cost_)+1), np.log10(ada2.cost_), marker='o')
ax[0, 1].set_xlabel('Epochs')
ax[0, 1].set_ylabel('log(Sum-Squared-Error)')
ax[0, 1].set_title('Adaline - learning rate 0.0001')
ax[0, 1].grid()
# Now second part of the exercise
X_std = np.copy(X)
X_std[:, 0] = (X[:, 0] - X[:, 0].mean())/X[:, 0].std()
X_std[:, 1] = (X[:, 1] - X[:, 1].mean())/X[:, 1].std()
ada3 = AdalineGD(n_iter=15, eta=0.01)
ada3.fit(X_std, y)
ax[1, 0].plot(range(1, len(ada3.cost_)+1), np.log(ada3.cost_), marker='o')
ax[1, 0].set_xlabel('Epochs')
ax[1, 0].set_ylabel('log(Sum-Squared-Error)')
ax[1, 0].set_title('Adaline - learning rate 0.01 with standardized features')
ax[1, 0].grid()
# Now the exercise part with AdalineSGD - show convergence of algorithm
adasgd = AdalineSGD(n_iter = 15, eta = 0.01, random_state = 1)
adasgd.fit(X_std, y)
ax[1, 1].plot(range(1, len(adasgd.cost_)+1), adasgd.cost_, marker='o')
ax[1, 1].set_xlabel('Epochs')
ax[1, 1].set_ylabel('Average Cost')
ax[1, 1].set_title('Adaline - Stochastic Gradient Descent')
ax[1, 1].grid()
plt.show() | petritn/MachineLearning | AdalineLearning.py | Python | gpl-3.0 | 1,945 | [
"NEURON"
] | d4f62450665a07a92ea2d97ddc058016bffb91a74f9b099a897daa8b99ca613e |
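The Adaline script above standardizes each feature column as `(x - mean) / std` before fitting. A stand-alone sketch of that per-column standardization, written with the stdlib instead of NumPy for illustration:

```python
# Stand-alone sketch of the per-feature standardization used above:
# subtract the column mean and divide by the column standard deviation.
import math

def standardize(column):
    mean = sum(column) / len(column)
    # Population standard deviation, matching NumPy's default (ddof=0).
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

col = [2.0, 4.0, 6.0, 8.0]
z = standardize(col)
# A standardized column has zero mean and unit variance (up to rounding).
assert abs(sum(z) / len(z)) < 1e-12
assert abs(sum(v * v for v in z) / len(z) - 1.0) < 1e-12
```

Standardizing the inputs puts all features on a comparable scale, which is why the gradient-descent runs above converge with a much larger learning rate on `X_std` than on raw `X`.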
# -*- coding: utf-8 -*-
#_____________________________________________________________________________
#
# Copyright (c) 2012 Berlin Institute of Technology
# All rights reserved.
#
# Developed by: Neural Information Processing Group (NI)
# School for Electrical Engineering and Computer Science
# Berlin Institute of Technology
# MAR 5-6, Marchstr. 23, 10587 Berlin, Germany
# http://www.ni.tu-berlin.de/
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal with the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimers.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimers in the documentation
# and/or other materials provided with the distribution.
# * Neither the names of Neural Information Processing Group (NI), Berlin
# Institute of Technology, nor the names of its contributors may be used to
# endorse or promote products derived from this Software without specific
# prior written permission.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# WITH THE SOFTWARE.
#_____________________________________________________________________________
#
# Acknowledgements:
# Philipp Meier <pmeier82@gmail.com>
#_____________________________________________________________________________
#
import sklearn.cluster
import scipy as sp
import scipy.linalg as sp_la
from tables import openFile
from mdp.nodes import PCANode
import cPickle
try:
import spikeplot as plot
plot.plt.interactive(False)
WITH_PLOT = True
except ImportError:
WITH_PLOT = False
ARC_PATH = './data.h5'
# def get_data(tf=65, trials=5, snr=0.5, mean_correct=False, save=False):
# # inits
# db = MunkSession()
# id_ana_mu = 1284
# # get relevant info for analysis item
# q = db.query("""
# SELECT a.expid, a.tetrode
# FROM analysis a
# WHERE a.id = %d
# """ % id_ana_mu) # L014 tet7 mua
# id_exp = q[0][0]
# id_tet = q[0][1]
# trial_ids = db.get_trial_range_exp(id_exp, trlidx=(0, trials),
# include_error_trials=False)
# id_mu = db.get_units_for_analysis(id_ana_mu)[0]
# ndet = TimeSeriesCovE(tf_max=tf, nc=4)
# data = {}
# align_at = int(tf / 4)
#
# print 'loading data trials..'
# for id_trl in trial_ids:
# trial_data = None
# try:
# trial_data = db.get_tetrode_data(id_trl, id_tet)
# if mean_correct is True:
# trial_data -= trial_data.mean(axis=0)
# data[id_trl] = trial_data
# print '\tprocessed %s' % db.get_fname_for_id(id_trl)
# except Exception, e:
# raise RuntimeError('error processing %s\n%s' %
# (db.get_fname_for_id(id_trl), e))
# finally:
# del trial_data
# print 'done.'
#
# print 'retrieving multiunit spike set @tf=%d' % tf
# spks_info = []
# spks = []
# for id_trl in trial_ids:
# trial_st = None
# try:
# trial_st = db.get_unit_data(id_mu, id_trl)['spiketrain']
# if trial_st.size == 0:
# print '\tno spiketrain for %s' % db.get_fname_for_id(id_trl)
# continue
# trial_spks, trial_st = get_aligned_spikes(
# data[id_trl],
# trial_st,
# tf,
# align_at=align_at,
# mc=False,
# kind='min')
# end = data[id_trl].shape[0]
# nep = epochs_from_spiketrain(trial_st, tf, end=end)
# nep = invert_epochs(nep, end=end)
# nep = merge_epochs(nep)
# ndet.update(data[id_trl], epochs=nep)
# spks.append(trial_spks)
# spks_info.append(sp.vstack([[id_trl] * trial_st.size,
# trial_st]).T)
# print '\tprocessed %s' % db.get_fname_for_id(id_trl)
# except Exception, e:
# raise RuntimeError('error processing %s\n%s' %
# (db.get_fname_for_id(id_trl), e))
# finally:
# del trial_st
# spks_info = sp.vstack(spks_info)
# spks = sp.vstack(spks)
# print 'found %d spikes in total' % spks.shape[0]
# print 'done.'
#
# print 'checking SNR of spikes'
# spks_snr = snr_maha(spks, ndet.get_icmx(tf=tf))
# good_spks = spks_snr > snr
# n_spks = spks.shape[0]
# spks = spks[good_spks]
# spks_info = spks_info[good_spks].astype(int)
# print 'keeping %d of %d spikes with SNR > %f' % (spks.shape[0], n_spks,
# snr)
#
# if save is True:
# ndet_pkl = cPickle.dumps(ndet)
# arc = openFile(ARC_PATH, 'w')
# arc.createArray(arc.root, 'spks', spks)
# arc.createArray(arc.root, 'spks_info', spks_info)
# arc.createArray(arc.root, 'ndet_pkl', ndet_pkl)
# arc.close()
# return spks, spks_info, ndet
def load_data():
arc = openFile(ARC_PATH, 'r')
spks = arc.getNode('/spks').read()
spks_info = arc.getNode('/spks_info').read()
ndet_pkl = arc.getNode('/ndet_pkl').read()
arc.close()
ndet = cPickle.loads(ndet_pkl)
print 'loaded', spks.shape[0], 'spikes from hdf archive'
return spks, spks_info, ndet
def pre_processing(spks, ndet, tf, pca_dim=4):
print 'starting to prewhiten w.r.t. noise..',
spks_prw = sp.dot(spks, ndet.get_whitening_op(tf=tf))
print 'done.'
print 'pca projection: %s' % pca_dim,
spks_pca = PCANode(output_dim=pca_dim)(spks_prw)
print 'done.'
return spks_pca
def cluster_kmeans(obs, crange=range(1, 21)):
rval = None
winner = sp.inf
print 'starting multiple runs of: kmeans'
for i in crange:
clus = sklearn.cluster.KMeans(k=i)
clus.fit(obs)
x = gof(clus.score(obs), obs, i)
print i, x
if x < winner:
winner = x
rval = clus.labels_
print 'done.'
return rval
def cluster_gmm(obs, crange=range(1, 21)):
rval = None
winner = sp.inf
print 'starting multiple runs of: GMM'
for i in crange:
clus = sklearn.mixture.GMM(n_components=i, cvtype='spherical')
clus.fit(obs, n_iter=0, init_params='wm', params='')
clus.covars = [4.0] * i
clus.fit(obs, init_params='', params='wm')
x = gof(clus.score(obs).sum(), obs, i)
print i, x
if x < winner:
winner = x
rval = clus.predict(obs)
print 'done.'
return rval
def cluster_ward(obs, crange=range(1, 21)):
    rval = None
    winner = sp.inf
    print 'starting multiple runs of: hierarchical clustering'
    for i in crange:
        clus = sklearn.cluster.Ward(n_clusters=i)
        clus.fit(obs)
        # Ward exposes no score(); use the negative within-cluster sum of
        # squares as a stand-in log-likelihood for the gof criterion
        ll = 0.0
        for c in xrange(i):
            members = obs[clus.labels_ == c]
            if members.shape[0] > 0:
                ll -= ((members - members.mean(axis=0)) ** 2).sum()
        x = gof(ll, obs, i)
        print i, x
        if x < winner:
            winner = x
            rval = clus.labels_
    print 'done.'
    return rval
def gof(ll, data, k):
N, Nk = map(sp.float64, data.shape)
Np = k * (Nk + 1) - 1
#=============================================================
# # calculate BIC value (Xu & Wunsch, 2005)
# return - ll + Np * 0.5 * sp.log(N)
#=============================================================
#=============================================================
# # calculate AIC value (Xu & Wunsch, 2005)
# return - 2 * (N - 1 - Nk - ncmp * 0.5) * ll / N + 3 * Np
#=============================================================
return - ll + Np * 0.5 * sp.log(N)
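The penalised criterion in `gof` is the BIC form cited from Xu & Wunsch (2005): negative log-likelihood plus half the parameter count times `log(N)`. A minimal self-contained sketch (plain NumPy, hypothetical data shape) showing that with equal log-likelihoods the larger model pays a larger penalty:

```python
import numpy as np

def gof(ll, data, k):
    # BIC-style criterion: -log-likelihood + 0.5 * n_params * log(n_samples);
    # lower is better
    N, Nk = map(float, data.shape)
    Np = k * (Nk + 1) - 1
    return -ll + Np * 0.5 * np.log(N)

data = np.zeros((100, 4))  # 100 observations, 4 dims (hypothetical)
# with identical log-likelihood the larger model is penalised more
penalty_gap = gof(-50.0, data, 3) - gof(-50.0, data, 2)
```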
def gaussian_heat_kernel(X, delta=1.0):
return sp.exp(- X ** 2 / (2. * delta ** 2))
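The heat kernel maps pairwise distances to similarities in (0, 1]: zero distance gives similarity 1, and larger distances decay towards 0. A quick standalone check of that behaviour:

```python
import numpy as np

def gaussian_heat_kernel(X, delta=1.0):
    # distance 0 -> similarity 1; decays as exp(-d^2 / (2 * delta^2))
    return np.exp(-X ** 2 / (2.0 * delta ** 2))

dists = np.array([0.0, 1.0, 2.0])
simi = gaussian_heat_kernel(dists)
```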
def cluster_spectral(obs):
print 'starting spectral clustering with gaussian heat kernel'
nobs = obs.shape[0]
aff_mx = sp.zeros((nobs, nobs))
for i in xrange(nobs):
for j in xrange(i, nobs):
aff_mx[i, j] = sp_la.norm(obs[i] - obs[j])
if i != j:
aff_mx[j, i] = aff_mx[i, j]
simi = gaussian_heat_kernel(aff_mx)
clus = sklearn.cluster.SpectralClustering()
    clus.fit(simi)
    print 'done.'
    return clus.labels_
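The double loop in `cluster_spectral` fills the symmetric Euclidean distance matrix with O(n^2) Python-level iterations; scipy's `pdist`/`squareform` builds the same matrix vectorised. A sketch (not the original code) with a tiny hypothetical observation set:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

obs = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
# condensed pairwise Euclidean distances, expanded to the full
# symmetric matrix with a zero diagonal
aff_mx = squareform(pdist(obs))
```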
def main():
TF, SNR, PCADIM = 65, 0.5, 8
NTRL = 10
LOAD = False
if LOAD is True:
spks, spks_info, ndet = load_data()
else:
# spks, spks_info, ndet = get_data(tf=TF, trials=NTRL, snr=SNR,
# mean_correct=False, save=True)
pass
# plot.waveforms(spks, tf=TF, show=False)
input_obs = pre_processing(spks, ndet, TF, pca_dim=PCADIM)
plot.cluster(input_obs, show=False)
# kmeans
labels_km = cluster_kmeans(input_obs)
obs_km = {}
wf_km = {}
for i in xrange(labels_km.max() + 1):
obs_km[i] = input_obs[labels_km == i]
wf_km[i] = spks[labels_km == i]
if WITH_PLOT:
        plot.cluster(obs_km, title='kmeans', show=False)
        plot.waveforms(wf_km, tf=TF, title='kmeans', show=False)
# gmm
labels_gmm = cluster_gmm(input_obs)
obs_gmm = {}
wf_gmm = {}
    for i in xrange(labels_gmm.max() + 1):
obs_gmm[i] = input_obs[labels_gmm == i]
wf_gmm[i] = spks[labels_gmm == i]
if WITH_PLOT:
plot.cluster(obs_gmm, title='gmm', show=False)
plot.waveforms(wf_gmm, tf=TF, title='gmm', show=False)
# ward
labels_ward = cluster_ward(input_obs)
obs_ward = {}
wf_ward = {}
    for i in xrange(labels_ward.max() + 1):
obs_ward[i] = input_obs[labels_ward == i]
wf_ward[i] = spks[labels_ward == i]
if WITH_PLOT:
plot.cluster(obs_ward, title='ward', show=False)
plot.waveforms(wf_ward, tf=TF, title='ward', show=False)
# spectral
#cluster_spectral(spks)
if WITH_PLOT:
plot.plt.show()
if __name__ == '__main__':
main()
| pmeier82/BOTMpy | botmpy/test/_test_cluster_methods.py | Python | mit | 10,851 | [
"Gaussian"
] | f1005ef960eb4726126893b9fba24b124c418f5eff3633bf10d1f852f9be42dc |
#!/usr/bin/env python3
"""
emep_readcdf contains:
class:
EmepFileClass - which defines objects with name,proj,x0, etc.
methods:
readcdf - reads EMEP cdf file, and figures out projection.
returns : EmepFile (object), and values of a
specified variable for the full period or
particular time-slices.
CHECK...
grid (2-d array) for one time-slice, 1st if not specified
Pt (1-d time array)
get_vals(xPt,yPt,EmepCdf,minmax=False,dbg=False):
gets the value at xPt, yPt from bi-linear interpolation
of nearest grids. Returns best estimate and min and max
of those grids.
printme()
Access as e.g. EmepFile.name
Usually called as module, but a quick test can be done to get values, e.g.
emep_readcdf.py -i /home/fred/somedir/test1_fullrun.nc -v SURF_MAXO3
"""
import datetime
import netCDF4 # as nc4
from numpy import maximum, vstack
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
import time # for timing CPU
#Not used: import netcdftime # ... to play with dates:
# Own:
import emxgeo.get_emepcoords as coords
# ---------------------------------------------------------------------------#
class EmepFileClass(object):
""" Class to hold info on EMEP file's projection and dimensions
"""
def __init__(self,name=None,handle=None,varname=None,proj=None,lldim=None, \
dimx=None,dimy=None,ntime=None):
self.name=name
self.handle=handle
self.varname=varname
self.proj=proj # lonlat or PS so far
self.lldim= int( lldim ) # is e.g. lat 1-D or 2-D
self.dimx=dimx # e.g. latitude, lat
self.dimy=dimy
self.ntime=int ( ntime ) # 1 so far, for fullrun
self.x0=np.NaN # will be left edge
self.y0=np.NaN # will be bottom edge
        self.xmax=np.NaN # will be right edge
        self.ymax=np.NaN # will be top edge
        self.dx=np.NaN # will be x grid spacing
        self.dy=np.NaN # will be y grid spacing
self.times=[]
self.xcoords=[]
self.ycoords=[]
self.yAscending=True # True for S to N in y coords
self.xRegular=True # Constant spacing in x
self.yRegular=True # Constant spacing in y
self.vals=np.array([])
#self.ht=float( ht )
def __repr__(self):
return str(self.__class__) + ":" + str(self.__dict__)
# return str(self.__dict__)
# repr better than __str__ here, see e.g.
# http://stackoverflow.com/questions/1436703/difference-between-str-and-repr-in-python/2626364#2626364
def printme(self):
""" prints out summary of EmepFile f, excluding xcoords and ycoords """
me='EmepFileClass: '
f=self
print(("="*78))
print((me+"SUMMARY ", f.name))
print((me+"Variable ", f.varname))
print((me+"PROJ, dims ", f.proj, " : ", f.lldim, 'D ', f.dimx, f.dimy))
print((me+"xReg, yReg, yAsc? ", f.xRegular, f.yRegular, f.yAscending))
print((me+"ntime ", f.ntime))
print((me+"XCOORDS ", f.xcoords.min(), f.xcoords.max(), len(f.xcoords) ))
print((me+"yCOORDS ", f.ycoords.min(), f.ycoords.max(), len(f.ycoords), f.yAscending ))
print((me+"x0 y0 dx dy reg? ", f.x0, f.y0, f.dx, f.dy, f.xRegular, f.yRegular ))
print((me+"xmax, ymax ", f.xmax, f.ymax))
try:
print((me+"min max vals", f.vals.min(), f.vals.max()))
except:
print(me+"min max vals NOT SET YET")
print(("="*78))
#-----------------------------------------------------
def readcdf( ifile, var, getVals=True, tStep=None,
getijPts = [], getijPt=[ False, 0, 0 ], dbg=False, maxdays=0 ):
"""
Reads emep-produced (or other?) netcdf files and returns values of
variable 'var' as EmepCdf.vals array, along with projection, ycoords, xcoords
and number of dimensions.
If tStep is specified, vals is 2-d array for that time-step, otherwise
vals contains all data for that variable.
For lonlat projections, xcoords is usually longitude in degrees
For PS projections, xcoords is usually real, e.g. 1.0 -- 100.0
This routine can return one time-slice of gridded data, or
time-series for one point --- OR FULL ...
maxdays is A CRUDE FIX FOR MG AMAP files. Set to 365/366
"""
dtxt='readcdf: '
if( not os.path.isfile(ifile) ):
print((dtxt+"File %s doesn't exist!"% ifile))
sys.exit()
ecdf = netCDF4.Dataset(ifile,'r',format='NETCDF4')
proj='lonlat' # default
if( 'longitude' in ecdf.dimensions ):
dimx, dimy =( 'longitude', 'latitude')
elif ( 'lon' in ecdf.dimensions ) :
dimx, dimy =( 'lon', 'lat')
elif ( 'i' in ecdf.dimensions ) :
print((dtxt+'PS PROJ assumed for %s' % ifile))
dimx, dimy =( 'i_EMEP', 'j_EMEP')
proj='PS'
elif ( 'x' in ecdf.dimensions ) :
dimx, dimy =( 'x', 'y')
print((dtxt+'PS PROJxy assumed for %s' % ifile))
proj='PS'
else:
print("ERROR w PROJ", ecdf.dimensions); sys.exit(0)
# TEST IF WE HAVE THIS VAR!
if var not in ecdf.variables.keys():
print(dtxt+'TEST VAR NOT IN FILE! ', var, ifile)
return 'VarNotFound'
try:
tst=ecdf.variables[dimx]
lldim=len(tst.shape)
except:
lldim=2 # HARD CODE CDO x y
# Test Sep 2018
if 'time' in ecdf.variables.keys():
if dbg: print('DBG TIME ', ecdf.variables.keys())
t=ecdf.variables['time']
times=ecdf.variables['time'][:]
ntime=len(times) # TESTING . was =1
tvar = True
elif 'time' in ecdf.dimensions:
if dbg: print('TIME', ecdf.dimensions)
ntime=len(ecdf.dimensions['time'])
times =range(ntime)
tvar = False
else:
ntime=0
tvar = False
# AMAP FIX:
if maxdays>0:
tst = tst[:maxdays]
times =times[:maxdays]
ntime=maxdays
if dbg: print('DBG AMAP TIME ', ntime)
if dbg and ntime>0:
print(" SIZE OF TIME ", len(times))
print(" Time UNITS ", t.units)
if tvar:
print(netCDF4.num2date( times[0],units=t.units))
if ntime>1: print(netCDF4.num2date( times[1],units=t.units))
#print(netCDF4.num2date( times[365],units=t.units))
# ECHAM had 367 records for 2012:
# 0 2012-01-01 12:00:00
# 1 2012-01-02 11:30:00
# 365 2012-12-31 11:30:00
# 366 2013-01-01 00:00:00
# print(netCDF4.num2date( times[366],units=t.units)) # ChECK
# sys.exit()
EmepFile=EmepFileClass( ifile, ecdf, var, proj,lldim,dimx,dimy,ntime)
EmepFile.dimx = dimx
EmepFile.dimy = dimy
for tim in times:
if tvar:
EmepFile.times.append(netCDF4.num2date(tim,units=t.units))
else:
EmepFile.times.append(tim)
if( lldim == 1):
EmepFile.xcoords=ecdf.variables[dimx][:]
EmepFile.ycoords=ecdf.variables[dimy][:]
if EmepFile.ycoords[-1] < EmepFile.ycoords[0]: # from N to S
# We flip the coordinates
EmepFile.ycoords=np.flipud( EmepFile.ycoords )
EmepFile.yAscending = False
EmepFile.dx = EmepFile.xcoords[1]-EmepFile.xcoords[0]
EmepFile.dy = EmepFile.ycoords[1]-EmepFile.ycoords[0]
# For eg ECHAM the x-coords are from 0 to 360, and y-coords are reversed
# (N to S). We reset to EMEP standard here, to simply rest of code. Later we
# will also reset any 2-D variables directly after reading
EmepFile.x0 = EmepFile.xcoords[0] - 0.5 * EmepFile.dx
EmepFile.y0 = EmepFile.ycoords[0] - 0.5 * EmepFile.dy
EmepFile.xmax = EmepFile.xcoords[-1] + 0.5 * EmepFile.dx
EmepFile.ymax = EmepFile.ycoords[-1] + 0.5 * EmepFile.dy # from S to N
#FLIPD if EmepFile.ycoords[-1] < EmepFile.ycoords[0]: # from N to S
#FLIPD EmepFile.y0 = EmepFile.ycoords[-1] - 0.5 * EmepFile.dy
#FLIPD EmepFile.ymax = EmepFile.ycoords[0] + 0.5 * EmepFile.dy
#FLIPD EmepFile.yAscending = False
#EmepFile.ycoords=np.flipud( EmepFile.ycoords ) # No ascending
# Check for regular spacing... simple test if edge dx ~ mid dx
nx2= len(EmepFile.xcoords) // 2
ny2= len(EmepFile.ycoords) // 2
dx2= EmepFile.xcoords[nx2]-EmepFile.xcoords[nx2-1]
dy2= EmepFile.ycoords[ny2]-EmepFile.ycoords[ny2-1]
dx2 = abs(dx2); dy2 = abs(dy2)
if abs(dx2-EmepFile.dx) > 1.0e-5 : EmepFile.xRegular = False
if abs(dy2-EmepFile.dy) > 1.0e-5 : EmepFile.yRegular = False
# Shouldn't occur now, since we use i_EMEP for PS, lon for lonlat
elif ( lldim == 2):
# EmepFile.ycoords=ecdf.variables[dimy][:,:]
# EmepFile.xcoords=ecdf.variables[dimx][:,:]
        # HARD CODE FOR CDO
        sys.exit('HARD CODE DANGEROUS - NEEDS CHECK')
xx=ecdf.dimensions['x']
yy=ecdf.dimensions['y']
EmepFile.xcoords=np.linspace(0.5,xx.size-0.5,xx.size) # eg 0 .. 131 (2
EmepFile.ycoords=np.linspace(0.5,yy.size-0.5,yy.size) # eg 0 .. 158 (9
EmepFile.dx = 1.0
EmepFile.dy = 1.0
EmepFile.x0 = 0.0
EmepFile.y0 = 0.0
EmepFile.xmax = xx.size-0.5 # CHECK LATER
EmepFile.ymax = yy.size-0.5
print(EmepFile.ycoords)
EmepFile.printme()
if getVals:
        # tStep will be zero for annual, or by default
if tStep == None:
            print(dtxt+"getVals all time-steps " )
            if maxdays>0:
                EmepFile.vals=np.array( ecdf.variables[var][0:maxdays,:,:] ) # 2 or 3d
                print('AMAP FIX ', EmepFile.vals.shape)
            else:
                EmepFile.vals=np.array( ecdf.variables[var] ) # 2 or 3d
else:
tmpv= np.array( ecdf.variables[var][:,:,:] )
maxtstep = tmpv.shape[0] -1 # -1 for python index
if maxtstep < tStep:
print ( dtxt+'TSTEP WARNING!! Requested ', tStep, ' but len=', maxtstep )
tStep = maxtstep
if maxdays>0:
print('AMAP FIXB ')
sys.exit()
#print ( dtxt+'SHAPE TMPV ', tStep, tmpv.shape, tmpv.shape[0] )
EmepFile.vals=np.array( ecdf.variables[var][tStep,:,:] )
if EmepFile.yAscending == False: # ECHAM
# ECHAM has j coordinates from N to S, and some (old?) files has 367 records
# for 2012. We flip and chop
i=5; j=19 # about 53N, 9E, j from top
nj = len(EmepFile.ycoords) - j - 1
ndims = len(EmepFile.vals.shape)
print('SHAPE FLIPPING VALS ', EmepFile.vals.shape )
if ndims>2:
nt2 = ntime//2
print('FLIP PRE ', nt2, EmepFile.vals[nt2,j,i], np.max(EmepFile.vals[:,j,i] ) )
#EmepFile.vals=EmepFile.vals[:-1,::-1,:] # Flips on j, chops time by one
EmepFile.vals=EmepFile.vals[:,::-1,:] # Flips on j
print('FLIP POST ', nt2, EmepFile.vals[nt2,nj,i], np.max(EmepFile.vals[:,nj,i] ) )
else:
EmepFile.vals=EmepFile.vals[::-1,:] # Flips on j
# O2017 - needs checking
elif len(getijPts) > 0:
npt=len(getijPts)
EmepFile.vals = np.zeros([ntime,npt])
sys.exit('ECHAM gb')
for tStep in range(0,ntime):
npt=0
for i, j in getijPts:
#if i < 1 or j < 1: sys.exit('Wrong iPt, jPt ')
#print('IJ n', i, j, npt)
EmepFile.vals[tStep,npt] =ecdf.variables[var][tStep,j,i]
#print('Inside:', npt, multiwrite( EmepFile.vals[0:10,npt],'%5.1f') )
npt += 1
elif getijPt[0]:
i = getijPt[1]; j = getijPt[2]
if i < 1 or j < 1: sys.exit('Wrong iPt, jPt ')
EmepFile.vals=ecdf.variables[var][:,j,i]
sys.exit('ECHAM gc')
else:
EmepFile.vals= np.array( [ np.nan, np.nan ])
print(dtxt+'getVals false')
#sys.exit('Checked time')
if dbg: print(dtxt+"DIMS ", ifile, dimx, dimy , lldim)
if dbg: print(dtxt+"PROJ ", proj)
try:
print((dtxt+"VALS ", EmepFile.vals.min(), EmepFile.vals.max(),
EmepFile.vals.shape))
except:
print((dtxt+'No vals requested'))
return EmepFile
#--------------- Was getEmepPt.py ----------------------------------
# getEmepPt comprises three methods
# RelIj
# RelXy
# get_vals - which uses bi-linear interpolation to get best-estimate
#
#-------------------------------------------------------------------
def RelIj(x, y, x0, y0, dx, dy):
""" gets i, j coordinates
x0, y0 are left and bottom edges (set in getEmepCdf usually)
designed for lon, lat arrays but should work with i,j arrays also?
"""
dtxt='RelIj:' # for debug txt
i= int( (x-x0)/dx ) # now with zero as lowest coord
j= int( (y-y0)/dy ) # now with zero as lowest coord
if( i < 0 or j < 0 ):
        print(dtxt+"EDGE of domain ", x, x0, i); sys.exit(0)
return i, j
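For a regular grid the index arithmetic in `RelIj` reduces to subtracting the lower-left edge and dividing by the spacing; a standalone sketch with hypothetical grid parameters:

```python
def rel_ij(x, y, x0, y0, dx, dy):
    # cell indices of point (x, y) on a regular grid whose
    # lower-left *edge* is (x0, y0), with spacing (dx, dy)
    return int((x - x0) / dx), int((y - y0) / dy)

# 0.5-degree grid with left/bottom edges at (-30, 30) -- hypothetical values
i, j = rel_ij(10.3, 55.9, -30.0, 30.0, 0.5, 0.5)
```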
#-------------------------------------------------------------------
def RelXy(x, y, x0, y0, dx, dy):
""" returns (xrel,yrel) values for point (x,y)
x0, y0 are left and bottom edges (set in getEmepCdf usually)
This code cannot cope with edges though. """
xrel= (x-x0 )/dx
yrel= (y-y0 )/dy
#if xrel < 0.0 or yrel < 0.0:
# print("WARNING XYrel not coded ", x, x0, xrel, y, y0, yrel)
# print("WARNING Yrel", yrel)
# if yrel < 0.0:
# print("WARNING XYrel South Pole Fix!")
# yrel = max(0.0, yrel)
# #sys.exit('XYrel SP!')
# if xrel < 0.0:
# sys.exit('XYrel')
return xrel,yrel
#-------------------------------------------------------------------
def IrregRelP(p, pcoords,wrapP= -999,dbg=False):
""" Gets relative coordinates for irregular coordinate systems. Uses simple
search to find left+right coordinates. Assumes increasing pcoords to start
with. wrapP not implemented yet """
dtxt='IrregRelP:'
if p < np.min(pcoords): #2 if p < pcoords[0]:
print('WARNING - wrap around not implemented yet for pp', p, pcoords[0] )
return -888.
ncoords = len(pcoords)
coords = pcoords.copy()
flipped_coords = False
if ( pcoords[1] < pcoords[0] ): # Coords from -ve to +ve, e.g. N to S
sys.exit('IREG') # Shouldn't happen now that Echam flipped
flipped_coords = True
coords = np.flipud(coords) # Simplifies thoughts n code, from low to high
ip = 0
for pp in coords:
if dbg: print(dtxt+'pp: ',ip, p, pp, len(coords) )
if pp > p:
if dbg: print(dtxt+'!!: ',ip, pp,p)
break
ip += 1
if ip == ncoords:
        print(dtxt+'WARNING - wrap around not implemented yet for pp', ip, ncoords )
return -999.
dp = coords[ip]-coords[ip-1]
prel = ip-1 + ( p-coords[ip-1] )/dp
if ( pcoords[1] < pcoords[0] ): prel = (ncoords-1) - prel
xprel = (p-coords[0])/dp # Approx if dp not constant, just testing
print(dtxt+'DONE coords', ip, ncoords, p, coords[ip-1], coords[ip], dp, prel, xprel )
if flipped_coords:
print(dtxt+'DONE pcoords', ip, ncoords, p, pcoords[ip-1], pcoords[ip], dp, prel, xprel )
ip = ncoords - ip #CHECK
print(dtxt+'FLIP pcoords', ip, ncoords, p, pcoords[ip-1], pcoords[ip], dp, prel, xprel )
sys.exit('ECHAM tmp')
return prel
#def IrregRelP(p, pcoords):
# """ Gets relative coordinates for irregular coordinate systems.
# Uses simple search to find left+right coordinates. Assumes
# increasing pcoords to start with """
#
# if p < np.min(pcoords): #2 if p < pcoords[0]:
# print('WARNING - wrap around not implemented yet for pp', p, pcoords[0] )
# return -888.
#
# np = len(pcoords)
# ip = 0
# pStep = 1
# if ( pcoords[1] < pcoords[0] ): # Coords from -ve to +ve, e.g. N to S
# pStep = -1
# for pp in pcoords:
# if pp > p:
# break
# ip += pStep
#
# if ip == np:
# print('WARNING - wrap around not implemented yet for pp', ip, np )
# return -999.
#
# dp = abs(pcoords[ip]-pcoords[ip-1])
# prel = ip-1 + (p-pcoords[ip-1] )/dp
# xprel = (p-pcoords[0])/dp # Approx if dp not constant
# #print('TESTING pp', ip, np, p, pcoords[ip-1], pcoords[ip], dp, prel, xprel )
# return prel
def IrregRelXy(x, y, xcoords, ycoords,latlon=True):
""" Gets relative coordinates for irregular coordinate systems.
        Ensures that IrregRelP above is called with increasing pcoords
to cope with some grids using N to S and others S to N """
#x = -179.95 # DEBUG
#y = -88.4 # DEBUG
xp = x
#if xp > xcoords[-1]: # CRUDE EDGE PROBLEM, ARGH
# print('WARNING EDGING!', x, xp, xcoords[0], xcoords[-1] )
# xp = xcoords[-1]-0.0001
xrel = IrregRelP( xp, xcoords )
print('TESTING XX', xp, xcoords[0], xcoords[-1], xrel )
yrel = IrregRelP( y, ycoords )
# All y-coordinates are now ascending.
#if ycoords[-1] > ycoords[0]: # S to N
#else: # N to S,
# yrcoords = np.flipud( ycoords ) # alt ycoords[::-1]
# print('TESTING AA RY', len(ycoords) )
# yrrel = IrregRelP( y, yrcoords )
# yrel = len(ycoords) - yrrel
print('TESTING YY', y, ycoords[0], ycoords[-1], yrel )
if xrel < 0.0 or yrel < 0.0:
print("WARNING IreggXYrel not coded ", x, xrel, y, yrel)
#sys.exit('ECHAM gx')
return xrel,yrel
#-------------------------------------------------------------------
def get_vals(xPtin,yPtin,EmepCdf,minmax=False,dbg=False):
    """ Uses bi-linear interpolation to estimate value
of field vals at point xPt, yPt
"""
dtxt='get_vals:'
# Get coordinates in model grid if polar stereo:
xPt, yPt = xPtin, yPtin
if hasattr(xPt,"__len__"): # copes with numpy class or simple list
print(dtxt+'ERROR! needs scalar x,y; got array:', type(xPt) )
sys.exit()
print(dtxt+'TEST? proj, xPt, yPt ', EmepCdf.proj, xPt, yPt,
'xrange:', EmepCdf.xcoords[0],EmepCdf.xcoords[-1],
'yrange:', EmepCdf.ycoords[0],EmepCdf.ycoords[-1])
if EmepCdf.proj == 'PS':
xPt, yPt = coords.lonlat2emepXy(xPt,yPt) # XPt, yPt are lon,lat
if dbg:
print('PS lon,lat => model xPt, yPt ', xPtin, yPtin, ' => ', xPt, yPt)
elif EmepCdf.xcoords[-1] > 189: # USES 189 to avoid some grids with eg 180.25
# if long xcoords are from 0 to 360, we shift Xpt
if xPtin < 0.0:
xPt = xPtin + 360
print(dtxt+'Xshift: ', xPtin , xPt, EmepCdf.xcoords[-1] )
#else: # lon lat already ok
# xemep, yemep = RelXy(xPt,yPt,EmepCdf.x0,EmepCdf.y0,EmepCdf.dx,EmepCdf.dy)
# New more consistent check is point is inside grid. John 2018-01-16
if ( xPt < EmepCdf.x0 or yPt < EmepCdf.y0
or xPt > EmepCdf.xmax or yPt > EmepCdf.ymax ):
        print("OUTSIDE GRID ", xPt, yPt, EmepCdf.x0, EmepCdf.y0,
              EmepCdf.xmax, EmepCdf.ymax)
return None, None, None
# err = np.array( [ np.NaN ] )
# if xPt > EmepCdf.xmax or yPt > EmepCdf.ymax:
# print("OUTSIDE GRID ", xPt, yPt, EmepCdf.xmax, EmepCdf.ymax )
# return err, err, err
#M17 Emep coords relative to grid LL point
#M17 x, y = RelXy(xPt, yPt, EmepCdf.x0,EmepCdf.y0,EmepCdf.dx,EmepCdf.dy)
# Emep coords relative to grid LL centre
if EmepCdf.xRegular and EmepCdf.yRegular :
x, y = RelXy(xPt, yPt, EmepCdf.xcoords[0],EmepCdf.ycoords[0],
EmepCdf.dx,EmepCdf.dy)
else:
x, y = IrregRelXy(xPt, yPt, EmepCdf.xcoords,EmepCdf.ycoords)
if x < 0 or y < 0:
print(dtxt+"OUTSIDE GRID ", xPt, yPt, x, y)
err = np.array( [ np.NaN ] ) # just to get array, not scalar
return err, err, err
if dbg:
print(dtxt+"INSIDE GRID ", xPt, yPt, x, y)
print(dtxt+"MIN x0, y0 ", EmepCdf.x0, EmepCdf.y0)
print(dtxt+"max XCRD YCRD ", EmepCdf.xcoords.max(), EmepCdf.ycoords.max())
print(dtxt+"xPt, yPt ", xPt, yPt) #, " DLON ", xcoords[1]-xcoords[0]
print(dtxt+"xxx XCRD YCRD ", x, y) #, " DLON ", xcoords[1]-xcoords[0]
EmepCdf.printme()
iL=int(x) # left
iR=iL+1
#if EmepCdf.yAscending:
jS=int(y) # from south
jN=min( jS+1, len(EmepCdf.ycoords)-1)
#else:
# jN=int(y) # from N
# jS=min( jN+1, len(EmepCdf.ycoords)-1) # TMP!!!
#QUERY 180 if jS > 180:
#QUERY 180 print(dtxt+'OOPSjS ', xPt, yPt, iL,jS, xcoords.max(), ycoords.max())
#QUERY 180 sys.exit(0)
# Get data for a square at 0,0, 0,1 etc for bidirectional
# relative to grid centre-points
#f00 =EmepCdf.vals[jS,iL] #f00 =e.variables[varname][:,jS,iL]
if dbg:
print(dtxt+'iL,iR-xx ', xPt, yPt, iL, iR,
EmepCdf.xcoords[iL], EmepCdf.xcoords[iR])
print(dtxt+'jS,jN-yy ', xPt, yPt, jS, jN,
EmepCdf.ycoords[jS], EmepCdf.ycoords[jN])
print(dtxt+'BOX SHAPE ', EmepCdf.vals.shape )
# Crude.... O2017
if len( EmepCdf.vals.shape ) > 2:
box = EmepCdf.vals[:,jS:jN+1,iL:iR+1]
else:
box = EmepCdf.vals[jS:jN+1,iL:iR+1]
box = box[ np.newaxis, :, : ] # Make 3D
f00 = box[:,0,0]
f10 = box[:,1,0]
f01 = box[:,0,1]
f11 = box[:,1,1]
# bidirectional interpolation
dx = x-int(x)
dy = y-int(y)
value = f00*(1-dx)*(1-dy) + f01*dx*(1-dy)+f10*(1-dx)*dy + f11*dx*dy
# tips from #http://stackoverflow.com/questions/21816433/element-wise-array-maximum-function-in-numpy-more-than-two-arrays
#maxvals = maximum.reduce([x0,x1,x2,x3])
# 5 times faster:
if minmax:
maxval = vstack([f00,f10,f01,f11]).max(axis=0)
minval = vstack([f00,f10,f01,f11]).min(axis=0)
if dbg:
print(dtxt,' --------- OUTFs ------------------------------------')
print(dtxt+"x,y -> ijcoords ", xPtin, yPtin, iL, iR, jS, jN )
print(dtxt+"x, y, dx dy ", x, y, dx, dy)
print(dtxt+"x, y, dx dy ", x, y, dx, dy)
print(dtxt, jS, iL, EmepCdf.vals[0,jS,iL], EmepCdf.vals.min(), EmepCdf.vals.max())
print(dtxt,x,y, dx, dy, iL,iR, jS, jN , EmepCdf.varname, EmepCdf.vals.max())
#print('Fs ', f00, f10, f01, f11)
#print('F00', f00, (1-dx)*(1-dy))
#print('F10', f10, dx*(1-dy))
#print('F01', f01, (1-dx)*dy)
#print('F11', f11, dx*dy)
for i in range(0,len(f00)): # if var is array
print( i, f00[i],f10[i],f01[i],f11[i], value[i] )
if minmax:
return value,minval,maxval
else:
return value
#-------------------------------------------------------------------
def get_jdays(EmepCdf):
    """ Returns the day numbers of all days
        in the EMEP cdf file.
"""
origin_datetime = datetime.datetime(EmepCdf.times[0].year, 1, 1) - datetime.timedelta(days=1)
return np.array([int((tim - origin_datetime).total_seconds()/(24.*3600.)) for tim in EmepCdf.times])
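`get_jdays` maps each timestamp to its ordinal day of the year (Jan 1 = 1) by measuring whole days from the day before Jan 1. The same arithmetic for a single datetime:

```python
import datetime

def jday(d):
    # day number within the year; Jan 1 -> 1, leap years handled naturally
    origin = datetime.datetime(d.year, 1, 1) - datetime.timedelta(days=1)
    return int((d - origin).total_seconds() / (24.0 * 3600.0))

first = jday(datetime.datetime(2012, 1, 1))
last = jday(datetime.datetime(2012, 12, 31))  # 2012 is a leap year
```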
#=============================================================================
def main():
import argparse
dtxt='EmepCdf main:'
#------------------ arguments ----------------------------------------------
parser=argparse.ArgumentParser(epilog=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('-v','--varname',help='varname in nc file',required=True)
parser.add_argument('-i','--ifile',help='Input file',required=True)
args=parser.parse_args()
print(dtxt+'ARGS', args)
ifile= args.ifile
var=args.varname
if not os.path.isfile(ifile):
sys.exit('FILE NOT FOUND!!!' + ifile )
print('-'*78) #------------------------------------
print("Testing full grid, tStep=3 ", args.ifile )
EmepFile = readcdf( ifile, var, getVals = True, tStep=3 ) # 180 from ECHAM day.nc
EmepFile.printme()
#print("Testing one point ", ifile)
#EmepFile2, ecdf2 = readcdf( ifile, var )
#EmepFile2.vals=ecdf2.variables[var][:,10,10]
#EmepFile2.printme()
#print "XY emep for proj %s %6.2f %6.2f is %6.2f %6.2f:" % (EmepFile.proj, lon,lat, xemep, yemep)
print('-'*78) #------------------------------------
lon, lat = -9.89, 53.3 # Mace Head
print('Testing one point:', lon, lat )
# Now with all tsteps:
EmepFile = readcdf( ifile, var, getVals = True ) # 180 from ECHAM day.nc
t3 = time.time()
v, minv, maxv = get_vals(lon,lat,EmepFile,minmax=True,dbg=False)
t4 = time.time()
print('Testing nmin, nmax:')
print('1st: ', v[0], minv[0], maxv[0], len(v) )
print('Last:', v[-1], minv[-1], maxv[-1], len(v) )
#SPEED EmepFile= readcdf( ifile, var )
#print(EmepFile.lldim, EmepFile.dimx, EmepFile.vals.max())
#SPEED EmepFile.printme()
# For points, here use i,j model coordinates coordinates
# Use getEmepPt for use of lat/long
#SPEED EmepFile= readcdf( ifile, var, getijPt = [ True, 23, 45] )
#SPEED EmepFile.printme()
#SPEED from StringFunctions import multiwrite
#SPEED print('Ozone series ', multiwrite( EmepFile.vals[0:10],'%5.1f') )
# Test a few points
print('-'*78) #------------------------------------
print('Testing several ij points:' )
gpts=[ [ 12, 24], [12,25], [13,23], [13, 24], [13,25], [14,24], [21, 34], [22,34] ]
EmepFile = readcdf( ifile, var, getVals=True )
# Fails for ECHAM since direct ecdf.variables used:
npt=len(gpts)
npt=0
for i, j in gpts:
print('point ', npt, i, j, EmepFile.vals[:4,j,i] )
npt += 1
#========================================== END
# for ipython tips, see http://stackoverflow.com/questions/22631845/how-to-pass-command-line-arguments-to-ipython
# e.g. ipython -i arg1 arg2
#import datetime as dtim #from netcdftime import utime
# For future use, maybe ...
#emeptime = utime('days since 1990-01-01 00:00:00')
#t0 = dtim.datetime(1990, 1, 1, 0, 0, 0 )
#nt0 = emeptime.date2num(t0)
#times=e.variables['time']
if ( __name__ == "__main__" ):
main()
| mifads/pyscripts | emxcdf/readcdf.py | Python | gpl-3.0 | 24,851 | [
"NetCDF"
] | b4ab592cb7228d14c81dc26a80eee22596ee594f061c9217eb0807ec5234031a |
import GPy
import GPyOpt
import argparse
import os
import numpy as np
import time
import FireflyAlgorithm as ffa
def func(var):
hist = []
gamma = var[:,0][0]
alpha = var[:,1][0]
fireflies = int(var[:,2][0] * 100)
step = int(var[:,3][0] * 100)
iterations = int(var[:,4][0] * 1000)
if args.v == 1 or args.v == 4:
alpha = int(alpha * 16)
if args.v == 2 or args.v == 5:
alpha = int(alpha * 32)
for i in range(args.n):
best_firefly = ffa.fireflyAlgorithm(0, d=args.d, i=iterations, g=gamma, a=alpha, f=fireflies, e=args.e, v=args.v, p=args.p, s=step, sch=args.sch)
hist.append(best_firefly.luminosity)
res = np.array(hist).mean()
print('Tried [Gamma, Alpha, #Fireflies, step, iterations] = [{}, {}, {}, {}, {}], got {}'.format(gamma, alpha, fireflies, step, iterations, res))
with open('bayesopt', 'a') as f:
f.write('{}\t{}\t{}\t{}\t{}\t{}\n'.format(gamma, alpha, fireflies, step, iterations, res))
return res
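`func` decodes GPyOpt's unit-interval design vector into the mixed integer/float firefly parameters by scaling and truncating. The scaling on its own, as a sketch (function name hypothetical; uses the `v == 1` alpha scale of 16):

```python
def decode(var, alpha_scale=16):
    # map [0, 1] optimisation variables to firefly parameters
    gamma = var[0]
    alpha = int(var[1] * alpha_scale)
    fireflies = int(var[2] * 100)
    step = int(var[3] * 100)
    iterations = int(var[4] * 1000)
    return gamma, alpha, fireflies, step, iterations

params = decode([0.5, 0.5, 0.5, 0.5, 0.5])
```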
def main(args):
with open('bayesopt', 'w') as f:
print('cleaning previous results')
# bounds = [{'name': 'gamma', 'type': 'continuous', 'domain': (0, 1)},
# {'name': 'alpha', 'type': 'continuous', 'domain': (0, 1)},
# {'name': 'nbfireflies', 'type': 'continuous', 'domain': (0.02, 1)}]
bounds = [{'name': 'gamma', 'type': 'continuous', 'domain': (0.001, 1)},
{'name': 'alpha', 'type': 'continuous', 'domain': (0.0625, 1)},
{'name': 'nbfireflies', 'type': 'continuous', 'domain': (0.02, 1)},
{'name': 'step', 'type': 'continuous', 'domain': (0.01, 1)},
{'name': 'iterations', 'type': 'continuous', 'domain': (0.001, 1)}]
myBopt = GPyOpt.methods.BayesianOptimization(f = func,
domain = bounds,
model_type = 'GP',
acquisition_type = 'EI',
normalize_Y = True,
exact_feval = False,
initial_design_numdata = 8,
evaluator_type = 'local_penalization',
batch_size = 4,
num_cores = 4)
max_iter = args.m
t_start = time.time()
myBopt.run_optimization(max_iter)
best_value = myBopt.fx_opt[0]
best_gamma = myBopt.x_opt[0]
best_alpha = myBopt.x_opt[1]
if args.v == 1 or args.v == 4:
best_alpha = int(best_alpha * 16)
if args.v == 2 or args.v == 5:
best_alpha = int(best_alpha * 32)
best_fireflies = int(myBopt.x_opt[2] * 100)
best_step = int(myBopt.x_opt[3] * 100)
best_iteration = int(myBopt.x_opt[4] * 1000)
print('Best value: {} at [Gamma, Alpha, #Firefly, step, iterations] = [{}, {}, {}, {}, {}], found in {} s'.format(best_value, best_gamma, best_alpha, best_fireflies, best_step, best_iteration, time.time() - t_start))
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-m", type = int, default = 100, help = "number of max iterations")
parser.add_argument("-d", type = int, default = 2, help = "number of drones")
# parser.add_argument("-i", type = int, default = 10000, help = "number of iterations")
    parser.add_argument("-e", type = float, default = 0.1, help = "distance penalization coefficient")
parser.add_argument("-v", type = int, default = 1, help = "alpha version")
parser.add_argument("-n", type = int, default = 1, help = "number of runs")
    parser.add_argument("-p", type = int, default = 1, help = "enable/disable verbose")
parser.add_argument("-s", type = int, default = 1, help = "step")
parser.add_argument("-sch", type = str, default = "linear", help = "segment schedule")
args = parser.parse_args()
main(args)
| OPU-Surveillance-System/monitoring | master/scripts/planner/solvers/bo_firefly_iteration.py | Python | mit | 4,015 | [
"Firefly"
] | 5404648a134ef950eb9503a04b4e6e5dc795638a3e05a2abe4e8f1b1f9659bcf |
# (c) 2013-2014, Michael DeHaan <michael.dehaan@gmail.com>
# Stephen Fromm <sfromm@gmail.com>
# Brian Coca <briancoca+dev@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
import os
import os.path
import pipes
import shutil
import tempfile
import base64
from ansible import utils
from ansible.runner.return_data import ReturnData
class ActionModule(object):
TRANSFERS_FILES = True
def __init__(self, runner):
self.runner = runner
def _assemble_from_fragments(self, src_path, delimiter=None, compiled_regexp=None):
''' assemble a file from a directory of fragments '''
tmpfd, temp_path = tempfile.mkstemp()
tmp = os.fdopen(tmpfd,'w')
delimit_me = False
add_newline = False
for f in sorted(os.listdir(src_path)):
if compiled_regexp and not compiled_regexp.search(f):
continue
fragment = "%s/%s" % (src_path, f)
if not os.path.isfile(fragment):
continue
            fragment_content = open(fragment).read()
# always put a newline between fragments if the previous fragment didn't end with a newline.
if add_newline:
tmp.write('\n')
# delimiters should only appear between fragments
if delimit_me:
if delimiter:
# un-escape anything like newlines
delimiter = delimiter.decode('unicode-escape')
tmp.write(delimiter)
# always make sure there's a newline after the
# delimiter, so lines don't run together
if delimiter[-1] != '\n':
tmp.write('\n')
tmp.write(fragment_content)
delimit_me = True
if fragment_content.endswith('\n'):
add_newline = False
else:
add_newline = True
tmp.close()
return temp_path
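A self-contained Python 3 sketch of the same fragment-assembly idea (using `open()` and context managers in place of the Python 2 `file()` builtin; the newline bookkeeping is simplified to always terminate fragments with a newline):

```python
import os
import tempfile


def assemble_fragments(src_path, delimiter=None):
    """Concatenate the files in src_path (sorted by name) into one temp file."""
    tmpfd, temp_path = tempfile.mkstemp()
    with os.fdopen(tmpfd, 'w') as tmp:
        first = True
        for name in sorted(os.listdir(src_path)):
            fragment = os.path.join(src_path, name)
            if not os.path.isfile(fragment):
                continue
            with open(fragment) as f:
                content = f.read()
            if not first and delimiter:
                # delimiters only appear between fragments, each on its own line
                tmp.write(delimiter if delimiter.endswith('\n') else delimiter + '\n')
            tmp.write(content if content.endswith('\n') else content + '\n')
            first = False
    return temp_path
```

Sorting by name is what makes `10-base.conf` / `20-extra.conf` style fragment directories assemble deterministically.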
def run(self, conn, tmp, module_name, module_args, inject, complex_args=None, **kwargs):
# load up options
options = {}
if complex_args:
options.update(complex_args)
options.update(utils.parse_kv(module_args))
src = options.get('src', None)
dest = options.get('dest', None)
delimiter = options.get('delimiter', None)
remote_src = utils.boolean(options.get('remote_src', 'yes'))
if src is None or dest is None:
result = dict(failed=True, msg="src and dest are required")
return ReturnData(conn=conn, comm_ok=False, result=result)
if remote_src:
return self.runner._execute_module(conn, tmp, 'assemble', module_args, inject=inject, complex_args=complex_args)
elif '_original_file' in inject:
src = utils.path_dwim_relative(inject['_original_file'], 'files', src, self.runner.basedir)
else:
# the source is local, so expand it here
src = os.path.expanduser(src)
# Does all work assembling the file
path = self._assemble_from_fragments(src, delimiter)
pathmd5 = utils.md5s(path)
remote_md5 = self.runner._remote_md5(conn, tmp, dest)
if pathmd5 != remote_md5:
            resultant = open(path).read()
if self.runner.diff:
dest_result = self.runner._execute_module(conn, tmp, 'slurp', "path=%s" % dest, inject=inject, persist_files=True)
if 'content' in dest_result.result:
dest_contents = dest_result.result['content']
if dest_result.result['encoding'] == 'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise Exception("unknown encoding, failed: %s" % dest_result.result)
xfered = self.runner._transfer_str(conn, tmp, 'src', resultant)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root':
self.runner._low_level_exec_command(conn, "chmod a+r %s" % xfered, tmp)
# run the copy module
module_args = "%s src=%s dest=%s original_basename=%s" % (module_args, pipes.quote(xfered), pipes.quote(dest), pipes.quote(os.path.basename(src)))
if self.runner.noop_on_check(inject):
return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=src, after=resultant))
else:
res = self.runner._execute_module(conn, tmp, 'copy', module_args, inject=inject)
res.diff = dict(after=resultant)
return res
        else:
            # checksums match: nothing to transfer, so just run the file
            # module against the destination to fix up attributes
            module_args = "%s dest=%s original_basename=%s" % (module_args, pipes.quote(dest), pipes.quote(os.path.basename(src)))
            return self.runner._execute_module(conn, tmp, 'file', module_args, inject=inject)
| pilwon/ansible | lib/ansible/runner/action_plugins/assemble.py | Python | gpl-3.0 | 5,570 | ["Brian"] | 130b20b0718345b839a135e4bb78fd3f109e2f217bf29d0166ff850548c5a370 |
# -*- encoding: utf-8 -*-
"""
General helper functions that don't fit neatly under any given category.
They provide some useful string and conversion methods that might
be of use when designing your own game.
"""
from __future__ import division, print_function
from builtins import object, range
from future.utils import viewkeys, raise_
import os
import sys
import imp
import types
import math
import re
import textwrap
import random
from os.path import join as osjoin
from importlib import import_module
from inspect import ismodule, trace, getmembers, getmodule
from collections import defaultdict, OrderedDict
from twisted.internet import threads, defer, reactor
from django.conf import settings
from django.utils import timezone
from django.utils.translation import ugettext as _
from evennia.utils import logger
_MULTIMATCH_TEMPLATE = settings.SEARCH_MULTIMATCH_TEMPLATE
_EVENNIA_DIR = settings.EVENNIA_DIR
_GAME_DIR = settings.GAME_DIR
try:
import cPickle as pickle
except ImportError:
import pickle
ENCODINGS = settings.ENCODINGS
_GA = object.__getattribute__
_SA = object.__setattr__
_DA = object.__delattr__
_DEFAULT_WIDTH = settings.CLIENT_DEFAULT_WIDTH
def is_iter(iterable):
"""
Checks if an object behaves iterably.
Args:
iterable (any): Entity to check for iterability.
Returns:
is_iterable (bool): If `iterable` is iterable or not.
Notes:
Strings are *not* accepted as iterable (although they are
actually iterable), since string iterations are usually not
what we want to do with a string.
"""
return hasattr(iterable, '__iter__')
def make_iter(obj):
"""
Makes sure that the object is always iterable.
Args:
obj (any): Object to make iterable.
Returns:
iterable (list or iterable): The same object
passed-through or made iterable.
"""
    return obj if hasattr(obj, '__iter__') else [obj]
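A Python 3 usage sketch: the original relies on Python 2 strings lacking `__iter__`, so a Python 3 port needs an explicit `str` check to keep strings non-iterable, as the docstring intends:

```python
def is_iter(obj):
    """True for iterables, but deliberately False for strings."""
    return hasattr(obj, '__iter__') and not isinstance(obj, str)


def make_iter(obj):
    """Wrap obj in a list unless it is already a (non-string) iterable."""
    return obj if is_iter(obj) else [obj]
```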
def wrap(text, width=_DEFAULT_WIDTH, indent=0):
"""
Safely wrap text to a certain number of characters.
Args:
text (str): The text to wrap.
width (int, optional): The number of characters to wrap to.
indent (int): How much to indent each line (with whitespace).
Returns:
text (str): Properly wrapped text.
"""
if not text:
return ""
text = to_unicode(text)
indent = " " * indent
return to_str(textwrap.fill(text, width, initial_indent=indent, subsequent_indent=indent))
# alias - fill
fill = wrap
def pad(text, width=_DEFAULT_WIDTH, align="c", fillchar=" "):
"""
Pads to a given width.
Args:
text (str): Text to pad.
width (int, optional): The width to pad to, in characters.
align (str, optional): This is one of 'c', 'l' or 'r' (center,
left or right).
fillchar (str, optional): The character to fill with.
Returns:
text (str): The padded text.
"""
align = align if align in ('c', 'l', 'r') else 'c'
fillchar = fillchar[0] if fillchar else " "
if align == 'l':
return text.ljust(width, fillchar)
elif align == 'r':
return text.rjust(width, fillchar)
else:
return text.center(width, fillchar)
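`pad` is a thin dispatcher over `str.ljust`/`str.rjust`/`str.center`; re-stated compactly here so the example runs standalone:

```python
def pad(text, width=78, align="c", fillchar=" "):
    """Pad text to width, aligned left ('l'), right ('r') or center ('c')."""
    align = align if align in ('c', 'l', 'r') else 'c'
    fillchar = fillchar[0] if fillchar else " "
    if align == 'l':
        return text.ljust(width, fillchar)
    if align == 'r':
        return text.rjust(width, fillchar)
    return text.center(width, fillchar)
```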
def crop(text, width=_DEFAULT_WIDTH, suffix="[...]"):
"""
Crop text to a certain width, throwing away text from too-long
lines.
Args:
text (str): Text to crop.
width (int, optional): Width of line to crop, in characters.
suffix (str, optional): This is appended to the end of cropped
lines to show that the line actually continues. Cropping
will be done so that the suffix will also fit within the
given width. If width is too small to fit both crop and
suffix, the suffix will be dropped.
Returns:
text (str): The cropped text.
"""
utext = to_unicode(text)
ltext = len(utext)
if ltext <= width:
return text
else:
lsuffix = len(suffix)
utext = utext[:width] if lsuffix >= width else "%s%s" % (utext[:width - lsuffix], suffix)
return to_str(utext)
def dedent(text):
"""
Safely clean all whitespace at the left of a paragraph.
Args:
text (str): The text to dedent.
Returns:
text (str): Dedented string.
Notes:
This is useful for preserving triple-quoted string indentation
while still shifting it all to be next to the left edge of the
display.
"""
if not text:
return ""
return textwrap.dedent(text)
def justify(text, width=_DEFAULT_WIDTH, align="f", indent=0):
"""
Fully justify a text so that it fits inside `width`. When using
full justification (default) this will be done by padding between
words with extra whitespace where necessary. Paragraphs will
be retained.
Args:
text (str): Text to justify.
width (int, optional): The length of each line, in characters.
align (str, optional): The alignment, 'l', 'c', 'r' or 'f'
for left, center, right or full justification respectively.
indent (int, optional): Number of characters indentation of
entire justified text block.
Returns:
justified (str): The justified and indented block of text.
"""
def _process_line(line):
"""
helper function that distributes extra spaces between words. The number
of gaps is nwords - 1 but must be at least 1 for single-word lines. We
distribute odd spaces randomly to one of the gaps.
"""
line_rest = width - (wlen + ngaps)
gap = " " # minimum gap between words
if line_rest > 0:
if align == 'l':
line[-1] += " " * line_rest
elif align == 'r':
line[0] = " " * line_rest + line[0]
elif align == 'c':
pad = " " * (line_rest // 2)
line[0] = pad + line[0]
line[-1] = line[-1] + pad + " " * (line_rest % 2)
else: # align 'f'
gap += " " * (line_rest // max(1, ngaps))
rest_gap = line_rest % max(1, ngaps)
for i in range(rest_gap):
line[i] += " "
return gap.join(line)
# split into paragraphs and words
    paragraphs = re.split(r"\n\s*?\n", text, flags=re.MULTILINE)
words = []
for ip, paragraph in enumerate(paragraphs):
if ip > 0:
words.append(("\n\n", 0))
words.extend((word, len(word)) for word in paragraph.split())
ngaps, wlen, line = 0, 0, []
lines = []
while words:
if not line:
# start a new line
word = words.pop(0)
wlen = word[1]
line.append(word[0])
elif (words[0][1] + wlen + ngaps) >= width:
# next word would exceed word length of line + smallest gaps
lines.append(_process_line(line))
ngaps, wlen, line = 0, 0, []
else:
# put a new word on the line
word = words.pop(0)
line.append(word[0])
if word[1] == 0:
# a new paragraph, process immediately
lines.append(_process_line(line))
ngaps, wlen, line = 0, 0, []
else:
wlen += word[1]
ngaps += 1
if line: # catch any line left behind
lines.append(_process_line(line))
indentstring = " " * indent
return "\n".join([indentstring + line for line in lines])
def list_to_string(inlist, endsep="and", addquote=False):
"""
This pretty-formats a list as string output, adding an optional
alternative separator to the second to last entry. If `addquote`
is `True`, the outgoing strings will be surrounded by quotes.
Args:
inlist (list): The list to print.
endsep (str, optional): If set, the last item separator will
be replaced with this value.
addquote (bool, optional): This will surround all outgoing
values with double quotes.
Returns:
liststr (str): The list represented as a string.
Examples:
```python
# no endsep:
[1,2,3] -> '1, 2, 3'
# with endsep=='and':
[1,2,3] -> '1, 2 and 3'
# with addquote and endsep
[1,2,3] -> '"1", "2" and "3"'
```
"""
if not endsep:
endsep = ","
else:
endsep = " " + endsep
if not inlist:
return ""
if addquote:
if len(inlist) == 1:
return "\"%s\"" % inlist[0]
return ", ".join("\"%s\"" % v for v in inlist[:-1]) + "%s %s" % (endsep, "\"%s\"" % inlist[-1])
else:
if len(inlist) == 1:
return str(inlist[0])
return ", ".join(str(v) for v in inlist[:-1]) + "%s %s" % (endsep, inlist[-1])
def wildcard_to_regexp(instring):
"""
Converts a player-supplied string that may have wildcards in it to
regular expressions. This is useful for name matching.
Args:
instring (string): A string that may potentially contain
wildcards (`*` or `?`).
Returns:
regex (str): A string where wildcards were replaced with
regular expressions.
"""
regexp_string = ""
# If the string starts with an asterisk, we can't impose the beginning of
# string (^) limiter.
if instring[0] != "*":
regexp_string += "^"
    # Replace any occurrences of * or ? with the appropriate groups.
regexp_string += instring.replace("*", "(.*)").replace("?", "(.{1})")
# If there's an asterisk at the end of the string, we can't impose the
# end of string ($) limiter.
if instring[-1] != "*":
regexp_string += "$"
return regexp_string
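The key detail above is that the `^`/`$` anchors are only added when the pattern does not already begin/end with a wildcard; re-stated here for a runnable check:

```python
import re


def wildcard_to_regexp(instring):
    """Convert a '*'/'?' wildcard pattern to a regular expression string."""
    regexp_string = ""
    if instring[0] != "*":
        regexp_string += "^"   # anchor start only if no leading wildcard
    regexp_string += instring.replace("*", "(.*)").replace("?", "(.{1})")
    if instring[-1] != "*":
        regexp_string += "$"   # anchor end only if no trailing wildcard
    return regexp_string
```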
def time_format(seconds, style=0):
"""
Function to return a 'prettified' version of a value in seconds.
Args:
        seconds (int): Number of seconds to format.
style (int): One of the following styles:
0. "1d 08:30"
1. "1d"
2. "1 day, 8 hours, 30 minutes"
3. "1 day, 8 hours, 30 minutes, 10 seconds"
"""
if seconds < 0:
seconds = 0
else:
# We'll just use integer math, no need for decimal precision.
seconds = int(seconds)
days = seconds // 86400
seconds -= days * 86400
hours = seconds // 3600
seconds -= hours * 3600
minutes = seconds // 60
seconds -= minutes * 60
    if style == 0:
"""
Standard colon-style output.
"""
if days > 0:
retval = '%id %02i:%02i' % (days, hours, minutes,)
else:
retval = '%02i:%02i' % (hours, minutes,)
return retval
    elif style == 1:
"""
Simple, abbreviated form that only shows the highest time amount.
"""
if days > 0:
return '%id' % (days,)
elif hours > 0:
return '%ih' % (hours,)
elif minutes > 0:
return '%im' % (minutes,)
else:
return '%is' % (seconds,)
    elif style == 2:
"""
Full-detailed, long-winded format. We ignore seconds.
"""
days_str = hours_str = ''
minutes_str = '0 minutes'
if days > 0:
if days == 1:
days_str = '%i day, ' % days
else:
days_str = '%i days, ' % days
if days or hours > 0:
if hours == 1:
hours_str = '%i hour, ' % hours
else:
hours_str = '%i hours, ' % hours
if hours or minutes > 0:
if minutes == 1:
minutes_str = '%i minute ' % minutes
else:
minutes_str = '%i minutes ' % minutes
retval = '%s%s%s' % (days_str, hours_str, minutes_str)
    elif style == 3:
"""
Full-detailed, long-winded format. Includes seconds.
"""
days_str = hours_str = minutes_str = seconds_str = ''
if days > 0:
if days == 1:
days_str = '%i day, ' % days
else:
days_str = '%i days, ' % days
if days or hours > 0:
if hours == 1:
hours_str = '%i hour, ' % hours
else:
hours_str = '%i hours, ' % hours
if hours or minutes > 0:
if minutes == 1:
minutes_str = '%i minute ' % minutes
else:
minutes_str = '%i minutes ' % minutes
if minutes or seconds > 0:
if seconds == 1:
seconds_str = '%i second ' % seconds
else:
seconds_str = '%i seconds ' % seconds
retval = '%s%s%s%s' % (days_str, hours_str, minutes_str, seconds_str)
return retval.strip()
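The day/hour/minute decomposition shared by all four styles can be expressed more compactly with `divmod`; a sketch with a hypothetical helper name:

```python
def split_seconds(seconds):
    """Decompose a duration into (days, hours, minutes, seconds)."""
    seconds = max(0, int(seconds))      # negative durations clamp to zero
    days, seconds = divmod(seconds, 86400)
    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)
    return days, hours, minutes, seconds
```

The style-specific formatting then only needs to render the tuple.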
def datetime_format(dtobj):
"""
Pretty-prints the time since a given time.
Args:
dtobj (datetime): An datetime object, e.g. from Django's
`DateTimeField`.
Returns:
deltatime (str): A string describing how long ago `dtobj`
took place.
"""
year, month, day = dtobj.year, dtobj.month, dtobj.day
hour, minute, second = dtobj.hour, dtobj.minute, dtobj.second
now = timezone.now()
if year < now.year:
# another year
timestring = str(dtobj.date())
elif dtobj.date() < now.date():
# another date, same year
timestring = "%02i-%02i" % (day, month)
elif hour < now.hour - 1:
# same day, more than 1 hour ago
timestring = "%02i:%02i" % (hour, minute)
else:
# same day, less than 1 hour ago
timestring = "%02i:%02i:%02i" % (hour, minute, second)
return timestring
def host_os_is(osname):
"""
Check to see if the host OS matches the query.
Args:
osname (str): Common names are "posix" (linux/unix/mac) and
"nt" (windows).
    Returns:
        is_os (bool): If the os matches or not.
"""
return os.name == osname
def get_evennia_version():
"""
Helper method for getting the current evennia version.
Returns:
version (str): The version string.
"""
import evennia
return evennia.__version__
def pypath_to_realpath(python_path, file_ending='.py', pypath_prefixes=None):
"""
Converts a dotted Python path to an absolute path under the
Evennia library directory or under the current game directory.
Args:
python_path (str): A dot-python path
file_ending (str): A file ending, including the period.
pypath_prefixes (list): A list of paths to test for existence. These
should be on python.path form. EVENNIA_DIR and GAME_DIR are automatically
checked, they need not be added to this list.
Returns:
abspaths (list): All existing, absolute paths created by
converting `python_path` to an absolute paths and/or
prepending `python_path` by `settings.EVENNIA_DIR`,
            `settings.GAME_DIR` and by `pypath_prefixes`, respectively.
Notes:
This will also try a few combinations of paths to allow cases
where pypath is given including the "evennia." or "mygame."
prefixes.
"""
path = python_path.strip().split('.')
plong = osjoin(*path) + file_ending
pshort = osjoin(*path[1:]) + file_ending if len(path) > 1 else plong # in case we had evennia. or mygame.
prefixlong = [osjoin(*ppath.strip().split('.'))
for ppath in make_iter(pypath_prefixes)] \
if pypath_prefixes else []
prefixshort = [osjoin(*ppath.strip().split('.')[1:])
for ppath in make_iter(pypath_prefixes) if len(ppath.strip().split('.')) > 1] \
if pypath_prefixes else []
paths = [plong] + \
[osjoin(_EVENNIA_DIR, prefix, plong) for prefix in prefixlong] + \
[osjoin(_GAME_DIR, prefix, plong) for prefix in prefixlong] + \
[osjoin(_EVENNIA_DIR, prefix, plong) for prefix in prefixshort] + \
[osjoin(_GAME_DIR, prefix, plong) for prefix in prefixshort] + \
[osjoin(_EVENNIA_DIR, plong), osjoin(_GAME_DIR, plong)] + \
[osjoin(_EVENNIA_DIR, prefix, pshort) for prefix in prefixshort] + \
[osjoin(_GAME_DIR, prefix, pshort) for prefix in prefixshort] + \
[osjoin(_EVENNIA_DIR, prefix, pshort) for prefix in prefixlong] + \
[osjoin(_GAME_DIR, prefix, pshort) for prefix in prefixlong] + \
[osjoin(_EVENNIA_DIR, pshort), osjoin(_GAME_DIR, pshort)]
# filter out non-existing paths
return list(set(p for p in paths if os.path.isfile(p)))
def dbref(dbref, reqhash=True):
"""
Converts/checks if input is a valid dbref.
Args:
dbref (int or str): A database ref on the form N or #N.
reqhash (bool, optional): Require the #N form to accept
input as a valid dbref.
Returns:
dbref (int or None): The integer part of the dbref or `None`
if input was not a valid dbref.
"""
if reqhash:
num = (int(dbref.lstrip('#')) if (isinstance(dbref, basestring) and
dbref.startswith("#") and
dbref.lstrip('#').isdigit())
else None)
        return num if num is not None and num > 0 else None
elif isinstance(dbref, basestring):
dbref = dbref.lstrip('#')
return int(dbref) if dbref.isdigit() and int(dbref) > 0 else None
else:
return dbref if isinstance(dbref, int) else None
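A Python 3 sketch of the same parsing (`basestring` replaced by `str`), which also sidesteps the `None > 0` comparison that raises `TypeError` on Python 3:

```python
def dbref(inp, reqhash=True):
    """Return the positive integer part of a dbref ('#N' or N), else None."""
    if isinstance(inp, str):
        if reqhash and not (inp.startswith("#") and inp.lstrip("#").isdigit()):
            return None
        num = inp.lstrip("#")
        return int(num) if num.isdigit() and int(num) > 0 else None
    if reqhash:
        return None          # only the '#N' string form is accepted
    return inp if isinstance(inp, int) else None
```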
def dbref_to_obj(inp, objclass, raise_errors=True):
"""
Convert a #dbref to a valid object.
Args:
inp (str or int): A valid #dbref.
objclass (class): A valid django model to filter against.
raise_errors (bool, optional): Whether to raise errors
or return `None` on errors.
Returns:
obj (Object or None): An entity loaded from the dbref.
Raises:
Exception: If `raise_errors` is `True` and
`objclass.objects.get(id=dbref)` did not return a valid
object.
"""
dbid = dbref(inp)
if not dbid:
# we only convert #dbrefs
return inp
try:
if dbid < 0:
return None
except ValueError:
return None
# if we get to this point, inp is an integer dbref; get the matching object
try:
return objclass.objects.get(id=dbid)
except Exception:
if raise_errors:
raise
return inp
# legacy alias
dbid_to_obj = dbref_to_obj
# some direct translations for the latinify
_UNICODE_MAP = {"EM DASH": "-", "FIGURE DASH": "-", "EN DASH": "-", "HORIZONTAL BAR": "-",
"HORIZONTAL ELLIPSIS": "...", "RIGHT SINGLE QUOTATION MARK": "'"}
def latinify(unicode_string, default='?', pure_ascii=False):
"""
Convert a unicode string to "safe" ascii/latin-1 characters.
This is used as a last resort when normal decoding does not work.
    Args:
unicode_string (unicode): A string to convert to an ascii
or latin-1 string.
default (str, optional): Characters resisting mapping will be replaced
with this character or string.
Notes:
This is inspired by the gist by Ricardo Murri:
https://gist.github.com/riccardomurri/3c3ccec30f037be174d3
"""
from unicodedata import name
converted = []
for unich in iter(unicode_string):
try:
ch = unich.decode('ascii')
except UnicodeDecodeError:
# deduce a latin letter equivalent from the Unicode data
# point name; e.g., since `name(u'á') == 'LATIN SMALL
# LETTER A WITH ACUTE'` translate `á` to `a`. However, in
# some cases the unicode name is still "LATIN LETTER"
            # although no direct equivalent in the Latin alphabet
# exists (e.g., Þ, "LATIN CAPITAL LETTER THORN") -- we can
# avoid these cases by checking that the letter name is
# composed of one letter only.
# We also supply some direct-translations for some particular
# common cases.
what = name(unich)
if what in _UNICODE_MAP:
ch = _UNICODE_MAP[what]
else:
what = what.split()
if what[0] == 'LATIN' and what[2] == 'LETTER' and len(what[3]) == 1:
ch = what[3].lower() if what[1] == 'SMALL' else what[3].upper()
else:
ch = default
converted.append(chr(ord(ch)))
return ''.join(converted)
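A Python 3 sketch of the same name-based transliteration (Python 3 strings are already unicode, so the `decode('ascii')` probe becomes a simple `ord()` check; the direct-translation map is abbreviated):

```python
from unicodedata import name

_UNICODE_MAP = {"EM DASH": "-", "EN DASH": "-",
                "HORIZONTAL ELLIPSIS": "...",
                "RIGHT SINGLE QUOTATION MARK": "'"}


def latinify(text, default="?"):
    """Map non-ASCII characters to ASCII lookalikes via their Unicode names."""
    out = []
    for ch in text:
        if ord(ch) < 128:
            out.append(ch)
            continue
        what = name(ch, "")
        if what in _UNICODE_MAP:
            out.append(_UNICODE_MAP[what])
        else:
            parts = what.split()
            # 'LATIN SMALL LETTER A WITH ACUTE' -> 'a'; a single-character
            # letter name guards against cases like 'THORN'
            if (len(parts) >= 4 and parts[0] == "LATIN"
                    and parts[2] == "LETTER" and len(parts[3]) == 1):
                out.append(parts[3].lower() if parts[1] == "SMALL" else parts[3])
            else:
                out.append(default)
    return "".join(out)
```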
def to_unicode(obj, encoding='utf-8', force_string=False):
"""
This decodes a suitable object to the unicode format.
Args:
obj (any): Object to decode to unicode.
encoding (str, optional): The encoding type to use for the
            decoding.
force_string (bool, optional): Always convert to string, no
matter what type `obj` is initially.
Returns:
result (unicode or any): Will return a unicode object if input
was a string. If input was not a string, the original will be
returned unchanged unless `force_string` is also set.
Notes:
One needs to encode the obj back to utf-8 before writing to disk
or printing. That non-string objects are let through without
conversion is important for e.g. Attributes.
"""
if force_string and not isinstance(obj, basestring):
# some sort of other object. Try to
# convert it to a string representation.
if hasattr(obj, '__str__'):
obj = obj.__str__()
elif hasattr(obj, '__unicode__'):
obj = obj.__unicode__()
else:
# last resort
obj = str(obj)
if isinstance(obj, basestring) and not isinstance(obj, unicode):
try:
obj = unicode(obj, encoding)
return obj
except UnicodeDecodeError:
for alt_encoding in ENCODINGS:
try:
obj = unicode(obj, alt_encoding)
return obj
except UnicodeDecodeError:
pass
raise Exception("Error: '%s' contains invalid character(s) not in %s." % (obj, encoding))
return obj
def to_str(obj, encoding='utf-8', force_string=False):
"""
This encodes a unicode string back to byte-representation,
for printing, writing to disk etc.
Args:
obj (any): Object to encode to bytecode.
encoding (str, optional): The encoding type to use for the
encoding.
force_string (bool, optional): Always convert to string, no
matter what type `obj` is initially.
Notes:
Non-string objects are let through without modification - this
is required e.g. for Attributes. Use `force_string` to force
conversion of objects to strings.
"""
if force_string and not isinstance(obj, basestring):
# some sort of other object. Try to
# convert it to a string representation.
try:
obj = str(obj)
except Exception:
obj = unicode(obj)
if isinstance(obj, basestring) and isinstance(obj, unicode):
try:
obj = obj.encode(encoding)
return obj
except UnicodeEncodeError:
for alt_encoding in ENCODINGS:
try:
obj = obj.encode(alt_encoding)
return obj
except UnicodeEncodeError:
pass
# if we get to this point we have not found any way to convert this string. Try to parse it manually,
try:
return latinify(obj, '?')
    except Exception as err:
raise Exception("%s, Error: Unicode could not encode unicode string '%s'(%s) to a bytestring. " % (err, obj, encoding))
return obj
def validate_email_address(emailaddress):
"""
Checks if an email address is syntactically correct.
Args:
emailaddress (str): Email address to validate.
Returns:
is_valid (bool): If this is a valid email or not.
    Notes:
(This snippet was adapted from
http://commandline.org.uk/python/email-syntax-check.)
"""
emailaddress = r"%s" % emailaddress
domains = ("aero", "asia", "biz", "cat", "com", "coop",
"edu", "gov", "info", "int", "jobs", "mil", "mobi", "museum",
"name", "net", "org", "pro", "tel", "travel")
# Email address must be more than 7 characters in total.
if len(emailaddress) < 7:
return False # Address too short.
# Split up email address into parts.
try:
localpart, domainname = emailaddress.rsplit('@', 1)
host, toplevel = domainname.rsplit('.', 1)
except ValueError:
return False # Address does not have enough parts.
# Check for Country code or Generic Domain.
if len(toplevel) != 2 and toplevel not in domains:
return False # Not a domain name.
for i in '-_.%+.':
localpart = localpart.replace(i, "")
for i in '-_.':
host = host.replace(i, "")
if localpart.isalnum() and host.isalnum():
return True # Email address is fine.
else:
return False # Email address has funny characters.
def inherits_from(obj, parent):
"""
Takes an object and tries to determine if it inherits at *any*
distance from parent.
Args:
obj (any): Object to analyze. This may be either an instance
or a class.
parent (any): Can be either instance, class or python path to class.
Returns:
inherits_from (bool): If `parent` is a parent to `obj` or not.
Notes:
What differs this function from e.g. `isinstance()` is that `obj`
may be both an instance and a class, and parent may be an
instance, a class, or the python path to a class (counting from
the evennia root directory).
"""
if callable(obj):
# this is a class
obj_paths = ["%s.%s" % (mod.__module__, mod.__name__) for mod in obj.mro()]
else:
obj_paths = ["%s.%s" % (mod.__module__, mod.__name__) for mod in obj.__class__.mro()]
if isinstance(parent, basestring):
# a given string path, for direct matching
parent_path = parent
elif callable(parent):
# this is a class
parent_path = "%s.%s" % (parent.__module__, parent.__name__)
else:
parent_path = "%s.%s" % (parent.__class__.__module__, parent.__class__.__name__)
return any(1 for obj_path in obj_paths if obj_path == parent_path)
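A Python 3 sketch of the path-comparison idea. `isinstance(obj, type)` is used instead of `callable(obj)`, since instances defining `__call__` are themselves callable and would otherwise be misclassified as classes:

```python
def inherits_from(obj, parent):
    """True if obj (instance or class) inherits at any distance from parent."""
    cls = obj if isinstance(obj, type) else obj.__class__
    obj_paths = ["%s.%s" % (c.__module__, c.__name__) for c in cls.mro()]
    if isinstance(parent, str):
        parent_path = parent
    else:
        pcls = parent if isinstance(parent, type) else parent.__class__
        parent_path = "%s.%s" % (pcls.__module__, pcls.__name__)
    return parent_path in obj_paths


# demo classes
class Base(object):
    pass


class Child(Base):
    pass
```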
def server_services():
"""
Lists all services active on the Server. Observe that since
services are launched in memory, this function will only return
any results if called from inside the game.
Returns:
services (dict): A dict of available services.
"""
from evennia.server.sessionhandler import SESSIONS
if hasattr(SESSIONS, "server") and hasattr(SESSIONS.server, "services"):
server = SESSIONS.server.services.namedServices
else:
# This function must be called from inside the evennia process.
server = {}
del SESSIONS
return server
def uses_database(name="sqlite3"):
"""
Checks if the game is currently using a given database. This is a
shortcut to having to use the full backend name.
Args:
name (str): One of 'sqlite3', 'mysql', 'postgresql_psycopg2'
or 'oracle'.
Returns:
uses (bool): If the given database is used or not.
"""
try:
engine = settings.DATABASES["default"]["ENGINE"]
except KeyError:
engine = settings.DATABASE_ENGINE
return engine == "django.db.backends.%s" % name
def delay(delay, callback, *args, **kwargs):
"""
Delay the return of a value.
Args:
delay (int or float): The delay in seconds
callback (callable): Will be called with optional
arguments after `delay` seconds.
args (any, optional): Will be used as arguments to callback
Kwargs:
any (any): Will be used to call the callback.
Returns:
        deferred (deferred): Will fire with the callback after
            `delay` seconds. Note that if `delay()` is used in the
            commandhandler callback chain, the callback chain can be
            defined directly in the command body and doesn't need to be
            specified here.
"""
return reactor.callLater(delay, callback, *args, **kwargs)
_TYPECLASSMODELS = None
_OBJECTMODELS = None
def clean_object_caches(obj):
"""
Clean all object caches on the given object.
Args:
        obj (Object instance): An object whose caches to clean.
Notes:
This is only the contents cache these days.
"""
global _TYPECLASSMODELS, _OBJECTMODELS
if not _TYPECLASSMODELS:
from evennia.typeclasses import models as _TYPECLASSMODELS
if not obj:
return
# contents cache
try:
_SA(obj, "_contents_cache", None)
except AttributeError:
pass
# on-object property cache
[_DA(obj, cname) for cname in viewkeys(obj.__dict__)
if cname.startswith("_cached_db_")]
try:
hashid = _GA(obj, "hashid")
_TYPECLASSMODELS._ATTRIBUTE_CACHE[hashid] = {}
except AttributeError:
pass
_PPOOL = None
_PCMD = None
_PROC_ERR = "A process has ended with a probable error condition: process ended by signal 9."
def run_async(to_execute, *args, **kwargs):
"""
Runs a function or executes a code snippet asynchronously.
Args:
to_execute (callable): If this is a callable, it will be
executed with *args and non-reserved *kwargs as arguments.
The callable will be executed using ProcPool, or in a thread
if ProcPool is not available.
Kwargs:
at_return (callable): Should point to a callable with one
argument. It will be called with the return value from
to_execute.
at_return_kwargs (dict): This dictionary will be used as
keyword arguments to the at_return callback.
at_err (callable): This will be called with a Failure instance
if there is an error in to_execute.
at_err_kwargs (dict): This dictionary will be used as keyword
arguments to the at_err errback.
Notes:
All other `*args` and `**kwargs` will be passed on to
`to_execute`. Run_async will relay executed code to a thread
or procpool.
        Use this function with restraint and only for features/commands
        that you know have no influence on the cause-and-effect order of your
        game (commands given after the async function might be executed before
        it has finished). Accessing the same property from different threads
        can lead to unpredictable behaviour if you are not careful (this is
        called a "race condition").
Also note that some databases, notably sqlite3, don't support access from
multiple threads simultaneously, so if you do heavy database access from
your `to_execute` under sqlite3 you will probably run very slow or even get
tracebacks.
"""
# handle special reserved input kwargs
callback = kwargs.pop("at_return", None)
errback = kwargs.pop("at_err", None)
callback_kwargs = kwargs.pop("at_return_kwargs", {})
errback_kwargs = kwargs.pop("at_err_kwargs", {})
if callable(to_execute):
# no process pool available, fall back to old deferToThread mechanism.
deferred = threads.deferToThread(to_execute, *args, **kwargs)
else:
# no appropriate input for this server setup
raise RuntimeError("'%s' could not be handled by run_async" % to_execute)
# attach callbacks
    if callback:
        deferred.addCallback(callback, **callback_kwargs)
    if errback:
        deferred.addErrback(errback, **errback_kwargs)
def check_evennia_dependencies():
"""
Checks the versions of Evennia's dependencies including making
some checks for runtime libraries.
Returns:
result (bool): `False` if a show-stopping version mismatch is
found.
"""
# check main dependencies
from evennia.server.evennia_launcher import check_main_evennia_dependencies
not_error = check_main_evennia_dependencies()
errstring = ""
# South is no longer used ...
if 'south' in settings.INSTALLED_APPS:
errstring += "\n ERROR: 'south' found in settings.INSTALLED_APPS. " \
"\n South is no longer used. If this was added manually, remove it."
not_error = False
# IRC support
if settings.IRC_ENABLED:
try:
import twisted.words
twisted.words # set to avoid debug info about not-used import
except ImportError:
errstring += "\n ERROR: IRC is enabled, but twisted.words is not installed. Please install it." \
"\n Linux Debian/Ubuntu users should install package 'python-twisted-words', others" \
"\n can get it from http://twistedmatrix.com/trac/wiki/TwistedWords."
not_error = False
errstring = errstring.strip()
if errstring:
mlen = max(len(line) for line in errstring.split("\n"))
logger.log_err("%s\n%s\n%s" % ("-"*mlen, errstring, '-'*mlen))
return not_error
def has_parent(basepath, obj):
"""
Checks if `basepath` is somewhere in `obj`s parent tree.
Args:
basepath (str): Python dotpath to compare against obj path.
obj (any): Object whose path is to be checked.
Returns:
has_parent (bool): If the check was successful or not.
"""
try:
return any(cls for cls in obj.__class__.mro()
if basepath == "%s.%s" % (cls.__module__, cls.__name__))
except (TypeError, AttributeError):
# this can occur if we tried to store a class object, not an
# instance. Not sure if one should defend against this.
return False
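The mro-based dotpath comparison in `has_parent` can be exercised in isolation. The sketch below reimplements the core check; the `Base`/`Child` hierarchy is illustrative, not from the source.

```python
# Minimal sketch of the mro-based parent check used by has_parent.
# The Base/Child classes are made up for illustration.
class Base(object):
    pass

class Child(Base):
    pass

def has_parent_path(basepath, obj):
    """Return True if basepath matches any class in obj's mro."""
    try:
        return any("%s.%s" % (cls.__module__, cls.__name__) == basepath
                   for cls in obj.__class__.mro())
    except (TypeError, AttributeError):
        # e.g. obj is a class object rather than an instance
        return False

obj = Child()
print(has_parent_path("%s.Base" % Base.__module__, obj))  # True
print(has_parent_path("nonexistent.Path", obj))           # False
```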
def mod_import(module):
"""
A generic Python module loader.
Args:
module (str, module): This can be either a Python path
(dot-notation like `evennia.objects.models`), an absolute path
(e.g. `/home/eve/evennia/evennia/objects.models.py`) or an
already imported module object (e.g. `models`)
Returns:
module (module or None): An imported module. If the input argument was
already a module, this is returned as-is, otherwise the path is
parsed and imported. Returns `None` and logs error if import failed.
"""
if not module:
return None
if isinstance(module, types.ModuleType):
# if this is already a module, we are done
mod = module
else:
# first try to import as a python path
try:
mod = __import__(module, fromlist=["None"])
except ImportError as ex:
# check just where the ImportError happened (it could have been
# an erroneous import inside the module as well). This is the
# trivial way to do it ...
if str(ex) != "Import by filename is not supported.":
raise
# error in this module. Try absolute path import instead
if not os.path.isabs(module):
module = os.path.abspath(module)
path, filename = module.rsplit(os.path.sep, 1)
modname = re.sub(r"\.py$", "", filename)
try:
result = imp.find_module(modname, [path])
except ImportError:
logger.log_trace("Could not find module '%s' (%s.py) at path '%s'" % (modname, modname, path))
return
try:
mod = imp.load_module(modname, *result)
except ImportError:
logger.log_trace("Could not find or import module %s at path '%s'" % (modname, path))
mod = None
# we have to close the file handle manually
result[0].close()
return mod
def all_from_module(module):
"""
Return all global-level variables defined in a module.
Args:
module (str, module): This can be either a Python path
(dot-notation like `evennia.objects.models`), an absolute path
(e.g. `/home/eve/evennia/evennia/objects.models.py`) or an
already imported module object (e.g. `models`)
Returns:
variables (dict): A dict of {variablename: variable} for all
variables in the given module.
Notes:
Ignores modules and variable names starting with an underscore.
"""
mod = mod_import(module)
if not mod:
return {}
# make sure to only return variables actually defined in this
# module if available (that is, avoid plain imports)
members = getmembers(mod, predicate=lambda obj: getmodule(obj) in (mod, None))
return dict((key, val) for key, val in members if not key.startswith("_"))
#return dict((key, val) for key, val in mod.__dict__.items()
# if not (key.startswith("_") or ismodule(val)))
def callables_from_module(module):
"""
Return all global-level callables defined in a module.
Args:
module (str, module): A python-path to a module or an actual
module object.
Returns:
callables (dict): A dict of {name: callable, ...} from the module.
Notes:
Will ignore callables whose names start with underscore "_".
"""
mod = mod_import(module)
if not mod:
return {}
# make sure to only return callables actually defined in this module (not imports)
members = getmembers(mod, predicate=lambda obj: callable(obj) and getmodule(obj) == mod)
return dict((key, val) for key, val in members if not key.startswith("_"))
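The `getmembers`-with-predicate idiom used above can be seen on a synthetic module; `demo_mod` and its contents below are made up for illustration.

```python
import inspect
import types

# Build a throwaway module with one public callable, one private
# callable and one plain variable (all names are illustrative).
mod = types.ModuleType("demo_mod")
exec("def visible(): return 1\ndef _hidden(): return 2\nDATA = 3",
     mod.__dict__)

# Keep only callables, then drop underscore-prefixed names, mirroring
# the filtering done by callables_from_module.
members = inspect.getmembers(mod, predicate=callable)
public = dict((name, obj) for name, obj in members
              if not name.startswith("_"))
print(sorted(public))  # ['visible']
```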
def variable_from_module(module, variable=None, default=None):
"""
Retrieve a variable or list of variables from a module. The
variable(s) must be defined globally in the module. If no variable
is given (or a list entry is `None`), all global variables are
extracted from the module.
Args:
module (string or module): Python path, absolute path or a module.
variable (string or iterable, optional): Single variable name or iterable
of variable names to extract. If not given, all variables in
the module will be returned.
default (string, optional): Default value to use if a variable fails to
be extracted. Ignored if `variable` is not given.
Returns:
variables (value or list): A single value or a list of values
depending on if `variable` is given or not. Errors in lists
are replaced by the `default` argument.
"""
if not module:
return default
mod = mod_import(module)
if variable:
result = []
for var in make_iter(variable):
if var:
# try to pick a named variable
result.append(mod.__dict__.get(var, default))
else:
# get all
result = [val for key, val in mod.__dict__.items()
if not (key.startswith("_") or ismodule(val))]
if len(result) == 1:
return result[0]
return result
def string_from_module(module, variable=None, default=None):
"""
This is a wrapper for `variable_from_module` that requires the return
value to be a string to pass. It's primarily used by the login screen.
Args:
module (string or module): Python path, absolute path or a module.
variable (string or iterable, optional): Single variable name or iterable
of variable names to extract. If not given, all variables in
the module will be returned.
default (string, optional): Default value to use if a variable fails to
be extracted. Ignored if `variable` is not given.
Returns:
variables (value or list): A single (string) value or a list of values
depending on if `variable` is given or not. Errors in lists (such
as the value not being a string) are replaced by the `default` argument.
"""
val = variable_from_module(module, variable=variable, default=default)
if val:
if variable:
return val
else:
result = [v for v in make_iter(val) if isinstance(v, basestring)]
return result if result else default
return default
def random_string_from_module(module):
"""
Returns a random global string from a module.
Args:
module (string or module): Python path, absolute path or a module.
Returns:
random (string): A random string variable from `module`.
"""
return random.choice(string_from_module(module))
def fuzzy_import_from_module(path, variable, default=None, defaultpaths=None):
"""
Import a variable based on a fuzzy path. First the literal
`path` will be tried, then all given `defaultpaths` will be
prepended to see if a match is found.
Args:
path (str): Full or partial python path.
variable (str): Name of variable to import from module.
default (string, optional): Default value to use if a variable fails to
be extracted. Ignored if `variable` is not given.
defaultpaths (iterable, optional): Python paths to attempt in order if
importing directly from `path` doesn't work.
Returns:
value (any): The variable imported from the module, or `default`, if
not found.
"""
paths = [path] + make_iter(defaultpaths)
for modpath in paths:
try:
mod = import_module(modpath)
except ImportError as ex:
if not str(ex).startswith("No module named %s" % modpath):
# this means the module was found but it
# triggers an ImportError on import.
raise ex
# module not found on this path; try the next one
continue
return getattr(mod, variable, default)
return default
def class_from_module(path, defaultpaths=None):
"""
Return a class from a module, given the module's path. This is
primarily used to convert db_typeclass_paths to classes.
Args:
path (str): Full Python dot-path to module.
defaultpaths (iterable, optional): If a direct import from `path` fails,
try subsequent imports by prepending those paths to `path`.
Returns:
class (Class): An uninstantiated class recovered from path.
Raises:
ImportError: If all loading failed.
"""
cls = None
if defaultpaths:
paths = [path] + ["%s.%s" % (dpath, path) for dpath in make_iter(defaultpaths)]
else:
paths = [path]
for testpath in paths:
if "." in path:
testpath, clsname = testpath.rsplit(".", 1)
else:
raise ImportError("the path '%s' is not of the form modulepath.Classname." % path)
try:
mod = import_module(testpath, package="evennia")
except ImportError:
if len(trace()) > 2:
# this means the error happened within the called module and
# we must not hide it.
exc = sys.exc_info()
raise_(exc[1], None, exc[2])
else:
# otherwise, try the next suggested path
continue
try:
cls = getattr(mod, clsname)
break
except AttributeError:
if len(trace()) > 2:
# AttributeError within the module, don't hide it
exc = sys.exc_info()
raise_(exc[1], None, exc[2])
if not cls:
err = "Could not load typeclass '%s'" % path
if defaultpaths:
err += "\nPaths searched:\n %s" % "\n ".join(paths)
else:
err += "."
raise ImportError(err)
return cls
# alias
object_from_module = class_from_module
def init_new_player(player):
"""
Deprecated.
"""
from evennia.utils import logger
logger.log_dep("evennia.utils.utils.init_new_player is DEPRECATED and should not be used.")
def string_similarity(string1, string2):
"""
This implements a "cosine-similarity" algorithm as described, for example, in
*Proceedings of the 22nd International Conference on Computational
Linguistics* (Coling 2008), pages 593-600, Manchester, August 2008.
The measure vectors used are simply "bag of words"-type histograms
(but for letters).
Args:
string1 (str): String to compare (may contain any number of words).
string2 (str): Second string to compare (any number of words).
Returns:
similarity (float): A value 0...1 rating how similar the two
strings are.
"""
vocabulary = set(list(string1 + string2))
vec1 = [string1.count(v) for v in vocabulary]
vec2 = [string2.count(v) for v in vocabulary]
try:
return float(sum(vec1[i] * vec2[i] for i in range(len(vocabulary)))) / \
(math.sqrt(sum(v1**2 for v1 in vec1)) * math.sqrt(sum(v2**2 for v2 in vec2)))
except ZeroDivisionError:
# can happen if empty-string cmdnames appear for some reason.
# This is a no-match.
return 0
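The bag-of-letters cosine measure above can be written as a self-contained function; this sketch mirrors the same vocabulary/histogram construction.

```python
import math

# Self-contained version of the bag-of-letters cosine similarity
# computed by string_similarity.
def cosine_similarity(s1, s2):
    vocab = set(s1 + s2)
    v1 = [s1.count(c) for c in vocab]
    v2 = [s2.count(c) for c in vocab]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (math.sqrt(sum(a * a for a in v1)) *
            math.sqrt(sum(b * b for b in v2)))
    # A zero norm means an empty string; treat as no match.
    return float(dot) / norm if norm else 0.0

print(round(cosine_similarity("sword", "sword"), 6))  # 1.0
print(cosine_similarity("abc", "xyz"))                # 0.0
```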
def string_suggestions(string, vocabulary, cutoff=0.6, maxnum=3):
"""
Given a `string` and a `vocabulary`, return a match or a list of
suggestions based on string similarity.
Args:
string (str): A string to search for.
vocabulary (iterable): A list of available strings.
cutoff (float, 0-1): Limit the similarity matches (the higher
the value, the more exact a match is required).
maxnum (int): Maximum number of suggestions to return.
Returns:
suggestions (list): Suggestions from `vocabulary` with a
similarity rating higher than or equal to `cutoff`.
Could be empty if there are no matches.
"""
return [tup[1] for tup in sorted([(string_similarity(string, sugg), sugg)
for sugg in vocabulary],
key=lambda tup: tup[0], reverse=True)
if tup[0] >= cutoff][:maxnum]
def string_partial_matching(alternatives, inp, ret_index=True):
"""
Partially matches a string based on a list of `alternatives`.
Matching is made from the start of each subword in each
alternative. Case is not important. So e.g. "bi sh sw" or just
"big" or "shiny" or "sw" will match "Big shiny sword". Scoring is
done to allow separating matches by the most common denominator. You
will get multiple matches returned if appropriate.
Args:
alternatives (list of str): A list of possible strings to
match.
inp (str): Search criterion.
ret_index (bool, optional): Return list of indices (from alternatives
array) instead of strings.
Returns:
matches (list): String-matches or indices if `ret_index` is `True`.
"""
if not alternatives or not inp:
return []
matches = defaultdict(list)
inp_words = inp.lower().split()
for altindex, alt in enumerate(alternatives):
alt_words = alt.lower().split()
last_index = 0
score = 0
for inp_word in inp_words:
# loop over parts, making sure only to visit each part once
# (this will invalidate input in the wrong word order)
submatch = [last_index + alt_num for alt_num, alt_word
in enumerate(alt_words[last_index:])
if alt_word.startswith(inp_word)]
if submatch:
last_index = min(submatch) + 1
score += 1
else:
score = 0
break
if score:
if ret_index:
matches[score].append(altindex)
else:
matches[score].append(alt)
if matches:
return matches[max(matches)]
return []
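The subword prefix-matching scheme above can be condensed into a standalone sketch: every input word must prefix-match a not-yet-visited subword of the alternative, in order.

```python
from collections import defaultdict

# Condensed sketch of string_partial_matching's scoring: each input
# word must prefix-match a later subword, visiting each subword once.
def partial_match(alternatives, inp):
    matches = defaultdict(list)
    for alt in alternatives:
        words = alt.lower().split()
        last, score = 0, 0
        for inp_word in inp.lower().split():
            hits = [last + i for i, w in enumerate(words[last:])
                    if w.startswith(inp_word)]
            if not hits:
                score = 0
                break
            last = min(hits) + 1
            score += 1
        if score:
            matches[score].append(alt)
    return matches[max(matches)] if matches else []

print(partial_match(["Big shiny sword", "small knife"], "bi sw"))
# ['Big shiny sword']
print(partial_match(["Big shiny sword"], "sw bi"))  # [] (wrong order)
```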
def format_table(table, extra_space=1):
"""
Note: `evennia.utils.evtable` is more powerful than this, but this
function can be useful when the number of columns and rows are
unknown and must be calculated on the fly.
Args:
table (list): A list of lists to represent columns in the
table: `[[val,val,val,...], [val,val,val,...], ...]`, where
each val will be placed on a separate row in the
column. All columns must have the same number of rows (some
positions may be empty though).
extra_space (int, optional): Sets how much *minimum* extra
padding (in characters) should be left between columns.
Returns:
table (list): A list of lists representing the rows to print
out one by one.
Notes:
The function formats the columns to be as wide as the widest member
of each column.
Examples:
```python
for ir, row in enumerate(ftable):
if ir == 0:
# make first row white
string += "\n{w" + "".join(row) + "{n"
else:
string += "\n" + "".join(row)
print string
```
"""
if not table:
return [[]]
max_widths = [max([len(str(val)) for val in col]) for col in table]
ftable = []
for irow in range(len(table[0])):
ftable.append([str(col[irow]).ljust(max_widths[icol]) + " " * extra_space
for icol, col in enumerate(table)])
return ftable
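The column-padding logic above can be run standalone; each value is left-justified to its column's widest entry plus the extra padding.

```python
# Standalone version of the column-padding logic in format_table.
def format_columns(table, extra_space=1):
    if not table:
        return [[]]
    widths = [max(len(str(v)) for v in col) for col in table]
    rows = []
    for irow in range(len(table[0])):
        rows.append([str(col[irow]).ljust(widths[icol]) + " " * extra_space
                     for icol, col in enumerate(table)])
    return rows

# Two columns of two rows each; values are illustrative.
rows = format_columns([["name", "apple"], ["qty", "3"]])
for row in rows:
    print("".join(row))
```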
def get_evennia_pids():
"""
Get the currently valid PIDs (Process IDs) of the Portal and
Server by trying to access a PID file.
Returns:
server, portal (tuple): The PIDs of the respective processes,
or two `None` values if not found.
Examples:
This can be used to determine if we are in a subprocess by
something like:
```python
self_pid = os.getpid()
server_pid, portal_pid = get_evennia_pids()
is_subprocess = self_pid not in (server_pid, portal_pid)
```
"""
server_pidfile = os.path.join(settings.GAME_DIR, 'server.pid')
portal_pidfile = os.path.join(settings.GAME_DIR, 'portal.pid')
server_pid, portal_pid = None, None
if os.path.exists(server_pidfile):
f = open(server_pidfile, 'r')
server_pid = f.read()
f.close()
if os.path.exists(portal_pidfile):
f = open(portal_pidfile, 'r')
portal_pid = f.read()
f.close()
if server_pid and portal_pid:
return int(server_pid), int(portal_pid)
return None, None
from gc import get_referents
from sys import getsizeof
def deepsize(obj, max_depth=4):
"""
Get not only size of the given object, but also the size of
objects referenced by the object, down to `max_depth` distance
from the object.
Args:
obj (object): the object to be measured.
max_depth (int, optional): maximum referential distance
from `obj` that `deepsize()` should cover for
measuring objects referenced by `obj`.
Returns:
size (int): deepsize of `obj` in Bytes.
Notes:
This measure is necessarily approximate since some
memory is shared between objects. The `max_depth` of 4 is roughly
tested to give reasonable size information about database models
and their handlers.
"""
def _recurse(o, dct, depth):
if max_depth >= 0 and depth > max_depth:
return
for ref in get_referents(o):
idr = id(ref)
if not idr in dct:
dct[idr] = (ref, getsizeof(ref, default=0))
_recurse(ref, dct, depth+1)
sizedict = {}
_recurse(obj, sizedict, 0)
#count = len(sizedict) + 1
size = getsizeof(obj) + sum([p[1] for p in sizedict.values()])
return size
# lazy load handler
_missing = object()
class lazy_property(object):
"""
Delays loading of a property until first access. Credit goes to the
implementation in the werkzeug suite:
http://werkzeug.pocoo.org/docs/utils/#werkzeug.utils.cached_property
This should be used as a decorator in a class and in Evennia is
mainly used to lazy-load handlers:
```python
@lazy_property
def attributes(self):
return AttributeHandler(self)
```
Once initialized, the `AttributeHandler` will be available as a
property "attributes" on the object.
"""
def __init__(self, func, name=None, doc=None):
"Store all properties for now"
self.__name__ = name or func.__name__
self.__module__ = func.__module__
self.__doc__ = doc or func.__doc__
self.func = func
def __get__(self, obj, type=None):
"Triggers initialization"
if obj is None:
return self
value = obj.__dict__.get(self.__name__, _missing)
if value is _missing:
value = self.func(obj)
obj.__dict__[self.__name__] = value
return value
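The caching behavior of this descriptor can be demonstrated directly: because it defines only `__get__` (a non-data descriptor), the value stored in the instance `__dict__` shadows it after the first access, so the getter runs exactly once. The `Thing` class below is illustrative.

```python
# Demonstration of the lazy-property pattern: compute once, then let
# the instance __dict__ shadow the (non-data) descriptor.
_sentinel = object()

class lazy_prop(object):
    def __init__(self, func):
        self.__name__ = func.__name__
        self.func = func

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = obj.__dict__.get(self.__name__, _sentinel)
        if value is _sentinel:
            value = self.func(obj)
            obj.__dict__[self.__name__] = value
        return value

class Thing(object):
    calls = 0

    @lazy_prop
    def handler(self):
        Thing.calls += 1
        return {"loaded": True}

t = Thing()
t.handler
t.handler
print(Thing.calls)  # 1 -- the getter ran only once
```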
_STRIP_ANSI = None
_RE_CONTROL_CHAR = re.compile('[%s]' % re.escape(''.join([unichr(c) for c in range(0,32)])))# + range(127,160)])))
def strip_control_sequences(string):
"""
Remove non-print text sequences.
Args:
string (str): Text to strip.
Returns:
text (str): Stripped text.
"""
global _STRIP_ANSI
if not _STRIP_ANSI:
from evennia.utils.ansi import strip_raw_ansi as _STRIP_ANSI
return _RE_CONTROL_CHAR.sub('', _STRIP_ANSI(string))
def calledby(callerdepth=1):
"""
Only to be used for debug purposes. Insert this debug function in
another function; it will print which function called it.
Args:
callerdepth (int): Must be larger than 0. When > 1, it will
print the caller of the caller etc.
Returns:
calledby (str): A debug string detailing which routine called
us.
"""
import inspect, os
stack = inspect.stack()
# we must step one extra level back in stack since we don't want
# to include the call of this function itself.
callerdepth = min(max(2, callerdepth + 1), len(stack)-1)
frame = inspect.stack()[callerdepth]
path = os.path.sep.join(frame[1].rsplit(os.path.sep, 2)[-2:])
return "[called by '%s': %s:%s %s]" % (frame[3], path, frame[2], frame[4])
def m_len(target):
"""
Provides length checking for strings with MXP patterns, and falls
back to normal len for other objects.
Args:
target (string): A string with potential MXP components
to search.
Returns:
length (int): The length of `target`, ignoring MXP components.
"""
# Would create circular import if in module root.
from evennia.utils.ansi import ANSI_PARSER
if inherits_from(target, basestring) and "|lt" in target:
return len(ANSI_PARSER.strip_mxp(target))
return len(target)
#------------------------------------------------------------------
# Search handler function
#------------------------------------------------------------------
#
# Replace this hook function by changing settings.SEARCH_AT_RESULT.
#
def at_search_result(matches, caller, query="", quiet=False, **kwargs):
"""
This is a generic hook for handling all processing of a search
result, including error reporting. This is also called by the cmdhandler
to manage errors in command lookup.
Args:
matches (list): This is a list of 0, 1 or more typeclass
instances or Command instances, the matched result of the
search. If 0, a nomatch error should be echoed, and if >1,
multimatch errors should be given. Only if a single match
should the result pass through.
caller (Object): The object performing the search and/or which should
receive error messages.
query (str, optional): The search query used to produce `matches`.
quiet (bool, optional): If `True`, no messages will be echoed to caller
on errors.
Kwargs:
nofound_string (str): Replacement string to echo on a notfound error.
multimatch_string (str): Replacement string to echo on a multimatch error.
Returns:
processed_result (Object or None): This is always a single result
or `None`. If `None`, any error reporting/handling should
already have happened.
"""
error = ""
if not matches:
# no results.
error = kwargs.get("nofound_string") or _("Could not find '%s'." % query)
matches = None
elif len(matches) > 1:
error = kwargs.get("multimatch_string") or \
_("More than one match for '%s' (please narrow target):\n" % query)
for num, result in enumerate(matches):
# we need to consider Commands, where .aliases is a list
aliases = result.aliases.all() if hasattr(result.aliases, "all") else result.aliases
error += _MULTIMATCH_TEMPLATE.format(
number=num + 1,
name=result.get_display_name(caller) if hasattr(result, "get_display_name") else query,
aliases=" [%s]" % ";".join(aliases) if aliases else "",
info=result.get_extra_info(caller))
matches = None
else:
# exactly one match
matches = matches[0]
if error and not quiet:
caller.msg(error.strip())
return matches
class LimitedSizeOrderedDict(OrderedDict):
"""
This dictionary subclass is both ordered and limited to a maximum
number of elements. Its main use is to hold a cache that can never
grow out of bounds.
"""
def __init__(self, *args, **kwargs):
"""
Limited-size ordered dict.
Kwargs:
size_limit (int): Use this to limit the number of elements
allowed in this dict. By default the overshooting elements
will be removed in FIFO order.
fifo (bool, optional): Defaults to `True`. Remove overshooting elements
in FIFO order. If `False`, remove in FILO order.
"""
super(LimitedSizeOrderedDict, self).__init__()
self.size_limit = kwargs.get("size_limit", None)
self.filo = not kwargs.get("fifo", True) # filo is the inverse of fifo
self._check_size()
def _check_size(self):
filo = self.filo
if self.size_limit is not None:
while self.size_limit < len(self):
self.popitem(last=filo)
def __setitem__(self, key, value):
super(LimitedSizeOrderedDict, self).__setitem__(key, value)
self._check_size()
def update(self, *args, **kwargs):
super(LimitedSizeOrderedDict, self).update(*args, **kwargs)
self._check_size()
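The eviction mechanics can be shown with a minimal variant: `popitem(last=False)` on an `OrderedDict` removes the oldest entry, which is what makes the cache FIFO-bounded. The `BoundedDict` name below is illustrative.

```python
from collections import OrderedDict

# Minimal FIFO-bounded cache in the spirit of LimitedSizeOrderedDict;
# popitem(last=False) evicts the oldest entry when over the limit.
class BoundedDict(OrderedDict):
    def __init__(self, *args, **kwargs):
        self.size_limit = kwargs.pop("size_limit", None)
        self.filo = not kwargs.pop("fifo", True)
        super(BoundedDict, self).__init__(*args, **kwargs)
        self._check_size()

    def _check_size(self):
        if self.size_limit is not None:
            while len(self) > self.size_limit:
                self.popitem(last=self.filo)

    def __setitem__(self, key, value):
        super(BoundedDict, self).__setitem__(key, value)
        self._check_size()

cache = BoundedDict(size_limit=2)
cache["a"] = 1
cache["b"] = 2
cache["c"] = 3
print(list(cache))  # ['b', 'c'] -- 'a' was evicted first-in-first-out
```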
def get_game_dir_path():
"""
This is called by settings_default in order to determine the path
of the game directory.
Returns:
path (str): Full OS path to the game dir
"""
# current working directory, assumed to be somewhere inside gamedir.
for i in range(10):
gpath = os.getcwd()
if "server" in os.listdir(gpath):
if os.path.isfile(os.path.join("server", "conf", "settings.py")):
return gpath
else:
os.chdir(os.pardir)
raise RuntimeError("server/conf/settings.py not found: Must start from inside game dir.")
| MarsZone/DreamLand | evennia/evennia/utils/utils.py | Python | bsd-3-clause | 59,803 | [
"VisIt"
] | 0c7a5787a0a24829e5da4022e8de7b09cdedd1b7e907c46dfa43bfe939a5c555 |
# pylint: disable=missing-docstring
# pylint: disable=redefined-outer-name
from lettuce import step, world
from nose.tools import assert_equal, assert_in, assert_not_equal, assert_true
from selenium.common.exceptions import InvalidElementStateException
from common import *
from contentstore.utils import reverse_course_url
from terrain.steps import reload_the_page
@step(u'I am viewing the grading settings')
def view_grading_settings(step):
world.click_course_settings()
link_css = 'li.nav-course-settings-grading a'
world.css_click(link_css)
@step(u'I add "([^"]*)" new grade')
def add_grade(step, many):
grade_css = '.new-grade-button'
for __ in range(int(many)):
world.css_click(grade_css)
@step(u'I delete a grade')
def delete_grade(step):
#grade_css = 'li.grade-specific-bar > a.remove-button'
#range_css = '.grade-specific-bar'
#world.css_find(range_css)[1].mouseover()
#world.css_click(grade_css)
world.browser.execute_script('document.getElementsByClassName("remove-button")[0].click()')
@step(u'Grade list has "([^"]*)" grades$')
def check_grade_values(step, grade_list): # pylint: disable=unused-argument
visible_list = ''.join(
[grade.text for grade in world.css_find('.letter-grade')]
)
assert_equal(visible_list, grade_list, 'Grade lists should be equal')
@step(u'I see I now have "([^"]*)" grades$')
def view_grade_slider(step, how_many):
grade_slider_css = '.grade-specific-bar'
all_grades = world.css_find(grade_slider_css)
assert_equal(len(all_grades), int(how_many))
@step(u'I move a grading section')
def move_grade_slider(step):
moveable_css = '.ui-resizable-e'
f = world.css_find(moveable_css).first
f.action_chains.drag_and_drop_by_offset(f._element, 100, 0).perform()
@step(u'I see that the grade range has changed')
def confirm_change(step):
range_css = '.range'
all_ranges = world.css_find(range_css)
for i in range(len(all_ranges)):
assert_not_equal(world.css_html(range_css, index=i), '0-50')
@step(u'I change assignment type "([^"]*)" to "([^"]*)"$')
def change_assignment_name(step, old_name, new_name):
name_id = '#course-grading-assignment-name'
index = get_type_index(old_name)
f = world.css_find(name_id)[index]
assert_not_equal(index, -1)
for __ in xrange(len(old_name)):
f._element.send_keys(Keys.END, Keys.BACK_SPACE)
f._element.send_keys(new_name)
@step(u'I go back to the main course page')
def main_course_page(step):
main_page_link = reverse_course_url('course_handler', world.scenario_dict['COURSE'].id)
world.visit(main_page_link)
assert_in('Course Outline', world.css_text('h1.page-header'))
@step(u'I do( not)? see the assignment name "([^"]*)"$')
def see_assignment_name(step, do_not, name):
# TODO: rewrite this once grading has been added back to the course outline
pass
# assignment_menu_css = 'ul.menu > li > a'
# # First assert that it is there, make take a bit to redraw
# assert_true(
# world.css_find(assignment_menu_css),
# msg="Could not find assignment menu"
# )
#
# assignment_menu = world.css_find(assignment_menu_css)
# allnames = [item.html for item in assignment_menu]
# if do_not:
# assert_not_in(name, allnames)
# else:
# assert_in(name, allnames)
@step(u'I delete the assignment type "([^"]*)"$')
def delete_assignment_type(step, to_delete):
delete_css = '.remove-grading-data'
world.css_click(delete_css, index=get_type_index(to_delete))
@step(u'I add a new assignment type "([^"]*)"$')
def add_assignment_type(step, new_name):
add_button_css = '.add-grading-data'
world.css_click(add_button_css)
name_id = '#course-grading-assignment-name'
new_assignment = world.css_find(name_id)[-1]
new_assignment._element.send_keys(new_name)
@step(u'I set the assignment weight to "([^"]*)"$')
def set_weight(step, weight):
weight_id = '#course-grading-assignment-gradeweight'
weight_field = world.css_find(weight_id)[-1]
old_weight = world.css_value(weight_id, -1)
for __ in range(len(old_weight)):
weight_field._element.send_keys(Keys.END, Keys.BACK_SPACE)
weight_field._element.send_keys(weight)
@step(u'the assignment weight is displayed as "([^"]*)"$')
def verify_weight(step, weight):
weight_id = '#course-grading-assignment-gradeweight'
assert_equal(world.css_value(weight_id, -1), weight)
@step(u'I do not see the changes persisted on refresh$')
def changes_not_persisted(step):
reload_the_page(step)
name_id = '#course-grading-assignment-name'
assert_equal(world.css_value(name_id), 'Homework')
@step(u'I see the assignment type "(.*)"$')
def i_see_the_assignment_type(_step, name):
assignment_css = '#course-grading-assignment-name'
assignments = world.css_find(assignment_css)
types = [ele['value'] for ele in assignments]
assert_in(name, types)
@step(u'I change the highest grade range to "(.*)"$')
def change_grade_range(_step, range_name):
range_css = 'span.letter-grade'
grade = world.css_find(range_css).first
grade.value = range_name
@step(u'I see the highest grade range is "(.*)"$')
def i_see_highest_grade_range(_step, range_name):
range_css = 'span.letter-grade'
grade = world.css_find(range_css).first
assert_equal(grade.value, range_name)
@step(u'I cannot edit the "Fail" grade range$')
def cannot_edit_fail(_step):
range_css = 'span.letter-grade'
ranges = world.css_find(range_css)
assert_equal(len(ranges), 2)
assert_not_equal(ranges.last.value, 'Failure')
# try to change the grade range -- this should throw an exception
try:
ranges.last.value = 'Failure'
except InvalidElementStateException:
pass # We should get this exception on failing to edit the element
# check to be sure that nothing has changed
ranges = world.css_find(range_css)
assert_equal(len(ranges), 2)
assert_not_equal(ranges.last.value, 'Failure')
@step(u'I change the grace period to "(.*)"$')
def i_change_grace_period(_step, grace_period):
grace_period_css = '#course-grading-graceperiod'
ele = world.css_find(grace_period_css).first
# Sometimes it takes a moment for the JavaScript
# to populate the field. If we don't wait for
# this to happen, then we can end up with
# an invalid value (e.g. "00:0048:00")
# which prevents us from saving.
assert_true(world.css_has_value(grace_period_css, "00:00"))
# Set the new grace period
ele.value = grace_period
@step(u'I see the grace period is "(.*)"$')
def the_grace_period_is(_step, grace_period):
grace_period_css = '#course-grading-graceperiod'
# The default value is 00:00
# so we need to wait for it to change
world.wait_for(
lambda _: world.css_has_value(grace_period_css, grace_period)
)
def get_type_index(name):
name_id = '#course-grading-assignment-name'
all_types = world.css_find(name_id)
for index in range(len(all_types)):
if world.css_value(name_id, index=index) == name:
return index
return -1
| Stanford-Online/edx-platform | cms/djangoapps/contentstore/features/grading.py | Python | agpl-3.0 | 7,162 | [
"VisIt"
] | 63b4b73340eb858d5e94bf926c2d78734521accaa79436060ee933fad3389fe1 |
"""
Copyright 2016 Brian Quach
Licensed under MIT (https://github.com/brianquach/udacity-nano-fullstack-catalog/blob/master/LICENSE) # noqa
"""
from catalog import app
app.secret_key = 'development'
app.run(host='0.0.0.0', port=8000, debug=False)
| brianquach/udacity-nano-fullstack-catalog | vagrant/catalog/runserver.py | Python | mit | 250 | [
"Brian"
] | 1a4201bdbe439cfb08a9c668ffe339ab9399fe9f6e26704e0c03ff4897b99ac0 |
x = []
##########################################################################
#
# QGIS-meshing plugins.
#
# Copyright (C) 2012-2013 Imperial College London and others.
#
# Please see the AUTHORS file in the main source directory for a
# full list of copyright holders.
#
# Dr Adam S. Candy, adam.candy@imperial.ac.uk
# Applied Modelling and Computation Group
# Department of Earth Science and Engineering
# Imperial College London
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation,
# version 2.1 of the License.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
##########################################################################
y = []
lon = []
lat = []
psi = []
inpt = '/home/eml11/plugins_in_progress/wider_area_bathymetry_filtered_subsampled.nc' # testing variable to be removed
outpt = '/home/eml11/plugins_in_progress/new_file005' # testing variable to be removed
import numpy as np
import copy
import sys
import os
from Scientific.IO import NetCDF
#import gdal_calc
def test(): # testing function to be removed
global x
global y
x = np.array([1,2,3,4,5,6,7,8,9,10])
y = x**2
return x, y
def getField(netcdf_file): #obs
file = NetCDF.NetCDFFile(netcdf_file, 'r')
global lon, lat, psi
lon = file.variables['lon'][:]
lat = file.variables['lat'][:]
psi = file.variables['z'][:, :]
lon = np.array(lon)
lat = np.array(lat)
psi = np.array(psi)
return lon, lat, psi
def lon(nc_f): return nc_f[0]
def lat(nc_f): return nc_f[1]
def field(nc_f): return nc_f[2]
getField(inpt)
def diferentialOp(field, withRespectTo):
lim = 'lim'
result = []
for i in range(field.size):
if i > 0 and i < (field.size-1):
del_field = (field[i+1]-field[i-1])/2.0
del_par = (withRespectTo[i+1]-withRespectTo[i-1])/2.0
elif i == 0:
del_field = (field[1]-field[0])/2.0
del_par = (withRespectTo[1]-withRespectTo[0])/2.0
else:
del_field = (field[i]-field[i-1])/2.0
del_par = (withRespectTo[i]-withRespectTo[i-1])/2.0
if isinstance(field, float) == True:# is this field correct param
if del_par != 0:
dif_field = del_field/del_par
else:
dif_field = lim
else:
dif_field = []
shp = del_field.shape
for j in range(del_field.size):
if del_par.flat[j] != 0:
elementIn_dif_field = del_field.flat[j]/del_par.flat[j]
else:
elementIn_dif_field = lim
dif_field.append(elementIn_dif_field)
dif_field = np.array(dif_field)
dif_field.shape = shp
result.append(dif_field) # note dif_field is an array
result = np.array(result)
test_result = copy.copy(result)
for k in range(result.size):
if test_result.flat[k] == lim:
test_result.flat[k] = 0
test_result = test_result.astype(float)
test_result *= test_result
mx = test_result.max()
for k in range(result.size):
if result.flat[k] == lim:
result.flat[k] = 10*mx
for i in result.flat:
i = np.float64(i)
return result
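The finite-difference scheme in `diferentialOp` boils down to central differences on interior points with one-sided differences at the endpoints. The plain-Python sketch below shows just that core, dropping the zero-denominator sentinel handling of the original.

```python
# Central differences on interior points, one-sided at the endpoints;
# a simplified view of the scheme used by diferentialOp.
def central_diff(field, x):
    n = len(field)
    out = []
    for i in range(n):
        if 0 < i < n - 1:
            df = (field[i + 1] - field[i - 1]) / 2.0
            dx = (x[i + 1] - x[i - 1]) / 2.0
        elif i == 0:
            df, dx = field[1] - field[0], x[1] - x[0]
        else:
            df, dx = field[i] - field[i - 1], x[i] - x[i - 1]
        out.append(df / dx)
    return out

xs = [0.0, 1.0, 2.0, 3.0]
ys = [v * v for v in xs]      # d/dx of x^2 is 2x
print(central_diff(ys, xs))   # [1.0, 2.0, 4.0, 5.0]
```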
def diflon(psi = psi,lon = lon):
for i in range(psi.shape[0]):
result = diferentialOp(psi[i], lon)
return result
def diflat(psi = psi,lat = lat):
psi = psi.transpose()
for i in range(psi.shape[0]):
result = diferentialOp(psi[i],lat)
return result
def DivergenceOfField(psi = psi, lat = lat, lon = lon):
ddlon = diflon(psi, lon)
ddlat = diflat(psi, lat)
result = np.zeros((ddlat.size, ddlon.size))
for i in range(ddlat.size):
for j in range(ddlon.size):
# the original indexed the diflon function itself; sum the computed partials instead
result[i][j] = ddlon[j] + ddlat[i]
return result
def intOp(field, withRespectTo, c = 0.0):
result = []
shp = field.shape
int_field = np.array([c])
for i in range(field.size): # note will not work for multidimensional arrays
if i > 0 and i < (field.size - 1): # note this will not work for multidimensional arrays
av_field1 = (3*field[i]+field[i-1])
av_field2 = (3*field[i]+field[i+1])
del_par1 = (withRespectTo[i]-withRespectTo[i-1])
del_par2 = (withRespectTo[i+1]-withRespectTo[i])
elif i == 0:
av_field1 = (field[1]+3*field[0])
av_field2 = av_field1
del_par1 = (withRespectTo[1]-withRespectTo[0])
del_par2 = del_par1
else:
av_field1 = (3*field[i]+field[i-1]) #note maybe inaccurate
av_field2 = av_field1
del_par1 = (withRespectTo[i]-withRespectTo[i-1])
del_par2 = del_par1
int_field += (av_field1*del_par1+av_field2*del_par2)/8.0
result.append(copy.copy(int_field)) # note int_field can be an array
result = np.array(result)
result.shape = shp
#result += c
return result
#def advcCalc(inputFileName, outputFileName, function):
#gdal_calc -A inputFileName --calc function --outfile outputFileName
def returnField(outputFileName): #obbs
output_file = '%s.nc' % str(outputFileName)
global outLon, outLat, outField, outsize1, outsize2
f = NetCDF.NetCDFFile(outputFileName, 'w')
f.createDimension('dim1', outsize1)
f.createDimension('dim2', outsize2)
f.createVariable('lon', 'd', ('dim1',))
f.createVariable('lat', 'd', ('dim2',))
f.createVariable('z', 'd', ('dim1','dim2',))
f.variables['lon'][:] = outLon
f.variables['lat'][:] = outLat
f.variables['z'][:] = outField
f.close()
| adamcandy/QGIS-Meshing | extras/raster_tools/RastorTools_obs.py | Python | lgpl-2.1 | 5,555 | [
"NetCDF"
] | ae5f1a0b73d614524b8be0e4595e41308c9d0864e5878cf313d9170238726abc |
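The differential operator in `diferentialOp` above is a central difference in the interior with one-sided differences at the two boundaries. A minimal pure-Python sketch of that scheme (the helper name is illustrative, and it omits the file's `lim` bookkeeping for zero spacing):

```python
def central_diff(field, x):
    """Finite-difference derivative of field sampled at coordinates x:
    central differences in the interior, one-sided at the two ends."""
    n = len(field)
    out = []
    for i in range(n):
        if i == 0:
            df, dx = field[1] - field[0], x[1] - x[0]
        elif i == n - 1:
            df, dx = field[-1] - field[-2], x[-1] - x[-2]
        else:
            df, dx = field[i + 1] - field[i - 1], x[i + 1] - x[i - 1]
        # stand-in for the original's 'lim' marker when the spacing is zero
        out.append(df / dx if dx != 0 else float("nan"))
    return out

# f(x) = x**2 on a unit grid: interior central differences are exact (2*x)
print(central_diff([0.0, 1.0, 4.0, 9.0, 16.0], [0.0, 1.0, 2.0, 3.0, 4.0]))
# -> [1.0, 2.0, 4.0, 6.0, 7.0]
```

Dividing both the field and coordinate deltas by 2, as the original does, leaves the ratio unchanged, so this sketch reproduces the same values.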
'''
Created on Aug 29, 2012
@author: jchen
'''
from google.appengine.api import mail
import webapp2
def sendmailfunc(available, ticketurl, email):
avail_text=""
if available==1:
avail_text =" available"
else:
avail_text =" unavailable"
title="Your ticket has become%s"%avail_text
body="Your ticket has become%s; please visit the following url to buy it: %s"%(avail_text,ticketurl)
message=mail.EmailMessage(subject=title)
message.sender = "godosou@gmail.com"
message.to=email
message.body=body
message.send()
class sendmail(webapp2.RequestHandler):
def post(self):
available= int(self.request.get('available'))
ticketurl=self.request.get('ticketurl')
email=self.request.get('email')
self.response.out.write("%s %s"%(available,email))
sendmailfunc(available,ticketurl,email)
def get(self):
self.response.out.write("out")
pass | godosou/barcaticketsinform | tasks/informclient.py | Python | lgpl-3.0 | 924 | [
"VisIt"
] | d909a620efc10998618cd9b8e1bbdfbdfb5b58de36a6c369f16bb2d22cafcf43 |
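The subject/body construction in `sendmailfunc` can be sketched as a small pure function, which keeps the grammar easy to test in isolation (the helper name and URL below are illustrative, not part of the original module):

```python
def availability_message(available, ticket_url):
    """Build the notification subject and body for a ticket whose
    availability just changed."""
    status = "available" if available else "unavailable"
    subject = "Your ticket has become %s" % status
    body = ("Your ticket has become %s; please visit the following "
            "url to buy it: %s" % (status, ticket_url))
    return subject, body

subject, body = availability_message(1, "http://example.com/ticket/42")
print(subject)  # Your ticket has become available
```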
# -*- coding: utf-8 -*-
#!/usr/bin/env python
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2000-2007 Donald N. Allingham
# Copyright (C) 2007 Johan Gonqvist <johan.gronqvist@gmail.com>
# Copyright (C) 2007-2009 Gary Burton <gary.burton@zen.co.uk>
# Copyright (C) 2007-2009 Stephane Charette <stephanecharette@gmail.com>
# Copyright (C) 2008-2009 Brian G. Matherly
# Copyright (C) 2008 Jason M. Simanek <jason@bohemianalps.com>
# Copyright (C) 2008-2011 Rob G. Healey <robhealey1@gmail.com>
# Copyright (C) 2010 Doug Blank <doug.blank@gmail.com>
# Copyright (C) 2010 Jakim Friant
# Copyright (C) 2010- Serge Noiraud
# Copyright (C) 2011 Tim G L Lyons
# Copyright (C) 2013 Benny Malengier
# Copyright (C) 2016 Allen Crider
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""
Narrative Web Page generator.
Class:
DownloadPage
"""
#------------------------------------------------
# python modules
#------------------------------------------------
import os
import datetime
from decimal import getcontext
import logging
#------------------------------------------------
# Gramps module
#------------------------------------------------
from gramps.gen.const import GRAMPS_LOCALE as glocale
from gramps.plugins.lib.libhtml import Html
#------------------------------------------------
# specific narrative web import
#------------------------------------------------
from gramps.plugins.webreport.basepage import BasePage
from gramps.plugins.webreport.common import (FULLCLEAR, html_escape)
_ = glocale.translation.sgettext
LOG = logging.getLogger(".NarrativeWeb")
getcontext().prec = 8
class DownloadPage(BasePage):
"""
This class is responsible for displaying information about the Download page
"""
def __init__(self, report, title):
"""
@param: report -- The instance of the main report class for this report
@param: title -- Is the title of the web page
"""
BasePage.__init__(self, report, title)
# do NOT include a Download Page
if not self.report.inc_download:
return
# menu options for class
# download and description #1
dlfname1 = self.report.dl_fname1
dldescr1 = self.report.dl_descr1
# download and description #2
dlfname2 = self.report.dl_fname2
dldescr2 = self.report.dl_descr2
# if no filenames at all, return???
if dlfname1 or dlfname2:
output_file, sio = self.report.create_file("download")
result = self.write_header(self._('Download'))
downloadpage, dummy_head, dummy_body, outerwrapper = result
# begin download page and table
with Html("div", class_="content", id="Download") as download:
outerwrapper += download
msg = self._("This page is for the user/ creator "
"of this Family Tree/ Narrative website "
"to share a couple of files with you "
"regarding their family. If there are "
"any files listed "
"below, clicking on them will allow you "
"to download them. The "
"download page and files have the same "
"copyright as the remainder "
"of these web pages.")
download += Html("p", msg, id="description")
# begin download table and table head
with Html("table", class_="infolist download") as table:
download += table
thead = Html("thead")
table += thead
trow = Html("tr")
thead += trow
trow.extend(
Html("th", label, class_="Column" + colclass,
inline=True)
for (label, colclass) in [
(self._("File Name"), "Filename"),
(self._("Description"), "Description"),
(self._("Last Modified"), "Modified")])
# table body
tbody = Html("tbody")
table += tbody
# if dlfname1 is not None, show it???
if dlfname1:
trow = Html("tr", id='Row01')
tbody += trow
fname = os.path.basename(dlfname1)
# TODO dlfname1 is filename, convert disk path to URL
tcell = Html("td", class_="ColumnFilename") + (
Html("a", fname, href=dlfname1,
title=html_escape(dldescr1))
)
trow += tcell
dldescr1 = dldescr1 or " "
trow += Html("td", dldescr1,
class_="ColumnDescription", inline=True)
tcell = Html("td", class_="ColumnModified", inline=True)
trow += tcell
if os.path.exists(dlfname1):
modified = os.stat(dlfname1).st_mtime
last_mod = datetime.datetime.fromtimestamp(modified)
tcell += last_mod
else:
tcell += " "
# if download filename #2, show it???
if dlfname2:
# begin row #2
trow = Html("tr", id='Row02')
tbody += trow
fname = os.path.basename(dlfname2)
tcell = Html("td", class_="ColumnFilename") + (
Html("a", fname, href=dlfname2,
title=html_escape(dldescr2))
)
trow += tcell
dldescr2 = dldescr2 or " "
trow += Html("td", dldescr2,
class_="ColumnDescription", inline=True)
tcell = Html("td", id='Col04',
class_="ColumnModified", inline=True)
trow += tcell
if os.path.exists(dlfname2):
modified = os.stat(dlfname2).st_mtime
last_mod = datetime.datetime.fromtimestamp(modified)
tcell += last_mod
else:
tcell += " "
# clear line for proper styling
# create footer section
footer = self.write_footer(None)
outerwrapper += (FULLCLEAR, footer)
# send page out for processing
# and close the file
self.xhtml_writer(downloadpage, output_file, sio, 0)
| sam-m888/gramps | gramps/plugins/webreport/download.py | Python | gpl-2.0 | 7,780 | [
"Brian"
] | 7bdca47ce96aedaf126138c72f07044557e50d61d99ccbd7f1e7429934df7d4c |
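The "Last Modified" cells above repeat the same `os.stat` / `datetime.fromtimestamp` pattern for each download file; factored out it looks like this (a sketch -- `last_modified` is not a helper Gramps actually defines):

```python
import datetime
import os
import tempfile

def last_modified(path):
    """Return a file's last-modified timestamp, or None when the file
    does not exist (the table renders a non-breaking space instead)."""
    if os.path.exists(path):
        return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
    return None

# usage: an existing file yields a datetime, a missing one yields None
with tempfile.NamedTemporaryFile() as tmp:
    print(isinstance(last_modified(tmp.name), datetime.datetime))  # True
print(last_modified("/no/such/file/anywhere"))  # None
```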
## Copyright (c) 2009-2011, Noel O'Boyle
## All rights reserved.
##
## This file is part of Cinfony.
## The contents are covered by the terms of the BSD license
## which is included in the file LICENSE_BSD.txt.
"""
webel - A Cinfony module that runs entirely on web services
webel can be used from all of CPython, Jython and IronPython.
Global variables:
informats - a dictionary of supported input formats
outformats - a dictionary of supported output formats
fps - a list of supported fingerprint types
"""
import re
import os
import urllib2
import StringIO
try:
import Tkinter as tk
import Image as PIL
import ImageTk as piltk
except ImportError:
tk = None
informats = {"smi":"SMILES", "inchikey":"InChIKey", "inchi":"InChI",
"name":"Common name"}
"""A dictionary of supported input formats"""
outformats = {"smi":"SMILES", "cdxml":"ChemDraw XML", "inchi":"InChI",
"sdf":"Symyx SDF", "names":"Common names", "inchikey":"InChIKey",
"alc":"Alchemy", "cerius":"MSI Cerius II", "charmm":"CHARMM",
"cif":"Crystallographic Information File",
"cml":"Chemical Markup Language", "ctx":"Gasteiger Clear Text",
"gjf":"Gaussian job file", "gromacs":"GROMACS",
"hyperchem":"HyperChem", "jme":"Java Molecule Editor",
"maestro":"Schrodinger MacroModel",
"mol":"Symyx mol", "mol2":"Tripos Sybyl MOL2",
"mrv":"ChemAxon MRV", "pdb":"Protein Data Bank",
"sdf3000":"Symyx SDF3000", "sln":"Sybl line notation",
"xyz":"XYZ", "iupac":"IUPAC name"}
"""A dictionary of supported output formats"""
fps = ["std", "maccs", "estate"]
"""A list of supported fingerprint types"""
# The following function is taken from urllib.py in the IronPython dist
def _quo(text, safe="/"):
always_safe = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
'abcdefghijklmnopqrstuvwxyz'
'0123456789' '_.-')
_safemaps = {}
cachekey = (safe, always_safe)
try:
safe_map = _safemaps[cachekey]
except KeyError:
safe += always_safe
safe_map = {}
for i in range(256):
c = chr(i)
safe_map[c] = (c in safe) and c or ('%%%02X' % i)
_safemaps[cachekey] = safe_map
res = map(safe_map.__getitem__, text)
return ''.join(res)
def _makeserver(serverurl):
"""Curry the name of the server"""
def server(*urlcomponents):
url = "%s/" % serverurl + "/".join(urlcomponents)
resp = urllib2.urlopen(url)
return resp.read()
return server
rajweb = _makeserver("http://ws1.bmc.uu.se:8182/cdk")
nci = _makeserver("http://cactus.nci.nih.gov/chemical/structure")
_descs = None # Cache the list of descriptors
def getdescs():
"""Return a list of supported descriptor types"""
global _descs
if not _descs:
response = rajweb("descriptors").rstrip()
_descs = [x.split(".")[-1] for x in response.split("\n")]
return _descs
def readstring(format, string):
"""Read in a molecule from a string.
Required parameters:
format - see the informats variable for a list of available
input formats
string
Note: For InChIKeys a list of molecules is returned.
Example:
>>> input = "C1=CC=CS1"
>>> mymol = readstring("smi", input)
"""
format = format.lower()
if not format in informats:
raise ValueError("%s is not a recognised Webel format" % format)
if format != "smi":
smiles = nci(_quo(string), "smiles").rstrip()
else:
smiles = string
if format == "inchikey":
return [Molecule(smile) for smile in smiles.split("\n")]
else:
mol = Molecule(smiles)
if format == "name":
mol.title = string
return mol
class Outputfile(object):
"""Represent a file to which *output* is to be sent.
Although it's possible to write a single molecule to a file by
calling the write() method of a molecule, if multiple molecules
are to be written to the same file you should use the Outputfile
class.
Required parameters:
format - see the outformats variable for a list of available
output formats
filename
Optional parameters:
overwrite -- if the output file already exists, should it
be overwritten? (default is False)
Methods:
write(molecule)
close()
"""
def __init__(self, format, filename, overwrite=False):
self.format = format.lower()
self.filename = filename
if not overwrite and os.path.isfile(self.filename):
raise IOError("%s already exists. Use 'overwrite=True' to overwrite it." % self.filename)
if not format in outformats:
raise ValueError("%s is not a recognised Webel format" % format)
self.file = open(filename, "w")
def write(self, molecule):
"""Write a molecule to the output file.
Required parameters:
molecule
"""
if self.file.closed:
raise IOError("Outputfile instance is closed.")
output = molecule.write(self.format)
print >> self.file, output
def close(self):
"""Close the Outputfile to further writing."""
self.file.close()
class Molecule(object):
"""Represent a Webel Molecule.
Required parameter:
smiles -- a SMILES string or any type of cinfony Molecule
Attributes:
formula, molwt, title
Methods:
calcfp(), calcdesc(), draw(), write()
The underlying SMILES string can be accessed using the attribute:
smiles
"""
_cinfony = True
def __init__(self, smiles):
if hasattr(smiles, "_cinfony"):
a, b = smiles._exchange
if a == 0:
smiles = b
else:
# Must convert to SMILES
smiles = smiles.write("smi").split()[0]
self.smiles = smiles
self.title = ""
@property
def formula(self): return rajweb("mf", _quo(self.smiles))
@property
def molwt(self): return float(rajweb("mw", _quo(self.smiles)))
@property
def _exchange(self):
return (0, self.smiles)
def calcdesc(self, descnames=[]):
"""Calculate descriptor values.
Optional parameter:
descnames -- a list of names of descriptors
If descnames is not specified, all available descriptors are
calculated. See the descs variable for a list of available
descriptors.
"""
if not descnames:
descnames = getdescs()
else:
for descname in descnames:
if descname not in getdescs():
raise ValueError("%s is not a recognised Webel descriptor type" % descname)
ans = {}
p = re.compile("""Descriptor parent="(\w*)" name="([\w\-\+\d]*)" value="([\d\.]*)""")
for descname in descnames:
longname = "org.openscience.cdk.qsar.descriptors.molecular." + descname
response = rajweb("descriptor", longname, _quo(self.smiles))
for match in p.findall(response):
if match[2]:
ans["%s_%s" % (match[0], match[1])] = float(match[2])
return ans
def calcfp(self, fptype="std"):
"""Calculate a molecular fingerprint.
Optional parameters:
fptype -- the fingerprint type (default is "std"). See the
fps variable for a list of of available fingerprint
types.
"""
fptype = fptype.lower()
if fptype not in fps:
raise ValueError("%s is not a recognised Webel Fingerprint type" % fptype)
fp = rajweb("fingerprint/%s/%s" % (fptype, _quo(self.smiles))).rstrip()
return Fingerprint(fp)
def write(self, format="smi", filename=None, overwrite=False):
"""Write the molecule to a file or return a string.
Optional parameters:
format -- see the informats variable for a list of available
output formats (default is "smi")
filename -- default is None
overwrite -- if the output file already exists, should it
be overwritten? (default is False)
If a filename is specified, the result is written to a file.
Otherwise, a string is returned containing the result.
To write multiple molecules to the same file you should use
the Outputfile class.
"""
format = format.lower()
if not format in outformats:
raise ValueError("%s is not a recognised Webel format" % format)
if format == "smi":
output = self.smiles
elif format == "names":
try:
output = nci(_quo(self.smiles), "%s" % format).rstrip().split("\n")
except urllib2.URLError, e:
if e.code == 404:
output = []
elif format in ['inchi', 'inchikey']:
format = "std" + format
output = nci(_quo(self.smiles), "%s" % format).rstrip()
elif format == 'iupac':
format = format + "_name"
try:
output = nci(_quo(self.smiles), "%s" % format).rstrip()
except urllib2.URLError, e:
if e.code == 404:
output = ""
else:
output = nci(_quo(self.smiles), "file?format=%s" % format).rstrip()
if filename:
if not overwrite and os.path.isfile(filename):
raise IOError("%s already exists. Use 'overwrite=True' to overwrite it." % filename)
outputfile = open(filename, "w")
print >> outputfile, output
outputfile.close()
else:
return output
def __str__(self):
return self.write()
def draw(self, show=True, filename=None):
"""Create a 2D depiction of the molecule.
Optional parameters:
show -- display on screen (default is True)
filename -- write to file (default is None)
Tkinter and Python Imaging Library are required for
image display.
"""
imagedata = nci(_quo(self.smiles), "image")
if filename:
print >> open(filename, "wb"), imagedata
if show:
if not tk:
errormessage = ("Tkinter or Python Imaging "
"Library not found, but is required for image "
"display. See installation instructions for "
"more information.")
raise ImportError, errormessage
root = tk.Tk()
root.title(self.smiles)
frame = tk.Frame(root, colormap="new", visual='truecolor').pack()
image = PIL.open(StringIO.StringIO(imagedata))
imagedata = piltk.PhotoImage(image)
label = tk.Label(frame, image=imagedata).pack()
quitbutton = tk.Button(root, text="Close", command=root.destroy).pack(fill=tk.X)
root.mainloop()
class Fingerprint(object):
"""A Molecular Fingerprint.
Required parameters:
fingerprint -- a string of 0's and 1's representing a binary fingerprint
Attributes:
fp -- the underlying fingerprint object
bits -- a list of bits set in the Fingerprint
Methods:
The "|" operator can be used to calculate the Tanimoto coeff. For example,
given two Fingerprints 'a', and 'b', the Tanimoto coefficient is given by:
tanimoto = a | b
"""
def __init__(self, fingerprint):
self.fp = fingerprint
def __or__(self, other):
mybits = set(self.bits)
otherbits = set(other.bits)
return len(mybits&otherbits) / float(len(mybits|otherbits))
@property
def bits(self):
return [i for i,x in enumerate(self.fp) if x=="1"]
def __str__(self):
return self.fp
class Smarts(object):
"""A Smarts Pattern Matcher
Required parameters:
smartspattern
Methods:
match(molecule)
Example:
>>> mol = readstring("smi","CCN(CC)CC") # triethylamine
>>> smarts = Smarts("[#6][#6]") # Matches an ethyl group
>>> smarts.match(mol)
True
"""
def __init__(self, smartspattern):
"""Initialise with a SMARTS pattern."""
self.pat = smartspattern
def match(self, molecule):
"""Does a SMARTS pattern match a particular molecule?
Required parameters:
molecule
"""
resp = rajweb("substruct", _quo(molecule.smiles), _quo(self.pat)).rstrip()
return resp == "true"
if __name__=="__main__": #pragma: no cover
import doctest
doctest.run_docstring_examples(rajweb, globals())
| cinfony/cinfony | cinfony/webel.py | Python | bsd-2-clause | 13,004 | [
"CDK",
"CHARMM",
"Gaussian",
"Gromacs",
"Hyperchem",
"MacroModel"
] | 7a17eccba53c43d4d96fce09f21a303aaf942eeaf17ef33fbab93efa2fe94622 |
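The `Fingerprint.__or__` Tanimoto computation above reduces to a few lines of set arithmetic; a standalone sketch (illustrative function name):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two '0'/'1' fingerprint strings:
    shared on-bits divided by the union of on-bits."""
    a = {i for i, x in enumerate(fp_a) if x == "1"}
    b = {i for i, x in enumerate(fp_b) if x == "1"}
    return len(a & b) / float(len(a | b))

print(tanimoto("1100", "1010"))  # 1 shared of 3 distinct on-bits -> 0.333...
```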
# Copyright 2001 by Katharine Lindner. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Martel based parser to read ECell formatted files.
This is a huge regular regular expression for ECell, built using
the 'regular expressiona on steroids' capabilities of Martel.
http://www.bioinformatics.org/ecell2/
Notes:
Just so I remember -- the new end of line syntax is:
New regexp syntax - \R
\R means "\n|\r\n?"
[\R] means "[\n\r]"
This helps us have endlines be consistent across platforms.
"""
# standard library
import string
# Martel
import Martel
from Martel import RecordReader
from Martel import Str
from Martel import AnyEol
from Martel import ToEol
from Martel import Group
from Martel import Alt
from Martel import Rep
from Martel import Rep1
from Martel import Any
from Martel import AnyBut
from Martel import Expression
# --- first set up some helper constants and functions
excluded_chars = ' ' + chr( 0x09 ) + chr( 10 ) + chr( 13 )
block_type = Group( "block_type", Expression.NoCase( Str( "Type" ) ) )
header_line = Group( "header_line", \
block_type + ToEol())
tab = Group( "tab", Str( '\t' ) )
system_tag = Group( "system_tag", Expression.NoCase( Str( "system" ) ) )
reactor_tag = Group( "reactor_tag", Expression.NoCase( Str( "Reactor" ) ) )
substance_tag = Group( "substance_tag", Expression.NoCase( Str( "Substance" ) ) )
system_line = Group( "system_line", system_tag + ToEol() )
reactor_line = Group( "reactor_line", reactor_tag + ToEol() )
substance_line = Group( "substance_line", substance_tag + ToEol() )
continuation_line = Group( "continuation_line", tab + ToEol() )
include_line = Group( "include_line", Str( 'include' ) + ToEol())
substance_multiline = Group( "substance_multiline", \
substance_line +
Rep( continuation_line ) )
reactor_multiline = Group( "reactor_multiline", \
reactor_line +
Rep( continuation_line ) )
system_block = Group( "system_block", \
Rep1( system_line ) )
reactor_block = Group( "reactor_block", \
Rep1( reactor_multiline ) )
substance_block = Group( "substance_block", \
Rep1( substance_multiline ) )
valid_block = Group( "valid_block",
header_line +
Alt( system_block, reactor_block, substance_block ) )
valid_contents = Group( "valid_contents", Rep1( valid_block ) )
ecell_record = valid_contents
| dbmi-pitt/DIKB-Micropublication | scripts/mp-scripts/Bio/ECell/ecell_format.py | Python | apache-2.0 | 2,780 | [
"Biopython"
] | 6a64f9acd2f334f60e7f7d233a01d5df3a06f6191e0c9915a0dda7cdf500fe9f |
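The docstring's `\R` note above maps onto a plain `re` pattern for platform-independent line endings:

```python
import re

# \R in the Martel notes means "\n|\r\n?": one pattern that matches
# Unix (\n), Windows (\r\n) and old-Mac (\r) line endings.
# \r\n? must come first so \r\n is consumed as one ending, not two.
EOL = re.compile(r"\r\n?|\n")
print(EOL.split("a\r\nb\rc\nd"))  # ['a', 'b', 'c', 'd']
```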
# Copyright 2014-2020 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Oliver J. Backhouse <olbackhouse@gmail.com>
# George H. Booth <george.booth@kcl.ac.uk>
#
'''
Auxiliary second-order Green's function perturbation theory for
unrestricted references for arbitrary moment consistency
'''
import numpy as np
from pyscf import lib
from pyscf.lib import logger
from pyscf import __config__
from pyscf import ao2mo
from pyscf.agf2 import ragf2, uagf2, ragf2_slow
from pyscf.agf2 import aux_space as aux
def build_se_part(agf2, eri, gf_occ, gf_vir, os_factor=1.0, ss_factor=1.0):
''' Builds either the auxiliaries of the occupied self-energy,
or virtual if :attr:`gf_occ` and :attr:`gf_vir` are swapped,
for a single spin.
Args:
eri : _ChemistsERIs
Electronic repulsion integrals
gf_occ : tuple of GreensFunction
Occupied Green's function for each spin
gf_vir : tuple of GreensFunction
Virtual Green's function for each spin
Kwargs:
os_factor : float
Opposite-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
ss_factor : float
Same-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
Returns:
:class:`SelfEnergy`
'''
cput0 = (logger.process_clock(), logger.perf_counter())
log = logger.Logger(agf2.stdout, agf2.verbose)
assert type(gf_occ[0]) is aux.GreensFunction
assert type(gf_occ[1]) is aux.GreensFunction
assert type(gf_vir[0]) is aux.GreensFunction
assert type(gf_vir[1]) is aux.GreensFunction
tol = agf2.weight_tol
def _build_se_part_spin(spin=0):
''' Perform the build for a single spin
spin = 0: alpha
spin = 1: beta
'''
if spin == 0:
ab = slice(None)
else:
ab = slice(None, None, -1)
nmoa, nmob = agf2.nmo[ab]
gfo_a, gfo_b = gf_occ[ab]
gfv_a, gfv_b = gf_vir[ab]
noa, nob = gfo_a.naux, gfo_b.naux
nva, nvb = gfv_a.naux, gfv_b.naux
naux = nva*noa*(noa-1)//2 + nvb*noa*nob
if not (agf2.frozen is None or agf2.frozen == 0):
mask = uagf2.get_frozen_mask(agf2)
nmoa -= np.sum(~mask[ab][0])
nmob -= np.sum(~mask[ab][1])
e = np.zeros((naux))
v = np.zeros((nmoa, naux))
falph = np.sqrt(ss_factor)
fbeta = np.sqrt(os_factor)
eja_a = lib.direct_sum('j,a->ja', gfo_a.energy, -gfv_a.energy)
eja_b = lib.direct_sum('j,a->ja', gfo_b.energy, -gfv_b.energy)
ca = (gf_occ[0].coupling, gf_occ[0].coupling, gf_vir[0].coupling)
cb = (gf_occ[1].coupling, gf_occ[1].coupling, gf_vir[1].coupling)
qeri = _make_qmo_eris_incore(agf2, eri, ca, cb, spin=spin)
qeri_aa, qeri_ab = qeri
p1 = 0
for i in range(noa):
xija_aa = qeri_aa[:,i,:i].reshape(nmoa, -1)
xjia_aa = qeri_aa[:,:i,i].reshape(nmoa, -1)
xija_ab = qeri_ab[:,i].reshape(nmoa, -1)
eija_aa = gfo_a.energy[i] + eja_a[:i]
eija_ab = gfo_a.energy[i] + eja_b
p0, p1 = p1, p1 + i*nva
e[p0:p1] = eija_aa.ravel()
v[:,p0:p1] = falph * (xija_aa - xjia_aa)
p0, p1 = p1, p1 + nob*nvb
e[p0:p1] = eija_ab.ravel()
v[:,p0:p1] = fbeta * xija_ab
se = aux.SelfEnergy(e, v, chempot=gfo_a.chempot)
se.remove_uncoupled(tol=tol)
if not (agf2.frozen is None or agf2.frozen == 0):
coupling = np.zeros((agf2.nmo[ab][0], se.naux))
coupling[mask[ab][0]] = se.coupling
se = aux.SelfEnergy(se.energy, coupling, chempot=gfo_a.chempot)
return se
se_a = _build_se_part_spin(0)
cput0 = log.timer('se part (alpha)', *cput0)
se_b = _build_se_part_spin(1)
cput0 = log.timer('se part (beta)', *cput0)
return (se_a, se_b)
class UAGF2(uagf2.UAGF2):
''' Unrestricted AGF2 with canonical HF reference for arbitrary
moment consistency
Attributes:
nmom : tuple of int
Compression level of the Green's function and
self-energy, respectively
verbose : int
Print level. Default value equals to :class:`Mole.verbose`
max_memory : float or int
Allowed memory in MB. Default value equals to :class:`Mole.max_memory`
conv_tol : float
Convergence threshold for AGF2 energy. Default value is 1e-7
conv_tol_rdm1 : float
Convergence threshold for first-order reduced density matrix.
Default value is 1e-8.
conv_tol_nelec : float
Convergence threshold for the number of electrons. Default
value is 1e-6.
max_cycle : int
Maximum number of AGF2 iterations. Default value is 50.
max_cycle_outer : int
Maximum number of outer Fock loop iterations. Default
value is 20.
max_cycle_inner : int
Maximum number of inner Fock loop iterations. Default
value is 50.
weight_tol : float
Threshold in spectral weight of auxiliaries to be considered
zero. Default 1e-11.
fock_diis_space : int
DIIS space size for Fock loop iterations. Default value is 6.
fock_diis_min_space :
Minimum space of DIIS. Default value is 1.
os_factor : float
Opposite-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
ss_factor : float
Same-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
damping : float
Damping factor for the self-energy. Default value is 0.0
Saved results
e_corr : float
AGF2 correlation energy
e_tot : float
Total energy (HF + correlation)
e_1b : float
One-body part of :attr:`e_tot`
e_2b : float
Two-body part of :attr:`e_tot`
e_init : float
Initial correlation energy (truncated MP2)
converged : bool
Whether convergence was successful
se : tuple of SelfEnergy
Auxiliaries of the self-energy for each spin
gf : tuple of GreensFunction
Auxiliaries of the Green's function for each spin
'''
def __init__(self, mf, nmom=(None,0), frozen=None, mo_energy=None, mo_coeff=None, mo_occ=None):
uagf2.UAGF2.__init__(self, mf, frozen=frozen, mo_energy=mo_energy,
mo_coeff=mo_coeff, mo_occ=mo_occ)
self.nmom = nmom
self._keys.update(['nmom'])
build_se_part = build_se_part
def build_se(self, eri=None, gf=None, os_factor=None, ss_factor=None, se_prev=None):
''' Builds the auxiliaries of the self-energy.
Args:
eri : _ChemistsERIs
Electronic repulsion integrals
gf : tuple of GreensFunction
Auxiliaries of the Green's function
Kwargs:
os_factor : float
Opposite-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
ss_factor : float
Same-spin factor for spin-component-scaled (SCS)
calculations. Default 1.0
se_prev : SelfEnergy
Previous self-energy for damping. Default value is None
Returns
:class:`SelfEnergy`
'''
if eri is None: eri = self.ao2mo()
if gf is None: gf = self.gf
if gf is None: gf = self.init_gf()
focka = fockb = None
if self.nmom[1] is not None:
focka, fockb = self.get_fock(eri=eri, gf=gf)
if os_factor is None: os_factor = self.os_factor
if ss_factor is None: ss_factor = self.ss_factor
facs = dict(os_factor=os_factor, ss_factor=ss_factor)
gf_occ = (gf[0].get_occupied(), gf[1].get_occupied())
gf_vir = (gf[0].get_virtual(), gf[1].get_virtual())
se_occ = self.build_se_part(eri, gf_occ, gf_vir, **facs)
se_occ = (se_occ[0].compress(n=(None, self.nmom[1])),
se_occ[1].compress(n=(None, self.nmom[1])))
se_vir = self.build_se_part(eri, gf_vir, gf_occ, **facs)
se_vir = (se_vir[0].compress(n=(None, self.nmom[1])),
se_vir[1].compress(n=(None, self.nmom[1])))
se_a = aux.combine(se_occ[0], se_vir[0])
se_a = se_a.compress(phys=focka, n=(self.nmom[0], None))
se_b = aux.combine(se_occ[1], se_vir[1])
se_b = se_b.compress(phys=fockb, n=(self.nmom[0], None))
if se_prev is not None and self.damping != 0.0:
se_a_prev, se_b_prev = se_prev
se_a.coupling *= np.sqrt(1.0-self.damping)
se_b.coupling *= np.sqrt(1.0-self.damping)
se_a_prev.coupling *= np.sqrt(self.damping)
se_b_prev.coupling *= np.sqrt(self.damping)
se_a = aux.combine(se_a, se_a_prev)
se_b = aux.combine(se_b, se_b_prev)
se_a = se_a.compress(n=(None,0))
se_b = se_b.compress(n=(None,0))
return (se_a, se_b)
def dump_flags(self, verbose=None):
uagf2.UAGF2.dump_flags(self, verbose=verbose)
logger.info(self, 'nmom = %s', repr(self.nmom))
return self
def run_diis(self, se, diis=None):
return se
class _ChemistsERIs(uagf2._ChemistsERIs):
pass
_make_qmo_eris_incore = uagf2._make_qmo_eris_incore
if __name__ == '__main__':
from pyscf import gto, scf, mp
mol = gto.M(atom='O 0 0 0; H 0 0 1; H 0 1 0', basis='cc-pvdz', charge=-1, spin=1, verbose=3)
uhf = scf.UHF(mol)
uhf.conv_tol = 1e-11
uhf.run()
agf2 = UAGF2(uhf)
agf2.run()
agf2 = uagf2.UAGF2(uhf)
agf2.run()
| sunqm/pyscf | pyscf/agf2/uagf2_slow.py | Python | apache-2.0 | 10,460 | [
"PySCF"
] | 55eac0bf526a6d927e1b024f72e0da429c2ac505914a3b410e59661754f79ee2 |
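The damping step in `build_se` scales the new and previous couplings by sqrt(1-d) and sqrt(d) before combining them, so the spectral weight (the squared couplings) interpolates linearly between iterations. A scalar sketch of just that mixing rule:

```python
import math

def damp_couplings(new, prev, damping):
    """Mix two coupling vectors as in the AGF2 damping step: scale by
    sqrt(1-d) and sqrt(d), then concatenate. A sketch of the idea only --
    the real code also re-compresses the combined self-energy."""
    a, b = math.sqrt(1.0 - damping), math.sqrt(damping)
    return [a * v for v in new] + [b * v for v in prev]

mixed = damp_couplings([1.0, 2.0], [3.0], 0.5)
print(sum(v * v for v in mixed))  # ~7.0 = 0.5*(1+4) + 0.5*9
```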
# Player Variables
def major_bonus(stat):
stat += 10
return stat
def minor_bonus(stat):
stat += 5
return stat
#player variables
age = (18, 38)
potential = (30, 100)
def_iq = (35, 100)
off_iq = (35, 100)
decision_making = (39, 100)
court_awareness = (39, 100)
strength = (39, 100)
fatigue = (40, 100)
stamina = (40, 100)
shooting_touch = (5, 15)
all_star_quality = (0, 100)
height = (180, 190)
wingspan = (0, 13)
vertical = (20, 50)
speed = (40, 100)
passing = (50, 100)
dribbling = (50, 100)
shot_layup = (35, 65)
shot_close = (35, 50)
shot_midrange = (30, 50)
shot_three = (30, 49)
shot_ft = (55, 98)
steal = (40, 100)
block = (39, 60)
rebounding = (40, 100)
all_star_threshold = 90
all_star_bonus = 13
all_star_stat_min = 64
all_star_stat_max = 96
#coach variables
motivation = (40, 100)
coach_off_iq = (40, 100)
coach_def_iq = (40, 100)
training = (40, 100)
leadership = (40, 100)
offense_playbook = {'1' : 'motion', '2' : 'flex', '3' : 'triangle'}
defense_playbook = {'1' : 'man', '2' : 'zone', '3' : 'trap'}
#team variables
budget = 100 # figured in millions of $
home_court_advantage = (50, 100)
team_name_options = [
"Chicago Cows",
"Indianapolis Impalas",
"Miami Manatees",
"Tampa Bay Turtles",
"Denver Donkeys",
"Phoenix Pumas",
"Portland Pythons",
"New Orleans Osprey",
"Houston Hawks",
"Boston Bears",
"Los Angeles Leopards",
"Brooklyn Bulldogs ",
"Minneapolis Moose",
"Montreal Mice",
"Memphis Mallards",
"Toronto Terns",
"Sacramento Stallions",
"Dallas Ducks",
"Philadelphia Pigeons",
"Cleveland Cougars",
"Seattle Seals",
"Detroit Doves",
"Atlanta Ants",
"Hartford Hares",
"St. Louis Swans",
"Wichita Warthogs",
"Louisville Lions",
"Jersey City Jackals",
"Oakland Owls",
"Orlando Otters",
"Washington Woodpeckers",
"Baltimore Badgers",
"Boise Beavers",
"Charleston Catfish",
"Cincinnati Crows",
"Buffalo Bison"
]
# League variables
number_of_teams = 12
salary_cap = 100
number_of_players = (number_of_teams * 30)
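A minimal sketch of how the stat ranges and bonus helpers above might be used when generating a player. `roll_stat` is a hypothetical helper introduced here for illustration; it is not part of the original module.

```python
import random

def major_bonus(stat):
    # same shape as the helper above: flat +10 boost
    return stat + 10

def roll_stat(stat_range, rng):
    # draw a stat value from an inclusive (low, high) range tuple
    low, high = stat_range
    return rng.randint(low, high)

speed = (40, 100)        # same range as the speed tuple above
rng = random.Random(0)   # seeded for reproducibility
rolled = roll_stat(speed, rng)
boosted = major_bonus(rolled)
assert 40 <= rolled <= 100
assert boosted == rolled + 10
```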
# ---- end of benkul/BBA : game_variables.py (MIT) ----
#!/usr/bin/env python
# Copyright 2014-2018 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Author: Qiming Sun <osirpt.sun@gmail.com>
#
'''
Parser for basis sets in the Molpro format
'''
import numpy
try:
from pyscf.gto.basis.parse_nwchem import optimize_contraction
from pyscf.gto.basis.parse_nwchem import remove_zero
except ImportError:
optimize_contraction = lambda basis: basis
remove_zero = lambda basis: basis
MAXL = 8
MAPSPDF = {'S': 0,
'P': 1,
'D': 2,
'F': 3,
'G': 4,
'H': 5,
'I': 6,
'K': 7}
COMMENT_KEYWORDS = '!*#'
# parse the basis text which is in Molpro format, return an internal basis
# format which can be assigned to gto.mole.basis
def parse(string, optimize=True):
bastxt = []
for x in string.splitlines():
x = x.strip()
if x and x[0] not in COMMENT_KEYWORDS:
bastxt.append(x)
return _parse(bastxt, optimize)
def load(basisfile, symb, optimize=True):
return _parse(search_seg(basisfile, symb), optimize)
def search_seg(basisfile, symb):
with open(basisfile, 'r') as fin:
rawbas = []
dat = fin.readline()
while dat:
if dat[0] in COMMENT_KEYWORDS:
dat = fin.readline()
continue
elif dat[0].isalpha():
if dat.startswith(symb+' '):
rawbas.append(dat.splitlines()[0])
elif rawbas:
return rawbas
fin.readline() # line for references
elif rawbas:
rawbas.append(dat.splitlines()[0])
dat = fin.readline()
raise RuntimeError('Basis not found for %s in %s' % (symb, basisfile))
def _parse(raw_basis, optimize=True):
# pass 1
basis_add = []
for dat in raw_basis:
dat = dat.upper()
if dat[0].isalpha():
if ' ' not in dat:
# Skip the line of comments
continue
status = dat
val = []
basis_add.append([status, val])
else:
val.append(dat)
raw_basis = [[k, ' '.join(v)] for k,v in basis_add]
# pass 2
basis_add = []
for status, valstring in raw_basis:
tmp = status.split(':')
key = tmp[0].split()
l = MAPSPDF[key[1].upper()]
#TODO if key[-1] == 'SV'
val = tmp[1].split()
np = int(val[0])
nc = int(val[1])
rawd = [float(x) for x in valstring.replace('D','e').split()]
if nc == 0:
for e in rawd:
basis_add.append([l, [e, 1.]])
else:
exps = numpy.array(rawd[:np])
coeff = numpy.zeros((np,nc))
p1 = np
for i in range(nc):
start, end = val[2+i].split('.')
start, end = int(start), int(end)
nd = end - start + 1
p0, p1 = p1, p1 + nd
coeff[start-1:end,i] = rawd[p0:p1]
bval = numpy.hstack((exps[:,None], coeff))
basis_add.append([l] + bval.tolist())
basis_sorted = []
for l in range(MAXL):
basis_sorted.extend([b for b in basis_add if b[0] == l])
if optimize:
basis_sorted = optimize_contraction(basis_sorted)
basis_sorted = remove_zero(basis_sorted)
return basis_sorted
if __name__ == '__main__':
#print(search_seg('minao.libmol', 'C'))
print(load('cc_pvdz.libmol', 'C'))
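For reference, `_parse` above consumes text in roughly the shape shown below. This toy parser re-implements only the single-shell case to illustrate the layout; the sample numbers are the STO-3G hydrogen s exponents and contraction coefficients, used here purely for illustration and not taken from any shipped `.libmol` file.

```python
sample = """\
! STO-3G hydrogen s shell (illustrative numbers)
h S : 3 1 1.3
3.42525091 0.62391373 0.16885540
0.15432897 0.53532814 0.44463454
"""

def toy_parse_shell(text):
    # keep non-empty lines that are not comments ('!', '*', '#')
    lines = [ln.strip() for ln in text.splitlines()
             if ln.strip() and ln.strip()[0] not in '!*#']
    header, *data = lines
    key, val = header.split(':')            # 'h S ' and ' 3 1 1.3'
    nprim, ncontr = (int(x) for x in val.split()[:2])
    # exponents come first, then the contraction coefficients
    raw = [float(x) for x in ' '.join(data).replace('D', 'e').split()]
    exps = raw[:nprim]
    coeffs = raw[nprim:nprim + nprim * ncontr]
    return list(zip(exps, coeffs))

shell = toy_parse_shell(sample)
assert len(shell) == 3
assert shell[0] == (3.42525091, 0.15432897)
```

Real inputs should go through `parse()`/`load()` in this module, which also handle multiple contractions and the `start.end` column ranges.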
# ---- end of gkc1000/pyscf : pyscf/gto/basis/parse_molpro.py (Apache-2.0) ----
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re

from spack import *
class Visit(CMakePackage):
"""VisIt is an Open Source, interactive, scalable, visualization,
animation and analysis tool. See comments in VisIt's package.py
for tips about building VisIt with spack. Building VisIt with
Spack is still experimental and many standard features are likely
disabled.
LINUX-------------------------------------------------------------------
spack install visit ^python+shared ^glib@2.56.3 ^py-setuptools@44.1.0
LINUX-W/O-OPENGL--------------------------------------------------------
spack install visit ^python+shared ^glib@2.56.3 ^py-setuptools@44.1.0 \\
^mesa+opengl
MACOS-------------------------------------------------------------------
spack install visit ^python+shared ^glib@2.56.3 ^py-setuptools@44.1.0 \\
^qt~framework
"""
############################
# Suggestions for building:
############################
# cyrush note:
#
# Out of the box, VisIt's python 2 requirement will cause
# spack spec constraint errors due to Qt + Mesa build
# dependencies.
#
# You can avoid this using:
#
# linux:
# spack install visit ^python+shared ^glib@2.56.3 ^py-setuptools@44.1.0
#
# linux w/o opengl: (add mesa as opengl if system lacks system opengl )
#
# spack install visit ^python+shared ^glib@2.56.3 ^py-setuptools@44.1.0 \
# ^mesa+opengl
#
# macOS:
# spack install visit ^python+shared ^glib@2.56.3 ^py-setuptools@44.1.0 \
# ^qt~framework
#
# Rpath issues undermine qwt (not qt) when built as a framework.
# VisIt's osxfixup resolves this for us in other cases,
# but we can't use osxfixup with spack b/c it will undermine other libs.
#
# Even with these changes, VisIt's Python CLI does not work on macOS,
# there is a linking issue related to OpenSSL.
# (dyld: Symbol not found: _GENERAL_NAME_free - which comes from OpenSSL)
#
############################
homepage = "https://wci.llnl.gov/simulation/computer-codes/visit/"
git = "https://github.com/visit-dav/visit.git"
url = "https://github.com/visit-dav/visit/releases/download/v3.2.1/visit3.2.1.tar.gz"
tags = ['radiuss']
maintainers = ['cyrush']
extendable = True
executables = ['^visit$']
version('develop', branch='develop')
version('3.2.1', sha256='779d59564c63f31fcbfeff24b14ddd6ac941b3bb7d671d31765a770d193f02e8')
version('3.1.1', sha256='0b60ac52fd00aff3cf212a310e36e32e13ae3ca0ddd1ea3f54f75e4d9b6c6cf0')
version('3.0.1', sha256='a506d4d83b8973829e68787d8d721199523ce7ec73e7594e93333c214c2c12bd')
version('2.13.3', sha256='cf0b3d2e39e1cd102dd886d3ef6da892733445e362fc28f24d9682012cccf2e5')
version('2.13.0', sha256='716644b8e78a00ff82691619d4d1e7a914965b6535884890b667b97ba08d6a0f')
version('2.12.3', sha256='2dd351a291ee3e79926bc00391ca89b202cfa4751331b0fdee1b960c7922161f')
version('2.12.2', sha256='55897d656ac2ea4eb87a30118b2e3963d6c8a391dda0790268426a73e4b06943')
version('2.10.3', sha256='05018215c4727eb42d47bb5cc4ff937b2a2ccaca90d141bc7fa426a0843a5dbc')
version('2.10.2', sha256='89ecdfaf197ef431685e31b75628774deb6cd75d3e332ef26505774403e8beff')
version('2.10.1', sha256='6b53dea89a241fd03300a7a3a50c0f773e2fb8458cd3ad06816e9bd2f0337cd8')
variant('gui', default=True, description='Enable VisIt\'s GUI')
variant('adios2', default=False, description='Enable ADIOS2 file format')
variant('hdf5', default=True, description='Enable HDF5 file format')
variant('silo', default=True, description='Enable Silo file format')
variant('python', default=True, description='Enable Python support')
variant('mpi', default=False, description='Enable parallel engine')
patch('spack-changes-3.1.patch', when="@3.1.0:,develop")
patch('spack-changes-3.0.1.patch', when="@3.0.1")
patch('nonframework-qwt.patch', when='^qt~framework platform=darwin')
patch('parallel-hdf5.patch', when='+hdf5+mpi')
#############################################
# Full List of dependencies from build_visit
#############################################
# cyrush note:
# I added these here to give folks details
# to help eventually build up to full
# support for visit
#############################################
# =====================================
# core:
# =====================================
# cmake (build)
# vtk
# qt
# qwt
# python
# mpi
#
# =====================================
# rendering (optional):
# =====================================
# icet
# vtk-m
# vtk-h
# llvm
# mesagl
# osmesa
# tbb
# embree
# ispc
# ospray
#
# =====================================
# python modules:
# =====================================
# numpy
# pillow
# mpi4py
# seedme
# sphinx (build, docs)
# sphinx rtd theme (build, docs)
# pyqt (visit support deprecated)
# pyside (note: we want pyside 2)
#
# =====================================
# testing related:
# =====================================
# p7zip (build, test)
#
# =====================================
# io libs:
# =====================================
# adios
# adios2
# advio
# boost
# boxlib
# cfitsio
# cgns
# conduit
# damaris
# fastbit
# fastquery
# gdal
# h5part
# hdf4
# hdf5
# mdsplus
# mfem
# mili
# moab
# mxml
# nektarpp
# netcdf
# openexr
# pidx
# silo
# stripack
# szip
# tbb
# uintah
# xdmf
# xercesc
# xsd
# zlib
#
# =====================================
depends_on('cmake@3.14.7', type='build')
# https://github.com/visit-dav/visit/issues/3498
# The vtk_compiler_visibility patch fixes a bug where
# VTKGenerateExportHeader.cmake fails to recognize gcc versions 10.0
# or greater.
# The vtk_rendering_opengl2_x11 patch adds include directories to
# Rendering/OpenGL2/CMakeLists.txt for systems that don't have the
# system X libraries and include files installed.
# The vtk_wrapping_python_x11 patch adds include directories to
# Wrapping/Python/CMakelists.txt for systems that don't have the
# system X libraries and include files installed.
depends_on('vtk@8.1.0+opengl2+osmesa~python',
patches=[patch('vtk_compiler_visibility.patch'),
patch('vtk_rendering_opengl2_x11.patch'),
patch('vtk_wrapping_python_x11.patch'),
],
when='~python @3.2:,develop')
depends_on('vtk@8.1.0+opengl2+osmesa+python',
patches=[patch('vtk_compiler_visibility.patch'),
patch('vtk_rendering_opengl2_x11.patch'),
patch('vtk_wrapping_python_x11.patch'),
],
when='+python @3.2:,develop')
depends_on('glu', when='platform=linux')
depends_on('vtk+python', when='+python @3.2:,develop')
depends_on('vtk~mpi', when='~mpi')
depends_on('vtk+qt', when='+gui')
# VisIt doesn't work with later versions of qt.
depends_on('qt+gui@5.14.2:', when='+gui @3.2:,develop')
depends_on('qwt', when='+gui')
# python@3.8 doesn't work with VisIt.
depends_on('python@3.7', when='+python')
# llvm@12.0.1, @11.1.0, @10.0.1 fail in build phase with gcc 6.1.0.
# llvm@9.0.1 fails in cmake phase with gcc 6.1.0.
# llvm@12.0.1, llvm@8.0.1 fail in build phase with gcc 11.2.0
depends_on('llvm@6:', when='^mesa')
depends_on('mesa+glx', when='^mesa')
depends_on('mesa-glu', when='^mesa')
# VisIt doesn't build with hdf5@1.12 and hdf5@1.10 produces files that
# are incompatible with hdf5@1.8.
depends_on('hdf5@1.8', when='+hdf5')
# VisIt uses Silo's 'ghost zone' data structures, which are only available
# in v4.10+ releases: https://wci.llnl.gov/simulation/computer-codes/silo/releases/release-notes-4.10
depends_on('silo@4.10:+shared', when='+silo')
depends_on('silo~mpi', when='+silo~mpi')
depends_on('silo+mpi', when='+silo+mpi')
depends_on('hdf5~mpi', when='+hdf5~mpi')
depends_on('hdf5+mpi', when='+hdf5+mpi')
depends_on('mpi', when='+mpi')
depends_on('adios2', when='+adios2')
root_cmakelists_dir = 'src'
@when('@3.0.0:,develop')
def patch(self):
# Some of VTK's targets don't create explicit libraries, so there is no
# 'vtktiff'. Instead, replace with the library variable defined from
# VTK's module files (e.g. lib/cmake/vtk-8.1/Modules/vtktiff.cmake)
for filename in find('src', 'CMakeLists.txt'):
filter_file(r'\bvtk(tiff|jpeg|png)', r'${vtk\1_LIBRARIES}',
filename)
def cmake_args(self):
spec = self.spec
cxx_flags = [self.compiler.cxx_pic_flag]
cc_flags = [self.compiler.cc_pic_flag]
# NOTE: This is necessary in order to allow VisIt to compile a couple
# of lines of code with 'const char*' to/from 'char*' conversions.
if spec.satisfies('@3:%gcc'):
cxx_flags.append('-fpermissive')
cc_flags.append('-fpermissive')
args = [
'-DVTK_MAJOR_VERSION=' + str(spec['vtk'].version[0]),
'-DVTK_MINOR_VERSION=' + str(spec['vtk'].version[1]),
'-DVISIT_VTK_DIR:PATH=' + spec['vtk'].prefix,
'-DVISIT_ZLIB_DIR:PATH=' + spec['zlib'].prefix,
'-DVISIT_USE_GLEW=OFF',
'-DCMAKE_CXX_FLAGS=' + ' '.join(cxx_flags),
'-DCMAKE_C_FLAGS=' + ' '.join(cc_flags),
'-DVISIT_CONFIG_SITE=NONE',
]
# Provide the plugin compilation environment so as to extend VisIt
args.append('-DVISIT_INSTALL_THIRD_PARTY=ON')
if spec.satisfies('@3.1:'):
args.append('-DFIXUP_OSX=OFF')
if '+python' in spec:
args.append('-DVISIT_PYTHON_SCRIPTING=ON')
# keep this off, we have an openssl + python linking issue
# that appears in spack
args.append('-DVISIT_PYTHON_FILTERS=OFF')
args.append('-DPYTHON_DIR:PATH={0}'.format(spec['python'].home))
else:
args.append('-DVISIT_PYTHON_SCRIPTING=OFF')
# keep this off, we have an openssl + python linking issue
# that appears in spack
args.append('-DVISIT_PYTHON_FILTERS=OFF')
if '+gui' in spec:
qt_bin = spec['qt'].prefix.bin
args.extend([
'-DVISIT_LOC_QMAKE_EXE:FILEPATH={0}/qmake'.format(qt_bin),
'-DVISIT_QT_DIR:PATH=' + spec['qt'].prefix,
'-DVISIT_QWT_DIR:PATH=' + spec['qwt'].prefix
])
else:
args.append('-DVISIT_SERVER_COMPONENTS_ONLY=ON')
args.append('-DVISIT_ENGINE_ONLY=ON')
if '^mesa' in spec:
args.append(
'-DVISIT_LLVM_DIR:PATH={0}'.format(spec['llvm'].prefix))
args.append(
'-DVISIT_MESAGL_DIR:PATH={0}'.format(spec['mesa'].prefix))
if '+hdf5' in spec:
args.append(
'-DVISIT_HDF5_DIR:PATH={0}'.format(spec['hdf5'].prefix))
if '+mpi' in spec:
args.append('-DVISIT_HDF5_MPI_DIR:PATH={0}'.format(
spec['hdf5'].prefix))
if '+silo' in spec:
args.append(
'-DVISIT_SILO_DIR:PATH={0}'.format(spec['silo'].prefix))
if '+mpi' in spec:
args.append('-DVISIT_PARALLEL=ON')
args.append('-DVISIT_C_COMPILER={0}'.format(spec['mpi'].mpicc))
args.append('-DVISIT_CXX_COMPILER={0}'.format(spec['mpi'].mpicxx))
args.append('-DVISIT_MPI_COMPILER={0}'.format(spec['mpi'].mpicxx))
return args
# https://spack.readthedocs.io/en/latest/packaging_guide.html?highlight=executables#making-a-package-discoverable-with-spack-external-find
# Here we are only able to determine the latest version
# despite VisIt may have multiple versions
@classmethod
def determine_version(cls, exe):
output = Executable(exe)('-version', output=str, error=str)
match = re.search(r'\s*(\d[\d\.]+)\.', output)
return match.group(1) if match else None
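`determine_version` above pulls the version out of `visit -version` output with a regex. A quick standalone check of that pattern; the sample output string is an assumption for illustration, not captured from a real VisIt binary.

```python
import re

# the same pattern used by determine_version above
sample_output = "VisIt 3.2.1.\n"   # assumed shape of `visit -version` output
match = re.search(r'\s*(\d[\d\.]+)\.', sample_output)
assert match is not None
assert match.group(1) == "3.2.1"
```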
# ---- end of LLNL/spack : var/spack/repos/builtin/packages/visit/package.py (LGPL-2.1) ----
#!/usr/bin/env python
from __future__ import print_function
import vtk
import numpy
from icqsol.shapes.icqRefineSurface import RefineSurface
from icqsol.util.icqDataFetcher import getArrayIndexFromNameAndProjectOntoCells
class BaseSolver:
def __init__(self, pdata, max_edge_length, order=5):
"""
Constructor
@param pdata instance of vtkPolyData
@param max_edge_length maximum edge length, used to turn
polygons into triangles
@param order order of the integration scheme (method dependent)
"""
# triangulate
rs = RefineSurface(pdata)
rs.refine(max_edge_length=max_edge_length)
self.pdata = rs.getVtkPolyData()
# store the point indices for each cell
self.ptIdList = []
ptIds = vtk.vtkIdList()
polys = self.pdata.GetPolys()
polys.InitTraversal()
for i in range(polys.GetNumberOfCells()):
polys.GetNextCell(ptIds)
assert(ptIds.GetNumberOfIds() == 3)
self.ptIdList.append([ptIds.GetId(0),
ptIds.GetId(1),
ptIds.GetId(2)])
self.points = self.pdata.GetPoints()
self.polys = self.pdata.GetPolys()
self.numTriangles = self.polys.GetNumberOfCells()
# order of the integration, method dependent
self.order = order
# set in the derived classes
self.responseName = 'NOT-SET'
self.sourceName = 'NOT-SET'
def getVtkPolyData(self):
"""
Get the (modified) vtkPolyData object
@return object
"""
return self.pdata
def getPoints(self):
"""
Get grid points
@return points
"""
points = self.pdata.GetPoints()
numPoints = points.GetNumberOfPoints()
res = numpy.zeros((numPoints, 3), numpy.float64)
for i in range(numPoints):
res[i, :] = points.GetPoint(i)
return res
def getCells(self):
"""
Get cell connectivity
@return cells
"""
polys = self.pdata.GetPolys()
numCells = polys.GetNumberOfCells()
res = numpy.zeros((numCells, 3), int)
polys.InitTraversal()
ptIds = vtk.vtkIdList()
for i in range(numCells):
polys.GetNextCell(ptIds)
res[i, :] = ptIds.GetId(0), ptIds.GetId(1), ptIds.GetId(2)
return res
def setResponseFieldName(self, name):
"""
Set the name of the response field
@param name name
"""
self.responseName = name
def setSourceFieldName(self, name):
"""
Set the name of the source field
@param name name
"""
self.sourceName = name
def getSourceArrayIndex(self):
"""
Get the source field index, projecting onto triangles if need be
"""
srcIndex = getArrayIndexFromNameAndProjectOntoCells(self.pdata, self.sourceName)
if srcIndex < 0:
msg = 'ERROR: could not find any cell field named {0}!'.format(self.sourceName)
raise RuntimeError(msg)
return srcIndex
def getSourceArray(self, srcIndex):
"""
Get the source array
@param srcIndex source array index
"""
srcArray = self.pdata.GetCellData().GetArray(srcIndex)
n = self.pdata.GetNumberOfPolys()
src = numpy.zeros((n,), numpy.float64)
for i in range(n):
src[i] = srcArray.GetComponent(i, 0)
return src
def addResponseField(self, rsp):
"""
Add the response field to the polydata
@param rsp response numpy array
"""
rspData = vtk.vtkDoubleArray()
rspData.SetNumberOfComponents(1)
n = len(rsp)
rspData.SetNumberOfTuples(n)
rspData.SetName(self.responseName)
for i in range(n):
rspData.SetTuple(i, [rsp[i]])
self.pdata.GetCellData().AddArray(rspData)
def setSourceFromExpression(self, expression):
"""
Set the source from expression
@param expression expression of x, y, and z
"""
from math import sqrt, pi, sin, cos, tan, log, exp
n = self.pdata.GetNumberOfPolys()
sourceData = vtk.vtkDoubleArray()
sourceData.SetNumberOfComponents(1)
sourceData.SetNumberOfTuples(n)
sourceData.SetName(self.sourceName)
midPoint = numpy.zeros((3,), numpy.float64)
ptIds = vtk.vtkIdList()
cells = self.pdata.GetPolys()
cells.InitTraversal()
for i in range(n):
cells.GetNextCell(ptIds)  # advance to the next cell
npts = ptIds.GetNumberOfIds()
midPoint *= 0 # reset
for j in range(npts):
midPoint += self.points.GetPoint(ptIds.GetId(j))
midPoint /= float(npts)
x, y, z = midPoint
v = eval(expression)
sourceData.SetTuple(i, [v])
self.pdata.GetCellData().AddArray(sourceData)
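`setSourceFromExpression` above evaluates an `x, y, z` expression at each cell midpoint. The core of that loop can be sketched without any VTK data structures; the triangle and expression here are illustrative only.

```python
from math import sqrt, pi, sin, cos, tan, log, exp  # names the expression may use

def midpoint(tri_points):
    # average the vertex coordinates, one component at a time
    n = len(tri_points)
    return tuple(sum(p[i] for p in tri_points) / n for i in range(3))

triangle = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
x, y, z = midpoint(triangle)
expression = 'x + y + z'   # illustrative source expression
value = eval(expression)
assert (x, y, z) == (1.0, 1.0, 0.0)
assert value == 2.0
```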
# ---- end of gregvonkuster/icqsol : bem/icqBaseSolver.py (MIT) ----
# this program corresponds to special.py
### Means test is not done yet
# E Means test is giving error (E)
# F Means test is failing (F)
# EF Means test is giving error and Failing
#! Means test is segfaulting
# 8 Means test runs forever
### test_besselpoly
### test_mathieu_a
### test_mathieu_even_coef
### test_mathieu_odd_coef
### test_modfresnelp
### test_modfresnelm
# test_pbdv_seq
### test_pbvv_seq
### test_sph_harm
from __future__ import division, print_function, absolute_import
import itertools
import platform
import numpy as np
from numpy import (array, isnan, r_, arange, finfo, pi, sin, cos, tan, exp,
log, zeros, sqrt, asarray, inf, nan_to_num, real, arctan, float_)
import pytest
from pytest import raises as assert_raises
from numpy.testing import (assert_equal, assert_almost_equal,
assert_array_equal, assert_array_almost_equal, assert_approx_equal,
assert_, assert_allclose,
assert_array_almost_equal_nulp)
from scipy import special
import scipy.special._ufuncs as cephes
from scipy.special import ellipk, zeta
from scipy.special._testutils import with_special_errors, \
assert_func_equal, FuncData
from scipy._lib._numpy_compat import suppress_warnings
from scipy._lib._version import NumpyVersion
import math
class TestCephes(object):
def test_airy(self):
cephes.airy(0)
def test_airye(self):
cephes.airye(0)
def test_binom(self):
n = np.array([0.264, 4, 5.2, 17])
k = np.array([2, 0.4, 7, 3.3])
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
rknown = np.array([[-0.097152, 0.9263051596159367, 0.01858423645695389,
-0.007581020651518199],[6, 2.0214389119675666, 0, 2.9827344527963846],
[10.92, 2.22993515861399, -0.00585728, 10.468891352063146],
[136, 3.5252179590758828, 19448, 1024.5526916174495]])
assert_func_equal(cephes.binom, rknown.ravel(), nk, rtol=1e-13)
# Test branches in implementation
np.random.seed(1234)
n = np.r_[np.arange(-7, 30), 1000*np.random.rand(30) - 500]
k = np.arange(0, 102)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
assert_func_equal(cephes.binom,
cephes.binom(nk[:,0], nk[:,1] * (1 + 1e-15)),
nk,
atol=1e-10, rtol=1e-10)
def test_binom_2(self):
# Test branches in implementation
np.random.seed(1234)
n = np.r_[np.logspace(1, 300, 20)]
k = np.arange(0, 102)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
assert_func_equal(cephes.binom,
cephes.binom(nk[:,0], nk[:,1] * (1 + 1e-15)),
nk,
atol=1e-10, rtol=1e-10)
def test_binom_exact(self):
@np.vectorize
def binom_int(n, k):
n = int(n)
k = int(k)
num = int(1)
den = int(1)
for i in range(1, k+1):
num *= i + n - k
den *= i
return float(num/den)
np.random.seed(1234)
n = np.arange(1, 15)
k = np.arange(0, 15)
nk = np.array(np.broadcast_arrays(n[:,None], k[None,:])
).reshape(2, -1).T
nk = nk[nk[:,0] >= nk[:,1]]
assert_func_equal(cephes.binom,
binom_int(nk[:,0], nk[:,1]),
nk,
atol=0, rtol=0)
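The exact reference that `binom_int` above builds can be sketched without numpy: it is the product formula C(n, k) = prod_{i=1..k} (i + n - k) / i, computed with exact integers. A standalone illustration:

```python
def binom_exact(n, k):
    # iterative product; the final division is exact for integers n >= k >= 0
    num = den = 1
    for i in range(1, k + 1):
        num *= i + n - k
        den *= i
    return num // den

assert binom_exact(10, 3) == 120
assert binom_exact(10, 7) == 120   # symmetry C(n, k) == C(n, n - k)
assert binom_exact(5, 0) == 1
```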
def test_binom_nooverflow_8346(self):
# Test that binom(n, k) doesn't overflow prematurely
dataset = [
(1000, 500, 2.70288240945436551e+299),
(1002, 501, 1.08007396880791225e+300),
(1004, 502, 4.31599279169058121e+300),
(1006, 503, 1.72468101616263781e+301),
(1008, 504, 6.89188009236419153e+301),
(1010, 505, 2.75402257948335448e+302),
(1012, 506, 1.10052048531923757e+303),
(1014, 507, 4.39774063758732849e+303),
(1016, 508, 1.75736486108312519e+304),
(1018, 509, 7.02255427788423734e+304),
(1020, 510, 2.80626776829962255e+305),
(1022, 511, 1.12140876377061240e+306),
(1024, 512, 4.48125455209897109e+306),
(1026, 513, 1.79075474304149900e+307),
(1028, 514, 7.15605105487789676e+307)
]
dataset = np.asarray(dataset)
FuncData(cephes.binom, dataset, (0, 1), 2, rtol=1e-12).check()
def test_bdtr(self):
assert_equal(cephes.bdtr(1,1,0.5),1.0)
def test_bdtri(self):
assert_equal(cephes.bdtri(1,3,0.5),0.5)
def test_bdtrc(self):
assert_equal(cephes.bdtrc(1,3,0.5),0.5)
def test_bdtrin(self):
assert_equal(cephes.bdtrin(1,0,1),5.0)
def test_bdtrik(self):
cephes.bdtrik(1,3,0.5)
def test_bei(self):
assert_equal(cephes.bei(0),0.0)
def test_beip(self):
assert_equal(cephes.beip(0),0.0)
def test_ber(self):
assert_equal(cephes.ber(0),1.0)
def test_berp(self):
assert_equal(cephes.berp(0),0.0)
def test_besselpoly(self):
assert_equal(cephes.besselpoly(0,0,0),1.0)
def test_beta(self):
assert_equal(cephes.beta(1,1),1.0)
assert_allclose(cephes.beta(-100.3, 1e-200), cephes.gamma(1e-200))
assert_allclose(cephes.beta(0.0342, 171), 24.070498359873497,
rtol=1e-13, atol=0)
def test_betainc(self):
assert_equal(cephes.betainc(1,1,1),1.0)
assert_allclose(cephes.betainc(0.0342, 171, 1e-10), 0.55269916901806648)
def test_betaln(self):
assert_equal(cephes.betaln(1,1),0.0)
assert_allclose(cephes.betaln(-100.3, 1e-200), cephes.gammaln(1e-200))
assert_allclose(cephes.betaln(0.0342, 170), 3.1811881124242447,
rtol=1e-14, atol=0)
def test_betaincinv(self):
assert_equal(cephes.betaincinv(1,1,1),1.0)
assert_allclose(cephes.betaincinv(0.0342, 171, 0.25),
8.4231316935498957e-21, rtol=3e-12, atol=0)
def test_beta_inf(self):
assert_(np.isinf(special.beta(-1, 2)))
def test_btdtr(self):
assert_equal(cephes.btdtr(1,1,1),1.0)
def test_btdtri(self):
assert_equal(cephes.btdtri(1,1,1),1.0)
def test_btdtria(self):
assert_equal(cephes.btdtria(1,1,1),5.0)
def test_btdtrib(self):
assert_equal(cephes.btdtrib(1,1,1),5.0)
def test_cbrt(self):
assert_approx_equal(cephes.cbrt(1),1.0)
def test_chdtr(self):
assert_equal(cephes.chdtr(1,0),0.0)
def test_chdtrc(self):
assert_equal(cephes.chdtrc(1,0),1.0)
def test_chdtri(self):
assert_equal(cephes.chdtri(1,1),0.0)
def test_chdtriv(self):
assert_equal(cephes.chdtriv(0,0),5.0)
def test_chndtr(self):
assert_equal(cephes.chndtr(0,1,0),0.0)
# Each row holds (x, nu, lam, expected_value)
# These values were computed using Wolfram Alpha with
# CDF[NoncentralChiSquareDistribution[nu, lam], x]
values = np.array([
[25.00, 20.0, 400, 4.1210655112396197139e-57],
[25.00, 8.00, 250, 2.3988026526832425878e-29],
[0.001, 8.00, 40., 5.3761806201366039084e-24],
[0.010, 8.00, 40., 5.45396231055999457039e-20],
[20.00, 2.00, 107, 1.39390743555819597802e-9],
[22.50, 2.00, 107, 7.11803307138105870671e-9],
[25.00, 2.00, 107, 3.11041244829864897313e-8],
[3.000, 2.00, 1.0, 0.62064365321954362734],
[350.0, 300., 10., 0.93880128006276407710],
[100.0, 13.5, 10., 0.99999999650104210949],
[700.0, 20.0, 400, 0.99999999925680650105],
[150.0, 13.5, 10., 0.99999999999999983046],
[160.0, 13.5, 10., 0.99999999999999999518], # 1.0
])
cdf = cephes.chndtr(values[:, 0], values[:, 1], values[:, 2])
assert_allclose(cdf, values[:, 3], rtol=1e-12)
assert_almost_equal(cephes.chndtr(np.inf, np.inf, 0), 2.0)
assert_almost_equal(cephes.chndtr(2, 1, np.inf), 0.0)
assert_(np.isnan(cephes.chndtr(np.nan, 1, 2)))
assert_(np.isnan(cephes.chndtr(5, np.nan, 2)))
assert_(np.isnan(cephes.chndtr(5, 1, np.nan)))
def test_chndtridf(self):
assert_equal(cephes.chndtridf(0,0,1),5.0)
def test_chndtrinc(self):
assert_equal(cephes.chndtrinc(0,1,0),5.0)
def test_chndtrix(self):
assert_equal(cephes.chndtrix(0,1,0),0.0)
def test_cosdg(self):
assert_equal(cephes.cosdg(0),1.0)
def test_cosm1(self):
assert_equal(cephes.cosm1(0),0.0)
def test_cotdg(self):
assert_almost_equal(cephes.cotdg(45),1.0)
def test_dawsn(self):
assert_equal(cephes.dawsn(0),0.0)
assert_allclose(cephes.dawsn(1.23), 0.50053727749081767)
def test_diric(self):
# Test behavior near multiples of 2pi. Regression test for issue
# described in gh-4001.
n_odd = [1, 5, 25]
x = np.array(2*np.pi + 5e-5).astype(np.float32)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=7)
x = np.array(2*np.pi + 1e-9).astype(np.float64)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=15)
x = np.array(2*np.pi + 1e-15).astype(np.float64)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=15)
if hasattr(np, 'float128'):
# No float128 available in 32-bit numpy
x = np.array(2*np.pi + 1e-12).astype(np.float128)
assert_almost_equal(special.diric(x, n_odd), 1.0, decimal=19)
n_even = [2, 4, 24]
x = np.array(2*np.pi + 1e-9).astype(np.float64)
assert_almost_equal(special.diric(x, n_even), -1.0, decimal=15)
# Test at some values not near a multiple of pi
x = np.arange(0.2*np.pi, 1.0*np.pi, 0.2*np.pi)
octave_result = [0.872677996249965, 0.539344662916632,
0.127322003750035, -0.206011329583298]
assert_almost_equal(special.diric(x, 3), octave_result, decimal=15)
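The octave reference values above follow the periodic sinc (Dirichlet kernel) definition diric(x, n) = sin(n*x/2) / (n*sin(x/2)). A stdlib-only spot check of the first sample point:

```python
import math

def diric_scalar(x, n):
    # periodic sinc: sin(n*x/2) / (n * sin(x/2))
    return math.sin(n * x / 2.0) / (n * math.sin(x / 2.0))

val = diric_scalar(0.2 * math.pi, 3)
assert abs(val - 0.872677996249965) < 1e-9
```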
def test_diric_broadcasting(self):
x = np.arange(5)
n = np.array([1, 3, 7])
assert_(special.diric(x[:, np.newaxis], n).shape == (x.size, n.size))
def test_ellipe(self):
assert_equal(cephes.ellipe(1),1.0)
def test_ellipeinc(self):
assert_equal(cephes.ellipeinc(0,1),0.0)
def test_ellipj(self):
cephes.ellipj(0,1)
def test_ellipk(self):
assert_allclose(ellipk(0), pi/2)
def test_ellipkinc(self):
assert_equal(cephes.ellipkinc(0,0),0.0)
def test_erf(self):
assert_equal(cephes.erf(0), 0.0)
def test_erf_symmetry(self):
x = 5.905732037710919
assert_equal(cephes.erf(x) + cephes.erf(-x), 0.0)
def test_erfc(self):
assert_equal(cephes.erfc(0), 1.0)
def test_exp10(self):
assert_approx_equal(cephes.exp10(2),100.0)
def test_exp2(self):
assert_equal(cephes.exp2(2),4.0)
def test_expm1(self):
assert_equal(cephes.expm1(0),0.0)
assert_equal(cephes.expm1(np.inf), np.inf)
assert_equal(cephes.expm1(-np.inf), -1)
assert_equal(cephes.expm1(np.nan), np.nan)
# Earlier numpy versions don't guarantee that npy_cexp conforms to C99.
@pytest.mark.skipif(NumpyVersion(np.__version__) < '1.9.0', reason='')
def test_expm1_complex(self):
expm1 = cephes.expm1
assert_equal(expm1(0 + 0j), 0 + 0j)
assert_equal(expm1(complex(np.inf, 0)), complex(np.inf, 0))
assert_equal(expm1(complex(np.inf, 1)), complex(np.inf, np.inf))
assert_equal(expm1(complex(np.inf, 2)), complex(-np.inf, np.inf))
assert_equal(expm1(complex(np.inf, 4)), complex(-np.inf, -np.inf))
assert_equal(expm1(complex(np.inf, 5)), complex(np.inf, -np.inf))
assert_equal(expm1(complex(1, np.inf)), complex(np.nan, np.nan))
assert_equal(expm1(complex(0, np.inf)), complex(np.nan, np.nan))
assert_equal(expm1(complex(np.inf, np.inf)), complex(np.inf, np.nan))
assert_equal(expm1(complex(-np.inf, np.inf)), complex(-1, 0))
assert_equal(expm1(complex(-np.inf, np.nan)), complex(-1, 0))
assert_equal(expm1(complex(np.inf, np.nan)), complex(np.inf, np.nan))
assert_equal(expm1(complex(0, np.nan)), complex(np.nan, np.nan))
assert_equal(expm1(complex(1, np.nan)), complex(np.nan, np.nan))
assert_equal(expm1(complex(np.nan, 1)), complex(np.nan, np.nan))
assert_equal(expm1(complex(np.nan, np.nan)), complex(np.nan, np.nan))
@pytest.mark.xfail(reason='The real part of expm1(z) bad at these points')
def test_expm1_complex_hard(self):
# The real part of this function is difficult to evaluate when
# z.real = -log(cos(z.imag)).
y = np.array([0.1, 0.2, 0.3, 5, 11, 20])
x = -np.log(np.cos(y))
z = x + 1j*y
# evaluate using mpmath.expm1 with dps=1000
expected = np.array([-5.5507901846769623e-17+0.10033467208545054j,
2.4289354732893695e-18+0.20271003550867248j,
4.5235500262585768e-17+0.30933624960962319j,
7.8234305217489006e-17-3.3805150062465863j,
-1.3685191953697676e-16-225.95084645419513j,
8.7175620481291045e-17+2.2371609442247422j])
found = cephes.expm1(z)
# this passes.
assert_array_almost_equal_nulp(found.imag, expected.imag, 3)
# this fails.
assert_array_almost_equal_nulp(found.real, expected.real, 20)
def test_fdtr(self):
assert_equal(cephes.fdtr(1, 1, 0), 0.0)
# Computed using Wolfram Alpha: CDF[FRatioDistribution[1e-6, 5], 10]
assert_allclose(cephes.fdtr(1e-6, 5, 10), 0.9999940790193488,
rtol=1e-12)
def test_fdtrc(self):
assert_equal(cephes.fdtrc(1, 1, 0), 1.0)
# Computed using Wolfram Alpha:
# 1 - CDF[FRatioDistribution[2, 1/10], 1e10]
assert_allclose(cephes.fdtrc(2, 0.1, 1e10), 0.27223784621293512,
rtol=1e-12)
def test_fdtri(self):
assert_allclose(cephes.fdtri(1, 1, [0.499, 0.501]),
array([0.9937365, 1.00630298]), rtol=1e-6)
# From Wolfram Alpha:
# CDF[FRatioDistribution[1/10, 1], 3] = 0.8756751669632105666874...
p = 0.8756751669632105666874
assert_allclose(cephes.fdtri(0.1, 1, p), 3, rtol=1e-12)
@pytest.mark.xfail(reason='Returns nan on i686.')
def test_fdtri_mysterious_failure(self):
assert_allclose(cephes.fdtri(1, 1, 0.5), 1)
def test_fdtridfd(self):
assert_equal(cephes.fdtridfd(1,0,0),5.0)
def test_fresnel(self):
assert_equal(cephes.fresnel(0),(0.0,0.0))
def test_gamma(self):
assert_equal(cephes.gamma(5),24.0)
def test_gammainccinv(self):
assert_equal(cephes.gammainccinv(5,1),0.0)
def test_gammaln(self):
cephes.gammaln(10)
def test_gammasgn(self):
vals = np.array([-4, -3.5, -2.3, 1, 4.2], np.float64)
assert_array_equal(cephes.gammasgn(vals), np.sign(cephes.rgamma(vals)))
def test_gdtr(self):
assert_equal(cephes.gdtr(1,1,0),0.0)
def test_gdtr_inf(self):
assert_equal(cephes.gdtr(1,1,np.inf),1.0)
def test_gdtrc(self):
assert_equal(cephes.gdtrc(1,1,0),1.0)
def test_gdtria(self):
assert_equal(cephes.gdtria(0,1,1),0.0)
def test_gdtrib(self):
cephes.gdtrib(1,0,1)
# assert_equal(cephes.gdtrib(1,0,1),5.0)
def test_gdtrix(self):
cephes.gdtrix(1,1,.1)
def test_hankel1(self):
cephes.hankel1(1,1)
def test_hankel1e(self):
cephes.hankel1e(1,1)
def test_hankel2(self):
cephes.hankel2(1,1)
def test_hankel2e(self):
cephes.hankel2e(1,1)
def test_hyp1f1(self):
assert_approx_equal(cephes.hyp1f1(1,1,1), exp(1.0))
assert_approx_equal(cephes.hyp1f1(3,4,-6), 0.026056422099537251095)
cephes.hyp1f1(1,1,1)
def test_hyp2f1(self):
assert_equal(cephes.hyp2f1(1,1,1,0),1.0)
def test_i0(self):
assert_equal(cephes.i0(0),1.0)
def test_i0e(self):
assert_equal(cephes.i0e(0),1.0)
def test_i1(self):
assert_equal(cephes.i1(0),0.0)
def test_i1e(self):
assert_equal(cephes.i1e(0),0.0)
def test_it2i0k0(self):
cephes.it2i0k0(1)
def test_it2j0y0(self):
cephes.it2j0y0(1)
def test_it2struve0(self):
cephes.it2struve0(1)
def test_itairy(self):
cephes.itairy(1)
def test_iti0k0(self):
assert_equal(cephes.iti0k0(0),(0.0,0.0))
def test_itj0y0(self):
assert_equal(cephes.itj0y0(0),(0.0,0.0))
def test_itmodstruve0(self):
assert_equal(cephes.itmodstruve0(0),0.0)
def test_itstruve0(self):
assert_equal(cephes.itstruve0(0),0.0)
def test_iv(self):
assert_equal(cephes.iv(1,0),0.0)
def _check_ive(self):
assert_equal(cephes.ive(1,0),0.0)
def test_j0(self):
assert_equal(cephes.j0(0),1.0)
def test_j1(self):
assert_equal(cephes.j1(0),0.0)
def test_jn(self):
assert_equal(cephes.jn(0,0),1.0)
def test_jv(self):
assert_equal(cephes.jv(0,0),1.0)
def _check_jve(self):
assert_equal(cephes.jve(0,0),1.0)
def test_k0(self):
cephes.k0(2)
def test_k0e(self):
cephes.k0e(2)
def test_k1(self):
cephes.k1(2)
def test_k1e(self):
cephes.k1e(2)
def test_kei(self):
cephes.kei(2)
def test_keip(self):
assert_equal(cephes.keip(0),0.0)
def test_ker(self):
cephes.ker(2)
def test_kerp(self):
cephes.kerp(2)
def _check_kelvin(self):
cephes.kelvin(2)
def test_kn(self):
cephes.kn(1,1)
def test_kolmogi(self):
assert_equal(cephes.kolmogi(1),0.0)
assert_(np.isnan(cephes.kolmogi(np.nan)))
def test_kolmogorov(self):
assert_equal(cephes.kolmogorov(0), 1.0)
def test_kolmogp(self):
assert_equal(cephes._kolmogp(0), -0.0)
def test_kolmogc(self):
assert_equal(cephes._kolmogc(0), 0.0)
def test_kolmogci(self):
assert_equal(cephes._kolmogci(0), 0.0)
assert_(np.isnan(cephes._kolmogci(np.nan)))
def _check_kv(self):
cephes.kv(1,1)
def _check_kve(self):
cephes.kve(1,1)
def test_log1p(self):
log1p = cephes.log1p
assert_equal(log1p(0), 0.0)
assert_equal(log1p(-1), -np.inf)
assert_equal(log1p(-2), np.nan)
assert_equal(log1p(np.inf), np.inf)
    # Earlier numpy versions don't guarantee that npy_clog conforms to C99.
    @pytest.mark.skipif(NumpyVersion(np.__version__) < '1.9.0',
                        reason='npy_clog not C99-conformant before numpy 1.9')
def test_log1p_complex(self):
log1p = cephes.log1p
c = complex
assert_equal(log1p(0 + 0j), 0 + 0j)
assert_equal(log1p(c(-1, 0)), c(-np.inf, 0))
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "invalid value encountered in multiply")
assert_allclose(log1p(c(1, np.inf)), c(np.inf, np.pi/2))
assert_equal(log1p(c(1, np.nan)), c(np.nan, np.nan))
assert_allclose(log1p(c(-np.inf, 1)), c(np.inf, np.pi))
assert_equal(log1p(c(np.inf, 1)), c(np.inf, 0))
assert_allclose(log1p(c(-np.inf, np.inf)), c(np.inf, 3*np.pi/4))
assert_allclose(log1p(c(np.inf, np.inf)), c(np.inf, np.pi/4))
assert_equal(log1p(c(np.inf, np.nan)), c(np.inf, np.nan))
assert_equal(log1p(c(-np.inf, np.nan)), c(np.inf, np.nan))
assert_equal(log1p(c(np.nan, np.inf)), c(np.inf, np.nan))
assert_equal(log1p(c(np.nan, 1)), c(np.nan, np.nan))
assert_equal(log1p(c(np.nan, np.nan)), c(np.nan, np.nan))
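A brief sketch of why `log1p` is tested separately from plain `log` (this is illustrative commentary, not part of the suite): for tiny x, the naive `log(1 + x)` rounds `1 + x` to exactly 1.0 and returns 0, while `log1p` retains the low-order bits of x.

```python
# For |x| below machine epsilon, 1 + x rounds to 1.0 in double precision,
# so log(1 + x) loses all information; log1p(x) ~ x remains accurate.
import numpy as np
from scipy.special import log1p

x = 1e-18
assert np.log(1 + x) == 0.0                  # 1 + 1e-18 rounds to exactly 1.0
assert np.isclose(log1p(x), x, rtol=1e-12)   # log1p preserves the tiny argument
```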
def test_lpmv(self):
assert_equal(cephes.lpmv(0,0,1),1.0)
def test_mathieu_a(self):
assert_equal(cephes.mathieu_a(1,0),1.0)
def test_mathieu_b(self):
assert_equal(cephes.mathieu_b(1,0),1.0)
def test_mathieu_cem(self):
assert_equal(cephes.mathieu_cem(1,0,0),(1.0,0.0))
# Test AMS 20.2.27
@np.vectorize
def ce_smallq(m, q, z):
z *= np.pi/180
if m == 0:
return 2**(-0.5) * (1 - .5*q*cos(2*z)) # + O(q^2)
elif m == 1:
return cos(z) - q/8 * cos(3*z) # + O(q^2)
elif m == 2:
return cos(2*z) - q*(cos(4*z)/12 - 1/4) # + O(q^2)
else:
return cos(m*z) - q*(cos((m+2)*z)/(4*(m+1)) - cos((m-2)*z)/(4*(m-1))) # + O(q^2)
m = np.arange(0, 100)
q = np.r_[0, np.logspace(-30, -9, 10)]
assert_allclose(cephes.mathieu_cem(m[:,None], q[None,:], 0.123)[0],
ce_smallq(m[:,None], q[None,:], 0.123),
rtol=1e-14, atol=0)
def test_mathieu_sem(self):
assert_equal(cephes.mathieu_sem(1,0,0),(0.0,1.0))
# Test AMS 20.2.27
@np.vectorize
def se_smallq(m, q, z):
z *= np.pi/180
if m == 1:
return sin(z) - q/8 * sin(3*z) # + O(q^2)
elif m == 2:
return sin(2*z) - q*sin(4*z)/12 # + O(q^2)
else:
return sin(m*z) - q*(sin((m+2)*z)/(4*(m+1)) - sin((m-2)*z)/(4*(m-1))) # + O(q^2)
m = np.arange(1, 100)
q = np.r_[0, np.logspace(-30, -9, 10)]
assert_allclose(cephes.mathieu_sem(m[:,None], q[None,:], 0.123)[0],
se_smallq(m[:,None], q[None,:], 0.123),
rtol=1e-14, atol=0)
def test_mathieu_modcem1(self):
assert_equal(cephes.mathieu_modcem1(1,0,0),(0.0,0.0))
def test_mathieu_modcem2(self):
cephes.mathieu_modcem2(1,1,1)
# Test reflection relation AMS 20.6.19
m = np.arange(0, 4)[:,None,None]
q = np.r_[np.logspace(-2, 2, 10)][None,:,None]
z = np.linspace(0, 1, 7)[None,None,:]
y1 = cephes.mathieu_modcem2(m, q, -z)[0]
fr = -cephes.mathieu_modcem2(m, q, 0)[0] / cephes.mathieu_modcem1(m, q, 0)[0]
y2 = -cephes.mathieu_modcem2(m, q, z)[0] - 2*fr*cephes.mathieu_modcem1(m, q, z)[0]
assert_allclose(y1, y2, rtol=1e-10)
def test_mathieu_modsem1(self):
assert_equal(cephes.mathieu_modsem1(1,0,0),(0.0,0.0))
def test_mathieu_modsem2(self):
cephes.mathieu_modsem2(1,1,1)
# Test reflection relation AMS 20.6.20
m = np.arange(1, 4)[:,None,None]
q = np.r_[np.logspace(-2, 2, 10)][None,:,None]
z = np.linspace(0, 1, 7)[None,None,:]
y1 = cephes.mathieu_modsem2(m, q, -z)[0]
fr = cephes.mathieu_modsem2(m, q, 0)[1] / cephes.mathieu_modsem1(m, q, 0)[1]
y2 = cephes.mathieu_modsem2(m, q, z)[0] - 2*fr*cephes.mathieu_modsem1(m, q, z)[0]
assert_allclose(y1, y2, rtol=1e-10)
def test_mathieu_overflow(self):
# Check that these return NaNs instead of causing a SEGV
assert_equal(cephes.mathieu_cem(10000, 0, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_sem(10000, 0, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_cem(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_sem(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modcem1(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modsem1(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modcem2(10000, 1.5, 1.3), (np.nan, np.nan))
assert_equal(cephes.mathieu_modsem2(10000, 1.5, 1.3), (np.nan, np.nan))
def test_mathieu_ticket_1847(self):
# Regression test --- this call had some out-of-bounds access
# and could return nan occasionally
for k in range(60):
v = cephes.mathieu_modsem2(2, 100, -1)
        # Values from ACM TOMS 804 (derivative computed by numerical differentiation)
assert_allclose(v[0], 0.1431742913063671074347, rtol=1e-10)
assert_allclose(v[1], 0.9017807375832909144719, rtol=1e-4)
def test_modfresnelm(self):
cephes.modfresnelm(0)
def test_modfresnelp(self):
cephes.modfresnelp(0)
def _check_modstruve(self):
assert_equal(cephes.modstruve(1,0),0.0)
def test_nbdtr(self):
assert_equal(cephes.nbdtr(1,1,1),1.0)
def test_nbdtrc(self):
assert_equal(cephes.nbdtrc(1,1,1),0.0)
def test_nbdtri(self):
assert_equal(cephes.nbdtri(1,1,1),1.0)
def __check_nbdtrik(self):
cephes.nbdtrik(1,.4,.5)
def test_nbdtrin(self):
assert_equal(cephes.nbdtrin(1,0,0),5.0)
def test_ncfdtr(self):
assert_equal(cephes.ncfdtr(1,1,1,0),0.0)
def test_ncfdtri(self):
assert_equal(cephes.ncfdtri(1, 1, 1, 0), 0.0)
f = [0.5, 1, 1.5]
p = cephes.ncfdtr(2, 3, 1.5, f)
assert_allclose(cephes.ncfdtri(2, 3, 1.5, p), f)
def test_ncfdtridfd(self):
dfd = [1, 2, 3]
p = cephes.ncfdtr(2, dfd, 0.25, 15)
assert_allclose(cephes.ncfdtridfd(2, p, 0.25, 15), dfd)
def test_ncfdtridfn(self):
dfn = [0.1, 1, 2, 3, 1e4]
p = cephes.ncfdtr(dfn, 2, 0.25, 15)
assert_allclose(cephes.ncfdtridfn(p, 2, 0.25, 15), dfn, rtol=1e-5)
def test_ncfdtrinc(self):
nc = [0.5, 1.5, 2.0]
p = cephes.ncfdtr(2, 3, nc, 15)
assert_allclose(cephes.ncfdtrinc(2, 3, p, 15), nc)
def test_nctdtr(self):
assert_equal(cephes.nctdtr(1,0,0),0.5)
assert_equal(cephes.nctdtr(9, 65536, 45), 0.0)
assert_approx_equal(cephes.nctdtr(np.inf, 1., 1.), 0.5, 5)
assert_(np.isnan(cephes.nctdtr(2., np.inf, 10.)))
assert_approx_equal(cephes.nctdtr(2., 1., np.inf), 1.)
assert_(np.isnan(cephes.nctdtr(np.nan, 1., 1.)))
assert_(np.isnan(cephes.nctdtr(2., np.nan, 1.)))
assert_(np.isnan(cephes.nctdtr(2., 1., np.nan)))
def __check_nctdtridf(self):
cephes.nctdtridf(1,0.5,0)
def test_nctdtrinc(self):
cephes.nctdtrinc(1,0,0)
def test_nctdtrit(self):
cephes.nctdtrit(.1,0.2,.5)
def test_nrdtrimn(self):
assert_approx_equal(cephes.nrdtrimn(0.5,1,1),1.0)
def test_nrdtrisd(self):
assert_allclose(cephes.nrdtrisd(0.5,0.5,0.5), 0.0,
atol=0, rtol=0)
def test_obl_ang1(self):
cephes.obl_ang1(1,1,1,0)
def test_obl_ang1_cv(self):
result = cephes.obl_ang1_cv(1,1,1,1,0)
assert_almost_equal(result[0],1.0)
assert_almost_equal(result[1],0.0)
def _check_obl_cv(self):
assert_equal(cephes.obl_cv(1,1,0),2.0)
def test_obl_rad1(self):
cephes.obl_rad1(1,1,1,0)
def test_obl_rad1_cv(self):
cephes.obl_rad1_cv(1,1,1,1,0)
def test_obl_rad2(self):
cephes.obl_rad2(1,1,1,0)
def test_obl_rad2_cv(self):
cephes.obl_rad2_cv(1,1,1,1,0)
def test_pbdv(self):
assert_equal(cephes.pbdv(1,0),(0.0,1.0))
def test_pbvv(self):
cephes.pbvv(1,0)
def test_pbwa(self):
cephes.pbwa(1,0)
def test_pdtr(self):
val = cephes.pdtr(0, 1)
assert_almost_equal(val, np.exp(-1))
# Edge case: m = 0.
val = cephes.pdtr([0, 1, 2], 0)
assert_array_equal(val, [1, 1, 1])
def test_pdtrc(self):
val = cephes.pdtrc(0, 1)
assert_almost_equal(val, 1 - np.exp(-1))
# Edge case: m = 0.
val = cephes.pdtrc([0, 1, 2], 0.0)
assert_array_equal(val, [0, 0, 0])
def test_pdtri(self):
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "floating point number truncated to an integer")
cephes.pdtri(0.5,0.5)
def test_pdtrik(self):
k = cephes.pdtrik(0.5, 1)
assert_almost_equal(cephes.gammaincc(k + 1, 1), 0.5)
# Edge case: m = 0 or very small.
k = cephes.pdtrik([[0], [0.25], [0.95]], [0, 1e-20, 1e-6])
assert_array_equal(k, np.zeros((3, 3)))
def test_pro_ang1(self):
cephes.pro_ang1(1,1,1,0)
def test_pro_ang1_cv(self):
assert_array_almost_equal(cephes.pro_ang1_cv(1,1,1,1,0),
array((1.0,0.0)))
def _check_pro_cv(self):
assert_equal(cephes.pro_cv(1,1,0),2.0)
def test_pro_rad1(self):
cephes.pro_rad1(1,1,1,0.1)
def test_pro_rad1_cv(self):
cephes.pro_rad1_cv(1,1,1,1,0)
def test_pro_rad2(self):
cephes.pro_rad2(1,1,1,0)
def test_pro_rad2_cv(self):
cephes.pro_rad2_cv(1,1,1,1,0)
def test_psi(self):
cephes.psi(1)
def test_radian(self):
assert_equal(cephes.radian(0,0,0),0)
def test_rgamma(self):
assert_equal(cephes.rgamma(1),1.0)
def test_round(self):
assert_equal(cephes.round(3.4),3.0)
assert_equal(cephes.round(-3.4),-3.0)
assert_equal(cephes.round(3.6),4.0)
assert_equal(cephes.round(-3.6),-4.0)
assert_equal(cephes.round(3.5),4.0)
assert_equal(cephes.round(-3.5),-4.0)
def test_shichi(self):
cephes.shichi(1)
def test_sici(self):
cephes.sici(1)
s, c = cephes.sici(np.inf)
assert_almost_equal(s, np.pi * 0.5)
assert_almost_equal(c, 0)
s, c = cephes.sici(-np.inf)
assert_almost_equal(s, -np.pi * 0.5)
assert_(np.isnan(c), "cosine integral(-inf) is not nan")
def test_sindg(self):
assert_equal(cephes.sindg(90),1.0)
def test_smirnov(self):
assert_equal(cephes.smirnov(1,.1),0.9)
assert_(np.isnan(cephes.smirnov(1,np.nan)))
def test_smirnovp(self):
assert_equal(cephes._smirnovp(1, .1), -1)
assert_equal(cephes._smirnovp(2, 0.75), -2*(0.25)**(2-1))
assert_equal(cephes._smirnovp(3, 0.75), -3*(0.25)**(3-1))
assert_(np.isnan(cephes._smirnovp(1, np.nan)))
def test_smirnovc(self):
assert_equal(cephes._smirnovc(1,.1),0.1)
assert_(np.isnan(cephes._smirnovc(1,np.nan)))
x10 = np.linspace(0, 1, 11, endpoint=True)
assert_almost_equal(cephes._smirnovc(3, x10), 1-cephes.smirnov(3, x10))
x4 = np.linspace(0, 1, 5, endpoint=True)
assert_almost_equal(cephes._smirnovc(4, x4), 1-cephes.smirnov(4, x4))
def test_smirnovi(self):
assert_almost_equal(cephes.smirnov(1,cephes.smirnovi(1,0.4)),0.4)
assert_almost_equal(cephes.smirnov(1,cephes.smirnovi(1,0.6)),0.6)
assert_(np.isnan(cephes.smirnovi(1,np.nan)))
def test_smirnovci(self):
assert_almost_equal(cephes._smirnovc(1,cephes._smirnovci(1,0.4)),0.4)
assert_almost_equal(cephes._smirnovc(1,cephes._smirnovci(1,0.6)),0.6)
assert_(np.isnan(cephes._smirnovci(1,np.nan)))
def test_spence(self):
assert_equal(cephes.spence(1),0.0)
def test_stdtr(self):
assert_equal(cephes.stdtr(1,0),0.5)
assert_almost_equal(cephes.stdtr(1,1), 0.75)
assert_almost_equal(cephes.stdtr(1,2), 0.852416382349)
def test_stdtridf(self):
cephes.stdtridf(0.7,1)
def test_stdtrit(self):
cephes.stdtrit(1,0.7)
def test_struve(self):
assert_equal(cephes.struve(0,0),0.0)
def test_tandg(self):
assert_equal(cephes.tandg(45),1.0)
def test_tklmbda(self):
assert_almost_equal(cephes.tklmbda(1,1),1.0)
def test_y0(self):
cephes.y0(1)
def test_y1(self):
cephes.y1(1)
def test_yn(self):
cephes.yn(1,1)
def test_yv(self):
cephes.yv(1,1)
def _check_yve(self):
cephes.yve(1,1)
def test_wofz(self):
z = [complex(624.2,-0.26123), complex(-0.4,3.), complex(0.6,2.),
complex(-1.,1.), complex(-1.,-9.), complex(-1.,9.),
complex(-0.0000000234545,1.1234), complex(-3.,5.1),
complex(-53,30.1), complex(0.0,0.12345),
complex(11,1), complex(-22,-2), complex(9,-28),
complex(21,-33), complex(1e5,1e5), complex(1e14,1e14)
]
w = [
complex(-3.78270245518980507452677445620103199303131110e-7,
0.000903861276433172057331093754199933411710053155),
complex(0.1764906227004816847297495349730234591778719532788,
-0.02146550539468457616788719893991501311573031095617),
complex(0.2410250715772692146133539023007113781272362309451,
0.06087579663428089745895459735240964093522265589350),
complex(0.30474420525691259245713884106959496013413834051768,
-0.20821893820283162728743734725471561394145872072738),
complex(7.317131068972378096865595229600561710140617977e34,
8.321873499714402777186848353320412813066170427e34),
complex(0.0615698507236323685519612934241429530190806818395,
-0.00676005783716575013073036218018565206070072304635),
complex(0.3960793007699874918961319170187598400134746631,
-5.593152259116644920546186222529802777409274656e-9),
complex(0.08217199226739447943295069917990417630675021771804,
-0.04701291087643609891018366143118110965272615832184),
complex(0.00457246000350281640952328010227885008541748668738,
-0.00804900791411691821818731763401840373998654987934),
complex(0.8746342859608052666092782112565360755791467973338452,
0.),
complex(0.00468190164965444174367477874864366058339647648741,
0.0510735563901306197993676329845149741675029197050),
complex(-0.0023193175200187620902125853834909543869428763219,
-0.025460054739731556004902057663500272721780776336),
complex(9.11463368405637174660562096516414499772662584e304,
3.97101807145263333769664875189354358563218932e305),
complex(-4.4927207857715598976165541011143706155432296e281,
-2.8019591213423077494444700357168707775769028e281),
complex(2.820947917809305132678577516325951485807107151e-6,
2.820947917668257736791638444590253942253354058e-6),
complex(2.82094791773878143474039725787438662716372268e-15,
2.82094791773878143474039725773333923127678361e-15)
]
assert_func_equal(cephes.wofz, w, z, rtol=1e-13)
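As an illustrative cross-check (not part of the suite), `wofz` can be validated against the complex complementary error function via the standard Faddeeva relation w(z) = exp(-z**2) * erfc(-1j*z), which holds for moderate |z| without overflow:

```python
# Cross-check the Faddeeva function against erfc using
# w(z) = exp(-z**2) * erfc(-1j*z).
import numpy as np
from scipy.special import wofz, erfc

z = np.array([0.5 + 0.5j, -0.4 + 3.0j, 0.6 + 2.0j])
lhs = wofz(z)
rhs = np.exp(-z**2) * erfc(-1j * z)
assert np.allclose(lhs, rhs, rtol=1e-12)
```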
class TestAiry(object):
def test_airy(self):
        # Test the Airy function for 8 decimal places of accuracy.
x = special.airy(.99)
assert_array_almost_equal(x,array([0.13689066,-0.16050153,1.19815925,0.92046818]),8)
x = special.airy(.41)
assert_array_almost_equal(x,array([0.25238916,-.23480512,0.80686202,0.51053919]),8)
x = special.airy(-.36)
assert_array_almost_equal(x,array([0.44508477,-0.23186773,0.44939534,0.48105354]),8)
def test_airye(self):
a = special.airye(0.01)
b = special.airy(0.01)
b1 = [None]*4
for n in range(2):
b1[n] = b[n]*exp(2.0/3.0*0.01*sqrt(0.01))
for n in range(2,4):
b1[n] = b[n]*exp(-abs(real(2.0/3.0*0.01*sqrt(0.01))))
assert_array_almost_equal(a,b1,6)
def test_bi_zeros(self):
bi = special.bi_zeros(2)
bia = (array([-1.17371322, -3.2710930]),
array([-2.29443968, -4.07315509]),
array([-0.45494438, 0.39652284]),
array([0.60195789, -0.76031014]))
assert_array_almost_equal(bi,bia,4)
bi = special.bi_zeros(5)
assert_array_almost_equal(bi[0],array([-1.173713222709127,
-3.271093302836352,
-4.830737841662016,
-6.169852128310251,
-7.376762079367764]),11)
assert_array_almost_equal(bi[1],array([-2.294439682614122,
-4.073155089071828,
-5.512395729663599,
-6.781294445990305,
-7.940178689168587]),10)
assert_array_almost_equal(bi[2],array([-0.454944383639657,
0.396522836094465,
-0.367969161486959,
0.349499116831805,
-0.336026240133662]),11)
assert_array_almost_equal(bi[3],array([0.601957887976239,
-0.760310141492801,
0.836991012619261,
-0.88947990142654,
0.929983638568022]),10)
def test_ai_zeros(self):
ai = special.ai_zeros(1)
assert_array_almost_equal(ai,(array([-2.33810741]),
array([-1.01879297]),
array([0.5357]),
array([0.7012])),4)
def test_ai_zeros_big(self):
z, zp, ai_zpx, aip_zx = special.ai_zeros(50000)
ai_z, aip_z, _, _ = special.airy(z)
ai_zp, aip_zp, _, _ = special.airy(zp)
ai_envelope = 1/abs(z)**(1./4)
aip_envelope = abs(zp)**(1./4)
# Check values
assert_allclose(ai_zpx, ai_zp, rtol=1e-10)
assert_allclose(aip_zx, aip_z, rtol=1e-10)
# Check they are zeros
assert_allclose(ai_z/ai_envelope, 0, atol=1e-10, rtol=0)
assert_allclose(aip_zp/aip_envelope, 0, atol=1e-10, rtol=0)
# Check first zeros, DLMF 9.9.1
assert_allclose(z[:6],
[-2.3381074105, -4.0879494441, -5.5205598281,
-6.7867080901, -7.9441335871, -9.0226508533], rtol=1e-10)
assert_allclose(zp[:6],
[-1.0187929716, -3.2481975822, -4.8200992112,
-6.1633073556, -7.3721772550, -8.4884867340], rtol=1e-10)
def test_bi_zeros_big(self):
z, zp, bi_zpx, bip_zx = special.bi_zeros(50000)
_, _, bi_z, bip_z = special.airy(z)
_, _, bi_zp, bip_zp = special.airy(zp)
bi_envelope = 1/abs(z)**(1./4)
bip_envelope = abs(zp)**(1./4)
# Check values
assert_allclose(bi_zpx, bi_zp, rtol=1e-10)
assert_allclose(bip_zx, bip_z, rtol=1e-10)
# Check they are zeros
assert_allclose(bi_z/bi_envelope, 0, atol=1e-10, rtol=0)
assert_allclose(bip_zp/bip_envelope, 0, atol=1e-10, rtol=0)
# Check first zeros, DLMF 9.9.2
assert_allclose(z[:6],
[-1.1737132227, -3.2710933028, -4.8307378417,
-6.1698521283, -7.3767620794, -8.4919488465], rtol=1e-10)
assert_allclose(zp[:6],
[-2.2944396826, -4.0731550891, -5.5123957297,
-6.7812944460, -7.9401786892, -9.0195833588], rtol=1e-10)
class TestAssocLaguerre(object):
def test_assoc_laguerre(self):
a1 = special.genlaguerre(11,1)
a2 = special.assoc_laguerre(.2,11,1)
assert_array_almost_equal(a2,a1(.2),8)
a2 = special.assoc_laguerre(1,11,1)
assert_array_almost_equal(a2,a1(1),8)
class TestBesselpoly(object):
def test_besselpoly(self):
pass
class TestKelvin(object):
def test_bei(self):
mbei = special.bei(2)
assert_almost_equal(mbei, 0.9722916273066613,5) # this may not be exact
def test_beip(self):
mbeip = special.beip(2)
assert_almost_equal(mbeip,0.91701361338403631,5) # this may not be exact
def test_ber(self):
mber = special.ber(2)
assert_almost_equal(mber,0.75173418271380821,5) # this may not be exact
def test_berp(self):
mberp = special.berp(2)
assert_almost_equal(mberp,-0.49306712470943909,5) # this may not be exact
def test_bei_zeros(self):
# Abramowitz & Stegun, Table 9.12
bi = special.bei_zeros(5)
assert_array_almost_equal(bi,array([5.02622,
9.45541,
13.89349,
18.33398,
22.77544]),4)
def test_beip_zeros(self):
bip = special.beip_zeros(5)
assert_array_almost_equal(bip,array([3.772673304934953,
8.280987849760042,
12.742147523633703,
17.193431752512542,
21.641143941167325]),8)
def test_ber_zeros(self):
ber = special.ber_zeros(5)
assert_array_almost_equal(ber,array([2.84892,
7.23883,
11.67396,
16.11356,
20.55463]),4)
def test_berp_zeros(self):
brp = special.berp_zeros(5)
assert_array_almost_equal(brp,array([6.03871,
10.51364,
14.96844,
19.41758,
23.86430]),4)
def test_kelvin(self):
mkelv = special.kelvin(2)
assert_array_almost_equal(mkelv,(special.ber(2) + special.bei(2)*1j,
special.ker(2) + special.kei(2)*1j,
special.berp(2) + special.beip(2)*1j,
special.kerp(2) + special.keip(2)*1j),8)
def test_kei(self):
mkei = special.kei(2)
assert_almost_equal(mkei,-0.20240006776470432,5)
def test_keip(self):
mkeip = special.keip(2)
assert_almost_equal(mkeip,0.21980790991960536,5)
def test_ker(self):
mker = special.ker(2)
assert_almost_equal(mker,-0.041664513991509472,5)
def test_kerp(self):
mkerp = special.kerp(2)
assert_almost_equal(mkerp,-0.10660096588105264,5)
def test_kei_zeros(self):
kei = special.kei_zeros(5)
assert_array_almost_equal(kei,array([3.91467,
8.34422,
12.78256,
17.22314,
21.66464]),4)
def test_keip_zeros(self):
keip = special.keip_zeros(5)
assert_array_almost_equal(keip,array([4.93181,
9.40405,
13.85827,
18.30717,
22.75379]),4)
# numbers come from 9.9 of A&S pg. 381
def test_kelvin_zeros(self):
tmp = special.kelvin_zeros(5)
berz,beiz,kerz,keiz,berpz,beipz,kerpz,keipz = tmp
assert_array_almost_equal(berz,array([2.84892,
7.23883,
11.67396,
16.11356,
20.55463]),4)
assert_array_almost_equal(beiz,array([5.02622,
9.45541,
13.89349,
18.33398,
22.77544]),4)
assert_array_almost_equal(kerz,array([1.71854,
6.12728,
10.56294,
15.00269,
19.44382]),4)
assert_array_almost_equal(keiz,array([3.91467,
8.34422,
12.78256,
17.22314,
21.66464]),4)
assert_array_almost_equal(berpz,array([6.03871,
10.51364,
14.96844,
19.41758,
23.86430]),4)
assert_array_almost_equal(beipz,array([3.77267,
# table from 1927 had 3.77320
# but this is more accurate
8.28099,
12.74215,
17.19343,
21.64114]),4)
assert_array_almost_equal(kerpz,array([2.66584,
7.17212,
11.63218,
16.08312,
20.53068]),4)
assert_array_almost_equal(keipz,array([4.93181,
9.40405,
13.85827,
18.30717,
22.75379]),4)
def test_ker_zeros(self):
ker = special.ker_zeros(5)
assert_array_almost_equal(ker,array([1.71854,
6.12728,
10.56294,
15.00269,
19.44381]),4)
def test_kerp_zeros(self):
kerp = special.kerp_zeros(5)
assert_array_almost_equal(kerp,array([2.66584,
7.17212,
11.63218,
16.08312,
20.53068]),4)
class TestBernoulli(object):
def test_bernoulli(self):
brn = special.bernoulli(5)
assert_array_almost_equal(brn,array([1.0000,
-0.5000,
0.1667,
0.0000,
-0.0333,
0.0000]),4)
class TestBeta(object):
def test_beta(self):
bet = special.beta(2,4)
betg = (special.gamma(2)*special.gamma(4))/special.gamma(6)
assert_almost_equal(bet,betg,8)
def test_betaln(self):
betln = special.betaln(2,4)
bet = log(abs(special.beta(2,4)))
assert_almost_equal(betln,bet,8)
def test_betainc(self):
btinc = special.betainc(1,1,.2)
assert_almost_equal(btinc,0.2,8)
def test_betaincinv(self):
y = special.betaincinv(2,4,.5)
comp = special.betainc(2,4,y)
assert_almost_equal(comp,.5,5)
class TestCombinatorics(object):
def test_comb(self):
assert_array_almost_equal(special.comb([10, 10], [3, 4]), [120., 210.])
assert_almost_equal(special.comb(10, 3), 120.)
assert_equal(special.comb(10, 3, exact=True), 120)
assert_equal(special.comb(10, 3, exact=True, repetition=True), 220)
assert_allclose([special.comb(20, k, exact=True) for k in range(21)],
special.comb(20, list(range(21))), atol=1e-15)
ii = np.iinfo(int).max + 1
assert_equal(special.comb(ii, ii-1, exact=True), ii)
expected = 100891344545564193334812497256
assert_equal(special.comb(100, 50, exact=True), expected)
def test_comb_with_np_int64(self):
n = 70
k = 30
np_n = np.int64(n)
np_k = np.int64(k)
assert_equal(special.comb(np_n, np_k, exact=True),
special.comb(n, k, exact=True))
def test_comb_zeros(self):
assert_equal(special.comb(2, 3, exact=True), 0)
assert_equal(special.comb(-1, 3, exact=True), 0)
assert_equal(special.comb(2, -1, exact=True), 0)
assert_equal(special.comb(2, -1, exact=False), 0)
assert_array_almost_equal(special.comb([2, -1, 2, 10], [3, 3, -1, 3]),
[0., 0., 0., 120.])
def test_perm(self):
assert_array_almost_equal(special.perm([10, 10], [3, 4]), [720., 5040.])
assert_almost_equal(special.perm(10, 3), 720.)
assert_equal(special.perm(10, 3, exact=True), 720)
def test_perm_zeros(self):
assert_equal(special.perm(2, 3, exact=True), 0)
assert_equal(special.perm(-1, 3, exact=True), 0)
assert_equal(special.perm(2, -1, exact=True), 0)
assert_equal(special.perm(2, -1, exact=False), 0)
assert_array_almost_equal(special.perm([2, -1, 2, 10], [3, 3, -1, 3]),
[0., 0., 0., 720.])
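The `comb` and `perm` tests above can also be tied together by the elementary identity P(n, k) = C(n, k) * k!; a small illustrative check (not part of the suite):

```python
# Permutations equal combinations times k!, linking special.perm and
# special.comb in exact integer arithmetic.
from math import factorial
from scipy import special

for n, k in [(10, 3), (10, 4), (70, 30)]:
    assert (special.perm(n, k, exact=True)
            == special.comb(n, k, exact=True) * factorial(k))
```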
class TestTrigonometric(object):
def test_cbrt(self):
cb = special.cbrt(27)
cbrl = 27**(1.0/3.0)
assert_approx_equal(cb,cbrl)
def test_cbrtmore(self):
cb1 = special.cbrt(27.9)
cbrl1 = 27.9**(1.0/3.0)
assert_almost_equal(cb1,cbrl1,8)
def test_cosdg(self):
cdg = special.cosdg(90)
cdgrl = cos(pi/2.0)
assert_almost_equal(cdg,cdgrl,8)
def test_cosdgmore(self):
cdgm = special.cosdg(30)
cdgmrl = cos(pi/6.0)
assert_almost_equal(cdgm,cdgmrl,8)
def test_cosm1(self):
cs = (special.cosm1(0),special.cosm1(.3),special.cosm1(pi/10))
csrl = (cos(0)-1,cos(.3)-1,cos(pi/10)-1)
assert_array_almost_equal(cs,csrl,8)
def test_cotdg(self):
ct = special.cotdg(30)
ctrl = tan(pi/6.0)**(-1)
assert_almost_equal(ct,ctrl,8)
def test_cotdgmore(self):
ct1 = special.cotdg(45)
ctrl1 = tan(pi/4.0)**(-1)
assert_almost_equal(ct1,ctrl1,8)
def test_specialpoints(self):
assert_almost_equal(special.cotdg(45), 1.0, 14)
assert_almost_equal(special.cotdg(-45), -1.0, 14)
assert_almost_equal(special.cotdg(90), 0.0, 14)
assert_almost_equal(special.cotdg(-90), 0.0, 14)
assert_almost_equal(special.cotdg(135), -1.0, 14)
assert_almost_equal(special.cotdg(-135), 1.0, 14)
assert_almost_equal(special.cotdg(225), 1.0, 14)
assert_almost_equal(special.cotdg(-225), -1.0, 14)
assert_almost_equal(special.cotdg(270), 0.0, 14)
assert_almost_equal(special.cotdg(-270), 0.0, 14)
assert_almost_equal(special.cotdg(315), -1.0, 14)
assert_almost_equal(special.cotdg(-315), 1.0, 14)
assert_almost_equal(special.cotdg(765), 1.0, 14)
def test_sinc(self):
# the sinc implementation and more extensive sinc tests are in numpy
assert_array_equal(special.sinc([0]), 1)
assert_equal(special.sinc(0.0), 1.0)
def test_sindg(self):
sn = special.sindg(90)
assert_equal(sn,1.0)
def test_sindgmore(self):
snm = special.sindg(30)
snmrl = sin(pi/6.0)
assert_almost_equal(snm,snmrl,8)
snm1 = special.sindg(45)
snmrl1 = sin(pi/4.0)
assert_almost_equal(snm1,snmrl1,8)
class TestTandg(object):
def test_tandg(self):
tn = special.tandg(30)
tnrl = tan(pi/6.0)
assert_almost_equal(tn,tnrl,8)
def test_tandgmore(self):
tnm = special.tandg(45)
tnmrl = tan(pi/4.0)
assert_almost_equal(tnm,tnmrl,8)
tnm1 = special.tandg(60)
tnmrl1 = tan(pi/3.0)
assert_almost_equal(tnm1,tnmrl1,8)
def test_specialpoints(self):
assert_almost_equal(special.tandg(0), 0.0, 14)
assert_almost_equal(special.tandg(45), 1.0, 14)
assert_almost_equal(special.tandg(-45), -1.0, 14)
assert_almost_equal(special.tandg(135), -1.0, 14)
assert_almost_equal(special.tandg(-135), 1.0, 14)
assert_almost_equal(special.tandg(180), 0.0, 14)
assert_almost_equal(special.tandg(-180), 0.0, 14)
assert_almost_equal(special.tandg(225), 1.0, 14)
assert_almost_equal(special.tandg(-225), -1.0, 14)
assert_almost_equal(special.tandg(315), -1.0, 14)
assert_almost_equal(special.tandg(-315), 1.0, 14)
class TestEllip(object):
def test_ellipj_nan(self):
"""Regression test for #912."""
special.ellipj(0.5, np.nan)
def test_ellipj(self):
el = special.ellipj(0.2,0)
rel = [sin(0.2),cos(0.2),1.0,0.20]
assert_array_almost_equal(el,rel,13)
def test_ellipk(self):
elk = special.ellipk(.2)
assert_almost_equal(elk,1.659623598610528,11)
assert_equal(special.ellipkm1(0.0), np.inf)
assert_equal(special.ellipkm1(1.0), pi/2)
assert_equal(special.ellipkm1(np.inf), 0.0)
assert_equal(special.ellipkm1(np.nan), np.nan)
assert_equal(special.ellipkm1(-1), np.nan)
assert_allclose(special.ellipk(-10), 0.7908718902387385)
def test_ellipkinc(self):
elkinc = special.ellipkinc(pi/2,.2)
elk = special.ellipk(0.2)
assert_almost_equal(elkinc,elk,15)
alpha = 20*pi/180
phi = 45*pi/180
m = sin(alpha)**2
elkinc = special.ellipkinc(phi,m)
assert_almost_equal(elkinc,0.79398143,8)
# From pg. 614 of A & S
assert_equal(special.ellipkinc(pi/2, 0.0), pi/2)
assert_equal(special.ellipkinc(pi/2, 1.0), np.inf)
assert_equal(special.ellipkinc(pi/2, -np.inf), 0.0)
assert_equal(special.ellipkinc(pi/2, np.nan), np.nan)
assert_equal(special.ellipkinc(pi/2, 2), np.nan)
assert_equal(special.ellipkinc(0, 0.5), 0.0)
assert_equal(special.ellipkinc(np.inf, 0.5), np.inf)
assert_equal(special.ellipkinc(-np.inf, 0.5), -np.inf)
assert_equal(special.ellipkinc(np.inf, np.inf), np.nan)
assert_equal(special.ellipkinc(np.inf, -np.inf), np.nan)
assert_equal(special.ellipkinc(-np.inf, -np.inf), np.nan)
assert_equal(special.ellipkinc(-np.inf, np.inf), np.nan)
assert_equal(special.ellipkinc(np.nan, 0.5), np.nan)
assert_equal(special.ellipkinc(np.nan, np.nan), np.nan)
assert_allclose(special.ellipkinc(0.38974112035318718, 1), 0.4, rtol=1e-14)
assert_allclose(special.ellipkinc(1.5707, -10), 0.79084284661724946)
def test_ellipkinc_2(self):
# Regression test for gh-3550
# ellipkinc(phi, mbad) was NaN and mvals[2:6] were twice the correct value
mbad = 0.68359375000000011
phi = 0.9272952180016123
m = np.nextafter(mbad, 0)
mvals = []
for j in range(10):
mvals.append(m)
m = np.nextafter(m, 1)
f = special.ellipkinc(phi, mvals)
assert_array_almost_equal_nulp(f, np.full_like(f, 1.0259330100195334), 1)
# this bug also appears at phi + n * pi for at least small n
f1 = special.ellipkinc(phi + pi, mvals)
assert_array_almost_equal_nulp(f1, np.full_like(f1, 5.1296650500976675), 2)
def test_ellipkinc_singular(self):
# ellipkinc(phi, 1) has closed form and is finite only for phi in (-pi/2, pi/2)
xlog = np.logspace(-300, -17, 25)
xlin = np.linspace(1e-17, 0.1, 25)
xlin2 = np.linspace(0.1, pi/2, 25, endpoint=False)
        assert_allclose(special.ellipkinc(xlog, 1), np.arcsinh(np.tan(xlog)), rtol=1e-14)
        assert_allclose(special.ellipkinc(xlin, 1), np.arcsinh(np.tan(xlin)), rtol=1e-14)
        assert_allclose(special.ellipkinc(xlin2, 1), np.arcsinh(np.tan(xlin2)), rtol=1e-14)
        assert_equal(special.ellipkinc(np.pi/2, 1), np.inf)
        assert_allclose(special.ellipkinc(-xlog, 1), np.arcsinh(np.tan(-xlog)), rtol=1e-14)
        assert_allclose(special.ellipkinc(-xlin, 1), np.arcsinh(np.tan(-xlin)), rtol=1e-14)
        assert_allclose(special.ellipkinc(-xlin2, 1), np.arcsinh(np.tan(-xlin2)), rtol=1e-14)
assert_equal(special.ellipkinc(-np.pi/2, 1), np.inf)
def test_ellipe(self):
ele = special.ellipe(.2)
assert_almost_equal(ele,1.4890350580958529,8)
assert_equal(special.ellipe(0.0), pi/2)
assert_equal(special.ellipe(1.0), 1.0)
assert_equal(special.ellipe(-np.inf), np.inf)
assert_equal(special.ellipe(np.nan), np.nan)
assert_equal(special.ellipe(2), np.nan)
assert_allclose(special.ellipe(-10), 3.6391380384177689)
def test_ellipeinc(self):
eleinc = special.ellipeinc(pi/2,.2)
ele = special.ellipe(0.2)
assert_almost_equal(eleinc,ele,14)
# pg 617 of A & S
alpha, phi = 52*pi/180,35*pi/180
m = sin(alpha)**2
eleinc = special.ellipeinc(phi,m)
assert_almost_equal(eleinc, 0.58823065, 8)
assert_equal(special.ellipeinc(pi/2, 0.0), pi/2)
assert_equal(special.ellipeinc(pi/2, 1.0), 1.0)
assert_equal(special.ellipeinc(pi/2, -np.inf), np.inf)
assert_equal(special.ellipeinc(pi/2, np.nan), np.nan)
assert_equal(special.ellipeinc(pi/2, 2), np.nan)
assert_equal(special.ellipeinc(0, 0.5), 0.0)
assert_equal(special.ellipeinc(np.inf, 0.5), np.inf)
assert_equal(special.ellipeinc(-np.inf, 0.5), -np.inf)
assert_equal(special.ellipeinc(np.inf, -np.inf), np.inf)
assert_equal(special.ellipeinc(-np.inf, -np.inf), -np.inf)
assert_equal(special.ellipeinc(np.inf, np.inf), np.nan)
assert_equal(special.ellipeinc(-np.inf, np.inf), np.nan)
assert_equal(special.ellipeinc(np.nan, 0.5), np.nan)
assert_equal(special.ellipeinc(np.nan, np.nan), np.nan)
assert_allclose(special.ellipeinc(1.5707, -10), 3.6388185585822876)
def test_ellipeinc_2(self):
# Regression test for gh-3550
# ellipeinc(phi, mbad) was NaN and mvals[2:6] were twice the correct value
mbad = 0.68359375000000011
phi = 0.9272952180016123
m = np.nextafter(mbad, 0)
mvals = []
for j in range(10):
mvals.append(m)
m = np.nextafter(m, 1)
f = special.ellipeinc(phi, mvals)
assert_array_almost_equal_nulp(f, np.full_like(f, 0.84442884574781019), 2)
# this bug also appears at phi + n * pi for at least small n
f1 = special.ellipeinc(phi + pi, mvals)
assert_array_almost_equal_nulp(f1, np.full_like(f1, 3.3471442287390509), 4)
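The complete elliptic integrals tested above also satisfy the classical Legendre relation E(m)K(1-m) + E(1-m)K(m) - K(m)K(1-m) = pi/2; a small illustrative check of `ellipk` and `ellipe` against it (not part of the suite):

```python
# Legendre's relation ties ellipk and ellipe together across complementary
# parameters m and 1 - m (scipy parameterizes both functions by m).
import numpy as np
from scipy.special import ellipk, ellipe

m = np.linspace(0.05, 0.95, 10)
lhs = (ellipe(m) * ellipk(1 - m) + ellipe(1 - m) * ellipk(m)
       - ellipk(m) * ellipk(1 - m))
assert np.allclose(lhs, np.pi / 2, rtol=1e-10)
```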
class TestErf(object):
def test_erf(self):
er = special.erf(.25)
assert_almost_equal(er,0.2763263902,8)
def test_erf_zeros(self):
erz = special.erf_zeros(5)
erzr = array([1.45061616+1.88094300j,
2.24465928+2.61657514j,
2.83974105+3.17562810j,
3.33546074+3.64617438j,
3.76900557+4.06069723j])
assert_array_almost_equal(erz,erzr,4)
def _check_variant_func(self, func, other_func, rtol, atol=0):
np.random.seed(1234)
n = 10000
x = np.random.pareto(0.02, n) * (2*np.random.randint(0, 2, n) - 1)
y = np.random.pareto(0.02, n) * (2*np.random.randint(0, 2, n) - 1)
z = x + 1j*y
old_errors = np.seterr(all='ignore')
try:
w = other_func(z)
w_real = other_func(x).real
mask = np.isfinite(w)
w = w[mask]
z = z[mask]
mask = np.isfinite(w_real)
w_real = w_real[mask]
x = x[mask]
# test both real and complex variants
assert_func_equal(func, w, z, rtol=rtol, atol=atol)
assert_func_equal(func, w_real, x, rtol=rtol, atol=atol)
finally:
np.seterr(**old_errors)
def test_erfc_consistent(self):
self._check_variant_func(
cephes.erfc,
lambda z: 1 - cephes.erf(z),
rtol=1e-12,
atol=1e-14 # <- the test function loses precision
)
def test_erfcx_consistent(self):
self._check_variant_func(
cephes.erfcx,
lambda z: np.exp(z*z) * cephes.erfc(z),
rtol=1e-12
)
def test_erfi_consistent(self):
self._check_variant_func(
cephes.erfi,
lambda z: -1j * cephes.erf(1j*z),
rtol=1e-12
)
def test_dawsn_consistent(self):
self._check_variant_func(
cephes.dawsn,
lambda z: sqrt(pi)/2 * np.exp(-z*z) * cephes.erfi(z),
rtol=1e-12
)
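# Standalone sketch (added for illustration): the consistency checks above
# rest on exact identities; for instance the Dawson function satisfies
# D(x) = (sqrt(pi)/2) * exp(-x**2) * erfi(x). The point x = 1.2 is an
# arbitrary choice.

```python
import numpy as np
from scipy import special

x = 1.2  # arbitrary real test point
lhs = special.dawsn(x)
rhs = np.sqrt(np.pi) / 2 * np.exp(-x * x) * special.erfi(x)
assert abs(lhs - rhs) < 1e-12
```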
def test_erf_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, -1, 1]
assert_allclose(special.erf(vals), expected, rtol=1e-15)
def test_erfc_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, 2, 0]
assert_allclose(special.erfc(vals), expected, rtol=1e-15)
def test_erfcx_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, np.inf, 0]
assert_allclose(special.erfcx(vals), expected, rtol=1e-15)
def test_erfi_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, -np.inf, np.inf]
assert_allclose(special.erfi(vals), expected, rtol=1e-15)
def test_dawsn_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan, -0.0, 0.0]
assert_allclose(special.dawsn(vals), expected, rtol=1e-15)
def test_wofz_nan_inf(self):
vals = [np.nan, -np.inf, np.inf]
expected = [np.nan + np.nan * 1.j, 0.-0.j, 0.+0.j]
assert_allclose(special.wofz(vals), expected, rtol=1e-15)
class TestEuler(object):
def test_euler(self):
eu0 = special.euler(0)
eu1 = special.euler(1)
eu2 = special.euler(2) # just checking that this doesn't segfault
assert_allclose(eu0, [1], rtol=1e-15)
assert_allclose(eu1, [1, 0], rtol=1e-15)
assert_allclose(eu2, [1, 0, -1], rtol=1e-15)
eu24 = special.euler(24)
mathworld = [1,1,5,61,1385,50521,2702765,199360981,
19391512145,2404879675441,
370371188237525,69348874393137901,
15514534163557086905]
correct = zeros((25,),'d')
for k in range(0,13):
if (k % 2):
correct[2*k] = -float(mathworld[k])
else:
correct[2*k] = float(mathworld[k])
olderr = np.seterr(all='ignore')
try:
err = nan_to_num((eu24-correct)/correct)
errmax = max(err)
finally:
np.seterr(**olderr)
assert_almost_equal(errmax, 0.0, 14)
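# Standalone sketch (added for illustration): special.euler(n) returns the
# Euler numbers E_0 .. E_n; the odd-indexed ones vanish and the
# even-indexed ones alternate in sign, which is the structure the loop
# over the MathWorld values above encodes.

```python
from scipy import special

eu = special.euler(10)  # Euler numbers E_0 .. E_10
# odd-indexed Euler numbers are exactly zero
assert eu[1] == 0 and eu[3] == 0 and eu[9] == 0
# even-indexed values alternate in sign: 1, -1, 5, -61, 1385, -50521
assert abs(eu[2] + 1) < 1e-9
assert abs(eu[10] + 50521) < 1e-6
```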
class TestExp(object):
def test_exp2(self):
ex = special.exp2(2)
exrl = 2**2
assert_equal(ex,exrl)
def test_exp2more(self):
exm = special.exp2(2.5)
exmrl = 2**(2.5)
assert_almost_equal(exm,exmrl,8)
def test_exp10(self):
ex = special.exp10(2)
exrl = 10**2
assert_approx_equal(ex,exrl)
def test_exp10more(self):
exm = special.exp10(2.5)
exmrl = 10**(2.5)
assert_almost_equal(exm,exmrl,8)
def test_expm1(self):
ex = (special.expm1(2),special.expm1(3),special.expm1(4))
exrl = (exp(2)-1,exp(3)-1,exp(4)-1)
assert_array_almost_equal(ex,exrl,8)
def test_expm1more(self):
ex1 = (special.expm1(2),special.expm1(2.1),special.expm1(2.2))
exrl1 = (exp(2)-1,exp(2.1)-1,exp(2.2)-1)
assert_array_almost_equal(ex1,exrl1,8)
class TestFactorialFunctions(object):
def test_factorial(self):
# Some known values, float math
assert_array_almost_equal(special.factorial(0), 1)
assert_array_almost_equal(special.factorial(1), 1)
assert_array_almost_equal(special.factorial(2), 2)
assert_array_almost_equal([6., 24., 120.],
special.factorial([3, 4, 5], exact=False))
assert_array_almost_equal(special.factorial([[5, 3], [4, 3]]),
[[120, 6], [24, 6]])
# Some known values, integer math
assert_equal(special.factorial(0, exact=True), 1)
assert_equal(special.factorial(1, exact=True), 1)
assert_equal(special.factorial(2, exact=True), 2)
assert_equal(special.factorial(5, exact=True), 120)
assert_equal(special.factorial(15, exact=True), 1307674368000)
# ndarray shape is maintained
assert_equal(special.factorial([7, 4, 15, 10], exact=True),
[5040, 24, 1307674368000, 3628800])
assert_equal(special.factorial([[5, 3], [4, 3]], True),
[[120, 6], [24, 6]])
# object arrays
assert_equal(special.factorial(np.arange(-3, 22), True),
special.factorial(np.arange(-3, 22), False))
# int64 array
assert_equal(special.factorial(np.arange(-3, 15), True),
special.factorial(np.arange(-3, 15), False))
# int32 array
assert_equal(special.factorial(np.arange(-3, 5), True),
special.factorial(np.arange(-3, 5), False))
# Consistent output for n < 0
for exact in (True, False):
assert_array_equal(0, special.factorial(-3, exact))
assert_array_equal([1, 2, 0, 0],
special.factorial([1, 2, -5, -4], exact))
for n in range(0, 22):
# Compare all with math.factorial
correct = math.factorial(n)
assert_array_equal(correct, special.factorial(n, True))
assert_array_equal(correct, special.factorial([n], True)[0])
assert_allclose(float(correct), special.factorial(n, False))
assert_allclose(float(correct), special.factorial([n], False)[0])
# Compare exact=True vs False, scalar vs array
assert_array_equal(special.factorial(n, True),
special.factorial(n, False))
assert_array_equal(special.factorial([n], True),
special.factorial([n], False))
@pytest.mark.parametrize('x, exact', [
(1, True),
(1, False),
(np.array(1), True),
(np.array(1), False),
])
def test_factorial_0d_return_type(self, x, exact):
assert np.isscalar(special.factorial(x, exact=exact))
def test_factorial2(self):
assert_array_almost_equal([105., 384., 945.],
special.factorial2([7, 8, 9], exact=False))
assert_equal(special.factorial2(7, exact=True), 105)
def test_factorialk(self):
assert_equal(special.factorialk(5, 1, exact=True), 120)
assert_equal(special.factorialk(5, 3, exact=True), 10)
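# Standalone sketch (added for illustration): factorialk(n, k) is the
# multifactorial n * (n-k) * (n-2k) * ... down to the last positive term,
# so factorialk(5, 3) = 5*2 matches the assertion above, and factorial2
# is the k = 2 case. The inputs 10 and 9 below are arbitrary.

```python
from scipy import special

assert special.factorialk(10, 3, exact=True) == 10 * 7 * 4 * 1  # 280
assert special.factorial2(9, exact=True) == 9 * 7 * 5 * 3 * 1   # 945
```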
class TestFresnel(object):
def test_fresnel(self):
frs = array(special.fresnel(.5))
assert_array_almost_equal(frs,array([0.064732432859999287, 0.49234422587144644]),8)
def test_fresnel_inf1(self):
frs = special.fresnel(np.inf)
assert_equal(frs, (0.5, 0.5))
def test_fresnel_inf2(self):
frs = special.fresnel(-np.inf)
assert_equal(frs, (-0.5, -0.5))
# values from pg 329 Table 7.11 of A & S
# slightly corrected in 4th decimal place
def test_fresnel_zeros(self):
szo, czo = special.fresnel_zeros(5)
assert_array_almost_equal(szo,
array([2.0093+0.2885j,
2.8335+0.2443j,
3.4675+0.2185j,
4.0026+0.2009j,
4.4742+0.1877j]),3)
assert_array_almost_equal(czo,
array([1.7437+0.3057j,
2.6515+0.2529j,
3.3204+0.2240j,
3.8757+0.2047j,
4.3611+0.1907j]),3)
vals1 = special.fresnel(szo)[0]
vals2 = special.fresnel(czo)[1]
assert_array_almost_equal(vals1,0,14)
assert_array_almost_equal(vals2,0,14)
def test_fresnelc_zeros(self):
szo, czo = special.fresnel_zeros(6)
frc = special.fresnelc_zeros(6)
assert_array_almost_equal(frc,czo,12)
def test_fresnels_zeros(self):
szo, czo = special.fresnel_zeros(5)
frs = special.fresnels_zeros(5)
assert_array_almost_equal(frs,szo,12)
class TestGamma(object):
def test_gamma(self):
gam = special.gamma(5)
assert_equal(gam,24.0)
def test_gammaln(self):
gamln = special.gammaln(3)
lngam = log(special.gamma(3))
assert_almost_equal(gamln,lngam,8)
def test_gammainccinv(self):
gccinv = special.gammainccinv(.5,.5)
gcinv = special.gammaincinv(.5,.5)
assert_almost_equal(gccinv,gcinv,8)
@with_special_errors
def test_gammaincinv(self):
y = special.gammaincinv(.4,.4)
x = special.gammainc(.4,y)
assert_almost_equal(x,0.4,1)
y = special.gammainc(10, 0.05)
x = special.gammaincinv(10, 2.5715803516000736e-20)
assert_almost_equal(0.05, x, decimal=10)
assert_almost_equal(y, 2.5715803516000736e-20, decimal=10)
x = special.gammaincinv(50, 8.20754777388471303050299243573393e-18)
assert_almost_equal(11.0, x, decimal=10)
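# Standalone sketch (added for illustration): gammaincinv inverts the
# regularized lower incomplete gamma function in its second argument, so
# the round trip recovers the probability, as the test above checks. The
# values a = 2.5 and p = 0.3 are arbitrary.

```python
from scipy import special

a, p = 2.5, 0.3  # arbitrary shape parameter and probability
x = special.gammaincinv(a, p)
assert abs(special.gammainc(a, x) - p) < 1e-10
```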
@with_special_errors
def test_975(self):
# Regression test for ticket #975 -- switch point in algorithm
# check that things work OK at the point, immediately next floats
# around it, and a bit further away
pts = [0.25,
np.nextafter(0.25, 0), 0.25 - 1e-12,
np.nextafter(0.25, 1), 0.25 + 1e-12]
for xp in pts:
y = special.gammaincinv(.4, xp)
x = special.gammainc(0.4, y)
assert_allclose(x, xp, rtol=1e-12)
def test_rgamma(self):
rgam = special.rgamma(8)
rlgam = 1/special.gamma(8)
assert_almost_equal(rgam,rlgam,8)
def test_infinity(self):
assert_(np.isinf(special.gamma(-1)))
assert_equal(special.rgamma(-1), 0)
class TestHankel(object):
def test_negv1(self):
assert_almost_equal(special.hankel1(-3,2), -special.hankel1(3,2), 14)
def test_hankel1(self):
hank1 = special.hankel1(1,.1)
hankrl = (special.jv(1,.1) + special.yv(1,.1)*1j)
assert_almost_equal(hank1,hankrl,8)
def test_negv1e(self):
assert_almost_equal(special.hankel1e(-3,2), -special.hankel1e(3,2), 14)
def test_hankel1e(self):
hank1e = special.hankel1e(1,.1)
hankrle = special.hankel1(1,.1)*exp(-.1j)
assert_almost_equal(hank1e,hankrle,8)
def test_negv2(self):
assert_almost_equal(special.hankel2(-3,2), -special.hankel2(3,2), 14)
def test_hankel2(self):
hank2 = special.hankel2(1,.1)
hankrl2 = (special.jv(1,.1) - special.yv(1,.1)*1j)
assert_almost_equal(hank2,hankrl2,8)
def test_negv2e(self):
assert_almost_equal(special.hankel2e(-3,2), -special.hankel2e(3,2), 14)
def test_hankel2e(self):
hank2e = special.hankel2e(1,.1)
hankrl2e = special.hankel2(1,.1)*exp(.1j)
assert_almost_equal(hank2e,hankrl2e,8)
class TestHyper(object):
def test_h1vp(self):
h1 = special.h1vp(1,.1)
h1real = (special.jvp(1,.1) + special.yvp(1,.1)*1j)
assert_almost_equal(h1,h1real,8)
def test_h2vp(self):
h2 = special.h2vp(1,.1)
h2real = (special.jvp(1,.1) - special.yvp(1,.1)*1j)
assert_almost_equal(h2,h2real,8)
def test_hyp0f1(self):
# scalar input
assert_allclose(special.hyp0f1(2.5, 0.5), 1.21482702689997, rtol=1e-12)
assert_allclose(special.hyp0f1(2.5, 0), 1.0, rtol=1e-15)
# float input, expected values match mpmath
x = special.hyp0f1(3.0, [-1.5, -1, 0, 1, 1.5])
expected = np.array([0.58493659229143, 0.70566805723127, 1.0,
1.37789689539747, 1.60373685288480])
assert_allclose(x, expected, rtol=1e-12)
# complex input
x = special.hyp0f1(3.0, np.array([-1.5, -1, 0, 1, 1.5]) + 0.j)
assert_allclose(x, expected.astype(complex), rtol=1e-12)
# test broadcasting
x1 = [0.5, 1.5, 2.5]
x2 = [0, 1, 0.5]
x = special.hyp0f1(x1, x2)
expected = [1.0, 1.8134302039235093, 1.21482702689997]
assert_allclose(x, expected, rtol=1e-12)
x = special.hyp0f1(np.row_stack([x1] * 2), x2)
assert_allclose(x, np.row_stack([expected] * 2), rtol=1e-12)
assert_raises(ValueError, special.hyp0f1,
np.row_stack([x1] * 3), [0, 1])
def test_hyp0f1_gh5764(self):
# Just checks the point that failed; there's a more systematic
# test in test_mpmath
res = special.hyp0f1(0.8, 0.5 + 0.5*1J)
# The expected value was generated using mpmath
assert_almost_equal(res, 1.6139719776441115 + 1J*0.80893054061790665)
def test_hyp1f1(self):
hyp1 = special.hyp1f1(.1,.1,.3)
assert_almost_equal(hyp1, 1.3498588075760032,7)
# test contributed by Moritz Deger (2008-05-29)
# https://github.com/scipy/scipy/issues/1186 (Trac #659)
# reference data obtained from mathematica [ a, b, x, m(a,b,x)]:
# produced with test_hyp1f1.nb
ref_data = array([[-8.38132975e+00, -1.28436461e+01, -2.91081397e+01, 1.04178330e+04],
[2.91076882e+00, -6.35234333e+00, -1.27083993e+01, 6.68132725e+00],
[-1.42938258e+01, 1.80869131e-01, 1.90038728e+01, 1.01385897e+05],
[5.84069088e+00, 1.33187908e+01, 2.91290106e+01, 1.59469411e+08],
[-2.70433202e+01, -1.16274873e+01, -2.89582384e+01, 1.39900152e+24],
[4.26344966e+00, -2.32701773e+01, 1.91635759e+01, 6.13816915e+21],
[1.20514340e+01, -3.40260240e+00, 7.26832235e+00, 1.17696112e+13],
[2.77372955e+01, -1.99424687e+00, 3.61332246e+00, 3.07419615e+13],
[1.50310939e+01, -2.91198675e+01, -1.53581080e+01, -3.79166033e+02],
[1.43995827e+01, 9.84311196e+00, 1.93204553e+01, 2.55836264e+10],
[-4.08759686e+00, 1.34437025e+01, -1.42072843e+01, 1.70778449e+01],
[8.05595738e+00, -1.31019838e+01, 1.52180721e+01, 3.06233294e+21],
[1.81815804e+01, -1.42908793e+01, 9.57868793e+00, -2.84771348e+20],
[-2.49671396e+01, 1.25082843e+01, -1.71562286e+01, 2.36290426e+07],
[2.67277673e+01, 1.70315414e+01, 6.12701450e+00, 7.77917232e+03],
[2.49565476e+01, 2.91694684e+01, 6.29622660e+00, 2.35300027e+02],
[6.11924542e+00, -1.59943768e+00, 9.57009289e+00, 1.32906326e+11],
[-1.47863653e+01, 2.41691301e+01, -1.89981821e+01, 2.73064953e+03],
[2.24070483e+01, -2.93647433e+00, 8.19281432e+00, -6.42000372e+17],
[8.04042600e-01, 1.82710085e+01, -1.97814534e+01, 5.48372441e-01],
[1.39590390e+01, 1.97318686e+01, 2.37606635e+00, 5.51923681e+00],
[-4.66640483e+00, -2.00237930e+01, 7.40365095e+00, 4.50310752e+00],
[2.76821999e+01, -6.36563968e+00, 1.11533984e+01, -9.28725179e+23],
[-2.56764457e+01, 1.24544906e+00, 1.06407572e+01, 1.25922076e+01],
[3.20447808e+00, 1.30874383e+01, 2.26098014e+01, 2.03202059e+04],
[-1.24809647e+01, 4.15137113e+00, -2.92265700e+01, 2.39621411e+08],
[2.14778108e+01, -2.35162960e+00, -1.13758664e+01, 4.46882152e-01],
[-9.85469168e+00, -3.28157680e+00, 1.67447548e+01, -1.07342390e+07],
[1.08122310e+01, -2.47353236e+01, -1.15622349e+01, -2.91733796e+03],
[-2.67933347e+01, -3.39100709e+00, 2.56006986e+01, -5.29275382e+09],
[-8.60066776e+00, -8.02200924e+00, 1.07231926e+01, 1.33548320e+06],
[-1.01724238e-01, -1.18479709e+01, -2.55407104e+01, 1.55436570e+00],
[-3.93356771e+00, 2.11106818e+01, -2.57598485e+01, 2.13467840e+01],
[3.74750503e+00, 1.55687633e+01, -2.92841720e+01, 1.43873509e-02],
[6.99726781e+00, 2.69855571e+01, -1.63707771e+01, 3.08098673e-02],
[-2.31996011e+01, 3.47631054e+00, 9.75119815e-01, 1.79971073e-02],
[2.38951044e+01, -2.91460190e+01, -2.50774708e+00, 9.56934814e+00],
[1.52730825e+01, 5.77062507e+00, 1.21922003e+01, 1.32345307e+09],
[1.74673917e+01, 1.89723426e+01, 4.94903250e+00, 9.90859484e+01],
[1.88971241e+01, 2.86255413e+01, 5.52360109e-01, 1.44165360e+00],
[1.02002319e+01, -1.66855152e+01, -2.55426235e+01, 6.56481554e+02],
[-1.79474153e+01, 1.22210200e+01, -1.84058212e+01, 8.24041812e+05],
[-1.36147103e+01, 1.32365492e+00, -7.22375200e+00, 9.92446491e+05],
[7.57407832e+00, 2.59738234e+01, -1.34139168e+01, 3.64037761e-02],
[2.21110169e+00, 1.28012666e+01, 1.62529102e+01, 1.33433085e+02],
[-2.64297569e+01, -1.63176658e+01, -1.11642006e+01, -2.44797251e+13],
[-2.46622944e+01, -3.02147372e+00, 8.29159315e+00, -3.21799070e+05],
[-1.37215095e+01, -1.96680183e+01, 2.91940118e+01, 3.21457520e+12],
[-5.45566105e+00, 2.81292086e+01, 1.72548215e-01, 9.66973000e-01],
[-1.55751298e+00, -8.65703373e+00, 2.68622026e+01, -3.17190834e+16],
[2.45393609e+01, -2.70571903e+01, 1.96815505e+01, 1.80708004e+37],
[5.77482829e+00, 1.53203143e+01, 2.50534322e+01, 1.14304242e+06],
[-1.02626819e+01, 2.36887658e+01, -2.32152102e+01, 7.28965646e+02],
[-1.30833446e+00, -1.28310210e+01, 1.87275544e+01, -9.33487904e+12],
[5.83024676e+00, -1.49279672e+01, 2.44957538e+01, -7.61083070e+27],
[-2.03130747e+01, 2.59641715e+01, -2.06174328e+01, 4.54744859e+04],
[1.97684551e+01, -2.21410519e+01, -2.26728740e+01, 3.53113026e+06],
[2.73673444e+01, 2.64491725e+01, 1.57599882e+01, 1.07385118e+07],
[5.73287971e+00, 1.21111904e+01, 1.33080171e+01, 2.63220467e+03],
[-2.82751072e+01, 2.08605881e+01, 9.09838900e+00, -6.60957033e-07],
[1.87270691e+01, -1.74437016e+01, 1.52413599e+01, 6.59572851e+27],
[6.60681457e+00, -2.69449855e+00, 9.78972047e+00, -2.38587870e+12],
[1.20895561e+01, -2.51355765e+01, 2.30096101e+01, 7.58739886e+32],
[-2.44682278e+01, 2.10673441e+01, -1.36705538e+01, 4.54213550e+04],
[-4.50665152e+00, 3.72292059e+00, -4.83403707e+00, 2.68938214e+01],
[-7.46540049e+00, -1.08422222e+01, -1.72203805e+01, -2.09402162e+02],
[-2.00307551e+01, -7.50604431e+00, -2.78640020e+01, 4.15985444e+19],
[1.99890876e+01, 2.20677419e+01, -2.51301778e+01, 1.23840297e-09],
[2.03183823e+01, -7.66942559e+00, 2.10340070e+01, 1.46285095e+31],
[-2.90315825e+00, -2.55785967e+01, -9.58779316e+00, 2.65714264e-01],
[2.73960829e+01, -1.80097203e+01, -2.03070131e+00, 2.52908999e+02],
[-2.11708058e+01, -2.70304032e+01, 2.48257944e+01, 3.09027527e+08],
[2.21959758e+01, 4.00258675e+00, -1.62853977e+01, -9.16280090e-09],
[1.61661840e+01, -2.26845150e+01, 2.17226940e+01, -8.24774394e+33],
[-3.35030306e+00, 1.32670581e+00, 9.39711214e+00, -1.47303163e+01],
[7.23720726e+00, -2.29763909e+01, 2.34709682e+01, -9.20711735e+29],
[2.71013568e+01, 1.61951087e+01, -7.11388906e-01, 2.98750911e-01],
[8.40057933e+00, -7.49665220e+00, 2.95587388e+01, 6.59465635e+29],
[-1.51603423e+01, 1.94032322e+01, -7.60044357e+00, 1.05186941e+02],
[-8.83788031e+00, -2.72018313e+01, 1.88269907e+00, 1.81687019e+00],
[-1.87283712e+01, 5.87479570e+00, -1.91210203e+01, 2.52235612e+08],
[-5.61338513e-01, 2.69490237e+01, 1.16660111e-01, 9.97567783e-01],
[-5.44354025e+00, -1.26721408e+01, -4.66831036e+00, 1.06660735e-01],
[-2.18846497e+00, 2.33299566e+01, 9.62564397e+00, 3.03842061e-01],
[6.65661299e+00, -2.39048713e+01, 1.04191807e+01, 4.73700451e+13],
[-2.57298921e+01, -2.60811296e+01, 2.74398110e+01, -5.32566307e+11],
[-1.11431826e+01, -1.59420160e+01, -1.84880553e+01, -1.01514747e+02],
[6.50301931e+00, 2.59859051e+01, -2.33270137e+01, 1.22760500e-02],
[-1.94987891e+01, -2.62123262e+01, 3.90323225e+00, 1.71658894e+01],
[7.26164601e+00, -1.41469402e+01, 2.81499763e+01, -2.50068329e+31],
[-1.52424040e+01, 2.99719005e+01, -2.85753678e+01, 1.31906693e+04],
[5.24149291e+00, -1.72807223e+01, 2.22129493e+01, 2.50748475e+25],
[3.63207230e-01, -9.54120862e-02, -2.83874044e+01, 9.43854939e-01],
[-2.11326457e+00, -1.25707023e+01, 1.17172130e+00, 1.20812698e+00],
[2.48513582e+00, 1.03652647e+01, -1.84625148e+01, 6.47910997e-02],
[2.65395942e+01, 2.74794672e+01, 1.29413428e+01, 2.89306132e+05],
[-9.49445460e+00, 1.59930921e+01, -1.49596331e+01, 3.27574841e+02],
[-5.89173945e+00, 9.96742426e+00, 2.60318889e+01, -3.15842908e-01],
[-1.15387239e+01, -2.21433107e+01, -2.17686413e+01, 1.56724718e-01],
[-5.30592244e+00, -2.42752190e+01, 1.29734035e+00, 1.31985534e+00]])
for a,b,c,expected in ref_data:
result = special.hyp1f1(a,b,c)
assert_(abs(expected - result)/expected < 1e-4)
def test_hyp1f1_gh2957(self):
hyp1 = special.hyp1f1(0.5, 1.5, -709.7827128933)
hyp2 = special.hyp1f1(0.5, 1.5, -709.7827128934)
assert_almost_equal(hyp1, hyp2, 12)
def test_hyp1f1_gh2282(self):
hyp = special.hyp1f1(0.5, 1.5, -1000)
assert_almost_equal(hyp, 0.028024956081989643, 12)
def test_hyp2f1(self):
# a collection of special cases taken from AMS 55
values = [[0.5, 1, 1.5, 0.2**2, 0.5/0.2*log((1+0.2)/(1-0.2))],
[0.5, 1, 1.5, -0.2**2, 1./0.2*arctan(0.2)],
[1, 1, 2, 0.2, -1/0.2*log(1-0.2)],
[3, 3.5, 1.5, 0.2**2,
0.5/0.2/(-5)*((1+0.2)**(-5)-(1-0.2)**(-5))],
[-3, 3, 0.5, sin(0.2)**2, cos(2*3*0.2)],
[3, 4, 8, 1, special.gamma(8)*special.gamma(8-4-3)/special.gamma(8-3)/special.gamma(8-4)],
[3, 2, 3-2+1, -1, 1./2**3*sqrt(pi) *
special.gamma(1+3-2)/special.gamma(1+0.5*3-2)/special.gamma(0.5+0.5*3)],
[5, 2, 5-2+1, -1, 1./2**5*sqrt(pi) *
special.gamma(1+5-2)/special.gamma(1+0.5*5-2)/special.gamma(0.5+0.5*5)],
[4, 0.5+4, 1.5-2*4, -1./3, (8./9)**(-2*4)*special.gamma(4./3) *
special.gamma(1.5-2*4)/special.gamma(3./2)/special.gamma(4./3-2*4)],
# and some others
# ticket #424
[1.5, -0.5, 1.0, -10.0, 4.1300097765277476484],
# negative integer a or b, with c-a-b integer and x > 0.9
[-2,3,1,0.95,0.715],
[2,-3,1,0.95,-0.007],
[-6,3,1,0.95,0.0000810625],
[2,-5,1,0.95,-0.000029375],
# huge negative integers
(10, -900, 10.5, 0.99, 1.91853705796607664803709475658e-24),
(10, -900, -10.5, 0.99, 3.54279200040355710199058559155e-18),
]
for i, (a, b, c, x, v) in enumerate(values):
cv = special.hyp2f1(a, b, c, x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
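# Standalone sketch (added for illustration): the x = 1 entry in the table
# above is Gauss's summation theorem, 2F1(a, b; c; 1) =
# Gamma(c)*Gamma(c-a-b) / (Gamma(c-a)*Gamma(c-b)), valid for c - a - b > 0.
# The parameters below are an arbitrary choice satisfying that condition.

```python
from scipy import special

a, b, c = 1.5, 0.7, 3.1  # arbitrary, with c - a - b = 0.9 > 0
lhs = special.hyp2f1(a, b, c, 1.0)
rhs = (special.gamma(c) * special.gamma(c - a - b)
       / (special.gamma(c - a) * special.gamma(c - b)))
assert abs(lhs - rhs) / abs(rhs) < 1e-8
```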
def test_hyperu(self):
val1 = special.hyperu(1,0.1,100)
assert_almost_equal(val1,0.0098153,7)
a,b = [0.3,0.6,1.2,-2.7],[1.5,3.2,-0.4,-3.2]
a,b = asarray(a), asarray(b)
z = 0.5
hypu = special.hyperu(a,b,z)
hprl = (pi/sin(pi*b))*(special.hyp1f1(a,b,z) /
(special.gamma(1+a-b)*special.gamma(b)) -
z**(1-b)*special.hyp1f1(1+a-b,2-b,z)
/ (special.gamma(a)*special.gamma(2-b)))
assert_array_almost_equal(hypu,hprl,12)
def test_hyperu_gh2287(self):
assert_almost_equal(special.hyperu(1, 1.5, 20.2),
0.048360918656699191, 12)
class TestBessel(object):
def test_itj0y0(self):
it0 = array(special.itj0y0(.2))
assert_array_almost_equal(it0,array([0.19933433254006822, -0.34570883800412566]),8)
def test_it2j0y0(self):
it2 = array(special.it2j0y0(.2))
assert_array_almost_equal(it2,array([0.0049937546274601858, -0.43423067011231614]),8)
def test_negv_iv(self):
assert_equal(special.iv(3,2), special.iv(-3,2))
def test_j0(self):
oz = special.j0(.1)
ozr = special.jn(0,.1)
assert_almost_equal(oz,ozr,8)
def test_j1(self):
o1 = special.j1(.1)
o1r = special.jn(1,.1)
assert_almost_equal(o1,o1r,8)
def test_jn(self):
jnnr = special.jn(1,.2)
assert_almost_equal(jnnr,0.099500832639235995,8)
def test_negv_jv(self):
assert_almost_equal(special.jv(-3,2), -special.jv(3,2), 14)
def test_jv(self):
values = [[0, 0.1, 0.99750156206604002],
[2./3, 1e-8, 0.3239028506761532e-5],
[2./3, 1e-10, 0.1503423854873779e-6],
[3.1, 1e-10, 0.1711956265409013e-32],
[2./3, 4.0, -0.2325440850267039],
]
for i, (v, x, y) in enumerate(values):
yc = special.jv(v, x)
assert_almost_equal(yc, y, 8, err_msg='test #%d' % i)
def test_negv_jve(self):
assert_almost_equal(special.jve(-3,2), -special.jve(3,2), 14)
def test_jve(self):
jvexp = special.jve(1,.2)
assert_almost_equal(jvexp,0.099500832639235995,8)
jvexp1 = special.jve(1,.2+1j)
z = .2+1j
jvexpr = special.jv(1,z)*exp(-abs(z.imag))
assert_almost_equal(jvexp1,jvexpr,8)
def test_jn_zeros(self):
jn0 = special.jn_zeros(0,5)
jn1 = special.jn_zeros(1,5)
assert_array_almost_equal(jn0,array([2.4048255577,
5.5200781103,
8.6537279129,
11.7915344391,
14.9309177086]),4)
assert_array_almost_equal(jn1,array([3.83171,
7.01559,
10.17347,
13.32369,
16.47063]),4)
jn102 = special.jn_zeros(102,5)
assert_allclose(jn102, array([110.89174935992040343,
117.83464175788308398,
123.70194191713507279,
129.02417238949092824,
134.00114761868422559]), rtol=1e-13)
jn301 = special.jn_zeros(301,5)
assert_allclose(jn301, array([313.59097866698830153,
323.21549776096288280,
331.22338738656748796,
338.39676338872084500,
345.03284233056064157]), rtol=1e-13)
def test_jn_zeros_slow(self):
jn0 = special.jn_zeros(0, 300)
assert_allclose(jn0[260-1], 816.02884495068867280, rtol=1e-13)
assert_allclose(jn0[280-1], 878.86068707124422606, rtol=1e-13)
assert_allclose(jn0[300-1], 941.69253065317954064, rtol=1e-13)
jn10 = special.jn_zeros(10, 300)
assert_allclose(jn10[260-1], 831.67668514305631151, rtol=1e-13)
assert_allclose(jn10[280-1], 894.51275095371316931, rtol=1e-13)
assert_allclose(jn10[300-1], 957.34826370866539775, rtol=1e-13)
jn3010 = special.jn_zeros(3010,5)
assert_allclose(jn3010, array([3036.86590780927,
3057.06598526482,
3073.66360690272,
3088.37736494778,
3101.86438139042]), rtol=1e-8)
def test_jnjnp_zeros(self):
jn = special.jn
def jnp(n, x):
return (jn(n-1,x) - jn(n+1,x))/2
for nt in range(1, 30):
z, n, m, t = special.jnjnp_zeros(nt)
for zz, nn, tt in zip(z, n, t):
if tt == 0:
assert_allclose(jn(nn, zz), 0, atol=1e-6)
elif tt == 1:
assert_allclose(jnp(nn, zz), 0, atol=1e-6)
else:
raise AssertionError("Invalid t return for nt=%d" % nt)
def test_jnp_zeros(self):
jnp = special.jnp_zeros(1,5)
assert_array_almost_equal(jnp, array([1.84118,
5.33144,
8.53632,
11.70600,
14.86359]),4)
jnp = special.jnp_zeros(443,5)
assert_allclose(special.jvp(443, jnp), 0, atol=1e-15)
def test_jnyn_zeros(self):
jnz = special.jnyn_zeros(1,5)
assert_array_almost_equal(jnz,(array([3.83171,
7.01559,
10.17347,
13.32369,
16.47063]),
array([1.84118,
5.33144,
8.53632,
11.70600,
14.86359]),
array([2.19714,
5.42968,
8.59601,
11.74915,
14.89744]),
array([3.68302,
6.94150,
10.12340,
13.28576,
16.44006])),5)
def test_jvp(self):
jvprim = special.jvp(2,2)
jv0 = (special.jv(1,2)-special.jv(3,2))/2
assert_almost_equal(jvprim,jv0,10)
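# Standalone sketch (added for illustration): the derivative identity used
# above, Jv'(z) = (J_{v-1}(z) - J_{v+1}(z)) / 2, holds for non-integer
# orders as well; v = 2.5 and z = 3.0 are arbitrary choices.

```python
from scipy import special

v, z = 2.5, 3.0  # arbitrary order and argument
lhs = special.jvp(v, z)
rhs = (special.jv(v - 1, z) - special.jv(v + 1, z)) / 2
assert abs(lhs - rhs) < 1e-12
```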
def test_k0(self):
ozk = special.k0(.1)
ozkr = special.kv(0,.1)
assert_almost_equal(ozk,ozkr,8)
def test_k0e(self):
ozke = special.k0e(.1)
ozker = special.kve(0,.1)
assert_almost_equal(ozke,ozker,8)
def test_k1(self):
o1k = special.k1(.1)
o1kr = special.kv(1,.1)
assert_almost_equal(o1k,o1kr,8)
def test_k1e(self):
o1ke = special.k1e(.1)
o1ker = special.kve(1,.1)
assert_almost_equal(o1ke,o1ker,8)
def test_jacobi(self):
a = 5*np.random.random() - 1
b = 5*np.random.random() - 1
P0 = special.jacobi(0,a,b)
P1 = special.jacobi(1,a,b)
P2 = special.jacobi(2,a,b)
P3 = special.jacobi(3,a,b)
assert_array_almost_equal(P0.c,[1],13)
assert_array_almost_equal(P1.c,array([a+b+2,a-b])/2.0,13)
cp = [(a+b+3)*(a+b+4), 4*(a+b+3)*(a+2), 4*(a+1)*(a+2)]
p2c = [cp[0],cp[1]-2*cp[0],cp[2]-cp[1]+cp[0]]
assert_array_almost_equal(P2.c,array(p2c)/8.0,13)
cp = [(a+b+4)*(a+b+5)*(a+b+6),6*(a+b+4)*(a+b+5)*(a+3),
12*(a+b+4)*(a+2)*(a+3),8*(a+1)*(a+2)*(a+3)]
p3c = [cp[0],cp[1]-3*cp[0],cp[2]-2*cp[1]+3*cp[0],cp[3]-cp[2]+cp[1]-cp[0]]
assert_array_almost_equal(P3.c,array(p3c)/48.0,13)
def test_kn(self):
kn1 = special.kn(0,.2)
assert_almost_equal(kn1,1.7527038555281462,8)
def test_negv_kv(self):
assert_equal(special.kv(3.0, 2.2), special.kv(-3.0, 2.2))
def test_kv0(self):
kv0 = special.kv(0,.2)
assert_almost_equal(kv0, 1.7527038555281462, 10)
def test_kv1(self):
kv1 = special.kv(1,0.2)
assert_almost_equal(kv1, 4.775972543220472, 10)
def test_kv2(self):
kv2 = special.kv(2,0.2)
assert_almost_equal(kv2, 49.51242928773287, 10)
def test_kn_largeorder(self):
assert_allclose(special.kn(32, 1), 1.7516596664574289e+43)
def test_kv_largearg(self):
assert_equal(special.kv(0, 1e19), 0)
def test_negv_kve(self):
assert_equal(special.kve(3.0, 2.2), special.kve(-3.0, 2.2))
def test_kve(self):
kve1 = special.kve(0,.2)
kv1 = special.kv(0,.2)*exp(.2)
assert_almost_equal(kve1,kv1,8)
z = .2+1j
kve2 = special.kve(0,z)
kv2 = special.kv(0,z)*exp(z)
assert_almost_equal(kve2,kv2,8)
def test_kvp_v0n1(self):
z = 2.2
assert_almost_equal(-special.kv(1,z), special.kvp(0,z, n=1), 10)
def test_kvp_n1(self):
v = 3.
z = 2.2
xc = -special.kv(v+1,z) + v/z*special.kv(v,z)
x = special.kvp(v,z, n=1)
assert_almost_equal(xc, x, 10)
def test_kvp_n2(self):
v = 3.
z = 2.2
xc = (z**2+v**2-v)/z**2 * special.kv(v,z) + special.kv(v+1,z)/z
x = special.kvp(v, z, n=2)
assert_almost_equal(xc, x, 10)
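# Standalone sketch (added for illustration): the first derivative of the
# modified Bessel function K can equivalently be written as
# Kv'(z) = -(K_{v-1}(z) + K_{v+1}(z)) / 2, an alternative to the
# recurrence used in test_kvp_n1. The values v = 3, z = 2.2 match the
# tests above.

```python
from scipy import special

v, z = 3.0, 2.2
lhs = special.kvp(v, z, n=1)
rhs = -(special.kv(v - 1, z) + special.kv(v + 1, z)) / 2
assert abs(lhs - rhs) < 1e-10
```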
def test_y0(self):
oz = special.y0(.1)
ozr = special.yn(0,.1)
assert_almost_equal(oz,ozr,8)
def test_y1(self):
o1 = special.y1(.1)
o1r = special.yn(1,.1)
assert_almost_equal(o1,o1r,8)
def test_y0_zeros(self):
yo,ypo = special.y0_zeros(2)
zo,zpo = special.y0_zeros(2,complex=1)
all = r_[yo,zo]
allval = r_[ypo,zpo]
assert_array_almost_equal(abs(special.yv(0.0,all)),0.0,11)
assert_array_almost_equal(abs(special.yv(1,all)-allval),0.0,11)
def test_y1_zeros(self):
y1 = special.y1_zeros(1)
assert_array_almost_equal(y1,(array([2.19714]),array([0.52079])),5)
def test_y1p_zeros(self):
y1p = special.y1p_zeros(1,complex=1)
assert_array_almost_equal(y1p,(array([0.5768+0.904j]), array([-0.7635+0.5892j])),3)
def test_yn_zeros(self):
an = special.yn_zeros(4,2)
assert_array_almost_equal(an,array([5.64515, 9.36162]),5)
an = special.yn_zeros(443,5)
assert_allclose(an, [450.13573091578090314, 463.05692376675001542,
472.80651546418663566, 481.27353184725625838,
488.98055964441374646], rtol=1e-15)
def test_ynp_zeros(self):
ao = special.ynp_zeros(0,2)
assert_array_almost_equal(ao,array([2.19714133, 5.42968104]),6)
ao = special.ynp_zeros(43,5)
assert_allclose(special.yvp(43, ao), 0, atol=1e-15)
ao = special.ynp_zeros(443,5)
assert_allclose(special.yvp(443, ao), 0, atol=1e-9)
def test_ynp_zeros_large_order(self):
ao = special.ynp_zeros(443,5)
assert_allclose(special.yvp(443, ao), 0, atol=1e-14)
def test_yn(self):
yn2n = special.yn(1,.2)
assert_almost_equal(yn2n,-3.3238249881118471,8)
def test_negv_yv(self):
assert_almost_equal(special.yv(-3,2), -special.yv(3,2), 14)
def test_yv(self):
yv2 = special.yv(1,.2)
assert_almost_equal(yv2,-3.3238249881118471,8)
def test_negv_yve(self):
assert_almost_equal(special.yve(-3,2), -special.yve(3,2), 14)
def test_yve(self):
yve2 = special.yve(1,.2)
assert_almost_equal(yve2,-3.3238249881118471,8)
yve2r = special.yv(1,.2+1j)*exp(-1)
yve22 = special.yve(1,.2+1j)
assert_almost_equal(yve22,yve2r,8)
def test_yvp(self):
yvpr = (special.yv(1,.2) - special.yv(3,.2))/2.0
yvp1 = special.yvp(2,.2)
assert_array_almost_equal(yvp1,yvpr,10)
def _cephes_vs_amos_points(self):
"""Yield points at which to compare Cephes implementation to AMOS"""
# check several points, including large-amplitude ones
for v in [-120, -100.3, -20., -10., -1., -.5,
0., 1., 12.49, 120., 301]:
for z in [-1300, -11, -10, -1, 1., 10., 200.5, 401., 600.5,
700.6, 1300, 10003]:
yield v, z
# check half-integers; these are problematic points at least
# for cephes/iv
for v in 0.5 + arange(-60, 60):
yield v, 3.5
def check_cephes_vs_amos(self, f1, f2, rtol=1e-11, atol=0, skip=None):
for v, z in self._cephes_vs_amos_points():
if skip is not None and skip(v, z):
continue
c1, c2, c3 = f1(v, z), f1(v,z+0j), f2(int(v), z)
if np.isinf(c1):
assert_(np.abs(c2) >= 1e300, (v, z))
elif np.isnan(c1):
assert_(c2.imag != 0, (v, z))
else:
assert_allclose(c1, c2, err_msg=(v, z), rtol=rtol, atol=atol)
if v == int(v):
assert_allclose(c3, c2, err_msg=(v, z),
rtol=rtol, atol=atol)
@pytest.mark.xfail(platform.machine() == 'ppc64le',
reason="fails on ppc64le")
def test_jv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.jv, special.jn, rtol=1e-10, atol=1e-305)
@pytest.mark.xfail(platform.machine() == 'ppc64le',
reason="fails on ppc64le")
def test_yv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.yv, special.yn, rtol=1e-11, atol=1e-305)
def test_yv_cephes_vs_amos_only_small_orders(self):
skipper = lambda v, z: (abs(v) > 50)
self.check_cephes_vs_amos(special.yv, special.yn, rtol=1e-11, atol=1e-305, skip=skipper)
def test_iv_cephes_vs_amos(self):
olderr = np.seterr(all='ignore')
try:
self.check_cephes_vs_amos(special.iv, special.iv, rtol=5e-9, atol=1e-305)
finally:
np.seterr(**olderr)
@pytest.mark.slow
def test_iv_cephes_vs_amos_mass_test(self):
N = 1000000
np.random.seed(1)
v = np.random.pareto(0.5, N) * (-1)**np.random.randint(2, size=N)
x = np.random.pareto(0.2, N) * (-1)**np.random.randint(2, size=N)
imsk = (np.random.randint(8, size=N) == 0)
v[imsk] = v[imsk].astype(int)
old_err = np.seterr(all='ignore')
try:
c1 = special.iv(v, x)
c2 = special.iv(v, x+0j)
# deal with differences in the inf and zero cutoffs
c1[abs(c1) > 1e300] = np.inf
c2[abs(c2) > 1e300] = np.inf
c1[abs(c1) < 1e-300] = 0
c2[abs(c2) < 1e-300] = 0
dc = abs(c1/c2 - 1)
dc[np.isnan(dc)] = 0
finally:
np.seterr(**old_err)
k = np.argmax(dc)
# Most error apparently comes from AMOS and not our implementation;
# there are some problems near integer orders there
assert_(dc[k] < 2e-7, (v[k], x[k], special.iv(v[k], x[k]), special.iv(v[k], x[k]+0j)))
def test_kv_cephes_vs_amos(self):
self.check_cephes_vs_amos(special.kv, special.kn, rtol=1e-9, atol=1e-305)
self.check_cephes_vs_amos(special.kv, special.kv, rtol=1e-9, atol=1e-305)
def test_ticket_623(self):
assert_allclose(special.jv(3, 4), 0.43017147387562193)
assert_allclose(special.jv(301, 1300), 0.0183487151115275)
assert_allclose(special.jv(301, 1296.0682), -0.0224174325312048)
def test_ticket_853(self):
"""Negative-order Bessels"""
# cephes
assert_allclose(special.jv(-1, 1), -0.4400505857449335)
assert_allclose(special.jv(-2, 1), 0.1149034849319005)
assert_allclose(special.yv(-1, 1), 0.7812128213002887)
assert_allclose(special.yv(-2, 1), -1.650682606816255)
assert_allclose(special.iv(-1, 1), 0.5651591039924851)
assert_allclose(special.iv(-2, 1), 0.1357476697670383)
assert_allclose(special.kv(-1, 1), 0.6019072301972347)
assert_allclose(special.kv(-2, 1), 1.624838898635178)
assert_allclose(special.jv(-0.5, 1), 0.43109886801837607952)
assert_allclose(special.yv(-0.5, 1), 0.6713967071418031)
assert_allclose(special.iv(-0.5, 1), 1.231200214592967)
assert_allclose(special.kv(-0.5, 1), 0.4610685044478945)
# amos
assert_allclose(special.jv(-1, 1+0j), -0.4400505857449335)
assert_allclose(special.jv(-2, 1+0j), 0.1149034849319005)
assert_allclose(special.yv(-1, 1+0j), 0.7812128213002887)
assert_allclose(special.yv(-2, 1+0j), -1.650682606816255)
assert_allclose(special.iv(-1, 1+0j), 0.5651591039924851)
assert_allclose(special.iv(-2, 1+0j), 0.1357476697670383)
assert_allclose(special.kv(-1, 1+0j), 0.6019072301972347)
assert_allclose(special.kv(-2, 1+0j), 1.624838898635178)
assert_allclose(special.jv(-0.5, 1+0j), 0.43109886801837607952)
assert_allclose(special.jv(-0.5, 1+1j), 0.2628946385649065-0.827050182040562j)
assert_allclose(special.yv(-0.5, 1+0j), 0.6713967071418031)
assert_allclose(special.yv(-0.5, 1+1j), 0.967901282890131+0.0602046062142816j)
assert_allclose(special.iv(-0.5, 1+0j), 1.231200214592967)
assert_allclose(special.iv(-0.5, 1+1j), 0.77070737376928+0.39891821043561j)
assert_allclose(special.kv(-0.5, 1+0j), 0.4610685044478945)
assert_allclose(special.kv(-0.5, 1+1j), 0.06868578341999-0.38157825981268j)
assert_allclose(special.jve(-0.5,1+0.3j), special.jv(-0.5, 1+0.3j)*exp(-0.3))
assert_allclose(special.yve(-0.5,1+0.3j), special.yv(-0.5, 1+0.3j)*exp(-0.3))
assert_allclose(special.ive(-0.5,0.3+1j), special.iv(-0.5, 0.3+1j)*exp(-0.3))
assert_allclose(special.kve(-0.5,0.3+1j), special.kv(-0.5, 0.3+1j)*exp(0.3+1j))
assert_allclose(special.hankel1(-0.5, 1+1j), special.jv(-0.5, 1+1j) + 1j*special.yv(-0.5,1+1j))
assert_allclose(special.hankel2(-0.5, 1+1j), special.jv(-0.5, 1+1j) - 1j*special.yv(-0.5,1+1j))
def test_ticket_854(self):
"""Real-valued Bessel domains"""
assert_(isnan(special.jv(0.5, -1)))
assert_(isnan(special.iv(0.5, -1)))
assert_(isnan(special.yv(0.5, -1)))
assert_(isnan(special.yv(1, -1)))
assert_(isnan(special.kv(0.5, -1)))
assert_(isnan(special.kv(1, -1)))
assert_(isnan(special.jve(0.5, -1)))
assert_(isnan(special.ive(0.5, -1)))
assert_(isnan(special.yve(0.5, -1)))
assert_(isnan(special.yve(1, -1)))
assert_(isnan(special.kve(0.5, -1)))
assert_(isnan(special.kve(1, -1)))
assert_(isnan(special.airye(-1)[0:2]).all(), special.airye(-1))
assert_(not isnan(special.airye(-1)[2:4]).any(), special.airye(-1))
def test_gh_7909(self):
assert_(special.kv(1.5, 0) == np.inf)
assert_(special.kve(1.5, 0) == np.inf)
def test_ticket_503(self):
"""Real-valued Bessel I overflow"""
assert_allclose(special.iv(1, 700), 1.528500390233901e302)
assert_allclose(special.iv(1000, 1120), 1.301564549405821e301)
def test_iv_hyperg_poles(self):
assert_allclose(special.iv(-0.5, 1), 1.231200214592967)
def iv_series(self, v, z, n=200):
k = arange(0, n).astype(float_)
r = (v+2*k)*log(.5*z) - special.gammaln(k+1) - special.gammaln(v+k+1)
r[isnan(r)] = inf
r = exp(r)
err = abs(r).max() * finfo(float_).eps * n + abs(r[-1])*10
return r.sum(), err
def test_i0_series(self):
for z in [1., 10., 200.5]:
value, err = self.iv_series(0, z)
assert_allclose(special.i0(z), value, atol=err, err_msg=z)
def test_i1_series(self):
for z in [1., 10., 200.5]:
value, err = self.iv_series(1, z)
assert_allclose(special.i1(z), value, atol=err, err_msg=z)
def test_iv_series(self):
for v in [-20., -10., -1., 0., 1., 12.49, 120.]:
for z in [1., 10., 200.5, -1+2j]:
value, err = self.iv_series(v, z)
assert_allclose(special.iv(v, z), value, atol=err, err_msg=(v, z))
def test_i0(self):
values = [[0.0, 1.0],
[1e-10, 1.0],
[0.1, 0.9071009258],
[0.5, 0.6450352706],
[1.0, 0.4657596077],
[2.5, 0.2700464416],
[5.0, 0.1835408126],
[20.0, 0.0897803119],
]
for i, (x, v) in enumerate(values):
cv = special.i0(x) * exp(-x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_i0e(self):
oize = special.i0e(.1)
oizer = special.ive(0,.1)
assert_almost_equal(oize,oizer,8)
def test_i1(self):
values = [[0.0, 0.0],
[1e-10, 0.4999999999500000e-10],
[0.1, 0.0452984468],
[0.5, 0.1564208032],
[1.0, 0.2079104154],
[5.0, 0.1639722669],
[20.0, 0.0875062222],
]
for i, (x, v) in enumerate(values):
cv = special.i1(x) * exp(-x)
assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
def test_i1e(self):
oi1e = special.i1e(.1)
oi1er = special.ive(1,.1)
assert_almost_equal(oi1e,oi1er,8)
def test_iti0k0(self):
iti0 = array(special.iti0k0(5))
assert_array_almost_equal(iti0,array([31.848667776169801, 1.5673873907283657]),5)
def test_it2i0k0(self):
it2k = special.it2i0k0(.1)
assert_array_almost_equal(it2k,array([0.0012503906973464409, 3.3309450354686687]),6)
def test_iv(self):
iv1 = special.iv(0,.1)*exp(-.1)
assert_almost_equal(iv1,0.90710092578230106,10)
def test_negv_ive(self):
assert_equal(special.ive(3,2), special.ive(-3,2))
def test_ive(self):
ive1 = special.ive(0,.1)
iv1 = special.iv(0,.1)*exp(-.1)
assert_almost_equal(ive1,iv1,10)
def test_ivp0(self):
assert_almost_equal(special.iv(1,2), special.ivp(0,2), 10)
def test_ivp(self):
y = (special.iv(0,2) + special.iv(2,2))/2
x = special.ivp(1,2)
assert_almost_equal(x,y,10)
class TestLaguerre(object):
def test_laguerre(self):
lag0 = special.laguerre(0)
lag1 = special.laguerre(1)
lag2 = special.laguerre(2)
lag3 = special.laguerre(3)
lag4 = special.laguerre(4)
lag5 = special.laguerre(5)
assert_array_almost_equal(lag0.c,[1],13)
assert_array_almost_equal(lag1.c,[-1,1],13)
assert_array_almost_equal(lag2.c,array([1,-4,2])/2.0,13)
assert_array_almost_equal(lag3.c,array([-1,9,-18,6])/6.0,13)
assert_array_almost_equal(lag4.c,array([1,-16,72,-96,24])/24.0,13)
assert_array_almost_equal(lag5.c,array([-1,25,-200,600,-600,120])/120.0,13)
def test_genlaguerre(self):
k = 5*np.random.random() - 0.9
lag0 = special.genlaguerre(0,k)
lag1 = special.genlaguerre(1,k)
lag2 = special.genlaguerre(2,k)
lag3 = special.genlaguerre(3,k)
assert_equal(lag0.c,[1])
assert_equal(lag1.c,[-1,k+1])
assert_almost_equal(lag2.c,array([1,-2*(k+2),(k+1.)*(k+2.)])/2.0)
assert_almost_equal(lag3.c,array([-1,3*(k+3),-3*(k+2)*(k+3),(k+1)*(k+2)*(k+3)])/6.0)
# Base polynomials come from Abramowitz and Stegun
class TestLegendre(object):
def test_legendre(self):
leg0 = special.legendre(0)
leg1 = special.legendre(1)
leg2 = special.legendre(2)
leg3 = special.legendre(3)
leg4 = special.legendre(4)
leg5 = special.legendre(5)
assert_equal(leg0.c, [1])
assert_equal(leg1.c, [1,0])
assert_almost_equal(leg2.c, array([3,0,-1])/2.0, decimal=13)
assert_almost_equal(leg3.c, array([5,0,-3,0])/2.0)
assert_almost_equal(leg4.c, array([35,0,-30,0,3])/8.0)
assert_almost_equal(leg5.c, array([63,0,-70,0,15,0])/8.0)
class TestLambda(object):
def test_lmbda(self):
lam = special.lmbda(1,.1)
lamr = (array([special.jn(0,.1), 2*special.jn(1,.1)/.1]),
array([special.jvp(0,.1), -2*special.jv(1,.1)/.01 + 2*special.jvp(1,.1)/.1]))
assert_array_almost_equal(lam,lamr,8)
class TestLog1p(object):
def test_log1p(self):
l1p = (special.log1p(10), special.log1p(11), special.log1p(12))
l1prl = (log(11), log(12), log(13))
assert_array_almost_equal(l1p,l1prl,8)
def test_log1pmore(self):
l1pm = (special.log1p(1), special.log1p(1.1), special.log1p(1.2))
l1pmrl = (log(2),log(2.1),log(2.2))
assert_array_almost_equal(l1pm,l1pmrl,8)
class TestLegendreFunctions(object):
def test_clpmn(self):
z = 0.5+0.3j
clp = special.clpmn(2, 2, z, 3)
assert_array_almost_equal(clp,
(array([[1.0000, z, 0.5*(3*z*z-1)],
[0.0000, sqrt(z*z-1), 3*z*sqrt(z*z-1)],
[0.0000, 0.0000, 3*(z*z-1)]]),
array([[0.0000, 1.0000, 3*z],
[0.0000, z/sqrt(z*z-1), 3*(2*z*z-1)/sqrt(z*z-1)],
[0.0000, 0.0000, 6*z]])),
7)
def test_clpmn_close_to_real_2(self):
eps = 1e-10
m = 1
n = 3
x = 0.5
clp_plus = special.clpmn(m, n, x+1j*eps, 2)[0][m, n]
clp_minus = special.clpmn(m, n, x-1j*eps, 2)[0][m, n]
assert_array_almost_equal(array([clp_plus, clp_minus]),
array([special.lpmv(m, n, x),
special.lpmv(m, n, x)]),
7)
def test_clpmn_close_to_real_3(self):
eps = 1e-10
m = 1
n = 3
x = 0.5
clp_plus = special.clpmn(m, n, x+1j*eps, 3)[0][m, n]
clp_minus = special.clpmn(m, n, x-1j*eps, 3)[0][m, n]
assert_array_almost_equal(array([clp_plus, clp_minus]),
array([special.lpmv(m, n, x)*np.exp(-0.5j*m*np.pi),
special.lpmv(m, n, x)*np.exp(0.5j*m*np.pi)]),
7)
def test_clpmn_across_unit_circle(self):
eps = 1e-7
m = 1
n = 1
x = 1j
for type in [2, 3]:
assert_almost_equal(special.clpmn(m, n, x+1j*eps, type)[0][m, n],
special.clpmn(m, n, x-1j*eps, type)[0][m, n], 6)
def test_inf(self):
for z in (1, -1):
for n in range(4):
for m in range(1, n):
lp = special.clpmn(m, n, z)
assert_(np.isinf(lp[1][1,1:]).all())
lp = special.lpmn(m, n, z)
assert_(np.isinf(lp[1][1,1:]).all())
def test_deriv_clpmn(self):
# data inside and outside of the unit circle
zvals = [0.5+0.5j, -0.5+0.5j, -0.5-0.5j, 0.5-0.5j,
1+1j, -1+1j, -1-1j, 1-1j]
m = 2
n = 3
for type in [2, 3]:
for z in zvals:
for h in [1e-3, 1e-3j]:
approx_derivative = (special.clpmn(m, n, z+0.5*h, type)[0]
- special.clpmn(m, n, z-0.5*h, type)[0])/h
assert_allclose(special.clpmn(m, n, z, type)[1],
approx_derivative,
rtol=1e-4)
def test_lpmn(self):
lp = special.lpmn(0,2,.5)
assert_array_almost_equal(lp,(array([[1.00000,
0.50000,
-0.12500]]),
array([[0.00000,
1.00000,
1.50000]])),4)
def test_lpn(self):
lpnf = special.lpn(2,.5)
assert_array_almost_equal(lpnf,(array([1.00000,
0.50000,
-0.12500]),
array([0.00000,
1.00000,
1.50000])),4)
def test_lpmv(self):
lp = special.lpmv(0,2,.5)
assert_almost_equal(lp,-0.125,7)
lp = special.lpmv(0,40,.001)
assert_almost_equal(lp,0.1252678976534484,7)
# XXX: this is outside the domain of the current implementation,
# so ensure it returns a NaN rather than a wrong answer.
olderr = np.seterr(all='ignore')
try:
lp = special.lpmv(-1,-1,.001)
finally:
np.seterr(**olderr)
assert_(lp != 0 or np.isnan(lp))
def test_lqmn(self):
lqmnf = special.lqmn(0,2,.5)
lqf = special.lqn(2,.5)
assert_array_almost_equal(lqmnf[0][0],lqf[0],4)
assert_array_almost_equal(lqmnf[1][0],lqf[1],4)
def test_lqmn_gt1(self):
"""algorithm for real arguments changes at 1.0001
test against analytical result for m=2, n=1
"""
x0 = 1.0001
delta = 0.00002
for x in (x0-delta, x0+delta):
lq = special.lqmn(2, 1, x)[0][-1, -1]
expected = 2/(x*x-1)
assert_almost_equal(lq, expected)
def test_lqmn_shape(self):
a, b = special.lqmn(4, 4, 1.1)
assert_equal(a.shape, (5, 5))
assert_equal(b.shape, (5, 5))
a, b = special.lqmn(4, 0, 1.1)
assert_equal(a.shape, (5, 1))
assert_equal(b.shape, (5, 1))
def test_lqn(self):
lqf = special.lqn(2,.5)
assert_array_almost_equal(lqf,(array([0.5493, -0.7253, -0.8187]),
array([1.3333, 1.216, -0.8427])),4)
class TestMathieu(object):
def test_mathieu_a(self):
pass
def test_mathieu_even_coef(self):
mc = special.mathieu_even_coef(2,5)
# Q is not defined/broken, and the proper reporting order cannot be determined
def test_mathieu_odd_coef(self):
# same problem as above
pass
class TestFresnelIntegral(object):
def test_modfresnelp(self):
pass
def test_modfresnelm(self):
pass
class TestOblCvSeq(object):
def test_obl_cv_seq(self):
obl = special.obl_cv_seq(0,3,1)
assert_array_almost_equal(obl,array([-0.348602,
1.393206,
5.486800,
11.492120]),5)
class TestParabolicCylinder(object):
def test_pbdn_seq(self):
pb = special.pbdn_seq(1,.1)
assert_array_almost_equal(pb,(array([0.9975,
0.0998]),
array([-0.0499,
0.9925])),4)
def test_pbdv(self):
pbv = special.pbdv(1,.2)
derrl = 1/2*(.2)*special.pbdv(1,.2)[0] - special.pbdv(0,.2)[0]
def test_pbdv_seq(self):
pbn = special.pbdn_seq(1,.1)
pbv = special.pbdv_seq(1,.1)
assert_array_almost_equal(pbv,(real(pbn[0]),real(pbn[1])),4)
def test_pbdv_points(self):
# simple case
eta = np.linspace(-10, 10, 5)
z = 2**(eta/2)*np.sqrt(np.pi)/special.gamma(.5-.5*eta)
assert_allclose(special.pbdv(eta, 0.)[0], z, rtol=1e-14, atol=1e-14)
# some points
assert_allclose(special.pbdv(10.34, 20.44)[0], 1.3731383034455e-32, rtol=1e-12)
assert_allclose(special.pbdv(-9.53, 3.44)[0], 3.166735001119246e-8, rtol=1e-12)
def test_pbdv_gradient(self):
x = np.linspace(-4, 4, 8)[:,None]
eta = np.linspace(-10, 10, 5)[None,:]
p = special.pbdv(eta, x)
eps = 1e-7 + 1e-7*abs(x)
dp = (special.pbdv(eta, x + eps)[0] - special.pbdv(eta, x - eps)[0]) / eps / 2.
assert_allclose(p[1], dp, rtol=1e-6, atol=1e-6)
def test_pbvv_gradient(self):
x = np.linspace(-4, 4, 8)[:,None]
eta = np.linspace(-10, 10, 5)[None,:]
p = special.pbvv(eta, x)
eps = 1e-7 + 1e-7*abs(x)
dp = (special.pbvv(eta, x + eps)[0] - special.pbvv(eta, x - eps)[0]) / eps / 2.
assert_allclose(p[1], dp, rtol=1e-6, atol=1e-6)
class TestPolygamma(object):
# from Table 6.2 (pg. 271) of A&S
def test_polygamma(self):
poly2 = special.polygamma(2,1)
poly3 = special.polygamma(3,1)
assert_almost_equal(poly2,-2.4041138063,10)
assert_almost_equal(poly3,6.4939394023,10)
# Test polygamma(0, x) == psi(x)
x = [2, 3, 1.1e14]
assert_almost_equal(special.polygamma(0, x), special.psi(x))
# Test broadcasting
n = [0, 1, 2]
x = [0.5, 1.5, 2.5]
expected = [-1.9635100260214238, 0.93480220054467933,
-0.23620405164172739]
assert_almost_equal(special.polygamma(n, x), expected)
expected = np.row_stack([expected]*2)
assert_almost_equal(special.polygamma(n, np.row_stack([x]*2)),
expected)
assert_almost_equal(special.polygamma(np.row_stack([n]*2), x),
expected)
class TestProCvSeq(object):
def test_pro_cv_seq(self):
prol = special.pro_cv_seq(0,3,1)
assert_array_almost_equal(prol,array([0.319000,
2.593084,
6.533471,
12.514462]),5)
class TestPsi(object):
def test_psi(self):
ps = special.psi(1)
assert_almost_equal(ps,-0.57721566490153287,8)
class TestRadian(object):
def test_radian(self):
rad = special.radian(90,0,0)
assert_almost_equal(rad,pi/2.0,5)
def test_radianmore(self):
rad1 = special.radian(90,1,60)
assert_almost_equal(rad1,pi/2+0.0005816135199345904,5)
class TestRiccati(object):
def test_riccati_jn(self):
N, x = 2, 0.2
S = np.empty((N, N))
for n in range(N):
j = special.spherical_jn(n, x)
jp = special.spherical_jn(n, x, derivative=True)
S[0,n] = x*j
S[1,n] = x*jp + j
assert_array_almost_equal(S, special.riccati_jn(n, x), 8)
def test_riccati_yn(self):
N, x = 2, 0.2
C = np.empty((N, N))
for n in range(N):
y = special.spherical_yn(n, x)
yp = special.spherical_yn(n, x, derivative=True)
C[0,n] = x*y
C[1,n] = x*yp + y
assert_array_almost_equal(C, special.riccati_yn(n, x), 8)
class TestRound(object):
def test_round(self):
rnd = list(map(int,(special.round(10.1),special.round(10.4),special.round(10.5),special.round(10.6))))
# Note: According to the documentation, scipy.special.round is
# supposed to round to the nearest even number if the fractional
# part is exactly 0.5. On some platforms, this does not appear
# to work and thus this test may fail. However, this unit test is
# correctly written.
rndrl = (10,10,10,11)
assert_array_equal(rnd,rndrl)
def test_sph_harm():
# Tests derived from tables in
# https://en.wikipedia.org/wiki/Table_of_spherical_harmonics
sh = special.sph_harm
pi = np.pi
exp = np.exp
sqrt = np.sqrt
sin = np.sin
cos = np.cos
assert_array_almost_equal(sh(0,0,0,0),
0.5/sqrt(pi))
assert_array_almost_equal(sh(-2,2,0.,pi/4),
0.25*sqrt(15./(2.*pi)) *
(sin(pi/4))**2.)
assert_array_almost_equal(sh(-2,2,0.,pi/2),
0.25*sqrt(15./(2.*pi)))
assert_array_almost_equal(sh(2,2,pi,pi/2),
0.25*sqrt(15/(2.*pi)) *
exp(0+2.*pi*1j)*sin(pi/2.)**2.)
assert_array_almost_equal(sh(2,4,pi/4.,pi/3.),
(3./8.)*sqrt(5./(2.*pi)) *
exp(0+2.*pi/4.*1j) *
sin(pi/3.)**2. *
(7.*cos(pi/3.)**2.-1))
assert_array_almost_equal(sh(4,4,pi/8.,pi/6.),
(3./16.)*sqrt(35./(2.*pi)) *
exp(0+4.*pi/8.*1j)*sin(pi/6.)**4.)
def test_sph_harm_ufunc_loop_selection():
# see https://github.com/scipy/scipy/issues/4895
dt = np.dtype(np.complex128)
assert_equal(special.sph_harm(0, 0, 0, 0).dtype, dt)
assert_equal(special.sph_harm([0], 0, 0, 0).dtype, dt)
assert_equal(special.sph_harm(0, [0], 0, 0).dtype, dt)
assert_equal(special.sph_harm(0, 0, [0], 0).dtype, dt)
assert_equal(special.sph_harm(0, 0, 0, [0]).dtype, dt)
assert_equal(special.sph_harm([0], [0], [0], [0]).dtype, dt)
class TestStruve(object):
def _series(self, v, z, n=100):
"""Compute Struve function & error estimate from its power series."""
k = arange(0, n)
r = (-1)**k * (.5*z)**(2*k+v+1)/special.gamma(k+1.5)/special.gamma(k+v+1.5)
err = abs(r).max() * finfo(float_).eps * n
return r.sum(), err
def test_vs_series(self):
"""Check Struve function versus its power series"""
for v in [-20, -10, -7.99, -3.4, -1, 0, 1, 3.4, 12.49, 16]:
for z in [1, 10, 19, 21, 30]:
value, err = self._series(v, z)
assert_allclose(special.struve(v, z), value, rtol=0, atol=err), (v, z)
def test_some_values(self):
assert_allclose(special.struve(-7.99, 21), 0.0467547614113, rtol=1e-7)
assert_allclose(special.struve(-8.01, 21), 0.0398716951023, rtol=1e-8)
assert_allclose(special.struve(-3.0, 200), 0.0142134427432, rtol=1e-12)
assert_allclose(special.struve(-8.0, -41), 0.0192469727846, rtol=1e-11)
assert_equal(special.struve(-12, -41), -special.struve(-12, 41))
assert_equal(special.struve(+12, -41), -special.struve(+12, 41))
assert_equal(special.struve(-11, -41), +special.struve(-11, 41))
assert_equal(special.struve(+11, -41), +special.struve(+11, 41))
assert_(isnan(special.struve(-7.1, -1)))
assert_(isnan(special.struve(-10.1, -1)))
def test_regression_679(self):
"""Regression test for #679"""
assert_allclose(special.struve(-1.0, 20 - 1e-8), special.struve(-1.0, 20 + 1e-8))
assert_allclose(special.struve(-2.0, 20 - 1e-8), special.struve(-2.0, 20 + 1e-8))
assert_allclose(special.struve(-4.3, 20 - 1e-8), special.struve(-4.3, 20 + 1e-8))
def test_chi2_smalldf():
assert_almost_equal(special.chdtr(0.6,3), 0.957890536704110)
def test_ch2_inf():
assert_equal(special.chdtr(0.7,np.inf), 1.0)
def test_chi2c_smalldf():
assert_almost_equal(special.chdtrc(0.6,3), 1-0.957890536704110)
def test_chi2_inv_smalldf():
assert_almost_equal(special.chdtri(0.6,1-0.957890536704110), 3)
def test_agm_simple():
rtol = 1e-13
# Gauss's constant
assert_allclose(1/special.agm(1, np.sqrt(2)), 0.834626841674073186,
rtol=rtol)
# These values were computed using Wolfram Alpha, with the
# function ArithmeticGeometricMean[a, b].
agm13 = 1.863616783244897
agm15 = 2.604008190530940
agm35 = 3.936235503649555
assert_allclose(special.agm([[1], [3]], [1, 3, 5]),
[[1, agm13, agm15],
[agm13, 3, agm35]], rtol=rtol)
# Computed by the iteration formula using mpmath,
# with mpmath.mp.prec = 1000:
agm12 = 1.4567910310469068
assert_allclose(special.agm(1, 2), agm12, rtol=rtol)
assert_allclose(special.agm(2, 1), agm12, rtol=rtol)
assert_allclose(special.agm(-1, -2), -agm12, rtol=rtol)
assert_allclose(special.agm(24, 6), 13.458171481725614, rtol=rtol)
assert_allclose(special.agm(13, 123456789.5), 11111458.498599306,
rtol=rtol)
assert_allclose(special.agm(1e30, 1), 2.229223055945383e+28, rtol=rtol)
assert_allclose(special.agm(1e-22, 1), 0.030182566420169886, rtol=rtol)
assert_allclose(special.agm(1e150, 1e180), 2.229223055945383e+178,
rtol=rtol)
assert_allclose(special.agm(1e180, 1e-150), 2.0634722510162677e+177,
rtol=rtol)
assert_allclose(special.agm(1e-150, 1e-170), 3.3112619670463756e-152,
rtol=rtol)
fi = np.finfo(1.0)
assert_allclose(special.agm(fi.tiny, fi.max), 1.9892072050015473e+305,
rtol=rtol)
assert_allclose(special.agm(0.75*fi.max, fi.max), 1.564904312298045e+308,
rtol=rtol)
assert_allclose(special.agm(fi.tiny, 3*fi.tiny), 4.1466849866735005e-308,
rtol=rtol)
# zero, nan and inf cases.
assert_equal(special.agm(0, 0), 0)
assert_equal(special.agm(99, 0), 0)
assert_equal(special.agm(-1, 10), np.nan)
assert_equal(special.agm(0, np.inf), np.nan)
assert_equal(special.agm(np.inf, 0), np.nan)
assert_equal(special.agm(0, -np.inf), np.nan)
assert_equal(special.agm(-np.inf, 0), np.nan)
assert_equal(special.agm(np.inf, -np.inf), np.nan)
assert_equal(special.agm(-np.inf, np.inf), np.nan)
assert_equal(special.agm(1, np.nan), np.nan)
assert_equal(special.agm(np.nan, -1), np.nan)
assert_equal(special.agm(1, np.inf), np.inf)
assert_equal(special.agm(np.inf, 1), np.inf)
assert_equal(special.agm(-1, -np.inf), -np.inf)
assert_equal(special.agm(-np.inf, -1), -np.inf)
def test_legacy():
# Legacy behavior: truncating arguments to integers
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "floating point number truncated to an integer")
assert_equal(special.bdtrc(1, 2, 0.3), special.bdtrc(1.8, 2.8, 0.3))
assert_equal(special.bdtr(1, 2, 0.3), special.bdtr(1.8, 2.8, 0.3))
assert_equal(special.bdtri(1, 2, 0.3), special.bdtri(1.8, 2.8, 0.3))
assert_equal(special.expn(1, 0.3), special.expn(1.8, 0.3))
assert_equal(special.nbdtrc(1, 2, 0.3), special.nbdtrc(1.8, 2.8, 0.3))
assert_equal(special.nbdtr(1, 2, 0.3), special.nbdtr(1.8, 2.8, 0.3))
assert_equal(special.nbdtri(1, 2, 0.3), special.nbdtri(1.8, 2.8, 0.3))
assert_equal(special.pdtri(1, 0.3), special.pdtri(1.8, 0.3))
assert_equal(special.kn(1, 0.3), special.kn(1.8, 0.3))
assert_equal(special.yn(1, 0.3), special.yn(1.8, 0.3))
assert_equal(special.smirnov(1, 0.3), special.smirnov(1.8, 0.3))
assert_equal(special.smirnovi(1, 0.3), special.smirnovi(1.8, 0.3))
@with_special_errors
def test_error_raising():
assert_raises(special.SpecialFunctionError, special.iv, 1, 1e99j)
def test_xlogy():
def xfunc(x, y):
with np.errstate(invalid='ignore'):
if x == 0 and not np.isnan(y):
return x
else:
return x*np.log(y)
z1 = np.asarray([(0,0), (0, np.nan), (0, np.inf), (1.0, 2.0)], dtype=float)
z2 = np.r_[z1, [(0, 1j), (1, 1j)]]
w1 = np.vectorize(xfunc)(z1[:,0], z1[:,1])
assert_func_equal(special.xlogy, w1, z1, rtol=1e-13, atol=1e-13)
w2 = np.vectorize(xfunc)(z2[:,0], z2[:,1])
assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13)
def test_xlog1py():
def xfunc(x, y):
with np.errstate(invalid='ignore'):
if x == 0 and not np.isnan(y):
return x
else:
return x * np.log1p(y)
z1 = np.asarray([(0,0), (0, np.nan), (0, np.inf), (1.0, 2.0),
(1, 1e-30)], dtype=float)
w1 = np.vectorize(xfunc)(z1[:,0], z1[:,1])
assert_func_equal(special.xlog1py, w1, z1, rtol=1e-13, atol=1e-13)
def test_entr():
def xfunc(x):
if x < 0:
return -np.inf
else:
return -special.xlogy(x, x)
values = (0, 0.5, 1.0, np.inf)
signs = [-1, 1]
arr = []
for sgn, v in itertools.product(signs, values):
arr.append(sgn * v)
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z)
assert_func_equal(special.entr, w, z, rtol=1e-13, atol=1e-13)
def test_kl_div():
def xfunc(x, y):
if x < 0 or y < 0 or (y == 0 and x != 0):
# extension of natural domain to preserve convexity
return np.inf
elif np.isposinf(x) or np.isposinf(y):
# limits within the natural domain
return np.inf
elif x == 0:
return y
else:
return special.xlogy(x, x/y) - x + y
values = (0, 0.5, 1.0)
signs = [-1, 1]
arr = []
for sgna, va, sgnb, vb in itertools.product(signs, values, signs, values):
arr.append((sgna*va, sgnb*vb))
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.kl_div, w, z, rtol=1e-13, atol=1e-13)
def test_rel_entr():
def xfunc(x, y):
if x > 0 and y > 0:
return special.xlogy(x, x/y)
elif x == 0 and y >= 0:
return 0
else:
return np.inf
values = (0, 0.5, 1.0)
signs = [-1, 1]
arr = []
for sgna, va, sgnb, vb in itertools.product(signs, values, signs, values):
arr.append((sgna*va, sgnb*vb))
z = np.array(arr, dtype=float)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.rel_entr, w, z, rtol=1e-13, atol=1e-13)
def test_huber():
assert_equal(special.huber(-1, 1.5), np.inf)
assert_allclose(special.huber(2, 1.5), 0.5 * np.square(1.5))
assert_allclose(special.huber(2, 2.5), 2 * (2.5 - 0.5 * 2))
def xfunc(delta, r):
if delta < 0:
return np.inf
elif np.abs(r) < delta:
return 0.5 * np.square(r)
else:
return delta * (np.abs(r) - 0.5 * delta)
z = np.random.randn(10, 2)
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.huber, w, z, rtol=1e-13, atol=1e-13)
def test_pseudo_huber():
def xfunc(delta, r):
if delta < 0:
return np.inf
elif (not delta) or (not r):
return 0
else:
return delta**2 * (np.sqrt(1 + (r/delta)**2) - 1)
z = np.array(np.random.randn(10, 2).tolist() + [[0, 0.5], [0.5, 0]])
w = np.vectorize(xfunc, otypes=[np.float64])(z[:,0], z[:,1])
assert_func_equal(special.pseudo_huber, w, z, rtol=1e-13, atol=1e-13)
| jamestwebber/scipy | scipy/special/tests/test_basic.py | Python | bsd-3-clause | 132,935 | [
"Elk"
] | 84837d8f0b02367ec3560009b34fb07376cd1bc57a894d31c9be03d7c43bb153 |
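The test file above validates `i0`/`iv` against their power series (`iv_series`) and against a table of scaled values in `test_i0`. A minimal standard-library sketch of the same series check for I0 — the helper name `i0_series` is illustrative (not part of scipy), and the reference value 0.4657596077 is taken from the `test_i0` table above:

```python
import math

def i0_series(z, n=50):
    # Modified Bessel function of order 0 via its power series:
    # I0(z) = sum_{k>=0} (z/2)**(2k) / (k!)**2
    total = 0.0
    for k in range(n):
        total += (0.5 * z) ** (2 * k) / math.factorial(k) ** 2
    return total

# Scaled value i0(1)*exp(-1); compare with 0.4657596077 from test_i0.
print(i0_series(1.0) * math.exp(-1.0))
```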
# -*- coding: utf-8 -*-
import sys
import numpy as np
from ase.optimize.optimize import Optimizer
from ase.utils.linesearch import LineSearch
class LBFGS(Optimizer):
"""Limited memory BFGS optimizer.
A limited memory version of the bfgs algorithm. Unlike the bfgs algorithm
used in bfgs.py, the inverse of Hessian matrix is updated. The inverse
Hessian is represented only as a diagonal matrix to save memory
"""
def __init__(self, atoms, restart=None, logfile='-', trajectory=None,
maxstep=None, memory=100, damping=1.0, alpha=10.0,
use_line_search=False):
"""
Parameters:
restart: string
Pickle file used to store vectors for updating the inverse of Hessian
matrix. If set, file with such a name will be searched and information
stored will be used, if the file exists.
logfile: string
Where should output go. None for no output, '-' for stdout.
trajectory: string
Pickle file used to store trajectory of atomic movement.
maxstep: float
How far is a single atom allowed to move. This is useful for DFT
calculations where wavefunctions can be reused if steps are small.
Default is 0.04 Angstrom.
memory: int
Number of steps to be stored. Default value is 100. Three numpy
arrays of this length containing floats are stored.
damping: float
The calculated step is multiplied with this number before added to
the positions.
alpha: float
Initial guess for the Hessian (curvature of energy surface). The
default here is 10.0; a larger, more conservative value (such as
70.0) lowers the risk of instability, but the number of steps
needed to converge might be less if a lower value is used.
"""
Optimizer.__init__(self, atoms, restart, logfile, trajectory)
if maxstep is not None:
if maxstep > 1.0:
raise ValueError('You are using a much too large value for ' +
'the maximum step size: %.1f Angstrom' % maxstep)
self.maxstep = maxstep
else:
self.maxstep = 0.04
self.memory = memory
self.H0 = 1. / alpha # Initial approximation of inverse Hessian
# 1./70. is to emulate the behaviour of BFGS
# Note that this is never changed!
self.damping = damping
self.use_line_search = use_line_search
self.p = None
self.function_calls = 0
self.force_calls = 0
def initialize(self):
"""Initalize everything so no checks have to be done in step"""
self.iteration = 0
self.s = []
self.y = []
self.rho = [] # Store also rho, to avoid calculating the dot product
# again and again
self.r0 = None
self.f0 = None
self.e0 = None
self.task = 'START'
self.load_restart = False
def read(self):
"""Load saved arrays to reconstruct the Hessian"""
self.iteration, self.s, self.y, self.rho, \
self.r0, self.f0, self.e0, self.task = self.load()
self.load_restart = True
def step(self, f):
"""Take a single step
Use the given forces, update the history and calculate the next step --
then take it"""
r = self.atoms.get_positions()
p0 = self.p
self.update(r, f, self.r0, self.f0)
s = self.s
y = self.y
rho = self.rho
H0 = self.H0
loopmax = np.min([self.memory, self.iteration])
a = np.empty((loopmax,), dtype=np.float64)
### The algorithm itself:
q = - f.reshape(-1)
for i in range(loopmax - 1, -1, -1):
a[i] = rho[i] * np.dot(s[i], q)
q -= a[i] * y[i]
z = H0 * q
for i in range(loopmax):
b = rho[i] * np.dot(y[i], z)
z += s[i] * (a[i] - b)
self.p = - z.reshape((-1, 3))
###
g = -f
if self.use_line_search:
e = self.func(r)
self.line_search(r, g, e)
dr = (self.alpha_k * self.p).reshape(len(self.atoms),-1)
else:
self.force_calls += 1
self.function_calls += 1
dr = self.determine_step(self.p) * self.damping
self.atoms.set_positions(r+dr)
self.iteration += 1
self.r0 = r
self.f0 = -g
self.dump((self.iteration, self.s, self.y,
self.rho, self.r0, self.f0, self.e0, self.task))
def determine_step(self, dr):
"""Determine step to take according to maxstep
Normalize all steps as the largest step. This way
we still move along the eigendirection.
"""
steplengths = (dr**2).sum(1)**0.5
longest_step = np.max(steplengths)
if longest_step >= self.maxstep:
dr *= self.maxstep / longest_step
return dr
def update(self, r, f, r0, f0):
"""Update everything that is kept in memory
This function is mostly here to allow for replay_trajectory.
"""
if self.iteration > 0:
s0 = r.reshape(-1) - r0.reshape(-1)
self.s.append(s0)
# We use the gradient which is minus the force!
y0 = f0.reshape(-1) - f.reshape(-1)
self.y.append(y0)
rho0 = 1.0 / np.dot(y0, s0)
self.rho.append(rho0)
if self.iteration > self.memory:
self.s.pop(0)
self.y.pop(0)
self.rho.pop(0)
def replay_trajectory(self, traj):
"""Initialize history from old trajectory."""
if isinstance(traj, str):
from ase.io.trajectory import PickleTrajectory
traj = PickleTrajectory(traj, 'r')
r0 = None
f0 = None
# The last element is not added, as we get that for free when taking
# the first qn-step after the replay
for i in range(0, len(traj) - 1):
r = traj[i].get_positions()
f = traj[i].get_forces()
self.update(r, f, r0, f0)
r0 = r.copy()
f0 = f.copy()
self.iteration += 1
self.r0 = r0
self.f0 = f0
def func(self, x):
"""Objective function for use of the optimizers"""
self.atoms.set_positions(x.reshape(-1, 3))
self.function_calls += 1
return self.atoms.get_potential_energy()
def fprime(self, x):
"""Gradient of the objective function for use of the optimizers"""
self.atoms.set_positions(x.reshape(-1, 3))
self.force_calls += 1
# Remember that forces are minus the gradient!
return - self.atoms.get_forces().reshape(-1)
def line_search(self, r, g, e):
self.p = self.p.ravel()
p_size = np.sqrt((self.p **2).sum())
if p_size <= np.sqrt(len(self.atoms) * 1e-10):
self.p /= (p_size / np.sqrt(len(self.atoms)*1e-10))
g = g.ravel()
r = r.ravel()
ls = LineSearch()
self.alpha_k, e, self.e0, self.no_update = \
ls._line_search(self.func, self.fprime, r, self.p, g, e, self.e0,
maxstep=self.maxstep, c1=.23,
c2=.46, stpmax=50.)
class LBFGSLineSearch(LBFGS):
"""This optimizer uses the LBFGS algorithm, but does a line search that fulfills
the Wolfe conditions.
"""
def __init__(self, *args, **kwargs):
kwargs['use_line_search'] = True
LBFGS.__init__(self, *args, **kwargs)
# """Modified version of LBFGS.
#
# This optimizer uses the LBFGS algorithm, but does a line search for the
# minimum along the search direction. This is done by issuing an additional
# force call for each step, thus doubling the number of calculations.
#
# Additionally the Hessian is reset if the new guess is not sufficiently
# better than the old one.
# """
# def __init__(self, *args, **kwargs):
# self.dR = kwargs.pop('dR', 0.1)
# LBFGS.__init__(self, *args, **kwargs)
#
# def update(self, r, f, r0, f0):
# """Update everything that is kept in memory
#
# This function is mostly here to allow for replay_trajectory.
# """
# if self.iteration > 0:
# a1 = abs(np.dot(f.reshape(-1), f0.reshape(-1)))
# a2 = np.dot(f0.reshape(-1), f0.reshape(-1))
# if not (a1 <= 0.5 * a2 and a2 != 0):
# # Reset optimization
# self.initialize()
#
# # Note that the reset above will set self.iteration to 0 again
# # which is why we should check again
# if self.iteration > 0:
# s0 = r.reshape(-1) - r0.reshape(-1)
# self.s.append(s0)
#
# # We use the gradient which is minus the force!
# y0 = f0.reshape(-1) - f.reshape(-1)
# self.y.append(y0)
#
# rho0 = 1.0 / np.dot(y0, s0)
# self.rho.append(rho0)
#
# if self.iteration > self.memory:
# self.s.pop(0)
# self.y.pop(0)
# self.rho.pop(0)
#
# def determine_step(self, dr):
# f = self.atoms.get_forces()
#
# # Unit-vector along the search direction
# du = dr / np.sqrt(np.dot(dr.reshape(-1), dr.reshape(-1)))
#
# # We keep the old step determination before we figure
# # out what is the best to do.
# maxstep = self.maxstep * np.sqrt(3 * len(self.atoms))
#
# # Finite difference step using temporary point
# self.atoms.positions += (du * self.dR)
# # Decide how much to move along the line du
# Fp1 = np.dot(f.reshape(-1), du.reshape(-1))
# Fp2 = np.dot(self.atoms.get_forces().reshape(-1), du.reshape(-1))
# CR = (Fp1 - Fp2) / self.dR
# #RdR = Fp1*0.1
# if CR < 0.0:
# #print "negcurve"
# RdR = maxstep
# #if(abs(RdR) > maxstep):
# # RdR = self.sign(RdR) * maxstep
# else:
# Fp = (Fp1 + Fp2) * 0.5
# RdR = Fp / CR
# if abs(RdR) > maxstep:
# RdR = np.sign(RdR) * maxstep
# else:
# RdR += self.dR * 0.5
# return du * RdR
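The commented-out `determine_step` above picks a step length from two force projections along the search direction, measured a finite displacement `dR` apart. A minimal standalone sketch of that step rule (hypothetical numbers, not ASE's optimizer API):

```python
import math

def line_step(Fp1, Fp2, dR, maxstep):
    """Step length along the unit search direction du.

    Fp1/Fp2 are the force projections onto du at the current point and at
    the probe point displaced by dR along du.
    """
    CR = (Fp1 - Fp2) / dR            # finite-difference curvature estimate
    if CR < 0.0:                     # negative curvature: take the full step
        return maxstep
    RdR = 0.5 * (Fp1 + Fp2) / CR     # Newton-like step to the projected minimum
    if abs(RdR) > maxstep:
        return math.copysign(maxstep, RdR)
    return RdR + 0.5 * dR            # shift back to the midpoint of the probe

print(line_step(0.3, 0.1, 0.1, maxstep=1.0))
```

With a positive curvature of (0.3 - 0.1)/0.1 = 2.0 the projected minimum lies 0.1 along `du`, plus half the probe displacement.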
class HessLBFGS(LBFGS):
"""Backwards compatibiliyt class"""
def __init__(self, *args, **kwargs):
if 'method' in kwargs:
del kwargs['method']
sys.stderr.write('Please use LBFGS instead of HessLBFGS!')
LBFGS.__init__(self, *args, **kwargs)
class LineLBFGS(LBFGSLineSearch):
"""Backwards compatibiliyt class"""
def __init__(self, *args, **kwargs):
if 'method' in kwargs:
del kwargs['method']
sys.stderr.write('Please use LBFGSLineSearch instead of LineLBFGS!')
LBFGSLineSearch.__init__(self, *args, **kwargs)
| slabanja/ase | ase/optimize/lbfgs.py | Python | gpl-2.0 | 11,107 | [
"ASE"
] | f305cbed7b9cbd531915161f076fb49d08e5259bca8f1ef2f1e96b83317bdf19 |
#!/usr/bin/python
#----------------------------------------------------
# Program to convert VIC fluxes files to NetCDF file
# will ask the user which variable he wants to export
# and also for which years. Assumes there is data
# for the entire time period, from 1-jan to 31-dec
# SET UP FOR DAILY TIME STEP. FLUX FILE SHOULD NOT
# CONTAIN HOUR RECORD!!
#----------------------------------------------------
#------------------------------------------------
# Written by Daniel de Castro Victoria
# dvictori@cena.usp.br or daniel.victoria@gmail.com
# Needs python libraries Numeric and Scientific
# 03-dec-2004
#-------------------------------------------------
import os, sys, string
# handle dates...
import datetime
# NetCDF and Numeric
from Scientific.IO.NetCDF import *
from Numeric import *
# checking user input
if len(sys.argv) != 2:
print "Wrong user input"
print "Convert VIC fluxes files to NetCDF"
print "usage flux2cdf.py <vic flux dir>"
print "VIC FLUX DIR SHOULD CONTAIN TRAILING /"
sys.exit()
if sys.argv[1][-1] != "/":
print "VIC FLUX DIR SHOULD CONTAIN TRAILING /"
print "fixing it for you..."
sys.argv[1] = sys.argv[1] + "/"
print "IMPORTANT: "+sys.argv[1]+" SHOULD CONTAIN ONLY FLUXES FILES!!!"
# building file list and sorted lat lon list
file_list = os.listdir(sys.argv[1])
lat_t = []
lon_t = []
lat = []
lon = []
for f in file_list:
lat_t.append(float(string.split(f, "_")[1]))
lon_t.append(float(string.split(f, "_")[2]))
for i in lat_t:
if i not in lat:
lat.append(i)
for i in lon_t:
if i not in lon:
lon.append(i)
# putting in order. Lat should be from top to bottom
# lon from left to right
lon.sort()
lat.sort()
lat.reverse()
del(lat_t)
del(lon_t)
#determining the parameter to use
print "Choose output parameter"
print "1 - Precipitation"
print "2 - Evapotranspiration"
print "3 - Runoff"
print "4 - Base flow"
print "5 - Interception"
print "6 - Soil moisture"
varini = input('Choose output (1 to 6)>')
#getting the column right
if varini < 6:
var = varini + 2
elif varini == 6: #more than one soil layer...
camada = input('which soil layer?>')
var = varini + 1 + camada
#set name of out_file. Named after parameter choice
if var == 3:
var_txt = "ppt"
var_name = "Precipitation"
elif var == 4:
var_txt = "evap"
var_name = "Evapotranspiration"
elif var == 5:
var_txt = "runoff"
var_name = "Runoff"
elif var == 6:
var_txt = "base"
var_name = "Baseflow"
elif var == 7:
var_txt = "intercep"
var_name = "Interception"
else:
var_txt = "soil_"+str(camada)
var_name = "Soil moisture, layer %i", camada
# for what date?
start_year = input("Enter start year:")
end_year = input("End year:")
inidate = datetime.date(start_year,01,01)
enddate = datetime.date(end_year,12,31)
days = enddate.toordinal() - inidate.toordinal()+1
print "Go grab a coffe, this could take a while..."
#
# create array containing all data
# This is going to be huge. Create an array with -9999 (NoData)
# Then populate the array by reading each flux file
#
all_data = zeros([days,len(lat),len(lon)], Float)-9999
c = len(file_list)
# for each file in list
for f in file_list:
# get lat & lon and it's index
latitude = float(string.split(f, sep="_")[1])
longitude = float(string.split(f, sep="_")[2])
lat_id = lat.index(latitude)
lon_id = lon.index(longitude)
print "%i files to write." % c
c = c -1
infile = open(sys.argv[1]+f, "r")
lixo = infile.readlines()
infile.close()
dado = []
for l in lixo:
if int(string.split(l, sep="\t")[0]) in range(inidate.year, enddate.year+1):
dado.append(float(string.split(l, sep="\t")[var]))
# putting data inside array.
# Since data has lat & lon fixed uses dimension [:,lat_index,lon_index]
all_data[:,lat_id,lon_id] = dado
#
# writing NetCDF
#
ncfile = NetCDFFile(var_txt+".nc", "w")
ncfile.Conventions = "COARDS"
ncfile.history = "Created using flux2cdf.py. " + datetime.date.today().isoformat()
ncfile.production = "VIC output"
ncfile.start_date = inidate.isoformat()
ncfile.end_date = enddate.isoformat()
#create dimensions
ncfile.createDimension("X", len(lon))
ncfile.createDimension("Y", len(lat))
ncfile.createDimension("T", days)
#create variables
latvar = ncfile.createVariable("Y", Float, ("Y",))
latvar.long_name = "Latitude"
latvar.units = "degrees_north"
latvar[:] = lat
lonvar = ncfile.createVariable("X", Float, ("X",))
lonvar.long_name = "Longitude"
lonvar.units = "degrees_east"
lonvar[:] = lon
timevar = ncfile.createVariable("T", Float, ("T",))
timevar.long_name = "Time"
timevar.units = "days since " + inidate.isoformat()
timevar[:] = range(0, days)
data_var = ncfile.createVariable(var_txt, Float, ("T","Y","X"))
data_var.long_name = var_name+" calculated by VIC"
data_var.missing_value = -9999.0
data_var.units = "millimeters"
data_var[:] = all_data
ncfile.close()
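The script above derives the output grid purely from the flux file names, which encode each cell as `prefix_<lat>_<lon>`. The same parsing and index lookup in a compact form (hypothetical file names):

```python
names = ["fluxes_45.25_-120.75", "fluxes_44.75_-120.75", "fluxes_45.25_-121.25"]

lats = sorted({float(n.split("_")[1]) for n in names}, reverse=True)  # top to bottom
lons = sorted({float(n.split("_")[2]) for n in names})                # left to right

# map each file to its (row, col) position in the output grid
cells = {n: (lats.index(float(n.split("_")[1])),
             lons.index(float(n.split("_")[2]))) for n in names}
print(cells["fluxes_45.25_-120.75"])  # → (0, 1)
```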
| yixinmao/VIC | tools/post_processing/flux2nc.py | Python | gpl-2.0 | 4,980 | [
"NetCDF"
] | 703cddb3580d0394f1413a325be85b23cc1db9cb8c792939e66036b93395bebf |
# -*- coding: utf-8 -*-
import codecs
import warnings
import re
from contextlib import contextmanager
from parso.normalizer import Normalizer, NormalizerConfig, Issue, Rule
from parso.python.tree import search_ancestor
from parso.python.tokenize import _get_token_collection
_BLOCK_STMTS = ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt', 'with_stmt')
_STAR_EXPR_PARENTS = ('testlist_star_expr', 'testlist_comp', 'exprlist')
# This is the maximal block size given by python.
_MAX_BLOCK_SIZE = 20
_MAX_INDENT_COUNT = 100
ALLOWED_FUTURES = (
'nested_scopes', 'generators', 'division', 'absolute_import',
'with_statement', 'print_function', 'unicode_literals',
)
_COMP_FOR_TYPES = ('comp_for', 'sync_comp_for')
def _get_rhs_name(node, version):
type_ = node.type
if type_ == "lambdef":
return "lambda"
elif type_ == "atom":
comprehension = _get_comprehension_type(node)
first, second = node.children[:2]
if comprehension is not None:
return comprehension
elif second.type == "dictorsetmaker":
if version < (3, 8):
return "literal"
else:
if second.children[1] == ":" or second.children[0] == "**":
return "dict display"
else:
return "set display"
elif (
first == "("
and (second == ")"
or (len(node.children) == 3 and node.children[1].type == "testlist_comp"))
):
return "tuple"
elif first == "(":
return _get_rhs_name(_remove_parens(node), version=version)
elif first == "[":
return "list"
elif first == "{" and second == "}":
return "dict display"
elif first == "{" and len(node.children) > 2:
return "set display"
elif type_ == "keyword":
if "yield" in node.value:
return "yield expression"
if version < (3, 8):
return "keyword"
else:
return str(node.value)
elif type_ == "operator" and node.value == "...":
return "Ellipsis"
elif type_ == "comparison":
return "comparison"
elif type_ in ("string", "number", "strings"):
return "literal"
elif type_ == "yield_expr":
return "yield expression"
elif type_ == "test":
return "conditional expression"
elif type_ in ("atom_expr", "power"):
if node.children[0] == "await":
return "await expression"
elif node.children[-1].type == "trailer":
trailer = node.children[-1]
if trailer.children[0] == "(":
return "function call"
elif trailer.children[0] == "[":
return "subscript"
elif trailer.children[0] == ".":
return "attribute"
elif (
("expr" in type_
and "star_expr" not in type_) # is a substring
or "_test" in type_
or type_ in ("term", "factor")
):
return "operator"
elif type_ == "star_expr":
return "starred"
elif type_ == "testlist_star_expr":
return "tuple"
elif type_ == "fstring":
return "f-string expression"
return type_ # shouldn't reach here
def _iter_stmts(scope):
"""
Iterates over all statements and splits up simple_stmt.
"""
for child in scope.children:
if child.type == 'simple_stmt':
for child2 in child.children:
if child2.type == 'newline' or child2 == ';':
continue
yield child2
else:
yield child
def _get_comprehension_type(atom):
first, second = atom.children[:2]
if second.type == 'testlist_comp' and second.children[1].type in _COMP_FOR_TYPES:
if first == '[':
return 'list comprehension'
else:
return 'generator expression'
elif second.type == 'dictorsetmaker' and second.children[-1].type in _COMP_FOR_TYPES:
if second.children[1] == ':':
return 'dict comprehension'
else:
return 'set comprehension'
return None
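`_get_comprehension_type` reads the comprehension kind off the surrounding brackets alone; the stdlib `ast` module draws the same four-way distinction:

```python
import ast

def comp_type(src):
    # return the AST node class name of an expression
    return type(ast.parse(src, mode="eval").body).__name__

print(comp_type("[x for x in y]"))     # ListComp
print(comp_type("(x for x in y)"))     # GeneratorExp
print(comp_type("{x for x in y}"))     # SetComp
print(comp_type("{x: 1 for x in y}"))  # DictComp
```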
def _is_future_import(import_from):
# It looks like a __future__ import that is relative is still a future
# import. That feels kind of odd, but whatever.
# if import_from.level != 0:
# return False
from_names = import_from.get_from_names()
return [n.value for n in from_names] == ['__future__']
def _remove_parens(atom):
"""
Returns the inner part of an expression like `(foo)`. Also removes nested
parens.
"""
try:
children = atom.children
except AttributeError:
pass
else:
if len(children) == 3 and children[0] == '(':
return _remove_parens(atom.children[1])
return atom
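The recursion in `_remove_parens` can be illustrated on a toy tree where a parenthesized atom is a `('(', inner, ')')` triple (not parso's real node classes):

```python
def remove_parens(node):
    # a leaf is a plain string; a parenthesized atom is a 3-tuple
    if isinstance(node, tuple) and len(node) == 3 and node[0] == '(':
        return remove_parens(node[1])
    return node

tree = ('(', ('(', 'foo', ')'), ')')  # stands for ((foo))
print(remove_parens(tree))  # → foo
```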
def _iter_params(parent_node):
return (n for n in parent_node.children if n.type == 'param')
def _is_future_import_first(import_from):
"""
Checks if the import is the first statement of a file.
"""
found_docstring = False
for stmt in _iter_stmts(import_from.get_root_node()):
if stmt.type == 'string' and not found_docstring:
continue
found_docstring = True
if stmt == import_from:
return True
if stmt.type == 'import_from' and _is_future_import(stmt):
continue
return False
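`_is_future_import_first` encodes the rule that only a docstring (and other `__future__` imports) may precede a `from __future__` import. CPython's compiler enforces the same rule:

```python
def compiles(src):
    try:
        compile(src, "<test>", "exec")
        return True
    except SyntaxError:
        return False

# a docstring may come first, any other statement may not
print(compiles('"""doc"""\nfrom __future__ import division\n'))  # True
print(compiles('x = 1\nfrom __future__ import division\n'))      # False
```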
def _iter_definition_exprs_from_lists(exprlist):
def check_expr(child):
if child.type == 'atom':
if child.children[0] == '(':
testlist_comp = child.children[1]
if testlist_comp.type == 'testlist_comp':
for expr in _iter_definition_exprs_from_lists(testlist_comp):
yield expr
return
else:
# It's a paren that doesn't do anything, like 1 + (1)
for c in check_expr(testlist_comp):
yield c
return
elif child.children[0] == '[':
                yield child.children[1]
return
yield child
if exprlist.type in _STAR_EXPR_PARENTS:
for child in exprlist.children[::2]:
for c in check_expr(child): # Python 2 sucks
yield c
else:
for c in check_expr(exprlist): # Python 2 sucks
yield c
def _get_expr_stmt_definition_exprs(expr_stmt):
exprs = []
for list_ in expr_stmt.children[:-2:2]:
if list_.type in ('testlist_star_expr', 'testlist'):
exprs += _iter_definition_exprs_from_lists(list_)
else:
exprs.append(list_)
return exprs
def _get_for_stmt_definition_exprs(for_stmt):
exprlist = for_stmt.children[1]
return list(_iter_definition_exprs_from_lists(exprlist))
def _is_argument_comprehension(argument):
return argument.children[1].type in _COMP_FOR_TYPES
def _any_fstring_error(version, node):
if version < (3, 9) or node is None:
return False
if node.type == "error_node":
return any(child.type == "fstring_start" for child in node.children)
elif node.type == "fstring":
return True
else:
return search_ancestor(node, "fstring")
class _Context(object):
def __init__(self, node, add_syntax_error, parent_context=None):
self.node = node
self.blocks = []
self.parent_context = parent_context
self._used_name_dict = {}
self._global_names = []
self._nonlocal_names = []
self._nonlocal_names_in_subscopes = []
self._add_syntax_error = add_syntax_error
def is_async_funcdef(self):
# Stupidly enough async funcdefs can have two different forms,
# depending if a decorator is used or not.
return self.is_function() \
and self.node.parent.type in ('async_funcdef', 'async_stmt')
def is_function(self):
return self.node.type == 'funcdef'
def add_name(self, name):
parent_type = name.parent.type
if parent_type == 'trailer':
# We are only interested in first level names.
return
if parent_type == 'global_stmt':
self._global_names.append(name)
elif parent_type == 'nonlocal_stmt':
self._nonlocal_names.append(name)
else:
self._used_name_dict.setdefault(name.value, []).append(name)
def finalize(self):
"""
Returns a list of nonlocal names that need to be part of that scope.
"""
self._analyze_names(self._global_names, 'global')
self._analyze_names(self._nonlocal_names, 'nonlocal')
global_name_strs = {n.value: n for n in self._global_names}
for nonlocal_name in self._nonlocal_names:
try:
global_name = global_name_strs[nonlocal_name.value]
except KeyError:
continue
message = "name '%s' is nonlocal and global" % global_name.value
if global_name.start_pos < nonlocal_name.start_pos:
error_name = global_name
else:
error_name = nonlocal_name
self._add_syntax_error(error_name, message)
nonlocals_not_handled = []
for nonlocal_name in self._nonlocal_names_in_subscopes:
search = nonlocal_name.value
if search in global_name_strs or self.parent_context is None:
message = "no binding for nonlocal '%s' found" % nonlocal_name.value
self._add_syntax_error(nonlocal_name, message)
elif not self.is_function() or \
nonlocal_name.value not in self._used_name_dict:
nonlocals_not_handled.append(nonlocal_name)
return self._nonlocal_names + nonlocals_not_handled
def _analyze_names(self, globals_or_nonlocals, type_):
def raise_(message):
self._add_syntax_error(base_name, message % (base_name.value, type_))
params = []
if self.node.type == 'funcdef':
params = self.node.get_params()
for base_name in globals_or_nonlocals:
found_global_or_nonlocal = False
# Somehow Python does it the reversed way.
for name in reversed(self._used_name_dict.get(base_name.value, [])):
if name.start_pos > base_name.start_pos:
# All following names don't have to be checked.
found_global_or_nonlocal = True
parent = name.parent
if parent.type == 'param' and parent.name == name:
# Skip those here, these definitions belong to the next
# scope.
continue
if name.is_definition():
if parent.type == 'expr_stmt' \
and parent.children[1].type == 'annassign':
if found_global_or_nonlocal:
# If it's after the global the error seems to be
# placed there.
base_name = name
raise_("annotated name '%s' can't be %s")
break
else:
message = "name '%s' is assigned to before %s declaration"
else:
message = "name '%s' is used prior to %s declaration"
if not found_global_or_nonlocal:
raise_(message)
                    # Only add an error for the first occurrence.
break
for param in params:
if param.name.value == base_name.value:
raise_("name '%s' is parameter and %s"),
@contextmanager
def add_block(self, node):
self.blocks.append(node)
yield
self.blocks.pop()
def add_context(self, node):
return _Context(node, self._add_syntax_error, parent_context=self)
def close_child_context(self, child_context):
self._nonlocal_names_in_subscopes += child_context.finalize()
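The bookkeeping in `_Context.finalize` mirrors checks CPython itself performs at compile time; both conflicts below raise `SyntaxError` under `compile()` (the exact messages vary slightly between versions):

```python
def error_message(src):
    try:
        compile(src, "<test>", "exec")
        return None
    except SyntaxError as e:
        return e.msg

# a name declared both global and nonlocal in one scope
print(error_message("def f():\n def g():\n  global x\n  nonlocal x\n"))
# nonlocal without a binding in any enclosing scope
print(error_message("def f():\n nonlocal x\n"))
```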
class ErrorFinder(Normalizer):
"""
Searches for errors in the syntax tree.
"""
def __init__(self, *args, **kwargs):
super(ErrorFinder, self).__init__(*args, **kwargs)
self._error_dict = {}
self.version = self.grammar.version_info
def initialize(self, node):
def create_context(node):
if node is None:
return None
parent_context = create_context(node.parent)
if node.type in ('classdef', 'funcdef', 'file_input'):
return _Context(node, self._add_syntax_error, parent_context)
return parent_context
self.context = create_context(node) or _Context(node, self._add_syntax_error)
self._indentation_count = 0
def visit(self, node):
if node.type == 'error_node':
with self.visit_node(node):
# Don't need to investigate the inners of an error node. We
# might find errors in there that should be ignored, because
# the error node itself already shows that there's an issue.
return ''
return super(ErrorFinder, self).visit(node)
@contextmanager
def visit_node(self, node):
self._check_type_rules(node)
if node.type in _BLOCK_STMTS:
with self.context.add_block(node):
if len(self.context.blocks) == _MAX_BLOCK_SIZE:
self._add_syntax_error(node, "too many statically nested blocks")
yield
return
elif node.type == 'suite':
self._indentation_count += 1
if self._indentation_count == _MAX_INDENT_COUNT:
self._add_indentation_error(node.children[1], "too many levels of indentation")
yield
if node.type == 'suite':
self._indentation_count -= 1
elif node.type in ('classdef', 'funcdef'):
context = self.context
self.context = context.parent_context
self.context.close_child_context(context)
def visit_leaf(self, leaf):
if leaf.type == 'error_leaf':
if leaf.token_type in ('INDENT', 'ERROR_DEDENT'):
# Indents/Dedents itself never have a prefix. They are just
# "pseudo" tokens that get removed by the syntax tree later.
# Therefore in case of an error we also have to check for this.
spacing = list(leaf.get_next_leaf()._split_prefix())[-1]
if leaf.token_type == 'INDENT':
message = 'unexpected indent'
else:
message = 'unindent does not match any outer indentation level'
self._add_indentation_error(spacing, message)
else:
if leaf.value.startswith('\\'):
message = 'unexpected character after line continuation character'
else:
match = re.match('\\w{,2}("{1,3}|\'{1,3})', leaf.value)
if match is None:
message = 'invalid syntax'
if (
self.version >= (3, 9)
and leaf.value in _get_token_collection(self.version).always_break_tokens
):
message = "f-string: " + message
else:
if len(match.group(1)) == 1:
message = 'EOL while scanning string literal'
else:
message = 'EOF while scanning triple-quoted string literal'
self._add_syntax_error(leaf, message)
return ''
elif leaf.value == ':':
parent = leaf.parent
if parent.type in ('classdef', 'funcdef'):
self.context = self.context.add_context(parent)
# The rest is rule based.
return super(ErrorFinder, self).visit_leaf(leaf)
def _add_indentation_error(self, spacing, message):
self.add_issue(spacing, 903, "IndentationError: " + message)
def _add_syntax_error(self, node, message):
self.add_issue(node, 901, "SyntaxError: " + message)
def add_issue(self, node, code, message):
# Overwrite the default behavior.
# Check if the issues are on the same line.
line = node.start_pos[0]
args = (code, message, node)
self._error_dict.setdefault(line, args)
def finalize(self):
self.context.finalize()
for code, message, node in self._error_dict.values():
self.issues.append(Issue(node, code, message))
class IndentationRule(Rule):
code = 903
def _get_message(self, message, node):
message = super(IndentationRule, self)._get_message(message, node)
return "IndentationError: " + message
@ErrorFinder.register_rule(type='error_node')
class _ExpectIndentedBlock(IndentationRule):
message = 'expected an indented block'
def get_node(self, node):
leaf = node.get_next_leaf()
return list(leaf._split_prefix())[-1]
def is_issue(self, node):
# This is the beginning of a suite that is not indented.
return node.children[-1].type == 'newline'
class ErrorFinderConfig(NormalizerConfig):
normalizer_class = ErrorFinder
class SyntaxRule(Rule):
code = 901
def _get_message(self, message, node):
message = super(SyntaxRule, self)._get_message(message, node)
if (
"f-string" not in message
and _any_fstring_error(self._normalizer.version, node)
):
message = "f-string: " + message
return "SyntaxError: " + message
@ErrorFinder.register_rule(type='error_node')
class _InvalidSyntaxRule(SyntaxRule):
message = "invalid syntax"
fstring_message = "f-string: invalid syntax"
def get_node(self, node):
return node.get_next_leaf()
def is_issue(self, node):
error = node.get_next_leaf().type != 'error_leaf'
if (
error
and _any_fstring_error(self._normalizer.version, node)
):
self.add_issue(node, message=self.fstring_message)
else:
# Error leafs will be added later as an error.
return error
@ErrorFinder.register_rule(value='await')
class _AwaitOutsideAsync(SyntaxRule):
message = "'await' outside async function"
def is_issue(self, leaf):
return not self._normalizer.context.is_async_funcdef()
def get_error_node(self, node):
# Return the whole await statement.
return node.parent
@ErrorFinder.register_rule(value='break')
class _BreakOutsideLoop(SyntaxRule):
message = "'break' outside loop"
def is_issue(self, leaf):
in_loop = False
for block in self._normalizer.context.blocks:
if block.type in ('for_stmt', 'while_stmt'):
in_loop = True
return not in_loop
@ErrorFinder.register_rule(value='continue')
class _ContinueChecks(SyntaxRule):
message = "'continue' not properly in loop"
message_in_finally = "'continue' not supported inside 'finally' clause"
def is_issue(self, leaf):
in_loop = False
for block in self._normalizer.context.blocks:
if block.type in ('for_stmt', 'while_stmt'):
in_loop = True
if block.type == 'try_stmt':
last_block = block.children[-3]
if (
last_block == "finally"
and leaf.start_pos > last_block.start_pos
and self._normalizer.version < (3, 8)
):
self.add_issue(leaf, message=self.message_in_finally)
return False # Error already added
if not in_loop:
return True
@ErrorFinder.register_rule(value='from')
class _YieldFromCheck(SyntaxRule):
message = "'yield from' inside async function"
def get_node(self, leaf):
return leaf.parent.parent # This is the actual yield statement.
def is_issue(self, leaf):
return leaf.parent.type == 'yield_arg' \
and self._normalizer.context.is_async_funcdef()
@ErrorFinder.register_rule(type='name')
class _NameChecks(SyntaxRule):
message = 'cannot assign to __debug__'
message_none = 'cannot assign to None'
def is_issue(self, leaf):
self._normalizer.context.add_name(leaf)
if leaf.value == '__debug__' and leaf.is_definition():
return True
if leaf.value == 'None' and self._normalizer.version < (3, 0) \
and leaf.is_definition():
self.add_issue(leaf, message=self.message_none)
@ErrorFinder.register_rule(type='string')
class _StringChecks(SyntaxRule):
message = "bytes can only contain ASCII literal characters."
def is_issue(self, leaf):
string_prefix = leaf.string_prefix.lower()
if 'b' in string_prefix \
and self._normalizer.version >= (3, 0) \
and any(c for c in leaf.value if ord(c) > 127):
# b'ä'
return True
if 'r' not in string_prefix:
# Raw strings don't need to be checked if they have proper
# escaping.
is_bytes = self._normalizer.version < (3, 0)
if 'b' in string_prefix:
is_bytes = True
if 'u' in string_prefix:
is_bytes = False
payload = leaf._get_payload()
if is_bytes:
payload = payload.encode('utf-8')
func = codecs.escape_decode
else:
func = codecs.unicode_escape_decode
try:
with warnings.catch_warnings():
# The warnings from parsing strings are not relevant.
warnings.filterwarnings('ignore')
func(payload)
except UnicodeDecodeError as e:
self.add_issue(leaf, message='(unicode error) ' + str(e))
except ValueError as e:
self.add_issue(leaf, message='(value error) ' + str(e))
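`_StringChecks` reproduces the Python 3 restriction that bytes literals are ASCII-only; CPython rejects the same source:

```python
def compiles(src):
    try:
        compile(src, "<test>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles("b'abc'"))     # True
print(compiles("b'\u00e4'"))  # False: bytes can only contain ASCII literal characters
```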
@ErrorFinder.register_rule(value='*')
class _StarCheck(SyntaxRule):
message = "named arguments must follow bare *"
def is_issue(self, leaf):
params = leaf.parent
if params.type == 'parameters' and params:
after = params.children[params.children.index(leaf) + 1:]
after = [child for child in after
if child not in (',', ')') and not child.star_count]
return len(after) == 0
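`_StarCheck` flags a bare `*` in a parameter list with nothing after it, matching CPython's behavior:

```python
def compiles(src):
    try:
        compile(src, "<test>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles("def f(*, a): pass"))  # True: keyword-only argument follows the *
print(compiles("def f(*): pass"))     # False: named arguments must follow bare *
```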
@ErrorFinder.register_rule(value='**')
class _StarStarCheck(SyntaxRule):
# e.g. {**{} for a in [1]}
# TODO this should probably get a better end_pos including
# the next sibling of leaf.
message = "dict unpacking cannot be used in dict comprehension"
def is_issue(self, leaf):
if leaf.parent.type == 'dictorsetmaker':
comp_for = leaf.get_next_sibling().get_next_sibling()
return comp_for is not None and comp_for.type in _COMP_FOR_TYPES
@ErrorFinder.register_rule(value='yield')
@ErrorFinder.register_rule(value='return')
class _ReturnAndYieldChecks(SyntaxRule):
message = "'return' with value in async generator"
message_async_yield = "'yield' inside async function"
def get_node(self, leaf):
return leaf.parent
def is_issue(self, leaf):
if self._normalizer.context.node.type != 'funcdef':
self.add_issue(self.get_node(leaf), message="'%s' outside function" % leaf.value)
elif self._normalizer.context.is_async_funcdef() \
and any(self._normalizer.context.node.iter_yield_exprs()):
if leaf.value == 'return' and leaf.parent.type == 'return_stmt':
return True
elif leaf.value == 'yield' \
and leaf.get_next_leaf() != 'from' \
and self._normalizer.version == (3, 5):
self.add_issue(self.get_node(leaf), message=self.message_async_yield)
@ErrorFinder.register_rule(type='strings')
class _BytesAndStringMix(SyntaxRule):
# e.g. 's' b''
message = "cannot mix bytes and nonbytes literals"
def _is_bytes_literal(self, string):
if string.type == 'fstring':
return False
return 'b' in string.string_prefix.lower()
def is_issue(self, node):
first = node.children[0]
# In Python 2 it's allowed to mix bytes and unicode.
if self._normalizer.version >= (3, 0):
first_is_bytes = self._is_bytes_literal(first)
for string in node.children[1:]:
if first_is_bytes != self._is_bytes_literal(string):
return True
@ErrorFinder.register_rule(type='import_as_names')
class _TrailingImportComma(SyntaxRule):
# e.g. from foo import a,
message = "trailing comma not allowed without surrounding parentheses"
def is_issue(self, node):
if node.children[-1] == ',' and node.parent.children[-1] != ')':
return True
@ErrorFinder.register_rule(type='import_from')
class _ImportStarInFunction(SyntaxRule):
message = "import * only allowed at module level"
def is_issue(self, node):
return node.is_star_import() and self._normalizer.context.parent_context is not None
@ErrorFinder.register_rule(type='import_from')
class _FutureImportRule(SyntaxRule):
message = "from __future__ imports must occur at the beginning of the file"
def is_issue(self, node):
if _is_future_import(node):
if not _is_future_import_first(node):
return True
for from_name, future_name in node.get_paths():
name = future_name.value
allowed_futures = list(ALLOWED_FUTURES)
if self._normalizer.version >= (3, 5):
allowed_futures.append('generator_stop')
if self._normalizer.version >= (3, 7):
allowed_futures.append('annotations')
if name == 'braces':
self.add_issue(node, message="not a chance")
elif name == 'barry_as_FLUFL':
m = "Seriously I'm not implementing this :) ~ Dave"
self.add_issue(node, message=m)
elif name not in allowed_futures:
message = "future feature %s is not defined" % name
self.add_issue(node, message=message)
@ErrorFinder.register_rule(type='star_expr')
class _StarExprRule(SyntaxRule):
message_iterable_unpacking = "iterable unpacking cannot be used in comprehension"
message_assignment = "can use starred expression only as assignment target"
def is_issue(self, node):
if node.parent.type == 'testlist_comp':
# [*[] for a in [1]]
if node.parent.children[1].type in _COMP_FOR_TYPES:
self.add_issue(node, message=self.message_iterable_unpacking)
if self._normalizer.version <= (3, 4):
n = search_ancestor(node, 'for_stmt', 'expr_stmt')
found_definition = False
if n is not None:
if n.type == 'expr_stmt':
exprs = _get_expr_stmt_definition_exprs(n)
else:
exprs = _get_for_stmt_definition_exprs(n)
if node in exprs:
found_definition = True
if not found_definition:
self.add_issue(node, message=self.message_assignment)
@ErrorFinder.register_rule(types=_STAR_EXPR_PARENTS)
class _StarExprParentRule(SyntaxRule):
def is_issue(self, node):
if node.parent.type == 'del_stmt':
if self._normalizer.version >= (3, 9):
self.add_issue(node.parent, message="cannot delete starred")
else:
self.add_issue(node.parent, message="can't use starred expression here")
else:
def is_definition(node, ancestor):
if ancestor is None:
return False
type_ = ancestor.type
if type_ == 'trailer':
return False
if type_ == 'expr_stmt':
return node.start_pos < ancestor.children[-1].start_pos
return is_definition(node, ancestor.parent)
if is_definition(node, node.parent):
args = [c for c in node.children if c != ',']
starred = [c for c in args if c.type == 'star_expr']
if len(starred) > 1:
if self._normalizer.version < (3, 9):
message = "two starred expressions in assignment"
else:
message = "multiple starred expressions in assignment"
self.add_issue(starred[1], message=message)
elif starred:
count = args.index(starred[0])
if count >= 256:
message = "too many expressions in star-unpacking assignment"
self.add_issue(starred[0], message=message)
@ErrorFinder.register_rule(type='annassign')
class _AnnotatorRule(SyntaxRule):
# True: int
# {}: float
message = "illegal target for annotation"
def get_node(self, node):
return node.parent
def is_issue(self, node):
type_ = None
lhs = node.parent.children[0]
lhs = _remove_parens(lhs)
try:
children = lhs.children
except AttributeError:
pass
else:
if ',' in children or lhs.type == 'atom' and children[0] == '(':
type_ = 'tuple'
elif lhs.type == 'atom' and children[0] == '[':
type_ = 'list'
trailer = children[-1]
if type_ is None:
if not (lhs.type == 'name'
# subscript/attributes are allowed
or lhs.type in ('atom_expr', 'power')
and trailer.type == 'trailer'
and trailer.children[0] != '('):
return True
else:
# x, y: str
message = "only single target (not %s) can be annotated"
self.add_issue(lhs.parent, message=message % type_)
@ErrorFinder.register_rule(type='argument')
class _ArgumentRule(SyntaxRule):
def is_issue(self, node):
first = node.children[0]
if self._normalizer.version < (3, 8):
# a((b)=c) is valid in <3.8
first = _remove_parens(first)
if node.children[1] == '=' and first.type != 'name':
if first.type == 'lambdef':
# f(lambda: 1=1)
if self._normalizer.version < (3, 8):
message = "lambda cannot contain assignment"
else:
message = 'expression cannot contain assignment, perhaps you meant "=="?'
else:
# f(+x=1)
if self._normalizer.version < (3, 8):
message = "keyword can't be an expression"
else:
message = 'expression cannot contain assignment, perhaps you meant "=="?'
self.add_issue(first, message=message)
if _is_argument_comprehension(node) and node.parent.type == 'classdef':
self.add_issue(node, message='invalid syntax')
@ErrorFinder.register_rule(type='nonlocal_stmt')
class _NonlocalModuleLevelRule(SyntaxRule):
message = "nonlocal declaration not allowed at module level"
def is_issue(self, node):
return self._normalizer.context.parent_context is None
@ErrorFinder.register_rule(type='arglist')
class _ArglistRule(SyntaxRule):
@property
def message(self):
if self._normalizer.version < (3, 7):
return "Generator expression must be parenthesized if not sole argument"
else:
return "Generator expression must be parenthesized"
def is_issue(self, node):
arg_set = set()
kw_only = False
kw_unpacking_only = False
is_old_starred = False
# In python 3 this would be a bit easier (stars are part of
# argument), but we have to understand both.
for argument in node.children:
if argument == ',':
continue
if argument in ('*', '**'):
# Python < 3.5 has the order engraved in the grammar
# file. No need to do anything here.
is_old_starred = True
continue
if is_old_starred:
is_old_starred = False
continue
if argument.type == 'argument':
first = argument.children[0]
if _is_argument_comprehension(argument) and len(node.children) >= 2:
# a(a, b for b in c)
return True
if first in ('*', '**'):
if first == '*':
if kw_unpacking_only:
# foo(**kwargs, *args)
message = "iterable argument unpacking " \
"follows keyword argument unpacking"
self.add_issue(argument, message=message)
else:
kw_unpacking_only = True
else: # Is a keyword argument.
kw_only = True
if first.type == 'name':
if first.value in arg_set:
# f(x=1, x=2)
message = "keyword argument repeated"
if self._normalizer.version >= (3, 9):
message += ": {}".format(first.value)
self.add_issue(first, message=message)
else:
arg_set.add(first.value)
else:
if kw_unpacking_only:
# f(**x, y)
message = "positional argument follows keyword argument unpacking"
self.add_issue(argument, message=message)
elif kw_only:
# f(x=2, y)
message = "positional argument follows keyword argument"
self.add_issue(argument, message=message)
@ErrorFinder.register_rule(type='parameters')
@ErrorFinder.register_rule(type='lambdef')
class _ParameterRule(SyntaxRule):
# def f(x=3, y): pass
message = "non-default argument follows default argument"
def is_issue(self, node):
param_names = set()
default_only = False
for p in _iter_params(node):
if p.name.value in param_names:
message = "duplicate argument '%s' in function definition"
self.add_issue(p.name, message=message % p.name.value)
param_names.add(p.name.value)
if p.default is None and not p.star_count:
if default_only:
return True
else:
default_only = True
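The two parameter checks above (duplicate names, non-default after default) can be sketched over plain `(name, has_default)` pairs. Unlike `_ParameterRule`, which short-circuits on the first ordering violation, this invented helper collects every issue:

```python
# Standalone sketch of the checks in _ParameterRule above; the helper
# name and input format are invented for illustration.
def check_params(params):
    """params: list of (name, has_default) pairs in definition order."""
    issues = []
    seen = set()
    default_seen = False
    for name, has_default in params:
        if name in seen:
            issues.append("duplicate argument '%s' in function definition" % name)
        seen.add(name)
        if has_default:
            default_seen = True
        elif default_seen:
            issues.append('non-default argument follows default argument')
    return issues

# def f(x=3, y): pass  -> non-default argument follows default argument
```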
@ErrorFinder.register_rule(type='try_stmt')
class _TryStmtRule(SyntaxRule):
message = "default 'except:' must be last"
def is_issue(self, try_stmt):
default_except = None
for except_clause in try_stmt.children[3::3]:
if except_clause in ('else', 'finally'):
break
if except_clause == 'except':
default_except = except_clause
elif default_except is not None:
self.add_issue(default_except, message=self.message)
@ErrorFinder.register_rule(type='fstring')
class _FStringRule(SyntaxRule):
_fstring_grammar = None
message_expr = "f-string expression part cannot include a backslash"
message_nested = "f-string: expressions nested too deeply"
message_conversion = "f-string: invalid conversion character: expected 's', 'r', or 'a'"
def _check_format_spec(self, format_spec, depth):
self._check_fstring_contents(format_spec.children[1:], depth)
def _check_fstring_expr(self, fstring_expr, depth):
if depth >= 2:
self.add_issue(fstring_expr, message=self.message_nested)
expr = fstring_expr.children[1]
if '\\' in expr.get_code():
self.add_issue(expr, message=self.message_expr)
conversion = fstring_expr.children[2]
if conversion.type == 'fstring_conversion':
name = conversion.children[1]
if name.value not in ('s', 'r', 'a'):
self.add_issue(name, message=self.message_conversion)
format_spec = fstring_expr.children[-2]
if format_spec.type == 'fstring_format_spec':
self._check_format_spec(format_spec, depth + 1)
def is_issue(self, fstring):
self._check_fstring_contents(fstring.children[1:-1])
def _check_fstring_contents(self, children, depth=0):
for fstring_content in children:
if fstring_content.type == 'fstring_expr':
self._check_fstring_expr(fstring_content, depth)
class _CheckAssignmentRule(SyntaxRule):
def _check_assignment(self, node, is_deletion=False, is_namedexpr=False, is_aug_assign=False):
error = None
type_ = node.type
if type_ == 'lambdef':
error = 'lambda'
elif type_ == 'atom':
first, second = node.children[:2]
error = _get_comprehension_type(node)
if error is None:
if second.type == 'dictorsetmaker':
if self._normalizer.version < (3, 8):
error = 'literal'
else:
if second.children[1] == ':':
error = 'dict display'
else:
error = 'set display'
elif first == "{" and second == "}":
if self._normalizer.version < (3, 8):
error = 'literal'
else:
error = "dict display"
elif first == "{" and len(node.children) > 2:
if self._normalizer.version < (3, 8):
error = 'literal'
else:
error = "set display"
elif first in ('(', '['):
if second.type == 'yield_expr':
error = 'yield expression'
elif second.type == 'testlist_comp':
# ([a, b] := [1, 2])
# ((a, b) := [1, 2])
if is_namedexpr:
if first == '(':
error = 'tuple'
elif first == '[':
error = 'list'
# This is not a comprehension, they were handled
# further above.
for child in second.children[::2]:
self._check_assignment(child, is_deletion, is_namedexpr, is_aug_assign)
else: # Everything handled, must be useless brackets.
self._check_assignment(second, is_deletion, is_namedexpr, is_aug_assign)
elif type_ == 'keyword':
if node.value == "yield":
error = "yield expression"
elif self._normalizer.version < (3, 8):
error = 'keyword'
else:
error = str(node.value)
elif type_ == 'operator':
if node.value == '...':
error = 'Ellipsis'
elif type_ == 'comparison':
error = 'comparison'
elif type_ in ('string', 'number', 'strings'):
error = 'literal'
elif type_ == 'yield_expr':
# This one seems to be a slightly different warning in Python.
message = 'assignment to yield expression not possible'
self.add_issue(node, message=message)
elif type_ == 'test':
error = 'conditional expression'
elif type_ in ('atom_expr', 'power'):
if node.children[0] == 'await':
error = 'await expression'
elif node.children[-2] == '**':
error = 'operator'
else:
# Has a trailer
trailer = node.children[-1]
assert trailer.type == 'trailer'
if trailer.children[0] == '(':
error = 'function call'
elif is_namedexpr and trailer.children[0] == '[':
error = 'subscript'
elif is_namedexpr and trailer.children[0] == '.':
error = 'attribute'
elif type_ == "fstring":
if self._normalizer.version < (3, 8):
error = 'literal'
else:
error = "f-string expression"
elif type_ in ('testlist_star_expr', 'exprlist', 'testlist'):
for child in node.children[::2]:
self._check_assignment(child, is_deletion, is_namedexpr, is_aug_assign)
elif ('expr' in type_ and type_ != 'star_expr' # is a substring
or '_test' in type_
or type_ in ('term', 'factor')):
error = 'operator'
elif type_ == "star_expr":
if is_deletion:
if self._normalizer.version >= (3, 9):
error = "starred"
else:
self.add_issue(node, message="can't use starred expression here")
elif not search_ancestor(node, *_STAR_EXPR_PARENTS) and not is_aug_assign:
self.add_issue(node, message="starred assignment target must be in a list or tuple")
self._check_assignment(node.children[1])
if error is not None:
if is_namedexpr:
message = 'cannot use assignment expressions with %s' % error
else:
cannot = "can't" if self._normalizer.version < (3, 8) else "cannot"
message = ' '.join([cannot, "delete" if is_deletion else "assign to", error])
self.add_issue(node, message=message)
@ErrorFinder.register_rule(type='sync_comp_for')
class _CompForRule(_CheckAssignmentRule):
message = "asynchronous comprehension outside of an asynchronous function"
def is_issue(self, node):
expr_list = node.children[1]
if expr_list.type != 'expr_list': # Already handled.
self._check_assignment(expr_list)
return node.parent.children[0] == 'async' \
and not self._normalizer.context.is_async_funcdef()
@ErrorFinder.register_rule(type='expr_stmt')
class _ExprStmtRule(_CheckAssignmentRule):
message = "illegal expression for augmented assignment"
extended_message = "'{target}' is an " + message
def is_issue(self, node):
augassign = node.children[1]
is_aug_assign = augassign != '=' and augassign.type != 'annassign'
if self._normalizer.version <= (3, 8) or not is_aug_assign:
for before_equal in node.children[:-2:2]:
self._check_assignment(before_equal, is_aug_assign=is_aug_assign)
if is_aug_assign:
target = _remove_parens(node.children[0])
# a, a[b], a.b
if target.type == "name" or (
target.type in ("atom_expr", "power")
and target.children[1].type == "trailer"
and target.children[-1].children[0] != "("
):
return False
if self._normalizer.version <= (3, 8):
return True
else:
self.add_issue(
node,
message=self.extended_message.format(
target=_get_rhs_name(node.children[0], self._normalizer.version)
),
)
@ErrorFinder.register_rule(type='with_item')
class _WithItemRule(_CheckAssignmentRule):
def is_issue(self, with_item):
self._check_assignment(with_item.children[2])
@ErrorFinder.register_rule(type='del_stmt')
class _DelStmtRule(_CheckAssignmentRule):
def is_issue(self, del_stmt):
child = del_stmt.children[1]
if child.type != 'expr_list': # Already handled.
self._check_assignment(child, is_deletion=True)
@ErrorFinder.register_rule(type='expr_list')
class _ExprListRule(_CheckAssignmentRule):
def is_issue(self, expr_list):
for expr in expr_list.children[::2]:
self._check_assignment(expr)
@ErrorFinder.register_rule(type='for_stmt')
class _ForStmtRule(_CheckAssignmentRule):
def is_issue(self, for_stmt):
# Some of the nodes here are already used, so no else if
expr_list = for_stmt.children[1]
if expr_list.type != 'expr_list': # Already handled.
self._check_assignment(expr_list)
@ErrorFinder.register_rule(type='namedexpr_test')
class _NamedExprRule(_CheckAssignmentRule):
# namedexpr_test: test [':=' test]
def is_issue(self, namedexpr_test):
# assigned name
first = namedexpr_test.children[0]
def search_namedexpr_in_comp_for(node):
while True:
parent = node.parent
if parent is None:
return parent
if parent.type == 'sync_comp_for' and parent.children[3] == node:
return parent
node = parent
if search_namedexpr_in_comp_for(namedexpr_test):
# [i+1 for i in (i := range(5))]
# [i+1 for i in (j := range(5))]
# [i+1 for i in (lambda: (j := range(5)))()]
message = 'assignment expression cannot be used in a comprehension iterable expression'
self.add_issue(namedexpr_test, message=message)
# defined names
exprlist = list()
def process_comp_for(comp_for):
if comp_for.type == 'sync_comp_for':
comp = comp_for
elif comp_for.type == 'comp_for':
comp = comp_for.children[1]
exprlist.extend(_get_for_stmt_definition_exprs(comp))
def search_all_comp_ancestors(node):
has_ancestors = False
while True:
node = search_ancestor(node, 'testlist_comp', 'dictorsetmaker')
if node is None:
break
for child in node.children:
if child.type in _COMP_FOR_TYPES:
process_comp_for(child)
has_ancestors = True
break
return has_ancestors
# check assignment expressions in comprehensions
search_all = search_all_comp_ancestors(namedexpr_test)
if search_all:
if self._normalizer.context.node.type == 'classdef':
message = 'assignment expression within a comprehension ' \
'cannot be used in a class body'
self.add_issue(namedexpr_test, message=message)
namelist = [expr.value for expr in exprlist if expr.type == 'name']
if first.type == 'name' and first.value in namelist:
# [i := 0 for i, j in range(5)]
# [[(i := i) for j in range(5)] for i in range(5)]
# [i for i, j in range(5) if True or (i := 1)]
# [False and (i := 0) for i, j in range(5)]
message = 'assignment expression cannot rebind ' \
'comprehension iteration variable %r' % first.value
self.add_issue(namedexpr_test, message=message)
self._check_assignment(first, is_namedexpr=True)
| sserrot/champion_relationships | venv/Lib/site-packages/parso/python/errors.py | Python | mit | 48,022 | [ "VisIt" ] | 122cdc20a7e5965ad31d854408521b8ae3859cd02511d40dd5e95705e2b9dad0 |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
**********************************************************
*
* SpectraLearnPredict - SKlearn Neural Networks
* Perform Machine Learning on Spectroscopy Data.
*
* Uses: Deep Neural Networks, TensorFlow, SVM, PCA, K-Means
*
* By: Nicola Ferralis <feranick@hotmail.com>
*
***********************************************************
'''
import matplotlib
if matplotlib.get_backend() == 'TkAgg':
matplotlib.use('Agg')
import numpy as np
import sys, os.path, getopt, glob, csv
import random, time, configparser, os
from os.path import exists, splitext
from os import rename
from datetime import datetime, date
from .slp_config import *
#********************************************************************************
''' MultiLayer Perceptron - SKlearn '''
''' http://scikit-learn.org/stable/modules/neural_networks_supervised.html'''
#********************************************************************************
''' Train Neural Network - sklearn '''
#********************************************************************************
def trainNN(A, Cl, A_test, Cl_test, Root):
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.externals import joblib
    if nnDef.MLPRegressor is False:
        nnTrainedData = Root + '.nnModelC.pkl'
else:
nnTrainedData = Root + '.nnModelR.pkl'
print('==========================================================================\n')
print('\033[1m Running Neural Network: multi-layer perceptron (MLP)\033[0m')
print(' Hidden layers with neuron count:', nnDef.hidden_layers)
print(' Optimizer:',nnDef.optimizer,', Activation Fn:',nnDef.activation_function,
', L2 reg. strength: ',nnDef.l2_reg_strength)
try:
if nnDef.alwaysRetrain == False:
with open(nnTrainedData):
print(' Opening NN training model...\n')
clf = joblib.load(nnTrainedData)
else:
raise ValueError(' Force NN retraining.')
except:
#**********************************************
''' Retrain training data if not available'''
#**********************************************
if nnDef.MLPRegressor is False:
print(' Retraining NN model using MLP Classifier...')
clf = MLPClassifier(solver=nnDef.optimizer, alpha=nnDef.l2_reg_strength,
activation = nnDef.activation_function,
hidden_layer_sizes=nnDef.hidden_layers, random_state=1)
else:
print(' Retraining NN model using MLP Regressor...')
clf = MLPRegressor(solver=nnDef.optimizer, alpha=nnDef.l2_reg_strength,
hidden_layer_sizes=nnDef.hidden_layers, random_state=1)
Cl = np.array(Cl,dtype=float)
clf.fit(A, Cl)
print(" Training on the full training dataset\n")
accur = clf.score(A_test,Cl_test)
if nnDef.MLPRegressor is False:
print(' Accuracy: ',100*accur,'%\n Loss: {:.5f}'.format(clf.loss_),'\n')
else:
print(' Coefficient of determination R^2: ',accur,
'\n Loss: {:.5f}'.format(clf.loss_),'\n')
joblib.dump(clf, nnTrainedData)
return clf
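`trainNN` follows a cache-or-retrain pattern: try to load a pickled model from disk, and fall back to training (then caching) when the file is missing or retraining is forced. A minimal generic sketch of that pattern (function and file names invented, plain `pickle` standing in for `joblib`):

```python
import os
import pickle

def load_or_train(path, train_fn, force_retrain=False):
    """Load a cached model from `path`, or train and cache a new one."""
    if not force_retrain and os.path.exists(path):
        with open(path, 'rb') as fh:
            return pickle.load(fh)      # reuse the saved model
    model = train_fn()                  # retrain when no usable cache exists
    with open(path, 'wb') as fh:
        pickle.dump(model, fh)          # cache for the next run
    return model
```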
#********************************************************************************
''' Evaluate Neural Network - sklearn '''
#********************************************************************************
def predNN(clf, A, Cl, R):
if nnDef.MLPRegressor is False:
prob = clf.predict_proba(R)[0].tolist()
rosterPred = np.where(clf.predict_proba(R)[0]>nnDef.thresholdProbabilityPred/100)[0]
print('\n ==============================')
print(' \033[1mNN\033[0m - Probability >',str(nnDef.thresholdProbabilityPred),'%')
print(' ==============================')
print(' Prediction\tProbability [%]')
for i in range(rosterPred.shape[0]):
print(' ',str(np.unique(Cl)[rosterPred][i]),'\t\t',str('{:.4f}'.format(100*clf.predict_proba(R)[0][rosterPred][i])))
print(' ==============================')
predValue = clf.predict(R)[0]
predProb = round(100*max(prob),4)
print('\033[1m' + '\n Predicted classifier value (Deep Neural Networks - sklearn) = ' + str(predValue) +
' (probability = ' + str(predProb) + '%)\033[0m\n')
else:
Cl = np.array(Cl,dtype=float)
predValue = clf.predict(R)[0]
predProb = clf.score(A,Cl)
print('\033[1m' + '\n Predicted regressor value (Deep Neural Networks - sklearn) = ' + str('{:.3f}'.format(predValue)) +
' (R^2 = ' + str('{:.5f}'.format(predProb)) + ')\033[0m\n')
#**************************************
''' Neural Networks Classification Report '''
#**************************************
if nnDef.nnClassReport == True:
print(' Neural Networks Classification Report\n')
runClassReport(clf, A, Cl)
#*************************
''' Plot probabilities '''
#*************************
if plotDef.showProbPlot == True:
if nnDef.MLPRegressor is False:
plotProb(clf, R)
return predValue, predProb
| feranick/SpectralMachine | Archive/SpectraLearnPredict/SpectraLearnPredict/slp/slp_nn.py | Python | gpl-3.0 | 5,308 | [ "NEURON" ] | 04664b3c9b6b0d883c04fd0c5aceb5171488071dfe0e9a8b0dfbb42e2bf23353 |
# (c) 2014 Michael DeHaan, <michael@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import tempfile
import tarfile
from subprocess import Popen, PIPE
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_native
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.six import string_types
from ansible.playbook.role.definition import RoleDefinition
from ansible.utils.display import Display
__all__ = ['RoleRequirement']
VALID_SPEC_KEYS = [
'name',
'role',
'scm',
'src',
'version',
]
display = Display()
class RoleRequirement(RoleDefinition):
"""
Helper class for Galaxy, which is used to parse both dependencies
specified in meta/main.yml and requirements.yml files.
"""
def __init__(self):
pass
@staticmethod
def repo_url_to_role_name(repo_url):
# gets the role name out of a repo like
# http://git.example.com/repos/repo.git" => "repo"
if '://' not in repo_url and '@' not in repo_url:
return repo_url
trailing_path = repo_url.split('/')[-1]
if trailing_path.endswith('.git'):
trailing_path = trailing_path[:-4]
if trailing_path.endswith('.tar.gz'):
trailing_path = trailing_path[:-7]
if ',' in trailing_path:
trailing_path = trailing_path.split(',')[0]
return trailing_path
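For illustration, here is the same name-extraction logic as a self-contained copy with usage examples (duplicated from the static method above so it can run standalone):

```python
# Self-contained copy of RoleRequirement.repo_url_to_role_name above.
def repo_url_to_role_name(repo_url):
    if '://' not in repo_url and '@' not in repo_url:
        return repo_url                      # already a plain role name
    trailing_path = repo_url.split('/')[-1]
    if trailing_path.endswith('.git'):
        trailing_path = trailing_path[:-4]
    if trailing_path.endswith('.tar.gz'):
        trailing_path = trailing_path[:-7]
    if ',' in trailing_path:
        trailing_path = trailing_path.split(',')[0]
    return trailing_path

# repo_url_to_role_name('http://git.example.com/repos/repo.git') -> 'repo'
# repo_url_to_role_name('geerlingguy.apache') -> 'geerlingguy.apache'
```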
@staticmethod
def role_yaml_parse(role):
if isinstance(role, string_types):
name = None
scm = None
src = None
version = None
if ',' in role:
if role.count(',') == 1:
(src, version) = role.strip().split(',', 1)
elif role.count(',') == 2:
(src, version, name) = role.strip().split(',', 2)
else:
raise AnsibleError("Invalid role line (%s). Proper format is 'role_name[,version[,name]]'" % role)
else:
src = role
if name is None:
name = RoleRequirement.repo_url_to_role_name(src)
if '+' in src:
(scm, src) = src.split('+', 1)
return dict(name=name, src=src, scm=scm, version=version)
if 'role' in role:
name = role['role']
if ',' in name:
raise AnsibleError("Invalid old style role requirement: %s" % name)
else:
del role['role']
role['name'] = name
else:
role = role.copy()
        if 'src' in role:
# New style: { src: 'galaxy.role,version,name', other_vars: "here" }
if 'github.com' in role["src"] and 'http' in role["src"] and '+' not in role["src"] and not role["src"].endswith('.tar.gz'):
role["src"] = "git+" + role["src"]
if '+' in role["src"]:
(scm, src) = role["src"].split('+')
role["scm"] = scm
role["src"] = src
if 'name' not in role:
role["name"] = RoleRequirement.repo_url_to_role_name(role["src"])
if 'version' not in role:
role['version'] = ''
if 'scm' not in role:
role['scm'] = None
for key in list(role.keys()):
if key not in VALID_SPEC_KEYS:
role.pop(key)
return role
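The comma-separated string form handled above ("src[,version[,name]]", with an optional "scm+" prefix on the source) can be sketched standalone. This simplified reimplementation of just the string branch (helper name invented, URL handling condensed) shows how the pieces combine:

```python
# Simplified sketch of the string branch of role_yaml_parse above.
def parse_role_string(role):
    name = scm = version = None
    if ',' in role:
        parts = role.strip().split(',')
        if len(parts) == 2:
            src, version = parts
        elif len(parts) == 3:
            src, version, name = parts
        else:
            raise ValueError('invalid role line: %s' % role)
    else:
        src = role
    if name is None:
        if '://' not in src and '@' not in src:
            name = src                        # plain Galaxy-style name
        else:
            tail = src.split('/')[-1]         # derive name from the URL
            name = tail[:-4] if tail.endswith('.git') else tail
    if '+' in src:
        scm, src = src.split('+', 1)          # e.g. 'git+https://...'
    return dict(name=name, src=src, scm=scm, version=version)

# parse_role_string('git+https://github.com/foo/bar.git,v1.0')
# -> name 'bar', scm 'git', src 'https://github.com/foo/bar.git', version 'v1.0'
```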
@staticmethod
def scm_archive_role(src, scm='git', name=None, version='HEAD', keep_scm_meta=False):
def run_scm_cmd(cmd, tempdir):
try:
stdout = ''
stderr = ''
popen = Popen(cmd, cwd=tempdir, stdout=PIPE, stderr=PIPE)
stdout, stderr = popen.communicate()
except Exception as e:
ran = " ".join(cmd)
display.debug("ran %s:" % ran)
display.debug("\tstdout: " + stdout)
display.debug("\tstderr: " + stderr)
raise AnsibleError("when executing %s: %s" % (ran, to_native(e)))
if popen.returncode != 0:
raise AnsibleError("- command %s failed in directory %s (rc=%s)" % (' '.join(cmd), tempdir, popen.returncode))
if scm not in ['hg', 'git']:
raise AnsibleError("- scm %s is not currently supported" % scm)
try:
scm_path = get_bin_path(scm)
except (ValueError, OSError, IOError):
raise AnsibleError("could not find/use %s, it is required to continue with installing %s" % (scm, src))
tempdir = tempfile.mkdtemp(dir=C.DEFAULT_LOCAL_TMP)
clone_cmd = [scm_path, 'clone', src, name]
run_scm_cmd(clone_cmd, tempdir)
if scm == 'git' and version:
checkout_cmd = [scm_path, 'checkout', version]
run_scm_cmd(checkout_cmd, os.path.join(tempdir, name))
temp_file = tempfile.NamedTemporaryFile(delete=False, suffix='.tar', dir=C.DEFAULT_LOCAL_TMP)
archive_cmd = None
if keep_scm_meta:
display.vvv('tarring %s from %s to %s' % (name, tempdir, temp_file.name))
with tarfile.open(temp_file.name, "w") as tar:
tar.add(os.path.join(tempdir, name), arcname=name)
elif scm == 'hg':
archive_cmd = [scm_path, 'archive', '--prefix', "%s/" % name]
if version:
archive_cmd.extend(['-r', version])
archive_cmd.append(temp_file.name)
elif scm == 'git':
archive_cmd = [scm_path, 'archive', '--prefix=%s/' % name, '--output=%s' % temp_file.name]
if version:
archive_cmd.append(version)
else:
archive_cmd.append('HEAD')
if archive_cmd is not None:
display.vvv('archiving %s' % archive_cmd)
run_scm_cmd(archive_cmd, os.path.join(tempdir, name))
return temp_file.name
| onitake/ansible | lib/ansible/playbook/role/requirement.py | Python | gpl-3.0 | 6,765 | [ "Galaxy" ] | b96f79399731de110c804b51a34af0a21ec92c105b7202d16eddf804b2e2a086 |
'''Functions that implement the prior for the spline coefficients of
an isentrope for F_UNCLE
'''
import numpy as np
def k_g(
tau, # log of ratio of volumes
alpha, # Characteristic length of correlation
beta # log of fractional uncertainty
):
'''Kernel function in log-log coordinates is beta times a normalized
Gaussian with width alpha.
'''
return beta/(alpha*np.sqrt(2*np.pi)) * np.exp(-tau**2/(2*alpha**2))
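Since `k_g` is `beta` times a normalized Gaussian of width `alpha`, it peaks at `beta/(alpha*sqrt(2*pi))` at `tau = 0` and integrates to `beta` over all `tau`. A quick numerical check (parameter values invented; the small `trapz` helper avoids NumPy-version differences):

```python
import numpy as np

def k_g(tau, alpha, beta):
    # Same kernel as above: beta times a normalized Gaussian of width alpha
    return beta / (alpha * np.sqrt(2 * np.pi)) * np.exp(-tau**2 / (2 * alpha**2))

def trapz(y, x):
    # plain trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha, beta = 0.05, 0.05
peak = k_g(0.0, alpha, beta)              # = beta / (alpha * sqrt(2*pi))
tau = np.linspace(-1.0, 1.0, 20001)
area = trapz(k_g(tau, alpha, beta), tau)  # integrates to ~beta
```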
def mu_v(
v, # Volume (scalar float or np array)
c=1.0, # Constant
gamma=3 # Exponent of density for pressure
):
''' Nominal pressure function in p,v coordinates
'''
return c/np.power(v,gamma)
def test_mu_v():
'''Demonstrates that mu_v works on vectors
'''
v = np.linspace(1.0,10.0,20)
p = mu_v(v)
return(0)
def k_v(
v, # First specific volume
v_, # Second specific volume
alpha=0.05, # Characteristic length of correlation
beta=0.05 # log of fractional uncertainty
):
''' Kernel function in p,v coordinates
'''
kg_tau = k_g(np.log(v/v_), alpha, beta)
return mu_v(v)*mu_v(v_)*kg_tau
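With the defaults `c=1.0`, `gamma=3`, `mu_v(2.0) = 1/2**3 = 0.125`; and because `k_g` depends only on `tau**2`, swapping the two volumes in `k_v` leaves the kernel unchanged. A quick check (functions duplicated from above so the example runs standalone):

```python
import numpy as np

def mu_v(v, c=1.0, gamma=3):
    return c / np.power(v, gamma)

def k_g(tau, alpha, beta):
    return beta / (alpha * np.sqrt(2 * np.pi)) * np.exp(-tau**2 / (2 * alpha**2))

def k_v(v, v_, alpha=0.05, beta=0.05):
    return mu_v(v) * mu_v(v_) * k_g(np.log(v / v_), alpha, beta)

assert abs(mu_v(2.0) - 0.125) < 1e-12                # c / v**gamma
assert abs(k_v(1.0, 1.1) - k_v(1.1, 1.0)) < 1e-12    # symmetric kernel
```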
def inner_k(
f, # First function
g, # Second function
k, # Kernel
v_min=1.0, # Lower bound for integration
v_max=10.0 # Upper bound for integration
):
    r'''Calculate and return the inner product:
<f|k|g> = \int_v_min^v_max dx \int_v_min^v_max dy f(x)k(x,y)g(y)
'''
import scipy.integrate
def first(f,k,v):
        r''' \int_v_min^v_max f(x)k(x,v) dx
'''
return scipy.integrate.quad(lambda x: f(x)*k(x,v), v_min, v_max)[0]
return scipy.integrate.quad(lambda y: first(f,k,y) * g(y), v_min, v_max)[0]
def gram(
funcs, # List of basis functions
k, # kernel
v_min=1.0,
v_max=10.0,
):
    r'''Calculate and return the Gram matrix of the functions with the
inner product:
<f|k|g> = \int_v_min^v_max dx \int_v_min^v_max dy f(x)k(x,y)g(y)
'''
n = len(funcs)
rv = np.empty((n,n))
for i,f in enumerate(funcs):
for j,g in enumerate(funcs):
if j < i: # Exploit symmetry because inner_k is slow
rv[i,j] = rv[j,i]
else:
print(i, j)
rv[i,j] = inner_k(f,g,k)
# end
return rv
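`gram` is slow because each entry needs a nested quadrature, and the `rv[i,j] = rv[j,i]` shortcut is valid because the kernel is symmetric. A cheap discrete analogue with the invented separable kernel `k(x, y) = x*y` on `[0, 1]`, where the double integral factors into a product of one-dimensional moments:

```python
import numpy as np

def trapz(y, x):
    # plain trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# With the separable kernel k(x, y) = x*y the double integral factors:
# <f|k|g> = (integral of f(x)*x dx) * (integral of g(y)*y dy).
x = np.linspace(0.0, 1.0, 2001)
funcs = [np.ones_like(x), x]                  # basis: f0(x)=1, f1(x)=x
moments = [trapz(f * x, x) for f in funcs]    # m_i = integral f_i(x) x dx
G = np.array([[a * b for b in moments] for a in moments])
# G is symmetric with G[0, 0] = 0.25, mirroring the rv[i, j] = rv[j, i]
# shortcut exploited in gram() above.
```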
#---------------
# Local Variables:
# eval: (python-mode)
# End:
| fraserphysics/F_UNCLE | F_UNCLE/Models/gram.py | Python | gpl-2.0 | 2,492 | [ "Gaussian" ] | 6d6abf7c8512e6bd6fc5be848536e6e2e61900c9aee4e9bc23eef3368a010607 |
"""
Functionality to read and write the Newick serialization format for trees.
.. seealso:: https://en.wikipedia.org/wiki/Newick_format
"""
import re
import pathlib
__version__ = "1.0.1.dev0"
RESERVED_PUNCTUATION = ':;,()'
COMMENT = re.compile(r'\[[^\]]*\]')
def length_parser(x):
return float(x or 0.0)
def length_formatter(x):
return '%s' % x
class Node(object):
"""
A Node may be a tree, a subtree or a leaf.
A Node has optional name and length (from parent) and a (possibly empty) list of
descendants. It further has an ancestor, which is *None* if the node is the
root node of a tree.
"""
def __init__(self, name=None, length=None, **kw):
"""
:param name: Node label.
:param length: Branch length from the new node to its parent.
:param kw: Recognized keyword arguments:\
`length_parser`: Custom parser for the `length` attribute of a Node.\
`length_formatter`: Custom formatter for the branch length when formatting a\
Node as Newick string.
"""
for char in RESERVED_PUNCTUATION:
if (name and char in name) or (length and char in length):
raise ValueError(
'Node names or branch lengths must not contain "%s"' % char)
self.name = name
self._length = length
self.descendants = []
self.ancestor = None
self._length_parser = kw.pop('length_parser', length_parser)
self._length_formatter = kw.pop('length_formatter', length_formatter)
def __repr__(self):
return 'Node("%s")' % self.name
@property
def length(self):
return self._length_parser(self._length)
@length.setter
def length(self, l):
if l is None:
self._length = l
else:
self._length = self._length_formatter(l)
@classmethod
def create(cls, name=None, length=None, descendants=None, **kw):
"""
Create a new `Node` object.
:param name: Node label.
:param length: Branch length from the new node to its parent.
:param descendants: list of descendants or `None`.
:param kw: Additonal keyword arguments are passed through to `Node.__init__`.
:return: `Node` instance.
"""
node = cls(name=name, length=length, **kw)
for descendant in descendants or []:
node.add_descendant(descendant)
return node
def add_descendant(self, node):
node.ancestor = self
self.descendants.append(node)
@property
def newick(self):
"""The representation of the Node in Newick format."""
label = self.name or ''
if self._length:
label += ':' + self._length
descendants = ','.join([n.newick for n in self.descendants])
if descendants:
descendants = '(' + descendants + ')'
return descendants + label
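The `newick` property recurses bottom-up: each node renders its children inside parentheses, then appends its own label and optional `:length`. A standalone sketch of that assembly over plain tuples (the helper and tuple format are invented stand-ins for `Node` objects):

```python
# Standalone sketch of the serialization done by Node.newick above;
# trees are (name, length, children) tuples instead of Node objects.
def to_newick(node):
    name, length, children = node
    label = (name or '') + ((':' + length) if length else '')
    inner = ','.join(to_newick(c) for c in children)
    return ('(' + inner + ')' if inner else '') + label

tree = ('root', None, [('A', '1.0', []), ('B', '2.0', [])])
# to_newick(tree) + ';' -> '(A:1.0,B:2.0)root;'
```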
def _ascii_art(self, char1='\u2500', show_internal=True, maxlen=None):
if maxlen is None:
maxlen = max(
len(n.name) for n in self.walk()
if n.name and (show_internal or n.is_leaf)) + 4
pad = ' ' * (maxlen - 1)
namestr = '\u2500' + (self.name or '')
if self.descendants:
mids = []
result = []
for i, c in enumerate(self.descendants):
if len(self.descendants) == 1:
char2 = '\u2500'
elif i == 0:
char2 = '\u250c'
elif i == len(self.descendants) - 1:
char2 = '\u2514'
else:
char2 = '\u2500'
clines, mid = c._ascii_art(
char1=char2, show_internal=show_internal, maxlen=maxlen)
mids.append(mid + len(result))
result.extend(clines)
result.append('')
result.pop()
lo, hi, end = mids[0], mids[-1], len(result)
prefixes = [pad] * (lo + 1) +\
[pad + '\u2502'] * (hi - lo - 1) + \
[pad] * (end - hi)
mid = (lo + hi) // 2
prefixes[mid] = char1 + '\u2500' * (len(prefixes[mid]) - 2) + prefixes[mid][-1]
result = [p + l for p, l in zip(prefixes, result)]
if show_internal:
stem = result[mid]
result[mid] = stem[0] + namestr + stem[len(namestr) + 1:]
return result, mid
return [char1 + namestr], 0
def ascii_art(self, strict=False, show_internal=True):
"""
Return a unicode string representing a tree in ASCII art fashion.
:param strict: Use ASCII characters strictly (for the tree symbols).
:param show_internal: Show labels of internal nodes.
:return: unicode string
        >>> node = loads('((A,B)C,((D,E)F,G,H)I)J;')[0]
        >>> print(node.ascii_art(show_internal=False, strict=True))
                        /-A
                    /---|
                    |   \-B
                ----|       /-D
                    |   /---|
                    |   |   \-E
                    \---|
                        |-G
                        \-H
"""
cmap = {
'\u2500': '-',
'\u2502': '|',
'\u250c': '/',
'\u2514': '\\',
'\u251c': '|',
'\u2524': '|',
'\u253c': '+',
}
def normalize(line):
m = re.compile('(?<=\u2502)(?P<s>\s+)(?=[\u250c\u2514\u2502])')
line = m.sub(lambda m: m.group('s')[1:], line)
line = re.sub('\u2500\u2502', '\u2500\u2524', line) # -|
line = re.sub('\u2502\u2500', '\u251c', line) # |-
line = re.sub('\u2524\u2500', '\u253c', line) # -|-
if strict:
for u, a in cmap.items():
line = line.replace(u, a)
return line
return '\n'.join(
normalize(l) for l in self._ascii_art(show_internal=show_internal)[0]
if set(l) != {' ', '\u2502'}) # remove lines of only spaces and pipes
@property
def is_leaf(self):
return not bool(self.descendants)
@property
def is_binary(self):
return all([len(n.descendants) in (0, 2) for n in self.walk()])
def walk(self, mode=None):
"""
Traverses the (sub)tree rooted at self, yielding each visited Node.
.. seealso:: https://en.wikipedia.org/wiki/Tree_traversal
:param mode: Specifies the algorithm to use when traversing the subtree rooted \
at self. `None` for breadth-first, `'postorder'` for post-order depth-first \
search.
:return: Generator of the visited Nodes.
"""
if mode == 'postorder':
for n in self._postorder():
yield n
else: # default to a breadth-first search
yield self
for node in self.descendants:
for n in node.walk():
yield n
def visit(self, visitor, predicate=None, **kw):
"""
Apply a function to matching nodes in the (sub)tree rooted at self.
:param visitor: A callable accepting a Node object as single argument..
:param predicate: A callable accepting a Node object as single argument and \
returning a boolean signaling whether Node matches; if `None` all nodes match.
:param kw: Addtional keyword arguments are passed through to self.walk.
"""
predicate = predicate or bool
for n in self.walk(**kw):
if predicate(n):
visitor(n)
def _postorder(self):
stack = [self]
descendant_map = {id(node): [n for n in node.descendants] for node in self.walk()}
while stack:
node = stack[-1]
descendants = descendant_map[id(node)]
# if we are at a leave-node, we remove the item from the stack
if not descendants:
stack.pop()
yield node
if stack:
descendant_map[id(stack[-1])].pop(0)
else:
stack.append(descendants[0])
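`_postorder` yields every descendant before its parent, using an explicit stack plus a per-node list of unvisited children to avoid recursion. The recursive equivalent over plain tuples (helper and tuple format invented for illustration):

```python
# Recursive equivalent of the iterative _postorder traversal above;
# nodes are (name, children) tuples for illustration.
def postorder(node):
    name, children = node
    for child in children:
        yield from postorder(child)      # visit children first ...
    yield name                           # ... then the node itself

tree = ('root', [('A', []), ('B', [('C', [])])])
# list(postorder(tree)) -> ['A', 'C', 'B', 'root']
```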
def get_leaves(self):
"""
Get all the leaf nodes of the subtree descending from this node.
:return: List of Nodes with no descendants.
"""
return [n for n in self.walk() if n.is_leaf]
def get_node(self, label):
"""
Gets the specified node by name.
:return: Node or None if name does not exist in tree
"""
for n in self.walk():
if n.name == label:
return n
def get_leaf_names(self):
"""
Get the names of all the leaf nodes of the subtree descending from
this node.
:return: List of names of Nodes with no descendants.
"""
return [n.name for n in self.get_leaves()]
def prune(self, leaves, inverse=False):
"""
Remove all those nodes in the specified list, or if inverse=True,
remove all those nodes not in the specified list. The specified nodes
must be leaves and distinct from the root node.
:param nodes: A list of Node objects
:param inverse: Specifies whether to remove nodes in the list or not\
in the list.
"""
self.visit(
lambda n: n.ancestor.descendants.remove(n),
# We won't prune the root node, even if it is a leave and requested to
# be pruned!
lambda n: ((not inverse and n in leaves) or
(inverse and n.is_leaf and n not in leaves)) and n.ancestor,
mode="postorder")
def prune_by_names(self, leaf_names, inverse=False):
"""
Perform an (inverse) prune, with leaves specified by name.
:param node_names: A list of leaaf Node names (strings)
:param inverse: Specifies whether to remove nodes in the list or not\
in the list.
"""
self.prune([l for l in self.walk() if l.name in leaf_names], inverse)
def remove_redundant_nodes(self, preserve_lengths=True):
"""
Remove all nodes which have only a single child, and attach their
grandchildren to their parent. The resulting tree has the minimum
number of internal nodes required for the number of leaves.
:param preserve_lengths: If true, branch lengths of removed nodes are \
added to those of their children.
"""
for n in self.walk(mode='postorder'):
while n.ancestor and len(n.ancestor.descendants) == 1:
grandfather = n.ancestor.ancestor
father = n.ancestor
if preserve_lengths:
n.length += father.length
if grandfather:
for i, child in enumerate(grandfather.descendants):
if child is father:
del grandfather.descendants[i]
grandfather.add_descendant(n)
father.ancestor = None
else:
self.descendants = n.descendants
if preserve_lengths:
self.length = n.length
def resolve_polytomies(self):
"""
Insert additional nodes with length=0 into the subtree in such a way
that all non-leaf nodes have only 2 descendants, i.e. the tree becomes
a fully resolved binary tree.
"""
def _resolve_polytomies(n):
new = Node(length=self._length_formatter(self._length_parser('0')))
while len(n.descendants) > 1:
new.add_descendant(n.descendants.pop())
n.descendants.append(new)
self.visit(_resolve_polytomies, lambda n: len(n.descendants) > 2)
def remove_names(self):
"""
Set the name of all nodes in the subtree to None.
"""
self.visit(lambda n: setattr(n, 'name', None))
def remove_internal_names(self):
"""
Set the name of all non-leaf nodes in the subtree to None.
"""
self.visit(lambda n: setattr(n, 'name', None), lambda n: not n.is_leaf)
def remove_leaf_names(self):
"""
Set the name of all leaf nodes in the subtree to None.
"""
self.visit(lambda n: setattr(n, 'name', None), lambda n: n.is_leaf)
def remove_lengths(self):
"""
Set the length of all nodes in the subtree to None.
"""
self.visit(lambda n: setattr(n, 'length', None))
def loads(s, strip_comments=False, **kw):
"""
Load a list of trees from a Newick formatted string.
:param s: Newick formatted string.
:param strip_comments: Flag signaling whether to strip comments enclosed in square \
brackets.
:param kw: Keyword arguments are passed through to `Node.create`.
:return: List of Node objects.
"""
kw['strip_comments'] = strip_comments
return [parse_node(ss.strip(), **kw) for ss in s.split(';') if ss.strip()]
def dumps(trees):
"""
Serialize a list of trees in Newick format.
:param trees: List of Node objects or a single Node object.
:return: Newick formatted string.
"""
if isinstance(trees, Node):
trees = [trees]
return ';\n'.join([tree.newick for tree in trees]) + ';'
def load(fp, strip_comments=False, **kw):
"""
Load a list of trees from an open Newick formatted file.
:param fp: open file handle.
:param strip_comments: Flag signaling whether to strip comments enclosed in square \
brackets.
:param kw: Keyword arguments are passed through to `Node.create`.
:return: List of Node objects.
"""
kw['strip_comments'] = strip_comments
return loads(fp.read(), **kw)
def dump(tree, fp):
fp.write(dumps(tree))
def read(fname, encoding='utf8', strip_comments=False, **kw):
"""
Load a list of trees from a Newick formatted file.
:param fname: file path.
:param strip_comments: Flag signaling whether to strip comments enclosed in square \
brackets.
:param kw: Keyword arguments are passed through to `Node.create`.
:return: List of Node objects.
"""
kw['strip_comments'] = strip_comments
with pathlib.Path(fname).open(encoding=encoding) as fp:
return load(fp, **kw)
def write(tree, fname, encoding='utf8'):
with pathlib.Path(fname).open(encoding=encoding, mode='w') as fp:
dump(tree, fp)
def _parse_name_and_length(s):
length = None
if ':' in s:
s, length = s.split(':', 1)
return s or None, length or None
def _parse_siblings(s, **kw):
"""
http://stackoverflow.com/a/26809037
"""
bracket_level = 0
current = []
# trick to remove special-case of trailing chars
for c in (s + ","):
if c == "," and bracket_level == 0:
yield parse_node("".join(current), **kw)
current = []
else:
if c == "(":
bracket_level += 1
elif c == ")":
bracket_level -= 1
current.append(c)
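The bracket-level splitting loop above can be exercised on its own; the helper below is a self-contained sketch of the same idea, yielding the raw sibling substrings instead of parsed `Node` objects:

```python
def split_siblings(s):
    """Split a Newick sibling list on top-level commas only,
    tracking parenthesis depth exactly as _parse_siblings does."""
    bracket_level = 0
    current = []
    for c in s + ",":  # trailing comma flushes the last sibling
        if c == "," and bracket_level == 0:
            yield "".join(current)
            current = []
        else:
            if c == "(":
                bracket_level += 1
            elif c == ")":
                bracket_level -= 1
            current.append(c)
```

For example, `list(split_siblings("A,(B,C):1,D"))` gives `["A", "(B,C):1", "D"]` — the nested comma inside `(B,C)` is not a split point.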
def parse_node(s, strip_comments=False, **kw):
"""
Parse a Newick formatted string into a `Node` object.
:param s: Newick formatted string to parse.
:param strip_comments: Flag signaling whether to strip comments enclosed in square \
brackets.
:param kw: Keyword arguments are passed through to `Node.create`.
:return: `Node` instance.
"""
if strip_comments:
s = COMMENT.sub('', s)
s = s.strip()
parts = s.split(')')
if len(parts) == 1:
descendants, label = [], s
else:
if not parts[0].startswith('('):
raise ValueError('unmatched braces %s' % parts[0][:100])
descendants = list(_parse_siblings(')'.join(parts[:-1])[1:], **kw))
label = parts[-1]
name, length = _parse_name_and_length(label)
return Node.create(name=name, length=length, descendants=descendants, **kw)
| glottobank/python-newick | src/newick.py | Python | apache-2.0 | 16,032 | [
"VisIt"
] | 48e84cf1c5e857a8e7e0153ea3883404e4626d0366e85c23d95d67868140ff41 |
import click
import pysam
import gzip
import sys
@click.command()
@click.option('--baf', help='path for bamfile')
@click.option('--rf', help='path for readfile')
@click.option('--out', help='out file path directory')
def subsample(rf, baf, out):
readList = set([])
bam = pysam.AlignmentFile(baf, 'rb')
count = 0
for line in bam:
count +=1
if(count % 500000 == 0):
print("\r Done parsing " + str(count) + " alignments"),
readList.add(line.qname.strip())
bam.close()
print("\nDone reading BAM file\n")
count = 0
with gzip.open(rf, 'rb') as rfile, \
gzip.open(out+"_reads.fastq.gz", 'wb') as wfile:
for rline in rfile:
count +=1
if(count % 500000 == 0):
print("\r Done dumping " + str(count) + " reads"),
sys.stdout.flush()
if rline.strip().replace('@','') in readList:
wfile.write(rline)
for _ in range(3):
rline = rfile.next()
wfile.write(rline)
if __name__=="__main__":
subsample()
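The dumping loop above depends on pysam and gzip; the core rule — keep a 4-line FASTQ record whenever its header is in the name set — can be sketched without those dependencies (an illustration, not the script's exact I/O; it assumes well-formed 4-line records):

```python
def filter_fastq(lines, names):
    """Yield the 4-line FASTQ records whose header (with '@' removed)
    is in `names` -- the same selection rule as the loop above."""
    it = iter(lines)
    for header in it:
        # group header + sequence + separator + qualities
        record = [header, next(it), next(it), next(it)]
        if header.strip().replace('@', '') in names:
            for line in record:
                yield line
```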
| k3yavi/alevin | testing/src-py/subsampleExReadsByBam.py | Python | gpl-3.0 | 1,148 | [
"pysam"
] | 15874272a06134fe6dd046e555596159b8844f9a35439a6341fe421f5add7ca0 |
# Internal HTSeq functions, not part of the API
import HTSeq
import numpy
def GenomicInterval_range(gi, step):
for pos in range(gi.start, gi.end, step):
yield HTSeq.GenomicPosition(gi.chrom, pos, gi.strand)
def GenomicInterval_xranged(gi, step):
if gi.strand == "-":
step *= -1
for pos in range(gi.start_d, gi.end_d, step):
yield HTSeq.GenomicPosition(gi.chrom, pos, gi.strand)
def ChromVector_steps(cv):
if isinstance(cv.array, numpy.ndarray):
start = cv.iv.start
prev_val = None
for i in range(cv.iv.start, cv.iv.end):
val = cv.array[i - cv.offset]
if prev_val is None or val != prev_val:
if prev_val is not None:
yield (HTSeq.GenomicInterval(cv.iv.chrom, start, i, cv.iv.strand), prev_val)
prev_val = val
start = i
yield (HTSeq.GenomicInterval(
cv.iv.chrom, start, cv.iv.end, cv.iv.strand), prev_val,
)
elif isinstance(cv.array, HTSeq.StepVector.StepVector):
for start, stop, value in cv.array[cv.iv.start:cv.iv.end].get_steps():
yield (HTSeq.GenomicInterval(
cv.iv.chrom, start, stop, cv.iv.strand), value,
)
else:
raise SystemError("Unknown array type.")
def GenomicArray_steps(ga):
for a in list(ga.chrom_vectors.values()):
for cv in list(a.values()):
for iv, val in cv.steps():
yield iv, val
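`ChromVector_steps` collapses consecutive equal values into `(interval, value)` steps; the same run-length idea on a plain sequence looks like this (a standalone sketch, not HTSeq API; values are assumed non-None):

```python
def run_length_steps(values, start=0):
    """Collapse consecutive equal values into (begin, end, value) runs,
    mirroring the ndarray branch of ChromVector_steps."""
    run_start, prev = start, None
    for i, val in enumerate(values, start):
        if prev is not None and val != prev:
            yield (run_start, i, prev)
            run_start = i
        prev = val
    if prev is not None:
        yield (run_start, start + len(values), prev)
```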
| simon-anders/htseq | python3/HTSeq/_HTSeq_internal.py | Python | gpl-3.0 | 1,504 | [
"HTSeq"
] | 55809a87c7dd9e0fd668b224ab0995e057004e1cb5e585ef8b0c655ebe89c2f3 |
# -*- coding: utf-8 -*-
from __future__ import division, print_function
import h5py
import six
import re
import collections
import copy
import numpy as np
import pint
u = pint.UnitRegistry()
# Probably easier to get a list of all the attrs, then parse appropriately.
def h5ls_str(g, offset='', print_types=False):
    """Return a string describing the input file/group/dataset (g) and,
    recursively, its contents.
    See goo.gl/2JiUQK."""
string = []
if isinstance(g, h5py.File):
string.append(offset+repr(g.file))
elif isinstance(g, h5py.Dataset):
if print_types:
string.append(offset+g.name+' '+repr(g.shape)+' '+(g.dtype.str))
else:
string.append(offset+g.name+' '+repr(g.shape))
elif isinstance(g, h5py.Group):
string.append(offset+g.name)
else:
raise ValueError('WARNING: UNKNOWN ITEM IN HDF5 FILE'+g.name)
if isinstance(g, h5py.File) or isinstance(g, h5py.Group):
        for key, subg in dict(g).items():
string.append(h5ls_str(subg, offset + ' ',
print_types=print_types))
return "\n".join(string)
def h5ls(*args):
"""List the contents of an HDF5 file object or group.
Accepts a file / group handle, or a string interpreted as the hdf5
file path."""
for arg in args:
if isinstance(arg, six.string_types):
fh = h5py.File(arg, mode='r')
print(h5ls_str(fh))
fh.close()
else:
print(h5ls_str(arg))
def create_attr_dictionary(f):
d = {}
def visitarg(key, ds):
if isinstance(ds, h5py.Dataset):
d[key] = dict(ds.attrs.items())
f.visititems(visitarg)
return d
permissive = {u'name', u'unit', u'label'}
latex = {u'name', u'unit', u'label', u'label_latex'}
def missing_attrs(attr_dictionary, attr_names):
"""Gives a dictionary of missing attributes"""
mandatory = set(attr_names)
missing_attrs = {}
for ds_name, ds_attrs_dict in attr_dictionary.items():
ds_attrs_keys = set(ds_attrs_dict.keys())
missing_mandatory = mandatory.difference(ds_attrs_keys)
if missing_mandatory:
missing_attrs[ds_name] = tuple(missing_mandatory)
return missing_attrs
def is_container(obj):
"""Check that an object is a container, but not a string."""
return hasattr(obj, '__iter__') and not isinstance(obj, str)
def make_quantity(quantity_or_string):
if isinstance(quantity_or_string, pint.compat.string_types):
return u(quantity_or_string)
else:
return quantity_or_string
# TODO: Remove this ugly hack for pretty printing
# ------------------------------------------------------------------------
# This entire section is really repetitive, contains lots of duplicated work
def get_label_unit(quantity):
q = make_quantity(quantity)
return "".join(u"{0:P~}".format(q).split(' ')[1:]).replace('u', u'µ')
def get_unit_attr(quantity):
q = make_quantity(quantity)
return "".join(u"{0:~}".format(q).split(' ')[1:]).replace('**', '^')
def get_label_unit_substring(label):
try:
return re.search('[(\[][[\s\S]+[)\]]', label).group(0)[1:-1]
except AttributeError:
raise AttributeError("Could not find a unit substring in the label.")
def replace_unit_label_ascii(label, quantity):
return label.replace(
get_label_unit_substring(label), get_unit_attr(quantity))
def replace_unit_label(label, quantity):
return label.replace(
get_label_unit_substring(label), get_label_unit(quantity))
def replace_latex_label(label_latex, quantity):
q = make_quantity(quantity)
label_unit_substring = get_label_unit_substring(label_latex)
new_label = "".join(u"{0:L~}".format(q).split(' ')[1:])
substrings = re.findall('[a-zA-Z]+', new_label)
for s in substrings:
if s not in ('frac', 'sqrt'):
if s.startswith('u'):
new_label = new_label.replace(
s, "\\mu\\mathrm{{{s}}}".format(s=s[1:]))
else:
new_label = new_label.replace(s, "\\mathrm{{{s}}}".format(s=s))
return label_latex.replace(label_unit_substring, new_label)
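The three `replace_*_label` helpers all rely on `get_label_unit_substring` finding the bracketed unit at the end of an axis label. The mechanics can be shown without pint (a hedged sketch — the real helpers format the replacement through pint's `P~`/`~`/`L~` format codes):

```python
import re

def swap_label_unit(label, new_unit):
    """Swap the unit inside the trailing (...) or [...] of an axis label.

    Standalone sketch of the replace_* helpers above; the regex is a
    cleaned-up version of the one in get_label_unit_substring.  Like the
    originals, str.replace hits the first occurrence of the old unit
    string anywhere in the label.
    """
    match = re.search(r'[(\[][\s\S]+[)\]]', label)
    if match is None:
        raise AttributeError("Could not find a unit substring in the label.")
    old_unit = match.group(0)[1:-1]
    return label.replace(old_unit, new_unit)
```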
# ---------------------------------------------------------------------------
def iterable(x):
"""True if x is an iterable other than a string: some sort of list-like
container"""
# Not sure whether this works on Python3; does it capture both bytes and
# unicode?
if isinstance(x, str):
return False
else:
return isinstance(x, collections.Iterable)
def nested_iterable(x):
"""Return true if x is (at least) list of lists, or a 2D numpy array, or
list of 1D numpy arrays.
Raises a TypeError if passed a non-iterable."""
return all(iterable(i) for i in x)
def make_nested_array(x):
if nested_iterable(x):
return np.array(x)
else:
return np.array([x])
def replicate(x, y, magic):
x = make_nested_array(x)
y = make_nested_array(y)
if magic is None:
magic = np.array([magic])
else:
magic = np.array(magic)
if len(y.shape) > 1:
x_r = np.resize(x, y.shape)
magic_r = np.resize(magic, y.shape[0])
else:
x_r = x
magic_r = magic
return zip(x_r, y, magic_r)
def h5_list(f):
def print_all(name):
"""Don't ever return a value, just none, so that we walk through
all values of the file"""
print(name)
return None
f.visit(print_all)
def silent_del(f, key):
"""Delete 'key' from the hdf5 file f, if it exists. If not, do nothing."""
try:
del f[key]
except KeyError:
pass
def update_attrs(attrs, dict_of_attrs):
    for key, val in dict_of_attrs.items():
attrs[key] = val
| ryanpdwyer/hdf5plotter | hdf5plotter/_util.py | Python | mit | 5,826 | [
"VisIt"
] | 8e4f1f03788c2af1fa47764d144273aac5c59f7d072e81d178cab63c4356bbe1 |
from views import *
from lookups import *
import requests
import re
from utils import *
import itertools
from config import config
if config.IMPORT_PYSAM_PRIMER3:
import pysam
import csv
#hpo lookup
import random
from flask import Response, request
import os
from werkzeug.datastructures import Headers
import re
@app.route('/bam_viewer/')
def bam_viewer():
return render_template('igv_viewer.html')
@app.route('/read_viz/bam/<sample>')
def read_viz(sample):
BAM_FILES=app.config['BAM_FILES']
print(request.method)
headers=Headers()
#headers.add('Content-Type','application/octet-stream')
headers.add('Content-Transfer-Encoding','binary')
#Date:Wed, 06 Jul 2016 17:19:52 GMT
#ETag:"flask-1446310274.0-12661331-649139018"
#Expires:Thu, 07 Jul 2016 05:19:52 GMT
#Keep-Alive:timeout=5, max=93
#Last-Modified:Sat, 31 Oct 2015 16:51:14 GMT
headers.add('Accept-Ranges', 'bytes')
#Server:Apache/2.4.12 (Red Hat) mod_wsgi/3.4 Python/2.7.8
headers.add('X-Frame-Options','SAMEORIGIN')
if sample=='gencode.v19.sorted.bed':
bamfile=BAM_FILES+'/gencode.v19.sorted.bed'
elif sample=='gencode.v19.sorted.bed.idx':
bamfile=BAM_FILES+'/gencode.v19.sorted.bed.idx'
elif sample.endswith('.bai'):
bamfile=BAM_FILES+'/%s.bam.bai' % sample
else:
bamfile=BAM_FILES+'/%s.bam' % sample
size = os.path.getsize(bamfile)
print(size)
status = 200
begin = 0
end = size-1
    if "Range" in request.headers and request.method == 'GET':
print(request.headers['Range'])
headers.add('Accept-Ranges','bytes')
ranges = re.findall(r"\d+", request.headers["Range"])
begin = int( ranges[0] )
if len(ranges)>1: end = int( ranges[1] )
headers.add('Content-Range','bytes %s-%s/%s' % (str(begin),str(end),size) )
headers.add('Content-Length',str((end-begin)+1))
        with open(bamfile, 'rb') as f:
            f.seek(begin)
            # Range offsets are inclusive, so read (end - begin) + 1 bytes
            data = f.read(end - begin + 1)
print(len(data))
response = Response( data, status=206, mimetype="application/octet-stream", headers=headers, direct_passthrough=True)
else:
if request.method=='HEAD':
headers.add('Content-Length',size)
response = Response( '', status=200, mimetype="application/octet-stream", headers=headers, direct_passthrough=True)
elif request.method=='GET':
            response = Response( open(bamfile, 'rb'), status=200, mimetype="application/octet-stream", headers=headers, direct_passthrough=True)
#Add mimetype
response.cache_control.public = True
response.make_conditional(request)
return response
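Isolated from the view above, the `Range` header parsing rule is simply "first number is the begin offset, second (if any) is the inclusive end, otherwise the last byte". A standalone sketch (like the view, it does not distinguish suffix ranges such as `bytes=-500`):

```python
import re

def parse_byte_range(range_header, size):
    """Return inclusive (begin, end) byte offsets from a
    'Range: bytes=begin-end' value, defaulting end to the last byte."""
    numbers = re.findall(r"\d+", range_header)
    begin = int(numbers[0])
    end = int(numbers[1]) if len(numbers) > 1 else size - 1
    return begin, end
```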
def read_viz2(sample, region):
    print(sample)
    print(region)
    import subprocess
    BAM_FILES = app.config['BAM_FILES']
    tmpfile = subprocess.Popen('mktemp', shell=True, stdout=subprocess.PIPE).stdout.read().strip() + '.bam'
    print(tmpfile)
    print(subprocess.Popen("samtools view -b %s/%s_sorted_unique.bam %s > %s" % (BAM_FILES, sample, region, tmpfile), shell=True, stdout=subprocess.PIPE).stdout.read())
    subprocess.Popen('samtools index %s' % tmpfile, shell=True, stdout=subprocess.PIPE).stdout.read()
| logust79/phenopolis | views/igv.py | Python | mit | 3,126 | [
"pysam"
] | be7ca93b5ad50426930d9a58c01371fbca98a3ef674e34e4768bb648dfa02061 |
# -*- coding: utf-8 -*-
'''
Copyright (c) 2015 by Tobias Houska
This file is part of Statistical Parameter Estimation Tool (SPOTPY).
:author: Tobias Houska
Holds functions to analyse results out of the database.
Note: This part of SPOTPY is in alpha status and not yet ready for production use.
'''
import numpy as np
import spotpy
font = {'family' : 'calibri',
'weight' : 'normal',
'size' : 18}
def load_csv_results(filename, usecols=None):
"""
Get an array of your results in the given file.
    :filename: Expects an available filename, without the .csv ending, in your working directory
:type: str
:return: Result array
:rtype: array
"""
    if usecols is None:
return np.genfromtxt(filename+'.csv',delimiter=',',names=True,invalid_raise=False)
else:
return np.genfromtxt(filename+'.csv',delimiter=',',names=True,skip_footer=1,invalid_raise=False,usecols=usecols)[1:]
def load_hdf5_results(filename):
"""
Get an array of your results in the given file.
:filename: Expects an available filename, without the .h5 ending,
in your working directory
:type: str
:return: Result array, simulation is an ndarray,
which is different to structured arrays return by the csv/sql/ram databases
:rtype: array
"""
import h5py
with h5py.File(filename+'.h5', 'r') as f:
return f[filename][()]
def load_csv_parameter_results(filename, usecols=None):
"""
Get an array of your results in the given file, without the first and the
last column. The first line may have a different objectivefunction and the last
line may be incomplete, which would result in an error.
    :filename: Expects an available filename, without the .csv ending, in your working directory
:type: str
:return: Result array
:rtype: array
"""
ofile=open(filename+'.csv')
line = ofile.readline()
header=line.split(',')
ofile.close()
words=[]
index =[]
for i,word in enumerate(header):
if word.startswith('par'):
words.append(word)
index.append(i)
return np.genfromtxt(filename+'.csv', delimiter=',', names=words,
usecols=index, invalid_raise=False, skip_header=1)
def get_header(results):
return results.dtype.names
def get_like_fields(results):
header = get_header(results)
fields=[word for word in header if word.startswith('like')]
return fields
def get_parameter_fields(results):
header = get_header(results)
fields=[word for word in header if word.startswith('par')]
return fields
def get_simulation_fields(results):
header = get_header(results)
fields=[word for word in header if word.startswith('sim')]
return fields
def get_modelruns(results):
"""
Get an shorter array out of your result array, containing just the
simulations of your model.
    :results: Expects a numpy array which should have indices beginning with "sim"
:type: array
:return: Array containing just the columns beginnning with the indice "sim"
:rtype: array
"""
fields=[word for word in results.dtype.names if word.startswith('sim')]
return results[fields]
def get_parameters(results):
"""
Get an shorter array out of your result array, containing just the
parameters of your model.
    :results: Expects a numpy array which should have indices beginning with "par"
:type: array
:return: Array containing just the columns beginnning with the indice "par"
:rtype: array
"""
fields=[word for word in results.dtype.names if word.startswith('par')]
results = results[fields]
return results
def get_parameternames(results):
"""
Get list of strings with the names of the parameters of your model.
    :results: Expects a numpy array which should have indices beginning with "par"
:type: array
:return: Strings with the names of the analysed parameters
:rtype: list
"""
fields=[word for word in results.dtype.names if word.startswith('par')]
parnames=[]
for field in fields:
parnames.append(field[3:])
return parnames
def get_maxlikeindex(results,verbose=True):
"""
Get the maximum objectivefunction of your result array
    :results: Expects a numpy array which should have an index "like" for objectivefunctions
:type: array
:return: Index of the position in the results array with the maximum objectivefunction
value and value of the maximum objectivefunction of your result array
:rtype: int and float
"""
try:
likes=results['like']
except ValueError:
likes=results['like1']
maximum=np.nanmax(likes)
value=str(round(maximum,4))
text=str('Run number ' )
index=np.where(likes==maximum)
text2=str(' has the highest objectivefunction with: ')
textv=text+str(index[0][0])+text2+value
if verbose:
print(textv)
return index, maximum
def get_minlikeindex(results):
"""
Get the minimum objectivefunction of your result array
:results: Expects an numpy array which should of an index "like" for objectivefunctions
:type: array
:return: Index of the position in the results array with the minimum objectivefunction
value and value of the minimum objectivefunction of your result array
:rtype: int and float
"""
try:
likes=results['like']
except ValueError:
likes=results['like1']
minimum=np.nanmin(likes)
value=str(round(minimum,4))
text=str('Run number ' )
index=np.where(likes==minimum)
text2=str(' has the lowest objectivefunction with: ')
textv=text+str(index[0][0])+text2+value
print(textv)
return index, minimum
def get_percentiles(results,sim_number=''):
"""
Get 5,25,50,75 and 95 percentiles of your simulations
    :results: Expects a numpy array which should have an index "simulation" for simulations
:type: array
:sim_number: Optional, Number of your simulation, needed when working with multiple lists of simulations
:type: int
:return: Percentiles of simulations
:rtype: int and float
"""
p5,p25,p50,p75,p95=[],[],[],[],[]
fields=[word for word in results.dtype.names if word.startswith('simulation'+str(sim_number))]
for i in range(len(fields)):
p5.append(np.percentile(list(results[fields[i]]),5))
p25.append(np.percentile(list(results[fields[i]]),25))
p50.append(np.percentile(list(results[fields[i]]),50))
p75.append(np.percentile(list(results[fields[i]]),75))
p95.append(np.percentile(list(results[fields[i]]),95))
return p5,p25,p50,p75,p95
def calc_like(results,evaluation,objectivefunction):
"""
Calculate another objectivefunction of your results
    :results: Expects a numpy array which should have an index "simulation" for simulations
:type: array
:evaluation: Expects values, which correspond to your simulations
:type: list
    :objectivefunction: Takes evaluation and simulation data and returns an objectivefunction, e.g. spotpy.objectivefunctions.rmse
:type: function
:return: New objectivefunction list
:rtype: list
"""
likes=[]
sim=get_modelruns(results)
for s in sim:
likes.append(objectivefunction(list(s),evaluation))
return likes
def compare_different_objectivefunctions(like1,like2):
"""
    Performs Welch's t-test (a.k.a. the unequal-variances t-test)
:like1: objectivefunction values
:type: list
:like2: Other objectivefunction values
:type: list
:return: p Value
:rtype: list
"""
from scipy import stats
out = stats.ttest_ind(like1, like2, equal_var=False)
print(out)
    if out[1]>0.05:
        print('like1 is NOT significantly different from like2: p>0.05')
    else:
        print('like1 is significantly different from like2: p<0.05')
return out
def get_posterior(results,percentage=10, maximize=True):
"""
    Get the best XX% of your result array (e.g. percentage=10 keeps the best 10% of model runs)
    :results: Expects a numpy array which should have as first axis an index "like1". This will be sorted.
    :type: array
    :percentage: Optional, percentage of the best model runs to keep.
:type: float
:maximize: If True (default), higher "like1" column values are assumed to be better.
If False, lower "like1" column values are assumed to be better.
:return: Posterior result array
:rtype: array
"""
    if maximize:
        index = np.where(results['like1']>=np.percentile(results['like1'],100.0-percentage))
    else:
        index = np.where(results['like1']<=np.percentile(results['like1'],percentage))
return results[index]
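The selection rule of `get_posterior` — keep the best `percentage` percent of runs — can be shown on a plain list without numpy's structured arrays (a sketch using rank order rather than a percentile threshold, so ties are broken arbitrarily instead of all being kept):

```python
def top_fraction(values, percentage=10.0, maximize=True):
    """Keep the best `percentage` percent of values, best first."""
    ranked = sorted(values, reverse=maximize)
    # at least one value is always kept
    k = max(1, int(round(len(ranked) * percentage / 100.0)))
    return ranked[:k]
```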
def plot_parameter_uncertainty(posterior_results,evaluation, fig_name='Posterior_parameter_uncertainty.png'):
import pylab as plt
simulation_fields = get_simulation_fields(posterior_results)
fig= plt.figure(figsize=(16,9))
for i in range(len(evaluation)):
if evaluation[i] == -9999:
evaluation[i] = np.nan
ax = plt.subplot(1,1,1)
q5,q95=[],[]
for field in simulation_fields:
q5.append(np.percentile(list(posterior_results[field]),2.5))
q95.append(np.percentile(list(posterior_results[field]),97.5))
ax.plot(q5,color='dimgrey',linestyle='solid')
ax.plot(q95,color='dimgrey',linestyle='solid')
ax.fill_between(np.arange(0,len(q5),1),list(q5),list(q95),facecolor='dimgrey',zorder=0,
linewidth=0,label='parameter uncertainty')
ax.plot(evaluation,'r.',markersize=1, label='Observation data')
bestindex,bestobjf = get_maxlikeindex(posterior_results,verbose=False)
plt.plot(list(posterior_results[simulation_fields][bestindex][0]),'b-',label='Obj='+str(round(bestobjf,2)))
plt.xlabel('Number of Observation Points')
plt.ylabel ('Simulated value')
plt.legend(loc='upper right')
fig.savefig(fig_name,dpi=300)
text='A plot of the parameter uncertainty has been saved as '+fig_name
print(text)
def sort_like(results):
return np.sort(results,axis=0)
def get_best_parameterset(results,maximize=True):
"""
Get the best parameter set of your result array, depending on your first objectivefunction
    :results: Expects a numpy array which should have as first axis an index "like" or "like1".
:type: array
:maximize: Optional, default=True meaning the highest objectivefunction is taken as best, if False the lowest objectivefunction is taken as best.
:type: boolean
:return: Best parameter set
:rtype: array
"""
try:
likes=results['like']
except ValueError:
likes=results['like1']
if maximize:
best=np.nanmax(likes)
else:
best=np.nanmin(likes)
index=np.where(likes==best)
best_parameter_set = get_parameters(results[index])[0]
parameter_names = get_parameternames(results)
text=''
for i in range(len(parameter_names)):
text+=parameter_names[i]+'='+str(best_parameter_set[i])+', '
print('Best parameter set:\n'+text[:-2])
return get_parameters(results[index])
def get_min_max(spotpy_setup):
"""
Get the minimum and maximum values of your parameters function of the spotpy setup
:spotpy_setup: Class with a parameters function
:type: class
:return: Possible minimal and maximal values of all parameters in the parameters function of the spotpy_setup class
:rtype: Two arrays
"""
parameter_obj = spotpy.parameter.generate(spotpy.parameter.get_parameters_from_setup(spotpy_setup))
randompar = parameter_obj['random']
for i in range(1000):
randompar = np.column_stack((randompar, parameter_obj['random']))
return np.amin(randompar, axis=1), np.amax(randompar, axis=1)
def get_parbounds(spotpy_setup):
"""
Get the minimum and maximum parameter bounds of your parameters function of the spotpy setup
:spotpy_setup: Class with a parameters function
:type: class
:return: Possible minimal and maximal values of all parameters in the parameters function of the spotpy_setup class
:rtype: list
"""
parmin,parmax=get_min_max(spotpy_setup)
bounds=[]
for i in range(len(parmin)):
bounds.append([parmin[i],parmax[i]])
return bounds
def get_sensitivity_of_fast(results,like_index=1,M=4, print_to_console=True):
"""
Get the sensitivity for every parameter of your result array, created with the FAST algorithm
    :results: Expects a numpy array which should have as first axis an index "like" or "like1".
:type: array
    :like_index: Optional, index of the objectivefunction to base the sensitivity on, default=1 (the first objectivefunction is taken)
:type: int
:return: Sensitivity indices for every parameter
:rtype: list
"""
import math
likes=results['like'+str(like_index)]
print('Number of model runs:', likes.size)
parnames = get_parameternames(results)
parnumber=len(parnames)
print('Number of parameters:', parnumber)
rest = likes.size % (parnumber)
if rest != 0:
        print("""
Number of samples in model output file must be a multiple of D,
where D is the number of parameters in your parameter file.
We handle this by ignoring the last """, rest, """runs.""")
likes = likes[:-rest ]
N = int(likes.size / parnumber)
# Recreate the vector omega used in the sampling
omega = np.zeros([parnumber])
omega[0] = math.floor((N - 1) / (2 * M))
m = math.floor(omega[0] / (2 * M))
print('m =', m)
if m >= (parnumber - 1):
omega[1:] = np.floor(np.linspace(1, m, parnumber - 1))
else:
omega[1:] = np.arange(parnumber - 1) % m + 1
print('Omega =', omega)
# Calculate and Output the First and Total Order Values
if print_to_console:
print("Parameter First Total")
Si = dict((k, [None] * parnumber) for k in ['S1', 'ST'])
print(Si)
for i in range(parnumber):
l = np.arange(i * N, (i + 1) * N)
print(l)
Si['S1'][i] = _compute_first_order(likes[l], N, M, omega[0])
Si['ST'][i] = _compute_total_order(likes[l], N, omega[0])
print(Si)
if print_to_console:
print("%s %f %f" %
(parnames[i], Si['S1'][i], Si['ST'][i]))
return Si
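The frequency vector `omega` recreated above with numpy can also be built in pure Python; this sketch mirrors that logic (it assumes at least two parameters, and `m >= 1` in the modulo branch, just like the original):

```python
import math

def fast_omega(n_samples, n_params, M=4):
    """Recreate the FAST sampling frequencies as in
    get_sensitivity_of_fast, without numpy."""
    omega = [0] * n_params
    omega[0] = math.floor((n_samples - 1) / (2 * M))
    m = math.floor(omega[0] / (2 * M))
    if m >= n_params - 1:
        # floor of linspace(1, m, n_params - 1): evenly spaced in [1, m]
        step = (m - 1) / (n_params - 2) if n_params > 2 else 0
        omega[1:] = [math.floor(1 + i * step) for i in range(n_params - 1)]
    else:
        omega[1:] = [i % m + 1 for i in range(n_params - 1)]
    return omega
```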
def plot_fast_sensitivity(results,like_index=1,number_of_sensitiv_pars=10,fig_name='FAST_sensitivity.png'):
"""
Example, how to plot the sensitivity for every parameter of your result array, created with the FAST algorithm
    :results: Expects a numpy array which should have a header defined with the keyword like.
    :type: array
    :like: Default 'like1', column on which the sensitivity indices will be estimated
:type: list
:number_of_sensitiv_pars: Optional, this number of most sensitive parameters will be shown in the legend
:type: int
:return: Parameter names which are sensitive, Sensitivity indices for every parameter, Parameter names which are not sensitive
:rtype: Three lists
"""
import matplotlib.pyplot as plt
parnames=get_parameternames(results)
fig=plt.figure(figsize=(16,6))
ax = plt.subplot(1,1,1)
Si = get_sensitivity_of_fast(results, like_index=like_index)
names = []
values = []
no_names = []
no_values = []
index=[]
no_index=[]
try:
threshold = np.sort(list(Si.values())[1])[-number_of_sensitiv_pars]
except IndexError:
threshold = 0
first_sens_call=True
first_insens_call=True
try:
Si.values()
except AttributeError:
exit("Our SI is wrong: " +str(Si))
for j in range(len(list(Si.values())[1])):
if list(Si.values())[1][j]>=threshold:
names.append(j)
values.append(list(Si.values())[1][j])
index.append(j)
if first_sens_call:
ax.bar(j, list(Si.values())[1][j], color='blue', label='Sensitive Parameters')
else:
ax.bar(j, list(Si.values())[1][j], color='blue')
first_sens_call=False
else:
#names.append('')
no_values.append(list(Si.values())[1][j])
no_index.append(j)
if first_insens_call:
ax.bar(j,list(Si.values())[1][j],color='orange', label = 'Insensitive parameter')
else:
ax.bar(j,list(Si.values())[1][j],color='orange')
first_insens_call=False
ax.set_ylim([0,1])
    ax.set_xlabel('Model Parameters')
    ax.set_ylabel('Total Sensitivity Index')
ax.legend()
ax.set_xticks(np.arange(0,len(parnames)))
xtickNames = ax.set_xticklabels(parnames, color='grey')
plt.setp(xtickNames, rotation=90)
for name_id in names:
ax.get_xticklabels()[name_id].set_color("black")
#ax.set_xticklabels(['0']+parnames)
ax.plot(np.arange(-1,len(parnames)+1,1),[threshold]*(len(parnames)+2),'r--')
ax.set_xlim(-0.5,len(parnames)-0.5)
plt.tight_layout()
fig.savefig(fig_name,dpi=300)
def plot_heatmap_griewank(results,algorithms, fig_name='heatmap_griewank.png'):
"""Example Plot as seen in the SPOTPY Documentation"""
import matplotlib.pyplot as plt
from matplotlib import ticker
from matplotlib import cm
font = {'family' : 'calibri',
'weight' : 'normal',
'size' : 20}
plt.rc('font', **font)
subplots=len(results)
xticks=[-40,0,40]
yticks=[-40,0,40]
fig=plt.figure(figsize=(16,6))
N = 2000
x = np.linspace(-50.0, 50.0, N)
y = np.linspace(-50.0, 50.0, N)
x, y = np.meshgrid(x, y)
z=1+ (x**2+y**2)/4000 - np.cos(x/np.sqrt(2))*np.cos(y/np.sqrt(3))
cmap = plt.get_cmap('autumn')
rows=2.0
for i in range(subplots):
amount_row = int(np.ceil(subplots/rows))
ax = plt.subplot(rows, amount_row, i+1)
CS = ax.contourf(x, y, z,locator=ticker.LogLocator(),cmap=cm.rainbow)
ax.plot(results[i]['par0'],results[i]['par1'],'ko',alpha=0.2,markersize=1.9)
ax.xaxis.set_ticks([])
if i==0:
ax.set_ylabel('y')
if i==subplots/rows:
ax.set_ylabel('y')
if i>=subplots/rows:
ax.set_xlabel('x')
ax.xaxis.set_ticks(xticks)
if i!=0 and i!=subplots/rows:
ax.yaxis.set_ticks([])
ax.set_title(algorithms[i])
fig.savefig(fig_name, bbox_inches='tight')
def plot_objectivefunction(results,evaluation,limit=None,sort=True, fig_name = 'objective_function.png'):
"""Example Plot as seen in the SPOTPY Documentation"""
import matplotlib.pyplot as plt
likes=calc_like(results,evaluation,spotpy.objectivefunctions.rmse)
data=likes
#Calc confidence Interval
mean = np.average(data)
# evaluate sample variance by setting delta degrees of freedom (ddof) to
# 1. The degree used in calculations is N - ddof
stddev = np.std(data, ddof=1)
from scipy.stats import t
# Get the endpoints of the range that contains 95% of the distribution
    t_bounds = t.interval(0.95, len(data) - 1)
# sum mean to the confidence interval
ci = [mean + critval * stddev / np.sqrt(len(data)) for critval in t_bounds]
value="Mean: %f" % mean
print(value)
value="Confidence Interval 95%%: %f, %f" % (ci[0], ci[1])
print(value)
threshold=ci[1]
happend=None
bestlike=[data[0]]
for like in data:
if like<bestlike[-1]:
bestlike.append(like)
if bestlike[-1]<threshold and not happend:
thresholdpos=len(bestlike)
happend=True
else:
bestlike.append(bestlike[-1])
if limit:
plt.plot(bestlike,'k-')#[0:limit])
plt.axvline(x=thresholdpos,color='r')
plt.plot(likes,'b-')
#plt.ylim(ymin=-1,ymax=1.39)
else:
plt.plot(bestlike)
plt.savefig(fig_name)
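The confidence interval above uses a Student-t quantile from scipy; with only the standard library, a normal-quantile version looks like this (an approximation that is close to the t-interval for large samples, not the same numbers):

```python
from statistics import NormalDist, mean, stdev

def mean_ci(data, level=0.95):
    """Two-sided confidence interval for the mean, using a normal
    quantile in place of the Student-t quantile used above."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    m = mean(data)
    half = z * stdev(data) / len(data) ** 0.5
    return m - half, m + half
```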
def plot_parametertrace_algorithms(result_lists, algorithmnames, spot_setup,
fig_name='parametertrace_algorithms.png'):
"""Example Plot as seen in the SPOTPY Documentation"""
import matplotlib.pyplot as plt
font = {'family' : 'calibri',
'weight' : 'normal',
'size' : 20}
plt.rc('font', **font)
fig=plt.figure(figsize=(17,5))
subplots=len(result_lists)
parameter = spotpy.parameter.get_parameters_array(spot_setup)
rows=len(parameter['name'])
for j in range(rows):
for i in range(subplots):
ax = plt.subplot(rows,subplots,i+1+j*subplots)
data=result_lists[i]['par'+parameter['name'][j]]
ax.plot(data,'b-')
if i==0:
ax.set_ylabel(parameter['name'][j])
rep = len(data)
if i>0:
ax.yaxis.set_ticks([])
if j==rows-1:
ax.set_xlabel(algorithmnames[i-subplots])
else:
ax.xaxis.set_ticks([])
ax.plot([1]*rep,'r--')
ax.set_xlim(0,rep)
ax.set_ylim(parameter['minbound'][j],parameter['maxbound'][j])
#plt.tight_layout()
fig.savefig(fig_name, bbox_inches='tight')
def plot_parametertrace(results,parameternames=None,fig_name='Parameter_trace.png'):
"""
Get a plot with all values of a given parameter in your result array.
The plot will be saved as a .png file.
    :results: Expects a numpy array which should have an index "like" for objectivefunctions
:type: array
:parameternames: A List of Strings with parameternames. A line object will be drawn for each String in the List.
:type: list
:return: Plot of all traces of the given parameternames.
:rtype: figure
"""
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(16,9))
if not parameternames:
parameternames=get_parameternames(results)
names=''
i=1
for name in parameternames:
ax = plt.subplot(len(parameternames),1,i)
ax.plot(results['par'+name],label=name)
names+=name+'_'
ax.set_ylabel(name)
if i==len(parameternames):
ax.set_xlabel('Repetitions')
if i==1:
ax.set_title('Parametertrace')
ax.legend()
i+=1
fig.savefig(fig_name)
text='The figure has been saved as '+fig_name
print(text)
def plot_posterior_parametertrace(results,parameternames=None,threshold=0.1, fig_name='Posterior_parametertrace.png'):
"""
Get a plot with all values of a given parameter in your result array.
The plot will be saved as a .png file.
:results: Expects a numpy array which should have an index "like" for objectivefunctions
:type: array
:parameternames: A List of Strings with parameternames. A line object will be drawn for each String in the List.
:type: list
:return: Plot of all traces of the given parameternames.
:rtype: figure
"""
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(16,9))
results=sort_like(results)
if not parameternames:
parameternames=get_parameternames(results)
names=''
i=1
for name in parameternames:
ax = plt.subplot(len(parameternames),1,i)
ax.plot(results['par'+name][int(len(results)*threshold):],label=name)
names+=name+'_'
ax.set_ylabel(name)
if i==len(parameternames):
ax.set_xlabel('Repetitions')
if i==1:
ax.set_title('Parametertrace')
ax.legend()
i+=1
fig.savefig(fig_name)
text='The figure has been saved as '+fig_name
print(text)
def plot_posterior(results,evaluation,dates=None,ylabel='Posterior model simulation',xlabel='Time',bestperc=0.1, fig_name='bestmodelrun.png'):
"""
Get a plot with the maximum objectivefunction of your simulations in your result
array.
The plot will be saved as a .png file.
Args:
results (array): Expects a numpy array which should have an index "like" for
objectivefunctions and "sim" for simulations.
evaluation (list): Should contain the values of your observations. Expects that this list has the same length as the number of simulations in your result array.
Kwargs:
dates (list): A list of datetime values, equivalent to the evaluation data.
ylabel (str): Labels the y-axis with the given string.
xlabel (str): Labels the x-axis with the given string.
objectivefunction (str): Name of the objectivefunction function used for the simulations.
objectivefunctionmax (boolean): If True the maximum value of the objectivefunction will be searched. If false, the minimum will be searched.
calculatelike (boolean): If True, the NSE will be calculated for each simulation in the result array.
Returns:
figure. Plot of the simulation with the maximum objectivefunction value in the result array as a blue line and dots for the evaluation data.
"""
import matplotlib.pyplot as plt
index,maximum=get_maxlikeindex(results)
sim=get_modelruns(results)
bestmodelrun=list(sim[index][0])#Transform values into list to ensure plotting
bestparameterset=list(get_parameters(results)[index][0])
parameternames=list(get_parameternames(results) )
bestparameterstring=''
maxNSE=spotpy.objectivefunctions.nashsutcliffe(bestmodelrun,evaluation)
for i in range(len(parameternames)):
if i%8==0:
bestparameterstring+='\n'
bestparameterstring+=parameternames[i]+'='+str(round(bestparameterset[i],4))+','
fig=plt.figure(figsize=(16,8))
plt.plot(bestmodelrun,'b-',label='Simulation='+str(round(maxNSE,4)))
plt.plot(evaluation,'ro',label='Evaluation')
plt.legend()
plt.ylabel(ylabel)
plt.xlabel(xlabel)
plt.title('Maximum objectivefunction of Simulations with '+bestparameterstring[0:-2])
fig.savefig(fig_name)
text='The figure has been saved as '+fig_name
print(text)
def plot_bestmodelrun(results,evaluation,fig_name ='Best_model_run.png'):
"""
Get a plot with the maximum objectivefunction of your simulations in your result
array.
The plot will be saved as a .png file.
:results: Expects a numpy array which should have an index "like" for
objectivefunctions and "sim" for simulations.
type: Array
:evaluation: Should contain the values of your observations. Expects that this list has the same length as the number of simulations in your result array.
:type: list
Returns:
figure. Plot of the simulation with the maximum objectivefunction value in the result array as a blue line and dots for the evaluation data.
"""
import matplotlib.pyplot as plt
fig= plt.figure(figsize=(16,9))
for i in range(len(evaluation)):
if evaluation[i] == -9999:
evaluation[i] = np.nan
plt.plot(evaluation,'ro',markersize=1, label='Observation data')
simulation_fields = get_simulation_fields(results)
bestindex,bestobjf = get_maxlikeindex(results,verbose=False)
plt.plot(list(results[simulation_fields][bestindex][0]),'b-',label='Obj='+str(round(bestobjf,2)))
plt.xlabel('Number of Observation Points')
plt.ylabel('Simulated value')
plt.legend(loc='upper right')
fig.savefig(fig_name,dpi=300)
text='A plot of the best model run has been saved as '+fig_name
print(text)
def plot_bestmodelruns(results,evaluation,algorithms=None,dates=None,ylabel='Best model simulation',xlabel='Date',objectivefunctionmax=True,calculatelike=True,fig_name='bestmodelrun.png'):
"""
Get a plot with the maximum objectivefunction of your simulations in your result
array.
The plot will be saved as a .png file.
Args:
results (list of arrays): Expects a list of numpy arrays which should have an index "like" for
objectivefunctions and "sim" for simulations.
evaluation (list): Should contain the values of your observations. Expects that this list has the same length as the number of simulations in your result array.
Kwargs:
dates (list): A list of datetime values, equivalent to the evaluation data.
ylabel (str): Labels the y-axis with the given string.
xlabel (str): Labels the x-axis with the given string.
objectivefunction (str): Name of the objectivefunction function used for the simulations.
objectivefunctionmax (boolean): If True the maximum value of the objectivefunction will be searched. If false, the minimum will be searched.
calculatelike (boolean): If True, the NSE will be calculated for each simulation in the result array.
Returns:
figure. Plot of the simulation with the maximum objectivefunction value in the result array as a blue line and dots for the evaluation data.
Example:
>>> spotpy.analyser.plot_bestmodelruns(results, evaluation, ylabel='Best model simulation')
"""
import matplotlib.pyplot as plt
font = {'family': 'calibri', 'weight': 'normal', 'size': 20}
plt.rc('font', **font)
fig=plt.figure(figsize=(17,8))
colors=['grey', 'black', 'brown','red','orange', 'yellow','green','blue',]
plt.plot(dates,evaluation,'ro',label='Evaluation data')
for i in range(len(results)):
if calculatelike:
likes=[]
sim=get_modelruns(results[i])
par=get_parameters(results[i])
for s in sim:
likes.append(spotpy.objectivefunctions.lognashsutcliffe(evaluation,list(s)))
maximum=max(likes)
index=likes.index(maximum)
bestmodelrun=list(sim[index])
bestparameterset=list(par[index])
print(bestparameterset)
else:
if objectivefunctionmax==True:
index,maximum=get_maxlikeindex(results[i])
else:
index,maximum=get_minlikeindex(results[i])
bestmodelrun=list(get_modelruns(results[i])[index][0])#Transform values into list to ensure plotting
maxLike=spotpy.objectivefunctions.lognashsutcliffe(evaluation,bestmodelrun)
if dates is not None:
plt.plot(dates,bestmodelrun,'-',color=colors[i],label=algorithms[i]+': LogNSE='+str(round(maxLike,4)))
else:
plt.plot(bestmodelrun,'-',color=colors[i],label=algorithms[i]+': AI='+str(round(maxLike,4)))
#plt.plot(evaluation,'ro',label='Evaluation data')
plt.legend(bbox_to_anchor=(.0, 0), loc=3)
plt.ylabel(ylabel)
plt.xlabel(xlabel)
plt.ylim(15,50) #DELETE WHEN NOT USED WITH SOIL MOISTURE RESULTS
fig.savefig(fig_name)
text='The figure has been saved as '+fig_name
print(text)
def plot_objectivefunctiontraces(results,evaluation,algorithms,fig_name='Like_trace.png'):
import matplotlib.pyplot as plt
from matplotlib import colors
cnames=list(colors.cnames)
font = {'family' : 'calibri',
'weight' : 'normal',
'size' : 20}
plt.rc('font', **font)
fig=plt.figure(figsize=(16,3))
xticks=[5000,15000]
for i in range(len(results)):
ax = plt.subplot(1,len(results),i+1)
likes=calc_like(results[i],evaluation,spotpy.objectivefunctions.rmse)
ax.plot(likes,'b-')
ax.set_ylim(0,25)
ax.set_xlim(0,len(results[0]))
ax.set_xlabel(algorithms[i])
ax.xaxis.set_ticks(xticks)
if i==0:
ax.set_ylabel('RMSE')
ax.yaxis.set_ticks([0,10,20])
else:
ax.yaxis.set_ticks([])
plt.tight_layout()
fig.savefig(fig_name)
def plot_regression(results,evaluation,fig_name='regressionanalysis.png'):
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(16,9))
simulations=get_modelruns(results)
for sim in simulations:
plt.plot(evaluation,list(sim),'bo',alpha=.05)
plt.ylabel('simulation')
plt.xlabel('evaluation')
plt.title('Regression between simulations and evaluation data')
fig.savefig(fig_name)
text='The figure has been saved as '+fig_name
print(text)
def plot_parameterInteraction(results, fig_name ='ParameterInteraction.png'):
'''Input: List with values of parameters and list of strings with parameter names
Output: Dotty plot of parameter distribution and gaussian kde distribution'''
import matplotlib.pyplot as plt
import pandas as pd
parameterdistribution=get_parameters(results)
parameternames=get_parameternames(results)
df = pd.DataFrame(np.asarray(parameterdistribution).T.tolist(), columns=parameternames)
pd.plotting.scatter_matrix(df, alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.savefig(fig_name,dpi=300)
def plot_allmodelruns(modelruns,observations,dates=None, fig_name='bestmodel.png'):
'''Input: Array of modelruns and list of Observations
Output: Plot with all modelruns as a line and dots with the Observations
'''
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(16,9))
ax = plt.subplot(1,1,1)
if dates is not None:
for i in range(len(modelruns)):
if i==0:
ax.plot(dates, modelruns[i],'b',alpha=.05,label='Simulations')
else:
ax.plot(dates, modelruns[i],'b',alpha=.05)
else:
for i in range(len(modelruns)):
if i==0:
ax.plot(modelruns[i],'b',alpha=.05,label='Simulations')
else:
ax.plot(modelruns[i],'b',alpha=.05)
ax.plot(observations,'ro',label='Evaluation')
ax.legend()
ax.set_xlabel('Best model simulation')
ax.set_ylabel('Evaluation points')
ax.set_title('Maximum objectivefunction of Simulations')
fig.savefig(fig_name)
text='The figure has been saved as '+fig_name
print(text)
def plot_autocorellation(parameterdistribution,parametername,fig_name='Autocorrelation.png'):
'''Input: List of sampled values for one Parameter
Output: Autocorrelation plot of the parameter trace'''
import matplotlib.pyplot as plt
import pandas as pd
pd.plotting.autocorrelation_plot(parameterdistribution)
plt.savefig(fig_name,dpi=300)
def plot_gelman_rubin(r_hat_values,fig_name='gelman_rub.png'):
'''Input: List of R_hat values of chains (see Gelman & Rubin 1992)
Output: Plot as seen for e.g. in (Sadegh and Vrugt 2014)'''
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(16,9))
ax = plt.subplot(1,1,1)
ax.plot(r_hat_values)
ax.plot([1.2]*len(r_hat_values),'k--')
ax.set_ylabel('R_hat')
plt.savefig(fig_name,dpi=300)
def gelman_rubin(x):
'''NOT USED YET'''
if np.shape(x) < (2,):
raise ValueError(
'Gelman-Rubin diagnostic requires multiple chains of the same length.')
try:
m, n = np.shape(x)
except ValueError:
return [gelman_rubin(np.transpose(y)) for y in np.transpose(x)]
# Calculate between-chain variance
B_over_n = np.sum((np.mean(x, 1) - np.mean(x)) ** 2) / (m - 1)
# Calculate within-chain variances
W = np.sum(
[(x[i] - xbar) ** 2 for i,
xbar in enumerate(np.mean(x,
1))]) / (m * (n - 1))
# (over) estimate of variance
s2 = W * (n - 1) / n + B_over_n
# Pooled posterior variance estimate
V = s2 + B_over_n / m
# Calculate PSRF
R = V / W
return R
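A self-contained check of the Gelman-Rubin computation above: the formula below mirrors `gelman_rubin`, applied to four synthetic chains drawn from the same distribution, where the potential scale reduction factor should sit very close to 1 (the chain data is illustrative):

```python
import numpy as np

def gelman_rubin_rhat(x):
    """Potential scale reduction factor for an (m, n) array of m chains."""
    x = np.asarray(x, dtype=float)
    m, n = x.shape
    # Between-chain variance of the chain means
    B_over_n = np.sum((np.mean(x, 1) - np.mean(x)) ** 2) / (m - 1)
    # Average within-chain variance
    W = np.sum([(x[i] - xbar) ** 2
                for i, xbar in enumerate(np.mean(x, 1))]) / (m * (n - 1))
    # Pooled posterior variance estimate
    s2 = W * (n - 1) / n + B_over_n
    V = s2 + B_over_n / m
    return V / W

rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 5000))  # four well-mixed chains
rhat = gelman_rubin_rhat(chains)     # expected to be close to 1
```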
def plot_Geweke(parameterdistribution,parametername):
'''Input: Takes a list of sampled values for a parameter and his name as a string
Output: Plot as seen for e.g. in BUGS or PyMC'''
import matplotlib.pyplot as plt
# perform the Geweke test
Geweke_values = _Geweke(parameterdistribution)
# plot the results
fig = plt.figure()
plt.plot(Geweke_values,label=parametername)
plt.legend()
plt.title(parametername + '- Geweke_Test')
plt.xlabel('Subinterval')
plt.ylabel('Geweke Test')
plt.ylim([-3,3])
# plot the delimiting line
plt.plot( [2]*len(Geweke_values), 'r-.')
plt.plot( [-2]*len(Geweke_values), 'r-.')
def _compute_first_order(outputs, N, M, omega):
f = np.fft.fft(outputs)
Sp = np.power(np.absolute(f[np.arange(1, int((N + 1) / 2))]) / N, 2)
V = 2 * np.sum(Sp)
D1 = 2 * np.sum(Sp[np.arange(1, M + 1) * int(omega) - 1])
return D1 / V
def _compute_total_order(outputs, N, omega):
f = np.fft.fft(outputs)
Sp = np.power(np.absolute(f[np.arange(1, int((N + 1) / 2))]) / N, 2)
V = 2 * np.sum(Sp)
    Dt = 2 * np.sum(Sp[np.arange(int(omega / 2))])
return (1 - Dt / V)
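The two FFT helpers above implement the FAST variance decomposition: the first-order index is the fraction of output variance concentrated at the driving frequency `omega` and its first `M` harmonics. For a model output that is a pure sinusoid at `omega`, that fraction should be essentially 1. A self-contained sketch (the sampling below is illustrative, not spotpy's FAST sampler):

```python
import numpy as np

def first_order_index(outputs, N, M, omega):
    """Fraction of output variance at frequency omega and its first M harmonics
    (mirrors the spectral bookkeeping of the helpers above)."""
    f = np.fft.fft(outputs)
    # One-sided power spectrum, excluding the zero-frequency term
    Sp = np.power(np.absolute(f[np.arange(1, int((N + 1) / 2))]) / N, 2)
    V = 2 * np.sum(Sp)                                         # total variance
    D1 = 2 * np.sum(Sp[np.arange(1, M + 1) * int(omega) - 1])  # variance at harmonics
    return D1 / V

N, M, omega = 512, 4, 8
s = 2 * np.pi * np.arange(N) / N
outputs = np.sin(omega * s)  # all variance is driven by frequency omega
S1 = first_order_index(outputs, N, M, omega)
```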
def _Geweke(samples, intervals=20):
'''Calculates Geweke Z-Scores'''
length=int(len(samples)/intervals/2)
# discard the first 10 per cent
first = 0.1*len(samples)
# create empty array to store the results
z = np.empty(intervals)
for k in np.arange(0, intervals):
# starting points of the two different subsamples
start1 = int(first + k*length)
start2 = int(len(samples)/2 + k*length)
# extract the sub samples
subsamples1 = samples[start1:start1+length]
subsamples2 = samples[start2:start2+length]
# calculate the mean and the variance
mean1 = np.mean(subsamples1)
mean2 = np.mean(subsamples2)
var1 = np.var(subsamples1)
var2 = np.var(subsamples2)
# calculate the Geweke z-score: difference of means scaled by the
# standard error of that difference
z[k] = (mean1-mean2)/np.sqrt(var1/len(subsamples1)+var2/len(subsamples2))
return z
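The Geweke diagnostic above can be exercised on a stationary chain, where the z-scores should hover around 0. Note that the sketch below scales each sub-sample variance by its length, as the standard z-score definition does (`_Geweke` as written divides by `sqrt(var1+var2)` without that scaling); the synthetic chain is illustrative:

```python
import numpy as np

def geweke_z(samples, intervals=20):
    """Z-scores comparing early sub-sample means against late sub-sample means."""
    samples = np.asarray(samples, dtype=float)
    length = int(len(samples) / intervals / 2)
    first = 0.1 * len(samples)  # discard the first 10 percent as burn-in
    z = np.empty(intervals)
    for k in range(intervals):
        start1 = int(first + k * length)
        start2 = int(len(samples) / 2 + k * length)
        sub1 = samples[start1:start1 + length]
        sub2 = samples[start2:start2 + length]
        # Difference of means scaled by the standard error of that difference
        z[k] = (sub1.mean() - sub2.mean()) / np.sqrt(
            sub1.var() / len(sub1) + sub2.var() / len(sub2))
    return z

rng = np.random.default_rng(1)
chain = rng.normal(size=20000)  # stationary chain: scores should stay near 0
z = geweke_z(chain)
```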
| bees4ever/spotpy | spotpy/analyser.py | Python | mit | 37,517 | ["Gaussian"] | e8b8ad4c2f236d1308b94266901a34651b9c0deb13b456586db6cefac421531a |
"""
========================================
Special functions (:mod:`scipy.special`)
========================================
.. currentmodule:: scipy.special
Nearly all of the functions below are universal functions and follow
broadcasting and automatic array-looping rules.
.. seealso::
`scipy.special.cython_special` -- Typed Cython versions of special functions
Error handling
==============
Errors are handled by returning NaNs or other appropriate values.
Some of the special function routines can emit warnings or raise
exceptions when an error occurs. By default this is disabled; to
query and control the current error handling state the following
functions are provided.
.. autosummary::
:toctree: generated/
geterr -- Get the current way of handling special-function errors.
seterr -- Set how special-function errors are handled.
errstate -- Context manager for special-function error handling.
SpecialFunctionWarning -- Warning that can be emitted by special functions.
SpecialFunctionError -- Exception that can be raised by special functions.
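A short sketch of the error-handling API listed above. Per the scipy.special documentation, `gammaln` has a pole at 0, which by default silently yields `inf` but can be escalated to an exception:

```python
import numpy as np
import scipy.special as sc

# By default errors are ignored: the pole of log-gamma at 0 just returns inf
assert np.isinf(sc.gammaln(0))

saved = sc.geterr()  # snapshot the current error-handling policy
with sc.errstate(singular='raise'):
    # Inside the context, 'singular' errors raise SpecialFunctionError
    assert sc.geterr()['singular'] == 'raise'
    try:
        sc.gammaln(0)
        raised = False
    except sc.SpecialFunctionError:
        raised = True

assert raised
assert sc.geterr() == saved  # policy restored on exit
```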
Available functions
===================
Airy functions
--------------
.. autosummary::
:toctree: generated/
airy -- Airy functions and their derivatives.
airye -- Exponentially scaled Airy functions and their derivatives.
ai_zeros -- Compute `nt` zeros and values of the Airy function Ai and its derivative.
bi_zeros -- Compute `nt` zeros and values of the Airy function Bi and its derivative.
itairy -- Integrals of Airy functions
Elliptic functions and integrals
--------------------------------
.. autosummary::
:toctree: generated/
ellipj -- Jacobian elliptic functions.
ellipk -- Complete elliptic integral of the first kind.
ellipkm1 -- Complete elliptic integral of the first kind around `m` = 1.
ellipkinc -- Incomplete elliptic integral of the first kind.
ellipe -- Complete elliptic integral of the second kind.
ellipeinc -- Incomplete elliptic integral of the second kind.
Bessel functions
----------------
.. autosummary::
:toctree: generated/
jv -- Bessel function of the first kind of real order and complex argument.
jve -- Exponentially scaled Bessel function of order `v`.
yn -- Bessel function of the second kind of integer order and real argument.
yv -- Bessel function of the second kind of real order and complex argument.
yve -- Exponentially scaled Bessel function of the second kind of real order.
kn -- Modified Bessel function of the second kind of integer order `n`
kv -- Modified Bessel function of the second kind of real order `v`
kve -- Exponentially scaled modified Bessel function of the second kind.
iv -- Modified Bessel function of the first kind of real order.
ive -- Exponentially scaled modified Bessel function of the first kind.
hankel1 -- Hankel function of the first kind.
hankel1e -- Exponentially scaled Hankel function of the first kind.
hankel2 -- Hankel function of the second kind.
hankel2e -- Exponentially scaled Hankel function of the second kind.
The following is not a universal function:
.. autosummary::
:toctree: generated/
lmbda -- Jahnke-Emden Lambda function, Lambdav(x).
Zeros of Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
jnjnp_zeros -- Compute zeros of integer-order Bessel functions Jn and Jn'.
jnyn_zeros -- Compute nt zeros of Bessel functions Jn(x), Jn'(x), Yn(x), and Yn'(x).
jn_zeros -- Compute zeros of integer-order Bessel function Jn(x).
jnp_zeros -- Compute zeros of integer-order Bessel function derivative Jn'(x).
yn_zeros -- Compute zeros of integer-order Bessel function Yn(x).
ynp_zeros -- Compute zeros of integer-order Bessel function derivative Yn'(x).
y0_zeros -- Compute nt zeros of Bessel function Y0(z), and derivative at each zero.
y1_zeros -- Compute nt zeros of Bessel function Y1(z), and derivative at each zero.
y1p_zeros -- Compute nt zeros of Bessel derivative Y1'(z), and value at each zero.
Faster versions of common Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
j0 -- Bessel function of the first kind of order 0.
j1 -- Bessel function of the first kind of order 1.
y0 -- Bessel function of the second kind of order 0.
y1 -- Bessel function of the second kind of order 1.
i0 -- Modified Bessel function of order 0.
i0e -- Exponentially scaled modified Bessel function of order 0.
i1 -- Modified Bessel function of order 1.
i1e -- Exponentially scaled modified Bessel function of order 1.
k0 -- Modified Bessel function of the second kind of order 0, :math:`K_0`.
k0e -- Exponentially scaled modified Bessel function K of order 0
k1 -- Modified Bessel function of the second kind of order 1, :math:`K_1(x)`.
k1e -- Exponentially scaled modified Bessel function K of order 1.
Integrals of Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
itj0y0 -- Integrals of Bessel functions of order 0.
it2j0y0 -- Integrals related to Bessel functions of order 0.
iti0k0 -- Integrals of modified Bessel functions of order 0.
it2i0k0 -- Integrals related to modified Bessel functions of order 0.
besselpoly -- Weighted integral of a Bessel function.
Derivatives of Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
jvp -- Compute nth derivative of Bessel function Jv(z) with respect to `z`.
yvp -- Compute nth derivative of Bessel function Yv(z) with respect to `z`.
kvp -- Compute nth derivative of real-order modified Bessel function Kv(z)
ivp -- Compute nth derivative of modified Bessel function Iv(z) with respect to `z`.
h1vp -- Compute nth derivative of Hankel function H1v(z) with respect to `z`.
h2vp -- Compute nth derivative of Hankel function H2v(z) with respect to `z`.
Spherical Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
spherical_jn -- Spherical Bessel function of the first kind or its derivative.
spherical_yn -- Spherical Bessel function of the second kind or its derivative.
spherical_in -- Modified spherical Bessel function of the first kind or its derivative.
spherical_kn -- Modified spherical Bessel function of the second kind or its derivative.
Riccati-Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
riccati_jn -- Compute Ricatti-Bessel function of the first kind and its derivative.
riccati_yn -- Compute Ricatti-Bessel function of the second kind and its derivative.
Struve functions
----------------
.. autosummary::
:toctree: generated/
struve -- Struve function.
modstruve -- Modified Struve function.
itstruve0 -- Integral of the Struve function of order 0.
it2struve0 -- Integral related to the Struve function of order 0.
itmodstruve0 -- Integral of the modified Struve function of order 0.
Raw statistical functions
-------------------------
.. seealso:: :mod:`scipy.stats`: Friendly versions of these functions.
.. autosummary::
:toctree: generated/
bdtr -- Binomial distribution cumulative distribution function.
bdtrc -- Binomial distribution survival function.
bdtri -- Inverse function to `bdtr` with respect to `p`.
bdtrik -- Inverse function to `bdtr` with respect to `k`.
bdtrin -- Inverse function to `bdtr` with respect to `n`.
btdtr -- Cumulative distribution function of the beta distribution.
btdtri -- The `p`-th quantile of the beta distribution.
btdtria -- Inverse of `btdtr` with respect to `a`.
btdtrib -- Inverse of `btdtr` with respect to `b`.
fdtr -- F cumulative distribution function.
fdtrc -- F survival function.
fdtri -- The `p`-th quantile of the F-distribution.
fdtridfd -- Inverse to `fdtr` vs dfd.
gdtr -- Gamma distribution cumulative distribution function.
gdtrc -- Gamma distribution survival function.
gdtria -- Inverse of `gdtr` vs a.
gdtrib -- Inverse of `gdtr` vs b.
gdtrix -- Inverse of `gdtr` vs x.
nbdtr -- Negative binomial cumulative distribution function.
nbdtrc -- Negative binomial survival function.
nbdtri -- Inverse of `nbdtr` vs `p`.
nbdtrik -- Inverse of `nbdtr` vs `k`.
nbdtrin -- Inverse of `nbdtr` vs `n`.
ncfdtr -- Cumulative distribution function of the non-central F distribution.
ncfdtridfd -- Calculate degrees of freedom (denominator) for the noncentral F-distribution.
ncfdtridfn -- Calculate degrees of freedom (numerator) for the noncentral F-distribution.
ncfdtri -- Inverse cumulative distribution function of the non-central F distribution.
ncfdtrinc -- Calculate non-centrality parameter for non-central F distribution.
nctdtr -- Cumulative distribution function of the non-central `t` distribution.
nctdtridf -- Calculate degrees of freedom for non-central t distribution.
nctdtrit -- Inverse cumulative distribution function of the non-central t distribution.
nctdtrinc -- Calculate non-centrality parameter for non-central t distribution.
nrdtrimn -- Calculate mean of normal distribution given other params.
nrdtrisd -- Calculate standard deviation of normal distribution given other params.
pdtr -- Poisson cumulative distribution function.
pdtrc -- Poisson survival function.
pdtri -- Inverse to `pdtr` vs m.
pdtrik -- Inverse to `pdtr` vs k.
stdtr -- Student t distribution cumulative distribution function.
stdtridf -- Inverse of `stdtr` vs df.
stdtrit -- Inverse of `stdtr` vs `t`.
chdtr -- Chi square cumulative distribution function.
chdtrc -- Chi square survival function.
chdtri -- Inverse to `chdtrc`.
chdtriv -- Inverse to `chdtr` vs `v`.
ndtr -- Gaussian cumulative distribution function.
log_ndtr -- Logarithm of Gaussian cumulative distribution function.
ndtri -- Inverse of `ndtr` vs x.
chndtr -- Non-central chi square cumulative distribution function.
chndtridf -- Inverse to `chndtr` vs `df`.
chndtrinc -- Inverse to `chndtr` vs `nc`.
chndtrix -- Inverse to `chndtr` vs `x`.
smirnov -- Kolmogorov-Smirnov complementary cumulative distribution function.
smirnovi -- Inverse to `smirnov`.
kolmogorov -- Complementary cumulative distribution function of Kolmogorov distribution.
kolmogi -- Inverse function to `kolmogorov`.
tklmbda -- Tukey-Lambda cumulative distribution function.
logit -- Logit ufunc for ndarrays.
expit -- Expit ufunc for ndarrays.
boxcox -- Compute the Box-Cox transformation.
boxcox1p -- Compute the Box-Cox transformation of 1 + `x`.
inv_boxcox -- Compute the inverse of the Box-Cox transformation.
inv_boxcox1p -- Compute the inverse of the Box-Cox transformation.
owens_t -- Owen's T Function.
Information Theory functions
----------------------------
.. autosummary::
:toctree: generated/
entr -- Elementwise function for computing entropy.
rel_entr -- Elementwise function for computing relative entropy.
kl_div -- Elementwise function for computing Kullback-Leibler divergence.
huber -- Huber loss function.
pseudo_huber -- Pseudo-Huber loss function.
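The elementwise conventions of these functions can be checked directly against their definitions: `rel_entr(x, y) = x*log(x/y)` and `kl_div(x, y) = x*log(x/y) - x + y`:

```python
import numpy as np
from scipy.special import entr, rel_entr, kl_div

x, y = 2.0, 1.0
assert np.isclose(rel_entr(x, y), x * np.log(x / y))
# kl_div adds the (-x + y) term, which makes it nonnegative everywhere
assert np.isclose(kl_div(x, y), x * np.log(x / y) - x + y)
# entr(p) = -p*log(p); summed over a distribution it gives Shannon entropy
p = np.array([0.5, 0.5])
assert np.isclose(entr(p).sum(), np.log(2))
```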
Gamma and related functions
---------------------------
.. autosummary::
:toctree: generated/
gamma -- Gamma function.
gammaln -- Logarithm of the absolute value of the Gamma function for real inputs.
loggamma -- Principal branch of the logarithm of the Gamma function.
gammasgn -- Sign of the gamma function.
gammainc -- Regularized lower incomplete gamma function.
gammaincinv -- Inverse to `gammainc`.
gammaincc -- Regularized upper incomplete gamma function.
gammainccinv -- Inverse to `gammaincc`.
beta -- Beta function.
betaln -- Natural logarithm of absolute value of beta function.
betainc -- Incomplete beta integral.
betaincinv -- Inverse function to beta integral.
psi -- The digamma function.
rgamma -- Gamma function inverted.
polygamma -- Polygamma function n.
multigammaln -- Returns the log of multivariate gamma, also sometimes called the generalized gamma.
digamma -- psi(x[, out]).
poch -- Rising factorial (z)_m.
Error function and Fresnel integrals
------------------------------------
.. autosummary::
:toctree: generated/
erf -- Returns the error function of complex argument.
erfc -- Complementary error function, ``1 - erf(x)``.
erfcx -- Scaled complementary error function, ``exp(x**2) * erfc(x)``.
erfi -- Imaginary error function, ``-i erf(i z)``.
erfinv -- Inverse function for erf.
erfcinv -- Inverse function for erfc.
wofz -- Faddeeva function.
dawsn -- Dawson's integral.
fresnel -- Fresnel sin and cos integrals.
fresnel_zeros -- Compute nt complex zeros of sine and cosine Fresnel integrals S(z) and C(z).
modfresnelp -- Modified Fresnel positive integrals.
modfresnelm -- Modified Fresnel negative integrals.
voigt_profile -- Voigt profile.
These are not universal functions:
.. autosummary::
:toctree: generated/
erf_zeros -- Compute nt complex zeros of error function erf(z).
fresnelc_zeros -- Compute nt complex zeros of cosine Fresnel integral C(z).
fresnels_zeros -- Compute nt complex zeros of sine Fresnel integral S(z).
Legendre functions
------------------
.. autosummary::
:toctree: generated/
lpmv -- Associated Legendre function of integer order and real degree.
sph_harm -- Compute spherical harmonics.
These are not universal functions:
.. autosummary::
:toctree: generated/
clpmn -- Associated Legendre function of the first kind for complex arguments.
lpn -- Legendre function of the first kind.
lqn -- Legendre function of the second kind.
lpmn -- Sequence of associated Legendre functions of the first kind.
lqmn -- Sequence of associated Legendre functions of the second kind.
Ellipsoidal harmonics
---------------------
.. autosummary::
:toctree: generated/
ellip_harm -- Ellipsoidal harmonic functions E^p_n(l).
ellip_harm_2 -- Ellipsoidal harmonic functions F^p_n(l).
ellip_normal -- Ellipsoidal harmonic normalization constants gamma^p_n.
Orthogonal polynomials
----------------------
The following functions evaluate values of orthogonal polynomials:
.. autosummary::
:toctree: generated/
assoc_laguerre -- Compute the generalized (associated) Laguerre polynomial of degree n and order k.
eval_legendre -- Evaluate Legendre polynomial at a point.
eval_chebyt -- Evaluate Chebyshev polynomial of the first kind at a point.
eval_chebyu -- Evaluate Chebyshev polynomial of the second kind at a point.
eval_chebyc -- Evaluate Chebyshev polynomial of the first kind on [-2, 2] at a point.
eval_chebys -- Evaluate Chebyshev polynomial of the second kind on [-2, 2] at a point.
eval_jacobi -- Evaluate Jacobi polynomial at a point.
eval_laguerre -- Evaluate Laguerre polynomial at a point.
eval_genlaguerre -- Evaluate generalized Laguerre polynomial at a point.
eval_hermite -- Evaluate physicist's Hermite polynomial at a point.
eval_hermitenorm -- Evaluate probabilist's (normalized) Hermite polynomial at a point.
eval_gegenbauer -- Evaluate Gegenbauer polynomial at a point.
eval_sh_legendre -- Evaluate shifted Legendre polynomial at a point.
eval_sh_chebyt -- Evaluate shifted Chebyshev polynomial of the first kind at a point.
eval_sh_chebyu -- Evaluate shifted Chebyshev polynomial of the second kind at a point.
eval_sh_jacobi -- Evaluate shifted Jacobi polynomial at a point.
The following functions compute roots and quadrature weights for
orthogonal polynomials:
.. autosummary::
:toctree: generated/
roots_legendre -- Gauss-Legendre quadrature.
roots_chebyt -- Gauss-Chebyshev (first kind) quadrature.
roots_chebyu -- Gauss-Chebyshev (second kind) quadrature.
roots_chebyc -- Gauss-Chebyshev (first kind) quadrature.
roots_chebys -- Gauss-Chebyshev (second kind) quadrature.
roots_jacobi -- Gauss-Jacobi quadrature.
roots_laguerre -- Gauss-Laguerre quadrature.
roots_genlaguerre -- Gauss-generalized Laguerre quadrature.
roots_hermite -- Gauss-Hermite (physicist's) quadrature.
roots_hermitenorm -- Gauss-Hermite (statistician's) quadrature.
roots_gegenbauer -- Gauss-Gegenbauer quadrature.
roots_sh_legendre -- Gauss-Legendre (shifted) quadrature.
roots_sh_chebyt -- Gauss-Chebyshev (first kind, shifted) quadrature.
roots_sh_chebyu -- Gauss-Chebyshev (second kind, shifted) quadrature.
roots_sh_jacobi -- Gauss-Jacobi (shifted) quadrature.
The functions below, in turn, return the polynomial coefficients in
``orthopoly1d`` objects, which behave similarly to `numpy.poly1d`.
The ``orthopoly1d`` class also has an attribute ``weights``, which returns
the roots, weights, and total weights for the appropriate form of Gaussian
quadrature. These are returned in an ``n x 3`` array with roots in the first
column, weights in the second column, and total weights in the final column.
Note that ``orthopoly1d`` objects are converted to `~numpy.poly1d` when doing
arithmetic, and lose information about the original orthogonal polynomial.
.. autosummary::
:toctree: generated/
legendre -- Legendre polynomial.
chebyt -- Chebyshev polynomial of the first kind.
chebyu -- Chebyshev polynomial of the second kind.
chebyc -- Chebyshev polynomial of the first kind on :math:`[-2, 2]`.
chebys -- Chebyshev polynomial of the second kind on :math:`[-2, 2]`.
jacobi -- Jacobi polynomial.
laguerre -- Laguerre polynomial.
genlaguerre -- Generalized (associated) Laguerre polynomial.
hermite -- Physicist's Hermite polynomial.
hermitenorm -- Normalized (probabilist's) Hermite polynomial.
gegenbauer -- Gegenbauer (ultraspherical) polynomial.
sh_legendre -- Shifted Legendre polynomial.
sh_chebyt -- Shifted Chebyshev polynomial of the first kind.
sh_chebyu -- Shifted Chebyshev polynomial of the second kind.
sh_jacobi -- Shifted Jacobi polynomial.
.. warning::
Computing values of high-order polynomials (around ``order > 20``) using
polynomial coefficients is numerically unstable. To evaluate polynomial
values, the ``eval_*`` functions should be used instead.
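A quick low-order consistency check of the two evaluation routes, using the closed form P_3(x) = (5x^3 - 3x)/2; at this order both agree, but per the warning above only the `eval_*` route stays reliable at high order:

```python
import numpy as np
from scipy.special import eval_legendre, legendre

x = 0.5
p3_exact = (5 * x**3 - 3 * x) / 2  # closed form of the Legendre polynomial P_3
# Stable recurrence-based evaluation
assert np.isclose(eval_legendre(3, x), p3_exact)
# Coefficient-based orthopoly1d object: fine at low order
assert np.isclose(legendre(3)(x), p3_exact)
```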
Hypergeometric functions
------------------------
.. autosummary::
:toctree: generated/
hyp2f1 -- Gauss hypergeometric function 2F1(a, b; c; z).
hyp1f1 -- Confluent hypergeometric function 1F1(a, b; x).
hyperu -- Confluent hypergeometric function U(a, b, x) of the second kind.
hyp0f1 -- Confluent hypergeometric limit function 0F1.
Parabolic cylinder functions
----------------------------
.. autosummary::
:toctree: generated/
pbdv -- Parabolic cylinder function D.
pbvv -- Parabolic cylinder function V.
pbwa -- Parabolic cylinder function W.
These are not universal functions:
.. autosummary::
:toctree: generated/
pbdv_seq -- Parabolic cylinder functions Dv(x) and derivatives.
pbvv_seq -- Parabolic cylinder functions Vv(x) and derivatives.
pbdn_seq -- Parabolic cylinder functions Dn(z) and derivatives.
Mathieu and related functions
-----------------------------
.. autosummary::
:toctree: generated/
mathieu_a -- Characteristic value of even Mathieu functions.
mathieu_b -- Characteristic value of odd Mathieu functions.
These are not universal functions:
.. autosummary::
:toctree: generated/
mathieu_even_coef -- Fourier coefficients for even Mathieu and modified Mathieu functions.
   mathieu_odd_coef -- Fourier coefficients for odd Mathieu and modified Mathieu functions.
The following return both function and first derivative:
.. autosummary::
:toctree: generated/
mathieu_cem -- Even Mathieu function and its derivative.
mathieu_sem -- Odd Mathieu function and its derivative.
mathieu_modcem1 -- Even modified Mathieu function of the first kind and its derivative.
mathieu_modcem2 -- Even modified Mathieu function of the second kind and its derivative.
mathieu_modsem1 -- Odd modified Mathieu function of the first kind and its derivative.
mathieu_modsem2 -- Odd modified Mathieu function of the second kind and its derivative.
Spheroidal wave functions
-------------------------
.. autosummary::
:toctree: generated/
pro_ang1 -- Prolate spheroidal angular function of the first kind and its derivative.
pro_rad1 -- Prolate spheroidal radial function of the first kind and its derivative.
   pro_rad2 -- Prolate spheroidal radial function of the second kind and its derivative.
obl_ang1 -- Oblate spheroidal angular function of the first kind and its derivative.
obl_rad1 -- Oblate spheroidal radial function of the first kind and its derivative.
obl_rad2 -- Oblate spheroidal radial function of the second kind and its derivative.
pro_cv -- Characteristic value of prolate spheroidal function.
obl_cv -- Characteristic value of oblate spheroidal function.
pro_cv_seq -- Characteristic values for prolate spheroidal wave functions.
obl_cv_seq -- Characteristic values for oblate spheroidal wave functions.
The following functions require a pre-computed characteristic value:
.. autosummary::
:toctree: generated/
pro_ang1_cv -- Prolate spheroidal angular function pro_ang1 for precomputed characteristic value.
pro_rad1_cv -- Prolate spheroidal radial function pro_rad1 for precomputed characteristic value.
pro_rad2_cv -- Prolate spheroidal radial function pro_rad2 for precomputed characteristic value.
obl_ang1_cv -- Oblate spheroidal angular function obl_ang1 for precomputed characteristic value.
obl_rad1_cv -- Oblate spheroidal radial function obl_rad1 for precomputed characteristic value.
obl_rad2_cv -- Oblate spheroidal radial function obl_rad2 for precomputed characteristic value.
Kelvin functions
----------------
.. autosummary::
:toctree: generated/
kelvin -- Kelvin functions as complex numbers.
kelvin_zeros -- Compute nt zeros of all Kelvin functions.
ber -- Kelvin function ber.
   bei -- Kelvin function bei.
berp -- Derivative of the Kelvin function `ber`.
beip -- Derivative of the Kelvin function `bei`.
ker -- Kelvin function ker.
   kei -- Kelvin function kei.
kerp -- Derivative of the Kelvin function ker.
keip -- Derivative of the Kelvin function kei.
These are not universal functions:
.. autosummary::
:toctree: generated/
ber_zeros -- Compute nt zeros of the Kelvin function ber(x).
bei_zeros -- Compute nt zeros of the Kelvin function bei(x).
berp_zeros -- Compute nt zeros of the Kelvin function ber'(x).
beip_zeros -- Compute nt zeros of the Kelvin function bei'(x).
ker_zeros -- Compute nt zeros of the Kelvin function ker(x).
kei_zeros -- Compute nt zeros of the Kelvin function kei(x).
kerp_zeros -- Compute nt zeros of the Kelvin function ker'(x).
keip_zeros -- Compute nt zeros of the Kelvin function kei'(x).
Combinatorics
-------------
.. autosummary::
:toctree: generated/
comb -- The number of combinations of N things taken k at a time.
perm -- Permutations of N things taken k at a time, i.e., k-permutations of N.
Lambert W and related functions
-------------------------------
.. autosummary::
:toctree: generated/
lambertw -- Lambert W function.
wrightomega -- Wright Omega function.
Other special functions
-----------------------
.. autosummary::
:toctree: generated/
agm -- Arithmetic, Geometric Mean.
bernoulli -- Bernoulli numbers B0..Bn (inclusive).
binom -- Binomial coefficient
diric -- Periodic sinc function, also called the Dirichlet function.
euler -- Euler numbers E0..En (inclusive).
expn -- Exponential integral E_n.
exp1 -- Exponential integral E_1 of complex argument z.
expi -- Exponential integral Ei.
factorial -- The factorial of a number or array of numbers.
factorial2 -- Double factorial.
factorialk -- Multifactorial of n of order k, n(!!...!).
shichi -- Hyperbolic sine and cosine integrals.
sici -- Sine and cosine integrals.
softmax -- Softmax function.
log_softmax -- Logarithm of softmax function.
spence -- Spence's function, also known as the dilogarithm.
zeta -- Riemann zeta function.
zetac -- Riemann zeta function minus 1.
Convenience functions
---------------------
.. autosummary::
:toctree: generated/
cbrt -- Cube root of `x`.
exp10 -- 10**x.
exp2 -- 2**x.
radian -- Convert from degrees to radians.
cosdg -- Cosine of the angle `x` given in degrees.
sindg -- Sine of angle given in degrees.
tandg -- Tangent of angle x given in degrees.
cotdg -- Cotangent of the angle `x` given in degrees.
log1p -- Calculates log(1+x) for use when `x` is near zero.
expm1 -- exp(x) - 1 for use when `x` is near zero.
cosm1 -- cos(x) - 1 for use when `x` is near zero.
round -- Round to nearest integer.
xlogy -- Compute ``x*log(y)`` so that the result is 0 if ``x = 0``.
xlog1py -- Compute ``x*log1p(y)`` so that the result is 0 if ``x = 0``.
logsumexp -- Compute the log of the sum of exponentials of input elements.
exprel -- Relative error exponential, (exp(x)-1)/x, for use when `x` is near zero.
sinc -- Return the sinc function.
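Two of the conventions above, sketched briefly (assuming only that ``xlogy``
and ``logsumexp`` are importable from this module):

```python
import numpy as np
from scipy.special import xlogy, logsumexp

# xlogy(0, y) is defined to be 0, avoiding the nan from 0 * log(0):
assert xlogy(0.0, 0.0) == 0.0

# logsumexp computes log(sum(exp(a))) without overflowing exp():
a = np.array([1000.0, 1000.0])
assert np.isclose(logsumexp(a), 1000.0 + np.log(2.0))
```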
"""
from .sf_error import SpecialFunctionWarning, SpecialFunctionError
from . import _ufuncs
from ._ufuncs import *
from . import _basic
from ._basic import *
from ._logsumexp import logsumexp, softmax, log_softmax
from . import orthogonal
from .orthogonal import *
from .spfun_stats import multigammaln
from ._ellip_harm import (
ellip_harm,
ellip_harm_2,
ellip_normal
)
from ._lambertw import lambertw
from ._spherical_bessel import (
spherical_jn,
spherical_yn,
spherical_in,
spherical_kn
)
__all__ = _ufuncs.__all__ + _basic.__all__ + orthogonal.__all__ + [
'SpecialFunctionWarning',
'SpecialFunctionError',
'orthogonal', # Not public, but kept in __all__ for back-compat
'logsumexp',
'softmax',
'log_softmax',
'multigammaln',
'ellip_harm',
'ellip_harm_2',
'ellip_normal',
'lambertw',
'spherical_jn',
'spherical_yn',
'spherical_in',
'spherical_kn',
]
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# Copyright (c), Toshio Kuratomi <tkuratomi@ansible.com> 2016
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
SIZE_RANGES = {
'Y': 1 << 80,
'Z': 1 << 70,
'E': 1 << 60,
'P': 1 << 50,
'T': 1 << 40,
'G': 1 << 30,
'M': 1 << 20,
'K': 1 << 10,
'B': 1,
}
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# ansible modules can be written in any language. To simplify
# development of Python modules, the functions available here can
# be used to do many common tasks
import locale
import os
import re
import shlex
import subprocess
import sys
import types
import time
import select
import shutil
import stat
import tempfile
import traceback
import grp
import pwd
import platform
import errno
import datetime
from collections import deque
try:
    # Python 3: the abstract base classes live in collections.abc
    from collections.abc import Mapping, MutableMapping, Sequence, MutableSequence, Set, MutableSet
except ImportError:
    # Python 2
    from collections import Mapping, MutableMapping, Sequence, MutableSequence, Set, MutableSet
from itertools import repeat, chain
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
has_journal = True
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
try:
from collections.abc import KeysView
SEQUENCETYPE = (Sequence, frozenset, KeysView)
except ImportError:
SEQUENCETYPE = (Sequence, frozenset)
try:
import json
# Detect the python-json library which is incompatible
# Look for simplejson if that's the case
try:
if not isinstance(json.loads, types.FunctionType) or not isinstance(json.dumps, types.FunctionType):
raise ImportError
except AttributeError:
raise ImportError
except ImportError:
try:
import simplejson as json
except ImportError:
print('\n{"msg": "Error: ansible requires the stdlib json or simplejson module, neither was found!", "failed": true}')
sys.exit(1)
except SyntaxError:
print('\n{"msg": "SyntaxError: probably due to installed simplejson being for a different python version", "failed": true}')
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
except ImportError:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except ImportError:
pass
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils._text import to_native, to_bytes, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
_NUMBERTYPES = tuple(list(integer_types) + [float])
# Deprecated compat. Only kept in case another module used these names Using
# ansible.module_utils.six is preferred
NUMBERTYPES = _NUMBERTYPES
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2.6+
bytes
except NameError:
# Python 2.4
bytes = binary_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
src=dict(),
mode=dict(type='raw'),
owner=dict(),
group=dict(),
seuser=dict(),
serole=dict(),
selevel=dict(),
setype=dict(),
follow=dict(type='bool', default=False),
# not taken by the file module, but other modules call file so it must ignore them.
content=dict(no_log=True),
backup=dict(),
force=dict(),
remote_src=dict(), # used by assemble
regexp=dict(), # used by assemble
delimiter=dict(), # used by assemble
directory_mode=dict(), # used by copy
unsafe_writes=dict(type='bool'), # should be available to any module using atomic_move
attributes=dict(aliases=['attr']),
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
PERM_BITS = 0o7777 # file mode permission bits
EXEC_PERM_BITS = 0o0111 # execute permission bits
DEFAULT_PERM = 0o0666 # default file permission bits
def get_platform():
''' what's the platform? example: Linux is a platform. '''
return platform.system()
def get_distribution():
''' return the distribution name '''
if platform.system() == 'Linux':
try:
supported_dists = platform._supported_dists + ('arch', 'alpine', 'devuan')
distribution = platform.linux_distribution(supported_dists=supported_dists)[0].capitalize()
if not distribution and os.path.isfile('/etc/system-release'):
distribution = platform.linux_distribution(supported_dists=['system'])[0].capitalize()
if 'Amazon' in distribution:
distribution = 'Amazon'
else:
distribution = 'OtherLinux'
        except Exception:
# FIXME: MethodMissing, I assume?
distribution = platform.dist()[0].capitalize()
else:
distribution = None
return distribution
def get_distribution_version():
''' return the distribution version '''
if platform.system() == 'Linux':
try:
distribution_version = platform.linux_distribution()[1]
if not distribution_version and os.path.isfile('/etc/system-release'):
distribution_version = platform.linux_distribution(supported_dists=['system'])[1]
        except Exception:
# FIXME: MethodMissing, I assume?
distribution_version = platform.dist()[1]
else:
distribution_version = None
return distribution_version
def get_all_subclasses(cls):
'''
    Used by modules like the Hardware or Network fact classes to retrieve all
    subclasses of a given class. ``__subclasses__`` returns only direct
    subclasses; this function walks down the entire class tree.
'''
# Retrieve direct subclasses
subclasses = cls.__subclasses__()
to_visit = list(subclasses)
# Then visit all subclasses
while to_visit:
for sc in to_visit:
# The current class is now visited, so remove it from list
to_visit.remove(sc)
# Appending all subclasses to visit and keep a reference of available class
for ssc in sc.__subclasses__():
subclasses.append(ssc)
to_visit.append(ssc)
return subclasses
def load_platform_subclass(cls, *args, **kwargs):
'''
used by modules like User to have different implementations based on detected platform. See User
module for an example.
'''
this_platform = get_platform()
distribution = get_distribution()
subclass = None
# get the most specific superclass for this platform
if distribution is not None:
for sc in get_all_subclasses(cls):
if sc.distribution is not None and sc.distribution == distribution and sc.platform == this_platform:
subclass = sc
if subclass is None:
for sc in get_all_subclasses(cls):
if sc.platform == this_platform and sc.distribution is None:
subclass = sc
if subclass is None:
subclass = cls
return super(cls, subclass).__new__(subclass)
def json_dict_unicode_to_bytes(d, encoding='utf-8', errors='surrogate_or_strict'):
''' Recursively convert dict keys and values to byte str
    Specialized for json return because this only handles lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, text_type):
return to_bytes(d, encoding=encoding, errors=errors)
elif isinstance(d, dict):
return dict(map(json_dict_unicode_to_bytes, iteritems(d), repeat(encoding), repeat(errors)))
elif isinstance(d, list):
return list(map(json_dict_unicode_to_bytes, d, repeat(encoding), repeat(errors)))
elif isinstance(d, tuple):
return tuple(map(json_dict_unicode_to_bytes, d, repeat(encoding), repeat(errors)))
else:
return d
def json_dict_bytes_to_unicode(d, encoding='utf-8', errors='surrogate_or_strict'):
    ''' Recursively convert dict keys and values to unicode (text) str
    Specialized for json return because this only handles lists, tuples,
and dict container types (the containers that the json module returns)
'''
if isinstance(d, binary_type):
# Warning, can traceback
return to_text(d, encoding=encoding, errors=errors)
elif isinstance(d, dict):
return dict(map(json_dict_bytes_to_unicode, iteritems(d), repeat(encoding), repeat(errors)))
elif isinstance(d, list):
return list(map(json_dict_bytes_to_unicode, d, repeat(encoding), repeat(errors)))
elif isinstance(d, tuple):
return tuple(map(json_dict_bytes_to_unicode, d, repeat(encoding), repeat(errors)))
else:
return d
def return_values(obj):
""" Return native stringified values from datastructures.
For use with removing sensitive values pre-jsonification."""
if isinstance(obj, (text_type, binary_type)):
if obj:
yield to_native(obj, errors='surrogate_or_strict')
return
elif isinstance(obj, SEQUENCETYPE):
for element in obj:
for subelement in return_values(element):
yield subelement
elif isinstance(obj, Mapping):
for element in obj.items():
for subelement in return_values(element[1]):
yield subelement
elif isinstance(obj, (bool, NoneType)):
# This must come before int because bools are also ints
return
elif isinstance(obj, NUMBERTYPES):
yield to_native(obj, nonstring='simplerepr')
else:
raise TypeError('Unknown parameter type: %s, %s' % (type(obj), obj))
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
        ``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(NUMBERTYPES, (bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more"""
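    # Illustrative end-to-end behaviour (hedged example):
    #     remove_values({'pw': 'hunter2', 'host': 'db1'}, {'hunter2'})
    #     -> {'pw': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', 'host': 'db1'}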
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
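    # Illustrative behaviour of the scan below (hedged example):
    #     heuristic_log_sanitize('ssh://user:secret@host/path')
    #     -> 'ssh://user:********@host/path'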
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
def bytes_to_human(size, isbits=False, unit=None):
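    '''Convert a size in bytes (or bits) to a human-readable string.

    Illustrative examples (hedged, derived from the loop below):
        bytes_to_human(2048)               -> '2.00 KB'
        bytes_to_human(2048, isbits=True)  -> '2.00 Kb'
        bytes_to_human(1)                  -> '1.00 Bytes'
    '''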
base = 'Bytes'
if isbits:
base = 'bits'
suffix = ''
for suffix, limit in sorted(iteritems(SIZE_RANGES), key=lambda item: -item[1]):
if (unit is None and size >= limit) or unit is not None and unit.upper() == suffix[0]:
break
if limit != 1:
suffix += base[0]
else:
suffix = base
return '%.2f %s' % (float(size) / limit, suffix)
def human_to_bytes(number, default_unit=None, isbits=False):
'''
Convert number in string format into bytes (ex: '2K' => 2048) or using unit argument
ex:
human_to_bytes('10M') <=> human_to_bytes(10, 'M')
'''
    m = re.search(r'^\s*(\d*\.?\d*)\s*([A-Za-z]+)?', str(number), flags=re.IGNORECASE)
if m is None:
raise ValueError("human_to_bytes() can't interpret following string: %s" % str(number))
try:
num = float(m.group(1))
    except ValueError:
raise ValueError("human_to_bytes() can't interpret following number: %s (original input string: %s)" % (m.group(1), number))
unit = m.group(2)
if unit is None:
unit = default_unit
if unit is None:
        # No unit given, returning raw number
return int(round(num))
range_key = unit[0].upper()
try:
limit = SIZE_RANGES[range_key]
    except KeyError:
raise ValueError("human_to_bytes() failed to convert %s (unit = %s). The suffix must be one of %s" % (number, unit, ", ".join(SIZE_RANGES.keys())))
# default value
unit_class = 'B'
unit_class_name = 'byte'
# handling bits case
if isbits:
unit_class = 'b'
unit_class_name = 'bit'
# check unit value if more than one character (KB, MB)
if len(unit) > 1:
expect_message = 'expect %s%s or %s' % (range_key, unit_class, range_key)
if range_key == 'B':
expect_message = 'expect %s or %s' % (unit_class, unit_class_name)
if unit_class_name in unit.lower():
pass
elif unit[1] != unit_class:
raise ValueError("human_to_bytes() failed to convert %s. Value is not a valid string (%s)" % (number, expect_message))
return int(round(num * limit))
def is_executable(path):
'''is the given path executable?
Limitations:
* Does not account for FSACLs.
* Most times we really want to know "Can the current user execute this
file" This function does not tell us that, only if an execute bit is set.
'''
# These are all bitfields so first bitwise-or all the permissions we're
# looking for, then bitwise-and with the file's mode to determine if any
# execute bits are set.
return ((stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH) & os.stat(path)[stat.ST_MODE])
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
    want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
else:
raise AnsibleFallbackNotFound
def _lenient_lowercase(lst):
"""Lowercase elements of a list.
If an element is not a string, pass it through untouched.
"""
lowered = []
for value in lst:
try:
lowered.append(value.lower())
except AttributeError:
lowered.append(value)
return lowered
def format_attributes(attributes):
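    '''Translate attribute flag letters into names using FILE_ATTRIBUTES.

    Illustrative example (hedged): format_attributes('ai') would give
    ['append', 'immutable'], since iteration is per character.
    '''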
attribute_list = []
for attr in attributes:
if attr in FILE_ATTRIBUTES:
attribute_list.append(FILE_ATTRIBUTES[attr])
return attribute_list
def get_flags_from_attributes(attributes):
flags = []
for key, attr in FILE_ATTRIBUTES.items():
if attr in attributes:
flags.append(key)
return ''.join(flags)
class AnsibleFallbackNotFound(Exception):
pass
class _SetEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Set):
return list(obj)
return super(_SetEncoder, self).default(obj)
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
check_invalid_arguments=True, mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False, supports_check_mode=False,
required_if=None):
'''
common code for quickly building an ansible module in Python
(although you can write modules in anything that can return JSON)
see library/* for examples
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.check_invalid_arguments = check_invalid_arguments
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._warnings = []
self._deprecations = []
self.aliases = {}
self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug', '_ansible_diff', '_ansible_verbosity',
'_ansible_selinux_special_fs', '_ansible_module_name', '_ansible_version', '_ansible_syslog_facility',
'_ansible_socket']
self._options_context = list()
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except Exception as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments(check_invalid_arguments)
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
def warn(self, warning):
if isinstance(warning, string_types):
self._warnings.append(warning)
self.log('[WARNING] %s' % warning)
else:
raise TypeError("warn requires a string not a %s" % type(warning))
def deprecate(self, msg, version=None):
if isinstance(msg, string_types):
self._deprecations.append({
'msg': msg,
'version': version
})
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
else:
raise TypeError("deprecate requires a string not a %s" % type(msg))
def load_file_common_arguments(self, params):
'''
        Many modules deal with files; this encapsulates the common
        options that the file module accepts so that they are directly
        available to all modules and code can be shared.
'''
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
        for i in range(len(default_secontext)):
            if secontext[i] == '_default':
                secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
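The split limit of 3 above matters because the selevel component of an MLS context can itself contain colons. A minimal sketch with an illustrative context string:

```python
# A raw SELinux context has the form user:role:type:level; the level
# (selevel) may itself contain ':' characters, so split at most three times.
raw = "system_u:object_r:etc_t:s0-s0:c0.c1023"  # illustrative value
context = raw.split(':', 3)
# context -> ['system_u', 'object_r', 'etc_t', 's0-s0:c0.c1023']
```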
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
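Stripped of the bytes/text handling, the walk in `find_mount_point` is just repeated `os.path.dirname` until a mount point is hit; since '/' is always a mount point, the loop terminates:

```python
import os

def find_mount_point_sketch(path):
    # Walk up the directory tree until we reach a mount point; '/' is
    # always a mount point, so the loop is guaranteed to terminate.
    path = os.path.realpath(os.path.expanduser(os.path.expandvars(path)))
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path
```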
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
        try:
            with open('/proc/mounts', 'r') as f:
                mount_data = f.readlines()
        except Exception:
            return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if owner is None:
return changed
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed')
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if group is None:
return changed
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if mode is None:
return changed
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
            # prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno == errno.EPERM: # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
existing = self.get_file_attributes(b_path)
if existing.get('attr_flags', '') != attributes:
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '=%s' % attributes, b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = attributes
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chattr failed', details=to_native(e),
exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split(' ')[0:2]
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
            except Exception:
                pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
        This enables symbolic chmod string parsing as stated in the chmod
        man page. This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
            # The user(s) the mode applies to is the first element in the
            # 'permlist' list. Take that and remove it from the list.
            # An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
            # Now we have two lists of equal length, one containing the requested
            # permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
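The split/findall pairing above depends on the operator regex; with a hypothetical stand-in for `MODE_OPERATOR_RE` (defined elsewhere in this module), the pairing looks like this:

```python
import re

# Hypothetical stand-in for the module-level MODE_OPERATOR_RE used above.
MODE_OPERATOR_RE = re.compile(r'[+=-]')

mode = 'u=rw-x'
permlist = MODE_OPERATOR_RE.split(mode)  # ['u', 'rw', 'x']
opers = MODE_OPERATOR_RE.findall(mode)   # ['=', '-']
users = permlist.pop(0)                  # 'u'
# After the pop, permlist[idx] pairs with opers[idx]:
pairs = list(zip(opers, permlist))       # [('=', 'rw'), ('-', 'x')]
```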
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
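The three operator branches above are plain bit arithmetic; a self-contained sketch, assuming `PERM_BITS` is the module-level constant 0o7777:

```python
import stat

PERM_BITS = 0o7777  # assumed value of the module-level constant

def apply_operation_sketch(user, operator, mode_to_apply, current_mode):
    # Mirrors _apply_operation_to_mode for a single user class.
    if operator == '=':
        mask = {'u': stat.S_IRWXU | stat.S_ISUID,
                'g': stat.S_IRWXG | stat.S_ISGID,
                'o': stat.S_IRWXO | stat.S_ISVTX}[user]
        # clear this class's bits, then apply the new permissions
        return (current_mode & (mask ^ PERM_BITS)) | mode_to_apply
    elif operator == '+':
        return current_mode | mode_to_apply
    return current_mode - (current_mode & mode_to_apply)  # '-'

# 'u+x' on 0644 turns on the owner execute bit:
assert apply_operation_sketch('u', '+', stat.S_IXUSR, 0o644) == 0o744
```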
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
        # Get the umask: if the 'user' part is empty, the effect is as if (a)
        # were given, but bits that are set in the umask are not affected.
        # We also need the "reversed umask" for masking.
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
else:
kwargs['state'] = 'absent'
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None):
# this uses exceptions as it happens before we can safely call fail_json
aliases_results = {} # alias:canon
if param is None:
param = self.params
if spec is None:
spec = self.argument_spec
for (k, v) in spec.items():
self._legal_inputs.append(k)
aliases = v.get('aliases', None)
default = v.get('default', None)
required = v.get('required', False)
if default is not None and required:
# not alias specific but this is a good place to check this
raise Exception("internal error: required and default are mutually exclusive for %s" % k)
if aliases is None:
continue
if not isinstance(aliases, SEQUENCETYPE) or isinstance(aliases, (binary_type, text_type)):
raise Exception('internal error: aliases must be a list or tuple')
for alias in aliases:
self._legal_inputs.append(alias)
aliases_results[alias] = k
if alias in param:
param[k] = param[alias]
return aliases_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# Use the argspec to determine which args are no_log
for arg_name, arg_opts in spec.items():
if arg_opts.get('no_log', False):
# Find the value for the no_log'd param
no_log_object = param.get(arg_name, None)
if no_log_object:
self.no_log_values.update(return_values(no_log_object))
if arg_opts.get('removed_in_version') is not None and arg_name in param:
self._deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % arg_name,
'version': arg_opts.get('removed_in_version')
})
def _check_arguments(self, check_invalid_arguments, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for (k, v) in list(param.items()):
if k == '_ansible_check_mode' and v:
self.check_mode = True
elif k == '_ansible_no_log':
self.no_log = self.boolean(v)
elif k == '_ansible_debug':
self._debug = self.boolean(v)
elif k == '_ansible_diff':
self._diff = self.boolean(v)
elif k == '_ansible_verbosity':
self._verbosity = v
elif k == '_ansible_selinux_special_fs':
self._selinux_special_fs = v
elif k == '_ansible_syslog_facility':
self._syslog_facility = v
elif k == '_ansible_version':
self.ansible_version = v
elif k == '_ansible_module_name':
self._name = v
elif k == '_ansible_socket':
self._socket_path = v
elif check_invalid_arguments and k not in legal_inputs:
unsupported_parameters.add(k)
# clean up internal params:
if k.startswith('_ansible_'):
del self.params[k]
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ','.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (','.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
count = 0
if param is None:
param = self.params
for term in check:
if term in param:
count += 1
return count
def _check_mutually_exclusive(self, spec, param=None):
if spec is None:
return
for check in spec:
count = self._count_terms(check, param)
if count > 1:
msg = "parameters are mutually exclusive: %s" % (check,)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
for check in spec:
count = self._count_terms(check, param)
if count == 0:
msg = "one of the following is required: %s" % ','.join(check)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
for check in spec:
counts = [self._count_terms([field], param) for field in check]
non_zero = [c for c in counts if c > 0]
if len(non_zero) > 0:
if 0 in counts:
msg = "parameters are required together: %s" % (check,)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_arguments(self, spec=None, param=None):
''' ensure all required arguments are present '''
missing = []
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
required = v.get('required', False)
if required and k not in param:
missing.append(k)
if len(missing) > 0:
msg = "missing required arguments: %s" % ",".join(missing)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
        ''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
for sp in spec:
missing = []
max_missing_count = 0
is_one_of = False
if len(sp) == 4:
key, val, requirements, is_one_of = sp
else:
key, val, requirements = sp
            # If is_one_of is True, at least one requirement should be
            # present; otherwise all requirements should be present.
if is_one_of:
max_missing_count = len(requirements)
if key in param and param[key] == val:
for check in requirements:
count = self._count_terms((check,), param)
if count == 0:
missing.append(check)
if len(missing) and len(missing) >= max_missing_count:
msg = "%s is %s but the following are missing: %s" % (key, val, ','.join(missing))
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
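The four-element form of a `required_if` entry enables the one-of semantics described above. A standalone sketch of the check, returning violations instead of calling `fail_json`:

```python
def check_required_if_sketch(spec, params):
    # Minimal mirror of the logic above.
    problems = []
    for sp in spec:
        is_one_of = False
        if len(sp) == 4:
            key, val, requirements, is_one_of = sp
        else:
            key, val, requirements = sp
        # one-of: fail only when *all* requirements are missing
        max_missing_count = len(requirements) if is_one_of else 0
        if params.get(key) == val:
            missing = [r for r in requirements if r not in params]
            if missing and len(missing) >= max_missing_count:
                problems.append((key, val, missing))
    return problems

spec = [('state', 'present', ('path',)),
        ('state', 'absent', ('name', 'id'), True)]
assert check_required_if_sketch(spec, {'state': 'present'}) == [('state', 'present', ['path'])]
assert check_required_if_sketch(spec, {'state': 'absent', 'name': 'web'}) == []
```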
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
if param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = _lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = _lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ",".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
# do not allow method calls to modules
if not isinstance(value, string_types):
            # already templated to a data structure, perhaps?
if include_exceptions:
return (value, None)
return value
if re.search(r'\w\.\w+\(', value):
if include_exceptions:
return (value, None)
return value
# do not allow imports
if re.search(r'import \w+', value):
if include_exceptions:
return (value, None)
return value
try:
result = literal_eval(value)
if include_exceptions:
return (result, None)
else:
return result
except Exception as e:
if include_exceptions:
return (value, e)
return value
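The regex guards above refuse method calls and imports before anything reaches `literal_eval`; a condensed sketch of the same idea:

```python
import re
from ast import literal_eval

def safe_eval_sketch(value):
    # Refuse method calls (foo.bar(...)) and imports outright, then let
    # literal_eval accept only Python literals -- never expressions.
    if re.search(r'\w\.\w+\(', value) or re.search(r'import \w+', value):
        return value
    try:
        return literal_eval(value)
    except Exception:
        return value

assert safe_eval_sketch('[1, 2, 3]') == [1, 2, 3]
assert safe_eval_sketch("os.system('ls')") == "os.system('ls')"  # unevaluated
```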
def _check_type_str(self, value):
if isinstance(value, string_types):
return value
# Note: This could throw a unicode error if value's __str__() method
# returns non-ascii. Have to port utils.to_bytes() if that happens
return str(value)
def _check_type_list(self, value):
if isinstance(value, list):
return value
if isinstance(value, string_types):
return value.split(",")
elif isinstance(value, int) or isinstance(value, float):
return [str(value)]
raise TypeError('%s cannot be converted to a list' % type(value))
def _check_type_dict(self, value):
if isinstance(value, dict):
return value
if isinstance(value, string_types):
if value.startswith("{"):
try:
return json.loads(value)
                except Exception:
(result, exc) = self.safe_eval(value, dict(), include_exceptions=True)
if exc is not None:
raise TypeError('unable to evaluate string as dictionary')
return result
elif '=' in value:
fields = []
field_buffer = []
in_quote = False
in_escape = False
for c in value.strip():
if in_escape:
field_buffer.append(c)
in_escape = False
elif c == '\\':
in_escape = True
elif not in_quote and c in ('\'', '"'):
in_quote = c
elif in_quote and in_quote == c:
in_quote = False
elif not in_quote and c in (',', ' '):
field = ''.join(field_buffer)
if field:
fields.append(field)
field_buffer = []
else:
field_buffer.append(c)
field = ''.join(field_buffer)
if field:
fields.append(field)
return dict(x.split("=", 1) for x in fields)
else:
raise TypeError("dictionary requested, could not parse JSON or key=value")
raise TypeError('%s cannot be converted to a dict' % type(value))
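Ignoring the quote and escape handling above, the key=value branch reduces to splitting on separators and then splitting each field once on '=':

```python
# Simplified sketch of the key=value branch (no quote/escape handling).
value = 'name=web port=80'
fields = value.split()
result = dict(x.split('=', 1) for x in fields)
# result -> {'name': 'web', 'port': '80'}
```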
def _check_type_bool(self, value):
if isinstance(value, bool):
return value
if isinstance(value, string_types) or isinstance(value, int):
return self.boolean(value)
raise TypeError('%s cannot be converted to a bool' % type(value))
def _check_type_int(self, value):
if isinstance(value, int):
return value
if isinstance(value, string_types):
return int(value)
raise TypeError('%s cannot be converted to an int' % type(value))
def _check_type_float(self, value):
if isinstance(value, float):
return value
if isinstance(value, (binary_type, text_type, int)):
return float(value)
raise TypeError('%s cannot be converted to a float' % type(value))
def _check_type_path(self, value):
value = self._check_type_str(value)
return os.path.expanduser(os.path.expandvars(value))
def _check_type_jsonarg(self, value):
# Return a jsonified string. Sometimes the controller turns a json
# string into a dict/list so transform it back into json here
if isinstance(value, (text_type, binary_type)):
return value.strip()
else:
if isinstance(value, (list, tuple, dict)):
return self.jsonify(value)
raise TypeError('%s cannot be converted to a json string' % type(value))
def _check_type_raw(self, value):
return value
    def _check_type_bytes(self, value):
        try:
            # return the converted value so the dispatcher stores it in params
            return self.human_to_bytes(value)
        except ValueError:
            raise TypeError('%s cannot be converted to a Byte value' % type(value))
    def _check_type_bits(self, value):
        try:
            return self.human_to_bytes(value, isbits=True)
        except ValueError:
            raise TypeError('%s cannot be converted to a Bit value' % type(value))
def _handle_options(self, argument_spec=None, params=None):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if spec is None or not params[k]:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for param in elements:
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param)
self._handle_no_log_values(spec, param)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(self.check_invalid_arguments, spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param)
self._options_context.pop()
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
if param[k] is None:
continue
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
try:
param[k] = type_checker(value)
except (TypeError, ValueError) as e:
self.fail_json(msg="argument %s is of type %s and we were unable to convert to %s: %s" %
(k, type(value), wanted, to_native(e)))
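The dispatch in `_check_argument_types` maps a `type` name from the argument spec to a checker callable, and also accepts a user-supplied callable directly. A stripped-down sketch of that pattern (the spec layout mirrors the code above; the checker table is abridged and illustrative):

```python
def check_types(spec, params):
    """Coerce each param to the type its spec entry requests."""
    # Name -> converter table, the moral equivalent of the
    # _CHECK_ARGUMENT_TYPES_DISPATCHER used above (abridged).
    dispatch = {'str': str, 'int': int,
                'bool': lambda v: str(v).lower() in ('yes', 'true', '1')}
    for name, options in spec.items():
        wanted = options.get('type', 'str')
        if name not in params or params[name] is None:
            continue
        # A callable in the spec is used as the checker itself.
        checker = wanted if callable(wanted) else dispatch[wanted]
        params[name] = checker(params[name])
    return params

spec = {'port': {'type': 'int'}, 'shout': {'type': lambda v: str(v).upper()}}
print(check_types(spec, {'port': '8080', 'shout': 'hi'}))
```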
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
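A fallback entry is a tuple of `(strategy, args...)`, where the strategy is called only when the parameter was not supplied, and a strategy signals "no value here either" by raising `AnsibleFallbackNotFound`. A self-contained sketch of the same contract, with an environment-variable strategy similar in spirit to Ansible's `env_fallback` (the names here are illustrative):

```python
import os

class FallbackNotFound(Exception):
    """Raised by a strategy that has nothing to offer (cf. AnsibleFallbackNotFound)."""

def env_fallback(*names):
    # Return the first environment variable that is set, else signal "not found".
    for name in names:
        if name in os.environ:
            return os.environ[name]
    raise FallbackNotFound

def apply_fallbacks(spec, params):
    for key, options in spec.items():
        fallback = options.get('fallback', (None,))
        if key in params or fallback[0] is None:
            continue
        try:
            params[key] = fallback[0](*fallback[1])
        except FallbackNotFound:
            continue  # leave the parameter unset, exactly like the code above
    return params

os.environ['DEMO_TOKEN'] = 's3cret'
spec = {'token': {'fallback': (env_fallback, ['DEMO_TOKEN'])}}
print(apply_fallbacks(spec, {}))  # {'token': 's3cret'}
```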
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
journal.send(u"%s %s" % (module, journal_msg), **dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', False)
if self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
# try to capture all passwords/passphrase named fields missed by no_log
elif PASSWORD_MATCH.search(param) and arg_opts.get('type', 'str') != 'bool' and not arg_opts.get('choices', False):
# skip boolean and enums as they are about 'password' state
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=[]):
'''
find system executable in PATH.
Optional arguments:
- required: if executable is not found and required is true, fail_json
- opt_dirs: optional list of directories to search in addition to PATH
if found return full path; otherwise return None
'''
sbin_paths = ['/sbin', '/usr/sbin', '/usr/local/sbin']
paths = []
for d in opt_dirs:
if d is not None and os.path.exists(d):
paths.append(d)
paths += os.environ.get('PATH', '').split(os.pathsep)
bin_path = None
# mangle PATH to include /sbin dirs
for p in sbin_paths:
if p not in paths and os.path.exists(p):
paths.append(p)
for d in paths:
if not d:
continue
path = os.path.join(d, arg)
if os.path.exists(path) and not os.path.isdir(path) and is_executable(path):
bin_path = path
break
if required and bin_path is None:
self.fail_json(msg='Failed to find required executable %s in paths: %s' % (arg, os.pathsep.join(paths)))
return bin_path
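`get_bin_path` is essentially a hand-rolled `which` that also mangles the sbin directories into the search list. The core search loop can be sketched on its own (here the directory list is supplied by the caller rather than assembled from `$PATH`):

```python
import os

def find_executable(name, dirs):
    """Return the first entry in dirs containing an executable file `name`, else None."""
    for d in dirs:
        if not d:
            continue  # skip empty PATH components
        candidate = os.path.join(d, name)
        # must be a regular file (not a directory) with the execute bit set
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```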
def boolean(self, arg):
''' return a bool for the arg '''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
for encoding in ("utf-8", "latin-1"):
try:
return json.dumps(data, encoding=encoding, cls=_SetEncoder)
# Old systems with an old simplejson module do not support the encoding keyword.
except TypeError:
try:
new_data = json_dict_bytes_to_unicode(data, encoding=encoding)
except UnicodeDecodeError:
continue
return json.dumps(new_data, cls=_SetEncoder)
except UnicodeDecodeError:
continue
self.fail_json(msg='Invalid unicode encoding encountered')
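`jsonify` relies on a custom encoder (`_SetEncoder`, defined elsewhere in this module) so that Python sets, which `json` cannot serialize natively, come out as lists. A minimal encoder of that shape, as an assumed equivalent rather than the exact original:

```python
import json

class SetEncoder(json.JSONEncoder):
    def default(self, obj):
        # Sets have no JSON representation; emit them as sorted lists
        # so the output is deterministic.
        if isinstance(obj, (set, frozenset)):
            return sorted(obj)
        return super(SetEncoder, self).default(obj)

print(json.dumps({'ports': {80, 443}}, cls=SetEncoder))  # {"ports": [80, 443]}
```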
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
if self._warnings:
kwargs['warnings'] = self._warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
else:
self.deprecate(d)
else:
self.deprecate(kwargs['deprecations'])
if self._deprecations:
kwargs['deprecations'] = self._deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, **kwargs):
''' return from the module, with an error message '''
assert 'msg' in kwargs, "implementation error -- msg to explain the error is required"
kwargs['failed'] = True
# add traceback if debug or high verbosity and it is missing
# Note: badly named as exception, it is really always been 'traceback'
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
''' This is for checking required params when we cannot check via the argspec because
we need more information than is simply given in the argspec.
'''
if not required_params:
return
missing_params = []
for required_param in required_params:
if not self.params.get(required_param):
missing_params.append(required_param)
if missing_params:
self.fail_json(msg="missing required arguments: %s" % ','.join(missing_params))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
if not os.path.exists(filename):
return None
if os.path.isdir(filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
# a context manager guarantees the handle is closed even if update() raises
with open(os.path.realpath(filename), 'rb') as infile:
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
return digest_method.hexdigest()
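Reading in fixed-size blocks keeps memory usage flat no matter how large the file is. The same loop in compact form, behaviorally equivalent to the method above but written as a free function:

```python
import hashlib

def file_digest(path, algorithm='sha256', blocksize=64 * 1024):
    """Hex digest of a file, read in blocksize chunks."""
    digest = hashlib.new(algorithm)
    with open(path, 'rb') as infile:
        # iter(callable, sentinel) calls read() until it returns b''.
        for block in iter(lambda: infile.read(blocksize), b''):
            digest.update(block)
    return digest.hexdigest()
```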
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file, return True or False on success or failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
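The backup name encodes the PID and a local timestamp so that repeated runs never collide. A sketch of just the naming step used above:

```python
import os
import time

def backup_name(path):
    # basename.PID.YYYY-MM-DD@HH:MM:SS~ , matching the comment above
    ext = time.strftime('%Y-%m-%d@%H:%M:%S~', time.localtime(time.time()))
    return '%s.%s.%s' % (path, os.getpid(), ext)

print(backup_name('/etc/motd'))
```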
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', [])
current_attribs = ''.join(current_attribs)
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest; returns True on success.
os.rename is used because it is an atomic operation; the rest of the function works
around limitations and corner cases, and saves the selinux context where possible'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (operation not permitted),
# 13 (permission denied), 16 (device or resource busy) and 26 (text file busy),
# which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
b_dest_dir = os.path.dirname(b_dest)
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
native_dest_dir = b_dest_dir
native_suffix = os.path.basename(b_dest)
native_prefix = b('.ansible_tmp')
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=native_prefix, dir=native_dest_dir, suffix=native_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating temp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tempdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to rename file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
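The happy path of `atomic_move` — write to a temporary file in the destination directory, then `os.rename` over the target — is the standard recipe for atomic replacement on POSIX, because a rename within one filesystem is atomic. A minimal version without the ownership, umask, and SELinux handling:

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` without readers ever seeing a partial file."""
    dirname = os.path.dirname(path) or '.'
    # The temp file must live on the same filesystem as the target,
    # otherwise rename degrades to a copy and loses atomicity.
    fd, tmp = tempfile.mkstemp(prefix='.ansible_tmp', dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as handle:
            handle.write(data)
        os.rename(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```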
def _unsafe_writes(self, src, dest):
# Sadly there are some situations where we cannot ensure atomicity; if the user
# insists and we get the appropriate error, we update the file unsafely.
# pre-initialize the handles so the finally block cannot hit a NameError
# when the first open() itself fails
out_dest = in_src = None
try:
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _read_from_pipes(self, rpipes, rfds, file_descriptor):
data = b('')
if file_descriptor in rfds:
data = os.read(file_descriptor.fileno(), 9000)
if data == b(''):
rpipes.remove(file_descriptor)
return data
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict'):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
if isinstance(args, list):
if use_unsafe_shell:
args = " ".join([shlex_quote(x) for x in args])
shell = True
elif isinstance(args, (binary_type, text_type)) and use_unsafe_shell:
shell = True
elif isinstance(args, (binary_type, text_type)):
if not use_unsafe_shell:
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
else:
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
if executable is None:
executable = os.environ.get('SHELL')
if executable:
args = [executable, '-c', args]
else:
shell = True
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
# expand things like $HOME and ~
if not shell:
args = [os.path.expanduser(os.path.expandvars(x)) for x in args if x is not None]
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module and explode, the remote lib path will resemble ...
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system ...
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
# create a printable version of the command for use
# in reporting later, which strips out things like
# passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
clean_args = ' '.join(shlex_quote(arg) for arg in clean_args)
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = os.path.abspath(os.path.expanduser(cwd))
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + clean_args)
cmd = subprocess.Popen(args, **kwargs)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b('')
stderr = b('')
rpipes = [cmd.stdout, cmd.stderr]
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
rfds, wfds, efds = select.select(rpipes, [], rpipes, 1)
stdout += self._read_from_pipes(rpipes, rfds, cmd.stdout)
stderr += self._read_from_pipes(rpipes, rfds, cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not rpipes or not rfds) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if rpipes is empty
elif not rpipes and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (clean_args, to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=clean_args)
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (clean_args, to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=clean_args)
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=clean_args, rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
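Stripped of the environment juggling, prompt detection, and logging, the contract of `run_command` is "give me `(rc, stdout, stderr)`". A compact sketch of that core using `subprocess` directly, shell-free and list-args only (not a drop-in replacement, just the shape of the return value):

```python
import subprocess

def simple_run(argv, data=None):
    """Run argv without a shell; return (rc, stdout, stderr) as text."""
    proc = subprocess.Popen(argv,
                            stdin=subprocess.PIPE if data is not None else None,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    # communicate() feeds stdin (if any) and drains both pipes without deadlock
    stdout, stderr = proc.communicate(data)
    return proc.returncode, stdout.decode('utf-8'), stderr.decode('utf-8')
```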
def append_to_file(self, filename, text):
filename = os.path.expandvars(os.path.expanduser(filename))
with open(filename, 'a') as fh:
fh.write(text)
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
| nrwahl2/ansible | lib/ansible/module_utils/basic.py | Python | gpl-3.0 | 112,326 | ["VisIt"] | 1dc2ec95757abdc9c425efb69e51c4145cfebd9f5aea8abfc6f67d326341fac8 |
# -*- coding: utf-8 -*-
"""
Regression tests for the Test Client, especially the customized assertions.
"""
from __future__ import unicode_literals
import os
import itertools
from django.conf import settings
from django.core.urlresolvers import reverse, NoReverseMatch
from django.template import (TemplateSyntaxError,
Context, Template, loader)
import django.template.context
from django.test import Client, TestCase, override_settings
from django.test.client import encode_file, RequestFactory
from django.test.utils import ContextList, str_prefix
from django.template.response import SimpleTemplateResponse
from django.utils._os import upath
from django.utils.translation import ugettext_lazy
from django.http import HttpResponse
from django.contrib.auth.signals import user_logged_out, user_logged_in
from django.contrib.auth.models import User
from .models import CustomUser
from .views import CustomTestException
@override_settings(
TEMPLATE_DIRS=(os.path.join(os.path.dirname(upath(__file__)), 'templates'),),
ROOT_URLCONF='test_client_regress.urls',
)
class AssertContainsTests(TestCase):
def test_contains(self):
"Responses can be inspected for content, including counting repeated substrings"
response = self.client.get('/no_template_view/')
self.assertNotContains(response, 'never')
self.assertContains(response, 'never', 0)
self.assertContains(response, 'once')
self.assertContains(response, 'once', 1)
self.assertContains(response, 'twice')
self.assertContains(response, 'twice', 2)
try:
self.assertContains(response, 'text', status_code=999)
except AssertionError as e:
self.assertIn("Couldn't retrieve content: Response code was 200 (expected 999)", str(e))
try:
self.assertContains(response, 'text', status_code=999, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Couldn't retrieve content: Response code was 200 (expected 999)", str(e))
try:
self.assertNotContains(response, 'text', status_code=999)
except AssertionError as e:
self.assertIn("Couldn't retrieve content: Response code was 200 (expected 999)", str(e))
try:
self.assertNotContains(response, 'text', status_code=999, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Couldn't retrieve content: Response code was 200 (expected 999)", str(e))
try:
self.assertNotContains(response, 'once')
except AssertionError as e:
self.assertIn("Response should not contain 'once'", str(e))
try:
self.assertNotContains(response, 'once', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Response should not contain 'once'", str(e))
try:
self.assertContains(response, 'never', 1)
except AssertionError as e:
self.assertIn("Found 0 instances of 'never' in response (expected 1)", str(e))
try:
self.assertContains(response, 'never', 1, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Found 0 instances of 'never' in response (expected 1)", str(e))
try:
self.assertContains(response, 'once', 0)
except AssertionError as e:
self.assertIn("Found 1 instances of 'once' in response (expected 0)", str(e))
try:
self.assertContains(response, 'once', 0, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Found 1 instances of 'once' in response (expected 0)", str(e))
try:
self.assertContains(response, 'once', 2)
except AssertionError as e:
self.assertIn("Found 1 instances of 'once' in response (expected 2)", str(e))
try:
self.assertContains(response, 'once', 2, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Found 1 instances of 'once' in response (expected 2)", str(e))
try:
self.assertContains(response, 'twice', 1)
except AssertionError as e:
self.assertIn("Found 2 instances of 'twice' in response (expected 1)", str(e))
try:
self.assertContains(response, 'twice', 1, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Found 2 instances of 'twice' in response (expected 1)", str(e))
try:
self.assertContains(response, 'thrice')
except AssertionError as e:
self.assertIn("Couldn't find 'thrice' in response", str(e))
try:
self.assertContains(response, 'thrice', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Couldn't find 'thrice' in response", str(e))
try:
self.assertContains(response, 'thrice', 3)
except AssertionError as e:
self.assertIn("Found 0 instances of 'thrice' in response (expected 3)", str(e))
try:
self.assertContains(response, 'thrice', 3, msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Found 0 instances of 'thrice' in response (expected 3)", str(e))
def test_unicode_contains(self):
"Unicode characters can be found in template context"
# Regression test for #10183
r = self.client.get('/check_unicode/')
self.assertContains(r, 'さかき')
self.assertContains(r, b'\xe5\xb3\xa0'.decode('utf-8'))
def test_unicode_not_contains(self):
"Unicode characters can be searched for, and not found in template context"
# Regression test for #10183
r = self.client.get('/check_unicode/')
self.assertNotContains(r, 'はたけ')
self.assertNotContains(r, b'\xe3\x81\xaf\xe3\x81\x9f\xe3\x81\x91'.decode('utf-8'))
def test_binary_contains(self):
r = self.client.get('/check_binary/')
self.assertContains(r, b'%PDF-1.4\r\n%\x93\x8c\x8b\x9e')
with self.assertRaises(AssertionError):
self.assertContains(r, b'%PDF-1.4\r\n%\x93\x8c\x8b\x9e', count=2)
def test_binary_not_contains(self):
r = self.client.get('/check_binary/')
self.assertNotContains(r, b'%ODF-1.4\r\n%\x93\x8c\x8b\x9e')
with self.assertRaises(AssertionError):
self.assertNotContains(r, b'%PDF-1.4\r\n%\x93\x8c\x8b\x9e')
def test_nontext_contains(self):
r = self.client.get('/no_template_view/')
self.assertContains(r, ugettext_lazy('once'))
def test_nontext_not_contains(self):
r = self.client.get('/no_template_view/')
self.assertNotContains(r, ugettext_lazy('never'))
def test_assert_contains_renders_template_response(self):
""" Test that we can pass in an unrendered SimpleTemplateReponse
without throwing an error.
Refs #15826.
"""
response = SimpleTemplateResponse(Template('Hello'), status=200)
self.assertContains(response, 'Hello')
def test_assert_contains_using_non_template_response(self):
""" Test that auto-rendering does not affect responses that aren't
instances (or subclasses) of SimpleTemplateResponse.
Refs #15826.
"""
response = HttpResponse('Hello')
self.assertContains(response, 'Hello')
def test_assert_not_contains_renders_template_response(self):
""" Test that we can pass in an unrendered SimpleTemplateReponse
without throwing an error.
Refs #15826.
"""
response = SimpleTemplateResponse(Template('Hello'), status=200)
self.assertNotContains(response, 'Bye')
def test_assert_not_contains_using_non_template_response(self):
""" Test that auto-rendering does not affect responses that aren't
instances (or subclasses) of SimpleTemplateResponse.
Refs #15826.
"""
response = HttpResponse('Hello')
self.assertNotContains(response, 'Bye')
@override_settings(PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',),
ROOT_URLCONF='test_client_regress.urls',)
class AssertTemplateUsedTests(TestCase):
fixtures = ['testdata.json']
def test_no_context(self):
"Template usage assertions work then templates aren't in use"
response = self.client.get('/no_template_view/')
# Check that the no template case doesn't mess with the template assertions
self.assertTemplateNotUsed(response, 'GET Template')
try:
self.assertTemplateUsed(response, 'GET Template')
except AssertionError as e:
self.assertIn("No templates used to render the response", str(e))
try:
self.assertTemplateUsed(response, 'GET Template', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: No templates used to render the response", str(e))
def test_single_context(self):
"Template assertions work when there is a single context"
response = self.client.get('/post_view/', {})
try:
self.assertTemplateNotUsed(response, 'Empty GET Template')
except AssertionError as e:
self.assertIn("Template 'Empty GET Template' was used unexpectedly in rendering the response", str(e))
try:
self.assertTemplateNotUsed(response, 'Empty GET Template', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Template 'Empty GET Template' was used unexpectedly in rendering the response", str(e))
try:
self.assertTemplateUsed(response, 'Empty POST Template')
except AssertionError as e:
self.assertIn("Template 'Empty POST Template' was not a template used to render the response. Actual template(s) used: Empty GET Template", str(e))
try:
self.assertTemplateUsed(response, 'Empty POST Template', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Template 'Empty POST Template' was not a template used to render the response. Actual template(s) used: Empty GET Template", str(e))
def test_multiple_context(self):
"Template assertions work when there are multiple contexts"
post_data = {
'text': 'Hello World',
'email': 'foo@example.com',
'value': 37,
'single': 'b',
'multi': ('b', 'c', 'e')
}
response = self.client.post('/form_view_with_template/', post_data)
self.assertContains(response, 'POST data OK')
try:
self.assertTemplateNotUsed(response, "form_view.html")
except AssertionError as e:
self.assertIn("Template 'form_view.html' was used unexpectedly in rendering the response", str(e))
try:
self.assertTemplateNotUsed(response, 'base.html')
except AssertionError as e:
self.assertIn("Template 'base.html' was used unexpectedly in rendering the response", str(e))
try:
self.assertTemplateUsed(response, "Valid POST Template")
except AssertionError as e:
self.assertIn("Template 'Valid POST Template' was not a template used to render the response. Actual template(s) used: form_view.html, base.html", str(e))
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class AssertRedirectsTests(TestCase):
def test_redirect_page(self):
"An assertion is raised if the original page couldn't be retrieved as expected"
# This page will redirect with code 301, not 302
response = self.client.get('/permanent_redirect_view/')
try:
self.assertRedirects(response, '/get_view/')
except AssertionError as e:
self.assertIn("Response didn't redirect as expected: Response code was 301 (expected 302)", str(e))
try:
self.assertRedirects(response, '/get_view/', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Response didn't redirect as expected: Response code was 301 (expected 302)", str(e))
def test_lost_query(self):
"An assertion is raised if the redirect location doesn't preserve GET parameters"
response = self.client.get('/redirect_view/', {'var': 'value'})
try:
self.assertRedirects(response, '/get_view/')
except AssertionError as e:
self.assertIn("Response redirected to 'http://testserver/get_view/?var=value', expected 'http://testserver/get_view/'", str(e))
try:
self.assertRedirects(response, '/get_view/', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Response redirected to 'http://testserver/get_view/?var=value', expected 'http://testserver/get_view/'", str(e))
def test_incorrect_target(self):
"An assertion is raised if the response redirects to another target"
response = self.client.get('/permanent_redirect_view/')
try:
# Should redirect to get_view
self.assertRedirects(response, '/some_view/')
except AssertionError as e:
self.assertIn("Response didn't redirect as expected: Response code was 301 (expected 302)", str(e))
def test_target_page(self):
"An assertion is raised if the response redirect target cannot be retrieved as expected"
response = self.client.get('/double_redirect_view/')
try:
# The redirect target responds with a 301 code, not 200
self.assertRedirects(response, 'http://testserver/permanent_redirect_view/')
except AssertionError as e:
self.assertIn("Couldn't retrieve redirection page '/permanent_redirect_view/': response code was 301 (expected 200)", str(e))
try:
# The redirect target responds with a 301 code, not 200
self.assertRedirects(response, 'http://testserver/permanent_redirect_view/', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Couldn't retrieve redirection page '/permanent_redirect_view/': response code was 301 (expected 200)", str(e))
def test_redirect_chain(self):
"You can follow a redirect chain of multiple redirects"
response = self.client.get('/redirects/further/more/', {}, follow=True)
self.assertRedirects(response, '/no_template_view/',
status_code=301, target_status_code=200)
self.assertEqual(len(response.redirect_chain), 1)
self.assertEqual(response.redirect_chain[0], ('http://testserver/no_template_view/', 301))
def test_multiple_redirect_chain(self):
"You can follow a redirect chain of multiple redirects"
response = self.client.get('/redirects/', {}, follow=True)
self.assertRedirects(response, '/no_template_view/',
status_code=301, target_status_code=200)
self.assertEqual(len(response.redirect_chain), 3)
self.assertEqual(response.redirect_chain[0], ('http://testserver/redirects/further/', 301))
self.assertEqual(response.redirect_chain[1], ('http://testserver/redirects/further/more/', 301))
self.assertEqual(response.redirect_chain[2], ('http://testserver/no_template_view/', 301))
def test_redirect_chain_to_non_existent(self):
"You can follow a chain to a non-existent view"
response = self.client.get('/redirect_to_non_existent_view2/', {}, follow=True)
self.assertRedirects(response, '/non_existent_view/',
status_code=301, target_status_code=404)
def test_redirect_chain_to_self(self):
"Redirections to self are caught and escaped"
response = self.client.get('/redirect_to_self/', {}, follow=True)
# The chain of redirects stops once the cycle is detected.
self.assertRedirects(response, '/redirect_to_self/',
status_code=301, target_status_code=301)
self.assertEqual(len(response.redirect_chain), 2)
def test_circular_redirect(self):
"Circular redirect chains are caught and escaped"
response = self.client.get('/circular_redirect_1/', {}, follow=True)
# The chain of redirects will get back to the starting point, but stop there.
self.assertRedirects(response, '/circular_redirect_2/',
status_code=301, target_status_code=301)
self.assertEqual(len(response.redirect_chain), 4)
def test_redirect_chain_post(self):
"A redirect chain will be followed from an initial POST post"
response = self.client.post('/redirects/',
{'nothing': 'to_send'}, follow=True)
self.assertRedirects(response,
'/no_template_view/', 301, 200)
self.assertEqual(len(response.redirect_chain), 3)
def test_redirect_chain_head(self):
"A redirect chain will be followed from an initial HEAD request"
response = self.client.head('/redirects/',
{'nothing': 'to_send'}, follow=True)
self.assertRedirects(response,
'/no_template_view/', 301, 200)
self.assertEqual(len(response.redirect_chain), 3)
def test_redirect_chain_options(self):
"A redirect chain will be followed from an initial OPTIONS request"
response = self.client.options('/redirects/',
follow=True)
self.assertRedirects(response,
'/no_template_view/', 301, 200)
self.assertEqual(len(response.redirect_chain), 3)
def test_redirect_chain_put(self):
"A redirect chain will be followed from an initial PUT request"
response = self.client.put('/redirects/',
follow=True)
self.assertRedirects(response,
'/no_template_view/', 301, 200)
self.assertEqual(len(response.redirect_chain), 3)
def test_redirect_chain_delete(self):
"A redirect chain will be followed from an initial DELETE request"
response = self.client.delete('/redirects/',
follow=True)
self.assertRedirects(response,
'/no_template_view/', 301, 200)
self.assertEqual(len(response.redirect_chain), 3)
def test_redirect_to_different_host(self):
"The test client will preserve scheme, host and port changes"
response = self.client.get('/redirect_other_host/', follow=True)
self.assertRedirects(response,
'https://otherserver:8443/no_template_view/',
status_code=301, target_status_code=200)
# We can't use is_secure() or get_host()
# because response.request is a dictionary, not an HttpRequest
self.assertEqual(response.request.get('wsgi.url_scheme'), 'https')
self.assertEqual(response.request.get('SERVER_NAME'), 'otherserver')
self.assertEqual(response.request.get('SERVER_PORT'), '8443')
def test_redirect_chain_on_non_redirect_page(self):
"An assertion is raised if the original page couldn't be retrieved as expected"
        # This page doesn't redirect; it responds with code 200
response = self.client.get('/get_view/', follow=True)
try:
self.assertRedirects(response, '/get_view/')
except AssertionError as e:
self.assertIn("Response didn't redirect as expected: Response code was 200 (expected 302)", str(e))
try:
self.assertRedirects(response, '/get_view/', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Response didn't redirect as expected: Response code was 200 (expected 302)", str(e))
def test_redirect_on_non_redirect_page(self):
"An assertion is raised if the original page couldn't be retrieved as expected"
        # This page doesn't redirect; it responds with code 200
response = self.client.get('/get_view/')
try:
self.assertRedirects(response, '/get_view/')
except AssertionError as e:
self.assertIn("Response didn't redirect as expected: Response code was 200 (expected 302)", str(e))
try:
self.assertRedirects(response, '/get_view/', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: Response didn't redirect as expected: Response code was 200 (expected 302)", str(e))
def test_redirect_scheme(self):
"An assertion is raised if the response doesn't have the scheme specified in expected_url"
        # Ensure that the original request scheme is preserved if no scheme is specified in the redirect location
response = self.client.get('/redirect_view/', secure=True)
self.assertRedirects(response, 'https://testserver/get_view/')
# For all possible True/False combinations of follow and secure
for follow, secure in itertools.product([True, False], repeat=2):
# always redirects to https
response = self.client.get('/https_redirect_view/', follow=follow, secure=secure)
            # no scheme to compare to, so the assertion always succeeds
self.assertRedirects(response, '/secure_view/', status_code=301)
# the goal scheme is https
self.assertRedirects(response, 'https://testserver/secure_view/', status_code=301)
with self.assertRaises(AssertionError):
self.assertRedirects(response, 'http://testserver/secure_view/', status_code=301)
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class AssertFormErrorTests(TestCase):
def test_unknown_form(self):
"An assertion is raised if the form name is unknown"
post_data = {
'text': 'Hello World',
'email': 'not an email address',
'value': 37,
'single': 'b',
'multi': ('b', 'c', 'e')
}
response = self.client.post('/form_view/', post_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "Invalid POST Template")
try:
self.assertFormError(response, 'wrong_form', 'some_field', 'Some error.')
except AssertionError as e:
self.assertIn("The form 'wrong_form' was not used to render the response", str(e))
try:
self.assertFormError(response, 'wrong_form', 'some_field', 'Some error.', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: The form 'wrong_form' was not used to render the response", str(e))
def test_unknown_field(self):
"An assertion is raised if the field name is unknown"
post_data = {
'text': 'Hello World',
'email': 'not an email address',
'value': 37,
'single': 'b',
'multi': ('b', 'c', 'e')
}
response = self.client.post('/form_view/', post_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "Invalid POST Template")
try:
self.assertFormError(response, 'form', 'some_field', 'Some error.')
except AssertionError as e:
self.assertIn("The form 'form' in context 0 does not contain the field 'some_field'", str(e))
try:
self.assertFormError(response, 'form', 'some_field', 'Some error.', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: The form 'form' in context 0 does not contain the field 'some_field'", str(e))
def test_noerror_field(self):
"An assertion is raised if the field doesn't have any errors"
post_data = {
'text': 'Hello World',
'email': 'not an email address',
'value': 37,
'single': 'b',
'multi': ('b', 'c', 'e')
}
response = self.client.post('/form_view/', post_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "Invalid POST Template")
try:
self.assertFormError(response, 'form', 'value', 'Some error.')
except AssertionError as e:
self.assertIn("The field 'value' on form 'form' in context 0 contains no errors", str(e))
try:
self.assertFormError(response, 'form', 'value', 'Some error.', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: The field 'value' on form 'form' in context 0 contains no errors", str(e))
def test_unknown_error(self):
"An assertion is raised if the field doesn't contain the provided error"
post_data = {
'text': 'Hello World',
'email': 'not an email address',
'value': 37,
'single': 'b',
'multi': ('b', 'c', 'e')
}
response = self.client.post('/form_view/', post_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "Invalid POST Template")
try:
self.assertFormError(response, 'form', 'email', 'Some error.')
except AssertionError as e:
self.assertIn(str_prefix("The field 'email' on form 'form' in context 0 does not contain the error 'Some error.' (actual errors: [%(_)s'Enter a valid email address.'])"), str(e))
try:
self.assertFormError(response, 'form', 'email', 'Some error.', msg_prefix='abc')
except AssertionError as e:
self.assertIn(str_prefix("abc: The field 'email' on form 'form' in context 0 does not contain the error 'Some error.' (actual errors: [%(_)s'Enter a valid email address.'])"), str(e))
def test_unknown_nonfield_error(self):
"""
        Checks that an assertion is raised if the form's non-field errors
        don't contain the provided error.
"""
post_data = {
'text': 'Hello World',
'email': 'not an email address',
'value': 37,
'single': 'b',
'multi': ('b', 'c', 'e')
}
response = self.client.post('/form_view/', post_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "Invalid POST Template")
try:
self.assertFormError(response, 'form', None, 'Some error.')
except AssertionError as e:
self.assertIn("The form 'form' in context 0 does not contain the non-field error 'Some error.' (actual errors: )", str(e))
try:
self.assertFormError(response, 'form', None, 'Some error.', msg_prefix='abc')
except AssertionError as e:
self.assertIn("abc: The form 'form' in context 0 does not contain the non-field error 'Some error.' (actual errors: )", str(e))
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class AssertFormsetErrorTests(TestCase):
msg_prefixes = [("", {}), ("abc: ", {"msg_prefix": "abc"})]
def setUp(self):
"""Makes response object for testing field and non-field errors"""
# For testing field and non-field errors
self.response_form_errors = self.getResponse({
'form-TOTAL_FORMS': '2',
'form-INITIAL_FORMS': '2',
'form-0-text': 'Raise non-field error',
'form-0-email': 'not an email address',
'form-0-value': 37,
'form-0-single': 'b',
'form-0-multi': ('b', 'c', 'e'),
'form-1-text': 'Hello World',
'form-1-email': 'email@domain.com',
'form-1-value': 37,
'form-1-single': 'b',
'form-1-multi': ('b', 'c', 'e'),
})
# For testing non-form errors
self.response_nonform_errors = self.getResponse({
'form-TOTAL_FORMS': '2',
'form-INITIAL_FORMS': '2',
'form-0-text': 'Hello World',
'form-0-email': 'email@domain.com',
'form-0-value': 37,
'form-0-single': 'b',
'form-0-multi': ('b', 'c', 'e'),
'form-1-text': 'Hello World',
'form-1-email': 'email@domain.com',
'form-1-value': 37,
'form-1-single': 'b',
'form-1-multi': ('b', 'c', 'e'),
})
def getResponse(self, post_data):
response = self.client.post('/formset_view/', post_data)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, "Invalid POST Template")
return response
def test_unknown_formset(self):
"An assertion is raised if the formset name is unknown"
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'wrong_formset',
0,
'Some_field',
'Some error.',
**kwargs)
self.assertIn(prefix + "The formset 'wrong_formset' was not "
"used to render the response",
str(cm.exception))
def test_unknown_field(self):
"An assertion is raised if the field name is unknown"
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'my_formset',
0,
'Some_field',
'Some error.',
**kwargs)
self.assertIn(prefix + "The formset 'my_formset', "
"form 0 in context 0 "
"does not contain the field 'Some_field'",
str(cm.exception))
def test_no_error_field(self):
"An assertion is raised if the field doesn't have any errors"
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'my_formset',
1,
'value',
'Some error.',
**kwargs)
self.assertIn(prefix + "The field 'value' "
"on formset 'my_formset', form 1 "
"in context 0 contains no errors",
str(cm.exception))
def test_unknown_error(self):
"An assertion is raised if the field doesn't contain the specified error"
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'my_formset',
0,
'email',
'Some error.',
**kwargs)
self.assertIn(str_prefix(prefix + "The field 'email' "
"on formset 'my_formset', form 0 in context 0 does not "
"contain the error 'Some error.' (actual errors: "
"[%(_)s'Enter a valid email address.'])"),
str(cm.exception))
def test_field_error(self):
"No assertion is raised if the field contains the provided error"
for prefix, kwargs in self.msg_prefixes:
self.assertFormsetError(self.response_form_errors,
'my_formset',
0,
'email',
['Enter a valid email address.'],
**kwargs)
def test_no_nonfield_error(self):
"An assertion is raised if the formsets non-field errors doesn't contain any errors."
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'my_formset',
1,
None,
'Some error.',
**kwargs)
self.assertIn(prefix + "The formset 'my_formset', form 1 in "
"context 0 does not contain any "
"non-field errors.",
str(cm.exception))
def test_unknown_nonfield_error(self):
"An assertion is raised if the formsets non-field errors doesn't contain the provided error."
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'my_formset',
0,
None,
'Some error.',
**kwargs)
self.assertIn(str_prefix(prefix +
"The formset 'my_formset', form 0 in context 0 does not "
"contain the non-field error 'Some error.' (actual errors: "
"[%(_)s'Non-field error.'])"), str(cm.exception))
def test_nonfield_error(self):
"No assertion is raised if the formsets non-field errors contains the provided error."
for prefix, kwargs in self.msg_prefixes:
self.assertFormsetError(self.response_form_errors,
'my_formset',
0,
None,
'Non-field error.',
**kwargs)
def test_no_nonform_error(self):
"An assertion is raised if the formsets non-form errors doesn't contain any errors."
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_form_errors,
'my_formset',
None,
None,
'Some error.',
**kwargs)
self.assertIn(prefix + "The formset 'my_formset' in context 0 "
"does not contain any non-form errors.",
str(cm.exception))
def test_unknown_nonform_error(self):
"An assertion is raised if the formsets non-form errors doesn't contain the provided error."
for prefix, kwargs in self.msg_prefixes:
with self.assertRaises(AssertionError) as cm:
self.assertFormsetError(self.response_nonform_errors,
'my_formset',
None,
None,
'Some error.',
**kwargs)
self.assertIn(str_prefix(prefix +
"The formset 'my_formset' in context 0 does not contain the "
"non-form error 'Some error.' (actual errors: [%(_)s'Forms "
"in a set must have distinct email addresses.'])"), str(cm.exception))
def test_nonform_error(self):
"No assertion is raised if the formsets non-form errors contains the provided error."
for prefix, kwargs in self.msg_prefixes:
self.assertFormsetError(self.response_nonform_errors,
'my_formset',
None,
None,
'Forms in a set must have distinct email '
'addresses.',
**kwargs)
class ProcessedMiddleware(object):
def process_request(self, request):
request.has_been_processed = True
@override_settings(PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',),
ROOT_URLCONF='test_client_regress.urls',)
class LoginTests(TestCase):
fixtures = ['testdata']
def test_login_different_client(self):
"Check that using a different test client doesn't violate authentication"
# Create a second client, and log in.
c = Client()
login = c.login(username='testclient', password='password')
self.assertTrue(login, 'Could not log in')
# Get a redirection page with the second client.
response = c.get("/login_protected_redirect_view/")
        # At this point, the self.client isn't logged in.
# Check that assertRedirects uses the original client, not the
# default client.
self.assertRedirects(response, "http://testserver/get_view/")
@override_settings(
MIDDLEWARE_CLASSES=list(settings.MIDDLEWARE_CLASSES) +
['test_client_regress.tests.ProcessedMiddleware'])
def test_request_middleware(self):
"Check that the request middleware is executed on login request"
def listener(sender, signal, **kwargs):
request = kwargs['request']
self.assertTrue(hasattr(request, 'has_been_processed'))
        # Unlike other Client request-performing methods, login and logout
        # don't return the response, so we must use signals to access it.
user_logged_in.connect(listener)
try:
self.client.login(username='testclient', password='password')
finally:
user_logged_in.disconnect(listener)
@override_settings(
PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',),
SESSION_ENGINE='test_client_regress.session',
ROOT_URLCONF='test_client_regress.urls',
)
class SessionEngineTests(TestCase):
fixtures = ['testdata']
def test_login(self):
"A session engine that modifies the session key can be used to log in"
login = self.client.login(username='testclient', password='password')
self.assertTrue(login, 'Could not log in')
# Try to access a login protected page.
response = self.client.get("/login_protected_view/")
self.assertEqual(response.status_code, 200)
self.assertEqual(response.context['user'].username, 'testclient')
@override_settings(ROOT_URLCONF='test_client_regress.urls',)
class URLEscapingTests(TestCase):
def test_simple_argument_get(self):
"Get a view that has a simple string argument"
response = self.client.get(reverse('arg_view', args=['Slartibartfast']))
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'Howdy, Slartibartfast')
def test_argument_with_space_get(self):
"Get a view that has a string argument that requires escaping"
response = self.client.get(reverse('arg_view', args=['Arthur Dent']))
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'Hi, Arthur')
def test_simple_argument_post(self):
"Post for a view that has a simple string argument"
response = self.client.post(reverse('arg_view', args=['Slartibartfast']))
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'Howdy, Slartibartfast')
def test_argument_with_space_post(self):
"Post for a view that has a string argument that requires escaping"
response = self.client.post(reverse('arg_view', args=['Arthur Dent']))
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'Hi, Arthur')
@override_settings(PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',),
ROOT_URLCONF='test_client_regress.urls',)
class ExceptionTests(TestCase):
fixtures = ['testdata.json']
def test_exception_cleared(self):
"#5836 - A stale user exception isn't re-raised by the test client."
login = self.client.login(username='testclient', password='password')
self.assertTrue(login, 'Could not log in')
try:
self.client.get("/staff_only/")
self.fail("General users should not be able to visit this page")
except CustomTestException:
pass
# At this point, an exception has been raised, and should be cleared.
# This next operation should be successful; if it isn't we have a problem.
login = self.client.login(username='staff', password='password')
self.assertTrue(login, 'Could not log in')
try:
self.client.get("/staff_only/")
except CustomTestException:
self.fail("Staff should be able to visit this page")
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class TemplateExceptionTests(TestCase):
def setUp(self):
# Reset the loaders so they don't try to render cached templates.
if loader.template_source_loaders is not None:
for template_loader in loader.template_source_loaders:
if hasattr(template_loader, 'reset'):
template_loader.reset()
@override_settings(
TEMPLATE_DIRS=(os.path.join(os.path.dirname(upath(__file__)), 'bad_templates'),)
)
def test_bad_404_template(self):
"Errors found when rendering 404 error templates are re-raised"
try:
self.client.get("/no_such_view/")
self.fail("Should get error about syntax error in template")
except TemplateSyntaxError:
pass
# We need two different tests to check URLconf substitution - one to check
# it was changed, and another one (without the overridden ROOT_URLCONF) to check
# it was reverted on teardown. This pair of tests relies upon the alphabetical
# ordering of test execution.
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class UrlconfSubstitutionTests(TestCase):
def test_urlconf_was_changed(self):
"TestCase can enforce a custom URLconf on a per-test basis"
url = reverse('arg_view', args=['somename'])
self.assertEqual(url, '/arg_view/somename/')
# This test needs to run *after* UrlconfSubstitutionTests; the zz prefix in the
# name is to ensure alphabetical ordering.
class zzUrlconfSubstitutionTests(TestCase):
def test_urlconf_was_reverted(self):
"""URLconf is reverted to original value after modification in a TestCase
This will not find a match as the default ROOT_URLCONF is empty.
"""
with self.assertRaises(NoReverseMatch):
reverse('arg_view', args=['somename'])
@override_settings(PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',),
ROOT_URLCONF='test_client_regress.urls',)
class ContextTests(TestCase):
fixtures = ['testdata']
def test_single_context(self):
"Context variables can be retrieved from a single context"
response = self.client.get("/request_data/", data={'foo': 'whiz'})
self.assertEqual(response.context.__class__, Context)
self.assertTrue('get-foo' in response.context)
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['request-foo'], 'whiz')
self.assertEqual(response.context['data'], 'sausage')
try:
response.context['does-not-exist']
self.fail('Should not be able to retrieve non-existent key')
except KeyError as e:
self.assertEqual(e.args[0], 'does-not-exist')
def test_inherited_context(self):
"Context variables can be retrieved from a list of contexts"
response = self.client.get("/request_data_extended/", data={'foo': 'whiz'})
self.assertEqual(response.context.__class__, ContextList)
self.assertEqual(len(response.context), 2)
self.assertTrue('get-foo' in response.context)
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['request-foo'], 'whiz')
self.assertEqual(response.context['data'], 'bacon')
try:
response.context['does-not-exist']
self.fail('Should not be able to retrieve non-existent key')
except KeyError as e:
self.assertEqual(e.args[0], 'does-not-exist')
def test_contextlist_keys(self):
c1 = Context()
c1.update({'hello': 'world', 'goodbye': 'john'})
c1.update({'hello': 'dolly', 'dolly': 'parton'})
c2 = Context()
c2.update({'goodbye': 'world', 'python': 'rocks'})
c2.update({'goodbye': 'dolly'})
        contexts = ContextList([c1, c2])
        # None, True and False are builtins of BaseContext, and present
        # in every Context without needing to be added.
        self.assertEqual(set(['None', 'True', 'False', 'hello', 'goodbye',
                              'python', 'dolly']),
                         contexts.keys())
def test_15368(self):
# Need to insert a context processor that assumes certain things about
# the request instance. This triggers a bug caused by some ways of
# copying RequestContext.
try:
django.template.context._standard_context_processors = (lambda request: {'path': request.special_path},)
response = self.client.get("/request_context_view/")
self.assertContains(response, 'Path: /request_context_view/')
finally:
django.template.context._standard_context_processors = None
def test_nested_requests(self):
"""
        response.context is not lost when one view calls another view.
"""
response = self.client.get("/nested_view/")
self.assertEqual(response.context.__class__, Context)
self.assertEqual(response.context['nested'], 'yes')
@override_settings(PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',),
ROOT_URLCONF='test_client_regress.urls',)
class SessionTests(TestCase):
fixtures = ['testdata.json']
def test_session(self):
"The session isn't lost if a user logs in"
# The session doesn't exist to start.
response = self.client.get('/check_session/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'NO')
# This request sets a session variable.
response = self.client.get('/set_session/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'set_session')
# Check that the session has been modified
response = self.client.get('/check_session/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'YES')
# Log in
login = self.client.login(username='testclient', password='password')
self.assertTrue(login, 'Could not log in')
# Session should still contain the modified value
response = self.client.get('/check_session/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'YES')
def test_logout(self):
"""Logout should work whether the user is logged in or not (#9978)."""
self.client.logout()
login = self.client.login(username='testclient', password='password')
self.assertTrue(login, 'Could not log in')
self.client.logout()
self.client.logout()
def test_logout_with_user(self):
"""Logout should send user_logged_out signal if user was logged in."""
def listener(*args, **kwargs):
listener.executed = True
self.assertEqual(kwargs['sender'], User)
listener.executed = False
user_logged_out.connect(listener)
self.client.login(username='testclient', password='password')
self.client.logout()
user_logged_out.disconnect(listener)
self.assertTrue(listener.executed)
@override_settings(AUTH_USER_MODEL='test_client_regress.CustomUser')
def test_logout_with_custom_user(self):
"""Logout should send user_logged_out signal if custom user was logged in."""
def listener(*args, **kwargs):
self.assertEqual(kwargs['sender'], CustomUser)
listener.executed = True
listener.executed = False
u = CustomUser.custom_objects.create(email='test@test.com')
u.set_password('password')
u.save()
user_logged_out.connect(listener)
self.client.login(username='test@test.com', password='password')
self.client.logout()
user_logged_out.disconnect(listener)
self.assertTrue(listener.executed)
def test_logout_without_user(self):
"""Logout should send signal even if user not authenticated."""
def listener(user, *args, **kwargs):
listener.user = user
listener.executed = True
listener.executed = False
user_logged_out.connect(listener)
self.client.login(username='incorrect', password='password')
self.client.logout()
user_logged_out.disconnect(listener)
self.assertTrue(listener.executed)
self.assertIsNone(listener.user)
def test_login_with_user(self):
"""Login should send user_logged_in signal on successful login."""
def listener(*args, **kwargs):
listener.executed = True
listener.executed = False
user_logged_in.connect(listener)
self.client.login(username='testclient', password='password')
        user_logged_in.disconnect(listener)
self.assertTrue(listener.executed)
def test_login_without_signal(self):
"""Login shouldn't send signal if user wasn't logged in"""
def listener(*args, **kwargs):
listener.executed = True
listener.executed = False
user_logged_in.connect(listener)
self.client.login(username='incorrect', password='password')
user_logged_in.disconnect(listener)
self.assertFalse(listener.executed)
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class RequestMethodTests(TestCase):
def test_get(self):
"Request a view via request method GET"
response = self.client.get('/request_methods/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: GET')
def test_post(self):
"Request a view via request method POST"
response = self.client.post('/request_methods/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: POST')
def test_head(self):
"Request a view via request method HEAD"
response = self.client.head('/request_methods/')
self.assertEqual(response.status_code, 200)
# A HEAD request doesn't return any content.
self.assertNotEqual(response.content, b'request method: HEAD')
self.assertEqual(response.content, b'')
def test_options(self):
"Request a view via request method OPTIONS"
response = self.client.options('/request_methods/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: OPTIONS')
def test_put(self):
"Request a view via request method PUT"
response = self.client.put('/request_methods/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: PUT')
def test_delete(self):
"Request a view via request method DELETE"
response = self.client.delete('/request_methods/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: DELETE')
def test_patch(self):
"Request a view via request method PATCH"
response = self.client.patch('/request_methods/')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: PATCH')
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class RequestMethodStringDataTests(TestCase):
def test_post(self):
"Request a view with string data via request method POST"
# Regression test for #11371
data = '{"test": "json"}'
response = self.client.post('/request_methods/', data=data, content_type='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: POST')
def test_put(self):
"Request a view with string data via request method PUT"
# Regression test for #11371
data = '{"test": "json"}'
response = self.client.put('/request_methods/', data=data, content_type='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: PUT')
def test_patch(self):
"Request a view with string data via request method PATCH"
# Regression test for #17797
data = '{"test": "json"}'
response = self.client.patch('/request_methods/', data=data, content_type='application/json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, b'request method: PATCH')
@override_settings(ROOT_URLCONF='test_client_regress.urls',)
class QueryStringTests(TestCase):
def test_get_like_requests(self):
# See: https://code.djangoproject.com/ticket/10571.
for method_name in ('get', 'head'):
# A GET-like request can pass a query string as data
method = getattr(self.client, method_name)
response = method("/request_data/", data={'foo': 'whiz'})
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['request-foo'], 'whiz')
# A GET-like request can pass a query string as part of the URL
response = method("/request_data/?foo=whiz")
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['request-foo'], 'whiz')
# Data provided in the URL to a GET-like request is overridden by actual form data
response = method("/request_data/?foo=whiz", data={'foo': 'bang'})
self.assertEqual(response.context['get-foo'], 'bang')
self.assertEqual(response.context['request-foo'], 'bang')
response = method("/request_data/?foo=whiz", data={'bar': 'bang'})
self.assertEqual(response.context['get-foo'], None)
self.assertEqual(response.context['get-bar'], 'bang')
self.assertEqual(response.context['request-foo'], None)
self.assertEqual(response.context['request-bar'], 'bang')
def test_post_like_requests(self):
# A POST-like request can pass a query string as data
response = self.client.post("/request_data/", data={'foo': 'whiz'})
self.assertEqual(response.context['get-foo'], None)
self.assertEqual(response.context['post-foo'], 'whiz')
# A POST-like request can pass a query string as part of the URL
response = self.client.post("/request_data/?foo=whiz")
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['post-foo'], None)
self.assertEqual(response.context['request-foo'], 'whiz')
# POST data provided in the URL augments actual form data
response = self.client.post("/request_data/?foo=whiz", data={'foo': 'bang'})
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['post-foo'], 'bang')
self.assertEqual(response.context['request-foo'], 'bang')
response = self.client.post("/request_data/?foo=whiz", data={'bar': 'bang'})
self.assertEqual(response.context['get-foo'], 'whiz')
self.assertEqual(response.context['get-bar'], None)
self.assertEqual(response.context['post-foo'], None)
self.assertEqual(response.context['post-bar'], 'bang')
self.assertEqual(response.context['request-foo'], 'whiz')
self.assertEqual(response.context['request-bar'], 'bang')
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class UnicodePayloadTests(TestCase):
def test_simple_unicode_payload(self):
"A simple ASCII-only unicode JSON document can be POSTed"
# Regression test for #10571
json = '{"english": "mountain pass"}'
response = self.client.post("/parse_unicode_json/", json,
content_type="application/json")
self.assertEqual(response.content, json.encode())
def test_unicode_payload_utf8(self):
"A non-ASCII unicode data encoded as UTF-8 can be POSTed"
# Regression test for #10571
json = '{"dog": "собака"}'
response = self.client.post("/parse_unicode_json/", json,
content_type="application/json; charset=utf-8")
self.assertEqual(response.content, json.encode('utf-8'))
def test_unicode_payload_utf16(self):
"A non-ASCII unicode data encoded as UTF-16 can be POSTed"
# Regression test for #10571
json = '{"dog": "собака"}'
response = self.client.post("/parse_unicode_json/", json,
content_type="application/json; charset=utf-16")
self.assertEqual(response.content, json.encode('utf-16'))
def test_unicode_payload_non_utf(self):
"A non-ASCII unicode data as a non-UTF based encoding can be POSTed"
# Regression test for #10571
json = '{"dog": "собака"}'
response = self.client.post("/parse_unicode_json/", json,
content_type="application/json; charset=koi8-r")
self.assertEqual(response.content, json.encode('koi8-r'))
class DummyFile(object):
def __init__(self, filename):
self.name = filename
def read(self):
return b'TEST_FILE_CONTENT'
class UploadedFileEncodingTest(TestCase):
def test_file_encoding(self):
encoded_file = encode_file('TEST_BOUNDARY', 'TEST_KEY', DummyFile('test_name.bin'))
self.assertEqual(b'--TEST_BOUNDARY', encoded_file[0])
self.assertEqual(b'Content-Disposition: form-data; name="TEST_KEY"; filename="test_name.bin"', encoded_file[1])
self.assertEqual(b'TEST_FILE_CONTENT', encoded_file[-1])
def test_guesses_content_type_on_file_encoding(self):
self.assertEqual(b'Content-Type: application/octet-stream',
encode_file('IGNORE', 'IGNORE', DummyFile("file.bin"))[2])
self.assertEqual(b'Content-Type: text/plain',
encode_file('IGNORE', 'IGNORE', DummyFile("file.txt"))[2])
self.assertIn(encode_file('IGNORE', 'IGNORE', DummyFile("file.zip"))[2], (
b'Content-Type: application/x-compress',
b'Content-Type: application/x-zip',
b'Content-Type: application/x-zip-compressed',
b'Content-Type: application/zip',))
self.assertEqual(b'Content-Type: application/octet-stream',
encode_file('IGNORE', 'IGNORE', DummyFile("file.unknown"))[2])
@override_settings(ROOT_URLCONF='test_client_regress.urls',)
class RequestHeadersTest(TestCase):
fixtures = ['testdata']
def test_client_headers(self):
"A test client can receive custom headers"
response = self.client.get("/check_headers/", HTTP_X_ARG_CHECK='Testing 123')
self.assertEqual(response.content, b"HTTP_X_ARG_CHECK: Testing 123")
self.assertEqual(response.status_code, 200)
@override_settings(PASSWORD_HASHERS=('django.contrib.auth.hashers.SHA1PasswordHasher',))
def test_client_login_headers(self):
"Test client headers are used in login"
client = Client(HTTP_HOST='different')
def listener(sender, signal, **kwargs):
request = kwargs['request']
self.assertEqual(request.get_host(), 'different')
        # Unlike other request-performing Client methods, login() and logout()
        # don't return the response, so we must use signals to get at it.
user_logged_in.connect(listener)
try:
client.login(username='testclient', password='password')
finally:
user_logged_in.disconnect(listener)
def test_client_headers_redirect(self):
"Test client headers are preserved through redirects"
response = self.client.get("/check_headers_redirect/", follow=True, HTTP_X_ARG_CHECK='Testing 123')
self.assertEqual(response.content, b"HTTP_X_ARG_CHECK: Testing 123")
self.assertRedirects(response, '/check_headers/',
status_code=301, target_status_code=200)
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class ReadLimitedStreamTest(TestCase):
"""
Tests that ensure that HttpRequest.body, HttpRequest.read() and
HttpRequest.read(BUFFER) have proper LimitedStream behavior.
Refs #14753, #15785
"""
def test_body_from_empty_request(self):
"""HttpRequest.body on a test client GET request should return
the empty string."""
self.assertEqual(self.client.get("/body/").content, b'')
def test_read_from_empty_request(self):
"""HttpRequest.read() on a test client GET request should return the
empty string."""
self.assertEqual(self.client.get("/read_all/").content, b'')
def test_read_numbytes_from_empty_request(self):
"""HttpRequest.read(LARGE_BUFFER) on a test client GET request should
return the empty string."""
self.assertEqual(self.client.get("/read_buffer/").content, b'')
def test_read_from_nonempty_request(self):
"""HttpRequest.read() on a test client PUT request with some payload
should return that payload."""
payload = b'foobar'
self.assertEqual(self.client.put(
"/read_all/",
data=payload,
content_type='text/plain').content, payload)
def test_read_numbytes_from_nonempty_request(self):
"""HttpRequest.read(LARGE_BUFFER) on a test client PUT request with
some payload should return that payload."""
payload = b'foobar'
self.assertEqual(
self.client.put("/read_buffer/",
data=payload,
content_type='text/plain').content, payload)
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class RequestFactoryStateTest(TestCase):
"""Regression tests for #15929."""
# These tests are checking that certain middleware don't change certain
# global state. Alternatively, from the point of view of a test, they are
# ensuring test isolation behavior. So, unusually, it doesn't make sense to
# run the tests individually, and if any are failing it is confusing to run
# them with any other set of tests.
def common_test_that_should_always_pass(self):
request = RequestFactory().get('/')
request.session = {}
self.assertFalse(hasattr(request, 'user'))
def test_request(self):
self.common_test_that_should_always_pass()
def test_request_after_client(self):
        # Apart from the next line, the three tests are identical.
self.client.get('/')
self.common_test_that_should_always_pass()
def test_request_after_client_2(self):
# This test is executed after the previous one
self.common_test_that_should_always_pass()
@override_settings(ROOT_URLCONF='test_client_regress.urls')
class RequestFactoryEnvironmentTests(TestCase):
"""
Regression tests for #8551 and #17067: ensure that environment variables
are set correctly in RequestFactory.
"""
def test_should_set_correct_env_variables(self):
request = RequestFactory().get('/path/')
self.assertEqual(request.META.get('REMOTE_ADDR'), '127.0.0.1')
self.assertEqual(request.META.get('SERVER_NAME'), 'testserver')
self.assertEqual(request.META.get('SERVER_PORT'), '80')
self.assertEqual(request.META.get('SERVER_PROTOCOL'), 'HTTP/1.1')
self.assertEqual(request.META.get('SCRIPT_NAME') +
request.META.get('PATH_INFO'), '/path/')
| liavkoren/djangoDev | tests/test_client_regress/tests.py | Python | bsd-3-clause | 64,722 | [
"VisIt"
] | ae2da23a5c320e2a55b9705b0f866e5ba61b2b0c59456aff91f1d24b2dcc4ea8 |
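The signal tests above (`test_logout_with_user`, `test_login_with_user`, etc.) all follow the same connect → act → disconnect → assert pattern, using attributes on the listener function to record what happened. A minimal, framework-free sketch of that pattern — the `Signal` class below is a toy stand-in for illustration, not Django's actual `django.dispatch.Signal`:

```python
class Signal:
    """Toy stand-in for django.dispatch.Signal (illustration only)."""
    def __init__(self):
        self._listeners = []

    def connect(self, listener):
        self._listeners.append(listener)

    def disconnect(self, listener):
        self._listeners.remove(listener)

    def send(self, sender, **kwargs):
        # Deliver the event to every connected listener.
        for listener in self._listeners:
            listener(sender=sender, **kwargs)


user_logged_out = Signal()

def listener(sender=None, user=None, **kwargs):
    # Record state on the function object, as the Django tests do.
    listener.executed = True
    listener.user = user

listener.executed = False
user_logged_out.connect(listener)
user_logged_out.send(sender=object, user=None)  # the "act" step
user_logged_out.disconnect(listener)            # disconnect keeps tests isolated

assert listener.executed
assert listener.user is None
```

Disconnecting in every test (ideally in a `finally` block or `addCleanup`) matters because a listener left connected leaks into later tests, which is exactly the isolation concern the `RequestFactoryStateTest` comments discuss.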
"""
A simple VTK widget for PyQt or PySide.
See http://www.trolltech.com for Qt documentation,
http://www.riverbankcomputing.co.uk for PyQt, and
http://pyside.github.io for PySide.
This class is based on the vtkGenericRenderWindowInteractor and is
therefore fairly powerful. It should also play nicely with the
vtk3DWidget code.
Created by Prabhu Ramachandran, May 2002
Based on David Gobbi's QVTKRenderWidget.py
Changes by Gerard Vermeulen Feb. 2003
Win32 support.
Changes by Gerard Vermeulen, May 2003
Bug fixes and better integration with the Qt framework.
Changes by Phil Thompson, Nov. 2006
Ported to PyQt v4.
Added support for wheel events.
Changes by Phil Thompson, Oct. 2007
Bug fixes.
Changes by Phil Thompson, Mar. 2008
Added cursor support.
Changes by Rodrigo Mologni, Sep. 2013 (Credit to Daniele Esposti)
Bug fix to PySide: Converts PyCObject to void pointer.
Changes by Greg Schussman, Aug. 2014
The keyPressEvent function now passes keysym instead of None.
Changes by Alex Tsui, Apr. 2015
Port from PyQt4 to PyQt5.
"""
import vtk
# Check whether a specific PyQt implementation was chosen
PyQtImpl = None
try:
    import vtk.qt
    PyQtImpl = vtk.qt.PyQtImpl
except ImportError:
    pass
if PyQtImpl is None:
# Autodetect the PyQt implementation to use
try:
import PyQt5
PyQtImpl = "PyQt5"
except ImportError:
try:
import PyQt4
PyQtImpl = "PyQt4"
except ImportError:
try:
import PySide
PyQtImpl = "PySide"
except ImportError:
raise ImportError("Cannot load either PyQt or PySide")
if PyQtImpl == "PyQt5":
from PyQt5.QtWidgets import QWidget
from PyQt5.QtWidgets import QSizePolicy
from PyQt5.QtWidgets import QApplication
from PyQt5.QtCore import Qt
from PyQt5.QtCore import QTimer
from PyQt5.QtCore import QObject
from PyQt5.QtCore import QSize
from PyQt5.QtCore import QEvent
elif PyQtImpl == "PyQt4":
from PyQt4.QtGui import QWidget
from PyQt4.QtGui import QSizePolicy
from PyQt4.QtGui import QApplication
from PyQt4.QtCore import Qt
from PyQt4.QtCore import QTimer
from PyQt4.QtCore import QObject
from PyQt4.QtCore import QSize
from PyQt4.QtCore import QEvent
elif PyQtImpl == "PySide":
from PySide.QtGui import QWidget
from PySide.QtGui import QSizePolicy
from PySide.QtGui import QApplication
from PySide.QtCore import Qt
from PySide.QtCore import QTimer
from PySide.QtCore import QObject
from PySide.QtCore import QSize
from PySide.QtCore import QEvent
else:
raise ImportError("Unknown PyQt implementation " + repr(PyQtImpl))
class QVTKRenderWindowInteractor(QWidget):
""" A QVTKRenderWindowInteractor for Python and Qt. Uses a
vtkGenericRenderWindowInteractor to handle the interactions. Use
GetRenderWindow() to get the vtkRenderWindow. Create with the
keyword stereo=1 in order to generate a stereo-capable window.
The user interface is summarized in vtkInteractorStyle.h:
- Keypress j / Keypress t: toggle between joystick (position
sensitive) and trackball (motion sensitive) styles. In joystick
style, motion occurs continuously as long as a mouse button is
pressed. In trackball style, motion occurs when the mouse button
is pressed and the mouse pointer moves.
- Keypress c / Keypress o: toggle between camera and object
(actor) modes. In camera mode, mouse events affect the camera
position and focal point. In object mode, mouse events affect
the actor that is under the mouse pointer.
- Button 1: rotate the camera around its focal point (if camera
mode) or rotate the actor around its origin (if actor mode). The
rotation is in the direction defined from the center of the
renderer's viewport towards the mouse position. In joystick mode,
the magnitude of the rotation is determined by the distance the
mouse is from the center of the render window.
- Button 2: pan the camera (if camera mode) or translate the actor
(if object mode). In joystick mode, the direction of pan or
translation is from the center of the viewport towards the mouse
position. In trackball mode, the direction of motion is the
direction the mouse moves. (Note: with 2-button mice, pan is
defined as <Shift>-Button 1.)
- Button 3: zoom the camera (if camera mode) or scale the actor
(if object mode). Zoom in/increase scale if the mouse position is
in the top half of the viewport; zoom out/decrease scale if the
mouse position is in the bottom half. In joystick mode, the amount
of zoom is controlled by the distance of the mouse pointer from
the horizontal centerline of the window.
- Keypress 3: toggle the render window into and out of stereo
mode. By default, red-blue stereo pairs are created. Some systems
support Crystal Eyes LCD stereo glasses; you have to invoke
SetStereoTypeToCrystalEyes() on the rendering window. Note: to
use stereo you also need to pass a stereo=1 keyword argument to
the constructor.
- Keypress e: exit the application.
- Keypress f: fly to the picked point
- Keypress p: perform a pick operation. The render window interactor
has an internal instance of vtkCellPicker that it uses to pick.
- Keypress r: reset the camera view along the current view
direction. Centers the actors and moves the camera so that all actors
are visible.
- Keypress s: modify the representation of all actors so that they
are surfaces.
- Keypress u: invoke the user-defined function. Typically, this
keypress will bring up an interactor that you can type commands in.
- Keypress w: modify the representation of all actors so that they
are wireframe.
"""
# Map between VTK and Qt cursors.
_CURSOR_MAP = {
0: Qt.ArrowCursor, # VTK_CURSOR_DEFAULT
1: Qt.ArrowCursor, # VTK_CURSOR_ARROW
2: Qt.SizeBDiagCursor, # VTK_CURSOR_SIZENE
3: Qt.SizeFDiagCursor, # VTK_CURSOR_SIZENWSE
4: Qt.SizeBDiagCursor, # VTK_CURSOR_SIZESW
5: Qt.SizeFDiagCursor, # VTK_CURSOR_SIZESE
6: Qt.SizeVerCursor, # VTK_CURSOR_SIZENS
7: Qt.SizeHorCursor, # VTK_CURSOR_SIZEWE
8: Qt.SizeAllCursor, # VTK_CURSOR_SIZEALL
9: Qt.PointingHandCursor, # VTK_CURSOR_HAND
10: Qt.CrossCursor, # VTK_CURSOR_CROSSHAIR
}
def __init__(self, parent=None, wflags=Qt.WindowFlags(), **kw):
# the current button
self._ActiveButton = Qt.NoButton
# private attributes
self.__saveX = 0
self.__saveY = 0
self.__saveModifiers = Qt.NoModifier
self.__saveButtons = Qt.NoButton
# do special handling of some keywords:
# stereo, rw
try:
stereo = bool(kw['stereo'])
except KeyError:
stereo = False
try:
rw = kw['rw']
except KeyError:
rw = None
# create qt-level widget
QWidget.__init__(self, parent, wflags|Qt.MSWindowsOwnDC)
if rw: # user-supplied render window
self._RenderWindow = rw
else:
self._RenderWindow = vtk.vtkRenderWindow()
WId = self.winId()
if type(WId).__name__ == 'PyCObject':
from ctypes import pythonapi, c_void_p, py_object
pythonapi.PyCObject_AsVoidPtr.restype = c_void_p
pythonapi.PyCObject_AsVoidPtr.argtypes = [py_object]
WId = pythonapi.PyCObject_AsVoidPtr(WId)
self._RenderWindow.SetWindowInfo(str(int(WId)))
if stereo: # stereo mode
self._RenderWindow.StereoCapableWindowOn()
self._RenderWindow.SetStereoTypeToCrystalEyes()
try:
self._Iren = kw['iren']
except KeyError:
self._Iren = vtk.vtkGenericRenderWindowInteractor()
self._Iren.SetRenderWindow(self._RenderWindow)
# do all the necessary qt setup
self.setAttribute(Qt.WA_OpaquePaintEvent)
self.setAttribute(Qt.WA_PaintOnScreen)
self.setMouseTracking(True) # get all mouse events
self.setFocusPolicy(Qt.WheelFocus)
self.setSizePolicy(QSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding))
self._Timer = QTimer(self)
self._Timer.timeout.connect(self.TimerEvent)
self._Iren.AddObserver('CreateTimerEvent', self.CreateTimer)
self._Iren.AddObserver('DestroyTimerEvent', self.DestroyTimer)
self._Iren.GetRenderWindow().AddObserver('CursorChangedEvent',
self.CursorChangedEvent)
        # Create a hidden child widget and connect its destroyed signal to its
        # parent's ``Finalize`` slot. The hidden child will be destroyed before
        # its parent, thus allowing cleanup of VTK elements.
self._hidden = QWidget(self)
self._hidden.hide()
self._hidden.destroyed.connect(self.Finalize)
def __getattr__(self, attr):
"""Makes the object behave like a vtkGenericRenderWindowInteractor"""
if attr == '__vtk__':
return lambda t=self._Iren: t
elif hasattr(self._Iren, attr):
return getattr(self._Iren, attr)
else:
raise AttributeError(self.__class__.__name__ +
" has no attribute named " + attr)
def Finalize(self):
'''
Call internal cleanup method on VTK objects
'''
self._RenderWindow.Finalize()
def CreateTimer(self, obj, evt):
self._Timer.start(10)
def DestroyTimer(self, obj, evt):
self._Timer.stop()
return 1
def TimerEvent(self):
self._Iren.TimerEvent()
def CursorChangedEvent(self, obj, evt):
"""Called when the CursorChangedEvent fires on the render window."""
        # This indirection is needed because the current cursor is not yet set
        # when the event fires; deferring via a zero-delay timer ensures the
        # cursor has been set by the time ShowCursor runs.
QTimer.singleShot(0, self.ShowCursor)
def HideCursor(self):
"""Hides the cursor."""
self.setCursor(Qt.BlankCursor)
def ShowCursor(self):
"""Shows the cursor."""
vtk_cursor = self._Iren.GetRenderWindow().GetCurrentCursor()
qt_cursor = self._CURSOR_MAP.get(vtk_cursor, Qt.ArrowCursor)
self.setCursor(qt_cursor)
def closeEvent(self, evt):
self.Finalize()
def sizeHint(self):
return QSize(400, 400)
def paintEngine(self):
return None
def paintEvent(self, ev):
self._Iren.Render()
def resizeEvent(self, ev):
w = self.width()
h = self.height()
vtk.vtkRenderWindow.SetSize(self._RenderWindow, w, h)
self._Iren.SetSize(w, h)
self._Iren.ConfigureEvent()
self.update()
def _GetCtrlShift(self, ev):
ctrl = shift = False
if hasattr(ev, 'modifiers'):
if ev.modifiers() & Qt.ShiftModifier:
shift = True
if ev.modifiers() & Qt.ControlModifier:
ctrl = True
else:
if self.__saveModifiers & Qt.ShiftModifier:
shift = True
if self.__saveModifiers & Qt.ControlModifier:
ctrl = True
return ctrl, shift
def enterEvent(self, ev):
ctrl, shift = self._GetCtrlShift(ev)
self._Iren.SetEventInformationFlipY(self.__saveX, self.__saveY,
ctrl, shift, chr(0), 0, None)
self._Iren.EnterEvent()
def leaveEvent(self, ev):
ctrl, shift = self._GetCtrlShift(ev)
self._Iren.SetEventInformationFlipY(self.__saveX, self.__saveY,
ctrl, shift, chr(0), 0, None)
self._Iren.LeaveEvent()
def mousePressEvent(self, ev):
ctrl, shift = self._GetCtrlShift(ev)
repeat = 0
if ev.type() == QEvent.MouseButtonDblClick:
repeat = 1
self._Iren.SetEventInformationFlipY(ev.x(), ev.y(),
ctrl, shift, chr(0), repeat, None)
self._ActiveButton = ev.button()
if self._ActiveButton == Qt.LeftButton:
self._Iren.LeftButtonPressEvent()
elif self._ActiveButton == Qt.RightButton:
self._Iren.RightButtonPressEvent()
elif self._ActiveButton == Qt.MidButton:
self._Iren.MiddleButtonPressEvent()
def mouseReleaseEvent(self, ev):
ctrl, shift = self._GetCtrlShift(ev)
self._Iren.SetEventInformationFlipY(ev.x(), ev.y(),
ctrl, shift, chr(0), 0, None)
if self._ActiveButton == Qt.LeftButton:
self._Iren.LeftButtonReleaseEvent()
elif self._ActiveButton == Qt.RightButton:
self._Iren.RightButtonReleaseEvent()
elif self._ActiveButton == Qt.MidButton:
self._Iren.MiddleButtonReleaseEvent()
def mouseMoveEvent(self, ev):
self.__saveModifiers = ev.modifiers()
self.__saveButtons = ev.buttons()
self.__saveX = ev.x()
self.__saveY = ev.y()
ctrl, shift = self._GetCtrlShift(ev)
self._Iren.SetEventInformationFlipY(ev.x(), ev.y(),
ctrl, shift, chr(0), 0, None)
self._Iren.MouseMoveEvent()
def keyPressEvent(self, ev):
ctrl, shift = self._GetCtrlShift(ev)
if ev.key() < 256:
key = str(ev.text())
else:
key = chr(0)
keySym = _qt_key_to_key_sym(ev.key())
if shift and len(keySym) == 1 and keySym.isalpha():
keySym = keySym.upper()
self._Iren.SetEventInformationFlipY(self.__saveX, self.__saveY,
ctrl, shift, key, 0, keySym)
self._Iren.KeyPressEvent()
self._Iren.CharEvent()
def keyReleaseEvent(self, ev):
ctrl, shift = self._GetCtrlShift(ev)
if ev.key() < 256:
key = chr(ev.key())
else:
key = chr(0)
self._Iren.SetEventInformationFlipY(self.__saveX, self.__saveY,
ctrl, shift, key, 0, None)
self._Iren.KeyReleaseEvent()
    def wheelEvent(self, ev):
        # PyQt4/PySide wheel events supply delta(); PyQt5 supplies angleDelta().
        if hasattr(ev, 'delta'):
            delta = ev.delta()
        else:
            delta = ev.angleDelta().y()
        if delta >= 0:
            self._Iren.MouseWheelForwardEvent()
        else:
            self._Iren.MouseWheelBackwardEvent()
def GetRenderWindow(self):
return self._RenderWindow
def Render(self):
self.update()
def QVTKRenderWidgetConeExample():
"""A simple example that uses the QVTKRenderWindowInteractor class."""
# every QT app needs an app
app = QApplication(['QVTKRenderWindowInteractor'])
# create the widget
widget = QVTKRenderWindowInteractor()
widget.Initialize()
widget.Start()
    # If you don't want the 'q' key to exit, comment this out.
widget.AddObserver("ExitEvent", lambda o, e, a=app: a.quit())
ren = vtk.vtkRenderer()
widget.GetRenderWindow().AddRenderer(ren)
cone = vtk.vtkConeSource()
cone.SetResolution(8)
coneMapper = vtk.vtkPolyDataMapper()
coneMapper.SetInputConnection(cone.GetOutputPort())
coneActor = vtk.vtkActor()
coneActor.SetMapper(coneMapper)
ren.AddActor(coneActor)
# show the widget
widget.show()
# start event processing
app.exec_()
_keysyms = {
Qt.Key_Backspace: 'BackSpace',
Qt.Key_Tab: 'Tab',
Qt.Key_Backtab: 'Tab',
# Qt.Key_Clear : 'Clear',
Qt.Key_Return: 'Return',
Qt.Key_Enter: 'Return',
Qt.Key_Shift: 'Shift_L',
Qt.Key_Control: 'Control_L',
Qt.Key_Alt: 'Alt_L',
Qt.Key_Pause: 'Pause',
Qt.Key_CapsLock: 'Caps_Lock',
Qt.Key_Escape: 'Escape',
Qt.Key_Space: 'space',
# Qt.Key_Prior : 'Prior',
# Qt.Key_Next : 'Next',
Qt.Key_End: 'End',
Qt.Key_Home: 'Home',
Qt.Key_Left: 'Left',
Qt.Key_Up: 'Up',
Qt.Key_Right: 'Right',
Qt.Key_Down: 'Down',
Qt.Key_SysReq: 'Snapshot',
Qt.Key_Insert: 'Insert',
Qt.Key_Delete: 'Delete',
Qt.Key_Help: 'Help',
Qt.Key_0: '0',
Qt.Key_1: '1',
Qt.Key_2: '2',
Qt.Key_3: '3',
Qt.Key_4: '4',
Qt.Key_5: '5',
Qt.Key_6: '6',
Qt.Key_7: '7',
Qt.Key_8: '8',
Qt.Key_9: '9',
Qt.Key_A: 'a',
Qt.Key_B: 'b',
Qt.Key_C: 'c',
Qt.Key_D: 'd',
Qt.Key_E: 'e',
Qt.Key_F: 'f',
Qt.Key_G: 'g',
Qt.Key_H: 'h',
Qt.Key_I: 'i',
Qt.Key_J: 'j',
Qt.Key_K: 'k',
Qt.Key_L: 'l',
Qt.Key_M: 'm',
Qt.Key_N: 'n',
Qt.Key_O: 'o',
Qt.Key_P: 'p',
Qt.Key_Q: 'q',
Qt.Key_R: 'r',
Qt.Key_S: 's',
Qt.Key_T: 't',
Qt.Key_U: 'u',
Qt.Key_V: 'v',
Qt.Key_W: 'w',
Qt.Key_X: 'x',
Qt.Key_Y: 'y',
Qt.Key_Z: 'z',
Qt.Key_Asterisk: 'asterisk',
Qt.Key_Plus: 'plus',
Qt.Key_Minus: 'minus',
Qt.Key_Period: 'period',
Qt.Key_Slash: 'slash',
Qt.Key_F1: 'F1',
Qt.Key_F2: 'F2',
Qt.Key_F3: 'F3',
Qt.Key_F4: 'F4',
Qt.Key_F5: 'F5',
Qt.Key_F6: 'F6',
Qt.Key_F7: 'F7',
Qt.Key_F8: 'F8',
Qt.Key_F9: 'F9',
Qt.Key_F10: 'F10',
Qt.Key_F11: 'F11',
Qt.Key_F12: 'F12',
Qt.Key_F13: 'F13',
Qt.Key_F14: 'F14',
Qt.Key_F15: 'F15',
Qt.Key_F16: 'F16',
Qt.Key_F17: 'F17',
Qt.Key_F18: 'F18',
Qt.Key_F19: 'F19',
Qt.Key_F20: 'F20',
Qt.Key_F21: 'F21',
Qt.Key_F22: 'F22',
Qt.Key_F23: 'F23',
Qt.Key_F24: 'F24',
Qt.Key_NumLock: 'Num_Lock',
Qt.Key_ScrollLock: 'Scroll_Lock',
}
def _qt_key_to_key_sym(key):
""" Convert a Qt key into a vtk keysym.
This is essentially copied from the c++ implementation in
GUISupport/Qt/QVTKInteractorAdapter.cxx.
"""
    return _keysyms.get(key)
if __name__ == "__main__":
    print(PyQtImpl)
QVTKRenderWidgetConeExample()
| sankhesh/VTK | Wrapping/Python/vtk/qt/QVTKRenderWindowInteractor.py | Python | bsd-3-clause | 18,051 | [
"CRYSTAL",
"VTK"
] | 64838acf55113263796a030717cc81ea5044d9c2dd2c39e8b9e11b279b038c66 |
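The widget module above autodetects a Qt binding by trying `PyQt5`, then `PyQt4`, then `PySide`, keeping the first one that imports. That cascade generalizes to a small helper — the module names below are placeholders chosen so the sketch runs anywhere, not real Qt bindings:

```python
import importlib


def first_importable(candidates):
    """Return the name of the first importable module in candidates, or None."""
    for name in candidates:
        try:
            importlib.import_module(name)
            return name
        except ImportError:
            continue
    return None


# The bogus first entry fails to import; 'importlib' itself always imports,
# so the cascade stops there (mirroring PyQt5 -> PyQt4 -> PySide).
chosen = first_importable(["no_such_binding_xyz", "importlib", "os"])
```

Raising `ImportError` when the whole list is exhausted (as the widget does) is usually preferable to returning `None`, since it fails loudly at import time rather than with a confusing `NameError` later.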
""" Bokeh is a Python interactive visualization library that targets modern
web browsers for presentation.
Its goal is to provide elegant, concise construction of novel graphics in the
style of d3.js, but also deliver this capability with high-performance
interactivity over very large or streaming datasets. Bokeh can help anyone
who would like to quickly and easily create interactive plots, dashboards,
and data applications.
For full documentation, please visit: http://bokeh.pydata.org
"""
from __future__ import absolute_import, print_function
# configure Bokeh version
from .util.version import __version__; __version__
from .util.version import __base_version__; __base_version__
# configure Bokeh logger
from .util import logconfig
del logconfig
# Configure warnings to always show, despite Python's active efforts
# to hide them from users.
import warnings
from .util.warnings import BokehDeprecationWarning, BokehUserWarning
warnings.simplefilter('always', BokehDeprecationWarning)
warnings.simplefilter('always', BokehUserWarning)
# imports below are names we want to make available in the bokeh
# module as transitive imports
from . import sampledata; sampledata
def test(args=None):
''' Run the Bokeh unit tests under the bokeh python directory using ``py.test``.
.. note::
Does not run any BokehJS, examples, or integration tests.
Args:
args(list, optional): command line arguments accepted by ``py.test``
For example, ``args=['-s', '-k plotting']`` prevents capture of standard out
and only runs tests that match ``"plotting"``. For more ``py.test`` options
see http://pytest.org/latest/usage.html.
Returns:
int: ``py.test`` exitcode
'''
from .util.testing import runtests
return runtests(args)
def license():
''' Print the Bokeh license to the console.
Returns:
None
'''
from os.path import join
with open(join(__path__[0], 'LICENSE.txt')) as lic:
print(lic.read())
| Ziqi-Li/bknqgis | bokeh/bokeh/__init__.py | Python | gpl-2.0 | 2,025 | [
"VisIt"
] | ff7740c520bd1fbe9168292fa573c0a9a55fe6bbb95491d417d0c79053caecc2 |
"""Defines a set of test problems.
This module complements the More, Garbow and Hillstrom
problem set implemented in OTK++ and implements a Python
interface. Whereas OTK++ implements only evaluators for
these functions, this module also contains the following
definitions:
- symbolic expressions and expression generators for
arbitrary-dimensional functions
- default starting points
- default stopping criteria
- known minimizers and minimum function values, if any
- plotting ranges for two-variable functions
Note: Each test problem has an "f" attribute, which is
the actual function associated with the problem.
"""
from inspect import getargspec
import native
from native import *
class TestProblem:
"""The base class for test functions."""
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic, n=0, m=0):
if fEvalType == FuncEvalType.symbolic:
self.f = Function(self.generate_expression(), gEvalType)
else:
argspec = getargspec(self.__init__)[0]
if not 'n' in argspec and not 'm' in argspec:
eval_str = 'native.' + self.__class__.__name__ + '(gEvalType)'
elif 'm' not in argspec:
eval_str = 'native.' + self.__class__.__name__ + '(' + str(n) + ',gEvalType)'
elif 'n' not in argspec:
eval_str = 'native.' + self.__class__.__name__ + '(' + str(m) + ',gEvalType)'
else:
eval_str = 'native.' + self.__class__.__name__ + '(' + str(n) + ',' + str(m) + ',gEvalType)'
self.f = eval(eval_str)
class PlotSpec:
"""Defines plotting ranges and axis scales."""
def __init__(self, x_range, y_range, z_range, z_logscale):
self.x_range = x_range
self.y_range = y_range
self.z_range = z_range
self.z_logscale = z_logscale
class PowellBadlyScaled(TestProblem):
name = 'Powell badly scaled'
x0 = (0, 1)
f_min = 0
    stopcrit = FDistToMinTest(f_min=0, eps=1e-14, relative=False)
plot_spec = PlotSpec((-5e-5, 2e-4), (-1, 10), (1e-6, 1000), True)
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
TestProblem.__init__(self, fEvalType, gEvalType)
def generate_expression(self):
return "(1e4*x*y-1)^2+(exp(-x)+exp(-y)-1.0001)^2"
class BrownBadlyScaled(TestProblem):
name = 'Brown badly scaled'
x0 = (1, 1)
x_min = (1e6, 2e-6)
f_min = 0
stopcrit = XDistToMinTest(x_min=(1e6, 2e-6), eps=1e-6, relative=False)
plot_spec = PlotSpec((1e6-1e-4, 1e6+1e-4),
(2e-6-1e-10, 2e-6+1e-10),
(1e-16, 1e-6), True)
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
TestProblem.__init__(self, fEvalType, gEvalType)
def generate_expression(self):
return '(x-1e6)^2+(y-2e-6)^2+(x*y-2)^2'
class Beale(TestProblem):
name = 'Beale'
x0 = (1, 1)
x_min = (3, 0.5)
f_min = 0
stopcrit = XDistToMinTest(x_min=(3, 0.5), eps=1e-6, relative=False)
plot_spec = PlotSpec((0.25, 4.25), (-1, 1.5), (1e-3, 500), True)
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
TestProblem.__init__(self, fEvalType, gEvalType)
def generate_expression(self):
return '(1.5-x*(1-y))^2+(2.25-x*(1-y^2))^2+(2.625-x*(1-y^3))^2'
class HelicalValley(TestProblem):
name = 'Helical valley'
x0 = (-1, 0, 0)
x_min = (1, 0, 0)
f_min = 0
stopcrit = XDistToMinTest(x_min=(1, 0, 0), eps=1e-6, relative=False)
plot_spec = None
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
TestProblem.__init__(self, fEvalType, gEvalType)
def generate_expression(self):
e = ''
atan_term = '1/(2*pi)*atan(x2/x1)'
e += '100.0*(step(x1)*' + atan_term + '+step(-x1)*(' + atan_term + '+0.5))^2+'
e += '100.0*(sqrt(x1^2+x2^2) - 1.0)^2+'
e += 'x3^2'
return e
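The `step()` terms above reproduce the quadrant-aware angle that `atan2` computes natively: when x1 < 0 the plain arctangent lands in the wrong branch and needs a 0.5 shift. A minimal sketch (the helper name `theta` is illustrative) checking the two agree for x2 > 0:

```python
import math

def theta(x1, x2):
    # Branch formula used in the Helical valley expression:
    # atan(x2/x1)/(2*pi), shifted by 0.5 when x1 < 0.
    t = math.atan(x2 / x1) / (2 * math.pi)
    return t + 0.5 if x1 < 0 else t

# Matches the quadrant-aware atan2 in the upper half-plane.
assert abs(theta(-1.0, 1.0) - math.atan2(1.0, -1.0) / (2 * math.pi)) < 1e-12
assert abs(theta(1.0, 1.0) - math.atan2(1.0, 1.0) / (2 * math.pi)) < 1e-12
```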
class Gaussian(TestProblem):
name = 'Gaussian'
x0 = (0.4, 1, 0)
stopcrit = FDistToMinTest(f_min=1.12793e-8, eps=1e-4, relative=True)
plot_spec = None
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
TestProblem.__init__(self, fEvalType, gEvalType)
def generate_expression(self):
y = (0.0009, 0.0044, 0.0175, 0.0540, 0.1295, 0.2420, 0.3521, 0.3989,
0.3521, 0.2420, 0.1295, 0.0540, 0.0175, 0.0044, 0.0009)
e = ''
for i in range(1, 16):
ti = '(8-' + str(i) + ')/2'
yi = str(y[i-1])
e += '(x1*exp((-x2*(' + ti + '-x3)^2)/2)-' + yi + ')^2'
if i < 15:
e += '+'
return e
class Gulf(TestProblem):
name = 'Gulf'
x0 = (5, 2.5, 0.15)
x_min = (50, 25, 1.5)
f_min = 0
stopcrit = XDistToMinTest(x_min=(50, 25, 1.5), eps=1e-6)
plot_spec = None
def __init__(self, m, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if m < 3 or m > 100:
raise ValueError('must be 3<=m<=100')
self.m = m
TestProblem.__init__(self, fEvalType, gEvalType, m=m)
def generate_expression(self):
e = ''
for i in range(1, self.m + 1):
ti = str(i) + '/100'
yi = '(25+(-50*log(' + ti + '))^(2/3))'
e += '(exp(-abs(' + yi + '-x2)^x3/x1)-' + ti + ')^2'
if i < self.m:
e += '+'
return e
class Box(TestProblem):
name = 'Box'
x0 = (0, 10, 20)
f_min = 0
stopcrit = FDistToMinTest(f_min=0, eps=1e-6, relative=False)
plot_spec = None
def __init__(self, m, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if m < 3:
raise ValueError('must be m>=3')
self.m = m
TestProblem.__init__(self, fEvalType, gEvalType, m=m)
def generate_expression(self):
e = ''
for i in range(1, self.m + 1):
ti = '0.1*' + str(i)
e += '(exp(-' + ti + '*x1)-exp(-' + ti + '*x2)-x3*(exp(-' + ti + ')-exp(-10*' + ti + ')))^2'
if i < self.m:
                e += '+'
return e
class Wood(TestProblem):
name = 'Wood'
x0 = (-3, -1, -3, -1)
x_min = (1, 1, 1, 1)
f_min = 0
stopcrit = XDistToMinTest(x_min=(1, 1, 1, 1), eps=1e-6, relative=False)
plot_spec = None
def __init__(self, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
TestProblem.__init__(self, fEvalType, gEvalType)
def generate_expression(self):
e = ''
e += '100*(x2-x1^2)^2+'
e += '(1-x1)^2+'
e += '90*(x4-x3^2)^2+'
e += '(1-x3)^2+'
e += '10*(x2+x4-2)^2+'
e += '1/10*(x2-x4)^2'
return e
class BrownDennis(TestProblem):
name = 'Brown and Dennis'
x0 = (25, 5, -5, -1)
stopcrit = FDistToMinTest(f_min=85822.2, eps=0.1, relative=False)
plot_spec = None
def __init__(self, m, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if m < 4:
raise ValueError('must be m>=4')
self.m = m
TestProblem.__init__(self, fEvalType, gEvalType, m=m)
def generate_expression(self):
e = ''
for i in range(1, self.m + 1):
ti = str(i) + '/5'
e += '((x1+' + ti + '*x2-exp(' + ti + '))^2+(x3+x4*sin(' + ti + ')-cos(' + ti + '))^2)^2'
if i < self.m:
e += "+"
return e
class BiggsEXP6(TestProblem):
name = 'Biggs EXP6'
x0 = (1, 2, 1, 1, 1, 1)
x_min = (1, 10, 1, 5, 4, 3)
stopcrit = FDistToMinTest(f_min=5.65565e-3, eps=1e-4, relative=True)
plot_spec = None
def __init__(self, m, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if m < 6:
raise ValueError('must be m>=6')
self.m = m
TestProblem.__init__(self, fEvalType, gEvalType, m=m)
def generate_expression(self):
e = ''
for i in range(1, self.m + 1):
ti = '0.1*' + str(i)
yi = '(exp(-' + ti + ')-5.0*exp(-10.0*' + ti + ')+3*exp(-4.0*' + ti + '))'
e += '(x3*exp(-' + ti + '*x1)-x4*exp(-' + ti + '*x2)+x6*exp(-' + ti + '*x5)-' + yi + ')^2'
if i < self.m:
e += '+'
return e
class Watson(TestProblem):
name = 'Watson'
stopcrit = FDistToMinTest(f_min=2.28767e-3, eps=1e-4, relative=True)
plot_spec = PlotSpec((-1.5, 0.4), (0, 2), (0.25, 500), True)
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if n < 2 or n > 31:
raise ValueError('must be 2<=n<=31')
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
x0.append(0)
self.x0 = tuple(x0)
def generate_expression(self):
e = ''
for i in range(1, 30):
sum1 = ''
sum2 = ''
ti = str(i) + '/29'
for j in range(2, self.n + 1):
sum1 += '(' + str(j) + '-1)*x' + str(j) + '*(' + ti + ')^(' + str(j) + '-2)'
if j < self.n:
sum1 += '+'
for j in range(1, self.n + 1):
sum2 += 'x' + str(j) + '*(' + ti + ')^(' + str(j) + '-1)'
if j < self.n:
sum2 += '+'
e += '(' + sum1 + '-(' + sum2 + ')^2-1)^2'
e += '+'
e += 'x1^2+(x2-x1^2-1)^2'
return e
class ExtendedRosenbrock(TestProblem):
name = 'Extended Rosenbrock'
f_min = 0
plot_spec = PlotSpec((-1.5, 1.4), (-0.25, 1.25), (1e-3, 1000), True)
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if n % 2 != 0:
raise ValueError("n must be even")
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
if i % 2 == 0:
x0.append(-1.2)
else:
x0.append(1)
self.x0 = tuple(x0)
x_min = []
for i in range(n):
x_min.append(1)
self.x_min = tuple(x_min)
self.stopcrit = XDistToMinTest(x_min=self.x_min, eps=1e-6, relative=False)
def generate_expression(self):
e = ''
for i in range(1, self.n, 2):
x1 = 'x' + str(i)
x2 = 'x' + str(i + 1)
term1 = '100*(' + x1 + '*' + x1 + '-' + x2 + ')*(' + x1 + '*' + x1 + '-' + x2 + ')'
term2 = '(1-' + x1 + ')*(1-' + x1 + ')'
e += term1 + '+' + term2
if i < self.n - 1:
e += '+'
return e
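As a sanity check on the generator above, the extended Rosenbrock sum can be evaluated numerically, independently of the expression parser; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def ext_rosenbrock(x):
    # Sum of 100*(x_i^2 - x_{i+1})^2 + (1 - x_i)^2 over consecutive pairs,
    # the same terms the string generator emits.
    x = np.asarray(x, dtype=float)
    x1, x2 = x[::2], x[1::2]
    return np.sum(100.0 * (x1 ** 2 - x2) ** 2 + (1.0 - x1) ** 2)

assert ext_rosenbrock([1.0, 1.0, 1.0, 1.0]) == 0.0   # known minimizer x_min
assert ext_rosenbrock([-1.2, 1.0]) > 0.0             # standard starting point x0
```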
class ExtendedPowellSingular(TestProblem):
name = 'Extended Powell singular'
f_min = 0
plot_spec = None
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if n % 4 != 0:
raise ValueError("n must be a multiple of 4")
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
if i % 4 == 0:
x0.append(3)
elif i % 4 == 1:
x0.append(-1)
elif i % 4 == 2:
x0.append(0)
elif i % 4 == 3:
x0.append(1)
self.x0 = tuple(x0)
x_min = []
for i in range(n):
x_min.append(0)
self.x_min = tuple(x_min)
self.stopcrit = XDistToMinTest(x_min=self.x_min, eps=1e-6, relative=False)
    def generate_expression(self):
        e = ''
        # Variables are 1-based (x1..xn), matching the other generators.
        for i in range(1, self.n, 4):
            fix = 'x' + str(i) + '+10.0*x' + str(i + 1)
            e += '(' + fix + ')^2+'
            fix = 'x' + str(i + 2) + '-x' + str(i + 3)
            e += '5.0*(' + fix + ')^2+'
            fix = '(x' + str(i + 1) + '+2.0*x' + str(i + 2) + ')^2'
            e += '(' + fix + ')^2+'
            fix = '(x' + str(i) + '-x' + str(i + 3) + ')^2'
            e += '10.0*(' + fix + ')^2'
            if i < self.n - 3:
                e += '+'
        return e
class PenaltyFunctionI(TestProblem):
name = 'Penalty function I'
stopcrit = FDistToMinTest(f_min=7.08765e-5, eps=1e-4, relative=True)
plot_spec = PlotSpec((-1, 1), (-1, 1), (1e-6, 10), True)
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
x0.append(i + 1)
self.x0 = tuple(x0)
def generate_expression(self):
e = ''
for i in range(1, self.n + 1):
fix = 'x' + str(i) + '-1.0'
e += '10^-5*(' + fix + ')^2+'
        fix = ''
        for i in range(1, self.n + 1):
            fix += 'x' + str(i) + '^2+'
        fix += '-0.25'
        e += '(' + fix + ')^2'
        return e
class PenaltyFunctionII(TestProblem):
name = 'Penalty function II'
stopcrit = FDistToMinTest(f_min=2.93660e-4, eps=1e-4, relative=True)
plot_spec = PlotSpec((-1, 1.25), (-2, 2), (1e-4, 50), True)
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
x0.append(0.5)
self.x0 = tuple(x0)
def generate_expression(self):
e = '(x1-0.2)^2+'
for i in range(2, self.n + 1):
fix = ''
yi = 'exp(' + str(i) + '/10)+' + 'exp(' + str(i - 1) + '/10)'
fix += 'exp(x' + str(i) + '/10)+'
fix += 'exp(x' + str(i - 1) + '/10)-'
fix += '(' + yi + ')'
e += '10^-5*(' + fix + ')^2+'
for i in range(self.n + 1, 2 * self.n):
fix = 'exp((x' + str(i - self.n + 1) + ')/10)-exp(-1/10)'
e += '10^-5*(' + fix + ')^2+'
sum_str = ''
for i in range(1, self.n + 1):
sum_str += str(self.n-i+1) + '*x' + str(i) + '^2+'
sum_str += '-1'
e += '(' + sum_str + ')^2'
return e
class VariablyDimensioned(TestProblem):
name = 'Variably dimensioned'
f_min = 0
plot_spec = PlotSpec((0, 2), (0, 2), (1e-4, 100), True)
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
x0.append(1.0 - (i+1.0)/n)
self.x0 = tuple(x0)
x_min = []
for i in range(n):
x_min.append(1)
self.x_min = tuple(x_min)
self.stopcrit = XDistToMinTest(x_min=self.x_min, eps=1e-6, relative=False)
def generate_expression(self):
e = ''
for i in range(1, self.n + 1):
e += '(x' + str(i) + '-1)^2+'
s = ''
for i in range(1, self.n + 1):
s += str(i) + '*(x' + str(i) + '-1)'
if i < self.n:
s += '+'
e += '(' + s + ')^2+(' + s + ')^4'
return e
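The expression generators above all build strings for a symbolic parser; a minimal sketch of the pattern for the weighted-sum term of the variably dimensioned function (helper name illustrative):

```python
def weighted_sum_term(n):
    # Builds "1*(x1-1)+2*(x2-1)+..." exactly as generate_expression does.
    parts = []
    for i in range(1, n + 1):
        parts.append(str(i) + '*(x' + str(i) + '-1)')
    return '+'.join(parts)

assert weighted_sum_term(2) == '1*(x1-1)+2*(x2-1)'
```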
class Trigonometric(TestProblem):
name = 'Trigonometric'
f_min = 0
stopcrit = FDistToMinTest(f_min=0, eps=1e-5, relative=False)
plot_spec = PlotSpec((-10, 10), (-10, 10), (1e-5, 140), False)
def __init__(self, n, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
self.n = n
TestProblem.__init__(self, fEvalType, gEvalType, n=n)
x0 = []
for i in range(n):
x0.append(1.0/n)
self.x0 = tuple(x0)
def generate_expression(self):
e = ''
for i in range(1, self.n + 1):
fix = ''
fix += str(self.n) + '-('
for j in range(1, self.n + 1):
fix += 'cos(x' + str(j) + ')'
if j < self.n:
fix += '+'
fix += ')+' + str(i) + '*(1-cos(x' + str(i) + '))-sin(x' + str(i) + ')'
e += '(' + fix + ')^2'
if i < self.n:
e += '+'
return e
class ChebyQuad(TestProblem):
name = 'Chebyquad'
stopcrit = FDistToMinTest(f_min=3.51687e-3, eps=1e-5, relative=True)
plot_spec = PlotSpec((0, 1), (0, 1), (1, 10), False)
def __init__(self, n, m, fEvalType=FuncEvalType.symbolic, gEvalType=DerivEvalType.symbolic):
if m < n:
raise ValueError('must be m>=n')
self.n = n
self.m = m
TestProblem.__init__(self, fEvalType, gEvalType, n=n, m=m)
x0 = []
for i in range(n):
x0.append((i+1.0) / (n+1.0))
self.x0 = tuple(x0)
def generate_expression(self):
e = ''
for i in range(1, self.m + 1):
t = ''
for j in range(1, self.n + 1):
Tim1 = '1.0'
x_str = "x" + str(j)
Ti = x_str
for k in range(2, i + 1):
                    prev_Ti = Ti
                    Ti = '2.0*(' + x_str + '*(' + Ti + '))-(' + Tim1 + ')'
                    Tim1 = prev_Ti
t += Ti
if j < self.n:
t += '+'
t = '(' + t + ')/' + str(self.n)
if i % 2 == 0:
t += '+1.0/(' + str(i) + '^2-1.0)'
e += '(' + t + ')^2'
if i < self.m:
e += "+"
return e
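The nested `Ti`/`Tim1` string recurrence above encodes the Chebyshev three-term relation T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x); a minimal numeric sketch of the same recurrence (function name illustrative):

```python
def chebyshev_T(k, x):
    # Three-term recurrence, mirroring the Ti/Tim1 string variables above.
    t_prev, t = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(2, k + 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

assert chebyshev_T(3, 0.5) == -1.0   # T3(x) = 4x^3 - 3x
```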
| tbs1980/otkpp | pyotk/pyotk/testproblems.py | Python | gpl-3.0 | 15,187 | [
"Gaussian"
] | 00e4888b7186e39d6799541db63e68a93746cf2f85adc7f4d2bb2da8bc8b10f7 |
"""
A set of functions for various types of fitting.
"""
import logging
import os
import glob
import json
from george import kernels
import FittingUtilities
from astropy import units as u, constants
from scipy.optimize import fmin, brute, minimize
from scipy.interpolate import InterpolatedUnivariateSpline as spline
from scipy.stats import norm
import numpy as np
from lmfit import Model, Parameters
from skmonaco import mcimport
import matplotlib.pyplot as plt
import george
import statsmodels.api as sm
from statsmodels.robust.norms import TukeyBiweight
import pandas as pd
import triangle
from astropy.modeling import fitting
from astropy.modeling.polynomial import Chebyshev2D
import DataStructures
from HelperFunctions import IsListlike, ExtrapolatingUnivariateSpline, ensure_dir, fwhm
import fitters as fitting_utilities
##import pdb
#from astropy.analytic_functions import blackbody_lambda as blackbody
from PlotBlackbodies import Planck as blackbody
import StellarModel
import Broaden
import Correlate
try:
import emcee
emcee_import = True
except ImportError:
logging.warn("emcee module not loaded! BayesFit and bayesian_total_least_squares are unavailable!")
emcee_import = False
try:
import pymultinest
multinest_import = True
except ImportError:
logging.warn('pymultinest module not loaded. MultiNestFitter will not be available!')
multinest_import = False
def RobustFit(x, y, fitorder=3, weight_fcn=TukeyBiweight(), badregions=None):
"""
Performs a robust fit (less sensitive to outliers) to x and y
:param x: A numpy.ndarray with the x-coordinates of the function to fit
:param y: A numpy.ndarray with the y-coordinates of the function to fit
:param fitorder: The order of the fit
:param badregions: A list of lists containing wavelength regions to ignore in the fit
:return:
"""
# Re-scale x for stability
if badregions is None:
x_train = x
y_train = y
else:
cond = np.any([(x >= reg[0]) & (x <= reg[1]) for reg in badregions], axis=0)
x_train = x[~cond]
y_train = y[~cond]
m, s = x.mean(), x.std()
x = (x - m) / s
x_train = (x_train - m) / s
X = np.ones(x.size)
X_train = np.ones(x_train.size)
for i in range(1, fitorder + 1):
X = np.column_stack((X, x ** i))
X_train = np.column_stack((X_train, x_train ** i))
fitter = sm.RLM(y_train, X_train, M=weight_fcn)
results = fitter.fit()
return results.predict(X)
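The `TukeyBiweight` norm used above down-weights large residuals to exactly zero, which is what makes the fit insensitive to outliers. A minimal numpy sketch of its weight function (assuming the conventional tuning constant c = 4.685; function name illustrative):

```python
import numpy as np

def tukey_biweight_weights(resid, c=4.685):
    # w(r) = (1 - (r/c)^2)^2 for |r| < c, and exactly 0 outside,
    # so gross outliers get no influence on the fit.
    r = np.asarray(resid, dtype=float)
    w = np.zeros_like(r)
    inside = np.abs(r) < c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

w = tukey_biweight_weights([0.0, 10.0])
assert w[0] == 1.0 and w[1] == 0.0
```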
if emcee_import:
def BayesFit(data, model_fcn, priors, limits=None, burn_in=100, nwalkers=100, nsamples=100, nthreads=1,
full_output=False, a=2):
"""
This function will do a Bayesian fit to the model. Warning! I don't think it quite works yet!
Parameter description:
data: A DataStructures.xypoint instance containing the data
model_fcn: A function that takes an x-array and parameters,
and returns a y-array. The number of parameters
should be the same as the length of the 'priors'
parameter
priors: Either a 2d np array or a list of lists. Each index
should contain the expected value and the uncertainty
in that value (assumes all Gaussian priors!).
limits: If given, it should be a list of the same shape as
'priors', giving the limits of each parameter
burn_in: The burn-in period for the MCMC before you start counting
nwalkers: The number of emcee 'walkers' to use.
nsamples: The number of samples to use in the MCMC sampling. Note that
the actual number of samples is nsamples * nwalkers
nthreads: The number of processing threads to use (parallelization)
This probably needs MPI installed to work. Not sure though...
        full_output: Return the full sample chain instead of just the mean and
standard deviation of each parameter.
a: See emcee.EnsembleSampler. Basically, it controls the step size
"""
# Priors needs to be a np array later, so convert to that first
priors = np.array(priors)
# Define the likelihood, prior, and posterior probability functions
likelihood = lambda pars, data, model_fcn: np.sum(
-(data.y - model_fcn(data.x, *pars)) ** 2 / (2.0 * data.err ** 2))
        if limits is None:
prior = lambda pars, priors: np.sum(-(pars - priors[:, 0]) ** 2 / (2.0 * priors[:, 1] ** 2))
posterior = lambda pars, data, model_fcn, priors: likelihood(pars, data, model_fcn) + prior(pars, priors)
else:
limits = np.array(limits)
prior = lambda pars, priors, limits: -9e19 if any(
np.logical_or(pars < limits[:, 0], pars > limits[:, 1])) else np.sum(
-(pars - priors[:, 0]) ** 2 / (2.0 * priors[:, 1] ** 2))
posterior = lambda pars, data, model_fcn, priors, limits: likelihood(pars, data, model_fcn) + prior(pars,
priors,
limits)
# Set up the MCMC sampler
ndim = priors.shape[0]
        if limits is None:
p0 = [np.random.normal(loc=priors[:, 0], scale=priors[:, 1]) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, posterior, threads=nthreads, args=(data, model_fcn, priors),
a=4)
else:
ranges = np.array([l[1] - l[0] for l in limits])
p0 = [np.random.rand(ndim) * ranges + limits[:, 0] for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, posterior, threads=nthreads,
args=(data, model_fcn, priors, limits), a=a)
# Burn-in the sampler
pos, prob, state = sampler.run_mcmc(p0, burn_in)
# Reset the chain to remove the burn-in samples.
sampler.reset()
# Run the sampler
pos, prob, state = sampler.run_mcmc(pos, nsamples, rstate0=state)
        print("Acceptance fraction = %f" % np.mean(sampler.acceptance_fraction))
        maxprob_index = np.argmax(prob)
        priors[:, 0] = pos[maxprob_index]
# Get the parameter estimates
chain = sampler.flatchain
for i in range(ndim):
priors[i][1] = np.std(chain[:, i])
if full_output:
return priors, sampler
return priors
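The Gaussian log-prior used in `BayesFit` is a plain sum of squared, scaled deviations from the expected values; a minimal numpy sketch with made-up prior values:

```python
import numpy as np

# Each row holds (expected value, uncertainty), as in the priors argument.
priors = np.array([[1.0, 0.5],
                   [0.0, 2.0]])
pars = np.array([1.5, 1.0])

lnprior = np.sum(-(pars - priors[:, 0]) ** 2 / (2.0 * priors[:, 1] ** 2))
assert abs(lnprior - (-0.625)) < 1e-12   # -0.5^2/(2*0.5^2) - 1^2/(2*2^2)
```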
class ListModel(Model):
"""
Subclass of lmfit's Model, which can take a list of xypoints.
The fit method reforms the list into a single array, and then
passes off to the lmfit method.
This is very bare bones now (Sep 25, 2014). Will probably need to add more later.
"""
def __init__(self, func, independent_vars=None, param_names=None,
missing='none', prefix='', name=None, **kws):
Model.__init__(self, func, independent_vars=independent_vars, param_names=param_names,
missing=missing, prefix=prefix, name=name, **kws)
def fit(self, data, fitcont=True, fit_kws=None, **kws):
x = np.hstack([d.x for d in data])
y = np.hstack([d.y for d in data])
w = np.hstack([1.0 / d.err for d in data])
self.order_lengths = [d.size() for d in data]
kws['x'] = x
self.fitcont = fitcont
output = Model.fit(self, y, weights=w, fit_kws=fit_kws, **kws)
# Need to re-shape the best-fit
best_fit = []
length = 0
for i in range(len(data)):
best_fit.append(output.best_fit[length:length + data[i].size()])
length += data[i].size()
output.best_fit = best_fit
return output
def _residual(self, params, data, weights=None, **kwargs):
"default residual: (data-model)*weights"
# Make sure the parameters are in the right format
if not isinstance(params, Parameters):
if 'names' in kwargs:
parnames = kwargs['names']
else:
raise KeyError("Must give the parameter names if the params are just list instances!")
d = {name: value for name, value in zip(parnames, params)}
params = self.make_params(**d)
# print params
model = Model.eval(self, params, **kwargs)
length = 0
loglikelihood = []
for i, l in enumerate(self.order_lengths):
x = kwargs['x'][length:length + l]
y = data[length:length + l]
m = model[length:length + l]
if self.fitcont:
ratio = y / m
cont = FittingUtilities.Continuum(x, ratio, fitorder=5, lowreject=2, highreject=2)
else:
cont = np.ones(x.size)
loglikelihood.append((y - cont * m))
length += l
loglikelihood = np.hstack(loglikelihood)
if weights is not None:
loglikelihood *= weights
return loglikelihood
def MCMC_fit(self, data, priors, names, prior_type='flat', fitcont=True, model_getter=None, nthreads=1):
"""
Do a fit using emcee
:param data: list of xypoints
:param priors: list of priors (each value must be a 2-D list)
:param names: The names of the variables, in the same order as the priors list
:keyword prior_type: The type of prior. Choices are 'flat' or 'gaussian'
:keyword fitcont: Should we fit the continuum in each step?
:param nthreads: The number of threads to spawn (for parallelization)
:return:
"""
x = np.hstack([d.x for d in data])
y = np.hstack([d.y for d in data])
c = np.hstack([d.cont for d in data])
e = np.hstack([d.err for d in data])
fulldata = DataStructures.xypoint(x=x, y=y, err=e, cont=c)
weights = 1.0 / e
self.order_lengths = [d.size() for d in data]
self.fitcont = fitcont
# Define the prior functions
priors = np.array(priors)
if prior_type.lower() == 'gauss':
lnprior = lambda pars, prior_vals: np.sum(-(pars - prior_vals[:, 0]) ** 2 / (2.0 * prior_vals[:, 1] ** 2))
guess = [p[0] for p in priors]
scale = [p[1] / 10.0 for p in priors]
elif prior_type.lower() == 'flat':
def lnprior(pars, prior_vals):
tmp = [prior_vals[i][0] < pars[i] < prior_vals[i][1] for i in range(len(pars))]
return 0.0 if all(tmp) else -np.inf
guess = [(p[0] + p[1]) / 2.0 for p in priors]
scale = [(p[1] - p[0]) / 100.0 for p in priors]
else:
raise ValueError("prior_type must be one of 'gauss' or 'flat'")
# Define the full probability functions
def lnprob(pars, priors, data, weights, **kwargs):
lp = lnprior(pars, priors)
if not np.isfinite(lp):
return -np.inf
            return lp - 0.5 * np.sum(self._residual(pars, data, weights, **kwargs) ** 2)
# Set up the emcee sampler
ndim = len(priors)
nwalkers = 100
pars = np.array(guess)
pos = [pars + scale * np.random.randn(ndim) for i in range(nwalkers)]
if model_getter is None:
model_getter = self.opts['model_getter']
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(priors, fulldata.y, weights),
kwargs={'model_getter': model_getter, 'names': names, 'x': x},
threads=nthreads)
return sampler, pos
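The flat prior used for `prior_type='flat'` just gates each parameter against its bounds; a minimal sketch of the same logic as a standalone function (name illustrative):

```python
import numpy as np

def flat_lnprior(pars, bounds):
    # 0 when every parameter is inside its (low, high) interval, -inf otherwise.
    inside = all(lo < p < hi for p, (lo, hi) in zip(pars, bounds))
    return 0.0 if inside else -np.inf

bounds = [(0.0, 1.0), (-5.0, 5.0)]
assert flat_lnprior([0.5, 0.0], bounds) == 0.0
assert flat_lnprior([2.0, 0.0], bounds) == float('-inf')
```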
# #################################################################
# Bayesian total least squares regression #
# #################################################################
if emcee_import:
class Bayesian_TLS(object):
def __init__(self, x, y, xerr, yerr):
"""
Class to perform a bayesian total least squares fit to data with errors in both the x- and y-axes.
:param x: A numpy ndarray with the independent variable
:param y: A numpy ndarray with the dependent variable
:param xerr: A numpy ndarray with the uncertainty in the independent variable
:param yerr: A numpy ndarray with the uncertainty in the dependent variable
"""
self.x = x
self.y = y
self.xerr = xerr
self.yerr = yerr
# Default values for a bunch of stuff
self.nwalkers = 100
self.n_burn = 200
self.n_prod = 1000
self.sampler = None
def model(self, p, x):
"""
            A parametric model to fit y = f(x, p)
This can be overridden in a class that inherits from this one to make a new model
"""
return np.poly1d(p)(x)
def _partial_likelihood(self, x, pars):
"""
The part of the likelihood function that just compares the y values to the model prediction.
:param pars:
:return:
"""
y_pred = self.model(pars, x)
P = np.product(np.exp(-(self.y - y_pred) ** 2 / self.yerr ** 2))
return P * (2 * np.pi) ** (self.x.size / 2.) * np.product(self.xerr)
def _sampling_distribution(self, size=1, loc=0, scale=1):
if IsListlike(loc):
return np.array([np.random.normal(loc=l, scale=s, size=size) for l, s in zip(loc, scale)])
return np.random.normal(loc=loc, scale=scale, size=size)
def _lnlike(self, pars):
"""
likelihood function. This uses the class variables for x,y,xerr, and yerr, as well as the 'model' instance.
Uses Monte Carlo integration to remove the nuisance parameters (the true x locations of each point)
"""
P, err = mcimport(self._partial_likelihood,
npoints=1000000, args=(pars,),
distribution=self._sampling_distribution,
dist_kwargs={'loc': self.x, 'scale': self.xerr / np.sqrt(2)},
nprocs=2)
print('Relative error in integral: {}'.format(err / P))
return np.log(P)
"""
xtrue = pars[:self.x.size]
y_pred = self.model(pars[self.x.size:], xtrue) # Predict the y value
# Make the log-likelihood
return np.sum(-(self.x - xtrue) ** 2 / self.xerr ** 2 - (self.y - y_pred) ** 2 / self.yerr * 2)
"""
def lnprior(self, pars):
"""
Log of the prior for the parameters. This can be overridden to make custom priors
"""
return 0.0
def _lnprob(self, pars):
"""
Log of the posterior probability of pars given the data.
"""
lp = self.lnprior(pars)
return lp + self._lnlike(pars) if np.isfinite(lp) else -np.inf
def guess_fit_parameters(self, fitorder=1):
"""
Do a normal fit to the data, ignoring the the uncertainty on the dependent variables.
The result will be saved for use as initial guess parameters in the full MCMC fit.
If you use a custom model, you will probably have to override this method as well.
"""
pars = np.zeros(fitorder + 1)
pars[-2] = 1.0
min_func = lambda p, xi, yi, yerri: np.sum((yi - self.model(p, xi)) ** 2 / yerri ** 2)
best_pars = fmin(min_func, x0=pars, args=(self.x, self.y, self.yerr))
self.guess_pars = best_pars
return best_pars
def fit(self, nwalkers=None, n_burn=None, n_prod=None, guess=True, initial_pars=None, **guess_kws):
"""
Perform the full MCMC fit.
:param nwalkers: The number of walkers to use in the MCMC sampler
:param n_burn: The number of samples to discard for the burn-in portion
:param n_prod: The number of MCMC samples to take in the final production sampling
:param guess: Flag for whether the data should be fit in a normal way first, to get decent starting parameters.
If true, it uses self.guess_fit_parameters and passes guess_kws to the function.
If false, it uses initial_pars. You MUST give initial_pars if guess=False!
"""
nwalkers = self.nwalkers if nwalkers is None else nwalkers
n_burn = self.n_burn if n_burn is None else n_burn
n_prod = self.n_prod if n_prod is None else n_prod
if guess:
initial_pars = self.guess_fit_parameters(**guess_kws)
elif initial_pars is None:
raise ValueError('Must give initial pars if guess = False!')
# Set up the MCMC sampler
pars = np.hstack((self.x, initial_pars))
ndim = pars.size
p0 = emcee.utils.sample_ball(pars, std=[1e-6] * ndim, size=nwalkers)
sampler = emcee.EnsembleSampler(nwalkers, ndim, self._lnprob)
# Burn-in
            print('Running burn-in')
p1, lnp, _ = sampler.run_mcmc(p0, n_burn)
sampler.reset()
            print('Running production')
sampler.run_mcmc(p1, n_prod)
# Save the sampler instance as a class variable
self.sampler = sampler
return
def predict(self, x, N=100):
"""
predict the y value for the given x values. Use the N most probable MCMC chains
"""
if self.sampler is None:
logging.warn('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.sampler.flatchain.shape[0]
else:
N = min(N, self.sampler.flatchain.shape[0])
            indices = np.argsort(self.sampler.lnprobability.flatten())[-N:]  # N highest-probability samples
pars = self.sampler.flatchain[indices, self.x.size:]
y = np.array([self.model(p, x) for p in pars])
return y
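`_lnlike` above marginalises the nuisance x positions by Monte Carlo integration (via `skmonaco.mcimport`): draw samples from the assumed distribution and average the integrand. A minimal numpy sketch of the same idea, estimating E[x^2] under a standard normal (which is exactly 1):

```python
import numpy as np

rng = np.random.RandomState(0)    # fixed seed for a reproducible estimate
samples = rng.normal(loc=0.0, scale=1.0, size=200000)
estimate = np.mean(samples ** 2)  # Monte Carlo estimate of E[x^2] = Var(x) = 1

assert abs(estimate - 1.0) < 0.02
```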
class Bayesian_LS(object):
def __init__(self, x=1, y=1, yerr=1, param_names=None):
"""
Class to perform a bayesian least squares fit to data with errors in only the y-axis.
:param x: A numpy ndarray with the independent variable
:param y: A numpy ndarray with the dependent variable
:param yerr: A numpy ndarray with the uncertainty in the dependent variable
:param param_names: An iterable of the parameter names. You MUST give this if using the
multinest backend.
"""
self.x = x
self.y = y
self.yerr = yerr
self.sampler = None
self.samples = None
self.n_params = None
self.param_names = None
if param_names is not None:
self.n_params = len(param_names)
self.param_names = param_names
return
def model(self, p, x):
"""
        A parametric model to fit y = f(x, p)
This can be overridden in a class that inherits from this one to make a new model
"""
return np.poly1d(p)(x)
def _lnlike(self, pars):
"""
likelihood function. This uses the class variables for x,y,xerr, and yerr, as well as the 'model' instance.
"""
y_pred = self.model(pars, self.x) # Predict the y value
# Make the log-likelihood
        return -0.5 * np.sum((self.y - y_pred) ** 2 / self.yerr ** 2 + np.log(2 * np.pi * self.yerr ** 2))
def lnprior(self, pars):
"""
Log of the prior for the parameters. This can be overridden to make custom priors
"""
return 0.0
def _lnprob(self, pars):
"""
Log of the posterior probability of pars given the data.
"""
lp = self.lnprior(pars)
return lp + self._lnlike(pars) if np.isfinite(lp) else -np.inf
def mnest_prior(self, cube, ndim, nparams):
"""
This pretty much MUST be overridden for any practical use!
Transform the 'cube' parameter, which holds everything being fit,
from a uniform distribution on [0,1] to the prior probability distribution.
(Use the inverse cumulative distribution function)
"""
return
def mnest_lnlike(self, cube, ndim, nparams):
"""
This is probably okay as it is. You may (but probably not) need to override
_lnlike, but not this one.
"""
pars = np.array([cube[i] for i in range(nparams)])
return self._lnlike(pars)
def guess_fit_parameters(self, fitorder=1):
"""
Do a normal (non-bayesian) fit to the data.
The result will be saved for use as initial guess parameters in the full MCMC fit.
If you use a custom model, you will probably have to override this method as well.
"""
pars = np.zeros(fitorder + 1)
pars[-2] = 1.0
min_func = lambda p, xi, yi, yerri: np.sum((yi - self.model(p, xi)) ** 2 / yerri ** 2)
best_pars = fmin(min_func, x0=pars, args=(self.x, self.y, self.yerr))
self.guess_pars = best_pars
return best_pars
def fit(self, backend='emcee', *args, **kwargs):
"""
Perform the full MCMC fit. This function calls either fit_multinest or fit_emcee, depending on the backend.
See the doc-strings for those methods to learn what args and kwargs should be for each backend.
:param backend: string - either 'emcee' or 'multinest'.
:param args: A list of arguments to pass to either fit_multinest or fit_emcee
:param kwargs: A dict of keyword arguments to pass to either fit_multinest or fit_emcee
:return: None
"""
if backend.lower() == 'emcee':
return self.fit_emcee(*args, **kwargs)
elif backend.lower() == 'multinest':
return self.fit_multinest(*args, **kwargs)
def fit_emcee(self, nwalkers=100, n_burn=200, n_prod=1000, guess=True, initial_pars=None, **guess_kws):
"""
Perform the full MCMC fit using emcee.
:param nwalkers: The number of walkers to use in the MCMC sampler
:param n_burn: The number of samples to discard for the burn-in portion
:param n_prod: The number of MCMC samples to take in the final production sampling
:param guess: Flag for whether the data should be fit in a normal way first, to get decent starting parameters.
If true, it uses self.guess_fit_parameters and passes guess_kws to the function.
If false, it uses initial_pars. You MUST give initial_pars if guess=False!
:param initial_pars: Initial parameters to use. Should be either a 1d array with the guess pars
for each parameter, or a 2d array giving the range each parameter can take.
If 1d, the sampler will be initialized in a small ball near the guess values.
If 2d, the sampler will be initialized uniformly filling the volume.
"""
if guess:
initial_pars = self.guess_fit_parameters(**guess_kws)
elif initial_pars is None:
raise ValueError('Must give initial pars if guess = False!')
# Give generic parameter names so that the triangle method works
if self.param_names is None:
self.n_params = len(initial_pars)
self.param_names = ['c{}'.format(i) for i in range(self.n_params)]
# Set up the MCMC sampler
pars = np.array(initial_pars)
if pars.ndim == 1:
ndim = pars.size
p0 = emcee.utils.sample_ball(pars, std=[1e-6] * ndim, size=nwalkers)
elif pars.ndim == 2:
ndim = pars.shape[0]
p0 = np.random.uniform(low=pars[:, 0], high=pars[:, 1], size=(nwalkers, ndim))
else:
raise TypeError('initial_pars should be either 1d or 2d. You gave a {}d array!'.format(pars.ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, self._lnprob)
# Burn-in
print('Running burn-in')
i = 0
for p1, lnp, rstate in sampler.sample(p0, iterations=n_burn):
if i % 10 == 0:
logging.info('Done with burn-in iteration {} / {}'.format(i+1, n_burn))
i += 1
#sampler.reset()
print('Running production')
i = 0
for p1, lnp, _ in sampler.sample(p1, lnprob0=lnp, rstate0=rstate, iterations=n_prod):
if i % 10 == 0:
logging.info('Done with production iteration {} / {}'.format(i+1, n_prod))
i += 1
# Save the sampler instance as a class variable
self.sampler = sampler
# Put the chain in a pandas array for easier access/manipulation
self.make_emcee_samples(n_burn)
return
def make_emcee_samples(self, n_burn):
ndim = self.sampler.chain.shape[2]
samples = self.sampler.chain[:, n_burn:, :].reshape((-1, ndim))
lnprob = self.sampler.lnprobability[:, n_burn:].flatten()
chain_dict = {self.param_names[i]: samples[:, i] for i in range(self.n_params)}
chain_dict['lnprob'] = lnprob
self.samples = pd.DataFrame(data=chain_dict)
return
def fit_multinest(self, n_live_points=1000, basename='chains/single-',
verbose=True, refit=False, overwrite=False,
**kwargs):
"""
Fits model using MultiNest, via pymultinest. This function was taken almost entirely
from Timothy Morton's 'isochrones' code on github.
:param n_live_points:
Number of live points to use for MultiNest fit.
:param basename:
Where the MultiNest-generated files will live.
By default this will be in a folder named `chains`
in the current working directory. Calling this
will define a `_mnest_basename` attribute for
this object.
:param verbose:
Whether you want MultiNest to talk to you.
:param refit, overwrite:
Set either of these to true if you want to
delete the MultiNest files associated with the
given basename and start over.
:param **kwargs:
Additional keyword arguments will be passed to
:func:`pymultinest.run`.
"""
# Make sure the output directory exists
ensure_dir(basename)
# If previous fit exists, see if it's using the same
# observed properties
prop_nomatch = False
propfile = '{}properties.json'.format(basename)
if os.path.exists(propfile):
with open(propfile) as f:
props = json.load(f)
if set(props) != set(self.param_names):
prop_nomatch = True
if prop_nomatch and not overwrite:
raise ValueError('Properties not same as saved chains ' +
'(basename {}*). '.format(basename) +
'Use overwrite=True to fit.')
if refit or overwrite:
files = glob.glob('{}*'.format(basename))
[os.remove(f) for f in files]
self._mnest_basename = basename
pymultinest.run(self.mnest_lnlike, self.mnest_prior, self.n_params,
n_live_points=n_live_points, outputfiles_basename=basename,
verbose=verbose,
**kwargs)
with open(propfile, 'w') as f:
json.dump(self.param_names, f, indent=2)
self._make_mn_samples()
return
def _make_mn_samples(self):
"""
Make MCMC samples out of a multinest run. MUST call fit() method before this!
"""
chain = np.loadtxt('{}post_equal_weights.dat'.format(self._mnest_basename))
chain_dict = {self.param_names[i]: chain[:, i] for i in range(self.n_params)}
chain_dict['lnprob'] = chain[:, -1]
self.samples = pd.DataFrame(data=chain_dict)
return
def predict(self, x, N=100, highest=False):
"""
Predict the y value for the given x values. Use the N most probable samples if highest=True,
otherwise draw N random samples from the posterior.
"""
if self.samples is None:
logging.warning('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.samples.shape[0]
else:
N = min(N, self.samples.shape[0])
if highest:
samples = self.samples.sort_values('lnprob', ascending=False)[:N]
else:
indices = np.random.randint(0, self.samples.shape[0], N)
samples = self.samples.iloc[indices]
pars = samples[self.param_names].values
y = np.array([self.model(p, x) for p in pars])
return y
def plot_samples(self, x, N=100, ax=None, *plot_args, **plot_kws):
"""
Plot N best-fit curves at x-values x, on axis ax (if given)
:param x:
:param N:
:param ax:
:return: matplotlib axis object, with which to plot other stuff, label, etc
"""
y = self.predict(x, N=N)
if ax is None:
ax = plt.gca()
for i in range(N):
ax.plot(x, y[i], *plot_args, **plot_kws)
return ax
def spoof_sampler(self, flatchain, flatlnprobability, force=False):
"""
Create a sampler object with the flatchain and lnprobability attributes so self.predict will work.
This is useful for predicting values from pre-tabulated MCMC parameter fits
:param flatchain: The original sampler.flatchain property
:param flatlnprobability: The original sampler.flatlnprobability property
:keyword force: Force creation of a sampler object, even if one already exists.
:return: None
"""
if self.sampler is not None and not force:
logging.warning('sampler instance already exists! Use force=True to overwrite.')
return
self.sampler = MCSampler_Spoof(flatchain, flatlnprobability)
# Make samples
if self.n_params is None:
self.n_params = flatchain.shape[1]
if self.param_names is None:
self.param_names = ['a{}'.format(i) for i in range(self.n_params)]
chain_dict = {self.param_names[i]: flatchain[:, i] for i in range(self.n_params)}
chain_dict['lnprob'] = flatlnprobability
self.samples = pd.DataFrame(data=chain_dict)
return
def triangle(self, **kws):
if self.samples is None:
logging.warning('Need to run the fit method first!')
return
samples = self.samples[self.param_names].values
triangle.corner(samples, labels=self.param_names, **kws)
return
@property
def mnest_analyzer(self):
"""
PyMultiNest Analyzer object associated with fit.
See PyMultiNest documentation for more.
"""
return pymultinest.Analyzer(self.n_params, self._mnest_basename)
@property
def evidence(self):
"""
Log(evidence) from multinest fit
"""
s = self.mnest_analyzer.get_stats()
return (s['global evidence'], s['global evidence error'])
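# Illustrative usage sketch for Bayesian_LS (kept as comments so importing this
# module stays side-effect free; the variable names below are hypothetical).
# The default model is a polynomial, so a straight-line fit looks like:
#
#   x = np.linspace(0, 10, 50)
#   y = 2.0 * x + 1.0 + np.random.normal(0, 0.5, x.size)
#   fitter = Bayesian_LS(x=x, y=y, yerr=np.full_like(x, 0.5))
#   fitter.fit(backend='emcee', nwalkers=50, n_burn=100, n_prod=500)
#   y_draws = fitter.predict(x, N=100)  # one model curve per posterior sample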
class GPFitter(Bayesian_LS):
"""
A subclass of Bayesian_LS that fits a Gaussian process on top of a model fit.
"""
def __init__(self, x=1, y=1, yerr=1, solver=None):
self.solver = george.BasicSolver if solver is None else solver
super(GPFitter, self).__init__(x=x, y=y, yerr=yerr)
def _lnlike(self, pars):
"""
Log-likelihood function. Uses the class variables for x, y, and yerr, as well as the 'model' method.
"""
#y_pred = self.x
y_pred = self.model(pars[2:], self.x)
a, tau = np.exp(pars[:2])
gp = george.GP(a * kernels.ExpSquaredKernel(tau), solver=self.solver)
gp.compute(self.x, self.yerr)
return gp.lnlikelihood(self.y - y_pred)
def lnprior(self, pars):
"""
Prior. You may want to set a prior on the model parameters.
"""
lna, lntau = pars[:2]
modelpars = pars[2:]
if -20 < lna < 30 and 0 < lntau < 30:
return 0.0
return -np.inf
def guess_fit_parameters(self, fitorder=1):
"""
Do a normal (non-bayesian and non-GP) fit to the data.
The result will be saved for use as initial guess parameters in the full MCMC fit.
If you use a custom model, you will probably have to override this method as well.
"""
pars = np.zeros(fitorder + 1)
pars[-2] = 1.0
min_func = lambda p, xi, yi, yerri: np.sum((yi - self.model(p, xi)) ** 2 / yerri ** 2)
best_pars = fmin(min_func, x0=pars, args=(self.x, self.y, self.yerr))
self.guess_pars = [0, 10]
self.guess_pars.extend(best_pars)
return self.guess_pars
def predict(self, x, N=100, highest=False):
"""
Predict the y value for the given x values.
"""
if self.sampler is None:
logging.warning('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.sampler.flatchain.shape[0]
else:
N = min(N, self.sampler.flatchain.shape[0])
if highest:
indices = np.argsort(self.sampler.flatlnprobability)[-N:]  # argsort is ascending; take the N highest
pars = self.sampler.flatchain[indices]
else:
pars = self.sampler.flatchain[:N]
yvals = []
for i, p in enumerate(pars):
logging.info('Generating GP samples for iteration {}/{}'.format(i+1, len(pars)))
a, tau = np.exp(p[:2])
ypred_data = self.model(p[2:], self.x)
ypred = self.model(p[2:], x)
gp = george.GP(a * kernels.ExpSquaredKernel(tau), solver=self.solver)
gp.compute(self.x, self.yerr)
s = gp.sample_conditional(self.y - ypred_data, x) + ypred
yvals.append(s)
return np.array(yvals)
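# Illustrative GPFitter usage sketch (comments only; variable names are
# hypothetical). The first two fit parameters are ln(a) and ln(tau) for the
# squared-exponential kernel; the remaining ones are passed to self.model:
#
#   gpf = GPFitter(x=x, y=y, yerr=yerr)
#   gpf.fit(backend='emcee', guess=True, fitorder=1)
#   y_draws = gpf.predict(x_grid, N=50)  # GP-conditioned samples of the fit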
class Differential_RV(object):
"""
This class performs a differential RV analysis on two observations of the same star.
"""
def __init__(self, observation, reference, continuum_fit_order=2):
"""
Initialize the class.
:param observation: A list of xypoint objects for the observation spectrum
:param reference: A list of xypoint objects for the reference spectrum
:param continuum_fit_order: The polynomial order with which to fit the difference in continuum between the stars.
"""
# Error checking
assert len(observation) == len(reference)
# The continuum shape should be the same for both, so we will just make it flat
for i, order in enumerate(observation):
observation[i].cont = np.ones(order.size()) * np.median(order.y)
for i, order in enumerate(reference):
reference[i].cont = np.ones(order.size()) * np.median(order.y)
#reference[i].y /= cont
#self.observation = [ExtrapolatingUnivariateSpline(o.x, o.y/o.cont) for o in observation]
self.observation = [ExtrapolatingUnivariateSpline(o.x, o.y) for o in observation]
self.reference = [r.copy() for r in reference]
self.x_arr = [r.x for r in reference]
err_splines = [ExtrapolatingUnivariateSpline(o.x, o.err, fill_value=0.0) for o in observation]
self.err = [np.sqrt(e(r.x)**2 + r.err**2) for e, r in zip(err_splines, reference)]
self.continuum_fit_order = continuum_fit_order
def model_with_blaze(self, x, RV, *args, **kwargs):
"""
Return the observation array, interpolated on x and shifted by RV km/s.
x should be a list of x-axes (take from the reference star)
This method should be overridden for more complicated models (such as for fitting absolute RVs)
"""
# Constant (speed of light)
clight = constants.c.cgs.to(u.km/u.s).value
# Make blaze function for both the observation and template from the args
xdeg, ydeg = self.blaze_x_degree, self.blaze_y_degree
xdom, ydom = self.blaze_x_domain, self.blaze_y_domain
ref_pars = dict(zip(self.blaze_param_names, args[:len(self.blaze_param_names)]))
obs_pars = dict(zip(self.blaze_param_names, args[len(self.blaze_param_names):]))
ref_blazefcn = Chebyshev2D(xdeg, ydeg, x_domain=xdom, y_domain=ydom, **ref_pars)
obs_blazefcn = Chebyshev2D(xdeg, ydeg, x_domain=xdom, y_domain=ydom, **obs_pars)
output = []
for i, (xi, obs, ref) in enumerate(zip(x, self.observation, self.reference)):
data = obs(xi*(1+RV/clight))
#idx = ~np.isnan(data)
ap = np.array([i]*data.size)
pix = np.arange(data.size)
ref_blaze = ref_blazefcn(ap, pix)
obs_blaze = obs_blazefcn(ap, pix)
output.append(ref_blaze / obs_blaze * data)
#output.append(obs_blaze / ref_blaze * data)
#cont = np.poly1d(np.polyfit(xi[idx], data[idx]/(ref.y[idx]/ref.cont[idx]), self.continuum_fit_order))(xi)
#output.append(data/cont)
return output
def model(self, x, RV):
"""
Return the observation array, interpolated on x and shifted by RV km/s.
x should be a list of x-axes (take from the reference star)
This method should be overridden for more complicated models (such as for fitting absolute RVs)
"""
# Constant (speed of light)
clight = constants.c.cgs.to(u.km/u.s).value
output = []
for i, (xi, obs, ref) in enumerate(zip(x, self.observation, self.reference)):
data = obs(xi*(1+RV/clight))
idx = ~np.isnan(data)
cont = np.poly1d(np.polyfit(xi[idx], data[idx]/(ref.y[idx]), self.continuum_fit_order))(xi)
output.append(data/cont)
return output
def lnlike(self, pars):
"""
likelihood function. Uses class variables for model, and the two lists with
the observation and reference spectrum
"""
pars = np.atleast_1d(pars)
model_orders = self.model(self.x_arr, *pars)
lnlike = 0.0
for ref_order, obs_order, err in zip(self.reference, model_orders, self.err):
idx = ~np.isnan(obs_order)
#lnlike += -0.5 * np.sum((ref_order.y[idx] - obs_order[idx]*ref_order.cont[idx])**2 / (err[idx]**2) + np.log(2*np.pi*err[idx]))
lnlike += -0.5 * np.sum((ref_order.y[idx] - obs_order[idx])**2 / (err[idx]**2) + np.log(2*np.pi*err[idx]**2))
return lnlike
def lnprior(self, pars):
"""
Prior probability distribution for all the parameters.
Override this if you add more parameters.
"""
RV = pars[0]
if -100 < RV < 100:
return 0.0
return -np.inf
def lnprob(self, pars):
"""
Log of the posterior probability of pars given the data.
"""
lp = self.lnprior(pars)
return lp + self.lnlike(pars) if np.isfinite(lp) else -np.inf
def guess_fit_parameters(self, guess_pars=None, search_range=(-50., 50.)):
"""
Do a normal (non-bayesian) fit to the data.
:param guess_pars: Initial guess parameters. If not given, it guesses RV=0km/s
"""
if guess_pars is None:
guess_pars = [0]
neg_lnlike = lambda pars: -self.lnlike(pars)
best_pars = brute(neg_lnlike, [search_range], Ns=100)
return best_pars
def initialize_blaze_fit(self, blaze, x_degree=2, y_degree=6):
"""
Initialize a blaze function fit using a flat field
"""
# Fit the blaze function
aps = np.hstack([[i]*b.size() for i, b in enumerate(blaze)])
pixels = np.hstack([np.arange(b.size()) for b in blaze])
values = np.hstack([b.y for b in blaze])
blaze_fcn = ChebFit(aps, pixels, values, x_degree=x_degree, y_degree=y_degree)
# Save class variables for making a similar polynomial
self.initial_blaze_pars = dict(zip(blaze_fcn.param_names, blaze_fcn.parameters))
self.blaze_x_degree = blaze_fcn.x_degree
self.blaze_y_degree = blaze_fcn.y_degree
self.blaze_x_domain = blaze_fcn.x_domain
self.blaze_y_domain = blaze_fcn.y_domain
self.blaze_param_names = blaze_fcn.param_names
return blaze_fcn
def fit(self, nwalkers=100, n_burn=100, n_prod=500, guess=True, initial_pars=None, **guess_kws):
if guess or initial_pars is None:
initial_pars = self.guess_fit_parameters(**guess_kws)
logging.info('Normal fit done: pars = ')
logging.info(initial_pars)
pars = np.atleast_1d(initial_pars)
ndim = pars.size
p0 = emcee.utils.sample_ball(pars, std=[1e-6] * ndim, size=nwalkers)
sampler = emcee.EnsembleSampler(nwalkers, ndim, self.lnprob)
# Burn-in
logging.info('Running burn-in')
p1, lnp, _ = sampler.run_mcmc(p0, n_burn)
sampler.reset()
logging.info('Running production')
sampler.run_mcmc(p1, n_prod)
# Save the sampler instance as a class variable
self.sampler = sampler
return
def plot(self, params):
"""
Plot the spectra together to visually evaluate the fit
"""
from matplotlib import gridspec
model_orders = self.model(self.x_arr, *params)
fig = plt.figure()
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
bottom = plt.subplot(gs[1])
top = plt.subplot(gs[0], sharex=bottom)
for ref_order, obs_order in zip(self.reference, model_orders):
#top.plot(ref_order.x, ref_order.y/ref_order.cont, 'k-', alpha=0.5)
top.plot(ref_order.x, ref_order.y, 'k-', alpha=0.5)
top.plot(ref_order.x, obs_order, 'r-', alpha=0.5)
#bottom.plot(ref_order.x, ref_order.y/ref_order.cont - obs_order, 'k-', alpha=0.5)
bottom.plot(ref_order.x, ref_order.y - obs_order, 'k-', alpha=0.5)
top.plot([], [], 'k-', alpha=0.5, label='Reference Spectrum')
top.plot([], [], 'r-', alpha=0.5, label='Observed Spectrum')
#top.set_xticklabels([])
plt.setp(top.get_xticklabels(), visible=False)
leg = top.legend(loc='best', fancybox=True)
leg.get_frame().set_alpha(0.5)
bottom.set_xlabel('Wavelength (nm)')
top.set_ylabel('Relative Flux')
bottom.set_ylabel('O-C')
fig.subplots_adjust(hspace=0.0)
plt.show()
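# Illustrative Differential_RV usage sketch (comments only; variable names are
# hypothetical). Both inputs are lists of xypoint echelle orders:
#
#   drv = Differential_RV(observation_orders, reference_orders)
#   rv0 = drv.guess_fit_parameters()          # brute-force scan over RV
#   drv.fit(n_burn=100, n_prod=500, guess=True)
#   drv.plot(drv.sampler.flatchain.mean(axis=0))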
class MCSampler_Spoof(object):
def __init__(self, flatchain, flatlnprobability):
self.flatchain = flatchain
self.flatlnprobability = flatlnprobability
return
def ChebFit(x, y, z, x_degree=2, y_degree=2):
p_init = Chebyshev2D(x_degree=x_degree, y_degree=y_degree)
f = fitting.LinearLSQFitter()
p = f(p_init, x, y, z)
return p
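# Illustrative ChebFit usage sketch (comments only; the arrays below are
# hypothetical). ChebFit fits a 2D Chebyshev surface z(x, y) by linear least
# squares, as used above for the blaze function over (aperture, pixel):
#
#   aps = np.repeat(np.arange(5), 100)     # aperture index of each sample
#   pix = np.tile(np.arange(100), 5)       # pixel index of each sample
#   vals = np.random.random(aps.size)      # measured blaze values
#   blaze_fcn = ChebFit(aps, pix, vals, x_degree=2, y_degree=6)
#   smooth = blaze_fcn(aps, pix)           # evaluated surface at the samples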
if multinest_import and emcee_import:
class MultiNestFitter(Bayesian_LS):
def __init__(self, x=1, y=1, yerr=1, param_names=None):
"""
Class to perform a Bayesian least-squares fit to data with errors in only the y-axis.
All of the parameters are REQUIRED.
:param x: A numpy ndarray with the independent variable
:param y: A numpy ndarray with the dependent variable
:param yerr: A numpy ndarray with the uncertainty in the dependent variable
:param param_names: The names of parameters, in a list/set/numpy array/iterable
"""
self.x = x
self.y = y
self.yerr = yerr
self.n_params = len(param_names)
self.param_names = param_names
return
def mnest_prior(self, cube, ndim, nparams):
"""
This pretty much MUST be overridden for any practical use!
Transform the 'cube' parameter, which holds everything being fit,
from a uniform distribution on [0,1] to uniform on [min, max] for
each parameter
"""
return
def mnest_lnlike(self, cube, ndim, nparams):
"""
This is probably okay as it is. You may (but probably not) need to override
_lnlike, but not this one.
"""
pars = np.array([cube[i] for i in range(nparams)])
return self._lnlike(pars)
def fit(self, n_live_points=1000, basename='chains/single-',
verbose=True, refit=False, overwrite=False,
**kwargs):
"""
Fits model using MultiNest, via pymultinest. This function was taken almost entirely
from Timothy Morton's 'isochrones' code on github.
:param n_live_points:
Number of live points to use for MultiNest fit.
:param basename:
Where the MultiNest-generated files will live.
By default this will be in a folder named `chains`
in the current working directory. Calling this
will define a `_mnest_basename` attribute for
this object.
:param verbose:
Whether you want MultiNest to talk to you.
:param refit, overwrite:
Set either of these to true if you want to
delete the MultiNest files associated with the
given basename and start over.
:param **kwargs:
Additional keyword arguments will be passed to
:func:`pymultinest.run`.
"""
# Make sure the output directory exists
ensure_dir(basename)
# If previous fit exists, see if it's using the same
# observed properties
prop_nomatch = False
propfile = '{}properties.json'.format(basename)
if os.path.exists(propfile):
with open(propfile) as f:
props = json.load(f)
if set(props) != set(self.param_names):
prop_nomatch = True
if prop_nomatch and not overwrite:
raise ValueError('Properties not same as saved chains ' +
'(basename {}*). '.format(basename) +
'Use overwrite=True to fit.')
if refit or overwrite:
files = glob.glob('{}*'.format(basename))
[os.remove(f) for f in files]
self._mnest_basename = basename
pymultinest.run(self.mnest_lnlike, self.mnest_prior, self.n_params,
n_live_points=n_live_points, outputfiles_basename=basename,
verbose=verbose,
**kwargs)
with open(propfile, 'w') as f:
json.dump(self.param_names, f, indent=2)
self._make_samples()
return
def _make_samples(self):
"""
Make MCMC samples out of a run. MUST call fit() method before this!
"""
chain = np.loadtxt('{}post_equal_weights.dat'.format(self._mnest_basename))
chain_dict = {self.param_names[i]: chain[:, i] for i in range(self.n_params)}
chain_dict['lnprob'] = chain[:, -1]
self.samples = pd.DataFrame(data=chain_dict)
return
def predict(self, x, N=100, highest=False):
"""
Predict the y value for the given x values. Use the N most probable samples if highest=True,
otherwise draw N random samples from the posterior.
"""
if self.samples is None:
logging.warning('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.samples.shape[0]
else:
N = min(N, self.samples.shape[0])
if highest:
samples = self.samples.sort_values('lnprob', ascending=False)[:N]
else:
indices = np.random.randint(0, self.samples.shape[0], N)
samples = self.samples.iloc[indices]
pars = samples[self.param_names].values
y = np.array([self.model(p, x) for p in pars])
return y
def triangle(self, **kws):
if self.samples is None:
logging.warning('Need to run the fit method first!')
return
samples = self.samples[self.param_names].values
triangle.corner(samples, labels=self.param_names, **kws)
@property
def mnest_analyzer(self):
"""
PyMultiNest Analyzer object associated with fit.
See PyMultiNest documentation for more.
"""
return pymultinest.Analyzer(self.n_params, self._mnest_basename)
@property
def evidence(self):
"""
Log(evidence) from multinest fit
"""
s = self.mnest_analyzer.get_stats()
return (s['global evidence'], s['global evidence error'])
class RVFitter_Old(Bayesian_LS):
"""
Fits a model spectrum to the data, finding the RV shift
"""
def __init__(self, echelle_spec, model_library, T=9000, logg=4.0, feh=0.0):
"""
Initialize the RVFitter class. This class uses a phoenix model
spectrum to find the best radial velocity shift of the given data.
:param echelle_spec: A list of DataStructures.xypoint instances containing
each order of the echelle spectrum to fit
:param model_library: The path to an HDF5 file containing a phoenix model grid.
:param T: The model temperature (in Kelvin) to use
:param logg: The surface gravity (in cgs units) to use
:param feh: The metallicity ([Fe/H]) to use
"""
# Concatenate the echelle orders
x = [o.x for o in echelle_spec]
y = [o.y for o in echelle_spec]
yerr = [o.err for o in echelle_spec]
self.spec_orders = echelle_spec
# Get the requested model
model_list = StellarModel.GetModelList(type='hdf5',
hdf5_file=model_library,
temperature=[T],
metal=[feh],
logg=[logg])
modeldict, _ = StellarModel.MakeModelDicts(model_list, type='hdf5',
hdf5_file=model_library,
vsini_values=[0.0], vac2air=True,
logspace=True)
model = modeldict[T][logg][feh][0.0][0.0]
# Only keep the parts of the model we need
idx = (model.x > x[0][0]-10) & (model.x < x[-1][-1]+10)
self.model_spec = model[idx].copy()
self.model_spec.cont = RobustFit(self.model_spec.x, self.model_spec.y, fitorder=3)
# Save some variables as class vars
self._clight = constants.c.cgs.to(u.km/u.s).value
self._T = T
self._logg = logg
self._feh = feh
a, b = min(x[0]), max(x[-1])
self._xScaler = lambda xi: (2*xi - b - a) / (b - a)
super(RVFitter_Old, self).__init__(x, y, yerr)
return
def model(self, p, x):
"""
Generate a model spectrum by convolving with a rotational profile,
and shifting to the appropriate velocity
"""
rv, vsini, epsilon, Tff, Tsource = p[:5]
#factor_pars = p[5:]
#factor_fcn = np.poly1d(factor_pars)
model = Broaden.RotBroad(self.model_spec, vsini*u.km.to(u.cm),
epsilon=epsilon,
linear=True, findcont=False)
fcn = spline(model.x, model.y/model.cont)
model_orders = []
for xi in x:
mi = fcn(xi*(1+rv/self._clight))
prim_bb = blackbody(xi*u.nm.to(u.cm), Tsource)
ff_bb = blackbody(xi*u.nm.to(u.cm), Tff)
#factor = factor_fcn(np.median(self._xScaler(xi)))
#model_orders.append(mi/factor * prim_bb/ff_bb)
model_orders.append(mi * prim_bb/ff_bb)
return model_orders
def _fit_factor(self, waves, model_fluxes, data_fluxes, fitorder=3):
wl = [np.median(w) for w in waves]
resid = [np.median(data/model) for data, model in zip(data_fluxes, model_fluxes)]
fcn = np.poly1d(np.polyfit(wl, resid, fitorder))
return [fcn(w) for w in wl]
def _lnlike(self, pars):
y_pred = self.model(pars, self.x)
scale_factor = self._fit_factor(self.x, y_pred, self.y)
s = 0
for yi, yi_err, ypred_i, f in zip(self.y, self.yerr, y_pred, scale_factor):
s += -0.5*np.sum((yi-ypred_i*f)**2 / yi_err**2 + np.log(2*np.pi*yi_err**2) )
return s
def lnprior(self, pars):
"""Prior probability function: flat in all variables except Tsource
"""
rv, vsini, epsilon, Tff, Tsource = pars[:5]
factor_pars = pars[5:]
if -100 < rv < 100 and 5 < vsini < 500 and 0 < epsilon < 1 and 1000 < Tff < 10000:
return -0.5*(Tsource-self._T)**2 / (300**2)
return -np.inf
def _fit_ff_teff(self, x, y, model_spec, RV, vsini, Tsource):
model = Broaden.RotBroad(model_spec, vsini*u.km.to(u.cm), linear=True, findcont=False)
fcn = spline(model.x, model.y/model.cont)
clight = constants.c.cgs.to(u.km/u.s).value
residual_spec = []
for xi, yi in zip(x, y):
mi = fcn(xi*(1+RV/clight))
prim_bb = blackbody(xi*u.nm.to(u.cm), Tsource)
residual_spec.append(prim_bb*mi/yi)
def errfcn(Tsec, wavearr, fluxarr):
s = 0
for wave, flux in zip(wavearr, fluxarr):
sec_bb = blackbody(wave*u.nm.to(u.cm), Tsec)
f = np.median(flux / sec_bb)
s += 0.5*np.sum((sec_bb*f - flux)**2)
return s
search_range = (2000, 8000)
best_pars = brute(errfcn, [search_range], Ns=50, args=(x, residual_spec))
best_Tsec = best_pars[0]
waves = []
factors = []
for wave, flux in zip(x, residual_spec):
sec_bb = blackbody(wave*u.nm.to(u.cm), best_Tsec)
f = np.median(flux / sec_bb)
waves.append(np.median(wave))
factors.append(f)
waves, factors = np.array(waves), np.array(factors)
f_pars = np.polyfit(self._xScaler(waves), factors, 3)
f_fcn = np.poly1d(f_pars)
f = f_fcn(self._xScaler(self.model_spec.x))
import pylab
pylab.plot(waves, factors, 'bo')
pylab.plot(self.model_spec.x, f, 'r--')
pylab.show()
return best_pars[0], f_pars
def guess_fit_parameters(self):
"""Guess the rv by cross-correlating
"""
retdict = Correlate.GetCCF(self.spec_orders, self.model_spec, resolution=None,
process_model=True, rebin_data=True,
vsini=0.0, addmode='simple')
ccf = retdict['CCF']
good = (ccf.x > -200) & (ccf.x < 200)
ccf = ccf[good]
idx = ccf.y.argmax()
rv_guess = ccf.x[idx]
try:
vsini_guess = fwhm(ccf.x, ccf.y, k=0)
except Exception:
vsini_guess = 50.0
T_ff_guess, f_pars = self._fit_ff_teff(self.x, self.y, self.model_spec, rv_guess, vsini_guess, self._T)
self.guess_pars = [rv_guess, vsini_guess, 0.5, T_ff_guess, self._T]
#self.guess_pars.extend(f_pars)
return self.guess_pars
def predict(self, x, N=100, highest=False):
"""
Predict the y value for the given x values. Use the N most probable samples if highest=True,
otherwise use the first N samples.
"""
if self.sampler is None:
logging.warning('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.sampler.flatchain.shape[0]
else:
N = min(N, self.sampler.flatchain.shape[0])
if highest:
indices = np.argsort(self.sampler.flatlnprobability)[-N:]  # argsort is ascending; take the N highest
pars = self.sampler.flatchain[indices]
else:
pars = self.sampler.flatchain[:N]
y = []
for p in pars:
ypred = self.model(p, x)
scale_factor = self._fit_factor(self.x, ypred, self.y)
y.append([yi*f for yi, f in zip(ypred, scale_factor)])
#y = [self.model(p, x) for p in pars]
return y
def plot(self, N=100, ax=None, **plot_kws):
ypred = self.predict(self.x, N=N)
if ax is None:
ax = plt.gca()
for i, (xi, yi) in enumerate(zip(self.x, self.y)):
ax.plot(xi, yi, 'k-', **plot_kws)
for j in range(len(ypred)):
mi = ypred[j][i]
ax.plot(xi, mi, 'b-', **plot_kws)
return ax
class RVFitter(Bayesian_LS):
"""
Fits a model spectrum to the data, finding the RV shift
"""
def __init__(self, echelle_spec, model_library, T=9000, logg=4.0, feh=0.0, fit_bb_fluxes=False, norm_model=True, fit_veiling=False):
"""
Initialize the RVFitter class. This class uses a phoenix model
spectrum to find the best radial velocity shift of the given data.
:param echelle_spec: A list of DataStructures.xypoint instances containing
each order of the echelle spectrum to fit
:param model_library: The path to an HDF5 file containing a phoenix model grid.
:param T: The model temperature (in Kelvin) to use
:param logg: The surface gravity (in cgs units) to use
:param feh: The metallicity ([Fe/H]) to use
:param fit_bb_fluxes: Whether to also fit blackbody temperatures for the flat-field lamp and the source.
:param norm_model: Whether or not to fit the continuum to the model spectrum. If False, the model
spectra in model_library are assumed to be pre-normalized.
:param fit_veiling: Should we fit a veiling parameter to account for lines that are way too small?
"""
# Find the smallest order
N = min([o.size() for o in echelle_spec])
# Concatenate the echelle orders
x = [o.x[:N] for o in echelle_spec]
y = [o.y[:N] for o in echelle_spec]
yerr = [o.err[:N] for o in echelle_spec]
self.spec_orders = echelle_spec
ds_x = [xi*10 for xi in x]
# Save some variables as class vars
self._clight = constants.c.cgs.to(u.km / u.s).value
a, b = min(x[0]), max(x[-1])
self._xScaler = lambda xi: (2 * xi - b - a) / (b - a)
self._T = None
self._logg = None
self._feh = None
self._normalize_model = norm_model
parnames = ['RV', 'vsini', 'epsilon']
if fit_veiling:
parnames.append('veil')
if fit_bb_fluxes:
parnames.extend(['T_ff', 'T_source'])
super(RVFitter, self).__init__(x, y, yerr, param_names=parnames)
# Make an interpolator instance using Starfish machinery.
hdf5_int = StellarModel.HDF5Interface(model_library)
dataspec = StellarModel.DataSpectrum(wls=ds_x, fls=y, sigmas=yerr)
self.interpolator = StellarModel.Interpolator(hdf5_int, dataspec)
self.update_model(Teff=T, logg=logg, feh=feh)
return
def update_model(self, Teff=9000, logg=4.5, feh=0.0):
# make sure this is not the model we already have
if Teff == self._T and logg == self._logg and feh == self._feh:
return
# Interpolate the model
model_flux = self.interpolator(dict(temp=Teff, logg=logg, Z=feh))
model = DataStructures.xypoint(x=self.interpolator.wl / 10., y=model_flux)
# Only keep the parts of the model we need
idx = (model.x > self.x[0][0] - 10) & (model.x < self.x[-1][-1] + 10)
self.model_spec = model[idx].copy()
if self._normalize_model:
self.model_spec.cont = RobustFit(self.model_spec.x, self.model_spec.y, fitorder=3)
else:
self.model_spec.cont = np.ones(self.model_spec.size())
# Update instance variables
self._T = Teff
self._logg = logg
self._feh = feh
return
def mnest_prior(self, cube, ndim, nparams):
cube[0] = cube[0] * 400. - 200. # RV - uniform on (-200, 200)
cube[1] = cube[1]*400. # vsini - uniform on (0, 400)
if ndim > 3:
cube[3] = cube[3] * 10.2 - 0.2 # veiling: uniform on (-0.2, 10)
if ndim > 4:
cube[4] = cube[4] * 2000 + 2500. # flat-field temperature - uniform on (2500, 4500)
cube[5] = norm(loc=self._T, scale=1000).ppf(cube[5]) # source temperature - gaussian with large std. dev.
return
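A standalone sketch (toy values, not tied to any instance) of the unit-cube prior transform that MultiNest-style samplers use, mirroring the mapping above:

```python
# Each unit-cube coordinate u in [0, 1) is stretched onto its physical range,
# exactly as in mnest_prior above (toy values for illustration).
u_rv, u_vsini = 0.25, 0.5
rv = u_rv * 400. - 200.    # uniform on (-200, 200) -> -100.0
vsini = u_vsini * 400.     # uniform on (0, 400)    -> 200.0
```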
def model(self, p, x):
"""
Generate a model spectrum by convolving with a rotational profile,
and shifting to the appropriate velocity
"""
rv, vsini, epsilon = p[:3]
veil = 0.0
estimate_bb_fluxes = False
if len(p) > 3:
veil = p[3]
if len(p) > 4:
Tff, Tsource = p[4:6]
estimate_bb_fluxes = True
model = Broaden.RotBroad(self.model_spec, vsini*u.km.to(u.cm),
epsilon=epsilon,
linear=True, findcont=False)
fcn = spline(model.x, model.y/model.cont)
model_orders = []
for xi in x:
mi = (fcn(xi * (1 - rv / self._clight)) + veil) / (veil + 1)
if estimate_bb_fluxes:
prim_bb = blackbody(xi * u.nm.to(u.cm), Tsource)
ff_bb = blackbody(xi * u.nm.to(u.cm), Tff)
mi *= prim_bb / ff_bb
model_orders.append(mi)
return model_orders
def _fit_factor(self, waves, model_fluxes, data_fluxes, fitorder=3):
wl = [np.median(w) for w in waves]
resid = [np.median(data/model) for data, model in zip(data_fluxes, model_fluxes)]
fcn = np.poly1d(np.polyfit(wl, resid, fitorder))
return [fcn(w) for w in wl]
def _lnlike(self, pars):
y_pred = self.model(pars, self.x)
scale_factor = self._fit_factor(self.x, y_pred, self.y) if len(pars) > 3 else np.ones(len(y_pred))
# scale_factor = self._fit_factor(self.x, y_pred, self.y)
#scale_factor = np.ones(len(y_pred))
s = 0
for yi, yi_err, ypred_i, f in zip(self.y, self.yerr, y_pred, scale_factor):
s += -0.5 * np.nansum((yi - ypred_i * f) ** 2 / yi_err ** 2 + np.log(2 * np.pi * yi_err ** 2))
return s
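As a sanity check on the likelihood above, a minimal standalone example with made-up numbers: when the scaled model matches the data exactly, only the Gaussian normalization term survives.

```python
import numpy as np

# Perfect model: the residual term vanishes and only -0.5*log(2*pi*sigma^2)
# per point remains (toy arrays, not real spectra).
y = np.array([1.0, 2.0])
y_pred = np.array([1.0, 2.0])
y_err = np.array([0.5, 0.5])
lnlike = -0.5 * np.nansum((y - y_pred)**2 / y_err**2 + np.log(2 * np.pi * y_err**2))
# lnlike == -log(pi/2), roughly -0.45
```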
def lnprior(self, pars):
"""Prior probability function for emcee: flat in all variables except Tsource
"""
        rv, vsini, epsilon = pars[:3]
        veil = 0.0
        Tff = 3500
        Tsource = self._T
        if len(pars) > 3:
            veil = pars[3]
        if len(pars) > 4:
            Tff, Tsource = pars[4:6]
if -100 < rv < 100 and 0 < vsini < 400 and 0 < epsilon < 1 and 0 < veil < 10 and 1000 < Tff < 10000:
return -0.5 * (Tsource - self._T) ** 2 / (300 ** 2)
return -np.inf
def _fit_ff_teff(self, x, y, model_spec, RV, vsini, Tsource):
model = Broaden.RotBroad(model_spec, vsini*u.km.to(u.cm), linear=True, findcont=False)
fcn = spline(model.x, model.y/model.cont)
clight = constants.c.cgs.to(u.km/u.s).value
residual_spec = []
for xi, yi in zip(x, y):
mi = fcn(xi*(1+RV/clight))
prim_bb = blackbody(xi*u.nm.to(u.cm), Tsource)
residual_spec.append(prim_bb*mi/yi)
def errfcn(Tsec, wavearr, fluxarr):
s = 0
for wave, flux in zip(wavearr, fluxarr):
sec_bb = blackbody(wave*u.nm.to(u.cm), Tsec)
f = np.median(flux / sec_bb)
s += 0.5*np.sum((sec_bb*f - flux)**2)
return s
search_range = (2000, 8000)
best_pars = brute(errfcn, [search_range], Ns=50, args=(x, residual_spec))
best_Tsec = best_pars[0]
waves = []
factors = []
for wave, flux in zip(x, residual_spec):
sec_bb = blackbody(wave*u.nm.to(u.cm), best_Tsec)
f = np.median(flux / sec_bb)
waves.append(np.median(wave))
factors.append(f)
waves, factors = np.array(waves), np.array(factors)
f_pars = np.polyfit(self._xScaler(waves), factors, 3)
f_fcn = np.poly1d(f_pars)
f = f_fcn(self._xScaler(self.model_spec.x))
return best_pars[0], f_pars
def _rv_lnlike(self, rv, vsini=100):
p = (rv, vsini, 0.5, self._T, self._T)
_, ll = self.flatten_spectrum(plot=False, pars=p, return_lnlike=True)
return -ll
def _guess_lnlike(self, pars, vsini=100., **kwargs):
logging.info('T = {}\nlogg = {}'.format(pars[0], pars[1]))
self.update_model(Teff=pars[0], logg=pars[1], feh=self._feh)
out = minimize(self._rv_lnlike, self._current_rv_guess, args=(vsini,))
self._current_rv_guess = out.x
p = (out.x, vsini, 0.5, self._T, self._T)
_, ll = self.flatten_spectrum(plot=False, pars=p, return_lnlike=True)
return -ll
def guess_fit_parameters(self, vsini_trials=10, refine=False,
teff_range=3000, logg_lims=(3.0, 4.5), N=10,
*args, **kwargs):
""" Guess the rv, vsini, teff, and logg with a course grid search
:param refine: If true, finish the grid search with fmin
:return: The best parameter set
"""
# First, work out the approximate rv and vsini by cross-correlating.
logging.info('Estimating the RV and vsini by cross-correlation')
vsini_vals = np.linspace(10, 400, vsini_trials)
max_ccf = np.empty(vsini_vals.size)
max_vel = np.empty(vsini_vals.size)
for i, vsini in enumerate(vsini_vals):
logging.debug('Trying vsini = {} km/s'.format(vsini))
data = []
for o in self.spec_orders:
if o.x[-1] > 480 and o.x[0] < 491:
continue
prim_bb = blackbody(o.x * u.nm.to(u.cm), self._T)
ff_bb = blackbody(o.x * u.nm.to(u.cm), 3500)
o.cont = np.median(o.y) * prim_bb / ff_bb
data.append(o)
# data = [o.copy() for o in self.spec_orders]
retdict = Correlate.GetCCF(data, self.model_spec.copy(), resolution=None,
process_model=True, rebin_data=True,
vsini=vsini, addmode='simple')
ccf = retdict['CCF']
idx = np.argmax(ccf.y)
max_ccf[i] = ccf.y[idx]
max_vel[i] = ccf.x[idx]
try:
coeffs = np.polyfit(vsini_vals, max_ccf, 2)
vsini_guess = -coeffs[1] / (2 * coeffs[0])
idx = np.argmin(np.abs(vsini_vals - vsini_guess))
rv_guess = max_vel[idx]
        except Exception:
rv_guess = -max_vel[np.argmax(max_ccf)]
vsini_guess = vsini_vals[np.argmax(max_ccf)]
# Now, do a grid search in teff and logg, finding the best rv at each point.
logging.info('Estimating logg and Teff by brute force. Get some coffee...')
teff_lims = (np.max([self._T - teff_range / 2, 7000.0]), np.min([self._T + teff_range / 2, 30000.0]))
the_ranges = [teff_lims, logg_lims]
finish = fmin if refine else None
self._current_rv_guess = rv_guess
bruteresults = brute(self._guess_lnlike, the_ranges, args=(vsini_guess,), Ns=N, finish=None)
if finish:
out = minimize(self._guess_lnlike, bruteresults, args=(vsini_guess,), bounds=((7000, 30000), (3.0, 4.5)))
best_teff, best_logg = out.x
else:
best_teff, best_logg = bruteresults
ll = self._guess_lnlike((best_teff, best_logg), vsini=vsini_guess)
self.guess_pars = [self._current_rv_guess, vsini_guess, 0.5, self._T, self._T]
return self.guess_pars
def predict(self, x, N=100, highest=False):
"""
        predict the y value for the given x values. Use the N most probable MCMC samples if highest=True,
        otherwise use N randomly-chosen samples.
"""
if self.samples is None:
            logging.warning('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.samples.shape[0]
else:
N = min(N, self.samples.shape[0])
        if highest:
            samples = self.samples.sort_values('lnprob', ascending=False)[:N]
        else:
            indices = np.random.randint(0, self.samples.shape[0], N)
            samples = self.samples.iloc[indices]
        pars = samples[self.param_names].values
y = []
for p in pars:
ypred = self.model(p, x)
            scale_factor = self._fit_factor(self.x, ypred, self.y) if len(p) > 3 else np.ones(len(ypred))
y.append([yi*f for yi, f in zip(ypred, scale_factor)])
return y
def plot(self, N=100, ax=None, **plot_kws):
ypred = self.predict(self.x, N=N)
if ax is None:
ax = plt.gca()
for i, (xi, yi) in enumerate(zip(self.x, self.y)):
ax.plot(xi, yi, 'k-', **plot_kws)
for j in range(len(ypred)):
mi = ypred[j][i]
ax.plot(xi, mi, 'b-', **plot_kws)
return ax
def _estimate_logg(self, logg_lims=(3.0, 5.0), rv=0.0, vsini=100, N=10, refine=False, **kwargs):
"""
Fit log(g) on a grid. The quality of fit is determined by order overlap, so you need some!
:param logg_lims: iterable of size >= 2 - gives the limits in log(g) to search
:param rv: float - The approximate radial velocity of the star (km/s)
:param vsini: float - the projected rotational velocity of the star (km/s)
:param N: int - the number of points to include in the initial log(g) grid
:param refine: boolean - if True, search on a finer grid near the best point
:return: the best log(g) for this data
"""
logg_grid = np.linspace(logg_lims[0], logg_lims[1], N)
lnlike = []
for logg in logg_grid:
logging.debug('logg = {}'.format(logg))
self.update_model(Teff=self._T, logg=logg, feh=self._feh)
flattened_orders = self.flatten_spectrum(plot=False, pars=(rv, vsini, 0.5, 3500, self._T))
# Find how well the orders overlap
lnl = 0.0
for i, left in enumerate(flattened_orders):
if i < len(flattened_orders) - 1:
right = flattened_orders[i + 1]
right_fcn = spline(right.x, right.y)
idx = left.x > right.x[0]
lnl += -0.5 * np.sum((left.y[idx] - right_fcn(left.x[idx])) ** 2)
lnlike.append(lnl)
if refine:
# Make a finer grid near the maximum
logging.debug(lnlike)
max_idx = np.argmax(lnlike)
low = logg_grid[max(0, max_idx-1)]
high = logg_grid[min(len(logg_grid)-1, max_idx+1)]
logg_grid = np.linspace(low, high, 10)
lnlike = []
for logg in logg_grid:
logging.debug('logg = {}'.format(logg))
self.update_model(Teff=self._T, logg=logg, feh=self._feh)
flattened_orders = self.flatten_spectrum(plot=False, pars=(rv, vsini, 0.5, 3500, self._T))
lnl = 0.0
for i, left in enumerate(flattened_orders):
if i < len(flattened_orders) - 1:
right = flattened_orders[i + 1]
right_fcn = spline(right.x, right.y)
idx = left.x > right.x[0]
lnl += -0.5 * np.sum((left.y[idx] - right_fcn(left.x[idx])) ** 2)
lnlike.append(lnl)
return logg_grid[np.argmax(lnlike)]
def _teff_logg_like_old(self, input_pars, rv=0.0, vsini=100, **kwargs):
logging.debug('T = {}\nlogg = {}'.format(input_pars[0], input_pars[1]))
self.update_model(Teff=input_pars[0], logg=input_pars[1], feh=self._feh)
flattened_orders = self.flatten_spectrum(plot=False, pars=(rv, vsini, 0.5, self._T, self._T))
# Find how well the orders overlap
lnl = 0.0
for i, left in enumerate(flattened_orders):
if i < len(flattened_orders) - 1:
right = flattened_orders[i + 1]
right_fcn = spline(right.x, right.y)
idx = left.x > right.x[0]
lnl += 0.5 * np.sum((left.y[idx] - right_fcn(left.x[idx])) ** 2)
return lnl
def _teff_logg_like(self, input_pars, rv=0.0, vsini=100, **kwargs):
logging.debug('T = {}\nlogg = {}'.format(input_pars[0], input_pars[1]))
self.update_model(Teff=input_pars[0], logg=input_pars[1], feh=self._feh)
p = (rv, vsini, 0.5, self._T, self._T)
flattened_orders, ll = self.flatten_spectrum(plot=False, pars=p, return_lnlike=True)
return -ll
def _estimate_logg_teff(self, logg_lims=(3.0, 5.0), teff_range=3000.0, rv=0.0, vsini=100, N=10, refine=False,
**kwargs):
teff_lims = (np.max([self._T - teff_range / 2, 7000.0]), np.min([self._T + teff_range / 2, 30000.0]))
the_ranges = [teff_lims, logg_lims]
finish = fmin if refine else None
bruteresults = brute(self._teff_logg_like, the_ranges, args=(rv, vsini), Ns=N, finish=finish)
return bruteresults[0], bruteresults[1]
def flatten_spectrum(self, plot=False, pars=None, return_lnlike=False, update_logg=False, update_teff_logg=False,
fitorder=2, **kwargs):
"""
Returns a flattened spectrum as a list of DataStructures.xypoint instances
:return:
"""
# Get the best parameters from the samples if it has been fit; otherwise, guess them
if pars is None:
if self.samples is not None:
pars = self.samples.mean()[['RV', 'vsini', 'epsilon', 'T_ff', 'T_source']].values
else:
logging.info('Guessing initial parameters via cross-correlation...')
pars = self.guess_fit_parameters(**kwargs)
print(pars)
if update_logg and not update_teff_logg:
logging.info('Estimating log(g)...')
best_logg = self._estimate_logg(rv=pars[0], vsini=pars[1], **kwargs)
logging.info('Best log(g) = {:.2f}'.format(best_logg))
self.update_model(Teff=self._T, feh=self._feh, logg=best_logg)
            logging.info('Re-guessing initial RV and vsini for the updated log(g)')
pars = self.guess_fit_parameters(**kwargs)
if update_teff_logg:
logging.info('Estimating log(g) and Teff...')
best_teff,best_logg = self._estimate_logg_teff(rv=pars[0], vsini=pars[1], **kwargs)
logging.info('Best log(g) = {:.2f}'.format(best_logg))
logging.info('Best Teff = {:.2f}'.format(best_teff))
self.update_model(Teff=best_teff, feh=self._feh, logg=best_logg)
            logging.info('Re-guessing initial RV and vsini for the updated log(g) and Teff')
pars = self.guess_fit_parameters(**kwargs)
print(pars)
# Get the model orders and scale factor
model_orders = self.model(pars, self.x)
scale_factor = self._fit_factor(self.x, model_orders, self.y)
# Normalize and (optionally) plot
normalized = []
normalized_err = []
lnlike = 0.0
if plot:
fig, ax = plt.subplots() # figsize=(15, 10))
for xi, yi, yi_err, model, f in zip(self.x, self.y, self.yerr, model_orders, scale_factor):
cont = np.poly1d(np.polyfit(xi, yi/model, fitorder))(xi)
normed = yi / cont
normed_err = yi_err / cont
if plot:
ax.plot(xi, normed, alpha=0.5)
ax.plot(xi, model, 'k-', lw=1)
normalized.append(normed)
normalized_err.append(normed_err)
lnlike += -0.5 * np.sum(
(normed - model) ** 2 / normed_err ** 2 + np.log(2 * np.pi * normed_err ** 2))
if plot:
plt.show()
# Convert the normalized spectra to xypoint instances
flattened = [DataStructures.xypoint(x=xi, y=n, err=n_err) for xi, n, n_err in
zip(self.x, normalized, normalized_err)]
# Calculate and return the log-likelihood of the fit if requested
if return_lnlike:
return flattened, lnlike
return flattened
class SpecFitter(fitting_utilities.Bayesian_LS):
"""
Fits a model spectrum to the data, finding the RV shift, vsini, Teff, log(g), and [Fe/H]
"""
def __init__(self, echelle_spec, model_library, T=9000, logg=4.0, feh=0.0, norm_model=True):
"""
        Initialize the SpecFitter class. This class uses a phoenix model
        spectrum to find the best-fit stellar parameters and radial velocity for the given data.
:param echelle_spec: A list of DataStructures.xypoint instances containing
each order of the echelle spectrum to fit
:param model_library: The path to an HDF5 file containing a phoenix model grid.
:param T: The model temperature (in Kelvin) to use
:param logg: The surface gravity (in cgs units) to use
:param feh: The metallicity ([Fe/H]) in use
:param norm_model: Whether or not to fit the continuum to the model spectrum. If False, the model
spectra in model_library are assumed to be pre-normalized.
"""
# Find the smallest order
N = min([o.size() for o in echelle_spec])
        # Truncate every echelle order to the common length
x = np.array([o.x[:N] for o in echelle_spec])
y = np.array([o.y[:N] for o in echelle_spec])
yerr = np.array([o.err[:N] for o in echelle_spec])
self.spec_orders = echelle_spec
        ds_x = [xi * 10 for xi in x]  # convert wavelengths from nm to Angstrom for Starfish
# Save some variables as class vars
self._clight = constants.c.cgs.to(u.km / u.s).value
a, b = min(x[0]), max(x[-1])
self._xScaler = lambda xi: (2 * xi - b - a) / (b - a)
self._T = None
self._logg = None
self._feh = None
self._normalize_model = norm_model
parnames = ['RV', 'vsini', 'epsilon', 'teff', 'logg', 'feh']
super(SpecFitter, self).__init__(x, y, yerr, param_names=parnames)
# Make an interpolator instance using Starfish machinery.
hdf5_int = StellarModel.HDF5Interface(model_library)
dataspec = StellarModel.DataSpectrum(wls=ds_x, fls=y, sigmas=yerr)
self.interpolator = StellarModel.Interpolator(hdf5_int, dataspec)
self.update_model(Teff=T, logg=logg, feh=feh)
return
def update_model(self, Teff=9000, logg=4.5, feh=0.0):
# make sure this is not the model we already have
if Teff == self._T and logg == self._logg and feh == self._feh:
return
# Interpolate the model
model_flux = self.interpolator(dict(temp=Teff, logg=logg, Z=feh))
model = DataStructures.xypoint(x=self.interpolator.wl / 10., y=model_flux)
# Only keep the parts of the model we need
idx = (model.x > self.x[0][0] - 10) & (model.x < self.x[-1][-1] + 10)
self.model_spec = model[idx].copy()
if self._normalize_model:
#self.model_spec.cont = RobustFit(self.model_spec.x, self.model_spec.y, fitorder=3)
self.model_spec.cont = FittingUtilities.Continuum(self.model_spec.x, self.model_spec.y, fitorder=3, lowreject=2)
else:
self.model_spec.cont = np.ones(self.model_spec.size())
# Update instance variables
self._T = Teff
self._logg = logg
self._feh = feh
return
def mnest_prior(self, cube, ndim, nparams):
cube[0] = cube[0] * 400. - 200. # RV - uniform on (-200, 200)
cube[1] = cube[1]*400. # vsini - uniform on (0, 400)
cube[3] = cube[3]*23000 + 7000 # Teff - uniform on (7000, 30000)
cube[4] = cube[4]*2.0 + 3.0 # log(g) - uniform on (3, 5)
cube[5] = cube[5] - 0.5 # [Fe/H] - uniform on (-0.5, 0.5)
return
def model(self, p, x):
"""
Generate a model spectrum by convolving with a rotational profile,
and shifting to the appropriate velocity
"""
rv, vsini, epsilon, logT, logg, feh = p
teff = 10**logT
self.update_model(Teff=teff, logg=logg, feh=feh)
model = Broaden.RotBroad(self.model_spec, vsini*u.km.to(u.cm),
epsilon=epsilon,
linear=True, findcont=False)
fcn = spline(model.x, model.y/model.cont)
model_orders = np.zeros(x.shape)
for i, xi in enumerate(x):
model_orders[i] = fcn(xi * (1 - rv / self._clight))
return model_orders
def _lnlike(self, pars):
y_pred = self.model(pars, self.x)
s = 0
for yi, yi_err, ypred_i in zip(self.y, self.yerr, y_pred):
s += -0.5 * np.nansum((yi - ypred_i) ** 2 / yi_err ** 2 + np.log(2 * np.pi * yi_err ** 2))
return s
def lnprior(self, pars):
"""Prior probability function for emcee: flat in all variables except Tsource
"""
rv, vsini, epsilon, logT, logg, feh = pars
teff = 10**logT
if -100 < rv < 100 and 0 < vsini < 500 and 0 < epsilon < 1 and 7000 < teff < 30000 and 3.0 < logg < 5.0 and -0.5 < feh < 0.5:
return 0.0
return -np.inf
def guess_fit_parameters(self, *args, **fit_kws):
""" Guess the rv, vsini, teff, and logg with a course grid search
:param refine: If true, finish the grid search with fmin
:return: The best parameter set
"""
import lmfit
# First, work out the approximate rv and vsini by cross-correlating.
logging.info('Estimating the RV and vsini by cross-correlation')
vsini_vals = np.linspace(10, 400, 10)
max_ccf = np.empty(vsini_vals.size)
max_vel = np.empty(vsini_vals.size)
for i, vsini in enumerate(vsini_vals):
logging.debug('Trying vsini = {} km/s'.format(vsini))
data = []
for o in self.spec_orders:
o.cont = np.median(o.y)*np.ones_like(o.x)
data.append(o)
retdict = Correlate.GetCCF(data, self.model_spec.copy(), resolution=None,
process_model=True, rebin_data=True,
vsini=vsini, addmode='ml')
ccf = retdict['CCF']
idx = np.argmax(ccf.y)
max_ccf[i] = ccf.y[idx]
max_vel[i] = ccf.x[idx]
try:
coeffs = np.polyfit(vsini_vals, max_ccf, 2)
vsini_guess = min(400, -coeffs[1] / (2 * coeffs[0]))
idx = np.argmin(np.abs(vsini_vals - vsini_guess))
rv_guess = max_vel[idx]
        except Exception:
rv_guess = -max_vel[np.argmax(max_ccf)]
vsini_guess = vsini_vals[np.argmax(max_ccf)]
# Now, fit everything else
def errfcn(pars):
parvals = pars.valuesdict()
p = (parvals['rv'], parvals['vsini'], parvals['epsilon'], parvals['logT'], parvals['logg'], parvals['feh'])
#print(p)
y_pred = self.model(p, self.x)
resid = np.zeros(self.x.shape)
for i, (yi, yi_err, ypred_i) in enumerate(zip(self.y, self.yerr, y_pred)):
resid[i] = 0.5*((yi - ypred_i) ** 2 / yi_err ** 2 + np.log(2 * np.pi * yi_err ** 2)) + self.lnprior(p)
retval = resid.flatten()
print(p, retval.sum())
return retval.sum()
params = lmfit.Parameters()
params.add('rv', value=rv_guess, min=-100, max=100)
params.add('vsini', value=vsini_guess, min=10, max=500)
params.add('epsilon', value=0.5, min=0.01, max=0.99)
params.add('logT', value=3.95, min=3.85, max=4.47)
params.add('logg', value=4.0, min=3.0, max=5.0)
params.add('feh', value=0.0, min=-0.5, max=0.5)
result = lmfit.minimize(errfcn, params, **fit_kws)
return result
def predict(self, x, N=100, highest=False):
"""
        predict the y value for the given x values. Use the N most probable MCMC samples if highest=True,
        otherwise use N randomly-chosen samples.
"""
if self.samples is None:
            logging.warning('Need to run the fit method before predict!')
return
# Find the N best walkers
if N == 'all':
N = self.samples.shape[0]
else:
N = min(N, self.samples.shape[0])
        if highest:
            samples = self.samples.sort_values('lnprob', ascending=False)[:N]
        else:
            indices = np.random.randint(0, self.samples.shape[0], N)
            samples = self.samples.iloc[indices]
        pars = samples[self.param_names].values
y = []
for p in pars:
ypred = self.model(p, x)
            scale_factor = self._fit_factor(self.x, ypred, self.y) if len(p) > 3 else np.ones(len(ypred))
y.append([yi*f for yi, f in zip(ypred, scale_factor)])
return y
def plot(self, N=100, ax=None, **plot_kws):
ypred = self.predict(self.x, N=N)
if ax is None:
ax = plt.gca()
for i, (xi, yi) in enumerate(zip(self.x, self.y)):
ax.plot(xi, yi, 'k-', **plot_kws)
for j in range(len(ypred)):
mi = ypred[j][i]
ax.plot(xi, mi, 'b-', **plot_kws)
return ax
| kgullikson88/General | Fitters.py | Python | gpl-3.0 | 88,851 | [
"Gaussian"
] | c15ab895ab8024d9bf47d6097f4ba162e9d63fbfab93f685207241933825a71f |
import numpy as np
def minmax(X, low, high, minX=None, maxX=None, dtype=float):
    # Work in floating point so the in-place operations below are safe for
    # integer inputs (np.float was removed in NumPy 1.24; use plain float).
    X = np.asarray(X, dtype=float)
if minX is None:
minX = np.min(X)
if maxX is None:
maxX = np.max(X)
# normalize to [0...1].
X -= float(minX)
X /= float((maxX - minX))
# scale to [low...high].
X = X * (high - low)
X = X + low
return np.asarray(X, dtype=dtype)
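A hypothetical usage sketch of the rescaling performed by minmax, written out inline: map an array onto [low, high] = [-1, 1].

```python
import numpy as np

X = np.array([0.0, 5.0, 10.0])
low, high = -1.0, 1.0
# Normalize to [0, 1], then stretch/shift to [low, high].
scaled = (X - X.min()) / (X.max() - X.min()) * (high - low) + low
# -> [-1., 0., 1.]
```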
def zscore(X, mean=None, std=None):
"""
Mean Normalization + Feature Scaling
:param X: ndarray
:param mean: mean
:param std: std dev
    :return: normalized ndarray
"""
X = np.asarray(X)
if mean is None:
mean = X.mean()
if std is None:
std = X.std()
X = (X - mean) / std
return X
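A quick standalone check of the z-score transform: the normalized array has (numerically) zero mean and unit standard deviation.

```python
import numpy as np

X = np.array([2.0, 4.0, 6.0, 8.0])
# Same operation as zscore above, inlined on toy data.
Z = (X - X.mean()) / X.std()
```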
def gaussian(X, mu, sig):
return (1/(sig*np.sqrt(2*np.pi)))*\
np.exp(-(X-mu)**2/(2*sig**2))
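A standalone numerical sanity check of the density above: for mu = 0 and sig = 1 it integrates to approximately 1 (rectangle-rule sum on a fine grid).

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 100001)
pdf = (1 / (1.0 * np.sqrt(2 * np.pi))) * np.exp(-(x - 0.0)**2 / (2 * 1.0**2))
area = pdf.sum() * (x[1] - x[0])   # ~1.0 for a unit Gaussian
```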
def inverse_dissim(X):
"""
:param X: int or np.array
:return:
"""
X = np.asarray(X)
X = zscore(X)
X = minmax(X, 0, 10)
return 1./(1+X)
def vector_normalize(x):
return x / np.linalg.norm(x)
def gaussian_kernel(X, mu=None, sig=None):
"""
gaussian kernel.
convert distance to similarity by setting mu=0
:param X:
:param mu:
:param sig:
:return:
"""
X = np.asarray(X)
if mu is None:
mu = X.mean()
if sig is None:
sig = X.std()
return np.exp(-np.power(X-mu, 2)/(2*sig**2)) | idf/FaceReader | facerec_py/facerec/normalization.py | Python | mit | 1,416 | [
"Gaussian"
] | 8207329490b48de597f5cd5c86d6aecabc199d86209b440ceabbd795e5834742 |
from __future__ import division
import random
import math
import itertools
from collections import Counter,defaultdict
try:
from Bio import SeqIO
from Bio.Data.IUPACData import protein_letters as PROTEIN_ALPHABET
except ImportError:
raise ImportError("Failed to import necessary Biopython components. "\
"Please install Biopython.")
#===============================================================================
# Substitution Matrix Class
#===============================================================================
class SubstitutionMatrix(object):
'''
Only identity comparison (1 if identical, 0 if not) is currently implemented.
'''
PROTEIN_LETTERS = frozenset(PROTEIN_ALPHABET)
class KeyReturningDefaultDict(defaultdict):
def __missing__(self,key):
self[key] = self.default_factory(key)
return self[key]
def __init__(self,matrix=None):
        if matrix is None:
            # Identity scoring: an identical pair collapses to a frozenset of
            # length 1 (score 1); two different residues give length 2 (score
            # 0). Non-protein characters fall through to {}[x] -> KeyError.
            self._matrix = self.KeyReturningDefaultDict(lambda x: len(x) % 2
                                                        if x < self.PROTEIN_LETTERS
                                                        else {}[x])
else:
raise NotImplementedError("Only identity comparison scoring is currently implemented")
def __call__(self,item1,item2=None):
'''
The pair of residues to be compared may be passed as an iterable or as
separate arguments
'''
score_this = frozenset(item1) if item2 is None else frozenset({item1,item2})
try:
return self._matrix[score_this]
except KeyError:
as_list = list(score_this - self.PROTEIN_LETTERS)
if len(as_list) == 1:
error_head = "Character "+as_list.pop()+" is "
else:
error_head = "Characters "+repr(as_list)+" are "
raise KeyError(error_head+"not part of the protein alphabet "+\
repr(PROTEIN_ALPHABET))
@classmethod
def get_matrix(cls,matrix=None):
'''
Pass-through constructor only creates an instance when the argument is not
already an instance
'''
if isinstance(matrix,cls):
return matrix
else:
return cls(matrix)
| romansloutsky/weightedSDP | wsdp.py | Python | mit | 2,131 | [
"Biopython"
] | ac4577fbde043dd9ed995c9ce50cfff0242fbed72103e17efac3b6f67381a9bf |
#!/usr/bin/python
# graph.py
# Graph algorithms for TSP Art
#
# Author: Tommy Tracy II
# Created: 11/2/2013
#
from math import *
from scipy.spatial import Delaunay
import numpy as np
VERBOSE = False
# ---------- Generate Graph from Nodes ---------- #
# Returns all of the edges (point_0, point_1, weight) in the Delaunay triangulation of the nodes
# What's neat about this is that all edges in the Minimum Spanning Tree MUST be in the Delaunay Triangulation of the nodes!
def delaunay_graph(nodes):
edges = []
points = np.array(nodes)
tri = Delaunay(points)
simplices = tri.simplices
triangles = points[simplices].tolist()
for triangle in triangles:
edges.append((tuple(triangle[0]), tuple(triangle[1]), hypot(triangle[0][1] - triangle[1][1], triangle[0][0] - triangle[1][0])))
edges.append((tuple(triangle[1]), tuple(triangle[2]), hypot(triangle[1][1] - triangle[2][1], triangle[1][0] - triangle[2][0])))
edges.append((tuple(triangle[2]), tuple(triangle[0]), hypot(triangle[2][1] - triangle[0][1], triangle[2][0] - triangle[0][0])))
return edges
# Returns the edges (point_0, points_1, weight) that make up a fully interconnected graph for all of the nodes
# WARNING: OBSOLETE
# This is inefficient, and has been replaced with the Delaunay Graph above
def full_graph(nodes):
edges = []
while(len(nodes) != 0):
current_node = nodes.pop(0)
current_y = current_node[0]
current_x = current_node[1]
for i in range(len(nodes)):
temp_node = nodes[i]
temp_y = temp_node[0]
temp_x = temp_node[1]
weight = hypot(current_x - temp_x, current_y - temp_y)
edges.append((current_node, temp_node, weight))
return edges # Return tuple of (node_1, node_2, distance)
# ---------- Generate a minimum spanning tree from a weighted graph ---------- #
# A minimum spanning tree is a tree that spans all nodes (with edges) in such a way to minimize the total edge length
# Given the graph, generate a minimum spanning tree
def min_span_tree(nodes):
# Edges that make up the min spanning tree
span_tree = []
forest = [[] for x in xrange(len(nodes))] # Quick way to make a list for every node
# Make forest of trees (list of nodes) with single node per tree; therefore Number of trees = number of nodes
for i in range(len(nodes)):
forest[i].append(nodes[i])
# Get all edges in a Delaunay graph of the nodes
edges = delaunay_graph(nodes)
# Sort the edges by weight
sorted_edges = sorted(edges, key=lambda edge: edge[2])
# Iterate through all edges (until we're done)
for edge in sorted_edges:
# We're done! All nodes in one tree
if len(forest) == 1:
break
node_a = edge[0]
node_b = edge[1]
a_index = -1
b_index = -1
# Go through forest and find which tree node a and b are located
for i in range(len(forest)):
tree = forest[i]
if node_a in tree:
a_index = i
if node_b in tree:
b_index = i
if (a_index != -1) and (b_index != -1):
break
# They're in different trees! Great; let's join them
if a_index != b_index:
forest[a_index] = forest[a_index] + forest[b_index]
forest.pop(b_index)
span_tree.append(edge)
# Else: They're already in the same tree; don't care
return span_tree
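The forest-merging loop above is Kruskal's algorithm; a self-contained toy version (hypothetical input, no Delaunay step) shows the idea:

```python
def kruskal(nodes, edges):
    # One single-node tree per node; greedily join two different trees
    # along the cheapest edge, skipping edges that would close a cycle.
    forest = [[n] for n in nodes]
    tree = []
    for a, b, w in sorted(edges, key=lambda e: e[2]):
        ia = next(i for i, t in enumerate(forest) if a in t)
        ib = next(i for i, t in enumerate(forest) if b in t)
        if ia != ib:
            forest[ia].extend(forest[ib])
            forest.pop(ib)
            tree.append((a, b, w))
    return tree

toy_tree = kruskal([(0, 0), (0, 1), (1, 1)],
                   [((0, 0), (0, 1), 1.0),
                    ((0, 1), (1, 1), 1.0),
                    ((0, 0), (1, 1), 1.5)])
# -> the two unit-weight edges; the 1.5 edge would close a cycle
```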
# ---------- Do a Depth-First Traversal of the graph represented by the edges ---------- #
# This serves as a Polynomial-time, accepted approximation for the TSP problem.
# Make graph from edges, and then do a depth first traversal of the tree, returning the edges in the traversal
def depth_first_traversal(edges):
graph = dict()
# Go through all edges and construct graph in dictionary
# This graph will need to be bi-directional
# Cannot guarantee full traversal otherwise
for edge in edges:
node_a = edge[0]
node_b = edge[1]
if node_a in graph:
if node_b in graph[node_a]:
print "DUPLICATE CHILD A"
else:
graph[node_a].append(node_b)
else:
graph[node_a] = [node_b]
if node_b in graph:
if node_a in graph[node_b]:
print "DUPLICATE CHILD B"
else:
graph[node_b].append(node_a)
else:
graph[node_b] = [node_a]
# Iteratively traverse it
    # Order children and traverse in a counter-clockwise order, recording new vertices as we reach them
tsp_nodes = pre_order(graph)
#print "DONE GETTING TSP NODES!"
#print "LET US SEE IF WE HAVE DUPLICATES"
#for i in range(0, len(tsp_nodes)-1):
# for j in range(i+1, len(tsp_nodes)):
# if tsp_nodes[i] == tsp_nodes[j]:
# print "we're seeing the same node again!"
# print tsp_nodes[i], tsp_nodes[j]
#Chain the nodes into a series of edges! They're ordered too.
tsp_edges = []
for i in range(0, (len(tsp_nodes)-1)):
tsp_edges.append((tsp_nodes[i], tsp_nodes[i+1]))
tsp_edges.append((tsp_nodes[len(tsp_nodes)-1], tsp_nodes[0]))
return tsp_edges
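The tail of the function above chains an ordered node list into a closed tour; a toy standalone version:

```python
# Close an ordered node list into a cycle of edges, as done at the end of
# depth_first_traversal (toy labels instead of real coordinates).
order = ['a', 'b', 'c', 'd']
tour = [(order[i], order[i + 1]) for i in range(len(order) - 1)]
tour.append((order[-1], order[0]))   # close the loop back to the start
```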
def pre_order(graph):
# Pick arbitrary node and call it root; it's arbitrary because the graph right now is not directed; there is no true ROOT
root_node = graph[graph.keys()[0]][0]
if VERBOSE:
print "Root: ", root_node
tsp_nodes = [] # Add nodes to the tsp_nodes list as we visit them in the depth-first traversal
tsp_nodes.append(root_node) # Add root node as the first node on the tsp path *prefix traversal
parent_stack = [] # Stack to push on parents as we move down the tree
parent = root_node # Root will be our first parent!
old_parent = root_node
while True:
# Grab the chidren from GRAPH
children = graph[parent]
# Remove parent's parent from children
if old_parent in children:
children.remove(old_parent)
# If we've popped everyone off the stack and we're out of children, time to return
if (len(parent_stack) == 0) and (len(children) == 0):
break
        # Order the children by location (starting at 0 degrees, going counter-clockwise)
sorted_children = sorted(children, key=lambda child: edge_angle((parent, child)))
if VERBOSE:
print "Sorted Children: ", sorted_children
# If we're out of children, go up one!
if len(sorted_children) == 0:
parent = parent_stack.pop()
if(len(parent_stack) != 0):
old_parent = parent_stack[-1]
continue
else:
graph[parent].remove(sorted_children[0])
old_parent = parent
parent_stack.append(old_parent)
parent = sorted_children[0]
if VERBOSE:
print "New Parent: ", parent
if parent in tsp_nodes:
print "Freaking duplicate"
else:
tsp_nodes.append(parent) # Add node to the first position of the tsp_nodes list
return tsp_nodes
# ---------- Remove Crossings from the Graph ---------- #
# Although the Travelling Salesman Problem (TSP) does not establish a requirement for removing path crossings,
# in order to achieve the single-loop format and clean up the result, the path shall never cross itself.
# Return the set of edges without crossings
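detect_crossing is defined elsewhere in this module; a standalone orientation-based sketch of such a proper-crossing test (an assumption, not the author's implementation) looks like:

```python
def _ccw(a, b, c):
    # Twice the signed area of triangle (a, b, c); the sign gives orientation.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    # Proper crossing: each segment's endpoints lie strictly on opposite
    # sides of the other segment (shared endpoints do not count).
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
            _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)
```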
def remove_crossings(edges):
done = False
if VERBOSE:
print "Running Uncrossing Algorithm"
# Create a traversal dictionary to simplify traversal of the graph
traversal = dict()
# Make dictionary for graphs edges for easy traversal in either direction
# Node -> (Node before, Node after)
for i in range (0, len(edges)-1):
# traversal (node) -> (node before, node after)
traversal[edges[i][1]] = (edges[i][0], edges[i+1][1])
traversal[edges[-1][1]] = (edges[-1][0], edges[0][1])
# Iterate through all pairs of edges, determine if there's a crossing, and we uncross the edges
# We do this until there are no more crossings
test_var = 0;
while True:
num_crossings = 0
num_of_edges = len(traversal.keys()) # Because we're in a cycle, the number of edges = number of nodes
if test_var == 0:
test_var = num_of_edges
else:
if test_var != num_of_edges:
print "We lost ", test_var-num_of_edges, " edges!"
return
if VERBOSE:
print "Number of edges: ", num_of_edges
print "i goes from ", 0, " to ", num_of_edges-1
print "j goes from i+1 to ",num_of_edges
# Iterate through all pairs of edges i:=[0,end-1], j:=[1,end]
for i in range(0, (num_of_edges-1)):
#if done:
#break
for j in range(i+1, num_of_edges):
#if done:
#break
current_i_node = traversal.keys()[i]
current_j_node = traversal.keys()[j]
edge_i = (current_i_node, traversal[current_i_node][1]) # edge_i = (node at index i, node after i)
edge_j = (current_j_node, traversal[current_j_node][1]) # edge_j = (node at index j, node after j)
if VERBOSE:
print "Current Edges: ", edge_i, edge_j
if(detect_crossing(edge_i, edge_j)):
#print traversal
num_crossings += 1
# Unravel the two segments that are crossing
first_edge_node_0 = edge_i[0] # This node can either connect to second_edge_node_0 or second_edge_node_1
first_edge_node_1 = edge_i[1]
if VERBOSE:
print "First edge: ", first_edge_node_0, first_edge_node_1
second_edge_node_0 = edge_j[0]
second_edge_node_1 = edge_j[1]
if VERBOSE:
print "Second edge: ", second_edge_node_0, second_edge_node_1
# In order to determine which of the two points first_edge_node_0 will NOT be connected to,
# find the first node that is connected to this node via a path backwards
# In order to maintain a constant direction, flip the direction of all edges until we find the first second_edge node
iterator_node = first_edge_node_0
#Reverse this node
node_before = traversal[iterator_node][0]
node_after = None # The node after is not yet known
traversal[iterator_node] = (node_after, node_before) # Now we're pointing in the opposite direction!
index = 0
# Iterate backwards through the graph (from edge to edge) and check to see which of the 2 above nodes we hit first
while True:
iterator_node = traversal[iterator_node][1]
#Reverse this node as well
node_before = traversal[iterator_node][0]
node_after = traversal[iterator_node][1]
traversal[iterator_node] = (node_after, node_before)
# We've looped back to second_edge_node_0 first; reconnecting here would split the cycle into two disjoint loops, so bail out
if iterator_node == second_edge_node_0:
print "This is no good!"
return
# We've reached second_edge_node_1 first, so the rewiring below is safe: the cycle stays in one piece
if iterator_node == second_edge_node_1:
# Set correct direction for first_edge_node_0
node_before_fe_n0 = second_edge_node_0
node_after_fe_n0 = traversal[first_edge_node_0][1]
traversal[first_edge_node_0] = (node_before_fe_n0, node_after_fe_n0)
traversal[second_edge_node_0] = (traversal[second_edge_node_0][0], first_edge_node_0)
if VERBOSE:
print "first_edge_node_0: ", first_edge_node_0
print "first_edge_node_0 neighbors: ", traversal[first_edge_node_0]
print "second_edge_node_0: ", second_edge_node_0
print "second_edge_node_0 neighbors: ", traversal[second_edge_node_0]
# Set correct direction for first_edge_node_1
node_before_fe_n1 = second_edge_node_1 # We already reversed this node!
node_after_fe_n1 = traversal[first_edge_node_1][1]
traversal[first_edge_node_1] = (node_before_fe_n1, node_after_fe_n1)
traversal[second_edge_node_1] = (traversal[second_edge_node_1][0], first_edge_node_1)
if VERBOSE:
print "first_edge_node_1: ", first_edge_node_1
print "first_edge_node_1 neighbors: ", traversal[first_edge_node_1]
print "second_edge_node_1: ", second_edge_node_1
print "second_edge_node_1 neighbors: ", traversal[second_edge_node_1]
print traversal
break
if iterator_node == first_edge_node_0:
print "We looped; we're done"
done = True
break
index += 1
#if done:
# break
#if done:
# break
if num_crossings == 0 or done:
final_list = []
#Convert dictionary back to list
for node in traversal.keys():
final_list.append((node, traversal[node][1]))
return final_list
else:
print "Number of crossings: ", num_crossings
continue
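The node -> (node before, node after) dictionary built at the top of remove_crossings can be sketched in isolation. This is an illustrative helper (Python 3 syntax; `build_traversal` is not part of the original file), mirroring the loop over `edges`:

```python
# Sketch: build the same node -> (prev, next) traversal dictionary that
# remove_crossings constructs from an ordered list of directed edges.
def build_traversal(edges):
    traversal = {}
    for i in range(len(edges) - 1):
        traversal[edges[i][1]] = (edges[i][0], edges[i + 1][1])
    # close the cycle: the last edge's head points back to the first edge's head
    traversal[edges[-1][1]] = (edges[-1][0], edges[0][1])
    return traversal

# A 4-cycle A -> B -> C -> D -> A
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
t = build_traversal(edges)
print(t["B"])  # ('A', 'C')
print(t["A"])  # ('D', 'B')
```

Because the path is a single cycle, every node appears exactly once as an edge head, so the dictionary has one entry per node.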
# Detect a crossing
# The easiest way to do this is to use the following rule:
# In order for two line segments to cross:
# Both of line segment 1's points must be on opposite sides of line segment 2
# Both of line segment 2's points must be on opposite sides of line segment 1
def detect_crossing(edge_1, edge_2):
# Check if edge_2's points are on opposite sides of edge_1 (cross product)
cross_1 = cross_product(edge_1, edge_2[0]) # Find cross product of edge_1 and first point of edge_2
cross_2 = cross_product(edge_1, edge_2[1]) # Find cross product of edge_1 and second point of edge_2
# If edge_2's points are on opposite sides of edge_1...
if((cross_1 < 0 and cross_2 > 0) or (cross_1 > 0 and cross_2 < 0)):
# Check if edge_1's points are on opposite sides of edge_2 (cross product)
cross_3 = cross_product(edge_2, edge_1[0]) # Find cross product of edge_2 and first point of edge_1
cross_4 = cross_product(edge_2, edge_1[1]) # Find cross product of edge_2 and second point of edge_1
# If edge_1's points are on opposite sides of edge_2, return True - We found a crossing! :(
if((cross_3 < 0 and cross_4 > 0) or (cross_3 > 0 and cross_4 < 0)):
return True
# Else, no crossing
return False
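The opposite-sides rule can be exercised standalone. This sketch (Python 3; the helper names are hypothetical) mirrors cross_product and detect_crossing: each endpoint's side is the sign of a 2-D cross product, and a crossing needs opposite signs for both segments:

```python
# Sketch of the orientation-based segment-intersection test used above.
def cross(edge, point):
    (x1, y1), (x2, y2) = edge
    return (x2 - x1) * (point[1] - y1) - (y2 - y1) * (point[0] - x1)

def crosses(e1, e2):
    c1, c2 = cross(e1, e2[0]), cross(e1, e2[1])
    c3, c4 = cross(e2, e1[0]), cross(e2, e1[1])
    # opposite (strictly nonzero) signs on both tests -> proper crossing
    return (c1 * c2 < 0) and (c3 * c4 < 0)

print(crosses(((0, 0), (2, 2)), ((0, 2), (2, 0))))  # True: square diagonals
print(crosses(((0, 0), (1, 0)), ((0, 1), (1, 1))))  # False: parallel segments
```

Like the original, the strict inequalities treat collinear touching endpoints as non-crossing.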
# ---------- Fun Maths formulas --------- #
# Find the angle of the edge
def edge_angle(edge):
x1 = edge[0][0]
y1 = edge[0][1]
x2 = edge[1][0]
y2 = edge[1][1]
temp = atan2(y2 - y1, x2 - x1) * (180 / pi)
if temp < 0:
temp += 360
return temp
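The normalization in edge_angle can be checked directly: atan2 returns degrees in (-180, 180], and adding 360 to negative values maps the result into [0, 360). A small sketch (Python 3, hypothetical helper name):

```python
from math import atan2, pi

# Sketch of the angle normalization performed by edge_angle above.
def angle(edge):
    (x1, y1), (x2, y2) = edge
    a = atan2(y2 - y1, x2 - x1) * (180 / pi)
    return a + 360 if a < 0 else a

print(round(angle(((0, 0), (1, 1))), 6))   # 45.0
print(round(angle(((0, 0), (0, -1))), 6))  # 270.0
```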
# Calculate the cross product of the edge (edge[0], edge[1]) with another edge composed with the point (edge[0], point)
def cross_product(edge, point):
return (edge[1][0] - edge[0][0]) * (point[1] - edge[0][1]) - (edge[1][1] - edge[0][1]) * (point[0] - edge[0][0])
| tjt7a/TSart | graph.py | Python | gpl-2.0 | 13,771 | ["VisIt"] | fba46f5e686fb0164d187240a017987160e77959c9949e9a65421fc404e18bd7 |
# -*- coding: utf-8 -*-
"""
Fields for the Discontinuous Galerkin method
"""
import numpy as nm
import six
from numpy.lib.stride_tricks import as_strided
from six.moves import range
from sfepy.base.base import (output, assert_, Struct)
from sfepy.discrete import Integral, PolySpace
from sfepy.discrete.common.fields import parse_shape
from sfepy.discrete.fem.fields_base import FEField
from sfepy.discrete.fem.mappings import VolumeMapping
def get_unraveler(n_el_nod, n_cell):
"""Returns function for unraveling i.e. unpacking dof data from
serialized array from shape (n_el_nod*n_cell, 1) to (n_cell, n_el_nod, 1).
The unraveler returns non-writeable view into the input array.
Parameters
----------
n_el_nod : int
expected dimensions of dofs array
n_cell : int
Returns
-------
unravel : callable
"""
def unravel(u):
"""Returns non-writeable view into the input array reshaped to
(n*m, 1) to (m, n, 1) .
Parameters
----------
u : array_like
solution in shape (n*m, 1)
Returns
-------
u : ndarray
unraveled solution in shape (m, n, 1)
"""
ustride1 = u.strides[0]
ur = as_strided(u,
shape=(n_cell, n_el_nod, 1),
strides=(n_el_nod * ustride1, ustride1, ustride1),
writeable=False)
return ur
return unravel
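What the as_strided view does can be checked against a plain reshape. A small sketch (assuming NumPy is available; not part of the original module):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

# Sketch of get_unraveler's view: reinterpret a (n_cell*n_el_nod, 1) DOF
# vector as (n_cell, n_el_nod, 1) without copying, read-only.
n_cell, n_el_nod = 3, 2
u = np.arange(n_cell * n_el_nod, dtype=np.float64)[:, None]  # shape (6, 1)
s = u.strides[0]
view = as_strided(u, shape=(n_cell, n_el_nod, 1),
                  strides=(n_el_nod * s, s, s), writeable=False)
print(view[1, 0, 0], view[1, 1, 0])  # 2.0 3.0
# the view agrees element-by-element with an ordinary reshape
assert np.array_equal(view, u.reshape(n_cell, n_el_nod, 1))
```

The stride arithmetic only works because the last axis has length 1; as_strided performs no bounds checking, so the writeable=False flag is what keeps the view safe.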
def get_raveler(n_el_nod, n_cell):
"""Returns function for raveling i.e. packing dof data from
two dimensional array of shape (n_cell, n_el_nod, 1) to (n_el_nod*n_cell, 1)
The raveler returns view into the input array.
Parameters
----------
n_el_nod : int
expected dimensions of dofs array
n_cell : int
Returns
-------
ravel : callable
"""
def ravel(u):
"""Returns view into the input array reshaped from (m, n, 1) to (n*m, 1)
to (m, n, 1) .
Parameters
----------
u : array_like
Returns
-------
u : ndarray
"""
# ustride1 = u.strides[0]
# ur = as_strided(u, shape=(n_el_nod*n_cell, 1),
# strides=(n_cell*ustride1, ustride1))
ur = nm.ravel(u)[:, None]
# possibly use according to
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html
# ur = u.reshape(-1)
return ur
return ravel
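The ravel step is the exact inverse of the unraveler's view; a quick round-trip sketch (NumPy assumed, not part of the original module):

```python
import numpy as np

# Sketch: nm.ravel(u)[:, None] flattens (n_cell, n_el_nod, 1) back to the
# serialized (n_cell*n_el_nod, 1) layout used for the solution vector.
u = np.arange(6.0).reshape(3, 2, 1)
flat = np.ravel(u)[:, None]
print(flat.shape)  # (6, 1)
assert np.array_equal(flat.reshape(3, 2, 1), u)  # lossless round trip
```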
# mapping between geometry element types
# and their facets types
# TODO move to sfepy/discrete/fem/geometry_element.py?
cell_facet_gel_name = {
"1_2": "0_1",
"2_3": "1_2",
"2_4": "1_2",
"3_4": "2_3",
"3_8": "2_4"
}
def get_gel(region):
"""
Parameters
----------
region : sfepy.discrete.common.region.Region
Returns
-------
gel :
base geometry element of the region
"""
cmesh = region.domain.cmesh
for key, gel in six.iteritems(region.domain.geom_els):
ct = cmesh.cell_types
if (ct[region.cells] == cmesh.key_to_index[gel.name]).all():
return gel
else:
raise ValueError('Region {} contains multiple'
' reference geometries!'.format(region))
class DGField(FEField):
"""Class for usage with DG terms, provides functionality for Discontinous
Galerkin method like neighbour look up, projection to discontinuous basis
and correct DOF treatment.
"""
family_name = 'volume_DG_legendre_discontinuous'
is_surface = False
def __init__(self, name, dtype, shape, region, space="H1",
poly_space_base="legendre", approx_order=1, integral=None):
"""
Creates DGField, with Legendre polyspace and default integral
corresponding to 2 * approx_order.
Parameters
----------
name : string
dtype : type
shape : string
'vector', 'scalar' or something else
region : sfepy.discrete.common.region.Region
space : string
default "H1"
poly_space_base : PolySpace
optionally force polyspace
approx_order : 0 for FVM, default 1
integral : Integral
if None integral of order 2*approx_order is created
"""
shape = parse_shape(shape, region.domain.shape.dim)
Struct.__init__(self, name=name, dtype=dtype, shape=shape,
region=region)
if isinstance(approx_order, tuple):
self.approx_order = approx_order[0]
else:
self.approx_order = approx_order
# geometry
self.domain = region.domain
self.region = region
self.dim = region.tdim
self._setup_geometry()
self._setup_connectivity()
# TODO treat domains embedded into higher dimensional spaces?
self.n_el_facets = self.dim + 1 if self.gel.is_simplex else 2**self.dim
# approximation space
self.space = space
self.poly_space_base = poly_space_base
self.force_bubble = False
self._create_interpolant()
# DOFs
self._setup_shape()
self._setup_all_dofs()
self.ravel_sol = get_raveler(self.n_el_nod, self.n_cell)
self.unravel_sol = get_unraveler(self.n_el_nod, self.n_cell)
# integral
self.clear_qp_base()
self.clear_facet_qp_base()
if integral is None:
self.integral = Integral("dg_fi", order=2 * self.approx_order)
else:
self.integral = integral
self.ori = None
self.basis_transform = None
# mapping
self.mappings = {}
self.mapping = self.create_mapping(self.region, self.integral, "volume",
return_mapping=True)[1]
self.mappings0 = {}
# neighbour facet mapping and data caches
# TODO use lru cache or different method?
self.clear_facet_neighbour_idx_cache()
self.clear_normals_cache()
self.clear_facet_vols_cache()
self.boundary_facet_local_idx = {}
def _create_interpolant(self):
name = self.gel.name + '_DG_legendre'
ps = PolySpace.any_from_args(name, self.gel, self.approx_order,
base=self.poly_space_base,
force_bubble=False)
self.poly_space = ps
# 'legendre_simplex' is created for '1_2'.
if self.gel.name in ["2_4", "3_8"]:
self.extended = True
else:
self.extended = False
def _setup_all_dofs(self):
"""Sets up all the differet kinds of DOFs, for DG only bubble DOFs"""
self.n_el_nod = self.poly_space.n_nod
self.n_vertex_dof = 0 # in DG we will probably never need vertex DOFs
self.n_edge_dof = 0 # use facets DOFS for AFS methods
self.n_face_dof = 0 # use facet DOF for AFS methods
(self.n_bubble_dof,
self.bubble_remap,
self.bubble_dofs) = self._setup_bubble_dofs()
self.n_nod = self.n_vertex_dof + self.n_edge_dof \
+ self.n_face_dof + self.n_bubble_dof
def _setup_bubble_dofs(self):
"""Creates DOF information for so called element, cell or bubble DOFs
- the only DOFs used in DG
n_dof = n_cells * n_el_nod
remap optional remapping between cells
dofs is mapping between dofs and cells
Returns
-------
n_dof : int
remap : ndarray
dofs : ndarray
"""
self.n_cell = self.region.get_n_cells(self.is_surface)
n_dof = self.n_cell * self.n_el_nod
dofs = nm.arange(n_dof, dtype=nm.int32)\
.reshape(self.n_cell, self.n_el_nod)
remap = nm.arange(self.n_cell)
self.econn = dofs
self.dofs2cells = nm.repeat(nm.arange(self.n_cell), self.n_el_nod)
return n_dof, remap, dofs
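The bubble-DOF bookkeeping above is just contiguous numbering, cell by cell. A standalone sketch (NumPy only, variable names chosen to match the method):

```python
import numpy as np

# Sketch: with n_cell cells and n_el_nod DOFs per cell, DOFs are numbered
# contiguously per cell, and dofs2cells inverts the mapping.
n_cell, n_el_nod = 3, 4
dofs = np.arange(n_cell * n_el_nod, dtype=np.int32).reshape(n_cell, n_el_nod)
dofs2cells = np.repeat(np.arange(n_cell), n_el_nod)
print(dofs[1])        # [4 5 6 7] -> the DOFs of cell 1
print(dofs2cells[5])  # 1 -> DOF 5 lives in cell 1
```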
def _setup_shape(self):
"""What is shape used for and what it really means.
Does it represent shape of the problem?
"""
self.n_components = nm.prod(self.shape)
self.val_shape = self.shape
def _setup_geometry(self):
"""Setup the field region geometry."""
# get_gel extracts the highest dimension geometry from self.region
self.gel = get_gel(self.region)
def _setup_connectivity(self):
"""Forces self.domain.mesh to build necessary conductivities
so they are available in self.get_nbrhd_dofs
"""
self.region.domain.mesh.cmesh.setup_connectivity(self.dim, self.dim)
self.region.domain.mesh.cmesh.setup_connectivity(self.dim - 1, self.dim)
self.region.domain.mesh.cmesh.setup_connectivity(self.dim, self.dim - 1)
def get_coor(self, nods=None):
"""Returns coors for matching nodes
# TODO revise DG_EPBC and EPBC matching?
Parameters
----------
nods :
if None use all nodes (Default value = None)
Returns
-------
coors : ndarray
coors on surface
"""
if nods is None:
nods = self.bubble_dofs
cells = self.dofs2cells[nods]
coors = self.domain.mesh.cmesh.get_centroids(self.dim)[cells]
eps = min(self.domain.cmesh.get_volumes(self.dim)) / (self.n_el_nod + 2)
if self.dim == 1:
extended_coors = nm.zeros(nm.shape(coors)[:-1] + (2,))
extended_coors[:, 0] = coors[:, 0]
coors = extended_coors
# shift centroid coors to lie within cells but be different for each dof
# use coors of facet QPs?
coors += eps * nm.repeat(nm.arange(self.n_el_nod),
len(nm.unique(cells)))[:, None]
return coors
def clear_facet_qp_base(self):
"""Clears facet_qp_base cache"""
self.facet_bf = {}
self.facet_qp = None
self.facet_whs = None
def _transform_qps_to_facets(self, qps, geo_name):
"""Transforms points given in qps to all facets of the reference element
with geometry geo_name.
Parameters
----------
qps :
qps corresponding to facet dimension to be transformed
geo_name :
element type
Returns
-------
tqps : ndarray
tqps is of shape shape(qps) + (n_el_facets, geo dim)
"""
if geo_name == "1_2":
tqps = nm.zeros(nm.shape(qps) + (2, 1,))
tqps[..., 0, 0] = 0.
tqps[..., 1, 0] = 1.
elif geo_name == "2_3":
tqps = nm.zeros(nm.shape(qps) + (3, 2,))
# 0.
tqps[..., 0, 0] = qps # x = 0 + t
tqps[..., 0, 1] = 0. # y = 0
# 1.
tqps[..., 1, 0] = 1 - qps # x = 1 - t
tqps[..., 1, 1] = qps # y = t
# 2.
tqps[..., 2, 0] = 0 # x = 0
tqps[..., 2, 1] = 1 - qps # y = 1 - t
elif geo_name == "2_4":
tqps = nm.zeros(nm.shape(qps) + (4, 2,))
# 0.
tqps[..., 0, 0] = qps # x = t
tqps[..., 0, 1] = 0. # y = 0
# 1.
tqps[..., 1, 0] = 1 # x = 1
tqps[..., 1, 1] = qps # y = t
# 2.
tqps[..., 2, 0] = 1 - qps # x = 1 -t
tqps[..., 2, 1] = 1 # y = 1
# 3.
tqps[..., 3, 0] = 0 # x = 0
tqps[..., 3, 1] = 1 - qps # y = 1 - t
elif geo_name == "3_4":
# tqps = nm.zeros(nm.shape(qps) + (4, 3,))
raise NotImplementedError("Geometry {} not supported, yet"
.format(geo_name))
elif geo_name == "3_8":
# tqps = nm.zeros(nm.shape(qps) + (8, 3,))
raise NotImplementedError("Geometry {} not supported, yet"
.format(geo_name))
else:
raise NotImplementedError("Geometry {} not supported, yet"
.format(geo_name))
return tqps
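The "2_3" branch above parametrizes each facet of the reference triangle by t in [0, 1]. A check sketch (NumPy only, hypothetical helper name) confirming that t=0 and t=1 land on the reference vertices (0,0), (1,0), (0,1) in traversal order:

```python
import numpy as np

# Sketch of the triangle facet parametrizations used in the "2_3" branch.
def facets_2_3(t):
    return np.array([(t, 0.0),         # facet 0: (0,0) -> (1,0)
                     (1.0 - t, t),     # facet 1: (1,0) -> (0,1)
                     (0.0, 1.0 - t)])  # facet 2: (0,1) -> (0,0)

print(facets_2_3(0.0))  # rows: [0,0], [1,0], [0,1]
print(facets_2_3(1.0))  # rows: [1,0], [0,1], [0,0]
```

Each facet map ends where the next one starts, so the three parametrizations traverse the triangle boundary consistently.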
def get_facet_qp(self):
"""Returns quadrature points on all facets of the reference element in
array of shape (n_qp, 1 , n_el_facets, dim)
Returns
-------
qps : ndarray
quadrature points
weights : ndarray
Still needs to be transformed to actual facets!
"""
if self.dim == 1:
facet_qps = self._transform_qps_to_facets(nm.zeros((1, 1)), "1_2")
weights = nm.ones((1, 1, 1))
else:
qps, weights = self.integral.get_qp(cell_facet_gel_name[self.gel.name])
weights = weights[None, :, None]
facet_qps = self._transform_qps_to_facets(qps, self.gel.name)
return facet_qps, weights
def get_facet_base(self, derivative=False, base_only=False):
"""
Returns values of base in facets quadrature points, data shape is a bit
crazy right now:
(number of qps, 1, n_el_facets, 1, n_el_nod)
and for derivative:
(1, number of qps, (dim,) * derivative, n_el_facets, 1, n_el_nod)
Parameters
----------
derivative: truthy or integer
base_only: do not return weights
Returns
-------
facet_bf : ndarray
values of basis functions in facet qps
weights : ndarray, optionally
weights of qps
"""
if derivative:
diff = int(derivative)
else:
diff = 0
if diff in self.facet_bf:
facet_bf = self.facet_bf[diff]
whs = self.facet_whs
else:
qps, whs = self.get_facet_qp()
ps = self.poly_space
self.facet_qp = qps
self.facet_whs = whs
if derivative:
facet_bf = nm.zeros((1,) + nm.shape(qps)[:-1] +
(self.dim,) * diff + (self.n_el_nod,))
else:
facet_bf = nm.zeros(nm.shape(qps)[:-1] + (1, self.n_el_nod,))
for i in range(self.n_el_facets):
facet_bf[..., i, :, :] = \
ps.eval_base(qps[..., i, :], diff=diff,
transform=self.basis_transform)
self.facet_bf[diff] = facet_bf
if base_only:
return facet_bf
else:
return facet_bf, whs
def clear_facet_neighbour_idx_cache(self, region=None):
"""
If region is None clear all!
Parameters
----------
region : sfepy.discrete.common.region.Region
If None clear all.
"""
if region is None:
self.facet_neighbour_index = {}
else:
self.facet_neighbour_index.pop(region.name)
def get_facet_neighbor_idx(self, region=None, eq_map=None):
"""
Returns index of cell neighbours sharing facet, along with local index
of the facet within neighbour, also treats periodic boundary conditions
i.e. plugs correct neighbours for cell on periodic boundary.
Where there are no neighbours specified puts -1 instead of neighbour
and facet id
Caches the neighbour index in self.facet_neighbour_index
Parameters
----------
region : sfepy.discrete.common.region.Region
Main region, must contain cells.
eq_map :
eq_map from state variable containing information on
EPBC and DG EPBC. (Default value = None)
Returns
-------
facet_neighbours : ndarray
Shape is
(n_cell, n_el_facet, 2),
first value is index of the neighbouring cell,
the second is index of the facet in said nb. cell.
"""
if region is None or eq_map is None:
# HOTFIX enabling limiter to obtain connectivity data without
# knowing eq_map or region
if self.region.name in self.facet_neighbour_index:
return self.facet_neighbour_index[self.region.name]
else:
raise ValueError("No facet neighbour mapping for main " +
"region {}".format(self.region.name) +
" cached yet, call with region and " +
"eq_map first.")
if region.name in self.facet_neighbour_index:
return self.facet_neighbour_index[region.name]
dim, n_cell, n_el_facets = self.get_region_info(region)
cmesh = region.domain.mesh.cmesh
cells = region.cells
facet_neighbours = nm.zeros((n_cell, n_el_facets, 2), dtype=nm.int32)
c2fi, c2fo = cmesh.get_incident(dim - 1, cells, dim, ret_offsets=True)
for ic, o1 in enumerate(c2fo[:-1]): # loop over cells
o2 = c2fo[ic + 1]
# get neighbours per facet of the cell
c2ci, c2co = cmesh.get_incident(dim, c2fi[o1:o2], dim - 1,
ret_offsets=True)
ii = cmesh.get_local_ids(c2fi[o1:o2], dim - 1, c2ci, c2co, dim)
fis = nm.c_[c2ci, ii]
nbrs = []
for ifa, of1 in enumerate(c2co[:-1]): # loop over facets
of2 = c2co[ifa + 1]
if of2 == (of1 + 1): # facet has only one cell
# Surface facet.
nbrs.append([-1, -1]) # c2ci[of1]) # append no neighbours
else:
if c2ci[of1] == cells[ic]: # do not append the cell itself
nbrs.append(fis[of2 - 1])
else:
nbrs.append(fis[of1])
facet_neighbours[ic, :, :] = nbrs
facet_neighbours = \
self._set_fem_periodic_facet_neighbours(facet_neighbours, eq_map)
facet_neighbours = \
self._set_dg_periodic_facet_neighbours(facet_neighbours, eq_map)
# cache results
self.facet_neighbour_index[region.name] = facet_neighbours
return facet_neighbours
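The shape of the table this method returns is easiest to see on a 1-D chain of cells. This sketch uses pure NumPy (no sfepy cmesh incidence queries) and only illustrates the (n_cell, n_el_facets, 2) layout with -1 marking boundary facets:

```python
import numpy as np

# Sketch: neighbour table for a 1-D chain of 4 cells; entry [cell, facet] is
# (neighbouring cell, local facet index in that neighbour), -1 on the boundary.
n_cell = 4
nbrs = np.full((n_cell, 2, 2), -1, dtype=np.int32)
for ic in range(n_cell):
    if ic > 0:            # left facet (0) touches right facet (1) of cell ic-1
        nbrs[ic, 0] = (ic - 1, 1)
    if ic < n_cell - 1:   # right facet (1) touches left facet (0) of cell ic+1
        nbrs[ic, 1] = (ic + 1, 0)
print(nbrs[0, 0])  # [-1 -1] -> boundary facet
print(nbrs[2, 1])  # [3 0]   -> cell 2's right facet meets cell 3's left facet
```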
def _set_dg_periodic_facet_neighbours(self, facet_neighbours, eq_map):
"""
Parameters
----------
facet_neighbours : array_like
Shape is
(n_cell, n_el_facet, 2),
first value is index of the neighbouring cell
the second is index of the facet in said nb. cell.
eq_map :
must contain dg_ep_bc a List with pairs of slave and master boundary
cell boundary facet mapping
Returns
-------
facet_neighbours : ndarray
Updated incidence array.
"""
# if eq_map.
# treat DG EPBC - these are definitely preferred
if eq_map.n_dg_epbc > 0 and self.gel.name not in ["1_2", "2_4", "3_6"]:
raise ValueError(
"Periodic boundary conditions not supported " +
"for geometry {} elements.".format(self.gel.name))
dg_epbc = eq_map.dg_epbc
for master_bc2bfi, slave_bc2bfi in dg_epbc:
# set neighbours of periodic cells to one another
facet_neighbours[master_bc2bfi[:, 0], master_bc2bfi[:, 1], 0] = \
slave_bc2bfi[:, 0]
facet_neighbours[slave_bc2bfi[:, 0], slave_bc2bfi[:, 1], 0] = \
master_bc2bfi[:, 0]
# set neighbours facets
facet_neighbours[slave_bc2bfi[:, 0], slave_bc2bfi[:, 1], 1] = \
master_bc2bfi[:, 1]
facet_neighbours[master_bc2bfi[:, 0], master_bc2bfi[:, 1], 1] =\
slave_bc2bfi[:, 1]
return facet_neighbours
def _set_fem_periodic_facet_neighbours(self, facet_neighbours, eq_map):
"""Maybe remove after DG EPBC revision in self.get_coor
Parameters
----------
facet_neighbours : array_like
Shape is (n_cell, n_el_facet, 2), first value is index of the
neighbouring cell the second is index of the facet in said nb. cell.
eq_map :
eq_map from state variable containing information on
EPBC and DG EPBC.
Returns
-------
facet_neighbours : ndarray
Updated incidence array.
"""
# treat classical FEM EPBCs - we need to correct neighbours
if eq_map.n_epbc > 0:
# set neighbours of periodic cells to one another
mcells = nm.unique(self.dofs2cells[eq_map.master])
scells = nm.unique(self.dofs2cells[eq_map.slave])
mcells_facets = nm.array(
nm.where(facet_neighbours[mcells] == -1))[1, 0] # facets mcells
scells_facets = nm.array(
nm.where(facet_neighbours[scells] == -1))[1, 0] # facets scells
# [1, 0] above, first we need second axis to get axis on which
# facet indices are stored, second we drop axis with neighbour
# local facet index,
#
# for multiple s/mcells this will have to be
# something like 1 + 2*nm.arange(len(mcells)) - to skip double
# entries for -1 tags in neighbours and neighbour local facet idx
# set neighbours of mcells to scells
facet_neighbours[mcells, mcells_facets, 0] = scells
# set neighbour facets to facets of scell missing neighbour
facet_neighbours[
mcells, mcells_facets, 1] = scells_facets
# we do not need to distinguish EBC and EPBC cells, EBC overwrite
# EPBC, we only need to fix shapes
# set neighbours of scells to mcells
facet_neighbours[scells, scells_facets, 0] = mcells
# set neighbour facets to facets of mcell missing neighbour0
facet_neighbours[
scells, scells_facets, 1] = mcells_facets
return facet_neighbours
@staticmethod
def get_region_info(region):
"""
Extracts information about region needed in various methods of DGField
Parameters
----------
region : sfepy.discrete.common.region.Region
Returns
-------
dim, n_cell, n_el_facets
"""
if not region.has_cells():
raise ValueError("Region {} has no cells".format(region.name))
n_cell = region.get_n_cells()
dim = region.tdim
gel = get_gel(region)
n_el_facets = dim + 1 if gel.is_simplex else 2 ** dim
return dim, n_cell, n_el_facets
def get_both_facet_state_vals(self, state, region,
derivative=None, reduce_nod=True):
"""Computes values of the variable represented by dofs in
quadrature points located at facets, returns both values -
inner and outer, along with weights.
Parameters
----------
state : state variable containing BC info
region : sfepy.discrete.common.region.Region
derivative : compute derivative if truthy,
compute n-th derivative if a number (Default value = None)
reduce_nod : if False DOES NOT sum nodes into values at QPs
(Default value = True)
Returns
-------
inner_facet_values (n_cell, n_el_facets, n_qp),
outer facet values (n_cell, n_el_facets, n_qp),
weights,
if derivative is True:
inner_facet_values (n_cell, n_el_facets, dim, n_qp),
outer_facet values (n_cell, n_el_facets, dim, n_qp)
"""
if derivative:
diff = int(derivative)
else:
diff = 0
unreduce_nod = int(not reduce_nod)
inner_base_vals, outer_base_vals, whs = \
self.get_both_facet_base_vals(state, region, derivative=derivative)
dofs = self.unravel_sol(state.data[0])
n_qp = whs.shape[-1]
outputs_shape = (self.n_cell, self.n_el_facets) + \
(self.n_el_nod,) * unreduce_nod + \
(self.dim,) * diff + \
(n_qp,)
inner_facet_vals = nm.zeros(outputs_shape)
if unreduce_nod:
inner_facet_vals[:] = nm.einsum('id...,idf...->ifd...',
dofs, inner_base_vals)
else:
inner_facet_vals[:] = nm.einsum('id...,id...->i...',
dofs, inner_base_vals)
per_facet_neighbours = self.get_facet_neighbor_idx(region, state.eq_map)
outer_facet_vals = nm.zeros(outputs_shape)
for facet_n in range(self.n_el_facets):
if unreduce_nod:
outer_facet_vals[:, facet_n, :] = \
nm.einsum('id...,id...->id...',
dofs[per_facet_neighbours[:, facet_n, 0]],
outer_base_vals[:, :, facet_n])
else:
outer_facet_vals[:, facet_n, :] = \
nm.einsum('id...,id...->i...',
dofs[per_facet_neighbours[:, facet_n, 0]],
outer_base_vals[:, :, facet_n])
boundary_cells = nm.array(nm.where(per_facet_neighbours[:, :, 0] < 0)).T
outer_facet_vals[boundary_cells[:, 0], boundary_cells[:, 1]] = 0.0
# TODO detect and print boundary cells without defined BCs?
for ebc, ebc_vals in zip(state.eq_map.dg_ebc.get(diff, []),
state.eq_map.dg_ebc_val.get(diff, [])):
if unreduce_nod:
raise NotImplementedError(
"Unreduced DOFs are not available for boundary " +
"outerfacets")
outer_facet_vals[ebc[:, 0], ebc[:, 1], :] = \
nm.einsum("id,id...->id...",
ebc_vals, inner_base_vals[0, :, ebc[:, 1]])
else:
# fix: flip qp order to accommodate the opposite
# facet orientation of neighbours
outer_facet_vals[ebc[:, 0], ebc[:, 1], :] = ebc_vals[:, ::-1]
# flip outer_facet_vals moved to get_both_facet_base_vals
return inner_facet_vals, outer_facet_vals, whs
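The einsum contraction above sums DOF coefficients against base values per cell; the 'id...,id...->i...' signature contracts the node axis d while the ellipsis carries the QP (and optional derivative) axes. A minimal sketch with stand-in shapes:

```python
import numpy as np

# Sketch of the DOF-to-QP contraction: for each cell i, sum over the
# n_el_nod axis d, broadcasting the trailing QP axis via the ellipsis.
n_cell, n_el_nod, n_qp = 2, 3, 4
dofs = np.ones((n_cell, n_el_nod))               # DOF coefficients
base = np.full((n_cell, n_el_nod, n_qp), 0.5)    # base values in QPs
vals = np.einsum('id...,id...->i...', dofs, base)
print(vals.shape)  # (2, 4)
print(vals[0, 0])  # 1.5 (= sum over 3 nodes of 1.0 * 0.5)
```

The same signature works for the derivative case because the extra dim axis simply rides along inside the ellipsis.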
def get_both_facet_base_vals(self, state, region, derivative=None):
"""Returns values of the basis function in quadrature points on facets
broadcasted to all cells inner to the element as well as outer ones
along with weights for the qps broadcasted and transformed to elements.
Contains quick fix to flip facet QPs for right integration order.
Parameters
----------
state : used to get EPBC info
region : sfepy.discrete.common.region.Region for connectivity
derivative : if u need derivative
(Default value = None)
Returns
-------
outer_facet_base_vals:
inner_facet_base_vals:
shape (n_cell, n_el_nod, n_el_facet, n_qp) or
(n_cell, n_el_nod, n_el_facet, dim, n_qp)
when derivative is True or 1
whs: shape (n_cell, n_el_facet, n_qp)
"""
if derivative:
diff = int(derivative)
else:
diff = 0
facet_bf, whs = self.get_facet_base(derivative=derivative)
n_qp = nm.shape(whs)[1]
facet_vols = self.get_facet_vols(region)
whs = facet_vols * whs[None, :, :, 0]
base_shape = (self.n_cell, self.n_el_nod, self.n_el_facets) + \
(self.dim,) * diff + \
(n_qp,)
inner_facet_base_vals = nm.zeros(base_shape)
outer_facet_base_vals = nm.zeros(base_shape)
if derivative:
inner_facet_base_vals[:] = facet_bf[0, :, 0, :, :, :]\
.swapaxes(-2, -3).T
else:
inner_facet_base_vals[:] = facet_bf[:, 0, :, 0, :].T
per_facet_neighbours = self.get_facet_neighbor_idx(region, state.eq_map)
# numpy prepends shape resulting from multiple
# indexing before remaining shape
if derivative:
outer_facet_base_vals[:] = \
inner_facet_base_vals[0, :, per_facet_neighbours[:, :, 1]]\
.swapaxes(-3, -4)
else:
outer_facet_base_vals[:] = \
inner_facet_base_vals[0, :, per_facet_neighbours[:, :, 1]]\
.swapaxes(-2, -3)
# fix to flip facet QPs for right integration order
return inner_facet_base_vals, outer_facet_base_vals[..., ::-1], whs
def clear_normals_cache(self, region=None):
"""Clears normals cache for given region or all regions.
Parameters
----------
region : sfepy.discrete.common.region.Region
region to clear cache or None to clear all
"""
if region is None:
self.normals_cache = {}
else:
if isinstance(region, str):
self.normals_cache.pop(region)
else:
self.normals_cache.pop(region.name)
def get_cell_normals_per_facet(self, region):
"""Caches results, use clear_normals_cache to clear the cache.
Parameters
----------
region: sfepy.discrete.common.region.Region
Main region, must contain cells.
Returns
-------
normals: ndarray
normals of facets in array of shape (n_cell, n_el_facets, dim)
"""
if region.name in self.normals_cache:
return self.normals_cache[region.name]
dim, n_cell, n_el_facets = self.get_region_info(region)
cmesh = region.domain.mesh.cmesh
normals = cmesh.get_facet_normals()
normals_out = nm.zeros((n_cell, n_el_facets, dim))
c2f = cmesh.get_conn(dim, dim - 1)
for ic, o1 in enumerate(c2f.offsets[:-1]):
o2 = c2f.offsets[ic + 1]
for ifal, ifa in enumerate(c2f.indices[o1:o2]):
normals_out[ic, ifal] = normals[o1 + ifal]
self.normals_cache[region.name] = normals_out
return normals_out
def clear_facet_vols_cache(self, region=None):
"""Clears facet volume cache for given region or all regions.
Parameters
----------
region : sfepy.discrete.common.region.Region
region to clear cache or None to clear all
"""
if region is None:
self.facet_vols_cache = {}
else:
if isinstance(region, str):
self.facet_vols_cache.pop(region)
else:
self.facet_vols_cache.pop(region.name)
def get_facet_vols(self, region):
"""Caches results, use clear_facet_vols_cache to clear the cache
Parameters
----------
region : sfepy.discrete.common.region.Region
Returns
-------
vols_out: ndarray
volumes of the facets by cells shape (n_cell, n_el_facets, 1)
"""
if region.name in self.facet_vols_cache:
return self.facet_vols_cache[region.name]
dim, n_cell, n_el_facets = self.get_region_info(region)
cmesh = region.domain.mesh.cmesh
if dim == 1:
vols = nm.ones((cmesh.num[0], 1))
else:
vols = cmesh.get_volumes(dim - 1)[:, None]
vols_out = nm.zeros((n_cell, n_el_facets, 1))
c2f = cmesh.get_conn(dim, dim - 1)
for ic, o1 in enumerate(c2f.offsets[:-1]):
o2 = c2f.offsets[ic + 1]
for ifal, ifa in enumerate(c2f.indices[o1:o2]):
vols_out[ic, ifal] = vols[ifa]
self.facet_vols_cache[region.name] = vols_out
return vols_out
def get_data_shape(self, integral, integration='volume', region_name=None):
"""Returns data shape
(n_nod, n_qp, self.gel.dim, self.n_el_nod)
Parameters
----------
integral : integral used
integration :
'volume' is only supported value (Default value = 'volume')
region_name : not used
(Default value = None)
Returns
-------
data_shape : tuple
"""
if integration in ('volume',):
# from FEField.get_data_shape()
_, weights = integral.get_qp(self.gel.name)
n_qp = weights.shape[0]
data_shape = (self.n_cell, n_qp, self.gel.dim, self.n_el_nod)
# econn.shape[1] == n_el_nod i.e. number nod in element
else:
raise NotImplementedError('unsupported integration! (%s)'
% integration)
return data_shape
def get_econn(self, conn_type, region, is_trace=False, integration=None):
"""Getter for econn
Parameters
----------
conn_type : string or Struct
'volume' is only supported
region : sfepy.discrete.common.region.Region
is_trace : ignored
(Default value = False)
integration : ignored
(Default value = None)
Returns
-------
econn : ndarray
connectivity information
"""
ct = conn_type.type if isinstance(conn_type, Struct) else conn_type
if ct == 'volume':
if region.name == self.region.name:
conn = self.econn
else:
raise ValueError("Bad region for the field")
else:
raise ValueError('unknown connectivity type! (%s)' % ct)
return conn
def setup_extra_data(self, geometry, info, is_trace):
"""This is called in create_adof_conns(conn_info, var_indx=None,
active_only=True, verbose=True)
for each variable but has no effect.
Parameters
----------
geometry :
ignored
info :
set to self.info
is_trace :
set to self.trace
"""
# placeholder, what is this used for?
# dct = info.dc_type.type
self.info = info
self.is_trace = is_trace
def get_dofs_in_region(self, region, merge=True):
"""Return indices of DOFs that belong to the given region.
Not Used in BC treatment
Parameters
----------
region : sfepy.discrete.common.region.Region
merge : bool
merge dof tuple into one numpy array, default True
Returns
-------
dofs : ndarray
"""
dofs = []
if region.has_cells(): # main region or its part
els = nm.ravel(self.bubble_remap[region.cells])
eldofs = self.bubble_dofs[els[els >= 0]]
dofs.append(eldofs)
else:
# return indices of cells adjacent to boundary facets
dim = self.dim
cmesh = region.domain.mesh.cmesh
bc_cells = cmesh.get_incident(dim, region.facets, dim - 1)
bc_dofs = self.bubble_dofs[bc_cells]
dofs.append(bc_dofs)
if merge:
dofs = nm.concatenate(dofs)
return dofs
def get_bc_facet_idx(self, region):
"""Caches results in self.boundary_facet_local_idx
Parameters
----------
region : sfepy.discrete.common.region.Region
surface region defining BCs
Returns
-------
bc2bfi : ndarray
index of cells on boundary along with corresponding facets
"""
if region.name in self.boundary_facet_local_idx:
return self.boundary_facet_local_idx[region.name]
bc2bfi = region.get_facet_indices()
self.boundary_facet_local_idx[region.name] = bc2bfi
return bc2bfi
def create_mapping(self, region, integral, integration,
return_mapping=True):
"""Creates and returns mapping
Parameters
----------
region : sfepy.discrete.common.region.Region
integral : Integral
integration : str
'volume' is only accepted option
return_mapping : default True
(Default value = True)
Returns
-------
mapping : VolumeMapping
"""
domain = self.domain
coors = domain.get_mesh_coors(actual=True)
dconn = domain.get_conn()
# from FEField
if integration == 'volume':
qp = self.get_qp('v', integral)
# qp = self.integral.get_qp(self.gel.name)
iels = region.get_cells()
geo_ps = self.gel.poly_space
ps = self.poly_space
bf = self.get_base('v', 0, integral, iels=iels)
conn = nm.take(dconn, iels.astype(nm.int32), axis=0)
mapping = VolumeMapping(coors, conn, poly_space=geo_ps)
vg = mapping.get_mapping(qp.vals, qp.weights, poly_space=ps,
ori=self.ori,
transform=self.basis_transform)
out = vg
else:
raise ValueError('unsupported integration geometry type: %s'
% integration)
if out is not None:
# Store the integral used.
out.integral = integral
out.qp = qp
out.ps = ps
# Update base.
out.bf[:] = bf
if return_mapping:
out = (out, mapping)
return out
def set_dofs(self, fun=0.0, region=None, dpn=None, warn=None):
"""Compute projection of fun into the basis, alternatively set DOFs
directly to provided value or values either in main volume region
or in boundary region.
Parameters
----------
fun : callable, scalar or array corresponding to dofs
(Default value = 0.0)
region : sfepy.discrete.common.region.Region
region to set DOFs on (Default value = None)
dpn : number of dofs per element
(Default value = None)
warn :
(Default value = None)
Returns
-------
nods : ndarray
vals : ndarray
"""
if region is None:
region = self.region
return self.set_cell_dofs(fun, region, dpn, warn)
elif region.has_cells():
return self.set_cell_dofs(fun, region, dpn, warn)
elif region.kind_tdim == self.dim - 1:
nods, vals = self.set_facet_dofs(fun, region, dpn, warn)
return nods, vals
def set_cell_dofs(self, fun=0.0, region=None, dpn=None, warn=None):
"""
Compute projection of fun onto the basis, in main region, alternatively
set DOFs directly to provided value or values
Parameters
----------
fun : callable, scalar or array corresponding to dofs
(Default value = 0.0)
region : sfepy.discrete.common.region.Region
region to set DOFs on (Default value = None)
dpn : number of dofs per element
(Default value = None)
warn : not used
(Default value = None)
Returns
-------
nods : ndarray
vals : ndarray
"""
aux = self.get_dofs_in_region(region)
nods = nm.unique(nm.hstack(aux))
if nm.isscalar(fun):
vals = nm.zeros(aux.shape)
vals[:, 0] = fun
vals = nm.hstack(vals)
elif isinstance(fun, nm.ndarray):
# useful for testing, allows to pass complete array of dofs as IC
if nm.shape(fun) == nm.shape(nods):
vals = fun
elif callable(fun):
qp, weights = self.integral.get_qp(self.gel.name)
coors = self.mapping.get_physical_qps(qp)
base_vals_qp = self.poly_space.eval_base(qp)[:, 0, :]
# this drops redundant axis that is returned by eval_base due to
# consistency with derivatives
# left hand, so far only orthogonal basis
# for legendre base this can be calculated exactly
# in 1D it is: 1 / (2 * nm.arange(self.n_el_nod) + 1)
lhs_diag = nm.einsum("q,q...->...", weights, base_vals_qp ** 2)
rhs_vec = nm.einsum("q,q...,iq...->i...",
weights, base_vals_qp, fun(coors))
vals = (rhs_vec / lhs_diag)
# plot for 1D
# from utils.visualizer import plot1D_legendre_dofs, reconstruct
# _legendre_dofs
# import matplotlib.pyplot as plt
# plot1D_legendre_dofs(self.domain.mesh.coors, (vals,), fun)
# ww, xx = reconstruct_legendre_dofs(self.domain.mesh.coors, 1,
# vals.T[..., None, None])
# plt.plot(xx, ww[:, 0], label="reconstructed dofs")
# plt.show()
return nods, vals
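The in-code comment in `set_cell_dofs` about the exact 1D mass-matrix diagonal can be checked numerically. A minimal sketch, assuming the 1D DG basis is the shifted Legendre basis on the reference cell [0, 1], which is what the `1 / (2 * nm.arange(self.n_el_nod) + 1)` formula corresponds to:

```python
import numpy as np

# Verify: int_0^1 P~_n(x)^2 dx == 1 / (2 n + 1) for shifted Legendre
# polynomials P~_n(x) = P_n(2 x - 1), mirroring the lhs_diag einsum above.
n_el_nod = 4
nodes, weights = np.polynomial.legendre.leggauss(8)  # rule on [-1, 1]
x = 0.5 * (nodes + 1.0)                              # map qps to [0, 1]
w = 0.5 * weights

base_vals_qp = np.stack(
    [np.polynomial.legendre.Legendre.basis(n)(2.0 * x - 1.0)
     for n in range(n_el_nod)])                      # (n_el_nod, n_qp)

lhs_diag = np.einsum("q,nq->n", w, base_vals_qp ** 2)
exact = 1.0 / (2.0 * np.arange(n_el_nod) + 1.0)
print(np.allclose(lhs_diag, exact))                  # True
```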
def set_facet_dofs(self, fun, region, dpn, warn):
"""Compute projection of fun onto the basis on facets, alternatively
set DOFs directly to provided value or values
Parameters
----------
fun : callable, scalar or array corresponding to dofs
region : sfepy.discrete.common.region.Region
region to set DOFs on
dpn : int
number of dofs per element
warn :
not used
Returns
-------
nods : ndarray
vals : ndarray
"""
raise NotImplementedError(
"Setting facet DOFs is not supported with DGField, " +
"use values at qp directly. " +
"This is usually result of using ebc instead of dgebc")
aux = self.get_dofs_in_region(region)
nods = nm.unique(nm.hstack(aux))
if nm.isscalar(fun):
vals = nm.zeros(aux.shape)
vals[:, 0] = fun
vals = nm.hstack(vals)
elif isinstance(fun, nm.ndarray):
assert_(len(fun) == dpn)
vals = nm.zeros(aux.shape)
vals[:, 0] = nm.repeat(fun, vals.shape[0])
elif callable(fun):
vals = nm.zeros(aux.shape)
# set zero DOF to value fun, set other DOFs to zero
# get facets QPs
qp, weights = self.get_facet_qp()
weights = weights[0, :, 0]
qp = qp[:, 0, :, :]
# get facets weights ?
# get coors
bc2bfi = self.get_bc_facet_idx(region)
coors = self.mapping.get_physical_qps(qp)
# get_physical_qps returns data in strange format, swapping
# some axis and flipping qps order
bcoors = coors[bc2bfi[:, 1], ::-1, bc2bfi[:, 0], :]
# get facet basis vals
base_vals_qp = self.poly_space.eval_base(qp)[:, 0, 0, :]
# solve for boundary cell DOFs
bc_val = fun(bcoors)
# this returns singular matrix - projection on the boundary should
# be into facet dim space
#lhs = nm.einsum("q,qd,qc->dc", weights, base_vals_qp, base_vals_qp)
# inv_lhs = nm.linalg.inv(lhs)
# rhs_vec = nm.einsum("q,q...,iq...->i...",
# weights, base_vals_qp, bc_val)
return nods, vals
def get_bc_facet_values(self, fun, region, ret_coors=False, diff=0):
"""Returns values of fun in facet QPs of the region
Parameters
----------
fun : scalar, array or callable
Function, value or values to set qps values to
region : sfepy.discrete.common.region.Region
boundary region
ret_coors : bool, default False
Return physical coors of qps in shape (n_cell, n_qp, dim).
diff : int
derivative, 0 or 1 supported
Returns
-------
vals : ndarray
In shape (n_cell,) + (self.dim,) * diff + (n_qp,)
"""
if region.has_cells():
raise NotImplementedError(
"Region {} has cells and can't be used as boundary region".
format(region))
# get facets QPs
qp, weights = self.get_facet_qp()
weights = weights[0, :, 0]
qp = qp[:, 0, :, :]
n_qp = qp.shape[0]
# get facets weights ?
# get physical coors
bc2bfi = self.get_bc_facet_idx(region)
n_cell = bc2bfi.shape[0]
coors = self.mapping.get_physical_qps(qp)
# get_physical_qps returns data in strange format,
# swapping some axis and flipping qps order
# to get coors in shape (n_facet, n_qp, n_cell, dim)
if len(coors.shape) == 3:
coors = coors[:, None, :, :] # add axis for qps when it is missing
coors = coors.swapaxes(0, 2)
bcoors = coors[bc2bfi[:, 1], ::-1, bc2bfi[:, 0], :]
diff_shape = (self.dim,) * diff
output_shape = (n_cell,) + diff_shape + (n_qp,)
vals = nm.zeros(output_shape)
# we do not need last axis of coors, values are scalars
if nm.isscalar(fun):
if sum(diff_shape) > 1:
output(("Warning: Setting gradient of shape {} "
"in region {} with scalar value {}")
.format(diff_shape, region.name, fun))
vals[:] = fun
elif isinstance(fun, nm.ndarray):
try:
vals[:] = fun[:, None]
except ValueError:
raise ValueError(("Provided values of shape {} could not" +
" be used to set BC qps of shape {} in " +
"region {}")
.format(fun.shape, vals.shape, region.name))
elif callable(fun):
# get boundary values
vals[:] = fun(bcoors)
if ret_coors:
return bcoors, vals
return vals
def get_nodal_values(self, dofs, region, ref_nodes=None):
"""Computes nodal representation of the DOFs
Parameters
----------
dofs : array_like
dofs to transform to nodes
region : Region
ignored
ref_nodes : array_like
reference nodes to use instead of default qps
(Default value = None)
Returns
-------
nodes : ndarray
nodal_vals : ndarray
"""
if ref_nodes is None:
# poly_space could provide special nodes
ref_nodes = self.get_qp('v', self.integral).vals
base_vals_node = self.poly_space.eval_base(ref_nodes)[:, 0, :]
dofs = self.unravel_sol(dofs[:, 0])
nodal_vals = nm.sum(dofs * base_vals_node.T, axis=1)
nodes = self.mapping.get_physical_qps(ref_nodes)
# import matplotlib.pyplot as plt
# plt.plot(nodes[:, 0], nodal_vals)
# plt.show()
return nodes, nodal_vals
def create_output(self, dofs, var_name, dof_names=None,
key=None, extend=True, fill_value=None,
linearization=None):
"""Converts the DOFs corresponding to the field to a dictionary of
output data usable by Mesh.write().
For 1D puts DOFs into variables u_modal{0} ... u_modal{n}, where
n = approx_order and marks them for writing as cell data.
For 2+D puts dofs into name_cell_nodes and creates struct with:
mode = "cell_nodes", data and interpolation scheme.
Also get node values and adds them to dictionary as cell_nodes
Parameters
----------
dofs : ndarray, shape (n_nod, n_component)
The array of DOFs reshaped so that each column corresponds
to one component.
var_name : str
The variable name corresponding to `dofs`.
dof_names : tuple of str
The names of DOF components. (Default value = None)
key : str, optional
The key to be used in the output dictionary instead of the
variable name. (Default value = None)
extend : bool, not used
Extend the DOF values to cover the whole domain.
(Default value = True)
fill_value : float or complex, not used
The value used to fill the missing DOF values if `extend` is True.
(Default value = None)
linearization : Struct or None, not used
The linearization configuration for higher order approximations.
(Default value = None)
Returns
-------
out : dict
"""
out = {}
udofs = self.unravel_sol(dofs)
name = var_name if key is None else key
if self.dim == 1:
for i in range(self.n_el_nod):
out[name + "_modal{}".format(i)] = \
Struct(mode="cell", data=udofs[:, i, None, None])
else:
interpolation_scheme = self.poly_space.get_interpol_scheme()
unravel = get_unraveler(self.n_el_nod, self.n_cell)
out[name + "_cell_nodes"] = Struct(mode="cell_nodes",
data=unravel(dofs)[..., 0],
scheme=interpolation_scheme)
return out
| rc/sfepy | sfepy/discrete/dg/fields.py | Python | bsd-3-clause | 49,854 | [
"MCell"
] | bbc239f6feb451a5c8f9952af8a5ef4914b993741b9897962ffba5597fef0a70 |
# -*- coding: utf-8 -*-
# vim: autoindent shiftwidth=4 expandtab textwidth=120 tabstop=4 softtabstop=4
###############################################################################
# OpenLP - Open Source Lyrics Projection #
# --------------------------------------------------------------------------- #
# Copyright (c) 2008-2013 Raoul Snyman #
# Portions copyright (c) 2008-2013 Tim Bentley, Gerald Britton, Jonathan #
# Corwin, Samuel Findlay, Michael Gorven, Scott Guerrieri, Matthias Hub, #
# Meinert Jordan, Armin Köhler, Erik Lundin, Edwin Lunando, Brian T. Meyer. #
# Joshua Miller, Stevan Pettit, Andreas Preikschat, Mattias Põldaru, #
# Christian Richter, Philip Ridout, Simon Scudder, Jeffrey Smith, #
# Maikel Stuivenberg, Martin Thompson, Jon Tibble, Dave Warnock, #
# Frode Woldsund, Martin Zibricky, Patrick Zimmermann #
# --------------------------------------------------------------------------- #
# This program is free software; you can redistribute it and/or modify it #
# under the terms of the GNU General Public License as published by the Free #
# Software Foundation; version 2 of the License. #
# #
# This program is distributed in the hope that it will be useful, but WITHOUT #
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or #
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for #
# more details. #
# #
# You should have received a copy of the GNU General Public License along #
# with this program; if not, write to the Free Software Foundation, Inc., 59 #
# Temple Place, Suite 330, Boston, MA 02111-1307 USA #
###############################################################################
"""
The :mod:`serviceitem` provides the service item functionality including the
type and capability of an item.
"""
import cgi
import datetime
import logging
import os
import uuid
from PyQt4 import QtGui
from openlp.core.lib import ImageSource, Settings, Registry, build_icon, clean_tags, expand_tags, translate
log = logging.getLogger(__name__)
class ServiceItemType(object):
"""
Defines the type of service item
"""
Text = 1
Image = 2
Command = 3
class ItemCapabilities(object):
"""
Provides an enumeration of a service item's capabilities
``CanPreview``
The capability to allow the ServiceManager to add to the preview tab when making the previous item live.
``CanEdit``
The capability to allow the ServiceManager to allow the item to be edited
``CanMaintain``
The capability to allow the ServiceManager to allow the item to be reordered.
``RequiresMedia``
Determines if the service_item needs a Media Player
``CanLoop``
The capability to allow the SlideController to allow the loop processing.
``CanAppend``
The capability to allow the ServiceManager to add leaves to the
item
``NoLineBreaks``
The capability to remove lines breaks in the renderer
``OnLoadUpdate``
The capability to update MediaManager when a service Item is loaded.
``AddIfNewItem``
Not Used
``ProvidesOwnDisplay``
The capability to tell the SlideController the service Item has a different display.
``HasDetailedTitleDisplay``
Being removed and decommissioned.
``HasVariableStartTime``
The capability to tell the ServiceManager that a change to start time is possible.
``CanSoftBreak``
The capability to tell the renderer that Soft Break is allowed
``CanWordSplit``
The capability to tell the renderer that it can split words is
allowed
``HasBackgroundAudio``
That an audio file is present with the text.
``CanAutoStartForLive``
The capability to ignore the do not play if display blank flag.
"""
CanPreview = 1
CanEdit = 2
CanMaintain = 3
RequiresMedia = 4
CanLoop = 5
CanAppend = 6
NoLineBreaks = 7
OnLoadUpdate = 8
AddIfNewItem = 9
ProvidesOwnDisplay = 10
HasDetailedTitleDisplay = 11
HasVariableStartTime = 12
CanSoftBreak = 13
CanWordSplit = 14
HasBackgroundAudio = 15
CanAutoStartForLive = 16
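The capability mechanism is just a list of integer flags: `add_capability()` appends and `is_capable()` tests membership. A standalone mock (not the real ServiceItem) illustrating the pattern:

```python
class MockServiceItem:
    def __init__(self):
        self.capabilities = []

    def add_capability(self, capability):
        # add an ItemCapabilities flag to this item
        self.capabilities.append(capability)

    def is_capable(self, capability):
        # tell the caller whether the item has the flag
        return capability in self.capabilities

CanLoop = 5
CanAppend = 6

item = MockServiceItem()
item.add_capability(CanLoop)
print(item.is_capable(CanLoop))    # True
print(item.is_capable(CanAppend))  # False
```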
class ServiceItem(object):
"""
The service item is a base class for the plugins to use to interact with
the service manager, the slide controller, and the projection screen
compositor.
"""
log.info('Service Item created')
def __init__(self, plugin=None):
"""
Set up the service item.
``plugin``
The plugin that this service item belongs to.
"""
if plugin:
self.name = plugin.name
self.title = ''
self.processor = None
self.audit = ''
self.items = []
self.iconic_representation = None
self.raw_footer = []
self.foot_text = ''
self.theme = None
self.service_item_type = None
self._raw_frames = []
self._display_frames = []
self.unique_identifier = 0
self.notes = ''
self.from_plugin = False
self.capabilities = []
self.is_valid = True
self.icon = None
self.themedata = None
self.main = None
self.footer = None
self.bg_image_bytes = None
self.search_string = ''
self.data_string = ''
self.edit_id = None
self.xml_version = None
self.start_time = 0
self.end_time = 0
self.media_length = 0
self.from_service = False
self.image_border = '#000000'
self.background_audio = []
self.theme_overwritten = False
self.temporary_edit = False
self.auto_play_slides_once = False
self.auto_play_slides_loop = False
self.timed_slide_interval = 0
self.will_auto_start = False
self.has_original_files = True
self._new_item()
def _new_item(self):
"""
Method to set the internal id of the item. This is used to compare
service items to see if they are the same.
"""
self.unique_identifier = str(uuid.uuid1())
self.validate_item()
def add_capability(self, capability):
"""
Add an ItemCapability to a ServiceItem
``capability``
The capability to add
"""
self.capabilities.append(capability)
def is_capable(self, capability):
"""
Tell the caller if a ServiceItem has a capability
``capability``
The capability to test for
"""
return capability in self.capabilities
def add_icon(self, icon):
"""
Add an icon to the service item. This is used when displaying the
service item in the service manager.
``icon``
A string to an icon in the resources or on disk.
"""
self.icon = icon
self.iconic_representation = build_icon(icon)
def render(self, provides_own_theme_data=False):
"""
The render method is what generates the frames for the screen and
obtains the display information from the renderer. At this point all
slides are built for the given display size.
``provides_own_theme_data``
This switch disables the usage of the item's theme; it is disabled by
default. If it is used, care has to be taken that the renderer knows
the correct theme data. This is needed for the theme manager.
"""
log.debug('Render called')
self._display_frames = []
self.bg_image_bytes = None
if not provides_own_theme_data:
self.renderer.set_item_theme(self.theme)
self.themedata, self.main, self.footer = self.renderer.pre_render()
if self.service_item_type == ServiceItemType.Text:
log.debug('Formatting slides: %s' % self.title)
# Save rendered pages to this dict. In the case that a slide is used
# twice we can use the pages saved to the dict instead of rendering
# them again.
previous_pages = {}
for slide in self._raw_frames:
verse_tag = slide['verseTag']
if verse_tag in previous_pages and previous_pages[verse_tag][0] == slide['raw_slide']:
pages = previous_pages[verse_tag][1]
else:
pages = self.renderer.format_slide(slide['raw_slide'], self)
previous_pages[verse_tag] = (slide['raw_slide'], pages)
for page in pages:
page = page.replace('<br>', '{br}')
html = expand_tags(cgi.escape(page.rstrip()))
self._display_frames.append({
'title': clean_tags(page),
'text': clean_tags(page.rstrip()),
'html': html.replace('&nbsp;', ' '),
'verseTag': verse_tag
})
elif self.service_item_type == ServiceItemType.Image or self.service_item_type == ServiceItemType.Command:
pass
else:
log.error('Invalid value renderer: %s' % self.service_item_type)
self.title = clean_tags(self.title)
# The footer should never be None, but to be compatible with a few
# nightly builds between 1.9.4 and 1.9.5, we have to correct this to
# avoid tracebacks.
if self.raw_footer is None:
self.raw_footer = []
self.foot_text = '<br>'.join([_f for _f in self.raw_footer if _f])
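The footer join above drops empty or None entries before joining with `<br>`; a tiny standalone illustration:

```python
# raw_footer may contain empty strings or None from older services;
# the list comprehension filters falsy entries before joining.
raw_footer = ['Amazing Grace', '', 'John Newton', None]
foot_text = '<br>'.join([_f for _f in raw_footer if _f])
print(foot_text)  # Amazing Grace<br>John Newton
```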
def add_from_image(self, path, title, background=None):
"""
Add an image slide to the service item.
``path``
The directory in which the image file is located.
``title``
A title for the slide in the service item.
``background``
Optional background (border) colour for the image.
if background:
self.image_border = background
self.service_item_type = ServiceItemType.Image
self._raw_frames.append({'title': title, 'path': path})
self.image_manager.add_image(path, ImageSource.ImagePlugin, self.image_border)
self._new_item()
def add_from_text(self, raw_slide, verse_tag=None):
"""
Add a text slide to the service item.
``raw_slide``
The raw text of the slide.
"""
if verse_tag:
verse_tag = verse_tag.upper()
self.service_item_type = ServiceItemType.Text
title = raw_slide[:30].split('\n')[0]
self._raw_frames.append({'title': title, 'raw_slide': raw_slide, 'verseTag': verse_tag})
self._new_item()
def add_from_command(self, path, file_name, image):
"""
Add a slide from a command.
``path``
The directory in which the file for the slide is located.
``file_name``
The file name of the slide.
``image``
The image representing the slide.
"""
self.service_item_type = ServiceItemType.Command
self._raw_frames.append({'title': file_name, 'image': image, 'path': path})
self._new_item()
def get_service_repr(self, lite_save):
"""
This method returns some text which can be saved into the service
file to represent this item.
"""
service_header = {
'name': self.name,
'plugin': self.name,
'theme': self.theme,
'title': self.title,
'icon': self.icon,
'footer': self.raw_footer,
'type': self.service_item_type,
'audit': self.audit,
'notes': self.notes,
'from_plugin': self.from_plugin,
'capabilities': self.capabilities,
'search': self.search_string,
'data': self.data_string,
'xml_version': self.xml_version,
'auto_play_slides_once': self.auto_play_slides_once,
'auto_play_slides_loop': self.auto_play_slides_loop,
'timed_slide_interval': self.timed_slide_interval,
'start_time': self.start_time,
'end_time': self.end_time,
'media_length': self.media_length,
'background_audio': self.background_audio,
'theme_overwritten': self.theme_overwritten,
'will_auto_start': self.will_auto_start,
'processor': self.processor
}
service_data = []
if self.service_item_type == ServiceItemType.Text:
service_data = [slide for slide in self._raw_frames]
elif self.service_item_type == ServiceItemType.Image:
if lite_save:
for slide in self._raw_frames:
service_data.append({'title': slide['title'], 'path': slide['path']})
else:
service_data = [slide['title'] for slide in self._raw_frames]
elif self.service_item_type == ServiceItemType.Command:
for slide in self._raw_frames:
service_data.append({'title': slide['title'], 'image': slide['image'], 'path': slide['path']})
return {'header': service_header, 'data': service_data}
def set_from_service(self, serviceitem, path=None):
"""
This method takes a service item from a saved service file (passed
from the ServiceManager) and extracts the data actually required.
``serviceitem``
The item to extract data from.
``path``
Defaults to *None*. This is the service manager path for things
which have their files saved with them or None when the saved
service is lite and the original file paths need to be preserved.
"""
log.debug('set_from_service called with path %s' % path)
header = serviceitem['serviceitem']['header']
self.title = header['title']
self.name = header['name']
self.service_item_type = header['type']
self.theme = header['theme']
self.add_icon(header['icon'])
self.raw_footer = header['footer']
self.audit = header['audit']
self.notes = header['notes']
self.from_plugin = header['from_plugin']
self.capabilities = header['capabilities']
# Added later so may not be present in older services.
self.search_string = header.get('search', '')
self.data_string = header.get('data', '')
self.xml_version = header.get('xml_version')
self.start_time = header.get('start_time', 0)
self.end_time = header.get('end_time', 0)
self.media_length = header.get('media_length', 0)
self.auto_play_slides_once = header.get('auto_play_slides_once', False)
self.auto_play_slides_loop = header.get('auto_play_slides_loop', False)
self.timed_slide_interval = header.get('timed_slide_interval', 0)
self.will_auto_start = header.get('will_auto_start', False)
self.processor = header.get('processor', None)
self.has_original_files = True
#TODO Remove me in 2,3 build phase
if self.is_capable(ItemCapabilities.HasDetailedTitleDisplay):
self.capabilities.remove(ItemCapabilities.HasDetailedTitleDisplay)
self.processor = self.title
self.title = None
if 'background_audio' in header:
self.background_audio = []
for filename in header['background_audio']:
# Give them real file paths
self.background_audio.append(os.path.join(path, filename))
self.theme_overwritten = header.get('theme_overwritten', False)
if self.service_item_type == ServiceItemType.Text:
for slide in serviceitem['serviceitem']['data']:
self._raw_frames.append(slide)
elif self.service_item_type == ServiceItemType.Image:
settings_section = serviceitem['serviceitem']['header']['name']
background = QtGui.QColor(Settings().value(settings_section + '/background color'))
if path:
self.has_original_files = False
for text_image in serviceitem['serviceitem']['data']:
filename = os.path.join(path, text_image)
self.add_from_image(filename, text_image, background)
else:
for text_image in serviceitem['serviceitem']['data']:
self.add_from_image(text_image['path'], text_image['title'], background)
elif self.service_item_type == ServiceItemType.Command:
for text_image in serviceitem['serviceitem']['data']:
if not self.title:
self.title = text_image['title']
if path:
self.has_original_files = False
self.add_from_command(path, text_image['title'], text_image['image'])
else:
self.add_from_command(text_image['path'], text_image['title'], text_image['image'])
self._new_item()
def get_display_title(self):
"""
Returns the title of the service item.
"""
if self.is_text():
return self.title
else:
if len(self._raw_frames) > 1:
return self.title
else:
return self._raw_frames[0]['title']
def merge(self, other):
"""
Updates the unique_identifier with the value from the original one
The unique_identifier is unique for a given service item but this allows one to
replace an original version.
``other``
The service item to be merged with
"""
self.unique_identifier = other.unique_identifier
self.notes = other.notes
self.temporary_edit = other.temporary_edit
# Copy theme over if present.
if other.theme is not None:
self.theme = other.theme
self._new_item()
self.render()
if self.is_capable(ItemCapabilities.HasBackgroundAudio):
log.debug(self.background_audio)
def __eq__(self, other):
"""
Confirms the service items are for the same instance
"""
if not other:
return False
return self.unique_identifier == other.unique_identifier
def __ne__(self, other):
"""
Confirms the service items are not for the same instance
"""
return self.unique_identifier != other.unique_identifier
def __hash__(self):
"""
Return the hash for the service item.
"""
return hash(self.unique_identifier)
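The equality semantics are identity-by-UUID: two items compare equal iff they share `unique_identifier`, which is why `merge()` copying the identifier makes the merged item "the same" item. A mock sketch (not the real ServiceItem):

```python
import uuid

class MockItem:
    def __init__(self):
        # uuid1() gives each new item a distinct identity
        self.unique_identifier = str(uuid.uuid1())

    def __eq__(self, other):
        if not other:
            return False
        return self.unique_identifier == other.unique_identifier

    def __hash__(self):
        # hash of the identifier string keeps items usable in sets/dicts
        return hash(self.unique_identifier)

a, b = MockItem(), MockItem()
print(a == b)                              # False: distinct identifiers
b.unique_identifier = a.unique_identifier  # what merge() effectively does
print(a == b)                              # True: merged identity
```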
def is_media(self):
"""
Confirms if the ServiceItem is media
"""
return ItemCapabilities.RequiresMedia in self.capabilities
def is_command(self):
"""
Confirms if the ServiceItem is a command
"""
return self.service_item_type == ServiceItemType.Command
def is_image(self):
"""
Confirms if the ServiceItem is an image
"""
return self.service_item_type == ServiceItemType.Image
def uses_file(self):
"""
Confirms if the ServiceItem uses a file
"""
return self.service_item_type == ServiceItemType.Image or self.service_item_type == ServiceItemType.Command
def is_text(self):
"""
Confirms if the ServiceItem is text
"""
return self.service_item_type == ServiceItemType.Text
def set_media_length(self, length):
"""
Stores the media length of the item
``length``
The length of the media item
"""
self.media_length = length
if length > 0:
self.add_capability(ItemCapabilities.HasVariableStartTime)
def get_frames(self):
"""
Returns the frames for the ServiceItem
"""
if self.service_item_type == ServiceItemType.Text:
return self._display_frames
else:
return self._raw_frames
def get_rendered_frame(self, row):
"""
Returns the correct frame for a given list and renders it if required.
``row``
The service item slide to be returned
"""
if self.service_item_type == ServiceItemType.Text:
return self._display_frames[row]['html'].split('\n')[0]
elif self.service_item_type == ServiceItemType.Image:
return self._raw_frames[row]['path']
else:
return self._raw_frames[row]['image']
def get_frame_title(self, row=0):
"""
Returns the title of the raw frame
"""
try:
return self._raw_frames[row]['title']
except IndexError:
return ''
def get_frame_path(self, row=0, frame=None):
"""
Returns the path of the raw frame
"""
if not frame:
try:
frame = self._raw_frames[row]
except IndexError:
return ''
if self.is_image():
path_from = frame['path']
else:
path_from = os.path.join(frame['path'], frame['title'])
return path_from
def remove_frame(self, frame):
"""
Remove the specified frame from the item
"""
if frame in self._raw_frames:
self._raw_frames.remove(frame)
def get_media_time(self):
"""
Returns the start and finish time for a media item
"""
start = None
end = None
if self.start_time != 0:
start = translate('OpenLP.ServiceItem', '<strong>Start</strong>: %s') % \
str(datetime.timedelta(seconds=self.start_time))
if self.media_length != 0:
end = translate('OpenLP.ServiceItem', '<strong>Length</strong>: %s') % \
str(datetime.timedelta(seconds=self.media_length))
if not start and not end:
return ''
elif start and not end:
return start
elif not start and end:
return end
else:
return '%s <br>%s' % (start, end)
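`get_media_time` relies on `datetime.timedelta` pretty-printing whole seconds as `H:MM:SS`; that string is what lands inside the `<strong>Start</strong>`/`<strong>Length</strong>` labels. A quick illustration:

```python
import datetime

start_time = 90      # seconds
media_length = 3725  # seconds

# str() of a timedelta built from whole seconds gives H:MM:SS
start = str(datetime.timedelta(seconds=start_time))
length = str(datetime.timedelta(seconds=media_length))
print(start)   # 0:01:30
print(length)  # 1:02:05
```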
def update_theme(self, theme):
"""
Updates the theme in the service item
``theme``
The new theme to be replaced in the service item
"""
self.theme_overwritten = (theme is None)
self.theme = theme
self._new_item()
self.render()
def remove_invalid_frames(self, invalid_paths=None):
"""
Remove invalid frames, such as ones where the file no longer exists.
"""
if self.uses_file():
for frame in self.get_frames():
if self.get_frame_path(frame=frame) in invalid_paths:
self.remove_frame(frame)
def missing_frames(self):
"""
Returns if there are any frames in the service item
"""
return not bool(self._raw_frames)
def validate_item(self, suffix_list=None):
"""
Validates a service item to make sure it is valid
"""
self.is_valid = True
for frame in self._raw_frames:
if self.is_image() and not os.path.exists(frame['path']):
self.is_valid = False
break
elif self.is_command():
file_name = os.path.join(frame['path'], frame['title'])
if not os.path.exists(file_name):
self.is_valid = False
break
if suffix_list and not self.is_text():
file_suffix = frame['title'].split('.')[-1]
if file_suffix.lower() not in suffix_list:
self.is_valid = False
break
def _get_renderer(self):
"""
Adds the Renderer to the class dynamically
"""
if not hasattr(self, '_renderer'):
self._renderer = Registry().get('renderer')
return self._renderer
renderer = property(_get_renderer)
def _get_image_manager(self):
"""
Adds the image manager to the class dynamically
"""
if not hasattr(self, '_image_manager'):
self._image_manager = Registry().get('image_manager')
return self._image_manager
image_manager = property(_get_image_manager)
| marmyshev/bug_1117098 | openlp/core/lib/serviceitem.py | Python | gpl-2.0 | 24,844 | [
"Brian"
] | 1a2afe7b1db90eb65823bb77f1e6783a6076466ab1bcebeda0d72d7ac3908063 |
"""Fetches all the freely available WMT14 data
Visit: http://www.statmt.org/wmt14/translation-task.html"""
import os
import anna.data.utils as utils
CORPORA = {
"europarl-parallel.tgz":
"http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz",
"europarl-monolingual.tgz":
"http://www.statmt.org/wmt13/training-monolingual-europarl-v7.tgz",
"commoncrawl.tgz":
"http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz",
"un.tgz":
"http://www.statmt.org/wmt13/training-parallel-un.tgz",
"nc-parallel.tgz":
"http://www.statmt.org/wmt14/training-parallel-nc-v9.tgz",
"nc-monolingual.tgz":
"http://www.statmt.org/wmt14/training-monolingual-nc-v9.tgz",
"giga-fren.tar":
"http://www.statmt.org/wmt10/training-giga-fren.tar",
"dev.tgz": "http://www.statmt.org/wmt14/dev.tgz",
"test.tgz": "http://www.statmt.org/wmt14/test-full.tgz"
}
def fetch(data_dir, dest="wmt14"):
"""
Fetches most data from the WMT14 shared task.
Creates the `dest` if it doesn't exist.
Args:
data_dir (str): absolute path to the dir where datasets are stored
dest (str): name for dir where WMT14 datasets will be extracted
Returns:
final_dir (str): absolute path where WMT14 datasets were extracted
"""
# Create folder
wmt_dir = os.path.join(data_dir, dest)
utils.create_folder(wmt_dir)
# Download all datasets
for f, url in CORPORA.items():
utils.urlretrieve(url, os.path.join(wmt_dir, f))
return wmt_dir
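The download loop in `fetch()` reduces to building (url, target-path) pairs from the `CORPORA` mapping and saving each archive under `<data_dir>/<dest>/<name>`. A network-free sketch (`plan_downloads` is illustrative, not part of the module):

```python
import os

CORPORA = {
    "dev.tgz": "http://www.statmt.org/wmt14/dev.tgz",
    "test.tgz": "http://www.statmt.org/wmt14/test-full.tgz",
}

def plan_downloads(data_dir, dest="wmt14"):
    # mirror fetch(): every archive lands in <data_dir>/<dest>/<name>
    wmt_dir = os.path.join(data_dir, dest)
    return [(url, os.path.join(wmt_dir, name))
            for name, url in sorted(CORPORA.items())]

for url, target in plan_downloads("/tmp/data"):
    print(url, "->", target)
```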
| jpbottaro/anna | anna/data/dataset/wmt.py | Python | mit | 1,573 | [
"VisIt"
] | e212c9a469e3adc163f2f872a59c2802b0945fcbffb3df41328d54dacdde2448 |
r"""
.. warning:: This model and this model description are under review following
concerns raised by SasView users. If you need to use this model,
please email help@sasview.org for the latest situation. *The
SasView Developers. September 2018.*
Definition
----------
Calculates the scattering from a **body-centered cubic lattice** with
paracrystalline distortion. Thermal vibrations are considered to be negligible,
and the size of the paracrystal is infinitely large. Paracrystalline distortion
is assumed to be isotropic and characterized by a Gaussian distribution.
The scattering intensity $I(q)$ is calculated as
.. math::
I(q) = \frac{\text{scale}}{V_p} V_\text{lattice} P(q) Z(q)
where *scale* is the volume fraction of spheres, $V_p$ is the volume of the
primary particle, $V_\text{lattice}$ is a volume correction for the crystal
structure, $P(q)$ is the form factor of the sphere (normalized), and $Z(q)$
is the paracrystalline structure factor for a body-centered cubic structure.
Equation (1) of the 1990 reference\ [#Matsuoka1990]_ is used to calculate
$Z(q)$, using equations (29)-(31) from the 1987 paper\ [#Matsuoka1987]_ for
$Z1$, $Z2$, and $Z3$.
The lattice correction (the occupied volume of the lattice) for a
body-centered cubic structure of particles of radius $R$ and nearest neighbor
separation $D$ is
.. math::
V_\text{lattice} = \frac{16\pi}{3} \frac{R^3}{\left(D\sqrt{2}\right)^3}
The distortion factor (one standard deviation) of the paracrystal is included
in the calculation of $Z(q)$
.. math::
\Delta a = g D
where $g$ is a fractional distortion based on the nearest neighbor distance.
.. figure:: img/bcc_geometry.jpg
Body-centered cubic lattice.
For a crystal, diffraction peaks appear at reduced q-values given by
.. math::
\frac{qD}{2\pi} = \sqrt{h^2 + k^2 + l^2}
where for a body-centered cubic lattice, only reflections where
$(h + k + l) = \text{even}$ are allowed and reflections where
$(h + k + l) = \text{odd}$ are forbidden. Thus the peak positions
correspond to (just the first 5)
.. math::
\begin{array}{lccccc}
q/q_o & 1 & \sqrt{2} & \sqrt{3} & \sqrt{4} & \sqrt{5} \\
\text{Indices} & (110) & (200) & (211) & (220) & (310) \\
\end{array}
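The selection rule above is easy to verify numerically. This short sketch (not part of the model code) enumerates low-order reflections with $h+k+l$ even and reproduces the peak-position ratios in the table:

```python
from itertools import product
from math import sqrt

# Enumerate low-order (h, k, l) triples and keep the BCC-allowed ones,
# i.e. those with h + k + l even.  The squared sums give the relative
# peak positions (qD/2pi)^2.
allowed = sorted(
    {h * h + k * k + l * l
     for h, k, l in product(range(4), repeat=3)
     if (h + k + l) % 2 == 0 and (h, k, l) != (0, 0, 0)}
)

# Ratios to the first (110) peak reproduce the table:
# 1, sqrt(2), sqrt(3), sqrt(4), sqrt(5).
ratios = [sqrt(s / allowed[0]) for s in allowed[:5]]
```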
.. note::
The calculation of $Z(q)$ is a double numerical integral that must be
carried out with a high density of points to properly capture the sharp
peaks of the paracrystalline scattering. So be warned that the calculation
is slow. Fitting of any experimental data must be resolution smeared for
any meaningful fit. This makes a triple integral which may be very slow.
This example dataset is produced using 200 data points,
*qmin* = 0.001 |Ang^-1|, *qmax* = 0.1 |Ang^-1| and the above default values.
The 2D (Anisotropic model) is based on the reference below where $I(q)$ is
approximated for 1d scattering. Thus the scattering pattern for 2D may not be
accurate, particularly at low $q$. For general details of the calculation and
angular dispersions for oriented particles see :ref:`orientation`. Note that
we are not responsible for any incorrectness of the 2D model computation.
.. figure:: img/parallelepiped_angle_definition.png
Orientation of the crystal with respect to the scattering plane, when
$\theta = \phi = 0$ the $c$ axis is along the beam direction (the $z$ axis).
References
----------
.. [#Matsuoka1987] Hideki Matsuoka et al. *Physical Review B*, 36 (1987)
   1754-1765 (Original Paper)
.. [#Matsuoka1990] Hideki Matsuoka et al. *Physical Review B*, 41 (1990)
   3854-3856 (Corrections to FCC and BCC lattice structure calculation)
Authorship and Verification
---------------------------
* **Author:** NIST IGOR/DANSE **Date:** pre 2010
* **Last Modified by:** Paul Butler **Date:** September 29, 2016
* **Last Reviewed by:** Richard Heenan **Date:** March 21, 2016
"""
import numpy as np
from numpy import inf, pi
name = "bcc_paracrystal"
title = "Body-centred cubic lattice with paracrystalline distortion"
description = """
Calculates the scattering from a **body-centered cubic lattice** with
paracrystalline distortion. Thermal vibrations are considered to be
negligible, and the size of the paracrystal is infinitely large.
Paracrystalline distortion is assumed to be isotropic and characterized
by a Gaussian distribution.
"""
category = "shape:paracrystal"
#note - calculation requires double precision
single = False
# pylint: disable=bad-whitespace, line-too-long
# ["name", "units", default, [lower, upper], "type","description" ],
parameters = [["dnn", "Ang", 220, [-inf, inf], "", "Nearest neighbour distance"],
["d_factor", "", 0.06, [-inf, inf], "", "Paracrystal distortion factor"],
["radius", "Ang", 40, [0, inf], "volume", "Particle radius"],
["sld", "1e-6/Ang^2", 4, [-inf, inf], "sld", "Particle scattering length density"],
["sld_solvent", "1e-6/Ang^2", 1, [-inf, inf], "sld", "Solvent scattering length density"],
["theta", "degrees", 60, [-360, 360], "orientation", "c axis to beam angle"],
["phi", "degrees", 60, [-360, 360], "orientation", "rotation about beam"],
["psi", "degrees", 60, [-360, 360], "orientation", "rotation about c axis"]
]
# pylint: enable=bad-whitespace, line-too-long
source = ["lib/sas_3j1x_x.c", "lib/gauss150.c", "lib/sphere_form.c", "bcc_paracrystal.c"]
def random():
"""Return a random parameter set for the model."""
# Define lattice spacing as a multiple of the particle radius
# using the formula a = 4 r/sqrt(3). Systems which are ordered
    # are probably mostly filled, so use a distribution on (0, 1] that
    # leaves 90% of the values within 80% of the maximum bcc packing.
    # Lattice distortion values are empirically useful between 0.01
    # and 0.7, so draw them from an exponential distribution over that
    # range for convenience.
radius = 10**np.random.uniform(1.3, 4)
d_factor = 10**np.random.uniform(-2, -0.7) # sigma_d in 0.01-0.7
dnn_fraction = np.random.beta(a=10, b=1)
dnn = radius*4/np.sqrt(3)/dnn_fraction
pars = dict(
#sld=1, sld_solvent=0, scale=1, background=1e-32,
dnn=dnn,
d_factor=d_factor,
radius=radius,
)
return pars
# april 6 2017, rkh add unit tests, NOT compared with any other calc method, assume correct!
# add 2d test later
# TODO: fix the 2d tests
q = 4.*pi/220.
tests = [
[{}, [0.001, q, 0.215268], [1.46601394721, 2.85851284174, 0.00866710287078]],
#[{'theta': 20.0, 'phi': 30, 'psi': 40.0}, (-0.017, 0.035), 2082.20264399],
#[{'theta': 20.0, 'phi': 30, 'psi': 40.0}, (-0.081, 0.011), 0.436323144781],
]
| SasView/sasmodels | sasmodels/models/bcc_paracrystal.py | Python | bsd-3-clause | 6,955 | [
"CRYSTAL",
"Gaussian"
] | 5ed97bdb0f9ee13ce004a751a803cfdcc2b6f696f137aecd238cd605cac86641 |
# Copyright (C) 2012,2013
# Max Planck Institute for Polymer Research
# Copyright (C) 2008,2009,2010,2011
# Max-Planck-Institute for Polymer Research & Fraunhofer SCAI
#
# This file is part of ESPResSo++.
#
# ESPResSo++ is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ESPResSo++ is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
r"""
********************************
espressopp.analysis.AnalysisBase
********************************
This abstract base class provides the interface and some basic
functionality for classes that do analysis or observable measurements
It provides the following methods:
.. function:: espressopp.analysis.AnalysisBase.compute()
Computes the instant value of the observable.
:rtype: a python list or a scalar
.. function:: espressopp.analysis.AnalysisBase.getAverageValue()
Returns the average value for the observable and the standard deviation.
:rtype: a python list
.. function:: espressopp.analysis.AnalysisBase.getNumberOfMeasurements()
        Counts the number of measurements that have been performed (standalone or in the integrator);
        does *not* include measurements that have been done using "compute()".
:rtype:
.. function:: espressopp.analysis.AnalysisBase.performMeasurement()
Computes the observable and updates average and standard deviation
:rtype:
.. function:: espressopp.analysis.AnalysisBase.reset()
Resets average and standard deviation
:rtype:
"""
from espressopp import pmi
from espressopp.ParticleAccess import *
from _espressopp import analysis_AnalysisBase
class AnalysisBaseLocal(ParticleAccessLocal, analysis_AnalysisBase):
def performMeasurement(self):
if not pmi._PMIComm or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
self.cxxclass.performMeasurement(self)
def reset(self):
if not pmi._PMIComm or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
self.cxxclass.reset(self)
def compute(self):
if not pmi._PMIComm or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
res = self.cxxclass.compute(self)
if len(res) > 1:
return res
else:
return res[0]
def getAverageValue(self):
if not pmi._PMIComm or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
return self.cxxclass.getAverageValue(self)
def getNumberOfMeasurements(self):
if not pmi._PMIComm or pmi._MPIcomm.rank in pmi._PMIComm.getMPIcpugroup():
return self.cxxclass.getNumberOfMeasurements(self)
if pmi.isController:
class AnalysisBase(ParticleAccess):
__metaclass__ = pmi.Proxy
pmiproxydefs = dict(
pmicall = [ "performMeasurement", "reset", "compute", "getAverageValue", "getNumberOfMeasurements" ]
)
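As the base-class docstring notes, `compute()` may return either a Python list or a scalar; `AnalysisBaseLocal.compute()` above implements that by collapsing one-element results. The rule in isolation, as a plain-Python sketch independent of the PMI machinery:

```python
def unwrap(res):
    """Collapse a one-element result list to a scalar, mirroring the
    return convention of AnalysisBaseLocal.compute() above."""
    return res if len(res) > 1 else res[0]
```

Multi-component observables (e.g. a pressure tensor) pass through unchanged, while scalar observables come back as plain numbers.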
| fedepad/espressopp | src/analysis/AnalysisBase.py | Python | gpl-3.0 | 3,322 | [
"ESPResSo"
] | 5611299eadc4f40780ceb9870ab69c9d241aa268f5fb3cd5cc0bd88e12062b4e |
"""
=======================================
Signal processing (:mod:`scipy.signal`)
=======================================
Convolution
===========
.. autosummary::
:toctree: generated/
convolve -- N-dimensional convolution.
correlate -- N-dimensional correlation.
fftconvolve -- N-dimensional convolution using the FFT.
convolve2d -- 2-dimensional convolution (more options).
correlate2d -- 2-dimensional correlation (more options).
sepfir2d -- Convolve with a 2-D separable FIR filter.
choose_conv_method -- Chooses faster of FFT and direct convolution methods.
B-splines
=========
.. autosummary::
:toctree: generated/
bspline -- B-spline basis function of order n.
cubic -- B-spline basis function of order 3.
quadratic -- B-spline basis function of order 2.
gauss_spline -- Gaussian approximation to the B-spline basis function.
cspline1d -- Coefficients for 1-D cubic (3rd order) B-spline.
qspline1d -- Coefficients for 1-D quadratic (2nd order) B-spline.
cspline2d -- Coefficients for 2-D cubic (3rd order) B-spline.
qspline2d -- Coefficients for 2-D quadratic (2nd order) B-spline.
cspline1d_eval -- Evaluate a cubic spline at the given points.
qspline1d_eval -- Evaluate a quadratic spline at the given points.
spline_filter -- Smoothing spline (cubic) filtering of a rank-2 array.
Filtering
=========
.. autosummary::
:toctree: generated/
order_filter -- N-dimensional order filter.
medfilt -- N-dimensional median filter.
medfilt2d -- 2-dimensional median filter (faster).
wiener -- N-dimensional wiener filter.
symiirorder1 -- 2nd-order IIR filter (cascade of first-order systems).
symiirorder2 -- 4th-order IIR filter (cascade of second-order systems).
lfilter -- 1-dimensional FIR and IIR digital linear filtering.
lfiltic -- Construct initial conditions for `lfilter`.
lfilter_zi -- Compute an initial state zi for the lfilter function that
-- corresponds to the steady state of the step response.
filtfilt -- A forward-backward filter.
savgol_filter -- Filter a signal using the Savitzky-Golay filter.
deconvolve -- 1-d deconvolution using lfilter.
sosfilt -- 1-dimensional IIR digital linear filtering using
-- a second-order sections filter representation.
sosfilt_zi -- Compute an initial state zi for the sosfilt function that
-- corresponds to the steady state of the step response.
sosfiltfilt -- A forward-backward filter for second-order sections.
hilbert -- Compute 1-D analytic signal, using the Hilbert transform.
hilbert2 -- Compute 2-D analytic signal, using the Hilbert transform.
decimate -- Downsample a signal.
detrend -- Remove linear and/or constant trends from data.
resample -- Resample using Fourier method.
resample_poly -- Resample using polyphase filtering method.
upfirdn -- Upsample, apply FIR filter, downsample.
Filter design
=============
.. autosummary::
:toctree: generated/
bilinear -- Digital filter from an analog filter using
-- the bilinear transform.
bilinear_zpk -- Digital filter from an analog filter using
-- the bilinear transform.
findfreqs -- Find array of frequencies for computing filter response.
firls -- FIR filter design using least-squares error minimization.
firwin -- Windowed FIR filter design, with frequency response
-- defined as pass and stop bands.
firwin2 -- Windowed FIR filter design, with arbitrary frequency
-- response.
freqs -- Analog filter frequency response from TF coefficients.
freqs_zpk -- Analog filter frequency response from ZPK coefficients.
freqz -- Digital filter frequency response from TF coefficients.
freqz_zpk -- Digital filter frequency response from ZPK coefficients.
sosfreqz -- Digital filter frequency response for SOS format filter.
group_delay -- Digital filter group delay.
iirdesign -- IIR filter design given bands and gains.
iirfilter -- IIR filter design given order and critical frequencies.
kaiser_atten -- Compute the attenuation of a Kaiser FIR filter, given
-- the number of taps and the transition width at
-- discontinuities in the frequency response.
kaiser_beta -- Compute the Kaiser parameter beta, given the desired
-- FIR filter attenuation.
kaiserord -- Design a Kaiser window to limit ripple and width of
-- transition region.
minimum_phase -- Convert a linear phase FIR filter to minimum phase.
savgol_coeffs -- Compute the FIR filter coefficients for a Savitzky-Golay
-- filter.
remez -- Optimal FIR filter design.
unique_roots -- Unique roots and their multiplicities.
residue -- Partial fraction expansion of b(s) / a(s).
residuez -- Partial fraction expansion of b(z) / a(z).
invres -- Inverse partial fraction expansion for analog filter.
invresz -- Inverse partial fraction expansion for digital filter.
BadCoefficients -- Warning on badly conditioned filter coefficients
Lower-level filter design functions:
.. autosummary::
:toctree: generated/
abcd_normalize -- Check state-space matrices and ensure they are rank-2.
band_stop_obj -- Band Stop Objective Function for order minimization.
besselap -- Return (z,p,k) for analog prototype of Bessel filter.
buttap -- Return (z,p,k) for analog prototype of Butterworth filter.
cheb1ap -- Return (z,p,k) for type I Chebyshev filter.
cheb2ap -- Return (z,p,k) for type II Chebyshev filter.
cmplx_sort -- Sort roots based on magnitude.
ellipap -- Return (z,p,k) for analog prototype of elliptic filter.
lp2bp -- Transform a lowpass filter prototype to a bandpass filter.
lp2bp_zpk -- Transform a lowpass filter prototype to a bandpass filter.
lp2bs -- Transform a lowpass filter prototype to a bandstop filter.
lp2bs_zpk -- Transform a lowpass filter prototype to a bandstop filter.
lp2hp -- Transform a lowpass filter prototype to a highpass filter.
lp2hp_zpk -- Transform a lowpass filter prototype to a highpass filter.
lp2lp -- Transform a lowpass filter prototype to a lowpass filter.
lp2lp_zpk -- Transform a lowpass filter prototype to a lowpass filter.
normalize -- Normalize polynomial representation of a transfer function.
Matlab-style IIR filter design
==============================
.. autosummary::
:toctree: generated/
butter -- Butterworth
buttord
cheby1 -- Chebyshev Type I
cheb1ord
cheby2 -- Chebyshev Type II
cheb2ord
ellip -- Elliptic (Cauer)
ellipord
   bessel -- Bessel (no order selection available -- try buttord)
iirnotch -- Design second-order IIR notch digital filter.
iirpeak -- Design second-order IIR peak (resonant) digital filter.
Continuous-Time Linear Systems
==============================
.. autosummary::
:toctree: generated/
lti -- Continuous-time linear time invariant system base class.
StateSpace -- Linear time invariant system in state space form.
TransferFunction -- Linear time invariant system in transfer function form.
ZerosPolesGain -- Linear time invariant system in zeros, poles, gain form.
lsim -- continuous-time simulation of output to linear system.
lsim2 -- like lsim, but `scipy.integrate.odeint` is used.
impulse -- impulse response of linear, time-invariant (LTI) system.
impulse2 -- like impulse, but `scipy.integrate.odeint` is used.
step -- step response of continuous-time LTI system.
step2 -- like step, but `scipy.integrate.odeint` is used.
freqresp -- frequency response of a continuous-time LTI system.
bode -- Bode magnitude and phase data (continuous-time LTI).
Discrete-Time Linear Systems
============================
.. autosummary::
:toctree: generated/
dlti -- Discrete-time linear time invariant system base class.
StateSpace -- Linear time invariant system in state space form.
TransferFunction -- Linear time invariant system in transfer function form.
ZerosPolesGain -- Linear time invariant system in zeros, poles, gain form.
dlsim -- simulation of output to a discrete-time linear system.
dimpulse -- impulse response of a discrete-time LTI system.
dstep -- step response of a discrete-time LTI system.
dfreqresp -- frequency response of a discrete-time LTI system.
dbode -- Bode magnitude and phase data (discrete-time LTI).
LTI Representations
===================
.. autosummary::
:toctree: generated/
tf2zpk -- transfer function to zero-pole-gain.
tf2sos -- transfer function to second-order sections.
tf2ss -- transfer function to state-space.
zpk2tf -- zero-pole-gain to transfer function.
zpk2sos -- zero-pole-gain to second-order sections.
zpk2ss -- zero-pole-gain to state-space.
   ss2tf -- state-space to transfer function.
ss2zpk -- state-space to pole-zero-gain.
sos2zpk -- second-order sections to zero-pole-gain.
sos2tf -- second-order sections to transfer function.
cont2discrete -- continuous-time to discrete-time LTI conversion.
place_poles -- pole placement.
Waveforms
=========
.. autosummary::
:toctree: generated/
chirp -- Frequency swept cosine signal, with several freq functions.
gausspulse -- Gaussian modulated sinusoid
max_len_seq -- Maximum length sequence
sawtooth -- Periodic sawtooth
square -- Square wave
sweep_poly -- Frequency swept cosine signal; freq is arbitrary polynomial
unit_impulse -- Discrete unit impulse
Window functions
================
For window functions, see the `scipy.signal.windows` namespace.
In the `scipy.signal` namespace, there is a convenience function to
obtain these windows by name:
.. autosummary::
:toctree: generated/
get_window -- Return a window of a given length and type.
Wavelets
========
.. autosummary::
:toctree: generated/
cascade -- compute scaling function and wavelet from coefficients
   daub -- return low-pass filter coefficients for Daubechies wavelets
morlet -- Complex Morlet wavelet.
qmf -- return quadrature mirror filter from low-pass
ricker -- return ricker wavelet
cwt -- perform continuous wavelet transform
Peak finding
============
.. autosummary::
:toctree: generated/
argrelmin -- Calculate the relative minima of data
argrelmax -- Calculate the relative maxima of data
argrelextrema -- Calculate the relative extrema of data
find_peaks -- Find a subset of peaks inside a signal.
find_peaks_cwt -- Find peaks in a 1-D array with wavelet transformation.
peak_prominences -- Calculate the prominence of each peak in a signal.
peak_widths -- Calculate the width of each peak in a signal.
Spectral Analysis
=================
.. autosummary::
:toctree: generated/
periodogram -- Compute a (modified) periodogram
welch -- Compute a periodogram using Welch's method
csd -- Compute the cross spectral density, using Welch's method
coherence -- Compute the magnitude squared coherence, using Welch's method
spectrogram -- Compute the spectrogram
lombscargle -- Computes the Lomb-Scargle periodogram
vectorstrength -- Computes the vector strength
stft -- Compute the Short Time Fourier Transform
istft -- Compute the Inverse Short Time Fourier Transform
check_COLA -- Check the COLA constraint for iSTFT reconstruction
check_NOLA -- Check the NOLA constraint for iSTFT reconstruction
"""
from __future__ import division, print_function, absolute_import
from . import sigtools, windows
from .waveforms import *
from ._max_len_seq import max_len_seq
from ._upfirdn import upfirdn
# The spline module (a C extension) provides:
# cspline2d, qspline2d, sepfir2d, symiirord1, symiirord2
from .spline import *
from .bsplines import *
from .filter_design import *
from .fir_filter_design import *
from .ltisys import *
from .lti_conversion import *
from .signaltools import *
from ._savitzky_golay import savgol_coeffs, savgol_filter
from .spectral import *
from .wavelets import *
from ._peak_finding import *
from .windows import get_window # keep this one in signal namespace
# deal with * -> windows.* doc-only soft-deprecation
deprecated_windows = ('boxcar', 'triang', 'parzen', 'bohman', 'blackman',
'nuttall', 'blackmanharris', 'flattop', 'bartlett',
'barthann', 'hamming', 'kaiser', 'gaussian',
'general_gaussian', 'chebwin', 'slepian', 'cosine',
'hann', 'exponential', 'tukey')
# backward compatibility imports for actually deprecated windows not
# in the above list
from .windows import hanning
def deco(name):
f = getattr(windows, name)
# Add deprecation to docstring
def wrapped(*args, **kwargs):
return f(*args, **kwargs)
wrapped.__name__ = name
wrapped.__module__ = 'scipy.signal'
if hasattr(f, '__qualname__'):
wrapped.__qualname__ = f.__qualname__
if f.__doc__ is not None:
lines = f.__doc__.splitlines()
for li, line in enumerate(lines):
if line.strip() == 'Parameters':
break
else:
raise RuntimeError('dev error: badly formatted doc')
spacing = ' ' * line.find('P')
lines.insert(li, ('{0}.. warning:: scipy.signal.{1} is deprecated,\n'
'{0} use scipy.signal.windows.{1} '
'instead.\n'.format(spacing, name)))
wrapped.__doc__ = '\n'.join(lines)
return wrapped
for name in deprecated_windows:
locals()[name] = deco(name)
del deprecated_windows, name, deco
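The `deco` helper above splices a deprecation warning into each window function's docstring just above its "Parameters" heading, keeping the heading's indentation. The same docstring-splicing idea reduced to a self-contained sketch; the window function and message here are illustrative, not scipy's actual shim:

```python
def add_doc_warning(f, note):
    """Wrap ``f`` and insert ``note`` into its docstring just before the
    'Parameters' heading, preserving the heading's indentation."""
    def wrapped(*args, **kwargs):
        return f(*args, **kwargs)
    wrapped.__name__ = f.__name__
    lines = f.__doc__.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == 'Parameters':
            break
    else:
        raise RuntimeError('badly formatted doc')
    spacing = ' ' * line.find('P')          # indentation of the heading
    lines.insert(i, spacing + '.. warning:: ' + note + '\n')
    wrapped.__doc__ = '\n'.join(lines)
    return wrapped


def boxcar(n):
    """Return a rectangular window of length ``n``.

    Parameters
    ----------
    n : int
        Window length.
    """
    return [1.0] * n


boxcar = add_doc_warning(boxcar, 'boxcar is deprecated here, use windows.boxcar.')
```

The wrapper forwards calls unchanged; only the rendered docstring gains the warning block, which is exactly what the soft-deprecation above relies on.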
__all__ = [s for s in dir() if not s.startswith('_')]
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester
| gertingold/scipy | scipy/signal/__init__.py | Python | bsd-3-clause | 14,599 | [
"Gaussian"
] | 3b6b513e69412fd8e2cdf9825a05c7c798512bca61253e3895a98a78c154d054 |
from __future__ import print_function
import unittest as ut
import espressomd
import numpy as np
from espressomd.electrostatics import *
from espressomd import electrostatic_extensions
@ut.skipIf(not espressomd.has_features(["ELECTROSTATICS"]),
"Features not available, skipping test!")
class ELC_vs_MMM2D_neutral(ut.TestCase):
# Handle to espresso system
system = espressomd.System(box_l=[1.0, 1.0, 1.0])
acc = 1e-6
elc_gap = 5.0
box_l = 10.0
bl2 = box_l * 0.5
system.time_step = 0.01
system.cell_system.skin = 0.1
def test_elc_vs_mmm2d(self):
elc_param_sets = {
"inert": { "gap_size": self.elc_gap, "maxPWerror": self.acc },
"dielectric": { "gap_size": self.elc_gap, "maxPWerror": self.acc, "delta_mid_bot": 0.1, "delta_mid_top": 0.9 },
"const_pot_0": { "gap_size": self.elc_gap, "maxPWerror": self.acc, "const_pot": 1, "pot_diff": 0.0},
"const_pot_1": { "gap_size": self.elc_gap, "maxPWerror": self.acc, "const_pot": 1, "pot_diff": 1.0},
"const_pot_m1": { "gap_size": self.elc_gap, "maxPWerror": self.acc, "const_pot": 1, "pot_diff": -1.0}
}
mmm2d_param_sets = {
"inert": { "prefactor": 1.0, "maxPWerror": self.acc },
"dielectric": { "prefactor": 1.0, "maxPWerror": self.acc, "dielectric_contrast_on": 1, "delta_mid_bot": 0.1, "delta_mid_top": 0.9 },
"const_pot_0": { "prefactor": 1.0, "maxPWerror": self.acc, "const_pot": 1, "pot_diff": 0.0},
"const_pot_1": { "prefactor": 1.0, "maxPWerror": self.acc, "const_pot": 1, "pot_diff": 1.0},
"const_pot_m1": { "prefactor": 1.0, "maxPWerror": self.acc, "const_pot": 1, "pot_diff": -1.0}
}
self.system.box_l = [self.box_l, self.box_l, self.box_l]
buf_node_grid = self.system.cell_system.node_grid
self.system.cell_system.set_layered(n_layers=10, use_verlet_lists = False)
self.system.periodicity = [1, 1, 0]
q=1.0
self.system.part.add(id=0, pos=(5.0, 5.0, 5.0), q=-q)
self.system.part.add(id=1, pos=(2.0, 2.0, 5.0), q=q/3.0)
self.system.part.add(id=2, pos=(2.0, 5.0, 2.0), q=q/3.0)
self.system.part.add(id=3, pos=(5.0, 2.0, 7.0), q=q/3.0)
#MMM2D
mmm2d = MMM2D(**mmm2d_param_sets["inert"])
self.system.actors.add(mmm2d)
mmm2d_res = {}
mmm2d_res["inert"] = self.scan()
mmm2d.set_params(**mmm2d_param_sets["dielectric"])
mmm2d_res["dielectric"] = self.scan()
mmm2d.set_params(**mmm2d_param_sets["const_pot_0"])
mmm2d_res["const_pot_0"] = self.scan()
mmm2d.set_params(**mmm2d_param_sets["const_pot_1"])
mmm2d_res["const_pot_1"] = self.scan()
mmm2d.set_params(**mmm2d_param_sets["const_pot_m1"])
mmm2d_res["const_pot_m1"] = self.scan()
self.system.actors.remove(mmm2d)
#ELC
self.system.box_l = [self.box_l, self.box_l, self.box_l+self.elc_gap]
self.system.cell_system.set_domain_decomposition(use_verlet_lists = True)
self.system.cell_system.node_grid = buf_node_grid
self.system.periodicity = [1, 1, 1]
p3m = P3M(prefactor=1.0, accuracy=self.acc, mesh = [16,16,24], cao = 6)
self.system.actors.add(p3m)
elc = electrostatic_extensions.ELC(**elc_param_sets["inert"])
self.system.actors.add(elc)
elc_res = {}
elc_res["inert"] = self.scan()
elc.set_params(**elc_param_sets["dielectric"])
elc_res["dielectric"] = self.scan()
elc.set_params(**elc_param_sets["const_pot_0"])
elc_res["const_pot_0"] = self.scan()
elc.set_params(**elc_param_sets["const_pot_1"])
elc_res["const_pot_1"] = self.scan()
elc.set_params(**elc_param_sets["const_pot_m1"])
elc_res["const_pot_m1"] = self.scan()
for run in elc_res:
            np.testing.assert_allclose(mmm2d_res[run], elc_res[run], rtol=0, atol=1e-4)
def scan(self):
n=10
d = 0.5
res = []
for i in range(n+1):
z= self.box_l-d - 1.0*i/n*(self.box_l-2*d)
self.system.part[0].pos = [self.bl2, self.bl2, z]
self.system.integrator.run(0)
energy = self.system.analysis.energy()
m = [z]
m.extend(self.system.part[0].f)
m.append(energy['coulomb'])
res.append(m)
return res
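The sweep in `scan` moves the charge from z = box_l − d down to z = d in n equal steps. The generated positions can be checked in isolation; this sketch is independent of ESPResSo itself:

```python
def z_sweep(box_l, d, n):
    """Return the n+1 z positions visited by scan(): a uniform walk
    from box_l - d down to d in steps of (box_l - 2*d)/n."""
    return [box_l - d - i / n * (box_l - 2 * d) for i in range(n + 1)]

# With the test's values (box_l = 10, d = 0.5, n = 10) the charge moves
# from z = 9.5 down to z = 0.5 in steps of 0.9.
zs = z_sweep(10.0, 0.5, 10)
```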
if __name__ == "__main__":
print("Features: ", espressomd.features())
ut.main()
| KonradBreitsprecher/espresso | testsuite/elc_vs_mmm2d_neutral.py | Python | gpl-3.0 | 4,613 | [
"ESPResSo"
] | e871c223284494a2438e451b6f68541219243198e5224660a46d6fd6974a7e33 |
# -*- coding: utf-8 -*-
"""
End-to-end tests for the LMS.
"""
from datetime import datetime, timedelta
from flaky import flaky
from textwrap import dedent
from unittest import skip
from nose.plugins.attrib import attr
import pytz
import urllib
from bok_choy.promise import EmptyPromise
from ..helpers import (
UniqueCourseTest,
EventsTestMixin,
load_data_str,
generate_course_key,
select_option_by_value,
element_has_text,
select_option_by_text,
get_selected_option_text
)
from ...pages.lms import BASE_URL
from ...pages.lms.account_settings import AccountSettingsPage
from ...pages.lms.auto_auth import AutoAuthPage
from ...pages.lms.create_mode import ModeCreationPage
from ...pages.common.logout import LogoutPage
from ...pages.lms.course_info import CourseInfoPage
from ...pages.lms.tab_nav import TabNavPage
from ...pages.lms.course_nav import CourseNavPage
from ...pages.lms.progress import ProgressPage
from ...pages.lms.dashboard import DashboardPage
from ...pages.lms.problem import ProblemPage
from ...pages.lms.video.video import VideoPage
from ...pages.lms.courseware import CoursewarePage
from ...pages.studio.settings import SettingsPage
from ...pages.lms.login_and_register import CombinedLoginAndRegisterPage, ResetPasswordPage
from ...pages.lms.track_selection import TrackSelectionPage
from ...pages.lms.pay_and_verify import PaymentAndVerificationFlow, FakePaymentPage
from ...pages.lms.course_wiki import CourseWikiPage, CourseWikiEditPage
from ...fixtures.course import CourseFixture, XBlockFixtureDesc, CourseUpdateDesc
@attr('shard_8')
class ForgotPasswordPageTest(UniqueCourseTest):
"""
Test that forgot password forms is rendered if url contains 'forgot-password-modal'
in hash.
"""
def setUp(self):
""" Initialize the page object """
super(ForgotPasswordPageTest, self).setUp()
self.user_info = self._create_user()
self.reset_password_page = ResetPasswordPage(self.browser)
def _create_user(self):
"""
Create a unique user
"""
auto_auth = AutoAuthPage(self.browser).visit()
user_info = auto_auth.user_info
LogoutPage(self.browser).visit()
return user_info
def test_reset_password_form_visibility(self):
# Navigate to the password reset page
self.reset_password_page.visit()
# Expect that reset password form is visible on the page
self.assertTrue(self.reset_password_page.is_form_visible())
def test_reset_password_confirmation_box_visibility(self):
# Navigate to the password reset page
self.reset_password_page.visit()
# Navigate to the password reset form and try to submit it
self.reset_password_page.fill_password_reset_form(self.user_info['email'])
self.reset_password_page.is_success_visible(".submission-success")
# Expect that we're shown a success message
self.assertIn("Password Reset Email Sent", self.reset_password_page.get_success_message())
@attr('shard_8')
class LoginFromCombinedPageTest(UniqueCourseTest):
"""Test that we can log in using the combined login/registration page.
Also test that we can request a password reset from the combined
login/registration page.
"""
def setUp(self):
"""Initialize the page objects and create a test course. """
super(LoginFromCombinedPageTest, self).setUp()
self.login_page = CombinedLoginAndRegisterPage(
self.browser,
start_page="login",
course_id=self.course_id
)
self.dashboard_page = DashboardPage(self.browser)
# Create a course to enroll in
CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
).install()
def test_login_success(self):
# Create a user account
email, password = self._create_unique_user()
# Navigate to the login page and try to log in
self.login_page.visit().login(email=email, password=password)
# Expect that we reach the dashboard and we're auto-enrolled in the course
course_names = self.dashboard_page.wait_for_page().available_courses
self.assertIn(self.course_info["display_name"], course_names)
def test_login_failure(self):
# Navigate to the login page
self.login_page.visit()
# User account does not exist
self.login_page.login(email="nobody@nowhere.com", password="password")
# Verify that an error is displayed
self.assertIn("Email or password is incorrect.", self.login_page.wait_for_errors())
def test_toggle_to_register_form(self):
self.login_page.visit().toggle_form()
self.assertEqual(self.login_page.current_form, "register")
@flaky # ECOM-1165
def test_password_reset_success(self):
# Create a user account
email, password = self._create_unique_user() # pylint: disable=unused-variable
# Navigate to the password reset form and try to submit it
self.login_page.visit().password_reset(email=email)
# Expect that we're shown a success message
self.assertIn("Password Reset Email Sent", self.login_page.wait_for_success())
def test_password_reset_failure(self):
# Navigate to the password reset form
self.login_page.visit()
# User account does not exist
self.login_page.password_reset(email="nobody@nowhere.com")
# Expect that we're shown a failure message
self.assertIn(
"No user with the provided email address exists.",
self.login_page.wait_for_errors()
)
def test_third_party_login(self):
"""
Test that we can login using third party credentials, and that the
third party account gets linked to the edX account.
"""
# Create a user account
email, password = self._create_unique_user()
# Navigate to the login page
self.login_page.visit()
# Baseline screen-shots are different for chrome and firefox.
self.assertScreenshot('#login .login-providers', 'login-providers-{}'.format(self.browser.name))
# Try to log in using "Dummy" provider
self.login_page.click_third_party_dummy_provider()
# The user will be redirected somewhere and then back to the login page:
msg_text = self.login_page.wait_for_auth_status_message()
self.assertIn("You have successfully signed into Dummy", msg_text)
self.assertIn("To link your accounts, sign in now using your edX password", msg_text)
# Now login with username and password:
self.login_page.login(email=email, password=password)
# Expect that we reach the dashboard and we're auto-enrolled in the course
course_names = self.dashboard_page.wait_for_page().available_courses
self.assertIn(self.course_info["display_name"], course_names)
try:
# Now logout and check that we can log back in instantly (because the account is linked):
LogoutPage(self.browser).visit()
self.login_page.visit()
self.login_page.click_third_party_dummy_provider()
self.dashboard_page.wait_for_page()
finally:
self._unlink_dummy_account()
def test_hinted_login(self):
""" Test the login page when coming from course URL that specified which third party provider to use """
# Create a user account and link it to third party auth with the dummy provider:
AutoAuthPage(self.browser, course_id=self.course_id).visit()
self._link_dummy_account()
try:
LogoutPage(self.browser).visit()
# When not logged in, try to load a course URL that includes the provider hint ?tpa_hint=...
course_page = CoursewarePage(self.browser, self.course_id)
self.browser.get(course_page.url + '?tpa_hint=oa2-dummy')
# We should now be redirected to the login page
self.login_page.wait_for_page()
self.assertIn(
"Would you like to sign in using your Dummy credentials?",
self.login_page.hinted_login_prompt
)
# Baseline screenshots are different for Chrome and Firefox.
self.assertScreenshot('#hinted-login-form', 'hinted-login-{}'.format(self.browser.name))
self.login_page.click_third_party_dummy_provider()
# We should now be redirected to the course page
course_page.wait_for_page()
finally:
self._unlink_dummy_account()
def _link_dummy_account(self):
""" Go to Account Settings page and link the user's account to the Dummy provider """
account_settings = AccountSettingsPage(self.browser).visit()
# switch to "Linked Accounts" tab
account_settings.switch_account_settings_tabs('accounts-tab')
field_id = "auth-oa2-dummy"
account_settings.wait_for_field(field_id)
self.assertEqual("Link Your Account", account_settings.link_title_for_link_field(field_id))
account_settings.click_on_link_in_link_field(field_id)
# make sure we are on "Linked Accounts" tab after the account settings
# page is reloaded
account_settings.switch_account_settings_tabs('accounts-tab')
account_settings.wait_for_link_title_for_link_field(field_id, "Unlink This Account")
def _unlink_dummy_account(self):
""" Verify that the 'Dummy' third party auth provider is linked, then unlink it """
# This must be done after linking the account, or we'll get cross-test side effects
account_settings = AccountSettingsPage(self.browser).visit()
# switch to "Linked Accounts" tab
account_settings.switch_account_settings_tabs('accounts-tab')
field_id = "auth-oa2-dummy"
account_settings.wait_for_field(field_id)
self.assertEqual("Unlink This Account", account_settings.link_title_for_link_field(field_id))
account_settings.click_on_link_in_link_field(field_id)
account_settings.wait_for_message(field_id, "Successfully unlinked")
def _create_unique_user(self):
"""
Create a new user with a unique name and email.
"""
username = "test_{uuid}".format(uuid=self.unique_id[0:6])
email = "{user}@example.com".format(user=username)
password = "password"
# Create the user (automatically logs us in)
AutoAuthPage(
self.browser,
username=username,
email=email,
password=password
).visit()
# Log out
LogoutPage(self.browser).visit()
return (email, password)


@attr('shard_8')
class RegisterFromCombinedPageTest(UniqueCourseTest):
"""Test that we can register a new user from the combined login/registration page. """
def setUp(self):
"""Initialize the page objects and create a test course. """
super(RegisterFromCombinedPageTest, self).setUp()
self.register_page = CombinedLoginAndRegisterPage(
self.browser,
start_page="register",
course_id=self.course_id
)
self.dashboard_page = DashboardPage(self.browser)
# Create a course to enroll in
CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
).install()
def test_register_success(self):
# Navigate to the registration page
self.register_page.visit()
# Fill in the form and submit it
username = "test_{uuid}".format(uuid=self.unique_id[0:6])
email = "{user}@example.com".format(user=username)
self.register_page.register(
email=email,
password="password",
username=username,
full_name="Test User",
country="US",
favorite_movie="Mad Max: Fury Road",
terms_of_service=True
)
# Expect that we reach the dashboard and we're auto-enrolled in the course
course_names = self.dashboard_page.wait_for_page().available_courses
self.assertIn(self.course_info["display_name"], course_names)
def test_register_failure(self):
# Navigate to the registration page
self.register_page.visit()
# Enter a blank for the username field, which is required
# Don't agree to the terms of service / honor code.
# Don't specify a country code, which is required.
# Don't specify a favorite movie.
username = "test_{uuid}".format(uuid=self.unique_id[0:6])
email = "{user}@example.com".format(user=username)
self.register_page.register(
email=email,
password="password",
username="",
full_name="Test User",
terms_of_service=False
)
# Verify that the expected errors are displayed.
errors = self.register_page.wait_for_errors()
self.assertIn(u'Please enter your Public username.', errors)
self.assertIn(u'You must agree to the edX Terms of Service and Honor Code.', errors)
self.assertIn(u'Please select your Country.', errors)
self.assertIn(u'Please tell us your favorite movie.', errors)
def test_toggle_to_login_form(self):
self.register_page.visit().toggle_form()
self.assertEqual(self.register_page.current_form, "login")
def test_third_party_register(self):
"""
Test that we can register using third-party credentials, and that the
third-party account gets linked to the edX account.
"""
# Navigate to the register page
self.register_page.visit()
# Baseline screenshots are different for Chrome and Firefox.
self.assertScreenshot('#register .login-providers', 'register-providers-{}'.format(self.browser.name))
# Try to authenticate using the "Dummy" provider
self.register_page.click_third_party_dummy_provider()
# The user will be redirected somewhere and then back to the register page:
msg_text = self.register_page.wait_for_auth_status_message()
self.assertEqual(self.register_page.current_form, "register")
self.assertIn("You've successfully signed into Dummy", msg_text)
self.assertIn("We just need a little more information", msg_text)
# Now the form should be pre-filled with the data from the Dummy provider:
self.assertEqual(self.register_page.email_value, "adama@fleet.colonies.gov")
self.assertEqual(self.register_page.full_name_value, "William Adama")
self.assertIn("Galactica1", self.register_page.username_value)
# Set country, accept the terms, and submit the form:
self.register_page.register(country="US", favorite_movie="Battlestar Galactica", terms_of_service=True)
# Expect that we reach the dashboard and we're auto-enrolled in the course
course_names = self.dashboard_page.wait_for_page().available_courses
self.assertIn(self.course_info["display_name"], course_names)
# Now logout and check that we can log back in instantly (because the account is linked):
LogoutPage(self.browser).visit()
login_page = CombinedLoginAndRegisterPage(self.browser, start_page="login")
login_page.visit()
login_page.click_third_party_dummy_provider()
self.dashboard_page.wait_for_page()
# Now unlink the account (To test the account settings view and also to prevent cross-test side effects)
account_settings = AccountSettingsPage(self.browser).visit()
# switch to "Linked Accounts" tab
account_settings.switch_account_settings_tabs('accounts-tab')
field_id = "auth-oa2-dummy"
account_settings.wait_for_field(field_id)
self.assertEqual("Unlink This Account", account_settings.link_title_for_link_field(field_id))
account_settings.click_on_link_in_link_field(field_id)
account_settings.wait_for_message(field_id, "Successfully unlinked")


@attr('shard_8')
class PayAndVerifyTest(EventsTestMixin, UniqueCourseTest):
"""Test that we can proceed through the payment and verification flow."""
def setUp(self):
"""Initialize the test.
Create the necessary page objects, create a test course and configure its modes,
create a user and log them in.
"""
super(PayAndVerifyTest, self).setUp()
self.track_selection_page = TrackSelectionPage(self.browser, self.course_id)
self.payment_and_verification_flow = PaymentAndVerificationFlow(self.browser, self.course_id)
self.immediate_verification_page = PaymentAndVerificationFlow(
self.browser, self.course_id, entry_point='verify-now'
)
self.upgrade_page = PaymentAndVerificationFlow(self.browser, self.course_id, entry_point='upgrade')
self.fake_payment_page = FakePaymentPage(self.browser, self.course_id)
self.dashboard_page = DashboardPage(self.browser)
# Create a course
CourseFixture(
self.course_info['org'],
self.course_info['number'],
self.course_info['run'],
self.course_info['display_name']
).install()
# Add an honor mode to the course
ModeCreationPage(self.browser, self.course_id).visit()
# Add a verified mode to the course
ModeCreationPage(
self.browser,
self.course_id,
mode_slug=u'verified',
mode_display_name=u'Verified Certificate',
min_price=10,
suggested_prices='10,20'
).visit()
@skip("Flaky 02/02/2015")
def test_immediate_verification_enrollment(self):
# Create a user and log them in
student_id = AutoAuthPage(self.browser).visit().get_user_id()
# Navigate to the track selection page
self.track_selection_page.visit()
# Enter the payment and verification flow by choosing to enroll as verified
self.track_selection_page.enroll('verified')
# Proceed to the fake payment page
self.payment_and_verification_flow.proceed_to_payment()
# Submit payment
self.fake_payment_page.submit_payment()
# Proceed to verification
self.payment_and_verification_flow.immediate_verification()
# Take face photo and proceed to the ID photo step
self.payment_and_verification_flow.webcam_capture()
self.payment_and_verification_flow.next_verification_step(self.immediate_verification_page)
# Take ID photo and proceed to the review photos step
self.payment_and_verification_flow.webcam_capture()
self.payment_and_verification_flow.next_verification_step(self.immediate_verification_page)
# Submit photos and proceed to the enrollment confirmation step
self.payment_and_verification_flow.next_verification_step(self.immediate_verification_page)
# Navigate to the dashboard
self.dashboard_page.visit()
# Expect that we're enrolled as verified in the course
enrollment_mode = self.dashboard_page.get_enrollment_mode(self.course_info["display_name"])
self.assertEqual(enrollment_mode, 'verified')
def test_deferred_verification_enrollment(self):
# Create a user and log them in
student_id = AutoAuthPage(self.browser).visit().get_user_id()
# Navigate to the track selection page
self.track_selection_page.visit()
# Enter the payment and verification flow by choosing to enroll as verified
self.track_selection_page.enroll('verified')
# Proceed to the fake payment page
self.payment_and_verification_flow.proceed_to_payment()
# Submit payment
self.fake_payment_page.submit_payment()
# Navigate to the dashboard
self.dashboard_page.visit()
# Expect that we're enrolled as verified in the course
enrollment_mode = self.dashboard_page.get_enrollment_mode(self.course_info["display_name"])
self.assertEqual(enrollment_mode, 'verified')
def test_enrollment_upgrade(self):
# Create a user, log them in, and enroll them in the honor mode
student_id = AutoAuthPage(self.browser, course_id=self.course_id).visit().get_user_id()
# Navigate to the dashboard
self.dashboard_page.visit()
# Expect that we're enrolled as honor in the course
enrollment_mode = self.dashboard_page.get_enrollment_mode(self.course_info["display_name"])
self.assertEqual(enrollment_mode, 'honor')
# Click the upsell button on the dashboard
self.dashboard_page.upgrade_enrollment(self.course_info["display_name"], self.upgrade_page)
# Select the first contribution option appearing on the page
self.upgrade_page.indicate_contribution()
# Proceed to the fake payment page
self.upgrade_page.proceed_to_payment()
def only_enrollment_events(event):
"""Filter out all non-enrollment events."""
return event['event_type'].startswith('edx.course.enrollment.')
expected_events = [
{
'event_type': 'edx.course.enrollment.mode_changed',
'event': {
'user_id': int(student_id),
'mode': 'verified',
}
}
]
with self.assert_events_match_during(event_filter=only_enrollment_events, expected_events=expected_events):
# Submit payment
self.fake_payment_page.submit_payment()
# Navigate to the dashboard
self.dashboard_page.visit()
# Expect that we're enrolled as verified in the course
enrollment_mode = self.dashboard_page.get_enrollment_mode(self.course_info["display_name"])
self.assertEqual(enrollment_mode, 'verified')


@attr('shard_1')
class CourseWikiTest(UniqueCourseTest):
"""
Tests that verify the course wiki.
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(CourseWikiTest, self).setUp()
# self.course_info['number'] must be shorter since we are accessing the wiki. See TNL-1751
self.course_info['number'] = self.unique_id[0:6]
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
self.course_wiki_page = CourseWikiPage(self.browser, self.course_id)
self.course_wiki_edit_page = CourseWikiEditPage(self.browser, self.course_id, self.course_info)
self.tab_nav = TabNavPage(self.browser)
CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
).install()
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
# Access course wiki page
self.course_info_page.visit()
self.tab_nav.go_to_tab('Wiki')
def _open_editor(self):
self.course_wiki_page.open_editor()
self.course_wiki_edit_page.wait_for_page()
def test_edit_course_wiki(self):
"""
The wiki page is editable by students by default.
After accessing the course wiki, replace the content of the default
page and confirm that the new content has been saved.
"""
content = "hello"
self._open_editor()
self.course_wiki_edit_page.replace_wiki_content(content)
self.course_wiki_edit_page.save_wiki_content()
actual_content = unicode(self.course_wiki_page.q(css='.wiki-article p').text[0])
self.assertEqual(content, actual_content)


@attr('shard_1')
class HighLevelTabTest(UniqueCourseTest):
"""
Tests that verify each of the high-level tabs available within a course.
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(HighLevelTabTest, self).setUp()
# self.course_info['number'] must be shorter since we are accessing the wiki. See TNL-1751
self.course_info['number'] = self.unique_id[0:6]
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
self.progress_page = ProgressPage(self.browser, self.course_id)
self.course_nav = CourseNavPage(self.browser)
self.tab_nav = TabNavPage(self.browser)
self.video = VideoPage(self.browser)
# Install a course with sections/problems, tabs, updates, and handouts
course_fix = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
course_fix.add_update(
CourseUpdateDesc(date='January 29, 2014', content='Test course update1')
)
course_fix.add_handout('demoPDF.pdf')
course_fix.add_children(
XBlockFixtureDesc('static_tab', 'Test Static Tab', data=r"static tab data with mathjax \(E=mc^2\)"),
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('problem', 'Test Problem 1', data=load_data_str('multiple_choice.xml')),
XBlockFixtureDesc('problem', 'Test Problem 2', data=load_data_str('formula_problem.xml')),
XBlockFixtureDesc('html', 'Test HTML'),
)
),
XBlockFixtureDesc('chapter', 'Test Section 2').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection 2'),
XBlockFixtureDesc('sequential', 'Test Subsection 3'),
)
).install()
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_course_info(self):
"""
Navigate to the course info page.
"""
# Navigate to the course info page from the progress page
self.progress_page.visit()
self.tab_nav.go_to_tab('Home')
# Expect just one update
self.assertEqual(self.course_info_page.num_updates, 1)
# Expect a link to the demo handout pdf
handout_links = self.course_info_page.handout_links
self.assertEqual(len(handout_links), 1)
self.assertIn('demoPDF.pdf', handout_links[0])
def test_progress(self):
"""
Navigate to the progress page.
"""
# Navigate to the progress page from the info page
self.course_info_page.visit()
self.tab_nav.go_to_tab('Progress')
# We haven't answered any problems yet, so all scores should be zero.
# Only problems have scores, so there should be two scores.
CHAPTER = 'Test Section'
SECTION = 'Test Subsection'
EXPECTED_SCORES = [(0, 3), (0, 1)]
actual_scores = self.progress_page.scores(CHAPTER, SECTION)
self.assertEqual(actual_scores, EXPECTED_SCORES)
def test_static_tab(self):
"""
Navigate to a static tab (course content)
"""
# From the course info page, navigate to the static tab
self.course_info_page.visit()
self.tab_nav.go_to_tab('Test Static Tab')
self.assertTrue(self.tab_nav.is_on_tab('Test Static Tab'))
def test_static_tab_with_mathjax(self):
"""
Navigate to a static tab (course content) and verify that MathJax renders.
"""
# From the course info page, navigate to the static tab
self.course_info_page.visit()
self.tab_nav.go_to_tab('Test Static Tab')
self.assertTrue(self.tab_nav.is_on_tab('Test Static Tab'))
# Verify that Mathjax has rendered
self.tab_nav.mathjax_has_rendered()
def test_wiki_tab_first_time(self):
"""
Navigate to the course wiki tab. When the wiki is accessed for
the first time, it is created on the fly.
"""
course_wiki = CourseWikiPage(self.browser, self.course_id)
# From the course info page, navigate to the wiki tab
self.course_info_page.visit()
self.tab_nav.go_to_tab('Wiki')
self.assertTrue(self.tab_nav.is_on_tab('Wiki'))
# Assert that a default wiki is created
expected_article_name = "{org}.{course_number}.{course_run}".format(
org=self.course_info['org'],
course_number=self.course_info['number'],
course_run=self.course_info['run']
)
self.assertEqual(expected_article_name, course_wiki.article_name)
def test_courseware_nav(self):
"""
Navigate to a particular unit in the course.
"""
# Navigate to the course page from the info page
self.course_info_page.visit()
self.tab_nav.go_to_tab('Course')
# Check that the course navigation appears correctly
EXPECTED_SECTIONS = {
'Test Section': ['Test Subsection'],
'Test Section 2': ['Test Subsection 2', 'Test Subsection 3']
}
actual_sections = self.course_nav.sections
for section, subsections in EXPECTED_SECTIONS.iteritems():
self.assertIn(section, actual_sections)
self.assertEqual(actual_sections[section], subsections)
# Navigate to a particular section
self.course_nav.go_to_section('Test Section', 'Test Subsection')
# Check the sequence items
EXPECTED_ITEMS = ['Test Problem 1', 'Test Problem 2', 'Test HTML']
actual_items = self.course_nav.sequence_items
self.assertEqual(len(actual_items), len(EXPECTED_ITEMS))
for expected in EXPECTED_ITEMS:
self.assertIn(expected, actual_items)


@attr('shard_1')
class PDFTextBooksTabTest(UniqueCourseTest):
"""
Tests that verify each of the textbook tabs available within a course.
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(PDFTextBooksTabTest, self).setUp()
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
self.tab_nav = TabNavPage(self.browser)
# Install a course with TextBooks
course_fix = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
# Add PDF textbooks to course fixture.
for i in range(1, 3):
course_fix.add_textbook("PDF Book {}".format(i), [{"title": "Chapter Of Book {}".format(i), "url": ""}])
course_fix.install()
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_verify_textbook_tabs(self):
"""
Test that multiple PDF textbooks load correctly in the LMS.
"""
self.course_info_page.visit()
# Visit each PDF textbook tab; the test fails if the correct tab does not load.
for i in range(1, 3):
self.tab_nav.go_to_tab("PDF Book {}".format(i))
@attr('shard_1')
class VisibleToStaffOnlyTest(UniqueCourseTest):
"""
Tests that content with visible_to_staff_only set to True cannot be viewed by students.
"""
def setUp(self):
super(VisibleToStaffOnlyTest, self).setUp()
course_fix = CourseFixture(
self.course_info['org'],
self.course_info['number'],
self.course_info['run'],
self.course_info['display_name']
)
course_fix.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Subsection With Locked Unit').add_children(
XBlockFixtureDesc('vertical', 'Locked Unit', metadata={'visible_to_staff_only': True}).add_children(
XBlockFixtureDesc('html', 'Html Child in locked unit', data="<html>Visible only to staff</html>"),
),
XBlockFixtureDesc('vertical', 'Unlocked Unit').add_children(
XBlockFixtureDesc('html', 'Html Child in unlocked unit', data="<html>Visible only to all</html>"),
)
),
XBlockFixtureDesc('sequential', 'Unlocked Subsection').add_children(
XBlockFixtureDesc('vertical', 'Test Unit').add_children(
XBlockFixtureDesc('html', 'Html Child in visible unit', data="<html>Visible to all</html>"),
)
),
XBlockFixtureDesc('sequential', 'Locked Subsection', metadata={'visible_to_staff_only': True}).add_children(
XBlockFixtureDesc('vertical', 'Test Unit').add_children(
XBlockFixtureDesc(
'html', 'Html Child in locked subsection', data="<html>Visible only to staff</html>"
)
)
)
)
).install()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
self.course_nav = CourseNavPage(self.browser)
def test_visible_to_staff(self):
"""
Scenario: All content is visible for a user marked is_staff (different from course staff)
Given some of the course content has been marked 'visible_to_staff_only'
And I am logged on with an account marked 'is_staff'
Then I can see all course content
"""
AutoAuthPage(self.browser, username="STAFF_TESTER", email="johndoe_staff@example.com",
course_id=self.course_id, staff=True).visit()
self.courseware_page.visit()
self.assertEqual(3, len(self.course_nav.sections['Test Section']))
self.course_nav.go_to_section("Test Section", "Subsection With Locked Unit")
self.assertEqual([u'Locked Unit', u'Unlocked Unit'], self.course_nav.sequence_items)
self.course_nav.go_to_section("Test Section", "Unlocked Subsection")
self.assertEqual([u'Test Unit'], self.course_nav.sequence_items)
self.course_nav.go_to_section("Test Section", "Locked Subsection")
self.assertEqual([u'Test Unit'], self.course_nav.sequence_items)
def test_visible_to_student(self):
"""
Scenario: Content marked 'visible_to_staff_only' is not visible for students in the course
Given some of the course content has been marked 'visible_to_staff_only'
And I am logged on with an authorized student account
Then I can only see content without 'visible_to_staff_only' set to True
"""
AutoAuthPage(self.browser, username="STUDENT_TESTER", email="johndoe_student@example.com",
course_id=self.course_id, staff=False).visit()
self.courseware_page.visit()
self.assertEqual(2, len(self.course_nav.sections['Test Section']))
self.course_nav.go_to_section("Test Section", "Subsection With Locked Unit")
self.assertEqual([u'Unlocked Unit'], self.course_nav.sequence_items)
self.course_nav.go_to_section("Test Section", "Unlocked Subsection")
self.assertEqual([u'Test Unit'], self.course_nav.sequence_items)


@attr('shard_1')
class TooltipTest(UniqueCourseTest):
"""
Tests that tooltips are displayed
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(TooltipTest, self).setUp()
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
self.tab_nav = TabNavPage(self.browser)
course_fix = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
course_fix.add_children(
XBlockFixtureDesc('static_tab', 'Test Static Tab'),
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('problem', 'Test Problem 1', data=load_data_str('multiple_choice.xml')),
XBlockFixtureDesc('problem', 'Test Problem 2', data=load_data_str('formula_problem.xml')),
XBlockFixtureDesc('html', 'Test HTML'),
)
)
).install()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_tooltip(self):
"""
Verify that tooltips are displayed when you hover over the sequence nav bar.
"""
self.course_info_page.visit()
self.tab_nav.go_to_tab('Course')
self.courseware_page.verify_tooltips_displayed()


@attr('shard_1')
class PreRequisiteCourseTest(UniqueCourseTest):
"""
Tests that pre-requisite course messages are displayed
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(PreRequisiteCourseTest, self).setUp()
CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
).install()
self.prc_info = {
'org': 'test_org',
'number': self.unique_id,
'run': 'prc_test_run',
'display_name': 'PR Test Course' + self.unique_id
}
CourseFixture(
self.prc_info['org'], self.prc_info['number'],
self.prc_info['run'], self.prc_info['display_name']
).install()
pre_requisite_course_key = generate_course_key(
self.prc_info['org'],
self.prc_info['number'],
self.prc_info['run']
)
self.pre_requisite_course_id = unicode(pre_requisite_course_key)
self.dashboard_page = DashboardPage(self.browser)
self.settings_page = SettingsPage(
self.browser,
self.course_info['org'],
self.course_info['number'],
self.course_info['run']
)
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_dashboard_message(self):
"""
Scenario: For any course with a pre-requisite course set, the student dashboard should show
appropriate messaging.
Given that I am on the student dashboard
When I view a course with a pre-requisite course set
Then at the bottom of the course I should see a course requirements message.
"""
# Visit the dashboard page and make sure there is no pre-requisite course message.
self.dashboard_page.visit()
self.assertFalse(self.dashboard_page.pre_requisite_message_displayed())
# Logout and login as a staff.
LogoutPage(self.browser).visit()
AutoAuthPage(self.browser, course_id=self.course_id, staff=True).visit()
# visit course settings page and set pre-requisite course
self.settings_page.visit()
self._set_pre_requisite_course()
# Logout and login as a student.
LogoutPage(self.browser).visit()
AutoAuthPage(self.browser, course_id=self.course_id, staff=False).visit()
# Visit the dashboard page again; now it should show the pre-requisite course message.
self.dashboard_page.visit()
EmptyPromise(lambda: len(self.dashboard_page.available_courses) > 0, 'Dashboard page loaded').fulfill()
self.assertTrue(self.dashboard_page.pre_requisite_message_displayed())
def _set_pre_requisite_course(self):
"""
Set the pre-requisite course for the current course.
"""
select_option_by_value(self.settings_page.pre_requisite_course_options, self.pre_requisite_course_id)
self.settings_page.save_changes()


@attr('shard_1')
class ProblemExecutionTest(UniqueCourseTest):
"""
Tests of problems.
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(ProblemExecutionTest, self).setUp()
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
self.course_nav = CourseNavPage(self.browser)
self.tab_nav = TabNavPage(self.browser)
# Install a course with sections and problems.
course_fix = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
course_fix.add_asset(['python_lib.zip'])
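# A hypothetical sketch of the `number_helpers` module assumed to be packaged
# in python_lib.zip (the real contents live in the zip asset, not here). The
# sketch is consistent with the problem below, which renders "the sum of 17
# and 3" and accepts the answer 20 because int(ans) == fortytwo(-22):
#
#     def seventeen():
#         return 17
#
#     def fortytwo(delta):
#         return 42 + delta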
course_fix.add_children(
XBlockFixtureDesc('chapter', 'Test Section').add_children(
XBlockFixtureDesc('sequential', 'Test Subsection').add_children(
XBlockFixtureDesc('problem', 'Python Problem', data=dedent(
"""\
<problem>
<script type="loncapa/python">
from number_helpers import seventeen, fortytwo
oneseven = seventeen()
def check_function(expect, ans):
if int(ans) == fortytwo(-22):
return True
else:
return False
</script>
<p>What is the sum of $oneseven and 3?</p>
<customresponse expect="20" cfn="check_function">
<textline/>
</customresponse>
</problem>
"""
))
)
)
).install()
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_python_execution_in_problem(self):
# Navigate to the problem page
self.course_info_page.visit()
self.tab_nav.go_to_tab('Course')
self.course_nav.go_to_section('Test Section', 'Test Subsection')
problem_page = ProblemPage(self.browser)
self.assertEqual(problem_page.problem_name.upper(), 'PYTHON PROBLEM')
# Does the page have computation results?
self.assertIn("What is the sum of 17 and 3?", problem_page.problem_text)
# Fill in the answer correctly.
problem_page.fill_answer("20")
problem_page.click_check()
self.assertTrue(problem_page.is_correct())
# Fill in the answer incorrectly.
problem_page.fill_answer("4")
problem_page.click_check()
self.assertFalse(problem_page.is_correct())


@attr('shard_1')
class EntranceExamTest(UniqueCourseTest):
"""
Tests that course has an entrance exam.
"""
def setUp(self):
"""
Initialize pages and install a course fixture.
"""
super(EntranceExamTest, self).setUp()
CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
).install()
self.courseware_page = CoursewarePage(self.browser, self.course_id)
self.settings_page = SettingsPage(
self.browser,
self.course_info['org'],
self.course_info['number'],
self.course_info['run']
)
# Auto-auth register for the course
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_entrance_exam_section(self):
"""
Scenario: Any course with an entrance exam enabled should have an "Entrance Exam" chapter on the
course page.
Given that I am on the course page
When I view the course that has an entrance exam
Then there should be an "Entrance Exam" chapter.
"""
entrance_exam_link_selector = '.accordion .course-navigation .chapter .group-heading'
# Visit the course page and make sure there is no entrance exam chapter.
self.courseware_page.visit()
self.courseware_page.wait_for_page()
self.assertFalse(element_has_text(
page=self.courseware_page,
css_selector=entrance_exam_link_selector,
text='Entrance Exam'
))
# Logout and login as a staff.
LogoutPage(self.browser).visit()
AutoAuthPage(self.browser, course_id=self.course_id, staff=True).visit()
# Visit the course settings page and enable the entrance exam for that course.
self.settings_page.visit()
self.settings_page.wait_for_page()
self.assertTrue(self.settings_page.is_browser_on_page())
self.settings_page.entrance_exam_field.click()
self.settings_page.save_changes()
# Logout and login as a student.
LogoutPage(self.browser).visit()
AutoAuthPage(self.browser, course_id=self.course_id, staff=False).visit()
# Visit the course page and make sure there is an "Entrance Exam" section.
self.courseware_page.visit()
self.courseware_page.wait_for_page()
self.assertTrue(element_has_text(
page=self.courseware_page,
css_selector=entrance_exam_link_selector,
text='Entrance Exam'
))


@attr('shard_1')
class NotLiveRedirectTest(UniqueCourseTest):
"""
Test that a banner is shown when the user is redirected to
the dashboard from a non-live course.
"""
def setUp(self):
"""Create a course that isn't live yet and enroll for it."""
super(NotLiveRedirectTest, self).setUp()
CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name'],
start_date=datetime(year=2099, month=1, day=1)
).install()
AutoAuthPage(self.browser, course_id=self.course_id).visit()
def test_redirect_banner(self):
"""
Navigate to the course info page, then check that we're on the
dashboard page with the appropriate message.
"""
url = BASE_URL + "/courses/" + self.course_id + "/info"
self.browser.get(url)
page = DashboardPage(self.browser)
page.wait_for_page()
self.assertIn(
'The course you are looking for does not start until',
page.banner_text
)


@attr('shard_1')
class EnrollmentClosedRedirectTest(UniqueCourseTest):
"""
Test that a banner is shown when the user is redirected to the
dashboard after trying to view the track selection page for a
course after enrollment has ended.
"""
def setUp(self):
"""Create a course that is closed for enrollment, and sign in as a user."""
super(EnrollmentClosedRedirectTest, self).setUp()
course = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
now = datetime.now(pytz.UTC)
course.add_course_details({
'enrollment_start': (now - timedelta(days=30)).isoformat(),
'enrollment_end': (now - timedelta(days=1)).isoformat()
})
course.install()
# Add an honor mode to the course
ModeCreationPage(self.browser, self.course_id).visit()
# Add a verified mode to the course
ModeCreationPage(
self.browser,
self.course_id,
mode_slug=u'verified',
mode_display_name=u'Verified Certificate',
min_price=10,
suggested_prices='10,20'
).visit()

    def _assert_dashboard_message(self):
"""
Assert that the 'closed for enrollment' text is present on the
dashboard.
"""
page = DashboardPage(self.browser)
page.wait_for_page()
self.assertIn(
'The course you are looking for is closed for enrollment',
page.banner_text
)

    def test_redirect_banner(self):
"""
Navigate to the course info page, then check that we're on the
dashboard page with the appropriate message.
"""
AutoAuthPage(self.browser).visit()
url = BASE_URL + "/course_modes/choose/" + self.course_id
self.browser.get(url)
self._assert_dashboard_message()

    def test_login_redirect(self):
"""
Test that the user is correctly redirected after logistration when
attempting to enroll in a closed course.
"""
url = '{base_url}/register?{params}'.format(
base_url=BASE_URL,
params=urllib.urlencode({
'course_id': self.course_id,
'enrollment_action': 'enroll',
'email_opt_in': 'false'
})
)
self.browser.get(url)
register_page = CombinedLoginAndRegisterPage(
self.browser,
start_page="register",
course_id=self.course_id
)
register_page.wait_for_page()
register_page.register(
email="email@example.com",
password="password",
username="username",
full_name="Test User",
country="US",
favorite_movie="Mad Max: Fury Road",
terms_of_service=True
)
self._assert_dashboard_message()


@attr('shard_1')
class LMSLanguageTest(UniqueCourseTest):
""" Test suite for the LMS Language """

    def setUp(self):
super(LMSLanguageTest, self).setUp()
self.dashboard_page = DashboardPage(self.browser)
self.account_settings = AccountSettingsPage(self.browser)
AutoAuthPage(self.browser).visit()

    def test_lms_language_change(self):
"""
        Scenario: Ensure that language selection works correctly.
        First I go to the user dashboard page in the LMS and see that 'English' is selected by default.
        Then I choose 'Dummy Language' from the drop down at the top of the page.
        Then I visit the student account settings page and can see that the language has been updated
        to 'Dummy Language' in both drop downs.
        After that I select the 'English' language and visit the dashboard page again.
        Then I can see that the top-level language selector has persisted its value of 'English'.
"""
self.dashboard_page.visit()
language_selector = self.dashboard_page.language_selector
self.assertEqual(
get_selected_option_text(language_selector),
u'English'
)
select_option_by_text(language_selector, 'Dummy Language (Esperanto)')
self.dashboard_page.wait_for_ajax()
self.account_settings.visit()
self.assertEqual(self.account_settings.value_for_dropdown_field('pref-lang'), u'Dummy Language (Esperanto)')
self.assertEqual(
get_selected_option_text(language_selector),
u'Dummy Language (Esperanto)'
)
        # Change back to the English language.
select_option_by_text(language_selector, 'English')
self.account_settings.wait_for_ajax()
self.assertEqual(self.account_settings.value_for_dropdown_field('pref-lang'), u'English')
self.dashboard_page.visit()
self.assertEqual(
get_selected_option_text(language_selector),
u'English'
)


@attr('a11y')
class CourseInfoA11yTest(UniqueCourseTest):
"""Accessibility test for course home/info page."""

    def setUp(self):
super(CourseInfoA11yTest, self).setUp()
self.course_fixture = CourseFixture(
self.course_info['org'], self.course_info['number'],
self.course_info['run'], self.course_info['display_name']
)
self.course_fixture.add_update(
CourseUpdateDesc(date='January 29, 2014', content='Test course update1')
)
self.course_fixture.add_update(
CourseUpdateDesc(date='February 5th, 2014', content='Test course update2')
)
self.course_fixture.add_update(
CourseUpdateDesc(date='March 31st, 2014', content='Test course update3')
)
self.course_fixture.install()
self.course_info_page = CourseInfoPage(self.browser, self.course_id)
AutoAuthPage(self.browser, course_id=self.course_id).visit()

    def test_course_home_a11y(self):
self.course_info_page.visit()
self.course_info_page.a11y_audit.config.set_rules({
"ignore": [
'section', # TODO: AC-491
]
})
self.course_info_page.a11y_audit.check_for_accessibility_errors()
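The redirect tests above share one pattern: navigate, wait for the dashboard, then assert on the banner text. A minimal, self-contained sketch of that assertion pattern follows; `DashboardStub` and `assert_banner_contains` are hypothetical stand-ins introduced for illustration only (the real tests drive a live browser through the bok-choy `DashboardPage` object):

```python
class DashboardStub(object):
    """Hypothetical page object exposing only the banner text attribute."""
    def __init__(self, banner_text):
        self.banner_text = banner_text


def assert_banner_contains(page, expected):
    """Mirror the assertIn(...) checks used by the redirect tests."""
    if expected not in page.banner_text:
        raise AssertionError(
            "Expected banner to contain %r, got %r" % (expected, page.banner_text)
        )


# A banner like the one shown after redirecting from a closed course.
page = DashboardStub('The course you are looking for is closed for enrollment.')
assert_banner_contains(page, 'closed for enrollment')
```

This sketch only checks a substring of the banner, matching how the tests tolerate surrounding text in the real dashboard message.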
# Source: waheedahmed/edx-platform, common/test/acceptance/tests/lms/test_lms.py (Python, AGPL-3.0)