| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ensure_dim(core, dim, dim_):
"""Ensure that dim is correct.""" |
if dim is None:
dim = dim_
if not dim:
return core, 1
if dim_ == dim:
return core, int(dim)
if dim > dim_:
key_convert = lambda vari: vari + (0,)*(dim-dim_)
else:
key_convert = lambda vari: vari[:dim]
new_core = {}
for key, val in core.items():
key_ = key_convert(key)
if key_ in new_core:
new_core[key_] += val
else:
new_core[key_] = val
return new_core, int(dim) |
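A minimal, self-contained sketch of the exponent-key conversion behind `ensure_dim`: when growing the dimension, keys are padded with zero exponents; when shrinking, they are truncated and colliding terms are summed. `convert_keys` is a hypothetical helper, not chaospy API.

```python
def convert_keys(core, dim, dim_):
    """Re-key a {exponent-tuple: coefficient} dict from dim_ to dim dimensions."""
    if dim > dim_:
        # grow: pad each exponent tuple with zeros
        key_convert = lambda key: key + (0,)*(dim-dim_)
    else:
        # shrink: truncate; colliding keys accumulate
        key_convert = lambda key: key[:dim]
    new_core = {}
    for key, val in core.items():
        key_ = key_convert(key)
        new_core[key_] = new_core.get(key_, 0) + val
    return new_core

# growing 1D -> 2D: x + 2 becomes the same polynomial in (x, y)
grown = convert_keys({(1,): 1.0, (0,): 2.0}, 2, 1)
# shrinking 2D -> 1D: the terms x*y and 2*x collapse onto the key (1,)
shrunk = convert_keys({(1, 1): 1.0, (1, 0): 2.0}, 1, 2)
```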
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sort_key(val):
"""Sort key for sorting keys in grevlex order.""" |
return numpy.sum((max(val)+1)**numpy.arange(len(val)-1, -1, -1)*val) |
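To illustrate the encoding trick, `sort_key` is reproduced verbatim below: an exponent tuple is interpreted as digits of a base-`max(val)+1` integer, so tuples can be ranked numerically. A small sketch with hand-checkable values:

```python
import numpy

def sort_key(val):
    """Sort key for sorting keys in grevlex order."""
    return numpy.sum((max(val)+1)**numpy.arange(len(val)-1, -1, -1)*val)

# (2, 1) in base 3 encodes as 2*3 + 1 = 7; (1, 1) in base 2 as 1*2 + 1 = 3
keys = [(0, 2), (1, 0), (0, 1), (1, 1)]
ordered = sorted(keys, key=sort_key)
```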
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def copy(self):
"""Return a copy of the polynomial.""" |
return Poly(self.A.copy(), self.dim, self.shape,
self.dtype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def coefficients(self):
"""Polynomial coefficients.""" |
out = numpy.array([self.A[key] for key in self.keys])
out = numpy.rollaxis(out, -1)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def QoI_Dist(poly, dist, sample=10000, **kws):
""" Constructs distributions for the quantity of interests. The function constructs a kernel density estimator (KDE) for each polynomial (poly) by sampling it. With the KDEs, distributions (Dists) are constructed. The Dists can be used for e.g. plotting probability density functions (PDF), or to make a second uncertainty quantification simulation with that newly generated Dists. Args: poly (Poly):
Polynomial of interest. dist (Dist):
Defines the space where the samples for the KDE are taken from the poly. sample (int):
Number of samples used in estimation to construct the KDE. Returns: (numpy.ndarray):
The constructed quantity of interest (QoI) distributions, where ``qoi_dists.shape==poly.shape``. Examples: [0.29143037 0.39931708 0.29536329] """ |
shape = poly.shape
poly = polynomials.flatten(poly)
dim = len(dist)
#sample from the input dist
samples = dist.sample(sample, **kws)
qoi_dists = []
for i in range(0, len(poly)):
#sample the polynomial solution
if dim == 1:
dataset = poly[i](samples)
else:
dataset = poly[i](*samples)
lo = dataset.min()
up = dataset.max()
#creates qoi_dist
qoi_dist = distributions.SampleDist(dataset, lo, up)
qoi_dists.append(qoi_dist)
#reshape the qoi_dists to match the shape of the input poly
qoi_dists = numpy.array(qoi_dists, distributions.Dist)
qoi_dists = qoi_dists.reshape(shape)
if not shape:
qoi_dists = qoi_dists.item()
return qoi_dists |
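The KDE step above can be sketched without chaospy: evaluate a "polynomial" on random samples, then fit a Gaussian kernel density by hand using Silverman's bandwidth rule. `gaussian_kde_pdf` is an illustrative stand-in, not the chaospy/scipy implementation.

```python
import numpy

def gaussian_kde_pdf(dataset, x):
    """Gaussian KDE with Silverman's rule-of-thumb bandwidth (illustrative)."""
    n = len(dataset)
    h = 1.06*dataset.std()*n**(-1.0/5.0)
    diffs = (x[:, None] - dataset[None, :])/h
    return numpy.exp(-0.5*diffs**2).sum(axis=1)/(n*h*numpy.sqrt(2.0*numpy.pi))

rng = numpy.random.default_rng(0)
samples = rng.uniform(-1, 1, 10000)
dataset = samples**2 + 1.0                     # "polynomial" evaluated at the samples
# extend the grid past the data range so the kernel tails are captured
grid = numpy.linspace(dataset.min() - 1.0, dataset.max() + 1.0, 401)
pdf = gaussian_kde_pdf(dataset, grid)
step = grid[1] - grid[0]
mass = (pdf[:-1] + pdf[1:]).sum()/2.0*step     # trapezoid rule; should be near 1
```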
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_gauss_legendre(order, lower=0, upper=1, composite=None):
""" Generate the quadrature nodes and weights in Gauss-Legendre quadrature. Example: [[0.0694 0.33 0.67 0.9306]] [0.1739 0.3261 0.3261 0.1739] """ |
order = numpy.asarray(order, dtype=int).flatten()
lower = numpy.asarray(lower).flatten()
upper = numpy.asarray(upper).flatten()
dim = max(lower.size, upper.size, order.size)
order = numpy.ones(dim, dtype=int)*order
lower = numpy.ones(dim)*lower
upper = numpy.ones(dim)*upper
if composite is None:
composite = numpy.array(0)
composite = numpy.asarray(composite)
if not composite.shape:
composite = numpy.array([numpy.linspace(0, 1, composite+1)]*dim)
else:
composite = numpy.array(composite)
if len(composite.shape) <= 1:
composite = numpy.transpose([composite])
composite = ((composite.T-lower)/(upper-lower)).T
results = [_gauss_legendre(order[i], composite[i]) for i in range(dim)]
abscis = numpy.array([_[0] for _ in results])
weights = numpy.array([_[1] for _ in results])
abscis = chaospy.quad.combine(abscis)
weights = chaospy.quad.combine(weights)
abscis = (upper-lower)*abscis + lower
weights = numpy.prod(weights*(upper-lower), 1)
return abscis.T, weights |
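A quick cross-check of the rule on `[0, 1]`, assuming only numpy: map the standard Gauss-Legendre nodes and weights from `[-1, 1]`, then verify the 4-point rule integrates a degree-7 polynomial exactly, matching the docstring's node values.

```python
import numpy

nodes, weights = numpy.polynomial.legendre.leggauss(4)
nodes = 0.5*(nodes + 1)      # affine map [-1, 1] -> [0, 1]
weights = 0.5*weights        # weights scale with the interval length

# an n-point Gauss rule is exact up to degree 2n-1; here: integral of x**7 = 1/8
integral = numpy.sum(weights*nodes**7)
```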
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _gauss_legendre(order, composite=1):
"""Backend function.""" |
inner = numpy.ones(order+1)*0.5
outer = numpy.arange(order+1)**2
outer = outer/(16*outer-4.)
banded = numpy.diag(numpy.sqrt(outer[1:]), k=-1) + numpy.diag(inner) + \
numpy.diag(numpy.sqrt(outer[1:]), k=1)
vals, vecs = numpy.linalg.eig(banded)
abscis, weight = vals.real, vecs[0, :]**2
indices = numpy.argsort(abscis)
abscis, weight = abscis[indices], weight[indices]
n_abscis = len(abscis)
composite = numpy.array(composite).flatten()
composite = list(set(composite))
composite = [comp for comp in composite if (comp < 1) and (comp > 0)]
composite.sort()
composite = [0]+composite+[1]
abscissas = numpy.empty(n_abscis*(len(composite)-1))
weights = numpy.empty(n_abscis*(len(composite)-1))
for dim in range(len(composite)-1):
abscissas[dim*n_abscis:(dim+1)*n_abscis] = \
abscis*(composite[dim+1]-composite[dim]) + composite[dim]
weights[dim*n_abscis:(dim+1)*n_abscis] = \
weight*(composite[dim+1]-composite[dim])
return abscissas, weights |
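The eigenvalue step above is the Golub-Welsch algorithm. A hedged sketch of the same Jacobi-matrix construction for shifted Legendre polynomials on `[0, 1]`, checked against numpy's own rule (`eigh` is used since the matrix is symmetric):

```python
import numpy

order = 3
inner = numpy.ones(order+1)*0.5                 # recurrence alpha_n on [0, 1]
outer = numpy.arange(order+1)**2
outer = outer/(16*outer-4.)                     # recurrence beta_n = n**2/(16n**2-4)
banded = (numpy.diag(numpy.sqrt(outer[1:]), k=-1)
          + numpy.diag(inner)
          + numpy.diag(numpy.sqrt(outer[1:]), k=1))
vals, vecs = numpy.linalg.eigh(banded)
nodes, weights = vals, vecs[0]**2               # nodes = eigenvalues; weights from 1st components

# reference: numpy's Gauss-Legendre rule mapped onto [0, 1]
ref_nodes, ref_weights = numpy.polynomial.legendre.leggauss(order+1)
ref_nodes, ref_weights = 0.5*(ref_nodes+1), 0.5*ref_weights
```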
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_gauss_patterson(order, dist):
""" Generate sets of abscissas and weights for Gauss-Patterson quadrature. Args: order (int) : The quadrature order. Must be in the interval (0, 8). dist (Dist) : The domain to create quadrature over. Returns: (numpy.ndarray, numpy.ndarray) : Abscissas and weights. Example: [[0.0031 0.0198 0.0558 0.1127 0.1894 0.2829 0.3883 0.5 0.6117 0.7171 0.8106 0.8873 0.9442 0.9802 0.9969]] [0.0085 0.0258 0.0465 0.0672 0.0858 0.1003 0.1096 0.1128 0.1096 0.1003 0.0858 0.0672 0.0465 0.0258 0.0085] Reference: Prem Kythe, Michael Schaeferkotter, Handbook of Computational Methods for Integration, Chapman and Hall, 2004, ISBN: 1-58488-428-2, LC: QA299.3.K98. Thomas Patterson, The Optimal Addition of Points to Quadrature Formulae, Mathematics of Computation, Volume 22, Number 104, October 1968, pages 847-856. """ |
if len(dist) > 1:
if isinstance(order, int):
values = [quad_gauss_patterson(order, d) for d in dist]
else:
values = [quad_gauss_patterson(order[i], dist[i])
for i in range(len(dist))]
abscissas = [_[0][0] for _ in values]
weights = [_[1] for _ in values]
abscissas = chaospy.quad.combine(abscissas).T
weights = numpy.prod(chaospy.quad.combine(weights), -1)
return abscissas, weights
order = sorted(PATTERSON_VALUES.keys())[order]
abscissas, weights = PATTERSON_VALUES[order]
lower, upper = dist.range()
abscissas = .5*(abscissas*(upper-lower)+upper+lower)
abscissas = abscissas.reshape(1, abscissas.size)
weights /= numpy.sum(weights)
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_quadrature( order, domain, accuracy=100, sparse=False, rule="C", composite=1, growth=None, part=None, normalize=False, **kws ):
""" Numerical quadrature node and weight generator. Args: order (int):
The order of the quadrature. domain (numpy.ndarray, Dist):
If an array is provided, domain is the lower and upper bounds ``(lo, up)``. Invalid if gaussian is set. If a Dist is provided, bounds and nodes are adapted to the distribution. This includes weighting the nodes in Clenshaw-Curtis quadrature. accuracy (int):
If gaussian is set, but the Dist provided in domain does not provide an analytical TTR, ``accuracy`` sets the approximation order for the discretized Stieltjes method. sparse (bool):
If True, use Smolyak's sparse grid instead of a normal tensor product grid. rule (str):
Rule for generating abscissas and weights. Either done with quadrature rules, or with random samples with constant weights. composite (int):
If provided, composite quadrature will be used. Value determines the number of domains along an axis. Ignored in the case gaussian=True. normalize (bool):
In the case of distributions, the abscissas and weights are not tailored to a distribution beyond matching the bounds. If True, the samples are normalized by multiplying the weights with the density of the distribution evaluated at the abscissas and normalized afterwards to sum to one. growth (bool):
If True sets the growth rule for the composite quadrature rule to exponential for Clenshaw-Curtis quadrature. """ |
from ..distributions.baseclass import Dist
isdist = isinstance(domain, Dist)
if isdist:
dim = len(domain)
else:
dim = np.array(domain[0]).size
rule = rule.lower()
if len(rule) == 1:
rule = collection.QUAD_SHORT_NAMES[rule]
quad_function = collection.get_function(
rule,
domain,
normalize,
growth=growth,
composite=composite,
accuracy=accuracy,
)
if sparse:
order = np.ones(len(domain), dtype=int)*order
abscissas, weights = sparse_grid.sparse_grid(quad_function, order, dim)
else:
abscissas, weights = quad_function(order)
assert len(weights) == abscissas.shape[1]
assert len(abscissas.shape) == 2
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def deprecation_warning(func, name):
"""Add a deprecation warning to each distribution.""" |
@wraps(func)
def caller(*args, **kwargs):
"""Docs to be replaced."""
logger = logging.getLogger(__name__)
instance = func(*args, **kwargs)
logger.warning(
"Distribution `chaospy.{}` has been renamed to ".format(name) +
"`chaospy.{}` and will be deprecated next release.".format(instance.__class__.__name__))
return instance
return caller |
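The same wrap-and-warn pattern can be sketched standalone; `Normal`/`make_normal` below are hypothetical stand-ins for the distribution classes, not chaospy names.

```python
import logging
from functools import wraps

def deprecation_warning(func, name):
    """Wrap a factory so each call logs a rename warning (illustrative)."""
    @wraps(func)
    def caller(*args, **kwargs):
        logger = logging.getLogger(__name__)
        instance = func(*args, **kwargs)
        logger.warning(
            "`%s` has been renamed to `%s` and will be deprecated.",
            name, instance.__class__.__name__)
        return instance
    return caller

class Normal:
    """Hypothetical new-style distribution."""

def make_normal():
    return Normal()

Gaussian = deprecation_warning(make_normal, "Gaussian")
dist = Gaussian()    # logs the warning and returns a Normal instance
```

Because of `@wraps`, the wrapper keeps the wrapped function's metadata, so introspection still points at the factory.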
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def E_cond(poly, freeze, dist, **kws):
""" Conditional expected value operator. 1st order statistics of a polynomial on a given probability space conditioned on some of the variables. Args: poly (Poly):
Polynomial to find conditional expected value on. freeze (numpy.ndarray):
Boolean values defining the conditional variables. True values imply that the value is conditioned on, e.g. frozen during the expected value calculation. dist (Dist) : The distributions of the input used in ``poly``. Returns: (chaospy.poly.base.Poly) : Same as ``poly``, but with the variables not tagged in ``freeze`` integrated away. Examples: [1.0, q0, 0.0, 0.0] [1.0, 1.0, q1, 10.0q1] [1.0, q0, q1, 10.0q0q1] [1.0, 1.0, 0.0, 0.0] """ |
if poly.dim < len(dist):
poly = polynomials.setdim(poly, len(dist))
freeze = polynomials.Poly(freeze)
freeze = polynomials.setdim(freeze, len(dist))
keys = freeze.keys
if len(keys) == 1 and keys[0] == (0,)*len(dist):
freeze = list(freeze.A.values())[0]
else:
freeze = numpy.array(keys)
freeze = freeze.reshape(int(freeze.size/len(dist)), len(dist))
shape = poly.shape
poly = polynomials.flatten(poly)
kmax = numpy.max(poly.keys, 0) + 1
keys = [range(k) for k in kmax]
A = poly.A.copy()
keys = poly.keys
out = {}
zeros = [0]*poly.dim
for i in range(len(keys)):
key = list(keys[i])
a = A[tuple(key)]
for d in range(poly.dim):
for j in range(len(freeze)):
if freeze[j, d]:
key[d], zeros[d] = zeros[d], key[d]
break
tmp = a*dist.mom(tuple(key))
if tuple(zeros) in out:
out[tuple(zeros)] = out[tuple(zeros)] + tmp
else:
out[tuple(zeros)] = tmp
for d in range(poly.dim):
for j in range(len(freeze)):
if freeze[j, d]:
key[d], zeros[d] = zeros[d], key[d]
break
out = polynomials.Poly(out, poly.dim, poly.shape, float)
out = polynomials.reshape(out, shape)
return out |
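The key manipulation above can be sketched on a toy representation: for a polynomial stored as `{exponent-tuple: coefficient}`, integrating out the non-frozen variable replaces its exponents with raw moments. Here `q1 ~ Uniform(0, 1)` is integrated out while `q0` stays symbolic; `uniform_moment` and `e_cond_2d` are illustrative helpers, not chaospy API.

```python
def uniform_moment(k):
    """Raw moment E[q1**k] for q1 ~ Uniform(0, 1)."""
    return 1.0/(k + 1)

def e_cond_2d(poly):
    """E[poly | q0] for poly in (q0, q1), with q1 ~ Uniform(0, 1)."""
    out = {}
    for (k0, k1), coef in poly.items():
        key = (k0, 0)                                 # frozen q0 part is kept
        out[key] = out.get(key, 0.0) + coef*uniform_moment(k1)
    return out

# E[1 + q0*q1 + 3*q1**2 | q0] = (1 + 1) + 0.5*q0
poly = {(0, 0): 1.0, (1, 1): 1.0, (0, 2): 3.0}
result = e_cond_2d(poly)
```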
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_samples(order, domain=1, rule="R", antithetic=None):
""" Sample generator. Args: order (int):
Sample order. Determines the number of samples to create. domain (Dist, int, numpy.ndarray):
Defines the space where the samples are generated. If integer is provided, the space ``[0, 1]^domain`` will be used. If array-like object is provided, a hypercube it defines will be used. If distribution, the domain it spans will be used. rule (str):
rule for generating samples. The various rules are listed in :mod:`chaospy.distributions.sampler.generator`. antithetic (tuple):
Sequence of boolean values. Represents the axes to mirror using antithetic variable. """ |
logger = logging.getLogger(__name__)
logger.debug("generating random samples using rule %s", rule)
rule = rule.upper()
if isinstance(domain, int):
dim = domain
trans = lambda x_data: x_data
elif isinstance(domain, (tuple, list, numpy.ndarray)):
domain = numpy.asfarray(domain)
if len(domain.shape) < 2:
dim = 1
else:
dim = len(domain[0])
trans = lambda x_data: ((domain[1]-domain[0])*x_data.T + domain[0]).T
else:
dist = domain
dim = len(dist)
trans = dist.inv
if antithetic is not None:
from .antithetic import create_antithetic_variates
antithetic = numpy.array(antithetic, dtype=bool).flatten()
if antithetic.size == 1 and dim > 1:
antithetic = numpy.repeat(antithetic, dim)
size = numpy.sum(1*numpy.array(antithetic))
order_saved = order
order = int(numpy.log(order - dim))
order = order if order > 1 else 1
while order**dim < order_saved:
order += 1
trans_ = trans
trans = lambda x_data: trans_(
create_antithetic_variates(x_data, antithetic)[:, :order_saved])
assert rule in SAMPLERS, "rule not recognised"
sampler = SAMPLERS[rule]
x_data = trans(sampler(order=order, dim=dim))
logger.debug("order: %d, dim: %d -> shape: %s", order, dim, x_data.shape)
return x_data |
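The antithetic-variates idea used above can be shown in two lines of numpy: mirror uniform samples about 0.5 so each draw `x` is paired with `1 - x`, which cancels first-order sampling error for symmetric integrands. A minimal sketch:

```python
import numpy

rng = numpy.random.default_rng(42)
half = rng.uniform(0, 1, 500)
samples = numpy.concatenate([half, 1 - half])   # each x paired with its mirror 1-x

mean = samples.mean()    # exactly 0.5 by construction, up to rounding
```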
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sparse_segment(cords):
r""" Create a segment of a sparse grid. Convert a multi-index to sparse grid coordinates on the ``[0, 1]^N`` hyper-cube. A sparse grid of order ``D`` coincides with the set of sparse_segments where ``||cords||_1 <= D``. More specifically, a segment of: .. math:: \cup_{cords \in C} sparse_segment(cords) == sparse_grid(M) where: .. math:: C = {cords: M=sum(cords)} Args: cords (numpy.ndarray):
The segment to extract. ``cords`` must consist of non-negative integers. Returns: Q (numpy.ndarray):
Sparse segment where ``Q.shape==(K, sum(M))`` and ``K`` is segment specific. Examples: [[0.5 0.125] [0.5 0.375] [0.5 0.625] [0.5 0.875]] [[0.5 0.25 0.5 0.5 ] [0.5 0.75 0.5 0.5 ]] """ |
cords = np.array(cords)+1
slices = []
for cord in cords:
slices.append(slice(1, 2**cord+1, 2))
grid = np.mgrid[slices]
indices = grid.reshape(len(cords), np.prod(grid.shape[1:])).T
sgrid = indices*2.**-cords
return sgrid |
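The function is reproduced verbatim below to check it against the docstring: the index `(0, 1)` keeps the axis-0 midpoint and yields the two new interior points on the once-refined axis 1.

```python
import numpy as np

def sparse_segment(cords):
    """Segment of a sparse grid on [0, 1]^N (copied from the source above)."""
    cords = np.array(cords)+1
    slices = []
    for cord in cords:
        # odd indices 1, 3, ... pick the points new at this refinement level
        slices.append(slice(1, 2**cord+1, 2))
    grid = np.mgrid[slices]
    indices = grid.reshape(len(cords), np.prod(grid.shape[1:])).T
    sgrid = indices*2.**-cords
    return sgrid

segment = sparse_segment([0, 1])
```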
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lagrange_polynomial(abscissas, sort="GR"):
""" Create Lagrange polynomials. Args: abscissas (numpy.ndarray):
Sample points where the Lagrange polynomials shall be defined. Example: [-0.05q0+0.5, 0.05q0+0.5] [0.5q0^2-0.5q0, -q0^2+1.0, 0.5q0^2+0.5q0] [0.5q0-0.5q1+0.5, -q0+1.0, 0.5q0+0.5q1-0.5] [[1. 0. 0.] [0. 1. 0.] [0. 0. 1.]] """ |
abscissas = numpy.asfarray(abscissas)
if len(abscissas.shape) == 1:
abscissas = abscissas.reshape(1, abscissas.size)
dim, size = abscissas.shape
order = 1
while chaospy.bertran.terms(order, dim) <= size:
order += 1
indices = numpy.array(chaospy.bertran.bindex(0, order-1, dim, sort)[:size])
idx, idy = numpy.mgrid[:size, :size]
matrix = numpy.prod(abscissas.T[idx]**indices[idy], -1)
det = numpy.linalg.det(matrix)
if det == 0:
raise numpy.linalg.LinAlgError("invertible matrix required")
vec = chaospy.poly.basis(0, order-1, dim, sort)[:size]
coeffs = numpy.zeros((size, size))
if size == 1:
out = chaospy.poly.basis(0, 0, dim, sort)*abscissas.item()
elif size == 2:
coeffs = numpy.linalg.inv(matrix)
out = chaospy.poly.sum(vec*(coeffs.T), 1)
else:
for i in range(size):
for j in range(size):
coeffs[i, j] += numpy.linalg.det(matrix[1:, 1:])
matrix = numpy.roll(matrix, -1, axis=0)
matrix = numpy.roll(matrix, -1, axis=1)
coeffs /= det
out = chaospy.poly.sum(vec*(coeffs.T), 1)
return out |
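The linear-algebra core of the routine is inverting the Vandermonde matrix of the nodes. In this numpy-only sketch, column `j` of `coeffs` holds the monomial coefficients of the cardinal polynomial that is 1 at node `j` and 0 at the others; evaluating all cardinal polynomials at all nodes therefore gives the identity, matching the docstring's last example.

```python
import numpy

nodes = numpy.array([-1.0, 0.0, 1.0])
vander = nodes[:, None]**numpy.arange(len(nodes))[None, :]   # V[i, j] = x_i**j
coeffs = numpy.linalg.inv(vander)                            # columns = cardinal polys

# cardinal property: p_j(x_i) = delta_ij
evals = vander @ coeffs
```

For these nodes the first cardinal polynomial is `x*(x-1)/2`, i.e. coefficients `[0, -1/2, 1/2]`.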
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SampleDist(samples, lo=None, up=None):
""" Distribution based on samples. Estimates a distribution from the given samples by constructing a kernel density estimator (KDE). Args: samples: Sample values for construction of the KDE lo (float) : Location of lower threshold up (float) : Location of upper threshold Example: sample_dist(lo=0, up=2) [0. 0.6016 1. 1.3984 2. ] [0. 0.25 0.5 0.75 1. ] [0.2254 0.4272 0.5135 0.4272 0.2254] [-0.4123 1.1645 -0.0131 1.3302] 1.0 [[1.3835 0.7983 1.1872] [0.2429 0.2693 0.4102]] """ |
samples = numpy.asarray(samples)
if lo is None:
lo = samples.min()
if up is None:
up = samples.max()
try:
#construct the kernel density estimator
dist = sample_dist(samples, lo, up)
except numpy.linalg.LinAlgError:
#raised by gaussian_kde if the dataset is a singular matrix
dist = Uniform(lower=-numpy.inf, upper=numpy.inf)
return dist |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bastos_ohagen(mat, eps=1e-16):
""" Bastos-O'Hagen algorithm for modified Cholesky decomposition. Args: mat (numpy.ndarray):
Input matrix to decompose. Assumed to be close to positive definite. eps (float):
Tolerance value for the eigenvalues. Values smaller than ``eps*numpy.diag(mat).max()`` are considered to be zero. Returns: (:py:data:typing.Tuple[numpy.ndarray, numpy.ndarray]):
perm: Permutation matrix lowtri: Lower triangular Cholesky factor Examples: [[0 1 0] [1 0 0] [0 0 1]] [[ 2.4495 0. 0. ] [ 0.8165 1.8257 0. ] [ 1.2247 -0. 0.9129]] [[4. 2. 1. ] [2. 6. 3. ] [1. 3. 2.3333]] """ |
mat_ref = numpy.asfarray(mat)
mat = mat_ref.copy()
diag_max = numpy.diag(mat).max()
assert len(mat.shape) == 2
size = len(mat)
hitri = numpy.zeros((size, size))
piv = numpy.arange(size)
for idx in range(size):
idx_max = numpy.argmax(numpy.diag(mat[idx:, idx:])) + idx
if mat[idx_max, idx_max] <= numpy.abs(diag_max*eps):
if not idx:
raise ValueError("Purely negative definite")
for j in range(idx, size):
hitri[j, j] = hitri[j-1, j-1]/float(j)
break
tmp = mat[:, idx].copy()
mat[:, idx] = mat[:, idx_max]
mat[:, idx_max] = tmp
tmp = hitri[:, idx].copy()
hitri[:, idx] = hitri[:, idx_max]
hitri[:, idx_max] = tmp
tmp = mat[idx, :].copy()
mat[idx, :] = mat[idx_max, :]
mat[idx_max, :] = tmp
piv[idx], piv[idx_max] = piv[idx_max], piv[idx]
hitri[idx, idx] = numpy.sqrt(mat[idx, idx])
rval = mat[idx, idx+1:]/hitri[idx, idx]
hitri[idx, idx+1:] = rval
mat[idx+1:, idx+1:] -= numpy.outer(rval, rval)
perm = numpy.zeros((size, size), dtype=int)
for idx in range(size):
perm[idx, piv[idx]] = 1
return perm, hitri.T |
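The defining relation of the pivoted decomposition can be verified numerically. The values below are the exact closed forms of the rounded numbers printed in the docstring example (e.g. `2.4495 = sqrt(6)`); with them, `perm @ mat @ perm.T` equals `lowtri @ lowtri.T` to machine precision.

```python
import numpy

mat = numpy.array([[4.0, 2.0, 1.0],
                   [2.0, 6.0, 3.0],
                   [1.0, 3.0, 7.0/3.0]])
perm = numpy.array([[0, 1, 0],
                    [1, 0, 0],
                    [0, 0, 1]])
# exact closed forms of the docstring's rounded factor entries
lowtri = numpy.array([[numpy.sqrt(6.0),       0.0,                   0.0],
                      [2.0/numpy.sqrt(6.0),   numpy.sqrt(10.0/3.0),  0.0],
                      [3.0/numpy.sqrt(6.0),   0.0,                   numpy.sqrt(5.0/6.0)]])

residual = numpy.abs(perm @ mat @ perm.T - lowtri @ lowtri.T).max()
```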
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Sens_t(poly, dist, **kws):
""" Variance-based decomposition AKA Sobol' indices Total effect sensitivity index Args: poly (Poly):
Polynomial to find first order Sobol indices on. dist (Dist):
The distributions of the input used in ``poly``. Returns: (numpy.ndarray) : Total effect sensitivity indices for each parameter in ``poly``, with shape ``(len(dist),) + poly.shape``. Examples: [[0. 1. 0. 0.57142857] [0. 0. 1. 0.57142857]] """ |
dim = len(dist)
if poly.dim < dim:
poly = chaospy.poly.setdim(poly, len(dist))
zero = [1]*dim
out = numpy.zeros((dim,) + poly.shape, dtype=float)
V = Var(poly, dist, **kws)
for i in range(dim):
zero[i] = 0
out[i] = ((V-Var(E_cond(poly, zero, dist, **kws), dist, **kws)) /
(V+(V == 0))**(V!=0))
zero[i] = 1
return out |
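As an independent cross-check of total-effect indices, here is a Monte Carlo sketch using Jansen's estimator (not the chaospy implementation): for `f(q0, q1) = q0 + 2*q1` with independent `Uniform(0, 1)` inputs, the analytic totals are 1/5 and 4/5.

```python
import numpy

rng = numpy.random.default_rng(7)
n = 200_000

def model(q):
    return q[:, 0] + 2.0*q[:, 1]

a = rng.uniform(0, 1, (n, 2))
b = rng.uniform(0, 1, (n, 2))
f_a = model(a)
variance = f_a.var()

totals = []
for i in range(2):
    ab = a.copy()
    ab[:, i] = b[:, i]      # resample only coordinate i
    # Jansen's total-effect estimator: E[(f(A) - f(AB_i))**2] / (2 Var)
    totals.append(numpy.mean((f_a - model(ab))**2)/(2.0*variance))
```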
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def construct(parent=None, defaults=None, **kwargs):
""" Random variable constructor. Args: cdf: Cumulative distribution function. Optional if ``parent`` is used. bnd: Boundary interval. Optional if ``parent`` is used. parent (Dist):
Distribution used as basis for new distribution. Any other argument that is omitted will instead take its function from ``parent``. doc (str):
Documentation for the distribution. str (str, :py:data:typing.Callable):
Pretty print of the variable. pdf: Probability density function. ppf: Point percentile function. mom: Raw moment generator. ttr: Three terms recursion coefficient generator. init: Custom initialiser method. defaults (dict):
Default values to provide to initialiser. Returns: (Dist):
New custom distribution. """ |
for key in kwargs:
assert key in LEGAL_ATTRS, "{} is not legal input".format(key)
if parent is not None:
for key, value in LEGAL_ATTRS.items():
if key not in kwargs and hasattr(parent, value):
kwargs[key] = getattr(parent, value)
assert "cdf" in kwargs, "cdf function must be defined"
assert "bnd" in kwargs, "bnd function must be defined"
if "str" in kwargs and isinstance(kwargs["str"], str):
string = kwargs.pop("str")
kwargs["str"] = lambda *args, **kwargs: string
defaults = defaults if defaults else {}
for key in defaults:
assert key in LEGAL_ATTRS, "invalid default value {}".format(key)
def custom_distribution(**kws):
prm = defaults.copy()
prm.update(kws)
dist = Dist(**prm)
for key, function in kwargs.items():
attr_name = LEGAL_ATTRS[key]
setattr(dist, attr_name, types.MethodType(function, dist))
return dist
if "doc" in kwargs:
custom_distribution.__doc__ = kwargs["doc"]
return custom_distribution |
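The core trick in `construct` is attaching plain functions as bound methods on an instance at runtime via `types.MethodType`. A standalone sketch with hypothetical stand-ins (`Carrier`, `_pdf`) for the Dist machinery:

```python
import types

class Carrier:
    """Hypothetical empty class standing in for Dist."""

def _pdf(self, x):
    # density of Uniform(0, 1), just for illustration
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

obj = Carrier()
obj._pdf = types.MethodType(_pdf, obj)   # bound method: ``self`` is filled in

value = obj._pdf(0.5)
```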
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_quadrature(orth, nodes, weights, solves, retall=False, norms=None, **kws):
""" Using spectral projection to create a polynomial approximation over distribution space. Args: orth (chaospy.poly.base.Poly):
Orthogonal polynomial expansion. Must be orthogonal for the approximation to be accurate. nodes (numpy.ndarray):
Where to evaluate the polynomial expansion and model to approximate. ``nodes.shape==(D,K)`` where ``D`` is the number of dimensions and ``K`` is the number of nodes. weights (numpy.ndarray):
Weights when doing numerical integration. ``weights.shape == (K,)`` must hold. solves (numpy.ndarray):
The model evaluation to approximate. If `numpy.ndarray` is provided, it must have ``len(solves) == K``. If callable, it must take a single argument X with ``len(X) == D``, and return a consistent numpy compatible shape. norms (numpy.ndarray):
In the case of TTR, using coefficients to estimate the polynomial norm is more stable than manual calculation. Calculated using quadrature if not provided. ``norms.shape == (len(orth),)`` must hold. Returns: (chaospy.poly.base.Poly):
Fitted model approximation in the form of a polynomial. """ |
orth = chaospy.poly.Poly(orth)
nodes = numpy.asfarray(nodes)
weights = numpy.asfarray(weights)
if callable(solves):
solves = [solves(node) for node in nodes.T]
solves = numpy.asfarray(solves)
shape = solves.shape
solves = solves.reshape(weights.size, int(solves.size/weights.size))
ovals = orth(*nodes)
vals1 = [(val*solves.T*weights).T for val in ovals]
if norms is None:
norms = numpy.sum(ovals**2*weights, -1)
else:
norms = numpy.array(norms).flatten()
assert len(norms) == len(orth)
coefs = (numpy.sum(vals1, 1).T/norms).T
coefs = coefs.reshape(len(coefs), *shape[1:])
approx_model = chaospy.poly.transpose(chaospy.poly.sum(orth*coefs.T, -1))
if retall:
return approx_model, coefs
return approx_model |
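Spectral projection can be done by hand with numpy only, mirroring the coefficient formula above: expand `f(x) = x**2` in Legendre polynomials on `[-1, 1]` with Gauss-Legendre quadrature. Analytically `x**2 = (1/3)*P0 + (2/3)*P2`.

```python
import numpy

nodes, weights = numpy.polynomial.legendre.leggauss(5)
solves = nodes**2                                 # "model evaluations" at the nodes

coefs = []
for degree in range(3):
    # orthogonal basis polynomial P_degree evaluated at the nodes
    basis = numpy.polynomial.legendre.Legendre.basis(degree)(nodes)
    norm = numpy.sum(basis**2*weights)            # quadrature norm, as in the code
    coefs.append(numpy.sum(basis*solves*weights)/norm)
```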
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sparse_grid(func, order, dim=None, skew=None):
""" Smolyak sparse grid constructor. Args: func (:py:data:typing.Callable):
Function that takes a single argument ``order`` of type ``numpy.ndarray`` and with ``order.shape = (dim,)`` order (int, numpy.ndarray):
The order of the grid. If ``numpy.ndarray``, it overrides both ``dim`` and ``skew``. dim (int):
Number of dimension. skew (list):
Order skewness. """ |
if not isinstance(order, int):
orders = numpy.array(order).flatten()
dim = orders.size
m_order = int(numpy.min(orders))
skew = [order-m_order for order in orders]
return sparse_grid(func, m_order, dim, skew)
abscissas, weights = [], []
bindex = chaospy.bertran.bindex(order-dim+1, order, dim)
if skew is None:
skew = numpy.zeros(dim, dtype=int)
else:
skew = numpy.array(skew, dtype=int)
assert len(skew) == dim
for idx in range(
chaospy.bertran.terms(order, dim)
- chaospy.bertran.terms(order-dim, dim)):
idb = bindex[idx]
abscissa, weight = func(skew+idb)
weight *= (-1)**(order-sum(idb))*comb(dim-1, order-sum(idb))
abscissas.append(abscissa)
weights.append(weight)
abscissas = numpy.concatenate(abscissas, 1)
weights = numpy.concatenate(weights, 0)
abscissas = numpy.around(abscissas, 15)
order = numpy.lexsort(tuple(abscissas))
abscissas = abscissas.T[order].T
weights = weights[order]
# identify non-unique terms
diff = numpy.diff(abscissas.T, axis=0)
unique = numpy.ones(len(abscissas.T), bool)
unique[1:] = (diff != 0).any(axis=1)
# merge duplicate nodes
length = len(weights)
idx = 1
while idx < length:
while idx < length and unique[idx]:
idx += 1
idy = idx+1
while idy < length and not unique[idy]:
idy += 1
if idy-idx > 1:
weights[idx-1] = numpy.sum(weights[idx-1:idy])
idx = idy+1
abscissas = abscissas[:, unique]
weights = weights[unique]
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def evaluate_bound( distribution, x_data, parameters=None, cache=None, ):
""" Evaluate lower and upper bounds. Args: distribution (Dist):
Distribution to evaluate. x_data (numpy.ndarray):
Locations for where evaluate bounds at. Relevant in the case of multivariate distributions where the bounds are affected by the output of other distributions. parameters (:py:data:typing.Any):
Collection of parameters to override the default ones in the distribution. cache (:py:data:typing.Any):
A collection of previous calculations in case the same distribution turns up on more than one occasion. Returns: The lower and upper bounds of ``distribution`` at location ``x_data`` using parameters ``parameters``. """ |
assert len(x_data) == len(distribution)
assert len(x_data.shape) == 2
cache = cache if cache is not None else {}
parameters = load_parameters(
distribution, "_bnd", parameters=parameters, cache=cache)
out = numpy.zeros((2,) + x_data.shape)
lower, upper = distribution._bnd(x_data.copy(), **parameters)
out.T[:, :, 0] = numpy.asfarray(lower).T
out.T[:, :, 1] = numpy.asfarray(upper).T
cache[distribution] = out
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def inner(*args):
""" Inner product of a polynomial set. Args: args (chaospy.poly.base.Poly):
The polynomials to perform inner product on. Returns: (chaospy.poly.base.Poly):
Resulting polynomial. Examples: q0^2+q0q1^2-1 14 """ |
haspoly = sum([isinstance(arg, Poly) for arg in args])
# Numpy
if not haspoly:
return numpy.sum(numpy.prod(args, 0), 0)
# Poly
out = args[0]
for arg in args[1:]:
out = out * arg
return sum(out) |
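The numpy fallback path is a generalized dot product: multiply all arguments elementwise, then sum over the first axis. A one-line sketch:

```python
import numpy

# 1*4 + 2*5 + 3*6
value = numpy.sum(numpy.prod([[1, 2, 3], [4, 5, 6]], 0), 0)
```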
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def outer(*args):
""" Polynomial outer product. Args: P1 (chaospy.poly.base.Poly, numpy.ndarray):
First term in outer product P2 (chaospy.poly.base.Poly, numpy.ndarray):
Second term in outer product Returns: (chaospy.poly.base.Poly):
Poly set with same dimensions as the input. Examples: [1, q0, q0^2] [q0, q0^2, q0^3] [[1, q0, q0^2], [q0, q0^2, q0^3], [q0^2, q0^3, q0^4]] """ |
if len(args) > 2:
part1 = args[0]
part2 = outer(*args[1:])
elif len(args) == 2:
part1, part2 = args
else:
return args[0]
dtype = chaospy.poly.typing.dtyping(part1, part2)
if dtype in (list, tuple, numpy.ndarray):
part1 = numpy.array(part1)
part2 = numpy.array(part2)
shape = part1.shape + part2.shape
return numpy.outer(
chaospy.poly.shaping.flatten(part1),
chaospy.poly.shaping.flatten(part2),
)
if dtype == Poly:
if isinstance(part1, Poly) and isinstance(part2, Poly):
if (1,) in (part1.shape, part2.shape):
return part1*part2
shape = part1.shape+part2.shape
out = []
for _ in chaospy.poly.shaping.flatten(part1):
out.append(part2*_)
return chaospy.poly.shaping.reshape(Poly(out), shape)
if isinstance(part1, (int, float, list, tuple)):
part2, part1 = numpy.array(part1), part2
else:
part2 = numpy.array(part2)
core_old = part1.A
core_new = {}
for key in part1.keys:
core_new[key] = outer(core_old[key], part2)
shape = part1.shape+part2.shape
dtype = chaospy.poly.typing.dtyping(part1.dtype, part2.dtype)
return Poly(core_new, part1.dim, shape, dtype)
raise NotImplementedError |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dot(poly1, poly2):
""" Dot product of polynomial vectors. Args: poly1 (Poly) : left part of product. poly2 (Poly) : right part of product. Returns: (Poly) : product of poly1 and poly2. Examples: [1, q0, q0^2] 2q0^2+q0 q0^4+q0^2+1 """ |
if not isinstance(poly1, Poly) and not isinstance(poly2, Poly):
return numpy.dot(poly1, poly2)
poly1 = Poly(poly1)
poly2 = Poly(poly2)
poly = poly1*poly2
if numpy.prod(poly1.shape) <= 1 or numpy.prod(poly2.shape) <= 1:
return poly
return chaospy.poly.sum(poly, 0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_genz_keister_16(order):
""" Hermite Genz-Keister 16 rule. Args: order (int):
The quadrature order. Must be in the interval (0, 8). Returns: (:py:data:typing.Tuple[numpy.ndarray, numpy.ndarray]):
Abscissas and weights Examples: [-1.7321 0. 1.7321] [0.1667 0.6667 0.1667] """ |
order = sorted(GENZ_KEISTER_16.keys())[order]
abscissas, weights = GENZ_KEISTER_16[order]
abscissas = numpy.array(abscissas)
weights = numpy.array(weights)
weights /= numpy.sum(weights)
abscissas *= numpy.sqrt(2)
return abscissas, weights |
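The docstring example coincides with the 3-point Gauss-Hermite rule for a standard-normal weight: nodes at 0 and ±√3 with weights 2/3 and 1/6. That can be reproduced with numpy's probabilists'-Hermite rule, normalizing the weights to sum to one as the function does:

```python
import numpy

nodes, weights = numpy.polynomial.hermite_e.hermegauss(3)
weights = weights/weights.sum()     # normalize to a probability weight
```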
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def orth_gs(order, dist, normed=False, sort="GR", cross_truncation=1., **kws):
""" Gram-Schmidt process for generating orthogonal polynomials. Args: order (int, Poly):
The upper polynomial order. Alternative a custom polynomial basis can be used. dist (Dist):
Weighting distribution(s) defining orthogonality. normed (bool):
If True orthonormal polynomials will be used instead of monic. sort (str):
Ordering argument passed to poly.basis. If custom basis is used, argument is ignored. cross_truncation (float):
Use hyperbolic cross truncation scheme to reduce the number of terms in expansion. Returns: (Poly):
The orthogonal polynomial expansion. Examples: [1.0, q1, q0, q1^2-1.0, q0q1, q0^2-1.0] """ |
logger = logging.getLogger(__name__)
dim = len(dist)
if isinstance(order, int):
if order == 0:
return chaospy.poly.Poly(1, dim=dim)
basis = chaospy.poly.basis(
0, order, dim, sort, cross_truncation=cross_truncation)
else:
basis = order
basis = list(basis)
polynomials = [basis[0]]
if normed:
for idx in range(1, len(basis)):
# orthogonalize polynomial:
for idy in range(idx):
orth = chaospy.descriptives.E(
basis[idx]*polynomials[idy], dist, **kws)
basis[idx] = basis[idx] - polynomials[idy]*orth
# normalize:
norms = chaospy.descriptives.E(basis[idx]**2, dist, **kws)
if norms <= 0:
logger.warning("Warning: Polynomial cutoff at term %d", idx)
break
basis[idx] = basis[idx] / numpy.sqrt(norms)
polynomials.append(basis[idx])
else:
norms = [1.]
for idx in range(1, len(basis)):
# orthogonalize polynomial:
for idy in range(idx):
orth = chaospy.descriptives.E(
basis[idx]*polynomials[idy], dist, **kws)
basis[idx] = basis[idx] - polynomials[idy] * orth / norms[idy]
norms.append(
chaospy.descriptives.E(basis[idx]**2, dist, **kws))
if norms[-1] <= 0:
logger.warning("Warning: Polynomial cutoff at term %d", idx)
break
polynomials.append(basis[idx])
return chaospy.poly.Poly(polynomials, dim=dim, shape=(len(polynomials),)) |
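The monic Gram-Schmidt recurrence above can be sketched numerically without chaospy. This is a minimal stand-in, assuming monomials 1, x, x^2 as the starting basis and a dense grid average on [-1, 1] in place of the expectation operator E:

```python
import numpy

# Orthogonalize the monomials 1, x, x^2 against the uniform weight on
# [-1, 1]; a dense grid average stands in for the expectation E.
x = numpy.linspace(-1.0, 1.0, 200001)
basis = [numpy.ones_like(x), x, x**2]
polynomials = [basis[0]]
norms = [numpy.mean(basis[0]**2)]
for idx in range(1, len(basis)):
    vec = basis[idx]
    for idy, prev in enumerate(polynomials):
        # subtract the projection onto each earlier polynomial
        vec = vec - prev * numpy.mean(basis[idx] * prev) / norms[idy]
    polynomials.append(vec)
    norms.append(numpy.mean(vec**2))
```

On this grid the third polynomial comes out close to x^2 - 1/3, the monic Legendre polynomial, as expected for the uniform weight.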
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_parameters( distribution, method_name, parameters=None, cache=None, cache_key=lambda x:x, ):
""" Load parameter values by filling them in from cache. Args: distribution (Dist):
The distribution to load parameters from. method_name (str):
Name of the method for where the parameters should be used. Typically ``"_pdf"``, ``_cdf`` or the like. parameters (:py:data:typing.Any):
Default parameters to use if there are no cache to retrieve. Use the distributions internal parameters, if not provided. cache (:py:data:typing.Any):
A dictionary containing previous evaluations from the stack. If a parameter contains a distribution that is present in the cache, it will be replaced with the cached value. If omitted, a new one will be created. cache_key (:py:data:typing.Any) Redefine the keys of the cache to suit other purposes. Returns: Same as ``parameters``, if provided. The ``distribution`` parameter if not. In either case, parameters may be updated with cache values (if provided) or by ``cache`` if the call signature of ``method_name`` (on ``distribution``) contains a ``cache`` argument. """
from .. import baseclass
if cache is None:
cache = {}
if parameters is None:
parameters = {}
parameters_ = distribution.prm.copy()
parameters_.update(**parameters)
parameters = parameters_
# self aware and should handle things itself:
if contains_call_signature(getattr(distribution, method_name), "cache"):
parameters["cache"] = cache
# dumb distribution and just wants to evaluate:
else:
for key, value in parameters.items():
if isinstance(value, baseclass.Dist):
value = cache_key(value)
if value in cache:
parameters[key] = cache[value]
else:
raise baseclass.StochasticallyDependentError(
"evaluating under-defined distribution {}.".format(distribution))
return parameters |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_genz_keister_18(order):
""" Hermite Genz-Keister 18 rule. Args: order (int):
The quadrature order. Must be in the interval (0, 8). Returns: (:py:data:typing.Tuple[numpy.ndarray, numpy.ndarray]):
Abscissas and weights Examples: [-1.7321 0. 1.7321] [0.1667 0.6667 0.1667] """ |
order = sorted(GENZ_KEISTER_18.keys())[order]
abscissas, weights = GENZ_KEISTER_18[order]
abscissas = numpy.array(abscissas)
weights = numpy.array(weights)
weights /= numpy.sum(weights)
abscissas *= numpy.sqrt(2)
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dtyping(*args):
""" Find least common denominator dtype. Examples: True <class 'chaospy.poly.base.Poly'> """ |
args = list(args)
for idx, arg in enumerate(args):
if isinstance(arg, Poly):
args[idx] = Poly
elif isinstance(arg, numpy.generic):
args[idx] = numpy.asarray(arg).dtype
elif isinstance(arg, (float, int)):
args[idx] = type(arg)
for type_ in DATATYPES:
if type_ in args:
return type_
raise ValueError(
"dtypes not recognised " + str([str(_) for _ in args])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toarray(vari):
""" Convert polynomial array into a numpy.asarray of polynomials. Args: vari (Poly, numpy.ndarray):
Input data. Returns: (numpy.ndarray):
A numpy array with ``Q.shape==A.shape``. Examples: [1, q0, q0^2] True q0 """ |
if isinstance(vari, Poly):
shape = vari.shape
out = numpy.asarray(
[{} for _ in range(numpy.prod(shape))],
dtype=object
)
core = vari.A.copy()
for key in core.keys():
core[key] = core[key].flatten()
for i in range(numpy.prod(shape)):
if not numpy.all(core[key][i] == 0):
out[i][key] = core[key][i]
for i in range(numpy.prod(shape)):
out[i] = Poly(out[i], vari.dim, (), vari.dtype)
out = out.reshape(shape)
return out
return numpy.asarray(vari) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _bnd(self, xloc, dist, length, cache):
""" boundary function. Example: [[[0. 0. 0.] [0. 0. 0.]] <BLANKLINE> [[2. 2. 2.] [2. 2. 2.]]] """ |
lower, upper = evaluation.evaluate_bound(
dist, xloc.reshape(1, -1))
lower = lower.reshape(length, -1)
upper = upper.reshape(length, -1)
assert lower.shape == xloc.shape, (lower.shape, xloc.shape)
assert upper.shape == xloc.shape
return lower, upper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _mom(self, k, dist, length, cache):
""" Moment generating function. Example: [1. 0.5 0.25] """ |
return numpy.prod(dist.mom(k), 0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sum(vari, axis=None):
# pylint: disable=redefined-builtin """ Sum the components of a shapeable quantity along a given axis. Args: vari (chaospy.poly.base.Poly, numpy.ndarray):
Input data. axis (int):
Axis over which the sum is taken. By default ``axis`` is None, and all elements are summed. Returns: (chaospy.poly.base.Poly, numpy.ndarray):
Polynomial array with same shape as ``vari``, with the specified axis removed. If ``vari`` is an 0-d array, or ``axis`` is None, a (non-iterable) component is returned. Examples: [1, q0, q0^2] q0^2+q0+1 """ |
if isinstance(vari, Poly):
core = vari.A.copy()
for key in vari.keys:
core[key] = sum(core[key], axis)
return Poly(core, vari.dim, None, vari.dtype)
return np.sum(vari, axis) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cumsum(vari, axis=None):
""" Cumulative sum the components of a shapeable quantity along a given axis. Args: vari (chaospy.poly.base.Poly, numpy.ndarray):
Input data. axis (int):
Axis over which the sum is taken. By default ``axis`` is None, and all elements are summed. Returns: (chaospy.poly.base.Poly, numpy.ndarray):
Polynomial array with same shape as ``vari``. Examples: [1, q0, q0^2] [1, q0+1, q0^2+q0+1] """ |
if isinstance(vari, Poly):
core = vari.A.copy()
for key, val in core.items():
core[key] = cumsum(val, axis)
return Poly(core, vari.dim, None, vari.dtype)
return np.cumsum(vari, axis) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prod(vari, axis=None):
""" Product of the components of a shapeable quantity along a given axis. Args: vari (chaospy.poly.base.Poly, numpy.ndarray):
Input data. axis (int):
Axis over which the sum is taken. By default ``axis`` is None, and all elements are summed. Returns: (chaospy.poly.base.Poly, numpy.ndarray):
Polynomial array with same shape as ``vari``, with the specified axis removed. If ``vari`` is an 0-d array, or ``axis`` is None, a (non-iterable) component is returned. Examples: [1, q0, q0^2] q0^3 """ |
if isinstance(vari, Poly):
if axis is None:
vari = chaospy.poly.shaping.flatten(vari)
axis = 0
vari = chaospy.poly.shaping.rollaxis(vari, axis)
out = vari[0]
for poly in vari[1:]:
out = out*poly
return out
return np.prod(vari, axis) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cumprod(vari, axis=None):
""" Perform the cumulative product of a shapeable quantity over a given axis. Args: vari (chaospy.poly.base.Poly, numpy.ndarray):
Input data. axis (int):
Axis over which the sum is taken. By default ``axis`` is None, and all elements are summed. Returns: (chaospy.poly.base.Poly):
An array shaped as ``vari`` but with the specified axis removed. Examples: [1, q0, q0^2, q0^3] [1, q0, q0^3, q0^6] """ |
if isinstance(vari, Poly):
if np.prod(vari.shape) == 1:
return vari.copy()
if axis is None:
vari = chaospy.poly.shaping.flatten(vari)
axis = 0
vari = chaospy.poly.shaping.rollaxis(vari, axis)
out = [vari[0]]
for poly in vari[1:]:
out.append(out[-1]*poly)
return Poly(out, vari.dim, vari.shape, vari.dtype)
return np.cumprod(vari, axis) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_leja(order, dist):
""" Generate Leja quadrature node. Example: [[-2.7173 -1.4142 0. 1.7635]] [0.022 0.1629 0.6506 0.1645] """ |
from chaospy.distributions import evaluation
if len(dist) > 1 and evaluation.get_dependencies(*list(dist)):
raise evaluation.DependencyError(
"Leja quadrature do not supper distribution with dependencies.")
if len(dist) > 1:
if isinstance(order, int):
out = [quad_leja(order, _) for _ in dist]
else:
out = [quad_leja(order[_], dist[_]) for _ in range(len(dist))]
abscissas = [_[0][0] for _ in out]
weights = [_[1] for _ in out]
abscissas = chaospy.quad.combine(abscissas).T
weights = chaospy.quad.combine(weights)
weights = numpy.prod(weights, -1)
return abscissas, weights
lower, upper = dist.range()
abscissas = [lower, dist.mom(1), upper]
for _ in range(int(order)):
obj = create_objective(dist, abscissas)
opts, vals = zip(
*[fminbound(
obj, abscissas[idx], abscissas[idx+1], full_output=1)[:2]
for idx in range(len(abscissas)-1)]
)
index = numpy.argmin(vals)
abscissas.insert(index+1, opts[index])
abscissas = numpy.asfarray(abscissas).flatten()[1:-1]
weights = create_weights(abscissas, dist)
abscissas = abscissas.reshape(1, abscissas.size)
return numpy.array(abscissas), numpy.array(weights) |
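The greedy step above picks the next abscissa by maximizing sqrt(pdf(x)) times the product of distances to the already chosen abscissas; `create_objective` hands the minimizer its negation. A self-contained sketch of that objective, with a hand-written standard normal pdf as the assumed weight:

```python
import numpy

# Negated Leja objective: minimizing this maximizes the weighted distance
# product, which is where the next node is inserted.
def neg_leja_objective(x, existing, pdf):
    distances = numpy.abs(numpy.asarray(existing) - x)
    return -numpy.sqrt(pdf(x)) * numpy.prod(distances)

std_normal_pdf = lambda x: numpy.exp(-x * x / 2.0) / numpy.sqrt(2.0 * numpy.pi)
value = neg_leja_objective(0.5, [-1.0, 0.0, 1.0], std_normal_pdf)
```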
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_objective(dist, abscissas):
"""Create objective function.""" |
abscissas_ = numpy.array(abscissas[1:-1])
def obj(abscissa):
"""Local objective function."""
out = -numpy.sqrt(dist.pdf(abscissa))
out *= numpy.prod(numpy.abs(abscissas_ - abscissa))
return out
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_weights(nodes, dist):
"""Create weights for the Laja method.""" |
poly = chaospy.quad.generate_stieltjes(dist, len(nodes)-1, retall=True)[0]
poly = chaospy.poly.flatten(chaospy.poly.Poly(poly))
weights_inverse = poly(nodes)
weights = numpy.linalg.inv(weights_inverse)
return weights[:, 0] |
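The weight construction above inverts the matrix of basis polynomials evaluated at the nodes and keeps the first column. A sketch of the same idea, with Legendre polynomials (orthogonal under the uniform density on [-1, 1]) standing in for the Stieltjes basis:

```python
import numpy

# Rows are P0, P1, P2 evaluated at the nodes; since E[P_k] = delta_{k0}
# under the uniform weight, the first column of the inverse gives weights
# that integrate each basis polynomial exactly.
nodes = numpy.array([-1.0, 0.0, 1.0])
collocation = numpy.array([
    numpy.ones_like(nodes),          # P0(x) = 1
    nodes,                           # P1(x) = x
    (3.0 * nodes**2 - 1.0) / 2.0,    # P2(x) = (3x^2 - 1)/2
])
weights = numpy.linalg.inv(collocation)[:, 0]
```

For these three nodes the construction recovers the familiar Simpson-like weights [1/6, 2/3, 1/6].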
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gill_king(mat, eps=1e-16):
""" Gill-King algorithm for modified cholesky decomposition. Args: mat (numpy.ndarray):
Must be a non-singular and symmetric matrix. If sparse, the result will also be sparse. eps (float):
Error tolerance used in algorithm. Returns: (numpy.ndarray):
Lower triangular Cholesky factor. Examples: [[2. 0. 0. ] [1. 2.2361 0. ] [0.5 1.118 1.2264]] [[4. 2. 1. ] [2. 6. 3. ] [1. 3. 3.004]] """ |
if not scipy.sparse.issparse(mat):
mat = numpy.asfarray(mat)
assert numpy.allclose(mat, mat.T)
size = mat.shape[0]
mat_diag = mat.diagonal()
gamma = abs(mat_diag).max()
off_diag = abs(mat - numpy.diag(mat_diag)).max()
delta = eps*max(gamma + off_diag, 1)
beta = numpy.sqrt(max(gamma, off_diag/size, eps))
lowtri = _gill_king(mat, beta, delta)
return lowtri |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _gill_king(mat, beta, delta):
"""Backend function for the Gill-King algorithm.""" |
size = mat.shape[0]
# initialize d_vec and lowtri
if scipy.sparse.issparse(mat):
lowtri = scipy.sparse.eye(*mat.shape)
else:
lowtri = numpy.eye(size)
d_vec = numpy.zeros(size, dtype=float)
# there are no inner for loops, everything implemented with
# vector operations for a reasonable level of efficiency
for idx in range(size):
if idx == 0:
idz = [] # column index: all columns to left of diagonal
# d_vec(idz) doesn't work in case idz is empty
else:
idz = numpy.s_[:idx]
djtemp = mat[idx, idx] - numpy.dot(
lowtri[idx, idz], d_vec[idz]*lowtri[idx, idz].T)
# C(idx, idx) in book
if idx < size - 1:
idy = numpy.s_[idx+1:size]
# row index: all rows below diagonal
ccol = mat[idy, idx] - numpy.dot(
lowtri[idy, idz], d_vec[idz]*lowtri[idx, idz].T)
# C(idy, idx) in book
theta = abs(ccol).max()
# guarantees d_vec(idx) not too small and lowtri(idy, idx) not too
# big in sufficiently positive definite case, d_vec(idx) = djtemp
d_vec[idx] = max(abs(djtemp), (theta/beta)**2, delta)
lowtri[idy, idx] = ccol/d_vec[idx]
else:
d_vec[idx] = max(abs(djtemp), delta)
# convert to usual output format: replace lowtri by lowtri*sqrt(D) and
# transpose
for idx in range(size):
lowtri[:, idx] = lowtri[:, idx]*numpy.sqrt(d_vec[idx])
# lowtri = lowtri*diag(sqrt(d_vec)) bad in sparse case
return lowtri |
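As a sanity check on the decomposition: when the input is already positive definite, the modified factor should reproduce the matrix just like ordinary Cholesky. A sketch using numpy's built-in factorization on the matrix from the `gill_king` docstring example:

```python
import numpy

# The matrix from the gill_king docstring example; it is positive definite,
# so plain Cholesky applies and L @ L.T reconstructs it exactly.
mat = numpy.array([[4.0, 2.0, 1.0],
                   [2.0, 6.0, 3.0],
                   [1.0, 3.0, 3.004]])
lowtri = numpy.linalg.cholesky(mat)
reconstructed = lowtri @ lowtri.T
```

The factor matches the docstring's lower triangle, e.g. L[1, 1] is approximately 2.2361.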
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def approximate_moment( dist, K, retall=False, control_var=None, rule="F", order=1000, **kws ):
""" Approximation method for estimation of raw statistical moments. Args: dist : Dist Distribution domain with dim=len(dist) K : numpy.ndarray The exponents of the moments of interest with shape (dim,K). control_var : Dist If provided will be used as a control variable to try to reduce the error. acc (:py:data:typing.Optional[int]):
The order of quadrature/MCI sparse : bool If True, use Smolyak's sparse grid instead of a normal tensor product grid in numerical integration. rule : str Quadrature rule Key Description "G" Optimal Gaussian quadrature from Golub-Welsch. Slow for high order, and composite is ignored. "E" Gauss-Legendre quadrature "C" Clenshaw-Curtis quadrature. Exponential growth rule is used when sparse is True to make the rule nested. Monte Carlo Integration Key Description "H" Halton sequence "K" Korobov set "L" Latin hypercube sampling "M" Hammersley sequence "R" (Pseudo-)Random sampling "S" Sobol sequence composite (:py:data:typing.Optional[int, numpy.ndarray]):
If provided, composite quadrature will be used. Ignored if gaussian=True. If an int is provided, it determines the number of even domain splits. If an array of ints, it determines the number of even domain splits along each axis. If an array of arrays/floats, it determines the location of the splits. antithetic (:py:data:typing.Optional[numpy.ndarray]):
List of bool. Represents the axes to mirror using antithetic variable during MCI. """ |
dim = len(dist)
shape = K.shape
size = int(K.size/dim)
K = K.reshape(dim, size)
if dim > 1:
shape = shape[1:]
X, W = quad.generate_quadrature(order, dist, rule=rule, normalize=True, **kws)
grid = numpy.mgrid[:len(X[0]), :size]
X = X.T[grid[0]].T
K = K.T[grid[1]].T
out = numpy.prod(X**K, 0)*W
if control_var is not None:
Y = control_var.ppf(dist.fwd(X))
mu = control_var.mom(numpy.eye(len(control_var)))
if (mu.size == 1) and (dim > 1):
mu = mu.repeat(dim)
for d in range(dim):
alpha = numpy.cov(out, Y[d])[0, 1]/numpy.var(Y[d])
out -= alpha*(Y[d]-mu)
out = numpy.sum(out, -1)
return out |
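The estimator above reduces, in one dimension, to summing node powers against quadrature weights. A self-contained sketch for the raw moments of a standard normal, assuming numpy's Gauss-Hermite rule (which targets the weight exp(-x^2), hence the change of variables x = sqrt(2) t and the 1/sqrt(pi) factor):

```python
import numpy

# 30-node Gauss-Hermite rule: exact for polynomials up to degree 59.
nodes, weights = numpy.polynomial.hermite.hermgauss(30)
x = numpy.sqrt(2.0) * nodes              # rescale to the standard normal
w = weights / numpy.sqrt(numpy.pi)       # normalize the weight function

moment_2 = numpy.sum(x**2 * w)           # E[X^2] = 1 for N(0, 1)
moment_4 = numpy.sum(x**4 * w)           # E[X^4] = 3 for N(0, 1)
```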
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def approximate_density( dist, xloc, parameters=None, cache=None, eps=1.e-7 ):
""" Approximate the probability density function. Args: dist : Dist Distribution in question. May not be an advanced variable. xloc : numpy.ndarray Location coordinates. Requires that xloc.shape=(len(dist), K). eps : float Acceptable error level for the approximations retall : bool If True return Graph with the next calculation state with the approximation. Returns: numpy.ndarray: Local probability density function with ``out.shape == xloc.shape``. To calculate actual density function, evaluate ``numpy.prod(out, 0)``. Example: [[0.0242 0.0399 0.0242]] [[0.0242 0.0399 0.0242]] """ |
if parameters is None:
parameters = dist.prm.copy()
if cache is None:
cache = {}
xloc = numpy.asfarray(xloc)
lo, up = numpy.min(xloc), numpy.max(xloc)
mu = .5*(lo+up)
eps = numpy.where(xloc < mu, eps, -eps)*xloc
floc = evaluation.evaluate_forward(
dist, xloc, parameters=parameters.copy(), cache=cache.copy())
for d in range(len(dist)):
xloc[d] += eps[d]
tmp = evaluation.evaluate_forward(
dist, xloc, parameters=parameters.copy(), cache=cache.copy())
floc[d] -= tmp[d]
xloc[d] -= eps[d]
floc = numpy.abs(floc / eps)
return floc |
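The core idea above is a finite difference of the forward (CDF) map. A one-dimensional sketch with no chaospy machinery, using the standard normal CDF via `math.erf` as the assumed distribution:

```python
from math import erf, exp, pi, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_density(x, eps=1e-6):
    """Forward-difference quotient of the CDF, mirroring the method above."""
    return (normal_cdf(x + eps) - normal_cdf(x)) / eps

exact = exp(-0.5) / sqrt(2.0 * pi)   # true pdf at x = 1
approx = approx_density(1.0)
```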
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_van_der_corput_samples(idx, number_base=2):
""" Van der Corput samples. Args: idx (int, numpy.ndarray):
The index of the sequence. If array is provided, all values in array is returned. number_base (int):
The numerical base from where to create the samples from. Returns (float, numpy.ndarray):
Van der Corput samples. """ |
assert number_base > 1
idx = numpy.asarray(idx).flatten() + 1
out = numpy.zeros(len(idx), dtype=float)
base = float(number_base)
active = numpy.ones(len(idx), dtype=bool)
while numpy.any(active):
out[active] += (idx[active] % number_base)/base
idx //= number_base
base *= number_base
active = idx > 0
return out |
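The vectorized loop above is the radical-inverse construction. A scalar standard-library sketch of the same recurrence, keeping the zero-based index and the `idx + 1` shift:

```python
def van_der_corput(index, number_base=2):
    """Return the van der Corput sample for a single zero-based index."""
    index += 1                       # same shift as the vectorized version
    result, denominator = 0.0, 1.0
    while index > 0:
        denominator *= number_base
        result += (index % number_base) / denominator
        index //= number_base
    return result
```

In base 2 the first samples are 0.5, 0.25, 0.75, ... -- the digits of the index reflected about the radix point.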
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(*args):
"""Polynomial addition.""" |
if len(args) > 2:
return add(args[0], add(*args[1:]))
if len(args) == 1:
return args[0]
part1, part2 = args
if isinstance(part2, Poly):
if part2.dim > part1.dim:
part1 = chaospy.dimension.setdim(part1, part2.dim)
elif part2.dim < part1.dim:
part2 = chaospy.dimension.setdim(part2, part1.dim)
dtype = chaospy.poly.typing.dtyping(part1.dtype, part2.dtype)
core1 = part1.A.copy()
core2 = part2.A.copy()
if np.prod(part2.shape) > np.prod(part1.shape):
shape = part2.shape
ones = np.ones(shape, dtype=int)
for key in core1:
core1[key] = core1[key]*ones
else:
shape = part1.shape
ones = np.ones(shape, dtype=int)
for key in core2:
core2[key] = core2[key]*ones
for idx in core1:
if idx in core2:
core2[idx] = core2[idx] + core1[idx]
else:
core2[idx] = core1[idx]
out = core2
return Poly(out, part1.dim, shape, dtype)
part2 = np.asarray(part2)
core = part1.A.copy()
dtype = chaospy.poly.typing.dtyping(part1.dtype, part2.dtype)
zero = (0,)*part1.dim
if zero not in core:
core[zero] = np.zeros(part1.shape, dtype=int)
core[zero] = core[zero] + part2
if np.prod(part2.shape) > np.prod(part1.shape):
ones = np.ones(part2.shape, dtype=dtype)
for key in core:
core[key] = core[key]*ones
return Poly(core, part1.dim, None, dtype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mul(*args):
"""Polynomial multiplication.""" |
if len(args) > 2:
return mul(args[0], mul(*args[1:]))
if len(args) == 1:
return args[0]
part1, part2 = args
if not isinstance(part2, Poly):
if isinstance(part2, (float, int)):
part2 = np.asarray(part2)
if not part2.shape:
core = part1.A.copy()
dtype = chaospy.poly.typing.dtyping(
part1.dtype, part2.dtype)
for key in part1.keys:
core[key] = np.asarray(core[key]*part2, dtype)
return Poly(core, part1.dim, part1.shape, dtype)
part2 = Poly(part2)
if part2.dim > part1.dim:
part1 = chaospy.dimension.setdim(part1, part2.dim)
elif part2.dim < part1.dim:
part2 = chaospy.dimension.setdim(part2, part1.dim)
if np.prod(part1.shape) >= np.prod(part2.shape):
shape = part1.shape
else:
shape = part2.shape
dtype = chaospy.poly.typing.dtyping(part1.dtype, part2.dtype)
if part1.dtype != part2.dtype:
if part1.dtype == dtype:
part2 = chaospy.poly.typing.asfloat(part2)
else:
part1 = chaospy.poly.typing.asfloat(part1)
core = {}
for idx1 in part2.A:
for idx2 in part1.A:
key = tuple(np.array(idx1) + np.array(idx2))
core[key] = np.asarray(
core.get(key, 0) + part2.A[idx1]*part1.A[idx2])
core = {key: value for key, value in core.items() if np.any(value)}
out = Poly(core, part1.dim, shape, dtype)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot(self, resolution_constant_regions=20, resolution_smooth_regions=200):
""" Return arrays x, y for plotting the piecewise constant function. Just the minimum number of straight lines are returned if ``eps=0``, otherwise `resolution_constant_regions` plotting intervals are insed in the constant regions with `resolution_smooth_regions` plotting intervals in the smoothed regions. """ |
if self.eps == 0:
x = []; y = []
for I, value in zip(self._indicator_functions, self._values):
x.append(I.L)
y.append(value)
x.append(I.R)
y.append(value)
return x, y
else:
n = float(resolution_smooth_regions)/self.eps
if len(self.data) == 1:
return [self.L, self.R], [self._values[0], self._values[0]]
else:
x = [np.linspace(self.data[0][0], self.data[1][0]-self.eps,
resolution_constant_regions+1)]
# Iterate over all internal discontinuities
for I in self._indicator_functions[1:]:
x.append(np.linspace(I.L-self.eps, I.L+self.eps,
resolution_smooth_regions+1))
x.append(np.linspace(I.L+self.eps, I.R-self.eps,
resolution_constant_regions+1))
# Last part
x.append(np.linspace(I.R-self.eps, I.R, 3))
x = np.concatenate(x)
y = self(x)
return x, y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def gradient(poly):
""" Gradient of a polynomial. Args: poly (Poly) : polynomial to take gradient of. Returns: (Poly) : The resulting gradient. Examples: [2, q2, q1] """ |
return differential(poly, chaospy.poly.collection.basis(1, 1, poly.dim)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_korobov_samples(order, dim, base=17797):
""" Create Korobov lattice samples. Args: order (int):
The order of the Korobov latice. Defines the number of samples. dim (int):
The number of dimensions in the output. base (int):
The number based used to calculate the distribution of values. Returns (numpy.ndarray):
Korobov lattice with ``shape == (dim, order)`` """ |
values = numpy.empty(dim)
values[0] = 1
for idx in range(1, dim):
values[idx] = base*values[idx-1] % (order+1)
grid = numpy.mgrid[:dim, :order+1]
out = values[grid[0]] * (grid[1]+1) / (order+1.) % 1.
return out[:, :order] |
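The lattice construction above is easy to verify by hand with a tiny base. This worked sketch uses base=3 purely for illustration (the function's default is 17797):

```python
import numpy

# 2-dimensional Korobov lattice of order 4 with a small illustrative base.
order, dim, base = 4, 2, 3
values = numpy.empty(dim)
values[0] = 1
for idx in range(1, dim):
    values[idx] = base * values[idx - 1] % (order + 1)   # generating vector
grid = numpy.mgrid[:dim, :order + 1]
samples = values[grid[0]] * (grid[1] + 1) / (order + 1.0) % 1.0
samples = samples[:, :order]                             # shape (dim, order)
```

Here the generating vector is (1, 3), so the first axis steps through k/5 and the second through (3k mod 5)/5.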
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_genz_keister(order, dist, rule=24):
""" Genz-Keister quadrature rule. Eabsicassample: [[0.0416 0.5 0.9584]] [0.1667 0.6667 0.1667] """ |
assert isinstance(rule, int)
if len(dist) > 1:
if isinstance(order, int):
values = [quad_genz_keister(order, d, rule) for d in dist]
else:
values = [quad_genz_keister(order[i], dist[i], rule)
for i in range(len(dist))]
abscissas = [_[0][0] for _ in values]
abscissas = chaospy.quad.combine(abscissas).T
weights = [_[1] for _ in values]
weights = np.prod(chaospy.quad.combine(weights), -1)
return abscissas, weights
foo = chaospy.quad.genz_keister.COLLECTION[rule]
abscissas, weights = foo(order)
abscissas = dist.inv(scipy.special.ndtr(abscissas))
abscissas = abscissas.reshape(1, abscissas.size)
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Perc(poly, q, dist, sample=10000, **kws):
""" Percentile function. Note that this function is an empirical function that operates using Monte Carlo sampling. Args: poly (Poly):
Polynomial of interest. q (numpy.ndarray):
positions where percentiles are taken. Must be a number or an array, where all values are on the interval ``[0, 100]``. dist (Dist):
Defines the space where percentile is taken. sample (int):
Number of samples used in estimation. Returns: (numpy.ndarray):
Percentiles of ``poly`` with ``Q.shape=poly.shape+q.shape``. Examples: [[ 0. -3. -6.3 ] [ 0. -0.64 -0.04] [ 0.03 -0.01 -0. ] [ 0.15 0.66 0.04] [ 2.1 3. 6.3 ]] """ |
shape = poly.shape
poly = polynomials.flatten(poly)
q = numpy.array(q)/100.
dim = len(dist)
# Interior
Z = dist.sample(sample, **kws)
if dim==1:
Z = (Z, )
q = numpy.array([q])
poly1 = poly(*Z)
# Min/max
mi, ma = dist.range().reshape(2, dim)
ext = numpy.mgrid[(slice(0, 2, 1), )*dim].reshape(dim, 2**dim).T
ext = numpy.where(ext, mi, ma).T
poly2 = poly(*ext)
poly2 = numpy.array([_ for _ in poly2.T if not numpy.any(numpy.isnan(_))]).T
# Finish
if poly2.shape:
poly1 = numpy.concatenate([poly1, poly2], -1)
samples = poly1.shape[-1]
poly1.sort()
out = poly1.T[numpy.asarray(q*(samples-1), dtype=int)]
out = out.reshape(q.shape + shape)
return out |
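Stripped of the polynomial and boundary handling, the empirical machinery above is: sample, evaluate, sort, and index at the requested quantiles. A minimal sketch for the model q0**2 under a standard normal:

```python
import numpy

# Empirical percentiles of q0**2 with q0 ~ N(0, 1); the exact 50th and 90th
# percentiles of a chi-squared(1) variable are about 0.455 and 2.706.
rng = numpy.random.default_rng(123)
samples = rng.normal(size=100000)
evals = numpy.sort(samples**2)
q = numpy.array([50.0, 90.0]) / 100.0
percentiles = evals[(q * (evals.size - 1)).astype(int)]
```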
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def probabilistic_collocation(order, dist, subset=.1):
""" Probabilistic collocation method. Args: order (int, numpy.ndarray) : Quadrature order along each axis. dist (Dist) : Distribution to generate samples from. subset (float) : Rate of which to removed samples. """ |
abscissas, weights = chaospy.quad.collection.golub_welsch(order, dist)
likelihood = dist.pdf(abscissas)
alpha = numpy.random.random(len(weights))
alpha = likelihood > alpha*subset*numpy.max(likelihood)
abscissas = abscissas.T[alpha].T
weights = weights[alpha]
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_function(rule, domain, normalize, **parameters):
""" Create a quadrature function and set default parameter values. Args: rule (str):
Name of quadrature rule defined in ``QUAD_FUNCTIONS``. domain (Dist, numpy.ndarray):
Defines ``lower`` and ``upper`` that is passed quadrature rule. If ``Dist``, ``domain`` is renamed to ``dist`` and also passed. normalize (bool):
In the case of distributions, the abscissas and weights are not tailored to a distribution beyond matching the bounds. If True, the samples are normalized by multiplying the weights with the density of the distribution evaluated at the abscissas, and normalized afterwards to sum to one. parameters (:py:data:typing.Any):
Redefining of the parameter defaults. Only add parameters that the quadrature rule expect. Returns: (:py:data:typing.Callable):
Function that can be called only using argument ``order``. """ |
from ...distributions.baseclass import Dist
if isinstance(domain, Dist):
lower, upper = domain.range()
parameters["dist"] = domain
else:
lower, upper = numpy.array(domain)
parameters["lower"] = lower
parameters["upper"] = upper
quad_function = QUAD_FUNCTIONS[rule]
parameters_spec = inspect.getargspec(quad_function)[0]
parameters_spec = {key: None for key in parameters_spec}
del parameters_spec["order"]
for key in parameters_spec:
if key in parameters:
parameters_spec[key] = parameters[key]
def _quad_function(order, *args, **kws):
"""Implementation of quadrature function."""
params = parameters_spec.copy()
params.update(kws)
abscissas, weights = quad_function(order, *args, **params)
# normalize if prudent:
if rule in UNORMALIZED_QUADRATURE_RULES and normalize:
if isinstance(domain, Dist):
if len(domain) == 1:
weights *= domain.pdf(abscissas).flatten()
else:
weights *= domain.pdf(abscissas)
weights /= numpy.sum(weights)
return abscissas, weights
return _quad_function |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_regression( polynomials, abscissas, evals, rule="LS", retall=False, order=0, alpha=-1, ):
""" Fit a polynomial chaos expansion using linear regression. Args: polynomials (chaospy.poly.base.Poly):
Polynomial expansion with ``polynomials.shape=(M,)`` and `polynomials.dim=D`. abscissas (numpy.ndarray):
Collocation nodes with ``abscissas.shape == (D, K)``. evals (numpy.ndarray):
Model evaluations with ``len(evals)=K``. retall (bool):
If True return Fourier coefficients in addition to R. order (int):
Tikhonov regularization order. alpha (float):
Damping parameter for the Tikhonov regularization. Calculated automatically if negative. Returns: (Poly, numpy.ndarray):
Fitted polynomial with ``R.shape=evals.shape[1:]`` and ``R.dim=D``. The Fourier coefficients in the estimation. Examples: 0.5q0+0.5q1+1.0 """ |
abscissas = numpy.asarray(abscissas)
if len(abscissas.shape) == 1:
abscissas = abscissas.reshape(1, *abscissas.shape)
evals = numpy.array(evals)
poly_evals = polynomials(*abscissas).T
shape = evals.shape[1:]
evals = evals.reshape(evals.shape[0], int(numpy.prod(evals.shape[1:])))
if isinstance(rule, str):
rule = rule.upper()
if rule == "LS":
uhat = linalg.lstsq(poly_evals, evals)[0]
elif rule == "T":
uhat = rlstsq(poly_evals, evals, order=order, alpha=alpha, cross=False)
elif rule == "TC":
uhat = rlstsq(poly_evals, evals, order=order, alpha=alpha, cross=True)
else:
from sklearn.linear_model.base import LinearModel
assert isinstance(rule, LinearModel)
uhat = rule.fit(poly_evals, evals).coef_.T
evals = evals.reshape(evals.shape[0], *shape)
approx_model = chaospy.poly.sum((polynomials*uhat.T), -1)
approx_model = chaospy.poly.reshape(approx_model, shape)
if retall == 1:
return approx_model, uhat
elif retall == 2:
return approx_model, uhat, poly_evals
return approx_model |
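The default "LS" branch above is an ordinary least-squares solve of the basis-evaluation matrix against the model evaluations. A self-contained sketch with a hand-built basis [1, x] and exact linear data:

```python
import numpy

# Fit coefficients of the basis [1, x] to evaluations of 1 + 0.5*x.
abscissas = numpy.array([0.0, 1.0, 2.0, 3.0])
evals = 1.0 + 0.5 * abscissas
poly_evals = numpy.stack([numpy.ones_like(abscissas), abscissas], axis=1)
uhat = numpy.linalg.lstsq(poly_evals, evals, rcond=None)[0]
```

With noise-free data the recovered Fourier coefficients are exactly [1.0, 0.5], matching the docstring's fitted polynomial 0.5q0+1.0 in one dimension.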
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_genz_keister_24(order):
""" Hermite Genz-Keister 24 rule. Args: order (int):
The quadrature order. Must be in the interval (0, 8). Returns: (:py:data:typing.Tuple[numpy.ndarray, numpy.ndarray]):
Abscissas and weights Examples: [-1.7321 0. 1.7321] [0.1667 0.6667 0.1667] """ |
order = sorted(GENZ_KEISTER_24.keys())[order]
abscissas, weights = GENZ_KEISTER_24[order]
abscissas = numpy.array(abscissas)
weights = numpy.array(weights)
weights /= numpy.sum(weights)
abscissas *= numpy.sqrt(2)
return abscissas, weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def contains_call_signature(caller, key):
""" Check if a function or method call signature contains a specific argument. Args: caller (Callable):
Method or function to check if signature is contain in. key (str):
Signature to look for. Returns: True if ``key`` exits in ``caller`` call signature. Examples: True False True False """ |
try:
args = inspect.signature(caller).parameters
except AttributeError:
args = inspect.getargspec(caller).args
return key in args |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Sens_m_sample(poly, dist, samples, rule="R"):
""" First order sensitivity indices estimated using Saltelli's method. Args: poly (chaospy.Poly):
If provided, samples are evaluated through the polynomial before being returned. dist (chaospy.Dist):
distribution to sample from. samples (int):
The number of samples to draw for each matrix. rule (str):
Scheme for generating random samples. Return: (numpy.ndarray):
array with `shape == (len(dist), len(poly))` where `sens[dim][pol]` is the first sensitivity index for distribution dimensions `dim` and polynomial index `pol`. Examples: [q0^2, q0q1, q1^2] [[0.008 0.0026 0. ] [0. 0.6464 2.1321]] """ |
dim = len(dist)
generator = Saltelli(dist, samples, poly, rule=rule)
zeros = [0]*dim
ones = [1]*dim
index = [0]*dim
variance = numpy.var(generator[zeros], -1)
matrix_0 = generator[zeros]
matrix_1 = generator[ones]
mean = .5*(numpy.mean(matrix_1) + numpy.mean(matrix_0))
matrix_0 -= mean
matrix_1 -= mean
out = [
numpy.mean(matrix_1*((generator[index]-mean)-matrix_0), -1) /
numpy.where(variance, variance, 1)
for index in numpy.eye(dim, dtype=bool)
]
return numpy.array(out) |
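The estimator above follows Saltelli's scheme: two independent sample matrices are drawn, and for each input dimension a hybrid matrix takes that single column from the other matrix. A self-contained pure-Python sketch for a model where the second input is inert, so its first order index should be zero (all names here are illustrative, not chaospy API):

```python
import random

random.seed(42)

def model(x0, x1):
    # only x0 drives the output; x1 is inert
    return 3.0*x0

N = 20000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]
yA = [model(*a) for a in A]
yB = [model(*b) for b in B]
mean = sum(yA)/N
variance = sum((y-mean)**2 for y in yA)/N

def first_order_index(dim):
    # hybrid matrix: column `dim` from A, remaining columns from B
    yAB = [model(*[a[i] if i == dim else b[i] for i in range(2)])
           for a, b in zip(A, B)]
    covariance = sum(ya*(yab-yb)
                     for ya, yab, yb in zip(yA, yAB, yB))/N
    return covariance/variance

s0 = first_order_index(0)  # close to 1
s1 = first_order_index(1)  # exactly 0 for this model
```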
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Sens_m2_sample(poly, dist, samples, rule="R"):
""" Second order sensitivity indices estimated using Saltelli's method. Args: poly (chaospy.Poly):
If provided, samples are evaluated through the polynomial before being returned. dist (chaospy.Dist):
distribution to sample from. samples (int):
The number of samples to draw for each matrix. rule (str):
Scheme for generating random samples. Return: (numpy.ndarray):
array with `shape == (len(dist), len(dist), len(poly))` where `sens[dim1][dim2][pol]` is the correlating sensitivity between dimension `dim1` and `dim2` and polynomial index `pol`. Examples: [q0^2, q0q1, q1^2] [[[ 0.008 0.0026 0. ] [-0.0871 1.1516 1.2851]] <BLANKLINE> [[-0.0871 1.1516 1.2851] [ 0. 0.7981 1.38 ]]] """ |
dim = len(dist)
generator = Saltelli(dist, samples, poly, rule=rule)
zeros = [0]*dim
ones = [1]*dim
index = [0]*dim
variance = numpy.var(generator[zeros], -1)
matrix_0 = generator[zeros]
matrix_1 = generator[ones]
mean = .5*(numpy.mean(matrix_1) + numpy.mean(matrix_0))
matrix_0 -= mean
matrix_1 -= mean
out = numpy.empty((dim, dim)+poly.shape)
for dim1 in range(dim):
index[dim1] = 1
matrix = generator[index]-mean
out[dim1, dim1] = numpy.mean(
matrix_1*(matrix-matrix_0),
-1,
) / numpy.where(variance, variance, 1)
for dim2 in range(dim1+1, dim):
index[dim2] = 1
matrix = generator[index]-mean
out[dim1, dim2] = out[dim2, dim1] = numpy.mean(
matrix_1*(matrix-matrix_0),
-1,
) / numpy.where(variance, variance, 1)
index[dim2] = 0
index[dim1] = 0
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Sens_t_sample(poly, dist, samples, rule="R"):
""" Total order sensitivity indices estimated using Saltelli's method. Args: poly (chaospy.Poly):
If provided, samples are evaluated through the polynomial before being returned. dist (chaospy.Dist):
distribution to sample from. samples (int):
The number of samples to draw for each matrix. rule (str):
Scheme for generating random samples. Return: (numpy.ndarray):
array with `shape == (len(dist), len(poly))` where `sens[dim][pol]` is the total order sensitivity index for distribution dimensions `dim` and polynomial index `pol`. Examples: [q0^2, q0q1, q1^2] [[ 1. 0.2 -0.3807] [ 0.9916 0.9962 1. ]] """ |
generator = Saltelli(dist, samples, poly, rule=rule)
dim = len(dist)
zeros = [0]*dim
variance = numpy.var(generator[zeros], -1)
return numpy.array([
1-numpy.mean((generator[~index]-generator[zeros])**2, -1,) /
(2*numpy.where(variance, variance, 1))
for index in numpy.eye(dim, dtype=bool)
]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_matrix(self, indices):
"""Retrieve Saltelli matrix.""" |
new = numpy.empty(self.samples1.shape)
for idx in range(len(indices)):
if indices[idx]:
new[idx] = self.samples1[idx]
else:
new[idx] = self.samples2[idx]
if self.poly:
new = self.poly(*new)
return new |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_stieltjes(dist, order, accuracy=100, normed=False, retall=False, **kws):
""" Discretized Stieltjes' method. Args: dist (Dist):
Distribution defining the space to create weights for. order (int):
The polynomial order create. accuracy (int):
The quadrature order of the Clenshaw-Curtis nodes to use at each step, if approximation is used. retall (bool):
If included, more values are returned Returns: (list):
List of polynomials, norms of polynomials and three terms coefficients. The list created from the method with ``len(orth) == order+1``. If ``len(dist) > 1``, then each polynomials are multivariate. (numpy.ndarray, numpy.ndarray, numpy.ndarray):
If ``retall`` is true, also return polynomial norms and the three term coefficients. The norms of the polynomials with ``norms.shape = (dim, order+1)`` where ``dim`` are the number of dimensions in dist. The coefficients have ``shape == (dim, order+1)``. Examples: [q0^2-1.0, q1^2-4.0q1+2.0] [[1. 1. 2.] [1. 1. 4.]] [[0. 0. 0.] [1. 3. 5.]] [[1. 1. 2.] [1. 1. 4.]] q0^2-q0+0.16666667 [[1. 0.0833 0.0056]] """ |
from .. import distributions
assert not distributions.evaluation.get_dependencies(dist)
if len(dist) > 1:
# one for each dimension:
orth, norms, coeff1, coeff2 = zip(*[generate_stieltjes(
_, order, accuracy, normed, retall=True, **kws) for _ in dist])
# ensure each polynomial has its own dimension:
orth = [[chaospy.setdim(_, len(orth)) for _ in poly] for poly in orth]
orth = [[chaospy.rolldim(_, len(dist)-idx) for _ in poly] for idx, poly in enumerate(orth)]
orth = [chaospy.poly.base.Poly(_) for _ in zip(*orth)]
if not retall:
return orth
# stack results:
norms = numpy.vstack(norms)
coeff1 = numpy.vstack(coeff1)
coeff2 = numpy.vstack(coeff2)
return orth, norms, coeff1, coeff2
try:
orth, norms, coeff1, coeff2 = _stieltjes_analytical(
dist, order, normed)
except NotImplementedError:
orth, norms, coeff1, coeff2 = _stieltjes_approx(
dist, order, accuracy, normed, **kws)
if retall:
assert not numpy.any(numpy.isnan(coeff1))
assert not numpy.any(numpy.isnan(coeff2))
return orth, norms, coeff1, coeff2
return orth |
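Discretized Stieltjes builds the orthogonal polynomials through the three-term recurrence, estimating each coefficient by quadrature. A pure-Python sketch recovering the first recurrence coefficients for the uniform weight on [0, 1] with a fine midpoint rule, where the first `coeff1` entry approaches 1/2 and the first nontrivial `coeff2` entry approaches 1/12 (variable names are illustrative):

```python
# Discretized Stieltjes for the uniform weight on [0, 1]:
N = 4000
nodes = [(i+0.5)/N for i in range(N)]  # midpoint rule
weight = 1.0/N

norm0 = sum(weight*1.0*1.0 for _ in nodes)       # <p0, p0> = 1
coeff1_0 = sum(weight*x for x in nodes)/norm0    # a_0 -> 1/2

p1 = [x-coeff1_0 for x in nodes]                 # p1 = (x - a_0)*p0
norm1 = sum(weight*p*p for p in p1)
coeff2_1 = norm1/norm0                           # b_1 -> 1/12
```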
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _stieltjes_analytical(dist, order, normed):
"""Stieltjes' method with analytical recurrence coefficients.""" |
dimensions = len(dist)
mom_order = numpy.arange(order+1).repeat(dimensions)
mom_order = mom_order.reshape(order+1, dimensions).T
coeff1, coeff2 = dist.ttr(mom_order)
coeff2[:, 0] = 1.
poly = chaospy.poly.collection.core.variable(dimensions)
if normed:
orth = [
poly**0*numpy.ones(dimensions),
(poly-coeff1[:, 0])/numpy.sqrt(coeff2[:, 1]),
]
for order_ in range(1, order):
orth.append(
(orth[-1]*(poly-coeff1[:, order_])
-orth[-2]*numpy.sqrt(coeff2[:, order_]))
/numpy.sqrt(coeff2[:, order_+1])
)
norms = numpy.ones(coeff2.shape)
else:
orth = [poly-poly, poly**0*numpy.ones(dimensions)]
for order_ in range(order):
orth.append(
orth[-1]*(poly-coeff1[:, order_])
- orth[-2]*coeff2[:, order_]
)
orth = orth[1:]
norms = numpy.cumprod(coeff2, 1)
return orth, norms, coeff1, coeff2 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _stieltjes_approx(dist, order, accuracy, normed, **kws):
"""Stieltjes' method with approximative recurrence coefficients.""" |
kws["rule"] = kws.get("rule", "C")
assert kws["rule"].upper() != "G"
abscissas, weights = chaospy.quad.generate_quadrature(
accuracy, dist.range(), **kws)
weights = weights*dist.pdf(abscissas)
poly = chaospy.poly.variable(len(dist))
orth = [poly*0, poly**0]
inner = numpy.sum(abscissas*weights, -1)
norms = [numpy.ones(len(dist)), numpy.ones(len(dist))]
coeff1 = []
coeff2 = []
for _ in range(order):
coeff1.append(inner/norms[-1])
coeff2.append(norms[-1]/norms[-2])
orth.append((poly-coeff1[-1])*orth[-1] - orth[-2]*coeff2[-1])
raw_nodes = orth[-1](*abscissas)**2*weights
inner = numpy.sum(abscissas*raw_nodes, -1)
norms.append(numpy.sum(raw_nodes, -1))
if normed:
orth[-1] = orth[-1]/numpy.sqrt(norms[-1])
coeff1.append(inner/norms[-1])
coeff2.append(norms[-1]/norms[-2])
coeff1 = numpy.transpose(coeff1)
coeff2 = numpy.transpose(coeff2)
norms = numpy.array(norms[1:]).T
orth = orth[1:]
return orth, norms, coeff1, coeff2 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_state(seed_value=None, step=None):
"""Set random seed.""" |
global RANDOM_SEED # pylint: disable=global-statement
if seed_value is not None:
RANDOM_SEED = seed_value
if step is not None:
RANDOM_SEED += step |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rule_generator(*funcs):
""" Constructor for creating multivariate quadrature generator. Args: funcs (:py:data:typing.Callable):
One dimensional integration rule where each rule returns ``abscissas`` and ``weights`` as one dimensional arrays. They must take one positional argument ``order``. Returns: (:py:data:typing.Callable):
Multidimensional integration quadrature function that takes the arguments ``order`` and ``sparse``, and a optional ``part``. The argument ``sparse`` is used to select for if Smolyak sparse grid is used, and ``part`` defines if subset of rule should be generated (for parallelization). Example: [[-1. -1. 0. 0. 1. 1. ] [ 0.2113 0.7887 0.2113 0.7887 0.2113 0.7887]] [0.1667 0.1667 0.6667 0.6667 0.1667 0.1667] """ |
dim = len(funcs)
tensprod_rule = create_tensorprod_function(funcs)
assert hasattr(tensprod_rule, "__call__")
mv_rule = create_mv_rule(tensprod_rule, dim)
assert hasattr(mv_rule, "__call__")
return mv_rule |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_tensorprod_function(funcs):
"""Combine 1-D rules into multivariate rule using tensor product.""" |
dim = len(funcs)
def tensprod_rule(order, part=None):
"""Tensor product rule."""
order = order*numpy.ones(dim, int)
values = [funcs[idx](order[idx]) for idx in range(dim)]
abscissas = [numpy.array(_[0]).flatten() for _ in values]
abscissas = chaospy.quad.combine(abscissas, part=part).T
weights = [numpy.array(_[1]).flatten() for _ in values]
weights = numpy.prod(chaospy.quad.combine(weights, part=part), -1)
return abscissas, weights
return tensprod_rule |
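The tensor product step can be sketched with the standard library alone: the multivariate abscissas are the Cartesian product of the 1-D nodes, and each multivariate weight is the product of the corresponding 1-D weights (the `tensor_product` helper is illustrative):

```python
from itertools import product

def tensor_product(rules):
    """Combine 1-D (nodes, weights) rules via tensor product."""
    abscissas = list(product(*[nodes for nodes, _ in rules]))
    weights = []
    for combo in product(*[wgts for _, wgts in rules]):
        w = 1.0
        for factor in combo:
            w *= factor
        weights.append(w)
    return abscissas, weights

# two-point Gauss-Legendre on [0, 1], tensored into two dimensions
h = 0.5/3**0.5
gauss = ([0.5-h, 0.5+h], [0.5, 0.5])
abscissas, weights = tensor_product([gauss, gauss])
```

With two points per axis this yields four nodes whose weights sum to one, and the rule integrates low-order monomials over the unit square exactly.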
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_mv_rule(tensorprod_rule, dim):
"""Convert tensor product rule into a multivariate quadrature generator.""" |
def mv_rule(order, sparse=False, part=None):
"""
Multidimensional integration rule.
Args:
order (int, numpy.ndarray) : order of integration rule. If numpy.ndarray,
order along each axis.
sparse (bool) : use Smolyak sparse grid.
Returns:
(numpy.ndarray, numpy.ndarray) abscissas and weights.
"""
if sparse:
order = numpy.ones(dim, dtype=int)*order
tensorprod_rule_ = lambda order, part=part:\
tensorprod_rule(order, part=part)
return chaospy.quad.sparse_grid(tensorprod_rule_, order)
return tensorprod_rule(order, part=part)
return mv_rule |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_fejer(order, lower=0, upper=1, growth=False, part=None):
""" Generate the quadrature abscissas and weights in Fejer quadrature. Example: [[0.0955 0.3455 0.6545 0.9045]] [0.1804 0.2996 0.2996 0.1804] """ |
order = numpy.asarray(order, dtype=int).flatten()
lower = numpy.asarray(lower).flatten()
upper = numpy.asarray(upper).flatten()
dim = max(lower.size, upper.size, order.size)
order = numpy.ones(dim, dtype=int)*order
lower = numpy.ones(dim)*lower
upper = numpy.ones(dim)*upper
composite = numpy.array([numpy.arange(2)]*dim)
if growth:
results = [
_fejer(numpy.where(order[i] == 0, 0, 2.**(order[i]+1)-2))
for i in range(dim)
]
else:
results = [
_fejer(order[i], composite[i]) for i in range(dim)
]
abscis = [_[0] for _ in results]
weight = [_[1] for _ in results]
abscis = chaospy.quad.combine(abscis, part=part).T
weight = chaospy.quad.combine(weight, part=part)
abscis = ((upper-lower)*abscis.T + lower).T
weight = numpy.prod(weight*(upper-lower), -1)
assert len(abscis) == dim
assert len(weight) == len(abscis.T)
return abscis, weight |
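For reference, Fejér's first rule has closed-form weights; a standalone sketch on [-1, 1] using only the standard library (the implementation above may use a different Fejér variant and additionally rescales to [lower, upper]):

```python
import math

def fejer_first(n):
    """Fejer type-1 rule on [-1, 1] via the closed-form weights."""
    thetas = [(2*k+1)*math.pi/(2*n) for k in range(n)]
    nodes = [math.cos(t) for t in thetas]
    weights = [
        (2.0/n)*(1.0 - 2.0*sum(
            math.cos(2*m*t)/(4*m*m-1) for m in range(1, n//2+1)))
        for t in thetas
    ]
    return nodes, weights

nodes, weights = fejer_first(3)
```

The three-point rule reproduces the total mass 2 and integrates `x**2` to 2/3 exactly.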
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_latin_hypercube_samples(order, dim=1):
""" Latin Hypercube sampling. Args: order (int):
The order of the latin hyper-cube. Defines the number of samples. dim (int):
The number of dimensions in the latin hyper-cube. Returns (numpy.ndarray):
Latin hyper-cube with ``shape == (dim, order)``. """ |
randoms = numpy.random.random(order*dim).reshape((dim, order))
for dim_ in range(dim):
perm = numpy.random.permutation(order) # pylint: disable=no-member
randoms[dim_] = (perm + randoms[dim_])/order
return randoms |
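The same construction with only the standard library: per dimension, shuffle the stratum indices and jitter one point inside each stratum, so every one of the `order` equal-width strata receives exactly one sample (helper name is illustrative):

```python
import random

random.seed(7)

def latin_hypercube(order, dim=1):
    """One point in each of `order` equal strata, per dimension."""
    out = []
    for _ in range(dim):
        strata = list(range(order))
        random.shuffle(strata)
        out.append([(s + random.random())/order for s in strata])
    return out

samples = latin_hypercube(10, dim=2)
```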
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_hammersley_samples(order, dim=1, burnin=-1, primes=()):
""" Create samples from the Hammersley set. For ``dim == 1`` the sequence falls back to Van Der Corput sequence. Args: order (int):
The order of the Hammersley sequence. Defines the number of samples. dim (int):
The number of dimensions in the Hammersley sequence. burnin (int):
Skip the first ``burnin`` samples. If negative, the maximum of ``primes`` is used. primes (tuple):
The (non-)prime base to calculate values along each axis. If empty, growing prime values starting from 2 will be used. Returns: (numpy.ndarray):
Hammersley set with ``shape == (dim, order)``. """ |
if dim == 1:
return create_halton_samples(
order=order, dim=1, burnin=burnin, primes=primes)
out = numpy.empty((dim, order), dtype=float)
out[:dim-1] = create_halton_samples(
order=order, dim=dim-1, burnin=burnin, primes=primes)
out[dim-1] = numpy.linspace(0, 1, order+2)[1:-1]
return out |
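For `dim == 1` the Hammersley set falls back to the Van der Corput sequence, which mirrors the base-`b` digits of the sample index around the radix point. A standalone sketch in base 2 (function name is illustrative):

```python
def van_der_corput(n, base=2):
    """First n values of the van der Corput sequence (1-indexed)."""
    out = []
    for index in range(1, n+1):
        value, denom = 0.0, 1.0
        while index:
            index, digit = divmod(index, base)
            denom *= base
            value += digit/denom
        out.append(value)
    return out

seq = van_der_corput(4)  # [0.5, 0.25, 0.75, 0.125]
```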
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_primes(threshold):
""" Generate prime values using sieve of Eratosthenes method. Args: threshold (int):
The upper bound for the size of the prime values. Returns (List[int]):
All primes from 2 and up to ``threshold``. """ |
if threshold == 2:
return [2]
elif threshold < 2:
return []
numbers = list(range(3, threshold+1, 2))
root_of_threshold = threshold ** 0.5
half = int((threshold+1)/2-1)
idx = 0
counter = 3
while counter <= root_of_threshold:
if numbers[idx]:
idy = int((counter*counter-3)/2)
numbers[idy] = 0
while idy < half:
numbers[idy] = 0
idy += counter
idx += 1
counter = 2*idx+3
return [2] + [number for number in numbers if number] |
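The same sieve written with a plain boolean array is easier to follow than the packed odd-number bookkeeping above; a standalone sketch (helper name is illustrative):

```python
def primes_up_to(threshold):
    """Sieve of Eratosthenes, returning all primes <= threshold."""
    if threshold < 2:
        return []
    is_prime = [True]*(threshold+1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p*p <= threshold:
        if is_prime[p]:
            # strike every multiple of p, starting at p*p
            for multiple in range(p*p, threshold+1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```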
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def evaluate_inverse(distribution, u_data, cache=None, parameters=None):
""" Evaluate inverse Rosenblatt transformation. Args: distribution (Dist):
Distribution to evaluate. u_data (numpy.ndarray):
Locations for where evaluate inverse transformation distribution at. parameters (:py:data:typing.Any):
Collection of parameters to override the default ones in the distribution. cache (:py:data:typing.Any):
A collection of previous calculations in case the same distribution turns up on more than one occasion. Returns: The cumulative distribution values of ``distribution`` at location ``u_data`` using parameters ``parameters``. """ |
if cache is None:
cache = {}
out = numpy.zeros(u_data.shape)
# The distribution itself knows how to handle the inverse Rosenblatt transform.
if hasattr(distribution, "_ppf"):
parameters = load_parameters(
distribution, "_ppf", parameters=parameters, cache=cache)
out[:] = distribution._ppf(u_data.copy(), **parameters)
# Approximate inverse Rosenblatt based on cumulative distribution function.
else:
from .. import approximation
parameters = load_parameters(
distribution, "_cdf", parameters=parameters, cache=cache)
out[:] = approximation.approximate_inverse(
distribution, u_data.copy(), cache=cache.copy(), parameters=parameters)
# Store cache.
cache[distribution] = out
return out |
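When `_ppf` is missing, the inverse transform has to be approximated from the forward CDF. The core idea can be sketched as plain bisection on a monotone CDF (chaospy's `approximate_inverse` is more elaborate; `normal_cdf` and `bisect_inverse` here are illustrative stand-ins):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5*(1.0 + math.erf(x/math.sqrt(2.0)))

def bisect_inverse(cdf, u, lower=-10.0, upper=10.0, iterations=80):
    """Invert a monotone CDF by bisection."""
    for _ in range(iterations):
        mid = 0.5*(lower+upper)
        if cdf(mid) < u:
            lower = mid
        else:
            upper = mid
    return 0.5*(lower+upper)

q = bisect_inverse(normal_cdf, 0.975)  # about 1.96
```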
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mom_recurse(self, idxi, idxj, idxk):
"""Backend mement main loop.""" |
rank_ = min(
chaospy.bertran.rank(idxi, self.dim),
chaospy.bertran.rank(idxj, self.dim),
chaospy.bertran.rank(idxk, self.dim)
)
par, axis0 = chaospy.bertran.parent(idxk, self.dim)
gpar, _ = chaospy.bertran.parent(par, self.dim, axis0)
idxi_child = chaospy.bertran.child(idxi, self.dim, axis0)
oneup = chaospy.bertran.child(0, self.dim, axis0)
out1 = self.mom_111(idxi_child, idxj, par)
out2 = self.mom_111(
chaospy.bertran.child(oneup, self.dim, axis0), par, par)
for k in range(gpar, idxk):
if chaospy.bertran.rank(k, self.dim) >= rank_:
out1 -= self.mom_111(oneup, k, par) \
* self.mom_111(idxi, idxj, k)
out2 -= self.mom_111(oneup, par, k) \
* self(oneup, k, par)
return out1 / out2 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Sens_m_nataf(order, dist, samples, vals, **kws):
""" Variance-based decomposition through the Nataf distribution. Generates first order sensitivity indices Args: order (int):
Polynomial order used ``orth_ttr``. dist (Copula):
Assumed to be Nataf with independent components samples (numpy.ndarray):
Samples used for evaluation (typically generated from ``dist``.) vals (numpy.ndarray):
Evaluations of the model for given samples. Returns: (numpy.ndarray):
Sensitivity indices with shape ``(len(dist),) + vals.shape[1:]``. """ |
assert dist.__class__.__name__ == "Copula"
trans = dist.prm["trans"]
assert trans.__class__.__name__ == "nataf"
vals = numpy.array(vals)
cov = trans.prm["C"]
cov = numpy.dot(cov, cov.T)
marginal = dist.prm["dist"]
dim = len(dist)
orth = chaospy.orthogonal.orth_ttr(order, marginal, sort="GR")
r = list(range(dim))
index = [1] + [0]*(dim-1)
nataf = chaospy.dist.Nataf(marginal, cov, r)
samples_ = marginal.inv(nataf.fwd(samples))
poly, coeffs = chaospy.collocation.fit_regression(
orth, samples_, vals, retall=1)
V = Var(poly, marginal, **kws)
out = numpy.zeros((dim,) + poly.shape)
out[0] = Var(E_cond(poly, index, marginal, **kws),
marginal, **kws)/(V+(V == 0))*(V != 0)
for i in range(1, dim):
r = r[1:] + r[:1]
index = index[-1:] + index[:-1]
nataf = chaospy.dist.Nataf(marginal, cov, r)
samples_ = marginal.inv(nataf.fwd(samples))
poly, coeffs = chaospy.collocation.fit_regression(
orth, samples_, vals, retall=1)
out[i] = Var(E_cond(poly, index, marginal, **kws),
marginal, **kws)/(V+(V == 0))*(V != 0)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quad_golub_welsch(order, dist, accuracy=100, **kws):
""" Golub-Welsch algorithm for creating quadrature nodes and weights. Args: order (int):
Quadrature order dist (Dist):
Distribution nodes and weights are found for with `dim=len(dist)` accuracy (int):
Accuracy used in discretized Stieltjes procedure. Will be increased by one for each iteration. Returns: (numpy.ndarray, numpy.ndarray):
Optimal collocation nodes with `x.shape=(dim, order+1)` and weights with `w.shape=(order+1,)`. Examples: [[-2.3344 -0.742 0.742 2.3344]] [0.0459 0.4541 0.4541 0.0459] [[0.2113 0.2113 0.7887 0.7887] [0.2113 0.7887 0.2113 0.7887]] [0.25 0.25 0.25 0.25] """ |
order = numpy.array(order)*numpy.ones(len(dist), dtype=int)+1
_, _, coeff1, coeff2 = chaospy.quad.generate_stieltjes(
dist, numpy.max(order), accuracy=accuracy, retall=True, **kws)
dimensions = len(dist)
abscisas, weights = _golbub_welsch(order, coeff1, coeff2)
if dimensions == 1:
abscisa = numpy.reshape(abscisas, (1, order[0]))
weight = numpy.reshape(weights, (order[0],))
else:
abscisa = chaospy.quad.combine(abscisas).T
weight = numpy.prod(chaospy.quad.combine(weights), -1)
assert len(abscisa) == dimensions
assert len(weight) == len(abscisa.T)
return abscisa, weight |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _golbub_welsch(orders, coeff1, coeff2):
"""Recurrence coefficients to abscisas and weights.""" |
abscisas, weights = [], []
for dim, order in enumerate(orders):
if order:
bands = numpy.zeros((2, order))
bands[0] = coeff1[dim, :order]
bands[1, :-1] = numpy.sqrt(coeff2[dim, 1:order])
vals, vecs = scipy.linalg.eig_banded(bands, lower=True)
abscisa, weight = vals.real, vecs[0, :]**2
indices = numpy.argsort(abscisa)
abscisa, weight = abscisa[indices], weight[indices]
else:
abscisa, weight = numpy.array([coeff1[dim, 0]]), numpy.array([1.])
abscisas.append(abscisa)
weights.append(weight)
return abscisas, weights |
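The helper above assembles, per dimension, a symmetric tridiagonal Jacobi matrix from the recurrence coefficients and diagonalizes it: the eigenvalues are the nodes and the squared first eigenvector components (scaled by the total mass) are the weights. A compact numpy sketch for one dimension, checked against the two-point probabilists' Gauss-Hermite rule, whose recurrence is a_k = 0, b_k = k (function name is illustrative):

```python
import numpy

def golub_welsch_1d(coeff1, coeff2):
    """Nodes and weights from three-term recurrence coefficients.

    ``coeff2[0]`` is taken as the total mass of the weight function.
    """
    order = len(coeff1)
    jacobi = numpy.diag(numpy.asarray(coeff1, dtype=float))
    off_diagonal = numpy.sqrt(coeff2[1:order])
    jacobi += numpy.diag(off_diagonal, 1) + numpy.diag(off_diagonal, -1)
    values, vectors = numpy.linalg.eigh(jacobi)
    weights = coeff2[0]*vectors[0]**2
    return values, weights

nodes, weights = golub_welsch_1d([0.0, 0.0], numpy.array([1.0, 1.0]))
```

For this rule the nodes are the roots of `He_2(x) = x**2 - 1`, i.e. -1 and 1, with equal weights.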
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Kurt(poly, dist=None, fisher=True, **kws):
""" Kurtosis operator. Element by element 4rd order statistics of a distribution or polynomial. Args: poly (Poly, Dist):
Input to take kurtosis on. dist (Dist):
Defines the space the skewness is taken on. It is ignored if ``poly`` is a distribution. fisher (bool):
If True, Fisher's definition is used (Normal -> 0.0). If False, Pearson's definition is used (normal -> 3.0) Returns: (numpy.ndarray):
Element by element kurtosis along ``poly``, where ``kurtosis.shape == poly.shape``. Examples: [6. 0.] [9. 3.] [nan 6. 0. 15.] """ |
if isinstance(poly, distributions.Dist):
x = polynomials.variable(len(poly))
poly, dist = x, poly
else:
poly = polynomials.Poly(poly)
if fisher:
adjust = 3
else:
adjust = 0
shape = poly.shape
poly = polynomials.flatten(poly)
m1 = E(poly, dist)
m2 = E(poly**2, dist)
m3 = E(poly**3, dist)
m4 = E(poly**4, dist)
out = (m4-4*m3*m1+6*m2*m1**2-3*m1**4)/(m2**2-2*m2*m1**2+m1**4) - adjust
out = numpy.reshape(out, shape)
return out |
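The moment formula above can be mirrored on plain samples to see the adjustment at work: the denominator `m2**2 - 2*m2*m1**2 + m1**4` is the squared variance `(m2 - m1**2)**2`, and subtracting 3 switches from Pearson's to Fisher's convention (the helper is illustrative, not chaospy API):

```python
def sample_kurtosis(samples, fisher=True):
    """Kurtosis from raw moments, mirroring the formula above."""
    n = float(len(samples))
    m1 = sum(samples)/n
    m2 = sum(x**2 for x in samples)/n
    m3 = sum(x**3 for x in samples)/n
    m4 = sum(x**4 for x in samples)/n
    fourth = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4
    variance = m2 - m1**2
    return fourth/variance**2 - (3 if fisher else 0)

# the symmetric two-point distribution on {-1, +1}:
# Pearson kurtosis 1, Fisher kurtosis -2
flat = [-1.0, 1.0, -1.0, 1.0]
```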
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def basis(start, stop=None, dim=1, sort="G", cross_truncation=1.):
""" Create an N-dimensional unit polynomial basis. Args: start (int, numpy.ndarray):
the minimum polynomial to include. If int is provided, set as lowest total order. If array of int, set as lower order along each axis. stop (int, numpy.ndarray):
the maximum shape included. If omitted: ``stop <- start; start <- 0`` If int is provided, set as largest total order. If array of int, set as largest order along each axis. dim (int):
dim of the basis. Ignored if array is provided in either start or stop. sort (str):
The polynomial ordering where the letters ``G``, ``I`` and ``R`` can be used to set grade, inverse and reverse to the ordering. For ``basis(start=0, stop=2, dim=2, sort=sort)`` we get: ====== ================== sort output ====== ================== "" [1 y y^2 x xy x^2] "G" [1 y x y^2 xy x^2] "I" [x^2 xy x y^2 y 1] "R" [1 x x^2 y xy y^2] "GIR" [y^2 xy x^2 y x 1] ====== ================== cross_truncation (float):
Use hyperbolic cross truncation scheme to reduce the number of terms in expansion. Returns: (Poly) : Polynomial array. Examples: [q0^4, q0^3q1, q0^2q1^2, q0q1^3, q1^4] [q0q1, q0^2q1, q0q1^2, q0^2q1^2] """ |
if stop is None:
start, stop = numpy.array(0), start
start = numpy.array(start, dtype=int)
stop = numpy.array(stop, dtype=int)
dim = max(start.size, stop.size, dim)
indices = numpy.array(chaospy.bertran.bindex(
numpy.min(start), 2*numpy.max(stop), dim, sort, cross_truncation))
if start.size == 1:
below = numpy.sum(indices, -1) >= start
else:
start = numpy.ones(dim, dtype=int)*start
below = numpy.all(indices-start >= 0, -1)
if stop.size == 1:
above = numpy.sum(indices, -1) <= stop.item()
else:
stop = numpy.ones(dim, dtype=int)*stop
above = numpy.all(stop-indices >= 0, -1)
pool = list(indices[above*below])
x = numpy.zeros(len(pool), dtype=int)
x[0] = 1
A = {}
for I in pool:
I = tuple(I)
A[I] = x
x = numpy.roll(x, 1)
return Poly(A, dim) |
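The index pool above is the set of exponent tuples whose total order falls between `start` and `stop`. That filtering step can be sketched with the standard library alone; the term count for total order up to `stop` in `dim` dimensions is the binomial coefficient `comb(dim+stop, dim)` (helper name is illustrative, and chaospy's graded/reverse orderings are not reproduced here):

```python
from itertools import product
from math import comb

def total_order_indices(start, stop, dim):
    """Exponent tuples with total order in [start, stop]."""
    return [idx for idx in product(range(stop+1), repeat=dim)
            if start <= sum(idx) <= stop]

indices = total_order_indices(0, 2, dim=2)
# number of terms of total order <= 2 in 2 dimensions: comb(4, 2) == 6
```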
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cutoff(poly, *args):
""" Remove polynomial components with order outside a given interval. Args: poly (Poly):
Input data. low (int):
The lowest order that is allowed to be included. Defaults to 0. high (int):
The upper threshold for the cutoff range. Returns: (Poly):
The same as `P`, except that all terms that have a order not within the bound `low <= order < high` are removed. Examples: [q1^3+1, q0+q1^2, q0^2+q1, q0^3+1] [1, q0+q1^2, q0^2+q1, 1] [0, q0+q1^2, q0^2+q1, 0] """ |
if len(args) == 1:
low, high = 0, args[0]
else:
low, high = args[:2]
core_old = poly.A
core_new = {}
for key in poly.keys:
if low <= numpy.sum(key) < high:
core_new[key] = core_old[key]
return Poly(core_new, poly.dim, poly.shape, poly.dtype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prange(N=1, dim=1):
""" Constructor to create a range of polynomials where the exponent vary. Args: N (int):
Number of polynomials in the array. dim (int):
The dimension the polynomial should span. Returns: (Poly):
A polynomial array of length N containing simple polynomials with increasing exponent. Examples: [1, q0, q0^2, q0^3] [1, q2, q2^2, q2^3] """ |
A = {}
r = numpy.arange(N, dtype=int)
key = numpy.zeros(dim, dtype=int)
for i in range(N):
key[-1] = i
A[tuple(key)] = 1*(r == i)
return Poly(A, dim, (N,), int) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rolldim(P, n=1):
""" Roll the axes. Args: P (Poly) : Input polynomial. n (int) : The axis that after rolling becomes the 0th axis. Returns: (Poly) : Polynomial with new axis configuration. Examples: q0^3+q1^2+q2 q0^2+q2^3+q1 """ |
dim = P.dim
shape = P.shape
dtype = P.dtype
A = {key[n:]+key[:n]: P.A[key] for key in P.keys}
return Poly(A, dim, shape, dtype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def swapdim(P, dim1=1, dim2=0):
""" Swap the dim between two variables. Args: P (Poly):
Input polynomial. dim1 (int):
First dim dim2 (int):
Second dim. Returns: (Poly):
Polynomial with swapped dimensions. Examples: q0^4-q1 q1^4-q0 """ |
if not isinstance(P, Poly):
return numpy.swapaxes(P, dim1, dim2)
dim = P.dim
shape = P.shape
dtype = P.dtype
if dim1==dim2:
return P
m = max(dim1, dim2)
if P.dim <= m:
P = chaospy.poly.dimension.setdim(P, m+1)
dim = m+1
A = {}
for key in P.keys:
val = P.A[key]
key = list(key)
key[dim1], key[dim2] = key[dim2], key[dim1]
A[tuple(key)] = val
return Poly(A, dim, shape, dtype) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tril(P, k=0):
"""Lower triangle of coefficients.""" |
A = P.A.copy()
for key in P.keys:
A[key] = numpy.tril(P.A[key])
return Poly(A, dim=P.dim, shape=P.shape) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tricu(P, k=0):
"""Cross-diagonal upper triangle.""" |
tri = numpy.sum(numpy.mgrid[[slice(0,_,1) for _ in P.shape]], 0)
tri = tri < len(tri)+k
if isinstance(P, Poly):
A = P.A.copy()
B = {}
for key in P.keys:
B[key] = A[key]*tri
return Poly(B, shape=P.shape, dim=P.dim, dtype=P.dtype)
out = P*tri
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def variable(dims=1):
""" Simple constructor to create single variables to create polynomials. Args: dims (int):
Number of dimensions in the array. Returns: (Poly):
Polynomial array with unit components in each dimension. Examples: q0 [q0, q1, q2] """ |
if dims == 1:
return Poly({(1,): 1}, dim=1, shape=())
return Poly({
tuple(indices): indices for indices in numpy.eye(dims, dtype=int)
}, dim=dims, shape=(dims,)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(A, ax=None):
"""Test if all values in A evaluate to True """ |
if isinstance(A, Poly):
out = numpy.zeros(A.shape, dtype=bool)
B = A.A
for key in A.keys:
out += all(B[key], ax)
return out
return numpy.all(A, ax) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def around(A, decimals=0):
""" Evenly round to the given number of decimals. Args: A (Poly, numpy.ndarray):
Input data. decimals (int):
Number of decimal places to round to (default: 0). If decimals is negative, it specifies the number of positions to the left of the decimal point. Returns: (Poly, numpy.ndarray):
Same type as A. Examples: [1.0, 0.25q0, 0.0625q0^2] [1.0, 0.0, 0.0] [1.0, 0.25q0, 0.06q0^2] """ |
if isinstance(A, Poly):
B = A.A.copy()
for key in A.keys:
B[key] = around(B[key], decimals)
return Poly(B, A.dim, A.shape, A.dtype)
return numpy.around(A, decimals) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def diag(A, k=0):
"""Extract or construct a diagonal polynomial array.""" |
if isinstance(A, Poly):
core, core_new = A.A, {}
for key in A.keys:
core_new[key] = numpy.diag(core[key], k)
return Poly(core_new, A.dim, None, A.dtype)
return numpy.diag(A, k) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prune(A, threshold):
""" Remove coefficients that is not larger than a given threshold. Args: A (Poly):
Input data. threshold (float):
Threshold for which values to cut. Returns: (Poly):
Same type as A. Examples: 0.0625q0^2+0.25q0+1.0 0.25q0+1.0 1.0 0.0 """ |
if isinstance(A, Poly):
B = A.A.copy()
for key in A.keys:
values = B[key].copy()
values[numpy.abs(values) < threshold] = 0.
B[key] = values
return Poly(B, A.dim, A.shape, A.dtype)
A = A.copy()
A[numpy.abs(A) < threshold] = 0.
return A |
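The same coefficient-wise thresholding can be sketched on a dict-backed toy polynomial (the `prune_core` helper is hypothetical, not the library API):

```python
def prune_core(core, threshold):
    # Zero out every coefficient whose magnitude is below the threshold,
    # mirroring `values[numpy.abs(values) < threshold] = 0.` above.
    return {key: [0. if abs(val) < threshold else val for val in vals]
            for key, vals in core.items()}

core = {(0,): [1.0], (1,): [0.25], (2,): [0.0625]}
print(prune_core(core, 0.1))  # {(0,): [1.0], (1,): [0.25], (2,): [0.0]}
```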
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _range(self, xloc, cache):
""" Special handle for finding bounds on constrained dists. Example: [[0. 0.] [1. 2.]] """ |
uloc = numpy.zeros((2, len(self)))
for dist in evaluation.sorted_dependencies(self, reverse=True):
if dist not in self.inverse_map:
continue
idx = self.inverse_map[dist]
xloc_ = xloc[idx].reshape(1, -1)
uloc[:, idx] = evaluation.evaluate_bound(
dist, xloc_, cache=cache).flatten()
return uloc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Spearman(poly, dist, sample=10000, retall=False, **kws):
""" Calculate Spearman's rank-order correlation coefficient. Args: poly (Poly):
Polynomial of interest. dist (Dist):
Defines the space where correlation is taken. sample (int):
Number of samples used in estimation. retall (bool):
If true, return p-value as well. Returns: (float, numpy.ndarray):
Correlation output ``rho``. Of type float if two-dimensional problem. Correleation matrix if larger. (float, numpy.ndarray):
The two-sided p-value for a hypothesis test whose null hypothesis is that two sets of data are uncorrelated, has same dimension as ``rho``. """ |
samples = dist.sample(sample, **kws)
poly = polynomials.flatten(poly)
Y = poly(*samples)
if retall:
return spearmanr(Y.T)
return spearmanr(Y.T)[0] |
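`spearmanr` here comes from `scipy.stats`. For intuition, on tie-free data the coefficient reduces to the classic rank-difference formula, sketched below in pure Python (no tie handling and no p-value, unlike scipy's version):

```python
def rankdata(xs):
    # Rank positions 1..n by value (assumes no ties).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def spearman_rho(x, y):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rankdata(x), rankdata(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
print(spearman_rho([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0
```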
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _diff(self, x, th, eps):
""" Differentiation function. Numerical approximation of a Rosenblatt transformation created from copula formulation. """ |
foo = lambda y: self.igen(numpy.sum(self.gen(y, th), 0), th)
out1 = out2 = 0.
sign = 1 - 2*(x > .5).T
for I in numpy.ndindex(*((2,)*(len(x)-1)+(1,))):
eps_ = numpy.array(I)*eps
x_ = (x.T + sign*eps_).T
out1 += (-1)**sum(I)*foo(x_)
x_[-1] = 1
out2 += (-1)**sum(I)*foo(x_)
out = out1/out2
return out |
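The alternating-sign loop above is a multi-dimensional finite-difference stencil. The same idea in its simplest form is the 4-point stencil for a mixed second partial of a bivariate function (a generic sketch of the technique, not the copula formula itself):

```python
def mixed_partial(f, x, y, eps=1e-5):
    # 4-point stencil for d^2 f / (dx dy); the (-1)**sum(I) signs in the
    # loop above generalise this alternation to more dimensions.
    return (f(x + eps, y + eps) - f(x + eps, y - eps)
            - f(x - eps, y + eps) + f(x - eps, y - eps)) / (4 * eps * eps)

# For f(x, y) = x*y the mixed partial is exactly 1.
print(round(mixed_partial(lambda x, y: x * y, 0.3, 0.7), 6))  # 1.0
```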
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def available_services():
""" get the available services to be activated read the models dir to find the services installed to be added to the system by the administrator """ |
all_datas = ()
data = ()
for class_path in settings.TH_SERVICES:
class_name = class_path.rsplit('.', 1)[1]
# 2nd array position contains the name of the service
data = (class_name, class_name.rsplit('Service', 1)[1])
all_datas = (data,) + all_datas
return all_datas |
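The `rsplit` parsing can be exercised in isolation with a couple of hypothetical class paths (the `ServiceRss`/`ServicePocket` names below are made up for the example):

```python
def parse_services(class_paths):
    all_datas = ()
    for class_path in class_paths:
        class_name = class_path.rsplit('.', 1)[1]
        # e.g. ('ServiceRss', 'Rss'): class name plus the short service name
        all_datas = ((class_name, class_name.rsplit('Service', 1)[1]),) + all_datas
    return all_datas

paths = ['th_rss.my_rss.ServiceRss', 'th_pocket.my_pocket.ServicePocket']
print(parse_services(paths))
# (('ServicePocket', 'Pocket'), ('ServiceRss', 'Rss')) -- prepending reverses order
```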
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle(self, *args, **options):
""" get the trigger to fire """ |
trigger_id = options.get('trigger_id')
trigger = TriggerService.objects.filter(
id=int(trigger_id),
status=True,
user__is_active=True,
provider_failed__lt=settings.DJANGO_TH.get('failed_tries', 10),
consumer_failed__lt=settings.DJANGO_TH.get('failed_tries', 10)
).select_related('consumer__name', 'provider__name')
try:
with Pool(processes=1) as pool:
r = Read()
result = pool.map_async(r.reading, trigger)
result.get(timeout=360)
p = Pub()
result = pool.map_async(p.publishing, trigger)
result.get(timeout=360)
cache.delete('django_th' + '_fire_trigger_' + str(trigger_id))
except TimeoutError as e:
logger.warning(e) |
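The read-then-publish fan-out above relies on `map_async` with a timeout. The same pattern can be sketched with a thread pool from `multiprocessing.dummy`, which exposes the same API as `Pool` while avoiding process-spawn caveats in a short example (the `reading` stub is hypothetical):

```python
from multiprocessing.dummy import Pool  # thread-backed, same API as Pool

def reading(trigger_id):
    return 'read:{}'.format(trigger_id)

with Pool(processes=1) as pool:
    result = pool.map_async(reading, [1, 2, 3])
    values = result.get(timeout=5)  # raises TimeoutError if workers hang
print(values)  # ['read:1', 'read:2', 'read:3']
```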
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_data(self, **kwargs):
""" get the data from the service as the pushbullet service does not have any date in its API linked to the note, add the triggered date to the dict data thus the service will be triggered when data will be found :param kwargs: contain keyword args : trigger_id at least :type kwargs: dict :rtype: list """ |
trigger_id = kwargs.get('trigger_id')
trigger = Pushbullet.objects.get(trigger_id=trigger_id)
date_triggered = kwargs.get('date_triggered')
data = list()
pushes = self.pushb.get_pushes()
for p in pushes:
title = 'From Pushbullet'
created = arrow.get(p.get('created'))
if created > date_triggered and p.get('type') == trigger.type and\
(p.get('sender_email') == p.get('receiver_email') or p.get('sender_email') is None):
title = title + ' Channel' if p.get('channel_iden') and p.get('title') is None else title
# if sender_email and receiver_email are the same ;
# that means that "I" made a note or something
# if sender_email is None, then "an API" does the post
body = p.get('body')
data.append({'title': title, 'content': body})
# digester
self.send_digest_event(trigger_id, title, '')
cache.set('th_pushbullet_' + str(trigger_id), data)
return data |
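The date filter is the heart of the method; stripped of the Pushbullet client, arrow, and the Django cache it is a plain comparison. A stdlib-only sketch with made-up payloads that loosely mimic the API shape:

```python
from datetime import datetime, timezone

date_triggered = datetime(2024, 1, 1, tzinfo=timezone.utc)  # hypothetical
pushes = [  # hypothetical payloads; keys mimic the fields used above
    {'created': datetime(2024, 1, 2, tzinfo=timezone.utc), 'type': 'note', 'body': 'new'},
    {'created': datetime(2023, 12, 31, tzinfo=timezone.utc), 'type': 'note', 'body': 'old'},
]
# Keep only pushes created after the last trigger, of the configured type.
fresh = [p['body'] for p in pushes
         if p['created'] > date_triggered and p['type'] == 'note']
print(fresh)  # ['new']
```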
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def html_entity_decode_char(self, m, defs=htmlentities.entitydefs):
""" decode html entity into one of the html char """ |
try:
char = defs[m.group(1)]
return "&{char};".format(char=char)
except ValueError:
return m.group(0)
except KeyError:
return m.group(0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def html_entity_decode_codepoint(self, m, defs=htmlentities.codepoint2name):
""" decode html entity into one of the codepoint2name """ |
try:
char = defs[m.group(1)]
return "&{char};".format(char=char)
except ValueError:
return m.group(0)
except KeyError:
return m.group(0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def html_entity_decode(self):
""" entry point of this set of tools to decode html entities """ |
pattern = re.compile(r"&#(\w+?);")
string = pattern.sub(self.html_entity_decode_char, self.my_string)
return pattern.sub(self.html_entity_decode_codepoint, string) |
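The numeric-entity branch can be exercised with the stdlib alone. Note that `htmlentities.codepoint2name` is keyed by integers, so a standalone sketch needs an explicit `int()` on the captured group (the `numeric_to_named` helper is hypothetical):

```python
import re
from html.entities import codepoint2name

def numeric_to_named(text):
    # Rewrite &#38; as &amp; when the codepoint has a named entity.
    def repl(m):
        name = codepoint2name.get(int(m.group(1)))
        return '&{};'.format(name) if name else m.group(0)
    return re.sub(r'&#(\d+);', repl, text)

print(numeric_to_named('Fish &#38; Chips'))  # Fish &amp; Chips
```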
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_data(self, **kwargs):
""" get the data from the service As the pocket service does not have any date in its API linked to the note, add the triggered date to the dict data thus the service will be triggered when data will be found :param kwargs: contain keyword args : trigger_id at least :type kwargs: dict :rtype: list """ |
trigger_id = kwargs.get('trigger_id')
date_triggered = kwargs.get('date_triggered')
data = list()
# pocket uses a timestamp date format
since = arrow.get(date_triggered).timestamp
if self.token is not None:
# get the data from the last time the trigger have been started
# timestamp form
pockets = self.pocket.get(since=since, state="unread")
content = ''
if pockets is not None and len(pockets[0]['list']) > 0:
for my_pocket in pockets[0]['list'].values():
if my_pocket.get('excerpt'):
content = my_pocket['excerpt']
elif my_pocket.get('given_title'):
content = my_pocket['given_title']
my_date = arrow.get(str(date_triggered), 'YYYY-MM-DD HH:mm:ss').to(settings.TIME_ZONE)
data.append({'my_date': str(my_date),
'tag': '',
'link': my_pocket['given_url'],
'title': my_pocket['given_title'],
'content': content,
'tweet_id': 0})
# digester
self.send_digest_event(trigger_id, my_pocket['given_title'], my_pocket['given_url'])
cache.set('th_pocket_' + str(trigger_id), data)
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_prohibited_element(tag_name, document_element):
""" To fit the Evernote DTD need, drop this tag name """ |
elements = document_element.getElementsByTagName(tag_name)
for element in elements:
p = element.parentNode
p.removeChild(element) |
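A runnable sketch of the same DOM surgery with `xml.dom.minidom`; the `<en-note>` snippet is a made-up stand-in for an Evernote document:

```python
from xml.dom import minidom

def remove_prohibited(tag_name, document_element):
    # Copy the live NodeList first: removing while iterating can skip nodes.
    for element in list(document_element.getElementsByTagName(tag_name)):
        element.parentNode.removeChild(element)

doc = minidom.parseString('<en-note><p>keep</p><script>drop()</script></en-note>')
remove_prohibited('script', doc.documentElement)
print(doc.documentElement.toxml())  # <en-note><p>keep</p></en-note>
```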