q266400
_get_permutations
test
def _get_permutations(num_results, dims, seed=None): """Uniform iid sample from the space of permutations. Draws a sample of size `num_results` from the group of permutations of degrees specified by the `dims` tensor. These are packed together into one tensor such that each row is one sample from each of the dimensions in `dims`. For example, if dims = [2,3] and num_results = 2, the result is a tensor of shape [2, 2 + 3] and the first row of the result might look like: [1, 0, 2, 0, 1]. The first two elements are a permutation over 2 elements while the next three are a permutation over 3 elements. Args: num_results: A positive scalar `Tensor` of integral type. The number of draws from the discrete uniform distribution over the permutation groups. dims: A 1D `Tensor` of the same dtype as `num_results`. The degree of the permutation groups from which to sample. seed: (Optional) Python
python
{ "resource": "" }
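The packing described in the docstring (one permutation per degree, concatenated row-wise) can be sketched without TF. The helper below is a hypothetical NumPy stand-in, not the TFP implementation, which uses TF ops and stateful seeds:

```python
import numpy as np

def sample_packed_permutations(num_results, dims, rng=None):
    """Draws one iid uniform permutation per degree in `dims` and packs
    them row-wise: each row has length sum(dims), e.g. dims=[2, 3]
    yields rows like [1, 0, 2, 0, 1]."""
    rng = rng if rng is not None else np.random.default_rng()
    rows = []
    for _ in range(num_results):
        # One permutation per degree, concatenated into a single row.
        rows.append(np.concatenate([rng.permutation(d) for d in dims]))
    return np.stack(rows)

samples = sample_packed_permutations(num_results=2, dims=[2, 3],
                                     rng=np.random.default_rng(0))
```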
q266401
_get_indices
test
def _get_indices(num_results, sequence_indices, dtype, name=None): """Generates starting points for the Halton sequence procedure. The k'th element of the sequence is generated starting from a positive integer which must be distinct for each `k`. It is conventional to choose the starting point as `k` itself (or `k+1` if k is zero based). This function generates the starting integers for the required elements and reshapes the result for later use. Args: num_results: Positive scalar `Tensor` of dtype int32. The number of samples to generate. If this parameter is supplied, then `sequence_indices` should be None. sequence_indices: `Tensor` of dtype int32 and rank 1. The entries index into
q266402
_base_expansion_size
test
def _base_expansion_size(num, bases): """Computes the number of terms in the place value expansion. Let num = a0 + a1 b + a2 b^2 + ... + ak b^k be the place value expansion of `num` in base b (ak != 0). This function computes and returns `k+1` for each base `b` specified in `bases`. This can be inferred from the base `b` logarithm of `num` as follows: $$k + 1 = Floor(log_b(num)) + 1 = Floor(log(num) / log(b)) + 1$$ Args: num: Scalar `Tensor` of dtype either
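The digit-count formula in the docstring is easy to check with plain NumPy (beware floating-point edge cases when `num` is an exact power of a base):

```python
import numpy as np

def base_expansion_size(num, bases):
    # Number of digits of `num` in each base b: floor(log_b(num)) + 1.
    # Mirrors the formula in the docstring above.
    return np.floor(np.log(num) / np.log(bases)) + 1

# 7 is 111 in base 2 (3 digits), 21 in base 3 (2 digits), 7 in base 10.
sizes = base_expansion_size(7, np.array([2, 3, 10]))
```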
q266403
_primes_less_than
test
def _primes_less_than(n):
  # Based on
  # https://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python/3035188#3035188
  """Returns sorted array of primes such that `2 <= prime < n`."""
  small_primes = np.array((2, 3, 5))
  if n <= 6:
    return small_primes[small_primes < n]
  sieve = np.ones(n // 3 + (n % 6 == 2), dtype=bool)  # `np.bool` is deprecated; use `bool`.
  sieve[0] = False
  m = int(n ** 0.5) // 3 + 1
  for i in range(m):
    if not sieve[i]:
      continue
    k = 3 * i + 1 | 1
    # Mark multiples of k, exploiting the 2-3 wheel layout of the sieve
    # (index i represents the candidate 3*i + 1 | 1).
    sieve[k * k // 3::2 * k] = False
    sieve[(k * k + 4 * k - 2 * k * (i % 2)) // 3::2 * k] = False
  return np.r_[2, 3, 3 * np.nonzero(sieve)[0] + 1 | 1]
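A plain, unoptimized sieve of Eratosthenes makes a handy cross-check for the wheel-based version above; this simple variant is an illustration, not the library code:

```python
import numpy as np

def primes_less_than_simple(n):
    """Returns the primes p with 2 <= p < n via a straightforward sieve."""
    if n <= 2:
        return np.array([], dtype=int)
    sieve = np.ones(n, dtype=bool)
    sieve[:2] = False            # 0 and 1 are not prime.
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.nonzero(sieve)[0]

primes = primes_less_than_simple(20)
```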
q266404
_machine_eps
test
def _machine_eps(dtype):
  """Returns the machine epsilon for the supplied dtype."""
  if isinstance(dtype, tf.DType):
    dtype = dtype.as_numpy_dtype()
  return np.finfo(dtype).eps
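Machine epsilon is the gap between 1.0 and the next representable float; NumPy exposes it directly, which is what the helper falls back to:

```python
import numpy as np

# For IEEE single and double precision the machine epsilons are
# 2**-23 and 2**-52 respectively.
eps32 = np.finfo(np.float32).eps
eps64 = np.finfo(np.float64).eps
```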
q266405
hager_zhang
test
def hager_zhang(value_and_gradients_function, initial_step_size=None, value_at_initial_step=None, value_at_zero=None, converged=None, threshold_use_approximate_wolfe_condition=1e-6, shrinkage_param=0.66, expansion_param=5.0, sufficient_decrease_param=0.1, curvature_param=0.9, step_size_shrink_param=0.1, max_iterations=50, name=None): """The Hager Zhang line search algorithm. Performs an inexact line search based on the algorithm of [Hager and Zhang (2006)][2]. The univariate objective function `value_and_gradients_function` is typically generated by projecting a multivariate objective function along a search direction. Suppose the multivariate function to be minimized is `g(x1,x2, .. xn)`. Let (d1, d2, ..., dn) be the direction along which we wish to perform a line search. Then the projected univariate function to be used for line search is ```None f(a) = g(x1 + d1 * a, x2 + d2 * a, ..., xn + dn * a) ``` The directional derivative along (d1, d2, ..., dn) is needed for this procedure. This also corresponds to the derivative of the projected function `f(a)` with respect to `a`. Note that this derivative must be negative for `a = 0` if the direction is a descent direction. The usual stopping criteria for the line search is the satisfaction of the (weak) Wolfe conditions. For details of the Wolfe conditions, see ref. [3]. On a finite precision machine, the exact Wolfe conditions can be difficult to satisfy when one is very close to the minimum and as argued by [Hager and Zhang (2005)][1], one can only expect the minimum to be determined within square root of machine precision. To improve the situation, they propose to replace the Wolfe conditions with an approximate version depending on the derivative of the function which is applied only when one is very close to the minimum. The following algorithm implements this enhanced scheme. 
### Usage: The primary use of line search methods is as an internal component of a class of optimization algorithms (called line search based methods as opposed to trust region methods). Hence, the end user will typically not want to access line search directly. In particular, inexact line search should not be confused with a univariate minimization method. The stopping criterion of the line search is the satisfaction of the Wolfe conditions, not the discovery of the minimum of the function. With this caveat in mind, the following example illustrates the standalone usage of the line search. ```python # Define value and gradient namedtuple ValueAndGradient = namedtuple('ValueAndGradient', ['x', 'f', 'df']) # Define a quadratic target with minimum at 1.3. def value_and_gradients_function(x): return ValueAndGradient(x=x, f=(x - 1.3) ** 2, df=2 * (x-1.3)) # Set initial step size. step_size = tf.constant(0.1) ls_result = tfp.optimizer.linesearch.hager_zhang( value_and_gradients_function, initial_step_size=step_size) # Evaluate the results. with tf.Session() as session: results = session.run(ls_result) # Ensure convergence. assert results.converged # If the line search converged, the left and the right ends of the # bracketing interval are identical. assert results.left.x == results.right.x # Print the number of evaluations and the final step size. print("Final Step Size: %f, Evaluations: %d" % (results.left.x, results.func_evals)) ``` ### References: [1]: William Hager, Hongchao Zhang. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim., Vol 16. 1, pp. 170-172. 2005. https://www.math.lsu.edu/~hozhang/papers/cg_descent.pdf [2]: William Hager, Hongchao Zhang. Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Transactions on Mathematical Software, Vol 32., 1, pp. 113-137. 2006. http://users.clas.ufl.edu/hager/papers/CG/cg_compare.pdf [3]: Jorge Nocedal, Stephen Wright. Numerical Optimization. 
Springer Series in Operations Research. pp 33-36. 2006 Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. initial_step_size: (Optional) Scalar positive `Tensor` of real dtype, or a tensor of shape [n] in batching mode. The initial value (or values) to try to bracket the minimum. Default is `1.` as a float32. Note that this point need not necessarily bracket the minimum for the line search to work correctly but the supplied value must be greater than 0. A good initial value will make the search converge faster. value_at_initial_step: (Optional) The full return value of evaluating value_and_gradients_function at initial_step_size, i.e. a namedtuple with 'x', 'f', 'df', if already known by the caller. If supplied the value of `initial_step_size` will be ignored, otherwise the tuple will be computed by evaluating value_and_gradients_function. 
value_at_zero: (Optional) The full return value of value_and_gradients_function at `0.`, i.e. a namedtuple with 'x', 'f', 'df', if already known by the caller. If not supplied the tuple will be computed by evaluating value_and_gradients_function. converged: (Optional) In batching mode a tensor of shape [n], indicating batch members which have already converged and no further search should be performed. These batch members are also reported as converged in the output, and both their `left` and `right` are set to the `value_at_initial_step`. threshold_use_approximate_wolfe_condition: Scalar positive `Tensor` of real dtype. Corresponds to the parameter 'epsilon' in [Hager and Zhang (2006)][2]. Used to estimate the threshold at which the line search switches to approximate Wolfe conditions. shrinkage_param: Scalar positive Tensor of real dtype. Must be less than `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2].
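The weak Wolfe conditions the search tests can be checked directly on the quadratic from the usage example. The helper below is a hypothetical sketch using the docstring's default parameters (`delta = 0.1`, `sigma = 0.9`), not the batched TF implementation:

```python
def wolfe_conditions(f, df, a, delta=0.1, sigma=0.9):
    """Checks the weak Wolfe conditions for step `a` along a projected
    univariate function with value `f` and derivative `df`:
      sufficient decrease:  f(a) <= f(0) + delta * a * f'(0)
      curvature:            f'(a) >= sigma * f'(0)
    """
    decrease = f(a) <= f(0) + delta * a * df(0)
    curvature = df(a) >= sigma * df(0)
    return decrease and curvature

# The quadratic from the usage example, with minimum at 1.3.
f = lambda a: (a - 1.3) ** 2
df = lambda a: 2 * (a - 1.3)

ok_at_min = wolfe_conditions(f, df, 1.3)      # both conditions hold
ok_near_zero = wolfe_conditions(f, df, 1e-9)  # curvature fails near 0
```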
q266406
_fix_step_size
test
def _fix_step_size(value_and_gradients_function,
                   val_c_input,
                   active,
                   step_size_shrink_param):
  """Shrinks the input step size until the value and grad become finite."""
  # The maximum iterations permitted are determined as the number of halvings
  # it takes to reduce 1 to 0 in the given dtype.
  iter_max = np.ceil(-np.log2(_machine_eps(val_c_input.x.dtype)))

  def _cond(i, val_c, to_fix):
    del val_c  # Unused.
    return (i < iter_max) & tf.reduce_any(input_tensor=to_fix)

  def _body(i, val_c, to_fix):
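The shrink-until-finite loop above can be illustrated with a scalar sketch. This toy version uses a shrink factor of 0.5 purely for illustration (the TFP default `step_size_shrink_param` is different), and the function `f` is hypothetical:

```python
import math

def fix_step_size(f, c, shrink_param=0.5, max_iters=60):
    """Shrinks step `c` until f(c) is finite, mirroring the loop above."""
    for _ in range(max_iters):
        if math.isfinite(f(c)):
            return c
        c = c * shrink_param
    return c

# f is non-finite for c >= 2 and at c == 2 (log(0) = -inf).
def f(c):
    return math.log(2 - c) if c < 2 else float('nan')

# 8 -> 4 -> 2 -> 1: three halvings until f(c) is finite.
c_fixed = fix_step_size(f, 8.0)
```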
q266407
_bracket_and_search
test
def _bracket_and_search( value_and_gradients_function, init_interval, f_lim, max_iterations, shrinkage_param, expansion_param, sufficient_decrease_param, curvature_param): """Brackets the minimum and performs a line search. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. init_interval: Instance of `HagerZhangLineSearchResults` containing the initial line search interval. The gradient of init_interval.left must be negative (i.e. must be a descent direction), while init_interval.right must be positive and finite. f_lim: Scalar `Tensor` of float dtype. max_iterations: Positive scalar `Tensor` of integral dtype. The maximum number of iterations to perform in the line search. The number of iterations used to bracket the minimum are also counted against this parameter. shrinkage_param: Scalar positive Tensor of real dtype. Must be less than `1.`. 
Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2]. expansion_param: Scalar positive `Tensor` of real dtype. Must be greater than `1.`. Used to expand the initial interval in case it does not bracket a minimum. Corresponds to `rho` in [Hager and Zhang (2006)][2]. sufficient_decrease_param: Positive scalar `Tensor` of real dtype. Bounded above by the curvature param. Corresponds to `delta` in the terminology of [Hager and Zhang (2006)][2]. curvature_param: Positive scalar `Tensor` of real dtype. Bounded above by `1.`. Corresponds to 'sigma' in the terminology of [Hager and Zhang (2006)][2]. Returns: A namedtuple containing the following fields. converged: Boolean `Tensor` of shape [n]. Whether a point satisfying Wolfe/Approx wolfe was found. failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g. if either the objective function or the gradient are
q266408
_line_search_after_bracketing
test
def _line_search_after_bracketing( value_and_gradients_function, search_interval, val_0, f_lim, max_iterations, sufficient_decrease_param, curvature_param, shrinkage_param): """The main loop of line search after the minimum has been bracketed. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. search_interval: Instance of `HagerZhangLineSearchResults` containing the current line search interval. val_0: A namedtuple as returned by value_and_gradients_function evaluated at `0.`. The gradient must be negative (i.e. must be a descent direction). f_lim: Scalar `Tensor` of float dtype. max_iterations: Positive scalar `Tensor` of integral dtype. The maximum number of iterations to perform in the line search. The number of iterations used to bracket the minimum are also counted against this parameter. sufficient_decrease_param: Positive scalar `Tensor` of real dtype. 
Bounded above by the curvature param. Corresponds to `delta` in the terminology of [Hager and Zhang (2006)][2]. curvature_param: Positive scalar `Tensor` of real dtype. Bounded above by `1.`. Corresponds to 'sigma' in the terminology of [Hager and Zhang (2006)][2]. shrinkage_param: Scalar positive Tensor of real dtype. Must be less than `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2]. Returns: A namedtuple containing the following fields. converged: Boolean `Tensor` of shape [n]. Whether a point satisfying Wolfe/Approx wolfe was found. failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g. if either the objective function or the gradient are not finite at an evaluation point. iterations: Scalar int32 `Tensor`. Number of line search iterations made. func_evals: Scalar int32 `Tensor`. Number of function evaluations made. left: A namedtuple, as returned by value_and_gradients_function, of the left end point of the updated bracketing interval. right: A namedtuple, as returned by value_and_gradients_function, of the right end point of the updated bracketing interval. """ def _loop_cond(curr_interval): """Loop condition.""" active = ~(curr_interval.converged | curr_interval.failed) return (curr_interval.iterations < max_iterations) & tf.reduce_any(input_tensor=active)
q266409
_line_search_inner_bisection
test
def _line_search_inner_bisection(
    value_and_gradients_function, search_interval, active, f_lim):
  """Performs bisection and updates the interval."""
  midpoint = (search_interval.left.x + search_interval.right.x) / 2
  val_mid = value_and_gradients_function(midpoint)
  is_valid_mid = hzl.is_finite(val_mid)
  still_active = active & is_valid_mid
  new_failed = active & ~is_valid_mid
  next_interval = search_interval._replace(
      failed=search_interval.failed | new_failed,
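The midpoint step above repeatedly halves the bracketing interval. A scalar sketch of the idea, bisecting on the sign of the derivative of the example quadratic (this is an illustration, not the interval-update logic of the library):

```python
def bisect_on_derivative(df, left, right, iters=60):
    """Bisects [left, right], keeping the derivative sign change inside
    the interval, so the midpoints converge to the minimizer."""
    for _ in range(iters):
        mid = (left + right) / 2
        if df(mid) < 0:        # still descending: minimum is to the right
            left = mid
        else:
            right = mid
    return (left + right) / 2

df = lambda x: 2 * (x - 1.3)   # derivative of (x - 1.3) ** 2
x_min = bisect_on_derivative(df, 0.0, 2.0)
```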
q266410
_prepare_args
test
def _prepare_args(value_and_gradients_function, initial_step_size, val_initial, val_0, approximate_wolfe_threshold): """Prepares the arguments for the line search initialization. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that correspond to scalar tensors of real dtype containing the point at which the function was evaluated, the value of the function, and its derivative at that point. The other namedtuple fields, if present, should be tensors or sequences (possibly nested) of tensors. In usual optimization application, this function would be generated by projecting the multivariate objective function along some specific direction. The direction is determined by some other procedure but should be a descent direction (i.e. the derivative of the projected univariate function must be negative at 0.). Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned namedtuple should each be a tensor of shape [n], with the corresponding input points, function values, and derivatives at those input points. initial_step_size: Scalar positive `Tensor` of real dtype, or a tensor of shape [n] in batching mode. The
q266411
_print
test
def _print(pass_through_tensor, values):
  """Wrapper for tf.Print which supports lists and namedtuples for printing."""
  flat_values = []
  for value in values:
    # Checks if it is a namedtuple.
    if hasattr(value, '_fields'):
      for field in value._fields:
        flat_values.extend([field, _to_str(getattr(value, field))])
      continue
    if isinstance(value, (list, tuple)):
q266412
quadrature_scheme_softmaxnormal_gauss_hermite
test
def quadrature_scheme_softmaxnormal_gauss_hermite( normal_loc, normal_scale, quadrature_size, validate_args=False, name=None): """Use Gauss-Hermite quadrature to form quadrature on `K - 1` simplex. A `SoftmaxNormal` random variable `Y` may be generated via ``` Y = SoftmaxCentered(X), X = Normal(normal_loc, normal_scale) ``` Note: for a given `quadrature_size`, this method is generally less accurate than `quadrature_scheme_softmaxnormal_quantiles`. Args: normal_loc: `float`-like `Tensor` with shape `[b1, ..., bB, K-1]`, B>=0. The location parameter of the Normal used to construct the SoftmaxNormal. normal_scale: `float`-like `Tensor`. Broadcastable with `normal_loc`. The scale parameter of the Normal used to construct the SoftmaxNormal. quadrature_size: Python `int` scalar representing the number of quadrature points. validate_args: Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. name: Python `str` name prefixed to Ops created by this class. Returns: grid: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the convex combination of affine parameters for `K` components. `grid[..., :, n]` is the `n`-th grid point, living in the `K - 1` simplex. probs: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the probabilities associated with each grid point. """ with tf.name_scope( name or "quadrature_scheme_softmaxnormal_gauss_hermite"): normal_loc = tf.convert_to_tensor(value=normal_loc, name="normal_loc")
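Gauss-Hermite nodes and weights integrate against `exp(-x**2)`; rescaling the nodes by `sqrt(2)` and the weights by `1/sqrt(pi)` turns them into a quadrature for a standard normal, which is the building block used above:

```python
import numpy as np

# Ten-point Gauss-Hermite rule.
x, w = np.polynomial.hermite.hermgauss(10)
grid = np.sqrt(2.0) * x        # nodes under N(0, 1)
probs = w / np.sqrt(np.pi)     # weights now sum to one

# Sanity check: the rule recovers E[X**2] = 1 under N(0, 1).
second_moment = np.sum(probs * grid ** 2)
```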
q266413
quadrature_scheme_softmaxnormal_quantiles
test
def quadrature_scheme_softmaxnormal_quantiles( normal_loc, normal_scale, quadrature_size, validate_args=False, name=None): """Use SoftmaxNormal quantiles to form quadrature on `K - 1` simplex. A `SoftmaxNormal` random variable `Y` may be generated via ``` Y = SoftmaxCentered(X), X = Normal(normal_loc, normal_scale) ``` Args: normal_loc: `float`-like `Tensor` with shape `[b1, ..., bB, K-1]`, B>=0. The location parameter of the Normal used to construct the SoftmaxNormal. normal_scale: `float`-like `Tensor`. Broadcastable with `normal_loc`. The scale parameter of the Normal used to construct the SoftmaxNormal. quadrature_size: Python `int` scalar representing the number of quadrature points. validate_args: Python `bool`, default `False`. When `True` distribution parameters are checked for validity despite possibly degrading runtime performance. When `False` invalid inputs may silently render incorrect outputs. name: Python `str` name prefixed to Ops created by this class. Returns: grid: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the convex combination of affine parameters for `K` components. `grid[..., :, n]` is the `n`-th grid point, living in the `K - 1` simplex. probs: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the probabilities associated with each grid point. 
""" with tf.name_scope(name or "softmax_normal_grid_and_probs"): normal_loc = tf.convert_to_tensor(value=normal_loc, name="normal_loc") dt = dtype_util.base_dtype(normal_loc.dtype) normal_scale = tf.convert_to_tensor( value=normal_scale, dtype=dt, name="normal_scale") normal_scale = maybe_check_quadrature_param( normal_scale, "normal_scale", validate_args) dist = normal.Normal(loc=normal_loc, scale=normal_scale) def _get_batch_ndims(): """Helper to get rank(dist.batch_shape), statically if possible.""" ndims = tensorshape_util.rank(dist.batch_shape) if ndims is None: ndims = tf.shape(input=dist.batch_shape_tensor())[0] return ndims batch_ndims = _get_batch_ndims() def _get_final_shape(qs): """Helper to build `TensorShape`.""" bs = tensorshape_util.with_rank_at_least(dist.batch_shape, 1)
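The quantile scheme slices the normal into `quadrature_size` equal-mass bins whose edges are quantiles, each bin carrying probability `1 / quadrature_size`. A scalar sketch using the standard library (the real helper is batched and works on the SoftmaxNormal, not a plain normal):

```python
from statistics import NormalDist

quadrature_size = 4
# Interior bin edges are the k/quadrature_size quantiles of N(0, 1).
edges = [NormalDist().inv_cdf(k / quadrature_size)
         for k in range(1, quadrature_size)]
# Each of the quadrature_size bins has equal probability mass.
probs = [1.0 / quadrature_size] * quadrature_size
```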
q266414
maybe_check_quadrature_param
test
def maybe_check_quadrature_param(param, name, validate_args): """Helper which checks validity of `loc` and `scale` init args.""" with tf.name_scope("check_" + name): assertions = [] if tensorshape_util.rank(param.shape) is not None: if tensorshape_util.rank(param.shape) == 0: raise ValueError("Mixing params must be a (batch of) vector; " "{}.rank={} is not at least one.".format( name, tensorshape_util.rank(param.shape))) elif validate_args: assertions.append( assert_util.assert_rank_at_least( param, 1, message=("Mixing params must be a (batch of) vector; " "{}.rank is not at least one.".format(name)))) # TODO(jvdillon): Remove once we support k-mixtures. if tensorshape_util.with_rank_at_least(param.shape, 1)[-1] is not None: if tf.compat.dimension_value(param.shape[-1]) != 1: raise NotImplementedError("Currently only bimixtures are supported; " "{}.shape[-1]={} is not 1.".format(
q266415
determine_batch_event_shapes
test
def determine_batch_event_shapes(grid, endpoint_affine): """Helper to infer batch_shape and event_shape.""" with tf.name_scope("determine_batch_event_shapes"): # grid # shape: [B, k, q] # endpoint_affine # len=k, shape: [B, d, d] batch_shape = grid.shape[:-2] batch_shape_tensor = tf.shape(input=grid)[:-2] event_shape = None event_shape_tensor = None def _set_event_shape(shape, shape_tensor): if event_shape is None: return shape, shape_tensor return (tf.broadcast_static_shape(event_shape, shape), tf.broadcast_dynamic_shape(event_shape_tensor, shape_tensor)) for aff in endpoint_affine: if aff.shift is not None: batch_shape = tf.broadcast_static_shape(batch_shape, aff.shift.shape[:-1]) batch_shape_tensor = tf.broadcast_dynamic_shape( batch_shape_tensor, tf.shape(input=aff.shift)[:-1]) event_shape, event_shape_tensor = _set_event_shape( aff.shift.shape[-1:], tf.shape(input=aff.shift)[-1:])
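The helper reduces the batch and event shapes of all components with broadcasting; NumPy exposes the same rule directly, which makes the shape arithmetic easy to check:

```python
import numpy as np

# Broadcasting [3, 1] against [4] gives [3, 4], and folding in a
# further [2, 1, 1] batch gives [2, 3, 4] -- the same reduction the
# helper performs over the affine components' shapes.
batch = np.broadcast_shapes((3, 1), (4,))
batch = np.broadcast_shapes(batch, (2, 1, 1))
```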
q266416
interpolate_loc
test
def interpolate_loc(grid, loc):
  """Helper which interpolates between two locs."""
  if len(loc) != 2:
    raise NotImplementedError("Currently only bimixtures are supported; "
                              "len(loc)={} is not 2.".format(len(loc)))
  deg = tf.compat.dimension_value(
      tensorshape_util.with_rank_at_least(grid.shape, 1)[-1])
  if deg is None:
    raise ValueError("Num quadrature grid points must be known prior "
                     "to graph execution.")
  with tf.name_scope("interpolate_loc"):
    if loc is None or (loc[0] is None and loc[1] is None):
      return [None] * deg
    # shape: [B, 1, k, deg]
    w = grid[..., tf.newaxis, :, :]
    loc = [
        x[..., tf.newaxis]  #
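With two component locs, the interpolated loc at grid weight `w` is just the convex combination of the endpoints. A hypothetical scalar stand-in for the batched tensor version above:

```python
import numpy as np

def interp_loc(w, loc0, loc1):
    # Convex combination: w = 0 gives loc0, w = 1 gives loc1.
    return (1.0 - w) * loc0 + w * loc1

w = np.array([0.0, 0.25, 1.0])
locs = interp_loc(w, 0.0, 4.0)
```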
q266417
interpolate_scale
test
def interpolate_scale(grid, scale):
  """Helper which interpolates between two scales."""
  if len(scale) != 2:
    raise NotImplementedError("Currently only bimixtures are supported; "
                              "len(scale)={} is not 2.".format(len(scale)))
  deg = tf.compat.dimension_value(
      tensorshape_util.with_rank_at_least(grid.shape, 1)[-1])
  if deg is None:
    raise ValueError("Num quadrature grid points must be known prior "
q266418
linop_scale
test
def linop_scale(w, op): """Creates weighted `LinOp` from existing `LinOp`.""" # We assume w > 0. (This assumption only relates to the is_* attributes.) with tf.name_scope("linop_scale"): # TODO(b/35301104): LinearOperatorComposition doesn't combine operators, so # special case combinations here. Once it does, this function can be # replaced by: # return linop_composition_lib.LinearOperatorComposition([ # scaled_identity(w), op]) def scaled_identity(w): return tf.linalg.LinearOperatorScaledIdentity( num_rows=op.range_dimension_tensor(), multiplier=w, is_non_singular=op.is_non_singular, is_self_adjoint=op.is_self_adjoint, is_positive_definite=op.is_positive_definite) if isinstance(op, tf.linalg.LinearOperatorIdentity): return scaled_identity(w) if isinstance(op, tf.linalg.LinearOperatorScaledIdentity): return scaled_identity(w * op.multiplier) if isinstance(op, tf.linalg.LinearOperatorDiag): return tf.linalg.LinearOperatorDiag( diag=w[...,
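The special cases the helper handles all reduce to simple matrix identities, e.g. composing a scaled identity with a diagonal operator is the same as scaling the diagonal. A NumPy check of that identity:

```python
import numpy as np

w = 3.0
diag = np.array([1.0, 2.0, 4.0])

# (w * I) @ diag(d) == diag(w * d), which is why the LinearOperatorDiag
# case can return a new diagonal operator directly.
scaled = (w * np.eye(3)) @ np.diag(diag)
```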
q266419
concat_vectors
test
def concat_vectors(*args):
  """Concatenates input vectors, statically if possible."""
  args_ = [tf.get_static_value(x) for x in args]
  if any(vec is None for vec in args_):
    return tf.concat(args, axis=0)
  return [val for vec in args_ for val in vec]
q266420
_log_vector_matrix
test
def _log_vector_matrix(vs, ms):
  """Multiply tensor of vectors by matrices assuming values stored are logs."""
  return tf.reduce_logsumexp(input_tensor=vs[..., tf.newaxis] + ms, axis=-2)
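A log-space vector-matrix product computes `log(exp(vs) @ exp(ms))` stably via log-sum-exp: `result[j] = logsumexp_i(vs[i] + ms[i, j])`. A NumPy sketch of the same pattern:

```python
import numpy as np

def logsumexp(a, axis):
    # Stable log-sum-exp: subtract the max before exponentiating.
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def log_vector_matrix(vs, ms):
    """log(exp(vs) @ exp(ms)) computed stably."""
    return logsumexp(vs[..., np.newaxis] + ms, axis=-2)

vs = np.log([0.5, 0.5])
ms = np.log([[0.9, 0.1],
             [0.2, 0.8]])
# exp(result) should equal [0.5, 0.5] @ [[0.9, 0.1], [0.2, 0.8]].
out = np.exp(log_vector_matrix(vs, ms))
```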
q266421
_log_matrix_vector
test
def _log_matrix_vector(ms, vs):
  """Multiply tensor of matrices by vectors assuming values stored are logs."""
  return tf.reduce_logsumexp(input_tensor=ms + vs[..., tf.newaxis, :], axis=-1)
q266422
_vector_matrix
test
def _vector_matrix(vs, ms):
  """Multiply tensor of vectors by matrices."""
  return tf.reduce_sum(input_tensor=vs[..., tf.newaxis] * ms, axis=-2)
q266423
_extract_log_probs
test
def _extract_log_probs(num_states, dist):
  """Tabulate log probabilities from a batch of distributions."""
  states = tf.reshape(tf.range(num_states),
                      tf.concat([[num_states],
q266424
HiddenMarkovModel._marginal_hidden_probs
test
def _marginal_hidden_probs(self):
  """Compute marginal pdf for each individual observable."""
  initial_log_probs = tf.broadcast_to(
      self._log_init,
      tf.concat([self.batch_shape_tensor(), [self._num_states]], axis=0))
  # initial_log_probs :: batch_shape num_states
  if self._num_steps > 1:
    transition_log_probs = self._log_trans

    def forward_step(log_probs, _):
      return _log_vector_matrix(log_probs, transition_log_probs)

    dummy_index = tf.zeros(self._num_steps - 1, dtype=tf.float32)
    forward_log_probs = tf.scan(forward_step, dummy_index,
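The marginal state probabilities evolve by repeated vector-matrix products with the transition matrix (the method above does the same in log space). Using the two-state weather chain that appears later in this file:

```python
import numpy as np

p0 = np.array([0.8, 0.2])          # initial distribution
T = np.array([[0.7, 0.3],          # transition probabilities
              [0.2, 0.8]])

p1 = p0 @ T                        # marginals after one step
p2 = p1 @ T                        # marginals after two steps
```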
q266425
HiddenMarkovModel.posterior_marginals
test
def posterior_marginals(self, observations, name=None): """Compute marginal posterior distribution for each state. This function computes, for each time step, the marginal conditional probability that the hidden Markov model was in each possible state given the observations that were made at each time step. So if the hidden states are `z[0],...,z[num_steps - 1]` and the observations are `x[0], ..., x[num_steps - 1]`, then this function computes `P(z[i] | x[0], ..., x[num_steps - 1])` for all `i` from `0` to `num_steps - 1`. This operation is sometimes called smoothing. It uses a form of the forward-backward algorithm. Note: the behavior of this function is undefined if the `observations` argument represents impossible observations from the model. Args: observations: A tensor representing a batch of observations made on the hidden Markov model. The rightmost dimension of this tensor gives the steps in a sequence of observations from a single sample from the hidden Markov model. The size of this dimension should match the `num_steps` parameter of the hidden Markov model object. The other dimensions are the dimensions of the batch and these are broadcast with the hidden Markov model's parameters. name: Python `str` name prefixed to Ops created by this class. Default value: "HiddenMarkovModel". Returns: posterior_marginal: A `Categorical` distribution object representing the marginal probability of the hidden Markov model being in each state at each step. The rightmost dimension of the `Categorical` distributions batch will equal the `num_steps` parameter providing one marginal distribution for each step. The other dimensions are the dimensions corresponding to the batch of observations. Raises: ValueError: if rightmost dimension of `observations` does not have size `num_steps`. 
""" with tf.name_scope(name or "posterior_marginals"): with tf.control_dependencies(self._runtime_assertions): observation_tensor_shape = tf.shape(input=observations) with self._observation_shape_preconditions(observation_tensor_shape): observation_batch_shape = observation_tensor_shape[ :-1 - self._underlying_event_rank] observation_event_shape = observation_tensor_shape[ -1 - self._underlying_event_rank:] batch_shape = tf.broadcast_dynamic_shape(observation_batch_shape, self.batch_shape_tensor()) log_init = tf.broadcast_to(self._log_init, tf.concat([batch_shape, [self._num_states]], axis=0)) log_transition = self._log_trans observations = tf.broadcast_to(observations, tf.concat([batch_shape, observation_event_shape], axis=0)) observation_rank = tf.rank(observations) underlying_event_rank = self._underlying_event_rank observations = distribution_util.move_dimension( observations, observation_rank - underlying_event_rank - 1, 0) observations = tf.expand_dims( observations, observation_rank - underlying_event_rank)
q266426
HiddenMarkovModel.posterior_mode
test
def posterior_mode(self, observations, name=None): """Compute maximum likelihood sequence of hidden states. When this function is provided with a sequence of observations `x[0], ..., x[num_steps - 1]`, it returns the sequence of hidden states `z[0], ..., z[num_steps - 1]`, drawn from the underlying Markov chain, that is most likely to yield those observations. It uses the [Viterbi algorithm]( https://en.wikipedia.org/wiki/Viterbi_algorithm). Note: the behavior of this function is undefined if the `observations` argument represents impossible observations from the model. Note: if there isn't a unique most likely sequence then one of the equally most likely sequences is chosen. Args: observations: A tensor representing a batch of observations made on the hidden Markov model. The rightmost dimensions of this tensor correspond to the dimensions of the observation distributions of the underlying Markov chain. The next dimension from the right indexes the steps in a sequence of observations from a single sample from the hidden Markov model. The size of this dimension should match the `num_steps` parameter of the hidden Markov model object. The other dimensions are the dimensions of the batch and these are broadcast with the hidden Markov model's parameters. name: Python `str` name prefixed to Ops created by this class. Default value: "HiddenMarkovModel". Returns: posterior_mode: A `Tensor` representing the most likely sequence of hidden states. The rightmost dimension of this tensor will equal the `num_steps` parameter providing one hidden state for each step. The other dimensions are those of the batch. Raises: ValueError: if the `observations` tensor does not consist of sequences of `num_steps` observations. #### Examples ```python tfd = tfp.distributions # A simple weather model. # Represent a cold day with 0 and a hot day with 1. # Suppose the first day of a sequence has a 0.8 chance of being cold. 
initial_distribution = tfd.Categorical(probs=[0.8, 0.2]) # Suppose a cold day has a 30% chance of being followed by a hot day # and a hot day has a 20% chance of being followed by a cold day. transition_distribution = tfd.Categorical(probs=[[0.7, 0.3], [0.2, 0.8]]) # Suppose additionally that on each day the temperature is # normally distributed with mean and standard deviation 0 and 5 on # a cold day and mean and standard deviation 15 and 10 on a hot day. observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.]) # This gives the hidden Markov model: model = tfd.HiddenMarkovModel( initial_distribution=initial_distribution, transition_distribution=transition_distribution, observation_distribution=observation_distribution, num_steps=7) # Suppose we observe gradually rising temperatures over a week: temps = [-2., 0., 2., 4., 6., 8., 10.] # We can now compute the most probable sequence of hidden states: model.posterior_mode(temps) # The result is [0 0 0 0 0 1 1] telling us that the transition # from "cold" to "hot" most likely happened between the # 5th and 6th days. ``` """ with tf.name_scope(name or "posterior_mode"): with tf.control_dependencies(self._runtime_assertions): observation_tensor_shape = tf.shape(input=observations) with self._observation_shape_preconditions(observation_tensor_shape): observation_batch_shape = observation_tensor_shape[ :-1 - self._underlying_event_rank] observation_event_shape = observation_tensor_shape[ -1 - self._underlying_event_rank:] batch_shape = tf.broadcast_dynamic_shape(observation_batch_shape, self.batch_shape_tensor()) log_init = tf.broadcast_to(self._log_init, tf.concat([batch_shape, [self._num_states]], axis=0)) observations = tf.broadcast_to(observations, tf.concat([batch_shape, observation_event_shape], axis=0)) observation_rank
python
{ "resource": "" }
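The Viterbi recursion that `posterior_mode` implements can be sketched in plain NumPy. This is a hypothetical standalone helper (`viterbi` and `norm_logpdf` are illustrative names, not TFP APIs), demonstrated on the weather example from the docstring:

```python
import numpy as np

def norm_logpdf(x, loc, scale):
    # Log-density of Normal(loc, scale) at x.
    return -0.5 * np.log(2. * np.pi * scale**2) - (x - loc)**2 / (2. * scale**2)

def viterbi(log_init, log_trans, log_obs):
    """Most likely hidden-state sequence given per-step observation log-likelihoods.

    log_init: [S] log initial-state probabilities.
    log_trans: [S, S] log transition matrix (row = from-state).
    log_obs: [T, S] log-likelihood of each observation under each state.
    """
    num_steps, num_states = log_obs.shape
    delta = log_init + log_obs[0]  # best log-score of any path ending in each state
    backpointers = np.zeros((num_steps, num_states), dtype=int)
    for t in range(1, num_steps):
        scores = delta[:, np.newaxis] + log_trans  # [from-state, to-state]
        backpointers[t] = np.argmax(scores, axis=0)
        delta = np.max(scores, axis=0) + log_obs[t]
    # Trace the best path backwards from the best final state.
    states = [int(np.argmax(delta))]
    for t in range(num_steps - 1, 0, -1):
        states.append(int(backpointers[t, states[-1]]))
    return states[::-1]

# The weather model from the docstring: gradually rising temperatures.
temps = np.array([-2., 0., 2., 4., 6., 8., 10.])
log_obs = np.stack([norm_logpdf(temps, 0., 5.),     # cold-day likelihoods
                    norm_logpdf(temps, 15., 10.)],  # hot-day likelihoods
                   axis=-1)
path = viterbi(np.log([0.8, 0.2]), np.log([[0.7, 0.3], [0.2, 0.8]]), log_obs)
# path == [0, 0, 0, 0, 0, 1, 1], matching model.posterior_mode(temps) above.
```

This recovers the `[0 0 0 0 0 1 1]` sequence quoted in the docstring, with the cold-to-hot transition between the 5th and 6th days.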
q266427
_choose_random_direction
test
def _choose_random_direction(current_state_parts, batch_rank, seed=None): """Chooses a random direction in the event space.""" seed_gen = distributions.SeedStream(seed, salt='_choose_random_direction') # Chooses the random directions across each of the input components. rnd_direction_parts = [ tf.random.normal( tf.shape(input=current_state_part), dtype=tf.float32, seed=seed_gen())
python
{ "resource": "" }
q266428
_sample_next
test
def _sample_next(target_log_prob_fn, current_state_parts, step_sizes, max_doublings, current_target_log_prob, batch_rank, seed=None, name=None): """Applies a single iteration of the slice sampling update. Applies hit-and-run style slice sampling. Chooses a uniform random direction on the unit sphere in the event space. Applies the one-dimensional slice sampling update along that direction. Args: target_log_prob_fn: Python callable which takes an argument like `*current_state_parts` and returns its (possibly unnormalized) log-density under the target distribution. current_state_parts: Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `independent_chain_ndims` of the `Tensor`(s) index different chains. step_sizes: Python `list` of `Tensor`s. Provides a measure of the width of the density. Used to find the slice bounds. Must broadcast with the shape of `current_state_parts`. max_doublings: Integer number of doublings to allow while locating the slice boundaries. current_target_log_prob: `Tensor` representing the value of `target_log_prob_fn(*current_state_parts)`. The only reason to specify this argument is to reduce TF graph size. batch_rank: Integer. The number of axes in the state that correspond to independent batches. seed: Python integer to seed random number generators. name: Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., 'find_slice_bounds'). Returns: proposed_state_parts: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) at each result step. Has same shape as input `current_state_parts`. proposed_target_log_prob: `Tensor` representing the value of `target_log_prob_fn` at `next_state`. bounds_satisfied: Boolean `Tensor` of the same shape as the log density. True indicates that an interval containing the slice for that batch was found successfully. 
direction: `Tensor` or Python list of `Tensors`s representing the direction along which the slice was sampled. Has the same shape and dtype(s) as `current_state_parts`. upper_bounds: `Tensor` of batch shape and the dtype of the input state. The upper bounds of the slices along the sampling direction. lower_bounds: `Tensor` of batch shape and the dtype of the input state. The lower bounds of the slices along the sampling direction. """ with tf.compat.v1.name_scope(name, 'sample_next', [ current_state_parts, step_sizes, max_doublings, current_target_log_prob, batch_rank ]): # First step: Choose a random direction. # Direction is a list of tensors. The i'th tensor should have the same shape # as the i'th state part. direction = _choose_random_direction(current_state_parts, batch_rank=batch_rank, seed=seed) # Interpolates the step sizes for the chosen direction. # Applies an ellipsoidal interpolation to compute the step direction for # the chosen direction. Suppose we are given step sizes for each direction. # Label these s_1, s_2, ... s_k. These are the step sizes to use if moving # in a direction parallel to one of the axes. Consider an ellipsoid which # intercepts the i'th axis at s_i. The step size for a direction specified # by the unit vector (n_1, n_2 ...n_k) is then defined as the intersection # of the line through this vector with this ellipsoid. # # One can show that the length of the vector from the origin to the # intersection point is given by: # 1 / sqrt(n_1^2 / s_1^2 + n_2^2 / s_2^2 + ...). # # Proof: # The equation of the ellipsoid is: # Sum_i [x_i^2 / s_i^2 ] = 1. Let n be a unit direction vector. Points # along the line given by n may be parameterized as alpha*n where alpha is # the distance along the vector. Plugging this into the equation for the # ellipsoid, we get: # alpha^2 ( n_1^2 / s_1^2 + n_2^2 / s_2^2 + ...) = 1 # so alpha = \sqrt { \frac{1} { ( n_1^2 / s_1^2 + n_2^2 / s_2^2 + ...) 
} } reduce_axes = [tf.range(batch_rank, tf.rank(dirn_part)) for dirn_part in direction] components = [ tf.reduce_sum( input_tensor=(dirn_part / step_size)**2, axis=reduce_axes[i]) for i, (step_size, dirn_part) in enumerate(zip(step_sizes, direction)) ] step_size = tf.math.rsqrt(tf.add_n(components)) # Computes the rank of a tensor. Uses the static rank if possible. def _get_rank(x): return (len(x.shape.as_list()) if x.shape.dims is not None else tf.rank(x)) state_part_ranks = [_get_rank(part) for
python
{ "resource": "" }
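The ellipsoidal step-size interpolation derived in the comment above reduces to the one-line formula `1 / sqrt(sum_i n_i^2 / s_i^2)`. A minimal NumPy sketch (`ellipsoidal_step_size` is an illustrative name, not the TFP API):

```python
import numpy as np

def ellipsoidal_step_size(direction, step_sizes):
    """Distance from the origin to the ellipsoid with axis intercepts
    `step_sizes`, along the (not necessarily normalized) vector `direction`."""
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)  # unit direction vector
    s = np.asarray(step_sizes, dtype=float)
    return 1.0 / np.sqrt(np.sum(n**2 / s**2))
```

Moving parallel to an axis recovers that axis's step size, and a sphere (all `s_i` equal) gives the same step in every direction, as the proof in the comment implies.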
q266429
_maybe_call_fn
test
def _maybe_call_fn(fn, fn_arg_list, fn_result=None, description='target_log_prob'): """Helper which computes `fn_result` if needed.""" fn_arg_list = (list(fn_arg_list) if mcmc_util.is_list_like(fn_arg_list) else [fn_arg_list]) if fn_result is None: fn_result = fn(*fn_arg_list)
python
{ "resource": "" }
q266430
_right_pad
test
def _right_pad(x, final_rank): """Pads the shape of x to the right to be of rank final_rank. Expands the dims of `x` to the right such that its rank is equal to final_rank. For example, if `x` is of shape [1, 5, 7, 2] and `final_rank` is 7, we return padded_x, which is of shape [1, 5, 7, 2, 1, 1, 1]. Args: x: The tensor whose shape is to be padded. final_rank: Scalar int32 `Tensor` or
python
{ "resource": "" }
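`_right_pad`'s effect is just a reshape that appends trailing singleton dimensions. A minimal NumPy sketch (the `right_pad` helper name is hypothetical):

```python
import numpy as np

def right_pad(x, final_rank):
    """Append size-1 dimensions on the right until `x` has rank `final_rank`."""
    x = np.asarray(x)
    return x.reshape(x.shape + (1,) * (final_rank - x.ndim))
```

On the docstring's example, a `[1, 5, 7, 2]` array padded to rank 7 becomes `[1, 5, 7, 2, 1, 1, 1]`.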
q266431
SliceSampler.one_step
test
def one_step(self, current_state, previous_kernel_results): """Runs one iteration of Slice Sampler. Args: current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`. previous_kernel_results: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function.) Returns: next_state: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as `current_state`. kernel_results: `collections.namedtuple` of internal calculations used to advance the chain. Raises: ValueError: if there isn't one `step_size` or a list with same length as `current_state`. TypeError: if `not target_log_prob.dtype.is_floating`. """ with tf.compat.v1.name_scope( name=mcmc_util.make_name(self.name, 'slice', 'one_step'), values=[ self.step_size, self.max_doublings, self._seed_stream, current_state, previous_kernel_results.target_log_prob ]): with tf.compat.v1.name_scope('initialize'): [ current_state_parts, step_sizes, current_target_log_prob ] = _prepare_args( self.target_log_prob_fn, current_state,
python
{ "resource": "" }
q266432
_build_trainable_posterior
test
def _build_trainable_posterior(param, initial_loc_fn): """Builds a transformed-normal variational distribution over a parameter's support.""" loc = tf.compat.v1.get_variable( param.name + '_loc', initializer=lambda: initial_loc_fn(param), dtype=param.prior.dtype, use_resource=True) scale = tf.nn.softplus( tf.compat.v1.get_variable( param.name + '_scale',
python
{ "resource": "" }
q266433
build_factored_variational_loss
test
def build_factored_variational_loss(model, observed_time_series, init_batch_shape=(), seed=None, name=None): """Build a loss function for variational inference in STS models. Variational inference searches for the distribution within some family of approximate posteriors that minimizes a divergence between the approximate posterior `q(z)` and true posterior `p(z|observed_time_series)`. By converting inference to optimization, it's generally much faster than sampling-based inference algorithms such as HMC. The tradeoff is that the approximating family rarely contains the true posterior, so it may miss important aspects of posterior structure (in particular, dependence between variables) and should not be blindly trusted. Results may vary; it's generally wise to compare to HMC to evaluate whether inference quality is sufficient for your task at hand. This method constructs a loss function for variational inference using the Kullback-Leibler divergence `KL[q(z) || p(z|observed_time_series)]`, with an approximating family given by independent Normal distributions transformed to the appropriate parameter space for each parameter. Minimizing this loss (the negative ELBO) maximizes a lower bound on the log model evidence `log p(observed_time_series)`. This is equivalent to the 'mean-field' method implemented in [1], and is a standard approach. The resulting posterior approximations are unimodal; they will tend to underestimate posterior uncertainty when the true posterior contains multiple modes (the `KL[q||p]` divergence encourages choosing a single mode) or dependence between variables. Args: model: An instance of `StructuralTimeSeries` representing a time-series model. This represents a joint distribution over time-series and their parameters with batch shape `[b1, ..., bN]`. observed_time_series: `float` `Tensor` of shape `concat([sample_shape, model.batch_shape, [num_timesteps, 1]])` where `sample_shape` corresponds to i.i.d. 
observations, and the trailing `[1]` dimension may (optionally) be omitted if `num_timesteps > 1`. May optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes a mask `Tensor` to specify timesteps with missing observations. init_batch_shape: Batch shape (Python `tuple`, `list`, or `int`) of initial states to optimize in parallel. Default value: `()`. (i.e., just run a single optimization). seed: Python integer to seed the random number generator. name: Python `str` name prefixed to ops created by this function. Default value: `None` (i.e., 'build_factored_variational_loss'). Returns: variational_loss: `float` `Tensor` of shape `concat([init_batch_shape, model.batch_shape])`, encoding a stochastic estimate of an upper bound on the negative model evidence `-log p(y)`. Minimizing this loss performs variational inference; the gap between the variational bound and the true (generally unknown) model evidence corresponds to the divergence `KL[q||p]` between the approximate and true posterior. variational_distributions: `collections.OrderedDict` giving the approximate posterior for each model parameter. The keys are Python `str` parameter names in order, corresponding to `[param.name for param in model.parameters]`. The values are `tfd.Distribution` instances with batch shape `concat([init_batch_shape, model.batch_shape])`; these will typically be of the form `tfd.TransformedDistribution(tfd.Normal(...), bijector=param.bijector)`. 
#### Examples Assume we've built a structural time-series model: ```python day_of_week = tfp.sts.Seasonal( num_seasons=7, observed_time_series=observed_time_series, name='day_of_week') local_linear_trend = tfp.sts.LocalLinearTrend( observed_time_series=observed_time_series, name='local_linear_trend') model = tfp.sts.Sum(components=[day_of_week, local_linear_trend], observed_time_series=observed_time_series) ``` To run variational inference, we simply construct the loss and optimize it: ```python (variational_loss, variational_distributions) = tfp.sts.build_factored_variational_loss( model=model, observed_time_series=observed_time_series) train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss)
python
{ "resource": "" }
q266434
_minimize_in_graph
test
def _minimize_in_graph(build_loss_fn, num_steps=200, optimizer=None): """Run an optimizer within the graph to minimize a loss function.""" optimizer = tf.compat.v1.train.AdamOptimizer( 0.1) if optimizer is None else optimizer def train_loop_body(step): train_op = optimizer.minimize( build_loss_fn if tf.executing_eagerly() else build_loss_fn())
python
{ "resource": "" }
q266435
moments_of_masked_time_series
test
def moments_of_masked_time_series(time_series_tensor, broadcast_mask): """Compute mean and variance, accounting for a mask. Args: time_series_tensor: float `Tensor` time series of shape `concat([batch_shape, [num_timesteps]])`. broadcast_mask: bool `Tensor` of the same shape as `time_series`. Returns: mean: float `Tensor` of shape `batch_shape`. variance: float `Tensor` of shape `batch_shape`. """ num_unmasked_entries = tf.cast( tf.reduce_sum(input_tensor=tf.cast(~broadcast_mask, tf.int32), axis=-1), time_series_tensor.dtype) # Manually compute mean and variance, excluding masked entries.
python
{ "resource": "" }
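The masked mean and variance can be computed by hand just as the function above does: zero out masked entries and divide by the count of unmasked ones. A NumPy sketch (the `masked_moments` helper name is hypothetical):

```python
import numpy as np

def masked_moments(series, mask):
    """Mean and variance over the last axis, ignoring entries where mask is True."""
    keep = ~np.asarray(mask, dtype=bool)
    n = keep.sum(axis=-1)                                   # unmasked count
    mean = np.where(keep, series, 0.).sum(axis=-1) / n
    sq_dev = np.where(keep, (series - mean[..., np.newaxis])**2, 0.).sum(axis=-1)
    return mean, sq_dev / n
```

For example, `series = [1., 2., 3., 100.]` with the last entry masked yields mean `2.0` and variance `2/3`, untouched by the outlier.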
q266436
initial_value_of_masked_time_series
test
def initial_value_of_masked_time_series(time_series_tensor, broadcast_mask): """Get the first unmasked entry of each time series in the batch. Args: time_series_tensor: float `Tensor` of shape [..., num_timesteps]. broadcast_mask: bool `Tensor` of same shape as `time_series`. """ num_timesteps = tf.shape(input=time_series_tensor)[-1] # Compute the index of the first unmasked entry for each series in the batch. unmasked_negindices = ( tf.cast(~broadcast_mask, tf.int32) * tf.range(num_timesteps, 0, -1)) first_unmasked_indices = num_timesteps - tf.reduce_max(
python
{ "resource": "" }
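The reversed-range trick used above to locate the first unmasked entry is compact but non-obvious; a NumPy sketch (the `first_unmasked_value` helper name is hypothetical) makes it concrete:

```python
import numpy as np

def first_unmasked_value(series, mask):
    """First entry of each series in the batch whose mask is False."""
    series = np.asarray(series)
    mask = np.asarray(mask, dtype=bool)
    t = series.shape[-1]
    # (~mask) * [T, T-1, ..., 1] is largest at the earliest unmasked position,
    # so T - max(...) recovers that position's index.
    neg_indices = (~mask).astype(int) * np.arange(t, 0, -1)
    first_idx = t - neg_indices.max(axis=-1)
    return np.take_along_axis(series, first_idx[..., np.newaxis], axis=-1)[..., 0]
```

For a batch of two series where the first is masked until index 2 and the second is unmasked only at index 0, this picks out the entries at those indices.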
q266437
broadcast_batch_shape
test
def broadcast_batch_shape(distributions): """Get broadcast batch shape from distributions, statically if possible.""" # Static case batch_shape = distributions[0].batch_shape for distribution in distributions: batch_shape = tf.broadcast_static_shape(batch_shape, distribution.batch_shape) if batch_shape.is_fully_defined():
python
{ "resource": "" }
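NumPy exposes the same static broadcasting rule directly, so the fold over distributions can be sketched on plain shape tuples (helper name hypothetical; `np.broadcast_shapes` requires numpy >= 1.20):

```python
import numpy as np

def broadcast_batch_shape(shapes):
    """Fold the broadcasting rule over a list of batch shapes."""
    result = shapes[0]
    for shape in shapes[1:]:
        result = np.broadcast_shapes(result, shape)
    return result
```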
q266438
factored_joint_mvn
test
def factored_joint_mvn(distributions): """Combine MultivariateNormals into a factored joint distribution. Given a list of multivariate normal distributions `dist[i] = Normal(loc[i], scale[i])`, construct the joint distribution given by concatenating independent samples from these distributions. This is multivariate normal with mean vector given by the concatenation of the component mean vectors, and block-diagonal covariance matrix in which the blocks are the component covariances. Note that for computational efficiency, multivariate normals are represented by a 'scale' (factored covariance) linear operator rather than the full covariance matrix. Args: distributions: Python `iterable` of MultivariateNormal distribution instances (e.g., `tfd.MultivariateNormalDiag`, `tfd.MultivariateNormalTriL`, etc.). These must be broadcastable to a consistent batch shape,
python
{ "resource": "" }
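The factored joint described above concatenates the component mean vectors and places the component covariances on the block diagonal. A NumPy sketch on dense covariance matrices (the real code keeps factored scale operators for efficiency; the helper name is hypothetical):

```python
import numpy as np

def factored_joint_mvn_params(locs, covs):
    """Concatenated mean and block-diagonal covariance of independent MVNs."""
    joint_loc = np.concatenate(locs, axis=-1)
    dims = [c.shape[-1] for c in covs]
    joint_cov = np.zeros((sum(dims), sum(dims)))
    offset = 0
    for cov, d in zip(covs, dims):
        joint_cov[offset:offset + d, offset:offset + d] = cov
        offset += d
    return joint_loc, joint_cov
```

Cross-covariances between components are zero, reflecting that the joint sample is a concatenation of independent draws.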
q266439
sum_mvns
test
def sum_mvns(distributions): """Attempt to sum MultivariateNormal distributions. The sum of (multivariate) normal random variables is itself (multivariate) normal, with mean given by the sum of means and (co)variance given by the sum of (co)variances. This method exploits this fact to compute the sum of a list of `tfd.MultivariateNormalDiag` objects. It may in the future be extended to support summation of other forms of (Multivariate)Normal distributions. Args: distributions: Python `iterable` of `tfd.MultivariateNormalDiag` distribution instances. These must all have the same event shape, and broadcast to a consistent batch shape. Returns: sum_distribution: A `tfd.MultivariateNormalDiag` instance with mean equal to the sum of input means and covariance equal to the sum of input covariances. """ graph_parents = [tensor for distribution in distributions
python
{ "resource": "" }
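For `MultivariateNormalDiag` components specifically, summing means adding the locs and adding the variances, so the scale diagonals add in quadrature. A NumPy sketch of the parameter arithmetic (helper name hypothetical):

```python
import numpy as np

def sum_mvn_diag(locs, scale_diags):
    """Parameters of the sum of independent MultivariateNormalDiag variables."""
    loc = np.sum(locs, axis=0)                                  # means add
    scale_diag = np.sqrt(np.sum(np.square(scale_diags), axis=0))  # variances add
    return loc, scale_diag
```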
q266440
empirical_statistics
test
def empirical_statistics(observed_time_series): """Compute statistics of a provided time series, as heuristic initialization. Args: observed_time_series: `Tensor` representing a time series, or batch of time series, of shape either `batch_shape + [num_timesteps, 1]` or `batch_shape + [num_timesteps]` (allowed if `num_timesteps > 1`). Returns: observed_mean: `Tensor` of shape `batch_shape`, giving the empirical mean of each time series in the batch. observed_stddev: `Tensor` of shape `batch_shape`, giving the empirical standard deviation of each time series in the batch. observed_initial_centered: `Tensor` of shape `batch_shape`, giving the initial value of each time series in the batch after centering (subtracting the mean). """ with tf.compat.v1.name_scope( 'empirical_statistics', values=[observed_time_series]): [ observed_time_series, mask ] = canonicalize_observed_time_series_with_mask(observed_time_series) squeezed_series = observed_time_series[..., 0] if mask is None: observed_mean, observed_variance = tf.nn.moments( x=squeezed_series, axes=-1) observed_initial = squeezed_series[..., 0] else: broadcast_mask = tf.broadcast_to(tf.cast(mask, tf.bool), tf.shape(input=squeezed_series)) observed_mean, observed_variance = (
python
{ "resource": "" }
q266441
_maybe_expand_trailing_dim
test
def _maybe_expand_trailing_dim(observed_time_series_tensor): """Ensures `observed_time_series_tensor` has a trailing dimension of size 1. The `tfd.LinearGaussianStateSpaceModel` Distribution has event shape of `[num_timesteps, observation_size]`, but canonical BSTS models are univariate, so their observation_size is always `1`. The extra trailing dimension gets annoying, so this method allows arguments with or without the extra dimension. There is no ambiguity except in the trivial special case where `num_timesteps = 1`; this can be avoided by specifying any unit-length series in the explicit `[num_timesteps, 1]` style. Most users should not call this method directly, and instead call `canonicalize_observed_time_series_with_mask`, which handles converting to `Tensor` and specifying an optional missingness mask. Args: observed_time_series_tensor: `Tensor` of shape `batch_shape + [num_timesteps, 1]` or `batch_shape + [num_timesteps]`, where `num_timesteps > 1`. Returns: expanded_time_series: `Tensor` of shape `batch_shape + [num_timesteps, 1]`. """ with tf.compat.v1.name_scope(
python
{ "resource": "" }
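A NumPy sketch of the same convention, assuming `num_timesteps > 1` so the trailing dimension is unambiguous (the helper name is hypothetical):

```python
import numpy as np

def maybe_expand_trailing_dim(x):
    """Ensure a trailing observation dimension of size 1."""
    x = np.asarray(x)
    # Already in [..., num_timesteps, 1] form: leave unchanged.
    if x.ndim > 1 and x.shape[-1] == 1:
        return x
    # Otherwise treat the last axis as num_timesteps and append the unit dim.
    return x[..., np.newaxis]
```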
q266442
canonicalize_observed_time_series_with_mask
test
def canonicalize_observed_time_series_with_mask( maybe_masked_observed_time_series): """Extract a Tensor with canonical shape and optional mask. Args: maybe_masked_observed_time_series: a `Tensor`-like object with shape `[..., num_timesteps]` or `[..., num_timesteps, 1]`, or a `tfp.sts.MaskedTimeSeries` containing such an object. Returns: masked_time_series: a `tfp.sts.MaskedTimeSeries` namedtuple, in which the `observed_time_series` is converted to `Tensor` with canonical shape `[..., num_timesteps, 1]`, and `is_missing` is either `None` or a boolean `Tensor`. """ with tf.compat.v1.name_scope('canonicalize_observed_time_series_with_mask'): if hasattr(maybe_masked_observed_time_series, 'is_missing'): observed_time_series = ( maybe_masked_observed_time_series.time_series) is_missing = maybe_masked_observed_time_series.is_missing else: observed_time_series = maybe_masked_observed_time_series is_missing = None observed_time_series
python
{ "resource": "" }
q266443
mix_over_posterior_draws
test
def mix_over_posterior_draws(means, variances): """Construct a predictive normal distribution that mixes over posterior draws. Args: means: float `Tensor` of shape `[num_posterior_draws, ..., num_timesteps]`. variances: float `Tensor` of shape `[num_posterior_draws, ..., num_timesteps]`. Returns: mixture_dist: `tfd.MixtureSameFamily(tfd.Independent(tfd.Normal))` instance representing a uniform mixture over the posterior samples, with `batch_shape = ...` and `event_shape = [num_timesteps]`. """ # The inputs `means`, `variances` have shape # `concat([ # [num_posterior_draws], # sample_shape, # batch_shape, # [num_timesteps]])` # Because MixtureSameFamily mixes over the rightmost batch dimension, # we need to move the `num_posterior_draws` dimension to be rightmost # in the batch shape. This requires use of `Independent` (to preserve # `num_timesteps` as part of the event shape) and `move_dimension`. # TODO(b/120245392): enhance `MixtureSameFamily` to reduce along an # arbitrary axis, and eliminate `move_dimension` calls here. with tf.compat.v1.name_scope(
python
{ "resource": "" }
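While the function above returns a full `MixtureSameFamily` distribution, the first two moments of such a uniform mixture have a simple closed form, which is a useful sanity check. A NumPy sketch (helper name hypothetical):

```python
import numpy as np

def mixture_mean_and_variance(means, variances):
    """Moments of a uniform mixture of normals, mixing over the leading axis."""
    mix_mean = means.mean(axis=0)
    # E[X^2] = mean over draws of (var + mean^2); then subtract the mixture mean
    # squared (law of total variance).
    mix_var = (variances + means**2).mean(axis=0) - mix_mean**2
    return mix_mean, mix_var
```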
q266444
Uniform.range
test
def range(self, name="range"): """`high - low`."""
python
{ "resource": "" }
q266445
_make_summary_statistic
test
def _make_summary_statistic(attr): """Factory for making summary statistics, eg, mean, mode, stddev.""" def _fn(self): if any(self._dist_fn_args): # pylint: disable=protected-access raise ValueError( 'Can only compute ' + attr + ' when all distributions are '
python
{ "resource": "" }
q266446
_unify_call_signature
test
def _unify_call_signature(i, dist_fn): """Creates `dist_fn_wrapped` which calls `dist_fn` with all prev nodes. Args: i: Python `int` corresponding to position in topologically sorted DAG. dist_fn: Python `callable` which takes a subset of previously constructed distributions (in reverse order) and produces a new distribution instance. Returns: dist_fn_wrapped: Python `callable` which takes all previous distributions (in non reverse order) and produces a new distribution instance. args: `tuple` of `str` representing the arg names of `dist_fn` (and in non wrapped, "natural" order). `None` is returned only if the input is not a `callable`. """ if distribution_util.is_distribution_instance(dist_fn): return (lambda *_: dist_fn), None if not callable(dist_fn): raise TypeError('{} must be either `tfd.Distribution`-like or ' '`callable`.'.format(dist_fn)) args = _get_required_args(dist_fn) if not args: return (lambda
python
{ "resource": "" }
q266447
_resolve_distribution_names
test
def _resolve_distribution_names(dist_fn_args, dist_names, leaf_name): """Uses arg names to resolve distribution names.""" if dist_names is None: dist_names = [] else: dist_names = dist_names.copy() n = len(dist_fn_args) dist_names.extend([None]*(n - len(dist_names))) for i_, args in enumerate(reversed(dist_fn_args)): if not args: continue # There's no args to analyze. i = n -
python
{ "resource": "" }
q266448
_get_required_args
test
def _get_required_args(fn): """Returns the distribution's required args.""" argspec = tf_inspect.getfullargspec(fn) args = argspec.args if tf_inspect.isclass(fn): args = args[1:] # Remove the `self` arg. if argspec.defaults: # Remove the args which have defaults. By convention we only feed # *required args*. This means some distributions must always be wrapped #
python
{ "resource": "" }
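A plain-`inspect` sketch of the same argument filtering (it mirrors the logic above; only the plain-function branch is exercised here, since class introspection details vary):

```python
import inspect

def get_required_args(fn):
    """Names of fn's positional args that have no default value."""
    spec = inspect.getfullargspec(fn)
    args = spec.args
    if inspect.isclass(fn):
        args = args[1:]  # drop the `self` arg of __init__
    if spec.defaults:
        args = args[:-len(spec.defaults)]  # drop args which have defaults
    return tuple(args)
```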
q266449
_kl_joint_joint
test
def _kl_joint_joint(d0, d1, name=None): """Calculate the KL divergence between two `JointDistributionSequential`s. Args: d0: instance of a `JointDistributionSequential` object. d1: instance of a `JointDistributionSequential` object. name: (optional) Name to use for created operations. Default value: `"kl_joint_joint"`. Returns: kl_joint_joint: `Tensor` The sum of KL divergences between elemental distributions of two joint distributions. Raises: ValueError: when joint distributions have a different number of elemental distributions. ValueError: when either joint distribution has a distribution with dynamic dependency, i.e., when either joint distribution is not a collection of independent distributions. """ if len(d0._dist_fn_wrapped) != len(d1._dist_fn_wrapped): # pylint: disable=protected-access raise ValueError( 'Can only compute KL divergence between when each has the'
python
{ "resource": "" }
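For factored joints of univariate Normals, the summed KL reduces to the closed-form Normal-vs-Normal divergence per component. A NumPy sketch (helper names hypothetical):

```python
import numpy as np

def kl_normal(m0, s0, m1, s1):
    """Closed-form KL[N(m0, s0^2) || N(m1, s1^2)]."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2. * s1**2) - 0.5

def kl_factored(params0, params1):
    """Sum of componentwise KLs between two factored (independent) joints."""
    if len(params0) != len(params1):
        raise ValueError('Joints must have the same number of components.')
    return sum(kl_normal(*p0, *p1) for p0, p1 in zip(params0, params1))
```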
q266450
JointDistributionSequential._build
test
def _build(self, model): """Creates `dist_fn`, `dist_fn_wrapped`, `dist_fn_args`.""" if not isinstance(model, collections.Sequence): raise TypeError('`model` must be `list`-like (saw: {}).'.format( type(model).__name__))
python
{ "resource": "" }
q266451
JointDistributionSequential._resolve_graph
test
def _resolve_graph(self, distribution_names=None, leaf_name='x'): """Creates a `tuple` of `tuple`s of dependencies. This function is **experimental**. That said, we encourage its use and ask that you report problems to `tfprobability@tensorflow.org`. Args: distribution_names: `list` of `str` or `None` names corresponding to each of `model` elements. (`None`s are expanding into the appropriate `str`.) leaf_name: `str` used when no maker depends on a particular `model` element. Returns: graph: `tuple` of `(str tuple)` pairs representing the name of each distribution (maker) and the names of its dependencies. #### Example ```python d = tfd.JointDistributionSequential([ tfd.Independent(tfd.Exponential(rate=[100, 120]), 1), lambda e: tfd.Gamma(concentration=e[..., 0], rate=e[..., 1]), tfd.Normal(loc=0, scale=2.), lambda n, g: tfd.Normal(loc=n, scale=g), ]) d._resolve_graph() # ==> ( # ('e', ()), # ('g', ('e',)), # ('n', ()), # ('x', ('n', 'g')), # ) ``` """
python
{ "resource": "" }
q266452
JointDistributionSequential._entropy
test
def _entropy(self): """Shannon entropy in nats.""" if any(self._dist_fn_args): raise ValueError( 'Can only compute entropy when all distributions are independent.')
python
{ "resource": "" }
q266453
check_arg_in_support
test
def check_arg_in_support(f): """Decorator function for argument bounds checking. This decorator is meant to be used with methods that require the first argument to be in the support of the distribution. If `validate_args` is `True`, the method is wrapped with an assertion that the first argument is greater than or equal to `loc`, since the support of the half-Cauchy distribution is given by `[loc, infinity)`. Args:
python
{ "resource": "" }
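A plain-Python analogue of the decorator's behavior (the real version wraps the method with a TF assertion op rather than raising eagerly; the `HalfCauchyLike` class below is purely illustrative):

```python
import functools

def check_arg_in_support(f):
    """Check the first positional arg lies in [self.loc, inf) before calling f."""
    @functools.wraps(f)
    def wrapped(self, x, *args, **kwargs):
        if self.validate_args and x < self.loc:
            raise ValueError('Argument must be >= loc; the support is [loc, inf).')
        return f(self, x, *args, **kwargs)
    return wrapped

class HalfCauchyLike:
    """Toy stand-in for a distribution with support [loc, inf)."""
    def __init__(self, loc, validate_args=True):
        self.loc = loc
        self.validate_args = validate_args

    @check_arg_in_support
    def log_prob(self, x):
        return -x  # placeholder body; only the bounds check matters here
```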
q266454
image_summary
test
def image_summary(seqs, name, num=None): """Visualizes sequences as TensorBoard summaries. Args: seqs: A tensor of shape [n, t, h, w, c]. name: String name of this summary. num: Integer for the number of examples to visualize. Defaults to all examples. """ seqs = tf.clip_by_value(seqs, 0., 1.) seqs = tf.unstack(seqs[:num]) joined_seqs = [tf.concat(tf.unstack(seq), 1) for seq in seqs]
python
{ "resource": "" }
q266455
visualize_reconstruction
test
def visualize_reconstruction(inputs, reconstruct, num=3, name="reconstruction"): """Visualizes the reconstruction of inputs in TensorBoard. Args: inputs: A tensor of the original inputs, of shape [batch, timesteps, h, w, c]. reconstruct: A tensor of a reconstruction of inputs, of shape [batch, timesteps, h, w, c]. num: Integer for the number of examples to visualize.
python
{ "resource": "" }
q266456
visualize_qualitative_analysis
test
def visualize_qualitative_analysis(inputs, model, samples=1, batch_size=3, length=8): """Visualizes a qualitative analysis of a given model. Args: inputs: A tensor of the original inputs, of shape [batch, timesteps, h, w, c]. model: A DisentangledSequentialVAE model. samples: Number of samples to draw from the latent distributions. batch_size: Number of sequences to generate. length: Number of timesteps to generate for each sequence. """ average = lambda dist: tf.reduce_mean( input_tensor=dist.mean(), axis=0) # avg over samples with tf.compat.v1.name_scope("val_reconstruction"): reconstruct = functools.partial(model.reconstruct, inputs=inputs, samples=samples) visualize_reconstruction(inputs, average(reconstruct())) visualize_reconstruction(inputs, average(reconstruct(sample_static=True)),
python
{ "resource": "" }
q266457
summarize_dist_params
test
def summarize_dist_params(dist, name, name_scope="dist_params"): """Summarize the parameters of a distribution. Args: dist: A Distribution object with mean and standard deviation parameters. name: The name of the distribution. name_scope: The name scope of this summary. """ with tf.compat.v1.name_scope(name_scope): tf.compat.v2.summary.histogram( name="{}/{}".format(name, "mean"), data=dist.mean(),
python
{ "resource": "" }
q266458
summarize_mean_in_nats_and_bits
test
def summarize_mean_in_nats_and_bits(inputs, units, name, nats_name_scope="nats", bits_name_scope="bits_per_dim"): """Summarize the mean of a tensor in nats and bits per unit. Args: inputs: A tensor of values measured in nats. units: The units of the tensor with which to compute the mean bits per unit. name: The name of the tensor. nats_name_scope: The name scope of the nats summary. bits_name_scope: The name scope of the bits summary.
python
{ "resource": "" }
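The nats-to-bits conversion this summary performs is `mean_nats / units / ln 2`. A NumPy sketch (helper name hypothetical):

```python
import numpy as np

def mean_nats_and_bits_per_dim(values_in_nats, units):
    """Mean in nats, plus the same mean converted to bits per unit (dimension)."""
    mean_nats = np.mean(values_in_nats)
    bits_per_dim = mean_nats / units / np.log(2.)
    return mean_nats, bits_per_dim
```

For instance, a tensor whose mean is `8 * ln 2` nats over 8 units comes out to exactly 1 bit per dimension.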
q266459
LearnableMultivariateNormalDiag.call
test
def call(self, inputs): """Runs the model to generate multivariate normal distribution. Args: inputs: Unused. Returns: A MultivariateNormalDiag distribution with event
python
{ "resource": "" }
q266460
LearnableMultivariateNormalDiagCell.zero_state
test
def zero_state(self, sample_batch_shape=()): """Returns an initial state for the LSTM cell. Args: sample_batch_shape: A 0D or 1D tensor of the combined sample and batch shape. Returns: A tuple of the initial previous output at timestep 0 of shape [sample_batch_shape, dimensions], and the cell state. """ h0 = tf.zeros([1, self.hidden_size]) c0 = tf.zeros([1, self.hidden_size])
python
{ "resource": "" }
q266461
LearnableMultivariateNormalDiagCell.call
test
def call(self, inputs, state): """Runs the model to generate a distribution for a single timestep. This generates a batched MultivariateNormalDiag distribution using the output of the recurrent model at the current timestep to parameterize the distribution. Args: inputs: The sampled value of `z` at the previous timestep, i.e., `z_{t-1}`, of shape [..., dimensions]. `z_0` should be set to the empty matrix. state: A tuple containing the (hidden, cell) state. Returns: A tuple of a MultivariateNormalDiag distribution, and the state of the recurrent function at the end of the current timestep. The distribution will have event shape [dimensions], batch shape [...], and sample shape [sample_shape, ..., dimensions]. """ # In order to allow the user to pass in a single example without a batch # dimension, we always expand the input to at
python
{ "resource": "" }
q266462
Compressor.call
test
def call(self, inputs): """Runs the model to generate an intermediate representation of x_t. Args: inputs: A batch of image sequences `x_{1:T}` of shape `[sample_shape, batch_size, timesteps, height, width, channels]`. Returns: A batch of intermediate representations of shape [sample_shape, batch_size, timesteps, hidden_size]. """ image_shape = tf.shape(input=inputs)[-3:] collapsed_shape = tf.concat(([-1], image_shape), axis=0) out
python
{ "resource": "" }
q266463
DisentangledSequentialVAE.generate
test
def generate(self, batch_size, length, samples=1, fix_static=False, fix_dynamic=False): """Generate new sequences. Args: batch_size: Number of sequences to generate. length: Number of timesteps to generate for each sequence. samples: Number of samples to draw from the latent distributions. fix_static: Boolean for whether or not to share the same random sample of the static latent variable `f` from its prior across all examples. fix_dynamic: Boolean for whether or not to share the same random sample of the dynamic latent variable `z_{1:T}` from its prior across all examples. Returns: A batched Independent distribution wrapping a set of Normal distributions over the pixels of the generated sequences, where
python
{ "resource": "" }
q266464
DisentangledSequentialVAE.reconstruct
test
def reconstruct(self, inputs, samples=1, sample_static=False, sample_dynamic=False, swap_static=False, swap_dynamic=False, fix_static=False, fix_dynamic=False): """Reconstruct the given input sequences. Args: inputs: A batch of image sequences `x_{1:T}` of shape `[batch_size, timesteps, height, width, channels]`. samples: Number of samples to draw from the latent distributions. sample_static: Boolean for whether or not to randomly sample the static latent variable `f` from its prior distribution. sample_dynamic: Boolean for whether or not to randomly sample the dynamic latent variable `z_{1:T}` from its prior distribution. swap_static: Boolean for whether or not to swap the encodings for the static latent variable `f` between the examples. swap_dynamic: Boolean for whether or not to swap the encodings for the dynamic latent variable `z_{1:T}` between the examples. fix_static: Boolean for whether or not to share the same random sample of the static latent variable `f` from its prior across all examples. fix_dynamic: Boolean for whether or not to share the same random sample of the dynamic latent variable `z_{1:T}` from its prior across all examples. Returns: A batched Independent distribution wrapping a set of Normal distributions over the pixels of the reconstruction of the input, where the Independent distribution has event shape [height, width, channels], batch shape [samples, batch_size, timesteps], and
python
{ "resource": "" }
q266465
DisentangledSequentialVAE.sample_static_prior
test
def sample_static_prior(self, samples, batch_size, fixed=False): """Sample the static latent prior. Args: samples: Number of samples to draw from the latent distribution. batch_size: Number of sequences to sample. fixed: Boolean for whether or not to share the same random sample across all sequences.
python
{ "resource": "" }
q266466
DisentangledSequentialVAE.sample_dynamic_prior
test
def sample_dynamic_prior(self, samples, batch_size, length, fixed=False): """Sample the dynamic latent prior. Args: samples: Number of samples to draw from the latent distribution. batch_size: Number of sequences to sample. length: Number of timesteps to sample for each sequence. fixed: Boolean for whether or not to share the same random sample across all sequences. Returns: A tuple of a sample tensor of shape [samples, batch_size, length, latent_size], and a MultivariateNormalDiag distribution from which the tensor was sampled, with event shape [latent_size], and batch shape [samples, 1, length] if fixed or [samples, batch_size, length] otherwise. """ if fixed: sample_batch_size = 1 else: sample_batch_size = batch_size sample, state = self.dynamic_prior.zero_state([samples, sample_batch_size])
python
{ "resource": "" }
q266467
StructuralTimeSeries.batch_shape
test
def batch_shape(self): """Static batch shape of models represented by this component. Returns: batch_shape: A `tf.TensorShape` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape`. It may be partially
python
{ "resource": "" }
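The broadcast-batch-shape contract described in `batch_shape` can be sketched in pure Python; the function name and the list-of-tuples interface below are illustrative, not the library API:

```python
def broadcast_batch_shape(parameter_shapes):
    """Broadcast several batch shapes together, NumPy/TF style."""
    # Right-align the shapes, then apply the standard broadcasting rule:
    # paired dims must be equal or 1, and the result takes the larger.
    ndim = max(len(s) for s in parameter_shapes)
    padded = [(ndim - len(s)) * (1,) + tuple(s) for s in parameter_shapes]
    out = []
    for dims in zip(*padded):
        non_ones = {d for d in dims if d != 1}
        if len(non_ones) > 1:
            raise ValueError('Incompatible batch shapes: {}'.format(parameter_shapes))
        out.append(non_ones.pop() if non_ones else 1)
    return tuple(out)
```

A model whose parameters have batch shapes `(3, 1)` and `(1, 4)` would have overall batch shape `(3, 4)`.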
q266468
StructuralTimeSeries.batch_shape_tensor
test
def batch_shape_tensor(self): """Runtime batch shape of models represented by this component. Returns: batch_shape: `int` `Tensor` giving the broadcast batch shape of all model parameters. This should match the batch shape of derived state space models, i.e., `self.make_state_space_model(...).batch_shape_tensor()`. """ batch_shape =
python
{ "resource": "" }
q266469
StructuralTimeSeries.make_state_space_model
test
def make_state_space_model(self, num_timesteps, param_vals=None, initial_state_prior=None, initial_step=0): """Instantiate this model as a Distribution over specified `num_timesteps`. Args: num_timesteps: Python `int` number of timesteps to model. param_vals: a list of `Tensor` parameter values in order corresponding to `self.parameters`, or a dict mapping from parameter names to values. initial_state_prior: an optional `Distribution` instance overriding the default prior on the model's initial state. This is used in forecasting ("today's prior is yesterday's posterior"). initial_step: optional `int` specifying the initial timestep to model.
python
{ "resource": "" }
q266470
StructuralTimeSeries.prior_sample
test
def prior_sample(self, num_timesteps, initial_step=0, params_sample_shape=(), trajectories_sample_shape=(), seed=None): """Sample from the joint prior over model parameters and trajectories. Args: num_timesteps: Scalar `int` `Tensor` number of timesteps to model. initial_step: Optional scalar `int` `Tensor` specifying the starting timestep. Default value: 0. params_sample_shape: Number of possible worlds to sample iid from the parameter prior, or more generally, `Tensor` `int` shape to fill with iid samples. Default value: [] (i.e., draw a single sample and don't expand the shape). trajectories_sample_shape: For each sampled set of parameters, number of trajectories to sample, or more generally, `Tensor` `int` shape to fill with iid samples. Default value: [] (i.e., draw a single sample and don't expand the shape). seed: Python `int` random seed. Returns: trajectories: `float` `Tensor` of shape `trajectories_sample_shape + params_sample_shape + [num_timesteps, 1]` containing all sampled trajectories. param_samples: list of sampled parameter value `Tensor`s, in order corresponding to `self.parameters`, each of shape
python
{ "resource": "" }
q266471
_compute_min_event_ndims
test
def _compute_min_event_ndims(bijector_list, compute_forward=True): """Computes the min_event_ndims associated with the given list of bijectors. Given a list `bijector_list` of bijectors, compute the min_event_ndims that is associated with the composition of bijectors in that list. min_event_ndims is the number of rightmost dimensions on which the bijector has done necessary computation (i.e. the non-broadcastable part of the computation). We can derive the min_event_ndims for a chain of bijectors as follows: In the case where there are no rank changing bijectors, this will simply be `max(b.forward_min_event_ndims for b in bijector_list)`. This is because the bijector with the most forward_min_event_ndims requires the most dimensions, and hence the chain also requires operating on those dimensions. However in the case of rank changing, more care is needed in determining the exact number of dimensions. Padding dimensions causes subsequent bijectors to operate on the padded dimensions, and removing dimensions causes bijectors to operate further to the left. Args: bijector_list: List of bijectors to be composed by chain. compute_forward: Boolean. If True, computes the min_event_ndims associated with a forward call to Chain, and otherwise computes the min_event_ndims associated with an inverse call to Chain. The latter is the same as the min_event_ndims associated with a forward call to Invert(Chain(....)). Returns: min_event_ndims """ min_event_ndims = 0 # This is a mouthful, but what this encapsulates is that if not for rank # changing bijectors, we'd only need to compute the largest of the min # required ndims. Hence "max_min". Due to rank changing bijectors, we need to # account for synthetic rank growth / synthetic rank decrease from a rank # changing bijector. rank_changed_adjusted_max_min_event_ndims = 0 if compute_forward: bijector_list = reversed(bijector_list) for b in bijector_list:
python
{ "resource": "" }
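For the no-rank-changing case, the "max_min" rule the docstring describes reduces to a one-liner. A hedged sketch over plain integers (not `Bijector` objects):

```python
def chain_min_event_ndims(forward_min_event_ndims_list):
    # With no rank-changing bijectors, the chain's min_event_ndims is just
    # the largest requirement among its constituents ("max_min").
    if not forward_min_event_ndims_list:
        return 0
    return max(forward_min_event_ndims_list)
```

For example, chaining a scalar bijector (min_event_ndims 0) with a matrix-valued one (min_event_ndims 2) forces the whole chain to operate on the two rightmost dimensions.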
q266472
vector_size_to_square_matrix_size
test
def vector_size_to_square_matrix_size(d, validate_args, name=None): """Convert a vector size to a matrix size.""" if isinstance(d, (float, int, np.generic, np.ndarray)): n = (-1 + np.sqrt(1 + 8 * d)) / 2. if float(int(n)) != n: raise ValueError("Vector length is not a triangular number.") return int(n) else: with tf.name_scope(name or "vector_size_to_square_matrix_size") as name: n = (-1. + tf.sqrt(1 + 8. * tf.cast(d, dtype=tf.float32))) / 2.
python
{ "resource": "" }
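The static branch of the function above inverts the triangular-number relation `d = n * (n + 1) / 2` (the length of a lower-triangular vectorization of an `n x n` matrix). A minimal pure-Python sketch of that branch, with an illustrative name:

```python
import math

def vector_size_to_matrix_size(d):
    # Invert d = n * (n + 1) / 2 via the quadratic formula; a non-integer
    # root means d is not a valid triangular vector length.
    n = (-1 + math.sqrt(1 + 8 * d)) / 2.
    if float(int(n)) != n:
        raise ValueError("Vector length is not a triangular number.")
    return int(n)
```

So a length-6 vector (3 + 2 + 1 entries) maps to a 3 x 3 matrix, while length 7 raises.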
q266473
_argsort
test
def _argsort(values, axis=-1, direction='ASCENDING', stable=False, name=None): # pylint: disable=unused-argument """Numpy implementation of `tf.argsort`.""" if direction == 'ASCENDING': pass elif direction == 'DESCENDING': values
python
{ "resource": "" }
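The truncated body hints at the standard trick: `DESCENDING` reuses the ascending kernel by negating the values first. A self-contained NumPy sketch (name is illustrative):

```python
import numpy as np

def argsort_np(values, axis=-1, direction='ASCENDING'):
    # Descending order reuses ascending argsort on the negated input;
    # the returned indices are unaffected by the negation trick itself.
    if direction == 'DESCENDING':
        values = np.negative(values)
    elif direction != 'ASCENDING':
        raise ValueError('Unrecognized direction: {}'.format(direction))
    return np.argsort(values, axis=axis, kind='stable')
```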
q266474
_sort
test
def _sort(values, axis=-1, direction='ASCENDING', stable=False, name=None): # pylint: disable=unused-argument """Numpy implementation of `tf.sort`.""" if direction == 'ASCENDING': pass elif direction == 'DESCENDING': values = np.negative(values) else: raise
python
{ "resource": "" }
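For `_sort` the same negation trick needs one extra step: the sorted values must be negated back. A hedged NumPy sketch:

```python
import numpy as np

def sort_np(values, axis=-1, direction='ASCENDING'):
    # Negate in, sort ascending, negate out: yields a descending sort
    # without a dedicated descending kernel.
    if direction == 'DESCENDING':
        return np.negative(np.sort(np.negative(values), axis=axis))
    elif direction == 'ASCENDING':
        return np.sort(values, axis=axis)
    raise ValueError('Unrecognized direction: {}'.format(direction))
```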
q266475
ndtr
test
def ndtr(x, name="ndtr"): """Normal distribution function. Returns the area under the Gaussian probability density function, integrated from minus infinity to x: ``` 1 / x ndtr(x) = ---------- | exp(-0.5 t**2) dt sqrt(2 pi) /-inf = 0.5 (1 + erf(x / sqrt(2))) = 0.5 erfc(x / sqrt(2))
python
{ "resource": "" }
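The identity in the docstring, `ndtr(x) = 0.5 erfc(-x / sqrt(2))`, can be checked directly with the standard library; this scalar sketch is not the tensor implementation, just the math:

```python
import math

def ndtr(x):
    # 0.5 * erfc(-x / sqrt(2)) is the numerically stable form of
    # 0.5 * (1 + erf(x / sqrt(2))) for negative x.
    return 0.5 * math.erfc(-x / math.sqrt(2.))
```

By symmetry of the standard normal, `ndtr(x) + ndtr(-x) == 1`.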
q266476
_ndtr
test
def _ndtr(x): """Implements ndtr core logic.""" half_sqrt_2 = tf.constant( 0.5 * np.sqrt(2.), dtype=x.dtype, name="half_sqrt_2") w = x * half_sqrt_2 z = tf.abs(w) y = tf.where(
python
{ "resource": "" }
q266477
ndtri
test
def ndtri(p, name="ndtri"): """The inverse of the CDF of the Normal distribution function. Returns x such that the area under the pdf from minus infinity to x is equal to p. A piece-wise rational approximation is done for the function. This is a port of the implementation in netlib. Args: p: `Tensor` of type `float32`, `float64`. name: Python string. A name for the operation (default="ndtri"). Returns: x: `Tensor` with `dtype=p.dtype`. Raises: TypeError:
python
{ "resource": "" }
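The library uses a piecewise rational approximation ported from netlib; a far cheaper way to *illustrate* the defining property `ndtr(ndtri(p)) == p` is a simple bisection inversion (names and the bracketing interval are assumptions of this sketch, not the library's method):

```python
import math

def ndtr(x):
    return 0.5 * math.erfc(-x / math.sqrt(2.))

def ndtri_bisect(p, lo=-10., hi=10., iters=200):
    # Bisection on the monotone CDF: halve the bracket until it collapses.
    # Only valid for p well inside (ndtr(lo), ndtr(hi)).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ndtr(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```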
q266478
log_ndtr
test
def log_ndtr(x, series_order=3, name="log_ndtr"): """Log Normal distribution function. For details of the Normal distribution function see `ndtr`. This function calculates `(log o ndtr)(x)` by either calling `log(ndtr(x))` or using an asymptotic series. Specifically: - For `x > upper_segment`, use the approximation `-ndtr(-x)` based on `log(1-x) ~= -x, x << 1`. - For `lower_segment < x <= upper_segment`, use the existing `ndtr` technique and take a log. - For `x <= lower_segment`, we use the series approximation of erf to compute the log CDF directly. The `lower_segment` is set based on the precision of the input: ``` lower_segment = { -20, x.dtype=float64 { -10, x.dtype=float32 upper_segment = { 8, x.dtype=float64 { 5, x.dtype=float32 ``` When `x < lower_segment`, the `ndtr` asymptotic series approximation is: ``` ndtr(x) = scale * (1 + sum) + R_N scale = exp(-0.5 x**2) / (-x sqrt(2 pi)) sum = Sum{(-1)^n (2n-1)!! / (x**2)^n, n=1:N} R_N = O(exp(-0.5 x**2) (2N+1)!! / |x|^{2N+3}) ``` where `(2n-1)!! = (2n-1) (2n-3) (2n-5) ... (3) (1)` is a [double-factorial](https://en.wikipedia.org/wiki/Double_factorial). Args: x: `Tensor` of type `float32`, `float64`. series_order: Positive Python `integer`. Maximum depth to evaluate the asymptotic expansion. This is the `N` above. name: Python string. A name for the operation (default="log_ndtr"). Returns: log_ndtr: `Tensor` with `dtype=x.dtype`. Raises: TypeError: if `x.dtype` is not handled. TypeError: if `series_order` is not a Python `integer`. ValueError: if `series_order` is not in `[0, 30]`. """ if not isinstance(series_order, int): raise TypeError("series_order must be a Python integer.") if series_order < 0: raise ValueError("series_order must be non-negative.") if series_order > 30: raise ValueError("series_order must be <= 30.") with tf.name_scope(name): x = tf.convert_to_tensor(value=x, name="x") if dtype_util.base_equal(x.dtype, tf.float64): lower_segment = LOGNDTR_FLOAT64_LOWER upper_segment =
python
{ "resource": "" }
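The three-regime dispatch described in the docstring can be sketched as a scalar function with the float64 thresholds; the lower regime here keeps only the leading term of the asymptotic series, so it is an approximation, not the production code:

```python
import math

def log_ndtr(x):
    # Three regimes (float64 thresholds from the docstring):
    #   x > 8:        log(ndtr(x)) ~= -ndtr(-x), via log(1 - eps) ~= -eps
    #   -20 < x <= 8: take the log of ndtr directly
    #   x <= -20:     leading term of the asymptotic series
    if x > 8.:
        return -0.5 * math.erfc(x / math.sqrt(2.))
    if x > -20.:
        return math.log(0.5 * math.erfc(-x / math.sqrt(2.)))
    return -0.5 * x * x - math.log(-x) - 0.5 * math.log(2. * math.pi)
```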
q266479
_log_ndtr_asymptotic_series
test
def _log_ndtr_asymptotic_series(x, series_order): """Calculates the asymptotic series used in log_ndtr.""" npdt = dtype_util.as_numpy_dtype(x.dtype) if series_order <= 0: return npdt(1) x_2 = tf.square(x) even_sum = tf.zeros_like(x) odd_sum = tf.zeros_like(x) x_2n = x_2 # Start with x^{2*1} = x^{2*n} with n = 1. for n in range(1, series_order + 1):
python
{ "resource": "" }
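The series accumulated above can be written out in scalar form straight from the `log_ndtr` docstring formula, `ndtr(x) ~= scale * (1 + sum)` with `sum = Sum{(-1)^n (2n-1)!! / x**(2n)}`; taking logs gives the sketch below (illustrative, not the tensor code):

```python
import math

def log_ndtr_series(x, series_order=3):
    # Asymptotic expansion of log(ndtr(x)) for large negative x:
    # -x**2/2 - log(-x) - log(sqrt(2 pi)) + log1p(sum).
    total = 0.0
    dfact = 1.0      # accumulates (2n-1)!!
    x_2n = x * x     # accumulates x**(2n)
    for n in range(1, series_order + 1):
        dfact *= 2 * n - 1
        total += (-1.) ** n * dfact / x_2n
        x_2n *= x * x
    return (-0.5 * x * x - math.log(-x)
            - 0.5 * math.log(2. * math.pi) + math.log1p(total))
```

With `series_order=3` the remainder at `x = -10` is already below 1e-5 on the log scale.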
q266480
erfinv
test
def erfinv(x, name="erfinv"): """The inverse function for erf, the error function. Args: x: `Tensor` of type `float32`, `float64`. name: Python string. A name for the operation (default="erfinv"). Returns: x: `Tensor` with `dtype=x.dtype`. Raises:
python
{ "resource": "" }
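An inverse of `erf` can be illustrated with Newton's method on `math.erf`, using `d/dy erf(y) = 2/sqrt(pi) * exp(-y**2)`. This is only a sketch of the inversion idea; a library routine would start from a rational-polynomial initial guess rather than 0, and this naive start is only reliable for moderate `|x|`:

```python
import math

def erfinv_newton(x, iters=50):
    # Solve erf(y) - x = 0 by Newton iteration:
    #   y <- y - (erf(y) - x) / erf'(y),  erf'(y) = 2/sqrt(pi) * exp(-y**2)
    y = 0.0
    for _ in range(iters):
        err = math.erf(y) - x
        y -= err * math.sqrt(math.pi) / 2. * math.exp(y * y)
    return y
```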
q266481
log_cdf_laplace
test
def log_cdf_laplace(x, name="log_cdf_laplace"): """Log Laplace distribution function. This function calculates `Log[L(x)]`, where `L(x)` is the cumulative distribution function of the Laplace distribution, i.e. ```L(x) := 0.5 * int_{-infty}^x e^{-|t|} dt``` For numerical accuracy, `L(x)` is computed in different ways depending on `x`, ``` x <= 0: Log[L(x)] = Log[0.5] + x, which is exact 0 < x: Log[L(x)] = Log[1 - 0.5 * e^{-x}], which is exact ``` Args: x: `Tensor` of type `float32`, `float64`. name: Python string. A name for the operation (default="log_cdf_laplace"). Returns: `Tensor` with `dtype=x.dtype`. Raises: TypeError: if `x.dtype` is not handled. """ with tf.name_scope(name): x = tf.convert_to_tensor(value=x, name="x") # For x < 0, L(x) = 0.5 * exp{x} exactly, so Log[L(x)] = log(0.5) + x.
python
{ "resource": "" }
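Both branches of the piecewise formula are exact in the scalar case; a minimal pure-Python sketch, using `log1p` for the positive branch to keep precision when `exp(-x)` is tiny:

```python
import math

def log_cdf_laplace(x):
    # x <= 0: L(x) = 0.5 * exp(x)       =>  log L(x) = log(0.5) + x
    # x >  0: L(x) = 1 - 0.5 * exp(-x)  =>  log L(x) = log1p(-0.5 * exp(-x))
    if x <= 0:
        return math.log(0.5) + x
    return math.log1p(-0.5 * math.exp(-x))
```

Note the left branch stays finite for arbitrarily negative `x`, where a naive `log(cdf(x))` would underflow to `-inf`.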
q266482
text_messages_joint_log_prob
test
def text_messages_joint_log_prob(count_data, lambda_1, lambda_2, tau): """Joint log probability function.""" alpha = (1. / tf.reduce_mean(input_tensor=count_data)) rv_lambda = tfd.Exponential(rate=alpha) rv_tau = tfd.Uniform() lambda_ = tf.gather( [lambda_1, lambda_2], indices=tf.cast( tau * tf.cast(tf.size(input=count_data), dtype=tf.float32) <= tf.cast(
python
{ "resource": "" }
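The `tf.gather` construction above selects `lambda_1` before the fractional switchpoint `tau` and `lambda_2` after it. The same selection in NumPy (function name illustrative):

```python
import numpy as np

def piecewise_rate(lambda_1, lambda_2, tau, num_timesteps):
    # tau in [0, 1] is a fractional switchpoint: timesteps t with
    # tau * num_timesteps <= t use lambda_2, earlier ones use lambda_1.
    t = np.arange(num_timesteps, dtype=np.float32)
    indices = (tau * num_timesteps <= t).astype(np.int32)
    return np.array([lambda_1, lambda_2])[indices]
```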
q266483
benchmark_text_messages_hmc
test
def benchmark_text_messages_hmc( num_results=int(3e3), num_burnin_steps=int(3e3), num_leapfrog_steps=3): """Runs HMC on the text-messages unnormalized posterior.""" if not tf.executing_eagerly(): tf.compat.v1.reset_default_graph() # Build a static, pretend dataset. count_data = tf.cast( tf.concat( [tfd.Poisson(rate=15.).sample(43), tfd.Poisson(rate=25.).sample(31)], axis=0), dtype=tf.float32) if tf.executing_eagerly(): count_data = count_data.numpy() else: with tf.compat.v1.Session(): count_data = count_data.eval() # Define a closure over our joint_log_prob. def unnormalized_log_posterior(lambda1, lambda2, tau): return text_messages_joint_log_prob(count_data, lambda1, lambda2, tau) if tf.executing_eagerly(): sample_chain = tf.function(tfp.mcmc.sample_chain) else: sample_chain = tfp.mcmc.sample_chain # Initialize the step_size. (It will be automatically adapted.) step_size = tf.compat.v2.Variable( name='step_size', initial_value=tf.constant(0.05, dtype=tf.float32), trainable=False) def computation(): """The benchmark computation.""" initial_chain_state = [ tf.constant(count_data.mean(), name='init_lambda1'), tf.constant(count_data.mean(), name='init_lambda2'), tf.constant(0.5, name='init_tau'), ] unconstraining_bijectors = [ tfp.bijectors.Exp(), # Maps a positive real to R. tfp.bijectors.Exp(), # Maps a positive real to R. tfp.bijectors.Sigmoid(), # Maps [0,1] to R. ] _, kernel_results = sample_chain( num_results=num_results, num_burnin_steps=num_burnin_steps, current_state=initial_chain_state,
python
{ "resource": "" }
q266484
GaussianProcess._is_univariate_marginal
test
def _is_univariate_marginal(self, index_points): """True if the given index_points would yield a univariate marginal. Args: index_points: the set of index set locations at which to compute the marginal Gaussian distribution. If this set is of size 1, the marginal is univariate. Returns: is_univariate: Boolean indicating whether the marginal is univariate or multivariate. In the case of
python
{ "resource": "" }
q266485
GaussianProcess.get_marginal_distribution
test
def get_marginal_distribution(self, index_points=None): """Compute the marginal of this GP over function values at `index_points`. Args: index_points: `float` `Tensor` representing finite (batch of) vector(s) of points in the index set over which the GP is defined. Shape has the form `[b1, ..., bB, e, f1, ..., fF]` where `F` is the number of feature dimensions and must equal `kernel.feature_ndims` and `e` is the number (size) of index points in each batch. Ultimately this distribution corresponds to a `e`-dimensional multivariate normal. The batch shape must be broadcastable with `kernel.batch_shape` and any batch dims yielded by `mean_fn`. Returns: marginal: a `Normal` or `MultivariateNormalLinearOperator` distribution, according to whether `index_points` consists of one or many index points, respectively. """ with self._name_scope('get_marginal_distribution'): # TODO(cgs): consider caching the result here, keyed on `index_points`. index_points = self._get_index_points(index_points) covariance = self._compute_covariance(index_points) loc = self._mean_fn(index_points) # If we're sure the number of index points is 1, we can just construct a # scalar Normal. This has computational benefits and supports things like # CDF that aren't otherwise straightforward to provide. if self._is_univariate_marginal(index_points): scale = tf.sqrt(covariance) # `loc` has a trailing 1 in the shape; squeeze it. loc = tf.squeeze(loc,
python
{ "resource": "" }
q266486
GaussianProcess._get_index_points
test
def _get_index_points(self, index_points=None): """Return `index_points` if not None, else `self._index_points`. Args: index_points: if given, this is what is returned; else, `self._index_points` Returns: index_points: the given arg, if not None, else the class member `self._index_points`. Raises: ValueError: if `index_points` and `self._index_points` are both `None`. """ if self._index_points is None and index_points is None: raise ValueError( 'This GaussianProcess instance was not instantiated with a value for ' 'index_points. One must therefore be provided when calling sample, ' 'log_prob, and other such methods. In particular, one can\'t compute '
python
{ "resource": "" }
q266487
make_iaf_stack
test
def make_iaf_stack(total_event_size, num_hidden_layers=2, seed=None, dtype=tf.float32): """Creates an stacked IAF bijector. This bijector operates on vector-valued events. Args: total_event_size: Number of dimensions to operate over. num_hidden_layers: How many hidden layers to use in each IAF. seed: Random seed for the initializers. dtype: DType for the variables. Returns: bijector: The created bijector. """ seed = tfd.SeedStream(seed, 'make_iaf_stack') def make_iaf(): """Create an IAF.""" initializer = tf.compat.v2.keras.initializers.VarianceScaling( 2 * 0.01, seed=seed() % (2**31 - 1)) made = tfb.AutoregressiveLayer( params=2, event_shape=[total_event_size], hidden_units=[total_event_size] * num_hidden_layers, activation=tf.nn.elu, kernel_initializer=initializer,
python
{ "resource": "" }
q266488
NeuTra.one_step
test
def one_step(self, current_state, previous_kernel_results): """Runs one iteration of NeuTra. Args: current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`. previous_kernel_results: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function.) Returns: next_state: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as `current_state`. kernel_results: `collections.namedtuple` of internal calculations used to advance the chain. """ @tfp.mcmc.internal.util.make_innermost_setter
python
{ "resource": "" }
q266489
NeuTra.bootstrap_results
test
def bootstrap_results(self, state): """Trains the bijector and creates initial `previous_kernel_results`. The supplied `state` is only used to determine the number of chains to run in parallel. Args: state: `Tensor` or Python `list` of `Tensor`s representing the initial state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*state))`. Returns: kernel_results: Instance of `UncalibratedHamiltonianMonteCarloKernelResults` inside `MetropolisHastingsResults` inside `TransformedTransitionKernelResults` inside `SimpleStepSizeAdaptationResults`. """ def loss(): q = self._flattened_variational_distribution() # TODO(siege): How to seed this? samples = q.sample(self.train_batch_size) return tf.reduce_mean( input_tensor=q.log_prob(samples) - self._flattened_target_log_prob(samples), axis=-1) lr = tf.convert_to_tensor(value=self.learning_rate, dtype=self._dtype) dtype = lr.dtype learning_rate = tf.compat.v2.optimizers.schedules.PiecewiseConstantDecay( list(self.num_train_steps * np.array([0.2, 0.8]).astype(dtype.as_numpy_dtype())), [lr, lr * 0.1, lr * 0.01]) opt = tf.compat.v2.optimizers.Adam(learning_rate) @tf.function(autograph=False) def train_step(): with tf.GradientTape() as tape: loss_val = loss() vals = tape.watched_variables() grads = tape.gradient(loss_val,
python
{ "resource": "" }
q266490
_outer_squared_difference
test
def _outer_squared_difference(x, y): """Convenience function analogous to tf.squared_difference.""" z = x - y
python
{ "resource": "" }
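A common way to realize an "outer" squared difference, assuming the intent is every pairwise `(x_i - y_j)**2`, is to expand one operand so subtraction broadcasts; this NumPy sketch is an assumption about the truncated body, not the library code:

```python
import numpy as np

def outer_squared_difference(x, y):
    # Insert a trailing axis on x so x[..., i, None] - y[..., j]
    # broadcasts to an outer-product-shaped difference, then square.
    z = x[..., np.newaxis] - y
    return z * z
```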
q266491
_value_and_batch_jacobian
test
def _value_and_batch_jacobian(f, x): """Enables uniform interface to value and batch jacobian calculation. Works in both eager and graph modes. Arguments: f: The scalar function to evaluate. x: The value at which to compute the value and the batch jacobian. Returns: A tuple (f(x), J(x)), where J(x) is the batch jacobian. """ if tf.executing_eagerly(): with tf.GradientTape() as
python
{ "resource": "" }
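The `(f(x), J(x))` contract can be mimicked without a `GradientTape` by central finite differences; this stand-in is only for illustrating the batch-jacobian shape `[batch, out_dim, in_dim]`, not a replacement for autodiff:

```python
import numpy as np

def value_and_batch_jacobian_fd(f, x, eps=1e-6):
    # Returns (f(x), J) with J[b, i, j] = d f(x)[b, i] / d x[b, j],
    # estimated by central differences along each input coordinate.
    y = f(x)
    batch, out_dim, in_dim = y.shape[0], y.shape[-1], x.shape[-1]
    jac = np.zeros((batch, out_dim, in_dim))
    for j in range(in_dim):
        dx = np.zeros_like(x)
        dx[..., j] = eps
        jac[..., j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return y, jac
```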
q266492
_prevent_2nd_derivative
test
def _prevent_2nd_derivative(x): """Disables computation of the second derivatives for a tensor. NB: you need to apply a non-identity function to the output tensor for the exception to be raised. Arguments: x: A tensor. Returns: A tensor with the same value and the same derivative as x, but that raises LookupError when
python
{ "resource": "" }
q266493
MixtureSameFamily._distributional_transform
test
def _distributional_transform(self, x): """Performs distributional transform of the mixture samples. Distributional transform removes the parameters from samples of a multivariate distribution by applying conditional CDFs: (F(x_1), F(x_2 | x1_), ..., F(x_d | x_1, ..., x_d-1)) (the indexing is over the "flattened" event dimensions). The result is a sample of product of Uniform[0, 1] distributions. We assume that the components are factorized, so the conditional CDFs become F(x_i | x_1, ..., x_i-1) = sum_k w_i^k F_k (x_i), where w_i^k is the posterior mixture weight: for i > 0 w_i^k = w_k prob_k(x_1, ..., x_i-1) / sum_k' w_k' prob_k'(x_1, ..., x_i-1) and w_0^k = w_k is the mixture probability of the k-th component. Arguments: x: Sample of mixture distribution Returns: Result of the distributional transform """ if tensorshape_util.rank(x.shape) is None: # tf.nn.softmax raises an error when applied to inputs of undefined rank. raise ValueError("Distributional transform does not support inputs of " "undefined rank.") # Obtain factorized components distribution and assert that it's # a scalar distribution. if isinstance(self._components_distribution, independent.Independent): univariate_components = self._components_distribution.distribution else: univariate_components = self._components_distribution with tf.control_dependencies([ assert_util.assert_equal( univariate_components.is_scalar_event(), True, message="`univariate_components` must have scalar event") ]): x_padded = self._pad_sample_dims(x) # [S, B, 1, E] log_prob_x = univariate_components.log_prob(x_padded) # [S, B, k,
python
{ "resource": "" }
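For the first event dimension the transform reduces to the plain mixture CDF, `F(x_1) = sum_k w_k F_k(x_1)`; later dimensions reweight by the posterior responsibilities described above. A scalar sketch with Normal components (illustrative names, not the class's method):

```python
import math

def mixture_cdf(x, weights, locs, scales):
    # F(x) = sum_k w_k * Phi((x - mu_k) / sigma_k), the CDF of a
    # mixture of univariate Normals.
    normal_cdf = lambda z: 0.5 * math.erfc(-z / math.sqrt(2.))
    return sum(w * normal_cdf((x - m) / s)
               for w, m, s in zip(weights, locs, scales))
```

Applying this CDF to a sample of the mixture yields a Uniform[0, 1] variate, which is the point of the distributional transform.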
q266494
_split_covariance_into_marginals
test
def _split_covariance_into_marginals(covariance, block_sizes): """Split a covariance matrix into block-diagonal marginals of given sizes.""" start_dim = 0 marginals = [] for size in block_sizes:
python
{ "resource": "" }
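The block extraction the truncated loop performs is just diagonal-block slicing over the last two axes; a NumPy sketch completing the pattern:

```python
import numpy as np

def split_covariance_into_marginals(covariance, block_sizes):
    # Slice one square diagonal block per component latent size out of
    # the trailing two (matrix) dimensions.
    start = 0
    marginals = []
    for size in block_sizes:
        marginals.append(covariance[..., start:start + size, start:start + size])
        start += size
    return marginals
```

E.g. a 5 x 5 joint covariance with `block_sizes=[2, 3]` splits into a 2 x 2 and a 3 x 3 marginal block.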
q266495
_decompose_from_posterior_marginals
test
def _decompose_from_posterior_marginals( model, posterior_means, posterior_covs, parameter_samples): """Utility method to decompose a joint posterior into components. Args: model: `tfp.sts.Sum` instance defining an additive STS model. posterior_means: float `Tensor` of shape `concat( [[num_posterior_draws], batch_shape, num_timesteps, latent_size])` representing the posterior mean over latents in an `AdditiveStateSpaceModel`. posterior_covs: float `Tensor` of shape `concat( [[num_posterior_draws], batch_shape, num_timesteps, latent_size, latent_size])` representing the posterior marginal covariances over latents in an `AdditiveStateSpaceModel`. parameter_samples: Python `list` of `Tensors` representing posterior samples of model parameters, with shapes `[concat([ [num_posterior_draws], param.prior.batch_shape, param.prior.event_shape]) for param in model.parameters]`. This may optionally also be a map (Python `dict`) of parameter names to `Tensor` values. Returns: component_dists: A `collections.OrderedDict` instance mapping component StructuralTimeSeries instances (elements of `model.components`) to `tfd.Distribution` instances representing the posterior marginal distributions on the process modeled by each component. Each distribution has batch shape matching that of `posterior_means`/`posterior_covs`, and event shape of `[num_timesteps]`. """ try: model.components except AttributeError: raise ValueError('Model decomposed into components must be an instance of' '`tfp.sts.Sum` (passed model {})'.format(model)) with tf.compat.v1.name_scope('decompose_from_posterior_marginals'): # Extract the component means/covs from the joint latent posterior. latent_sizes = [component.latent_size for component in model.components] component_means = tf.split(posterior_means, latent_sizes, axis=-1) component_covs = _split_covariance_into_marginals(
python
{ "resource": "" }
q266496
decompose_by_component
test
def decompose_by_component(model, observed_time_series, parameter_samples): """Decompose an observed time series into contributions from each component. This method decomposes a time series according to the posterior representation of a structural time series model. In particular, it: - Computes the posterior marginal mean and covariances over the additive model's latent space. - Decomposes the latent posterior into the marginal blocks for each model component. - Maps the per-component latent posteriors back through each component's observation model, to generate the time series modeled by that component. Args: model: An instance of `tfp.sts.Sum` representing a structural time series model. observed_time_series: `float` `Tensor` of shape `batch_shape + [num_timesteps, 1]` (omitting the trailing unit dimension is also supported when `num_timesteps > 1`), specifying an observed time series. May optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes a mask `Tensor` to specify timesteps with missing observations. parameter_samples: Python `list` of `Tensors` representing posterior samples of model parameters, with shapes `[concat([ [num_posterior_draws], param.prior.batch_shape, param.prior.event_shape]) for param in model.parameters]`. This may optionally also be a map (Python `dict`) of parameter names to `Tensor` values. Returns: component_dists: A `collections.OrderedDict` instance mapping component StructuralTimeSeries instances (elements of `model.components`) to `tfd.Distribution` instances representing the posterior marginal distributions on the process modeled by each component. Each distribution has batch shape matching that of `posterior_means`/`posterior_covs`, and event shape of `[num_timesteps]`. #### Examples Suppose we've built a model and
python
{ "resource": "" }
q266497
decompose_forecast_by_component
test
def decompose_forecast_by_component(model, forecast_dist, parameter_samples): """Decompose a forecast distribution into contributions from each component. Args: model: An instance of `tfp.sts.Sum` representing a structural time series model. forecast_dist: A `Distribution` instance returned by `tfp.sts.forecast()`. (specifically, must be a `tfd.MixtureSameFamily` over a `tfd.LinearGaussianStateSpaceModel` parameterized by posterior samples). parameter_samples: Python `list` of `Tensors` representing posterior samples of model parameters, with shapes `[concat([[num_posterior_draws], param.prior.batch_shape, param.prior.event_shape]) for param in model.parameters]`. This may optionally also be a map (Python `dict`) of parameter names to `Tensor` values. Returns: component_forecasts: A `collections.OrderedDict` instance mapping component StructuralTimeSeries instances (elements of `model.components`) to `tfd.Distribution` instances representing the marginal forecast for each component. Each distribution has batch and event shape matching `forecast_dist` (specifically, the event shape is `[num_steps_forecast]`). #### Examples Suppose we've built a model, fit it to data, and constructed a forecast distribution: ```python day_of_week = tfp.sts.Seasonal( num_seasons=7, observed_time_series=observed_time_series, name='day_of_week') local_linear_trend = tfp.sts.LocalLinearTrend( observed_time_series=observed_time_series, name='local_linear_trend') model = tfp.sts.Sum(components=[day_of_week, local_linear_trend], observed_time_series=observed_time_series) num_steps_forecast = 50 samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series) forecast_dist = tfp.sts.forecast(model, observed_time_series,
python
{ "resource": "" }
q266498
dense_to_sparse
test
def dense_to_sparse(x, ignore_value=None, name=None): """Converts dense `Tensor` to `SparseTensor`, dropping `ignore_value` cells. Args: x: A `Tensor`. ignore_value: Entries in `x` equal to this value will be absent from the returned `SparseTensor`. If `None`, default value of `x` dtype will be used (e.g. '' for `str`, 0 for `int`). name: Python `str` prefix for ops created by this function. Returns: sparse_x: A `tf.SparseTensor` with the same shape as `x`. Raises: ValueError: when `x`'s rank is `None`. """ # Copied (with modifications) from: # tensorflow/contrib/layers/python/ops/sparse_ops.py. with tf.compat.v1.name_scope(name, 'dense_to_sparse', [x, ignore_value]): x = tf.convert_to_tensor(value=x, name='x') if ignore_value is None: if x.dtype.base_dtype == tf.string: # Exception because TF strings are converted to numpy objects by default.
python
{ "resource": "" }
q266499
_operator
test
def _operator(attr): """Defers an operator overload to `attr`. Args: attr: Operator attribute to use. Returns: Function calling operator attribute. """
python
{ "resource": "" }