_id: stringlengths (2 to 7)
title: stringlengths (1 to 88)
partition: stringclasses (3 values)
text: stringlengths (31 to 13.1k)
language: stringclasses (1 value)
meta_information: dict
q266700
choose
test
def choose(is_accepted, accepted, rejected, name=None):
  """Helper which expand_dims `is_accepted` then applies tf.where."""
  if not is_namedtuple_like(accepted):
    return _choose_base_case(is_accepted, accepted, rejected, name=name)
  if not isinstance(accepted, type(rejected)):
    raise TypeError('Type of `accepted` ({}) must be identical to '
                    'type of `rejected` ({})'.format(
                        type(accepted).__name__,
python
{ "resource": "" }
q266701
safe_sum
test
def safe_sum(x, alt_value=-np.inf, name=None):
  """Elementwise adds list members, replacing non-finite results with alt_value.

  Typically the `alt_value` is chosen so the `MetropolisHastings`
  `TransitionKernel` always rejects the proposal.

  Args:
    x: Python `list` of `Tensors` to elementwise add.
    alt_value: Python scalar used to replace any elementwise sums which would
      otherwise be non-finite.
    name: Python `str` name prefixed to Ops created by this function.
      Default value: `None` (i.e., "safe_sum").

  Returns:
    safe_sum: `Tensor` representing the elementwise sum of list of `Tensor`s
      `x` or `alt_value` where sums are non-finite.

  Raises:
    TypeError: if `x` is not list-like.
    ValueError: if `x` is empty.
  """
  with tf.compat.v1.name_scope(name, 'safe_sum', [x, alt_value]):
    if not is_list_like(x):
python
{ "resource": "" }
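The semantics described in the `safe_sum` docstring can be sketched in pure Python. The helper name `safe_sum_sketch` below is hypothetical (it is not the TF implementation): each position is summed across the list members, and any non-finite sum is replaced by `alt_value`.

```python
import math

def safe_sum_sketch(x, alt_value=-math.inf):
    # Hypothetical pure-Python sketch of `safe_sum` over lists of floats.
    if not isinstance(x, (list, tuple)):
        raise TypeError('`x` must be list-like.')
    if not x:
        raise ValueError('`x` cannot be empty.')
    sums = [sum(col) for col in zip(*x)]
    # Replace non-finite sums (inf, -inf, nan) with `alt_value`.
    return [s if math.isfinite(s) else alt_value for s in sums]
```

For example, `inf + (-inf)` is `nan`, so that position falls back to `alt_value`, which is what lets a `MetropolisHastings` kernel reject such proposals.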
q266702
_value_and_gradients
test
def _value_and_gradients(fn, fn_arg_list, result=None, grads=None, name=None):
  """Helper to `maybe_call_fn_and_grads`."""
  with tf.compat.v1.name_scope(name, 'value_and_gradients',
                               [fn_arg_list, result, grads]):

    def _convert_to_tensor(x, name):
      ctt = lambda x_: x_ if x_ is None else tf.convert_to_tensor(
          value=x_, name=name)
      return [ctt(x_) for x_ in x] if is_list_like(x) else ctt(x)

    fn_arg_list = (list(fn_arg_list) if is_list_like(fn_arg_list)
                   else [fn_arg_list])
    fn_arg_list = _convert_to_tensor(fn_arg_list, 'fn_arg')

    if result is None:
      result = fn(*fn_arg_list)
    if grads is None and tf.executing_eagerly():
python
{ "resource": "" }
q266703
maybe_call_fn_and_grads
test
def maybe_call_fn_and_grads(fn,
                            fn_arg_list,
                            result=None,
                            grads=None,
                            check_non_none_grads=True,
                            name=None):
  """Calls `fn` and computes the gradient of the result wrt `args_list`."""
  with tf.compat.v1.name_scope(name, 'maybe_call_fn_and_grads',
                               [fn_arg_list, result, grads]):
    fn_arg_list = (list(fn_arg_list) if is_list_like(fn_arg_list)
                   else [fn_arg_list])
    result, grads = _value_and_gradients(fn, fn_arg_list, result, grads)
    if not all(r.dtype.is_floating
               for r in (result if is_list_like(result) else [result])):  # pylint: disable=superfluous-parens
      raise
python
{ "resource": "" }
q266704
smart_for_loop
test
def smart_for_loop(loop_num_iter, body_fn, initial_loop_vars,
                   parallel_iterations=10, name=None):
  """Construct a for loop, preferring a python loop if `n` is statically known.

  Given `loop_num_iter` and `body_fn`, return an op corresponding to executing
  `body_fn` `loop_num_iter` times, feeding previous outputs of `body_fn` into
  the next iteration.

  If `loop_num_iter` is statically known, the op is constructed via a Python
  for loop, and otherwise a `tf.while_loop` is used.

  Args:
    loop_num_iter: `Integer` `Tensor` representing the number of loop
      iterations.
    body_fn: Callable to be executed `loop_num_iter` times.
    initial_loop_vars: Listlike object of `Tensors` to be passed in to
      `body_fn`'s first execution.
    parallel_iterations: The number of iterations allowed to run in parallel.
python
{ "resource": "" }
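The core idea of `smart_for_loop` — unrolling into a plain Python loop when the trip count is statically known — can be illustrated without TF. This is a hypothetical sketch; the real helper falls back to `tf.while_loop` when `loop_num_iter` is only known at graph run time.

```python
def smart_for_loop_sketch(loop_num_iter, body_fn, initial_loop_vars):
    # When `loop_num_iter` is a plain Python int (i.e. statically known),
    # the loop can simply be unrolled, feeding each iteration's outputs
    # into the next.
    loop_vars = list(initial_loop_vars)
    for _ in range(loop_num_iter):
        loop_vars = list(body_fn(*loop_vars))
    return loop_vars
```

Unrolling avoids the overhead of a symbolic while loop and lets downstream graph optimizations see every iteration.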
q266705
trace_scan
test
def trace_scan(loop_fn,
               initial_state,
               elems,
               trace_fn,
               parallel_iterations=10,
               name=None):
  """A simplified version of `tf.scan` that has configurable tracing.

  This function repeatedly calls `loop_fn(state, elem)`, where `state` is the
  `initial_state` during the first iteration, and the return value of
  `loop_fn` for every iteration thereafter. `elem` is a slice of `elems` along
  the first dimension, accessed in order. Additionally, it calls `trace_fn` on
  the return value of `loop_fn`. The `Tensor`s in return values of `trace_fn`
  are stacked and returned from this function, such that the first dimension
  of those `Tensor`s matches the size of `elems`.

  Args:
    loop_fn: A callable that takes in a `Tensor` or a nested collection of
      `Tensor`s with the same structure as `initial_state`, a slice of `elems`
      and returns the same structure as `initial_state`.
    initial_state: A `Tensor` or a nested collection of `Tensor`s passed to
      `loop_fn` in the first iteration.
    elems: A `Tensor` that is split along the first dimension and each element
      of which is passed to `loop_fn`.
    trace_fn: A callable that takes in the return value of `loop_fn` and
      returns a `Tensor` or a nested collection of `Tensor`s.
    parallel_iterations: Passed to the internal `tf.while_loop`.
    name: Name scope used in this function. Default: 'trace_scan'.

  Returns:
    final_state: The final return value of `loop_fn`.
    trace: The same structure as the return value of `trace_fn`, but with each
      `Tensor` being a stack of the corresponding `Tensors` in the return
      value of `trace_fn` for each slice of `elems`.
  """
  with tf.compat.v1.name_scope(
      name, 'trace_scan', [initial_state, elems]), tf.compat.v1.variable_scope(
          tf.compat.v1.get_variable_scope()) as vs:
    if vs.caching_device is None and not tf.executing_eagerly():
      vs.set_caching_device(lambda op: op.device)

    initial_state = tf.nest.map_structure(
        lambda x: tf.convert_to_tensor(value=x, name='initial_state'),
        initial_state)
    elems = tf.convert_to_tensor(value=elems, name='elems')

    static_length = elems.shape[0]
python
{ "resource": "" }
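The contract described in the `trace_scan` docstring amounts to a fold with tracing. A hypothetical list-based sketch (no TF, no stacking of tensors):

```python
def trace_scan_sketch(loop_fn, initial_state, elems, trace_fn):
    # Fold `loop_fn` over `elems`, recording `trace_fn` of every
    # intermediate state; the real helper stacks the traced Tensors so
    # their first dimension matches the size of `elems`.
    state = initial_state
    trace = []
    for elem in elems:
        state = loop_fn(state, elem)
        trace.append(trace_fn(state))
    return state, trace
```

For a running sum, the trace is the list of partial sums and the final state is the total.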
q266706
make_innermost_setter
test
def make_innermost_setter(setter):
  """Wraps a setter so it applies to the inner-most results in `kernel_results`.

  The wrapped setter unwraps `kernel_results` and applies `setter` to the
  first results without an `inner_results` attribute.

  Args:
    setter: A callable that takes the kernel results as well as some `*args`
      and `**kwargs` and returns a modified copy of those kernel results.

  Returns:
    new_setter: A wrapped `setter`.
  """

  @functools.wraps(setter)
  def _new_setter(kernel_results, *args, **kwargs):
    """Wrapped setter."""
    results_stack
python
{ "resource": "" }
q266707
make_innermost_getter
test
def make_innermost_getter(getter):
  """Wraps a getter so it applies to the inner-most results in `kernel_results`.

  The wrapped getter unwraps `kernel_results` and returns the return value of
  `getter` called with the first results without an `inner_results` attribute.

  Args:
    getter: A callable that takes Kernel results and returns some value.

  Returns:
    new_getter: A wrapped `getter`.
  """

  @functools.wraps(getter)
  def _new_getter(kernel_results, *args, **kwargs):
python
{ "resource": "" }
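Both `make_innermost_getter` and `make_innermost_setter` rely on the same unwrapping: follow `inner_results` attributes until one is absent. A sketch with a hypothetical `FakeResults` stand-in for nested kernel results:

```python
class FakeResults:
    # Hypothetical stand-in for nested kernel results.
    def __init__(self, value, inner_results=None):
        self.value = value
        if inner_results is not None:
            self.inner_results = inner_results

def innermost(kernel_results):
    # Walk down the `inner_results` chain to the first results object
    # that has no `inner_results` attribute.
    while hasattr(kernel_results, 'inner_results'):
        kernel_results = kernel_results.inner_results
    return kernel_results
```

A wrapped getter would then be `lambda kr: getter(innermost(kr))`; the setter additionally has to rebuild the chain on the way back out.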
q266708
enable_store_parameters_in_results
test
def enable_store_parameters_in_results(kernel):
  """Enables the `store_parameters_in_results` parameter in a chain of kernels.

  This is a temporary utility for use during the transition period of the
  parameter storage methods.

  Args:
    kernel: A TransitionKernel.

  Returns:
    kernel: The same kernel, but recreated with `store_parameters_in_results`
      recursively set to `True` in its parameters and its inner kernels (as
      appropriate).
  """
  kernel_stack = []
  while hasattr(kernel, 'parameters') and 'inner_kernel' in kernel.parameters:
    kernel_stack.append(kernel)
    kernel = kernel.parameters['inner_kernel']

  def _recreate_kernel(kernel, parameters):
    new_parameters = kernel.parameters.copy()
python
{ "resource": "" }
q266709
_replace_event_shape_in_shape_tensor
test
def _replace_event_shape_in_shape_tensor(
    input_shape, event_shape_in, event_shape_out, validate_args):
  """Replaces the rightmost dims in a `Tensor` representing a shape.

  Args:
    input_shape: a rank-1 `Tensor` of integers
    event_shape_in: the event shape expected to be present in rightmost dims
      of `input_shape`.
    event_shape_out: the event shape with which to replace `event_shape_in`
      in the rightmost dims of `input_shape`.
    validate_args: Python `bool` indicating whether arguments should
      be checked for correctness.

  Returns:
    output_shape: A rank-1 integer `Tensor` with the same contents as
      `input_shape` except for the event dims, which are replaced with
      `event_shape_out`.
  """
  output_tensorshape, is_validated = _replace_event_shape_in_tensorshape(
      tensorshape_util.constant_value_as_shape(input_shape),
      event_shape_in,
      event_shape_out)

  # TODO(b/124240153): Remove map(tf.identity, deps) once tf.function
  # correctly supports control_dependencies.
  validation_dependencies = (
      map(tf.identity, (event_shape_in, event_shape_out))
      if validate_args else ())

  if (tensorshape_util.is_fully_defined(output_tensorshape) and
      (is_validated or not validate_args)):
    with tf.control_dependencies(validation_dependencies):
      output_shape = tf.convert_to_tensor(
          value=output_tensorshape, name='output_shape', dtype_hint=tf.int32)
    return output_shape, output_tensorshape

  with tf.control_dependencies(validation_dependencies):
    event_shape_in_ndims = (
        tf.size(input=event_shape_in)
        if tensorshape_util.num_elements(event_shape_in.shape) is None else
        tensorshape_util.num_elements(event_shape_in.shape))
    input_non_event_shape, input_event_shape = tf.split(
        input_shape, num_or_size_splits=[-1, event_shape_in_ndims])

  additional_assertions = []
  if is_validated:
    pass
  elif validate_args:
    # Check that `input_event_shape` and `event_shape_in` are compatible in
    # the sense that they have equal entries in any
python
{ "resource": "" }
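Stripped of the TF machinery, the shape surgery performed by `_replace_event_shape_in_shape_tensor` is: validate the rightmost dims against `event_shape_in`, then splice in `event_shape_out`. A hypothetical list-based sketch (no `-1` wildcard handling):

```python
def replace_event_shape_sketch(input_shape, event_shape_in, event_shape_out):
    # Check the rightmost dims match the expected event shape, then
    # replace them with the new event shape.
    n = len(event_shape_in)
    if list(input_shape[-n:]) != list(event_shape_in):
        raise ValueError('Rightmost dims do not match `event_shape_in`.')
    return list(input_shape[:-n]) + list(event_shape_out)
```

This is the list analogue of the `tf.split` into non-event and event parts followed by concatenation with `event_shape_out`.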
q266710
_replace_event_shape_in_tensorshape
test
def _replace_event_shape_in_tensorshape(
    input_tensorshape, event_shape_in, event_shape_out):
  """Replaces the event shape dims of a `TensorShape`.

  Args:
    input_tensorshape: a `TensorShape` instance in which to attempt replacing
      event shape.
    event_shape_in: `Tensor` shape representing the event shape expected to
      be present in (rightmost dims of) `tensorshape_in`. Must be compatible
      with the rightmost dims of `tensorshape_in`.
    event_shape_out: `Tensor` shape representing the new event shape, i.e.,
      the replacement of `event_shape_in`.

  Returns:
    output_tensorshape: `TensorShape` with the rightmost `event_shape_in`
      replaced by `event_shape_out`. Might be partially defined, i.e.,
      `TensorShape(None)`.
    is_validated: Python `bool` indicating static validation happened.

  Raises:
    ValueError: if we can determine the event shape portion of
      `tensorshape_in` as well as `event_shape_in` both statically, and they
      are not compatible. "Compatible" here means that they are identical on
      any dims that are not -1 in `event_shape_in`.
  """
  event_shape_in_ndims = tensorshape_util.num_elements(event_shape_in.shape)
  if tensorshape_util.rank(
      input_tensorshape) is None or event_shape_in_ndims is None:
    return tf.TensorShape(None), False  # Not is_validated.

  input_non_event_ndims = tensorshape_util.rank(
      input_tensorshape) - event_shape_in_ndims
  if input_non_event_ndims < 0:
    raise ValueError(
        'Input has fewer ndims ({}) than event shape ndims ({}).'.format(
            tensorshape_util.rank(input_tensorshape), event_shape_in_ndims))

  input_non_event_tensorshape = input_tensorshape[:input_non_event_ndims]
  input_event_tensorshape = input_tensorshape[input_non_event_ndims:]

  # Check that `input_event_shape_` and `event_shape_in` are compatible in the
  # sense that they have equal entries in any position that isn't a `-1` in
  # `event_shape_in`. Note that our validations at construction time ensure
  # there is at most one such entry in `event_shape_in`.
  event_shape_in_ = tf.get_static_value(event_shape_in)
python
{ "resource": "" }
q266711
_maybe_check_valid_shape
test
def _maybe_check_valid_shape(shape, validate_args):
  """Check that a shape Tensor is int-type and otherwise sane."""
  if not dtype_util.is_integer(shape.dtype):
    raise TypeError('{} dtype ({}) should be `int`-like.'.format(
        shape, dtype_util.name(shape.dtype)))

  assertions = []

  message = '`{}` rank should be <= 1.'
  if tensorshape_util.rank(shape.shape) is not None:
    if tensorshape_util.rank(shape.shape) > 1:
      raise ValueError(message.format(shape))
  elif validate_args:
    assertions.append(assert_util.assert_less(
python
{ "resource": "" }
q266712
converged_any
test
def converged_any(converged, failed):
  """Condition to stop when any batch member converges,
python
{ "resource": "" }
q266713
get_initial_state_args
test
def get_initial_state_args(value_and_gradients_function,
                           initial_position,
                           grad_tolerance,
                           control_inputs=None):
  """Returns a dictionary to populate the initial state of the search procedure.

  Performs an initial convergence check and the first evaluation of the
  objective function.

  Args:
    value_and_gradients_function: A Python callable that accepts a tensor and
      returns a tuple of two tensors: the objective function value and its
      derivative.
    initial_position: The starting point of the search procedure.
    grad_tolerance: The gradient tolerance for the procedure.
    control_inputs: Optional ops used to assert the validity of inputs, these
      are added as control dependencies to execute before the objective
python
{ "resource": "" }
q266714
line_search_step
test
def line_search_step(state, value_and_gradients_function, search_direction,
                     grad_tolerance, f_relative_tolerance, x_tolerance,
                     stopping_condition):
  """Performs the line search step of the BFGS search procedure.

  Uses hager_zhang line search procedure to compute a suitable step size to
  advance the current `state.position` along the given `search_direction`.
  Also, if the line search is successful, updates the `state.position` by
  taking the corresponding step.

  Args:
    state: A namedtuple instance holding values for the current state of the
      search procedure. The state must include the fields: `position`,
      `objective_value`, `objective_gradient`, `num_iterations`,
      `num_objective_evaluations`, `converged` and `failed`.
    value_and_gradients_function: A Python callable that accepts a point as a
      real `Tensor` of shape `[..., n]` and returns a tuple of two tensors of
      the same dtype: the objective function value, a real `Tensor` of shape
      `[...]`, and its derivative, another real `Tensor` of shape `[..., n]`.
    search_direction: A real `Tensor` of shape `[..., n]`. The direction along
      which to perform line search.
    grad_tolerance: Scalar `Tensor` of real dtype. Specifies the gradient
      tolerance for the procedure.
    f_relative_tolerance: Scalar `Tensor` of real dtype. Specifies the
      tolerance for the relative change in the objective value.
    x_tolerance: Scalar `Tensor` of real dtype. Specifies the tolerance for
      the change in the position.
    stopping_condition: A Python function that takes as input two Boolean
      tensors of shape `[...]`, and returns a Boolean scalar tensor. The
      input tensors are `converged` and `failed`, indicating the current
      status of each respective batch member; the return value states whether
      the algorithm should stop.

  Returns:
    A copy of the input state with the following fields updated:
      converged: a Boolean `Tensor` of shape `[...]` indicating whether the
        convergence criteria has been met.
      failed: a Boolean `Tensor` of shape `[...]` indicating whether the line
        search procedure failed to converge, or if either the updated gradient
        or objective function are no longer finite.
      num_iterations: Increased by 1.
      num_objective_evaluations: Increased by the number of times that the
        objective function got evaluated.
      position, objective_value, objective_gradient: If line search succeeded,
        updated by computing the new position and evaluating the objective
        function at that position.
  """
  line_search_value_grad_func = _restrict_along_direction(
      value_and_gradients_function, state.position, search_direction)
  derivative_at_start_pt = tf.reduce_sum(
      input_tensor=state.objective_gradient * search_direction, axis=-1)
  val_0 = ValueAndGradient(x=_broadcast(0, state.position),
                           f=state.objective_value,
python
{ "resource": "" }
q266715
_restrict_along_direction
test
def _restrict_along_direction(value_and_gradients_function,
                              position,
                              direction):
  """Restricts a function in n-dimensions to a given direction.

  Suppose f: R^n -> R. Then given a point x0 and a vector p0 in R^n, the
  restriction of the function along that direction is defined by:

  ```None
  g(t) = f(x0 + t * p0)
  ```

  This function performs this restriction on the given function. In addition,
  it also computes the gradient of the restricted function along the
  restriction direction. This is equivalent to computing `dg/dt` in the
  definition above.

  Args:
    value_and_gradients_function: Callable accepting a single real `Tensor`
      argument of shape `[..., n]` and returning a tuple of a real `Tensor`
      of shape `[...]` and a real `Tensor` of shape `[..., n]`. The
      multivariate function whose restriction is to be computed. The output
      values of the callable are the function value and the gradients at the
      input argument.
    position: `Tensor` of real dtype and shape consumable by
      `value_and_gradients_function`. Corresponds to `x0` in the definition
      above.
    direction: `Tensor` of the same dtype and shape as `position`. The
      direction along which to restrict the function. Note that the direction
      need not be a unit vector.

  Returns:
    restricted_value_and_gradients_func: A callable accepting a tensor of
      shape broadcastable to `[...]` and same dtype as `position` and
      returning a namedtuple of `Tensors`. The input tensor is the parameter
      along the direction labelled `t` above. The return value contains
      fields:
        x:
python
{ "resource": "" }
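The restriction `g(t) = f(x0 + t * p0)` and its derivative `dg/dt = grad_f(x0 + t * p0) . p0` (chain rule) can be sketched over plain Python lists. The helper name below is hypothetical:

```python
def restrict_along_direction_sketch(value_and_grad_fn, x0, p0):
    # Returns g(t) = (f(x0 + t*p0), dg/dt), where dg/dt is the dot
    # product of the gradient at x0 + t*p0 with the direction p0.
    def g(t):
        point = [xi + t * pi for xi, pi in zip(x0, p0)]
        f, grad = value_and_grad_fn(point)
        df = sum(gi * pi for gi, pi in zip(grad, p0))
        return f, df
    return g
```

For `f(x) = sum(x_i^2)` restricted from `x0 = [1, 0]` along `p0 = [1, 1]`, at `t = 1` the point is `[2, 1]`, so `f = 5` and `dg/dt = 4 + 2 = 6`.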
q266716
_update_position
test
def _update_position(state,
                     position_delta,
                     next_objective,
                     next_gradient,
                     grad_tolerance,
                     f_relative_tolerance,
                     x_tolerance):
  """Updates the state advancing its position by a given position_delta."""
  failed = state.failed | ~tf.math.is_finite(next_objective) | ~tf.reduce_all(
      input_tensor=tf.math.is_finite(next_gradient), axis=-1)

  next_position = state.position + position_delta
  converged = ~failed & _check_convergence(state.position,
                                           next_position,
                                           state.objective_value,
                                           next_objective,
python
{ "resource": "" }
q266717
_check_convergence
test
def _check_convergence(current_position,
                       next_position,
                       current_objective,
                       next_objective,
                       next_gradient,
                       grad_tolerance,
                       f_relative_tolerance,
                       x_tolerance):
  """Checks if the algorithm satisfies the convergence criteria."""
  grad_converged =
python
{ "resource": "" }
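The three tolerances passed to `_check_convergence` correspond to the three standard BFGS stopping tests: small gradient, small step, and small relative objective change. A hypothetical scalar/list sketch; the specific norms used (supremum norm here) are an assumption, not taken from the source:

```python
def check_convergence_sketch(current_position, next_position,
                             current_objective, next_objective,
                             next_gradient, grad_tolerance,
                             f_relative_tolerance, x_tolerance):
    # Converged if the gradient is small, the position barely moved, or
    # the objective barely changed relative to its magnitude.
    grad_converged = max(abs(g) for g in next_gradient) <= grad_tolerance
    x_converged = max(abs(a - b) for a, b in
                      zip(next_position, current_position)) <= x_tolerance
    f_converged = (abs(next_objective - current_objective)
                   <= f_relative_tolerance * abs(current_objective))
    return grad_converged or x_converged or f_converged
```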
q266718
_broadcast
test
def _broadcast(value, target):
  """Broadcast a value to match the batching dimensions of a target.

  If necessary the value is converted into a tensor. Both value and target
  should be of the same dtype.

  Args:
    value: A value to broadcast.
    target: A `Tensor` of shape [b1, ..., bn, d].

  Returns:
    A `Tensor` of shape [b1, ...,
python
{ "resource": "" }
q266719
_harmonic_number
test
def _harmonic_number(x):
  """Compute the harmonic number from its analytic continuation.

  Derivation from [here](
  https://en.wikipedia.org/wiki/Digamma_function#Relation_to_harmonic_numbers)
  and [Euler's constant](
  https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant).

  Args:
    x: input float.
python
{ "resource": "" }
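The analytic continuation referenced by `_harmonic_number` is `H(x) = digamma(x + 1) + gamma`, where `gamma` is the Euler-Mascheroni constant. For positive integers this reduces to the familiar partial sum, which the following sketch (a hypothetical helper, not the TF implementation) computes directly:

```python
EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic_number_int(n):
    # H(n) = 1 + 1/2 + ... + 1/n for a positive integer n; the TFP helper
    # extends this to real x via H(x) = digamma(x + 1) + EULER_GAMMA.
    return sum(1.0 / k for k in range(1, n + 1))
```

For example, `H(4) = 1 + 1/2 + 1/3 + 1/4 = 25/12`.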
q266720
default_exchange_proposed_fn
test
def default_exchange_proposed_fn(prob_exchange):
  """Default exchange proposal function, for replica exchange MC.

  With probability `prob_exchange` propose combinations of replica for
  exchange. When exchanging, create combinations of adjacent replicas in
  [Replica Exchange Monte Carlo](
  https://en.wikipedia.org/wiki/Parallel_tempering)

  ```
  exchange_fn = default_exchange_proposed_fn(prob_exchange=0.5)
  exchange_proposed = exchange_fn(num_replica=3)

  exchange_proposed.eval()
  ==> [[0, 1]]  # 1 exchange, 0 <--> 1

  exchange_proposed.eval()
  ==> []  # 0 exchanges
  ```

  Args:
    prob_exchange: Scalar `Tensor` giving probability that any exchanges will
      be generated.

  Returns:
    default_exchange_proposed_fn_: Python callable which takes a number of
      replicas (a Python integer), and returns combinations of replicas for
      exchange as an [n, 2] integer `Tensor`, `0 <= n <= num_replica // 2`,
      with *unique* values in the set `{0, ..., num_replica}`.
  """

  def default_exchange_proposed_fn_(num_replica, seed=None):
    """Default function for `exchange_proposed_fn` of `kernel`."""
    seed_stream = distributions.SeedStream(seed,
                                           'default_exchange_proposed_fn')

    zero_start = tf.random.uniform([], seed=seed_stream()) > 0.5
    if num_replica % 2 == 0:

      def _exchange():
        flat_exchange = tf.range(num_replica)
        if num_replica > 2:
          start = tf.cast(~zero_start, dtype=tf.int32)
          end = num_replica - start
          flat_exchange =
python
{ "resource": "" }
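The proposal scheme in the docstring — with probability `prob_exchange`, pair adjacent replicas, randomly starting the pairing at replica 0 or 1 — can be sketched without TF. The function and the injectable `rng` below are hypothetical, for illustration only:

```python
import random

def propose_adjacent_exchanges(num_replica, prob_exchange, rng=random):
    # With probability 1 - prob_exchange, propose no exchanges at all.
    if rng.random() > prob_exchange:
        return []
    # Randomly include or exclude replica 0, then pair adjacent replicas.
    start = rng.randrange(2)
    idx = list(range(start, num_replica))
    return [[idx[i], idx[i + 1]] for i in range(0, len(idx) - 1, 2)]
```

Each proposed pair is adjacent and no replica appears twice, matching the "*unique* values" guarantee in the docstring.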
q266721
_get_field
test
def _get_field(kernel_results, field_name):
  """field_name from kernel_results or kernel_results.accepted_results."""
  if hasattr(kernel_results, field_name):
python
{ "resource": "" }
q266722
ReplicaExchangeMC._get_exchanged_states
test
def _get_exchanged_states(self, old_states, exchange_proposed,
                          exchange_proposed_n, sampled_replica_states,
                          sampled_replica_results):
  """Get list of TensorArrays holding exchanged states, and zeros."""
  with tf.compat.v1.name_scope('get_exchanged_states'):

    target_log_probs = []
    for replica in range(self.num_replica):
      replica_log_prob = _get_field(sampled_replica_results[replica],
                                    'target_log_prob')
      inverse_temp = self.inverse_temperatures[replica]
      target_log_probs.append(replica_log_prob / inverse_temp)
    target_log_probs = tf.stack(target_log_probs, axis=0)

    dtype = target_log_probs.dtype
    num_state_parts = len(sampled_replica_states[0])
    # exchanged_states[k][i] is Tensor of (new) state part k, for replica i.
    # The `k` will be known statically, and `i` is a Tensor.
    # We will insert values into indices `i` for every replica with a
    # proposed exchange.
    exchanged_states = [
        tf.TensorArray(
            dtype,
            size=self.num_replica,
            dynamic_size=False,
            tensor_array_name='exchanged_states',
            # State part k has same shape, regardless of replica. So use 0.
            element_shape=sampled_replica_states[0][k].shape)
        for k in range(num_state_parts)
    ]

    # Draw random variables here, to avoid sampling in the loop (and losing
    # reproducibility). This
python
{ "resource": "" }
q266723
DirichletMultinomial._variance_scale_term
test
def _variance_scale_term(self):
  """Helper to `_covariance` and `_variance` which computes a shared scale."""
  # Expand back the last dim so the shape of _variance_scale_term matches the
  # shape of self.concentration.
python
{ "resource": "" }
q266724
forward_log_det_jacobian_fn
test
def forward_log_det_jacobian_fn(bijector):
  """Makes a function which applies a list of Bijectors' `log_det_jacobian`s."""
  if not mcmc_util.is_list_like(bijector):
    bijector = [bijector]

  def fn(transformed_state_parts, event_ndims):
    return
python
{ "resource": "" }
q266725
forward_transform_fn
test
def forward_transform_fn(bijector):
  """Makes a function which applies a list of Bijectors' `forward`s."""
  if not mcmc_util.is_list_like(bijector):
    bijector = [bijector]
python
{ "resource": "" }
q266726
inverse_transform_fn
test
def inverse_transform_fn(bijector):
  """Makes a function which applies a list of Bijectors' `inverse`s."""
  if not mcmc_util.is_list_like(bijector):
python
{ "resource": "" }
q266727
TransformedTransitionKernel.one_step
test
def one_step(self, current_state, previous_kernel_results):
  """Runs one iteration of the Transformed Kernel.

  Args:
    current_state: `Tensor` or Python `list` of `Tensor`s representing the
      current state(s) of the Markov chain(s), _after_ application of
      `bijector.forward`. The first `r` dimensions index independent chains,
      `r = tf.rank(target_log_prob_fn(*current_state))`. The
      `inner_kernel.one_step` does not actually use `current_state`, rather
      it takes as input `previous_kernel_results.transformed_state` (because
      `TransformedTransitionKernel` creates a copy of the input inner_kernel
      with a modified `target_log_prob_fn` which internally applies the
      `bijector.forward`).
    previous_kernel_results: `collections.namedtuple` containing `Tensor`s
      representing values from previous calls to this function (or from the
      `bootstrap_results` function.)

  Returns:
    next_state: Tensor or Python list of `Tensor`s representing the state(s)
      of the Markov chain(s) after taking exactly one step. Has same type and
      shape as `current_state`.
    kernel_results: `collections.namedtuple` of internal calculations used to
      advance the chain.
  """
  with tf.compat.v1.name_scope(
      name=mcmc_util.make_name(self.name, 'transformed_kernel', 'one_step'),
      values=[previous_kernel_results]):
    transformed_next_state, kernel_results = self._inner_kernel.one_step(
python
{ "resource": "" }
q266728
val_where
test
def val_where(cond, tval, fval):
  """Like tf.where but works on namedtuples."""
  if isinstance(tval, tf.Tensor):
    return tf.where(cond, tval, fval)
  elif isinstance(tval, tuple):
    cls = type(tval)
python
{ "resource": "" }
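The recursion in `val_where` — select leaves by a condition while preserving namedtuple structure — can be sketched with scalar conditions. The `Pair` type below is a hypothetical stand-in for the line-search value-and-gradient namedtuples; the real helper uses `tf.where` elementwise on `Tensor` leaves:

```python
from collections import namedtuple

Pair = namedtuple('Pair', ['x', 'f'])  # hypothetical result type

def val_where_sketch(cond, tval, fval):
    # Recurse through namedtuples, selecting each leaf from `tval` or
    # `fval` according to `cond`.
    if isinstance(tval, tuple) and hasattr(tval, '_fields'):
        return type(tval)(*(val_where_sketch(cond, t, f)
                            for t, f in zip(tval, fval)))
    return tval if cond else fval
```

Reconstructing with `type(tval)(*...)` is what keeps the output the same namedtuple class as the inputs.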
q266729
secant2
test
def secant2(value_and_gradients_function,
            val_0,
            search_interval,
            f_lim,
            sufficient_decrease_param=0.1,
            curvature_param=0.9,
            name=None):
  """Performs the secant square procedure of Hager Zhang.

  Given an interval that brackets a root, this procedure performs an update
  of both end points using two intermediate points generated using the secant
  interpolation. For details see the steps S1-S4 in
  [Hager and Zhang (2006)][2].

  The interval [a, b] must satisfy the opposite slope conditions described in
  the documentation for `update`.

  Args:
    value_and_gradients_function: A Python callable that accepts a real
      scalar tensor and returns an object that can be converted to a
      namedtuple. The namedtuple should have fields 'f' and 'df' that
      correspond to scalar tensors of real dtype containing the value of the
      function and its derivative at that point. The other namedtuple fields,
      if present, should be tensors or sequences (possibly nested) of
      tensors. In the usual optimization application, this function would be
      generated by projecting the multivariate objective function along some
      specific direction. The direction is determined by some other procedure
      but should be a descent direction (i.e. the derivative of the projected
      univariate function must be negative at 0.). Alternatively, the
      function may represent the batching of `n` such line functions (e.g.
      projecting a single multivariate objective function along `n` distinct
      directions at once) accepting n points as input, i.e. a tensor of shape
      [n], and the fields 'f' and 'df' in the returned namedtuple should each
      be a tensor of shape [n], with the corresponding function values and
      derivatives at the input points.
    val_0: A namedtuple, as returned by value_and_gradients_function
      evaluated at `0.`.
    search_interval: A namedtuple describing the current search interval,
      must include the fields:
      - converged: Boolean `Tensor` of shape [n], indicating batch members
        where search has already converged. Interval for these batch members
        won't be modified.
      - failed: Boolean `Tensor` of shape [n], indicating batch members where
        search has already failed. Interval for these batch members won't be
        modified.
      - iterations: Scalar int32 `Tensor`. Number of line search iterations
        so far.
      - func_evals: Scalar int32 `Tensor`. Number of function evaluations so
        far.
      - left: A namedtuple, as returned by value_and_gradients_function, of
        the left end point of the current search interval.
      - right: A namedtuple, as returned by value_and_gradients_function, of
        the right end point of the current search interval.
    f_lim: Scalar `Tensor` of real dtype. The function value threshold for
      the approximate Wolfe conditions to be checked.
    sufficient_decrease_param: Positive scalar `Tensor` of real dtype.
      Bounded above by the curvature param. Corresponds to 'delta' in the
      terminology of [Hager and Zhang (2006)][2].
    curvature_param: Positive scalar `Tensor` of real dtype. Bounded above by
      `1.`. Corresponds to 'sigma' in the terminology of
      [Hager and Zhang (2006)][2].
    name: (Optional) Python str. The name prefixed to the ops created by this
      function. If not supplied, the default
python
{ "resource": "" }
q266730
_secant2_inner
test
def _secant2_inner(value_and_gradients_function,
                   initial_args,
                   val_0,
                   val_c,
                   f_lim,
                   sufficient_decrease_param,
                   curvature_param):
  """Helper function for secant square."""
  # Apply the `update` function on active branch members to squeeze their
  # bracketing interval.
  update_result = update(value_and_gradients_function,
                         initial_args.left,
                         initial_args.right,
                         val_c,
                         f_lim,
                         active=initial_args.active)

  # Update active and failed flags, update left/right on non-failed entries.
  active = initial_args.active & ~update_result.failed
  failed = initial_args.failed | update_result.failed
  val_left = val_where(active, update_result.left, initial_args.left)
  val_right = val_where(active, update_result.right, initial_args.right)

  # Check if new `c` points should be generated.
  updated_left = active & tf.equal(val_left.x, val_c.x)
  updated_right = active & tf.equal(val_right.x, val_c.x)
  is_new = updated_left | updated_right

  next_c = tf.where(updated_left, _secant(initial_args.left, val_left),
                    val_c.x)
  next_c = tf.where(updated_right, _secant(initial_args.right, val_right),
                    next_c)
  in_range = (val_left.x <= next_c) & (next_c <= val_right.x)

  # Figure out if an extra
python
{ "resource": "" }
q266731
_secant2_inner_update
test
def _secant2_inner_update(value_and_gradients_function,
                          initial_args,
                          val_0,
                          val_c,
                          f_lim,
                          sufficient_decrease_param,
                          curvature_param):
  """Helper function for secant-square step."""
  # Fail if `val_c` is no longer finite.
  new_failed = initial_args.active & ~is_finite(val_c)
  active = initial_args.active & ~new_failed
  failed = initial_args.failed | new_failed

  # We converge when we find a point satisfying the Wolfe conditions, in
  # those cases we set `val_left = val_right = val_c`.
  found_wolfe = active & _satisfies_wolfe(
      val_0, val_c, f_lim, sufficient_decrease_param, curvature_param)
  val_left = val_where(found_wolfe, val_c, initial_args.left)
  val_right = val_where(found_wolfe, val_c, initial_args.right)
  converged = initial_args.converged | found_wolfe
  active = active & ~found_wolfe

  # If any active batch members remain, we apply the `update` function to
  # squeeze further their corresponding left/right bracketing interval.
  def _apply_update():
    update_result = update(
        value_and_gradients_function, val_left, val_right,
python
{ "resource": "" }
q266732
update
test
def update(value_and_gradients_function, val_left, val_right, val_trial,
           f_lim, active=None):
  """Squeezes a bracketing interval containing the minimum.

  Given an interval which brackets a minimum and a point in that interval,
  finds a smaller nested interval which also brackets the minimum. If the
  supplied point does not lie in the bracketing interval, the current
  interval is returned.

  The following description is given in terms of individual points evaluated
  on a line function to be minimized. Note, however, the implementation also
  accepts batches of points allowing minimization of multiple line functions
  at once. See details on the docstring of `value_and_gradients_function`
  below.

  The requirement of the interval bracketing a minimum is expressed through
  the opposite slope conditions. Assume the left end point is 'a', the right
  end point is 'b', the function to be minimized is 'f' and the derivative is
  'df'. The update procedure relies on the following conditions being
  satisfied:

  '''
    f(a) <= f(0) + epsilon   (1)
    df(a) < 0                (2)
    df(b) > 0                (3)
  '''

  In the first condition, epsilon is a small positive constant. The condition
  demands that the function at the left end point be not much bigger than the
  starting point (i.e. 0). This is an easy to satisfy condition because by
  assumption, we are in a direction where the function value is decreasing.
  The second and third conditions together demand that there is at least one
  zero of the derivative in between a and b.

  In addition to the interval, the update algorithm requires a third point to
  be supplied. Usually, this point would lie within the interval [a, b]. If
  the point is outside this interval, the current interval is returned. If
  the point lies within the interval, the behaviour of the function and
  derivative value at this point is used to squeeze the original interval in
  a manner that preserves the opposite slope conditions.

  For further details of this component, see the procedure U0-U3 on page 123
  of the [Hager and Zhang (2006)][2] article.

  Note that this function does not explicitly verify whether the opposite
  slope conditions are satisfied for the supplied interval. It is assumed
  that this is so.

  Args:
    value_and_gradients_function: A Python callable that accepts a real
      scalar tensor and returns an object that can be converted to a
      namedtuple. The namedtuple should have fields 'f' and 'df' that
      correspond to scalar tensors of real dtype containing the value of the
      function and its derivative at that point. The other namedtuple fields,
      if present, should be tensors or sequences (possibly nested) of
      tensors. In the usual optimization application, this function would be
      generated by projecting the multivariate objective function along some
      specific direction. The direction is determined by some other procedure
      but should be a descent direction (i.e. the derivative of the projected
      univariate function must be negative at 0.). Alternatively, the
      function may represent the batching of `n` such line functions (e.g.
      projecting a single multivariate objective function along `n` distinct
      directions at once) accepting n points as input, i.e. a tensor of shape
      [n], and the fields 'f' and 'df' in the returned namedtuple should each
      be a tensor of shape [n], with the corresponding function values and
      derivatives at the input points.
    val_left: Return value of value_and_gradients_function at the left end
      point of the bracketing interval (labelled 'a' above).
    val_right: Return value of value_and_gradients_function at the right end
      point of the bracketing interval (labelled 'b' above).
    val_trial: Return value of value_and_gradients_function at the trial
      point to be used to shrink the interval (labelled 'c'
python
{ "resource": "" }
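The opposite slope conditions (1)-(3) above are simple to check directly. As an illustrative plain-Python sketch (not the TFP implementation), with a toy quadratic `f` and its derivative `df` chosen here purely as assumptions:

```python
def f(x):
    """Toy objective with its minimum at x = 2 (illustrative assumption)."""
    return (x - 2.0) ** 2

def df(x):
    """Derivative of the toy objective."""
    return 2.0 * (x - 2.0)

def brackets_minimum(a, b, f_lim):
    """Checks conditions (1)-(3): f(a) <= f_lim, df(a) < 0, df(b) > 0."""
    return f(a) <= f_lim and df(a) < 0.0 and df(b) > 0.0
```

Any interval satisfying all three conditions must contain a zero of `df`, and hence a local minimum of `f`.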
q266733
bracket
test
def bracket(value_and_gradients_function, search_interval, f_lim, max_iterations, expansion_param=5.0): """Brackets the minimum given an initial starting point. Applies the Hager Zhang bracketing algorithm to find an interval containing a region with points satisfying Wolfe conditions. Uses the supplied initial step size 'c', the right end point of the provided search interval, to find such an interval. The only condition on 'c' is that it should be positive. For more details see steps B0-B3 in [Hager and Zhang (2006)][2]. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple containing the value field `f` of the function and its derivative value field `df` at that point. Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and return a tuple of two tensors of shape [n], the function values and the corresponding derivatives at the input points. search_interval: A namedtuple describing the current search interval, must include the fields: - converged: Boolean `Tensor` of shape [n], indicating batch members where search has already converged. Interval for these batch members won't be modified. - failed: Boolean `Tensor` of shape [n], indicating batch members where search has already failed. Interval for these batch members won't be modified. - iterations: Scalar int32 `Tensor`. Number of line search iterations so far. - func_evals: Scalar int32 `Tensor`. Number of function evaluations so far. - left: A namedtuple, as returned by value_and_gradients_function evaluated at 0, the left end point of the current interval. - right: A namedtuple, as returned by value_and_gradients_function, of the right end point of the current interval (labelled 'c' above). f_lim: real `Tensor` of shape [n]. 
The function value threshold for the approximate Wolfe conditions to be checked for each batch member. max_iterations: Int32 scalar `Tensor`. The maximum number of iterations permitted. The limit applies equally to all batch members. expansion_param: Scalar positive `Tensor` of real dtype. Must be greater than `1.`. Used to expand the initial interval in case it does not bracket a minimum. Returns: A namedtuple with the following fields. iteration: An int32 scalar `Tensor`. The number of iterations performed. Bounded above by `max_iterations` parameter. stopped: A boolean `Tensor` of shape [n]. True for those batch members where the algorithm terminated before reaching `max_iterations`. failed: A boolean `Tensor` of shape [n]. True for those batch members where an error was encountered during bracketing. num_evals: An int32 scalar `Tensor`. The number of times the objective function was evaluated. left: Return value of value_and_gradients_function at the updated left end point of the interval found. right: Return value of value_and_gradients_function at the updated right end point of the interval found. """ already_stopped = search_interval.failed | search_interval.converged # If the slope at right end point is positive, step B1 in [2], then the given # initial points already bracket a minimum. bracketed = search_interval.right.df >= 0 # Bisection is needed, step B2, if right end point almost
python
{ "resource": "" }
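Steps B0-B3 essentially grow the trial step until the slope turns non-negative. A scalar sketch of just that expansion branch (ignoring the `f_lim` bisection case and batching; the function names and the test derivative below are assumptions, not the TFP code):

```python
def bracket_minimum(df, c, expansion_param=5.0, max_iterations=50):
    """Expands the right end point until df(c) >= 0 (step B1 exit, step B3 growth)."""
    a = 0.0
    for _ in range(max_iterations):
        if df(c) >= 0.0:                   # Step B1: [a, c] now brackets a minimum.
            return a, c
        a, c = c, c * expansion_param      # Step B3: expand and slide the interval.
    return a, c
```

With `expansion_param > 1` the interval grows geometrically, so a minimum of a coercive function is bracketed in logarithmically many evaluations.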
q266734
bisect
test
def bisect(value_and_gradients_function, initial_left, initial_right, f_lim): """Bisects an interval and updates to satisfy opposite slope conditions. Corresponds to the step U3 in [Hager and Zhang (2006)][2]. Args: value_and_gradients_function: A Python callable that accepts a real scalar tensor and returns a namedtuple containing the value field `f` of the function and its derivative value field `df` at that point. Alternatively, the function may represent the batching of `n` such line functions (e.g. projecting a single multivariate objective function along `n` distinct directions at once) accepting n points as input, i.e. a tensor of shape [n], and return a tuple of two tensors of shape [n], the function values and the corresponding derivatives at the input points. initial_left: Return value of value_and_gradients_function at the left end point of the current bracketing interval. initial_right: Return value of value_and_gradients_function at the right end point of the current bracketing interval. f_lim: real `Tensor` of shape [n]. The function value threshold for the approximate Wolfe conditions to be checked for each batch member. Returns: A namedtuple containing the following fields: iteration: An int32 scalar `Tensor`. The number of iterations performed. Bounded above by `max_iterations` parameter. stopped: A boolean scalar `Tensor`. True if the bisect algorithm terminated. failed: A scalar boolean tensor. Indicates whether the objective function
python
{ "resource": "" }
q266735
_bisect
test
def _bisect(value_and_gradients_function, initial_args, f_lim): """Actual implementation of bisect given initial_args in a _BracketResult.""" def _loop_cond(curr): # TODO(b/112524024): Also take into account max_iterations. return ~tf.reduce_all(input_tensor=curr.stopped) def _loop_body(curr): """Narrow down interval to satisfy opposite slope conditions.""" mid = value_and_gradients_function((curr.left.x + curr.right.x) / 2) # Fail if function values at mid point are no longer finite; or left/right # points are so close to it that we can't distinguish them any more. failed = (curr.failed | ~is_finite(mid) | tf.equal(mid.x, curr.left.x) | tf.equal(mid.x, curr.right.x)) # If mid point has a negative slope and the function value at that point is # small enough, we can use it as a new left end point to narrow down the # interval. If mid point has a positive slope, then we have found a suitable # right end point to bracket a minima within opposite slopes. Otherwise, the # mid point has a negative slope but the function value at that point is too # high to work as left end point, we are in the same situation in which we # started the loop so we just update the right end point and continue. to_update = ~(curr.stopped | failed) update_left = (mid.df < 0) & (mid.f <= f_lim) left = val_where(to_update & update_left, mid, curr.left) right = val_where(to_update & ~update_left, mid, curr.right) # We're done when the right end point has
python
{ "resource": "" }
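Step U3 can be pictured as a scalar loop: halve the interval, take the midpoint as the new left end point when it has a negative slope and an acceptable value, and otherwise shrink from the right. This is a hedged illustration of the loop body in `_bisect`, not the batched TFP code:

```python
def bisect_interval(f, df, a, b, f_lim, tol=1e-8):
    """Shrinks [a, b] until a midpoint with df >= 0 gives a valid right end point."""
    while b - a > tol:
        mid = 0.5 * (a + b)
        if df(mid) >= 0.0:
            return a, mid        # Opposite slope conditions now hold on [a, mid].
        if f(mid) <= f_lim:
            a = mid              # Negative slope, low value: usable new left end.
        else:
            b = mid              # Negative slope but value too high: shrink right.
    return a, b
```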
q266736
is_finite
test
def is_finite(val_1, val_2=None): """Checks if the supplied values are finite. Args: val_1: A namedtuple instance with the function value and derivative, as returned e.g. by value_and_gradients_function evaluations. val_2: (Optional) A namedtuple instance with the function value and derivative, as returned e.g. by value_and_gradients_function evaluations. Returns: is_finite: Scalar boolean `Tensor` indicating whether the function value and the derivative in `val_1`
python
{ "resource": "" }
q266737
_satisfies_wolfe
test
def _satisfies_wolfe(val_0, val_c, f_lim, sufficient_decrease_param, curvature_param): """Checks whether the Wolfe or approx Wolfe conditions are satisfied. The Wolfe conditions are a set of stopping criteria for an inexact line search algorithm. Let f(a) be the function value along the search direction and df(a) the derivative along the search direction evaluated a distance 'a'. Here 'a' is the distance along the search direction. The Wolfe conditions are: ```None f(a) <= f(0) + delta * a * df(0) (Armijo/Sufficient decrease condition) df(a) >= sigma * df(0) (Weak curvature condition) ``` `delta` and `sigma` are two user supplied parameters satisfying: `0 < delta < sigma <= 1.`. In the following, delta is called `sufficient_decrease_param` and sigma is called `curvature_param`. On a finite precision machine, the Wolfe conditions are difficult to satisfy when one is close to the minimum. Hence, Hager-Zhang propose replacing the sufficient decrease condition with the following condition on the derivative in the vicinity of a minimum. ```None df(a) <= (2 * delta - 1) * df(0) (Approx Wolfe sufficient decrease) ``` This condition is only used if one is near the minimum. This is tested using ```None f(a) <= f(0) + epsilon * |f(0)| ``` The following function checks both the Wolfe and approx Wolfe conditions. Here, `epsilon` is a small positive constant. In the following, the argument `f_lim` corresponds to the product: epsilon * |f(0)|. Args: val_0: A namedtuple, as returned by value_and_gradients_function evaluated at 0. val_c: A namedtuple, as returned by value_and_gradients_function evaluated at the point to be
python
{ "resource": "" }
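The combined check can be sketched with scalars (hypothetical names; `delta` and `sigma` play the roles of `sufficient_decrease_param` and `curvature_param` from the docstring):

```python
def satisfies_wolfe(f0, df0, fa, dfa, a, f_lim, delta=0.1, sigma=0.9):
    """True if the weak Wolfe or the approximate Wolfe conditions hold at `a`."""
    curvature = dfa >= sigma * df0                              # Weak curvature.
    exact = fa <= f0 + delta * a * df0                          # Armijo decrease.
    approx = dfa <= (2.0 * delta - 1.0) * df0 and fa <= f_lim   # HZ replacement.
    return curvature and (exact or approx)
```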
q266738
_secant
test
def _secant(val_a, val_b): """Returns the secant interpolation for the minimum. The secant method is a technique for finding roots of nonlinear functions. When finding the minimum, one applies the secant method to the derivative of the function. For an arbitrary function and a bounding interval, the secant approximation can produce the next point which is outside the bounding interval. However, with the assumption of the opposite slope condition on the interval [a,b] the new point c is always bracketed by [a,b]. Note that by assumption, f'(a) < 0 and f'(b) > 0. Hence c is a weighted average of a and b and thus always in [a, b]. Args: val_a: A namedtuple with the left end point, function value and derivative,
python
{ "resource": "" }
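The secant estimate itself is one line. Because `df(a) < 0 < df(b)` under the opposite slope conditions, the result is a convex combination of `a` and `b`, so it cannot leave the interval (scalar sketch only):

```python
def secant(a, dfa, b, dfb):
    """Secant-method root estimate for the derivative on [a, b]."""
    return (a * dfb - b * dfa) / (dfb - dfa)
```

For a quadratic objective the derivative is linear, so the secant step lands exactly on the minimum.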
q266739
make_simple_step_size_update_policy
test
def make_simple_step_size_update_policy(num_adaptation_steps, target_rate=0.75, decrement_multiplier=0.01, increment_multiplier=0.01, step_counter=None): """Create a function implementing a step-size update policy. The simple policy increases or decreases the `step_size_var` based on the average of `exp(minimum(0., log_accept_ratio))`. It is based on [Section 4.2 of Andrieu and Thoms (2008)]( https://people.eecs.berkeley.edu/~jordan/sail/readings/andrieu-thoms.pdf). The `num_adaptation_steps` argument is set independently of any burnin for the overall chain. In general, adaptation prevents the chain from reaching a stationary distribution, so obtaining consistent samples requires `num_adaptation_steps` be set to a value [somewhat smaller]( http://andrewgelman.com/2017/12/15/burn-vs-warm-iterative-simulation-algorithms/#comment-627745) than the number of burnin steps. However, it may sometimes be helpful to set `num_adaptation_steps` to a larger value during development in order to inspect the behavior of the chain during adaptation. Args: num_adaptation_steps: Scalar `int` `Tensor` number of initial steps during which to adjust the step size. This may be greater than, less than, or equal to the number of burnin steps. If `None`, the step size is adapted on every step (note this breaks stationarity of the chain!). target_rate: Scalar `Tensor` representing desired `accept_ratio`. Default value: `0.75` (i.e., [center of asymptotically optimal rate](https://arxiv.org/abs/1411.6669)). decrement_multiplier: `Tensor` representing amount to downscale current `step_size`. Default value: `0.01`. increment_multiplier: `Tensor` representing amount to upscale current `step_size`. Default value: `0.01`. step_counter: Scalar `int` `Variable` specifying the current step. The step size is adapted iff `step_counter < num_adaptation_steps`. Default value: if `None`, an internal variable `step_size_adaptation_step_counter` is created and initialized to `-1`. 
Returns: step_size_simple_update_fn: Callable that takes args `step_size_var, kernel_results` and returns updated step size(s). """ if step_counter is None and num_adaptation_steps is not None: step_counter = tf.compat.v1.get_variable( name='step_size_adaptation_step_counter', initializer=np.array(-1, dtype=np.int32), # Specify the dtype for variable sharing to work correctly # (b/120599991). dtype=tf.int32, trainable=False, use_resource=True) def step_size_simple_update_fn(step_size_var, kernel_results): """Updates (list of) `step_size` using a standard adaptive MCMC procedure. Args: step_size_var: (List of) `tf.Variable`s representing the per `state_part` HMC `step_size`. kernel_results: `collections.namedtuple` containing `Tensor`s representing values from most recent call to `one_step`. Returns: step_size_assign:
python
{ "resource": "" }
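Stripped of TF variables and batching, the policy amounts to: recover the acceptance probability from the log accept ratio, then nudge the step size up or down around the target rate. A hypothetical scalar sketch of one adaptation step:

```python
import math

def simple_step_size_update(step_size, log_accept_ratio, target_rate=0.75,
                            decrement_multiplier=0.01, increment_multiplier=0.01):
    """One adaptation step: grow on high acceptance, shrink on low acceptance."""
    accept_prob = math.exp(min(0.0, log_accept_ratio))
    if accept_prob > target_rate:
        return step_size * (1.0 + increment_multiplier)
    return step_size / (1.0 + decrement_multiplier)
```

Multiplicative increments and divisive decrements keep the step size positive without any explicit clipping.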
q266740
_leapfrog_integrator_one_step
test
def _leapfrog_integrator_one_step( target_log_prob_fn, independent_chain_ndims, step_sizes, current_momentum_parts, current_state_parts, current_target_log_prob, current_target_log_prob_grad_parts, state_gradients_are_stopped=False, name=None): """Applies `num_leapfrog_steps` of the leapfrog integrator. Assumes a simple quadratic kinetic energy function: `0.5 ||momentum||**2`. #### Examples: ##### Simple quadratic potential. ```python import matplotlib.pyplot as plt %matplotlib inline import numpy as np import tensorflow as tf from tensorflow_probability.python.mcmc.hmc import _leapfrog_integrator_one_step # pylint: disable=line-too-long tfd = tfp.distributions dims = 10 num_iter = int(1e3) dtype = np.float32 position = tf.placeholder(np.float32) momentum = tf.placeholder(np.float32) target_log_prob_fn = tfd.MultivariateNormalDiag( loc=tf.zeros(dims, dtype)).log_prob def _leapfrog_one_step(*args): # Closure representing computation done during each leapfrog step. return _leapfrog_integrator_one_step( target_log_prob_fn=target_log_prob_fn, independent_chain_ndims=0, step_sizes=[0.1], current_momentum_parts=args[0], current_state_parts=args[1], current_target_log_prob=args[2], current_target_log_prob_grad_parts=args[3]) # Do leapfrog integration. [ [next_momentum], [next_position], next_target_log_prob, next_target_log_prob_grad_parts, ] = tf.while_loop( cond=lambda *args: True, body=_leapfrog_one_step, loop_vars=[ [momentum], [position], target_log_prob_fn(position), tf.gradients(target_log_prob_fn(position), position), ], maximum_iterations=3) momentum_ = np.random.randn(dims).astype(dtype) position_ = np.random.randn(dims).astype(dtype) positions = np.zeros([num_iter, dims], dtype) with tf.Session() as sess: for i in xrange(num_iter): position_, momentum_ = sess.run( [next_momentum, next_position], feed_dict={position: position_, momentum: momentum_}) positions[i] = position_ plt.plot(positions[:, 0]); # Sinusoidal. 
``` Args: target_log_prob_fn: Python callable which takes an argument like `*current_state_parts` and returns its (possibly unnormalized) log-density under the target distribution. independent_chain_ndims: Scalar `int` `Tensor` representing the number of leftmost `Tensor` dimensions which index independent chains. step_sizes: Python `list` of `Tensor`s representing the step size for the leapfrog integrator. Must broadcast with the shape of `current_state_parts`. Larger step sizes lead to faster progress, but too-large step sizes make rejection exponentially more likely. When possible, it's often helpful to match per-variable step sizes to the standard deviations of the target distribution in each variable. current_momentum_parts: Tensor containing the value(s) of the momentum variable(s) to update. current_state_parts: Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `independent_chain_ndims` of the `Tensor`(s) index different chains. current_target_log_prob: `Tensor` representing the value of `target_log_prob_fn(*current_state_parts)`. The only reason to specify this argument is to reduce TF graph size. current_target_log_prob_grad_parts: Python list of `Tensor`s representing gradient of `target_log_prob_fn(*current_state_parts`) wrt `current_state_parts`. Must have same shape as `current_state_parts`. The only reason to specify this argument is to reduce TF graph size. state_gradients_are_stopped: Python `bool` indicating that the proposed new state be run through `tf.stop_gradient`. This is particularly useful when combining optimization over samples from the HMC chain. Default value: `False` (i.e., do not apply `stop_gradient`). name: Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., 'hmc_leapfrog_integrator'). Returns: proposed_momentum_parts: Updated value of the momentum. 
proposed_state_parts: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) at each result step. Has same shape as input `current_state_parts`. proposed_target_log_prob: `Tensor` representing the value of `target_log_prob_fn` at `next_state`. proposed_target_log_prob_grad_parts: Gradient of `proposed_target_log_prob` wrt `next_state`. Raises: ValueError: if `len(momentum_parts) != len(state_parts)`. ValueError: if `len(state_parts) != len(step_sizes)`. ValueError: if `len(state_parts) != len(grads_target_log_prob)`. TypeError: if `not target_log_prob.dtype.is_floating`. """ # Note on per-variable step sizes: # # Using per-variable step sizes is equivalent to using the same step # size for all variables and adding a diagonal mass matrix in the # kinetic energy term of the Hamiltonian being integrated. This is # hinted at by Neal (2011) but not derived in detail there. # # Let x and v be position and momentum variables respectively. # Let g(x) be the gradient of `target_log_prob_fn(x)`. # Let S be a diagonal matrix of per-variable step sizes. # Let the Hamiltonian H(x, v) = -target_log_prob_fn(x) + 0.5 * ||v||**2. # # Using per-variable step sizes gives the updates # v' = v + 0.5 * matmul(S, g(x)) # x'' = x + matmul(S, v') # v'' = v' + 0.5 * matmul(S, g(x'')) # # Let u = matmul(inv(S), v). # Multiplying v by inv(S) in the updates above gives the transformed dynamics # u' = matmul(inv(S), v') = matmul(inv(S), v) + 0.5 * g(x) # = u + 0.5 * g(x) # x'' = x + matmul(S, v') = x + matmul(S**2, u') # u'' = matmul(inv(S), v'') = matmul(inv(S), v') + 0.5 * g(x'') # = u' + 0.5 * g(x'') # # These are exactly the leapfrog updates for the Hamiltonian # H'(x, u) = -target_log_prob_fn(x) + 0.5 * u^T S**2 u #
python
{ "resource": "" }
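For the quadratic kinetic energy assumed here, one leapfrog step is a half kick, a drift, and another half kick. A scalar plain-Python sketch (not the multi-part TFP integrator); approximate energy conservation on a standard Gaussian target makes a convenient sanity check:

```python
def leapfrog_step(grad_log_prob, position, momentum, step_size):
    """One step for H(x, v) = -log_prob(x) + 0.5 * v**2 (scalar sketch)."""
    momentum = momentum + 0.5 * step_size * grad_log_prob(position)  # Half kick.
    position = position + step_size * momentum                       # Drift.
    momentum = momentum + 0.5 * step_size * grad_log_prob(position)  # Half kick.
    return position, momentum
```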
q266741
_compute_log_acceptance_correction
test
def _compute_log_acceptance_correction(current_momentums, proposed_momentums, independent_chain_ndims, name=None): """Helper to `kernel` which computes the log acceptance-correction. A sufficient but not necessary condition for the existence of a stationary distribution, `p(x)`, is "detailed balance", i.e.: ```none p(x'|x) p(x) = p(x|x') p(x') ``` In the Metropolis-Hastings algorithm, a state is proposed according to `g(x'|x)` and accepted according to `a(x'|x)`, hence `p(x'|x) = g(x'|x) a(x'|x)`. Inserting this into the detailed balance equation implies: ```none g(x'|x) a(x'|x) p(x) = g(x|x') a(x|x') p(x') ==> a(x'|x) / a(x|x') = p(x') / p(x) [g(x|x') / g(x'|x)] (*) ``` One definition of `a(x'|x)` which satisfies (*) is: ```none a(x'|x) = min(1, p(x') / p(x) [g(x|x') / g(x'|x)]) ``` (To see that this satisfies (*), notice that under this definition at most one of `a(x'|x)` and `a(x|x')` can be other than one.) We call the bracketed term the "acceptance correction". In the case of UncalibratedHMC, the log acceptance-correction is not the log proposal-ratio. UncalibratedHMC augments the state-space with momentum, z. Assuming a standard Gaussian distribution for momentums, the chain eventually converges to: ```none p([x, z]) propto= target_prob(x) exp(-0.5 z**2) ``` Relating this back to Metropolis-Hastings parlance, for HMC we have: ```none p([x, z])
python
{ "resource": "" }
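With the Gaussian momentum distribution above, the correction reduces to a difference of kinetic energies. A hedged scalar sketch of that quantity (the real helper handles batched, multi-part `Tensor` states):

```python
def log_acceptance_correction(current_momentums, proposed_momentums):
    """0.5 * (||current||**2 - ||proposed||**2): the momentum term of the
    Metropolis-Hastings log accept ratio under a standard Gaussian momentum."""
    sum_sq = lambda parts: sum(m * m for m in parts)
    return 0.5 * (sum_sq(current_momentums) - sum_sq(proposed_momentums))
```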
q266742
HamiltonianMonteCarlo.one_step
test
def one_step(self, current_state, previous_kernel_results): """Runs one iteration of Hamiltonian Monte Carlo. Args: current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`. previous_kernel_results: `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function.) Returns: next_state: Tensor or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as `current_state`. kernel_results:
python
{ "resource": "" }
q266743
HamiltonianMonteCarlo.bootstrap_results
test
def bootstrap_results(self, init_state): """Creates initial `previous_kernel_results` using a supplied `state`.""" kernel_results = self._impl.bootstrap_results(init_state) if self.step_size_update_fn is not None: step_size_assign = self.step_size_update_fn(self.step_size, None) # pylint: disable=not-callable
python
{ "resource": "" }
q266744
bayesian_resnet
test
def bayesian_resnet(input_shape, num_classes=10, kernel_posterior_scale_mean=-9.0, kernel_posterior_scale_stddev=0.1, kernel_posterior_scale_constraint=0.2): """Constructs a ResNet18 model. Args: input_shape: A `tuple` indicating the Tensor shape. num_classes: `int` representing the number of class labels. kernel_posterior_scale_mean: Python `float` number for the kernel posterior's scale (log variance) mean. The smaller the mean, the closer the initialization is to a deterministic network. kernel_posterior_scale_stddev: Python `float` number for the initial kernel posterior's scale stddev. ``` q(W|x) ~ N(mu, var), log_var ~ N(kernel_posterior_scale_mean, kernel_posterior_scale_stddev) ``` kernel_posterior_scale_constraint: Python `float` number for the log value to constrain the log variance throughout training.
python
{ "resource": "" }
q266745
_resnet_block
test
def _resnet_block(x, filters, kernel, stride, kernel_posterior_fn): """Network block for ResNet.""" x = tf.keras.layers.BatchNormalization()(x) x = tf.keras.layers.Activation('relu')(x) if stride != 1 or filters != x.shape[1]: shortcut = _projection_shortcut(x, filters, stride, kernel_posterior_fn) else: shortcut = x x = tfp.layers.Convolution2DFlipout( filters, kernel, strides=stride,
python
{ "resource": "" }
q266746
make_encoder
test
def make_encoder(activation, num_topics, layer_sizes): """Create the encoder function. Args: activation: Activation function to use. num_topics: The number of topics. layer_sizes: The number of hidden units per layer in the encoder. Returns: encoder: A `callable` mapping a bag-of-words `Tensor` to a `tfd.Distribution` instance over topics. """ encoder_net = tf.keras.Sequential() for num_hidden_units in layer_sizes: encoder_net.add( tf.keras.layers.Dense( num_hidden_units, activation=activation, kernel_initializer=tf.compat.v1.glorot_normal_initializer())) encoder_net.add( tf.keras.layers.Dense(
python
{ "resource": "" }
q266747
make_decoder
test
def make_decoder(num_topics, num_words): """Create the decoder function. Args: num_topics: The number of topics. num_words: The number of words. Returns: decoder: A `callable` mapping a `Tensor` of encodings to a `tfd.Distribution` instance over words. """ topics_words_logits = tf.compat.v1.get_variable( "topics_words_logits", shape=[num_topics, num_words], initializer=tf.compat.v1.glorot_normal_initializer()) topics_words = tf.nn.softmax(topics_words_logits, axis=-1) def decoder(topics): word_probs = tf.matmul(topics, topics_words)
python
{ "resource": "" }
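The decoder's matmul mixes per-topic word distributions by the document's topic weights. A dependency-free sketch of that mixture with toy shapes (illustrative names, not the TF code):

```python
def decode_word_probs(topic_mixture, topics_words):
    """Word probabilities: topic_mixture (length k) times the k-by-v
    row-stochastic matrix topics_words."""
    num_words = len(topics_words[0])
    return [sum(weight * row[w] for weight, row in zip(topic_mixture, topics_words))
            for w in range(num_words)]
```

Because each row of `topics_words` sums to 1 and the mixture sums to 1, the output is itself a probability distribution over words.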
q266748
make_prior
test
def make_prior(num_topics, initial_value): """Create the prior distribution. Args: num_topics: Number of topics. initial_value: The starting value for the prior parameters. Returns: prior: A `callable` that returns a `tf.distribution.Distribution` instance, the prior distribution. prior_variables: A `list` of `Variable` objects, the trainable parameters of the prior. """ def _softplus_inverse(x): return np.log(np.expm1(x)) logit_concentration = tf.compat.v1.get_variable(
python
{ "resource": "" }
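The helper `_softplus_inverse` maps an initial positive concentration back to unconstrained space, so that applying `softplus` to the variable reproduces the intended starting value. A standalone sketch of that round trip:

```python
import math

def softplus(x):
    """log(1 + exp(x)); maps reals to positive reals."""
    return math.log1p(math.exp(x))

def softplus_inverse(x):
    """log(exp(x) - 1); the exact inverse of softplus for x > 0."""
    return math.log(math.expm1(x))
```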
q266749
sample_chain
test
def sample_chain( num_results, current_state, previous_kernel_results=None, kernel=None, num_burnin_steps=0, num_steps_between_results=0, trace_fn=lambda current_state, kernel_results: kernel_results, return_final_kernel_results=False, parallel_iterations=10, name=None, ): """Implements Markov chain Monte Carlo via repeated `TransitionKernel` steps. This function samples from an Markov chain at `current_state` and whose stationary distribution is governed by the supplied `TransitionKernel` instance (`kernel`). This function can sample from multiple chains, in parallel. (Whether or not there are multiple chains is dictated by the `kernel`.) The `current_state` can be represented as a single `Tensor` or a `list` of `Tensors` which collectively represent the current state. Since MCMC states are correlated, it is sometimes desirable to produce additional intermediate states, and then discard them, ending up with a set of states with decreased autocorrelation. See [Owen (2017)][1]. Such "thinning" is made possible by setting `num_steps_between_results > 0`. The chain then takes `num_steps_between_results` extra steps between the steps that make it into the results. The extra steps are never materialized (in calls to `sess.run`), and thus do not increase memory requirements. Warning: when setting a `seed` in the `kernel`, ensure that `sample_chain`'s `parallel_iterations=1`, otherwise results will not be reproducible. In addition to returning the chain state, this function supports tracing of auxiliary variables used by the kernel. The traced values are selected by specifying `trace_fn`. By default, all kernel results are traced but in the future the default will be changed to no results being traced, so plan accordingly. See below for some examples of this feature. Args: num_results: Integer number of Markov chain draws. current_state: `Tensor` or Python `list` of `Tensor`s representing the current state(s) of the Markov chain(s). 
previous_kernel_results: A `Tensor` or a nested collection of `Tensor`s representing internal calculations made within the previous call to this function (or as returned by `bootstrap_results`). kernel: An instance of `tfp.mcmc.TransitionKernel` which implements one step of the Markov chain. num_burnin_steps: Integer number of chain steps to take before starting to collect results. Default value: 0 (i.e., no burn-in). num_steps_between_results: Integer number of chain steps between collecting a result. Only one out of every `num_steps_between_results + 1` steps is included in the returned results. The number of returned chain states is still equal to `num_results`. Default value: 0 (i.e., no thinning). trace_fn: A callable that takes in the current chain state and the previous kernel results and returns a `Tensor` or a nested collection of `Tensor`s that is then traced along with the chain state. return_final_kernel_results: If `True`, then the final kernel results are returned alongside the chain state and the trace specified by the `trace_fn`. parallel_iterations: The number of iterations allowed to run in parallel. It must be a positive integer. See `tf.while_loop` for more details. name: Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., "mcmc_sample_chain"). Returns: checkpointable_states_and_trace: if `return_final_kernel_results` is `True`. The return value is an instance of `CheckpointableStatesAndTrace`. all_states: if `return_final_kernel_results` is `False` and `trace_fn` is `None`. The return value is a `Tensor` or Python list of `Tensor`s representing the state(s) of the Markov chain(s) at each result step. Has same shape as input `current_state` but with a prepended `num_results`-size dimension. states_and_trace: if `return_final_kernel_results` is `False` and `trace_fn` is not `None`. The return value is an instance of `StatesAndTrace`. #### Examples ##### Sample from a diagonal-variance Gaussian. 
I.e., ```none for i=1..n: x[i] ~ MultivariateNormal(loc=0, scale=diag(true_stddev)) # likelihood ``` ```python import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions dims = 10 true_stddev = np.sqrt(np.linspace(1., 3., dims)) likelihood = tfd.MultivariateNormalDiag(loc=0., scale_diag=true_stddev) states = tfp.mcmc.sample_chain( num_results=1000, num_burnin_steps=500, current_state=tf.zeros(dims), kernel=tfp.mcmc.HamiltonianMonteCarlo( target_log_prob_fn=likelihood.log_prob, step_size=0.5, num_leapfrog_steps=2), trace_fn=None) sample_mean = tf.reduce_mean(states, axis=0) # ==> approx all zeros sample_stddev = tf.sqrt(tf.reduce_mean( tf.squared_difference(states, sample_mean), axis=0)) # ==> approx equal true_stddev ``` ##### Sampling from factor-analysis posteriors with known factors. I.e., ```none # prior w ~ MultivariateNormal(loc=0, scale=eye(d)) for i=1..n: # likelihood x[i] ~ Normal(loc=w^T F[i], scale=1) ``` where `F` denotes factors. ```python import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions # Specify model. def make_prior(dims): return tfd.MultivariateNormalDiag( loc=tf.zeros(dims)) def make_likelihood(weights, factors): return tfd.MultivariateNormalDiag( loc=tf.matmul(weights, factors, adjoint_b=True)) def joint_log_prob(num_weights, factors, x, w): return (make_prior(num_weights).log_prob(w) + make_likelihood(w, factors).log_prob(x)) def unnormalized_log_posterior(w): # Posterior is proportional to: `p(W, X=x | factors)`. return joint_log_prob(num_weights, factors, x, w) # Setup data. num_weights = 10 # == d num_factors = 40 # == n num_chains = 100 weights = make_prior(num_weights).sample(1) factors = tf.random_normal([num_factors, num_weights]) x = make_likelihood(weights, factors).sample() # Sample from Hamiltonian Monte Carlo Markov Chain. # Get `num_results` samples from `num_chains` independent chains. 
chains_states, kernels_results = tfp.mcmc.sample_chain( num_results=1000, num_burnin_steps=500, current_state=tf.zeros([num_chains, num_weights], name='init_weights'), kernel=tfp.mcmc.HamiltonianMonteCarlo( target_log_prob_fn=unnormalized_log_posterior, step_size=0.1, num_leapfrog_steps=2)) # Compute sample stats. sample_mean = tf.reduce_mean(chains_states, axis=[0, 1]) # ==> approx equal to weights sample_var = tf.reduce_mean( tf.squared_difference(chains_states, sample_mean), axis=[0, 1]) # ==> less than 1 ``` ##### Custom tracing functions. ```python import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions likelihood = tfd.Normal(loc=0., scale=1.) def sample_chain(trace_fn): return tfp.mcmc.sample_chain( num_results=1000, num_burnin_steps=500, current_state=0., kernel=tfp.mcmc.HamiltonianMonteCarlo(
python
{ "resource": "" }
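The thinning behaviour described above (keep one of every `num_steps_between_results + 1` steps) can be pictured with a plain-Python slice; the real sampler never materializes the discarded intermediate states:

```python
def thin_chain(states, num_steps_between_results):
    """Keeps every (num_steps_between_results + 1)-th state, starting at index 0."""
    return states[::num_steps_between_results + 1]
```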
q266750
deep_exponential_family
test
def deep_exponential_family(data_size, feature_size, units, shape): """A multi-layered topic model over a documents-by-terms matrix.""" w2 = ed.Gamma(0.1, 0.3, sample_shape=[units[2], units[1]], name="w2") w1 = ed.Gamma(0.1, 0.3, sample_shape=[units[1], units[0]], name="w1") w0 = ed.Gamma(0.1, 0.3, sample_shape=[units[0], feature_size], name="w0") z2 = ed.Gamma(0.1, 0.1,
python
{ "resource": "" }
q266751
trainable_positive_deterministic
test
def trainable_positive_deterministic(shape, min_loc=1e-3, name=None): """Learnable Deterministic distribution over positive reals.""" with tf.compat.v1.variable_scope( None, default_name="trainable_positive_deterministic"):
python
{ "resource": "" }
q266752
trainable_gamma
test
def trainable_gamma(shape, min_concentration=1e-3, min_scale=1e-5, name=None): """Learnable Gamma via concentration and scale parameterization.""" with tf.compat.v1.variable_scope(None, default_name="trainable_gamma"): unconstrained_concentration = tf.compat.v1.get_variable( "unconstrained_concentration", shape, initializer=tf.compat.v1.initializers.random_normal( mean=0.5, stddev=0.1)) unconstrained_scale = tf.compat.v1.get_variable( "unconstrained_scale",
python
{ "resource": "" }
q266753
load_nips2011_papers
test
def load_nips2011_papers(path): """Loads NIPS 2011 conference papers. The NIPS 1987-2015 data set is in the form of a 11,463 x 5,812 matrix of per-paper word counts, containing 11,463 words and 5,811 NIPS conference papers (Perrone et al., 2016). We subset to papers in 2011 and words appearing in at least two documents and having a total word count of at least 10. Built from the Observations Python package. Args: path: str. Path to directory which either stores file or otherwise file will be downloaded and extracted there. Filename is `NIPS_1987-2015.csv`. Returns: bag_of_words: np.ndarray of shape [num_documents, num_words]. Each element denotes the number of occurrences of a specific word in a specific document. words: List of strings, denoting the words for `bag_of_words`'s columns. """ path = os.path.expanduser(path) filename = "NIPS_1987-2015.csv" filepath = os.path.join(path, filename) if not os.path.exists(filepath): url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/" "00371/NIPS_1987-2015.csv") if not tf.io.gfile.exists(path): tf.io.gfile.makedirs(path) print("Downloading %s to %s" % (url, filepath))
python
{ "resource": "" }
q266754
_AmplitudeLengthScaleMixin._init_params
test
def _init_params(self, amplitude, length_scale, validate_args): """Shared init logic for `amplitude` and `length_scale` params. Args: amplitude: `Tensor` (or convertible) or `None` to convert, validate. length_scale: `Tensor` (or convertible) or `None` to convert, validate. validate_args: If `True`, parameters are checked for validity despite possibly degrading runtime performance. Returns: dtype: The common `DType` of the parameters. """ dtype = util.maybe_get_common_dtype( [amplitude, length_scale]) if amplitude is not None: amplitude = tf.convert_to_tensor( value=amplitude, name='amplitude', dtype=dtype) self._amplitude = _validate_arg_if_not_none(
python
{ "resource": "" }
q266755
_registered_kl
test
def _registered_kl(type_a, type_b): """Get the KL function registered for classes a and b.""" hierarchy_a = tf_inspect.getmro(type_a) hierarchy_b = tf_inspect.getmro(type_b) dist_to_children = None kl_fn = None for mro_to_a, parent_a in enumerate(hierarchy_a):
python
{ "resource": "" }
q266756
read_image
test
def read_image(filepath): """Returns an image tensor.""" im_bytes = tf.io.read_file(filepath) im =
python
{ "resource": "" }
q266757
download_sprites
test
def download_sprites(): """Downloads the sprites data and returns the saved filepath.""" filepath = os.path.join(FLAGS.data_dir, DATA_SPRITES_DIR) if not tf.io.gfile.exists(filepath): if not tf.io.gfile.exists(FLAGS.data_dir): tf.io.gfile.makedirs(FLAGS.data_dir) zip_name = "{}.zip".format(filepath)
python
{ "resource": "" }
q266758
create_character
test
def create_character(skin, hair, top, pants): """Creates a character sprite from a set of attribute sprites.""" dtype = skin.dtype hair_mask = tf.cast(hair[..., -1:] <= 0, dtype) top_mask = tf.cast(top[..., -1:] <= 0, dtype) pants_mask = tf.cast(pants[...,
python
{ "resource": "" }
q266759
create_seq
test
def create_seq(character, action_metadata, direction, length=8, start=0): """Creates a sequence. Args: character: A character sprite tensor. action_metadata: An action metadata tuple. direction: An integer representing the direction, i.e., the row offset within each action group corresponding to a particular direction. length: Desired length of the sequence. If this is longer than the number of available frames, it will roll over to the beginning. start: Index of possible frames at which to start the sequence. Returns: A sequence tensor. """ sprite_start = (action_metadata[0]+direction) * FRAME_SIZE sprite_end = (action_metadata[0]+direction+1) * FRAME_SIZE sprite_line = character[sprite_start:sprite_end, ...] # Extract 64x64 patches that are side-by-side in the sprite, and limit # to the actual number of frames for the given action. frames = tf.stack(tf.split(sprite_line, 13, axis=1)) # 13 is
python
{ "resource": "" }
q266760
create_random_seq
test
def create_random_seq(character, action_metadata, direction, length=8): """Creates a random sequence.""" start =
python
{ "resource": "" }
q266761
create_sprites_dataset
test
def create_sprites_dataset(characters, actions, directions, channels=3, length=8, shuffle=False, fake_data=False): """Creates a tf.data pipeline for the sprites dataset. Args: characters: A list of (skin, hair, top, pants) tuples containing relative paths to the sprite png image for each attribute. actions: A list of Actions. directions: A list of Directions. channels: Number of image channels to yield. length: Desired length of the sequences. shuffle: Whether or not to shuffle the characters and sequences start frame. fake_data: Boolean for whether or not to yield synthetic data. Returns: A tf.data.Dataset yielding (seq, skin label index, hair label index, top label index, pants label index, action label index, skin label name, hair label_name, top label name, pants label name, action label name) tuples. """ if fake_data: dummy_image = tf.random.normal([HEIGHT, WIDTH, CHANNELS]) else: basedir = download_sprites() action_names = [action.name for action in actions] action_metadata = [(action.start_row, action.frames) for action in actions] direction_rows = [direction.row_offset for direction in directions] chars = tf.data.Dataset.from_tensor_slices(characters) act_names = tf.data.Dataset.from_tensor_slices(action_names).repeat() acts_metadata = tf.data.Dataset.from_tensor_slices(action_metadata).repeat() dir_rows = tf.data.Dataset.from_tensor_slices(direction_rows).repeat() if shuffle: chars = chars.shuffle(len(characters)) dataset = tf.data.Dataset.zip((chars, act_names, acts_metadata, dir_rows)) skin_table = tf.contrib.lookup.index_table_from_tensor(sorted(SKIN_COLORS)) hair_table = tf.contrib.lookup.index_table_from_tensor(sorted(HAIRSTYLES)) top_table = tf.contrib.lookup.index_table_from_tensor(sorted(TOPS)) pants_table = tf.contrib.lookup.index_table_from_tensor(sorted(PANTS)) action_table = tf.contrib.lookup.index_table_from_tensor(sorted(action_names)) def process_example(attrs, act_name, act_metadata, dir_row_offset):
python
{ "resource": "" }
q266762
_maybe_validate_distributions
test
def _maybe_validate_distributions(distributions, dtype_override, validate_args): """Checks that `distributions` satisfies all assumptions.""" assertions = [] if not _is_iterable(distributions) or not distributions: raise ValueError('`distributions` must be a list of one or more ' 'distributions.') if dtype_override is None: dts = [ dtype_util.base_dtype(d.dtype) for d in distributions if d.dtype is not None ] if dts[1:] != dts[:-1]: raise TypeError('Distributions must have same dtype; found: {}.'.format( set(dtype_util.name(dt) for dt in dts))) # Validate event_ndims. for d in distributions: if tensorshape_util.rank(d.event_shape) is not None: if tensorshape_util.rank(d.event_shape) != 1: raise ValueError('`Distribution` must be vector variate, ' 'found event ndims: {}.'.format( tensorshape_util.rank(d.event_shape))) elif validate_args: assertions.append( assert_util.assert_equal( 1, tf.size(input=d.event_shape_tensor()), message='`Distribution` must be vector variate.')) batch_shapes = [d.batch_shape for d in distributions] if all(tensorshape_util.is_fully_defined(b) for b in batch_shapes): if
python
{ "resource": "" }
q266763
_flatten_summand_list
test
def _flatten_summand_list(kernels): """Flatten a list of kernels which may contain _SumKernel instances. Args: kernels: Python list of `PositiveSemidefiniteKernel` instances Returns: Python list containing the elements of kernels, with any _SumKernel instances
python
{ "resource": "" }
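A minimal sketch of the one-level flattening described above; the `SumKernel` class here is a hypothetical stand-in for the library's private `_SumKernel`:

```python
class SumKernel:
    """Hypothetical stand-in for the private `_SumKernel` container."""
    def __init__(self, kernels):
        self.kernels = list(kernels)

def flatten_summand_list(kernels):
    # One level of flattening suffices if SumKernel instances are always
    # constructed from already-flattened lists.
    flat = []
    for k in kernels:
        if isinstance(k, SumKernel):
            flat.extend(k.kernels)
        else:
            flat.append(k)
    return flat

result = flatten_summand_list([SumKernel(['k1', 'k2']), 'k3'])
# → ['k1', 'k2', 'k3']
```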
q266764
_flatten_multiplicand_list
test
def _flatten_multiplicand_list(kernels): """Flatten a list of kernels which may contain _ProductKernel instances. Args: kernels: Python list of `PositiveSemidefiniteKernel` instances Returns: Python list containing the elements of kernels, with any _ProductKernel
python
{ "resource": "" }
q266765
build_fake_data
test
def build_fake_data(): """Build fake CIFAR10-style data for unit testing.""" num_examples = 10 x_train = np.random.rand(num_examples, *IMAGE_SHAPE).astype(np.float32) y_train
python
{ "resource": "" }
q266766
count_integers
test
def count_integers(arr, weights=None, minlength=None, maxlength=None, axis=None, dtype=tf.int32, name=None): """Counts the number of occurrences of each value in an integer array `arr`. Works like `tf.math.bincount`, but provides an `axis` kwarg that specifies dimensions to reduce over. With `~axis = [i for i in range(arr.ndim) if i not in axis]`, this function returns a `Tensor` of shape `[K] + arr.shape[~axis]`. If `minlength` and `maxlength` are not given, `K = tf.reduce_max(arr) + 1` if `arr` is non-empty, and 0 otherwise. If `weights` are non-None, then index `i` of the output stores the sum of the value in `weights` at each index where the corresponding value in `arr` is `i`. Args: arr: An `int32` `Tensor` of non-negative values. weights: If non-None, must be the same shape as arr. For each value in `arr`, the bin will be incremented by the corresponding weight instead of 1. minlength: If given, ensures the output has length at least `minlength`, padding with zeros at the end if necessary. maxlength: If given, skips values in `arr` that are equal or greater than `maxlength`, ensuring that the output has length at most `maxlength`. axis: A `0-D` or `1-D` `int32` `Tensor` (with static values) designating dimensions in `arr` to reduce over. `Default value:` `None`, meaning reduce over all dimensions. dtype: If `weights` is None, determines the type of the output bins. name: A name scope for the associated operations (optional). Returns: A vector with the same dtype as `weights` or the given `dtype`. The bin values. """ with tf.compat.v1.name_scope( name, 'count_integers', values=[arr, weights, minlength, maxlength, axis]): if axis is None: return tf.math.bincount( arr, weights=weights, minlength=minlength, maxlength=maxlength, dtype=dtype) arr = tf.convert_to_tensor(value=arr, dtype=tf.int32, name='arr') arr_ndims = _get_static_ndims(arr, expect_static=True) axis = _make_static_axis_non_negative_list(axis, arr_ndims) # ~axis from docstring. 
Dims in arr that are not in axis. not_axis = sorted(set(range(arr_ndims)).difference(axis)) # If we're reducing over everything, just use standard bincount. if not not_axis: return tf.math.bincount( arr, weights=weights, minlength=minlength, maxlength=maxlength, dtype=dtype) # Move dims in ~axis to the left, so we can tf.map_fn bincount over them, # Producing counts for every index I in ~axis. # Thus, flat_arr is not totally
python
{ "resource": "" }
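The axis-aware counting can be sketched in NumPy (a hypothetical minimal analogue named `bincount_over_axis`; it ignores `weights` and the `minlength`/`maxlength` handling): move the kept dims left, flatten the counted dims, and bincount each slice.

```python
import numpy as np

def bincount_over_axis(arr, axis, num_bins):
    """Count occurrences of 0..num_bins-1 over `axis`, keeping other dims."""
    arr = np.asarray(arr)
    axis = [a % arr.ndim for a in axis]
    other = [d for d in range(arr.ndim) if d not in axis]
    # Move the kept ("event") dims left, flatten the counted dims into one.
    flat = np.transpose(arr, other + axis).reshape(
        [arr.shape[d] for d in other] + [-1])
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=num_bins), -1, flat)
    # Match count_integers' convention: bins indexed by the leftmost dim.
    return np.moveaxis(counts, -1, 0)

x = np.array([[0, 1, 1],
              [2, 2, 0]])
counts = bincount_over_axis(x, axis=[1], num_bins=3)
# counts[k, i] = number of times value k appears in row i of x.
```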
q266767
find_bins
test
def find_bins(x, edges, extend_lower_interval=False, extend_upper_interval=False, dtype=None, name=None): """Bin values into discrete intervals. Given `edges = [c0, ..., cK]`, defining intervals `I0 = [c0, c1)`, `I1 = [c1, c2)`, ..., `I_{K-1} = [c_{K-1}, cK]`, this function returns `bins`, such that: `edges[bins[i]] <= x[i] < edges[bins[i] + 1]`. Args: x: Numeric `N-D` `Tensor` with `N > 0`. edges: `Tensor` of same `dtype` as `x`. The first dimension indexes edges of intervals. Must either be `1-D` or have `x.shape[1:] == edges.shape[1:]`. If `rank(edges) > 1`, `edges[k]` designates a shape `edges.shape[1:]` `Tensor` of bin edges for the corresponding dimensions of `x`. extend_lower_interval: Python `bool`. If `True`, extend the lowest interval `I0` to `(-inf, c1]`. extend_upper_interval: Python `bool`. If `True`, extend the upper interval `I_{K-1}` to `[c_{K-1}, +inf)`. dtype: The output type (`int32` or `int64`). `Default value:` `x.dtype`. This affects the output values when `x` is below/above the intervals, which will be `-1/K+1` for `int` types and `NaN` for `float`s. At indices where `x` is `NaN`, the output values will be `0` for `int` types and `NaN` for floats. name: A Python string name to prepend to created ops. Default: 'find_bins' Returns: bins: `Tensor` with same `shape` as `x` and `dtype`. Has whole number values. `bins[i] = k` means the `x[i]` falls into the `kth` bin, i.e., `edges[bins[i]] <= x[i] < edges[bins[i] + 1]`. Raises: ValueError: If `edges.shape[0]` is determined to be less than 2. #### Examples Cut a `1-D` array ```python x = [0., 5., 6., 10., 20.] edges = [0., 5., 10.] tfp.stats.find_bins(x, edges) ==> [0., 1., 1., 1., np.nan] ``` Cut `x` into its deciles ```python x = tf.random_uniform(shape=(100, 200)) decile_edges = tfp.stats.quantiles(x, num_quantiles=10) bins = tfp.stats.find_bins(x, edges=decile_edges) bins.shape ==> (100, 200) tf.reduce_mean(bins == 0.) ==> approximately 0.1 tf.reduce_mean(bins == 1.)
==> approximately 0.1 ``` """ # TFP users may be surprised to see the "action" in the leftmost dim of # edges, rather than the rightmost (event) dim. Why? # 1. Most likely you created edges by getting quantiles over samples, and # quantile/percentile return these edges in the leftmost (sample) dim. # 2. Say you have event_shape = [5], then we expect the bin will be different # for all 5 events, so the index of the bin should not be in the event dim. with tf.compat.v1.name_scope( name, default_name='find_bins', values=[x, edges]): in_type = dtype_util.common_dtype([x, edges], preferred_dtype=tf.float32) edges = tf.convert_to_tensor(value=edges, name='edges', dtype=in_type) x = tf.convert_to_tensor(value=x, name='x', dtype=in_type) if (tf.compat.dimension_value(edges.shape[0]) is not None and tf.compat.dimension_value(edges.shape[0]) < 2): raise ValueError( 'First dimension of `edges` must have length > 1 to index 1 or ' 'more bin. Found: {}'.format(edges.shape))
python
{ "resource": "" }
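For 1-D `edges`, the core binning rule can be reproduced with NumPy's `searchsorted` (a sketch that clips out-of-range values rather than reproducing the `-1`/`K`/`NaN` conventions):

```python
import numpy as np

def find_bins_1d(x, edges):
    """bins[i] = k  <=>  edges[k] <= x[i] < edges[k+1] (top edge inclusive)."""
    edges = np.asarray(edges)
    idx = np.searchsorted(edges, x, side='right') - 1
    # Fold x == edges[-1] into the last bin; truly out-of-range values are
    # simply clipped here (the real op returns -1 / K or NaN instead).
    return np.clip(idx, 0, len(edges) - 2)

bins = find_bins_1d([0., 4.9, 5., 6., 10.], edges=[0., 5., 10.])
# → [0, 0, 1, 1, 1]
```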
q266768
histogram
test
def histogram(x, edges, axis=None, extend_lower_interval=False, extend_upper_interval=False, dtype=None, name=None): """Count how often `x` falls in intervals defined by `edges`. Given `edges = [c0, ..., cK]`, defining intervals `I0 = [c0, c1)`, `I1 = [c1, c2)`, ..., `I_{K-1} = [c_{K-1}, cK]`, this function counts how often `x` falls into each interval. Values of `x` outside of the intervals cause errors. Consider using `extend_lower_interval`, `extend_upper_interval` to deal with this. Args: x: Numeric `N-D` `Tensor` with `N > 0`. If `axis` is not `None`, must have statically known number of dimensions. The `axis` kwarg determines which dimensions index iid samples. Other dimensions of `x` index "events" for which we will compute different histograms. edges: `Tensor` of same `dtype` as `x`. The first dimension indexes edges of intervals. Must either be `1-D` or have `edges.shape[1:]` the same as the dimensions of `x` excluding `axis`. If `rank(edges) > 1`, `edges[k]` designates a shape `edges.shape[1:]` `Tensor` of interval edges for the corresponding dimensions of `x`. axis: Optional `0-D` or `1-D` integer `Tensor` with constant values. The axis in `x` that indexes iid samples. `Default value:` `None` (treat every dimension as sample dimension). extend_lower_interval: Python `bool`. If `True`, extend the lowest interval `I0` to `(-inf, c1]`. extend_upper_interval: Python `bool`. If `True`, extend the upper interval `I_{K-1}` to `[c_{K-1}, +inf)`. dtype: The output type (`int32` or `int64`). `Default value:` `x.dtype`. name: A Python string name to prepend to created ops. `Default value:` 'histogram' Returns: counts: `Tensor` of type `dtype` and, with `~axis = [i for i in range(arr.ndim) if i not in axis]`, `counts.shape = [edges.shape[0] - 1] + x.shape[~axis]`. With `I` a multi-index into `~axis`, `counts[k][I]` is the number of times event(s) fell into the `kth` interval of `edges`.
#### Examples ```python # x.shape = [1000, 2] # x[:, 0] ~ Uniform(0, 1), x[:, 1] ~ Uniform(1, 2). x = tf.stack([tf.random_uniform([1000]), 1 + tf.random_uniform([1000])], axis=-1) # edges ==> bins [0, 0.5), [0.5, 1.0), [1.0, 1.5), [1.5, 2.0]. edges = [0., 0.5, 1.0, 1.5, 2.0] tfp.stats.histogram(x, edges) ==> approximately [500, 500, 500, 500] tfp.stats.histogram(x, edges, axis=0) ==> approximately [[500, 500, 0, 0], [0, 0, 500, 500]] ``` """ with tf.compat.v1.name_scope(name, 'histogram', values=[x, edges, axis]): # Tensor conversions. in_dtype = dtype_util.common_dtype([x, edges], preferred_dtype=tf.float32) x = tf.convert_to_tensor(value=x, name='x', dtype=in_dtype) edges = tf.convert_to_tensor(value=edges, name='edges', dtype=in_dtype) # Move dims in axis to the left end as one flattened dim. # After this, x.shape = [n_samples] + E. if axis is None: x = tf.reshape(x, shape=[-1])
python
{ "resource": "" }
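Since a histogram is just a bincount of the bin indices, the 1-D case can be sketched in NumPy (top edge inclusive, out-of-range values clipped here rather than raising):

```python
import numpy as np

def histogram_1d(x, edges):
    """Counts per interval defined by `edges` (a minimal NumPy analogue)."""
    edges = np.asarray(edges)
    bins = np.clip(np.searchsorted(edges, x, side='right') - 1,
                   0, len(edges) - 2)
    return np.bincount(bins, minlength=len(edges) - 1)

counts = histogram_1d([0.1, 0.2, 0.7, 1.2, 1.9],
                      edges=[0., 0.5, 1.0, 1.5, 2.0])
# → [2, 1, 1, 1]
```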
q266769
quantiles
test
def quantiles(x, num_quantiles, axis=None, interpolation=None, keep_dims=False, validate_args=False, name=None): """Compute quantiles of `x` along `axis`. The quantiles of a distribution are cut points dividing the range into intervals with equal probabilities. Given a vector `x` of samples, this function estimates the cut points by returning `num_quantiles + 1` cut points, `(c0, ..., cn)`, such that, roughly speaking, equal number of sample points lie in the `num_quantiles` intervals `[c0, c1), [c1, c2), ..., [c_{n-1}, cn]`. That is, * About `1 / n` fraction of the data lies in `[c_{k-1}, c_k)`, `k = 1, ..., n` * About `k / n` fraction of the data lies below `c_k`. * `c0` is the sample minimum and `cn` is the maximum. The exact number of data points in each interval depends on the size of `x` (e.g. whether the size is divisible by `n`) and the `interpolation` kwarg. Args: x: Numeric `N-D` `Tensor` with `N > 0`. If `axis` is not `None`, `x` must have statically known number of dimensions. num_quantiles: Scalar `integer` `Tensor`. The number of intervals the returned `num_quantiles + 1` cut points divide the range into. axis: Optional `0-D` or `1-D` integer `Tensor` with constant values. The axis that index independent samples over which to return the desired percentile. If `None` (the default), treat every dimension as a sample dimension, returning a scalar. interpolation : {'nearest', 'linear', 'lower', 'higher', 'midpoint'}. Default value: 'nearest'. This specifies the interpolation method to use when the fractions `k / n` lie between two data points `i < j`: * linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j. * lower: `i`. * higher: `j`. * nearest: `i` or `j`, whichever is nearest. * midpoint: (i + j) / 2. `linear` and `midpoint` interpolation do not work with integer dtypes. keep_dims: Python `bool`. 
If `True`, the last dimension is kept with size 1. If `False`, the last dimension is removed from the output shape. validate_args: Whether to add runtime checks of argument validity. If False, and arguments are incorrect, correct behavior is not guaranteed. name: A Python string name to give this `Op`. Default is 'percentile' Returns: cut_points: A `rank(x) + 1 - len(axis)` dimensional `Tensor` with same `dtype` as `x` and shape `[num_quantiles + 1, ...]` where the trailing shape is that of `x` without the dimensions in `axis` (unless `keep_dims is True`) Raises: ValueError: If argument 'interpolation' is not an allowed type. ValueError: If interpolation type not compatible
python
{ "resource": "" }
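For the 1-D case, the cut points match what `np.percentile` produces at evenly spaced probability levels (a rough analogue; interpolation details differ):

```python
import numpy as np

x = np.linspace(0., 1., num=101)  # evenly spread samples
# num_quantiles = 4 → 5 cut points, running from the sample min to the max.
cut_points = np.percentile(x, np.linspace(0., 100., num=5))
# → [0., 0.25, 0.5, 0.75, 1.]
```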
q266770
_get_static_ndims
test
def _get_static_ndims(x, expect_static=False, expect_ndims=None, expect_ndims_no_more_than=None, expect_ndims_at_least=None): """Get static number of dimensions and assert that some expectations are met. This function returns the number of dimensions 'ndims' of x, as a Python int. The optional expect arguments are used to check the ndims of x, but this is only done if the static ndims of x is not None. Args: x: A Tensor. expect_static: Expect `x` to have statically defined `ndims`. expect_ndims: Optional Python integer. If provided, assert that x has number of dimensions equal to this. expect_ndims_no_more_than: Optional Python integer. If provided, assert that x has no more than this many dimensions. expect_ndims_at_least: Optional Python integer. If provided, assert that x has at least this many dimensions. Returns: ndims: A Python integer. Raises: ValueError: If any of the expectations above are violated. """ ndims = x.shape.ndims if ndims is None: shape_const = tf.get_static_value(tf.shape(input=x)) if shape_const is not None: ndims = shape_const.ndim if ndims is None: if expect_static: raise ValueError( 'Expected argument `x` to have statically defined `ndims`. Found: %s' % x)
python
{ "resource": "" }
q266771
_insert_back_keep_dims
test
def _insert_back_keep_dims(x, axis): """Insert the dims in `axis` back as singletons after being removed. Args: x: `Tensor`. axis: Python list of integers. Returns:
python
{ "resource": "" }
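Equivalent NumPy sketch: re-insert each removed dim as a singleton, in ascending order so earlier insertions do not shift later positions:

```python
import numpy as np

def insert_back_keep_dims(x, axis):
    """Re-insert size-1 dims at the (sorted, non-negative) positions in `axis`."""
    for a in sorted(axis):
        x = np.expand_dims(x, a)
    return x

x = np.zeros([5, 7])        # e.g. the result of reducing dims 1 and 3 away
y = insert_back_keep_dims(x, axis=[1, 3])
# y.shape → (5, 1, 7, 1)
```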
q266772
_make_static_axis_non_negative_list
test
def _make_static_axis_non_negative_list(axis, ndims): """Convert possibly negatively indexed axis to non-negative list of ints. Args: axis: Integer Tensor. ndims: Number of dimensions into which axis indexes. Returns: A list of non-negative Python integers. Raises: ValueError: If `axis` is not statically defined. """ axis = distribution_util.make_non_negative_axis(axis, ndims) axis_const = tf.get_static_value(axis) if axis_const is None:
python
{ "resource": "" }
q266773
_move_dims_to_flat_end
test
def _move_dims_to_flat_end(x, axis, x_ndims, right_end=True): """Move dims corresponding to `axis` in `x` to the end, then flatten. Args: x: `Tensor` with shape `[B0,B1,...,Bb]`. axis: Python list of indices into dimensions of `x`. x_ndims: Python integer holding number of dimensions in `x`. right_end: Python bool. Whether to move dims to the right end (else left). Returns: `Tensor` with value from `x` and dims in `axis` moved to end into one single dimension. """ if not axis: return x # Suppose x.shape = [a, b, c, d] # Suppose axis = [1, 3] # other_dims = [0, 2] in example above. other_dims = sorted(set(range(x_ndims)).difference(axis)) # x_permed.shape = [a, c, b, d] perm = other_dims + list(axis) if right_end else list(axis) + other_dims x_permed = tf.transpose(a=x, perm=perm) if x.shape.is_fully_defined(): x_shape = x.shape.as_list()
python
{ "resource": "" }
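The transpose-then-reshape trick (for the `right_end=True` case) in NumPy form:

```python
import numpy as np

def move_dims_to_flat_end(x, axis):
    """Transpose dims in `axis` to the right end, then merge them into one."""
    other = sorted(set(range(x.ndim)) - set(axis))
    x_permed = np.transpose(x, other + list(axis))
    return x_permed.reshape(x_permed.shape[:len(other)] + (-1,))

x = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
y = move_dims_to_flat_end(x, axis=[1, 3])
# y.shape → (2, 4, 15)
```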
q266774
_sort_tensor
test
def _sort_tensor(tensor): """Use `top_k` to sort a `Tensor` along the last dimension.""" sorted_, _
python
{ "resource": "" }
q266775
Sum.make_component_state_space_models
test
def make_component_state_space_models(self, num_timesteps, param_vals, initial_step=0): """Build an ordered list of Distribution instances for component models. Args: num_timesteps: Python `int` number of timesteps to model. param_vals: a list of `Tensor` parameter values in order corresponding to `self.parameters`, or a dict mapping from parameter names to values. initial_step: optional `int` specifying the initial timestep to model. This is relevant when the model contains time-varying components, e.g., holidays or seasonality. Returns: component_ssms: a Python list of `LinearGaussianStateSpaceModel` Distribution objects, in order corresponding to `self.components`. """ with tf.compat.v1.name_scope('make_component_state_space_models'): # List the model parameters in canonical order param_map = self._canonicalize_param_vals_as_map(param_vals) param_vals_list = [param_map[p.name] for p in self.parameters]
python
{ "resource": "" }
q266776
amari_alpha
test
def amari_alpha(logu, alpha=1., self_normalized=False, name=None): """The Amari-alpha Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` When `self_normalized = True`, the Amari-alpha Csiszar-function is: ```none f(u) = { -log(u) + (u - 1), alpha = 0 { u log(u) - (u - 1), alpha = 1 { [(u**alpha - 1) - alpha (u - 1)] / (alpha (alpha - 1)), otherwise ``` When `self_normalized = False` the `(u - 1)` terms are omitted. Warning: when `alpha != 0` and/or `self_normalized = True` this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. For more information, see: A. Cichocki and S. Amari. "Families of Alpha- Beta- and Gamma-Divergences: Flexible and Robust Measures of Similarities." Entropy, vol. 12, no. 6, pp. 1532-1568, 2010. Args: logu: `float`-like `Tensor` representing `log(u)` from above.
python
{ "resource": "" }
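The piecewise definition can be evaluated directly in NumPy (a sketch for scalar `alpha`, without the numerical-stability care of the real op):

```python
import numpy as np

def amari_alpha(logu, alpha=1., self_normalized=False):
    """Evaluate the Amari-alpha f at u = exp(logu), per the docstring."""
    u = np.exp(logu)
    if alpha == 0.:
        return -logu + (u - 1.) if self_normalized else -logu
    if alpha == 1.:
        return u * logu - (u - 1.) if self_normalized else u * logu
    correction = alpha * (u - 1.) if self_normalized else 0.
    return ((u**alpha - 1.) - correction) / (alpha * (alpha - 1.))
```

At `u = 1` (`logu = 0`) every branch evaluates to zero, as a Csiszar-function should.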
q266777
kl_reverse
test
def kl_reverse(logu, self_normalized=False, name=None): """The reverse Kullback-Leibler Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` When `self_normalized = True`, the KL-reverse Csiszar-function is: ```none f(u) = -log(u) + (u - 1) ``` When `self_normalized = False` the `(u - 1)` term is omitted. Observe that as an f-Divergence, this Csiszar-function implies: ```none D_f[p, q] = KL[q, p] ``` The KL is "reverse" because in maximum likelihood we think of minimizing `q` as in `KL[p, q]`. Warning: when `self_normalized = True` this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. Args: logu: `float`-like `Tensor` representing `log(u)` from above. self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even
python
{ "resource": "" }
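A NumPy sketch, using `expm1(logu)` to recover `u - 1` stably from `log(u)`:

```python
import numpy as np

def kl_reverse(logu, self_normalized=False):
    """f(u) = -log(u), plus (u - 1) when self-normalized."""
    logu = np.asarray(logu, dtype=float)
    return -logu + np.expm1(logu) if self_normalized else -logu
```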
q266778
jensen_shannon
test
def jensen_shannon(logu, self_normalized=False, name=None): """The Jensen-Shannon Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` When `self_normalized = True`, the Jensen-Shannon Csiszar-function is: ```none f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2) ``` When `self_normalized = False` the `(u + 1) log(2)` term is omitted. Observe that as an f-Divergence, this Csiszar-function implies: ```none D_f[p, q] = KL[p, m] + KL[q, m] m(x) = 0.5 p(x) + 0.5 q(x) ``` In a sense, this divergence is the "reverse" of the Arithmetic-Geometric f-Divergence. This Csiszar-function induces a symmetric f-Divergence, i.e., `D_f[p, q] = D_f[q, p]`. Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. For more information, see: Lin, J. "Divergence measures based on the Shannon entropy." IEEE Trans. Inf. Th., 37, 145-151, 1991. Args: logu: `float`-like `Tensor` representing `log(u)`
python
{ "resource": "" }
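The defining formula, sketched in NumPy:

```python
import numpy as np

def jensen_shannon(logu, self_normalized=False):
    """f(u) = u log(u) - (1 + u) log(1 + u), plus (u + 1) log(2) if requested."""
    u = np.exp(logu)
    f = u * logu - (1. + u) * np.log1p(u)
    return f + (u + 1.) * np.log(2.) if self_normalized else f
```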
q266779
pearson
test
def pearson(logu, name=None): """The Pearson Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Pearson Csiszar-function is: ```none f(u) = (u - 1)**2 ``` Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. Args: logu: `float`-like `Tensor` representing `log(u)` from above. name: Python `str` name prefixed to Ops created
python
{ "resource": "" }
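In log-space, `u - 1 = expm1(log(u))`, so the whole function is one expression:

```python
import numpy as np

def pearson(logu):
    """f(u) = (u - 1)**2, with u - 1 computed as expm1(log(u))."""
    return np.square(np.expm1(logu))
```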
q266780
squared_hellinger
test
def squared_hellinger(logu, name=None): """The Squared-Hellinger Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Squared-Hellinger Csiszar-function is: ```none f(u) = (sqrt(u) - 1)**2 ``` This Csiszar-function induces a symmetric f-Divergence, i.e., `D_f[p, q] = D_f[q, p]`. Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. Args: logu: `float`-like `Tensor` representing `log(u)` from above.
python
{ "resource": "" }
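Since `(sqrt(u) - 1)**2 = expm1(0.5 * log(u))**2`, this is just the Pearson function evaluated at half of `logu`:

```python
import numpy as np

def squared_hellinger(logu):
    """f(u) = (sqrt(u) - 1)**2 = expm1(0.5 * log(u))**2."""
    return np.square(np.expm1(0.5 * np.asarray(logu, dtype=float)))
```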
q266781
triangular
test
def triangular(logu, name=None): """The Triangular Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Triangular Csiszar-function is: ```none f(u) = (u - 1)**2 / (1 + u) ``` This Csiszar-function induces a symmetric f-Divergence, i.e., `D_f[p, q] = D_f[q, p]`. Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. Args: logu: `float`-like `Tensor` representing `log(u)` from above.
python
{ "resource": "" }
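The defining formula, sketched in NumPy:

```python
import numpy as np

def triangular(logu):
    """f(u) = (u - 1)**2 / (1 + u)."""
    u = np.exp(logu)
    return np.square(u - 1.) / (1. + u)
```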
q266782
t_power
test
def t_power(logu, t, self_normalized=False, name=None): """The T-Power Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` When `self_normalized = True` the T-Power Csiszar-function is: ```none f(u) = s [ u**t - 1 - t(u - 1) ] s = { -1 0 < t < 1 { +1 otherwise ``` When `self_normalized = False` the `- t(u - 1)` term is omitted. This is similar to the `amari_alpha` Csiszar-function, with the associated divergence being the same up to factors depending only on `t`. Args: logu: `float`-like `Tensor` representing `log(u)` from above. t: `Tensor` of same `dtype` as `logu` and broadcastable shape. self_normalized: Python `bool` indicating whether `f'(u=1)=0`. name: Python `str` name prefixed to Ops created by this function. Returns: t_power_of_u: `float`-like `Tensor` of the Csiszar-function evaluated
python
{ "resource": "" }
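A NumPy sketch for scalar `t`, computing `u**t - 1` stably as `expm1(t * logu)`:

```python
import numpy as np

def t_power(logu, t, self_normalized=False):
    """f(u) = s * [u**t - 1 - t * (u - 1)], with s = -1 for 0 < t < 1."""
    logu = np.asarray(logu, dtype=float)
    f = np.expm1(t * logu)              # u**t - 1, computed from log(u)
    if self_normalized:
        f = f - t * np.expm1(logu)      # subtract the t * (u - 1) term
    return -f if 0. < t < 1. else f
```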
q266783
log1p_abs
test
def log1p_abs(logu, name=None): """The log1p-abs Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Log1p-Abs Csiszar-function is: ```none f(u) = u**(sign(u-1)) - 1 ``` This function is so-named because it was invented from the following recipe. Choose a convex function g such that g(0)=0 and solve for f: ```none log(1 + f(u)) = g(log(u)). <=> f(u) = exp(g(log(u))) - 1
python
{ "resource": "" }
q266784
jeffreys
test
def jeffreys(logu, name=None): """The Jeffreys Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Jeffreys Csiszar-function is: ```none f(u) = 0.5 ( u log(u) - log(u) ) = 0.5 kl_forward + 0.5 kl_reverse = symmetrized_csiszar_function(kl_reverse) = symmetrized_csiszar_function(kl_forward) ``` This Csiszar-function induces a symmetric f-Divergence, i.e., `D_f[p, q] = D_f[q, p]`. Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`.
python
{ "resource": "" }
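Using the identity `u log(u) - log(u) = (u - 1) log(u)`, the function collapses to a compact sketch:

```python
import numpy as np

def jeffreys(logu):
    """f(u) = 0.5 * (u * log(u) - log(u)) = 0.5 * (u - 1) * log(u)."""
    logu = np.asarray(logu, dtype=float)
    return 0.5 * np.expm1(logu) * logu
```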
q266785
modified_gan
test
def modified_gan(logu, self_normalized=False, name=None): """The Modified-GAN Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` When `self_normalized = True` the modified-GAN (Generative/Adversarial Network) Csiszar-function is: ```none f(u) = log(1 + u) - log(u) + 0.5 (u - 1) ``` When `self_normalized = False` the `0.5 (u - 1)` is omitted. The unmodified GAN Csiszar-function is identical to Jensen-Shannon (with `self_normalized = False`). Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`. Args: logu: `float`-like `Tensor` representing `log(u)` from above. self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even when `p, q` are unnormalized measures.
python
{ "resource": "" }
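A NumPy sketch of the two variants:

```python
import numpy as np

def modified_gan(logu, self_normalized=False):
    """f(u) = log(1 + u) - log(u), plus 0.5 * (u - 1) when self-normalized."""
    logu = np.asarray(logu, dtype=float)
    f = np.log1p(np.exp(logu)) - logu
    return f + 0.5 * np.expm1(logu) if self_normalized else f
```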
q266786
dual_csiszar_function
test
def dual_csiszar_function(logu, csiszar_function, name=None): """Calculates the dual Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Csiszar-dual is defined as: ```none f^*(u) = u f(1 / u) ``` where `f` is some other Csiszar-function. For example, the dual of `kl_reverse` is `kl_forward`, i.e., ```none f(u) = -log(u) f^*(u) = u f(1 / u) = -u log(1 / u) = u log(u) ``` The dual of the dual is
python
{ "resource": "" }
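In log-space the dual is one line: `u f(1/u)` becomes `exp(logu) * f(-logu)`, since `log(1/u) = -log(u)`. The sketch below also exercises the `dual(kl_reverse) = kl_forward` identity from the docstring:

```python
import numpy as np

def dual_csiszar_function(logu, csiszar_function):
    """f_dual(u) = u * f(1 / u); note log(1 / u) = -log(u)."""
    return np.exp(logu) * csiszar_function(-np.asarray(logu, dtype=float))

kl_reverse = lambda logu: -logu                      # f(u) = -log(u)
val = dual_csiszar_function(np.log(2.), kl_reverse)  # u * log(u) at u = 2
```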
q266787
symmetrized_csiszar_function
test
def symmetrized_csiszar_function(logu, csiszar_function, name=None): """Symmetrizes a Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The symmetrized Csiszar-function is defined as: ```none f_g(u) = 0.5 g(u) + 0.5 u g (1 / u) ``` where `g` is some other Csiszar-function. We say the function is "symmetrized" because: ```none D_{f_g}[p, q] = D_{f_g}[q, p] ``` for all `p << >> q` (i.e., `support(p) = support(q)`). There exist alternatives for symmetrizing a Csiszar-function. For example, ```none f_g(u) = max(f(u), f^*(u)), ``` where `f^*` is the dual Csiszar-function, also implies a symmetric f-Divergence. Example: When either of the following
python
{ "resource": "" }
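A NumPy sketch; as the Jeffreys docstring notes, symmetrizing `kl_reverse` (`g(u) = -log(u)`) recovers the Jeffreys function `0.5 * (u - 1) * log(u)`:

```python
import numpy as np

def symmetrized_csiszar_function(logu, g):
    """f_g(u) = 0.5 * g(u) + 0.5 * u * g(1 / u), all in log-space."""
    logu = np.asarray(logu, dtype=float)
    return 0.5 * g(logu) + 0.5 * np.exp(logu) * g(-logu)

# 0.5 * (-log u) + 0.5 * u * log(u) = 0.5 * (u - 1) * log(u) = jeffreys(u).
val = symmetrized_csiszar_function(np.log(3.), lambda lg: -lg)
```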
q266788
monte_carlo_csiszar_f_divergence
test
def monte_carlo_csiszar_f_divergence( f, p_log_prob, q, num_draws, use_reparametrization=None, seed=None, name=None): """Monte-Carlo approximation of the Csiszar f-Divergence. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` The Csiszar f-Divergence for Csiszar-function f is given by: ```none D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ] ~= m**-1 sum_j^m f( p(x_j) / q(x_j) ), where x_j ~iid q(X) ``` Tricks: Reparameterization and Score-Gradient When q is "reparameterized", i.e., a diffeomorphic transformation of a parameterless distribution (e.g., `Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and expectation, i.e., `grad[Avg{ s_i : i=1...n }] = Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}` and `s_i = f(x_i), x_i ~iid q(X)`. However, if q is not reparameterized, TensorFlow's gradient will be incorrect since the chain-rule stops at samples of unreparameterized distributions. In this circumstance using the Score-Gradient trick results in an unbiased gradient, i.e., ```none grad[ E_q[f(X)] ] = grad[ int dx q(x) f(x) ] = int dx grad[ q(x) f(x) ] = int dx [ q'(x) f(x) + q(x) f'(x) ] = int dx q(x) [q'(x) / q(x) f(x) + f'(x) ] = int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ] = E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ] ``` Unless `q.reparameterization_type != tfd.FULLY_REPARAMETERIZED` it is usually preferable to set `use_reparametrization = True`. Example Application: The Csiszar f-Divergence is a useful framework for variational inference. I.e., observe that, ```none f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] ) <= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ] := D_f[p(x, Z), q(Z | x)] ``` The inequality follows from the fact that the "perspective" of `f`, i.e., `(s, t) |-> t f(s / t))`, is convex in `(s, t)` when `s/t in domain(f)` and `t` is a real. 
Since the above framework includes the popular Evidence Lower BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework "Evidence Divergence Bound Optimization" (EDBO). Args: f: Python `callable` representing a Csiszar-function in log-space, i.e., takes `p_log_prob(q_samples) - q.log_prob(q_samples)`. p_log_prob: Python `callable` taking (a batch of) samples from `q` and returning the natural-log of the probability under distribution `p`. (In variational inference `p` is the joint distribution.) q: `tf.Distribution`-like instance; must implement: `reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`. (In variational inference `q` is the approximate posterior distribution.) num_draws: Integer scalar number of draws used to approximate the f-Divergence expectation. use_reparametrization: Python `bool`. When `None` (the default), automatically set to: `q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`. When `True` uses the standard Monte-Carlo average. When `False` uses the score-gradient trick. (See above for details.) When `False`, consider using `csiszar_vimco`. seed: Python `int` seed for `q.sample`. name: Python `str` name prefixed to Ops created by this function. Returns: monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo approximation of the Csiszar f-Divergence. Raises: ValueError: if `q` is not a reparameterized distribution and `use_reparametrization =
python
{ "resource": "" }
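The Monte-Carlo average above can be sketched in plain NumPy (illustrative helper names; not the TFP API). With `f(u) = -log(u)` the estimator recovers `KL[q || p]`, which for `q = N(0, 1)` and `p = N(1, 1)` is `0.5` in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_log_prob(x, loc, scale):
    return -0.5 * np.log(2.0 * np.pi * scale**2) - (x - loc)**2 / (2.0 * scale**2)

def mc_f_divergence(f, p_log_prob, q_sample, q_log_prob, num_draws):
    # D_f[p, q] ~= m**-1 sum_j f(p(x_j) / q(x_j)),  x_j ~iid q.
    x = q_sample(num_draws)
    logu = p_log_prob(x) - q_log_prob(x)  # log(p / q) at the samples
    return np.mean(f(logu))

# f(u) = -log(u); in log-space f(logu) = -logu. This gives KL[q || p].
est = mc_f_divergence(lambda logu: -logu,
                      lambda x: normal_log_prob(x, 1.0, 1.0),   # p = N(1, 1)
                      lambda n: rng.normal(0.0, 1.0, n),        # q = N(0, 1)
                      lambda x: normal_log_prob(x, 0.0, 1.0),
                      num_draws=100000)
```

The estimate should be close to the closed-form value `0.5 * (1 - 0)**2 = 0.5`.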
q266789
csiszar_vimco_helper
test
def csiszar_vimco_helper(logu, name=None): """Helper to `csiszar_vimco`; computes `log_avg_u`, `log_sooavg_u`. `axis = 0` of `logu` is presumed to correspond to iid samples from `q`, i.e., ```none logu[j] = log(u[j]) u[j] = p(x, h[j]) / q(h[j] | x) h[j] iid~ q(H | x) ``` Args: logu: Floating-type `Tensor` representing `log(p(x, h) / q(h | x))`. name: Python `str` name prefixed to Ops created by this function. Returns: log_avg_u: `logu.dtype` `Tensor` corresponding to the natural-log of the average of `u`. The sum of the gradient of `log_avg_u` is `1`. log_sooavg_u: `logu.dtype` `Tensor` characterized by the natural-log of the average of `u` except that the average swaps-out `u[i]` for the leave-`i`-out Geometric-average. The mean of the gradient of `log_sooavg_u` is `1`. Mathematically `log_sooavg_u` is, ```none log_sooavg_u[i] = log(Avg{h[j ; i] : j=0, ..., m-1}) h[j ; i] = { u[j] j!=i { GeometricAverage{u[k] : k != i} j==i ``` """ with tf.compat.v1.name_scope(name, "csiszar_vimco_helper", [logu]): logu = tf.convert_to_tensor(value=logu, name="logu") n = tf.compat.dimension_value(logu.shape.with_rank_at_least(1)[0]) if n is None: n = tf.shape(input=logu)[0] log_n = tf.math.log(tf.cast(n, dtype=logu.dtype)) nm1 = tf.cast(n - 1, dtype=logu.dtype) else: log_n = np.log(n).astype(logu.dtype.as_numpy_dtype) nm1 = np.asarray(n - 1, dtype=logu.dtype.as_numpy_dtype) # Throughout we reduce across axis=0 since this is presumed to be iid # samples. log_max_u = tf.reduce_max(input_tensor=logu, axis=0) log_sum_u_minus_log_max_u = tf.reduce_logsumexp( input_tensor=logu - log_max_u, axis=0)
python
{ "resource": "" }
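A direct (loop-based) NumPy sketch of the two quantities, assuming the illustrative names below; the real implementation is vectorized:

```python
import numpy as np

def logsumexp(a, axis=0):
    # Stable log-sum-exp via max-subtraction.
    amax = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(amax, axis=axis) + np.log(np.sum(np.exp(a - amax), axis=axis))

def vimco_helper(logu):
    # axis 0 of logu indexes iid samples h[j] ~ q(H | x).
    m = logu.shape[0]
    log_avg_u = logsumexp(logu, axis=0) - np.log(m)
    # Leave-i-out geometric average, in log space:
    # log GeoAvg_{k != i} u[k] = (sum_k logu[k] - logu[i]) / (m - 1).
    log_geo_loo = (np.sum(logu, axis=0, keepdims=True) - logu) / (m - 1)
    log_sooavg_u = np.empty_like(logu)
    for i in range(m):
        swapped = logu.copy()
        swapped[i] = log_geo_loo[i]  # swap-out u[i] for its leave-i-out average
        log_sooavg_u[i] = logsumexp(swapped, axis=0) - np.log(m)
    return log_avg_u, log_sooavg_u
```

For `u = [1, 2, 4]`, the leave-0-out geometric average is `sqrt(8)`, so `exp(log_sooavg_u[0]) = (sqrt(8) + 2 + 4) / 3`.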
q266790
_assert_ndims_statically
test
def _assert_ndims_statically(x, expect_ndims=None, expect_ndims_at_least=None, expect_static=False): """Assert that Tensor x has expected number of dimensions.""" ndims = x.shape.ndims if ndims is None: if expect_static: raise ValueError('Expected static ndims. Found: {}'.format(x)) return if expect_ndims is not None and ndims != expect_ndims:
python
{ "resource": "" }
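A minimal NumPy analogue of this static-rank check (illustrative name), for intuition:

```python
import numpy as np

def assert_ndims(x, expect_ndims=None, expect_ndims_at_least=None):
    """Raise ValueError unless x has the expected number of dimensions."""
    ndims = np.ndim(x)
    if expect_ndims is not None and ndims != expect_ndims:
        raise ValueError('Expected {} dims, found {}'.format(expect_ndims, ndims))
    if expect_ndims_at_least is not None and ndims < expect_ndims_at_least:
        raise ValueError('Expected at least {} dims, found {}'.format(
            expect_ndims_at_least, ndims))

assert_ndims(np.zeros((2, 3)), expect_ndims=2)           # passes
assert_ndims(np.zeros((2, 3)), expect_ndims_at_least=1)  # passes
```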
q266791
_batch_gather_with_broadcast
test
def _batch_gather_with_broadcast(params, indices, axis): """Like batch_gather, but broadcasts to the left of axis.""" # batch_gather assumes... # params.shape = [A1,...,AN, B1,...,BM] # indices.shape = [A1,...,AN, C] # which gives output of shape # [A1,...,AN, C, B1,...,BM] # Here we broadcast dims of each to the left of `axis` in params, and left of # the rightmost dim in indices, e.g. we can
python
{ "resource": "" }
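The core `batch_gather` contract, before the extra left-broadcasting this helper adds, matches NumPy's `take_along_axis` in the simple case where the gathered axis is the last one:

```python
import numpy as np

# params.shape = [A, B], indices.shape = [A, C]; gather along the B axis:
# out[a, c] = params[a, indices[a, c]], giving out.shape = [A, C].
params = np.arange(10).reshape(2, 5)   # [[0..4], [5..9]]
indices = np.array([[0, 1, 2],
                    [2, 3, 4]])
out = np.take_along_axis(params, indices, axis=1)
```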
q266792
_broadcast_cat_event_and_params
test
def _broadcast_cat_event_and_params(event, params, base_dtype): """Broadcasts the event or distribution parameters.""" if dtype_util.is_integer(event.dtype): pass elif dtype_util.is_floating(event.dtype): # When `validate_args=True` we've already ensured int/float casting # is closed. event = tf.cast(event, dtype=tf.int32) else: raise TypeError("`value` should have integer `dtype` or " "`self.dtype` ({})".format(base_dtype)) shape_known_statically = ( tensorshape_util.rank(params.shape) is not None and tensorshape_util.is_fully_defined(params.shape[:-1]) and tensorshape_util.is_fully_defined(event.shape))
python
{ "resource": "" }
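A NumPy sketch of the broadcasting step (illustrative names; `np.broadcast_shapes` stands in for the static/dynamic shape logic in the real helper):

```python
import numpy as np

def broadcast_cat_event_and_params(event, params):
    # event: integer category indices; params: [..., num_classes] probs/logits.
    event = np.asarray(event)
    if not np.issubdtype(event.dtype, np.integer):
        event = event.astype(np.int32)  # mirror the float -> int cast
    batch_shape = np.broadcast_shapes(event.shape, params.shape[:-1])
    event = np.broadcast_to(event, batch_shape)
    params = np.broadcast_to(params, batch_shape + params.shape[-1:])
    return event, params

# A scalar event against a batch of two 3-class parameter vectors:
ev, pa = broadcast_cat_event_and_params(1, np.ones((2, 3)) / 3.0)
```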
q266793
expectation_importance_sampler_logspace
test
def expectation_importance_sampler_logspace( log_f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler_logspace'): r"""Importance sampling with a positive function, in log-space. With \\(p(z) := exp^{log_p(z)}\\), and \\(f(z) = exp^{log_f(z)}\\), this `Op` returns \\(Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,\\) \\(\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]\\) \\(= Log[E_p[f(Z)]]\\) This integral is done in log-space with max-subtraction to better handle the often extreme values that `f(z) p(z) / q(z)` can take on. In contrast to `expectation_importance_sampler`, this `Op` returns values in log-space. User supplies either `Tensor` of samples `z`, or number of samples to draw `n`. Args: log_f: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape broadcastable to `q.batch_shape`. For example, `log_f` works "just like" `sampling_dist_q.log_prob`. log_p: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape broadcastable to `q.batch_shape`.
python
{ "resource": "" }
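The log-space estimator itself is a one-liner once a stable `logsumexp` is available; this NumPy sketch (illustrative names) checks the degenerate case `f == 1`, `p == q`, where `Log[E_p[f(Z)]] = 0`:

```python
import numpy as np

def logsumexp(a, axis=0):
    # Max-subtraction handles the often extreme values of f(z) p(z) / q(z).
    amax = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(amax, axis=axis) + np.log(np.sum(np.exp(a - amax), axis=axis))

def importance_sampler_logspace(log_f, log_p, log_q, z):
    # Log[ n^-1 sum_i exp(log_f(z_i) + log_p(z_i) - log_q(z_i)) ], z_i ~ q.
    n = z.shape[0]
    return logsumexp(log_f(z) + log_p(z) - log_q(z), axis=0) - np.log(n)

rng = np.random.default_rng(0)
std_normal_logpdf = lambda x: -0.5 * np.log(2.0 * np.pi) - 0.5 * x**2
z = rng.normal(size=1000)
# With f == 1 and p == q, every summand is exp(0), so the estimate is 0.
est = importance_sampler_logspace(lambda x: np.zeros_like(x),
                                  std_normal_logpdf, std_normal_logpdf, z)
```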
q266794
_broadcast_event_and_samples
test
def _broadcast_event_and_samples(event, samples, event_ndims): """Broadcasts the event or samples.""" # This is the shape of self.samples, without the samples axis, i.e. the shape # of the result of a call to dist.sample(). This way we can broadcast it with # event to get a properly-sized event, then add the singleton dim back at # -event_ndims - 1. samples_shape = tf.concat( [tf.shape(input=samples)[:-event_ndims - 1], tf.shape(input=samples)[tf.rank(samples)
python
{ "resource": "" }
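A NumPy sketch of this shape manipulation (illustrative name): drop the samples axis to get the shape of one draw, broadcast the event against it, then reinsert the axis as a singleton so event and samples broadcast against each other:

```python
import numpy as np

def broadcast_event_and_samples(event, samples, event_ndims):
    # samples: [..., n, E1, ..., E_k] with k = event_ndims; dropping the
    # samples axis at position -event_ndims - 1 gives the shape of one draw.
    r = samples.ndim
    one_draw_shape = (samples.shape[:r - event_ndims - 1]
                      + samples.shape[r - event_ndims:])
    event = np.broadcast_to(
        event, np.broadcast_shapes(np.shape(event), one_draw_shape))
    # Re-insert the singleton samples dim at -event_ndims - 1.
    event = np.expand_dims(event, axis=-event_ndims - 1)
    samples = np.broadcast_to(
        samples, np.broadcast_shapes(samples.shape, event.shape))
    return event, samples

ev, sm = broadcast_event_and_samples(np.zeros(2), np.zeros((5, 2)), event_ndims=1)
```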
q266795
minimize
test
def minimize(value_and_gradients_function, initial_position, tolerance=1e-8, x_tolerance=0, f_relative_tolerance=0, initial_inverse_hessian_estimate=None, max_iterations=50, parallel_iterations=1, stopping_condition=None, name=None): """Applies the BFGS algorithm to minimize a differentiable function. Performs unconstrained minimization of a differentiable function using the BFGS scheme. For details of the algorithm, see [Nocedal and Wright(2006)][1]. ### Usage: The following example demonstrates the BFGS optimizer attempting to find the minimum for a simple two dimensional quadratic objective function. ```python minimum = np.array([1.0, 1.0]) # The center of the quadratic bowl. scales = np.array([2.0, 3.0]) # The scales along the two axes. # The objective function and the gradient. def quadratic(x): value = tf.reduce_sum(scales * (x - minimum) ** 2) return value, tf.gradients(value, x)[0] start = tf.constant([0.6, 0.8]) # Starting point for the search. optim_results = tfp.optimizer.bfgs_minimize( quadratic, initial_position=start, tolerance=1e-8) with tf.Session() as session: results = session.run(optim_results) # Check that the search converged assert(results.converged) # Check that the argmin is close to the actual value. np.testing.assert_allclose(results.position, minimum) # Print out the total number of function evaluations it took. Should be 6. print ("Function evaluations: %d" % results.num_objective_evaluations) ``` ### References: [1]: Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in Operations Research. pp 136-140. 2006 http://pages.mtu.edu/~struther/Courses/OLD/Sp2013/5630/Jorge_Nocedal_Numerical_optimization_267490.pdf Args: value_and_gradients_function: A Python callable that accepts a point as a real `Tensor` and returns a tuple of `Tensor`s of real dtype containing the value of the function and its gradient at that point. The function to be minimized. 
The input should be of shape `[..., n]`, where `n` is the size of the domain of input points, and all others are batching dimensions. The first component of the return value should be a real `Tensor` of matching shape `[...]`. The second component (the gradient) should also be of shape `[..., n]` like the input value to the function. initial_position: real `Tensor` of shape `[..., n]`. The starting point, or points when using batching dimensions, of the search procedure. At these points the function value and the gradient norm should be finite. tolerance: Scalar `Tensor` of real dtype. Specifies the gradient tolerance for the procedure. If the supremum norm of the gradient vector is below this number, the algorithm is stopped. x_tolerance: Scalar `Tensor` of real dtype. If the absolute change in the position between one iteration and the next is smaller than this number, the algorithm is stopped. f_relative_tolerance: Scalar `Tensor` of real dtype. If the relative change in the objective value between one iteration and the next is smaller than this value, the algorithm is stopped. initial_inverse_hessian_estimate: Optional `Tensor` of the same dtype as the components of the output of the `value_and_gradients_function`. If specified, the shape should be broadcastable to shape `[..., n, n]`; e.g. if a single `[n, n]` matrix is provided, it will be automatically broadcasted to all batches. Alternatively, one can also specify a different Hessian estimate for each batch member. For the correctness of the algorithm, it is required that this parameter be symmetric and positive definite. Specifies the starting estimate for the inverse of the Hessian at the initial point. If not specified, the identity matrix is used as the starting estimate for the inverse Hessian. max_iterations: Scalar positive int32 `Tensor`. The maximum number of iterations for BFGS updates. parallel_iterations: Positive integer. The number of iterations allowed to run in parallel. 
stopping_condition: (Optional) A Python function that takes as input two Boolean tensors of shape `[...]`, and returns a Boolean scalar tensor. The input tensors are `converged` and `failed`, indicating the current status of each respective batch member; the return value states whether the algorithm should stop. The default is tfp.optimizer.converged_all which only stops when all batch members have either converged or failed. An alternative is tfp.optimizer.converged_any which stops as soon as one batch member has converged, or when all have failed. name: (Optional) Python str. The name prefixed to the ops created by this function. If not supplied, the default name 'minimize' is used. Returns: optimizer_results: A namedtuple containing the following items: converged: boolean tensor of shape `[...]` indicating for each batch member whether the minimum was found within tolerance. failed: boolean tensor of shape `[...]` indicating for each batch member whether a line search step failed to find a suitable step size satisfying Wolfe conditions. In the absence of any constraints on the number of objective evaluations permitted, this value will be the complement of `converged`. However, if there is a constraint and the search stopped due to available evaluations being exhausted, both `failed` and `converged` will be simultaneously False. num_objective_evaluations: The total number of objective evaluations performed. position: A tensor of shape `[..., n]` containing the last argument value found during the search from each starting point. If the search converged, then this value is the argmin of the objective function. objective_value: A tensor of shape `[...]` with the value of the objective function at the `position`. If the search converged, then this is the (local) minimum of the objective function. objective_gradient: A tensor of shape `[..., n]` containing the gradient of the objective function at the `position`. 
If the search converged the max-norm of this tensor should be below the tolerance. inverse_hessian_estimate: A tensor of shape `[..., n, n]` containing the inverse of the estimated Hessian. """ with tf.compat.v1.name_scope( name, 'minimize', [initial_position, tolerance, initial_inverse_hessian_estimate]): initial_position = tf.convert_to_tensor(
python
{ "resource": "" }
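For intuition, here is a deliberately minimal single-batch BFGS loop in NumPy applied to the same quadratic as the docstring example. It is a sketch (crude backtracking line search, no failure handling), not the TFP implementation:

```python
import numpy as np

def bfgs_minimize(value_and_grad, x0, tolerance=1e-8, max_iterations=100):
    """Single-batch BFGS sketch with a crude backtracking line search."""
    x = np.asarray(x0, dtype=float)
    inv_h = np.eye(x.size)                    # inverse Hessian estimate
    value, grad = value_and_grad(x)
    for _ in range(max_iterations):
        if np.max(np.abs(grad)) < tolerance:  # supremum-norm stopping rule
            return x, value, True
        direction = -inv_h @ grad
        step = 1.0
        while value_and_grad(x + step * direction)[0] >= value and step > 1e-10:
            step *= 0.5                       # backtrack until we decrease
        x_new = x + step * direction
        value_new, grad_new = value_and_grad(x_new)
        s, y = x_new - x, grad_new - grad
        if s @ y > 1e-12:                     # curvature condition keeps inv_h PD
            rho = 1.0 / (s @ y)
            u = np.eye(x.size) - rho * np.outer(s, y)
            inv_h = u @ inv_h @ u.T + rho * np.outer(s, s)
        x, value, grad = x_new, value_new, grad_new
    return x, value, False

minimum = np.array([1.0, 1.0])  # center of the quadratic bowl
scales = np.array([2.0, 3.0])   # scales along the two axes

def quadratic(x):
    return np.sum(scales * (x - minimum) ** 2), 2.0 * scales * (x - minimum)

position, value, converged = bfgs_minimize(quadratic, np.array([0.6, 0.8]))
```

On this strongly convex problem the iterates should land at the argmin `[1.0, 1.0]`, mirroring the TF example above.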
q266796
_inv_hessian_control_inputs
test
def _inv_hessian_control_inputs(inv_hessian): """Computes control inputs to validate a provided inverse Hessian. These ensure that the provided inverse Hessian is positive definite and symmetric. Args: inv_hessian: The starting estimate for the inverse of the Hessian at the initial point. Returns: A list of tf.Assert ops suitable for use with tf.control_dependencies. """ # The easiest way to validate if the inverse Hessian is positive definite is # to compute its Cholesky decomposition. is_positive_definite = tf.reduce_all( input_tensor=tf.math.is_finite(tf.linalg.cholesky(inv_hessian)), axis=[-1, -2]) # Then check that the supplied inverse Hessian is symmetric.
python
{ "resource": "" }
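The same two checks in NumPy (illustrative name): a Cholesky attempt for positive definiteness, and a transpose comparison for symmetry:

```python
import numpy as np

def validate_inv_hessian(inv_hessian, atol=1e-8):
    # Cholesky succeeds exactly when the (symmetric part of the) matrix
    # is positive definite; NumPy reads only the lower triangle.
    try:
        np.linalg.cholesky(inv_hessian)
        positive_definite = True
    except np.linalg.LinAlgError:
        positive_definite = False
    symmetric = np.allclose(inv_hessian, np.swapaxes(inv_hessian, -1, -2),
                            atol=atol)
    return positive_definite and symmetric
```

The identity passes; `[[1, 2], [2, 1]]` fails the definiteness check (eigenvalues 3 and -1), and a non-symmetric matrix fails the symmetry check.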
q266797
_update_inv_hessian
test
def _update_inv_hessian(prev_state, next_state): """Update the BFGS state by computing the next inverse hessian estimate.""" # Only update the inverse Hessian if not already failed or converged. should_update = ~next_state.converged & ~next_state.failed # Compute the normalization term (y^T . s), should not update if it is singular. gradient_delta = next_state.objective_gradient - prev_state.objective_gradient position_delta = next_state.position - prev_state.position normalization_factor = tf.reduce_sum( input_tensor=gradient_delta * position_delta, axis=-1) should_update = should_update & ~tf.equal(normalization_factor, 0) def _do_update_inv_hessian(): next_inv_hessian = _bfgs_inv_hessian_update(
python
{ "resource": "" }
q266798
_bfgs_inv_hessian_update
test
def _bfgs_inv_hessian_update(grad_delta, position_delta, normalization_factor, inv_hessian_estimate): """Applies the BFGS update to the inverse Hessian estimate. The BFGS update rule is (note A^T denotes the transpose of a vector/matrix A). ```None rho = 1/(grad_delta^T * position_delta) U = (I - rho * position_delta * grad_delta^T) H_1 = U * H_0 * U^T + rho * position_delta * position_delta^T ``` Here, `H_0` is the inverse Hessian estimate at the previous iteration and `H_1` is the next estimate. Note that `*` should be interpreted as the matrix multiplication (with the understanding that matrix multiplication for scalars is usual multiplication and for a matrix with a vector is the action of the matrix on the vector). The implementation below utilizes an expanded version of the above formula to avoid the matrix multiplications that would be needed otherwise. By expansion it is easy to see that one only needs matrix-vector or vector-vector operations. The expanded version is: ```None f = 1 + rho * (grad_delta^T * H_0 * grad_delta) H_1 - H_0 = - rho * [position_delta * (H_0 * grad_delta)^T + (H_0 * grad_delta) * position_delta^T] + rho * f * [position_delta * position_delta^T] ``` All the terms in square brackets are matrices and are constructed using vector outer products. All the other terms on the right hand side are scalars. Also worth noting that the first and second lines are both rank 1 updates applied to the current inverse Hessian estimate. Args: grad_delta: Real `Tensor` of shape `[..., n]`. The difference between the gradient at the new position and the old position. position_delta: Real `Tensor` of shape `[..., n]`. The change in position from the previous iteration to the current one. normalization_factor: Real `Tensor` of shape `[...]`. Should be equal to `grad_delta^T * position_delta`, i.e. `1/rho` as defined above. inv_hessian_estimate: Real `Tensor` of shape `[..., n, n]`. The previous estimate of the inverse Hessian. Should be positive definite and
python
{ "resource": "" }
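A NumPy sketch of the expanded update, cross-checked against the unexpanded form `U H_0 U^T + rho s s^T` (single batch member; illustrative name):

```python
import numpy as np

def bfgs_inv_hessian_update(grad_delta, position_delta, normalization_factor, h0):
    # Expanded form: only vector outer products, no matrix-matrix multiply.
    rho = 1.0 / normalization_factor
    h0_y = h0 @ grad_delta                   # H_0 * grad_delta
    f = 1.0 + rho * (grad_delta @ h0_y)      # 1 + rho * y^T H_0 y
    return (h0
            - rho * (np.outer(position_delta, h0_y)
                     + np.outer(h0_y, position_delta))
            + rho * f * np.outer(position_delta, position_delta))

# Cross-check against the unexpanded rule U H_0 U^T + rho s s^T:
rng = np.random.default_rng(0)
y, s = rng.normal(size=3), rng.normal(size=3)
h0 = np.eye(3)
rho = 1.0 / (y @ s)
u = np.eye(3) - rho * np.outer(s, y)
direct = u @ h0 @ u.T + rho * np.outer(s, s)
expanded = bfgs_inv_hessian_update(y, s, y @ s, h0)
```

Expanding `U H_0 U^T` term by term reproduces exactly the bracketed rank-1 updates in the docstring, which is what the check confirms.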
q266799
_mul_right
test
def _mul_right(mat, vec): """Computes the product of a matrix with a vector on the right. Note this supports dynamic shapes and batched computation. Examples: M = tf.reshape(tf.range(6), shape=(3, 2)) # => [[0, 1], # [2, 3], # [4, 5]] v = tf.constant([1, 2]) # Shape: (2,) _mul_right(M, v) # => [ 2, 8, 14] # Shape: (3,) M = tf.reshape(tf.range(30), shape=(2, 3, 5)) # => [[[ 0, 1, 2, 3,
python
{ "resource": "" }
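The same unbatched and batched matrix-vector products can be reproduced in NumPy for comparison (`einsum` broadcasts over the leading batch dimensions):

```python
import numpy as np

# Unbatched: out[i] = sum_j M[i, j] * v[j].
M = np.arange(6).reshape(3, 2)
v = np.array([1, 2])
out = M @ v                                  # shape (3,)

# Batched: the matrix stack has leading batch dims.
M_batched = np.arange(30).reshape(2, 3, 5)
v5 = np.ones(5, dtype=int)
out_batched = np.einsum('...ij,j->...i', M_batched, v5)  # shape (2, 3)
```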