Dataset columns (type and observed value range):
code: string, lengths 66 to 870k
docstring: string, lengths 19 to 26.7k
func_name: string, lengths 1 to 138
language: string, 1 class
repo: string, lengths 7 to 68
path: string, lengths 5 to 324
url: string, lengths 46 to 389
license: string, 7 classes
def as_top_level_api( logdensity_fn: Callable, step_size: float, inverse_mass_matrix: Array, # assume momentum is always Gaussian period: int, *, bijection: Callable = integrators.velocity_verlet, ) -> SamplingAlgorithm: """Implements the (basic) user interface for the Periodic orbital MCMC...
Implements the (basic) user interface for the Periodic orbital MCMC kernel. Each iteration of the periodic orbital MCMC outputs ``period`` weighted samples from a single Hamiltonian orbit connecting the previous sample and momentum (latent) variable with precision matrix ``inverse_mass_matrix``, evaluated ...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/mcmc/periodic_orbital.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/periodic_orbital.py
Apache-2.0
def periodic_orbital_proposal( bijection: Callable, kinetic_energy_fn: Callable, period: int, step_size: float, ) -> Callable: """Periodic Orbital algorithm. The algorithm builds an orbit and computes the weights for each of its steps by applying a bijection `period` times, both forwards a...
Periodic Orbital algorithm. The algorithm builds an orbit and computes the weights for each of its steps by applying a bijection `period` times, both forwards and backwards depending on the direction of the initial state. Parameters ---------- bijection continuous, differentiable and...
periodic_orbital_proposal
python
blackjax-devs/blackjax
blackjax/mcmc/periodic_orbital.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/periodic_orbital.py
Apache-2.0
def generate( direction: int, init_state: integrators.IntegratorState ) -> tuple[PeriodicOrbitalState, PeriodicOrbitalInfo]: """Generate orbit by applying bijection forwards and backwards on period. As described in algorithm 2 of :cite:p:`neklyudov2022orbital`, each iteration of the periodi...
Generate orbit by applying bijection forwards and backwards on period. As described in algorithm 2 of :cite:p:`neklyudov2022orbital`, each iteration of the periodic orbital MCMC takes a position and its direction, i.e. its step in the orbit, then it runs the bijection backwards until it reaches...
generate
python
blackjax-devs/blackjax
blackjax/mcmc/periodic_orbital.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/periodic_orbital.py
Apache-2.0
def proposal_generator(energy_fn: Callable) -> tuple[Callable, Callable]: """ Parameters ---------- energy_fn A function that computes the energy associated to a given state Returns ------- Two functions, one to generate an initial proposal when no step has been taken, another ...
Parameters ---------- energy_fn A function that computes the energy associated to a given state Returns ------- Two functions, one to generate an initial proposal when no step has been taken, another to generate proposals after each step.
proposal_generator
python
blackjax-devs/blackjax
blackjax/mcmc/proposal.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/proposal.py
Apache-2.0
def update(initial_energy: float, new_state: TrajectoryState) -> Proposal: """Generate a new proposal from a trajectory state. The trajectory state records information about the position in the state space and corresponding logdensity. A proposal also carries a weight that is equal to t...
Generate a new proposal from a trajectory state. The trajectory state records information about the position in the state space and corresponding logdensity. A proposal also carries a weight that is equal to the difference between the current energy and the previous one. It thus carries...
update
python
blackjax-devs/blackjax
blackjax/mcmc/proposal.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/proposal.py
Apache-2.0
def progressive_biased_sampling( rng_key: PRNGKey, proposal: Proposal, new_proposal: Proposal ) -> Proposal: """Biased proposal sampling :cite:p:`betancourt2017conceptual`. Unlike uniform sampling, biased sampling favors new proposals. It thus biases the transition away from the trajectory's initial st...
Biased proposal sampling :cite:p:`betancourt2017conceptual`. Unlike uniform sampling, biased sampling favors new proposals. It thus biases the transition away from the trajectory's initial state.
progressive_biased_sampling
python
blackjax-devs/blackjax
blackjax/mcmc/proposal.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/proposal.py
Apache-2.0
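The biased progressive sampling above can be sketched in plain Python. This is not BlackJAX's JAX implementation: the state layout and the exact weight bookkeeping (log-sum-exp of the two subtree weights) are assumptions based on the docstring.

```python
import math
import random

def progressive_biased_sampling(rng: random.Random,
                                log_w_current: float,
                                log_w_new: float):
    """Keep the new proposal with probability min(1, w_new / w_current).

    Unlike uniform sampling, this favors the new proposal and so biases
    the transition away from the trajectory's initial state.
    """
    p_new = min(1.0, math.exp(log_w_new - log_w_current))
    take_new = rng.random() < p_new
    # Combined weight of the merged trajectory: log(w_current + w_new),
    # computed stably as a log-sum-exp.
    hi, lo = max(log_w_current, log_w_new), min(log_w_current, log_w_new)
    log_w_sum = hi + math.log1p(math.exp(lo - hi))
    return take_new, log_w_sum

rng = random.Random(0)
# A much heavier new proposal is always taken (p_new clips to 1).
took_new, merged_log_w = progressive_biased_sampling(rng, 0.0, 50.0)
```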
def compute_asymmetric_acceptance_ratio(transition_energy_fn: Callable) -> Callable: """Generate a meta function to compute the transition between two states. In particular, both states are used to compute the energies to consider in weighting the proposal, to account for asymmetries. Parameters -...
Generate a meta function to compute the transition between two states. In particular, both states are used to compute the energies to consider in weighting the proposal, to account for asymmetries. Parameters ---------- transition_energy_fn A function that computes the energy of a transiti...
compute_asymmetric_acceptance_ratio
python
blackjax-devs/blackjax
blackjax/mcmc/proposal.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/proposal.py
Apache-2.0
def static_binomial_sampling( rng_key: PRNGKey, log_p_accept: float, proposal, new_proposal ): """Accept or reject a proposal. In the static setting, the probability with which the new proposal is accepted is a function of the difference in energy between the previous and the current states. If the...
Accept or reject a proposal. In the static setting, the probability with which the new proposal is accepted is a function of the difference in energy between the previous and the current states. If the current energy is lower than the previous one then the new proposal is accepted with probability 1. ...
static_binomial_sampling
python
blackjax-devs/blackjax
blackjax/mcmc/proposal.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/proposal.py
Apache-2.0
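The static accept/reject rule described above reduces to a one-line Metropolis test; a minimal pure-Python sketch (BlackJAX's version is vectorized JAX code operating on proposal objects):

```python
import math
import random

def static_binomial_accept(rng: random.Random, delta_energy: float) -> bool:
    """Accept the new proposal with probability min(1, exp(-delta_energy)).

    A proposal that lowers the energy (delta_energy <= 0) is always
    accepted; otherwise acceptance decays exponentially with the
    energy increase.
    """
    return rng.random() < min(1.0, math.exp(-delta_energy))

rng = random.Random(42)
# Downhill moves are always accepted.
downhill = all(static_binomial_accept(rng, -0.5) for _ in range(100))
# Large uphill moves are almost never accepted (p = exp(-10) ~ 4.5e-5).
uphill = sum(static_binomial_accept(rng, 10.0) for _ in range(1000))
```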
def nonreversible_slice_sampling( slice: Array, delta_energy: float, proposal, new_proposal ): """Slice sampling for non-reversible Metropolis-Hastings update. Performs a non-reversible update of a uniform [0, 1] value for Metropolis-Hastings accept/reject decisions :cite:p:`neal2020non`, in addition ...
Slice sampling for non-reversible Metropolis-Hastings update. Performs a non-reversible update of a uniform [0, 1] value for Metropolis-Hastings accept/reject decisions :cite:p:`neal2020non`, in addition to the accept/reject step of a current state and new proposal.
nonreversible_slice_sampling
python
blackjax-devs/blackjax
blackjax/mcmc/proposal.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/proposal.py
Apache-2.0
def normal(sigma: Array) -> Callable: """Normal Random Walk proposal. Propose a new position such that its distance to the current position is normally distributed. Suitable for continuous variables. Parameters ---------- sigma: vector or matrix that contains the standard deviation of th...
Normal Random Walk proposal. Propose a new position such that its distance to the current position is normally distributed. Suitable for continuous variables. Parameters ---------- sigma: vector or matrix that contains the standard deviation of the centered normal distribution from w...
normal
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
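The normal random-walk proposal amounts to adding a centered Gaussian step to the current position. A sketch for a flat list of variables with a per-component `sigma` (the docstring also allows a matrix, which this sketch does not cover):

```python
import random

def normal_proposal(sigma):
    """Return a step generator with independent N(0, sigma_i^2) components."""
    def step(rng: random.Random):
        return [rng.gauss(0.0, s) for s in sigma]
    return step

def additive_move(rng, position, proposal):
    # New position = current position + random step.
    return [x + dx for x, dx in zip(position, proposal(rng))]

rng = random.Random(1)
prop = normal_proposal([0.0, 0.0])
# With zero standard deviations the chain does not move at all.
unmoved = additive_move(rng, [1.0, 2.0], prop)
```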
def build_additive_step(): """Build a Random Walk Rosenbluth-Metropolis-Hastings kernel Returns ------- A kernel that takes a rng_key and a Pytree that contains the current state of the chain and that returns a new state of the chain along with information about the transition. """ def...
Build a Random Walk Rosenbluth-Metropolis-Hastings kernel Returns ------- A kernel that takes a rng_key and a Pytree that contains the current state of the chain and that returns a new state of the chain along with information about the transition.
build_additive_step
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def additive_step_random_walk( logdensity_fn: Callable, random_step: Callable ) -> SamplingAlgorithm: """Implements the user interface for the Additive Step RMH Examples -------- A new kernel can be initialized and used with the following code: .. code:: rw = blackjax.additive_step_r...
Implements the user interface for the Additive Step RMH Examples -------- A new kernel can be initialized and used with the following code: .. code:: rw = blackjax.additive_step_random_walk(logdensity_fn, random_step) state = rw.init(position) new_state, info = rw.step(rng_ke...
additive_step_random_walk
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def build_irmh() -> Callable: """ Build an Independent Random Walk Rosenbluth-Metropolis-Hastings kernel. This implies that the proposal distribution does not depend on the particle being mutated :cite:p:`wang2022exact`. Returns ------- A kernel that takes a rng_key and a Pytree that contains t...
Build an Independent Random Walk Rosenbluth-Metropolis-Hastings kernel. This implies that the proposal distribution does not depend on the particle being mutated :cite:p:`wang2022exact`. Returns ------- A kernel that takes a rng_key and a Pytree that contains the current state of the chain and...
build_irmh
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def kernel( rng_key: PRNGKey, state: RWState, logdensity_fn: Callable, proposal_distribution: Callable, proposal_logdensity_fn: Optional[Callable] = None, ) -> tuple[RWState, RWInfo]: """ Parameters ---------- proposal_distribution ...
Parameters ---------- proposal_distribution A function that, given a PRNGKey, is able to produce a sample in the same domain of the target distribution. proposal_logdensity_fn: For non-symmetric proposals, a function that returns the log-density ...
kernel
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def irmh_as_top_level_api( logdensity_fn: Callable, proposal_distribution: Callable, proposal_logdensity_fn: Optional[Callable] = None, ) -> SamplingAlgorithm: """Implements the (basic) user interface for the independent RMH. Examples -------- A new kernel can be initialized and used with ...
Implements the (basic) user interface for the independent RMH. Examples -------- A new kernel can be initialized and used with the following code: .. code:: rmh = blackjax.irmh(logdensity_fn, proposal_distribution) state = rmh.init(position) new_state, info = rmh.step(rng_key...
irmh_as_top_level_api
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def build_rmh(): """Build a Rosenbluth-Metropolis-Hastings kernel. Returns ------- A kernel that takes a rng_key and a Pytree that contains the current state of the chain and that returns a new state of the chain along with information about the transition. """ def kernel( rng...
Build a Rosenbluth-Metropolis-Hastings kernel. Returns ------- A kernel that takes a rng_key and a Pytree that contains the current state of the chain and that returns a new state of the chain along with information about the transition.
build_rmh
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def kernel( rng_key: PRNGKey, state: RWState, logdensity_fn: Callable, transition_generator: Callable, proposal_logdensity_fn: Optional[Callable] = None, ) -> tuple[RWState, RWInfo]: """Move the chain by one step using the Rosenbluth Metropolis Hastings algori...
Move the chain by one step using the Rosenbluth Metropolis Hastings algorithm. Parameters ---------- rng_key: The pseudo-random number generator key used to generate random numbers. logdensity_fn: A function that returns the log-probability at a...
kernel
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def rmh_as_top_level_api( logdensity_fn: Callable, proposal_generator: Callable[[PRNGKey, ArrayLikeTree], ArrayTree], proposal_logdensity_fn: Optional[Callable[[ArrayLikeTree], ArrayTree]] = None, ) -> SamplingAlgorithm: """Implements the user interface for the RMH. Examples -------- A new...
Implements the user interface for the RMH. Examples -------- A new kernel can be initialized and used with the following code: .. code:: rmh = blackjax.rmh(logdensity_fn, proposal_generator) state = rmh.init(position) new_state, info = rmh.step(rng_key, state) We can JIT...
rmh_as_top_level_api
python
blackjax-devs/blackjax
blackjax/mcmc/random_walk.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/random_walk.py
Apache-2.0
def as_top_level_api( logdensity_fn: Callable, step_size: float, mass_matrix: Union[metrics.Metric, Callable], num_integration_steps: int, *, divergence_threshold: int = 1000, integrator: Callable = integrators.implicit_midpoint, ) -> SamplingAlgorithm: """A Riemannian Manifold Hamiltoni...
A Riemannian Manifold Hamiltonian Monte Carlo kernel Of note, this kernel is simply an alias of the ``hmc`` kernel with a different choice of default integrator (``implicit_midpoint`` instead of ``velocity_verlet``) since RMHMC is typically used for Hamiltonian systems that are not separable. Para...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/mcmc/rmhmc.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/rmhmc.py
Apache-2.0
def _leaf_idx_to_ckpt_idxs(n): """Find the checkpoint id from a step number.""" # computes the number of non-zero bits except the last bit # e.g. 6 -> 2, 7 -> 2, 13 -> 2 idx_max = jnp.bitwise_count(n >> 1).astype(jnp.int32) # computes the number of contiguous last non-zero bits ...
Find the checkpoint id from a step number.
_leaf_idx_to_ckpt_idxs
python
blackjax-devs/blackjax
blackjax/mcmc/termination.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/termination.py
Apache-2.0
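The bit tricks in `_leaf_idx_to_ckpt_idxs` translate to plain Python as below. The `idx_max` values match the examples in the source comment (`6 -> 2, 7 -> 2, 13 -> 2`); the exact `idx_min` offset is my reading of the "contiguous last non-zero bits" comment and should be treated as an assumption.

```python
def leaf_idx_to_ckpt_idxs(n: int):
    """Map a leaf (step) number to a range of checkpoint indices."""
    # Number of non-zero bits of n, ignoring the last bit.
    idx_max = bin(n >> 1).count("1")
    # (~n & (n + 1)) - 1 masks the contiguous trailing set bits of n.
    num_subtrees = bin((~n & (n + 1)) - 1).count("1")
    idx_min = idx_max - num_subtrees + 1
    return idx_min, idx_max
```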
def _is_iterative_turning(checkpoints, momentum_sum, momentum): """Checks whether there is a U-turn in the iteratively built expanded trajectory. These checks only need to be performed at specific points. """ r, _ = jax.flatten_util.ravel_pytree(momentum) r_sum, _ = jax.flatten...
Checks whether there is a U-turn in the iteratively built expanded trajectory. These checks only need to be performed at specific points.
_is_iterative_turning
python
blackjax-devs/blackjax
blackjax/mcmc/termination.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/termination.py
Apache-2.0
def append_to_trajectory(trajectory: Trajectory, state: IntegratorState) -> Trajectory: """Append a state to the (right of the) trajectory to form a new trajectory.""" momentum_sum = jax.tree_util.tree_map( jnp.add, trajectory.momentum_sum, state.momentum ) return Trajectory( trajectory....
Append a state to the (right of the) trajectory to form a new trajectory.
append_to_trajectory
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def reorder_trajectories( direction: int, trajectory: Trajectory, new_trajectory: Trajectory ) -> tuple[Trajectory, Trajectory]: """Order the two trajectories depending on the direction.""" return jax.lax.cond( direction > 0, lambda _: ( trajectory, new_trajectory, ...
Order the two trajectories depending on the direction.
reorder_trajectories
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def static_integration( integrator: Callable, direction: int = 1, ) -> Callable: """Generate a trajectory by integrating several times in one direction.""" def integrate( initial_state: IntegratorState, step_size, num_integration_steps ) -> IntegratorState: directed_step_size = jax....
Generate a trajectory by integrating several times in one direction.
static_integration
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
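`static_integration` just applies the integrator a fixed number of times with a signed step size. A sketch with a scalar state and a toy one-step function standing in for the symplectic integrator:

```python
def static_integration(one_step, direction: int = 1):
    """Integrate `num_integration_steps` times in one direction."""
    def integrate(initial_state, step_size, num_integration_steps):
        directed_step_size = direction * step_size
        state = initial_state
        for _ in range(num_integration_steps):
            state = one_step(state, directed_step_size)
        return state
    return integrate

# Toy "integrator": move a scalar state by the (signed) step size.
integrate = static_integration(lambda s, h: s + h, direction=-1)
final = integrate(0.0, 0.1, 10)  # ten backward steps of size 0.1
```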
def dynamic_progressive_integration( integrator: Callable, kinetic_energy: Callable, update_termination_state: Callable, is_criterion_met: Callable, divergence_threshold: float, ): """Integrate a trajectory and update the proposal sequentially in one direction until the termination criterion...
Integrate a trajectory and update the proposal sequentially in one direction until the termination criterion is met. Parameters ---------- integrator The symplectic integrator used to integrate the hamiltonian trajectory. kinetic_energy Function to compute the current value of the k...
dynamic_progressive_integration
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def integrate( rng_key: PRNGKey, initial_state: IntegratorState, direction: int, termination_state, max_num_steps: int, step_size, initial_energy, ): """Integrate the trajectory starting from `initial_state` and update the proposal sequentially...
Integrate the trajectory starting from `initial_state` and update the proposal sequentially (hence progressive) until the termination criterion is met (hence dynamic). Parameters ---------- rng_key Key used by JAX's random number generator. initial_state ...
integrate
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def do_keep_integrating(loop_state): """Decide whether we should continue integrating the trajectory""" integration_state, (is_diverging, has_terminated) = loop_state return ( (integration_state.step < max_num_steps) & ~has_terminated &...
Decide whether we should continue integrating the trajectory
do_keep_integrating
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def dynamic_recursive_integration( integrator: Callable, kinetic_energy: Callable, uturn_check_fn: Callable, divergence_threshold: float, use_robust_uturn_check: bool = False, ): """Integrate a trajectory and update the proposal recursively in Python until the termination criterion is met. ...
Integrate a trajectory and update the proposal recursively in Python until the termination criterion is met. This is the implementation of Algorithm 6 from :cite:p:`hoffman2014no` with multinomial sampling. The implementation here is mostly for validating the progressive implementation to make sure the ...
dynamic_recursive_integration
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def buildtree_integrate( rng_key: PRNGKey, initial_state: IntegratorState, direction: int, tree_depth: int, step_size, initial_energy: float, ): """Integrate the trajectory starting from `initial_state` and update the proposal recursively with tree dou...
Integrate the trajectory starting from `initial_state` and update the proposal recursively with tree doubling until the termination criterion is met. The function `buildtree_integrate` calls itself for tree_depth > 0, thus invokes the recursive scheme that builds a trajectory by doubling a bina...
buildtree_integrate
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def dynamic_multiplicative_expansion( trajectory_integrator: Callable, uturn_check_fn: Callable, max_num_expansions: int = 10, rate: int = 2, ) -> Callable: """Sample a trajectory and update the proposal sequentially until the termination criterion is met. The trajectory is sampled with the...
Sample a trajectory and update the proposal sequentially until the termination criterion is met. The trajectory is sampled with the following procedure: 1. Pick a direction at random; 2. Integrate `num_step` steps in this direction; 3. If the integration has stopped prematurely, do not update the p...
dynamic_multiplicative_expansion
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def do_keep_expanding(loop_state) -> bool: """Determine whether we need to keep expanding the trajectory.""" expansion_state, (is_diverging, is_turning) = loop_state return ( (expansion_state.step < max_num_expansions) & ~is_diverging &...
Determine whether we need to keep expanding the trajectory.
do_keep_expanding
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def expand_once(loop_state): """Expand the current trajectory. At each step we draw a direction at random, build a subtrajectory starting from the leftmost or rightmost point of the current trajectory that is twice as long as the current trajectory. Once tha...
Expand the current trajectory. At each step we draw a direction at random, build a subtrajectory starting from the leftmost or rightmost point of the current trajectory that is twice as long as the current trajectory. Once that is done, possibly update the current propo...
expand_once
python
blackjax-devs/blackjax
blackjax/mcmc/trajectory.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/mcmc/trajectory.py
Apache-2.0
def dual_averaging( t0: int = 10, gamma: float = 0.05, kappa: float = 0.75 ) -> tuple[Callable, Callable, Callable]: """Find the state that minimizes an objective function using a primal-dual subgradient method. See :cite:p:`nesterov2009primal` for a detailed explanation of the algorithm and its mathem...
Find the state that minimizes an objective function using a primal-dual subgradient method. See :cite:p:`nesterov2009primal` for a detailed explanation of the algorithm and its mathematical properties. Parameters ---------- t0: float >= 0 Free parameter that stabilizes the initial iter...
dual_averaging
python
blackjax-devs/blackjax
blackjax/optimizers/dual_averaging.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/dual_averaging.py
Apache-2.0
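A plain-Python sketch of the primal-dual subgradient updates used for step-size adaptation. The state layout, the fixed `mu` parameter, and the exact averaging scheme are assumptions based on the docstring and on :cite:p:`nesterov2009primal`; BlackJAX's version is written in JAX.

```python
import math

def dual_averaging(t0: int = 10, gamma: float = 0.05, kappa: float = 0.75,
                   mu: float = 0.0):
    """Return (init, update) closures for one-dimensional dual averaging."""
    def init(x_init: float):
        # (current iterate, averaged iterate, averaged gradient, step)
        return (x_init, x_init, 0.0, 1)

    def update(state, gradient: float):
        x, x_avg, g_avg, step = state
        eta = 1.0 / (step + t0)
        g_avg = (1.0 - eta) * g_avg + eta * gradient   # running gradient average
        x = mu - (math.sqrt(step) / gamma) * g_avg     # primal update
        weight = step ** (-kappa)
        x_avg = weight * x + (1.0 - weight) * x_avg    # Polyak-style averaging
        return (x, x_avg, g_avg, step + 1)

    return init, update

init, update = dual_averaging()
state = init(0.0)
for _ in range(50):
    state = update(state, 1.0)  # a constant positive gradient pushes x down
```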
def init(x_init: float) -> DualAveragingState: """Initialize the state of the dual averaging scheme. The parameter :math:`\\mu` is set to :math:`\\log(10 x_{init})` where :math:`x_{init}` is the initial value of the state. """ mu: float = jnp.log(10 * x_init) step = 1 ...
Initialize the state of the dual averaging scheme. The parameter :math:`\mu` is set to :math:`\log(10 x_{init})` where :math:`x_{init}` is the initial value of the state.
init
python
blackjax-devs/blackjax
blackjax/optimizers/dual_averaging.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/dual_averaging.py
Apache-2.0
def update(da_state: DualAveragingState, gradient) -> DualAveragingState: """Update the state of the Dual Averaging adaptive algorithm. Parameters ---------- gradient: The gradient of the function to optimize with respect to the state `x`, computed at the current...
Update the state of the Dual Averaging adaptive algorithm. Parameters ---------- gradient: The gradient of the function to optimize with respect to the state `x`, computed at the current value of `x`. da_state: The current state of the dual averaging ...
update
python
blackjax-devs/blackjax
blackjax/optimizers/dual_averaging.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/dual_averaging.py
Apache-2.0
def minimize_lbfgs( fun: Callable, x0: ArrayLikeTree, maxiter: int = 30, maxcor: float = 10, gtol: float = 1e-08, ftol: float = 1e-05, maxls: int = 1000, **lbfgs_kwargs, ) -> tuple[OptStep, LBFGSHistory]: """ Minimize a function using L-BFGS Parameters ---------- fun...
Minimize a function using L-BFGS Parameters ---------- fun: function of the form f(x) where x is a pytree and returns a real scalar. The function should be composed of operations with vjp defined. x0: initial guess maxiter: maximum number of iterations maxco...
minimize_lbfgs
python
blackjax-devs/blackjax
blackjax/optimizers/lbfgs.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/lbfgs.py
Apache-2.0
def lbfgs_recover_alpha(alpha_lm1, s_l, z_l, epsilon=1e-12): """ Compute diagonal elements of the inverse Hessian approximation from the optimization path. It implements the inner loop body of Algorithm 3 in :cite:p:`zhang2022pathfinder`. Parameters ---------- alpha_lm1 The diagonal element o...
Compute diagonal elements of the inverse Hessian approximation from the optimization path. It implements the inner loop body of Algorithm 3 in :cite:p:`zhang2022pathfinder`. Parameters ---------- alpha_lm1 The diagonal element of the inverse Hessian approximation of the previous iteration s_...
lbfgs_recover_alpha
python
blackjax-devs/blackjax
blackjax/optimizers/lbfgs.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/lbfgs.py
Apache-2.0
def lbfgs_inverse_hessian_factors(S, Z, alpha): """ Calculates factors for the inverse Hessian factored representation. It implements formula II.2 of: Pathfinder: Parallel quasi-Newton variational inference, Lu Zhang et al., arXiv:2108.03782 """ param_dims = S.shape[-1] StZ = S.T @ Z R = j...
Calculates factors for the inverse Hessian factored representation. It implements formula II.2 of: Pathfinder: Parallel quasi-Newton variational inference, Lu Zhang et al., arXiv:2108.03782
lbfgs_inverse_hessian_factors
python
blackjax-devs/blackjax
blackjax/optimizers/lbfgs.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/lbfgs.py
Apache-2.0
def lbfgs_inverse_hessian_formula_2(alpha, beta, gamma): """ Calculates the inverse Hessian from factors as in formula II.3 of: Pathfinder: Parallel quasi-Newton variational inference, Lu Zhang et al., arXiv:2108.03782 """ param_dims = alpha.shape[0] dsqrt_alpha = jnp.diag(jnp.sqrt(alpha)) ids...
Calculates the inverse Hessian from factors as in formula II.3 of: Pathfinder: Parallel quasi-Newton variational inference, Lu Zhang et al., arXiv:2108.03782
lbfgs_inverse_hessian_formula_2
python
blackjax-devs/blackjax
blackjax/optimizers/lbfgs.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/lbfgs.py
Apache-2.0
def bfgs_sample(rng_key, num_samples, position, grad_position, alpha, beta, gamma): """ Draws approximate samples of the target distribution. It implements Algorithm 4 in: Pathfinder: Parallel quasi-Newton variational inference, Lu Zhang et al., arXiv:2108.03782 """ if not isinstance(num_samples, ...
Draws approximate samples of the target distribution. It implements Algorithm 4 in: Pathfinder: Parallel quasi-Newton variational inference, Lu Zhang et al., arXiv:2108.03782
bfgs_sample
python
blackjax-devs/blackjax
blackjax/optimizers/lbfgs.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/optimizers/lbfgs.py
Apache-2.0
def as_top_level_api( logdensity_estimator: Callable, gradient_estimator: Callable, zeta: float = 1, num_partitions: int = 512, energy_gap: float = 100, min_energy: float = 0, ) -> SamplingAlgorithm: r"""Implements the (basic) user interface for the Contour SGLD kernel. Parameters -...
Implements the (basic) user interface for the Contour SGLD kernel. Parameters ---------- logdensity_estimator A function that returns an estimation of the model's logdensity given a position and a batch of data. gradient_estimator A function that takes a position, a batch of dat...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/sgmcmc/csgld.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/csgld.py
Apache-2.0
def overdamped_langevin(): """Euler solver for overdamped Langevin diffusion. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`. """ def one_step( rng_key: PRNGKey, position: ArrayLikeTree, logdensity_grad: ArrayLikeTree, step_size: float, temperatu...
Euler solver for overdamped Langevin diffusion. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`.
overdamped_langevin
python
blackjax-devs/blackjax
blackjax/sgmcmc/diffusions.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/diffusions.py
Apache-2.0
def sghmc(alpha: float = 0.01, beta: float = 0): """Euler solver for the diffusion equation of the SGHMC algorithm :cite:p:`chen2014stochastic`, with parameters alpha and beta scaled according to :cite:p:`ma2015complete`. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`. """ def one_...
Euler solver for the diffusion equation of the SGHMC algorithm :cite:p:`chen2014stochastic`, with parameters alpha and beta scaled according to :cite:p:`ma2015complete`. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`.
sghmc
python
blackjax-devs/blackjax
blackjax/sgmcmc/diffusions.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/diffusions.py
Apache-2.0
def sgnht(alpha: float = 0.01, beta: float = 0): """Euler solver for the diffusion equation of the SGNHT algorithm :cite:p:`ding2014bayesian`. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`. """ def one_step( rng_key: PRNGKey, position: ArrayLikeTree, momentum: ...
Euler solver for the diffusion equation of the SGNHT algorithm :cite:p:`ding2014bayesian`. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`.
sgnht
python
blackjax-devs/blackjax
blackjax/sgmcmc/diffusions.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/diffusions.py
Apache-2.0
def logdensity_estimator( logprior_fn: Callable, loglikelihood_fn: Callable, data_size: int ) -> Callable: """Builds a simple estimator for the log-density. This estimator first appeared in :cite:p:`robbins1951stochastic`. The `logprior_fn` function has a single argument: the current position (value o...
Builds a simple estimator for the log-density. This estimator first appeared in :cite:p:`robbins1951stochastic`. The `logprior_fn` function has a single argument: the current position (value of parameters). The `loglikelihood_fn` takes two arguments: the current position and a batch of data; if there ...
logdensity_estimator
python
blackjax-devs/blackjax
blackjax/sgmcmc/gradients.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/gradients.py
Apache-2.0
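The estimator described above is the log-prior plus the minibatch log-likelihood sum rescaled by `data_size / batch_size`, which makes the minibatch sum an unbiased stand-in for the full-data sum. A sketch over plain Python lists (BlackJAX works over pytrees and batched arrays):

```python
def logdensity_estimator(logprior_fn, loglikelihood_fn, data_size: int):
    """Minibatch estimate of the log-posterior density."""
    def estimate(position, minibatch):
        # Rescale the minibatch sum so its expectation matches the full sum.
        scale = data_size / len(minibatch)
        loglik = sum(loglikelihood_fn(position, x) for x in minibatch)
        return logprior_fn(position) + scale * loglik
    return estimate

# Toy model: flat prior, log-likelihood equal to the data point itself.
estimate = logdensity_estimator(lambda p: 0.0, lambda p, x: x, data_size=10)
value = estimate(0.0, [1.0, 2.0])  # (10 / 2) * (1.0 + 2.0) = 15.0
```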
def logdensity_estimator_fn( position: ArrayLikeTree, minibatch: ArrayLikeTree ) -> ArrayTree: """Return an approximation of the log-posterior density. Parameters ---------- position The current value of the random variables. batch The current...
Return an approximation of the log-posterior density. Parameters ---------- position The current value of the random variables. batch The current batch of data Returns ------- An approximation of the value of the log-posterior density fun...
logdensity_estimator_fn
python
blackjax-devs/blackjax
blackjax/sgmcmc/gradients.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/gradients.py
Apache-2.0
def grad_estimator( logprior_fn: Callable, loglikelihood_fn: Callable, data_size: int ) -> Callable: """Build a simple estimator for the gradient of the log-density.""" logdensity_estimator_fn = logdensity_estimator( logprior_fn, loglikelihood_fn, data_size ) return jax.grad(logdensity_esti...
Build a simple estimator for the gradient of the log-density.
grad_estimator
python
blackjax-devs/blackjax
blackjax/sgmcmc/gradients.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/gradients.py
Apache-2.0
def control_variates( logdensity_grad_estimator: Callable, centering_position: ArrayLikeTree, data: ArrayLikeTree, ) -> Callable: """Builds a control variate gradient estimator :cite:p:`baker2019control`. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`. Parameters ---------- ...
Builds a control variate gradient estimator :cite:p:`baker2019control`. This algorithm was ported from :cite:p:`coullon2022sgmcmcjax`. Parameters ---------- logdensity_grad_estimator A function that approximates the target's gradient function. data The full dataset. centering_p...
control_variates
python
blackjax-devs/blackjax
blackjax/sgmcmc/gradients.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/gradients.py
Apache-2.0
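The control-variate estimator of Baker et al. corrects each minibatch gradient relative to a full-data gradient evaluated once at a fixed centering position. A scalar plain-Python sketch of that standard form (the blackjax version works on pytrees and composes with `logdensity_estimator`):

```python
def control_variates(grad_one, centering_position, data):
    # Full-data gradient evaluated once at the centering point; minibatch
    # estimates are then shifted by their deviation from it.
    full_grad_center = sum(grad_one(centering_position, x) for x in data)
    n = len(data)

    def cv_grad(position, minibatch):
        correction = sum(
            grad_one(position, x) - grad_one(centering_position, x)
            for x in minibatch
        )
        return full_grad_center + (n / len(minibatch)) * correction

    return cv_grad


# with a per-datapoint gradient linear in the position, the CV estimate is exact
grad_one = lambda theta, x: x - theta
cv = control_variates(grad_one, 0.0, [1.0, 2.0, 3.0, 4.0])
print(cv(1.0, [1.0, 2.0]))  # 6.0, the exact full-data gradient at theta = 1
```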
def cv_grad_estimator_fn( position: ArrayLikeTree, minibatch: ArrayLikeTree ) -> ArrayTree: """Return an approximation of the log-posterior density. Parameters ---------- position The current value of the random variables. batch The current ba...
Return an approximation of the log-posterior density. Parameters ---------- position The current value of the random variables. batch The current batch of data. The first dimension is assumed to be the batch dimension. Returns -------...
cv_grad_estimator_fn
python
blackjax-devs/blackjax
blackjax/sgmcmc/gradients.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/gradients.py
Apache-2.0
def build_kernel(alpha: float = 0.01, beta: float = 0) -> Callable: """Stochastic gradient Hamiltonian Monte Carlo (SgHMC) algorithm.""" integrator = diffusions.sghmc(alpha, beta) def kernel( rng_key: PRNGKey, position: ArrayLikeTree, grad_estimator: Callable, minibatch: Arr...
Stochastic gradient Hamiltonian Monte Carlo (SgHMC) algorithm.
build_kernel
python
blackjax-devs/blackjax
blackjax/sgmcmc/sghmc.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/sghmc.py
Apache-2.0
def as_top_level_api( grad_estimator: Callable, num_integration_steps: int = 10, alpha: float = 0.01, beta: float = 0, ) -> SamplingAlgorithm: """Implements the (basic) user interface for the SGHMC kernel. The general sghmc kernel builder (:meth:`blackjax.sgmcmc.sghmc.build_kernel`, alias `...
Implements the (basic) user interface for the SGHMC kernel. The general sghmc kernel builder (:meth:`blackjax.sgmcmc.sghmc.build_kernel`, alias `blackjax.sghmc.build_kernel`) can be cumbersome to manipulate. Since most users only need to specify the kernel parameters at initialization time, we provide ...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/sgmcmc/sghmc.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/sghmc.py
Apache-2.0
def build_kernel() -> Callable: """Stochastic gradient Langevin Dynamics (SgLD) algorithm.""" integrator = diffusions.overdamped_langevin() def kernel( rng_key: PRNGKey, position: ArrayLikeTree, grad_estimator: Callable, minibatch: ArrayLikeTree, step_size: float, ...
Stochastic gradient Langevin Dynamics (SgLD) algorithm.
build_kernel
python
blackjax-devs/blackjax
blackjax/sgmcmc/sgld.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/sgld.py
Apache-2.0
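The SgLD kernel above wraps one overdamped-Langevin update: a gradient step plus Gaussian noise scaled by the square root of twice the step size. A stdlib-only sketch of that recursion (the blackjax diffusion takes a gradient estimator and pytree positions instead):

```python
import math
import random


def sgld_step(rng, position, grad, step_size):
    # theta' = theta + h * grad_logposterior + sqrt(2h) * N(0, 1)
    noise = rng.gauss(0.0, 1.0)
    return position + step_size * grad + math.sqrt(2.0 * step_size) * noise


rng = random.Random(0)
# with step_size = 0 both the drift and the noise term vanish
print(sgld_step(rng, 1.0, 2.0, 0.0))  # 1.0
```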
def as_top_level_api( grad_estimator: Callable, ) -> SamplingAlgorithm: """Implements the (basic) user interface for the SGLD kernel. The general sgld kernel builder (:meth:`blackjax.sgmcmc.sgld.build_kernel`, alias `blackjax.sgld.build_kernel`) can be cumbersome to manipulate. Since most users onl...
Implements the (basic) user interface for the SGLD kernel. The general sgld kernel builder (:meth:`blackjax.sgmcmc.sgld.build_kernel`, alias `blackjax.sgld.build_kernel`) can be cumbersome to manipulate. Since most users only need to specify the kernel parameters at initialization time, we provide a he...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/sgmcmc/sgld.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/sgld.py
Apache-2.0
def as_top_level_api( grad_estimator: Callable, alpha: float = 0.01, beta: float = 0.0, ) -> SamplingAlgorithm: """Implements the (basic) user interface for the SGNHT kernel. The general sgnht kernel (:meth:`blackjax.sgmcmc.sgnht.build_kernel`, alias `blackjax.sgnht.build_kernel`) can be cumber...
Implements the (basic) user interface for the SGNHT kernel. The general sgnht kernel (:meth:`blackjax.sgmcmc.sgnht.build_kernel`, alias `blackjax.sgnht.build_kernel`) can be cumbersome to manipulate. Since most users only need to specify the kernel parameters at initialization time, we provide a helper...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/sgmcmc/sgnht.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/sgmcmc/sgnht.py
Apache-2.0
def build_kernel( logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, target_ess: float, root_solver: Callable = solver.dichotomy, **extra_parameters, ) -> Callable: r"""Build a Tempered SMC step using an adaptiv...
Build a Tempered SMC step using an adaptive schedule. Parameters ---------- logprior_fn: Callable A function that computes the log-prior density. loglikelihood_fn: Callable A function that returns the log-likelihood density. mcmc_kernel_factory: Callable A callable function ...
build_kernel
python
blackjax-devs/blackjax
blackjax/smc/adaptive_tempered.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/adaptive_tempered.py
Apache-2.0
def as_top_level_api( logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, mcmc_parameters: dict, resampling_fn: Callable, target_ess: float, root_solver: Callable = solver.dichotomy, num_mcmc_steps: int = 10, **extra_parameters, ) ->...
Implements the (basic) user interface for the Adaptive Tempered SMC kernel. Parameters ---------- logprior_fn The log-prior function of the model we wish to draw samples from. loglikelihood_fn The log-likelihood function of the model we wish to draw samples from. mcmc_step_fn ...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/smc/adaptive_tempered.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/adaptive_tempered.py
Apache-2.0
def step( rng_key: PRNGKey, state: SMCState, update_fn: Callable, weight_fn: Callable, resample_fn: Callable, num_resampled: Optional[int] = None, ) -> tuple[SMCState, SMCInfo]: """General SMC sampling step. `update_fn` here corresponds to the Markov kernel $M_{t+1}$, and `weight_fn` ...
General SMC sampling step. `update_fn` here corresponds to the Markov kernel $M_{t+1}$, and `weight_fn` corresponds to the potential function $G_t$. We first use `update_fn` to generate new particles from the current ones, weigh these particles using `weight_fn` and resample them with `resample_fn`. ...
step
python
blackjax-devs/blackjax
blackjax/smc/base.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/base.py
Apache-2.0
def update_and_take_last( mcmc_init_fn, tempered_logposterior_fn, shared_mcmc_step_fn, num_mcmc_steps, n_particles, ): """Given N particles, runs num_mcmc_steps of a kernel starting at each particle, and returns the last values, wasting the previous num_mcmc_steps - 1 samples per chain. ...
Given N particles, runs num_mcmc_steps of a kernel starting at each particle, and returns the last values, wasting the previous num_mcmc_steps - 1 samples per chain.
update_and_take_last
python
blackjax-devs/blackjax
blackjax/smc/base.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/base.py
Apache-2.0
def log_ess(log_weights: Array) -> float: """Compute the effective sample size. Parameters ---------- log_weights: 1D Array log-weights of the sample Returns ------- log_ess: float The logarithm of the effective sample size """ return 2 * jsp.special.logsumexp(log_...
Compute the effective sample size. Parameters ---------- log_weights: 1D Array log-weights of the sample Returns ------- log_ess: float The logarithm of the effective sample size
log_ess
python
blackjax-devs/blackjax
blackjax/smc/ess.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/ess.py
Apache-2.0
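The snippet above shows the whole computation: `log_ess = 2 * logsumexp(w) - logsumexp(2 * w)`. The same formula in stdlib Python, for illustration:

```python
import math


def logsumexp(xs):
    # numerically stable log-sum-exp
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))


def log_ess(log_weights):
    # log ESS = 2 * logsumexp(w) - logsumexp(2 * w)
    return 2 * logsumexp(log_weights) - logsumexp([2 * x for x in log_weights])


# equal weights over 10 particles give an effective sample size of 10
print(round(math.exp(log_ess([0.0] * 10)), 6))  # 10.0
```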
def ess_solver( logdensity_fn: Callable, particles: ArrayLikeTree, target_ess: float, max_delta: float, root_solver: Callable, ): """ESS solver for computing the next increment of SMC tempering. Parameters ---------- logdensity_fn: Callable The log probability function we wi...
ESS solver for computing the next increment of SMC tempering. Parameters ---------- logdensity_fn: Callable The log probability function we wish to sample from. particles: SMCState Current state of the tempered SMC algorithm target_ess: float The relative ESS targeted for th...
ess_solver
python
blackjax-devs/blackjax
blackjax/smc/ess.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/ess.py
Apache-2.0
def unshared_parameters_and_step_fn(mcmc_parameters, mcmc_step_fn): """Splits MCMC parameters into two dictionaries. The shared dictionary represents the parameters common to all chains, and the unshared are different per chain. Binds the step fn using the shared parameters. """ shared_mcmc_para...
Splits MCMC parameters into two dictionaries. The shared dictionary represents the parameters common to all chains, and the unshared are different per chain. Binds the step fn using the shared parameters.
unshared_parameters_and_step_fn
python
blackjax-devs/blackjax
blackjax/smc/from_mcmc.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/from_mcmc.py
Apache-2.0
def build_kernel( mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, update_strategy: Callable = update_and_take_last, ): """SMC step from MCMC kernels. Builds MCMC kernels from the input parameters, which may change across iterations. Moreover, it defines the way such ...
SMC step from MCMC kernels. Builds MCMC kernels from the input parameters, which may change across iterations. Moreover, it defines the way such kernels are used to update the particles. This layer adapts an API defined in terms of kernels (mcmc_step_fn and mcmc_init_fn) into an API that depends on an u...
build_kernel
python
blackjax-devs/blackjax
blackjax/smc/from_mcmc.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/from_mcmc.py
Apache-2.0
def build_kernel( smc_algorithm, logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, mcmc_parameter_update_fn: Callable[ [PRNGKey, SMCState, SMCInfo], Dict[str, ArrayTree] ], num_mcmc_steps: int = 10, ...
In the context of an SMC sampler (whose step_fn returns a state with a .particles attribute), there's an inner MCMC that is used to perturb/update each of the particles. This adaptation tunes some parameter of that MCMC, based on the particles. The parameter type must be a valid JAX type. Parameters ----...
build_kernel
python
blackjax-devs/blackjax
blackjax/smc/inner_kernel_tuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/inner_kernel_tuning.py
Apache-2.0
def as_top_level_api( smc_algorithm, logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, mcmc_parameter_update_fn: Callable[ [PRNGKey, SMCState, SMCInfo], Dict[str, ArrayTree] ], initial_parameter_value, ...
In the context of an SMC sampler (whose step_fn returns a state with a .particles attribute), there's an inner MCMC that is used to perturb/update each of the particles. This adaptation tunes some parameter of that MCMC, based on the particles. The parameter type must be a valid JAX type. Parameters ...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/smc/inner_kernel_tuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/inner_kernel_tuning.py
Apache-2.0
def init(particles: ArrayLikeTree, num_datapoints: int) -> PartialPosteriorsSMCState: """num_datapoints is the number of observations that could potentially be used in a partial posterior. Since the initial data_mask is all 0s, it means that no likelihood term will be added (only prior). """ num_pa...
num_datapoints is the number of observations that could potentially be used in a partial posterior. Since the initial data_mask is all 0s, it means that no likelihood term will be added (only prior).
init
python
blackjax-devs/blackjax
blackjax/smc/partial_posteriors_path.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/partial_posteriors_path.py
Apache-2.0
def build_kernel( mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, num_mcmc_steps: Optional[int], mcmc_parameters: ArrayTree, partial_logposterior_factory: Callable[[Array], Callable], update_strategy=update_and_take_last, ) -> Callable: """Build the Partial Poste...
Build the Partial Posteriors (data tempering) SMC kernel. The sequence of target distributions is built by progressively adding more datapoints to the likelihood. See Section 2.2 of https://arxiv.org/pdf/2007.11936 Parameters ---------- mcmc_step_fn The step function of the inner MCMC algorith...
build_kernel
python
blackjax-devs/blackjax
blackjax/smc/partial_posteriors_path.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/partial_posteriors_path.py
Apache-2.0
def as_top_level_api( mcmc_step_fn: Callable, mcmc_init_fn: Callable, mcmc_parameters: dict, resampling_fn: Callable, num_mcmc_steps, partial_logposterior_factory: Callable, update_strategy=update_and_take_last, ) -> SamplingAlgorithm: """A factory that wraps the kernel into a SamplingAl...
A factory that wraps the kernel into a SamplingAlgorithm object. See build_kernel for full documentation on the parameters.
as_top_level_api
python
blackjax-devs/blackjax
blackjax/smc/partial_posteriors_path.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/partial_posteriors_path.py
Apache-2.0
def esjd(m): """Implements ESJD (expected squared jumping distance). Inner Mahalanobis distance is computed using the Cholesky decomposition of M=LLt, and then inverting L. Whenever M is symmetric positive definite, a Cholesky decomposition exists. For example, if M is the Covariance Matrix...
Implements ESJD (expected squared jumping distance). Inner Mahalanobis distance is computed using the Cholesky decomposition of M=LLt, and then inverting L. Whenever M is symmetric positive definite, a Cholesky decomposition exists. For example, if M is the Covariance Matrix of Metropolis-Hasti...
esjd
python
blackjax-devs/blackjax
blackjax/smc/pretuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/pretuning.py
Apache-2.0
def update_parameter_distribution( key: PRNGKey, previous_param_samples: ArrayLikeTree, previous_particles: ArrayLikeTree, latest_particles: ArrayLikeTree, measure_of_chain_mixing: Callable, alpha: float, sigma_parameters: ArrayLikeTree, acceptance_probability: Array, ): """Given an ...
Given an existing parameter distribution that was used to mutate previous_particles into latest_particles, updates that parameter distribution by resampling from previous_param_samples after adding noise to those samples. The weights used are a linear function of the measure of chain mixing. Only works with...
update_parameter_distribution
python
blackjax-devs/blackjax
blackjax/smc/pretuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/pretuning.py
Apache-2.0
def build_pretune( mcmc_init_fn: Callable, mcmc_step_fn: Callable, alpha: float, sigma_parameters: ArrayLikeTree, n_particles: int, performance_of_chain_measure_factory: Callable = default_measure_factory, natural_parameters: Optional[List[str]] = None, positive_parameters: Optional[List...
Implements the pretuning procedure of Buchholz et al. (https://arxiv.org/pdf/1808.07730). The goal is to maintain a probability distribution of parameters, in order to assign different values to each inner MCMC chain. To have performant parameters for the distribution at step t, it takes a single step, measures t...
build_pretune
python
blackjax-devs/blackjax
blackjax/smc/pretuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/pretuning.py
Apache-2.0
def pretune_and_update(key, state: StateWithParameterOverride, logposterior): """ Updates the parameters that need to be pretuned and returns the rest. """ new_parameter_distribution, chain_mixing_measurement = pretune( key, state, logposterior ) old_parameter...
Updates the parameters that need to be pretuned and returns the rest.
pretune_and_update
python
blackjax-devs/blackjax
blackjax/smc/pretuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/pretuning.py
Apache-2.0
def build_kernel( smc_algorithm, logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, pretune_fn: Callable, num_mcmc_steps: int = 10, update_strategy=update_and_take_last, **extra_parameters, ) -> Callable: ...
In the context of an SMC sampler (whose step_fn returns a state with a .particles attribute), there's an inner MCMC that is used to perturb/update each of the particles. This adaptation tunes some parameter of that MCMC, based on the particles. The parameter type must be a valid JAX type. Parameters ----...
build_kernel
python
blackjax-devs/blackjax
blackjax/smc/pretuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/pretuning.py
Apache-2.0
def pretuned_step( rng_key: PRNGKey, state, num_mcmc_steps: int, mcmc_parameters: dict, logposterior_fn: Callable, log_weights_fn: Callable, ) -> tuple[smc.base.SMCState, SMCInfoWithParameterDistribution]: """Wraps the output of smc.from_mcmc.build_kernel into...
Wraps the output of smc.from_mcmc.build_kernel into a pretuning + step method. This one should be a subtype of the former, in the sense that any usage of the former can be replaced with an instance of this one.
pretuned_step
python
blackjax-devs/blackjax
blackjax/smc/pretuning.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/pretuning.py
Apache-2.0
def dichotomy(fun, min_delta, max_delta, eps=1e-4, max_iter=100): """Solves for delta by dichotomy. If max_delta is such that fun(max_delta) > 0, then we assume that max_delta can be used as an increment in the tempering. Parameters ---------- fun: Callable The decreasing function to s...
Solves for delta by dichotomy. If max_delta is such that fun(max_delta) > 0, then we assume that max_delta can be used as an increment in the tempering. Parameters ---------- fun: Callable The decreasing function to solve, we must have fun(min_delta) > 0, fun(max_delta) < 0 min_delta: ...
dichotomy
python
blackjax-devs/blackjax
blackjax/smc/solver.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/solver.py
Apache-2.0
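A bisection sketch matching the contract documented above: `fun` is decreasing with `fun(min_delta) > 0 > fun(max_delta)`, and if even `fun(max_delta) > 0` the full increment is returned:

```python
def dichotomy(fun, min_delta, max_delta, eps=1e-4, max_iter=100):
    # If the whole increment still keeps fun positive, use it as-is.
    if fun(max_delta) > 0:
        return max_delta
    lo, hi = min_delta, max_delta
    for _ in range(max_iter):
        if hi - lo < eps:
            break
        mid = 0.5 * (lo + hi)
        if fun(mid) > 0:
            lo = mid  # root is to the right
        else:
            hi = mid  # root is to the left
    return 0.5 * (lo + hi)


print(round(dichotomy(lambda d: 0.5 - d, 0.0, 1.0), 3))  # 0.5
```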
def build_kernel( logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, resampling_fn: Callable, update_strategy: Callable = update_and_take_last, update_particles_fn: Optional[Callable] = None, ) -> Callable: """Build the base Tempered SMC ke...
Build the base Tempered SMC kernel. Tempered SMC uses tempering to sample from a distribution given by .. math:: p(x) \propto p_0(x) \exp(-V(x)) \mathrm{d}x where :math:`p_0` is the prior distribution, typically easy to sample from and for which the density is easy to compute, and :math:`\exp...
build_kernel
python
blackjax-devs/blackjax
blackjax/smc/tempered.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/tempered.py
Apache-2.0
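In tempered SMC, raising the likelihood power from `lmbda` to `next_lmbda` reweights particle `i` by `exp((next_lmbda - lmbda) * loglik_i)`. A stdlib sketch of that incremental weighting:

```python
import math


def tempering_log_weights(loglik_values, lmbda, next_lmbda):
    # incremental log-weights for the move lmbda -> next_lmbda
    return [(next_lmbda - lmbda) * ll for ll in loglik_values]


def normalize(log_w):
    # stable softmax-style normalization
    m = max(log_w)
    z = sum(math.exp(x - m) for x in log_w)
    return [math.exp(x - m) / z for x in log_w]


w = normalize(tempering_log_weights([0.0, math.log(2.0)], 0.0, 1.0))
print([round(x, 4) for x in w])  # [0.3333, 0.6667]
```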
def kernel( rng_key: PRNGKey, state: TemperedSMCState, num_mcmc_steps: int, lmbda: float, mcmc_parameters: dict, ) -> tuple[TemperedSMCState, smc.base.SMCInfo]: """Move the particles one step using the Tempered SMC algorithm. Parameters ---------- ...
Move the particles one step using the Tempered SMC algorithm. Parameters ---------- rng_key JAX PRNGKey for randomness state Current state of the tempered SMC algorithm lmbda Current value of the tempering parameter mcmc_parameters ...
kernel
python
blackjax-devs/blackjax
blackjax/smc/tempered.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/tempered.py
Apache-2.0
def as_top_level_api( logprior_fn: Callable, loglikelihood_fn: Callable, mcmc_step_fn: Callable, mcmc_init_fn: Callable, mcmc_parameters: dict, resampling_fn: Callable, num_mcmc_steps: Optional[int] = 10, update_strategy=update_and_take_last, update_particles_fn=None, ) -> SamplingAl...
Implements the (basic) user interface for the Adaptive Tempered SMC kernel. Parameters ---------- logprior_fn The log-prior function of the model we wish to draw samples from. loglikelihood_fn The log-likelihood function of the model we wish to draw samples from. mcmc_step_fn ...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/smc/tempered.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/tempered.py
Apache-2.0
def update_waste_free( mcmc_init_fn, logposterior_fn, mcmc_step_fn, n_particles: int, p: int, num_resampled, num_mcmc_steps=None, ): """ Given M particles, mutates them using p-1 steps. Returns M*p particles, consisting of the initial ones plus all the intermediate steps, thus imple...
Given M particles, mutates them using p-1 steps. Returns M*p particles, consisting of the initial ones plus all the intermediate steps, thus implementing a waste-free update function. See Algorithm 2: https://arxiv.org/abs/2011.02328
update_waste_free
python
blackjax-devs/blackjax
blackjax/smc/waste_free.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/waste_free.py
Apache-2.0
def update(rng_key, position, step_parameters): """ Given the initial particles, runs a chain starting at each. Then combines the initial particles with all the particles generated at each step of each chain. """ states, infos = jax.vmap(mcmc_kernel)(rng_key, position, ste...
Given the initial particles, runs a chain starting at each. Then combines the initial particles with all the particles generated at each step of each chain.
update
python
blackjax-devs/blackjax
blackjax/smc/waste_free.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/waste_free.py
Apache-2.0
def update_scale_from_acceptance_rate( scales: jax.Array, acceptance_rates: jax.Array, target_acceptance_rate: float = 0.234, ) -> jax.Array: """ Given N chains from some MCMC algorithm, such as Random Walk Metropolis, and N scale factors, each associated with a different chain, updates the scale f...
Given N chains from some MCMC algorithm, such as Random Walk Metropolis, and N scale factors, each associated with a different chain, updates the scale factors taking into account the acceptance rates and the average acceptance rate. Under certain assumptions it is known that the optimal acceptance rate ...
update_scale_from_acceptance_rate
python
blackjax-devs/blackjax
blackjax/smc/tuning/from_kernel_info.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/smc/tuning/from_kernel_info.py
Apache-2.0
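One plausible shape for such an update is a Robbins-Monro-style multiplicative rule: grow a chain's scale when it accepts more often than the 0.234 target, shrink it otherwise. This is a sketch under that assumption; the exact rule blackjax uses may differ:

```python
import math


def update_scale_from_acceptance_rate(scales, acceptance_rates, target=0.234):
    # Multiplicative update: exp(a - target) > 1 when accepting too often,
    # < 1 when accepting too rarely. (Hypothetical rule, for illustration.)
    return [s * math.exp(a - target) for s, a in zip(scales, acceptance_rates)]


# a chain already at the target acceptance rate keeps its scale
print(update_scale_from_acceptance_rate([1.0], [0.234]))  # [1.0]
```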
def init( position: ArrayLikeTree, optimizer: GradientTransformation, *optimizer_args, **optimizer_kwargs, ) -> MFVIState: """Initialize the mean-field VI state.""" mu = jax.tree.map(jnp.zeros_like, position) rho = jax.tree.map(lambda x: -2.0 * jnp.ones_like(x), position) opt_state = opt...
Initialize the mean-field VI state.
init
python
blackjax-devs/blackjax
blackjax/vi/meanfield_vi.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/meanfield_vi.py
Apache-2.0
def step( rng_key: PRNGKey, state: MFVIState, logdensity_fn: Callable, optimizer: GradientTransformation, num_samples: int = 5, stl_estimator: bool = True, ) -> tuple[MFVIState, MFVIInfo]: """Approximate the target density using the mean-field approximation. Parameters ---------- ...
Approximate the target density using the mean-field approximation. Parameters ---------- rng_key Key for JAX's pseudo-random number generator. init_state Initial state of the mean-field approximation. logdensity_fn Function that represents the target log-density to approxima...
step
python
blackjax-devs/blackjax
blackjax/vi/meanfield_vi.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/meanfield_vi.py
Apache-2.0
def as_top_level_api( logdensity_fn: Callable, optimizer: GradientTransformation, num_samples: int = 100, ): """High-level implementation of Mean-Field Variational Inference. Parameters ---------- logdensity_fn A function that represents the log-density function associated with ...
High-level implementation of Mean-Field Variational Inference. Parameters ---------- logdensity_fn A function that represents the log-density function associated with the distribution we want to sample from. optimizer Optax optimizer to use to optimize the ELBO. num_samples ...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/vi/meanfield_vi.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/meanfield_vi.py
Apache-2.0
def approximate( rng_key: PRNGKey, logdensity_fn: Callable, initial_position: ArrayLikeTree, num_samples: int = 200, *, # lgbfs parameters maxiter=30, maxcor=10, maxls=1000, gtol=1e-08, ftol=1e-05, **lbfgs_kwargs, ) -> tuple[PathfinderState, PathfinderInfo]: """Pathfinde...
Pathfinder variational inference algorithm. Pathfinder locates normal approximations to the target density along a quasi-Newton optimization path, with local covariance estimated using the inverse Hessian estimates produced by the L-BFGS optimizer. This function implements algorithm 3 in :cite:p:`zhang...
approximate
python
blackjax-devs/blackjax
blackjax/vi/pathfinder.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/pathfinder.py
Apache-2.0
def path_finder_body_fn(rng_key, S, Z, alpha_l, theta, theta_grad): """The for loop body in Algorithm 1 of the Pathfinder paper.""" beta, gamma = lbfgs_inverse_hessian_factors(S.T, Z.T, alpha_l) phi, logq = bfgs_sample( rng_key=rng_key, num_samples=num_samples, ...
The for loop body in Algorithm 1 of the Pathfinder paper.
path_finder_body_fn
python
blackjax-devs/blackjax
blackjax/vi/pathfinder.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/pathfinder.py
Apache-2.0
def sample( rng_key: PRNGKey, state: PathfinderState, num_samples: Union[int, tuple[()], tuple[int]] = (), ) -> ArrayTree: """Draw from the Pathfinder approximation of the target distribution. Parameters ---------- rng_key PRNG key state PathfinderState containing inform...
Draw from the Pathfinder approximation of the target distribution. Parameters ---------- rng_key PRNG key state PathfinderState containing information for sampling num_samples Number of samples to draw Returns ------- Samples drawn from the approximate Pathfinde...
sample
python
blackjax-devs/blackjax
blackjax/vi/pathfinder.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/pathfinder.py
Apache-2.0
def as_top_level_api(logdensity_fn: Callable) -> PathFinderAlgorithm: """Implements the (basic) user interface for the pathfinder kernel. Pathfinder locates normal approximations to the target density along a quasi-Newton optimization path, with local covariance estimated using the inverse Hessian esti...
Implements the (basic) user interface for the pathfinder kernel. Pathfinder locates normal approximations to the target density along a quasi-Newton optimization path, with local covariance estimated using the inverse Hessian estimates produced by the L-BFGS optimizer. Pathfinder returns draws from the...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/vi/pathfinder.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/pathfinder.py
Apache-2.0
def init( initial_particles: ArrayLikeTree, kernel_parameters: dict[str, Any], optimizer: optax.GradientTransformation, ) -> SVGDState: """ Initializes Stein Variational Gradient Descent Algorithm. Parameters ---------- initial_particles Initial set of particles to start the opt...
Initializes Stein Variational Gradient Descent Algorithm. Parameters ---------- initial_particles Initial set of particles to start the optimization kernel_parameters Arguments to the kernel function optimizer Optax compatible optimizer, which conforms to the `optax.Gra...
init
python
blackjax-devs/blackjax
blackjax/vi/svgd.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/svgd.py
Apache-2.0
def kernel( state: SVGDState, grad_logdensity_fn: Callable, kernel: Callable, **grad_params, ) -> SVGDState: """ Performs one step of Stein Variational Gradient Descent. See Algorithm 1 of :cite:p:`liu2016stein`. Parameters ---------- ...
Performs one step of Stein Variational Gradient Descent. See Algorithm 1 of :cite:p:`liu2016stein`. Parameters ---------- state SVGDState object containing information about previous iteration grad_logdensity_fn gradient, or an estimate, of the ...
kernel
python
blackjax-devs/blackjax
blackjax/vi/svgd.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/svgd.py
Apache-2.0
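One SVGD update moves each particle along the average of kernel-weighted gradients plus a repulsive kernel-gradient term. A 1-D stdlib sketch of Algorithm 1 of Liu & Wang with an RBF kernel (blackjax operates on pytrees and vectorizes this with JAX):

```python
import math


def svgd_step(particles, grad_logp, length_scale, step_size):
    # phi(x) = (1/n) sum_j [ k(x_j, x) grad_logp(x_j) + d/dx_j k(x_j, x) ]
    n = len(particles)
    new = []
    for x in particles:
        phi = 0.0
        for xj in particles:
            k = math.exp(-((xj - x) ** 2) / length_scale)
            grad_k = 2.0 * (x - xj) / length_scale * k  # d k(xj, x) / d xj
            phi += k * grad_logp(xj) + grad_k
        new.append(x + step_size * phi / n)
    return new


# a single particle is pulled toward the mode of a standard normal
print([round(v, 6) for v in svgd_step([2.0], lambda x: -x, 1.0, 0.1)])  # [1.8]
```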
def update_median_heuristic(state: SVGDState) -> SVGDState: """Median heuristic for setting the bandwidth of RBF kernels. A reasonable middle-ground for choosing the `length_scale` of the RBF kernel is to pick the empirical median of the squared distance between particles. This strategy is called the m...
Median heuristic for setting the bandwidth of RBF kernels. A reasonable middle-ground for choosing the `length_scale` of the RBF kernel is to pick the empirical median of the squared distance between particles. This strategy is called the median heuristic.
update_median_heuristic
python
blackjax-devs/blackjax
blackjax/vi/svgd.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/svgd.py
Apache-2.0
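The median heuristic described above reduces to a short computation: take the median of the squared pairwise distances between particles. A 1-D stdlib sketch:

```python
import statistics


def median_heuristic(particles):
    # median of squared distances over all distinct particle pairs,
    # used as the RBF kernel's length_scale (1-D sketch)
    sq_dists = [
        (a - b) ** 2
        for i, a in enumerate(particles)
        for b in particles[i + 1:]
    ]
    return statistics.median(sq_dists)


print(median_heuristic([0.0, 1.0, 3.0]))  # pairwise: 1, 9, 4 -> median 4.0
```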
def as_top_level_api( grad_logdensity_fn: Callable, optimizer, kernel: Callable = rbf_kernel, update_kernel_parameters: Callable = update_median_heuristic, ): """Implements the (basic) user interface for the svgd algorithm :cite:p:`liu2016stein`. Parameters ---------- grad_logdensity_fn...
Implements the (basic) user interface for the svgd algorithm :cite:p:`liu2016stein`. Parameters ---------- grad_logdensity_fn gradient, or an estimate, of the target log density function to sample approximately from optimizer Optax compatible optimizer, which conforms to the `optax.Gra...
as_top_level_api
python
blackjax-devs/blackjax
blackjax/vi/svgd.py
https://github.com/blackjax-devs/blackjax/blob/master/blackjax/vi/svgd.py
Apache-2.0
def test_hmc(self): """Count the number of times the logdensity is compiled when using HMC. The logdensity is compiled twice: when initializing the state and when compiling the kernel. """ @chex.assert_max_traces(n=2) def logdensity_fn(x): return jscipy.sta...
Count the number of times the logdensity is compiled when using HMC. The logdensity is compiled twice: when initializing the state and when compiling the kernel.
test_hmc
python
blackjax-devs/blackjax
tests/test_compilation.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/test_compilation.py
Apache-2.0
def test_nuts(self): """Count the number of times the logdensity is compiled when using NUTS. The logdensity is compiled twice: when initializing the state and when compiling the kernel. """ @chex.assert_max_traces(n=2) def logdensity_fn(x): return jscipy.s...
Count the number of times the logdensity is compiled when using NUTS. The logdensity is compiled twice: when initializing the state and when compiling the kernel.
test_nuts
python
blackjax-devs/blackjax
tests/test_compilation.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/test_compilation.py
Apache-2.0
def test_hmc_warmup(self): """Count the number of times the logdensity is compiled when using window adaptation to adapt the value of the step size and the inverse mass matrix for the HMC algorithm. """ @chex.assert_max_traces(n=3) def logdensity_fn(x): retu...
Count the number of times the logdensity is compiled when using window adaptation to adapt the value of the step size and the inverse mass matrix for the HMC algorithm.
test_hmc_warmup
python
blackjax-devs/blackjax
tests/test_compilation.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/test_compilation.py
Apache-2.0
def test_nuts_warmup(self): """Count the number of times the logdensity is compiled when using window adaptation to adapt the value of the step size and the inverse mass matrix for the NUTS algorithm. """ @chex.assert_max_traces(n=3) def logdensity_fn(x): re...
Count the number of times the logdensity is compiled when using window adaptation to adapt the value of the step size and the inverse mass matrix for the NUTS algorithm.
test_nuts_warmup
python
blackjax-devs/blackjax
tests/test_compilation.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/test_compilation.py
Apache-2.0
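The warmup tests allow one extra trace (`n=3`) because window adaptation compiles its own one-step function, separate from both state initialization and the final sampling kernel. A compact pure-Python sketch of that accounting, with a made-up `jit_like` helper standing in for `jax.jit`:

```python
def jit_like(fn):
    """Toy jit: count one 'trace' per wrapper, on the first call only."""
    def wrapped(x):
        if not wrapped.traced:
            wrapped.traced = True
            jit_like.n_traces += 1
        return fn(x)
    wrapped.traced = False
    return wrapped

jit_like.n_traces = 0
logdensity = lambda x: -0.5 * x * x

init_fn = jit_like(logdensity)     # trace 1: building the initial state
warmup_fn = jit_like(logdensity)   # trace 2: the adaptation step kernel
kernel_fn = jit_like(logdensity)   # trace 3: the final sampling kernel

init_fn(1.0)
for _ in range(50):
    warmup_fn(1.0)                 # adaptation loop: no retracing
for _ in range(50):
    kernel_fn(1.0)                 # sampling loop: no retracing
```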
def check_compatible(self, initial_state, progress_bar): """ Runs 10 steps with `run_inference_algorithm` starting with `initial_state` and potentially a progress bar. """ _ = run_inference_algorithm( rng_key=self.key, initial_state=initial_state, ...
Runs 10 steps with `run_inference_algorithm` starting with `initial_state` and potentially a progress bar.
check_compatible
python
blackjax-devs/blackjax
tests/test_util.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/test_util.py
Apache-2.0
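Conceptually, `run_inference_algorithm` folds a transition kernel over a stream of PRNG keys from an initial state (or position), optionally reporting progress. A hypothetical stand-in in pure Python — `run_inference_loop`, the `step` signature, and the `random.Random` key stream are all invented here and are not blackjax's API:

```python
import random

def run_inference_loop(rng_seed, init, step, num_steps, progress=False):
    """Toy analogue of a sampling driver: fold a transition kernel
    over a deterministic stream of pseudo-random 'keys'."""
    rng = random.Random(rng_seed)
    state = init
    history = []
    for i in range(num_steps):
        if progress:
            print(f"step {i + 1}/{num_steps}", end="\r")
        key = rng.random()          # stand-in for a split JAX PRNG key
        state = step(key, state)
        history.append(state)
    return state, history

# Example: a random-walk "kernel" on a scalar state, 10 steps as in
# the test above.
final, states = run_inference_loop(
    rng_seed=0,
    init=0.0,
    step=lambda key, s: s + (key - 0.5),
    num_steps=10,
)
```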
def test_preconditioning_matrix(self, seed): """Test that two different ways of using a pre-conditioning matrix have exactly the same effect. We follow the discussion in Appendix G of the Barker 2020 paper. """ key = jax.random.key(seed) init_key, inference_key = jax.random.split(key, 2) ...
Test that two different ways of using a pre-conditioning matrix have exactly the same effect. We follow the discussion in Appendix G of the Barker 2020 paper.
test_preconditioning_matrix
python
blackjax-devs/blackjax
tests/mcmc/test_barker.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/mcmc/test_barker.py
Apache-2.0
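The equivalence this test relies on, for the noise-scaling part of a proposal, can be shown in a few lines: proposing `y = x + L z` directly is identical to whitening `u = L⁻¹ x`, proposing `u + z` with unit scale, and mapping back with `L`. The sketch below uses a diagonal `L` (the square root of a diagonal mass matrix) and fixed numbers; the Barker proposal's skew-acceptance step is not modeled here.

```python
import math

L = [2.0, 0.5]                      # sqrt of a diagonal mass matrix
x = [1.0, -3.0]
z = [0.7, -0.2]                     # the same base noise draw for both routes

# (a) precondition the proposal directly: y = x + L z
y_direct = [xi + Li * zi for xi, Li, zi in zip(x, L, z)]

# (b) whiten, propose with unit scale, unwhiten
u = [xi / Li for xi, Li in zip(x, L)]
u_new = [ui + zi for ui, zi in zip(u, z)]
y_transformed = [Li * ui for Li, ui in zip(L, u_new)]
```

Both routes produce the same proposal, which is why the test can demand exact agreement between the two parameterizations.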
def HarmonicOscillator(inv_mass_matrix, k=1.0, m=1.0): """Potential and kinetic energy of a harmonic oscillator.""" def neg_potential_energy(q): return -jnp.sum(0.5 * k * jnp.square(q["x"])) def kinetic_energy(p, position=None): del position v = jnp.multiply(inv_mass_matrix, p["x"...
Potential and kinetic energy of a harmonic oscillator.
HarmonicOscillator
python
blackjax-devs/blackjax
tests/mcmc/test_integrators.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/mcmc/test_integrators.py
Apache-2.0
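What the integrator tests check with this Hamiltonian can be reproduced in plain Python: velocity Verlet on H(q, p) = p²/(2m) + kq²/2 tracks the analytic solution q(t) = cos(t) (for k = m = 1, q₀ = 1, p₀ = 0) with near-constant energy. This sketch assumes unit mass rather than the test's `inv_mass_matrix`, and is not blackjax's `integrators` code.

```python
import math

def verlet(q, p, dt, steps, k=1.0, m=1.0):
    """Velocity Verlet for H(q, p) = p^2/(2m) + k q^2/2."""
    for _ in range(steps):
        p -= 0.5 * dt * k * q        # half kick: force = -dU/dq = -k q
        q += dt * p / m              # drift
        p -= 0.5 * dt * k * q        # half kick
    return q, p

def energy(q, p, k=1.0, m=1.0):
    return 0.5 * p * p / m + 0.5 * k * q * q

q0, p0 = 1.0, 0.0
q1, p1 = verlet(q0, p0, dt=0.01, steps=1000)   # integrate to t = 10
drift = abs(energy(q1, p1) - energy(q0, p0))
```

The O(dt²) energy error of Verlet stays bounded rather than accumulating, which is the property the blackjax integrator tests exercise.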
def FreeFall(inv_mass_matrix, g=1.0): """Potential and kinetic energy of a free-falling object.""" def neg_potential_energy(q): return -jnp.sum(g * q["x"]) def kinetic_energy(p, position=None): del position v = jnp.multiply(inv_mass_matrix, p["x"]) return jnp.sum(0.5 * jnp....
Potential and kinetic energy of a free-falling object.
FreeFall
python
blackjax-devs/blackjax
tests/mcmc/test_integrators.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/mcmc/test_integrators.py
Apache-2.0
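With neg_potential_energy(q) = -g·q the potential is U(q) = g·q, so the force is the constant -g and the exact trajectory is q(t) = q₀ + v₀t - gt²/2. Velocity Verlet reproduces a constant-force trajectory exactly (up to rounding), which makes free fall a convenient integrator test. A unit-mass sketch, independent of the blackjax code:

```python
def verlet_free_fall(q, v, dt, steps, g=1.0, m=1.0):
    """Velocity Verlet with the constant free-fall force F = -g."""
    for _ in range(steps):
        v -= 0.5 * dt * g / m        # half kick
        q += dt * v                  # drift
        v -= 0.5 * dt * g / m        # half kick
    return q, v

q0, v0 = 0.0, 2.0
t = 1.0
q1, v1 = verlet_free_fall(q0, v0, dt=0.001, steps=1000)  # integrate to t = 1
exact_q = q0 + v0 * t - 0.5 * t * t                      # = 1.5
exact_v = v0 - t                                         # = 1.0
```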
def PlanetaryMotion(inv_mass_matrix): """Potential and kinetic energy for planar planetary motion.""" def neg_potential_energy(q): return 1.0 / jnp.power(q["x"] ** 2 + q["y"] ** 2, 0.5) def kinetic_energy(p, position=None): del position z = jnp.stack([p["x"], p["y"]], axis=-1) ...
Potential and kinetic energy for planar planetary motion.
PlanetaryMotion
python
blackjax-devs/blackjax
tests/mcmc/test_integrators.py
https://github.com/blackjax-devs/blackjax/blob/master/tests/mcmc/test_integrators.py
Apache-2.0
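Here neg_potential_energy(q) = 1/r corresponds to the attractive Kepler potential U = -1/r, whose force -q/r³ is central. Velocity Verlet conserves the angular momentum x·p_y - y·p_x exactly for central forces (the kicks are parallel to q and the drift adds dt·(p_x·p_y - p_y·p_x) = 0), which is one invariant such a test can check. A unit-mass sketch on a circular orbit, not the blackjax test itself:

```python
import math

def kepler_verlet(q, p, dt, steps):
    """Velocity Verlet for the planar Kepler potential U(x, y) = -1/r."""
    def force(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    (x, y), (px, py) = q, p
    for _ in range(steps):
        fx, fy = force(x, y)
        px += 0.5 * dt * fx          # half kick, parallel to q
        py += 0.5 * dt * fy
        x += dt * px                 # drift
        y += dt * py
        fx, fy = force(x, y)
        px += 0.5 * dt * fx          # half kick at the new position
        py += 0.5 * dt * fy
    return (x, y), (px, py)

# Circular orbit: r = 1, tangential speed 1.
q0, p0 = (1.0, 0.0), (0.0, 1.0)
q1, p1 = kepler_verlet(q0, p0, dt=0.01, steps=500)
L0 = q0[0] * p0[1] - q0[1] * p0[0]   # angular momentum before
L1 = q1[0] * p1[1] - q1[1] * p1[0]   # ... and after
```

Angular momentum is preserved to rounding error, and the orbit radius stays near 1 up to the integrator's O(dt²) wobble.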