class torch.distributions.geometric.Geometric(probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a Geometric distribution parameterized by probs, where probs is the probability of success of Bernoulli trials. It represents the probability that in k + 1 Bernoulli trials, the first k trials fail before a success is seen. Samples are non-negative integers [0, inf). Example: >>> m = Geometric(torch.tensor([0.3]))
>>> m.sample() # underlying Bernoulli has 30% chance 1; 70% chance 0
tensor([ 2.])
Parameters
probs (Number, Tensor) – the probability of sampling 1. Must be in range (0, 1]
logits (Number, Tensor) – the log-odds of sampling 1.
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
entropy() [source]
expand(batch_shape, _instance=None) [source]
log_prob(value) [source]
logits [source]
property mean
probs [source]
sample(sample_shape=torch.Size([])) [source]
support = IntegerGreaterThan(lower_bound=0)
property variance
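As a minimal sketch (assuming a recent PyTorch build), the probs and logits parameterizations above are interchangeable, and the moments follow the failures-before-first-success convention:

```python
import torch
from torch.distributions import Geometric

# Two parameterizations of the same distribution: probs and logits
# (logits are the log-odds of success).
p = torch.tensor([0.3])
g_probs = Geometric(probs=p)
g_logits = Geometric(logits=torch.log(p / (1 - p)))

# PyTorch counts k = number of failures before the first success,
# so mean = (1 - p) / p and log_prob(k) = k * log(1 - p) + log(p).
k = torch.tensor([2.0])
assert torch.allclose(g_probs.mean, (1 - p) / p)
assert torch.allclose(g_probs.log_prob(k), k * torch.log(1 - p) + torch.log(p))
assert torch.allclose(g_probs.log_prob(k), g_logits.log_prob(k))
```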
class torch.distributions.gumbel.Gumbel(loc, scale, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Samples from a Gumbel Distribution. Examples: >>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))
>>> m.sample() # sample from Gumbel distribution with loc=1, scale=2
tensor([ 1.0124])
Parameters
loc (float or Tensor) – Location parameter of the distribution
scale (float or Tensor) – Scale parameter of the distribution
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
entropy() [source]
expand(batch_shape, _instance=None) [source]
log_prob(value) [source]
property mean
property stddev
support = Real()
property variance
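A short check (a sketch, assuming a recent PyTorch) that the mean, stddev, and log_prob properties match the standard closed forms for the Gumbel (max) distribution:

```python
import math
import torch
from torch.distributions import Gumbel

m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))

# Closed-form moments: mean = loc + scale * gamma (Euler-Mascheroni
# constant), stddev = scale * pi / sqrt(6).
euler_gamma = 0.57721566490153286
assert torch.allclose(m.mean, torch.tensor([1.0 + 2.0 * euler_gamma]))
assert torch.allclose(m.stddev, torch.tensor([2.0 * math.pi / math.sqrt(6.0)]))

# log_prob matches the density log f(x) = -(z + exp(-z)) - log(scale)
# with z = (x - loc) / scale.
x = torch.tensor([0.5])
z = (x - 1.0) / 2.0
assert torch.allclose(m.log_prob(x), -(z + torch.exp(-z)) - math.log(2.0))
```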
class torch.distributions.half_cauchy.HalfCauchy(scale, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a half-Cauchy distribution parameterized by scale where: X ~ Cauchy(0, scale)
Y = |X| ~ HalfCauchy(scale)
Example: >>> m = HalfCauchy(torch.tensor([1.0]))
>>> m.sample() # half-cauchy distributed with scale=1
tensor([ 2.3214])
Parameters
scale (float or Tensor) – scale of the full Cauchy distribution
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}
cdf(value) [source]
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
icdf(prob) [source]
log_prob(value) [source]
property mean
property scale
support = GreaterThan(lower_bound=0.0)
property variance
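A brief sketch (assuming a recent PyTorch) showing that cdf and icdf invert each other, and that the heavy tail of the half-Cauchy leaves the mean infinite:

```python
import torch
from torch.distributions import HalfCauchy

m = HalfCauchy(scale=torch.tensor([1.0]))

# cdf(x) = (2 / pi) * atan(x / scale); icdf inverts it.
x = torch.tensor([2.3214])
u = m.cdf(x)
assert torch.allclose(m.icdf(u), x, atol=1e-4)

# The half-Cauchy has no finite mean (or variance).
assert torch.isinf(m.mean).all()
```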
class torch.distributions.half_normal.HalfNormal(scale, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a half-normal distribution parameterized by scale where: X ~ Normal(0, scale)
Y = |X| ~ HalfNormal(scale)
Example: >>> m = HalfNormal(torch.tensor([1.0]))
>>> m.sample() # half-normal distributed with scale=1
tensor([ 0.1046])
Parameters
scale (float or Tensor) – scale of the full Normal distribution
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}
cdf(value) [source]
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
icdf(prob) [source]
log_prob(value) [source]
property mean
property scale
support = GreaterThan(lower_bound=0.0)
property variance
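Since Y = |X| folds Normal(0, scale) onto the positive axis, the moments have simple closed forms; a sketch (assuming a recent PyTorch) also exercising the reparameterized sampler advertised by has_rsample:

```python
import math
import torch
from torch.distributions import HalfNormal

m = HalfNormal(scale=torch.tensor([1.0]))

# Folding Normal(0, scale) gives mean = scale * sqrt(2 / pi)
# and variance = scale**2 * (1 - 2 / pi).
assert torch.allclose(m.mean, torch.tensor([math.sqrt(2.0 / math.pi)]))
assert torch.allclose(m.variance, torch.tensor([1.0 - 2.0 / math.pi]))

# has_rsample = True: gradients flow through `scale` via rsample().
scale = torch.tensor([2.0], requires_grad=True)
HalfNormal(scale).rsample().sum().backward()
assert scale.grad is not None
```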
class torch.distributions.independent.Independent(base_distribution, reinterpreted_batch_ndims, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Reinterprets some of the batch dims of a distribution as event dims. This is mainly useful for changing the shape of the result of log_prob(). For example to create a diagonal Normal distribution with the same shape as a Multivariate Normal distribution (so they are interchangeable), you can: >>> loc = torch.zeros(3)
>>> scale = torch.ones(3)
>>> mvn = MultivariateNormal(loc, scale_tril=torch.diag(scale))
>>> [mvn.batch_shape, mvn.event_shape]
[torch.Size(()), torch.Size((3,))]
>>> normal = Normal(loc, scale)
>>> [normal.batch_shape, normal.event_shape]
[torch.Size((3,)), torch.Size(())]
>>> diagn = Independent(normal, 1)
>>> [diagn.batch_shape, diagn.event_shape]
[torch.Size(()), torch.Size((3,))]
Parameters
base_distribution (torch.distributions.distribution.Distribution) – a base distribution
reinterpreted_batch_ndims (int) – the number of batch dims to reinterpret as event dims
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}
entropy() [source]
enumerate_support(expand=True) [source]
expand(batch_shape, _instance=None) [source]
property has_enumerate_support
property has_rsample
log_prob(value) [source]
property mean
rsample(sample_shape=torch.Size([])) [source]
sample(sample_shape=torch.Size([])) [source]
property support
property variance
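The shape example above can be completed with log_prob: a sketch (assuming a recent PyTorch) showing that Independent sums the reinterpreted event dimension into a single joint log-density:

```python
import torch
from torch.distributions import Independent, Normal

loc, scale = torch.zeros(3), torch.ones(3)
normal = Normal(loc, scale)      # batch_shape=(3,), event_shape=()
diagn = Independent(normal, 1)   # batch_shape=(),  event_shape=(3,)

x = torch.zeros(3)
# Normal gives one log-density per batch element; Independent sums
# them over the reinterpreted event dimension.
assert normal.log_prob(x).shape == torch.Size((3,))
assert diagn.log_prob(x).shape == torch.Size(())
assert torch.allclose(diagn.log_prob(x), normal.log_prob(x).sum())
```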
torch.distributions.kl.kl_divergence(p, q) [source]
Compute Kullback-Leibler divergence KL(p ∥ q) between two distributions. KL(p ∥ q) = ∫ p(x) log (p(x) / q(x)) dx
Parameters
p (Distribution) – A Distribution object.
q (Distribution) – A Distribution object.
Returns
A batch of KL divergences of shape batch_shape.
Return type
Tensor
Raises
NotImplementedError – If the distribution types have not been registered via register_kl().
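A quick sketch (assuming a recent PyTorch) comparing the registered closed-form KL between two univariate Normals with the textbook formula log(s_q/s_p) + (s_p² + (μ_p − μ_q)²) / (2 s_q²) − 1/2:

```python
import torch
from torch.distributions import Normal, kl_divergence

p = Normal(torch.tensor([0.0]), torch.tensor([1.0]))
q = Normal(torch.tensor([1.0]), torch.tensor([2.0]))

# Closed-form KL between univariate normals:
# log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 1/2
expected = torch.log(torch.tensor(2.0)) + (1.0 + 1.0) / (2.0 * 4.0) - 0.5
assert torch.allclose(kl_divergence(p, q), expected)
```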
torch.distributions.kl.register_kl(type_p, type_q) [source]
Decorator to register a pairwise function with kl_divergence(). Usage: @register_kl(Normal, Normal)
def kl_normal_normal(p, q):
# insert implementation here
Lookup returns the most specific (type,type) match ordered by subclass. If the match is ambiguous, a RuntimeWarning is raised. For example to resolve the ambiguous situation: @register_kl(BaseP, DerivedQ)
def kl_version1(p, q): ...
@register_kl(DerivedP, BaseQ)
def kl_version2(p, q): ...
you should register a third most-specific implementation, e.g.: register_kl(DerivedP, DerivedQ)(kl_version1) # Break the tie.
Parameters
type_p (type) – A subclass of Distribution.
type_q (type) – A subclass of Distribution.
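A runnable sketch of the decorator in action. The subclass name and the registered function are hypothetical (not part of torch); the KL rule used is the standard closed form KL(Exp(a) ∥ Exp(b)) = log(a/b) + b/a − 1:

```python
import torch
from torch.distributions import Exponential, kl_divergence, register_kl

# Hypothetical subclass: registering an exact (type, type) pair avoids
# any ambiguous lookup against base-class rules.
class MyExponential(Exponential):
    pass

@register_kl(MyExponential, MyExponential)
def _kl_myexp_myexp(p, q):
    # KL(Exp(a) || Exp(b)) = log(a / b) + b / a - 1
    return torch.log(p.rate / q.rate) + q.rate / p.rate - 1

p = MyExponential(torch.tensor([2.0]))
q = MyExponential(torch.tensor([1.0]))
assert torch.allclose(kl_divergence(p, q),
                      torch.log(torch.tensor([2.0])) + 0.5 - 1)
```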
class torch.distributions.kumaraswamy.Kumaraswamy(concentration1, concentration0, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Samples from a Kumaraswamy distribution. Example: >>> m = Kumaraswamy(torch.Tensor([1.0]), torch.Tensor([1.0]))
>>> m.sample() # sample from a Kumaraswamy distribution with concentration alpha=1 and beta=1
tensor([ 0.1729])
Parameters
concentration1 (float or Tensor) – 1st concentration parameter of the distribution (often referred to as alpha)
concentration0 (float or Tensor) – 2nd concentration parameter of the distribution (often referred to as beta)
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
property mean
support = Interval(lower_bound=0.0, upper_bound=1.0)
property variance
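A sketch (assuming a recent PyTorch) of the support and the special case concentration1 = concentration0 = 1, where the density a·b·x^(a−1)·(1−x^a)^(b−1) reduces to the uniform density on (0, 1):

```python
import torch
from torch.distributions import Kumaraswamy

# With concentration1 = concentration0 = 1 the density is uniform
# on (0, 1), so log_prob is 0 everywhere in the support.
m = Kumaraswamy(torch.tensor([1.0]), torch.tensor([1.0]))
assert torch.allclose(m.log_prob(torch.tensor([0.25])),
                      torch.tensor([0.0]), atol=1e-4)

# Samples land in the unit interval, matching support = Interval(0, 1).
s = m.sample((100,))
assert ((s >= 0) & (s <= 1)).all()
```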
class torch.distributions.laplace.Laplace(loc, scale, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a Laplace distribution parameterized by loc and scale. Example: >>> m = Laplace(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # Laplace distributed with loc=0, scale=1
tensor([ 0.1046])
Parameters
loc (float or Tensor) – mean of the distribution
scale (float or Tensor) – scale of the distribution
arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
cdf(value) [source]
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
icdf(value) [source]
log_prob(value) [source]
property mean
rsample(sample_shape=torch.Size([])) [source]
property stddev
support = Real()
property variance
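A minimal sketch (assuming a recent PyTorch) of the cdf/icdf pair and the closed-form variance 2·scale²:

```python
import torch
from torch.distributions import Laplace

m = Laplace(torch.tensor([0.0]), torch.tensor([1.0]))

# The median equals loc, and icdf inverts cdf.
assert torch.allclose(m.icdf(torch.tensor([0.5])), torch.tensor([0.0]))
u = m.cdf(torch.tensor([1.5]))
assert torch.allclose(m.icdf(u), torch.tensor([1.5]), atol=1e-5)

# variance = 2 * scale**2, so stddev = sqrt(2) * scale.
assert torch.allclose(m.variance, torch.tensor([2.0]))
```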
class torch.distributions.lkj_cholesky.LKJCholesky(dim, concentration=1.0, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution LKJ distribution for the lower Cholesky factor of correlation matrices. The distribution is controlled by the concentration parameter η to make the probability of a correlation matrix M generated from a Cholesky factor proportional to det(M)^(η − 1). Because of that, when concentration == 1, we have a uniform distribution over Cholesky factors of correlation matrices. Note that this distribution samples the Cholesky factor of correlation matrices and not the correlation matrices themselves and thereby differs slightly from the derivations in [1] for the LKJCorr distribution. For sampling, this uses the Onion method from [1] Section 3. L ~ LKJCholesky(dim, concentration) X = L @ L.T ~ LKJCorr(dim, concentration) Example: >>> l = LKJCholesky(3, 0.5)
>>> l.sample() # l @ l.T is a sample of a correlation 3x3 matrix
tensor([[ 1.0000, 0.0000, 0.0000],
[ 0.3516, 0.9361, 0.0000],
[-0.1899, 0.4748, 0.8593]])
Parameters
dim (int) – dimension of the matrices
concentration (float or Tensor) – concentration/shape parameter of the distribution (often referred to as eta) References [1] Generating random correlation matrices based on vines and extended onion method, Daniel Lewandowski, Dorota Kurowicka, Harry Joe.
arg_constraints = {'concentration': GreaterThan(lower_bound=0.0)}
expand(batch_shape, _instance=None) [source]
log_prob(value) [source]
sample(sample_shape=torch.Size([])) [source]
support = CorrCholesky()
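A sketch (assuming a recent PyTorch) checking the sample is a valid lower Cholesky factor of a correlation matrix, i.e. L @ L.T has unit diagonal and entries in [−1, 1]:

```python
import torch
from torch.distributions import LKJCholesky

torch.manual_seed(0)
l = LKJCholesky(3, concentration=0.5)
L = l.sample()

# L is lower triangular, and M = L @ L.T is a correlation matrix:
# unit diagonal, off-diagonal entries bounded by 1 in magnitude.
M = L @ L.T
assert torch.allclose(L, L.tril())
assert torch.allclose(M.diagonal(), torch.ones(3), atol=1e-5)
assert (M.abs() <= 1 + 1e-5).all()
```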
class torch.distributions.log_normal.LogNormal(loc, scale, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a log-normal distribution parameterized by loc and scale where: X ~ Normal(loc, scale)
Y = exp(X) ~ LogNormal(loc, scale)
Example: >>> m = LogNormal(torch.tensor([0.0]), torch.tensor([1.0]))
>>> m.sample() # log-normal distributed with mean=0 and stddev=1
tensor([ 0.1046])
Parameters
loc (float or Tensor) – mean of log of distribution
scale (float or Tensor) – standard deviation of log of the distribution
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
property loc
property mean
property scale
support = GreaterThan(lower_bound=0.0)
property variance
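Because Y = exp(X) with X ~ Normal(loc, scale), the moments follow the usual log-normal formulas; a quick sketch (assuming a recent PyTorch):

```python
import math
import torch
from torch.distributions import LogNormal

m = LogNormal(torch.tensor([0.0]), torch.tensor([1.0]))

# mean = exp(loc + scale**2 / 2)
# variance = (exp(scale**2) - 1) * exp(2 * loc + scale**2)
assert torch.allclose(m.mean, torch.tensor([math.exp(0.5)]))
assert torch.allclose(m.variance, torch.tensor([(math.e - 1.0) * math.e]))

# Samples are strictly positive, matching support = GreaterThan(0).
assert (m.sample((100,)) > 0).all()
```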
class torch.distributions.lowrank_multivariate_normal.LowRankMultivariateNormal(loc, cov_factor, cov_diag, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a multivariate normal distribution with covariance matrix having a low-rank form parameterized by cov_factor and cov_diag: covariance_matrix = cov_factor @ cov_factor.T + cov_diag
Example: >>> m = LowRankMultivariateNormal(torch.zeros(2), torch.tensor([[1.], [0.]]), torch.ones(2))
>>> m.sample() # normally distributed with mean=`[0,0]`, cov_factor=`[[1],[0]]`, cov_diag=`[1,1]`
tensor([-0.2102, -0.5429])
Parameters
loc (Tensor) – mean of the distribution with shape batch_shape + event_shape
cov_factor (Tensor) – factor part of low-rank form of covariance matrix with shape batch_shape + event_shape + (rank,)
cov_diag (Tensor) – diagonal part of low-rank form of covariance matrix with shape batch_shape + event_shape
Note The computation for determinant and inverse of covariance matrix is avoided when cov_factor.shape[1] << cov_factor.shape[0] thanks to Woodbury matrix identity and matrix determinant lemma. Thanks to these formulas, we just need to compute the determinant and inverse of the small size “capacitance” matrix: capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor
arg_constraints = {'cov_diag': IndependentConstraint(GreaterThan(lower_bound=0.0), 1), 'cov_factor': IndependentConstraint(Real(), 2), 'loc': IndependentConstraint(Real(), 1)}
covariance_matrix [source]
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
log_prob(value) [source]
property mean
precision_matrix [source]
rsample(sample_shape=torch.Size([])) [source]
scale_tril [source]
support = IndependentConstraint(Real(), 1)
variance [source]
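A sketch (assuming a recent PyTorch) verifying that the dense covariance is recovered from the low-rank parameterization cov_factor @ cov_factor.T + diag(cov_diag):

```python
import torch
from torch.distributions import LowRankMultivariateNormal

loc = torch.zeros(2)
cov_factor = torch.tensor([[1.0], [0.0]])
cov_diag = torch.ones(2)
m = LowRankMultivariateNormal(loc, cov_factor, cov_diag)

# The lazy covariance_matrix equals W @ W.T + diag(D), and variance
# is its diagonal.
expected = cov_factor @ cov_factor.T + torch.diag(cov_diag)
assert torch.allclose(m.covariance_matrix, expected)
assert torch.allclose(m.variance, expected.diagonal())
```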