property variance
torch.distributions#torch.distributions.poisson.Poisson.variance
class torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None) [source] Bases: torch.distributions.distribution.Distribution Creates a LogitRelaxedBernoulli distribution parameterized by probs or logits (but not both), which is the logit of a RelaxedBernoulli distribution. Samples are logits of values in (0, 1). See [1] for more details. Parameters temperature (Tensor) – relaxation temperature probs (Number, Tensor) – the probability of sampling 1 logits (Number, Tensor) – the log-odds of sampling 1 [1] The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables (Maddison et al, 2017) [2] Categorical Reparametrization with Gumbel-Softmax (Jang et al, 2017) arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)} expand(batch_shape, _instance=None) [source] log_prob(value) [source] logits [source] property param_shape probs [source] rsample(sample_shape=torch.Size([])) [source] support = Real()
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.arg_constraints
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.expand
logits [source]
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.logits
log_prob(value) [source]
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.log_prob
property param_shape
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.param_shape
probs [source]
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.probs
rsample(sample_shape=torch.Size([])) [source]
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.rsample
support = Real()
torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.support
class torch.distributions.relaxed_bernoulli.RelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None) [source] Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a RelaxedBernoulli distribution, parametrized by temperature, and either probs or logits (but not both). This is a relaxed version of the Bernoulli distribution, so the values are in (0, 1), and has reparametrizable samples. Example: >>> m = RelaxedBernoulli(torch.tensor([2.2]), torch.tensor([0.1, 0.2, 0.3, 0.99])) >>> m.sample() tensor([ 0.2951, 0.3442, 0.8918, 0.9021]) Parameters temperature (Tensor) – relaxation temperature probs (Number, Tensor) – the probability of sampling 1 logits (Number, Tensor) – the log-odds of sampling 1 arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)} expand(batch_shape, _instance=None) [source] has_rsample = True property logits property probs support = Interval(lower_bound=0.0, upper_bound=1.0) property temperature
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.arg_constraints
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.expand
has_rsample = True
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.has_rsample
property logits
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.logits
property probs
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.probs
support = Interval(lower_bound=0.0, upper_bound=1.0)
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.support
property temperature
torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.temperature
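Since the samples are reparametrizable, gradients flow through rsample() back to the parameters; a short sketch (assuming a recent torch release):

```python
import torch
from torch.distributions import RelaxedBernoulli

torch.manual_seed(0)
temperature = torch.tensor(2.2)
probs = torch.tensor([0.1, 0.9], requires_grad=True)
dist = RelaxedBernoulli(temperature, probs=probs)

# rsample() is reparameterized, so the sample is differentiable w.r.t. probs.
sample = dist.rsample()
sample.sum().backward()

assert bool(((sample > 0) & (sample < 1)).all())  # values lie in (0, 1)
assert probs.grad is not None                     # gradients reached probs
```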
class torch.distributions.relaxed_categorical.RelaxedOneHotCategorical(temperature, probs=None, logits=None, validate_args=None) [source] Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a RelaxedOneHotCategorical distribution parametrized by temperature, and either probs or logits. This is a relaxed version of the OneHotCategorical distribution, so its samples are on the simplex, and are reparametrizable. Example: >>> m = RelaxedOneHotCategorical(torch.tensor([2.2]), torch.tensor([0.1, 0.2, 0.3, 0.4])) >>> m.sample() tensor([ 0.1294, 0.2324, 0.3859, 0.2523]) Parameters temperature (Tensor) – relaxation temperature probs (Tensor) – event probabilities logits (Tensor) – unnormalized log probability for each event arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()} expand(batch_shape, _instance=None) [source] has_rsample = True property logits property probs support = Simplex() property temperature
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.arg_constraints
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.expand
has_rsample = True
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.has_rsample
property logits
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.logits
property probs
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.probs
support = Simplex()
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.support
property temperature
torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.temperature
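A quick sketch (assuming a recent torch release) verifying that relaxed one-hot samples lie on the probability simplex, i.e. they are nonnegative and sum to one:

```python
import torch
from torch.distributions import RelaxedOneHotCategorical

torch.manual_seed(0)
dist = RelaxedOneHotCategorical(torch.tensor(1.0),
                                probs=torch.tensor([0.1, 0.2, 0.3, 0.4]))
sample = dist.rsample()

# Each relaxed one-hot sample lies on the simplex.
assert sample.shape == (4,)
assert bool((sample >= 0).all())
assert torch.allclose(sample.sum(), torch.tensor(1.0), atol=1e-5)
```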
class torch.distributions.studentT.StudentT(df, loc=0.0, scale=1.0, validate_args=None) [source] Bases: torch.distributions.distribution.Distribution Creates a Student’s t-distribution parameterized by degree of freedom df, mean loc and scale scale. Example: >>> m = StudentT(torch.tensor([2.0])) >>> m.sample() # Student's t-distributed with degrees of freedom=2 tensor([ 0.1046]) Parameters df (float or Tensor) – degrees of freedom loc (float or Tensor) – mean of the distribution scale (float or Tensor) – scale of the distribution arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)} entropy() [source] expand(batch_shape, _instance=None) [source] has_rsample = True log_prob(value) [source] property mean rsample(sample_shape=torch.Size([])) [source] support = Real() property variance
torch.distributions#torch.distributions.studentT.StudentT
arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}
torch.distributions#torch.distributions.studentT.StudentT.arg_constraints
entropy() [source]
torch.distributions#torch.distributions.studentT.StudentT.entropy
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.studentT.StudentT.expand
has_rsample = True
torch.distributions#torch.distributions.studentT.StudentT.has_rsample
log_prob(value) [source]
torch.distributions#torch.distributions.studentT.StudentT.log_prob
property mean
torch.distributions#torch.distributions.studentT.StudentT.mean
rsample(sample_shape=torch.Size([])) [source]
torch.distributions#torch.distributions.studentT.StudentT.rsample
support = Real()
torch.distributions#torch.distributions.studentT.StudentT.support
property variance
torch.distributions#torch.distributions.studentT.StudentT.variance
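The mean and variance properties follow the standard Student's t results: for df > 1 the mean is loc, and for df > 2 the variance is scale² · df / (df − 2). A small check (assuming a recent torch release):

```python
import torch
from torch.distributions import StudentT

# df = 4, loc = 0, scale = 1: variance = 4 / (4 - 2) = 2.
d = StudentT(df=torch.tensor(4.0), loc=0.0, scale=1.0)
assert torch.allclose(d.mean, torch.tensor(0.0))
assert torch.allclose(d.variance, torch.tensor(2.0))
```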
class torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms, validate_args=None) [source] Bases: torch.distributions.distribution.Distribution Extension of the Distribution class, which applies a sequence of Transforms to a base distribution. Let f be the composition of transforms applied: X ~ BaseDistribution Y = f(X) ~ TransformedDistribution(BaseDistribution, f) log p(Y) = log p(X) + log |det (dX/dY)| Note that the .event_shape of a TransformedDistribution is the maximum shape of its base distribution and its transforms, since transforms can introduce correlations among events. An example for the usage of TransformedDistribution would be: # Building a Logistic Distribution # X ~ Uniform(0, 1) # f = a + b * logit(X) # Y ~ f(X) ~ Logistic(a, b) base_distribution = Uniform(0, 1) transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)] logistic = TransformedDistribution(base_distribution, transforms) For more examples, please look at the implementations of Gumbel, HalfCauchy, HalfNormal, LogNormal, Pareto, Weibull, RelaxedBernoulli and RelaxedOneHotCategorical arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {} cdf(value) [source] Computes the cumulative distribution function by inverting the transform(s) and computing the score of the base distribution. expand(batch_shape, _instance=None) [source] property has_rsample icdf(value) [source] Computes the inverse cumulative distribution function using transform(s) and computing the score of the base distribution. log_prob(value) [source] Scores the sample by inverting the transform(s) and computing the score using the score of the base distribution and the log abs det jacobian. rsample(sample_shape=torch.Size([])) [source] Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched. 
Samples first from base distribution and applies transform() for every transform in the list. sample(sample_shape=torch.Size([])) [source] Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched. Samples first from base distribution and applies transform() for every transform in the list. property support
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {}
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.arg_constraints
cdf(value) [source] Computes the cumulative distribution function by inverting the transform(s) and computing the score of the base distribution.
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.cdf
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.expand
property has_rsample
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.has_rsample
icdf(value) [source] Computes the inverse cumulative distribution function using transform(s) and computing the score of the base distribution.
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.icdf
log_prob(value) [source] Scores the sample by inverting the transform(s) and computing the score using the score of the base distribution and the log abs det jacobian.
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.log_prob
rsample(sample_shape=torch.Size([])) [source] Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched. Samples first from base distribution and applies transform() for every transform in the list.
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.rsample
sample(sample_shape=torch.Size([])) [source] Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched. Samples first from base distribution and applies transform() for every transform in the list.
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.sample
property support
torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.support
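The Logistic construction above uses symbolic a and b; here is a runnable version with concrete values chosen purely for illustration (assuming a recent torch release):

```python
import torch
from torch.distributions import Uniform, TransformedDistribution
from torch.distributions.transforms import SigmoidTransform, AffineTransform

# X ~ Uniform(0, 1);  Y = a + b * logit(X)  ~  Logistic(a, b)
a, b = 2.0, 3.0  # illustrative values, not from the original doc
base = Uniform(0.0, 1.0)
transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)]
logistic = TransformedDistribution(base, transforms)

torch.manual_seed(0)
x = logistic.sample((5,))
lp = logistic.log_prob(x)

assert x.shape == (5,)
assert bool(torch.isfinite(lp).all())
```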
class torch.distributions.transforms.AbsTransform(cache_size=0) [source] Transform via the mapping y = |x|.
torch.distributions#torch.distributions.transforms.AbsTransform
class torch.distributions.transforms.AffineTransform(loc, scale, event_dim=0, cache_size=0) [source] Transform via the pointwise affine mapping y = loc + scale * x. Parameters loc (Tensor or float) – Location parameter. scale (Tensor or float) – Scale parameter. event_dim (int) – Optional size of event_shape. This should be zero for univariate random variables, 1 for distributions over vectors, 2 for distributions over matrices, etc.
torch.distributions#torch.distributions.transforms.AffineTransform
class torch.distributions.transforms.ComposeTransform(parts, cache_size=0) [source] Composes multiple transforms in a chain. The transforms being composed are responsible for caching. Parameters parts (list of Transform) – A list of transforms to compose. cache_size (int) – Size of cache. If zero, no caching is done. If one, the latest single value is cached. Only 0 and 1 are supported.
torch.distributions#torch.distributions.transforms.ComposeTransform
class torch.distributions.transforms.CorrCholeskyTransform(cache_size=0) [source] Transforms an unconstrained real vector x of length D*(D-1)/2 into the Cholesky factor of a D-dimensional correlation matrix. This Cholesky factor is a lower triangular matrix with positive diagonals and unit Euclidean norm for each row. The transform proceeds as follows: First we convert x into a lower triangular matrix in row order. For each row X_i of the lower triangular part, we apply a signed version of class StickBreakingTransform to transform X_i into a unit Euclidean length vector using the following steps: - Scale into the interval (-1, 1): r_i = tanh(X_i). - Transform into an unsigned domain: z_i = r_i^2. - Apply s_i = StickBreakingTransform(z_i). - Transform back into the signed domain: y_i = sign(r_i) * sqrt(s_i).
torch.distributions#torch.distributions.transforms.CorrCholeskyTransform
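The steps above can be checked directly: the output is lower triangular with positive diagonal, and every row has unit Euclidean norm, so L @ L.T is a valid correlation matrix. A sketch (assuming a recent torch release):

```python
import torch
from torch.distributions.transforms import CorrCholeskyTransform

torch.manual_seed(0)
t = CorrCholeskyTransform()
x = torch.randn(3)   # length D*(D-1)/2 with D = 3
L = t(x)             # Cholesky factor of a 3x3 correlation matrix

assert L.shape == (3, 3)
assert torch.allclose(L, L.tril())               # lower triangular
assert bool((L.diagonal() > 0).all())            # positive diagonal
assert torch.allclose(L.norm(dim=-1), torch.ones(3), atol=1e-6)  # unit rows
```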
class torch.distributions.transforms.ExpTransform(cache_size=0) [source] Transform via the mapping y = exp(x).
torch.distributions#torch.distributions.transforms.ExpTransform
class torch.distributions.transforms.IndependentTransform(base_transform, reinterpreted_batch_ndims, cache_size=0) [source] Wrapper around another transform to treat reinterpreted_batch_ndims-many extra of the rightmost dimensions as dependent. This has no effect on the forward or backward transforms, but does sum out reinterpreted_batch_ndims-many of the rightmost dimensions in log_abs_det_jacobian(). Parameters base_transform (Transform) – A base transform. reinterpreted_batch_ndims (int) – The number of extra rightmost dimensions to treat as dependent.
torch.distributions#torch.distributions.transforms.IndependentTransform
class torch.distributions.transforms.LowerCholeskyTransform(cache_size=0) [source] Transform from unconstrained matrices to lower-triangular matrices with nonnegative diagonal entries. This is useful for parameterizing positive definite matrices in terms of their Cholesky factorization.
torch.distributions#torch.distributions.transforms.LowerCholeskyTransform
class torch.distributions.transforms.PowerTransform(exponent, cache_size=0) [source] Transform via the mapping y = x^exponent.
torch.distributions#torch.distributions.transforms.PowerTransform
class torch.distributions.transforms.ReshapeTransform(in_shape, out_shape, cache_size=0) [source] Unit Jacobian transform to reshape the rightmost part of a tensor. Note that in_shape and out_shape must have the same number of elements, just as for torch.Tensor.reshape(). Parameters in_shape (torch.Size) – The input event shape. out_shape (torch.Size) – The output event shape.
torch.distributions#torch.distributions.transforms.ReshapeTransform
class torch.distributions.transforms.SigmoidTransform(cache_size=0) [source] Transform via the mapping y = 1 / (1 + exp(-x)) and x = logit(y).
torch.distributions#torch.distributions.transforms.SigmoidTransform
class torch.distributions.transforms.SoftmaxTransform(cache_size=0) [source] Transform from unconstrained space to the simplex via y = exp(x) followed by normalization. This is not bijective and cannot be used for HMC. However, it acts mostly coordinate-wise (except for the final normalization), and is thus appropriate for coordinate-wise optimization algorithms.
torch.distributions#torch.distributions.transforms.SoftmaxTransform
class torch.distributions.transforms.StackTransform(tseq, dim=0, cache_size=0) [source] Transform functor that applies a sequence of transforms tseq component-wise to each submatrix at dim in a way compatible with torch.stack(). Example: x = torch.stack([torch.range(1, 10), torch.range(1, 10)], dim=1) t = StackTransform([ExpTransform(), identity_transform], dim=1) y = t(x)
torch.distributions#torch.distributions.transforms.StackTransform
class torch.distributions.transforms.StickBreakingTransform(cache_size=0) [source] Transform from unconstrained space to the simplex of one additional dimension via a stick-breaking process. This transform arises as an iterated sigmoid transform in a stick-breaking construction of the Dirichlet distribution: the first logit is transformed via sigmoid to the first probability and the probability of everything else, and then the process recurses. This is bijective and appropriate for use in HMC; however it mixes coordinates together and is less appropriate for optimization.
torch.distributions#torch.distributions.transforms.StickBreakingTransform
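A sketch (assuming a recent torch release) showing both properties named above: a length-K unconstrained vector maps to a point on the (K+1)-element simplex, and the transform is bijective, so inverting recovers the input:

```python
import torch
from torch.distributions.transforms import StickBreakingTransform

torch.manual_seed(0)
t = StickBreakingTransform()
x = torch.randn(4)   # unconstrained vector of length K = 4
y = t(x)             # point on the simplex with K + 1 = 5 elements

assert y.shape == (5,)
assert bool((y > 0).all())
assert torch.allclose(y.sum(), torch.tensor(1.0), atol=1e-6)
assert torch.allclose(t.inv(y), x, atol=1e-5)  # bijective: round-trips
```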
class torch.distributions.transforms.TanhTransform(cache_size=0) [source] Transform via the mapping y = tanh(x). It is equivalent to ComposeTransform([AffineTransform(0., 2.), SigmoidTransform(), AffineTransform(-1., 2.)]). However, that composition might not be numerically stable, so it is recommended to use TanhTransform instead. Note that one should use cache_size=1 when dealing with NaN/Inf values.
torch.distributions#torch.distributions.transforms.TanhTransform
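A common use is squashing a Normal onto (-1, 1), e.g. for bounded action spaces; with cache_size=1, log_prob of a just-drawn sample reuses the cached pre-tanh value instead of an unstable atanh. A sketch (assuming a recent torch release):

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import TanhTransform

torch.manual_seed(0)
# Squash a standard Normal into (-1, 1); cache_size=1 stabilizes log_prob.
squashed = TransformedDistribution(Normal(0.0, 1.0),
                                   [TanhTransform(cache_size=1)])
s = squashed.rsample((10,))

assert bool(((s > -1) & (s < 1)).all())
assert bool(torch.isfinite(squashed.log_prob(s)).all())
```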
class torch.distributions.transforms.Transform(cache_size=0) [source] Abstract class for invertible transformations with computable log det Jacobians. They are primarily used in torch.distributions.TransformedDistribution. Caching is useful for transforms whose inverses are either expensive or numerically unstable. Note that care must be taken with memoized values since the autograd graph may be reversed. For example, while the following works with or without caching: y = t(x) t.log_abs_det_jacobian(x, y).backward() # x will receive gradients. However the following will error when caching due to dependency reversal: y = t(x) z = t.inv(y) grad(z.sum(), [y]) # error because z is x Derived classes should implement one or both of _call() or _inverse(). Derived classes that set bijective=True should also implement log_abs_det_jacobian(). Parameters cache_size (int) – Size of cache. If zero, no caching is done. If one, the latest single value is cached. Only 0 and 1 are supported. Variables ~Transform.domain (Constraint) – The constraint representing valid inputs to this transform. ~Transform.codomain (Constraint) – The constraint representing valid outputs to this transform which are inputs to the inverse transform. ~Transform.bijective (bool) – Whether this transform is bijective. A transform t is bijective iff t.inv(t(x)) == x and t(t.inv(y)) == y for every x in the domain and y in the codomain. Transforms that are not bijective should at least maintain the weaker pseudoinverse properties t(t.inv(t(x))) == t(x) and t.inv(t(t.inv(y))) == t.inv(y). ~Transform.sign (int or Tensor) – For bijective univariate transforms, this should be +1 or -1 depending on whether the transform is monotone increasing or decreasing. property inv Returns the inverse Transform of this transform. This should satisfy t.inv.inv is t. property sign Returns the sign of the determinant of the Jacobian, if applicable. In general this only makes sense for bijective transforms. 
log_abs_det_jacobian(x, y) [source] Computes the log det jacobian log |dy/dx| given input and output. forward_shape(shape) [source] Infers the shape of the forward computation, given the input shape. Defaults to preserving shape. inverse_shape(shape) [source] Infers the shapes of the inverse computation, given the output shape. Defaults to preserving shape.
torch.distributions#torch.distributions.transforms.Transform
forward_shape(shape) [source] Infers the shape of the forward computation, given the input shape. Defaults to preserving shape.
torch.distributions#torch.distributions.transforms.Transform.forward_shape
property inv Returns the inverse Transform of this transform. This should satisfy t.inv.inv is t.
torch.distributions#torch.distributions.transforms.Transform.inv
inverse_shape(shape) [source] Infers the shapes of the inverse computation, given the output shape. Defaults to preserving shape.
torch.distributions#torch.distributions.transforms.Transform.inverse_shape
log_abs_det_jacobian(x, y) [source] Computes the log det jacobian log |dy/dx| given input and output.
torch.distributions#torch.distributions.transforms.Transform.log_abs_det_jacobian
property sign Returns the sign of the determinant of the Jacobian, if applicable. In general this only makes sense for bijective transforms.
torch.distributions#torch.distributions.transforms.Transform.sign
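The inv and log_abs_det_jacobian contracts above can be exercised with a concrete transform; a sketch using ExpTransform (assuming a recent torch release), where for y = exp(x) the log-Jacobian log |dy/dx| is simply x:

```python
import torch
from torch.distributions.transforms import ExpTransform

t = ExpTransform()
x = torch.tensor([-1.0, 0.0, 2.0])
y = t(x)

assert torch.allclose(t.inv(y), x, atol=1e-6)   # t.inv(t(x)) == x
assert t.inv.inv is t                           # inverse of inverse is t
# log |dy/dx| = log(exp(x)) = x for this transform
assert torch.allclose(t.log_abs_det_jacobian(x, y), x, atol=1e-6)
```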
class torch.distributions.uniform.Uniform(low, high, validate_args=None) [source] Bases: torch.distributions.distribution.Distribution Generates uniformly distributed random samples from the half-open interval [low, high). Example: >>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0])) >>> m.sample() # uniformly distributed in the range [0.0, 5.0) tensor([ 2.3418]) Parameters low (float or Tensor) – lower range (inclusive). high (float or Tensor) – upper range (exclusive). arg_constraints = {'high': Dependent(), 'low': Dependent()} cdf(value) [source] entropy() [source] expand(batch_shape, _instance=None) [source] has_rsample = True icdf(value) [source] log_prob(value) [source] property mean rsample(sample_shape=torch.Size([])) [source] property stddev property support property variance
torch.distributions#torch.distributions.uniform.Uniform
arg_constraints = {'high': Dependent(), 'low': Dependent()}
torch.distributions#torch.distributions.uniform.Uniform.arg_constraints
cdf(value) [source]
torch.distributions#torch.distributions.uniform.Uniform.cdf
entropy() [source]
torch.distributions#torch.distributions.uniform.Uniform.entropy
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.uniform.Uniform.expand
has_rsample = True
torch.distributions#torch.distributions.uniform.Uniform.has_rsample
icdf(value) [source]
torch.distributions#torch.distributions.uniform.Uniform.icdf
log_prob(value) [source]
torch.distributions#torch.distributions.uniform.Uniform.log_prob
property mean
torch.distributions#torch.distributions.uniform.Uniform.mean
rsample(sample_shape=torch.Size([])) [source]
torch.distributions#torch.distributions.uniform.Uniform.rsample
property stddev
torch.distributions#torch.distributions.uniform.Uniform.stddev
property support
torch.distributions#torch.distributions.uniform.Uniform.support
property variance
torch.distributions#torch.distributions.uniform.Uniform.variance
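A few sanity checks on the properties above (assuming a recent torch release): the cdf is linear on [low, high), icdf inverts it, the mean is the midpoint, and the variance is (high − low)² / 12:

```python
import torch
from torch.distributions import Uniform

d = Uniform(torch.tensor(0.0), torch.tensor(5.0))

assert torch.allclose(d.cdf(torch.tensor(2.5)), torch.tensor(0.5))
assert torch.allclose(d.icdf(torch.tensor(0.5)), torch.tensor(2.5))
assert torch.allclose(d.mean, torch.tensor(2.5))
assert torch.allclose(d.variance, torch.tensor(25.0 / 12.0))
```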
class torch.distributions.von_mises.VonMises(loc, concentration, validate_args=None) [source] Bases: torch.distributions.distribution.Distribution A circular von Mises distribution. This implementation uses polar coordinates. The loc and value args can be any real number (to facilitate unconstrained optimization), but are interpreted as angles modulo 2 pi. Example: >>> m = dist.VonMises(torch.tensor([1.0]), torch.tensor([1.0])) >>> m.sample() # von Mises distributed with loc=1 and concentration=1 tensor([1.9777]) Parameters loc (torch.Tensor) – an angle in radians. concentration (torch.Tensor) – concentration parameter arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()} expand(batch_shape) [source] has_rsample = False log_prob(value) [source] property mean The provided mean is the circular one. sample(sample_shape=torch.Size([])) [source] The sampling algorithm for the von Mises distribution is based on the following paper: Best, D. J., and Nicholas I. Fisher. “Efficient simulation of the von Mises distribution.” Applied Statistics (1979): 152-157. support = Real() variance [source] The provided variance is the circular one.
torch.distributions#torch.distributions.von_mises.VonMises
arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()}
torch.distributions#torch.distributions.von_mises.VonMises.arg_constraints
expand(batch_shape) [source]
torch.distributions#torch.distributions.von_mises.VonMises.expand
has_rsample = False
torch.distributions#torch.distributions.von_mises.VonMises.has_rsample
log_prob(value) [source]
torch.distributions#torch.distributions.von_mises.VonMises.log_prob
property mean The provided mean is the circular one.
torch.distributions#torch.distributions.von_mises.VonMises.mean
sample(sample_shape=torch.Size([])) [source] The sampling algorithm for the von Mises distribution is based on the following paper: Best, D. J., and Nicholas I. Fisher. “Efficient simulation of the von Mises distribution.” Applied Statistics (1979): 152-157.
torch.distributions#torch.distributions.von_mises.VonMises.sample
support = Real()
torch.distributions#torch.distributions.von_mises.VonMises.support
variance [source] The provided variance is the circular one.
torch.distributions#torch.distributions.von_mises.VonMises.variance
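A short sketch (assuming a recent torch release) illustrating the points above: the circular mean equals loc, and since has_rsample = False, only sample() (non-reparameterized) is available:

```python
import torch
from torch.distributions import VonMises

torch.manual_seed(0)
m = VonMises(loc=torch.tensor(1.0), concentration=torch.tensor(1.0))
s = m.sample((100,))

assert torch.allclose(m.mean, torch.tensor(1.0))  # circular mean is loc
assert m.has_rsample is False                     # no reparameterized samples
assert s.shape == (100,)
```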
class torch.distributions.weibull.Weibull(scale, concentration, validate_args=None) [source] Bases: torch.distributions.transformed_distribution.TransformedDistribution Samples from a two-parameter Weibull distribution. Example: >>> m = Weibull(torch.tensor([1.0]), torch.tensor([1.0])) >>> m.sample() # sample from a Weibull distribution with scale=1, concentration=1 tensor([ 0.4784]) Parameters scale (float or Tensor) – Scale parameter of distribution (lambda). concentration (float or Tensor) – Concentration parameter of distribution (k/shape). arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)} entropy() [source] expand(batch_shape, _instance=None) [source] property mean support = GreaterThan(lower_bound=0.0) property variance
torch.distributions#torch.distributions.weibull.Weibull
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}
torch.distributions#torch.distributions.weibull.Weibull.arg_constraints
entropy() [source]
torch.distributions#torch.distributions.weibull.Weibull.entropy
expand(batch_shape, _instance=None) [source]
torch.distributions#torch.distributions.weibull.Weibull.expand
property mean
torch.distributions#torch.distributions.weibull.Weibull.mean
support = GreaterThan(lower_bound=0.0)
torch.distributions#torch.distributions.weibull.Weibull.support
property variance
torch.distributions#torch.distributions.weibull.Weibull.variance
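The mean property follows the standard Weibull result, mean = scale · Γ(1 + 1/k); with concentration k = 1 the distribution reduces to an Exponential with the given scale, so mean = scale and variance = scale². A sketch (assuming a recent torch release):

```python
import torch
from torch.distributions import Weibull

# k = 1 reduces Weibull to Exponential: mean = scale, variance = scale^2.
d = Weibull(scale=torch.tensor(2.0), concentration=torch.tensor(1.0))
assert torch.allclose(d.mean, torch.tensor(2.0), atol=1e-5)
assert torch.allclose(d.variance, torch.tensor(4.0), atol=1e-4)
```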
torch.div(input, other, *, rounding_mode=None, out=None) → Tensor Divides each element of the input input by the corresponding element of other. out_i = input_i / other_i Note By default, this performs a “true” division like Python 3. See the rounding_mode argument for floor division. Supports broadcasting to a common shape, type promotion, and integer, float, and complex inputs. Always promotes integer types to the default scalar type. Parameters input (Tensor) – the dividend other (Tensor or Number) – the divisor Keyword Arguments rounding_mode (str, optional) – Type of rounding applied to the result: None - default behavior. Performs no rounding and, if both input and other are integer types, promotes the inputs to the default scalar type. Equivalent to true division in Python (the / operator) and NumPy’s np.true_divide. "trunc" - rounds the results of the division towards zero. Equivalent to C-style integer division. "floor" - rounds the results of the division down. Equivalent to floor division in Python (the // operator) and NumPy’s np.floor_divide. out (Tensor, optional) – the output tensor. Examples: >>> x = torch.tensor([ 0.3810, 1.2774, -0.2972, -0.3719, 0.4637]) >>> torch.div(x, 0.5) tensor([ 0.7620, 2.5548, -0.5944, -0.7438, 0.9274]) >>> a = torch.tensor([[-0.3711, -1.9353, -0.4605, -0.2917], ... [ 0.1815, -1.0111, 0.9805, -1.5923], ... [ 0.1062, 1.4581, 0.7759, -1.2344], ... [-0.1830, -0.0313, 1.1908, -1.4757]]) >>> b = torch.tensor([ 0.8032, 0.2930, -0.8113, -0.2308]) >>> torch.div(a, b) tensor([[-0.4620, -6.6051, 0.5676, 1.2639], [ 0.2260, -3.4509, -1.2086, 6.8990], [ 0.1322, 4.9764, -0.9564, 5.3484], [-0.2278, -0.1068, -1.4678, 6.3938]]) >>> torch.div(a, b, rounding_mode='trunc') tensor([[-0., -6., 0., 1.], [ 0., -3., -1., 6.], [ 0., 4., -0., 5.], [-0., -0., -1., 6.]]) >>> torch.div(a, b, rounding_mode='floor') tensor([[-1., -7., 0., 1.], [ 0., -4., -2., 6.], [ 0., 4., -1., 5.], [-1., -1., -2., 6.]])
torch.generated.torch.div#torch.div
torch.divide(input, other, *, rounding_mode=None, out=None) → Tensor Alias for torch.div().
torch.generated.torch.divide#torch.divide
torch.dot(input, other, *, out=None) → Tensor Computes the dot product of two 1D tensors. Note Unlike NumPy’s dot, torch.dot intentionally only supports computing the dot product of two 1D tensors with the same number of elements. Parameters input (Tensor) – first tensor in the dot product, must be 1D. other (Tensor) – second tensor in the dot product, must be 1D. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> torch.dot(torch.tensor([2, 3]), torch.tensor([2, 1])) tensor(7)
torch.generated.torch.dot#torch.dot
torch.dstack(tensors, *, out=None) → Tensor Stack tensors in sequence depthwise (along third axis). This is equivalent to concatenation along the third axis after 1-D and 2-D tensors have been reshaped by torch.atleast_3d(). Parameters tensors (sequence of Tensors) – sequence of tensors to concatenate Keyword Arguments out (Tensor, optional) – the output tensor. Example:: >>> a = torch.tensor([1, 2, 3]) >>> b = torch.tensor([4, 5, 6]) >>> torch.dstack((a,b)) tensor([[[1, 4], [2, 5], [3, 6]]]) >>> a = torch.tensor([[1],[2],[3]]) >>> b = torch.tensor([[4],[5],[6]]) >>> torch.dstack((a,b)) tensor([[[1, 4]], [[2, 5]], [[3, 6]]])
torch.generated.torch.dstack#torch.dstack
torch.eig(input, eigenvectors=False, *, out=None) -> (Tensor, Tensor) Computes the eigenvalues and eigenvectors of a real square matrix. Note Since eigenvalues and eigenvectors might be complex, backward pass is supported only if eigenvalues and eigenvectors are all real valued. When input is on CUDA, torch.eig() causes host-device synchronization. Parameters input (Tensor) – the square matrix of shape (n × n) for which the eigenvalues and eigenvectors will be computed eigenvectors (bool) – True to compute both eigenvalues and eigenvectors; otherwise, only eigenvalues will be computed Keyword Arguments out (tuple, optional) – the output tensors Returns A namedtuple (eigenvalues, eigenvectors) containing eigenvalues (Tensor): Shape (n × 2). Each row is an eigenvalue of input, where the first element is the real part and the second element is the imaginary part. The eigenvalues are not necessarily ordered. eigenvectors (Tensor): If eigenvectors=False, it’s an empty tensor. Otherwise, this tensor of shape (n × n) can be used to compute normalized (unit length) eigenvectors of corresponding eigenvalues as follows. If the corresponding eigenvalues[j] is a real number, column eigenvectors[:, j] is the eigenvector corresponding to eigenvalues[j]. If the corresponding eigenvalues[j] and eigenvalues[j + 1] form a complex conjugate pair, then the true eigenvectors can be computed as true_eigenvector[j] = eigenvectors[:, j] + i * eigenvectors[:, j + 1], true_eigenvector[j + 1] = eigenvectors[:, j] - i * eigenvectors[:, j + 1]. Return type (Tensor, Tensor) Example: Trivial example with a diagonal matrix. 
By default, only eigenvalues are computed: >>> a = torch.diag(torch.tensor([1, 2, 3], dtype=torch.double)) >>> e, v = torch.eig(a) >>> e tensor([[1., 0.], [2., 0.], [3., 0.]], dtype=torch.float64) >>> v tensor([], dtype=torch.float64) Compute also the eigenvectors: >>> e, v = torch.eig(a, eigenvectors=True) >>> e tensor([[1., 0.], [2., 0.], [3., 0.]], dtype=torch.float64) >>> v tensor([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], dtype=torch.float64)
torch.generated.torch.eig#torch.eig
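In later PyTorch releases torch.eig was deprecated in favor of torch.linalg.eig, which returns complex eigenvalues and eigenvectors directly instead of the (real, imaginary) column layout shown above; a sketch, assuming a release where torch.linalg.eig is available:

```python
import torch

# torch.linalg.eig returns complex tensors directly, replacing the
# (real, imaginary) two-column encoding used by the deprecated torch.eig.
a = torch.diag(torch.tensor([1.0, 2.0, 3.0], dtype=torch.double))
vals, vecs = torch.linalg.eig(a)

expected = torch.tensor([1.0, 2.0, 3.0], dtype=torch.double)
assert torch.allclose(vals.real.sort().values, expected)
assert torch.allclose(vals.imag, torch.zeros(3, dtype=torch.double))
assert vecs.shape == (3, 3)
```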