torch.log1p(input, *, out=None) → Tensor
Returns a new tensor with the natural logarithm of (1 + input). y_i = \log_{e} (x_i + 1)
Note This function is more accurate than torch.log() for small values of input Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(5)
>>> a
tensor([-1.0090, -0.9923, 1.0249, -0.5372, 0.2492])
>>> torch.log1p(a)
tensor([ nan, -4.8653, 0.7055, -0.7705, 0.2225]) | torch.generated.torch.log1p#torch.log1p |
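The accuracy note above can be made concrete. A minimal sketch (not part of the original docs) comparing log1p against the naive log(1 + x) for a value small enough that 1 + x rounds to exactly 1.0 in double precision:

```python
import torch

x = torch.tensor([1e-18], dtype=torch.float64)

naive = torch.log(1 + x)   # 1 + 1e-18 rounds to exactly 1.0, so this is 0.0
stable = torch.log1p(x)    # computed directly, stays close to 1e-18

print(naive.item())   # 0.0
print(stable.item())  # ~1e-18
```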
torch.log2(input, *, out=None) → Tensor
Returns a new tensor with the logarithm to the base 2 of the elements of input. y_{i} = \log_{2} (x_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.rand(5)
>>> a
tensor([ 0.8419, 0.8003, 0.9971, 0.5287, 0.0490])
>>> torch.log2(a)
tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504]) | torch.generated.torch.log2#torch.log2 |
torch.logaddexp(input, other, *, out=None) → Tensor
Logarithm of the sum of exponentiations of the inputs. Calculates pointwise \log\left(e^x + e^y\right). This function is useful in statistics where the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the logarithm of the calculated probability is stored. This function allows adding probabilities stored in such a fashion. This op should be disambiguated with torch.logsumexp() which performs a reduction on a single tensor. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.logaddexp(torch.tensor([-1.0]), torch.tensor([-1.0, -2, -3]))
tensor([-0.3069, -0.6867, -0.8731])
>>> torch.logaddexp(torch.tensor([-100.0, -200, -300]), torch.tensor([-1.0, -2, -3]))
tensor([-1., -2., -3.])
>>> torch.logaddexp(torch.tensor([1.0, 2000, 30000]), torch.tensor([-1.0, -2, -3]))
tensor([1.1269e+00, 2.0000e+03, 3.0000e+04]) | torch.generated.torch.logaddexp#torch.logaddexp |
torch.logaddexp2(input, other, *, out=None) → Tensor
Logarithm of the sum of exponentiations of the inputs in base-2. Calculates pointwise \log_2\left(2^x + 2^y\right). See torch.logaddexp() for more details. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. | torch.generated.torch.logaddexp2#torch.logaddexp2 |
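This entry has no example of its own; the following sketch (our addition, not from the original docs) illustrates the identity log2(2^x + 2^x) = x + 1 for equal inputs:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
out = torch.logaddexp2(a, a)  # log2(2^x + 2^x) = log2(2 * 2^x) = x + 1
print(out)  # tensor([2., 3., 4.])
```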
torch.logcumsumexp(input, dim, *, out=None) → Tensor
Returns the logarithm of the cumulative summation of the exponentiation of elements of input in the dimension dim. For summation index j given by dim and other indices i, the result is \text{logcumsumexp}(x)_{ij} = \log \sum_{j=0}^{i} \exp(x_{ij}) Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to do the operation over Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.randn(10)
>>> torch.logcumsumexp(a, dim=0)
tensor([-0.42296738, -0.04462666, 0.86278635, 0.94622083, 1.05277811,
         1.39202815,  1.83525007,  1.84492621,  2.06084887,  2.06844475]) | torch.generated.torch.logcumsumexp#torch.logcumsumexp |
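As a sanity check, the formula above can be reproduced naively with exp and cumsum (safe here because the inputs are small enough not to overflow; an illustrative sketch):

```python
import torch

a = torch.tensor([0.5, -1.0, 2.0])
out = torch.logcumsumexp(a, dim=0)

# manual cumulative log-sum-exp, per the formula in the text
manual = torch.log(torch.cumsum(torch.exp(a), dim=0))
print(torch.allclose(out, manual))  # True
```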
torch.logdet(input) → Tensor
Calculates log determinant of a square matrix or batches of square matrices. Note Result is -inf if input has zero determinant, and is nan if input has negative determinant. Note Backward through logdet() internally uses SVD results when input is not invertible. In this case, double backward through logdet() will be unstable when input doesn't have distinct singular values. See svd() for details. Parameters
input (Tensor) – the input tensor of size (*, n, n) where * is zero or more batch dimensions. Example: >>> A = torch.randn(3, 3)
>>> torch.det(A)
tensor(0.2611)
>>> torch.logdet(A)
tensor(-1.3430)
>>> A = torch.randn(3, 2, 2)
>>> A
tensor([[[ 0.9254, -0.6213],
[-0.5787, 1.6843]],
[[ 0.3242, -0.9665],
[ 0.4539, -0.0887]],
[[ 1.1336, -0.4025],
[-0.7089, 0.9032]]])
>>> A.det()
tensor([1.1990, 0.4099, 0.7386])
>>> A.det().log()
tensor([ 0.1815, -0.8917, -0.3031]) | torch.generated.torch.logdet#torch.logdet |
torch.logical_and(input, other, *, out=None) → Tensor
Computes the element-wise logical AND of the given input tensors. Zeros are treated as False and nonzeros are treated as True. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the tensor to compute AND with Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.logical_and(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ True, False, False])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_and(a, b)
tensor([False, False, True, False])
>>> torch.logical_and(a.double(), b.double())
tensor([False, False, True, False])
>>> torch.logical_and(a.double(), b)
tensor([False, False, True, False])
>>> torch.logical_and(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([False, False, True, False]) | torch.generated.torch.logical_and#torch.logical_and |
torch.logical_not(input, *, out=None) → Tensor
Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as False and non-zeros are treated as True. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.logical_not(torch.tensor([True, False]))
tensor([False, True])
>>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16))
tensor([1, 0, 0], dtype=torch.int16) | torch.generated.torch.logical_not#torch.logical_not |
torch.logical_or(input, other, *, out=None) → Tensor
Computes the element-wise logical OR of the given input tensors. Zeros are treated as False and nonzeros are treated as True. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the tensor to compute OR with Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.logical_or(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ True, False, True])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_or(a, b)
tensor([ True, True, True, False])
>>> torch.logical_or(a.double(), b.double())
tensor([ True, True, True, False])
>>> torch.logical_or(a.double(), b)
tensor([ True, True, True, False])
>>> torch.logical_or(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([ True, True, True, False]) | torch.generated.torch.logical_or#torch.logical_or |
torch.logical_xor(input, other, *, out=None) → Tensor
Computes the element-wise logical XOR of the given input tensors. Zeros are treated as False and nonzeros are treated as True. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the tensor to compute XOR with Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.logical_xor(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([False, False, True])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_xor(a, b)
tensor([ True, True, False, False])
>>> torch.logical_xor(a.double(), b.double())
tensor([ True, True, False, False])
>>> torch.logical_xor(a.double(), b)
tensor([ True, True, False, False])
>>> torch.logical_xor(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([ True, True, False, False]) | torch.generated.torch.logical_xor#torch.logical_xor |
torch.logit(input, eps=None, *, out=None) → Tensor
Returns a new tensor with the logit of the elements of input. input is clamped to [eps, 1 - eps] when eps is not None. When eps is None and input < 0 or input > 1, the function yields NaN. y_{i} = \ln\left(\frac{z_{i}}{1 - z_{i}}\right), \quad z_{i} = \begin{cases} x_{i} & \text{if eps is None} \\ \text{eps} & \text{if } x_{i} < \text{eps} \\ x_{i} & \text{if } \text{eps} \leq x_{i} \leq 1 - \text{eps} \\ 1 - \text{eps} & \text{if } x_{i} > 1 - \text{eps} \end{cases}
Parameters
input (Tensor) – the input tensor.
eps (float, optional) – the epsilon for input clamp bound. Default: None
Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.rand(5)
>>> a
tensor([0.2796, 0.9331, 0.6486, 0.1523, 0.6516])
>>> torch.logit(a, eps=1e-6)
tensor([-0.9466, 2.6352, 0.6131, -1.7169, 0.6261]) | torch.generated.torch.logit#torch.logit |
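Since the logit is the inverse of the sigmoid (logistic) function, composing the two should recover the input; a quick illustrative check (our addition):

```python
import torch

x = torch.tensor([-2.0, 0.0, 3.0])
roundtrip = torch.logit(torch.sigmoid(x))  # logit(sigmoid(x)) == x
print(torch.allclose(roundtrip, x, atol=1e-5))  # True
```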
torch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Creates a one-dimensional tensor of size steps whose values are evenly spaced from \text{base}^{\text{start}} to \text{base}^{\text{end}}, inclusive, on a logarithmic scale with base base. That is, the values are: (\text{base}^{\text{start}}, \text{base}^{(\text{start} + \frac{\text{end} - \text{start}}{\text{steps} - 1})}, \ldots, \text{base}^{(\text{start} + (\text{steps} - 2) * \frac{\text{end} - \text{start}}{\text{steps} - 1})}, \text{base}^{\text{end}})
Warning Not providing a value for steps is deprecated. For backwards compatibility, not providing a value for steps will create a tensor with 100 elements. Note that this behavior is not reflected in the documented function signature and should not be relied on. In a future PyTorch release, failing to provide a value for steps will throw a runtime error. Parameters
start (float) – the starting value for the set of points
end (float) – the ending value for the set of points
steps (int) – size of the constructed tensor
base (float, optional) – base of the logarithm function. Default: 10.0. Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.logspace(start=-10, end=10, steps=5)
tensor([ 1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
>>> torch.logspace(start=0.1, end=1.0, steps=5)
tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])
>>> torch.logspace(start=0.1, end=1.0, steps=1)
tensor([1.2589])
>>> torch.logspace(start=2, end=2, steps=1, base=2)
tensor([4.0]) | torch.generated.torch.logspace#torch.logspace |
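The formula above is equivalent to exponentiating an evenly spaced tensor, which can be sketched as follows (an illustration, not from the original docs):

```python
import torch

out = torch.logspace(start=-2, end=2, steps=5, base=10.0)
# equivalent to raising the base to an evenly spaced exponent tensor:
manual = 10.0 ** torch.linspace(-2, 2, steps=5)
print(torch.allclose(out, manual))  # True
```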
torch.logsumexp(input, dim, keepdim=False, *, out=None)
Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. The computation is numerically stabilized. For summation index j given by dim and other indices i, the result is \text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij}) If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.randn(3, 3)
>>> torch.logsumexp(a, 1)
tensor([ 0.8442, 1.4322, 0.8711]) | torch.generated.torch.logsumexp#torch.logsumexp |
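The claim of numerical stabilization can be illustrated by comparing against the naive computation, which overflows for large inputs (a sketch, our addition):

```python
import torch

a = torch.tensor([[1000.0, 1000.0]])

naive = torch.log(torch.exp(a).sum(dim=1))  # exp(1000) overflows to inf
stable = torch.logsumexp(a, dim=1)          # shifts by the max internally

print(naive)   # tensor([inf])
print(stable)  # tensor([1000.6931]), i.e. 1000 + ln 2
```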
torch.lstsq(input, A, *, out=None) → Tensor
Computes the solution to the least squares and least norm problems for a full rank matrix A of size (m \times n) and a matrix B of size (m \times k). If m \geq n, lstsq() solves the least-squares problem: \min_X \|AX - B\|_2.
If m < n, lstsq() solves the least-norm problem: \min_X \|X\|_2 \quad \text{subject to} \quad AX = B.
Returned tensor X has shape (\max(m, n) \times k). The first n rows of X contain the solution. If m \geq n, the residual sum of squares for the solution in each column is given by the sum of squares of elements in the remaining m - n rows of that column. Note The case when m < n is not supported on the GPU. Parameters
input (Tensor) – the matrix B
A (Tensor) – the m by n matrix A
Keyword Arguments
out (tuple, optional) – the optional destination tensor Returns
A namedtuple (solution, QR) containing:
solution (Tensor): the least squares solution
QR (Tensor): the details of the QR factorization Return type
(Tensor, Tensor) Note The returned matrices will always be transposed, irrespective of the strides of the input matrices. That is, they will have stride (1, m) instead of (m, 1). Example: >>> A = torch.tensor([[1., 1, 1],
... [2, 3, 4],
... [3, 5, 2],
... [4, 2, 5],
... [5, 4, 3]])
>>> B = torch.tensor([[-10., -3],
... [ 12, 14],
... [ 14, 12],
... [ 16, 16],
... [ 18, 16]])
>>> X, _ = torch.lstsq(B, A)
>>> X
tensor([[ 2.0000, 1.0000],
[ 1.0000, 1.0000],
[ 1.0000, 2.0000],
[ 10.9635, 4.8501],
[ 8.9332, 5.2418]]) | torch.generated.torch.lstsq#torch.lstsq |
torch.lt(input, other, *, out=None) → Tensor
Computes \text{input} < \text{other} element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters
input (Tensor) – the tensor to compare
other (Tensor or float) – the tensor or value to compare Keyword Arguments
out (Tensor, optional) – the output tensor. Returns
A boolean tensor that is True where input is less than other and False elsewhere Example: >>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, False], [True, False]]) | torch.generated.torch.lt#torch.lt |
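Comparison against a number, mentioned above, broadcasts the scalar across the tensor; a short illustrative sketch (our addition):

```python
import torch

a = torch.tensor([[1, 2], [3, 4]])
out = torch.lt(a, 3)  # scalar `other` is broadcast against `a`
print(out)
# tensor([[ True,  True],
#         [False, False]])
```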
torch.lu(*args, **kwargs)
Computes the LU factorization of a matrix or batches of matrices A. Returns a tuple containing the LU factorization and pivots of A. Pivoting is done if pivot is set to True. Note The pivots returned by the function are 1-indexed. If pivot is False, then the returned pivots is a tensor filled with zeros of the appropriate size. Note LU factorization with pivot = False is not available for CPU, and attempting to do so will throw an error. However, LU factorization with pivot = False is available for CUDA. Note This function does not check if the factorization was successful or not if get_infos is True since the status of the factorization is present in the third element of the return tuple. Note In the case of batches of square matrices with size less or equal to 32 on a CUDA device, the LU factorization is repeated for singular matrices due to the bug in the MAGMA library (see magma issue 13). Note L, U, and P can be derived using torch.lu_unpack(). Warning The LU factorization does have backward support, but only for square inputs of full rank. Parameters
A (Tensor) – the tensor to factor of size (*, m, n)
pivot (bool, optional) – controls whether pivoting is done. Default: True
get_infos (bool, optional) – if set to True, returns an info IntTensor. Default: False
out (tuple, optional) – optional output tuple. If get_infos is True, then the elements in the tuple are Tensor, IntTensor, and IntTensor. If get_infos is False, then the elements in the tuple are Tensor, IntTensor. Default: None
Returns
A tuple of tensors containing
factorization (Tensor): the factorization of size (*, m, n)
pivots (IntTensor): the pivots of size (*, \min(m, n)). pivots stores all the intermediate transpositions of rows. The final permutation perm could be reconstructed by applying swap(perm[i], perm[pivots[i] - 1]) for i = 0, ..., pivots.size(-1) - 1, where perm is initially the identity permutation of m elements (essentially this is what torch.lu_unpack() is doing).
infos (IntTensor, optional): if get_infos is True, this is a tensor of size (*) where non-zero values indicate whether factorization for the matrix or each minibatch has succeeded or failed Return type
(Tensor, IntTensor, IntTensor (optional)) Example: >>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = torch.lu(A)
>>> A_LU
tensor([[[ 1.3506, 2.5558, -0.0816],
[ 0.1684, 1.1551, 0.1940],
[ 0.1193, 0.6189, -0.5497]],
[[ 0.4526, 1.2526, -0.3285],
[-0.7988, 0.7175, -0.9701],
[ 0.2634, -0.9255, -0.3459]]])
>>> pivots
tensor([[ 3, 3, 3],
[ 3, 3, 3]], dtype=torch.int32)
>>> A_LU, pivots, info = torch.lu(A, get_infos=True)
>>> if info.nonzero().size(0) == 0:
... print('LU factorization succeeded for all samples!')
LU factorization succeeded for all samples! | torch.generated.torch.lu#torch.lu |
torch.lu_solve(b, LU_data, LU_pivots, *, out=None) → Tensor
Returns the LU solve of the linear system Ax = b using the partially pivoted LU factorization of A from torch.lu(). This function supports float, double, cfloat and cdouble dtypes for input. Parameters
b (Tensor) – the RHS tensor of size (*, m, k), where * is zero or more batch dimensions.
LU_data (Tensor) – the pivoted LU factorization of A from torch.lu() of size (*, m, m), where * is zero or more batch dimensions.
LU_pivots (IntTensor) – the pivots of the LU factorization from torch.lu() of size (*, m), where * is zero or more batch dimensions. The batch dimensions of LU_pivots must be equal to the batch dimensions of LU_data. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> A = torch.randn(2, 3, 3)
>>> b = torch.randn(2, 3, 1)
>>> A_LU = torch.lu(A)
>>> x = torch.lu_solve(b, *A_LU)
>>> torch.norm(torch.bmm(A, x) - b)
tensor(1.00000e-07 *
2.8312) | torch.generated.torch.lu_solve#torch.lu_solve |
torch.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True) [source]
Unpacks the data and pivots from a LU factorization of a tensor. Returns a tuple of tensors as (the pivots, the L tensor, the U tensor). Parameters
LU_data (Tensor) – the packed LU factorization data
LU_pivots (Tensor) – the packed LU factorization pivots
unpack_data (bool) – flag indicating if the data should be unpacked
unpack_pivots (bool) – flag indicating if the pivots should be unpacked Examples: >>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = A.lu()
>>> P, A_L, A_U = torch.lu_unpack(A_LU, pivots)
>>>
>>> # can recover A from factorization
>>> A_ = torch.bmm(P, torch.bmm(A_L, A_U))
>>> # LU factorization of a rectangular matrix:
>>> A = torch.randn(2, 3, 2)
>>> A_LU, pivots = A.lu()
>>> P, A_L, A_U = torch.lu_unpack(A_LU, pivots)
>>> P
tensor([[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]],
[[0., 0., 1.],
[0., 1., 0.],
[1., 0., 0.]]])
>>> A_L
tensor([[[ 1.0000, 0.0000],
[ 0.4763, 1.0000],
[ 0.3683, 0.1135]],
[[ 1.0000, 0.0000],
[ 0.2957, 1.0000],
[-0.9668, -0.3335]]])
>>> A_U
tensor([[[ 2.1962, 1.0881],
[ 0.0000, -0.8681]],
[[-1.0947, 0.3736],
[ 0.0000, 0.5718]]])
>>> A_ = torch.bmm(P, torch.bmm(A_L, A_U))
>>> torch.norm(A_ - A)
tensor(2.9802e-08) | torch.generated.torch.lu_unpack#torch.lu_unpack |
torch.manual_seed(seed) [source]
Sets the seed for generating random numbers. Returns a torch.Generator object. Parameters
seed (int) – The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula 0xffff_ffff_ffff_ffff + seed. | torch.generated.torch.manual_seed#torch.manual_seed |
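A short illustrative sketch of seeding for reproducibility (our addition, not part of the original entry): re-seeding with the same value makes subsequent random draws identical.

```python
import torch

torch.manual_seed(42)
a = torch.randn(3)

torch.manual_seed(42)  # reset the generator to the same state
b = torch.randn(3)

print(torch.equal(a, b))  # True: same seed, same sequence
```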
torch.masked_select(input, mask, *, out=None) → Tensor
Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which is a BoolTensor. The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable. Note The returned tensor does not use the same storage as the original tensor Parameters
input (Tensor) – the input tensor.
mask (BoolTensor) – the tensor containing the binary mask to index with Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
[-1.2035, 1.2252, 0.5002, 0.6248],
[ 0.1307, -2.0608, 0.1244, 2.0139]])
>>> mask = x.ge(0.5)
>>> mask
tensor([[False, False, False, False],
[False, True, True, True],
[False, False, False, True]])
>>> torch.masked_select(x, mask)
tensor([ 1.2252, 0.5002, 0.6248, 2.0139]) | torch.generated.torch.masked_select#torch.masked_select |
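The note that the result is a new 1-D tensor, and its relationship to boolean indexing, can be illustrated as follows (a sketch, not from the original docs):

```python
import torch

x = torch.tensor([[1.0, -2.0], [3.0, -4.0]])
mask = x > 0                            # boolean mask, broadcastable with x
selected = torch.masked_select(x, mask)
print(selected)                         # always a new 1-D tensor: tensor([1., 3.])
print(torch.equal(selected, x[mask]))   # True: same result as boolean indexing
```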
torch.matmul(input, other, *, out=None) → Tensor
Matrix product of two tensors. The behavior depends on the dimensionality of the tensors as follows: If both tensors are 1-dimensional, the dot product (scalar) is returned. If both arguments are 2-dimensional, the matrix-matrix product is returned. If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed. If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a (j \times 1 \times n \times n) tensor and other is a (k \times n \times n) tensor, out will be a (j \times k \times n \times n) tensor. Note that the broadcasting logic only looks at the batch dimensions when determining if the inputs are broadcastable, and not the matrix dimensions. For example, if input is a (j \times 1 \times n \times m) tensor and other is a (k \times m \times p) tensor, these inputs are valid for broadcasting even though the final two dimensions (i.e. the matrix dimensions) are different. out will be a (j \times k \times n \times p) tensor. This operator supports TensorFloat32. Note The 1-dimensional dot product version of this function does not support an out parameter. Parameters
input (Tensor) – the first tensor to be multiplied
other (Tensor) – the second tensor to be multiplied Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> # vector x vector
>>> tensor1 = torch.randn(3)
>>> tensor2 = torch.randn(3)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([])
>>> # matrix x vector
>>> tensor1 = torch.randn(3, 4)
>>> tensor2 = torch.randn(4)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([3])
>>> # batched matrix x broadcasted vector
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(4)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3])
>>> # batched matrix x batched matrix
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(10, 4, 5)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3, 5])
>>> # batched matrix x broadcasted matrix
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(4, 5)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3, 5]) | torch.generated.torch.matmul#torch.matmul |
torch.matrix_exp()
Returns the matrix exponential. Supports batched input. For a matrix A, the matrix exponential is defined as \mathrm{e}^A = \sum_{k=0}^\infty A^k / k!
The implementation is based on: Bader, P.; Blanes, S.; Casas, F. Computing the Matrix Exponential with an Optimized Taylor Polynomial Approximation. Mathematics 2019, 7, 1174. Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(2, 2, 2)
>>> a[0, :, :] = torch.eye(2, 2)
>>> a[1, :, :] = 2 * torch.eye(2, 2)
>>> a
tensor([[[1., 0.],
[0., 1.]],
[[2., 0.],
[0., 2.]]])
>>> torch.matrix_exp(a)
tensor([[[2.7183, 0.0000],
[0.0000, 2.7183]],
[[7.3891, 0.0000],
[0.0000, 7.3891]]])
>>> import math
>>> x = torch.tensor([[0, math.pi/3], [-math.pi/3, 0]])
>>> x.matrix_exp() # should be [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]]
tensor([[ 0.5000, 0.8660],
[-0.8660, 0.5000]]) | torch.generated.torch.matrix_exp#torch.matrix_exp |
torch.matrix_power(input, n) → Tensor
Returns the matrix raised to the power n for square matrices. For batch of matrices, each individual matrix is raised to the power n. If n is negative, then the inverse of the matrix (if invertible) is raised to the power n. For a batch of matrices, the batched inverse (if invertible) is raised to the power n. If n is 0, then an identity matrix is returned. Parameters
input (Tensor) – the input tensor.
n (int) – the power to raise the matrix to Example: >>> a = torch.randn(2, 2, 2)
>>> a
tensor([[[-1.9975, -1.9610],
[ 0.9592, -2.3364]],
[[-1.2534, -1.3429],
[ 0.4153, -1.4664]]])
>>> torch.matrix_power(a, 3)
tensor([[[ 3.9392, -23.9916],
[ 11.7357, -0.2070]],
[[ 0.2468, -6.7168],
[ 2.0774, -0.8187]]]) | torch.generated.torch.matrix_power#torch.matrix_power |
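The special cases described above (n = 0 yields the identity; negative n raises the inverse to the power) can be sketched as follows (our addition, using an invertible diagonal matrix for clarity):

```python
import torch

a = torch.tensor([[2.0, 0.0], [0.0, 3.0]])  # invertible, so n < 0 is allowed

print(torch.matrix_power(a, 0))   # identity matrix
print(torch.matrix_power(a, 2))   # same as a @ a
print(torch.matrix_power(a, -1))  # inverse of a
```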
torch.matrix_rank(input, tol=None, symmetric=False, *, out=None) → Tensor
Returns the numerical rank of a 2-D tensor. The method to compute the matrix rank is done using SVD by default. If symmetric is True, then input is assumed to be symmetric, and the computation of the rank is done by obtaining the eigenvalues. tol is the threshold below which the singular values (or the eigenvalues when symmetric is True) are considered to be 0. If tol is not specified, tol is set to S.max() * max(S.size()) * eps where S is the singular values (or the eigenvalues when symmetric is True), and eps is the epsilon value for the datatype of input. Note torch.matrix_rank() is deprecated. Please use torch.linalg.matrix_rank() instead. The parameter symmetric was renamed in torch.linalg.matrix_rank() to hermitian. Parameters
input (Tensor) – the input 2-D tensor
tol (float, optional) – the tolerance value. Default: None
symmetric (bool, optional) – indicates whether input is symmetric. Default: False
Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.eye(10)
>>> torch.matrix_rank(a)
tensor(10)
>>> b = torch.eye(10)
>>> b[0, 0] = 0
>>> torch.matrix_rank(b)
tensor(9) | torch.generated.torch.matrix_rank#torch.matrix_rank |
torch.max(input) → Tensor
Returns the maximum value of all elements in the input tensor. Warning This function produces deterministic (sub)gradients unlike max(dim=0) Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6763, 0.7445, -2.2369]])
>>> torch.max(a)
tensor(0.7445)
torch.max(input, dim, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax). If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input. Note If there are multiple maximal values in a reduced row then the indices of the first maximal value are returned. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Default: False. Keyword Arguments
out (tuple, optional) – the result tuple of two output tensors (max, max_indices) Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[-1.2360, -0.2942, -0.1222, 0.8475],
[ 1.1949, -1.1127, -2.2379, -0.6702],
[ 1.5717, -0.9207, 0.1297, -1.8768],
[-0.6172, 1.0036, -0.6060, -0.2432]])
>>> torch.max(a, 1)
torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1]))
torch.max(input, other, *, out=None) → Tensor
See torch.maximum(). | torch.generated.torch.max#torch.max |
torch.maximum(input, other, *, out=None) → Tensor
Computes the element-wise maximum of input and other. Note If one of the elements being compared is a NaN, then that element is returned. maximum() is not supported for tensors with complex dtypes. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor((1, 2, -1))
>>> b = torch.tensor((3, 0, 4))
>>> torch.maximum(a, b)
tensor([3, 2, 4]) | torch.generated.torch.maximum#torch.maximum |
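The NaN-propagation note above can be illustrated directly (a sketch, our addition): wherever either operand is NaN, the result is NaN.

```python
import torch

a = torch.tensor([1.0, float('nan'), 3.0])
b = torch.tensor([2.0, 0.0, float('nan')])
out = torch.maximum(a, b)
print(out)  # tensor([2., nan, nan])
```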
torch.mean(input) → Tensor
Returns the mean value of all elements in the input tensor. Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.2294, -0.5481, 1.3288]])
>>> torch.mean(a)
tensor(0.3367)
torch.mean(input, dim, keepdim=False, *, out=None) → Tensor
Returns the mean value of each row of the input tensor in the given dimension dim. If dim is a list of dimensions, reduce over all of them. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3841, 0.6320, 0.4254, -0.7384],
[-0.9644, 1.0131, -0.6549, -1.4279],
[-0.2951, -1.3350, -0.7694, 0.5600],
[ 1.0842, -0.9580, 0.3623, 0.2343]])
>>> torch.mean(a, 1)
tensor([-0.0163, -0.5085, -0.4599, 0.1807])
>>> torch.mean(a, 1, True)
tensor([[-0.0163],
[-0.5085],
[-0.4599],
[ 0.1807]]) | torch.generated.torch.mean#torch.mean |
torch.median(input) → Tensor
Returns the median of the values in input. Note The median is not unique for input tensors with an even number of elements. In this case the lower of the two medians is returned. To compute the mean of both medians, use torch.quantile() with q=0.5 instead. Warning This function produces deterministic (sub)gradients unlike median(dim=0) Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 1.5219, -1.5212, 0.2202]])
>>> torch.median(a)
tensor(0.2202)
torch.median(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values contains the median of each row of input in the dimension dim, and indices contains the index of the median values found in the dimension dim. By default, dim is the last dimension of the input tensor. If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the outputs tensor having 1 fewer dimension than input. Note The median is not unique for input tensors with an even number of elements in the dimension dim. In this case the lower of the two medians is returned. To compute the mean of both medians in input, use torch.quantile() with q=0.5 instead. Warning indices does not necessarily contain the first occurrence of each median value found, unless it is unique. The exact implementation details are device-specific. Do not expect the same result when run on CPU and GPU in general. For the same reason do not expect the gradients to be deterministic. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out ((Tensor, Tensor), optional) – The first tensor will be populated with the median values and the second tensor, which must have dtype long, with their indices in the dimension dim of input. Example: >>> a = torch.randn(4, 5)
>>> a
tensor([[ 0.2505, -0.3982, -0.9948, 0.3518, -1.3131],
[ 0.3180, -0.6993, 1.0436, 0.0438, 0.2270],
[-0.2751, 0.7303, 0.2192, 0.3321, 0.2488],
[ 1.0778, -1.9510, 0.7048, 0.4742, -0.7125]])
>>> torch.median(a, 1)
torch.return_types.median(values=tensor([-0.3982, 0.2270, 0.2488, 0.4742]), indices=tensor([1, 4, 4, 3])) | torch.generated.torch.median#torch.median |
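A small sketch (not part of the PyTorch docs) illustrating the note above: for an even number of elements, torch.median() returns the lower of the two middle values, while torch.quantile() with q=0.5 interpolates between them.

```python
import torch

# Even number of elements: median() returns the lower of the two
# middle values, while quantile(q=0.5) averages them.
t = torch.tensor([1., 2., 3., 4.])

lower_median = torch.median(t)         # lower of 2. and 3.
mid_quantile = torch.quantile(t, 0.5)  # interpolates between 2. and 3.

print(lower_median)  # tensor(2.)
print(mid_quantile)  # tensor(2.5000)
```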
torch.meshgrid(*tensors) [source]
Take N tensors, each of which can be either scalar or 1-dimensional vector, and create N N-dimensional grids, where the i th grid is defined by expanding the i th input over dimensions defined by other inputs. Parameters
tensors (list of Tensor) – list of scalars or 1 dimensional tensors. Scalars will be treated as tensors of size (1,) automatically Returns
If the input has k tensors of size (N_1,), (N_2,), \ldots, (N_k,), then the output would also have k tensors, where all tensors are of size (N_1, N_2, \ldots, N_k). Return type
seq (sequence of Tensors) Example: >>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([4, 5, 6])
>>> grid_x, grid_y = torch.meshgrid(x, y)
>>> grid_x
tensor([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
>>> grid_y
tensor([[4, 5, 6],
[4, 5, 6],
[4, 5, 6]]) | torch.generated.torch.meshgrid#torch.meshgrid |
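A typical use of the grids above (a sketch, not from the docs) is evaluating a function on the Cartesian product of two 1-D ranges; note that this documentation version predates the later indexing keyword, and the default here is matrix ('ij') indexing.

```python
import torch

# Evaluate f(x, y) = x * y over the Cartesian product of two 1-D ranges.
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
grid_x, grid_y = torch.meshgrid(x, y)

f = grid_x * grid_y  # f[i, j] == x[i] * y[j]
print(f.shape)       # torch.Size([3, 3])
```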
torch.min(input) → Tensor
Returns the minimum value of all elements in the input tensor. Warning This function produces deterministic (sub)gradients unlike min(dim=0) Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6750, 1.0857, 1.7197]])
>>> torch.min(a)
tensor(0.6750)
torch.min(input, dim, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the minimum value of each row of the input tensor in the given dimension dim. And indices is the index location of each minimum value found (argmin). If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input. Note If there are multiple minimal values in a reduced row then the indices of the first minimal value are returned. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (tuple, optional) – the tuple of two output tensors (min, min_indices) Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[-0.6248, 1.1334, -1.1899, -0.2803],
[-1.4644, -0.2635, -0.3651, 0.6134],
[ 0.2457, 0.0384, 1.0128, 0.7015],
[-0.1153, 2.9849, 2.1458, 0.5788]])
>>> torch.min(a, 1)
torch.return_types.min(values=tensor([-1.1899, -1.4644, 0.0384, -0.1153]), indices=tensor([2, 0, 1, 0]))
torch.min(input, other, *, out=None) → Tensor
See torch.minimum(). | torch.generated.torch.min#torch.min |
torch.minimum(input, other, *, out=None) → Tensor
Computes the element-wise minimum of input and other. Note If one of the elements being compared is a NaN, then that element is returned. minimum() is not supported for tensors with complex dtypes. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor((1, 2, -1))
>>> b = torch.tensor((3, 0, 4))
>>> torch.minimum(a, b)
tensor([1, 0, -1]) | torch.generated.torch.minimum#torch.minimum |
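A short sketch (not from the docs) of the NaN-propagation rule stated in the note: if either compared element is NaN, the result at that position is NaN.

```python
import torch

# NaN propagation: if either compared element is NaN, the result is NaN.
a = torch.tensor([1.0, float('nan')])
b = torch.tensor([2.0, 2.0])

m = torch.minimum(a, b)
print(m)  # tensor([1., nan])
```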
torch.mm(input, mat2, *, out=None) → Tensor
Performs a matrix multiplication of the matrices input and mat2. If input is a (n \times m) tensor, mat2 is a (m \times p) tensor, out will be a (n \times p) tensor. Note This function does not broadcast. For broadcasting matrix products, see torch.matmul(). Supports strided and sparse 2-D tensors as inputs, autograd with respect to strided inputs. This operator supports TensorFloat32. Parameters
input (Tensor) – the first matrix to be matrix multiplied
mat2 (Tensor) – the second matrix to be matrix multiplied Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> mat1 = torch.randn(2, 3)
>>> mat2 = torch.randn(3, 3)
>>> torch.mm(mat1, mat2)
tensor([[ 0.4851, 0.5037, -0.3633],
[-0.0760, -3.6705, 2.4784]]) | torch.generated.torch.mm#torch.mm |
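A sketch (not from the docs) contrasting the no-broadcast contract of torch.mm with torch.matmul, which additionally handles batched and broadcast inputs.

```python
import torch

# torch.mm requires exact (n, m) @ (m, p) 2-D shapes and never broadcasts;
# torch.matmul gives the same result for 2-D inputs but also batches.
a = torch.randn(2, 3)
b = torch.randn(3, 4)

out = torch.mm(a, b)       # shape (2, 4)
same = torch.matmul(a, b)  # identical result for plain 2-D inputs

batched = torch.randn(5, 2, 3)
# torch.mm(batched, b) would raise; matmul broadcasts b over the batch:
out_b = torch.matmul(batched, b)  # shape (5, 2, 4)
```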
torch.mode(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found. By default, dim is the last dimension of the input tensor. If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input. Note This function is not defined for torch.cuda.Tensor yet. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (tuple, optional) – the result tuple of two output tensors (values, indices) Example: >>> a = torch.randint(10, (5,))
>>> a
tensor([6, 5, 1, 0, 2])
>>> b = a + (torch.randn(50, 1) * 5).long()
>>> torch.mode(b, 0)
torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2])) | torch.generated.torch.mode#torch.mode |
torch.moveaxis(input, source, destination) → Tensor
Alias for torch.movedim(). This function is equivalent to NumPy’s moveaxis function. Examples: >>> t = torch.randn(3,2,1)
>>> t
tensor([[[-0.3362],
[-0.8437]],
[[-0.9627],
[ 0.1727]],
[[ 0.5173],
[-0.1398]]])
>>> torch.moveaxis(t, 1, 0).shape
torch.Size([2, 3, 1])
>>> torch.moveaxis(t, 1, 0)
tensor([[[-0.3362],
[-0.9627],
[ 0.5173]],
[[-0.8437],
[ 0.1727],
[-0.1398]]])
>>> torch.moveaxis(t, (1, 2), (0, 1)).shape
torch.Size([2, 1, 3])
>>> torch.moveaxis(t, (1, 2), (0, 1))
tensor([[[-0.3362, -0.9627, 0.5173]],
[[-0.8437, 0.1727, -0.1398]]]) | torch.generated.torch.moveaxis#torch.moveaxis |
torch.movedim(input, source, destination) → Tensor
Moves the dimension(s) of input at the position(s) in source to the position(s) in destination. Other dimensions of input that are not explicitly moved remain in their original order and appear at the positions not specified in destination. Parameters
input (Tensor) – the input tensor.
source (int or tuple of ints) – Original positions of the dims to move. These must be unique.
destination (int or tuple of ints) – Destination positions for each of the original dims. These must also be unique. Examples: >>> t = torch.randn(3,2,1)
>>> t
tensor([[[-0.3362],
[-0.8437]],
[[-0.9627],
[ 0.1727]],
[[ 0.5173],
[-0.1398]]])
>>> torch.movedim(t, 1, 0).shape
torch.Size([2, 3, 1])
>>> torch.movedim(t, 1, 0)
tensor([[[-0.3362],
[-0.9627],
[ 0.5173]],
[[-0.8437],
[ 0.1727],
[-0.1398]]])
>>> torch.movedim(t, (1, 2), (0, 1)).shape
torch.Size([2, 1, 3])
>>> torch.movedim(t, (1, 2), (0, 1))
tensor([[[-0.3362, -0.9627, 0.5173]],
[[-0.8437, 0.1727, -0.1398]]]) | torch.generated.torch.movedim#torch.movedim |
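A common use of movedim (a sketch, not from the docs): moving the channel dimension of an NCHW image batch to the end, which a fully spelled-out permute call would express less concisely.

```python
import torch

# Move the channel dimension of an NCHW image batch to the end (NHWC).
nchw = torch.randn(8, 3, 32, 32)

nhwc = torch.movedim(nchw, 1, -1)
print(nhwc.shape)  # torch.Size([8, 32, 32, 3])

# Equivalent permute call, spelling out every dimension explicitly:
assert torch.equal(nhwc, nchw.permute(0, 2, 3, 1))
```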
torch.msort(input, *, out=None) → Tensor
Sorts the elements of the input tensor along its first dimension in ascending order by value. Note torch.msort(t) is equivalent to torch.sort(t, dim=0)[0]. See also torch.sort(). Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> t = torch.randn(3, 4)
>>> t
tensor([[-0.1321, 0.4370, -1.2631, -1.1289],
[-2.0527, -1.1250, 0.2275, 0.3077],
[-0.0881, -0.1259, -0.5495, 1.0284]])
>>> torch.msort(t)
tensor([[-2.0527, -1.1250, -1.2631, -1.1289],
[-0.1321, -0.1259, -0.5495, 0.3077],
[-0.0881, 0.4370, 0.2275, 1.0284]]) | torch.generated.torch.msort#torch.msort |
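A sketch (not from the docs) checking the equivalence stated in the note above: msort returns only the sorted values of a dim-0 sort, discarding the indices.

```python
import torch

# torch.msort(t) sorts along dim 0 and is equivalent to
# torch.sort(t, dim=0)[0] (values only, indices discarded).
t = torch.randn(3, 4)

assert torch.equal(torch.msort(t), torch.sort(t, dim=0)[0])
```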
torch.mul(input, other, *, out=None)
Multiplies each element of the input input with the scalar other and returns a new resulting tensor. \text{out}_i = \text{other} \times \text{input}_i
If input is of type FloatTensor or DoubleTensor, other should be a real number, otherwise it should be an integer Parameters
input (Tensor) – the input tensor.
other (Number) – the number to be multiplied to each element of input
Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(3)
>>> a
tensor([ 0.2015, -0.4255, 2.6087])
>>> torch.mul(a, 100)
tensor([ 20.1494, -42.5491, 260.8663])
torch.mul(input, other, *, out=None)
Each element of the tensor input is multiplied by the corresponding element of the Tensor other. The resulting tensor is returned. The shapes of input and other must be broadcastable. \text{out}_i = \text{input}_i \times \text{other}_i
Parameters
input (Tensor) – the first multiplicand tensor
other (Tensor) – the second multiplicand tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4, 1)
>>> a
tensor([[ 1.1207],
[-0.3137],
[ 0.0700],
[ 0.8378]])
>>> b = torch.randn(1, 4)
>>> b
tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])
>>> torch.mul(a, b)
tensor([[ 0.5767, 0.1363, -0.5877, 2.5083],
[-0.1614, -0.0382, 0.1645, -0.7021],
[ 0.0360, 0.0085, -0.0367, 0.1567],
[ 0.4312, 0.1019, -0.4394, 1.8753]]) | torch.generated.torch.mul#torch.mul |
torch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None) → LongTensor
Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input. Note The rows of input do not need to sum to one (in which case we use the values as weights), but must be non-negative, finite and have a non-zero sum. Indices are ordered from left to right according to when each was sampled (first samples are placed in first column). If input is a vector, out is a vector of size num_samples. If input is a matrix with m rows, out is a matrix of shape (m \times \text{num\_samples}). If replacement is True, samples are drawn with replacement. If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row. Note When drawn without replacement, num_samples must be lower than the number of non-zero elements in input (or the min number of non-zero elements in each row of input if it is a matrix). Parameters
input (Tensor) – the input tensor containing probabilities
num_samples (int) – number of samples to draw
replacement (bool, optional) – whether to draw with replacement or not Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling
out (Tensor, optional) – the output tensor. Example: >>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float) # create a tensor of weights
>>> torch.multinomial(weights, 2)
tensor([1, 2])
>>> torch.multinomial(weights, 4) # ERROR!
RuntimeError: invalid argument 2: invalid multinomial distribution (with replacement=False,
not enough non-negative category to sample) at ../aten/src/TH/generic/THTensorRandom.cpp:320
>>> torch.multinomial(weights, 4, replacement=True)
tensor([ 2, 1, 1, 1]) | torch.generated.torch.multinomial#torch.multinomial |
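A sketch (not from the docs) of the weighting rule stated in the note: rows act as unnormalized weights, so zero-weight categories can never be drawn.

```python
import torch

# Rows need not sum to 1; the values act as unnormalized weights.
# Zero-weight categories (indices 0 and 3 here) can never be drawn.
weights = torch.tensor([0., 10., 3., 0.])

samples = torch.multinomial(weights, 100, replacement=True)
assert set(samples.tolist()) <= {1, 2}  # only nonzero-weight indices appear
```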
torch.multiply(input, other, *, out=None)
Alias for torch.mul(). | torch.generated.torch.multiply#torch.multiply |
Multiprocessing package - torch.multiprocessing torch.multiprocessing is a wrapper around the native multiprocessing module. It registers custom reducers that use shared memory to provide shared views on the same data in different processes. Once the tensor/storage is moved to shared memory (see share_memory_()), it will be possible to send it to other processes without making any copies. The API is 100% compatible with the original module - it’s enough to change import multiprocessing to import torch.multiprocessing to have all the tensors sent through the queues, or shared via other mechanisms, moved to shared memory. Because of the similarity of APIs we do not document most of this package’s contents, and we recommend referring to the very good docs of the original module. Warning If the main process exits abruptly (e.g. because of an incoming signal), Python’s multiprocessing sometimes fails to clean up its children. It’s a known caveat, so if you’re seeing any resource leaks after interrupting the interpreter, it probably means that this has just happened to you. Strategy management
torch.multiprocessing.get_all_sharing_strategies() [source]
Returns a set of sharing strategies supported on the current system.
torch.multiprocessing.get_sharing_strategy() [source]
Returns the current strategy for sharing CPU tensors.
torch.multiprocessing.set_sharing_strategy(new_strategy) [source]
Sets the strategy for sharing CPU tensors. Parameters
new_strategy (str) – Name of the selected strategy. Should be one of the values returned by get_all_sharing_strategies().
Sharing CUDA tensors Sharing CUDA tensors between processes is supported only in Python 3, using a spawn or forkserver start method. Unlike CPU tensors, the sending process is required to keep the original tensor as long as the receiving process retains a copy of the tensor. The refcounting is implemented under the hood but requires users to follow these best practices. Warning If the consumer process dies abnormally due to a fatal signal, the shared tensor could be kept in memory forever as long as the sending process is running. 1. Release memory ASAP in the consumer. ## Good
x = queue.get()
# do something with x
del x
## Bad
x = queue.get()
# do something with x
# do everything else (producer has to keep x in memory)
2. Keep the producer process running until all consumers exit. This prevents the situation where the producer releases memory which is still in use by a consumer. ## producer
# send tensors, do something
event.wait()
## consumer
# receive tensors and use them
event.set()
3. Don’t pass received tensors. # not going to work
x = queue.get()
queue_2.put(x)
# you need to create a process-local copy
x = queue.get()
x_clone = x.clone()
queue_2.put(x_clone)
# putting and getting from the same queue in the same process will likely end up with segfault
queue.put(tensor)
x = queue.get()
Sharing strategies This section provides a brief overview into how different sharing strategies work. Note that it applies only to CPU tensor - CUDA tensors will always use the CUDA API, as that’s the only way they can be shared. File descriptor - file_descriptor
Note This is the default strategy (except for macOS and OS X where it’s not supported). This strategy will use file descriptors as shared memory handles. Whenever a storage is moved to shared memory, a file descriptor obtained from shm_open is cached with the object, and when it’s going to be sent to other processes, the file descriptor will be transferred (e.g. via UNIX sockets) to it. The receiver will also cache the file descriptor and mmap it, to obtain a shared view onto the storage data. Note that if there will be a lot of tensors shared, this strategy will keep a large number of file descriptors open most of the time. If your system has low limits for the number of open file descriptors, and you can’t raise them, you should use the file_system strategy. File system - file_system
This strategy will use file names given to shm_open to identify the shared memory regions. This has the benefit of not requiring the implementation to cache the file descriptors obtained from it, but at the same time is prone to shared memory leaks. The file can’t be deleted right after its creation, because other processes need to access it to open their views. If the processes fatally crash, or are killed, and don’t call the storage destructors, the files will remain in the system. This is very serious, because they keep using up the memory until the system is restarted, or they’re freed manually. To counter the problem of shared memory file leaks, torch.multiprocessing will spawn a daemon named torch_shm_manager that will isolate itself from the current process group, and will keep track of all shared memory allocations. Once all processes connected to it exit, it will wait a moment to ensure there will be no new connections, and will iterate over all shared memory files allocated by the group. If it finds that any of them still exist, they will be deallocated. We’ve tested this method and it proved to be robust to various failures. Still, if your system has high enough limits, and file_descriptor is a supported strategy, we do not recommend switching to this one. Spawning subprocesses Note Available for Python >= 3.4. This depends on the spawn start method in Python’s multiprocessing package. Spawning a number of subprocesses to perform some function can be done by creating Process instances and calling join to wait for their completion. This approach works fine when dealing with a single subprocess but presents potential issues when dealing with multiple processes. Namely, joining processes sequentially implies they will terminate sequentially. If they don’t, and the first process does not terminate, the process termination will go unnoticed. Also, there are no native facilities for error propagation.
The spawn function below addresses these concerns, taking care of error propagation and out-of-order termination, and actively terminates processes upon detecting an error in one of them.
torch.multiprocessing.spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn') [source]
Spawns nprocs processes that run fn with args. If one of the processes exits with a non-zero exit status, the remaining processes are killed and an exception is raised with the cause of termination. In the case an exception was caught in the child process, it is forwarded and its traceback is included in the exception raised in the parent process. Parameters
fn (function) –
Function is called as the entrypoint of the spawned process. This function must be defined at the top level of a module so it can be pickled and spawned. This is a requirement imposed by multiprocessing. The function is called as fn(i, *args), where i is the process index and args is the passed through tuple of arguments.
args (tuple) – Arguments passed to fn.
nprocs (int) – Number of processes to spawn.
join (bool) – Perform a blocking join on all processes.
daemon (bool) – The spawned processes’ daemon flag. If set to True, daemonic processes will be created.
start_method (string) – (deprecated) this method will always use spawn as the start method. To use a different start method use start_processes(). Returns
None if join is True, ProcessContext if join is False
class torch.multiprocessing.SpawnContext [source]
Returned by spawn() when called with join=False.
join(timeout=None)
Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of the first process exiting. Returns True if all processes have been joined successfully, False if there are more processes that need to be joined. Parameters
timeout (float) – Wait this long before giving up on waiting. | torch.multiprocessing |
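A minimal sketch (not from the docs) of the spawn API described above; the function names and values here are illustrative. Note the worker must be defined at module top level so it can be pickled.

```python
import torch.multiprocessing as mp

def worker(rank, offset):
    # rank (the process index) is prepended by spawn; offset comes from args.
    # Must be a top-level function so multiprocessing can pickle it.
    return rank + offset

if __name__ == "__main__":
    # join=True blocks until all workers exit and re-raises any child error
    # (with the child's traceback included) in the parent.
    mp.spawn(worker, args=(10,), nprocs=2, join=True)
```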
torch.multiprocessing.get_all_sharing_strategies() [source]
Returns a set of sharing strategies supported on the current system. | torch.multiprocessing#torch.multiprocessing.get_all_sharing_strategies
torch.multiprocessing.get_sharing_strategy() [source]
Returns the current strategy for sharing CPU tensors. | torch.multiprocessing#torch.multiprocessing.get_sharing_strategy |
torch.multiprocessing.set_sharing_strategy(new_strategy) [source]
Sets the strategy for sharing CPU tensors. Parameters
new_strategy (str) – Name of the selected strategy. Should be one of the values returned by get_all_sharing_strategies(). | torch.multiprocessing#torch.multiprocessing.set_sharing_strategy |
torch.multiprocessing.spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn') [source]
Spawns nprocs processes that run fn with args. If one of the processes exits with a non-zero exit status, the remaining processes are killed and an exception is raised with the cause of termination. In the case an exception was caught in the child process, it is forwarded and its traceback is included in the exception raised in the parent process. Parameters
fn (function) –
Function is called as the entrypoint of the spawned process. This function must be defined at the top level of a module so it can be pickled and spawned. This is a requirement imposed by multiprocessing. The function is called as fn(i, *args), where i is the process index and args is the passed through tuple of arguments.
args (tuple) – Arguments passed to fn.
nprocs (int) – Number of processes to spawn.
join (bool) – Perform a blocking join on all processes.
daemon (bool) – The spawned processes’ daemon flag. If set to True, daemonic processes will be created.
start_method (string) – (deprecated) this method will always use spawn as the start method. To use a different start method use start_processes(). Returns
None if join is True, ProcessContext if join is False | torch.multiprocessing#torch.multiprocessing.spawn |
class torch.multiprocessing.SpawnContext [source]
Returned by spawn() when called with join=False.
join(timeout=None)
Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of the first process exiting. Returns True if all processes have been joined successfully, False if there are more processes that need to be joined. Parameters
timeout (float) – Wait this long before giving up on waiting. | torch.multiprocessing#torch.multiprocessing.SpawnContext |
join(timeout=None)
Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of the first process exiting. Returns True if all processes have been joined successfully, False if there are more processes that need to be joined. Parameters
timeout (float) – Wait this long before giving up on waiting. | torch.multiprocessing#torch.multiprocessing.SpawnContext.join |
torch.mv(input, vec, *, out=None) → Tensor
Performs a matrix-vector product of the matrix input and the vector vec. If input is a (n \times m) tensor, vec is a 1-D tensor of size m, out will be 1-D of size n. Note This function does not broadcast. Parameters
input (Tensor) – matrix to be multiplied
vec (Tensor) – vector to be multiplied Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> mat = torch.randn(2, 3)
>>> vec = torch.randn(3)
>>> torch.mv(mat, vec)
tensor([ 1.0404, -0.6361]) | torch.generated.torch.mv#torch.mv |
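A sketch (not from the docs) checking mv against the @ operator: mv is the 2-D-matrix times 1-D-vector special case of matmul.

```python
import torch

# torch.mv is the (n, m) matrix x (m,) vector special case of matmul.
mat = torch.randn(2, 3)
vec = torch.randn(3)

out = torch.mv(mat, vec)
assert out.shape == (2,)
assert torch.allclose(out, mat @ vec)
```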
torch.mvlgamma(input, p) → Tensor
Computes the multivariate log-gamma function with dimension p element-wise, given by \log(\Gamma_{p}(a)) = C + \sum_{i=1}^{p} \log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right)
where C = \log(\pi) \times \frac{p (p - 1)}{4} and \Gamma(\cdot) is the Gamma function. All elements must be greater than \frac{p - 1}{2}, otherwise an error is thrown. Parameters
input (Tensor) – the tensor to compute the multivariate log-gamma function
p (int) – the number of dimensions Example: >>> a = torch.empty(2, 3).uniform_(1, 2)
>>> a
tensor([[1.6835, 1.8474, 1.1929],
[1.0475, 1.7162, 1.4180]])
>>> torch.mvlgamma(a, 2)
tensor([[0.3928, 0.4007, 0.7586],
[1.0311, 0.3901, 0.5049]]) | torch.generated.torch.mvlgamma#torch.mvlgamma |
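A sanity-check sketch (not from the docs) of the formula above: for p = 1 the constant C vanishes and the sum has a single term, so mvlgamma reduces to the ordinary log-gamma function.

```python
import torch

# For p = 1: C = log(pi) * p(p-1)/4 = 0 and the sum is just lgamma(a),
# so mvlgamma(a, 1) should match lgamma(a).
a = torch.empty(2, 3).uniform_(1, 2)  # all elements > (p - 1)/2 = 0

assert torch.allclose(torch.mvlgamma(a, 1), torch.lgamma(a))
```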
torch.nanmedian(input) → Tensor
Returns the median of the values in input, ignoring NaN values. This function is identical to torch.median() when there are no NaN values in input. When input has one or more NaN values, torch.median() will always return NaN, while this function will return the median of the non-NaN elements in input. If all the elements in input are NaN it will also return NaN. Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.tensor([1, float('nan'), 3, 2])
>>> a.median()
tensor(nan)
>>> a.nanmedian()
tensor(2.)
torch.nanmedian(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values contains the median of each row of input in the dimension dim, ignoring NaN values, and indices contains the index of the median values found in the dimension dim. This function is identical to torch.median() when there are no NaN values in a reduced row. When a reduced row has one or more NaN values, torch.median() will always reduce it to NaN, while this function will reduce it to the median of the non-NaN elements. If all the elements in a reduced row are NaN then it will be reduced to NaN, too. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out ((Tensor, Tensor), optional) – The first tensor will be populated with the median values and the second tensor, which must have dtype long, with their indices in the dimension dim of input. Example: >>> a = torch.tensor([[2, 3, 1], [float('nan'), 1, float('nan')]])
>>> a
tensor([[2., 3., 1.],
[nan, 1., nan]])
>>> a.median(0)
torch.return_types.median(values=tensor([nan, 1., nan]), indices=tensor([1, 1, 1]))
>>> a.nanmedian(0)
torch.return_types.nanmedian(values=tensor([2., 1., 1.]), indices=tensor([0, 1, 0])) | torch.generated.torch.nanmedian#torch.nanmedian |
torch.nanquantile(input, q, dim=None, keepdim=False, *, out=None) → Tensor
This is a variant of torch.quantile() that “ignores” NaN values, computing the quantiles q as if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles for that reduction will be NaN. See the documentation for torch.quantile(). Parameters
input (Tensor) – the input tensor.
q (float or Tensor) – a scalar or 1D tensor of quantile values in the range [0, 1]
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> t = torch.tensor([float('nan'), 1, 2])
>>> t.quantile(0.5)
tensor(nan)
>>> t.nanquantile(0.5)
tensor(1.5000)
>>> t = torch.tensor([[float('nan'), float('nan')], [1, 2]])
>>> t
tensor([[nan, nan],
[1., 2.]])
>>> t.nanquantile(0.5, dim=0)
tensor([1., 2.])
>>> t.nanquantile(0.5, dim=1)
tensor([ nan, 1.5000]) | torch.generated.torch.nanquantile#torch.nanquantile |
torch.nansum(input, *, dtype=None) → Tensor
Returns the sum of all elements, treating Not a Numbers (NaNs) as zero. Parameters
input (Tensor) – the input tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.tensor([1., 2., float('nan'), 4.])
>>> torch.nansum(a)
tensor(7.)
torch.nansum(input, dim, keepdim=False, *, dtype=None) → Tensor
Returns the sum of each row of the input tensor in the given dimension dim, treating Not a Numbers (NaNs) as zero. If dim is a list of dimensions, reduce over all of them. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> torch.nansum(torch.tensor([1., float("nan")]))
1.0
>>> a = torch.tensor([[1, 2], [3., float("nan")]])
>>> torch.nansum(a)
tensor(6.)
>>> torch.nansum(a, dim=0)
tensor([4., 2.])
>>> torch.nansum(a, dim=1)
tensor([3., 3.]) | torch.generated.torch.nansum#torch.nansum |
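A sketch (not from the docs) of the "NaN as zero" rule: for inputs without infinities, nansum matches a plain sum after zeroing NaNs with nan_to_num.

```python
import torch

# nansum treats NaNs as zero, so (absent infinities) it matches
# summing after replacing NaNs with 0.
a = torch.tensor([[1., 2.], [3., float('nan')]])

assert torch.nansum(a) == torch.sum(torch.nan_to_num(a, nan=0.0))
assert torch.equal(torch.nansum(a, dim=1), torch.tensor([3., 3.]))
```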
torch.nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None) → Tensor
Replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively. By default, NaNs are replaced with zero, positive infinity is replaced with the
greatest finite value representable by input’s dtype, and negative infinity is replaced with the least finite value representable by input’s dtype. Parameters
input (Tensor) – the input tensor.
nan (Number, optional) – the value to replace NaNs with. Default is zero.
posinf (Number, optional) – if a Number, the value to replace positive infinity values with. If None, positive infinity values are replaced with the greatest finite value representable by input’s dtype. Default is None.
neginf (Number, optional) – if a Number, the value to replace negative infinity values with. If None, negative infinity values are replaced with the lowest finite value representable by input’s dtype. Default is None. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])
>>> torch.nan_to_num(x)
tensor([ 0.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])
>>> torch.nan_to_num(x, nan=2.0)
tensor([ 2.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])
>>> torch.nan_to_num(x, nan=2.0, posinf=1.0)
tensor([ 2.0000e+00, 1.0000e+00, -3.4028e+38, 3.1400e+00]) | torch.generated.torch.nan_to_num#torch.nan_to_num |
torch.narrow(input, dim, start, length) → Tensor
Returns a new tensor that is a narrowed version of input tensor. The returned tensor contains the elements of input from index start to start + length (exclusive) along the dimension dim. The returned tensor and input tensor share the same underlying storage. Parameters
input (Tensor) – the tensor to narrow
dim (int) – the dimension along which to narrow
start (int) – the index at which to start the narrowed dimension
length (int) – the length of the narrowed dimension Example: >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> torch.narrow(x, 0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> torch.narrow(x, 1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]]) | torch.generated.torch.narrow#torch.narrow |
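A sketch (not from the docs) of the shared-storage behavior stated above: narrow returns a view, so writes through the result are visible in the original tensor.

```python
import torch

# narrow returns a view: it shares storage with the input, so writes
# through the view show up in the original tensor.
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

y = torch.narrow(x, 0, 0, 2)  # first two rows, no copy
y[0, 0] = 100
assert x[0, 0].item() == 100
```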
torch.ne(input, other, *, out=None) → Tensor
Computes \text{input} \neq \text{other} element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters
input (Tensor) – the tensor to compare
other (Tensor or float) – the tensor or value to compare Keyword Arguments
out (Tensor, optional) – the output tensor. Returns
A boolean tensor that is True where input is not equal to other and False elsewhere Example: >>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [True, False]]) | torch.generated.torch.ne#torch.ne |
torch.neg(input, *, out=None) → Tensor
Returns a new tensor with the negative of the elements of input. \text{out} = -1 \times \text{input}
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(5)
>>> a
tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])
>>> torch.neg(a)
tensor([-0.0090, 0.2262, 0.0682, 0.2866, -0.3940]) | torch.generated.torch.neg#torch.neg |
torch.negative(input, *, out=None) → Tensor
Alias for torch.neg() | torch.generated.torch.negative#torch.negative |
torch.nextafter(input, other, *, out=None) → Tensor
Return the next floating-point value after input towards other, elementwise. The shapes of input and other must be broadcastable. Parameters
input (Tensor) – the first input tensor
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example:
>>> eps = torch.finfo(torch.float32).eps
>>> torch.nextafter(torch.Tensor([1, 2]), torch.Tensor([2, 1])) == torch.Tensor([eps + 1, 2 - eps])
tensor([True, True]) | torch.generated.torch.nextafter#torch.nextafter |
torch.nn These are the basic building block for graphs torch.nn Containers Convolution Layers Pooling layers Padding Layers Non-linear Activations (weighted sum, nonlinearity) Non-linear Activations (other) Normalization Layers Recurrent Layers Transformer Layers Linear Layers Dropout Layers Sparse Layers Distance Functions Loss Functions Vision Layers Shuffle Layers DataParallel Layers (multi-GPU, distributed) Utilities Quantized Functions Lazy Modules Initialization
Parameter
A kind of Tensor that is to be considered a module parameter.
UninitializedParameter
A parameter that is not initialized. Containers
Module
Base class for all neural network modules.
Sequential
A sequential container.
ModuleList
Holds submodules in a list.
ModuleDict
Holds submodules in a dictionary.
ParameterList
Holds parameters in a list.
ParameterDict
Holds parameters in a dictionary. Global Hooks For Module
register_module_forward_pre_hook
Registers a forward pre-hook common to all modules.
register_module_forward_hook
Registers a global forward hook common to all modules.
register_module_backward_hook
Registers a backward hook common to all the modules. Convolution Layers
nn.Conv1d Applies a 1D convolution over an input signal composed of several input planes.
nn.Conv2d Applies a 2D convolution over an input signal composed of several input planes.
nn.Conv3d Applies a 3D convolution over an input signal composed of several input planes.
nn.ConvTranspose1d Applies a 1D transposed convolution operator over an input image composed of several input planes.
nn.ConvTranspose2d Applies a 2D transposed convolution operator over an input image composed of several input planes.
nn.ConvTranspose3d Applies a 3D transposed convolution operator over an input image composed of several input planes.
nn.LazyConv1d A torch.nn.Conv1d module with lazy initialization of the in_channels argument of the Conv1d that is inferred from the input.size(1).
nn.LazyConv2d A torch.nn.Conv2d module with lazy initialization of the in_channels argument of the Conv2d that is inferred from the input.size(1).
nn.LazyConv3d A torch.nn.Conv3d module with lazy initialization of the in_channels argument of the Conv3d that is inferred from the input.size(1).
nn.LazyConvTranspose1d A torch.nn.ConvTranspose1d module with lazy initialization of the in_channels argument of the ConvTranspose1d that is inferred from the input.size(1).
nn.LazyConvTranspose2d A torch.nn.ConvTranspose2d module with lazy initialization of the in_channels argument of the ConvTranspose2d that is inferred from the input.size(1).
nn.LazyConvTranspose3d A torch.nn.ConvTranspose3d module with lazy initialization of the in_channels argument of the ConvTranspose3d that is inferred from the input.size(1).
nn.Unfold Extracts sliding local blocks from a batched input tensor.
nn.Fold Combines an array of sliding local blocks into a large containing tensor. Pooling layers
nn.MaxPool1d Applies a 1D max pooling over an input signal composed of several input planes.
nn.MaxPool2d Applies a 2D max pooling over an input signal composed of several input planes.
nn.MaxPool3d Applies a 3D max pooling over an input signal composed of several input planes.
nn.MaxUnpool1d Computes a partial inverse of MaxPool1d.
nn.MaxUnpool2d Computes a partial inverse of MaxPool2d.
nn.MaxUnpool3d Computes a partial inverse of MaxPool3d.
nn.AvgPool1d Applies a 1D average pooling over an input signal composed of several input planes.
nn.AvgPool2d Applies a 2D average pooling over an input signal composed of several input planes.
nn.AvgPool3d Applies a 3D average pooling over an input signal composed of several input planes.
nn.FractionalMaxPool2d Applies a 2D fractional max pooling over an input signal composed of several input planes.
nn.LPPool1d Applies a 1D power-average pooling over an input signal composed of several input planes.
nn.LPPool2d Applies a 2D power-average pooling over an input signal composed of several input planes.
nn.AdaptiveMaxPool1d Applies a 1D adaptive max pooling over an input signal composed of several input planes.
nn.AdaptiveMaxPool2d Applies a 2D adaptive max pooling over an input signal composed of several input planes.
nn.AdaptiveMaxPool3d Applies a 3D adaptive max pooling over an input signal composed of several input planes.
nn.AdaptiveAvgPool1d Applies a 1D adaptive average pooling over an input signal composed of several input planes.
nn.AdaptiveAvgPool2d Applies a 2D adaptive average pooling over an input signal composed of several input planes.
nn.AdaptiveAvgPool3d Applies a 3D adaptive average pooling over an input signal composed of several input planes. Padding Layers
nn.ReflectionPad1d Pads the input tensor using the reflection of the input boundary.
nn.ReflectionPad2d Pads the input tensor using the reflection of the input boundary.
nn.ReplicationPad1d Pads the input tensor using replication of the input boundary.
nn.ReplicationPad2d Pads the input tensor using replication of the input boundary.
nn.ReplicationPad3d Pads the input tensor using replication of the input boundary.
nn.ZeroPad2d Pads the input tensor boundaries with zero.
nn.ConstantPad1d Pads the input tensor boundaries with a constant value.
nn.ConstantPad2d Pads the input tensor boundaries with a constant value.
nn.ConstantPad3d Pads the input tensor boundaries with a constant value. Non-linear Activations (weighted sum, nonlinearity)
nn.ELU Applies the element-wise function:
nn.Hardshrink Applies the hard shrinkage function element-wise:
nn.Hardsigmoid Applies the element-wise function:
nn.Hardtanh Applies the HardTanh function element-wise
nn.Hardswish Applies the hardswish function, element-wise, as described in the paper:
nn.LeakyReLU Applies the element-wise function:
nn.LogSigmoid Applies the element-wise function:
nn.MultiheadAttention Allows the model to jointly attend to information from different representation subspaces.
nn.PReLU Applies the element-wise function:
nn.ReLU Applies the rectified linear unit function element-wise:
nn.ReLU6 Applies the element-wise function:
nn.RReLU Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper:
nn.SELU Applied element-wise, as:
nn.CELU Applies the element-wise function:
nn.GELU Applies the Gaussian Error Linear Units function:
nn.Sigmoid Applies the element-wise function:
nn.SiLU Applies the silu function, element-wise.
nn.Softplus Applies the element-wise function:
nn.Softshrink Applies the soft shrinkage function elementwise:
nn.Softsign Applies the element-wise function:
nn.Tanh Applies the element-wise function:
nn.Tanhshrink Applies the element-wise function:
nn.Threshold Thresholds each element of the input Tensor. Non-linear Activations (other)
nn.Softmin Applies the Softmin function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.
nn.Softmax Applies the Softmax function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range [0,1] and sum to 1.
nn.Softmax2d Applies SoftMax over features to each spatial location.
nn.LogSoftmax Applies the \log(\text{Softmax}(x)) function to an n-dimensional input Tensor.
nn.AdaptiveLogSoftmaxWithLoss Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Normalization Layers
nn.BatchNorm1d Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
nn.BatchNorm2d Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
nn.BatchNorm3d Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
nn.GroupNorm Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization
nn.SyncBatchNorm Applies Batch Normalization over a N-Dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
nn.InstanceNorm1d Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
nn.InstanceNorm2d Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
nn.InstanceNorm3d Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
nn.LayerNorm Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization
nn.LocalResponseNorm Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Recurrent Layers
nn.RNNBase
nn.RNN Applies a multi-layer Elman RNN with \tanh or \text{ReLU} non-linearity to an input sequence.
nn.LSTM Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
nn.GRU Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
nn.RNNCell An Elman RNN cell with tanh or ReLU non-linearity.
nn.LSTMCell A long short-term memory (LSTM) cell.
nn.GRUCell A gated recurrent unit (GRU) cell Transformer Layers
nn.Transformer A transformer model.
nn.TransformerEncoder TransformerEncoder is a stack of N encoder layers
nn.TransformerDecoder TransformerDecoder is a stack of N decoder layers
nn.TransformerEncoderLayer TransformerEncoderLayer is made up of self-attn and feedforward network.
nn.TransformerDecoderLayer TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. Linear Layers
nn.Identity A placeholder identity operator that is argument-insensitive.
nn.Linear Applies a linear transformation to the incoming data: y = xA^T + b
nn.Bilinear Applies a bilinear transformation to the incoming data: y = x_1^T A x_2 + b
nn.LazyLinear A torch.nn.Linear module with lazy initialization. Dropout Layers
nn.Dropout During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution.
nn.Dropout2d Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor \text{input}[i, j]).
nn.Dropout3d Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor \text{input}[i, j]).
nn.AlphaDropout Applies Alpha Dropout over the input. Sparse Layers
nn.Embedding A simple lookup table that stores embeddings of a fixed dictionary and size.
nn.EmbeddingBag Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings. Distance Functions
nn.CosineSimilarity Returns cosine similarity between x_1 and x_2, computed along dim.
nn.PairwiseDistance Computes the batchwise pairwise distance between vectors v_1, v_2 using the p-norm: Loss Functions
nn.L1Loss Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.
nn.MSELoss Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y.
nn.CrossEntropyLoss This criterion combines LogSoftmax and NLLLoss in one single class.
nn.CTCLoss The Connectionist Temporal Classification loss.
nn.NLLLoss The negative log likelihood loss.
nn.PoissonNLLLoss Negative log likelihood loss with Poisson distribution of target.
nn.GaussianNLLLoss Gaussian negative log likelihood loss.
nn.KLDivLoss The Kullback-Leibler divergence loss measure
nn.BCELoss Creates a criterion that measures the Binary Cross Entropy between the target and the output:
nn.BCEWithLogitsLoss This loss combines a Sigmoid layer and the BCELoss in one single class.
nn.MarginRankingLoss Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).
nn.HingeEmbeddingLoss Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1).
nn.MultiLabelMarginLoss Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices).
nn.SmoothL1Loss Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise.
nn.SoftMarginLoss Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).
nn.MultiLabelSoftMarginLoss Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C).
nn.CosineEmbeddingLoss Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1.
nn.MultiMarginLoss Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 \leq y \leq \text{x.size}(1)-1):
nn.TripletMarginLoss Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0.
nn.TripletMarginWithDistanceLoss Creates a criterion that measures the triplet loss given input tensors a, p, and n (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function (“distance function”) used to compute the relationship between the anchor and positive example (“positive distance”) and the anchor and negative example (“negative distance”). Vision Layers
nn.PixelShuffle Rearranges elements in a tensor of shape (*, C \times r^2, H, W) to a tensor of shape (*, C, H \times r, W \times r), where r is an upscale factor.
nn.PixelUnshuffle Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (*, C, H \times r, W \times r) to a tensor of shape (*, C \times r^2, H, W), where r is a downscale factor.
nn.Upsample Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
nn.UpsamplingNearest2d Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
nn.UpsamplingBilinear2d Applies a 2D bilinear upsampling to an input signal composed of several input channels. Shuffle Layers
nn.ChannelShuffle Divide the channels in a tensor of shape (*, C, H, W) into g groups and rearrange them as (*, \frac{C}{g}, g, H, W), while keeping the original tensor shape. DataParallel Layers (multi-GPU, distributed)
nn.DataParallel Implements data parallelism at the module level.
nn.parallel.DistributedDataParallel Implements distributed data parallelism that is based on torch.distributed package at the module level. Utilities From the torch.nn.utils module
clip_grad_norm_
Clips gradient norm of an iterable of parameters.
clip_grad_value_
Clips gradient of an iterable of parameters at specified value.
parameters_to_vector
Convert parameters to one vector
vector_to_parameters
Convert one vector to the parameters
prune.BasePruningMethod Abstract base class for creation of new pruning techniques.
prune.PruningContainer Container holding a sequence of pruning methods for iterative pruning.
prune.Identity Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones.
prune.RandomUnstructured Prune (currently unpruned) units in a tensor at random.
prune.L1Unstructured Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm.
prune.RandomStructured Prune entire (currently unpruned) channels in a tensor at random.
prune.LnStructured Prune entire (currently unpruned) channels in a tensor based on their Ln-norm.
prune.CustomFromMask
prune.identity Applies pruning reparametrization to the tensor corresponding to the parameter called name in module without actually pruning any units.
prune.random_unstructured Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) units selected at random.
prune.l1_unstructured Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) units with the lowest L1-norm.
prune.random_structured Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim selected at random.
prune.ln_structured Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim with the lowest Ln-norm.
prune.global_unstructured Globally prunes tensors corresponding to all parameters in parameters by applying the specified pruning_method.
prune.custom_from_mask Prunes tensor corresponding to parameter called name in module by applying the pre-computed mask in mask.
prune.remove Removes the pruning reparameterization from a module and the pruning method from the forward hook.
prune.is_pruned Check whether module is pruned by looking for forward_pre_hooks in its modules that inherit from the BasePruningMethod.
weight_norm
Applies weight normalization to a parameter in the given module.
remove_weight_norm
Removes the weight normalization reparameterization from a module.
spectral_norm
Applies spectral normalization to a parameter in the given module.
remove_spectral_norm
Removes the spectral normalization reparameterization from a module. Utility functions in other modules
nn.utils.rnn.PackedSequence Holds the data and list of batch_sizes of a packed sequence.
nn.utils.rnn.pack_padded_sequence Packs a Tensor containing padded sequences of variable length.
nn.utils.rnn.pad_packed_sequence Pads a packed batch of variable length sequences.
nn.utils.rnn.pad_sequence Pad a list of variable length Tensors with padding_value
nn.utils.rnn.pack_sequence Packs a list of variable length Tensors
nn.Flatten Flattens a contiguous range of dims into a tensor.
nn.Unflatten Unflattens a tensor dim expanding it to a desired shape. Quantized Functions Quantization refers to techniques for performing computations and storing tensors at lower bitwidths than floating point precision. PyTorch supports both per tensor and per channel asymmetric linear quantization. To learn more how to use quantized functions in PyTorch, please refer to the Quantization documentation. Lazy Modules Initialization
nn.modules.lazy.LazyModuleMixin A mixin for modules that lazily initialize parameters, also known as “lazy modules.” | torch.nn |
class torch.nn.AdaptiveAvgPool1d(output_size) [source]
Applies a 1D adaptive average pooling over an input signal composed of several input planes. The output size is H, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size H Examples >>> # target output size of 5
>>> m = nn.AdaptiveAvgPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input) | torch.generated.torch.nn.adaptiveavgpool1d#torch.nn.AdaptiveAvgPool1d |
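The claim that "the output size is H, for any input size" can be checked with a small sketch (an addition for illustration, not part of the official example): the same module maps inputs of different lengths to the same output length.

```python
import torch
import torch.nn as nn

# Adaptive average pooling fixes the output length regardless of
# the input length; the kernel size is derived per input.
m = nn.AdaptiveAvgPool1d(5)
out_a = m(torch.randn(1, 64, 8))
out_b = m(torch.randn(1, 64, 33))
print(out_a.shape, out_b.shape)  # both have final size 5
```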
class torch.nn.AdaptiveAvgPool2d(output_size) [source]
Applies a 2D adaptive average pooling over an input signal composed of several input planes. The output is of size H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None which means the size will be the same as that of the input. Examples >>> # target output size of 5x7
>>> m = nn.AdaptiveAvgPool2d((5,7))
>>> input = torch.randn(1, 64, 8, 9)
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveAvgPool2d(7)
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveAvgPool2d((None, 7))
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input) | torch.generated.torch.nn.adaptiveavgpool2d#torch.nn.AdaptiveAvgPool2d |
class torch.nn.AdaptiveAvgPool3d(output_size) [source]
Applies a 3D adaptive average pooling over an input signal composed of several input planes. The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single number D for a cube D x D x D. D, H and W can be either an int, or None which means the size will be the same as that of the input. Examples >>> # target output size of 5x7x9
>>> m = nn.AdaptiveAvgPool3d((5,7,9))
>>> input = torch.randn(1, 64, 8, 9, 10)
>>> output = m(input)
>>> # target output size of 7x7x7 (cube)
>>> m = nn.AdaptiveAvgPool3d(7)
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input)
>>> # target output size of 7x9x8
>>> m = nn.AdaptiveAvgPool3d((7, None, None))
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input) | torch.generated.torch.nn.adaptiveavgpool3d#torch.nn.AdaptiveAvgPool3d |
class torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False) [source]
Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word frequency distribution approximately follows Zipf’s law. Adaptive softmax partitions the labels into several clusters, according to their frequency. These clusters may each contain a different number of targets. Additionally, clusters containing less frequent labels assign lower-dimensional embeddings to those labels, which speeds up the computation. For each minibatch, only clusters for which at least one target is present are evaluated. The idea is that the clusters which are accessed frequently (like the first one, containing the most frequent labels), should also be cheap to compute – that is, contain a small number of assigned labels. We highly recommend taking a look at the original paper for more details.
cutoffs should be an ordered Sequence of integers sorted in increasing order. It controls the number of clusters and the partitioning of targets into clusters. For example, setting cutoffs = [10, 100, 1000] means that the first 10 targets will be assigned to the ‘head’ of the adaptive softmax, targets 11, 12, …, 100 will be assigned to the first cluster, and targets 101, 102, …, 1000 will be assigned to the second cluster, while targets 1001, 1002, …, n_classes - 1 will be assigned to the last, third cluster.
div_value is used to compute the size of each additional cluster, which is given as \left\lfloor\frac{\texttt{in\_features}}{\texttt{div\_value}^{idx}}\right\rfloor, where idx is the cluster index (with clusters for less frequent words having larger indices, and indices starting from 1).
head_bias if set to True, adds a bias term to the ‘head’ of the adaptive softmax. See paper for details. Set to False in the official implementation. Warning Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index 0, and the least frequent label should be represented by the index n_classes - 1. Note This module returns a NamedTuple with output and loss fields. See further documentation for details. Note To compute log-probabilities for all classes, the log_prob method can be used. Parameters
in_features (int) – Number of features in the input tensor
n_classes (int) – Number of classes in the dataset
cutoffs (Sequence) – Cutoffs used to assign targets to their buckets
div_value (float, optional) – value used as an exponent to compute sizes of the clusters. Default: 4.0
head_bias (bool, optional) – If True, adds a bias term to the ‘head’ of the adaptive softmax. Default: False
Returns
output is a Tensor of size N containing computed target log probabilities for each example
loss is a Scalar representing the computed negative log likelihood loss Return type
NamedTuple with output and loss fields Shape:
input: (N, \texttt{in\_features})
target: (N) where each value satisfies 0 <= \texttt{target[i]} <= \texttt{n\_classes}
output1: (N)
output2: Scalar
log_prob(input) [source]
Computes log probabilities for all \texttt{n\_classes} Parameters
input (Tensor) – a minibatch of examples Returns
log-probabilities for each class c in the range 0 <= c <= \texttt{n\_classes}, where \texttt{n\_classes} is a parameter passed to the AdaptiveLogSoftmaxWithLoss constructor. Shape:
Input: (N, \texttt{in\_features})
Output: (N, \texttt{n\_classes})
predict(input) [source]
This is equivalent to self.log_prob(input).argmax(dim=1), but is more efficient in some cases. Parameters
input (Tensor) – a minibatch of examples Returns
a class with the highest probability for each example Return type
output (Tensor) Shape:
Input: (N, \texttt{in\_features})
Output: (N) | torch.generated.torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss
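An end-to-end sketch of the API described above; the sizes (64 features, 100 classes) and the cutoffs [10, 40] are illustrative choices, not values from the documentation:

```python
import torch
import torch.nn as nn

# Two cutoffs partition 100 classes into a head (targets 0-9)
# and two tail clusters (10-39 and 40-99).
asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=64, n_classes=100,
                                    cutoffs=[10, 40])
hidden = torch.randn(32, 64)             # (N, in_features)
target = torch.randint(0, 100, (32,))    # labels, assumed frequency-sorted
out = asm(hidden, target)                # NamedTuple with output and loss
print(out.output.shape)                  # per-example target log-probs: (32,)
log_probs = asm.log_prob(hidden)         # full distribution: (32, 100)
preds = asm.predict(hidden)              # argmax class per example: (32,)
```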
log_prob(input) [source]
Computes log probabilities for all \texttt{n\_classes} Parameters
input (Tensor) – a minibatch of examples Returns
log-probabilities for each class c in the range 0 <= c <= \texttt{n\_classes}, where \texttt{n\_classes} is a parameter passed to the AdaptiveLogSoftmaxWithLoss constructor. Shape:
Input: (N, \texttt{in\_features})
Output: (N, \texttt{n\_classes}) | torch.generated.torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss.log_prob
predict(input) [source]
This is equivalent to self.log_prob(input).argmax(dim=1), but is more efficient in some cases. Parameters
input (Tensor) – a minibatch of examples Returns
a class with the highest probability for each example Return type
output (Tensor) Shape:
Input: (N, \texttt{in\_features})
Output: (N) | torch.generated.torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss.predict
class torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False) [source]
Applies a 1D adaptive max pooling over an input signal composed of several input planes. The output size is H, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size H
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool1d. Default: False
Examples >>> # target output size of 5
>>> m = nn.AdaptiveMaxPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input) | torch.generated.torch.nn.adaptivemaxpool1d#torch.nn.AdaptiveMaxPool1d |
class torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False) [source]
Applies a 2D adaptive max pooling over an input signal composed of several input planes. The output is of size H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None which means the size will be the same as that of the input.
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d. Default: False
Examples >>> # target output size of 5x7
>>> m = nn.AdaptiveMaxPool2d((5,7))
>>> input = torch.randn(1, 64, 8, 9)
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveMaxPool2d(7)
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveMaxPool2d((None, 7))
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input) | torch.generated.torch.nn.adaptivemaxpool2d#torch.nn.AdaptiveMaxPool2d |
class torch.nn.AdaptiveMaxPool3d(output_size, return_indices=False) [source]
Applies a 3D adaptive max pooling over an input signal composed of several input planes. The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single D for a cube D x D x D. D, H and W can be either an int, or None which means the size will be the same as that of the input.
return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default: False
Examples >>> # target output size of 5x7x9
>>> m = nn.AdaptiveMaxPool3d((5,7,9))
>>> input = torch.randn(1, 64, 8, 9, 10)
>>> output = m(input)
>>> # target output size of 7x7x7 (cube)
>>> m = nn.AdaptiveMaxPool3d(7)
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input)
>>> # target output size of 7x9x8
>>> m = nn.AdaptiveMaxPool3d((7, None, None))
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input) | torch.generated.torch.nn.adaptivemaxpool3d#torch.nn.AdaptiveMaxPool3d |
class torch.nn.AlphaDropout(p=0.5, inplace=False) [source]
Applies Alpha Dropout over the input. Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with the SELU activation function, which ensures that the outputs have zero mean and unit standard deviation. During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit standard deviation. During evaluation the module simply computes an identity function. More details can be found in the paper Self-Normalizing Neural Networks . Parameters
p (float) – probability of an element to be dropped. Default: 0.5
inplace (bool, optional) – If set to True, will do this operation in-place Shape:
Input: (∗)(*) . Input can be of any shape Output: (∗)(*) . Output is of the same shape as input Examples: >>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input) | torch.generated.torch.nn.alphadropout#torch.nn.AlphaDropout |
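The mean/standard-deviation preservation described above can be checked empirically; a minimal sketch (statistical, so the tolerances are loose):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.AlphaDropout(p=0.2)

# Zero-mean, unit-std input: in training mode, Alpha Dropout should
# approximately preserve both statistics, unlike plain Dropout.
x = torch.randn(100_000)
y = m(x)
print(y.mean().item(), y.std().item())
```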
class torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) [source]
Applies a 1D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,C,L)(N, C, L) , output (N,C,Lout)(N, C, L_{out}) and kernel_size kk can be precisely described as: out(Ni,Cj,l)=1k∑m=0k−1input(Ni,Cj,stride×l+m)\text{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1} \text{input}(N_i, C_j, \text{stride} \times l + m)
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. Note When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored. The parameters kernel_size, stride, padding can each be an int or a one-element tuple. Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on both sides
ceil_mode – when True, will use ceil instead of floor to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation Shape:
Input: (N,C,Lin)(N, C, L_{in})
Output: (N,C,Lout)(N, C, L_{out}) , where Lout=⌊Lin+2×padding−kernel_sizestride+1⌋L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{kernel\_size}}{\text{stride}} + 1\right\rfloor
Examples: >>> # pool with window of size=3, stride=2
>>> m = nn.AvgPool1d(3, stride=2)
>>> m(torch.tensor([[[1.,2,3,4,5,6,7]]]))
tensor([[[ 2., 4., 6.]]]) | torch.generated.torch.nn.avgpool1d#torch.nn.AvgPool1d |
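The ceil_mode note above affects the output length when the final window would otherwise be cut off; a minimal sketch with an 8-element input, where the last window starts within the input and is therefore allowed:

```python
import torch
import torch.nn as nn

x = torch.arange(1., 9.).reshape(1, 1, 8)  # length 8

floor_pool = nn.AvgPool1d(kernel_size=3, stride=2)                  # L_out = floor((8-3)/2)+1 = 3
ceil_pool = nn.AvgPool1d(kernel_size=3, stride=2, ceil_mode=True)   # L_out = ceil((8-3)/2)+1 = 4

out_floor = floor_pool(x)
out_ceil = ceil_pool(x)
print(out_floor.shape, out_ceil.shape)
```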
class torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) [source]
Applies a 2D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,C,H,W)(N, C, H, W) , output (N,C,Hout,Wout)(N, C, H_{out}, W_{out}) and kernel_size (kH,kW)(kH, kW) can be precisely described as: out(Ni,Cj,h,w)=1kH∗kW∑m=0kH−1∑n=0kW−1input(Ni,Cj,stride[0]×h+m,stride[1]×w+n)out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. Note When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored. The parameters kernel_size, stride, padding can either be: a single int – in which case the same value is used for the height and width dimension a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on both sides
ceil_mode – when True, will use ceil instead of floor to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
divisor_override – if specified, it will be used as divisor, otherwise kernel_size will be used Shape:
Input: (N,C,Hin,Win)(N, C, H_{in}, W_{in})
Output: (N,C,Hout,Wout)(N, C, H_{out}, W_{out}) , where Hout=⌊Hin+2×padding[0]−kernel_size[0]stride[0]+1⌋H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor
Wout=⌊Win+2×padding[1]−kernel_size[1]stride[1]+1⌋W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor
Examples: >>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input) | torch.generated.torch.nn.avgpool2d#torch.nn.AvgPool2d |
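divisor_override replaces the kernel-size denominator in the average; with a divisor of 1 the layer effectively sums each window. A minimal sketch:

```python
import torch
import torch.nn as nn

x = torch.ones(1, 1, 4, 4)

avg = nn.AvgPool2d(2, stride=2)                         # each 2x2 window: 4 / 4 = 1
summed = nn.AvgPool2d(2, stride=2, divisor_override=1)  # each 2x2 window: 4 / 1 = 4

a = avg(x)
s = summed(x)
print(a)
print(s)
```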
class torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) [source]
Applies a 3D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,C,D,H,W)(N, C, D, H, W) , output (N,C,Dout,Hout,Wout)(N, C, D_{out}, H_{out}, W_{out}) and kernel_size (kD,kH,kW)(kD, kH, kW) can be precisely described as: out(Ni,Cj,d,h,w)=∑k=0kD−1∑m=0kH−1∑n=0kW−1input(Ni,Cj,stride[0]×d+k,stride[1]×h+m,stride[2]×w+n)kD×kH×kW\begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} & \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \\ & \frac{\text{input}(N_i, C_j, \text{stride}[0] \times d + k, \text{stride}[1] \times h + m, \text{stride}[2] \times w + n)} {kD \times kH \times kW} \end{aligned}
If padding is non-zero, then the input is implicitly zero-padded on all three sides for padding number of points. Note When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored. The parameters kernel_size, stride can either be: a single int – in which case the same value is used for the depth, height and width dimension a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension Parameters
kernel_size – the size of the window
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on all three sides
ceil_mode – when True, will use ceil instead of floor to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
divisor_override – if specified, it will be used as divisor, otherwise kernel_size will be used Shape:
Input: (N,C,Din,Hin,Win)(N, C, D_{in}, H_{in}, W_{in})
Output: (N,C,Dout,Hout,Wout)(N, C, D_{out}, H_{out}, W_{out}) , where Dout=⌊Din+2×padding[0]−kernel_size[0]stride[0]+1⌋D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor
Hout=⌊Hin+2×padding[1]−kernel_size[1]stride[1]+1⌋H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor
Wout=⌊Win+2×padding[2]−kernel_size[2]stride[2]+1⌋W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{kernel\_size}[2]}{\text{stride}[2]} + 1\right\rfloor
Examples: >>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input) | torch.generated.torch.nn.avgpool3d#torch.nn.AvgPool3d |
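The shape formulas above can be verified directly against a forward pass; a minimal sketch using the non-square configuration from the example:

```python
import math
import torch
import torch.nn as nn

m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
x = torch.randn(20, 16, 50, 44, 31)
out = m(x)

def out_length(l_in, pad, k, s):
    # floor((L_in + 2*padding - kernel_size) / stride) + 1
    return math.floor((l_in + 2 * pad - k) / s) + 1

expected = (20, 16,
            out_length(50, 0, 3, 2),
            out_length(44, 0, 2, 1),
            out_length(31, 0, 2, 2))
print(out.shape, expected)
```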
class torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [source]
Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension over the mini-batches and γ\gamma and β\beta are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ\gamma are set to 1 and the elements of β\beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where x^\hat{x} is the estimated statistic and xtx_t is the new observed value. Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it’s common terminology to call this Temporal Batch Normalization. Parameters
num_features – CC from an expected input of size (N,C,L)(N, C, L) or LL from input of size (N,L)(N, L)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics, and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
Shape:
Input: (N,C)(N, C) or (N,C,L)(N, C, L)
Output: (N,C)(N, C) or (N,C,L)(N, C, L) (same shape as input) Examples: >>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = torch.randn(20, 100)
>>> output = m(input) | torch.generated.torch.nn.batchnorm1d#torch.nn.BatchNorm1d |
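The running-statistics update rule above can be observed after a single training-mode forward pass; a freshly constructed layer starts with running_mean = 0, so one step leaves momentum times the batch mean. A minimal sketch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.BatchNorm1d(100)   # momentum defaults to 0.1
x = torch.randn(20, 100)

m.train()
m(x)

# new = (1 - momentum) * old + momentum * batch_stat, with old = 0
matches = torch.allclose(m.running_mean, 0.1 * x.mean(dim=0), atol=1e-6)
print(matches)
```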
class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [source]
Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension over the mini-batches and γ\gamma and β\beta are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ\gamma are set to 1 and the elements of β\beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where x^\hat{x} is the estimated statistic and xtx_t is the new observed value. Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it’s common terminology to call this Spatial Batch Normalization. Parameters
num_features – CC from an expected input of size (N,C,H,W)(N, C, H, W)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics, and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
Shape:
Input: (N,C,H,W)(N, C, H, W)
Output: (N,C,H,W)(N, C, H, W) (same shape as input) Examples: >>> # With Learnable Parameters
>>> m = nn.BatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm2d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input) | torch.generated.torch.nn.batchnorm2d#torch.nn.BatchNorm2d |
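In eval mode the layer normalizes with the running estimates rather than batch statistics; a fresh layer has running_mean = 0 and running_var = 1, so evaluation divides by sqrt(1 + eps). A minimal sketch (affine=False to leave out the learnable γ and β):

```python
import torch
import torch.nn as nn

m = nn.BatchNorm2d(3, affine=False)
x = torch.randn(4, 3, 8, 8)

m.eval()  # use running_mean = 0, running_var = 1
y = m(x)

# y = (x - 0) / sqrt(1 + eps), with the default eps = 1e-5
matches = torch.allclose(y, x / (1 + 1e-5) ** 0.5, atol=1e-5)
print(matches)
```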
class torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [source]
Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension over the mini-batches and γ\gamma and β\beta are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ\gamma are set to 1 and the elements of β\beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where x^\hat{x} is the estimated statistic and xtx_t is the new observed value. Because the Batch Normalization is done over the C dimension, computing statistics on (N, D, H, W) slices, it’s common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization. Parameters
num_features – CC from an expected input of size (N,C,D,H,W)(N, C, D, H, W)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics, and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
Shape:
Input: (N,C,D,H,W)(N, C, D, H, W)
Output: (N,C,D,H,W)(N, C, D, H, W) (same shape as input) Examples: >>> # With Learnable Parameters
>>> m = nn.BatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm3d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input) | torch.generated.torch.nn.batchnorm3d#torch.nn.BatchNorm3d |
class torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the Binary Cross Entropy between the target and the output: The unreduced (i.e. with reduction set to 'none') loss can be described as: ℓ(x,y)=L={l1,…,lN}⊤,ln=−wn[yn⋅logxn+(1−yn)⋅log(1−xn)],\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right],
where NN is the batch size. If reduction is not 'none' (default 'mean'), then ℓ(x,y)={mean(L),if reduction=‘mean’;sum(L),if reduction=‘sum’.\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets yy should be numbers between 0 and 1. Notice that if xnx_n is either 0 or 1, one of the log terms would be mathematically undefined in the above loss equation. PyTorch chooses to set log(0)=−∞\log (0) = -\infty , since limx→0log(x)=−∞\lim_{x\to 0} \log (x) = -\infty . However, an infinite term in the loss equation is not desirable for several reasons. For one, if either yn=0y_n = 0 or (1−yn)=0(1 - y_n) = 0 , then we would be multiplying 0 with infinity. Secondly, if we have an infinite loss value, then we would also have an infinite term in our gradient, since limx→0ddxlog(x)=∞\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty . This would make BCELoss’s backward method nonlinear with respect to xnx_n , and using it for things like linear regression would not be straight-forward. Our solution is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method. Parameters
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Shape:
Input: (N,∗)(N, *) where ∗* means any number of additional dimensions Target: (N,∗)(N, *) , same shape as the input Output: scalar. If reduction is 'none', then (N,∗)(N, *) , same shape as input. Examples: >>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward() | torch.generated.torch.nn.bceloss#torch.nn.BCELoss |
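The clamping described above keeps the loss finite even when a prediction is exactly 0 or 1: since log outputs are clamped at -100, a maximally wrong confident prediction yields a loss of 100 rather than infinity. A minimal sketch:

```python
import torch
import torch.nn as nn

loss = nn.BCELoss(reduction='none')

pred = torch.tensor([0.0, 1.0, 0.5])
target = torch.tensor([1.0, 0.0, 0.5])

# -log(0) would be inf; clamping log at -100 gives a finite loss of 100
# for the first two elements. The third is the usual -log(0.5).
l = loss(pred, target)
print(l)
```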
class torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) [source]
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability. The unreduced (i.e. with reduction set to 'none') loss can be described as: ℓ(x,y)=L={l1,…,lN}⊤,ln=−wn[yn⋅logσ(xn)+(1−yn)⋅log(1−σ(xn))],\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right],
where NN is the batch size. If reduction is not 'none' (default 'mean'), then ℓ(x,y)={mean(L),if reduction=‘mean’;sum(L),if reduction=‘sum’.\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1. It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as: ℓc(x,y)=Lc={l1,c,…,lN,c}⊤,ln,c=−wn,c[pcyn,c⋅logσ(xn,c)+(1−yn,c)⋅log(1−σ(xn,c))],\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right],
where cc is the class number (c>1c > 1 for multi-label binary classification, c=1c = 1 for single-label binary classification), nn is the number of the sample in the batch and pcp_c is the weight of the positive answer for the class cc . pc>1p_c > 1 increases the recall, pc<1p_c < 1 increases the precision. For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to 300100=3\frac{300}{100}=3 . The loss would act as if the dataset contains 3×100=3003\times 100=300 positive examples. Examples: >>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 1.5) # A prediction (logit)
>>> pos_weight = torch.ones([64]) # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target) # -log(sigmoid(1.5))
tensor(0.2014)
Parameters
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes. Shape:
Input: (N,∗)(N, *) where ∗* means any number of additional dimensions Target: (N,∗)(N, *) , same shape as the input Output: scalar. If reduction is 'none', then (N,∗)(N, *) , same shape as input. Examples: >>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward() | torch.generated.torch.nn.bcewithlogitsloss#torch.nn.BCEWithLogitsLoss |
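For moderate logits the combined layer matches Sigmoid followed by BCELoss, while remaining accurate for extreme logits where the separate version saturates and clamps. A minimal sketch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(8)
target = torch.empty(8).random_(2)

combined = nn.BCEWithLogitsLoss()(logits, target)
separate = nn.BCELoss()(torch.sigmoid(logits), target)
print(torch.allclose(combined, separate, atol=1e-6))

# With an extreme logit, the log-sum-exp form stays exact: the true loss
# for logit -200 and target 1 is log(1 + exp(200)) ≈ 200, while the
# separate version would clamp at 100.
extreme = nn.BCEWithLogitsLoss()(torch.tensor([-200.0]), torch.tensor([1.0]))
print(extreme)
```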
class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True) [source]
Applies a bilinear transformation to the incoming data: y=x1TAx2+by = x_1^T A x_2 + b Parameters
in1_features – size of each first input sample
in2_features – size of each second input sample
out_features – size of each output sample
bias – If set to False, the layer will not learn an additive bias. Default: True
Shape:
Input1: (N,∗,Hin1)(N, *, H_{in1}) where Hin1=in1_featuresH_{in1}=\text{in1\_features} and ∗* means any number of additional dimensions. All but the last dimension of the inputs should be the same. Input2: (N,∗,Hin2)(N, *, H_{in2}) where Hin2=in2_featuresH_{in2}=\text{in2\_features} . Output: (N,∗,Hout)(N, *, H_{out}) where Hout=out_featuresH_{out}=\text{out\_features} and all but the last dimension are the same shape as the input. Variables
~Bilinear.weight – the learnable weights of the module of shape (out_features,in1_features,in2_features)(\text{out\_features}, \text{in1\_features}, \text{in2\_features}) . The values are initialized from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) , where k=1in1_featuresk = \frac{1}{\text{in1\_features}}
~Bilinear.bias – the learnable bias of the module of shape (out_features)(\text{out\_features}) . If bias is True, the values are initialized from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) , where k=1in1_featuresk = \frac{1}{\text{in1\_features}}
Examples: >>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
torch.Size([128, 40]) | torch.generated.torch.nn.bilinear#torch.nn.Bilinear |
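The transformation y = x1ᵀ A x2 + b can be reproduced by hand from the weight and bias variables described above; a minimal sketch using einsum:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.Bilinear(20, 30, 40)
x1 = torch.randn(128, 20)
x2 = torch.randn(128, 30)

out = m(x1, x2)

# weight has shape (out_features, in1_features, in2_features):
# y[b, o] = sum_ij x1[b, i] * weight[o, i, j] * x2[b, j] + bias[o]
manual = torch.einsum('bi,oij,bj->bo', x1, m.weight, x2) + m.bias
print(torch.allclose(out, manual, atol=1e-4))
```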
class torch.nn.CELU(alpha=1.0, inplace=False) [source]
Applies the element-wise function: CELU(x)=max(0,x)+min(0,α∗(exp(x/α)−1))\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1))
More details can be found in the paper Continuously Differentiable Exponential Linear Units . Parameters
alpha – the α\alpha value for the CELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: (N,∗)(N, *) where * means any number of additional dimensions Output: (N,∗)(N, *) , same shape as the input Examples: >>> m = nn.CELU()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.celu#torch.nn.CELU |
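The element-wise formula above is easy to reproduce by hand; a minimal sketch comparing the module against a direct evaluation:

```python
import torch
import torch.nn as nn

alpha = 1.0
m = nn.CELU(alpha)
x = torch.randn(10)

# CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
manual = torch.clamp(x, min=0) + torch.clamp(alpha * (torch.exp(x / alpha) - 1), max=0)
print(torch.allclose(m(x), manual, atol=1e-6))
```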
class torch.nn.ChannelShuffle(groups) [source]
Divide the channels in a tensor of shape (∗,C,H,W)(*, C, H, W) into g groups and rearrange them as (∗,C/g,g,H,W)(*, \frac{C}{g}, g, H, W) , while keeping the original tensor shape. Parameters
groups (int) – number of groups to divide channels in. Examples: >>> channel_shuffle = nn.ChannelShuffle(2)
>>> input = torch.randn(1, 4, 2, 2)
>>> print(input)
[[[[1, 2],
[3, 4]],
[[5, 6],
[7, 8]],
[[9, 10],
[11, 12]],
[[13, 14],
[15, 16]],
]]
>>> output = channel_shuffle(input)
>>> print(output)
[[[[1, 2],
[3, 4]],
[[9, 10],
[11, 12]],
[[5, 6],
[7, 8]],
[[13, 14],
[15, 16]],
]] | torch.generated.torch.nn.channelshuffle#torch.nn.ChannelShuffle |
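The rearrangement above is equivalent to viewing the channel dimension as (g, C/g), transposing the two factors, and flattening back; a minimal sketch for a 4D input:

```python
import torch
import torch.nn as nn

g = 2
shuffle = nn.ChannelShuffle(g)
x = torch.randn(1, 4, 2, 2)

# view -> (N, g, C/g, H, W), swap the group axes, flatten channels again
n, c, h, w = x.shape
manual = x.view(n, g, c // g, h, w).transpose(1, 2).reshape(n, c, h, w)

y = shuffle(x)
print(torch.equal(y, manual))
```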
class torch.nn.ConstantPad1d(padding, value) [source]
Pads the input tensor boundaries with a constant value. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If an int, uses the same padding in both boundaries. If a 2-tuple, uses (padding_left\text{padding\_left} , padding_right\text{padding\_right} ) Shape:
Input: (N,C,Win)(N, C, W_{in})
Output: (N,C,Wout)(N, C, W_{out}) where Wout=Win+padding_left+padding_rightW_{out} = W_{in} + \text{padding\_left} + \text{padding\_right} Examples: >>> m = nn.ConstantPad1d(2, 3.5)
>>> input = torch.randn(1, 2, 4)
>>> input
tensor([[[-1.0491, -0.7152, -0.0749, 0.8530],
[-1.3287, 1.8966, 0.1466, -0.2771]]])
>>> m(input)
tensor([[[ 3.5000, 3.5000, -1.0491, -0.7152, -0.0749, 0.8530, 3.5000,
3.5000],
[ 3.5000, 3.5000, -1.3287, 1.8966, 0.1466, -0.2771, 3.5000,
3.5000]]])
>>> m = nn.ConstantPad1d(2, 3.5)
>>> input = torch.randn(1, 2, 3)
>>> input
tensor([[[ 1.6616, 1.4523, -1.1255],
[-3.6372, 0.1182, -1.8652]]])
>>> m(input)
tensor([[[ 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000, 3.5000],
[ 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000, 3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad1d((3, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000, 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000],
[ 3.5000, 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000]]]) | torch.generated.torch.nn.constantpad1d#torch.nn.ConstantPad1d |
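As noted above, N-dimensional padding goes through torch.nn.functional.pad(); the module form is equivalent to calling it directly. A minimal sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 2, 3)

m = nn.ConstantPad1d((3, 1), 3.5)

# F.pad takes (padding_left, padding_right) for the last dimension.
same = F.pad(x, (3, 1), mode='constant', value=3.5)
padded = m(x)
print(torch.equal(padded, same))
```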
class torch.nn.ConstantPad2d(padding, value) [source]
Pads the input tensor boundaries with a constant value. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If an int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left\text{padding\_left} , padding_right\text{padding\_right} , padding_top\text{padding\_top} , padding_bottom\text{padding\_bottom} ) Shape:
Input: (N,C,Hin,Win)(N, C, H_{in}, W_{in})
Output: (N,C,Hout,Wout)(N, C, H_{out}, W_{out}) where Hout=Hin+padding_top+padding_bottomH_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom} Wout=Win+padding_left+padding_rightW_{out} = W_{in} + \text{padding\_left} + \text{padding\_right} Examples: >>> m = nn.ConstantPad2d(2, 3.5)
>>> input = torch.randn(1, 2, 2)
>>> input
tensor([[[ 1.6585, 0.4320],
[-0.8701, -0.4649]]])
>>> m(input)
tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 1.6585, 0.4320, 3.5000, 3.5000],
[ 3.5000, 3.5000, -0.8701, -0.4649, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 1.6585, 0.4320],
[ 3.5000, 3.5000, 3.5000, -0.8701, -0.4649],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]]) | torch.generated.torch.nn.constantpad2d#torch.nn.ConstantPad2d |
class torch.nn.ConstantPad3d(padding, value) [source]
Pads the input tensor boundaries with a constant value. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If an int, uses the same padding in all boundaries. If a 6-tuple, uses (padding_left\text{padding\_left} , padding_right\text{padding\_right} , padding_top\text{padding\_top} , padding_bottom\text{padding\_bottom} , padding_front\text{padding\_front} , padding_back\text{padding\_back} ) Shape:
Input: (N,C,Din,Hin,Win)(N, C, D_{in}, H_{in}, W_{in})
Output: (N,C,Dout,Hout,Wout)(N, C, D_{out}, H_{out}, W_{out}) where Dout=Din+padding_front+padding_backD_{out} = D_{in} + \text{padding\_front} + \text{padding\_back} Hout=Hin+padding_top+padding_bottomH_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom} Wout=Win+padding_left+padding_rightW_{out} = W_{in} + \text{padding\_left} + \text{padding\_right} Examples: >>> m = nn.ConstantPad3d(3, 3.5)
>>> input = torch.randn(16, 3, 10, 20, 30)
>>> output = m(input)
>>> # using different paddings for different sides
>>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5)
>>> output = m(input) | torch.generated.torch.nn.constantpad3d#torch.nn.ConstantPad3d |
class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 1D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,Cin,L)(N, C_{\text{in}}, L) and output (N,Cout,Lout)(N, C_{\text{out}}, L_{\text{out}}) can be precisely described as: out(Ni,Coutj)=bias(Coutj)+∑k=0Cin−1weight(Coutj,k)⋆input(Ni,k)\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)
where ⋆\star is the valid cross-correlation operator, NN is a batch size, CC denotes a number of channels, LL is a length of signal sequence. This module supports TensorFloat32.
stride controls the stride for the cross-correlation, a single number or a one-element tuple.
padding controls the amount of implicit padding on both sides for padding number of points.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. At groups= in_channels, each input channel is convolved with its own set of filters (of size out_channelsin_channels\frac{\text{out\_channels}}{\text{in\_channels}} ). Note When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”. In other words, for an input of size (N,Cin,Lin)(N, C_{in}, L_{in}) , a depthwise convolution with a depthwise multiplier K can be performed with the arguments (Cin=Cin,Cout=Cin×K,...,groups=Cin)(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in}) . Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
Input: (N,Cin,Lin)(N, C_{in}, L_{in})
Output: (N,Cout,Lout)(N, C_{out}, L_{out}) where Lout=⌊Lin+2×padding−dilation×(kernel_size−1)−1stride+1⌋L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor
Variables
~Conv1d.weight (Tensor) – the learnable weights of the module of shape (out_channels,in_channelsgroups,kernel_size)(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}) . The values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCin∗kernel_sizek = \frac{groups}{C_\text{in} * \text{kernel\_size}}
~Conv1d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCin∗kernel_sizek = \frac{groups}{C_\text{in} * \text{kernel\_size}}
Examples: >>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input) | torch.generated.torch.nn.conv1d#torch.nn.Conv1d |
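To make the LoutL_{out} formula above concrete, here is a small sketch (assuming PyTorch is installed) that evaluates the formula by hand and checks it against the actual output shape of the example module:

```python
import torch
import torch.nn as nn

# Conv1d from the example: kernel_size=3, stride=2, padding=0, dilation=1
m = nn.Conv1d(16, 33, 3, stride=2)
input = torch.randn(20, 16, 50)
output = m(input)

# L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
#       = floor((50 + 0 - 2 - 1) / 2 + 1) = 24
l_out = (50 + 2 * 0 - 1 * (3 - 1) - 1) // 2 + 1
```

The floor division mirrors the floor in the shape formula; `output.shape` comes out to `(20, 33, l_out)`.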
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 2D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,Cin,H,W)(N, C_{\text{in}}, H, W) and output (N,Cout,Hout,Wout)(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}}) can be precisely described as: out(Ni,Coutj)=bias(Coutj)+∑k=0Cin−1weight(Coutj,k)⋆input(Ni,k)\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)
where ⋆\star is the valid 2D cross-correlation operator, NN is the batch size, CC denotes the number of channels, HH is the height of the input planes in pixels, and WW is the width in pixels. This module supports TensorFloat32.
stride controls the stride for the cross-correlation, a single number or a tuple.
padding controls the amount of implicit padding on both sides for padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. At groups= in_channels, each input channel is convolved with its own set of filters (of size out_channelsin_channels\frac{\text{out\_channels}}{\text{in\_channels}} ). The parameters kernel_size, stride, padding, dilation can either be: a single int – in which case the same value is used for the height and width dimension a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension Note When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”. In other words, for an input of size (N,Cin,Lin)(N, C_{in}, L_{in}) , a depthwise convolution with a depthwise multiplier K can be performed with the arguments (Cin=Cin,Cout=Cin×K,...,groups=Cin)(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in}) . Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
Input: (N,Cin,Hin,Win)(N, C_{in}, H_{in}, W_{in})
Output: (N,Cout,Hout,Wout)(N, C_{out}, H_{out}, W_{out}) where Hout=⌊Hin+2×padding[0]−dilation[0]×(kernel_size[0]−1)−1stride[0]+1⌋H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor
Wout=⌊Win+2×padding[1]−dilation[1]×(kernel_size[1]−1)−1stride[1]+1⌋W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor
Variables
~Conv2d.weight (Tensor) – the learnable weights of the module of shape (out_channels,in_channelsgroups,(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, kernel_size[0],kernel_size[1])\text{kernel\_size[0]}, \text{kernel\_size[1]}) . The values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCin∗∏i=01kernel_size[i]k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}
~Conv2d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCin∗∏i=01kernel_size[i]k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}
Examples: >>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input) | torch.generated.torch.nn.conv2d#torch.nn.Conv2d |
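The "depthwise convolution" note above (groups == in_channels, out_channels == K * in_channels) can be illustrated with a short sketch; the channel counts here are illustrative, and any positive multiplier K works:

```python
import torch
import torch.nn as nn

# Depthwise convolution: each of the 16 input channels is convolved with
# its own K=2 filters (groups == in_channels, out_channels == 2 * in_channels).
depthwise = nn.Conv2d(16, 32, kernel_size=3, padding=1, groups=16)
x = torch.randn(1, 16, 28, 28)
y = depthwise(x)

# Weight shape is (out_channels, in_channels/groups, kH, kW) = (32, 1, 3, 3):
# each filter sees a single input channel.
w_shape = tuple(depthwise.weight.shape)
```

With kernel_size=3 and padding=1, the spatial size is preserved, so `y` has shape `(1, 32, 28, 28)`.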
class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 3D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,Cin,D,H,W)(N, C_{in}, D, H, W) and output (N,Cout,Dout,Hout,Wout)(N, C_{out}, D_{out}, H_{out}, W_{out}) can be precisely described as: out(Ni,Coutj)=bias(Coutj)+∑k=0Cin−1weight(Coutj,k)⋆input(Ni,k)out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k)
where ⋆\star is the valid 3D cross-correlation operator. This module supports TensorFloat32.
stride controls the stride for the cross-correlation.
padding controls the amount of implicit padding on both sides for padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. At groups= in_channels, each input channel is convolved with its own set of filters (of size out_channelsin_channels\frac{\text{out\_channels}}{\text{in\_channels}} ). The parameters kernel_size, stride, padding, dilation can either be: a single int – in which case the same value is used for the depth, height and width dimension a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension Note When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”. In other words, for an input of size (N,Cin,Lin)(N, C_{in}, L_{in}) , a depthwise convolution with a depthwise multiplier K can be performed with the arguments (Cin=Cin,Cout=Cin×K,...,groups=Cin)(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in}) . Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
Input: (N,Cin,Din,Hin,Win)(N, C_{in}, D_{in}, H_{in}, W_{in})
Output: (N,Cout,Dout,Hout,Wout)(N, C_{out}, D_{out}, H_{out}, W_{out}) where Dout=⌊Din+2×padding[0]−dilation[0]×(kernel_size[0]−1)−1stride[0]+1⌋D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor
Hout=⌊Hin+2×padding[1]−dilation[1]×(kernel_size[1]−1)−1stride[1]+1⌋H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor
Wout=⌊Win+2×padding[2]−dilation[2]×(kernel_size[2]−1)−1stride[2]+1⌋W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor
Variables
~Conv3d.weight (Tensor) – the learnable weights of the module of shape (out_channels,in_channelsgroups,(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, kernel_size[0],kernel_size[1],kernel_size[2])\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]}) . The values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCin∗∏i=02kernel_size[i]k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}
~Conv3d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCin∗∏i=02kernel_size[i]k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}
Examples: >>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input) | torch.generated.torch.nn.conv3d#torch.nn.Conv3d |
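As a quick check of the three per-dimension output-size formulas, a sketch that applies them to the non-cubic example above:

```python
import torch
import torch.nn as nn

# Non-cubic kernel example from the docs above
m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
x = torch.randn(20, 16, 10, 50, 100)
y = m(x)

def conv_out(size, pad, dil, k, stride):
    # floor((size + 2*pad - dil*(k - 1) - 1) / stride + 1)
    return (size + 2 * pad - dil * (k - 1) - 1) // stride + 1

d_out = conv_out(10, 4, 1, 3, 2)    # depth dimension
h_out = conv_out(50, 2, 1, 5, 1)    # height dimension
w_out = conv_out(100, 0, 1, 2, 1)   # width dimension
```

Each call plugs one (size, padding, dilation, kernel, stride) tuple into the shared formula; `y.shape` equals `(20, 33, d_out, h_out, w_out)`.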
class torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
Applies a 1D transposed convolution operator over an input image composed of several input planes. This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). This module supports TensorFloat32.
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
output_padding controls the additional size added to one side of the output shape. See note below for details.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. At groups= in_channels, each input channel is convolved with its own set of filters (of size out_channelsin_channels\frac{\text{out\_channels}}{\text{in\_channels}} ). Note The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv1d and a ConvTranspose1d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find output shape, but does not actually add zero-padding to output. Note In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 Shape:
Input: (N,Cin,Lin)(N, C_{in}, L_{in})
Output: (N,Cout,Lout)(N, C_{out}, L_{out}) where Lout=(Lin−1)×stride−2×padding+dilation×(kernel_size−1)+output_padding+1L_{out} = (L_{in} - 1) \times \text{stride} - 2 \times \text{padding} + \text{dilation} \times (\text{kernel\_size} - 1) + \text{output\_padding} + 1
Variables
~ConvTranspose1d.weight (Tensor) – the learnable weights of the module of shape (in_channels,out_channelsgroups,(\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}}, kernel_size)\text{kernel\_size}) . The values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCout∗kernel_sizek = \frac{groups}{C_\text{out} * \text{kernel\_size}}
~ConvTranspose1d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCout∗kernel_sizek = \frac{groups}{C_\text{out} * \text{kernel\_size}} | torch.generated.torch.nn.convtranspose1d#torch.nn.ConvTranspose1d |
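A minimal ConvTranspose1d sketch, mirroring the shape-inversion note above: a transposed convolution initialized with the same parameters as a Conv1d recovers the input length, with the stride > 1 ambiguity resolved by the output_size forward argument (the same mechanism shown in the ConvTranspose2d example):

```python
import torch
import torch.nn as nn

# A strided Conv1d and a ConvTranspose1d with matching parameters
conv = nn.Conv1d(16, 33, 3, stride=2, padding=1)
deconv = nn.ConvTranspose1d(33, 16, 3, stride=2, padding=1)

x = torch.randn(20, 16, 50)
h = conv(x)   # L_out = floor((50 + 2 - 2 - 1)/2 + 1) = 25

# stride=2 means several input lengths map to L=25; output_size picks L=50 back
y = deconv(h, output_size=x.size())
```

Without `output_size`, the formula above gives LoutL_{out} = (25 − 1)·2 − 2 + 2 + 0 + 1 = 49; passing `output_size` chooses the alternative length 50 that also maps to 25 under the forward convolution.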
class torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
Applies a 2D transposed convolution operator over an input image composed of several input planes. This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). This module supports TensorFloat32.
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
output_padding controls the additional size added to one side of the output shape. See note below for details.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. At groups= in_channels, each input channel is convolved with its own set of filters (of size out_channelsin_channels\frac{\text{out\_channels}}{\text{in\_channels}} ). The parameters kernel_size, stride, padding, output_padding can either be: a single int – in which case the same value is used for the height and width dimensions a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension Note The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv2d and a ConvTranspose2d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find output shape, but does not actually add zero-padding to output. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 Shape:
Input: (N,Cin,Hin,Win)(N, C_{in}, H_{in}, W_{in})
Output: (N,Cout,Hout,Wout)(N, C_{out}, H_{out}, W_{out}) where Hout=(Hin−1)×stride[0]−2×padding[0]+dilation[0]×(kernel_size[0]−1)+output_padding[0]+1H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1
Wout=(Win−1)×stride[1]−2×padding[1]+dilation[1]×(kernel_size[1]−1)+output_padding[1]+1W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1
Variables
~ConvTranspose2d.weight (Tensor) – the learnable weights of the module of shape (in_channels,out_channelsgroups,(\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}}, kernel_size[0],kernel_size[1])\text{kernel\_size[0]}, \text{kernel\_size[1]}) . The values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCout∗∏i=01kernel_size[i]k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel\_size}[i]}
~ConvTranspose2d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCout∗∏i=01kernel_size[i]k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel\_size}[i]}
Examples: >>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12]) | torch.generated.torch.nn.convtranspose2d#torch.nn.ConvTranspose2d |
class torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
Applies a 3D transposed convolution operator over an input image composed of several input planes. The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes. This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). This module supports TensorFloat32.
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
output_padding controls the additional size added to one side of the output shape. See note below for details.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. At groups= in_channels, each input channel is convolved with its own set of filters (of size out_channelsin_channels\frac{\text{out\_channels}}{\text{in\_channels}} ). The parameters kernel_size, stride, padding, output_padding can either be: a single int – in which case the same value is used for the depth, height and width dimensions a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension Note The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv3d and a ConvTranspose3d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find output shape, but does not actually add zero-padding to output. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 Shape:
Input: (N,Cin,Din,Hin,Win)(N, C_{in}, D_{in}, H_{in}, W_{in})
Output: (N,Cout,Dout,Hout,Wout)(N, C_{out}, D_{out}, H_{out}, W_{out}) where Dout=(Din−1)×stride[0]−2×padding[0]+dilation[0]×(kernel_size[0]−1)+output_padding[0]+1D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1
Hout=(Hin−1)×stride[1]−2×padding[1]+dilation[1]×(kernel_size[1]−1)+output_padding[1]+1H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1
Wout=(Win−1)×stride[2]−2×padding[2]+dilation[2]×(kernel_size[2]−1)+output_padding[2]+1W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times \text{padding}[2] + \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) + \text{output\_padding}[2] + 1
Variables
~ConvTranspose3d.weight (Tensor) – the learnable weights of the module of shape (in_channels,out_channelsgroups,(\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}}, kernel_size[0],kernel_size[1],kernel_size[2])\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]}) . The values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCout∗∏i=02kernel_size[i]k = \frac{groups}{C_\text{out} * \prod_{i=0}^{2}\text{kernel\_size}[i]}
~ConvTranspose3d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=groupsCout∗∏i=02kernel_size[i]k = \frac{groups}{C_\text{out} * \prod_{i=0}^{2}\text{kernel\_size}[i]}
Examples: >>> # With square kernels and equal stride
>>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input) | torch.generated.torch.nn.convtranspose3d#torch.nn.ConvTranspose3d |
class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the loss given input tensors x1x_1 , x2x_2 and a Tensor label yy with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning. The loss function for each sample is: loss(x,y)={1−cos(x1,x2),if y=1max(0,cos(x1,x2)−margin),if y=−1\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}
Parameters
margin (float, optional) – Should be a number from −1-1 to 11 ; a value of 00 to 0.50.5 is suggested. If margin is missing, the default value is 00 .
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' | torch.generated.torch.nn.cosineembeddingloss#torch.nn.CosineEmbeddingLoss |
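Unlike the other criteria on this page, no usage snippet is included above; a minimal sketch (batch size, embedding dimension, margin, and labels are illustrative choices, not prescribed values) could look like:

```python
import torch
import torch.nn as nn

# Two hypothetical batches of 128-dim embeddings and +/-1 similarity labels.
input1 = torch.randn(3, 128, requires_grad=True)
input2 = torch.randn(3, 128, requires_grad=True)
target = torch.tensor([1.0, -1.0, 1.0])  # 1 = similar, -1 = dissimilar

loss_fn = nn.CosineEmbeddingLoss(margin=0.5)
loss = loss_fn(input1, input2, target)  # scalar under reduction='mean'
loss.backward()
```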
class torch.nn.CosineSimilarity(dim=1, eps=1e-08) [source]
Returns cosine similarity between x_1 and x_2, computed along dim. \text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}.
Parameters
dim (int, optional) – Dimension where cosine similarity is computed. Default: 1
eps (float, optional) – Small value to avoid division by zero. Default: 1e-8 Shape:
Input1: (\ast_1, D, \ast_2) where D is at position dim
Input2: (\ast_1, D, \ast_2), same shape as Input1
Output: (\ast_1, \ast_2)
Examples::
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2) | torch.generated.torch.nn.cosinesimilarity#torch.nn.CosineSimilarity |
class torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source]
This criterion combines LogSoftmax and NLLLoss in one single class. It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set. The input is expected to contain raw, unnormalized scores for each class. input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1 for the K-dimensional case (described later). This criterion expects a class index in the range [0, C-1] as the target for each value of a 1D tensor of size minibatch; if ignore_index is specified, this criterion also accepts this class index (this index may not necessarily be in the class range). The loss can be described as: \text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)
or in the case of the weight argument being specified: \text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)
The losses are averaged across observations for each minibatch. If the weight argument is specified then this is a weighted average: \text{loss} = \frac{\sum^{N}_{i=1} loss(i, class[i])}{\sum^{N}_{i=1} weight[class[i]]}
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch,C,d1,d2,...,dK)(minibatch, C, d_1, d_2, ..., d_K) with K≥1K \geq 1 , where KK is the number of dimensions, and a target of appropriate shape (see below). Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Shape:
Input: (N, C) where C = number of classes, or (N, C, d_1, d_2, ..., d_K) with K \geq 1 in the case of K-dimensional loss.
Target: (N) where each value is 0 \leq \text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) with K \geq 1 in the case of K-dimensional loss.
Output: scalar. If reduction is 'none', then the same size as the target: (N), or (N, d_1, d_2, ..., d_K) with K \geq 1 in the case of K-dimensional loss. Examples: >>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward() | torch.generated.torch.nn.crossentropyloss#torch.nn.CrossEntropyLoss |
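For the K-dimensional case mentioned above (e.g. per-pixel classification of 2D images), the shapes work out as follows; the sizes here are illustrative:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
# N=4 samples, C=5 classes, and an 8x8 spatial grid: input is (N, C, d1, d2).
input = torch.randn(4, 5, 8, 8, requires_grad=True)
# Target holds one class index per spatial position: (N, d1, d2).
target = torch.randint(0, 5, (4, 8, 8))
output = loss_fn(input, target)  # scalar under reduction='mean'
output.backward()
```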
class torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False) [source]
The Connectionist Temporal Classification loss. Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be ≤\leq the input length. Parameters
blank (int, optional) – blank label. Default 0.
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Shape:
Log_probs: Tensor of size (T, N, C), where T = input length, N = batch size, and C = number of classes (including blank). The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()).
Targets: Tensor of size (N, S) or (\operatorname{sum}(\text{target\_lengths})), where N = batch size and S = max target length if the shape is (N, S). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default=0). In the (N, S) form, targets are padded to the length of the longest sequence and stacked. In the (\operatorname{sum}(\text{target\_lengths})) form, the targets are assumed to be un-padded and concatenated within 1 dimension.
Input_lengths: Tuple or tensor of size (N), where N = batch size. It represents the lengths of the inputs (each must be \leq T). The lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths.
Target_lengths: Tuple or tensor of size (N), where N = batch size. It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If the target shape is (N, S), target_lengths are effectively the stop index s_n for each target sequence, such that target_n = targets[n,0:s_n] for each target in a batch. Lengths must each be \leq S. If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.
Output: scalar. If reduction is 'none', then (N), where N = batch size. Examples: >>> # Targets are to be padded
>>> T = 50 # Input sequence length
>>> C = 20 # Number of classes (including blank)
>>> N = 16 # Batch size
>>> S = 30 # Target sequence length of longest target in batch (padding length)
>>> S_min = 10 # Minimum target length, for demonstration purposes
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>>
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
>>>
>>>
>>> # Targets are to be un-padded
>>> T = 50 # Input sequence length
>>> C = 20 # Number of classes (including blank)
>>> N = 16 # Batch size
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf Note In order to use CuDNN, the following must be satisfied: targets must be in concatenated format, all input_lengths must be T, blank=0, target_lengths \leq 256, and the integer arguments must be of dtype torch.int32. The regular implementation uses the (more common in PyTorch) torch.long dtype. Note In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background. | torch.generated.torch.nn.ctcloss#torch.nn.CTCLoss |
class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0) [source]
Implements data parallelism at the module level. This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module. The batch size should be larger than the number of GPUs used. Warning It is recommended to use DistributedDataParallel, instead of this class, to do multi-GPU training, even if there is only a single node. See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel and Distributed Data Parallel. Arbitrary positional and keyword inputs are allowed to be passed into DataParallel but some types are specially handled. tensors will be scattered on dim specified (default 0). tuple, list and dict types will be shallow copied. The other types will be shared among different threads and can be corrupted if written to in the model’s forward pass. The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module. Warning In each forward, module is replicated on each device, so any updates to the running module in forward will be lost. For example, if module has a counter attribute that is incremented in each forward, it will always stay at the initial value because the update is done on the replicas which are destroyed after forward. However, DataParallel guarantees that the replica on device[0] will have its parameters and buffers sharing storage with the base parallelized module. So in-place updates to the parameters or buffers on device[0] will be recorded. E.g., BatchNorm2d and spectral_norm() rely on this behavior to update the buffers. 
Warning Forward and backward hooks defined on module and its submodules will be invoked len(device_ids) times, each with inputs located on a particular device. Particularly, the hooks are only guaranteed to be executed in correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set via register_forward_pre_hook() be executed before all len(device_ids) forward() calls, but that each such hook be executed before the corresponding forward() call of that device. Warning When module returns a scalar (i.e., 0-dimensional tensor) in forward(), this wrapper will return a vector of length equal to number of devices used in data parallelism, containing the result from each device. Note There is a subtlety in using the pack sequence -> recurrent network -> unpack sequence pattern in a Module wrapped in DataParallel. See My recurrent network doesn’t work with data parallelism section in FAQ for details. Parameters
module (Module) – module to be parallelized
device_ids (list of python:int or torch.device) – CUDA devices (default: all devices)
output_device (int or torch.device) – device location of output (default: device_ids[0]) Variables
~DataParallel.module (Module) – the module to be parallelized Example: >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var) # input_var can be on any device, including CPU | torch.generated.torch.nn.dataparallel#torch.nn.DataParallel |
class torch.nn.Dropout(p=0.5, inplace=False) [source]
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call. This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors . Furthermore, the outputs are scaled by a factor of 11−p\frac{1}{1-p} during training. This means that during evaluation the module simply computes an identity function. Parameters
p – probability of an element to be zeroed. Default: 0.5
inplace – If set to True, will do this operation in-place. Default: False
Shape:
Input: (∗)(*) . Input can be of any shape Output: (∗)(*) . Output is of the same shape as input Examples: >>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input) | torch.generated.torch.nn.dropout#torch.nn.Dropout |
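The 1/(1-p) train-time scaling and the eval-time identity behavior described above can be checked directly; p=0.5 here is just the default value:

```python
import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
x = torch.ones(1000)

m.train()
y = m(x)
# Survivors of an all-ones input are scaled to 1/(1-0.5) = 2.0; the rest are 0,
# so y contains only the values 0.0 and 2.0.

m.eval()
# In eval mode the module computes the identity, so m(x) equals x exactly.
```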
class torch.nn.Dropout2d(p=0.5, inplace=False) [source]
Randomly zero out entire channels (a channel is a 2D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 2D tensor input[i,j]\text{input}[i, j] ). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. Usually the input comes from nn.Conv2d modules. As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, nn.Dropout2d() will help promote independence between feature maps and should be used instead. Parameters
p (float, optional) – probability of an element to be zero-ed.
inplace (bool, optional) – If set to True, will do this operation in-place Shape:
Input: (N,C,H,W)(N, C, H, W)
Output: (N,C,H,W)(N, C, H, W) (same shape as input) Examples: >>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input) | torch.generated.torch.nn.dropout2d#torch.nn.Dropout2d |
class torch.nn.Dropout3d(p=0.5, inplace=False) [source]
Randomly zero out entire channels (a channel is a 3D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 3D tensor input[i,j]\text{input}[i, j] ). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. Usually the input comes from nn.Conv3d modules. As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, nn.Dropout3d() will help promote independence between feature maps and should be used instead. Parameters
p (float, optional) – probability of an element to be zeroed.
inplace (bool, optional) – If set to True, will do this operation in-place Shape:
Input: (N,C,D,H,W)(N, C, D, H, W)
Output: (N,C,D,H,W)(N, C, D, H, W) (same shape as input) Examples: >>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input) | torch.generated.torch.nn.dropout3d#torch.nn.Dropout3d |
class torch.nn.ELU(alpha=1.0, inplace=False) [source]
Applies the element-wise function: \text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}
Parameters
alpha – the α\alpha value for the ELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: (N, *) where * means any number of additional dimensions
Output: (N, *), same shape as the input Examples: >>> m = nn.ELU()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.elu#torch.nn.ELU |
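The piecewise definition above can be verified numerically against an explicit torch.where; alpha=1.0 is the default:

```python
import torch
import torch.nn as nn

m = nn.ELU(alpha=1.0)
x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
y = m(x)
# For x > 0 the output is x itself; for x <= 0 it is alpha * (exp(x) - 1).
expected = torch.where(x > 0, x, 1.0 * (torch.exp(x) - 1))
```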
class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None) [source]
A simple lookup table that stores embeddings of a fixed dictionary and size. This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings. Parameters
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
padding_idx (int, optional) – If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index.
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.
scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False.
sparse (bool, optional) – If True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Variables
~Embedding.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from N(0,1)\mathcal{N}(0, 1) Shape:
Input: (*), IntTensor or LongTensor of arbitrary shape containing the indices to extract
Output: (*, H), where * is the input shape and H = \text{embedding\_dim}
Note Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s optim.SGD (CUDA and CPU), optim.SparseAdam (CUDA and CPU) and optim.Adagrad (CPU) Note With padding_idx set, the embedding vector at padding_idx is initialized to all zeros. However, note that this vector can be modified afterwards, e.g., using a customized initialization method, and thus changing the vector used to pad the output. The gradient for this vector from Embedding is always zero. Note When max_norm is not None, Embedding’s forward method will modify the weight tensor in-place. Since tensors needed for gradient computations cannot be modified in-place, performing a differentiable operation on Embedding.weight before calling Embedding’s forward method requires cloning Embedding.weight when max_norm is not None. For example: n, d, m = 3, 5, 7
embedding = nn.Embedding(n, d, max_norm=1.0)  # max_norm takes a float
W = torch.randn((m, d), requires_grad=True)
idx = torch.tensor([1, 2])
a = embedding.weight.clone() @ W.t() # weight must be cloned for this to be differentiable
b = embedding(idx) @ W.t() # modifies weight in-place
out = (a.unsqueeze(0) + b.unsqueeze(1))
loss = out.sigmoid().prod()
loss.backward()
Examples: >>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902, 0.7172],
[-0.6431, 0.0748, 0.6969],
[ 1.4970, 1.3448, -0.9685],
[-0.3677, -2.7265, -0.1685]],
[[ 1.4970, 1.3448, -0.9685],
[ 0.4362, -0.4004, 0.9400],
[-0.6431, 0.0748, 0.6969],
[ 0.9124, -2.3616, 1.1151]]])
>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0,2,0,5]])
>>> embedding(input)
tensor([[[ 0.0000, 0.0000, 0.0000],
[ 0.1535, -2.0309, 0.9315],
[ 0.0000, 0.0000, 0.0000],
[-0.1655, 0.9897, 0.0635]]])
classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False) [source]
Creates Embedding instance from given 2-dimensional FloatTensor. Parameters
embeddings (Tensor) – FloatTensor containing weights for the Embedding. First dimension is being passed to Embedding as num_embeddings, second as embedding_dim.
freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embedding.weight.requires_grad = False. Default: True
padding_idx (int, optional) – See module initialization documentation.
max_norm (float, optional) – See module initialization documentation.
norm_type (float, optional) – See module initialization documentation. Default 2.
scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default False.
sparse (bool, optional) – See module initialization documentation. Examples: >>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embedding = nn.Embedding.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([1])
>>> embedding(input)
tensor([[ 4.0000, 5.1000, 6.3000]]) | torch.generated.torch.nn.embedding#torch.nn.Embedding |
class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None, include_last_offset=False) [source]
Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings. For bags of constant length and no per_sample_weights and 2D inputs, this class with mode="sum" is equivalent to Embedding followed by torch.sum(dim=1), with mode="mean" is equivalent to Embedding followed by torch.mean(dim=1), with mode="max" is equivalent to Embedding followed by torch.max(dim=1). However, EmbeddingBag is much more time and memory efficient than using a chain of these operations. EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by mode. If per_sample_weights is passed, the only supported mode is "sum", which computes a weighted sum according to per_sample_weights. Parameters
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.
scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. Note: this option is not supported when mode="max".
mode (string, optional) – "sum", "mean" or "max". Specifies the way to reduce the bag. "sum" computes the weighted sum, taking per_sample_weights into consideration. "mean" computes the average of the values in the bag, "max" computes the max value over each bag. Default: "mean"
sparse (bool, optional) – if True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported when mode="max".
include_last_offset (bool, optional) – if True, offsets has one additional element, where the last element is equivalent to the size of indices. This matches the CSR format. Variables
~EmbeddingBag.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from N(0,1)\mathcal{N}(0, 1) .
Inputs: input (IntTensor or LongTensor), offsets (IntTensor or LongTensor, optional), and
per_sample_weights (Tensor, optional)
input and offsets have to be of the same type, either int or long
If input is 2D of shape (B, N), it will be treated as B bags (sequences) each of fixed length N, and this will return B values aggregated in a way depending on the mode. offsets is ignored and required to be None in this case.
If input is 1D of shape (N), it will be treated as a concatenation of multiple bags (sequences). offsets is required to be a 1D tensor containing the starting index positions of each bag in input. Therefore, for offsets of shape (B), input will be viewed as having B bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros. per_sample_weights (Tensor, optional): a tensor of float / double weights, or None
to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated as having the same offsets, if those are not None. Only supported for mode='sum'. Output shape: (B, embedding_dim) Examples: >>> # an Embedding module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.LongTensor([0,4])
>>> embedding_sum(input, offsets)
tensor([[-0.8861, -5.4350, -0.0523],
[ 1.1306, -2.5798, -1.0044]])
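The per_sample_weights argument described above (supported only with mode="sum") scales each looked-up vector before the reduction; the weight values below are illustrative:

```python
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(10, 3, mode='sum')
input = torch.LongTensor([1, 2, 4, 5])
offsets = torch.LongTensor([0, 2])  # two bags: indices [1, 2] and [4, 5]
weights = torch.tensor([0.5, 0.5, 1.0, 2.0])
out = emb(input, offsets, per_sample_weights=weights)
# out[0] == 0.5 * W[1] + 0.5 * W[2] and out[1] == 1.0 * W[4] + 2.0 * W[5],
# where W is emb.weight.
```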
classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, include_last_offset=False) [source]
Creates EmbeddingBag instance from given 2-dimensional FloatTensor. Parameters
embeddings (Tensor) – FloatTensor containing weights for the EmbeddingBag. First dimension is being passed to EmbeddingBag as ‘num_embeddings’, second as ‘embedding_dim’.
freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embeddingbag.weight.requires_grad = False. Default: True
max_norm (float, optional) – See module initialization documentation. Default: None
norm_type (float, optional) – See module initialization documentation. Default 2.
scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default False.
mode (string, optional) – See module initialization documentation. Default: "mean"
sparse (bool, optional) – See module initialization documentation. Default: False.
include_last_offset (bool, optional) – See module initialization documentation. Default: False. Examples: >>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([[1, 0]])
>>> embeddingbag(input)
tensor([[ 2.5000, 3.7000, 4.6500]]) | torch.generated.torch.nn.embeddingbag#torch.nn.EmbeddingBag |