| doc_content | doc_id |
|---|---|
torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source]
Computes a partial inverse of MaxPool3d. See MaxUnpool3d for details. | torch.nn.functional#torch.nn.functional.max_unpool3d |
torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
Measures the element-wise mean squared error. See MSELoss for details. | torch.nn.functional#torch.nn.functional.mse_loss |
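A minimal usage sketch for mse_loss (the shapes and values below are illustrative, not from the original docs):

```python
import torch
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)

loss = F.mse_loss(input, target)                          # scalar: mean over all elements
loss_none = F.mse_loss(input, target, reduction='none')   # same shape as input

loss.backward()                                           # gradients flow to input
```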
torch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See MultiLabelMarginLoss for details. | torch.nn.functional#torch.nn.functional.multilabel_margin_loss |
torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None) → Tensor [source]
See MultiLabelSoftMarginLoss for details. | torch.nn.functional#torch.nn.functional.multilabel_soft_margin_loss |
torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') [source]
See MultiMarginLoss for details. | torch.nn.functional#torch.nn.functional.multi_margin_loss |
torch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source]
The negative log likelihood loss. See NLLLoss for details. Parameters
input – (N, C) where C = number of classes, or (N, C, H, W) in the case of 2D loss, or (N, C, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss.
target – (N) where each value is 0 ≤ targets[i] ≤ C−1, or (N, d_1, d_2, ..., d_K) with K ≥ 1 for K-dimensional loss.
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Example: >>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = F.nll_loss(F.log_softmax(input, dim=1), target)
>>> output.backward() | torch.nn.functional#torch.nn.functional.nll_loss |
torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12, out=None) [source]
Performs L_p normalization of inputs over the specified dimension. For a tensor input of size (n_0, ..., n_dim, ..., n_k), each n_dim-element vector v along dimension dim is transformed as v = v / max(‖v‖_p, ε).
With the default arguments it uses the Euclidean norm over vectors along dimension 1 for normalization. Parameters
input – input tensor of any shape
p (float) – the exponent value in the norm formulation. Default: 2
dim (int) – the dimension to reduce. Default: 1
eps (float) – small value to avoid division by zero. Default: 1e-12
out (Tensor, optional) – the output tensor. If out is used, this operation won’t be differentiable. | torch.nn.functional#torch.nn.functional.normalize |
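A short sketch of normalize on a concrete tensor (values chosen for illustration; the zero row shows the eps clamp in action):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[3.0, 4.0],
                  [0.0, 0.0]])

# L2-normalize each row; the all-zero row is divided by eps, so it stays zero.
y = F.normalize(x, p=2, dim=1)
# first row becomes [0.6, 0.8], a unit vector
```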
torch.nn.functional.one_hot(tensor, num_classes=-1) → LongTensor
Takes LongTensor with index values of shape (*) and returns a tensor of shape (*, num_classes) that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1. See also One-hot on Wikipedia. Parameters
tensor (LongTensor) – class values of any shape.
num_classes (int) – Total number of classes. If set to -1, the number of classes will be inferred as one greater than the largest class value in the input tensor. Returns
LongTensor that has one more dimension with 1 values at the index of last dimension indicated by the input, and 0 everywhere else. Examples >>> F.one_hot(torch.arange(0, 5) % 3)
tensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])
>>> F.one_hot(torch.arange(0, 5) % 3, num_classes=5)
tensor([[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0]])
>>> F.one_hot(torch.arange(0, 6).view(3,2) % 3)
tensor([[[1, 0, 0],
[0, 1, 0]],
[[0, 0, 1],
[1, 0, 0]],
[[0, 1, 0],
[0, 0, 1]]]) | torch.nn.functional#torch.nn.functional.one_hot |
torch.nn.functional.pad(input, pad, mode='constant', value=0)
Pads tensor. Padding size:
The padding sizes by which to pad some dimensions of input are described starting from the last dimension and moving forward. ⌊len(pad) / 2⌋ dimensions of input will be padded. For example, to pad only the last dimension of the input tensor, pad has the form (padding_left, padding_right); to pad the last 2 dimensions of the input tensor, use (padding_left, padding_right, padding_top, padding_bottom); to pad the last 3 dimensions, use (padding_left, padding_right, padding_top, padding_bottom, padding_front, padding_back). Padding mode:
See torch.nn.ConstantPad2d, torch.nn.ReflectionPad2d, and torch.nn.ReplicationPad2d for concrete examples on how each of the padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate padding is implemented for padding the last 3 dimensions of 5D input tensor, or the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Reflect padding is only implemented for padding the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background. Parameters
input (Tensor) – N-dimensional tensor
pad (tuple) – m-element tuple, where m/2 ≤ number of input dimensions and m is even.
mode – 'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'
value – fill value for 'constant' padding. Default: 0
Examples: >>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1) # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0) # effectively zero padding
>>> print(out.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.size())
torch.Size([3, 9, 7, 3]) | torch.nn.functional#torch.nn.functional.pad |
torch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-06, keepdim=False) [source]
See torch.nn.PairwiseDistance for details | torch.nn.functional#torch.nn.functional.pairwise_distance |
torch.nn.functional.pdist(input, p=2) → Tensor
Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of torch.norm(input[:, None] - input, dim=2, p=p). This function will be faster if the rows are contiguous. If input has shape N × M then the output will have shape N(N − 1)/2. This function is equivalent to scipy.spatial.distance.pdist(input, 'minkowski', p=p) if p ∈ (0, ∞). When p = 0 it is equivalent to scipy.spatial.distance.pdist(input, 'hamming') * M. When p = ∞, the closest scipy function is scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()). Parameters
input – input tensor of shape N × M.
p – p value for the p-norm distance to calculate between each vector pair, ∈ [0, ∞]. | torch.nn.functional#torch.nn.functional.pdist |
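A small sketch relating pdist to the full pairwise-distance matrix (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(5, 3)        # N = 5 row vectors of dimension M = 3

d = F.pdist(x, p=2)          # condensed form: N*(N-1)/2 = 10 distances

# Same values as the strict upper triangle of the full distance matrix.
full = torch.cdist(x, x, p=2)
```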
torch.nn.functional.pixel_shuffle(input, upscale_factor) → Tensor
Rearranges elements in a tensor of shape (*, C × r², H, W) to a tensor of shape (*, C, H × r, W × r), where r is the upscale_factor. See PixelShuffle for details. Parameters
input (Tensor) – the input tensor
upscale_factor (int) – factor to increase spatial resolution by Examples: >>> input = torch.randn(1, 9, 4, 4)
>>> output = torch.nn.functional.pixel_shuffle(input, 3)
>>> print(output.size())
torch.Size([1, 1, 12, 12]) | torch.nn.functional#torch.nn.functional.pixel_shuffle |
torch.nn.functional.pixel_unshuffle(input, downscale_factor) → Tensor
Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (*, C, H × r, W × r) to a tensor of shape (*, C × r², H, W), where r is the downscale_factor. See PixelUnshuffle for details. Parameters
input (Tensor) – the input tensor
downscale_factor (int) – factor to decrease spatial resolution by Examples: >>> input = torch.randn(1, 1, 12, 12)
>>> output = torch.nn.functional.pixel_unshuffle(input, 3)
>>> print(output.size())
torch.Size([1, 9, 4, 4]) | torch.nn.functional#torch.nn.functional.pixel_unshuffle |
torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') [source]
Poisson negative log likelihood loss. See PoissonNLLLoss for details. Parameters
input – expectation of underlying Poisson distribution.
target – random sample target ∼ Poisson(input).
log_input – if True the loss is computed as exp(input) − target * input, if False then the loss is input − target * log(input + eps). Default: True
full – whether to compute the full loss, i.e. to add the Stirling approximation term target * log(target) − target + 0.5 * log(2π * target). Default: False
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
eps (float, optional) – Small value to avoid evaluation of log(0) when log_input = False. Default: 1e-8
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' | torch.nn.functional#torch.nn.functional.poisson_nll_loss |
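A minimal sketch of poisson_nll_loss with the default log_input=True, checking against the formula above (tensors are illustrative):

```python
import torch
import torch.nn.functional as F

log_rate = torch.randn(4, requires_grad=True)          # network output, read as log(rate)
target = torch.poisson(torch.exp(log_rate.detach()))   # counts drawn from that rate

# With log_input=True the per-element loss is exp(input) - target * input.
loss = F.poisson_nll_loss(log_rate, target, log_input=True, reduction='mean')
loss.backward()
```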
torch.nn.functional.prelu(input, weight) → Tensor [source]
Applies element-wise the function PReLU(x) = max(0, x) + weight * min(0, x) where weight is a learnable parameter. See PReLU for more details. | torch.nn.functional#torch.nn.functional.prelu |
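A short sketch of prelu with a single shared slope (the values are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.0])
weight = torch.tensor([0.25])   # one learnable slope shared across all elements

# Negative inputs are scaled by 0.25; non-negative inputs pass through unchanged.
y = F.prelu(x, weight)
```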
torch.nn.functional.relu(input, inplace=False) → Tensor [source]
Applies the rectified linear unit function element-wise. See ReLU for more details. | torch.nn.functional#torch.nn.functional.relu |
torch.nn.functional.relu6(input, inplace=False) → Tensor [source]
Applies the element-wise function ReLU6(x) = min(max(0, x), 6). See ReLU6 for more details. | torch.nn.functional#torch.nn.functional.relu6 |
torch.nn.functional.relu_(input) → Tensor
In-place version of relu(). | torch.nn.functional#torch.nn.functional.relu_ |
torch.nn.functional.rrelu(input, lower=1./8, upper=1./3, training=False, inplace=False) → Tensor [source]
Randomized leaky ReLU. See RReLU for more details. | torch.nn.functional#torch.nn.functional.rrelu |
torch.nn.functional.rrelu_(input, lower=1./8, upper=1./3, training=False) → Tensor
In-place version of rrelu(). | torch.nn.functional#torch.nn.functional.rrelu_ |
torch.nn.functional.selu(input, inplace=False) → Tensor [source]
Applies element-wise, SELU(x) = scale * (max(0, x) + min(0, α * (exp(x) − 1))), with α = 1.6732632423543772848170429916717 and scale = 1.0507009873554804934193349852946. See SELU for more details. | torch.nn.functional#torch.nn.functional.selu |
torch.nn.functional.sigmoid(input) → Tensor [source]
Applies the element-wise function Sigmoid(x) = 1 / (1 + exp(−x)). See Sigmoid for more details. | torch.nn.functional#torch.nn.functional.sigmoid |
torch.nn.functional.silu(input, inplace=False) [source]
Applies the SiLU function, element-wise: silu(x) = x * σ(x), where σ(x) is the logistic sigmoid.
Note See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function where the SiLU was experimented with later. See SiLU for more details. | torch.nn.functional#torch.nn.functional.silu |
torch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0) [source]
Function that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. See SmoothL1Loss for details. | torch.nn.functional#torch.nn.functional.smooth_l1_loss |
torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmax function. Softmax is defined as: Softmax(x_i) = exp(x_i) / Σ_j exp(x_j) It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1. See Softmax for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Note This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log_softmax instead (it’s faster and has better numerical properties). | torch.nn.functional#torch.nn.functional.softmax |
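A minimal sketch contrasting softmax with the log_softmax recommended above (tensors are illustrative):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4)

probs = F.softmax(logits, dim=1)          # each row sums to 1
log_probs = F.log_softmax(logits, dim=1)  # preferred when feeding NLLLoss
```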
torch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmin function. Note that Softmin(x) = Softmax(−x). See softmax definition for the mathematical formula. See Softmin for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmin will be computed (so every slice along dim will sum to 1).
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. | torch.nn.functional#torch.nn.functional.softmin |
torch.nn.functional.softplus(input, beta=1, threshold=20) → Tensor
Applies element-wise the function Softplus(x) = (1/β) * log(1 + exp(β * x)). For numerical stability the implementation reverts to the linear function when input × β > threshold. See Softplus for more details. | torch.nn.functional#torch.nn.functional.softplus |
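A short sketch of softplus, including the linear regime used for numerical stability (the inputs are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.0, 2.0, 50.0])

y = F.softplus(x, beta=1, threshold=20)
# For x * beta > threshold the implementation returns x itself,
# so the last element stays exactly 50.0 instead of overflowing exp().
```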
torch.nn.functional.softshrink(input, lambd=0.5) → Tensor
Applies the soft shrinkage function element-wise. See Softshrink for more details. | torch.nn.functional#torch.nn.functional.softshrink |
torch.nn.functional.softsign(input) → Tensor [source]
Applies element-wise the function SoftSign(x) = x / (1 + |x|). See Softsign for more details. | torch.nn.functional#torch.nn.functional.softsign |
torch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See SoftMarginLoss for details. | torch.nn.functional#torch.nn.functional.soft_margin_loss |
torch.nn.functional.tanh(input) → Tensor [source]
Applies element-wise, Tanh(x) = tanh(x) = (exp(x) − exp(−x)) / (exp(x) + exp(−x)). See Tanh for more details. | torch.nn.functional#torch.nn.functional.tanh |
torch.nn.functional.tanhshrink(input) → Tensor [source]
Applies element-wise, Tanhshrink(x) = x − tanh(x). See Tanhshrink for more details. | torch.nn.functional#torch.nn.functional.tanhshrink |
torch.nn.functional.threshold(input, threshold, value, inplace=False)
Thresholds each element of the input Tensor. See Threshold for more details. | torch.nn.functional#torch.nn.functional.threshold |
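A minimal sketch of threshold (values are illustrative): elements greater than the threshold pass through, the rest are replaced with the given value:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([0.1, 0.5, 2.0])

# Keep x where x > 1.0, otherwise replace with 0.0.
y = F.threshold(x, 1.0, 0.0)
```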
torch.nn.functional.threshold_(input, threshold, value) → Tensor
In-place version of threshold(). | torch.nn.functional#torch.nn.functional.threshold_ |
torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean') [source]
See TripletMarginLoss for details | torch.nn.functional#torch.nn.functional.triplet_margin_loss |
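A minimal sketch of triplet_margin_loss on random embeddings (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

anchor = torch.randn(8, 16, requires_grad=True)
positive = torch.randn(8, 16)
negative = torch.randn(8, 16)

# Encourages d(anchor, positive) + margin < d(anchor, negative).
loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2)
loss.backward()
```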
torch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean') [source]
See TripletMarginWithDistanceLoss for details. | torch.nn.functional#torch.nn.functional.triplet_margin_with_distance_loss |
torch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1) [source]
Extracts sliding local blocks from a batched input tensor. Warning Currently, only 4-D input tensors (batched image-like tensors) are supported. Warning More than one element of the unfolded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensor, please clone it first. See torch.nn.Unfold for details | torch.nn.functional#torch.nn.functional.unfold |
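A short sketch of unfold on a 4-D input (shapes are illustrative): each output column is one flattened local block of C * kernel_h * kernel_w values:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 4, 4)        # (N, C, H, W)

# 3x3 blocks: 2*3*3 = 18 values per block, (4-3+1)*(4-3+1) = 4 block positions.
blocks = F.unfold(x, kernel_size=3)
```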
torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source]
Upsamples the input to either the given size or the given scale_factor Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent to nn.functional.interpolate(...). Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. The algorithm used for upsampling is determined by mode. Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for upsampling are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only) Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (string) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear'. Default: 'nearest'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False
Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. | torch.nn.functional#torch.nn.functional.upsample |
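Since upsample is deprecated, a minimal sketch using the recommended interpolate replacement (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).view(1, 1, 4, 4)   # (N, C, H, W)

# Double the spatial size with nearest-neighbour upsampling.
y = F.interpolate(x, scale_factor=2, mode='nearest')

# Or resize to an explicit output size with bilinear interpolation.
z = F.interpolate(x, size=(6, 6), mode='bilinear', align_corners=False)
```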
torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None) [source]
Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True). Expected inputs are spatial (4 dimensional). Use upsample_trilinear for volumetric (5 dimensional) inputs. Parameters
input (Tensor) – input
size (int or Tuple[int, int]) – output spatial size.
scale_factor (int or Tuple[int, int]) – multiplier for spatial size Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. | torch.nn.functional#torch.nn.functional.upsample_bilinear |
torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None) [source]
Upsamples the input, using nearest neighbours’ pixel values. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent to nn.functional.interpolate(..., mode='nearest'). Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional). Parameters
input (Tensor) – input
size (int or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (int) – multiplier for spatial size. Has to be an integer. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. | torch.nn.functional#torch.nn.functional.upsample_nearest |
class torch.nn.GaussianNLLLoss(*, full=False, eps=1e-06, reduction='mean') [source]
Gaussian negative log likelihood loss. The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the neural network. For a D-dimensional target tensor modelled as having heteroscedastic Gaussian distributions with a D-dimensional tensor of expectations input and a D-dimensional tensor of positive variances var, the loss is: loss = ½ Σ_{i=1}^{D} ( log(max(var[i], eps)) + (input[i] − target[i])² / max(var[i], eps) ) + const.
where eps is used for stability. By default, the constant term of the loss function is omitted unless full is True. If var is a scalar (implying target tensor has homoscedastic Gaussian distributions) it is broadcasted to be the same size as the input. Parameters
full (bool, optional) – include the constant term in the loss calculation. Default: False.
eps (float, optional) – value used to clamp var (see note below), for stability. Default: 1e-6.
reduction (string, optional) – specifies the reduction to apply to the output:'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output is the average of all batch member losses, 'sum': the output is the sum of all batch member losses. Default: 'mean'. Shape:
Input: (N, *) where * means any number of additional dimensions. Target: (N, *), same shape as the input. Var: (N, 1) or (N, *), same shape as the input. Output: scalar if reduction is 'mean' (default) or 'sum'. If reduction is 'none', then (N)
Examples: >>> loss = nn.GaussianNLLLoss()
>>> input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> var = torch.ones(5, 2, requires_grad=True) #heteroscedastic
>>> output = loss(input, target, var)
>>> output.backward()
>>> loss = nn.GaussianNLLLoss()
>>> input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> var = torch.ones(5, 1, requires_grad=True) #homoscedastic
>>> output = loss(input, target, var)
>>> output.backward()
Note The clamping of var is ignored with respect to autograd, and so the gradients are unaffected by it. Reference:
Nix, D. A. and Weigend, A. S., “Estimating the mean and variance of the target probability distribution”, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138. | torch.generated.torch.nn.gaussiannllloss#torch.nn.GaussianNLLLoss |
class torch.nn.GELU [source]
Applies the Gaussian Error Linear Units function: GELU(x) = x * Φ(x)
where Φ(x) is the Cumulative Distribution Function of the Gaussian distribution. Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.GELU()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.gelu#torch.nn.GELU |
class torch.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True) [source]
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization: y = (x − E[x]) / √(Var[x] + ε) * γ + β
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. The mean and standard-deviation are calculated separately over each group. γ and β are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). This layer uses statistics computed from input data in both training and evaluation modes. Parameters
num_groups (int) – number of groups to separate the channels into
num_channels (int) – number of channels expected in input
eps – a value added to the denominator for numerical stability. Default: 1e-5
affine – a boolean value that when set to True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True. Shape:
Input: (N, C, *) where C = num_channels
Output: (N, C, *) (same shape as input) Examples: >>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input) | torch.generated.torch.nn.groupnorm#torch.nn.GroupNorm |
class torch.nn.GRU(*args, **kwargs) [source]
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. For each element in the input sequence, each layer computes the following function:
r_t = σ(W_ir x_t + b_ir + W_hr h_{t−1} + b_hr)
z_t = σ(W_iz x_t + b_iz + W_hz h_{t−1} + b_hz)
n_t = tanh(W_in x_t + b_in + r_t * (W_hn h_{t−1} + b_hn))
h_t = (1 − z_t) * n_t + z_t * h_{t−1}
where h_t is the hidden state at time t, x_t is the input at time t, h_{t−1} is the hidden state of the layer at time t−1 or the initial hidden state at time 0, and r_t, z_t, n_t are the reset, update, and new gates, respectively. σ is the sigmoid function, and * is the Hadamard product. In a multilayer GRU, the input x_t^{(l)} of the l-th layer (l ≥ 2) is the hidden state h_t^{(l−1)} of the previous layer multiplied by dropout δ_t^{(l−1)}, where each δ_t^{(l−1)} is a Bernoulli random variable which is 0 with probability dropout. Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional – If True, becomes a bidirectional GRU. Default: False
Inputs: input, h_0
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details.
h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1. Outputs: output, h_n
output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size). Shape:
Input1: (L,N,Hin)(L, N, H_{in}) tensor containing input features, where Hin=input_sizeH_{in}=\text{input\_size} and L is the sequence length. Input2: (S,N,Hout)(S, N, H_{out}) tensor containing the initial hidden state for each element in the batch, where S=num_layers∗num_directionsS=\text{num\_layers} * \text{num\_directions} and Hout=hidden_sizeH_{out}=\text{hidden\_size} . Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1. Output1: (L,N,Hall)(L, N, H_{all}) where Hall=num_directions∗hidden_sizeH_{all}=\text{num\_directions} * \text{hidden\_size}
Output2: (S,N,Hout)(S, N, H_{out}) tensor containing the next hidden state for each element in the batch Variables
~GRU.weight_ih_l[k] – the learnable input-hidden weights of the kth\text{k}^{th} layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size)
~GRU.weight_hh_l[k] – the learnable hidden-hidden weights of the kth\text{k}^{th} layer (W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)
~GRU.bias_ih_l[k] – the learnable input-hidden bias of the kth\text{k}^{th} layer (b_ir|b_iz|b_in), of shape (3*hidden_size)
~GRU.bias_hh_l[k] – the learnable hidden-hidden bias of the kth\text{k}^{th} layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)
Note All the weights and biases are initialized from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=1hidden_sizek = \frac{1}{\text{hidden\_size}} Note If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance. Examples: >>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0) | torch.generated.torch.nn.gru#torch.nn.GRU |
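As a sketch of the shape bookkeeping above (the layer sizes here are illustrative, not prescribed), a 2-layer bidirectional GRU produces outputs whose last dimension is num_directions * hidden_size:

```python
import torch
import torch.nn as nn

# 2-layer bidirectional GRU: input_size=10, hidden_size=20
rnn = nn.GRU(10, 20, num_layers=2, bidirectional=True)
x = torch.randn(5, 3, 10)        # (seq_len, batch, input_size)
h0 = torch.randn(2 * 2, 3, 20)   # (num_layers * num_directions, batch, hidden_size)
output, hn = rnn(x, h0)
# output: (seq_len, batch, num_directions * hidden_size) = (5, 3, 40)
# hn:     (num_layers * num_directions, batch, hidden_size) = (4, 3, 20)
```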
class torch.nn.GRUCell(input_size, hidden_size, bias=True) [source]
A gated recurrent unit (GRU) cell r=σ(Wirx+bir+Whrh+bhr)z=σ(Wizx+biz+Whzh+bhz)n=tanh(Winx+bin+r∗(Whnh+bhn))h′=(1−z)∗n+z∗h\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}
where σ\sigma is the sigmoid function, and ∗* is the Hadamard product. Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
Inputs: input, hidden
input of shape (batch, input_size): tensor containing input features
hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. Outputs: h'
h' of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch Shape:
Input1: (N,Hin)(N, H_{in}) tensor containing input features where HinH_{in} = input_size
Input2: (N,Hout)(N, H_{out}) tensor containing the initial hidden state for each element in the batch where HoutH_{out} = hidden_size Defaults to zero if not provided. Output: (N,Hout)(N, H_{out}) tensor containing the next hidden state for each element in the batch Variables
~GRUCell.weight_ih – the learnable input-hidden weights, of shape (3*hidden_size, input_size)
~GRUCell.weight_hh – the learnable hidden-hidden weights, of shape (3*hidden_size, hidden_size)
~GRUCell.bias_ih – the learnable input-hidden bias, of shape (3*hidden_size)
~GRUCell.bias_hh – the learnable hidden-hidden bias, of shape (3*hidden_size)
Note All the weights and biases are initialized from U(−k,k)\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k=1hidden_sizek = \frac{1}{\text{hidden\_size}} Examples: >>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx = rnn(input[i], hx)
output.append(hx) | torch.generated.torch.nn.grucell#torch.nn.GRUCell |
class torch.nn.Hardshrink(lambd=0.5) [source]
Applies the hard shrinkage function element-wise: HardShrink(x)={x, if x>λx, if x<−λ0, otherwise \text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}
Parameters
lambd – the λ\lambda value for the Hardshrink formulation. Default: 0.5 Shape:
Input: (N,∗)(N, *) where * means any number of additional dimensions. Output: (N,∗)(N, *) , same shape as the input. Examples: >>> m = nn.Hardshrink()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.hardshrink#torch.nn.Hardshrink |
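Concretely, entries whose magnitude does not exceed λ\lambda are zeroed and all others pass through unchanged; the input values below are arbitrary:

```python
import torch
import torch.nn as nn

m = nn.Hardshrink(lambd=0.5)
x = torch.tensor([-1.0, -0.25, 0.0, 0.3, 0.8])
y = m(x)  # entries with |x| <= 0.5 become 0; -1.0 and 0.8 pass through
```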
class torch.nn.Hardsigmoid(inplace=False) [source]
Applies the element-wise function: Hardsigmoid(x)={0if x≤−3,1if x≥+3,x/6+1/2otherwise\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ 1 & \text{if~} x \ge +3, \\ x / 6 + 1 / 2 & \text{otherwise} \end{cases}
Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: (N,∗)(N, *) where * means any number of additional dimensions. Output: (N,∗)(N, *) , same shape as the input. Examples: >>> m = nn.Hardsigmoid()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.hardsigmoid#torch.nn.Hardsigmoid |
class torch.nn.Hardswish(inplace=False) [source]
Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3. Hardswish(x)={0if x≤−3,xif x≥+3,x⋅(x+3)/6otherwise\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 & \text{otherwise} \end{cases}
Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: (N,∗)(N, *) where * means any number of additional dimensions. Output: (N,∗)(N, *) , same shape as the input. Examples: >>> m = nn.Hardswish()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.hardswish#torch.nn.Hardswish |
class torch.nn.Hardtanh(min_val=-1.0, max_val=1.0, inplace=False, min_value=None, max_value=None) [source]
Applies the HardTanh function element-wise. HardTanh is defined as: HardTanh(x)={1 if x>1−1 if x<−1x otherwise \text{HardTanh}(x) = \begin{cases} 1 & \text{ if } x > 1 \\ -1 & \text{ if } x < -1 \\ x & \text{ otherwise } \\ \end{cases}
The range of the linear region [−1,1][-1, 1] can be adjusted using min_val and max_val. Parameters
min_val – minimum value of the linear region range. Default: -1
max_val – maximum value of the linear region range. Default: 1
inplace – can optionally do the operation in-place. Default: False
Keyword arguments min_value and max_value have been deprecated in favor of min_val and max_val. Shape:
Input: (N,∗)(N, *) where * means any number of additional dimensions. Output: (N,∗)(N, *) , same shape as the input. Examples: >>> m = nn.Hardtanh(-2, 2)
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.hardtanh#torch.nn.Hardtanh |
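The clamping behaviour with a widened linear region can be checked directly; the input values are chosen only for illustration:

```python
import torch
import torch.nn as nn

m = nn.Hardtanh(min_val=-2.0, max_val=2.0)
x = torch.tensor([-3.0, -1.0, 0.5, 4.0])
y = m(x)  # out-of-range values are clamped to [-2.0, 2.0]; in-range values pass through
```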
class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean') [source]
Measures the loss given an input tensor xx and a labels tensor yy (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as xx , and is typically used for learning nonlinear embeddings or semi-supervised learning. The loss function for the nn -th sample in the mini-batch is ln={xn,ifyn=1,max{0,Δ−xn},ifyn=−1,l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}
and the total loss function is ℓ(x,y)={mean(L),if reduction=‘mean’;sum(L),if reduction=‘sum’.\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}
where L={l1,…,lN}⊤L = \{l_1,\dots,l_N\}^\top . Parameters
margin (float, optional) – Has a default value of 1.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Shape:
Input: (∗)(*) where ∗* means any number of dimensions. The sum operation operates over all the elements. Target: (∗)(*) , same shape as the input. Output: scalar. If reduction is 'none', then same shape as the input
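This entry has no usage example, so here is a minimal sketch; the distances and labels below are made up:

```python
import torch
import torch.nn as nn

loss_fn = nn.HingeEmbeddingLoss(margin=1.0)   # reduction='mean' by default
x = torch.tensor([0.3, 1.5, 0.2])             # e.g. pairwise distances
y = torch.tensor([1.0, -1.0, -1.0])           # 1 = similar, -1 = dissimilar
loss = loss_fn(x, y)
# per-sample losses: [0.3, max(0, 1 - 1.5), max(0, 1 - 0.2)] = [0.3, 0.0, 0.8]
```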
class torch.nn.Identity(*args, **kwargs) [source]
A placeholder identity operator that is argument-insensitive. Parameters
args – any argument (unused)
kwargs – any keyword argument (unused) Examples: >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 20]) | torch.generated.torch.nn.identity#torch.nn.Identity |
torch.nn.init
torch.nn.init.calculate_gain(nonlinearity, param=None) [source]
Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity gain
Linear / Identity 11
Conv{1,2,3}D 11
Sigmoid 11
Tanh 53\frac{5}{3}
ReLU 2\sqrt{2}
Leaky Relu 21+negative_slope2\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}
SELU 34\frac{3}{4} Parameters
nonlinearity – the non-linear function (nn.functional name)
param – optional parameter for the non-linear function Examples >>> gain = nn.init.calculate_gain('leaky_relu', 0.2) # leaky_relu with negative_slope=0.2
torch.nn.init.uniform_(tensor, a=0.0, b=1.0) [source]
Fills the input Tensor with values drawn from the uniform distribution U(a,b)\mathcal{U}(a, b) . Parameters
tensor – an n-dimensional torch.Tensor
a – the lower bound of the uniform distribution
b – the upper bound of the uniform distribution Examples >>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w)
torch.nn.init.normal_(tensor, mean=0.0, std=1.0) [source]
Fills the input Tensor with values drawn from the normal distribution N(mean,std2)\mathcal{N}(\text{mean}, \text{std}^2) . Parameters
tensor – an n-dimensional torch.Tensor
mean – the mean of the normal distribution
std – the standard deviation of the normal distribution Examples >>> w = torch.empty(3, 5)
>>> nn.init.normal_(w)
torch.nn.init.constant_(tensor, val) [source]
Fills the input Tensor with the value val\text{val} . Parameters
tensor – an n-dimensional torch.Tensor
val – the value to fill the tensor with Examples >>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3)
torch.nn.init.ones_(tensor) [source]
Fills the input Tensor with the scalar value 1. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.ones_(w)
torch.nn.init.zeros_(tensor) [source]
Fills the input Tensor with the scalar value 0. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.zeros_(w)
torch.nn.init.eye_(tensor) [source]
Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible. Parameters
tensor – a 2-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.eye_(w)
torch.nn.init.dirac_(tensor, groups=1) [source]
Fills the {3, 4, 5}-dimensional input Tensor with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible. In case of groups>1, each group of channels preserves identity. Parameters
tensor – a {3, 4, 5}-dimensional torch.Tensor
groups (optional) – number of groups in the conv layer (default: 1) Examples >>> w = torch.empty(3, 16, 5, 5)
>>> nn.init.dirac_(w)
>>> w = torch.empty(3, 24, 5, 5)
>>> nn.init.dirac_(w, 3)
torch.nn.init.xavier_uniform_(tensor, gain=1.0) [source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from U(−a,a)\mathcal{U}(-a, a) where a=gain×6fan_in+fan_outa = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}
Also known as Glorot initialization. Parameters
tensor – an n-dimensional torch.Tensor
gain – an optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
torch.nn.init.xavier_normal_(tensor, gain=1.0) [source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from N(0,std2)\mathcal{N}(0, \text{std}^2) where std=gain×2fan_in+fan_out\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}
Also known as Glorot initialization. Parameters
tensor – an n-dimensional torch.Tensor
gain – an optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w)
torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu') [source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from U(−bound,bound)\mathcal{U}(-\text{bound}, \text{bound}) where bound=gain×3fan_mode\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan\_mode}}}
Also known as He initialization. Parameters
tensor – an n-dimensional torch.Tensor
a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')
mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default). Examples >>> w = torch.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu') [source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from N(0,std2)\mathcal{N}(0, \text{std}^2) where std=gainfan_mode\text{std} = \frac{\text{gain}}{\sqrt{\text{fan\_mode}}}
Also known as He initialization. Parameters
tensor – an n-dimensional torch.Tensor
a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')
mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default). Examples >>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
torch.nn.init.orthogonal_(tensor, gain=1) [source]
Fills the input Tensor with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened. Parameters
tensor – an n-dimensional torch.Tensor, where n≥2n \geq 2
gain – optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w)
torch.nn.init.sparse_(tensor, sparsity, std=0.01) [source]
Fills the 2D input Tensor as a sparse matrix, where the non-zero elements will be drawn from the normal distribution N(0,0.01)\mathcal{N}(0, 0.01) , as described in Deep learning via Hessian-free optimization - Martens, J. (2010). Parameters
tensor – an n-dimensional torch.Tensor
sparsity – The fraction of elements in each column to be set to zero
std – the standard deviation of the normal distribution used to generate the non-zero values Examples >>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1) | torch.nn.init |
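A common pattern is to combine these initializers with Module.apply, which visits every submodule. The helper below is a hypothetical sketch; the layer sizes and the choice of Xavier initialization are illustrative, not prescribed:

```python
import torch
import torch.nn as nn

def init_weights(m):
    # hypothetical helper: Xavier-uniform weights, zero biases for every Linear layer
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('relu'))
        nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
net.apply(init_weights)   # applies init_weights recursively to all submodules
```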
torch.nn.init.calculate_gain(nonlinearity, param=None) [source]
Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity gain
Linear / Identity 11
Conv{1,2,3}D 11
Sigmoid 11
Tanh 53\frac{5}{3}
ReLU 2\sqrt{2}
Leaky Relu 21+negative_slope2\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}
SELU 34\frac{3}{4} Parameters
nonlinearity – the non-linear function (nn.functional name)
param – optional parameter for the non-linear function Examples >>> gain = nn.init.calculate_gain('leaky_relu', 0.2) # leaky_relu with negative_slope=0.2 | torch.nn.init#torch.nn.init.calculate_gain |
torch.nn.init.constant_(tensor, val) [source]
Fills the input Tensor with the value val\text{val} . Parameters
tensor – an n-dimensional torch.Tensor
val – the value to fill the tensor with Examples >>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3) | torch.nn.init#torch.nn.init.constant_ |
torch.nn.init.dirac_(tensor, groups=1) [source]
Fills the {3, 4, 5}-dimensional input Tensor with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible. In case of groups>1, each group of channels preserves identity. Parameters
tensor – a {3, 4, 5}-dimensional torch.Tensor
groups (optional) – number of groups in the conv layer (default: 1) Examples >>> w = torch.empty(3, 16, 5, 5)
>>> nn.init.dirac_(w)
>>> w = torch.empty(3, 24, 5, 5)
>>> nn.init.dirac_(w, 3) | torch.nn.init#torch.nn.init.dirac_ |
torch.nn.init.eye_(tensor) [source]
Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible. Parameters
tensor – a 2-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.eye_(w) | torch.nn.init#torch.nn.init.eye_ |
torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu') [source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from N(0,std2)\mathcal{N}(0, \text{std}^2) where std=gainfan_mode\text{std} = \frac{\text{gain}}{\sqrt{\text{fan\_mode}}}
Also known as He initialization. Parameters
tensor – an n-dimensional torch.Tensor
a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')
mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default). Examples >>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu') | torch.nn.init#torch.nn.init.kaiming_normal_ |
torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu') [source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from U(−bound,bound)\mathcal{U}(-\text{bound}, \text{bound}) where bound=gain×3fan_mode\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan\_mode}}}
Also known as He initialization. Parameters
tensor – an n-dimensional torch.Tensor
a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')
mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default). Examples >>> w = torch.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu') | torch.nn.init#torch.nn.init.kaiming_uniform_ |
torch.nn.init.normal_(tensor, mean=0.0, std=1.0) [source]
Fills the input Tensor with values drawn from the normal distribution N(mean,std2)\mathcal{N}(\text{mean}, \text{std}^2) . Parameters
tensor – an n-dimensional torch.Tensor
mean – the mean of the normal distribution
std – the standard deviation of the normal distribution Examples >>> w = torch.empty(3, 5)
>>> nn.init.normal_(w) | torch.nn.init#torch.nn.init.normal_ |
torch.nn.init.ones_(tensor) [source]
Fills the input Tensor with the scalar value 1. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.ones_(w) | torch.nn.init#torch.nn.init.ones_ |
torch.nn.init.orthogonal_(tensor, gain=1) [source]
Fills the input Tensor with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened. Parameters
tensor – an n-dimensional torch.Tensor, where n≥2n \geq 2
gain – optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w) | torch.nn.init#torch.nn.init.orthogonal_ |
torch.nn.init.sparse_(tensor, sparsity, std=0.01) [source]
Fills the 2D input Tensor as a sparse matrix, where the non-zero elements will be drawn from the normal distribution N(0,0.01)\mathcal{N}(0, 0.01) , as described in Deep learning via Hessian-free optimization - Martens, J. (2010). Parameters
tensor – an n-dimensional torch.Tensor
sparsity – The fraction of elements in each column to be set to zero
std – the standard deviation of the normal distribution used to generate the non-zero values Examples >>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1) | torch.nn.init#torch.nn.init.sparse_ |
torch.nn.init.uniform_(tensor, a=0.0, b=1.0) [source]
Fills the input Tensor with values drawn from the uniform distribution U(a,b)\mathcal{U}(a, b) . Parameters
tensor – an n-dimensional torch.Tensor
a – the lower bound of the uniform distribution
b – the upper bound of the uniform distribution Examples >>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w) | torch.nn.init#torch.nn.init.uniform_ |
torch.nn.init.xavier_normal_(tensor, gain=1.0) [source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from N(0,std2)\mathcal{N}(0, \text{std}^2) where std=gain×2fan_in+fan_out\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}
Also known as Glorot initialization. Parameters
tensor – an n-dimensional torch.Tensor
gain – an optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w) | torch.nn.init#torch.nn.init.xavier_normal_ |
torch.nn.init.xavier_uniform_(tensor, gain=1.0) [source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from U(−a,a)\mathcal{U}(-a, a) where a=gain×6fan_in+fan_outa = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}
Also known as Glorot initialization. Parameters
tensor – an n-dimensional torch.Tensor
gain – an optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu')) | torch.nn.init#torch.nn.init.xavier_uniform_ |
torch.nn.init.zeros_(tensor) [source]
Fills the input Tensor with the scalar value 0. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.zeros_(w) | torch.nn.init#torch.nn.init.zeros_ |
class torch.nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. γ\gamma and β\beta are learnable parameter vectors of size C (where C is the input size) if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). By default, this layer uses instance statistics computed from input data in both training and evaluation modes. If track_running_stats is set to True, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. Note This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where x^\hat{x} is the estimated statistic and xtx_t is the new observed value. Note InstanceNorm1d and LayerNorm are very similar, but have some subtle differences. InstanceNorm1d is applied on each channel of channeled data like multidimensional time series, but LayerNorm is usually applied to the entire sample and often in NLP tasks. Additionally, LayerNorm applies an elementwise affine transform, while InstanceNorm1d usually does not apply an affine transform. Parameters
num_features – CC from an expected input of size (N,C,L)(N, C, L) or LL from input of size (N,L)(N, L)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: False.
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: False
Shape:
Input: (N,C,L)(N, C, L)
Output: (N,C,L)(N, C, L) (same shape as input) Examples: >>> # Without Learnable Parameters
>>> m = nn.InstanceNorm1d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm1d(100, affine=True)
>>> input = torch.randn(20, 100, 40)
>>> output = m(input) | torch.generated.torch.nn.instancenorm1d#torch.nn.InstanceNorm1d |
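The per-instance, per-channel normalization can be verified numerically; the sizes below are arbitrary:

```python
import torch
import torch.nn as nn

m = nn.InstanceNorm1d(4)
x = torch.randn(2, 4, 16)    # (N, C, L)
y = m(x)
# each (sample, channel) slice of length L is normalized on its own:
# mean ~ 0 and (biased) variance ~ 1, up to eps
```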
class torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. γ\gamma and β\beta are learnable parameter vectors of size C (where C is the input size) if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). By default, this layer uses instance statistics computed from input data in both training and evaluation modes. If track_running_stats is set to True, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. Note This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where x^\hat{x} is the estimated statistic and xtx_t is the new observed value. Note InstanceNorm2d and LayerNorm are very similar, but have some subtle differences. InstanceNorm2d is applied on each channel of channeled data like RGB images, but LayerNorm is usually applied to the entire sample and often in NLP tasks. Additionally, LayerNorm applies an elementwise affine transform, while InstanceNorm2d usually does not apply an affine transform. Parameters
num_features – CC from an expected input of size (N,C,H,W)(N, C, H, W)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: False.
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: False
Shape:
Input: (N,C,H,W)(N, C, H, W)
Output: (N,C,H,W)(N, C, H, W) (same shape as input) Examples: >>> # Without Learnable Parameters
>>> m = nn.InstanceNorm2d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm2d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input) | torch.generated.torch.nn.instancenorm2d#torch.nn.InstanceNorm2d |
class torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. y=x−E[x]Var[x]+ϵ∗γ+βy = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. γ\gamma and β\beta are learnable parameter vectors of size C (where C is the input size) if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). By default, this layer uses instance statistics computed from input data in both training and evaluation modes. If track_running_stats is set to True, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. Note This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is x^new=(1−momentum)×x^+momentum×xt\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t , where x^\hat{x} is the estimated statistic and xtx_t is the new observed value. Note InstanceNorm3d and LayerNorm are very similar, but have some subtle differences. InstanceNorm3d is applied on each channel of channeled data like 3D models with RGB color, but LayerNorm is usually applied to the entire sample and often in NLP tasks. Additionally, LayerNorm applies an elementwise affine transform, while InstanceNorm3d usually does not apply an affine transform. Parameters
num_features – C from an expected input of size (N, C, D, H, W)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: False.
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: False
Shape:
Input: (N, C, D, H, W)
Output: (N, C, D, H, W) (same shape as input) Examples: >>> # Without Learnable Parameters
>>> m = nn.InstanceNorm3d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm3d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input) | torch.generated.torch.nn.instancenorm3d#torch.nn.InstanceNorm3d |
class torch.nn.intrinsic.ConvBn1d(conv, bn) [source]
This is a sequential container which calls the Conv 1d and Batch Norm 1d modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBn1d |
class torch.nn.intrinsic.ConvBn2d(conv, bn) [source]
This is a sequential container which calls the Conv 2d and Batch Norm 2d modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBn2d |
class torch.nn.intrinsic.ConvBnReLU1d(conv, bn, relu) [source]
This is a sequential container which calls the Conv 1d, Batch Norm 1d, and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBnReLU1d |
class torch.nn.intrinsic.ConvBnReLU2d(conv, bn, relu) [source]
This is a sequential container which calls the Conv 2d, Batch Norm 2d, and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBnReLU2d |
class torch.nn.intrinsic.ConvReLU1d(conv, relu) [source]
This is a sequential container which calls the Conv1d and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvReLU1d |
class torch.nn.intrinsic.ConvReLU2d(conv, relu) [source]
This is a sequential container which calls the Conv2d and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvReLU2d |
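The fused containers above are normally produced by the quantization workflow rather than constructed by hand. A minimal sketch, assuming a small float model with submodules named conv, bn, and relu (names chosen for this example), of fusing a Conv2d/BatchNorm2d/ReLU chain with torch.quantization.fuse_modules:

```python
import torch
import torch.nn as nn

# A small float model whose conv -> bn -> relu chain is eligible for fusion.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = M().eval()  # conv+bn fusion requires eval mode
# Fold bn into conv and fuse with relu; returns a new model by default
# (inplace=False), so the original float model m is left unchanged.
fused = torch.quantization.fuse_modules(m, [["conv", "bn", "relu"]])
x = torch.randn(1, 3, 32, 32)
out = fused(x)  # numerically matches the unfused float model
```

The fused model is a drop-in replacement for the float model and is the form expected by the subsequent prepare/convert quantization steps.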
class torch.nn.intrinsic.qat.ConvBn2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None) [source]
A ConvBn2d module is a module fused from Conv2d and BatchNorm2d, attached with FakeQuantize modules for weight, used in quantization aware training. We combined the interface of torch.nn.Conv2d and torch.nn.BatchNorm2d. Similar to torch.nn.Conv2d, with FakeQuantize modules initialized to default. Variables
~ConvBn2d.freeze_bn –
~ConvBn2d.weight_fake_quant – fake quant module for weight | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.ConvBn2d |
class torch.nn.intrinsic.qat.ConvBnReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None) [source]
A ConvBnReLU2d module is a module fused from Conv2d, BatchNorm2d and ReLU, attached with FakeQuantize modules for weight, used in quantization aware training. We combined the interface of torch.nn.Conv2d and torch.nn.BatchNorm2d and torch.nn.ReLU. Similar to torch.nn.Conv2d, with FakeQuantize modules initialized to default. Variables
~ConvBnReLU2d.weight_fake_quant – fake quant module for weight | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.ConvBnReLU2d |
class torch.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None) [source]
A ConvReLU2d module is a fused module of Conv2d and ReLU, attached with FakeQuantize modules for weight for quantization aware training. We combined the interface of Conv2d and ReLU. Variables
~ConvReLU2d.weight_fake_quant – fake quant module for weight | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.ConvReLU2d |
class torch.nn.intrinsic.qat.LinearReLU(in_features, out_features, bias=True, qconfig=None) [source]
A LinearReLU module fused from Linear and ReLU modules, attached with FakeQuantize modules for weight, used in quantization aware training. We adopt the same interface as torch.nn.Linear. Similar to torch.nn.intrinsic.LinearReLU, with FakeQuantize modules initialized to default. Variables
~LinearReLU.weight – fake quant module for weight Examples: >>> m = nn.intrinsic.qat.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30]) | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.LinearReLU |
class torch.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A ConvReLU2d module is a fused module of Conv2d and ReLU. We adopt the same interface as torch.nn.quantized.Conv2d. Variables
as torch.nn.quantized.Conv2d (Same) – | torch.nn.intrinsic.quantized#torch.nn.intrinsic.quantized.ConvReLU2d |
class torch.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A ConvReLU3d module is a fused module of Conv3d and ReLU. We adopt the same interface as torch.nn.quantized.Conv3d. Attributes: Same as torch.nn.quantized.Conv3d | torch.nn.intrinsic.quantized#torch.nn.intrinsic.quantized.ConvReLU3d
class torch.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8) [source]
A LinearReLU module fused from Linear and ReLU modules. We adopt the same interface as torch.nn.quantized.Linear. Variables
as torch.nn.quantized.Linear (Same) – Examples: >>> m = nn.intrinsic.quantized.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30]) | torch.nn.intrinsic.quantized#torch.nn.intrinsic.quantized.LinearReLU |
class torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False) [source]
The Kullback-Leibler divergence loss. Kullback-Leibler divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions. As with NLLLoss, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are interpreted as probabilities by default, but could be considered as log-probabilities with log_target set to True. This criterion expects a target Tensor of the same size as the input Tensor. The unreduced (i.e. with reduction set to 'none') loss can be described as: l(x,y) = L = \{ l_1,\dots,l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right)
where the index N spans all dimensions of input and L has the same shape as input. If reduction is not 'none' (default 'mean'), then: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
In default reduction mode 'mean', the losses are averaged for each minibatch over observations as well as over dimensions. 'batchmean' mode gives the correct KL divergence where losses are averaged over batch dimension only. 'mean' mode’s behavior will be changed to the same as 'batchmean' in the next major release. Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by batchsize. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
log_target (bool, optional) – Specifies whether target is passed in the log space. Default: False
Note size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Note reduction = 'mean' does not return the true KL divergence value; please use reduction = 'batchmean', which aligns with the mathematical definition of KL divergence. In the next major release, 'mean' will be changed to behave the same as 'batchmean'. Shape:
Input: (N, *) where * means any number of additional dimensions Target: (N, *), same shape as the input Output: scalar by default. If reduction is 'none', then (N, *), the same shape as the input | torch.generated.torch.nn.kldivloss#torch.nn.KLDivLoss
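A short usage sketch of KLDivLoss: the input must contain log-probabilities (e.g. produced by log_softmax), the target contains probabilities by default, and with log_target=True the same target may instead be passed in log space; 'batchmean' is the reduction that matches the mathematical definition:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

kl = nn.KLDivLoss(reduction='batchmean')

# input holds log-probabilities; target holds probabilities by default
input = F.log_softmax(torch.randn(3, 5), dim=1)
target = F.softmax(torch.randn(3, 5), dim=1)
loss = kl(input, target)

# the same loss with the target given in log space
kl_log = nn.KLDivLoss(reduction='batchmean', log_target=True)
loss_log = kl_log(input, target.log())
```

Both calls yield the same scalar, since y * (log y - x) and exp(t) * (t - x) agree when t = log y.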
class torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y. The unreduced (i.e. with reduction set to 'none') loss can be described as: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,
where N is the batch size. If reduction is not 'none' (default 'mean'), then: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
x and y are tensors of arbitrary shapes with a total of n elements each. The sum operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets reduction = 'sum'. Supports real-valued and complex-valued inputs. Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Shape:
Input: (N, *) where * means any number of additional dimensions Target: (N, *), same shape as the input Output: scalar. If reduction is 'none', then (N, *), same shape as the input Examples: >>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward() | torch.generated.torch.nn.l1loss#torch.nn.L1Loss |
class torch.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True) [source]
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization. y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated separately over the last certain number of dimensions, which have to be of the shape specified by normalized_shape. \gamma and \beta are learnable affine transform parameters of normalized_shape if elementwise_affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). Note Unlike Batch Normalization and Instance Normalization, which apply a scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine. This layer uses statistics computed from input data in both training and evaluation modes. Parameters
normalized_shape (int or list or torch.Size) –
input shape from an expected input of size [* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]]
If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size.
eps – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine – a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True. Shape:
Input: (N, *)
Output: (N, *) (same shape as input) Examples: >>> input = torch.randn(20, 5, 10, 10)
>>> # With Learnable Parameters
>>> m = nn.LayerNorm(input.size()[1:])
>>> # Without Learnable Parameters
>>> m = nn.LayerNorm(input.size()[1:], elementwise_affine=False)
>>> # Normalize over last two dimensions
>>> m = nn.LayerNorm([10, 10])
>>> # Normalize over last dimension of size 10
>>> m = nn.LayerNorm(10)
>>> # Activating the module
>>> output = m(input) | torch.generated.torch.nn.layernorm#torch.nn.LayerNorm |
class torch.nn.LazyConv1d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A torch.nn.Conv1d module with lazy initialization of the in_channels argument of the Conv1d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
See also torch.nn.Conv1d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of Conv1d | torch.generated.torch.nn.lazyconv1d#torch.nn.LazyConv1d |
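A brief sketch of the lazy-initialization behavior: in_channels is left unspecified and is inferred from the channel dimension (input.size(1)) of the first batch, after which the module behaves as a regular Conv1d:

```python
import torch
import torch.nn as nn

conv = nn.LazyConv1d(out_channels=8, kernel_size=3)  # in_channels omitted
x = torch.randn(4, 16, 50)  # batch of 4, 16 channels, length 50
out = conv(x)               # first forward materializes the parameters
# in_channels is now inferred as 16, so weight has shape (8, 16, 3)
```

The other LazyConvNd and LazyConvTransposeNd modules below follow the same pattern, differing only in dimensionality.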
cls_to_become
alias of Conv1d | torch.generated.torch.nn.lazyconv1d#torch.nn.LazyConv1d.cls_to_become |
class torch.nn.LazyConv2d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A torch.nn.Conv2d module with lazy initialization of the in_channels argument of the Conv2d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
See also torch.nn.Conv2d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of Conv2d | torch.generated.torch.nn.lazyconv2d#torch.nn.LazyConv2d |
cls_to_become
alias of Conv2d | torch.generated.torch.nn.lazyconv2d#torch.nn.LazyConv2d.cls_to_become |
class torch.nn.LazyConv3d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A torch.nn.Conv3d module with lazy initialization of the in_channels argument of the Conv3d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
See also torch.nn.Conv3d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of Conv3d | torch.generated.torch.nn.lazyconv3d#torch.nn.LazyConv3d |
cls_to_become
alias of Conv3d | torch.generated.torch.nn.lazyconv3d#torch.nn.LazyConv3d.cls_to_become |
class torch.nn.LazyConvTranspose1d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
A torch.nn.ConvTranspose1d module with lazy initialization of the in_channels argument of the ConvTranspose1d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 See also torch.nn.ConvTranspose1d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of ConvTranspose1d | torch.generated.torch.nn.lazyconvtranspose1d#torch.nn.LazyConvTranspose1d |
cls_to_become
alias of ConvTranspose1d | torch.generated.torch.nn.lazyconvtranspose1d#torch.nn.LazyConvTranspose1d.cls_to_become |
class torch.nn.LazyConvTranspose2d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
A torch.nn.ConvTranspose2d module with lazy initialization of the in_channels argument of the ConvTranspose2d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 See also torch.nn.ConvTranspose2d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of ConvTranspose2d | torch.generated.torch.nn.lazyconvtranspose2d#torch.nn.LazyConvTranspose2d |
cls_to_become
alias of ConvTranspose2d | torch.generated.torch.nn.lazyconvtranspose2d#torch.nn.LazyConvTranspose2d.cls_to_become |
class torch.nn.LazyConvTranspose3d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
A torch.nn.ConvTranspose3d module with lazy initialization of the in_channels argument of the ConvTranspose3d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 See also torch.nn.ConvTranspose3d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of ConvTranspose3d | torch.generated.torch.nn.lazyconvtranspose3d#torch.nn.LazyConvTranspose3d |
cls_to_become
alias of ConvTranspose3d | torch.generated.torch.nn.lazyconvtranspose3d#torch.nn.LazyConvTranspose3d.cls_to_become |
class torch.nn.LazyLinear(out_features, bias=True) [source]
A torch.nn.Linear module with lazy initialization. In this module, the weight and bias are of torch.nn.UninitializedParameter class. They will be initialized after the first call to forward is done and the module will become a regular torch.nn.Linear module. Check the torch.nn.modules.lazy.LazyModuleMixin for further documentation on lazy modules and their limitations. Parameters
out_features – size of each output sample
bias – If set to False, the layer will not learn an additive bias. Default: True
Variables
~LazyLinear.weight – the learnable weights of the module of shape (\text{out\_features}, \text{in\_features}). The values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{in\_features}}
~LazyLinear.bias – the learnable bias of the module of shape (\text{out\_features}). If bias is True, the values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{in\_features}}
cls_to_become
alias of Linear | torch.generated.torch.nn.lazylinear#torch.nn.LazyLinear |
cls_to_become
alias of Linear | torch.generated.torch.nn.lazylinear#torch.nn.LazyLinear.cls_to_become |
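A minimal sketch of LazyLinear's deferred initialization: in_features is never specified and is inferred from the last dimension of the first input:

```python
import torch
import torch.nn as nn

layer = nn.LazyLinear(out_features=30)  # in_features not given
x = torch.randn(128, 20)
out = layer(x)  # in_features inferred as 20 on the first forward
# the weight now has its regular Linear shape (30, 20)
```

Before the first forward pass the weight is an UninitializedParameter, so code that inspects parameter shapes should run only after the module has seen an input.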
class torch.nn.LeakyReLU(negative_slope=0.01, inplace=False) [source]
Applies the element-wise function: \text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x)
or \text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative\_slope} \times x, & \text{ otherwise } \end{cases}
Parameters
negative_slope – Controls the angle of the negative slope. Default: 1e-2
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.LeakyReLU(0.1)
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.leakyrelu#torch.nn.LeakyReLU |