classmethod from_float(mod) [source]
Create a QAT module from a float module or qparams_dict. Args: mod – a float module, either produced by torch.quantization utilities or provided directly by the user | torch.nn.qat#torch.nn.qat.Linear.from_float |
class torch.nn.quantized.BatchNorm2d(num_features, eps=1e-05, momentum=0.1) [source]
This is the quantized version of BatchNorm2d. | torch.nn.quantized#torch.nn.quantized.BatchNorm2d |
class torch.nn.quantized.BatchNorm3d(num_features, eps=1e-05, momentum=0.1) [source]
This is the quantized version of BatchNorm3d. | torch.nn.quantized#torch.nn.quantized.BatchNorm3d |
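Neither BatchNorm entry above ships a usage example; a minimal sketch for the 2D case (shapes and quantization parameters here are illustrative, not prescribed by the API):

```python
import torch
import torch.nn.quantized as nnq

# Quantized BatchNorm2d over a quint8 input; output scale/zero_point default to 1.0/0
m = nnq.BatchNorm2d(4)
x = torch.randn(2, 4, 8, 8)
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = m(q_x)  # quantized output with the same shape as the input
```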
class torch.nn.quantized.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 1D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see Conv1d. Note Only zeros is supported for the padding_mode argument. Note Only torch.quint8 is supported for the input data type. Variables
~Conv1d.weight (Tensor) – packed tensor derived from the learnable weight parameter.
~Conv1d.scale (Tensor) – scalar for the output scale
~Conv1d.zero_point (Tensor) – scalar for the output zero point See Conv1d for other attributes. Examples: >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 100)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0,
dtype=torch.quint8)
>>> output = m(q_input)
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Conv1d |
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Conv1d.from_float |
class torch.nn.quantized.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 2D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see Conv2d. Note Only zeros is supported for the padding_mode argument. Note Only torch.quint8 is supported for the input data type. Variables
~Conv2d.weight (Tensor) – packed tensor derived from the learnable weight parameter.
~Conv2d.scale (Tensor) – scalar for the output scale
~Conv2d.zero_point (Tensor) – scalar for the output zero point See Conv2d for other attributes. Examples: >>> # With square kernels and equal stride
>>> m = nn.quantized.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Conv2d |
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Conv2d.from_float |
class torch.nn.quantized.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 3D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see Conv3d. Note Only zeros is supported for the padding_mode argument. Note Only torch.quint8 is supported for the input data type. Variables
~Conv3d.weight (Tensor) – packed tensor derived from the learnable weight parameter.
~Conv3d.scale (Tensor) – scalar for the output scale
~Conv3d.zero_point (Tensor) – scalar for the output zero point See Conv3d for other attributes. Examples: >>> # With square kernels and equal stride
>>> m = nn.quantized.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2), dilation=(1, 2, 2))
>>> input = torch.randn(20, 16, 56, 56, 56)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Conv3d |
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Conv3d.from_float |
class torch.nn.quantized.DeQuantize [source]
Dequantizes an incoming tensor Examples::
>>> from torch.nn.quantized import Quantize, DeQuantize
>>> input = torch.tensor([[1., -1.], [1., -1.]])
>>> scale, zero_point, dtype = 1.0, 2, torch.qint8
>>> qm = Quantize(scale, zero_point, dtype)
>>> quantized_input = qm(input)
>>> dqm = DeQuantize()
>>> dequantized = dqm(quantized_input)
>>> print(dequantized)
tensor([[ 1., -1.],
[ 1., -1.]], dtype=torch.float32) | torch.nn.quantized#torch.nn.quantized.DeQuantize |
torch.nn.quantized.dynamic Linear
class torch.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8) [source]
A dynamic quantized linear module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.Linear, please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation. Similar to torch.nn.Linear, attributes will be randomly initialized at module creation time and will be overwritten later Variables
~Linear.weight (Tensor) – the non-learnable quantized weights of the module, of shape (out_features, in_features).
~Linear.bias (Tensor) – the non-learnable floating point bias of the module, of shape (out_features). If bias is True, the values are initialized to zero. Examples: >>> m = nn.quantized.dynamic.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
classmethod from_float(mod) [source]
Create a dynamic quantized module from a float module or qparams_dict Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
LSTM
class torch.nn.quantized.dynamic.LSTM(*args, **kwargs) [source]
A dynamic quantized LSTM module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.LSTM, please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM for documentation. Examples: >>> rnn = nn.quantized.dynamic.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
LSTMCell
class torch.nn.quantized.dynamic.LSTMCell(*args, **kwargs) [source]
A long short-term memory (LSTM) cell. A dynamic quantized LSTMCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.LSTMCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTMCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx, cx = rnn(input[i], (hx, cx))
output.append(hx)
GRUCell
class torch.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8) [source]
A gated recurrent unit (GRU) cell. A dynamic quantized GRUCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.GRUCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx = rnn(input[i], hx)
output.append(hx)
RNNCell
class torch.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8) [source]
An Elman RNN cell with tanh or ReLU non-linearity. A dynamic quantized RNNCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.RNNCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.RNNCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx = rnn(input[i], hx)
output.append(hx) | torch.nn.quantized.dynamic |
class torch.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8) [source]
A gated recurrent unit (GRU) cell. A dynamic quantized GRUCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.GRUCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx = rnn(input[i], hx)
output.append(hx) | torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.GRUCell |
class torch.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8) [source]
A dynamic quantized linear module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.Linear, please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation. Similar to torch.nn.Linear, attributes will be randomly initialized at module creation time and will be overwritten later Variables
~Linear.weight (Tensor) – the non-learnable quantized weights of the module, of shape (out_features, in_features).
~Linear.bias (Tensor) – the non-learnable floating point bias of the module, of shape (out_features). If bias is True, the values are initialized to zero. Examples: >>> m = nn.quantized.dynamic.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
classmethod from_float(mod) [source]
Create a dynamic quantized module from a float module or qparams_dict Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.Linear |
classmethod from_float(mod) [source]
Create a dynamic quantized module from a float module or qparams_dict Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.Linear.from_float |
class torch.nn.quantized.dynamic.LSTM(*args, **kwargs) [source]
A dynamic quantized LSTM module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.LSTM, please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM for documentation. Examples: >>> rnn = nn.quantized.dynamic.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0)) | torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.LSTM |
class torch.nn.quantized.dynamic.LSTMCell(*args, **kwargs) [source]
A long short-term memory (LSTM) cell. A dynamic quantized LSTMCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.LSTMCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTMCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx, cx = rnn(input[i], (hx, cx))
output.append(hx) | torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.LSTMCell |
class torch.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8) [source]
An Elman RNN cell with tanh or ReLU non-linearity. A dynamic quantized RNNCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.RNNCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.RNNCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
hx = rnn(input[i], hx)
output.append(hx) | torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.RNNCell |
class torch.nn.quantized.ELU(scale, zero_point, alpha=1.0) [source]
This is the quantized equivalent of ELU. Parameters
scale – quantization scale of the output tensor
zero_point – quantization zero point of the output tensor
alpha – the alpha constant | torch.nn.quantized#torch.nn.quantized.ELU |
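A short usage sketch for the quantized ELU (the values below are illustrative; the output is re-quantized with the given scale/zero_point):

```python
import torch
import torch.nn.quantized as nnq

m = nnq.ELU(scale=0.05, zero_point=0, alpha=1.0)
x = torch.randn(3, 5)
# quint8 with a nonzero zero_point so negative inputs are representable
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
out = m(q_x)  # quantized, same shape as the input
```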
class torch.nn.quantized.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, dtype=torch.quint8) [source]
A quantized Embedding module with quantized packed weights as inputs. We adopt the same interface as torch.nn.Embedding, please see https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding for documentation. Similar to Embedding, attributes will be randomly initialized at module creation time and will be overwritten later Variables
~Embedding.weight (Tensor) – the non-learnable quantized weights of the module, of shape (num_embeddings, embedding_dim). Examples::
>>> m = nn.quantized.Embedding(num_embeddings=10, embedding_dim=12)
>>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8])
>>> output = m(indices)
>>> print(output.size())
torch.Size([9, 12])
classmethod from_float(mod) [source]
Create a quantized embedding module from a float module Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by user | torch.nn.quantized#torch.nn.quantized.Embedding |
classmethod from_float(mod) [source]
Create a quantized embedding module from a float module Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by user | torch.nn.quantized#torch.nn.quantized.Embedding.from_float |
class torch.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='sum', sparse=False, _weight=None, include_last_offset=False, dtype=torch.quint8) [source]
A quantized EmbeddingBag module with quantized packed weights as inputs. We adopt the same interface as torch.nn.EmbeddingBag, please see https://pytorch.org/docs/stable/nn.html#torch.nn.EmbeddingBag for documentation. Similar to EmbeddingBag, attributes will be randomly initialized at module creation time and will be overwritten later Variables
~EmbeddingBag.weight (Tensor) – the non-learnable quantized weights of the module, of shape (num_embeddings, embedding_dim). Examples::
>>> m = nn.quantized.EmbeddingBag(num_embeddings=10, embedding_dim=12, include_last_offset=True, mode='sum')
>>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8, 6, 6, 9, 1, 6, 8, 8, 3, 2, 3, 6, 3, 6, 5, 7, 0, 8, 4, 6, 5, 8, 2, 3])
>>> offsets = torch.tensor([0, 19, 20, 28, 28, 32])
>>> output = m(indices, offsets)
>>> print(output.size())
torch.Size([5, 12])
classmethod from_float(mod) [source]
Create a quantized embedding_bag module from a float module Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by user | torch.nn.quantized#torch.nn.quantized.EmbeddingBag |
classmethod from_float(mod) [source]
Create a quantized embedding_bag module from a float module Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by user | torch.nn.quantized#torch.nn.quantized.EmbeddingBag.from_float |
class torch.nn.quantized.FloatFunctional [source]
State collector class for float operations. The instance of this class can be used instead of the torch. prefix for some operations. See example usage below. Note This class does not provide a forward hook. Instead, you must use one of the underlying functions (e.g. add). Examples: >>> f_add = FloatFunctional()
>>> a = torch.tensor(3.0)
>>> b = torch.tensor(4.0)
>>> f_add.add(a, b) # Equivalent to ``torch.add(a, b)``
Valid operation names:
add cat mul add_relu add_scalar mul_scalar | torch.nn.quantized#torch.nn.quantized.FloatFunctional |
torch.nn.quantized.functional.adaptive_avg_pool2d(input, output_size) [source]
Applies a 2D adaptive average pooling over a quantized input signal composed of several quantized input planes. Note The input quantization parameters propagate to the output. See AdaptiveAvgPool2d for details and output shape. Parameters
output_size – the target output size (single integer or double-integer tuple) | torch.nn.quantized#torch.nn.quantized.functional.adaptive_avg_pool2d |
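A minimal sketch (shapes and qparams illustrative); note that the input quantization parameters carry over to the output:

```python
import torch
from torch.nn.quantized import functional as qF

x = torch.randn(1, 3, 8, 8)
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = qF.adaptive_avg_pool2d(q_x, output_size=(2, 2))  # (1, 3, 2, 2), scale 0.1
```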
torch.nn.quantized.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) [source]
Applies 2D average-pooling operation in kH × kW regions by step size sH × sW steps. The number of output features is equal to the number of input planes. Note The input quantization parameters propagate to the output. See AvgPool2d for details and output shape. Parameters
input – quantized input tensor (minibatch, in_channels, iH, iW)
kernel_size – size of the pooling region. Can be a single number or a tuple (kH, kW)
stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default: kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape. Default: False
count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True
divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None | torch.nn.quantized#torch.nn.quantized.functional.avg_pool2d |
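A minimal usage sketch (shapes and qparams are illustrative):

```python
import torch
from torch.nn.quantized import functional as qF

x = torch.randn(1, 3, 8, 8)
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = qF.avg_pool2d(q_x, kernel_size=2, stride=2)  # 8x8 -> 4x4, still quantized
```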
torch.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) [source]
Applies a 1D convolution over a quantized 1D input composed of several input planes. See Conv1d for details and output shape. Parameters
input – quantized input tensor of shape (minibatch, in_channels, iW)
weight – quantized filters of shape (out_channels, in_channels / groups, kW)
bias – non-quantized bias tensor of shape (out_channels). The tensor type must be torch.float.
stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
padding_mode – the padding mode to use. Only 'zeros' is supported for quantized convolution at the moment. Default: 'zeros'
scale – quantization scale for the output. Default: 1.0
zero_point – quantization zero_point for the output. Default: 0
dtype – quantization data type to use. Default: torch.quint8
Examples: >>> from torch.nn.quantized import functional as qF
>>> filters = torch.randn(33, 16, 3, dtype=torch.float)
>>> inputs = torch.randn(20, 16, 50, dtype=torch.float)
>>> bias = torch.randn(33, dtype=torch.float)
>>>
>>> scale, zero_point = 1.0, 0
>>> dtype_inputs = torch.quint8
>>> dtype_filters = torch.qint8
>>>
>>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
>>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
>>> qF.conv1d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point) | torch.nn.quantized#torch.nn.quantized.functional.conv1d |
torch.nn.quantized.functional.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) [source]
Applies a 2D convolution over a quantized 2D input composed of several input planes. See Conv2d for details and output shape. Parameters
input – quantized input tensor of shape (minibatch, in_channels, iH, iW)
weight – quantized filters of shape (out_channels, in_channels / groups, kH, kW)
bias – non-quantized bias tensor of shape (out_channels). The tensor type must be torch.float.
stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
padding_mode – the padding mode to use. Only 'zeros' is supported for quantized convolution at the moment. Default: 'zeros'
scale – quantization scale for the output. Default: 1.0
zero_point – quantization zero_point for the output. Default: 0
dtype – quantization data type to use. Default: torch.quint8
Examples: >>> from torch.nn.quantized import functional as qF
>>> filters = torch.randn(8, 4, 3, 3, dtype=torch.float)
>>> inputs = torch.randn(1, 4, 5, 5, dtype=torch.float)
>>> bias = torch.randn(8, dtype=torch.float)
>>>
>>> scale, zero_point = 1.0, 0
>>> dtype_inputs = torch.quint8
>>> dtype_filters = torch.qint8
>>>
>>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
>>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
>>> qF.conv2d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point) | torch.nn.quantized#torch.nn.quantized.functional.conv2d |
torch.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) [source]
Applies a 3D convolution over a quantized 3D input composed of several input planes. See Conv3d for details and output shape. Parameters
input – quantized input tensor of shape (minibatch, in_channels, iD, iH, iW)
weight – quantized filters of shape (out_channels, in_channels / groups, kD, kH, kW)
bias – non-quantized bias tensor of shape (out_channels). The tensor type must be torch.float.
stride – the stride of the convolving kernel. Can be a single number or a tuple (sD, sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padD, padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dD, dH, dW). Default: 1
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
padding_mode – the padding mode to use. Only 'zeros' is supported for quantized convolution at the moment. Default: 'zeros'
scale – quantization scale for the output. Default: 1.0
zero_point – quantization zero_point for the output. Default: 0
dtype – quantization data type to use. Default: torch.quint8
Examples: >>> from torch.nn.quantized import functional as qF
>>> filters = torch.randn(8, 4, 3, 3, 3, dtype=torch.float)
>>> inputs = torch.randn(1, 4, 5, 5, 5, dtype=torch.float)
>>> bias = torch.randn(8, dtype=torch.float)
>>>
>>> scale, zero_point = 1.0, 0
>>> dtype_inputs = torch.quint8
>>> dtype_filters = torch.qint8
>>>
>>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
>>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
>>> qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point) | torch.nn.quantized#torch.nn.quantized.functional.conv3d |
torch.nn.quantized.functional.hardswish(input, scale, zero_point) [source]
This is the quantized version of hardswish(). Parameters
input – quantized input
scale – quantization scale of the output tensor
zero_point – quantization zero point of the output tensor | torch.nn.quantized#torch.nn.quantized.functional.hardswish |
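A minimal usage sketch (the scale/zero_point values are illustrative):

```python
import torch
from torch.nn.quantized import functional as qF

x = torch.randn(1, 3, 8, 8)
# nonzero input zero_point so negative values are representable in quint8
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
out = qF.hardswish(q_x, scale=0.1, zero_point=0)  # re-quantized with the given qparams
```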
torch.nn.quantized.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source]
Down/up samples the input to either the given size or the given scale_factor See torch.nn.functional.interpolate() for implementation details. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. Note The input quantization parameters propagate to the output. Note Only 2D/3D input is supported for quantized inputs Note Only the following modes are supported for the quantized inputs: bilinear nearest Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str) – algorithm used for upsampling: 'nearest' | 'bilinear'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'bilinear'. Default: False | torch.nn.quantized#torch.nn.quantized.functional.interpolate |
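A minimal sketch of nearest-neighbor interpolation on a quantized tensor (shapes illustrative):

```python
import torch
from torch.nn.quantized import functional as qF

x = torch.randn(1, 3, 8, 8)
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = qF.interpolate(q_x, scale_factor=2, mode='nearest')  # 8x8 -> 16x16
```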
torch.nn.quantized.functional.linear(input, weight, bias=None, scale=None, zero_point=None) [source]
Applies a linear transformation to the incoming quantized data: y = xA^T + b. See Linear Note Current implementation packs weights on every call, which has penalty on performance. If you want to avoid the overhead, use Linear. Parameters
input (Tensor) – Quantized input of type torch.quint8
weight (Tensor) – Quantized weight of type torch.qint8
bias (Tensor) – None or fp32 bias of type torch.float
scale (double) – output scale. If None, derived from the input scale
zero_point (long) – output zero point. If None, derived from the input zero_point Shape:
Input: (N, *, in_features) where * means any number of additional dimensions
Weight: (out_features, in_features)
Bias: (out_features)
Output: (N, *, out_features) | torch.nn.quantized#torch.nn.quantized.functional.linear |
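A minimal usage sketch with per-tensor quantized weights (all values illustrative):

```python
import torch
from torch.nn.quantized import functional as qF

# quint8 activations, qint8 weights, fp32 bias -- as the parameter list requires
q_x = torch.quantize_per_tensor(torch.randn(5, 20), scale=0.1, zero_point=0,
                                dtype=torch.quint8)
q_w = torch.quantize_per_tensor(torch.randn(30, 20), scale=0.1, zero_point=0,
                                dtype=torch.qint8)
bias = torch.randn(30)
out = qF.linear(q_x, q_w, bias, scale=0.2, zero_point=0)  # (5, 30), quantized
```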
torch.nn.quantized.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False) [source]
Applies a 2D max pooling over a quantized input signal composed of several quantized input planes. Note The input quantization parameters are propagated to the output. See MaxPool2d for details. | torch.nn.quantized#torch.nn.quantized.functional.max_pool2d |
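A minimal usage sketch (shapes and qparams illustrative; the input qparams propagate):

```python
import torch
from torch.nn.quantized import functional as qF

x = torch.randn(1, 3, 8, 8)
q_x = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = qF.max_pool2d(q_x, kernel_size=2, stride=2)  # 8x8 -> 4x4
```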
torch.nn.quantized.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source]
Upsamples the input to either the given size or the given scale_factor Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent with nn.quantized.functional.interpolate(...). See torch.nn.functional.interpolate() for implementation details. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. Note The input quantization parameters propagate to the output. Note Only 2D input is supported for quantized inputs Note Only the following modes are supported for the quantized inputs: bilinear nearest Parameters
input (Tensor) – quantized input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to be an integer.
mode (string) – algorithm used for upsampling: 'nearest' | 'bilinear'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'bilinear'. Default: False
Warning With align_corners = True, the linearly interpolating modes (bilinear) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. | torch.nn.quantized#torch.nn.quantized.functional.upsample |
torch.nn.quantized.functional.upsample_bilinear(input, size=None, scale_factor=None) [source]
Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent with nn.quantized.functional.interpolate(..., mode='bilinear', align_corners=True). Note The input quantization parameters propagate to the output. Note Only 2D inputs are supported Parameters
input (Tensor) – quantized input
size (int or Tuple[int, int]) – output spatial size.
scale_factor (int or Tuple[int, int]) – multiplier for spatial size | torch.nn.quantized#torch.nn.quantized.functional.upsample_bilinear |
torch.nn.quantized.functional.upsample_nearest(input, size=None, scale_factor=None) [source]
Upsamples the input, using nearest neighbours' pixel values. Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent with nn.quantized.functional.interpolate(..., mode='nearest'). Note The input quantization parameters propagate to the output. Note Only 2D inputs are supported Parameters
input (Tensor) – quantized input
size (int or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (int) – multiplier for spatial size. Has to be an integer. | torch.nn.quantized#torch.nn.quantized.functional.upsample_nearest |
class torch.nn.quantized.GroupNorm(num_groups, num_channels, weight, bias, scale, zero_point, eps=1e-05, affine=True) [source]
This is the quantized version of GroupNorm. Additional args:
scale - quantization scale of the output, type: double.
zero_point - quantization zero point of the output, type: long. | torch.nn.quantized#torch.nn.quantized.GroupNorm |
class torch.nn.quantized.Hardswish(scale, zero_point) [source]
This is the quantized version of Hardswish. Parameters
scale – quantization scale of the output tensor
zero_point – quantization zero point of the output tensor | torch.nn.quantized#torch.nn.quantized.Hardswish
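A minimal construction sketch (assuming the quantized hardswish CPU kernel is available in the build; the scale/zero_point values are illustrative):

```python
import torch

# Output quantization parameters are fixed at construction time.
m = torch.nn.quantized.Hardswish(scale=0.1, zero_point=0)

# Non-negative input so zero_point=0 can represent it exactly.
qx = torch.quantize_per_tensor(torch.rand(4), scale=0.1, zero_point=0,
                               dtype=torch.quint8)
out = m(qx)  # quantized tensor using the output scale/zero_point above
```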
class torch.nn.quantized.InstanceNorm1d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
This is the quantized version of InstanceNorm1d. Additional args:
scale - quantization scale of the output, type: double.
zero_point - quantization zero point of the output, type: long. | torch.nn.quantized#torch.nn.quantized.InstanceNorm1d |
class torch.nn.quantized.InstanceNorm2d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
This is the quantized version of InstanceNorm2d. Additional args:
scale - quantization scale of the output, type: double.
zero_point - quantization zero point of the output, type: long. | torch.nn.quantized#torch.nn.quantized.InstanceNorm2d |
class torch.nn.quantized.InstanceNorm3d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
This is the quantized version of InstanceNorm3d. Additional args:
scale - quantization scale of the output, type: double.
zero_point - quantization zero point of the output, type: long. | torch.nn.quantized#torch.nn.quantized.InstanceNorm3d |
class torch.nn.quantized.LayerNorm(normalized_shape, weight, bias, scale, zero_point, eps=1e-05, elementwise_affine=True) [source]
This is the quantized version of LayerNorm. Additional args:
scale - quantization scale of the output, type: double.
zero_point - quantization zero point of the output, type: long. | torch.nn.quantized#torch.nn.quantized.LayerNorm |
class torch.nn.quantized.Linear(in_features, out_features, bias_=True, dtype=torch.qint8) [source]
A quantized linear module with quantized tensors as inputs and outputs. We adopt the same interface as torch.nn.Linear; please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation. Similar to Linear, attributes will be randomly initialized at module creation time and will be overwritten later. Variables
~Linear.weight (Tensor) – the non-learnable quantized weights of the module of shape (out_features, in_features).
~Linear.bias (Tensor) – the non-learnable bias of the module of shape (out_features). If bias_ is True, the values are initialized to zero.
~Linear.scale – scale parameter of output Quantized Tensor, type: double
~Linear.zero_point – zero_point parameter for output Quantized Tensor, type: long Examples: >>> m = nn.quantized.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Linear
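In practice a quantized Linear is usually produced through the eager-mode torch.quantization workflow rather than constructed by hand; a sketch of that path (the tiny model here is illustrative):

```python
import torch

# A float model in eval mode, tagged with a default quantization config.
float_model = torch.nn.Sequential(torch.nn.Linear(4, 2)).eval()
float_model.qconfig = torch.quantization.default_qconfig

prepared = torch.quantization.prepare(float_model)
prepared(torch.randn(8, 4))              # calibrate the observers
qmodel = torch.quantization.convert(prepared)
# convert() swapped the float Linear for nn.quantized.Linear via from_float()
```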
classmethod from_float(mod) [source]
Creates a quantized module from a float module or qparams_dict. Parameters
mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user | torch.nn.quantized#torch.nn.quantized.Linear.from_float
class torch.nn.quantized.QFunctional [source]
Wrapper class for quantized operations. The instance of this class can be used instead of the torch.ops.quantized prefix. See example usage below. Note This class does not provide a forward hook. Instead, you must use one of the underlying functions (e.g. add). Examples: >>> q_add = QFunctional()
>>> a = torch.quantize_per_tensor(torch.tensor(3.0), 1.0, 0, torch.qint32)
>>> b = torch.quantize_per_tensor(torch.tensor(4.0), 1.0, 0, torch.qint32)
>>> q_add.add(a, b) # Equivalent to ``torch.ops.quantized.add(a, b, 1.0, 0)``
Valid operation names:
add cat mul add_relu add_scalar mul_scalar | torch.nn.quantized#torch.nn.quantized.QFunctional |
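A self-contained version of the example above, using quint8 and relying on the defaults QFunctional starts with (scale=1.0, zero_point=0 for the output):

```python
import torch

q_add = torch.nn.quantized.QFunctional()  # output scale=1.0, zero_point=0
a = torch.quantize_per_tensor(torch.tensor([3.0]), 1.0, 0, torch.quint8)
b = torch.quantize_per_tensor(torch.tensor([4.0]), 1.0, 0, torch.quint8)
out = q_add.add(a, b)  # equivalent to torch.ops.quantized.add(a, b, 1.0, 0)
```

Since 3 + 4 = 7 is exactly representable at scale 1.0, out.dequantize() recovers 7.0 exactly.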
class torch.nn.quantized.Quantize(scale, zero_point, dtype) [source]
Quantizes an incoming tensor Parameters
scale – scale of the output Quantized Tensor
zero_point – zero_point of the output Quantized Tensor
dtype – data type of the output Quantized Tensor Variables
scale, zero_point, dtype – as described under Parameters. Examples:
>>> t = torch.tensor([[1., -1.], [1., -1.]])
>>> scale, zero_point, dtype = 1.0, 2, torch.qint8
>>> qm = Quantize(scale, zero_point, dtype)
>>> qt = qm(t)
>>> print(qt)
tensor([[ 1., -1.],
[ 1., -1.]], size=(2, 2), dtype=torch.qint8, scale=1.0, zero_point=2) | torch.nn.quantized#torch.nn.quantized.Quantize |
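Quantize pairs naturally with torch.nn.quantized.DeQuantize for a round trip; with a scale that represents the inputs exactly, the round trip is lossless:

```python
import torch

t = torch.tensor([[1., -1.], [1., -1.]])
qm = torch.nn.quantized.Quantize(scale=0.5, zero_point=0, dtype=torch.qint8)
dqm = torch.nn.quantized.DeQuantize()

qt = qm(t)        # int representation: 1.0 -> 2, -1.0 -> -2
back = dqm(qt)    # 2 * 0.5 = 1.0, -2 * 0.5 = -1.0 -- exact here
```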
class torch.nn.quantized.ReLU6(inplace=False) [source]
Applies the element-wise function: ReLU6(x) = min(max(x_0, x), q(6)), where x_0 is the zero_point, and q(6) is the quantized representation of the number 6. Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.quantized.ReLU6()
>>> input = torch.randn(2)
>>> input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)
>>> output = m(input) | torch.nn.quantized#torch.nn.quantized.ReLU6 |
class torch.nn.ReflectionPad1d(padding) [source]
Pads the input tensor using the reflection of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 2-tuple, uses (padding_left, padding_right) Shape:
Input: (N, C, W_in)
Output: (N, C, W_out) where W_out = W_in + padding_left + padding_right Examples: >>> m = nn.ReflectionPad1d(2)
>>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
>>> input
tensor([[[0., 1., 2., 3.],
[4., 5., 6., 7.]]])
>>> m(input)
tensor([[[2., 1., 0., 1., 2., 3., 2., 1.],
[6., 5., 4., 5., 6., 7., 6., 5.]]])
>>> # using different paddings for different sides
>>> m = nn.ReflectionPad1d((3, 1))
>>> m(input)
tensor([[[3., 2., 1., 0., 1., 2., 3., 2.],
[7., 6., 5., 4., 5., 6., 7., 6.]]]) | torch.generated.torch.nn.reflectionpad1d#torch.nn.ReflectionPad1d |
class torch.nn.ReflectionPad2d(padding) [source]
Pads the input tensor using the reflection of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left, padding_right, padding_top, padding_bottom) Shape:
Input: (N, C, H_in, W_in)
Output: (N, C, H_out, W_out) where H_out = H_in + padding_top + padding_bottom and W_out = W_in + padding_left + padding_right Examples: >>> m = nn.ReflectionPad2d(2)
>>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
>>> input
tensor([[[[0., 1., 2.],
[3., 4., 5.],
[6., 7., 8.]]]])
>>> m(input)
tensor([[[[8., 7., 6., 7., 8., 7., 6.],
[5., 4., 3., 4., 5., 4., 3.],
[2., 1., 0., 1., 2., 1., 0.],
[5., 4., 3., 4., 5., 4., 3.],
[8., 7., 6., 7., 8., 7., 6.],
[5., 4., 3., 4., 5., 4., 3.],
[2., 1., 0., 1., 2., 1., 0.]]]])
>>> # using different paddings for different sides
>>> m = nn.ReflectionPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[7., 6., 7., 8., 7.],
[4., 3., 4., 5., 4.],
[1., 0., 1., 2., 1.],
[4., 3., 4., 5., 4.],
[7., 6., 7., 8., 7.]]]]) | torch.generated.torch.nn.reflectionpad2d#torch.nn.ReflectionPad2d |
class torch.nn.ReLU(inplace=False) [source]
Applies the rectified linear unit function element-wise: ReLU(x) = (x)^+ = max(0, x) Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.ReLU()
>>> input = torch.randn(2)
>>> output = m(input)
An implementation of CReLU - https://arxiv.org/abs/1603.05201
>>> m = nn.ReLU()
>>> input = torch.randn(2).unsqueeze(0)
>>> output = torch.cat((m(input),m(-input))) | torch.generated.torch.nn.relu#torch.nn.ReLU |
class torch.nn.ReLU6(inplace=False) [source]
Applies the element-wise function: ReLU6(x) = min(max(0, x), 6)
Parameters
inplace – can optionally do the operation in-place. Default: False Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.ReLU6()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.relu6#torch.nn.ReLU6 |
class torch.nn.ReplicationPad1d(padding) [source]
Pads the input tensor using replication of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 2-tuple, uses (padding_left, padding_right) Shape:
Input: (N, C, W_in)
Output: (N, C, W_out) where W_out = W_in + padding_left + padding_right Examples: >>> m = nn.ReplicationPad1d(2)
>>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
>>> input
tensor([[[0., 1., 2., 3.],
[4., 5., 6., 7.]]])
>>> m(input)
tensor([[[0., 0., 0., 1., 2., 3., 3., 3.],
[4., 4., 4., 5., 6., 7., 7., 7.]]])
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad1d((3, 1))
>>> m(input)
tensor([[[0., 0., 0., 0., 1., 2., 3., 3.],
[4., 4., 4., 4., 5., 6., 7., 7.]]]) | torch.generated.torch.nn.replicationpad1d#torch.nn.ReplicationPad1d |
class torch.nn.ReplicationPad2d(padding) [source]
Pads the input tensor using replication of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left, padding_right, padding_top, padding_bottom) Shape:
Input: (N, C, H_in, W_in)
Output: (N, C, H_out, W_out) where H_out = H_in + padding_top + padding_bottom and W_out = W_in + padding_left + padding_right Examples: >>> m = nn.ReplicationPad2d(2)
>>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
>>> input
tensor([[[[0., 1., 2.],
[3., 4., 5.],
[6., 7., 8.]]]])
>>> m(input)
tensor([[[[0., 0., 0., 1., 2., 2., 2.],
[0., 0., 0., 1., 2., 2., 2.],
[0., 0., 0., 1., 2., 2., 2.],
[3., 3., 3., 4., 5., 5., 5.],
[6., 6., 6., 7., 8., 8., 8.],
[6., 6., 6., 7., 8., 8., 8.],
[6., 6., 6., 7., 8., 8., 8.]]]])
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[0., 0., 1., 2., 2.],
[0., 0., 1., 2., 2.],
[0., 0., 1., 2., 2.],
[3., 3., 4., 5., 5.],
[6., 6., 7., 8., 8.]]]]) | torch.generated.torch.nn.replicationpad2d#torch.nn.ReplicationPad2d |
class torch.nn.ReplicationPad3d(padding) [source]
Pads the input tensor using replication of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 6-tuple, uses (padding_left, padding_right, padding_top, padding_bottom, padding_front, padding_back) Shape:
Input: (N, C, D_in, H_in, W_in)
Output: (N, C, D_out, H_out, W_out) where D_out = D_in + padding_front + padding_back, H_out = H_in + padding_top + padding_bottom, and W_out = W_in + padding_left + padding_right Examples: >>> m = nn.ReplicationPad3d(3)
>>> input = torch.randn(16, 3, 8, 320, 480)
>>> output = m(input)
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))
>>> output = m(input) | torch.generated.torch.nn.replicationpad3d#torch.nn.ReplicationPad3d |
class torch.nn.RNN(*args, **kwargs) [source]
Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence. For each element in the input sequence, each layer computes the following function: h_t = tanh(W_ih x_t + b_ih + W_hh h_(t-1) + b_hh)
where h_t is the hidden state at time t, x_t is the input at time t, and h_(t-1) is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If nonlinearity is 'relu', then ReLU is used instead of tanh. Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional – If True, becomes a bidirectional RNN. Default: False
Inputs: input, h_0
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1. Outputs: output, h_n
output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size). Shape:
Input1: (L, N, H_in) tensor containing input features, where H_in = input_size and L is the sequence length. Input2: (S, N, H_out) tensor containing the initial hidden state for each element in the batch, where H_out = hidden_size and S = num_layers * num_directions; defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1. Output1: (L, N, H_all) where H_all = num_directions * hidden_size
Output2: (S, N, H_out) tensor containing the next hidden state for each element in the batch Variables
~RNN.weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)
~RNN.weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size, hidden_size)
~RNN.bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
~RNN.bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
Note All the weights and biases are initialized from U(-sqrt(k), sqrt(k)) where k = 1/hidden_size. Warning There are known non-determinism issues for RNN functions on some versions of cuDNN and CUDA. You can enforce deterministic behavior by setting the following environment variables: On CUDA 10.1, set environment variable CUDA_LAUNCH_BLOCKING=1. This may affect performance. On CUDA 10.2 or later, set environment variable (note the leading colon symbol) CUBLAS_WORKSPACE_CONFIG=:16:8 or CUBLAS_WORKSPACE_CONFIG=:4096:2. See the cuDNN 8 Release Notes for more information. Note If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) V100 GPU is used, 5) input data is not in PackedSequence format, the persistent algorithm can be selected to improve performance. Examples: >>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0) | torch.generated.torch.nn.rnn#torch.nn.RNN |
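The output/direction bookkeeping described above can be checked directly; for a bidirectional 2-layer RNN, the last dimension of output stacks the two directions, and view() separates them:

```python
import torch

rnn = torch.nn.RNN(input_size=10, hidden_size=20, num_layers=2,
                   bidirectional=True)
x = torch.randn(5, 3, 10)              # (seq_len, batch, input_size)
output, h_n = rnn(x)
# output: (seq_len, batch, num_directions * hidden_size) = (5, 3, 40)
# h_n:    (num_layers * num_directions, batch, hidden_size) = (4, 3, 20)
directions = output.view(5, 3, 2, 20)  # forward = index 0, backward = index 1
```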
class torch.nn.RNNBase(mode, input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, proj_size=0) [source]
flatten_parameters() [source]
Resets parameter data pointers so that they can use faster code paths. Right now, this works only if the module is on the GPU and cuDNN is enabled. Otherwise, it's a no-op. | torch.generated.torch.nn.rnnbase#torch.nn.RNNBase
flatten_parameters() [source]
Resets parameter data pointers so that they can use faster code paths. Right now, this works only if the module is on the GPU and cuDNN is enabled. Otherwise, it's a no-op. | torch.generated.torch.nn.rnnbase#torch.nn.RNNBase.flatten_parameters
class torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh') [source]
An Elman RNN cell with tanh or ReLU non-linearity. h' = tanh(W_ih x + b_ih + W_hh h + b_hh)
If nonlinearity is 'relu', then ReLU is used in place of tanh. Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
Inputs: input, hidden
input of shape (batch, input_size): tensor containing input features
hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. Outputs: h'
h' of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch Shape:
Input1: (N, H_in) tensor containing input features where H_in = input_size
Input2: (N, H_out) tensor containing the initial hidden state for each element in the batch where H_out = hidden_size. Defaults to zero if not provided. Output: (N, H_out) tensor containing the next hidden state for each element in the batch Variables
~RNNCell.weight_ih – the learnable input-hidden weights, of shape (hidden_size, input_size)
~RNNCell.weight_hh – the learnable hidden-hidden weights, of shape (hidden_size, hidden_size)
~RNNCell.bias_ih – the learnable input-hidden bias, of shape (hidden_size)
~RNNCell.bias_hh – the learnable hidden-hidden bias, of shape (hidden_size)
Note All the weights and biases are initialized from U(-sqrt(k), sqrt(k)) where k = 1/hidden_size. Examples: >>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx) | torch.generated.torch.nn.rnncell#torch.nn.RNNCell
class torch.nn.RReLU(lower=0.125, upper=0.3333333333333333, inplace=False) [source]
Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network. The function is defined as: RReLU(x) = x if x >= 0, ax otherwise
where a is randomly sampled from the uniform distribution U(lower, upper). See: https://arxiv.org/pdf/1505.00853.pdf Parameters
lower – lower bound of the uniform distribution. Default: 1/8
upper – upper bound of the uniform distribution. Default: 1/3
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.RReLU(0.1, 0.3)
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.rrelu#torch.nn.RReLU |
class torch.nn.SELU(inplace=False) [source]
Applied element-wise, as: SELU(x) = scale * (max(0, x) + min(0, α * (exp(x) - 1)))
with α = 1.6732632423543772848170429916717 and scale = 1.0507009873554804934193349852946. More details can be found in the paper Self-Normalizing Neural Networks. Parameters
inplace (bool, optional) – can optionally do the operation in-place. Default: False Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.SELU()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.selu#torch.nn.SELU |
class torch.nn.Sequential(*args) [source]
A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in. To make it easier to understand, here is a small example: # Example of using Sequential
model = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),
nn.Conv2d(20,64,5),
nn.ReLU()
)
# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
('conv1', nn.Conv2d(1,20,5)),
('relu1', nn.ReLU()),
('conv2', nn.Conv2d(20,64,5)),
('relu2', nn.ReLU())
])) | torch.generated.torch.nn.sequential#torch.nn.Sequential |
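Either form produces a module that applies its children in order on forward(); children can also be indexed like a list:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
y = model(torch.randn(3, 4))  # runs the three modules in order
first = model[0]              # children are indexable
```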
class torch.nn.Sigmoid [source]
Applies the element-wise function: Sigmoid(x) = σ(x) = 1 / (1 + exp(-x))
Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.Sigmoid()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.sigmoid#torch.nn.Sigmoid |
class torch.nn.SiLU(inplace=False) [source]
Applies the SiLU function, element-wise. silu(x) = x * σ(x), where σ(x) is the logistic sigmoid.
Note See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function where the SiLU was experimented with later. Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.SiLU()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.silu#torch.nn.SiLU |
class torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean', beta=1.0) [source]
Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. It is less sensitive to outliers than torch.nn.MSELoss and in some cases prevents exploding gradients (e.g. see the Fast R-CNN paper by Ross Girshick). Omitting a scaling factor of beta, this loss is also known as the Huber loss: loss(x, y) = (1/n) * sum_i z_i
where z_i is given by: z_i = 0.5 * (x_i - y_i)^2 / beta if |x_i - y_i| < beta, and |x_i - y_i| - 0.5 * beta otherwise
x and y are tensors of arbitrary shape with a total of n elements each; the sum operation still operates over all the elements and divides by n. beta is an optional parameter that defaults to 1. Note: when beta is set to 0, this is equivalent to L1Loss. Passing a negative value for beta will result in an exception. The division by n can be avoided if one sets reduction = 'sum'. Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
beta (float, optional) – Specifies the threshold at which to change between L1 and L2 loss. This value defaults to 1.0. Shape:
Input: (N, *) where * means any number of additional dimensions Target: (N, *), same shape as the input Output: scalar. If reduction is 'none', then (N, *), same shape as the input | torch.generated.torch.nn.smoothl1loss#torch.nn.SmoothL1Loss
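A worked check of the piecewise definition with beta = 1: an error of 0.5 falls in the quadratic branch, an error of 2.0 in the linear branch:

```python
import torch

loss = torch.nn.SmoothL1Loss(beta=1.0)  # 'mean' reduction by default
x = torch.tensor([0.5, 2.0])
y = torch.tensor([0.0, 0.0])
# z_0 = 0.5 * 0.5**2 / 1 = 0.125   (since |0.5 - 0| < beta)
# z_1 = 2.0 - 0.5 * 1   = 1.5      (otherwise)
out = loss(x, y)                   # mean = (0.125 + 1.5) / 2 = 0.8125
```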
class torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1). loss(x, y) = sum_i log(1 + exp(-y[i] * x[i])) / x.nelement()
Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Shape:
Input: (*) where * means any number of additional dimensions Target: (*), same shape as the input Output: scalar. If reduction is 'none', then same shape as the input | torch.generated.torch.nn.softmarginloss#torch.nn.SoftMarginLoss
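A worked check of the formula: with inputs of 0, the per-element loss is log(1 + exp(0)) = log 2 regardless of the target sign:

```python
import math
import torch

loss = torch.nn.SoftMarginLoss()  # 'mean' reduction by default
x = torch.tensor([0.0, 0.0])
y = torch.tensor([1.0, -1.0])
out = loss(x, y)                  # both elements give log(2) ~ 0.6931
```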
class torch.nn.Softmax(dim=None) [source]
Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. Softmax is defined as: Softmax(x_i) = exp(x_i) / sum_j exp(x_j)
When the input Tensor is a sparse tensor, the unspecified values are treated as -inf. Shape:
Input: (*) where * means any number of additional dimensions Output: (*), same shape as the input Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1] Parameters
dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1). Note This module doesn't work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use LogSoftmax instead (it's faster and has better numerical properties). Examples: >>> m = nn.Softmax(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input) | torch.generated.torch.nn.softmax#torch.nn.Softmax |
class torch.nn.Softmax2d [source]
Applies SoftMax over features to each spatial location. When given an image of Channels x Height x Width, it will apply Softmax to each location (Channels, h_i, w_j) Shape:
Input: (N, C, H, W)
Output: (N, C, H, W) (same shape as input) Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1] Examples: >>> m = nn.Softmax2d()
>>> # you softmax over the 2nd dimension
>>> input = torch.randn(2, 3, 12, 13)
>>> output = m(input) | torch.generated.torch.nn.softmax2d#torch.nn.Softmax2d |
class torch.nn.Softmin(dim=None) [source]
Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. Softmin is defined as: Softmin(x_i) = exp(-x_i) / sum_j exp(-x_j)
Shape:
Input: (*) where * means any number of additional dimensions Output: (*), same shape as the input Parameters
dim (int) – A dimension along which Softmin will be computed (so every slice along dim will sum to 1). Returns
a Tensor of the same dimension and shape as the input, with values in the range [0, 1] Examples: >>> m = nn.Softmin()
>>> input = torch.randn(2, 3)
>>> output = m(input) | torch.generated.torch.nn.softmin#torch.nn.Softmin |
class torch.nn.Softplus(beta=1, threshold=20) [source]
Applies the element-wise function: Softplus(x) = (1/β) * log(1 + exp(β * x))
SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive. For numerical stability the implementation reverts to the linear function when input × β > threshold. Parameters
beta – the β value for the Softplus formulation. Default: 1
threshold – values above this revert to a linear function. Default: 20 Shape:
Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.Softplus()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.softplus#torch.nn.Softplus |
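A scalar pure-Python sketch of the formula and its stability fallback (the `softplus` helper is illustrative; the real module operates element-wise on tensors):

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # Softplus(x) = (1/beta) * log(1 + exp(beta * x)); revert to the
    # identity once beta * x exceeds threshold, matching the note above.
    if beta * x > threshold:
        return x
    return math.log1p(math.exp(beta * x)) / beta

softplus(0.0)  # log(2), always positive output
```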
class torch.nn.Softshrink(lambd=0.5) [source]
Applies the soft shrinkage function elementwise: \text{SoftShrinkage}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}
Parameters
lambd – the λ value (must be no less than zero) for the Softshrink formulation. Default: 0.5 Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.Softshrink()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.softshrink#torch.nn.Softshrink |
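The three cases above translate directly to code. A scalar pure-Python sketch (the `softshrink` helper is illustrative, not part of torch):

```python
def softshrink(x, lambd=0.5):
    # Shrink x toward zero by lambd; inputs with |x| <= lambd map to 0.
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0
```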
class torch.nn.Softsign [source]
Applies the element-wise function: \text{SoftSign}(x) = \frac{x}{1 + |x|}
Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.Softsign()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.softsign#torch.nn.Softsign |
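A scalar pure-Python sketch of the formula (the `softsign` helper is illustrative, not part of torch):

```python
def softsign(x):
    # SoftSign(x) = x / (1 + |x|): a smooth, odd squashing bounded in (-1, 1).
    return x / (1.0 + abs(x))
```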
class torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None) [source]
Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process group. γ and β are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ are sampled from \mathcal{U}(0, 1) and the elements of β are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. If track_running_stats is set to False, this layer does not keep running estimates, and batch statistics are instead used during evaluation time as well. Note This momentum argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value. Because the Batch Normalization is done for each channel in the C dimension, computing statistics on (N, +) slices, it's common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization. Currently SyncBatchNorm only supports DistributedDataParallel (DDP) with a single GPU per process. Use torch.nn.SyncBatchNorm.convert_sync_batchnorm() to convert a BatchNorm*D layer to SyncBatchNorm before wrapping the network with DDP. Parameters
num_features – C from an expected input of size (N, C, +)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes the statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics, in both training and eval modes. Default: True
process_group – synchronization of stats happens within each process group individually. Default behavior is synchronization across the whole world. Shape:
Input: (N, C, +)
Output: (N, C, +) (same shape as input) Examples: >>> # With Learnable Parameters
>>> m = nn.SyncBatchNorm(100)
>>> # creating process group (optional)
>>> # ranks is a list of int identifying rank ids.
>>> ranks = list(range(8))
>>> r1, r2 = ranks[:4], ranks[4:]
>>> # Note: every rank calls into new_group for every
>>> # process group created, even if that rank is not
>>> # part of the group.
>>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]
>>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]
>>> # Without Learnable Parameters
>>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
>>> # network is nn.BatchNorm layer
>>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)
>>> # only single gpu per process is currently supported
>>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(
>>> sync_bn_network,
>>> device_ids=[args.local_rank],
>>> output_device=args.local_rank)
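The running-statistics update rule quoted in the note above can be checked in plain Python. A minimal sketch (the `update_running_stat` helper is illustrative, not part of torch):

```python
def update_running_stat(running, observed, momentum=0.1):
    # x_hat_new = (1 - momentum) * x_hat + momentum * x_t
    return (1.0 - momentum) * running + momentum * observed

stat = 0.0
for batch_mean in [1.0, 1.0, 1.0]:
    stat = update_running_stat(stat, batch_mean)
# after k identical observations of 1.0, stat = 1 - (1 - momentum)**k
```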
classmethod convert_sync_batchnorm(module, process_group=None) [source]
Helper function to convert all BatchNorm*D layers in the model to torch.nn.SyncBatchNorm layers. Parameters
module (nn.Module) – module containing one or more BatchNorm*D layers
process_group (optional) – process group to scope synchronization, default is the whole world. Returns
The original module with the converted torch.nn.SyncBatchNorm layers. If the original module is a BatchNorm*D layer, a new torch.nn.SyncBatchNorm layer object will be returned instead. Example: >>> # Network with nn.BatchNorm layer
>>> module = torch.nn.Sequential(
>>> torch.nn.Linear(20, 100),
>>> torch.nn.BatchNorm1d(100),
>>> ).cuda()
>>> # creating process group (optional)
>>> # ranks is a list of int identifying rank ids.
>>> ranks = list(range(8))
>>> r1, r2 = ranks[:4], ranks[4:]
>>> # Note: every rank calls into new_group for every
>>> # process group created, even if that rank is not
>>> # part of the group.
>>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]
>>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]
>>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group) | torch.generated.torch.nn.syncbatchnorm#torch.nn.SyncBatchNorm |
classmethod convert_sync_batchnorm(module, process_group=None) [source]
Helper function to convert all BatchNorm*D layers in the model to torch.nn.SyncBatchNorm layers. Parameters
module (nn.Module) – module containing one or more BatchNorm*D layers
process_group (optional) – process group to scope synchronization, default is the whole world. Returns
The original module with the converted torch.nn.SyncBatchNorm layers. If the original module is a BatchNorm*D layer, a new torch.nn.SyncBatchNorm layer object will be returned instead. Example: >>> # Network with nn.BatchNorm layer
>>> module = torch.nn.Sequential(
>>> torch.nn.Linear(20, 100),
>>> torch.nn.BatchNorm1d(100),
>>> ).cuda()
>>> # creating process group (optional)
>>> # ranks is a list of int identifying rank ids.
>>> ranks = list(range(8))
>>> r1, r2 = ranks[:4], ranks[4:]
>>> # Note: every rank calls into new_group for every
>>> # process group created, even if that rank is not
>>> # part of the group.
>>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]
>>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]
>>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group) | torch.generated.torch.nn.syncbatchnorm#torch.nn.SyncBatchNorm.convert_sync_batchnorm |
class torch.nn.Tanh [source]
Applies the element-wise function: \text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}
Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.Tanh()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.tanh#torch.nn.Tanh |
class torch.nn.Tanhshrink [source]
Applies the element-wise function: \text{Tanhshrink}(x) = x - \tanh(x)
Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.Tanhshrink()
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.tanhshrink#torch.nn.Tanhshrink |
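A scalar pure-Python sketch of the formula (the `tanhshrink` helper is illustrative, not part of torch):

```python
import math

def tanhshrink(x):
    # Tanhshrink(x) = x - tanh(x): near zero for small x, close to
    # x - 1 for large positive x.
    return x - math.tanh(x)
```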
class torch.nn.Threshold(threshold, value, inplace=False) [source]
Thresholds each element of the input Tensor. Threshold is defined as: y = \begin{cases} x, &\text{ if } x > \text{threshold} \\ \text{value}, &\text{ otherwise } \end{cases}
Parameters
threshold – The value to threshold at
value – The value to replace with
inplace – can optionally do the operation in-place. Default: False
Shape:
Input: (N, *) where * means any number of additional dimensions. Output: (N, *), same shape as the input. Examples: >>> m = nn.Threshold(0.1, 20)
>>> input = torch.randn(2)
>>> output = m(input) | torch.generated.torch.nn.threshold#torch.nn.Threshold |
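The piecewise definition above, sketched per element in plain Python (the `threshold` helper is illustrative, not part of torch):

```python
def threshold(x, thresh, value):
    # y = x if x > thresh else value
    return x if x > thresh else value

threshold(0.5, 0.1, 20)   # passes through
threshold(0.05, 0.1, 20)  # replaced by value
```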
class torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation='relu', custom_encoder=None, custom_decoder=None) [source]
A transformer model. The user is able to modify the attributes as needed. The architecture is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users can build the BERT (https://arxiv.org/abs/1810.04805) model with the corresponding parameters. Parameters
d_model – the number of expected features in the encoder/decoder inputs (default=512).
nhead – the number of heads in the multiheadattention models (default=8).
num_encoder_layers – the number of sub-encoder-layers in the encoder (default=6).
num_decoder_layers – the number of sub-decoder-layers in the decoder (default=6).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of the encoder/decoder intermediate layer, relu or gelu (default=relu).
custom_encoder – custom encoder (default=None).
custom_decoder – custom decoder (default=None). Examples::
>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
>>> src = torch.rand((10, 32, 512))
>>> tgt = torch.rand((20, 32, 512))
>>> out = transformer_model(src, tgt)
Note: A full example to apply nn.Transformer module for the word language model is available in https://github.com/pytorch/examples/tree/master/word_language_model
forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Take in and process masked source/target sequences. Parameters
src – the sequence to the encoder (required).
tgt – the sequence to the decoder (required).
src_mask – the additive mask for the src sequence (optional).
tgt_mask – the additive mask for the tgt sequence (optional).
memory_mask – the additive mask for the encoder output (optional).
src_key_padding_mask – the ByteTensor mask for src keys per batch (optional).
tgt_key_padding_mask – the ByteTensor mask for tgt keys per batch (optional).
memory_key_padding_mask – the ByteTensor mask for memory keys per batch (optional). Shape:
src: (S, N, E). tgt: (T, N, E). src_mask: (S, S). tgt_mask: (T, T). memory_mask: (T, S). src_key_padding_mask: (N, S). tgt_key_padding_mask: (N, T). memory_key_padding_mask: (N, S). Note: [src/tgt/memory]_mask ensures that position i is allowed to attend the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged. If a BoolTensor is provided, positions with True are not allowed to attend while False values will be unchanged. If a FloatTensor is provided, it will be added to the attention weight. [src/tgt/memory]_key_padding_mask provides specified elements in the key to be ignored by the attention. If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions will be unchanged. If a BoolTensor is provided, the positions with the value of True will be ignored while the positions with the value of False will be unchanged. output: (T, N, E). Note: Due to the multi-head attention architecture in the transformer model, the output sequence length of a transformer is the same as the input sequence (i.e. target) length of the decoder. Here S is the source sequence length, T is the target sequence length, N is the batch size, and E is the feature number. Examples >>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
generate_square_subsequent_mask(sz) [source]
Generate a square mask for the sequence. The masked positions are filled with float('-inf'). Unmasked positions are filled with float(0.0). | torch.generated.torch.nn.transformer#torch.nn.Transformer
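The mask described above is lower-triangular: position i may attend only to positions j <= i. A minimal pure-Python sketch using nested lists instead of tensors (the `square_subsequent_mask` helper is illustrative):

```python
def square_subsequent_mask(sz):
    # Additive attention mask: allowed entries hold 0.0, masked entries
    # hold -inf, so they vanish after the softmax over attention weights.
    neg_inf = float('-inf')
    return [[0.0 if j <= i else neg_inf for j in range(sz)]
            for i in range(sz)]

mask = square_subsequent_mask(3)
```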
forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Take in and process masked source/target sequences. Parameters
src – the sequence to the encoder (required).
tgt – the sequence to the decoder (required).
src_mask – the additive mask for the src sequence (optional).
tgt_mask – the additive mask for the tgt sequence (optional).
memory_mask – the additive mask for the encoder output (optional).
src_key_padding_mask – the ByteTensor mask for src keys per batch (optional).
tgt_key_padding_mask – the ByteTensor mask for tgt keys per batch (optional).
memory_key_padding_mask – the ByteTensor mask for memory keys per batch (optional). Shape:
src: (S, N, E). tgt: (T, N, E). src_mask: (S, S). tgt_mask: (T, T). memory_mask: (T, S). src_key_padding_mask: (N, S). tgt_key_padding_mask: (N, T). memory_key_padding_mask: (N, S). Note: [src/tgt/memory]_mask ensures that position i is allowed to attend the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged. If a BoolTensor is provided, positions with True are not allowed to attend while False values will be unchanged. If a FloatTensor is provided, it will be added to the attention weight. [src/tgt/memory]_key_padding_mask provides specified elements in the key to be ignored by the attention. If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions will be unchanged. If a BoolTensor is provided, the positions with the value of True will be ignored while the positions with the value of False will be unchanged. output: (T, N, E). Note: Due to the multi-head attention architecture in the transformer model, the output sequence length of a transformer is the same as the input sequence (i.e. target) length of the decoder. Here S is the source sequence length, T is the target sequence length, N is the batch size, and E is the feature number. Examples >>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask) | torch.generated.torch.nn.transformer#torch.nn.Transformer.forward
generate_square_subsequent_mask(sz) [source]
Generate a square mask for the sequence. The masked positions are filled with float('-inf'). Unmasked positions are filled with float(0.0). | torch.generated.torch.nn.transformer#torch.nn.Transformer.generate_square_subsequent_mask
class torch.nn.TransformerDecoder(decoder_layer, num_layers, norm=None) [source]
TransformerDecoder is a stack of N decoder layers Parameters
decoder_layer – an instance of the TransformerDecoderLayer() class (required).
num_layers – the number of sub-decoder-layers in the decoder (required).
norm – the layer normalization component (optional). Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
>>> memory = torch.rand(10, 32, 512)
>>> tgt = torch.rand(20, 32, 512)
>>> out = transformer_decoder(tgt, memory)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Pass the inputs (and mask) through the decoder layer in turn. Parameters
tgt – the sequence to the decoder (required).
memory – the sequence from the last layer of the encoder (required).
tgt_mask – the mask for the tgt sequence (optional).
memory_mask – the mask for the memory sequence (optional).
tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
memory_key_padding_mask – the mask for the memory keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerdecoder#torch.nn.TransformerDecoder |
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Pass the inputs (and mask) through the decoder layer in turn. Parameters
tgt – the sequence to the decoder (required).
memory – the sequence from the last layer of the encoder (required).
tgt_mask – the mask for the tgt sequence (optional).
memory_mask – the mask for the memory sequence (optional).
tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
memory_key_padding_mask – the mask for the memory keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerdecoder#torch.nn.TransformerDecoder.forward |
class torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu') [source]
TransformerDecoderLayer is made up of self-attn, multi-head-attn and a feedforward network. This standard decoder layer is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement it in a different way during application. Parameters
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of the intermediate layer, relu or gelu (default=relu). Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> memory = torch.rand(10, 32, 512)
>>> tgt = torch.rand(20, 32, 512)
>>> out = decoder_layer(tgt, memory)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Pass the inputs (and mask) through the decoder layer. Parameters
tgt – the sequence to the decoder layer (required).
memory – the sequence from the last layer of the encoder (required).
tgt_mask – the mask for the tgt sequence (optional).
memory_mask – the mask for the memory sequence (optional).
tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
memory_key_padding_mask – the mask for the memory keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerdecoderlayer#torch.nn.TransformerDecoderLayer |
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Pass the inputs (and mask) through the decoder layer. Parameters
tgt – the sequence to the decoder layer (required).
memory – the sequence from the last layer of the encoder (required).
tgt_mask – the mask for the tgt sequence (optional).
memory_mask – the mask for the memory sequence (optional).
tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
memory_key_padding_mask – the mask for the memory keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerdecoderlayer#torch.nn.TransformerDecoderLayer.forward |
class torch.nn.TransformerEncoder(encoder_layer, num_layers, norm=None) [source]
TransformerEncoder is a stack of N encoder layers Parameters
encoder_layer – an instance of the TransformerEncoderLayer() class (required).
num_layers – the number of sub-encoder-layers in the encoder (required).
norm – the layer normalization component (optional). Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
>>> src = torch.rand(10, 32, 512)
>>> out = transformer_encoder(src)
forward(src, mask=None, src_key_padding_mask=None) [source]
Pass the input through the encoder layers in turn. Parameters
src – the sequence to the encoder (required).
mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerencoder#torch.nn.TransformerEncoder |
forward(src, mask=None, src_key_padding_mask=None) [source]
Pass the input through the encoder layers in turn. Parameters
src – the sequence to the encoder (required).
mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerencoder#torch.nn.TransformerEncoder.forward |
class torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu') [source]
TransformerEncoderLayer is made up of self-attn and a feedforward network. This standard encoder layer is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement it in a different way during application. Parameters
d_model – the number of expected features in the input (required).
nhead – the number of heads in the multiheadattention models (required).
dim_feedforward – the dimension of the feedforward network model (default=2048).
dropout – the dropout value (default=0.1).
activation – the activation function of the intermediate layer, relu or gelu (default=relu). Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> src = torch.rand(10, 32, 512)
>>> out = encoder_layer(src)
forward(src, src_mask=None, src_key_padding_mask=None) [source]
Pass the input through the encoder layer. Parameters
src – the sequence to the encoder layer (required).
src_mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerencoderlayer#torch.nn.TransformerEncoderLayer |
forward(src, src_mask=None, src_key_padding_mask=None) [source]
Pass the input through the encoder layer. Parameters
src – the sequence to the encoder layer (required).
src_mask – the mask for the src sequence (optional).
src_key_padding_mask – the mask for the src keys per batch (optional). Shape:
see the docs in Transformer class. | torch.generated.torch.nn.transformerencoderlayer#torch.nn.TransformerEncoderLayer.forward |
class torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n (i.e., anchor, positive example and negative example, respectively). The shapes of all input tensors should be (N, D). The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. The loss function for each sample in the mini-batch is: L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}
where d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p
See also TripletMarginWithDistanceLoss, which computes the triplet margin loss for input tensors using a custom distance function. Parameters
margin (float, optional) – Default: 1.
p (int, optional) – The norm degree for pairwise distance. Default: 2.
swap (bool, optional) – The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. Default: False.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Shape:
Input: (N, D) where D is the vector dimension.
Output: A Tensor of shape (N) if reduction is 'none', or a scalar
otherwise. >>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> anchor = torch.randn(100, 128, requires_grad=True)
>>> positive = torch.randn(100, 128, requires_grad=True)
>>> negative = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward() | torch.generated.torch.nn.tripletmarginloss#torch.nn.TripletMarginLoss |
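The per-sample loss above can be sketched in plain Python for a single triplet of vectors, using the l_p distance (the `triplet_margin_loss` helper is illustrative, not the torch implementation, and omits the swap option):

```python
def triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2):
    # L(a, p, n) = max{ d(a, p) - d(a, n) + margin, 0 }, d = l_p distance
    def dist(x, y):
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)

# positive much closer than negative -> loss clamps to zero
triplet_margin_loss([0.0, 0.0], [0.1, 0.0], [5.0, 0.0])
```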
class torch.nn.TripletMarginWithDistanceLoss(*, distance_function=None, margin=1.0, swap=False, reduction='mean') [source]
Creates a criterion that measures the triplet loss given input tensors a, p, and n (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function ("distance function") used to compute the relationship between the anchor and positive example ("positive distance") and the anchor and negative example ("negative distance"). The unreduced loss (i.e., with reduction set to 'none') can be described as: \ell(a, p, n) = L = \{l_1,\dots,l_N\}^\top, \quad l_i = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}
where N is the batch size; d is a nonnegative, real-valued function quantifying the closeness of two tensors, referred to as the distance_function; and margin is a nonnegative margin representing the minimum difference between the positive and negative distances that is required for the loss to be 0. The input tensors have N elements each and can be of any shape that the distance function can handle. If reduction is not 'none' (default 'mean'), then: \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
See also TripletMarginLoss, which computes the triplet loss for input tensors using the l_p distance as the distance function. Parameters
distance_function (callable, optional) – A nonnegative, real-valued function that quantifies the closeness of two tensors. If not specified, nn.PairwiseDistance will be used. Default: None
margin (float, optional) – A nonnegative margin representing the minimum difference between the positive and negative distances required for the loss to be 0. Larger margins penalize cases where the negative examples are not distant enough from the anchors, relative to the positives. Default: 1.
swap (bool, optional) – Whether to use the distance swap described in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. If True, and if the positive example is closer to the negative example than the anchor is, swaps the positive example and the anchor in the loss computation. Default: False.
reduction (string, optional) – Specifies the (optional) reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'
Shape:
Input: (N, *) where * represents any number of additional dimensions as supported by the distance function. Output: A Tensor of shape (N) if reduction is 'none', or a scalar otherwise. Examples: >>> # Initialize embeddings
>>> embedding = nn.Embedding(1000, 128)
>>> anchor_ids = torch.randint(0, 1000, (1,))
>>> positive_ids = torch.randint(0, 1000, (1,))
>>> negative_ids = torch.randint(0, 1000, (1,))
>>> anchor = embedding(anchor_ids)
>>> positive = embedding(positive_ids)
>>> negative = embedding(negative_ids)
>>>
>>> # Built-in Distance Function
>>> triplet_loss = \
>>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance())
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function
>>> def l_infinity(x1, x2):
>>> return torch.max(torch.abs(x1 - x2), dim=1).values
>>>
>>> triplet_loss = \
>>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function (Lambda)
>>> triplet_loss = \
>>> nn.TripletMarginWithDistanceLoss(
>>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y))
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
Reference:
V. Balntas, et al.: Learning shallow convolutional feature descriptors with triplet losses: http://www.bmva.org/bmvc/2016/papers/paper119/index.html | torch.generated.torch.nn.tripletmarginwithdistanceloss#torch.nn.TripletMarginWithDistanceLoss |
class torch.nn.Unflatten(dim, unflattened_size) [source]
Unflattens a tensor dim, expanding it to a desired shape. For use with Sequential.
dim specifies the dimension of the input tensor to be unflattened, and it can be either int or str when Tensor or NamedTensor is used, respectively.
unflattened_size is the new shape of the unflattened dimension of the tensor. It can be a tuple of ints, a list of ints, or torch.Size for Tensor input; or a NamedShape (tuple of (name, size) tuples) for NamedTensor input. Shape:
Input: (N, *dims)
Output: (N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})
Parameters
dim (Union[int, str]) β Dimension to be unflattened
unflattened_size (Union[torch.Size, Tuple, List, NamedShape]) β New shape of the unflattened dimension Examples: >>> input = torch.randn(2, 50)
>>> # With tuple of ints
>>> m = nn.Sequential(
>>> nn.Linear(50, 50),
>>> nn.Unflatten(1, (2, 5, 5))
>>> )
>>> output = m(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
>>> # With torch.Size
>>> m = nn.Sequential(
>>> nn.Linear(50, 50),
>>> nn.Unflatten(1, torch.Size([2, 5, 5]))
>>> )
>>> output = m(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
>>> # With namedshape (tuple of tuples)
>>> input = torch.randn(2, 50, names=('N', 'features'))
>>> unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5)))
>>> output = unflatten(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters
name (string) β name of the child module. The child module can be accessed from this module using the given name
module (Module) β child module to be added to the module.
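As a sketch of the behavior described above, a child registered with add_module becomes reachable both as an attribute and through the child-iteration methods (the module and attribute names here are illustrative):

```python
import torch.nn as nn

# add_module registers a submodule under the given name; this is
# equivalent to assigning it as an attribute on the module.
model = nn.Module()
model.add_module('fc', nn.Linear(4, 2))

# The child is now accessible as an attribute and via named_children().
assert model.fc.out_features == 2
assert 'fc' in dict(model.named_children())
```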
apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) β function to be applied to each submodule Returns
self Return type
Module Example: >>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16()
Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module
buffers(recurse=True)
Returns an iterator over module buffers. Parameters
recurse (bool) β if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor β module buffer Example: >>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children()
Returns an iterator over immediate children modules. Yields
Module β a child module
cpu()
Moves all model parameters and buffers to the CPU. Returns
self Return type
Module
cuda(device=None)
Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized. Parameters
device (int, optional) β if specified, all parameters will be copied to that device Returns
self Return type
Module
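The note above about moving the module before constructing the optimizer can be sketched as follows (this sketch uses to() so it also runs on CPU-only machines; the ordering principle for cuda() is the same):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Recommended order: move the module to its target device first,
# then build the optimizer from the (possibly re-created) parameters.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)  # model.cuda() is the GPU-only equivalent
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The optimizer now tracks the on-device parameter tensors.
assert all(p.device == device
           for group in optimizer.param_groups for p in group['params'])
```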
double()
Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module
eval()
Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns
self Return type
Module
float()
Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module
half()
Casts all floating point parameters and buffers to half datatype. Returns
self Return type
Module
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this moduleβs state_dict() function. Parameters
state_dict (dict) β a dict containing parameters and persistent buffers.
strict (bool, optional) β whether to strictly enforce that the keys in state_dict match the keys returned by this moduleβs state_dict() function. Default: True
Returns
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys Return type
NamedTuple with missing_keys and unexpected_keys fields
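A short illustration of the NamedTuple return value with strict=False (the module shapes here are arbitrary):

```python
import torch.nn as nn

src = nn.Linear(4, 2)
dst = nn.Sequential(nn.Linear(4, 2))

# The source keys ('weight', 'bias') do not match the destination keys
# ('0.weight', '0.bias'), so a strict load would raise; strict=False
# reports the mismatches instead of raising.
result = dst.load_state_dict(src.state_dict(), strict=False)
assert result.missing_keys == ['0.weight', '0.bias']
assert result.unexpected_keys == ['weight', 'bias']
```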
modules()
Returns an iterator over all modules in the network. Yields
Module β a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
print(idx, '->', m)
0 -> Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters
prefix (str) β prefix to prepend to all buffer names.
recurse (bool) β if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
(string, torch.Tensor) β Tuple containing the name and buffer Example: >>> for name, buf in self.named_buffers():
>>> if name in ['running_var']:
>>> print(buf.size())
named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields
(string, Module) β Tuple containing a name and child module Example: >>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> print(module)
named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields
(string, Module) β Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
print(idx, '->', m)
0 -> ('', Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters
prefix (str) β prefix to prepend to all parameter names.
recurse (bool) β if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
(string, Parameter) β Tuple containing the name and parameter Example: >>> for name, param in self.named_parameters():
>>> if name in ['bias']:
>>> print(param.size())
parameters(recurse=True)
Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters
recurse (bool) β if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
Parameter β module parameter Example: >>> for param in model.parameters():
>>> print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook)
Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module. This is typically used to register a buffer that should not to be considered a model parameter. For example, BatchNormβs running_mean is not a parameter, but is part of the moduleβs state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this moduleβs state_dict. Buffers can be accessed as attributes using given names. Parameters
name (string) β name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor) β buffer to be registered.
persistent (bool) β whether the buffer is part of this moduleβs state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features))
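The persistence distinction above can be sketched as follows (the Norm class and buffer names are illustrative):

```python
import torch
import torch.nn as nn

class Norm(nn.Module):
    def __init__(self):
        super().__init__()
        # Persistent buffer: saved in state_dict alongside parameters.
        self.register_buffer('running_mean', torch.zeros(3))
        # Non-persistent buffer: moves/casts with the module but is
        # not serialized into state_dict.
        self.register_buffer('scratch', torch.zeros(3), persistent=False)

m = Norm()
keys = m.state_dict().keys()
assert 'running_mean' in keys
assert 'scratch' not in keys
```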
register_forward_hook(hook)
Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks, only to the forward. The hook can modify the output. It can also modify the input in-place, but that will have no effect on forward since the hook is called after forward() has run. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
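A minimal sketch of registering and removing a forward hook (the hook function here is illustrative):

```python
import torch
import torch.nn as nn

outputs = []

def capture(module, inputs, output):
    # Called after forward(); record the output for inspection.
    outputs.append(output.detach())

layer = nn.Linear(4, 2)
handle = layer.register_forward_hook(capture)
layer(torch.randn(1, 4))
handle.remove()          # the hook no longer fires after removal
layer(torch.randn(1, 4))
assert len(outputs) == 1
```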
register_forward_pre_hook(hook)
Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks, only to the forward. The hook can modify the input. The user can return either a tuple or a single modified value from the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
register_full_backward_hook(hook)
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
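A minimal sketch (the layer and hook names are illustrative): the hook observes the grad_input/grad_output tuples during the backward pass:

```python
import torch
import torch.nn as nn

grads = []

def log_grad(module, grad_input, grad_output):
    # grad_input / grad_output are tuples of gradients with respect
    # to the module's inputs and outputs, respectively.
    grads.append(grad_output[0].detach())

layer = nn.Linear(4, 2)
handle = layer.register_full_backward_hook(log_grad)
layer(torch.randn(3, 4)).sum().backward()
handle.remove()
assert len(grads) == 1 and grads[0].shape == (3, 2)
```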
register_parameter(name, param)
Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) β name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) β parameter to be added to the module.
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in this module. This method sets the parametersβ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parameters
requires_grad (bool) β whether autograd should record operations on parameters in this module. Default: True. Returns
self Return type
Module
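The freezing use case mentioned above can be sketched as:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Freeze the first layer for fine-tuning; only the second layer
# will accumulate gradients and be updated by an optimizer.
model[0].requires_grad_(False)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
assert trainable == ['1.weight', '1.bias']
```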
state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return type
dict Example: >>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs)
Moves and/or casts the parameters and buffers. This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples. Note This method modifies the module in-place. Parameters
device (torch.device) β the desired device of the parameters and buffers in this module
dtype (torch.dtype) β the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) β Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) β the desired memory format for 4D parameters and buffers in this module (keyword only argument) Returns
self Return type
Module Examples: >>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j, 0.2382+0.j],
[ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
train(mode=True)
Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters
mode (bool) β whether to set training mode (True) or evaluation mode (False). Default: True. Returns
self Return type
Module
type(dst_type)
Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) β the desired type Returns
self Return type
Module
xpu(device=None)
Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on XPU while being optimized. Parameters
device (int, optional) β if specified, all parameters will be copied to that device Returns
self Return type
Module
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) β instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | torch.generated.torch.nn.unflatten#torch.nn.Unflatten |
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters
name (string) β name of the child module. The child module can be accessed from this module using the given name
module (Module) β child module to be added to the module. | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.add_module |
apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) β function to be applied to each submodule Returns
self Return type
Module Example: >>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
) | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.apply |
bfloat16()
Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.bfloat16 |
buffers(recurse=True)
Returns an iterator over module buffers. Parameters
recurse (bool) β if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor β module buffer Example: >>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L) | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.buffers |
children()
Returns an iterator over immediate children modules. Yields
Module β a child module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.children |
cpu()
Moves all model parameters and buffers to the CPU. Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.cpu |
cuda(device=None)
Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized. Parameters
device (int, optional) β if specified, all parameters will be copied to that device Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.cuda |
double()
Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.double |
eval()
Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.eval |
float()
Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.float |
half()
Casts all floating point parameters and buffers to half datatype. Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.half |