torch.nonzero(input, *, out=None, as_tuple=False) → LongTensor or tuple of LongTensors
Note torch.nonzero(..., as_tuple=False) (default) returns a 2-D tensor where each row is the index for a nonzero value. torch.nonzero(..., as_tuple=True) returns a tuple of 1-D index tensors, allowing for advanced indexing, so x[x.nonzero(as_tuple=True)] gives all nonzero values of tensor x. Of the returned tuple, each index tensor contains nonzero indices for a certain dimension. See below for more details on the two behaviors. When input is on CUDA, torch.nonzero() causes host-device synchronization. When as_tuple is False (default): Returns a tensor containing the indices of all non-zero elements of input. Each row in the result contains the indices of a non-zero element in input. The result is sorted lexicographically, with the last index changing the fastest (C-style). If input has n dimensions, then the resulting indices tensor out is of size (z × n), where z is the total number of non-zero elements in the input tensor. When as_tuple is True: Returns a tuple of 1-D tensors, one for each dimension in input, each containing the indices (in that dimension) of all non-zero elements of input. If input has n dimensions, then the resulting tuple contains n tensors of size z, where z is the total number of non-zero elements in the input tensor. As a special case, when input has zero dimensions and a nonzero scalar value, it is treated as a one-dimensional tensor with one element. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (LongTensor, optional) – the output tensor containing indices Returns
If as_tuple is False, the output tensor containing indices. If as_tuple is True, one 1-D tensor for each dimension, containing the indices of each nonzero element along that dimension. Return type
LongTensor or tuple of LongTensor Example: >>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
tensor([[ 0],
[ 1],
[ 2],
[ 4]])
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
... [0.0, 0.4, 0.0, 0.0],
... [0.0, 0.0, 1.2, 0.0],
... [0.0, 0.0, 0.0,-0.4]]))
tensor([[ 0, 0],
[ 1, 1],
[ 2, 2],
[ 3, 3]])
>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]), as_tuple=True)
(tensor([0, 1, 2, 4]),)
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
... [0.0, 0.4, 0.0, 0.0],
... [0.0, 0.0, 1.2, 0.0],
... [0.0, 0.0, 0.0,-0.4]]), as_tuple=True)
(tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))
>>> torch.nonzero(torch.tensor(5), as_tuple=True)
(tensor([0]),)
torch.norm(input, p='fro', dim=None, keepdim=False, out=None, dtype=None) [source]
Returns the matrix norm or vector norm of a given tensor. Warning torch.norm is deprecated and may be removed in a future PyTorch release. Use torch.linalg.norm() instead, but note that torch.linalg.norm() has a different signature and slightly different behavior that is more consistent with NumPy’s numpy.linalg.norm. Parameters
input (Tensor) – The input tensor. Its data type must be either a floating point or complex type. For complex inputs, the norm is calculated using the absolute value of each element. If the input is complex and neither dtype nor out is specified, the result’s data type will be the corresponding floating point type (e.g. float if input is complexfloat).
p (int, float, inf, -inf, 'fro', 'nuc', optional) –
the order of norm. Default: 'fro' The following norms can be calculated:
ord      matrix norm       vector norm
'fro'    Frobenius norm    –
'nuc'    nuclear norm      –
Number   –                 sum(abs(x)**ord)**(1./ord)
The vector norm can be calculated across any number of dimensions. The corresponding dimensions of input are flattened into one dimension, and the norm is calculated on the flattened dimension. Frobenius norm produces the same result as p=2 in all cases except when dim is a list of three or more dims, in which case Frobenius norm throws an error. Nuclear norm can only be calculated across exactly two dimensions.
dim (int, tuple of python:ints, list of python:ints, optional) – Specifies which dimension or dimensions of input to calculate the norm across. If dim is None, the norm will be calculated across all dimensions of input. If the norm type indicated by p does not support the specified number of dimensions, an error will occur.
keepdim (bool, optional) – whether the output tensors have dim retained or not. Ignored if dim = None and out = None. Default: False
out (Tensor, optional) – the output tensor. Ignored if dim = None and out = None.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype while performing the operation. Default: None. Note Even though p='fro' supports any number of dimensions, the true mathematical definition of Frobenius norm only applies to tensors with exactly two dimensions. torch.linalg.norm() with ord='fro' aligns with the mathematical definition, since it can only be applied across exactly two dimensions. Example: >>> import torch
>>> a = torch.arange(9, dtype= torch.float) - 4
>>> b = a.reshape((3, 3))
>>> torch.norm(a)
tensor(7.7460)
>>> torch.norm(b)
tensor(7.7460)
>>> torch.norm(a, float('inf'))
tensor(4.)
>>> torch.norm(b, float('inf'))
tensor(4.)
>>> c = torch.tensor([[ 1, 2, 3],[-1, 1, 4]] , dtype= torch.float)
>>> torch.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> torch.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> torch.norm(c, p=1, dim=1)
tensor([6., 6.])
>>> d = torch.arange(8, dtype= torch.float).reshape(2,2,2)
>>> torch.norm(d, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> torch.norm(d[0, :, :]), torch.norm(d[1, :, :])
(tensor(3.7417), tensor(11.2250))
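Given the deprecation warning above, a minimal migration sketch (reusing a and b from the example; by default torch.linalg.norm computes the 2-norm for 1-D inputs and the Frobenius norm for 2-D inputs, so the results match torch.norm here):
>>> torch.linalg.norm(a)
tensor(7.7460)
>>> torch.linalg.norm(b)
tensor(7.7460)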
torch.normal(mean, std, *, generator=None, out=None) → Tensor
Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given. The mean is a tensor with the mean of each output element's normal distribution. The std is a tensor with the standard deviation of each output element's normal distribution. The shapes of mean and std don't need to match, but the total number of elements in each tensor needs to be the same. Note When the shapes do not match, the shape of mean is used as the shape for the returned output tensor (illustrated after the example below). Parameters
mean (Tensor) – the tensor of per-element means
std (Tensor) – the tensor of per-element standard deviations Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling
out (Tensor, optional) – the output tensor. Example: >>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))
tensor([ 1.0425, 3.5672, 2.7969, 4.2925, 4.7229, 6.2134,
8.0505, 8.1408, 9.0563, 10.0566])
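To illustrate the note above, a small sketch where the shapes of mean and std differ but their element counts match; the output takes the shape of mean:
>>> out = torch.normal(mean=torch.zeros(2, 2), std=torch.ones(4))
>>> out.shape
torch.Size([2, 2])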
torch.normal(mean=0.0, std, *, out=None) → Tensor
Similar to the function above, but the means are shared among all drawn elements. Parameters
mean (float, optional) – the mean for all distributions
std (Tensor) – the tensor of per-element standard deviations Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.normal(mean=0.5, std=torch.arange(1., 6.))
tensor([-1.2793, -1.0732, -2.0687, 5.1177, -1.2303])
torch.normal(mean, std=1.0, *, out=None) → Tensor
Similar to the function above, but the standard-deviations are shared among all drawn elements. Parameters
mean (Tensor) – the tensor of per-element means
std (float, optional) – the standard deviation for all distributions Keyword Arguments
out (Tensor, optional) – the output tensor Example: >>> torch.normal(mean=torch.arange(1., 6.))
tensor([ 1.1552, 2.6148, 2.6535, 5.8318, 4.2361])
torch.normal(mean, std, size, *, out=None) → Tensor
Similar to the function above, but the means and standard deviations are shared among all drawn elements. The resulting tensor has size given by size. Parameters
mean (float) – the mean for all distributions
std (float) – the standard deviation for all distributions
size (int...) – a sequence of integers defining the shape of the output tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.normal(2, 3, size=(1, 4))
tensor([[-1.3987, -1.9544, 3.6048, 0.7909]])
torch.not_equal(input, other, *, out=None) → Tensor
Alias for torch.ne().
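A minimal usage sketch (following the semantics of torch.ne(), which this aliases; element-wise inequality returns a BoolTensor):
>>> torch.not_equal(torch.tensor([1, 2]), torch.tensor([1, 3]))
tensor([False,  True])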
class torch.no_grad [source]
Context-manager that disables gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. This context manager is thread local; it will not affect computation in other threads. Also functions as a decorator. (Make sure to instantiate with parentheses.) Example: >>> x = torch.tensor([1], requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
... return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
torch.numel(input) → int
Returns the total number of elements in the input tensor. Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 2, 3, 4, 5)
>>> torch.numel(a)
120
>>> a = torch.zeros(4,4)
>>> torch.numel(a)
16
torch.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size. Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.ones(2, 3)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.]])
>>> torch.ones(5)
tensor([ 1., 1., 1., 1., 1.])
torch.ones_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor filled with the scalar value 1, with the same size as input. torch.ones_like(input) is equivalent to torch.ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Warning As of 0.4, this function does not support an out keyword. As an alternative, the old torch.ones_like(input, out=output) is equivalent to torch.ones(input.size(), out=output). Parameters
input (Tensor) – the size of input will determine size of the output tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. Example: >>> input = torch.empty(2, 3)
>>> torch.ones_like(input)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.]])
torch.onnx
Contents:
- Example: End-to-end AlexNet from PyTorch to ONNX
- Tracing vs Scripting
- Write PyTorch model in Torch way
- Using dictionaries to handle Named Arguments as model inputs
- Indexing (Getter, Setter)
- TorchVision support
- Limitations
- Supported operators
- Adding support for operators (ATen operators, Non-ATen operators, Custom operators)
- Operator Export Type (ONNX, ONNX_ATEN, ONNX_ATEN_FALLBACK, RAW, ONNX_FALLTHROUGH)
- Frequently Asked Questions
- Use external data format
- Training
- Functions
Example: End-to-end AlexNet from PyTorch to ONNX
Here is a simple script which exports a pretrained AlexNet as defined in torchvision into ONNX. It runs a single round of inference and then saves the resulting traced model to alexnet.onnx:
import torch
import torchvision
dummy_input = torch.randn(10, 3, 224, 224, device='cuda')
model = torchvision.models.alexnet(pretrained=True).cuda()
# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
The resulting alexnet.onnx is a binary protobuf file which contains both the network structure and parameters of the model you exported (in this case, AlexNet). The keyword argument verbose=True causes the exporter to print out a human-readable representation of the network: # These are the inputs and parameters to the network, which have taken on
# the names we specified earlier.
graph(%actual_input_1 : Float(10, 3, 224, 224)
%learned_0 : Float(64, 3, 11, 11)
%learned_1 : Float(64)
%learned_2 : Float(192, 64, 5, 5)
%learned_3 : Float(192)
# ---- omitted for brevity ----
%learned_14 : Float(1000, 4096)
%learned_15 : Float(1000)) {
# Every statement consists of some output tensors (and their types),
# the operator to be run (with its attributes, e.g., kernels, strides,
# etc.), its input tensors (%actual_input_1, %learned_0, %learned_1)
%17 : Float(10, 64, 55, 55) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%actual_input_1, %learned_0, %learned_1), scope: AlexNet/Sequential[features]/Conv2d[0]
%18 : Float(10, 64, 55, 55) = onnx::Relu(%17), scope: AlexNet/Sequential[features]/ReLU[1]
%19 : Float(10, 64, 27, 27) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%18), scope: AlexNet/Sequential[features]/MaxPool2d[2]
# ---- omitted for brevity ----
%29 : Float(10, 256, 6, 6) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%28), scope: AlexNet/Sequential[features]/MaxPool2d[12]
# Dynamic means that the shape is not known. This may be because of a
# limitation of our implementation (which we would like to fix in a
# future release) or shapes which are truly dynamic.
%30 : Dynamic = onnx::Shape(%29), scope: AlexNet
%31 : Dynamic = onnx::Slice[axes=[0], ends=[1], starts=[0]](%30), scope: AlexNet
%32 : Long() = onnx::Squeeze[axes=[0]](%31), scope: AlexNet
%33 : Long() = onnx::Constant[value={9216}](), scope: AlexNet
# ---- omitted for brevity ----
%output1 : Float(10, 1000) = onnx::Gemm[alpha=1, beta=1, broadcast=1, transB=1](%45, %learned_14, %learned_15), scope: AlexNet/Sequential[classifier]/Linear[6]
return (%output1);
}
You can also verify the protobuf using the ONNX library. You can install ONNX with conda: conda install -c conda-forge onnx
Then, you can run: import onnx
# Load the ONNX model
model = onnx.load("alexnet.onnx")
# Check that the IR is well formed
onnx.checker.check_model(model)
# Print a human readable representation of the graph
onnx.helper.printable_graph(model.graph)
To run the exported script with Caffe2, you will need to install Caffe2. If you don't have it already, please follow the install instructions. Once installed, you can use the backend for Caffe2: # ...continuing from above
import caffe2.python.onnx.backend as backend
import numpy as np
rep = backend.prepare(model, device="CUDA:0") # or "CPU"
# For the Caffe2 backend:
# rep.predict_net is the Caffe2 protobuf for the network
# rep.workspace is the Caffe2 workspace for the network
# (see the class caffe2.python.onnx.backend.Workspace)
outputs = rep.run(np.random.randn(10, 3, 224, 224).astype(np.float32))
# To run networks with more than one input, pass a tuple
# rather than a single numpy ndarray.
print(outputs[0])
You can also run the exported model with ONNX Runtime; you will need to install ONNX Runtime by following these instructions. Once installed, you can use the backend for ONNX Runtime: # ...continuing from above
import onnxruntime as ort
ort_session = ort.InferenceSession('alexnet.onnx')
outputs = ort_session.run(None, {'actual_input_1': np.random.randn(10, 3, 224, 224).astype(np.float32)})
print(outputs[0])
Here is another tutorial on exporting the SuperResolution model to ONNX. In the future, there will be backends for other frameworks as well. Tracing vs Scripting The ONNX exporter can be either a trace-based or a script-based exporter.
trace-based means that it operates by executing your model once and exporting the operators which were actually run during this run. This means that if your model is dynamic, e.g., changes behavior depending on input data, the export won't be accurate. Similarly, a trace is likely to be valid only for a specific input size (which is one reason why we require explicit inputs on tracing). We recommend examining the model trace and making sure the traced operators look reasonable. If your model contains control flow such as for loops and if conditions, the trace-based exporter will unroll them, exporting a static graph that is exactly the same as this run. If you want to export your model with dynamic control flow, you will need to use the script-based exporter.
script-based means that the model you are trying to export is a ScriptModule. ScriptModule is the core data structure in TorchScript, and TorchScript is a subset of the Python language that creates serializable and optimizable models from PyTorch code. We allow mixing tracing and scripting. You can compose tracing and scripting to suit the particular requirements of a part of a model. Check out this example: import torch
# Trace-based only
class LoopModel(torch.nn.Module):
def forward(self, x, y):
for i in range(y):
x = x + i
return x
model = LoopModel()
dummy_input = torch.ones(2, 3, dtype=torch.long)
loop_count = torch.tensor(5, dtype=torch.long)
torch.onnx.export(model, (dummy_input, loop_count), 'loop.onnx', verbose=True)
With trace-based exporter, we get the result ONNX graph which unrolls the for loop: graph(%0 : Long(2, 3),
%1 : Long()):
%2 : Tensor = onnx::Constant[value={1}]()
%3 : Tensor = onnx::Add(%0, %2)
%4 : Tensor = onnx::Constant[value={2}]()
%5 : Tensor = onnx::Add(%3, %4)
%6 : Tensor = onnx::Constant[value={3}]()
%7 : Tensor = onnx::Add(%5, %6)
%8 : Tensor = onnx::Constant[value={4}]()
%9 : Tensor = onnx::Add(%7, %8)
return (%9)
To utilize the script-based exporter for capturing the dynamic loop, we can write the loop in script and call it from the regular nn.Module: # Mixing tracing and scripting
@torch.jit.script
def loop(x, y):
for i in range(int(y)):
x = x + i
return x
class LoopModel2(torch.nn.Module):
def forward(self, x, y):
return loop(x, y)
model = LoopModel2()
dummy_input = torch.ones(2, 3, dtype=torch.long)
loop_count = torch.tensor(5, dtype=torch.long)
torch.onnx.export(model, (dummy_input, loop_count), 'loop.onnx', verbose=True,
input_names=['input_data', 'loop_range'])
Now the exported ONNX graph becomes: graph(%input_data : Long(2, 3),
%loop_range : Long()):
%2 : Long() = onnx::Constant[value={1}](), scope: LoopModel2/loop
%3 : Tensor = onnx::Cast[to=9](%2)
%4 : Long(2, 3) = onnx::Loop(%loop_range, %3, %input_data), scope: LoopModel2/loop # custom_loop.py:240:5
block0(%i.1 : Long(), %cond : bool, %x.6 : Long(2, 3)):
%8 : Long(2, 3) = onnx::Add(%x.6, %i.1), scope: LoopModel2/loop # custom_loop.py:241:13
%9 : Tensor = onnx::Cast[to=9](%2)
-> (%9, %8)
return (%4)
The dynamic control flow is captured correctly. We can verify it in backends with a different loop range. import caffe2.python.onnx.backend as backend
import numpy as np
import onnx
model = onnx.load('loop.onnx')
rep = backend.prepare(model)
outputs = rep.run((dummy_input.numpy(), np.array(9).astype(np.int64)))
print(outputs[0])
#[[37 37 37]
# [37 37 37]]
import onnxruntime as ort
ort_sess = ort.InferenceSession('loop.onnx')
outputs = ort_sess.run(None, {'input_data': dummy_input.numpy(),
'loop_range': np.array(9).astype(np.int64)})
print(outputs)
#[array([[37, 37, 37],
# [37, 37, 37]], dtype=int64)]
To avoid exporting a variable scalar tensor as a fixed value constant as part of the ONNX model, please avoid using torch.Tensor.item(). Torch supports implicit cast of single-element tensors to numbers. E.g.: class LoopModel(torch.nn.Module):
def forward(self, x, y):
res = []
arr = x.split(2, 0)
for i in range(int(y)):
res += [arr[i].sum(0, False)]
return torch.stack(res)
model = torch.jit.script(LoopModel())
inputs = (torch.randn(16), torch.tensor(8))
out = model(*inputs)
torch.onnx.export(model, inputs, 'loop_and_list.onnx', opset_version=11, example_outputs=out)
Write PyTorch model in Torch way PyTorch models can be written using numpy manipulations, but this is not appropriate when converting to an ONNX model. For the trace-based exporter, tracing treats numpy values as constant nodes, so it calculates the wrong result if we change the input. The PyTorch model therefore needs to be implemented using torch operators (see the torch-native sketch after the example below). For example, do not use numpy operators on numpy arrays: np.concatenate((x, y, z), axis=1)
do not convert to numpy types: y = x.astype(np.int)
Always use torch tensors and torch operators: torch.cat, etc. In addition, the Dropout layer needs to be defined in the __init__ function so that inference can handle it properly, i.e.:
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = nn.Dropout(0.5)
    def forward(self, x):
        x = self.dropout(x)
        return x
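As a sketch of torch-native replacements for the numpy patterns above (x, y, z are hypothetical stand-in tensors):
import torch
x, y, z = torch.randn(2, 3), torch.randn(2, 3), torch.randn(2, 3)
# instead of np.concatenate((x, y, z), axis=1)
out = torch.cat((x, y, z), dim=1)
# instead of y = x.astype(np.int)
y_int = x.to(torch.int64)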
Using dictionaries to handle Named Arguments as model inputs There are two ways to handle models that take named parameters or keyword arguments as inputs: The first method is to pass all the inputs in the same order as required by the model and pass None values for the keyword arguments that do not require a value to be passed. The second and more intuitive method is to represent the keyword arguments as key-value pairs, where the key represents the name of the argument in the model signature and the value represents the value of the argument to be passed. For example, in the model: class Model(torch.nn.Module):
def forward(self, x, y=None, z=None):
if y is not None:
return x + y
if z is not None:
return x + z
return x
m = Model()
x = torch.randn(2, 3)
z = torch.randn(2, 3)
There are two ways of exporting the model:
Not using a dictionary for the keyword arguments and passing all the inputs in the same order as required by the model: torch.onnx.export(model, (x, None, z), 'test.onnx')
Using a dictionary to represent the keyword arguments. This dictionary is always passed in addition to the non-keyword arguments and is always the last argument in the args tuple: torch.onnx.export(model, (x, {'y': None, 'z': z}), 'test.onnx')
For cases in which there are no keyword arguments, models can be exported with either an empty or no dictionary. For example, torch.onnx.export(model, (x, {}), 'test.onnx')
or
torch.onnx.export(model, (x, ), 'test.onnx')
An exception to this rule is the case in which the last input is also of dictionary type. In such cases it is mandatory to have an empty dictionary as the last argument in the args tuple. For example, class Model(torch.nn.Module):
def forward(self, k, x):
...
return x
m = Model()
k = torch.randn(2, 3)
x = {torch.tensor(1.): torch.randn(2, 3)}
Without the presence of the empty dictionary, the export call assumes that the 'x' input is intended to represent the optional dictionary consisting of named arguments. In order to prevent this from being an issue, a constraint is placed to provide an empty dictionary as the last input in the tuple args in such cases. The new call would look like this: torch.onnx.export(model, (k, x, {}), 'test.onnx')
Indexing Tensor indexing in PyTorch is very flexible and complicated. There are two categories of indexing. Both are largely supported in exporting today. If you are experiencing issues exporting indexing that belongs to the supported patterns below, please double check that you are exporting with the latest opset (opset_version=12). Getter This type of indexing occurs on the RHS. Export is supported for ONNX opset version >= 9. E.g.: data = torch.randn(3, 4)
index = torch.tensor([1, 2])
# RHS indexing is supported in ONNX opset >= 9.
class RHSIndexing(torch.nn.Module):
def forward(self, data, index):
return data[index]
out = RHSIndexing()(data, index)
torch.onnx.export(RHSIndexing(), (data, index), 'indexing.onnx', opset_version=9)
# onnxruntime
import onnxruntime
sess = onnxruntime.InferenceSession('indexing.onnx')
out_ort = sess.run(None, {
sess.get_inputs()[0].name: data.numpy(),
sess.get_inputs()[1].name: index.numpy(),
})
assert torch.all(torch.eq(out, torch.tensor(out_ort)))
Below is the list of supported patterns for RHS indexing. # Scalar indices
data[0, 1]
# Slice indices
data[:3]
# Tensor indices
data[torch.tensor([[1, 2], [2, 3]])]
data[torch.tensor([2, 3]), torch.tensor([1, 2])]
data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])]
data[torch.tensor([2, 3]), :, torch.tensor([1, 2])]
# Ellipsis
# Not supported in scripting
# i.e. torch.jit.script(model) will fail if model contains this pattern.
# Export is supported under tracing
# i.e. torch.onnx.export(model)
data[...]
# The combination of above
data[2, ..., torch.tensor([2, 1, 3]), 2:4, torch.tensor([[1], [2]])]
# Boolean mask (supported for ONNX opset version >= 11)
data[data != 1]
And below is the list of unsupported patterns for RHS indexing. # Tensor indices that includes negative values.
data[torch.tensor([[1, 2], [2, -3]]), torch.tensor([-2, 3])]
Setter In code, this type of indexing occurs on the LHS. Export is supported for ONNX opset version >= 11. E.g.: data = torch.zeros(3, 4)
new_data = torch.arange(4).to(torch.float32)
# LHS indexing is supported in ONNX opset >= 11.
class LHSIndexing(torch.nn.Module):
def forward(self, data, new_data):
data[1] = new_data
return data
out = LHSIndexing()(data, new_data)
data = torch.zeros(3, 4)
new_data = torch.arange(4).to(torch.float32)
torch.onnx.export(LHSIndexing(), (data, new_data), 'inplace_assign.onnx', opset_version=11)
# onnxruntime
import onnxruntime
sess = onnxruntime.InferenceSession('inplace_assign.onnx')
out_ort = sess.run(None, {
sess.get_inputs()[0].name: torch.zeros(3, 4).numpy(),
sess.get_inputs()[1].name: new_data.numpy(),
})
assert torch.all(torch.eq(out, torch.tensor(out_ort)))
Below is the list of supported patterns for LHS indexing. # Scalar indices
data[0, 1] = new_data
# Slice indices
data[:3] = new_data
# Tensor indices
# If more than one tensor is used as indices, only consecutive 1-d tensor indices are supported.
data[torch.tensor([[1, 2], [2, 3]])] = new_data
data[torch.tensor([2, 3]), torch.tensor([1, 2])] = new_data
# Ellipsis
# Not supported to export in script modules
# i.e. torch.onnx.export(torch.jit.script(model)) will fail if model contains this pattern.
# Export is supported under tracing
# i.e. torch.onnx.export(model)
data[...] = new_data
# The combination of above
data[2, ..., torch.tensor([2, 1, 3]), 2:4] += new_data
# Boolean mask
data[data != 1] = new_data
And below is the list of unsupported patterns for LHS indexing. # Multiple tensor indices if any has rank >= 2
data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])] = new_data
# Multiple tensor indices that are not consecutive
data[torch.tensor([2, 3]), :, torch.tensor([1, 2])] = new_data
# Tensor indices that includes negative values.
data[torch.tensor([1, -2]), torch.tensor([-2, 3])] = new_data
If you are experiencing issues exporting indexing that belongs to the above supported patterns, please double check that you are exporting with the latest opset (opset_version=12). TorchVision support All TorchVision models, except for quantized versions, are exportable to ONNX. More details can be found in TorchVision. Limitations Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Users need to verify their dict inputs carefully, and keep in mind that dynamic lookups are not available. PyTorch and ONNX backends (Caffe2, ONNX Runtime, etc.) often have implementations of operators with some numeric differences. Depending on model structure, these differences may be negligible, but they can also cause major divergences in behavior (especially on untrained models). We allow Caffe2 to call directly to Torch implementations of operators, to help you smooth over these differences when precision is important, and to also document these differences. Supported operators The following operators are supported: BatchNorm ConstantPadNd Conv Dropout Embedding (no optional arguments supported) EmbeddingBag FeatureDropout (training mode not supported) Index MaxPool1d MaxPool2d MaxPool3d RNN abs absolute acos adaptive_avg_pool1d adaptive_avg_pool2d adaptive_avg_pool3d adaptive_max_pool1d adaptive_max_pool2d adaptive_max_pool3d add (nonzero alpha not supported) addmm and arange argmax argmin asin atan avg_pool1d avg_pool2d avg_pool3d as_strided baddbmm bitshift cat ceil celu clamp clamp_max clamp_min concat copy cos cumsum det dim_arange div dropout einsum elu empty empty_like eq erf exp expand expand_as eye flatten floor floor_divide frobenius_norm full full_like gather ge gelu glu group_norm gt hardswish hardtanh im2col index_copy index_fill index_put index_select instance_norm interpolate isnan KLDivLoss layer_norm le leaky_relu len log log1p log2 log_sigmoid log_softmax logdet logsumexp lt masked_fill masked_scatter masked_select max mean min mm mul multinomial narrow ne neg new_empty new_full new_zeros nll_loss nonzero norm ones ones_like or permute pixel_shuffle pow prelu (single weight shared among input channels not supported) prod rand randn randn_like reciprocal reflection_pad relu repeat replication_pad reshape reshape_as round rrelu rsqrt rsub scalar_tensor scatter scatter_add select selu sigmoid sign sin size slice softmax softplus sort split sqrt squeeze stack std sub (nonzero alpha not supported) sum t tan tanh threshold (non-zero threshold/non-zero value not supported) to topk transpose true_divide type_as unbind unfold (experimental support with ATen-Caffe2 integration) unique unsqueeze upsample_nearest1d upsample_nearest2d upsample_nearest3d view weight_norm where zeros zeros_like The operator set above is sufficient to export the following models: AlexNet DCGAN DenseNet Inception (warning: this model is highly sensitive to changes in operator implementation) ResNet SuperResolution VGG word_language_model Adding support for operators Adding export support for operators is advanced usage. To achieve this, developers need to touch the source code of PyTorch. Please follow the instructions for installing PyTorch from source. If the wanted operator is standardized in ONNX, it should be easy to add support for exporting such an operator (adding a symbolic function for the operator). To confirm whether the operator is standardized or not, please check the ONNX operator list.
ATen operators If the operator is an ATen operator, which means you can find the declaration of the function in torch/csrc/autograd/generated/VariableType.h (available in generated code in the PyTorch install dir), you should add the symbolic function in torch/onnx/symbolic_opset<version>.py and follow the instructions listed below: Define the symbolic function in torch/onnx/symbolic_opset<version>.py, for example torch/onnx/symbolic_opset9.py. Make sure the function has the same name as the ATen operator/function defined in VariableType.h. The first parameter is always the exported ONNX graph. Parameter names must EXACTLY match the names in VariableType.h, because dispatch is done with keyword arguments. Parameter ordering does NOT necessarily match what is in VariableType.h: tensors (inputs) are always first, then non-tensor arguments. In the symbolic function, if the operator is already standardized in ONNX, we only need to create a node to represent the ONNX operator in the graph. If the input argument is a tensor, but ONNX asks for a scalar, we have to explicitly do the conversion. The helper function _scalar can convert a scalar tensor into a Python scalar, and _if_scalar_type_as can turn a Python scalar into a PyTorch tensor. Non-ATen operators If the operator is a non-ATen operator, the symbolic function has to be added in the corresponding PyTorch Function class. Please read the following instructions: Create a symbolic function named symbolic in the corresponding Function class. The first parameter is always the exported ONNX graph. Parameter names except the first must EXACTLY match the names in forward. The output tuple size must match the outputs of forward. In the symbolic function, if the operator is already standardized in ONNX, we just need to create a node to represent the ONNX operator in the graph. Symbolic functions should be implemented in Python. All of these functions interact with Python methods which are implemented via C++-Python bindings, but intuitively the interface they provide looks like this: def operator/symbolic(g, *inputs):
"""
Modifies Graph (e.g., using "op"), adding the ONNX operations representing
this PyTorch function, and returning a Value or tuple of Values specifying the
ONNX outputs whose values correspond to the original PyTorch return values
of the autograd Function (or None if an output is not supported by ONNX).
Args:
g (Graph): graph to write the ONNX representation into
inputs (Value...): list of values representing the variables which contain
the inputs for this function
"""
class Value(object):
"""Represents an intermediate tensor value computed in ONNX."""
def type(self):
"""Returns the Type of the value."""
class Type(object):
def sizes(self):
"""Returns a tuple of ints representing the shape of a tensor this describes."""
class Graph(object):
def op(self, opname, *inputs, **attrs):
"""
Create an ONNX operator 'opname', taking 'args' as inputs
and attributes 'kwargs' and add it as a node to the current graph,
returning the value representing the single output of this
operator (see the `outputs` keyword argument for multi-return
nodes).
The set of operators and the inputs/attributes they take
is documented at https://github.com/onnx/onnx/blob/master/docs/Operators.md
Args:
opname (string): The ONNX operator name, e.g., `Abs` or `Add`.
args (Value...): The inputs to the operator; usually provided
as arguments to the `symbolic` definition.
kwargs: The attributes of the ONNX operator, with keys named
according to the following convention: `alpha_f` indicates
the `alpha` attribute with type `f`. The valid type specifiers are
`f` (float), `i` (int), `s` (string) or `t` (Tensor). An attribute
specified with type float accepts either a single float, or a
list of floats (e.g., you would say `dims_i` for a `dims` attribute
that takes a list of integers).
outputs (int, optional): The number of outputs this operator returns;
by default an operator is assumed to return a single output.
If `outputs` is greater than one, this functions returns a tuple
of output `Value`, representing each output of the ONNX operator
in positional.
"""
The ONNX graph C++ definition is in torch/csrc/jit/ir/ir.h. Here is an example of handling missing symbolic function for elu operator. We try to export the model and see the error message as below: UserWarning: ONNX export failed on elu because torch.onnx.symbolic_opset9.elu does not exist
RuntimeError: ONNX export failed: Couldn't export operator elu
The export fails because PyTorch does not support exporting the elu operator. We find virtual Tensor elu(const Tensor & input, Scalar alpha, bool inplace) const override; in VariableType.h. This means elu is an ATen operator. We check the ONNX operator list, and confirm that Elu is standardized in ONNX. We add the following lines to symbolic_opset9.py: def elu(g, input, alpha, inplace=False):
return g.op("Elu", input, alpha_f=_scalar(alpha))
Now PyTorch is able to export the elu operator. There are more examples in symbolic_opset9.py, symbolic_opset10.py. The interface for specifying operator definitions is experimental; adventurous users should note that the APIs will probably change in a future interface. Custom operators Following the tutorial Extending TorchScript with Custom C++ Operators, you can create and register your own custom ops implementation in PyTorch. Here's how to export such a model to ONNX: # Create custom symbolic function
from torch.onnx.symbolic_helper import parse_args
@parse_args('v', 'v', 'f', 'i')
def symbolic_foo_forward(g, input1, input2, attr1, attr2):
return g.op("Foo", input1, input2, attr1_f=attr1, attr2_i=attr2)
# Register custom symbolic function
from torch.onnx import register_custom_op_symbolic
register_custom_op_symbolic('custom_ops::foo_forward', symbolic_foo_forward, 9)
class FooModel(torch.nn.Module):
def __init__(self, attr1, attr2):
super(FooModel, self).__init__()
self.attr1 = attr1
self.attr2 = attr2
def forward(self, input1, input2):
# Calling custom op
return torch.ops.custom_ops.foo_forward(input1, input2, self.attr1, self.attr2)
model = FooModel(attr1, attr2)
torch.onnx.export(model, (dummy_input1, dummy_input2), 'model.onnx', custom_opsets={"custom_domain": 2})
Depending on the custom operator, you can export it as one or a combination of existing ONNX ops. You can also export it as a custom op in ONNX. In that case, you can specify the custom domain and version (custom opset) using the custom_opsets dictionary at export. If not explicitly specified, the custom opset version is set to 1 by default. Using custom ONNX ops, you will need to extend the backend of your choice with a matching custom ops implementation, e.g. Caffe2 custom ops, ONNX Runtime custom ops. Operator Export Type Exporting models with unsupported ONNX operators can be achieved using the operator_export_type flag in the export API. This flag is useful when users try to export ATen and non-ATen operators that are not registered and supported in ONNX. ONNX This mode is used to export all operators as regular ONNX operators. This is the default operator_export_type mode; a usage sketch follows the example below. Example torch ir graph:
graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
%3 : Float(2, 3, 4, strides=[12, 4, 1]) = aten:exp(%0)
%4 : Float(2, 3, 4, strides=[12, 4, 1]) = aten:div(%0, %3)
return (%4)
Is exported as:
graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
%1 : Float(2, 3, 4, strides=[12, 4, 1]) = onnx:Exp(%0)
%2 : Float(2, 3, 4, strides=[12, 4, 1]) = onnx:Div(%0, %1)
return (%2)
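For reference, the mode is selected via the operator_export_type argument of the export call; a minimal sketch (model and dummy_input are assumed to be defined as in the earlier examples):
torch.onnx.export(model, dummy_input, 'model.onnx',
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX)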
ONNX_ATEN This mode is used to export all operators as ATen ops, and avoid conversion to ONNX. Example torch ir graph:
graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
%3 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::exp(%0)
%4 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::div(%0, %3)
return (%4)
Is exported as:
graph(%0 : Float(2, 3, 4, strides=[12, 4, 1])):
%1 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::ATen[operator="exp"](%0)
%2 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::ATen[operator="div"](%0, %1)
return (%2)
ONNX_ATEN_FALLBACK To fallback on unsupported ATen operators in ONNX. Supported operators are exported to ONNX regularly. In the following example, aten::triu is not supported in ONNX. Exporter falls back on this operator. Example torch ir graph:
graph(%0 : Float):
%3 : int = prim::Constant[value=0]()
%4 : Float = aten::triu(%0, %3) # unsupported op
%5 : Float = aten::mul(%4, %0) # registered op
return (%5)
is exported as:
graph(%0 : Float):
%1 : Long() = onnx::Constant[value={0}]()
%2 : Float = aten::ATen[operator="triu"](%0, %1) # unsupported op
%3 : Float = onnx::Mul(%2, %0) # registered op
return (%3)
RAW To export a raw ir. Example torch ir graph:
graph(%x.1 : Float(1, strides=[1])):
%1 : Tensor = aten::exp(%x.1)
%2 : Tensor = aten::div(%x.1, %1)
%y.1 : Tensor[] = prim::ListConstruct(%2)
return (%y.1)
is exported as:
graph(%x.1 : Float(1, strides=[1])):
%1 : Tensor = aten::exp(%x.1)
%2 : Tensor = aten::div(%x.1, %1)
%y.1 : Tensor[] = prim::ListConstruct(%2)
return (%y.1)
ONNX_FALLTHROUGH This mode can be used to export any operator (ATen or non-ATen) that is not registered and supported in ONNX. The exporter falls through and exports the operator as is, as a custom op. Exporting custom operators enables users to register and implement the operator as part of their runtime backend. Example torch ir graph:
graph(%0 : Float(2, 3, 4, strides=[12, 4, 1]),
%1 : Float(2, 3, 4, strides=[12, 4, 1])):
%6 : Float(2, 3, 4, strides=[12, 4, 1]) = foo_namespace::bar(%0, %1) # custom op
%7 : Float(2, 3, 4, strides=[12, 4, 1]) = aten::div(%6, %0) # registered op
return (%7)
is exported as:
graph(%0 : Float(2, 3, 4, strides=[12, 4, 1]),
%1 : Float(2, 3, 4, strides=[12, 4, 1])):
%2 : Float(2, 3, 4, strides=[12, 4, 1]) = foo_namespace::bar(%0, %1) # custom op
%3 : Float(2, 3, 4, strides=[12, 4, 1]) = onnx::Div(%2, %0) # registered op
return (%3)
Frequently Asked Questions Q: I have exported my LSTM model, but its input size seems to be fixed? The tracer records the shapes of the example inputs in the graph. In case the model should accept inputs of dynamic shape, you can utilize the parameter dynamic_axes in the export API. layer_count = 4
model = nn.LSTM(10, 20, num_layers=layer_count, bidirectional=True)
model.eval()
with torch.no_grad():
input = torch.randn(5, 3, 10)
h0 = torch.randn(layer_count * 2, 3, 20)
c0 = torch.randn(layer_count * 2, 3, 20)
output, (hn, cn) = model(input, (h0, c0))
# default export
torch.onnx.export(model, (input, (h0, c0)), 'lstm.onnx')
onnx_model = onnx.load('lstm.onnx')
# input shape [5, 3, 10]
print(onnx_model.graph.input[0])
# export with `dynamic_axes`
torch.onnx.export(model, (input, (h0, c0)), 'lstm.onnx',
input_names=['input', 'h0', 'c0'],
output_names=['output', 'hn', 'cn'],
dynamic_axes={'input': {0: 'sequence'}, 'output': {0: 'sequence'}})
onnx_model = onnx.load('lstm.onnx')
# input shape ['sequence', 3, 10]
print(onnx_model.graph.input[0])
Q: How do I export models with loops in them? Please check out Tracing vs Scripting. Q: Does ONNX support implicit scalar datatype casting? No, but the exporter will try to handle that part. Scalars are converted to constant tensors in ONNX. The exporter will try to figure out the right datatype for scalars. However, for cases where it fails to do so, you will need to manually provide the datatype information. This often happens with scripted models, where the datatypes are not recorded. We are trying to improve datatype propagation in the exporter such that manual changes are not required in the future. class ImplicitCastType(torch.jit.ScriptModule):
@torch.jit.script_method
def forward(self, x):
# Exporter knows x is float32, will export '2' as float32 as well.
y = x + 2
# Without type propagation, exporter doesn't know the datatype of y.
# Thus '3' is exported as int64 by default.
return y + 3
# The following will export correctly.
# return y + torch.tensor([3], dtype=torch.float32)
x = torch.tensor([1.0], dtype=torch.float32)
torch.onnx.export(ImplicitCastType(), x, 'models/implicit_cast.onnx',
example_outputs=ImplicitCastType()(x))
Q: Is tensor in-place indexed assignment like data[index] = new_data supported? Yes, this is supported for ONNX opset version >= 11. Please check out Indexing. Q: Is a tensor list exportable to ONNX? Yes, this is supported for ONNX opset version >= 11. ONNX introduced the concept of Sequence in opset 11. Similar to a list, Sequence is a data type that contains an arbitrary number of Tensors. Associated operators are also introduced in ONNX, such as SequenceInsert, SequenceAt, etc. However, in-place list append within loops is not exportable to ONNX. To implement this, please use the in-place add operator. E.g.: class ListLoopModel(torch.nn.Module):
def forward(self, x):
res = []
res1 = []
arr = x.split(2, 0)
res2 = torch.zeros(3, 4, dtype=torch.long)
for i in range(len(arr)):
res += [arr[i].sum(0, False)]
res1 += [arr[-1 - i].sum(0, False)]
res2 += 1
return torch.stack(res), torch.stack(res1), res2
model = torch.jit.script(ListLoopModel())
inputs = torch.randn(16)
out = model(inputs)
torch.onnx.export(model, (inputs, ), 'loop_and_list.onnx', opset_version=11, example_outputs=out)
# onnxruntime
import onnxruntime
sess = onnxruntime.InferenceSession('loop_and_list.onnx')
out_ort = sess.run(None, {
sess.get_inputs()[0].name: inputs.numpy(),
})
assert all(torch.allclose(o, torch.tensor(o_ort)) for o, o_ort in zip(out, out_ort))
Use external data format The use_external_data_format argument in the export API enables export of models in ONNX external data format. With this option enabled, the exporter stores some model parameters in external binary files, rather than the ONNX file itself. These external binary files are stored in the same location as the ONNX file. Argument 'f' must be a string specifying the location of the model. model = torchvision.models.mobilenet_v2(pretrained=True)
input = torch.randn(2, 3, 224, 224, requires_grad=True)
torch.onnx.export(model, (input, ), './large_model.onnx', use_external_data_format=True)
This argument enables export of large models to ONNX. Models larger than 2GB cannot be exported in one file because of the protobuf size limit; users should set use_external_data_format to True to successfully export such models. Training The training argument in the export API allows users to export models in a training-friendly mode. TrainingMode.TRAINING exports the model in a training-friendly mode that avoids certain model optimizations which might interfere with model parameter training. TrainingMode.PRESERVE exports the model in inference mode if model.training is False; otherwise, it exports the model in a training-friendly mode. The default mode for this argument is TrainingMode.EVAL, which exports the model in inference mode. A minimal sketch follows.
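A hedged sketch of exporting in a training-friendly mode (assuming model and input are defined as in the earlier examples):
torch.onnx.export(model, (input, ), 'model_train.onnx',
                  training=torch.onnx.TrainingMode.TRAINING)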
Functions
torch.onnx.export(model, args, f, export_params=True, verbose=False, training=<TrainingMode.EVAL: 0>, input_names=None, output_names=None, aten=False, export_raw_ir=False, operator_export_type=None, opset_version=None, _retain_param_name=True, do_constant_folding=True, example_outputs=None, strip_doc_string=True, dynamic_axes=None, keep_initializers_as_inputs=None, custom_opsets=None, enable_onnx_checker=True, use_external_data_format=False) [source]
Export a model into ONNX format. This exporter runs your model once in order to get a trace of its execution to be exported; at the moment, it supports a limited set of dynamic models (e.g., RNNs.) Parameters
model (torch.nn.Module) – the model to be exported.
args (tuple of arguments or torch.Tensor, a dictionary consisting of named arguments (optional)) –
a dictionary to specify the input to the corresponding named parameter: - KEY: str, named parameter - VALUE: corresponding input. args can be structured either as:
ONLY A TUPLE OF ARGUMENTS or torch.Tensor: args = (x, y, z)
The inputs to the model, e.g., such that model(*args) is a valid invocation of the model. Any non-Tensor arguments will be hard-coded into the exported model; any Tensor arguments will become inputs of the exported model, in the order they occur in args. If args is a Tensor, this is equivalent to having called it with a 1-ary tuple of that Tensor.
A TUPLE OF ARGUMENTS WITH A DICTIONARY OF NAMED PARAMETERS: args = (x, {'y': input_y, 'z': input_z})
The inputs to the model are structured as a tuple consisting of non-keyword arguments, with the last value of this tuple being a dictionary of named parameters and the corresponding inputs as key-value pairs. If a certain named argument is not present in the dictionary, it is assigned the default value, or None if a default value is not provided. Cases in which a dictionary input is the last input of the args tuple would cause a conflict when a dictionary of named parameters is used. The model below provides such an example.
class Model(torch.nn.Module):
    def forward(self, k, x):
        ...
        return x
m = Model()
k = torch.randn(2, 3)
x = {torch.tensor(1.): torch.randn(2, 3)}
In the previous iteration, the call to the export API would look like torch.onnx.export(model, (k, x), 'test.onnx'). This would work as intended. However, the export function would now assume that the 'x' input is intended to represent the optional dictionary consisting of named arguments. In order to prevent this from being an issue, a constraint is placed to provide an empty dictionary as the last input in the tuple args in such cases. The new call would look like this: torch.onnx.export(model, (k, x, {}), 'test.onnx')
f – a file-like object (has to implement fileno that returns a file descriptor) or a string containing a file name. A binary Protobuf will be written to this file.
export_params (bool, default True) – if specified, all parameters will be exported. Set this to False if you want to export an untrained model. In this case, the exported model will first take all of its parameters as arguments, the ordering as specified by model.state_dict().values()
verbose (bool, default False) – if specified, we will print out a debug description of the trace being exported.
training (enum, default TrainingMode.EVAL) – TrainingMode.EVAL: export the model in inference mode. TrainingMode.PRESERVE: export the model in inference mode if model.training is False and to a training friendly mode if model.training is True. TrainingMode.TRAINING: export the model in a training friendly mode.
input_names (list of strings, default empty list) – names to assign to the input nodes of the graph, in order
output_names (list of strings, default empty list) – names to assign to the output nodes of the graph, in order
aten (bool, default False) – [DEPRECATED. Use operator_export_type.] Export the model in aten mode. If using aten mode, all the ops originally exported by the functions in symbolic_opset<version>.py are exported as ATen ops.
export_raw_ir (bool, default False) – [DEPRECATED. use operator_export_type] export the internal IR directly instead of converting it to ONNX ops.
operator_export_type (enum, default OperatorExportTypes.ONNX) –
OperatorExportTypes.ONNX: All ops are exported as regular ONNX ops (with ONNX namespace). OperatorExportTypes.ONNX_ATEN: All ops are exported as ATen ops (with aten namespace). OperatorExportTypes.ONNX_ATEN_FALLBACK: If an ATen op is not supported in ONNX or its symbolic is missing, fall back on the ATen op. Registered ops are exported to ONNX regularly. Example graph: graph(%0 : Float):
%3 : int = prim::Constant[value=0]()
%4 : Float = aten::triu(%0, %3) # missing op
%5 : Float = aten::mul(%4, %0) # registered op
return (%5)
is exported as: graph(%0 : Float):
%1 : Long() = onnx::Constant[value={0}]()
%2 : Float = aten::ATen[operator="triu"](%0, %1) # missing op
%3 : Float = onnx::Mul(%2, %0) # registered op
return (%3)
In the above example, aten::triu is not supported in ONNX, hence the exporter falls back on this op. OperatorExportTypes.RAW: Export raw ir. OperatorExportTypes.ONNX_FALLTHROUGH: If an op is not supported in ONNX, fall through and export the operator as is, as a custom ONNX op. Using this mode, the op can be exported and implemented by the user for their runtime backend. Example graph: graph(%x.1 : Long(1, strides=[1])):
%1 : None = prim::Constant()
%2 : Tensor = aten::sum(%x.1, %1)
%y.1 : Tensor[] = prim::ListConstruct(%2)
return (%y.1)
is exported as: graph(%x.1 : Long(1, strides=[1])):
%1 : Tensor = onnx::ReduceSum[keepdims=0](%x.1)
%y.1 : Long() = prim::ListConstruct(%1)
return (%y.1)
In the above example, prim::ListConstruct is not supported, hence exporter falls through.
opset_version (int, default is 9) – by default we export the model to the opset version of the onnx submodule. Since ONNX's latest opset may evolve before the next stable release, by default we export to one stable opset version. Right now, the supported stable opset version is 9. The opset_version must be _onnx_main_opset or in _onnx_stable_opsets, which are defined in torch/onnx/symbolic_helper.py.
do_constant_folding (bool, default True) – If True, the constant-folding optimization is applied to the model during export. Constant-folding optimization will replace some of the ops that have all constant inputs with pre-computed constant nodes.
example_outputs (tuple of Tensors, default None) – Model’s example outputs being exported. example_outputs must be provided when exporting a ScriptModule or TorchScript Function.
strip_doc_string (bool, default True) – if True, strips the field "doc_string" from the exported model, which contains information about the stack trace.
dynamic_axes (dict<string, dict<python:int, string>> or dict<string, list(int)>, default empty dict) –
a dictionary to specify dynamic axes of input/output, such that: - KEY: input and/or output names - VALUE: index of dynamic axes for given key and potentially the name to be used for exported dynamic axes. In general the value is defined according to one of the following ways or a combination of both: (1). A list of integers specifying the dynamic axes of provided input. In this scenario automated names will be generated and applied to dynamic axes of provided input/output during export. OR (2). An inner dictionary that specifies a mapping FROM the index of the dynamic axis in the corresponding input/output TO the name that is desired to be applied on such axis of such input/output during export. For example, if we have the following shapes for inputs and outputs: shape(input_1) = ('b', 3, 'w', 'h')
and shape(input_2) = ('b', 4)
and shape(output) = ('b', 'd', 5)
Then dynamic axes can be defined either as:
ONLY INDICES: ``dynamic_axes = {'input_1':[0, 2, 3],
'input_2':[0],
'output':[0, 1]}``
where automatic names will be generated for exported dynamic axes
INDICES WITH CORRESPONDING NAMES: ``dynamic_axes = {'input_1':{0:'batch',
1:'width',
2:'height'},
'input_2':{0:'batch'},
'output':{0:'batch',
1:'detections'}}``
where provided names will be applied to exported dynamic axes
MIXED MODE OF (1) and (2): ``dynamic_axes = {'input_1':[0, 2, 3],
'input_2':{0:'batch'},
'output':[0,1]}``
keep_initializers_as_inputs (bool, default None) –
If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, then initializers are not added as inputs to the graph, and only the non-parameter inputs are added as inputs. This may allow for better optimizations (such as constant folding etc.) by backends/runtimes that execute these graphs. If unspecified (default None), then the behavior is chosen automatically as follows. If operator_export_type is OperatorExportTypes.ONNX, the behavior is equivalent to setting this argument to False. For other values of operator_export_type, the behavior is equivalent to setting this argument to True. Note that for ONNX opset version < 9, initializers MUST be part of graph inputs. Therefore, if the opset_version argument is set to 8 or lower, this argument will be ignored.
custom_opsets (dict<string, int>, default empty dict) – A dictionary to indicate custom opset domain and version at export. If model contains a custom opset, it is optional to specify the domain and opset version in the dictionary: - KEY: opset domain name - VALUE: opset version If the custom opset is not provided in this dictionary, opset version is set to 1 by default.
enable_onnx_checker (bool, default True) – If True the onnx model checker will be run as part of the export, to ensure the exported model is a valid ONNX model.
use_external_data_format (bool, default False) – If True, then the model is exported in ONNX external data format, in which case some of the model parameters are stored in external binary files and not in the ONNX model file itself. See link for format details: https://github.com/onnx/onnx/blob/8b3f7e2e7a0f2aba0e629e23d89f07c7fc0e6a5e/onnx/onnx.proto#L423 Also, in this case, argument 'f' must be a string specifying the location of the model. The external binary files will be stored in the same location specified by the model location 'f'. If False, then the model is stored in regular format, i.e. model and parameters are all in one file. This argument is ignored for all export types other than ONNX.
torch.onnx.export_to_pretty_string(*args, **kwargs) [source]
torch.onnx.register_custom_op_symbolic(symbolic_name, symbolic_fn, opset_version) [source]
torch.onnx.operators.shape_as_tensor(x) [source]
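shape_as_tensor has no prose description here; as a brief hedged sketch, it returns the shape of its argument as a tensor, so shape arithmetic is traced as graph ops rather than baked in as Python constants:

import torch
from torch.onnx.operators import shape_as_tensor

x = torch.randn(2, 3, 5)
s = shape_as_tensor(x)  # tensor([2, 3, 5]); recorded as an op during tracing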
torch.onnx.select_model_mode_for_export(model, mode) [source]
A context manager to temporarily set the training mode of ‘model’ to ‘mode’, resetting it when we exit the with-block. A no-op if mode is None. Changed in version 1.6: renamed from set_training.
torch.onnx.is_in_onnx_export() [source]
Check whether it is in the middle of the ONNX export. This function returns True in the middle of torch.onnx.export(). torch.onnx.export should be executed with a single thread. | torch.onnx
torch.onnx.export(model, args, f, export_params=True, verbose=False, training=<TrainingMode.EVAL: 0>, input_names=None, output_names=None, aten=False, export_raw_ir=False, operator_export_type=None, opset_version=None, _retain_param_name=True, do_constant_folding=True, example_outputs=None, strip_doc_string=True, dynamic_axes=None, keep_initializers_as_inputs=None, custom_opsets=None, enable_onnx_checker=True, use_external_data_format=False) [source]
Export a model into ONNX format. This exporter runs your model once in order to get a trace of its execution to be exported; at the moment, it supports a limited set of dynamic models (e.g., RNNs). Parameters
model (torch.nn.Module) – the model to be exported.
args (tuple of arguments or torch.Tensor, optionally ending with a dictionary of named arguments) –
a dictionary to specify the input for the corresponding named parameter:
- KEY: str, named parameter
- VALUE: corresponding input
args can be structured either as:
ONLY A TUPLE OF ARGUMENTS or torch.Tensor: ``args = (x, y, z)``
The inputs to the model, e.g., such that model(*args) is a valid invocation of the model. Any non-Tensor arguments will be hard-coded into the exported model; any Tensor arguments will become inputs of the exported model, in the order they occur in args. If args is a Tensor, this is equivalent to having called it with a 1-ary tuple of that Tensor.
A TUPLE OF ARGUMENTS WITH A DICTIONARY OF NAMED PARAMETERS: ``args = (x, {'y': input_y, 'z': input_z})``
The inputs to the model are structured as a tuple consisting of non-keyword arguments, with the last value of the tuple being a dictionary of named parameters and their corresponding inputs as key-value pairs. If a named argument is not present in the dictionary, it is assigned its default value, or None if no default value is provided. A dictionary input that is the last input of the args tuple causes a conflict with a dictionary of named parameters. The model below provides such an example:
class Model(torch.nn.Module):
    def forward(self, k, x):
        ...
        return x

m = Model()
k = torch.randn(2, 3)
x = {torch.tensor(1.): torch.randn(2, 3)}
Previously, the call to the export API would look like torch.onnx.export(model, (k, x), 'test.onnx') and would work as intended. However, the export function now assumes that the ‘x’ input is intended to represent the optional dictionary of named arguments. To prevent this from being an issue, a constraint is placed to provide an empty dictionary as the last input in the args tuple in such cases. The new call looks like this: torch.onnx.export(model, (k, x, {}), 'test.onnx')
f – a file-like object (has to implement fileno that returns a file descriptor) or a string containing a file name. A binary Protobuf will be written to this file.
export_params (bool, default True) – if specified, all parameters will be exported. Set this to False if you want to export an untrained model. In this case, the exported model will first take all of its parameters as arguments, with the ordering as specified by model.state_dict().values()
verbose (bool, default False) – if specified, we will print out a debug description of the trace being exported.
training (enum, default TrainingMode.EVAL) – TrainingMode.EVAL: export the model in inference mode. TrainingMode.PRESERVE: export the model in inference mode if model.training is False and to a training friendly mode if model.training is True. TrainingMode.TRAINING: export the model in a training friendly mode.
input_names (list of strings, default empty list) – names to assign to the input nodes of the graph, in order
output_names (list of strings, default empty list) – names to assign to the output nodes of the graph, in order
aten (bool, default False) – [DEPRECATED. use operator_export_type] export the model in aten mode. If using aten mode, all the ops originally exported by the functions in symbolic_opset<version>.py are exported as ATen ops.
export_raw_ir (bool, default False) – [DEPRECATED. use operator_export_type] export the internal IR directly instead of converting it to ONNX ops.
operator_export_type (enum, default OperatorExportTypes.ONNX) –
OperatorExportTypes.ONNX: All ops are exported as regular ONNX ops (with ONNX namespace). OperatorExportTypes.ONNX_ATEN: All ops are exported as ATen ops (with aten namespace). OperatorExportTypes.ONNX_ATEN_FALLBACK: If an ATen op is not supported in ONNX or its symbolic is missing, fall back on the ATen op. Registered ops are exported to ONNX regularly. Example graph: graph(%0 : Float):
%3 : int = prim::Constant[value=0]()
%4 : Float = aten::triu(%0, %3) # missing op
%5 : Float = aten::mul(%4, %0) # registered op
return (%5)
is exported as: graph(%0 : Float):
%1 : Long() = onnx::Constant[value={0}]()
%2 : Float = aten::ATen[operator="triu"](%0, %1) # missing op
%3 : Float = onnx::Mul(%2, %0) # registered op
return (%3)
In the above example, aten::triu is not supported in ONNX, hence the exporter falls back to the ATen op. OperatorExportTypes.RAW: Export raw IR. OperatorExportTypes.ONNX_FALLTHROUGH: If an op is not supported in ONNX, fall through and export the operator as is, as a custom ONNX op. Using this mode, the op can be exported and implemented by the user for their runtime backend. Example graph: graph(%x.1 : Long(1, strides=[1])):
%1 : None = prim::Constant()
%2 : Tensor = aten::sum(%x.1, %1)
%y.1 : Tensor[] = prim::ListConstruct(%2)
return (%y.1)
is exported as: graph(%x.1 : Long(1, strides=[1])):
%1 : Tensor = onnx::ReduceSum[keepdims=0](%x.1)
%y.1 : Long() = prim::ListConstruct(%1)
return (%y.1)
In the above example, prim::ListConstruct is not supported, hence the exporter falls through and exports it as is.
opset_version (int, default 9) – by default we export the model to the opset version of the onnx submodule. Since ONNX’s latest opset may evolve before the next stable release, by default we export to one stable opset version. Right now, the supported stable opset version is 9. The opset_version must be _onnx_main_opset or in _onnx_stable_opsets, which are defined in torch/onnx/symbolic_helper.py.
do_constant_folding (bool, default True) – If True, the constant-folding optimization is applied to the model during export. Constant-folding optimization will replace some of the ops that have all constant inputs with pre-computed constant nodes.
example_outputs (tuple of Tensors, default None) – Model’s example outputs being exported. example_outputs must be provided when exporting a ScriptModule or TorchScript Function.
strip_doc_string (bool, default True) – if True, strips the field “doc_string” from the exported model, which contains information about the stack trace.
dynamic_axes (dict<string, dict<python:int, string>> or dict<string, list(int)>, default empty dict) –
a dictionary to specify dynamic axes of input/output, such that:
- KEY: input and/or output names
- VALUE: indices of dynamic axes for the given key, and potentially the names to be used for exported dynamic axes
In general the value is defined in one of the following ways, or a combination of both: (1) a list of integers specifying the dynamic axes of the provided input; in this scenario automated names will be generated and applied to the dynamic axes of the provided input/output during export; OR (2) an inner dictionary that specifies a mapping FROM the index of a dynamic axis in the corresponding input/output TO the name that is desired to be applied on such axis of such input/output during export. Example: if we have the following shapes for inputs and outputs: shape(input_1) = ('b', 3, 'w', 'h')
and shape(input_2) = ('b', 4)
and shape(output) = ('b', 'd', 5)
Then dynamic axes can be defined either as:
ONLY INDICES: ``dynamic_axes = {'input_1':[0, 2, 3],
'input_2':[0],
'output':[0, 1]}``
where automatic names will be generated for exported dynamic axes
INDICES WITH CORRESPONDING NAMES: ``dynamic_axes = {'input_1':{0:'batch',
1:'width',
2:'height'},
'input_2':{0:'batch'},
'output':{0:'batch',
1:'detections'}}``
where provided names will be applied to exported dynamic axes
MIXED MODE OF (1) and (2): ``dynamic_axes = {'input_1':[0, 2, 3],
'input_2':{0:'batch'},
'output':[0,1]}``
keep_initializers_as_inputs (bool, default None) –
If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, then initializers are not added as inputs to the graph, and only the non-parameter inputs are added as inputs. This may allow for better optimizations (such as constant folding, etc.) by backends/runtimes that execute these graphs. If unspecified (default None), then the behavior is chosen automatically as follows. If operator_export_type is OperatorExportTypes.ONNX, the behavior is equivalent to setting this argument to False. For other values of operator_export_type, the behavior is equivalent to setting this argument to True. Note that for ONNX opset version < 9, initializers MUST be part of graph inputs. Therefore, if the opset_version argument is set to 8 or lower, this argument will be ignored.
custom_opsets (dict<string, int>, default empty dict) – A dictionary to indicate custom opset domain and version at export. If model contains a custom opset, it is optional to specify the domain and opset version in the dictionary: - KEY: opset domain name - VALUE: opset version If the custom opset is not provided in this dictionary, opset version is set to 1 by default.
enable_onnx_checker (bool, default True) – If True, the ONNX model checker will be run as part of the export to ensure the exported model is a valid ONNX model.
external_data_format (bool, default False) – If True, then the model is exported in ONNX external data format, in which case some of the model parameters are stored in external binary files and not in the ONNX model file itself. See link for format details: https://github.com/onnx/onnx/blob/8b3f7e2e7a0f2aba0e629e23d89f07c7fc0e6a5e/onnx/onnx.proto#L423 Also, in this case, argument ‘f’ must be a string specifying the location of the model. The external binary files will be stored in the same location specified by the model location ‘f’. If False, then the model is stored in regular format, i.e. model and parameters are all in one file. This argument is ignored for all export types other than ONNX. | torch.onnx#torch.onnx.export |
torch.onnx.export_to_pretty_string(*args, **kwargs) [source] | torch.onnx#torch.onnx.export_to_pretty_string |
torch.onnx.is_in_onnx_export() [source]
Check whether it is in the middle of the ONNX export. This function returns True in the middle of torch.onnx.export(). torch.onnx.export should be executed with a single thread. | torch.onnx#torch.onnx.is_in_onnx_export
torch.onnx.operators.shape_as_tensor(x) [source] | torch.onnx#torch.onnx.operators.shape_as_tensor |
torch.onnx.register_custom_op_symbolic(symbolic_name, symbolic_fn, opset_version) [source] | torch.onnx#torch.onnx.register_custom_op_symbolic |
torch.onnx.select_model_mode_for_export(model, mode) [source]
A context manager to temporarily set the training mode of ‘model’ to ‘mode’, resetting it when we exit the with-block. A no-op if mode is None. Changed in version 1.6: renamed from set_training. | torch.onnx#torch.onnx.select_model_mode_for_export
torch.optim torch.optim is a package implementing various optimization algorithms. Most commonly used methods are already supported, and the interface is general enough that more sophisticated ones can also be easily integrated in the future. How to use an optimizer To use torch.optim you have to construct an optimizer object that will hold the current state and will update the parameters based on the computed gradients. Constructing it To construct an Optimizer you have to give it an iterable containing the parameters (all should be Variable s) to optimize. Then, you can specify optimizer-specific options such as the learning rate, weight decay, etc. Note If you need to move a model to GPU via .cuda(), please do so before constructing optimizers for it. Parameters of a model after .cuda() will be different objects from those before the call. In general, you should make sure that optimized parameters live in consistent locations when optimizers are constructed and used. Example: optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)
Per-parameter options Optimizer s also support specifying per-parameter options. To do this, instead of passing an iterable of Variable s, pass in an iterable of dict s. Each of them will define a separate parameter group, and should contain a params key, containing a list of parameters belonging to it. Other keys should match the keyword arguments accepted by the optimizers, and will be used as optimization options for this group. Note You can still pass options as keyword arguments. They will be used as defaults, in the groups that didn’t override them. This is useful when you only want to vary a single option, while keeping all others consistent between parameter groups. For example, this is very useful when one wants to specify per-layer learning rates: optim.SGD([
{'params': model.base.parameters()},
{'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
This means that model.base’s parameters will use the default learning rate of 1e-2, model.classifier’s parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters. Taking an optimization step All optimizers implement a step() method that updates the parameters. It can be used in two ways: optimizer.step() This is a simplified version supported by most optimizers. The function can be called once the gradients are computed using e.g. backward(). Example: for input, target in dataset:
optimizer.zero_grad()
output = model(input)
loss = loss_fn(output, target)
loss.backward()
optimizer.step()
optimizer.step(closure) Some optimization algorithms such as Conjugate Gradient and LBFGS need to reevaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The closure should clear the gradients, compute the loss, and return it. Example: for input, target in dataset:
def closure():
optimizer.zero_grad()
output = model(input)
loss = loss_fn(output, target)
loss.backward()
return loss
optimizer.step(closure)
Algorithms
class torch.optim.Optimizer(params, defaults) [source]
Base class for all optimizers. Warning Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries. Parameters
params (iterable) – an iterable of torch.Tensor s or dict s. Specifies what Tensors should be optimized.
defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
add_param_group(param_group) [source]
Add a param group to the Optimizer s param_groups. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses. Parameters
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
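For example, the following sketch (the two-part model and learning rates are hypothetical) unfreezes a submodule mid-training and registers it as a new parameter group:

import torch

backbone = torch.nn.Linear(10, 10)
head = torch.nn.Linear(10, 2)

# Initially optimize only the head.
optimizer = torch.optim.SGD(head.parameters(), lr=1e-2, momentum=0.9)

# Later in training, make the backbone trainable and add it as its own group.
for p in backbone.parameters():
    p.requires_grad_(True)
optimizer.add_param_group({'params': list(backbone.parameters()), 'lr': 1e-3})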
load_state_dict(state_dict) [source]
Loads the optimizer state. Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
state_dict() [source]
Returns the state of the optimizer as a dict. It contains two entries:
state - a dict holding current optimization state. Its content differs between optimizer classes.
param_groups - a dict containing all parameter groups
step(closure) [source]
Performs a single optimization step (parameter update). Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers. Note Unless otherwise specified, this function should not modify the .grad field of the parameters.
zero_grad(set_to_none=False) [source]
Sets the gradients of all optimized torch.Tensor s to zero. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
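A minimal sketch of the difference (the tensor and learning rate are illustrative):

import torch

w = torch.randn(3, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

w.sum().backward()
opt.zero_grad()                  # default: gradient becomes a tensor of zeros
print(w.grad)                    # tensor([0., 0., 0.])

w.sum().backward()
opt.zero_grad(set_to_none=True)  # gradient is set to None instead
print(w.grad)                    # None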
class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0) [source]
Implements Adadelta algorithm. It has been proposed in ADADELTA: An Adaptive Learning Rate Method. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
rho (float, optional) – coefficient used for computing a running average of squared gradients (default: 0.9)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
lr (float, optional) – coefficient that scales delta before it is applied to the parameters (default: 1.0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10) [source]
Implements Adagrad algorithm. It has been proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
lr_decay (float, optional) – learning rate decay (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-10)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False) [source]
Implements Adam algorithm. It has been proposed in Adam: A Method for Stochastic Optimization. The implementation of the L2 penalty follows changes proposed in Decoupled Weight Decay Regularization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
amsgrad (boolean, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: False)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False) [source]
Implements AdamW algorithm. The original Adam algorithm was proposed in Adam: A Method for Stochastic Optimization. The AdamW variant was proposed in Decoupled Weight Decay Regularization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay coefficient (default: 1e-2)
amsgrad (boolean, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: False)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08) [source]
Implements lazy version of Adam algorithm suitable for sparse tensors. In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
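A minimal sketch of the lazy update with a sparse embedding (sizes and indices are arbitrary):

import torch

emb = torch.nn.Embedding(100, 16, sparse=True)  # sparse=True yields sparse gradients
optimizer = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)

loss = emb(torch.tensor([1, 5, 7])).sum()
optimizer.zero_grad()
loss.backward()   # emb.weight.grad is a sparse tensor touching only rows 1, 5, 7
optimizer.step()  # only those rows (and their moment estimates) are updated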
class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0) [source]
Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in Adam: A Method for Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 2e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0) [source]
Implements Averaged Stochastic Gradient Descent. It has been proposed in Acceleration of stochastic approximation by averaging. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
lambd (float, optional) – decay term (default: 1e-4)
alpha (float, optional) – power for eta update (default: 0.75)
t0 (float, optional) – point at which to start averaging (default: 1e6)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None) [source]
Implements the L-BFGS algorithm, heavily inspired by minFunc (https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html). Warning This optimizer doesn’t support per-parameter options and parameter groups (there can be only one). Warning Right now all parameters have to be on a single device. This will be improved in the future. Note This is a very memory intensive optimizer (it requires additional param_bytes * (history_size + 1) bytes). If it doesn’t fit in memory try reducing the history size, or use a different algorithm. Parameters
lr (float) – learning rate (default: 1)
max_iter (int) – maximal number of iterations per optimization step (default: 20)
max_eval (int) – maximal number of function evaluations per optimization step (default: max_iter * 1.25).
tolerance_grad (float) – termination tolerance on first order optimality (default: 1e-7).
tolerance_change (float) – termination tolerance on function value/parameter changes (default: 1e-9).
history_size (int) – update history size (default: 100).
line_search_fn (str) – either ‘strong_wolfe’ or None (default: None).
step(closure) [source]
Performs a single optimization step. Parameters
closure (callable) – A closure that reevaluates the model and returns the loss.
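Because L-BFGS reevaluates the objective internally, step() must be given the closure pattern described earlier; a self-contained sketch (the model and data are placeholders):

import torch

model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
inputs, targets = torch.randn(32, 10), torch.randn(32, 1)

optimizer = torch.optim.LBFGS(model.parameters(), lr=1, max_iter=20)

def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure)  # may evaluate closure() several times internally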
class torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False) [source]
Implements RMSprop algorithm. Proposed by G. Hinton in his course. The centered version first appears in Generating Sequences With Recurrent Neural Networks. The implementation here takes the square root of the gradient average before adding epsilon (note that TensorFlow interchanges these two operations). The effective learning rate is thus \alpha/(\sqrt{v} + \epsilon), where \alpha is the scheduled learning rate and v is the weighted moving average of the squared gradient. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
momentum (float, optional) – momentum factor (default: 0)
alpha (float, optional) – smoothing constant (default: 0.99)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
centered (bool, optional) – if True, compute the centered RMSProp, the gradient is normalized by an estimation of its variance
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50)) [source]
Implements the resilient backpropagation algorithm. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
etas (Tuple[float, float], optional) – pair of (etaminus, etaplus), which are multiplicative decrease and increase factors (default: (0.5, 1.2))
step_sizes (Tuple[float, float], optional) – a pair of minimal and maximal allowed step sizes (default: (1e-6, 50))
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False) [source]
Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float) – learning rate
momentum (float, optional) – momentum factor (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
dampening (float, optional) – dampening for momentum (default: 0)
nesterov (bool, optional) – enables Nesterov momentum (default: False) Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
Note The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks. Considering the specific case of Momentum, the update can be written as
\begin{aligned} v_{t+1} & = \mu * v_{t} + g_{t+1}, \\ p_{t+1} & = p_{t} - \text{lr} * v_{t+1}, \end{aligned}
where p, g, v and \mu denote the parameters, gradient, velocity, and momentum respectively. This is in contrast to Sutskever et al. and other frameworks which employ an update of the form
\begin{aligned} v_{t+1} & = \mu * v_{t} + \text{lr} * g_{t+1}, \\ p_{t+1} & = p_{t} - v_{t+1}. \end{aligned}
The Nesterov version is analogously modified.
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
How to adjust learning rate torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs. torch.optim.lr_scheduler.ReduceLROnPlateau allows dynamic learning rate reducing based on some validation measurements. Learning rate scheduling should be applied after optimizer’s update; e.g., you should write your code this way: >>> scheduler = ...
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
Warning Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer’s update; 1.1.0 changed this behavior in a BC-breaking way. If you use the learning rate scheduler (calling scheduler.step()) before the optimizer’s update (calling optimizer.step()), this will skip the first value of the learning rate schedule. If you are unable to reproduce results after upgrading to PyTorch 1.1.0, please check if you are calling scheduler.step() at the wrong time.
class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> # Assuming optimizer has two groups.
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
load_state_dict(state_dict) [source]
Loads the schedulers state. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
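A short sketch of checkpointing the two objects together, as the notes above recommend (the model and file name are arbitrary):

import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)

# Save optimizer and scheduler state side by side...
torch.save({'optimizer': optimizer.state_dict(),
            'scheduler': scheduler.state_dict()}, 'checkpoint.pt')

# ...and restore both when resuming.
ckpt = torch.load('checkpoint.pt')
optimizer.load_state_dict(ckpt['optimizer'])
scheduler.load_state_dict(ckpt['scheduler'])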
class torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) [source]
Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> lmbda = lambda epoch: 0.95
>>> scheduler = MultiplicativeLR(optimizer, lr_lambda=lmbda)
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
load_state_dict(state_dict) [source]
Loads the schedulers state. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.
class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma every step_size epochs. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
step_size (int) – Period of learning rate decay.
gamma (float) – Multiplicative factor of learning rate decay. Default: 0.1.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 60
>>> # lr = 0.0005 if 60 <= epoch < 90
>>> # ...
>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
class torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
milestones (list) – List of epoch indices. Must be increasing.
gamma (float) – Multiplicative factor of learning rate decay. Default: 0.1.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 80
>>> # lr = 0.0005 if epoch >= 80
>>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
class torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma every epoch. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
gamma (float) – Multiplicative factor of learning rate decay.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False.
class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False) [source]
Set the learning rate of each parameter group using a cosine annealing schedule, where \eta_{max} is set to the initial lr and T_{cur} is the number of epochs since the last restart in SGDR:
\begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}
When last_epoch=-1, sets initial lr as lr. Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes:
\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts. Parameters
optimizer (Optimizer) – Wrapped optimizer.
T_max (int) – Maximum number of iterations.
eta_min (float) – Minimum learning rate. Default: 0.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False.
class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False) [source]
Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metrics quantity and if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced. Parameters
optimizer (Optimizer) – Wrapped optimizer.
mode (str) – One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: ‘min’.
factor (float) – Factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.
patience (int) – Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn’t improved then. Default: 10.
threshold (float) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
threshold_mode (str) – One of rel, abs. In rel mode, dynamic_threshold = best * ( 1 + threshold ) in ‘max’ mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: ‘rel’.
cooldown (int) – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.
min_lr (float or list) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0.
eps (float) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = ReduceLROnPlateau(optimizer, 'min')
>>> for epoch in range(10):
>>> train(...)
>>> val_loss = validate(...)
>>> # Note that step should be called after validate()
>>> scheduler.step(val_loss)
class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis. Cyclical learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper: “triangular”: A basic triangular cycle without amplitude scaling. “triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle. “exp_range”: A cycle that scales initial amplitude by \text{gamma}^{\text{cycle iterations}} at each cycle iteration. This implementation was adapted from the github repo: bckenstler/CLR Parameters
optimizer (Optimizer) – Wrapped optimizer.
base_lr (float or list) – Initial learning rate which is the lower boundary in the cycle for each parameter group.
max_lr (float or list) – Upper learning rate boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_lr - base_lr). The lr at any cycle is the sum of base_lr and some scaling of the amplitude; therefore max_lr may not actually be reached depending on scaling function.
step_size_up (int) – Number of training iterations in the increasing half of a cycle. Default: 2000
step_size_down (int) – Number of training iterations in the decreasing half of a cycle. If step_size_down is None, it is set to step_size_up. Default: None
mode (str) – One of {triangular, triangular2, exp_range}. Values correspond to policies detailed above. If scale_fn is not None, this argument is ignored. Default: ‘triangular’
gamma (float) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations) Default: 1.0
scale_fn (function) – Custom scaling policy defined by a single argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If specified, then ‘mode’ is ignored. Default: None
scale_mode (str) – {‘cycle’, ‘iterations’}. Defines whether scale_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). Default: ‘cycle’
cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. Default: True
base_momentum (float or list) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base_momentum’ and learning rate is ‘max_lr’. Default: 0.8
max_momentum (float or list) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). The momentum at any cycle is the difference of max_momentum and some scaling of the amplitude; therefore base_momentum may not actually be reached depending on scaling function. Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is ‘max_momentum’ and learning rate is ‘base_lr’ Default: 0.9
last_epoch (int) – The index of the last batch. This parameter is used when resuming a training job. Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning. Default: -1
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)
>>> data_loader = torch.utils.data.DataLoader(...)
>>> for epoch in range(10):
>>> for batch in data_loader:
>>> train_batch(...)
>>> scheduler.step()
get_lr() [source]
Calculates the learning rate at batch index. This function treats self.last_epoch as the last batch index. If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum.
class torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. This policy was initially described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. The 1cycle learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training. This scheduler is not chainable. Note also that the total number of steps in the cycle can be determined in one of two ways (listed in order of precedence): A value for total_steps is explicitly provided. A number of epochs (epochs) and a number of steps per epoch (steps_per_epoch) are provided. In this case, the number of total steps is inferred by total_steps = epochs * steps_per_epoch You must either provide a value for total_steps or provide a value for both epochs and steps_per_epoch. The default behaviour of this scheduler follows the fastai implementation of 1cycle, which claims that “unpublished work has shown even better results by using only two phases”. To mimic the behaviour of the original paper instead, set three_phase=True. Parameters
optimizer (Optimizer) – Wrapped optimizer.
max_lr (float or list) – Upper learning rate boundaries in the cycle for each parameter group.
total_steps (int) – The total number of steps in the cycle. Note that if a value is not provided here, then it must be inferred by providing a value for epochs and steps_per_epoch. Default: None
epochs (int) – The number of epochs to train for. This is used along with steps_per_epoch in order to infer the total number of steps in the cycle if a value for total_steps is not provided. Default: None
steps_per_epoch (int) – The number of steps per epoch to train for. This is used along with epochs in order to infer the total number of steps in the cycle if a value for total_steps is not provided. Default: None
pct_start (float) – The percentage of the cycle (in number of steps) spent increasing the learning rate. Default: 0.3
anneal_strategy (str) – {‘cos’, ‘linear’} Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing. Default: ‘cos’
cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. Default: True
base_momentum (float or list) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base_momentum’ and learning rate is ‘max_lr’. Default: 0.85
max_momentum (float or list) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is ‘max_momentum’ and learning rate is ‘base_lr’ Default: 0.95
div_factor (float) – Determines the initial learning rate via initial_lr = max_lr/div_factor Default: 25
final_div_factor (float) – Determines the minimum learning rate via min_lr = initial_lr/final_div_factor Default: 1e4
three_phase (bool) – If True, use a third phase of the schedule to annihilate the learning rate according to ‘final_div_factor’ instead of modifying the second phase (the first two phases will be symmetrical about the step indicated by ‘pct_start’).
last_epoch (int) – The index of the last batch. This parameter is used when resuming a training job. Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning. Default: -1
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> data_loader = torch.utils.data.DataLoader(...)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)
>>> for epoch in range(10):
>>> for batch in data_loader:
>>> train_batch(...)
>>> scheduler.step()
class torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False) [source]
Set the learning rate of each parameter group using a cosine annealing schedule, where \eta_{max} is set to the initial lr, T_{cur} is the number of epochs since the last restart and T_{i} is the number of epochs between two warm restarts in SGDR:
\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)
When T_{cur}=T_{i}, set \eta_t = \eta_{min}. When T_{cur}=0 after restart, set \eta_t=\eta_{max}. It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Parameters
optimizer (Optimizer) – Wrapped optimizer.
T_0 (int) – Number of iterations for the first restart.
T_mult (int, optional) – A factor by which T_{i} increases after a restart. Default: 1.
eta_min (float, optional) – Minimum learning rate. Default: 0.
last_epoch (int, optional) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False.
step(epoch=None) [source]
step() can be called after every batch update. Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> iters = len(dataloader)
>>> for epoch in range(20):
>>> for i, sample in enumerate(dataloader):
>>> inputs, labels = sample['inputs'], sample['labels']
>>> optimizer.zero_grad()
>>> outputs = net(inputs)
>>> loss = criterion(outputs, labels)
>>> loss.backward()
>>> optimizer.step()
>>> scheduler.step(epoch + i / iters)
This function can be called in an interleaved way. Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> for epoch in range(20):
>>> scheduler.step()
>>> scheduler.step(26)
>>> scheduler.step() # scheduler.step(27), instead of scheduler(20)
Stochastic Weight Averaging torch.optim.swa_utils implements Stochastic Weight Averaging (SWA). In particular, torch.optim.swa_utils.AveragedModel class implements SWA models, torch.optim.swa_utils.SWALR implements the SWA learning rate scheduler and torch.optim.swa_utils.update_bn() is a utility function used to update SWA batch normalization statistics at the end of training. SWA has been proposed in Averaging Weights Leads to Wider Optima and Better Generalization. Constructing averaged models AveragedModel class serves to compute the weights of the SWA model. You can create an averaged model by running: >>> swa_model = AveragedModel(model)
Here the model model can be an arbitrary torch.nn.Module object. swa_model will keep track of the running averages of the parameters of the model. To update these averages, you can use the update_parameters() function: >>> swa_model.update_parameters(model)
SWA learning rate schedules Typically, in SWA the learning rate is set to a high constant value. SWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to 0.05 in 5 epochs within each parameter group: >>> swa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \
>>> anneal_strategy="linear", anneal_epochs=5, swa_lr=0.05)
You can also use cosine annealing to a fixed value instead of linear annealing by setting anneal_strategy="cos". Taking care of batch normalization update_bn() is a utility function that computes the batchnorm statistics for the SWA model on a given dataloader loader at the end of training: >>> torch.optim.swa_utils.update_bn(loader, swa_model)
update_bn() applies the swa_model to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model. Warning update_bn() assumes that each batch in the dataloader loader is either a tensor or a list of tensors where the first element is the tensor that the network swa_model should be applied to. If your dataloader has a different structure, you can update the batch normalization statistics of the swa_model by doing a forward pass with the swa_model on each element of the dataset.
>>> 0.1 * averaged_model_parameter + 0.9 * model_parameter
>>> ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)
Putting it all together In the example below, swa_model is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160: >>> loader, optimizer, model, loss_fn = ...
>>> swa_model = torch.optim.swa_utils.AveragedModel(model)
>>> scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
>>> swa_start = 160
>>> swa_scheduler = SWALR(optimizer, swa_lr=0.05)
>>>
>>> for epoch in range(300):
>>> for input, target in loader:
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
>>> if epoch > swa_start:
>>> swa_model.update_parameters(model)
>>> swa_scheduler.step()
>>> else:
>>> scheduler.step()
>>>
>>> # Update bn statistics for the swa_model at the end
>>> torch.optim.swa_utils.update_bn(loader, swa_model)
>>> # Use swa_model to make predictions on test data
>>> preds = swa_model(test_input) | torch.optim |
class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0) [source]
Implements Adadelta algorithm. It has been proposed in ADADELTA: An Adaptive Learning Rate Method. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
rho (float, optional) – coefficient used for computing a running average of squared gradients (default: 0.9)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
lr (float, optional) – coefficient that scales delta before it is applied to the parameters (default: 1.0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adadelta |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adadelta.step |
class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10) [source]
Implements Adagrad algorithm. It has been proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
lr_decay (float, optional) – learning rate decay (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-10)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adagrad |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adagrad.step |
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False) [source]
Implements Adam algorithm. It has been proposed in Adam: A Method for Stochastic Optimization. The implementation of the L2 penalty follows changes proposed in Decoupled Weight Decay Regularization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
amsgrad (boolean, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: False)
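Example (a sketch of per-parameter-group options via dicts; the base and classifier submodule names are hypothetical):
>>> optimizer = torch.optim.Adam([
>>>     {'params': model.base.parameters()},
>>>     {'params': model.classifier.parameters(), 'lr': 1e-2},
>>> ], lr=1e-3)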
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adam |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adam.step |
class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0) [source]
Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in Adam: A Method for Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 2e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adamax |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adamax.step |
class torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False) [source]
Implements AdamW algorithm. The original Adam algorithm was proposed in Adam: A Method for Stochastic Optimization. The AdamW variant was proposed in Decoupled Weight Decay Regularization. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay (float, optional) – weight decay coefficient (default: 1e-2)
amsgrad (boolean, optional) – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: False)
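Example (a minimal sketch; model, loss_fn, input, and target are assumed to be defined; unlike an L2 penalty, the decoupled weight_decay coefficient is applied directly to the weights rather than added to the gradient):
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()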
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.AdamW |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.AdamW.step |
class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0) [source]
Implements Averaged Stochastic Gradient Descent. It has been proposed in Acceleration of stochastic approximation by averaging. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
lambd (float, optional) – decay term (default: 1e-4)
alpha (float, optional) – power for eta update (default: 0.75)
t0 (float, optional) – point at which to start averaging (default: 1e6)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.ASGD |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.ASGD.step |
class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None) [source]
Implements the L-BFGS algorithm, heavily inspired by minFunc (https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html). Warning This optimizer doesn’t support per-parameter options and parameter groups (there can be only one). Warning Right now all parameters have to be on a single device. This will be improved in the future. Note This is a very memory intensive optimizer (it requires additional param_bytes * (history_size + 1) bytes). If it doesn’t fit in memory try reducing the history size, or use a different algorithm. Parameters
lr (float) – learning rate (default: 1)
max_iter (int) – maximal number of iterations per optimization step (default: 20)
max_eval (int) – maximal number of function evaluations per optimization step (default: max_iter * 1.25).
tolerance_grad (float) – termination tolerance on first order optimality (default: 1e-7).
tolerance_change (float) – termination tolerance on function value/parameter changes (default: 1e-9).
history_size (int) – update history size (default: 100).
line_search_fn (str) – either ‘strong_wolfe’ or None (default: None).
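Because step() requires a closure, a typical training step looks like the sketch below (model, loss_fn, input, and target are assumed to be defined):
>>> optimizer = torch.optim.LBFGS(model.parameters(), history_size=10, line_search_fn='strong_wolfe')
>>> def closure():
>>>     optimizer.zero_grad()
>>>     loss = loss_fn(model(input), target)
>>>     loss.backward()
>>>     return loss
>>> optimizer.step(closure)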
step(closure) [source]
Performs a single optimization step. Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.LBFGS |
step(closure) [source]
Performs a single optimization step. Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.LBFGS.step |
class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False) [source]
Set the learning rate of each parameter group using a cosine annealing schedule, where \eta_{max} is set to the initial lr and T_{cur} is the number of epochs since the last restart in SGDR: \begin{aligned} \eta_t & = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right), & T_{cur} \neq (2k+1)T_{max}; \\ \eta_{t+1} & = \eta_{t} + \frac{1}{2}(\eta_{max} - \eta_{min}) \left(1 - \cos\left(\frac{1}{T_{max}}\pi\right)\right), & T_{cur} = (2k+1)T_{max}. \end{aligned}
When last_epoch=-1, sets initial lr as lr. Notice that because the schedule is defined recursively, the learning rate can be simultaneously modified outside this scheduler by other operators. If the learning rate is set solely by this scheduler, the learning rate at each step becomes: \eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts. Parameters
optimizer (Optimizer) – Wrapped optimizer.
T_max (int) – Maximum number of iterations.
eta_min (float) – Minimum learning rate. Default: 0.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. | torch.optim#torch.optim.lr_scheduler.CosineAnnealingLR |
class torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False) [source]
Set the learning rate of each parameter group using a cosine annealing schedule, where \eta_{max} is set to the initial lr, T_{cur} is the number of epochs since the last restart and T_{i} is the number of epochs between two warm restarts in SGDR: \eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{i}}\pi\right)\right)
When T_{cur}=T_{i}, set \eta_t = \eta_{min}. When T_{cur}=0 after restart, set \eta_t=\eta_{max}. It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Parameters
optimizer (Optimizer) – Wrapped optimizer.
T_0 (int) – Number of iterations for the first restart.
T_mult (int, optional) – A factor by which T_{i} increases after a restart. Default: 1.
eta_min (float, optional) – Minimum learning rate. Default: 0.
last_epoch (int, optional) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False.
step(epoch=None) [source]
step can be called after every batch update. Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> iters = len(dataloader)
>>> for epoch in range(20):
>>>     for i, sample in enumerate(dataloader):
>>>         inputs, labels = sample['inputs'], sample['labels']
>>>         optimizer.zero_grad()
>>>         outputs = net(inputs)
>>>         loss = criterion(outputs, labels)
>>>         loss.backward()
>>>         optimizer.step()
>>>         scheduler.step(epoch + i / iters)
This function can be called in an interleaved way. Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> for epoch in range(20):
>>>     scheduler.step()
>>> scheduler.step(26)
>>> scheduler.step() # equivalent to scheduler.step(27), not scheduler.step(20)
step(epoch=None) [source]
step can be called after every batch update. Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> iters = len(dataloader)
>>> for epoch in range(20):
>>>     for i, sample in enumerate(dataloader):
>>>         inputs, labels = sample['inputs'], sample['labels']
>>>         optimizer.zero_grad()
>>>         outputs = net(inputs)
>>>         loss = criterion(outputs, labels)
>>>         loss.backward()
>>>         optimizer.step()
>>>         scheduler.step(epoch + i / iters)
This function can be called in an interleaved way. Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> for epoch in range(20):
>>>     scheduler.step()
>>> scheduler.step(26)
>>> scheduler.step() # equivalent to scheduler.step(27), not scheduler.step(20)
class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis. Cyclical learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper: “triangular”: A basic triangular cycle without amplitude scaling. “triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle. “exp_range”: A cycle that scales initial amplitude by \text{gamma}^{\text{cycle iterations}} at each cycle iteration. This implementation was adapted from the github repo: bckenstler/CLR Parameters
optimizer (Optimizer) – Wrapped optimizer.
base_lr (float or list) – Initial learning rate which is the lower boundary in the cycle for each parameter group.
max_lr (float or list) – Upper learning rate boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_lr - base_lr). The lr at any cycle is the sum of base_lr and some scaling of the amplitude; therefore max_lr may not actually be reached depending on scaling function.
step_size_up (int) – Number of training iterations in the increasing half of a cycle. Default: 2000
step_size_down (int) – Number of training iterations in the decreasing half of a cycle. If step_size_down is None, it is set to step_size_up. Default: None
mode (str) – One of {triangular, triangular2, exp_range}. Values correspond to policies detailed above. If scale_fn is not None, this argument is ignored. Default: ‘triangular’
gamma (float) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations) Default: 1.0
scale_fn (function) – Custom scaling policy defined by a single argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If specified, then ‘mode’ is ignored. Default: None
scale_mode (str) – {‘cycle’, ‘iterations’}. Defines whether scale_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). Default: ‘cycle’
cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. Default: True
base_momentum (float or list) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base_momentum’ and learning rate is ‘max_lr’. Default: 0.8
max_momentum (float or list) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). The momentum at any cycle is the difference of max_momentum and some scaling of the amplitude; therefore base_momentum may not actually be reached depending on scaling function. Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is ‘max_momentum’ and learning rate is ‘base_lr’ Default: 0.9
last_epoch (int) – The index of the last batch. This parameter is used when resuming a training job. Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning. Default: -1
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)
>>> data_loader = torch.utils.data.DataLoader(...)
>>> for epoch in range(10):
>>>     for batch in data_loader:
>>>         train_batch(...)
>>>         scheduler.step()
get_lr() [source]
Calculates the learning rate at batch index. This function treats self.last_epoch as the last batch index. If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum. | torch.optim#torch.optim.lr_scheduler.CyclicLR |
get_lr() [source]
Calculates the learning rate at batch index. This function treats self.last_epoch as the last batch index. If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum. | torch.optim#torch.optim.lr_scheduler.CyclicLR.get_lr |
class torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma every epoch. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
gamma (float) – Multiplicative factor of learning rate decay.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. | torch.optim#torch.optim.lr_scheduler.ExponentialLR |
class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> # Assuming optimizer has two groups.
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
load_state_dict(state_dict) [source]
Loads the schedulers state. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. | torch.optim#torch.optim.lr_scheduler.LambdaLR |
load_state_dict(state_dict) [source]
Loads the schedulers state. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict(). | torch.optim#torch.optim.lr_scheduler.LambdaLR.load_state_dict |
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. | torch.optim#torch.optim.lr_scheduler.LambdaLR.state_dict |
class torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) [source]
Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> lmbda = lambda epoch: 0.95
>>> scheduler = MultiplicativeLR(optimizer, lr_lambda=lmbda)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
load_state_dict(state_dict) [source]
Loads the schedulers state. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. | torch.optim#torch.optim.lr_scheduler.MultiplicativeLR |
load_state_dict(state_dict) [source]
Loads the schedulers state. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict(). | torch.optim#torch.optim.lr_scheduler.MultiplicativeLR.load_state_dict |
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. | torch.optim#torch.optim.lr_scheduler.MultiplicativeLR.state_dict |
class torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma once the number of epoch reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
milestones (list) – List of epoch indices. Must be increasing.
gamma (float) – Multiplicative factor of learning rate decay. Default: 0.1.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 80
>>> # lr = 0.0005 if epoch >= 80
>>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step() | torch.optim#torch.optim.lr_scheduler.MultiStepLR
class torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. This policy was initially described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. The 1cycle learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training. This scheduler is not chainable. Note also that the total number of steps in the cycle can be determined in one of two ways (listed in order of precedence): A value for total_steps is explicitly provided. A number of epochs (epochs) and a number of steps per epoch (steps_per_epoch) are provided. In this case, the number of total steps is inferred by total_steps = epochs * steps_per_epoch You must either provide a value for total_steps or provide a value for both epochs and steps_per_epoch. The default behaviour of this scheduler follows the fastai implementation of 1cycle, which claims that “unpublished work has shown even better results by using only two phases”. To mimic the behaviour of the original paper instead, set three_phase=True. Parameters
optimizer (Optimizer) – Wrapped optimizer.
max_lr (float or list) – Upper learning rate boundaries in the cycle for each parameter group.
total_steps (int) – The total number of steps in the cycle. Note that if a value is not provided here, then it must be inferred by providing a value for epochs and steps_per_epoch. Default: None
epochs (int) – The number of epochs to train for. This is used along with steps_per_epoch in order to infer the total number of steps in the cycle if a value for total_steps is not provided. Default: None
steps_per_epoch (int) – The number of steps per epoch to train for. This is used along with epochs in order to infer the total number of steps in the cycle if a value for total_steps is not provided. Default: None
pct_start (float) – The percentage of the cycle (in number of steps) spent increasing the learning rate. Default: 0.3
anneal_strategy (str) – {‘cos’, ‘linear’} Specifies the annealing strategy: “cos” for cosine annealing, “linear” for linear annealing. Default: ‘cos’
cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. Default: True
base_momentum (float or list) – Lower momentum boundaries in the cycle for each parameter group. Note that momentum is cycled inversely to learning rate; at the peak of a cycle, momentum is ‘base_momentum’ and learning rate is ‘max_lr’. Default: 0.85
max_momentum (float or list) – Upper momentum boundaries in the cycle for each parameter group. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). Note that momentum is cycled inversely to learning rate; at the start of a cycle, momentum is ‘max_momentum’ and learning rate is ‘base_lr’ Default: 0.95
div_factor (float) – Determines the initial learning rate via initial_lr = max_lr/div_factor Default: 25
final_div_factor (float) – Determines the minimum learning rate via min_lr = initial_lr/final_div_factor Default: 1e4
three_phase (bool) – If True, use a third phase of the schedule to annihilate the learning rate according to ‘final_div_factor’ instead of modifying the second phase (the first two phases will be symmetrical about the step indicated by ‘pct_start’). Default: False
last_epoch (int) – The index of the last batch. This parameter is used when resuming a training job. Since step() should be invoked after each batch instead of after each epoch, this number represents the total number of batches computed, not the total number of epochs computed. When last_epoch=-1, the schedule is started from the beginning. Default: -1
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> data_loader = torch.utils.data.DataLoader(...)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)
>>> for epoch in range(10):
>>>     for batch in data_loader:
>>>         train_batch(...)
>>>         scheduler.step() | torch.optim#torch.optim.lr_scheduler.OneCycleLR
class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False) [source]
Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced. Parameters
optimizer (Optimizer) – Wrapped optimizer.
mode (str) – One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: ‘min’.
factor (float) – Factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.
patience (int) – Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn’t improved then. Default: 10.
threshold (float) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
threshold_mode (str) – One of rel, abs. In rel mode, dynamic_threshold = best * ( 1 + threshold ) in ‘max’ mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: ‘rel’.
cooldown (int) – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.
min_lr (float or list) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0.
eps (float) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = ReduceLROnPlateau(optimizer, 'min')
>>> for epoch in range(10):
>>>     train(...)
>>>     val_loss = validate(...)
>>>     # Note that step should be called after validate()
>>>     scheduler.step(val_loss) | torch.optim#torch.optim.lr_scheduler.ReduceLROnPlateau
class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma every step_size epochs. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
step_size (int) – Period of learning rate decay.
gamma (float) – Multiplicative factor of learning rate decay. Default: 0.1.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool) – If True, prints a message to stdout for each update. Default: False. Example >>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 60
>>> # lr = 0.0005 if 60 <= epoch < 90
>>> # ...
>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step() | torch.optim#torch.optim.lr_scheduler.StepLR
class torch.optim.Optimizer(params, defaults) [source]
Base class for all optimizers. Warning Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries. Parameters
params (iterable) – an iterable of torch.Tensor s or dict s. Specifies what Tensors should be optimized.
defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
add_param_group(param_group) [source]
Add a param group to the Optimizer s param_groups. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses. Parameters
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
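A sketch of the fine-tuning pattern described above (model.base and model.new_head are hypothetical submodules):
>>> optimizer = torch.optim.SGD(model.base.parameters(), lr=0.1)
>>> # later, once the new layers should start training:
>>> optimizer.add_param_group({'params': model.new_head.parameters(), 'lr': 0.01})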
load_state_dict(state_dict) [source]
Loads the optimizer state. Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
state_dict() [source]
Returns the state of the optimizer as a dict. It contains two entries:
state – a dict holding current optimization state. Its content differs between optimizer classes.
param_groups – a dict containing all parameter groups
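A sketch of checkpointing the optimizer alongside the model (PATH is a hypothetical file path):
>>> torch.save(optimizer.state_dict(), PATH)
>>> # ... later, after reconstructing the optimizer:
>>> optimizer.load_state_dict(torch.load(PATH))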
step(closure) [source]
Performs a single optimization step (parameter update). Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers. Note Unless otherwise specified, this function should not modify the .grad field of the parameters.
zero_grad(set_to_none=False) [source]
Sets the gradients of all optimized torch.Tensor s to zero. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient. 3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether). | torch.optim#torch.optim.Optimizer |
add_param_group(param_group) [source]
Add a param group to the Optimizer s param_groups. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses. Parameters
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options. | torch.optim#torch.optim.Optimizer.add_param_group
load_state_dict(state_dict) [source]
Loads the optimizer state. Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict(). | torch.optim#torch.optim.Optimizer.load_state_dict |
state_dict() [source]
Returns the state of the optimizer as a dict. It contains two entries:
state – a dict holding current optimization state. Its content differs between optimizer classes.
param_groups – a dict containing all parameter groups | torch.optim#torch.optim.Optimizer.state_dict
step(closure) [source]
Performs a single optimization step (parameter update). Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers. Note Unless otherwise specified, this function should not modify the .grad field of the parameters. | torch.optim#torch.optim.Optimizer.step |
zero_grad(set_to_none=False) [source]
Sets the gradients of all optimized torch.Tensor s to zero. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient. 3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether). | torch.optim#torch.optim.Optimizer.zero_grad |
class torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False) [source]
Implements the RMSprop algorithm. Proposed by G. Hinton in his course. The centered version first appears in Generating Sequences With Recurrent Neural Networks. The implementation here takes the square root of the gradient average before adding epsilon (note that TensorFlow interchanges these two operations). The effective learning rate is thus \alpha/(\sqrt{v} + \epsilon), where \alpha is the scheduled learning rate and v is the weighted moving average of the squared gradient. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
momentum (float, optional) – momentum factor (default: 0)
alpha (float, optional) – smoothing constant (default: 0.99)
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
centered (bool, optional) – if True, compute the centered RMSProp, in which the gradient is normalized by an estimate of its variance (default: False)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
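Example (a minimal sketch; model, loss_fn, input, and target are assumed to be defined):
>>> optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-2, momentum=0.9, centered=True)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()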
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.RMSprop |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.RMSprop.step |
class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50)) [source]
Implements the resilient backpropagation algorithm. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
etas (Tuple[float, float], optional) – pair of (etaminus, etaplus), multiplicative decrease and increase factors (default: (0.5, 1.2))
step_sizes (Tuple[float, float], optional) – a pair of minimal and maximal allowed step sizes (default: (1e-6, 50))
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Rprop |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Rprop.step |
class torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False) [source]
Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float) – learning rate
momentum (float, optional) – momentum factor (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
dampening (float, optional) – dampening for momentum (default: 0)
nesterov (bool, optional) – enables Nesterov momentum (default: False) Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
Note The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks. Considering the specific case of Momentum, the update can be written as \begin{aligned} v_{t+1} & = \mu * v_{t} + g_{t+1}, \\ p_{t+1} & = p_{t} - \text{lr} * v_{t+1}, \end{aligned}
where p, g, v and \mu denote the parameters, gradient, velocity, and momentum respectively. This is in contrast to Sutskever et al. and other frameworks which employ an update of the form \begin{aligned} v_{t+1} & = \mu * v_{t} + \text{lr} * g_{t+1}, \\ p_{t+1} & = p_{t} - v_{t+1}. \end{aligned}
The Nesterov version is analogously modified.
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.SGD |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.SGD.step |
class torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08) [source]
Implements a lazy version of the Adam algorithm, suitable for sparse tensors. In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
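Example (a sketch using a parameter with sparse gradients; an nn.Embedding constructed with sparse=True produces them):
>>> emb = torch.nn.Embedding(10, 3, sparse=True)
>>> optimizer = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)
>>> emb(torch.tensor([0, 1])).sum().backward()
>>> optimizer.step()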
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.SparseAdam |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.SparseAdam.step |
torch.orgqr(input, input2) → Tensor
Computes the orthogonal matrix Q of a QR factorization, from the (input, input2) tuple returned by torch.geqrf(). This directly calls the underlying LAPACK function ?orgqr. See LAPACK documentation for orgqr for further details. Parameters
input (Tensor) – the a from torch.geqrf().
input2 (Tensor) – the tau from torch.geqrf(). | torch.generated.torch.orgqr#torch.orgqr |
torch.ormqr(input, input2, input3, left=True, transpose=False) → Tensor
Multiplies mat (given by input3) by the orthogonal Q matrix of the QR factorization formed by torch.geqrf() that is represented by (a, tau) (given by (input, input2)). This directly calls the underlying LAPACK function ?ormqr. See LAPACK documentation for ormqr for further details. Parameters
input (Tensor) – the a from torch.geqrf().
input2 (Tensor) – the tau from torch.geqrf().
input3 (Tensor) – the matrix to be multiplied. | torch.generated.torch.ormqr#torch.ormqr |
torch.outer(input, vec2, *, out=None) → Tensor
Outer product of input and vec2. If input is a vector of size n and vec2 is a vector of size m, then out must be a matrix of size (n \times m). Note This function does not broadcast. Parameters
input (Tensor) – 1-D input vector
vec2 (Tensor) – 1-D input vector Keyword Arguments
out (Tensor, optional) – optional output matrix Example: >>> v1 = torch.arange(1., 5.)
>>> v2 = torch.arange(1., 4.)
>>> torch.outer(v1, v2)
tensor([[ 1., 2., 3.],
[ 2., 4., 6.],
[ 3., 6., 9.],
[ 4., 8., 12.]]) | torch.generated.torch.outer#torch.outer |
torch.overrides This module exposes various helper functions for the __torch_function__ protocol. See Extending torch for more detail on the __torch_function__ protocol. Functions
torch.overrides.get_ignored_functions() [source]
Return public functions that cannot be overridden by __torch_function__. Returns
A tuple of functions that are publicly available in the torch API but cannot be overridden with __torch_function__. Mostly this is because none of the arguments of these functions are tensors or tensor-likes. Return type
Set[Callable] Examples >>> torch.Tensor.as_subclass in torch.overrides.get_ignored_functions()
True
>>> torch.add in torch.overrides.get_ignored_functions()
False
torch.overrides.get_overridable_functions() [source]
List functions that are overridable via __torch_function__ Returns
A dictionary that maps namespaces that contain overridable functions to functions in that namespace that can be overridden. Return type
Dict[Any, List[Callable]]
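Example (a sketch of inspecting the returned mapping; the exact contents vary across PyTorch versions):
>>> overridable = torch.overrides.get_overridable_functions()
>>> torch.add in overridable[torch]
True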
torch.overrides.get_testing_overrides() [source]
Return a dict containing dummy overrides for all overridable functions Returns
A dictionary that maps overridable functions in the PyTorch API to lambda functions that have the same signature as the real function and unconditionally return -1. These lambda functions are useful for testing API coverage for a type that defines __torch_function__. Return type
Dict[Callable, Callable] Examples >>> import inspect
>>> my_add = torch.overrides.get_testing_overrides()[torch.add]
>>> inspect.signature(my_add)
<Signature (input, other, out=None)>
torch.overrides.handle_torch_function(public_api, relevant_args, *args, **kwargs) [source]
Implement a function with checks for __torch_function__ overrides. See torch::autograd::handle_torch_function for the equivalent of this function in the C++ implementation. Parameters
public_api (function) – Function exposed by the public torch API originally called like public_api(*args, **kwargs) on which arguments are now being checked.
relevant_args (iterable) – Iterable of arguments to check for __torch_function__ methods.
args (tuple) – Arbitrary positional arguments originally passed into public_api.
kwargs (tuple) – Arbitrary keyword arguments originally passed into public_api. Returns
Result from calling implementation or an __torch_function__ method, as appropriate. Return type
object. Raises TypeError if no implementation is found. Example >>> def func(a):
...     if type(a) is not torch.Tensor: # This will make func dispatchable by __torch_function__
...         return handle_torch_function(func, (a,), a)
...     return a + 0
torch.overrides.has_torch_function()
Check for __torch_function__ implementations in the elements of an iterable. Considers exact Tensor s and Parameter s non-dispatchable. Parameters
relevant_args (iterable) – Iterable of arguments to check for __torch_function__ methods. Returns
True if any of the elements of relevant_args have __torch_function__ implementations, False otherwise. Return type
bool See also
torch.is_tensor_like()
Checks if something is a Tensor-like, including an exact Tensor.
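Example (a sketch; MyTensorLike is a hypothetical class, and note that an exact Tensor is treated as non-dispatchable):
>>> class MyTensorLike:
...     def __torch_function__(self, func, types, args=(), kwargs=None):
...         return -1
>>> torch.overrides.has_torch_function((MyTensorLike(),))
True
>>> torch.overrides.has_torch_function((torch.ones(2),))
False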
torch.overrides.is_tensor_like(inp) [source]
Returns True if the passed-in input is a Tensor-like. Currently, this occurs whenever there’s a __torch_function__ attribute on the type of the input. Examples A subclass of tensor is generally a Tensor-like. >>> class SubTensor(torch.Tensor): ...
>>> is_tensor_like(SubTensor([0]))
True
Built-in or user types aren’t usually Tensor-like. >>> is_tensor_like(6)
False
>>> is_tensor_like(None)
False
>>> class NotATensor: ...
>>> is_tensor_like(NotATensor())
False
But, they can be made Tensor-like by implementing __torch_function__. >>> class TensorLike:
...     def __torch_function__(self, func, types, args, kwargs):
...         return -1
>>> is_tensor_like(TensorLike())
True
torch.overrides.is_tensor_method_or_property(func) [source]
Returns True if the function passed in is a handler for a method or property belonging to torch.Tensor, as passed into __torch_function__. Note For properties, their __get__ method must be passed in. This may be needed, in particular, for the following reasons: Methods/properties sometimes don’t contain a __module__ slot. They require that the first passed-in argument is an instance of torch.Tensor. Examples >>> is_tensor_method_or_property(torch.Tensor.add)
True
>>> is_tensor_method_or_property(torch.add)
False
torch.overrides.wrap_torch_function(dispatcher) [source]
Wraps a given function with __torch_function__ -related functionality. Parameters
dispatcher (Callable) – A callable that returns an iterable of Tensor-likes passed into the function. Note This decorator may reduce the performance of your code. Generally, it’s enough to express your code as a series of functions that, themselves, support __torch_function__. If you find yourself in the rare situation where this is not the case, e.g. if you’re wrapping a low-level library and you also need it to work for Tensor-likes, then this function is available. Examples >>> def dispatcher(a): # Must have the same signature as func
...     return (a,)
>>> @torch.overrides.wrap_torch_function(dispatcher)
... def func(a): # This will make func dispatchable by __torch_function__
...     return a + 0 | torch.overrides
torch.overrides.get_ignored_functions() [source]
Return public functions that cannot be overridden by __torch_function__. Returns
A tuple of functions that are publicly available in the torch API but cannot be overridden with __torch_function__. Mostly this is because none of the arguments of these functions are tensors or tensor-likes. Return type
Set[Callable] Examples >>> torch.Tensor.as_subclass in torch.overrides.get_ignored_functions()
True
>>> torch.add in torch.overrides.get_ignored_functions()
False | torch.overrides#torch.overrides.get_ignored_functions |
torch.overrides.get_overridable_functions() [source]
List functions that are overridable via __torch_function__ Returns
A dictionary that maps namespaces that contain overridable functions to functions in that namespace that can be overridden. Return type
Dict[Any, List[Callable]] | torch.overrides#torch.overrides.get_overridable_functions |
torch.overrides.get_testing_overrides() [source]
Return a dict containing dummy overrides for all overridable functions Returns
A dictionary that maps overridable functions in the PyTorch API to lambda functions that have the same signature as the real function and unconditionally return -1. These lambda functions are useful for testing API coverage for a type that defines __torch_function__. Return type
Dict[Callable, Callable] Examples >>> import inspect
>>> my_add = torch.overrides.get_testing_overrides()[torch.add]
>>> inspect.signature(my_add)
<Signature (input, other, out=None)> | torch.overrides#torch.overrides.get_testing_overrides |
torch.overrides.handle_torch_function(public_api, relevant_args, *args, **kwargs) [source]
Implement a function with checks for __torch_function__ overrides. See torch::autograd::handle_torch_function for the equivalent of this function in the C++ implementation. Parameters
public_api (function) – Function exposed by the public torch API originally called like public_api(*args, **kwargs) on which arguments are now being checked.
relevant_args (iterable) – Iterable of arguments to check for __torch_function__ methods.
args (tuple) – Arbitrary positional arguments originally passed into public_api.
kwargs (tuple) – Arbitrary keyword arguments originally passed into public_api. Returns
Result from calling implementation or an __torch_function__ method, as appropriate. Return type
object. Raises TypeError if no implementation is found. Example >>> def func(a):
...     if type(a) is not torch.Tensor: # This will make func dispatchable by __torch_function__
...         return handle_torch_function(func, (a,), a)
...     return a + 0 | torch.overrides#torch.overrides.handle_torch_function
torch.overrides.has_torch_function()
Check for __torch_function__ implementations in the elements of an iterable. Considers exact Tensor s and Parameter s non-dispatchable. Parameters
relevant_args (iterable) – Iterable of arguments to check for __torch_function__ methods. Returns
True if any of the elements of relevant_args have __torch_function__ implementations, False otherwise. Return type
bool See also
torch.is_tensor_like()
Checks if something is a Tensor-like, including an exact Tensor. | torch.overrides#torch.overrides.has_torch_function |
torch.overrides.is_tensor_like(inp) [source]
Returns True if the passed-in input is a Tensor-like. Currently, this occurs whenever there’s a __torch_function__ attribute on the type of the input. Examples A subclass of tensor is generally a Tensor-like. >>> class SubTensor(torch.Tensor): ...
>>> is_tensor_like(SubTensor([0]))
True
Built-in or user types aren’t usually Tensor-like. >>> is_tensor_like(6)
False
>>> is_tensor_like(None)
False
>>> class NotATensor: ...
>>> is_tensor_like(NotATensor())
False
But, they can be made Tensor-like by implementing __torch_function__. >>> class TensorLike:
...     def __torch_function__(self, func, types, args, kwargs):
...         return -1
>>> is_tensor_like(TensorLike())
True | torch.overrides#torch.overrides.is_tensor_like |
torch.overrides.is_tensor_method_or_property(func) [source]
Returns True if the function passed in is a handler for a method or property belonging to torch.Tensor, as passed into __torch_function__. Note For properties, their __get__ method must be passed in. This may be needed, in particular, for the following reasons: Methods/properties sometimes don’t contain a __module__ slot. They require that the first passed-in argument is an instance of torch.Tensor. Examples >>> is_tensor_method_or_property(torch.Tensor.add)
True
>>> is_tensor_method_or_property(torch.add)
False | torch.overrides#torch.overrides.is_tensor_method_or_property |
torch.overrides.wrap_torch_function(dispatcher) [source]
Wraps a given function with __torch_function__ -related functionality. Parameters
dispatcher (Callable) – A callable that returns an iterable of Tensor-likes passed into the function. Note This decorator may reduce the performance of your code. Generally, it’s enough to express your code as a series of functions that, themselves, support __torch_function__. If you find yourself in the rare situation where this is not the case, e.g. if you’re wrapping a low-level library and you also need it to work for Tensor-likes, then this function is available. Examples >>> def dispatcher(a): # Must have the same signature as func
...     return (a,)
>>> @torch.overrides.wrap_torch_function(dispatcher)
... def func(a): # This will make func dispatchable by __torch_function__
...     return a + 0 | torch.overrides#torch.overrides.wrap_torch_function
torch.pca_lowrank(A, q=None, center=True, niter=2) [source]
Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix. This function returns a namedtuple (U, S, V) which is the nearly optimal approximation of a singular value decomposition of a centered matrix A such that A = U diag(S) V^T. Note The relation of (U, S, V) to PCA is as follows:
A is a data matrix with m samples and n features; the columns of V represent the principal directions
S ** 2 / (m - 1) contains the eigenvalues of A^T A / (m - 1), which is the covariance of A when center=True is provided.
matmul(A, V[:, :k]) projects data to the first k principal components Note Different from the standard SVD, the sizes of the returned matrices depend on the specified rank and q values as follows:
U is an m x q matrix
S is a q-vector
V is an n x q matrix Note To obtain repeatable results, reset the seed for the pseudorandom number generator Parameters
A (Tensor) – the input tensor of size (*, m, n)
q (int, optional) – a slightly overestimated rank of A. By default, q = min(6, m, n).
center (bool, optional) – if True, center the input tensor, otherwise, assume that the input is centered.
niter (int, optional) – the number of subspace iterations to conduct; niter must be a nonnegative integer, and defaults to 2. References:
Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, arXiv:0909.4061 [math.NA; math.PR], 2009 (available at http://arxiv.org/abs/0909.4061). | torch.generated.torch.pca_lowrank#torch.pca_lowrank
torch.pinverse(input, rcond=1e-15) → Tensor
Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. Please look at Moore-Penrose inverse for more details Note torch.pinverse() is deprecated. Please use torch.linalg.pinv() instead which includes new parameters hermitian and out. Note This method is implemented using the Singular Value Decomposition. Note The pseudo-inverse is not necessarily a continuous function in the elements of the matrix [1]. Therefore, derivatives are not always existent, and exist for a constant rank only [2]. However, this method is backprop-able due to the implementation by using SVD results, and could be unstable. Double-backward will also be unstable due to the usage of SVD internally. See svd() for more details. Note Supports real and complex inputs. Batched version for complex inputs is only supported on the CPU. Parameters
input (Tensor) – The input tensor of size (*, m, n) where * is zero or more batch dimensions.
rcond (float, optional) – A floating point value to determine the cutoff for small singular values. Default: 1e-15. Returns
The pseudo-inverse of input of dimensions (*, n, m) Example: >>> input = torch.randn(3, 5)
>>> input
tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],
[-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],
[-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])
>>> torch.pinverse(input)
tensor([[ 0.0600, -0.1933, -0.2090],
[-0.0903, -0.0817, -0.4752],
[-0.7124, -0.1631, -0.2272],
[ 0.1356, 0.3933, -0.5023],
[-0.0308, -0.1725, -0.5216]])
>>> # Batched pinverse example
>>> a = torch.randn(2,6,3)
>>> b = torch.pinverse(a)
>>> torch.matmul(b, a)
tensor([[[ 1.0000e+00, 1.6391e-07, -1.1548e-07],
[ 8.3121e-08, 1.0000e+00, -2.7567e-07],
[ 3.5390e-08, 1.4901e-08, 1.0000e+00]],
[[ 1.0000e+00, -8.9407e-08, 2.9802e-08],
[-2.2352e-07, 1.0000e+00, 1.1921e-07],
[ 0.0000e+00, 8.9407e-08, 1.0000e+00]]]) | torch.generated.torch.pinverse#torch.pinverse |
torch.poisson(input, generator=None) → Tensor
Returns a tensor of the same size as input with each element sampled from a Poisson distribution with rate parameter given by the corresponding element in input, i.e., \text{out}_i \sim \text{Poisson}(\text{input}_i)
Parameters
input (Tensor) – the input tensor containing the rates of the Poisson distribution Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling Example: >>> rates = torch.rand(4, 4) * 5 # rate parameter between 0 and 5
>>> torch.poisson(rates)
tensor([[9., 1., 3., 5.],
[8., 6., 6., 0.],
[0., 4., 5., 3.],
[2., 1., 4., 2.]]) | torch.generated.torch.poisson#torch.poisson |
torch.polar(abs, angle, *, out=None) → Tensor
Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle. \text{out} = \text{abs} \cdot \cos(\text{angle}) + \text{abs} \cdot \sin(\text{angle}) \cdot j
Parameters
abs (Tensor) – The absolute value of the complex tensor. Must be float or double.
angle (Tensor) – The angle of the complex tensor. Must be same dtype as abs. Keyword Arguments
out (Tensor) – If the inputs are torch.float32, must be torch.complex64. If the inputs are torch.float64, must be torch.complex128. Example:
>>> import numpy as np
>>> abs = torch.tensor([1, 2], dtype=torch.float64)
>>> angle = torch.tensor([np.pi / 2, 5 * np.pi / 4], dtype=torch.float64)
>>> z = torch.polar(abs, angle)
>>> z
tensor([(0.0000+1.0000j), (-1.4142-1.4142j)], dtype=torch.complex128) | torch.generated.torch.polar#torch.polar |
torch.polygamma(n, input, *, out=None) → Tensor
Computes the nthn^{th} derivative of the digamma function on input. n≥0n \geq 0 is called the order of the polygamma function. ψ(n)(x)=d(n)dx(n)ψ(x)\psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x)
Note This function is implemented only for nonnegative integers n≥0n \geq 0 . Parameters
n (int) – the order of the polygamma function
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.tensor([1, 0.5])
>>> torch.polygamma(1, a)
tensor([1.64493, 4.9348])
>>> torch.polygamma(2, a)
tensor([ -2.4041, -16.8288])
>>> torch.polygamma(3, a)
tensor([ 6.4939, 97.4091])
>>> torch.polygamma(4, a)
tensor([ -24.8863, -771.4742]) | torch.generated.torch.polygamma#torch.polygamma |
torch.pow(input, exponent, *, out=None) → Tensor
Takes the power of each element in input with exponent and returns a tensor with the result. exponent can be either a single float number or a Tensor with the same number of elements as input. When exponent is a scalar value, the operation applied is: outi=xiexponent\text{out}_i = x_i ^ \text{exponent}
When exponent is a tensor, the operation applied is: outi=xiexponenti\text{out}_i = x_i ^ {\text{exponent}_i}
When exponent is a tensor, the shapes of input and exponent must be broadcastable. Parameters
input (Tensor) – the input tensor.
exponent (float or tensor) – the exponent value Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.4331, 1.2475, 0.6834, -0.2791])
>>> torch.pow(a, 2)
tensor([ 0.1875, 1.5561, 0.4670, 0.0779])
>>> exp = torch.arange(1., 5.)
>>> a = torch.arange(1., 5.)
>>> a
tensor([ 1., 2., 3., 4.])
>>> exp
tensor([ 1., 2., 3., 4.])
>>> torch.pow(a, exp)
tensor([ 1., 4., 27., 256.])
torch.pow(self, exponent, *, out=None) → Tensor
self is a scalar float value, and exponent is a tensor. The returned tensor out is of the same shape as exponent. The operation applied is: outi=selfexponenti\text{out}_i = \text{self} ^ {\text{exponent}_i}
Parameters
self (float) – the scalar base value for the power operation
exponent (Tensor) – the exponent tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> exp = torch.arange(1., 5.)
>>> base = 2
>>> torch.pow(base, exp)
tensor([ 2., 4., 8., 16.]) | torch.generated.torch.pow#torch.pow |
torch.prod(input, *, dtype=None) → Tensor
Returns the product of all elements in the input tensor. Parameters
input (Tensor) – the input tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8020, 0.5428, -1.5854]])
>>> torch.prod(a)
tensor(0.6902)
torch.prod(input, dim, keepdim=False, *, dtype=None) → Tensor
Returns the product of each row of the input tensor in the given dimension dim. If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input. Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.randn(4, 2)
>>> a
tensor([[ 0.5261, -0.3837],
[ 1.1857, -0.2498],
[-1.1646, 0.0705],
[ 1.1131, -1.0629]])
>>> torch.prod(a, 1)
tensor([-0.2018, -0.2962, -0.0821, -1.1831]) | torch.generated.torch.prod#torch.prod |
torch.promote_types(type1, type2) → dtype
Returns the torch.dtype with the smallest size and scalar kind that is not smaller nor of lower kind than either type1 or type2. See type promotion documentation for more information on the type promotion logic. Parameters
type1 (torch.dtype) –
type2 (torch.dtype) – Example: >>> torch.promote_types(torch.int32, torch.float32)
torch.float32
>>> torch.promote_types(torch.uint8, torch.long)
torch.long | torch.generated.torch.promote_types#torch.promote_types |
torch.qr(input, some=True, *, out=None) -> (Tensor, Tensor)
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input=QR\text{input} = Q R with QQ being an orthogonal matrix or batch of orthogonal matrices and RR being an upper triangular matrix or batch of upper triangular matrices. If some is True, then this function returns the thin (reduced) QR factorization. Otherwise, if some is False, this function returns the complete QR factorization. Warning torch.qr is deprecated. Please use torch.linalg.qr() instead. Differences with torch.linalg.qr:
torch.linalg.qr takes a string parameter mode instead of some:
some=True is equivalent to mode='reduced': both are the default
some=False is equivalent to mode='complete'. Warning If you plan to backpropagate through QR, note that the current backward implementation is only well-defined when the first min(input.size(−1),input.size(−2))\min(input.size(-1), input.size(-2)) columns of input are linearly independent. This behavior will probably change once QR supports pivoting. Note This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms. Parameters
input (Tensor) – the input tensor of size (∗,m,n)(*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m×nm \times n .
some (bool, optional) –
Set to True for reduced QR decomposition and False for complete QR decomposition. If k = min(m, n) then:
some=True : returns (Q, R) with dimensions (m, k), (k, n) (default)
some=False: returns (Q, R) with dimensions (m, m), (m, n) Keyword Arguments
out (tuple, optional) – tuple of Q and R tensors. The dimensions of Q and R are detailed in the description of some above. Example: >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.qr(a)
>>> q
tensor([[-0.8571, 0.3943, 0.3314],
[-0.4286, -0.9029, -0.0343],
[ 0.2857, -0.1714, 0.9429]])
>>> r
tensor([[ -14.0000, -21.0000, 14.0000],
[ 0.0000, -175.0000, 70.0000],
[ 0.0000, 0.0000, -35.0000]])
>>> torch.mm(q, r).round()
tensor([[ 12., -51., 4.],
[ 6., 167., -68.],
[ -4., 24., -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1., 0., 0.],
[ 0., 1., -0.],
[ 0., -0., 1.]])
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.qr(a, some=False)
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(4))
True | torch.generated.torch.qr#torch.qr |
torch.quantile(input, q) → Tensor
Returns the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points. Parameters
input (Tensor) – the input tensor.
q (float or Tensor) – a scalar or 1D tensor of quantile values in the range [0, 1] Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.0700, -0.5446, 0.9214]])
>>> q = torch.tensor([0, 0.5, 1])
>>> torch.quantile(a, q)
tensor([-0.5446, 0.0700, 0.9214])
torch.quantile(input, q, dim=None, keepdim=False, *, out=None) → Tensor
Returns the q-th quantiles of each row of the input tensor along the dimension dim, doing a linear interpolation when the q-th quantile lies between two data points. By default, dim is None resulting in the input tensor being flattened before computation. If keepdim is True, the output dimensions are of the same size as input except in the dimensions being reduced (dim or all if dim is None) where they have size 1. Otherwise, the dimensions being reduced are squeezed (see torch.squeeze()). If q is a 1D tensor, an extra dimension is prepended to the output tensor with the same size as q which represents the quantiles. Parameters
input (Tensor) – the input tensor.
q (float or Tensor) – a scalar or 1D tensor of quantile values in the range [0, 1]
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(2, 3)
>>> a
tensor([[ 0.0795, -1.2117, 0.9765],
[ 1.1707, 0.6706, 0.4884]])
>>> q = torch.tensor([0.25, 0.5, 0.75])
>>> torch.quantile(a, q, dim=1, keepdim=True)
tensor([[[-0.5661],
[ 0.5795]],
[[ 0.0795],
[ 0.6706]],
[[ 0.5280],
[ 0.9206]]])
>>> torch.quantile(a, q, dim=1, keepdim=True).shape
torch.Size([3, 2, 1]) | torch.generated.torch.quantile#torch.quantile |
torch.quantization This module implements the functions you call directly to convert your model from FP32 to quantized form. For example, prepare() is used in post-training quantization to prepare your model for the calibration step, and convert() actually converts the weights to int8 and replaces the operations with their quantized counterparts. There are other helper functions for things like quantizing the input to your model and performing critical fusions like conv+relu. Top-level quantization APIs
torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False) [source]
Quantizes the input float model with post-training static quantization. First it prepares the model for calibration, then it calls run_fn to run the calibration step, and finally it converts the model to a quantized model. A short usage sketch follows the parameter list below. Parameters
model – input float model
run_fn – a calibration function for calibrating the prepared model
run_args – positional arguments for run_fn
inplace – carry out model transformations in-place, the original module is mutated
mapping – correspondence between original module types and quantized counterparts Returns
Quantized model.
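A minimal sketch of this flow, not taken from the library's own examples: it assumes the documented behavior that run_args are unpacked as positional arguments for run_fn, and it uses QuantWrapper (described further below) to supply the quant/dequant stubs; the model, calibration function, and random data are all illustrative.
import torch
import torch.nn as nn
import torch.quantization

# Hypothetical tiny float model; QuantWrapper adds the quant/dequant stubs
# that eager-mode static quantization needs around the wrapped module.
model = torch.quantization.QuantWrapper(nn.Linear(4, 2)).eval()
model.qconfig = torch.quantization.default_qconfig

def calibrate(m, data):
    # run_fn: feed representative data through the prepared model
    with torch.no_grad():
        for x in data:
            m(x)

calib_data = [torch.randn(8, 4) for _ in range(4)]  # illustrative data
qmodel = torch.quantization.quantize(model, run_fn=calibrate, run_args=[calib_data])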
torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False) [source]
Converts a float model to a dynamic (i.e. weights-only) quantized model. Replaces the specified modules with dynamic weight-only quantized versions and outputs the quantized model. For the simplest usage, provide the dtype argument, which can be float16 or qint8. By default, weight-only quantization is performed for layers with large weight sizes, i.e. Linear and RNN variants. Fine-grained control is possible with qconfig and mapping, which act similarly to quantize(). If qconfig is provided, the dtype argument is ignored. A short usage sketch follows the parameter list below. Parameters
model – input model
qconfig_spec –
Either a dictionary that maps from the name or type of a submodule to a quantization configuration (a qconfig applies to all submodules of a given module unless a qconfig for a submodule is specified, i.e. when the submodule already has a qconfig attribute; entries in the dictionary need to be QConfigDynamic instances), or a set of types and/or submodule names to apply dynamic quantization to, in which case the dtype argument is used to specify the bit-width
inplace – carry out model transformations in-place, the original module is mutated
mapping – maps type of a submodule to a type of corresponding dynamically quantized version with which the submodule needs to be replaced
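A minimal sketch of the simplest usage; the two-layer model is illustrative.
import torch
import torch.nn as nn
import torch.quantization

# Only the Linear layers' weights are quantized to int8; activations stay
# in float and are quantized dynamically at runtime.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
qmodel = torch.quantization.quantize_dynamic(
    model, qconfig_spec={nn.Linear}, dtype=torch.qint8)
out = qmodel(torch.randn(2, 16))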
torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False) [source]
Does quantization-aware training and outputs a quantized model. A short usage sketch follows the parameter list below. Parameters
model – input model
run_fn – a function for evaluating the prepared model, can be a function that simply runs the prepared model or a training loop
run_args – positional arguments for run_fn
Returns
Quantized model.
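A minimal sketch, assuming the fbgemm backend's default QAT qconfig and the documented behavior that run_args are unpacked as positional arguments for run_fn; the wrapped model, training function, and random data are illustrative.
import torch
import torch.nn as nn
import torch.quantization

model = torch.quantization.QuantWrapper(nn.Linear(4, 2))
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model.train()  # quantization-aware training runs on a model in training mode

def train_fn(m, data):
    # run_fn: a short training loop over the prepared (fake-quantized) model
    opt = torch.optim.SGD(m.parameters(), lr=0.01)
    for x, y in data:
        loss = nn.functional.mse_loss(m(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

data = [(torch.randn(8, 4), torch.randn(8, 2)) for _ in range(4)]
qmodel = torch.quantization.quantize_qat(model, run_fn=train_fn, run_args=[data])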
torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training. Quantization configuration should be assigned preemptively to individual submodules via the .qconfig attribute. Observer or fake-quant modules will be attached to the model, and qconfig will be propagated. A sketch of the prepare/convert flow follows the prepare_custom_config_dict example below. Parameters
model – input model to be modified in-place
inplace – carry out model transformations in-place, the original module is mutated
allow_list – list of quantizable modules
observer_non_leaf_module_list – list of non-leaf modules we want to add observer
prepare_custom_config_dict – customization configuration dictionary for prepare function # Example of prepare_custom_config_dict:
prepare_custom_config_dict = {
# user will manually define the corresponding observed
# module class which has a from_float class method that converts
# float custom module to observed custom module
"float_to_observed_custom_module_class": {
CustomModule: ObservedCustomModule
}
}
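A minimal sketch of the manual flow that quantize() wraps: prepare() attaches observers, a calibration pass feeds them data, and convert() swaps in the quantized modules; the model and data are illustrative.
import torch
import torch.nn as nn
import torch.quantization

model = torch.quantization.QuantWrapper(nn.Linear(4, 2)).eval()
model.qconfig = torch.quantization.default_qconfig

prepared = torch.quantization.prepare(model)      # attaches observers
with torch.no_grad():
    for _ in range(4):
        prepared(torch.randn(8, 4))               # calibration pass
quantized = torch.quantization.convert(prepared)  # swaps in quantized modules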
torch.quantization.prepare_qat(model, mapping=None, inplace=False) [source]
Prepares a copy of the model for quantization-aware training and converts its submodules to their quantization-aware counterparts according to mapping. Quantization configuration should be assigned preemptively to individual submodules via the .qconfig attribute. Parameters
model – input model to be modified in-place
mapping – dictionary that maps float modules to quantized modules to be replaced.
inplace – carry out model transformations in-place, the original module is mutated
torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, convert_custom_config_dict=None) [source]
Converts submodules in the input module to different modules according to mapping by calling the from_float method on the target module class. It also removes the qconfig at the end if remove_qconfig is set to True. Parameters
module – prepared and calibrated module
mapping – a dictionary that maps from source module type to target module type, can be overwritten to allow swapping user defined Modules
inplace – carry out model transformations in-place, the original module is mutated
convert_custom_config_dict – custom configuration dictionary for convert function # Example of convert_custom_config_dict:
convert_custom_config_dict = {
# user will manually define the corresponding quantized
# module class which has a from_observed class method that converts
# observed custom module to quantized custom module
"observed_to_quantized_custom_module_class": {
ObservedCustomModule: QuantizedCustomModule
}
}
class torch.quantization.QConfig [source]
Describes how to quantize a layer or a part of the network by providing settings (observer classes) for activations and weights respectively. Note that QConfig needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization preparation function will instantiate observers multiple times for each of the layers. Observer classes usually have reasonable default arguments, but they can be overridden with the with_args method (which behaves like functools.partial): my_qconfig = QConfig(activation=MinMaxObserver.with_args(dtype=torch.qint8), weight=default_observer.with_args(dtype=torch.qint8))
class torch.quantization.QConfigDynamic [source]
Describes how to dynamically quantize a layer or a part of the network by providing settings (observer classes) for weights. It’s like QConfig, but for dynamic quantization. Note that QConfigDynamic needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization function will instantiate observers multiple times for each of the layers. Observer classes usually have reasonable default arguments, but they can be overridden with the with_args method (which behaves like functools.partial): my_qconfig = QConfigDynamic(weight=default_observer.with_args(dtype=torch.qint8))
Preparing model for quantization
torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=<function fuse_known_modules>, fuse_custom_config_dict=None) [source]
Fuses a list of modules into a single module. Fuses only the following sequences of modules: conv, bn; conv, bn, relu; conv, relu; linear, relu; bn, relu. All other sequences are left unchanged. For these sequences, the first item in the list is replaced with the fused module and the rest of the modules are replaced with identity. Parameters
model – Model containing the modules to be fused
modules_to_fuse – list of list of module names to fuse. Can also be a list of strings if there is only a single list of modules to fuse.
inplace – bool specifying if fusion happens in place on the model, by default a new model is returned
fuser_func – Function that takes in a list of modules and outputs a list of fused modules of the same length. For example, fuser_func([convModule, BNModule]) returns the list [ConvBNModule, nn.Identity()]. Defaults to torch.quantization.fuse_known_modules
fuse_custom_config_dict – custom configuration for fusion # Example of fuse_custom_config_dict
fuse_custom_config_dict = {
# Additional fuser_method mapping
"additional_fuser_method_mapping": {
(torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn
},
}
Returns
model with fused modules. A new copy is created if inplace=False. Examples: >>> m = myModel()
>>> # m is a module containing the sub-modules below
>>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
>>> m = myModel()
>>> # Alternately provide a single list of modules to fuse
>>> modules_to_fuse = ['conv1', 'bn1', 'relu1']
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
class torch.quantization.QuantStub(qconfig=None) [source]
Quantize stub module. Before calibration, this is the same as an observer; it will be swapped to nnq.Quantize in convert. Parameters
qconfig – quantization configuration for the tensor; if qconfig is not provided, we will get the qconfig from the parent modules
class torch.quantization.DeQuantStub [source]
Dequantize stub module. Before calibration, this is the same as identity; it will be swapped to nnq.DeQuantize in convert.
class torch.quantization.QuantWrapper(module) [source]
A wrapper class that wraps the input module, adds QuantStub and DeQuantStub, and surrounds the call to the module with calls to the quant and dequant modules. This is used by the quantization utility functions to add the quant and dequant modules. Before convert, QuantStub is just an observer: it observes the input tensor. After convert, QuantStub is swapped to nnq.Quantize, which does the actual quantization. Similarly for DeQuantStub.
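When QuantWrapper's automatic placement is not flexible enough, the stubs can be placed by hand; a minimal, illustrative sketch:
import torch
import torch.nn as nn
import torch.quantization

class StubbedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(1, 1, 1)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)       # observer before convert, nnq.Quantize after
        x = self.conv(x)
        return self.dequant(x)  # identity before convert, nnq.DeQuantize after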
torch.quantization.add_quant_dequant(module) [source]
Wraps the leaf child modules in QuantWrapper if they have a valid qconfig. Note that this function will modify the children of the module in place, and it can also return a new module that wraps the input module. Parameters
module – input module with qconfig attributes for all the leaf modules that we want to quantize Returns
Either the inplace modified module with submodules wrapped in QuantWrapper based on qconfig, or a new QuantWrapper module which wraps the input module; the latter case only happens when the input module is a leaf module and we want to quantize it.
Utility functions
torch.quantization.add_observer_(module, qconfig_propagation_list=None, non_leaf_module_list=None, device=None, custom_module_class_mapping=None) [source]
Adds observers for the leaf children of the module. This function inserts an observer module into every leaf child module that has a valid qconfig attribute. Parameters
module – input module with qconfig attributes for all the leaf modules that we want to quantize
device – parent device, if any
non_leaf_module_list – list of non-leaf modules we want to add observer Returns
None, module is modified inplace with added observer modules and forward_hooks
torch.quantization.swap_module(mod, mapping, custom_module_class_mapping) [source]
Swaps the module if it has a quantized counterpart and it has an observer attached. Parameters
mod – input module
mapping – a dictionary that maps from nn module to nnq module Returns
The corresponding quantized module of mod
torch.quantization.propagate_qconfig_(module, qconfig_dict=None, allow_list=None) [source]
Propagates qconfig through the module hierarchy and assigns the qconfig attribute to each leaf module. A short usage sketch follows the parameter list below. Parameters
module – input module
qconfig_dict – dictionary that maps from name or type of submodule to quantization configuration, qconfig applies to all submodules of a given module unless qconfig for the submodules are specified (when the submodule already has qconfig attribute) Returns
None, module is modified inplace with qconfig attached
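A minimal sketch: assign a qconfig on the parent and let propagate_qconfig_ push it down to the leaves; the model is illustrative.
import torch.nn as nn
import torch.quantization

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
model.qconfig = torch.quantization.default_qconfig
torch.quantization.propagate_qconfig_(model)
assert hasattr(model[0], 'qconfig')  # the Linear leaf now carries a qconfig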
torch.quantization.default_eval_fn(model, calib_data) [source]
The default evaluation function takes a torch.utils.data.Dataset or a list of input Tensors and runs the model on the dataset.
Observers
class torch.quantization.ObserverBase(dtype) [source]
Base observer Module. Any observer implementation should derive from this class. Concrete observers should follow the same API. In forward, they will update the statistics of the observed Tensor, and they should provide a calculate_qparams function that computes the quantization parameters given the collected statistics. Parameters
dtype – Quantized data type
classmethod with_args(**kwargs)
Wrapper that allows creation of class factories. This can be useful when there is a need to create classes with the same constructor arguments, but different instances. Example: >>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1) == id(foo_instance2)
False
class torch.quantization.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running min and max values. This observer uses the tensor min/max statistics to compute the quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. Given the running min/max as x_\text{min} and x_\text{max}, the scale s and zero point z are computed as follows. The running minimum/maximum x_\text{min/max} is computed as:
x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ \min(x_\text{min}, \min(X)) & \text{otherwise} \end{cases}
x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ \max(x_\text{max}, \max(X)) & \text{otherwise} \end{cases}
where X is the observed tensor. The scale s and zero point z are then computed as:
\text{if symmetric:} \quad s = 2 \max(|x_\text{min}|, x_\text{max}) / (Q_\text{max} - Q_\text{min}), \quad z = \begin{cases} 0 & \text{if dtype is qint8} \\ 128 & \text{otherwise} \end{cases}
\text{otherwise:} \quad s = (x_\text{max} - x_\text{min}) / (Q_\text{max} - Q_\text{min}), \quad z = Q_\text{min} - \text{round}(x_\text{min} / s)
where Q_\text{min} and Q_\text{max} are the minimum and maximum of the quantized data type. Warning Only works with torch.per_tensor_symmetric quantization scheme Warning dtype can only take torch.qint8 or torch.quint8. Note If the running minimum equals the running maximum, the scale and zero_point are set to 1.0 and 0.
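A minimal sketch: feed a couple of tensors through a MinMaxObserver and read back the scale and zero point derived from the running min/max; the values are illustrative.
import torch
import torch.quantization

obs = torch.quantization.MinMaxObserver(dtype=torch.quint8,
                                        qscheme=torch.per_tensor_affine)
obs(torch.tensor([-1.0, 0.0, 2.0]))  # running min/max become -1.0 / 2.0
obs(torch.tensor([0.5, 3.0]))        # running max grows to 3.0
scale, zero_point = obs.calculate_qparams()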
class torch.quantization.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the moving average of the min and max values. This observer computes the quantization parameters based on the moving averages of minimums and maximums of the incoming tensors. The module records the average minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
averaging_constant – Averaging constant for min/max.
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The moving average min/max is computed as follows:
x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ (1 - c) x_\text{min} + c \min(X) & \text{otherwise} \end{cases}
x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ (1 - c) x_\text{max} + c \max(X) & \text{otherwise} \end{cases}
where x_\text{min/max} is the running average min/max, X is the incoming tensor, and c is the averaging_constant. The scale and zero point are then computed as in MinMaxObserver. Note Only works with torch.per_tensor_affine quantization scheme. Note If the running minimum equals the running maximum, the scale and zero_point are set to 1.0 and 0.
class torch.quantization.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
ch_axis – Channel axis
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in MinMaxObserver, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0.
class torch.quantization.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
averaging_constant – Averaging constant for min/max.
ch_axis – Channel axis
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in MovingAverageMinMaxObserver, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0.
class torch.quantization.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False) [source]
The module records the running histogram of tensor values along with min/max values. calculate_qparams will calculate scale and zero_point. Parameters
bins – Number of bins to use for the histogram
upsample_rate – Factor by which the histograms are upsampled, this is used to interpolate histograms with varying ranges across observations
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit The scale and zero point are computed as follows:
Create the histogram of the incoming inputs.
The histogram is computed continuously, and the ranges per bin change with every new tensor observed.
Search the distribution in the histogram for optimal min/max values.
The search for the min/max values ensures the minimization of the quantization error with respect to the floating point model.
Compute the scale and zero point the same way as in the MinMaxObserver.
class torch.quantization.FakeQuantize(observer=<class 'torch.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs) [source]
Simulates the quantize and dequantize operations at training time. The output of this module is given by x_out = (clamp(round(x/scale + zero_point), quant_min, quant_max) - zero_point) * scale. A numeric sketch of this formula follows the parameter list below.
scale defines the scale factor used for quantization.
zero_point specifies the quantized value to which 0 in floating point maps
quant_min specifies the minimum allowable quantized value.
quant_max specifies the maximum allowable quantized value.
fake_quant_enable controls the application of fake quantization on tensors; note that statistics can still be updated.
observer_enable controls statistics collection on tensors
dtype specifies the quantized dtype that is being emulated with fake-quantization; allowable values are torch.qint8 and torch.quint8. The values of quant_min and quant_max should be chosen to be consistent with the dtype. Parameters
observer (module) – Module for observing statistics on input tensors and calculating scale and zero-point.
quant_min (int) – The minimum allowable quantized value.
quant_max (int) – The maximum allowable quantized value.
observer_kwargs (optional) – Arguments for the observer module Variables
~FakeQuantize.observer (Module) – User provided module that collects statistics on the input tensor and provides a method to calculate scale and zero-point.
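A minimal sketch reproducing the formula above with plain tensor ops; the scale, zero_point, and input values are illustrative.
import torch

x = torch.tensor([-0.6, 0.0, 0.4, 1.3])
scale, zero_point, quant_min, quant_max = 0.01, 128, 0, 255

q = torch.clamp(torch.round(x / scale + zero_point), quant_min, quant_max)
x_out = (q - zero_point) * scale
# x_out is tensor([-0.6000, 0.0000, 0.4000, 1.2700]); 1.3 exceeded quant_max,
# was clamped to 255, and so comes back as 1.27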
class torch.quantization.NoopObserver(dtype=torch.float16, custom_op_name='') [source]
Observer that doesn’t do anything and just passes its configuration to the quantized module’s .from_float(). Primarily used for quantization to float16 which doesn’t require determining ranges. Parameters
dtype – Quantized data type
custom_op_name – (temporary) specify this observer for an operator that doesn’t require any observation (Can be used in Graph Mode Passes for special case ops).
Debugging utilities
torch.quantization.get_observer_dict(mod, target_dict, prefix='') [source]
Traverses the modules and saves all observers into a dict. This is mainly used for quantization accuracy debugging. Parameters
mod – the top module from which we want to save all observers
prefix – the prefix for the current module
target_dict – the dictionary used to save all the observers
class torch.quantization.RecordingObserver(**kwargs) [source]
The module is mainly for debugging and records the tensor values during runtime. Parameters
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
nn.intrinsic | torch.quantization |
torch.quantization.add_observer_(module, qconfig_propagation_list=None, non_leaf_module_list=None, device=None, custom_module_class_mapping=None) [source]
Adds observers for the leaf children of the module. This function inserts an observer module into every leaf child module that has a valid qconfig attribute. Parameters
module – input module with qconfig attributes for all the leaf modules that we want to quantize
device – parent device, if any
non_leaf_module_list – list of non-leaf modules we want to add observer Returns
None, module is modified inplace with added observer modules and forward_hooks | torch.quantization#torch.quantization.add_observer_ |
torch.quantization.add_quant_dequant(module) [source]
Wraps the leaf child modules in QuantWrapper if they have a valid qconfig. Note that this function will modify the children of the module in place, and it can also return a new module that wraps the input module. Parameters
module – input module with qconfig attributes for all the leaf modules that we want to quantize Returns
Either the inplace modified module with submodules wrapped in QuantWrapper based on qconfig, or a new QuantWrapper module which wraps the input module; the latter case only happens when the input module is a leaf module and we want to quantize it. | torch.quantization#torch.quantization.add_quant_dequant |
torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, convert_custom_config_dict=None) [source]
Converts submodules in the input module to different modules according to mapping by calling the from_float method on the target module class. It also removes the qconfig at the end if remove_qconfig is set to True. Parameters
module – prepared and calibrated module
mapping – a dictionary that maps from source module type to target module type, can be overwritten to allow swapping user defined Modules
inplace – carry out model transformations in-place, the original module is mutated
convert_custom_config_dict – custom configuration dictionary for convert function # Example of convert_custom_config_dict:
convert_custom_config_dict = {
# user will manually define the corresponding quantized
# module class which has a from_observed class method that converts
# observed custom module to quantized custom module
"observed_to_quantized_custom_module_class": {
ObservedCustomModule: QuantizedCustomModule
}
} | torch.quantization#torch.quantization.convert |
torch.quantization.default_eval_fn(model, calib_data) [source]
The default evaluation function takes a torch.utils.data.Dataset or a list of input Tensors and runs the model on the dataset. | torch.quantization#torch.quantization.default_eval_fn |
class torch.quantization.DeQuantStub [source]
Dequantize stub module. Before calibration, this is the same as identity; it will be swapped to nnq.DeQuantize in convert. | torch.quantization#torch.quantization.DeQuantStub |
class torch.quantization.FakeQuantize(observer=<class 'torch.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs) [source]
Simulates the quantize and dequantize operations at training time. The output of this module is given by x_out = (clamp(round(x/scale + zero_point), quant_min, quant_max) - zero_point) * scale
scale defines the scale factor used for quantization.
zero_point specifies the quantized value to which 0 in floating point maps
quant_min specifies the minimum allowable quantized value.
quant_max specifies the maximum allowable quantized value.
fake_quant_enable controls the application of fake quantization on tensors; note that statistics can still be updated.
observer_enable controls statistics collection on tensors
dtype specifies the quantized dtype that is being emulated with fake-quantization; allowable values are torch.qint8 and torch.quint8. The values of quant_min and quant_max should be chosen to be consistent with the dtype. Parameters
observer (module) – Module for observing statistics on input tensors and calculating scale and zero-point.
quant_min (int) – The minimum allowable quantized value.
quant_max (int) – The maximum allowable quantized value.
observer_kwargs (optional) – Arguments for the observer module Variables
~FakeQuantize.observer (Module) – User provided module that collects statistics on the input tensor and provides a method to calculate scale and zero-point. | torch.quantization#torch.quantization.FakeQuantize |
torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=<function fuse_known_modules>, fuse_custom_config_dict=None) [source]
Fuses a list of modules into a single module. Fuses only the following sequences of modules: conv, bn; conv, bn, relu; conv, relu; linear, relu; bn, relu. All other sequences are left unchanged. For these sequences, the first item in the list is replaced with the fused module and the rest of the modules are replaced with identity. Parameters
model – Model containing the modules to be fused
modules_to_fuse – list of list of module names to fuse. Can also be a list of strings if there is only a single list of modules to fuse.
inplace – bool specifying if fusion happens in place on the model, by default a new model is returned
fuser_func – Function that takes in a list of modules and outputs a list of fused modules of the same length. For example, fuser_func([convModule, BNModule]) returns the list [ConvBNModule, nn.Identity()]. Defaults to torch.quantization.fuse_known_modules
fuse_custom_config_dict – custom configuration for fusion # Example of fuse_custom_config_dict
fuse_custom_config_dict = {
# Additional fuser_method mapping
"additional_fuser_method_mapping": {
(torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn
},
}
Returns
model with fused modules. A new copy is created if inplace=False. Examples: >>> m = myModel()
>>> # m is a module containing the sub-modules below
>>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input)
>>> m = myModel()
>>> # Alternately provide a single list of modules to fuse
>>> modules_to_fuse = ['conv1', 'bn1', 'relu1']
>>> fused_m = torch.quantization.fuse_modules(m, modules_to_fuse)
>>> output = fused_m(input) | torch.quantization#torch.quantization.fuse_modules |
torch.quantization.get_observer_dict(mod, target_dict, prefix='') [source]
Traverses the modules and saves all observers into a dict. This is mainly used for quantization accuracy debugging. Parameters
mod – the top module from which we want to save all observers
prefix – the prefix for the current module
target_dict – the dictionary used to save all the observers | torch.quantization#torch.quantization.get_observer_dict |
class torch.quantization.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False) [source]
The module records the running histogram of tensor values along with min/max values. calculate_qparams will calculate scale and zero_point. Parameters
bins – Number of bins to use for the histogram
upsample_rate – Factor by which the histograms are upsampled, this is used to interpolate histograms with varying ranges across observations
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit The scale and zero point are computed as follows:
Create the histogram of the incoming inputs.
The histogram is computed continuously, and the ranges per bin change with every new tensor observed.
Search the distribution in the histogram for optimal min/max values.
The search for the min/max values ensures the minimization of the quantization error with respect to the floating point model.
Compute the scale and zero point the same way as in the MinMaxObserver. | torch.quantization#torch.quantization.HistogramObserver |
class torch.quantization.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running min and max values. This observer uses the tensor min/max statistics to compute the quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. Given the running min/max as x_\text{min} and x_\text{max}, the scale s and zero point z are computed as follows. The running minimum/maximum x_\text{min/max} is computed as:
x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ \min(x_\text{min}, \min(X)) & \text{otherwise} \end{cases}
x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ \max(x_\text{max}, \max(X)) & \text{otherwise} \end{cases}
where X is the observed tensor. The scale s and zero point z are then computed as:
\text{if symmetric:} \quad s = 2 \max(|x_\text{min}|, x_\text{max}) / (Q_\text{max} - Q_\text{min}), \quad z = \begin{cases} 0 & \text{if dtype is qint8} \\ 128 & \text{otherwise} \end{cases}
\text{otherwise:} \quad s = (x_\text{max} - x_\text{min}) / (Q_\text{max} - Q_\text{min}), \quad z = Q_\text{min} - \text{round}(x_\text{min} / s)
where Q_\text{min} and Q_\text{max} are the minimum and maximum of the quantized data type. Warning Only works with torch.per_tensor_symmetric quantization scheme Warning dtype can only take torch.qint8 or torch.quint8. Note If the running minimum equals the running maximum, the scale and zero_point are set to 1.0 and 0. | torch.quantization#torch.quantization.MinMaxObserver |
class torch.quantization.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the moving average of the min and max values. This observer computes the quantization parameters based on the moving averages of minimums and maximums of the incoming tensors. The module records the average minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
averaging_constant – Averaging constant for min/max.
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The moving average min/max is computed as follows:
x_\text{min} = \begin{cases} \min(X) & \text{if } x_\text{min} = \text{None} \\ (1 - c) x_\text{min} + c \min(X) & \text{otherwise} \end{cases}
x_\text{max} = \begin{cases} \max(X) & \text{if } x_\text{max} = \text{None} \\ (1 - c) x_\text{max} + c \max(X) & \text{otherwise} \end{cases}
where x_\text{min/max} is the running average min/max, X is the incoming tensor, and c is the averaging_constant. The scale and zero point are then computed as in MinMaxObserver. Note Only works with torch.per_tensor_affine quantization scheme. Note If the running minimum equals the running maximum, the scale and zero_point are set to 1.0 and 0. | torch.quantization#torch.quantization.MovingAverageMinMaxObserver |
class torch.quantization.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
averaging_constant – Averaging constant for min/max.
ch_axis – Channel axis
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in MovingAverageMinMaxObserver, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0. | torch.quantization#torch.quantization.MovingAveragePerChannelMinMaxObserver |
class torch.quantization.NoopObserver(dtype=torch.float16, custom_op_name='') [source]
Observer that doesn’t do anything and just passes its configuration to the quantized module’s .from_float(). Primarily used for quantization to float16 which doesn’t require determining ranges. Parameters
dtype – Quantized data type
custom_op_name – (temporary) specify this observer for an operator that doesn’t require any observation (Can be used in Graph Mode Passes for special case ops). | torch.quantization#torch.quantization.NoopObserver |
class torch.quantization.ObserverBase(dtype) [source]
Base observer Module. Any observer implementation should derive from this class. Concrete observers should follow the same API. In forward, they will update the statistics of the observed Tensor, and they should provide a calculate_qparams function that computes the quantization parameters given the collected statistics. Parameters
dtype – Quantized data type
classmethod with_args(**kwargs)
Wrapper that allows creation of class factories. This can be useful when there is a need to create classes with the same constructor arguments, but different instances. Example: >>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1) == id(foo_instance2)
False | torch.quantization#torch.quantization.ObserverBase |
classmethod with_args(**kwargs)
Wrapper that allows creation of class factories. This can be useful when there is a need to create classes with the same constructor arguments, but different instances. Example: >>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1) == id(foo_instance2)
False | torch.quantization#torch.quantization.ObserverBase.with_args |
class torch.quantization.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters
ch_axis – Channel axis
dtype – Quantized data type
qscheme – Quantization scheme to be used
reduce_range – Reduces the range of the quantized data type by 1 bit
quant_min – Minimum quantization value. If unspecified, it will follow the 8-bit setup.
quant_max – Maximum quantization value. If unspecified, it will follow the 8-bit setup. The quantization parameters are computed the same way as in MinMaxObserver, with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0. | torch.quantization#torch.quantization.PerChannelMinMaxObserver |
torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training. Quantization configuration should be assigned preemptively to individual submodules via the .qconfig attribute. Observer or fake-quant modules will be attached to the model, and qconfig will be propagated. Parameters
model – input model to be modified in-place
inplace – carry out model transformations in-place, the original module is mutated
allow_list – list of quantizable modules
observer_non_leaf_module_list – list of non-leaf modules we want to add observer
prepare_custom_config_dict – customization configuration dictionary for prepare function # Example of prepare_custom_config_dict:
prepare_custom_config_dict = {
# user will manually define the corresponding observed
# module class which has a from_float class method that converts
# float custom module to observed custom module
"float_to_observed_custom_module_class": {
CustomModule: ObservedCustomModule
}
} | torch.quantization#torch.quantization.prepare |