| doc_content | doc_id |
|---|---|
class torch.iinfo | torch.type_info#torch.torch.iinfo |
class torch.layout | torch.tensor_attributes#torch.torch.layout |
class torch.memory_format | torch.tensor_attributes#torch.torch.memory_format |
torch.trace(input) → Tensor
Returns the sum of the elements of the diagonal of the input 2-D matrix. Example: >>> x = torch.arange(1., 10.).view(3, 3)
>>> x
tensor([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
>>> torch.trace(x)
tensor(15.) | torch.generated.torch.trace#torch.trace |
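As a minimal pure-Python sketch (an illustration of the definition, not the PyTorch implementation), the trace is just the sum of the main-diagonal elements:

```python
def trace(matrix):
    """Sum the main-diagonal elements of a 2-D matrix given as a list of rows."""
    n = min(len(matrix), len(matrix[0]))
    return sum(matrix[i][i] for i in range(n))

x = [[1., 2., 3.],
     [4., 5., 6.],
     [7., 8., 9.]]
print(trace(x))  # 15.0, matching the torch.trace example above
```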
torch.transpose(input, dim0, dim1) → Tensor
Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped. The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other. Parameters
input (Tensor) – the input tensor.
dim0 (int) – the first dimension to be transposed
dim1 (int) – the second dimension to be transposed Example: >>> x = torch.randn(2, 3)
>>> x
tensor([[ 1.0028, -0.9893, 0.5809],
[-0.1669, 0.7299, 0.4942]])
>>> torch.transpose(x, 0, 1)
tensor([[ 1.0028, -0.1669],
[-0.9893, 0.7299],
[ 0.5809, 0.4942]]) | torch.generated.torch.transpose#torch.transpose |
torch.trapz(y, x, *, dim=-1) → Tensor
Estimate \int y\,dx along dim, using the trapezoid rule. Parameters
y (Tensor) – The values of the function to integrate
x (Tensor) – The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively to the estimated integral (i.e., the convention \int_a^b f = -\int_b^a f is followed).
dim (int) – The dimension along which to integrate. By default, use the last dimension. Returns
A Tensor with the same shape as the input, except with dim removed. Each element of the returned tensor represents the estimated integral \int y\,dx along dim. Example: >>> y = torch.randn((2, 3))
>>> y
tensor([[-2.1156, 0.6857, -0.2700],
[-1.2145, 0.5540, 2.0431]])
>>> x = torch.tensor([[1, 3, 4], [1, 2, 3]])
>>> torch.trapz(y, x)
tensor([-1.2220, 0.9683])
torch.trapz(y, *, dx=1, dim=-1) → Tensor
As above, but the sample points are spaced uniformly at a distance of dx. Parameters
y (Tensor) – The values of the function to integrate Keyword Arguments
dx (float) – The distance between points at which y is sampled.
dim (int) – The dimension along which to integrate. By default, use the last dimension. Returns
A Tensor with the same shape as the input, except with dim removed. Each element of the returned tensor represents the estimated integral \int y\,dx along dim. | torch.generated.torch.trapz#torch.trapz |
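The trapezoid rule behind both overloads can be sketched in plain Python (an illustration of the math, not torch's implementation): each interval contributes (x[i+1] - x[i]) * (y[i] + y[i+1]) / 2, and the dx overload simply uses a constant spacing.

```python
def trapz(y, x):
    """Trapezoid-rule estimate of the integral of y sampled at points x."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(y) - 1))

def trapz_dx(y, dx=1.0):
    """Same estimate when the sample points are uniformly spaced by dx."""
    return sum(dx * (y[i] + y[i + 1]) / 2 for i in range(len(y) - 1))

# First row of the torch.trapz example: y sampled at x = [1, 3, 4]
print(trapz([-2.1156, 0.6857, -0.2700], [1, 3, 4]))  # approximately -1.222
```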
torch.triangular_solve(input, A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
Solves a system of equations with a triangular coefficient matrix A and multiple right-hand sides b. In particular, solves AX = b and assumes A is upper-triangular with the default keyword arguments. torch.triangular_solve(b, A) can take in 2D inputs b, A or inputs that are batches of 2D matrices. If the inputs are batches, then returns batched outputs X. Supports real-valued and complex-valued inputs. Parameters
input (Tensor) – multiple right-hand sides of size (*, m, k) where * is zero or more batch dimensions (b)
A (Tensor) – the input triangular coefficient matrix of size (*, m, m) where * is zero or more batch dimensions
upper (bool, optional) – whether to solve the upper-triangular system of equations (default) or the lower-triangular system of equations. Default: True.
transpose (bool, optional) – whether A should be transposed before being sent into the solver. Default: False.
unitriangular (bool, optional) – whether A is unit triangular. If True, the diagonal elements of A are assumed to be 1 and not referenced from A. Default: False. Returns
A namedtuple (solution, cloned_coefficient) where cloned_coefficient is a clone of A and solution is the solution X to AX = b (or whatever variant of the system of equations, depending on the keyword arguments.) Examples: >>> A = torch.randn(2, 2).triu()
>>> A
tensor([[ 1.1527, -1.0753],
[ 0.0000, 0.7986]])
>>> b = torch.randn(2, 3)
>>> b
tensor([[-0.0210, 2.3513, -1.5492],
[ 1.5429, 0.7403, -1.0243]])
>>> torch.triangular_solve(b, A)
torch.return_types.triangular_solve(
solution=tensor([[ 1.7841, 2.9046, -2.5405],
[ 1.9320, 0.9270, -1.2826]]),
cloned_coefficient=tensor([[ 1.1527, -1.0753],
[ 0.0000, 0.7986]])) | torch.generated.torch.triangular_solve#torch.triangular_solve |
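For intuition, solving an upper-triangular system (the upper=True default, single right-hand side) reduces to back substitution. A pure-Python sketch of that idea, not the batched LAPACK-backed routine torch actually uses:

```python
def solve_upper(A, b):
    """Back substitution for an upper-triangular n x n matrix A and vector b."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-solved components, then divide by the diagonal
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

A = [[2.0, 1.0],
     [0.0, 4.0]]
print(solve_upper(A, [5.0, 8.0]))  # [1.5, 2.0]
```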
torch.tril(input, diagonal=0, *, out=None) → Tensor
Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0. The lower triangular part of the matrix is defined as the elements on and below the diagonal. The argument diagonal controls which diagonal to consider. If diagonal = 0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d_1, d_2\} - 1] where d_1, d_2 are the dimensions of the matrix. Parameters
input (Tensor) – the input tensor.
diagonal (int, optional) – the diagonal to consider Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(3, 3)
>>> a
tensor([[-1.0813, -0.8619, 0.7105],
[ 0.0935, 0.1380, 2.2112],
[-0.3409, -0.9828, 0.0289]])
>>> torch.tril(a)
tensor([[-1.0813, 0.0000, 0.0000],
[ 0.0935, 0.1380, 0.0000],
[-0.3409, -0.9828, 0.0289]])
>>> b = torch.randn(4, 6)
>>> b
tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461],
[ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145],
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864],
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]])
>>> torch.tril(b, diagonal=1)
tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000],
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000],
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]])
>>> torch.tril(b, diagonal=-1)
tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000],
[-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]]) | torch.generated.torch.tril#torch.tril |
torch.tril_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor
Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns. The lower triangular part of the matrix is defined as the elements on and below the diagonal. The argument offset controls which diagonal to consider. If offset = 0, all elements on and below the main diagonal are retained. A positive value includes just as many diagonals above the main diagonal, and similarly a negative value excludes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d_1, d_2\} - 1] where d_1, d_2 are the dimensions of the matrix. Note When running on CUDA, row * col must be less than 2^{59} to prevent overflow during calculation. Parameters
row (int) – number of rows in the 2-D matrix.
col (int) – number of columns in the 2-D matrix.
offset (int) – diagonal offset from the main diagonal. Default: if not provided, 0. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, torch.long.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
layout (torch.layout, optional) – currently only supports torch.strided. Example:
>>> a = torch.tril_indices(3, 3)
>>> a
tensor([[0, 1, 1, 2, 2, 2],
[0, 0, 1, 0, 1, 2]])
>>> a = torch.tril_indices(4, 3, -1)
>>> a
tensor([[1, 2, 2, 3, 3, 3],
[0, 0, 1, 0, 1, 2]])
>>> a = torch.tril_indices(4, 3, 1)
>>> a
tensor([[0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3],
[0, 1, 0, 1, 2, 0, 1, 2, 0, 1, 2]]) | torch.generated.torch.tril_indices#torch.tril_indices |
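The index pattern can be reproduced with a short pure-Python sketch (illustrative only): a position (i, j) belongs to the lower triangle with a given offset exactly when j - i <= offset.

```python
def tril_indices(row, col, offset=0):
    """Row and column coordinates of the lower-triangular part, row-major order."""
    rows, cols = [], []
    for i in range(row):
        for j in range(col):
            if j - i <= offset:
                rows.append(i)
                cols.append(j)
    return rows, cols

print(tril_indices(3, 3))      # ([0, 1, 1, 2, 2, 2], [0, 0, 1, 0, 1, 2])
print(tril_indices(4, 3, -1))  # ([1, 2, 2, 3, 3, 3], [0, 0, 1, 0, 1, 2])
```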
torch.triu(input, diagonal=0, *, out=None) → Tensor
Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0. The upper triangular part of the matrix is defined as the elements on and above the diagonal. The argument diagonal controls which diagonal to consider. If diagonal = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d_1, d_2\} - 1] where d_1, d_2 are the dimensions of the matrix. Parameters
input (Tensor) – the input tensor.
diagonal (int, optional) – the diagonal to consider Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(3, 3)
>>> a
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.2072, -1.0680, 0.6602],
[ 0.3480, -0.5211, -0.4573]])
>>> torch.triu(a)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.0000, -1.0680, 0.6602],
[ 0.0000, 0.0000, -0.4573]])
>>> torch.triu(a, diagonal=1)
tensor([[ 0.0000, 0.5207, 2.0049],
[ 0.0000, 0.0000, 0.6602],
[ 0.0000, 0.0000, 0.0000]])
>>> torch.triu(a, diagonal=-1)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.2072, -1.0680, 0.6602],
[ 0.0000, -0.5211, -0.4573]])
>>> b = torch.randn(4, 6)
>>> b
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
[-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=1)
tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=-1)
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]]) | torch.generated.torch.triu#torch.triu |
torch.triu_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor
Returns the indices of the upper triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ordered based on rows and then columns. The upper triangular part of the matrix is defined as the elements on and above the diagonal. The argument offset controls which diagonal to consider. If offset = 0, all elements on and above the main diagonal are retained. A positive value excludes just as many diagonals above the main diagonal, and similarly a negative value includes just as many diagonals below the main diagonal. The main diagonal is the set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d_1, d_2\} - 1] where d_1, d_2 are the dimensions of the matrix. Note When running on CUDA, row * col must be less than 2^{59} to prevent overflow during calculation. Parameters
row (int) – number of rows in the 2-D matrix.
col (int) – number of columns in the 2-D matrix.
offset (int) – diagonal offset from the main diagonal. Default: if not provided, 0. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, torch.long.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
layout (torch.layout, optional) – currently only supports torch.strided. Example:
>>> a = torch.triu_indices(3, 3)
>>> a
tensor([[0, 0, 0, 1, 1, 2],
[0, 1, 2, 1, 2, 2]])
>>> a = torch.triu_indices(4, 3, -1)
>>> a
tensor([[0, 0, 0, 1, 1, 1, 2, 2, 3],
[0, 1, 2, 0, 1, 2, 1, 2, 2]])
>>> a = torch.triu_indices(4, 3, 1)
>>> a
tensor([[0, 0, 1],
[1, 2, 2]]) | torch.generated.torch.triu_indices#torch.triu_indices |
torch.true_divide(dividend, divisor, *, out) → Tensor
Alias for torch.div() with rounding_mode=None. | torch.generated.torch.true_divide#torch.true_divide |
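The distinction the alias preserves can be sketched with plain Python arithmetic (a hypothetical helper illustrating the rounding_mode semantics, not torch's implementation): rounding_mode=None always performs true (floating-point) division, while 'trunc' and 'floor' round the quotient toward zero and toward negative infinity respectively.

```python
import math

def div(a, b, rounding_mode=None):
    """Plain-Python analogue of torch.div's rounding_mode choices."""
    q = a / b                 # true division: the rounding_mode=None behavior
    if rounding_mode == 'trunc':
        return math.trunc(q)  # round toward zero
    if rounding_mode == 'floor':
        return math.floor(q)  # round toward negative infinity
    return q

print(div(7, 2))            # 3.5
print(div(-7, 2, 'trunc'))  # -3
print(div(-7, 2, 'floor'))  # -4
```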
torch.trunc(input, *, out=None) → Tensor
Returns a new tensor with the truncated integer values of the elements of input. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 3.4742, 0.5466, -0.8008, -0.9079])
>>> torch.trunc(a)
tensor([ 3., 0., -0., -0.]) | torch.generated.torch.trunc#torch.trunc |
torch.unbind(input, dim=0) → seq
Removes a tensor dimension. Returns a tuple of all slices along a given dimension, already without it. Parameters
input (Tensor) – the tensor to unbind
dim (int) – dimension to remove Example: >>> torch.unbind(torch.tensor([[1, 2, 3],
...                               [4, 5, 6],
...                               [7, 8, 9]]))
(tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9])) | torch.generated.torch.unbind#torch.unbind |
torch.unique(*args, **kwargs)
Returns the unique elements of the input tensor. Note This function is different from torch.unique_consecutive() in the sense that this function also eliminates non-consecutive duplicate values. Note Currently in the CUDA implementation and the CPU implementation when dim is specified, torch.unique always sorts the tensor at the beginning regardless of the sort argument. Sorting could be slow, so if your input tensor is already sorted, it is recommended to use torch.unique_consecutive() which avoids the sorting. Parameters
input (Tensor) – the input tensor
sorted (bool) – Whether to sort the unique elements in ascending order before returning as output.
return_inverse (bool) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.
return_counts (bool) – Whether to also return the counts for each unique element.
dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. Default: None
Returns
A tensor or a tuple of tensors containing
output (Tensor): the output list of unique scalar elements.
inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor.
counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor. Return type
(Tensor, Tensor (optional), Tensor (optional)) Example: >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))
>>> output
tensor([ 2, 3, 1])
>>> output, inverse_indices = torch.unique(
... torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([ 1, 2, 3])
>>> inverse_indices
tensor([ 0, 2, 1, 2])
>>> output, inverse_indices = torch.unique(
... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([ 1, 2, 3])
>>> inverse_indices
tensor([[ 0, 2],
[ 1, 2]]) | torch.generated.torch.unique#torch.unique |
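A pure-Python sketch of the sorted-path semantics (an illustration only; torch's kernels differ): inverse_indices maps each original element to its slot in the unique output, and counts tallies occurrences of each unique value.

```python
def unique(values, return_inverse=False, return_counts=False):
    """Sorted unique elements, with optional inverse indices and counts."""
    out = sorted(set(values))
    result = [out]
    if return_inverse:
        pos = {v: i for i, v in enumerate(out)}
        result.append([pos[v] for v in values])
    if return_counts:
        result.append([values.count(v) for v in out])
    return tuple(result) if len(result) > 1 else out

out, inverse = unique([1, 3, 2, 3], return_inverse=True)
print(out)      # [1, 2, 3]
print(inverse)  # [0, 2, 1, 2]
```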
torch.unique_consecutive(*args, **kwargs)
Eliminates all but the first element from every consecutive group of equivalent elements. Note This function is different from torch.unique() in the sense that this function only eliminates consecutive duplicate values. This semantics is similar to std::unique in C++. Parameters
input (Tensor) – the input tensor
return_inverse (bool) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.
return_counts (bool) – Whether to also return the counts for each unique element.
dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. Default: None
Returns
A tensor or a tuple of tensors containing
output (Tensor): the output list of unique scalar elements.
inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor.
counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor. Return type
(Tensor, Tensor (optional), Tensor (optional)) Example: >>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
>>> output = torch.unique_consecutive(x)
>>> output
tensor([1, 2, 3, 1, 2])
>>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> inverse_indices
tensor([0, 0, 1, 1, 2, 3, 3, 4])
>>> output, counts = torch.unique_consecutive(x, return_counts=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> counts
tensor([2, 2, 1, 2, 1]) | torch.generated.torch.unique_consecutive#torch.unique_consecutive |
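The consecutive-only semantics matches itertools.groupby; a minimal pure-Python sketch (not torch's implementation):

```python
from itertools import groupby

def unique_consecutive(values):
    """Collapse runs of equal adjacent values, returning values and run lengths."""
    out, counts = [], []
    for value, group in groupby(values):
        out.append(value)
        counts.append(sum(1 for _ in group))
    return out, counts

print(unique_consecutive([1, 1, 2, 2, 3, 1, 1, 2]))
# ([1, 2, 3, 1, 2], [2, 2, 1, 2, 1]) -- same values and counts as above
```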
torch.unsqueeze(input, dim) → Tensor
Returns a new tensor with a dimension of size one inserted at the specified position. The returned tensor shares the same underlying data with this tensor. A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze() applied at dim = dim + input.dim() + 1. Parameters
input (Tensor) – the input tensor.
dim (int) – the index at which to insert the singleton dimension Example: >>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
[ 2],
[ 3],
[ 4]]) | torch.generated.torch.unsqueeze#torch.unsqueeze |
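The negative-dim rule can be made concrete with a small sketch (a hypothetical helper, not part of torch's API): a dim in the range [-input.dim() - 1, input.dim() + 1) is normalized by adding input.dim() + 1 when negative.

```python
def normalize_unsqueeze_dim(dim, ndim):
    """Map a possibly negative unsqueeze dim into [0, ndim] (hypothetical helper)."""
    if not -ndim - 1 <= dim <= ndim:
        raise IndexError(f"dim {dim} out of range for a {ndim}-D tensor")
    return dim + ndim + 1 if dim < 0 else dim

# For a 1-D tensor, dim=-1 inserts the new axis at position 1 (the end):
print(normalize_unsqueeze_dim(-1, 1))  # 1
print(normalize_unsqueeze_dim(0, 1))   # 0
```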
torch.use_deterministic_algorithms(d) [source]
Sets whether PyTorch operations must use "deterministic" algorithms. That is, algorithms which, given the same input, and when run on the same software and hardware, always produce the same output. When True, operations will use deterministic algorithms when available, and if only nondeterministic algorithms are available they will throw a RuntimeError when called. Warning This feature is in beta, and its design and implementation may change in the future. The following normally-nondeterministic operations will act deterministically when d=True:
torch.nn.Conv1d when called on CUDA tensor
torch.nn.Conv2d when called on CUDA tensor
torch.nn.Conv3d when called on CUDA tensor
torch.nn.ConvTranspose1d when called on CUDA tensor
torch.nn.ConvTranspose2d when called on CUDA tensor
torch.nn.ConvTranspose3d when called on CUDA tensor
torch.bmm() when called on sparse-dense CUDA tensors
torch.__getitem__() backward when self is a CPU tensor and indices is a list of tensors
torch.index_put() with accumulate=True when called on a CPU tensor The following normally-nondeterministic operations will throw a RuntimeError when d=True:
torch.nn.AvgPool3d when called on a CUDA tensor that requires grad
torch.nn.AdaptiveAvgPool2d when called on a CUDA tensor that requires grad
torch.nn.AdaptiveAvgPool3d when called on a CUDA tensor that requires grad
torch.nn.MaxPool3d when called on a CUDA tensor that requires grad
torch.nn.AdaptiveMaxPool2d when called on a CUDA tensor that requires grad
torch.nn.FractionalMaxPool2d when called on a CUDA tensor that requires grad
torch.nn.FractionalMaxPool3d when called on a CUDA tensor that requires grad
torch.nn.functional.interpolate() when called on a CUDA tensor that requires grad and one of the following modes is used: linear bilinear bicubic trilinear
torch.nn.ReflectionPad1d when called on a CUDA tensor that requires grad
torch.nn.ReflectionPad2d when called on a CUDA tensor that requires grad
torch.nn.ReplicationPad1d when called on a CUDA tensor that requires grad
torch.nn.ReplicationPad2d when called on a CUDA tensor that requires grad
torch.nn.ReplicationPad3d when called on a CUDA tensor that requires grad
torch.nn.NLLLoss when called on a CUDA tensor that requires grad
torch.nn.CTCLoss when called on a CUDA tensor that requires grad
torch.nn.EmbeddingBag when called on a CUDA tensor that requires grad
torch.scatter_add_() when called on a CUDA tensor
torch.index_add_() when called on a CUDA tensor
torch.index_copy() when called on a CUDA tensor
torch.index_select() when called on a CUDA tensor that requires grad
torch.repeat_interleave() when called on a CUDA tensor that requires grad
torch.histc() when called on a CUDA tensor
torch.bincount() when called on a CUDA tensor
torch.kthvalue() when called on a CUDA tensor
torch.median() with indices output when called on a CUDA tensor A handful of CUDA operations are nondeterministic if the CUDA version is 10.2 or greater, unless the environment variable CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8 is set. See the CUDA documentation for more details: https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility If one of these environment variable configurations is not set, a RuntimeError will be raised from these operations when called with CUDA tensors: torch.mm() torch.mv() torch.bmm() Note that deterministic operations tend to have worse performance than non-deterministic operations. Parameters
d (bool) – If True, force operations to be deterministic. If False, allow non-deterministic operations. | torch.generated.torch.use_deterministic_algorithms#torch.use_deterministic_algorithms |
Benchmark Utils - torch.utils.benchmark
class torch.utils.benchmark.Timer(stmt='pass', setup='pass', timer=<function timer>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=<Language.PYTHON: 0>) [source]
Helper class for measuring execution time of PyTorch statements. For a full tutorial on how to use this class, see: https://pytorch.org/tutorials/recipes/recipes/benchmark.html The PyTorch Timer is based on timeit.Timer (and in fact uses timeit.Timer internally), but with several key differences:
Runtime aware:
Timer will perform warmups (important as some elements of PyTorch are lazily initialized), set threadpool size so that comparisons are apples-to-apples, and synchronize asynchronous CUDA functions when necessary.
Focus on replicates:
When measuring code, and particularly complex kernels / models, run-to-run variation is a significant confounding factor. It is expected that all measurements should include replicates to quantify noise and allow median computation, which is more robust than mean. To that effect, this class deviates from the timeit API by conceptually merging timeit.Timer.repeat and timeit.Timer.autorange. (Exact algorithms are discussed in method docstrings.) The timeit method is replicated for cases where an adaptive strategy is not desired.
Optional metadata:
When defining a Timer, one can optionally specify label, sub_label, description, and env. (Defined later) These fields are included in the representation of result object and by the Compare class to group and display results for comparison.
Instruction counts:
In addition to wall times, Timer can run a statement under Callgrind and report instructions executed. Directly analogous to timeit.Timer constructor arguments: stmt, setup, timer, globals PyTorch Timer specific constructor arguments: label, sub_label, description, env, num_threads Parameters
stmt – Code snippet to be run in a loop and timed.
setup – Optional setup code. Used to define variables used in stmt
timer – Callable which returns the current time. If PyTorch was built without CUDA or there is no GPU present, this defaults to timeit.default_timer; otherwise it will synchronize CUDA before measuring the time.
globals – A dict which defines the global variables when stmt is being executed. This is the other method for providing variables which stmt needs.
label – String which summarizes stmt. For instance, if stmt is "torch.nn.functional.relu(torch.add(x, 1, out=out))" one might set label to "ReLU(x + 1)" to improve readability.
sub_label –
Provide supplemental information to disambiguate measurements with identical stmt or label. For instance, in our example above sub_label might be "float" or "int", so that it is easy to differentiate: "ReLU(x + 1): (float)" "ReLU(x + 1): (int)" when printing Measurements or summarizing using Compare.
description –
String to distinguish measurements with identical label and sub_label. The principal use of description is to signal to Compare the columns of data. For instance one might set it based on the input size to create a table of the form: | n=1 | n=4 | ...
------------- ...
ReLU(x + 1): (float) | ... | ... | ...
ReLU(x + 1): (int) | ... | ... | ...
using Compare. It is also included when printing a Measurement.
env – This tag indicates that otherwise identical tasks were run in different environments, and are therefore not equivalent, for instance when A/B testing a change to a kernel. Compare will treat Measurements with different env specification as distinct when merging replicate runs.
num_threads – The size of the PyTorch threadpool when executing stmt. Single threaded performance is important as both a key inference workload and a good indicator of intrinsic algorithmic efficiency, so the default is set to one. This is in contrast to the default PyTorch threadpool size which tries to utilize all cores.
blocked_autorange(callback=None, min_run_time=0.2) [source]
Measure many replicates while keeping timer overhead to a minimum. At a high level, blocked_autorange executes the following pseudo-code: `setup`
total_time = 0
while total_time < min_run_time
start = timer()
for _ in range(block_size):
`stmt`
total_time += (timer() - start)
Note the variable block_size in the inner loop. The choice of block size is important to measurement quality, and must balance two competing objectives: A small block size results in more replicates and generally better statistics. A large block size better amortizes the cost of timer invocation, and results in a less biased measurement. This is important because CUDA synchronization time is non-trivial (order single to low double digit microseconds) and would otherwise bias the measurement. blocked_autorange sets block_size by running a warmup period, increasing block size until timer overhead is less than 0.1% of the overall computation. This value is then used for the main measurement loop. Returns
A Measurement object that contains measured runtimes and repetition counts, and can be used to compute statistics. (mean, median, etc.)
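The pseudo-code above can be fleshed out as a runnable pure-Python sketch using time.perf_counter (an illustration of the measurement loop, not the actual Timer internals, and with a fixed block_size rather than the adaptive warmup):

```python
import time

def blocked_autorange(stmt, min_run_time=0.2, block_size=100):
    """Run stmt in timed blocks until min_run_time seconds have accumulated."""
    per_run_times = []
    total_time = 0.0
    while total_time < min_run_time:
        start = time.perf_counter()
        for _ in range(block_size):
            stmt()
        elapsed = time.perf_counter() - start
        per_run_times.append(elapsed / block_size)  # amortized cost per call
        total_time += elapsed
    return per_run_times

times = blocked_autorange(lambda: sum(range(100)), min_run_time=0.05)
print(len(times) >= 1, all(t >= 0 for t in times))  # True True
```

Keeping one timer read per block, rather than per call, is what keeps the timer overhead from biasing the per-call estimate.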
collect_callgrind(number=100, collect_baseline=True) [source]
Collect instruction counts using Callgrind. Unlike wall times, instruction counts are deterministic (modulo non-determinism in the program itself and small amounts of jitter from the Python interpreter.) This makes them ideal for detailed performance analysis. This method runs stmt in a separate process so that Valgrind can instrument the program. Performance is severely degraded due to the instrumentation, however this is ameliorated by the fact that a small number of iterations is generally sufficient to obtain good measurements. In order to use this method, valgrind, callgrind_control, and callgrind_annotate must be installed. Because there is a process boundary between the caller (this process) and the stmt execution, globals cannot contain arbitrary in-memory data structures. (Unlike timing methods) Instead, globals are restricted to builtins, nn.Modules, and TorchScripted functions/modules to reduce the surprise factor from serialization and subsequent deserialization. The GlobalsBridge class provides more detail on this subject. Take particular care with nn.Modules: they rely on pickle and you may need to add an import to setup for them to transfer properly. By default, a profile for an empty statement will be collected and cached to indicate how many instructions are from the Python loop which drives stmt. Returns
A CallgrindStats object which provides instruction counts and some basic facilities for analyzing and manipulating results.
timeit(number=1000000) [source]
Mirrors the semantics of timeit.Timer.timeit(). Execute the main statement (stmt) number times. https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit
class torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None) [source]
The result of a Timer measurement. This class stores one or more measurements of a given statement. It is serializable and provides several convenience methods (including a detailed __repr__) for downstream consumers.
static merge(measurements) [source]
Convenience method for merging replicates. Merge will extrapolate times to number_per_run=1 and will not transfer any metadata. (Since it might differ between replicates)
property significant_figures
Approximate significant figure estimate. This property is intended to give a convenient way to estimate the precision of a measurement. It only uses the interquartile region to estimate statistics to try to mitigate skew from the tails, and uses a static z value of 1.645 since it is not expected to be used for small values of n, so z can approximate t. The significant figure estimation is used in conjunction with the trim_sigfig method to provide a more human interpretable data summary. __repr__ does not use this method; it simply displays raw values. Significant figure estimation is intended for Compare.
class torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats) [source]
Top level container for Callgrind results collected by Timer. Manipulation is generally done using the FunctionCounts class, which is obtained by calling CallgrindStats.stats(...). Several convenience methods are provided as well; the most significant is CallgrindStats.as_standardized().
as_standardized() [source]
Strip library names and some prefixes from function strings. When comparing two different sets of instruction counts, one stumbling block can be path prefixes. Callgrind includes the full filepath when reporting a function (as it should). However, this can cause issues when diffing profiles. If a key component such as Python or PyTorch was built in separate locations in the two profiles, this can result in something resembling: 23234231 /tmp/first_build_dir/thing.c:foo(...)
9823794 /tmp/first_build_dir/thing.c:bar(...)
...
53453 .../aten/src/Aten/...:function_that_actually_changed(...)
...
-9823794 /tmp/second_build_dir/thing.c:bar(...)
-23234231 /tmp/second_build_dir/thing.c:foo(...)
Stripping prefixes can ameliorate this issue by regularizing the strings and causing better cancellation of equivalent call sites when diffing.
counts(*, denoise=False) [source]
Returns the total number of instructions executed. See FunctionCounts.denoise() for an explanation of the denoise arg.
delta(other, inclusive=False, subtract_baselines=True) [source]
Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logical question is "why". This generally involves looking at what part of the code increased in instruction count. This function automates that process so that one can easily diff counts on both an inclusive and exclusive basis. The subtract_baselines argument allows one to disable baseline correction, though in most cases it shouldn't matter as the baselines are expected to more or less cancel out.
stats(inclusive=False) [source]
Returns detailed function counts. Conceptually, the FunctionCounts returned can be thought of as a tuple of (count, path_and_function_name) tuples. inclusive matches the semantics of callgrind. If True, the counts include instructions executed by children. inclusive=True is useful for identifying hot spots in code; inclusive=False is useful for reducing noise when diffing counts from two different runs. (See CallgrindStats.delta(…) for more details)
class torch.utils.benchmark.FunctionCounts(_data, inclusive, _linewidth=None) [source]
Container for manipulating Callgrind results. It supports:
Addition and subtraction to combine or diff results. Tuple-like indexing. A denoise function which strips CPython calls which are known to be non-deterministic and quite noisy. Two higher order methods (filter and transform) for custom manipulation.
denoise() [source]
Remove known noisy instructions. Several instructions in the CPython interpreter are rather noisy. These instructions involve the unicode-to-dictionary lookups which Python uses to map variable names. FunctionCounts is generally a content-agnostic container; however, this is sufficiently important for obtaining reliable results to warrant an exception.
filter(filter_fn) [source]
Keep only the elements where filter_fn applied to function name returns True.
transform(map_fn) [source]
Apply map_fn to all of the function names. This can be used to regularize function names (e.g. stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc. | torch.benchmark_utils |
class torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats) [source]
Top level container for Callgrind results collected by Timer. Manipulation is generally done using the FunctionCounts class, which is obtained by calling CallgrindStats.stats(…). Several convenience methods are provided as well; the most significant is CallgrindStats.as_standardized().
as_standardized() [source]
Strip library names and some prefixes from function strings. When comparing two different sets of instruction counts, one stumbling block can be path prefixes. Callgrind includes the full filepath when reporting a function (as it should). However, this can cause issues when diffing profiles. If a key component such as Python or PyTorch was built in separate locations in the two profiles, this can result in something resembling: 23234231 /tmp/first_build_dir/thing.c:foo(...)
9823794 /tmp/first_build_dir/thing.c:bar(...)
...
53453 .../aten/src/Aten/...:function_that_actually_changed(...)
...
-9823794 /tmp/second_build_dir/thing.c:bar(...)
-23234231 /tmp/second_build_dir/thing.c:foo(...)
Stripping prefixes can ameliorate this issue by regularizing the strings and causing better cancellation of equivalent call sites when diffing.
counts(*, denoise=False) [source]
Returns the total number of instructions executed. See FunctionCounts.denoise() for an explanation of the denoise arg.
delta(other, inclusive=False, subtract_baselines=True) [source]
Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logical question is "why". This generally involves looking at what part of the code increased in instruction count. This function automates that process so that one can easily diff counts on both an inclusive and exclusive basis. The subtract_baselines argument allows one to disable baseline correction, though in most cases it shouldn't matter as the baselines are expected to more or less cancel out.
stats(inclusive=False) [source]
Returns detailed function counts. Conceptually, the FunctionCounts returned can be thought of as a tuple of (count, path_and_function_name) tuples. inclusive matches the semantics of callgrind. If True, the counts include instructions executed by children. inclusive=True is useful for identifying hot spots in code; inclusive=False is useful for reducing noise when diffing counts from two different runs. (See CallgrindStats.delta(…) for more details) | torch.benchmark_utils#torch.utils.benchmark.CallgrindStats
as_standardized() [source]
Strip library names and some prefixes from function strings. When comparing two different sets of instruction counts, one stumbling block can be path prefixes. Callgrind includes the full filepath when reporting a function (as it should). However, this can cause issues when diffing profiles. If a key component such as Python or PyTorch was built in separate locations in the two profiles, this can result in something resembling: 23234231 /tmp/first_build_dir/thing.c:foo(...)
9823794 /tmp/first_build_dir/thing.c:bar(...)
...
53453 .../aten/src/Aten/...:function_that_actually_changed(...)
...
-9823794 /tmp/second_build_dir/thing.c:bar(...)
-23234231 /tmp/second_build_dir/thing.c:foo(...)
Stripping prefixes can ameliorate this issue by regularizing the strings and causing better cancellation of equivalent call sites when diffing. | torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.as_standardized
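The effect of standardization can be sketched in plain Python (an illustrative approximation, not the actual implementation; the regex and the helper name `standardize` are hypothetical):

```python
import re

# Collapse absolute build-directory paths down to the file name, so that
# otherwise identical call sites cancel when two profiles are diffed.
def standardize(fn_name: str) -> str:
    return re.sub(r"^/(tmp|usr|home)/\S+/([^/:]+):", r"\2:", fn_name)

a = standardize("/tmp/first_build_dir/thing.c:foo(...)")
b = standardize("/tmp/second_build_dir/thing.c:foo(...)")
assert a == b == "thing.c:foo(...)"
```

After stripping, both build directories report the same key, so their counts subtract cleanly in a diff.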
counts(*, denoise=False) [source]
Returns the total number of instructions executed. See FunctionCounts.denoise() for an explanation of the denoise arg. | torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.counts
delta(other, inclusive=False, subtract_baselines=True) [source]
Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logical question is "why". This generally involves looking at what part of the code increased in instruction count. This function automates that process so that one can easily diff counts on both an inclusive and exclusive basis. The subtract_baselines argument allows one to disable baseline correction, though in most cases it shouldn't matter as the baselines are expected to more or less cancel out. | torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.delta
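The exclusive-basis diff can be sketched with plain dicts of function-name to count pairs (an illustrative approximation that ignores baseline correction; `delta` here is a hypothetical stand-in, and the counts are made up):

```python
from collections import Counter

def delta(after: dict, before: dict) -> dict:
    diff = Counter(after)
    diff.subtract(before)  # keeps negative entries, unlike Counter.__sub__
    return {fn: n for fn, n in diff.items() if n != 0}

before = {"thing.c:foo(...)": 23234231, "thing.c:bar(...)": 9823794}
after = {"thing.c:foo(...)": 23234231, "thing.c:bar(...)": 9823794,
         "aten/...:function_that_actually_changed(...)": 53453}
# Unchanged functions cancel; only the real change survives the diff.
assert delta(after, before) == {"aten/...:function_that_actually_changed(...)": 53453}
```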
stats(inclusive=False) [source]
Returns detailed function counts. Conceptually, the FunctionCounts returned can be thought of as a tuple of (count, path_and_function_name) tuples. inclusive matches the semantics of callgrind. If True, the counts include instructions executed by children. inclusive=True is useful for identifying hot spots in code; inclusive=False is useful for reducing noise when diffing counts from two different runs. (See CallgrindStats.delta(β¦) for more details) | torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.stats |
class torch.utils.benchmark.FunctionCounts(_data, inclusive, _linewidth=None) [source]
Container for manipulating Callgrind results. It supports:
Addition and subtraction to combine or diff results. Tuple-like indexing. A denoise function which strips CPython calls which are known to be non-deterministic and quite noisy. Two higher order methods (filter and transform) for custom manipulation.
denoise() [source]
Remove known noisy instructions. Several instructions in the CPython interpreter are rather noisy. These instructions involve the unicode-to-dictionary lookups which Python uses to map variable names. FunctionCounts is generally a content-agnostic container; however, this is sufficiently important for obtaining reliable results to warrant an exception.
filter(filter_fn) [source]
Keep only the elements where filter_fn applied to function name returns True.
transform(map_fn) [source]
Apply map_fn to all of the function names. This can be used to regularize function names (e.g. stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc. | torch.benchmark_utils#torch.utils.benchmark.FunctionCounts |
denoise() [source]
Remove known noisy instructions. Several instructions in the CPython interpreter are rather noisy. These instructions involve the unicode-to-dictionary lookups which Python uses to map variable names. FunctionCounts is generally a content-agnostic container; however, this is sufficiently important for obtaining reliable results to warrant an exception. | torch.benchmark_utils#torch.utils.benchmark.FunctionCounts.denoise
filter(filter_fn) [source]
Keep only the elements where filter_fn applied to function name returns True. | torch.benchmark_utils#torch.utils.benchmark.FunctionCounts.filter |
transform(map_fn) [source]
Apply map_fn to all of the function names. This can be used to regularize function names (e.g. stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc. | torch.benchmark_utils#torch.utils.benchmark.FunctionCounts.transform |
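The filter/transform semantics can be sketched over plain (count, function_name) pairs (illustrative only; the data below is made up):

```python
from collections import defaultdict

counts = [(100, "/build/a.c:foo"), (40, "/build/a.c:bar"), (7, "python3.8/ceval.c:eval")]

# filter: keep only entries whose name satisfies a predicate
non_cpython = [(c, n) for c, n in counts if "ceval.c" not in n]

# transform: map names; entries that collapse to the same name are summed
merged = defaultdict(int)
for c, n in non_cpython:
    merged[n.rsplit(":", 1)[-1]] += c  # strip the file path, keep the function

assert dict(merged) == {"foo": 100, "bar": 40}
```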
class torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None) [source]
The result of a Timer measurement. This class stores one or more measurements of a given statement. It is serializable and provides several convenience methods (including a detailed __repr__) for downstream consumers.
static merge(measurements) [source]
Convenience method for merging replicates. Merge will extrapolate times to number_per_run=1 and will not transfer any metadata, since it might differ between replicates.
property significant_figures
Approximate significant figure estimate. This property is intended to give a convenient way to estimate the precision of a measurement. It only uses the interquartile region to estimate statistics to try to mitigate skew from the tails, and uses a static z value of 1.645 since it is not expected to be used for small values of n, so z can approximate t. The significant figure estimate is intended to be used in conjunction with the trim_sigfig method to provide a more human-interpretable data summary. __repr__ does not use this method; it simply displays raw values. Significant figure estimation is intended for Compare. | torch.benchmark_utils#torch.utils.benchmark.Measurement
static merge(measurements) [source]
Convenience method for merging replicates. Merge will extrapolate times to number_per_run=1 and will not transfer any metadata, since it might differ between replicates. | torch.benchmark_utils#torch.utils.benchmark.Measurement.merge
property significant_figures
Approximate significant figure estimate. This property is intended to give a convenient way to estimate the precision of a measurement. It only uses the interquartile region to estimate statistics to try to mitigate skew from the tails, and uses a static z value of 1.645 since it is not expected to be used for small values of n, so z can approximate t. The significant figure estimate is intended to be used in conjunction with the trim_sigfig method to provide a more human-interpretable data summary. __repr__ does not use this method; it simply displays raw values. Significant figure estimation is intended for Compare. | torch.benchmark_utils#torch.utils.benchmark.Measurement.significant_figures
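The idea behind this estimate can be sketched in plain Python (an illustrative approximation, not the actual implementation; the quartile indexing and rounding details are assumptions):

```python
import math
import statistics

def approx_sig_figs(times):
    # Use only the interquartile region to estimate spread, with the
    # fixed z = 1.645 described above.
    times = sorted(times)
    n = len(times)
    iqr = times[(3 * n) // 4] - times[n // 4]  # rough interquartile range
    ci = 1.645 * iqr / math.sqrt(n)            # half-width of the estimate
    med = statistics.median(times)
    if ci <= 0:
        return 6
    return max(1, int(math.log10(med / ci)) + 1)

# Tight replicates support more significant figures than noisy ones.
tight = [1.000, 1.001, 0.999, 1.002, 1.000, 0.998, 1.001, 1.000]
noisy = [1.0, 1.8, 0.4, 1.5, 0.7, 1.9, 0.5, 1.2]
assert approx_sig_figs(tight) > approx_sig_figs(noisy)
```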
class torch.utils.benchmark.Timer(stmt='pass', setup='pass', timer=<function timer>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=<Language.PYTHON: 0>) [source]
Helper class for measuring execution time of PyTorch statements. For a full tutorial on how to use this class, see: https://pytorch.org/tutorials/recipes/recipes/benchmark.html The PyTorch Timer is based on timeit.Timer (and in fact uses timeit.Timer internally), but with several key differences:
Runtime aware:
Timer will perform warmups (important as some elements of PyTorch are lazily initialized), set threadpool size so that comparisons are apples-to-apples, and synchronize asynchronous CUDA functions when necessary.
Focus on replicates:
When measuring code, and particularly complex kernels / models, run-to-run variation is a significant confounding factor. It is expected that all measurements should include replicates to quantify noise and allow median computation, which is more robust than mean. To that effect, this class deviates from the timeit API by conceptually merging timeit.Timer.repeat and timeit.Timer.autorange. (Exact algorithms are discussed in method docstrings.) The timeit method is replicated for cases where an adaptive strategy is not desired.
Optional metadata:
When defining a Timer, one can optionally specify label, sub_label, description, and env. (Defined later) These fields are included in the representation of result object and by the Compare class to group and display results for comparison.
Instruction counts
In addition to wall times, Timer can run a statement under Callgrind and report instructions executed. Directly analogous to timeit.Timer constructor arguments: stmt, setup, timer, globals PyTorch Timer specific constructor arguments: label, sub_label, description, env, num_threads Parameters
stmt β Code snippet to be run in a loop and timed.
setup β Optional setup code. Used to define variables used in stmt
timer β Callable which returns the current time. If PyTorch was built without CUDA or there is no GPU present, this defaults to timeit.default_timer; otherwise it will synchronize CUDA before measuring the time.
globals β A dict which defines the global variables when stmt is being executed. This is the other method for providing variables which stmt needs.
label – String which summarizes stmt. For instance, if stmt is "torch.nn.functional.relu(torch.add(x, 1, out=out))" one might set label to "ReLU(x + 1)" to improve readability.
sub_label β
Provide supplemental information to disambiguate measurements with identical stmt or label. For instance, in our example above sub_label might be "float" or "int", so that it is easy to differentiate: "ReLU(x + 1): (float)" "ReLU(x + 1): (int)" when printing Measurements or summarizing using Compare.
description β
String to distinguish measurements with identical label and sub_label. The principal use of description is to signal to Compare the columns of data. For instance one might set it based on the input size to create a table of the form: | n=1 | n=4 | ...
------------- ...
ReLU(x + 1): (float) | ... | ... | ...
ReLU(x + 1): (int) | ... | ... | ...
using Compare. It is also included when printing a Measurement.
env – This tag indicates that otherwise identical tasks were run in different environments, and are therefore not equivalent, for instance when A/B testing a change to a kernel. Compare will treat Measurements with different env specification as distinct when merging replicate runs.
num_threads – The size of the PyTorch threadpool when executing stmt. Single threaded performance is important as both a key inference workload and a good indicator of intrinsic algorithmic efficiency, so the default is set to one. This is in contrast to the default PyTorch threadpool size which tries to utilize all cores.
blocked_autorange(callback=None, min_run_time=0.2) [source]
Measure many replicates while keeping timer overhead to a minimum. At a high level, blocked_autorange executes the following pseudo-code: `setup`
total_time = 0
while total_time < min_run_time
start = timer()
for _ in range(block_size):
`stmt`
total_time += (timer() - start)
Note the variable block_size in the inner loop. The choice of block size is important to measurement quality, and must balance two competing objectives: A small block size results in more replicates and generally better statistics. A large block size better amortizes the cost of timer invocation, and results in a less biased measurement. This is important because CUDA synchronization time is non-trivial (order single to low double digit microseconds) and would otherwise bias the measurement. blocked_autorange sets block_size by running a warmup period, increasing block size until timer overhead is less than 0.1% of the overall computation. This value is then used for the main measurement loop. Returns
A Measurement object that contains measured runtimes and repetition counts, and can be used to compute statistics. (mean, median, etc.)
collect_callgrind(number=100, collect_baseline=True) [source]
Collect instruction counts using Callgrind. Unlike wall times, instruction counts are deterministic (modulo non-determinism in the program itself and small amounts of jitter from the Python interpreter.) This makes them ideal for detailed performance analysis. This method runs stmt in a separate process so that Valgrind can instrument the program. Performance is severely degraded due to the instrumentation, however this is ameliorated by the fact that a small number of iterations is generally sufficient to obtain good measurements. In order to use this method valgrind, callgrind_control, and callgrind_annotate must be installed. Because there is a process boundary between the caller (this process) and the stmt execution, globals cannot contain arbitrary in-memory data structures. (Unlike timing methods) Instead, globals are restricted to builtins, nn.Modules, and TorchScripted functions/modules to reduce the surprise factor from serialization and subsequent deserialization. The GlobalsBridge class provides more detail on this subject. Take particular care with nn.Modules: they rely on pickle and you may need to add an import to setup for them to transfer properly. By default, a profile for an empty statement will be collected and cached to indicate how many instructions are from the Python loop which drives stmt. Returns
A CallgrindStats object which provides instruction counts and some basic facilities for analyzing and manipulating results.
timeit(number=1000000) [source]
Mirrors the semantics of timeit.Timer.timeit(). Execute the main statement (stmt) number times. https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit | torch.benchmark_utils#torch.utils.benchmark.Timer |
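A minimal usage sketch of the Timer described above (the tensor shapes and labels are arbitrary illustrations; assumes a CPU-only setup):

```python
import torch
from torch.utils.benchmark import Timer

# Time a fused statement; blocked_autorange collects replicates until
# at least min_run_time seconds of measurement have accumulated.
t = Timer(
    stmt="torch.nn.functional.relu(torch.add(x, 1, out=out))",
    setup="x = torch.ones((64, 64)); out = torch.empty((64, 64))",
    label="ReLU(x + 1)",
    sub_label="float",
    num_threads=1,
)
m = t.blocked_autorange(min_run_time=0.2)
print(m.median)  # median seconds per invocation
```

Printing the Measurement (or passing several to Compare) groups results by the label/sub_label/description metadata.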
blocked_autorange(callback=None, min_run_time=0.2) [source]
Measure many replicates while keeping timer overhead to a minimum. At a high level, blocked_autorange executes the following pseudo-code: `setup`
total_time = 0
while total_time < min_run_time
start = timer()
for _ in range(block_size):
`stmt`
total_time += (timer() - start)
Note the variable block_size in the inner loop. The choice of block size is important to measurement quality, and must balance two competing objectives: A small block size results in more replicates and generally better statistics. A large block size better amortizes the cost of timer invocation, and results in a less biased measurement. This is important because CUDA synchronization time is non-trivial (order single to low double digit microseconds) and would otherwise bias the measurement. blocked_autorange sets block_size by running a warmup period, increasing block size until timer overhead is less than 0.1% of the overall computation. This value is then used for the main measurement loop. Returns
A Measurement object that contains measured runtimes and repetition counts, and can be used to compute statistics. (mean, median, etc.) | torch.benchmark_utils#torch.utils.benchmark.Timer.blocked_autorange |
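The pseudo-code above can be made runnable with only the standard library. This is a sketch: block_size is fixed here for simplicity, whereas the real implementation chooses it adaptively during warmup, and `stmt` is a stand-in workload:

```python
import time

def blocked_autorange(stmt, min_run_time=0.2, block_size=100):
    # Measure blocks of invocations so that timer overhead is amortized
    # across block_size calls rather than paid per call.
    times = []
    total_time = 0.0
    while total_time < min_run_time:
        start = time.perf_counter()
        for _ in range(block_size):
            stmt()
        elapsed = time.perf_counter() - start
        times.append(elapsed / block_size)  # per-invocation time for this block
        total_time += elapsed
    return times

per_call = blocked_autorange(lambda: sum(range(100)))
assert all(t > 0 for t in per_call)
```

The returned list of per-invocation times is the raw material from which statistics such as the median are computed.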
collect_callgrind(number=100, collect_baseline=True) [source]
Collect instruction counts using Callgrind. Unlike wall times, instruction counts are deterministic (modulo non-determinism in the program itself and small amounts of jitter from the Python interpreter.) This makes them ideal for detailed performance analysis. This method runs stmt in a separate process so that Valgrind can instrument the program. Performance is severely degraded due to the instrumentation, however this is ameliorated by the fact that a small number of iterations is generally sufficient to obtain good measurements. In order to use this method valgrind, callgrind_control, and callgrind_annotate must be installed. Because there is a process boundary between the caller (this process) and the stmt execution, globals cannot contain arbitrary in-memory data structures. (Unlike timing methods) Instead, globals are restricted to builtins, nn.Modules, and TorchScripted functions/modules to reduce the surprise factor from serialization and subsequent deserialization. The GlobalsBridge class provides more detail on this subject. Take particular care with nn.Modules: they rely on pickle and you may need to add an import to setup for them to transfer properly. By default, a profile for an empty statement will be collected and cached to indicate how many instructions are from the Python loop which drives stmt. Returns
A CallgrindStats object which provides instruction counts and some basic facilities for analyzing and manipulating results. | torch.benchmark_utils#torch.utils.benchmark.Timer.collect_callgrind |
timeit(number=1000000) [source]
Mirrors the semantics of timeit.Timer.timeit(). Execute the main statement (stmt) number times. https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit | torch.benchmark_utils#torch.utils.benchmark.Timer.timeit |
torch.utils.bottleneck torch.utils.bottleneck is a tool that can be used as an initial step for debugging bottlenecks in your program. It summarizes runs of your script with the Python profiler and PyTorchβs autograd profiler. Run it on the command line with python -m torch.utils.bottleneck /path/to/source/script.py [args]
where [args] are any number of arguments to script.py, or run python -m torch.utils.bottleneck -h for more usage instructions. Warning Because your script will be profiled, please ensure that it exits in a finite amount of time. Warning Due to the asynchronous nature of CUDA kernels, when running against CUDA code, the cProfile output and CPU-mode autograd profilers may not show correct timings: the reported CPU time reports the amount of time used to launch the kernels but does not include the time the kernel spent executing on a GPU unless the operation does a synchronize. Ops that do synchronize appear to be extremely expensive under regular CPU-mode profilers. In those cases where timings are incorrect, the CUDA-mode autograd profiler may be helpful. Note To decide which (CPU-only-mode or CUDA-mode) autograd profiler output to look at, you should first check if your script is CPU-bound ("CPU total time is much greater than CUDA total time"). If it is CPU-bound, looking at the results of the CPU-mode autograd profiler will help. If on the other hand your script spends most of its time executing on the GPU, then it makes sense to start looking for responsible CUDA operators in the output of the CUDA-mode autograd profiler. Of course the reality is much more complicated and your script might not be in one of those two extremes depending on the part of the model you're evaluating. If the profiler outputs don't help, you could try looking at the result of torch.autograd.profiler.emit_nvtx() with nvprof. However, please take into account that the NVTX overhead is very high and often gives a heavily skewed timeline. Warning If you are profiling CUDA code, the first profiler that bottleneck runs (cProfile) will include the CUDA startup time (CUDA buffer allocation cost) in its time reporting. This should not matter if your bottlenecks result in code much slower than the CUDA startup time.
For more complicated uses of the profilers (like in a multi-GPU case), please see https://docs.python.org/3/library/profile.html or torch.autograd.profiler.profile() for more information. | torch.bottleneck |
torch.utils.checkpoint Note Checkpointing is implemented by rerunning a forward-pass segment for each checkpointed segment during backward. This can cause persistent states like the RNG state to be advanced further than they would be without checkpointing. By default, checkpointing includes logic to juggle the RNG state such that checkpointed passes making use of RNG (through dropout for example) have deterministic output as compared to non-checkpointed passes. The logic to stash and restore RNG states can incur a moderate performance hit depending on the runtime of checkpointed operations. If deterministic output compared to non-checkpointed passes is not required, supply preserve_rng_state=False to checkpoint or checkpoint_sequential to omit stashing and restoring the RNG state during each checkpoint. The stashing logic saves and restores the RNG state for the current device and the device of all cuda Tensor arguments to the run_fn. However, the logic has no way to anticipate if the user will move Tensors to a new device within the run_fn itself. Therefore, if you move Tensors to a new device ("new" meaning not belonging to the set of [current device + devices of Tensor arguments]) within run_fn, deterministic output compared to non-checkpointed passes is never guaranteed.
torch.utils.checkpoint.checkpoint(function, *args, **kwargs) [source]
Checkpoint a model or part of the model. Checkpointing works by trading compute for memory. Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed part does not save intermediate activations, and instead recomputes them in the backward pass. It can be applied on any part of a model. Specifically, in the forward pass, function will run in torch.no_grad() manner, i.e., not storing the intermediate activations. Instead, the forward pass saves the inputs tuple and the function parameter. In the backward pass, the saved inputs and function are retrieved, and the forward pass is computed on function again, now tracking the intermediate activations, and then the gradients are calculated using these activation values. Warning Checkpointing doesn't work with torch.autograd.grad(), but only with torch.autograd.backward(). Warning If function invocation during backward does anything different than the one during forward, e.g., due to some global variable, the checkpointed version won't be equivalent, and unfortunately it can't be detected. Warning If the checkpointed segment contains tensors detached from the computational graph by detach() or torch.no_grad(), the backward pass will raise an error. This is because checkpoint makes all the outputs require gradients, which causes issues when a tensor is defined to have no gradient in the model. To circumvent this, detach the tensors outside of the checkpoint function. Parameters
function β describes what to run in the forward pass of the model or part of the model. It should also know how to handle the inputs passed as the tuple. For example, in LSTM, if user passes (activation, hidden), function should correctly use the first input as activation and the second input as hidden
preserve_rng_state (bool, optional, default=True) β Omit stashing and restoring the RNG state during each checkpoint.
args β tuple containing inputs to the function
Returns
Output of running function on *args
torch.utils.checkpoint.checkpoint_sequential(functions, segments, input, **kwargs) [source]
A helper function for checkpointing sequential models. Sequential models execute a list of modules/functions in order (sequentially). Therefore, we can divide such a model into various segments and checkpoint each segment. All segments except the last will run in torch.no_grad() manner, i.e., not storing the intermediate activations. The inputs of each checkpointed segment will be saved for re-running the segment in the backward pass. See checkpoint() on how checkpointing works. Warning Checkpointing doesn't work with torch.autograd.grad(), but only with torch.autograd.backward(). Parameters
functions β A torch.nn.Sequential or the list of modules or functions (comprising the model) to run sequentially.
segments β Number of chunks to create in the model
input β A Tensor that is input to functions
preserve_rng_state (bool, optional, default=True) β Omit stashing and restoring the RNG state during each checkpoint. Returns
Output of running functions sequentially on *inputs Example >>> model = nn.Sequential(...)
>>> input_var = checkpoint_sequential(model, chunks, input_var) | torch.checkpoint |
torch.utils.checkpoint.checkpoint(function, *args, **kwargs) [source]
Checkpoint a model or part of the model. Checkpointing works by trading compute for memory. Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed part does not save intermediate activations, and instead recomputes them in the backward pass. It can be applied on any part of a model. Specifically, in the forward pass, function will run in torch.no_grad() manner, i.e., not storing the intermediate activations. Instead, the forward pass saves the inputs tuple and the function parameter. In the backward pass, the saved inputs and function are retrieved, and the forward pass is computed on function again, now tracking the intermediate activations, and then the gradients are calculated using these activation values. Warning Checkpointing doesn't work with torch.autograd.grad(), but only with torch.autograd.backward(). Warning If function invocation during backward does anything different than the one during forward, e.g., due to some global variable, the checkpointed version won't be equivalent, and unfortunately it can't be detected. Warning If the checkpointed segment contains tensors detached from the computational graph by detach() or torch.no_grad(), the backward pass will raise an error. This is because checkpoint makes all the outputs require gradients, which causes issues when a tensor is defined to have no gradient in the model. To circumvent this, detach the tensors outside of the checkpoint function. Parameters
function β describes what to run in the forward pass of the model or part of the model. It should also know how to handle the inputs passed as the tuple. For example, in LSTM, if user passes (activation, hidden), function should correctly use the first input as activation and the second input as hidden
preserve_rng_state (bool, optional, default=True) β Omit stashing and restoring the RNG state during each checkpoint.
args β tuple containing inputs to the function
Returns
Output of running function on *args | torch.checkpoint#torch.utils.checkpoint.checkpoint |
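A minimal usage sketch of checkpoint (the modules and shapes are arbitrary illustrations): the checkpointed segment's activations are discarded in forward and recomputed during backward.

```python
import torch
from torch.utils.checkpoint import checkpoint

seg1 = torch.nn.Linear(10, 10)
seg2 = torch.nn.Linear(10, 10)

x = torch.randn(4, 10, requires_grad=True)
h = seg1(x)                # activations of seg1 are stored as usual
out = checkpoint(seg2, h)  # seg2 runs under no_grad; recomputed in backward
out.sum().backward()
assert x.grad is not None and seg2.weight.grad is not None
```

Note that h requires grad (it depends on x), which avoids the detached-tensor error described in the warning above.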
torch.utils.checkpoint.checkpoint_sequential(functions, segments, input, **kwargs) [source]
A helper function for checkpointing sequential models. Sequential models execute a list of modules/functions in order (sequentially). Therefore, we can divide such a model into various segments and checkpoint each segment. All segments except the last will run in torch.no_grad() manner, i.e., not storing the intermediate activations. The inputs of each checkpointed segment will be saved for re-running the segment in the backward pass. See checkpoint() on how checkpointing works. Warning Checkpointing doesn't work with torch.autograd.grad(), but only with torch.autograd.backward(). Parameters
functions β A torch.nn.Sequential or the list of modules or functions (comprising the model) to run sequentially.
segments β Number of chunks to create in the model
input β A Tensor that is input to functions
preserve_rng_state (bool, optional, default=True) β Omit stashing and restoring the RNG state during each checkpoint. Returns
Output of running functions sequentially on *inputs Example >>> model = nn.Sequential(...)
>>> input_var = checkpoint_sequential(model, chunks, input_var) | torch.checkpoint#torch.utils.checkpoint.checkpoint_sequential |
torch.utils.cpp_extension
torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs) [source]
Creates a setuptools.Extension for C++. Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a C++ extension. All arguments are forwarded to the setuptools.Extension constructor. Example >>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CppExtension
>>> setup(
name='extension',
ext_modules=[
CppExtension(
name='extension',
sources=['extension.cpp'],
extra_compile_args=['-g']),
],
cmdclass={
'build_ext': BuildExtension
})
torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs) [source]
Creates a setuptools.Extension for CUDA/C++. Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path and runtime library. All arguments are forwarded to the setuptools.Extension constructor. Example >>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension
>>> setup(
name='cuda_extension',
ext_modules=[
CUDAExtension(
name='cuda_extension',
sources=['extension.cpp', 'extension_kernel.cu'],
extra_compile_args={'cxx': ['-g'],
'nvcc': ['-O2']})
],
cmdclass={
'build_ext': BuildExtension
})
Compute capabilities: By default the extension will be compiled to run on all archs of the cards visible during the building process of the extension, plus PTX. If down the road a new card is installed the extension may need to be recompiled. If a visible card has a compute capability (CC) that's newer than the newest version for which your nvcc can build fully-compiled binaries, PyTorch will make nvcc fall back to building kernels with the newest version of PTX your nvcc does support (see below for details on PTX). You can override the default behavior using TORCH_CUDA_ARCH_LIST to explicitly specify which CCs you want the extension to support: TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py The +PTX option causes extension kernel binaries to include PTX instructions for the specified CC. PTX is an intermediate representation that allows kernels to runtime-compile for any CC >= the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile for any GPU with CC >= 8.6). This improves your binary's forward compatibility. However, relying on older PTX to provide forward compat by runtime-compiling for newer CCs can modestly reduce performance on those newer CCs. If you know the exact CC(s) of the GPUs you want to target, you're always better off specifying them individually. For example, if you want your extension to run on 8.0 and 8.6, "8.0+PTX" would work functionally because it includes PTX that can runtime-compile for 8.6, but "8.0 8.6" would be better. Note that while it's possible to include all supported archs, the more archs get included the slower the building process will be, as it will build a separate kernel image for each arch.
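The TORCH_CUDA_ARCH_LIST format above (space-separated CCs, each with an optional +PTX suffix) can be illustrated with a small parser. parse_arch_list below is a hypothetical helper written only for this note; it is not part of PyTorch's build machinery:

```python
# Hypothetical helper, for illustration only: split a
# TORCH_CUDA_ARCH_LIST-style string such as "8.0 8.6+PTX" into
# (compute capability, include_ptx) pairs.
def parse_arch_list(arch_list):
    parsed = []
    for entry in arch_list.split():
        include_ptx = entry.endswith("+PTX")
        cc = entry[:-len("+PTX")] if include_ptx else entry
        parsed.append((cc, include_ptx))
    return parsed

print(parse_arch_list("5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX"))
```

Only the last entry in this example carries +PTX, matching the recommendation above to embed PTX only for the newest targeted arch.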
torch.utils.cpp_extension.BuildExtension(*args, **kwargs) [source]
A custom setuptools build extension. This setuptools.build_ext subclass takes care of passing the minimum required compiler flags (e.g. -std=c++14) as well as mixed C++/CUDA compilation (and support for CUDA files in general). When using BuildExtension, it is allowed to supply a dictionary for extra_compile_args (rather than the usual list) that maps from languages (cxx or nvcc) to a list of additional compiler flags to supply to the compiler. This makes it possible to supply different flags to the C++ and CUDA compiler during mixed compilation. use_ninja (bool): If use_ninja is True (default), then we attempt to build using the Ninja backend. Ninja greatly speeds up compilation compared to the standard setuptools.build_ext. Falls back to the standard distutils backend if Ninja is not available. Note By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the MAX_JOBS environment variable to a non-negative number.
torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, is_standalone=False, keep_intermediates=True) [source]
Loads a PyTorch C++ extension just-in-time (JIT). To load an extension, a Ninja build file is emitted, which is used to compile the given sources into a dynamic library. This library is subsequently loaded into the current Python process as a module and returned from this function, ready for use. By default, the directory to which the build file is emitted and the resulting library compiled to is <tmp>/torch_extensions/<name>, where <tmp> is the temporary folder on the current platform and <name> the name of the extension. This location can be overridden in two ways. First, if the TORCH_EXTENSIONS_DIR environment variable is set, it replaces <tmp>/torch_extensions and all extensions will be compiled into subfolders of this directory. Second, if the build_directory argument to this function is supplied, it overrides the entire path, i.e. the library will be compiled into that folder directly. To compile the sources, the default system compiler (c++) is used, which can be overridden by setting the CXX environment variable. To pass additional arguments to the compilation process, extra_cflags or extra_ldflags can be provided. For example, to compile your extension with optimizations, pass extra_cflags=['-O3']. You can also use extra_cflags to pass further include directories. CUDA support with mixed compilation is provided. Simply pass CUDA source files (.cu or .cuh) along with other sources. Such files will be detected and compiled with nvcc rather than the C++ compiler. This includes passing the CUDA lib64 directory as a library directory, and linking cudart. You can pass additional flags to nvcc via extra_cuda_cflags, just like with extra_cflags for C++. Various heuristics for finding the CUDA install directory are used, which usually work fine. If not, setting the CUDA_HOME environment variable is the safest option. Parameters
name – The name of the extension to build. This MUST be the same as the name of the pybind11 module!
sources – A list of relative or absolute paths to C++ source files.
extra_cflags – optional list of compiler flags to forward to the build.
extra_cuda_cflags – optional list of compiler flags to forward to nvcc when building CUDA sources.
extra_ldflags – optional list of linker flags to forward to the build.
extra_include_paths – optional list of include directories to forward to the build.
build_directory – optional path to use as build workspace.
verbose – If True, turns on verbose logging of load steps.
with_cuda – Determines whether CUDA headers and libraries are added to the build. If set to None (default), this value is automatically determined based on the existence of .cu or .cuh in sources. Set it to True to force CUDA headers and libraries to be included.
is_python_module – If True (default), imports the produced shared library as a Python module. If False, behavior depends on is_standalone.
is_standalone – If False (default) loads the constructed extension into the process as a plain dynamic library. If True, build a standalone executable. Returns
Returns the loaded PyTorch extension as a Python module.
If is_python_module is False and is_standalone is False:
Returns nothing. (The shared library is loaded into the process as a side effect.)
If is_standalone is True:
Return the path to the executable. (On Windows, TORCH_LIB_PATH is added to the PATH environment variable as a side effect.) Return type
If is_python_module is True Example >>> from torch.utils.cpp_extension import load
>>> module = load(
name='extension',
sources=['extension.cpp', 'extension_kernel.cu'],
extra_cflags=['-O2'],
verbose=True)
torch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True) [source]
Loads a PyTorch C++ extension just-in-time (JIT) from string sources. This function behaves exactly like load(), but takes its sources as strings rather than filenames. These strings are stored to files in the build directory, after which the behavior of load_inline() is identical to load(). See the tests for good examples of using this function. Sources may omit two required parts of a typical non-inline C++ extension: the necessary header includes, as well as the (pybind11) binding code. More precisely, strings passed to cpp_sources are first concatenated into a single .cpp file. This file is then prepended with #include
<torch/extension.h>. Furthermore, if the functions argument is supplied, bindings will be automatically generated for each function specified. functions can either be a list of function names, or a dictionary mapping from function names to docstrings. If a list is given, the name of each function is used as its docstring. The sources in cuda_sources are concatenated into a separate .cu file and prepended with torch/types.h, cuda.h and cuda_runtime.h includes. The .cpp and .cu files are compiled separately, but ultimately linked into a single library. Note that no bindings are generated for functions in cuda_sources per se. To bind to a CUDA kernel, you must create a C++ function that calls it, and either declare or define this C++ function in one of the cpp_sources (and include its name in functions). See load() for a description of arguments omitted below. Parameters
cpp_sources – A string, or list of strings, containing C++ source code.
cuda_sources – A string, or list of strings, containing CUDA source code.
functions – A list of function names for which to generate function bindings. If a dictionary is given, it should map function names to docstrings (which are otherwise just the function names).
with_cuda – Determines whether CUDA headers and libraries are added to the build. If set to None (default), this value is automatically determined based on whether cuda_sources is provided. Set it to True to force CUDA headers and libraries to be included.
with_pytorch_error_handling – Determines whether pytorch error and warning macros are handled by pytorch instead of pybind. To do this, each function foo is called via an intermediary _safe_foo function. This redirection might cause issues in obscure cases of cpp. This flag should be set to False when this redirect causes issues. Example >>> from torch.utils.cpp_extension import load_inline
>>> source = '''
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
return x.sin() + y.sin();
}
'''
>>> module = load_inline(name='inline_extension',
cpp_sources=[source],
functions=['sin_add'])
Note By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the MAX_JOBS environment variable to a non-negative number.
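The source-assembly step described above (concatenating the cpp_sources strings and prepending the torch/extension.h include) can be sketched in plain Python. assemble_cpp_sources is an illustrative stand-in for that documented behavior, not the actual load_inline implementation:

```python
# Illustration only: mimic how load_inline assembles its cpp_sources
# strings into a single .cpp translation unit, per the description above.
def assemble_cpp_sources(cpp_sources):
    if isinstance(cpp_sources, str):
        cpp_sources = [cpp_sources]
    body = "\n".join(cpp_sources)
    return "#include <torch/extension.h>\n" + body

unit = assemble_cpp_sources([
    "at::Tensor sin_add(at::Tensor x, at::Tensor y) {",
    "  return x.sin() + y.sin();",
    "}",
])
print(unit.splitlines()[0])  # prints "#include <torch/extension.h>"
```

This is why inline sources may omit the header include: it is prepended for you before compilation.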
torch.utils.cpp_extension.include_paths(cuda=False) [source]
Get the include paths required to build a C++ or CUDA extension. Parameters
cuda – If True, includes CUDA-specific include paths. Returns
A list of include path strings.
torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler) [source]
Verifies that the given compiler is ABI-compatible with PyTorch. Parameters
compiler (str) – The compiler executable name to check (e.g. g++). Must be executable in a shell process. Returns
False if the compiler is (likely) ABI-incompatible with PyTorch, else True.
torch.utils.cpp_extension.verify_ninja_availability() [source]
Raises RuntimeError if ninja build system is not available on the system, does nothing otherwise.
torch.utils.cpp_extension.is_ninja_available() [source]
Returns True if the ninja build system is available on the system, False otherwise. | torch.cpp_extension |
torch.utils.data At the heart of the PyTorch data loading utility is the torch.utils.data.DataLoader class. It represents a Python iterable over a dataset, with support for
map-style and iterable-style datasets,
customizing data loading order,
automatic batching,
single- and multi-process data loading,
automatic memory pinning. These options are configured by the constructor arguments of a DataLoader, which has signature: DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
batch_sampler=None, num_workers=0, collate_fn=None,
pin_memory=False, drop_last=False, timeout=0,
worker_init_fn=None, *, prefetch_factor=2,
persistent_workers=False)
The sections below describe in detail the effects and usages of these options. Dataset Types The most important argument of DataLoader constructor is dataset, which indicates a dataset object to load data from. PyTorch supports two different types of datasets:
map-style datasets,
iterable-style datasets. Map-style datasets A map-style dataset is one that implements the __getitem__() and __len__() protocols, and represents a map from (possibly non-integral) indices/keys to data samples. For example, such a dataset, when accessed with dataset[idx], could read the idx-th image and its corresponding label from a folder on the disk. See Dataset for more details. Iterable-style datasets An iterable-style dataset is an instance of a subclass of IterableDataset that implements the __iter__() protocol, and represents an iterable over data samples. This type of dataset is particularly suitable for cases where random reads are expensive or even improbable, and where the batch size depends on the fetched data. For example, such a dataset, when iter(dataset) is called, could return a stream of data read from a database, a remote server, or even logs generated in real time. See IterableDataset for more details. Note When using an IterableDataset with multi-process data loading, the same dataset object is replicated on each worker process, and thus the replicas must be configured differently to avoid duplicated data. See the IterableDataset documentation for how to achieve this. Data Loading Order and Sampler For iterable-style datasets, data loading order is entirely controlled by the user-defined iterable. This allows easier implementations of chunk-reading and dynamic batch size (e.g., by yielding a batched sample at each time). The rest of this section concerns the case with map-style datasets. torch.utils.data.Sampler classes are used to specify the sequence of indices/keys used in data loading. They represent iterable objects over the indices to datasets. E.g., in the common case with stochastic gradient descent (SGD), a Sampler could randomly permute a list of indices and yield each one at a time, or yield a small number of them for mini-batch SGD.
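The map-style protocol and the sampler idea above can be illustrated without torch at all. The classes below are minimal pure-Python stand-ins invented for this sketch (not the real torch.utils.data classes); they follow the same __getitem__/__len__ and index-yielding conventions:

```python
import random

# Minimal map-style "dataset": implements __getitem__ and __len__.
class SquaresDataset:
    def __init__(self, n):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, idx):
        return idx * idx

# Minimal sampler in the spirit described above: yields a random
# permutation of the dataset's indices, one index at a time.
class ShuffleSampler:
    def __init__(self, data_source, seed=0):
        self.data_source = data_source
        self.seed = seed
    def __iter__(self):
        indices = list(range(len(self.data_source)))
        random.Random(self.seed).shuffle(indices)
        return iter(indices)

ds = SquaresDataset(5)
samples = [ds[i] for i in ShuffleSampler(ds)]
print(sorted(samples))  # prints [0, 1, 4, 9, 16]
```

A real DataLoader drives exactly this interaction: it pulls indices from the sampler and uses them to index the map-style dataset.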
A sequential or shuffled sampler will be automatically constructed based on the shuffle argument to a DataLoader. Alternatively, users may use the sampler argument to specify a custom Sampler object that at each time yields the next index/key to fetch. A custom Sampler that yields a list of batch indices at a time can be passed as the batch_sampler argument. Automatic batching can also be enabled via batch_size and drop_last arguments. See the next section for more details on this. Note Neither sampler nor batch_sampler is compatible with iterable-style datasets, since such datasets have no notion of a key or an index. Loading Batched and Non-Batched Data DataLoader supports automatically collating individual fetched data samples into batches via arguments batch_size, drop_last, and batch_sampler. Automatic batching (default) This is the most common case, and corresponds to fetching a minibatch of data and collating them into batched samples, i.e., containing Tensors with one dimension being the batch dimension (usually the first). When batch_size (default 1) is not None, the data loader yields batched samples instead of individual samples. batch_size and drop_last arguments are used to specify how the data loader obtains batches of dataset keys. For map-style datasets, users can alternatively specify batch_sampler, which yields a list of keys at a time. Note The batch_size and drop_last arguments essentially are used to construct a batch_sampler from sampler. For map-style datasets, the sampler is either provided by user or constructed based on the shuffle argument. For iterable-style datasets, the sampler is a dummy infinite one. See this section on more details on samplers. Note When fetching from iterable-style datasets with multi-processing, the drop_last argument drops the last non-full batch of each worker's dataset replica.
After fetching a list of samples using the indices from sampler, the function passed as the collate_fn argument is used to collate lists of samples into batches. In this case, loading from a map-style dataset is roughly equivalent with: for indices in batch_sampler:
yield collate_fn([dataset[i] for i in indices])
and loading from an iterable-style dataset is roughly equivalent with: dataset_iter = iter(dataset)
for indices in batch_sampler:
yield collate_fn([next(dataset_iter) for _ in indices])
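The "roughly equivalent" map-style loop above can be run directly with plain-Python stand-ins. In this sketch, a batch_sampler is derived from a sequential sampler using batch_size=2 and drop_last=False, and the collate_fn trivially keeps the samples in a list; this illustrates the semantics, not the real DataLoader machinery:

```python
dataset = [10, 11, 12, 13, 14]   # stand-in for a map-style dataset
sampler = range(len(dataset))    # stand-in for a sequential sampler

# Build a batch_sampler from sampler, as described above
# (chunk indices into groups of batch_size; keep or drop the last
# partial group depending on drop_last).
def batch_indices(sampler, batch_size, drop_last):
    batch = []
    for idx in sampler:
        batch.append(idx)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last:
        yield batch

collate_fn = list  # trivial collate: just keep the samples in a list

batches = [collate_fn([dataset[i] for i in indices])
           for indices in batch_indices(sampler, 2, False)]
print(batches)  # prints [[10, 11], [12, 13], [14]]
```

With drop_last=True the final partial batch [14] would be discarded, which is exactly the drop_last behavior described above.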
A custom collate_fn can be used to customize collation, e.g., padding sequential data to max length of a batch. See this section on more about collate_fn. Disable automatic batching In certain cases, users may want to handle batching manually in dataset code, or simply load individual samples. For example, it could be cheaper to directly load batched data (e.g., bulk reads from a database or reading continuous chunks of memory), or the batch size is data dependent, or the program is designed to work on individual samples. Under these scenarios, it's likely better to not use automatic batching (where collate_fn is used to collate the samples), but let the data loader directly return each member of the dataset object. When both batch_size and batch_sampler are None (default value for batch_sampler is already None), automatic batching is disabled. Each sample obtained from the dataset is processed with the function passed as the collate_fn argument. When automatic batching is disabled, the default collate_fn simply converts NumPy arrays into PyTorch Tensors, and keeps everything else untouched. In this case, loading from a map-style dataset is roughly equivalent with: for index in sampler:
yield collate_fn(dataset[index])
and loading from an iterable-style dataset is roughly equivalent with: for data in iter(dataset):
yield collate_fn(data)
See this section on more about collate_fn. Working with collate_fn
The use of collate_fn is slightly different when automatic batching is enabled or disabled. When automatic batching is disabled, collate_fn is called with each individual data sample, and the output is yielded from the data loader iterator. In this case, the default collate_fn simply converts NumPy arrays into PyTorch tensors. When automatic batching is enabled, collate_fn is called with a list of data samples each time. It is expected to collate the input samples into a batch for yielding from the data loader iterator. The rest of this section describes the behavior of the default collate_fn in this case. For instance, if each data sample consists of a 3-channel image and an integral class label, i.e., each element of the dataset returns a tuple (image, class_index), the default collate_fn collates a list of such tuples into a single tuple of a batched image tensor and a batched class label Tensor. In particular, the default collate_fn has the following properties: It always prepends a new dimension as the batch dimension. It automatically converts NumPy arrays and Python numerical values into PyTorch Tensors. It preserves the data structure, e.g., if each sample is a dictionary, it outputs a dictionary with the same set of keys but batched Tensors as values (or lists if the values can not be converted into Tensors). Same for list s, tuple s, namedtuple s, etc. Users may use customized collate_fn to achieve custom batching, e.g., collating along a dimension other than the first, padding sequences of various lengths, or adding support for custom data types. Single- and Multi-process Data Loading A DataLoader uses single-process data loading by default. Within a Python process, the Global Interpreter Lock (GIL) prevents truly parallelizing Python code across threads. To avoid blocking computation code with data loading, PyTorch provides an easy switch to perform multi-process data loading by simply setting the argument num_workers to a positive integer.
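The structure-preserving property of the default collate_fn described above can be imitated in plain Python. In this sketch, plain lists stand in for batched tensors, and collate_sketch is an illustration written for this note rather than the real default collation:

```python
# Sketch of the default collate_fn's structure preservation: a list of
# dict samples becomes one dict whose values are batched, and a list of
# tuple samples becomes one tuple of batched columns. The real default
# collate_fn stacks tensors along a new batch dimension; plain lists
# stand in for batches here.
def collate_sketch(samples):
    first = samples[0]
    if isinstance(first, dict):
        return {key: collate_sketch([s[key] for s in samples]) for key in first}
    if isinstance(first, (tuple, list)):
        return type(first)(collate_sketch(col) for col in zip(*samples))
    return list(samples)  # leaf values: "stack" into a batch list

batch = collate_sketch([
    {"image": "img0", "label": 0},
    {"image": "img1", "label": 1},
])
print(batch)  # prints {'image': ['img0', 'img1'], 'label': [0, 1]}
```

Each sample's dictionary shape survives collation; only the leaves become batched, mirroring the property stated above.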
Single-process data loading (default) In this mode, data fetching is done in the same process that the DataLoader is initialized in. Therefore, data loading may block computation. However, this mode may be preferred when the resource(s) used for sharing data among processes (e.g., shared memory, file descriptors) are limited, or when the entire dataset is small and can be loaded entirely in memory. Additionally, single-process loading often shows more readable error traces and is thus useful for debugging. Multi-process data loading Setting the argument num_workers to a positive integer will turn on multi-process data loading with the specified number of loader worker processes. In this mode, each time an iterator of a DataLoader is created (e.g., when you call enumerate(dataloader)), num_workers worker processes are created. At this point, the dataset, collate_fn, and worker_init_fn are passed to each worker, where they are used to initialize and fetch data. This means that dataset access together with its internal IO and transforms (including collate_fn) runs in the worker process. torch.utils.data.get_worker_info() returns various useful information in a worker process (including the worker id, dataset replica, initial seed, etc.), and returns None in the main process. Users may use this function in dataset code and/or worker_init_fn to individually configure each dataset replica, and to determine whether the code is running in a worker process. For example, this can be particularly helpful for sharding the dataset. For map-style datasets, the main process generates the indices using the sampler and sends them to the workers. So any shuffle randomization is done in the main process, which guides loading by assigning indices to load. For iterable-style datasets, since each worker process gets a replica of the dataset object, naive multi-process loading will often result in duplicated data. 
Using torch.utils.data.get_worker_info() and/or worker_init_fn, users may configure each replica independently. (See the IterableDataset documentation for how to achieve this.) For similar reasons, in multi-process loading, the drop_last argument drops the last non-full batch of each workerβs iterable-style dataset replica. Workers are shut down once the end of the iteration is reached, or when the iterator is garbage collected. Warning It is generally not recommended to return CUDA tensors in multi-process loading because of many subtleties in using CUDA and sharing CUDA tensors in multiprocessing (see CUDA in multiprocessing). Instead, we recommend using automatic memory pinning (i.e., setting pin_memory=True), which enables fast data transfer to CUDA-enabled GPUs. Platform-specific behaviors Since workers rely on Python multiprocessing, worker launch behavior is different on Windows compared to Unix. On Unix, fork() is the default multiprocessing start method. Using fork(), child workers typically can access the dataset and Python argument functions directly through the cloned address space. On Windows, spawn() is the default multiprocessing start method. Using spawn(), another interpreter is launched which runs your main script, followed by the internal worker function that receives the dataset, collate_fn and other arguments through pickle serialization. This separate serialization means that you should take two steps to ensure you are compatible with Windows while using multi-process data loading: Wrap most of your main scriptβs code within an if __name__ == '__main__': block, to make sure it doesnβt run again (most likely generating errors) when each worker process is launched. You can place your dataset and DataLoader instance creation logic here, as it doesnβt need to be re-executed in workers. Make sure that any custom collate_fn, worker_init_fn or dataset code is declared as a top level definition, outside of the __main__ check. 
This ensures that they are available in worker processes. (This is needed since functions are pickled as references only, not as bytecode.) Randomness in multi-process data loading By default, each worker will have its PyTorch seed set to base_seed + worker_id, where base_seed is a long generated by the main process using its RNG (thereby consuming an RNG state). However, seeds for other libraries (e.g., NumPy) may be duplicated upon initializing workers, causing each worker to return identical random numbers. (See this section in the FAQ.) In worker_init_fn, you may access the PyTorch seed set for each worker with either torch.utils.data.get_worker_info().seed or torch.initial_seed(), and use it to seed other libraries before data loading. Memory Pinning Host-to-GPU copies are much faster when they originate from pinned (page-locked) memory. See Use pinned memory buffers for more details on when and how to use pinned memory generally. For data loading, passing pin_memory=True to a DataLoader will automatically put the fetched data Tensors in pinned memory, and thus enable faster data transfer to CUDA-enabled GPUs. The default memory pinning logic only recognizes Tensors and maps and iterables containing Tensors. By default, if the pinning logic sees a batch that is a custom type (which will occur if you have a collate_fn that returns a custom batch type), or if each element of your batch is a custom type, the pinning logic will not recognize them, and it will return that batch (or those elements) without pinning the memory. To enable memory pinning for custom batch or data type(s), define a pin_memory() method on your custom type(s), as in the example below. Example: class SimpleCustomBatch:
    def __init__(self, data):
        transposed_data = list(zip(*data))
        self.inp = torch.stack(transposed_data[0], 0)
        self.tgt = torch.stack(transposed_data[1], 0)

    # custom memory pinning method on custom type
    def pin_memory(self):
        self.inp = self.inp.pin_memory()
        self.tgt = self.tgt.pin_memory()
        return self

def collate_wrapper(batch):
    return SimpleCustomBatch(batch)

inps = torch.arange(10 * 5, dtype=torch.float32).view(10, 5)
tgts = torch.arange(10 * 5, dtype=torch.float32).view(10, 5)
dataset = TensorDataset(inps, tgts)

loader = DataLoader(dataset, batch_size=2, collate_fn=collate_wrapper,
                    pin_memory=True)

for batch_ndx, sample in enumerate(loader):
    print(sample.inp.is_pinned())
    print(sample.tgt.is_pinned())
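The Windows/spawn guidance above (top-level definitions plus an if __name__ == '__main__': guard) can be sketched as follows; the collate function and tensor shapes are illustrative, not part of the PyTorch API:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Declared at top level so workers can unpickle it by reference under spawn.
def my_collate(batch):
    xs, ys = zip(*batch)
    return torch.stack(xs, 0), torch.stack(ys, 0)

if __name__ == '__main__':
    # Dataset/DataLoader creation stays under the __main__ guard so it is
    # not re-executed when worker processes import this module.
    ds = TensorDataset(torch.randn(8, 2), torch.randn(8))
    loader = DataLoader(ds, batch_size=4, num_workers=2, collate_fn=my_collate)
    for xb, yb in loader:
        print(xb.shape)  # torch.Size([4, 2])
```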
class torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False) [source]
Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset. The DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning. See torch.utils.data documentation page for more details. Parameters
dataset (Dataset) β dataset from which to load the data.
batch_size (int, optional) β how many samples per batch to load (default: 1).
shuffle (bool, optional) β set to True to have the data reshuffled at every epoch (default: False).
sampler (Sampler or Iterable, optional) β defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.
batch_sampler (Sampler or Iterable, optional) β like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
num_workers (int, optional) β how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
collate_fn (callable, optional) β merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
pin_memory (bool, optional) β If True, the data loader will copy Tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example below.
drop_last (bool, optional) β set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)
timeout (numeric, optional) β if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)
worker_init_fn (callable, optional) β If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
prefetch_factor (int, optional, keyword-only arg) β Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers. (default: 2)
persistent_workers (bool, optional) β If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the worker Dataset instances alive. (default: False) Warning If the spawn start method is used, worker_init_fn cannot be an unpicklable object, e.g., a lambda function. See Multiprocessing best practices for more details related to multiprocessing in PyTorch. Warning The len(dataloader) heuristic is based on the length of the sampler used. When dataset is an IterableDataset, it instead returns an estimate based on len(dataset) / batch_size, with proper rounding depending on drop_last, regardless of multi-process loading configurations. This represents the best guess PyTorch can make because PyTorch trusts user dataset code to correctly handle multi-process loading and avoid duplicate data. However, if sharding results in multiple workers having incomplete last batches, this estimate can still be inaccurate, because (1) an otherwise complete batch can be broken into multiple ones and (2) more than one batch worth of samples can be dropped when drop_last is set. Unfortunately, PyTorch cannot detect such cases in general. See Dataset Types for more details on these two types of datasets and how IterableDataset interacts with Multi-process data loading. Warning See Reproducibility, My data loader workers return identical random numbers, and Randomness in multi-process data loading notes for random-seed-related questions.
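A minimal usage sketch of the batch_size and drop_last parameters described above, with a toy six-sample dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Six samples with batch_size=4: drop_last controls the short final batch.
ds = TensorDataset(torch.arange(6, dtype=torch.float32).view(6, 1))
sizes_keep = [b[0].shape[0] for b in DataLoader(ds, batch_size=4)]
print(sizes_keep)  # [4, 2]
sizes_drop = [b[0].shape[0] for b in DataLoader(ds, batch_size=4, drop_last=True)]
print(sizes_drop)  # [4]
```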
class torch.utils.data.Dataset [source]
An abstract class representing a Dataset. All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader. Note DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
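A minimal map-style Dataset sketch of the contract described above; the dataset content is illustrative:

```python
import torch
from torch.utils.data import Dataset

# __getitem__ and __len__ are the only methods required for use with
# the default DataLoader options.
class SquaresDataset(Dataset):
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        return torch.tensor(idx * idx)

ds = SquaresDataset(5)
print(len(ds), ds[3].item())  # 5 9
```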
class torch.utils.data.IterableDataset [source]
An iterable Dataset. All datasets that represent an iterable of data samples should subclass it. This form of dataset is particularly useful when data come from a stream. All subclasses should overwrite __iter__(), which should return an iterator over the samples in this dataset. When a subclass is used with DataLoader, each item in the dataset will be yielded from the DataLoader iterator. When num_workers > 0, each worker process will have a different copy of the dataset object, so it is often desirable to configure each copy independently to avoid having duplicate data returned from the workers. get_worker_info(), when called in a worker process, returns information about the worker. It can be used in either the datasetβs __iter__() method or the DataLoaderβs worker_init_fn option to modify each copyβs behavior. Example 1: splitting workload across all workers in __iter__(): >>> class MyIterableDataset(torch.utils.data.IterableDataset):
... def __init__(self, start, end):
... super().__init__()
... assert end > start, "this example code only works with end > start"
... self.start = start
... self.end = end
...
... def __iter__(self):
... worker_info = torch.utils.data.get_worker_info()
... if worker_info is None: # single-process data loading, return the full iterator
... iter_start = self.start
... iter_end = self.end
... else: # in a worker process
... # split workload
... per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))
... worker_id = worker_info.id
... iter_start = self.start + worker_id * per_worker
... iter_end = min(iter_start + per_worker, self.end)
... return iter(range(iter_start, iter_end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)
>>> # Single-process loading
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]
>>> # Multi-process loading with two worker processes
>>> # Worker 0 fetched [3, 4]. Worker 1 fetched [5, 6].
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2)))
[3, 5, 4, 6]
>>> # With even more workers
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=20)))
[3, 4, 5, 6]
Example 2: splitting workload across all workers using worker_init_fn: >>> class MyIterableDataset(torch.utils.data.IterableDataset):
... def __init__(self, start, end):
... super().__init__()
... assert end > start, "this example code only works with end > start"
... self.start = start
... self.end = end
...
... def __iter__(self):
... return iter(range(self.start, self.end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)
>>> # Single-process loading
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]
>>>
>>> # Directly doing multi-process loading yields duplicate data
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2)))
[3, 3, 4, 4, 5, 5, 6, 6]
>>> # Define a `worker_init_fn` that configures each dataset copy differently
>>> def worker_init_fn(worker_id):
... worker_info = torch.utils.data.get_worker_info()
... dataset = worker_info.dataset # the dataset copy in this worker process
... overall_start = dataset.start
... overall_end = dataset.end
... # configure the dataset to only process the split workload
... per_worker = int(math.ceil((overall_end - overall_start) / float(worker_info.num_workers)))
... worker_id = worker_info.id
... dataset.start = overall_start + worker_id * per_worker
... dataset.end = min(dataset.start + per_worker, overall_end)
...
>>> # Multi-process loading with the custom `worker_init_fn`
>>> # Worker 0 fetched [3, 4]. Worker 1 fetched [5, 6].
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2, worker_init_fn=worker_init_fn)))
[3, 5, 4, 6]
>>> # With even more workers
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=20, worker_init_fn=worker_init_fn)))
[3, 4, 5, 6]
class torch.utils.data.TensorDataset(*tensors) [source]
Dataset wrapping tensors. Each sample will be retrieved by indexing tensors along the first dimension. Parameters
*tensors (Tensor) β tensors that have the same size of the first dimension.
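A small sketch of the indexing behavior described above; the tensor contents are placeholders:

```python
import torch
from torch.utils.data import TensorDataset

# Each sample is the tuple of all tensors indexed along dim 0.
x = torch.arange(6).view(3, 2)
y = torch.tensor([0, 1, 0])
ds = TensorDataset(x, y)
sample_x, sample_y = ds[1]
print(sample_x.tolist(), sample_y.item())  # [2, 3] 1
```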
class torch.utils.data.ConcatDataset(datasets) [source]
Dataset as a concatenation of multiple datasets. This class is useful to assemble different existing datasets. Parameters
datasets (sequence) β List of datasets to be concatenated
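A quick sketch of the concatenation behavior, assuming two toy TensorDatasets:

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset

# Indices run through the first dataset, then continue into the second.
a = TensorDataset(torch.arange(3))     # samples 0, 1, 2
b = TensorDataset(torch.arange(3, 5))  # samples 3, 4
ds = ConcatDataset([a, b])
print(len(ds), ds[3][0].item())  # 5 3
```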
class torch.utils.data.ChainDataset(datasets) [source]
Dataset for chaining multiple IterableDataset s. This class is useful to assemble different existing dataset streams. The chaining operation is done on-the-fly, so concatenating large-scale datasets with this class will be efficient. Parameters
datasets (iterable of IterableDataset) β datasets to be chained together
class torch.utils.data.BufferedShuffleDataset(dataset, buffer_size) [source]
Dataset shuffled from the original dataset. This class is useful for shuffling an existing instance of an IterableDataset. The buffer of size buffer_size is first filled with items from the dataset. Then, each item is yielded from the buffer by reservoir sampling via the iterator. buffer_size is required to be larger than 0. For buffer_size == 1, the dataset is not shuffled. In order to fully shuffle the whole dataset, buffer_size is required to be greater than or equal to the size of the dataset. When it is used with DataLoader, each item in the dataset will be yielded from the DataLoader iterator. The method for setting up a random seed differs based on num_workers. For single-process mode (num_workers == 0), the random seed is required to be set before the DataLoader in the main process. >>> ds = BufferedShuffleDataset(dataset)
>>> random.seed(...)
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
For multi-process mode (num_workers > 0), the random seed is set by a callable function in each worker. >>> ds = BufferedShuffleDataset(dataset)
>>> def init_fn(worker_id):
... random.seed(...)
>>> print(list(torch.utils.data.DataLoader(ds, ..., num_workers=n, worker_init_fn=init_fn)))
Parameters
dataset (IterableDataset) β The original IterableDataset.
buffer_size (int) β The buffer size for shuffling.
class torch.utils.data.Subset(dataset, indices) [source]
Subset of a dataset at specified indices. Parameters
dataset (Dataset) β The whole Dataset
indices (sequence) β Indices in the whole set selected for subset
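A short sketch of how Subset re-indexes the parent dataset, using a toy TensorDataset:

```python
import torch
from torch.utils.data import TensorDataset, Subset

# Subset maps position i to the parent sample at indices[i].
ds = TensorDataset(torch.arange(10))
sub = Subset(ds, [2, 5, 7])
print(len(sub), [sub[i][0].item() for i in range(len(sub))])  # 3 [2, 5, 7]
```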
torch.utils.data.get_worker_info() [source]
Returns information about the current DataLoader iterator worker process. When called in a worker, this returns an object guaranteed to have the following attributes:
id: the current worker id.
num_workers: the total number of workers.
seed: the random seed set for the current worker. This value is determined by main process RNG and the worker id. See DataLoaderβs documentation for more details.
dataset: the copy of the dataset object in this process. Note that this will be a different object in a different process than the one in the main process. When called in the main process, this returns None. Note When used in a worker_init_fn passed over to DataLoader, this method can be useful to set up each worker process differently, for instance, using worker_id to configure the dataset object to only read a specific fraction of a sharded dataset, or use seed to seed other libraries used in dataset code (e.g., NumPy).
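As the note suggests, the per-worker seed can be used to seed other libraries in a worker_init_fn; a hypothetical NumPy-seeding function (the name is illustrative) might look like:

```python
import numpy as np
import torch

# Hypothetical worker_init_fn: derive a NumPy seed from the per-worker
# PyTorch seed. NumPy seeds must fit in 32 bits, hence the modulo.
def seed_numpy_worker(worker_id):
    np.random.seed(torch.initial_seed() % 2**32)
```

It would be passed to the DataLoader constructor as worker_init_fn=seed_numpy_worker.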
torch.utils.data.random_split(dataset, lengths, generator=<torch._C.Generator object>) [source]
Randomly split a dataset into non-overlapping new datasets of given lengths. Optionally fix the generator for reproducible results, e.g.: >>> random_split(range(10), [3, 7], generator=torch.Generator().manual_seed(42))
Parameters
dataset (Dataset) β Dataset to be split
lengths (sequence) β lengths of splits to be produced
generator (Generator) β Generator used for the random permutation.
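A brief sketch of a reproducible split, expanding on the one-line example above with a toy dataset:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Fixing the generator makes the split reproducible across runs.
ds = TensorDataset(torch.arange(10))
train, val = random_split(ds, [7, 3],
                          generator=torch.Generator().manual_seed(42))
print(len(train), len(val))  # 7 3
```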
class torch.utils.data.Sampler(data_source) [source]
Base class for all Samplers. Every Sampler subclass has to provide an __iter__() method, providing a way to iterate over indices of dataset elements, and a __len__() method that returns the length of the returned iterator. Note The __len__() method isnβt strictly required by DataLoader, but is expected in any calculation involving the length of a DataLoader.
class torch.utils.data.SequentialSampler(data_source) [source]
Samples elements sequentially, always in the same order. Parameters
data_source (Dataset) β dataset to sample from
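A one-liner sketch of the sampler's output on a toy dataset:

```python
import torch
from torch.utils.data import SequentialSampler, TensorDataset

# SequentialSampler simply yields 0, 1, ..., len(dataset) - 1.
ds = TensorDataset(torch.arange(4))
print(list(SequentialSampler(ds)))  # [0, 1, 2, 3]
```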
class torch.utils.data.RandomSampler(data_source, replacement=False, num_samples=None, generator=None) [source]
Samples elements randomly. If without replacement, then sample from a shuffled dataset. If with replacement, then user can specify num_samples to draw. Parameters
data_source (Dataset) β dataset to sample from
replacement (bool) β samples are drawn on-demand with replacement if True (default: False)
num_samples (int) β number of samples to draw (default: len(dataset)). This argument is supposed to be specified only when replacement is True.
generator (Generator) β Generator used in sampling.
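A short sketch of sampling without replacement; the exact order depends on the generator, so only the set of indices is predictable:

```python
import torch
from torch.utils.data import RandomSampler, TensorDataset

# Without replacement, the sampler yields a permutation of all indices.
ds = TensorDataset(torch.arange(5))
sampler = RandomSampler(ds, generator=torch.Generator().manual_seed(0))
idx = list(sampler)
print(sorted(idx))  # [0, 1, 2, 3, 4]
```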
class torch.utils.data.SubsetRandomSampler(indices, generator=None) [source]
Samples elements randomly from a given list of indices, without replacement. Parameters
indices (sequence) β a sequence of indices
generator (Generator) β Generator used in sampling.
class torch.utils.data.WeightedRandomSampler(weights, num_samples, replacement=True, generator=None) [source]
Samples elements from [0,..,len(weights)-1] with given probabilities (weights). Parameters
weights (sequence) β a sequence of weights, not necessary summing up to one
num_samples (int) β number of samples to draw
replacement (bool) β if True, samples are drawn with replacement. If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.
generator (Generator) β Generator used in sampling. Example >>> list(WeightedRandomSampler([0.1, 0.9, 0.4, 0.7, 3.0, 0.6], 5, replacement=True))
[4, 4, 1, 4, 5]
>>> list(WeightedRandomSampler([0.9, 0.4, 0.05, 0.2, 0.3, 0.1], 5, replacement=False))
[0, 1, 4, 3, 2]
class torch.utils.data.BatchSampler(sampler, batch_size, drop_last) [source]
Wraps another sampler to yield a mini-batch of indices. Parameters
sampler (Sampler or Iterable) β Base sampler. Can be any iterable object
batch_size (int) β Size of mini-batch.
drop_last (bool) β If True, the sampler will drop the last batch if its size would be less than batch_size
Example >>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True))
[[0, 1, 2], [3, 4, 5], [6, 7, 8]]
class torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, seed=0, drop_last=False) [source]
Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it. Note Dataset is assumed to be of constant size. Parameters
dataset β Dataset used for sampling.
num_replicas (int, optional) β Number of processes participating in distributed training. By default, world_size is retrieved from the current distributed group.
rank (int, optional) β Rank of the current process within num_replicas. By default, rank is retrieved from the current distributed group.
shuffle (bool, optional) β If True (default), sampler will shuffle the indices.
seed (int, optional) β random seed used to shuffle the sampler if shuffle=True. This number should be identical across all processes in the distributed group. Default: 0.
drop_last (bool, optional) β if True, then the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False. Warning In distributed mode, calling the set_epoch() method at the beginning of each epoch before creating the DataLoader iterator is necessary to make shuffling work properly across multiple epochs. Otherwise, the same ordering will be always used. Example: >>> sampler = DistributedSampler(dataset) if is_distributed else None
>>> loader = DataLoader(dataset, shuffle=(sampler is None),
... sampler=sampler)
>>> for epoch in range(start_epoch, n_epochs):
... if is_distributed:
... sampler.set_epoch(epoch)
... train(loader) | torch.data |
class torch.utils.data.BatchSampler(sampler, batch_size, drop_last) [source]
Wraps another sampler to yield a mini-batch of indices. Parameters
sampler (Sampler or Iterable) β Base sampler. Can be any iterable object
batch_size (int) β Size of mini-batch.
drop_last (bool) β If True, the sampler will drop the last batch if its size would be less than batch_size
Example >>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True))
[[0, 1, 2], [3, 4, 5], [6, 7, 8]] | torch.data#torch.utils.data.BatchSampler |
class torch.utils.data.BufferedShuffleDataset(dataset, buffer_size) [source]
Dataset shuffled from the original dataset. This class is useful to shuffle an existing instance of an IterableDataset. The buffer with buffer_size is filled with the items from the dataset first. Then, each item will be yielded from the buffer by reservoir sampling via iterator. buffer_size is required to be larger than 0. For buffer_size == 1, the dataset is not shuffled. In order to fully shuffle the whole dataset, buffer_size is required to be greater than or equal to the size of dataset. When it is used with DataLoader, each item in the dataset will be yielded from the DataLoader iterator. And, the method to set up a random seed is different based on num_workers. For single-process mode (num_workers == 0), the random seed is required to be set before the DataLoader in the main process. >>> ds = BufferedShuffleDataset(dataset)
>>> random.seed(...)
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
For multi-process mode (num_workers > 0), the random seed is set by a callable function in each worker. >>> ds = BufferedShuffleDataset(dataset)
>>> def init_fn(worker_id):
... random.seed(...)
>>> print(list(torch.utils.data.DataLoader(ds, ..., num_workers=n, worker_init_fn=init_fn)))
Parameters
dataset (IterableDataset) β The original IterableDataset.
buffer_size (int) β The buffer size for shuffling. | torch.data#torch.utils.data.BufferedShuffleDataset |
class torch.utils.data.ChainDataset(datasets) [source]
Dataset for chainning multiple IterableDataset s. This class is useful to assemble different existing dataset streams. The chainning operation is done on-the-fly, so concatenating large-scale datasets with this class will be efficient. Parameters
datasets (iterable of IterableDataset) β datasets to be chained together | torch.data#torch.utils.data.ChainDataset |
class torch.utils.data.ConcatDataset(datasets) [source]
Dataset as a concatenation of multiple datasets. This class is useful to assemble different existing datasets. Parameters
datasets (sequence) β List of datasets to be concatenated | torch.data#torch.utils.data.ConcatDataset |
class torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False) [source]
Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset. The DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning. See torch.utils.data documentation page for more details. Parameters
dataset (Dataset) β dataset from which to load the data.
batch_size (int, optional) β how many samples per batch to load (default: 1).
shuffle (bool, optional) β set to True to have the data reshuffled at every epoch (default: False).
sampler (Sampler or Iterable, optional) β defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.
batch_sampler (Sampler or Iterable, optional) β like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
num_workers (int, optional) β how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
collate_fn (callable, optional) β merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
pin_memory (bool, optional) β If True, the data loader will copy Tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example below.
drop_last (bool, optional) β set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)
timeout (numeric, optional) β if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)
worker_init_fn (callable, optional) β If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
prefetch_factor (int, optional, keyword-only arg) β Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers. (default: 2)
persistent_workers (bool, optional) β If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. This allows to maintain the workers Dataset instances alive. (default: False) Warning If the spawn start method is used, worker_init_fn cannot be an unpicklable object, e.g., a lambda function. See Multiprocessing best practices on more details related to multiprocessing in PyTorch. Warning len(dataloader) heuristic is based on the length of the sampler used. When dataset is an IterableDataset, it instead returns an estimate based on len(dataset) / batch_size, with proper rounding depending on drop_last, regardless of multi-process loading configurations. This represents the best guess PyTorch can make because PyTorch trusts user dataset code in correctly handling multi-process loading to avoid duplicate data. However, if sharding results in multiple workers having incomplete last batches, this estimate can still be inaccurate, because (1) an otherwise complete batch can be broken into multiple ones and (2) more than one batch worth of samples can be dropped when drop_last is set. Unfortunately, PyTorch can not detect such cases in general. See Dataset Types for more details on these two types of datasets and how IterableDataset interacts with Multi-process data loading. Warning See Reproducibility, and My data loader workers return identical random numbers, and Randomness in multi-process data loading notes for random seed related questions. | torch.data#torch.utils.data.DataLoader |
class torch.utils.data.Dataset [source]
An abstract class representing a Dataset. All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader. Note DataLoader by default constructs a index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided. | torch.data#torch.utils.data.Dataset |
class torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, seed=0, drop_last=False) [source]
Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it. Note Dataset is assumed to be of constant size. Parameters
dataset – Dataset used for sampling.
num_replicas (int, optional) – Number of processes participating in distributed training. By default, world_size is retrieved from the current distributed group.
rank (int, optional) – Rank of the current process within num_replicas. By default, rank is retrieved from the current distributed group.
shuffle (bool, optional) – If True (default), the sampler will shuffle the indices.
seed (int, optional) – Random seed used to shuffle the sampler if shuffle=True. This number should be identical across all processes in the distributed group. Default: 0.
drop_last (bool, optional) – If True, the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False. Warning In distributed mode, calling the set_epoch() method at the beginning of each epoch, before creating the DataLoader iterator, is necessary to make shuffling work properly across multiple epochs. Otherwise, the same ordering will always be used. Example: >>> sampler = DistributedSampler(dataset) if is_distributed else None
>>> loader = DataLoader(dataset, shuffle=(sampler is None),
... sampler=sampler)
>>> for epoch in range(start_epoch, n_epochs):
... if is_distributed:
... sampler.set_epoch(epoch)
... train(loader) | torch.data#torch.utils.data.distributed.DistributedSampler |
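The even-divisibility behavior described above (padding when drop_last=False, truncating when drop_last=True, then giving each rank a disjoint strided slice) can be sketched in plain Python. This is a simplified model of the documented behavior with shuffling omitted, not the actual DistributedSampler implementation:

```python
import math

def partition_indices(indices, num_replicas, rank, drop_last=False):
    """Sketch of per-rank index assignment (assumes len(indices) >= num_replicas)."""
    if drop_last:
        # Truncate the tail so every replica gets the same number of samples.
        total = (len(indices) // num_replicas) * num_replicas
        indices = indices[:total]
    else:
        # Pad by repeating indices until the length is evenly divisible.
        total = math.ceil(len(indices) / num_replicas) * num_replicas
        indices = (indices + indices)[:total]
    # Each rank takes a strided slice, so the subsets are mutually exclusive.
    return indices[rank::num_replicas]
```

With 10 samples and 3 replicas, padding yields 12 indices (two repeated), while drop_last truncates to 9; either way every rank sees the same number of samples.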
torch.utils.data.get_worker_info() [source]
Returns the information about the current DataLoader iterator worker process. When called in a worker, this returns an object guaranteed to have the following attributes:
id: the current worker id.
num_workers: the total number of workers.
seed: the random seed set for the current worker. This value is determined by the main process RNG and the worker id. See DataLoader's documentation for more details.
dataset: the copy of the dataset object in this process. Note that this will be a different object in a different process than the one in the main process. When called in the main process, this returns None. Note When used in a worker_init_fn passed over to DataLoader, this method can be useful to set up each worker process differently, for instance, using worker_id to configure the dataset object to only read a specific fraction of a sharded dataset, or use seed to seed other libraries used in dataset code (e.g., NumPy). | torch.data#torch.utils.data.get_worker_info |
class torch.utils.data.IterableDataset [source]
An iterable Dataset. All datasets that represent an iterable of data samples should subclass it. This form of dataset is particularly useful when data comes from a stream. All subclasses should overwrite __iter__(), which should return an iterator of samples in this dataset. When a subclass is used with DataLoader, each item in the dataset will be yielded from the DataLoader iterator. When num_workers > 0, each worker process will have a different copy of the dataset object, so it is often desirable to configure each copy independently to avoid having duplicate data returned from the workers. get_worker_info(), when called in a worker process, returns information about the worker. It can be used in either the dataset's __iter__() method or the DataLoader's worker_init_fn option to modify each copy's behavior. Example 1: splitting workload across all workers in __iter__(): >>> class MyIterableDataset(torch.utils.data.IterableDataset):
... def __init__(self, start, end):
...     super().__init__()
...     assert end > start, "this example code only works with end > start"
... self.start = start
... self.end = end
...
... def __iter__(self):
... worker_info = torch.utils.data.get_worker_info()
... if worker_info is None: # single-process data loading, return the full iterator
... iter_start = self.start
... iter_end = self.end
... else: # in a worker process
... # split workload
... per_worker = int(math.ceil((self.end - self.start) / float(worker_info.num_workers)))
... worker_id = worker_info.id
... iter_start = self.start + worker_id * per_worker
... iter_end = min(iter_start + per_worker, self.end)
... return iter(range(iter_start, iter_end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)
>>> # Single-process loading
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]
>>> # Multi-process loading with two worker processes
>>> # Worker 0 fetched [3, 4]. Worker 1 fetched [5, 6].
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2)))
[3, 5, 4, 6]
>>> # With even more workers
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=20)))
[3, 4, 5, 6]
Example 2: splitting workload across all workers using worker_init_fn: >>> class MyIterableDataset(torch.utils.data.IterableDataset):
... def __init__(self, start, end):
...     super().__init__()
...     assert end > start, "this example code only works with end > start"
... self.start = start
... self.end = end
...
... def __iter__(self):
... return iter(range(self.start, self.end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)
>>> # Single-process loading
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]
>>>
>>> # Directly doing multi-process loading yields duplicate data
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2)))
[3, 3, 4, 4, 5, 5, 6, 6]
>>> # Define a `worker_init_fn` that configures each dataset copy differently
>>> def worker_init_fn(worker_id):
... worker_info = torch.utils.data.get_worker_info()
... dataset = worker_info.dataset # the dataset copy in this worker process
... overall_start = dataset.start
... overall_end = dataset.end
... # configure the dataset to only process the split workload
... per_worker = int(math.ceil((overall_end - overall_start) / float(worker_info.num_workers)))
... worker_id = worker_info.id
... dataset.start = overall_start + worker_id * per_worker
... dataset.end = min(dataset.start + per_worker, overall_end)
...
>>> # Multi-process loading with the custom `worker_init_fn`
>>> # Worker 0 fetched [3, 4]. Worker 1 fetched [5, 6].
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=2, worker_init_fn=worker_init_fn)))
[3, 5, 4, 6]
>>> # With even more workers
>>> print(list(torch.utils.data.DataLoader(ds, num_workers=20, worker_init_fn=worker_init_fn)))
[3, 4, 5, 6] | torch.data#torch.utils.data.IterableDataset |
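The arithmetic both examples rely on can be factored into a small helper (a hypothetical name, shown for clarity): each worker gets a contiguous chunk of ceil((end - start) / num_workers) indices, and workers past the end get an empty range:

```python
import math

def worker_shard(start, end, worker_id, num_workers):
    """Sketch of the per-worker split used in the examples above."""
    per_worker = int(math.ceil((end - start) / float(num_workers)))
    iter_start = start + worker_id * per_worker
    iter_end = min(iter_start + per_worker, end)  # clamp the last chunk
    return range(iter_start, iter_end)
```

For start=3, end=7 and two workers this gives [3, 4] and [5, 6]; with twenty workers the first four get one sample each and the rest get empty ranges, which is why the 20-worker output above still covers exactly range(3, 7).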
class torch.utils.data.RandomSampler(data_source, replacement=False, num_samples=None, generator=None) [source]
Samples elements randomly. If without replacement, then sample from a shuffled dataset. If with replacement, then the user can specify num_samples to draw. Parameters
data_source (Dataset) – dataset to sample from
replacement (bool) – samples are drawn on-demand with replacement if True. Default: False
num_samples (int) – number of samples to draw. Default: len(dataset). This argument should be specified only when replacement is True.
generator (Generator) – Generator used in sampling. | torch.data#torch.utils.data.RandomSampler
torch.utils.data.random_split(dataset, lengths, generator=<torch._C.Generator object>) [source]
Randomly split a dataset into non-overlapping new datasets of given lengths. Optionally fix the generator for reproducible results, e.g.: >>> random_split(range(10), [3, 7], generator=torch.Generator().manual_seed(42))
Parameters
dataset (Dataset) – Dataset to be split
lengths (sequence) – lengths of splits to be produced
generator (Generator) – Generator used for the random permutation. | torch.data#torch.utils.data.random_split
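On the index level, the split amounts to cutting a seeded random permutation into consecutive chunks. A pure-Python sketch (hypothetical helper, using random.shuffle in place of a torch Generator):

```python
import random

def random_split_indices(n, lengths, seed=None):
    """Permute 0..n-1 and cut the permutation into chunks of given lengths."""
    if sum(lengths) != n:
        raise ValueError("sum of lengths must equal the dataset length")
    perm = list(range(n))
    random.Random(seed).shuffle(perm)  # fixed seed -> reproducible split
    splits, offset = [], 0
    for length in lengths:
        splits.append(perm[offset:offset + length])
        offset += length
    return splits
```

The resulting chunks are non-overlapping and together cover every index exactly once, mirroring the "non-overlapping new datasets of given lengths" guarantee above.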
class torch.utils.data.Sampler(data_source) [source]
Base class for all Samplers. Every Sampler subclass has to provide an __iter__() method, providing a way to iterate over indices of dataset elements, and a __len__() method that returns the length of the returned iterator. Note The __len__() method isn't strictly required by DataLoader, but is expected in any calculation involving the length of a DataLoader. | torch.data#torch.utils.data.Sampler
class torch.utils.data.SequentialSampler(data_source) [source]
Samples elements sequentially, always in the same order. Parameters
data_source (Dataset) – dataset to sample from | torch.data#torch.utils.data.SequentialSampler
class torch.utils.data.Subset(dataset, indices) [source]
Subset of a dataset at specified indices. Parameters
dataset (Dataset) – The whole Dataset
indices (sequence) – Indices in the whole set selected for subset | torch.data#torch.utils.data.Subset
class torch.utils.data.SubsetRandomSampler(indices, generator=None) [source]
Samples elements randomly from a given list of indices, without replacement. Parameters
indices (sequence) – a sequence of indices
generator (Generator) – Generator used in sampling. | torch.data#torch.utils.data.SubsetRandomSampler
class torch.utils.data.TensorDataset(*tensors) [source]
Dataset wrapping tensors. Each sample will be retrieved by indexing tensors along the first dimension. Parameters
*tensors (Tensor) – tensors that have the same size in the first dimension. | torch.data#torch.utils.data.TensorDataset
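The indexing contract can be sketched without tensors: wrap several equal-length sequences and return a tuple with the index-th element of each (a hypothetical pure-Python stand-in for TensorDataset):

```python
class TupleDataset:
    """Sketch of TensorDataset's contract over plain sequences."""

    def __init__(self, *sequences):
        # All wrapped sequences must share the same "first dimension".
        assert all(len(s) == len(sequences[0]) for s in sequences), \
            "all sequences must have the same length"
        self.sequences = sequences

    def __getitem__(self, index):
        # One element from each wrapped sequence, as a tuple.
        return tuple(s[index] for s in self.sequences)

    def __len__(self):
        return len(self.sequences[0])
```

For example, wrapping features and labels side by side yields (feature, label) pairs per index, which is the common TensorDataset use case.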
class torch.utils.data.WeightedRandomSampler(weights, num_samples, replacement=True, generator=None) [source]
Samples elements from [0,..,len(weights)-1] with given probabilities (weights). Parameters
weights (sequence) – a sequence of weights, not necessarily summing up to one
num_samples (int) – number of samples to draw
replacement (bool) – if True, samples are drawn with replacement. If not, they are drawn without replacement, which means that when a sample index is drawn for a row, it cannot be drawn again for that row.
generator (Generator) – Generator used in sampling. Example >>> list(WeightedRandomSampler([0.1, 0.9, 0.4, 0.7, 3.0, 0.6], 5, replacement=True))
[4, 4, 1, 4, 5]
>>> list(WeightedRandomSampler([0.9, 0.4, 0.05, 0.2, 0.3, 0.1], 5, replacement=False))
[0, 1, 4, 3, 2] | torch.data#torch.utils.data.WeightedRandomSampler |
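The sampling rule can be sketched with cumulative weights and bisection. The real sampler uses torch.multinomial, so this pure-Python version (a hypothetical helper) only models the documented behavior, including zeroing a drawn weight when replacement=False:

```python
import bisect
import itertools
import random

def weighted_sample(weights, num_samples, replacement=True, seed=None):
    """Draw indices in proportion to unnormalized weights (sketch)."""
    rng = random.Random(seed)
    weights = list(weights)
    drawn = []
    for _ in range(num_samples):
        cumulative = list(itertools.accumulate(weights))
        r = rng.uniform(0, cumulative[-1])
        index = bisect.bisect_left(cumulative, r)
        drawn.append(index)
        if not replacement:
            weights[index] = 0.0  # cannot be drawn again for this row
    return drawn
```

A zero-weight entry is (almost surely) never drawn, and without replacement each index appears at most once, matching the parameter descriptions above.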
torch.utils.dlpack
torch.utils.dlpack.from_dlpack(dlpack) → Tensor
Decodes a DLPack to a tensor. Parameters
dlpack – a PyCapsule object with the dltensor. The tensor will share memory with the object represented in the dlpack. Note that each dlpack can only be consumed once.
torch.utils.dlpack.to_dlpack(tensor) → PyCapsule
Returns a DLPack representing the tensor. Parameters
tensor – a tensor to be exported. The dlpack shares the tensor's memory. Note that each dlpack can only be consumed once. | torch.dlpack
torch.utils.mobile_optimizer Warning This API is in beta and may change in the near future. Torch mobile supports the torch.utils.mobile_optimizer.optimize_for_mobile utility to run a list of optimization passes on modules in eval mode. The method takes the following parameters: a torch.jit.ScriptModule object, a blocklisting optimization set, and a preserved method list
By default, if optimization blocklist is None or empty, optimize_for_mobile will run the following optimizations:
Conv2D + BatchNorm fusion (blocklisting option MobileOptimizerType::CONV_BN_FUSION): This optimization pass folds Conv2d-BatchNorm2d into Conv2d in forward method of this module and all its submodules. The weight and bias of the Conv2d are correspondingly updated.
Insert and Fold prepacked ops (blocklisting option MobileOptimizerType::INSERT_FOLD_PREPACK_OPS): This optimization pass rewrites the graph to replace 2D convolutions and linear ops with their prepacked counterparts. Prepacked ops are stateful ops in that, they require some state to be created, such as weight prepacking and use this state, i.e. prepacked weights, during op execution. XNNPACK is one such backend that provides prepacked ops, with kernels optimized for mobile platforms (such as ARM CPUs). Prepacking of weight enables efficient memory access and thus faster kernel execution. At the moment optimize_for_mobile pass rewrites the graph to replace Conv2D/Linear with 1) op that pre-packs weight for XNNPACK conv2d/linear ops and 2) op that takes pre-packed weight and activation as input and generates output activations. Since 1 needs to be done only once, we fold the weight pre-packing such that it is done only once at model load time. This pass of the optimize_for_mobile does 1 and 2 and then folds, i.e. removes, weight pre-packing ops.
ReLU/Hardtanh fusion: XNNPACK ops support fusion of clamping. That is clamping of output activation is done as part of the kernel, including for 2D convolution and linear op kernels. Thus clamping effectively comes for free. Thus any op that can be expressed as clamping op, such as ReLU or hardtanh, can be fused with previous Conv2D or linear op in XNNPACK. This pass rewrites graph by finding ReLU/hardtanh ops that follow XNNPACK Conv2D/linear ops, written by the previous pass, and fuses them together.
Dropout removal (blocklisting option MobileOptimizerType::REMOVE_DROPOUT): This optimization pass removes dropout and dropout_ nodes from this module when training is false.
Conv packed params hoisting (blocklisting option MobileOptimizerType::HOIST_CONV_PACKED_PARAMS): This optimization pass moves convolution packed params to the root module, so that the convolution structs can be deleted. This decreases model size without impacting numerics. optimize_for_mobile will also invoke the freeze_module pass, which only preserves the forward method. If you have other methods that need to be preserved, add them to the preserved methods list and pass it into the method.
torch.utils.mobile_optimizer.optimize_for_mobile(script_module, optimization_blocklist=None, preserved_methods=None, backend='CPU') [source]
Parameters
script_module – An instance of a torch script module with type ScriptModule.
optimization_blocklist – A set with type MobileOptimizerType. When the set is not passed, the optimization method will run all optimization passes; otherwise, it will run the optimization passes not included in optimization_blocklist.
preserved_methods – A list of methods that need to be preserved when the freeze_module pass is invoked
backend – Device type to use for running the resulting model ('CPU' (default), 'Vulkan' or 'Metal'). Returns
A new optimized torch script module | torch.mobile_optimizer
torch.utils.model_zoo Moved to torch.hub.
torch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)
Loads the Torch serialized object at the given URL. If the downloaded file is a zip file, it will be automatically decompressed. If the object is already present in model_dir, it's deserialized and returned. The default value of model_dir is <hub_dir>/checkpoints where hub_dir is the directory returned by get_dir(). Parameters
url (string) – URL of the object to download
model_dir (string, optional) – directory in which to save the object
map_location (optional) – a function or a dict specifying how to remap storage locations (see torch.load)
progress (bool, optional) – whether or not to display a progress bar to stderr. Default: True
check_hash (bool, optional) – If True, the filename part of the URL should follow the naming convention filename-<sha256>.ext where <sha256> is the first eight or more digits of the SHA256 hash of the contents of the file. The hash is used to ensure unique names and to verify the contents of the file. Default: False
file_name (string, optional) – name for the downloaded file. The filename from url will be used if not set. Example >>> state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth') | torch.model_zoo
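The filename-<sha256>.ext convention used by check_hash can be sketched with the standard library (a hypothetical helper; torch.hub performs the actual verification during download):

```python
import hashlib
import re

def filename_matches_hash(file_name, content):
    """Check that the hex fragment embedded in the name (eight or more
    digits) is a prefix of the SHA256 of the file contents."""
    match = re.search(r"-([0-9a-f]{8,})\.", file_name)
    if match is None:
        return False  # no hash fragment in the filename
    digest = hashlib.sha256(content).hexdigest()
    return digest.startswith(match.group(1))
```

A name like resnet18-5c106cde.pth thus carries the first eight digits of the checkpoint's SHA256, letting the loader both deduplicate files and detect corrupted downloads.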
torch.utils.tensorboard Before going further, more details on TensorBoard can be found at https://www.tensorflow.org/tensorboard/ Once you've installed TensorBoard, these utilities let you log PyTorch models and metrics into a directory for visualization within the TensorBoard UI. Scalars, images, histograms, graphs, and embedding visualizations are all supported for PyTorch models and tensors as well as Caffe2 nets and blobs. The SummaryWriter class is your main entry point to log data for consumption and visualization by TensorBoard. For example: import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms
# Writer will output to ./runs/ directory by default
writer = SummaryWriter()
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
model = torchvision.models.resnet50(False)
# Have ResNet model take in grayscale rather than RGB
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
images, labels = next(iter(trainloader))
grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()
This can then be visualized with TensorBoard, which should be installable and runnable with: pip install tensorboard
tensorboard --logdir=runs
Lots of information can be logged for one experiment. To avoid cluttering the UI and have better result clustering, we can group plots by naming them hierarchically. For example, 'Loss/train' and 'Loss/test' will be grouped together, while 'Accuracy/train' and 'Accuracy/test' will be grouped separately in the TensorBoard interface. from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for n_iter in range(100):
writer.add_scalar('Loss/train', np.random.random(), n_iter)
writer.add_scalar('Loss/test', np.random.random(), n_iter)
writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
Expected result:
class torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source]
Writes entries directly to event files in the log_dir to be consumed by TensorBoard. The SummaryWriter class provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.
__init__(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source]
Creates a SummaryWriter that will write out events and summaries to the event file. Parameters
log_dir (string) – Save directory location. Default is runs/CURRENT_DATETIME_HOSTNAME, which changes after each run. Use a hierarchical folder structure to compare between runs easily, e.g. pass in 'runs/exp1', 'runs/exp2', etc. for each new experiment to compare across them.
comment (string) – Comment log_dir suffix appended to the default log_dir. If log_dir is assigned, this argument has no effect.
purge_step (int) – When logging crashes at step T+X and restarts at step T, any events whose global_step is larger than or equal to T will be purged and hidden from TensorBoard. Note that crashed and resumed experiments should have the same log_dir.
max_queue (int) – Size of the queue for pending events and summaries before one of the 'add' calls forces a flush to disk. Default is ten items.
flush_secs (int) – How often, in seconds, to flush the pending events and summaries to disk. Default is every two minutes.
filename_suffix (string) – Suffix added to all event filenames in the log_dir directory. More details on filename construction in tensorboard.summary.writer.event_file_writer.EventFileWriter. Examples: from torch.utils.tensorboard import SummaryWriter
# create a summary writer with automatically generated folder name.
writer = SummaryWriter()
# folder location: runs/May04_22-14-54_s-MacBook-Pro.local/
# create a summary writer using the specified folder name.
writer = SummaryWriter("my_experiment")
# folder location: my_experiment
# create a summary writer with comment appended.
writer = SummaryWriter(comment="LR_0.1_BATCH_16")
# folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/
add_scalar(tag, scalar_value, global_step=None, walltime=None) [source]
Add scalar data to summary. Parameters
tag (string) – Data identifier
scalar_value (float or string/blobname) – Value to save
global_step (int) – Global step value to record
walltime (float) – Optional override default walltime (time.time()) with seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
x = range(100)
for i in x:
writer.add_scalar('y=2x', i * 2, i)
writer.close()
Expected result:
add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None) [source]
Adds many scalar values to summary. Parameters
main_tag (string) – The parent name for the tags
tag_scalar_dict (dict) – Key-value pair storing the tag and corresponding values
global_step (int) – Global step value to record
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
r = 5
for i in range(100):
writer.add_scalars('run_14h', {'xsinx':i*np.sin(i/r),
'xcosx':i*np.cos(i/r),
'tanx': np.tan(i/r)}, i)
writer.close()
# This call adds three values to the same scalar plot with the tag
# 'run_14h' in TensorBoard's scalar section.
Expected result:
add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None) [source]
Add histogram to summary. Parameters
tag (string) – Data identifier
values (torch.Tensor, numpy.array, or string/blobname) – Values to build histogram
global_step (int) – Global step value to record
bins (string) – One of {'tensorflow', 'auto', 'fd', ...}. This determines how the bins are made. You can find other options in: https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for i in range(10):
x = np.random.random(1000)
writer.add_histogram('distribution centers', x + i, i)
writer.close()
Expected result:
add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW') [source]
Add image data to summary. Note that this requires the pillow package. Parameters
tag (string) – Data identifier
img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
global_step (int) – Global step value to record
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Shape:
img_tensor: Default is (3, H, W). You can use torchvision.utils.make_grid() to convert a batch of tensors into 3xHxW format or call add_images and let us do the job. A tensor with shape (1, H, W), (H, W), or (H, W, 3) is also suitable as long as the corresponding dataformats argument is passed, e.g. CHW, HWC, HW. Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
img = np.zeros((3, 100, 100))
img[0] = np.arange(0, 10000).reshape(100, 100) / 10000
img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC = np.zeros((100, 100, 3))
img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000
writer = SummaryWriter()
writer.add_image('my_image', img, 0)
# If you have non-default dimension setting, set the dataformats argument.
writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')
writer.close()
Expected result:
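What the dataformats argument conveys can be sketched with a plain-Python permutation helper (hypothetical; add_image handles more layouts and real array types internally):

```python
def to_chw(data, dataformats):
    """Permute a nested-list image from the given layout to CHW (sketch)."""
    if dataformats == "CHW":
        return data  # already in the default layout
    if dataformats == "HW":
        return [data]  # add a singleton channel axis: (H, W) -> (1, H, W)
    if dataformats == "HWC":
        # Move the trailing channel axis to the front.
        h, w, c = len(data), len(data[0]), len(data[0][0])
        return [[[data[i][j][k] for j in range(w)] for i in range(h)]
                for k in range(c)]
    raise ValueError("unsupported dataformats: " + dataformats)
```

This is why the img_HWC example above needs dataformats='HWC': without it, the trailing channel axis would be misread as image width.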
add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW') [source]
Add batched image data to summary. Note that this requires the pillow package. Parameters
tag (string) – Data identifier
img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
global_step (int) – Global step value to record
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event
dataformats (string) – Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc. Shape:
img_tensor: Default is (N, 3, H, W). If dataformats is specified, other shapes will be accepted, e.g. NCHW or NHWC. Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
img_batch = np.zeros((16, 3, 100, 100))
for i in range(16):
img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i
img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i
writer = SummaryWriter()
writer.add_images('my_image_batch', img_batch, 0)
writer.close()
Expected result:
add_figure(tag, figure, global_step=None, close=True, walltime=None) [source]
Render matplotlib figure into an image and add it to summary. Note that this requires the matplotlib package. Parameters
tag (string) – Data identifier
figure (matplotlib.pyplot.figure) – Figure or a list of figures
global_step (int) – Global step value to record
close (bool) – Flag to automatically close the figure
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event
add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None) [source]
Add video data to summary. Note that this requires the moviepy package. Parameters
tag (string) – Data identifier
vid_tensor (torch.Tensor) – Video data
global_step (int) – Global step value to record
fps (float or int) – Frames per second
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Shape:
vid_tensor: (N, T, C, H, W). The values should lie in [0, 255] for type uint8 or [0, 1] for type float.
add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None) [source]
Add audio data to summary. Parameters
tag (string) – Data identifier
snd_tensor (torch.Tensor) – Sound data
global_step (int) – Global step value to record
sample_rate (int) – sample rate in Hz
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Shape:
snd_tensor: (1, L). The values should lie in [-1, 1].
add_text(tag, text_string, global_step=None, walltime=None) [source]
Add text data to summary. Parameters
tag (string) – Data identifier
text_string (string) – String to save
global_step (int) – Global step value to record
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Examples: writer.add_text('lstm', 'This is an lstm', 0)
writer.add_text('rnn', 'This is an rnn', 10)
add_graph(model, input_to_model=None, verbose=False) [source]
Add graph data to summary. Parameters
model (torch.nn.Module) – Model to draw.
input_to_model (torch.Tensor or list of torch.Tensor) – A variable or a tuple of variables to be fed.
verbose (bool) – Whether to print graph structure in console.
add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None) [source]
Add embedding projector data to summary. Parameters
mat (torch.Tensor or numpy.array) – A matrix in which each row is the feature vector of a data point
metadata (list) – A list of labels; each element will be converted to a string
label_img (torch.Tensor) – Images corresponding to each data point
global_step (int) – Global step value to record
tag (string) – Name for the embedding Shape:
mat: (N, D), where N is the number of data points and D is the feature dimension label_img: (N, C, H, W) Examples: import keyword
import torch
meta = []
while len(meta)<100:
meta = meta+keyword.kwlist # get some strings
meta = meta[:100]
for i, v in enumerate(meta):
meta[i] = v+str(i)
label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
label_img[i]*=i/100.0
writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)
writer.add_embedding(torch.randn(100, 5), label_img=label_img)
writer.add_embedding(torch.randn(100, 5), metadata=meta)
add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None) [source]
Adds a precision-recall curve. Plotting a precision-recall curve lets you understand your model's performance under different threshold settings. With this function, you provide the ground truth labeling (T/F) and prediction confidence (usually the output of your model) for each target. The TensorBoard UI will let you choose the threshold interactively. Parameters
tag (string) – Data identifier
labels (torch.Tensor, numpy.array, or string/blobname) – Ground truth data. Binary label for each element.
predictions (torch.Tensor, numpy.array, or string/blobname) – The probability that an element is classified as true. Values should be in [0, 1]
global_step (int) – Global step value to record
num_thresholds (int) – Number of thresholds used to draw the curve.
walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
labels = np.random.randint(2, size=100) # binary label
predictions = np.random.rand(100)
writer = SummaryWriter()
writer.add_pr_curve('pr_curve', labels, predictions, 0)
writer.close()
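Each point on the curve corresponds to one threshold choice, and can be sketched directly (a hypothetical helper; the writer computes num_thresholds such points internally):

```python
def pr_point(labels, predictions, threshold):
    """Precision and recall when predictions >= threshold count as positive."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p >= threshold)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p >= threshold)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p < threshold)
    precision = tp / (tp + fp) if tp + fp else 1.0  # no positive predictions
    recall = tp / (tp + fn) if tp + fn else 0.0     # no positive labels
    return precision, recall
```

Sweeping the threshold from 1 down to 0 trades precision for recall, which is exactly the curve the TensorBoard UI lets you explore interactively.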
add_custom_scalars(layout) [source]
Creates a special chart by collecting chart tags in 'scalars'. Note that this function can only be called once for each SummaryWriter() object. Because it only provides metadata to tensorboard, the function can be called before or after the training loop. Parameters
layout (dict) – {categoryName: charts}, where charts is also a dictionary {chartName: ListOfProperties}. The first element in ListOfProperties is the chart's type (one of Multiline or Margin) and the second element should be a list containing the tags you have used in the add_scalar function, which will be collected into the new chart. Examples: layout = {'Taiwan':{'twse':['Multiline',['twse/0050', 'twse/2330']]},
'USA':{ 'dow':['Margin', ['dow/aaa', 'dow/bbb', 'dow/ccc']],
'nasdaq':['Margin', ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}
writer.add_custom_scalars(layout)
add_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None) [source]
Add meshes or 3D point clouds to TensorBoard. The visualization is based on Three.js, so it allows users to interact with the rendered object. Besides the basic definitions such as vertices and faces, users can further provide camera parameters, lighting conditions, etc. Please check https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene for advanced usage. Parameters
tag (string) – Data identifier
vertices (torch.Tensor) – List of the 3D coordinates of vertices.
colors (torch.Tensor) – Colors for each vertex
faces (torch.Tensor) – Indices of vertices within each triangle. (Optional)
config_dict – Dictionary with ThreeJS classes names and configuration.
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
vertices: (B, N, 3) (batch, number_of_vertices, channels). colors: (B, N, 3); the values should lie in [0, 255] for type uint8 or [0, 1] for type float. faces: (B, N, 3); the values should lie in [0, number_of_vertices] for type uint8. Examples: from torch.utils.tensorboard import SummaryWriter
import torch

vertices_tensor = torch.as_tensor([
    [1, 1, 1],
    [-1, -1, 1],
    [1, -1, -1],
    [-1, 1, -1],
], dtype=torch.float).unsqueeze(0)
colors_tensor = torch.as_tensor([
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
], dtype=torch.int).unsqueeze(0)
faces_tensor = torch.as_tensor([
    [0, 2, 3],
    [0, 3, 1],
    [0, 1, 2],
    [1, 3, 2],
], dtype=torch.int).unsqueeze(0)
writer = SummaryWriter()
writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)
writer.close()
add_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None) [source]
Add a set of hyperparameters to be compared in TensorBoard. Parameters
hparam_dict (dict) – Each key-value pair in the dictionary is the name of a hyperparameter and its corresponding value. The type of the value can be one of bool, string, float, int, or None.
metric_dict (dict) – Each key-value pair in the dictionary is the name of a metric and its corresponding value. Note that the key used here should be unique in the tensorboard record. Otherwise the value you added by add_scalar will be displayed in the hparam plugin. In most cases, this is unwanted.
hparam_domain_discrete – (Optional[Dict[str, List[Any]]]) A dictionary that contains names of the hyperparameters and all discrete values they can hold
run_name (str) – Name of the run, to be included as part of the logdir. If unspecified, the current timestamp is used. Examples: from torch.utils.tensorboard import SummaryWriter
with SummaryWriter() as w:
    for i in range(5):
        w.add_hparams({'lr': 0.1*i, 'bsize': i},
                      {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})
Expected result:
flush() [source]
Flushes the event file to disk. Call this method to make sure that all pending events have been written to disk.
close() [source] | torch.tensorboard |
class torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source]
Writes entries directly to event files in the log_dir to be consumed by TensorBoard. The SummaryWriter class provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.
__init__(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source]
Creates a SummaryWriter that will write out events and summaries to the event file. Parameters
log_dir (string) – Save directory location. Default is runs/CURRENT_DATETIME_HOSTNAME, which changes after each run. Use a hierarchical folder structure to compare between runs easily, e.g. pass in 'runs/exp1', 'runs/exp2', etc. for each new experiment.
comment (string) – Comment log_dir suffix appended to the default log_dir. If log_dir is assigned, this argument has no effect.
purge_step (int) – When logging crashes at step T+X and restarts at step T, any events whose global_step is larger than or equal to T will be purged and hidden from TensorBoard. Note that crashed and resumed experiments should have the same log_dir.
max_queue (int) – Size of the queue for pending events and summaries before one of the 'add' calls forces a flush to disk. Default is ten items.
flush_secs (int) – How often, in seconds, to flush the pending events and summaries to disk. Default is every two minutes.
filename_suffix (string) – Suffix added to all event filenames in the log_dir directory. More details on filename construction in tensorboard.summary.writer.event_file_writer.EventFileWriter. Examples: from torch.utils.tensorboard import SummaryWriter
# create a summary writer with automatically generated folder name.
writer = SummaryWriter()
# folder location: runs/May04_22-14-54_s-MacBook-Pro.local/
# create a summary writer using the specified folder name.
writer = SummaryWriter("my_experiment")
# folder location: my_experiment
# create a summary writer with comment appended.
writer = SummaryWriter(comment="LR_0.1_BATCH_16")
# folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/
add_scalar(tag, scalar_value, global_step=None, walltime=None) [source]
Add scalar data to summary. Parameters
tag (string) – Data identifier
scalar_value (float or string/blobname) – Value to save
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
x = range(100)
for i in x:
writer.add_scalar('y=2x', i * 2, i)
writer.close()
Expected result:
add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None) [source]
Adds many scalars to summary. Parameters
main_tag (string) – The parent name for the tags
tag_scalar_dict (dict) – Key-value pairs storing the tag and corresponding values
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
r = 5
for i in range(100):
    writer.add_scalars('run_14h', {'xsinx': i*np.sin(i/r),
                                   'xcosx': i*np.cos(i/r),
                                   'tanx': np.tan(i/r)}, i)
writer.close()
# This call adds three values to the same scalar plot with the tag
# 'run_14h' in TensorBoard's scalar section.
Expected result:
add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None) [source]
Add histogram to summary. Parameters
tag (string) – Data identifier
values (torch.Tensor, numpy.array, or string/blobname) – Values to build histogram
global_step (int) – Global step value to record
bins (string) – One of {'tensorflow', 'auto', 'fd', …}. This determines how the bins are made. You can find other options in: https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for i in range(10):
x = np.random.random(1000)
writer.add_histogram('distribution centers', x + i, i)
writer.close()
Expected result:
add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW') [source]
Add image data to summary. Note that this requires the pillow package. Parameters
tag (string) – Data identifier
img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
img_tensor: Default is (3, H, W). You can use torchvision.utils.make_grid() to convert a batch of tensors into 3xHxW format, or call add_images and let us do the job. Tensors with shape (1, H, W), (H, W), or (H, W, 3) are also suitable as long as the corresponding dataformats argument is passed, e.g. CHW, HWC, HW. Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
img = np.zeros((3, 100, 100))
img[0] = np.arange(0, 10000).reshape(100, 100) / 10000
img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC = np.zeros((100, 100, 3))
img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000
writer = SummaryWriter()
writer.add_image('my_image', img, 0)
# If you have non-default dimension setting, set the dataformats argument.
writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')
writer.close()
Expected result:
add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW') [source]
Add batched image data to summary. Note that this requires the pillow package. Parameters
tag (string) – Data identifier
img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event
dataformats (string) – Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc. Shape:
img_tensor: Default is (N, 3, H, W). If dataformats is specified, other shapes will be accepted, e.g. NCHW or NHWC. Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
img_batch = np.zeros((16, 3, 100, 100))
for i in range(16):
img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i
img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i
writer = SummaryWriter()
writer.add_images('my_image_batch', img_batch, 0)
writer.close()
Expected result:
add_figure(tag, figure, global_step=None, close=True, walltime=None) [source]
Render matplotlib figure into an image and add it to summary. Note that this requires the matplotlib package. Parameters
tag (string) – Data identifier
figure (matplotlib.pyplot.figure) – Figure or a list of figures
global_step (int) – Global step value to record
close (bool) – Flag to automatically close the figure
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event
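add_figure ships with no example above; a minimal sketch, assuming matplotlib is installed (the tag and plotted data are illustrative):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display required
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

fig = plt.figure()
plt.plot([0, 1, 2, 3], [0, 1, 4, 9])  # a small example curve
writer = SummaryWriter()
writer.add_figure('my_figure', fig, global_step=0)  # close=True frees the figure afterwards
writer.close()
```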
add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None) [source]
Add video data to summary. Note that this requires the moviepy package. Parameters
tag (string) – Data identifier
vid_tensor (torch.Tensor) – Video data
global_step (int) – Global step value to record
fps (float or int) – Frames per second
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
vid_tensor: (N, T, C, H, W). The values should lie in [0, 255] for type uint8 or [0, 1] for type float.
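No example accompanies add_video; a minimal sketch (requires the moviepy package; the random data is purely illustrative):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

# One video of 16 frames, 3 channels, 32x32 pixels; float values must lie in [0, 1].
vid = torch.rand(1, 16, 3, 32, 32)
writer = SummaryWriter()
writer.add_video('my_video', vid, global_step=0, fps=4)
writer.close()
```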
add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None) [source]
Add audio data to summary. Parameters
tag (string) – Data identifier
snd_tensor (torch.Tensor) – Sound data
global_step (int) – Global step value to record
sample_rate (int) – sample rate in Hz
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
snd_tensor: (1, L). The values should lie between [-1, 1].
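A minimal add_audio sketch that writes one second of a 440 Hz sine tone (the tag name is illustrative):

```python
import math
import torch
from torch.utils.tensorboard import SummaryWriter

sample_rate = 44100  # Hz
t = torch.arange(sample_rate, dtype=torch.float32) / sample_rate  # 1 second of time steps
snd = torch.sin(2 * math.pi * 440.0 * t).unsqueeze(0)  # shape (1, L), values in [-1, 1]
writer = SummaryWriter()
writer.add_audio('tone_440hz', snd, global_step=0, sample_rate=sample_rate)
writer.close()
```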
add_text(tag, text_string, global_step=None, walltime=None) [source]
Add text data to summary. Parameters
tag (string) – Data identifier
text_string (string) – String to save
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: writer.add_text('lstm', 'This is an lstm', 0)
writer.add_text('rnn', 'This is an rnn', 10)
add_graph(model, input_to_model=None, verbose=False) [source]
Add graph data to summary. Parameters
model (torch.nn.Module) – Model to draw.
input_to_model (torch.Tensor or list of torch.Tensor) – A variable or a tuple of variables to be fed.
verbose (bool) – Whether to print graph structure in console.
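add_graph has no example above; a minimal sketch with a toy model (the architecture is illustrative — any nn.Module whose forward can be traced with the given input works):

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

# A toy feed-forward model; input_to_model must match what forward() expects.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 1))
dummy_input = torch.randn(1, 8)
writer = SummaryWriter()
writer.add_graph(model, dummy_input)
writer.close()
```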
add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None) [source]
Add embedding projector data to summary. Parameters
mat (torch.Tensor or numpy.array) – A matrix in which each row is the feature vector of a data point
metadata (list) – A list of labels; each element will be converted to a string
label_img (torch.Tensor) – Images corresponding to each data point
global_step (int) – Global step value to record
tag (string) – Name for the embedding Shape:
mat: (N, D), where N is the number of data points and D is the feature dimension. label_img: (N, C, H, W) Examples: import keyword
import torch
from torch.utils.tensorboard import SummaryWriter

meta = []
while len(meta) < 100:
    meta = meta + keyword.kwlist  # get some strings
meta = meta[:100]
for i, v in enumerate(meta):
    meta[i] = v + str(i)
label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i] *= i / 100.0
writer = SummaryWriter()  # needed for the add_embedding calls below
writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)
writer.add_embedding(torch.randn(100, 5), label_img=label_img)
writer.add_embedding(torch.randn(100, 5), metadata=meta)
add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None) [source]
Adds a precision-recall curve. Plotting a precision-recall curve lets you understand your model's performance under different threshold settings. With this function, you provide the ground truth labeling (T/F) and prediction confidence (usually the output of your model) for each target. The TensorBoard UI will let you choose the threshold interactively. Parameters
tag (string) – Data identifier
labels (torch.Tensor, numpy.array, or string/blobname) – Ground truth data. Binary label for each element.
predictions (torch.Tensor, numpy.array, or string/blobname) – The probability that an element is classified as true. Values should be in [0, 1].
global_step (int) – Global step value to record
num_thresholds (int) – Number of thresholds used to draw the curve.
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
labels = np.random.randint(2, size=100) # binary label
predictions = np.random.rand(100)
writer = SummaryWriter()
writer.add_pr_curve('pr_curve', labels, predictions, 0)
writer.close()
add_custom_scalars(layout) [source]
Creates a special chart by collecting chart tags in 'scalars'. Note that this function can only be called once for each SummaryWriter() object. Because it only provides metadata to tensorboard, the function can be called before or after the training loop. Parameters
layout (dict) – {categoryName: charts}, where charts is also a dictionary {chartName: ListOfProperties}. The first element in ListOfProperties is the chart's type (one of Multiline or Margin) and the second element should be a list containing the tags you have used in the add_scalar function, which will be collected into the new chart. Examples: layout = {'Taiwan':{'twse':['Multiline',['twse/0050', 'twse/2330']]},
'USA':{ 'dow':['Margin', ['dow/aaa', 'dow/bbb', 'dow/ccc']],
'nasdaq':['Margin', ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}
writer.add_custom_scalars(layout)
add_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None) [source]
Add meshes or 3D point clouds to TensorBoard. The visualization is based on Three.js, so it allows users to interact with the rendered object. Besides the basic definitions such as vertices and faces, users can further provide camera parameters, lighting conditions, etc. Please check https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene for advanced usage. Parameters
tag (string) – Data identifier
vertices (torch.Tensor) – List of the 3D coordinates of vertices.
colors (torch.Tensor) – Colors for each vertex
faces (torch.Tensor) – Indices of vertices within each triangle. (Optional)
config_dict – Dictionary with ThreeJS classes names and configuration.
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
vertices: (B, N, 3) (batch, number_of_vertices, channels). colors: (B, N, 3); the values should lie in [0, 255] for type uint8 or [0, 1] for type float. faces: (B, N, 3); the values should lie in [0, number_of_vertices] for type uint8. Examples: from torch.utils.tensorboard import SummaryWriter
import torch

vertices_tensor = torch.as_tensor([
    [1, 1, 1],
    [-1, -1, 1],
    [1, -1, -1],
    [-1, 1, -1],
], dtype=torch.float).unsqueeze(0)
colors_tensor = torch.as_tensor([
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
], dtype=torch.int).unsqueeze(0)
faces_tensor = torch.as_tensor([
    [0, 2, 3],
    [0, 3, 1],
    [0, 1, 2],
    [1, 3, 2],
], dtype=torch.int).unsqueeze(0)
writer = SummaryWriter()
writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)
writer.close()
add_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None) [source]
Add a set of hyperparameters to be compared in TensorBoard. Parameters
hparam_dict (dict) – Each key-value pair in the dictionary is the name of a hyperparameter and its corresponding value. The type of the value can be one of bool, string, float, int, or None.
metric_dict (dict) – Each key-value pair in the dictionary is the name of a metric and its corresponding value. Note that the key used here should be unique in the tensorboard record. Otherwise the value you added by add_scalar will be displayed in the hparam plugin. In most cases, this is unwanted.
hparam_domain_discrete – (Optional[Dict[str, List[Any]]]) A dictionary that contains names of the hyperparameters and all discrete values they can hold
run_name (str) – Name of the run, to be included as part of the logdir. If unspecified, the current timestamp is used. Examples: from torch.utils.tensorboard import SummaryWriter
with SummaryWriter() as w:
    for i in range(5):
        w.add_hparams({'lr': 0.1*i, 'bsize': i},
                      {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})
Expected result:
flush() [source]
Flushes the event file to disk. Call this method to make sure that all pending events have been written to disk.
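A short usage sketch for flush() during a training loop (the step count and tag are illustrative):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
for step in range(100):
    writer.add_scalar('loss', 1.0 / (step + 1), step)
    if step % 10 == 0:
        writer.flush()  # ensure pending events are written to disk
writer.close()  # final flush and release of the event file
```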
close() [source] | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter |
add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None) [source]
Add audio data to summary. Parameters
tag (string) – Data identifier
snd_tensor (torch.Tensor) – Sound data
global_step (int) – Global step value to record
sample_rate (int) – sample rate in Hz
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
snd_tensor: (1, L). The values should lie between [-1, 1]. | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_audio
add_custom_scalars(layout) [source]
Creates a special chart by collecting chart tags in 'scalars'. Note that this function can only be called once for each SummaryWriter() object. Because it only provides metadata to tensorboard, the function can be called before or after the training loop. Parameters
layout (dict) – {categoryName: charts}, where charts is also a dictionary {chartName: ListOfProperties}. The first element in ListOfProperties is the chart's type (one of Multiline or Margin) and the second element should be a list containing the tags you have used in the add_scalar function, which will be collected into the new chart. Examples: layout = {'Taiwan':{'twse':['Multiline',['twse/0050', 'twse/2330']]},
'USA':{ 'dow':['Margin', ['dow/aaa', 'dow/bbb', 'dow/ccc']],
'nasdaq':['Margin', ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}
writer.add_custom_scalars(layout) | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_custom_scalars |
add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None) [source]
Add embedding projector data to summary. Parameters
mat (torch.Tensor or numpy.array) – A matrix in which each row is the feature vector of a data point
metadata (list) – A list of labels; each element will be converted to a string
label_img (torch.Tensor) – Images corresponding to each data point
global_step (int) – Global step value to record
tag (string) – Name for the embedding Shape:
mat: (N, D), where N is the number of data points and D is the feature dimension. label_img: (N, C, H, W) Examples: import keyword
import torch
from torch.utils.tensorboard import SummaryWriter

meta = []
while len(meta) < 100:
    meta = meta + keyword.kwlist  # get some strings
meta = meta[:100]
for i, v in enumerate(meta):
    meta[i] = v + str(i)
label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i] *= i / 100.0
writer = SummaryWriter()  # needed for the add_embedding calls below
writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)
writer.add_embedding(torch.randn(100, 5), label_img=label_img)
writer.add_embedding(torch.randn(100, 5), metadata=meta) | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_embedding |
add_figure(tag, figure, global_step=None, close=True, walltime=None) [source]
Render matplotlib figure into an image and add it to summary. Note that this requires the matplotlib package. Parameters
tag (string) – Data identifier
figure (matplotlib.pyplot.figure) – Figure or a list of figures
global_step (int) – Global step value to record
close (bool) – Flag to automatically close the figure
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_figure
add_graph(model, input_to_model=None, verbose=False) [source]
Add graph data to summary. Parameters
model (torch.nn.Module) – Model to draw.
input_to_model (torch.Tensor or list of torch.Tensor) – A variable or a tuple of variables to be fed.
verbose (bool) – Whether to print graph structure in console. | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_graph
add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None) [source]
Add histogram to summary. Parameters
tag (string) – Data identifier
values (torch.Tensor, numpy.array, or string/blobname) – Values to build histogram
global_step (int) – Global step value to record
bins (string) – One of {'tensorflow', 'auto', 'fd', …}. This determines how the bins are made. You can find other options in: https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for i in range(10):
x = np.random.random(1000)
writer.add_histogram('distribution centers', x + i, i)
writer.close()
Expected result: | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_histogram |
add_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None) [source]
Add a set of hyperparameters to be compared in TensorBoard. Parameters
hparam_dict (dict) – Each key-value pair in the dictionary is the name of a hyperparameter and its corresponding value. The type of the value can be one of bool, string, float, int, or None.
metric_dict (dict) – Each key-value pair in the dictionary is the name of a metric and its corresponding value. Note that the key used here should be unique in the tensorboard record. Otherwise the value you added by add_scalar will be displayed in the hparam plugin. In most cases, this is unwanted.
hparam_domain_discrete – (Optional[Dict[str, List[Any]]]) A dictionary that contains names of the hyperparameters and all discrete values they can hold
run_name (str) – Name of the run, to be included as part of the logdir. If unspecified, the current timestamp is used. Examples: from torch.utils.tensorboard import SummaryWriter
with SummaryWriter() as w:
    for i in range(5):
        w.add_hparams({'lr': 0.1*i, 'bsize': i},
                      {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})
Expected result: | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_hparams |
add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW') [source]
Add image data to summary. Note that this requires the pillow package. Parameters
tag (string) – Data identifier
img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
img_tensor: Default is (3, H, W). You can use torchvision.utils.make_grid() to convert a batch of tensors into 3xHxW format, or call add_images and let us do the job. Tensors with shape (1, H, W), (H, W), or (H, W, 3) are also suitable as long as the corresponding dataformats argument is passed, e.g. CHW, HWC, HW. Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
img = np.zeros((3, 100, 100))
img[0] = np.arange(0, 10000).reshape(100, 100) / 10000
img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC = np.zeros((100, 100, 3))
img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000
writer = SummaryWriter()
writer.add_image('my_image', img, 0)
# If you have non-default dimension setting, set the dataformats argument.
writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')
writer.close()
Expected result: | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_image |
add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW') [source]
Add batched image data to summary. Note that this requires the pillow package. Parameters
tag (string) – Data identifier
img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event
dataformats (string) – Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc. Shape:
img_tensor: Default is (N, 3, H, W). If dataformats is specified, other shapes will be accepted, e.g. NCHW or NHWC. Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
img_batch = np.zeros((16, 3, 100, 100))
for i in range(16):
img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i
img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i
writer = SummaryWriter()
writer.add_images('my_image_batch', img_batch, 0)
writer.close()
Expected result: | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_images |
add_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None) [source]
Add meshes or 3D point clouds to TensorBoard. The visualization is based on Three.js, so it allows users to interact with the rendered object. Besides the basic definitions such as vertices and faces, users can further provide camera parameters, lighting conditions, etc. Please check https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene for advanced usage. Parameters
tag (string) – Data identifier
vertices (torch.Tensor) – List of the 3D coordinates of vertices.
colors (torch.Tensor) – Colors for each vertex
faces (torch.Tensor) – Indices of vertices within each triangle. (Optional)
config_dict – Dictionary with ThreeJS classes names and configuration.
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Shape:
vertices: (B, N, 3) (batch, number_of_vertices, channels). colors: (B, N, 3); the values should lie in [0, 255] for type uint8 or [0, 1] for type float. faces: (B, N, 3); the values should lie in [0, number_of_vertices] for type uint8. Examples: from torch.utils.tensorboard import SummaryWriter
import torch

vertices_tensor = torch.as_tensor([
    [1, 1, 1],
    [-1, -1, 1],
    [1, -1, -1],
    [-1, 1, -1],
], dtype=torch.float).unsqueeze(0)
colors_tensor = torch.as_tensor([
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
], dtype=torch.int).unsqueeze(0)
faces_tensor = torch.as_tensor([
    [0, 2, 3],
    [0, 3, 1],
    [0, 1, 2],
    [1, 3, 2],
], dtype=torch.int).unsqueeze(0)
writer = SummaryWriter()
writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)
writer.close() | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_mesh |
add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None) [source]
Adds a precision-recall curve. Plotting a precision-recall curve lets you understand your model's performance under different threshold settings. With this function, you provide the ground truth labeling (T/F) and prediction confidence (usually the output of your model) for each target. The TensorBoard UI will let you choose the threshold interactively. Parameters
tag (string) – Data identifier
labels (torch.Tensor, numpy.array, or string/blobname) – Ground truth data. Binary label for each element.
predictions (torch.Tensor, numpy.array, or string/blobname) – The probability that an element is classified as true. Values should be in [0, 1].
global_step (int) – Global step value to record
num_thresholds (int) – Number of thresholds used to draw the curve.
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
labels = np.random.randint(2, size=100) # binary label
predictions = np.random.rand(100)
writer = SummaryWriter()
writer.add_pr_curve('pr_curve', labels, predictions, 0)
writer.close() | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_pr_curve |
add_scalar(tag, scalar_value, global_step=None, walltime=None) [source]
Add scalar data to summary. Parameters
tag (string) – Data identifier
scalar_value (float or string/blobname) – Value to save
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
x = range(100)
for i in x:
writer.add_scalar('y=2x', i * 2, i)
writer.close()
Expected result: | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_scalar |
add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None) [source]
Adds many scalars to summary. Parameters
main_tag (string) – The parent name for the tags
tag_scalar_dict (dict) – Key-value pairs storing the tag and corresponding values
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()), seconds after epoch of event Examples: from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
r = 5
for i in range(100):
    writer.add_scalars('run_14h', {'xsinx': i*np.sin(i/r),
                                   'xcosx': i*np.cos(i/r),
                                   'tanx': np.tan(i/r)}, i)
writer.close()
# This call adds three values to the same scalar plot with the tag
# 'run_14h' in TensorBoard's scalar section.
Expected result: | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_scalars |
add_text(tag, text_string, global_step=None, walltime=None) [source]
Add text data to summary. Parameters
tag (string) – Data identifier
text_string (string) – String to save
global_step (int) – Global step value to record
walltime (float) – Optional override for the default walltime (time.time()): seconds after epoch of event. Examples:
writer.add_text('lstm', 'This is an lstm', 0)
writer.add_text('rnn', 'This is an rnn', 10) | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_text |
add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None) [source]
Add video data to summary. Note that this requires the moviepy package. Parameters
tag (string) – Data identifier
vid_tensor (torch.Tensor) – Video data
global_step (int) – Global step value to record
fps (float or int) – Frames per second
walltime (float) – Optional override for the default walltime (time.time()): seconds after epoch of event. Shape:
vid_tensor: (N, T, C, H, W). The values should lie in [0, 255] for type uint8 or [0, 1] for type float. | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_video |
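A quick shape check before logging can catch layout mistakes. The sketch below builds a dummy uint8 clip matching the shape spec above; the tag and sizes are arbitrary, and the writer calls are commented out because add_video needs the moviepy package:

```python
import torch

# Dummy batch of one clip: (N, T, C, H, W) = (1, 16, 3, 32, 32),
# uint8 values in [0, 255] as add_video expects.
vid_tensor = torch.randint(0, 256, (1, 16, 3, 32, 32), dtype=torch.uint8)
assert vid_tensor.shape == (1, 16, 3, 32, 32)

# Logging it (requires the moviepy package):
# from torch.utils.tensorboard import SummaryWriter
# writer = SummaryWriter()
# writer.add_video('random_clip', vid_tensor, global_step=0, fps=4)
# writer.close()
```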
close() [source] | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.close |
flush() [source]
Flushes the event file to disk. Call this method to make sure that all pending events have been written to disk. | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.flush |
__init__(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source]
Creates a SummaryWriter that will write out events and summaries to the event file. Parameters
log_dir (string) – Save directory location. Default is runs/CURRENT_DATETIME_HOSTNAME, which changes after each run. Use a hierarchical folder structure to compare between runs easily, e.g. pass in 'runs/exp1', 'runs/exp2', etc. for each new experiment to compare across them.
comment (string) – Comment log_dir suffix appended to the default log_dir. If log_dir is assigned, this argument has no effect.
purge_step (int) – When logging crashes at step T+X and restarts at step T, any events whose global_step is larger than or equal to T will be purged and hidden from TensorBoard. Note that crashed and resumed experiments should have the same log_dir.
max_queue (int) – Size of the queue for pending events and summaries before one of the 'add' calls forces a flush to disk. Default is ten items.
flush_secs (int) – How often, in seconds, to flush the pending events and summaries to disk. Default is every two minutes.
filename_suffix (string) – Suffix added to all event filenames in the log_dir directory. More details on filename construction in tensorboard.summary.writer.event_file_writer.EventFileWriter. Examples:
from torch.utils.tensorboard import SummaryWriter
# create a summary writer with automatically generated folder name.
writer = SummaryWriter()
# folder location: runs/May04_22-14-54_s-MacBook-Pro.local/
# create a summary writer using the specified folder name.
writer = SummaryWriter("my_experiment")
# folder location: my_experiment
# create a summary writer with comment appended.
writer = SummaryWriter(comment="LR_0.1_BATCH_16")
# folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/ | torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.__init__ |
torch.vander(x, N=None, increasing=False) → Tensor
Generates a Vandermonde matrix. The columns of the output matrix are elementwise powers of the input vector: x^(N-1), x^(N-2), ..., x^0. If increasing is True, the order of the columns is reversed: x^0, x^1, ..., x^(N-1). Such a matrix with a geometric progression in each row is named for Alexandre-Théophile Vandermonde. Parameters
x (Tensor) – 1-D input tensor.
N (int, optional) – Number of columns in the output. If N is not specified, a square array is returned (N = len(x)).
increasing (bool, optional) – Order of the powers of the columns. If True, the powers increase from left to right; if False (the default), they are reversed. Returns
Vandermonde matrix. If increasing is False, the first column is x^(N-1), the second x^(N-2), and so forth. If increasing is True, the columns are x^0, x^1, ..., x^(N-1). Return type
Tensor Example: >>> x = torch.tensor([1, 2, 3, 5])
>>> torch.vander(x)
tensor([[ 1, 1, 1, 1],
[ 8, 4, 2, 1],
[ 27, 9, 3, 1],
[125, 25, 5, 1]])
>>> torch.vander(x, N=3)
tensor([[ 1, 1, 1],
[ 4, 2, 1],
[ 9, 3, 1],
[25, 5, 1]])
>>> torch.vander(x, N=3, increasing=True)
tensor([[ 1, 1, 1],
[ 1, 2, 4],
[ 1, 3, 9],
[ 1, 5, 25]]) | torch.generated.torch.vander#torch.vander |
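The column layout described above can be verified directly against elementwise powers; this is just a sketch of the definition, not additional API:

```python
import torch

x = torch.tensor([1, 2, 3, 5])
N = x.numel()
V = torch.vander(x)  # default: decreasing powers

# Column j of the default output holds x ** (N - 1 - j).
for j in range(N):
    assert torch.equal(V[:, j], x ** (N - 1 - j))

# increasing=True is just the column-reversed layout.
assert torch.equal(torch.vander(x, increasing=True), V.flip(1))
```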
torch.var(input, unbiased=True) → Tensor
Returns the variance of all elements in the input tensor. If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) – the input tensor.
unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[-0.3425, -1.2636, -0.4864]])
>>> torch.var(a)
tensor(0.2455)
torch.var(input, dim, unbiased=True, keepdim=False, *, out=None) → Tensor
Returns the variance of each row of the input tensor in the given dimension dim. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
unbiased (bool) – whether to use the unbiased estimation or not
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3567, 1.7385, -1.3042, 0.7423],
[ 1.3436, -0.1015, -0.9834, -0.8438],
[ 0.6056, 0.1089, -0.3112, -1.4085],
[-0.7700, 0.6074, -0.1469, 0.7777]])
>>> torch.var(a, 1)
tensor([ 1.7444, 1.1363, 0.7356, 0.5112]) | torch.generated.torch.var#torch.var |
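The biased and unbiased estimators differ only by Bessel's correction: for n elements, the biased variance equals the unbiased one scaled by (n - 1) / n. A small sketch, with arbitrary sample values:

```python
import torch

a = torch.tensor([1.0, 2.0, 4.0, 7.0])
n = a.numel()

unbiased = torch.var(a)                # divides by n - 1 (Bessel's correction)
biased = torch.var(a, unbiased=False)  # divides by n

assert torch.isclose(biased, unbiased * (n - 1) / n)

# Manual check against the definition:
mean = a.mean()
assert torch.isclose(unbiased, ((a - mean) ** 2).sum() / (n - 1))
```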
torch.var_mean(input, unbiased=True) -> (Tensor, Tensor)
Returns the variance and mean of all elements in the input tensor. If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) – the input tensor.
unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[0.0146, 0.4258, 0.2211]])
>>> torch.var_mean(a)
(tensor(0.0423), tensor(0.2205))
torch.var_mean(input, dim, keepdim=False, unbiased=True) -> (Tensor, Tensor)
Returns the variance and mean of each row of the input tensor in the given dimension dim. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[-1.5650, 2.0415, -0.1024, -0.5790],
[ 0.2325, -2.6145, -1.6428, -0.3537],
[-0.2159, -1.1069, 1.2882, -1.3265],
[-0.6706, -1.5893, 0.6827, 1.6727]])
>>> torch.var_mean(a, 1)
(tensor([2.3174, 1.6403, 1.4092, 2.0791]), tensor([-0.0512, -1.0946, -0.3403, 0.0239])) | torch.generated.torch.var_mean#torch.var_mean |
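var_mean returns the same values as separate var and mean calls, including per-dimension reductions; computing both in one call simply avoids a second pass over the data. A sketch:

```python
import torch

a = torch.arange(16.0).reshape(4, 4)

v, m = torch.var_mean(a, dim=1)
assert torch.allclose(v, torch.var(a, dim=1))
assert torch.allclose(m, torch.mean(a, dim=1))
```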
torch.vdot(input, other, *, out=None) → Tensor
Computes the dot product of two 1D tensors. The vdot(a, b) function handles complex numbers differently than dot(a, b). If the first argument is complex, the complex conjugate of the first argument is used for the calculation of the dot product. Note: Unlike NumPy's vdot, torch.vdot intentionally only supports computing the dot product of two 1D tensors with the same number of elements. Parameters
input (Tensor) – first tensor in the dot product, must be 1D. Its conjugate is used if it's complex.
other (Tensor) – second tensor in the dot product, must be 1D. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.vdot(torch.tensor([2, 3]), torch.tensor([2, 1]))
tensor(7)
>>> a = torch.tensor((1 + 2j, 3 - 1j))
>>> b = torch.tensor((2 + 1j, 4 - 0j))
>>> torch.vdot(a, b)
tensor([16.+1.j])
>>> torch.vdot(b, a)
tensor([16.-1.j]) | torch.generated.torch.vdot#torch.vdot |
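The conjugation rule above is equivalent to an ordinary sum of products with an explicitly conjugated first argument, and swapping the arguments conjugates the result. A quick check with the same values as the example:

```python
import torch

a = torch.tensor([1 + 2j, 3 - 1j])
b = torch.tensor([2 + 1j, 4 + 0j])

# vdot conjugates its first argument before taking the dot product.
assert torch.isclose(torch.vdot(a, b), (a.conj() * b).sum())

# Swapping the arguments conjugates the result.
assert torch.isclose(torch.vdot(b, a), torch.vdot(a, b).conj())
```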
torch.view_as_complex(input) → Tensor
Returns a view of input as a complex tensor. For a real input tensor of size m1, m2, ..., mi, 2, this function returns a new complex tensor of size m1, m2, ..., mi, where the last dimension of the input tensor is expected to represent the real and imaginary components of complex numbers. Warning: view_as_complex() is only supported for tensors with torch.dtype torch.float64 and torch.float32. The input is expected to have the last dimension of size 2. In addition, the tensor must have a stride of 1 for its last dimension. The strides of all other dimensions must be even numbers. Parameters
input (Tensor) – the input tensor. Example:
>>> x = torch.randn(4, 2)
>>> x
tensor([[ 1.6116, -0.5772],
[-1.4606, -0.9120],
[ 0.0786, -1.7497],
[-0.6561, -1.6623]])
>>> torch.view_as_complex(x)
tensor([(1.6116-0.5772j), (-1.4606-0.9120j), (0.0786-1.7497j), (-0.6561-1.6623j)]) | torch.generated.torch.view_as_complex#torch.view_as_complex |
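Because the result is a view, writes through the complex tensor are visible in the original real tensor. A sketch:

```python
import torch

x = torch.zeros(3, 2)         # real float32, last dim of size 2
z = torch.view_as_complex(x)  # complex64 view of shape (3,)

z[0] = 1 + 2j
# The write lands in the shared storage of the real tensor.
assert x[0].tolist() == [1.0, 2.0]
assert z.shape == (3,)
```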
torch.view_as_real(input) → Tensor
Returns a view of input as a real tensor. For an input complex tensor of size m1, m2, ..., mi, this function returns a new real tensor of size m1, m2, ..., mi, 2, where the last dimension of size 2 represents the real and imaginary components of complex numbers. Warning: view_as_real() is only supported for tensors with complex dtypes. Parameters
input (Tensor) – the input tensor. Example:
>>> x = torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.4737-0.3839j), (-0.2098-0.6699j), (0.3470-0.9451j), (-0.5174-1.3136j)])
>>> torch.view_as_real(x)
tensor([[ 0.4737, -0.3839],
[-0.2098, -0.6699],
[ 0.3470, -0.9451],
[-0.5174, -1.3136]]) | torch.generated.torch.view_as_real#torch.view_as_real |
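view_as_real inverts view_as_complex: round-tripping recovers the original real tensor, and both views share storage, so no copy is made. A sketch:

```python
import torch

x = torch.randn(4, 2)
z = torch.view_as_complex(x)
y = torch.view_as_real(z)

# The round trip recovers the original layout without copying.
assert torch.equal(y, x)
assert y.data_ptr() == x.data_ptr()
```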