mode(dim=None, keepdim=False) -> (Tensor, LongTensor) See torch.mode()
torch.tensors#torch.Tensor.mode
moveaxis(source, destination) → Tensor See torch.moveaxis()
torch.tensors#torch.Tensor.moveaxis
movedim(source, destination) → Tensor See torch.movedim()
torch.tensors#torch.Tensor.movedim
msort() → Tensor See torch.msort()
torch.tensors#torch.Tensor.msort
mul(value) → Tensor See torch.mul().
torch.tensors#torch.Tensor.mul
multinomial(num_samples, replacement=False, *, generator=None) → Tensor See torch.multinomial()
torch.tensors#torch.Tensor.multinomial
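A minimal illustrative sketch (the weight values below are arbitrary examples, not from the original entry). When replacement=False, rows with zero weight are only drawn after all nonzero-weight rows have been exhausted:

```python
import torch

# Treat the tensor as (unnormalized) category weights and draw 2 indices
# without replacement; the two zero-weight categories cannot be chosen here.
weights = torch.tensor([0.1, 0.9, 0.0, 0.0])
samples = weights.multinomial(num_samples=2, replacement=False)
```

Here `samples` always contains indices 0 and 1, in a random order.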
multiply(value) → Tensor See torch.multiply().
torch.tensors#torch.Tensor.multiply
multiply_(value) → Tensor In-place version of multiply().
torch.tensors#torch.Tensor.multiply_
mul_(value) → Tensor In-place version of mul().
torch.tensors#torch.Tensor.mul_
mv(vec) → Tensor See torch.mv()
torch.tensors#torch.Tensor.mv
mvlgamma(p) → Tensor See torch.mvlgamma()
torch.tensors#torch.Tensor.mvlgamma
mvlgamma_(p) → Tensor In-place version of mvlgamma()
torch.tensors#torch.Tensor.mvlgamma_
names Stores names for each of this tensor's dimensions. names[idx] corresponds to the name of tensor dimension idx. Names are either a string if the dimension is named or None if the dimension is unnamed. Dimension names may contain alphanumeric characters and underscores. Furthermore, a dimension name must be a valid Python variable name (i.e., it does not start with an underscore). Tensors may not have two named dimensions with the same name. Warning The named tensor API is experimental and subject to change.
torch.named_tensor#torch.Tensor.names
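A minimal sketch of reading names (the dimension names 'N' and 'C' are arbitrary examples; the named tensor API is experimental, so behavior may vary by version):

```python
import torch

# A tensor created with names reports them; unnamed dims report as None.
t = torch.zeros(2, 3, names=('N', 'C'))
u = torch.zeros(2, 3)  # no names given
```

`t.names` is `('N', 'C')` while `u.names` is `(None, None)`.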
nanmedian(dim=None, keepdim=False) -> (Tensor, LongTensor) See torch.nanmedian()
torch.tensors#torch.Tensor.nanmedian
nanquantile(q, dim=None, keepdim=False) → Tensor See torch.nanquantile()
torch.tensors#torch.Tensor.nanquantile
nansum(dim=None, keepdim=False, dtype=None) → Tensor See torch.nansum()
torch.tensors#torch.Tensor.nansum
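A minimal sketch (example values are arbitrary): NaN entries are treated as zero when summing.

```python
import torch

# The NaN element is ignored; only 1.0 + 2.0 contributes.
t = torch.tensor([1.0, float('nan'), 2.0])
total = t.nansum()
```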
nan_to_num(nan=0.0, posinf=None, neginf=None) → Tensor See torch.nan_to_num().
torch.tensors#torch.Tensor.nan_to_num
nan_to_num_(nan=0.0, posinf=None, neginf=None) → Tensor In-place version of nan_to_num().
torch.tensors#torch.Tensor.nan_to_num_
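A minimal sketch of both variants (the replacement values 100.0 and -100.0 are arbitrary examples):

```python
import torch

# Replace NaN and infinities with chosen finite values.
t = torch.tensor([float('nan'), float('inf'), -float('inf'), 1.0])
out = t.nan_to_num(nan=0.0, posinf=100.0, neginf=-100.0)  # out-of-place
t.nan_to_num_(nan=0.0, posinf=100.0, neginf=-100.0)       # mutates t
```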
narrow(dimension, start, length) → Tensor See torch.narrow() Example: >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> x.narrow(0, 0, 2) tensor([[ 1, 2, 3], [ 4, 5, 6]]) >>> x.narrow(1, 1, 2) tensor([[ 2, 3], [ 5, 6], [ 8, 9]])
torch.tensors#torch.Tensor.narrow
narrow_copy(dimension, start, length) → Tensor Same as Tensor.narrow() except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling `narrow_copy` with `dimension > self.sparse_dim()` will return a copy with the relevant dense dimension narrowed, and `self.shape` updated accordingly.
torch.tensors#torch.Tensor.narrow_copy
ndim Alias for dim()
torch.tensors#torch.Tensor.ndim
ndimension() → int Alias for dim()
torch.tensors#torch.Tensor.ndimension
ne(other) → Tensor See torch.ne().
torch.tensors#torch.Tensor.ne
neg() → Tensor See torch.neg()
torch.tensors#torch.Tensor.neg
negative() → Tensor See torch.negative()
torch.tensors#torch.Tensor.negative
negative_() → Tensor In-place version of negative()
torch.tensors#torch.Tensor.negative_
neg_() → Tensor In-place version of neg()
torch.tensors#torch.Tensor.neg_
nelement() → int Alias for numel()
torch.tensors#torch.Tensor.nelement
new_empty(size, dtype=None, device=None, requires_grad=False) → Tensor Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor. device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones(()) >>> tensor.new_empty((2, 3)) tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30], [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
torch.tensors#torch.Tensor.new_empty
new_full(size, fill_value, dtype=None, device=None, requires_grad=False) → Tensor Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters fill_value (scalar) – the number to fill the output tensor with. dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor. device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones((2,), dtype=torch.float64) >>> tensor.new_full((3, 4), 3.141592) tensor([[ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
torch.tensors#torch.Tensor.new_full
new_ones(size, dtype=None, device=None, requires_grad=False) → Tensor Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor. dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor. device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.tensor((), dtype=torch.int32) >>> tensor.new_ones((2, 3)) tensor([[ 1, 1, 1], [ 1, 1, 1]], dtype=torch.int32)
torch.tensors#torch.Tensor.new_ones
new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Warning new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a numpy array and want to avoid a copy, use torch.from_numpy(). Warning When data is a tensor x, new_tensor() reads out 'the data' from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended. Parameters data (array_like) – The returned Tensor copies data. dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor. device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones((2,), dtype=torch.int8) >>> data = [[0, 1], [2, 3]] >>> tensor.new_tensor(data) tensor([[ 0, 1], [ 2, 3]], dtype=torch.int8)
torch.tensors#torch.Tensor.new_tensor
new_zeros(size, dtype=None, device=None, requires_grad=False) → Tensor Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor. dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor. device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.tensor((), dtype=torch.float64) >>> tensor.new_zeros((2, 3)) tensor([[ 0., 0., 0.], [ 0., 0., 0.]], dtype=torch.float64)
torch.tensors#torch.Tensor.new_zeros
nextafter(other) → Tensor See torch.nextafter()
torch.tensors#torch.Tensor.nextafter
nextafter_(other) → Tensor In-place version of nextafter()
torch.tensors#torch.Tensor.nextafter_
ne_(other) → Tensor In-place version of ne().
torch.tensors#torch.Tensor.ne_
nonzero() → LongTensor See torch.nonzero()
torch.tensors#torch.Tensor.nonzero
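A minimal sketch (example values are arbitrary): nonzero() returns an (n, ndim) LongTensor holding the indices of nonzero entries.

```python
import torch

# For a 1-D tensor, each row of the result is a single index.
t = torch.tensor([0, 1, 0, 2])
idx = t.nonzero()
```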
norm(p='fro', dim=None, keepdim=False, dtype=None) [source] See torch.norm()
torch.tensors#torch.Tensor.norm
normal_(mean=0, std=1, *, generator=None) → Tensor Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
torch.tensors#torch.Tensor.normal_
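A minimal sketch (the sample size is an arbitrary choice, and the sample statistics below are only approximately mean 0 and std 1 for a finite sample):

```python
import torch

# Fill a tensor in place with draws from N(0, 1).
t = torch.empty(10000).normal_(mean=0.0, std=1.0)
```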
not_equal(other) → Tensor See torch.not_equal().
torch.tensors#torch.Tensor.not_equal
not_equal_(other) → Tensor In-place version of not_equal().
torch.tensors#torch.Tensor.not_equal_
numel() → int See torch.numel()
torch.tensors#torch.Tensor.numel
numpy() → numpy.ndarray Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa.
torch.tensors#torch.Tensor.numpy
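A minimal sketch of the shared-storage behavior (example values are arbitrary):

```python
import torch

# The ndarray shares storage with the tensor, so an in-place update
# to the tensor is visible through the ndarray.
t = torch.ones(3)
a = t.numpy()
t.add_(1)  # a now reflects the update
```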
orgqr(input2) → Tensor See torch.orgqr()
torch.tensors#torch.Tensor.orgqr
ormqr(input2, input3, left=True, transpose=False) → Tensor See torch.ormqr()
torch.tensors#torch.Tensor.ormqr
outer(vec2) → Tensor See torch.outer().
torch.tensors#torch.Tensor.outer
permute(*dims) → Tensor Returns a view of the original tensor with its dimensions permuted. Parameters *dims (int...) – The desired ordering of dimensions Example >>> x = torch.randn(2, 3, 5) >>> x.size() torch.Size([2, 3, 5]) >>> x.permute(2, 0, 1).size() torch.Size([5, 2, 3])
torch.tensors#torch.Tensor.permute
pinverse() → Tensor See torch.pinverse()
torch.tensors#torch.Tensor.pinverse
pin_memory() → Tensor Copies the tensor to pinned memory, if it's not already pinned.
torch.tensors#torch.Tensor.pin_memory
polygamma(n) → Tensor See torch.polygamma()
torch.tensors#torch.Tensor.polygamma
polygamma_(n) → Tensor In-place version of polygamma()
torch.tensors#torch.Tensor.polygamma_
pow(exponent) → Tensor See torch.pow()
torch.tensors#torch.Tensor.pow
pow_(exponent) → Tensor In-place version of pow()
torch.tensors#torch.Tensor.pow_
prod(dim=None, keepdim=False, dtype=None) → Tensor See torch.prod()
torch.tensors#torch.Tensor.prod
put_(indices, tensor, accumulate=False) → Tensor Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor. If accumulate is True, the elements in tensor are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements. Parameters indices (LongTensor) – the indices into self tensor (Tensor) – the tensor containing values to copy from accumulate (bool) – whether to accumulate into self Example: >>> src = torch.tensor([[4, 3, 5], ... [6, 7, 8]]) >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10])) tensor([[ 4, 9, 5], [ 10, 7, 8]])
torch.tensors#torch.Tensor.put_
qr(some=True) -> (Tensor, Tensor) See torch.qr()
torch.tensors#torch.Tensor.qr
qscheme() → torch.qscheme Returns the quantization scheme of a given QTensor.
torch.tensors#torch.Tensor.qscheme
quantile(q, dim=None, keepdim=False) → Tensor See torch.quantile()
torch.tensors#torch.Tensor.quantile
q_per_channel_axis() → int Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
torch.tensors#torch.Tensor.q_per_channel_axis
q_per_channel_scales() → Tensor Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
torch.tensors#torch.Tensor.q_per_channel_scales
q_per_channel_zero_points() → Tensor Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
torch.tensors#torch.Tensor.q_per_channel_zero_points
q_scale() → float Given a Tensor quantized by linear (affine) quantization, returns the scale of the underlying quantizer().
torch.tensors#torch.Tensor.q_scale
q_zero_point() → int Given a Tensor quantized by linear (affine) quantization, returns the zero_point of the underlying quantizer().
torch.tensors#torch.Tensor.q_zero_point
rad2deg() → Tensor See torch.rad2deg()
torch.tensors#torch.Tensor.rad2deg
random_(from=0, to=None, *, generator=None) → Tensor Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor's data type. However, for floating point types, if unspecified, range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch.tensor(1, dtype=torch.double).random_() will be uniform in [0, 2^53].
torch.tensors#torch.Tensor.random_
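A minimal sketch (the bounds 0 and 10 and the tensor size are arbitrary examples). The interval is half-open on the upper end, i.e. [from, to - 1] inclusive:

```python
import torch

# Fill with integers drawn uniformly from {0, 1, ..., 9}.
t = torch.empty(100, dtype=torch.int64).random_(0, 10)
```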
ravel() → Tensor See torch.ravel()
torch.tensors#torch.Tensor.ravel
real Returns a new tensor containing real values of the self tensor. The returned tensor and self share the same underlying storage. Warning real() is only supported for tensors with complex dtypes. Example: >>> x = torch.randn(4, dtype=torch.cfloat) >>> x tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)]) >>> x.real tensor([ 0.3100, -0.5445, -1.6492, -0.0638])
torch.tensors#torch.Tensor.real
reciprocal() → Tensor See torch.reciprocal()
torch.tensors#torch.Tensor.reciprocal
reciprocal_() → Tensor In-place version of reciprocal()
torch.tensors#torch.Tensor.reciprocal_
record_stream(stream) Ensures that the tensor memory is not reused for another tensor until all current work queued on stream is complete. Note The caching allocator is aware of only the stream where a tensor was allocated. Because of this, it already correctly manages the life cycle of tensors on only one stream. But if a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor.
torch.tensors#torch.Tensor.record_stream
refine_names(*names) [source] Refines the dimension names of self according to names. Refining is a special case of renaming that "lifts" unnamed dimensions. A None dim can be refined to have any name; a named dim can only be refined to have the same name. Because named tensors can coexist with unnamed tensors, refining names gives a nice way to write named-tensor-aware code that works with both named and unnamed tensors. names may contain up to one Ellipsis (...). The Ellipsis is expanded greedily; it is expanded in-place to fill names to the same length as self.dim() using names from the corresponding indices of self.names. Python 2 does not support Ellipsis but one may use a string literal instead ('...'). Parameters names (iterable of str) – The desired names of the output tensor. May contain up to one Ellipsis. Examples: >>> imgs = torch.randn(32, 3, 128, 128) >>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W') >>> named_imgs.names ('N', 'C', 'H', 'W') >>> tensor = torch.randn(2, 3, 5, 7, 11) >>> tensor = tensor.refine_names('A', ..., 'B', 'C') >>> tensor.names ('A', None, None, 'B', 'C') Warning The named tensor API is experimental and subject to change.
torch.named_tensor#torch.Tensor.refine_names
register_hook(hook) [source] Registers a backward hook. The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature: hook(grad) -> Tensor or None The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad. This function returns a handle with a method handle.remove() that removes the hook. Example: >>> v = torch.tensor([0., 0., 0.], requires_grad=True) >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient >>> v.backward(torch.tensor([1., 2., 3.])) >>> v.grad tensor([2., 4., 6.]) >>> h.remove() # removes the hook
torch.autograd#torch.Tensor.register_hook
remainder(divisor) → Tensor See torch.remainder()
torch.tensors#torch.Tensor.remainder
remainder_(divisor) → Tensor In-place version of remainder()
torch.tensors#torch.Tensor.remainder_
rename(*names, **rename_map) [source] Renames dimension names of self. There are two main usages: self.rename(**rename_map) returns a view on tensor that has dims renamed as specified in the mapping rename_map. self.rename(*names) returns a view on tensor, renaming all dimensions positionally using names. Use self.rename(None) to drop names on a tensor. One cannot specify both positional args names and keyword args rename_map. Examples: >>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W')) >>> renamed_imgs = imgs.rename(N='batch', C='channels') >>> renamed_imgs.names ('batch', 'channels', 'H', 'W') >>> renamed_imgs = imgs.rename(None) >>> renamed_imgs.names (None,) >>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width') >>> renamed_imgs.names ('batch', 'channel', 'height', 'width') Warning The named tensor API is experimental and subject to change.
torch.named_tensor#torch.Tensor.rename
rename_(*names, **rename_map) [source] In-place version of rename().
torch.named_tensor#torch.Tensor.rename_
renorm(p, dim, maxnorm) → Tensor See torch.renorm()
torch.tensors#torch.Tensor.renorm
renorm_(p, dim, maxnorm) → Tensor In-place version of renorm()
torch.tensors#torch.Tensor.renorm_
repeat(*sizes) → Tensor Repeats this tensor along the specified dimensions. Unlike expand(), this function copies the tensor's data. Warning repeat() behaves differently from numpy.repeat, but is more similar to numpy.tile. For the operator similar to numpy.repeat, see torch.repeat_interleave(). Parameters sizes (torch.Size or int...) – The number of times to repeat this tensor along each dimension Example: >>> x = torch.tensor([1, 2, 3]) >>> x.repeat(4, 2) tensor([[ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3]]) >>> x.repeat(4, 2, 1).size() torch.Size([4, 2, 3])
torch.tensors#torch.Tensor.repeat
repeat_interleave(repeats, dim=None) → Tensor See torch.repeat_interleave().
torch.tensors#torch.Tensor.repeat_interleave
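A minimal sketch contrasting this with repeat() (example values are arbitrary): each element is repeated in place, numpy.repeat-style, rather than tiling the whole tensor.

```python
import torch

# Each element appears twice, in order.
t = torch.tensor([1, 2, 3])
out = t.repeat_interleave(2)
```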
requires_grad Is True if gradients need to be computed for this Tensor, False otherwise. Note The fact that gradients need to be computed for a Tensor does not mean that the grad attribute will be populated; see is_leaf for more details.
torch.autograd#torch.Tensor.requires_grad
requires_grad_(requires_grad=True) → Tensor Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor. requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor. Parameters requires_grad (bool) – If autograd should record operations on this tensor. Default: True. Example: >>> # Let's say we want to preprocess some saved weights and use >>> # the result as new weights. >>> saved_weights = [0.1, 0.2, 0.3, 0.25] >>> loaded_weights = torch.tensor(saved_weights) >>> weights = preprocess(loaded_weights) # some function >>> weights tensor([-0.5503, 0.4926, -2.1158, -0.8303]) >>> # Now, start to record operations done to weights >>> weights.requires_grad_() >>> out = weights.pow(2).sum() >>> out.backward() >>> weights.grad tensor([-1.1007, 0.9853, -4.2316, -1.6606])
torch.tensors#torch.Tensor.requires_grad_
reshape(*shape) → Tensor Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view. See torch.reshape() Parameters shape (tuple of ints or int...) – the desired shape
torch.tensors#torch.Tensor.reshape
reshape_as(other) → Tensor Returns this tensor as the same shape as other. self.reshape_as(other) is equivalent to self.reshape(other.sizes()). This method returns a view if other.sizes() is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view. Please see reshape() for more information about reshape. Parameters other (torch.Tensor) – The result tensor has the same shape as other.
torch.tensors#torch.Tensor.reshape_as
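A minimal sketch (example shapes are arbitrary): reshape_as() is shorthand for reshaping to another tensor's size.

```python
import torch

# Reshape a flat tensor to match the 2x3 shape of another tensor.
a = torch.arange(6)
b = torch.zeros(2, 3)
c = a.reshape_as(b)
```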
resize_(*sizes, memory_format=torch.contiguous_format) → Tensor Resizes self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized. Warning This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_(). Parameters sizes (torch.Size or int...) – the desired size memory_format (torch.memory_format, optional) – the desired memory format of Tensor. Default: torch.contiguous_format. Note that memory format of self is going to be unaffected if self.size() matches sizes. Example: >>> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) >>> x.resize_(2, 2) tensor([[ 1, 2], [ 3, 4]])
torch.tensors#torch.Tensor.resize_
resize_as_(tensor, memory_format=torch.contiguous_format) → Tensor Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self.resize_(tensor.size()). Parameters memory_format (torch.memory_format, optional) – the desired memory format of Tensor. Default: torch.contiguous_format. Note that memory format of self is going to be unaffected if self.size() matches tensor.size().
torch.tensors#torch.Tensor.resize_as_
retain_grad() [source] Enables .grad attribute for non-leaf Tensors.
torch.autograd#torch.Tensor.retain_grad
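A minimal sketch (the values 2.0 and 3 are arbitrary examples): without retain_grad(), the non-leaf intermediate `y` would not have its .grad populated after backward().

```python
import torch

x = torch.tensor(2.0, requires_grad=True)  # leaf
y = x * 3                                  # non-leaf intermediate, y = 6
y.retain_grad()                            # ask autograd to keep y.grad
z = y * y                                  # z = 36
z.backward()
# dz/dy = 2*y = 12, dz/dx = 2*y * 3 = 36
```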
roll(shifts, dims) → Tensor See torch.roll()
torch.tensors#torch.Tensor.roll
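A minimal sketch (example values are arbitrary): when dims is not given, the tensor is flattened, rolled, then restored to its original shape.

```python
import torch

# Shift every element one position to the right, wrapping around.
t = torch.tensor([1, 2, 3, 4])
out = t.roll(1)
```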
rot90(k, dims) → Tensor See torch.rot90()
torch.tensors#torch.Tensor.rot90
round() → Tensor See torch.round()
torch.tensors#torch.Tensor.round
round_() → Tensor In-place version of round()
torch.tensors#torch.Tensor.round_
rsqrt() → Tensor See torch.rsqrt()
torch.tensors#torch.Tensor.rsqrt
rsqrt_() → Tensor In-place version of rsqrt()
torch.tensors#torch.Tensor.rsqrt_
scatter(dim, index, src) → Tensor Out-of-place version of torch.Tensor.scatter_()
torch.tensors#torch.Tensor.scatter
scatter_(dim, index, src, reduce=None) → Tensor Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. For a 3-D tensor, self is updated as: self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2 This is the reverse operation of the manner described in gather(). self, index and src (if it is a Tensor) should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim. Note that index and src do not broadcast. Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive. Warning When indices are not unique, the behavior is non-deterministic (one of the values from src will be picked arbitrarily) and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)! Note The backward pass is implemented only for src.shape == index.shape. Additionally accepts an optional reduce argument that allows specification of an optional reduction operation, which is applied to all values in the tensor src into self at the indices specified in the index. For each value in src, the reduction operation is applied to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
Given a 3-D tensor and reduction using the multiplication operation, self is updated as: self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2 Reducing with the addition operation is the same as using scatter_add_(). Parameters dim (int) – the axis along which to index index (LongTensor) – the indices of elements to scatter, can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged. src (Tensor or float) – the source element(s) to scatter. reduce (str, optional) – reduction operation to apply, can be either 'add' or 'multiply'. Example: >>> src = torch.arange(1, 11).reshape((2, 5)) >>> src tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]) >>> index = torch.tensor([[0, 1, 2, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src) tensor([[1, 0, 0, 4, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0]]) >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src) tensor([[1, 2, 3, 0, 0], [6, 7, 0, 0, 8], [0, 0, 0, 0, 0]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='multiply') tensor([[2.0000, 2.0000, 2.4600, 2.0000], [2.0000, 2.0000, 2.0000, 2.4600]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='add') tensor([[2.0000, 2.0000, 3.2300, 2.0000], [2.0000, 2.0000, 2.0000, 3.2300]])
torch.tensors#torch.Tensor.scatter_
scatter_add(dim, index, src) → Tensor Out-of-place version of torch.Tensor.scatter_add_()
torch.tensors#torch.Tensor.scatter_add
scatter_add_(dim, index, src) → Tensor Adds all values from the tensor src into self at the indices specified in the index tensor in a similar fashion as scatter_(). For each value in src, it is added to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. For a 3-D tensor, self is updated as: self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2 self, index and src should have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim. Note that index and src do not broadcast. Note This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information. Note The backward pass is implemented only for src.shape == index.shape. Parameters dim (int) – the axis along which to index index (LongTensor) – the indices of elements to scatter and add, can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged. src (Tensor) – the source elements to scatter and add Example: >>> src = torch.ones((2, 5)) >>> index = torch.tensor([[0, 1, 2, 0, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[1., 0., 0., 1., 1.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.]]) >>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[2., 0., 0., 1., 1.], [0., 2., 0., 0., 0.], [0., 0., 2., 1., 1.]])
torch.tensors#torch.Tensor.scatter_add_
select(dim, index) → Tensor Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed. Parameters dim (int) – the dimension to slice index (int) – the index to select with Note select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
torch.tensors#torch.Tensor.select
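A minimal sketch (example shape is arbitrary): select(0, 1) yields the same view as x[1], with the selected dimension removed.

```python
import torch

x = torch.arange(12).reshape(3, 4)
row = x.select(0, 1)  # second row, shape (4,)
```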
set_(source=None, storage_offset=0, size=None, stride=None) → Tensor Sets the underlying storage, size, and strides. If source is a tensor, self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other. If source is a Storage, the method sets the underlying storage, offset, size, and stride. Parameters source (Tensor or Storage) – the tensor or storage to use storage_offset (int, optional) – the offset in the storage size (torch.Size, optional) – the desired size. Defaults to the size of the source. stride (tuple, optional) – the desired stride. Defaults to C-contiguous strides.
torch.tensors#torch.Tensor.set_
sgn() → Tensor See torch.sgn()
torch.tensors#torch.Tensor.sgn