torch.Tensor A torch.Tensor is a multi-dimensional matrix containing elements of a single data type. Torch defines 10 tensor types with CPU and GPU variants which are as follows:
Data type dtype CPU tensor GPU tensor
32-bit floating point torch.float32 or torch.float torch.FloatTensor torch.cuda.FloatTensor
64-bit floating point torch.float64 or torch.double torch.DoubleTensor torch.cuda.DoubleTensor
16-bit floating point 1 torch.float16 or torch.half torch.HalfTensor torch.cuda.HalfTensor
16-bit floating point 2 torch.bfloat16 torch.BFloat16Tensor torch.cuda.BFloat16Tensor
32-bit complex torch.complex32
64-bit complex torch.complex64 or torch.cfloat
128-bit complex torch.complex128 or torch.cdouble
8-bit integer (unsigned) torch.uint8 torch.ByteTensor torch.cuda.ByteTensor
8-bit integer (signed) torch.int8 torch.CharTensor torch.cuda.CharTensor
16-bit integer (signed) torch.int16 or torch.short torch.ShortTensor torch.cuda.ShortTensor
32-bit integer (signed) torch.int32 or torch.int torch.IntTensor torch.cuda.IntTensor
64-bit integer (signed) torch.int64 or torch.long torch.LongTensor torch.cuda.LongTensor
Boolean torch.bool torch.BoolTensor torch.cuda.BoolTensor
1
Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.
2
Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32. torch.Tensor is an alias for the default tensor type (torch.FloatTensor). A tensor can be constructed from a Python list or sequence using the torch.tensor() constructor: >>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
[ 1.0000, -1.0000]])
>>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
Warning torch.tensor() always copies data. If you have a Tensor data and just want to change its requires_grad flag, use requires_grad_() or detach() to avoid a copy. If you have a numpy array and want to avoid a copy, use torch.as_tensor(). A tensor of a specific data type can be constructed by passing a torch.dtype and/or a torch.device to a constructor or tensor creation op: >>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0, 0, 0, 0],
[ 0, 0, 0, 0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000, 1.0000, 1.0000, 1.0000],
[ 1.0000, 1.0000, 1.0000, 1.0000]], dtype=torch.float64, device='cuda:0')
The contents of a tensor can be accessed and modified using Python's indexing and slicing notation: >>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1, 8, 3],
[ 4, 5, 6]])
Use torch.Tensor.item() to get a Python number from a tensor containing a single value: >>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5
A tensor can be created with requires_grad=True so that torch.autograd records operations on it for automatic differentiation. >>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
[ 2.0000, 2.0000]])
Each tensor has an associated torch.Storage, which holds its data. The tensor class also provides a multi-dimensional, strided view of a storage and defines numeric operations on it. Note For more information on tensor views, see Tensor Views. Note For more information on the torch.dtype, torch.device, and torch.layout attributes of a torch.Tensor, see Tensor Attributes. Note Methods which mutate a tensor are marked with an underscore suffix. For example, torch.FloatTensor.abs_() computes the absolute value in-place and returns the modified tensor, while torch.FloatTensor.abs() computes the result in a new tensor. Note To change an existing tensor's torch.device and/or torch.dtype, consider using the to() method on the tensor. Warning The current implementation of torch.Tensor introduces memory overhead, so it might lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider using one large structure.
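To make the in-place naming convention and to() concrete, a minimal illustration (standard torch only; outputs are deterministic for these inputs): >>> t = torch.tensor([-1., 2.])
>>> t.abs()    # out-of-place: allocates a new tensor, t is unchanged
tensor([1., 2.])
>>> t.abs_()   # in-place: mutates t and returns t itself
tensor([1., 2.])
>>> t.to(torch.int32)   # to() returns a tensor with the requested dtype
tensor([1, 2], dtype=torch.int32)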
class torch.Tensor
There are a few main ways to create a tensor, depending on your use case. To create a tensor with pre-existing data, use torch.tensor(). To create a tensor with specific size, use torch.* tensor creation ops (see Creation Ops). To create a tensor with the same size (and similar types) as another tensor, use torch.*_like tensor creation ops (see Creation Ops). To create a tensor with similar type but different size as another tensor, use tensor.new_* creation ops.
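As one possible sketch contrasting the four creation routes (torch.zeros / torch.zeros_like / new_ones are just representative picks): >>> a = torch.tensor([1., 2.])   # from pre-existing data
>>> b = torch.zeros(2, 3)        # specific size (creation op)
>>> c = torch.zeros_like(a)      # same size and dtype as another tensor
>>> d = a.new_ones(5)            # same dtype as a, different size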
new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Warning new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a numpy array and want to avoid a copy, use torch.from_numpy(). Warning When data is a tensor x, new_tensor() reads out "the data" from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended. Parameters
data (array_like) β The returned Tensor copies data.
dtype (torch.dtype, optional) β the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) β the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) β If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones((2,), dtype=torch.int8)
>>> data = [[0, 1], [2, 3]]
>>> tensor.new_tensor(data)
tensor([[ 0, 1],
[ 2, 3]], dtype=torch.int8)
new_full(size, fill_value, dtype=None, device=None, requires_grad=False) β Tensor
Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
fill_value (scalar) β the number to fill the output tensor with.
dtype (torch.dtype, optional) β the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) β the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) β If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones((2,), dtype=torch.float64)
>>> tensor.new_full((3, 4), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
new_empty(size, dtype=None, device=None, requires_grad=False) β Tensor
Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
dtype (torch.dtype, optional) β the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) β the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) β If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones(())
>>> tensor.new_empty((2, 3))
tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30],
[ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
new_ones(size, dtype=None, device=None, requires_grad=False) β Tensor
Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
size (int...) β a list, tuple, or torch.Size of integers defining the shape of the output tensor.
dtype (torch.dtype, optional) β the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) β the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) β If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.tensor((), dtype=torch.int32)
>>> tensor.new_ones((2, 3))
tensor([[ 1, 1, 1],
[ 1, 1, 1]], dtype=torch.int32)
new_zeros(size, dtype=None, device=None, requires_grad=False) β Tensor
Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
size (int...) β a list, tuple, or torch.Size of integers defining the shape of the output tensor.
dtype (torch.dtype, optional) β the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) β the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) β If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.tensor((), dtype=torch.float64)
>>> tensor.new_zeros((2, 3))
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]], dtype=torch.float64)
is_cuda
Is True if the Tensor is stored on the GPU, False otherwise.
is_quantized
Is True if the Tensor is quantized, False otherwise.
is_meta
Is True if the Tensor is a meta tensor, False otherwise. Meta tensors are like normal tensors, but they carry no data.
device
Is the torch.device where this Tensor is.
grad
This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self. The attribute will then contain the gradients computed and future calls to backward() will accumulate (add) gradients into it.
ndim
Alias for dim()
T
Is this Tensor with its dimensions reversed. If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, ..., 0).
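For example, with a 2-D tensor (so x.T is x.permute(1, 0)): >>> x = torch.arange(6).reshape(2, 3)
>>> x.T
tensor([[0, 3],
        [1, 4],
        [2, 5]])
>>> torch.equal(x.T, x.permute(1, 0))
True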
real
Returns a new tensor containing real values of the self tensor. The returned tensor and self share the same underlying storage. Warning real() is only supported for tensors with complex dtypes. Example::
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.real
tensor([ 0.3100, -0.5445, -1.6492, -0.0638])
imag
Returns a new tensor containing imaginary values of the self tensor. The returned tensor and self share the same underlying storage. Warning imag() is only supported for tensors with complex dtypes. Example::
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.imag
tensor([ 0.3553, -0.7896, -0.0633, -0.8119])
abs() β Tensor
See torch.abs()
abs_() β Tensor
In-place version of abs()
absolute() β Tensor
Alias for abs()
absolute_() β Tensor
In-place version of absolute(). Alias for abs_()
acos() β Tensor
See torch.acos()
acos_() β Tensor
In-place version of acos()
arccos() β Tensor
See torch.arccos()
arccos_() β Tensor
In-place version of arccos()
add(other, *, alpha=1) → Tensor
Add a scalar or tensor to self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used. When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor. See torch.add()
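A short illustration of the alpha scaling (computes a + alpha * other): >>> a = torch.tensor([1., 2.])
>>> a.add(torch.tensor([10., 20.]), alpha=2)
tensor([21., 42.])
>>> a.add(1)   # scalar operand
tensor([2., 3.])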
add_(other, *, alpha=1) β Tensor
In-place version of add()
addbmm(batch1, batch2, *, beta=1, alpha=1) β Tensor
See torch.addbmm()
addbmm_(batch1, batch2, *, beta=1, alpha=1) β Tensor
In-place version of addbmm()
addcdiv(tensor1, tensor2, *, value=1) β Tensor
See torch.addcdiv()
addcdiv_(tensor1, tensor2, *, value=1) β Tensor
In-place version of addcdiv()
addcmul(tensor1, tensor2, *, value=1) β Tensor
See torch.addcmul()
addcmul_(tensor1, tensor2, *, value=1) β Tensor
In-place version of addcmul()
addmm(mat1, mat2, *, beta=1, alpha=1) β Tensor
See torch.addmm()
addmm_(mat1, mat2, *, beta=1, alpha=1) β Tensor
In-place version of addmm()
sspaddmm(mat1, mat2, *, beta=1, alpha=1) β Tensor
See torch.sspaddmm()
addmv(mat, vec, *, beta=1, alpha=1) β Tensor
See torch.addmv()
addmv_(mat, vec, *, beta=1, alpha=1) β Tensor
In-place version of addmv()
addr(vec1, vec2, *, beta=1, alpha=1) β Tensor
See torch.addr()
addr_(vec1, vec2, *, beta=1, alpha=1) β Tensor
In-place version of addr()
allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) β Tensor
See torch.allclose()
amax(dim=None, keepdim=False) β Tensor
See torch.amax()
amin(dim=None, keepdim=False) β Tensor
See torch.amin()
angle() β Tensor
See torch.angle()
apply_(callable) → Tensor
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable. Note This function only works with CPU tensors and should not be used in code sections that require high performance.
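A minimal CPU-only sketch (the callable receives and returns Python numbers): >>> t = torch.tensor([1., 2., 3.])
>>> t.apply_(lambda v: v * 2)   # element-wise, in-place
tensor([2., 4., 6.])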
argmax(dim=None, keepdim=False) β LongTensor
See torch.argmax()
argmin(dim=None, keepdim=False) β LongTensor
See torch.argmin()
argsort(dim=-1, descending=False) β LongTensor
See torch.argsort()
asin() β Tensor
See torch.asin()
asin_() β Tensor
In-place version of asin()
arcsin() β Tensor
See torch.arcsin()
arcsin_() β Tensor
In-place version of arcsin()
as_strided(size, stride, storage_offset=0) β Tensor
See torch.as_strided()
atan() β Tensor
See torch.atan()
atan_() β Tensor
In-place version of atan()
arctan() β Tensor
See torch.arctan()
arctan_() β Tensor
In-place version of arctan()
atan2(other) β Tensor
See torch.atan2()
atan2_(other) β Tensor
In-place version of atan2()
all(dim=None, keepdim=False) β Tensor
See torch.all()
any(dim=None, keepdim=False) β Tensor
See torch.any()
backward(gradient=None, retain_graph=None, create_graph=False, inputs=None) [source]
Computes the gradient of current tensor w.r.t. graph leaves. The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self. This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it. See Default gradient layouts for details on the memory layout of accumulated gradients. Note If you run any forward ops, create gradient, and/or call backward in a user-specified CUDA stream context, see Stream semantics of backward passes. Parameters
gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don't require grad. If a None value would be acceptable then this argument is optional.
retain_graph (bool, optional) β If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) β If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be accumulated into .grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the current tensor. All the provided inputs must be leaf Tensors.
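As a hedged sketch of calling backward() on a non-scalar tensor, where the gradient argument must be supplied: >>> x = torch.tensor([1., 2., 3.], requires_grad=True)
>>> y = x * x                        # non-scalar output
>>> y.backward(torch.ones_like(y))   # effectively d(y.sum())/dx
>>> x.grad
tensor([2., 4., 6.])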
baddbmm(batch1, batch2, *, beta=1, alpha=1) β Tensor
See torch.baddbmm()
baddbmm_(batch1, batch2, *, beta=1, alpha=1) β Tensor
In-place version of baddbmm()
bernoulli(*, generator=None) β Tensor
Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]). self must have floating point dtype, and the result will have the same dtype. See torch.bernoulli()
bernoulli_()
bernoulli_(p=0.5, *, generator=None) β Tensor
Fills each location of self with an independent sample from Bernoulli(p). self can have integral dtype.
bernoulli_(p_tensor, *, generator=None) β Tensor
p_tensor should be a tensor containing probabilities to be used for drawing the binary random number. The i-th element of self tensor will be set to a value sampled from Bernoulli(p_tensor[i]). self can have integral dtype, but p_tensor must have floating point dtype.
See also bernoulli() and torch.bernoulli()
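An illustrative use of the two overloads (results are random, so no outputs are shown): >>> p = torch.rand(3, 3)                  # per-element probabilities
>>> a = torch.empty(3, 3).bernoulli_(p)   # a[i][j] ~ Bernoulli(p[i][j])
>>> b = torch.zeros(5).bernoulli_(0.25)   # scalar p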
bfloat16(memory_format=torch.preserve_format) β Tensor
self.bfloat16() is equivalent to self.to(torch.bfloat16). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
bincount(weights=None, minlength=0) β Tensor
See torch.bincount()
bitwise_not() β Tensor
See torch.bitwise_not()
bitwise_not_() β Tensor
In-place version of bitwise_not()
bitwise_and() β Tensor
See torch.bitwise_and()
bitwise_and_() β Tensor
In-place version of bitwise_and()
bitwise_or() β Tensor
See torch.bitwise_or()
bitwise_or_() β Tensor
In-place version of bitwise_or()
bitwise_xor() β Tensor
See torch.bitwise_xor()
bitwise_xor_() β Tensor
In-place version of bitwise_xor()
bmm(batch2) β Tensor
See torch.bmm()
bool(memory_format=torch.preserve_format) β Tensor
self.bool() is equivalent to self.to(torch.bool). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
byte(memory_format=torch.preserve_format) β Tensor
self.byte() is equivalent to self.to(torch.uint8). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
broadcast_to(shape) β Tensor
See torch.broadcast_to().
cauchy_(median=0, sigma=1, *, generator=None) β Tensor
Fills the tensor with numbers drawn from the Cauchy distribution: f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2}
ceil() β Tensor
See torch.ceil()
ceil_() β Tensor
In-place version of ceil()
char(memory_format=torch.preserve_format) β Tensor
self.char() is equivalent to self.to(torch.int8). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
cholesky(upper=False) β Tensor
See torch.cholesky()
cholesky_inverse(upper=False) β Tensor
See torch.cholesky_inverse()
cholesky_solve(input2, upper=False) β Tensor
See torch.cholesky_solve()
chunk(chunks, dim=0) β List of Tensors
See torch.chunk()
clamp(min, max) β Tensor
See torch.clamp()
clamp_(min, max) β Tensor
In-place version of clamp()
clip(min, max) β Tensor
Alias for clamp().
clip_(min, max) β Tensor
Alias for clamp_().
clone(*, memory_format=torch.preserve_format) β Tensor
See torch.clone()
contiguous(memory_format=torch.contiguous_format) β Tensor
Returns a contiguous in memory tensor containing the same data as self tensor. If self tensor is already in the specified memory format, this function returns the self tensor. Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.contiguous_format.
copy_(src, non_blocking=False) β Tensor
Copies the elements from src into self tensor and returns self. The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device. Parameters
src (Tensor) β the source tensor to copy from
non_blocking (bool) β if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
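A small sketch: src may have a different dtype and a broadcastable shape: >>> a = torch.zeros(2, 3)
>>> a.copy_(torch.tensor([1, 2, 3]))   # int64 src, broadcast over rows
tensor([[1., 2., 3.],
        [1., 2., 3.]])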
conj() β Tensor
See torch.conj()
copysign(other) β Tensor
See torch.copysign()
copysign_(other) β Tensor
In-place version of copysign()
cos() β Tensor
See torch.cos()
cos_() β Tensor
In-place version of cos()
cosh() β Tensor
See torch.cosh()
cosh_() β Tensor
In-place version of cosh()
count_nonzero(dim=None) β Tensor
See torch.count_nonzero()
acosh() β Tensor
See torch.acosh()
acosh_() β Tensor
In-place version of acosh()
arccosh() → Tensor
See torch.arccosh(). Alias for acosh()
arccosh_() → Tensor
In-place version of arccosh(). Alias for acosh_()
cpu(memory_format=torch.preserve_format) β Tensor
Returns a copy of this object in CPU memory. If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned. Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
cross(other, dim=-1) β Tensor
See torch.cross()
cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) β Tensor
Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned. Parameters
device (torch.device) β The destination GPU device. Defaults to the current CUDA device.
non_blocking (bool) β If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
logcumsumexp(dim) β Tensor
See torch.logcumsumexp()
cummax(dim) -> (Tensor, Tensor)
See torch.cummax()
cummin(dim) -> (Tensor, Tensor)
See torch.cummin()
cumprod(dim, dtype=None) β Tensor
See torch.cumprod()
cumprod_(dim, dtype=None) β Tensor
In-place version of cumprod()
cumsum(dim, dtype=None) β Tensor
See torch.cumsum()
cumsum_(dim, dtype=None) β Tensor
In-place version of cumsum()
data_ptr() β int
Returns the address of the first element of self tensor.
deg2rad() β Tensor
See torch.deg2rad()
dequantize() β Tensor
Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
det() β Tensor
See torch.det()
dense_dim() β int
Return the number of dense dimensions in a sparse tensor self. Warning Throws an error if self is not a sparse tensor. See also Tensor.sparse_dim() and hybrid tensors.
detach()
Returns a new Tensor, detached from the current graph. The result will never require gradient. Note Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.
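A brief illustration of both properties (no gradient tracking, shared storage): >>> x = torch.tensor([1., 2.], requires_grad=True)
>>> y = x.detach()
>>> y.requires_grad
False
>>> y[0] = 5.   # in-place change is visible through x (shared storage)
>>> x
tensor([5., 2.], requires_grad=True)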
detach_()
Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.
diag(diagonal=0) β Tensor
See torch.diag()
diag_embed(offset=0, dim1=-2, dim2=-1) β Tensor
See torch.diag_embed()
diagflat(offset=0) β Tensor
See torch.diagflat()
diagonal(offset=0, dim1=0, dim2=1) β Tensor
See torch.diagonal()
fill_diagonal_(fill_value, wrap=False) β Tensor
Fill the main diagonal of a tensor that has at least 2-dimensions. When dims>2, all dimensions of input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor. Parameters
fill_value (Scalar) β the fill value
wrap (bool) – whether the diagonal is "wrapped" after N columns for tall matrices. Example: >>> a = torch.zeros(3, 3)
>>> a.fill_diagonal_(5)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.]])
>>> b = torch.zeros(7, 3)
>>> b.fill_diagonal_(5)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
>>> c = torch.zeros(7, 3)
>>> c.fill_diagonal_(5, wrap=True)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.],
[0., 0., 0.],
[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.]])
fmax(other) β Tensor
See torch.fmax()
fmin(other) β Tensor
See torch.fmin()
diff(n=1, dim=-1, prepend=None, append=None) β Tensor
See torch.diff()
digamma() β Tensor
See torch.digamma()
digamma_() β Tensor
In-place version of digamma()
dim() β int
Returns the number of dimensions of self tensor.
dist(other, p=2) β Tensor
See torch.dist()
div(value, *, rounding_mode=None) β Tensor
See torch.div()
div_(value, *, rounding_mode=None) β Tensor
In-place version of div()
divide(value, *, rounding_mode=None) β Tensor
See torch.divide()
divide_(value, *, rounding_mode=None) β Tensor
In-place version of divide()
dot(other) β Tensor
See torch.dot()
double(memory_format=torch.preserve_format) β Tensor
self.double() is equivalent to self.to(torch.float64). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
eig(eigenvectors=False) -> (Tensor, Tensor)
See torch.eig()
element_size() β int
Returns the size in bytes of an individual element. Example: >>> torch.tensor([]).element_size()
4
>>> torch.tensor([], dtype=torch.uint8).element_size()
1
eq(other) β Tensor
See torch.eq()
eq_(other) β Tensor
In-place version of eq()
equal(other) β bool
See torch.equal()
erf() β Tensor
See torch.erf()
erf_() β Tensor
In-place version of erf()
erfc() β Tensor
See torch.erfc()
erfc_() β Tensor
In-place version of erfc()
erfinv() β Tensor
See torch.erfinv()
erfinv_() β Tensor
In-place version of erfinv()
exp() β Tensor
See torch.exp()
exp_() β Tensor
In-place version of exp()
expm1() β Tensor
See torch.expm1()
expm1_() β Tensor
In-place version of expm1()
expand(*sizes) β Tensor
Returns a new view of the self tensor with singleton dimensions expanded to a larger size. Passing -1 as the size for a dimension means not changing the size of that dimension. A tensor can also be expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1. Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. Parameters
*sizes (torch.Size or int...) β the desired expanded size Warning More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Example: >>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
>>> x.expand(-1, 4) # -1 means not changing the size of that dimension
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
expand_as(other) β Tensor
Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()). Please see expand() for more information about expand. Parameters
other (torch.Tensor) β The result tensor has the same size as other.
exponential_(lambd=1, *, generator=None) β Tensor
Fills self tensor with elements drawn from the exponential distribution: f(x) = \lambda e^{-\lambda x}
fix() β Tensor
See torch.fix().
fix_() β Tensor
In-place version of fix()
fill_(value) β Tensor
Fills self tensor with the specified value.
flatten(input, start_dim=0, end_dim=-1) β Tensor
see torch.flatten()
flip(dims) β Tensor
See torch.flip()
fliplr() β Tensor
See torch.fliplr()
flipud() β Tensor
See torch.flipud()
float(memory_format=torch.preserve_format) β Tensor
self.float() is equivalent to self.to(torch.float32). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
float_power(exponent) β Tensor
See torch.float_power()
float_power_(exponent) β Tensor
In-place version of float_power()
floor() β Tensor
See torch.floor()
floor_() β Tensor
In-place version of floor()
floor_divide(value) β Tensor
See torch.floor_divide()
floor_divide_(value) β Tensor
In-place version of floor_divide()
fmod(divisor) β Tensor
See torch.fmod()
fmod_(divisor) β Tensor
In-place version of fmod()
frac() β Tensor
See torch.frac()
frac_() β Tensor
In-place version of frac()
gather(dim, index) β Tensor
See torch.gather()
gcd(other) β Tensor
See torch.gcd()
gcd_(other) β Tensor
In-place version of gcd()
ge(other) β Tensor
See torch.ge().
ge_(other) β Tensor
In-place version of ge().
greater_equal(other) β Tensor
See torch.greater_equal().
greater_equal_(other) β Tensor
In-place version of greater_equal().
geometric_(p, *, generator=None) β Tensor
Fills self tensor with elements drawn from the geometric distribution: f(X=k) = p^{k - 1} (1 - p)
geqrf() -> (Tensor, Tensor)
See torch.geqrf()
ger(vec2) β Tensor
See torch.ger()
get_device() -> Device ordinal (Integer)
For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown. Example: >>> x = torch.randn(3, 4, 5, device='cuda:0')
>>> x.get_device()
0
>>> x.cpu().get_device() # RuntimeError: get_device is not implemented for type torch.FloatTensor
gt(other) β Tensor
See torch.gt().
gt_(other) β Tensor
In-place version of gt().
greater(other) β Tensor
See torch.greater().
greater_(other) β Tensor
In-place version of greater().
half(memory_format=torch.preserve_format) β Tensor
self.half() is equivalent to self.to(torch.float16). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
hardshrink(lambd=0.5) β Tensor
See torch.nn.functional.hardshrink()
heaviside(values) β Tensor
See torch.heaviside()
histc(bins=100, min=0, max=0) β Tensor
See torch.histc()
hypot(other) β Tensor
See torch.hypot()
hypot_(other) β Tensor
In-place version of hypot()
i0() β Tensor
See torch.i0()
i0_() β Tensor
In-place version of i0()
igamma(other) β Tensor
See torch.igamma()
igamma_(other) β Tensor
In-place version of igamma()
igammac(other) β Tensor
See torch.igammac()
igammac_(other) β Tensor
In-place version of igammac()
index_add_(dim, index, tensor) β Tensor
Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the ith row of tensor is added to the jth row of self. The dimth dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised. Note This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information. Parameters
dim (int) β dimension along which to index
index (IntTensor or LongTensor) β indices of tensor to select from
tensor (Tensor) β the tensor containing values to add Example: >>> x = torch.ones(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_add_(0, index, t)
tensor([[ 2., 3., 4.],
[ 1., 1., 1.],
[ 8., 9., 10.],
[ 1., 1., 1.],
[ 5., 6., 7.]])
index_add(tensor1, dim, index, tensor2) β Tensor
Out-of-place version of torch.Tensor.index_add_(). tensor1 corresponds to self in torch.Tensor.index_add_().
index_copy_(dim, index, tensor) β Tensor
Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the ith row of tensor is copied to the jth row of self. The dimth dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised. Note If index contains duplicate entries, multiple elements from tensor will be copied to the same index of self. The result is nondeterministic since it depends on which copy occurs last. Parameters
dim (int) β dimension along which to index
index (LongTensor) β indices of tensor to select from
tensor (Tensor) β the tensor containing values to copy Example: >>> x = torch.zeros(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_copy_(0, index, t)
tensor([[ 1., 2., 3.],
[ 0., 0., 0.],
[ 7., 8., 9.],
[ 0., 0., 0.],
[ 4., 5., 6.]])
index_copy(tensor1, dim, index, tensor2) β Tensor
Out-of-place version of torch.Tensor.index_copy_(). tensor1 corresponds to self in torch.Tensor.index_copy_().
index_fill_(dim, index, val) β Tensor
Fills the elements of the self tensor with value val by selecting the indices in the order given in index. Parameters
dim (int) β dimension along which to index
index (LongTensor) β indices of self tensor to fill in
val (float) β the value to fill with Example::
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 2])
>>> x.index_fill_(1, index, -1)
tensor([[-1., 2., -1.],
[-1., 5., -1.],
[-1., 8., -1.]])
index_fill(tensor1, dim, index, value) β Tensor
Out-of-place version of torch.Tensor.index_fill_(). tensor1 corresponds to self in torch.Tensor.index_fill_().
index_put_(indices, values, accumulate=False) β Tensor
Puts values from the tensor values into the tensor self using the indices specified in indices (which is a tuple of Tensors). The expression tensor.index_put_(indices, values) is equivalent to tensor[indices] = values. Returns self. If accumulate is True, the elements in values are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements. Parameters
indices (tuple of LongTensor) β tensors used to index into self.
values (Tensor) β tensor of same dtype as self.
accumulate (bool) β whether to accumulate into self
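For example, with a tuple of row and column index tensors: >>> t = torch.zeros(3, 3)
>>> rows = torch.tensor([0, 2])
>>> cols = torch.tensor([1, 1])
>>> t.index_put_((rows, cols), torch.tensor([4., 5.]))   # t[rows, cols] = values
tensor([[0., 4., 0.],
        [0., 0., 0.],
        [0., 5., 0.]])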
index_put(tensor1, indices, values, accumulate=False) β Tensor
Out-of-place version of index_put_(). tensor1 corresponds to self in torch.Tensor.index_put_().
index_select(dim, index) β Tensor
See torch.index_select()
indices() β Tensor
Return the indices tensor of a sparse COO tensor. Warning Throws an error if self is not a sparse COO tensor. See also Tensor.values(). Note This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
inner(other) β Tensor
See torch.inner().
int(memory_format=torch.preserve_format) β Tensor
self.int() is equivalent to self.to(torch.int32). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
int_repr() β Tensor
Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
inverse() β Tensor
See torch.inverse()
isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) β Tensor
See torch.isclose()
isfinite() β Tensor
See torch.isfinite()
isinf() β Tensor
See torch.isinf()
isposinf() β Tensor
See torch.isposinf()
isneginf() β Tensor
See torch.isneginf()
isnan() β Tensor
See torch.isnan()
is_contiguous(memory_format=torch.contiguous_format) β bool
Returns True if self tensor is contiguous in memory in the order specified by memory format. Parameters
memory_format (torch.memory_format, optional) β Specifies memory allocation order. Default: torch.contiguous_format.
is_complex() β bool
Returns True if the data type of self is a complex data type.
is_floating_point() β bool
Returns True if the data type of self is a floating point data type.
is_leaf
All Tensors that have requires_grad which is False will be leaf Tensors by convention. For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None. Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad(). Example: >>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it
is_pinned()
Returns true if this tensor resides in pinned memory.
is_set_to(tensor) β bool
Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
is_shared() [source]
Checks if tensor is in shared memory. This is always True for CUDA tensors.
is_signed() β bool
Returns True if the data type of self is a signed data type.
is_sparse
Is True if the Tensor uses sparse storage layout, False otherwise.
istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) [source]
See torch.istft()
isreal() β Tensor
See torch.isreal()
item() β number
Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist(). This operation is not differentiable. Example: >>> x = torch.tensor([1.0])
>>> x.item()
1.0
kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.kthvalue()
lcm(other) β Tensor
See torch.lcm()
lcm_(other) β Tensor
In-place version of lcm()
ldexp(other) β Tensor
See torch.ldexp()
ldexp_(other) β Tensor
In-place version of ldexp()
le(other) β Tensor
See torch.le().
le_(other) β Tensor
In-place version of le().
less_equal(other) β Tensor
See torch.less_equal().
less_equal_(other) β Tensor
In-place version of less_equal().
lerp(end, weight) β Tensor
See torch.lerp()
lerp_(end, weight) β Tensor
In-place version of lerp()
lgamma() β Tensor
See torch.lgamma()
lgamma_() β Tensor
In-place version of lgamma()
log() β Tensor
See torch.log()
log_() β Tensor
In-place version of log()
logdet() β Tensor
See torch.logdet()
log10() β Tensor
See torch.log10()
log10_() β Tensor
In-place version of log10()
log1p() β Tensor
See torch.log1p()
log1p_() β Tensor
In-place version of log1p()
log2() β Tensor
See torch.log2()
log2_() β Tensor
In-place version of log2()
log_normal_(mean=1, std=2, *, generator=None)
Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ. Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution: f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}
logaddexp(other) β Tensor
See torch.logaddexp()
logaddexp2(other) β Tensor
See torch.logaddexp2()
logsumexp(dim, keepdim=False) β Tensor
See torch.logsumexp()
logical_and() β Tensor
See torch.logical_and()
logical_and_() β Tensor
In-place version of logical_and()
logical_not() β Tensor
See torch.logical_not()
logical_not_() β Tensor
In-place version of logical_not()
logical_or() β Tensor
See torch.logical_or()
logical_or_() β Tensor
In-place version of logical_or()
logical_xor() β Tensor
See torch.logical_xor()
logical_xor_() β Tensor
In-place version of logical_xor()
logit() β Tensor
See torch.logit()
logit_() β Tensor
In-place version of logit()
long(memory_format=torch.preserve_format) β Tensor
self.long() is equivalent to self.to(torch.int64). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
lstsq(A) -> (Tensor, Tensor)
See torch.lstsq()
lt(other) β Tensor
See torch.lt().
lt_(other) β Tensor
In-place version of lt().
less(other) → Tensor
See torch.less(). Alias for lt()
less_(other) β Tensor
In-place version of less().
lu(pivot=True, get_infos=False) [source]
See torch.lu()
lu_solve(LU_data, LU_pivots) β Tensor
See torch.lu_solve()
as_subclass(cls) β Tensor
Makes a cls instance with the same data pointer as self. Changes in the output mirror changes in self, and the output stays attached to the autograd graph. cls must be a subclass of Tensor.
map_(tensor, callable)
Applies callable to each pair of elements from self tensor and the given tensor and stores the results in self tensor. self tensor and the given tensor must be broadcastable. The callable should have the signature: def callable(a, b) -> number
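A minimal CPU-only sketch (this assumes map_ returns self, as in-place tensor methods conventionally do): >>> a = torch.tensor([1., 2., 3.])
>>> b = torch.tensor([10., 20., 30.])
>>> a.map_(b, lambda x, y: x + y)   # stores x + y into a
tensor([11., 22., 33.])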
masked_scatter_(mask, source)
Copies elements from source into self tensor at positions where the mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask. Parameters
mask (BoolTensor) β the boolean mask
source (Tensor) β the tensor to copy from Note The mask operates on the self tensor, not on the given source tensor.
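A small illustration (source values are consumed in row-major order at the True positions): >>> t = torch.zeros(2, 3)
>>> mask = torch.tensor([[True, False, True], [False, True, False]])
>>> src = torch.tensor([1., 2., 3., 4., 5., 6.])
>>> t.masked_scatter_(mask, src)
tensor([[1., 0., 2.],
        [0., 3., 0.]])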
masked_scatter(mask, tensor) β Tensor
Out-of-place version of torch.Tensor.masked_scatter_()
masked_fill_(mask, value)
Fills elements of self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor. Parameters
mask (BoolTensor) β the boolean mask
value (float) β the value to fill in with
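For example, using a comparison result as the mask: >>> t = torch.tensor([[1., 2.], [3., 4.]])
>>> t.masked_fill_(t > 2, 0.)
tensor([[1., 2.],
        [0., 0.]])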
masked_fill(mask, value) β Tensor
Out-of-place version of torch.Tensor.masked_fill_()
masked_select(mask) β Tensor
See torch.masked_select()
matmul(tensor2) β Tensor
See torch.matmul()
matrix_power(n) β Tensor
See torch.matrix_power()
matrix_exp() β Tensor
See torch.matrix_exp()
max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See torch.max()
maximum(other) β Tensor
See torch.maximum()
mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See torch.mean()
median(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.median()
nanmedian(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.nanmedian()
min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See torch.min()
minimum(other) β Tensor
See torch.minimum()
mm(mat2) β Tensor
See torch.mm()
smm(mat) β Tensor
See torch.smm()
mode(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.mode()
movedim(source, destination) β Tensor
See torch.movedim()
moveaxis(source, destination) β Tensor
See torch.moveaxis()
msort() β Tensor
See torch.msort()
mul(value) β Tensor
See torch.mul().
mul_(value) β Tensor
In-place version of mul().
multiply(value) β Tensor
See torch.multiply().
multiply_(value) β Tensor
In-place version of multiply().
multinomial(num_samples, replacement=False, *, generator=None) β Tensor
See torch.multinomial()
mv(vec) β Tensor
See torch.mv()
mvlgamma(p) β Tensor
See torch.mvlgamma()
mvlgamma_(p) β Tensor
In-place version of mvlgamma()
nansum(dim=None, keepdim=False, dtype=None) β Tensor
See torch.nansum()
narrow(dimension, start, length) β Tensor
See torch.narrow() Example: >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> x.narrow(0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> x.narrow(1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]])
narrow_copy(dimension, start, length) β Tensor
Same as Tensor.narrow() except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling narrow_copy() with dimension > self.sparse_dim() will return a copy with the relevant dense dimension narrowed, and self.shape updated accordingly.
ndimension() β int
Alias for dim()
nan_to_num(nan=0.0, posinf=None, neginf=None) β Tensor
See torch.nan_to_num().
nan_to_num_(nan=0.0, posinf=None, neginf=None) β Tensor
In-place version of nan_to_num().
ne(other) β Tensor
See torch.ne().
ne_(other) β Tensor
In-place version of ne().
not_equal(other) β Tensor
See torch.not_equal().
not_equal_(other) β Tensor
In-place version of not_equal().
neg() β Tensor
See torch.neg()
neg_() β Tensor
In-place version of neg()
negative() β Tensor
See torch.negative()
negative_() β Tensor
In-place version of negative()
nelement() β int
Alias for numel()
nextafter(other) β Tensor
See torch.nextafter()
nextafter_(other) β Tensor
In-place version of nextafter()
nonzero() β LongTensor
See torch.nonzero()
norm(p='fro', dim=None, keepdim=False, dtype=None) [source]
See torch.norm()
normal_(mean=0, std=1, *, generator=None) β Tensor
Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
numel() β int
See torch.numel()
numpy() β numpy.ndarray
Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa.
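A quick demonstration of the shared storage (CPU tensor assumed): >>> t = torch.ones(3)
>>> n = t.numpy()
>>> n[0] = 0.
>>> t   # the change made through the ndarray is visible in the tensor
tensor([0., 1., 1.])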
orgqr(input2) β Tensor
See torch.orgqr()
ormqr(input2, input3, left=True, transpose=False) β Tensor
See torch.ormqr()
outer(vec2) β Tensor
See torch.outer().
permute(*dims) β Tensor
Returns a view of the original tensor with its dimensions permuted. Parameters
*dims (int...) β The desired ordering of dimensions Example >>> x = torch.randn(2, 3, 5)
>>> x.size()
torch.Size([2, 3, 5])
>>> x.permute(2, 0, 1).size()
torch.Size([5, 2, 3])
pin_memory() β Tensor
Copies the tensor to pinned memory, if it's not already pinned.
pinverse() β Tensor
See torch.pinverse()
polygamma(n) β Tensor
See torch.polygamma()
polygamma_(n) β Tensor
In-place version of polygamma()
pow(exponent) β Tensor
See torch.pow()
pow_(exponent) β Tensor
In-place version of pow()
prod(dim=None, keepdim=False, dtype=None) β Tensor
See torch.prod()
put_(indices, tensor, accumulate=False) β Tensor
Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor. If accumulate is True, the elements in tensor are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements. Parameters
indices (LongTensor) β the indices into self
tensor (Tensor) β the tensor containing values to copy from
accumulate (bool) β whether to accumulate into self Example: >>> src = torch.tensor([[4, 3, 5],
... [6, 7, 8]])
>>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))
tensor([[ 4, 9, 5],
[ 10, 7, 8]])
qr(some=True) -> (Tensor, Tensor)
See torch.qr()
qscheme() β torch.qscheme
Returns the quantization scheme of a given QTensor.
quantile(q, dim=None, keepdim=False) β Tensor
See torch.quantile()
nanquantile(q, dim=None, keepdim=False) β Tensor
See torch.nanquantile()
q_scale() β float
Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer().
q_zero_point() β int
Given a Tensor quantized by linear(affine) quantization, returns the zero_point of the underlying quantizer().
q_per_channel_scales() β Tensor
Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
q_per_channel_zero_points() β Tensor
Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
q_per_channel_axis() β int
Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
rad2deg() β Tensor
See torch.rad2deg()
random_(from=0, to=None, *, generator=None) β Tensor
Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor's data type. However, for floating point types, if unspecified, range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch.tensor(1, dtype=torch.double).random_() will be uniform in [0, 2^53].
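An illustrative sketch of the bounds (values are random, so no outputs are shown): >>> a = torch.empty(5, dtype=torch.int64).random_(10)     # values in [0, 9]
>>> b = torch.empty(5, dtype=torch.int64).random_(2, 7)   # values in [2, 6]
>>> c = torch.empty(3).random_()   # float32: values in [0, 2**24]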
ravel(input) β Tensor
see torch.ravel()
reciprocal() β Tensor
See torch.reciprocal()
reciprocal_() β Tensor
In-place version of reciprocal()
record_stream(stream)
Ensures that the tensor memory is not reused for another tensor until all current work queued on stream is complete. Note The caching allocator is aware of only the stream where a tensor was allocated. Because of this, it already correctly manages the life cycle of tensors on only one stream. But if a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor.
register_hook(hook) [source]
Registers a backward hook. The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature: hook(grad) -> Tensor or None
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad. This function returns a handle with a method handle.remove() that removes the hook from the module. Example: >>> v = torch.tensor([0., 0., 0.], requires_grad=True)
>>> h = v.register_hook(lambda grad: grad * 2) # double the gradient
>>> v.backward(torch.tensor([1., 2., 3.]))
>>> v.grad
tensor([2., 4., 6.])
>>> h.remove() # removes the hook
remainder(divisor) β Tensor
See torch.remainder()
remainder_(divisor) β Tensor
In-place version of remainder()
renorm(p, dim, maxnorm) β Tensor
See torch.renorm()
renorm_(p, dim, maxnorm) β Tensor
In-place version of renorm()
repeat(*sizes) β Tensor
Repeats this tensor along the specified dimensions. Unlike expand(), this function copies the tensorβs data. Warning repeat() behaves differently from numpy.repeat, but is more similar to numpy.tile. For the operator similar to numpy.repeat, see torch.repeat_interleave(). Parameters
sizes (torch.Size or int...) β The number of times to repeat this tensor along each dimension Example: >>> x = torch.tensor([1, 2, 3])
>>> x.repeat(4, 2)
tensor([[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3]])
>>> x.repeat(4, 2, 1).size()
torch.Size([4, 2, 3])
repeat_interleave(repeats, dim=None) β Tensor
See torch.repeat_interleave().
requires_grad
Is True if gradients need to be computed for this Tensor, False otherwise. Note The fact that gradients need to be computed for a Tensor does not mean that the grad attribute will be populated; see is_leaf for more details.
requires_grad_(requires_grad=True) β Tensor
Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor. requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor. Parameters
requires_grad (bool) β If autograd should record operations on this tensor. Default: True. Example: >>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights) # some function
>>> weights
tensor([-0.5503, 0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007, 0.9853, -4.2316, -1.6606])
reshape(*shape) β Tensor
Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view. See torch.reshape() Parameters
shape (tuple of ints or int...) – the desired shape
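For example (one dimension may be inferred by passing -1): >>> x = torch.arange(6)
>>> x.reshape(2, 3)
tensor([[0, 1, 2],
        [3, 4, 5]])
>>> x.reshape(-1, 2)
tensor([[0, 1],
        [2, 3],
        [4, 5]])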
reshape_as(other) β Tensor
Returns this tensor as the same shape as other. self.reshape_as(other) is equivalent to self.reshape(other.sizes()). This method returns a view if other.sizes() is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view. Please see reshape() for more information about reshape. Parameters
other (torch.Tensor) β The result tensor has the same shape as other.
resize_(*sizes, memory_format=torch.contiguous_format) β Tensor
Resizes self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized. Warning This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_(). Parameters
sizes (torch.Size or int...) β the desired size
memory_format (torch.memory_format, optional) β the desired memory format of Tensor. Default: torch.contiguous_format. Note that memory format of self is going to be unaffected if self.size() matches sizes. Example: >>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])
>>> x.resize_(2, 2)
tensor([[ 1, 2],
[ 3, 4]])
resize_as_(tensor, memory_format=torch.contiguous_format) β Tensor
Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self.resize_(tensor.size()). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of Tensor. Default: torch.contiguous_format. Note that memory format of self is going to be unaffected if self.size() matches tensor.size().
retain_grad() [source]
Enables .grad attribute for non-leaf Tensors.
roll(shifts, dims) β Tensor
See torch.roll()
rot90(k, dims) β Tensor
See torch.rot90()
round() β Tensor
See torch.round()
round_() β Tensor
In-place version of round()
rsqrt() β Tensor
See torch.rsqrt()
rsqrt_() β Tensor
In-place version of rsqrt()
scatter(dim, index, src) β Tensor
Out-of-place version of torch.Tensor.scatter_()
scatter_(dim, index, src, reduce=None) β Tensor
Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. For a 3-D tensor, self is updated as: self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in gather(). self, index and src (if it is a Tensor) should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim. Note that index and src do not broadcast. Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive. Warning When indices are not unique, the behavior is non-deterministic (one of the values from src will be picked arbitrarily) and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)! Note The backward pass is implemented only for src.shape == index.shape. Additionally accepts an optional reduce argument that allows specification of an optional reduction operation, which is applied to all values in the tensor src into self at the indices specified in the index. For each value in src, the reduction operation is applied to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. Given a 3-D tensor and reduction using the multiplication operation, self is updated as: self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2
Reducing with the addition operation is the same as using scatter_add_(). Parameters
dim (int) β the axis along which to index
index (LongTensor) β the indices of elements to scatter, can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged.
src (Tensor or float) β the source element(s) to scatter.
reduce (str, optional) β reduction operation to apply, can be either 'add' or 'multiply'. Example: >>> src = torch.arange(1, 11).reshape((2, 5))
>>> src
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]])
>>> index = torch.tensor([[0, 1, 2, 0]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src)
tensor([[1, 0, 0, 4, 0],
[0, 2, 0, 0, 0],
[0, 0, 3, 0, 0]])
>>> index = torch.tensor([[0, 1, 2], [0, 1, 4]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src)
tensor([[1, 2, 3, 0, 0],
[6, 7, 0, 0, 8],
[0, 0, 0, 0, 0]])
>>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),
... 1.23, reduce='multiply')
tensor([[2.0000, 2.0000, 2.4600, 2.0000],
[2.0000, 2.0000, 2.0000, 2.4600]])
>>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),
... 1.23, reduce='add')
tensor([[2.0000, 2.0000, 3.2300, 2.0000],
[2.0000, 2.0000, 2.0000, 3.2300]])
scatter_add_(dim, index, src) β Tensor
Adds all values from the tensor src into self at the indices specified in the index tensor in a similar fashion as scatter_(). For each value in src, it is added to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. For a 3-D tensor, self is updated as: self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2
self, index and src should all have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim. Note that index and src do not broadcast. Note This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information. Note The backward pass is implemented only for src.shape == index.shape. Parameters
dim (int) β the axis along which to index
index (LongTensor) β the indices of elements to scatter and add, can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged.
src (Tensor) β the source elements to scatter and add Example: >>> src = torch.ones((2, 5))
>>> index = torch.tensor([[0, 1, 2, 0, 0]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)
tensor([[1., 0., 0., 1., 1.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]])
>>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)
tensor([[2., 0., 0., 1., 1.],
[0., 2., 0., 0., 0.],
[0., 0., 2., 1., 1.]])
scatter_add(dim, index, src) β Tensor
Out-of-place version of torch.Tensor.scatter_add_()
select(dim, index) β Tensor
Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed. Parameters
dim (int) β the dimension to slice
index (int) β the index to select with Note select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
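A minimal sketch of the equivalence with slicing (values arbitrary): >>> x = torch.arange(12).reshape(3, 4)
>>> x.select(0, 1)   # same as x[1]
tensor([4, 5, 6, 7])
>>> x.select(1, 2)   # same as x[:, 2]
tensor([ 2,  6, 10])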
set_(source=None, storage_offset=0, size=None, stride=None) β Tensor
Sets the underlying storage, size, and strides. If source is a tensor, self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other. If source is a Storage, the method sets the underlying storage, offset, size, and stride. Parameters
source (Tensor or Storage) β the tensor or storage to use
storage_offset (int, optional) β the offset in the storage
size (torch.Size, optional) β the desired size. Defaults to the size of the source.
stride (tuple, optional) β the desired stride. Defaults to C-contiguous strides.
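An illustrative sketch of storage sharing (sizes arbitrary): >>> a = torch.zeros(2, 3)
>>> b = torch.ones(6)
>>> a.set_(b).shape   # a now shares b's storage, size and strides
torch.Size([6])
>>> a.fill_(7.)
tensor([7., 7., 7., 7., 7., 7.])
>>> b                 # the change is visible through b
tensor([7., 7., 7., 7., 7., 7.])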
share_memory_() [source]
Moves the underlying storage to shared memory. This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.
short(memory_format=torch.preserve_format) β Tensor
self.short() is equivalent to self.to(torch.int16). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format.
sigmoid() β Tensor
See torch.sigmoid()
sigmoid_() β Tensor
In-place version of sigmoid()
sign() β Tensor
See torch.sign()
sign_() β Tensor
In-place version of sign()
signbit() β Tensor
See torch.signbit()
sgn() β Tensor
See torch.sgn()
sgn_() β Tensor
In-place version of sgn()
sin() β Tensor
See torch.sin()
sin_() β Tensor
In-place version of sin()
sinc() β Tensor
See torch.sinc()
sinc_() β Tensor
In-place version of sinc()
sinh() β Tensor
See torch.sinh()
sinh_() β Tensor
In-place version of sinh()
asinh() β Tensor
See torch.asinh()
asinh_() β Tensor
In-place version of asinh()
arcsinh() β Tensor
See torch.arcsinh()
arcsinh_() β Tensor
In-place version of arcsinh()
size() β torch.Size
Returns the size of the self tensor. The returned value is a subclass of tuple. Example: >>> torch.empty(3, 4, 5).size()
torch.Size([3, 4, 5])
slogdet() -> (Tensor, Tensor)
See torch.slogdet()
solve(A) β Tensor, Tensor
See torch.solve()
sort(dim=-1, descending=False) -> (Tensor, LongTensor)
See torch.sort()
split(split_size, dim=0) [source]
See torch.split()
sparse_mask(mask) β Tensor
Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask. The values of the mask sparse tensor are ignored. The self and mask tensors must have the same shape. Note The returned sparse tensor has the same indices as the sparse tensor mask, even when the corresponding values in self are zeros. Parameters
mask (Tensor) β a sparse tensor whose indices are used as a filter Example: >>> nse = 5
>>> dims = (5, 5, 2, 2)
>>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),
... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)
>>> V = torch.randn(nse, dims[2], dims[3])
>>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
[0, 1, 4, 3]]),
values=tensor([[[ 1.6550, 0.2397],
[-0.1611, -0.0779]],
[[ 0.2326, -1.0558],
[ 1.4711, 1.9678]],
[[-0.5138, -0.0411],
[ 1.9417, 0.5158]],
[[ 0.0793, 0.0036],
[-0.2569, -0.1055]]]),
size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
sparse_dim() β int
Return the number of sparse dimensions in a sparse tensor self. Warning Throws an error if self is not a sparse tensor. See also Tensor.dense_dim() and hybrid tensors.
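A small sketch with a hybrid tensor, i.e. one sparse and one dense dimension (shapes arbitrary): >>> i = torch.tensor([[0, 2]])       # indices along the single sparse dim
>>> v = torch.ones(2, 3)             # each value is itself a dense vector
>>> s = torch.sparse_coo_tensor(i, v, (4, 3))
>>> s.sparse_dim(), s.dense_dim()
(1, 1)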
sqrt() β Tensor
See torch.sqrt()
sqrt_() β Tensor
In-place version of sqrt()
square() β Tensor
See torch.square()
square_() β Tensor
In-place version of square()
squeeze(dim=None) β Tensor
See torch.squeeze()
squeeze_(dim=None) β Tensor
In-place version of squeeze()
std(dim=None, unbiased=True, keepdim=False) β Tensor
See torch.std()
stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None) [source]
See torch.stft() Warning This function changed its signature at version 0.4.1. Calling with the previous signature may cause an error or return an incorrect result.
storage() β torch.Storage
Returns the underlying storage.
storage_offset() β int
Returns self tensor's offset in the underlying storage in terms of number of storage elements (not bytes). Example: >>> x = torch.tensor([1, 2, 3, 4, 5])
>>> x.storage_offset()
0
>>> x[3:].storage_offset()
3
storage_type() β type
Returns the type of the underlying storage.
stride(dim) β tuple or int
Returns the stride of self tensor. Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim. Parameters
dim (int, optional) β the desired dimension in which stride is required Example: >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)
>>> x.stride(0)
5
>>> x.stride(-1)
1
sub(other, *, alpha=1) β Tensor
See torch.sub().
sub_(other, *, alpha=1) β Tensor
In-place version of sub()
subtract(other, *, alpha=1) β Tensor
See torch.subtract().
subtract_(other, *, alpha=1) β Tensor
In-place version of subtract().
sum(dim=None, keepdim=False, dtype=None) β Tensor
See torch.sum()
sum_to_size(*size) β Tensor
Sum this tensor to size. size must be broadcastable to this tensor size. Parameters
size (int...) β a sequence of integers defining the shape of the output tensor.
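A minimal sketch (shapes arbitrary): summing a (2, 3) tensor down to broadcast-compatible sizes. >>> x = torch.ones(2, 3)
>>> x.sum_to_size(1, 3)
tensor([[2., 2., 2.]])
>>> x.sum_to_size(2, 1)
tensor([[3.],
        [3.]])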
svd(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)
See torch.svd()
swapaxes(axis0, axis1) β Tensor
See torch.swapaxes()
swapdims(dim0, dim1) β Tensor
See torch.swapdims()
symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)
See torch.symeig()
t() β Tensor
See torch.t()
t_() β Tensor
In-place version of t()
tensor_split(indices_or_sections, dim=0) β List of Tensors
See torch.tensor_split()
tile(*reps) β Tensor
See torch.tile()
to(*args, **kwargs) β Tensor
Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs). Note If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to:
to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) β Tensor
Returns a Tensor with the specified dtype Args:
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) β Tensor
Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion. Args:
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
to(other, non_blocking=False, copy=False) β Tensor
Returns a Tensor with same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example: >>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
to_mkldnn() β Tensor
Returns a copy of the tensor in torch.mkldnn layout.
take(indices) β Tensor
See torch.take()
tan() β Tensor
See torch.tan()
tan_() β Tensor
In-place version of tan()
tanh() β Tensor
See torch.tanh()
tanh_() β Tensor
In-place version of tanh()
atanh() β Tensor
See torch.atanh()
atanh_() → Tensor
In-place version of atanh()
arctanh() β Tensor
See torch.arctanh()
arctanh_() → Tensor
In-place version of arctanh()
tolist() β list or number
Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary. This operation is not differentiable. Examples: >>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
[-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803
topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
See torch.topk()
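A minimal sketch (values arbitrary): >>> x = torch.tensor([1., 5., 3., 2.])
>>> x.topk(2)
torch.return_types.topk(
values=tensor([5., 3.]),
indices=tensor([1, 2]))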
to_sparse(sparseDims) β Tensor
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format. Parameters
sparseDims (int, optional) β the number of sparse dimensions to include in the new sparse tensor Example: >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
[0, 2]]),
values=tensor([ 9, 10]),
size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
values=tensor([[ 9, 0, 10]]),
size=(3, 3), nnz=1, layout=torch.sparse_coo)
trace() β Tensor
See torch.trace()
transpose(dim0, dim1) β Tensor
See torch.transpose()
transpose_(dim0, dim1) β Tensor
In-place version of transpose()
triangular_solve(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
See torch.triangular_solve()
tril(k=0) β Tensor
See torch.tril()
tril_(k=0) β Tensor
In-place version of tril()
triu(k=0) β Tensor
See torch.triu()
triu_(k=0) β Tensor
In-place version of triu()
true_divide(value) β Tensor
See torch.true_divide()
true_divide_(value) β Tensor
In-place version of true_divide()
trunc() β Tensor
See torch.trunc()
trunc_() β Tensor
In-place version of trunc()
type(dtype=None, non_blocking=False, **kwargs) β str or Tensor
Returns the type if dtype is not provided, else casts this object to the specified type. If this is already of the correct type, no copy is performed and the original object is returned. Parameters
dtype (type or string) β The desired type
non_blocking (bool) β If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs β For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
type_as(tensor) β Tensor
Returns this tensor cast to the type of the given tensor. This is a no-op if the tensor is already of the correct type. This is equivalent to self.type(tensor.type()) Parameters
tensor (Tensor) β the tensor which has the desired type
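A minimal sketch (here y happens to be float32, so the integer tensor x is cast to float32): >>> x = torch.tensor([1, 2, 3])
>>> y = torch.randn(2)
>>> x.type_as(y)
tensor([1., 2., 3.])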
unbind(dim=0) β seq
See torch.unbind()
unfold(dimension, size, step) β Tensor
Returns a view of the original tensor which contains all slices of size size from the self tensor in the dimension dimension. The step between two slices is given by step. If sizedim is the size of dimension dimension of self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1. An additional dimension of size size is appended to the returned tensor. Parameters
dimension (int) β dimension in which unfolding happens
size (int) β the size of each slice that is unfolded
step (int) β the step between each slice Example: >>> x = torch.arange(1., 8)
>>> x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1., 2.],
[ 2., 3.],
[ 3., 4.],
[ 4., 5.],
[ 5., 6.],
[ 6., 7.]])
>>> x.unfold(0, 2, 2)
tensor([[ 1., 2.],
[ 3., 4.],
[ 5., 6.]])
uniform_(from=0, to=1) β Tensor
Fills self tensor with numbers sampled from the continuous uniform distribution: P(x) = \dfrac{1}{\text{to} - \text{from}}
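A short sketch (the sampled values shown are illustrative and will differ from run to run): >>> torch.empty(3).uniform_(0, 1)
tensor([0.4388, 0.6387, 0.1319])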
unique(sorted=True, return_inverse=False, return_counts=False, dim=None) [source]
Returns the unique elements of the input tensor. See torch.unique()
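A minimal sketch (values arbitrary): >>> x = torch.tensor([1, 3, 2, 3])
>>> x.unique()
tensor([1, 2, 3])
>>> x.unique(return_counts=True)
(tensor([1, 2, 3]), tensor([1, 1, 2]))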
unique_consecutive(return_inverse=False, return_counts=False, dim=None) [source]
Eliminates all but the first element from every consecutive group of equivalent elements. See torch.unique_consecutive()
unsqueeze(dim) β Tensor
See torch.unsqueeze()
unsqueeze_(dim) β Tensor
In-place version of unsqueeze()
values() β Tensor
Return the values tensor of a sparse COO tensor. Warning Throws an error if self is not a sparse COO tensor. See also Tensor.indices(). Note This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
var(dim=None, unbiased=True, keepdim=False) β Tensor
See torch.var()
vdot(other) β Tensor
See torch.vdot()
view(*shape) β Tensor
Returns a new tensor with the same data as the self tensor but of a different shape. The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, \dots, d+k that satisfy the following contiguity-like condition: \forall i = d, \dots, d+k-1, \quad \text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]
Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous()). When it is unclear whether a view() can be performed, it is advisable to use reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise. Parameters
shape (torch.Size or int...) β the desired size Example: >>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False
view(dtype) β Tensor
Returns a new tensor with the same data as the self tensor but of a different dtype. dtype must have the same number of bytes per element as self's dtype. Warning This overload is not supported by TorchScript, and using it in a TorchScript program will cause undefined behavior. Parameters
dtype (torch.dtype) β the desired dtype Example: >>> x = torch.randn(4, 4)
>>> x
tensor([[ 0.9482, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.dtype
torch.float32
>>> y = x.view(torch.int32)
>>> y
tensor([[ 1064483442, -1124191867, 1069546515, -1089989247],
[-1105482831, 1061112040, 1057999968, -1084397505],
[-1071760287, -1123489973, -1097310419, -1084649136],
[-1101533110, 1073668768, -1082790149, -1088634448]],
dtype=torch.int32)
>>> y[0, 0] = 1000000000
>>> x
tensor([[ 0.0047, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.view(torch.int16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Viewing a tensor as a new dtype with a different number of bytes per element is not supported.
view_as(other) β Tensor
View this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()). Please see view() for more information about view. Parameters
other (torch.Tensor) β The result tensor has the same size as other.
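A minimal sketch (only other's size matters, not its values): >>> x = torch.arange(6)
>>> y = torch.empty(2, 3)
>>> x.view_as(y)
tensor([[0, 1, 2],
        [3, 4, 5]])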
where(condition, y) β Tensor
self.where(condition, y) is equivalent to torch.where(condition, self, y). See torch.where()
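A minimal sketch (values arbitrary): >>> x = torch.tensor([-1., 0., 2.])
>>> x.where(x > 0, torch.zeros(3))   # keep positive entries, zero elsewhere
tensor([0., 0., 2.])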
xlogy(other) β Tensor
See torch.xlogy()
xlogy_(other) β Tensor
In-place version of xlogy()
zero_() β Tensor
Fills self tensor with zeros. | torch.tensors |
torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) β Tensor
Constructs a tensor with data. Warning torch.tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a NumPy ndarray and want to avoid a copy, use torch.as_tensor(). Warning When data is a tensor x, torch.tensor() reads out 'the data' from whatever it is passed, and constructs a leaf variable. Therefore torch.tensor(x) is equivalent to x.clone().detach() and torch.tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended. Parameters
data (array_like) β Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Keyword Arguments
dtype (torch.dtype, optional) β the desired data type of returned tensor. Default: if None, infers data type from data.
device (torch.device, optional) β the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) β If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) β If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False. Example: >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000, 1.2000],
[ 2.2000, 3.1000],
[ 4.9000, 5.2000]])
>>> torch.tensor([0, 1]) # Type inference on data
tensor([ 0, 1])
>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
... dtype=torch.float64,
... device=torch.device('cuda:0')) # creates a torch.cuda.DoubleTensor
tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')
>>> torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor)
tensor(3.1416)
>>> torch.tensor([]) # Create an empty tensor (of size (0,))
tensor([]) | torch.generated.torch.tensor#torch.tensor |
abs() β Tensor
See torch.abs() | torch.tensors#torch.Tensor.abs |
absolute() β Tensor
Alias for abs() | torch.tensors#torch.Tensor.absolute |
absolute_() β Tensor
In-place version of absolute() Alias for abs_() | torch.tensors#torch.Tensor.absolute_ |
abs_() β Tensor
In-place version of abs() | torch.tensors#torch.Tensor.abs_ |
acos() β Tensor
See torch.acos() | torch.tensors#torch.Tensor.acos |
acosh() β Tensor
See torch.acosh() | torch.tensors#torch.Tensor.acosh |
acosh_() β Tensor
In-place version of acosh() | torch.tensors#torch.Tensor.acosh_ |
acos_() β Tensor
In-place version of acos() | torch.tensors#torch.Tensor.acos_ |
add(other, *, alpha=1) β Tensor
Adds a scalar or tensor to the self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used. When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor. See torch.add() | torch.tensors#torch.Tensor.add |
addbmm(batch1, batch2, *, beta=1, alpha=1) β Tensor
See torch.addbmm() | torch.tensors#torch.Tensor.addbmm |
addbmm_(batch1, batch2, *, beta=1, alpha=1) β Tensor
In-place version of addbmm() | torch.tensors#torch.Tensor.addbmm_ |
addcdiv(tensor1, tensor2, *, value=1) β Tensor
See torch.addcdiv() | torch.tensors#torch.Tensor.addcdiv |
addcdiv_(tensor1, tensor2, *, value=1) β Tensor
In-place version of addcdiv() | torch.tensors#torch.Tensor.addcdiv_ |
addcmul(tensor1, tensor2, *, value=1) β Tensor
See torch.addcmul() | torch.tensors#torch.Tensor.addcmul |
addcmul_(tensor1, tensor2, *, value=1) β Tensor
In-place version of addcmul() | torch.tensors#torch.Tensor.addcmul_ |
addmm(mat1, mat2, *, beta=1, alpha=1) β Tensor
See torch.addmm() | torch.tensors#torch.Tensor.addmm |
addmm_(mat1, mat2, *, beta=1, alpha=1) β Tensor
In-place version of addmm() | torch.tensors#torch.Tensor.addmm_ |
addmv(mat, vec, *, beta=1, alpha=1) β Tensor
See torch.addmv() | torch.tensors#torch.Tensor.addmv |
addmv_(mat, vec, *, beta=1, alpha=1) β Tensor
In-place version of addmv() | torch.tensors#torch.Tensor.addmv_ |
addr(vec1, vec2, *, beta=1, alpha=1) β Tensor
See torch.addr() | torch.tensors#torch.Tensor.addr |
addr_(vec1, vec2, *, beta=1, alpha=1) β Tensor
In-place version of addr() | torch.tensors#torch.Tensor.addr_ |
add_(other, *, alpha=1) β Tensor
In-place version of add() | torch.tensors#torch.Tensor.add_ |
align_as(other) β Tensor
Permutes the dimensions of the self tensor to match the dimension order in the other tensor, adding size-one dims for any new names. This operation is useful for explicit broadcasting by names (see examples). All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor. All dimension names of self must be present in other.names. other may contain named dimensions that are not in self.names; the output tensor has a size-one dimension for each of those new names. To align a tensor to a specific order, use align_to(). Examples: # Example 1: Applying a mask
>>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H')
>>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C'))
>>> imgs.masked_fill_(mask.align_as(imgs), 0)
# Example 2: Applying a per-channel-scale
>>> def scale_channels(input, scale):
...     scale = scale.refine_names('C')
...     return input * scale.align_as(input)
>>> num_channels = 3
>>> scale = torch.randn(num_channels, names=('C',))
>>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C'))
>>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W'))
>>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D'))
# scale_channels is agnostic to the dimension order of the input
>>> scale_channels(imgs, scale)
>>> scale_channels(more_imgs, scale)
>>> scale_channels(videos, scale)
Warning The named tensor API is experimental and subject to change. | torch.named_tensor#torch.Tensor.align_as |
align_to(*names) [source]
Permutes the dimensions of the self tensor to match the order specified in names, adding size-one dims for any new names. All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor. All dimension names of self must be present in names. names may contain additional names that are not in self.names; the output tensor has a size-one dimension for each of those new names. names may contain up to one Ellipsis (...). The Ellipsis is expanded to be equal to all dimension names of self that are not mentioned in names, in the order that they appear in self. Python 2 does not support Ellipsis but one may use a string literal instead ('...'). Parameters
names (iterable of str) β The desired dimension ordering of the output tensor. May contain up to one Ellipsis that is expanded to all unmentioned dim names of self. Examples: >>> tensor = torch.randn(2, 2, 2, 2, 2, 2)
>>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')
# Move the F and E dims to the front while keeping the rest in order
>>> named_tensor.align_to('F', 'E', ...)
Warning The named tensor API is experimental and subject to change. | torch.named_tensor#torch.Tensor.align_to |
all(dim=None, keepdim=False) β Tensor
See torch.all() | torch.tensors#torch.Tensor.all |
allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) β Tensor
See torch.allclose() | torch.tensors#torch.Tensor.allclose |
amax(dim=None, keepdim=False) β Tensor
See torch.amax() | torch.tensors#torch.Tensor.amax |
amin(dim=None, keepdim=False) β Tensor
See torch.amin() | torch.tensors#torch.Tensor.amin |
angle() β Tensor
See torch.angle() | torch.tensors#torch.Tensor.angle |
any(dim=None, keepdim=False) β Tensor
See torch.any() | torch.tensors#torch.Tensor.any |
apply_(callable) β Tensor
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable. Note This function only works with CPU tensors and should not be used in code sections that require high performance. | torch.tensors#torch.Tensor.apply_ |
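A minimal sketch (CPU tensor; the callable receives and returns Python numbers): >>> x = torch.tensor([1., 2., 3.])
>>> x.apply_(lambda v: v * 2)   # modifies x in place and returns it
tensor([2., 4., 6.])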
arccos() β Tensor
See torch.arccos() | torch.tensors#torch.Tensor.arccos |
arccosh() → Tensor
Alias for acosh(). See torch.arccosh() | torch.tensors#torch.Tensor.arccosh |
arccosh_() → Tensor
Alias for acosh_(). In-place version of arccosh() | torch.tensors#torch.Tensor.arccosh_ |
arccos_() β Tensor
In-place version of arccos() | torch.tensors#torch.Tensor.arccos_ |
arcsin() β Tensor
See torch.arcsin() | torch.tensors#torch.Tensor.arcsin |
arcsinh() β Tensor
See torch.arcsinh() | torch.tensors#torch.Tensor.arcsinh |
arcsinh_() β Tensor
In-place version of arcsinh() | torch.tensors#torch.Tensor.arcsinh_ |
arcsin_() β Tensor
In-place version of arcsin() | torch.tensors#torch.Tensor.arcsin_ |
arctan() β Tensor
See torch.arctan() | torch.tensors#torch.Tensor.arctan |
arctanh() β Tensor
See torch.arctanh() | torch.tensors#torch.Tensor.arctanh |
arctanh_() → Tensor
In-place version of arctanh() | torch.tensors#torch.Tensor.arctanh_ |
arctan_() β Tensor
In-place version of arctan() | torch.tensors#torch.Tensor.arctan_ |
argmax(dim=None, keepdim=False) β LongTensor
See torch.argmax() | torch.tensors#torch.Tensor.argmax |
argmin(dim=None, keepdim=False) β LongTensor
See torch.argmin() | torch.tensors#torch.Tensor.argmin |
argsort(dim=-1, descending=False) β LongTensor
See torch.argsort() | torch.tensors#torch.Tensor.argsort |
asin() β Tensor
See torch.asin() | torch.tensors#torch.Tensor.asin |
asinh() β Tensor
See torch.asinh() | torch.tensors#torch.Tensor.asinh |
asinh_() β Tensor
In-place version of asinh() | torch.tensors#torch.Tensor.asinh_ |
asin_() β Tensor
In-place version of asin() | torch.tensors#torch.Tensor.asin_ |
as_strided(size, stride, storage_offset=0) β Tensor
See torch.as_strided() | torch.tensors#torch.Tensor.as_strided |
as_subclass(cls) β Tensor
Makes a cls instance with the same data pointer as self. Changes in the output mirror changes in self, and the output stays attached to the autograd graph. cls must be a subclass of Tensor. | torch.tensors#torch.Tensor.as_subclass |
atan() β Tensor
See torch.atan() | torch.tensors#torch.Tensor.atan |
atan2(other) β Tensor
See torch.atan2() | torch.tensors#torch.Tensor.atan2 |
atan2_(other) β Tensor
In-place version of atan2() | torch.tensors#torch.Tensor.atan2_ |
atanh() β Tensor
See torch.atanh() | torch.tensors#torch.Tensor.atanh |
atanh_() → Tensor
In-place version of atanh() | torch.tensors#torch.Tensor.atanh_ |
atan_() β Tensor
In-place version of atan() | torch.tensors#torch.Tensor.atan_ |
backward(gradient=None, retain_graph=None, create_graph=False, inputs=None) [source]
Computes the gradient of current tensor w.r.t. graph leaves. The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self. This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it. See Default gradient layouts for details on the memory layout of accumulated gradients. Note If you run any forward ops, create gradient, and/or call backward in a user-specified CUDA stream context, see Stream semantics of backward passes. Parameters
gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don't require grad. If a None value would be acceptable then this argument is optional.
retain_graph (bool, optional) β If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) β If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be accumulated into .grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the current tensor. All the provided inputs must be leaf Tensors. | torch.autograd#torch.Tensor.backward |
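A minimal sketch of the common scalar case: >>> x = torch.tensor([2., 3.], requires_grad=True)
>>> out = (x ** 2).sum()
>>> out.backward()
>>> x.grad
tensor([4., 6.])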
baddbmm(batch1, batch2, *, beta=1, alpha=1) β Tensor
See torch.baddbmm() | torch.tensors#torch.Tensor.baddbmm |
baddbmm_(batch1, batch2, *, beta=1, alpha=1) β Tensor
In-place version of baddbmm() | torch.tensors#torch.Tensor.baddbmm_ |
bernoulli(*, generator=None) β Tensor
Returns a result tensor where each result[i] is independently sampled from \text{Bernoulli}(\texttt{self[i]}). self must have floating point dtype, and the result will have the same dtype. See torch.bernoulli() | torch.tensors#torch.Tensor.bernoulli |
bernoulli_()
bernoulli_(p=0.5, *, generator=None) β Tensor
Fills each location of self with an independent sample from \text{Bernoulli}(\texttt{p}). self can have integral dtype.
bernoulli_(p_tensor, *, generator=None) β Tensor
p_tensor should be a tensor containing probabilities to be used for drawing the binary random number. The i-th element of the self tensor will be set to a value sampled from \text{Bernoulli}(\texttt{p\_tensor[i]}). self can have integral dtype, but p_tensor must have floating point dtype.
See also bernoulli() and torch.bernoulli() | torch.tensors#torch.Tensor.bernoulli_ |
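A short sketch (draws are random; the output shown is illustrative): >>> t = torch.zeros(5)
>>> t.bernoulli_(0.75)   # each entry becomes 1 with probability 0.75
tensor([1., 0., 1., 1., 1.])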
bfloat16(memory_format=torch.preserve_format) β Tensor
self.bfloat16() is equivalent to self.to(torch.bfloat16). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.bfloat16 |
bincount(weights=None, minlength=0) β Tensor
See torch.bincount() | torch.tensors#torch.Tensor.bincount |
bitwise_and() β Tensor
See torch.bitwise_and() | torch.tensors#torch.Tensor.bitwise_and |
bitwise_and_() β Tensor
In-place version of bitwise_and() | torch.tensors#torch.Tensor.bitwise_and_ |
bitwise_not() β Tensor
See torch.bitwise_not() | torch.tensors#torch.Tensor.bitwise_not |
bitwise_not_() β Tensor
In-place version of bitwise_not() | torch.tensors#torch.Tensor.bitwise_not_ |
bitwise_or() β Tensor
See torch.bitwise_or() | torch.tensors#torch.Tensor.bitwise_or |
bitwise_or_() β Tensor
In-place version of bitwise_or() | torch.tensors#torch.Tensor.bitwise_or_ |
bitwise_xor() β Tensor
See torch.bitwise_xor() | torch.tensors#torch.Tensor.bitwise_xor |
bitwise_xor_() β Tensor
In-place version of bitwise_xor() | torch.tensors#torch.Tensor.bitwise_xor_ |
bmm(batch2) β Tensor
See torch.bmm() | torch.tensors#torch.Tensor.bmm |
bool(memory_format=torch.preserve_format) β Tensor
self.bool() is equivalent to self.to(torch.bool). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.bool |
broadcast_to(shape) β Tensor
See torch.broadcast_to(). | torch.tensors#torch.Tensor.broadcast_to |
byte(memory_format=torch.preserve_format) β Tensor
self.byte() is equivalent to self.to(torch.uint8). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.byte |
cauchy_(median=0, sigma=1, *, generator=None) β Tensor
Fills the tensor with numbers drawn from the Cauchy distribution: f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2} | torch.tensors#torch.Tensor.cauchy_ |
ceil() β Tensor
See torch.ceil() | torch.tensors#torch.Tensor.ceil |
ceil_() β Tensor
In-place version of ceil() | torch.tensors#torch.Tensor.ceil_ |
char(memory_format=torch.preserve_format) β Tensor
self.char() is equivalent to self.to(torch.int8). See to(). Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.char |
cholesky(upper=False) β Tensor
See torch.cholesky() | torch.tensors#torch.Tensor.cholesky |
cholesky_inverse(upper=False) β Tensor
See torch.cholesky_inverse() | torch.tensors#torch.Tensor.cholesky_inverse |
cholesky_solve(input2, upper=False) β Tensor
See torch.cholesky_solve() | torch.tensors#torch.Tensor.cholesky_solve |
chunk(chunks, dim=0) β List of Tensors
See torch.chunk() | torch.tensors#torch.Tensor.chunk |
clamp(min, max) β Tensor
See torch.clamp() | torch.tensors#torch.Tensor.clamp |
clamp_(min, max) β Tensor
In-place version of clamp() | torch.tensors#torch.Tensor.clamp_ |
clip(min, max) β Tensor
Alias for clamp(). | torch.tensors#torch.Tensor.clip |
clip_(min, max) β Tensor
Alias for clamp_(). | torch.tensors#torch.Tensor.clip_ |
clone(*, memory_format=torch.preserve_format) β Tensor
See torch.clone() | torch.tensors#torch.Tensor.clone |
coalesce() β Tensor
Returns a coalesced copy of self if self is an uncoalesced tensor. Returns self if self is a coalesced tensor. Warning Throws an error if self is not a sparse COO tensor. | torch.sparse#torch.Tensor.coalesce |
conj() β Tensor
See torch.conj() | torch.tensors#torch.Tensor.conj |
contiguous(memory_format=torch.contiguous_format) β Tensor
Returns a tensor that is contiguous in memory and contains the same data as the self tensor. If the self tensor is already in the specified memory format, this function returns the self tensor. Parameters
memory_format (torch.memory_format, optional) β the desired memory format of returned Tensor. Default: torch.contiguous_format. | torch.tensors#torch.Tensor.contiguous |
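A minimal sketch using a transposed view, which is typically non-contiguous: >>> x = torch.arange(6).reshape(2, 3).t()
>>> x.is_contiguous()
False
>>> x.contiguous().is_contiguous()
True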
copysign(other) β Tensor
See torch.copysign() | torch.tensors#torch.Tensor.copysign |
copysign_(other) β Tensor
In-place version of copysign() | torch.tensors#torch.Tensor.copysign_ |
copy_(src, non_blocking=False) β Tensor
Copies the elements from src into self tensor and returns self. The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device. Parameters
src (Tensor) β the source tensor to copy from
non_blocking (bool) β if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect. | torch.tensors#torch.Tensor.copy_ |
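A minimal sketch with broadcasting (shapes arbitrary): >>> dst = torch.zeros(2, 3)
>>> dst.copy_(torch.tensor([1., 2., 3.]))   # src is broadcast across dim 0
tensor([[1., 2., 3.],
        [1., 2., 3.]])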
cos() β Tensor
See torch.cos() | torch.tensors#torch.Tensor.cos |
cosh() β Tensor
See torch.cosh() | torch.tensors#torch.Tensor.cosh |