doc_content | doc_id |
|---|---|
sgn_() → Tensor
In-place version of sgn() | torch.tensors#torch.Tensor.sgn_ |
share_memory_() [source]
Moves the underlying storage to shared memory. This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized. | torch.tensors#torch.Tensor.share_memory_ |
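A minimal sketch of the behavior described above, using `Tensor.is_shared()` to observe the effect; the multiprocessing side (e.g. `torch.multiprocessing`) is omitted:

```python
import torch

# share_memory_() moves the tensor's storage into shared memory so worker
# processes can observe in-place updates. It returns self, and calling it
# again on an already-shared tensor is a no-op.
t = torch.zeros(3)
print(t.is_shared())   # False
r = t.share_memory_()
print(r is t)          # True (in-place, returns self)
print(t.is_shared())   # True
```

Remember that tensors in shared memory cannot be resized afterwards.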
short(memory_format=torch.preserve_format) → Tensor
self.short() is equivalent to self.to(torch.int16). See to(). Parameters
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.short |
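A quick check of the documented equivalence between short() and to(torch.int16):

```python
import torch

# short() is shorthand for to(torch.int16); conversion truncates toward zero.
x = torch.tensor([1.7, -2.3, 300.0])
a = x.short()
b = x.to(torch.int16)
print(a.dtype)            # torch.int16
print(torch.equal(a, b))  # True
```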
sigmoid() → Tensor
See torch.sigmoid() | torch.tensors#torch.Tensor.sigmoid |
sigmoid_() → Tensor
In-place version of sigmoid() | torch.tensors#torch.Tensor.sigmoid_ |
sign() → Tensor
See torch.sign() | torch.tensors#torch.Tensor.sign |
signbit() → Tensor
See torch.signbit() | torch.tensors#torch.Tensor.signbit |
sign_() → Tensor
In-place version of sign() | torch.tensors#torch.Tensor.sign_ |
sin() → Tensor
See torch.sin() | torch.tensors#torch.Tensor.sin |
sinc() → Tensor
See torch.sinc() | torch.tensors#torch.Tensor.sinc |
sinc_() → Tensor
In-place version of sinc() | torch.tensors#torch.Tensor.sinc_ |
sinh() → Tensor
See torch.sinh() | torch.tensors#torch.Tensor.sinh |
sinh_() → Tensor
In-place version of sinh() | torch.tensors#torch.Tensor.sinh_ |
sin_() → Tensor
In-place version of sin() | torch.tensors#torch.Tensor.sin_ |
size() → torch.Size
Returns the size of the self tensor. The returned value is a subclass of tuple. Example: >>> torch.empty(3, 4, 5).size()
torch.Size([3, 4, 5]) | torch.tensors#torch.Tensor.size |
slogdet() -> (Tensor, Tensor)
See torch.slogdet() | torch.tensors#torch.Tensor.slogdet |
solve(A) -> (Tensor, Tensor)
See torch.solve() | torch.tensors#torch.Tensor.solve |
sort(dim=-1, descending=False) -> (Tensor, LongTensor)
See torch.sort() | torch.tensors#torch.Tensor.sort |
sparse_dim() → int
Return the number of sparse dimensions in a sparse tensor self. Warning Throws an error if self is not a sparse tensor. See also Tensor.dense_dim() and hybrid tensors. | torch.sparse#torch.Tensor.sparse_dim |
sparse_mask(mask) → Tensor
Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask. The values of the mask sparse tensor are ignored. self and mask tensors must have the same shape. Note The returned sparse tensor has the same indices as the sparse tensor mask, even when the corresponding values in self are zeros. Parameters
mask (Tensor) – a sparse tensor whose indices are used as a filter Example: >>> nse = 5
>>> dims = (5, 5, 2, 2)
>>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),
... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)
>>> V = torch.randn(nse, dims[2], dims[3])
>>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
[0, 1, 4, 3]]),
values=tensor([[[ 1.6550, 0.2397],
[-0.1611, -0.0779]],
[[ 0.2326, -1.0558],
[ 1.4711, 1.9678]],
[[-0.5138, -0.0411],
[ 1.9417, 0.5158]],
[[ 0.0793, 0.0036],
[-0.2569, -0.1055]]]),
size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo) | torch.sparse#torch.Tensor.sparse_mask |
sparse_resize_(size, sparse_dim, dense_dim) → Tensor
Resizes self sparse tensor to the desired size and the number of sparse and dense dimensions. Note If the number of specified elements in self is zero, then size, sparse_dim, and dense_dim can be any size and positive integers such that len(size) == sparse_dim +
dense_dim. If self specifies one or more elements, however, then each dimension in size must not be smaller than the corresponding dimension of self, sparse_dim must equal the number of sparse dimensions in self, and dense_dim must equal the number of dense dimensions in self. Warning Throws an error if self is not a sparse tensor. Parameters
size (torch.Size) – the desired size. If self is a non-empty sparse tensor, the desired size cannot be smaller than the original size.
sparse_dim (int) – the number of sparse dimensions
dense_dim (int) – the number of dense dimensions | torch.sparse#torch.Tensor.sparse_resize_ |
sparse_resize_and_clear_(size, sparse_dim, dense_dim) → Tensor
Removes all specified elements from a sparse tensor self and resizes self to the desired size and the number of sparse and dense dimensions. Parameters
size (torch.Size) – the desired size.
sparse_dim (int) – the number of sparse dimensions
dense_dim (int) – the number of dense dimensions | torch.sparse#torch.Tensor.sparse_resize_and_clear_ |
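A small sketch of the empty-tensor case described above (zero specified elements, so any consistent size/sparse_dim/dense_dim is accepted); the shapes here are arbitrary illustrations:

```python
import torch

# Build an empty (nnz == 0) sparse COO tensor with 2 sparse dims, 0 dense dims.
indices = torch.empty((2, 0), dtype=torch.int64)
values = torch.empty((0,))
s = torch.sparse_coo_tensor(indices, values, (3, 3))
print(s.sparse_dim(), s.dense_dim())  # 2 0

# With zero specified elements, sparse_resize_ only requires
# len(size) == sparse_dim + dense_dim.
s.sparse_resize_((4, 5), 2, 0)
print(tuple(s.size()))                # (4, 5)
```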
split(split_size, dim=0) [source]
See torch.split() | torch.tensors#torch.Tensor.split |
sqrt() → Tensor
See torch.sqrt() | torch.tensors#torch.Tensor.sqrt |
sqrt_() → Tensor
In-place version of sqrt() | torch.tensors#torch.Tensor.sqrt_ |
square() → Tensor
See torch.square() | torch.tensors#torch.Tensor.square |
square_() → Tensor
In-place version of square() | torch.tensors#torch.Tensor.square_ |
squeeze(dim=None) → Tensor
See torch.squeeze() | torch.tensors#torch.Tensor.squeeze |
squeeze_(dim=None) → Tensor
In-place version of squeeze() | torch.tensors#torch.Tensor.squeeze_ |
std(dim=None, unbiased=True, keepdim=False) → Tensor
See torch.std() | torch.tensors#torch.Tensor.std |
stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None) [source]
See torch.stft() Warning This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result. | torch.tensors#torch.Tensor.stft |
storage() → torch.Storage
Returns the underlying storage. | torch.tensors#torch.Tensor.storage |
storage_offset() → int
Returns self tensor's offset in the underlying storage in terms of number of storage elements (not bytes). Example: >>> x = torch.tensor([1, 2, 3, 4, 5])
>>> x.storage_offset()
0
>>> x[3:].storage_offset()
3 | torch.tensors#torch.Tensor.storage_offset |
storage_type() → type
Returns the type of the underlying storage. | torch.tensors#torch.Tensor.storage_type |
stride(dim) → tuple or int
Returns the stride of self tensor. Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim. Parameters
dim (int, optional) – the desired dimension in which stride is required Example: >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)
>>> x.stride(0)
5
>>> x.stride(-1)
1 | torch.tensors#torch.Tensor.stride |
sub(other, *, alpha=1) → Tensor
See torch.sub(). | torch.tensors#torch.Tensor.sub |
subtract(other, *, alpha=1) → Tensor
See torch.subtract(). | torch.tensors#torch.Tensor.subtract |
subtract_(other, *, alpha=1) → Tensor
In-place version of subtract(). | torch.tensors#torch.Tensor.subtract_ |
sub_(other, *, alpha=1) → Tensor
In-place version of sub() | torch.tensors#torch.Tensor.sub_ |
sum(dim=None, keepdim=False, dtype=None) → Tensor
See torch.sum() | torch.tensors#torch.Tensor.sum |
sum_to_size(*size) → Tensor
Sum this tensor to size. size must be broadcastable to this tensor size. Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. | torch.tensors#torch.Tensor.sum_to_size |
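Since sum_to_size has no example above, here is a short sketch: it sums over exactly the dimensions that broadcasting would have expanded to reach this tensor's shape.

```python
import torch

x = torch.ones(2, 3)
y = x.sum_to_size(1, 3)   # sum over dim 0, kept as size 1
print(y)                  # tensor([[2., 2., 2.]])
z = x.sum_to_size(2, 1)   # sum over dim 1, kept as size 1
print(z.shape)            # torch.Size([2, 1])
```

This is the reduction that pairs with broadcasting, e.g. when accumulating gradients for a broadcast operand.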
svd(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)
See torch.svd() | torch.tensors#torch.Tensor.svd |
swapaxes(axis0, axis1) → Tensor
See torch.swapaxes() | torch.tensors#torch.Tensor.swapaxes |
swapdims(dim0, dim1) → Tensor
See torch.swapdims() | torch.tensors#torch.Tensor.swapdims |
symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)
See torch.symeig() | torch.tensors#torch.Tensor.symeig |
T
This Tensor with its dimensions reversed. If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, ..., 0). | torch.tensors#torch.Tensor.T |
t() → Tensor
See torch.t() | torch.tensors#torch.Tensor.t |
take(indices) → Tensor
See torch.take() | torch.tensors#torch.Tensor.take |
tan() → Tensor
See torch.tan() | torch.tensors#torch.Tensor.tan |
tanh() → Tensor
See torch.tanh() | torch.tensors#torch.Tensor.tanh |
tanh_() → Tensor
In-place version of tanh() | torch.tensors#torch.Tensor.tanh_ |
tan_() → Tensor
In-place version of tan() | torch.tensors#torch.Tensor.tan_ |
tensor_split(indices_or_sections, dim=0) → List of Tensors
See torch.tensor_split() | torch.tensors#torch.Tensor.tensor_split |
tile(*reps) → Tensor
See torch.tile() | torch.tensors#torch.Tensor.tile |
to(*args, **kwargs) → Tensor
Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs). Note If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to:
to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
Returns a Tensor with the specified dtype Args:
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion. Args:
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
to(other, non_blocking=False, copy=False) → Tensor
Returns a Tensor with same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example: >>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0') | torch.tensors#torch.Tensor.to |
tolist() → list or number
Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary. This operation is not differentiable. Examples: >>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
[-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803 | torch.tensors#torch.Tensor.tolist |
topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
See torch.topk() | torch.tensors#torch.Tensor.topk |
to_dense() → Tensor
Creates a strided copy of self. Warning Throws an error if self is a strided tensor. Example: >>> s = torch.sparse_coo_tensor(
... torch.tensor([[1, 1],
... [0, 2]]),
... torch.tensor([9, 10]),
... size=(3, 3))
>>> s.to_dense()
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]]) | torch.sparse#torch.Tensor.to_dense |
to_mkldnn() → Tensor
Returns a copy of the tensor in torch.mkldnn layout. | torch.tensors#torch.Tensor.to_mkldnn |
to_sparse(sparseDims) → Tensor
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format. Parameters
sparseDims (int, optional) – the number of sparse dimensions to include in the new sparse tensor Example: >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
[0, 2]]),
values=tensor([ 9, 10]),
size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
values=tensor([[ 9, 0, 10]]),
size=(3, 3), nnz=1, layout=torch.sparse_coo) | torch.sparse#torch.Tensor.to_sparse |
trace() → Tensor
See torch.trace() | torch.tensors#torch.Tensor.trace |
transpose(dim0, dim1) → Tensor
See torch.transpose() | torch.tensors#torch.Tensor.transpose |
transpose_(dim0, dim1) → Tensor
In-place version of transpose() | torch.tensors#torch.Tensor.transpose_ |
triangular_solve(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
See torch.triangular_solve() | torch.tensors#torch.Tensor.triangular_solve |
tril(k=0) → Tensor
See torch.tril() | torch.tensors#torch.Tensor.tril |
tril_(k=0) → Tensor
In-place version of tril() | torch.tensors#torch.Tensor.tril_ |
triu(k=0) → Tensor
See torch.triu() | torch.tensors#torch.Tensor.triu |
triu_(k=0) → Tensor
In-place version of triu() | torch.tensors#torch.Tensor.triu_ |
true_divide(value) → Tensor
See torch.true_divide() | torch.tensors#torch.Tensor.true_divide |
true_divide_(value) → Tensor
In-place version of true_divide() | torch.tensors#torch.Tensor.true_divide_ |
trunc() → Tensor
See torch.trunc() | torch.tensors#torch.Tensor.trunc |
trunc_() → Tensor
In-place version of trunc() | torch.tensors#torch.Tensor.trunc_ |
type(dtype=None, non_blocking=False, **kwargs) → str or Tensor
Returns the type if dtype is not provided, else casts this object to the specified type. If this is already of the correct type, no copy is performed and the original object is returned. Parameters
dtype (type or string) – The desired type
non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated. | torch.tensors#torch.Tensor.type |
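A brief sketch of the two modes of type() described above (query vs. cast):

```python
import torch

x = torch.zeros(2)
print(x.type())         # 'torch.FloatTensor' (no argument: returns type string)
y = x.type(torch.int32) # with a dtype: casts
print(y.dtype)          # torch.int32
z = x.type('torch.DoubleTensor')  # a type string also works
print(z.dtype)          # torch.float64
```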
type_as(tensor) → Tensor
Returns this tensor cast to the type of the given tensor. This is a no-op if the tensor is already of the correct type. This is equivalent to self.type(tensor.type()) Parameters
tensor (Tensor) – the tensor which has the desired type | torch.tensors#torch.Tensor.type_as |
t_() → Tensor
In-place version of t() | torch.tensors#torch.Tensor.t_ |
unbind(dim=0) → seq
See torch.unbind() | torch.tensors#torch.Tensor.unbind |
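Since the unbind entry carries no example, a short sketch of what it returns:

```python
import torch

# unbind removes a dimension and returns a tuple of slices along it.
x = torch.tensor([[1, 2], [3, 4], [5, 6]])
rows = x.unbind(0)   # same as torch.unbind(x, dim=0)
print(len(rows))     # 3
print(rows[0])       # tensor([1, 2])
cols = x.unbind(1)
print(cols[0])       # tensor([1, 3, 5])
```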
unflatten(dim, sizes) [source]
Expands the dimension dim of the self tensor over multiple dimensions of sizes given by sizes.
sizes is the new shape of the unflattened dimension and it can be a Tuple[int] as well as torch.Size if self is a Tensor, or namedshape (Tuple[(name: str, size: int)]) if self is a NamedTensor. The total number of elements in sizes must match the number of elements in the original dim being unflattened. Parameters
dim (Union[int, str]) – Dimension to unflatten
sizes (Union[Tuple[int] or torch.Size, Tuple[Tuple[str, int]]]) – New shape of the unflattened dimension Examples >>> torch.randn(3, 4, 1).unflatten(1, (2, 2)).shape
torch.Size([3, 2, 2, 1])
>>> torch.randn(2, 4, names=('A', 'B')).unflatten('B', (('B1', 2), ('B2', 2)))
tensor([[[-1.1772, 0.0180],
[ 0.2412, 0.1431]],
[[-1.1819, -0.8899],
[ 1.5813, 0.2274]]], names=('A', 'B1', 'B2')) Warning The named tensor API is experimental and subject to change. | torch.named_tensor#torch.Tensor.unflatten |
unfold(dimension, size, step) → Tensor
Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension. Step between two slices is given by step. If sizedim is the size of dimension dimension for self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1. An additional dimension of size size is appended in the returned tensor. Parameters
dimension (int) – dimension in which unfolding happens
size (int) – the size of each slice that is unfolded
step (int) – the step between each slice Example: >>> x = torch.arange(1., 8)
>>> x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1., 2.],
[ 2., 3.],
[ 3., 4.],
[ 4., 5.],
[ 5., 6.],
[ 6., 7.]])
>>> x.unfold(0, 2, 2)
tensor([[ 1., 2.],
[ 3., 4.],
[ 5., 6.]]) | torch.tensors#torch.Tensor.unfold |
uniform_(from=0, to=1) → Tensor
Fills self tensor with numbers sampled from the continuous uniform distribution: P(x) = \dfrac{1}{\text{to} - \text{from}} | torch.tensors#torch.Tensor.uniform_ |
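A sketch of the sampling bounds implied by the density above (samples fall in [from, to)); note `from` is a Python keyword, so the bounds are passed positionally:

```python
import torch

torch.manual_seed(0)
t = torch.empty(1000).uniform_(-2.0, 3.0)  # fills in place with U(-2, 3)
print(bool(t.min() >= -2.0))  # True
print(bool(t.max() < 3.0))    # True
# The sample mean should land near the midpoint (-2 + 3) / 2 = 0.5.
```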
unique(sorted=True, return_inverse=False, return_counts=False, dim=None) [source]
Returns the unique elements of the input tensor. See torch.unique() | torch.tensors#torch.Tensor.unique |
unique_consecutive(return_inverse=False, return_counts=False, dim=None) [source]
Eliminates all but the first element from every consecutive group of equivalent elements. See torch.unique_consecutive() | torch.tensors#torch.Tensor.unique_consecutive |
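A short sketch contrasting this with unique(): only runs of equal neighbors are collapsed, so values repeated in separate runs survive.

```python
import torch

x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
print(x.unique_consecutive())  # tensor([1, 2, 3, 1, 2])
vals, counts = x.unique_consecutive(return_counts=True)
print(counts)                  # tensor([2, 2, 1, 2, 1])
```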
unsqueeze(dim) → Tensor
See torch.unsqueeze() | torch.tensors#torch.Tensor.unsqueeze |
unsqueeze_(dim) → Tensor
In-place version of unsqueeze() | torch.tensors#torch.Tensor.unsqueeze_ |
values() → Tensor
Return the values tensor of a sparse COO tensor. Warning Throws an error if self is not a sparse COO tensor. See also Tensor.indices(). Note This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details. | torch.sparse#torch.Tensor.values |
var(dim=None, unbiased=True, keepdim=False) → Tensor
See torch.var() | torch.tensors#torch.Tensor.var |
vdot(other) → Tensor
See torch.vdot() | torch.tensors#torch.Tensor.vdot |
view(*shape) → Tensor
Returns a new tensor with the same data as the self tensor but of a different shape. The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, \dots, d+k that satisfy the following contiguity-like condition: \forall i = d, \dots, d+k-1, \quad \text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]
Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous()). When it is unclear whether a view() can be performed, it is advisable to use reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise. Parameters
shape (torch.Size or int...) – the desired size Example: >>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False
view(dtype) → Tensor
Returns a new tensor with the same data as the self tensor but of a different dtype. dtype must have the same number of bytes per element as self's dtype. Warning This overload is not supported by TorchScript, and using it in a TorchScript program will cause undefined behavior. Parameters
dtype (torch.dtype) – the desired dtype Example: >>> x = torch.randn(4, 4)
>>> x
tensor([[ 0.9482, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.dtype
torch.float32
>>> y = x.view(torch.int32)
>>> y
tensor([[ 1064483442, -1124191867, 1069546515, -1089989247],
[-1105482831, 1061112040, 1057999968, -1084397505],
[-1071760287, -1123489973, -1097310419, -1084649136],
[-1101533110, 1073668768, -1082790149, -1088634448]],
dtype=torch.int32)
>>> y[0, 0] = 1000000000
>>> x
tensor([[ 0.0047, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.view(torch.int16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Viewing a tensor as a new dtype with a different number of bytes per element is not supported. | torch.tensors#torch.Tensor.view |
view_as(other) → Tensor
View this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()). Please see view() for more information about view. Parameters
other (torch.Tensor) – The result tensor has the same size as other. | torch.tensors#torch.Tensor.view_as |
where(condition, y) → Tensor
self.where(condition, y) is equivalent to torch.where(condition, self, y). See torch.where() | torch.tensors#torch.Tensor.where |
xlogy(other) → Tensor
See torch.xlogy() | torch.tensors#torch.Tensor.xlogy |
xlogy_(other) → Tensor
In-place version of xlogy() | torch.tensors#torch.Tensor.xlogy_ |
zero_() → Tensor
Fills self tensor with zeros. | torch.tensors#torch.Tensor.zero_ |
torch.tensordot(a, b, dims=2, out=None) [source]
Returns a contraction of a and b over multiple dimensions. tensordot implements a generalized matrix product. Parameters
a (Tensor) – Left tensor to contract
b (Tensor) – Right tensor to contract
dims (int or Tuple[List[int]] containing two lists) – number of dimensions to contract or explicit lists of dimensions for a and b respectively. When called with a non-negative integer argument dims = d, and the number of dimensions of a and b is m and n, respectively, tensordot() computes r_{i_0,...,i_{m-d}, i_d,...,i_n} = \sum_{k_0,...,k_{d-1}} a_{i_0,...,i_{m-d},k_0,...,k_{d-1}} \times b_{k_0,...,k_{d-1}, i_d,...,i_n}.
When called with dims of the list form, the given dimensions will be contracted in place of the last d dimensions of a and the first d dimensions of b. The sizes in these dimensions must match, but tensordot() will deal with broadcasted dimensions. Examples: >>> a = torch.arange(60.).reshape(3, 4, 5)
>>> b = torch.arange(24.).reshape(4, 3, 2)
>>> torch.tensordot(a, b, dims=([1, 0], [0, 1]))
tensor([[4400., 4730.],
[4532., 4874.],
[4664., 5018.],
[4796., 5162.],
[4928., 5306.]])
>>> a = torch.randn(3, 4, 5, device='cuda')
>>> b = torch.randn(4, 5, 6, device='cuda')
>>> c = torch.tensordot(a, b, dims=2).cpu()
>>> c
tensor([[ 8.3504, -2.5436, 6.2922, 2.7556, -1.0732, 3.2741],
[ 3.3161, 0.0704, 5.0187, -0.4079, -4.3126, 4.8744],
[ 0.8223, 3.9445, 3.2168, -0.2400, 3.4117, 1.7780]])
>>> a = torch.randn(3, 5, 4, 6)
>>> b = torch.randn(6, 4, 5, 3)
>>> torch.tensordot(a, b, dims=([2, 1, 3], [1, 2, 0]))
tensor([[ 7.7193, -2.4867, -10.3204],
[ 1.5513, -14.4737, -6.5113],
[ -0.2850, 4.2573, -3.5997]]) | torch.generated.torch.tensordot#torch.tensordot |
torch.tensor_split(input, indices_or_sections, dim=0) → List of Tensors
Splits a tensor into multiple sub-tensors, all of which are views of input, along dimension dim according to the indices or number of sections specified by indices_or_sections. This function is based on NumPy's numpy.array_split(). Parameters
input (Tensor) – the tensor to split
indices_or_sections (Tensor, int or list or tuple of python:ints) –
If indices_or_sections is an integer n or a zero dimensional long tensor with value n, input is split into n sections along dimension dim. If input is divisible by n along dimension dim, each section will be of equal size, input.size(dim) / n. If input is not divisible by n, the sizes of the first int(input.size(dim) % n) sections will have size int(input.size(dim) / n) + 1, and the rest will have size int(input.size(dim) / n). If indices_or_sections is a list or tuple of ints, or a one-dimensional long tensor, then input is split along dimension dim at each of the indices in the list, tuple or tensor. For instance, indices_or_sections=[2, 3] and dim=0 would result in the tensors input[:2], input[2:3], and input[3:]. If indices_or_sections is a tensor, it must be a zero-dimensional or one-dimensional long tensor on the CPU.
dim (int, optional) – dimension along which to split the tensor. Default: 0
Example::
>>> x = torch.arange(8)
>>> torch.tensor_split(x, 3)
(tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7]))
>>> x = torch.arange(7)
>>> torch.tensor_split(x, 3)
(tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))
>>> torch.tensor_split(x, (1, 6))
(tensor([0]), tensor([1, 2, 3, 4, 5]), tensor([6]))
>>> x = torch.arange(14).reshape(2, 7)
>>> x
tensor([[ 0, 1, 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11, 12, 13]])
>>> torch.tensor_split(x, 3, dim=1)
(tensor([[0, 1, 2],
[7, 8, 9]]),
tensor([[ 3, 4],
[10, 11]]),
tensor([[ 5, 6],
[12, 13]]))
>>> torch.tensor_split(x, (1, 6), dim=1)
(tensor([[0],
[7]]),
tensor([[ 1, 2, 3, 4, 5],
[ 8, 9, 10, 11, 12]]),
tensor([[ 6],
[13]])) | torch.generated.torch.tensor_split#torch.tensor_split |
torch.tile(input, reps) → Tensor
Constructs a tensor by repeating the elements of input. The reps argument specifies the number of repetitions in each dimension. If reps specifies fewer dimensions than input has, then ones are prepended to reps until all dimensions are specified. For example, if input has shape (8, 6, 4, 2) and reps is (2, 2), then reps is treated as (1, 1, 2, 2). Analogously, if input has fewer dimensions than reps specifies, then input is treated as if it were unsqueezed at dimension zero until it has as many dimensions as reps specifies. For example, if input has shape (4, 2) and reps is (3, 3, 2, 2), then input is treated as if it had the shape (1, 1, 4, 2). Note This function is similar to NumPy's tile function. Parameters
input (Tensor) – the tensor whose elements to repeat.
reps (tuple) – the number of repetitions per dimension. Example: >>> x = torch.tensor([1, 2, 3])
>>> x.tile((2,))
tensor([1, 2, 3, 1, 2, 3])
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.tile(y, (2, 2))
tensor([[1, 2, 1, 2],
[3, 4, 3, 4],
[1, 2, 1, 2],
[3, 4, 3, 4]]) | torch.generated.torch.tile#torch.tile |
torch.topk(input, k, dim=None, largest=True, sorted=True, *, out=None) -> (Tensor, LongTensor)
Returns the k largest elements of the given input tensor along a given dimension. If dim is not given, the last dimension of the input is chosen. If largest is False then the k smallest elements are returned. A namedtuple of (values, indices) is returned, where the indices are the indices of the elements in the original input tensor. The boolean option sorted if True, will make sure that the returned k elements are themselves sorted Parameters
input (Tensor) – the input tensor.
k (int) – the k in "top-k"
dim (int, optional) – the dimension to sort along
largest (bool, optional) – controls whether to return largest or smallest elements
sorted (bool, optional) – controls whether to return the elements in sorted order Keyword Arguments
out (tuple, optional) – the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers Example: >>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.topk(x, 3)
torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2])) | torch.generated.torch.topk#torch.topk |
torch.default_generator Returns the default CPU torch.Generator | torch#torch.torch.default_generator |
class torch.device | torch.tensor_attributes#torch.torch.device |
class torch.dtype | torch.tensor_attributes#torch.torch.dtype |
class torch.finfo | torch.type_info#torch.torch.finfo |