igammac_(other) → Tensor In-place version of igammac()
torch.tensors#torch.Tensor.igammac_
igamma_(other) → Tensor In-place version of igamma()
torch.tensors#torch.Tensor.igamma_
imag
Returns a new tensor containing imaginary values of the self tensor. The returned tensor and self share the same underlying storage.
Warning: imag() is only supported for tensors with complex dtypes.
Example:
>>> x = torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.imag
tensor([ 0.3553, -0.7896, -0.0633, -0.8119])
torch.tensors#torch.Tensor.imag
index_add(tensor1, dim, index, tensor2) → Tensor Out-of-place version of torch.Tensor.index_add_(). tensor1 corresponds to self in torch.Tensor.index_add_().
torch.tensors#torch.Tensor.index_add
index_add_(dim, index, tensor) → Tensor
Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i-th row of tensor is added to the j-th row of self.
The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Note: This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
Parameters:
dim (int) – dimension along which to index
index (IntTensor or LongTensor) – indices of tensor to select from
tensor (Tensor) – the tensor containing values to add
Example:
>>> x = torch.ones(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_add_(0, index, t)
tensor([[ 2.,  3.,  4.],
        [ 1.,  1.,  1.],
        [ 8.,  9., 10.],
        [ 1.,  1.,  1.],
        [ 5.,  6.,  7.]])
torch.tensors#torch.Tensor.index_add_
index_copy(tensor1, dim, index, tensor2) → Tensor Out-of-place version of torch.Tensor.index_copy_(). tensor1 corresponds to self in torch.Tensor.index_copy_().
torch.tensors#torch.Tensor.index_copy
index_copy_(dim, index, tensor) → Tensor
Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i-th row of tensor is copied to the j-th row of self.
The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Note: If index contains duplicate entries, multiple elements from tensor will be copied to the same index of self. The result is nondeterministic since it depends on which copy occurs last.
Parameters:
dim (int) – dimension along which to index
index (LongTensor) – indices of tensor to select from
tensor (Tensor) – the tensor containing values to copy
Example:
>>> x = torch.zeros(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_copy_(0, index, t)
tensor([[ 1.,  2.,  3.],
        [ 0.,  0.,  0.],
        [ 7.,  8.,  9.],
        [ 0.,  0.,  0.],
        [ 4.,  5.,  6.]])
torch.tensors#torch.Tensor.index_copy_
index_fill(tensor1, dim, index, value) → Tensor Out-of-place version of torch.Tensor.index_fill_(). tensor1 corresponds to self in torch.Tensor.index_fill_().
torch.tensors#torch.Tensor.index_fill
index_fill_(dim, index, val) → Tensor
Fills the elements of the self tensor with value val by selecting the indices in the order given in index.
Parameters:
dim (int) – dimension along which to index
index (LongTensor) – indices of self tensor to fill in
val (float) – the value to fill with
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 2])
>>> x.index_fill_(1, index, -1)
tensor([[-1.,  2., -1.],
        [-1.,  5., -1.],
        [-1.,  8., -1.]])
torch.tensors#torch.Tensor.index_fill_
index_put(tensor1, indices, values, accumulate=False) → Tensor Out-of-place version of index_put_(). tensor1 corresponds to self in torch.Tensor.index_put_().
torch.tensors#torch.Tensor.index_put
index_put_(indices, values, accumulate=False) → Tensor
Puts values from the tensor values into the tensor self using the indices specified in indices (which is a tuple of Tensors). The expression tensor.index_put_(indices, values) is equivalent to tensor[indices] = values. Returns self.
If accumulate is True, the elements in values are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements.
Parameters:
indices (tuple of LongTensor) – tensors used to index into self.
values (Tensor) – tensor of same dtype as self.
accumulate (bool) – whether to accumulate into self
torch.tensors#torch.Tensor.index_put_
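A minimal sketch of index_put_ with accumulate=True; the variable names here are illustrative, not from the docs. With accumulate=True, duplicate index pairs add up instead of producing undefined results:

```python
import torch

x = torch.zeros(3, 3)
rows = torch.tensor([0, 0, 2])   # coordinate (0, 1) appears twice
cols = torch.tensor([1, 1, 2])
vals = torch.tensor([1.0, 2.0, 3.0])
# equivalent to x[rows, cols] += vals, but safe for duplicate indices
x.index_put_((rows, cols), vals, accumulate=True)
# x[0, 1] == 3.0 (1.0 + 2.0 accumulated), x[2, 2] == 3.0
```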
index_select(dim, index) → Tensor See torch.index_select()
torch.tensors#torch.Tensor.index_select
indices() → Tensor Return the indices tensor of a sparse COO tensor. Warning: Throws an error if self is not a sparse COO tensor. See also Tensor.values(). Note: This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
torch.sparse#torch.Tensor.indices
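For illustration, a small sketch of coalescing a sparse COO tensor before reading indices() and values(); the coordinates and values are made up for the example:

```python
import torch

# duplicate coordinate (1, 0): coalesce() merges it by summing its values
i = torch.tensor([[0, 1, 1],
                  [2, 0, 0]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()
print(s.indices())   # sorted coordinates, duplicates merged
print(s.values())    # tensor([3., 9.])
```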
inner(other) → Tensor See torch.inner().
torch.tensors#torch.Tensor.inner
int(memory_format=torch.preserve_format) → Tensor self.int() is equivalent to self.to(torch.int32). See to(). Parameters: memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.int
int_repr() → Tensor Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
torch.tensors#torch.Tensor.int_repr
inverse() → Tensor See torch.inverse()
torch.tensors#torch.Tensor.inverse
isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor See torch.isclose()
torch.tensors#torch.Tensor.isclose
isfinite() → Tensor See torch.isfinite()
torch.tensors#torch.Tensor.isfinite
isinf() → Tensor See torch.isinf()
torch.tensors#torch.Tensor.isinf
isnan() → Tensor See torch.isnan()
torch.tensors#torch.Tensor.isnan
isneginf() → Tensor See torch.isneginf()
torch.tensors#torch.Tensor.isneginf
isposinf() → Tensor See torch.isposinf()
torch.tensors#torch.Tensor.isposinf
isreal() → Tensor See torch.isreal()
torch.tensors#torch.Tensor.isreal
istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) [source] See torch.istft()
torch.tensors#torch.Tensor.istft
is_coalesced() → bool Returns True if self is a sparse COO tensor that is coalesced, False otherwise. Warning: Throws an error if self is not a sparse COO tensor. See coalesce() and uncoalesced tensors.
torch.sparse#torch.Tensor.is_coalesced
is_complex() → bool Returns True if the data type of self is a complex data type.
torch.tensors#torch.Tensor.is_complex
is_contiguous(memory_format=torch.contiguous_format) → bool Returns True if self tensor is contiguous in memory in the order specified by memory format. Parameters: memory_format (torch.memory_format, optional) – Specifies memory allocation order. Default: torch.contiguous_format.
torch.tensors#torch.Tensor.is_contiguous
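A quick sketch of how contiguity changes under view operations:

```python
import torch

x = torch.arange(6).reshape(2, 3)
print(x.is_contiguous())   # True: freshly allocated, row-major layout
y = x.t()                  # transpose shares storage; only the strides change
print(y.is_contiguous())   # False
z = y.contiguous()         # copies the data into new, contiguous memory
print(z.is_contiguous())   # True
```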
is_cuda Is True if the Tensor is stored on the GPU, False otherwise.
torch.tensors#torch.Tensor.is_cuda
is_floating_point() → bool Returns True if the data type of self is a floating point data type.
torch.tensors#torch.Tensor.is_floating_point
is_leaf
All Tensors that have requires_grad which is False will be leaf Tensors by convention.
For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().
Example:
>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it
torch.autograd#torch.Tensor.is_leaf
is_meta Is True if the Tensor is a meta tensor, False otherwise. Meta tensors are like normal tensors, but they carry no data.
torch.tensors#torch.Tensor.is_meta
is_pinned() Returns True if this tensor resides in pinned memory.
torch.tensors#torch.Tensor.is_pinned
is_quantized Is True if the Tensor is quantized, False otherwise.
torch.tensors#torch.Tensor.is_quantized
is_set_to(tensor) → bool Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
torch.tensors#torch.Tensor.is_set_to
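A small sketch of what "exact same memory" means here; note that merely sharing storage is not enough, since the size and stride must match too (variable names are illustrative):

```python
import torch

a = torch.zeros(3)
b = torch.zeros(3)       # equal values, but a separate storage
c = a                    # the very same tensor: same storage, offset, size, stride
d = a[:2]                # shares storage with a, but the size differs
print(a.is_set_to(b))    # False
print(a.is_set_to(c))    # True
print(a.is_set_to(d))    # False
```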
is_shared() [source] Checks if tensor is in shared memory. This is always True for CUDA tensors.
torch.tensors#torch.Tensor.is_shared
is_signed() → bool Returns True if the data type of self is a signed data type.
torch.tensors#torch.Tensor.is_signed
is_sparse Is True if the Tensor uses sparse storage layout, False otherwise.
torch.sparse#torch.Tensor.is_sparse
item() → number
Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist(). This operation is not differentiable.
Example:
>>> x = torch.tensor([1.0])
>>> x.item()
1.0
torch.tensors#torch.Tensor.item
kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor) See torch.kthvalue()
torch.tensors#torch.Tensor.kthvalue
lcm(other) → Tensor See torch.lcm()
torch.tensors#torch.Tensor.lcm
lcm_(other) → Tensor In-place version of lcm()
torch.tensors#torch.Tensor.lcm_
ldexp(other) → Tensor See torch.ldexp()
torch.tensors#torch.Tensor.ldexp
ldexp_(other) → Tensor In-place version of ldexp()
torch.tensors#torch.Tensor.ldexp_
le(other) → Tensor See torch.le().
torch.tensors#torch.Tensor.le
lerp(end, weight) → Tensor See torch.lerp()
torch.tensors#torch.Tensor.lerp
lerp_(end, weight) → Tensor In-place version of lerp()
torch.tensors#torch.Tensor.lerp_
less(other) → Tensor See torch.less().
torch.tensors#torch.Tensor.less
less_(other) → Tensor In-place version of less().
torch.tensors#torch.Tensor.less_
less_equal(other) → Tensor See torch.less_equal().
torch.tensors#torch.Tensor.less_equal
less_equal_(other) → Tensor In-place version of less_equal().
torch.tensors#torch.Tensor.less_equal_
le_(other) → Tensor In-place version of le().
torch.tensors#torch.Tensor.le_
lgamma() → Tensor See torch.lgamma()
torch.tensors#torch.Tensor.lgamma
lgamma_() → Tensor In-place version of lgamma()
torch.tensors#torch.Tensor.lgamma_
log() → Tensor See torch.log()
torch.tensors#torch.Tensor.log
log10() → Tensor See torch.log10()
torch.tensors#torch.Tensor.log10
log10_() → Tensor In-place version of log10()
torch.tensors#torch.Tensor.log10_
log1p() → Tensor See torch.log1p()
torch.tensors#torch.Tensor.log1p
log1p_() → Tensor In-place version of log1p()
torch.tensors#torch.Tensor.log1p_
log2() → Tensor See torch.log2()
torch.tensors#torch.Tensor.log2
log2_() → Tensor In-place version of log2()
torch.tensors#torch.Tensor.log2_
logaddexp(other) → Tensor See torch.logaddexp()
torch.tensors#torch.Tensor.logaddexp
logaddexp2(other) → Tensor See torch.logaddexp2()
torch.tensors#torch.Tensor.logaddexp2
logcumsumexp(dim) → Tensor See torch.logcumsumexp()
torch.tensors#torch.Tensor.logcumsumexp
logdet() → Tensor See torch.logdet()
torch.tensors#torch.Tensor.logdet
logical_and() → Tensor See torch.logical_and()
torch.tensors#torch.Tensor.logical_and
logical_and_() → Tensor In-place version of logical_and()
torch.tensors#torch.Tensor.logical_and_
logical_not() → Tensor See torch.logical_not()
torch.tensors#torch.Tensor.logical_not
logical_not_() → Tensor In-place version of logical_not()
torch.tensors#torch.Tensor.logical_not_
logical_or() → Tensor See torch.logical_or()
torch.tensors#torch.Tensor.logical_or
logical_or_() → Tensor In-place version of logical_or()
torch.tensors#torch.Tensor.logical_or_
logical_xor() → Tensor See torch.logical_xor()
torch.tensors#torch.Tensor.logical_xor
logical_xor_() → Tensor In-place version of logical_xor()
torch.tensors#torch.Tensor.logical_xor_
logit() → Tensor See torch.logit()
torch.tensors#torch.Tensor.logit
logit_() → Tensor In-place version of logit()
torch.tensors#torch.Tensor.logit_
logsumexp(dim, keepdim=False) → Tensor See torch.logsumexp()
torch.tensors#torch.Tensor.logsumexp
log_() → Tensor In-place version of log()
torch.tensors#torch.Tensor.log_
log_normal_(mean=1, std=2, *, generator=None)
Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ. Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:

f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}
torch.tensors#torch.Tensor.log_normal_
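A sketch checking that mean and std parameterize the underlying normal distribution: taking log() of the samples should recover roughly those moments. The sample size and tolerances are arbitrary choices for the demonstration:

```python
import torch

torch.manual_seed(0)
t = torch.empty(100000).log_normal_(mean=0.0, std=0.5)
# log of log-normal samples is normal with the given mean and std
print(t.log().mean())   # approximately 0.0
print(t.log().std())    # approximately 0.5
```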
long(memory_format=torch.preserve_format) → Tensor self.long() is equivalent to self.to(torch.int64). See to(). Parameters: memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.long
lstsq(A) -> (Tensor, Tensor) See torch.lstsq()
torch.tensors#torch.Tensor.lstsq
lt(other) → Tensor See torch.lt().
torch.tensors#torch.Tensor.lt
lt_(other) → Tensor In-place version of lt().
torch.tensors#torch.Tensor.lt_
lu(pivot=True, get_infos=False) [source] See torch.lu()
torch.tensors#torch.Tensor.lu
lu_solve(LU_data, LU_pivots) → Tensor See torch.lu_solve()
torch.tensors#torch.Tensor.lu_solve
map_(tensor, callable)
Applies callable for each element in self tensor and the given tensor and stores the results in self tensor. self tensor and the given tensor must be broadcastable.
The callable should have the signature:
def callable(a, b) -> number
torch.tensors#torch.Tensor.map_
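A minimal sketch of map_ computing an elementwise maximum; since the callable runs in Python for each element pair (CPU tensors only), this is slow and best suited to quick experiments:

```python
import torch

a = torch.tensor([1.0, 5.0, 2.0])
b = torch.tensor([4.0, 0.0, 2.0])
# the callable receives one element from a and one from b at a time
a.map_(b, lambda x, y: max(x, y))
print(a)   # tensor([4., 5., 2.])
```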
masked_fill(mask, value) → Tensor Out-of-place version of torch.Tensor.masked_fill_()
torch.tensors#torch.Tensor.masked_fill
masked_fill_(mask, value)
Fills elements of self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.
Parameters:
mask (BoolTensor) – the boolean mask
value (float) – the value to fill in with
torch.tensors#torch.Tensor.masked_fill_
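A one-liner sketch of masked_fill_, using a comparison to build the boolean mask:

```python
import torch

x = torch.tensor([1.0, -2.0, 3.0, -4.0])
x.masked_fill_(x < 0, 0.0)   # zero out the negative entries
print(x)   # tensor([1., 0., 3., 0.])
```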
masked_scatter(mask, tensor) → Tensor Out-of-place version of torch.Tensor.masked_scatter_()
torch.tensors#torch.Tensor.masked_scatter
masked_scatter_(mask, source)
Copies elements from source into self tensor at positions where the mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.
Parameters:
mask (BoolTensor) – the boolean mask
source (Tensor) – the tensor to copy from
Note: The mask operates on the self tensor, not on the given source tensor.
torch.tensors#torch.Tensor.masked_scatter_
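A small sketch of masked_scatter_; the source elements are consumed one by one in the row-major order of the True positions in the mask:

```python
import torch

x = torch.zeros(2, 3)
mask = torch.tensor([[True, False, True],
                     [False, True, False]])
src = torch.tensor([10.0, 20.0, 30.0])   # one value per True in the mask
x.masked_scatter_(mask, src)
print(x)
# tensor([[10.,  0., 20.],
#         [ 0., 30.,  0.]])
```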
masked_select(mask) → Tensor See torch.masked_select()
torch.tensors#torch.Tensor.masked_select
matmul(tensor2) → Tensor See torch.matmul()
torch.tensors#torch.Tensor.matmul
matrix_exp() → Tensor See torch.matrix_exp()
torch.tensors#torch.Tensor.matrix_exp
matrix_power(n) → Tensor See torch.matrix_power()
torch.tensors#torch.Tensor.matrix_power
max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor) See torch.max()
torch.tensors#torch.Tensor.max
maximum(other) → Tensor See torch.maximum()
torch.tensors#torch.Tensor.maximum
mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor) See torch.mean()
torch.tensors#torch.Tensor.mean
median(dim=None, keepdim=False) -> (Tensor, LongTensor) See torch.median()
torch.tensors#torch.Tensor.median
min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor) See torch.min()
torch.tensors#torch.Tensor.min
minimum(other) → Tensor See torch.minimum()
torch.tensors#torch.Tensor.minimum
mm(mat2) → Tensor See torch.mm()
torch.tensors#torch.Tensor.mm