cosh_() → Tensor In-place version of cosh()
torch.tensors#torch.Tensor.cosh_
cos_() → Tensor In-place version of cos()
torch.tensors#torch.Tensor.cos_
count_nonzero(dim=None) → Tensor See torch.count_nonzero()
torch.tensors#torch.Tensor.count_nonzero
cpu(memory_format=torch.preserve_format) → Tensor Returns a copy of this object in CPU memory. If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned. Parameters memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.cpu
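A minimal sketch of the no-copy behavior described above: when the tensor already lives in CPU memory, cpu() hands back the original object rather than a copy.

```python
import torch

t = torch.zeros(2)
c = t.cpu()   # already in CPU memory: no copy is performed
assert c is t # the original object itself is returned
```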
cross(other, dim=-1) → Tensor See torch.cross()
torch.tensors#torch.Tensor.cross
cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) → Tensor Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned. Parameters device (torch.device) – The destination GPU device. Defaults to the current CUDA device. non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False. memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.cuda
cummax(dim) -> (Tensor, Tensor) See torch.cummax()
torch.tensors#torch.Tensor.cummax
cummin(dim) -> (Tensor, Tensor) See torch.cummin()
torch.tensors#torch.Tensor.cummin
cumprod(dim, dtype=None) → Tensor See torch.cumprod()
torch.tensors#torch.Tensor.cumprod
cumprod_(dim, dtype=None) → Tensor In-place version of cumprod()
torch.tensors#torch.Tensor.cumprod_
cumsum(dim, dtype=None) → Tensor See torch.cumsum()
torch.tensors#torch.Tensor.cumsum
cumsum_(dim, dtype=None) → Tensor In-place version of cumsum()
torch.tensors#torch.Tensor.cumsum_
data_ptr() → int Returns the address of the first element of self tensor.
torch.tensors#torch.Tensor.data_ptr
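To illustrate the data_ptr() entry above, a short sketch: views share the underlying storage, so their addresses coincide, and slicing shifts the start address by the element size times the offset.

```python
import torch

t = torch.arange(6)                 # int64 tensor, element_size() == 8
v = t.view(2, 3)                    # a view shares the underlying storage
assert v.data_ptr() == t.data_ptr()

s = t[2:]                           # slicing shifts the start address
assert s.data_ptr() == t.data_ptr() + 2 * t.element_size()
```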
deg2rad() → Tensor See torch.deg2rad()
torch.tensors#torch.Tensor.deg2rad
dense_dim() → int Return the number of dense dimensions in a sparse tensor self. Warning Throws an error if self is not a sparse tensor. See also Tensor.sparse_dim() and hybrid tensors.
torch.sparse#torch.Tensor.dense_dim
dequantize() → Tensor Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
torch.tensors#torch.Tensor.dequantize
det() → Tensor See torch.det()
torch.tensors#torch.Tensor.det
detach() Returns a new Tensor, detached from the current graph. The result will never require gradient. Note Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.
torch.autograd#torch.Tensor.detach
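A small sketch of the storage-sharing note in the detach() entry above: the detached tensor never requires gradient, yet an in-place write to it is visible through the original, since both alias the same storage.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.detach()                 # same storage, cut from the autograd graph
assert not y.requires_grad
y[0] = 5.0                     # in-place change is visible through x
assert x[0].item() == 5.0
```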
detach_() Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.
torch.autograd#torch.Tensor.detach_
device Is the torch.device where this Tensor is.
torch.tensors#torch.Tensor.device
diag(diagonal=0) → Tensor See torch.diag()
torch.tensors#torch.Tensor.diag
diagflat(offset=0) → Tensor See torch.diagflat()
torch.tensors#torch.Tensor.diagflat
diagonal(offset=0, dim1=0, dim2=1) → Tensor See torch.diagonal()
torch.tensors#torch.Tensor.diagonal
diag_embed(offset=0, dim1=-2, dim2=-1) → Tensor See torch.diag_embed()
torch.tensors#torch.Tensor.diag_embed
diff(n=1, dim=-1, prepend=None, append=None) → Tensor See torch.diff()
torch.tensors#torch.Tensor.diff
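A quick illustration of diff(): it computes differences of adjacent elements along a dimension, and n applies the operation recursively.

```python
import torch

t = torch.tensor([1, 3, 6, 10])
assert torch.equal(t.diff(), torch.tensor([2, 3, 4]))   # adjacent differences
assert torch.equal(t.diff(n=2), torch.tensor([1, 1]))   # differences applied twice
```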
digamma() → Tensor See torch.digamma()
torch.tensors#torch.Tensor.digamma
digamma_() → Tensor In-place version of digamma()
torch.tensors#torch.Tensor.digamma_
dim() → int Returns the number of dimensions of self tensor.
torch.tensors#torch.Tensor.dim
dist(other, p=2) → Tensor See torch.dist()
torch.tensors#torch.Tensor.dist
div(value, *, rounding_mode=None) → Tensor See torch.div()
torch.tensors#torch.Tensor.div
divide(value, *, rounding_mode=None) → Tensor See torch.divide()
torch.tensors#torch.Tensor.divide
divide_(value, *, rounding_mode=None) → Tensor In-place version of divide()
torch.tensors#torch.Tensor.divide_
div_(value, *, rounding_mode=None) → Tensor In-place version of div()
torch.tensors#torch.Tensor.div_
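The rounding_mode keyword in the div()/divide() family above selects between true division, truncation toward zero, and flooring toward negative infinity; a short sketch:

```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])
assert torch.equal(a.div(b), torch.tensor([3.5, -3.5]))                         # true division
assert torch.equal(a.div(b, rounding_mode='trunc'), torch.tensor([3.0, -3.0]))  # toward zero
assert torch.equal(a.div(b, rounding_mode='floor'), torch.tensor([3.0, -4.0]))  # toward -inf
```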
dot(other) → Tensor See torch.dot()
torch.tensors#torch.Tensor.dot
double(memory_format=torch.preserve_format) → Tensor self.double() is equivalent to self.to(torch.float64). See to(). Parameters memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.double
eig(eigenvectors=False) -> (Tensor, Tensor) See torch.eig()
torch.tensors#torch.Tensor.eig
element_size() → int Returns the size in bytes of an individual element. Example: >>> torch.tensor([]).element_size() 4 >>> torch.tensor([], dtype=torch.uint8).element_size() 1
torch.tensors#torch.Tensor.element_size
eq(other) → Tensor See torch.eq()
torch.tensors#torch.Tensor.eq
equal(other) → bool See torch.equal()
torch.tensors#torch.Tensor.equal
eq_(other) → Tensor In-place version of eq()
torch.tensors#torch.Tensor.eq_
erf() → Tensor See torch.erf()
torch.tensors#torch.Tensor.erf
erfc() → Tensor See torch.erfc()
torch.tensors#torch.Tensor.erfc
erfc_() → Tensor In-place version of erfc()
torch.tensors#torch.Tensor.erfc_
erfinv() → Tensor See torch.erfinv()
torch.tensors#torch.Tensor.erfinv
erfinv_() → Tensor In-place version of erfinv()
torch.tensors#torch.Tensor.erfinv_
erf_() → Tensor In-place version of erf()
torch.tensors#torch.Tensor.erf_
exp() → Tensor See torch.exp()
torch.tensors#torch.Tensor.exp
expand(*sizes) → Tensor Returns a new view of the self tensor with singleton dimensions expanded to a larger size. Passing -1 as the size for a dimension means not changing the size of that dimension. Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1. Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. Parameters *sizes (torch.Size or int...) – the desired expanded size Warning More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Example: >>> x = torch.tensor([[1], [2], [3]]) >>> x.size() torch.Size([3, 1]) >>> x.expand(3, 4) tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]]) >>> x.expand(-1, 4) # -1 means not changing the size of that dimension tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]])
torch.tensors#torch.Tensor.expand
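The stride-0 mechanism and the aliasing warning in the expand() entry above can be sketched directly:

```python
import torch

x = torch.tensor([[1.0], [2.0], [3.0]])
e = x.expand(3, 4)                    # no allocation: dim 1 gets stride 0
assert e.stride() == (1, 0)
assert e.data_ptr() == x.data_ptr()   # same memory as x

w = e.clone()                         # clone before writing, per the warning
w[0, 0] = 9.0
assert x[0, 0].item() == 1.0          # original is untouched
```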
expand_as(other) → Tensor Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()). Please see expand() for more information about expand. Parameters other (torch.Tensor) – The result tensor has the same size as other.
torch.tensors#torch.Tensor.expand_as
expm1() → Tensor See torch.expm1()
torch.tensors#torch.Tensor.expm1
expm1_() → Tensor In-place version of expm1()
torch.tensors#torch.Tensor.expm1_
exponential_(lambd=1, *, generator=None) → Tensor Fills self tensor with elements drawn from the exponential distribution: f(x) = \lambda e^{-\lambda x}
torch.tensors#torch.Tensor.exponential_
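A seeded sanity-check sketch of exponential_(): the samples are nonnegative and, with lambd=2, the sample mean should sit near 1/lambda = 0.5 (the tolerance below is an assumption chosen to be comfortably wide).

```python
import torch

torch.manual_seed(0)
t = torch.empty(100_000).exponential_(lambd=2.0)
assert t.min().item() >= 0.0                  # support is the nonnegative reals
assert abs(t.mean().item() - 0.5) < 0.02      # mean of Exp(lambda) is 1/lambda
```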
exp_() → Tensor In-place version of exp()
torch.tensors#torch.Tensor.exp_
fill_(value) → Tensor Fills self tensor with the specified value.
torch.tensors#torch.Tensor.fill_
fill_diagonal_(fill_value, wrap=False) → Tensor Fill the main diagonal of a tensor that has at least 2 dimensions. When dims > 2, all dimensions of input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor. Parameters fill_value (Scalar) – the fill value wrap (bool) – the diagonal 'wrapped' after N columns for tall matrices. Example: >>> a = torch.zeros(3, 3) >>> a.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.]]) >>> b = torch.zeros(7, 3) >>> b.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) >>> c = torch.zeros(7, 3) >>> c.fill_diagonal_(5, wrap=True) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [0., 0., 5.]])
torch.tensors#torch.Tensor.fill_diagonal_
fix() → Tensor See torch.fix().
torch.tensors#torch.Tensor.fix
fix_() → Tensor In-place version of fix()
torch.tensors#torch.Tensor.fix_
flatten(start_dim=0, end_dim=-1) → Tensor See torch.flatten()
torch.tensors#torch.Tensor.flatten
flip(dims) → Tensor See torch.flip()
torch.tensors#torch.Tensor.flip
fliplr() → Tensor See torch.fliplr()
torch.tensors#torch.Tensor.fliplr
flipud() → Tensor See torch.flipud()
torch.tensors#torch.Tensor.flipud
float(memory_format=torch.preserve_format) → Tensor self.float() is equivalent to self.to(torch.float32). See to(). Parameters memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.float
float_power(exponent) → Tensor See torch.float_power()
torch.tensors#torch.Tensor.float_power
float_power_(exponent) → Tensor In-place version of float_power()
torch.tensors#torch.Tensor.float_power_
floor() → Tensor See torch.floor()
torch.tensors#torch.Tensor.floor
floor_() → Tensor In-place version of floor()
torch.tensors#torch.Tensor.floor_
floor_divide(value) → Tensor See torch.floor_divide()
torch.tensors#torch.Tensor.floor_divide
floor_divide_(value) → Tensor In-place version of floor_divide()
torch.tensors#torch.Tensor.floor_divide_
fmax(other) → Tensor See torch.fmax()
torch.tensors#torch.Tensor.fmax
fmin(other) → Tensor See torch.fmin()
torch.tensors#torch.Tensor.fmin
fmod(divisor) → Tensor See torch.fmod()
torch.tensors#torch.Tensor.fmod
fmod_(divisor) → Tensor In-place version of fmod()
torch.tensors#torch.Tensor.fmod_
frac() → Tensor See torch.frac()
torch.tensors#torch.Tensor.frac
frac_() → Tensor In-place version of frac()
torch.tensors#torch.Tensor.frac_
gather(dim, index) → Tensor See torch.gather()
torch.tensors#torch.Tensor.gather
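A small worked example of gather(), since the index semantics are easy to misread: along dim=1, out[i][j] = self[i][index[i][j]].

```python
import torch

t = torch.tensor([[1, 2], [3, 4]])
idx = torch.tensor([[0, 0], [1, 0]])
out = t.gather(1, idx)   # out[i][j] = t[i][idx[i][j]]
assert torch.equal(out, torch.tensor([[1, 1], [4, 3]]))
```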
gcd(other) → Tensor See torch.gcd()
torch.tensors#torch.Tensor.gcd
gcd_(other) → Tensor In-place version of gcd()
torch.tensors#torch.Tensor.gcd_
ge(other) → Tensor See torch.ge().
torch.tensors#torch.Tensor.ge
geometric_(p, *, generator=None) → Tensor Fills self tensor with elements drawn from the geometric distribution: f(X=k) = p^{k - 1} (1 - p)
torch.tensors#torch.Tensor.geometric_
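A seeded sanity-check sketch of geometric_(): samples are trial counts starting at 1, and with p=0.5 the sample mean should land near 1/p = 2 (the tolerance below is an assumption chosen to be comfortably wide).

```python
import torch

torch.manual_seed(0)
t = torch.empty(10_000).geometric_(0.5)
assert t.min().item() >= 1.0             # samples are counts k = 1, 2, 3, ...
assert abs(t.mean().item() - 2.0) < 0.2  # mean of Geometric(p) is 1/p
```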
geqrf() -> (Tensor, Tensor) See torch.geqrf()
torch.tensors#torch.Tensor.geqrf
ger(vec2) → Tensor See torch.ger()
torch.tensors#torch.Tensor.ger
get_device() -> Device ordinal (Integer) For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown. Example: >>> x = torch.randn(3, 4, 5, device='cuda:0') >>> x.get_device() 0 >>> x.cpu().get_device() # RuntimeError: get_device is not implemented for type torch.FloatTensor
torch.tensors#torch.Tensor.get_device
ge_(other) → Tensor In-place version of ge().
torch.tensors#torch.Tensor.ge_
grad This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self. The attribute will then contain the gradients computed and future calls to backward() will accumulate (add) gradients into it.
torch.autograd#torch.Tensor.grad
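The accumulation behavior described in the grad entry above, sketched end to end: .grad starts as None, is populated by the first backward(), and subsequent backward() calls add into it.

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
assert x.grad is None            # None until the first backward()
(x * x).sum().backward()
assert x.grad.item() == 4.0      # d(x^2)/dx = 2x
(x * x).sum().backward()
assert x.grad.item() == 8.0      # the second backward() accumulates: 4 + 4
```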
greater(other) → Tensor See torch.greater().
torch.tensors#torch.Tensor.greater
greater_(other) → Tensor In-place version of greater().
torch.tensors#torch.Tensor.greater_
greater_equal(other) → Tensor See torch.greater_equal().
torch.tensors#torch.Tensor.greater_equal
greater_equal_(other) → Tensor In-place version of greater_equal().
torch.tensors#torch.Tensor.greater_equal_
gt(other) → Tensor See torch.gt().
torch.tensors#torch.Tensor.gt
gt_(other) → Tensor In-place version of gt().
torch.tensors#torch.Tensor.gt_
half(memory_format=torch.preserve_format) → Tensor self.half() is equivalent to self.to(torch.float16). See to(). Parameters memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
torch.tensors#torch.Tensor.half
hardshrink(lambd=0.5) → Tensor See torch.nn.functional.hardshrink()
torch.tensors#torch.Tensor.hardshrink
heaviside(values) → Tensor See torch.heaviside()
torch.tensors#torch.Tensor.heaviside
histc(bins=100, min=0, max=0) → Tensor See torch.histc()
torch.tensors#torch.Tensor.histc
hypot(other) → Tensor See torch.hypot()
torch.tensors#torch.Tensor.hypot
hypot_(other) → Tensor In-place version of hypot()
torch.tensors#torch.Tensor.hypot_
i0() → Tensor See torch.i0()
torch.tensors#torch.Tensor.i0
i0_() → Tensor In-place version of i0()
torch.tensors#torch.Tensor.i0_
igamma(other) → Tensor See torch.igamma()
torch.tensors#torch.Tensor.igamma
igammac(other) → Tensor See torch.igammac()
torch.tensors#torch.Tensor.igammac