torch.quantization.prepare_qat(model, mapping=None, inplace=False) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training and converts it to the quantized version. Quantization configuration should be assigned beforehand to individual submodules in the .qconfig attribute. Parameters
model – input model to be modified in-place
mapping – dictionary that maps float modules to the quantized modules to replace them with
inplace – carry out model transformations in-place; the original module is mutated | torch.quantization#torch.quantization.prepare_qat
torch.quantization.propagate_qconfig_(module, qconfig_dict=None, allow_list=None) [source]
Propagates qconfig through the module hierarchy and assigns the qconfig attribute on each leaf module. Parameters
module – input module
qconfig_dict – dictionary that maps from the name or type of a submodule to a quantization configuration; a qconfig applies to all submodules of a given module unless a qconfig for the submodules is specified (i.e. when the submodule already has a qconfig attribute) Returns
None; module is modified in-place with qconfig attached | torch.quantization#torch.quantization.propagate_qconfig_
class torch.quantization.QConfig [source]
Describes how to quantize a layer or a part of the network by providing settings (observer classes) for activations and weights respectively. Note that QConfig needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization preparation function will instantiate observers multiple times for each of the layers. Observer classes usually have reasonable default arguments, but they can be overridden with the with_args method (which behaves like functools.partial): my_qconfig = QConfig(activation=MinMaxObserver.with_args(dtype=torch.qint8), weight=default_observer.with_args(dtype=torch.qint8)) | torch.quantization#torch.quantization.QConfig
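Because QConfig expects observer factories rather than instances, the with_args pattern can be illustrated with plain functools.partial. The MinMaxObserver below is a simplified, hypothetical stand-in for the real observer class, included only to show the mechanics:

```python
from functools import partial

class MinMaxObserver:
    """Hypothetical stand-in for an observer class: records min/max of data."""
    def __init__(self, dtype="qint8"):
        self.dtype = dtype
        self.min_val = float("inf")
        self.max_val = float("-inf")

    # with_args behaves like functools.partial: it returns a factory,
    # not an instance, so the prepare step can build one observer per layer.
    @classmethod
    def with_args(cls, **kwargs):
        return partial(cls, **kwargs)

factory = MinMaxObserver.with_args(dtype="quint8")
obs_a, obs_b = factory(), factory()   # two independent observer instances
```

Each call to the factory yields a fresh observer, which is why passing a concrete instance into QConfig would be wrong: all layers would then share one observer's statistics.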
class torch.quantization.QConfigDynamic [source]
Describes how to dynamically quantize a layer or a part of the network by providing settings (observer classes) for weights. It's like QConfig, but for dynamic quantization. Note that QConfigDynamic needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the concrete observer instances themselves. The quantization function will instantiate observers multiple times for each of the layers. Observer classes usually have reasonable default arguments, but they can be overridden with the with_args method (which behaves like functools.partial): my_qconfig = QConfigDynamic(weight=default_observer.with_args(dtype=torch.qint8)) | torch.quantization#torch.quantization.QConfigDynamic
torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False) [source]
Quantizes the input float model with post-training static quantization. First it prepares the model for calibration, then calls run_fn to run the calibration step, and finally converts the model to a quantized model. Parameters
model – input float model
run_fn – a calibration function for calibrating the prepared model
run_args – positional arguments for run_fn
inplace – carry out model transformations in-place; the original module is mutated
mapping – correspondence between original module types and quantized counterparts Returns
Quantized model. | torch.quantization#torch.quantization.quantize
torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False) [source]
Converts a float model to a dynamic (i.e. weights-only) quantized model. Replaces the specified modules with dynamic weight-only quantized versions and outputs the quantized model. For the simplest usage, provide the dtype argument, which can be float16 or qint8. Weight-only quantization is by default performed for layers with large weight sizes – i.e. Linear and RNN variants. Fine-grained control is possible with qconfig and mapping, which act similarly to quantize(). If qconfig is provided, the dtype argument is ignored. Parameters
model – input model
qconfig_spec –
Either: a dictionary that maps from the name or type of a submodule to a quantization configuration; a qconfig applies to all submodules of a given module unless a qconfig for the submodules is specified (i.e. when the submodule already has a qconfig attribute), and entries in the dictionary need to be QConfigDynamic instances. Or: a set of types and/or submodule names to apply dynamic quantization to, in which case the dtype argument is used to specify the bit width
inplace – carry out model transformations in-place; the original module is mutated
mapping – maps the type of a submodule to the type of the corresponding dynamically quantized version with which the submodule needs to be replaced | torch.quantization#torch.quantization.quantize_dynamic
torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False) [source]
Performs quantization-aware training and outputs a quantized model. Parameters
model – input model
run_fn – a function for evaluating the prepared model; can be a function that simply runs the prepared model, or a training loop
run_args – positional arguments for run_fn
Returns
Quantized model. | torch.quantization#torch.quantization.quantize_qat
class torch.quantization.QuantStub(qconfig=None) [source]
Quantize stub module. Before calibration, this is the same as an observer; it will be swapped to nnq.Quantize in convert. Parameters
qconfig – quantization configuration for the tensor; if qconfig is not provided, it is taken from the parent modules | torch.quantization#torch.quantization.QuantStub
class torch.quantization.QuantWrapper(module) [source]
A wrapper class that wraps the input module, adds QuantStub and DeQuantStub, and surrounds the call to the module with calls to the quant and dequant modules. It is used by the quantization utility functions to add the quant and dequant modules. Before the convert function, QuantStub is just an observer: it observes the input tensor. After convert, QuantStub is swapped to nnq.Quantize, which performs the actual quantization. Similarly for DeQuantStub. | torch.quantization#torch.quantization.QuantWrapper
class torch.quantization.RecordingObserver(**kwargs) [source]
This module is mainly for debugging; it records the tensor values during runtime. Parameters
dtype – quantized data type
qscheme – quantization scheme to be used
reduce_range – reduces the range of the quantized data type by 1 bit | torch.quantization#torch.quantization.RecordingObserver
torch.quantization.swap_module(mod, mapping, custom_module_class_mapping) [source]
Swaps the module if it has a quantized counterpart and it has an observer attached. Parameters
mod – input module
mapping – a dictionary that maps from nn module to nnq module Returns
The corresponding quantized module of mod | torch.quantization#torch.quantization.swap_module
torch.quantize_per_channel(input, scales, zero_points, axis, dtype) → Tensor
Converts a float tensor to a per-channel quantized tensor with given scales and zero points. Parameters
input (Tensor) – float tensor to quantize
scales (Tensor) – float 1D tensor of scales to use, size should match input.size(axis)
zero_points (Tensor) – integer 1D tensor of offsets to use, size should match input.size(axis)
axis (int) – dimension on which to apply per-channel quantization
dtype (torch.dtype) – the desired data type of returned tensor. Has to be one of the quantized dtypes: torch.quint8, torch.qint8, torch.qint32
Returns
A newly quantized tensor Return type
Tensor Example: >>> x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])
>>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8)
tensor([[-1., 0.],
[ 1., 2.]], size=(2, 2), dtype=torch.quint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.1000, 0.0100], dtype=torch.float64),
zero_point=tensor([10, 0]), axis=0)
>>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr()
tensor([[ 0, 10],
[100, 200]], dtype=torch.uint8) | torch.generated.torch.quantize_per_channel#torch.quantize_per_channel |
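The int_repr() values above can be reproduced with a plain-Python sketch of per-channel affine quantization, q = clamp(round(x / scale) + zero_point, 0, 255) for torch.quint8. This is an illustration of the mapping, not the library implementation:

```python
def quantize_per_channel(rows, scales, zero_points, qmin=0, qmax=255):
    """Sketch of per-channel affine quantization along axis 0 (quint8 range):
    each row gets its own scale and zero point."""
    out = []
    for row, s, zp in zip(rows, scales, zero_points):
        out.append([min(qmax, max(qmin, round(x / s) + zp)) for x in row])
    return out

x = [[-1.0, 0.0], [1.0, 2.0]]
int_repr = quantize_per_channel(x, scales=[0.1, 0.01], zero_points=[10, 0])
# matches the int_repr() values in the example above: [[0, 10], [100, 200]]
```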
torch.quantize_per_tensor(input, scale, zero_point, dtype) → Tensor
Converts a float tensor to a quantized tensor with given scale and zero point. Parameters
input (Tensor) – float tensor to quantize
scale (float) – scale to apply in quantization formula
zero_point (int) – offset in integer value that maps to float zero
dtype (torch.dtype) – the desired data type of returned tensor. Has to be one of the quantized dtypes: torch.quint8, torch.qint8, torch.qint32
Returns
A newly quantized tensor Return type
Tensor Example: >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8)
tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
>>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8).int_repr()
tensor([ 0, 10, 20, 30], dtype=torch.uint8) | torch.generated.torch.quantize_per_tensor#torch.quantize_per_tensor |
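The per-tensor variant uses one scale and zero point for the whole tensor. A plain-Python sketch of the affine mapping (again an illustration, not the library code) reproduces the example's int_repr():

```python
def quantize_per_tensor(values, scale, zero_point, qmin=0, qmax=255):
    """Sketch of affine quantization for torch.quint8:
    q = clamp(round(x / scale) + zero_point, qmin, qmax)."""
    return [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in values]

int_repr = quantize_per_tensor([-1.0, 0.0, 1.0, 2.0], 0.1, 10)
# matches the example's int_repr(): [0, 10, 20, 30]
dequant = [(q - 10) * 0.1 for q in int_repr]  # approximately recovers the inputs
```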
class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None) [source]
The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. Sobol sequences are an example of low discrepancy quasi-random sequences. This implementation of an engine for Sobol sequences is capable of sampling sequences up to a maximum dimension of 21201. It uses direction numbers from https://web.maths.unsw.edu.au/~fkuo/sobol/ obtained using the search criterion D(6) up to the dimension 21201. This is the recommended choice by the authors. References Art B. Owen. Scrambling Sobol and Niederreiter-Xing points. Journal of Complexity, 14(4):466-489, December 1998. I. M. Sobol. The distribution of points in a cube and the accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys., 7:784-802, 1967. Parameters
dimension (Int) – The dimensionality of the sequence to be drawn
scramble (bool, optional) – Setting this to True will produce scrambled Sobol sequences. Scrambling is capable of producing better Sobol sequences. Default: False.
seed (Int, optional) – This is the seed for the scrambling. The seed of the random number generator is set to this, if specified. Otherwise, it uses a random seed. Default: None
Examples: >>> soboleng = torch.quasirandom.SobolEngine(dimension=5)
>>> soboleng.draw(3)
tensor([[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.7500, 0.2500, 0.7500, 0.2500, 0.7500],
[0.2500, 0.7500, 0.2500, 0.7500, 0.2500]])
draw(n=1, out=None, dtype=torch.float32) [source]
Function to draw a sequence of n points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (n, dimension). Parameters
n (Int, optional) – The length of the sequence of points to draw. Default: 1
out (Tensor, optional) – The output tensor
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: torch.float32
draw_base2(m, out=None, dtype=torch.float32) [source]
Function to draw a sequence of 2**m points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (2**m, dimension). Parameters
m (Int) – The (base2) exponent of the number of points to draw.
out (Tensor, optional) – The output tensor
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: torch.float32
fast_forward(n) [source]
Function to fast-forward the state of the SobolEngine by n steps. This is equivalent to drawing n samples without using the samples. Parameters
n (Int) – The number of steps to fast-forward by.
reset() [source]
Function to reset the SobolEngine to base state. | torch.generated.torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine |
torch.rad2deg(input, *, out=None) → Tensor
Returns a new tensor with each of the elements of input converted from angles in radians to degrees. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor([[3.142, -3.142], [6.283, -6.283], [1.570, -1.570]])
>>> torch.rad2deg(a)
tensor([[ 180.0233, -180.0233],
[ 359.9894, -359.9894],
[ 89.9544, -89.9544]]) | torch.generated.torch.rad2deg#torch.rad2deg |
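Element-wise, the conversion is simply degrees = radians × 180 / π, which a short pure-Python sketch makes concrete:

```python
import math

def rad2deg(values):
    """Element-wise conversion: degrees = radians * 180 / pi."""
    return [v * 180.0 / math.pi for v in values]

out = rad2deg([math.pi, -math.pi / 2, 1.570])
# approximately [180.0, -90.0, 89.9544], matching the example above
```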
torch.rand(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1). The shape of the tensor is defined by the variable argument size. Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.rand(4)
tensor([ 0.5204, 0.2503, 0.3525, 0.5673])
>>> torch.rand(2, 3)
tensor([[ 0.8237, 0.5781, 0.6879],
[ 0.3816, 0.7249, 0.0998]]) | torch.generated.torch.rand#torch.rand |
torch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive). The shape of the tensor is defined by the variable argument size. Note With the global dtype default (torch.float32), this function returns a tensor with dtype torch.int64. Parameters
low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.
high (int) – One above the highest integer to be drawn from the distribution.
size (tuple) – a tuple defining the shape of the output tensor. Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.randint(3, 5, (3,))
tensor([4, 3, 4])
>>> torch.randint(10, (2, 2))
tensor([[0, 2],
[5, 5]])
>>> torch.randint(3, 10, (2, 2))
tensor([[4, 5],
[6, 7]]) | torch.generated.torch.randint#torch.randint |
torch.randint_like(input, low=0, high, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive). Parameters
input (Tensor) – the size of input will determine size of the output tensor.
low (int, optional) – Lowest integer to be drawn from the distribution. Default: 0.
high (int) – One above the highest integer to be drawn from the distribution. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.generated.torch.randint_like#torch.randint_like |
torch.randn(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution): out_i ∼ N(0, 1).
The shape of the tensor is defined by the variable argument size. Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.randn(4)
tensor([-2.1436, 0.9966, 2.3426, -0.6366])
>>> torch.randn(2, 3)
tensor([[ 1.5954, 2.8929, -1.0923],
[ 1.1719, -0.4709, -0.1996]]) | torch.generated.torch.randn#torch.randn |
torch.randn_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. torch.randn_like(input) is equivalent to torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Parameters
input (Tensor) – the size of input will determine size of the output tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.generated.torch.randn_like#torch.randn_like |
torch.random
torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices') [source]
Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in. Parameters
devices (iterable of CUDA IDs) – CUDA devices for which to fork the RNG. CPU RNG state is always forked. By default, fork_rng() operates on all devices, but will emit a warning if your machine has a lot of devices, since this function will run very slowly in that case. If you explicitly specify devices, this warning will be suppressed
enabled (bool) – if False, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent your Python code under it.
torch.random.get_rng_state() [source]
Returns the random number generator state as a torch.ByteTensor.
torch.random.initial_seed() [source]
Returns the initial seed for generating random numbers as a Python long.
torch.random.manual_seed(seed) [source]
Sets the seed for generating random numbers. Returns a torch.Generator object. Parameters
seed (int) – The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula 0xffff_ffff_ffff_ffff + seed.
torch.random.seed() [source]
Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG.
torch.random.set_rng_state(new_state) [source]
Sets the random number generator state. Parameters
new_state (torch.ByteTensor) – The desired state | torch.random |
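The fork_rng semantics (snapshot the generator state on entry, restore it on exit) can be sketched with Python's stdlib RNG; this mirrors the behavior for the CPU generator only and is not the torch implementation:

```python
import random
from contextlib import contextmanager

@contextmanager
def fork_rng(enabled=True):
    """Sketch of fork_rng: draws inside the block do not disturb
    the random stream outside it."""
    if not enabled:
        yield
        return
    state = random.getstate()      # snapshot on entry
    try:
        yield
    finally:
        random.setstate(state)     # restore on exit

random.seed(0)
before = random.random()
random.seed(0)
with fork_rng():
    random.random()                # consumed inside the fork only
after = random.random()            # same draw as `before`: state was restored
```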
torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor
Returns a random permutation of integers from 0 to n - 1. Parameters
n (int) – the upper bound (exclusive) Keyword Arguments
generator (torch.Generator, optional) – a pseudorandom number generator for sampling
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: torch.int64.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False. Example: >>> torch.randperm(4)
tensor([2, 1, 0, 3]) | torch.generated.torch.randperm#torch.randperm |
torch.rand_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval [0, 1). torch.rand_like(input) is equivalent to torch.rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Parameters
input (Tensor) – the size of input will determine size of the output tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.generated.torch.rand_like#torch.rand_like |
torch.range(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a 1-D tensor of size ⌊(end − start) / step⌋ + 1 with values from start to end with step step. Step is the gap between two values in the tensor: out_{i+1} = out_i + step.
Warning This function is deprecated and will be removed in a future release because its behavior is inconsistent with Python's range builtin. Instead, use torch.arange(), which produces values in [start, end). Parameters
start (float) – the starting value for the set of points. Default: 0.
end (float) – the ending value for the set of points
step (float) – the gap between each pair of adjacent points. Default: 1. Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.range(1, 4)
tensor([ 1., 2., 3., 4.])
>>> torch.range(1, 4, 0.5)
tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000]) | torch.generated.torch.range#torch.range |
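The deprecation warning above comes down to the endpoint: torch.range includes end (length ⌊(end − start) / step⌋ + 1), while torch.arange and Python's range exclude it. A pure-Python sketch of the two semantics:

```python
def torch_range_like(start, end, step=1.0):
    """Deprecated torch.range semantics: end is INCLUSIVE,
    length = floor((end - start) / step) + 1."""
    n = int((end - start) / step) + 1
    return [start + i * step for i in range(n)]

def arange_like(start, end, step=1.0):
    """torch.arange / Python range semantics: end is EXCLUSIVE."""
    out, v = [], start
    while v < end:
        out.append(v)
        v += step
    return out

torch_range_like(1.0, 4.0)       # [1.0, 2.0, 3.0, 4.0]  -- includes the endpoint
torch_range_like(1.0, 4.0, 0.5)  # [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
arange_like(1.0, 4.0)            # [1.0, 2.0, 3.0]       -- excludes it
```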
torch.ravel(input) β Tensor
Return a contiguous flattened tensor. A copy is made only if needed. Parameters
input (Tensor) – the input tensor. Example: >>> t = torch.tensor([[[1, 2],
... [3, 4]],
... [[5, 6],
... [7, 8]]])
>>> torch.ravel(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8]) | torch.generated.torch.ravel#torch.ravel |
torch.real(input) → Tensor
Returns a new tensor containing the real values of the self tensor. The returned tensor and self share the same underlying storage. Warning real() is only supported for tensors with complex dtypes. Parameters
input (Tensor) – the input tensor. Example:
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.real
tensor([ 0.3100, -0.5445, -1.6492, -0.0638]) | torch.generated.torch.real#torch.real |
torch.reciprocal(input, *, out=None) → Tensor
Returns a new tensor with the reciprocal of the elements of input: out_i = 1 / input_i. Note Unlike NumPy's reciprocal, torch.reciprocal supports integral inputs. Integral inputs to reciprocal are automatically promoted to the default scalar type.
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-0.4595, -2.1219, -1.4314, 0.7298])
>>> torch.reciprocal(a)
tensor([-2.1763, -0.4713, -0.6986, 1.3702]) | torch.generated.torch.reciprocal#torch.reciprocal |
torch.remainder(input, other, *, out=None) → Tensor
Computes the element-wise remainder of division. The dividend and divisor may contain both integer and floating point numbers. The remainder has the same sign as the divisor other. Supports broadcasting to a common shape, type promotion, and integer and float inputs. Note Complex inputs are not supported. In some cases, it is not mathematically possible to satisfy the definition of a modulo operation with complex numbers. See torch.fmod() for how division by zero is handled. Parameters
input (Tensor) – the dividend
other (Tensor or Scalar) – the divisor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([ 1., 0., 1., 1., 0., 1.])
>>> torch.remainder(torch.tensor([1, 2, 3, 4, 5]), 1.5)
tensor([ 1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
See also torch.fmod(), which computes the element-wise remainder of division equivalently to the C library function fmod(). | torch.generated.torch.remainder#torch.remainder |
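The sign convention above is the same as Python's % operator, while fmod follows the sign of the dividend like C's fmod(); the stdlib shows the contrast on the example's inputs:

```python
import math

# torch.remainder follows the sign of the divisor, like Python's % operator;
# torch.fmod follows the sign of the dividend, like C's fmod().
dividends = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
rem = [x % 2 for x in dividends]             # [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
fmod = [math.fmod(x, 2) for x in dividends]  # [-1.0, -0.0, -1.0, 1.0, 0.0, 1.0]
```

The rem values match the first torch.remainder example above; the fmod values differ in sign for negative dividends.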
torch.renorm(input, p, dim, maxnorm, *, out=None) → Tensor
Returns a tensor where each sub-tensor of input along dimension dim is normalized such that the p-norm of the sub-tensor is lower than the value maxnorm. Note If the norm of a row is lower than maxnorm, the row is unchanged. Parameters
input (Tensor) – the input tensor.
p (float) – the power for the norm computation
dim (int) – the dimension to slice over to get the sub-tensors
maxnorm (float) – the maximum norm to keep each sub-tensor under Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.ones(3, 3)
>>> x[1].fill_(2)
tensor([ 2., 2., 2.])
>>> x[2].fill_(3)
tensor([ 3., 3., 3.])
>>> x
tensor([[ 1., 1., 1.],
[ 2., 2., 2.],
[ 3., 3., 3.]])
>>> torch.renorm(x, 1, 0, 5)
tensor([[ 1.0000, 1.0000, 1.0000],
[ 1.6667, 1.6667, 1.6667],
[ 1.6667, 1.6667, 1.6667]]) | torch.generated.torch.renorm#torch.renorm |
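One way to read the result above is to check the per-row p-norms after the call: rows already under maxnorm stay unchanged, and the rest are scaled down to exactly maxnorm. A quick sketch:

```python
import torch

x = torch.ones(3, 3)
x[1].fill_(2)
x[2].fill_(3)
y = torch.renorm(x, p=1, dim=0, maxnorm=5)
# row 0 has 1-norm 3 (already <= 5, so unchanged);
# rows 1 and 2 had 1-norms 6 and 9, so they are rescaled to norm 5
print(y.abs().sum(dim=1))
```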
torch.repeat_interleave(input, repeats, dim=None) β Tensor
Repeat elements of a tensor. Warning This is different from torch.Tensor.repeat() but similar to numpy.repeat. Parameters
input (Tensor) β the input tensor.
repeats (Tensor or int) β The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis.
dim (int, optional) β The dimension along which to repeat values. By default, use the flattened input array, and return a flat output array. Returns
Repeated tensor which has the same shape as input, except along the given axis. Return type
Tensor Example: >>> x = torch.tensor([1, 2, 3])
>>> x.repeat_interleave(2)
tensor([1, 1, 2, 2, 3, 3])
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.repeat_interleave(y, 2)
tensor([1, 1, 2, 2, 3, 3, 4, 4])
>>> torch.repeat_interleave(y, 3, dim=1)
tensor([[1, 1, 1, 2, 2, 2],
[3, 3, 3, 4, 4, 4]])
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)
tensor([[1, 2],
[3, 4],
[3, 4]])
torch.repeat_interleave(repeats) β Tensor
If the repeats is tensor([n1, n2, n3, β¦]), then the output will be tensor([0, 0, β¦, 1, 1, β¦, 2, 2, β¦, β¦]) where 0 appears n1 times, 1 appears n2 times, 2 appears n3 times, etc. | torch.generated.torch.repeat_interleave#torch.repeat_interleave |
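The repeats-only overload described above can be illustrated directly; it effectively expands a run-length encoding into explicit indices:

```python
import torch

repeats = torch.tensor([2, 3, 1])
out = torch.repeat_interleave(repeats)
# index 0 appears 2 times, index 1 appears 3 times, index 2 appears 1 time
print(out)  # tensor([0, 0, 1, 1, 1, 2])
```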
torch.reshape(input, shape) β Tensor
Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior. See torch.Tensor.view() on when it is possible to return a view. A single dimension may be -1, in which case it's inferred from the remaining dimensions and the number of elements in input. Parameters
input (Tensor) β the tensor to be reshaped
shape (tuple of python:ints) β the new shape Example: >>> a = torch.arange(4.)
>>> torch.reshape(a, (2, 2))
tensor([[ 0., 1.],
[ 2., 3.]])
>>> b = torch.tensor([[0, 1], [2, 3]])
>>> torch.reshape(b, (-1,))
tensor([ 0, 1, 2, 3]) | torch.generated.torch.reshape#torch.reshape |
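Two behaviors worth noting are the -1 inference and the view semantics for contiguous inputs; a small sketch:

```python
import torch

a = torch.arange(6.)
b = torch.reshape(a, (2, -1))  # -1 is inferred as 3 from the element count
print(b.shape)                 # torch.Size([2, 3])

# for a contiguous input, the result is a view sharing the same storage,
# so writes through b are visible in a
b[0, 0] = 100.
print(a[0])
```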
torch.result_type(tensor1, tensor2) β dtype
Returns the torch.dtype that would result from performing an arithmetic operation on the provided input tensors. See type promotion documentation for more information on the type promotion logic. Parameters
tensor1 (Tensor or Number) β an input tensor or number
tensor2 (Tensor or Number) β an input tensor or number Example: >>> torch.result_type(torch.tensor([1, 2], dtype=torch.int), 1.0)
torch.float32
>>> torch.result_type(torch.tensor([1, 2], dtype=torch.uint8), torch.tensor(1))
torch.uint8 | torch.generated.torch.result_type#torch.result_type |
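The promotion rules can be checked without materializing any result tensor; for instance, a wrapped Python scalar participates more weakly than a tensor dtype:

```python
import torch

# an integer tensor combined with a Python float promotes to the
# default floating point dtype (torch.float32 unless changed globally)
assert torch.result_type(torch.tensor([1, 2]), 1.5) == torch.float32

# two integer tensors promote to the wider integer dtype
assert torch.result_type(torch.tensor([1], dtype=torch.int16),
                         torch.tensor([1], dtype=torch.int32)) == torch.int32
```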
torch.roll(input, shifts, dims=None) β Tensor
Roll the tensor along the given dimension(s). Elements that are shifted beyond the last position are re-introduced at the first position. If a dimension is not specified, the tensor will be flattened before rolling and then restored to the original shape. Parameters
input (Tensor) β the input tensor.
shifts (int or tuple of python:ints) β The number of places by which the elements of the tensor are shifted. If shifts is a tuple, dims must be a tuple of the same size, and each dimension will be rolled by the corresponding value
dims (int or tuple of python:ints) β Axis along which to roll Example: >>> x = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(4, 2)
>>> x
tensor([[1, 2],
[3, 4],
[5, 6],
[7, 8]])
>>> torch.roll(x, 1, 0)
tensor([[7, 8],
[1, 2],
[3, 4],
[5, 6]])
>>> torch.roll(x, -1, 0)
tensor([[3, 4],
[5, 6],
[7, 8],
[1, 2]])
>>> torch.roll(x, shifts=(2, 1), dims=(0, 1))
tensor([[6, 5],
[8, 7],
[2, 1],
[4, 3]]) | torch.generated.torch.roll#torch.roll |
torch.rot90(input, k, dims) β Tensor
Rotate a n-D tensor by 90 degrees in the plane specified by dims axis. Rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0. Parameters
input (Tensor) β the input tensor.
k (int) β number of times to rotate
dims (a list or tuple) β axis to rotate Example: >>> x = torch.arange(4).view(2, 2)
>>> x
tensor([[0, 1],
[2, 3]])
>>> torch.rot90(x, 1, [0, 1])
tensor([[1, 3],
[0, 2]])
>>> x = torch.arange(8).view(2, 2, 2)
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.rot90(x, 1, [1, 2])
tensor([[[1, 3],
[0, 2]],
[[5, 7],
[4, 6]]]) | torch.generated.torch.rot90#torch.rot90 |
torch.round(input, *, out=None) β Tensor
Returns a new tensor with each of the elements of input rounded to the closest integer. Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.9920, 0.6077, 0.9734, -1.0362])
>>> torch.round(a)
tensor([ 1., 1., 1., -1.]) | torch.generated.torch.round#torch.round |
torch.row_stack(tensors, *, out=None) β Tensor
Alias of torch.vstack(). | torch.generated.torch.row_stack#torch.row_stack |
torch.rsqrt(input, *, out=None) β Tensor
Returns a new tensor with the reciprocal of the square-root of each of the elements of input. \text{out}_{i} = \frac{1}{\sqrt{\text{input}_{i}}}
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-0.0370, 0.2970, 1.5420, -0.9105])
>>> torch.rsqrt(a)
tensor([ nan, 1.8351, 0.8053, nan]) | torch.generated.torch.rsqrt#torch.rsqrt |
torch.save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True) [source]
Saves an object to a disk file. See also: saving-loading-tensors Parameters
obj β saved object
f β a file-like object (has to implement write and flush) or a string or os.PathLike object containing a file name
pickle_module β module used for pickling metadata and objects
pickle_protocol β can be specified to override the default protocol Note A common PyTorch convention is to save tensors using .pt file extension. Note PyTorch preserves storage sharing across serialization. See preserve-storage-sharing for more details. Note The 1.6 release of PyTorch switched torch.save to use a new zipfile-based file format. torch.load still retains the ability to load files in the old format. If for any reason you want torch.save to use the old format, pass the kwarg _use_new_zipfile_serialization=False. Example >>> # Save to file
>>> x = torch.tensor([0, 1, 2, 3, 4])
>>> torch.save(x, 'tensor.pt')
>>> # Save to io.BytesIO buffer
>>> buffer = io.BytesIO()
>>> torch.save(x, buffer) | torch.generated.torch.save#torch.save |
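A detail the buffer example leaves implicit: to read the object back with torch.load() from an in-memory buffer, the buffer must be rewound first. A round-trip sketch:

```python
import io
import torch

x = torch.tensor([0, 1, 2, 3, 4])
buffer = io.BytesIO()
torch.save(x, buffer)
buffer.seek(0)          # rewind to the start before deserializing
y = torch.load(buffer)
assert torch.equal(x, y)
```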
torch.scatter(input, dim, index, src) β Tensor
Out-of-place version of torch.Tensor.scatter_() | torch.generated.torch.scatter#torch.scatter |
torch.scatter_add(input, dim, index, src) β Tensor
Out-of-place version of torch.Tensor.scatter_add_() | torch.generated.torch.scatter_add#torch.scatter_add |
torch.searchsorted(sorted_sequence, values, *, out_int32=False, right=False, out=None) β Tensor
Find the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted before the indices, the order of the corresponding innermost dimension within sorted_sequence would be preserved. Return a new tensor with the same size as values. If right is False (default), then the left boundary of sorted_sequence is closed. More formally, the returned index satisfies the following rules:
sorted_sequence right returned index satisfies
1-D False sorted_sequence[i-1] < values[m][n]...[l][x] <= sorted_sequence[i]
1-D True sorted_sequence[i-1] <= values[m][n]...[l][x] < sorted_sequence[i]
N-D False sorted_sequence[m][n]...[l][i-1] < values[m][n]...[l][x] <= sorted_sequence[m][n]...[l][i]
N-D True sorted_sequence[m][n]...[l][i-1] <= values[m][n]...[l][x] < sorted_sequence[m][n]...[l][i] Parameters
sorted_sequence (Tensor) β N-D or 1-D tensor, containing monotonically increasing sequence on the innermost dimension.
values (Tensor or Scalar) β N-D tensor or a Scalar containing the search value(s). Keyword Arguments
out_int32 (bool, optional) β indicate the output data type. torch.int32 if True, torch.int64 otherwise. Default value is False, i.e. default output data type is torch.int64.
right (bool, optional) β if False, return the first suitable location that is found. If True, return the last such index. If no suitable index is found, return 0 for a non-numerical value (e.g. nan, inf) or the size of the innermost dimension within sorted_sequence (one past the last index of the innermost dimension). In other words, if False, gets the lower bound index for each value in values on the corresponding innermost dimension of the sorted_sequence. If True, gets the upper bound index instead. Default value is False.
out (Tensor, optional) β the output tensor, must be the same size as values if provided. Note If your use case is always 1-D sorted sequence, torch.bucketize() is preferred, because it has fewer dimension checks resulting in slightly better performance. Example: >>> sorted_sequence = torch.tensor([[1, 3, 5, 7, 9], [2, 4, 6, 8, 10]])
>>> sorted_sequence
tensor([[ 1, 3, 5, 7, 9],
[ 2, 4, 6, 8, 10]])
>>> values = torch.tensor([[3, 6, 9], [3, 6, 9]])
>>> values
tensor([[3, 6, 9],
[3, 6, 9]])
>>> torch.searchsorted(sorted_sequence, values)
tensor([[1, 3, 4],
[1, 2, 4]])
>>> torch.searchsorted(sorted_sequence, values, right=True)
tensor([[2, 3, 5],
[1, 3, 4]])
>>> sorted_sequence_1d = torch.tensor([1, 3, 5, 7, 9])
>>> sorted_sequence_1d
tensor([1, 3, 5, 7, 9])
>>> torch.searchsorted(sorted_sequence_1d, values)
tensor([[1, 3, 4],
[1, 3, 4]]) | torch.generated.torch.searchsorted#torch.searchsorted |
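The note above recommends torch.bucketize() for the 1-D case; for a 1-D sorted sequence the two functions agree, which can be checked directly:

```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
values = torch.tensor([3, 6, 9])

# lower-bound insertion points (right=False, the default)
left = torch.searchsorted(boundaries, values)
print(left)  # tensor([1, 3, 4])

# for a 1-D sorted sequence, bucketize computes the same indices
assert torch.equal(left, torch.bucketize(values, boundaries))
```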
torch.seed() [source]
Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG. | torch.generated.torch.seed#torch.seed |
torch.set_default_dtype(d) [source]
Sets the default floating point dtype to d. This dtype is the inferred dtype for Python floats in torch.tensor(), and is used to infer the dtype for Python complex numbers. The default complex dtype is set to torch.complex128 if the default floating point dtype is torch.float64, otherwise it's set to torch.complex64
The default floating point dtype is initially torch.float32. Parameters
d (torch.dtype) β the floating point dtype to make the default Example >>> # initial default for floating point is torch.float32
>>> torch.tensor([1.2, 3]).dtype
torch.float32
>>> # initial default for complex is torch.complex64
>>> torch.tensor([1.2, 3j]).dtype
torch.complex64
>>> torch.set_default_dtype(torch.float64)
>>> torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64
>>> torch.tensor([1.2, 3j]).dtype # a new complex tensor
torch.complex128 | torch.generated.torch.set_default_dtype#torch.set_default_dtype |
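Because the setting is process-global, code that changes it temporarily should restore the previous default afterwards; a sketch:

```python
import torch

prev = torch.get_default_dtype()          # remember the current global default

torch.set_default_dtype(torch.float64)
assert torch.tensor([1.2, 3]).dtype == torch.float64
assert torch.tensor([1.2, 3j]).dtype == torch.complex128  # complex default follows

torch.set_default_dtype(prev)             # restore the previous global default
```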
torch.set_default_tensor_type(t) [source]
Sets the default torch.Tensor type to floating point tensor type t. This type will also be used as default floating point type for type inference in torch.tensor(). The default floating point tensor type is initially torch.FloatTensor. Parameters
t (type or string) β the floating point tensor type or its name Example: >>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_tensor_type(torch.DoubleTensor)
>>> torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64 | torch.generated.torch.set_default_tensor_type#torch.set_default_tensor_type |
torch.set_flush_denormal(mode) β bool
Disables denormal floating numbers on CPU. Returns True if your system supports flushing denormal numbers and it successfully configures flush denormal mode. set_flush_denormal() is only supported on x86 architectures supporting SSE3. Parameters
mode (bool) β Controls whether to enable flush denormal mode or not Example: >>> torch.set_flush_denormal(True)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor([ 0.], dtype=torch.float64)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor(9.88131e-324 *
[ 1.0000], dtype=torch.float64) | torch.generated.torch.set_flush_denormal#torch.set_flush_denormal |
class torch.set_grad_enabled(mode) [source]
Context-manager that sets gradient calculation to on or off. set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function. This context manager is thread local; it will not affect computation in other threads. Parameters
mode (bool) β Flag whether to enable grad (True), or disable (False). This can be used to conditionally enable gradients. Example: >>> x = torch.tensor([1], requires_grad=True)
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
... y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False | torch.generated.torch.set_grad_enabled#torch.set_grad_enabled |
torch.set_num_interop_threads(int)
Sets the number of threads used for interop parallelism (e.g. in JIT interpreter) on CPU. Warning Can only be called once and before any inter-op parallel work is started (e.g. JIT execution). | torch.generated.torch.set_num_interop_threads#torch.set_num_interop_threads |
torch.set_num_threads(int)
Sets the number of threads used for intraop parallelism on CPU. Warning To ensure that the correct number of threads is used, set_num_threads must be called before running eager, JIT or autograd code. | torch.generated.torch.set_num_threads#torch.set_num_threads |
torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None) [source]
Set options for printing. Items shamelessly taken from NumPy Parameters
precision β Number of digits of precision for floating point output (default = 4).
threshold β Total number of array elements which trigger summarization rather than full repr (default = 1000).
edgeitems β Number of array items in summary at beginning and end of each dimension (default = 3).
linewidth β The number of characters per line for the purpose of inserting line breaks (default = 80). Thresholded matrices will ignore this parameter.
profile β Sane defaults for pretty printing. Can override with any of the above options. (any one of default, short, full)
sci_mode β Enable (True) or disable (False) scientific notation. If None (default) is specified, the value is defined by torch._tensor_str._Formatter. This value is automatically chosen by the framework. | torch.generated.torch.set_printoptions#torch.set_printoptions |
torch.set_rng_state(new_state) [source]
Sets the random number generator state. Parameters
new_state (torch.ByteTensor) β The desired state | torch.generated.torch.set_rng_state#torch.set_rng_state |
torch.sgn(input, *, out=None) β Tensor
For complex tensors, this function returns a new tensor whose elements have the same angle as the elements of input and absolute value 1. For a non-complex tensor, this function returns the signs of the elements of input (see torch.sign()). \text{out}_{i} = 0 , if |\text{input}_{i}| == 0 ; \text{out}_{i} = \frac{\text{input}_{i}}{|\text{input}_{i}|} , otherwise Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> x=torch.tensor([3+4j, 7-24j, 0, 1+2j])
>>> x.sgn()
tensor([0.6000+0.8000j, 0.2800-0.9600j, 0.0000+0.0000j, 0.4472+0.8944j]) | torch.generated.torch.sgn#torch.sgn |
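For real inputs sgn reduces to torch.sign(), and for nonzero complex inputs the result always has unit magnitude; a quick check:

```python
import torch

# real case: sgn is identical to sign
x = torch.tensor([3., -7., 0.])
assert torch.equal(torch.sgn(x), torch.sign(x))

# nonzero complex case: the result lies on the unit circle
z = torch.tensor([3 + 4j])
assert torch.allclose(torch.sgn(z).abs(), torch.tensor([1.]))
```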
torch.sigmoid(input, *, out=None) β Tensor
Returns a new tensor with the sigmoid of the elements of input. \text{out}_{i} = \frac{1}{1 + e^{-\text{input}_{i}}}
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.9213, 1.0887, -0.8858, -1.7683])
>>> torch.sigmoid(a)
tensor([ 0.7153, 0.7481, 0.2920, 0.1458]) | torch.generated.torch.sigmoid#torch.sigmoid |
torch.sign(input, *, out=None) β Tensor
Returns a new tensor with the signs of the elements of input. \text{out}_{i} = \operatorname{sgn}(\text{input}_{i})
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> a
tensor([ 0.7000, -1.2000, 0.0000, 2.3000])
>>> torch.sign(a)
tensor([ 1., -1., 0., 1.]) | torch.generated.torch.sign#torch.sign |
torch.signbit(input, *, out=None) β Tensor
Tests if each element of input has its sign bit set (is less than zero) or not. Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> torch.signbit(a)
tensor([ False, True, False, False]) | torch.generated.torch.signbit#torch.signbit |
torch.sin(input, *, out=None) β Tensor
Returns a new tensor with the sine of the elements of input. \text{out}_{i} = \sin(\text{input}_{i})
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-0.5461, 0.1347, -2.7266, -0.2746])
>>> torch.sin(a)
tensor([-0.5194, 0.1343, -0.4032, -0.2711]) | torch.generated.torch.sin#torch.sin |
torch.sinc(input, *, out=None) β Tensor
Computes the normalized sinc of input. \text{out}_{i} = \begin{cases} 1, & \text{if}\ \text{input}_{i}=0 \\ \sin(\pi \text{input}_{i}) / (\pi \text{input}_{i}), & \text{otherwise} \end{cases}
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.2252, -0.2948, 1.0267, -1.1566])
>>> torch.sinc(a)
tensor([ 0.9186, 0.8631, -0.0259, -0.1300]) | torch.generated.torch.sinc#torch.sinc |
torch.sinh(input, *, out=None) β Tensor
Returns a new tensor with the hyperbolic sine of the elements of input. \text{out}_{i} = \sinh(\text{input}_{i})
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.5380, -0.8632, -0.1265, 0.9399])
>>> torch.sinh(a)
tensor([ 0.5644, -0.9744, -0.1268, 1.0845])
Note When input is on the CPU, the implementation of torch.sinh may use the Sleef library, which rounds very large results to infinity or negative infinity. See here for details. | torch.generated.torch.sinh#torch.sinh |
torch.slogdet(input) -> (Tensor, Tensor)
Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices. Note torch.slogdet() is deprecated. Please use torch.linalg.slogdet() instead. Note If input has zero determinant, this returns (0, -inf). Note Backward through slogdet() internally uses SVD results when input is not invertible. In this case, double backward through slogdet() will be unstable when input doesn't have distinct singular values. See svd() for details. Parameters
input (Tensor) β the input tensor of size (*, n, n) where * is zero or more batch dimensions. Returns
A namedtuple (sign, logabsdet) containing the sign of the determinant, and the log value of the absolute determinant. Example: >>> A = torch.randn(3, 3)
>>> A
tensor([[ 0.0032, -0.2239, -1.1219],
[-0.6690, 0.1161, 0.4053],
[-1.6218, -0.9273, -0.0082]])
>>> torch.det(A)
tensor(-0.7576)
>>> torch.logdet(A)
tensor(nan)
>>> torch.slogdet(A)
torch.return_types.slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776)) | torch.generated.torch.slogdet#torch.slogdet |
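The two outputs recombine into the determinant as sign * exp(logabsdet), which is the numerically safer way to work with determinants of very large or very small magnitude. A sketch with a matrix whose determinant is known in advance:

```python
import torch

A = torch.tensor([[2., 0.],
                  [0., -3.]])
sign, logabsdet = torch.slogdet(A)
# det(A) = -6, so sign is -1 and logabsdet is log(6)
det = sign * torch.exp(logabsdet)
assert torch.isclose(det, torch.det(A))
```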
torch.smm(input, mat) β Tensor
Performs a matrix multiplication of the sparse matrix input with the dense matrix mat. Parameters
input (Tensor) β a sparse matrix to be matrix multiplied
mat (Tensor) β a dense matrix to be matrix multiplied | torch.sparse#torch.smm |
torch.solve(input, A, *, out=None) -> (Tensor, Tensor)
This function returns the solution to the system of linear equations represented by AX = B and the LU factorization of A, in order as a namedtuple solution, LU. LU contains L and U factors for LU factorization of A. torch.solve(B, A) can take in 2D inputs B, A or inputs that are batches of 2D matrices. If the inputs are batches, then returns batched outputs solution, LU. Supports real-valued and complex-valued inputs. Note Irrespective of the original strides, the returned matrices solution and LU will be transposed, i.e. with strides like B.contiguous().transpose(-1, -2).stride() and A.contiguous().transpose(-1, -2).stride() respectively. Parameters
input (Tensor) β input matrix B of size (*, m, k), where * is zero or more batch dimensions.
A (Tensor) β input square matrix of size (*, m, m), where * is zero or more batch dimensions. Keyword Arguments
out ((Tensor, Tensor), optional) β optional output tuple. Example: >>> A = torch.tensor([[6.80, -2.11, 5.66, 5.97, 8.23],
... [-6.05, -3.30, 5.36, -4.44, 1.08],
... [-0.45, 2.58, -2.70, 0.27, 9.04],
... [8.32, 2.71, 4.35, -7.17, 2.14],
... [-9.67, -5.14, -7.26, 6.08, -6.87]]).t()
>>> B = torch.tensor([[4.02, 6.19, -8.22, -7.57, -3.03],
... [-1.56, 4.00, -8.67, 1.75, 2.86],
... [9.81, -4.09, -4.57, -8.61, 8.99]]).t()
>>> X, LU = torch.solve(B, A)
>>> torch.dist(B, torch.mm(A, X))
tensor(1.00000e-06 *
7.0977)
>>> # Batched solver example
>>> A = torch.randn(2, 3, 1, 4, 4)
>>> B = torch.randn(2, 3, 1, 4, 6)
>>> X, LU = torch.solve(B, A)
>>> torch.dist(B, A.matmul(X))
tensor(1.00000e-06 *
3.6386) | torch.generated.torch.solve#torch.solve |
torch.sort(input, dim=-1, descending=False, *, out=None) -> (Tensor, LongTensor)
Sorts the elements of the input tensor along a given dimension in ascending order by value. If dim is not given, the last dimension of the input is chosen. If descending is True then the elements are sorted in descending order by value. A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor. Parameters
input (Tensor) β the input tensor.
dim (int, optional) β the dimension to sort along
descending (bool, optional) β controls the sorting order (ascending or descending) Keyword Arguments
out (tuple, optional) β the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers Example: >>> x = torch.randn(3, 4)
>>> sorted, indices = torch.sort(x)
>>> sorted
tensor([[-0.2162, 0.0608, 0.6719, 2.3332],
[-0.5793, 0.0061, 0.6058, 0.9497],
[-0.5071, 0.3343, 0.9553, 1.0960]])
>>> indices
tensor([[ 1, 0, 2, 3],
[ 3, 1, 0, 2],
[ 0, 3, 1, 2]])
>>> sorted, indices = torch.sort(x, 0)
>>> sorted
tensor([[-0.5071, -0.2162, 0.6719, -0.5793],
[ 0.0608, 0.0061, 0.9497, 0.3343],
[ 0.6058, 0.9553, 1.0960, 2.3332]])
>>> indices
tensor([[ 2, 0, 0, 1],
[ 0, 1, 1, 2],
[ 1, 2, 2, 0]]) | torch.generated.torch.sort#torch.sort |
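The returned indices make the sort invertible: gathering the input with them reproduces values, and argsort of the indices undoes the permutation. A sketch:

```python
import torch

x = torch.tensor([3, 1, 2])
values, indices = torch.sort(x, descending=True)
print(values)   # tensor([3, 2, 1])

# indices map sorted positions back into x ...
assert torch.equal(x[indices], values)
# ... and inverting the permutation recovers the original order
assert torch.equal(values[torch.argsort(indices)], x)
```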
torch.sparse Introduction PyTorch provides torch.Tensor to represent a multi-dimensional array containing elements of a single data type. By default, array elements are stored contiguously in memory, leading to efficient implementations of various array processing algorithms that rely on fast access to array elements. However, there exists an important class of multi-dimensional arrays, so-called sparse arrays, where the contiguous memory storage of array elements turns out to be suboptimal. In sparse arrays, a vast portion of the elements are equal to zero, which means that a lot of memory as well as processor resources can be spared if only the non-zero elements are stored and/or processed. Various sparse storage formats (such as COO, CSR/CSC, LIL, etc.) have been developed that are optimized for a particular structure of non-zero elements in sparse arrays as well as for specific operations on the arrays. Note When talking about storing only non-zero elements of a sparse array, the usage of the adjective "non-zero" is not strict: one is also allowed to store zeros in the sparse array data structure. Hence, in the following, we use "specified elements" for those array elements that are actually stored. In addition, the unspecified elements are typically, but not necessarily, assumed to have zero value, hence we use the term "fill value" to denote such elements. Note Using a sparse storage format for storing sparse arrays can be advantageous only when the size and sparsity levels of arrays are high. Otherwise, for small-sized or low-sparsity arrays, using the contiguous memory storage format is likely the most efficient approach. Warning The PyTorch API of sparse tensors is in beta and may change in the near future. Sparse COO tensors Currently, PyTorch implements the so-called Coordinate format, or COO format, as the default sparse storage format for storing sparse tensors.
In COO format, the specified elements are stored as tuples of element indices and the corresponding values. In particular, the indices of specified elements are collected in an indices tensor of size (ndim, nse) and with element type torch.int64, and the corresponding values are collected in a values tensor of size (nse,) and with an arbitrary integer or floating point number element type, where ndim is the dimensionality of the tensor and nse is the number of specified elements. Note The memory consumption of a sparse COO tensor is at least (ndim * 8 + <size of element type in bytes>) * nse bytes (plus a constant overhead from storing other tensor data). The memory consumption of a strided tensor is at least product(<tensor shape>) * <size of element type in bytes>. For example, the memory consumption of a 10 000 x 10 000 tensor with 100 000 non-zero 32-bit floating point numbers is at least (2 * 8 + 4) * 100 000 = 2 000 000 bytes when using the COO tensor layout and 10 000 * 10 000 * 4 = 400 000 000 bytes when using the default strided tensor layout. Notice the 200-fold memory saving from using the COO storage format. Construction A sparse COO tensor can be constructed by providing the two tensors of indices and values, as well as the size of the sparse tensor (when it cannot be inferred from the indices and values tensors) to the function torch.sparse_coo_tensor(). Suppose we want to define a sparse tensor with the entry 3 at location (0, 2), entry 4 at location (1, 0), and entry 5 at location (1, 2). Unspecified elements are assumed to have the same value, the fill value, which is zero by default. We would then write: >>> i = [[0, 1, 1],
[2, 0, 2]]
>>> v = [3, 4, 5]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3))
>>> s
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3, 4, 5]),
size=(2, 3), nnz=3, layout=torch.sparse_coo)
>>> s.to_dense()
tensor([[0, 0, 3],
[4, 0, 5]])
Note that the input i is NOT a list of index tuples. If you want to write your indices this way, you should transpose before passing them to the sparse constructor: >>> i = [[0, 2], [1, 0], [1, 2]]
>>> v = [3, 4, 5 ]
>>> s = torch.sparse_coo_tensor(list(zip(*i)), v, (2, 3))
>>> # Or another equivalent formulation to get s
>>> s = torch.sparse_coo_tensor(torch.tensor(i).t(), v, (2, 3))
>>> torch.sparse_coo_tensor(torch.tensor(i).t(), v, torch.Size([2, 3])).to_dense()
tensor([[0, 0, 3],
[4, 0, 5]])
An empty sparse COO tensor can be constructed by specifying its size only: >>> torch.sparse_coo_tensor(size=(2, 3))
tensor(indices=tensor([], size=(2, 0)),
values=tensor([], size=(0,)),
size=(2, 3), nnz=0, layout=torch.sparse_coo)
Hybrid sparse COO tensors PyTorch implements an extension of sparse tensors with scalar values to sparse tensors with (contiguous) tensor values. Such tensors are called hybrid tensors. A PyTorch hybrid COO tensor extends the sparse COO tensor by allowing the values tensor to be a multi-dimensional tensor, so that we have: the indices of specified elements are collected in an indices tensor of size (sparse_dims, nse) and with element type torch.int64; the corresponding (tensor) values are collected in a values tensor of size (nse, dense_dims) and with an arbitrary integer or floating point number element type. Note We use an (M + K)-dimensional tensor to denote an N-dimensional hybrid sparse tensor, where M and K are the numbers of sparse and dense dimensions, respectively, such that M + K == N holds. Suppose we want to create a (2 + 1)-dimensional tensor with the entry [3, 4] at location (0, 2), entry [5, 6] at location (1, 0), and entry [7, 8] at location (1, 2). We would write >>> i = [[0, 1, 1],
[2, 0, 2]]
>>> v = [[3, 4], [5, 6], [7, 8]]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3, 2))
>>> s
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([[3, 4],
[5, 6],
[7, 8]]),
size=(2, 3, 2), nnz=3, layout=torch.sparse_coo)
>>> s.to_dense()
tensor([[[0, 0],
[0, 0],
[3, 4]],
[[5, 6],
[0, 0],
[7, 8]]])
In general, if s is a sparse COO tensor and M = s.sparse_dim(), K = s.dense_dim(), then we have the following invariants:
M + K == len(s.shape) == s.ndim - dimensionality of a tensor is the sum of the number of sparse and dense dimensions,
s.indices().shape == (M, nse) - sparse indices are stored explicitly,
s.values().shape == (nse,) + s.shape[M : M + K] - the values of a hybrid tensor are K-dimensional tensors,
s.values().layout == torch.strided - values are stored as strided tensors. Note Dense dimensions always follow sparse dimensions, that is, mixing of dense and sparse dimensions is not supported. Uncoalesced sparse COO tensors The PyTorch sparse COO tensor format permits uncoalesced sparse tensors, where there may be duplicate coordinates in the indices; in this case, the interpretation is that the value at that index is the sum of all duplicate value entries. For example, one can specify multiple values, 3 and 4, for the same index 1, which leads to a 1-D uncoalesced tensor: >>> i = [[1, 1]]
>>> v = [3, 4]
>>> s=torch.sparse_coo_tensor(i, v, (3,))
>>> s
tensor(indices=tensor([[1, 1]]),
values=tensor( [3, 4]),
size=(3,), nnz=2, layout=torch.sparse_coo)
while the coalescing process will accumulate the multi-valued elements into a single value using summation: >>> s.coalesce()
tensor(indices=tensor([[1]]),
values=tensor([7]),
size=(3,), nnz=1, layout=torch.sparse_coo)
In general, the output of torch.Tensor.coalesce() method is a sparse tensor with the following properties: the indices of specified tensor elements are unique, the indices are sorted in lexicographical order,
torch.Tensor.is_coalesced() returns True. Note For the most part, you shouldn't have to care whether or not a sparse tensor is coalesced, as most operations will work identically given a coalesced or uncoalesced sparse tensor. However, some operations can be implemented more efficiently on uncoalesced tensors, and some on coalesced tensors. For instance, addition of sparse COO tensors is implemented by simply concatenating the indices and values tensors: >>> a = torch.sparse_coo_tensor([[1, 1]], [5, 6], (2,))
>>> b = torch.sparse_coo_tensor([[0, 0]], [7, 8], (2,))
>>> a + b
tensor(indices=tensor([[0, 0, 1, 1]]),
values=tensor([7, 8, 5, 6]),
size=(2,), nnz=4, layout=torch.sparse_coo)
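Coalescing such an uncoalesced sum accumulates the duplicate coordinates, and the result matches the dense computation; a sketch reusing the tensors above:

```python
import torch

a = torch.sparse_coo_tensor([[1, 1]], [5, 6], (2,))
b = torch.sparse_coo_tensor([[0, 0]], [7, 8], (2,))

# the uncoalesced sum keeps all four entries; coalescing sums duplicates
c = (a + b).coalesce()
print(c.values())  # tensor([15, 11])

# consistent with the dense result: [7+8, 5+6]
assert torch.equal(c.to_dense(), torch.tensor([15, 11]))
```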
If you repeatedly perform an operation that can produce duplicate entries (e.g., torch.Tensor.add()), you should occasionally coalesce your sparse tensors to prevent them from growing too large. On the other hand, the lexicographical ordering of indices can be advantageous for implementing algorithms that involve many element selection operations, such as slicing or matrix products. Working with sparse COO tensors Let's consider the following example: >>> i = [[0, 1, 1],
[2, 0, 2]]
>>> v = [[3, 4], [5, 6], [7, 8]]
>>> s = torch.sparse_coo_tensor(i, v, (2, 3, 2))
As mentioned above, a sparse COO tensor is a torch.Tensor instance, and to distinguish it from Tensor instances that use some other layout, one can use the torch.Tensor.is_sparse or torch.Tensor.layout properties: >>> isinstance(s, torch.Tensor)
True
>>> s.is_sparse
True
>>> s.layout == torch.sparse_coo
True
The number of sparse and dense dimensions can be acquired using methods torch.Tensor.sparse_dim() and torch.Tensor.dense_dim(), respectively. For instance: >>> s.sparse_dim(), s.dense_dim()
(2, 1)
If s is a sparse COO tensor then its COO format data can be acquired using methods torch.Tensor.indices() and torch.Tensor.values(). Note Currently, one can acquire the COO format data only when the tensor instance is coalesced: >>> s.indices()
RuntimeError: Cannot get indices on an uncoalesced tensor, please call .coalesce() first
For acquiring the COO format data of an uncoalesced tensor, use torch.Tensor._values() and torch.Tensor._indices(): >>> s._indices()
tensor([[0, 1, 1],
[2, 0, 2]])
Constructing a new sparse COO tensor results in a tensor that is not coalesced: >>> s.is_coalesced()
False
but one can construct a coalesced copy of a sparse COO tensor using the torch.Tensor.coalesce() method: >>> s2 = s.coalesce()
>>> s2.indices()
tensor([[0, 1, 1],
[2, 0, 2]])
When working with uncoalesced sparse COO tensors, one must take into account the additive nature of uncoalesced data: values at the same index are the terms of a sum whose evaluation gives the value of the corresponding tensor element. For example, scalar multiplication of an uncoalesced sparse tensor can be implemented by multiplying all the uncoalesced values by the scalar, because c * (a + b) == c * a + c * b holds. However, a nonlinear operation, say, a square root, cannot be implemented by applying the operation to the uncoalesced data, because sqrt(a + b) == sqrt(a) + sqrt(b) does not hold in general. Slicing (with positive step) of a sparse COO tensor is supported only for dense dimensions. Indexing is supported for both sparse and dense dimensions: >>> s[1]
tensor(indices=tensor([[0, 2]]),
values=tensor([[5, 6],
[7, 8]]),
size=(3, 2), nnz=2, layout=torch.sparse_coo)
>>> s[1, 0, 1]
tensor(6)
>>> s[1, 0, 1:]
tensor([6])
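The additive semantics described above can be checked directly. A minimal sketch (indices and values chosen for illustration), showing that a linear operation such as scalar multiplication gives the same element values whether or not the tensor is coalesced first:

```python
import torch

# An uncoalesced tensor: index 1 carries the terms 3 and 4 (element value 7).
s = torch.sparse_coo_tensor([[1, 1]], [3, 4], (3,))

# Scalar multiplication may act directly on the uncoalesced values,
# because c * (a + b) == c * a + c * b.
lhs = (2 * s).to_dense()
rhs = 2 * s.to_dense()
assert torch.equal(lhs, rhs)  # both are tensor([0, 14, 0])
```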
In PyTorch, the fill value of a sparse tensor cannot be specified explicitly and is assumed to be zero in general. However, there exist operations that may interpret the fill value differently. For instance, torch.sparse.softmax() computes the softmax with the assumption that the fill value is negative infinity. Supported Linear Algebra operations The following table summarizes supported Linear Algebra operations on sparse matrices where the operand layouts may vary. Here T[layout] denotes a tensor with a given layout. Similarly, M[layout] denotes a matrix (2-D PyTorch tensor), and V[layout] denotes a vector (1-D PyTorch tensor). In addition, f denotes a scalar (float or 0-D PyTorch tensor), * is element-wise multiplication, and @ is matrix multiplication.
PyTorch operation Sparse grad? Layout signature
torch.mv() no M[sparse_coo] @ V[strided] -> V[strided]
torch.matmul() no M[sparse_coo] @ M[strided] -> M[strided]
torch.mm() no M[sparse_coo] @ M[strided] -> M[strided]
torch.sparse.mm() yes M[sparse_coo] @ M[strided] -> M[strided]
torch.smm() no M[sparse_coo] @ M[strided] -> M[sparse_coo]
torch.hspmm() no M[sparse_coo] @ M[strided] -> M[hybrid sparse_coo]
torch.bmm() no T[sparse_coo] @ T[strided] -> T[strided]
torch.addmm() no f * M[strided] + f * (M[sparse_coo] @ M[strided]) -> M[strided]
torch.sparse.addmm() yes f * M[strided] + f * (M[sparse_coo] @ M[strided]) -> M[strided]
torch.sspaddmm() no f * M[sparse_coo] + f * (M[sparse_coo] @ M[strided]) -> M[sparse_coo]
torch.lobpcg() no GENEIG(M[sparse_coo]) -> M[strided], M[strided]
torch.pca_lowrank() yes PCA(M[sparse_coo]) -> M[strided], M[strided], M[strided]
torch.svd_lowrank() yes SVD(M[sparse_coo]) -> M[strided], M[strided], M[strided] where the "Sparse grad?" column indicates whether the PyTorch operation supports backward with respect to the sparse matrix argument. All PyTorch operations, except torch.smm(), support backward with respect to strided matrix arguments. Note Currently, PyTorch does not support matrix multiplication with the layout signature M[strided] @ M[sparse_coo]. However, applications can still compute this using the matrix identity D @
S == (S.t() @ D.t()).t().
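The identity above can be sketched in code (shapes here are illustrative):

```python
import torch

D = torch.randn(2, 3)              # strided matrix
S = torch.randn(3, 4).to_sparse()  # sparse COO matrix

# D @ S is not supported directly; compute it via D @ S == (S.t() @ D.t()).t()
res = torch.sparse.mm(S.t(), D.t()).t()
assert torch.allclose(res, D @ S.to_dense(), atol=1e-6)
```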
class torch.Tensor
The following methods are specific to sparse tensors:
is_sparse
Is True if the Tensor uses sparse storage layout, False otherwise.
dense_dim() → int
Return the number of dense dimensions in a sparse tensor self. Warning Throws an error if self is not a sparse tensor. See also Tensor.sparse_dim() and hybrid tensors.
sparse_dim() → int
Return the number of sparse dimensions in a sparse tensor self. Warning Throws an error if self is not a sparse tensor. See also Tensor.dense_dim() and hybrid tensors.
sparse_mask(mask) → Tensor
Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask. The values of the mask sparse tensor are ignored. self and mask tensors must have the same shape. Note The returned sparse tensor has the same indices as the sparse tensor mask, even when the corresponding values in self are zeros. Parameters
mask (Tensor) – a sparse tensor whose indices are used as a filter Example: >>> nse = 5
>>> dims = (5, 5, 2, 2)
>>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),
... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)
>>> V = torch.randn(nse, dims[2], dims[3])
>>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
[0, 1, 4, 3]]),
values=tensor([[[ 1.6550, 0.2397],
[-0.1611, -0.0779]],
[[ 0.2326, -1.0558],
[ 1.4711, 1.9678]],
[[-0.5138, -0.0411],
[ 1.9417, 0.5158]],
[[ 0.0793, 0.0036],
[-0.2569, -0.1055]]]),
size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
sparse_resize_(size, sparse_dim, dense_dim) → Tensor
Resizes the self sparse tensor to the desired size and number of sparse and dense dimensions. Note If the number of specified elements in self is zero, then size, sparse_dim, and dense_dim can be any size and positive integers such that len(size) == sparse_dim +
dense_dim. If self specifies one or more elements, however, then each dimension in size must not be smaller than the corresponding dimension of self, sparse_dim must equal the number of sparse dimensions in self, and dense_dim must equal the number of dense dimensions in self. Warning Throws an error if self is not a sparse tensor. Parameters
size (torch.Size) – the desired size. If self is a non-empty sparse tensor, the desired size cannot be smaller than the original size.
sparse_dim (int) – the number of sparse dimensions
dense_dim (int) – the number of dense dimensions
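A minimal sketch of resizing an empty sparse tensor (sizes chosen for illustration):

```python
import torch

# An empty 3x3 sparse tensor: nnz = 0, sparse_dim = 2, dense_dim = 0.
i = torch.empty((2, 0), dtype=torch.int64)
s = torch.sparse_coo_tensor(i, torch.empty(0), (3, 3))

s.sparse_resize_((5, 5), 2, 0)  # size, sparse_dim, dense_dim
assert s.shape == torch.Size([5, 5])
```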
sparse_resize_and_clear_(size, sparse_dim, dense_dim) → Tensor
Removes all specified elements from a sparse tensor self and resizes self to the desired size and the number of sparse and dense dimensions. Parameters
size (torch.Size) – the desired size.
sparse_dim (int) – the number of sparse dimensions
dense_dim (int) – the number of dense dimensions
to_dense() → Tensor
Creates a strided copy of self. Warning Throws an error if self is a strided tensor. Example: >>> s = torch.sparse_coo_tensor(
... torch.tensor([[1, 1],
... [0, 2]]),
... torch.tensor([9, 10]),
... size=(3, 3))
>>> s.to_dense()
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
to_sparse(sparseDims) → Tensor
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format. Parameters
sparseDims (int, optional) – the number of sparse dimensions to include in the new sparse tensor Example: >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
[0, 2]]),
values=tensor([ 9, 10]),
size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
values=tensor([[ 9, 0, 10]]),
size=(3, 3), nnz=1, layout=torch.sparse_coo)
coalesce() → Tensor
Returns a coalesced copy of self if self is an uncoalesced tensor. Returns self if self is a coalesced tensor. Warning Throws an error if self is not a sparse COO tensor.
is_coalesced() → bool
Returns True if self is a sparse COO tensor that is coalesced, False otherwise. Warning Throws an error if self is not a sparse COO tensor. See coalesce() and uncoalesced tensors.
indices() → Tensor
Return the indices tensor of a sparse COO tensor. Warning Throws an error if self is not a sparse COO tensor. See also Tensor.values(). Note This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
values() → Tensor
Return the values tensor of a sparse COO tensor. Warning Throws an error if self is not a sparse COO tensor. See also Tensor.indices(). Note This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
The following torch.Tensor methods support sparse COO tensors: add() add_() addmm() addmm_() any() asin() asin_() arcsin() arcsin_() bmm() clone() deg2rad() deg2rad_() detach() detach_() dim() div() div_() floor_divide() floor_divide_() get_device() index_select() isnan() log1p() log1p_() mm() mul() mul_() mv() narrow_copy() neg() neg_() negative() negative_() numel() rad2deg() rad2deg_() resize_as_() size() pow() sqrt() square() smm() sspaddmm() sub() sub_() t() t_() transpose() transpose_() zero_() Sparse tensor functions
torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False) → Tensor
Constructs a sparse tensor in COO(rdinate) format with specified values at the given indices. Note This function returns an uncoalesced tensor. Parameters
indices (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch.LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.
values (array_like) – Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
size (list, tuple, or torch.Size, optional) – Size of the sparse tensor. If not provided, the size will be inferred as the minimum size big enough to hold all non-zero elements. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: if None, infers data type from values.
device (torch.device, optional) – the desired device of the returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> i = torch.tensor([[0, 1, 1],
... [2, 0, 2]])
>>> v = torch.tensor([3, 4, 5], dtype=torch.float32)
>>> torch.sparse_coo_tensor(i, v, [2, 4])
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
size=(2, 4), nnz=3, layout=torch.sparse_coo)
>>> torch.sparse_coo_tensor(i, v) # Shape inference
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
size=(2, 3), nnz=3, layout=torch.sparse_coo)
>>> torch.sparse_coo_tensor(i, v, [2, 4],
... dtype=torch.float64,
... device=torch.device('cuda:0'))
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,
layout=torch.sparse_coo)
# Create an empty sparse tensor with the following invariants:
# 1. sparse_dim + dense_dim = len(SparseTensor.shape)
# 2. SparseTensor._indices().shape = (sparse_dim, nnz)
# 3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])
#
# For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and
# sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))
>>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])
tensor(indices=tensor([], size=(1, 0)),
values=tensor([], size=(0,)),
size=(1,), nnz=0, layout=torch.sparse_coo)
# and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and
# sparse_dim = 1
>>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])
tensor(indices=tensor([], size=(1, 0)),
values=tensor([], size=(0, 2)),
size=(1, 2), nnz=0, layout=torch.sparse_coo)
torch.sparse.sum(input, dim=None, dtype=None) [source]
Returns the sum of each row of the sparse tensor input in the given dimensions dim. If dim is a list of dimensions, reduce over all of them. When summing over all of the sparse_dim dimensions, this method returns a dense tensor instead of a sparse tensor. All summed dim are squeezed (see torch.squeeze()), resulting in an output tensor having dim fewer dimensions than input. During backward, only gradients at the nnz locations of input will propagate back. Note that the gradient of input is coalesced. Parameters
input (Tensor) – the input sparse tensor
dim (int or tuple of python:ints) – a dimension or a list of dimensions to reduce. Default: reduce over all dims.
dtype (torch.dtype, optional) – the desired data type of the returned Tensor. Default: dtype of input. Example: >>> nnz = 3
>>> dims = [5, 5, 2, 3]
>>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),
torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)
>>> V = torch.randn(nnz, dims[2], dims[3])
>>> size = torch.Size(dims)
>>> S = torch.sparse_coo_tensor(I, V, size)
>>> S
tensor(indices=tensor([[2, 0, 3],
[2, 4, 1]]),
values=tensor([[[-0.6438, -1.6467, 1.4004],
[ 0.3411, 0.0918, -0.2312]],
[[ 0.5348, 0.0634, -2.0494],
[-0.7125, -1.0646, 2.1844]],
[[ 0.1276, 0.1874, -0.6334],
[-1.9682, -0.5340, 0.7483]]]),
size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo)
# when sum over only part of sparse_dims, return a sparse tensor
>>> torch.sparse.sum(S, [1, 3])
tensor(indices=tensor([[0, 2, 3]]),
values=tensor([[-1.4512, 0.4073],
[-0.8901, 0.2017],
[-0.3183, -1.7539]]),
size=(5, 2), nnz=3, layout=torch.sparse_coo)
# when sum over all sparse dim, return a dense tensor
# with summed dims squeezed
>>> torch.sparse.sum(S, [0, 1, 3])
tensor([-2.6596, -1.1450])
torch.sparse.addmm(mat, mat1, mat2, beta=1.0, alpha=1.0) [source]
This function does the exact same thing as torch.addmm() in the forward, except that it supports backward for the sparse matrix mat1. mat1 needs to have sparse_dim = 2. Note that the gradient of mat1 is a coalesced sparse tensor. Parameters
mat (Tensor) – a dense matrix to be added
mat1 (Tensor) – a sparse matrix to be multiplied
mat2 (Tensor) – a dense matrix to be multiplied
beta (Number, optional) – multiplier for mat (\beta)
alpha (Number, optional) – multiplier for mat1 @ mat2 (\alpha)
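The source gives no example here; a minimal usage sketch (shapes chosen for illustration), showing that the sparse argument receives a sparse gradient:

```python
import torch

mat = torch.randn(2, 3)
mat1 = torch.randn(2, 4).to_sparse().requires_grad_(True)
mat2 = torch.randn(4, 3, requires_grad=True)

# out = 0.5 * mat + 2.0 * (mat1 @ mat2)
out = torch.sparse.addmm(mat, mat1, mat2, beta=0.5, alpha=2.0)
out.sum().backward()
assert mat1.grad.is_sparse  # the sparse argument gets a sparse gradient
```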
torch.sparse.mm(mat1, mat2) [source]
Performs a matrix multiplication of the sparse matrix mat1 and the (sparse or strided) matrix mat2. Similar to torch.mm(), if mat1 is an (n \times m) tensor and mat2 is an (m \times p) tensor, out will be an (n \times p) tensor. mat1 needs to have sparse_dim = 2. This function also supports backward for both matrices. Note that the gradient of mat1 is a coalesced sparse tensor. Parameters
mat1 (SparseTensor) – the first sparse matrix to be multiplied
mat2 (Tensor) – the second matrix to be multiplied, which could be sparse or dense Shape:
The format of the output tensor of this function follows: - sparse x sparse -> sparse - sparse x dense -> dense Example: >>> a = torch.randn(2, 3).to_sparse().requires_grad_(True)
>>> a
tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
[0, 1, 2, 0, 1, 2]]),
values=tensor([ 1.5901, 0.0183, -0.6146, 1.8061, -0.0112, 0.6302]),
size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True)
>>> b = torch.randn(3, 2, requires_grad=True)
>>> b
tensor([[-0.6479, 0.7874],
[-1.2056, 0.5641],
[-1.1716, -0.9923]], requires_grad=True)
>>> y = torch.sparse.mm(a, b)
>>> y
tensor([[-0.3323, 1.8723],
[-1.8951, 0.7904]], grad_fn=<SparseAddmmBackward>)
>>> y.sum().backward()
>>> a.grad
tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
[0, 1, 2, 0, 1, 2]]),
values=tensor([ 0.1394, -0.6415, -2.1639, 0.1394, -0.6415, -2.1639]),
size=(2, 3), nnz=6, layout=torch.sparse_coo)
torch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) → Tensor
Matrix multiplies a sparse tensor mat1 with a dense tensor mat2, then adds the sparse tensor input to the result. Note: This function is equivalent to torch.addmm(), except input and mat1 are sparse. Parameters
input (Tensor) – a sparse matrix to be added
mat1 (Tensor) – a sparse matrix to be matrix multiplied
mat2 (Tensor) – a dense matrix to be matrix multiplied Keyword Arguments
beta (Number, optional) – multiplier for input (\beta)
alpha (Number, optional) – multiplier for mat1 @ mat2 (\alpha)
out (Tensor, optional) – the output tensor.
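A minimal sketch (shapes chosen for illustration); unlike torch.addmm(), the result keeps the sparse layout:

```python
import torch

inp = torch.randn(2, 3).to_sparse()   # sparse matrix to be added
mat1 = torch.randn(2, 4).to_sparse()  # sparse factor
mat2 = torch.randn(4, 3)              # dense factor

out = torch.sspaddmm(inp, mat1, mat2)
assert out.is_sparse and out.shape == (2, 3)
```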
torch.hspmm(mat1, mat2, *, out=None) → Tensor
Performs a matrix multiplication of a sparse COO matrix mat1 and a strided matrix mat2. The result is a (1 + 1)-dimensional hybrid COO matrix. Parameters
mat1 (Tensor) – the first sparse matrix to be matrix multiplied
mat2 (Tensor) – the second strided matrix to be matrix multiplied Keyword Arguments
out (Tensor, optional) – the output tensor.
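A minimal sketch (shapes chosen for illustration), checking the hybrid layout of the result:

```python
import torch

mat1 = torch.randn(3, 4).to_sparse()
mat2 = torch.randn(4, 2)

out = torch.hspmm(mat1, mat2)  # hybrid result: 1 sparse dim + 1 dense dim
assert out.is_sparse and (out.sparse_dim(), out.dense_dim()) == (1, 1)
```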
torch.smm(input, mat) → Tensor
Performs a matrix multiplication of the sparse matrix input with the dense matrix mat. Parameters
input (Tensor) – a sparse matrix to be matrix multiplied
mat (Tensor) – a dense matrix to be matrix multiplied
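A minimal sketch (shapes chosen for illustration); per the table above, the result keeps the sparse_coo layout:

```python
import torch

a = torch.randn(2, 4).to_sparse()
b = torch.randn(4, 3)

out = torch.smm(a, b)  # result keeps the sparse_coo layout
assert out.is_sparse and out.shape == (2, 3)
```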
torch.sparse.softmax(input, dim, dtype=None) [source]
Applies a softmax function. Softmax is defined as: \text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} where i, j run over sparse tensor indices and unspecified entries are ignored. This is equivalent to defining unspecified entries as negative infinity, so that \exp(x_k) = 0 when the entry with index k has not been specified. It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None
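A small sketch (values chosen for illustration): unspecified entries behave as negative infinity, so the specified entries of each slice sum to 1:

```python
import torch

dense = torch.tensor([[0., 0., 1.],
                      [2., 0., 3.]])
s = dense.to_sparse()  # zeros become unspecified entries

out = torch.sparse.softmax(s, dim=1)
# each row's specified entries sum to 1; unspecified entries act as -inf
assert torch.allclose(out.to_dense().sum(dim=1), torch.ones(2))
```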
torch.sparse.log_softmax(input, dim, dtype=None) [source]
Applies a softmax function followed by a logarithm. See softmax for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None
Other functions The following torch functions support sparse COO tensors: cat() dstack() empty() empty_like() hstack() index_select() is_complex() is_floating_point() is_nonzero() is_same_size() is_signed() is_tensor() lobpcg() mm() native_norm() pca_lowrank() select() stack() svd_lowrank() unsqueeze() vstack() zeros() zeros_like() | torch.sparse |
torch.sparse.addmm(mat, mat1, mat2, beta=1.0, alpha=1.0) [source]
This function does the exact same thing as torch.addmm() in the forward, except that it supports backward for the sparse matrix mat1. mat1 needs to have sparse_dim = 2. Note that the gradient of mat1 is a coalesced sparse tensor. Parameters
mat (Tensor) – a dense matrix to be added
mat1 (Tensor) – a sparse matrix to be multiplied
mat2 (Tensor) – a dense matrix to be multiplied
beta (Number, optional) – multiplier for mat (\beta)
alpha (Number, optional) – multiplier for mat1 @ mat2 (\alpha)
torch.sparse.log_softmax(input, dim, dtype=None) [source]
Applies a softmax function followed by a logarithm. See softmax for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None
torch.sparse.mm(mat1, mat2) [source]
Performs a matrix multiplication of the sparse matrix mat1 and the (sparse or strided) matrix mat2. Similar to torch.mm(), if mat1 is an (n \times m) tensor and mat2 is an (m \times p) tensor, out will be an (n \times p) tensor. mat1 needs to have sparse_dim = 2. This function also supports backward for both matrices. Note that the gradient of mat1 is a coalesced sparse tensor. Parameters
mat1 (SparseTensor) – the first sparse matrix to be multiplied
mat2 (Tensor) – the second matrix to be multiplied, which could be sparse or dense Shape:
The format of the output tensor of this function follows: - sparse x sparse -> sparse - sparse x dense -> dense Example: >>> a = torch.randn(2, 3).to_sparse().requires_grad_(True)
>>> a
tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
[0, 1, 2, 0, 1, 2]]),
values=tensor([ 1.5901, 0.0183, -0.6146, 1.8061, -0.0112, 0.6302]),
size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True)
>>> b = torch.randn(3, 2, requires_grad=True)
>>> b
tensor([[-0.6479, 0.7874],
[-1.2056, 0.5641],
[-1.1716, -0.9923]], requires_grad=True)
>>> y = torch.sparse.mm(a, b)
>>> y
tensor([[-0.3323, 1.8723],
[-1.8951, 0.7904]], grad_fn=<SparseAddmmBackward>)
>>> y.sum().backward()
>>> a.grad
tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
[0, 1, 2, 0, 1, 2]]),
values=tensor([ 0.1394, -0.6415, -2.1639, 0.1394, -0.6415, -2.1639]),
size=(2, 3), nnz=6, layout=torch.sparse_coo) | torch.sparse#torch.sparse.mm |
torch.sparse.softmax(input, dim, dtype=None) [source]
Applies a softmax function. Softmax is defined as: \text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} where i, j run over sparse tensor indices and unspecified entries are ignored. This is equivalent to defining unspecified entries as negative infinity, so that \exp(x_k) = 0 when the entry with index k has not been specified. It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None
torch.sparse.sum(input, dim=None, dtype=None) [source]
Returns the sum of each row of the sparse tensor input in the given dimensions dim. If dim is a list of dimensions, reduce over all of them. When summing over all of the sparse_dim dimensions, this method returns a dense tensor instead of a sparse tensor. All summed dim are squeezed (see torch.squeeze()), resulting in an output tensor having dim fewer dimensions than input. During backward, only gradients at the nnz locations of input will propagate back. Note that the gradient of input is coalesced. Parameters
input (Tensor) – the input sparse tensor
dim (int or tuple of python:ints) – a dimension or a list of dimensions to reduce. Default: reduce over all dims.
dtype (torch.dtype, optional) – the desired data type of the returned Tensor. Default: dtype of input. Example: >>> nnz = 3
>>> dims = [5, 5, 2, 3]
>>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),
torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)
>>> V = torch.randn(nnz, dims[2], dims[3])
>>> size = torch.Size(dims)
>>> S = torch.sparse_coo_tensor(I, V, size)
>>> S
tensor(indices=tensor([[2, 0, 3],
[2, 4, 1]]),
values=tensor([[[-0.6438, -1.6467, 1.4004],
[ 0.3411, 0.0918, -0.2312]],
[[ 0.5348, 0.0634, -2.0494],
[-0.7125, -1.0646, 2.1844]],
[[ 0.1276, 0.1874, -0.6334],
[-1.9682, -0.5340, 0.7483]]]),
size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo)
# when sum over only part of sparse_dims, return a sparse tensor
>>> torch.sparse.sum(S, [1, 3])
tensor(indices=tensor([[0, 2, 3]]),
values=tensor([[-1.4512, 0.4073],
[-0.8901, 0.2017],
[-0.3183, -1.7539]]),
size=(5, 2), nnz=3, layout=torch.sparse_coo)
# when sum over all sparse dim, return a dense tensor
# with summed dims squeezed
>>> torch.sparse.sum(S, [0, 1, 3])
tensor([-2.6596, -1.1450]) | torch.sparse#torch.sparse.sum |
torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False) → Tensor
Constructs a sparse tensor in COO(rdinate) format with specified values at the given indices. Note This function returns an uncoalesced tensor. Parameters
indices (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch.LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.
values (array_like) – Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
size (list, tuple, or torch.Size, optional) – Size of the sparse tensor. If not provided, the size will be inferred as the minimum size big enough to hold all non-zero elements. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: if None, infers data type from values.
device (torch.device, optional) – the desired device of the returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> i = torch.tensor([[0, 1, 1],
... [2, 0, 2]])
>>> v = torch.tensor([3, 4, 5], dtype=torch.float32)
>>> torch.sparse_coo_tensor(i, v, [2, 4])
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
size=(2, 4), nnz=3, layout=torch.sparse_coo)
>>> torch.sparse_coo_tensor(i, v) # Shape inference
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
size=(2, 3), nnz=3, layout=torch.sparse_coo)
>>> torch.sparse_coo_tensor(i, v, [2, 4],
... dtype=torch.float64,
... device=torch.device('cuda:0'))
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,
layout=torch.sparse_coo)
# Create an empty sparse tensor with the following invariants:
# 1. sparse_dim + dense_dim = len(SparseTensor.shape)
# 2. SparseTensor._indices().shape = (sparse_dim, nnz)
# 3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])
#
# For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and
# sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))
>>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])
tensor(indices=tensor([], size=(1, 0)),
values=tensor([], size=(0,)),
size=(1,), nnz=0, layout=torch.sparse_coo)
# and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and
# sparse_dim = 1
>>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])
tensor(indices=tensor([], size=(1, 0)),
values=tensor([], size=(0, 2)),
size=(1, 2), nnz=0, layout=torch.sparse_coo) | torch.generated.torch.sparse_coo_tensor#torch.sparse_coo_tensor |
torch.split(tensor, split_size_or_sections, dim=0) [source]
Splits the tensor into chunks. Each chunk is a view of the original tensor. If split_size_or_sections is an integer type, then tensor will be split into equally sized chunks (if possible). The last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size. If split_size_or_sections is a list, then tensor will be split into len(split_size_or_sections) chunks with sizes in dim according to split_size_or_sections. Parameters
tensor (Tensor) – tensor to split.
split_size_or_sections (int or list(int)) – size of a single chunk or list of sizes for each chunk
dim (int) – dimension along which to split the tensor. Example:
>>> a = torch.arange(10).reshape(5,2)
>>> a
tensor([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
>>> torch.split(a, 2)
(tensor([[0, 1],
[2, 3]]),
tensor([[4, 5],
[6, 7]]),
tensor([[8, 9]]))
>>> torch.split(a, [1,4])
(tensor([[0, 1]]),
tensor([[2, 3],
[4, 5],
[6, 7],
[8, 9]])) | torch.generated.torch.split#torch.split |
torch.sqrt(input, *, out=None) → Tensor
Returns a new tensor with the square-root of the elements of input. \text{out}_i = \sqrt{\text{input}_i}
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
>>> torch.sqrt(a)
tensor([ nan, 1.0112, 0.2883, 0.6933]) | torch.generated.torch.sqrt#torch.sqrt |
torch.square(input, *, out=None) → Tensor
Returns a new tensor with the square of the elements of input. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
>>> torch.square(a)
tensor([ 4.3077, 1.0457, 0.0069, 0.2310]) | torch.generated.torch.square#torch.square |
torch.squeeze(input, dim=None, *, out=None) → Tensor
Returns a tensor with all the dimensions of input of size 1 removed. For example, if input is of shape (A \times 1 \times B \times C \times 1 \times D), then the out tensor will be of shape (A \times B \times C \times D). When dim is given, a squeeze operation is done only in the given dimension. If input is of shape (A \times 1 \times B), squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape (A \times B). Note The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other. Warning If the tensor has a batch dimension of size 1, then squeeze(input) will also remove the batch dimension, which can lead to unexpected errors. Parameters
input (Tensor) – the input tensor.
dim (int, optional) – if given, the input will be squeezed only in this dimension Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])
>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2]) | torch.generated.torch.squeeze#torch.squeeze |
torch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) → Tensor
Matrix multiplies a sparse tensor mat1 with a dense tensor mat2, then adds the sparse tensor input to the result. Note: This function is equivalent to torch.addmm(), except input and mat1 are sparse. Parameters
input (Tensor) – a sparse matrix to be added
mat1 (Tensor) – a sparse matrix to be matrix multiplied
mat2 (Tensor) – a dense matrix to be matrix multiplied Keyword Arguments
beta (Number, optional) – multiplier for input (\beta)
alpha (Number, optional) – multiplier for mat1 @ mat2 (\alpha)
out (Tensor, optional) – the output tensor.
torch.stack(tensors, dim=0, *, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension. All tensors need to be of the same size. Parameters
tensors (sequence of Tensors) – sequence of tensors to concatenate
dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive) Keyword Arguments
out (Tensor, optional) – the output tensor.
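The source gives no example here; a minimal sketch showing where the new dimension is inserted:

```python
import torch

a = torch.tensor([1, 2])
b = torch.tensor([3, 4])

assert torch.stack([a, b], dim=0).shape == (2, 2)   # new leading dimension
assert torch.equal(torch.stack([a, b], dim=1),
                   torch.tensor([[1, 3], [2, 4]]))  # new trailing dimension
```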
torch.std(input, unbiased=True) → Tensor
Returns the standard-deviation of all elements in the input tensor. If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) – the input tensor.
unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8166, -1.3802, -0.3560]])
>>> torch.std(a)
tensor(0.5130)
torch.std(input, dim, unbiased=True, keepdim=False, *, out=None) β Tensor
Returns the standard-deviation of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) β the input tensor.
dim (int or tuple of python:ints) β the dimension or dimensions to reduce.
unbiased (bool) β whether to use the unbiased estimation or not
keepdim (bool) β whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.2035, 1.2959, 1.8101, -0.4644],
[ 1.5027, -0.3270, 0.5905, 0.6538],
[-1.5745, 1.3330, -0.5596, -0.6548],
[ 0.1264, -0.5080, 1.6420, 0.1992]])
>>> torch.std(a, dim=1)
tensor([ 1.0311, 0.7477, 1.2204, 0.9087]) | torch.generated.torch.std#torch.std |
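The keepdim flag described above is not covered by the example; a minimal sketch using constant rows (whose standard deviation is exactly zero):

```python
import torch

a = torch.ones(4, 4)
out = torch.std(a, dim=1, keepdim=True)   # shape (4, 1) instead of (4,)
# constant rows have zero standard deviation
```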
torch.std_mean(input, unbiased=True) -> (Tensor, Tensor)
Returns the standard-deviation and mean of all elements in the input tensor. If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) β the input tensor.
unbiased (bool) β whether to use the unbiased estimation or not Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[0.3364, 0.3591, 0.9462]])
>>> torch.std_mean(a)
(tensor(0.3457), tensor(0.5472))
torch.std_mean(input, dim, unbiased=True, keepdim=False) -> (Tensor, Tensor)
Returns the standard-deviation and mean of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) β the input tensor.
dim (int or tuple of python:ints) β the dimension or dimensions to reduce.
unbiased (bool) β whether to use the unbiased estimation or not
keepdim (bool) β whether the output tensor has dim retained or not. Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.5648, -0.5984, -1.2676, -1.4471],
[ 0.9267, 1.0612, 1.1050, -0.6014],
[ 0.0154, 1.9301, 0.0125, -1.0904],
[-1.9711, -0.7748, -1.3840, 0.5067]])
>>> torch.std_mean(a, 1)
(tensor([0.9110, 0.8197, 1.2552, 1.0608]), tensor([-0.6871, 0.6229, 0.2169, -0.9058])) | torch.generated.torch.std_mean#torch.std_mean |
torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None) [source]
Short-time Fourier transform (STFT). Warning From version 1.8.0, return_complex must always be given explicitly for real inputs and return_complex=False has been deprecated. Strongly prefer return_complex=True as in a future pytorch release, this function will only return complex tensors. Note that torch.view_as_real() can be used to recover a real tensor with an extra last dimension for real and imaginary components. The STFT computes the Fourier transform of short overlapping windows of the input, giving the frequency components of the signal as they change over time. The interface of this function is modeled after the librosa stft function. Ignoring the optional batch dimension, this method computes the following expression:
X[m, \omega] = \sum_{k=0}^{\text{win\_length}-1} \text{window}[k]\ \text{input}[m \times \text{hop\_length} + k]\ \exp\left(-j \frac{2 \pi \cdot \omega k}{\text{win\_length}}\right),
where m is the index of the sliding window, and ω is the frequency, with 0 ≤ ω < n_fft. When onesided is the default value True,
input must be either a 1-D time sequence or a 2-D batch of time sequences. If hop_length is None (default), it is treated as equal to floor(n_fft / 4). If win_length is None (default), it is treated as equal to n_fft.
window can be a 1-D tensor of size win_length, e.g., from torch.hann_window(). If window is None (default), it is treated as if having 1 everywhere in the window. If win_length < n_fft, window will be padded on both sides to length n_fft before being applied. If center is True (default), input will be padded on both sides so that the t-th frame is centered at time t × hop_length. Otherwise, the t-th frame begins at time t × hop_length.
pad_mode determines the padding method used on input when center is True. See torch.nn.functional.pad() for all available options. Default is "reflect". If onesided is True (default for real input), only values for ω in [0, 1, 2, …, ⌊n_fft / 2⌋ + 1] are returned because the real-to-complex Fourier transform satisfies the conjugate symmetry, i.e., X[m, ω] = X[m, n_fft − ω]*. Note if the input or window tensors are complex, then onesided output is not possible. If normalized is True (default is False), the function returns the normalized STFT results, i.e., multiplied by (frame_length)^{-0.5}. If return_complex is True (default if input is complex), the return is a input.dim() + 1 dimensional complex tensor. If False, the output is a input.dim() + 2 dimensional real tensor where the last dimension represents the real and imaginary components. Returns either a complex tensor of size (* × N × T) if return_complex is true, or a real tensor of size (* × N × T × 2), where * is the optional batch size of input, N is the number of frequencies where STFT is applied, and T is the total number of frames used. Warning This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result. Parameters
input (Tensor) β the input tensor
n_fft (int) β size of Fourier transform
hop_length (int, optional) β the distance between neighboring sliding window frames. Default: None (treated as equal to floor(n_fft / 4))
win_length (int, optional) β the size of window frame and STFT filter. Default: None (treated as equal to n_fft)
window (Tensor, optional) – the optional window function. Default: None (treated as a window of all 1s)
center (bool, optional) – whether to pad input on both sides so that the t-th frame is centered at time t × hop_length. Default: True
pad_mode (string, optional) β controls the padding method used when center is True. Default: "reflect"
normalized (bool, optional) β controls whether to return the normalized STFT results Default: False
onesided (bool, optional) β controls whether to return half of results to avoid redundancy for real inputs. Default: True for real input and window, False otherwise.
return_complex (bool, optional) β whether to return a complex tensor, or a real tensor with an extra last dimension for the real and imaginary components. Returns
A tensor containing the STFT result with shape described above Return type
Tensor | torch.generated.torch.stft#torch.stft |
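A minimal sketch following the warning above, passing return_complex=True explicitly (the sample rate, tone frequency, and framing parameters are illustrative):

```python
import math
import torch

fs, n = 100.0, 400
t = torch.arange(n) / fs
signal = torch.sin(2 * math.pi * 10.0 * t)           # 10 Hz tone

spec = torch.stft(signal, n_fft=64, hop_length=16,
                  window=torch.hann_window(64),
                  return_complex=True)               # as the warning recommends
# onesided output: (n_fft // 2 + 1) frequencies x (1 + n // hop_length) frames
```

Magnitudes can then be taken with spec.abs(), or the real/imaginary layout recovered via torch.view_as_real(spec).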
torch.Storage A torch.Storage is a contiguous, one-dimensional array of a single data type. Every torch.Tensor has a corresponding storage of the same data type.
class torch.FloatStorage(*args, **kwargs) [source]
bfloat16()
Casts this storage to bfloat16 type
bool()
Casts this storage to bool type
byte()
Casts this storage to byte type
char()
Casts this storage to char type
clone()
Returns a copy of this storage
complex_double()
Casts this storage to complex double type
complex_float()
Casts this storage to complex float type
copy_()
cpu()
Returns a CPU copy of this storage if it's not already on the CPU
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned. Parameters
device (int) β The destination GPU id. Defaults to the current device.
non_blocking (bool) β If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
**kwargs β For compatibility, may contain the key async in place of the non_blocking argument.
data_ptr()
device
double()
Casts this storage to double type
dtype
element_size()
fill_()
float()
Casts this storage to float type
static from_buffer()
static from_file(filename, shared=False, size=0) β Storage
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file. size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed. Parameters
filename (str) β file name to map
shared (bool) β whether to share memory
size (int) β number of elements in the storage
get_device()
half()
Casts this storage to half type
int()
Casts this storage to int type
is_cuda: bool = False
is_pinned()
is_shared()
is_sparse: bool = False
long()
Casts this storage to long type
new()
pin_memory()
Copies the storage to pinned memory, if it's not already pinned.
resize_()
share_memory_()
Moves the storage to shared memory. This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized. Returns: self
short()
Casts this storage to short type
size()
tolist()
Returns a list containing the elements of this storage
type(dtype=None, non_blocking=False, **kwargs)
Returns the type if dtype is not provided, else casts this object to the specified type. If this is already of the correct type, no copy is performed and the original object is returned. Parameters
dtype (type or string) β The desired type
non_blocking (bool) β If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs β For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated. | torch.storage |
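A short sketch of the storage API described above, obtained from a tensor; the exact storage class returned varies across PyTorch releases, so only the generic methods are used:

```python
import torch

t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
s = t.storage()        # flat, one-dimensional view of the underlying data
n = s.size()           # number of elements: 4, regardless of t's shape
vals = s.tolist()      # [1.0, 2.0, 3.0, 4.0]
d = s.double()         # casting returns a new storage of the target type
```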
torch.sub(input, other, *, alpha=1, out=None) β Tensor
Subtracts other, scaled by alpha, from input. out_i = input_i − alpha × other_i
Supports broadcasting to a common shape, type promotion, and integer, float, and complex inputs. Parameters
input (Tensor) β the input tensor.
other (Tensor or Scalar) β the tensor or scalar to subtract from input
Keyword Arguments
alpha (Scalar) β the scalar multiplier for other
out (Tensor, optional) β the output tensor. Example: >>> a = torch.tensor((1, 2))
>>> b = torch.tensor((0, 1))
>>> torch.sub(a, b, alpha=2)
tensor([1, 0]) | torch.generated.torch.sub#torch.sub |
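The broadcasting support mentioned above can be sketched as follows (values are arbitrary illustrations):

```python
import torch

m = torch.tensor([[10.0, 20.0], [30.0, 40.0]])
row = torch.tensor([1.0, 2.0])

out = torch.sub(m, row, alpha=10)   # row broadcasts across m: m - 10 * row
# result: [[0., 0.], [20., 20.]]
```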
torch.subtract(input, other, *, alpha=1, out=None) β Tensor
Alias for torch.sub(). | torch.generated.torch.subtract#torch.subtract |
torch.sum(input, *, dtype=None) β Tensor
Returns the sum of all elements in the input tensor. Parameters
input (Tensor) β the input tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.1133, -0.9567, 0.2958]])
>>> torch.sum(a)
tensor(-0.5475)
torch.sum(input, dim, keepdim=False, *, dtype=None) β Tensor
Returns the sum of each row of the input tensor in the given dimension dim. If dim is a list of dimensions, reduce over all of them. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). Parameters
input (Tensor) β the input tensor.
dim (int or tuple of python:ints) β the dimension or dimensions to reduce.
keepdim (bool) β whether the output tensor has dim retained or not. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0569, -0.2475, 0.0737, -0.3429],
[-0.2993, 0.9138, 0.9337, -1.6864],
[ 0.1132, 0.7892, -0.1003, 0.5688],
[ 0.3637, -0.9906, -0.4752, -1.5197]])
>>> torch.sum(a, 1)
tensor([-0.4598, -0.1381, 1.3708, -2.6217])
>>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
>>> torch.sum(b, (2, 1))
tensor([ 435., 1335., 2235., 3135.]) | torch.generated.torch.sum#torch.sum |
torch.svd(input, some=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U, S, V), such that input = U diag(S) Vᴴ, where Vᴴ is the transpose of V for real-valued inputs, or the conjugate transpose of V for complex-valued inputs. If input is a batch of tensors, then U, S, and V are also batched with the same batch dimensions as input. If some is True (default), the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only min(n, m) orthonormal columns. If compute_uv is False, the returned U and V will be zero-filled matrices of shape (m × m) and (n × n) respectively, on the same device as input. The some argument has no effect when compute_uv is False. Supports input of float, double, cfloat and cdouble data types. The dtypes of U and V are the same as input's. S will always be real-valued, even if input is complex. Warning torch.svd() is deprecated. Please use torch.linalg.svd() instead, which is similar to NumPy's numpy.linalg.svd. Note Differences with torch.linalg.svd():
some is the opposite of torch.linalg.svd()'s full_matrices. Note that the default value for both is True, so the default behavior is effectively the opposite.
torch.svd() returns V, whereas torch.linalg.svd() returns Vᴴ. If compute_uv=False, torch.svd() returns zero-filled tensors for U and Vh, whereas torch.linalg.svd() returns empty tensors. Note The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. Note The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the cuSOLVER routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later, and uses the MAGMA routine gesdd on earlier versions of CUDA. Note The returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride(). Note Gradients computed using U and V may be unstable if input is not full rank or has non-unique singular values. Note When some = False, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces. Note The S tensor can only be used to compute gradients if compute_uv is True. Note With complex-valued input the backward operation works correctly only for gauge invariant loss functions. Please look at Gauge problem in AD for more details. Note Since the U and V of an SVD are not unique, each vector can be multiplied by an arbitrary phase factor e^{iφ} while the SVD result is still correct. Different platforms, like NumPy, or inputs on different device types, may produce different U and V tensors. Parameters
input (Tensor) β the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of (m Γ n) matrices.
some (bool, optional) β controls whether to compute the reduced or full decomposition, and consequently the shape of returned U and V. Defaults to True.
compute_uv (bool, optional) β option whether to compute U and V or not. Defaults to True. Keyword Arguments
out (tuple, optional) β the output tuple of tensors Example: >>> a = torch.randn(5, 3)
>>> a
tensor([[ 0.2364, -0.7752, 0.6372],
[ 1.7201, 0.7394, -0.0504],
[-0.3371, -1.0584, 0.5296],
[ 0.3550, -0.4022, 1.5569],
[ 0.2445, -0.0158, 1.1414]])
>>> u, s, v = torch.svd(a)
>>> u
tensor([[ 0.4027, 0.0287, 0.5434],
[-0.1946, 0.8833, 0.3679],
[ 0.4296, -0.2890, 0.5261],
[ 0.6604, 0.2717, -0.2618],
[ 0.4234, 0.2481, -0.4733]])
>>> s
tensor([2.3289, 2.0315, 0.7806])
>>> v
tensor([[-0.0199, 0.8766, 0.4809],
[-0.5080, 0.4054, -0.7600],
[ 0.8611, 0.2594, -0.4373]])
>>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t()))
tensor(8.6531e-07)
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, v = torch.svd(a_big)
>>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.transpose(-2, -1)))
tensor(2.6503e-06) | torch.generated.torch.svd#torch.svd |
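Since this API is deprecated, a short migration sketch to torch.linalg.svd; because U and V columns are only unique up to sign, the factorizations are compared through their reconstructions rather than element-wise:

```python
import torch

torch.manual_seed(0)
a = torch.randn(5, 3)

u, s, v = torch.svd(a)                                  # deprecated; returns V
u2, s2, vh = torch.linalg.svd(a, full_matrices=False)   # returns Vh; reduced, like some=True

ok_vals = torch.allclose(s, s2, atol=1e-6)              # singular values agree
recon_old = u @ torch.diag(s) @ v.t()
recon_new = u2 @ torch.diag(s2) @ vh                    # note: Vh needs no transpose
```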
torch.svd_lowrank(A, q=6, niter=2, M=None) [source]
Return the singular value decomposition (U, S, V) of a matrix, batches of matrices, or a sparse matrix A such that A ≈ U diag(S) Vᵀ. If M is given, then the SVD is computed for the matrix A − M. Note The implementation is based on Algorithm 5.1 from Halko et al., 2009. Note To obtain repeatable results, reset the seed for the pseudorandom number generator. Note The input is assumed to be a low-rank matrix. Note In general, use the full-rank SVD implementation torch.svd() for dense matrices due to its 10-fold higher performance characteristics. The low-rank SVD will be useful for huge sparse matrices that torch.svd() cannot handle. Parameters
A (Tensor) – the input tensor of size (*, m, n)
q (int, optional) – a slightly overestimated rank of A
niter (int, optional) – the number of subspace iterations to conduct; niter must be a nonnegative integer, and defaults to 2
M (Tensor, optional) – the input tensor's mean of size (*, 1, n) References:
Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, arXiv:0909.4061 [math.NA; math.PR], 2009 (available at arXiv). | torch.generated.torch.svd_lowrank#torch.svd_lowrank |
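This entry has no example; a minimal sketch on a synthetic low-rank matrix, seeding the generator as the repeatability note suggests (shapes and the rank are illustrative):

```python
import torch

torch.manual_seed(0)                           # repeatable, per the note above
A = torch.randn(100, 3) @ torch.randn(3, 40)   # (approximately) rank-3 matrix

U, S, V = torch.svd_lowrank(A, q=6)            # q slightly overestimates the rank
approx = U @ torch.diag(S) @ V.t()
rel_err = (torch.dist(approx, A) / A.norm()).item()
# rel_err is tiny because q >= rank(A)
```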
torch.swapaxes(input, axis0, axis1) β Tensor
Alias for torch.transpose(). This function is equivalent to NumPyβs swapaxes function. Examples: >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.swapaxes(x, 0, 1)
tensor([[[0, 1],
[4, 5]],
[[2, 3],
[6, 7]]])
>>> torch.swapaxes(x, 0, 2)
tensor([[[0, 4],
[2, 6]],
[[1, 5],
[3, 7]]]) | torch.generated.torch.swapaxes#torch.swapaxes |
torch.swapdims(input, dim0, dim1) β Tensor
Alias for torch.transpose(). This function is equivalent to NumPyβs swapaxes function. Examples: >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.swapdims(x, 0, 1)
tensor([[[0, 1],
[4, 5]],
[[2, 3],
[6, 7]]])
>>> torch.swapdims(x, 0, 2)
tensor([[[0, 4],
[2, 6]],
[[1, 5],
[3, 7]]]) | torch.generated.torch.swapdims#torch.swapdims |
torch.symeig(input, eigenvectors=False, upper=True, *, out=None) -> (Tensor, Tensor)
This function returns eigenvalues and eigenvectors of a real symmetric matrix input or a batch of real symmetric matrices, represented by a namedtuple (eigenvalues, eigenvectors). This function calculates all eigenvalues (and vectors) of input such that input = V diag(e) Vᵀ. The boolean argument eigenvectors defines computation of both eigenvectors and eigenvalues or eigenvalues only. If it is False, only eigenvalues are computed. If it is True, both eigenvalues and eigenvectors are computed. Since the input matrix input is supposed to be symmetric, only the upper triangular portion is used by default. If upper is False, then the lower triangular portion is used. Note The eigenvalues are returned in ascending order. If input is a batch of matrices, then the eigenvalues of each matrix in the batch are returned in ascending order. Note Irrespective of the original strides, the returned matrix V will be transposed, i.e. with strides V.contiguous().transpose(-1, -2).stride(). Warning Extra care needs to be taken when backward through outputs. Such an operation is only stable when all eigenvalues are distinct and becomes less stable the smaller min_{i ≠ j} |λ_i − λ_j| is. Parameters
input (Tensor) – the input tensor of size (*, n, n) where * is zero or more batch dimensions consisting of symmetric matrices.
eigenvectors (bool, optional) β controls whether eigenvectors have to be computed
upper (boolean, optional) β controls whether to consider upper-triangular or lower-triangular region Keyword Arguments
out (tuple, optional) β the output tuple of (Tensor, Tensor) Returns
A namedtuple (eigenvalues, eigenvectors) containing
eigenvalues (Tensor): Shape (*, m). The eigenvalues in ascending order.
eigenvectors (Tensor): Shape (*, m, m). If eigenvectors=False, it's an empty tensor. Otherwise, this tensor contains the orthonormal eigenvectors of the input. Return type
(Tensor, Tensor) Examples: >>> a = torch.randn(5, 5)
>>> a = a + a.t() # To make a symmetric
>>> a
tensor([[-5.7827, 4.4559, -0.2344, -1.7123, -1.8330],
[ 4.4559, 1.4250, -2.8636, -3.2100, -0.1798],
[-0.2344, -2.8636, 1.7112, -5.5785, 7.1988],
[-1.7123, -3.2100, -5.5785, -2.6227, 3.1036],
[-1.8330, -0.1798, 7.1988, 3.1036, -5.1453]])
>>> e, v = torch.symeig(a, eigenvectors=True)
>>> e
tensor([-13.7012, -7.7497, -2.3163, 5.2477, 8.1050])
>>> v
tensor([[ 0.1643, 0.9034, -0.0291, 0.3508, 0.1817],
[-0.2417, -0.3071, -0.5081, 0.6534, 0.4026],
[-0.5176, 0.1223, -0.0220, 0.3295, -0.7798],
[-0.4850, 0.2695, -0.5773, -0.5840, 0.1337],
[ 0.6415, -0.0447, -0.6381, -0.0193, -0.4230]])
>>> a_big = torch.randn(5, 2, 2)
>>> a_big = a_big + a_big.transpose(-2, -1) # To make a_big symmetric
>>> e, v = a_big.symeig(eigenvectors=True)
>>> torch.allclose(torch.matmul(v, torch.matmul(e.diag_embed(), v.transpose(-2, -1))), a_big)
True | torch.generated.torch.symeig#torch.symeig |
torch.t(input) β Tensor
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1. 0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to transpose(input, 0, 1). Parameters
input (Tensor) β the input tensor. Example: >>> x = torch.randn(())
>>> x
tensor(0.1995)
>>> torch.t(x)
tensor(0.1995)
>>> x = torch.randn(3)
>>> x
tensor([ 2.4320, -0.4608, 0.7702])
>>> torch.t(x)
tensor([ 2.4320, -0.4608, 0.7702])
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.4875, 0.9158, -0.5872],
[ 0.3938, -0.6929, 0.6932]])
>>> torch.t(x)
tensor([[ 0.4875, 0.3938],
[ 0.9158, -0.6929],
[-0.5872, 0.6932]]) | torch.generated.torch.t#torch.t |
torch.take(input, index) β Tensor
Returns a new tensor with the elements of input at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices. Parameters
input (Tensor) β the input tensor.
index (LongTensor) – the indices into tensor Example: >>> src = torch.tensor([[4, 3, 5],
... [6, 7, 8]])
>>> torch.take(src, torch.tensor([0, 2, 5]))
tensor([ 4, 5, 8]) | torch.generated.torch.take#torch.take |
torch.tan(input, *, out=None) β Tensor
Returns a new tensor with the tangent of the elements of input. out_i = tan(input_i)
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-1.2027, -1.7687, 0.4412, -1.3856])
>>> torch.tan(a)
tensor([-2.5930, 4.9859, 0.4722, -5.3366]) | torch.generated.torch.tan#torch.tan |
torch.tanh(input, *, out=None) β Tensor
Returns a new tensor with the hyperbolic tangent of the elements of input. out_i = tanh(input_i)
Parameters
input (Tensor) β the input tensor. Keyword Arguments
out (Tensor, optional) β the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.8986, -0.7279, 1.1745, 0.2611])
>>> torch.tanh(a)
tensor([ 0.7156, -0.6218, 0.8257, 0.2553]) | torch.generated.torch.tanh#torch.tanh |