| doc_content | doc_id |
|---|---|
torch.isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor
Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. Closeness is defined as: \lvert \text{input} - \text{other} \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert \text{other} \rvert
where input and other are finite. Where input and/or other are nonfinite they are close if and only if they are equal, with NaNs being considered equal to each other when equal_nan is True. Parameters
input (Tensor) – first tensor to compare
other (Tensor) – second tensor to compare
atol (float, optional) – absolute tolerance. Default: 1e-08
rtol (float, optional) – relative tolerance. Default: 1e-05
equal_nan (bool, optional) – if True, then two NaN s will be considered equal. Default: False
Examples: >>> torch.isclose(torch.tensor((1., 2, 3)), torch.tensor((1 + 1e-10, 3, 4)))
tensor([ True, False, False])
>>> torch.isclose(torch.tensor((float('inf'), 4)), torch.tensor((float('inf'), 6)), rtol=.5)
tensor([True, True]) | torch.generated.torch.isclose#torch.isclose |
torch.isfinite(input) → Tensor
Returns a new tensor with boolean elements representing if each element is finite or not. Real values are finite when they are not NaN, negative infinity, or infinity. Complex values are finite when both their real and imaginary parts are finite. Args:
input (Tensor): the input tensor. Returns:
A boolean tensor that is True where input is finite and False elsewhere Example: >>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([True, False, True, False, False]) | torch.generated.torch.isfinite#torch.isfinite |
torch.isinf(input) → Tensor
Tests if each element of input is infinite (positive or negative infinity) or not. Note Complex values are infinite when their real or imaginary part is infinite. Args:
{input} Returns:
A boolean tensor that is True where input is infinite and False elsewhere Example: >>> torch.isinf(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([False, True, False, True, False]) | torch.generated.torch.isinf#torch.isinf |
torch.isnan(input) → Tensor
Returns a new tensor with boolean elements representing if each element of input is NaN or not. Complex values are considered NaN when their real and/or imaginary part is NaN. Parameters
input (Tensor) – the input tensor. Returns
A boolean tensor that is True where input is NaN and False elsewhere Example: >>> torch.isnan(torch.tensor([1, float('nan'), 2]))
tensor([False, True, False]) | torch.generated.torch.isnan#torch.isnan |
torch.isneginf(input, *, out=None) → Tensor
Tests if each element of input is negative infinity or not. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isneginf(a)
tensor([ True, False, False]) | torch.generated.torch.isneginf#torch.isneginf |
torch.isposinf(input, *, out=None) → Tensor
Tests if each element of input is positive infinity or not. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isposinf(a)
tensor([False, True, False]) | torch.generated.torch.isposinf#torch.isposinf |
torch.isreal(input) → Tensor
Returns a new tensor with boolean elements representing if each element of input is real-valued or not. All real-valued types are considered real. Complex values are considered real when their imaginary part is 0. Parameters
input (Tensor) – the input tensor. Returns
A boolean tensor that is True where input is real and False elsewhere Example: >>> torch.isreal(torch.tensor([1, 1+1j, 2+0j]))
tensor([True, False, True]) | torch.generated.torch.isreal#torch.isreal |
torch.istft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) [source]
Inverse short time Fourier Transform. This is expected to be the inverse of stft(). It has the same parameters (plus an additional optional parameter, length) and it should return the least squares estimation of the original signal. The algorithm will check using the NOLA condition (nonzero overlap). It is important to choose window and center such that the envelope created by the summation of all the windows is never zero at any point in time. Specifically, \sum_{t=-\infty}^{\infty} |w|^2[n - t \times \text{hop\_length}] \neq 0 . Since stft() discards elements at the end of the signal if they do not fit in a frame, istft may return a shorter signal than the original signal (this can occur if center is False, since the signal isn’t padded). If center is True, then there will be padding, e.g. 'constant', 'reflect', etc. Left padding can be trimmed off exactly because it can be calculated, but right padding cannot be calculated without additional information. Example: suppose the last window is [17, 18, 0, 0, 0] vs. [18, 0, 0, 0, 0]; with n_fft, hop_length, and win_length all the same, the right padding cannot be recovered. These additional values could be zeros or a reflection of the signal, so providing length can be useful. If length is None then padding will be aggressively removed (with some loss of signal). [1] D. W. Griffin and J. S. Lim, “Signal estimation from modified short-time Fourier transform,” IEEE Trans. ASSP, vol. 32, no. 2, pp. 236-243, Apr. 1984. Parameters
input (Tensor) –
The input tensor. Expected to be output of stft(), can either be complex (channel, fft_size, n_frame), or real (channel, fft_size, n_frame, 2) where the channel dimension is optional. Deprecated since version 1.8.0: Real input is deprecated, use complex inputs as returned by stft(..., return_complex=True) instead.
n_fft (int) – Size of Fourier transform
hop_length (Optional[int]) – The distance between neighboring sliding window frames. (Default: n_fft // 4)
win_length (Optional[int]) – The size of window frame and STFT filter. (Default: n_fft)
window (Optional[torch.Tensor]) – The optional window function. (Default: torch.ones(win_length))
center (bool) – Whether input was padded on both sides so that the tt -th frame is centered at time t×hop_lengtht \times \text{hop\_length} . (Default: True)
normalized (bool) – Whether the STFT was normalized. (Default: False)
onesided (Optional[bool]) – Whether the STFT was onesided. (Default: True if n_fft != fft_size in the input size)
length (Optional[int]) – The amount to trim the signal by (i.e. the original signal length). (Default: whole signal)
return_complex (Optional[bool]) – Whether the output should be complex, or if the input should be assumed to derive from a real signal and window. Note that this is incompatible with onesided=True. (Default: False) Returns
Least squares estimation of the original signal of size (…, signal_length) Return type
Tensor | torch.generated.torch.istft#torch.istft |
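As a sketch of the intended use, the following round trip reconstructs a signal from its STFT; the signal length, n_fft, and Hann window below are illustrative choices, not requirements (the Hann window satisfies the NOLA condition):

```python
import torch

# Build a test signal and take its STFT with an explicit window.
x = torch.randn(1024)
n_fft = 256
window = torch.hann_window(n_fft)  # Hann satisfies NOLA for the default hop

spec = torch.stft(x, n_fft=n_fft, window=window, return_complex=True)

# Passing length lets istft trim padding back to the original size.
y = torch.istft(spec, n_fft=n_fft, window=window, length=x.shape[0])

assert y.shape == x.shape
assert torch.allclose(x, y, atol=1e-4)
```

Without `length`, the reconstructed signal may differ in length from the input, as described above.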
torch.is_complex(input) -> (bool)
Returns True if the data type of input is a complex data type, i.e., one of torch.complex64 and torch.complex128. Parameters
input (Tensor) – the input tensor. | torch.generated.torch.is_complex#torch.is_complex |
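A short sketch: is_complex inspects only the tensor's dtype, not its values.

```python
import torch

z = torch.tensor([1 + 1j, 2 + 0j])  # complex64 by default
r = torch.tensor([1.0, 2.0])        # float32

assert torch.is_complex(z)
assert not torch.is_complex(r)
# A complex-dtyped tensor holding only real values is still complex:
assert torch.is_complex(torch.zeros(2, dtype=torch.complex128))
```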
torch.is_floating_point(input) -> (bool)
Returns True if the data type of input is a floating point data type, i.e., one of torch.float64, torch.float32, torch.float16, and torch.bfloat16. Parameters
input (Tensor) – the input tensor. | torch.generated.torch.is_floating_point#torch.is_floating_point |
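A short sketch of the dtype check; note that complex and integer dtypes both return False:

```python
import torch

assert torch.is_floating_point(torch.tensor([1.0]))              # float32
assert torch.is_floating_point(torch.zeros(2, dtype=torch.bfloat16))
assert not torch.is_floating_point(torch.tensor([1]))            # int64
assert not torch.is_floating_point(torch.tensor([1 + 1j]))       # complex64
```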
torch.is_nonzero(input) -> (bool)
Returns True if the input is a single element tensor which is not equal to zero after type conversions. i.e. not equal to torch.tensor([0.]) or torch.tensor([0]) or torch.tensor([False]). Throws a RuntimeError if torch.numel() != 1 (even in case of sparse tensors). Parameters
input (Tensor) – the input tensor. Examples: >>> torch.is_nonzero(torch.tensor([0.]))
False
>>> torch.is_nonzero(torch.tensor([1.5]))
True
>>> torch.is_nonzero(torch.tensor([False]))
False
>>> torch.is_nonzero(torch.tensor([3]))
True
>>> torch.is_nonzero(torch.tensor([1, 3, 5]))
Traceback (most recent call last):
...
RuntimeError: bool value of Tensor with more than one value is ambiguous
>>> torch.is_nonzero(torch.tensor([]))
Traceback (most recent call last):
...
RuntimeError: bool value of Tensor with no values is ambiguous | torch.generated.torch.is_nonzero#torch.is_nonzero |
torch.is_storage(obj) [source]
Returns True if obj is a PyTorch storage object. Parameters
obj (Object) – Object to test | torch.generated.torch.is_storage#torch.is_storage |
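A minimal sketch, assuming a PyTorch version that exposes Tensor.untyped_storage():

```python
import torch

t = torch.arange(4)
assert torch.is_storage(t.untyped_storage())  # a storage object
assert not torch.is_storage(t)                # a tensor is not a storage
```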
torch.is_tensor(obj) [source]
Returns True if obj is a PyTorch tensor. Note that this function is simply doing isinstance(obj, Tensor). Using that isinstance check is better for typechecking with mypy, and more explicit, so it is recommended to use that instead of is_tensor. Parameters
obj (Object) – Object to test | torch.generated.torch.is_tensor#torch.is_tensor |
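A short sketch, including the isinstance check the docstring recommends instead:

```python
import torch

t = torch.ones(2)
assert torch.is_tensor(t)
assert not torch.is_tensor([1.0, 2.0])  # a plain list is not a tensor

# The recommended, mypy-friendly equivalent:
assert isinstance(t, torch.Tensor)
```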
torch.jit.export(fn) [source]
This decorator indicates that a method on an nn.Module is used as an entry point into a ScriptModule and should be compiled. forward implicitly is assumed to be an entry point, so it does not need this decorator. Functions and methods called from forward are compiled as they are seen by the compiler, so they do not need this decorator either. Example (using @torch.jit.export on a method): import torch
import torch.nn as nn
class MyModule(nn.Module):
def implicitly_compiled_method(self, x):
return x + 99
# `forward` is implicitly decorated with `@torch.jit.export`,
# so adding it here would have no effect
def forward(self, x):
return x + 10
@torch.jit.export
def another_forward(self, x):
# When the compiler sees this call, it will compile
# `implicitly_compiled_method`
return self.implicitly_compiled_method(x)
def unused_method(self, x):
return x - 20
# `m` will contain compiled methods:
# `forward`
# `another_forward`
# `implicitly_compiled_method`
# `unused_method` will not be compiled since it was not called from
# any compiled methods and wasn't decorated with `@torch.jit.export`
m = torch.jit.script(MyModule()) | torch.jit#torch.jit.export |
torch.jit.fork(func, *args, **kwargs) [source]
Creates an asynchronous task executing func and a reference to the value of the result of this execution. fork will return immediately, so the return value of func may not have been computed yet. To force completion of the task and access the return value, invoke torch.jit.wait on the Future. fork invoked with a func which returns T is typed as torch.jit.Future[T]. fork calls can be arbitrarily nested, and may be invoked with positional and keyword arguments. Asynchronous execution will only occur when run in TorchScript. If run in pure Python, fork will not execute in parallel. fork will also not execute in parallel when invoked while tracing; however, the fork and wait calls will be captured in the exported IR Graph. Warning: fork tasks will execute non-deterministically. We recommend only spawning parallel fork tasks for pure functions that do not modify their inputs, module attributes, or global state.
Parameters
func (callable or torch.nn.Module) – A Python function or torch.nn.Module that will be invoked. If executed in TorchScript, it will execute asynchronously, otherwise it will not. Traced invocations of fork will be captured in the IR.
*args, **kwargs – arguments to invoke func with. Returns
a reference to the execution of func. The value T can only be accessed by forcing completion of func through torch.jit.wait. Return type
torch.jit.Future[T] Example (fork a free function): import torch
from torch import Tensor
def foo(a : Tensor, b : int) -> Tensor:
return a + b
def bar(a):
fut : torch.jit.Future[Tensor] = torch.jit.fork(foo, a, b=2)
return torch.jit.wait(fut)
script_bar = torch.jit.script(bar)
input = torch.tensor(2)
# only the scripted version executes asynchronously
assert script_bar(input) == bar(input)
# trace is not run asynchronously, but fork is captured in IR
graph = torch.jit.trace(bar, (input,)).graph
assert "fork" in str(graph)
Example (fork a module method): import torch
from torch import Tensor
class AddMod(torch.nn.Module):
def forward(self, a: Tensor, b : int):
return a + b
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
self.mod = AddMod()
def forward(self, input):
fut = torch.jit.fork(self.mod, input, b=2)
return torch.jit.wait(fut)
input = torch.tensor(2)
mod = Mod()
assert mod(input) == torch.jit.script(mod).forward(input) | torch.generated.torch.jit.fork#torch.jit.fork |
torch.jit.freeze(mod, preserved_attrs=None, optimize_numerics=True) [source]
Freezing a ScriptModule will clone it and attempt to inline the cloned module’s submodules, parameters, and attributes as constants in the TorchScript IR Graph. By default, forward will be preserved, as well as attributes & methods specified in preserved_attrs. Additionally, any attribute that is modified within a preserved method will be preserved. Freezing currently only accepts ScriptModules that are in eval mode. Parameters
mod (ScriptModule) – a module to be frozen
preserved_attrs (Optional[List[str]]) – a list of attributes to preserve in addition to the forward method. Attributes modified in preserved methods will also be preserved.
optimize_numerics (bool) – If True, a set of optimization passes will be run that does not strictly preserve numerics. Full details of the optimization can be found at torch.jit.optimize_frozen_module. Returns
Frozen ScriptModule. Example (Freezing a simple module with a Parameter): def forward(self, input):
output = self.weight.mm(input)
output = self.linear(output)
return output
scripted_module = torch.jit.script(MyModule(2, 3).eval())
frozen_module = torch.jit.freeze(scripted_module)
# parameters have been removed and inlined into the Graph as constants
assert len(list(frozen_module.named_parameters())) == 0
# See the compiled graph as Python code
print(frozen_module.code)
Example (Freezing a module with preserved attributes) def forward(self, input):
self.modified_tensor += 1
return input + self.modified_tensor
scripted_module = torch.jit.script(MyModule2().eval())
frozen_module = torch.jit.freeze(scripted_module, preserved_attrs=["version"])
# we've manually preserved `version`, so it still exists on the frozen module and can be modified
assert frozen_module.version == 1
frozen_module.version = 2
# `modified_tensor` is detected as being mutated in the forward, so freezing preserves
# it to retain model semantics
assert frozen_module(torch.tensor(1)) == torch.tensor(12)
# now that we've run it once, the next result will be incremented by one
assert frozen_module(torch.tensor(1)) == torch.tensor(13)
Note If you’re not sure why an attribute is not being inlined as a constant, you can run dump_alias_db on frozen_module.forward.graph to see if freezing has detected the attribute is being modified. | torch.generated.torch.jit.freeze#torch.jit.freeze |
torch.jit.ignore(drop=False, **kwargs) [source]
This decorator indicates to the compiler that a function or method should be ignored and left as a Python function. This allows you to leave code in your model that is not yet TorchScript compatible. If called from TorchScript, ignored functions will dispatch the call to the Python interpreter. Models with ignored functions cannot be exported; use @torch.jit.unused instead. Example (using @torch.jit.ignore on a method): import torch
import torch.nn as nn
class MyModule(nn.Module):
@torch.jit.ignore
def debugger(self, x):
import pdb
pdb.set_trace()
def forward(self, x):
x += 10
# The compiler would normally try to compile `debugger`,
# but since it is `@ignore`d, it will be left as a call
# to Python
self.debugger(x)
return x
m = torch.jit.script(MyModule())
# Error! The call `debugger` cannot be saved since it calls into Python
m.save("m.pt")
Example (using @torch.jit.ignore(drop=True) on a method): import torch
import torch.nn as nn
class MyModule(nn.Module):
@torch.jit.ignore(drop=True)
def training_method(self, x):
import pdb
pdb.set_trace()
def forward(self, x):
if self.training:
self.training_method(x)
return x
m = torch.jit.script(MyModule())
# This is OK since `training_method` is not saved, the call is replaced
# with a `raise`.
m.save("m.pt") | torch.generated.torch.jit.ignore#torch.jit.ignore |
torch.jit.isinstance(obj, target_type) [source]
This function provides container type refinement in TorchScript. It can refine parameterized containers of the List, Dict, Tuple, and Optional types, e.g. List[str], Dict[str, List[torch.Tensor]], Optional[Tuple[int,str,int]]. It can also refine basic types such as bools and ints that are available in TorchScript. Parameters
obj – object to refine the type of
target_type – type to try to refine obj to Returns
True if obj was successfully refined to the type of target_type,
False otherwise with no new type refinement Return type
bool Example (using torch.jit.isinstance for type refinement): import torch
from typing import Any, Dict, List
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
def forward(self, input: Any): # note the Any type
if torch.jit.isinstance(input, List[torch.Tensor]):
for t in input:
y = t.clamp(0, 0.5)
elif torch.jit.isinstance(input, Dict[str, str]):
for val in input.values():
print(val)
m = torch.jit.script(MyModule())
x = [torch.rand(3,3), torch.rand(4,3)]
m(x)
y = {"key1":"val1","key2":"val2"}
m(y) | torch.generated.torch.jit.isinstance#torch.jit.isinstance |
torch.jit.is_scripting() [source]
Function that returns True when in compilation and False otherwise. This is useful especially with the @unused decorator to leave code in your model that is not yet TorchScript compatible. Example: import torch
@torch.jit.unused
def unsupported_linear_op(x):
return x
def linear(x):
if not torch.jit.is_scripting():
return torch.linear(x)
else:
return unsupported_linear_op(x) | torch.jit_language_reference#torch.jit.is_scripting |
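A self-contained sketch of the branching behavior: the same function takes different paths in eager Python and under TorchScript (the function and constants here are illustrative, not from the docs above):

```python
import torch

def f(x):
    if torch.jit.is_scripting():
        return x + 1  # taken only when running as TorchScript
    return x - 1      # taken in eager Python

assert int(f(torch.tensor(0))) == -1                   # eager path
assert int(torch.jit.script(f)(torch.tensor(0))) == 1  # scripted path
```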
torch.jit.load(f, map_location=None, _extra_files=None) [source]
Load a ScriptModule or ScriptFunction previously saved with torch.jit.save. All previously saved modules, no matter their device, are first loaded onto CPU, and then are moved to the devices they were saved from. If this fails (e.g. because the runtime system doesn’t have certain devices), an exception is raised. Parameters
f – a file-like object (has to implement read, readline, tell, and seek), or a string containing a file name
map_location (string or torch.device) – A simplified version of map_location in torch.jit.save used to dynamically remap storages to an alternative set of devices.
_extra_files (dictionary of filename to content) – The extra filenames given in the map would be loaded and their content would be stored in the provided map. Returns
A ScriptModule object. Example: import torch
import io
torch.jit.load('scriptmodule.pt')
# Load ScriptModule from io.BytesIO object
with open('scriptmodule.pt', 'rb') as f:
buffer = io.BytesIO(f.read())
# Load all tensors to the original device
torch.jit.load(buffer)
# Load all tensors onto CPU, using a device
buffer.seek(0)
torch.jit.load(buffer, map_location=torch.device('cpu'))
# Load all tensors onto CPU, using a string
buffer.seek(0)
torch.jit.load(buffer, map_location='cpu')
# Load with extra files.
extra_files = {'foo.txt': ''} # values will be replaced with data
torch.jit.load('scriptmodule.pt', _extra_files=extra_files)
print(extra_files['foo.txt']) | torch.generated.torch.jit.load#torch.jit.load |
torch.jit.save(m, f, _extra_files=None) [source]
Save an offline version of this module for use in a separate process. The saved module serializes all of the methods, submodules, parameters, and attributes of this module. It can be loaded into the C++ API using torch::jit::load(filename) or into the Python API with torch.jit.load. To be able to save a module, it must not make any calls to native Python functions. This means that all submodules must be subclasses of ScriptModule as well. Danger All modules, no matter their device, are always loaded onto the CPU during loading. This is different from torch.load()’s semantics and may change in the future. Parameters
m – A ScriptModule to save.
f – A file-like object (has to implement write and flush) or a string containing a file name.
_extra_files – Map from filename to contents which will be stored as part of f. Note torch.jit.save attempts to preserve the behavior of some operators across versions. For example, dividing two integer tensors in PyTorch 1.5 performed floor division, and if the module containing that code is saved in PyTorch 1.5 and loaded in PyTorch 1.6 its division behavior will be preserved. The same module saved in PyTorch 1.6 will fail to load in PyTorch 1.5, however, since the behavior of division changed in 1.6, and 1.5 does not know how to replicate the 1.6 behavior. Example: import torch
import io
class MyModule(torch.nn.Module):
def forward(self, x):
return x + 10
m = torch.jit.script(MyModule())
# Save to file
torch.jit.save(m, 'scriptmodule.pt')
# This line is equivalent to the previous
m.save("scriptmodule.pt")
# Save to io.BytesIO buffer
buffer = io.BytesIO()
torch.jit.save(m, buffer)
# Save with extra files
extra_files = {'foo.txt': b'bar'}
torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files) | torch.generated.torch.jit.save#torch.jit.save |
torch.jit.script(obj, optimize=None, _frames_up=0, _rcb=None) [source]
Scripting a function or nn.Module will inspect the source code, compile it as TorchScript code using the TorchScript compiler, and return a ScriptModule or ScriptFunction. TorchScript itself is a subset of the Python language, so not all features in Python work, but we provide enough functionality to compute on tensors and do control-dependent operations. For a complete guide, see the TorchScript Language Reference. torch.jit.script can be used as a function for modules and functions, and as a decorator @torch.jit.script for TorchScript Classes and functions. Parameters
obj (callable, class, or nn.Module) – The nn.Module, function, or class type to compile. Returns
If obj is nn.Module, script returns a ScriptModule object. The returned ScriptModule will have the same set of sub-modules and parameters as the original nn.Module. If obj is a standalone function, a ScriptFunction will be returned. Scripting a function
The @torch.jit.script decorator will construct a ScriptFunction by compiling the body of the function. Example (scripting a function): import torch
@torch.jit.script
def foo(x, y):
if x.max() > y.max():
r = x
else:
r = y
return r
print(type(foo))  # torch.jit.ScriptFunction
# See the compiled graph as Python code
print(foo.code)
# Call the function using the TorchScript interpreter
foo(torch.ones(2, 2), torch.ones(2, 2))
Scripting an nn.Module
Scripting an nn.Module by default will compile the forward method and recursively compile any methods, submodules, and functions called by forward. If a nn.Module only uses features supported in TorchScript, no changes to the original module code should be necessary. script will construct ScriptModule that has copies of the attributes, parameters, and methods of the original module. Example (scripting a simple module with a Parameter): import torch
class MyModule(torch.nn.Module):
def __init__(self, N, M):
super(MyModule, self).__init__()
# This parameter will be copied to the new ScriptModule
self.weight = torch.nn.Parameter(torch.rand(N, M))
# When this submodule is used, it will be compiled
self.linear = torch.nn.Linear(N, M)
def forward(self, input):
output = self.weight.mv(input)
# This calls the `forward` method of the `nn.Linear` module, which will
# cause the `self.linear` submodule to be compiled to a `ScriptModule` here
output = self.linear(output)
return output
scripted_module = torch.jit.script(MyModule(2, 3))
Example (scripting a module with traced submodules): import torch
import torch.nn as nn
import torch.nn.functional as F
class MyModule(nn.Module):
def __init__(self):
super(MyModule, self).__init__()
# torch.jit.trace produces a ScriptModule's conv1 and conv2
self.conv1 = torch.jit.trace(nn.Conv2d(1, 20, 5), torch.rand(1, 1, 16, 16))
self.conv2 = torch.jit.trace(nn.Conv2d(20, 20, 5), torch.rand(1, 20, 16, 16))
def forward(self, input):
input = F.relu(self.conv1(input))
input = F.relu(self.conv2(input))
return input
scripted_module = torch.jit.script(MyModule())
To compile a method other than forward (and recursively compile anything it calls), add the @torch.jit.export decorator to the method. To opt out of compilation use @torch.jit.ignore or @torch.jit.unused. Example (an exported and ignored method in a module): import torch
import torch.nn as nn
class MyModule(nn.Module):
def __init__(self):
super(MyModule, self).__init__()
@torch.jit.export
def some_entry_point(self, input):
return input + 10
@torch.jit.ignore
def python_only_fn(self, input):
# This function won't be compiled, so any
# Python APIs can be used
import pdb
pdb.set_trace()
def forward(self, input):
if self.training:
self.python_only_fn(input)
return input * 99
scripted_module = torch.jit.script(MyModule())
print(scripted_module.some_entry_point(torch.randn(2, 2)))
print(scripted_module(torch.randn(2, 2))) | torch.generated.torch.jit.script#torch.jit.script |
class torch.jit.ScriptFunction
Functionally equivalent to a ScriptModule, but represents a single function and does not have any attributes or Parameters.
get_debug_state(self: torch._C.ScriptFunction) → torch._C.GraphExecutorState
save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) → None
save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) → bytes | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction |
get_debug_state(self: torch._C.ScriptFunction) → torch._C.GraphExecutorState | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction.get_debug_state |
save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) → None | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction.save |
save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) → bytes | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction.save_to_buffer |
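A minimal sketch of save_to_buffer on a scripted free function, round-tripped through torch.jit.load (the function here is an illustrative example, not from the docs above):

```python
import io
import torch

@torch.jit.script
def add_one(x: torch.Tensor) -> torch.Tensor:
    return x + 1

# save_to_buffer serializes the ScriptFunction without touching disk.
blob = add_one.save_to_buffer()
assert isinstance(blob, bytes) and len(blob) > 0

# It can be reloaded from an in-memory buffer with torch.jit.load.
restored = torch.jit.load(io.BytesIO(blob))
assert int(restored(torch.tensor(1))) == 2
```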
class torch.jit.ScriptModule [source]
A wrapper around C++ torch::jit::Module. ScriptModules contain methods, attributes, parameters, and constants. These can be accessed the same as on a normal nn.Module.
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters
name (string) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module.
apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) – function to be applied to each submodule Returns
self Return type
Module Example: >>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16()
Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module
buffers(recurse=True)
Returns an iterator over module buffers. Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor – module buffer Example: >>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children()
Returns an iterator over immediate children modules. Yields
Module – a child module
property code
Returns a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See Inspecting Code for details.
property code_with_constants
Returns a tuple of: [0] a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See code. [1] a ConstMap following the CONSTANT.cN format of the output in [0]. The indices in the [0] output are keys to the underlying constant’s values. See Inspecting Code for details.
cpu()
Moves all model parameters and buffers to the CPU. Returns
self Return type
Module
cuda(device=None)
Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be copied to that device Returns
self Return type
Module
double()
Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module
eval()
Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns
self Return type
Module
extra_repr()
Set the extra representation of the module. To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
float()
Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module
property graph
Returns a string representation of the internal graph for the forward method. See Interpreting Graphs for details.
half()
Casts all floating point parameters and buffers to half datatype. Returns
self Return type
Module
property inlined_graph
Returns a string representation of the internal graph for the forward method. This graph will be preprocessed to inline all function and method calls. See Interpreting Graphs for details.
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
Returns
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys Return type
NamedTuple with missing_keys and unexpected_keys fields
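A short sketch of the return value, shown on a plain nn.Module (which shares this signature): with strict=True and matching keys, both fields of the returned NamedTuple are empty.

```python
import torch

src = torch.nn.Linear(2, 2)
dst = torch.nn.Linear(2, 2)

result = dst.load_state_dict(src.state_dict(), strict=True)
assert result.missing_keys == [] and result.unexpected_keys == []
assert torch.equal(dst.weight, src.weight)  # parameters were copied
```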
modules()
Returns an iterator over all modules in the network. Yields
Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
print(idx, '->', m)
0 -> Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
(string, torch.Tensor) – Tuple containing the name and buffer Example: >>> for name, buf in self.named_buffers():
>>> if name in ['running_var']:
>>> print(buf.size())
named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> print(module)
named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
print(idx, '->', m)
0 -> ('', Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
(string, Parameter) – Tuple containing the name and parameter Example: >>> for name, param in self.named_parameters():
>>> if name in ['bias']:
>>> print(param.size())
parameters(recurse=True)
Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
Parameter – module parameter Example: >>> for param in model.parameters():
>>> print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook)
Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict. Buffers can be accessed as attributes using given names. Parameters
name (string) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor) – buffer to be registered.
persistent (bool) – whether the buffer is part of this module’s state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook)
Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks, only to the forward. The hook can modify the output. It can also modify the input in-place, but that will have no effect on forward, since this hook is called after forward() has run. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
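A minimal sketch of a forward hook that records outputs, and of removing it via the returned handle:

```python
import torch
import torch.nn as nn

outputs = []

def save_output(module, input, output):
    # Runs after forward(); here it just records a detached copy of the output.
    outputs.append(output.detach())

layer = nn.Linear(3, 1)
handle = layer.register_forward_hook(save_output)

layer(torch.randn(2, 3))
assert len(outputs) == 1

handle.remove()                            # the hook no longer fires
layer(torch.randn(2, 3))
assert len(outputs) == 1
```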
register_forward_pre_hook(hook)
Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks, only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value from the hook. The value will be wrapped into a tuple if a single value is returned (unless that value is already a tuple). Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
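A minimal sketch of a pre-hook that rewrites the positional input before forward() runs:

```python
import torch
import torch.nn as nn

def double_input(module, inputs):
    # Return a modified tuple of positional arguments.
    return (inputs[0] * 2,)

layer = nn.Linear(2, 2, bias=False)
layer.register_forward_pre_hook(double_input)

x = torch.ones(1, 2)
out = layer(x)
expected = (x * 2) @ layer.weight.t()      # forward() saw the doubled input
assert torch.allclose(out, expected)
```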
register_full_backward_hook(hook)
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
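A minimal sketch capturing grad_input and grad_output for a linear layer:

```python
import torch
import torch.nn as nn

grads = {}

def save_grads(module, grad_input, grad_output):
    # grad_input/grad_output are tuples of gradients w.r.t. inputs/outputs.
    grads['in'] = grad_input
    grads['out'] = grad_output

layer = nn.Linear(3, 1)
layer.register_full_backward_hook(save_grads)

x = torch.randn(2, 3, requires_grad=True)
layer(x).sum().backward()

assert grads['in'][0].shape == x.shape     # gradient w.r.t. the input
assert grads['out'][0].shape == (2, 1)     # gradient w.r.t. the output
```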
register_parameter(name, param)
Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) – parameter to be added to the module.
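For example, a minimal module that registers a parameter explicitly instead of via attribute assignment (the Scale module is illustrative):

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super().__init__()
        # Equivalent to `self.gain = nn.Parameter(...)`, but with an explicit name.
        self.register_parameter('gain', nn.Parameter(torch.ones(1)))

    def forward(self, x):
        return x * self.gain

m = Scale()
assert 'gain' in dict(m.named_parameters())
assert torch.equal(m(torch.tensor([2.0])), torch.tensor([2.0]))
```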
requires_grad_(requires_grad=True)
Changes whether autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parameters
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True. Returns
self Return type
Module
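A common use is freezing a submodule for fine-tuning; a minimal sketch:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Freeze the first layer; only the second layer's parameters remain trainable.
model[0].requires_grad_(False)

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
assert trainable == ['1.weight', '1.bias']
```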
save(f, _extra_files={})
See torch.jit.save for details.
state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return type
dict Example: >>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs)
Moves and/or casts the parameters and buffers. This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples. Note This method modifies the module in-place. Parameters
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument) Returns
self Return type
Module Examples: >>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j, 0.2382+0.j],
[ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
train(mode=True)
Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True. Returns
self Return type
Module
type(dst_type)
Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) – the desired type Returns
self Return type
Module
xpu(device=None)
Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the XPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be copied to that device Returns
self Return type
Module
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule |
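A minimal sketch showing set_to_none=True clearing gradients to None rather than zero-filling them:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
model(torch.randn(3, 2)).sum().backward()
assert model.weight.grad is not None      # gradients populated by backward()

model.zero_grad(set_to_none=True)         # release grads instead of zero-filling
assert model.weight.grad is None
```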
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters
name (string) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.add_module |
apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) – function to be applied to each submodule Returns
self Return type
Module Example: >>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.apply |
bfloat16()
Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.bfloat16 |
buffers(recurse=True)
Returns an iterator over module buffers. Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor – module buffer Example: >>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.buffers |
children()
Returns an iterator over immediate children modules. Yields
Module – a child module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.children |
property code
Returns a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See Inspecting Code for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.code |
property code_with_constants
Returns a tuple of: [0] a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See code. [1] a ConstMap following the CONSTANT.cN format of the output in [0]. The indices in the [0] output are keys to the underlying constant’s values. See Inspecting Code for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.code_with_constants |
cpu()
Moves all model parameters and buffers to the CPU. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.cpu |
cuda(device=None)
Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be copied to that device Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.cuda |
double()
Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.double |
eval()
Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.eval |
extra_repr()
Set the extra representation of the module. To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
float()
Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.float |
property graph
Returns a string representation of the internal graph for the forward method. See Interpreting Graphs for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.graph |
half()
Casts all floating point parameters and buffers to half datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.half |
property inlined_graph
Returns a string representation of the internal graph for the forward method. This graph will be preprocessed to inline all function and method calls. See Interpreting Graphs for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.inlined_graph |
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
Returns
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys Return type
NamedTuple with missing_keys and unexpected_keys fields | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.load_state_dict |
modules()
Returns an iterator over all modules in the network. Yields
Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
print(idx, '->', m)
0 -> Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.modules |
named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
(string, torch.Tensor) – Tuple containing the name and buffer Example: >>> for name, buf in self.named_buffers():
>>> if name in ['running_var']:
>>> print(buf.size()) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_buffers |
named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> print(module) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_children |
named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
print(idx, '->', m)
0 -> ('', Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True)) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_modules |
named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
(string, Parameter) – Tuple containing the name and parameter Example: >>> for name, param in self.named_parameters():
>>> if name in ['bias']:
>>> print(param.size()) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_parameters |
parameters(recurse=True)
Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
Parameter – module parameter Example: >>> for param in model.parameters():
>>> print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.parameters |
register_backward_hook(hook)
Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_backward_hook |
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict. Buffers can be accessed as attributes using given names. Parameters
name (string) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor) – buffer to be registered.
persistent (bool) – whether the buffer is part of this module’s state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features)) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_buffer |
register_forward_hook(hook)
Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks, only to the forward. The hook can modify the output. It can also modify the input in-place, but that will have no effect on forward, since this hook is called after forward() has run. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_forward_hook |
register_forward_pre_hook(hook)
Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks, only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value from the hook. The value will be wrapped into a tuple if a single value is returned (unless that value is already a tuple). Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_forward_pre_hook |
register_full_backward_hook(hook)
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_full_backward_hook |
register_parameter(name, param)
Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) – parameter to be added to the module. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_parameter |
requires_grad_(requires_grad=True)
Changes whether autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parameters
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.requires_grad_ |
save(f, _extra_files={})
See torch.jit.save for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.save |
state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return type
dict Example: >>> module.state_dict().keys()
['bias', 'weight'] | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.state_dict |
to(*args, **kwargs)
Moves and/or casts the parameters and buffers. This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples. Note This method modifies the module in-place. Parameters
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument) Returns
self Return type
Module Examples: >>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j, 0.2382+0.j],
[ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128) | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.to |
train(mode=True)
Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.train |
type(dst_type)
Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) – the desired type Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.type |
xpu(device=None)
Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the XPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be copied to that device Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.xpu |
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.zero_grad |
torch.jit.script_if_tracing(fn) [source]
Compiles fn when it is first called during tracing. torch.jit.script has a non-negligible start-up time when it is first called, due to lazy initialization of many compiler builtins, so you should not use it directly in library code. However, you may want parts of your library to work in tracing even if they use control flow. In these cases, use @torch.jit.script_if_tracing as a substitute for torch.jit.script. Parameters
fn – A function to compile. Returns
If called during tracing, a ScriptFunction created by torch.jit.script is returned. Otherwise, the original function fn is returned. | torch.generated.torch.jit.script_if_tracing#torch.jit.script_if_tracing |
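A minimal sketch (the pad_to_even helper is hypothetical) showing how the decorator preserves data-dependent control flow that plain tracing would freeze:

```python
import torch

@torch.jit.script_if_tracing
def pad_to_even(x):
    # Scripted when first called during tracing, so this data-dependent
    # branch survives in the traced graph instead of being baked in.
    if x.shape[0] % 2 == 1:
        x = torch.cat([x, x[:1] * 0])   # append one zero row
    return x

def forward(x):
    return pad_to_even(x)

traced = torch.jit.trace(forward, torch.ones(4, 3))
assert traced(torch.ones(4, 3)).shape == (4, 3)
assert traced(torch.ones(5, 3)).shape == (6, 3)   # branch still taken for odd sizes
```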
torch.jit.trace(func, example_inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>) [source]
Trace a function and return an executable or ScriptFunction that will be optimized using just-in-time compilation. Tracing is ideal for code that operates only on Tensors and on lists, dictionaries, and tuples of Tensors. Using torch.jit.trace and torch.jit.trace_module, you can turn an existing module or Python function into a TorchScript ScriptFunction or ScriptModule. You must provide example inputs, and we run the function, recording the operations performed on all the tensors. Tracing a standalone function produces a ScriptFunction; tracing nn.Module.forward or an nn.Module produces a ScriptModule, which also contains any parameters the original module had. Warning Tracing only correctly records functions and modules which are not data dependent (e.g., do not have conditionals on data in tensors) and do not have any untracked external dependencies (e.g., perform input/output or access global variables). Tracing only records operations done when the given function is run on the given tensors. Therefore, the returned ScriptModule will always run the same traced graph on any input. This has some important implications when your module is expected to run different sets of operations, depending on the input and/or the module state. For example:
- Tracing will not record any control flow like if-statements or loops. When this control flow is constant across your module, this is fine, and it often inlines the control-flow decisions. But sometimes the control flow is actually part of the model itself. For instance, a recurrent network is a loop over the (possibly dynamic) length of an input sequence.
- In the returned ScriptModule, operations that have different behaviors in training and eval modes will always behave as if they were in the mode they were in during tracing, no matter which mode the ScriptModule is in.
In cases like these, tracing would not be appropriate and scripting is a better choice.
If you trace such models, you may silently get incorrect results on subsequent invocations of the model. The tracer will try to emit warnings when doing something that may cause an incorrect trace to be produced. Parameters
func (callable or torch.nn.Module) – A Python function or torch.nn.Module that will be run with example_inputs. func arguments and return values must be tensors or (possibly nested) tuples that contain tensors. When a module is passed to torch.jit.trace, only the forward method is run and traced (see torch.jit.trace_module for details).
example_inputs (tuple or torch.Tensor) – A tuple of example inputs that will be passed to the function while tracing. The resulting trace can be run with inputs of different types and shapes assuming the traced operations support those types and shapes. example_inputs may also be a single Tensor in which case it is automatically wrapped in a tuple. Keyword Arguments
check_trace (bool, optional) – Check if the same inputs run through traced code produce the same outputs. Default: True. You might want to disable this if, for example, your network contains non-deterministic ops or if you are sure that the network is correct despite a checker failure.
check_inputs (list of tuples, optional) – A list of tuples of input arguments that should be used to check the trace against what is expected. Each tuple is equivalent to a set of input arguments that would be specified in example_inputs. For best results, pass in a set of checking inputs representative of the space of shapes and types of inputs you expect the network to see. If not specified, the original example_inputs are used for checking.
check_tolerance (float, optional) – Floating-point comparison tolerance to use in the checker procedure. This can be used to relax the checker strictness in the event that results diverge numerically for a known reason, such as operator fusion.
strict (bool, optional) – run the tracer in a strict mode or not (default: True). Only turn this off when you want the tracer to record your mutable container types (currently list/dict) and you are sure that the container you are using in your problem is a constant structure and does not get used as control flow (if, for) conditions. Returns
If func is nn.Module or forward of nn.Module, trace returns a ScriptModule object with a single forward method containing the traced code. The returned ScriptModule will have the same set of sub-modules and parameters as the original nn.Module. If func is a standalone function, trace returns ScriptFunction. Example (tracing a function): import torch
def foo(x, y):
return 2 * x + y
# Run `foo` with the provided inputs and record the tensor operations
traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))
# `traced_foo` can now be run with the TorchScript interpreter or saved
# and loaded in a Python-free environment
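Since tracing records only the operations actually executed, a data-dependent branch is frozen at trace time, as the warning above explains. A minimal illustrative sketch (not from the original docs) contrasting trace with script:

```python
import torch

def relu_or_neg(x):
    # Data-dependent branch: tracing records only the path taken
    # for the example input, so the trace is wrong for other inputs.
    if x.sum() > 0:
        return x
    else:
        return -x

traced = torch.jit.trace(relu_or_neg, (torch.ones(3),))  # records the `return x` path
scripted = torch.jit.script(relu_or_neg)                 # preserves the if-statement

neg = -torch.ones(3)
# The traced graph ignores the condition and returns `neg` unchanged,
# while the scripted function negates it.
print(traced(neg))
print(scripted(neg))
```

The tracer emits a TracerWarning for the tensor-to-bool conversion here, which is exactly the kind of hint it gives when a trace may be incorrect.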
Example (tracing an existing module): import torch
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv = nn.Conv2d(1, 1, 3)
def forward(self, x):
return self.conv(x)
n = Net()
example_weight = torch.rand(1, 1, 3, 3)
example_forward_input = torch.rand(1, 1, 3, 3)
# Trace a specific method and construct `ScriptModule` with
# a single `forward` method
module = torch.jit.trace(n.forward, example_forward_input)
# Trace a module (implicitly traces `forward`) and construct a
# `ScriptModule` with a single `forward` method
module = torch.jit.trace(n, example_forward_input) | torch.generated.torch.jit.trace#torch.jit.trace |
torch.jit.trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>) [source]
Trace a module and return an executable ScriptModule that will be optimized using just-in-time compilation. When a module is passed to torch.jit.trace, only the forward method is run and traced. With trace_module, you can specify a dictionary of method names to example inputs to trace (see the inputs argument below). See torch.jit.trace for more information on tracing. Parameters
mod (torch.nn.Module) – A torch.nn.Module containing methods whose names are specified in inputs. The given methods will be compiled as a part of a single ScriptModule.
inputs (dict) – A dict containing sample inputs indexed by method names in mod. The inputs will be passed to methods whose names correspond to inputs’ keys while tracing. { 'forward' : example_forward_input, 'method2': example_method2_input}
Keyword Arguments
check_trace (bool, optional) – Check if the same inputs run through traced code produce the same outputs. Default: True. You might want to disable this if, for example, your network contains non-deterministic ops or if you are sure that the network is correct despite a checker failure.
check_inputs (list of dicts, optional) – A list of dicts of input arguments that should be used to check the trace against what is expected. Each dict is equivalent to a set of input arguments that would be specified in inputs. For best results, pass in a set of checking inputs representative of the space of shapes and types of inputs you expect the network to see. If not specified, the original inputs are used for checking.
check_tolerance (float, optional) – Floating-point comparison tolerance to use in the checker procedure. This can be used to relax the checker strictness in the event that results diverge numerically for a known reason, such as operator fusion. Returns
A ScriptModule object with a single forward method containing the traced code. When mod is a torch.nn.Module, the returned ScriptModule will have the same set of sub-modules and parameters as mod. Example (tracing a module with multiple methods): import torch
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv = nn.Conv2d(1, 1, 3)
def forward(self, x):
return self.conv(x)
def weighted_kernel_sum(self, weight):
return weight * self.conv.weight
n = Net()
example_weight = torch.rand(1, 1, 3, 3)
example_forward_input = torch.rand(1, 1, 3, 3)
# Trace a specific method and construct `ScriptModule` with
# a single `forward` method
module = torch.jit.trace(n.forward, example_forward_input)
# Trace a module (implicitly traces `forward`) and construct a
# `ScriptModule` with a single `forward` method
module = torch.jit.trace(n, example_forward_input)
# Trace specific methods on a module (specified in `inputs`), constructs
# a `ScriptModule` with `forward` and `weighted_kernel_sum` methods
inputs = {'forward' : example_forward_input, 'weighted_kernel_sum' : example_weight}
module = torch.jit.trace_module(n, inputs) | torch.generated.torch.jit.trace_module#torch.jit.trace_module |
torch.jit.unused(fn) [source]
This decorator indicates to the compiler that a function or method should be ignored and replaced with the raising of an exception. This allows you to leave code in your model that is not yet TorchScript compatible and still export your model. Example (using @torch.jit.unused on a method): import torch
import torch.nn as nn
class MyModule(nn.Module):
def __init__(self, use_memory_efficient):
super(MyModule, self).__init__()
self.use_memory_efficient = use_memory_efficient
@torch.jit.unused
def memory_efficient(self, x):
import pdb
pdb.set_trace()
return x + 10
def forward(self, x):
# Use not-yet-scriptable memory efficient mode
if self.use_memory_efficient:
return self.memory_efficient(x)
else:
return x + 10
m = torch.jit.script(MyModule(use_memory_efficient=False))
m.save("m.pt")
m = torch.jit.script(MyModule(use_memory_efficient=True))
# exception raised
m(torch.rand(100)) | torch.generated.torch.jit.unused#torch.jit.unused |
torch.jit.wait(future) [source]
Forces completion of a torch.jit.Future[T] asynchronous task, returning the result of the task. See fork() for docs and examples. Parameters
future (torch.jit.Future[T]) – an asynchronous task reference, created through torch.jit.fork Returns
the return value of the completed task Return type
T | torch.generated.torch.jit.wait#torch.jit.wait |
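A minimal sketch of the fork/wait pairing referenced above (illustrative only; see fork() for the authoritative examples):

```python
import torch

@torch.jit.script
def add_one(x):
    return x + 1

x = torch.ones(2)
fut = torch.jit.fork(add_one, x)  # launch the task, returning a Future
y = torch.jit.wait(fut)           # block until the task completes
print(y)  # tensor([2., 2.])
```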
torch.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Computes the Kaiser window with window length window_length and shape parameter beta. Let I_0 be the zeroth order modified Bessel function of the first kind (see torch.i0()) and N = L - 1 if periodic is False and L if periodic is True, where L is the window_length. This function computes: \text{out}_i = I_0 \left( \beta \sqrt{1 - \left( \frac{i - N/2}{N/2} \right)^2} \right) / I_0(\beta)
Calling torch.kaiser_window(L, B, periodic=True) is equivalent to calling torch.kaiser_window(L + 1, B, periodic=False)[:-1]. The periodic argument is intended as a helpful shorthand to produce a periodic window as input to functions like torch.stft(). Note If window_length is one, then the returned window is a single element tensor containing a one. Parameters
window_length (int) – length of the window.
periodic (bool, optional) – If True, returns a periodic window suitable for use in spectral analysis. If False, returns a symmetric window suitable for use in filter design.
beta (float, optional) – shape parameter for the window. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. | torch.generated.torch.kaiser_window#torch.kaiser_window |
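The periodic/symmetric equivalence stated above can be checked directly; a short illustrative sketch (not from the original docs):

```python
import torch

L = 8
periodic = torch.kaiser_window(L, periodic=True)
symmetric = torch.kaiser_window(L + 1, periodic=False)

# A periodic window of length L equals the symmetric window of
# length L + 1 with its last sample dropped.
assert torch.allclose(periodic, symmetric[:-1])
```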
torch.kron(input, other, *, out=None) → Tensor
Computes the Kronecker product, denoted by \otimes, of input and other. If input is a (a_0 \times a_1 \times \dots \times a_n) tensor and other is a (b_0 \times b_1 \times \dots \times b_n) tensor, the result will be a (a_0*b_0 \times a_1*b_1 \times \dots \times a_n*b_n) tensor with the following entries: (\text{input} \otimes \text{other})_{k_0, k_1, \dots, k_n} = \text{input}_{i_0, i_1, \dots, i_n} * \text{other}_{j_0, j_1, \dots, j_n},
where k_t = i_t * b_t + j_t for 0 \leq t \leq n. If one tensor has fewer dimensions than the other it is unsqueezed until it has the same number of dimensions. Supports real-valued and complex-valued inputs. Note This function generalizes the typical definition of the Kronecker product for two matrices to two tensors, as described above. When input is a (m \times n) matrix and other is a (p \times q) matrix, the result will be a (p*m \times q*n) block matrix: \mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{11} \mathbf{B} & \cdots & a_{1n} \mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1} \mathbf{B} & \cdots & a_{mn} \mathbf{B} \end{bmatrix}
where input is \mathbf{A} and other is \mathbf{B}. Parameters
input (Tensor) –
other (Tensor) – Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> mat1 = torch.eye(2)
>>> mat2 = torch.ones(2, 2)
>>> torch.kron(mat1, mat2)
tensor([[1., 1., 0., 0.],
[1., 1., 0., 0.],
[0., 0., 1., 1.],
[0., 0., 1., 1.]])
>>> mat1 = torch.eye(2)
>>> mat2 = torch.arange(1, 5).reshape(2, 2)
>>> torch.kron(mat1, mat2)
tensor([[1., 2., 0., 0.],
[3., 4., 0., 0.],
[0., 0., 1., 2.],
[0., 0., 3., 4.]]) | torch.generated.torch.kron#torch.kron |
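The shape rule and block structure described above can be verified numerically; an illustrative sketch (not from the original docs):

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(4, 5)
k = torch.kron(a, b)

# The result shape multiplies dimensions pairwise: (2*4, 3*5)
assert k.shape == (8, 15)

# Each (4, 5) block of the result is a[i, j] * b, matching the
# block-matrix form above.
assert torch.allclose(k[0:4, 0:5], a[0, 0] * b)
assert torch.allclose(k[4:8, 5:10], a[1, 1] * b)
```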
torch.kthvalue(input, k, dim=None, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the k th smallest element of each row of the input tensor in the given dimension dim. And indices is the index location of each element found. If dim is not given, the last dimension of the input is chosen. If keepdim is True, both the values and indices tensors are the same size as input, except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in both the values and indices tensors having 1 fewer dimension than the input tensor. Note When input is a CUDA tensor and there are multiple valid k th values, this function may nondeterministically return indices for any of them. Parameters
input (Tensor) – the input tensor.
k (int) – k for the k-th smallest element
dim (int, optional) – the dimension to find the kth value along
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (tuple, optional) – the output tuple of (Tensor, LongTensor) can be optionally given to be used as output buffers Example: >>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.kthvalue(x, 4)
torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))
>>> x=torch.arange(1.,7.).resize_(2,3)
>>> x
tensor([[ 1., 2., 3.],
[ 4., 5., 6.]])
>>> torch.kthvalue(x, 2, 0, True)
torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]])) | torch.generated.torch.kthvalue#torch.kthvalue |
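kthvalue agrees with indexing into a sorted copy of the tensor; an illustrative check (not from the original docs):

```python
import torch

x = torch.tensor([3., 1., 4., 1., 5.])
k = 2
val, idx = torch.kthvalue(x, k)

# The k-th smallest value matches the sorted tensor at position k - 1
assert val == torch.sort(x).values[k - 1]
# The returned index points at an element equal to that value
assert x[idx] == val
```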
torch.lcm(input, other, *, out=None) → Tensor
Computes the element-wise least common multiple (LCM) of input and other. Both input and other must have integer types. Note This defines \text{lcm}(0, 0) = 0 and \text{lcm}(0, a) = 0. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor([5, 10, 15])
>>> b = torch.tensor([3, 4, 5])
>>> torch.lcm(a, b)
tensor([15, 20, 15])
>>> c = torch.tensor([3])
>>> torch.lcm(a, c)
tensor([15, 30, 15]) | torch.generated.torch.lcm#torch.lcm |
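For positive integers, lcm relates to gcd through lcm(a, b) * gcd(a, b) == a * b; a quick illustrative check (not from the original docs):

```python
import torch

a = torch.tensor([5, 10, 15])
b = torch.tensor([3, 4, 5])

# Element-wise identity for positive integers:
# lcm(a, b) * gcd(a, b) == a * b
assert torch.equal(torch.lcm(a, b) * torch.gcd(a, b), a * b)
```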
torch.ldexp(input, other, *, out=None) → Tensor
Multiplies input by two raised to the power of other, element-wise. \text{out}_i = \text{input}_i * 2^{\text{other}_i}
Typically this function is used to construct floating point numbers by multiplying mantissas in input with integral powers of two created from the exponents in other. Parameters
input (Tensor) – the input tensor.
other (Tensor) – a tensor of exponents, typically integers. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> torch.ldexp(torch.tensor([1.]), torch.tensor([1]))
tensor([2.])
>>> torch.ldexp(torch.tensor([1.0]), torch.tensor([1, 2, 3, 4]))
tensor([ 2., 4., 8., 16.]) | torch.generated.torch.ldexp#torch.ldexp |
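The definition above reduces to an element-wise multiply by a power of two; an illustrative check (not from the original docs):

```python
import torch

mantissa = torch.tensor([0.5, 0.75, 1.0])
exponent = torch.tensor([1, 2, 3])

# ldexp(m, e) is m * 2**e computed element-wise
expected = mantissa * torch.pow(2.0, exponent)
assert torch.allclose(torch.ldexp(mantissa, exponent), expected)
```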
torch.le(input, other, *, out=None) → Tensor
Computes \text{input} \leq \text{other} element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters
input (Tensor) – the tensor to compare
other (Tensor or Scalar) – the tensor or value to compare Keyword Arguments
out (Tensor, optional) – the output tensor. Returns
A boolean tensor that is True where input is less than or equal to other and False elsewhere Example: >>> torch.le(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[True, False], [True, True]]) | torch.generated.torch.le#torch.le |
torch.lerp(input, end, weight, *, out=None)
Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor. \text{out}_i = \text{start}_i + \text{weight}_i \times (\text{end}_i - \text{start}_i)
The shapes of start and end must be broadcastable. If weight is a tensor, then the shapes of weight, start, and end must be broadcastable. Parameters
input (Tensor) – the tensor with the starting points
end (Tensor) – the tensor with the ending points
weight (float or tensor) – the weight for the interpolation formula Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> start = torch.arange(1., 5.)
>>> end = torch.empty(4).fill_(10)
>>> start
tensor([ 1., 2., 3., 4.])
>>> end
tensor([ 10., 10., 10., 10.])
>>> torch.lerp(start, end, 0.5)
tensor([ 5.5000, 6.0000, 6.5000, 7.0000])
>>> torch.lerp(start, end, torch.full_like(start, 0.5))
tensor([ 5.5000, 6.0000, 6.5000, 7.0000]) | torch.generated.torch.lerp#torch.lerp |
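The interpolation formula above can be verified directly; an illustrative sketch (not from the original docs):

```python
import torch

start = torch.arange(1., 5.)
end = torch.full((4,), 10.)
w = 0.25

# lerp(start, end, w) is start + w * (end - start)
expected = start + w * (end - start)
assert torch.allclose(torch.lerp(start, end, w), expected)
```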
torch.less(input, other, *, out=None) → Tensor
Alias for torch.lt(). | torch.generated.torch.less#torch.less |
torch.less_equal(input, other, *, out=None) → Tensor
Alias for torch.le(). | torch.generated.torch.less_equal#torch.less_equal |
torch.lgamma(input, *, out=None) → Tensor
Computes the logarithm of the gamma function on input. \text{out}_i = \log \Gamma(\text{input}_i)
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.arange(0.5, 2, 0.5)
>>> torch.lgamma(a)
tensor([ 0.5724, 0.0000, -0.1208]) | torch.generated.torch.lgamma#torch.lgamma |
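Since lgamma computes log Γ, integer inputs recover log-factorials (Γ(5) = 4! = 24); an illustrative check (not from the original docs):

```python
import torch

x = torch.tensor([5.])
# lgamma(5) = log(gamma(5)) = log(4!) = log(24)
assert torch.allclose(torch.lgamma(x), torch.log(torch.tensor([24.])))
```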
torch.linalg Common linear algebra operations. This module is in BETA. New functions are still being added, and some functions may change in future PyTorch releases. See the documentation of each function for details. Functions
torch.linalg.cholesky(input, *, out=None) → Tensor
Computes the Cholesky decomposition of a Hermitian (or symmetric for real-valued matrices) positive-definite matrix or the Cholesky decompositions for a batch of such matrices. Each decomposition has the form: \text{input} = LL^H
where L is a lower-triangular matrix and L^H is the conjugate transpose of L, which is just a transpose for the case of real-valued input matrices. In code it translates to input = L @ L.t() if input is real-valued and input = L @ L.conj().t() if input is complex-valued. The batch of L matrices is returned. Supports real-valued and complex-valued inputs. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note LAPACK’s potrf is used for CPU inputs, and MAGMA’s potrf is used for CUDA inputs. Note If input is not a Hermitian positive-definite matrix, or if it’s a batch of matrices and one or more of them is not a Hermitian positive-definite matrix, then a RuntimeError will be thrown. If input is a batch of matrices, then the error message will include the batch index of the first matrix that is not Hermitian positive-definite. Parameters
input (Tensor) – the input tensor of size (*, n, n) consisting of Hermitian positive-definite n \times n matrices, where * is zero or more batch dimensions. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = torch.mm(a, a.t().conj()) # creates a Hermitian positive-definite matrix
>>> l = torch.linalg.cholesky(a)
>>> a
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> l
tensor([[1.5895+0.0000j, 0.0000+0.0000j],
[1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128)
>>> torch.mm(l, l.t().conj())
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = torch.matmul(a, a.transpose(-2, -1)) # creates a symmetric positive-definite matrix
>>> l = torch.linalg.cholesky(a)
>>> a
tensor([[[ 1.1629, 2.0237],
[ 2.0237, 6.6593]],
[[ 0.4187, 0.1830],
[ 0.1830, 0.1018]],
[[ 1.9348, -2.5744],
[-2.5744, 4.6386]]], dtype=torch.float64)
>>> l
tensor([[[ 1.0784, 0.0000],
[ 1.8766, 1.7713]],
[[ 0.6471, 0.0000],
[ 0.2829, 0.1477]],
[[ 1.3910, 0.0000],
[-1.8509, 1.1014]]], dtype=torch.float64)
>>> torch.allclose(torch.matmul(l, l.transpose(-2, -1)), a)
True
torch.linalg.cond(input, p=None, *, out=None) → Tensor
Computes the condition number of a matrix input, or of each matrix in a batched input, using the matrix norm defined by p. For norms {‘fro’, ‘nuc’, inf, -inf, 1, -1} this is defined as the matrix norm of input times the matrix norm of the inverse of input computed using torch.linalg.norm(). While for norms {None, 2, -2} this is defined as the ratio between the largest and smallest singular values computed using torch.linalg.svd(). This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function may synchronize that device with the CPU depending on which norm p is used. Note For norms {None, 2, -2}, input may be a non-square matrix or batch of non-square matrices. For other norms, however, input must be a square matrix or a batch of square matrices, and if this requirement is not satisfied a RuntimeError will be thrown. Note For norms {‘fro’, ‘nuc’, inf, -inf, 1, -1} if input is a non-invertible matrix then a tensor containing infinity will be returned. If input is a batch of matrices and one or more of them is not invertible then a RuntimeError will be thrown. Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
p (int, float, inf, -inf, 'fro', 'nuc', optional) –
the type of the matrix norm to use in the computations. inf refers to float('inf'), numpy’s inf object, or any equivalent object. The following norms can be used:
p norm for matrices
None ratio of the largest singular value to the smallest singular value
’fro’ Frobenius norm
’nuc’ nuclear norm
inf max(sum(abs(x), dim=1))
-inf min(sum(abs(x), dim=1))
1 max(sum(abs(x), dim=0))
-1 min(sum(abs(x), dim=0))
2 ratio of the largest singular value to the smallest singular value
-2 ratio of the smallest singular value to the largest singular value Default: None Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Returns
The condition number of input. The output dtype is always real valued even for complex inputs (e.g. float if input is cfloat). Examples: >>> a = torch.randn(3, 4, 4, dtype=torch.complex64)
>>> torch.linalg.cond(a)
>>> a = torch.tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]])
>>> torch.linalg.cond(a)
tensor([1.4142])
>>> torch.linalg.cond(a, 'fro')
tensor(3.1623)
>>> torch.linalg.cond(a, 'nuc')
tensor(9.2426)
>>> torch.linalg.cond(a, float('inf'))
tensor(2.)
>>> torch.linalg.cond(a, float('-inf'))
tensor(1.)
>>> torch.linalg.cond(a, 1)
tensor(2.)
>>> torch.linalg.cond(a, -1)
tensor(1.)
>>> torch.linalg.cond(a, 2)
tensor([1.4142])
>>> torch.linalg.cond(a, -2)
tensor([0.7071])
>>> a = torch.randn(2, 3, 3)
>>> a
tensor([[[-0.9204, 1.1140, 1.2055],
[ 0.3988, -0.2395, -0.7441],
[-0.5160, 0.3115, 0.2619]],
[[-2.2128, 0.9241, 2.1492],
[-1.1277, 2.7604, -0.8760],
[ 1.2159, 0.5960, 0.0498]]])
>>> torch.linalg.cond(a)
tensor([[9.5917],
[3.2538]])
>>> a = torch.randn(2, 3, 3, dtype=torch.complex64)
>>> a
tensor([[[-0.4671-0.2137j, -0.1334-0.9508j, 0.6252+0.1759j],
[-0.3486-0.2991j, -0.1317+0.1252j, 0.3025-0.1604j],
[-0.5634+0.8582j, 0.1118-0.4677j, -0.1121+0.7574j]],
[[ 0.3964+0.2533j, 0.9385-0.6417j, -0.0283-0.8673j],
[ 0.2635+0.2323j, -0.8929-1.1269j, 0.3332+0.0733j],
[ 0.1151+0.1644j, -1.1163+0.3471j, -0.5870+0.1629j]]])
>>> torch.linalg.cond(a)
tensor([[4.6245],
[4.5671]])
>>> torch.linalg.cond(a, 1)
tensor([9.2589, 9.3486])
torch.linalg.det(input) → Tensor
Computes the determinant of a square matrix input, or of each square matrix in a batched input. This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s getrf is used for CPU inputs, and MAGMA’s getrf is used for CUDA inputs. Note Backward through det internally uses torch.linalg.svd() when input is not invertible. In this case, double backward through det will be unstable when input doesn’t have distinct singular values. See torch.linalg.svd() for more details. Parameters
input (Tensor) – the input matrix of size (n, n) or the batch of matrices of size (*, n, n) where * is one or more batch dimensions. Example: >>> a = torch.randn(3, 3)
>>> a
tensor([[ 0.9478, 0.9158, -1.1295],
[ 0.9701, 0.7346, -1.8044],
[-0.2337, 0.0557, 0.6929]])
>>> torch.linalg.det(a)
tensor(0.0934)
>>> a = torch.randn(3, 2, 2)
>>> a
tensor([[[ 0.9254, -0.6213],
[-0.5787, 1.6843]],
[[ 0.3242, -0.9665],
[ 0.4539, -0.0887]],
[[ 1.1336, -0.4025],
[-0.7089, 0.9032]]])
>>> torch.linalg.det(a)
tensor([1.1990, 0.4099, 0.7386])
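For a 2 x 2 matrix the determinant reduces to ad - bc; an illustrative check (not from the original docs):

```python
import torch

m = torch.tensor([[1., 2.],
                  [3., 4.]])

# det([[a, b], [c, d]]) = a*d - b*c
assert torch.allclose(torch.linalg.det(m), torch.tensor(1. * 4. - 2. * 3.))
```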
torch.linalg.slogdet(input, *, out=None) -> (Tensor, Tensor)
Calculates the sign and natural logarithm of the absolute value of a square matrix’s determinant, or of the absolute values of the determinants of a batch of square matrices input. The determinant can be computed with sign * exp(logabsdet). Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s getrf is used for CPU inputs, and MAGMA’s getrf is used for CUDA inputs. Note For matrices that have zero determinant, this returns (0, -inf). If input is batched then the entries in the result tensors corresponding to matrices with the zero determinant have sign 0 and the natural logarithm of the absolute value of the determinant -inf. Parameters
input (Tensor) – the input matrix of size (n, n) or the batch of matrices of size (*, n, n) where * is one or more batch dimensions. Keyword Arguments
out (tuple, optional) – tuple of two tensors to write the output to. Returns
A namedtuple (sign, logabsdet) containing the sign of the determinant and the natural logarithm of the absolute value of determinant, respectively. Example: >>> A = torch.randn(3, 3)
>>> A
tensor([[ 0.0032, -0.2239, -1.1219],
[-0.6690, 0.1161, 0.4053],
[-1.6218, -0.9273, -0.0082]])
>>> torch.linalg.det(A)
tensor(-0.7576)
>>> torch.logdet(A)
tensor(nan)
>>> torch.linalg.slogdet(A)
torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776))
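The identity sign * exp(logabsdet) == det stated above can be checked numerically; an illustrative sketch (not from the original docs):

```python
import torch

m = torch.tensor([[2., 1.],
                  [1., 3.]])
sign, logabsdet = torch.linalg.slogdet(m)

# The determinant is recovered as sign * exp(logabsdet)
assert torch.allclose(sign * torch.exp(logabsdet), torch.linalg.det(m))
```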
torch.linalg.eigh(input, UPLO='L', *, out=None) -> (Tensor, Tensor)
Computes the eigenvalues and eigenvectors of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. For a single matrix input, the tensor of eigenvalues w and the tensor of eigenvectors V decompose the input such that input = V diag(w) Vᴴ, where Vᴴ is the transpose of V for real-valued input, or the conjugate transpose of V for complex-valued input. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When UPLO is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When UPLO is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues/eigenvectors are computed using LAPACK’s syevd and heevd routines for CPU inputs, and MAGMA’s syevd and heevd routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note The eigenvectors of matrices are not unique, so any eigenvector multiplied by a constant remains a valid eigenvector. This function may compute different eigenvector representations on different device types. Usually the difference is only in the sign of the eigenvector. Note See torch.linalg.eigvalsh() for a related function that computes only eigenvalues. However, that function is not differentiable. Parameters
input (Tensor) – the Hermitian n times n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
UPLO ('L', 'U', optional) – controls whether to use the upper-triangular or the lower-triangular part of input in the computations. Default is 'L'. Keyword Arguments
out (tuple, optional) – tuple of two tensors to write the output to. Default is None. Returns
A namedtuple (eigenvalues, eigenvectors) containing
eigenvalues (Tensor): Shape (*, m).
The eigenvalues in ascending order.
eigenvectors (Tensor): Shape (*, m, m).
The orthonormal eigenvectors of the input. Return type
(Tensor, Tensor) Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> a
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> w, v = torch.linalg.eigh(a)
>>> w
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> v
tensor([[-0.0846+-0.0000j, -0.9964+0.0000j],
[ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128)
>>> torch.allclose(torch.matmul(v, torch.matmul(w.to(v.dtype).diag_embed(), v.t().conj())), a)
True
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = a + a.transpose(-2, -1) # creates a symmetric matrix
>>> w, v = torch.linalg.eigh(a)
>>> torch.allclose(torch.matmul(v, torch.matmul(w.diag_embed(), v.transpose(-2, -1))), a)
True
torch.linalg.eigvalsh(input, UPLO='L', *, out=None) → Tensor
Computes the eigenvalues of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. The eigenvalues are returned in ascending order. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When UPLO is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When UPLO is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues are computed using LAPACK’s syevd and heevd routines for CPU inputs, and MAGMA’s syevd and heevd routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note This function doesn’t support backpropagation, please use torch.linalg.eigh() instead, which also computes the eigenvectors. Note See torch.linalg.eigh() for a related function that computes both eigenvalues and eigenvectors. Parameters
input (Tensor) – the Hermitian n times n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
UPLO ('L', 'U', optional) – controls whether to use the upper-triangular or the lower-triangular part of input in the computations. Default is 'L'. Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> a
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> w = torch.linalg.eigvalsh(a)
>>> w
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = a + a.transpose(-2, -1) # creates a symmetric matrix
>>> a
tensor([[[ 2.8050, -0.3850],
[-0.3850, 3.2376]],
[[-1.0307, -2.7457],
[-2.7457, -1.7517]],
[[ 1.7166, 2.2207],
[ 2.2207, -2.0898]]], dtype=torch.float64)
>>> w = torch.linalg.eigvalsh(a)
>>> w
tensor([[ 2.5797, 3.4629],
[-4.1605, 1.3780],
[-3.1113, 2.7381]], dtype=torch.float64)
torch.linalg.matrix_rank(input, tol=None, hermitian=False, *, out=None) → Tensor
Computes the numerical rank of a matrix input, or of each matrix in a batched input. The matrix rank is computed as the number of singular values (or absolute eigenvalues when hermitian is True) that are greater than the specified tol threshold. If tol is not specified, tol is set to S.max(dim=-1)*max(input.shape[-2:])*eps, where S is the singular values (or absolute eigenvalues when hermitian is True), and eps is the epsilon value for the datatype of input. The epsilon value can be obtained using the eps attribute of torch.finfo. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The matrix rank is computed using singular value decomposition (see torch.linalg.svd()) by default. If hermitian is True, then input is assumed to be Hermitian (symmetric if real-valued), and the computation is done by obtaining the eigenvalues (see torch.linalg.eigvalsh()). Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
tol (float, optional) – the tolerance value. Default is None
hermitian (bool, optional) – indicates whether input is Hermitian. Default is False. Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Examples: >>> a = torch.eye(10)
>>> torch.linalg.matrix_rank(a)
tensor(10)
>>> b = torch.eye(10)
>>> b[0, 0] = 0
>>> torch.linalg.matrix_rank(b)
tensor(9)
>>> a = torch.randn(4, 3, 2)
>>> torch.linalg.matrix_rank(a)
tensor([2, 2, 2, 2])
>>> a = torch.randn(2, 4, 2, 3)
>>> torch.linalg.matrix_rank(a)
tensor([[2, 2, 2, 2],
[2, 2, 2, 2]])
>>> a = torch.randn(2, 4, 3, 3, dtype=torch.complex64)
>>> torch.linalg.matrix_rank(a)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(a, hermitian=True)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(a, tol=1.0)
tensor([[3, 2, 2, 2],
[1, 2, 1, 2]])
>>> torch.linalg.matrix_rank(a, tol=1.0, hermitian=True)
tensor([[2, 2, 2, 1],
[1, 2, 2, 2]])
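The default tol described above can be reproduced by hand from the singular values; a minimal sketch of the thresholding rule (an illustration, not the library's internal code):

```python
import torch

a = torch.eye(10)
a[0, 0] = 0  # make the matrix rank-deficient

# reproduce the default threshold: S.max() * max(m, n) * eps
U, S, Vh = torch.linalg.svd(a)
tol = S.max() * max(a.shape[-2:]) * torch.finfo(a.dtype).eps
rank = int((S > tol).sum())

assert rank == int(torch.linalg.matrix_rank(a))  # both give 9
```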
torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → Tensor
Returns the matrix norm or vector norm of a given tensor. This function can calculate one of eight different types of matrix norms, or one of an infinite number of vector norms, depending on both the number of reduction dimensions and the value of the ord parameter. Parameters
input (Tensor) – The input tensor. If dim is None, x must be 1-D or 2-D, unless ord is None. If both dim and ord are None, the 2-norm of the input flattened to 1-D will be returned. Its data type must be either a floating point or complex type. For complex inputs, the norm is calculated using the absolute value of each element. If the input is complex and neither dtype nor out is specified, the result’s data type will be the corresponding floating point type (e.g. float if input is complexfloat).
ord (int, float, inf, -inf, 'fro', 'nuc', optional) –
The order of norm. inf refers to float('inf'), numpy’s inf object, or any equivalent object. The following norms can be calculated:
ord norm for matrices norm for vectors
None Frobenius norm 2-norm
'fro' Frobenius norm – not supported –
'nuc' nuclear norm – not supported –
inf max(sum(abs(x), dim=1)) max(abs(x))
-inf min(sum(abs(x), dim=1)) min(abs(x))
0 – not supported – sum(x != 0)
1 max(sum(abs(x), dim=0)) as below
-1 min(sum(abs(x), dim=0)) as below
2 2-norm (largest sing. value) as below
-2 smallest singular value as below
other – not supported – sum(abs(x)**ord)**(1./ord) Default: None
dim (int, 2-tuple of ints, 2-list of ints, optional) – If dim is an int, vector norm will be calculated over the specified dimension. If dim is a 2-tuple of ints, matrix norm will be calculated over the specified dimensions. If dim is None, matrix norm will be calculated when the input tensor has two dimensions, and vector norm will be calculated when the input tensor has one dimension. Default: None
keepdim (bool, optional) – If set to True, the reduced dimensions are retained in the result as dimensions with size one. Default: False
Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None
dtype (torch.dtype, optional) – If specified, the input tensor is cast to dtype before performing the operation, and the returned tensor’s type will be dtype. If this argument is used in conjunction with the out argument, the output tensor’s type must match this argument or a RuntimeError will be raised. Default: None
Examples: >>> import torch
>>> from torch import linalg as LA
>>> a = torch.arange(9, dtype=torch.float) - 4
>>> a
tensor([-4., -3., -2., -1., 0., 1., 2., 3., 4.])
>>> b = a.reshape((3, 3))
>>> b
tensor([[-4., -3., -2.],
[-1., 0., 1.],
[ 2., 3., 4.]])
>>> LA.norm(a)
tensor(7.7460)
>>> LA.norm(b)
tensor(7.7460)
>>> LA.norm(b, 'fro')
tensor(7.7460)
>>> LA.norm(a, float('inf'))
tensor(4.)
>>> LA.norm(b, float('inf'))
tensor(9.)
>>> LA.norm(a, -float('inf'))
tensor(0.)
>>> LA.norm(b, -float('inf'))
tensor(2.)
>>> LA.norm(a, 1)
tensor(20.)
>>> LA.norm(b, 1)
tensor(7.)
>>> LA.norm(a, -1)
tensor(0.)
>>> LA.norm(b, -1)
tensor(6.)
>>> LA.norm(a, 2)
tensor(7.7460)
>>> LA.norm(b, 2)
tensor(7.3485)
>>> LA.norm(a, -2)
tensor(0.)
>>> LA.norm(b.double(), -2)
tensor(1.8570e-16, dtype=torch.float64)
>>> LA.norm(a, 3)
tensor(5.8480)
>>> LA.norm(a, -3)
tensor(0.)
Using the dim argument to compute vector norms: >>> c = torch.tensor([[1., 2., 3.],
... [-1, 1, 4]])
>>> LA.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> LA.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> LA.norm(c, ord=1, dim=1)
tensor([6., 6.])
Using the dim argument to compute matrix norms: >>> m = torch.arange(8, dtype=torch.float).reshape(2, 2, 2)
>>> LA.norm(m, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(tensor(3.7417), tensor(11.2250))
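The matrix-norm rows of the ord table above can be checked directly against their element-wise definitions; a small sketch using the same b as in the examples:

```python
import torch

b = (torch.arange(9, dtype=torch.float) - 4).reshape(3, 3)

# ord=1 for matrices: maximum absolute column sum
assert torch.isclose(torch.linalg.norm(b, 1), b.abs().sum(dim=0).max())
# ord=inf for matrices: maximum absolute row sum
assert torch.isclose(torch.linalg.norm(b, float('inf')), b.abs().sum(dim=1).max())
# ord=2 for matrices: largest singular value
assert torch.isclose(torch.linalg.norm(b, 2), torch.linalg.svd(b)[1].max())
```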
torch.linalg.pinv(input, rcond=1e-15, hermitian=False, *, out=None) → Tensor
Computes the pseudo-inverse (also known as the Moore-Penrose inverse) of a matrix input, or of each matrix in a batched input. The singular values (or the absolute values of the eigenvalues when hermitian is True) that are below the specified rcond threshold are treated as zero and discarded in the computation. Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The pseudo-inverse is computed using singular value decomposition (see torch.linalg.svd()) by default. If hermitian is True, then input is assumed to be Hermitian (symmetric if real-valued), and the computation of the pseudo-inverse is done by obtaining the eigenvalues and eigenvectors (see torch.linalg.eigh()). Note If singular value decomposition or eigenvalue decomposition algorithms do not converge then a RuntimeError will be thrown. Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
rcond (float, Tensor, optional) – the tolerance value to determine the cutoff for small singular values. Must be broadcastable to the singular values of input as returned by torch.svd(). Default is 1e-15.
hermitian (bool, optional) – indicates whether input is Hermitian. Default is False. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default is None. Examples: >>> input = torch.randn(3, 5)
>>> input
tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],
[-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],
[-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])
>>> torch.linalg.pinv(input)
tensor([[ 0.0600, -0.1933, -0.2090],
[-0.0903, -0.0817, -0.4752],
[-0.7124, -0.1631, -0.2272],
[ 0.1356, 0.3933, -0.5023],
[-0.0308, -0.1725, -0.5216]])
Batched linalg.pinv example
>>> a = torch.randn(2, 6, 3)
>>> b = torch.linalg.pinv(a)
>>> torch.matmul(b, a)
tensor([[[ 1.0000e+00, 1.6391e-07, -1.1548e-07],
[ 8.3121e-08, 1.0000e+00, -2.7567e-07],
[ 3.5390e-08, 1.4901e-08, 1.0000e+00]],
[[ 1.0000e+00, -8.9407e-08, 2.9802e-08],
[-2.2352e-07, 1.0000e+00, 1.1921e-07],
[ 0.0000e+00, 8.9407e-08, 1.0000e+00]]])
Hermitian input example
>>> a = torch.randn(3, 3, dtype=torch.complex64)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> b = torch.linalg.pinv(a, hermitian=True)
>>> torch.matmul(b, a)
tensor([[ 1.0000e+00+0.0000e+00j, -1.1921e-07-2.3842e-07j,
5.9605e-08-2.3842e-07j],
[ 5.9605e-08+2.3842e-07j, 1.0000e+00+2.3842e-07j,
-4.7684e-07+1.1921e-07j],
[-1.1921e-07+0.0000e+00j, -2.3842e-07-2.9802e-07j,
1.0000e+00-1.7897e-07j]])
Non-default rcond example
>>> rcond = 0.5
>>> a = torch.randn(3, 3)
>>> torch.linalg.pinv(a)
tensor([[ 0.2971, -0.4280, -2.0111],
[-0.0090, 0.6426, -0.1116],
[-0.7832, -0.2465, 1.0994]])
>>> torch.linalg.pinv(a, rcond)
tensor([[-0.2672, -0.2351, -0.0539],
[-0.0211, 0.6467, -0.0698],
[-0.4400, -0.3638, -0.0910]])
Matrix-wise rcond example
>>> a = torch.randn(5, 6, 2, 3, 3)
>>> rcond = torch.rand(2) # different rcond values for each matrix in a[:, :, 0] and a[:, :, 1]
>>> torch.linalg.pinv(a, rcond)
>>> rcond = torch.randn(5, 6, 2) # different rcond value for each matrix in 'a'
>>> torch.linalg.pinv(a, rcond)
torch.linalg.svd(input, full_matrices=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U, S, Vh), such that input = U @ diag(S) @ Vh. If input is a batch of tensors, then U, S, and Vh are also batched with the same batch dimensions as input. If full_matrices is False, the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only min(m, n) orthonormal columns. If compute_uv is False, the returned U and Vh will be empty tensors with no elements and the same device as input. The full_matrices argument has no effect when compute_uv is False. The dtypes of U and V are the same as input’s. S will always be real-valued, even if input is complex. Note Unlike NumPy’s linalg.svd, this always returns a namedtuple of three tensors, even when compute_uv=False. This behavior may change in a future PyTorch release. Note The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. Note The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the cuSOLVER routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later, and uses the MAGMA routine gesdd on earlier versions of CUDA. Note The returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride(). Note Gradients computed using U and Vh may be unstable if input is not full rank or has non-unique singular values. Note When full_matrices = True, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces.
Note The S tensor can only be used to compute gradients if compute_uv is True. Note Since U and V of an SVD are not unique, each vector can be multiplied by an arbitrary phase factor e^(iφ) while the SVD result is still correct. Different platforms, like NumPy, or inputs on different device types, may produce different U and V tensors. Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of m × n matrices.
full_matrices (bool, optional) – controls whether to compute the full or reduced decomposition, and consequently the shape of returned U and V. Defaults to True.
compute_uv (bool, optional) – whether to compute U and V or not. Defaults to True.
out (tuple, optional) – a tuple of three tensors to use for the outputs. If compute_uv=False, the 1st and 3rd arguments must be tensors, but they are ignored. E.g. you can pass (torch.Tensor(), out_S, torch.Tensor())
Example: >>> import torch
>>> a = torch.randn(5, 3)
>>> a
tensor([[-0.3357, -0.2987, -1.1096],
[ 1.4894, 1.0016, -0.4572],
[-1.9401, 0.7437, 2.0968],
[ 0.1515, 1.3812, 1.5491],
[-1.8489, -0.5907, -2.5673]])
>>>
>>> # reconstruction in the full_matrices=False case
>>> u, s, vh = torch.linalg.svd(a, full_matrices=False)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # reconstruction in the full_matrices=True case
>>> u, s, vh = torch.linalg.svd(a)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u[:, :3] @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # extra dimensions
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, vh = torch.linalg.svd(a_big, full_matrices=False)
>>> torch.dist(a_big, u @ torch.diag_embed(s) @ vh)
tensor(3.0957e-06)
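The ordering and orthogonality properties stated in the notes above can be verified directly on the returned factors; a small sketch:

```python
import torch

a = torch.randn(5, 3)
U, S, Vh = torch.linalg.svd(a, full_matrices=False)

# singular values come back in descending order
assert (S[:-1] >= S[1:]).all()
# in the reduced decomposition U has orthonormal columns and Vh orthonormal rows
assert torch.allclose(U.t() @ U, torch.eye(3), atol=1e-5)
assert torch.allclose(Vh @ Vh.t(), torch.eye(3), atol=1e-5)
```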
torch.linalg.solve(input, other, *, out=None) → Tensor
Computes the solution x to the matrix equation matmul(input, x) = other with a square matrix, or batches of such matrices, input and one or more right-hand side vectors other. If input is batched and other is not, then other is broadcast to have the same batch dimensions as input. The resulting tensor has the same shape as the (possibly broadcast) other. Supports input of float, double, cfloat and cdouble dtypes. Note If input is a non-square or non-invertible matrix, or a batch containing non-square matrices or one or more non-invertible matrices, then a RuntimeError will be thrown. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Parameters
input (Tensor) – the square n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
other (Tensor) – right-hand side tensor of shape (*, n) or (*, n, k), where k is the number of right-hand side vectors. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> A = torch.eye(3)
>>> b = torch.randn(3)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b)
True
Batched input: >>> A = torch.randn(2, 3, 3)
>>> b = torch.randn(3, 1)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b)
True
>>> b = torch.rand(3) # b is broadcast internally to (*A.shape[:-2], 3)
>>> x = torch.linalg.solve(A, b)
>>> x.shape
torch.Size([2, 3])
>>> Ax = A @ x.unsqueeze(-1)
>>> torch.allclose(Ax, b.unsqueeze(-1).expand_as(Ax))
True
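Solving directly is preferable to forming the inverse explicitly, but for an invertible matrix the two agree; a sketch (the diagonal shift is a hypothetical trick to keep the random matrix well-conditioned):

```python
import torch

# diagonal shift keeps the random matrix comfortably invertible
A = torch.randn(3, 3, dtype=torch.float64) + 3 * torch.eye(3, dtype=torch.float64)
b = torch.randn(3, dtype=torch.float64)

x = torch.linalg.solve(A, b)
# same result as multiplying by the explicit inverse, without computing it
assert torch.allclose(x, torch.inverse(A) @ b)
```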
torch.linalg.tensorinv(input, ind=2, *, out=None) → Tensor
Computes a tensor input_inv such that tensordot(input_inv, input, ind) == I_n (inverse tensor equation), where I_n is the n-dimensional identity tensor and n is equal to input.ndim. The resulting tensor input_inv has shape equal to input.shape[ind:] + input.shape[:ind]. Supports input of float, double, cfloat and cdouble data types. Note If input is not invertible or does not satisfy the requirement prod(input.shape[ind:]) == prod(input.shape[:ind]), then a RuntimeError will be thrown. Note When input is a 2-dimensional tensor and ind=1, this function computes the (multiplicative) inverse of input, equivalent to calling torch.inverse(). Parameters
input (Tensor) – A tensor to invert. Its shape must satisfy prod(input.shape[:ind]) == prod(input.shape[ind:]).
ind (int) – A positive integer that describes the inverse tensor equation. See torch.tensordot() for details. Default: 2. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.eye(4 * 6).reshape((4, 6, 8, 3))
>>> ainv = torch.linalg.tensorinv(a, ind=2)
>>> ainv.shape
torch.Size([8, 3, 4, 6])
>>> b = torch.randn(4, 6)
>>> torch.allclose(torch.tensordot(ainv, b), torch.linalg.tensorsolve(a, b))
True
>>> a = torch.randn(4, 4)
>>> a_tensorinv = torch.linalg.tensorinv(a, ind=1)
>>> a_inv = torch.inverse(a)
>>> torch.allclose(a_tensorinv, a_inv)
True
torch.linalg.tensorsolve(input, other, dims=None, *, out=None) → Tensor
Computes a tensor x such that tensordot(input, x, dims=x.ndim) = other. The resulting tensor x has shape input.shape[other.ndim:]. Supports real-valued and complex-valued inputs. Note If input does not satisfy the requirement prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim]) after (optionally) moving the dimensions using dims, then a RuntimeError will be thrown. Parameters
input (Tensor) – “left-hand-side” tensor, it must satisfy the requirement prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim]).
other (Tensor) – “right-hand-side” tensor of shape input.shape[:other.ndim].
dims (Tuple[int]) – dimensions of input to be moved before the computation. Equivalent to calling input = movedim(input, dims, range(len(dims) - input.ndim, 0)). If None (default), no dimensions are moved. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4))
>>> b = torch.randn(2 * 3, 4)
>>> x = torch.linalg.tensorsolve(a, b)
>>> x.shape
torch.Size([2, 3, 4])
>>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b)
True
>>> a = torch.randn(6, 4, 4, 3, 2)
>>> b = torch.randn(4, 3, 2)
>>> x = torch.linalg.tensorsolve(a, b, dims=(0, 2))
>>> x.shape
torch.Size([6, 4])
>>> a = a.permute(1, 3, 4, 0, 2)
>>> a.shape[b.ndim:]
torch.Size([6, 4])
>>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b, atol=1e-6)
True
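tensorsolve can be understood as an ordinary matrix solve on the flattened tensors; a sketch of that equivalence (an illustration of the semantics, not necessarily the internal implementation):

```python
import torch

a = torch.randn(6, 4, 2, 3, 4, dtype=torch.float64)
b = torch.randn(6, 4, dtype=torch.float64)
x = torch.linalg.tensorsolve(a, b)

# flatten the leading dims of a into rows and the trailing dims into columns,
# then the problem is a plain square linear system
A = a.reshape(6 * 4, 2 * 3 * 4)
x_flat = torch.linalg.solve(A, b.reshape(-1))
assert torch.allclose(x, x_flat.reshape(2, 3, 4), atol=1e-6)
```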
torch.linalg.inv(input, *, out=None) → Tensor
Computes the multiplicative inverse matrix of a square matrix input, or of each square matrix in a batched input. The result satisfies the relation: matmul(inv(input), input) = matmul(input, inv(input)) = eye(input.shape[-1]).expand_as(input). Supports input of float, double, cfloat and cdouble data types. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The inverse matrix is computed using LAPACK’s getrf and getri routines for CPU inputs. For CUDA inputs, cuSOLVER’s getrf and getrs routines as well as cuBLAS’ getrf and getri routines are used if CUDA version >= 10.1.243, otherwise MAGMA’s getrf and getri routines are used instead. Note If input is a non-invertible matrix or non-square matrix, or batch with at least one such matrix, then a RuntimeError will be thrown. Parameters
input (Tensor) – the square (n, n) matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default is None. Examples: >>> x = torch.rand(4, 4)
>>> y = torch.linalg.inv(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000, -0.0000, -0.0000, 0.0000],
[ 0.0000, 1.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 1.0000, 0.0000],
[ 0.0000, -0.0000, -0.0000, 1.0000]])
>>> torch.max(torch.abs(z - torch.eye(4))) # Max non-zero
tensor(1.1921e-07)
>>> # Batched inverse example
>>> x = torch.randn(2, 3, 4, 4)
>>> y = torch.linalg.inv(x)
>>> z = torch.matmul(x, y)
>>> torch.max(torch.abs(z - torch.eye(4).expand_as(x))) # Max non-zero
tensor(1.9073e-06)
>>> x = torch.rand(4, 4, dtype=torch.cdouble)
>>> y = torch.linalg.inv(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000e+00+0.0000e+00j, -1.3878e-16+3.4694e-16j,
5.5511e-17-1.1102e-16j, 0.0000e+00-1.6653e-16j],
[ 5.5511e-16-1.6653e-16j, 1.0000e+00+6.9389e-17j,
2.2204e-16-1.1102e-16j, -2.2204e-16+1.1102e-16j],
[ 3.8858e-16-1.2490e-16j, 2.7756e-17+3.4694e-17j,
1.0000e+00+0.0000e+00j, -4.4409e-16+5.5511e-17j],
[ 4.4409e-16+5.5511e-16j, -3.8858e-16+1.8041e-16j,
2.2204e-16+0.0000e+00j, 1.0000e+00-3.4694e-16j]],
dtype=torch.complex128)
>>> torch.max(torch.abs(z - torch.eye(4, dtype=torch.cdouble))) # Max non-zero
tensor(7.5107e-16, dtype=torch.float64)
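Inverting a matrix is equivalent to solving against the identity; a sketch (again using a hypothetical diagonal shift to keep the random matrix invertible):

```python
import torch

x = torch.rand(4, 4, dtype=torch.float64) + 4 * torch.eye(4, dtype=torch.float64)
eye = torch.eye(4, dtype=torch.float64)

# inv(x) is the solution Y of x @ Y = I
assert torch.allclose(torch.linalg.inv(x), torch.linalg.solve(x, eye))
```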
torch.linalg.qr(input, mode='reduced', *, out=None) -> (Tensor, Tensor)
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = QR, with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices. Depending on the value of mode this function returns the reduced or complete QR factorization. See below for a list of valid modes. Note Differences with numpy.linalg.qr:
mode='raw' is not implemented. Unlike numpy.linalg.qr, this function always returns a tuple of two tensors. When mode='r', the Q tensor is an empty tensor. This behavior may change in a future PyTorch release. Note Backpropagation is not supported for mode='r'. Use mode='reduced' instead. Backpropagation is also not supported if the first min(input.size(-1), input.size(-2)) columns of any matrix in input are not linearly independent. While no error will be thrown when this occurs, the values of the “gradient” produced may be anything. This behavior may change in the future. Note This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms. Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n.
mode (str, optional) –
if k = min(m, n) then:
'reduced' : returns (Q, R) with dimensions (m, k), (k, n) (default)
'complete': returns (Q, R) with dimensions (m, m), (m, n)
'r': computes only R; returns (Q, R) where Q is empty and R has dimensions (k, n) Keyword Arguments
out (tuple, optional) – tuple of Q and R tensors. The dimensions of Q and R are detailed in the description of mode above. Example: >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.linalg.qr(a)
>>> q
tensor([[-0.8571, 0.3943, 0.3314],
[-0.4286, -0.9029, -0.0343],
[ 0.2857, -0.1714, 0.9429]])
>>> r
tensor([[ -14.0000, -21.0000, 14.0000],
[ 0.0000, -175.0000, 70.0000],
[ 0.0000, 0.0000, -35.0000]])
>>> torch.mm(q, r).round()
tensor([[ 12., -51., 4.],
[ 6., 167., -68.],
[ -4., 24., -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1., 0., 0.],
[ 0., 1., -0.],
[ 0., -0., 1.]])
>>> q2, r2 = torch.linalg.qr(a, mode='r')
>>> q2
tensor([])
>>> torch.equal(r, r2)
True
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.linalg.qr(a, mode='complete')
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(4))
True | torch.linalg |
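A common use of the reduced factorization is least squares: from min ||a @ x - b|| one back-substitutes R x = Qᵀ b. A sketch of that pattern (torch.triangular_solve is assumed to be available, as in this PyTorch release):

```python
import torch

a = torch.randn(5, 3, dtype=torch.float64)
b = torch.randn(5, 1, dtype=torch.float64)

q, r = torch.linalg.qr(a)                      # q: (5, 3), r: (3, 3)
x = torch.triangular_solve(q.t() @ b, r).solution  # solve R x = Qᵀ b

# at the least-squares solution the residual is orthogonal to range(a)
assert torch.allclose(a.t() @ (a @ x - b),
                      torch.zeros(3, 1, dtype=torch.float64), atol=1e-8)
```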
torch.linalg.cholesky(input, *, out=None) → Tensor
Computes the Cholesky decomposition of a Hermitian (or symmetric for real-valued matrices) positive-definite matrix or the Cholesky decompositions for a batch of such matrices. Each decomposition has the form: input = L Lᴴ
where L is a lower-triangular matrix and Lᴴ is the conjugate transpose of L, which is just a transpose for the case of real-valued input matrices. In code it translates to input = L @ L.t() if input is real-valued and input = L @ L.conj().t() if input is complex-valued. The batch of L matrices is returned. Supports real-valued and complex-valued inputs. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note LAPACK’s potrf is used for CPU inputs, and MAGMA’s potrf is used for CUDA inputs. Note If input is not a Hermitian positive-definite matrix, or if it’s a batch of matrices and one or more of them is not a Hermitian positive-definite matrix, then a RuntimeError will be thrown. If input is a batch of matrices, then the error message will include the batch index of the first matrix that is not Hermitian positive-definite. Parameters
input (Tensor) – the input tensor of size (*, n, n) consisting of Hermitian positive-definite n × n matrices, where * is zero or more batch dimensions. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = torch.mm(a, a.t().conj()) # creates a Hermitian positive-definite matrix
>>> l = torch.linalg.cholesky(a)
>>> a
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> l
tensor([[1.5895+0.0000j, 0.0000+0.0000j],
[1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128)
>>> torch.mm(l, l.t().conj())
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = torch.matmul(a, a.transpose(-2, -1)) # creates a symmetric positive-definite matrix
>>> l = torch.linalg.cholesky(a)
>>> a
tensor([[[ 1.1629, 2.0237],
[ 2.0237, 6.6593]],
[[ 0.4187, 0.1830],
[ 0.1830, 0.1018]],
[[ 1.9348, -2.5744],
[-2.5744, 4.6386]]], dtype=torch.float64)
>>> l
tensor([[[ 1.0784, 0.0000],
[ 1.8766, 1.7713]],
[[ 0.6471, 0.0000],
[ 0.2829, 0.1477]],
[[ 1.3910, 0.0000],
[-1.8509, 1.1014]]], dtype=torch.float64)
>>> torch.allclose(torch.matmul(l, l.transpose(-2, -1)), a)
True | torch.linalg#torch.linalg.cholesky |
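Once computed, the factor L can be reused to solve linear systems cheaply via torch.cholesky_solve; a sketch:

```python
import torch

a = torch.randn(3, 3, dtype=torch.float64)
a = a @ a.t() + 3 * torch.eye(3, dtype=torch.float64)  # symmetric positive-definite
l = torch.linalg.cholesky(a)

b = torch.randn(3, 1, dtype=torch.float64)
x = torch.cholesky_solve(b, l)  # expects the lower factor by default (upper=False)
assert torch.allclose(a @ x, b)
```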
torch.linalg.cond(input, p=None, *, out=None) → Tensor
Computes the condition number of a matrix input, or of each matrix in a batched input, using the matrix norm defined by p. For norms {'fro', 'nuc', inf, -inf, 1, -1} this is defined as the matrix norm of input times the matrix norm of the inverse of input, computed using torch.linalg.norm(). For norms {None, 2, -2} it is defined as the ratio between the largest and smallest singular values, computed using torch.linalg.svd(). This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function may synchronize that device with the CPU depending on which norm p is used. Note For norms {None, 2, -2}, input may be a non-square matrix or batch of non-square matrices. For other norms, however, input must be a square matrix or a batch of square matrices, and if this requirement is not satisfied a RuntimeError will be thrown. Note For norms {'fro', 'nuc', inf, -inf, 1, -1} if input is a non-invertible matrix then a tensor containing infinity will be returned. If input is a batch of matrices and one or more of them is not invertible then a RuntimeError will be thrown. Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
p (int, float, inf, -inf, 'fro', 'nuc', optional) –
the type of the matrix norm to use in the computations. inf refers to float('inf'), numpy’s inf object, or any equivalent object. The following norms can be used:
p norm for matrices
None ratio of the largest singular value to the smallest singular value
'fro' Frobenius norm
'nuc' nuclear norm
inf max(sum(abs(x), dim=1))
-inf min(sum(abs(x), dim=1))
1 max(sum(abs(x), dim=0))
-1 min(sum(abs(x), dim=0))
2 ratio of the largest singular value to the smallest singular value
-2 ratio of the smallest singular value to the largest singular value Default: None Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Returns
The condition number of input. The output dtype is always real valued even for complex inputs (e.g. float if input is cfloat). Examples: >>> a = torch.randn(3, 4, 4, dtype=torch.complex64)
>>> torch.linalg.cond(a)
>>> a = torch.tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]])
>>> torch.linalg.cond(a)
tensor([1.4142])
>>> torch.linalg.cond(a, 'fro')
tensor(3.1623)
>>> torch.linalg.cond(a, 'nuc')
tensor(9.2426)
>>> torch.linalg.cond(a, float('inf'))
tensor(2.)
>>> torch.linalg.cond(a, float('-inf'))
tensor(1.)
>>> torch.linalg.cond(a, 1)
tensor(2.)
>>> torch.linalg.cond(a, -1)
tensor(1.)
>>> torch.linalg.cond(a, 2)
tensor([1.4142])
>>> torch.linalg.cond(a, -2)
tensor([0.7071])
>>> a = torch.randn(2, 3, 3)
>>> a
tensor([[[-0.9204, 1.1140, 1.2055],
[ 0.3988, -0.2395, -0.7441],
[-0.5160, 0.3115, 0.2619]],
[[-2.2128, 0.9241, 2.1492],
[-1.1277, 2.7604, -0.8760],
[ 1.2159, 0.5960, 0.0498]]])
>>> torch.linalg.cond(a)
tensor([[9.5917],
[3.2538]])
>>> a = torch.randn(2, 3, 3, dtype=torch.complex64)
>>> a
tensor([[[-0.4671-0.2137j, -0.1334-0.9508j, 0.6252+0.1759j],
[-0.3486-0.2991j, -0.1317+0.1252j, 0.3025-0.1604j],
[-0.5634+0.8582j, 0.1118-0.4677j, -0.1121+0.7574j]],
[[ 0.3964+0.2533j, 0.9385-0.6417j, -0.0283-0.8673j],
[ 0.2635+0.2323j, -0.8929-1.1269j, 0.3332+0.0733j],
[ 0.1151+0.1644j, -1.1163+0.3471j, -0.5870+0.1629j]]])
>>> torch.linalg.cond(a)
tensor([[4.6245],
[4.5671]])
>>> torch.linalg.cond(a, 1)
tensor([9.2589, 9.3486]) | torch.linalg#torch.linalg.cond |
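The norm-product definition quoted above can be checked against the 'fro' example output; a small sketch:

```python
import torch

a = torch.tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]])

# for 'fro' (and the other non-spectral norms) cond is norm(a) * norm(inv(a))
lhs = torch.linalg.cond(a, 'fro')
rhs = torch.linalg.norm(a, 'fro') * torch.linalg.norm(torch.inverse(a), 'fro')
assert torch.isclose(lhs, rhs)
```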
torch.linalg.det(input) → Tensor
Computes the determinant of a square matrix input, or of each square matrix in a batched input. This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s getrf is used for CPU inputs, and MAGMA’s getrf is used for CUDA inputs. Note Backward through det internally uses torch.linalg.svd() when input is not invertible. In this case, double backward through det will be unstable when input doesn’t have distinct singular values. See torch.linalg.svd() for more details. Parameters
input (Tensor) – the input matrix of size (n, n) or the batch of matrices of size (*, n, n) where * is one or more batch dimensions. Example: >>> a = torch.randn(3, 3)
>>> a
tensor([[ 0.9478, 0.9158, -1.1295],
[ 0.9701, 0.7346, -1.8044],
[-0.2337, 0.0557, 0.6929]])
>>> torch.linalg.det(a)
tensor(0.0934)
>>> a = torch.randn(3, 2, 2)
>>> a
tensor([[[ 0.9254, -0.6213],
[-0.5787, 1.6843]],
[[ 0.3242, -0.9665],
[ 0.4539, -0.0887]],
[[ 1.1336, -0.4025],
[-0.7089, 0.9032]]])
>>> torch.linalg.det(a)
tensor([1.1990, 0.4099, 0.7386]) | torch.linalg#torch.linalg.det |
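A quick sanity check on the returned values: determinants are multiplicative, so det(a @ b) equals det(a) * det(b). A sketch:

```python
import torch

a = torch.randn(3, 3, dtype=torch.float64)
b = torch.randn(3, 3, dtype=torch.float64)

# multiplicativity of the determinant
assert torch.isclose(torch.linalg.det(a @ b),
                     torch.linalg.det(a) * torch.linalg.det(b))
```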
torch.linalg.eigh(input, UPLO='L', *, out=None) -> (Tensor, Tensor)
Computes the eigenvalues and eigenvectors of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. For a single matrix input, the tensor of eigenvalues w and the tensor of eigenvectors V decompose the input such that input = V diag(w) Vᴴ, where Vᴴ is the transpose of V for real-valued input, or the conjugate transpose of V for complex-valued input. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When UPLO is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When UPLO is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues/eigenvectors are computed using LAPACK’s syevd and heevd routines for CPU inputs, and MAGMA’s syevd and heevd routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note The eigenvectors of matrices are not unique, so any eigenvector multiplied by a constant remains a valid eigenvector. This function may compute different eigenvector representations on different device types. Usually the difference is only in the sign of the eigenvector. Note See torch.linalg.eigvalsh() for a related function that computes only eigenvalues. However, that function is not differentiable. Parameters
input (Tensor) – the Hermitian n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
UPLO ('L', 'U', optional) – controls whether to use the upper-triangular or the lower-triangular part of input in the computations. Default is 'L'. Keyword Arguments
out (tuple, optional) – tuple of two tensors to write the output to. Default is None. Returns
A namedtuple (eigenvalues, eigenvectors) containing
eigenvalues (Tensor): Shape (*, m).
The eigenvalues in ascending order.
eigenvectors (Tensor): Shape (*, m, m).
The orthonormal eigenvectors of the input. Return type
(Tensor, Tensor) Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> a
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> w, v = torch.linalg.eigh(a)
>>> w
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> v
tensor([[-0.0846+-0.0000j, -0.9964+0.0000j],
[ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128)
>>> torch.allclose(torch.matmul(v, torch.matmul(w.to(v.dtype).diag_embed(), v.t().conj())), a)
True
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = a + a.transpose(-2, -1) # creates a symmetric matrix
>>> w, v = torch.linalg.eigh(a)
>>> torch.allclose(torch.matmul(v, torch.matmul(w.diag_embed(), v.transpose(-2, -1))), a)
True | torch.linalg#torch.linalg.eigh |
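Since only one triangle of each matrix is read, the contents of the other triangle are irrelevant. A minimal sketch of the default UPLO='L' behavior (the matrix values here are chosen purely for illustration):

```python
import torch

# With UPLO='L' (the default), the Hermitian matrix is defined entirely by
# its lower triangle; entries above the diagonal are never read.
a = torch.tensor([[2.0, 0.0],
                  [1.0, 3.0]], dtype=torch.float64)
b = a.clone()
b[0, 1] = 99.0  # garbage in the upper triangle

w_a, v_a = torch.linalg.eigh(a)  # UPLO='L' by default
w_b, v_b = torch.linalg.eigh(b)

print(torch.allclose(w_a, w_b))  # True: the upper triangle was ignored
```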
torch.linalg.eigvalsh(input, UPLO='L', *, out=None) → Tensor
Computes the eigenvalues of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. The eigenvalues are returned in ascending order. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When UPLO is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When UPLO is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues are computed using LAPACK’s syevd and heevd routines for CPU inputs, and MAGMA’s syevd and heevd routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note This function doesn’t support backpropagation, please use torch.linalg.eigh() instead, which also computes the eigenvectors. Note See torch.linalg.eigh() for a related function that computes both eigenvalues and eigenvectors. Parameters
input (Tensor) – the Hermitian n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
UPLO ('L', 'U', optional) – controls whether to use the upper-triangular or the lower-triangular part of input in the computations. Default is 'L'. Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> a
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> w = torch.linalg.eigvalsh(a)
>>> w
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = a + a.transpose(-2, -1) # creates a symmetric matrix
>>> a
tensor([[[ 2.8050, -0.3850],
[-0.3850, 3.2376]],
[[-1.0307, -2.7457],
[-2.7457, -1.7517]],
[[ 1.7166, 2.2207],
[ 2.2207, -2.0898]]], dtype=torch.float64)
>>> w = torch.linalg.eigvalsh(a)
>>> w
tensor([[ 2.5797, 3.4629],
[-4.1605, 1.3780],
[-3.1113, 2.7381]], dtype=torch.float64) | torch.linalg#torch.linalg.eigvalsh |
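As the notes above state, torch.linalg.eigvalsh computes the same real, ascending eigenvalues as torch.linalg.eigh, just without the eigenvectors; a quick consistency sketch:

```python
import torch

torch.manual_seed(0)
a = torch.randn(3, 3, dtype=torch.complex128)
a = a + a.t().conj()  # make it Hermitian

w = torch.linalg.eigvalsh(a)
w_full, _ = torch.linalg.eigh(a)

assert torch.allclose(w, w_full)  # same eigenvalues as eigh
assert (w[:-1] <= w[1:]).all()    # returned in ascending order
assert w.dtype == torch.float64   # eigenvalues of a Hermitian matrix are real
```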
torch.linalg.inv(input, *, out=None) → Tensor
Computes the multiplicative inverse matrix of a square matrix input, or of each square matrix in a batched input. The result satisfies the relation: matmul(inv(input),input) = matmul(input,inv(input)) = eye(input.shape[0]).expand_as(input). Supports input of float, double, cfloat and cdouble data types. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The inverse matrix is computed using LAPACK’s getrf and getri routines for CPU inputs. For CUDA inputs, cuSOLVER’s getrf and getrs routines as well as cuBLAS’ getrf and getri routines are used if CUDA version >= 10.1.243, otherwise MAGMA’s getrf and getri routines are used instead. Note If input is a non-invertible matrix or non-square matrix, or batch with at least one such matrix, then a RuntimeError will be thrown. Parameters
input (Tensor) – the square (n, n) matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default is None. Examples: >>> x = torch.rand(4, 4)
>>> y = torch.linalg.inv(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000, -0.0000, -0.0000, 0.0000],
[ 0.0000, 1.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 1.0000, 0.0000],
[ 0.0000, -0.0000, -0.0000, 1.0000]])
>>> torch.max(torch.abs(z - torch.eye(4))) # Max non-zero
tensor(1.1921e-07)
>>> # Batched inverse example
>>> x = torch.randn(2, 3, 4, 4)
>>> y = torch.linalg.inv(x)
>>> z = torch.matmul(x, y)
>>> torch.max(torch.abs(z - torch.eye(4).expand_as(x))) # Max non-zero
tensor(1.9073e-06)
>>> x = torch.rand(4, 4, dtype=torch.cdouble)
>>> y = torch.linalg.inv(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000e+00+0.0000e+00j, -1.3878e-16+3.4694e-16j,
5.5511e-17-1.1102e-16j, 0.0000e+00-1.6653e-16j],
[ 5.5511e-16-1.6653e-16j, 1.0000e+00+6.9389e-17j,
2.2204e-16-1.1102e-16j, -2.2204e-16+1.1102e-16j],
[ 3.8858e-16-1.2490e-16j, 2.7756e-17+3.4694e-17j,
1.0000e+00+0.0000e+00j, -4.4409e-16+5.5511e-17j],
[ 4.4409e-16+5.5511e-16j, -3.8858e-16+1.8041e-16j,
2.2204e-16+0.0000e+00j, 1.0000e+00-3.4694e-16j]],
dtype=torch.complex128)
>>> torch.max(torch.abs(z - torch.eye(4, dtype=torch.cdouble))) # Max non-zero
tensor(7.5107e-16, dtype=torch.float64) | torch.linalg#torch.linalg.inv |
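The note about non-invertible inputs can be checked directly; a small sketch (the singular matrix here is just an illustration):

```python
import torch

# A well-conditioned diagonal matrix inverts exactly.
a = torch.tensor([[2.0, 0.0],
                  [0.0, 4.0]], dtype=torch.float64)
a_inv = torch.linalg.inv(a)
assert torch.allclose(a_inv @ a, torch.eye(2, dtype=torch.float64))

# A singular matrix raises a RuntimeError, as documented.
singular = torch.zeros(2, 2, dtype=torch.float64)
try:
    torch.linalg.inv(singular)
    raised = False
except RuntimeError:
    raised = True
assert raised
```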
torch.linalg.matrix_rank(input, tol=None, hermitian=False, *, out=None) → Tensor
Computes the numerical rank of a matrix input, or of each matrix in a batched input. The matrix rank is computed as the number of singular values (or absolute eigenvalues when hermitian is True) that are greater than the specified tol threshold. If tol is not specified, tol is set to S.max(dim=-1)*max(input.shape[-2:])*eps, where S is the singular values (or absolute eigenvalues when hermitian is True), and eps is the epsilon value for the datatype of input. The epsilon value can be obtained using the eps attribute of torch.finfo. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The matrix rank is computed using singular value decomposition (see torch.linalg.svd()) by default. If hermitian is True, then input is assumed to be Hermitian (symmetric if real-valued), and the computation is done by obtaining the eigenvalues (see torch.linalg.eigvalsh()). Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
tol (float, optional) – the tolerance value. Default is None
hermitian (bool, optional) – indicates whether input is Hermitian. Default is False. Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Examples: >>> a = torch.eye(10)
>>> torch.linalg.matrix_rank(a)
tensor(10)
>>> b = torch.eye(10)
>>> b[0, 0] = 0
>>> torch.linalg.matrix_rank(b)
tensor(9)
>>> a = torch.randn(4, 3, 2)
>>> torch.linalg.matrix_rank(a)
tensor([2, 2, 2, 2])
>>> a = torch.randn(2, 4, 2, 3)
>>> torch.linalg.matrix_rank(a)
tensor([[2, 2, 2, 2],
[2, 2, 2, 2]])
>>> a = torch.randn(2, 4, 3, 3, dtype=torch.complex64)
>>> torch.linalg.matrix_rank(a)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(a, hermitian=True)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(a, tol=1.0)
tensor([[3, 2, 2, 2],
[1, 2, 1, 2]])
>>> torch.linalg.matrix_rank(a, tol=1.0, hermitian=True)
tensor([[2, 2, 2, 1],
[1, 2, 2, 2]]) | torch.linalg#torch.linalg.matrix_rank |
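The default tol formula above can be reproduced by hand from the singular values; a sketch, assuming the default (non-Hermitian) SVD path:

```python
import torch

torch.manual_seed(0)
a = torch.randn(4, 6, dtype=torch.float64)

_, s, _ = torch.linalg.svd(a)            # singular values, in descending order
eps = torch.finfo(a.dtype).eps
tol = s.max() * max(a.shape[-2:]) * eps  # the documented default threshold
manual_rank = int((s > tol).sum())

assert manual_rank == int(torch.linalg.matrix_rank(a))  # matches the default
```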
torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → Tensor
Returns the matrix norm or vector norm of a given tensor. This function can calculate one of eight different types of matrix norms, or one of an infinite number of vector norms, depending on both the number of reduction dimensions and the value of the ord parameter. Parameters
input (Tensor) – The input tensor. If dim is None, input must be 1-D or 2-D, unless ord is None. If both dim and ord are None, the 2-norm of the input flattened to 1-D will be returned. Its data type must be either a floating point or complex type. For complex inputs, the norm is calculated using the absolute values of each element. If the input is complex and neither dtype nor out is specified, the result’s data type will be the corresponding floating point type (e.g. float if input is complexfloat).
ord (int, float, inf, -inf, 'fro', 'nuc', optional) –
The order of norm. inf refers to float('inf'), numpy’s inf object, or any equivalent object. The following norms can be calculated:
ord      norm for matrices               norm for vectors
None     Frobenius norm                  2-norm
'fro'    Frobenius norm                  – not supported –
'nuc'    nuclear norm                    – not supported –
inf      max(sum(abs(x), dim=1))         max(abs(x))
-inf     min(sum(abs(x), dim=1))         min(abs(x))
0        – not supported –               sum(x != 0)
1        max(sum(abs(x), dim=0))         as below
-1       min(sum(abs(x), dim=0))         as below
2        2-norm (largest sing. value)    as below
-2       smallest singular value         as below
other    – not supported –               sum(abs(x)**ord)**(1./ord) Default: None
dim (int, 2-tuple of python:ints, 2-list of python:ints, optional) – If dim is an int, vector norm will be calculated over the specified dimension. If dim is a 2-tuple of ints, matrix norm will be calculated over the specified dimensions. If dim is None, matrix norm will be calculated when the input tensor has two dimensions, and vector norm will be calculated when the input tensor has one dimension. Default: None
keepdim (bool, optional) – If set to True, the reduced dimensions are retained in the result as dimensions with size one. Default: False
Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None
dtype (torch.dtype, optional) – If specified, the input tensor is cast to dtype before performing the operation, and the returned tensor’s type will be dtype. If this argument is used in conjunction with the out argument, the output tensor’s type must match this argument or a RuntimeError will be raised. Default: None
Examples: >>> import torch
>>> from torch import linalg as LA
>>> a = torch.arange(9, dtype=torch.float) - 4
>>> a
tensor([-4., -3., -2., -1., 0., 1., 2., 3., 4.])
>>> b = a.reshape((3, 3))
>>> b
tensor([[-4., -3., -2.],
[-1., 0., 1.],
[ 2., 3., 4.]])
>>> LA.norm(a)
tensor(7.7460)
>>> LA.norm(b)
tensor(7.7460)
>>> LA.norm(b, 'fro')
tensor(7.7460)
>>> LA.norm(a, float('inf'))
tensor(4.)
>>> LA.norm(b, float('inf'))
tensor(9.)
>>> LA.norm(a, -float('inf'))
tensor(0.)
>>> LA.norm(b, -float('inf'))
tensor(2.)
>>> LA.norm(a, 1)
tensor(20.)
>>> LA.norm(b, 1)
tensor(7.)
>>> LA.norm(a, -1)
tensor(0.)
>>> LA.norm(b, -1)
tensor(6.)
>>> LA.norm(a, 2)
tensor(7.7460)
>>> LA.norm(b, 2)
tensor(7.3485)
>>> LA.norm(a, -2)
tensor(0.)
>>> LA.norm(b.double(), -2)
tensor(1.8570e-16, dtype=torch.float64)
>>> LA.norm(a, 3)
tensor(5.8480)
>>> LA.norm(a, -3)
tensor(0.)
Using the dim argument to compute vector norms: >>> c = torch.tensor([[1., 2., 3.],
... [-1, 1, 4]])
>>> LA.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> LA.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> LA.norm(c, ord=1, dim=1)
tensor([6., 6.])
Using the dim argument to compute matrix norms: >>> m = torch.arange(8, dtype=torch.float).reshape(2, 2, 2)
>>> LA.norm(m, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(tensor(3.7417), tensor(11.2250)) | torch.linalg#torch.linalg.norm |
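The matrix-norm rows of the table above (the induced 1-norm and inf-norm) can be checked directly; a short sketch:

```python
import torch

b = torch.tensor([[1., -2.],
                  [3.,  4.]])

# ord=1: maximum absolute column sum; ord=inf: maximum absolute row sum
assert torch.linalg.norm(b, 1) == b.abs().sum(dim=0).max()             # 6.0
assert torch.linalg.norm(b, float('inf')) == b.abs().sum(dim=1).max()  # 7.0

# ord=0 on a vector counts the nonzero entries
v = torch.tensor([0., 2., 0., -3.])
assert torch.linalg.norm(v, 0) == 2
```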
torch.linalg.pinv(input, rcond=1e-15, hermitian=False, *, out=None) → Tensor
Computes the pseudo-inverse (also known as the Moore-Penrose inverse) of a matrix input, or of each matrix in a batched input. The singular values (or the absolute values of the eigenvalues when hermitian is True) that are below the specified rcond threshold are treated as zero and discarded in the computation. Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The pseudo-inverse is computed using singular value decomposition (see torch.linalg.svd()) by default. If hermitian is True, then input is assumed to be Hermitian (symmetric if real-valued), and the computation of the pseudo-inverse is done by obtaining the eigenvalues and eigenvectors (see torch.linalg.eigh()). Note If singular value decomposition or eigenvalue decomposition algorithms do not converge then a RuntimeError will be thrown. Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
rcond (float, Tensor, optional) – the tolerance value to determine the cutoff for small singular values. Must be broadcastable to the singular values of input as returned by torch.svd(). Default is 1e-15.
hermitian (bool, optional) – indicates whether input is Hermitian. Default is False. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default is None. Examples: >>> input = torch.randn(3, 5)
>>> input
tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],
[-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],
[-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])
>>> torch.linalg.pinv(input)
tensor([[ 0.0600, -0.1933, -0.2090],
[-0.0903, -0.0817, -0.4752],
[-0.7124, -0.1631, -0.2272],
[ 0.1356, 0.3933, -0.5023],
[-0.0308, -0.1725, -0.5216]])
Batched linalg.pinv example
>>> a = torch.randn(2, 6, 3)
>>> b = torch.linalg.pinv(a)
>>> torch.matmul(b, a)
tensor([[[ 1.0000e+00, 1.6391e-07, -1.1548e-07],
[ 8.3121e-08, 1.0000e+00, -2.7567e-07],
[ 3.5390e-08, 1.4901e-08, 1.0000e+00]],
[[ 1.0000e+00, -8.9407e-08, 2.9802e-08],
[-2.2352e-07, 1.0000e+00, 1.1921e-07],
[ 0.0000e+00, 8.9407e-08, 1.0000e+00]]])
Hermitian input example
>>> a = torch.randn(3, 3, dtype=torch.complex64)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> b = torch.linalg.pinv(a, hermitian=True)
>>> torch.matmul(b, a)
tensor([[ 1.0000e+00+0.0000e+00j, -1.1921e-07-2.3842e-07j,
5.9605e-08-2.3842e-07j],
[ 5.9605e-08+2.3842e-07j, 1.0000e+00+2.3842e-07j,
-4.7684e-07+1.1921e-07j],
[-1.1921e-07+0.0000e+00j, -2.3842e-07-2.9802e-07j,
1.0000e+00-1.7897e-07j]])
Non-default rcond example
>>> rcond = 0.5
>>> a = torch.randn(3, 3)
>>> torch.linalg.pinv(a)
tensor([[ 0.2971, -0.4280, -2.0111],
[-0.0090, 0.6426, -0.1116],
[-0.7832, -0.2465, 1.0994]])
>>> torch.linalg.pinv(a, rcond)
tensor([[-0.2672, -0.2351, -0.0539],
[-0.0211, 0.6467, -0.0698],
[-0.4400, -0.3638, -0.0910]])
Matrix-wise rcond example
>>> a = torch.randn(5, 6, 2, 3, 3)
>>> rcond = torch.rand(2) # different rcond values for each matrix in a[:, :, 0] and a[:, :, 1]
>>> torch.linalg.pinv(a, rcond)
>>> rcond = torch.randn(5, 6, 2) # different rcond value for each matrix in 'a'
>>> torch.linalg.pinv(a, rcond) | torch.linalg#torch.linalg.pinv |
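A quick sanity sketch of the Moore-Penrose properties the pseudo-inverse satisfies; for a wide full-rank matrix, it also acts as a right inverse:

```python
import torch

torch.manual_seed(0)
a = torch.randn(3, 5, dtype=torch.float64)
a_pinv = torch.linalg.pinv(a)

# Two of the defining Moore-Penrose conditions:
assert torch.allclose(a @ a_pinv @ a, a)            # A A+ A = A
assert torch.allclose(a_pinv @ a @ a_pinv, a_pinv)  # A+ A A+ = A+

# For a wide full-rank matrix, A A+ is the identity (a right inverse)
assert torch.allclose(a @ a_pinv, torch.eye(3, dtype=torch.float64))
```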
torch.linalg.qr(input, mode='reduced', *, out=None) -> (Tensor, Tensor)
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = QR, with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices. Depending on the value of mode this function returns the reduced or complete QR factorization. See below for a list of valid modes. Note Differences with numpy.linalg.qr:
mode='raw' is not implemented. Unlike numpy.linalg.qr, this function always returns a tuple of two tensors. When mode='r', the Q tensor is an empty tensor. This behavior may change in a future PyTorch release. Note Backpropagation is not supported for mode='r'. Use mode='reduced' instead. Backpropagation is also not supported if the first min(input.size(-1), input.size(-2)) columns of any matrix in input are not linearly independent. While no error will be thrown when this occurs, the values of the “gradient” produced may be anything. This behavior may change in the future. Note This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms. Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n.
mode (str, optional) –
if k = min(m, n) then:
'reduced' : returns (Q, R) with dimensions (m, k), (k, n) (default)
'complete': returns (Q, R) with dimensions (m, m), (m, n)
'r': computes only R; returns (Q, R) where Q is empty and R has dimensions (k, n) Keyword Arguments
out (tuple, optional) – tuple of Q and R tensors. The dimensions of Q and R are detailed in the description of mode above. Example: >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.linalg.qr(a)
>>> q
tensor([[-0.8571, 0.3943, 0.3314],
[-0.4286, -0.9029, -0.0343],
[ 0.2857, -0.1714, 0.9429]])
>>> r
tensor([[ -14.0000, -21.0000, 14.0000],
[ 0.0000, -175.0000, 70.0000],
[ 0.0000, 0.0000, -35.0000]])
>>> torch.mm(q, r).round()
tensor([[ 12., -51., 4.],
[ 6., 167., -68.],
[ -4., 24., -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1., 0., 0.],
[ 0., 1., -0.],
[ 0., -0., 1.]])
>>> q2, r2 = torch.linalg.qr(a, mode='r')
>>> q2
tensor([])
>>> torch.equal(r, r2)
True
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.linalg.qr(a, mode='complete')
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(4))
True | torch.linalg#torch.linalg.qr |
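The shape conventions for the three modes can be summarized in a short sketch:

```python
import torch

torch.manual_seed(0)
a = torch.randn(5, 3, dtype=torch.float64)  # m=5, n=3, so k = min(m, n) = 3

q, r = torch.linalg.qr(a)                   # mode='reduced' (the default)
assert q.shape == (5, 3) and r.shape == (3, 3)

q_c, r_c = torch.linalg.qr(a, mode='complete')
assert q_c.shape == (5, 5) and r_c.shape == (5, 3)
assert torch.allclose(q_c @ r_c, a)

q_r, r_r = torch.linalg.qr(a, mode='r')
assert q_r.numel() == 0                     # Q is an empty tensor in mode='r'
assert torch.allclose(r_r, r)
```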
torch.linalg.slogdet(input, *, out=None) -> (Tensor, Tensor)
Calculates the sign and natural logarithm of the absolute value of a square matrix’s determinant, or of the absolute values of the determinants of a batch of square matrices input. The determinant can be computed with sign * exp(logabsdet). Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s getrf is used for CPU inputs, and MAGMA’s getrf is used for CUDA inputs. Note For matrices that have zero determinant, this returns (0, -inf). If input is batched then the entries in the result tensors corresponding to matrices with the zero determinant have sign 0 and the natural logarithm of the absolute value of the determinant -inf. Parameters
input (Tensor) – the input matrix of size (n, n) or the batch of matrices of size (*, n, n) where * is one or more batch dimensions. Keyword Arguments
out (tuple, optional) – tuple of two tensors to write the output to. Returns
A namedtuple (sign, logabsdet) containing the sign of the determinant and the natural logarithm of the absolute value of determinant, respectively. Example: >>> A = torch.randn(3, 3)
>>> A
tensor([[ 0.0032, -0.2239, -1.1219],
[-0.6690, 0.1161, 0.4053],
[-1.6218, -0.9273, -0.0082]])
>>> torch.linalg.det(A)
tensor(-0.7576)
>>> torch.linalg.logdet(A)
tensor(nan)
>>> torch.linalg.slogdet(A)
torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776)) | torch.linalg#torch.linalg.slogdet |
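The identity determinant = sign * exp(logabsdet) from the description can be verified directly:

```python
import torch

a = torch.tensor([[2., 1.],
                  [0., -3.]], dtype=torch.float64)  # triangular, det = 2 * (-3) = -6

sign, logabsdet = torch.linalg.slogdet(a)
assert sign == -1.0
assert torch.allclose(sign * torch.exp(logabsdet), torch.linalg.det(a))
```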
torch.linalg.solve(input, other, *, out=None) → Tensor
Computes the solution x to the matrix equation matmul(input, x) = other with a square matrix, or batches of such matrices, input and one or more right-hand side vectors other. If input is batched and other is not, then other is broadcast to have the same batch dimensions as input. The resulting tensor has the same shape as the (possibly broadcast) other. Supports input of float, double, cfloat and cdouble dtypes. Note If input is a non-square or non-invertible matrix, or a batch containing non-square matrices or one or more non-invertible matrices, then a RuntimeError will be thrown. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Parameters
input (Tensor) – the square n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
other (Tensor) – right-hand side tensor of shape (*, n) or (*, n, k), where k is the number of right-hand side vectors. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> A = torch.eye(3)
>>> b = torch.randn(3)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b)
True
Batched input: >>> A = torch.randn(2, 3, 3)
>>> b = torch.randn(3, 1)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b)
True
>>> b = torch.rand(3) # b is broadcast internally to (*A.shape[:-2], 3)
>>> x = torch.linalg.solve(A, b)
>>> x.shape
torch.Size([2, 3])
>>> Ax = A @ x.unsqueeze(-1)
>>> torch.allclose(Ax, b.unsqueeze(-1).expand_as(Ax))
True | torch.linalg#torch.linalg.solve |
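Solving the system directly is generally preferable to forming the inverse explicitly, but the two agree, as this sketch shows (the diagonal shift is just a way to make the random matrix safely invertible):

```python
import torch

torch.manual_seed(0)
# Diagonally dominated, so the matrix is safely invertible.
A = torch.randn(3, 3, dtype=torch.float64) + 4 * torch.eye(3, dtype=torch.float64)
b = torch.randn(3, dtype=torch.float64)

x = torch.linalg.solve(A, b)
assert torch.allclose(A @ x, b)
assert torch.allclose(x, torch.linalg.inv(A) @ b)  # same as multiplying by the inverse
```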
torch.linalg.svd(input, full_matrices=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U, S, Vh), such that input = U @ diag(S) @ Vh. If input is a batch of tensors, then U, S, and Vh are also batched with the same batch dimensions as input. If full_matrices is False, the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only min(m, n) orthonormal columns. If compute_uv is False, the returned U and Vh will be empty tensors with no elements and the same device as input. The full_matrices argument has no effect when compute_uv is False. The dtypes of U and V are the same as input’s. S will always be real-valued, even if input is complex. Note Unlike NumPy’s linalg.svd, this always returns a namedtuple of three tensors, even when compute_uv=False. This behavior may change in a future PyTorch release. Note The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. Note The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the cuSOLVER routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later, and uses the MAGMA routine gesdd on earlier versions of CUDA. Note The returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride(). Note Gradients computed using U and Vh may be unstable if input is not full rank or has non-unique singular values. Note When full_matrices = True, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces.
Note The S tensor can only be used to compute gradients if compute_uv is True. Note Since U and V of an SVD is not unique, each vector can be multiplied by an arbitrary phase factor e^{iφ} while the SVD result is still correct. Different platforms, like Numpy, or inputs on different device types, may produce different U and V tensors. Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of m × n matrices.
full_matrices (bool, optional) – controls whether to compute the full or reduced decomposition, and consequently the shape of returned U and V. Defaults to True.
compute_uv (bool, optional) – whether to compute U and V or not. Defaults to True.
out (tuple, optional) – a tuple of three tensors to use for the outputs. If compute_uv=False, the 1st and 3rd arguments must be tensors, but they are ignored. E.g. you can pass (torch.Tensor(), out_S, torch.Tensor())
Example: >>> import torch
>>> a = torch.randn(5, 3)
>>> a
tensor([[-0.3357, -0.2987, -1.1096],
[ 1.4894, 1.0016, -0.4572],
[-1.9401, 0.7437, 2.0968],
[ 0.1515, 1.3812, 1.5491],
[-1.8489, -0.5907, -2.5673]])
>>>
>>> # reconstruction in the full_matrices=False case
>>> u, s, vh = torch.linalg.svd(a, full_matrices=False)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # reconstruction in the full_matrices=True case
>>> u, s, vh = torch.linalg.svd(a)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u[:, :3] @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # extra dimensions
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, vh = torch.linalg.svd(a_big, full_matrices=False)
>>> torch.dist(a_big, u @ torch.diag_embed(s) @ vh)
tensor(3.0957e-06) | torch.linalg#torch.linalg.svd |
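A short sketch of the properties stated above: S is real-valued even for complex input, the singular values come back in descending order, and U diag(S) Vh reconstructs the input:

```python
import torch

torch.manual_seed(0)
a = torch.randn(4, 4, dtype=torch.complex128)

u, s, vh = torch.linalg.svd(a)
assert s.dtype == torch.float64   # S is real even for complex input
assert (s[:-1] >= s[1:]).all()    # singular values in descending order

# Cast S to the complex dtype before reassembling the product.
recon = u @ torch.diag(s).to(u.dtype) @ vh
assert torch.allclose(recon, a)
```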
torch.linalg.tensorinv(input, ind=2, *, out=None) → Tensor
Computes a tensor input_inv such that tensordot(input_inv, input, ind) == I_n (inverse tensor equation), where I_n is the n-dimensional identity tensor and n is equal to input.ndim. The resulting tensor input_inv has shape equal to input.shape[ind:] + input.shape[:ind]. Supports input of float, double, cfloat and cdouble data types. Note If input is not invertible or does not satisfy the requirement prod(input.shape[ind:]) == prod(input.shape[:ind]), then a RuntimeError will be thrown. Note When input is a 2-dimensional tensor and ind=1, this function computes the (multiplicative) inverse of input, equivalent to calling torch.inverse(). Parameters
input (Tensor) – A tensor to invert. Its shape must satisfy prod(input.shape[:ind]) == prod(input.shape[ind:]).
ind (int) – A positive integer that describes the inverse tensor equation. See torch.tensordot() for details. Default: 2. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.eye(4 * 6).reshape((4, 6, 8, 3))
>>> ainv = torch.linalg.tensorinv(a, ind=2)
>>> ainv.shape
torch.Size([8, 3, 4, 6])
>>> b = torch.randn(4, 6)
>>> torch.allclose(torch.tensordot(ainv, b), torch.linalg.tensorsolve(a, b))
True
>>> a = torch.randn(4, 4)
>>> a_tensorinv = torch.linalg.tensorinv(a, ind=1)
>>> a_inv = torch.inverse(a)
>>> torch.allclose(a_tensorinv, a_inv)
True | torch.linalg#torch.linalg.tensorinv |
torch.linalg.tensorsolve(input, other, dims=None, *, out=None) → Tensor
Computes a tensor x such that tensordot(input, x, dims=x.ndim) = other. The resulting tensor x has shape input.shape[other.ndim:]. Supports real-valued and complex-valued inputs. Note If input does not satisfy the requirement prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim]) after (optionally) moving the dimensions using dims, then a RuntimeError will be thrown. Parameters
input (Tensor) – “left-hand-side” tensor, it must satisfy the requirement prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim]).
other (Tensor) – “right-hand-side” tensor of shape input.shape[:other.ndim].
dims (Tuple[int]) – dimensions of input to be moved before the computation. Equivalent to calling input = movedim(input, dims, range(len(dims) * -1, 0)). If None (default), no dimensions are moved. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4))
>>> b = torch.randn(2 * 3, 4)
>>> x = torch.linalg.tensorsolve(a, b)
>>> x.shape
torch.Size([2, 3, 4])
>>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b)
True
>>> a = torch.randn(6, 4, 4, 3, 2)
>>> b = torch.randn(4, 3, 2)
>>> x = torch.linalg.tensorsolve(a, b, dims=(0, 2))
>>> x.shape
torch.Size([6, 4])
>>> a = a.permute(1, 3, 4, 0, 2)
>>> a.shape[b.ndim:]
torch.Size([6, 4])
>>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b, atol=1e-6)
True | torch.linalg#torch.linalg.tensorsolve |
torch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. That is, the values are: (start, start + (end - start)/(steps - 1), ..., start + (steps - 2)*(end - start)/(steps - 1), end)
Warning Not providing a value for steps is deprecated. For backwards compatibility, not providing a value for steps will create a tensor with 100 elements. Note that this behavior is not reflected in the documented function signature and should not be relied on. In a future PyTorch release, failing to provide a value for steps will throw a runtime error. Parameters
start (float) – the starting value for the set of points
end (float) – the ending value for the set of points
steps (int) – size of the constructed tensor Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.linspace(3, 10, steps=5)
tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])
>>> torch.linspace(-10, 10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=1)
tensor([-10.]) | torch.generated.torch.linspace#torch.linspace |
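The spacing formula above can be checked by hand; note in particular that both endpoints are included exactly:

```python
import torch

start, end, steps = 3.0, 10.0, 5
t = torch.linspace(start, end, steps)

step = (end - start) / (steps - 1)
expected = torch.tensor([start + i * step for i in range(steps)])
assert torch.allclose(t, expected)

assert t[0] == start   # inclusive of start
assert t[-1] == end    # inclusive of end
```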
torch.load(f, map_location=None, pickle_module=pickle, **pickle_load_args) [source]
Loads an object saved with torch.save() from a file. torch.load() uses Python’s unpickling facilities but treats storages, which underlie tensors, specially. They are first deserialized on the CPU and are then moved to the device they were saved from. If this fails (e.g. because the run time system doesn’t have certain devices), an exception is raised. However, storages can be dynamically remapped to an alternative set of devices using the map_location argument. If map_location is a callable, it will be called once for each serialized storage with two arguments: storage and location. The storage argument will be the initial deserialization of the storage, residing on the CPU. Each serialized storage has a location tag associated with it which identifies the device it was saved from, and this tag is the second argument passed to map_location. The builtin location tags are 'cpu' for CPU tensors and 'cuda:device_id' (e.g. 'cuda:2') for CUDA tensors. map_location should return either None or a storage. If map_location returns a storage, it will be used as the final deserialized object, already moved to the right device. Otherwise, torch.load() will fall back to the default behavior, as if map_location wasn’t specified. If map_location is a torch.device object or a string containing a device tag, it indicates the location where all tensors should be loaded. Otherwise, if map_location is a dict, it will be used to remap location tags appearing in the file (keys), to ones that specify where to put the storages (values). User extensions can register their own location tags and tagging and deserialization methods using torch.serialization.register_package(). Parameters
f – a file-like object (has to implement read(), readline(), tell(), and seek()), or a string or os.PathLike object containing a file name
map_location – a function, torch.device, string or a dict specifying how to remap storage locations
pickle_module – module used for unpickling metadata and objects (has to match the pickle_module used to serialize file)
pickle_load_args – (Python 3 only) optional keyword arguments passed over to pickle_module.load() and pickle_module.Unpickler(), e.g., errors=.... Warning torch.load() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never load data that could have come from an untrusted source, or that could have been tampered with. Only load data you trust. Note When you call torch.load() on a file which contains GPU tensors, those tensors will be loaded to GPU by default. You can call torch.load(.., map_location='cpu') and then load_state_dict() to avoid GPU RAM surge when loading a model checkpoint. Note By default, we decode byte strings as utf-8. This is to avoid a common error case UnicodeDecodeError: 'ascii' codec can't decode byte 0x... when loading files saved by Python 2 in Python 3. If this default is incorrect, you may use an extra encoding keyword argument to specify how these objects should be loaded, e.g., encoding='latin1' decodes them to strings using latin1 encoding, and encoding='bytes' keeps them as byte arrays which can be decoded later with byte_array.decode(...). Example >>> torch.load('tensors.pt')
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
# Load tensor from io.BytesIO object
>>> with open('tensor.pt', 'rb') as f:
... buffer = io.BytesIO(f.read())
>>> torch.load(buffer)
# Load a module with 'ascii' encoding for unpickling
>>> torch.load('module.pt', encoding='ascii') | torch.generated.torch.load#torch.load |
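The io.BytesIO pattern above can be sketched end to end; this minimal round-trip (assuming only that torch is installed) saves a tensor into an in-memory buffer and reloads it onto the CPU:

```python
import io
import torch

# Save a tensor into an in-memory buffer instead of a file on disk.
t = torch.arange(4.0)
buffer = io.BytesIO()
torch.save(t, buffer)

# Rewind the buffer, then load with map_location forcing CPU placement.
buffer.seek(0)
loaded = torch.load(buffer, map_location='cpu')
print(torch.equal(t, loaded))  # True
```

The same map_location argument works identically when f is a file path, so this pattern is an easy way to test serialization without touching the filesystem.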
torch.lobpcg(A, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None, tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None) [source]
Find the k largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem using matrix-free LOBPCG methods. This function is a front-end to the following LOBPCG algorithms selectable via the method argument: method=”basic” - the LOBPCG method introduced by Andrew Knyazev, see [Knyazev2001]. A less robust method; it may fail when Cholesky is applied to singular input. method=”ortho” - the LOBPCG method with orthogonal basis selection [StathopoulosEtal2002]. A robust method. Supported inputs are dense, sparse, and batches of dense matrices. Note In general, the basic method spends the least time per iteration. However, the robust method converges much faster and is more stable. So the basic method is generally not recommended, though there are cases where it may be preferred. Warning The backward method does not support sparse and complex inputs. It works only when B is not provided (i.e. B == None). We are actively working on extensions, and the details of the algorithms are going to be published promptly. Warning While it is assumed that A is symmetric, A.grad is not. To make sure that A.grad is symmetric, so that A - t * A.grad is symmetric in first-order optimization routines, prior to running lobpcg we apply the following symmetrization map: A -> (A + A.t()) / 2. The map is performed only when A requires gradients. Parameters
A (Tensor) – the input tensor of size (*, m, m)
B (Tensor, optional) – the input tensor of size (*, m, m). When not specified, B is interpreted as the identity matrix.
X (tensor, optional) – the input tensor of size (*, m, n) where k <= n <= m. When specified, it is used as an initial approximation of eigenvectors. X must be a dense tensor.
iK (tensor, optional) – the input tensor of size (*, m, m). When specified, it will be used as a preconditioner.
k (integer, optional) – the number of requested eigenpairs. Default is the number of X columns (when specified) or 1.
n (integer, optional) – if X is not specified then n specifies the size of the generated random approximation of eigenvectors. Default value for n is k. If X is specified, the value of n (when specified) must be the number of X columns.
tol (float, optional) – residual tolerance for the stopping criterion. Default is feps ** 0.5 where feps is the smallest non-zero floating-point number of the data type of the input tensor A.
largest (bool, optional) – when True, solve the eigenproblem for the largest eigenvalues. Otherwise, solve the eigenproblem for the smallest eigenvalues. Default is True.
method (str, optional) – select LOBPCG method. See the description of the function above. Default is “ortho”.
niter (int, optional) – maximum number of iterations. When reached, the iteration process is hard-stopped and the current approximation of eigenpairs is returned. To iterate until the convergence criteria are met, use -1.
tracker (callable, optional) –
a function for tracing the iteration process. When specified, it is called at each iteration step with the LOBPCG instance as an argument. The LOBPCG instance holds the full state of the iteration process in the following attributes:
iparams, fparams, bparams - dictionaries of integer, float, and boolean valued input parameters, respectively
ivars, fvars, bvars, tvars - dictionaries of integer, float, boolean, and Tensor valued iteration variables, respectively
A, B, iK - input Tensor arguments
E, X, S, R - iteration Tensor variables
For instance:
ivars["istep"] - the current iteration step
X - the current approximation of eigenvectors
E - the current approximation of eigenvalues
R - the current residual
ivars["converged_count"] - the current number of converged eigenpairs
tvars["rerr"] - the current state of convergence criteria
Note that when tracker stores Tensor objects from the LOBPCG instance, it must make copies of these. If tracker sets bvars["force_stop"] = True, the iteration process will be hard-stopped.
ortho_iparams, ortho_fparams, ortho_bparams – various parameters to the LOBPCG algorithm when using method=”ortho”. Returns
tensor of eigenvalues of size (*, k) X (Tensor): tensor of eigenvectors of size (*, m, k) Return type
E (Tensor) References [Knyazev2001] Andrew V. Knyazev. (2001) Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM J. Sci. Comput., 23(2), 517-541. (25 pages) https://epubs.siam.org/doi/abs/10.1137/S1064827500366124 [StathopoulosEtal2002] Andreas Stathopoulos and Kesheng Wu. (2002) A Block Orthogonalization Procedure with Constant Synchronization Requirements. SIAM J. Sci. Comput., 23(6), 2165-2182. (18 pages) https://epubs.siam.org/doi/10.1137/S1064827500370883 [DuerschEtal2018] Jed A. Duersch, Meiyue Shao, Chao Yang, Ming Gu. (2018) A Robust and Efficient Implementation of LOBPCG. SIAM J. Sci. Comput., 40(5), C655-C676. (22 pages) https://epubs.siam.org/doi/abs/10.1137/17M1129830 | torch.generated.torch.lobpcg#torch.lobpcg |
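As a minimal sketch of basic usage (not from the original docs), consider a small dense symmetric positive definite matrix whose eigenvalues are known, so the result is easy to check:

```python
import torch

# A 10x10 symmetric positive definite matrix with eigenvalues 1, 2, ..., 10.
A = torch.diag(torch.arange(1.0, 11.0))

# Request the k=2 largest eigenpairs (lobpcg requires m > 3*k).
E, X = torch.lobpcg(A, k=2, largest=True)

print(E)  # approximately tensor([10., 9.])
```

The returned eigenvectors X have orthonormal columns, and the residual A @ X - X * E should be small up to the tol stopping criterion.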
torch.log(input, *, out=None) → Tensor
Returns a new tensor with the natural logarithm of the elements of input. y_{i} = \log_{e} (x_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(5)
>>> a
tensor([-0.7168, -0.5471, -0.8933, -1.4428, -0.1190])
>>> torch.log(a)
tensor([ nan, nan, nan, nan, nan]) | torch.generated.torch.log#torch.log |
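Note that the example above samples from torch.randn, so negative entries yield nan (the natural logarithm is undefined for negative reals). As an illustrative sketch with positive inputs, where the result is well defined:

```python
import math
import torch

# log(1) = 0, log(e) = 1, log(e^2) = 2
x = torch.tensor([1.0, math.e, math.e ** 2])
y = torch.log(x)
print(y)  # approximately tensor([0., 1., 2.])
```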
torch.log10(input, *, out=None) → Tensor
Returns a new tensor with the logarithm to the base 10 of the elements of input. y_{i} = \log_{10} (x_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.rand(5)
>>> a
tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])
>>> torch.log10(a)
tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476]) | torch.generated.torch.log10#torch.log10 |
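Since log10(x) = log(x) / log(10), exact powers of ten map to integers, which makes for a quick sanity-check sketch:

```python
import torch

# Base-10 logarithm of exact powers of ten.
x = torch.tensor([1.0, 10.0, 100.0, 1000.0])
y = torch.log10(x)
print(y)  # approximately tensor([0., 1., 2., 3.])
```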