doc_content | doc_id
|---|---|
torch.nonzero(input, *, out=None, as_tuple=False) → LongTensor or tuple of LongTensors
Note torch.nonzero(..., as_tuple=False) (default) returns a 2-D tensor where each row is the index for a nonzero value. torch.nonzero(..., as_tuple=True) returns a tuple of 1-D index tensors, allowing for advanced indexing, so x[x... | torch.generated.torch.nonzero#torch.nonzero |
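A minimal sketch of the two return modes; the small input tensor below is assumed for illustration:
>>> x = torch.tensor([[0.0, 1.0], [2.0, 0.0]])
>>> torch.nonzero(x)                      # 2-D tensor: one index row per nonzero value
tensor([[0, 1],
        [1, 0]])
>>> torch.nonzero(x, as_tuple=True)       # tuple of 1-D index tensors
(tensor([0, 1]), tensor([1, 0]))
>>> x[torch.nonzero(x, as_tuple=True)]    # advanced indexing recovers the nonzero values
tensor([1., 2.])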
torch.norm(input, p='fro', dim=None, keepdim=False, out=None, dtype=None) [source]
Returns the matrix norm or vector norm of a given tensor. Warning torch.norm is deprecated and may be removed in a future PyTorch release. Use torch.linalg.norm() instead, but note that torch.linalg.norm() has a different signature an... | torch.generated.torch.norm#torch.norm |
torch.normal(mean, std, *, generator=None, out=None) → Tensor
Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given. The mean is a tensor with the mean of each output element’s normal distribution The std is a tensor with the standard deviation of each... | torch.generated.torch.normal#torch.normal |
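A short sketch of the element-wise mean/std form; the sampled values are random, so no output is shown:
>>> mean = torch.arange(1., 5.)           # per-element means [1., 2., 3., 4.]
>>> std = torch.full((4,), 0.1)           # per-element standard deviations
>>> torch.normal(mean, std)               # one sample per (mean, std) pair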
torch.not_equal(input, other, *, out=None) → Tensor
Alias for torch.ne(). | torch.generated.torch.not_equal#torch.not_equal |
class torch.no_grad [source]
Context-manager that disables gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result... | torch.generated.torch.no_grad#torch.no_grad |
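A minimal usage sketch; results computed under the context do not track gradients:
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False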
torch.numel(input) → int
Returns the total number of elements in the input tensor. Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 2, 3, 4, 5)
>>> torch.numel(a)
120
>>> a = torch.zeros(4,4)
>>> torch.numel(a)
16 | torch.generated.torch.numel#torch.numel |
torch.ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size. Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a varia... | torch.generated.torch.ones#torch.ones |
torch.ones_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor filled with the scalar value 1, with the same size as input. torch.ones_like(input) is equivalent to torch.ones(input.size(), dtype=input.dtype, layout=input.layout, devi... | torch.generated.torch.ones_like#torch.ones_like |
torch.onnx Example: End-to-end AlexNet from PyTorch to ONNX · Tracing vs Scripting · Write PyTorch model in Torch way · Using dictionaries to handle Named Arguments as model inputs · Indexing · Getter · Setter · TorchVision support · Limitations · Supported operators · Adding support for operators · ATen operators · Non-ATen operators ... | torch.onnx |
torch.onnx.export(model, args, f, export_params=True, verbose=False, training=<TrainingMode.EVAL: 0>, input_names=None, output_names=None, aten=False, export_raw_ir=False, operator_export_type=None, opset_version=None, _retain_param_name=True, do_constant_folding=True, example_outputs=None, strip_doc_string=True, dynam... | torch.onnx#torch.onnx.export |
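A minimal export sketch, assuming a ready `model` and a representative `dummy_input` (both hypothetical here, as is the input shape):
>>> dummy_input = torch.randn(1, 3, 224, 224)          # example input shape, assumed
>>> torch.onnx.export(model, dummy_input, "model.onnx",
...                   input_names=["input"], output_names=["output"],
...                   opset_version=11)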
torch.onnx.export_to_pretty_string(*args, **kwargs) [source] | torch.onnx#torch.onnx.export_to_pretty_string |
torch.onnx.is_in_onnx_export() [source]
Check whether it’s in the middle of the ONNX export. This function returns True in the middle of torch.onnx.export(). torch.onnx.export should be executed with a single thread. | torch.onnx#torch.onnx.is_in_onnx_export |
torch.onnx.operators.shape_as_tensor(x) [source] | torch.onnx#torch.onnx.operators.shape_as_tensor |
torch.onnx.register_custom_op_symbolic(symbolic_name, symbolic_fn, opset_version) [source] | torch.onnx#torch.onnx.register_custom_op_symbolic |
torch.onnx.select_model_mode_for_export(model, mode) [source]
A context manager to temporarily set the training mode of ‘model’ to ‘mode’, resetting it when we exit the with-block. A no-op if mode is None. Changed in version 1.6: this replaces set_training | torch.onnx#torch.onnx.select_model_mode_for_export |
torch.optim torch.optim is a package implementing various optimization algorithms. Most commonly used methods are already supported, and the interface is general enough, so that more sophisticated ones can be also easily integrated in the future. How to use an optimizer To use torch.optim you have to construct an optim... | torch.optim |
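A minimal training-loop sketch of the usage pattern described above, assuming a `model`, a `loss_fn`, and an iterable `dataset` of (input, target) pairs (all hypothetical):
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
>>> for input, target in dataset:
...     optimizer.zero_grad()             # clear gradients from the previous step
...     loss = loss_fn(model(input), target)
...     loss.backward()                   # compute new gradients
...     optimizer.step()                  # update parameters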
class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0) [source]
Implements Adadelta algorithm. It has been proposed in ADADELTA: An Adaptive Learning Rate Method. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
rho (float, optional) – c... | torch.optim#torch.optim.Adadelta |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adadelta.step |
class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10) [source]
Implements Adagrad algorithm. It has been proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optim... | torch.optim#torch.optim.Adagrad |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adagrad.step |
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False) [source]
Implements Adam algorithm. It has been proposed in Adam: A Method for Stochastic Optimization. The implementation of the L2 penalty follows changes proposed in Decoupled Weight Decay Regularization. Parame... | torch.optim#torch.optim.Adam |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adam.step |
class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0) [source]
Implements Adamax algorithm (a variant of Adam based on infinity norm). It has been proposed in Adam: A Method for Stochastic Optimization. Parameters
params (iterable) – iterable of parameters to optimize or dicts ... | torch.optim#torch.optim.Adamax |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Adamax.step |
class torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False) [source]
Implements AdamW algorithm. The original Adam algorithm was proposed in Adam: A Method for Stochastic Optimization. The AdamW variant was proposed in Decoupled Weight Decay Regularization. Parameters
... | torch.optim#torch.optim.AdamW |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.AdamW.step |
class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0) [source]
Implements Averaged Stochastic Gradient Descent. It has been proposed in Acceleration of stochastic approximation by averaging. Parameters
params (iterable) – iterable of parameters to optimize or dicts defini... | torch.optim#torch.optim.ASGD |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.ASGD.step |
class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None) [source]
Implements L-BFGS algorithm, heavily inspired by minFunc <https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html>. Warning This optimizer doesn’t support per-... | torch.optim#torch.optim.LBFGS |
step(closure) [source]
Performs a single optimization step. Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.LBFGS.step |
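Because LBFGS may reevaluate the objective several times per step, the closure is required; a sketch assuming `model`, `loss_fn`, `input`, and `target` from an enclosing loop:
>>> def closure():
...     optimizer.zero_grad()
...     loss = loss_fn(model(input), target)
...     loss.backward()
...     return loss
>>> optimizer.step(closure)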
class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False) [source]
Set the learning rate of each parameter group using a cosine annealing schedule, where $\eta_{max}$ is set to the initial lr and $T_{cur}$ is the number of epochs since the last restart in SGDR: ... | torch.optim#torch.optim.lr_scheduler.CosineAnnealingLR |
class torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False) [source]
Set the learning rate of each parameter group using a cosine annealing schedule, where $\eta_{max}$ is set to the initial lr, $T_{cur}$ is the number of epochs since the last re... | torch.optim#torch.optim.lr_scheduler.CosineAnnealingWarmRestarts |
step(epoch=None) [source]
Step could be called after every batch update Example >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)
>>> iters = len(dataloader)
>>> for epoch in range(20):
>>> for i, sample in enumerate(dataloader):
>>> inputs, labels = sample['inputs'], sample['labels']
>>... | torch.optim#torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.step |
class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group a... | torch.optim#torch.optim.lr_scheduler.CyclicLR |
get_lr() [source]
Calculates the learning rate at batch index. This function treats self.last_epoch as the last batch index. If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum. | torch.optim#torch.optim.lr_scheduler.CyclicLR.get_lr |
class torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma every epoch. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
gamma (float) – Multiplicative factor of le... | torch.optim#torch.optim.lr_scheduler.ExponentialLR |
class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) [source]
Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda (function or lis... | torch.optim#torch.optim.lr_scheduler.LambdaLR |
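A small sketch, assuming an already-constructed `optimizer`; the lambda returns a multiplicative factor of the initial lr for each epoch:
>>> lambda1 = lambda epoch: 0.95 ** epoch
>>> scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1)
>>> for epoch in range(100):
...     train_one_epoch()                 # hypothetical training function
...     scheduler.step()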
load_state_dict(state_dict) [source]
Loads the schedulers state. When saving or loading the scheduler, please make sure to also save or load the state of the optimizer. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict(). | torch.optim#torch.optim.lr_scheduler.LambdaLR.load_state_dict |
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. When saving or loading the scheduler, pl... | torch.optim#torch.optim.lr_scheduler.LambdaLR.state_dict |
class torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1, verbose=False) [source]
Multiply the learning rate of each parameter group by the factor given in the specified function. When last_epoch=-1, sets initial lr as lr. Parameters
optimizer (Optimizer) – Wrapped optimizer.
lr_lambda... | torch.optim#torch.optim.lr_scheduler.MultiplicativeLR |
load_state_dict(state_dict) [source]
Loads the schedulers state. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict(). | torch.optim#torch.optim.lr_scheduler.MultiplicativeLR.load_state_dict |
state_dict() [source]
Returns the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas. | torch.optim#torch.optim.lr_scheduler.MultiplicativeLR.state_dict |
class torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma once the number of epoch reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate... | torch.optim#torch.optim.lr_scheduler.MultiStepLR |
class torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=-1, verbose=False) [source]
Sets ... | torch.optim#torch.optim.lr_scheduler.OneCycleLR |
class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False) [source]
Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor o... | torch.optim#torch.optim.lr_scheduler.ReduceLROnPlateau |
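Unlike most schedulers, step() takes the monitored metric; a sketch assuming a hypothetical `validate` helper and an existing `optimizer`:
>>> scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)
>>> for epoch in range(100):
...     train_one_epoch()                 # hypothetical
...     val_loss = validate(model)        # hypothetical validation metric
...     scheduler.step(val_loss)          # pass the metric being monitored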
class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False) [source]
Decays the learning rate of each parameter group by gamma every step_size epochs. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When las... | torch.optim#torch.optim.lr_scheduler.StepLR |
class torch.optim.Optimizer(params, defaults) [source]
Base class for all optimizers. Warning Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries. ... | torch.optim#torch.optim.Optimizer |
add_param_group(param_group) [source]
Add a param group to the Optimizer s param_groups. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses. Parameters
param_group (dict) – Specifies what Tensors should be optimized al... | torch.optim#torch.optim.Optimizer.add_param_group |
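A fine-tuning sketch; the `backbone`/`classifier` submodule names are assumptions for illustration:
>>> optimizer = torch.optim.SGD(model.backbone.parameters(), lr=0.01)
>>> # later, once the previously frozen head should start training:
>>> optimizer.add_param_group({'params': model.classifier.parameters(), 'lr': 0.001})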
load_state_dict(state_dict) [source]
Loads the optimizer state. Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict(). | torch.optim#torch.optim.Optimizer.load_state_dict |
state_dict() [source]
Returns the state of the optimizer as a dict. It contains two entries:
state - a dict holding current optimization state. Its content
differs between optimizer classes. param_groups - a dict containing all parameter groups | torch.optim#torch.optim.Optimizer.state_dict |
step(closure) [source]
Performs a single optimization step (parameter update). Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers. Note Unless otherwise specified, this function should not modify the .grad field of the parameters. | torch.optim#torch.optim.Optimizer.step |
zero_grad(set_to_none=False) [source]
Sets the gradients of all optimized torch.Tensor s to zero. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For exam... | torch.optim#torch.optim.Optimizer.zero_grad |
class torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False) [source]
Implements RMSprop algorithm. Proposed by G. Hinton in his course. The centered version first appears in Generating Sequences With Recurrent Neural Networks. The implementation here takes the square ... | torch.optim#torch.optim.RMSprop |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.RMSprop.step |
class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50)) [source]
Implements the resilient backpropagation algorithm. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
etas (Tuple[flo... | torch.optim#torch.optim.Rprop |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.Rprop.step |
class torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False) [source]
Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning. Parameters
p... | torch.optim#torch.optim.SGD |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.SGD.step |
class torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08) [source]
Implements lazy version of Adam algorithm suitable for sparse tensors. In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters. Parameters
pa... | torch.optim#torch.optim.SparseAdam |
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | torch.optim#torch.optim.SparseAdam.step |
torch.orgqr(input, input2) → Tensor
Computes the orthogonal matrix Q of a QR factorization, from the (input, input2) tuple returned by torch.geqrf(). This directly calls the underlying LAPACK function ?orgqr. See LAPACK documentation for orgqr for further details. Parameters
input (Tensor) – the a from torch.geqr... | torch.generated.torch.orgqr#torch.orgqr |
torch.ormqr(input, input2, input3, left=True, transpose=False) → Tensor
Multiplies mat (given by input3) by the orthogonal Q matrix of the QR factorization formed by torch.geqrf() that is represented by (a, tau) (given by (input, input2)). This directly calls the underlying LAPACK function ?ormqr. See LAPACK document... | torch.generated.torch.ormqr#torch.ormqr |
torch.outer(input, vec2, *, out=None) → Tensor
Outer product of input and vec2. If input is a vector of size $n$ and vec2 is a vector of size $m$, then out must be a matrix of size $(n \times m)$. Note This function does not broadcast. Parameters
input (Tensor) – 1-D input vector
vec2 (Tensor) – 1-D input ... | torch.generated.torch.outer#torch.outer |
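A small worked example of the outer product:
>>> v1 = torch.arange(1., 5.)             # [1., 2., 3., 4.]
>>> v2 = torch.arange(1., 4.)             # [1., 2., 3.]
>>> torch.outer(v1, v2)
tensor([[ 1.,  2.,  3.],
        [ 2.,  4.,  6.],
        [ 3.,  6.,  9.],
        [ 4.,  8., 12.]])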
torch.overrides This module exposes various helper functions for the __torch_function__ protocol. See Extending torch for more detail on the __torch_function__ protocol. Functions
torch.overrides.get_ignored_functions() [source]
Return public functions that cannot be overridden by __torch_function__. Returns
A tu... | torch.overrides |
torch.overrides.get_ignored_functions() [source]
Return public functions that cannot be overridden by __torch_function__. Returns
A tuple of functions that are publicly available in the torch API but cannot be overridden with __torch_function__. Mostly this is because none of the arguments of these functions are te... | torch.overrides#torch.overrides.get_ignored_functions |
torch.overrides.get_overridable_functions() [source]
List functions that are overridable via __torch_function__ Returns
A dictionary that maps namespaces that contain overridable functions to functions in that namespace that can be overridden. Return type
Dict[Any, List[Callable]] | torch.overrides#torch.overrides.get_overridable_functions |
torch.overrides.get_testing_overrides() [source]
Return a dict containing dummy overrides for all overridable functions Returns
A dictionary that maps overridable functions in the PyTorch API to lambda functions that have the same signature as the real function and unconditionally return -1. These lambda functions ... | torch.overrides#torch.overrides.get_testing_overrides |
torch.overrides.handle_torch_function(public_api, relevant_args, *args, **kwargs) [source]
Implement a function with checks for __torch_function__ overrides. See torch::autograd::handle_torch_function for the equivalent of this function in the C++ implementation. Parameters
public_api (function) – Function expose... | torch.overrides#torch.overrides.handle_torch_function |
torch.overrides.has_torch_function()
Check for __torch_function__ implementations in the elements of an iterable. Considers exact Tensor s and Parameter s non-dispatchable. Parameters relevant_args (iterable) – Iterable of arguments to check for __torch_function__ methods. Returns
True if any of the el... | torch.overrides#torch.overrides.has_torch_function |
torch.overrides.is_tensor_like(inp) [source]
Returns True if the passed-in input is a Tensor-like. Currently, this occurs whenever there’s a __torch_function__ attribute on the type of the input. Examples A subclass of tensor is generally a Tensor-like. >>> class SubTensor(torch.Tensor): ...
>>> is_tensor_like(SubTen... | torch.overrides#torch.overrides.is_tensor_like |
torch.overrides.is_tensor_method_or_property(func) [source]
Returns True if the function passed in is a handler for a method or property belonging to torch.Tensor, as passed into __torch_function__. Note For properties, their __get__ method must be passed in. This may be needed, in particular, for the following rea... | torch.overrides#torch.overrides.is_tensor_method_or_property |
torch.overrides.wrap_torch_function(dispatcher) [source]
Wraps a given function with __torch_function__ -related functionality. Parameters
dispatcher (Callable) – A callable that returns an iterable of Tensor-likes passed into the function. Note This decorator may reduce the performance of your code. Generally, ... | torch.overrides#torch.overrides.wrap_torch_function |
torch.pca_lowrank(A, q=None, center=True, niter=2) [source]
Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix. This function returns a namedtuple (U, S, V) which is the nearly optimal approximation of a singular value decomposition of a centered matrix... | torch.generated.torch.pca_lowrank#torch.pca_lowrank |
torch.pinverse(input, rcond=1e-15) → Tensor
Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor. Please look at Moore-Penrose inverse for more details Note torch.pinverse() is deprecated. Please use torch.linalg.pinv() instead which includes new parameters hermitian and out. Note... | torch.generated.torch.pinverse#torch.pinverse |
torch.poisson(input, generator=None) → Tensor
Returns a tensor of the same size as input with each element sampled from a Poisson distribution with rate parameter given by the corresponding element in input, i.e., $\text{out}_i \sim \text{Poisson}(\text{input}_i)$
Parameters
input (Tensor) – the... | torch.generated.torch.poisson#torch.poisson |
torch.polar(abs, angle, *, out=None) → Tensor
Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle. $\text{out} = \text{abs} \cdot \cos(\text{angle}) + \text{abs} \cdot \sin(\text{angle}) \cdot j$ ... | torch.generated.torch.polar#torch.polar |
torch.polygamma(n, input, *, out=None) → Tensor
Computes the $n^{th}$ derivative of the digamma function on input. $n \geq 0$ is called the order of the polygamma function. $\psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x)$
Note This function is implemented only for nonnegative integers $n \geq 0$... | torch.generated.torch.polygamma#torch.polygamma |
torch.pow(input, exponent, *, out=None) → Tensor
Takes the power of each element in input with exponent and returns a tensor with the result. exponent can be either a single float number or a Tensor with the same number of elements as input. When exponent is a scalar value, the operation applied is: $\text{out}_i = x_i^{\text{exponent}}$ ... | torch.generated.torch.pow#torch.pow |
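Two short examples, one with a scalar exponent and one with a tensor exponent:
>>> a = torch.tensor([1., 2., 3., 4.])
>>> torch.pow(a, 2)
tensor([ 1.,  4.,  9., 16.])
>>> exp = torch.tensor([1., 2., 3., 4.])
>>> torch.pow(a, exp)                     # element-wise a[i] ** exp[i]
tensor([  1.,   4.,  27., 256.])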
torch.prod(input, *, dtype=None) → Tensor
Returns the product of all elements in the input tensor. Parameters
input (Tensor) – the input tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is per... | torch.generated.torch.prod#torch.prod |
torch.promote_types(type1, type2) → dtype
Returns the torch.dtype with the smallest size and scalar kind that is not smaller nor of lower kind than either type1 or type2. See type promotion documentation for more information on the type promotion logic. Parameters
type1 (torch.dtype) –
type2 (torch.dtype) – ... | torch.generated.torch.promote_types#torch.promote_types |
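Two quick examples of the promotion rule:
>>> torch.promote_types(torch.int32, torch.float32)
torch.float32
>>> torch.promote_types(torch.uint8, torch.long)
torch.int64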
torch.qr(input, some=True, *, out=None) -> (Tensor, Tensor)
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that $\text{input} = Q R$ with $Q$ being an orthogonal matrix or batch of orthogonal matrices and $R$ being an upper triangular mat... | torch.generated.torch.qr#torch.qr |
torch.quantile(input, q) → Tensor
Returns the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points. Parameters
input (Tensor) – the input tensor.
q (float or Tensor) – a scalar or 1D tensor of quantile values in the range [0, 1] E... | torch.generated.torch.quantile#torch.quantile |
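A small example showing the linear interpolation at the median:
>>> a = torch.tensor([0., 1., 2., 3.])
>>> torch.quantile(a, 0.5)                # the 0.5 quantile falls between 1. and 2.
tensor(1.5000)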
torch.quantization This module implements the functions you call directly to convert your model from FP32 to quantized form. For example the prepare() is used in post training quantization to prepares your model for the calibration step and convert() actually converts the weights to int8 and replaces the operations wit... | torch.quantization |
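A hedged sketch of the eager-mode post-training workflow described above, assuming a float `model` in eval mode and a hypothetical `calibration_loader`:
>>> model = model.eval()
>>> model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
>>> prepared = torch.quantization.prepare(model)          # insert observers
>>> for data, _ in calibration_loader:                     # calibration pass (assumed loader)
...     prepared(data)
>>> quantized = torch.quantization.convert(prepared)       # swap modules to quantized versions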
torch.quantization.add_observer_(module, qconfig_propagation_list=None, non_leaf_module_list=None, device=None, custom_module_class_mapping=None) [source]
Add observers for the leaf children of the module. This function inserts an observer module into every leaf child module that has a valid qconfig attribute. Parameters
mod... | torch.quantization#torch.quantization.add_observer_ |
torch.quantization.add_quant_dequant(module) [source]
Wrap the leaf child modules in QuantWrapper if they have a valid qconfig. Note that this function will modify the children of module in place, and it can also return a new module which wraps the input module. Parameters
module – input module with qconfig attribute... | torch.quantization#torch.quantization.add_quant_dequant |
torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, convert_custom_config_dict=None) [source]
Converts submodules in the input module to a different module according to mapping by calling the from_float method on the target module class, and removes qconfig at the end if remove_qconfig is set ... | torch.quantization#torch.quantization.convert |
torch.quantization.default_eval_fn(model, calib_data) [source]
Default evaluation function; takes a torch.utils.data.Dataset or a list of input Tensors and runs the model on the dataset | torch.quantization#torch.quantization.default_eval_fn |
class torch.quantization.DeQuantStub [source]
Dequantize stub module. Before calibration, this is the same as identity; it will be swapped for nnq.DeQuantize in convert. | torch.quantization#torch.quantization.DeQuantStub |
class torch.quantization.FakeQuantize(observer=<class 'torch.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs) [source]
Simulate the quantize and dequantize operations in training time. The output of this module is given by x_out = (clamp(round(x/scale + zero_point), ... | torch.quantization#torch.quantization.FakeQuantize |
torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=<function fuse_known_modules>, fuse_custom_config_dict=None) [source]
Fuses a list of modules into a single module. Fuses only the following sequences of modules: conv, bn; conv, bn, relu; conv, relu; linear, relu; bn, relu. All other sequence... | torch.quantization#torch.quantization.fuse_modules |
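A short sketch; the module names ('conv1', 'bn1', 'relu1') are assumptions about the model's attribute names:
>>> fused_model = torch.quantization.fuse_modules(
...     model, [['conv1', 'bn1', 'relu1']])              # one fusable sequence of names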
torch.quantization.get_observer_dict(mod, target_dict, prefix='') [source]
Traverse the modules and save all observers into a dict. This is mainly used for quantization accuracy debugging. Parameters mod – the top module we want to save all observers for prefix – the prefix for the current module target_dict – the dictio... | torch.quantization#torch.quantization.get_observer_dict |
class torch.quantization.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False) [source]
The module records the running histogram of tensor values along with min/max values. calculate_qparams will calculate scale and zero_point. Parameters
bins – N... | torch.quantization#torch.quantization.HistogramObserver |
class torch.quantization.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running min and max values. This observer uses the tensor min/max statistics to compute the q... | torch.quantization#torch.quantization.MinMaxObserver |
class torch.quantization.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the moving average of the min and max values. This observer... | torch.quantization#torch.quantization.MovingAverageMinMaxObserver |
class torch.quantization.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max v... | torch.quantization#torch.quantization.MovingAveragePerChannelMinMaxObserver |
class torch.quantization.NoopObserver(dtype=torch.float16, custom_op_name='') [source]
Observer that doesn’t do anything and just passes its configuration to the quantized module’s .from_float(). Primarily used for quantization to float16 which doesn’t require determining ranges. Parameters
dtype – Quantized data... | torch.quantization#torch.quantization.NoopObserver |
class torch.quantization.ObserverBase(dtype) [source]
Base observer Module. Any observer implementation should derive from this class. Concrete observers should follow the same API. In forward, they will update the statistics of the observed Tensor. And they should provide a calculate_qparams function that computes t... | torch.quantization#torch.quantization.ObserverBase |
classmethod with_args(**kwargs)
Wrapper that allows creation of class factories. This can be useful when there is a need to create classes with the same constructor arguments, but different instances. Example: >>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>... | torch.quantization#torch.quantization.ObserverBase.with_args |
class torch.quantization.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None) [source]
Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor m... | torch.quantization#torch.quantization.PerChannelMinMaxObserver |
torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training. Quantization configuration should be assigned preemptively to individual submodules ... | torch.quantization#torch.quantization.prepare |