classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, include_last_offset=False) [source] Creates an EmbeddingBag instance from a given 2-dimensional FloatTensor. Parameters embeddings (Tensor) – FloatTensor containing weights for the EmbeddingBag. The first dimension is passed to EmbeddingBag as ‘num_embeddings’, the second as ‘embedding_dim’. freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embeddingbag.weight.requires_grad = False. Default: True max_norm (float, optional) – See module initialization documentation. Default: None norm_type (float, optional) – See module initialization documentation. Default: 2.0. scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default: False. mode (string, optional) – See module initialization documentation. Default: "mean" sparse (bool, optional) – See module initialization documentation. Default: False. include_last_offset (bool, optional) – See module initialization documentation. Default: False. Examples: >>> # FloatTensor containing pretrained weights >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]]) >>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight) >>> # Compute the mean embedding of the bag containing indices 1 and 0 >>> input = torch.LongTensor([[1, 0]]) >>> embeddingbag(input) tensor([[ 2.5000, 3.7000, 4.6500]])
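A brief supplementary sketch (the weight tensor and variable names are illustrative) of how freeze maps onto requires_grad; by default the loaded weights are frozen, while freeze=False leaves them trainable:
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> frozen = nn.EmbeddingBag.from_pretrained(weight)              # freeze=True is the default
>>> frozen.weight.requires_grad
False
>>> trainable = nn.EmbeddingBag.from_pretrained(weight, freeze=False)
>>> trainable.weight.requires_grad
True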
torch.generated.torch.nn.embeddingbag#torch.nn.EmbeddingBag.from_pretrained
class torch.nn.Flatten(start_dim=1, end_dim=-1) [source] Flattens a contiguous range of dims into a tensor. For use with Sequential. Shape: Input: (N,∗dims)(N, *dims) Output: (N,∏∗dims)(N, \prod *dims) (for the default case). Parameters start_dim – first dim to flatten (default = 1). end_dim – last dim to flatten (default = -1). Examples:: >>> input = torch.randn(32, 1, 5, 5) >>> m = nn.Sequential( >>> nn.Conv2d(1, 32, 5, 1, 1), >>> nn.Flatten() >>> ) >>> output = m(input) >>> output.size() torch.Size([32, 288]) add_module(name, module) Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters name (string) – name of the child module. The child module can be accessed from this module using the given name module (Module) – child module to be added to the module. apply(fn) Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters fn (Module -> None) – function to be applied to each submodule Returns self Return type Module Example: >>> @torch.no_grad() >>> def init_weights(m): >>> print(m) >>> if type(m) == nn.Linear: >>> m.weight.fill_(1.0) >>> print(m.weight) >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) >>> net.apply(init_weights) Linear(in_features=2, out_features=2, bias=True) Parameter containing: tensor([[ 1., 1.], [ 1., 1.]]) Linear(in_features=2, out_features=2, bias=True) Parameter containing: tensor([[ 1., 1.], [ 1., 1.]]) Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) bfloat16() Casts all floating point parameters and buffers to bfloat16 datatype. Returns self Return type Module buffers(recurse=True) Returns an iterator over module buffers. Parameters recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields torch.Tensor – module buffer Example: >>> for buf in model.buffers(): >>> print(type(buf), buf.size()) <class 'torch.Tensor'> (20L,) <class 'torch.Tensor'> (20L, 1L, 5L, 5L) children() Returns an iterator over immediate children modules. Yields Module – a child module cpu() Moves all model parameters and buffers to the CPU. Returns self Return type Module cuda(device=None) Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized. Parameters device (int, optional) – if specified, all parameters will be copied to that device Returns self Return type Module double() Casts all floating point parameters and buffers to double datatype. Returns self Return type Module eval() Sets the module in evaluation mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent with self.train(False). Returns self Return type Module float() Casts all floating point parameters and buffers to float datatype. Returns self Return type Module half() Casts all floating point parameters and buffers to half datatype. 
Returns self Return type Module load_state_dict(state_dict, strict=True) Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters state_dict (dict) – a dict containing parameters and persistent buffers. strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True Returns missing_keys is a list of str containing the missing keys unexpected_keys is a list of str containing the unexpected keys Return type NamedTuple with missing_keys and unexpected_keys fields modules() Returns an iterator over all modules in the network. Yields Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.modules()): print(idx, '->', m) 0 -> Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) 1 -> Linear(in_features=2, out_features=2, bias=True) named_buffers(prefix='', recurse=True) Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters prefix (str) – prefix to prepend to all buffer names. recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields (string, torch.Tensor) – Tuple containing the name and buffer Example: >>> for name, buf in self.named_buffers(): >>> if name in ['running_var']: >>> print(buf.size()) named_children() Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children(): >>> if name in ['conv4', 'conv5']: >>> print(module) named_modules(memo=None, prefix='') Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.named_modules()): print(idx, '->', m) 0 -> ('', Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) )) 1 -> ('0', Linear(in_features=2, out_features=2, bias=True)) named_parameters(prefix='', recurse=True) Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters prefix (str) – prefix to prepend to all parameter names. recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields (string, Parameter) – Tuple containing the name and parameter Example: >>> for name, param in self.named_parameters(): >>> if name in ['bias']: >>> print(param.size()) parameters(recurse=True) Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters recurse (bool) – if True, then yields parameters of this module and all submodules. 
Otherwise, yields only parameters that are direct members of this module. Yields Parameter – module parameter Example: >>> for param in model.parameters(): >>> print(type(param), param.size()) <class 'torch.Tensor'> (20L,) <class 'torch.Tensor'> (20L, 1L, 5L, 5L) register_backward_hook(hook) Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle register_buffer(name, tensor, persistent=True) Adds a buffer to the module. This is typically used to register a buffer that should not to be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict. Buffers can be accessed as attributes using given names. Parameters name (string) – name of the buffer. The buffer can be accessed from this module using the given name tensor (Tensor) – buffer to be registered. persistent (bool) – whether the buffer is part of this module’s state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features)) register_forward_hook(hook) Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle register_forward_pre_hook(hook) Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. User can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned(unless that value is already a tuple). Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle register_full_backward_hook(hook) Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. 
grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle register_parameter(name, param) Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters name (string) – name of the parameter. The parameter can be accessed from this module using the given name param (Parameter) – parameter to be added to the module. requires_grad_(requires_grad=True) Change if autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parameters requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True. Returns self Return type Module state_dict(destination=None, prefix='', keep_vars=False) Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns a dictionary containing a whole state of the module Return type dict Example: >>> module.state_dict().keys() ['bias', 'weight'] to(*args, **kwargs) Moves and/or casts the parameters and buffers. This can be called as to(device=None, dtype=None, non_blocking=False) to(dtype, non_blocking=False) to(tensor, non_blocking=False) to(memory_format=torch.channels_last) Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtype`s. In addition, this method will only cast the floating point or complex parameters and buffers to :attr:`dtype (if given). The integral parameters and buffers will be moved device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples. Note This method modifies the module in-place. 
Parameters device (torch.device) – the desired device of the parameters and buffers in this module dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument) Returns self Return type Module Examples: >>> linear = nn.Linear(2, 2) >>> linear.weight Parameter containing: tensor([[ 0.1913, -0.3420], [-0.5113, -0.2325]]) >>> linear.to(torch.double) Linear(in_features=2, out_features=2, bias=True) >>> linear.weight Parameter containing: tensor([[ 0.1913, -0.3420], [-0.5113, -0.2325]], dtype=torch.float64) >>> gpu1 = torch.device("cuda:1") >>> linear.to(gpu1, dtype=torch.half, non_blocking=True) Linear(in_features=2, out_features=2, bias=True) >>> linear.weight Parameter containing: tensor([[ 0.1914, -0.3420], [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1') >>> cpu = torch.device("cpu") >>> linear.to(cpu) Linear(in_features=2, out_features=2, bias=True) >>> linear.weight Parameter containing: tensor([[ 0.1914, -0.3420], [-0.5112, -0.2324]], dtype=torch.float16) >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble) >>> linear.weight Parameter containing: tensor([[ 0.3741+0.j, 0.2382+0.j], [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128) >>> linear(torch.ones(3, 2, dtype=torch.cdouble)) tensor([[0.6122+0.j, 0.1150+0.j], [0.6122+0.j, 0.1150+0.j], [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128) train(mode=True) Sets the module in training mode. This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True. Returns self Return type Module type(dst_type) Casts all parameters and buffers to dst_type. Parameters dst_type (type or string) – the desired type Returns self Return type Module xpu(device=None) Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized. Parameters device (int, optional) – if specified, all parameters will be copied to that device Returns self Return type Module zero_grad(set_to_none=False) Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
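Returning to Flatten itself, a short illustrative sketch of the start_dim and end_dim arguments (shapes chosen for demonstration only):
>>> input = torch.randn(32, 1, 5, 5)
>>> nn.Flatten()(input).size()                          # default: flatten everything after the batch dim
torch.Size([32, 25])
>>> nn.Flatten(start_dim=2, end_dim=3)(input).size()    # keep the channel dim, flatten only the spatial dims
torch.Size([32, 1, 25])
>>> nn.Flatten(start_dim=0)(input).size()               # flatten all dims, including the batch dim
torch.Size([800])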
torch.generated.torch.nn.flatten#torch.nn.Flatten
add_module(name, module) Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters name (string) – name of the child module. The child module can be accessed from this module using the given name module (Module) – child module to be added to the module.
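A minimal sketch (module and attribute names are illustrative) of registering children with add_module and then accessing them as attributes:
>>> net = nn.Sequential()
>>> net.add_module('fc1', nn.Linear(4, 8))   # register a child under the name 'fc1'
>>> net.add_module('act', nn.ReLU())
>>> net.fc1                                  # the child is now reachable as an attribute
Linear(in_features=4, out_features=8, bias=True)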
torch.generated.torch.nn.flatten#torch.nn.Flatten.add_module
apply(fn) Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters fn (Module -> None) – function to be applied to each submodule Returns self Return type Module Example: >>> @torch.no_grad() >>> def init_weights(m): >>> print(m) >>> if type(m) == nn.Linear: >>> m.weight.fill_(1.0) >>> print(m.weight) >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) >>> net.apply(init_weights) Linear(in_features=2, out_features=2, bias=True) Parameter containing: tensor([[ 1., 1.], [ 1., 1.]]) Linear(in_features=2, out_features=2, bias=True) Parameter containing: tensor([[ 1., 1.], [ 1., 1.]]) Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) )
torch.generated.torch.nn.flatten#torch.nn.Flatten.apply
bfloat16() Casts all floating point parameters and buffers to bfloat16 datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.bfloat16
buffers(recurse=True) Returns an iterator over module buffers. Parameters recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields torch.Tensor – module buffer Example: >>> for buf in model.buffers(): >>> print(type(buf), buf.size()) <class 'torch.Tensor'> (20L,) <class 'torch.Tensor'> (20L, 1L, 5L, 5L)
torch.generated.torch.nn.flatten#torch.nn.Flatten.buffers
children() Returns an iterator over immediate children modules. Yields Module – a child module
torch.generated.torch.nn.flatten#torch.nn.Flatten.children
cpu() Moves all model parameters and buffers to the CPU. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.cpu
cuda(device=None) Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized. Parameters device (int, optional) – if specified, all parameters will be copied to that device Returns self Return type Module
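A short sketch of the ordering recommended above (it assumes a CUDA device is available; the model and optimizer choices are illustrative):
>>> model = nn.Linear(10, 2).cuda()                          # move parameters to the GPU first ...
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # ... then build the optimizer over the GPU parameters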
torch.generated.torch.nn.flatten#torch.nn.Flatten.cuda
double() Casts all floating point parameters and buffers to double datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.double
eval() Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns self Return type Module
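A small sketch showing the effect of evaluation mode on a module that behaves differently during training, here Dropout (values are illustrative):
>>> m = nn.Dropout(p=0.5)
>>> x = torch.ones(1, 10)
>>> m.train()     # in training mode roughly half of the activations are zeroed
Dropout(p=0.5, inplace=False)
>>> m.eval()      # in evaluation mode Dropout acts as the identity
Dropout(p=0.5, inplace=False)
>>> torch.equal(m(x), x)
True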
torch.generated.torch.nn.flatten#torch.nn.Flatten.eval
float() Casts all floating point parameters and buffers to float datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.float
half() Casts all floating point parameters and buffers to half datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.half
load_state_dict(state_dict, strict=True) Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters state_dict (dict) – a dict containing parameters and persistent buffers. strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True Returns missing_keys is a list of str containing the missing keys unexpected_keys is a list of str containing the unexpected keys Return type NamedTuple with missing_keys and unexpected_keys fields
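A brief sketch (the modules are illustrative) of the returned named tuple when strict=False and the keys do not line up:
>>> src = nn.Linear(2, 2)
>>> dst = nn.Sequential(nn.Linear(2, 2))
>>> result = dst.load_state_dict(src.state_dict(), strict=False)
>>> result.missing_keys          # keys expected by dst but absent from the loaded dict
['0.weight', '0.bias']
>>> result.unexpected_keys       # keys present in the loaded dict but unknown to dst
['weight', 'bias']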
torch.generated.torch.nn.flatten#torch.nn.Flatten.load_state_dict
modules() Returns an iterator over all modules in the network. Yields Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.modules()): print(idx, '->', m) 0 -> Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) 1 -> Linear(in_features=2, out_features=2, bias=True)
torch.generated.torch.nn.flatten#torch.nn.Flatten.modules
named_buffers(prefix='', recurse=True) Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters prefix (str) – prefix to prepend to all buffer names. recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields (string, torch.Tensor) – Tuple containing the name and buffer Example: >>> for name, buf in self.named_buffers(): >>> if name in ['running_var']: >>> print(buf.size())
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_buffers
named_children() Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children(): >>> if name in ['conv4', 'conv5']: >>> print(module)
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_children
named_modules(memo=None, prefix='') Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.named_modules()): print(idx, '->', m) 0 -> ('', Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) )) 1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_modules
named_parameters(prefix='', recurse=True) Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters prefix (str) – prefix to prepend to all parameter names. recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields (string, Parameter) – Tuple containing the name and parameter Example: >>> for name, param in self.named_parameters(): >>> if name in ['bias']: >>> print(param.size())
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_parameters
parameters(recurse=True) Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields Parameter – module parameter Example: >>> for param in model.parameters(): >>> print(type(param), param.size()) <class 'torch.Tensor'> (20L,) <class 'torch.Tensor'> (20L, 1L, 5L, 5L)
torch.generated.torch.nn.flatten#torch.nn.Flatten.parameters
register_backward_hook(hook) Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_backward_hook
register_buffer(name, tensor, persistent=True) Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict. Buffers can be accessed as attributes using given names. Parameters name (string) – name of the buffer. The buffer can be accessed from this module using the given name tensor (Tensor) – buffer to be registered. persistent (bool) – whether the buffer is part of this module’s state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features))
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_buffer
register_forward_hook(hook) Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks; they are passed only to the forward. The hook can modify the output. It can also modify the input in place, but this will have no effect on the forward pass, since the hook is called after forward() has run. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle
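A minimal sketch of a forward hook that stashes a layer's output for later inspection (the hook name and storage dict are illustrative):
>>> activations = {}
>>> def save_output(module, input, output):
>>>     activations['linear'] = output.detach()   # keep a detached copy of the output
>>> layer = nn.Linear(3, 5)
>>> handle = layer.register_forward_hook(save_output)
>>> _ = layer(torch.randn(2, 3))
>>> activations['linear'].shape
torch.Size([2, 5])
>>> handle.remove()              # detach the hook once it is no longer needed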
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_forward_hook
register_forward_pre_hook(hook) Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks; they are passed only to the forward. The hook can modify the input. The user can return either a tuple or a single modified value from the hook. The value will be wrapped into a tuple if a single value is returned (unless that value is already a tuple). Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_forward_pre_hook
register_full_backward_hook(hook) Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type torch.utils.hooks.RemovableHandle
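A small sketch of a full backward hook that merely reports gradient shapes during the backward pass (the hook body is illustrative):
>>> def report_grads(module, grad_input, grad_output):
>>>     print([g.shape for g in grad_output if g is not None])
>>> layer = nn.Linear(3, 5)
>>> handle = layer.register_full_backward_hook(report_grads)
>>> layer(torch.randn(2, 3)).sum().backward()     # the hook fires while gradients are computed
[torch.Size([2, 5])]
>>> handle.remove()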
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_full_backward_hook
register_parameter(name, param) Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters name (string) – name of the parameter. The parameter can be accessed from this module using the given name param (Parameter) – parameter to be added to the module.
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_parameter
requires_grad_(requires_grad=True) Changes whether autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parameters requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True. Returns self Return type Module
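A short sketch of the freezing use case mentioned above (the backbone/head split is illustrative):
>>> backbone = nn.Linear(8, 8)
>>> head = nn.Linear(8, 2)
>>> backbone.requires_grad_(False)     # freeze the backbone; only the head will be trained
Linear(in_features=8, out_features=8, bias=True)
>>> any(p.requires_grad for p in backbone.parameters())
False
>>> all(p.requires_grad for p in head.parameters())
True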
torch.generated.torch.nn.flatten#torch.nn.Flatten.requires_grad_
state_dict(destination=None, prefix='', keep_vars=False) Returns a dictionary containing the whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns a dictionary containing the whole state of the module Return type dict Example: >>> module.state_dict().keys() ['bias', 'weight']
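A brief sketch of the usual save/restore round trip built on state_dict() and load_state_dict() (the file name is illustrative):
>>> model = nn.Linear(4, 2)
>>> torch.save(model.state_dict(), 'model_weights.pt')     # persist only the state, not the module object
>>> restored = nn.Linear(4, 2)
>>> restored.load_state_dict(torch.load('model_weights.pt'))
<All keys matched successfully>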
torch.generated.torch.nn.flatten#torch.nn.Flatten.state_dict
to(*args, **kwargs) Moves and/or casts the parameters and buffers. This can be called as to(device=None, dtype=None, non_blocking=False) to(dtype, non_blocking=False) to(tensor, non_blocking=False) to(memory_format=torch.channels_last) Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtype`s. In addition, this method will only cast the floating point or complex parameters and buffers to :attr:`dtype (if given). The integral parameters and buffers will be moved device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples. Note This method modifies the module in-place. Parameters device (torch.device) – the desired device of the parameters and buffers in this module dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument) Returns self Return type Module Examples: >>> linear = nn.Linear(2, 2) >>> linear.weight Parameter containing: tensor([[ 0.1913, -0.3420], [-0.5113, -0.2325]]) >>> linear.to(torch.double) Linear(in_features=2, out_features=2, bias=True) >>> linear.weight Parameter containing: tensor([[ 0.1913, -0.3420], [-0.5113, -0.2325]], dtype=torch.float64) >>> gpu1 = torch.device("cuda:1") >>> linear.to(gpu1, dtype=torch.half, non_blocking=True) Linear(in_features=2, out_features=2, bias=True) >>> linear.weight Parameter containing: tensor([[ 0.1914, -0.3420], [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1') >>> cpu = torch.device("cpu") >>> linear.to(cpu) Linear(in_features=2, out_features=2, bias=True) >>> linear.weight Parameter containing: tensor([[ 0.1914, -0.3420], [-0.5112, -0.2324]], dtype=torch.float16) >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble) >>> linear.weight Parameter containing: tensor([[ 0.3741+0.j, 0.2382+0.j], [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128) >>> linear(torch.ones(3, 2, dtype=torch.cdouble)) tensor([[0.6122+0.j, 0.1150+0.j], [0.6122+0.j, 0.1150+0.j], [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
torch.generated.torch.nn.flatten#torch.nn.Flatten.to
train(mode=True) Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.train
type(dst_type) Casts all parameters and buffers to dst_type. Parameters dst_type (type or string) – the desired type Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.type
xpu(device=None) Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the XPU while being optimized. Parameters device (int, optional) – if specified, all parameters will be copied to that device Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.xpu
zero_grad(set_to_none=False) Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
torch.generated.torch.nn.flatten#torch.nn.Flatten.zero_grad
class torch.nn.Fold(output_size, kernel_size, dilation=1, padding=0, stride=1) [source] Combines an array of sliding local blocks into a large containing tensor. Consider a batched input tensor containing sliding local blocks, e.g., patches of images, of shape (N,C×∏(kernel_size),L)(N, C \times \prod(\text{kernel\_size}), L) , where NN is batch dimension, C×∏(kernel_size)C \times \prod(\text{kernel\_size}) is the number of values within a block (a block has ∏(kernel_size)\prod(\text{kernel\_size}) spatial locations each containing a CC -channeled vector), and LL is the total number of blocks. (This is exactly the same specification as the output shape of Unfold.) This operation combines these local blocks into the large output tensor of shape (N,C,output_size[0],output_size[1],…)(N, C, \text{output\_size}[0], \text{output\_size}[1], \dots) by summing the overlapping values. Similar to Unfold, the arguments must satisfy L=∏d⌊output_size[d]+2×padding[d]−dilation[d]×(kernel_size[d]−1)−1stride[d]+1⌋,L = \prod_d \left\lfloor\frac{\text{output\_size}[d] + 2 \times \text{padding}[d] % - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor, where dd is over all spatial dimensions. output_size describes the spatial shape of the large containing tensor of the sliding local blocks. It is useful to resolve the ambiguity when multiple input shapes map to same number of sliding blocks, e.g., with stride > 0. The padding, stride and dilation arguments specify how the sliding blocks are retrieved. stride controls the stride for the sliding blocks. padding controls the amount of implicit zero-paddings on both sides for padding number of points for each dimension before reshaping. dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does. Parameters output_size (int or tuple) – the shape of the spatial dimensions of the output (i.e., output.sizes()[2:]) kernel_size (int or tuple) – the size of the sliding blocks stride (int or tuple) – the stride of the sliding blocks in the input spatial dimensions. Default: 1 padding (int or tuple, optional) – implicit zero padding to be added on both sides of input. Default: 0 dilation (int or tuple, optional) – a parameter that controls the stride of elements within the neighborhood. Default: 1 If output_size, kernel_size, dilation, padding or stride is an int or a tuple of length 1 then their values will be replicated across all spatial dimensions. For the case of two output spatial dimensions this operation is sometimes called col2im. Note Fold calculates each combined value in the resulting large tensor by summing all values from all containing blocks. Unfold extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other. In general, folding and unfolding operations are related as follows. Consider Fold and Unfold instances created with the same parameters: >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...) 
>>> fold = nn.Fold(output_size=..., **fold_params) >>> unfold = nn.Unfold(**fold_params) Then for any (supported) input tensor the following equality holds: fold(unfold(input)) == divisor * input where divisor is a tensor that depends only on the shape and dtype of the input: >>> input_ones = torch.ones(input.shape, dtype=input.dtype) >>> divisor = fold(unfold(input_ones)) When the divisor tensor contains no zero elements, then fold and unfold operations are inverses of each other (up to constant divisor). Warning Currently, only 4-D output tensors (batched image-like tensors) are supported. Shape: Input: (N,C×∏(kernel_size),L)(N, C \times \prod(\text{kernel\_size}), L) Output: (N,C,output_size[0],output_size[1],…)(N, C, \text{output\_size}[0], \text{output\_size}[1], \dots) as described above Examples: >>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2)) >>> input = torch.randn(1, 3 * 2 * 2, 12) >>> output = fold(input) >>> output.size() torch.Size([1, 3, 4, 5])
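A concrete version of the fold/unfold relationship sketched above, with small illustrative sizes:
>>> fold_params = dict(kernel_size=(2, 2), stride=1)
>>> fold = nn.Fold(output_size=(4, 5), **fold_params)
>>> unfold = nn.Unfold(**fold_params)
>>> input = torch.randn(1, 3, 4, 5)
>>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))     # counts how many blocks cover each output position
>>> torch.allclose(fold(unfold(input)), divisor * input)
True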
torch.generated.torch.nn.fold#torch.nn.Fold
class torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) [source] Applies a 2D fractional max pooling over an input signal composed of several input planes. Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham The max-pooling operation is applied in kH×kWkH \times kW regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes. Parameters kernel_size – the size of the window to take a max over. Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw) output_size – the target output size of the image of the form oH x oW. Can be a tuple (oH, oW) or a single number oH for a square image oH x oH output_ratio – If one wants to have an output size as a ratio of the input size, this option can be given. This has to be a number or tuple in the range (0, 1) return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d(). Default: False Examples >>> # pool of square window of size=3, and target output size 13x12 >>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12)) >>> # pool of square window and target output size being half of input image size >>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5)) >>> input = torch.randn(20, 16, 50, 32) >>> output = m(input)
torch.generated.torch.nn.fractionalmaxpool2d#torch.nn.FractionalMaxPool2d
torch.nn.functional Convolution functions conv1d torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 1D convolution over an input signal composed of several input planes. This operator supports TensorFloat32. See Conv1d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW) weight – filters of shape (out_channels,in_channelsgroups,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1 padding – implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0 dilation – the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3) >>> inputs = torch.randn(20, 16, 50) >>> F.conv1d(inputs, filters) conv2d torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 2D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv2d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iH,iW)(\text{minibatch} , \text{in\_channels} , iH , iW) weight – filters of shape (out_channels,in_channelsgroups,kH,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kH , kW) bias – optional bias tensor of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1 padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 Examples: >>> # With square kernels and equal stride >>> filters = torch.randn(8,4,3,3) >>> inputs = torch.randn(1,4,5,5) >>> F.conv2d(inputs, filters, padding=1) conv3d torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 3D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv3d for details and output shape. 
Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iT,iH,iW)(\text{minibatch} , \text{in\_channels} , iT , iH , iW) weight – filters of shape (out_channels,in_channelsgroups,kT,kH,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kT , kH , kW) bias – optional bias tensor of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1 padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0 dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3, 3, 3) >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> F.conv3d(inputs, filters) conv_transpose1d torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose1d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW) weight – filters of shape (in_channels,out_channelsgroups,kW)(\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1 Examples: >>> inputs = torch.randn(20, 16, 50) >>> weights = torch.randn(16, 33, 5) >>> F.conv_transpose1d(inputs, weights) conv_transpose2d torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose2d for details and output shape. 
Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iH,iW)(\text{minibatch} , \text{in\_channels} , iH , iW) weight – filters of shape (in_channels,out_channelsgroups,kH,kW)(\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kH , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 Examples: >>> # With square kernels and equal stride >>> inputs = torch.randn(1, 4, 5, 5) >>> weights = torch.randn(4, 8, 3, 3) >>> F.conv_transpose2d(inputs, weights, padding=1) conv_transpose3d torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution” This operator supports TensorFloat32. See ConvTranspose3d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iT,iH,iW)(\text{minibatch} , \text{in\_channels} , iT , iH , iW) weight – filters of shape (in_channels,out_channelsgroups,kT,kH,kW)(\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kT , kH , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). 
Default: 1 Examples: >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> weights = torch.randn(16, 33, 3, 3, 3) >>> F.conv_transpose3d(inputs, weights) unfold torch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1) [source] Extracts sliding local blocks from a batched input tensor. Warning Currently, only 4-D input tensors (batched image-like tensors) are supported. Warning More than one element of the unfolded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensor, please clone it first. See torch.nn.Unfold for details fold torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1) [source] Combines an array of sliding local blocks into a large containing tensor. Warning Currently, only 3-D output tensors (unfolded batched image-like tensors) are supported. See torch.nn.Fold for details Pooling functions avg_pool1d torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Tensor Applies a 1D average pooling over an input signal composed of several input planes. See AvgPool1d for details and output shape. Parameters input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW) kernel_size – the size of the window. Can be a single number or a tuple (kW,) stride – the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0 ceil_mode – when True, will use ceil instead of floor to compute the output shape. Default: False count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True Examples: >>> # pool of square window of size=3, stride=2 >>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32) >>> F.avg_pool1d(input, kernel_size=3, stride=2) tensor([[[ 2., 4., 6.]]]) avg_pool2d torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor Applies 2D average-pooling operation in kH×kWkH \times kW regions by step size sH×sWsH \times sW steps. The number of output features is equal to the number of input planes. See AvgPool2d for details and output shape. Parameters input – input tensor (minibatch,in_channels,iH,iW)(\text{minibatch} , \text{in\_channels} , iH , iW) kernel_size – size of the pooling region. Can be a single number or a tuple (kH, kW) stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default: kernel_size padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0 ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape. Default: False count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None avg_pool3d torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor Applies 3D average-pooling operation in kT×kH×kWkT \times kH \times kW regions by step size sT×sH×sWsT \times sH \times sW steps. 
The number of output features is equal to ⌊input planessT⌋\lfloor\frac{\text{input planes}}{sT}\rfloor . See AvgPool3d for details and output shape. Parameters input – input tensor (minibatch,in_channels,iT×iH,iW)(\text{minibatch} , \text{in\_channels} , iT \times iH , iW) kernel_size – size of the pooling region. Can be a single number or a tuple (kT, kH, kW) stride – stride of the pooling operation. Can be a single number or a tuple (sT, sH, sW). Default: kernel_size padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW), Default: 0 ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape count_include_pad – when True, will include the zero-padding in the averaging calculation divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None max_pool1d torch.nn.functional.max_pool1d(*args, **kwargs) Applies a 1D max pooling over an input signal composed of several input planes. See MaxPool1d for details. max_pool2d torch.nn.functional.max_pool2d(*args, **kwargs) Applies a 2D max pooling over an input signal composed of several input planes. See MaxPool2d for details. max_pool3d torch.nn.functional.max_pool3d(*args, **kwargs) Applies a 3D max pooling over an input signal composed of several input planes. See MaxPool3d for details. max_unpool1d torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool1d. See MaxUnpool1d for details. max_unpool2d torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool2d. See MaxUnpool2d for details. max_unpool3d torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool3d. See MaxUnpool3d for details. lp_pool1d torch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 1D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool1d for details. lp_pool2d torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 2D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool2d for details. adaptive_max_pool1d torch.nn.functional.adaptive_max_pool1d(*args, **kwargs) Applies a 1D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool1d for details and output shape. Parameters output_size – the target output size (single integer) return_indices – whether to return pooling indices. Default: False adaptive_max_pool2d torch.nn.functional.adaptive_max_pool2d(*args, **kwargs) Applies a 2D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool2d for details and output shape. Parameters output_size – the target output size (single integer or double-integer tuple) return_indices – whether to return pooling indices. Default: False adaptive_max_pool3d torch.nn.functional.adaptive_max_pool3d(*args, **kwargs) Applies a 3D adaptive max pooling over an input signal composed of several input planes. 
See AdaptiveMaxPool3d for details and output shape. Parameters output_size – the target output size (single integer or triple-integer tuple) return_indices – whether to return pooling indices. Default: False adaptive_avg_pool1d torch.nn.functional.adaptive_avg_pool1d(input, output_size) → Tensor Applies a 1D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool1d for details and output shape. Parameters output_size – the target output size (single integer) adaptive_avg_pool2d torch.nn.functional.adaptive_avg_pool2d(input, output_size) [source] Applies a 2D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool2d for details and output shape. Parameters output_size – the target output size (single integer or double-integer tuple) adaptive_avg_pool3d torch.nn.functional.adaptive_avg_pool3d(input, output_size) [source] Applies a 3D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool3d for details and output shape. Parameters output_size – the target output size (single integer or triple-integer tuple) Non-linear activation functions threshold torch.nn.functional.threshold(input, threshold, value, inplace=False) Thresholds each element of the input Tensor. See Threshold for more details. torch.nn.functional.threshold_(input, threshold, value) → Tensor In-place version of threshold(). relu torch.nn.functional.relu(input, inplace=False) → Tensor [source] Applies the rectified linear unit function element-wise. See ReLU for more details. torch.nn.functional.relu_(input) → Tensor In-place version of relu(). hardtanh torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Tensor [source] Applies the HardTanh function element-wise. See Hardtanh for more details. torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Tensor In-place version of hardtanh(). hardswish torch.nn.functional.hardswish(input, inplace=False) [source] Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3. Hardswish(x)={0if x≤−3,xif x≥+3,x⋅(x+3)/6otherwise\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 & \text{otherwise} \end{cases} See Hardswish for more details. relu6 torch.nn.functional.relu6(input, inplace=False) → Tensor [source] Applies the element-wise function ReLU6(x)=min⁡(max⁡(0,x),6)\text{ReLU6}(x) = \min(\max(0,x), 6) . See ReLU6 for more details. elu torch.nn.functional.elu(input, alpha=1.0, inplace=False) [source] Applies element-wise, ELU(x)=max⁡(0,x)+min⁡(0,α∗(exp⁡(x)−1))\text{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1)) . See ELU for more details. torch.nn.functional.elu_(input, alpha=1.) → Tensor In-place version of elu(). selu torch.nn.functional.selu(input, inplace=False) → Tensor [source] Applies element-wise, SELU(x)=scale∗(max⁡(0,x)+min⁡(0,α∗(exp⁡(x)−1)))\text{SELU}(x) = scale * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))) , with α=1.6732632423543772848170429916717\alpha=1.6732632423543772848170429916717 and scale=1.0507009873554804934193349852946scale=1.0507009873554804934193349852946 . See SELU for more details. celu torch.nn.functional.celu(input, alpha=1., inplace=False) → Tensor [source] Applies element-wise, CELU(x)=max⁡(0,x)+min⁡(0,α∗(exp⁡(x/α)−1))\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)) . See CELU for more details. 
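The elementwise activation functionals above (relu, hardtanh, elu, selu, celu, etc.) can all be called directly on tensors. A minimal illustrative sketch; the input values below are arbitrary and not taken from the documentation:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
>>> F.relu(x)        # negative entries become 0
tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
>>> F.hardtanh(x)    # entries are clamped to [-1, 1]
tensor([-1.0000, -0.5000,  0.0000,  0.5000,  1.0000])
>>> F.elu(x)         # negative entries become alpha * (exp(x) - 1)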
leaky_relu torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Tensor [source] Applies element-wise, LeakyReLU(x)=max⁡(0,x)+negative_slope∗min⁡(0,x)\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x) See LeakyReLU for more details. torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Tensor In-place version of leaky_relu(). prelu torch.nn.functional.prelu(input, weight) → Tensor [source] Applies element-wise the function PReLU(x)=max⁡(0,x)+weight∗min⁡(0,x)\text{PReLU}(x) = \max(0,x) + \text{weight} * \min(0,x) where weight is a learnable parameter. See PReLU for more details. rrelu torch.nn.functional.rrelu(input, lower=1./8, upper=1./3, training=False, inplace=False) → Tensor [source] Randomized leaky ReLU. See RReLU for more details. torch.nn.functional.rrelu_(input, lower=1./8, upper=1./3, training=False) → Tensor In-place version of rrelu(). glu torch.nn.functional.glu(input, dim=-1) → Tensor [source] The gated linear unit. Computes: GLU(a,b)=a⊗σ(b)\text{GLU}(a, b) = a \otimes \sigma(b) where input is split in half along dim to form a and b, σ\sigma is the sigmoid function and ⊗\otimes is the element-wise product between matrices. See Language Modeling with Gated Convolutional Networks. Parameters input (Tensor) – input tensor dim (int) – dimension on which to split the input. Default: -1 gelu torch.nn.functional.gelu(input) → Tensor [source] Applies element-wise the function GELU(x)=x∗Φ(x)\text{GELU}(x) = x * \Phi(x) where Φ(x)\Phi(x) is the Cumulative Distribution Function for Gaussian Distribution. See Gaussian Error Linear Units (GELUs). logsigmoid torch.nn.functional.logsigmoid(input) → Tensor Applies element-wise LogSigmoid(xi)=log⁡(11+exp⁡(−xi))\text{LogSigmoid}(x_i) = \log \left(\frac{1}{1 + \exp(-x_i)}\right) See LogSigmoid for more details. hardshrink torch.nn.functional.hardshrink(input, lambd=0.5) → Tensor [source] Applies the hard shrinkage function element-wise See Hardshrink for more details. tanhshrink torch.nn.functional.tanhshrink(input) → Tensor [source] Applies element-wise, Tanhshrink(x)=x−Tanh(x)\text{Tanhshrink}(x) = x - \text{Tanh}(x) See Tanhshrink for more details. softsign torch.nn.functional.softsign(input) → Tensor [source] Applies element-wise, the function SoftSign(x)=x1+∣x∣\text{SoftSign}(x) = \frac{x}{1 + |x|} See Softsign for more details. softplus torch.nn.functional.softplus(input, beta=1, threshold=20) → Tensor Applies element-wise, the function Softplus(x)=1β∗log⁡(1+exp⁡(β∗x))\text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)) . For numerical stability the implementation reverts to the linear function when input×β>thresholdinput \times \beta > threshold . See Softplus for more details. softmin torch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None) [source] Applies a softmin function. Note that Softmin(x)=Softmax(−x)\text{Softmin}(x) = \text{Softmax}(-x) . See softmax definition for mathematical formula. See Softmin for more details. Parameters input (Tensor) – input dim (int) – A dimension along which softmin will be computed (so every slice along dim will sum to 1). dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. softmax torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None) [source] Applies a softmax function. 
Softmax is defined as: Softmax(xi)=exp⁡(xi)∑jexp⁡(xj)\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1. See Softmax for more details. Parameters input (Tensor) – input dim (int) – A dimension along which softmax will be computed. dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Note This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log_softmax instead (it’s faster and has better numerical properties). softshrink torch.nn.functional.softshrink(input, lambd=0.5) → Tensor Applies the soft shrinkage function elementwise See Softshrink for more details. gumbel_softmax torch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1) [source] Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and optionally discretizes. Parameters logits – […, num_features] unnormalized log probabilities tau – non-negative scalar temperature hard – if True, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd dim (int) – A dimension along which softmax will be computed. Default: -1. Returns Sampled tensor of same shape as logits from the Gumbel-Softmax distribution. If hard=True, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across dim. Note This function is here for legacy reasons, may be removed from nn.Functional in the future. Note The main trick for hard is to do y_hard - y_soft.detach() + y_soft It achieves two things: - makes the output value exactly one-hot (since we add then subtract y_soft value) - makes the gradient equal to y_soft gradient (since we strip all other gradients) Examples:: >>> logits = torch.randn(20, 32) >>> # Sample soft categorical using reparametrization trick: >>> F.gumbel_softmax(logits, tau=1, hard=False) >>> # Sample hard categorical using "Straight-through" trick: >>> F.gumbel_softmax(logits, tau=1, hard=True) log_softmax torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) [source] Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly. See LogSoftmax for more details. Parameters input (Tensor) – input dim (int) – A dimension along which log_softmax will be computed. dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. tanh torch.nn.functional.tanh(input) → Tensor [source] Applies element-wise, Tanh(x)=tanh⁡(x)=exp⁡(x)−exp⁡(−x)exp⁡(x)+exp⁡(−x)\text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)} See Tanh for more details. sigmoid torch.nn.functional.sigmoid(input) → Tensor [source] Applies the element-wise function Sigmoid(x)=11+exp⁡(−x)\text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)} See Sigmoid for more details. 
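To make the note above about pairing log_softmax with NLLLoss concrete, here is a minimal sketch; the batch size and class count are arbitrary:
>>> import torch
>>> import torch.nn.functional as F
>>> logits = torch.randn(4, 10)                 # 4 samples, 10 classes
>>> probs = F.softmax(logits, dim=1)            # each row sums to 1
>>> log_probs = F.log_softmax(logits, dim=1)    # numerically safer than probs.log()
>>> target = torch.randint(0, 10, (4,))
>>> loss = F.nll_loss(log_probs, target)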
hardsigmoid torch.nn.functional.hardsigmoid(input) → Tensor [source] Applies the element-wise function Hardsigmoid(x)={0if x≤−3,1if x≥+3,x/6+1/2otherwise\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ 1 & \text{if~} x \ge +3, \\ x / 6 + 1 / 2 & \text{otherwise} \end{cases} Parameters inplace – If set to True, will do this operation in-place. Default: False See Hardsigmoid for more details. silu torch.nn.functional.silu(input, inplace=False) [source] Applies the silu function, element-wise. silu(x)=x∗σ(x),where σ(x) is the logistic sigmoid.\text{silu}(x) = x * \sigma(x), \text{where } \sigma(x) \text{ is the logistic sigmoid.} Note See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function where the SiLU was experimented with later. See SiLU for more details. Normalization functions batch_norm torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05) [source] Applies Batch Normalization for each channel across a batch of data. See BatchNorm1d, BatchNorm2d, BatchNorm3d for details. instance_norm torch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05) [source] Applies Instance Normalization for each channel in each data sample in a batch. See InstanceNorm1d, InstanceNorm2d, InstanceNorm3d for details. layer_norm torch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05) [source] Applies Layer Normalization for last certain number of dimensions. See LayerNorm for details. local_response_norm torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0) [source] Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels. See LocalResponseNorm for details. normalize torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12, out=None) [source] Performs LpL_p normalization of inputs over specified dimension. For a tensor input of sizes (n0,...,ndim,...,nk)(n_0, ..., n_{dim}, ..., n_k) , each ndimn_{dim} -element vector vv along dimension dim is transformed as v=vmax⁡(∥v∥p,ϵ).v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}. With the default arguments it uses the Euclidean norm over vectors along dimension 11 for normalization. Parameters input – input tensor of any shape p (float) – the exponent value in the norm formulation. Default: 2 dim (int) – the dimension to reduce. Default: 1 eps (float) – small value to avoid division by zero. Default: 1e-12 out (Tensor, optional) – the output tensor. If out is used, this operation won’t be differentiable. Linear functions linear torch.nn.functional.linear(input, weight, bias=None) [source] Applies a linear transformation to the incoming data: y=xAT+by = xA^T + b . This operator supports TensorFloat32. 
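A minimal sketch of calling F.linear directly; the sizes below are arbitrary and simply match the Shape section that follows:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(128, 20)          # (N, in_features)
>>> weight = torch.randn(30, 20)      # (out_features, in_features)
>>> bias = torch.randn(30)
>>> y = F.linear(x, weight, bias)     # equivalent to x @ weight.t() + bias
>>> y.shape
torch.Size([128, 30])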
Shape: Input: (N,∗,in_features)(N, *, in\_features) N is the batch size, * means any number of additional dimensions Weight: (out_features,in_features)(out\_features, in\_features) Bias: (out_features)(out\_features) Output: (N,∗,out_features)(N, *, out\_features) bilinear torch.nn.functional.bilinear(input1, input2, weight, bias=None) [source] Applies a bilinear transformation to the incoming data: y=x1TAx2+by = x_1^T A x_2 + b Shape: input1: (N,∗,Hin1)(N, *, H_{in1}) where Hin1=in1_featuresH_{in1}=\text{in1\_features} and ∗* means any number of additional dimensions. All but the last dimension of the inputs should be the same. input2: (N,∗,Hin2)(N, *, H_{in2}) where Hin2=in2_featuresH_{in2}=\text{in2\_features} weight: (out_features,in1_features,in2_features)(\text{out\_features}, \text{in1\_features}, \text{in2\_features}) bias: (out_features)(\text{out\_features}) output: (N,∗,Hout)(N, *, H_{out}) where Hout=out_featuresH_{out}=\text{out\_features} and all but the last dimension are the same shape as the input. Dropout functions dropout torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False) [source] During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. See Dropout for details. Parameters p – probability of an element to be zeroed. Default: 0.5 training – apply dropout if is True. Default: True inplace – If set to True, will do this operation in-place. Default: False alpha_dropout torch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False) [source] Applies alpha dropout to the input. See AlphaDropout for details. feature_alpha_dropout torch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False) [source] Randomly masks out entire channels (a channel is a feature map, e.g. the jj -th channel of the ii -th sample in the batch input is a tensor input[i,j]\text{input}[i, j] ) of the input tensor). Instead of setting activations to zero, as in regular Dropout, the activations are set to the negative saturation value of the SELU activation function. Each element will be masked independently on every forward call with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit variance. See FeatureAlphaDropout for details. Parameters p – dropout probability of a channel to be zeroed. Default: 0.5 training – apply dropout if is True. Default: True inplace – If set to True, will do this operation in-place. Default: False dropout2d torch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False) [source] Randomly zero out entire channels (a channel is a 2D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 2D tensor input[i,j]\text{input}[i, j] ) of the input tensor). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout2d for details. Parameters p – probability of a channel to be zeroed. Default: 0.5 training – apply dropout if is True. Default: True inplace – If set to True, will do this operation in-place. 
Default: False dropout3d torch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False) [source] Randomly zero out entire channels (a channel is a 3D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 3D tensor input[i,j]\text{input}[i, j] ) of the input tensor). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout3d for details. Parameters p – probability of a channel to be zeroed. Default: 0.5 training – apply dropout if is True. Default: True inplace – If set to True, will do this operation in-place. Default: False Sparse functions embedding torch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False) [source] A simple lookup table that looks up embeddings in a fixed dictionary and size. This module is often used to retrieve word embeddings using indices. The input to the module is a list of indices, and the embedding matrix, and the output is the corresponding word embeddings. See torch.nn.Embedding for more details. Parameters input (LongTensor) – Tensor containing indices into the embedding matrix weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size padding_idx (int, optional) – If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index. max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place. norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2. scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. sparse (bool, optional) – If True, gradient w.r.t. weight will be a sparse tensor. See Notes under torch.nn.Embedding for more details regarding sparse gradients. Shape: Input: LongTensor of arbitrary shape containing the indices to extract Weight: Embedding matrix of floating point type with shape (V, embedding_dim), where V = maximum index + 1 and embedding_dim = the embedding size Output: (*, embedding_dim), where * is the input shape Examples: >>> # a batch of 2 samples of 4 indices each >>> input = torch.tensor([[1,2,4,5],[4,3,2,9]]) >>> # an embedding matrix containing 10 tensors of size 3 >>> embedding_matrix = torch.rand(10, 3) >>> F.embedding(input, embedding_matrix) tensor([[[ 0.8490, 0.9625, 0.6753], [ 0.9666, 0.7761, 0.6108], [ 0.6246, 0.9751, 0.3618], [ 0.4161, 0.2419, 0.7383]], [[ 0.6246, 0.9751, 0.3618], [ 0.0237, 0.7794, 0.0528], [ 0.9666, 0.7761, 0.6108], [ 0.3385, 0.8612, 0.1867]]]) >>> # example with padding_idx >>> weights = torch.rand(10, 3) >>> weights[0, :].zero_() >>> embedding_matrix = weights >>> input = torch.tensor([[0,2,0,5]]) >>> F.embedding(input, embedding_matrix, padding_idx=0) tensor([[[ 0.0000, 0.0000, 0.0000], [ 0.5609, 0.5384, 0.8720], [ 0.0000, 0.0000, 0.0000], [ 0.6262, 0.2438, 0.7471]]]) embedding_bag torch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False) [source] Computes sums, means or maxes of bags of embeddings, without instantiating the intermediate embeddings. 
See torch.nn.EmbeddingBag for more details. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters input (LongTensor) – Tensor containing bags of indices into the embedding matrix weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size offsets (LongTensor, optional) – Only used when input is 1D. offsets determines the starting index position of each bag (sequence) in input. max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place. norm_type (float, optional) – The p in the p-norm to compute for the max_norm option. Default 2. scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. Note: this option is not supported when mode="max". mode (string, optional) – "sum", "mean" or "max". Specifies the way to reduce the bag. Default: "mean" sparse (bool, optional) – if True, gradient w.r.t. weight will be a sparse tensor. See Notes under torch.nn.Embedding for more details regarding sparse gradients. Note: this option is not supported when mode="max". per_sample_weights (Tensor, optional) – a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated as having the same offsets, if those are not None. include_last_offset (bool, optional) – if True, the size of offsets is equal to the number of bags + 1. The last element is the size of the input, or the ending index position of the last bag. Shape: input (LongTensor) and offsets (LongTensor, optional) If input is 2D of shape (B, N), it will be treated as B bags (sequences) each of fixed length N, and this will return B values aggregated in a way depending on the mode. offsets is ignored and required to be None in this case. If input is 1D of shape (N), it will be treated as a concatenation of multiple bags (sequences). offsets is required to be a 1D tensor containing the starting index positions of each bag in input. Therefore, for offsets of shape (B), input will be viewed as having B bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros. weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) per_sample_weights (Tensor, optional). Has the same shape as input. output: aggregated embedding values of shape (B, embedding_dim) Examples: >>> # an Embedding module containing 10 tensors of size 3 >>> embedding_matrix = torch.rand(10, 3) >>> # a batch of 2 samples of 4 indices each >>> input = torch.tensor([1,2,4,5,4,3,2,9]) >>> offsets = torch.tensor([0,4]) >>> F.embedding_bag(input, embedding_matrix, offsets) tensor([[ 0.3397, 0.3552, 0.5545], [ 0.5893, 0.4386, 0.5882]]) one_hot torch.nn.functional.one_hot(tensor, num_classes=-1) → LongTensor Takes LongTensor with index values of shape (*) and returns a tensor of shape (*, num_classes) that has zeros everywhere except where the index of last dimension matches the corresponding value of the input tensor, in which case it will be 1. See also One-hot on Wikipedia. Parameters tensor (LongTensor) – class values of any shape. num_classes (int) – Total number of classes.
If set to -1, the number of classes will be inferred as one greater than the largest class value in the input tensor. Returns LongTensor that has one more dimension with 1 values at the index of last dimension indicated by the input, and 0 everywhere else. Examples >>> F.one_hot(torch.arange(0, 5) % 3) tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]) >>> F.one_hot(torch.arange(0, 5) % 3, num_classes=5) tensor([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]) >>> F.one_hot(torch.arange(0, 6).view(3,2) % 3) tensor([[[1, 0, 0], [0, 1, 0]], [[0, 0, 1], [1, 0, 0]], [[0, 1, 0], [0, 0, 1]]]) Distance functions pairwise_distance torch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-06, keepdim=False) [source] See torch.nn.PairwiseDistance for details cosine_similarity torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) → Tensor Returns cosine similarity between x1 and x2, computed along dim. similarity=x1⋅x2max⁡(∥x1∥2⋅∥x2∥2,ϵ)\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} Parameters x1 (Tensor) – First input. x2 (Tensor) – Second input (of size matching x1). dim (int, optional) – Dimension of vectors. Default: 1 eps (float, optional) – Small value to avoid division by zero. Default: 1e-8 Shape: Input: (∗1,D,∗2)(\ast_1, D, \ast_2) where D is at position dim. Output: (∗1,∗2)(\ast_1, \ast_2) where 1 is at position dim. Example: >>> input1 = torch.randn(100, 128) >>> input2 = torch.randn(100, 128) >>> output = F.cosine_similarity(input1, input2) >>> print(output) pdist torch.nn.functional.pdist(input, p=2) → Tensor Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of torch.norm(input[:, None] - input, dim=2, p=p). This function will be faster if the rows are contiguous. If input has shape N×MN \times M then the output will have shape 12N(N−1)\frac{1}{2} N (N - 1) . This function is equivalent to scipy.spatial.distance.pdist(input, ‘minkowski’, p=p) if p∈(0,∞)p \in (0, \infty) . When p=0p = 0 it is equivalent to scipy.spatial.distance.pdist(input, ‘hamming’) * M. When p=∞p = \infty , the closest scipy function is scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()). Parameters input – input tensor of shape N×MN \times M . p – p value for the p-norm distance to calculate between each vector pair ∈[0,∞]\in [0, \infty] . Loss functions binary_cross_entropy torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean') [source] Function that measures the Binary Cross Entropy between the target and the output. See BCELoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. 
Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' Examples: >>> input = torch.randn((3, 2), requires_grad=True) >>> target = torch.rand((3, 2), requires_grad=False) >>> loss = F.binary_cross_entropy(F.sigmoid(input), target) >>> loss.backward() binary_cross_entropy_with_logits torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) [source] Function that measures Binary Cross Entropy between target and output logits. See BCEWithLogitsLoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes. Examples: >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> loss = F.binary_cross_entropy_with_logits(input, target) >>> loss.backward() poisson_nll_loss torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') [source] Poisson negative log likelihood loss. See PoissonNLLLoss for details. Parameters input – expectation of underlying Poisson distribution. target – random sample target∼Poisson(input)target \sim \text{Poisson}(input) . log_input – if True the loss is computed as exp⁡(input)−target∗input\exp(\text{input}) - \text{target} * \text{input} , if False then loss is input−target∗log⁡(input+eps)\text{input} - \text{target} * \log(\text{input}+\text{eps}) . Default: True full – whether to compute full loss, i. e. to add the Stirling approximation term. Default: False target∗log⁡(target)−target+0.5∗log⁡(2∗π∗target)\text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target}) . size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. 
If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True eps (float, optional) – Small value to avoid evaluation of log⁡(0)\log(0) when log_input`=``False`. Default: 1e-8 reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' cosine_embedding_loss torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See CosineEmbeddingLoss for details. cross_entropy torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source] This criterion combines log_softmax and nll_loss in a single function. See CrossEntropyLoss for details. Parameters input (Tensor) – (N,C)(N, C) where C = number of classes or (N,C,H,W)(N, C, H, W) in case of 2D Loss, or (N,C,d1,d2,...,dK)(N, C, d_1, d_2, ..., d_K) where K≥1K \geq 1 in the case of K-dimensional loss. target (Tensor) – (N)(N) where each value is 0≤targets[i]≤C−10 \leq \text{targets}[i] \leq C-1 , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K) where K≥1K \geq 1 for K-dimensional loss. weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' Examples: >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.randint(5, (3,), dtype=torch.int64) >>> loss = F.cross_entropy(input, target) >>> loss.backward() ctc_loss torch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False) [source] The Connectionist Temporal Classification loss. See CTCLoss for details. 
Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters log_probs – (T,N,C)(T, N, C) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()). targets – (N,S)(N, S) or (sum(target_lengths)). Targets cannot be blank. In the second form, the targets are assumed to be concatenated. input_lengths – (N)(N) . Lengths of the inputs (must each be ≤T\leq T ) target_lengths – (N)(N) . Lengths of the targets blank (int, optional) – Blank label. Default 00 . reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken, 'sum': the output will be summed. Default: 'mean' zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Example: >>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_() >>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long) >>> input_lengths = torch.full((16,), 50, dtype=torch.long) >>> target_lengths = torch.randint(10,30,(16,), dtype=torch.long) >>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths) >>> loss.backward() hinge_embedding_loss torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See HingeEmbeddingLoss for details. kl_div torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False) [source] The Kullback-Leibler divergence Loss See KLDivLoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied 'batchmean': the sum of the output will be divided by the batchsize 'sum': the output will be summed 'mean': the output will be divided by the number of elements in the output Default: 'mean' log_target (bool) – A flag indicating whether target is passed in the log space. It is recommended to pass certain distributions (like softmax) in the log space to avoid numerical issues caused by explicit log. 
Default: False Note size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Note :attr:reduction = 'mean' doesn’t return the true kl divergence value, please use :attr:reduction = 'batchmean' which aligns with KL math definition. In the next major release, 'mean' will be changed to be the same as ‘batchmean’. l1_loss torch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source] Function that takes the mean element-wise absolute value difference. See L1Loss for details. mse_loss torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source] Measures the element-wise mean squared error. See MSELoss for details. margin_ranking_loss torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See MarginRankingLoss for details. multilabel_margin_loss torch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source] See MultiLabelMarginLoss for details. multilabel_soft_margin_loss torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None) → Tensor [source] See MultiLabelSoftMarginLoss for details. multi_margin_loss torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') [source] multi_margin_loss(input, target, p=1, margin=1, weight=None, size_average=None, reduce=None, reduction=’mean’) -> Tensor See MultiMarginLoss for details. nll_loss torch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source] The negative log likelihood loss. See NLLLoss for details. Parameters input – (N,C)(N, C) where C = number of classes or (N,C,H,W)(N, C, H, W) in case of 2D Loss, or (N,C,d1,d2,...,dK)(N, C, d_1, d_2, ..., d_K) where K≥1K \geq 1 in the case of K-dimensional loss. target – (N)(N) where each value is 0≤targets[i]≤C−10 \leq \text{targets}[i] \leq C-1 , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K) where K≥1K \geq 1 for K-dimensional loss. weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. 
Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' Example: >>> # input is of size N x C = 3 x 5 >>> input = torch.randn(3, 5, requires_grad=True) >>> # each element in target has to have 0 <= value < C >>> target = torch.tensor([1, 0, 4]) >>> output = F.nll_loss(F.log_softmax(input, dim=1), target) >>> output.backward() smooth_l1_loss torch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0) [source] Function that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. See SmoothL1Loss for details. soft_margin_loss torch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source] See SoftMarginLoss for details. triplet_margin_loss torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean') [source] See TripletMarginLoss for details. triplet_margin_with_distance_loss torch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean') [source] See TripletMarginWithDistanceLoss for details. Vision functions pixel_shuffle torch.nn.functional.pixel_shuffle(input, upscale_factor) → Tensor Rearranges elements in a tensor of shape (∗,C×r2,H,W)(*, C \times r^2, H, W) to a tensor of shape (∗,C,H×r,W×r)(*, C, H \times r, W \times r) , where r is the upscale_factor. See PixelShuffle for details. Parameters input (Tensor) – the input tensor upscale_factor (int) – factor to increase spatial resolution by Examples: >>> input = torch.randn(1, 9, 4, 4) >>> output = torch.nn.functional.pixel_shuffle(input, 3) >>> print(output.size()) torch.Size([1, 1, 12, 12]) pixel_unshuffle torch.nn.functional.pixel_unshuffle(input, downscale_factor) → Tensor Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (∗,C,H×r,W×r)(*, C, H \times r, W \times r) to a tensor of shape (∗,C×r2,H,W)(*, C \times r^2, H, W) , where r is the downscale_factor. See PixelUnshuffle for details. Parameters input (Tensor) – the input tensor downscale_factor (int) – factor to decrease spatial resolution by Examples: >>> input = torch.randn(1, 1, 12, 12) >>> output = torch.nn.functional.pixel_unshuffle(input, 3) >>> print(output.size()) torch.Size([1, 9, 4, 4]) pad torch.nn.functional.pad(input, pad, mode='constant', value=0) Pads tensor. Padding size: The padding size by which to pad some dimensions of input is described starting from the last dimension and moving forward. ⌊len(pad)2⌋\left\lfloor\frac{\text{len(pad)}}{2}\right\rfloor dimensions of input will be padded. For example, to pad only the last dimension of the input tensor, then pad has the form (padding_left,padding_right)(\text{padding\_left}, \text{padding\_right}) ; to pad the last 2 dimensions of the input tensor, then use (padding_left,padding_right,(\text{padding\_left}, \text{padding\_right}, padding_top,padding_bottom)\text{padding\_top}, \text{padding\_bottom}) ; to pad the last 3 dimensions, use (padding_left,padding_right,(\text{padding\_left}, \text{padding\_right}, padding_top,padding_bottom\text{padding\_top}, \text{padding\_bottom} padding_front,padding_back)\text{padding\_front}, \text{padding\_back}) .
Padding mode: See torch.nn.ConstantPad2d, torch.nn.ReflectionPad2d, and torch.nn.ReplicationPad2d for concrete examples on how each of the padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate padding is implemented for padding the last 3 dimensions of 5D input tensor, or the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Reflect padding is only implemented for padding the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background. Parameters input (Tensor) – N-dimensional tensor pad (tuple) – m-elements tuple, where m2≤\frac{m}{2} \leq input dimensions and mm is even. mode – 'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant' value – fill value for 'constant' padding. Default: 0 Examples: >>> t4d = torch.empty(3, 3, 4, 2) >>> p1d = (1, 1) # pad last dim by 1 on each side >>> out = F.pad(t4d, p1d, "constant", 0) # effectively zero padding >>> print(out.size()) torch.Size([3, 3, 4, 4]) >>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2) >>> out = F.pad(t4d, p2d, "constant", 0) >>> print(out.size()) torch.Size([3, 3, 8, 4]) >>> t4d = torch.empty(3, 3, 4, 2) >>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3) >>> out = F.pad(t4d, p3d, "constant", 0) >>> print(out.size()) torch.Size([3, 9, 7, 3]) interpolate torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None) [source] Down/up samples the input to either the given size or the given scale_factor The algorithm used for interpolation is determined by mode. Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area Parameters input (Tensor) – the input tensor size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size. scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple. mode (str) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear' | 'area'. Default: 'nearest' align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False recompute_scale_factor (bool, optional) – recompute the scale_factor for use in the interpolation calculation. When scale_factor is passed as a parameter, it is used to compute the output_size. If recompute_scale_factor is False or not specified, the passed-in scale_factor will be used in the interpolation computation. 
Otherwise, a new scale_factor will be computed based on the output and input sizes for use in the interpolation computation (i.e. the computation will be identical to if the computed output_size were passed-in explicitly). Note that when scale_factor is floating-point, the recomputed scale_factor may differ from the one passed in due to rounding and precision issues. Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. Warning When scale_factor is specified, if recompute_scale_factor=True, scale_factor is used to compute the output_size which will then be used to infer new scales for the interpolation. The default behavior for recompute_scale_factor changed to False in 1.6.0, and scale_factor is used in the interpolation calculation. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. upsample torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source] Upsamples the input to either the given size or the given scale_factor Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent with nn.functional.interpolate(...). Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. The algorithm used for upsampling is determined by mode. Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for upsampling are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only) Parameters input (Tensor) – the input tensor size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size. scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple. mode (string) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear'. Default: 'nearest' align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. 
Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. upsample_nearest torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None) [source] Upsamples the input, using nearest neighbours’ pixel values. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent with nn.functional.interpolate(..., mode='nearest'). Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional). Parameters input (Tensor) – input size (int or Tuple[int, int] or Tuple[int, int, int]) – output spatial size. scale_factor (int) – multiplier for spatial size. Has to be an integer. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. upsample_bilinear torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None) [source] Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent with nn.functional.interpolate(..., mode='bilinear', align_corners=True). Expected inputs are spatial (4 dimensional). Use upsample_trilinear for volumetric (5 dimensional) inputs. Parameters input (Tensor) – input size (int or Tuple[int, int]) – output spatial size. scale_factor (int or Tuple[int, int]) – multiplier for spatial size Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. grid_sample torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None) [source] Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. Currently, only spatial (4-D) and volumetric (5-D) input are supported. In the spatial (4-D) case, for input with shape (N,C,Hin,Win)(N, C, H_\text{in}, W_\text{in}) and grid with shape (N,Hout,Wout,2)(N, H_\text{out}, W_\text{out}, 2) , the output will have shape (N,C,Hout,Wout)(N, C, H_\text{out}, W_\text{out}) . For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies input pixel locations x and y, which are used to interpolate the output value output[n, :, h, w]. In the case of 5D inputs, grid[n, d, h, w] specifies the x, y, z pixel locations for interpolating output[n, :, d, h, w]. mode argument specifies nearest or bilinear interpolation method to sample the input pixels. grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of [-1, 1]. For example, values x = -1, y = -1 are the left-top pixel of input, and values x = 1, y = 1 are the right-bottom pixel of input. If grid has values outside the range of [-1, 1], the corresponding outputs are handled as defined by padding_mode.
Options are padding_mode="zeros": use 0 for out-of-bound grid locations, padding_mode="border": use border values for out-of-bound grid locations, padding_mode="reflection": use values at locations reflected by the border for out-of-bound grid locations. For location far away from the border, it will keep being reflected until becoming in bound, e.g., (normalized) pixel location x = -3.5 reflects by border -1 and becomes x' = 1.5, then reflects by border 1 and becomes x'' = -0.5. Note This function is often used in conjunction with affine_grid() to build Spatial Transformer Networks . Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background. Note NaN values in grid would be interpreted as -1. Parameters input (Tensor) – input of shape (N,C,Hin,Win)(N, C, H_\text{in}, W_\text{in}) (4-D case) or (N,C,Din,Hin,Win)(N, C, D_\text{in}, H_\text{in}, W_\text{in}) (5-D case) grid (Tensor) – flow-field of shape (N,Hout,Wout,2)(N, H_\text{out}, W_\text{out}, 2) (4-D case) or (N,Dout,Hout,Wout,3)(N, D_\text{out}, H_\text{out}, W_\text{out}, 3) (5-D case) mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest' | 'bicubic'. Default: 'bilinear' Note: mode='bicubic' supports only 4-D input. When mode='bilinear' and the input is 5-D, the interpolation mode used internally will actually be trilinear. However, when the input is 4-D, the interpolation mode will legitimately be bilinear. padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros' align_corners (bool, optional) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. This option parallels the align_corners option in interpolate(), and so whichever option is used here should also be used there to resize the input image before grid sampling. Default: False Returns output Tensor Return type output (Tensor) Warning When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). Note mode='bicubic' is implemented using the cubic convolution algorithm with α=−0.75\alpha=-0.75 . The constant α\alpha might be different from packages to packages. For example, PIL and OpenCV use -0.5 and -0.75 respectively. This algorithm may “overshoot” the range of values it’s interpolating. For example, it may produce negative values or values greater than 255 when interpolating input in [0, 255]. Clamp the results with :func: torch.clamp to ensure they are within the valid range. affine_grid torch.nn.functional.affine_grid(theta, size, align_corners=None) [source] Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta. Note This function is often used in conjunction with grid_sample() to build Spatial Transformer Networks . 
Parameters theta (Tensor) – input batch of affine matrices with shape (N×2×3N \times 2 \times 3 ) for 2D or (N×3×4N \times 3 \times 4 ) for 3D size (torch.Size) – the target output image size. (N×C×H×WN \times C \times H \times W for 2D or N×C×D×H×WN \times C \times D \times H \times W for 3D) Example: torch.Size((32, 3, 24, 24)) align_corners (bool, optional) – if True, consider -1 and 1 to refer to the centers of the corner pixels rather than the image corners. Refer to grid_sample() for a more complete description. A grid generated by affine_grid() should be passed to grid_sample() with the same setting for this option. Default: False Returns output Tensor of size (N×H×W×2N \times H \times W \times 2 ) Return type output (Tensor) Warning When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). Warning When align_corners = True, 2D affine transforms on 1D data and 3D affine transforms on 2D data (that is, when one of the spatial dimensions has unit size) are ill-defined, and not an intended use case. This is not a problem when align_corners = False. Up to version 1.2.0, all grid points along a unit dimension were considered arbitrarily to be at -1. From version 1.3.0, under align_corners = True all grid points along a unit dimension are considered to be at 0 (the center of the input image). DataParallel functions (multi-GPU, distributed) data_parallel torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None) [source] Evaluates module(input) in parallel across the GPUs given in device_ids. This is the functional version of the DataParallel module. Parameters module (Module) – the module to evaluate in parallel inputs (Tensor) – inputs to the module device_ids (list of int or torch.device) – GPU ids on which to replicate module output_device (list of int or torch.device) – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0]) Returns a Tensor containing the result of module(input) located on output_device
torch.nn.functional
torch.nn.functional.adaptive_avg_pool1d(input, output_size) → Tensor Applies a 1D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool1d for details and output shape. Parameters output_size – the target output size (single integer)
torch.nn.functional#torch.nn.functional.adaptive_avg_pool1d
torch.nn.functional.adaptive_avg_pool2d(input, output_size) [source] Applies a 2D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool2d for details and output shape. Parameters output_size – the target output size (single integer or double-integer tuple)
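A minimal usage sketch (input shape and target sizes below are chosen arbitrarily for illustration): the output spatial size is fixed by output_size regardless of the input size.
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 64, 10, 9)
>>> F.adaptive_avg_pool2d(x, (5, 7)).shape
torch.Size([1, 64, 5, 7])
>>> F.adaptive_avg_pool2d(x, 3).shape   # a single integer targets a square output
torch.Size([1, 64, 3, 3])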
torch.nn.functional#torch.nn.functional.adaptive_avg_pool2d
torch.nn.functional.adaptive_avg_pool3d(input, output_size) [source] Applies a 3D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool3d for details and output shape. Parameters output_size – the target output size (single integer or triple-integer tuple)
torch.nn.functional#torch.nn.functional.adaptive_avg_pool3d
torch.nn.functional.adaptive_max_pool1d(*args, **kwargs) Applies a 1D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool1d for details and output shape. Parameters output_size – the target output size (single integer) return_indices – whether to return pooling indices. Default: False
torch.nn.functional#torch.nn.functional.adaptive_max_pool1d
torch.nn.functional.adaptive_max_pool2d(*args, **kwargs) Applies a 2D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool2d for details and output shape. Parameters output_size – the target output size (single integer or double-integer tuple) return_indices – whether to return pooling indices. Default: False
torch.nn.functional#torch.nn.functional.adaptive_max_pool2d
torch.nn.functional.adaptive_max_pool3d(*args, **kwargs) Applies a 3D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool3d for details and output shape. Parameters output_size – the target output size (single integer or triple-integer tuple) return_indices – whether to return pooling indices. Default: False
torch.nn.functional#torch.nn.functional.adaptive_max_pool3d
torch.nn.functional.affine_grid(theta, size, align_corners=None) [source] Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta. Note This function is often used in conjunction with grid_sample() to build Spatial Transformer Networks . Parameters theta (Tensor) – input batch of affine matrices with shape (N×2×3N \times 2 \times 3 ) for 2D or (N×3×4N \times 3 \times 4 ) for 3D size (torch.Size) – the target output image size. (N×C×H×WN \times C \times H \times W for 2D or N×C×D×H×WN \times C \times D \times H \times W for 3D) Example: torch.Size((32, 3, 24, 24)) align_corners (bool, optional) – if True, consider -1 and 1 to refer to the centers of the corner pixels rather than the image corners. Refer to grid_sample() for a more complete description. A grid generated by affine_grid() should be passed to grid_sample() with the same setting for this option. Default: False Returns output Tensor of size (N×H×W×2N \times H \times W \times 2 ) Return type output (Tensor) Warning When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). Warning When align_corners = True, 2D affine transforms on 1D data and 3D affine transforms on 2D data (that is, when one of the spatial dimensions has unit size) are ill-defined, and not an intended use case. This is not a problem when align_corners = False. Up to version 1.2.0, all grid points along a unit dimension were considered arbitrarily to be at -1. From version 1.3.0, under align_corners = True all grid points along a unit dimension are considered to be at 0 (the center of the input image).
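As an illustrative sketch (batch size and image size chosen arbitrarily), an identity affine matrix produces a grid of shape (N, H, W, 2) that can be fed directly to grid_sample():
>>> import torch
>>> import torch.nn.functional as F
>>> theta = torch.tensor([[[1., 0., 0.],
...                        [0., 1., 0.]]])              # (1, 2, 3) identity transform
>>> grid = F.affine_grid(theta, (1, 3, 4, 5), align_corners=False)
>>> grid.shape
torch.Size([1, 4, 5, 2])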
torch.nn.functional#torch.nn.functional.affine_grid
torch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False) [source] Applies alpha dropout to the input. See AlphaDropout for details.
torch.nn.functional#torch.nn.functional.alpha_dropout
torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Tensor Applies a 1D average pooling over an input signal composed of several input planes. See AvgPool1d for details and output shape. Parameters input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW) kernel_size – the size of the window. Can be a single number or a tuple (kW,) stride – the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0 ceil_mode – when True, will use ceil instead of floor to compute the output shape. Default: False count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True Examples: >>> # pool of window of size=3, stride=2 >>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32) >>> F.avg_pool1d(input, kernel_size=3, stride=2) tensor([[[ 2., 4., 6.]]])
torch.nn.functional#torch.nn.functional.avg_pool1d
torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor Applies 2D average-pooling operation in kH×kWkH \times kW regions by step size sH×sWsH \times sW steps. The number of output features is equal to the number of input planes. See AvgPool2d for details and output shape. Parameters input – input tensor (minibatch,in_channels,iH,iW)(\text{minibatch} , \text{in\_channels} , iH , iW) kernel_size – size of the pooling region. Can be a single number or a tuple (kH, kW) stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default: kernel_size padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0 ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape. Default: False count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None
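A small worked example (values chosen by hand so the averages are easy to verify); with the default stride equal to kernel_size, the 4×4 input is reduced to four non-overlapping 2×2 averages:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
>>> F.avg_pool2d(x, kernel_size=2)
tensor([[[[ 2.5000,  4.5000],
          [10.5000, 12.5000]]]])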
torch.nn.functional#torch.nn.functional.avg_pool2d
torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor Applies 3D average-pooling operation in kT×kH×kWkT \times kH \times kW regions by step size sT×sH×sWsT \times sH \times sW steps. The number of output features is equal to ⌊input planessT⌋\lfloor\frac{\text{input planes}}{sT}\rfloor . See AvgPool3d for details and output shape. Parameters input – input tensor (minibatch,in_channels,iT,iH,iW)(\text{minibatch} , \text{in\_channels} , iT , iH , iW) kernel_size – size of the pooling region. Can be a single number or a tuple (kT, kH, kW) stride – stride of the pooling operation. Can be a single number or a tuple (sT, sH, sW). Default: kernel_size padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0 ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape count_include_pad – when True, will include the zero-padding in the averaging calculation divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None
torch.nn.functional#torch.nn.functional.avg_pool3d
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05) [source] Applies Batch Normalization for each channel across a batch of data. See BatchNorm1d, BatchNorm2d, BatchNorm3d for details.
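A minimal sketch of the functional form (shapes arbitrary): unlike the BatchNorm modules, the running statistics here are plain tensors owned by the caller; with training=True they are updated in place.
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(8, 3, 4, 4)              # (N, C, H, W)
>>> running_mean = torch.zeros(3)
>>> running_var = torch.ones(3)
>>> out = F.batch_norm(x, running_mean, running_var, training=True)   # updates running stats in place
>>> out.shape
torch.Size([8, 3, 4, 4])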
torch.nn.functional#torch.nn.functional.batch_norm
torch.nn.functional.bilinear(input1, input2, weight, bias=None) [source] Applies a bilinear transformation to the incoming data: y=x1TAx2+by = x_1^T A x_2 + b Shape: input1: (N,∗,Hin1)(N, *, H_{in1}) where Hin1=in1_featuresH_{in1}=\text{in1\_features} and ∗* means any number of additional dimensions. All but the last dimension of the inputs should be the same. input2: (N,∗,Hin2)(N, *, H_{in2}) where Hin2=in2_featuresH_{in2}=\text{in2\_features} weight: (out_features,in1_features,in2_features)(\text{out\_features}, \text{in1\_features}, \text{in2\_features}) bias: (out_features)(\text{out\_features}) output: (N,∗,Hout)(N, *, H_{out}) where Hout=out_featuresH_{out}=\text{out\_features} and all but the last dimension are the same shape as the input.
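A shape-level sketch (feature sizes chosen arbitrarily) of the bilinear form y = x1^T A x2 + b:
>>> import torch
>>> import torch.nn.functional as F
>>> x1 = torch.randn(128, 20)            # (N, in1_features)
>>> x2 = torch.randn(128, 30)            # (N, in2_features)
>>> W = torch.randn(40, 20, 30)          # (out_features, in1_features, in2_features)
>>> b = torch.randn(40)
>>> F.bilinear(x1, x2, W, b).shape
torch.Size([128, 40])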
torch.nn.functional#torch.nn.functional.bilinear
torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean') [source] Function that measures the Binary Cross Entropy between the target and the output. See BCELoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input weight (Tensor, optional) – a manual rescaling weight; if provided, it’s repeated to match input tensor shape size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' Examples: >>> input = torch.randn((3, 2), requires_grad=True) >>> target = torch.rand((3, 2), requires_grad=False) >>> loss = F.binary_cross_entropy(F.sigmoid(input), target) >>> loss.backward()
torch.nn.functional#torch.nn.functional.binary_cross_entropy
torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) [source] Function that measures Binary Cross Entropy between target and output logits. See BCEWithLogitsLoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input weight (Tensor, optional) – a manual rescaling weight; if provided, it’s repeated to match input tensor shape size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes. Examples: >>> input = torch.randn(3, requires_grad=True) >>> target = torch.empty(3).random_(2) >>> loss = F.binary_cross_entropy_with_logits(input, target) >>> loss.backward()
torch.nn.functional#torch.nn.functional.binary_cross_entropy_with_logits
torch.nn.functional.celu(input, alpha=1., inplace=False) → Tensor [source] Applies element-wise, CELU(x)=max⁡(0,x)+min⁡(0,α∗(exp⁡(x/α)−1))\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)) . See CELU for more details.
torch.nn.functional#torch.nn.functional.celu
torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 1D convolution over an input signal composed of several input planes. This operator supports TensorFloat32. See Conv1d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW) weight – filters of shape (out_channels,in_channelsgroups,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1 padding – implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0 dilation – the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3) >>> inputs = torch.randn(20, 16, 50) >>> F.conv1d(inputs, filters)
torch.nn.functional#torch.nn.functional.conv1d
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 2D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv2d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iH,iW)(\text{minibatch} , \text{in\_channels} , iH , iW) weight – filters of shape (out_channels,in_channelsgroups,kH,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kH , kW) bias – optional bias tensor of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1 padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 Examples: >>> # With square kernels and equal stride >>> filters = torch.randn(8,4,3,3) >>> inputs = torch.randn(1,4,5,5) >>> F.conv2d(inputs, filters, padding=1)
torch.nn.functional#torch.nn.functional.conv2d
torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 3D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv3d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iT,iH,iW)(\text{minibatch} , \text{in\_channels} , iT , iH , iW) weight – filters of shape (out_channels,in_channelsgroups,kT,kH,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kT , kH , kW) bias – optional bias tensor of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1 padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0 dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3, 3, 3) >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> F.conv3d(inputs, filters)
torch.nn.functional#torch.nn.functional.conv3d
torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose1d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW) weight – filters of shape (in_channels,out_channelsgroups,kW)(\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1 Examples: >>> inputs = torch.randn(20, 16, 50) >>> weights = torch.randn(16, 33, 5) >>> F.conv_transpose1d(inputs, weights)
torch.nn.functional#torch.nn.functional.conv_transpose1d
torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose2d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iH,iW)(\text{minibatch} , \text{in\_channels} , iH , iW) weight – filters of shape (in_channels,out_channelsgroups,kH,kW)(\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kH , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 Examples: >>> # With square kernels and equal stride >>> inputs = torch.randn(1, 4, 5, 5) >>> weights = torch.randn(4, 8, 3, 3) >>> F.conv_transpose2d(inputs, weights, padding=1)
torch.nn.functional#torch.nn.functional.conv_transpose2d
torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose3d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch,in_channels,iT,iH,iW)(\text{minibatch} , \text{in\_channels} , iT , iH , iW) weight – filters of shape (in_channels,out_channelsgroups,kT,kH,kW)(\text{in\_channels} , \frac{\text{out\_channels}}{\text{groups}} , kT , kH , kW) bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0 groups – split input into groups, in_channels\text{in\_channels} should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1 Examples: >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> weights = torch.randn(16, 33, 3, 3, 3) >>> F.conv_transpose3d(inputs, weights)
torch.nn.functional#torch.nn.functional.conv_transpose3d
torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See CosineEmbeddingLoss for details.
torch.nn.functional#torch.nn.functional.cosine_embedding_loss
torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) → Tensor Returns cosine similarity between x1 and x2, computed along dim. similarity=x1⋅x2max⁡(∥x1∥2⋅∥x2∥2,ϵ)\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} Parameters x1 (Tensor) – First input. x2 (Tensor) – Second input (of size matching x1). dim (int, optional) – Dimension of vectors. Default: 1 eps (float, optional) – Small value to avoid division by zero. Default: 1e-8 Shape: Input: (∗1,D,∗2)(\ast_1, D, \ast_2) where D is at position dim. Output: (∗1,∗2)(\ast_1, \ast_2) where 1 is at position dim. Example: >>> input1 = torch.randn(100, 128) >>> input2 = torch.randn(100, 128) >>> output = F.cosine_similarity(input1, input2) >>> print(output)
torch.nn.functional#torch.nn.functional.cosine_similarity
torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source] This criterion combines log_softmax and nll_loss in a single function. See CrossEntropyLoss for details. Parameters input (Tensor) – (N,C)(N, C) where C = number of classes or (N,C,H,W)(N, C, H, W) in case of 2D Loss, or (N,C,d1,d2,...,dK)(N, C, d_1, d_2, ..., d_K) where K≥1K \geq 1 in the case of K-dimensional loss. target (Tensor) – (N)(N) where each value is 0≤targets[i]≤C−10 \leq \text{targets}[i] \leq C-1 , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K) where K≥1K \geq 1 for K-dimensional loss. weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean' Examples: >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.randint(5, (3,), dtype=torch.int64) >>> loss = F.cross_entropy(input, target) >>> loss.backward()
torch.nn.functional#torch.nn.functional.cross_entropy
torch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False) [source] The Connectionist Temporal Classification loss. See CTCLoss for details. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters log_probs – (T,N,C)(T, N, C) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()). targets – (N,S)(N, S) or (sum(target_lengths)). Targets cannot be blank. In the second form, the targets are assumed to be concatenated. input_lengths – (N)(N) . Lengths of the inputs (must each be ≤T\leq T ) target_lengths – (N)(N) . Lengths of the targets blank (int, optional) – Blank label. Default 00 . reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken, 'sum': the output will be summed. Default: 'mean' zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Example: >>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_() >>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long) >>> input_lengths = torch.full((16,), 50, dtype=torch.long) >>> target_lengths = torch.randint(10,30,(16,), dtype=torch.long) >>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths) >>> loss.backward()
torch.nn.functional#torch.nn.functional.ctc_loss
torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False) [source] During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. See Dropout for details. Parameters p – probability of an element to be zeroed. Default: 0.5 training – apply dropout if True. Default: True inplace – If set to True, will do this operation in-place. Default: False
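A small sketch illustrating the training flag and the rescaling of kept elements (the input is all ones so the effect is easy to see):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.ones(2, 5)
>>> F.dropout(x, p=0.5, training=False)       # identity when training=False
tensor([[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]])
>>> out = F.dropout(x, p=0.5, training=True)  # random mask; kept elements are scaled by 1/(1-p), i.e. equal 2.0 here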
torch.nn.functional#torch.nn.functional.dropout
torch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False) [source] Randomly zero out entire channels (a channel is a 2D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 2D tensor input[i,j]\text{input}[i, j] ) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout2d for details. Parameters p – probability of a channel to be zeroed. Default: 0.5 training – apply dropout if True. Default: True inplace – If set to True, will do this operation in-place. Default: False
torch.nn.functional#torch.nn.functional.dropout2d
torch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False) [source] Randomly zero out entire channels (a channel is a 3D feature map, e.g., the jj -th channel of the ii -th sample in the batched input is a 3D tensor input[i,j]\text{input}[i, j] ) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout3d for details. Parameters p – probability of a channel to be zeroed. Default: 0.5 training – apply dropout if True. Default: True inplace – If set to True, will do this operation in-place. Default: False
torch.nn.functional#torch.nn.functional.dropout3d
torch.nn.functional.elu(input, alpha=1.0, inplace=False) [source] Applies element-wise, ELU(x)=max⁡(0,x)+min⁡(0,α∗(exp⁡(x)−1))\text{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1)) . See ELU for more details.
torch.nn.functional#torch.nn.functional.elu
torch.nn.functional.elu_(input, alpha=1.) → Tensor In-place version of elu().
torch.nn.functional#torch.nn.functional.elu_
torch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False) [source] A simple lookup table that looks up embeddings in a fixed dictionary and size. This module is often used to retrieve word embeddings using indices. The input to the module is a list of indices, and the embedding matrix, and the output is the corresponding word embeddings. See torch.nn.Embedding for more details. Parameters input (LongTensor) – Tensor containing indices into the embedding matrix weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size padding_idx (int, optional) – If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index. max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place. norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2. scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. sparse (bool, optional) – If True, gradient w.r.t. weight will be a sparse tensor. See Notes under torch.nn.Embedding for more details regarding sparse gradients. Shape: Input: LongTensor of arbitrary shape containing the indices to extract Weight: Embedding matrix of floating point type with shape (V, embedding_dim), where V = maximum index + 1 and embedding_dim = the embedding size Output: (*, embedding_dim), where * is the input shape Examples: >>> # a batch of 2 samples of 4 indices each >>> input = torch.tensor([[1,2,4,5],[4,3,2,9]]) >>> # an embedding matrix containing 10 tensors of size 3 >>> embedding_matrix = torch.rand(10, 3) >>> F.embedding(input, embedding_matrix) tensor([[[ 0.8490, 0.9625, 0.6753], [ 0.9666, 0.7761, 0.6108], [ 0.6246, 0.9751, 0.3618], [ 0.4161, 0.2419, 0.7383]], [[ 0.6246, 0.9751, 0.3618], [ 0.0237, 0.7794, 0.0528], [ 0.9666, 0.7761, 0.6108], [ 0.3385, 0.8612, 0.1867]]]) >>> # example with padding_idx >>> weights = torch.rand(10, 3) >>> weights[0, :].zero_() >>> embedding_matrix = weights >>> input = torch.tensor([[0,2,0,5]]) >>> F.embedding(input, embedding_matrix, padding_idx=0) tensor([[[ 0.0000, 0.0000, 0.0000], [ 0.5609, 0.5384, 0.8720], [ 0.0000, 0.0000, 0.0000], [ 0.6262, 0.2438, 0.7471]]])
torch.nn.functional#torch.nn.functional.embedding
torch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False) [source] Computes sums, means or maxes of bags of embeddings, without instantiating the intermediate embeddings. See torch.nn.EmbeddingBag for more details. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters input (LongTensor) – Tensor containing bags of indices into the embedding matrix weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size offsets (LongTensor, optional) – Only used when input is 1D. offsets determines the starting index position of each bag (sequence) in input. max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place. norm_type (float, optional) – The p in the p-norm to compute for the max_norm option. Default 2. scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. Note: this option is not supported when mode="max". mode (string, optional) – "sum", "mean" or "max". Specifies the way to reduce the bag. Default: "mean" sparse (bool, optional) – if True, gradient w.r.t. weight will be a sparse tensor. See Notes under torch.nn.Embedding for more details regarding sparse gradients. Note: this option is not supported when mode="max". per_sample_weights (Tensor, optional) – a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated as having the same offsets, if those are not None. include_last_offset (bool, optional) – if True, the size of offsets is equal to the number of bags + 1. The last element is the size of the input, or the ending index position of the last bag. Shape: input (LongTensor) and offsets (LongTensor, optional) If input is 2D of shape (B, N), it will be treated as B bags (sequences) each of fixed length N, and this will return B values aggregated in a way depending on the mode. offsets is ignored and required to be None in this case. If input is 1D of shape (N), it will be treated as a concatenation of multiple bags (sequences). offsets is required to be a 1D tensor containing the starting index positions of each bag in input. Therefore, for offsets of shape (B), input will be viewed as having B bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros. weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) per_sample_weights (Tensor, optional). Has the same shape as input. output: aggregated embedding values of shape (B, embedding_dim) Examples: >>> # an Embedding module containing 10 tensors of size 3 >>> embedding_matrix = torch.rand(10, 3) >>> # a batch of 2 samples of 4 indices each >>> input = torch.tensor([1,2,4,5,4,3,2,9]) >>> offsets = torch.tensor([0,4]) >>> F.embedding_bag(embedding_matrix, input, offsets) tensor([[ 0.3397, 0.3552, 0.5545], [ 0.5893, 0.4386, 0.5882]])
torch.nn.functional#torch.nn.functional.embedding_bag
torch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False) [source] Randomly masks out entire channels (a channel is a feature map, e.g. the jj -th channel of the ii -th sample in the batch input is a tensor input[i,j]\text{input}[i, j] ) of the input tensor. Instead of setting activations to zero, as in regular Dropout, the activations are set to the negative saturation value of the SELU activation function. Each element will be masked independently on every forward call with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit variance. See FeatureAlphaDropout for details. Parameters p – dropout probability of a channel to be zeroed. Default: 0.5 training – apply dropout if True. Default: False inplace – If set to True, will do this operation in-place. Default: False
torch.nn.functional#torch.nn.functional.feature_alpha_dropout
torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1) [source] Combines an array of sliding local blocks into a large containing tensor. Warning Currently, only 3-D output tensors (unfolded batched image-like tensors) are supported. See torch.nn.Fold for details.
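An illustrative round trip with unfold() (shapes chosen arbitrarily): with stride equal to kernel_size the extracted blocks do not overlap, so folding them back reconstructs the input exactly.
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 4, 4)
>>> blocks = F.unfold(x, kernel_size=2, stride=2)   # (1, 3*2*2, 4): one column per 2x2 block
>>> blocks.shape
torch.Size([1, 12, 4])
>>> y = F.fold(blocks, output_size=(4, 4), kernel_size=2, stride=2)
>>> torch.equal(x, y)
True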
torch.nn.functional#torch.nn.functional.fold
torch.nn.functional.gelu(input) → Tensor [source] Applies element-wise the function GELU(x)=x∗Φ(x)\text{GELU}(x) = x * \Phi(x) where Φ(x)\Phi(x) is the Cumulative Distribution Function for Gaussian Distribution. See Gaussian Error Linear Units (GELUs).
torch.nn.functional#torch.nn.functional.gelu
torch.nn.functional.glu(input, dim=-1) → Tensor [source] The gated linear unit. Computes: GLU(a,b)=a⊗σ(b)\text{GLU}(a, b) = a \otimes \sigma(b) where input is split in half along dim to form a and b, σ\sigma is the sigmoid function and ⊗\otimes is the element-wise product between matrices. See Language Modeling with Gated Convolutional Networks. Parameters input (Tensor) – input tensor dim (int) – dimension on which to split the input. Default: -1
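A quick shape sketch (sizes arbitrary): the dimension given by dim is split in half, so it must be even.
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(4, 6)
>>> F.glu(x, dim=-1).shape    # first half gated by the sigmoid of the second half
torch.Size([4, 3])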
torch.nn.functional#torch.nn.functional.glu
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None) [source] Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. Currently, only spatial (4-D) and volumetric (5-D) input are supported. In the spatial (4-D) case, for input with shape (N,C,Hin,Win)(N, C, H_\text{in}, W_\text{in}) and grid with shape (N,Hout,Wout,2)(N, H_\text{out}, W_\text{out}, 2) , the output will have shape (N,C,Hout,Wout)(N, C, H_\text{out}, W_\text{out}) . For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies input pixel locations x and y, which are used to interpolate the output value output[n, :, h, w]. In the case of 5D inputs, grid[n, d, h, w] specifies the x, y, z pixel locations for interpolating output[n, :, d, h, w]. mode argument specifies nearest or bilinear interpolation method to sample the input pixels. grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of [-1, 1]. For example, values x = -1, y = -1 is the left-top pixel of input, and values x = 1, y = 1 is the right-bottom pixel of input. If grid has values outside the range of [-1, 1], the corresponding outputs are handled as defined by padding_mode. Options are padding_mode="zeros": use 0 for out-of-bound grid locations, padding_mode="border": use border values for out-of-bound grid locations, padding_mode="reflection": use values at locations reflected by the border for out-of-bound grid locations. For location far away from the border, it will keep being reflected until becoming in bound, e.g., (normalized) pixel location x = -3.5 reflects by border -1 and becomes x' = 1.5, then reflects by border 1 and becomes x'' = -0.5. Note This function is often used in conjunction with affine_grid() to build Spatial Transformer Networks . Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background. Note NaN values in grid would be interpreted as -1. Parameters input (Tensor) – input of shape (N,C,Hin,Win)(N, C, H_\text{in}, W_\text{in}) (4-D case) or (N,C,Din,Hin,Win)(N, C, D_\text{in}, H_\text{in}, W_\text{in}) (5-D case) grid (Tensor) – flow-field of shape (N,Hout,Wout,2)(N, H_\text{out}, W_\text{out}, 2) (4-D case) or (N,Dout,Hout,Wout,3)(N, D_\text{out}, H_\text{out}, W_\text{out}, 3) (5-D case) mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest' | 'bicubic'. Default: 'bilinear' Note: mode='bicubic' supports only 4-D input. When mode='bilinear' and the input is 5-D, the interpolation mode used internally will actually be trilinear. However, when the input is 4-D, the interpolation mode will legitimately be bilinear. padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros' align_corners (bool, optional) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. 
This option parallels the align_corners option in interpolate(), and so whichever option is used here should also be used there to resize the input image before grid sampling. Default: False Returns output Tensor Return type output (Tensor) Warning When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). Note mode='bicubic' is implemented using the cubic convolution algorithm with α=−0.75\alpha=-0.75 . The constant α\alpha may differ from package to package. For example, PIL and OpenCV use -0.5 and -0.75 respectively. This algorithm may “overshoot” the range of values it’s interpolating. For example, it may produce negative values or values greater than 255 when interpolating input in [0, 255]. Clamp the results with torch.clamp() to ensure they are within the valid range.
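As a sanity-check sketch (input shape arbitrary), sampling an identity grid built with affine_grid() reproduces the input up to floating-point error:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 8, 8)
>>> theta = torch.tensor([[[1., 0., 0.],
...                        [0., 1., 0.]]])              # identity affine transform
>>> grid = F.affine_grid(theta, x.shape, align_corners=False)
>>> y = F.grid_sample(x, grid, mode='bilinear', align_corners=False)
>>> torch.allclose(x, y, atol=1e-5)
True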
torch.nn.functional#torch.nn.functional.grid_sample
torch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1) [source] Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and optionally discretizes. Parameters logits – […, num_features] unnormalized log probabilities tau – non-negative scalar temperature hard – if True, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd dim (int) – A dimension along which softmax will be computed. Default: -1. Returns Sampled tensor of same shape as logits from the Gumbel-Softmax distribution. If hard=True, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across dim. Note This function is here for legacy reasons and may be removed from nn.Functional in the future. Note The main trick for hard is to do y_hard - y_soft.detach() + y_soft. It achieves two things: - makes the output value exactly one-hot (since we add then subtract y_soft value) - makes the gradient equal to y_soft gradient (since we strip all other gradients) Examples: >>> logits = torch.randn(20, 32) >>> # Sample soft categorical using reparametrization trick: >>> F.gumbel_softmax(logits, tau=1, hard=False) >>> # Sample hard categorical using "Straight-through" trick: >>> F.gumbel_softmax(logits, tau=1, hard=True)
torch.nn.functional#torch.nn.functional.gumbel_softmax
torch.nn.functional.hardshrink(input, lambd=0.5) → Tensor [source] Applies the hard shrinkage function element-wise. See Hardshrink for more details.
torch.nn.functional#torch.nn.functional.hardshrink
torch.nn.functional.hardsigmoid(input) → Tensor [source] Applies the element-wise function Hardsigmoid(x)={0if x≤−3,1if x≥+3,x/6+1/2otherwise\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ 1 & \text{if~} x \ge +3, \\ x / 6 + 1 / 2 & \text{otherwise} \end{cases} Parameters inplace – If set to True, will do this operation in-place. Default: False. See Hardsigmoid for more details.
torch.nn.functional#torch.nn.functional.hardsigmoid
torch.nn.functional.hardswish(input, inplace=False) [source] Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3. Hardswish(x)={0if x≤−3,xif x≥+3,x⋅(x+3)/6otherwise\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 & \text{otherwise} \end{cases} See Hardswish for more details.
torch.nn.functional#torch.nn.functional.hardswish
torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Tensor [source] Applies the HardTanh function element-wise. See Hardtanh for more details.
torch.nn.functional#torch.nn.functional.hardtanh
torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Tensor In-place version of hardtanh().
torch.nn.functional#torch.nn.functional.hardtanh_
torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See HingeEmbeddingLoss for details.
torch.nn.functional#torch.nn.functional.hinge_embedding_loss
torch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05) [source] Applies Instance Normalization for each channel in each data sample in a batch. See InstanceNorm1d, InstanceNorm2d, InstanceNorm3d for details.
torch.nn.functional#torch.nn.functional.instance_norm
torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None) [source] Down/up samples the input to either the given size or the given scale_factor. The algorithm used for interpolation is determined by mode. Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area. Parameters input (Tensor) – the input tensor size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size. scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple. mode (str) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear' | 'area'. Default: 'nearest' align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False recompute_scale_factor (bool, optional) – recompute the scale_factor for use in the interpolation calculation. When scale_factor is passed as a parameter, it is used to compute the output_size. If recompute_scale_factor is False or not specified, the passed-in scale_factor will be used in the interpolation computation. Otherwise, a new scale_factor will be computed based on the output and input sizes for use in the interpolation computation (i.e. the computation will be identical to if the computed output_size were passed-in explicitly). Note that when scale_factor is floating-point, the recomputed scale_factor may differ from the one passed in due to rounding and precision issues. Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. Warning When scale_factor is specified, if recompute_scale_factor=True, scale_factor is used to compute the output_size which will then be used to infer new scales for the interpolation. The default behavior for recompute_scale_factor changed to False in 1.6.0, and scale_factor is used in the interpolation calculation. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
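Two minimal calls (input shape arbitrary) showing the size and scale_factor forms:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 16, 16)
>>> F.interpolate(x, scale_factor=2, mode='nearest').shape
torch.Size([1, 3, 32, 32])
>>> F.interpolate(x, size=(24, 20), mode='bilinear', align_corners=False).shape
torch.Size([1, 3, 24, 20])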
torch.nn.functional#torch.nn.functional.interpolate
torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False) [source] The Kullback-Leibler divergence Loss. See KLDivLoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied 'batchmean': the sum of the output will be divided by the batchsize 'sum': the output will be summed 'mean': the output will be divided by the number of elements in the output Default: 'mean' log_target (bool) – A flag indicating whether target is passed in the log space. It is recommended to pass certain distributions (like softmax) in the log space to avoid numerical issues caused by explicit log. Default: False Note size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Note reduction = 'mean' doesn’t return the true KL divergence value, please use reduction = 'batchmean' which aligns with the KL math definition. In the next major release, 'mean' will be changed to be the same as 'batchmean'.
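A minimal sketch (shapes arbitrary): input is expected in log-space, target as probabilities unless log_target=True, and reduction='batchmean' gives the mathematically conventional value.
>>> import torch
>>> import torch.nn.functional as F
>>> input = F.log_softmax(torch.randn(3, 5), dim=1)    # log-probabilities
>>> target = F.softmax(torch.randn(3, 5), dim=1)       # probabilities
>>> loss = F.kl_div(input, target, reduction='batchmean')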
torch.nn.functional#torch.nn.functional.kl_div
torch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source] Function that takes the mean element-wise absolute value difference. See L1Loss for details.
torch.nn.functional#torch.nn.functional.l1_loss
torch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05) [source] Applies Layer Normalization for last certain number of dimensions. See LayerNorm for details.
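A minimal sketch (shapes arbitrary) normalizing over the last dimension:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(2, 5, 10)
>>> out = F.layer_norm(x, (10,))    # per-position mean ~0 and variance ~1 over the last dim
>>> out.shape
torch.Size([2, 5, 10])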
torch.nn.functional#torch.nn.functional.layer_norm
torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Tensor [source] Applies element-wise, LeakyReLU(x)=max⁡(0,x)+negative_slope∗min⁡(0,x)\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x) See LeakyReLU for more details.
torch.nn.functional#torch.nn.functional.leaky_relu
torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Tensor In-place version of leaky_relu().
torch.nn.functional#torch.nn.functional.leaky_relu_
torch.nn.functional.linear(input, weight, bias=None) [source] Applies a linear transformation to the incoming data: y=xAT+by = xA^T + b . This operator supports TensorFloat32. Shape: Input: (N,∗,in_features)(N, *, in\_features) N is the batch size, * means any number of additional dimensions Weight: (out_features,in_features)(out\_features, in\_features) Bias: (out_features)(out\_features) Output: (N,∗,out_features)(N, *, out\_features)
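A shape-level sketch (feature sizes arbitrary); note the weight is stored as (out_features, in_features) and transposed inside the call:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(128, 20)
>>> W = torch.randn(30, 20)          # (out_features, in_features)
>>> b = torch.randn(30)
>>> F.linear(x, W, b).shape
torch.Size([128, 30])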
torch.nn.functional#torch.nn.functional.linear
torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0) [source] Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels. See LocalResponseNorm for details.
torch.nn.functional#torch.nn.functional.local_response_norm
torch.nn.functional.logsigmoid(input) → Tensor Applies element-wise LogSigmoid(xi)=log⁡(11+exp⁡(−xi))\text{LogSigmoid}(x_i) = \log \left(\frac{1}{1 + \exp(-x_i)}\right) See LogSigmoid for more details.
torch.nn.functional#torch.nn.functional.logsigmoid
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) [source] Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly. See LogSoftmax for more details. Parameters input (Tensor) – input dim (int) – A dimension along which log_softmax will be computed. dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
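As an illustration of the log_softmax + nll_loss pairing (shapes arbitrary), applying nll_loss to the log_softmax output matches cross_entropy on the raw logits:
>>> import torch
>>> import torch.nn.functional as F
>>> logits = torch.randn(3, 5)
>>> target = torch.tensor([0, 2, 4])
>>> log_probs = F.log_softmax(logits, dim=1)
>>> torch.allclose(F.nll_loss(log_probs, target), F.cross_entropy(logits, target))
True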
torch.nn.functional#torch.nn.functional.log_softmax
torch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 1D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool1d for details.
torch.nn.functional#torch.nn.functional.lp_pool1d
torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 2D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool2d for details.
torch.nn.functional#torch.nn.functional.lp_pool2d
torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See MarginRankingLoss for details.
torch.nn.functional#torch.nn.functional.margin_ranking_loss
torch.nn.functional.max_pool1d(*args, **kwargs) Applies a 1D max pooling over an input signal composed of several input planes. See MaxPool1d for details.
torch.nn.functional#torch.nn.functional.max_pool1d
torch.nn.functional.max_pool2d(*args, **kwargs) Applies a 2D max pooling over an input signal composed of several input planes. See MaxPool2d for details.
torch.nn.functional#torch.nn.functional.max_pool2d
torch.nn.functional.max_pool3d(*args, **kwargs) Applies a 3D max pooling over an input signal composed of several input planes. See MaxPool3d for details.
torch.nn.functional#torch.nn.functional.max_pool3d
torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool1d. See MaxUnpool1d for details.
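A small worked example (values chosen by hand): the indices returned by max_pool1d(..., return_indices=True) tell max_unpool1d where to place each maximum; all other positions are filled with zeros.
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[[1., 8., 2., 7., 3., 6.]]])
>>> out, indices = F.max_pool1d(x, kernel_size=2, return_indices=True)
>>> out
tensor([[[8., 7., 6.]]])
>>> F.max_unpool1d(out, indices, kernel_size=2)
tensor([[[0., 8., 0., 7., 0., 6.]]])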
torch.nn.functional#torch.nn.functional.max_unpool1d
torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool2d. See MaxUnpool2d for details.
torch.nn.functional#torch.nn.functional.max_unpool2d