| _id | text | title |
|---|---|---|
doc_25200 |
Parameters
s{'nearest', 'bilinear'} or None
If None, use rcParams["image.interpolation"] (default: 'antialiased'). | |
doc_25201 |
Get Addition of dataframe and other, element-wise (binary operator radd). Equivalent to other + dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, add. Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **. Parameters
other:scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis:{0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level:int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value:float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing. Returns
DataFrame
Result of the arithmetic operation. See also DataFrame.add
Add DataFrames. DataFrame.sub
Subtract DataFrames. DataFrame.mul
Multiply DataFrames. DataFrame.div
Divide DataFrames (float division). DataFrame.truediv
Divide DataFrames (float division). DataFrame.floordiv
Divide DataFrames (integer division). DataFrame.mod
Calculate modulo (remainder after division). DataFrame.pow
Calculate exponential power. Notes Mismatched indices will be unioned together. Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with operator version which returns the same results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0 | |
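The examples above exercise add, sub, div and mul, but never call radd itself. A minimal sketch of the reflected form (data and names here are illustrative, not from the doc):

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4]},
                  index=['circle', 'triangle', 'rectangle'])

# radd computes other + dataframe; with a plain scalar the operator
# form is identical.
result = df.radd(1)
assert result.equals(1 + df)

# fill_value substitutes for missing data before the operation runs.
s = pd.Series({'circle': 10})          # no entries for the other rows
with_fill = df['angles'].radd(s, fill_value=0)
assert with_fill['circle'] == 10       # 10 + 0
assert with_fill['triangle'] == 3      # 0 (fill) + 3
```

Note that when both aligned locations are missing, no fill_value applies and the result stays missing, as the Parameters section states.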
doc_25202 |
Set the colors for masked (bad) values and, when norm.clip =
False, low (under) and high (over) out-of-range values. | |
doc_25203 | String containing the name of the device file. | |
doc_25204 |
Convert series to series of this class. The series is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. New in version 1.7.0. Parameters
series : series
The series instance to be converted.
domain{None, array_like}, optional
If given, the array must be of the form [beg, end], where beg and end are the endpoints of the domain. If None is given then the class domain is used. The default is None.
window{None, array_like}, optional
If given, the resulting array must be of the form [beg, end], where beg and end are the endpoints of the window. If None is given then the class window is used. The default is None. Returns
new_seriesseries
A series of the same kind as the calling class and equal to series when evaluated. See also convert
similar instance method | |
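This passage describes the `cast` classmethod on the numpy.polynomial convenience classes (its See-also points at the `convert` instance method). A short sketch, assuming `Chebyshev.cast`:

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# Convert a power-basis polynomial into the Chebyshev basis.
p = Polynomial([1, 2, 3])      # 1 + 2x + 3x^2
c = Chebyshev.cast(p)          # same polynomial, Chebyshev coefficients

# The two representations evaluate identically.
xs = np.linspace(-1, 1, 5)
assert np.allclose(p(xs), c(xs))
```

The returned series is "of the same kind as the calling class and equal to series when evaluated", exactly as the Returns section says.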
doc_25205 |
Swaps two rows of a CSC/CSR matrix in-place. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix whose two rows are to be swapped. It should be of CSR or CSC format.
m : int
Index of the row of X to be swapped.
n : int
Index of the row of X to be swapped. | |
doc_25206 |
The command to pop a skip tensor.
def forward(self, input):
    skip = yield pop('name')
    return f(input) + skip
Parameters
name (str) – name of skip tensor Returns
the skip tensor previously stashed by another layer under the same name | |
doc_25207 | Without any optional argument, this method acquires the lock unconditionally, if necessary waiting until it is released by another thread (only one thread at a time can acquire a lock — that’s their reason for existence). If the integer waitflag argument is present, the action depends on its value: if it is zero, the lock is only acquired if it can be acquired immediately without waiting, while if it is nonzero, the lock is acquired unconditionally as above. If the floating-point timeout argument is present and positive, it specifies the maximum wait time in seconds before returning. A negative timeout argument specifies an unbounded wait. You cannot specify a timeout if waitflag is zero. The return value is True if the lock is acquired successfully, False if not. Changed in version 3.2: The timeout parameter is new. Changed in version 3.2: Lock acquires can now be interrupted by signals on POSIX. | |
doc_25208 |
Set the linestyle(s) for the collection.
linestyle description
'-' or 'solid' solid line
'--' or 'dashed' dashed line
'-.' or 'dashdot' dash-dotted line
':' or 'dotted' dotted line Alternatively a dash tuple of the following form can be provided: (offset, onoffseq),
where onoffseq is an even length tuple of on and off ink in points. Parameters
lsstr or tuple or list thereof
Valid values for individual linestyles include {'-', '--', '-.', ':', '', (offset, on-off-seq)}. See Line2D.set_linestyle for a complete description. | |
doc_25209 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_25210 |
Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example. | |
doc_25211 |
Return whether to redraw after every plotting command. Note This function is only intended for use in backends. End users should use pyplot.isinteractive instead. | |
doc_25212 | A view that produces a JavaScript code library with functions that mimic the gettext interface, plus an array of translation strings. Attributes
domain
Translation domain containing strings to add in the view output. Defaults to 'djangojs'.
packages
A list of application names among installed applications. Those apps should contain a locale directory. All those catalogs plus all catalogs found in LOCALE_PATHS (which are always included) are merged into one catalog. Defaults to None, which means that all available translations from all INSTALLED_APPS are provided in the JavaScript output.
Example with default values:
from django.views.i18n import JavaScriptCatalog
urlpatterns = [
path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
]
Example with custom packages:
urlpatterns = [
path('jsi18n/myapp/',
JavaScriptCatalog.as_view(packages=['your.app.label']),
name='javascript-catalog'),
]
If your root URLconf uses i18n_patterns(), JavaScriptCatalog must also be wrapped by i18n_patterns() for the catalog to be correctly generated. Example with i18n_patterns():
from django.conf.urls.i18n import i18n_patterns
urlpatterns = i18n_patterns(
path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
) | |
doc_25213 |
Return a copy of a with its elements centered in a string of length width. Calls str.center element-wise. Parameters
aarray_like of str or unicode
widthint
The length of the resulting strings
fillcharstr or unicode, optional
The padding character to use (default is space). Returns
outndarray
Output array of str or unicode, depending on input types See also str.center | |
doc_25214 |
Get Multiplication of dataframe and other, element-wise (binary operator mul). Equivalent to dataframe * other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rmul. Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **. Parameters
other:scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis:{0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level:int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value:float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing. Returns
DataFrame
Result of the arithmetic operation. See also DataFrame.add
Add DataFrames. DataFrame.sub
Subtract DataFrames. DataFrame.mul
Multiply DataFrames. DataFrame.div
Divide DataFrames (float division). DataFrame.truediv
Divide DataFrames (float division). DataFrame.floordiv
Divide DataFrames (integer division). DataFrame.mod
Calculate modulo (remainder after division). DataFrame.pow
Calculate exponential power. Notes Mismatched indices will be unioned together. Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with operator version which returns the same results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0 | |
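The doc names rmul as the reverse version of mul but never calls it; a minimal sketch of the equivalence (data here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4]},
                  index=['circle', 'triangle', 'rectangle'])

# mul and the * operator agree for scalars ...
assert df.mul(2).equals(df * 2)

# ... and rmul is the reflected version (other * dataframe), which for a
# commutative scalar operation gives the same result.
assert df.rmul(2).equals(2 * df)
```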
doc_25215 | Rather than being a function, tuple is actually an immutable sequence type, as documented in Tuples and Sequence Types — list, tuple, range. | |
doc_25216 |
Initialize PyTorch’s CUDA state. You may need to call this explicitly if you are interacting with PyTorch via its C API, as Python bindings for CUDA functionality will not be available until this initialization takes place. Ordinary users should not need this, as all of PyTorch’s CUDA methods automatically initialize CUDA state on-demand. Does nothing if the CUDA state is already initialized. | |
doc_25217 | See Migration guide for more details. tf.compat.v1.raw_ops.ParseExampleDatasetV2
tf.raw_ops.ParseExampleDatasetV2(
input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys,
sparse_types, dense_shapes, output_types, output_shapes,
deterministic='default', ragged_keys=[], ragged_value_types=[],
ragged_split_types=[], name=None
)
Args
input_dataset A Tensor of type variant.
num_parallel_calls A Tensor of type int64.
dense_defaults A list of Tensor objects with types from: float32, int64, string. A dict mapping string keys to Tensors. The keys of the dict must match the dense_keys of the feature.
sparse_keys A list of strings. A list of string keys in the examples features. The results for these keys will be returned as SparseTensor objects.
dense_keys A list of strings. A list of Ndense string Tensors (scalars). The keys expected in the Examples features associated with dense values.
sparse_types A list of tf.DTypes from: tf.float32, tf.int64, tf.string. A list of DTypes of the same length as sparse_keys. Only tf.float32 (FloatList), tf.int64 (Int64List), and tf.string (BytesList) are supported.
dense_shapes A list of shapes (each a tf.TensorShape or list of ints). List of tuples with the same length as dense_keys. The shape of the data for each dense feature referenced by dense_keys. Required for any input tensors identified by dense_keys. Must be either fully defined, or may contain an unknown first dimension. An unknown first dimension means the feature is treated as having a variable number of blocks, and the output shape along this dimension is considered unknown at graph build time. Padding is applied for minibatch elements smaller than the maximum number of blocks for the given feature along this dimension.
output_types A list of tf.DTypes that has length >= 1. The type list for the return values.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. The list of shapes being produced.
deterministic An optional string. Defaults to "default". A string indicating the op-level determinism to use. Deterministic controls whether the dataset is allowed to return elements out of order if the next element to be returned isn't available, but a later element is. Options are "true", "false", and "default". "default" indicates that determinism should be decided by the experimental_deterministic parameter of tf.data.Options.
ragged_keys An optional list of strings. Defaults to [].
ragged_value_types An optional list of tf.DTypes from: tf.float32, tf.int64, tf.string. Defaults to [].
ragged_split_types An optional list of tf.DTypes from: tf.int32, tf.int64. Defaults to [].
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_25218 | tf.compat.v1.resource_variables_enabled()
Resource variables are improved versions of TensorFlow variables with a well-defined memory model. Accessing a resource variable reads its value, and all ops which access a specific read value of the variable are guaranteed to see the same value for that tensor. Writes which happen after a read (by having a control or data dependency on the read) are guaranteed not to affect the value of the read tensor, and similarly writes which happen before a read are guaranteed to affect the value. No guarantees are made about unordered read/write pairs. Calling tf.enable_resource_variables() lets you opt-in to this TensorFlow 2.0 feature. | |
doc_25219 | Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | |
doc_25220 | Returns an iterable of strings over the contents of the package. Do note that it is not required that all names returned by the iterator be actual resources, e.g. it is acceptable to return names for which is_resource() would be false. Allowing non-resource names to be returned is to allow for situations where how a package and its resources are stored are known a priori and the non-resource names would be useful. For instance, returning subdirectory names is allowed so that when it is known that the package and resources are stored on the file system then those subdirectory names can be used directly. The abstract method returns an iterable of no items. | |
doc_25221 |
Compute the roots of a Hermite series. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * H_i(x).\] Parameters
c1-D array_like
1-D array of coefficients. Returns
outndarray
Array of the roots of the series. If all the roots are real, then out is also real, otherwise it is complex. See also numpy.polynomial.polynomial.polyroots
numpy.polynomial.legendre.legroots
numpy.polynomial.laguerre.lagroots
numpy.polynomial.chebyshev.chebroots
numpy.polynomial.hermite_e.hermeroots
Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Hermite series basis polynomials aren’t powers of x so the results of this function may seem unintuitive. Examples >>> from numpy.polynomial.hermite import hermroots, hermfromroots
>>> coef = hermfromroots([-1, 0, 1])
>>> coef
array([0. , 0.25 , 0. , 0.125])
>>> hermroots(coef)
array([-1.00000000e+00, -1.38777878e-17, 1.00000000e+00]) | |
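The example's roots can be checked by evaluating the series back at them with hermval; each root should make the series vanish to numerical precision:

```python
import numpy as np
from numpy.polynomial.hermite import hermfromroots, hermroots, hermval

coef = hermfromroots([-1, 0, 1])
roots = hermroots(coef)

# Residuals at the computed roots are tiny, and the roots themselves
# recover the inputs up to rounding (note the ~1e-17 value standing in
# for the exact root at 0 in the doc's output).
assert np.allclose(hermval(roots, coef), 0, atol=1e-10)
assert np.allclose(np.sort(roots.real), [-1, 0, 1])
```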
doc_25222 |
Append rows of other to the end of caller, returning a new object. Columns in other that are not in the caller are added as new columns. Parameters
other:DataFrame or Series/dict-like object, or list of these
The data to append.
ignore_index:bool, default False
If True, the resulting axis will be labeled 0, 1, …, n - 1.
verify_integrity:bool, default False
If True, raise ValueError on creating index with duplicates.
sort:bool, default False
Sort columns if the columns of self and other are not aligned. Changed in version 1.0.0: Changed to not sort by default. Returns
DataFrame
A new DataFrame consisting of the rows of caller and the rows of other. See also concat
General function to concatenate DataFrame or Series objects. Notes If a list of dict/series is passed and the keys are all contained in the DataFrame’s index, the order of the columns in the resulting DataFrame will be unchanged. Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once. Examples
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'), index=['x', 'y'])
>>> df
A B
x 1 2
y 3 4
>>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'), index=['x', 'y'])
>>> df.append(df2)
A B
x 1 2
y 3 4
x 5 6
y 7 8
With ignore_index set to True:
>>> df.append(df2, ignore_index=True)
A B
0 1 2
1 3 4
2 5 6
3 7 8
The following, while not recommended methods for generating DataFrames, show two ways to generate a DataFrame from multiple data sources. Less efficient:
>>> df = pd.DataFrame(columns=['A'])
>>> for i in range(5):
... df = df.append({'A': i}, ignore_index=True)
>>> df
A
0 0
1 1
2 2
3 3
4 4
More efficient:
>>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)],
... ignore_index=True)
A
0 0
1 1
2 2
3 3
4 4 | |
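DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; the concat-based form the Notes recommend is the portable replacement. A sketch reproducing the doc's examples with pd.concat:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'), index=['x', 'y'])
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'), index=['x', 'y'])

# Equivalent of df.append(df2): rows of df2 follow rows of df.
out = pd.concat([df, df2])
assert out['A'].tolist() == [1, 3, 5, 7]

# Equivalent of df.append(df2, ignore_index=True): fresh 0..n-1 index.
out_reindexed = pd.concat([df, df2], ignore_index=True)
assert list(out_reindexed.index) == [0, 1, 2, 3]
```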
doc_25223 | The HTML ID attribute for this BoundField. Returns an empty string if Form.auto_id is False. | |
doc_25224 |
Construct this type from a string. This is useful mainly for data types that accept parameters. For example, a period dtype accepts a frequency parameter that can be set as period[H] (where H means hourly frequency). By default, in the abstract class, just the name of the type is expected. But subclasses can overwrite this method to accept parameters. Parameters
string:str
The name of the type, for example category. Returns
ExtensionDtype
Instance of the dtype. Raises
TypeError
If a class cannot be constructed from this ‘string’. Examples For extension dtypes with arguments the following may be an adequate implementation.
>>> @classmethod
... def construct_from_string(cls, string):
... pattern = re.compile(r"^my_type\[(?P<arg_name>.+)\]$")
... match = pattern.match(string)
... if match:
... return cls(**match.groupdict())
... else:
... raise TypeError(
... f"Cannot construct a '{cls.__name__}' from '{string}'"
... ) | |
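The documented pattern can be exercised end-to-end on a self-contained toy class (MyDtype is hypothetical, used only to illustrate the classmethod contract):

```python
import re


class MyDtype:
    """Hypothetical parametrized dtype illustrating construct_from_string."""

    def __init__(self, arg_name):
        self.arg_name = arg_name

    @classmethod
    def construct_from_string(cls, string):
        # Same regex shape as the doc's example implementation.
        pattern = re.compile(r"^my_type\[(?P<arg_name>.+)\]$")
        match = pattern.match(string)
        if match:
            return cls(**match.groupdict())
        raise TypeError(f"Cannot construct a '{cls.__name__}' from '{string}'")


dtype = MyDtype.construct_from_string("my_type[hourly]")
assert dtype.arg_name == "hourly"

# Non-matching strings raise TypeError, as the Raises section requires.
try:
    MyDtype.construct_from_string("other[x]")
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```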
doc_25225 |
Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale (sometimes designated “theta”), where both parameters are > 0. Note New code should use the gamma method of a default_rng() instance instead; please see the Quick Start. Parameters
shapefloat or array_like of floats
The shape of the gamma distribution. Must be non-negative.
scalefloat or array_like of floats, optional
The scale of the gamma distribution. Must be non-negative. Default is equal to 1.
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if shape and scale are both scalars. Otherwise, np.broadcast(shape, scale).size samples are drawn. Returns
outndarray or scalar
Drawn samples from the parameterized gamma distribution. See also scipy.stats.gamma
probability density function, distribution or cumulative density function, etc. Generator.gamma
which should be used for new code. Notes The probability density for the Gamma distribution is \[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\] where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. References 1
Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/GammaDistribution.html 2
Wikipedia, “Gamma distribution”, https://en.wikipedia.org/wiki/Gamma_distribution Examples Draw samples from the distribution: >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2)
>>> s = np.random.gamma(shape, scale, 1000)
Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt
>>> import scipy.special as sps
>>> count, bins, ignored = plt.hist(s, 50, density=True)
>>> y = bins**(shape-1)*(np.exp(-bins/scale) /
... (sps.gamma(shape)*scale**shape))
>>> plt.plot(bins, y, linewidth=2, color='r')
>>> plt.show() | |
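A quick statistical sanity check, using the default_rng Generator the Note recommends (seed and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
shape, scale = 2.0, 2.0          # mean = k*theta = 4, var = k*theta**2 = 8
s = rng.gamma(shape, scale, 200_000)

# Sample moments should land close to the theoretical ones.
assert abs(s.mean() - shape * scale) < 0.05
assert abs(s.var() - shape * scale**2) < 0.2
```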
doc_25226 | Stop after one line of code. | |
doc_25227 |
Guaranteed return of an indexer even when non-unique. This dispatches to get_indexer or get_indexer_non_unique as appropriate. Returns
np.ndarray[np.intp]
List of indices. Examples
>>> idx = pd.Index([np.nan, 'var1', np.nan])
>>> idx.get_indexer_for([np.nan])
array([0, 2]) | |
doc_25228 |
Calculate 2**p for all p in the input array. Parameters
xarray_like
Input values.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
outndarray or scalar
Element-wise 2 to the power x. This is a scalar if x is a scalar. See also power
Notes New in version 1.3.0. Examples >>> np.exp2([2, 3])
array([ 4., 8.]) | |
doc_25229 | This call decodes uuencoded file in_file placing the result on file out_file. If out_file is a pathname, mode is used to set the permission bits if the file must be created. Defaults for out_file and mode are taken from the uuencode header. However, if the file specified in the header already exists, a uu.Error is raised. decode() may print a warning to standard error if the input was produced by an incorrect uuencoder and Python could recover from that error. Setting quiet to a true value silences this warning. | |
doc_25230 |
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values. | |
doc_25231 | Returns the q-th quantiles of all elements in the input tensor, doing a linear interpolation when the q-th quantile lies between two data points. Parameters
input (Tensor) – the input tensor.
q (float or Tensor) – a scalar or 1D tensor of quantile values in the range [0, 1] Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.0700, -0.5446, 0.9214]])
>>> q = torch.tensor([0, 0.5, 1])
>>> torch.quantile(a, q)
tensor([-0.5446, 0.0700, 0.9214])
torch.quantile(input, q, dim=None, keepdim=False, *, out=None) → Tensor
Returns the q-th quantiles of each row of the input tensor along the dimension dim, doing a linear interpolation when the q-th quantile lies between two data points. By default, dim is None resulting in the input tensor being flattened before computation. If keepdim is True, the output dimensions are of the same size as input except in the dimensions being reduced (dim or all if dim is None) where they have size 1. Otherwise, the dimensions being reduced are squeezed (see torch.squeeze()). If q is a 1D tensor, an extra dimension is prepended to the output tensor with the same size as q which represents the quantiles. Parameters
input (Tensor) – the input tensor.
q (float or Tensor) – a scalar or 1D tensor of quantile values in the range [0, 1]
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(2, 3)
>>> a
tensor([[ 0.0795, -1.2117, 0.9765],
[ 1.1707, 0.6706, 0.4884]])
>>> q = torch.tensor([0.25, 0.5, 0.75])
>>> torch.quantile(a, q, dim=1, keepdim=True)
tensor([[[-0.5661],
[ 0.5795]],
[[ 0.0795],
[ 0.6706]],
[[ 0.5280],
[ 0.9206]]])
>>> torch.quantile(a, q, dim=1, keepdim=True).shape
torch.Size([3, 2, 1]) | |
doc_25232 |
Fit the label sets binarizer and transform the given label sets. Parameters
yiterable of iterables
A set of labels (any orderable and hashable object) for each sample. If the classes parameter is set, y will not be iterated. Returns
y_indicator{ndarray, sparse matrix} of shape (n_samples, n_classes)
A matrix such that y_indicator[i, j] = 1 if and only if classes_[j] is in y[i], and 0 otherwise. Sparse matrix will be of CSR format. | |
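The indicator semantics can be illustrated with a tiny pure-numpy reimplementation (this is not scikit-learn itself, just the contract the Returns section states):

```python
import numpy as np

# y_indicator[i, j] == 1 iff classes_[j] is in y[i].
y = [{'comedy', 'drama'}, {'drama'}, set()]
classes_ = sorted(set().union(*y))           # sorted label vocabulary

y_indicator = np.array(
    [[int(c in labels) for c in classes_] for labels in y]
)

assert classes_ == ['comedy', 'drama']
assert y_indicator.tolist() == [[1, 1], [0, 1], [0, 0]]
```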
doc_25233 | Returns the number of non-fixed hyperparameters of the kernel. | |
doc_25234 |
Return the Figure instance the artist belongs to. | |
doc_25235 | Token value for ";". | |
doc_25236 |
Build a forest of trees from the training set (X, y). Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. Returns
selfobject | |
doc_25237 | Should return True if deleting obj is permitted, False otherwise. If obj is None, should return True or False to indicate whether deleting objects of this type is permitted in general (e.g., False will be interpreted as meaning that the current user is not permitted to delete any object of this type). | |
doc_25238 |
The imaginary part of the array. Examples >>> x = np.sqrt([1+0j, 0+1j])
>>> x.imag
array([ 0. , 0.70710678])
>>> x.imag.dtype
dtype('float64') | |
doc_25239 | See Migration guide for more details. tf.compat.v1.NoGradient, tf.compat.v1.NotDifferentiable, tf.compat.v1.no_gradient
tf.no_gradient(
op_type
)
This function should not be used for operations that have a well-defined gradient that is not yet implemented. This function is only used when defining a new op type. It may be used for ops such as tf.size() that are not differentiable. For example:
tf.no_gradient("Size")
The gradient computed for 'op_type' will then propagate zeros. For ops that have a well-defined gradient but are not yet implemented, no declaration should be made, and an error must be thrown if an attempt to request its gradient is made.
Args
op_type The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.
Raises
TypeError If op_type is not a string. | |
doc_25240 |
Normalized sliding window histogram Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
n_binsint or None
The number of histogram bins. Will default to image.max() + 1 if None is passed. Returns
out3-D array (float)
Array of dimensions (H,W,N), where (H,W) are the dimensions of the input image and N is n_bins or image.max() + 1 if no value is provided as a parameter. Effectively, each pixel is a N-D feature vector that is the histogram. The sum of the elements in the feature vector will be 1, unless no pixels in the window were covered by both selem and mask, in which case all elements will be 0. Examples >>> from skimage import data
>>> from skimage.filters.rank import windowed_histogram
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> hist_img = windowed_histogram(img, disk(5)) | |
doc_25241 |
Initialize self. See help(type(self)) for accurate signature. | |
doc_25242 | This attribute is a flag which controls the interpretation of blanks in the window. When it is on, trailing blanks on each line are ignored; any cursor motion that would land the cursor on a trailing blank goes to the end of that line instead, and trailing blanks are stripped when the window contents are gathered. | |
doc_25243 | A subclass of PurePath, this class represents concrete paths of the system’s path flavour (instantiating it creates either a PosixPath or a WindowsPath): >>> Path('setup.py')
PosixPath('setup.py')
pathsegments is specified similarly to PurePath. | |
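A short sketch of the split between pure and concrete path operations (the filenames below are illustrative):

```python
from pathlib import Path

p = Path("setup.py")
# Pure-path operations inspect the path string only; no filesystem access.
print(p.suffix)                     # .py
print(p.with_name("README.md"))     # README.md
# Concrete-path operations touch the real filesystem.
print(Path(".").exists())           # True
```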
doc_25244 |
Detect missing values. Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). Returns
Series
Mask of bool values for each element in Series that indicates whether an element is an NA value. See also Series.isnull
Alias of isna. Series.notna
Boolean inverse of isna. Series.dropna
Omit axes labels with missing values. isna
Top-level isna. Examples Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.isna()
0 False
1 False
2 True
dtype: bool | |
doc_25245 | Returns a new NormalDist object where mu represents the arithmetic mean and sigma represents the standard deviation. If sigma is negative, raises StatisticsError.
mean
A read-only property for the arithmetic mean of a normal distribution.
median
A read-only property for the median of a normal distribution.
mode
A read-only property for the mode of a normal distribution.
stdev
A read-only property for the standard deviation of a normal distribution.
variance
A read-only property for the variance of a normal distribution. Equal to the square of the standard deviation.
classmethod from_samples(data)
Makes a normal distribution instance with mu and sigma parameters estimated from the data using fmean() and stdev(). The data can be any iterable and should consist of values that can be converted to type float. If data does not contain at least two elements, raises StatisticsError because it takes at least one point to estimate a central value and at least two points to estimate dispersion.
samples(n, *, seed=None)
Generates n random samples for a given mean and standard deviation. Returns a list of float values. If seed is given, creates a new instance of the underlying random number generator. This is useful for creating reproducible results, even in a multi-threading context.
pdf(x)
Using a probability density function (pdf), compute the relative likelihood that a random variable X will be near the given value x. Mathematically, it is the limit of the ratio P(x <= X < x+dx) / dx as dx approaches zero. The relative likelihood is computed as the probability of a sample occurring in a narrow range divided by the width of the range (hence the word “density”). Since the likelihood is relative to other points, its value can be greater than 1.0.
cdf(x)
Using a cumulative distribution function (cdf), compute the probability that a random variable X will be less than or equal to x. Mathematically, it is written P(X <= x).
inv_cdf(p)
Compute the inverse cumulative distribution function, also known as the quantile function or the percent-point function. Mathematically, it is written x : P(X <= x) = p. Finds the value x of the random variable X such that the probability of the variable being less than or equal to that value equals the given probability p.
overlap(other)
Measures the agreement between two normal probability distributions. Returns a value between 0.0 and 1.0 giving the overlapping area for the two probability density functions.
quantiles(n=4)
Divide the normal distribution into n continuous intervals with equal probability. Returns a list of (n - 1) cut points separating the intervals. Set n to 4 for quartiles (the default). Set n to 10 for deciles. Set n to 100 for percentiles, which gives the 99 cut points that separate the normal distribution into 100 equal sized groups.
zscore(x)
Compute the Standard Score describing x in terms of the number of standard deviations above or below the mean of the normal distribution: (x - mean) / stdev. New in version 3.9.
Instances of NormalDist support addition, subtraction, multiplication and division by a constant. These operations are used for translation and scaling. For example: >>> temperature_february = NormalDist(5, 2.5) # Celsius
>>> temperature_february * (9/5) + 32 # Fahrenheit
NormalDist(mu=41.0, sigma=4.5)
Dividing a constant by an instance of NormalDist is not supported because the result wouldn’t be normally distributed. Since normal distributions arise from additive effects of independent variables, it is possible to add and subtract two independent normally distributed random variables represented as instances of NormalDist. For example: >>> birth_weights = NormalDist.from_samples([2.5, 3.1, 2.1, 2.4, 2.7, 3.5])
>>> drug_effects = NormalDist(0.4, 0.15)
>>> combined = birth_weights + drug_effects
>>> round(combined.mean, 1)
3.1
>>> round(combined.stdev, 1)
0.5
New in version 3.8. | |
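The zscore() method added in 3.9 can be sketched alongside cdf() and inv_cdf() (the IQ-style parameters below are illustrative):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)    # illustrative parameters
print(iq.zscore(130))                # 2.0 -> (130 - 100) / 15
print(iq.cdf(100))                   # 0.5, half the mass lies below the mean
print(round(iq.inv_cdf(0.5), 1))     # 100.0, inverse of the line above
```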
doc_25246 | Is True if the Tensor is stored on the GPU, False otherwise. | |
doc_25247 |
A legend handler that shows numpoints in the legend, and allows them to be individually offset in the y-direction. Parameters
numpointsint
Number of points to show in legend entry.
yoffsetsarray of floats
Length numpoints list of y offsets for each point in legend entry. **kwargs
Keyword arguments forwarded to HandlerNpoints. get_ydata(legend, xdescent, ydescent, width, height, fontsize)[source] | |
doc_25248 |
Bases: object A class attribute that must be set before use.
__init__(init_val=None) [source]
Initialize self. See help(type(self)) for accurate signature.
instances = {(<skimage.viewer.utils.core.RequiredAttr object>, None): 'Widget is not attached to a Plugin.', (<skimage.viewer.utils.core.RequiredAttr object>, None): 'Plugin is not attached to ImageViewer'} | |
doc_25249 | See Migration guide for more details. tf.compat.v1.keras.applications.mobilenet_v3.preprocess_input
tf.keras.applications.mobilenet_v3.preprocess_input(
x, data_format=None
)
Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)
x = tf.cast(i, tf.float32)
x = tf.keras.applications.mobilenet.preprocess_input(x)
core = tf.keras.applications.MobileNet()
x = core(x)
model = tf.keras.Model(inputs=[i], outputs=[x])
image = tf.image.decode_png(tf.io.read_file('file.png'))
result = model(image)
Arguments
x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used.
data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
Returns Preprocessed numpy.array or a tf.Tensor with type float32. The inputs pixel values are scaled between -1 and 1, sample-wise.
Raises
ValueError In case of unknown data_format argument. | |
doc_25250 |
TransformerDecoder is a stack of N decoder layers Parameters
decoder_layer – an instance of the TransformerDecoderLayer() class (required).
num_layers – the number of sub-decoder-layers in the decoder (required).
norm – the layer normalization component (optional). Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
>>> memory = torch.rand(10, 32, 512)
>>> tgt = torch.rand(20, 32, 512)
>>> out = transformer_decoder(tgt, memory)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source]
Pass the inputs (and mask) through the decoder layer in turn. Parameters
tgt – the sequence to the decoder (required).
memory – the sequence from the last layer of the encoder (required).
tgt_mask – the mask for the tgt sequence (optional).
memory_mask – the mask for the memory sequence (optional).
tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
memory_key_padding_mask – the mask for the memory keys per batch (optional). Shape:
see the docs in Transformer class. | |
doc_25251 | An event loop based on the selectors module. Uses the most efficient selector available for the given platform. It is also possible to manually configure the exact selector implementation to be used: import asyncio
import selectors
selector = selectors.SelectSelector()
loop = asyncio.SelectorEventLoop(selector)
asyncio.set_event_loop(loop)
Availability: Unix, Windows. | |
doc_25252 | class typing.TextIO
class typing.BinaryIO
Generic type IO[AnyStr] and its subclasses TextIO(IO[str]) and BinaryIO(IO[bytes]) represent the types of I/O streams such as returned by open(). These types are also in the typing.io namespace. | |
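A minimal sketch of using these annotations with in-memory streams (the function names are invented for illustration):

```python
import io
from typing import BinaryIO, TextIO

def count_lines(stream: TextIO) -> int:
    # TextIO marks a text-mode stream whose reads yield str.
    return sum(1 for _ in stream)

def first_byte(stream: BinaryIO) -> int:
    # BinaryIO marks a binary-mode stream whose reads yield bytes.
    return stream.read(1)[0]

print(count_lines(io.StringIO("a\nb\n")))    # 2
print(first_byte(io.BytesIO(b"\x7fELF")))    # 127
```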
doc_25253 | See Migration guide for more details. tf.compat.v1.raw_ops.ScatterAdd
tf.raw_ops.ScatterAdd(
ref, indices, updates, use_locking=False, name=None
)
This operation computes # Scalar indices
ref[indices, ...] += updates[...]
# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]
# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
use_locking An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | |
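The duplicate-index accumulation described above can be mimicked in plain Python (this is only an analogue of the op's update rule, not the TensorFlow kernel):

```python
# ref[indices[i]] += updates[i], with duplicate indices accumulating.
ref = [0, 0, 0, 0]
indices = [0, 1, 1, 3]
updates = [10, 20, 30, 40]

for i, u in zip(indices, updates):
    ref[i] += u          # index 1 appears twice, so it receives 20 + 30

print(ref)   # [10, 50, 0, 40]
```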
doc_25254 | The view function used to serve files from static_folder. A route is automatically registered for this view at static_url_path if static_folder is set. Changelog New in version 0.5. Parameters
filename (str) – Return type
Response | |
doc_25255 | Pop the last-pushed input source from the input stack. This is the same method used internally when the lexer reaches EOF on a stacked input stream. | |
doc_25256 | static bytearray.maketrans(from, to)
This static method returns a translation table usable for bytes.translate() that will map each character in from into the character at the same position in to; from and to must both be bytes-like objects and have the same length. New in version 3.1. | |
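A quick sketch of building and applying a table (bytes.maketrans behaves identically):

```python
# Map each byte of b"abc" to the byte at the same position in b"xyz".
table = bytearray.maketrans(b"abc", b"xyz")
print(b"aabbcc-d".translate(table))   # b'xxyyzz-d'
print(len(table))                     # 256, a full one-byte translation table
```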
doc_25257 | Return the number of zero crossings in the fragment passed as an argument. | |
doc_25258 | Cancel the drag-and-drop process. | |
doc_25259 |
Create a pseudocolor plot of an unstructured triangular grid. The triangulation can be specified in one of two ways; either: tripcolor(triangulation, ...)
where triangulation is a Triangulation object, or tripcolor(x, y, ...)
tripcolor(x, y, triangles, ...)
tripcolor(x, y, triangles=triangles, ...)
tripcolor(x, y, mask=mask, ...)
tripcolor(x, y, triangles, mask=mask, ...)
in which case a Triangulation object will be created. See Triangulation for an explanation of these possibilities. The next argument must be C, the array of color values, either one per point in the triangulation if color values are defined at points, or one per triangle in the triangulation if color values are defined at triangles. If there are the same number of points and triangles in the triangulation it is assumed that color values are defined at points; to force the use of color values at triangles use the kwarg facecolors=C instead of just C. shading may be 'flat' (the default) or 'gouraud'. If shading is 'flat' and C values are defined at points, the color values used for each triangle are from the mean C of the triangle's three points. If shading is 'gouraud' then color values must be defined at points. The remaining kwargs are the same as for pcolor.
doc_25260 | A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod(): @classmethod
def tearDownClass(cls):
...
See Class and Module Fixtures for more details. New in version 3.2. | |
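A runnable sketch pairing setUpClass with tearDownClass (the test case and its shared resource are invented for illustration):

```python
import unittest

class ResourceTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once, before any test in this class.
        cls.shared = ["ready"]

    @classmethod
    def tearDownClass(cls):
        # Runs once, after all tests in this class have finished.
        cls.shared.clear()

    def test_shared_resource(self):
        self.assertEqual(self.shared, ["ready"])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ResourceTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```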
doc_25261 |
Detect missing values. Return a boolean same-sized object indicating if the values are NA. NA values, such as None, numpy.NaN or pd.NaT, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings ‘’ or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). Returns
numpy.ndarray[bool]
A boolean array of whether my values are NA. See also Index.notna
Boolean inverse of isna. Index.dropna
Omit entries with missing values. isna
Top-level isna. Series.isna
Detect missing values in Series object. Examples Show which entries in a pandas.Index are NA. The result is an array.
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.isna()
array([False, False, True])
Empty strings are not considered NA values. None is considered an NA value.
>>> idx = pd.Index(['black', '', 'red', None])
>>> idx
Index(['black', '', 'red', None], dtype='object')
>>> idx.isna()
array([False, False, False, True])
For datetimes, NaT (Not a Time) is considered as an NA value.
>>> idx = pd.DatetimeIndex([pd.Timestamp('1940-04-25'),
... pd.Timestamp(''), None, pd.NaT])
>>> idx
DatetimeIndex(['1940-04-25', 'NaT', 'NaT', 'NaT'],
dtype='datetime64[ns]', freq=None)
>>> idx.isna()
array([False, True, True, True]) | |
doc_25262 | draw a polygon polygon(surface, points, color) -> None Draws an unfilled polygon on the given surface. For a filled polygon use filled_polygon(). The adjacent coordinates in the points argument, as well as the first and last points, will be connected by line segments. e.g. For the points [(x1, y1), (x2, y2), (x3, y3)] a line segment will be drawn from (x1, y1) to (x2, y2), from (x2, y2) to (x3, y3), and from (x3, y3) to (x1, y1).
Parameters:
surface (Surface) -- surface to draw on
points (tuple(coordinate) or list(coordinate)) -- a sequence of 3 or more (x, y) coordinates, where each coordinate in the sequence must be a tuple/list/pygame.math.Vector2 of 2 ints/floats (float values will be truncated)
color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A])
Returns:
None
Return type:
NoneType
Raises:
ValueError -- if len(points) < 3 (must have at least 3 points)
IndexError -- if len(coordinate) < 2 (each coordinate must have at least 2 items) | |
doc_25263 | Header folding is controlled by the refold_source policy setting. A value is considered to be a ‘source value’ if and only if it does not have a name attribute (having a name attribute means it is a header object of some sort). If a source value needs to be refolded according to the policy, it is converted into a header object by passing the name and the value with any CR and LF characters removed to the header_factory. Folding of a header object is done by calling its fold method with the current policy. Source values are split into lines using splitlines(). If the value is not to be refolded, the lines are rejoined using the linesep from the policy and returned. The exception is lines containing non-ascii binary data. In that case the value is refolded regardless of the refold_source setting, which causes the binary data to be CTE encoded using the unknown-8bit charset. | |
doc_25264 |
First adjusts the date to fall on a valid day according to the roll rule, then applies offsets to the given dates counted in valid days. New in version 1.7.0. Parameters
datesarray_like of datetime64[D]
The array of dates to process.
offsetsarray_like of int
The array of offsets, which is broadcast with dates.
roll{‘raise’, ‘nat’, ‘forward’, ‘following’, ‘backward’, ‘preceding’, ‘modifiedfollowing’, ‘modifiedpreceding’}, optional
How to treat dates that do not fall on a valid day. The default is ‘raise’. ‘raise’ means to raise an exception for an invalid day. ‘nat’ means to return a NaT (not-a-time) for an invalid day. ‘forward’ and ‘following’ mean to take the first valid day later in time. ‘backward’ and ‘preceding’ mean to take the first valid day earlier in time. ‘modifiedfollowing’ means to take the first valid day later in time unless it is across a Month boundary, in which case to take the first valid day earlier in time. ‘modifiedpreceding’ means to take the first valid day earlier in time unless it is across a Month boundary, in which case to take the first valid day later in time.
weekmaskstr or array_like of bool, optional
A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun
holidaysarray_like of datetime64[D], optional
An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days.
busdaycalbusdaycalendar, optional
A busdaycalendar object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided.
outarray of datetime64[D], optional
If provided, this array is filled with the result. Returns
outarray of datetime64[D]
An array with a shape from broadcasting dates and offsets together, containing the dates with offsets applied. See also busdaycalendar
An object that specifies a custom set of valid days. is_busday
Returns a boolean array indicating valid days. busday_count
Counts how many valid days are in a half-open date range. Examples >>> # First business day in October 2011 (not accounting for holidays)
... np.busday_offset('2011-10', 0, roll='forward')
numpy.datetime64('2011-10-03')
>>> # Last business day in February 2012 (not accounting for holidays)
... np.busday_offset('2012-03', -1, roll='forward')
numpy.datetime64('2012-02-29')
>>> # Third Wednesday in January 2011
... np.busday_offset('2011-01', 2, roll='forward', weekmask='Wed')
numpy.datetime64('2011-01-19')
>>> # 2012 Mother's Day in Canada and the U.S.
... np.busday_offset('2012-05', 1, roll='forward', weekmask='Sun')
numpy.datetime64('2012-05-13')
>>> # First business day on or after a date
... np.busday_offset('2011-03-20', 0, roll='forward')
numpy.datetime64('2011-03-21')
>>> np.busday_offset('2011-03-22', 0, roll='forward')
numpy.datetime64('2011-03-22')
>>> # First business day after a date
... np.busday_offset('2011-03-20', 1, roll='backward')
numpy.datetime64('2011-03-21')
>>> np.busday_offset('2011-03-22', 1, roll='backward')
numpy.datetime64('2011-03-23') | |
doc_25265 | tf.compat.v1.get_local_variable(
name, shape=None, dtype=None, initializer=None, regularizer=None,
trainable=False, collections=None, caching_device=None, partitioner=None,
validate_shape=True, use_resource=None, custom_getter=None, constraint=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.compat.v1.VariableAggregation.NONE
)
Behavior is the same as in get_variable, except that variables are added to the LOCAL_VARIABLES collection and trainable is set to False. This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example: def foo():
with tf.variable_scope("foo", reuse=tf.AUTO_REUSE):
v = tf.get_variable("v", [1])
return v
v1 = foo() # Creates v.
v2 = foo() # Gets the same, existing v.
assert v1 == v2
If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a glorot_uniform_initializer will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape. Similarly, if the regularizer is None (the default), the default regularizer passed in the variable scope will be used (if that is None too, then by default no regularization is performed). If a partitioner is provided, a PartitionedVariable is returned. Accessing this object as a Tensor returns the shards concatenated along the partition axis. Some useful partitioners are available. See, e.g., variable_axis_size_partitioner and min_max_variable_partitioner.
Args
name The name of the new or existing variable.
shape Shape of the new or existing variable.
dtype Type of the new or existing variable (defaults to DT_FLOAT).
initializer Initializer for the variable if one is created. Can either be an initializer object or a Tensor. If it's a Tensor, its shape must be known unless validate_shape is False.
regularizer A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection tf.GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
collections List of graph collections keys to add the Variable to. Defaults to [GraphKeys.LOCAL_VARIABLES] (see tf.Variable).
caching_device Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements.
partitioner Optional callable that accepts a fully defined TensorShape and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned).
validate_shape If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. For this to be used the initializer must be a Tensor and not an initializer object.
use_resource If False, creates a regular Variable. If true, creates an experimental ResourceVariable instead with well-defined semantics. Defaults to False (will later change to True). When eager execution is enabled this argument is always forced to be True.
custom_getter Callable that takes as a first argument the true getter, and allows overwriting the internal get_variable method. The signature of custom_getter should match that of this method, but the most future-proof version will allow for changes: def custom_getter(getter, *args, **kwargs). Direct access to all get_variable parameters is also allowed: def custom_getter(getter, name, *args, **kwargs). A simple identity custom getter that simply creates variables with modified names is: def custom_getter(getter, name, *args, **kwargs):
return getter(name + '_suffix', *args, **kwargs)
constraint An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
synchronization Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize.
aggregation Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
Returns The created or existing Variable (or PartitionedVariable, if a partitioner was used).
Raises
ValueError when creating a new variable and shape is not declared, when violating reuse during variable creation, or when initializer dtype and dtype don't match. Reuse is set inside variable_scope. | |
doc_25266 |
Alias for set_facecolor. | |
doc_25267 |
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | |
doc_25268 | Return the frame object for the caller’s stack frame. CPython implementation detail: This function relies on Python stack frame support in the interpreter, which isn’t guaranteed to exist in all implementations of Python. If running in an implementation without Python stack frame support this function returns None. | |
doc_25269 | This function handles an exception using the default settings (that is, show a report in the browser, but don’t log to a file). This can be used when you’ve caught an exception and want to report it using cgitb. The optional info argument should be a 3-tuple containing an exception type, exception value, and traceback object, exactly like the tuple returned by sys.exc_info(). If the info argument is not supplied, the current exception is obtained from sys.exc_info(). | |
doc_25270 |
Return a reversed instance of the Colormap. Note This function is not implemented for base class. Parameters
namestr, optional
The name for the reversed colormap. If it's None the name will be the name of the parent colormap + "_r". See also LinearSegmentedColormap.reversed
ListedColormap.reversed | |
doc_25271 |
Fit linear model with coordinate descent. Fit is on grid of alphas and best alpha estimated by cross-validation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values. | |
doc_25272 | Regular file. | |
doc_25273 | marshal.dump(value, file[, version])
Write the value on the open file. The value must be a supported type. The file must be a writeable binary file. If the value has (or contains an object that has) an unsupported type, a ValueError exception is raised — but garbage data will also be written to the file. The object will not be properly read back by load(). The version argument indicates the data format that dump should use (see below).
marshal.load(file)
Read one value from the open file and return it. If no valid value is read (e.g. because the data has a different Python version’s incompatible marshal format), raise EOFError, ValueError or TypeError. The file must be a readable binary file. Note If an object containing an unsupported type was marshalled with dump(), load() will substitute None for the unmarshallable type.
marshal.dumps(value[, version])
Return the bytes object that would be written to a file by dump(value, file). The value must be a supported type. Raise a ValueError exception if value has (or contains an object that has) an unsupported type. The version argument indicates the data format that dumps should use (see below).
marshal.loads(bytes)
Convert the bytes-like object to a value. If no valid value is found, raise EOFError, ValueError or TypeError. Extra bytes in the input are ignored.
In addition, the following constants are defined:
marshal.version
Indicates the format that the module uses. Version 0 is the historical format, version 1 shares interned strings and version 2 uses a binary format for floating point numbers. Version 3 adds support for object instancing and recursion. The current version is 4.
Footnotes
1
The name of this module stems from a bit of terminology used by the designers of Modula-3 (amongst others), who use the term “marshalling” for shipping of data around in a self-contained form. Strictly speaking, “to marshal” means to convert some data from internal to external form (in an RPC buffer for instance) and “unmarshalling” for the reverse process. | |
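A short round-trip sketch with the in-memory variants; the ValueError branch shows the unsupported-type behavior described above:

```python
import marshal

data = {"pi": 3.14159, "primes": [2, 3, 5, 7]}

blob = marshal.dumps(data)            # bytes in the current format version
print(marshal.loads(blob) == data)    # True

try:
    marshal.dumps(object())           # plain instances are not supported
except ValueError:
    print("ValueError raised")        # ValueError raised
```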
doc_25274 |
kind = 'quarter' | |
doc_25275 |
Return a Series/DataFrame with absolute numeric value of each element. This function only applies to elements that are all numeric. Returns
abs
Series/DataFrame containing the absolute value of each element. See also numpy.absolute
Calculate the absolute value element-wise. Notes For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{ a^2 + b^2 }\). Examples Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4])
>>> s.abs()
0 1.10
1 2.00
2 3.33
3 4.00
dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j])
>>> s.abs()
0 1.56205
dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')])
>>> s.abs()
0 1 days
dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({
... 'a': [4, 5, 6, 7],
... 'b': [10, 20, 30, 40],
... 'c': [100, 50, -30, -50]
... })
>>> df
a b c
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
>>> df.loc[(df.c - 43).abs().argsort()]
a b c
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50 | |
doc_25276 | A bytes object representing the current version of the module. Also available as __version__. | |
doc_25277 |
Other Members
bytes_or_text_types
complex_types
integral_types
real_types | |
doc_25278 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_25279 | torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor
Applies a 1D convolution over an input signal composed of several input planes. This operator supports TensorFloat32. See Conv1d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
input – input tensor of shape (minibatch,in_channels,iW)(\text{minibatch} , \text{in\_channels} , iW)
weight – filters of shape (out_channels,in_channelsgroups,kW)(\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kW)
bias – optional bias of shape (out_channels)(\text{out\_channels}) . Default: None
stride – the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3)
>>> inputs = torch.randn(20, 16, 50)
>>> F.conv1d(inputs, filters)
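The output width in the example above follows the standard convolution size formula; a minimal sketch (the helper name `conv1d_out_size` is ours for illustration, not part of the API):

```python
import math

def conv1d_out_size(iW, kW, stride=1, padding=0, dilation=1):
    # Output-size formula from the Conv1d shape section:
    # floor((iW + 2*padding - dilation*(kW - 1) - 1) / stride + 1)
    return math.floor((iW + 2 * padding - dilation * (kW - 1) - 1) / stride + 1)

# For the example above (iW=50, kW=3, defaults), the output width is 48,
# so F.conv1d(inputs, filters) returns a (20, 33, 48) tensor.
print(conv1d_out_size(50, 3))  # 48
```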
conv2d
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor
Applies a 2D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv2d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
input – input tensor of shape (minibatch, in_channels, iH, iW)
weight – filters of shape (out_channels, in_channels/groups, kH, kW)
bias – optional bias tensor of shape (out_channels). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1 Examples: >>> # With square kernels and equal stride
>>> filters = torch.randn(8,4,3,3)
>>> inputs = torch.randn(1,4,5,5)
>>> F.conv2d(inputs, filters, padding=1)
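As a hedged sanity check of the example above: with 3×3 kernels and padding=1 the spatial size is preserved, so only the channel count changes:

```python
import torch
import torch.nn.functional as F

inputs = torch.randn(1, 4, 5, 5)   # (minibatch, in_channels, iH, iW)
filters = torch.randn(8, 4, 3, 3)  # (out_channels, in_channels/groups, kH, kW)
out = F.conv2d(inputs, filters, padding=1)
# padding=1 cancels the kernel_size - 1 = 2 shrinkage, so 5x5 stays 5x5
print(out.shape)  # torch.Size([1, 8, 5, 5])
```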
conv3d
torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor
Applies a 3D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv3d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
input – input tensor of shape (minibatch, in_channels, iT, iH, iW)
weight – filters of shape (out_channels, in_channels/groups, kT, kH, kW)
bias – optional bias tensor of shape (out_channels). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
padding – implicit paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3, 3, 3)
>>> inputs = torch.randn(20, 16, 50, 10, 20)
>>> F.conv3d(inputs, filters)
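A minimal shape check (smaller tensors than the example above, chosen only to keep the run cheap): each spatial dimension shrinks by kernel_size − 1 with the default stride and no padding:

```python
import torch
import torch.nn.functional as F

inputs = torch.randn(2, 4, 8, 8, 8)   # (minibatch, in_channels, iT, iH, iW)
filters = torch.randn(6, 4, 3, 3, 3)  # (out_channels, in_channels/groups, kT, kH, kW)
out = F.conv3d(inputs, filters)
# Each spatial dim shrinks by kernel_size - 1 = 2: 8 -> 6
print(out.shape)  # torch.Size([2, 6, 6, 6, 6])
```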
conv_transpose1d
torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor
Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose1d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
input – input tensor of shape (minibatch, in_channels, iW)
weight – filters of shape (in_channels, out_channels/groups, kW)
bias – optional bias of shape (out_channels). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1
padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0
output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1 Examples: >>> inputs = torch.randn(20, 16, 50)
>>> weights = torch.randn(16, 33, 5)
>>> F.conv_transpose1d(inputs, weights)
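The output length of a transposed convolution grows rather than shrinks; a minimal sketch of the size formula from the ConvTranspose1d shape section (the helper name is ours for illustration):

```python
def conv_transpose1d_out_size(iW, kW, stride=1, padding=0, output_padding=0, dilation=1):
    # Output-size formula from the ConvTranspose1d shape section
    return (iW - 1) * stride - 2 * padding + dilation * (kW - 1) + output_padding + 1

# For the example above (iW=50, kW=5, defaults), the output width is 54,
# so F.conv_transpose1d(inputs, weights) returns a (20, 33, 54) tensor.
print(conv_transpose1d_out_size(50, 5))  # 54
```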
conv_transpose2d
torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor
Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose2d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
input – input tensor of shape (minibatch, in_channels, iH, iW)
weight – filters of shape (in_channels, out_channels/groups, kH, kW)
bias – optional bias of shape (out_channels). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0
output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 Examples: >>> # With square kernels and equal stride
>>> inputs = torch.randn(1, 4, 5, 5)
>>> weights = torch.randn(4, 8, 3, 3)
>>> F.conv_transpose2d(inputs, weights, padding=1)
conv_transpose3d
torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor
Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution” This operator supports TensorFloat32. See ConvTranspose3d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters
input – input tensor of shape (minibatch, in_channels, iT, iH, iW)
weight – filters of shape (in_channels, out_channels/groups, kT, kH, kW)
bias – optional bias of shape (out_channels). Default: None
stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0
groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1 Examples: >>> inputs = torch.randn(20, 16, 50, 10, 20)
>>> weights = torch.randn(16, 33, 3, 3, 3)
>>> F.conv_transpose3d(inputs, weights)
unfold
torch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1) [source]
Extracts sliding local blocks from a batched input tensor. Warning Currently, only 4-D input tensors (batched image-like tensors) are supported. Warning More than one element of the unfolded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensor, please clone it first. See torch.nn.Unfold for details
fold
torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1) [source]
Combines an array of sliding local blocks into a large containing tensor. Warning Currently, only 3-D output tensors (unfolded batched image-like tensors) are supported. See torch.nn.Fold for details
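fold and unfold are exact inverses when the sliding blocks do not overlap; a minimal sketch of the roundtrip (sizes chosen here only for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.arange(16.).reshape(1, 1, 4, 4)
# Each column of the unfolded tensor is one flattened 2x2 block:
# shape (N, C*kH*kW, L) = (1, 4, 4), since there are four non-overlapping blocks
blocks = F.unfold(x, kernel_size=2, stride=2)
# With non-overlapping blocks, fold reconstructs the original tensor exactly
y = F.fold(blocks, output_size=(4, 4), kernel_size=2, stride=2)
assert torch.equal(x, y)
```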
Pooling functions avg_pool1d
torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Tensor
Applies a 1D average pooling over an input signal composed of several input planes. See AvgPool1d for details and output shape. Parameters
input – input tensor of shape (minibatch, in_channels, iW)
kernel_size – the size of the window. Can be a single number or a tuple (kW,)
stride – the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
ceil_mode – when True, will use ceil instead of floor to compute the output shape. Default: False
count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True
Examples: >>> # pool of square window of size=3, stride=2
>>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32)
>>> F.avg_pool1d(input, kernel_size=3, stride=2)
tensor([[[ 2., 4., 6.]]])
avg_pool2d
torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor
Applies 2D average-pooling operation in kH × kW regions by step size sH × sW steps. The number of output features is equal to the number of input planes. See AvgPool2d for details and output shape. Parameters
input – input tensor (minibatch, in_channels, iH, iW)
kernel_size – size of the pooling region. Can be a single number or a tuple (kH, kW)
stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default: kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape. Default: False
count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True
divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None
avg_pool3d
torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor
Applies 3D average-pooling operation in kT × kH × kW regions by step size sT × sH × sW steps. The number of output features is equal to ⌊input planes / sT⌋. See AvgPool3d for details and output shape. Parameters
input – input tensor (minibatch, in_channels, iT, iH, iW)
kernel_size – size of the pooling region. Can be a single number or a tuple (kT, kH, kW)
stride – stride of the pooling operation. Can be a single number or a tuple (sT, sH, sW). Default: kernel_size
padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW), Default: 0
ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape
count_include_pad – when True, will include the zero-padding in the averaging calculation
divisor_override – if specified, it will be used as divisor, otherwise size of the pooling region will be used. Default: None
max_pool1d
torch.nn.functional.max_pool1d(*args, **kwargs)
Applies a 1D max pooling over an input signal composed of several input planes. See MaxPool1d for details.
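A minimal usage sketch, mirroring the avg_pool1d example above (input values chosen here only for illustration):

```python
import torch
import torch.nn.functional as F

# Same input as the avg_pool1d example, but taking the max per window of
# size 3 with stride 2: windows [1,2,3], [3,4,5], [5,6,7]
input = torch.tensor([[[1., 2., 3., 4., 5., 6., 7.]]])
out = F.max_pool1d(input, kernel_size=3, stride=2)
print(out)  # tensor([[[3., 5., 7.]]])
```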
max_pool2d
torch.nn.functional.max_pool2d(*args, **kwargs)
Applies a 2D max pooling over an input signal composed of several input planes. See MaxPool2d for details.
max_pool3d
torch.nn.functional.max_pool3d(*args, **kwargs)
Applies a 3D max pooling over an input signal composed of several input planes. See MaxPool3d for details.
max_unpool1d
torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source]
Computes a partial inverse of MaxPool1d. See MaxUnpool1d for details.
max_unpool2d
torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source]
Computes a partial inverse of MaxPool2d. See MaxUnpool2d for details.
max_unpool3d
torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source]
Computes a partial inverse of MaxPool3d. See MaxUnpool3d for details.
lp_pool1d
torch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source]
Applies a 1D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool1d for details.
lp_pool2d
torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source]
Applies a 2D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool2d for details.
adaptive_max_pool1d
torch.nn.functional.adaptive_max_pool1d(*args, **kwargs)
Applies a 1D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool1d for details and output shape. Parameters
output_size – the target output size (single integer)
return_indices – whether to return pooling indices. Default: False
adaptive_max_pool2d
torch.nn.functional.adaptive_max_pool2d(*args, **kwargs)
Applies a 2D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool2d for details and output shape. Parameters
output_size – the target output size (single integer or double-integer tuple)
return_indices – whether to return pooling indices. Default: False
adaptive_max_pool3d
torch.nn.functional.adaptive_max_pool3d(*args, **kwargs)
Applies a 3D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool3d for details and output shape. Parameters
output_size – the target output size (single integer or triple-integer tuple)
return_indices – whether to return pooling indices. Default: False
adaptive_avg_pool1d
torch.nn.functional.adaptive_avg_pool1d(input, output_size) → Tensor
Applies a 1D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool1d for details and output shape. Parameters
output_size – the target output size (single integer)
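A minimal sketch of how adaptive pooling differs from ordinary pooling: only the output size is specified, and the window sizes are derived from it (input values chosen here only for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[1., 2., 3., 4., 5., 6.]]])  # (N, C, 6)
# output_size=2 -> the input is split into two windows, [1,2,3] and [4,5,6],
# whose means are 2 and 5
out = F.adaptive_avg_pool1d(x, output_size=2)
print(out)  # tensor([[[2., 5.]]])
```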
adaptive_avg_pool2d
torch.nn.functional.adaptive_avg_pool2d(input, output_size) [source]
Applies a 2D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool2d for details and output shape. Parameters
output_size – the target output size (single integer or double-integer tuple)
adaptive_avg_pool3d
torch.nn.functional.adaptive_avg_pool3d(input, output_size) [source]
Applies a 3D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool3d for details and output shape. Parameters
output_size – the target output size (single integer or triple-integer tuple)
Non-linear activation functions threshold
torch.nn.functional.threshold(input, threshold, value, inplace=False)
Thresholds each element of the input Tensor. See Threshold for more details.
torch.nn.functional.threshold_(input, threshold, value) → Tensor
In-place version of threshold().
relu
torch.nn.functional.relu(input, inplace=False) → Tensor [source]
Applies the rectified linear unit function element-wise. See ReLU for more details.
torch.nn.functional.relu_(input) → Tensor
In-place version of relu().
hardtanh
torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Tensor [source]
Applies the HardTanh function element-wise. See Hardtanh for more details.
torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Tensor
In-place version of hardtanh().
hardswish
torch.nn.functional.hardswish(input, inplace=False) [source]
Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3. Hardswish(x) = 0 if x ≤ −3; x if x ≥ +3; x · (x + 3) / 6 otherwise
See Hardswish for more details.
relu6
torch.nn.functional.relu6(input, inplace=False) → Tensor [source]
Applies the element-wise function ReLU6(x) = min(max(0, x), 6). See ReLU6 for more details.
elu
torch.nn.functional.elu(input, alpha=1.0, inplace=False) [source]
Applies element-wise, ELU(x) = max(0, x) + min(0, α * (exp(x) − 1)). See ELU for more details.
torch.nn.functional.elu_(input, alpha=1.) → Tensor
In-place version of elu().
selu
torch.nn.functional.selu(input, inplace=False) → Tensor [source]
Applies element-wise, SELU(x) = scale * (max(0, x) + min(0, α * (exp(x) − 1))), with α = 1.6732632423543772848170429916717 and scale = 1.0507009873554804934193349852946. See SELU for more details.
celu
torch.nn.functional.celu(input, alpha=1., inplace=False) → Tensor [source]
Applies element-wise, CELU(x) = max(0, x) + min(0, α * (exp(x/α) − 1)). See CELU for more details.
leaky_relu
torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Tensor [source]
Applies element-wise, LeakyReLU(x) = max(0, x) + negative_slope * min(0, x). See LeakyReLU for more details.
torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Tensor
In-place version of leaky_relu().
prelu
torch.nn.functional.prelu(input, weight) → Tensor [source]
Applies element-wise the function PReLU(x) = max(0, x) + weight * min(0, x) where weight is a learnable parameter. See PReLU for more details.
rrelu
torch.nn.functional.rrelu(input, lower=1./8, upper=1./3, training=False, inplace=False) → Tensor [source]
Randomized leaky ReLU. See RReLU for more details.
torch.nn.functional.rrelu_(input, lower=1./8, upper=1./3, training=False) → Tensor
In-place version of rrelu().
glu
torch.nn.functional.glu(input, dim=-1) → Tensor [source]
The gated linear unit. Computes: GLU(a, b) = a ⊗ σ(b)
where the input is split in half along dim to form a and b, σ is the sigmoid function and ⊗ is the element-wise product between matrices. See Language Modeling with Gated Convolutional Networks. Parameters
input (Tensor) – input tensor
dim (int) – dimension on which to split the input. Default: -1
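A minimal sketch of the split-and-gate behaviour (input values chosen so the gate is exactly sigmoid(0) = 0.5):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1., 2., 0., 0.]])
# Split along dim=-1: a = [1., 2.], b = [0., 0.]; sigmoid(0) = 0.5 gates a
out = F.glu(x, dim=-1)
print(out)  # tensor([[0.5000, 1.0000]])
```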
gelu
torch.nn.functional.gelu(input) → Tensor [source]
Applies element-wise the function GELU(x) = x * Φ(x) where Φ(x) is the Cumulative Distribution Function for the Gaussian Distribution. See Gaussian Error Linear Units (GELUs).
logsigmoid
torch.nn.functional.logsigmoid(input) → Tensor
Applies element-wise LogSigmoid(x_i) = log(1 / (1 + exp(−x_i))). See LogSigmoid for more details.
hardshrink
torch.nn.functional.hardshrink(input, lambd=0.5) → Tensor [source]
Applies the hard shrinkage function element-wise See Hardshrink for more details.
tanhshrink
torch.nn.functional.tanhshrink(input) → Tensor [source]
Applies element-wise, Tanhshrink(x) = x − tanh(x). See Tanhshrink for more details.
softsign
torch.nn.functional.softsign(input) → Tensor [source]
Applies element-wise, the function SoftSign(x) = x / (1 + |x|). See Softsign for more details.
softplus
torch.nn.functional.softplus(input, beta=1, threshold=20) → Tensor
Applies element-wise, the function Softplus(x) = (1/β) * log(1 + exp(β * x)). For numerical stability the implementation reverts to the linear function when input × β > threshold. See Softplus for more details.
softmin
torch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmin function. Note that Softmin(x) = Softmax(−x). See the softmax definition for the mathematical formula. See Softmin for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmin will be computed (so every slice along dim will sum to 1).
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
softmax
torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmax function. Softmax is defined as: Softmax(x_i) = exp(x_i) / Σ_j exp(x_j). It is applied to all slices along dim, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1. See Softmax for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Note This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log_softmax instead (it’s faster and has better numerical properties).
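A minimal sketch: equal logits produce a uniform distribution, and every slice along dim sums to 1:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1., 1., 1., 1.]])
out = F.softmax(x, dim=1)
# Equal logits -> uniform probabilities, one quarter each
print(out)  # tensor([[0.2500, 0.2500, 0.2500, 0.2500]])
assert torch.allclose(out.sum(dim=1), torch.ones(1))
```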
softshrink
torch.nn.functional.softshrink(input, lambd=0.5) → Tensor
Applies the soft shrinkage function elementwise See Softshrink for more details.
gumbel_softmax
torch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1) [source]
Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and optionally discretizes. Parameters
logits – […, num_features] unnormalized log probabilities
tau – non-negative scalar temperature
hard – if True, the returned samples will be discretized as one-hot vectors, but will be differentiated as if it is the soft sample in autograd
dim (int) – A dimension along which softmax will be computed. Default: -1. Returns
Sampled tensor of same shape as logits from the Gumbel-Softmax distribution. If hard=True, the returned samples will be one-hot, otherwise they will be probability distributions that sum to 1 across dim. Note This function is here for legacy reasons, may be removed from nn.Functional in the future. Note The main trick for hard is to do y_hard - y_soft.detach() + y_soft It achieves two things: - makes the output value exactly one-hot (since we add then subtract y_soft value) - makes the gradient equal to y_soft gradient (since we strip all other gradients) Examples::
>>> logits = torch.randn(20, 32)
>>> # Sample soft categorical using reparametrization trick:
>>> F.gumbel_softmax(logits, tau=1, hard=False)
>>> # Sample hard categorical using "Straight-through" trick:
>>> F.gumbel_softmax(logits, tau=1, hard=True)
log_softmax
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly. See LogSoftmax for more details. Parameters
input (Tensor) – input
dim (int) – A dimension along which log_softmax will be computed.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
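A minimal sketch of the equivalence stated above (the fused version is preferred for numerical stability):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 5)
# Mathematically log(softmax(x)), but computed in one numerically stable pass
fused = F.log_softmax(x, dim=1)
naive = torch.log(F.softmax(x, dim=1))
assert torch.allclose(fused, naive, atol=1e-6)
```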
tanh
torch.nn.functional.tanh(input) → Tensor [source]
Applies element-wise, Tanh(x) = tanh(x) = (exp(x) − exp(−x)) / (exp(x) + exp(−x)). See Tanh for more details.
sigmoid
torch.nn.functional.sigmoid(input) → Tensor [source]
Applies the element-wise function Sigmoid(x) = 1 / (1 + exp(−x)). See Sigmoid for more details.
hardsigmoid
torch.nn.functional.hardsigmoid(input) → Tensor [source]
Applies the element-wise function Hardsigmoid(x) = 0 if x ≤ −3; 1 if x ≥ +3; x/6 + 1/2 otherwise
Parameters
inplace – If set to True, will do this operation in-place. Default: False See Hardsigmoid for more details.
silu
torch.nn.functional.silu(input, inplace=False) [source]
Applies the SiLU function, element-wise: silu(x) = x * σ(x), where σ(x) is the logistic sigmoid.
Note See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function where the SiLU was experimented with later. See SiLU for more details.
Normalization functions batch_norm
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05) [source]
Applies Batch Normalization for each channel across a batch of data. See BatchNorm1d, BatchNorm2d, BatchNorm3d for details.
instance_norm
torch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05) [source]
Applies Instance Normalization for each channel in each data sample in a batch. See InstanceNorm1d, InstanceNorm2d, InstanceNorm3d for details.
layer_norm
torch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05) [source]
Applies Layer Normalization for last certain number of dimensions. See LayerNorm for details.
local_response_norm
torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0) [source]
Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels. See LocalResponseNorm for details.
normalize
torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12, out=None) [source]
Performs L_p normalization of inputs over the specified dimension. For a tensor input of sizes (n_0, ..., n_dim, ..., n_k), each n_dim-element vector v along dimension dim is transformed as v = v / max(‖v‖_p, ε).
With the default arguments it uses the Euclidean norm over vectors along dimension 1 for normalization. Parameters
input – input tensor of any shape
p (float) – the exponent value in the norm formulation. Default: 2
dim (int) – the dimension to reduce. Default: 1
eps (float) – small value to avoid division by zero. Default: 1e-12
out (Tensor, optional) – the output tensor. If out is used, this operation won’t be differentiable.
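A minimal sketch with a vector whose L2 norm is exactly 5:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[3., 4.]])  # L2 norm along dim 1 is exactly 5
out = F.normalize(x, p=2, dim=1)
# Each row is divided by its norm: [3/5, 4/5]
print(out)  # tensor([[0.6000, 0.8000]])
```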
Linear functions linear
torch.nn.functional.linear(input, weight, bias=None) [source]
Applies a linear transformation to the incoming data: y = xA^T + b. This operator supports TensorFloat32. Shape: Input: (N, *, in_features) where N is the batch size and * means any number of additional dimensions
Weight: (out_features, in_features)
Bias: (out_features)
Output: (N, *, out_features)
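A minimal sketch checking the shapes above and the identity y = xA^T + b (sizes chosen here only for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.randn(128, 20)  # (N, in_features)
w = torch.randn(30, 20)   # (out_features, in_features)
b = torch.randn(30)
out = F.linear(x, w, b)   # computes x @ w.T + b
print(out.shape)  # torch.Size([128, 30])
assert torch.allclose(out, x @ w.t() + b, atol=1e-5)
```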
bilinear
torch.nn.functional.bilinear(input1, input2, weight, bias=None) [source]
Applies a bilinear transformation to the incoming data: y = x1^T A x2 + b. Shape: input1: (N, *, H_in1) where H_in1 = in1_features and * means any number of additional dimensions. All but the last dimension of the inputs should be the same. input2: (N, *, H_in2) where H_in2 = in2_features
weight: (out_features, in1_features, in2_features)
bias: (out_features)
output: (N, *, H_out) where H_out = out_features and all but the last dimension are the same shape as the input.
Dropout functions dropout
torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False) [source]
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. See Dropout for details. Parameters
p – probability of an element to be zeroed. Default: 0.5
training – apply dropout if True. Default: True
inplace – If set to True, will do this operation in-place. Default: False
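A minimal sketch: surviving elements are rescaled by 1/(1 − p) during training, and dropout is the identity when training=False:

```python
import torch
import torch.nn.functional as F

x = torch.ones(1000)
out = F.dropout(x, p=0.5, training=True)
# Survivors are scaled by 1/(1 - p) = 2 so the expected value is unchanged
assert set(out.unique().tolist()) <= {0.0, 2.0}
# With training=False, dropout is the identity
assert torch.equal(F.dropout(x, p=0.5, training=False), x)
```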
alpha_dropout
torch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False) [source]
Applies alpha dropout to the input. See AlphaDropout for details.
feature_alpha_dropout
torch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False) [source]
Randomly masks out entire channels (a channel is a feature map, e.g. the j-th channel of the i-th sample in the batch input is a tensor input[i, j]) of the input tensor. Instead of setting activations to zero, as in regular Dropout, the activations are set to the negative saturation value of the SELU activation function. Each element will be masked independently on every forward call with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit variance. See FeatureAlphaDropout for details. Parameters
p – dropout probability of a channel to be zeroed. Default: 0.5
training – apply dropout if True. Default: True
inplace – If set to True, will do this operation in-place. Default: False
dropout2d
torch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False) [source]
Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor input[i, j]) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout2d for details. Parameters
p – probability of a channel to be zeroed. Default: 0.5
training – apply dropout if True. Default: True
inplace – If set to True, will do this operation in-place. Default: False
dropout3d
torch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False) [source]
Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution. See Dropout3d for details. Parameters
p – probability of a channel to be zeroed. Default: 0.5
training – apply dropout if True. Default: True
inplace – If set to True, will do this operation in-place. Default: False
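The channel-wise behaviour of dropout2d (and, analogously, dropout3d) can be seen in a short example (illustrative, not from the upstream docs): each channel is either zeroed as a whole or kept and rescaled by 1/(1-p).

```python
import torch
import torch.nn.functional as F

x = torch.ones(2, 6, 4, 4)              # batch of 2 samples, 6 channels of 4x4
out = F.dropout2d(x, p=0.5, training=True)

# Each (4, 4) channel is either zeroed entirely or kept and rescaled by
# 1/(1-p) = 2.0, so every channel sums to exactly 0.0 or 4*4*2.0 = 32.0.
per_channel = out.sum(dim=(2, 3))
```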
Sparse functions embedding
torch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False) [source]
A simple lookup table that looks up embeddings in a fixed dictionary with a fixed size. This module is often used to retrieve word embeddings using indices. The input to the module is a list of indices and the embedding matrix; the output is the corresponding word embeddings. See torch.nn.Embedding for more details. Parameters
input (LongTensor) – Tensor containing indices into the embedding matrix
weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size
padding_idx (int, optional) – If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index.
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.
scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False.
sparse (bool, optional) – If True, gradient w.r.t. weight will be a sparse tensor. See Notes under torch.nn.Embedding for more details regarding sparse gradients. Shape:
Input: LongTensor of arbitrary shape containing the indices to extract
Weight: Embedding matrix of floating point type with shape (V, embedding_dim),
where V = maximum index + 1 and embedding_dim = the embedding size Output: (*, embedding_dim), where * is the input shape Examples: >>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([[1,2,4,5],[4,3,2,9]])
>>> # an embedding matrix containing 10 tensors of size 3
>>> embedding_matrix = torch.rand(10, 3)
>>> F.embedding(input, embedding_matrix)
tensor([[[ 0.8490, 0.9625, 0.6753],
[ 0.9666, 0.7761, 0.6108],
[ 0.6246, 0.9751, 0.3618],
[ 0.4161, 0.2419, 0.7383]],
[[ 0.6246, 0.9751, 0.3618],
[ 0.0237, 0.7794, 0.0528],
[ 0.9666, 0.7761, 0.6108],
[ 0.3385, 0.8612, 0.1867]]])
>>> # example with padding_idx
>>> weights = torch.rand(10, 3)
>>> weights[0, :].zero_()
>>> embedding_matrix = weights
>>> input = torch.tensor([[0,2,0,5]])
>>> F.embedding(input, embedding_matrix, padding_idx=0)
tensor([[[ 0.0000, 0.0000, 0.0000],
[ 0.5609, 0.5384, 0.8720],
[ 0.0000, 0.0000, 0.0000],
[ 0.6262, 0.2438, 0.7471]]])
embedding_bag
torch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False) [source]
Computes sums, means or maxes of bags of embeddings, without instantiating the intermediate embeddings. See torch.nn.EmbeddingBag for more details. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters
input (LongTensor) – Tensor containing bags of indices into the embedding matrix
weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size
offsets (LongTensor, optional) – Only used when input is 1D. offsets determines the starting index position of each bag (sequence) in input.
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place.
norm_type (float, optional) – The p in the p-norm to compute for the max_norm option. Default 2.
scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. Note: this option is not supported when mode="max".
mode (string, optional) – "sum", "mean" or "max". Specifies the way to reduce the bag. Default: "mean"
sparse (bool, optional) – if True, gradient w.r.t. weight will be a sparse tensor. See Notes under torch.nn.Embedding for more details regarding sparse gradients. Note: this option is not supported when mode="max".
per_sample_weights (Tensor, optional) – a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated as having the same offsets, if those are not None.
include_last_offset (bool, optional) – if True, the size of offsets is equal to the number of bags + 1. The last element is the size of the input, or the ending index position of the last bag (sequence). Shape:
input (LongTensor) and offsets (LongTensor, optional)
If input is 2D of shape (B, N), it will be treated as B bags (sequences) each of fixed length N, and this will return B values aggregated in a way depending on the mode. offsets is ignored and required to be None in this case.
If input is 1D of shape (N), it will be treated as a concatenation of multiple bags (sequences). offsets is required to be a 1D tensor containing the starting index positions of each bag in input. Therefore, for offsets of shape (B), input will be viewed as having B bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros.
weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
per_sample_weights (Tensor, optional). Has the same shape as input.
output: aggregated embedding values of shape (B, embedding_dim)
Examples: >>> # an Embedding module containing 10 tensors of size 3
>>> embedding_matrix = torch.rand(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.tensor([0,4])
>>> F.embedding_bag(input, embedding_matrix, offsets)
tensor([[ 0.3397, 0.3552, 0.5545],
[ 0.5893, 0.4386, 0.5882]])
one_hot
torch.nn.functional.one_hot(tensor, num_classes=-1) → LongTensor
Takes LongTensor with index values of shape (*) and returns a tensor of shape (*, num_classes) that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1. See also One-hot on Wikipedia. Parameters
tensor (LongTensor) – class values of any shape.
num_classes (int) – Total number of classes. If set to -1, the number of classes will be inferred as one greater than the largest class value in the input tensor. Returns
LongTensor that has one more dimension with 1 values at the index of last dimension indicated by the input, and 0 everywhere else. Examples >>> F.one_hot(torch.arange(0, 5) % 3)
tensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])
>>> F.one_hot(torch.arange(0, 5) % 3, num_classes=5)
tensor([[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0]])
>>> F.one_hot(torch.arange(0, 6).view(3,2) % 3)
tensor([[[1, 0, 0],
[0, 1, 0]],
[[0, 0, 1],
[1, 0, 0]],
[[0, 1, 0],
[0, 0, 1]]])
Distance functions pairwise_distance
torch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-06, keepdim=False) [source]
See torch.nn.PairwiseDistance for details
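An illustrative example (not from the upstream docs): pairwise_distance computes one distance per matching row of x1 and x2.

```python
import torch
import torch.nn.functional as F

x1 = torch.randn(100, 128)
x2 = torch.randn(100, 128)
dist = F.pairwise_distance(x1, x2)  # Euclidean distance per row (p=2 default)
print(dist.shape)  # torch.Size([100])
```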
cosine_similarity
torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) → Tensor
Returns cosine similarity between x1 and x2, computed along dim. similarity = (x1 · x2) / max(‖x1‖₂ · ‖x2‖₂, ε)
Parameters
x1 (Tensor) – First input.
x2 (Tensor) – Second input (of size matching x1).
dim (int, optional) – Dimension of vectors. Default: 1
eps (float, optional) – Small value to avoid division by zero. Default: 1e-8 Shape:
Input: (*₁, D, *₂) where D is at position dim. Output: (*₁, *₂) where 1 is at position dim. Example: >>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = F.cosine_similarity(input1, input2)
>>> print(output)
pdist
torch.nn.functional.pdist(input, p=2) → Tensor
Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of torch.norm(input[:, None] - input, dim=2, p=p). This function will be faster if the rows are contiguous. If input has shape N × M then the output will have shape ½N(N − 1). This function is equivalent to scipy.spatial.distance.pdist(input, 'minkowski', p=p) if p ∈ (0, ∞). When p = 0 it is equivalent to scipy.spatial.distance.pdist(input, 'hamming') * M. When p = ∞, the closest scipy function is scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()). Parameters
input – input tensor of shape N × M.
p – p value for the p-norm distance to calculate between each vector pair, ∈ [0, ∞].
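The output-shape formula above can be checked with a small example (illustrative, not from the upstream docs):

```python
import torch
import torch.nn.functional as F

x = torch.randn(5, 3)
d = F.pdist(x, p=2)
# 5 rows give 5*(5-1)/2 = 10 pairwise distances
print(d.shape)  # torch.Size([10])
```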
Loss functions binary_cross_entropy
torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean') [source]
Function that measures the Binary Cross Entropy between the target and the output. See BCELoss for details. Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Examples: >>> input = torch.randn((3, 2), requires_grad=True)
>>> target = torch.rand((3, 2), requires_grad=False)
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
>>> loss.backward()
binary_cross_entropy_with_logits
torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) [source]
Function that measures Binary Cross Entropy between target and output logits. See BCEWithLogitsLoss for details. Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes. Examples: >>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
poisson_nll_loss
torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') [source]
Poisson negative log likelihood loss. See PoissonNLLLoss for details. Parameters
input – expectation of underlying Poisson distribution.
target – random sample target ~ Poisson(input).
log_input – if True the loss is computed as exp(input) − target * input, if False then loss is input − target * log(input + eps). Default: True
full – whether to compute full loss, i.e. to add the Stirling approximation term: target * log(target) − target + 0.5 * log(2π * target). Default: False
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
eps (float, optional) – Small value to avoid evaluation of log(0) when log_input=False. Default: 1e-8
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
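A short usage sketch (illustrative, not from the upstream docs): with log_input=True the input is the log of the Poisson rate.

```python
import torch
import torch.nn.functional as F

log_rate = torch.randn(5, requires_grad=True)  # log of the Poisson rate
target = torch.poisson(torch.rand(5) * 4)      # integer-valued counts

loss = F.poisson_nll_loss(log_rate, target, log_input=True)
loss.backward()
```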
cosine_embedding_loss
torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See CosineEmbeddingLoss for details.
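An illustrative example (not from the upstream docs): target labels are 1 for pairs that should be similar and -1 for pairs that should be dissimilar.

```python
import torch
import torch.nn.functional as F

input1 = torch.randn(3, 128)
input2 = torch.randn(3, 128)
target = torch.tensor([1.0, -1.0, 1.0])  # 1: similar pair, -1: dissimilar pair

loss = F.cosine_embedding_loss(input1, input2, target, margin=0.5)
```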
cross_entropy
torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source]
This criterion combines log_softmax and nll_loss in a single function. See CrossEntropyLoss for details. Parameters
input (Tensor) – (N, C) where C = number of classes, or (N, C, H, W) in case of 2D loss, or (N, C, d1, d2, ..., dK) with K ≥ 1 in the case of K-dimensional loss.
target (Tensor) – (N) where each value is 0 ≤ targets[i] ≤ C−1, or (N, d1, d2, ..., dK) with K ≥ 1 for K-dimensional loss.
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Examples: >>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
ctc_loss
torch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False) [source]
The Connectionist Temporal Classification loss. See CTCLoss for details. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters
log_probs – (T, N, C) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()).
targets – (N, S) or (sum(target_lengths),). Targets cannot be blank. In the second form, the targets are assumed to be concatenated.
input_lengths – (N). Lengths of the inputs (each must be ≤ T)
target_lengths – (N). Lengths of the targets
blank (int, optional) – Blank label. Default 0.
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken, 'sum': the output will be summed. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Example: >>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_()
>>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long)
>>> input_lengths = torch.full((16,), 50, dtype=torch.long)
>>> target_lengths = torch.randint(10,30,(16,), dtype=torch.long)
>>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
>>> loss.backward()
hinge_embedding_loss
torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See HingeEmbeddingLoss for details.
kl_div
torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False) [source]
The Kullback-Leibler divergence loss. See KLDivLoss for details. Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied 'batchmean': the sum of the output will be divided by the batchsize 'sum': the output will be summed 'mean': the output will be divided by the number of elements in the output Default: 'mean'
log_target (bool) – A flag indicating whether target is passed in the log space. It is recommended to pass certain distributions (like softmax) in the log space to avoid numerical issues caused by explicit log. Default: False
Note size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Note reduction='mean' doesn't return the true KL divergence value; please use reduction='batchmean', which aligns with the KL math definition. In the next major release, 'mean' will be changed to behave the same as 'batchmean'.
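A short usage sketch (illustrative, not from the upstream docs): kl_div expects input as log-probabilities, and (with log_target=False) target as probabilities.

```python
import torch
import torch.nn.functional as F

# input: log-probabilities; target: probabilities (log_target=False default)
input = F.log_softmax(torch.randn(3, 5), dim=1)
target = F.softmax(torch.randn(3, 5), dim=1)

loss = F.kl_div(input, target, reduction='batchmean')
```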
l1_loss
torch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
Function that takes the mean element-wise absolute value difference. See L1Loss for details.
mse_loss
torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
Measures the element-wise mean squared error. See MSELoss for details.
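A short illustrative example (not from the upstream docs) using both l1_loss and mse_loss:

```python
import torch
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)

l1 = F.l1_loss(input, target)    # mean of |input - target|
mse = F.mse_loss(input, target)  # mean of (input - target)**2
mse.backward()
```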
margin_ranking_loss
torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See MarginRankingLoss for details.
multilabel_margin_loss
torch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See MultiLabelMarginLoss for details.
multilabel_soft_margin_loss
torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None) → Tensor [source]
See MultiLabelSoftMarginLoss for details.
multi_margin_loss
torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') [source]
See MultiMarginLoss for details.
nll_loss
torch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source]
The negative log likelihood loss. See NLLLoss for details. Parameters
input – (N, C) where C = number of classes, or (N, C, H, W) in case of 2D loss, or (N, C, d1, d2, ..., dK) with K ≥ 1 in the case of K-dimensional loss.
target – (N) where each value is 0 ≤ targets[i] ≤ C−1, or (N, d1, d2, ..., dK) with K ≥ 1 for K-dimensional loss.
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Example: >>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = F.nll_loss(F.log_softmax(input, dim=1), target)
>>> output.backward()
smooth_l1_loss
torch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0) [source]
Function that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. See SmoothL1Loss for details.
soft_margin_loss
torch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See SoftMarginLoss for details.
triplet_margin_loss
torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean') [source]
See TripletMarginLoss for details
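A short usage sketch (illustrative, not from the upstream docs):

```python
import torch
import torch.nn.functional as F

anchor = torch.randn(16, 128, requires_grad=True)
positive = torch.randn(16, 128)   # should be close to the anchor
negative = torch.randn(16, 128)   # should be far from the anchor

loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2)
loss.backward()
```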
triplet_margin_with_distance_loss
torch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean') [source]
See TripletMarginWithDistanceLoss for details.
Vision functions pixel_shuffle
torch.nn.functional.pixel_shuffle(input, upscale_factor) → Tensor
Rearranges elements in a tensor of shape (*, C × r², H, W) to a tensor of shape (*, C, H × r, W × r), where r is the upscale_factor. See PixelShuffle for details. Parameters
input (Tensor) – the input tensor
upscale_factor (int) – factor to increase spatial resolution by Examples: >>> input = torch.randn(1, 9, 4, 4)
>>> output = torch.nn.functional.pixel_shuffle(input, 3)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
pixel_unshuffle
torch.nn.functional.pixel_unshuffle(input, downscale_factor) → Tensor
Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (*, C, H × r, W × r) to a tensor of shape (*, C × r², H, W), where r is the downscale_factor. See PixelUnshuffle for details. Parameters
input (Tensor) – the input tensor
downscale_factor (int) – factor to decrease spatial resolution by Examples: >>> input = torch.randn(1, 1, 12, 12)
>>> output = torch.nn.functional.pixel_unshuffle(input, 3)
>>> print(output.size())
torch.Size([1, 9, 4, 4])
pad
torch.nn.functional.pad(input, pad, mode='constant', value=0)
Pads tensor. Padding size:
The padding size by which to pad some dimensions of input is described starting from the last dimension and moving forward. ⌊len(pad)/2⌋ dimensions of input will be padded. For example, to pad only the last dimension of the input tensor, pad has the form (padding_left, padding_right); to pad the last 2 dimensions of the input tensor, use (padding_left, padding_right, padding_top, padding_bottom); to pad the last 3 dimensions, use (padding_left, padding_right, padding_top, padding_bottom, padding_front, padding_back). Padding mode:
See torch.nn.ConstantPad2d, torch.nn.ReflectionPad2d, and torch.nn.ReplicationPad2d for concrete examples on how each of the padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate padding is implemented for padding the last 3 dimensions of 5D input tensor, or the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Reflect padding is only implemented for padding the last 2 dimensions of 4D input tensor, or the last dimension of 3D input tensor. Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background. Parameters
input (Tensor) – N-dimensional tensor
pad (tuple) – m-element tuple, where m/2 ≤ number of input dimensions and m is even.
mode – 'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'
value – fill value for 'constant' padding. Default: 0
Examples: >>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1) # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0) # effectively zero padding
>>> print(out.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.size())
torch.Size([3, 9, 7, 3])
interpolate
torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None) [source]
Down/up samples the input to either the given size or the given scale_factor The algorithm used for interpolation is determined by mode. Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear' | 'area'. Default: 'nearest'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False
recompute_scale_factor (bool, optional) – recompute the scale_factor for use in the interpolation calculation. When scale_factor is passed as a parameter, it is used to compute the output_size. If recompute_scale_factor is False or not specified, the passed-in scale_factor will be used in the interpolation computation. Otherwise, a new scale_factor will be computed based on the output and input sizes for use in the interpolation computation (i.e. the computation will be identical to if the computed output_size were passed-in explicitly). Note that when scale_factor is floating-point, the recomputed scale_factor may differ from the one passed in due to rounding and precision issues. Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. Warning When scale_factor is specified, if recompute_scale_factor=True, scale_factor is used to compute the output_size which will then be used to infer new scales for the interpolation. The default behavior for recompute_scale_factor changed to False in 1.6.0, and scale_factor is used in the interpolation calculation. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
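A minimal sketch of the shape semantics above (tensor values chosen arbitrarily): with scale_factor=2 each spatial dimension is doubled, and an explicit size gives the same result.

```python
import torch
import torch.nn.functional as F

x = torch.arange(4, dtype=torch.float32).reshape(1, 1, 2, 2)  # N x C x H x W

# nearest: each pixel is repeated, no new values are invented
up = F.interpolate(x, scale_factor=2, mode="nearest")
print(up.shape)  # torch.Size([1, 1, 4, 4])

# bilinear with an explicit output size; align_corners=False is the default
bil = F.interpolate(x, size=(4, 4), mode="bilinear", align_corners=False)
print(bil.shape)  # torch.Size([1, 1, 4, 4])
```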
upsample
torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source]
Upsamples the input to either the given size or the given scale_factor. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent to nn.functional.interpolate(...). Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. The algorithm used for upsampling is determined by mode. Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for upsampling are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only) Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (string) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear'. Default: 'nearest'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False
Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs.
upsample_nearest
torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None) [source]
Upsamples the input, using nearest neighbours’ pixel values. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent to nn.functional.interpolate(..., mode='nearest'). Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional). Parameters
input (Tensor) – input
size (int or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (int) – multiplier for spatial size. Has to be an integer. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
upsample_bilinear
torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None) [source]
Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True). Expected inputs are spatial (4 dimensional). Use upsample_trilinear for volumetric (5 dimensional) inputs. Parameters
input (Tensor) – input
size (int or Tuple[int, int]) – output spatial size.
scale_factor (int or Tuple[int, int]) – multiplier for spatial size. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information.
grid_sample
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None) [source]
Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. Currently, only spatial (4-D) and volumetric (5-D) input are supported. In the spatial (4-D) case, for input with shape (N, C, H_in, W_in) and grid with shape (N, H_out, W_out, 2), the output will have shape (N, C, H_out, W_out). For each output location output[n, :, h, w], the size-2 vector grid[n, h, w] specifies input pixel locations x and y, which are used to interpolate the output value output[n, :, h, w]. In the case of 5D inputs, grid[n, d, h, w] specifies the x, y, z pixel locations for interpolating output[n, :, d, h, w]. The mode argument specifies the nearest or bilinear interpolation method used to sample the input pixels. grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of [-1, 1]. For example, values x = -1, y = -1 is the left-top pixel of input, and values x = 1, y = 1 is the right-bottom pixel of input. If grid has values outside the range of [-1, 1], the corresponding outputs are handled as defined by padding_mode. Options are
padding_mode="zeros": use 0 for out-of-bound grid locations,
padding_mode="border": use border values for out-of-bound grid locations,
padding_mode="reflection": use values at locations reflected by the border for out-of-bound grid locations. For location far away from the border, it will keep being reflected until becoming in bound, e.g., (normalized) pixel location x = -3.5 reflects by border -1 and becomes x' = 1.5, then reflects by border 1 and becomes x'' = -0.5. Note This function is often used in conjunction with affine_grid() to build Spatial Transformer Networks . Note When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background. Note NaN values in grid would be interpreted as -1. Parameters
input (Tensor) – input of shape (N, C, H_in, W_in) (4-D case) or (N, C, D_in, H_in, W_in) (5-D case)
grid (Tensor) – flow-field of shape (N, H_out, W_out, 2) (4-D case) or (N, D_out, H_out, W_out, 3) (5-D case)
mode (str) – interpolation mode to calculate output values 'bilinear' | 'nearest' | 'bicubic'. Default: 'bilinear' Note: mode='bicubic' supports only 4-D input. When mode='bilinear' and the input is 5-D, the interpolation mode used internally will actually be trilinear. However, when the input is 4-D, the interpolation mode will legitimately be bilinear.
padding_mode (str) – padding mode for outside grid values 'zeros' | 'border' | 'reflection'. Default: 'zeros'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input as squares rather than points. If set to True, the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels. If set to False, they are instead considered as referring to the corner points of the input’s corner pixels, making the sampling more resolution agnostic. This option parallels the align_corners option in interpolate(), and so whichever option is used here should also be used there to resize the input image before grid sampling. Default: False
Returns
output Tensor Return type
output (Tensor) Warning When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). Note mode='bicubic' is implemented using the cubic convolution algorithm with α = -0.75. The constant α may differ between packages. For example, PIL uses -0.5 and OpenCV uses -0.75. This algorithm may “overshoot” the range of values it’s interpolating. For example, it may produce negative values or values greater than 255 when interpolating input in [0, 255]. Clamp the results with torch.clamp() to ensure they are within the valid range.
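A minimal sanity check of the grid convention, as a hedged sketch: an identity affine grid (built with affine_grid(), documented below) samples every input pixel at its own location, so grid_sample() reproduces the input when the align_corners setting matches on both calls.

```python
import torch
import torch.nn.functional as F

inp = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

# identity affine matrix theta: output coordinates equal input coordinates
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)

# sampling the input at its own pixel centers returns the input unchanged
out = F.grid_sample(inp, grid, mode="bilinear", align_corners=False)
assert torch.allclose(out, inp, atol=1e-6)
```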
affine_grid
torch.nn.functional.affine_grid(theta, size, align_corners=None) [source]
Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta. Note This function is often used in conjunction with grid_sample() to build Spatial Transformer Networks . Parameters
theta (Tensor) – input batch of affine matrices with shape (N × 2 × 3) for 2D or (N × 3 × 4) for 3D
size (torch.Size) – the target output image size (N × C × H × W for 2D or N × C × D × H × W for 3D). Example: torch.Size((32, 3, 24, 24))
align_corners (bool, optional) – if True, consider -1 and 1 to refer to the centers of the corner pixels rather than the image corners. Refer to grid_sample() for a more complete description. A grid generated by affine_grid() should be passed to grid_sample() with the same setting for this option. Default: False
Returns
output Tensor of size (N × H × W × 2) Return type
output (Tensor) Warning When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). Warning When align_corners = True, 2D affine transforms on 1D data and 3D affine transforms on 2D data (that is, when one of the spatial dimensions has unit size) are ill-defined, and not an intended use case. This is not a problem when align_corners = False. Up to version 1.2.0, all grid points along a unit dimension were considered arbitrarily to be at -1. From version 1.3.0, under align_corners = True all grid points along a unit dimension are considered to be at 0 (the center of the input image).
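A small shape sketch of the 2D case: for a batch of N identity matrices and a target size (N, C, H, W), the generated grid has shape (N, H, W, 2).

```python
import torch
import torch.nn.functional as F

# identity 2D affine matrices for a batch of 1
theta = torch.eye(2, 3).unsqueeze(0)            # shape (1, 2, 3)
grid = F.affine_grid(theta, [1, 3, 24, 24], align_corners=False)
print(grid.shape)  # torch.Size([1, 24, 24, 2]) -> (N, H, W, 2)
```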
DataParallel functions (multi-GPU, distributed) data_parallel
torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None) [source]
Evaluates module(input) in parallel across the GPUs given in device_ids. This is the functional version of the DataParallel module. Parameters
module (Module) – the module to evaluate in parallel
inputs (Tensor) – inputs to the module
device_ids (list of python:int or torch.device) – GPU ids on which to replicate module
output_device (list of python:int or torch.device) – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0]) Returns
a Tensor containing the result of module(input) located on output_device | |
doc_25280 |
Set the (group) id for the artist. Parameters
gidstr | |
doc_25281 | See Migration guide for more details. tf.compat.v1.raw_ops.VarIsInitializedOp
tf.raw_ops.VarIsInitializedOp(
resource, name=None
)
Args
resource A Tensor of type resource. the input resource handle.
name A name for the operation (optional).
Returns A Tensor of type bool. | |
doc_25282 |
Return the path of the scikit-learn data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named ‘scikit_learn_data’ in the user home folder. Alternatively, it can be set by the ‘SCIKIT_LEARN_DATA’ environment variable or programmatically by giving an explicit folder path. The ‘~’ symbol is expanded to the user home folder. If the folder does not already exist, it is automatically created. Parameters
data_homestr, default=None
The path to scikit-learn data directory. If None, the default path is ~/scikit_learn_data. |
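A small usage sketch (the resolved path depends on the environment and on the SCIKIT_LEARN_DATA variable; the custom folder name here is arbitrary):

```python
import tempfile
from sklearn.datasets import get_data_home

# default location: ~/scikit_learn_data (created if it does not exist)
path = get_data_home()

# an explicit folder can be passed instead; '~' would be expanded
custom = get_data_home(tempfile.mkdtemp())
print(path, custom)
```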
doc_25283 | Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed! | |
doc_25284 |
Set the font stretch or width. Options are: 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded' or 'ultra-expanded', or a numeric value in the range 0-1000. | |
doc_25285 | os.O_DIRECT
os.O_DIRECTORY
os.O_NOFOLLOW
os.O_NOATIME
os.O_PATH
os.O_TMPFILE
os.O_SHLOCK
os.O_EXLOCK
The above constants are extensions and not present if they are not defined by the C library. Changed in version 3.4: Add O_PATH on systems that support it. Add O_TMPFILE, only available on Linux Kernel 3.11 or newer. | |
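Since these flags are platform-dependent, portable code guards them with hasattr before use; a minimal sketch:

```python
import os

# O_* extensions may be absent on some platforms; build the flag set defensively
flags = os.O_RDONLY
if hasattr(os, "O_DIRECTORY"):
    flags |= os.O_DIRECTORY  # os.open() then fails if the path is not a directory

fd = os.open(".", flags)
os.close(fd)
```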
doc_25286 |
Fit underlying estimators. Parameters
X(sparse) array-like of shape (n_samples, n_features)
Data.
ynumpy array of shape [n_samples]
Multi-class targets. Returns
self | |
doc_25287 | Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Warning new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a numpy array and want to avoid a copy, use torch.from_numpy(). Warning When data is a tensor x, new_tensor() reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended. Parameters
data (array_like) – The returned Tensor copies data.
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> tensor = torch.ones((2,), dtype=torch.int8)
>>> data = [[0, 1], [2, 3]]
>>> tensor.new_tensor(data)
tensor([[ 0, 1],
[ 2, 3]], dtype=torch.int8) | |
doc_25288 |
Set the edge color of the Figure rectangle. Parameters
colorcolor | |
doc_25289 | Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the input tensor Return type
pruned_tensor (torch.Tensor) | |
doc_25290 |
Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data | |
doc_25291 | Raises a ValidationError if str(value) contains one or more nulls characters ('\x00').
Parameters:
message – If not None, overrides message.
code – If not None, overrides code.
message
The error message used by ValidationError if validation fails. Defaults to "Null characters are not allowed.".
code
The error code used by ValidationError if validation fails. Defaults to "null_characters_not_allowed". | |
doc_25292 |
Bases: matplotlib.patches.Patch A scale-free ellipse. Parameters
xy(float, float)
xy coordinates of ellipse centre.
widthfloat
Total length (diameter) of horizontal axis.
heightfloat
Total length (diameter) of vertical axis.
anglefloat, default: 0
Rotation in degrees anti-clockwise. Notes Valid keyword arguments are:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha unknown
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float propertyangle
Return the angle of the ellipse.
propertycenter
Return the center of the ellipse.
get_angle()[source]
Return the angle of the ellipse.
get_center()[source]
Return the center of the ellipse.
get_height()[source]
Return the height of the ellipse.
get_patch_transform()[source]
Return the Transform instance mapping patch coordinates to data coordinates. For example, one may define a patch of a circle which represents a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinate) by 5.
get_path()[source]
Return the path of the ellipse.
get_width()[source]
Return the width of the ellipse.
propertyheight
Return the height of the ellipse.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, angle=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, capstyle=<UNSET>, center=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, fill=<UNSET>, gid=<UNSET>, hatch=<UNSET>, height=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
angle float
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
center (float, float)
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
height float
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float
set_angle(angle)[source]
Set the angle of the ellipse. Parameters
anglefloat
set_center(xy)[source]
Set the center of the ellipse. Parameters
xy(float, float)
set_height(height)[source]
Set the height of the ellipse. Parameters
heightfloat
set_width(width)[source]
Set the width of the ellipse. Parameters
widthfloat
propertywidth
Return the width of the ellipse.
Examples using matplotlib.patches.Ellipse
Clipping images with patches
Axes box aspect
Plot a confidence ellipse of a two-dimensional dataset
Scale invariant angle label
Annotating Plots
AnnotationBbox demo
Reference for Matplotlib artists
Dolphins
Ellipse Demo
Hatch demo
Circles, Wedges and Polygons
ggplot style sheet
Grayscale style sheet
Style sheets reference
Anatomy of a figure
Looking Glass
Anchored Artists
Custom projection
Packed-bubble chart
Draw flat objects in 3D plot
Radar chart (aka spider or star chart)
Ellipse With Units
Anchored Box04
Annotate Explain
Simple Annotate01
Legend guide
Transformations Tutorial
Annotations | |
doc_25293 | Return an upper bound on ratio() relatively quickly. | |
doc_25294 | A shortcut that returns the referenced object. | |
doc_25295 |
Return the minimum value of the Index. Parameters
axis:{None}
Dummy argument for consistency with Series.
skipna:bool, default True
Exclude NA/null values when showing the result. *args, **kwargs
Additional arguments and keywords for compatibility with NumPy. Returns
scalar
Minimum value. See also Index.max
Return the maximum value of the object. Series.min
Return the minimum value in a Series. DataFrame.min
Return the minimum values in a DataFrame. Examples
>>> idx = pd.Index([3, 2, 1])
>>> idx.min()
1
>>> idx = pd.Index(['c', 'b', 'a'])
>>> idx.min()
'a'
For a MultiIndex, the minimum is determined lexicographically.
>>> idx = pd.MultiIndex.from_product([('a', 'b'), (2, 1)])
>>> idx.min()
('a', 1) | |
doc_25296 |
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters
(for the __new__ method; see Notes below)
shapetuple of ints
Shape of created array.
dtypedata-type, optional
Any object that can be interpreted as a numpy data type.
bufferobject exposing buffer interface, optional
Used to fill the array with data.
offsetint, optional
Offset of array data in buffer.
stridestuple of ints, optional
Strides of data in memory.
order{‘C’, ‘F’}, optional
Row-major (C-style) or column-major (Fortran-style) order. See also array
Construct an array. zeros
Create an array, each element of which is zero. empty
Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype
Create a data-type. numpy.typing.NDArray
An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e-323]])
Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
Attributes
Tndarray
Transpose of the array.
databuffer
The array’s elements, in memory.
dtypedtype object
Describes the format of the elements in the array.
flagsdict
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
flatnumpy.flatiter object
Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO).
imagndarray
Imaginary part of the array.
realndarray
Real part of the array.
sizeint
Number of elements in the array.
itemsizeint
The memory use of each array element in bytes.
nbytesint
The total number of bytes required to store the array data, i.e., itemsize * size.
ndimint
The array’s number of dimensions.
shapetuple of ints
Shape of the array.
stridestuple of ints
The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4).
ctypesctypes object
Class containing properties of the array needed for interaction with ctypes.
basendarray
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored. | |
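The strides example above (a contiguous (3, 4) int16 array in C order) can be checked directly:

```python
import numpy as np

a = np.zeros((3, 4), dtype=np.int16, order="C")
# one column step = itemsize (2 bytes); one row step = 4 * 2 = 8 bytes
assert a.strides == (8, 2)
assert a.itemsize == 2 and a.nbytes == 3 * 4 * 2
print(a.strides)  # (8, 2)
```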
doc_25297 | This attribute contains a mapping of error code integers to two-element tuples containing a short and long message. For example, {code: (shortmessage,
longmessage)}. The shortmessage is usually used as the message key in an error response, and longmessage as the explain key. It is used by send_response_only() and send_error() methods. | |
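For instance, looking up one entry of the mapping (the keys are populated from http.HTTPStatus, which compares equal to plain integers, so integer codes work):

```python
from http.server import BaseHTTPRequestHandler

shortmessage, longmessage = BaseHTTPRequestHandler.responses[404]
assert shortmessage == "Not Found"
print(shortmessage, "/", longmessage)
```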
doc_25298 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBias
tf.raw_ops.QuantizedDepthwiseConv2DWithBias(
input, filter, bias, min_input, max_input, min_filter, max_filter, strides,
padding, out_type=tf.dtypes.qint32, dilations=[1, 1, 1, 1], name=None
)
Args
input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The original input tensor.
filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The original filter tensor.
bias A Tensor of type float32. The original bias tensor.
min_input A Tensor of type float32. The float value that the minimum quantized input value represents.
max_input A Tensor of type float32. The float value that the maximum quantized input value represents.
min_filter A Tensor of type float32. The float value that the minimum quantized filter value represents.
max_filter A Tensor of type float32. The float value that the maximum quantized filter value represents.
strides A list of ints. List of stride values.
padding A string from: "SAME", "VALID".
out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32. The type of the output.
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. List of dilation values.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type.
min_output A Tensor of type float32.
max_output A Tensor of type float32. | |
doc_25299 |
Return the alpha value used for blending - not supported on all backends. |