_id | text | title
|---|---|---|
doc_23800 | Maildir instances do not keep any open files and the underlying mailboxes do not support locking, so this method does nothing. | |
doc_23801 |
Call self as a function. | |
doc_23802 | See Migration guide for more details. tf.compat.v1.raw_ops.FakeQuantWithMinMaxArgsGradient
tf.raw_ops.FakeQuantWithMinMaxArgsGradient(
gradients, inputs, min=-6, max=6, num_bits=8, narrow_range=False, name=None
)
Args
gradients A Tensor of type float32. Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.
inputs A Tensor of type float32. Values passed as inputs to the FakeQuantWithMinMaxArgs operation.
min An optional float. Defaults to -6.
max An optional float. Defaults to 6.
num_bits An optional int. Defaults to 8.
narrow_range An optional bool. Defaults to False.
name A name for the operation (optional).
Returns A Tensor of type float32. | |
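A hedged pure-Python sketch of the straight-through gradient rule this op implements: the incoming gradient is passed through where the input fell inside the [min, max] quantization range and zeroed outside it. The nudging of min/max to exact quantization steps that the real op performs is omitted here for clarity, so this is an illustration of the rule, not the TensorFlow kernel.

```python
def fake_quant_grad(gradients, inputs, min=-6.0, max=6.0):
    # Straight-through estimator: gradient flows only where the input
    # was inside the fake-quantization clamp range [min, max].
    return [g if min <= x <= max else 0.0
            for g, x in zip(gradients, inputs)]

print(fake_quant_grad([1.0, 1.0, 1.0], [-7.0, 0.0, 7.0]))  # [0.0, 1.0, 0.0]
```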
doc_23803 |
Pad right side of strings in the Series/Index. Equivalent to str.ljust(). Parameters
width:int
Minimum width of resulting string; additional characters will be filled with fillchar.
fillchar:str
Additional character for filling, default is whitespace. Returns
filled:Series/Index of objects. | |
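The elementwise behavior is exactly that of plain `str.ljust`, as the description says; a stdlib-only sketch of the same padding rule:

```python
# Each string is left-justified (padded on the right) to the requested
# minimum width, with extra positions filled by fillchar.
values = ["dog", "bird", "mouse"]
padded = [s.ljust(8, "-") for s in values]
print(padded)  # ['dog-----', 'bird----', 'mouse---']
```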
doc_23804 | Duplicate the socket. The newly created socket is non-inheritable. Changed in version 3.4: The socket is now non-inheritable. | |
doc_23805 | See torch.logaddexp() | |
doc_23806 |
Simple example of how to use the MCP and MCP_Geometric classes. See the MCP and MCP_Geometric class documentation for explanation of the path-finding algorithm. Parameters
arrayndarray
Array of costs.
startiterable
n-d index into array defining the starting point
enditerable
n-d index into array defining the end point
fully_connectedbool (optional)
If True, diagonal moves are permitted, if False, only axial moves.
geometricbool (optional)
If True, the MCP_Geometric class is used to calculate costs, if False, the MCP base class is used. See the class documentation for an explanation of the differences between MCP and MCP_Geometric. Returns
pathlist
List of n-d index tuples defining the path from start to end.
costfloat
Cost of the path. If geometric is False, the cost of the path is the sum of the values of array along the path. If geometric is True, a finer computation is made (see the documentation of the MCP_Geometric class). See also
MCP, MCP_Geometric
Examples >>> import numpy as np
>>> from skimage.graph import route_through_array
>>>
>>> image = np.array([[1, 3], [10, 12]])
>>> image
array([[ 1, 3],
[10, 12]])
>>> # Forbid diagonal steps
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False)
([(0, 0), (0, 1), (1, 1)], 9.5)
>>> # Now allow diagonal steps: the path goes directly from start to end
>>> route_through_array(image, [0, 0], [1, 1])
([(0, 0), (1, 1)], 9.19238815542512)
>>> # Cost is the sum of array values along the path (16 = 1 + 3 + 12)
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False,
... geometric=False)
([(0, 0), (0, 1), (1, 1)], 16.0)
>>> # Larger array where we display the path that is selected
>>> image = np.arange(36).reshape((6, 6))
>>> image
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
>>> # Find the path with lowest cost
>>> indices, weight = route_through_array(image, (0, 0), (5, 5))
>>> indices = np.stack(indices, axis=-1)
>>> path = np.zeros_like(image)
>>> path[indices[0], indices[1]] = 1
>>> path
array([[1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1]]) | |
doc_23807 |
Compute the approximate Hessian Determinant over an image. The 2D approximate method uses box filters over integral images to compute the approximate Hessian Determinant, as described in [1]. Parameters
imagearray
The image over which to compute Hessian Determinant.
sigmafloat, optional
Standard deviation used for the Gaussian kernel, used for the Hessian matrix.
approximatebool, optional
If True and the image is 2D, use a much faster approximate computation. This argument has no effect on 3D and higher images. Returns
outarray
The array of the Determinant of Hessians. Notes For 2D images when approximate=True, the running time of this method only depends on size of the image. It is independent of sigma as one would expect. The downside is that the result for sigma less than 3 is not accurate, i.e., not similar to the result obtained if someone computed the Hessian and took its determinant. References
1
Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf | |
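To illustrate what the Determinant of Hessian measures, here is a hedged, stdlib-only sketch using plain central finite differences in place of skimage's Gaussian/box-filter machinery: det H = f_xx * f_yy - f_xy**2. For f(x, y) = x**2 + y**2 the Hessian is [[2, 0], [0, 2]], so the determinant is 4 everywhere. The function name and approach here are illustrative, not skimage's implementation.

```python
def hessian_det(f, x, y, h=1e-3):
    # Central second differences for the Hessian entries.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx * fyy - fxy**2

print(round(hessian_det(lambda x, y: x**2 + y**2, 1.0, 2.0), 3))  # 4.0
```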
doc_23808 | Put item into the queue. | |
doc_23809 |
Initialize self. See help(type(self)) for accurate signature. | |
doc_23810 |
Set the font size, in points, of the cell text. Parameters
sizefloat
Notes As long as auto font size has not been disabled, the value will be clipped such that the text fits horizontally into the cell. You can disable this behavior using auto_set_font_size. >>> the_table.auto_set_font_size(False)
>>> the_table.set_fontsize(20)
However, there is no automatic scaling of the row height so that the text may exceed the cell boundary. | |
doc_23811 | Point objects are instantiated using arguments that represent the component coordinates of the point or with a single sequence coordinates. For example, the following are equivalent: >>> pnt = Point(5, 23)
>>> pnt = Point([5, 23])
Empty Point objects may be instantiated by passing no arguments or an empty sequence. The following are equivalent: >>> pnt = Point()
>>> pnt = Point([]) | |
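A hypothetical minimal sketch of the dual construction style described above: the class accepts either separate coordinate arguments or a single sequence, and an empty call (or empty sequence) produces an empty point. This is an illustration of the calling convention only, not the geometry-backed implementation.

```python
class Point:
    def __init__(self, *args):
        # A single non-numeric argument is treated as a coordinate sequence.
        if len(args) == 1 and not isinstance(args[0], (int, float)):
            args = tuple(args[0])
        self.coords = args  # () represents an empty point

print(Point(5, 23).coords == Point([5, 23]).coords)  # True
print(Point().coords == Point([]).coords)            # True
```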
doc_23812 |
Return a function that splits a string into a sequence of tokens. Returns
tokenizer: callable
A function to split a string into a sequence of tokens. | |
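A hedged sketch of such a factory: it closes over a compiled regular expression and returns a callable that splits a string into tokens. The default pattern here mirrors scikit-learn's default `token_pattern` (runs of two or more word characters), but the function body is an assumption for illustration, not the library's code.

```python
import re

def build_tokenizer(token_pattern=r"(?u)\b\w\w+\b"):
    # Compile once; the returned callable reuses the compiled pattern.
    pattern = re.compile(token_pattern)
    return pattern.findall

tokenize = build_tokenizer()
print(tokenize("A quick test"))  # ['quick', 'test']
```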
doc_23813 |
Shape of the i’th bicluster. Parameters
iint
The index of the cluster. Returns
n_rowsint
Number of rows in the bicluster.
n_colsint
Number of columns in the bicluster. | |
doc_23814 |
Bases: matplotlib.gridspec.GridSpecBase A grid layout to place subplots within a figure. The location of the grid cells is determined in a similar way to SubplotParams using left, right, top, bottom, wspace and hspace. Parameters
nrows, ncolsint
The number of rows and columns of the grid.
figureFigure, optional
Only used for constrained layout to create a proper layoutgrid.
left, right, top, bottomfloat, optional
Extent of the subplots as a fraction of figure width or height. Left cannot be larger than right, and bottom cannot be larger than top. If not given, the values will be inferred from a figure or rcParams at draw time. See also GridSpec.get_subplot_params.
wspacefloat, optional
The amount of width reserved for space between subplots, expressed as a fraction of the average axis width. If not given, the values will be inferred from a figure or rcParams when necessary. See also GridSpec.get_subplot_params.
hspacefloat, optional
The amount of height reserved for space between subplots, expressed as a fraction of the average axis height. If not given, the values will be inferred from a figure or rcParams when necessary. See also GridSpec.get_subplot_params.
width_ratiosarray-like of length ncols, optional
Defines the relative widths of the columns. Each column gets a relative width of width_ratios[i] / sum(width_ratios). If not given, all columns will have the same width.
height_ratiosarray-like of length nrows, optional
Defines the relative heights of the rows. Each row gets a relative height of height_ratios[i] / sum(height_ratios). If not given, all rows will have the same height. get_subplot_params(figure=None)[source]
Return the SubplotParams for the GridSpec. In order of precedence, the values are taken from non-None attributes of the GridSpec, the provided figure, and rcParams["figure.subplot.*"].
locally_modified_subplot_params()[source]
Return a list of the names of the subplot parameters explicitly set in the GridSpec. This is a subset of the attributes of SubplotParams.
tight_layout(figure, renderer=None, pad=1.08, h_pad=None, w_pad=None, rect=None)[source]
Adjust subplot parameters to give specified padding. Parameters
padfloat
Padding between the figure edge and the edges of subplots, as a fraction of the font-size.
h_pad, w_padfloat, optional
Padding (height/width) between edges of adjacent subplots. Defaults to pad.
recttuple of 4 floats, default: (0, 0, 1, 1), i.e. the whole figure
(left, bottom, right, top) rectangle in normalized figure coordinates that the whole subplots area (including labels) will fit into.
update(**kwargs)[source]
Update the subplot parameters of the grid. Parameters that are not explicitly given are not changed. Setting a parameter to None resets it to rcParams["figure.subplot.*"]. Parameters
left, right, top, bottomfloat or None, optional
Extent of the subplots as a fraction of figure width or height.
wspace, hspacefloat, optional
Spacing between the subplots as a fraction of the average subplot width / height.
Examples using matplotlib.gridspec.GridSpec
Psd Demo
Scatter plot with histograms
Streamplot
Aligning Labels
Resizing axes with constrained layout
Resizing axes with tight layout
Combining two subplots using subplots and GridSpec
Using Gridspec to make multi-column/row subplot layouts
Nested Gridspecs
Figure subfigures
Creating multiple subplots using plt.subplots
Text Rotation Mode
Custom spines with axisartist
GridSpec demo
Nested GridSpecs
Constrained Layout Guide
Tight Layout guide
Arranging multiple Axes in a Figure
origin and extent in imshow | |
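The width_ratios / height_ratios rule quoted above is simple arithmetic; a stdlib-only sketch of it, without requiring matplotlib:

```python
def relative_widths(width_ratios):
    # Each column gets width_ratios[i] / sum(width_ratios) of the grid width,
    # per the GridSpec documentation above.
    total = sum(width_ratios)
    return [r / total for r in width_ratios]

print(relative_widths([1, 2, 1]))  # [0.25, 0.5, 0.25]
```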
doc_23815 | See Migration guide for more details. tf.compat.v1.io.parse_example
tf.compat.v1.parse_example(
serialized, features, name=None, example_names=None
)
Parses a number of serialized Example protos given in serialized. We refer to serialized as a batch with batch_size many entries of individual Example protos. example_names may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not None, example_names must be the same length as serialized. This op parses serialized examples into a dictionary mapping keys to Tensor, SparseTensor, and RaggedTensor objects. features is a dict from keys to VarLenFeature, SparseFeature, RaggedFeature, and FixedLenFeature objects. Each VarLenFeature and SparseFeature is mapped to a SparseTensor; each FixedLenFeature is mapped to a Tensor; and each RaggedFeature is mapped to a RaggedTensor. Each VarLenFeature maps to a SparseTensor of the specified type representing a ragged matrix. Its indices are [batch, index] where batch identifies the example in serialized, and index is the value's index in the list of values associated with that feature and example. Each SparseFeature maps to a SparseTensor of the specified type representing a Tensor of dense_shape [batch_size] + SparseFeature.size. Its values come from the feature in the examples with key value_key. A values[i] comes from a position k in the feature of an example at batch entry batch. This positional information is recorded in indices[i] as [batch, index_0, index_1, ...] where index_j is the k-th value of the feature in the example with key SparseFeature.index_key[j]. In other words, we split the indices (except the first index indicating the batch entry) of a SparseTensor by dimension into different features of the Example. Due to its complexity, a VarLenFeature should be preferred over a SparseFeature whenever possible. Each FixedLenFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(),) + df.shape. FixedLenFeature entries with a default_value are optional.
With no default value, we will fail if that Feature is missing from any example in serialized. Each FixedLenSequenceFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(), None) + df.shape. All examples in serialized will be padded with default_value along the second dimension. Each RaggedFeature maps to a RaggedTensor of the specified type. It is formed by stacking the RaggedTensor for each example, where the RaggedTensor for each individual example is constructed using the tensors specified by RaggedTensor.values_key and RaggedTensor.partition. See the tf.io.RaggedFeature documentation for details and examples. Examples: For example, if one expects a tf.float32 VarLenFeature ft and three serialized Examples are provided: serialized = [
features
{ feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
features
{ feature []},
features
{ feature { key: "ft" value { float_list { value: [3.0] } } } }
]
then the output will look like: {"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
values=[1.0, 2.0, 3.0],
dense_shape=(3, 2)) }
If instead a FixedLenSequenceFeature with default_value = -1.0 and shape=[] is used then the output will look like: {"ft": [[1.0, 2.0], [3.0, -1.0]]}
Given two Example input protos in serialized: [
features {
feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
feature { key: "gps" value { float_list { value: [] } } }
},
features {
feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
feature { key: "dank" value { int64_list { value: [ 42 ] } } }
feature { key: "gps" value { } }
}
]
And arguments example_names: ["input0", "input1"],
features: {
"kw": VarLenFeature(tf.string),
"dank": VarLenFeature(tf.int64),
"gps": VarLenFeature(tf.float32),
}
Then the output is a dictionary: {
"kw": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
values=["knit", "big", "emmy"],
dense_shape=[2, 2]),
"dank": SparseTensor(
indices=[[1, 0]],
values=[42],
dense_shape=[2, 1]),
"gps": SparseTensor(
indices=[],
values=[],
dense_shape=[2, 0]),
}
For dense results in two serialized Examples: [
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
}
]
We can use arguments: example_names: ["input0", "input1"],
features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
}
And the expected output is: {
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
}
An alternative to VarLenFeature to obtain a SparseTensor is SparseFeature. For example, given two Example input protos in serialized: [
features {
feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } }
},
features {
feature { key: "val" value { float_list { value: [ 0.0 ] } } }
feature { key: "ix" value { int64_list { value: [ 42 ] } } }
}
]
And arguments example_names: ["input0", "input1"],
features: {
"sparse": SparseFeature(
index_key="ix", value_key="val", dtype=tf.float32, size=100),
}
Then the output is a dictionary: {
"sparse": SparseTensor(
indices=[[0, 3], [0, 20], [1, 42]],
values=[0.5, -1.0, 0.0],
dense_shape=[2, 100]),
}
See the tf.io.RaggedFeature documentation for examples showing how RaggedFeature can be used to obtain RaggedTensors.
Args
serialized A vector (1-D Tensor) of strings, a batch of binary serialized Example protos.
features A dict mapping feature keys to FixedLenFeature, VarLenFeature, SparseFeature, and RaggedFeature values.
example_names A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch.
name A name for this operation (optional).
Returns A dict mapping feature keys to Tensor, SparseTensor, and RaggedTensor values.
Raises
ValueError if any feature is invalid. | |
doc_23816 | Delete the folder whose name is folder. If the folder contains any messages, a NotEmptyError exception will be raised and the folder will not be deleted. | |
doc_23817 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
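A hypothetical sketch of the `<component>__<parameter>` routing described above: the key before the first `__` selects the nested component, and the remainder is forwarded to that component's own set_params. The `Estimator` class here is invented for illustration; it only demonstrates the naming convention, not the scikit-learn implementation.

```python
class Estimator:
    def __init__(self, **params):
        self.__dict__.update(params)

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                # "clf__C" -> route "C" to the nested component "clf".
                component, _, subkey = key.partition("__")
                getattr(self, component).set_params(**{subkey: value})
            else:
                setattr(self, key, value)
        return self

pipe = Estimator(clf=Estimator(C=1.0))
pipe.set_params(clf__C=10.0)
print(pipe.clf.C)  # 10.0
```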
doc_23818 | This is a dictionary that maps module names to modules which have already been loaded. This can be manipulated to force reloading of modules and other tricks. However, replacing the dictionary will not necessarily work as expected and deleting essential items from the dictionary may cause Python to fail. | |
doc_23819 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedRelu
tf.raw_ops.QuantizedRelu(
features, min_features, max_features, out_type=tf.dtypes.quint8, name=None
)
Args
features A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
min_features A Tensor of type float32. The float value that the lowest quantized value represents.
max_features A Tensor of type float32. The float value that the highest quantized value represents.
out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.quint8.
name A name for the operation (optional).
Returns A tuple of Tensor objects (activations, min_activations, max_activations). activations A Tensor of type out_type.
min_activations A Tensor of type float32.
max_activations A Tensor of type float32. | |
doc_23820 |
Compare to another Series and show the differences. New in version 1.1.0. Parameters
other:Series
Object to compare with.
align_axis:{0 or ‘index’, 1 or ‘columns’}, default 1
Determine which axis to align the comparison on.
0, or ‘index’:Resulting differences are stacked vertically
with rows drawn alternately from self and other.
1, or ‘columns’:Resulting differences are aligned horizontally
with columns drawn alternately from self and other.
keep_shape:bool, default False
If true, all rows and columns are kept. Otherwise, only the ones with different values are kept.
keep_equal:bool, default False
If true, the result keeps values that are equal. Otherwise, equal values are shown as NaNs. Returns
Series or DataFrame
If axis is 0 or ‘index’ the result will be a Series. The resulting index will be a MultiIndex with ‘self’ and ‘other’ stacked alternately at the inner level. If axis is 1 or ‘columns’ the result will be a DataFrame. It will have two columns namely ‘self’ and ‘other’. See also DataFrame.compare
Compare with another DataFrame and show differences. Notes Matching NaNs will not appear as a difference. Examples
>>> s1 = pd.Series(["a", "b", "c", "d", "e"])
>>> s2 = pd.Series(["a", "a", "c", "b", "e"])
Align the differences on columns
>>> s1.compare(s2)
self other
1 b a
3 d b
Stack the differences on indices
>>> s1.compare(s2, align_axis=0)
1 self b
other a
3 self d
other b
dtype: object
Keep all original rows
>>> s1.compare(s2, keep_shape=True)
self other
0 NaN NaN
1 b a
2 NaN NaN
3 d b
4 NaN NaN
Keep all original rows and also all original values
>>> s1.compare(s2, keep_shape=True, keep_equal=True)
self other
0 a a
1 b a
2 c c
3 d b
4 e e | |
doc_23821 | >>> numpy.test(label='slow')
The test method may take two or more arguments; the first, label, is a string specifying what should be tested, and the second, verbose, is an integer giving the level of output verbosity. See the numpy.test docstring for details. The default value for label is ‘fast’, which will run the standard tests. The string ‘full’ will run the full battery of tests, including those identified as being slow to run. If verbose is 1 or less, the tests will just show information messages about the tests that are run; but if it is greater than 1, the tests will also provide warnings on missing tests. So if you want to run every test and get messages about which modules don’t have tests: >>> numpy.test(label='full', verbose=2) # or numpy.test('full', 2)
Finally, if you are only interested in testing a subset of NumPy, for example, the core module, use the following: >>> numpy.core.test()
Running tests from the command line If you want to build NumPy in order to work on NumPy itself, use runtests.py. To run NumPy’s full test suite: $ python runtests.py
Testing a subset of NumPy: $ python runtests.py -t numpy/core/tests
For detailed info on testing, see Testing builds Other methods of running tests Run tests using your favourite IDE such as vscode or pycharm Writing your own tests If you are writing a package that you’d like to become part of NumPy, please write the tests as you develop the package. Every Python module, extension module, or subpackage in the NumPy package directory should have a corresponding test_<name>.py file. Pytest examines these files for test methods (named test*) and test classes (named Test*). Suppose you have a NumPy module numpy/xxx/yyy.py containing a function zzz(). To test this function you would create a test module called test_yyy.py. If you only need to test one aspect of zzz, you can simply add a test function: def test_zzz():
    assert zzz() == 'Hello from zzz'
More often, we need to group a number of tests together, so we create a test class: import pytest
# import xxx symbols
from numpy.xxx.yyy import zzz
class TestZzz:
    def test_simple(self):
        assert zzz() == 'Hello from zzz'
    def test_invalid_parameter(self):
        with pytest.raises(ValueError, match='.*some matching regex.*'):
            ...
Within these test methods, assert and related functions are used to test whether a certain assumption is valid. If the assertion fails, the test fails. pytest internally rewrites the assert statement to give informative output when it fails, so should be preferred over the legacy variant numpy.testing.assert_. Whereas plain assert statements are ignored when running Python in optimized mode with -O, this is not an issue when running tests with pytest. Similarly, the pytest functions pytest.raises and pytest.warns should be preferred over their legacy counterparts numpy.testing.assert_raises and numpy.testing.assert_warns, since the pytest variants are more broadly used and allow more explicit targeting of warnings and errors when used with the match regex. Note that test_ functions or methods should not have a docstring, because that makes it hard to identify the test from the output of running the test suite with verbose=2 (or similar verbosity setting). Use plain comments (#) if necessary. Also since much of NumPy is legacy code that was originally written without unit tests, there are still several modules that don’t have tests yet. Please feel free to choose one of these modules and develop tests for it. Using C code in tests NumPy exposes a rich C-API . These are tested using c-extension modules written “as-if” they know nothing about the internals of NumPy, rather using the official C-API interfaces only. Examples of such modules are tests for a user-defined rational dtype in _rational_tests or the ufunc machinery tests in _umath_tests which are part of the binary distribution. Starting from version 1.21, you can also write snippets of C code in tests that will be compiled locally into c-extension modules and loaded into python. numpy.testing.extbuild.build_and_import_extension(modname, functions, *, prologue='', build_dir=None, include_dirs=[], more_init='')
Build and import a c-extension module modname from a list of function fragments functions. Parameters
functionslist of fragments
Each fragment is a sequence of func_name, calling convention, snippet.
prologuestring
Code to precede the rest, usually extra #include or #define macros.
build_dirpathlib.Path
Where to build the module, usually a temporary directory
include_dirslist
Extra directories to find include files when compiling
more_initstring
Code to appear in the module PyMODINIT_FUNC Returns
out: module
The module will have been loaded and is ready for use Examples >>> functions = [("test_bytes", "METH_O", """
    if (!PyBytes_Check(args)) {
        Py_RETURN_FALSE;
    }
    Py_RETURN_TRUE;
""")]
>>> mod = build_and_import_extension("testme", functions)
>>> assert not mod.test_bytes(u'abc')
>>> assert mod.test_bytes(b'abc')
Labeling tests Unlabeled tests like the ones above are run in the default numpy.test() run. If you want to label your test as slow - and therefore reserved for a full numpy.test(label='full') run, you can label it with pytest.mark.slow: import pytest
@pytest.mark.slow
def test_big():
    print('Big, slow test')
Similarly for methods: class TestZzz:
    @pytest.mark.slow
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')
Easier setup and teardown functions / methods Testing looks for module-level or class-level setup and teardown functions by name; thus: def setup():
    """Module-level setup"""
    print('doing setup')

def teardown():
    """Module-level teardown"""
    print('doing teardown')

class TestMe:
    def setup(self):
        """Class-level setup"""
        print('doing setup')
    def teardown(self):
        """Class-level teardown"""
        print('doing teardown')
Setup and teardown functions to functions and methods are known as “fixtures”, and their use is not encouraged. Parametric tests One very nice feature of testing is allowing easy testing across a range of parameters - a nasty problem for standard unit tests. Use the pytest.mark.parametrize decorator. Doctests Doctests are a convenient way of documenting the behavior of a function and allowing that behavior to be tested at the same time. The output of an interactive Python session can be included in the docstring of a function, and the test framework can run the example and compare the actual output to the expected output. The doctests can be run by adding the doctests argument to the test() call; for example, to run all tests (including doctests) for numpy.lib: >>> import numpy as np
>>> np.lib.test(doctests=True)
The doctests are run as if they are in a fresh Python instance which has executed import numpy as np. Tests that are part of a NumPy subpackage will have that subpackage already imported. E.g. for a test in numpy/linalg/tests/, the namespace will be created such that from numpy import linalg has already executed. tests/ Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a tests/ subdirectory. For our example, if it doesn’t already exist you will need to create a tests/ directory in numpy/xxx/. So the path for test_yyy.py is numpy/xxx/tests/test_yyy.py. Once numpy/xxx/tests/test_yyy.py is written, it’s possible to run the tests by going to the tests/ directory and typing: python test_yyy.py
Or if you add numpy/xxx/tests/ to the Python path, you could run the tests interactively in the interpreter like this: >>> import test_yyy
>>> test_yyy.test()
__init__.py and setup.py
Usually, however, adding the tests/ directory to the Python path isn’t desirable. Instead it would be better to invoke the test straight from the module xxx. To this end, simply place the following lines at the end of your package’s __init__.py file: ...
def test(level=1, verbosity=1):
    from numpy.testing import Tester
    return Tester().test(level, verbosity)
You will also need to add the tests directory in the configuration section of your setup.py: ...
def configuration(parent_package='', top_path=None):
    ...
    config.add_subpackage('tests')
    return config
...
Now you can do the following to test your module: >>> import numpy
>>> numpy.xxx.test()
Also, when invoking the entire NumPy test suite, your tests will be found and run: >>> import numpy
>>> numpy.test()
# your tests are included and run automatically!
Tips & Tricks Creating many similar tests If you have a collection of tests that must be run multiple times with minor variations, it can be helpful to create a base class containing all the common tests, and then create a subclass for each variation. Several examples of this technique exist in NumPy; below are excerpts from one in numpy/linalg/tests/test_linalg.py: class LinalgTestCase:
    def test_single(self):
        a = array([[1., 2.], [3., 4.]], dtype=single)
        b = array([2., 1.], dtype=single)
        self.do(a, b)

    def test_double(self):
        a = array([[1., 2.], [3., 4.]], dtype=double)
        b = array([2., 1.], dtype=double)
        self.do(a, b)
    ...

class TestSolve(LinalgTestCase):
    def do(self, a, b):
        x = linalg.solve(a, b)
        assert_allclose(b, dot(a, x))
        assert imply(isinstance(b, matrix), isinstance(x, matrix))

class TestInv(LinalgTestCase):
    def do(self, a, b):
        a_inv = linalg.inv(a)
        assert_allclose(dot(a, a_inv), identity(asarray(a).shape[0]))
        assert imply(isinstance(a, matrix), isinstance(a_inv, matrix))
In this case, we wanted to test solving a linear algebra problem using matrices of several data types, using linalg.solve and linalg.inv. The common test cases (for single-precision, double-precision, etc. matrices) are collected in LinalgTestCase. Known failures & skipping tests Sometimes you might want to skip a test or mark it as a known failure, such as when the test suite is being written before the code it’s meant to test, or if a test only fails on a particular architecture. To skip a test, simply use skipif: import pytest
@pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...")
def test_something(foo):
    ...
The test is marked as skipped if SkipMyTest evaluates to nonzero, and the message in verbose test output is the second argument given to skipif. Similarly, a test can be marked as a known failure by using xfail: import pytest
@pytest.mark.xfail(MyTestFails, reason="This test is known to fail because...")
def test_something_else(foo):
    ...
Of course, a test can be unconditionally skipped or marked as a known failure by using skip or xfail without argument, respectively. The total number of skipped and known failing tests is displayed at the end of the test run. Skipped tests are marked as 'S' in the test results (or 'SKIPPED' for verbose > 1), and known failing tests are marked as 'x' (or 'XFAIL' if verbose > 1). Tests on random data Tests on random data are good, but since test failures are meant to expose new bugs or regressions, a test that passes most of the time but fails occasionally with no code changes is not helpful. Make the random data deterministic by setting the random number seed before generating it. Use either Python’s random.seed(some_number) or NumPy’s numpy.random.seed(some_number), depending on the source of random numbers. Alternatively, you can use Hypothesis to generate arbitrary data. Hypothesis manages both Python’s and Numpy’s random seeds for you, and provides a very concise and powerful way to describe data (including hypothesis.extra.numpy, e.g. for a set of mutually-broadcastable shapes). The advantages over random generation include tools to replay and share failures without requiring a fixed seed, reporting minimal examples for each failure, and better-than-naive-random techniques for triggering bugs. Documentation for numpy.test
numpy.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, durations=- 1, tests=None)
Pytest test runner. A test function is typically added to a package’s __init__.py like so: from numpy._pytesttester import PytestTester
test = PytestTester(__name__).test
del PytestTester
Calling this test function finds and runs all tests associated with the module and all its sub-modules. Parameters
module_namemodule name
The name of the module to test. Notes Unlike the previous nose-based implementation, this class is not publicly exposed as it performs some numpy-specific warning suppression. Attributes
module_namestr
Full path to the package to test. | |
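The parametric-testing section above mentions pytest.mark.parametrize without showing it; a minimal sketch (run under pytest, this expands into one reported case per argument tuple):

```python
import pytest

# One test function, three collected cases: test_square[2-4],
# test_square[3-9], test_square[4-16].
@pytest.mark.parametrize("n, expected", [(2, 4), (3, 9), (4, 16)])
def test_square(n, expected):
    assert n * n == expected
```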
doc_23822 |
Fits the imputer on X and returns self. Parameters
Xarray-like, shape (n_samples, n_features)
Input data, where “n_samples” is the number of samples and “n_features” is the number of features.
yignored
Returns
selfobject
Returns self. | |
doc_23823 | See Migration guide for more details. tf.compat.v1.xla.experimental.jit_scope
@contextlib.contextmanager
tf.xla.experimental.jit_scope(
compile_ops=True, separate_compiled_gradients=False
)
Note: This is an experimental feature.
The compilation is a hint and only supported on a best-effort basis. Example usage: with tf.xla.experimental.jit_scope():
c = tf.matmul(a, b) # compiled
with tf.xla.experimental.jit_scope(compile_ops=False):
d = tf.matmul(a, c) # not compiled
with tf.xla.experimental.jit_scope(
compile_ops=lambda node_def: 'matmul' in node_def.op.lower()):
e = tf.matmul(a, b) + d # matmul is compiled, the addition is not.
Example of separate_compiled_gradients: # In the example below, the computations for f, g and h will all be compiled
# in separate scopes.
with tf.xla.experimental.jit_scope(
separate_compiled_gradients=True):
f = tf.matmul(a, b)
g = tf.gradients([f], [a, b], name='mygrads1')
h = tf.gradients([f], [a, b], name='mygrads2')
Ops that are not in the scope may be clustered and compiled with ops in the scope with compile_ops=True, while the ops in the scope with compile_ops=False will never be compiled. For example: # In the example below, x and loss may be clustered and compiled together,
# while y will not be compiled.
with tf.xla.experimental.jit_scope():
x = tf.matmul(a, b)
with tf.xla.experimental.jit_scope(compile_ops=False):
y = tf.matmul(c, d)
loss = x + y
If you want to only compile the ops in the scope with compile_ops=True, consider adding an outer jit_scope(compile_ops=False): # In the example below, only x will be compiled.
with tf.xla.experimental.jit_scope(compile_ops=False):
with tf.xla.experimental.jit_scope():
x = tf.matmul(a, b)
y = tf.matmul(c, d)
loss = x + y
Args
compile_ops Whether to enable or disable compilation in the scope. Either a Python bool, or a callable that accepts the parameter node_def and returns a python bool.
separate_compiled_gradients If true put each gradient subgraph into a separate compilation scope. This gives fine-grained control over which portions of the graph will be compiled as a single unit. Compiling gradients separately may yield better performance for some graphs. The scope is named based on the scope of the forward computation as well as the name of the gradients. As a result, the gradients will be compiled in a scope that is separate from both the forward computation, and from other gradients.
Raises
RuntimeError if called when eager execution is enabled.
Yields The current scope, enabling or disabling compilation. | |
doc_23824 | Construct a full (“absolute”) URL by combining a “base URL” (base) with another URL (url). Informally, this uses components of the base URL, in particular the addressing scheme, the network location and (part of) the path, to provide missing components in the relative URL. For example: >>> from urllib.parse import urljoin
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', 'FAQ.html')
'http://www.cwi.nl/%7Eguido/FAQ.html'
The allow_fragments argument has the same meaning and default as for urlparse(). Note If url is an absolute URL (that is, it starts with // or scheme://), the url’s hostname and/or scheme will be present in the result. For example: >>> urljoin('http://www.cwi.nl/%7Eguido/Python.html',
... '//www.python.org/%7Eguido')
'http://www.python.org/%7Eguido'
If you do not want that behavior, preprocess the url with urlsplit() and urlunsplit(), removing possible scheme and netloc parts. Changed in version 3.5: Behavior updated to match the semantics defined in RFC 3986. | |
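The preprocessing suggested above (stripping scheme and netloc with urlsplit()/urlunsplit() before joining) can be sketched like this:

```python
from urllib.parse import urljoin, urlsplit, urlunsplit

base = 'http://www.cwi.nl/%7Eguido/Python.html'
url = '//www.python.org/%7Eguido'

# Strip the scheme and netloc so urljoin cannot adopt the other host.
parts = urlsplit(url)
relative = urlunsplit(('', '', parts.path, parts.query, parts.fragment))

result = urljoin(base, relative)
print(result)  # 'http://www.cwi.nl/%7Eguido' -- the base's host is kept
```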
doc_23825 |
Function to calculate only the edges of the bins used by the histogram function. Parameters
aarray_like
Input data. The histogram is computed over the flattened array.
binsint or sequence of scalars or str, optional
If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths. If bins is a string from the list below, histogram_bin_edges will use the method chosen to calculate the optimal bin width and consequently the number of bins (see Notes for more detail on the estimators) from the data that falls within the requested range. While the bin width will be optimal for the actual data in the range, the number of bins will be computed to fill the entire range, including the empty portions. For visualisation, using the ‘auto’ option is suggested. Weighted data is not supported for automated bin size selection. ‘auto’
Maximum of the ‘sturges’ and ‘fd’ estimators. Provides good all around performance. ‘fd’ (Freedman Diaconis Estimator)
Robust (resilient to outliers) estimator that takes into account data variability and data size. ‘doane’
An improved version of Sturges’ estimator that works better with non-normal datasets. ‘scott’
Less robust estimator that takes into account data variability and data size. ‘stone’
Estimator based on leave-one-out cross-validation estimate of the integrated squared error. Can be regarded as a generalization of Scott’s rule. ‘rice’
Estimator does not take variability into account, only data size. Commonly overestimates number of bins required. ‘sturges’
R’s default method, only accounts for data size. Only optimal for gaussian data and underestimates number of bins for large non-gaussian datasets. ‘sqrt’
Square root (of data size) estimator, used by Excel and other programs for its speed and simplicity.
range(float, float), optional
The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()). Values outside the range are ignored. The first element of the range must be less than or equal to the second. range affects the automatic bin computation as well. While bin width is computed to be optimal based on the actual data within range, the bin count will fill the entire range including portions containing no data.
weightsarray_like, optional
An array of weights, of the same shape as a. Each value in a only contributes its associated weight towards the bin count (instead of 1). This is currently not used by any of the bin estimators, but may be in the future. Returns
bin_edgesarray of dtype float
The edges to pass into histogram See also histogram
Notes The methods to estimate the optimal number of bins are well founded in literature, and are inspired by the choices R provides for histogram visualisation. Note that having the number of bins proportional to \(n^{1/3}\) is asymptotically optimal, which is why it appears in most estimators. These are simply plug-in methods that give good starting points for number of bins. In the equations below, \(h\) is the binwidth and \(n_h\) is the number of bins. All estimators that compute bin counts are recast to bin width using the ptp of the data. The final bin count is obtained from np.round(np.ceil(range / h)). The final bin width is often less than what is returned by the estimators below. ‘auto’ (maximum of the ‘sturges’ and ‘fd’ estimators)
A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD. Avoids the overly conservative behaviour of FD and Sturges for small and large datasets respectively. Switchover point is usually \(a.size \approx 1000\). ‘fd’ (Freedman Diaconis Estimator)
\[h = 2 \frac{IQR}{n^{1/3}}\] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers. ‘scott’
\[h = \sigma \sqrt[3]{\frac{24 * \sqrt{\pi}}{n}}\] The binwidth is proportional to the standard deviation of the data and inversely proportional to cube root of x.size. Can be too conservative for small datasets, but is quite good for large datasets. The standard deviation is not very robust to outliers. Values are very similar to the Freedman-Diaconis estimator in the absence of outliers. ‘rice’
\[n_h = 2n^{1/3}\] The number of bins is only proportional to cube root of a.size. It tends to overestimate the number of bins and it does not take into account data variability. ‘sturges’
\[n_h = \log _{2}n+1\] The number of bins is the base 2 log of a.size. This estimator assumes normality of data and is too conservative for larger, non-normal datasets. This is the default method in R’s hist method. ‘doane’
\[ \begin{align}\begin{aligned}n_h = 1 + \log_{2}(n) + \log_{2}(1 + \frac{|g_1|}{\sigma_{g_1}})\\g_1 = mean[(\frac{x - \mu}{\sigma})^3]\\\sigma_{g_1} = \sqrt{\frac{6(n - 2)}{(n + 1)(n + 3)}}\end{aligned}\end{align} \] An improved version of Sturges’ formula that produces better estimates for non-normal datasets. This estimator attempts to account for the skew of the data. ‘sqrt’
\[n_h = \sqrt n\] The simplest and fastest estimator. Only takes into account the data size. Examples >>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5])
>>> np.histogram_bin_edges(arr, bins='auto', range=(0, 1))
array([0. , 0.25, 0.5 , 0.75, 1. ])
>>> np.histogram_bin_edges(arr, bins=2)
array([0. , 2.5, 5. ])
For consistency with histogram, an array of pre-computed bins is passed through unmodified: >>> np.histogram_bin_edges(arr, [1, 2])
array([1, 2])
This function allows one set of bins to be computed, and reused across multiple histograms: >>> shared_bins = np.histogram_bin_edges(arr, bins='auto')
>>> shared_bins
array([0., 1., 2., 3., 4., 5.])
>>> group_id = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1])
>>> hist_0, _ = np.histogram(arr[group_id == 0], bins=shared_bins)
>>> hist_1, _ = np.histogram(arr[group_id == 1], bins=shared_bins)
>>> hist_0; hist_1
array([1, 1, 0, 1, 0])
array([2, 0, 1, 1, 2])
Which gives more easily comparable results than using separate bins for each histogram: >>> hist_0, bins_0 = np.histogram(arr[group_id == 0], bins='auto')
>>> hist_1, bins_1 = np.histogram(arr[group_id == 1], bins='auto')
>>> hist_0; hist_1
array([1, 1, 1])
array([2, 1, 1, 2])
>>> bins_0; bins_1
array([0., 1., 2., 3.])
array([0. , 1.25, 2.5 , 3.75, 5. ]) | |
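The ‘fd’ rule from the Notes above can be checked by hand: compute h = 2·IQR/n^(1/3), then the bin count that fills the range is ceil(ptp / h). This is a sketch with illustrative data, not part of the original examples:

```python
import numpy as np

data = np.arange(100, dtype=float)

# Freedman-Diaconis binwidth: h = 2 * IQR / n**(1/3)
q75, q25 = np.percentile(data, [75, 25])
h = 2 * (q75 - q25) / data.size ** (1 / 3)

# The bin count fills the entire range: ceil(ptp / h) bins.
n_bins = int(np.ceil(np.ptp(data) / h))

edges = np.histogram_bin_edges(data, bins='fd')
assert len(edges) == n_bins + 1   # 5 bins -> 6 edges for this data
```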
doc_23826 | The backend options class for ProcessGroupAgent, which is derived from RpcBackendOptions. Parameters
num_send_recv_threads (int, optional) – The number of threads in the thread-pool used by ProcessGroupAgent (default: 4).
rpc_timeout (float, optional) – The default timeout, in seconds, for RPC requests (default: 60 seconds). If the RPC has not completed in this timeframe, an exception indicating so will be raised. Callers can override this timeout for individual RPCs in rpc_sync() and rpc_async() if necessary.
init_method (str, optional) – The URL to initialize ProcessGroupGloo (default: env://).
property init_method
URL specifying how to initialize the process group. Default is env://
property num_send_recv_threads
The number of threads in the thread-pool used by ProcessGroupAgent.
property rpc_timeout
A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out. | |
doc_23827 | Remove all HTTP/1.1 “Hop-by-Hop” headers from a list or Headers object. This operation works in-place. Changelog New in version 0.5. Parameters
headers (Union[werkzeug.datastructures.Headers, List[Tuple[str, str]]]) – a list or Headers object. Return type
None | |
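The headers removed are the hop-by-hop set from RFC 2616 section 13.5.1. A rough pure-Python sketch of the in-place behavior (the helper name and header set are illustrative, not Werkzeug's actual implementation):

```python
# Hop-by-hop header names per RFC 2616 section 13.5.1 (lowercased).
HOP_BY_HOP = frozenset([
    'connection', 'keep-alive', 'proxy-authenticate',
    'proxy-authorization', 'te', 'trailer',
    'transfer-encoding', 'upgrade',
])


def remove_hop_by_hop(headers):
    """Remove hop-by-hop headers from a list of (name, value) tuples,
    in place, returning None like the Werkzeug helper."""
    headers[:] = [
        (name, value) for name, value in headers
        if name.lower() not in HOP_BY_HOP
    ]


headers = [('Connection', 'close'),
           ('Content-Type', 'text/html'),
           ('Transfer-Encoding', 'chunked')]
remove_hop_by_hop(headers)
print(headers)  # only ('Content-Type', 'text/html') remains
```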
doc_23828 |
Fit the model from data in X and transform X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features. If affinity is “precomputed”, X is instead of shape (n_samples, n_samples) and is interpreted as a precomputed adjacency graph computed from samples.
yIgnored
Returns
X_newarray-like of shape (n_samples, n_components) | |
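For example (using scikit-learn's SpectralEmbedding; the random data is illustrative and only the shapes matter here):

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.RandomState(0)
X = rng.rand(40, 5)            # n_samples=40, n_features=5

embedder = SpectralEmbedding(n_components=2, random_state=0)
X_new = embedder.fit_transform(X)

print(X_new.shape)             # (n_samples, n_components) == (40, 2)
```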
doc_23829 | tf.compat.v1.distribute.experimental.CentralStorageStrategy(
compute_devices=None, parameter_device=None
)
Variables are assigned to local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs. For Example: strategy = tf.distribute.experimental.CentralStorageStrategy()
# Create a dataset
ds = tf.data.Dataset.range(5).batch(2)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)
with strategy.scope():
@tf.function
def train_step(val):
return val + 1
# Iterate over the distributed dataset
for x in dist_dataset:
# process dataset elements
strategy.run(train_step, args=(x,))
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take an tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. 
This may be computed using input_context.get_per_replica_batch_size.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input). If you are interested in last partial batch handling, read this section.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more. The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
# process dataset elements
result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]
Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation which is buffer_size is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args
dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A tf.distribute.DistributedDataset.
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
experimental_make_numpy_dataset View source
experimental_make_numpy_dataset(
numpy_input, session=None
)
Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
Args
numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of Numpy arrays are stacked, as that is normal tf.data.Dataset behavior.
session (TensorFlow v1.x graph execution only) A session used for initialization.
Returns A tf.data.Dataset representing numpy_input.
experimental_run View source
experimental_run(
fn, input_iterator=None
)
Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica).
Args
fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors.
input_iterator (Optional) input iterator from which the inputs are taken.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica).
make_dataset_iterator View source
make_dataset_iterator(
dataset
)
Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker etc.
Args
dataset tf.data.Dataset that will be distributed evenly across all replicas.
Returns An tf.distribute.InputIterator which returns inputs for each step of the computation. User should call initialize on the returned iterator.
make_input_fn_iterator View source
make_input_fn_iterator(
input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER
)
Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take an tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(input_context.num_input_pipelines,
input_context.input_pipeline_id)
with strategy.scope():
iterator = strategy.make_input_fn_iterator(input_fn)
replica_results = strategy.experimental_run(replica_fn, iterator)
The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size.
Args
input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset.
replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker.
Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run() or you can iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica().
reduce View source
reduce(
reduce_op, value, axis=None
)
Reduce value across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
i = tf.distribute.get_replica_context().replica_id_in_sync_group
return tf.identity(i)
per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients.
tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4.
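The axis semantics above can be sketched with plain NumPy (the per-replica values mirror the 8-element, 2-replica example; this is just the arithmetic, not the TF API):

```python
import numpy as np

# A global batch of 8 split across 2 replicas, as in the example above.
replica_0 = np.array([0, 1, 2, 3])
replica_1 = np.array([4, 5, 6, 7])
per_replica = np.stack([replica_0, replica_1])   # shape (replicas, batch)

# axis=None: aggregate across replicas only -> one value per batch slot.
across_replicas = per_replica.sum(axis=0)
print(across_replicas)        # [ 4  6  8 10], i.e. [0+4, 1+5, 2+6, 3+7]

# axis=0 (the batch dimension inside each replica's tensor): aggregate
# across replicas AND across batch elements -> a single scalar.
full_sum = per_replica.sum()
print(full_sum)               # 28, i.e. 0+1+2+3+4+5+6+7
```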
Args
reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that correspond to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a Tensorflow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular python code. Example usage: Constant tensor input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
return input*2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
}
DistributedValues input.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
def value_fn(value_context):
return value_context.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn2(input):
return input*2
return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Use tf.distribute.ReplicaContext to allreduce values.
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
def value_fn(value_context):
return tf.constant(value_context.replica_id_in_sync_group)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
def replica_fn(input):
return tf.distribute.get_replica_context().all_reduce("sum", input)
return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
PerReplica:{
0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
}
Args
fn The function to run on each replica.
args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues.
options An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).
scope View source
scope()
Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training framework like keras model.fit. If you're not using model.fit, you need to use strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce) which require to be in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training frameworks methods such as model.compile, model.fit etc are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See detailed example in distributed keras tutorial. Note that simply calling the model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. 
The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns A context manager.
update_config_proto View source
update_config_proto(
config_proto
)
Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config contains settings needed to run the strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance.
Args
config_proto a tf.ConfigProto object.
Returns The updated copy of the config_proto. | |
doc_23830 | returns a spherical interpolation to the given vector. slerp(Vector2, float) -> Vector2 Calculates the spherical interpolation from self to the given Vector. The second argument - often called t - must be in the range [-1, 1]. It parametrizes where - in between the two vectors - the result should be. If a negative value is given the interpolation will not take the complement of the shortest path. | |
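As a rough illustration of what a 2D spherical interpolation computes, here is a pure-Python sketch. The formula is standard slerp; the helper name, tuple interface, and the choice to interpolate lengths linearly are assumptions for this sketch, not pygame's actual implementation, and it only handles t in [0, 1]:

```python
import math

def slerp2d(a, b, t):
    """Spherically interpolate direction, linearly interpolate length.
    a, b are 2D vectors as (x, y) tuples; t in [0, 1] for this sketch."""
    ax, ay = a
    bx, by = b
    la, lb = math.hypot(ax, ay), math.hypot(bx, by)
    # Angle between the two directions, clamped for float safety
    cos_theta = max(-1.0, min(1.0, (ax * bx + ay * by) / (la * lb)))
    theta = math.acos(cos_theta)
    if theta < 1e-9:  # (nearly) parallel: fall back to a linear blend
        return (ax + (bx - ax) * t, ay + (by - ay) * t)
    s = math.sin(theta)
    w1 = math.sin((1 - t) * theta) / s
    w2 = math.sin(t * theta) / s
    length = la + (lb - la) * t  # interpolate magnitudes linearly
    # Blend the unit directions, then rescale to the interpolated length
    x = w1 * ax / la + w2 * bx / lb
    y = w1 * ay / la + w2 * by / lb
    return (x * length, y * length)

print(slerp2d((1, 0), (0, 1), 0.5))  # midway on the unit circle, ~ (0.707, 0.707)
```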
doc_23831 | Accept: application/json
Might receive an error response indicating that the DELETE method is not allowed on that resource: HTTP/1.1 405 Method Not Allowed
Content-Type: application/json
Content-Length: 42
{"detail": "Method 'DELETE' not allowed."}
Validation errors are handled slightly differently, and will include the field names as the keys in the response. If the validation error was not specific to a particular field then it will use the "non_field_errors" key, or whatever string value has been set for the NON_FIELD_ERRORS_KEY setting. An example validation error might look like this: HTTP/1.1 400 Bad Request
Content-Type: application/json
Content-Length: 94
{"amount": ["A valid integer is required."], "description": ["This field may not be blank."]}
Custom exception handling You can implement custom exception handling by creating a handler function that converts exceptions raised in your API views into response objects. This allows you to control the style of error responses used by your API. The function must take a pair of arguments, the first is the exception to be handled, and the second is a dictionary containing any extra context such as the view currently being handled. The exception handler function should either return a Response object, or return None if the exception cannot be handled. If the handler returns None then the exception will be re-raised and Django will return a standard HTTP 500 'server error' response. For example, you might want to ensure that all error responses include the HTTP status code in the body of the response, like so: HTTP/1.1 405 Method Not Allowed
Content-Type: application/json
Content-Length: 62
{"status_code": 405, "detail": "Method 'DELETE' not allowed."}
In order to alter the style of the response, you could write the following custom exception handler: from rest_framework.views import exception_handler
def custom_exception_handler(exc, context):
# Call REST framework's default exception handler first,
# to get the standard error response.
response = exception_handler(exc, context)
# Now add the HTTP status code to the response.
if response is not None:
response.data['status_code'] = response.status_code
return response
The context argument is not used by the default handler, but can be useful if the exception handler needs further information such as the view currently being handled, which can be accessed as context['view']. The exception handler must also be configured in your settings, using the EXCEPTION_HANDLER setting key. For example: REST_FRAMEWORK = {
'EXCEPTION_HANDLER': 'my_project.my_app.utils.custom_exception_handler'
}
If not specified, the 'EXCEPTION_HANDLER' setting defaults to the standard exception handler provided by REST framework: REST_FRAMEWORK = {
'EXCEPTION_HANDLER': 'rest_framework.views.exception_handler'
}
Note that the exception handler will only be called for responses generated by raised exceptions. It will not be used for any responses returned directly by the view, such as the HTTP_400_BAD_REQUEST responses that are returned by the generic views when serializer validation fails. API Reference APIException Signature: APIException() The base class for all exceptions raised inside an APIView class or @api_view. To provide a custom exception, subclass APIException and set the .status_code, .default_detail, and default_code attributes on the class. For example, if your API relies on a third party service that may sometimes be unreachable, you might want to implement an exception for the "503 Service Unavailable" HTTP response code. You could do this like so: from rest_framework.exceptions import APIException
class ServiceUnavailable(APIException):
status_code = 503
default_detail = 'Service temporarily unavailable, try again later.'
default_code = 'service_unavailable'
Inspecting API exceptions There are a number of different properties available for inspecting the status of an API exception. You can use these to build custom exception handling for your project. The available attributes and methods are:
.detail - Return the textual description of the error.
.get_codes() - Return the code identifier of the error.
.get_full_details() - Return both the textual description and the code identifier. In most cases the error detail will be a simple item: >>> print(exc.detail)
You do not have permission to perform this action.
>>> print(exc.get_codes())
permission_denied
>>> print(exc.get_full_details())
{'message':'You do not have permission to perform this action.','code':'permission_denied'}
In the case of validation errors the error detail will be either a list or dictionary of items: >>> print(exc.detail)
{"name":"This field is required.","age":"A valid integer is required."}
>>> print(exc.get_codes())
{"name":"required","age":"invalid"}
>>> print(exc.get_full_details())
{"name":{"message":"This field is required.","code":"required"},"age":{"message":"A valid integer is required.","code":"invalid"}}
ParseError Signature: ParseError(detail=None, code=None) Raised if the request contains malformed data when accessing request.data. By default this exception results in a response with the HTTP status code "400 Bad Request". AuthenticationFailed Signature: AuthenticationFailed(detail=None, code=None) Raised when an incoming request includes incorrect authentication. By default this exception results in a response with the HTTP status code "401 Unauthenticated", but it may also result in a "403 Forbidden" response, depending on the authentication scheme in use. See the authentication documentation for more details. NotAuthenticated Signature: NotAuthenticated(detail=None, code=None) Raised when an unauthenticated request fails the permission checks. By default this exception results in a response with the HTTP status code "401 Unauthenticated", but it may also result in a "403 Forbidden" response, depending on the authentication scheme in use. See the authentication documentation for more details. PermissionDenied Signature: PermissionDenied(detail=None, code=None) Raised when an authenticated request fails the permission checks. By default this exception results in a response with the HTTP status code "403 Forbidden". NotFound Signature: NotFound(detail=None, code=None) Raised when a resource does not exist at the given URL. This exception is equivalent to the standard Http404 Django exception. By default this exception results in a response with the HTTP status code "404 Not Found". MethodNotAllowed Signature: MethodNotAllowed(method, detail=None, code=None) Raised when an incoming request occurs that does not map to a handler method on the view. By default this exception results in a response with the HTTP status code "405 Method Not Allowed". NotAcceptable Signature: NotAcceptable(detail=None, code=None) Raised when an incoming request occurs with an Accept header that cannot be satisfied by any of the available renderers.
By default this exception results in a response with the HTTP status code "406 Not Acceptable". UnsupportedMediaType Signature: UnsupportedMediaType(media_type, detail=None, code=None) Raised if there are no parsers that can handle the content type of the request data when accessing request.data. By default this exception results in a response with the HTTP status code "415 Unsupported Media Type". Throttled Signature: Throttled(wait=None, detail=None, code=None) Raised when an incoming request fails the throttling checks. By default this exception results in a response with the HTTP status code "429 Too Many Requests". ValidationError Signature: ValidationError(detail, code=None) The ValidationError exception is slightly different from the other APIException classes: The detail argument is mandatory, not optional. The detail argument may be a list or dictionary of error details, and may also be a nested data structure. By using a dictionary, you can specify field-level errors while performing object-level validation in the validate() method of a serializer. For example. raise serializers.ValidationError({'name': 'Please enter a valid name.'})
By convention you should import the serializers module and use a fully qualified ValidationError style, in order to differentiate it from Django's built-in validation error. For example. raise serializers.ValidationError('This field must be an integer value.')
The ValidationError class should be used for serializer and field validation, and by validator classes. It is also raised when calling serializer.is_valid with the raise_exception keyword argument: serializer.is_valid(raise_exception=True)
The generic views use the raise_exception=True flag, which means that you can override the style of validation error responses globally in your API. To do so, use a custom exception handler, as described above. By default this exception results in a response with the HTTP status code "400 Bad Request". Generic Error Views Django REST Framework provides two error views suitable for providing generic JSON 500 Server Error and 400 Bad Request responses. (Django's default error views provide HTML responses, which may not be appropriate for an API-only application.) Use these as per Django's Customizing error views documentation. rest_framework.exceptions.server_error Returns a response with status code 500 and application/json content type. Set as handler500: handler500 = 'rest_framework.exceptions.server_error'
rest_framework.exceptions.bad_request Returns a response with status code 400 and application/json content type. Set as handler400: handler400 = 'rest_framework.exceptions.bad_request'
exceptions.py | |
doc_23832 | In [0, 1]. Used to disambiguate wall times during a repeated interval. (A repeated interval occurs when clocks are rolled back at the end of daylight saving time or when the UTC offset for the current zone is decreased for political reasons.) The value 0 (1) represents the earlier (later) of the two moments with the same wall time representation. New in version 3.6. | |
doc_23833 |
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses. | |
doc_23834 | Base class for content managers. Provides the standard registry mechanisms to register converters between MIME content and other representations, as well as the get_content and set_content dispatch methods.
get_content(msg, *args, **kw)
Look up a handler function based on the mimetype of msg (see next paragraph), call it, passing through all arguments, and return the result of the call. The expectation is that the handler will extract the payload from msg and return an object that encodes information about the extracted data. To find the handler, look for the following keys in the registry, stopping with the first one found: the string representing the full MIME type (maintype/subtype) the string representing the maintype
the empty string If none of these keys produce a handler, raise a KeyError for the full MIME type.
set_content(msg, obj, *args, **kw)
If the maintype is multipart, raise a TypeError; otherwise look up a handler function based on the type of obj (see next paragraph), call clear_content() on the msg, and call the handler function, passing through all arguments. The expectation is that the handler will transform and store obj into msg, possibly making other changes to msg as well, such as adding various MIME headers to encode information needed to interpret the stored data. To find the handler, obtain the type of obj (typ = type(obj)), and look for the following keys in the registry, stopping with the first one found: the type itself (typ) the type’s fully qualified name (typ.__module__ + '.' +
typ.__qualname__). the type’s qualname (typ.__qualname__) the type’s name (typ.__name__). If none of the above match, repeat all of the checks above for each of the types in the MRO (typ.__mro__). Finally, if no other key yields a handler, check for a handler for the key None. If there is no handler for None, raise a KeyError for the fully qualified name of the type. Also add a MIME-Version header if one is not present (see also MIMEPart).
add_get_handler(key, handler)
Record the function handler as the handler for key. For the possible values of key, see get_content().
add_set_handler(typekey, handler)
Record handler as the function to call when an object of a type matching typekey is passed to set_content(). For the possible values of typekey, see set_content(). | |
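A minimal runnable sketch of the registry dispatch described above. The lambda handler is illustrative only; real code would use the richer handlers already registered on email.contentmanager's stock managers:

```python
from email.contentmanager import ContentManager
from email.message import EmailMessage

cm = ContentManager()
# Register a get-handler keyed on the full MIME type 'text/plain';
# lookup falls back to 'text', then '', before raising KeyError.
cm.add_get_handler('text/plain', lambda msg, *args, **kw: msg.get_payload())

msg = EmailMessage()
msg.set_content("hello")      # stored via the default content manager
print(cm.get_content(msg))    # dispatches to our 'text/plain' handler
```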
doc_23835 |
Calls enable or disable based on toggled value. | |
doc_23836 |
Bases: torch.distributions.distribution.Distribution Generates uniformly distributed random samples from the half-open interval [low, high). Example: >>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))
>>> m.sample() # uniformly distributed in the range [0.0, 5.0)
tensor([ 2.3418])
Parameters
low (float or Tensor) – lower range (inclusive).
high (float or Tensor) – upper range (exclusive).
arg_constraints = {'high': Dependent(), 'low': Dependent()}
cdf(value) [source]
entropy() [source]
expand(batch_shape, _instance=None) [source]
has_rsample = True
icdf(value) [source]
log_prob(value) [source]
property mean
rsample(sample_shape=torch.Size([])) [source]
property stddev
property support
property variance | |
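Since torch may not be available here, the closed-form quantities these properties expose can be checked with a plain-Python sketch of the uniform distribution's formulas (an analogue, not the torch API):

```python
import math

low, high = 0.0, 5.0              # same bounds as the example above
width = high - low

mean = (low + high) / 2           # what `mean` returns
stddev = width / math.sqrt(12)    # what `stddev` returns for a uniform
entropy = math.log(width)         # what `entropy()` returns: log(high - low)
log_prob = -math.log(width)       # `log_prob(v)` for any v in [low, high)

print(mean, round(stddev, 4), round(entropy, 4), round(log_prob, 4))
```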
doc_23837 | Abstract base class for arrays. The recommended way to create concrete array types is by multiplying any ctypes data type with a positive integer. Alternatively, you can subclass this type and define _length_ and _type_ class variables. Array elements can be read and written using standard subscript and slice accesses; for slice reads, the resulting object is not itself an Array.
_length_
A positive integer specifying the number of elements in the array. Out-of-range subscripts result in an IndexError. Will be returned by len().
_type_
Specifies the type of each element in the array.
Array subclass constructors accept positional arguments, used to initialize the elements in order. | |
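A quick stdlib demonstration of the multiply-to-create pattern and the slice behavior noted above:

```python
import ctypes

IntArray5 = ctypes.c_int * 5        # concrete array type: 5 c_int elements
arr = IntArray5(1, 2, 3, 4, 5)      # positional args initialize in order

print(len(arr))      # 5, taken from the generated _length_
print(arr[1:3])      # [2, 3] -- a slice read yields a plain list, not an Array
arr[0] = 10          # subscript writes are supported
print(arr[0])        # 10
```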
doc_23838 |
Series basis polynomial of degree deg. Returns the series representing the basis polynomial of degree deg. New in version 1.7.0. Parameters
degint
Degree of the basis polynomial for the series. Must be >= 0.
domain{None, array_like}, optional
If given, the array must be of the form [beg, end], where beg and end are the endpoints of the domain. If None is given then the class domain is used. The default is None.
window{None, array_like}, optional
If given, the resulting array must be of the form [beg, end], where beg and end are the endpoints of the window. If None is given then the class window is used. The default is None. Returns
new_seriesseries
A series with the coefficient of the deg term set to one and all others zero. | |
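For instance, with numpy's power-basis Polynomial class (the other numpy.polynomial series classes expose basis the same way):

```python
from numpy.polynomial import Polynomial

p = Polynomial.basis(3)   # basis polynomial of degree 3, i.e. x**3
print(p.coef)             # [0. 0. 0. 1.] -- only the deg-3 coefficient is 1
print(p(2.0))             # 8.0, since x**3 evaluated at x=2
```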
doc_23839 | Return a callable which is used to create a LogRecord. New in version 3.2: This function has been provided, along with setLogRecordFactory(), to allow developers more control over how the LogRecord representing a logging event is constructed. See setLogRecordFactory() for more information about the how the factory is called. | |
doc_23840 | Default exception handler. This is called when an exception occurs and no exception handler is set. This can be called by a custom exception handler that wants to defer to the default handler behavior. context parameter has the same meaning as in call_exception_handler(). | |
doc_23841 | Stop autoincrement mode: cancels any recurring timer event initiated by Progressbar.start() for this progress bar. | |
doc_23842 | See Migration guide for more details. tf.compat.v1.manip.scatter_nd, tf.compat.v1.scatter_nd
tf.scatter_nd(
indices, updates, shape, name=None
)
Creates a new tensor by applying sparse updates to individual values or slices within a tensor (initially zero for numeric, empty for string) of the given shape according to indices. This operator is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor. This operation is similar to tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values) If indices contains duplicates, then their updates are accumulated (summed). Warning: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates -- because of some numerical approximation issues, numbers summed in different order may yield different results. indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape: indices.shape[-1] <= shape.rank
The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape indices.shape[:-1] + shape[indices.shape[-1]:]
The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements. In Python, this scatter operation would look like this: indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [0, 11, 0, 10, 9, 0, 0, 12]
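Because TensorFlow may not be installed, the same rank-1 scatter can be reproduced with a hedged NumPy analogue (np.add.at stands in for the accumulate-on-duplicates behavior; it is not the tf.scatter_nd implementation):

```python
import numpy as np

# Rank-1 example from above: scatter 4 elements into a zero tensor of shape [8]
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

out = np.zeros(8, dtype=updates.dtype)
np.add.at(out, indices[:, 0], updates)  # duplicates accumulate, like scatter_nd
print(out)  # [ 0 11  0 10  9  0  0 12]
```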
We can also insert entire slices of a higher-rank tensor all at once. For example, say we want to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values. In Python, this scatter operation would look like this: indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]],
[[5, 5, 5, 5], [6, 6, 6, 6],
[7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.
Args
indices A Tensor. Must be one of the following types: int32, int64. Index tensor.
updates A Tensor. Updates to scatter into output.
shape A Tensor. Must have the same type as indices. 1-D. The shape of the resulting tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as updates. | |
doc_23843 | See Migration guide for more details. tf.compat.v1.raw_ops.LookupTableImport
tf.raw_ops.LookupTableImport(
table_handle, keys, values, name=None
)
The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.
Args
table_handle A Tensor of type mutable string. Handle to the table.
keys A Tensor. Any shape. Keys to look up.
values A Tensor. Values to associate with keys.
name A name for the operation (optional).
Returns The created Operation. | |
doc_23844 | tf.compat.v1.errors.error_code_from_exception_type() | |
doc_23845 |
Return the label used for this artist in the legend. | |
doc_23846 | Refer to the corresponding attribute documentation in IPv4Network. | |
doc_23847 |
Return a view of the array with axis1 and axis2 interchanged. Refer to numpy.swapaxes for full documentation. See also numpy.swapaxes
equivalent function | |
doc_23848 |
Alias for set_facecolor. | |
doc_23849 | A wrapper around the JSON serializer from django.core.signing. Can only serialize basic data types. In addition, as JSON supports only string keys, note that using non-string keys in request.session won’t work as expected: >>> # initial assignment
>>> request.session[0] = 'bar'
>>> # subsequent requests following serialization & deserialization
>>> # of session data
>>> request.session[0] # KeyError
>>> request.session['0']
'bar'
Similarly, data that can’t be encoded in JSON, such as non-UTF8 bytes like '\xd9' (which raises UnicodeDecodeError), can’t be stored. See the Write your own serializer section for more details on limitations of JSON serialization. | |
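The string-key coercion described above comes from JSON itself, which you can observe with the stdlib json module:

```python
import json

data = {0: 'bar'}                        # non-string key
round_tripped = json.loads(json.dumps(data))
print(round_tripped)                     # {'0': 'bar'} -- the key is now a string
print(0 in round_tripped)                # False: integer lookups no longer match
```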
doc_23850 |
Write object to a comma-separated values (csv) file. Parameters
path_or_buf:str, path object, file-like object, or None, default None
String, path object (implementing os.PathLike[str]), or file-like object implementing a write() function. If None, the result is returned as a string. If a non-binary file object is passed, it should be opened with newline=’’, disabling universal newlines. If a binary file object is passed, mode might need to contain a ‘b’. Changed in version 1.2.0: Support for binary file objects was introduced.
sep:str, default ‘,’
String of length 1. Field delimiter for the output file.
na_rep:str, default ‘’
Missing data representation.
float_format:str, default None
Format string for floating point numbers.
columns:sequence, optional
Columns to write.
header:bool or list of str, default True
Write out the column names. If a list of strings is given it is assumed to be aliases for the column names.
index:bool, default True
Write row names (index).
index_label:str or sequence, or False, default None
Column label for index column(s) if desired. If None is given, and header and index are True, then the index names are used. A sequence should be given if the object uses MultiIndex. If False do not print fields for index names. Use index_label=False for easier importing in R.
mode:str
Python write mode, default ‘w’.
encoding:str, optional
A string representing the encoding to use in the output file, defaults to ‘utf-8’. encoding is not supported if path_or_buf is a non-binary file object.
compression:str or dict, default ‘infer’
For on-the-fly compression of the output data. If ‘infer’ and path_or_buf is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, or ‘.zst’ (otherwise no compression). Set to None for no compression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor, respectively. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}. Changed in version 1.0.0: May now be a dict with key ‘method’ as compression mode and other entries as additional compression options if compression mode is ‘zip’. Changed in version 1.1.0: Passing compression options as keys in dict is supported for compression modes ‘gzip’, ‘bz2’, ‘zstd’, and ‘zip’. Changed in version 1.2.0: Compression is supported for binary file objects. Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open instead of gzip.GzipFile which prevented setting mtime.
quoting:optional constant from csv module
Defaults to csv.QUOTE_MINIMAL. If you have set a float_format then floats are converted to strings and thus csv.QUOTE_NONNUMERIC will treat them as non-numeric.
quotechar:str, default ‘"’
String of length 1. Character used to quote fields.
line_terminator:str, optional
The newline character or character sequence to use in the output file. Defaults to os.linesep, which depends on the OS in which this method is called (e.g. ‘\n’ for Linux, ‘\r\n’ for Windows).
chunksize:int or None
Rows to write at a time.
date_format:str, default None
Format string for datetime objects.
doublequote:bool, default True
Control quoting of quotechar inside a field.
escapechar:str, default None
String of length 1. Character used to escape sep and quotechar when appropriate.
decimal:str, default ‘.’
Character recognized as decimal separator. E.g. use ‘,’ for European data.
errors:str, default ‘strict’
Specifies how encoding and decoding errors are to be handled. See the errors argument for open() for a full list of options. New in version 1.1.0.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. Returns
None or str
If path_or_buf is None, returns the resulting csv format as a string. Otherwise returns None. See also read_csv
Load a CSV file into a DataFrame. to_excel
Write DataFrame to an Excel file. Examples
>>> df = pd.DataFrame({'name': ['Raphael', 'Donatello'],
... 'mask': ['red', 'purple'],
... 'weapon': ['sai', 'bo staff']})
>>> df.to_csv(index=False)
'name,mask,weapon\nRaphael,red,sai\nDonatello,purple,bo staff\n'
Create ‘out.zip’ containing ‘out.csv’
>>> compression_opts = dict(method='zip',
... archive_name='out.csv')
>>> df.to_csv('out.zip', index=False,
... compression=compression_opts)
To write a csv file to a new folder or nested folder you will first need to create it using either Pathlib or os:
>>> from pathlib import Path
>>> filepath = Path('folder/subfolder/out.csv')
>>> filepath.parent.mkdir(parents=True, exist_ok=True)
>>> df.to_csv(filepath)
>>> import os
>>> os.makedirs('folder/subfolder', exist_ok=True)
>>> df.to_csv('folder/subfolder/out.csv') | |
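The decimal and float_format options described above can be combined when writing for European locales; a short sketch (the column names and values are hypothetical, and sep is changed to ';' so the ',' decimal separator stays unambiguous):

```python
import pandas as pd

# Hypothetical frame with one string and one float column.
df = pd.DataFrame({"item": ["a", "b"], "price": [1.5, 2.25]})

# Two decimal places via float_format, ',' as decimal separator,
# and ';' as the field delimiter.
csv_text = df.to_csv(index=False, sep=";", decimal=",", float_format="%.2f")
print(csv_text)  # item;price / a;1,50 / b;2,25 on separate lines
```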
doc_23851 | Attempt to find the spec to handle fullname within path. New in version 3.4. | |
doc_23852 | Closes the underlying Windows handle. If the handle is already closed, no error is raised. | |
doc_23853 | See Migration guide for more details. tf.compat.v1.raw_ops.Square
tf.raw_ops.Square(
x, name=None
)
Computes the square of x element-wise, i.e., \(y = x * x = x^2\).
tf.math.square([-2., 0., 3.])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([4., 0., 9.], dtype=float32)>
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_23854 | The get_search_results method modifies the list of objects displayed into those that match the provided search term. It accepts the request, a queryset that applies the current filters, and the user-provided search term. It returns a tuple containing a queryset modified to implement the search, and a boolean indicating if the results may contain duplicates. The default implementation searches the fields named in ModelAdmin.search_fields. This method may be overridden with your own custom search method. For example, you might wish to search by an integer field, or use an external tool such as Solr or Haystack. You must establish if the queryset changes implemented by your search method may introduce duplicates into the results, and return True in the second element of the return value. For example, to search by name and age, you could use: class PersonAdmin(admin.ModelAdmin):
    list_display = ('name', 'age')
    search_fields = ('name',)

    def get_search_results(self, request, queryset, search_term):
        queryset, may_have_duplicates = super().get_search_results(
            request, queryset, search_term,
        )
        try:
            search_term_as_int = int(search_term)
        except ValueError:
            pass
        else:
            queryset |= self.model.objects.filter(age=search_term_as_int)
        return queryset, may_have_duplicates
This implementation is more efficient than search_fields =
('name', '=age') which results in a string comparison for the numeric field, for example ... OR UPPER("polls_choice"."votes"::text) = UPPER('4') on PostgreSQL. | |
doc_23855 |
Set the sketch parameters. Parameters
scalefloat, optional
The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided.
lengthfloat, optional
The length of the wiggle along the line, in pixels (default 128.0)
randomnessfloat, optional
The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. Using the same seed yields the same random shape. | |
doc_23856 | Given a dictionary of data and this widget’s name, returns the value of this widget. files may contain data coming from request.FILES. Returns None if a value wasn’t provided. Note also that value_from_datadict may be called more than once during handling of form data, so if you customize it and add expensive processing, you should implement some caching mechanism yourself. | |
doc_23857 | See Migration guide for more details. tf.compat.v1.raw_ops.Tanh
tf.raw_ops.Tanh(
x, name=None
)
Given an input tensor, this function computes hyperbolic tangent of every element in the tensor. Input range is [-inf, inf] and output range is [-1,1].
x = tf.constant([-float("inf"), -5, -0.5, 1, 1.2, 2, 3, float("inf")])
tf.math.tanh(x)
<tf.Tensor: shape=(8,), dtype=float32, numpy=
array([-1. , -0.99990916, -0.46211717, 0.7615942 , 0.8336547 ,
0.9640276 , 0.9950547 , 1. ], dtype=float32)>
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_23858 |
Return the first element of the underlying data as a Python scalar. Returns
scalar
The first element of the underlying data. Raises
ValueError
If the data is not length-1. | |
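A minimal illustration, assuming this entry documents pandas' Series.item (consistent with the surrounding context):

```python
import pandas as pd

s = pd.Series([42])
value = s.item()            # the single element as a plain Python scalar
print(type(value), value)

# A Series that is not length-1 raises ValueError.
try:
    pd.Series([1, 2]).item()
except ValueError as exc:
    print("raised:", exc)
```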
doc_23859 |
Update this artist's properties from the dict props. Parameters
propsdict | |
doc_23860 |
Return the sizes ('areas') of the elements in the collection. Returns
array
The 'area' of each element. | |
doc_23861 |
Attribute to group 'radio' like tools (mutually exclusive). str that identifies the group or None if not belonging to a group. | |
doc_23862 | See Migration guide for more details. tf.compat.v1.raw_ops.BoostedTreesFlushQuantileSummaries
tf.raw_ops.BoostedTreesFlushQuantileSummaries(
quantile_stream_resource_handle, num_features, name=None
)
An op that outputs a list of quantile summaries of a quantile stream resource. Each summary Tensor is rank 2, containing summaries (value, weight, min_rank, max_rank) for a single feature.
Args
quantile_stream_resource_handle A Tensor of type resource. resource handle referring to a QuantileStreamResource.
num_features An int that is >= 0.
name A name for the operation (optional).
Returns A list of num_features Tensor objects with type float32. | |
doc_23863 |
Display pending images. Launch the event loop of the current gui plugin, and display all pending images, queued via imshow. This is required when using imshow from non-interactive scripts. A call to show will block execution of code until all windows have been closed. Examples >>> import skimage.io as io
>>> for i in range(4):
... ax_im = io.imshow(np.random.rand(50, 50))
>>> io.show() | |
doc_23864 | Same as ForeignKey.limit_choices_to. | |
doc_23865 | Optional. Either True or False. Default is True. Specifies whether files in the specified location should be included. Either this or allow_folders must be True. | |
doc_23866 |
Return the floor of the input, element-wise. The floor of the scalar x is the largest integer i, such that i <= x. It is often denoted as \(\lfloor x \rfloor\). Parameters
xarray_like
Input data.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray or scalar
The floor of each element in x. This is a scalar if x is a scalar. See also
ceil, trunc, rint, fix
Notes Some spreadsheet programs calculate the “floor-towards-zero”, where floor(-2.5) == -2. NumPy instead uses the definition of floor where floor(-2.5) == -3. The “floor-towards-zero” function is called fix in NumPy. Examples >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> np.floor(a)
array([-2., -2., -1., 0., 1., 1., 2.]) | |
doc_23867 | tf.compat.v1.layers.Conv2DTranspose(
filters, kernel_size, strides=(1, 1), padding='valid',
data_format='channels_last', activation=None, use_bias=True,
kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
kernel_size A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.
strides A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions.
padding one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
activation Activation function. Set it to None to maintain a linear activation.
use_bias Boolean, whether the layer uses a bias.
kernel_initializer An initializer for the convolution kernel.
bias_initializer An initializer for the bias vector. If None, the default initializer will be used.
kernel_regularizer Optional regularizer for the convolution kernel.
bias_regularizer Optional regularizer for the bias vector.
activity_regularizer Optional regularizer function for the output.
kernel_constraint Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
bias_constraint Optional projection function to be applied to the bias after being updated by an Optimizer.
trainable Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name A string, the name of the layer.
Attributes
graph
scope_name | |
doc_23868 | You generally shouldn’t have to set or change that attribute which should be set up depending on the field class. It matches the OpenGIS standard geometry name. | |
doc_23869 |
Return an array whose values are limited to [min, max]. One of max or min must be given. Refer to numpy.clip for full documentation. See also numpy.clip
equivalent function | |
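A short sketch of the behaviour, using a plain ndarray (the masked-array method delegates to the same clipping logic):

```python
import numpy as np

a = np.arange(6)            # array([0, 1, 2, 3, 4, 5])

# Both bounds given: values outside [1, 4] are pulled to the bounds.
print(a.clip(1, 4))         # [1 1 2 3 4 4]

# Only one bound is required; here only an upper bound is supplied.
print(a.clip(max=3))        # [0 1 2 3 3 3]
```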
doc_23870 |
alias of numpy.ma.core.MaskedArray | |
doc_23871 |
[Deprecated] Notes Deprecated since version 3.4: | |
doc_23872 | See Migration guide for more details. tf.compat.v1.raw_ops.QueueDequeueV2
tf.raw_ops.QueueDequeueV2(
handle, component_types, timeout_ms=-1, name=None
)
This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple. N.B. If the queue is empty, this operation will block until an element has been dequeued (or 'timeout_ms' elapses, if specified).
Args
handle A Tensor of type resource. The handle to a queue.
component_types A list of tf.DTypes that has length >= 1. The type of each component in a tuple.
timeout_ms An optional int. Defaults to -1. If the queue is empty, this operation will block for up to timeout_ms milliseconds. Note: This option is not supported yet.
name A name for the operation (optional).
Returns A list of Tensor objects of type component_types. | |
doc_23873 |
Set the sketch parameters. Parameters
scalefloat, optional
The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided.
lengthfloat, optional
The length of the wiggle along the line, in pixels (default 128.0)
randomnessfloat, optional
The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. Using the same seed yields the same random shape. | |
doc_23874 |
tick_loc, tick_angle, tick_label, (optionally) tick_label | |
doc_23875 |
Return an n-dimensional window of a given size and dimensionality. Parameters
window_typestring, float, or tuple
The type of window to be created. Any window type supported by scipy.signal.get_window is allowed here. See notes below for a current list, or the SciPy documentation for the version of SciPy on your machine.
shapetuple of int or int
The shape of the window along each axis. If an integer is provided, a 1D window is generated.
warp_kwargsdict
Keyword arguments passed to skimage.transform.warp (e.g., warp_kwargs={'order':3} to change interpolation method). Returns
nd_windowndarray
A window of the specified shape. dtype is np.double. Notes This function is based on scipy.signal.get_window and thus can access all of the window types available to that function (e.g., "hann", "boxcar"). Note that certain window types require parameters that have to be supplied with the window name as a tuple (e.g., ("tukey", 0.8)). If only a float is supplied, it is interpreted as the beta parameter of the Kaiser window. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.get_window.html for more details. Note that this function generates a double precision array of the specified shape and can thus generate very large arrays that consume a large amount of available memory. The approach taken here to create nD windows is to first calculate the Euclidean distance from the center of the intended nD window to each position in the array. That distance is used to sample, with interpolation, from a 1D window returned from scipy.signal.get_window. The method of interpolation can be changed with the order keyword argument passed to skimage.transform.warp. Some coordinates in the output window will be outside of the original signal; these will be filled in with zeros. Window types: - boxcar - triang - blackman - hamming - hann - bartlett - flattop - parzen - bohman - blackmanharris - nuttall - barthann - kaiser (needs beta) - gaussian (needs standard deviation) - general_gaussian (needs power, width) - slepian (needs width) - dpss (needs normalized half-bandwidth) - chebwin (needs attenuation) - exponential (needs decay scale) - tukey (needs taper fraction) References
1
Two-dimensional window design, Wikipedia, https://en.wikipedia.org/wiki/Two_dimensional_window_design Examples Return a Hann window with shape (512, 512): >>> from skimage.filters import window
>>> w = window('hann', (512, 512))
Return a Kaiser window with beta parameter of 16 and shape (256, 256, 35): >>> w = window(16, (256, 256, 35))
Return a Tukey window with an alpha parameter of 0.8 and shape (100, 300): >>> w = window(('tukey', 0.8), (100, 300)) | |
doc_23876 |
Unsupervised learner for implementing neighbor searches. Read more in the User Guide. New in version 0.9. Parameters
n_neighborsint, default=5
Number of neighbors to use by default for kneighbors queries.
radiusfloat, default=1.0
Range of parameter space to use by default for radius_neighbors queries.
algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree
‘kd_tree’ will use KDTree
‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force.
leaf_sizeint, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
metricstr or callable, default=’minkowski’
the distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. See the documentation of DistanceMetric for a list of available metrics. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors.
pint, default=2
Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
metric_paramsdict, default=None
Additional keyword arguments for the metric function.
n_jobsint, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
effective_metric_str
Metric used to compute distances to neighbors.
effective_metric_params_dict
Parameters for the metric used to compute distances to neighbors.
n_samples_fit_int
Number of samples in the fitted data. See also
KNeighborsClassifier
RadiusNeighborsClassifier
KNeighborsRegressor
RadiusNeighborsRegressor
BallTree
Notes See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size. https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm Examples >>> import numpy as np
>>> from sklearn.neighbors import NearestNeighbors
>>> samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]]
>>> neigh = NearestNeighbors(n_neighbors=2, radius=0.4)
>>> neigh.fit(samples)
NearestNeighbors(...)
>>> neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=False)
array([[2, 0]]...)
>>> nbrs = neigh.radius_neighbors(
... [[0, 0, 1.3]], 0.4, return_distance=False
... )
>>> np.asarray(nbrs[0][0])
array(2)
Methods
fit(X[, y]) Fit the nearest neighbors estimator from the training dataset.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X
radius_neighbors([X, radius, …]) Finds the neighbors within a given radius of a point or points.
radius_neighbors_graph([X, radius, mode, …]) Computes the (weighted) graph of Neighbors for points in X
set_params(**params) Set the parameters of this estimator.
fit(X, y=None) [source]
Fit the nearest neighbors estimator from the training dataset. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’
Training data.
yIgnored
Not used, present for API consistency by convention. Returns
selfNearestNeighbors
The fitted nearest neighbors estimator.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
kneighbors(X=None, n_neighbors=None, return_distance=True) [source]
Finds the K-neighbors of a point. Returns indices of and distances to the neighbors of each point. Parameters
Xarray-like, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
n_neighborsint, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
return_distancebool, default=True
Whether or not to return the distances. Returns
neigh_distndarray of shape (n_queries, n_neighbors)
Array representing the lengths to points, only present if return_distance=True
neigh_indndarray of shape (n_queries, n_neighbors)
Indices of the nearest points in the population matrix. Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1] >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points: >>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
[2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity') [source]
Computes the (weighted) graph of k-Neighbors for points in X Parameters
Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For metric='precomputed' the shape should be (n_queries, n_indexed). Otherwise the shape should be (n_queries, n_features).
n_neighborsint, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
mode{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. Returns
Asparse-matrix of shape (n_queries, n_samples_fit)
n_samples_fit is the number of samples in the fitted data A[i, j] is assigned the weight of edge that connects i to j. The matrix is of CSR format. See also
NearestNeighbors.radius_neighbors_graph
Examples >>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source]
Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters
Xarray-like of (n_samples, n_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
radiusfloat, default=None
Limiting distance of neighbors to return. The default is the value passed to the constructor.
return_distancebool, default=True
Whether or not to return the distances.
sort_resultsbool, default=False
If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns
neigh_distndarray of shape (n_samples,) of arrays
Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter.
neigh_indndarray of shape (n_samples,) of arrays
An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NeighborsClassifier class from an array representing our data set and ask who’s the closest point to [1, 1, 1]: >>> import numpy as np
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.6)
>>> neigh.fit(samples)
NearestNeighbors(radius=1.6)
>>> rng = neigh.radius_neighbors([[1., 1., 1.]])
>>> print(np.asarray(rng[0][0]))
[1.5 0.5]
>>> print(np.asarray(rng[1][0]))
[1 2]
The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time.
radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source]
Computes the (weighted) graph of Neighbors for points in X Neighborhoods are restricted the points at a distance lower than radius. Parameters
Xarray-like of shape (n_samples, n_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
radiusfloat, default=None
Radius of neighborhoods. The default is the value passed to the constructor.
mode{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points.
sort_resultsbool, default=False
If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. Returns
Asparse-matrix of shape (n_queries, n_samples_fit)
n_samples_fit is the number of samples in the fitted data A[i, j] is assigned the weight of edge that connects i to j. The matrix is in CSR format. See also
kneighbors_graph
Examples >>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.5)
>>> neigh.fit(X)
NearestNeighbors(radius=1.5)
>>> A = neigh.radius_neighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 0.],
[1., 0., 1.]])
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_23877 |
Add a vertical span (rectangle) across the Axes. The rectangle spans from xmin to xmax horizontally, and, by default, the whole y-axis vertically. The y-span can be set using ymin (default: 0) and ymax (default: 1) which are in axis units; e.g. ymin = 0.5 always refers to the middle of the y-axis regardless of the limits set by set_ylim. Parameters
xminfloat
Lower x-coordinate of the span, in data units.
xmaxfloat
Upper x-coordinate of the span, in data units.
yminfloat, default: 0
Lower y-coordinate of the span, in y-axis units (0-1).
ymaxfloat, default: 1
Upper y-coordinate of the span, in y-axis units (0-1). Returns
Polygon
Vertical span (rectangle) from (xmin, ymin) to (xmax, ymax). Other Parameters
**kwargsPolygon properties
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
closed bool
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
xy (N, 2) array-like
zorder float See also axhspan
Add a horizontal span across the Axes. Examples Draw a vertical, green, translucent rectangle from x = 1.25 to x = 1.55 that spans the yrange of the Axes. >>> axvspan(1.25, 1.55, facecolor='g', alpha=0.5)
doc_23878 | True if the session object detected a modification. Be advised that modifications on mutable structures are not picked up automatically, in that situation you have to explicitly set the attribute to True yourself. Here an example: # this change is not picked up because a mutable object (here
# a list) is changed.
session['objects'].append(42)
# so mark it as modified yourself
session.modified = True | |
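The reason in-place mutations go unnoticed is that modification tracking typically hooks item assignment on the session object itself, not on nested objects. A minimal, self-contained sketch of this mechanism (not the actual session implementation):

```python
class TrackedSession(dict):
    """Dict that flags itself as modified on item assignment only."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.modified = False

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.modified = True  # direct assignment is detected


session = TrackedSession(objects=[1, 2, 3])

session["objects"].append(42)   # mutates the nested list directly
print(session.modified)          # the session never saw a __setitem__

session.modified = True          # so mark it as modified yourself
print(session.modified)
```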
doc_23879 | Store the value from stdout. It is meant to hold the stdout at the time the regrtest began. | |
doc_23880 |
Return the clip path. | |
doc_23881 |
Bases: matplotlib.dates.RRuleLocator Make ticks on occurrences of each month, e.g., 1, 3, 12. Mark every month in bymonth; bymonth can be an int or sequence. Default is range(1, 13), i.e. every month. interval is the interval between each iteration. For example, if interval=2, mark every second occurrence. | |
doc_23882 | An abstract base class for classes that implement object.__aenter__() and object.__aexit__(). A default implementation for object.__aenter__() is provided which returns self while object.__aexit__() is an abstract method which by default returns None. See also the definition of Asynchronous Context Managers. New in version 3.7. | |
doc_23883 |
Transforms a masked array into a flexible-type array. The flexible type array that is returned will have two fields: the _data field stores the _data part of the array. the _mask field stores the _mask part of the array. Parameters
None
Returns
recordndarray
A new flexible-type ndarray with two fields: the first element containing a value, the second element containing the corresponding mask boolean. The returned record shape matches self.shape. Notes A side-effect of transforming a masked array into a flexible ndarray is that meta information (fill_value, …) will be lost. Examples >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
data=[[1, --, 3],
[--, 5, --],
[7, --, 9]],
mask=[[False, True, False],
[ True, False, True],
[False, True, False]],
fill_value=999999)
>>> x.toflex()
array([[(1, False), (2, True), (3, False)],
[(4, True), (5, False), (6, True)],
[(7, False), (8, True), (9, False)]],
dtype=[('_data', '<i8'), ('_mask', '?')]) | |
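As a sketch of the round trip implied by the notes above, np.ma.fromflex rebuilds a masked array from the flexible-type record produced by toflex (the array below matches the example):

```python
import numpy as np

x = np.ma.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                mask=[0] + [1, 0] * 4)
flex = x.toflex()
# np.ma.fromflex rebuilds a masked array from the flexible-type record.
# Data and mask survive the round trip; metadata such as fill_value does not.
y = np.ma.fromflex(flex)
assert (y.data == x.data).all()
assert (y.mask == x.mask).all()
```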
doc_23884 |
Fills the input Tensor with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened. Parameters
tensor – an n-dimensional torch.Tensor, where n ≥ 2
gain – optional scaling factor Examples >>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w) | |
doc_23885 |
Set the pick radius used for containment tests. Parameters
prfloat
Pick radius, in points. | |
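A minimal sketch of setting the pick radius; the entry does not say which artist it documents, so a Line2D (which exposes the same method) and the 10-point radius are illustrative choices:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 1], picker=True)
# Pick events within 10 points of the line will now register as hits
# (10 is an arbitrary radius chosen for illustration).
line.set_pickradius(10)
assert line.get_pickradius() == 10
```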
doc_23886 | torch.linalg.cholesky(input, *, out=None) → Tensor
Computes the Cholesky decomposition of a Hermitian (or symmetric for real-valued matrices) positive-definite matrix or the Cholesky decompositions for a batch of such matrices. Each decomposition has the form: input = L Lᴴ
where L is a lower-triangular matrix and Lᴴ is the conjugate transpose of L, which is just the transpose for real-valued input matrices. In code it translates to input = L @ L.t() if input is real-valued and input = L @ L.conj().t() if input is complex-valued. The batch of L matrices is returned. Supports real-valued and complex-valued inputs. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note LAPACK’s potrf is used for CPU inputs, and MAGMA’s potrf is used for CUDA inputs. Note If input is not a Hermitian positive-definite matrix, or if it’s a batch of matrices and one or more of them is not a Hermitian positive-definite matrix, then a RuntimeError will be thrown. If input is a batch of matrices, then the error message will include the batch index of the first matrix that is not Hermitian positive-definite. Parameters
input (Tensor) – the input tensor of size (*, n, n) consisting of Hermitian positive-definite n × n matrices, where * is zero or more batch dimensions. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = torch.mm(a, a.t().conj()) # creates a Hermitian positive-definite matrix
>>> l = torch.linalg.cholesky(a)
>>> a
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> l
tensor([[1.5895+0.0000j, 0.0000+0.0000j],
[1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128)
>>> torch.mm(l, l.t().conj())
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = torch.matmul(a, a.transpose(-2, -1)) # creates a symmetric positive-definite matrix
>>> l = torch.linalg.cholesky(a)
>>> a
tensor([[[ 1.1629, 2.0237],
[ 2.0237, 6.6593]],
[[ 0.4187, 0.1830],
[ 0.1830, 0.1018]],
[[ 1.9348, -2.5744],
[-2.5744, 4.6386]]], dtype=torch.float64)
>>> l
tensor([[[ 1.0784, 0.0000],
[ 1.8766, 1.7713]],
[[ 0.6471, 0.0000],
[ 0.2829, 0.1477]],
[[ 1.3910, 0.0000],
[-1.8509, 1.1014]]], dtype=torch.float64)
>>> torch.allclose(torch.matmul(l, l.transpose(-2, -1)), a)
True
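One common use of the factor computed above is solving a @ x = b cheaply via two triangular solves; a sketch using torch.cholesky_solve (the a @ a.t() + I construction is just one way to obtain a positive-definite input):

```python
import torch

a = torch.randn(3, 3, dtype=torch.float64)
a = a @ a.t() + torch.eye(3, dtype=torch.float64)  # symmetric positive-definite
L = torch.linalg.cholesky(a)
b = torch.randn(3, 1, dtype=torch.float64)
# Given the lower factor L, torch.cholesky_solve performs both
# triangular solves in one call.
x = torch.cholesky_solve(b, L)
assert torch.allclose(a @ x, b)
```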
torch.linalg.cond(input, p=None, *, out=None) → Tensor
Computes the condition number of a matrix input, or of each matrix in a batched input, using the matrix norm defined by p. For norms {‘fro’, ‘nuc’, inf, -inf, 1, -1} this is defined as the matrix norm of input times the matrix norm of the inverse of input computed using torch.linalg.norm(). For norms {None, 2, -2}, it is defined as the ratio between the largest and smallest singular values computed using torch.linalg.svd(). This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function may synchronize that device with the CPU depending on which norm p is used. Note For norms {None, 2, -2}, input may be a non-square matrix or batch of non-square matrices. For other norms, however, input must be a square matrix or a batch of square matrices, and if this requirement is not satisfied a RuntimeError will be thrown. Note For norms {‘fro’, ‘nuc’, inf, -inf, 1, -1} if input is a non-invertible matrix then a tensor containing infinity will be returned. If input is a batch of matrices and one or more of them is not invertible then a RuntimeError will be thrown. Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
p (int, float, inf, -inf, 'fro', 'nuc', optional) –
the type of the matrix norm to use in the computations. inf refers to float('inf'), numpy’s inf object, or any equivalent object. The following norms can be used:
p norm for matrices
None ratio of the largest singular value to the smallest singular value
’fro’ Frobenius norm
’nuc’ nuclear norm
inf max(sum(abs(x), dim=1))
-inf min(sum(abs(x), dim=1))
1 max(sum(abs(x), dim=0))
-1 min(sum(abs(x), dim=0))
2 ratio of the largest singular value to the smallest singular value
-2 ratio of the smallest singular value to the largest singular value Default: None Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Returns
The condition number of input. The output dtype is always real valued even for complex inputs (e.g. float if input is cfloat). Examples: >>> a = torch.randn(3, 4, 4, dtype=torch.complex64)
>>> torch.linalg.cond(a)
>>> a = torch.tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]])
>>> torch.linalg.cond(a)
tensor([1.4142])
>>> torch.linalg.cond(a, 'fro')
tensor(3.1623)
>>> torch.linalg.cond(a, 'nuc')
tensor(9.2426)
>>> torch.linalg.cond(a, float('inf'))
tensor(2.)
>>> torch.linalg.cond(a, float('-inf'))
tensor(1.)
>>> torch.linalg.cond(a, 1)
tensor(2.)
>>> torch.linalg.cond(a, -1)
tensor(1.)
>>> torch.linalg.cond(a, 2)
tensor([1.4142])
>>> torch.linalg.cond(a, -2)
tensor([0.7071])
>>> a = torch.randn(2, 3, 3)
>>> a
tensor([[[-0.9204, 1.1140, 1.2055],
[ 0.3988, -0.2395, -0.7441],
[-0.5160, 0.3115, 0.2619]],
[[-2.2128, 0.9241, 2.1492],
[-1.1277, 2.7604, -0.8760],
[ 1.2159, 0.5960, 0.0498]]])
>>> torch.linalg.cond(a)
tensor([[9.5917],
[3.2538]])
>>> a = torch.randn(2, 3, 3, dtype=torch.complex64)
>>> a
tensor([[[-0.4671-0.2137j, -0.1334-0.9508j, 0.6252+0.1759j],
[-0.3486-0.2991j, -0.1317+0.1252j, 0.3025-0.1604j],
[-0.5634+0.8582j, 0.1118-0.4677j, -0.1121+0.7574j]],
[[ 0.3964+0.2533j, 0.9385-0.6417j, -0.0283-0.8673j],
[ 0.2635+0.2323j, -0.8929-1.1269j, 0.3332+0.0733j],
[ 0.1151+0.1644j, -1.1163+0.3471j, -0.5870+0.1629j]]])
>>> torch.linalg.cond(a)
tensor([[4.6245],
[4.5671]])
>>> torch.linalg.cond(a, 1)
tensor([9.2589, 9.3486])
torch.linalg.det(input) → Tensor
Computes the determinant of a square matrix input, or of each square matrix in a batched input. This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s getrf is used for CPU inputs, and MAGMA’s getrf is used for CUDA inputs. Note Backward through det internally uses torch.linalg.svd() when input is not invertible. In this case, double backward through det will be unstable when input doesn’t have distinct singular values. See torch.linalg.svd() for more details. Parameters
input (Tensor) – the input matrix of size (n, n) or the batch of matrices of size (*, n, n) where * is one or more batch dimensions. Example: >>> a = torch.randn(3, 3)
>>> a
tensor([[ 0.9478, 0.9158, -1.1295],
[ 0.9701, 0.7346, -1.8044],
[-0.2337, 0.0557, 0.6929]])
>>> torch.linalg.det(a)
tensor(0.0934)
>>> a = torch.randn(3, 2, 2)
>>> a
tensor([[[ 0.9254, -0.6213],
[-0.5787, 1.6843]],
[[ 0.3242, -0.9665],
[ 0.4539, -0.0887]],
[[ 1.1336, -0.4025],
[-0.7089, 0.9032]]])
>>> torch.linalg.det(a)
tensor([1.1990, 0.4099, 0.7386])
torch.linalg.slogdet(input, *, out=None) -> (Tensor, Tensor)
Calculates the sign and natural logarithm of the absolute value of a square matrix’s determinant, or of the absolute values of the determinants of a batch of square matrices input. The determinant can be computed with sign * exp(logabsdet). Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is computed using LU factorization. LAPACK’s getrf is used for CPU inputs, and MAGMA’s getrf is used for CUDA inputs. Note For matrices that have zero determinant, this returns (0, -inf). If input is batched then the entries in the result tensors corresponding to matrices with the zero determinant have sign 0 and the natural logarithm of the absolute value of the determinant -inf. Parameters
input (Tensor) – the input matrix of size (n, n) or the batch of matrices of size (*, n, n) where * is one or more batch dimensions. Keyword Arguments
out (tuple, optional) – tuple of two tensors to write the output to. Returns
A namedtuple (sign, logabsdet) containing the sign of the determinant and the natural logarithm of the absolute value of determinant, respectively. Example: >>> A = torch.randn(3, 3)
>>> A
tensor([[ 0.0032, -0.2239, -1.1219],
[-0.6690, 0.1161, 0.4053],
[-1.6218, -0.9273, -0.0082]])
>>> torch.linalg.det(A)
tensor(-0.7576)
>>> torch.linalg.logdet(A)
tensor(nan)
>>> torch.linalg.slogdet(A)
torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776))
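A sketch of the sign * exp(logabsdet) reconstruction stated above (the random double-precision matrix is illustrative):

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
sign, logabsdet = torch.linalg.slogdet(A)
# Reconstruct the determinant from the two pieces: det = sign * exp(logabsdet).
# Working in log space avoids over/underflow for very large or very
# small determinants.
det = sign * torch.exp(logabsdet)
assert torch.allclose(det, torch.linalg.det(A))
```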
torch.linalg.eigh(input, UPLO='L', *, out=None) -> (Tensor, Tensor)
Computes the eigenvalues and eigenvectors of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. For a single matrix input, the tensor of eigenvalues w and the tensor of eigenvectors V decompose the input such that input = V diag(w) Vᴴ, where Vᴴ is the transpose of V for real-valued input, or the conjugate transpose of V for complex-valued input. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When UPLO is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When UPLO is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues/eigenvectors are computed using LAPACK’s syevd and heevd routines for CPU inputs, and MAGMA’s syevd and heevd routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note The eigenvectors of matrices are not unique, so any eigenvector multiplied by a constant remains a valid eigenvector. This function may compute different eigenvector representations on different device types. Usually the difference is only in the sign of the eigenvector. Note See torch.linalg.eigvalsh() for a related function that computes only eigenvalues. However, that function is not differentiable. Parameters
input (Tensor) – the Hermitian n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
UPLO ('L', 'U', optional) – controls whether to use the upper-triangular or the lower-triangular part of input in the computations. Default is 'L'. Keyword Arguments
out (tuple, optional) – tuple of two tensors to write the output to. Default is None. Returns
A namedtuple (eigenvalues, eigenvectors) containing
eigenvalues (Tensor): Shape (*, m).
The eigenvalues in ascending order.
eigenvectors (Tensor): Shape (*, m, m).
The orthonormal eigenvectors of the input. Return type
(Tensor, Tensor) Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> a
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> w, v = torch.linalg.eigh(a)
>>> w
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> v
tensor([[-0.0846+-0.0000j, -0.9964+0.0000j],
[ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128)
>>> torch.allclose(torch.matmul(v, torch.matmul(w.to(v.dtype).diag_embed(), v.t().conj())), a)
True
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = a + a.transpose(-2, -1) # creates a symmetric matrix
>>> w, v = torch.linalg.eigh(a)
>>> torch.allclose(torch.matmul(v, torch.matmul(w.diag_embed(), v.transpose(-2, -1))), a)
True
torch.linalg.eigvalsh(input, UPLO='L', *, out=None) → Tensor
Computes the eigenvalues of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. The eigenvalues are returned in ascending order. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary part of their diagonals is always treated as zero. When UPLO is “L”, its default value, only the lower triangular part of each matrix is used in the computation. When UPLO is “U” only the upper triangular part of each matrix is used. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The eigenvalues are computed using LAPACK’s syevd and heevd routines for CPU inputs, and MAGMA’s syevd and heevd routines for CUDA inputs. Note The eigenvalues of real symmetric or complex Hermitian matrices are always real. Note This function doesn’t support backpropagation, please use torch.linalg.eigh() instead, which also computes the eigenvectors. Note See torch.linalg.eigh() for a related function that computes both eigenvalues and eigenvectors. Parameters
input (Tensor) – the Hermitian n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
UPLO ('L', 'U', optional) – controls whether to use the upper-triangular or the lower-triangular part of input in the computations. Default is 'L'. Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Examples: >>> a = torch.randn(2, 2, dtype=torch.complex128)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> a
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> w = torch.linalg.eigvalsh(a)
>>> w
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> a = torch.randn(3, 2, 2, dtype=torch.float64)
>>> a = a + a.transpose(-2, -1) # creates a symmetric matrix
>>> a
tensor([[[ 2.8050, -0.3850],
[-0.3850, 3.2376]],
[[-1.0307, -2.7457],
[-2.7457, -1.7517]],
[[ 1.7166, 2.2207],
[ 2.2207, -2.0898]]], dtype=torch.float64)
>>> w = torch.linalg.eigvalsh(a)
>>> w
tensor([[ 2.5797, 3.4629],
[-4.1605, 1.3780],
[-3.1113, 2.7381]], dtype=torch.float64)
torch.linalg.matrix_rank(input, tol=None, hermitian=False, *, out=None) → Tensor
Computes the numerical rank of a matrix input, or of each matrix in a batched input. The matrix rank is computed as the number of singular values (or absolute eigenvalues when hermitian is True) that are greater than the specified tol threshold. If tol is not specified, tol is set to S.max(dim=-1)*max(input.shape[-2:])*eps, where S is the singular values (or absolute eigenvalues when hermitian is True), and eps is the epsilon value for the datatype of input. The epsilon value can be obtained using the eps attribute of torch.finfo. Supports input of float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The matrix rank is computed using singular value decomposition (see torch.linalg.svd()) by default. If hermitian is True, then input is assumed to be Hermitian (symmetric if real-valued), and the computation is done by obtaining the eigenvalues (see torch.linalg.eigvalsh()). Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
tol (float, optional) – the tolerance value. Default is None
hermitian (bool, optional) – indicates whether input is Hermitian. Default is False. Keyword Arguments
out (Tensor, optional) – tensor to write the output to. Default is None. Examples: >>> a = torch.eye(10)
>>> torch.linalg.matrix_rank(a)
tensor(10)
>>> b = torch.eye(10)
>>> b[0, 0] = 0
>>> torch.linalg.matrix_rank(b)
tensor(9)
>>> a = torch.randn(4, 3, 2)
>>> torch.linalg.matrix_rank(a)
tensor([2, 2, 2, 2])
>>> a = torch.randn(2, 4, 2, 3)
>>> torch.linalg.matrix_rank(a)
tensor([[2, 2, 2, 2],
[2, 2, 2, 2]])
>>> a = torch.randn(2, 4, 3, 3, dtype=torch.complex64)
>>> torch.linalg.matrix_rank(a)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(a, hermitian=True)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(a, tol=1.0)
tensor([[3, 2, 2, 2],
[1, 2, 1, 2]])
>>> torch.linalg.matrix_rank(a, tol=1.0, hermitian=True)
tensor([[2, 2, 2, 1],
[1, 2, 2, 2]])
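The default tol formula described above can be reproduced by hand from the singular values; a sketch (the 4 × 6 shape is an illustrative choice):

```python
import torch

a = torch.randn(4, 6, dtype=torch.float64)
U, S, Vh = torch.linalg.svd(a, full_matrices=False)
eps = torch.finfo(a.dtype).eps
# Default threshold per the description above: largest singular value
# times max(m, n) times the dtype's machine epsilon.
tol = S.max() * max(a.shape[-2:]) * eps
rank_by_hand = int((S > tol).sum())
assert rank_by_hand == int(torch.linalg.matrix_rank(a))
```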
torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → Tensor
Returns the matrix norm or vector norm of a given tensor. This function can calculate one of eight different types of matrix norms, or one of an infinite number of vector norms, depending on both the number of reduction dimensions and the value of the ord parameter. Parameters
input (Tensor) – The input tensor. If dim is None, x must be 1-D or 2-D, unless ord is None. If both dim and ord are None, the 2-norm of the input flattened to 1-D will be returned. Its data type must be either a floating point or complex type. For complex inputs, the norm is calculated on the absolute values of each element. If the input is complex and neither dtype nor out is specified, the result’s data type will be the corresponding floating point type (e.g. float if input is complexfloat).
ord (int, float, inf, -inf, 'fro', 'nuc', optional) –
The order of norm. inf refers to float('inf'), numpy’s inf object, or any equivalent object. The following norms can be calculated:
ord norm for matrices norm for vectors
None Frobenius norm 2-norm
’fro’ Frobenius norm – not supported –
‘nuc’ nuclear norm – not supported –
inf max(sum(abs(x), dim=1)) max(abs(x))
-inf min(sum(abs(x), dim=1)) min(abs(x))
0 – not supported – sum(x != 0)
1 max(sum(abs(x), dim=0)) as below
-1 min(sum(abs(x), dim=0)) as below
2 2-norm (largest sing. value) as below
-2 smallest singular value as below
other – not supported – sum(abs(x)**ord)**(1./ord) Default: None
dim (int, 2-tuple of python:ints, 2-list of python:ints, optional) – If dim is an int, vector norm will be calculated over the specified dimension. If dim is a 2-tuple of ints, matrix norm will be calculated over the specified dimensions. If dim is None, matrix norm will be calculated when the input tensor has two dimensions, and vector norm will be calculated when the input tensor has one dimension. Default: None
keepdim (bool, optional) – If set to True, the reduced dimensions are retained in the result as dimensions with size one. Default: False
Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None
dtype (torch.dtype, optional) – If specified, the input tensor is cast to dtype before performing the operation, and the returned tensor’s type will be dtype. If this argument is used in conjunction with the out argument, the output tensor’s type must match this argument or a RuntimeError will be raised. Default: None
Examples: >>> import torch
>>> from torch import linalg as LA
>>> a = torch.arange(9, dtype=torch.float) - 4
>>> a
tensor([-4., -3., -2., -1., 0., 1., 2., 3., 4.])
>>> b = a.reshape((3, 3))
>>> b
tensor([[-4., -3., -2.],
[-1., 0., 1.],
[ 2., 3., 4.]])
>>> LA.norm(a)
tensor(7.7460)
>>> LA.norm(b)
tensor(7.7460)
>>> LA.norm(b, 'fro')
tensor(7.7460)
>>> LA.norm(a, float('inf'))
tensor(4.)
>>> LA.norm(b, float('inf'))
tensor(9.)
>>> LA.norm(a, -float('inf'))
tensor(0.)
>>> LA.norm(b, -float('inf'))
tensor(2.)
>>> LA.norm(a, 1)
tensor(20.)
>>> LA.norm(b, 1)
tensor(7.)
>>> LA.norm(a, -1)
tensor(0.)
>>> LA.norm(b, -1)
tensor(6.)
>>> LA.norm(a, 2)
tensor(7.7460)
>>> LA.norm(b, 2)
tensor(7.3485)
>>> LA.norm(a, -2)
tensor(0.)
>>> LA.norm(b.double(), -2)
tensor(1.8570e-16, dtype=torch.float64)
>>> LA.norm(a, 3)
tensor(5.8480)
>>> LA.norm(a, -3)
tensor(0.)
Using the dim argument to compute vector norms: >>> c = torch.tensor([[1., 2., 3.],
... [-1, 1, 4]])
>>> LA.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> LA.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> LA.norm(c, ord=1, dim=1)
tensor([6., 6.])
Using the dim argument to compute matrix norms: >>> m = torch.arange(8, dtype=torch.float).reshape(2, 2, 2)
>>> LA.norm(m, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(tensor(3.7417), tensor(11.2250))
torch.linalg.pinv(input, rcond=1e-15, hermitian=False, *, out=None) → Tensor
Computes the pseudo-inverse (also known as the Moore-Penrose inverse) of a matrix input, or of each matrix in a batched input. The singular values (or the absolute values of the eigenvalues when hermitian is True) that are below the specified rcond threshold are treated as zero and discarded in the computation. Supports input of float, double, cfloat and cdouble datatypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The pseudo-inverse is computed using singular value decomposition (see torch.linalg.svd()) by default. If hermitian is True, then input is assumed to be Hermitian (symmetric if real-valued), and the computation of the pseudo-inverse is done by obtaining the eigenvalues and eigenvectors (see torch.linalg.eigh()). Note If singular value decomposition or eigenvalue decomposition algorithms do not converge then a RuntimeError will be thrown. Parameters
input (Tensor) – the input matrix of size (m, n) or the batch of matrices of size (*, m, n) where * is one or more batch dimensions.
rcond (float, Tensor, optional) – the tolerance value to determine the cutoff for small singular values. Must be broadcastable to the singular values of input as returned by torch.svd(). Default is 1e-15.
hermitian (bool, optional) – indicates whether input is Hermitian. Default is False. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default is None. Examples: >>> input = torch.randn(3, 5)
>>> input
tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],
[-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],
[-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])
>>> torch.linalg.pinv(input)
tensor([[ 0.0600, -0.1933, -0.2090],
[-0.0903, -0.0817, -0.4752],
[-0.7124, -0.1631, -0.2272],
[ 0.1356, 0.3933, -0.5023],
[-0.0308, -0.1725, -0.5216]])
Batched linalg.pinv example
>>> a = torch.randn(2, 6, 3)
>>> b = torch.linalg.pinv(a)
>>> torch.matmul(b, a)
tensor([[[ 1.0000e+00, 1.6391e-07, -1.1548e-07],
[ 8.3121e-08, 1.0000e+00, -2.7567e-07],
[ 3.5390e-08, 1.4901e-08, 1.0000e+00]],
[[ 1.0000e+00, -8.9407e-08, 2.9802e-08],
[-2.2352e-07, 1.0000e+00, 1.1921e-07],
[ 0.0000e+00, 8.9407e-08, 1.0000e+00]]])
Hermitian input example
>>> a = torch.randn(3, 3, dtype=torch.complex64)
>>> a = a + a.t().conj() # creates a Hermitian matrix
>>> b = torch.linalg.pinv(a, hermitian=True)
>>> torch.matmul(b, a)
tensor([[ 1.0000e+00+0.0000e+00j, -1.1921e-07-2.3842e-07j,
5.9605e-08-2.3842e-07j],
[ 5.9605e-08+2.3842e-07j, 1.0000e+00+2.3842e-07j,
-4.7684e-07+1.1921e-07j],
[-1.1921e-07+0.0000e+00j, -2.3842e-07-2.9802e-07j,
1.0000e+00-1.7897e-07j]])
Non-default rcond example
>>> rcond = 0.5
>>> a = torch.randn(3, 3)
>>> torch.linalg.pinv(a)
tensor([[ 0.2971, -0.4280, -2.0111],
[-0.0090, 0.6426, -0.1116],
[-0.7832, -0.2465, 1.0994]])
>>> torch.linalg.pinv(a, rcond)
tensor([[-0.2672, -0.2351, -0.0539],
[-0.0211, 0.6467, -0.0698],
[-0.4400, -0.3638, -0.0910]])
Matrix-wise rcond example
>>> a = torch.randn(5, 6, 2, 3, 3)
>>> rcond = torch.rand(2) # different rcond values for each matrix in a[:, :, 0] and a[:, :, 1]
>>> torch.linalg.pinv(a, rcond)
>>> rcond = torch.randn(5, 6, 2) # different rcond value for each matrix in 'a'
>>> torch.linalg.pinv(a, rcond)
torch.linalg.svd(input, full_matrices=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U, S, Vh), such that input = U @ diag(S) @ Vh. If input is a batch of tensors, then U, S, and Vh are also batched with the same batch dimensions as input. If full_matrices is False, the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only min(m, n) orthonormal columns. If compute_uv is False, the returned U and Vh will be empty tensors with no elements and the same device as input. The full_matrices argument has no effect when compute_uv is False. The dtypes of U and V are the same as input’s. S will always be real-valued, even if input is complex. Note Unlike NumPy’s linalg.svd, this always returns a namedtuple of three tensors, even when compute_uv=False. This behavior may change in a future PyTorch release. Note The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. Note The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the cuSOLVER routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later, and uses the MAGMA routine gesdd on earlier versions of CUDA. Note The returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride(). Note Gradients computed using U and Vh may be unstable if input is not full rank or has non-unique singular values. Note When full_matrices = True, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces.
Note The S tensor can only be used to compute gradients if compute_uv is True. Note Since U and V of an SVD are not unique, each vector can be multiplied by an arbitrary phase factor e^{iφ} while the SVD result is still correct. Different platforms, like Numpy, or inputs on different device types, may produce different U and V tensors. Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of m × n matrices.
full_matrices (bool, optional) – controls whether to compute the full or reduced decomposition, and consequently the shape of returned U and V. Defaults to True.
compute_uv (bool, optional) – whether to compute U and V or not. Defaults to True.
out (tuple, optional) – a tuple of three tensors to use for the outputs. If compute_uv=False, the 1st and 3rd arguments must be tensors, but they are ignored. E.g. you can pass (torch.Tensor(), out_S, torch.Tensor())
Example: >>> import torch
>>> a = torch.randn(5, 3)
>>> a
tensor([[-0.3357, -0.2987, -1.1096],
[ 1.4894, 1.0016, -0.4572],
[-1.9401, 0.7437, 2.0968],
[ 0.1515, 1.3812, 1.5491],
[-1.8489, -0.5907, -2.5673]])
>>>
>>> # reconstruction in the full_matrices=False case
>>> u, s, vh = torch.linalg.svd(a, full_matrices=False)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # reconstruction in the full_matrices=True case
>>> u, s, vh = torch.linalg.svd(a)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u[:, :3] @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # extra dimensions
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, vh = torch.linalg.svd(a_big, full_matrices=False)
>>> torch.dist(a_big, u @ torch.diag_embed(s) @ vh)
tensor(3.0957e-06)
torch.linalg.solve(input, other, *, out=None) → Tensor
Computes the solution x to the matrix equation matmul(input, x) = other with a square matrix, or batches of such matrices, input and one or more right-hand side vectors other. If input is batched and other is not, then other is broadcast to have the same batch dimensions as input. The resulting tensor has the same shape as the (possibly broadcast) other. Supports input of float, double, cfloat and cdouble dtypes. Note If input is a non-square or non-invertible matrix, or a batch containing non-square matrices or one or more non-invertible matrices, then a RuntimeError will be thrown. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Parameters
input (Tensor) – the square n × n matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions.
other (Tensor) – right-hand side tensor of shape (*, n) or (*, n, k), where k is the number of right-hand side vectors. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> A = torch.eye(3)
>>> b = torch.randn(3)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b)
True
Batched input: >>> A = torch.randn(2, 3, 3)
>>> b = torch.randn(3, 1)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b)
True
>>> b = torch.rand(3) # b is broadcast internally to (*A.shape[:-2], 3)
>>> x = torch.linalg.solve(A, b)
>>> x.shape
torch.Size([2, 3])
>>> Ax = A @ x.unsqueeze(-1)
>>> torch.allclose(Ax, b.unsqueeze(-1).expand_as(Ax))
True
torch.linalg.tensorinv(input, ind=2, *, out=None) → Tensor
Computes a tensor input_inv such that tensordot(input_inv, input, ind) == I_n (inverse tensor equation), where I_n is the n-dimensional identity tensor and n is equal to input.ndim. The resulting tensor input_inv has shape equal to input.shape[ind:] + input.shape[:ind]. Supports input of float, double, cfloat and cdouble data types. Note If input is not invertible or does not satisfy the requirement prod(input.shape[ind:]) == prod(input.shape[:ind]), then a RuntimeError will be thrown. Note When input is a 2-dimensional tensor and ind=1, this function computes the (multiplicative) inverse of input, equivalent to calling torch.inverse(). Parameters
input (Tensor) – A tensor to invert. Its shape must satisfy prod(input.shape[:ind]) == prod(input.shape[ind:]).
ind (int) – A positive integer that describes the inverse tensor equation. See torch.tensordot() for details. Default: 2. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.eye(4 * 6).reshape((4, 6, 8, 3))
>>> ainv = torch.linalg.tensorinv(a, ind=2)
>>> ainv.shape
torch.Size([8, 3, 4, 6])
>>> b = torch.randn(4, 6)
>>> torch.allclose(torch.tensordot(ainv, b), torch.linalg.tensorsolve(a, b))
True
>>> a = torch.randn(4, 4)
>>> a_tensorinv = torch.linalg.tensorinv(a, ind=1)
>>> a_inv = torch.inverse(a)
>>> torch.allclose(a_tensorinv, a_inv)
True
torch.linalg.tensorsolve(input, other, dims=None, *, out=None) → Tensor
Computes a tensor x such that tensordot(input, x, dims=x.ndim) = other. The resulting tensor x has shape input.shape[other.ndim:]. Supports real-valued and complex-valued inputs. Note If input does not satisfy the requirement prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim]) after (optionally) moving the dimensions using dims, then a RuntimeError will be thrown. Parameters
input (Tensor) – “left-hand-side” tensor; it must satisfy the requirement prod(input.shape[other.ndim:]) == prod(input.shape[:other.ndim]).
other (Tensor) – “right-hand-side” tensor of shape input.shape[:other.ndim].
dims (Tuple[int]) – dimensions of input to be moved before the computation. Equivalent to calling input = movedim(input, dims, range(len(dims) - input.ndim, 0)). If None (default), no dimensions are moved. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default: None Examples: >>> a = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4))
>>> b = torch.randn(2 * 3, 4)
>>> x = torch.linalg.tensorsolve(a, b)
>>> x.shape
torch.Size([2, 3, 4])
>>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b)
True
>>> a = torch.randn(6, 4, 4, 3, 2)
>>> b = torch.randn(4, 3, 2)
>>> x = torch.linalg.tensorsolve(a, b, dims=(0, 2))
>>> x.shape
torch.Size([6, 4])
>>> a = a.permute(1, 3, 4, 0, 2)
>>> a.shape[b.ndim:]
torch.Size([6, 4])
>>> torch.allclose(torch.tensordot(a, x, dims=x.ndim), b, atol=1e-6)
True
torch.linalg.inv(input, *, out=None) → Tensor
Computes the multiplicative inverse matrix of a square matrix input, or of each square matrix in a batched input. The result satisfies the relation: matmul(inv(input),input) = matmul(input,inv(input)) = eye(input.shape[0]).expand_as(input). Supports input of float, double, cfloat and cdouble data types. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The inverse matrix is computed using LAPACK’s getrf and getri routines for CPU inputs. For CUDA inputs, cuSOLVER’s getrf and getrs routines as well as cuBLAS’ getrf and getri routines are used if CUDA version >= 10.1.243, otherwise MAGMA’s getrf and getri routines are used instead. Note If input is a non-invertible matrix or non-square matrix, or batch with at least one such matrix, then a RuntimeError will be thrown. Parameters
input (Tensor) – the square (n, n) matrix or the batch of such matrices of size (*, n, n) where * is one or more batch dimensions. Keyword Arguments
out (Tensor, optional) – The output tensor. Ignored if None. Default is None. Examples: >>> x = torch.rand(4, 4)
>>> y = torch.linalg.inv(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000, -0.0000, -0.0000, 0.0000],
[ 0.0000, 1.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 1.0000, 0.0000],
[ 0.0000, -0.0000, -0.0000, 1.0000]])
>>> torch.max(torch.abs(z - torch.eye(4))) # Max non-zero
tensor(1.1921e-07)
>>> # Batched inverse example
>>> x = torch.randn(2, 3, 4, 4)
>>> y = torch.linalg.inv(x)
>>> z = torch.matmul(x, y)
>>> torch.max(torch.abs(z - torch.eye(4).expand_as(x))) # Max non-zero
tensor(1.9073e-06)
>>> x = torch.rand(4, 4, dtype=torch.cdouble)
>>> y = torch.linalg.inv(x)
>>> z = torch.mm(x, y)
>>> z
tensor([[ 1.0000e+00+0.0000e+00j, -1.3878e-16+3.4694e-16j,
5.5511e-17-1.1102e-16j, 0.0000e+00-1.6653e-16j],
[ 5.5511e-16-1.6653e-16j, 1.0000e+00+6.9389e-17j,
2.2204e-16-1.1102e-16j, -2.2204e-16+1.1102e-16j],
[ 3.8858e-16-1.2490e-16j, 2.7756e-17+3.4694e-17j,
1.0000e+00+0.0000e+00j, -4.4409e-16+5.5511e-17j],
[ 4.4409e-16+5.5511e-16j, -3.8858e-16+1.8041e-16j,
2.2204e-16+0.0000e+00j, 1.0000e+00-3.4694e-16j]],
dtype=torch.complex128)
>>> torch.max(torch.abs(z - torch.eye(4, dtype=torch.cdouble))) # Max non-zero
tensor(7.5107e-16, dtype=torch.float64)
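The defining relation matmul(inv(input), input) = eye(n) can be verified by hand for the 2×2 case with the closed-form inverse (a pure-Python teaching sketch, not the LAPACK/cuSOLVER path the function actually uses; the helper names are illustrative):

```python
def inv2x2(m):
    """Closed-form inverse of a 2x2 matrix [[a, b], [c, d]]:
    (1/det) * [[d, -b], [-c, a]], with det = a*d - b*c."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (non-invertible)")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2x2(x, y):
    """Plain 2x2 matrix product."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[4.0, 7.0], [2.0, 6.0]]
print(matmul2x2(inv2x2(a), a))  # [[1.0, 0.0], [0.0, 1.0]]
```

As in the examples above, the product of a matrix with its inverse recovers the identity up to floating-point rounding.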
torch.linalg.qr(input, mode='reduced', *, out=None) -> (Tensor, Tensor)
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = QR, with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices. Depending on the value of mode this function returns the reduced or complete QR factorization. See below for a list of valid modes. Note Differences with numpy.linalg.qr:
mode='raw' is not implemented. Unlike numpy.linalg.qr, this function always returns a tuple of two tensors; when mode='r', the Q tensor is an empty tensor. This behavior may change in a future PyTorch release. Note Backpropagation is not supported for mode='r'. Use mode='reduced' instead. Backpropagation is also not supported if the first min(input.size(-1), input.size(-2)) columns of any matrix in input are not linearly independent. While no error will be thrown when this occurs, the values of the “gradient” produced may be anything. This behavior may change in the future. Note This function uses LAPACK for CPU inputs and MAGMA for CUDA inputs, and may produce different (valid) decompositions on different device types or different platforms. Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n.
mode (str, optional) –
if k = min(m, n) then:
'reduced' : returns (Q, R) with dimensions (m, k), (k, n) (default)
'complete': returns (Q, R) with dimensions (m, m), (m, n)
'r': computes only R; returns (Q, R) where Q is empty and R has dimensions (k, n) Keyword Arguments
out (tuple, optional) – tuple of Q and R tensors. The dimensions of Q and R are detailed in the description of mode above. Example: >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.linalg.qr(a)
>>> q
tensor([[-0.8571, 0.3943, 0.3314],
[-0.4286, -0.9029, -0.0343],
[ 0.2857, -0.1714, 0.9429]])
>>> r
tensor([[ -14.0000, -21.0000, 14.0000],
[ 0.0000, -175.0000, 70.0000],
[ 0.0000, 0.0000, -35.0000]])
>>> torch.mm(q, r).round()
tensor([[ 12., -51., 4.],
[ 6., 167., -68.],
[ -4., 24., -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1., 0., 0.],
[ 0., 1., -0.],
[ 0., -0., 1.]])
>>> q2, r2 = torch.linalg.qr(a, mode='r')
>>> q2
tensor([])
>>> torch.equal(r, r2)
True
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.linalg.qr(a, mode='complete')
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(5))
True | |
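The reduced QR factorization can be sketched with classical Gram–Schmidt in plain Python (a teaching sketch only; the function itself uses LAPACK/MAGMA Householder routines, so the signs of Q and R may differ from the example output above):

```python
import math

def gram_schmidt_qr(a):
    """Reduced QR of a square matrix (list of rows) via classical
    Gram-Schmidt on the columns. Returns (q, r) with orthonormal
    columns in q, upper triangular r, and positive diagonal in r."""
    n = len(a)
    cols = [[a[i][j] for i in range(n)] for j in range(n)]  # columns
    q_cols, r = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        v = v[:]
        for i, qi in enumerate(q_cols):
            r[i][j] = sum(qi[k] * v[k] for k in range(n))   # project
            v = [v[k] - r[i][j] * qi[k] for k in range(n)]  # subtract
        r[j][j] = math.sqrt(sum(x * x for x in v))          # normalize
        q_cols.append([x / r[j][j] for x in v])
    q = [[q_cols[j][i] for j in range(n)] for i in range(n)]
    return q, r

a = [[12.0, -51.0, 4.0], [6.0, 167.0, -68.0], [-4.0, 24.0, -41.0]]
q, r = gram_schmidt_qr(a)
print([r[0][0], r[1][1], r[2][2]])  # [14.0, 175.0, 35.0]
```

For the matrix from the example above this recovers the same factorization up to per-column sign: here R has a positive diagonal (14, 175, 35) where the LAPACK output shown negates those columns.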
doc_23887 |
Autoscale the scalar limits on the norm instance using the current array | |
doc_23888 | See Migration guide for more details. tf.compat.v1.keras.layers.ELU
tf.keras.layers.ELU(
alpha=1.0, **kwargs
)
It follows: f(x) = alpha * (exp(x) - 1.) for x < 0
f(x) = x for x >= 0
Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape: Same shape as the input.
Arguments
alpha Scale for the negative factor. | |
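The piecewise formula above can be sketched numerically in plain Python (a sketch of the math only, not the Keras layer itself, which operates element-wise on tensors):

```python
import math

def elu(x, alpha=1.0):
    """ELU activation as defined above:
    alpha * (exp(x) - 1) for x < 0, identity for x >= 0."""
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

print(elu(2.0))   # 2.0  (identity on the positive side)
print(elu(-1.0))  # alpha * (e**-1 - 1), about -0.632
```

Note the function is continuous at 0 and saturates toward -alpha for large negative inputs.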
doc_23889 |
For each element in a, return a copy of the string where all characters occurring in the optional argument deletechars are removed, and the remaining characters have been mapped through the given translation table. Calls str.translate element-wise. Parameters
aarray-like of str or unicode
tablestr of length 256
deletecharsstr
Returns
outndarray
Output array of str or unicode, depending on input type See also str.translate | |
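Since this calls str.translate element-wise, the per-element behavior can be sketched with the built-in (plain Python standing in for the array version; the mapping and inputs here are illustrative):

```python
# Build a translation table the same way str.maketrans does,
# then apply it element-wise, mirroring the array semantics above.
table = str.maketrans("abc", "xyz")
words = ["cab", "back", "beach"]
print([w.translate(table) for w in words])  # ['zxy', 'yxzk', 'yexzh']

# Deletion: characters in deletechars are removed before mapping.
delete_ch = str.maketrans("", "", "ch")
print("beach".translate(delete_ch))  # 'bea'
```

Untranslated characters pass through unchanged, matching the "remaining characters have been mapped" wording above.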
doc_23890 |
Logical figure that can be placed inside a figure. Typically instantiated using Figure.add_subfigure or SubFigure.add_subfigure, or SubFigure.subfigures. A subfigure has the same methods as a figure except for those particularly tied to the size or dpi of the figure, and is confined to a prescribed region of the figure. For example the following puts two subfigures side-by-side: fig = plt.figure()
sfigs = fig.subfigures(1, 2)
axsL = sfigs[0].subplots(1, 2)
axsR = sfigs[1].subplots(2, 1)
See Figure subfigures Parameters
parentfigure.Figure or figure.SubFigure
Figure or subfigure that contains the SubFigure. SubFigures can be nested.
subplotspecgridspec.SubplotSpec
Defines the region in a parent gridspec where the subfigure will be placed.
facecolordefault: rcParams["figure.facecolor"] (default: 'white')
The figure patch face color.
edgecolordefault: rcParams["figure.edgecolor"] (default: 'white')
The figure patch edge color.
linewidthfloat
The linewidth of the frame (i.e. the edge linewidth of the figure patch).
frameonbool, default: rcParams["figure.frameon"] (default: True)
If False, suppress drawing the figure background patch. Other Parameters
**kwargsSubFigure properties, optional
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
edgecolor color
facecolor color
figure Figure
frameon bool
gid str
in_layout bool
label object
linewidth number
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float add_artist(artist, clip=False)[source]
Add an Artist to the figure. Usually artists are added to Axes objects using Axes.add_artist; this method can be used in the rare cases where one needs to add artists directly to the figure instead. Parameters
artistArtist
The artist to add to the figure. If the added artist has no transform previously set, its transform will be set to figure.transSubfigure.
clipbool, default: False
Whether the added artist should be clipped by the figure patch. Returns
Artist
The added artist.
add_axes(*args, **kwargs)[source]
Add an Axes to the figure. Call signatures: add_axes(rect, projection=None, polar=False, **kwargs)
add_axes(ax)
Parameters
rectsequence of float
The dimensions [left, bottom, width, height] of the new Axes. All quantities are in fractions of figure width and height.
projection{None, 'aitoff', 'hammer', 'lambert', 'mollweide', 'polar', 'rectilinear', str}, optional
The projection type of the Axes. str is the name of a custom projection, see projections. The default None results in a 'rectilinear' projection.
polarbool, default: False
If True, equivalent to projection='polar'.
axes_classsubclass type of Axes, optional
The axes.Axes subclass that is instantiated. This parameter is incompatible with projection and polar. See axisartist for examples.
sharex, shareyAxes, optional
Share the x or y axis with sharex and/or sharey. The axis will have the same limits, ticks, and scale as the axis of the shared axes.
labelstr
A label for the returned Axes. Returns
Axes, or a subclass of Axes
The returned axes class depends on the projection used. It is Axes if rectilinear projection is used and projections.polar.PolarAxes if polar projection is used. Other Parameters
**kwargs
This method also takes the keyword arguments for the returned Axes class. The keyword arguments for the rectilinear Axes class Axes can be found in the following table but there might also be other keyword arguments if another projection is used, see the actual Axes class.
Property Description
adjustable {'box', 'datalim'}
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
animated bool
aspect {'auto', 'equal'} or float
autoscale_on bool
autoscalex_on bool
autoscaley_on bool
axes_locator Callable[[Axes, Renderer], Bbox]
axisbelow bool or 'line'
box_aspect float or None
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
facecolor or fc color
figure Figure
frame_on bool
gid str
in_layout bool
label object
navigate bool
navigate_mode unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
position [left, bottom, width, height] or Bbox
prop_cycle unknown
rasterization_zorder float or None
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
title str
transform Transform
url str
visible bool
xbound unknown
xlabel str
xlim (bottom: float, top: float)
xmargin float greater than -0.5
xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
xticklabels unknown
xticks unknown
ybound unknown
ylabel str
ylim (bottom: float, top: float)
ymargin float greater than -0.5
yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
yticklabels unknown
yticks unknown
zorder float See also Figure.add_subplot
pyplot.subplot
pyplot.axes
Figure.subplots
pyplot.subplots
Notes In rare circumstances, add_axes may be called with a single argument, an Axes instance already created in the present figure but not in the figure's list of Axes. Examples Some simple examples: rect = l, b, w, h
fig = plt.figure()
fig.add_axes(rect)
fig.add_axes(rect, frameon=False, facecolor='g')
fig.add_axes(rect, polar=True)
ax = fig.add_axes(rect, projection='polar')
fig.delaxes(ax)
fig.add_axes(ax)
add_callback(func)[source]
Add a callback function that will be called whenever one of the Artist's properties changes. Parameters
funccallable
The callback function. It must have the signature: def func(artist: Artist) -> Any
where artist is the calling Artist. Return values may exist but are ignored. Returns
int
The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback
add_gridspec(nrows=1, ncols=1, **kwargs)[source]
Return a GridSpec that has this figure as a parent. This allows complex layout of Axes in the figure. Parameters
nrowsint, default: 1
Number of rows in grid.
ncolsint, default: 1
Number of columns in grid. Returns
GridSpec
Other Parameters
**kwargs
Keyword arguments are passed to GridSpec. See also matplotlib.pyplot.subplots
Examples Adding a subplot that spans two rows: fig = plt.figure()
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[1, 0])
# spans two rows:
ax3 = fig.add_subplot(gs[:, 1])
add_subfigure(subplotspec, **kwargs)[source]
Add a SubFigure to the figure as part of a subplot arrangement. Parameters
subplotspecgridspec.SubplotSpec
Defines the region in a parent gridspec where the subfigure will be placed. Returns
figure.SubFigure
Other Parameters
**kwargs
Are passed to the SubFigure object. See also Figure.subfigures
add_subplot(*args, **kwargs)[source]
Add an Axes to the figure as part of a subplot arrangement. Call signatures: add_subplot(nrows, ncols, index, **kwargs)
add_subplot(pos, **kwargs)
add_subplot(ax)
add_subplot()
Parameters
*argsint, (int, int, index), or SubplotSpec, default: (1, 1, 1)
The position of the subplot described by one of:
Three integers (nrows, ncols, index). The subplot will take the index position on a grid with nrows rows and ncols columns. index starts at 1 in the upper left corner and increases to the right. index can also be a two-tuple specifying the (first, last) indices (1-based, and including last) of the subplot, e.g., fig.add_subplot(3, 1, (1, 2)) makes a subplot that spans the upper 2/3 of the figure.
A 3-digit integer. The digits are interpreted as if given separately as three single-digit integers, i.e. fig.add_subplot(235) is the same as fig.add_subplot(2, 3, 5). Note that this can only be used if there are no more than 9 subplots.
A SubplotSpec.
In rare circumstances, add_subplot may be called with a single argument, a subplot Axes instance already created in the present figure but not in the figure's list of Axes.
projection{None, 'aitoff', 'hammer', 'lambert', 'mollweide', 'polar', 'rectilinear', str}, optional
The projection type of the subplot (Axes). str is the name of a custom projection, see projections. The default None results in a 'rectilinear' projection.
polarbool, default: False
If True, equivalent to projection='polar'.
axes_classsubclass type of Axes, optional
The axes.Axes subclass that is instantiated. This parameter is incompatible with projection and polar. See axisartist for examples.
sharex, shareyAxes, optional
Share the x or y axis with sharex and/or sharey. The axis will have the same limits, ticks, and scale as the axis of the shared axes.
labelstr
A label for the returned Axes. Returns
axes.SubplotBase, or another subclass of Axes
The Axes of the subplot. The returned Axes base class depends on the projection used. It is Axes if rectilinear projection is used and projections.polar.PolarAxes if polar projection is used. The returned Axes is then a subplot subclass of the base class. Other Parameters
**kwargs
This method also takes the keyword arguments for the returned Axes base class; except for the figure argument. The keyword arguments for the rectilinear base class Axes can be found in the following table but there might also be other keyword arguments if another projection is used.
Property Description
adjustable {'box', 'datalim'}
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
animated bool
aspect {'auto', 'equal'} or float
autoscale_on bool
autoscalex_on bool
autoscaley_on bool
axes_locator Callable[[Axes, Renderer], Bbox]
axisbelow bool or 'line'
box_aspect float or None
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
facecolor or fc color
figure Figure
frame_on bool
gid str
in_layout bool
label object
navigate bool
navigate_mode unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
position [left, bottom, width, height] or Bbox
prop_cycle unknown
rasterization_zorder float or None
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
title str
transform Transform
url str
visible bool
xbound unknown
xlabel str
xlim (bottom: float, top: float)
xmargin float greater than -0.5
xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
xticklabels unknown
xticks unknown
ybound unknown
ylabel str
ylim (bottom: float, top: float)
ymargin float greater than -0.5
yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
yticklabels unknown
yticks unknown
zorder float See also Figure.add_axes
pyplot.subplot
pyplot.axes
Figure.subplots
pyplot.subplots
Examples fig = plt.figure()
fig.add_subplot(231)
ax1 = fig.add_subplot(2, 3, 1) # equivalent but more general
fig.add_subplot(232, frameon=False) # subplot with no frame
fig.add_subplot(233, projection='polar') # polar subplot
fig.add_subplot(234, sharex=ax1) # subplot sharing x-axis with ax1
fig.add_subplot(235, facecolor="red") # red subplot
ax1.remove() # delete ax1 from the figure
fig.add_subplot(ax1) # add ax1 back to the figure
align_labels(axs=None)[source]
Align the xlabels and ylabels of subplots with the same subplots row or column (respectively) if label alignment is being done automatically (i.e. the label position is not manually set). Alignment persists for draw events after this is called. Parameters
axslist of Axes
Optional list (or ndarray) of Axes to align the labels. Default is to align all Axes on the figure. See also matplotlib.figure.Figure.align_xlabels
matplotlib.figure.Figure.align_ylabels
align_xlabels(axs=None)[source]
Align the xlabels of subplots in the same subplot column if label alignment is being done automatically (i.e. the label position is not manually set). Alignment persists for draw events after this is called. If a label is on the bottom, it is aligned with labels on Axes that also have their label on the bottom and that have the same bottom-most subplot row. If the label is on the top, it is aligned with labels on Axes with the same top-most row. Parameters
axslist of Axes
Optional list (or ndarray) of Axes to align the xlabels. Default is to align all Axes on the figure. See also matplotlib.figure.Figure.align_ylabels
matplotlib.figure.Figure.align_labels
Notes This assumes that axs are from the same GridSpec, so that their SubplotSpec positions correspond to figure positions. Examples Example with rotated xtick labels: fig, axs = plt.subplots(1, 2)
for tick in axs[0].get_xticklabels():
tick.set_rotation(55)
axs[0].set_xlabel('XLabel 0')
axs[1].set_xlabel('XLabel 1')
fig.align_xlabels()
align_ylabels(axs=None)[source]
Align the ylabels of subplots in the same subplot column if label alignment is being done automatically (i.e. the label position is not manually set). Alignment persists for draw events after this is called. If a label is on the left, it is aligned with labels on Axes that also have their label on the left and that have the same left-most subplot column. If the label is on the right, it is aligned with labels on Axes with the same right-most column. Parameters
axslist of Axes
Optional list (or ndarray) of Axes to align the ylabels. Default is to align all Axes on the figure. See also matplotlib.figure.Figure.align_xlabels
matplotlib.figure.Figure.align_labels
Notes This assumes that axs are from the same GridSpec, so that their SubplotSpec positions correspond to figure positions. Examples Example with large yticks labels: fig, axs = plt.subplots(2, 1)
axs[0].plot(np.arange(0, 1000, 50))
axs[0].set_ylabel('YLabel 0')
axs[1].set_ylabel('YLabel 1')
fig.align_ylabels()
autofmt_xdate(bottom=0.2, rotation=30, ha='right', which='major')[source]
Date ticklabels often overlap, so it is useful to rotate them and right align them. Also, a common use case is a number of subplots with shared x-axis where the x-axis is date data. The ticklabels are often long, and it helps to rotate them on the bottom subplot and turn them off on other subplots, as well as turn off xlabels. Parameters
bottomfloat, default: 0.2
The bottom of the subplots for subplots_adjust.
rotationfloat, default: 30 degrees
The rotation angle of the xtick labels in degrees.
ha{'left', 'center', 'right'}, default: 'right'
The horizontal alignment of the xticklabels.
which{'major', 'minor', 'both'}, default: 'major'
Selects which ticklabels to rotate.
propertyaxes
List of Axes in the SubFigure. You can access and modify the Axes in the SubFigure through this list. Do not modify the list itself. Instead, use add_axes, add_subplot or delaxes to add or remove an Axes. Note: The SubFigure.axes property and get_axes method are equivalent.
colorbar(mappable, cax=None, ax=None, use_gridspec=True, **kw)[source]
Add a colorbar to a plot. Parameters
mappable
The matplotlib.cm.ScalarMappable (i.e., AxesImage, ContourSet, etc.) described by this colorbar. This argument is mandatory for the Figure.colorbar method but optional for the pyplot.colorbar function, which sets the default to the current image. Note that one can create a ScalarMappable "on-the-fly" to generate colorbars not attached to a previously drawn artist, e.g. fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
caxAxes, optional
Axes into which the colorbar will be drawn.
axAxes, list of Axes, optional
One or more parent axes from which space for a new colorbar axes will be stolen, if cax is None. This has no effect if cax is set.
use_gridspecbool, optional
If cax is None, a new cax is created as an instance of Axes. If ax is an instance of Subplot and use_gridspec is True, cax is created as an instance of Subplot using the gridspec module. Returns
colorbarColorbar
Notes Additional keyword arguments are of two kinds: axes properties: locationNone or {'left', 'right', 'top', 'bottom'}
The location, relative to the parent axes, where the colorbar axes is created. It also determines the orientation of the colorbar (colorbars on the left and right are vertical, colorbars at the top and bottom are horizontal). If None, the location will come from the orientation if it is set (vertical colorbars on the right, horizontal ones at the bottom), or default to 'right' if orientation is unset. orientationNone or {'vertical', 'horizontal'}
The orientation of the colorbar. It is preferable to set the location of the colorbar, as that also determines the orientation; passing incompatible values for location and orientation raises an exception. fractionfloat, default: 0.15
Fraction of original axes to use for colorbar. shrinkfloat, default: 1.0
Fraction by which to multiply the size of the colorbar. aspectfloat, default: 20
Ratio of long to short dimensions. padfloat, default: 0.05 if vertical, 0.15 if horizontal
Fraction of original axes between colorbar and new image axes. anchor(float, float), optional
The anchor point of the colorbar axes. Defaults to (0.0, 0.5) if vertical; (0.5, 1.0) if horizontal. panchor(float, float), or False, optional
The anchor point of the colorbar parent axes. If False, the parent axes' anchor will be unchanged. Defaults to (1.0, 0.5) if vertical; (0.5, 0.0) if horizontal. colorbar properties:
Property Description
extend {'neither', 'both', 'min', 'max'} If not 'neither', make pointed end(s) for out-of- range values. These are set for a given colormap using the colormap set_under and set_over methods.
extendfrac {None, 'auto', length, lengths} If set to None, both the minimum and maximum triangular colorbar extensions will have a length of 5% of the interior colorbar length (this is the default setting). If set to 'auto', makes the triangular colorbar extensions the same lengths as the interior boxes (when spacing is set to 'uniform') or the same lengths as the respective adjacent interior boxes (when spacing is set to 'proportional'). If a scalar, indicates the length of both the minimum and maximum triangular colorbar extensions as a fraction of the interior colorbar length. A two-element sequence of fractions may also be given, indicating the lengths of the minimum and maximum colorbar extensions respectively as a fraction of the interior colorbar length.
extendrect bool If False the minimum and maximum colorbar extensions will be triangular (the default). If True the extensions will be rectangular.
spacing {'uniform', 'proportional'} Uniform spacing gives each discrete color the same space; proportional makes the space proportional to the data interval.
ticks None or list of ticks or Locator If None, ticks are determined automatically from the input.
format None or str or Formatter If None, ScalarFormatter is used. If a format string is given, e.g., '%.3f', that is used. An alternative Formatter may be given instead.
drawedges bool Whether to draw lines at color boundaries.
label str The label on the colorbar's long axis. The following will probably be useful only in the context of indexed colors (that is, when the mappable has norm=NoNorm()), or other unusual circumstances.
Property Description
boundaries None or a sequence
values None or a sequence which must be of length 1 less than the sequence of boundaries. For each region delimited by adjacent entries in boundaries, the color mapped to the corresponding value in values will be used. If mappable is a ContourSet, its extend kwarg is included automatically. The shrink kwarg provides a simple way to scale the colorbar with respect to the axes. Note that if cax is specified, it determines the size of the colorbar and shrink and aspect kwargs are ignored. For more precise control, you can manually specify the positions of the axes objects in which the mappable and the colorbar are drawn. In this case, do not use any of the axes properties kwargs. It is known that some vector graphics viewers (svg and pdf) renders white gaps between segments of the colorbar. This is due to bugs in the viewers, not Matplotlib. As a workaround, the colorbar can be rendered with overlapping segments: cbar = colorbar()
cbar.solids.set_edgecolor("face")
draw()
However this has negative consequences in other circumstances, e.g. with semi-transparent images (alpha < 1) and colorbar extensions; therefore, this workaround is not used by default (see issue #1188).
contains(mouseevent)[source]
Test whether the mouse event occurred on the figure. Returns
bool, {}
convert_xunits(x)[source]
Convert x using the unit type of the xaxis. If the artist is not contained in an Axes or if the xaxis does not have units, x itself is returned.
convert_yunits(y)[source]
Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned.
delaxes(ax)[source]
Remove the Axes ax from the figure; update the current Axes.
propertydpi
draw(renderer)[source]
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses.
findobj(match=None, include_self=True)[source]
Find artist objects. Recursively find all Artist instances contained in the artist. Parameters
match
A filter criterion for the matches. This can be
None: Return all objects contained in artist. A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True. A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check).
include_selfbool
Include self in the list to be checked for a match. Returns
list of Artist
format_cursor_data(data)[source]
Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data
propertyframeon
Return the figure's background patch visibility, i.e. whether the figure background will be drawn. Equivalent to Figure.patch.get_visible().
gca(**kwargs)[source]
Get the current Axes. If there is currently no Axes on this Figure, a new one is created using Figure.add_subplot. (To test whether there is currently an Axes on a Figure, check whether figure.axes is empty. To test whether there is currently a Figure on the pyplot figure stack, check whether pyplot.get_fignums() is empty.) The following kwargs are supported for ensuring the returned Axes adheres to the given projection etc., and for Axes creation if the active Axes does not exist:
Property Description
adjustable {'box', 'datalim'}
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
animated bool
aspect {'auto', 'equal'} or float
autoscale_on bool
autoscalex_on bool
autoscaley_on bool
axes_locator Callable[[Axes, Renderer], Bbox]
axisbelow bool or 'line'
box_aspect float or None
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
facecolor or fc color
figure Figure
frame_on bool
gid str
in_layout bool
label object
navigate bool
navigate_mode unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
position [left, bottom, width, height] or Bbox
prop_cycle unknown
rasterization_zorder float or None
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
title str
transform Transform
url str
visible bool
xbound unknown
xlabel str
xlim (bottom: float, top: float)
xmargin float greater than -0.5
xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
xticklabels unknown
xticks unknown
ybound unknown
ylabel str
ylim (bottom: float, top: float)
ymargin float greater than -0.5
yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
yticklabels unknown
yticks unknown
zorder float
get_agg_filter()[source]
Return filter function to be used for agg filter.
get_alpha()[source]
Return the alpha value used for blending - not supported on all backends.
get_animated()[source]
Return whether the artist is animated.
get_axes()[source]
List of Axes in the SubFigure. You can access and modify the Axes in the SubFigure through this list. Do not modify the list itself. Instead, use add_axes, add_subplot or delaxes to add or remove an Axes. Note: The SubFigure.axes property and get_axes method are equivalent.
get_children()[source]
Get a list of artists contained in the figure.
get_clip_box()[source]
Return the clipbox.
get_clip_on()[source]
Return whether the artist uses clipping.
get_clip_path()[source]
Return the clip path.
get_constrained_layout()[source]
Return whether constrained layout is being used. See Constrained Layout Guide.
get_constrained_layout_pads(relative=False)[source]
Get padding for constrained_layout. Returns a list of w_pad, h_pad in inches and wspace and hspace as fractions of the subplot. See Constrained Layout Guide. Parameters
relativebool
If True, then convert from inches to figure relative.
get_cursor_data(event)[source]
Return the cursor data for a given event. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. Cursor data can be used by Artists to provide additional context information for a given event. The default implementation just returns None. Subclasses can override the method and return arbitrary data. However, when doing so, they must ensure that format_cursor_data can convert the data to a string representation. The only current use case is displaying the z-value of an AxesImage in the status bar of a plot window, while moving the mouse. Parameters
eventmatplotlib.backend_bases.MouseEvent
See also format_cursor_data
get_default_bbox_extra_artists()[source]
get_edgecolor()[source]
Get the edge color of the Figure rectangle.
get_facecolor()[source]
Get the face color of the Figure rectangle.
get_figure()[source]
Return the Figure instance the artist belongs to.
get_frameon()[source]
Return the figure's background patch visibility, i.e. whether the figure background will be drawn. Equivalent to Figure.patch.get_visible().
get_gid()[source]
Return the group id.
get_in_layout()[source]
Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight').
get_label()[source]
Return the label used for this artist in the legend.
get_linewidth()[source]
Get the line width of the Figure rectangle.
get_path_effects()[source]
get_picker()[source]
Return the picking behavior of the artist. The possible values are described in set_picker. See also
set_picker, pickable, pick
get_rasterized()[source]
Return whether the artist is to be rasterized.
get_sketch_params()[source]
Return the sketch parameters for the artist. Returns
tuple or None
A 3-tuple with the following elements:
scale: The amplitude of the wiggle perpendicular to the source line.
length: The length of the wiggle along the line.
randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set.
get_snap()[source]
Return the snap setting. See set_snap for details.
get_tightbbox(renderer, bbox_extra_artists=None)[source]
Return a (tight) bounding box of the figure in inches. Note that FigureBase differs from all other artists, which return their Bbox in pixels. Artists that have artist.set_in_layout(False) are not included in the bbox. Parameters
rendererRendererBase subclass
renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer())
bbox_extra_artistslist of Artist or None
List of artists to include in the tight bounding box. If None (default), then all artist children of each Axes are included in the tight bounding box. Returns
BboxBase
containing the bounding box (in figure inches).
get_transform()[source]
Return the Transform instance used by this artist.
get_transformed_clip_path_and_affine()[source]
Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation.
get_url()[source]
Return the url.
get_visible()[source]
Return the visibility.
get_window_extent(*args, **kwargs)[source]
Get the artist's bounding box in display space. The bounding box' width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function, the results will not update if the artist window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly.
get_zorder()[source]
Return the artist's zorder.
have_units()[source]
Return whether units are set on any axis.
is_transform_set()[source]
Return whether the Artist has an explicitly set transform. This is True after set_transform has been called.
legend(*args, **kwargs)[source]
Place a legend on the figure. Call signatures: legend()
legend(handles, labels)
legend(handles=handles)
legend(labels)
The call signatures correspond to the following different ways to use this method: 1. Automatic detection of elements to be shown in the legend The elements to be added to the legend are automatically determined, when you do not pass in any extra arguments. In this case, the labels are taken from the artist. You can specify them either at artist creation or by calling the set_label() method on the artist: ax.plot([1, 2, 3], label='Inline label')
fig.legend()
or: line, = ax.plot([1, 2, 3])
line.set_label('Label via method')
fig.legend()
Specific lines can be excluded from the automatic legend element selection by defining a label starting with an underscore. This is the default for all artists, so calling Figure.legend without any arguments and without setting the labels manually will result in no legend being drawn. 2. Explicitly listing the artists and labels in the legend For full control of which artists have a legend entry, it is possible to pass an iterable of legend artists followed by an iterable of legend labels respectively: fig.legend([line1, line2, line3], ['label1', 'label2', 'label3'])
3. Explicitly listing the artists in the legend This is similar to 2, but the labels are taken from the artists' label properties. Example: line1, = ax1.plot([1, 2, 3], label='label1')
line2, = ax2.plot([1, 2, 3], label='label2')
fig.legend(handles=[line1, line2])
4. Labeling existing plot elements Discouraged This call signature is discouraged, because the relation between plot elements and labels is only implicit by their order and can easily be mixed up. To make a legend for all artists on all Axes, call this function with an iterable of strings, one for each legend item. For example: fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 3, 5], color='blue')
ax2.plot([2, 4, 6], color='red')
fig.legend(['the blues', 'the reds'])
Parameters
handleslist of Artist, optional
A list of Artists (lines, patches) to be added to the legend. Use this together with labels, if you need full control on what is shown in the legend and the automatic mechanism described above is not sufficient. The length of handles and labels should be the same in this case. If they are not, they are truncated to the smaller length.
labelslist of str, optional
A list of labels to show next to the artists. Use this together with handles, if you need full control on what is shown in the legend and the automatic mechanism described above is not sufficient. Returns
Legend
Other Parameters
locstr or pair of floats, default: rcParams["legend.loc"] (default: 'best') ('best' for axes, 'upper right' for figures)
The location of the legend. The strings 'upper left', 'upper right', 'lower left', 'lower right' place the legend at the corresponding corner of the axes/figure. The strings 'upper center', 'lower center', 'center left', 'center right' place the legend at the center of the corresponding edge of the axes/figure. The string 'center' places the legend at the center of the axes/figure. The string 'best' places the legend at the location, among the nine locations defined so far, with the minimum overlap with other drawn artists. This option can be quite slow for plots with large amounts of data; your plotting speed may benefit from providing a specific location. The location can also be a 2-tuple giving the coordinates of the lower-left corner of the legend in axes coordinates (in which case bbox_to_anchor will be ignored). For back-compatibility, 'center right' (but no other location) can also be spelled 'right', and each "string" locations can also be given as a numeric value:
Location String Location Code
'best' 0
'upper right' 1
'upper left' 2
'lower left' 3
'lower right' 4
'right' 5
'center left' 6
'center right' 7
'lower center' 8
'upper center' 9
'center' 10
bbox_to_anchorBboxBase, 2-tuple, or 4-tuple of floats
Box that is used to position the legend in conjunction with loc. Defaults to axes.bbox (if called as a method to Axes.legend) or figure.bbox (if Figure.legend). This argument allows arbitrary placement of the legend. Bbox coordinates are interpreted in the coordinate system given by bbox_transform, with the default transform Axes or Figure coordinates, depending on which legend is called. If a 4-tuple or BboxBase is given, then it specifies the bbox (x, y, width, height) that the legend is placed in. To put the legend in the best location in the bottom right quadrant of the axes (or figure): loc='best', bbox_to_anchor=(0.5, 0., 0.5, 0.5)
A 2-tuple (x, y) places the corner of the legend specified by loc at x, y. For example, to put the legend's upper right-hand corner in the center of the axes (or figure) the following keywords can be used: loc='upper right', bbox_to_anchor=(0.5, 0.5)
ncolint, default: 1
The number of columns that the legend has.
propNone or matplotlib.font_manager.FontProperties or dict
The font properties of the legend. If None (default), the current matplotlib.rcParams will be used.
fontsizeint or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'}
The font size of the legend. If the value is numeric the size will be the absolute font size in points. String values are relative to the current default font size. This argument is only used if prop is not specified.
labelcolorstr or list, default: rcParams["legend.labelcolor"] (default: 'None')
The color of the text in the legend. Either a valid color string (for example, 'red'), or a list of color strings. The labelcolor can also be made to match the color of the line or marker using 'linecolor', 'markerfacecolor' (or 'mfc'), or 'markeredgecolor' (or 'mec'). Labelcolor can be set globally using rcParams["legend.labelcolor"] (default: 'None'). If None, use rcParams["text.color"] (default: 'black').
numpointsint, default: rcParams["legend.numpoints"] (default: 1)
The number of marker points in the legend when creating a legend entry for a Line2D (line).
scatterpointsint, default: rcParams["legend.scatterpoints"] (default: 1)
The number of marker points in the legend when creating a legend entry for a PathCollection (scatter plot).
scatteryoffsetsiterable of floats, default: [0.375, 0.5, 0.3125]
The vertical offset (relative to the font size) for the markers created for a scatter plot legend entry. 0.0 is at the base of the legend text, and 1.0 is at the top. To draw all markers at the same height, set to [0.5].
markerscalefloat, default: rcParams["legend.markerscale"] (default: 1.0)
The relative size of legend markers compared with the originally drawn ones.
markerfirstbool, default: True
If True, legend marker is placed to the left of the legend label. If False, legend marker is placed to the right of the legend label.
frameonbool, default: rcParams["legend.frameon"] (default: True)
Whether the legend should be drawn on a patch (frame).
fancyboxbool, default: rcParams["legend.fancybox"] (default: True)
Whether round edges should be enabled around the FancyBboxPatch which makes up the legend's background.
shadowbool, default: rcParams["legend.shadow"] (default: False)
Whether to draw a shadow behind the legend.
framealphafloat, default: rcParams["legend.framealpha"] (default: 0.8)
The alpha transparency of the legend's background. If shadow is activated and framealpha is None, the default value is ignored.
facecolor"inherit" or color, default: rcParams["legend.facecolor"] (default: 'inherit')
The legend's background color. If "inherit", use rcParams["axes.facecolor"] (default: 'white').
edgecolor"inherit" or color, default: rcParams["legend.edgecolor"] (default: '0.8')
The legend's background patch edge color. If "inherit", use rcParams["axes.edgecolor"] (default: 'black').
mode{"expand", None}
If mode is set to "expand" the legend will be horizontally expanded to fill the axes area (or bbox_to_anchor, if that defines the legend's size).
bbox_transformNone or matplotlib.transforms.Transform
The transform for the bounding box (bbox_to_anchor). For a value of None (default) the Axes' transAxes transform will be used.
titlestr or None
The legend's title. Default is no title (None).
title_fontpropertiesNone or matplotlib.font_manager.FontProperties or dict
The font properties of the legend's title. If None (default), the title_fontsize argument will be used if present; if title_fontsize is also None, the current rcParams["legend.title_fontsize"] (default: None) will be used.
title_fontsizeint or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'}, default: rcParams["legend.title_fontsize"] (default: None)
The font size of the legend's title. Note: This cannot be combined with title_fontproperties. If you want to set the fontsize alongside other font properties, use the size parameter in title_fontproperties.
borderpadfloat, default: rcParams["legend.borderpad"] (default: 0.4)
The fractional whitespace inside the legend border, in font-size units.
labelspacingfloat, default: rcParams["legend.labelspacing"] (default: 0.5)
The vertical space between the legend entries, in font-size units.
handlelengthfloat, default: rcParams["legend.handlelength"] (default: 2.0)
The length of the legend handles, in font-size units.
handleheightfloat, default: rcParams["legend.handleheight"] (default: 0.7)
The height of the legend handles, in font-size units.
handletextpadfloat, default: rcParams["legend.handletextpad"] (default: 0.8)
The pad between the legend handle and text, in font-size units.
borderaxespadfloat, default: rcParams["legend.borderaxespad"] (default: 0.5)
The pad between the axes and legend border, in font-size units.
columnspacingfloat, default: rcParams["legend.columnspacing"] (default: 2.0)
The spacing between columns, in font-size units.
handler_mapdict or None
The custom dictionary mapping instances or types to a legend handler. This handler_map updates the default handler map found at matplotlib.legend.Legend.get_legend_handler_map. See also Axes.legend
Notes Some artists are not supported by this function. See Legend guide for details.
propertymouseover
If this property is set to True, the artist will be queried for custom context information when the mouse cursor moves over it. See also get_cursor_data(), ToolCursorPosition and NavigationToolbar2.
pchanged()[source]
Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback
remove_callback
pick(mouseevent)[source]
Process a pick event. Each child artist will fire a pick event if mouseevent is over the artist and the artist has picker set. See also
set_picker, get_picker, pickable
pickable()[source]
Return whether the artist is pickable. See also
set_picker, get_picker, pick
properties()[source]
Return a dictionary of all the properties of the artist.
remove()[source]
Remove the artist from the figure if possible. The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry.
remove_callback(oid)[source]
Remove a callback based on its observer id. See also add_callback
sca(a)[source]
Set the current Axes to be a and return a.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, frameon=<UNSET>, gid=<UNSET>, in_layout=<UNSET>, label=<UNSET>, linewidth=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
edgecolor color
facecolor color
figure Figure
frameon bool
gid str
in_layout bool
label object
linewidth number
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
set_agg_filter(filter_func)[source]
Set the agg filter. Parameters
filter_funccallable
A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array.
set_alpha(alpha)[source]
Set the alpha value used for blending - not supported on all backends. Parameters
alphascalar or None
alpha must be within the 0-1 range, inclusive.
set_animated(b)[source]
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters
bbool
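A minimal blitting sketch along the lines described above (it assumes an Agg-based canvas and illustrative data; see the blitting tutorial for a full treatment):

```python
import matplotlib
matplotlib.use("Agg")  # blitting as shown here needs an Agg-based canvas
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(ln,) = ax.plot([], [], animated=True)  # excluded from regular draws
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
fig.canvas.draw()                                 # render the static parts once
background = fig.canvas.copy_from_bbox(fig.bbox)  # cache them

ln.set_data([0, 0.5, 1], [0, 1, 0])
fig.canvas.restore_region(background)  # restore the cached background
ax.draw_artist(ln)                     # draw only the animated artist
fig.canvas.blit(fig.bbox)              # composite the result to the canvas
```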
set_clip_box(clipbox)[source]
Set the artist's clip Bbox. Parameters
clipboxBbox
set_clip_on(b)[source]
Set whether the artist uses clipping. When False artists will be visible outside of the axes which can lead to unexpected results. Parameters
bbool
set_clip_path(path, transform=None)[source]
Set the artist's clip path. Parameters
pathPatch or Path or TransformedPath or None
The clip path. If given a Path, transform must be provided as well. If None, a previously set clip path is removed.
transformTransform, optional
Only used if path is a Path, in which case the given Path is converted to a TransformedPath using transform. Notes For efficiency, if path is a Rectangle this method will set the clipping box to the corresponding rectangle and set the clipping path to None. For technical reasons (support of set), a tuple (path, transform) is also accepted as a single positional parameter.
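For instance, clipping an image to a circular Patch (the geometry here is illustrative); because a Patch carries its own transform, no separate transform argument is needed:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

fig, ax = plt.subplots()
im = ax.imshow(np.zeros((20, 20)), extent=(0, 1, 0, 1))
# The Circle's transform places it in data coordinates.
clip = Circle((0.5, 0.5), 0.4, transform=ax.transData)
im.set_clip_path(clip)
```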
set_edgecolor(color)[source]
Set the edge color of the Figure rectangle. Parameters
colorcolor
set_facecolor(color)[source]
Set the face color of the Figure rectangle. Parameters
colorcolor
set_figure(fig)[source]
Set the Figure instance the artist belongs to. Parameters
figFigure
set_frameon(b)[source]
Set the figure's background patch visibility, i.e. whether the figure background will be drawn. Equivalent to Figure.patch.set_visible(). Parameters
bbool
set_gid(gid)[source]
Set the (group) id for the artist. Parameters
gidstr
set_in_layout(in_layout)[source]
Set if artist is to be included in layout calculations, e.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters
in_layoutbool
set_label(s)[source]
Set a label that will be displayed in the legend. Parameters
sobject
s will be converted to a string by calling str.
set_linewidth(linewidth)[source]
Set the line width of the Figure rectangle. Parameters
linewidthnumber
set_path_effects(path_effects)[source]
Set the path effects. Parameters
path_effectsAbstractPathEffect
set_picker(picker)[source]
Define the picking behavior of the artist. Parameters
pickerNone or bool or float or callable
This can be one of the following:
None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event
A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent)
to determine the hit test. If the mouse event is over the artist, return hit=True, and props is a dictionary of properties you want added to the PickEvent attributes.
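A sketch of a callable picker following that protocol; the 0.5 data-unit tolerance is an arbitrary choice for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

def line_picker(artist, mouseevent):
    """Hit when the mouse is within 0.5 data units of a data vertex."""
    if mouseevent.xdata is None:  # mouse is outside the axes
        return False, {}
    x = np.asarray(artist.get_xdata())
    y = np.asarray(artist.get_ydata())
    d = np.hypot(x - mouseevent.xdata, y - mouseevent.ydata)
    ind = np.nonzero(d < 0.5)[0]
    return (True, {"ind": ind}) if ind.size else (False, {})

fig, ax = plt.subplots()
line, = ax.plot(np.arange(10), np.arange(10) ** 2, "o")
line.set_picker(line_picker)
```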
set_rasterized(rasterized)[source]
Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters
rasterizedbool
set_sketch_params(scale=None, length=None, randomness=None)[source]
Set the sketch parameters. Parameters
scalefloat, optional
The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided.
lengthfloat, optional
The length of the wiggle along the line, in pixels (default 128.0)
randomnessfloat, optional
The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. Using the same seed yields the same random shape.
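For example, giving a line a hand-drawn wiggle (the parameter values below are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = np.linspace(0, 10, 200)
line, = ax.plot(x, np.sin(x))
# scale: wiggle amplitude in pixels; length/randomness shape the wiggle
line.set_sketch_params(scale=2, length=100, randomness=10)
```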
set_snap(snap)[source]
Set the snapping behavior. Snapping aligns positions with the pixel grid, which results in clearer images. For example, if a black line of 1px width was defined at a position in between two pixels, the resulting image would contain the interpolated value of that line in the pixel grid, which would be a grey value on both adjacent pixel positions. In contrast, snapping will move the line to the nearest integer pixel value, so that the resulting image will really contain a 1px wide black line. Snapping is currently only supported by the Agg and MacOSX backends. Parameters
snapbool or None
Possible values:
True: Snap vertices to the nearest pixel center.
False: Do not modify vertex positions.
None: (auto) If the path contains only rectilinear line segments, round to the nearest pixel center.
set_transform(t)[source]
Set the artist transform. Parameters
tTransform
set_url(url)[source]
Set the url for the artist. Parameters
urlstr
set_visible(b)[source]
Set the artist's visibility. Parameters
bbool
set_zorder(level)[source]
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters
levelfloat
propertystale
Whether the artist is 'stale' and needs to be re-drawn for the output to match the internal state of the artist.
propertysticky_edges
x and y sticky edge lists for autoscaling. When performing autoscaling, if a data limit coincides with a value in the corresponding sticky_edges list, then no margin will be added--the view limit "sticks" to the edge. A typical use case is histograms, where one usually expects no margin on the bottom edge (0) of the histogram. Moreover, margin expansion "bumps" against sticky edges and cannot cross them. For example, if the upper data limit is 1.0, the upper view limit computed by simple margin application is 1.2, but there is a sticky edge at 1.1, then the actual upper view limit will be 1.1. This attribute cannot be assigned to; however, the x and y lists can be modified in place as needed. Examples >>> artist.sticky_edges.x[:] = (xmin, xmax)
>>> artist.sticky_edges.y[:] = (ymin, ymax)
subfigures(nrows=1, ncols=1, squeeze=True, wspace=None, hspace=None, width_ratios=None, height_ratios=None, **kwargs)[source]
Add a subfigure to this figure or subfigure. A subfigure has the same artist methods as a figure, and is logically the same as a figure, but cannot print itself. See Figure subfigures. Parameters
nrows, ncolsint, default: 1
Number of rows/columns of the subfigure grid.
squeezebool, default: True
If True, extra dimensions are squeezed out from the returned array of subfigures.
wspace, hspacefloat, default: None
The amount of width/height reserved for space between subfigures, expressed as a fraction of the average subfigure width/height. If not given, the values will be inferred from a figure or rcParams when necessary.
width_ratiosarray-like of length ncols, optional
Defines the relative widths of the columns. Each column gets a relative width of width_ratios[i] / sum(width_ratios). If not given, all columns will have the same width.
height_ratiosarray-like of length nrows, optional
Defines the relative heights of the rows. Each row gets a relative height of height_ratios[i] / sum(height_ratios). If not given, all rows will have the same height.
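A short sketch of the parameters above (the ratios and grid shapes are chosen purely for illustration):

```python
import matplotlib.pyplot as plt

fig = plt.figure(constrained_layout=True)
# One row of two subfigures, the left twice as wide as the right
left, right = fig.subfigures(1, 2, width_ratios=[2, 1])
left.suptitle("left")
axs_left = left.subplots(2, 2)   # subfigures support the figure methods
right.suptitle("right")
ax_right = right.subplots()
```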
subplot_mosaic(mosaic, *, sharex=False, sharey=False, subplot_kw=None, gridspec_kw=None, empty_sentinel='.')[source]
Build a layout of Axes based on ASCII art or nested lists. This is a helper function to build complex GridSpec layouts visually. Note This API is provisional and may be revised in the future based on early user feedback. Parameters
mosaiclist of list of {hashable or nested} or str
A visual layout of how you want your Axes to be arranged labeled as strings. For example x = [['A panel', 'A panel', 'edge'],
['C panel', '.', 'edge']]
produces 4 Axes: 'A panel' which is 1 row high and spans the first two columns 'edge' which is 2 rows high and is on the right edge 'C panel' which is 1 row high and 1 column wide in the bottom left a blank space 1 row and 1 column wide in the bottom center Any of the entries in the layout can be a list of lists of the same form to create nested layouts. If input is a str, then it can either be a multi-line string of the form '''
AAE
C.E
'''
where each character is a column and each line is a row. Or it can be a single-line string where rows are separated by ;: 'AB;CC'
The string notation allows only single character Axes labels and does not support nesting but is very terse.
sharex, shareybool, default: False
If True, the x-axis (sharex) or y-axis (sharey) will be shared among all subplots. In that case, tick label visibility and axis units behave as for subplots. If False, each subplot's x- or y-axis will be independent.
subplot_kwdict, optional
Dictionary with keywords passed to the Figure.add_subplot call used to create each subplot.
gridspec_kwdict, optional
Dictionary with keywords passed to the GridSpec constructor used to create the grid the subplots are placed on.
empty_sentinelobject, optional
Entry in the layout to mean "leave this space empty". Defaults to '.'. Note, if layout is a string, it is processed via inspect.cleandoc to remove leading white space, which may interfere with using white-space as the empty sentinel. Returns
dict[label, Axes]
A dictionary mapping the labels to the Axes objects. The order of the axes is left-to-right and top-to-bottom of their position in the total layout.
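The multi-line string form above can be used directly; this sketch reproduces that layout and labels two of the returned Axes:

```python
import matplotlib.pyplot as plt

fig = plt.figure(constrained_layout=True)
axd = fig.subplot_mosaic(
    """
    AAE
    C.E
    """
)
axd["A"].set_title("A (1 row, 2 cols)")
axd["E"].set_title("E (2 rows)")
```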
subplots(nrows=1, ncols=1, *, sharex=False, sharey=False, squeeze=True, subplot_kw=None, gridspec_kw=None)[source]
Add a set of subplots to this figure. This utility wrapper makes it convenient to create common layouts of subplots in a single call. Parameters
nrows, ncolsint, default: 1
Number of rows/columns of the subplot grid.
sharex, shareybool or {'none', 'all', 'row', 'col'}, default: False
Controls sharing of x-axis (sharex) or y-axis (sharey): True or 'all': x- or y-axis will be shared among all subplots. False or 'none': each subplot x- or y-axis will be independent. 'row': each subplot row will share an x- or y-axis. 'col': each subplot column will share an x- or y-axis. When subplots have a shared x-axis along a column, only the x tick labels of the bottom subplot are created. Similarly, when subplots have a shared y-axis along a row, only the y tick labels of the first column subplot are created. To later turn other subplots' ticklabels on, use tick_params. When subplots have a shared axis that has units, calling Axis.set_units will update each axis with the new units.
squeezebool, default: True
If True, extra dimensions are squeezed out from the returned array of Axes: if only one subplot is constructed (nrows=ncols=1), the resulting single Axes object is returned as a scalar. for Nx1 or 1xM subplots, the returned object is a 1D numpy object array of Axes objects. for NxM, subplots with N>1 and M>1 are returned as a 2D array. If False, no squeezing at all is done: the returned Axes object is always a 2D array containing Axes instances, even if it ends up being 1x1.
subplot_kwdict, optional
Dict with keywords passed to the Figure.add_subplot call used to create each subplot.
gridspec_kwdict, optional
Dict with keywords passed to the GridSpec constructor used to create the grid the subplots are placed on. Returns
Axes or array of Axes
Either a single Axes object or an array of Axes objects if more than one subplot was created. The dimensions of the resulting array can be controlled with the squeeze keyword, see above. See also pyplot.subplots
Figure.add_subplot
pyplot.subplot
Examples # First create some toy data:
x = np.linspace(0, 2*np.pi, 400)
y = np.sin(x**2)
# Create a figure
fig = plt.figure()
# Create a subplot
ax = fig.subplots()
ax.plot(x, y)
ax.set_title('Simple plot')
# Create two subplots and unpack the output array immediately
ax1, ax2 = fig.subplots(1, 2, sharey=True)
ax1.plot(x, y)
ax1.set_title('Sharing Y axis')
ax2.scatter(x, y)
# Create four polar Axes and access them through the returned array
axes = fig.subplots(2, 2, subplot_kw=dict(projection='polar'))
axes[0, 0].plot(x, y)
axes[1, 1].scatter(x, y)
# Share a X axis with each column of subplots
fig.subplots(2, 2, sharex='col')
# Share a Y axis with each row of subplots
fig.subplots(2, 2, sharey='row')
# Share both X and Y axes with all subplots
fig.subplots(2, 2, sharex='all', sharey='all')
# Note that this is the same as
fig.subplots(2, 2, sharex=True, sharey=True)
subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)[source]
Adjust the subplot layout parameters. Unset parameters are left unmodified; initial values are given by rcParams["figure.subplot.[name]"]. Parameters
leftfloat, optional
The position of the left edge of the subplots, as a fraction of the figure width.
rightfloat, optional
The position of the right edge of the subplots, as a fraction of the figure width.
bottomfloat, optional
The position of the bottom edge of the subplots, as a fraction of the figure height.
topfloat, optional
The position of the top edge of the subplots, as a fraction of the figure height.
wspacefloat, optional
The width of the padding between subplots, as a fraction of the average Axes width.
hspacefloat, optional
The height of the padding between subplots, as a fraction of the average Axes height.
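For example (the fractions below are illustrative), pulling the subplot grid in from the figure edges and widening the gaps:

```python
import matplotlib.pyplot as plt

fig, axs = plt.subplots(2, 2)
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.1, top=0.9,
                    wspace=0.4, hspace=0.4)
```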
suptitle(t, **kwargs)[source]
Add a centered suptitle to the figure. Parameters
tstr
The suptitle text.
xfloat, default: 0.5
The x location of the text in figure coordinates.
yfloat, default: 0.98
The y location of the text in figure coordinates.
horizontalalignment, ha{'center', 'left', 'right'}, default: center
The horizontal alignment of the text relative to (x, y).
verticalalignment, va{'top', 'center', 'bottom', 'baseline'}, default: top
The vertical alignment of the text relative to (x, y).
fontsize, sizedefault: rcParams["figure.titlesize"] (default: 'large')
The font size of the text. See Text.set_size for possible values.
fontweight, weightdefault: rcParams["figure.titleweight"] (default: 'normal')
The font weight of the text. See Text.set_weight for possible values. Returns
text
The Text instance of the suptitle. Other Parameters
fontproperties : None or dict, optional
A dict of font properties. If fontproperties is given the default values for font size and weight are taken from the FontProperties defaults. rcParams["figure.titlesize"] (default: 'large') and rcParams["figure.titleweight"] (default: 'normal') are ignored in this case. **kwargs
Additional kwargs are matplotlib.text.Text properties.
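A minimal sketch of suptitle (the title string and styling kwargs are illustrative; the Agg backend is assumed so the example runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Centered figure-level title; x/y default to (0.5, 0.98)
t = fig.suptitle("Overview", fontsize="x-large", fontweight="bold")
print(t.get_text())
```

The returned Text instance can be restyled later via its setters (e.g. t.set_color).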
supxlabel(t, **kwargs)[source]
Add a centered supxlabel to the figure. Parameters
t : str
The supxlabel text.
x : float, default: 0.5
The x location of the text in figure coordinates.
y : float, default: 0.01
The y location of the text in figure coordinates.
horizontalalignment, ha : {'center', 'left', 'right'}, default: 'center'
The horizontal alignment of the text relative to (x, y).
verticalalignment, va : {'top', 'center', 'bottom', 'baseline'}, default: 'bottom'
The vertical alignment of the text relative to (x, y).
fontsize, size : default: rcParams["figure.titlesize"] (default: 'large')
The font size of the text. See Text.set_size for possible values.
fontweight, weight : default: rcParams["figure.titleweight"] (default: 'normal')
The font weight of the text. See Text.set_weight for possible values. Returns
text
The Text instance of the supxlabel. Other Parameters
fontproperties : None or dict, optional
A dict of font properties. If fontproperties is given the default values for font size and weight are taken from the FontProperties defaults. rcParams["figure.titlesize"] (default: 'large') and rcParams["figure.titleweight"] (default: 'normal') are ignored in this case. **kwargs
Additional kwargs are matplotlib.text.Text properties.
supylabel(t, **kwargs)[source]
Add a centered supylabel to the figure. Parameters
t : str
The supylabel text.
x : float, default: 0.02
The x location of the text in figure coordinates.
y : float, default: 0.5
The y location of the text in figure coordinates.
horizontalalignment, ha : {'center', 'left', 'right'}, default: 'left'
The horizontal alignment of the text relative to (x, y).
verticalalignment, va : {'top', 'center', 'bottom', 'baseline'}, default: 'center'
The vertical alignment of the text relative to (x, y).
fontsize, size : default: rcParams["figure.titlesize"] (default: 'large')
The font size of the text. See Text.set_size for possible values.
fontweight, weight : default: rcParams["figure.titleweight"] (default: 'normal')
The font weight of the text. See Text.set_weight for possible values. Returns
text
The Text instance of the supylabel. Other Parameters
fontproperties : None or dict, optional
A dict of font properties. If fontproperties is given the default values for font size and weight are taken from the FontProperties defaults. rcParams["figure.titlesize"] (default: 'large') and rcParams["figure.titleweight"] (default: 'normal') are ignored in this case. **kwargs
Additional kwargs are matplotlib.text.Text properties.
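A combined sketch of supxlabel and supylabel (assuming Matplotlib 3.4 or later, where these methods exist, and the headless Agg backend; the label strings are made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, sharex="col", sharey="row")
xl = fig.supxlabel("time (s)")   # centered along the bottom edge
yl = fig.supylabel("amplitude")  # centered along the left edge, rotated
print(xl.get_text(), yl.get_text())
```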
text(x, y, s, fontdict=None, **kwargs)[source]
Add text to figure. Parameters
x, y : float
The position to place the text. By default, this is in figure coordinates, floats in [0, 1]. The coordinate system can be changed using the transform keyword.
s : str
The text string.
fontdict : dict, optional
A dictionary to override the default text properties. If not given, the defaults are determined by rcParams["font.*"]. Properties passed as kwargs override the corresponding ones given in fontdict. Returns
Text
Other Parameters
**kwargsText properties
Other miscellaneous text parameters.
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
backgroundcolor color
bbox dict with properties for patches.FancyBboxPatch
clip_box unknown
clip_on unknown
clip_path unknown
color or c color
figure Figure
fontfamily or family {FONTNAME, 'serif', 'sans-serif', 'cursive', 'fantasy', 'monospace'}
fontproperties or font or font_properties font_manager.FontProperties or str or pathlib.Path
fontsize or size float or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'}
fontstretch or stretch {a numeric value in range 0-1000, 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'}
fontstyle or style {'normal', 'italic', 'oblique'}
fontvariant or variant {'normal', 'small-caps'}
fontweight or weight {a numeric value in range 0-1000, 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'}
gid str
horizontalalignment or ha {'center', 'right', 'left'}
in_layout bool
label object
linespacing float (multiple of font size)
math_fontfamily str
multialignment or ma {'left', 'right', 'center'}
parse_math bool
path_effects AbstractPathEffect
picker None or bool or float or callable
position (float, float)
rasterized bool
rotation float or {'vertical', 'horizontal'}
rotation_mode {None, 'default', 'anchor'}
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
text object
transform Transform
transform_rotates_text bool
url str
usetex bool or None
verticalalignment or va {'center', 'top', 'bottom', 'baseline', 'center_baseline'}
visible bool
wrap bool
x float
y float
zorder float See also Axes.text
pyplot.text
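Many of the Text properties in the table above can be passed directly as kwargs; a hedged sketch (the text, size, and rotation values are arbitrary, Agg backend assumed):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig = plt.figure()
# Place text at figure coordinates (fractions of the figure size)
txt = fig.text(0.5, 0.5, "draft", ha="center", va="center",
               fontsize=24, alpha=0.3, rotation=30)
print(txt.get_text())
```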
update(props)[source]
Update this artist's properties from the dict props. Parameters
propsdict
update_from(other)[source]
Copy properties from other to self.
zorder=0 | |
doc_23891 |
Convert an image to floating point format. This function is similar to img_as_float64, but will not convert lower-precision floating point arrays to float64. Parameters
image : ndarray
Input image.
force_copy : bool, optional
Force a copy of the data, irrespective of its current dtype. Returns
out : ndarray of float
Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. | |
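The range convention can be illustrated without skimage itself: unsigned integer input maps onto [0.0, 1.0] by dividing by the dtype's maximum, while float input passes through unmodified. The helper below is an illustrative sketch of that rule (not the library's implementation, and it handles only the uint8 and float cases):

```python
import numpy as np

def as_float_sketch(image):
    """Illustrative sketch of img_as_float's range convention (uint8/float only)."""
    if np.issubdtype(image.dtype, np.floating):
        return image  # float input: values are passed through unmodified
    if image.dtype == np.uint8:
        return image.astype(np.float64) / 255.0  # scale to [0.0, 1.0]
    raise TypeError("sketch handles only uint8 and float inputs")

img = np.array([[0, 128, 255]], dtype=np.uint8)
out = as_float_sketch(img)
print(out)  # values lie in [0.0, 1.0]
```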
doc_23892 | Read exactly n bytes. Raise an IncompleteReadError if EOF is reached before n can be read. Use the IncompleteReadError.partial attribute to get the partially read data. | |
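A minimal, self-contained sketch of the IncompleteReadError.partial behavior (feeding a StreamReader directly for demonstration rather than reading from a real connection):

```python
import asyncio

async def main():
    reader = asyncio.StreamReader()
    reader.feed_data(b"hello")  # only 5 bytes available
    reader.feed_eof()
    try:
        # Ask for more bytes than the stream holds before EOF
        await reader.readexactly(10)
    except asyncio.IncompleteReadError as exc:
        return exc.partial  # the bytes read before EOF was hit

partial = asyncio.run(main())
print(partial)
```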
doc_23893 |
Returns a copy of the calling offset object with n=1 and all other attributes equal. | |
doc_23894 |
Parameters
axes : matplotlib.axes.Axes
The Axes to which the created Axis belongs.
pickradius : float
The acceptance radius for containment tests. See also Axis.contains.
doc_23895 |
Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate. | |
doc_23896 |
Compute Lasso path with coordinate descent The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
eps : float, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphas : int, default=100
Number of alphas along the regularization path.
alphas : ndarray, default=None
List of alphas where to compute the models. If None, alphas are set automatically.
precompute : 'auto', bool or array-like of shape (n_features, n_features), default='auto'
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument.
Xy : array-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_X : bool, default=True
If True, X will be copied; else, it may be overwritten.
coef_init : ndarray of shape (n_features,), default=None
The initial values of the coefficients.
verbose : bool or int, default=False
Amount of verbosity.
return_n_iter : bool, default=False
Whether to return the number of iterations or not.
positive : bool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1.)
**params : kwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphas : ndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefs : ndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gaps : ndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iters : list of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also
lars_path
Lasso
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication, the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases, the Lars solver may be significantly faster at computing this path. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path. Examples Comparing lasso_path and lars_path with interpolation: >>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]] | |
doc_23897 | See Migration guide for more details. tf.compat.v1.linalg.eigh, tf.compat.v1.self_adjoint_eig
tf.linalg.eigh(
tensor, name=None
)
Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in tensor such that tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i], for i=0...N-1.
Args
tensor Tensor of shape [..., N, N]. Only the lower triangular part of each inner matrix is referenced.
name string, optional name of the operation.
Returns
e Eigenvalues. Shape is [..., N]. Sorted in non-decreasing order.
v Eigenvectors. Shape is [..., N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in tensor.
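The same decomposition contract can be checked with NumPy's analogous np.linalg.eigh (used here as a stand-in, since TensorFlow may not be installed): for a symmetric matrix A, each column v[:, i] satisfies A @ v[:, i] == e[i] * v[:, i], with eigenvalues in non-decreasing order:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric; eigenvalues are 1 and 3
e, v = np.linalg.eigh(A)    # eigenvalues returned in non-decreasing order
print(e)
# Verify the eigen-equation column by column
for i in range(2):
    assert np.allclose(A @ v[:, i], e[i] * v[:, i])
```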
doc_23898 | This is the factory used by EmailPolicy by default. HeaderRegistry builds the class used to create a header instance dynamically, using base_class and a specialized class retrieved from a registry that it holds. When a given header name does not appear in the registry, the class specified by default_class is used as the specialized class. When use_default_map is True (the default), the standard mapping of header names to classes is copied in to the registry during initialization. base_class is always the last class in the generated class’s __bases__ list. The default mappings are: subject
UniqueUnstructuredHeader date
UniqueDateHeader resent-date
DateHeader orig-date
UniqueDateHeader sender
UniqueSingleAddressHeader resent-sender
SingleAddressHeader to
UniqueAddressHeader resent-to
AddressHeader cc
UniqueAddressHeader resent-cc
AddressHeader bcc
UniqueAddressHeader resent-bcc
AddressHeader from
UniqueAddressHeader resent-from
AddressHeader reply-to
UniqueAddressHeader mime-version
MIMEVersionHeader content-type
ContentTypeHeader content-disposition
ContentDispositionHeader content-transfer-encoding
ContentTransferEncodingHeader message-id
MessageIDHeader HeaderRegistry has the following methods:
map_to_type(self, name, cls)
name is the name of the header to be mapped. It will be converted to lower case in the registry. cls is the specialized class to be used, along with base_class, to create the class used to instantiate headers that match name.
__getitem__(name)
Construct and return a class to handle creating a name header.
__call__(name, value)
Retrieves the specialized header associated with name from the registry (using default_class if name does not appear in the registry) and composes it with base_class to produce a class, calls the constructed class’s constructor, passing it the same argument list, and finally returns the class instance created thereby. | |
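A short stdlib example of the default registry's __call__ path (the header name and address are made up for illustration):

```python
from email.headerregistry import HeaderRegistry

registry = HeaderRegistry()
# Calling the registry builds a header instance: "To" maps to
# UniqueAddressHeader in the default mapping, composed with base_class.
h = registry("To", "Ann <ann@example.com>")
# The specialized class parses the value into structured addresses
print(h.addresses[0].display_name, h.addresses[0].addr_spec)
```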
doc_23899 |
Return the visibility. |