doc_2700 |
Set the artist offset transform. Parameters
transOffsetTransform
doc_2701 | Signals the division of a non-infinite number by zero. Can occur with division, modulo division, or when raising a number to a negative power. If this signal is not trapped, returns Infinity or -Infinity with the sign determined by the inputs to the calculation.
doc_2702 |
Alias for get_linestyle.
doc_2703 | The application namespace for the URL pattern that matches the URL.
doc_2704 | gzip.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None)
Open a gzip-compressed file in binary or text mode, returning a file object. The filename argument can be an actual filename (a str or bytes object), or an existing file object to read from or write to.

The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x' or 'xb' for binary mode, or 'rt', 'at', 'wt', or 'xt' for text mode. The default is 'rb'. The compresslevel argument is an integer from 0 to 9, as for the GzipFile constructor.

For binary mode, this function is equivalent to the GzipFile constructor: GzipFile(filename, mode, compresslevel). In this case, the encoding, errors and newline arguments must not be provided. For text mode, a GzipFile object is created, and wrapped in an io.TextIOWrapper instance with the specified encoding, error handling behavior, and line ending(s).

Changed in version 3.3: Added support for filename being a file object, support for text mode, and the encoding, errors and newline arguments.
Changed in version 3.4: Added support for the 'x', 'xb' and 'xt' modes.
Changed in version 3.6: Accepts a path-like object.
exception gzip.BadGzipFile
An exception raised for invalid gzip files. It inherits OSError. EOFError and zlib.error can also be raised for invalid gzip files. New in version 3.8.
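A short sketch of catching this exception; the input bytes here are arbitrary non-gzip data chosen purely for illustration:

```python
import gzip

# Arbitrary bytes with no gzip header; decompress() should reject them.
bad = b"not gzip data"
try:
    gzip.decompress(bad)
except gzip.BadGzipFile as exc:  # BadGzipFile subclasses OSError
    print("rejected:", exc)
```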
class gzip.GzipFile(filename=None, mode=None, compresslevel=9, fileobj=None, mtime=None)
Constructor for the GzipFile class, which simulates most of the methods of a file object, with the exception of the truncate() method. At least one of fileobj and filename must be given a non-trivial value.

The new class instance is based on fileobj, which can be a regular file, an io.BytesIO object, or any other object which simulates a file. It defaults to None, in which case filename is opened to provide a file object. When fileobj is not None, the filename argument is only used to be included in the gzip file header, which may include the original filename of the uncompressed file. It defaults to the filename of fileobj, if discernible; otherwise, it defaults to the empty string, and in this case the original filename is not included in the header.

The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x', or 'xb', depending on whether the file will be read or written. The default is the mode of fileobj if discernible; otherwise, the default is 'rb'. In future Python releases the mode of fileobj will not be used. It is better to always specify mode for writing. Note that the file is always opened in binary mode. To open a compressed file in text mode, use open() (or wrap your GzipFile with an io.TextIOWrapper).

The compresslevel argument is an integer from 0 to 9 controlling the level of compression; 1 is fastest and produces the least compression, and 9 is slowest and produces the most compression. 0 is no compression. The default is 9.

The mtime argument is an optional numeric timestamp to be written to the last modification time field in the stream when compressing. It should only be provided in compression mode. If omitted or None, the current time is used. See the mtime attribute for more details.

Calling a GzipFile object’s close() method does not close fileobj, since you might wish to append more material after the compressed data.
This also allows you to pass an io.BytesIO object opened for writing as fileobj, and retrieve the resulting memory buffer using the io.BytesIO object’s getvalue() method. GzipFile supports the io.BufferedIOBase interface, including iteration and the with statement. Only the truncate() method isn’t implemented. GzipFile also provides the following method and attribute:
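The io.BytesIO pattern described above can be sketched as follows (mode is passed explicitly, as recommended for writing):

```python
import gzip
import io

buf = io.BytesIO()
# close() on the GzipFile does not close buf, so getvalue() still works after.
with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
    gz.write(b"hello world")
compressed = buf.getvalue()  # the raw gzip stream held in memory
print(gzip.decompress(compressed))  # b'hello world'
```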
peek(n)
Read n uncompressed bytes without advancing the file position. At most one single read on the compressed stream is done to satisfy the call. The number of bytes returned may be more or less than requested. Note While calling peek() does not change the file position of the GzipFile, it may change the position of the underlying file object (e.g. if the GzipFile was constructed with the fileobj parameter). New in version 3.2.
mtime
When decompressing, the value of the last modification time field in the most recently read header may be read from this attribute, as an integer. The initial value before reading any headers is None. All gzip compressed streams are required to contain this timestamp field. Some programs, such as gunzip, make use of the timestamp. The format is the same as the return value of time.time() and the st_mtime attribute of the object returned by os.stat().
Changed in version 3.1: Support for the with statement was added, along with the mtime constructor argument and mtime attribute. Changed in version 3.2: Support for zero-padded and unseekable files was added. Changed in version 3.3: The io.BufferedIOBase.read1() method is now implemented. Changed in version 3.4: Added support for the 'x' and 'xb' modes. Changed in version 3.5: Added support for writing arbitrary bytes-like objects. The read() method now accepts an argument of None. Changed in version 3.6: Accepts a path-like object. Deprecated since version 3.9: Opening GzipFile for writing without specifying the mode argument is deprecated.
gzip.compress(data, compresslevel=9, *, mtime=None)
Compress the data, returning a bytes object containing the compressed data. compresslevel and mtime have the same meaning as in the GzipFile constructor above. New in version 3.2. Changed in version 3.8: Added the mtime parameter for reproducible output.
gzip.decompress(data)
Decompress the data, returning a bytes object containing the uncompressed data. New in version 3.2.
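A minimal round-trip sketch of compress() and decompress(); the mtime=0 argument (available since Python 3.8) is used here only to make the output reproducible:

```python
import gzip

data = b"gzip " * 200  # repetitive payload so compression actually shrinks it
packed = gzip.compress(data, compresslevel=9, mtime=0)
print(len(data), len(packed))           # compressed size is much smaller
print(gzip.decompress(packed) == data)  # True
```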
Examples of usage

Example of how to read a compressed file:

import gzip
with gzip.open('/home/joe/file.txt.gz', 'rb') as f:
    file_content = f.read()

Example of how to create a compressed GZIP file:

import gzip
content = b"Lots of content here"
with gzip.open('/home/joe/file.txt.gz', 'wb') as f:
    f.write(content)

Example of how to GZIP compress an existing file:

import gzip
import shutil
with open('/home/joe/file.txt', 'rb') as f_in:
    with gzip.open('/home/joe/file.txt.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)

Example of how to GZIP compress a binary string:

import gzip
s_in = b"Lots of content here"
s_out = gzip.compress(s_in)
See also
Module zlib
The basic data compression module needed to support the gzip file format.

Command Line Interface
The gzip module provides a simple command line interface to compress or decompress files. Once executed, the gzip module keeps the input file(s). Changed in version 3.8: Added a new command line interface with usage. By default, the CLI uses a compression level of 6. Command line options
file
If file is not specified, read from sys.stdin.
--fast
Indicates the fastest compression method (less compression).
--best
Indicates the slowest compression method (best compression).
-d, --decompress
Decompress the given file.
-h, --help
Show the help message.
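A sketch of the CLI usage described above; the demo.txt filename is an arbitrary example, and python3 is assumed to be on PATH:

```shell
printf 'hello gzip\n' > demo.txt
# Compress with the slowest/best method; the input file is kept and
# demo.txt.gz is written alongside it.
python3 -m gzip --best demo.txt
ls demo.txt demo.txt.gz
# Decompress back (reads demo.txt.gz, writes demo.txt).
python3 -m gzip -d demo.txt.gz
```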
doc_2705 | The initial character set specified. Common aliases are converted to their official email names (e.g. latin_1 is converted to iso-8859-1). Defaults to 7-bit us-ascii.
doc_2706 | The prefix that the application is mounted under, without a trailing slash. path comes after this.
doc_2707 | skimage.measure.approximate_polygon(coords, …) Approximate a polygonal chain with the specified tolerance.
skimage.measure.block_reduce(image, block_size) Downsample image by applying function func to local blocks.
skimage.measure.euler_number(image[, …]) Calculate the Euler characteristic in binary image.
skimage.measure.find_contours(image[, …]) Find iso-valued contours in a 2D array for a given level value.
skimage.measure.grid_points_in_poly(shape, verts) Test whether points on a specified grid are inside a polygon.
skimage.measure.inertia_tensor(image[, mu]) Compute the inertia tensor of the input image.
skimage.measure.inertia_tensor_eigvals(image) Compute the eigenvalues of the inertia tensor of the image.
skimage.measure.label(input[, background, …]) Label connected regions of an integer array.
skimage.measure.marching_cubes(volume[, …]) Marching cubes algorithm to find surfaces in 3d volumetric data.
skimage.measure.marching_cubes_classic(volume) Classic marching cubes algorithm to find surfaces in 3d volumetric data.
skimage.measure.marching_cubes_lewiner(volume) Lewiner marching cubes algorithm to find surfaces in 3d volumetric data.
skimage.measure.mesh_surface_area(verts, faces) Compute surface area, given vertices & triangular faces
skimage.measure.moments(image[, order]) Calculate all raw image moments up to a certain order.
skimage.measure.moments_central(image[, …]) Calculate all central image moments up to a certain order.
skimage.measure.moments_coords(coords[, order]) Calculate all raw image moments up to a certain order.
skimage.measure.moments_coords_central(coords) Calculate all central image moments up to a certain order.
skimage.measure.moments_hu(nu) Calculate Hu’s set of image moments (2D-only).
skimage.measure.moments_normalized(mu[, order]) Calculate all normalized central image moments up to a certain order.
skimage.measure.perimeter(image[, neighbourhood]) Calculate total perimeter of all objects in binary image.
skimage.measure.perimeter_crofton(image[, …]) Calculate total Crofton perimeter of all objects in binary image.
skimage.measure.points_in_poly(points, verts) Test whether points lie inside a polygon.
skimage.measure.profile_line(image, src, dst) Return the intensity profile of an image measured along a scan line.
skimage.measure.ransac(data, model_class, …) Fit a model to data with the RANSAC (random sample consensus) algorithm.
skimage.measure.regionprops(label_image[, …]) Measure properties of labeled image regions.
skimage.measure.regionprops_table(label_image) Compute image properties and return them as a pandas-compatible table.
skimage.measure.shannon_entropy(image[, base]) Calculate the Shannon entropy of an image.
skimage.measure.subdivide_polygon(coords[, …]) Subdivision of polygonal curves using B-Splines.
skimage.measure.CircleModel() Total least squares estimator for 2D circles.
skimage.measure.EllipseModel() Total least squares estimator for 2D ellipses.
skimage.measure.LineModelND() Total least squares estimator for N-dimensional lines.

approximate_polygon
skimage.measure.approximate_polygon(coords, tolerance) [source]
Approximate a polygonal chain with the specified tolerance. It is based on the Douglas-Peucker algorithm. Note that the approximated polygon is always within the convex hull of the original polygon. Parameters
coords(N, 2) array
Coordinate array.
tolerancefloat
Maximum distance from original points of polygon to approximated polygonal chain. If tolerance is 0, the original coordinate array is returned. Returns
coords(M, 2) array
Approximated polygonal chain where M <= N. References
1
https://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
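The Douglas-Peucker idea behind approximate_polygon can be sketched in a few lines of plain Python; this is a simplified illustration, not skimage's implementation (which also handles closed chains and degenerate chords):

```python
import math

def rdp(points, tolerance):
    """Recursively drop points closer than `tolerance` to the end-point chord."""
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the line through the endpoints.
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        return num / den if den else math.hypot(px - x1, py - y1)

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest point and recurse on both halves.
    return rdp(points[:idx + 1], tolerance)[:-1] + rdp(points[idx:], tolerance)

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)]
print(rdp(line, 0.1))  # [(0, 0), (4, 0)] -- collapses to the two endpoints
```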
block_reduce
skimage.measure.block_reduce(image, block_size, func=<function sum>, cval=0, func_kwargs=None) [source]
Downsample image by applying function func to local blocks. This function is useful for max and mean pooling, for example. Parameters
imagendarray
N-dimensional input image.
block_sizearray_like
Array containing down-sampling integer factor along each axis.
funccallable
Function object which is used to calculate the return value for each local block. This function must implement an axis parameter. Primary functions are numpy.sum, numpy.min, numpy.max, numpy.mean and numpy.median. See also func_kwargs.
cvalfloat
Constant padding value if image is not perfectly divisible by the block size.
func_kwargsdict
Keyword arguments passed to func. Notably useful for passing dtype argument to np.mean. Takes a dictionary of inputs, e.g. func_kwargs={'dtype': np.float16}. Returns
imagendarray
Down-sampled image with same number of dimensions as input image. Examples >>> from skimage.measure import block_reduce
>>> image = np.arange(3*3*4).reshape(3, 3, 4)
>>> image
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]],
[[24, 25, 26, 27],
[28, 29, 30, 31],
[32, 33, 34, 35]]])
>>> block_reduce(image, block_size=(3, 3, 1), func=np.mean)
array([[[16., 17., 18., 19.]]])
>>> image_max1 = block_reduce(image, block_size=(1, 3, 4), func=np.max)
>>> image_max1
array([[[11]],
[[23]],
[[35]]])
>>> image_max2 = block_reduce(image, block_size=(3, 1, 4), func=np.max)
>>> image_max2
array([[[27],
[31],
[35]]])
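When block_size divides the image shape exactly (so the cval padding never applies), the same mean pooling can be sketched with a plain numpy reshape; this is an illustration, not skimage's implementation:

```python
import numpy as np

def block_mean(image, block_size):
    """Mean-pool `image`, assuming each axis is exactly divisible by block_size."""
    shape = []
    for dim, b in zip(image.shape, block_size):
        assert dim % b == 0, "sketch assumes exact divisibility (no cval padding)"
        shape += [dim // b, b]
    # Split each axis into (n_blocks, block_len), then average over block axes.
    blocked = image.reshape(shape)
    return blocked.mean(axis=tuple(range(1, blocked.ndim, 2)))

image = np.arange(3 * 3 * 4).reshape(3, 3, 4)
print(block_mean(image, (3, 3, 1)))  # matches the block_reduce example above
```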
euler_number
skimage.measure.euler_number(image, connectivity=None) [source]
Calculate the Euler characteristic in binary image. For 2D objects, the Euler number is the number of objects minus the number of holes. For 3D objects, the Euler number is obtained as the number of objects plus the number of holes, minus the number of tunnels, or loops. Parameters
image: (N, M) ndarray or (N, M, D) ndarray.
2D or 3D images. If image is not binary, all values strictly greater than zero are considered as the object.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. 4 or 8 neighborhoods are defined for 2D images (connectivity 1 and 2, respectively). 6 or 26 neighborhoods are defined for 3D images, (connectivity 1 and 3, respectively). Connectivity 2 is not defined. Returns
euler_numberint
Euler characteristic of the set of all objects in the image.

Notes
The Euler characteristic is an integer number that describes the topology of the set of all objects in the input image. If an object is 4-connected, then its background is 8-connected, and conversely. The computation of the Euler characteristic is based on an integral geometry formula in discretized space. In practice, a neighbourhood configuration is constructed, and a LUT is applied for each configuration. The coefficients used are the ones of Ohser et al.

It can be useful to compute the Euler characteristic for several connectivities. A large relative difference between results for different connectivities suggests that the image resolution (with respect to the size of objects and holes) is too low. References
1
S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838
2
Ohser J., Nagel W., Schladitz K. (2002) The Euler Number of Discretized Sets - On the Choice of Adjacency in Homogeneous Lattices. In: Mecke K., Stoyan D. (eds) Morphology of Condensed Matter. Lecture Notes in Physics, vol 600. Springer, Berlin, Heidelberg. Examples >>> import numpy as np
>>> SAMPLE = np.zeros((100,100,100));
>>> SAMPLE[40:60, 40:60, 40:60]=1
>>> euler_number(SAMPLE)
1...
>>> SAMPLE[45:55,45:55,45:55] = 0;
>>> euler_number(SAMPLE)
2...
>>> SAMPLE = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
... [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0],
... [0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1],
... [0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])
>>> euler_number(SAMPLE)
0
>>> euler_number(SAMPLE, connectivity=1)
2
Examples using skimage.measure.euler_number
Euler number

find_contours
skimage.measure.find_contours(image, level=None, fully_connected='low', positive_orientation='low', *, mask=None) [source]
Find iso-valued contours in a 2D array for a given level value. Uses the “marching squares” method to compute the iso-valued contours of the input 2D array for a particular level value. Array values are linearly interpolated to provide better precision for the output contours. Parameters
image2D ndarray of double
Input image in which to find contours.
levelfloat, optional
Value along which to find contours in the array. By default, the level is set to (max(image) + min(image)) / 2 Changed in version 0.18: This parameter is now optional.
fully_connectedstr, {‘low’, ‘high’}
Indicates whether array elements below the given level value are to be considered fully-connected (and hence elements above the value will only be face connected), or vice-versa. (See notes below for details.)
positive_orientationstr, {‘low’, ‘high’}
Indicates whether the output contours will produce positively-oriented polygons around islands of low- or high-valued elements. If ‘low’ then contours will wind counter-clockwise around elements below the iso-value. Alternately, this means that low-valued elements are always on the left of the contour. (See below for details.)
mask2D ndarray of bool, or None
A boolean mask, True where we want to draw contours. Note that NaN values are always excluded from the considered region (mask is set to False wherever array is NaN). Returns
contourslist of (n,2)-ndarrays
Each contour is an ndarray of shape (n, 2), consisting of n (row, column) coordinates along the contour. See also
skimage.measure.marching_cubes
Notes
The marching squares algorithm is a special case of the marching cubes algorithm [1]. A simple explanation is available here: http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html

There is a single ambiguous case in the marching squares algorithm: when a given 2 x 2-element square has two high-valued and two low-valued elements, each pair diagonally adjacent. (Where high- and low-valued is with respect to the contour value sought.) In this case, either the high-valued elements can be ‘connected together’ via a thin isthmus that separates the low-valued elements, or vice-versa. When elements are connected together across a diagonal, they are considered ‘fully connected’ (also known as ‘face+vertex-connected’ or ‘8-connected’). Only high-valued or low-valued elements can be fully-connected; the other set will be considered as ‘face-connected’ or ‘4-connected’. By default, low-valued elements are considered fully-connected; this can be altered with the ‘fully_connected’ parameter.

Output contours are not guaranteed to be closed: contours which intersect the array edge or a masked-off region (either where mask is False or where array is NaN) will be left open. All other contours will be closed. (The closed-ness of a contour can be tested by checking whether the beginning point is the same as the end point.)

Contours are oriented. By default, array values lower than the contour value are to the left of the contour and values greater than the contour value are to the right. This means that contours will wind counter-clockwise (i.e. in ‘positive orientation’) around islands of low-valued pixels. This behavior can be altered with the ‘positive_orientation’ parameter.

The order of the contours in the output list is determined by the position of the smallest x,y (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon.
Warning Array coordinates/values are assumed to refer to the center of the array element. Take a simple example input: [0, 1]. The interpolated position of 0.5 in this array is midway between the 0-element (at x=0) and the 1-element (at x=1), and thus would fall at x=0.5. This means that to find reasonable contours, it is best to find contours midway between the expected “light” and “dark” values. In particular, given a binarized array, do not choose to find contours at the low or high value of the array. This will often yield degenerate contours, especially around structures that are a single array element wide. Instead choose a middle value, as above. References
1
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422 Examples >>> a = np.zeros((3, 3))
>>> a[0, 0] = 1
>>> a
array([[1., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
>>> find_contours(a, 0.5)
[array([[0. , 0.5],
[0.5, 0. ]])]
Examples using skimage.measure.find_contours
Contour finding
Measure region properties

grid_points_in_poly
skimage.measure.grid_points_in_poly(shape, verts) [source]
Test whether points on a specified grid are inside a polygon. For each (r, c) coordinate on a grid, i.e. (0, 0), (0, 1) etc., test whether that point lies inside a polygon. Parameters
shapetuple (M, N)
Shape of the grid.
verts(V, 2) array
Specify the V vertices of the polygon, sorted either clockwise or anti-clockwise. The first point may (but does not need to be) duplicated. Returns
mask(M, N) ndarray of bool
True where the grid falls inside the polygon. See also
points_in_poly
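The underlying test can be sketched with the classic ray-casting rule in plain Python; boundary points are not handled consistently in this simplified version, and it is an illustration rather than skimage's implementation:

```python
def point_in_poly(point, verts):
    """Return True if `point` lies inside the polygon given by `verts`."""
    x, y = point
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # Count crossings of a horizontal ray cast rightward from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_poly((2, 2), square))  # True
print(point_in_poly((5, 2), square))  # False
```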
inertia_tensor
skimage.measure.inertia_tensor(image, mu=None) [source]
Compute the inertia tensor of the input image. Parameters
imagearray
The input image.
muarray, optional
The pre-computed central moments of image. The inertia tensor computation requires the central moments of the image. If an application requires both the central moments and the inertia tensor (for example, skimage.measure.regionprops), then it is more efficient to pre-compute them and pass them to the inertia tensor call. Returns
Tarray, shape (image.ndim, image.ndim)
The inertia tensor of the input image. \(T_{i, j}\) contains the covariance of image intensity along axes \(i\) and \(j\). References
1
https://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_tensor
2
Bernd Jähne. Spatio-Temporal Image Processing: Theory and Scientific Applications. (Chapter 8: Tensor Methods) Springer, 1993.
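For the 2D case, the tensor can be sketched directly from the central moments with numpy; the sign and normalization conventions below follow the description above, but this is an illustrative sketch rather than skimage's implementation:

```python
import numpy as np

def inertia_tensor_2d(image):
    """Sketch: 2D inertia tensor from the central moments mu_pq of `image`."""
    r, c = np.indices(image.shape).astype(float)
    m00 = image.sum()
    rbar = (r * image).sum() / m00
    cbar = (c * image).sum() / m00
    mu20 = ((r - rbar) ** 2 * image).sum()
    mu02 = ((c - cbar) ** 2 * image).sum()
    mu11 = ((r - rbar) * (c - cbar) * image).sum()
    # Diagonal entry i holds the spread along the *other* axis (the moment of
    # inertia about axis i); off-diagonal entries carry a minus sign.
    return np.array([[mu02, -mu11], [-mu11, mu20]]) / m00

# A bright horizontal line: all spread is along the column axis,
# so only the inertia about the row axis is non-zero.
image = np.zeros((5, 5))
image[2, :] = 1
print(inertia_tensor_2d(image))
```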
inertia_tensor_eigvals
skimage.measure.inertia_tensor_eigvals(image, mu=None, T=None) [source]
Compute the eigenvalues of the inertia tensor of the image. The inertia tensor measures covariance of the image intensity along the image axes. (See inertia_tensor.) The relative magnitude of the eigenvalues of the tensor is thus a measure of the elongation of a (bright) object in the image. Parameters
imagearray
The input image.
muarray, optional
The pre-computed central moments of image.
Tarray, shape (image.ndim, image.ndim)
The pre-computed inertia tensor. If T is given, mu and image are ignored. Returns
eigvalslist of float, length image.ndim
The eigenvalues of the inertia tensor of image, in descending order. Notes Computing the eigenvalues requires the inertia tensor of the input image. This is much faster if the central moments (mu) are provided, or, alternatively, one can provide the inertia tensor (T) directly.
label
skimage.measure.label(input, background=None, return_num=False, connectivity=None) [source]
Label connected regions of an integer array. Two pixels are connected when they are neighbors and have the same value. In 2D, they can be neighbors either in a 1- or 2-connected sense. The value refers to the maximum number of orthogonal hops to consider a pixel/voxel a neighbor: 1-connectivity 2-connectivity diagonal connection close-up
[ ] [ ] [ ] [ ] [ ]
| \ | / | <- hop 2
[ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
| / | \ hop 1
[ ] [ ] [ ] [ ]
Parameters
inputndarray of dtype int
Image to label.
backgroundint, optional
Consider all pixels with this value as background pixels, and label them as 0. By default, 0-valued pixels are considered as background pixels.
return_numbool, optional
Whether to return the number of assigned labels.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. Returns
labelsndarray of dtype int
Labeled array, where all connected regions are assigned the same integer value.
numint, optional
Number of labels, which equals the maximum label index and is only returned if return_num is True. See also
regionprops
regionprops_table
References
1
Christophe Fiorio and Jens Gustedt, “Two linear time Union-Find strategies for image processing”, Theoretical Computer Science 154 (1996), pp. 165-181.
2
Kensheng Wu, Ekow Otoo and Arie Shoshani, “Optimizing connected component labeling algorithms”, Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864 Examples >>> import numpy as np
>>> x = np.eye(3).astype(int)
>>> print(x)
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
[0 2 0]
[0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
[2 1 2]
[2 2 1]]
>>> x = np.array([[1, 0, 0],
... [1, 1, 5],
... [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
[1 1 2]
[0 0 0]]
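The labeling semantics above (“neighbors with the same value”) can be sketched with a simple BFS flood fill; this illustrates connectivity=1 with the default background of 0 only, whereas skimage's union-find implementation is far more efficient:

```python
from collections import deque

def label_sketch(grid):
    """Label 4-connected regions of equal non-zero values; 0 is background."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 or labels[r][c]:
                continue
            current += 1
            labels[r][c] = current
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] == grid[y][x]
                            and labels[ny][nx] == 0):
                        labels[ny][nx] = current
                        queue.append((ny, nx))
    return labels

x = [[1, 0, 0],
     [1, 1, 5],
     [0, 0, 0]]
print(label_sketch(x))  # [[1, 0, 0], [1, 1, 2], [0, 0, 0]]
```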
Examples using skimage.measure.label
Measure region properties
Euler number
Segment human cells (in mitosis)

marching_cubes
skimage.measure.marching_cubes(volume, level=None, *, spacing=(1.0, 1.0, 1.0), gradient_direction='descent', step_size=1, allow_degenerate=True, method='lewiner', mask=None) [source]
Marching cubes algorithm to find surfaces in 3d volumetric data. In contrast to the approach of Lorensen et al. [2], the Lewiner et al. algorithm is faster, resolves ambiguities, and guarantees topologically correct results. Therefore, this algorithm is generally a better choice. Parameters
volume(M, N, P) array
Input data volume to find isosurfaces. Will internally be converted to float32 if necessary.
levelfloat, optional
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats, optional
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring, optional
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite, considering the left-hand rule. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object
step_sizeint, optional
Step size in voxels. Default 1. Larger steps yield faster but coarser results. The result will always be topologically correct though.
allow_degeneratebool, optional
Whether to allow degenerate (i.e. zero-area) triangles in the end-result. Default True. If False, degenerate triangles are removed, at the cost of making the algorithm slower.
methodstr, optional
One of ‘lewiner’, ‘lorensen’ or ‘_lorensen’. Specifies which of the Lewiner et al. or Lorensen et al. methods will be used. The ‘_lorensen’ flag corresponds to an old implementation that will be deprecated in version 0.19.
mask(M, N, P) array, optional
Boolean array. The marching cubes algorithm will be computed only on True elements. This will save computational time when interfaces are located within a certain region of the (M, N, P) volume (e.g. the top half of the cube), and also allows computing finite surfaces, i.e. open surfaces that do not end at the border of the cube. Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices.
normals(V, 3) array
The normal direction at each vertex, as calculated from the data.
values(V, ) array
Gives a measure for the maximum value of the data in the local region near each vertex. This can be used by visualization tools to apply a colormap to the mesh. See also
skimage.measure.mesh_surface_area
skimage.measure.find_contours
Notes
The algorithm [1] is an improved version of Chernyaev’s Marching Cubes 33 algorithm. It is an efficient algorithm that relies on heavy use of lookup tables to handle the many different cases, keeping the algorithm relatively easy. This implementation is written in Cython, ported from Lewiner’s C++ implementation.

To quantify the area of an isosurface generated by this algorithm, pass verts and faces to skimage.measure.mesh_surface_area.

Regarding visualization of algorithm output, to contour a volume named myvolume about the level 0.0, using the mayavi package:
>> from mayavi import mlab
>> verts, faces, _, _ = marching_cubes(myvolume, 0.0)
>> mlab.triangular_mesh([vert[0] for vert in verts],
[vert[1] for vert in verts],
[vert[2] for vert in verts],
faces)
>> mlab.show()
Similarly using the visvis package:
>> import visvis as vv
>> verts, faces, normals, values = marching_cubes(myvolume, 0.0)
>> vv.mesh(np.fliplr(verts), faces, normals, values)
>> vv.use().Run()
To reduce the number of triangles in the mesh for better performance, see this example using the mayavi package. References
1
Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares. Efficient implementation of Marching Cubes’ cases with topological guarantees. Journal of Graphics Tools 8(2) pp. 1-15 (december 2003). DOI:10.1080/10867651.2003.10487582
2
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422
marching_cubes_classic
skimage.measure.marching_cubes_classic(volume, level=None, spacing=(1.0, 1.0, 1.0), gradient_direction='descent') [source]
Classic marching cubes algorithm to find surfaces in 3d volumetric data. Note that the marching_cubes() algorithm is recommended over this algorithm, because it’s faster and produces better results. Parameters
volume(M, N, P) array of doubles
Input data volume to find isosurfaces. Will be cast to np.float64.
levelfloat
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices. See also
skimage.measure.marching_cubes
skimage.measure.mesh_surface_area
Notes The marching cubes algorithm is implemented as described in [1]. A simple explanation is available here: http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
There are several known ambiguous cases in the marching cubes algorithm. Using point labeling as in [1], Figure 4, as shown:

     v8 ------ v7
    / |       / |        y
   /  |      /  |        ^  z
  v4 ------ v3  |        | /
  |  v5 ----|- v6        |/          (note: NOT right handed!)
  |  /      |  /          ----> x
  | /       | /
  v1 ------ v2
Most notably, if v4, v8, v2, and v6 are all >= level (or any generalization of this case) two parallel planes are generated by this algorithm, separating v4 and v8 from v2 and v6. An equally valid interpretation would be a single connected thin surface enclosing all four points. This is the best known ambiguity, though there are others.

This algorithm does not attempt to resolve such ambiguities; it is a naive implementation of marching cubes as in [1], but may be a good beginning for work with more recent techniques (Dual Marching Cubes, Extended Marching Cubes, Cubic Marching Squares, etc.).

Because of interactions between neighboring cubes, the isosurface(s) generated by this algorithm are NOT guaranteed to be closed, particularly for complicated contours. Furthermore, this algorithm does not guarantee a single contour will be returned. Indeed, ALL isosurfaces which cross level will be found, regardless of connectivity.

The output is a triangular mesh consisting of a set of unique vertices and connecting triangles. The order of these vertices and triangles in the output list is determined by the position of the smallest x,y,z (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon. The generated mesh guarantees coherent orientation as of version 0.12.

To quantify the area of an isosurface generated by this algorithm, pass outputs directly into skimage.measure.mesh_surface_area. References
1(1,2,3)
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4), July 1987, pp. 163-170. DOI:10.1145/37401.37422
marching_cubes_lewiner
skimage.measure.marching_cubes_lewiner(volume, level=None, spacing=(1.0, 1.0, 1.0), gradient_direction='descent', step_size=1, allow_degenerate=True, use_classic=False, mask=None) [source]
Lewiner marching cubes algorithm to find surfaces in 3d volumetric data. In contrast to marching_cubes_classic(), this algorithm is faster, resolves ambiguities, and guarantees topologically correct results. Therefore, this algorithm is generally a better choice, unless there is a specific need for the classic algorithm. Parameters
volume(M, N, P) array
Input data volume to find isosurfaces. Will internally be converted to float32 if necessary.
levelfloat
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite, considering the left-hand rule. The two options are:
* descent : Object was greater than exterior
* ascent : Exterior was greater than object
step_sizeint
Step size in voxels. Default 1. Larger steps yield faster but coarser results. The result will always be topologically correct though.
allow_degeneratebool
Whether to allow degenerate (i.e. zero-area) triangles in the end-result. Default True. If False, degenerate triangles are removed, at the cost of making the algorithm slower.
use_classicbool
If given and True, the classic marching cubes by Lorensen (1987) is used. This option is included for reference purposes. Note that this algorithm has ambiguities and is not guaranteed to produce a topologically correct result. The results obtained with this option will not generally match those of the marching_cubes_classic() function.
mask(M, N, P) array
Boolean array. The marching cube algorithm will be computed only on True elements. This will save computational time when interfaces are located within a certain region of the volume (e.g. the top half of the cube) and also allow finite surfaces to be computed, i.e. open surfaces that do not end at the border of the cube. Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices.
normals(V, 3) array
The normal direction at each vertex, as calculated from the data.
values(V, ) array
Gives a measure for the maximum value of the data in the local region near each vertex. This can be used by visualization tools to apply a colormap to the mesh. See also
skimage.measure.marching_cubes
skimage.measure.mesh_surface_area
Notes The algorithm [1] is an improved version of Chernyaev’s Marching Cubes 33 algorithm. It is an efficient algorithm that relies on heavy use of lookup tables to handle the many different cases, keeping the algorithm relatively easy. This implementation is written in Cython, ported from Lewiner’s C++ implementation. To quantify the area of an isosurface generated by this algorithm, pass verts and faces to skimage.measure.mesh_surface_area. Regarding visualization of algorithm output, to contour a volume named myvolume about the level 0.0, using the mayavi package: >>> from mayavi import mlab
>>> verts, faces, normals, values = marching_cubes_lewiner(myvolume, 0.0)
>>> mlab.triangular_mesh([vert[0] for vert in verts],
... [vert[1] for vert in verts],
... [vert[2] for vert in verts],
... faces)
>>> mlab.show()
Similarly using the visvis package: >>> import visvis as vv
>>> verts, faces, normals, values = marching_cubes_lewiner(myvolume, 0.0)
>>> vv.mesh(np.fliplr(verts), faces, normals, values)
>>> vv.use().Run()
References
1
Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares. Efficient implementation of Marching Cubes’ cases with topological guarantees. Journal of Graphics Tools 8(2) pp. 1-15 (december 2003). DOI:10.1080/10867651.2003.10487582
mesh_surface_area
skimage.measure.mesh_surface_area(verts, faces) [source]
Compute surface area, given vertices & triangular faces Parameters
verts(V, 3) array of floats
Array containing (x, y, z) coordinates for V unique mesh vertices.
faces(F, 3) array of ints
List of length-3 lists of integers, referencing vertex coordinates as provided in verts Returns
areafloat
Surface area of the mesh, in units of [coordinate units] ** 2. See also
skimage.measure.marching_cubes
skimage.measure.marching_cubes_classic
Notes The arguments expected by this function are the first two outputs from skimage.measure.marching_cubes. For unit correct output, ensure correct spacing was passed to skimage.measure.marching_cubes. This algorithm works properly only if the faces provided are all triangles.
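The area sum that mesh_surface_area computes can be illustrated in plain NumPy (a minimal sketch of the triangle-area formula, not the library implementation): each triangular face contributes half the norm of the cross product of two of its edge vectors.

```python
import numpy as np

def triangle_mesh_area(verts, faces):
    """Sum of triangle areas: 0.5 * |(B - A) x (C - A)| per face."""
    a = verts[faces[:, 0]]
    b = verts[faces[:, 1]]
    c = verts[faces[:, 2]]
    cross = np.cross(b - a, c - a)  # (F, 3): face normals with length 2 * area
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Unit square in the z=0 plane, split into two triangles: total area 1.0
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(triangle_mesh_area(verts, faces))  # 1.0
```

Because the formula only sees triangles, any non-triangular face passed in verts/faces would be silently mis-measured, which is why the Notes restrict the input to triangular meshes.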
moments
skimage.measure.moments(image, order=3) [source]
Calculate all raw image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
imagenD double or uint8 array
Rasterized shape as image.
orderint, optional
Maximum order of moments. Default is 3. Returns
m(order + 1, order + 1) array
Raw image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 14.5)
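The values in the example can be reproduced directly from the definition m_ij = sum over pixels of image[row, col] * row**i * col**j (a minimal NumPy sketch of the definition, not the accelerated skimage implementation):

```python
import numpy as np

def raw_moments(image, order=3):
    """Raw moments m[i, j] = sum(image * row**i * col**j)."""
    rows, cols = np.mgrid[:image.shape[0], :image.shape[1]]
    m = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            m[i, j] = (image * rows**i * cols**j).sum()
    return m

image = np.zeros((20, 20))
image[13:17, 13:17] = 1
m = raw_moments(image)
print(m[0, 0])                               # area of the square: 16.0
print(m[1, 0] / m[0, 0], m[0, 1] / m[0, 0])  # centroid: 14.5 14.5
```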
moments_central
skimage.measure.moments_central(image, center=None, order=3, **kwargs) [source]
Calculate all central image moments up to a certain order. The center coordinates (cr, cc) can be calculated from the raw moments as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that central moments are translation invariant but not scale and rotation invariant. Parameters
imagenD double or uint8 array
Rasterized shape as image.
centertuple of float, optional
Coordinates of the image centroid. This will be computed if it is not provided.
orderint, optional
The maximum order of moments computed. Returns
mu(order + 1, order + 1) array
Central image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> moments_central(image, centroid)
array([[16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[20., 0., 25., 0.],
[ 0., 0., 0., 0.]])
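Because central moments subtract the centroid before taking powers, translating the shape leaves them unchanged. The following pure-NumPy sketch (an illustration of the definition, not the skimage implementation) verifies this for the square from the example:

```python
import numpy as np

def central_moments(image, order=3):
    """Central moments mu[i, j] = sum(image * (row - cr)**i * (col - cc)**j)."""
    rows, cols = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    cr = (image * rows).sum() / m00  # centroid row
    cc = (image * cols).sum() / m00  # centroid column
    mu = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            mu[i, j] = (image * (rows - cr)**i * (cols - cc)**j).sum()
    return mu

a = np.zeros((20, 20)); a[13:17, 13:17] = 1  # 4x4 square at one position
b = np.zeros((20, 20)); b[4:8, 2:6] = 1      # the same square, translated
print(np.allclose(central_moments(a), central_moments(b)))  # True
```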
moments_coords
skimage.measure.moments_coords(coords, order=3) [source]
Calculate all raw image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
coords(N, D) double or uint8 array
Array of N points that describe an image of D dimensionality in Cartesian space.
orderint, optional
Maximum order of moments. Default is 3. Returns
M(order + 1, order + 1, …) array
Raw image moments. (D dimensions) References
1
Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001. Examples >>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)], dtype=np.double)
>>> M = moments_coords(coords)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 15.5)
moments_coords_central
skimage.measure.moments_coords_central(coords, center=None, order=3) [source]
Calculate all central image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
coords(N, D) double or uint8 array
Array of N points that describe an image of D dimensionality in Cartesian space. A tuple of coordinates as returned by np.nonzero is also accepted as input.
centertuple of float, optional
Coordinates of the image centroid. This will be computed if it is not provided.
orderint, optional
Maximum order of moments. Default is 3. Returns
Mc(order + 1, order + 1, …) array
Central image moments. (D dimensions) References
1
Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001. Examples >>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)])
>>> moments_coords_central(coords)
array([[16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[20., 0., 25., 0.],
[ 0., 0., 0., 0.]])
As seen above, for symmetric objects, odd-order moments (columns 1 and 3, rows 1 and 3) are zero when centered on the centroid, or center of mass, of the object (the default). If we break the symmetry by adding a new point, this no longer holds: >>> coords2 = np.concatenate((coords, [[17, 17]]), axis=0)
>>> np.round(moments_coords_central(coords2),
... decimals=2)
array([[17. , 0. , 22.12, -2.49],
[ 0. , 3.53, 1.73, 7.4 ],
[25.88, 6.02, 36.63, 8.83],
[ 4.15, 19.17, 14.8 , 39.6 ]])
Image moments and central image moments are equivalent (by definition) when the center is (0, 0): >>> np.allclose(moments_coords(coords),
... moments_coords_central(coords, (0, 0)))
True
moments_hu
skimage.measure.moments_hu(nu) [source]
Calculate Hu’s set of image moments (2D-only). Note that this set of moments is proved to be translation, scale and rotation invariant. Parameters
nu(M, M) array
Normalized central image moments, where M must be >= 4. Returns
nu(7,) array
Hu’s set of image moments. References
1
M. K. Hu, “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Info. Theory, vol. IT-8, pp. 179-187, 1962
2
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
3
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
4
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
5
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 0.5
>>> image[10:12, 10:12] = 1
>>> mu = moments_central(image)
>>> nu = moments_normalized(mu)
>>> moments_hu(nu)
array([7.45370370e-01, 3.51165981e-01, 1.04049179e-01, 4.06442107e-02,
2.64312299e-03, 2.40854582e-02, 4.33680869e-19])
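The rotation invariance can be checked numerically. The sketch below computes normalized central moments in plain NumPy and evaluates the first two of Hu's seven invariants (phi1 = nu20 + nu02 and phi2 = (nu20 - nu02)**2 + 4*nu11**2) for an image and its 90-degree rotation; this is an illustration of the invariance property, not the library code, and only two of the seven invariants are shown.

```python
import numpy as np

def normalized_moments(image, order=3):
    """nu[i, j] = mu[i, j] / m00**((i + j) / 2 + 1), computed from scratch."""
    rows, cols = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    cr = (image * rows).sum() / m00
    cc = (image * cols).sum() / m00
    nu = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            mu = (image * (rows - cr)**i * (cols - cc)**j).sum()
            nu[i, j] = mu / m00**((i + j) / 2 + 1)
    return nu

def hu_first_two(nu):
    phi1 = nu[2, 0] + nu[0, 2]
    phi2 = (nu[2, 0] - nu[0, 2])**2 + 4 * nu[1, 1]**2
    return phi1, phi2

image = np.zeros((20, 20))
image[13:17, 13:17] = 0.5
image[10:12, 10:12] = 1
rotated = np.rot90(image)  # a proper 90-degree rotation of the grid
print(np.allclose(hu_first_two(normalized_moments(image)),
                  hu_first_two(normalized_moments(rotated))))  # True
```

Under the rotation, nu20 and nu02 swap and nu11 changes sign, so both phi1 and phi2 come out identical.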
moments_normalized
skimage.measure.moments_normalized(mu, order=3) [source]
Calculate all normalized central image moments up to a certain order. Note that normalized central moments are translation and scale invariant but not rotation invariant. Parameters
mu(M,[ …,] M) array
Central image moments, where M must be greater than or equal to order.
orderint, optional
Maximum order of moments. Default is 3. Returns
nu(order + 1,[ …,] order + 1) array
Normalized central image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> m = moments(image)
>>> centroid = (m[0, 1] / m[0, 0], m[1, 0] / m[0, 0])
>>> mu = moments_central(image, centroid)
>>> moments_normalized(mu)
array([[ nan, nan, 0.078125 , 0. ],
[ nan, 0. , 0. , 0. ],
[0.078125 , 0. , 0.00610352, 0. ],
[0. , 0. , 0. , 0. ]])
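Dividing mu_ij by mu_00**((i + j)/2 + 1) is what makes these moments scale invariant; for rasterized shapes the invariance is approximate and improves with resolution. A small pure-NumPy check of nu[0, 2] for two squares at different scales (a sketch of the definition, not the skimage implementation; for a continuous square nu[0, 2] equals 1/12):

```python
import numpy as np

def nu02(image):
    """Normalized central moment nu[0, 2] = mu[0, 2] / mu[0, 0]**2."""
    rows, cols = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    cc = (image * cols).sum() / m00
    mu02 = (image * (cols - cc)**2).sum()
    return mu02 / m00**2  # exponent (i + j)/2 + 1 = 2 for i=0, j=2

small = np.zeros((30, 30)); small[5:25, 5:25] = 1  # 20x20 square
large = np.zeros((60, 60)); large[5:45, 5:45] = 1  # 40x40 square, 2x scale
print(nu02(small), nu02(large))  # both close to 1/12 ~ 0.0833
```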
perimeter
skimage.measure.perimeter(image, neighbourhood=4) [source]
Calculate total perimeter of all objects in binary image. Parameters
image(N, M) ndarray
2D binary image.
neighbourhood4 or 8, optional
Neighborhood connectivity for border pixel determination. It is used to compute the contour. A higher neighbourhood widens the border on which the perimeter is computed. Returns
perimeterfloat
Total perimeter of all objects in binary image. References
1
K. Benkrid, D. Crookes. Design and FPGA Implementation of a Perimeter Estimator. The Queen’s University of Belfast. http://www.cs.qub.ac.uk/~d.crookes/webpubs/papers/perimeter.doc Examples >>> from skimage import data, util
>>> from skimage.measure import label
>>> # coins image (binary)
>>> img_coins = data.coins() > 110
>>> # total perimeter of all objects in the image
>>> perimeter(img_coins, neighbourhood=4)
7796.867...
>>> perimeter(img_coins, neighbourhood=8)
8806.268...
Examples using skimage.measure.perimeter
Different perimeters perimeter_crofton
skimage.measure.perimeter_crofton(image, directions=4) [source]
Calculate total Crofton perimeter of all objects in binary image. Parameters
image(N, M) ndarray
2D image. If image is not binary, all values strictly greater than zero are considered as the object.
directions2 or 4, optional
Number of directions used to approximate the Crofton perimeter. By default, 4 is used: it should be more accurate than 2. Computation time is the same in both cases. Returns
perimeterfloat
Total perimeter of all objects in binary image. Notes This measure is based on Crofton formula [1], which is a measure from integral geometry. It is defined for general curve length evaluation via a double integral along all directions. In a discrete space, 2 or 4 directions give a quite good approximation, 4 being more accurate than 2 for more complex shapes. Similar to perimeter(), this function returns an approximation of the perimeter in continuous space. References
1
https://en.wikipedia.org/wiki/Crofton_formula
2
S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838 Examples >>> from skimage import data, util
>>> from skimage.measure import label
>>> # coins image (binary)
>>> img_coins = data.coins() > 110
>>> # total perimeter of all objects in the image
>>> perimeter_crofton(img_coins, directions=2)
8144.578...
>>> perimeter_crofton(img_coins, directions=4)
7837.077...
Examples using skimage.measure.perimeter_crofton
Different perimeters points_in_poly
skimage.measure.points_in_poly(points, verts) [source]
Test whether points lie inside a polygon. Parameters
points(N, 2) array
Input points, (x, y).
verts(M, 2) array
Vertices of the polygon, sorted either clockwise or anti-clockwise. The first point may be (but does not need to be) duplicated. Returns
mask(N,) array of bool
True if corresponding point is inside the polygon. See also
grid_points_in_poly
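The underlying test is the classic even-odd (crossing-number) rule: cast a ray from the point and count how many polygon edges it crosses. A minimal NumPy sketch of that rule for a single point (an illustration only, not the library's implementation):

```python
import numpy as np

def point_in_poly(point, verts):
    """Even-odd rule: a point is inside if a horizontal ray to the right
    crosses the polygon boundary an odd number of times."""
    x, y = point
    inside = False
    n = len(verts)
    for i in range(n):
        (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % n]
        spans_ray = (y0 > y) != (y1 > y)  # edge straddles the ray's y level
        if spans_ray and x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
            inside = not inside           # each crossing toggles the state
    return inside

square = np.array([[0., 0.], [10., 0.], [10., 10.], [0., 10.]])
print(point_in_poly((5., 5.), square))   # True
print(point_in_poly((15., 5.), square))  # False
```

Points exactly on an edge are ambiguous under this rule, which is a common caveat for any point-in-polygon test.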
profile_line
skimage.measure.profile_line(image, src, dst, linewidth=1, order=None, mode=None, cval=0.0, *, reduce_func=<function mean>) [source]
Return the intensity profile of an image measured along a scan line. Parameters
imagendarray, shape (M, N[, C])
The image, either grayscale (2D array) or multichannel (3D array, where the final axis contains the channel information).
srcarray_like, shape (2, )
The coordinates of the start point of the scan line.
dstarray_like, shape (2, )
The coordinates of the end point of the scan line. The destination point is included in the profile, in contrast to standard numpy indexing.
linewidthint, optional
Width of the scan, perpendicular to the line
orderint in {0, 1, 2, 3, 4, 5}, optional
The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.
mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional
How to compute any values falling outside of the image.
cvalfloat, optional
If mode is ‘constant’, what constant value to use outside the image.
reduce_funccallable, optional
Function used to calculate the aggregation of pixel values perpendicular to the profile_line direction when linewidth > 1. If set to None the unreduced array will be returned. Returns
return_valuearray
The intensity profile along the scan line. The length of the profile is the ceil of the computed length of the scan line. Examples >>> x = np.array([[1, 1, 1, 2, 2, 2]])
>>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
>>> img
array([[0, 0, 0, 0, 0, 0],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[0, 0, 0, 0, 0, 0]])
>>> profile_line(img, (2, 1), (2, 4))
array([1., 1., 2., 2.])
>>> profile_line(img, (1, 0), (1, 6), cval=4)
array([1., 1., 1., 2., 2., 2., 4.])
The destination point is included in the profile, in contrast to standard numpy indexing. For example: >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds
array([1., 1., 1., 2., 2., 2., 0.])
>>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row
array([1., 1., 1., 2., 2., 2.])
For different reduce_func inputs: >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)
array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)
array([1, 1, 1, 2])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)
array([2, 2, 2, 4])
The unreduced array will be returned when reduce_func is None or when reduce_func acts on each pixel value individually. >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
... reduce_func=None)
array([[1, 1, 2],
[1, 1, 2],
[1, 1, 2],
[0, 0, 0]])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)
array([[1. , 1. , 0. ],
[1. , 1. , 0. ],
[1. , 1. , 0. ],
[1.41421356, 1.41421356, 0. ]])
ransac
skimage.measure.ransac(data, model_class, min_samples, residual_threshold, is_data_valid=None, is_model_valid=None, max_trials=100, stop_sample_num=inf, stop_residuals_sum=0, stop_probability=1, random_state=None, initial_inliers=None) [source]
Fit a model to data with the RANSAC (random sample consensus) algorithm. RANSAC is an iterative algorithm for the robust estimation of parameters from a subset of inliers from the complete data set. Each iteration performs the following tasks:
1. Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid).
2. Estimate a model on the random subset (model_cls.estimate(*data[random_subset])) and check whether the estimated model is valid (see is_model_valid).
3. Classify all data as inliers or outliers by calculating the residuals to the estimated model (model_cls.residuals(*data)): all data samples with residuals smaller than residual_threshold are considered inliers.
4. Save the estimated model as the best model if the number of inlier samples is maximal. If the current estimated model has the same number of inliers, it is considered the best model only if it has a smaller sum of residuals.
These steps are performed either a maximum number of times or until one of the special stop criteria is met. The final model is estimated using all inlier samples of the previously determined best model. Parameters
data[list, tuple of] (N, …) array
Data set to which the model is fitted, where N is the number of data points and the remaining dimension are depending on model requirements. If the model class requires multiple input data arrays (e.g. source and destination coordinates of skimage.transform.AffineTransform), they can be optionally passed as tuple or list. Note, that in this case the functions estimate(*data), residuals(*data), is_model_valid(model, *random_data) and is_data_valid(*random_data) must all take each data array as separate arguments.
model_classobject
Object with the following object methods:
* success = estimate(*data)
* residuals(*data)
where success indicates whether the model estimation succeeded (True or None for success, False for failure).
min_samplesint in range (0, N)
The minimum number of data points to fit a model to.
residual_thresholdfloat larger than 0
Maximum distance for a data point to be classified as an inlier.
is_data_validfunction, optional
This function is called with the randomly selected data before the model is fitted to it: is_data_valid(*random_data).
is_model_validfunction, optional
This function is called with the estimated model and the randomly selected data: is_model_valid(model, *random_data).
max_trialsint, optional
Maximum number of iterations for random sample selection.
stop_sample_numint, optional
Stop iteration if at least this number of inliers are found.
stop_residuals_sumfloat, optional
Stop iteration if sum of residuals is less than or equal to this threshold.
stop_probabilityfloat in range [0, 1], optional
RANSAC iteration stops if at least one outlier-free set of the training data is sampled with probability >= stop_probability, depending on the current best model’s inlier ratio and the number of trials. This requires to generate at least N samples (trials): N >= log(1 - probability) / log(1 - e**m) where the probability (confidence) is typically set to a high value such as 0.99, e is the current fraction of inliers w.r.t. the total number of samples, and m is the min_samples value.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
initial_inliersarray-like of bool, shape (N,), optional
Initial samples selection for model estimation. Returns
modelobject
Best model with largest consensus set.
inliers(N, ) array
Boolean mask of inliers classified as True. References
1
“RANSAC”, Wikipedia, https://en.wikipedia.org/wiki/RANSAC Examples Generate ellipse data without tilt and add noise: >>> t = np.linspace(0, 2 * np.pi, 50)
>>> xc, yc = 20, 30
>>> a, b = 5, 10
>>> x = xc + a * np.cos(t)
>>> y = yc + b * np.sin(t)
>>> data = np.column_stack([x, y])
>>> np.random.seed(seed=1234)
>>> data += np.random.normal(size=data.shape)
Add some faulty data: >>> data[0] = (100, 100)
>>> data[1] = (110, 120)
>>> data[2] = (120, 130)
>>> data[3] = (140, 130)
Estimate ellipse model using all available data: >>> model = EllipseModel()
>>> model.estimate(data)
True
>>> np.round(model.params)
array([ 72., 75., 77., 14., 1.])
Estimate ellipse model using RANSAC: >>> ransac_model, inliers = ransac(data, EllipseModel, 20, 3, max_trials=50)
>>> abs(np.round(ransac_model.params))
array([20., 30., 5., 10., 0.])
>>> inliers
array([False, False, False, False, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True], dtype=bool)
>>> sum(inliers) > 40
True
RANSAC can be used to robustly estimate a geometric transformation. In this section, we also show how to use a proportion of the total samples, rather than an absolute number. >>> from skimage.transform import SimilarityTransform
>>> np.random.seed(0)
>>> src = 100 * np.random.rand(50, 2)
>>> model0 = SimilarityTransform(scale=0.5, rotation=1, translation=(10, 20))
>>> dst = model0(src)
>>> dst[0] = (10000, 10000)
>>> dst[1] = (-100, 100)
>>> dst[2] = (50, 50)
>>> ratio = 0.5 # use half of the samples
>>> min_samples = int(ratio * len(src))
>>> model, inliers = ransac((src, dst), SimilarityTransform, min_samples, 10,
... initial_inliers=np.ones(len(src), dtype=bool))
>>> inliers
array([False, False, False, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True])
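The sample-estimate-score loop described above can be sketched for a simple line model in plain NumPy. This is an illustration of the RANSAC procedure, not skimage's ransac; the model, thresholds, and data are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Line y = 2x + 1 with small noise, plus gross outliers
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.1, 100)
y[:10] += rng.uniform(20, 50, 10)  # 10 faulty points
data = np.column_stack([x, y])

best_inliers = None
for _ in range(100):                                   # max_trials
    i, j = rng.choice(len(data), 2, replace=False)     # min_samples = 2
    (x0, y0), (x1, y1) = data[i], data[j]
    if x0 == x1:
        continue                                       # degenerate sample
    slope = (y1 - y0) / (x1 - x0)                      # estimate the model
    intercept = y0 - slope * x0
    residuals = np.abs(data[:, 1] - (slope * data[:, 0] + intercept))
    inliers = residuals < 0.5                          # residual_threshold
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers                         # keep largest consensus

# Final model: least-squares fit on the best consensus set
slope, intercept = np.polyfit(*data[best_inliers].T, 1)
print(slope, intercept)  # close to the true values 2 and 1
```

The final refit over all consensus inliers mirrors the last step in the description above: the per-trial model only ranks candidate consensus sets, while the returned model is estimated from the full inlier set.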
regionprops
skimage.measure.regionprops(label_image, intensity_image=None, cache=True, coordinates=None, *, extra_properties=None) [source]
Measure properties of labeled image regions. Parameters
label_image(M, N[, P]) ndarray
Labeled input image. Labels with value 0 are ignored. Changed in version 0.14.1: Previously, label_image was processed by numpy.squeeze and so any number of singleton dimensions was allowed. This resulted in inconsistent handling of images with singleton dimensions. To recover the old behaviour, use regionprops(np.squeeze(label_image), ...).
intensity_image(M, N[, P][, C]) ndarray, optional
Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Default is None. Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.
cachebool, optional
Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.
coordinatesDEPRECATED
This argument is deprecated and will be removed in a future version of scikit-image. See Coordinate conventions for more details. Deprecated since version 0.16.0: Use “rc” coordinates everywhere. It may be sufficient to call numpy.transpose on your label image to get the same values as 0.15 and earlier. However, for some properties, the transformation will be less trivial. For example, the new orientation is \(\frac{\pi}{2}\) plus the old orientation.
extra_propertiesIterable of callables
Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument. Returns
propertieslist of RegionProperties
Each item describes one labeled region, and can be accessed using the attributes listed below. See also
label
Notes The following properties can be accessed as attributes or keys:
areaint
Number of pixels of the region.
bboxtuple
Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).
bbox_areaint
Number of pixels of bounding box.
centroidarray
Centroid coordinate tuple (row, col).
convex_areaint
Number of pixels of convex hull image, which is the smallest convex polygon that encloses the region.
convex_image(H, J) ndarray
Binary convex hull image which has the same size as bounding box.
coords(N, 2) ndarray
Coordinate list (row, col) of the region.
eccentricityfloat
Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.
equivalent_diameterfloat
The diameter of a circle with the same area as the region.
euler_numberint
Euler characteristic of the set of non-zero pixels. Computed as number of connected components subtracted by number of holes (input.ndim connectivity). In 3D, number of connected components plus number of holes subtracted by number of tunnels.
extentfloat
Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (rows * cols)
feret_diameter_maxfloat
Maximum Feret’s diameter computed as the longest distance between points around a region’s convex hull contour as determined by find_contours. [5]
filled_areaint
Number of pixels of the region with all the holes filled in. Describes the area of the filled_image.
filled_image(H, J) ndarray
Binary region image with filled holes which has the same size as bounding box.
image(H, J) ndarray
Sliced binary region image which has the same size as bounding box.
inertia_tensorndarray
Inertia tensor of the region for the rotation around its center of mass.
inertia_tensor_eigvalstuple
The eigenvalues of the inertia tensor in decreasing order.
intensity_imagendarray
Image inside region bounding box.
labelint
The label in the labeled input image.
local_centroidarray
Centroid coordinate tuple (row, col), relative to region bounding box.
major_axis_lengthfloat
The length of the major axis of the ellipse that has the same normalized second central moments as the region.
max_intensityfloat
Value with the greatest intensity in the region.
mean_intensityfloat
Mean intensity value in the region.
min_intensityfloat
Value with the least intensity in the region.
minor_axis_lengthfloat
The length of the minor axis of the ellipse that has the same normalized second central moments as the region.
moments(3, 3) ndarray
Spatial moments up to 3rd order: m_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
moments_central(3, 3) ndarray
Central moments (translation invariant) up to 3rd order: mu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s centroid.
moments_hutuple
Hu moments (translation, scale and rotation invariant).
moments_normalized(3, 3) ndarray
Normalized moments (translation and scale invariant) up to 3rd order: nu_ij = mu_ij / m_00^[(i+j)/2 + 1]
where m_00 is the zeroth spatial moment.
orientationfloat
Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise.
perimeterfloat
Perimeter of object which approximates the contour as a line through the centers of border pixels using a 4-connectivity.
perimeter_croftonfloat
Perimeter of object approximated by the Crofton formula in 4 directions.
slicetuple of slices
A slice to extract the object from the source image.
solidityfloat
Ratio of pixels in the region to pixels of the convex hull image.
weighted_centroidarray
Centroid coordinate tuple (row, col) weighted with intensity image.
weighted_local_centroidarray
Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.
weighted_moments(3, 3) ndarray
Spatial moments of intensity image up to 3rd order: wm_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
weighted_moments_central(3, 3) ndarray
Central moments (translation invariant) of intensity image up to 3rd order: wmu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s weighted centroid.
weighted_moments_hutuple
Hu moments (translation, scale and rotation invariant) of intensity image.
weighted_moments_normalized(3, 3) ndarray
Normalized moments (translation and scale invariant) of intensity image up to 3rd order: wnu_ij = wmu_ij / wm_00^[(i+j)/2 + 1]
where wm_00 is the zeroth spatial moment (intensity-weighted area). Each region also supports iteration, so that you can do: for prop in region:
print(prop, region[prop])
References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment
5
W. Pabst, E. Gregorová. Characterization of particles and particle systems, pp. 27-28. ICT Prague, 2007. https://old.vscht.cz/sil/keramika/Characterization_of_particles/CPPS%20_English%20version_.pdf Examples >>> from skimage import data, util
>>> from skimage.measure import label, regionprops
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img, connectivity=img.ndim)
>>> props = regionprops(label_img)
>>> # centroid of first labeled object
>>> props[0].centroid
(22.72987986048314, 81.91228523446583)
>>> # centroid of first labeled object
>>> props[0]['centroid']
(22.72987986048314, 81.91228523446583)
Add custom measurements by passing functions as extra_properties >>> from skimage import data, util
>>> from skimage.measure import label, regionprops
>>> import numpy as np
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img, connectivity=img.ndim)
>>> def pixelcount(regionmask):
... return np.sum(regionmask)
>>> props = regionprops(label_img, extra_properties=(pixelcount,))
>>> props[0].pixelcount
7741
>>> props[1]['pixelcount']
42
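The spatial-moment formulas listed above (m_ij = sum{ array(row, col) * row^i * col^j }) can be checked by hand with plain NumPy. The sketch below is independent of skimage and uses a made-up 4x4 test image; it computes the moment matrix for a tiny binary region and derives the centroid as (m10/m00, m01/m00):

```python
import numpy as np

# Tiny binary region: a 2x2 block of ones inside a 4x4 image.
region = np.zeros((4, 4))
region[1:3, 1:3] = 1

# Spatial moments m_ij = sum{ array(row, col) * row**i * col**j }
rows, cols = np.nonzero(region)
m = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        m[i, j] = np.sum(region[rows, cols] * rows**i * cols**j)

area = m[0, 0]                                     # 4 pixels
centroid = (m[1, 0] / m[0, 0], m[0, 1] / m[0, 0])  # (1.5, 1.5)
print(area, centroid)
```

The same values are what regionprops would report as area and centroid for this region.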
Examples using skimage.measure.regionprops
Measure region properties regionprops_table
skimage.measure.regionprops_table(label_image, intensity_image=None, properties=('label', 'bbox'), *, cache=True, separator='-', extra_properties=None) [source]
Compute image properties and return them as a pandas-compatible table. The table is a dictionary mapping column names to value arrays. See Notes section below for details. New in version 0.16. Parameters
label_image(N, M[, P]) ndarray
Labeled input image. Labels with value 0 are ignored.
intensity_image(M, N[, P][, C]) ndarray, optional
Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Default is None. Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.
propertiestuple or list of str, optional
Properties that will be included in the resulting dictionary. For a list of available properties, please see regionprops(). Users should remember to add “label” to keep track of region identities.
cachebool, optional
Determine whether to cache calculated properties. Caching makes the computation much faster, at the cost of increased memory consumption.
separatorstr, optional
For non-scalar properties not listed in OBJECT_COLUMNS, each element will appear in its own column, with the index of that element separated from the property name by this separator. For example, the inertia tensor of a 2D region will appear in four columns: inertia_tensor-0-0, inertia_tensor-0-1, inertia_tensor-1-0, and inertia_tensor-1-1 (where the separator is -). Object columns are those that cannot be split in this way because the number of columns would change depending on the object. For example, image and coords.
extra_propertiesIterable of callables
Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument. Returns
out_dictdict
Dictionary mapping property names to an array of values of that property, one value per region. This dictionary can be used as input to pandas DataFrame to map property names to columns in the frame and regions to rows. If the image has no regions, the arrays will have length 0, but the correct type. Notes Each column contains either a scalar property, an object property, or an element in a multidimensional array. Properties with scalar values for each region, such as “eccentricity”, will appear as a float or int array with that property name as key. Multidimensional properties of fixed size for a given image dimension, such as “centroid” (every centroid will have three elements in a 3D image, no matter the region size), will be split into that many columns, with the name {property_name}{separator}{element_num} (for 1D properties), {property_name}{separator}{elem_num0}{separator}{elem_num1} (for 2D properties), and so on. For multidimensional properties that don’t have a fixed size, such as “image” (the image of a region varies in size depending on the region size), an object array will be used, with the corresponding property name as the key. Examples >>> from skimage import data, util, measure
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, image,
... properties=['label', 'inertia_tensor',
... 'inertia_tensor_eigvals'])
>>> props
{'label': array([ 1, 2, ...]), ...
'inertia_tensor-0-0': array([ 4.012...e+03, 8.51..., ...]), ...
...,
'inertia_tensor_eigvals-1': array([ 2.67...e+02, 2.83..., ...])}
The resulting dictionary can be directly passed to pandas, if installed, to obtain a clean DataFrame: >>> import pandas as pd
>>> data = pd.DataFrame(props)
>>> data.head()
label inertia_tensor-0-0 ... inertia_tensor_eigvals-1
0 1 4012.909888 ... 267.065503
1 2 8.514739 ... 2.834806
2 3 0.666667 ... 0.000000
3 4 0.000000 ... 0.000000
4 5 0.222222 ... 0.111111
[5 rows x 7 columns] If we want to measure a feature that does not come as a built-in property, we can define custom functions and pass them as extra_properties. For example, we can create a custom function that measures the intensity quartiles in a region: >>> from skimage import data, util, measure
>>> import numpy as np
>>> def quartiles(regionmask, intensity):
... return np.percentile(intensity[regionmask], q=(25, 50, 75))
>>>
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, intensity_image=image,
... properties=('label',),
... extra_properties=(quartiles,))
>>> import pandas as pd
>>> pd.DataFrame(props).head()
label quartiles-0 quartiles-1 quartiles-2
0 1 117.00 123.0 130.0
1 2 111.25 112.0 114.0
2 3 111.00 111.0 111.0
3 4 111.00 111.5 112.5
4 5 112.50 113.0 114.0
Examples using skimage.measure.regionprops_table
Measure region properties shannon_entropy
skimage.measure.shannon_entropy(image, base=2) [source]
Calculate the Shannon entropy of an image. The Shannon entropy is defined as S = -sum(pk * log(pk)), where pk is the frequency/probability of pixels of value k. Parameters
image(N, M) ndarray
Grayscale input image.
basefloat, optional
The logarithmic base to use. Returns
entropyfloat
Notes The returned value is measured in bits or shannon (Sh) for base=2, natural unit (nat) for base=np.e and hartley (Hart) for base=10. References
1
https://en.wikipedia.org/wiki/Entropy_(information_theory)
2
https://en.wiktionary.org/wiki/Shannon_entropy Examples >>> from skimage import data
>>> from skimage.measure import shannon_entropy
>>> shannon_entropy(data.camera())
7.231695011055706
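The S = -sum(pk * log(pk)) definition above can be reproduced with plain NumPy. This sketch (independent of skimage, using a made-up tiny image) takes pk as the normalized counts of each distinct pixel value and base=2, so the result is in bits:

```python
import numpy as np

# Minimal sketch of S = -sum(pk * log2(pk)) for a tiny test image.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 2, 2]])
_, counts = np.unique(image, return_counts=True)
pk = counts / counts.sum()           # pk = (0.5, 0.25, 0.25)
entropy = -np.sum(pk * np.log2(pk))
print(entropy)                       # 1.5 bits
```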
subdivide_polygon
skimage.measure.subdivide_polygon(coords, degree=2, preserve_ends=False) [source]
Subdivision of polygonal curves using B-Splines. Note that the resulting curve is always within the convex hull of the original polygon. Circular polygons stay closed after subdivision. Parameters
coords(N, 2) array
Coordinate array.
degree{1, 2, 3, 4, 5, 6, 7}, optional
Degree of B-Spline. Default is 2.
preserve_endsbool, optional
Preserve first and last coordinate of non-circular polygon. Default is False. Returns
coords(M, 2) array
Subdivided coordinate array. References
1
http://mrl.nyu.edu/publications/subdiv-course2000/coursenotes00.pdf
CircleModel
class skimage.measure.CircleModel [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D circles. The functional model of the circle is: r**2 = (x - xc)**2 + (y - yc)**2
This estimator minimizes the squared distances from all points to the circle: min{ sum((r - sqrt((x_i - xc)**2 + (y_i - yc)**2))**2) }
A minimum number of 3 points is required to solve for the parameters. Examples >>> t = np.linspace(0, 2 * np.pi, 25)
>>> xy = CircleModel().predict_xy(t, params=(2, 3, 4))
>>> model = CircleModel()
>>> model.estimate(xy)
True
>>> tuple(np.round(model.params, 5))
(2.0, 3.0, 4.0)
>>> res = model.residuals(xy)
>>> np.abs(np.round(res, 9))
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Circle model parameters in the following order xc, yc, r.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate circle model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds.
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(3, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the circle is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point.
EllipseModel
class skimage.measure.EllipseModel [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D ellipses. The functional model of the ellipse is: xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)
yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)
d = sqrt((x - xt)**2 + (y - yt)**2)
where (xt, yt) is the closest point on the ellipse to (x, y). Thus d is the shortest distance from the point to the ellipse. The estimator is based on a least squares minimization. The optimal solution is computed directly, no iterations are required. This leads to a simple, stable and robust fitting method. The params attribute contains the parameters in the following order: xc, yc, a, b, theta
Examples >>> xy = EllipseModel().predict_xy(np.linspace(0, 2 * np.pi, 25),
... params=(10, 15, 4, 8, np.deg2rad(30)))
>>> ellipse = EllipseModel()
>>> ellipse.estimate(xy)
True
>>> np.round(ellipse.params, 2)
array([10. , 15. , 4. , 8. , 0.52])
>>> np.round(abs(ellipse.residuals(xy)), 5)
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Ellipse model parameters in the following order xc, yc, a, b, theta.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate ellipse model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. References
1
Halir, R.; Flusser, J. “Numerically stable direct least squares fitting of ellipses”. In Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization. WSCG (Vol. 98, pp. 125-132).
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(5, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the ellipse is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point.
LineModelND
class skimage.measure.LineModelND [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for N-dimensional lines. In contrast to ordinary least squares line estimation, this estimator minimizes the orthogonal distances of points to the estimated line. Lines are defined by a point (origin) and a unit vector (direction) according to the following vector equation: X = origin + lambda * direction
Examples >>> x = np.linspace(1, 2, 25)
>>> y = 1.5 * x + 3
>>> lm = LineModelND()
>>> lm.estimate(np.stack([x, y], axis=-1))
True
>>> tuple(np.round(lm.params, 5))
(array([1.5 , 5.25]), array([0.5547 , 0.83205]))
>>> res = lm.residuals(np.stack([x, y], axis=-1))
>>> np.abs(np.round(res, 9))
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
>>> np.round(lm.predict_y(x[:5]), 3)
array([4.5 , 4.562, 4.625, 4.688, 4.75 ])
>>> np.round(lm.predict_x(y[:5]), 3)
array([1. , 1.042, 1.083, 1.125, 1.167])
Attributes
paramstuple
Line model parameters in the following order origin, direction.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate line model from data. This minimizes the sum of shortest (orthogonal) distances from the given data points to the estimated line. Parameters
data(N, dim) array
N points in a space of dimensionality dim >= 2. Returns
successbool
True, if model estimation succeeds.
predict(x, axis=0, params=None) [source]
Predict intersection of the estimated line model with a hyperplane orthogonal to a given axis. Parameters
x(n, 1) array
Coordinates along an axis.
axisint
Axis orthogonal to the hyperplane intersecting the line.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
data(n, m) array
Predicted coordinates. Raises
ValueError
If the line is parallel to the given axis.
predict_x(y, params=None) [source]
Predict x-coordinates for 2D lines using the estimated model. Alias for: predict(y, axis=1)[:, 0]
Parameters
yarray
y-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
xarray
Predicted x-coordinates.
predict_y(x, params=None) [source]
Predict y-coordinates for 2D lines using the estimated model. Alias for: predict(x, axis=0)[:, 1]
Parameters
xarray
x-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
yarray
Predicted y-coordinates.
residuals(data, params=None) [source]
Determine residuals of data to model. For each point, the shortest (orthogonal) distance to the line is returned. It is obtained by projecting the data onto the line. Parameters
data(N, dim) array
N points in a space of dimension dim.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
residuals(N, ) array
Residual for each data point. | |
doc_2708 |
Overrides the standard draw_path to add the shadow offset and necessary color changes for the shadow. | |
doc_2709 | A dictionary acting as a cache for finder objects. The keys are paths that have been passed to sys.path_hooks and the values are the finders that are found. If a path is a valid file system path but no finder is found on sys.path_hooks then None is stored. Originally specified in PEP 302. Changed in version 3.3: None is stored instead of imp.NullImporter when no finder is found. | |
doc_2710 |
Alias for set_edgecolor. | |
doc_2711 |
Bases: matplotlib.offsetbox.AnchoredOffsetbox An anchored container with transformed coordinates. Artists added to the drawing_area are scaled according to the coordinates of the transformation used. The dimensions of this artist will scale to contain the artists added. Parameters
transformmatplotlib.transforms.Transform
The transformation object for the coordinate system in use, i.e., matplotlib.axes.Axes.transData.
locstr
Location of this artist. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center', 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter loc of Legend for details.
padfloat, default: 0.4
Padding around the child objects, in fraction of the font size.
borderpadfloat, default: 0.5
Border padding, in fraction of the font size.
propmatplotlib.font_manager.FontProperties, optional
Font property used as a reference for paddings.
frameonbool, default: True
If True, draw a box around this artist. **kwargs
Keyword arguments forwarded to AnchoredOffsetbox. Examples To display an ellipse in the upper left, with a width of 0.1 and height of 0.4 in data coordinates: >>> box = AnchoredAuxTransformBox(ax.transData, loc='upper left')
>>> el = Ellipse((0, 0), width=0.1, height=0.4, angle=30)
>>> box.drawing_area.add_artist(el)
>>> ax.add_artist(box)
Attributes
drawing_areamatplotlib.offsetbox.AuxTransformBox
A container for artists to display. set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, bbox_to_anchor=<UNSET>, child=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
bbox_to_anchor unknown
child unknown
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float
Examples using mpl_toolkits.axes_grid1.anchored_artists.AnchoredAuxTransformBox
Annotations | |
doc_2712 |
Helper function to obtain the location of a corner of a bbox. Parameters
bboxmatplotlib.transforms.Bbox
loc{1, 2, 3, 4}
Corner of bbox. Valid values are: 'upper right' : 1,
'upper left' : 2,
'lower left' : 3,
'lower right' : 4
Returns
x, yfloat
Coordinates of the corner specified by loc. | |
doc_2713 |
Return the indices of the elements that are non-zero. Refer to numpy.nonzero for full documentation. See also numpy.nonzero
equivalent function | |
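A short illustration of the method on a made-up 2x2 array: nonzero returns one index array per dimension, and those arrays can be used together for fancy indexing to recover the non-zero values themselves:

```python
import numpy as np

a = np.array([[1, 0],
              [0, 3]])
rows, cols = a.nonzero()   # indices of the non-zero elements, per axis
print(rows, cols)          # [0 1] [0 1]
print(a[rows, cols])       # [1 3] -- the non-zero values themselves
```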
doc_2714 |
Build the max tree from an image. Component trees represent the hierarchical structure of the connected components resulting from sequential thresholding operations applied to an image. A connected component at one level is parent of a component at a higher level if the latter is included in the first. A max-tree is an efficient representation of a component tree. A connected component at one level is represented by one reference pixel at this level, which is parent to all other pixels at that level and to the reference pixel at the level above. The max-tree is the basis for many morphological operators, namely connected operators. Parameters
imagendarray
The input image for which the max-tree is to be calculated. This image can be of any type.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1. Returns
parentndarray, int64
Array of same shape as image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). References
1
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
2
Berger, C., Geraud, T., Levillain, R., Widynski, N., Baillard, A., Bertin, E. (2007). Effective Component Tree Computation with Application to Pattern Recognition in Astronomical Imaging. In International Conference on Image Processing (ICIP) (pp. 41-44). DOI:10.1109/ICIP.2007.4379949
3
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
4
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create a small sample image (Figure 1 from [4]) and build the max-tree. >>> image = np.array([[15, 13, 16], [12, 12, 10], [16, 12, 14]])
>>> P, S = max_tree(image, connectivity=2) | |
doc_2715 | Similar to thread_time() but return time as nanoseconds. New in version 3.7. | |
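A minimal sketch of how the nanosecond variant might be used; the timed work is arbitrary, and the absolute numbers depend on the platform clock:

```python
import time

# thread_time_ns() counts CPU time of the current thread, as an int in nanoseconds.
t0 = time.thread_time_ns()
sum(range(100_000))          # some CPU-bound work to accumulate thread time
t1 = time.thread_time_ns()

print(isinstance(t1, int))   # integer nanoseconds, no float rounding
print(t1 >= t0)              # per-thread CPU time never goes backwards
```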
doc_2716 | Return a CoverageResults object that contains the cumulative results of all previous calls to run, runctx and runfunc for the given Trace instance. Does not reset the accumulated trace results. | |
doc_2717 | Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a dictionary. The dictionary keys are the unique query variable names and the values are lists of values for each name. The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included. The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method. The optional argument max_num_fields is the maximum number of fields to read. If set, then throws a ValueError if there are more than max_num_fields fields read. The optional argument separator is the symbol to use for separating the query arguments. It defaults to &. Use the urllib.parse.urlencode() function (with the doseq parameter set to True) to convert such dictionaries into query strings. Changed in version 3.2: Add encoding and errors parameters. Changed in version 3.8: Added max_num_fields parameter. Changed in version 3.9.2: Added separator parameter with the default value of &. Python versions earlier than Python 3.9.2 allowed using both ; and & as query parameter separator. This has been changed to allow only a single separator key, with & as the default separator. | |
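A short example of the behavior described above, using urllib.parse.parse_qs: repeated names collect into lists, blank values are dropped unless keep_blank_values is true, and urlencode with doseq=True round-trips the dictionary back to a query string:

```python
from urllib.parse import parse_qs, urlencode

qs = "name=Ada&lang=python&lang=c"
parsed = parse_qs(qs)
print(parsed)  # {'name': ['Ada'], 'lang': ['python', 'c']}

# Blank values are ignored by default; keep_blank_values=True retains them.
print(parse_qs("a=&b=1"))                          # {'b': ['1']}
print(parse_qs("a=&b=1", keep_blank_values=True))  # {'a': [''], 'b': ['1']}

# Round-trip the dictionary back into a query string.
print(urlencode(parsed, doseq=True))               # name=Ada&lang=python&lang=c
```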
doc_2718 |
Helper function to obtain a Path from one bbox to another. Parameters
bbox1, bbox2matplotlib.transforms.Bbox
Bounding boxes to connect.
loc1{1, 2, 3, 4}
Corner of bbox1 to use. Valid values are: 'upper right' : 1,
'upper left' : 2,
'lower left' : 3,
'lower right' : 4
loc2{1, 2, 3, 4}, optional
Corner of bbox2 to use. If None, defaults to loc1. Valid values are: 'upper right' : 1,
'upper left' : 2,
'lower left' : 3,
'lower right' : 4
Returns
pathmatplotlib.path.Path
A line segment from the loc1 corner of bbox1 to the loc2 corner of bbox2. | |
doc_2719 | See Migration guide for more details. tf.compat.v1.sets.set_size, tf.compat.v1.sets.size
tf.sets.size(
a, validate_indices=True
)
Args
a SparseTensor, with indices sorted in row-major order.
validate_indices Whether to validate the order and range of sparse indices in a.
Returns int32 Tensor of set sizes. For a ranked n, this is a Tensor with rank n-1, and the same 1st n-1 dimensions as a. Each value is the number of unique elements in the corresponding [0...n-1] dimension of a.
Raises
TypeError If a is of an invalid type. |
doc_2720 | Headers are folded using the Header folding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to the max_line_length. Non-ASCII binary data are CTE encoded using the unknown-8bit charset. | |
doc_2721 | Represents the C unsigned int datatype. The constructor accepts an optional integer initializer; no overflow checking is done. On platforms where sizeof(int) == sizeof(long) it is an alias for c_ulong. | |
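A small illustration of the "no overflow checking" behavior: a negative initializer simply wraps around modulo the type's width (typically 32 bits for unsigned int, though the sketch derives the width from ctypes.sizeof rather than assuming it):

```python
import ctypes

x = ctypes.c_uint(10)
print(x.value)                   # 10

# No overflow checking: -1 wraps to the maximum unsigned value.
bits = 8 * ctypes.sizeof(ctypes.c_uint)
y = ctypes.c_uint(-1)
print(y.value == 2**bits - 1)    # True (4294967295 where unsigned int is 32 bits)
```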
doc_2722 |
Averages all function events over their keys. Parameters
group_by_input_shapes – group entries by (event name, input shapes) rather than just event name. This is useful to see which input shapes contribute to the runtime the most and may help with size-specific optimizations or choosing the best candidates for quantization.
group_by_stack_n – group by top n stack trace entries Returns
An EventList containing FunctionEventAvg objects. | |
doc_2723 | Tag the value and dump it to a compact JSON string. Parameters
value (Any) – Return type
str | |
doc_2724 |
Do nothing and return the estimator unchanged This method is just there to implement the usual API and hence work in pipelines. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to estimate the normalization parameters.
yNone
Ignored. Returns
selfobject
Fitted transformer. | |
doc_2725 | Reset the encoder to the initial state. The output is discarded: call .encode(object, final=True), passing an empty byte or text string if necessary, to reset the encoder and to get the output. | |
doc_2726 |
Alias for set_edgecolor. | |
doc_2727 | Predict the labels (1 inlier, -1 outlier) of X according to LOF. Only available for novelty detection (when novelty is set to True). This method allows to generalize prediction to new observations (not in the training set). Parameters
Xarray-like of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. to the training samples. Returns
is_inlierndarray of shape (n_samples,)
Returns -1 for anomalies/outliers and +1 for inliers. | |
doc_2728 |
Interpolate a function at the Chebyshev points of the first kind. Returns the series that interpolates func at the Chebyshev points of the first kind scaled and shifted to the domain. The resulting series tends to a minmax approximation of func when the function is continuous in the domain. New in version 1.14.0. Parameters
funcfunction
The function to be interpolated. It must be a function of a single variable of the form f(x, a, b, c...), where a, b, c... are extra arguments passed in the args parameter.
degint
Degree of the interpolating polynomial.
domain{None, [beg, end]}, optional
Domain over which func is interpolated. The default is None, in which case the domain is [-1, 1].
argstuple, optional
Extra arguments to be used in the function call. Default is no extra arguments. Returns
polynomialChebyshev instance
Interpolating Chebyshev instance. Notes See numpy.polynomial.chebfromfunction for more details. | |
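A short sketch of the classmethod in use, interpolating np.cos over an illustrative domain [0, pi]; since cos is smooth, a modest degree already approaches minmax accuracy on the domain:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Interpolate np.cos at the Chebyshev points of the first kind on [0, pi].
p = Chebyshev.interpolate(np.cos, deg=10, domain=[0, np.pi])

# The interpolant tracks the function closely across the whole domain.
x = np.linspace(0, np.pi, 5)
print(np.max(np.abs(p(x) - np.cos(x))) < 1e-6)  # True
```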
doc_2729 | Transforms a string to one that can be used in locale-aware comparisons. For example, strxfrm(s1) < strxfrm(s2) is equivalent to strcoll(s1, s2) < 0. This function can be used when the same string is compared repeatedly, e.g. when collating a sequence of strings. | |
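A brief illustration of the collation use case described above; in the default C locale the transformed keys sort like plain strings, while a language-specific locale (if set via locale.setlocale) would apply that language's collation rules instead:

```python
import locale

# Use strxfrm as a sort key so each string is transformed only once.
words = ["banana", "apple", "cherry"]
print(sorted(words, key=locale.strxfrm))  # ['apple', 'banana', 'cherry']
```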
doc_2730 |
Blit the canvas in bbox (default entire canvas). | |
doc_2731 | See Migration guide for more details. tf.compat.v1.raw_ops.SnapshotDataset
tf.raw_ops.SnapshotDataset(
input_dataset, path, output_types, output_shapes, compression='',
reader_path_prefix='', writer_path_prefix='',
shard_size_bytes=10737418240, pending_snapshot_expiry_seconds=86400,
num_reader_threads=1, reader_buffer_size=1, num_writer_threads=1,
writer_buffer_size=1, shuffle_on_read=False, seed=0, seed2=0,
mode='auto', snapshot_name='', name=None
)
This dataset attempts to determine whether a valid snapshot exists at the snapshot_path, and reads from the snapshot in lieu of using input_dataset. If not, it will run the preprocessing pipeline as usual, and write out a snapshot of the data processed for future use.
Args
input_dataset A Tensor of type variant. A variant tensor representing the input dataset.
path A Tensor of type string. The path we should write snapshots to / read snapshots from.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
compression An optional string. Defaults to "".
reader_path_prefix An optional string. Defaults to "".
writer_path_prefix An optional string. Defaults to "".
shard_size_bytes An optional int. Defaults to 10737418240.
pending_snapshot_expiry_seconds An optional int. Defaults to 86400.
num_reader_threads An optional int. Defaults to 1.
reader_buffer_size An optional int. Defaults to 1.
num_writer_threads An optional int. Defaults to 1.
writer_buffer_size An optional int. Defaults to 1.
shuffle_on_read An optional bool. Defaults to False.
seed An optional int. Defaults to 0.
seed2 An optional int. Defaults to 0.
mode An optional string. Defaults to "auto".
snapshot_name An optional string. Defaults to "".
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_2732 | Return list of supplemental group ids associated with the current process. Availability: Unix. Note On Mac OS X, getgroups() behavior differs somewhat from other Unix platforms. If the Python interpreter was built with a deployment target of 10.5 or earlier, getgroups() returns the list of effective group ids associated with the current user process; this list is limited to a system-defined number of entries, typically 16, and may be modified by calls to setgroups() if suitably privileged. If built with a deployment target greater than 10.5, getgroups() returns the current group access list for the user associated with the effective user id of the process; the group access list may change over the lifetime of the process, it is not affected by calls to setgroups(), and its length is not limited to 16. The deployment target value, MACOSX_DEPLOYMENT_TARGET, can be obtained with sysconfig.get_config_var(). | |
doc_2733 |
The number of inputs. Data attribute containing the number of arguments the ufunc treats as input. Examples >>> np.add.nin
2
>>> np.multiply.nin
2
>>> np.power.nin
2
>>> np.exp.nin
1 | |
doc_2734 | Computes the inverse of hfft(). input must be a real-valued signal, interpreted in the Fourier domain. The IFFT of a real signal is Hermitian-symmetric, X[i] = conj(X[-i]). ihfft() represents this in the one-sided form where only the positive frequencies below the Nyquist frequency are included. To compute the full output, use ifft(). Parameters
input (Tensor) – the real input tensor
n (int, optional) – Signal length. If given, the input will either be zero-padded or trimmed to this length before computing the Hermitian IFFT.
dim (int, optional) – The dimension along which to take the one dimensional Hermitian IFFT.
norm (str, optional) –
Normalization mode. For the backward transform (ihfft()), these correspond to:
"forward" - no normalization
"backward" - normalize by 1/n
"ortho" - normalize by 1/sqrt(n) (making the IFFT orthonormal) Calling the forward transform (hfft()) with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make ihfft() the exact inverse. Default is "backward" (normalize by 1/n). Example >>> t = torch.arange(5)
>>> t
tensor([0, 1, 2, 3, 4])
>>> torch.fft.ihfft(t)
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j])
Compare against the full output from ifft(): >>> torch.fft.ifft(t)
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j, -0.5000+0.1625j,
-0.5000+0.6882j]) | |
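NumPy's np.fft.ihfft follows the same one-sided convention and the same default "backward" normalization; the example above can be cross-checked without PyTorch (this assumes NumPy is available):

```python
import numpy as np

t = np.arange(5)
spec = np.fft.ihfft(t)   # one-sided Hermitian IFFT, output length n//2 + 1
print(spec.shape)        # (3,) -- only frequencies up to Nyquist are kept
print(spec[0])           # DC term equals the mean of the input: 2
```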
doc_2735 | See Migration guide for more details. tf.compat.v1.raw_ops.DecodeGif
tf.raw_ops.DecodeGif(
contents, name=None
)
GIF images with frame or transparency compression are not supported. On Linux and MacOS systems, convert animated GIFs from compressed to uncompressed by running: convert $src.gif -coalesce $dst.gif
This op also supports decoding JPEGs and PNGs, though it is cleaner to use tf.io.decode_image.
Args
contents A Tensor of type string. 0-D. The GIF-encoded image.
name A name for the operation (optional).
Returns A Tensor of type uint8. | |
doc_2736 | Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size. Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple. Keyword Arguments
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example: >>> torch.zeros(2, 3)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> torch.zeros(5)
tensor([ 0., 0., 0., 0., 0.]) | |
doc_2737 |
Keymap to associate with this tool. list[str]: List of keys that will trigger this tool when a keypress event is emitted on self.figure.canvas. | |
doc_2738 | Defaults to '%(expression)s OVER (%(window)s)'. If only the expression argument is provided, the window clause will be blank. | |
doc_2739 |
Set the edgecolor. Parameters
c : color
Notes This method does not modify the facecolor (which defaults to "none"), unlike the Patch.set_color method defined in the parent class. Use Patch.set_facecolor to set the facecolor. | |
doc_2740 |
Alias for get_linestyle. | |
doc_2741 | tf.compat.v1.gfile.ListDirectory(
dirname
)
The list is in arbitrary order. It does not contain the special entries "." and "..".
Args
dirname string, path to a directory
Returns [filename1, filename2, ... filenameN] as strings
Raises errors.NotFoundError if directory doesn't exist | |
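For local paths the contract matches os.listdir from the standard library (arbitrary order, no "." or ".." entries); a sketch of that behaviour without TensorFlow:

```python
import os
import tempfile

d = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    open(os.path.join(d, name), "w").close()

entries = os.listdir(d)   # analogous to gfile.ListDirectory(d) for local dirs
print(sorted(entries))    # ['a.txt', 'b.txt'] -- no '.' or '..' entries
```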
doc_2742 | Returns a new tensor with the sine of the elements of input. \(\text{out}_{i} = \sin(\text{input}_{i})\)
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-0.5461, 0.1347, -2.7266, -0.2746])
>>> torch.sin(a)
tensor([-0.5194, 0.1343, -0.4032, -0.2711]) | |
doc_2743 | Takes a list of valid values and returns a “compressed” version of those values – in a single value. For example, SplitDateTimeField is a subclass which combines a time field and a date field into a datetime object. This method must be implemented in the subclasses. | |
doc_2744 | Filesystem path to the application directory, e.g. '/usr/lib/pythonX.Y/dist-packages/django/contrib/admin'. In most cases, Django can automatically detect and set this, but you can also provide an explicit override as a class attribute on your AppConfig subclass. In a few situations this is required; for instance if the app package is a namespace package with multiple paths. | |
doc_2745 |
Get whether the Axes rectangle patch is drawn. | |
doc_2746 | A function definition.
name is a raw string of the function name.
args is an arguments node.
body is the list of nodes inside the function.
decorator_list is the list of decorators to be applied, stored outermost first (i.e. the first in the list will be applied last).
returns is the return annotation.
type_comment is an optional string with the type annotation as a comment.
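The fields above can be inspected by parsing a small function (the function name and arguments here are arbitrary examples):

```python
import ast

src = "@staticmethod\ndef greet(name, excited=False):\n    return name"
fn = ast.parse(src).body[0]           # an ast.FunctionDef node

print(fn.name)                        # 'greet' -- raw string of the name
print([a.arg for a in fn.args.args])  # ['name', 'excited'] -- from the arguments node
print(len(fn.body))                   # 1 -- the return statement
print(len(fn.decorator_list))         # 1 -- @staticmethod, outermost first
```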
doc_2747 |
Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. | |
doc_2748 |
Left bound for the interval. | |
doc_2749 |
Return peaks in a straight line Hough transform. Identifies most prominent lines separated by a certain angle and distance in a Hough transform. Non-maximum suppression with different sizes is applied separately in the first (distances) and second (angles) dimension of the Hough space to identify peaks. Parameters
hspace(N, M) array
Hough space returned by the hough_line function.
angles(M,) array
Angles returned by the hough_line function. Assumed to be continuous. (angles[-1] - angles[0] == PI).
dists(N, ) array
Distances returned by the hough_line function.
min_distanceint, optional
Minimum distance separating lines (maximum filter size for first dimension of hough space).
min_angleint, optional
Minimum angle separating lines (maximum filter size for second dimension of hough space).
thresholdfloat, optional
Minimum intensity of peaks. Default is 0.5 * max(hspace).
num_peaksint, optional
Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks coordinates based on peak intensity. Returns
accum, angles, diststuple of array
Peak values in Hough space, angles and distances. Examples >>> from skimage.transform import hough_line, hough_line_peaks
>>> from skimage.draw import line
>>> img = np.zeros((15, 15), dtype=bool)
>>> rr, cc = line(0, 0, 14, 14)
>>> img[rr, cc] = 1
>>> rr, cc = line(0, 14, 14, 0)
>>> img[cc, rr] = 1
>>> hspace, angles, dists = hough_line(img)
>>> hspace, angles, dists = hough_line_peaks(hspace, angles, dists)
>>> len(angles)
2 | |
doc_2750 | The etag parsed and unquoted. Ranges always operate on strong etags so the weakness information is not necessary. | |
doc_2751 | The number of signals which the process may queue. Availability: Linux 2.6.8 or later. New in version 3.4. | |
doc_2752 | Create a new Snapshot instance with a filtered traces sequence, filters is a list of DomainFilter and Filter instances. If filters is an empty list, return a new Snapshot instance with a copy of the traces. All inclusive filters are applied at once, a trace is ignored if no inclusive filters match it. A trace is ignored if at least one exclusive filter matches it. Changed in version 3.6: DomainFilter instances are now also accepted in filters. | |
doc_2753 | Wait until notified or until a timeout occurs. If the calling thread has not acquired the lock when this method is called, a RuntimeError is raised. This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call for the same condition variable in another thread, or until the optional timeout occurs. Once awakened or timed out, it re-acquires the lock and returns. When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). When the underlying lock is an RLock, it is not released using its release() method, since this may not actually unlock the lock when it was acquired multiple times recursively. Instead, an internal interface of the RLock class is used, which really unlocks it even when it has been recursively acquired several times. Another internal interface is then used to restore the recursion level when the lock is reacquired. The return value is True unless a given timeout expired, in which case it is False. Changed in version 3.2: Previously, the method always returned None. | |
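The release-and-reacquire behaviour can be sketched with two threads (the 0.2 s pause and 5 s timeout below are arbitrary choices for the illustration):

```python
import threading
import time

cond = threading.Condition()
results = []

def waiter():
    with cond:
        # wait() releases the lock while blocked and reacquires it on wakeup;
        # it returns True here because notify() arrives before the timeout.
        results.append(cond.wait(timeout=5.0))

t = threading.Thread(target=waiter)
t.start()
time.sleep(0.2)        # give the waiter time to acquire the lock and block
with cond:
    cond.notify()
t.join()
print(results)         # [True]
```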
doc_2754 |
Annotate the point xy with text text. In the simplest form, the text is placed at xy. Optionally, the text can be displayed in another position xytext. An arrow pointing from the text to the annotated point xy can then be added by defining arrowprops. Parameters
textstr
The text of the annotation.
xy(float, float)
The point (x, y) to annotate. The coordinate system is determined by xycoords.
xytext(float, float), default: xy
The position (x, y) to place the text at. The coordinate system is determined by textcoords.
xycoordsstr or Artist or Transform or callable or (float, float), default: 'data'
The coordinate system that xy is given in. The following types of values are supported:
One of the following strings:
Value Description
'figure points' Points from the lower left of the figure
'figure pixels' Pixels from the lower left of the figure
'figure fraction' Fraction of figure from lower left
'subfigure points' Points from the lower left of the subfigure
'subfigure pixels' Pixels from the lower left of the subfigure
'subfigure fraction' Fraction of subfigure from lower left
'axes points' Points from lower left corner of axes
'axes pixels' Pixels from lower left corner of axes
'axes fraction' Fraction of axes from lower left
'data' Use the coordinate system of the object being annotated (default)
'polar' (theta, r) if not native 'data' coordinates Note that 'subfigure pixels' and 'figure pixels' are the same for the parent figure, so users who want code that is usable in a subfigure can use 'subfigure pixels'. An Artist: xy is interpreted as a fraction of the artist's Bbox. E.g. (0, 0) would be the lower left corner of the bounding box and (0.5, 1) would be the center top of the bounding box. A Transform to transform xy to screen coordinates.
A function with one of the following signatures: def transform(renderer) -> Bbox
def transform(renderer) -> Transform
where renderer is a RendererBase subclass. The result of the function is interpreted like the Artist and Transform cases above. A tuple (xcoords, ycoords) specifying separate coordinate systems for x and y. xcoords and ycoords must each be of one of the above described types. See Advanced Annotations for more details.
textcoordsstr or Artist or Transform or callable or (float, float), default: value of xycoords
The coordinate system that xytext is given in. All xycoords values are valid as well as the following strings:
Value Description
'offset points' Offset (in points) from the xy value
'offset pixels' Offset (in pixels) from the xy value
arrowpropsdict, optional
The properties used to draw a FancyArrowPatch arrow between the positions xy and xytext. Defaults to None, i.e. no arrow is drawn. For historical reasons there are two different ways to specify arrows, "simple" and "fancy": Simple arrow: If arrowprops does not contain the key 'arrowstyle' the allowed keys are:
Key Description
width The width of the arrow in points
headwidth The width of the base of the arrow head in points
headlength The length of the arrow head in points
shrink Fraction of total length to shrink from both ends
? Any key to matplotlib.patches.FancyArrowPatch The arrow is attached to the edge of the text box, the exact position (corners or centers) depending on where it's pointing to. Fancy arrow: This is used if 'arrowstyle' is provided in the arrowprops. Valid keys are the following FancyArrowPatch parameters:
Key Description
arrowstyle the arrow style
connectionstyle the connection style
relpos see below; default is (0.5, 0.5)
patchA default is bounding box of the text
patchB default is None
shrinkA default is 2 points
shrinkB default is 2 points
mutation_scale default is text size (in points)
mutation_aspect default is 1.
? any key for matplotlib.patches.PathPatch The exact starting point position of the arrow is defined by relpos. It's a tuple of relative coordinates of the text box, where (0, 0) is the lower left corner and (1, 1) is the upper right corner. Values <0 and >1 are supported and specify points outside the text box. By default (0.5, 0.5) the starting point is centered in the text box.
annotation_clipbool or None, default: None
Whether to draw the annotation when the annotation point xy is outside the axes area. If True, the annotation will only be drawn when xy is within the axes. If False, the annotation will always be drawn. If None, the annotation will only be drawn when xy is within the axes and xycoords is 'data'. **kwargs
Additional kwargs are passed to Text. Returns
Annotation
See also Advanced Annotations
| |
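A minimal usage sketch combining xy, xytext and a "fancy" arrow (assumes the non-interactive Agg backend so no GUI is required):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; no window is opened
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ann = ax.annotate(
    "peak",
    xy=(3, 9),                   # the annotated point, in data coordinates
    xytext=(1.5, 8),             # where the text itself is placed
    arrowprops=dict(arrowstyle="->"))  # 'arrowstyle' selects a fancy arrow
print(ann.get_text())            # peak
```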
doc_2755 | In-place version of add() | |
doc_2756 | A TLSVersion enum member representing the highest supported TLS version. The value defaults to TLSVersion.MAXIMUM_SUPPORTED. The attribute is read-only for protocols other than PROTOCOL_TLS, PROTOCOL_TLS_CLIENT, and PROTOCOL_TLS_SERVER. The attributes maximum_version, minimum_version and SSLContext.options all affect the supported SSL and TLS versions of the context. The implementation does not prevent invalid combination. For example a context with OP_NO_TLSv1_2 in options and maximum_version set to TLSVersion.TLSv1_2 will not be able to establish a TLS 1.2 connection. Note This attribute is not available unless the ssl module is compiled with OpenSSL 1.1.0g or newer. New in version 3.7. | |
doc_2757 |
Return the values of the located ticks given vmin and vmax. Note To get tick locations with the vmin and vmax values defined automatically for the associated axis simply call the Locator instance: >>> print(type(loc))
<type 'Locator'>
>>> print(loc())
[1, 2, 3, 4] | |
doc_2758 |
Draw samples from the Dirichlet distribution. Draw size samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution. The Dirichlet distribution is a conjugate prior of a multinomial distribution in Bayesian inference. Parameters
alphasequence of floats, length k
Parameter of the distribution (length k for sample of length k).
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n), then m * n * k samples are drawn. Default is None, in which case a vector of length k is returned. Returns
samplesndarray,
The drawn samples, of shape (size, k). Raises
ValueError
If any value in alpha is less than or equal to zero Notes The Dirichlet distribution is a distribution over vectors \(x\) that fulfil the conditions \(x_i>0\) and \(\sum_{i=1}^k x_i = 1\). The probability density function \(p\) of a Dirichlet-distributed random vector \(X\) is proportional to \[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\] where \(\alpha\) is a vector containing the positive concentration parameters. The method uses the following property for computation: let \(Y\) be a random vector which has components that follow a standard gamma distribution, then \(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\) is Dirichlet-distributed References 1
David MacKay, “Information Theory, Inference and Learning Algorithms,” chapter 23, http://www.inference.org.uk/mackay/itila/ 2
Wikipedia, “Dirichlet distribution”, https://en.wikipedia.org/wiki/Dirichlet_distribution Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had, on average, a designated average length, but allowing some variation in the relative sizes of the pieces. >>> s = np.random.default_rng().dirichlet((10, 5, 3), 20).transpose()
>>> import matplotlib.pyplot as plt
>>> plt.barh(range(20), s[0])
>>> plt.barh(range(20), s[1], left=s[0], color='g')
>>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r')
>>> plt.title("Lengths of Strings") | |
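Each drawn vector is positive and sums to one, which is easy to verify (the seed and sample shape below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
s = rng.dirichlet((10, 5, 3), size=4)   # 4 samples of dimension k = 3

print(s.shape)                          # (4, 3)
print(s.sum(axis=1))                    # each row sums to 1.0
```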
doc_2759 | Locates a bitmap file of the name name.xpm or name in one of the bitmap directories (see the tix_addbitmapdir() method). By using tix_getbitmap(), you can avoid hard coding the pathnames of the bitmap files in your application. When successful, it returns the complete pathname of the bitmap file, prefixed with the character @. The returned value can be used to configure the bitmap option of the Tk and Tix widgets. | |
doc_2760 | The content of the comment as a string. The attribute contains all characters between the leading <!-- and trailing -->, but does not include them. | |
doc_2761 | A FileCookieJar that can load from and save cookies to disk in the Mozilla cookies.txt file format (which is also used by the Lynx and Netscape browsers). Note This loses information about RFC 2965 cookies, and also about newer or non-standard cookie-attributes such as port. Warning Back up your cookies before saving if you have cookies whose loss / corruption would be inconvenient (there are some subtleties which may lead to slight changes in the file over a load / save round-trip). Also note that cookies saved while Mozilla is running will get clobbered by Mozilla. | |
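A minimal save/load round-trip (the temporary file path is an example; an empty jar still writes a valid cookies.txt header):

```python
import http.cookiejar
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")

jar = http.cookiejar.MozillaCookieJar(path)
jar.save()                          # writes the Netscape-format file

jar2 = http.cookiejar.MozillaCookieJar()
jar2.load(path)                     # reads it back; no cookies were stored
print(len(jar2))                    # 0
```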
doc_2762 |
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D ellipses. The functional model of the ellipse is: xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)
yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)
d = sqrt((x - xt)**2 + (y - yt)**2)
where (xt, yt) is the closest point on the ellipse to (x, y). Thus d is the shortest distance from the point to the ellipse. The estimator is based on a least squares minimization. The optimal solution is computed directly, no iterations are required. This leads to a simple, stable and robust fitting method. The params attribute contains the parameters in the following order: xc, yc, a, b, theta
Examples >>> xy = EllipseModel().predict_xy(np.linspace(0, 2 * np.pi, 25),
... params=(10, 15, 4, 8, np.deg2rad(30)))
>>> ellipse = EllipseModel()
>>> ellipse.estimate(xy)
True
>>> np.round(ellipse.params, 2)
array([10. , 15. , 4. , 8. , 0.52])
>>> np.round(abs(ellipse.residuals(xy)), 5)
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Ellipse model parameters in the following order xc, yc, a, b, theta.
__init__()
Initialize self. See help(type(self)) for accurate signature.
estimate(data)
Estimate ellipse model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. References
1
Halir, R.; Flusser, J. “Numerically stable direct least squares fitting of ellipses”. In Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization. WSCG (Vol. 98, pp. 125-132).
predict_xy(t, params=None)
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(5, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data)
Determine residuals of data to model. For each point the shortest distance to the ellipse is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point. | |
doc_2763 | get the width of the Surface get_width() -> width Return the width of the Surface in pixels. | |
doc_2764 | Test that first and second are equal. If the values do not compare equal, the test will fail. In addition, if first and second are the exact same type and one of list, tuple, dict, set, frozenset or str or any type that a subclass registers with addTypeEqualityFunc() the type-specific equality function will be called in order to generate a more useful default error message (see also the list of type-specific methods). Changed in version 3.1: Added the automatic calling of type-specific equality function. Changed in version 3.2: assertMultiLineEqual() added as the default type equality function for comparing strings. | |
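A short sketch of the type-specific comparison in action (the class and method names are arbitrary):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_lists(self):
        # both operands are lists, so the list-specific equality
        # function produces a rich diff on failure
        self.assertEqual([1, 2, 3], [1, 2, 3])

    def test_strings(self):
        # str vs str uses assertMultiLineEqual under the hood
        self.assertEqual("hello", "hello")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())        # True
```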
doc_2765 | See Migration guide for more details. tf.compat.v1.linalg.normalize
tf.linalg.normalize(
tensor, ord='euclidean', axis=None, name=None
)
This uses tf.linalg.norm to compute the norm along axis. This function can compute several different vector norms (the 1-norm, the Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).
Args
tensor Tensor of types float32, float64, complex64, complex128
ord Order of the norm. Supported values are 'fro', 'euclidean', 1, 2, np.inf and any positive real number yielding the corresponding p-norm. Default is 'euclidean' which is equivalent to Frobenius norm if tensor is a matrix and equivalent to 2-norm for vectors. Some restrictions apply: a) The Frobenius norm 'fro' is not defined for vectors, b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', 1, 2, np.inf are supported. See the description of axis on how to compute norms for a batch of vectors or matrices stored in a tensor.
axis If axis is None (the default), the input is considered a vector and a single vector norm is computed over the entire set of values in the tensor, i.e. norm(tensor, ord=ord) is equivalent to norm(reshape(tensor, [-1]), ord=ord). If axis is a Python integer, the input is considered a batch of vectors, and axis determines the axis in tensor over which to compute vector norms. If axis is a 2-tuple of Python integers it is considered a batch of matrices and axis determines the axes in tensor over which to compute a matrix norm. Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass axis=[-2,-1] instead of axis=None to make sure that matrix norms are computed.
name The name of the op.
Returns
normalized A normalized Tensor with the same shape as tensor.
norm The computed norms with the same shape and dtype as tensor, but with the final axis of size 1 instead. Same as running tf.cast(tf.linalg.norm(tensor, ord, axis, keepdims=True), tensor.dtype).
Raises
ValueError If ord or axis is invalid. | |
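What the op computes for the default Euclidean norm can be followed with a NumPy sketch (TensorFlow is not required for the arithmetic):

```python
import numpy as np

t = np.array([[3.0, 4.0],
              [0.0, 2.0]])
norm = np.linalg.norm(t, axis=-1, keepdims=True)  # per-row 2-norms
normalized = t / norm                             # unit-norm rows

print(norm.ravel())      # [5. 2.]
print(normalized[0])     # [0.6 0.8]
```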
doc_2766 | Method called on an input line when the command prefix is not recognized. If this method is not overridden, it prints an error message and returns. | |
doc_2767 |
Scalar method identical to the corresponding array attribute. Please see ndarray.byteswap. | |
doc_2768 |
Load sample images for image manipulation. Loads both sample images, china and flower. Read more in the User Guide. Returns
dataBunch
Dictionary-like object, with the following attributes.
imageslist of ndarray of shape (427, 640, 3)
The two sample images.
filenameslist
The filenames for the images.
DESCRstr
The full description of the dataset. Examples To load the data and visualize the images: >>> from sklearn.datasets import load_sample_images
>>> dataset = load_sample_images()
>>> len(dataset.images)
2
>>> first_img_data = dataset.images[0]
>>> first_img_data.shape
(427, 640, 3)
>>> first_img_data.dtype
dtype('uint8') | |
doc_2769 |
Bases: matplotlib.ticker.Formatter Return fixed strings for tick labels based only on position, not value. Note FixedFormatter should only be used together with FixedLocator. Otherwise, the labels may end up in unexpected positions. Set the sequence seq of strings that will be used for labels. get_offset()
set_offset_string(ofs) | |
doc_2770 | CCompiler_compile(self, sources[, ...]) Compile one or more source files.
CCompiler_customize(self, dist[, need_cxx]) Do any platform-specific customization of a compiler instance.
CCompiler_customize_cmd(self, cmd[, ignore]) Customize compiler using distutils command.
CCompiler_cxx_compiler(self) Return the C++ compiler.
CCompiler_find_executables(self) Does nothing here, but is called by the get_version method and can be overridden by subclasses.
CCompiler_get_version(self[, force, ok_status]) Return compiler version, or None if compiler is not available.
CCompiler_object_filenames(self, ...[, ...]) Return the name of the object files for the given source files.
CCompiler_show_customization(self) Print the compiler customizations to stdout.
CCompiler_spawn(self, cmd[, display, env]) Execute a command in a sub-process.
gen_lib_options(compiler, library_dirs, ...)
new_compiler([plat, compiler, verbose, ...])
replace_method(klass, method_name, func)
simple_version_match([pat, ignore, start]) Simple matching of version numbers, for use in CCompiler and FCompiler. | |
doc_2771 | bytearray.swapcase()
Return a copy of the sequence with all the lowercase ASCII characters converted to their corresponding uppercase counterpart and vice-versa. For example: >>> b'Hello World'.swapcase()
b'hELLO wORLD'
Lowercase ASCII characters are those byte values in the sequence b'abcdefghijklmnopqrstuvwxyz'. Uppercase ASCII characters are those byte values in the sequence b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'. Unlike str.swapcase(), it is always the case that bin.swapcase().swapcase() == bin for the binary versions. Case conversions are symmetrical in ASCII, even though that is not generally true for arbitrary Unicode code points. Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made. | |
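The not-in-place behaviour and the exact round-trip are easy to observe:

```python
ba = bytearray(b"Hello World")
swapped = ba.swapcase()

print(swapped)                    # bytearray(b'hELLO wORLD')
print(ba)                         # original object is unchanged
print(swapped.swapcase() == ba)   # True -- ASCII case conversion round-trips
```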
doc_2772 |
If using a GUI backend with pyplot, display the figure window. If the figure was not created using figure, it will lack a FigureManagerBase, and this method will raise an AttributeError. Warning This does not manage a GUI event loop. Consequently, the figure may only be shown briefly or not shown at all if you or your environment are not managing an event loop. Proper use cases for Figure.show include running this from a GUI application or an IPython shell. If you're running a pure Python shell or executing a non-GUI Python script, you should use matplotlib.pyplot.show instead, which takes care of managing the event loop for you. Parameters
warnbool, default: True
If True and we are not running headless (i.e. on Linux with an unset DISPLAY), issue warning when called on a non-GUI backend. | |
doc_2773 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_2774 | Return a deep copy of this object. | |
doc_2775 |
Set the edgecolor(s) of the collection. Parameters
c : color or list of colors or 'face'
The collection edgecolor(s). If a sequence, the patches cycle through it. If 'face', match the facecolor. | |
doc_2776 | See torch.diag() | |
doc_2777 |
Apply a function along an axis of the DataFrame. Objects passed to the function are Series objects whose index is either the DataFrame’s index (axis=0) or the DataFrame’s columns (axis=1). By default (result_type=None), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the result_type argument. Parameters
func:function
Function to apply to each column or row.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Axis along which the function is applied: 0 or ‘index’: apply function to each column. 1 or ‘columns’: apply function to each row.
raw:bool, default False
Determines if row or column is passed as a Series or ndarray object: False : passes each row or column as a Series to the function. True : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance.
result_type:{‘expand’, ‘reduce’, ‘broadcast’, None}, default None
These only act when axis=1 (columns): ‘expand’ : list-like results will be turned into columns. ‘reduce’ : returns a Series if possible rather than expanding list-like results. This is the opposite of ‘expand’. ‘broadcast’ : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained. The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns.
args:tuple
Positional arguments to pass to func in addition to the array/series. **kwargs
Additional keyword arguments to pass as keywords arguments to func. Returns
Series or DataFrame
Result of applying func along the given axis of the DataFrame. See also DataFrame.applymap
For elementwise operations. DataFrame.aggregate
Only perform aggregating type operations. DataFrame.transform
Only perform transforming type operations. Notes Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. Examples
>>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])
>>> df
A B
0 4 9
1 4 9
2 4 9
Using a numpy universal function (in this case the same as np.sqrt(df)):
>>> df.apply(np.sqrt)
A B
0 2.0 3.0
1 2.0 3.0
2 2.0 3.0
Using a reducing function on either axis
>>> df.apply(np.sum, axis=0)
A 12
B 27
dtype: int64
>>> df.apply(np.sum, axis=1)
0 13
1 13
2 13
dtype: int64
Returning a list-like will result in a Series
>>> df.apply(lambda x: [1, 2], axis=1)
0 [1, 2]
1 [1, 2]
2 [1, 2]
dtype: object
Passing result_type='expand' will expand list-like results to columns of a Dataframe
>>> df.apply(lambda x: [1, 2], axis=1, result_type='expand')
0 1
0 1 2
1 1 2
2 1 2
Returning a Series inside the function is similar to passing result_type='expand'. The resulting column names will be the Series index.
>>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
foo bar
0 1 2
1 1 2
2 1 2
Passing result_type='broadcast' will ensure the same shape result, whether list-like or scalar is returned by the function, and broadcast it along the axis. The resulting column names will be the originals.
>>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast')
A B
0 1 2
1 1 2
2 1 2 | |
doc_2778 | the server protocol to use. defaults to HTTP/1.1 | |
doc_2779 | sklearn.metrics.fowlkes_mallows_score(labels_true, labels_pred, *, sparse=False)
Measure the similarity of two clusterings of a set of points. New in version 0.18. The Fowlkes-Mallows index (FMI) is defined as the geometric mean of precision and recall: FMI = TP / sqrt((TP + FP) * (TP + FN))
Where TP is the number of True Positives (i.e. the number of pairs of points that belong in the same cluster in both labels_true and labels_pred), FP is the number of False Positives (i.e. the number of pairs of points that belong in the same cluster in labels_true but not in labels_pred) and FN is the number of False Negatives (i.e. the number of pairs of points that belong in the same cluster in labels_pred but not in labels_true). The score ranges from 0 to 1. A high value indicates a good similarity between two clusters. Read more in the User Guide. Parameters
labels_trueint array, shape = (n_samples,)
A clustering of the data into disjoint subsets.
labels_predarray, shape = (n_samples, )
A clustering of the data into disjoint subsets.
sparsebool, default=False
Compute contingency matrix internally with sparse matrix. Returns
scorefloat
The resulting Fowlkes-Mallows score. References
1
E. B. Fowlkes and C. L. Mallows, 1983. “A method for comparing two hierarchical clusterings”. Journal of the American Statistical Association
2
Wikipedia entry for the Fowlkes-Mallows Index Examples Perfect labelings are both homogeneous and complete, hence have score 1.0: >>> from sklearn.metrics.cluster import fowlkes_mallows_score
>>> fowlkes_mallows_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> fowlkes_mallows_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
If class members are completely split across different clusters, the assignment is totally random, hence the FMI is zero: >>> fowlkes_mallows_score([0, 0, 0, 0], [0, 1, 2, 3])
0.0 | |
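The pairwise definition above can also be sketched directly in plain Python. The helper name `fmi_from_pairs` is ours, not part of scikit-learn, and this O(n²) loop is only for illustration (the library computes the score from a contingency matrix):

```python
from itertools import combinations
from math import sqrt

def fmi_from_pairs(labels_true, labels_pred):
    """Sketch of the Fowlkes-Mallows index computed directly from pairs
    of points, following FMI = TP / sqrt((TP + FP) * (TP + FN))."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        if same_true and same_pred:
            tp += 1   # pair grouped together in both clusterings
        elif same_true:
            fp += 1   # together in labels_true only
        elif same_pred:
            fn += 1   # together in labels_pred only
    if tp == 0:
        return 0.0
    return tp / sqrt((tp + fp) * (tp + fn))
```

On the examples above, this reproduces 1.0 for the two perfect labelings and 0.0 for the completely split one.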
doc_2780 | class sklearn.gaussian_process.kernels.Exponentiation(kernel, exponent) [source]
The Exponentiation kernel takes one base kernel and a scalar parameter \(p\) and combines them via \[k_{exp}(X, Y) = k(X, Y) ^p\] Note that the __pow__ magic method is overridden, so Exponentiation(RBF(), 2) is equivalent to using the ** operator with RBF() ** 2. Read more in the User Guide. New in version 0.18. Parameters
kernelKernel
The base kernel
exponentfloat
The exponent for the base kernel Attributes
bounds
Returns the log-transformed bounds on the theta.
hyperparameters
Returns a list of all hyperparameters.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on discrete structures.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import (RationalQuadratic,
... Exponentiation)
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = Exponentiation(RationalQuadratic(), exponent=2)
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=5,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.419...
>>> gpr.predict(X[:1,:], return_std=True)
(array([635.5...]), array([0.559...]))
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_Y, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameters.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on discrete structures.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | |
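As a sketch of the construction k_exp(X, Y) = k(X, Y)^p, here is a minimal pure-Python version using a hand-rolled 1-D RBF base kernel. Both helper names are ours and are not part of scikit-learn:

```python
from math import exp

def rbf(x, y, length_scale=1.0):
    # A 1-D RBF base kernel, used only to illustrate the construction.
    return exp(-((x - y) ** 2) / (2.0 * length_scale ** 2))

def exponentiation(base, p):
    # Mirrors Exponentiation(kernel, exponent): k_exp(x, y) = k(x, y) ** p.
    def k(x, y):
        return base(x, y) ** p
    return k

k2 = exponentiation(rbf, 2)
```

Evaluating `k2(x, y)` gives exactly `rbf(x, y) ** 2`, which is the same equivalence the `**` operator overload expresses for real kernels (`RBF() ** 2`).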
doc_2781 | Returns a new view of the self tensor with singleton dimensions expanded to a larger size. Passing -1 as the size for a dimension means not changing the size of that dimension. Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1. Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. Parameters
*sizes (torch.Size or int...) – the desired expanded size Warning More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. Example: >>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
>>> x.expand(-1, 4) # -1 means not changing the size of that dimension
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]]) | |
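The example above can be modeled in plain Python. This toy helper (`expand2d`, our name) only mimics the shape semantics; the real `expand` returns a stride-0 view and allocates no new memory, which is exactly why the warning about aliased elements applies:

```python
def expand2d(column, rows, cols):
    """Toy model of Tensor.expand for a (rows, 1) input: each row's single
    element is repeated across `cols` columns. Unlike the real expand,
    this sketch builds new lists rather than a zero-stride view."""
    assert len(column) == rows and all(len(r) == 1 for r in column)
    return [[r[0]] * cols for r in column]
```

For the input `[[1], [2], [3]]` expanded to (3, 4), this yields the same values as `x.expand(3, 4)` above.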
doc_2782 |
Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters
Xarray-like of (n_samples, n_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
radiusfloat, default=None
Limiting distance of neighbors to return. The default is the value passed to the constructor.
return_distancebool, default=True
Whether or not to return the distances.
sort_resultsbool, default=False
If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns
neigh_distndarray of shape (n_samples,) of arrays
Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter.
neigh_indndarray of shape (n_samples,) of arrays
An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NeighborsClassifier class from an array representing our data set and ask who’s the closest point to [1, 1, 1]: >>> import numpy as np
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.6)
>>> neigh.fit(samples)
NearestNeighbors(radius=1.6)
>>> rng = neigh.radius_neighbors([[1., 1., 1.]])
>>> print(np.asarray(rng[0][0]))
[1.5 0.5]
>>> print(np.asarray(rng[1][0]))
[1 2]
The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time. | |
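A brute-force sketch of the same single-point query in plain Python (the helper name is ours; scikit-learn uses tree-based or matrix-based search internally, so only the boundary-inclusive semantics are being illustrated):

```python
from math import dist  # Python 3.8+

def radius_neighbors(samples, query, radius):
    """Return (distances, indices) of all samples within `radius` of
    `query`. Boundary points (d == radius) are included; results are
    not sorted by distance."""
    distances, indices = [], []
    for i, point in enumerate(samples):
        d = dist(point, query)
        if d <= radius:
            distances.append(d)
            indices.append(i)
    return distances, indices
```

On the data from the example above it returns distances `[1.5, 0.5]` and indices `[1, 2]`, matching the library output.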
doc_2783 | Insert a new item with value x in the array before position i. Negative values are treated as being relative to the end of the array. | |
doc_2784 |
Bases: object A multi-page PDF file. Notes In reality PdfPages is a thin wrapper around PdfFile, in order to avoid confusion when using savefig and forgetting the format argument. Examples >>> import matplotlib.pyplot as plt
>>> # Initialize:
>>> with PdfPages('foo.pdf') as pdf:
... # As many times as you like, create a figure fig and save it:
... fig = plt.figure()
... pdf.savefig(fig)
... # When no figure is specified the current figure is saved
... pdf.savefig()
Create a new PdfPages object. Parameters
filenamestr or path-like or file-like
Plots using PdfPages.savefig will be written to a file at this location. The file is opened at once and any older file with the same name is overwritten.
keep_emptybool, optional
If set to False, then empty pdf files will be deleted automatically when closed.
metadatadict, optional
Information dictionary object (see PDF reference section 10.2.1 'Document Information Dictionary'), e.g.: {'Creator': 'My software', 'Author': 'Me', 'Title': 'Awesome'}. The standard keys are 'Title', 'Author', 'Subject', 'Keywords', 'Creator', 'Producer', 'CreationDate', 'ModDate', and 'Trapped'. Values have been predefined for 'Creator', 'Producer' and 'CreationDate'. They can be removed by setting them to None. attach_note(text, positionRect=[- 100, - 100, 0, 0])[source]
Add a new text note to the page to be saved next. The optional positionRect specifies the position of the new note on the page. It is outside the page by default to make sure it is invisible on printouts.
close()[source]
Finalize this object, making the underlying file a complete PDF file.
get_pagecount()[source]
Return the current number of pages in the multipage pdf file.
infodict()[source]
Return a modifiable information dictionary object (see PDF reference section 10.2.1 'Document Information Dictionary').
keep_empty
savefig(figure=None, **kwargs)[source]
Save a Figure to this file as a new page. Any other keyword arguments are passed to savefig. Parameters
figureFigure or int, default: the active figure
The figure, or index of the figure, that is saved to the file. | |
doc_2785 |
Compute the (Moore-Penrose) pseudo-inverse of a matrix. Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all large singular values. Changed in version 1.14: Can now operate on stacks of matrices Parameters
a(…, M, N) array_like
Matrix or stack of matrices to be pseudo-inverted.
rcond(…) array_like of float
Cutoff for small singular values. Singular values less than or equal to rcond * largest_singular_value are set to zero. Broadcasts against the stack of matrices.
hermitianbool, optional
If True, a is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.17.0. Returns
B(…, N, M) ndarray
The pseudo-inverse of a. If a is a matrix instance, then so is B. Raises
LinAlgError
If the SVD computation does not converge. See also scipy.linalg.pinv
Similar function in SciPy. scipy.linalg.pinv2
Similar function in SciPy (SVD-based). scipy.linalg.pinvh
Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix. Notes The pseudo-inverse of a matrix A, denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\). It can be shown that if \(Q_1 \Sigma Q_2^T = A\) is the singular value decomposition of A, then \(A^+ = Q_2 \Sigma^+ Q_1^T\), where \(Q_{1,2}\) are orthogonal matrices, \(\Sigma\) is a diagonal matrix consisting of A’s so-called singular values, (followed, typically, by zeros), and then \(\Sigma^+\) is simply the diagonal matrix consisting of the reciprocals of A’s singular values (again, followed by zeros). [1] References 1
G. Strang, Linear Algebra and Its Applications, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142. Examples The following example checks that a * a+ * a == a and a+ * a * a+ == a+: >>> a = np.random.randn(9, 6)
>>> B = np.linalg.pinv(a)
>>> np.allclose(a, np.dot(a, np.dot(B, a)))
True
>>> np.allclose(B, np.dot(B, np.dot(a, B)))
True | |
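The defining identities hold for any matrix, not just the random one above. A short sketch (assuming NumPy is available) also checks that `pinv` agrees with `lstsq` on a least-squares problem:

```python
import numpy as np

# A small tall matrix; pinv(A) @ b is the least-squares solution of Ax = b.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 1.0]])
B = np.linalg.pinv(A)

# The two identities from the Examples section:
assert np.allclose(A, A @ B @ A)
assert np.allclose(B, B @ A @ B)

# x = pinv(A) @ b minimizes ||Ax - b||, matching np.linalg.lstsq:
b = np.array([1.0, 2.0, 3.0])
x_pinv = B @ b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x_pinv, x_lstsq)
```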
doc_2786 |
Other Members
FLOAT tf.dtypes.DType
FLOAT16 tf.dtypes.DType
GRAPHVIZ_DOT 3
INT16 tf.dtypes.DType
INT32 tf.dtypes.DType
INT64 tf.dtypes.DType
INT8 tf.dtypes.DType
QUANTIZED_UINT8 tf.dtypes.DType
STRING tf.dtypes.DType
TFLITE 2 | |
doc_2787 | os.O_RSYNC
os.O_SYNC
os.O_NDELAY
os.O_NONBLOCK
os.O_NOCTTY
os.O_CLOEXEC
The above constants are only available on Unix. Changed in version 3.3: Add O_CLOEXEC constant. | |
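A short Unix-only sketch combining these flags with os.open (the temporary file is only for illustration):

```python
import os
import tempfile

# Create a small file to read back.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

# Open it read-only with the Unix-only O_NONBLOCK and O_CLOEXEC flags.
fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK | os.O_CLOEXEC)
data = os.read(fd, 5)
os.close(fd)
os.unlink(path)
```

O_CLOEXEC ensures the descriptor is closed automatically across exec(), without a separate fcntl call.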
doc_2788 |
Alias for get_linewidth. | |
doc_2789 | Remove element elem from the set. Raises KeyError if elem is not contained in the set. | |
doc_2790 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseTensorDenseMatMul
tf.raw_ops.SparseTensorDenseMatMul(
a_indices, a_values, a_shape, b, adjoint_a=False, adjoint_b=False, name=None
)
No validity checking is performed on the indices of A. However, the following input format is recommended for optimal behavior: if adjoint_a == false: A should be sorted in lexicographically increasing order. Use SparseReorder if you're not sure. if adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).
Args
a_indices A Tensor. Must be one of the following types: int32, int64. 2-D. The indices of the SparseTensor, size [nnz, 2] Matrix.
a_values A Tensor. 1-D. The values of the SparseTensor, size [nnz] Vector.
a_shape A Tensor of type int64. 1-D. The shape of the SparseTensor, size [2] Vector.
b A Tensor. Must have the same type as a_values. 2-D. A dense Matrix.
adjoint_a An optional bool. Defaults to False. Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).
adjoint_b An optional bool. Defaults to False. Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).
name A name for the operation (optional).
Returns A Tensor. Has the same type as a_values. | |
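The computation (with adjoint_a=False and adjoint_b=False) can be sketched in plain Python over COO-style inputs. The helper name is ours, and it skips the validity checking and ordering recommendations discussed above:

```python
def sparse_dense_matmul(a_indices, a_values, a_shape, b):
    """Sketch of SparseTensorDenseMatMul without adjoints: computes A @ b
    where A is given as [row, col] index pairs plus a parallel values list."""
    rows, _ = a_shape
    cols_b = len(b[0])
    out = [[0.0] * cols_b for _ in range(rows)]
    for (i, j), v in zip(a_indices, a_values):
        # Each stored entry A[i, j] = v contributes v * b[j, :] to row i.
        for k in range(cols_b):
            out[i][k] += v * b[j][k]
    return out
```

For a diagonal A = diag(1, 2) stored sparsely and a dense 2x2 b, this scales each row of b by the corresponding diagonal entry.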
doc_2791 | A read-only property for the variance of a normal distribution. Equal to the square of the standard deviation. | |
doc_2792 |
Connect the major canvas events to methods. | |
doc_2793 |
Turn on colorbar minor ticks. | |
doc_2794 |
Return filter function to be used for agg filter. | |
doc_2795 | operator.lt(a, b)
operator.le(a, b)
operator.eq(a, b)
operator.ne(a, b)
operator.ge(a, b)
operator.gt(a, b)
operator.__lt__(a, b)
operator.__le__(a, b)
operator.__eq__(a, b)
operator.__ne__(a, b)
operator.__ge__(a, b)
operator.__gt__(a, b)
Perform “rich comparisons” between a and b. Specifically, lt(a, b) is equivalent to a < b, le(a, b) is equivalent to a <= b, eq(a, b) is equivalent to a == b, ne(a, b) is equivalent to a != b, gt(a, b) is equivalent to a > b and ge(a, b) is equivalent to a >= b. Note that these functions can return any value, which may or may not be interpretable as a Boolean value. See Comparisons for more information about rich comparisons.
The logical operations are also generally applicable to all objects, and support truth tests, identity tests, and boolean operations:
operator.not_(obj)
operator.__not__(obj)
Return the outcome of not obj. (Note that there is no __not__() method for object instances; only the interpreter core defines this operation. The result is affected by the __bool__() and __len__() methods.)
operator.truth(obj)
Return True if obj is true, and False otherwise. This is equivalent to using the bool constructor.
operator.is_(a, b)
Return a is b. Tests object identity.
operator.is_not(a, b)
Return a is not b. Tests object identity.
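For example:

```python
import operator

assert operator.truth([1]) is True      # same as bool([1])
assert operator.truth([]) is False
assert operator.not_(0) is True         # outcome of `not 0`

x = object()
assert operator.is_(x, x) is True       # x is x
assert operator.is_not(x, object())     # two distinct objects
```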
The mathematical and bitwise operations are the most numerous:
operator.abs(obj)
operator.__abs__(obj)
Return the absolute value of obj.
operator.add(a, b)
operator.__add__(a, b)
Return a + b, for a and b numbers.
operator.and_(a, b)
operator.__and__(a, b)
Return the bitwise and of a and b.
operator.floordiv(a, b)
operator.__floordiv__(a, b)
Return a // b.
operator.index(a)
operator.__index__(a)
Return a converted to an integer. Equivalent to a.__index__().
operator.inv(obj)
operator.invert(obj)
operator.__inv__(obj)
operator.__invert__(obj)
Return the bitwise inverse of the number obj. This is equivalent to ~obj.
operator.lshift(a, b)
operator.__lshift__(a, b)
Return a shifted left by b.
operator.mod(a, b)
operator.__mod__(a, b)
Return a % b.
operator.mul(a, b)
operator.__mul__(a, b)
Return a * b, for a and b numbers.
operator.matmul(a, b)
operator.__matmul__(a, b)
Return a @ b. New in version 3.5.
operator.neg(obj)
operator.__neg__(obj)
Return obj negated (-obj).
operator.or_(a, b)
operator.__or__(a, b)
Return the bitwise or of a and b.
operator.pos(obj)
operator.__pos__(obj)
Return obj positive (+obj).
operator.pow(a, b)
operator.__pow__(a, b)
Return a ** b, for a and b numbers.
operator.rshift(a, b)
operator.__rshift__(a, b)
Return a shifted right by b.
operator.sub(a, b)
operator.__sub__(a, b)
Return a - b.
operator.truediv(a, b)
operator.__truediv__(a, b)
Return a / b where 2/3 is .66 rather than 0. This is also known as “true” division.
operator.xor(a, b)
operator.__xor__(a, b)
Return the bitwise exclusive or of a and b.
Operations which work with sequences (some of them with mappings too) include:
operator.concat(a, b)
operator.__concat__(a, b)
Return a + b for a and b sequences.
operator.contains(a, b)
operator.__contains__(a, b)
Return the outcome of the test b in a. Note the reversed operands.
operator.countOf(a, b)
Return the number of occurrences of b in a.
operator.delitem(a, b)
operator.__delitem__(a, b)
Remove the value of a at index b.
operator.getitem(a, b)
operator.__getitem__(a, b)
Return the value of a at index b.
operator.indexOf(a, b)
Return the index of the first occurrence of b in a.
operator.setitem(a, b, c)
operator.__setitem__(a, b, c)
Set the value of a at index b to c.
operator.length_hint(obj, default=0)
Return an estimated length for the object obj. First try to return its actual length, then an estimate using object.__length_hint__(), and finally return the default value. New in version 3.4.
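For example, length_hint falls back through __len__, then __length_hint__, then the default:

```python
import operator

assert operator.length_hint([1, 2, 3]) == 3        # actual length via __len__

class Hinted:
    # No __len__, but an estimate via __length_hint__:
    def __length_hint__(self):
        return 10

assert operator.length_hint(Hinted()) == 10
assert operator.length_hint(object(), 7) == 7      # neither method: the default
```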
The operator module also defines tools for generalized attribute and item lookups. These are useful for making fast field extractors as arguments for map(), sorted(), itertools.groupby(), or other functions that expect a function argument.
operator.attrgetter(attr)
operator.attrgetter(*attrs)
Return a callable object that fetches attr from its operand. If more than one attribute is requested, returns a tuple of attributes. The attribute names can also contain dots. For example: After f = attrgetter('name'), the call f(b) returns b.name. After f = attrgetter('name', 'date'), the call f(b) returns (b.name, b.date). After f = attrgetter('name.first', 'name.last'), the call f(b) returns (b.name.first, b.name.last). Equivalent to: def attrgetter(*items):
    if any(not isinstance(item, str) for item in items):
        raise TypeError('attribute name must be a string')
    if len(items) == 1:
        attr = items[0]
        def g(obj):
            return resolve_attr(obj, attr)
    else:
        def g(obj):
            return tuple(resolve_attr(obj, attr) for attr in items)
    return g

def resolve_attr(obj, attr):
    for name in attr.split("."):
        obj = getattr(obj, name)
    return obj
operator.itemgetter(item)
operator.itemgetter(*items)
Return a callable object that fetches item from its operand using the operand’s __getitem__() method. If multiple items are specified, returns a tuple of lookup values. For example: After f = itemgetter(2), the call f(r) returns r[2]. After g = itemgetter(2, 5, 3), the call g(r) returns (r[2], r[5], r[3]). Equivalent to: def itemgetter(*items):
    if len(items) == 1:
        item = items[0]
        def g(obj):
            return obj[item]
    else:
        def g(obj):
            return tuple(obj[item] for item in items)
    return g
The items can be any type accepted by the operand’s __getitem__() method. Dictionaries accept any hashable value. Lists, tuples, and strings accept an index or a slice: >>> itemgetter(1)('ABCDEFG')
'B'
>>> itemgetter(1, 3, 5)('ABCDEFG')
('B', 'D', 'F')
>>> itemgetter(slice(2, None))('ABCDEFG')
'CDEFG'
>>> soldier = dict(rank='captain', name='dotterbart')
>>> itemgetter('rank')(soldier)
'captain'
Example of using itemgetter() to retrieve specific fields from a tuple record: >>> inventory = [('apple', 3), ('banana', 2), ('pear', 5), ('orange', 1)]
>>> getcount = itemgetter(1)
>>> list(map(getcount, inventory))
[3, 2, 5, 1]
>>> sorted(inventory, key=getcount)
[('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)]
operator.methodcaller(name, /, *args, **kwargs)
Return a callable object that calls the method name on its operand. If additional arguments and/or keyword arguments are given, they will be given to the method as well. For example: After f = methodcaller('name'), the call f(b) returns b.name(). After f = methodcaller('name', 'foo', bar=1), the call f(b) returns b.name('foo', bar=1). Equivalent to: def methodcaller(name, /, *args, **kwargs):
    def caller(obj):
        return getattr(obj, name)(*args, **kwargs)
    return caller
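Usage examples:

```python
from operator import methodcaller

upcase = methodcaller('upper')
assert upcase('hello') == 'HELLO'                  # calls 'hello'.upper()

replace_l = methodcaller('replace', 'l', 'L')
assert replace_l('hello') == 'heLLo'               # 'hello'.replace('l', 'L')

# Handy with map() for applying the same method across a sequence:
assert list(map(methodcaller('strip'), [' a ', ' b '])) == ['a', 'b']
```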
Mapping Operators to Functions
This table shows how abstract operations correspond to operator symbols in the Python syntax and the functions in the operator module.
Operation Syntax Function
Addition a + b add(a, b)
Concatenation seq1 + seq2 concat(seq1, seq2)
Containment Test obj in seq contains(seq, obj)
Division a / b truediv(a, b)
Division a // b floordiv(a, b)
Bitwise And a & b and_(a, b)
Bitwise Exclusive Or a ^ b xor(a, b)
Bitwise Inversion ~ a invert(a)
Bitwise Or a | b or_(a, b)
Exponentiation a ** b pow(a, b)
Identity a is b is_(a, b)
Identity a is not b is_not(a, b)
Indexed Assignment obj[k] = v setitem(obj, k, v)
Indexed Deletion del obj[k] delitem(obj, k)
Indexing obj[k] getitem(obj, k)
Left Shift a << b lshift(a, b)
Modulo a % b mod(a, b)
Multiplication a * b mul(a, b)
Matrix Multiplication a @ b matmul(a, b)
Negation (Arithmetic) - a neg(a)
Negation (Logical) not a not_(a)
Positive + a pos(a)
Right Shift a >> b rshift(a, b)
Slice Assignment seq[i:j] = values setitem(seq, slice(i, j), values)
Slice Deletion del seq[i:j] delitem(seq, slice(i, j))
Slicing seq[i:j] getitem(seq, slice(i, j))
String Formatting s % obj mod(s, obj)
Subtraction a - b sub(a, b)
Truth Test obj truth(obj)
Ordering a < b lt(a, b)
Ordering a <= b le(a, b)
Equality a == b eq(a, b)
Difference a != b ne(a, b)
Ordering a >= b ge(a, b)
Ordering a > b gt(a, b)
In-place Operators
Many operations have an “in-place” version. Listed below are functions providing a more primitive access to in-place operators than the usual syntax does; for example, the statement x += y is equivalent to x = operator.iadd(x, y). Another way to put it is to say that z = operator.iadd(x, y) is equivalent to the compound statement z = x; z += y. In those examples, note that when an in-place method is called, the computation and assignment are performed in two separate steps. The in-place functions listed below only do the first step, calling the in-place method. The second step, assignment, is not handled. For immutable targets such as strings, numbers, and tuples, the updated value is computed, but not assigned back to the input variable: >>> a = 'hello'
>>> iadd(a, ' world')
'hello world'
>>> a
'hello'
For mutable targets such as lists and dictionaries, the in-place method will perform the update, so no subsequent assignment is necessary: >>> s = ['h', 'e', 'l', 'l', 'o']
>>> iadd(s, [' ', 'w', 'o', 'r', 'l', 'd'])
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
>>> s
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
operator.iadd(a, b)
operator.__iadd__(a, b)
a = iadd(a, b) is equivalent to a += b.
operator.iand(a, b)
operator.__iand__(a, b)
a = iand(a, b) is equivalent to a &= b.
operator.iconcat(a, b)
operator.__iconcat__(a, b)
a = iconcat(a, b) is equivalent to a += b for a and b sequences.
operator.ifloordiv(a, b)
operator.__ifloordiv__(a, b)
a = ifloordiv(a, b) is equivalent to a //= b.
operator.ilshift(a, b)
operator.__ilshift__(a, b)
a = ilshift(a, b) is equivalent to a <<= b.
operator.imod(a, b)
operator.__imod__(a, b)
a = imod(a, b) is equivalent to a %= b.
operator.imul(a, b)
operator.__imul__(a, b)
a = imul(a, b) is equivalent to a *= b.
operator.imatmul(a, b)
operator.__imatmul__(a, b)
a = imatmul(a, b) is equivalent to a @= b. New in version 3.5.
operator.ior(a, b)
operator.__ior__(a, b)
a = ior(a, b) is equivalent to a |= b.
operator.ipow(a, b)
operator.__ipow__(a, b)
a = ipow(a, b) is equivalent to a **= b.
operator.irshift(a, b)
operator.__irshift__(a, b)
a = irshift(a, b) is equivalent to a >>= b.
operator.isub(a, b)
operator.__isub__(a, b)
a = isub(a, b) is equivalent to a -= b.
operator.itruediv(a, b)
operator.__itruediv__(a, b)
a = itruediv(a, b) is equivalent to a /= b.
operator.ixor(a, b)
operator.__ixor__(a, b)
a = ixor(a, b) is equivalent to a ^= b. | |
doc_2796 |
Return the Figure instance the artist belongs to. | |
doc_2797 |
Reverse the order of elements in an array along the given axis. The shape of the array is preserved, but the elements are reordered. New in version 1.12.0. Parameters
marray_like
Input array.
axisNone or int or tuple of ints, optional
Axis or axes along which to flip over. The default, axis=None, will flip over all of the axes of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, flipping is performed on all of the axes specified in the tuple. Changed in version 1.15.0: None and tuples of axes are supported Returns
outarray_like
A view of m with the entries of axis reversed. Since a view is returned, this operation is done in constant time. See also flipud
Flip an array vertically (axis=0). fliplr
Flip an array horizontally (axis=1). Notes flip(m, 0) is equivalent to flipud(m). flip(m, 1) is equivalent to fliplr(m). flip(m, n) corresponds to m[...,::-1,...] with ::-1 at position n. flip(m) corresponds to m[::-1,::-1,...,::-1] with ::-1 at all positions. flip(m, (0, 1)) corresponds to m[::-1,::-1,...] with ::-1 at position 0 and position 1. Examples >>> A = np.arange(8).reshape((2,2,2))
>>> A
array([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> np.flip(A, 0)
array([[[4, 5],
[6, 7]],
[[0, 1],
[2, 3]]])
>>> np.flip(A, 1)
array([[[2, 3],
[0, 1]],
[[6, 7],
[4, 5]]])
>>> np.flip(A)
array([[[7, 6],
[5, 4]],
[[3, 2],
[1, 0]]])
>>> np.flip(A, (0, 2))
array([[[5, 4],
[7, 6]],
[[1, 0],
[3, 2]]])
>>> A = np.random.randn(3,4,5)
>>> np.all(np.flip(A,2) == A[:,:,::-1,...])
True | |
doc_2798 |
Return the clip path. | |
doc_2799 | See Migration guide for more details. tf.compat.v1.raw_ops.CudnnRNNParamsSize
tf.raw_ops.CudnnRNNParamsSize(
num_layers, num_units, input_size, T, S, rnn_mode='lstm',
input_mode='linear_input', direction='unidirectional',
dropout=0, seed=0, seed2=0, num_proj=0, name=None
)
Return the params size that can be used by the Cudnn RNN model. Subsequent weight allocation and initialization should use this size. num_layers: Specifies the number of layers in the RNN model. num_units: Specifies the size of the hidden state. input_size: Specifies the size of the input state. rnn_mode: Indicates the type of the RNN model. input_mode: Indicate whether there is a linear projection between the input and The actual computation before the first layer. 'skip_input' is only allowed when input_size == num_units; 'auto_select' implies 'skip_input' when input_size == num_units; otherwise, it implies 'linear_input'. direction: Indicates whether a bidirectional model will be used. dir = (direction == bidirectional) ? 2 : 1 dropout: dropout probability. When set to 0., dropout is disabled. seed: the 1st part of a seed to initialize dropout. seed2: the 2nd part of a seed to initialize dropout. params_size: The size of the params buffer that should be allocated and initialized for this RNN model. Note that this params buffer may not be compatible across GPUs. Please use CudnnRNNParamsWeights and CudnnRNNParamsBiases to save and restore them in a way that is compatible across different runs.
Args
num_layers A Tensor of type int32.
num_units A Tensor of type int32.
input_size A Tensor of type int32.
T A tf.DType from: tf.half, tf.float32, tf.float64.
S A tf.DType from: tf.int32, tf.int64.
rnn_mode An optional string from: "rnn_relu", "rnn_tanh", "lstm", "gru". Defaults to "lstm".
input_mode An optional string from: "linear_input", "skip_input", "auto_select". Defaults to "linear_input".
direction An optional string from: "unidirectional", "bidirectional". Defaults to "unidirectional".
dropout An optional float. Defaults to 0.
seed An optional int. Defaults to 0.
seed2 An optional int. Defaults to 0.
num_proj An optional int. Defaults to 0.
name A name for the operation (optional).
Returns A Tensor of type S. |