skimage.measure.approximate_polygon(coords, tolerance) [source]
Approximate a polygonal chain with the specified tolerance. It is based on the Douglas-Peucker algorithm. Note that the approximated polygon is always within the convex hull of the original polygon. Parameters
coords(N, 2) array
Coordinate array.
tolerancefloat
Maximum distance from original points of polygon to approximated polygonal chain. If tolerance is 0, the original coordinate array is returned. Returns
coords(M, 2) array
Approximated polygonal chain where M <= N. References
1
https://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm | skimage.api.skimage.measure#skimage.measure.approximate_polygon |
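Under the hood, the Douglas-Peucker idea is a recursive farthest-point split: keep the point farthest from the chord between the endpoints, and recurse on both halves while that distance exceeds the tolerance. A minimal numpy-only sketch (the name rdp is illustrative; skimage's compiled implementation differs in detail):

```python
import numpy as np

def rdp(points, tolerance):
    """Minimal Ramer-Douglas-Peucker sketch (not skimage's implementation)."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    diff = points - start
    if norm == 0:
        dists = np.linalg.norm(diff, axis=1)
    else:
        # Perpendicular distance of each point to the start-end chord.
        dists = np.abs(chord[0] * diff[:, 1] - chord[1] * diff[:, 0]) / norm
    i = int(np.argmax(dists))
    if dists[i] > tolerance:
        # Keep the farthest point and simplify both halves recursively.
        left = rdp(points[: i + 1], tolerance)
        right = rdp(points[i:], tolerance)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])
```

With a tolerance larger than the wiggle, only the endpoints survive; with a smaller tolerance, every point is kept.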
skimage.measure.block_reduce(image, block_size, func=<function sum>, cval=0, func_kwargs=None) [source]
Downsample image by applying function func to local blocks. This function is useful for max and mean pooling, for example. Parameters
imagendarray
N-dimensional input image.
block_sizearray_like
Array containing down-sampling integer factor along each axis.
funccallable
Function object which is used to calculate the return value for each local block. This function must implement an axis parameter. Primary functions are numpy.sum, numpy.min, numpy.max, numpy.mean and numpy.median. See also func_kwargs.
cvalfloat
Constant padding value if image is not perfectly divisible by the block size.
func_kwargsdict
Keyword arguments passed to func. Notably useful for passing dtype argument to np.mean. Takes dictionary of inputs, e.g. func_kwargs={'dtype': np.float16}. Returns
imagendarray
Down-sampled image with same number of dimensions as input image. Examples >>> from skimage.measure import block_reduce
>>> image = np.arange(3*3*4).reshape(3, 3, 4)
>>> image
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]],
[[24, 25, 26, 27],
[28, 29, 30, 31],
[32, 33, 34, 35]]])
>>> block_reduce(image, block_size=(3, 3, 1), func=np.mean)
array([[[16., 17., 18., 19.]]])
>>> image_max1 = block_reduce(image, block_size=(1, 3, 4), func=np.max)
>>> image_max1
array([[[11]],
[[23]],
[[35]]])
>>> image_max2 = block_reduce(image, block_size=(3, 1, 4), func=np.max)
>>> image_max2
array([[[27],
[31],
[35]]]) | skimage.api.skimage.measure#skimage.measure.block_reduce |
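When the block size divides the image shape exactly, the mean pooling in the example above can also be expressed with a plain numpy reshape, which makes the mechanism easy to see. A sketch of the equivalence:

```python
import numpy as np

image = np.arange(3 * 3 * 4).reshape(3, 3, 4)

# Give each (3, 3, 1) block its own pair of axes, then average over the
# per-block axes; equivalent to block_reduce(image, (3, 3, 1), np.mean)
# when the block size divides the image shape exactly.
pooled = image.reshape(1, 3, 1, 3, 4, 1).mean(axis=(1, 3, 5))
# pooled == array([[[16., 17., 18., 19.]]]), matching the example above
```

block_reduce additionally pads with cval when the shape is not divisible, which this sketch omits.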
class skimage.measure.CircleModel [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D circles. The functional model of the circle is: r**2 = (x - xc)**2 + (y - yc)**2
This estimator minimizes the squared distances from all points to the circle: min{ sum((r - sqrt((x_i - xc)**2 + (y_i - yc)**2))**2) }
A minimum number of 3 points is required to solve for the parameters. Examples >>> t = np.linspace(0, 2 * np.pi, 25)
>>> xy = CircleModel().predict_xy(t, params=(2, 3, 4))
>>> model = CircleModel()
>>> model.estimate(xy)
True
>>> tuple(np.round(model.params, 5))
(2.0, 3.0, 4.0)
>>> res = model.residuals(xy)
>>> np.abs(np.round(res, 9))
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Circle model parameters in the following order xc, yc, r.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate circle model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds.
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(3, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the circle is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point. | skimage.api.skimage.measure#skimage.measure.CircleModel |
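CircleModel minimizes geometric (total least squares) distances. As a rough illustration of circle fitting, a simpler algebraic (Kåsa) fit recovers the same parameters on clean data; this is a sketch for intuition, not skimage's estimator:

```python
import numpy as np

def fit_circle_kasa(xy):
    # Algebraic (Kasa) fit: linear least squares on the identity
    # x^2 + y^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2).
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc, k = sol
    r = np.sqrt(k + xc**2 + yc**2)
    return xc, yc, r
```

For noisy data the algebraic fit is biased toward smaller radii, which is one reason a geometric estimator like CircleModel is preferred.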
estimate(data) [source]
Estimate circle model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. | skimage.api.skimage.measure#skimage.measure.CircleModel.estimate |
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(3, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates. | skimage.api.skimage.measure#skimage.measure.CircleModel.predict_xy |
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the circle is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point. | skimage.api.skimage.measure#skimage.measure.CircleModel.residuals |
__init__() [source]
Initialize self. See help(type(self)) for accurate signature. | skimage.api.skimage.measure#skimage.measure.CircleModel.__init__ |
class skimage.measure.EllipseModel [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D ellipses. The functional model of the ellipse is: xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)
yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)
d = sqrt((x - xt)**2 + (y - yt)**2)
where (xt, yt) is the closest point on the ellipse to (x, y). Thus d is the shortest distance from the point to the ellipse. The estimator is based on a least squares minimization. The optimal solution is computed directly; no iterations are required. This leads to a simple, stable and robust fitting method. The params attribute contains the parameters in the following order: xc, yc, a, b, theta
Examples >>> xy = EllipseModel().predict_xy(np.linspace(0, 2 * np.pi, 25),
... params=(10, 15, 4, 8, np.deg2rad(30)))
>>> ellipse = EllipseModel()
>>> ellipse.estimate(xy)
True
>>> np.round(ellipse.params, 2)
array([10. , 15. , 4. , 8. , 0.52])
>>> np.round(abs(ellipse.residuals(xy)), 5)
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Ellipse model parameters in the following order xc, yc, a, b, theta.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate ellipse model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. References
1
Halir, R.; Flusser, J. “Numerically stable direct least squares fitting of ellipses”. In Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization. WSCG (Vol. 98, pp. 125-132).
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(5, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the ellipse is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point. | skimage.api.skimage.measure#skimage.measure.EllipseModel |
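The parametric model quoted above can be evaluated directly; a numpy sketch mirroring what predict_xy computes (the helper name ellipse_xy is illustrative):

```python
import numpy as np

def ellipse_xy(t, params):
    # Direct evaluation of the functional model given in the docstring:
    # xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)
    # yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)
    xc, yc, a, b, theta = params
    ct, st = np.cos(theta), np.sin(theta)
    x = xc + a * ct * np.cos(t) - b * st * np.sin(t)
    y = yc + a * st * np.cos(t) + b * ct * np.sin(t)
    return np.stack([x, y], axis=-1)
```

With theta = 0, t = 0 lands at (xc + a, yc) and t = pi/2 at (xc, yc + b), which is a quick sanity check on the parameter order.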
estimate(data) [source]
Estimate ellipse model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. References
1
Halir, R.; Flusser, J. “Numerically stable direct least squares fitting of ellipses”. In Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization. WSCG (Vol. 98, pp. 125-132). | skimage.api.skimage.measure#skimage.measure.EllipseModel.estimate |
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in circle in radians. Angles start to count from positive x-axis to positive y-axis in a right-handed system.
params(5, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates. | skimage.api.skimage.measure#skimage.measure.EllipseModel.predict_xy |
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the ellipse is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point. | skimage.api.skimage.measure#skimage.measure.EllipseModel.residuals |
__init__() [source]
Initialize self. See help(type(self)) for accurate signature. | skimage.api.skimage.measure#skimage.measure.EllipseModel.__init__ |
skimage.measure.euler_number(image, connectivity=None) [source]
Calculate the Euler characteristic in binary image. For 2D objects, the Euler number is the number of objects minus the number of holes. For 3D objects, the Euler number is obtained as the number of objects plus the number of holes, minus the number of tunnels, or loops. Parameters
image: (N, M) ndarray or (N, M, D) ndarray.
2D or 3D images. If image is not binary, all values strictly greater than zero are considered as the object.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values range from 1 to input.ndim. If None, a full connectivity of input.ndim is used. 4 or 8 neighborhoods are defined for 2D images (connectivity 1 and 2, respectively). 6 or 26 neighborhoods are defined for 3D images (connectivity 1 and 3, respectively); connectivity 2 is not defined for 3D images. Returns
euler_numberint
Euler characteristic of the set of all objects in the image. Notes The Euler characteristic is an integer that describes the topology of the set of all objects in the input image. If the object is 4-connected, then the background is 8-connected, and conversely. The computation of the Euler characteristic is based on an integral geometry formula in discretized space. In practice, a neighbourhood configuration is constructed, and a LUT is applied for each configuration. The coefficients used are the ones of Ohser et al. It can be useful to compute the Euler characteristic for several connectivities. A large relative difference between results for different connectivities suggests that the image resolution (with respect to the size of objects and holes) is too low. References
1
S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838
2
Ohser J., Nagel W., Schladitz K. (2002) The Euler Number of Discretized Sets - On the Choice of Adjacency in Homogeneous Lattices. In: Mecke K., Stoyan D. (eds) Morphology of Condensed Matter. Lecture Notes in Physics, vol 600. Springer, Berlin, Heidelberg. Examples >>> import numpy as np
>>> SAMPLE = np.zeros((100, 100, 100))
>>> SAMPLE[40:60, 40:60, 40:60] = 1
>>> euler_number(SAMPLE)
1
>>> SAMPLE[45:55, 45:55, 45:55] = 0
>>> euler_number(SAMPLE)
2
>>> SAMPLE = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
... [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0],
... [0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1],
... [0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])
>>> euler_number(SAMPLE)
0
>>> euler_number(SAMPLE, connectivity=1)
2 | skimage.api.skimage.measure#skimage.measure.euler_number |
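As a concrete 2D illustration of the integral-geometry idea, the Euler number can be computed from counts of 2x2 "bit-quad" patterns (Gray's classical formulation). This is a sketch of that approach, not skimage's LUT implementation, and handles the 2D case only:

```python
import numpy as np

def euler_number_2d(image, connectivity=1):
    # 2D Euler number via bit-quad counts (Gray, 1971): slide a 2x2
    # window over the zero-padded binary image and tally patterns.
    p = np.pad(np.asarray(image) > 0, 1)
    a, b = p[:-1, :-1], p[:-1, 1:]
    c, d = p[1:, :-1], p[1:, 1:]
    s = a.astype(int) + b + c + d
    q1 = np.count_nonzero(s == 1)      # quads with exactly one foreground pixel
    q3 = np.count_nonzero(s == 3)      # quads with exactly three
    qd = np.count_nonzero((a & d & ~b & ~c) | (b & c & ~a & ~d))  # diagonal pairs
    sign = 1 if connectivity == 1 else -1   # 4- vs 8-connectivity of the object
    return (q1 - q3 + sign * 2 * qd) // 4
```

The diagonal-pair term is exactly where the choice of connectivity enters: two diagonal pixels are two objects under 4-connectivity but one under 8-connectivity.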
skimage.measure.find_contours(image, level=None, fully_connected='low', positive_orientation='low', *, mask=None) [source]
Find iso-valued contours in a 2D array for a given level value. Uses the “marching squares” method to compute the iso-valued contours of the input 2D array for a particular level value. Array values are linearly interpolated to provide better precision for the output contours. Parameters
image2D ndarray of double
Input image in which to find contours.
levelfloat, optional
Value along which to find contours in the array. By default, the level is set to (max(image) + min(image)) / 2 Changed in version 0.18: This parameter is now optional.
fully_connectedstr, {‘low’, ‘high’}
Indicates whether array elements below the given level value are to be considered fully-connected (and hence elements above the value will only be face connected), or vice-versa. (See notes below for details.)
positive_orientationstr, {‘low’, ‘high’}
Indicates whether the output contours will produce positively-oriented polygons around islands of low- or high-valued elements. If ‘low’ then contours will wind counter-clockwise around elements below the iso-value. Alternately, this means that low-valued elements are always on the left of the contour. (See below for details.)
mask2D ndarray of bool, or None
A boolean mask, True where we want to draw contours. Note that NaN values are always excluded from the considered region (mask is set to False wherever array is NaN). Returns
contourslist of (n,2)-ndarrays
Each contour is an ndarray of shape (n, 2), consisting of n (row, column) coordinates along the contour. See also
skimage.measure.marching_cubes
Notes The marching squares algorithm is a special case of the marching cubes algorithm [1]. A simple explanation is available here: http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
There is a single ambiguous case in the marching squares algorithm: when a given 2 x 2-element square has two high-valued and two low-valued elements, each pair diagonally adjacent. (Where high- and low-valued is with respect to the contour value sought.) In this case, either the high-valued elements can be ‘connected together’ via a thin isthmus that separates the low-valued elements, or vice-versa. When elements are connected together across a diagonal, they are considered ‘fully connected’ (also known as ‘face+vertex-connected’ or ‘8-connected’). Only high-valued or low-valued elements can be fully-connected; the other set will be considered as ‘face-connected’ or ‘4-connected’. By default, low-valued elements are considered fully-connected; this can be altered with the ‘fully_connected’ parameter.
Output contours are not guaranteed to be closed: contours which intersect the array edge or a masked-off region (either where mask is False or where array is NaN) will be left open. All other contours will be closed. (The closedness of a contour can be tested by checking whether the beginning point is the same as the end point.)
Contours are oriented. By default, array values lower than the contour value are to the left of the contour and values greater than the contour value are to the right. This means that contours will wind counter-clockwise (i.e. in ‘positive orientation’) around islands of low-valued pixels. This behavior can be altered with the ‘positive_orientation’ parameter.
The order of the contours in the output list is determined by the position of the smallest x,y (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon. 
Warning Array coordinates/values are assumed to refer to the center of the array element. Take a simple example input: [0, 1]. The interpolated position of 0.5 in this array is midway between the 0-element (at x=0) and the 1-element (at x=1), and thus would fall at x=0.5. This means that to find reasonable contours, it is best to find contours midway between the expected “light” and “dark” values. In particular, given a binarized array, do not choose to find contours at the low or high value of the array. This will often yield degenerate contours, especially around structures that are a single array element wide. Instead choose a middle value, as above. References
1
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422 Examples >>> a = np.zeros((3, 3))
>>> a[0, 0] = 1
>>> a
array([[1., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
>>> find_contours(a, 0.5)
[array([[0. , 0.5],
[0.5, 0. ]])] | skimage.api.skimage.measure#skimage.measure.find_contours |
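The sub-pixel coordinates in the example come from the linear interpolation described in the warning above: along each cell edge whose endpoint values bracket the level, the crossing lies at a fractional position between the two pixel centers. A tiny sketch of that step (the helper name edge_crossing is hypothetical):

```python
def edge_crossing(v0, v1, level):
    # Fractional position (0..1) of the iso-level between two adjacent
    # pixel values v0 and v1 that bracket it.
    return (level - v0) / (v1 - v0)

# For the (1, 0) pixel pair of the example above, level 0.5 falls exactly
# midway between the two pixel centers, producing coordinates like 0.5.
frac = edge_crossing(1.0, 0.0, 0.5)
```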
skimage.measure.grid_points_in_poly(shape, verts) [source]
Test whether points on a specified grid are inside a polygon. For each (r, c) coordinate on a grid, i.e. (0, 0), (0, 1) etc., test whether that point lies inside a polygon. Parameters
shapetuple (M, N)
Shape of the grid.
verts(V, 2) array
Specify the V vertices of the polygon, sorted either clockwise or anti-clockwise. The first point may (but does not need to be) duplicated. Returns
mask(M, N) ndarray of bool
True where the grid falls inside the polygon. See also
points_in_poly | skimage.api.skimage.measure#skimage.measure.grid_points_in_poly |
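The underlying test is standard even-odd ray casting evaluated at each (r, c) grid point. A slow pure-Python sketch of the idea (the function name is hypothetical; skimage's version is compiled and much faster):

```python
import numpy as np

def grid_points_in_poly_sketch(shape, verts):
    # Even-odd rule: a point is inside if a ray from it crosses the
    # polygon boundary an odd number of times.
    mask = np.zeros(shape, dtype=bool)
    n = len(verts)
    for r in range(shape[0]):
        for c in range(shape[1]):
            inside = False
            for i in range(n):
                r0, c0 = verts[i]
                r1, c1 = verts[(i + 1) % n]
                # Does edge i straddle the column of this grid point?
                if (c0 > c) != (c1 > c):
                    r_edge = r0 + (c - c0) / (c1 - c0) * (r1 - r0)
                    if r < r_edge:
                        inside = not inside
            mask[r, c] = inside
    return mask
```

Choosing non-integer vertices in the test below sidesteps the boundary ambiguity of points lying exactly on an edge.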
skimage.measure.inertia_tensor(image, mu=None) [source]
Compute the inertia tensor of the input image. Parameters
imagearray
The input image.
muarray, optional
The pre-computed central moments of image. The inertia tensor computation requires the central moments of the image. If an application requires both the central moments and the inertia tensor (for example, skimage.measure.regionprops), then it is more efficient to pre-compute them and pass them to the inertia tensor call. Returns
Tarray, shape (image.ndim, image.ndim)
The inertia tensor of the input image. \(T_{i, j}\) contains the covariance of image intensity along axes \(i\) and \(j\). References
1
https://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_tensor
2
Bernd Jähne. Spatio-Temporal Image Processing: Theory and Scientific Applications. (Chapter 8: Tensor Methods) Springer, 1993. | skimage.api.skimage.measure#skimage.measure.inertia_tensor |
skimage.measure.inertia_tensor_eigvals(image, mu=None, T=None) [source]
Compute the eigenvalues of the inertia tensor of the image. The inertia tensor measures covariance of the image intensity along the image axes. (See inertia_tensor.) The relative magnitude of the eigenvalues of the tensor is thus a measure of the elongation of a (bright) object in the image. Parameters
imagearray
The input image.
muarray, optional
The pre-computed central moments of image.
Tarray, shape (image.ndim, image.ndim)
The pre-computed inertia tensor. If T is given, mu and image are ignored. Returns
eigvalslist of float, length image.ndim
The eigenvalues of the inertia tensor of image, in descending order. Notes Computing the eigenvalues requires the inertia tensor of the input image. This is much faster if the central moments (mu) are provided, or, alternatively, one can provide the inertia tensor (T) directly. | skimage.api.skimage.measure#skimage.measure.inertia_tensor_eigvals |
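For 2D images, both the tensor and its eigenvalues can be written out explicitly from the normalized central moments; a numpy-only sketch (the [[mu02, -mu11], [-mu11, mu20]] / mu00 layout is my reading of the 2D case, so treat it as an assumption rather than the library's exact code path):

```python
import numpy as np

def inertia_tensor_2d(image):
    # Central moments mu_pq = sum image * (r - rbar)^p * (c - cbar)^q,
    # then T = [[mu02, -mu11], [-mu11, mu20]] / mu00 (assumed 2D layout).
    image = np.asarray(image, dtype=float)
    r, c = np.mgrid[: image.shape[0], : image.shape[1]]
    mu00 = image.sum()
    rbar = (r * image).sum() / mu00
    cbar = (c * image).sum() / mu00
    dr, dc = r - rbar, c - cbar
    mu20 = (image * dr**2).sum()
    mu02 = (image * dc**2).sum()
    mu11 = (image * dr * dc).sum()
    return np.array([[mu02, -mu11], [-mu11, mu20]]) / mu00

# Elongation shows up as a spread between the tensor's eigenvalues.
line = np.zeros((5, 5))
line[2, :] = 1  # a bright horizontal line: elongated along axis 1
T = inertia_tensor_2d(line)
eigvals = np.sort(np.linalg.eigvalsh(T))[::-1]  # descending order
```

For this line image the intensity varies only along the columns, so one eigenvalue is large and the other is zero.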
skimage.measure.label(input, background=None, return_num=False, connectivity=None) [source]
Label connected regions of an integer array. Two pixels are connected when they are neighbors and have the same value. In 2D, they can be neighbors either in a 1- or 2-connected sense. The value refers to the maximum number of orthogonal hops to consider a pixel/voxel a neighbor: 1-connectivity 2-connectivity diagonal connection close-up
[ ] [ ] [ ] [ ] [ ]
| \ | / | <- hop 2
[ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
| / | \ hop 1
[ ] [ ] [ ] [ ]
Parameters
inputndarray of dtype int
Image to label.
backgroundint, optional
Consider all pixels with this value as background pixels, and label them as 0. By default, 0-valued pixels are considered as background pixels.
return_numbool, optional
Whether to return the number of assigned labels.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. Returns
labelsndarray of dtype int
Labeled array, where all connected regions are assigned the same integer value.
numint, optional
Number of labels, which equals the maximum label index and is only returned if return_num is True. See also
regionprops
regionprops_table
References
1
Christophe Fiorio and Jens Gustedt, “Two linear time Union-Find strategies for image processing”, Theoretical Computer Science 154 (1996), pp. 165-181.
2
Kensheng Wu, Ekow Otoo and Arie Shoshani, “Optimizing connected component labeling algorithms”, Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864 Examples >>> import numpy as np
>>> x = np.eye(3).astype(int)
>>> print(x)
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
[0 2 0]
[0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
[2 1 2]
[2 2 1]]
>>> x = np.array([[1, 0, 0],
... [1, 1, 5],
... [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
[1 1 2]
[0 0 0]] | skimage.api.skimage.measure#skimage.measure.label |
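Conceptually, label is a flood fill over neighbors that share the same value. A slow BFS sketch for the 2D case (the name label_2d is illustrative; skimage uses the much faster union-find algorithms cited above):

```python
from collections import deque
import numpy as np

def label_2d(image, background=0, connectivity=1):
    # Breadth-first flood fill: neighbors join a region only if they
    # hold the same value as the seed pixel.
    img = np.asarray(image)
    labels = np.zeros(img.shape, dtype=int)
    if connectivity == 1:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # 4-connectivity
    else:
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)]                      # 8-connectivity
    current = 0
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if img[r, c] == background or labels[r, c]:
                continue
            current += 1
            labels[r, c] = current
            q = deque([(r, c)])
            while q:
                rr, cc = q.popleft()
                for dr, dc in offsets:
                    nr, nc = rr + dr, cc + dc
                    if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                            and not labels[nr, nc]
                            and img[nr, nc] == img[rr, cc]):
                        labels[nr, nc] = current
                        q.append((nr, nc))
    return labels
```

It reproduces the doctest behaviour above: a diagonal is three regions under connectivity 1 but one region under connectivity 2, and touching pixels with different values stay separate.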
class skimage.measure.LineModelND [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for N-dimensional lines. In contrast to ordinary least squares line estimation, this estimator minimizes the orthogonal distances of points to the estimated line. Lines are defined by a point (origin) and a unit vector (direction) according to the following vector equation: X = origin + lambda * direction
Examples >>> x = np.linspace(1, 2, 25)
>>> y = 1.5 * x + 3
>>> lm = LineModelND()
>>> lm.estimate(np.stack([x, y], axis=-1))
True
>>> tuple(np.round(lm.params, 5))
(array([1.5 , 5.25]), array([0.5547 , 0.83205]))
>>> res = lm.residuals(np.stack([x, y], axis=-1))
>>> np.abs(np.round(res, 9))
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
>>> np.round(lm.predict_y(x[:5]), 3)
array([4.5 , 4.562, 4.625, 4.688, 4.75 ])
>>> np.round(lm.predict_x(y[:5]), 3)
array([1. , 1.042, 1.083, 1.125, 1.167])
Attributes
paramstuple
Line model parameters in the following order origin, direction.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate line model from data. This minimizes the sum of shortest (orthogonal) distances from the given data points to the estimated line. Parameters
data(N, dim) array
N points in a space of dimensionality dim >= 2. Returns
successbool
True, if model estimation succeeds.
predict(x, axis=0, params=None) [source]
Predict intersection of the estimated line model with a hyperplane orthogonal to a given axis. Parameters
x(n, 1) array
Coordinates along an axis.
axisint
Axis orthogonal to the hyperplane intersecting the line.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
data(n, m) array
Predicted coordinates. Raises
ValueError
If the line is parallel to the given axis.
predict_x(y, params=None) [source]
Predict x-coordinates for 2D lines using the estimated model. Alias for: predict(y, axis=1)[:, 0]
Parameters
yarray
y-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
xarray
Predicted x-coordinates.
predict_y(x, params=None) [source]
Predict y-coordinates for 2D lines using the estimated model. Alias for: predict(x, axis=0)[:, 1]
Parameters
xarray
x-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
yarray
Predicted y-coordinates.
residuals(data, params=None) [source]
Determine residuals of data to model. For each point, the shortest (orthogonal) distance to the line is returned. It is obtained by projecting the data onto the line. Parameters
data(N, dim) array
N points in a space of dimension dim.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
residuals(N, ) array
Residual for each data point. | skimage.api.skimage.measure#skimage.measure.LineModelND |
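The orthogonal-distance line fit has a closed form: center the data, then the direction is the top right singular vector of the centered matrix. A numpy sketch reproducing the example values above (an illustration of the principle, not skimage's exact code):

```python
import numpy as np

def fit_line_tls(data):
    # Total least squares: the best-fit direction is the first right
    # singular vector of the centered data matrix.
    data = np.asarray(data, dtype=float)
    origin = data.mean(axis=0)
    _, _, vh = np.linalg.svd(data - origin, full_matrices=False)
    return origin, vh[0]

x = np.linspace(1, 2, 25)
y = 1.5 * x + 3
origin, direction = fit_line_tls(np.stack([x, y], axis=-1))
```

The recovered origin is (1.5, 5.25) and the direction is (0.5547, 0.83205) up to sign, matching the doctest in the class example.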
estimate(data) [source]
Estimate line model from data. This minimizes the sum of shortest (orthogonal) distances from the given data points to the estimated line. Parameters
data(N, dim) array
N points in a space of dimensionality dim >= 2. Returns
successbool
True, if model estimation succeeds. | skimage.api.skimage.measure#skimage.measure.LineModelND.estimate |
predict(x, axis=0, params=None) [source]
Predict intersection of the estimated line model with a hyperplane orthogonal to a given axis. Parameters
x(n, 1) array
Coordinates along an axis.
axisint
Axis orthogonal to the hyperplane intersecting the line.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
data(n, m) array
Predicted coordinates. Raises
ValueError
If the line is parallel to the given axis. | skimage.api.skimage.measure#skimage.measure.LineModelND.predict |
predict_x(y, params=None) [source]
Predict x-coordinates for 2D lines using the estimated model. Alias for: predict(y, axis=1)[:, 0]
Parameters
yarray
y-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
xarray
Predicted x-coordinates. | skimage.api.skimage.measure#skimage.measure.LineModelND.predict_x |
predict_y(x, params=None) [source]
Predict y-coordinates for 2D lines using the estimated model. Alias for: predict(x, axis=0)[:, 1]
Parameters
xarray
x-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
yarray
Predicted y-coordinates. | skimage.api.skimage.measure#skimage.measure.LineModelND.predict_y |
residuals(data, params=None) [source]
Determine residuals of data to model. For each point, the shortest (orthogonal) distance to the line is returned. It is obtained by projecting the data onto the line. Parameters
data(N, dim) array
N points in a space of dimension dim.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
residuals(N, ) array
Residual for each data point. | skimage.api.skimage.measure#skimage.measure.LineModelND.residuals |
__init__() [source]
Initialize self. See help(type(self)) for accurate signature. | skimage.api.skimage.measure#skimage.measure.LineModelND.__init__ |
skimage.measure.marching_cubes(volume, level=None, *, spacing=(1.0, 1.0, 1.0), gradient_direction='descent', step_size=1, allow_degenerate=True, method='lewiner', mask=None) [source]
Marching cubes algorithm to find surfaces in 3d volumetric data. In contrast with the approach of Lorensen et al. [2], the algorithm of Lewiner et al. is faster, resolves ambiguities, and guarantees topologically correct results. Therefore, this algorithm is generally a better choice. Parameters
volume(M, N, P) array
Input data volume to find isosurfaces. Will internally be converted to float32 if necessary.
levelfloat, optional
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats, optional
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring, optional
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite, considering the left-hand rule. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object
step_sizeint, optional
Step size in voxels. Default 1. Larger steps yield faster but coarser results. The result will always be topologically correct though.
allow_degeneratebool, optional
Whether to allow degenerate (i.e. zero-area) triangles in the end-result. Default True. If False, degenerate triangles are removed, at the cost of making the algorithm slower.
method: str, optional
One of ‘lewiner’, ‘lorensen’ or ‘_lorensen’. Specifies which of the Lewiner et al. or Lorensen et al. methods will be used. The ‘_lorensen’ flag corresponds to an old implementation that will be deprecated in version 0.19.
mask(M, N, P) array, optional
Boolean array. The marching cubes algorithm will be computed only on True elements. This will save computational time when interfaces are located within a certain region of the volume M, N, P (e.g. the top half of the cube), and also allows computing finite surfaces, i.e. open surfaces that do not end at the border of the cube. Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices.
normals(V, 3) array
The normal direction at each vertex, as calculated from the data.
values(V, ) array
Gives a measure for the maximum value of the data in the local region near each vertex. This can be used by visualization tools to apply a colormap to the mesh. See also
skimage.measure.mesh_surface_area
skimage.measure.find_contours
Notes The algorithm [1] is an improved version of Chernyaev’s Marching Cubes 33 algorithm. It is an efficient algorithm that relies on heavy use of lookup tables to handle the many different cases, keeping the algorithm relatively easy. This implementation is written in Cython, ported from Lewiner’s C++ implementation. To quantify the area of an isosurface generated by this algorithm, pass verts and faces to skimage.measure.mesh_surface_area. Regarding visualization of algorithm output, to contour a volume named myvolume about the level 0.0, using the mayavi package:
>>> from mayavi import mlab
>>> verts, faces, _, _ = marching_cubes(myvolume, 0.0)
>>> mlab.triangular_mesh([vert[0] for vert in verts],
...                      [vert[1] for vert in verts],
...                      [vert[2] for vert in verts],
...                      faces)
>>> mlab.show()
Similarly using the visvis package:
>>> import visvis as vv
>>> verts, faces, normals, values = marching_cubes(myvolume, 0.0)
>>> vv.mesh(np.fliplr(verts), faces, normals, values)
>>> vv.use().Run()
To reduce the number of triangles in the mesh for better performance, see this example using the mayavi package. References
1
Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares. Efficient implementation of Marching Cubes’ cases with topological guarantees. Journal of Graphics Tools 8(2) pp. 1-15 (december 2003). DOI:10.1080/10867651.2003.10487582
2
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422 | skimage.api.skimage.measure#skimage.measure.marching_cubes |
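The mesh_surface_area step mentioned in the Notes reduces to summing triangle areas over verts and faces: each area is half the norm of the cross product of two edge vectors. A numpy sketch (an illustration; skimage's function may differ in details):

```python
import numpy as np

def mesh_area(verts, faces):
    # Area of each triangle is half the norm of the edge cross product.
    tris = verts[faces]                 # (F, 3, 3): corner coordinates per face
    a = tris[:, 1] - tris[:, 0]
    b = tris[:, 2] - tris[:, 0]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

# A unit square in the z=0 plane, split into two triangles: total area 1.0.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
```

Degenerate (zero-area) triangles, which marching_cubes may emit when allow_degenerate is True, simply contribute nothing to the sum.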
skimage.measure.marching_cubes_classic(volume, level=None, spacing=(1.0, 1.0, 1.0), gradient_direction='descent') [source]
Classic marching cubes algorithm to find surfaces in 3d volumetric data. Note that the marching_cubes() algorithm is recommended over this algorithm, because it’s faster and produces better results. Parameters
volume(M, N, P) array of doubles
Input data volume to find isosurfaces. Will be cast to np.float64.
levelfloat
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices. See also
skimage.measure.marching_cubes
skimage.measure.mesh_surface_area
Notes The marching cubes algorithm is implemented as described in [1]. A simple explanation is available here: http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
There are several known ambiguous cases in the marching cubes algorithm. Using point labeling as in [1], Figure 4, as shown:
    v8 ------ v7
   / |       / |        y
  /  |      /  |        ^  z
v4 ------ v3   |        | /
 |  v5 ----|- v6        |/          (note: NOT right handed!)
 |  /      | /          ----> x
 | /       |/
v1 ------ v2
Most notably, if v4, v8, v2, and v6 are all >= level (or any generalization of this case) two parallel planes are generated by this algorithm, separating v4 and v8 from v2 and v6. An equally valid interpretation would be a single connected thin surface enclosing all four points. This is the best known ambiguity, though there are others. This algorithm does not attempt to resolve such ambiguities; it is a naive implementation of marching cubes as in [1], but may be a good beginning for work with more recent techniques (Dual Marching Cubes, Extended Marching Cubes, Cubic Marching Squares, etc.). Because of interactions between neighboring cubes, the isosurface(s) generated by this algorithm are NOT guaranteed to be closed, particularly for complicated contours. Furthermore, this algorithm does not guarantee a single contour will be returned. Indeed, ALL isosurfaces which cross level will be found, regardless of connectivity. The output is a triangular mesh consisting of a set of unique vertices and connecting triangles. The order of these vertices and triangles in the output list is determined by the position of the smallest x,y,z (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon. The generated mesh guarantees coherent orientation as of version 0.12. To quantify the area of an isosurface generated by this algorithm, pass outputs directly into skimage.measure.mesh_surface_area. References
1(1,2,3)
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422 | skimage.api.skimage.measure#skimage.measure.marching_cubes_classic |
skimage.measure.marching_cubes_lewiner(volume, level=None, spacing=(1.0, 1.0, 1.0), gradient_direction='descent', step_size=1, allow_degenerate=True, use_classic=False, mask=None) [source]
Lewiner marching cubes algorithm to find surfaces in 3d volumetric data. In contrast to marching_cubes_classic(), this algorithm is faster, resolves ambiguities, and guarantees topologically correct results. Therefore, this algorithm is generally a better choice, unless there is a specific need for the classic algorithm. Parameters
volume(M, N, P) array
Input data volume to find isosurfaces. Will internally be converted to float32 if necessary.
levelfloat
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite, considering the left-hand rule. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object
step_sizeint
Step size in voxels. Default 1. Larger steps yield faster but coarser results. The result will always be topologically correct though.
allow_degeneratebool
Whether to allow degenerate (i.e. zero-area) triangles in the end-result. Default True. If False, degenerate triangles are removed, at the cost of making the algorithm slower.
use_classicbool
If given and True, the classic marching cubes by Lorensen (1987) is used. This option is included for reference purposes. Note that this algorithm has ambiguities and is not guaranteed to produce a topologically correct result. The results obtained with this option are not generally the same as those of the marching_cubes_classic() function.
mask(M, N, P) array
Boolean array. The marching cubes algorithm will be computed only on True elements. This will save computational time when interfaces are located within a certain region of the volume M, N, P (e.g. the top half of the cube), and also allows computing finite surfaces, i.e. open surfaces that do not end at the border of the cube. Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices.
normals(V, 3) array
The normal direction at each vertex, as calculated from the data.
values(V, ) array
Gives a measure for the maximum value of the data in the local region near each vertex. This can be used by visualization tools to apply a colormap to the mesh. See also
skimage.measure.marching_cubes
skimage.measure.mesh_surface_area
Notes The algorithm [1] is an improved version of Chernyaev’s Marching Cubes 33 algorithm. It is an efficient algorithm that relies on heavy use of lookup tables to handle the many different cases, keeping the algorithm relatively easy. This implementation is written in Cython, ported from Lewiner’s C++ implementation. To quantify the area of an isosurface generated by this algorithm, pass verts and faces to skimage.measure.mesh_surface_area. Regarding visualization of algorithm output, to contour a volume named myvolume about the level 0.0, using the mayavi package: >>> from mayavi import mlab
>>> verts, faces, normals, values = marching_cubes_lewiner(myvolume, 0.0)
>>> mlab.triangular_mesh([vert[0] for vert in verts],
... [vert[1] for vert in verts],
... [vert[2] for vert in verts],
... faces)
>>> mlab.show()
Similarly using the visvis package: >>> import visvis as vv
>>> verts, faces, normals, values = marching_cubes_lewiner(myvolume, 0.0)
>>> vv.mesh(np.fliplr(verts), faces, normals, values)
>>> vv.use().Run()
References
1
Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares. Efficient implementation of Marching Cubes’ cases with topological guarantees. Journal of Graphics Tools 8(2) pp. 1-15 (december 2003). DOI:10.1080/10867651.2003.10487582 | skimage.api.skimage.measure#skimage.measure.marching_cubes_lewiner |
skimage.measure.mesh_surface_area(verts, faces) [source]
Compute surface area, given vertices & triangular faces Parameters
verts(V, 3) array of floats
Array containing (x, y, z) coordinates for V unique mesh vertices.
faces(F, 3) array of ints
List of length-3 lists of integers, referencing vertex coordinates as provided in verts Returns
areafloat
Surface area of the mesh, in units of [coordinate units] ** 2. See also
skimage.measure.marching_cubes
skimage.measure.marching_cubes_classic
Notes The arguments expected by this function are the first two outputs from skimage.measure.marching_cubes. For unit correct output, ensure correct spacing was passed to skimage.measure.marching_cubes. This algorithm works properly only if the faces provided are all triangles. | skimage.api.skimage.measure#skimage.measure.mesh_surface_area |
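The underlying computation can be sketched in plain NumPy: each row of faces indexes three rows of verts, and a triangle's area is half the norm of the cross product of two of its edge vectors. This is a hedged illustration of the idea, not the library code (the helper name mesh_surface_area_sketch is made up here):

```python
import numpy as np

def mesh_surface_area_sketch(verts, faces):
    """Sum of triangle areas via the cross product: area = 0.5 * |AB x AC|."""
    tris = verts[faces]                      # (F, 3, 3): corner coordinates per face
    ab = tris[:, 1] - tris[:, 0]             # first edge vector of each triangle
    ac = tris[:, 2] - tris[:, 0]             # second edge vector
    return 0.5 * np.linalg.norm(np.cross(ab, ac), axis=1).sum()

# Two triangles forming the unit square in the z=0 plane: total area 1.0.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(mesh_surface_area_sketch(verts, faces))  # 1.0
```

As with the real function, the result only carries correct units if the vertex coordinates do, which is why spacing must be passed to marching_cubes beforehand.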
skimage.measure.moments(image, order=3) [source]
Calculate all raw image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
imagenD double or uint8 array
Rasterized shape as image.
orderint, optional
Maximum order of moments. Default is 3. Returns
m(order + 1, order + 1) array
Raw image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 14.5) | skimage.api.skimage.measure#skimage.measure.moments |
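The definition behind the example above can be written out directly: M[i, j] sums the image weighted by row**i * col**j. A minimal NumPy sketch (raw_moments_sketch is a hypothetical name, not the library implementation) reproducing the centroid from the docstring:

```python
import numpy as np

def raw_moments_sketch(image, order=3):
    """Raw image moments M[i, j] = sum(image * row**i * col**j)."""
    rows, cols = np.mgrid[:image.shape[0], :image.shape[1]]
    M = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            M[i, j] = (image * rows**i * cols**j).sum()
    return M

image = np.zeros((20, 20))
image[13:17, 13:17] = 1                      # 4x4 square of ones
M = raw_moments_sketch(image)
print(M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])  # 14.5 14.5
```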
skimage.measure.moments_central(image, center=None, order=3, **kwargs) [source]
Calculate all central image moments up to a certain order. The center coordinates (cr, cc) can be calculated from the raw moments as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that central moments are translation invariant but not scale and rotation invariant. Parameters
imagenD double or uint8 array
Rasterized shape as image.
centertuple of float, optional
Coordinates of the image centroid. This will be computed if it is not provided.
orderint, optional
The maximum order of moments computed. Returns
mu(order + 1, order + 1) array
Central image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> moments_central(image, centroid)
array([[16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[20., 0., 25., 0.],
[ 0., 0., 0., 0.]]) | skimage.api.skimage.measure#skimage.measure.moments_central |
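The array in the example above follows directly from the definition mu[i, j] = sum(image * (row - cr)**i * (col - cc)**j). A hedged NumPy sketch (central_moments_sketch is a made-up helper, not the Cython implementation) reproducing the nonzero entries:

```python
import numpy as np

def central_moments_sketch(image, center, order=3):
    """Central moments mu[i, j] = sum(image * (row - cr)**i * (col - cc)**j)."""
    cr, cc = center
    rows, cols = np.mgrid[:image.shape[0], :image.shape[1]]
    mu = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            mu[i, j] = (image * (rows - cr)**i * (cols - cc)**j).sum()
    return mu

image = np.zeros((20, 20))
image[13:17, 13:17] = 1                      # same 4x4 square as the docstring example
mu = central_moments_sketch(image, (14.5, 14.5))
print(mu[0, 0], mu[0, 2], mu[2, 2])          # 16.0 20.0 25.0
```

The odd-order entries vanish because the square is symmetric about its centroid.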
skimage.measure.moments_coords(coords, order=3) [source]
Calculate all raw image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
coords(N, D) double or uint8 array
Array of N points that describe an image of D dimensionality in Cartesian space.
orderint, optional
Maximum order of moments. Default is 3. Returns
M(order + 1, order + 1, …) array
Raw image moments. (D dimensions) References
1
Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001. Examples >>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)], dtype=np.double)
>>> M = moments_coords(coords)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 15.5) | skimage.api.skimage.measure#skimage.measure.moments_coords |
skimage.measure.moments_coords_central(coords, center=None, order=3) [source]
Calculate all central image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that central moments are translation invariant but not scale and rotation invariant. Parameters
coords(N, D) double or uint8 array
Array of N points that describe an image of D dimensionality in Cartesian space. A tuple of coordinates as returned by np.nonzero is also accepted as input.
centertuple of float, optional
Coordinates of the image centroid. This will be computed if it is not provided.
orderint, optional
Maximum order of moments. Default is 3. Returns
Mc(order + 1, order + 1, …) array
Central image moments. (D dimensions) References
1
Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001. Examples >>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)])
>>> moments_coords_central(coords)
array([[16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[20., 0., 25., 0.],
[ 0., 0., 0., 0.]])
As seen above, for symmetric objects, odd-order moments (columns 1 and 3, rows 1 and 3) are zero when centered on the centroid, or center of mass, of the object (the default). If we break the symmetry by adding a new point, this no longer holds: >>> coords2 = np.concatenate((coords, [[17, 17]]), axis=0)
>>> np.round(moments_coords_central(coords2),
... decimals=2)
array([[17. , 0. , 22.12, -2.49],
[ 0. , 3.53, 1.73, 7.4 ],
[25.88, 6.02, 36.63, 8.83],
[ 4.15, 19.17, 14.8 , 39.6 ]])
Image moments and central image moments are equivalent (by definition) when the center is (0, 0): >>> np.allclose(moments_coords(coords),
... moments_coords_central(coords, (0, 0)))
True | skimage.api.skimage.measure#skimage.measure.moments_coords_central |
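For a binary image the coordinate-based route used here agrees with the image-based moments_central: each listed point contributes weight 1. A minimal NumPy sketch of the coordinate-based form (coords_central_moments_sketch is a hypothetical name):

```python
import numpy as np

def coords_central_moments_sketch(coords, center, order=3):
    """mu[i, j] = sum over points of (row - cr)**i * (col - cc)**j."""
    d = coords - np.asarray(center)          # offset of each point from the center
    mu = np.empty((order + 1, order + 1))
    for i in range(order + 1):
        for j in range(order + 1):
            mu[i, j] = (d[:, 0]**i * d[:, 1]**j).sum()
    return mu

# The same 4x4 square, given as a point list instead of an image.
coords = np.array([[r, c] for r in range(13, 17) for c in range(13, 17)], dtype=float)
mu = coords_central_moments_sketch(coords, coords.mean(axis=0))
print(mu[0, 0], mu[0, 2], mu[2, 0])          # 16.0 20.0 20.0
```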
skimage.measure.moments_hu(nu) [source]
Calculate Hu’s set of image moments (2D-only). Note that this set of moments has been proven to be translation, scale and rotation invariant. Parameters
nu(M, M) array
Normalized central image moments, where M must be >= 4. Returns
nu(7,) array
Hu’s set of image moments. References
1
M. K. Hu, “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Info. Theory, vol. IT-8, pp. 179-187, 1962
2
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
3
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
4
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
5
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 0.5
>>> image[10:12, 10:12] = 1
>>> mu = moments_central(image)
>>> nu = moments_normalized(mu)
>>> moments_hu(nu)
array([7.45370370e-01, 3.51165981e-01, 1.04049179e-01, 4.06442107e-02,
2.64312299e-03, 2.40854582e-02, 4.33680869e-19]) | skimage.api.skimage.measure#skimage.measure.moments_hu |
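The first two Hu invariants have simple closed forms in terms of the normalized central moments: phi1 = nu20 + nu02 and phi2 = (nu20 - nu02)**2 + 4 * nu11**2. A small NumPy check on the centered 4x4 square from the moments examples (sketch only; the real function computes all seven invariants):

```python
import numpy as np

# Central moments of the 4x4 square of ones centered at (14.5, 14.5).
image = np.zeros((20, 20))
image[13:17, 13:17] = 1
rows, cols = np.mgrid[:20, :20]
mu = {(i, j): (image * (rows - 14.5)**i * (cols - 14.5)**j).sum()
      for i in range(3) for j in range(3)}

# Normalized moments: nu_ij = mu_ij / mu_00 ** ((i + j) / 2 + 1)
nu = {k: v / mu[0, 0] ** ((k[0] + k[1]) / 2 + 1) for k, v in mu.items()}

phi1 = nu[2, 0] + nu[0, 2]                       # translation/scale/rotation invariant
phi2 = (nu[2, 0] - nu[0, 2])**2 + 4 * nu[1, 1]**2
print(phi1, phi2)                                # 0.15625 0.0
```

phi2 is zero here because the square's second moments are isotropic (nu20 == nu02, nu11 == 0).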
skimage.measure.moments_normalized(mu, order=3) [source]
Calculate all normalized central image moments up to a certain order. Note that normalized central moments are translation and scale invariant but not rotation invariant. Parameters
mu(M,[ …,] M) array
Central image moments, where M must be greater than or equal to order.
orderint, optional
Maximum order of moments. Default is 3. Returns
nu(order + 1,[ …,] order + 1) array
Normalized central image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> m = moments(image)
>>> centroid = (m[0, 1] / m[0, 0], m[1, 0] / m[0, 0])
>>> mu = moments_central(image, centroid)
>>> moments_normalized(mu)
array([[ nan, nan, 0.078125 , 0. ],
[ nan, 0. , 0. , 0. ],
[0.078125 , 0. , 0.00610352, 0. ],
[0. , 0. , 0. , 0. ]]) | skimage.api.skimage.measure#skimage.measure.moments_normalized |
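The normalization nu_ij = mu_ij / m_00**((i + j)/2 + 1) makes the moments scale invariant only up to discretization: for an n-by-n binary square, nu_20 works out to (n**2 - 1) / (12 * n**2), which approaches the continuous-limit value 1/12 as n grows. A hedged NumPy check (nu20_of_square is a made-up helper):

```python
import numpy as np

def nu20_of_square(n):
    """nu_20 for an n-by-n binary square, via mu_20 / mu_00 ** 2."""
    image = np.zeros((2 * n, 2 * n))
    image[:n, :n] = 1
    rows = np.mgrid[:2 * n, :2 * n][0]
    cr = (n - 1) / 2                          # row centroid of the square
    mu00 = image.sum()
    mu20 = (image * (rows - cr)**2).sum()
    return mu20 / mu00**2

print(nu20_of_square(4))    # 0.078125, matching the docstring example
print(nu20_of_square(32))   # close to the continuous-limit value 1/12
```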
skimage.measure.perimeter(image, neighbourhood=4) [source]
Calculate total perimeter of all objects in binary image. Parameters
image(N, M) ndarray
2D binary image.
neighbourhood4 or 8, optional
Neighborhood connectivity for border pixel determination. It is used to compute the contour. A higher neighbourhood widens the border on which the perimeter is computed. Returns
perimeterfloat
Total perimeter of all objects in binary image. References
1
K. Benkrid, D. Crookes. Design and FPGA Implementation of a Perimeter Estimator. The Queen’s University of Belfast. http://www.cs.qub.ac.uk/~d.crookes/webpubs/papers/perimeter.doc Examples >>> from skimage import data, util
>>> from skimage.measure import label
>>> # coins image (binary)
>>> img_coins = data.coins() > 110
>>> # total perimeter of all objects in the image
>>> perimeter(img_coins, neighbourhood=4)
7796.867...
>>> perimeter(img_coins, neighbourhood=8)
8806.268... | skimage.api.skimage.measure#skimage.measure.perimeter |
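For intuition, the crudest perimeter estimate simply counts object/background pixel edges along rows and columns (a city-block perimeter). This naive sketch is NOT the weighted contour estimator skimage.measure.perimeter implements, which is why the library's values for the coins image are non-integer:

```python
import numpy as np

def edge_count_perimeter(image):
    """Count object/background transitions along rows and columns.
    Naive city-block sketch, not skimage's weighted estimator."""
    padded = np.pad(image.astype(int), 1)     # pad so border objects still contribute
    horiz = np.abs(np.diff(padded, axis=1)).sum()
    vert = np.abs(np.diff(padded, axis=0)).sum()
    return horiz + vert

square = np.zeros((6, 6), dtype=bool)
square[1:5, 1:5] = True                       # 4x4 square: 4 sides of 4 edges each
print(edge_count_perimeter(square))           # 16
```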
skimage.measure.perimeter_crofton(image, directions=4) [source]
Calculate total Crofton perimeter of all objects in binary image. Parameters
image(N, M) ndarray
2D image. If image is not binary, all values strictly greater than zero are considered as the object.
directions2 or 4, optional
Number of directions used to approximate the Crofton perimeter. By default, 4 is used: it should be more accurate than 2. Computation time is the same in both cases. Returns
perimeterfloat
Total perimeter of all objects in binary image. Notes This measure is based on Crofton formula [1], which is a measure from integral geometry. It is defined for general curve length evaluation via a double integral along all directions. In a discrete space, 2 or 4 directions give a quite good approximation, 4 being more accurate than 2 for more complex shapes. Similar to perimeter(), this function returns an approximation of the perimeter in continuous space. References
1
https://en.wikipedia.org/wiki/Crofton_formula
2
S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838 Examples >>> from skimage import data, util
>>> from skimage.measure import label
>>> # coins image (binary)
>>> img_coins = data.coins() > 110
>>> # total perimeter of all objects in the image
>>> perimeter_crofton(img_coins, directions=2)
8144.578...
>>> perimeter_crofton(img_coins, directions=4)
7837.077... | skimage.api.skimage.measure#skimage.measure.perimeter_crofton |
skimage.measure.points_in_poly(points, verts) [source]
Test whether points lie inside a polygon. Parameters
points(N, 2) array
Input points, (x, y).
verts(M, 2) array
Vertices of the polygon, sorted either clockwise or anti-clockwise. The first point may (but does not need to be) duplicated. Returns
mask(N,) array of bool
True if corresponding point is inside the polygon. See also
grid_points_in_poly | skimage.api.skimage.measure#skimage.measure.points_in_poly |
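The standard technique behind such a test is even-odd ray casting: shoot a horizontal ray from the point and count how many polygon edges it crosses; an odd count means the point is inside. A minimal pure-Python sketch of that idea (point_in_poly_sketch is a hypothetical name; points exactly on an edge are not handled carefully):

```python
def point_in_poly_sketch(x, y, verts):
    """Even-odd ray casting: toggle 'inside' at each edge crossing of a
    rightward horizontal ray from (x, y)."""
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        if (y1 > y) != (y2 > y):              # edge spans the ray's height
            # x-coordinate where the edge crosses y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_poly_sketch(0.5, 0.5, square))  # True
print(point_in_poly_sketch(1.5, 0.5, square))  # False
```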
skimage.measure.profile_line(image, src, dst, linewidth=1, order=None, mode=None, cval=0.0, *, reduce_func=<function mean>) [source]
Return the intensity profile of an image measured along a scan line. Parameters
imagendarray, shape (M, N[, C])
The image, either grayscale (2D array) or multichannel (3D array, where the final axis contains the channel information).
srcarray_like, shape (2, )
The coordinates of the start point of the scan line.
dstarray_like, shape (2, )
The coordinates of the end point of the scan line. The destination point is included in the profile, in contrast to standard numpy indexing.
linewidthint, optional
Width of the scan, perpendicular to the line
orderint in {0, 1, 2, 3, 4, 5}, optional
The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.
mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional
How to compute any values falling outside of the image.
cvalfloat, optional
If mode is ‘constant’, what constant value to use outside the image.
reduce_funccallable, optional
Function used to calculate the aggregation of pixel values perpendicular to the profile_line direction when linewidth > 1. If set to None the unreduced array will be returned. Returns
return_valuearray
The intensity profile along the scan line. The length of the profile is the ceil of the computed length of the scan line. Examples >>> x = np.array([[1, 1, 1, 2, 2, 2]])
>>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
>>> img
array([[0, 0, 0, 0, 0, 0],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[0, 0, 0, 0, 0, 0]])
>>> profile_line(img, (2, 1), (2, 4))
array([1., 1., 2., 2.])
>>> profile_line(img, (1, 0), (1, 6), cval=4)
array([1., 1., 1., 2., 2., 2., 4.])
The destination point is included in the profile, in contrast to standard numpy indexing. For example: >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds
array([1., 1., 1., 2., 2., 2., 0.])
>>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row
array([1., 1., 1., 2., 2., 2.])
For different reduce_func inputs: >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)
array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)
array([1, 1, 1, 2])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)
array([2, 2, 2, 4])
The unreduced array will be returned when reduce_func is None or when reduce_func acts on each pixel value individually. >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
... reduce_func=None)
array([[1, 1, 2],
[1, 1, 2],
[1, 1, 2],
[0, 0, 0]])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)
array([[1. , 1. , 0. ],
[1. , 1. , 0. ],
[1. , 1. , 0. ],
[1.41421356, 1.41421356, 0. ]]) | skimage.api.skimage.measure#skimage.measure.profile_line |
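The sampling scheme can be sketched without skimage: place ceil(line length) + 1 evenly spaced sample points from src to dst (endpoint included, as noted above) and read the image at each. This nearest-neighbour sketch (profile_line_sketch is a made-up helper) stands in for the spline interpolation the real function uses:

```python
import numpy as np

def profile_line_sketch(image, src, dst):
    """Nearest-neighbour sampling along src -> dst, endpoint included."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    length = int(np.ceil(np.hypot(*(dst - src)))) + 1   # number of samples
    rows = np.linspace(src[0], dst[0], length)
    cols = np.linspace(src[1], dst[1], length)
    return image[np.round(rows).astype(int), np.round(cols).astype(int)]

x = np.array([[1, 1, 1, 2, 2, 2]])
img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
print(profile_line_sketch(img, (2, 1), (2, 4)))  # [1 1 2 2]
```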
skimage.measure.ransac(data, model_class, min_samples, residual_threshold, is_data_valid=None, is_model_valid=None, max_trials=100, stop_sample_num=inf, stop_residuals_sum=0, stop_probability=1, random_state=None, initial_inliers=None) [source]
Fit a model to data with the RANSAC (random sample consensus) algorithm. RANSAC is an iterative algorithm for the robust estimation of parameters from a subset of inliers from the complete data set. Each iteration performs the following tasks: Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid). Estimate a model to the random subset (model_cls.estimate(*data[random_subset]) and check whether the estimated model is valid (see is_model_valid). Classify all data as inliers or outliers by calculating the residuals to the estimated model (model_cls.residuals(*data)) - all data samples with residuals smaller than the residual_threshold are considered as inliers. Save estimated model as best model if number of inlier samples is maximal. In case the current estimated model has the same number of inliers, it is only considered as the best model if it has less sum of residuals. These steps are performed either a maximum number of times or until one of the special stop criteria are met. The final model is estimated using all inlier samples of the previously determined best model. Parameters
data[list, tuple of] (N, …) array
Data set to which the model is fitted, where N is the number of data points and the remaining dimensions depend on model requirements. If the model class requires multiple input data arrays (e.g. source and destination coordinates of skimage.transform.AffineTransform), they can optionally be passed as a tuple or list. Note that in this case the functions estimate(*data), residuals(*data), is_model_valid(model, *random_data) and is_data_valid(*random_data) must all take each data array as separate arguments.
model_classobject
Object with the following object methods: success = estimate(*data) residuals(*data) where success indicates whether the model estimation succeeded (True or None for success, False for failure).
min_samplesint in range (0, N)
The minimum number of data points to fit a model to.
residual_thresholdfloat larger than 0
Maximum distance for a data point to be classified as an inlier.
is_data_validfunction, optional
This function is called with the randomly selected data before the model is fitted to it: is_data_valid(*random_data).
is_model_validfunction, optional
This function is called with the estimated model and the randomly selected data: is_model_valid(model, *random_data).
max_trialsint, optional
Maximum number of iterations for random sample selection.
stop_sample_numint, optional
Stop iteration if at least this number of inliers are found.
stop_residuals_sumfloat, optional
Stop iteration if sum of residuals is less than or equal to this threshold.
stop_probabilityfloat in range [0, 1], optional
RANSAC iteration stops if at least one outlier-free set of the training data is sampled with probability >= stop_probability, depending on the current best model’s inlier ratio and the number of trials. This requires generating at least N samples (trials): N >= log(1 - probability) / log(1 - e**m) where the probability (confidence) is typically set to a high value such as 0.99, e is the current fraction of inliers w.r.t. the total number of samples, and m is the min_samples value.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
initial_inliersarray-like of bool, shape (N,), optional
Initial sample selection for model estimation. Returns
modelobject
Best model with largest consensus set.
inliers(N, ) array
Boolean mask of inliers classified as True. References
1
“RANSAC”, Wikipedia, https://en.wikipedia.org/wiki/RANSAC Examples Generate ellipse data without tilt and add noise: >>> t = np.linspace(0, 2 * np.pi, 50)
>>> xc, yc = 20, 30
>>> a, b = 5, 10
>>> x = xc + a * np.cos(t)
>>> y = yc + b * np.sin(t)
>>> data = np.column_stack([x, y])
>>> np.random.seed(seed=1234)
>>> data += np.random.normal(size=data.shape)
Add some faulty data: >>> data[0] = (100, 100)
>>> data[1] = (110, 120)
>>> data[2] = (120, 130)
>>> data[3] = (140, 130)
Estimate ellipse model using all available data: >>> model = EllipseModel()
>>> model.estimate(data)
True
>>> np.round(model.params)
array([ 72., 75., 77., 14., 1.])
Estimate ellipse model using RANSAC: >>> ransac_model, inliers = ransac(data, EllipseModel, 20, 3, max_trials=50)
>>> abs(np.round(ransac_model.params))
array([20., 30., 5., 10., 0.])
>>> inliers
array([False, False, False, False, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True], dtype=bool)
>>> sum(inliers) > 40
True
RANSAC can be used to robustly estimate a geometric transformation. In this section, we also show how to use a proportion of the total samples, rather than an absolute number. >>> from skimage.transform import SimilarityTransform
>>> np.random.seed(0)
>>> src = 100 * np.random.rand(50, 2)
>>> model0 = SimilarityTransform(scale=0.5, rotation=1, translation=(10, 20))
>>> dst = model0(src)
>>> dst[0] = (10000, 10000)
>>> dst[1] = (-100, 100)
>>> dst[2] = (50, 50)
>>> ratio = 0.5 # use half of the samples
>>> min_samples = int(ratio * len(src))
>>> model, inliers = ransac((src, dst), SimilarityTransform, min_samples, 10,
... initial_inliers=np.ones(len(src), dtype=bool))
>>> inliers
array([False, False, False, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True]) | skimage.api.skimage.measure#skimage.measure.ransac |
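The iteration described above (sample, estimate, count inliers, keep the best consensus set) can be condensed into a few lines for a simple line model. This is a hedged NumPy sketch of the loop, not the generic model-class machinery of skimage.measure.ransac (ransac_line_sketch and its defaults are made up here):

```python
import numpy as np

def ransac_line_sketch(points, min_samples=2, threshold=0.5, trials=100, seed=0):
    """Minimal RANSAC for y = a*x + b: sample, fit, count inliers, keep the best."""
    rng = np.random.default_rng(seed)
    best_params, best_inliers = None, None
    for _ in range(trials):
        sample = points[rng.choice(len(points), min_samples, replace=False)]
        a, b = np.polyfit(sample[:, 0], sample[:, 1], 1)   # fit line to the sample
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_params, best_inliers = (a, b), inliers
    return best_params, best_inliers

x = np.arange(50, dtype=float)
points = np.column_stack([x, 2 * x + 1])      # exact inliers on y = 2x + 1
points[:5, 1] += 100                          # five gross outliers
(a, b), inliers = ransac_line_sketch(points)
print(round(a, 6), round(b, 6), inliers.sum())
```

With the outliers far from the line, any trial that happens to draw two clean points recovers a = 2, b = 1 and a 45-point consensus set.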
skimage.measure.regionprops(label_image, intensity_image=None, cache=True, coordinates=None, *, extra_properties=None) [source]
Measure properties of labeled image regions. Parameters
label_image(M, N[, P]) ndarray
Labeled input image. Labels with value 0 are ignored. Changed in version 0.14.1: Previously, label_image was processed by numpy.squeeze and so any number of singleton dimensions was allowed. This resulted in inconsistent handling of images with singleton dimensions. To recover the old behaviour, use regionprops(np.squeeze(label_image), ...).
intensity_image(M, N[, P][, C]) ndarray, optional
Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Default is None. Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.
cachebool, optional
Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.
coordinatesDEPRECATED
This argument is deprecated and will be removed in a future version of scikit-image. See Coordinate conventions for more details. Deprecated since version 0.16.0: Use “rc” coordinates everywhere. It may be sufficient to call numpy.transpose on your label image to get the same values as 0.15 and earlier. However, for some properties, the transformation will be less trivial. For example, the new orientation is \(\frac{\pi}{2}\) plus the old orientation.
extra_propertiesIterable of callables
Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument. Returns
propertieslist of RegionProperties
Each item describes one labeled region, and can be accessed using the attributes listed below. See also
label
Notes The following properties can be accessed as attributes or keys:
areaint
Number of pixels of the region.
bboxtuple
Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).
bbox_areaint
Number of pixels of bounding box.
centroidarray
Centroid coordinate tuple (row, col).
convex_areaint
Number of pixels of convex hull image, which is the smallest convex polygon that encloses the region.
convex_image(H, J) ndarray
Binary convex hull image which has the same size as bounding box.
coords(N, 2) ndarray
Coordinate list (row, col) of the region.
eccentricityfloat
Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.
equivalent_diameterfloat
The diameter of a circle with the same area as the region.
euler_numberint
Euler characteristic of the set of non-zero pixels. Computed as number of connected components subtracted by number of holes (input.ndim connectivity). In 3D, number of connected components plus number of holes subtracted by number of tunnels.
extentfloat
Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (rows * cols)
feret_diameter_maxfloat
Maximum Feret’s diameter computed as the longest distance between points around a region’s convex hull contour as determined by find_contours. [5]
filled_areaint
Number of pixels of the region with all the holes filled in. Describes the area of the filled_image.
filled_image(H, J) ndarray
Binary region image with filled holes which has the same size as bounding box.
image(H, J) ndarray
Sliced binary region image which has the same size as bounding box.
inertia_tensorndarray
Inertia tensor of the region for the rotation around its center of mass.
inertia_tensor_eigvalstuple
The eigenvalues of the inertia tensor in decreasing order.
intensity_imagendarray
Image inside region bounding box.
labelint
The label in the labeled input image.
local_centroidarray
Centroid coordinate tuple (row, col), relative to region bounding box.
major_axis_lengthfloat
The length of the major axis of the ellipse that has the same normalized second central moments as the region.
max_intensityfloat
Value with the greatest intensity in the region.
mean_intensityfloat
Mean intensity value in the region.
min_intensityfloat
Value with the least intensity in the region.
minor_axis_lengthfloat
The length of the minor axis of the ellipse that has the same normalized second central moments as the region.
moments(3, 3) ndarray
Spatial moments up to 3rd order: m_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
moments_central(3, 3) ndarray
Central moments (translation invariant) up to 3rd order: mu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s centroid.
moments_hutuple
Hu moments (translation, scale and rotation invariant).
moments_normalized(3, 3) ndarray
Normalized moments (translation and scale invariant) up to 3rd order: nu_ij = mu_ij / m_00^[(i+j)/2 + 1]
where m_00 is the zeroth spatial moment.
orientationfloat
Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise.
perimeterfloat
Perimeter of object which approximates the contour as a line through the centers of border pixels using a 4-connectivity.
perimeter_croftonfloat
Perimeter of object approximated by the Crofton formula in 4 directions.
slicetuple of slices
A slice to extract the object from the source image.
solidityfloat
Ratio of pixels in the region to pixels of the convex hull image.
weighted_centroidarray
Centroid coordinate tuple (row, col) weighted with intensity image.
weighted_local_centroidarray
Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.
weighted_moments(3, 3) ndarray
Spatial moments of intensity image up to 3rd order: wm_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
weighted_moments_central(3, 3) ndarray
Central moments (translation invariant) of intensity image up to 3rd order: wmu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s weighted centroid.
weighted_moments_hutuple
Hu moments (translation, scale and rotation invariant) of intensity image.
weighted_moments_normalized(3, 3) ndarray
Normalized moments (translation and scale invariant) of intensity image up to 3rd order: wnu_ij = wmu_ij / wm_00^[(i+j)/2 + 1]
where wm_00 is the zeroth spatial moment (intensity-weighted area). Each region also supports iteration, so that you can do: for prop in region:
print(prop, region[prop])
References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment
5
W. Pabst, E. Gregorová. Characterization of particles and particle systems, pp. 27-28. ICT Prague, 2007. https://old.vscht.cz/sil/keramika/Characterization_of_particles/CPPS%20_English%20version_.pdf Examples >>> from skimage import data, util
>>> from skimage.measure import label, regionprops
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img, connectivity=img.ndim)
>>> props = regionprops(label_img)
>>> # centroid of first labeled object
>>> props[0].centroid
(22.72987986048314, 81.91228523446583)
>>> # centroid of first labeled object
>>> props[0]['centroid']
(22.72987986048314, 81.91228523446583)
Add custom measurements by passing functions as extra_properties >>> from skimage import data, util
>>> from skimage.measure import label, regionprops
>>> import numpy as np
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img, connectivity=img.ndim)
>>> def pixelcount(regionmask):
... return np.sum(regionmask)
>>> props = regionprops(label_img, extra_properties=(pixelcount,))
>>> props[0].pixelcount
7741
>>> props[1]['pixelcount']
42 | skimage.api.skimage.measure#skimage.measure.regionprops |
skimage.measure.regionprops_table(label_image, intensity_image=None, properties=('label', 'bbox'), *, cache=True, separator='-', extra_properties=None) [source]
Compute image properties and return them as a pandas-compatible table. The table is a dictionary mapping column names to value arrays. See Notes section below for details. New in version 0.16. Parameters
label_image(N, M[, P]) ndarray
Labeled input image. Labels with value 0 are ignored.
intensity_image(M, N[, P][, C]) ndarray, optional
Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Default is None. Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.
propertiestuple or list of str, optional
Properties that will be included in the resulting dictionary. For a list of available properties, please see regionprops(). Users should remember to add “label” to keep track of region identities.
cachebool, optional
Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.
separatorstr, optional
For non-scalar properties not listed in OBJECT_COLUMNS, each element will appear in its own column, with the index of that element separated from the property name by this separator. For example, the inertia tensor of a 2D region will appear in four columns: inertia_tensor-0-0, inertia_tensor-0-1, inertia_tensor-1-0, and inertia_tensor-1-1 (where the separator is -). Object columns are those that cannot be split in this way because the number of columns would change depending on the object. For example, image and coords.
extra_propertiesIterable of callables
Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, and the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property, the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument. Returns
out_dictdict
Dictionary mapping property names to an array of values of that property, one value per region. This dictionary can be used as input to pandas DataFrame to map property names to columns in the frame and regions to rows. If the image has no regions, the arrays will have length 0, but the correct type. Notes Each column contains either a scalar property, an object property, or an element in a multidimensional array. Properties with scalar values for each region, such as “eccentricity”, will appear as a float or int array with that property name as key. Multidimensional properties of fixed size for a given image dimension, such as “centroid” (every centroid will have three elements in a 3D image, no matter the region size), will be split into that many columns, with the name {property_name}{separator}{element_num} (for 1D properties), {property_name}{separator}{elem_num0}{separator}{elem_num1} (for 2D properties), and so on. For multidimensional properties that don’t have a fixed size, such as “image” (the image of a region varies in size depending on the region size), an object array will be used, with the corresponding property name as the key. Examples >>> from skimage import data, util, measure
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, image,
... properties=['label', 'inertia_tensor',
... 'inertia_tensor_eigvals'])
>>> props
{'label': array([ 1, 2, ...]), ...
'inertia_tensor-0-0': array([ 4.012...e+03, 8.51..., ...]), ...
...,
'inertia_tensor_eigvals-1': array([ 2.67...e+02, 2.83..., ...])}
The resulting dictionary can be directly passed to pandas, if installed, to obtain a clean DataFrame: >>> import pandas as pd
>>> data = pd.DataFrame(props)
>>> data.head()
label inertia_tensor-0-0 ... inertia_tensor_eigvals-1
0 1 4012.909888 ... 267.065503
1 2 8.514739 ... 2.834806
2 3 0.666667 ... 0.000000
3 4 0.000000 ... 0.000000
4 5 0.222222 ... 0.111111
[5 rows x 7 columns] If we want to measure a feature that does not come as a built-in property, we can define custom functions and pass them as extra_properties. For example, we can create a custom function that measures the intensity quartiles in a region: >>> from skimage import data, util, measure
>>> import numpy as np
>>> def quartiles(regionmask, intensity):
... return np.percentile(intensity[regionmask], q=(25, 50, 75))
>>>
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, intensity_image=image,
... properties=('label',),
... extra_properties=(quartiles,))
>>> import pandas as pd
>>> pd.DataFrame(props).head()
label quartiles-0 quartiles-1 quartiles-2
0 1 117.00 123.0 130.0
1 2 111.25 112.0 114.0
2 3 111.00 111.0 111.0
3 4 111.00 111.5 112.5
4 5 112.50 113.0 114.0 | skimage.api.skimage.measure#skimage.measure.regionprops_table |
skimage.measure.shannon_entropy(image, base=2) [source]
Calculate the Shannon entropy of an image. The Shannon entropy is defined as S = -sum(pk * log(pk)), where pk are frequency/probability of pixels of value k. Parameters
image(N, M) ndarray
Grayscale input image.
basefloat, optional
The logarithmic base to use. Returns
entropyfloat
Notes The returned value is measured in bits or shannon (Sh) for base=2, natural unit (nat) for base=np.e and hartley (Hart) for base=10. References
1
https://en.wikipedia.org/wiki/Entropy_(information_theory)
2
https://en.wiktionary.org/wiki/Shannon_entropy Examples >>> from skimage import data
>>> from skimage.measure import shannon_entropy
>>> shannon_entropy(data.camera())
7.231695011055706 | skimage.api.skimage.measure#skimage.measure.shannon_entropy |
skimage.measure.subdivide_polygon(coords, degree=2, preserve_ends=False) [source]
Subdivision of polygonal curves using B-Splines. Note that the resulting curve is always within the convex hull of the original polygon. Circular polygons stay closed after subdivision. Parameters
coords(N, 2) array
Coordinate array.
degree{1, 2, 3, 4, 5, 6, 7}, optional
Degree of B-Spline. Default is 2.
preserve_endsbool, optional
Preserve first and last coordinate of non-circular polygon. Default is False. Returns
coords(M, 2) array
Subdivided coordinate array. References
1
http://mrl.nyu.edu/publications/subdiv-course2000/coursenotes00.pdf | skimage.api.skimage.measure#skimage.measure.subdivide_polygon |
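The docstring above ships without a usage example. The following is a minimal sketch (the coordinate values are illustrative, not from the source) showing how repeated degree-2 subdivision smooths the corners of an open chain while staying inside its convex hull:

```python
import numpy as np
from skimage.measure import subdivide_polygon

# An open polygonal chain tracing three sides of a square.
square = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], dtype=float)

smoothed = square.copy()
for _ in range(3):
    # Each pass inserts vertices and pulls the chain toward a B-spline,
    # remaining within the convex hull of the original polygon.
    smoothed = subdivide_polygon(smoothed, degree=2, preserve_ends=True)

# preserve_ends=True keeps the first and last coordinates of the
# non-circular chain fixed.
```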
Module: metrics
skimage.metrics.adapted_rand_error([…]) Compute Adapted Rand error as defined by the SNEMI3D contest.
skimage.metrics.contingency_table(im_true, …) Return the contingency table for all regions in matched segmentations.
skimage.metrics.hausdorff_distance(image0, …) Calculate the Hausdorff distance between nonzero elements of given images.
skimage.metrics.mean_squared_error(image0, …) Compute the mean-squared error between two images.
skimage.metrics.normalized_root_mse(…[, …]) Compute the normalized root mean-squared error (NRMSE) between two images.
skimage.metrics.peak_signal_noise_ratio(…) Compute the peak signal to noise ratio (PSNR) for an image.
skimage.metrics.structural_similarity(im1, …) Compute the mean structural similarity index between two images.
skimage.metrics.variation_of_information([…]) Return symmetric conditional entropies associated with the VI. adapted_rand_error
skimage.metrics.adapted_rand_error(image_true=None, image_test=None, *, table=None, ignore_labels=(0, )) [source]
Compute Adapted Rand error as defined by the SNEMI3D contest. [1] Parameters
image_truendarray of int
Ground-truth label image, same shape as im_test.
image_testndarray of int
Test image.
tablescipy.sparse array in csr format, optional
A contingency table built with skimage.metrics.contingency_table. If None, it will be computed on the fly.
ignore_labelssequence of int, optional
Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score. Returns
arefloat
The adapted Rand error; equal to \(1 - \frac{2pr}{p + r}\), where p and r are the precision and recall described below.
precfloat
The adapted Rand precision: this is the number of pairs of pixels that have the same label in the test label image and in the true image, divided by the number in the test image.
recfloat
The adapted Rand recall: this is the number of pairs of pixels that have the same label in the test label image and in the true image, divided by the number in the true image. Notes Pixels with label 0 in the true segmentation are ignored in the score. References
1
Arganda-Carreras I, Turaga SC, Berger DR, et al. (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Front. Neuroanat. 9:142. DOI:10.3389/fnana.2015.00142
contingency_table
skimage.metrics.contingency_table(im_true, im_test, *, ignore_labels=None, normalize=False) [source]
Return the contingency table for all regions in matched segmentations. Parameters
im_truendarray of int
Ground-truth label image, same shape as im_test.
im_testndarray of int
Test image.
ignore_labelssequence of int, optional
Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.
normalizebool
Determines if the contingency table is normalized by pixel count. Returns
contscipy.sparse.csr_matrix
A contingency table. cont[i, j] will equal the number of voxels labeled i in im_true and j in im_test.
hausdorff_distance
skimage.metrics.hausdorff_distance(image0, image1) [source]
Calculate the Hausdorff distance between nonzero elements of given images. The Hausdorff distance [1] is the maximum distance between any point on image0 and its nearest point on image1, and vice-versa. Parameters
image0, image1ndarray
Arrays where True represents a point that is included in a set of points. Both arrays must have the same shape. Returns
distancefloat
The Hausdorff distance between coordinates of nonzero pixels in image0 and image1, using the Euclidean distance. References
1
http://en.wikipedia.org/wiki/Hausdorff_distance Examples >>> points_a = (3, 0)
>>> points_b = (6, 0)
>>> shape = (7, 1)
>>> image_a = np.zeros(shape, dtype=bool)
>>> image_b = np.zeros(shape, dtype=bool)
>>> image_a[points_a] = True
>>> image_b[points_b] = True
>>> hausdorff_distance(image_a, image_b)
3.0
Examples using skimage.metrics.hausdorff_distance
Hausdorff Distance mean_squared_error
skimage.metrics.mean_squared_error(image0, image1) [source]
Compute the mean-squared error between two images. Parameters
image0, image1ndarray
Images. Any dimensionality, must have same shape. Returns
msefloat
The mean-squared error (MSE) metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_mse to skimage.metrics.mean_squared_error.
normalized_root_mse
skimage.metrics.normalized_root_mse(image_true, image_test, *, normalization='euclidean') [source]
Compute the normalized root mean-squared error (NRMSE) between two images. Parameters
image_truendarray
Ground-truth image, same shape as im_test.
image_testndarray
Test image.
normalization{‘euclidean’, ‘min-max’, ‘mean’}, optional
Controls the normalization method to use in the denominator of the NRMSE. There is no standard method of normalization across the literature [1]. The methods available here are as follows:
‘euclidean’ : normalize by the averaged Euclidean norm of im_true: NRMSE = RMSE * sqrt(N) / || im_true ||
where || . || denotes the Frobenius norm and N = im_true.size. This result is equivalent to: NRMSE = || im_true - im_test || / || im_true ||.
‘min-max’ : normalize by the intensity range of im_true. ‘mean’ : normalize by the mean of im_true
Returns
nrmsefloat
The NRMSE metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_nrmse to skimage.metrics.normalized_root_mse. References
1
https://en.wikipedia.org/wiki/Root-mean-square_deviation
peak_signal_noise_ratio
skimage.metrics.peak_signal_noise_ratio(image_true, image_test, *, data_range=None) [source]
Compute the peak signal to noise ratio (PSNR) for an image. Parameters
image_truendarray
Ground-truth image, same shape as im_test.
image_testndarray
Test image.
data_rangeint, optional
The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type. Returns
psnrfloat
The PSNR metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_psnr to skimage.metrics.peak_signal_noise_ratio. References
1
https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
structural_similarity
skimage.metrics.structural_similarity(im1, im2, *, win_size=None, gradient=False, data_range=None, multichannel=False, gaussian_weights=False, full=False, **kwargs) [source]
Compute the mean structural similarity index between two images. Parameters
im1, im2ndarray
Images. Any dimensionality with same shape.
win_sizeint or None, optional
The side-length of the sliding window used in comparison. Must be an odd value. If gaussian_weights is True, this is ignored and the window size will depend on sigma.
gradientbool, optional
If True, also return the gradient with respect to im2.
data_rangefloat, optional
The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type.
multichannelbool, optional
If True, treat the last dimension of the array as channels. Similarity calculations are done independently for each channel then averaged.
gaussian_weightsbool, optional
If True, each patch has its mean and variance spatially weighted by a normalized Gaussian kernel of width sigma=1.5.
fullbool, optional
If True, also return the full structural similarity image. Returns
mssimfloat
The mean structural similarity index over the image.
gradndarray
The gradient of the structural similarity between im1 and im2 [2]. This is only returned if gradient is set to True.
Sndarray
The full SSIM image. This is only returned if full is set to True. Other Parameters
use_sample_covariancebool
If True, normalize covariances by N-1 rather than N, where N is the number of pixels within the sliding window.
K1float
Algorithm parameter, K1 (small constant, see [1]).
K2float
Algorithm parameter, K2 (small constant, see [1]).
sigmafloat
Standard deviation for the Gaussian when gaussian_weights is True. Notes To match the implementation of Wang et al. [1], set gaussian_weights to True, sigma to 1.5, and use_sample_covariance to False. Changed in version 0.16: This function was renamed from skimage.measure.compare_ssim to skimage.metrics.structural_similarity. References
1(1,2,3)
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600-612. https://ece.uwaterloo.ca/~z70wang/publications/ssim.pdf, DOI:10.1109/TIP.2003.819861
2
Avanaki, A. N. (2009). Exact global histogram specification optimized for structural similarity. Optical Review, 16, 613-621. arXiv:0901.0065 DOI:10.1007/s10043-009-0119-z
variation_of_information
skimage.metrics.variation_of_information(image0=None, image1=None, *, table=None, ignore_labels=()) [source]
Return symmetric conditional entropies associated with the VI. [1] The variation of information is defined as VI(X,Y) = H(X|Y) + H(Y|X). If X is the ground-truth segmentation, then H(X|Y) can be interpreted as the amount of under-segmentation and H(Y|X) as the amount of over-segmentation. In other words, a perfect over-segmentation will have H(X|Y)=0 and a perfect under-segmentation will have H(Y|X)=0. Parameters
image0, image1ndarray of int
Label images / segmentations, must have same shape.
tablescipy.sparse array in csr format, optional
A contingency table built with skimage.metrics.contingency_table. If None, it will be computed with skimage.metrics.contingency_table. If given, the entropies will be computed from this table and any images will be ignored.
ignore_labelssequence of int, optional
Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score. Returns
vindarray of float, shape (2,)
The conditional entropies of image1|image0 and image0|image1. References
1
Marina Meilă (2007), Comparing clusterings—an information based distance, Journal of Multivariate Analysis, Volume 98, Issue 5, Pages 873-895, ISSN 0047-259X, DOI:10.1016/j.jmva.2006.11.013. | skimage.api.skimage.metrics |
skimage.metrics.adapted_rand_error(image_true=None, image_test=None, *, table=None, ignore_labels=(0, )) [source]
Compute Adapted Rand error as defined by the SNEMI3D contest. [1] Parameters
image_truendarray of int
Ground-truth label image, same shape as im_test.
image_testndarray of int
Test image.
tablescipy.sparse array in csr format, optional
A contingency table built with skimage.metrics.contingency_table. If None, it will be computed on the fly.
ignore_labelssequence of int, optional
Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score. Returns
arefloat
The adapted Rand error; equal to \(1 - \frac{2pr}{p + r}\), where p and r are the precision and recall described below.
precfloat
The adapted Rand precision: this is the number of pairs of pixels that have the same label in the test label image and in the true image, divided by the number in the test image.
recfloat
The adapted Rand recall: this is the number of pairs of pixels that have the same label in the test label image and in the true image, divided by the number in the true image. Notes Pixels with label 0 in the true segmentation are ignored in the score. References
1
Arganda-Carreras I, Turaga SC, Berger DR, et al. (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Front. Neuroanat. 9:142. DOI:10.3389/fnana.2015.00142 | skimage.api.skimage.metrics#skimage.metrics.adapted_rand_error |
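This docstring has no example, so here is a hedged sketch with tiny illustrative arrays: a segmentation compared with itself yields zero error and perfect precision/recall, while merging regions (under-segmentation) raises the error.

```python
import numpy as np
from skimage.metrics import adapted_rand_error

truth = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2]])

# Comparing a segmentation with itself: zero error, perfect scores.
error, precision, recall = adapted_rand_error(truth, truth)

# Merging the two regions (an under-segmentation) increases the error.
merged = np.ones_like(truth)
error_merged, _, _ = adapted_rand_error(truth, merged)
```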
skimage.metrics.contingency_table(im_true, im_test, *, ignore_labels=None, normalize=False) [source]
Return the contingency table for all regions in matched segmentations. Parameters
im_truendarray of int
Ground-truth label image, same shape as im_test.
im_testndarray of int
Test image.
ignore_labelssequence of int, optional
Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score.
normalizebool
Determines if the contingency table is normalized by pixel count. Returns
contscipy.sparse.csr_matrix
A contingency table. cont[i, j] will equal the number of voxels labeled i in im_true and j in im_test. | skimage.api.skimage.metrics#skimage.metrics.contingency_table |
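No example accompanies this function; the minimal sketch below (illustrative arrays, not from the source) shows how the table counts label co-occurrences:

```python
import numpy as np
from skimage.metrics import contingency_table

im_true = np.array([[1, 1, 2, 2]])
im_test = np.array([[1, 2, 2, 2]])

# cont[i, j] counts pixels labeled i in im_true and j in im_test:
# one pixel is (1, 1), one is (1, 2), and two are (2, 2).
table = contingency_table(im_true, im_test)
```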
skimage.metrics.hausdorff_distance(image0, image1) [source]
Calculate the Hausdorff distance between nonzero elements of given images. The Hausdorff distance [1] is the maximum distance between any point on image0 and its nearest point on image1, and vice-versa. Parameters
image0, image1ndarray
Arrays where True represents a point that is included in a set of points. Both arrays must have the same shape. Returns
distancefloat
The Hausdorff distance between coordinates of nonzero pixels in image0 and image1, using the Euclidean distance. References
1
http://en.wikipedia.org/wiki/Hausdorff_distance Examples >>> points_a = (3, 0)
>>> points_b = (6, 0)
>>> shape = (7, 1)
>>> image_a = np.zeros(shape, dtype=bool)
>>> image_b = np.zeros(shape, dtype=bool)
>>> image_a[points_a] = True
>>> image_b[points_b] = True
>>> hausdorff_distance(image_a, image_b)
3.0 | skimage.api.skimage.metrics#skimage.metrics.hausdorff_distance |
skimage.metrics.mean_squared_error(image0, image1) [source]
Compute the mean-squared error between two images. Parameters
image0, image1ndarray
Images. Any dimensionality, must have same shape. Returns
msefloat
The mean-squared error (MSE) metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_mse to skimage.metrics.mean_squared_error. | skimage.api.skimage.metrics#skimage.metrics.mean_squared_error |
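This docstring has no example. A minimal sketch with illustrative arrays, showing that the metric is simply the mean of the squared per-pixel differences:

```python
import numpy as np
from skimage.metrics import mean_squared_error

image0 = np.zeros((2, 2))
image1 = np.array([[1.0, 1.0],
                   [3.0, 3.0]])

# Mean of the squared per-pixel differences: (1 + 1 + 9 + 9) / 4 = 5.0
mse = mean_squared_error(image0, image1)
```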
skimage.metrics.normalized_root_mse(image_true, image_test, *, normalization='euclidean') [source]
Compute the normalized root mean-squared error (NRMSE) between two images. Parameters
image_truendarray
Ground-truth image, same shape as im_test.
image_testndarray
Test image.
normalization{‘euclidean’, ‘min-max’, ‘mean’}, optional
Controls the normalization method to use in the denominator of the NRMSE. There is no standard method of normalization across the literature [1]. The methods available here are as follows:
‘euclidean’ : normalize by the averaged Euclidean norm of im_true: NRMSE = RMSE * sqrt(N) / || im_true ||
where || . || denotes the Frobenius norm and N = im_true.size. This result is equivalent to: NRMSE = || im_true - im_test || / || im_true ||.
‘min-max’ : normalize by the intensity range of im_true. ‘mean’ : normalize by the mean of im_true
Returns
nrmsefloat
The NRMSE metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_nrmse to skimage.metrics.normalized_root_mse. References
1
https://en.wikipedia.org/wiki/Root-mean-square_deviation | skimage.api.skimage.metrics#skimage.metrics.normalized_root_mse |
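No example is given above; the sketch below (illustrative values) verifies the documented identity NRMSE = || im_true - im_test || / || im_true || for the default 'euclidean' normalization:

```python
import numpy as np
from skimage.metrics import normalized_root_mse

image_true = np.array([3.0, 4.0])   # Euclidean norm is 5
image_test = np.array([3.0, 0.0])   # difference norm is 4

# With 'euclidean' normalization (the default):
# NRMSE = ||image_true - image_test|| / ||image_true|| = 4 / 5
nrmse = normalized_root_mse(image_true, image_test)
```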
skimage.metrics.peak_signal_noise_ratio(image_true, image_test, *, data_range=None) [source]
Compute the peak signal to noise ratio (PSNR) for an image. Parameters
image_truendarray
Ground-truth image, same shape as im_test.
image_testndarray
Test image.
data_rangeint, optional
The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type. Returns
psnrfloat
The PSNR metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_psnr to skimage.metrics.peak_signal_noise_ratio. References
1
https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio | skimage.api.skimage.metrics#skimage.metrics.peak_signal_noise_ratio |
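This docstring has no example. As a hedged sketch, assuming the conventional definition PSNR = 10 * log10(data_range**2 / MSE), corrupting a single pixel of a flat image gives an easily checked value:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

truth = np.full((4, 4), 100, dtype=np.uint8)
noisy = truth.copy()
noisy[0, 0] = 116          # corrupt one pixel by 16 levels

# MSE = 16**2 / 16 pixels = 16, so
# PSNR = 10 * log10(255**2 / 16)  (data_range is 255 for uint8)
psnr = peak_signal_noise_ratio(truth, noisy, data_range=255)
```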
skimage.metrics.structural_similarity(im1, im2, *, win_size=None, gradient=False, data_range=None, multichannel=False, gaussian_weights=False, full=False, **kwargs) [source]
Compute the mean structural similarity index between two images. Parameters
im1, im2ndarray
Images. Any dimensionality with same shape.
win_sizeint or None, optional
The side-length of the sliding window used in comparison. Must be an odd value. If gaussian_weights is True, this is ignored and the window size will depend on sigma.
gradientbool, optional
If True, also return the gradient with respect to im2.
data_rangefloat, optional
The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data-type.
multichannelbool, optional
If True, treat the last dimension of the array as channels. Similarity calculations are done independently for each channel then averaged.
gaussian_weightsbool, optional
If True, each patch has its mean and variance spatially weighted by a normalized Gaussian kernel of width sigma=1.5.
fullbool, optional
If True, also return the full structural similarity image. Returns
mssimfloat
The mean structural similarity index over the image.
gradndarray
The gradient of the structural similarity between im1 and im2 [2]. This is only returned if gradient is set to True.
Sndarray
The full SSIM image. This is only returned if full is set to True. Other Parameters
use_sample_covariancebool
If True, normalize covariances by N-1 rather than N, where N is the number of pixels within the sliding window.
K1float
Algorithm parameter, K1 (small constant, see [1]).
K2float
Algorithm parameter, K2 (small constant, see [1]).
sigmafloat
Standard deviation for the Gaussian when gaussian_weights is True. Notes To match the implementation of Wang et al. [1], set gaussian_weights to True, sigma to 1.5, and use_sample_covariance to False. Changed in version 0.16: This function was renamed from skimage.measure.compare_ssim to skimage.metrics.structural_similarity. References
1(1,2,3)
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600-612. https://ece.uwaterloo.ca/~z70wang/publications/ssim.pdf, DOI:10.1109/TIP.2003.819861
2
Avanaki, A. N. (2009). Exact global histogram specification optimized for structural similarity. Optical Review, 16, 613-621. arXiv:0901.0065 DOI:10.1007/s10043-009-0119-z | skimage.api.skimage.metrics#skimage.metrics.structural_similarity |
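No example accompanies this function. A minimal sketch on synthetic data (the image sizes and noise level are illustrative): an image compared with itself scores 1, and additive noise lowers the score.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# An image compared with itself has an SSIM of exactly 1.
ssim_same = structural_similarity(img, img, data_range=1.0)

# Additive Gaussian noise lowers the mean SSIM.
noisy = np.clip(img + 0.2 * rng.standard_normal(img.shape), 0, 1)
ssim_noisy = structural_similarity(img, noisy, data_range=1.0)
```

Passing data_range explicitly avoids relying on dtype-based estimation for float images.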
skimage.metrics.variation_of_information(image0=None, image1=None, *, table=None, ignore_labels=()) [source]
Return symmetric conditional entropies associated with the VI. [1] The variation of information is defined as VI(X,Y) = H(X|Y) + H(Y|X). If X is the ground-truth segmentation, then H(X|Y) can be interpreted as the amount of under-segmentation and H(Y|X) as the amount of over-segmentation. In other words, a perfect over-segmentation will have H(X|Y)=0 and a perfect under-segmentation will have H(Y|X)=0. Parameters
image0, image1ndarray of int
Label images / segmentations, must have same shape.
tablescipy.sparse array in csr format, optional
A contingency table built with skimage.metrics.contingency_table. If None, it will be computed with skimage.metrics.contingency_table. If given, the entropies will be computed from this table and any images will be ignored.
ignore_labelssequence of int, optional
Labels to ignore. Any part of the true image labeled with any of these values will not be counted in the score. Returns
vindarray of float, shape (2,)
The conditional entropies of image1|image0 and image0|image1. References
1
Marina Meilă (2007), Comparing clusterings—an information based distance, Journal of Multivariate Analysis, Volume 98, Issue 5, Pages 873-895, ISSN 0047-259X, DOI:10.1016/j.jmva.2006.11.013. | skimage.api.skimage.metrics#skimage.metrics.variation_of_information |
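This docstring has no example. The sketch below uses tiny illustrative arrays and assumes the return order documented above (conditional entropy of image1|image0 first): a strict over-segmentation makes the second entropy vanish.

```python
import numpy as np
from skimage.metrics import variation_of_information

truth = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2]])
# A strict over-segmentation: every region of `over` lies inside
# exactly one region of `truth`.
over = np.array([[1, 2, 3, 4],
                 [1, 2, 3, 4]])

h_over_given_truth, h_truth_given_over = variation_of_information(truth, over)
# Knowing `over` fully determines `truth`, so H(truth | over) = 0,
# while H(over | truth) > 0 measures the amount of over-splitting.
```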
Module: morphology
skimage.morphology.area_closing(image[, …]) Perform an area closing of the image.
skimage.morphology.area_opening(image[, …]) Perform an area opening of the image.
skimage.morphology.ball(radius[, dtype]) Generates a ball-shaped structuring element.
skimage.morphology.binary_closing(image[, …]) Return fast binary morphological closing of an image.
skimage.morphology.binary_dilation(image[, …]) Return fast binary morphological dilation of an image.
skimage.morphology.binary_erosion(image[, …]) Return fast binary morphological erosion of an image.
skimage.morphology.binary_opening(image[, …]) Return fast binary morphological opening of an image.
skimage.morphology.black_tophat(image[, …]) Return black top hat of an image.
skimage.morphology.closing(image[, selem, out]) Return greyscale morphological closing of an image.
skimage.morphology.convex_hull_image(image) Compute the convex hull image of a binary image.
skimage.morphology.convex_hull_object(image, *) Compute the convex hull image of individual objects in a binary image.
skimage.morphology.cube(width[, dtype]) Generates a cube-shaped structuring element.
skimage.morphology.diameter_closing(image[, …]) Perform a diameter closing of the image.
skimage.morphology.diameter_opening(image[, …]) Perform a diameter opening of the image.
skimage.morphology.diamond(radius[, dtype]) Generates a flat, diamond-shaped structuring element.
skimage.morphology.dilation(image[, selem, …]) Return greyscale morphological dilation of an image.
skimage.morphology.disk(radius[, dtype]) Generates a flat, disk-shaped structuring element.
skimage.morphology.erosion(image[, selem, …]) Return greyscale morphological erosion of an image.
skimage.morphology.flood(image, seed_point, *) Mask corresponding to a flood fill.
skimage.morphology.flood_fill(image, …[, …]) Perform flood filling on an image.
skimage.morphology.h_maxima(image, h[, selem]) Determine all maxima of the image with height >= h.
skimage.morphology.h_minima(image, h[, selem]) Determine all minima of the image with depth >= h.
skimage.morphology.label(input[, …]) Label connected regions of an integer array.
skimage.morphology.local_maxima(image[, …]) Find local maxima of n-dimensional array.
skimage.morphology.local_minima(image[, …]) Find local minima of n-dimensional array.
skimage.morphology.max_tree(image[, …]) Build the max tree from an image.
skimage.morphology.max_tree_local_maxima(image) Determine all local maxima of the image.
skimage.morphology.medial_axis(image[, …]) Compute the medial axis transform of a binary image
skimage.morphology.octagon(m, n[, dtype]) Generates an octagon shaped structuring element.
skimage.morphology.octahedron(radius[, dtype]) Generates an octahedron-shaped structuring element.
skimage.morphology.opening(image[, selem, out]) Return greyscale morphological opening of an image.
skimage.morphology.reconstruction(seed, mask) Perform a morphological reconstruction of an image.
skimage.morphology.rectangle(nrows, ncols[, …]) Generates a flat, rectangular-shaped structuring element.
skimage.morphology.remove_small_holes(ar[, …]) Remove contiguous holes smaller than the specified size.
skimage.morphology.remove_small_objects(ar) Remove objects smaller than the specified size.
skimage.morphology.skeletonize(image, *[, …]) Compute the skeleton of a binary image.
skimage.morphology.skeletonize_3d(image) Compute the skeleton of a binary image.
skimage.morphology.square(width[, dtype]) Generates a flat, square-shaped structuring element.
skimage.morphology.star(a[, dtype]) Generates a star shaped structuring element.
skimage.morphology.thin(image[, max_iter]) Perform morphological thinning of a binary image.
skimage.morphology.watershed(image[, …]) Deprecated function.
skimage.morphology.white_tophat(image[, …]) Return white top hat of an image. area_closing
skimage.morphology.area_closing(image, area_threshold=64, connectivity=1, parent=None, tree_traverser=None) [source]
Perform an area closing of the image. Area closing removes all dark structures of an image with a surface smaller than area_threshold. The output image is larger than or equal to the input image for every pixel and all local minima have at least a surface of area_threshold pixels. Area closings are similar to morphological closings, but they do not use a fixed structuring element, but rather a deformable one, with surface = area_threshold. In the binary case, area closings are equivalent to remove_small_holes; this operator is thus extended to gray-level images. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the area_closing is to be calculated. This image can be of any type.
area_thresholdunsigned int
The size parameter (number of pixels). The default value is arbitrarily chosen to be 64.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the inverted image. The value of each pixel is the index of its parent in the ravelled array. See Note for further details.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as input image. See also
skimage.morphology.area_opening
skimage.morphology.diameter_opening
skimage.morphology.diameter_closing
skimage.morphology.max_tree
skimage.morphology.remove_small_objects
skimage.morphology.remove_small_holes
Notes If a max-tree representation (parent and tree_traverser) are given to the function, they must be calculated from the inverted image for this function, i.e.: >>> P, S = max_tree(invert(f)) >>> closed = area_closing(f, 8, parent=P, tree_traverser=S) References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a minimum in the center and 4 additional local minima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 160; f[2:4,9:11] = 140; f[9:11,2:4] = 120
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the area closing: >>> closed = area_closing(f, 8, connectivity=1)
All small minima are removed, and the remaining minima have at least a size of 8.
area_opening
skimage.morphology.area_opening(image, area_threshold=64, connectivity=1, parent=None, tree_traverser=None) [source]
Perform an area opening of the image. Area opening removes all bright structures of an image with a surface smaller than area_threshold. The output image is thus the largest image smaller than the input for which all local maxima have at least a surface of area_threshold pixels. Area openings are similar to morphological openings, but they do not use a fixed structuring element, but rather a deformable one, with surface = area_threshold. Consequently, the area_opening with area_threshold=1 is the identity. In the binary case, area openings are equivalent to remove_small_objects; this operator is thus extended to gray-level images. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the area_opening is to be calculated. This image can be of any type.
area_thresholdunsigned int
The size parameter (number of pixels). The default value is arbitrarily chosen to be 64.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as the input image. See also
skimage.morphology.area_closing
skimage.morphology.diameter_opening
skimage.morphology.diameter_closing
skimage.morphology.max_tree
skimage.morphology.remove_small_objects
skimage.morphology.remove_small_holes
References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a maximum in the center and 4 additional local maxima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 40; f[2:4,9:11] = 60; f[9:11,2:4] = 80
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the area opening: >>> open = area_opening(f, 8, connectivity=1)
The peaks with a surface smaller than 8 are removed.
ball
skimage.morphology.ball(radius, dtype=<class 'numpy.uint8'>) [source]
Generates a ball-shaped structuring element. This is the 3D equivalent of a disk. A pixel is within the neighborhood if the Euclidean distance between it and the origin is no greater than radius. Parameters
radiusint
The radius of the ball-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element.
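For instance (a small check, assuming scikit-image and NumPy are installed), a radius-1 ball occupies a 3x3x3 cube and contains the center voxel plus its six orthogonal neighbors:

```python
from skimage.morphology import ball

selem = ball(1)
print(selem.shape)       # (3, 3, 3)
# Voxels at Euclidean distance <= 1 from the center: the center
# itself plus 6 orthogonal neighbors.
print(int(selem.sum()))  # 7
```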
Examples using skimage.morphology.ball
Local Histogram Equalization
Rank filters binary_closing
skimage.morphology.binary_closing(image, selem=None, out=None) [source]
Return fast binary morphological closing of an image. This function returns the same result as greyscale closing but performs faster for binary images. The morphological closing on an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
closingndarray of bool
The result of the morphological closing.
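A minimal sketch (assuming scikit-image and NumPy are installed): binary closing fills a one-pixel gap in a bright line, leaving the rest of the image untouched:

```python
import numpy as np
from skimage.morphology import binary_closing, square

broken_line = np.array([[0, 0, 0, 0, 0],
                        [0, 0, 0, 0, 0],
                        [1, 1, 0, 1, 1],
                        [0, 0, 0, 0, 0],
                        [0, 0, 0, 0, 0]], dtype=bool)
closed = binary_closing(broken_line, square(3))
# The gap in the middle row is filled; all other rows stay False.
print(closed[2].all(), int(closed.sum()))  # True 5
```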
Examples using skimage.morphology.binary_closing
Flood Fill binary_dilation
skimage.morphology.binary_dilation(image, selem=None, out=None) [source]
Return fast binary morphological dilation of an image. This function returns the same result as greyscale dilation but performs faster for binary images. Morphological dilation sets a pixel at (i,j) to the maximum over all pixels in the neighborhood centered at (i,j). Dilation enlarges bright regions and shrinks dark regions. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
dilatedndarray of bool or uint
The result of the morphological dilation with values in [False, True].
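As a quick illustration (assuming scikit-image and NumPy are installed), a single True pixel dilated with a 3x3 square grows into a 3x3 block:

```python
import numpy as np
from skimage.morphology import binary_dilation, square

bright_pixel = np.zeros((5, 5), dtype=bool)
bright_pixel[2, 2] = True
dilated = binary_dilation(bright_pixel, square(3))
# The single True pixel grows into a 3x3 block.
print(int(dilated.sum()))  # 9
```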
binary_erosion
skimage.morphology.binary_erosion(image, selem=None, out=None) [source]
Return fast binary morphological erosion of an image. This function returns the same result as greyscale erosion but performs faster for binary images. Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
erodedndarray of bool or uint
The result of the morphological erosion taking values in [False, True].
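Conversely (a small sketch, assuming scikit-image and NumPy are installed), eroding a 3x3 block with a 3x3 square leaves only the center pixel, the only one whose entire neighborhood is True:

```python
import numpy as np
from skimage.morphology import binary_erosion, square

bright_block = np.zeros((5, 5), dtype=bool)
bright_block[1:4, 1:4] = True
eroded = binary_erosion(bright_block, square(3))
# Only the center pixel has a fully-True 3x3 neighborhood.
print(int(eroded.sum()), bool(eroded[2, 2]))  # 1 True
```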
binary_opening
skimage.morphology.binary_opening(image, selem=None, out=None) [source]
Return fast binary morphological opening of an image. This function returns the same result as greyscale opening but performs faster for binary images. The morphological opening on an image is defined as an erosion followed by a dilation. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
openingndarray of bool
The result of the morphological opening.
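A minimal "salt removal" sketch (assuming scikit-image and NumPy are installed): opening deletes an isolated bright pixel while the larger object survives intact:

```python
import numpy as np
from skimage.morphology import binary_opening, square

noisy = np.zeros((5, 5), dtype=bool)
noisy[1:4, 1:4] = True   # a 3x3 object
noisy[0, 4] = True       # an isolated "salt" pixel
opened = binary_opening(noisy, square(3))
# The salt pixel is removed; the 3x3 object is restored intact.
print(bool(opened[0, 4]), int(opened.sum()))  # False 9
```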
Examples using skimage.morphology.binary_opening
Flood Fill black_tophat
skimage.morphology.black_tophat(image, selem=None, out=None) [source]
Return black top hat of an image. The black top hat of an image is defined as its morphological closing minus the original image. This operation returns the dark spots of the image that are smaller than the structuring element. Note that dark spots in the original image are bright spots after the black top hat. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
outarray, same shape and type as image
The result of the morphological black top hat. See also
white_tophat
References
1
https://en.wikipedia.org/wiki/Top-hat_transform Examples >>> # Change dark peak to bright peak and subtract background
>>> import numpy as np
>>> from skimage.morphology import square
>>> dark_on_grey = np.array([[7, 6, 6, 6, 7],
... [6, 5, 4, 5, 6],
... [6, 4, 0, 4, 6],
... [6, 5, 4, 5, 6],
... [7, 6, 6, 6, 7]], dtype=np.uint8)
>>> black_tophat(dark_on_grey, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 1, 5, 1, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8)
closing
skimage.morphology.closing(image, selem=None, out=None) [source]
Return greyscale morphological closing of an image. The morphological closing on an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
closingarray, same shape and type as image
The result of the morphological closing. Examples >>> # Close a gap between two bright lines
>>> import numpy as np
>>> from skimage.morphology import square
>>> broken_line = np.array([[0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0],
... [1, 1, 0, 1, 1],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> closing(broken_line, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8)
convex_hull_image
skimage.morphology.convex_hull_image(image, offset_coordinates=True, tolerance=1e-10) [source]
Compute the convex hull image of a binary image. The convex hull is the set of pixels included in the smallest convex polygon that surrounds all white pixels in the input image. Parameters
imagearray
Binary input image. This array is cast to bool before processing.
offset_coordinatesbool, optional
If True, a pixel at coordinate, e.g., (4, 7) will be represented by coordinates (3.5, 7), (4.5, 7), (4, 6.5), and (4, 7.5). This adds some “extent” to a pixel when computing the hull.
tolerancefloat, optional
Tolerance when determining whether a point is inside the hull. Due to numerical floating point errors, a tolerance of 0 can result in some points erroneously being classified as being outside the hull. Returns
hull(M, N) array of bool
Binary image with pixels in convex hull set to True. References
1
https://blogs.mathworks.com/steve/2011/10/04/binary-image-convex-hull-algorithm-notes/
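As an illustration (a sketch assuming scikit-image and NumPy are installed), the hull of an L-shaped object contains every input pixel and also fills in the concave corner:

```python
import numpy as np
from skimage.morphology import convex_hull_image

l_shape = np.zeros((5, 5), dtype=bool)
l_shape[1:4, 1] = True   # vertical stroke
l_shape[3, 1:4] = True   # horizontal stroke
hull = convex_hull_image(l_shape)
# The hull contains every input pixel, plus the concave corner.
print(bool(hull[l_shape].all()), int(hull.sum()) > int(l_shape.sum()))
```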
convex_hull_object
skimage.morphology.convex_hull_object(image, *, connectivity=2) [source]
Compute the convex hull image of individual objects in a binary image. The convex hull is the set of pixels included in the smallest convex polygon that surrounds all white pixels in the input image. Parameters
image(M, N) ndarray
Binary input image.
connectivity{1, 2}, int, optional
Determines the neighbors of each pixel. Adjacent elements within a squared distance of connectivity from pixel center are considered neighbors: 1-connectivity 2-connectivity
[ ] [ ] [ ] [ ]
| \ | /
[ ]--[x]--[ ] [ ]--[x]--[ ]
| / | \
[ ] [ ] [ ] [ ]
Returns
hullndarray of bool
Binary image with pixels inside convex hull set to True. Notes This function uses skimage.morphology.label to define unique objects, finds the convex hull of each using convex_hull_image, and combines these regions with logical OR. Be aware the convex hulls of unconnected objects may overlap in the result. If this is suspected, consider using convex_hull_image separately on each object or adjust connectivity.
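A small sketch of the per-object behavior (assuming scikit-image and NumPy are installed): two separate squares are each already convex, so their individual hulls add nothing, and the gap between them stays False:

```python
import numpy as np
from skimage.morphology import convex_hull_object

two_squares = np.zeros((9, 9), dtype=bool)
two_squares[1:3, 1:3] = True
two_squares[6:8, 6:8] = True
hulls = convex_hull_object(two_squares)
# Each object's hull equals the object itself; the background
# between the two objects is not filled.
print(bool(hulls[two_squares].all()), bool(hulls[4, 4]))
```

With convex_hull_image instead, a single hull spanning both squares would fill the region between them.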
cube
skimage.morphology.cube(width, dtype=<class 'numpy.uint8'>) [source]
Generates a cube-shaped structuring element. This is the 3D equivalent of a square. Every pixel along the perimeter has a chessboard distance no greater than radius (radius=floor(width/2)) pixels. Parameters
widthint
The width, height and depth of the cube. Returns
selemndarray
A structuring element consisting only of ones, i.e. every pixel belongs to the neighborhood. Other Parameters
dtypedata-type
The data type of the structuring element.
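For instance (a quick check, assuming scikit-image is installed), cube(3) is a 3x3x3 array of ones:

```python
from skimage.morphology import cube

c = cube(3)
print(c.shape)       # (3, 3, 3)
# Every voxel belongs to the neighborhood.
print(int(c.sum()))  # 27
```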
diameter_closing
skimage.morphology.diameter_closing(image, diameter_threshold=8, connectivity=1, parent=None, tree_traverser=None) [source]
Perform a diameter closing of the image. Diameter closing removes all dark structures of an image with maximal extension smaller than diameter_threshold. The maximal extension is defined as the maximal extension of the bounding box. The operator is also called Bounding Box Closing. In practice, the result is similar to a morphological closing, but long and thin structures are not removed. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the diameter_closing is to be calculated. This image can be of any type.
diameter_thresholdunsigned int
The maximal extension parameter (number of pixels). The default value is 8.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Precomputed parent image representing the max tree of the inverted image. This function is fast, if precomputed parent and tree_traverser are provided. See Note for further details.
tree_traverser1D array, int64, optional
Precomputed traverser, where the pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). This function is fast, if precomputed parent and tree_traverser are provided. See Note for further details. Returns
outputndarray
Output image of the same shape and type as input image. See also
skimage.morphology.area_opening
skimage.morphology.area_closing
skimage.morphology.diameter_opening
skimage.morphology.max_tree
Notes If a max-tree representation (parent and tree_traverser) are given to the function, they must be calculated from the inverted image for this function, i.e.: >>> P, S = max_tree(invert(f)) >>> closed = diameter_closing(f, 3, parent=P, tree_traverser=S) References
1
Walter, T., & Klein, J.-C. (2002). Automatic Detection of Microaneurysms in Color Fundus Images of the Human Retina by Means of the Bounding Box Closing. In A. Colosimo, P. Sirabella, A. Giuliani (Eds.), Medical Data Analysis. Lecture Notes in Computer Science, vol 2526, pp. 210-220. Springer Berlin Heidelberg. DOI:10.1007/3-540-36104-9_23
2
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a minimum in the center and 4 additional local minima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 160; f[2:4,9:11] = 140; f[9:11,2:4] = 120
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the diameter closing: >>> closed = diameter_closing(f, 3, connectivity=1)
All small minima with a maximal extension of 2 or less are removed. The remaining minima all have a maximal extension of at least 3.
diameter_opening
skimage.morphology.diameter_opening(image, diameter_threshold=8, connectivity=1, parent=None, tree_traverser=None) [source]
Perform a diameter opening of the image. Diameter opening removes all bright structures of an image with maximal extension smaller than diameter_threshold. The maximal extension is defined as the maximal extension of the bounding box. The operator is also called Bounding Box Opening. In practice, the result is similar to a morphological opening, but long and thin structures are not removed. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the area_opening is to be calculated. This image can be of any type.
diameter_thresholdunsigned int
The maximal extension parameter (number of pixels). The default value is 8.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as the input image. See also
skimage.morphology.area_opening
skimage.morphology.area_closing
skimage.morphology.diameter_closing
skimage.morphology.max_tree
References
1
Walter, T., & Klein, J.-C. (2002). Automatic Detection of Microaneurysms in Color Fundus Images of the Human Retina by Means of the Bounding Box Closing. In A. Colosimo, P. Sirabella, A. Giuliani (Eds.), Medical Data Analysis. Lecture Notes in Computer Science, vol 2526, pp. 210-220. Springer Berlin Heidelberg. DOI:10.1007/3-540-36104-9_23
2
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a maximum in the center and 4 additional local maxima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 40; f[2:4,9:11] = 60; f[9:11,2:4] = 80
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the diameter opening: >>> open = diameter_opening(f, 3, connectivity=1)
The peaks with a maximal extension of 2 or less are removed. The remaining peaks all have a maximal extension of at least 3.
diamond
skimage.morphology.diamond(radius, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, diamond-shaped structuring element. A pixel is part of the neighborhood (i.e. labeled 1) if the city block/Manhattan distance between it and the center of the neighborhood is no greater than radius. Parameters
radiusint
The radius of the diamond-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element.
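For instance (a quick check, assuming scikit-image is installed), diamond(1) keeps exactly the pixels at Manhattan distance 1 or less from the center:

```python
from skimage.morphology import diamond

# City block (Manhattan) distance <= 1 from the center:
print(diamond(1))
# [[0 1 0]
#  [1 1 1]
#  [0 1 0]]
```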
dilation
skimage.morphology.dilation(image, selem=None, out=None, shift_x=False, shift_y=False) [source]
Return greyscale morphological dilation of an image. Morphological dilation sets a pixel at (i,j) to the maximum over all pixels in the neighborhood centered at (i,j). Dilation enlarges bright regions and shrinks dark regions. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated.
shift_x, shift_ybool, optional
shift structuring element about center point. This only affects eccentric structuring elements (i.e. selem with even numbered sides). Returns
dilateduint8 array, same shape and type as image
The result of the morphological dilation. Notes For uint8 (and uint16 up to a certain bit-depth) data, the lower algorithm complexity makes the skimage.filters.rank.maximum function more efficient for larger images and structuring elements. Examples >>> # Dilation enlarges bright regions
>>> import numpy as np
>>> from skimage.morphology import square
>>> bright_pixel = np.array([[0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0],
... [0, 0, 1, 0, 0],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> dilation(bright_pixel, square(3))
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=uint8)
Examples using skimage.morphology.dilation
Rank filters disk
skimage.morphology.disk(radius, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, disk-shaped structuring element. A pixel is within the neighborhood if the Euclidean distance between it and the origin is no greater than radius. Parameters
radiusint
The radius of the disk-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element.
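For instance (a quick check, assuming scikit-image is installed), disk(2) excludes the grid points whose Euclidean distance from the center exceeds 2:

```python
from skimage.morphology import disk

print(disk(2))
# [[0 0 1 0 0]
#  [0 1 1 1 0]
#  [1 1 1 1 1]
#  [0 1 1 1 0]
#  [0 0 1 0 0]]
```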
Examples using skimage.morphology.disk
Local Histogram Equalization
Entropy
Markers for watershed transform
Flood Fill
Segment human cells (in mitosis)
Rank filters erosion
skimage.morphology.erosion(image, selem=None, out=None, shift_x=False, shift_y=False) [source]
Return greyscale morphological erosion of an image. Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated.
shift_x, shift_ybool, optional
shift structuring element about center point. This only affects eccentric structuring elements (i.e. selem with even numbered sides). Returns
erodedarray, same shape as image
The result of the morphological erosion. Notes For uint8 (and uint16 up to a certain bit-depth) data, the lower algorithm complexity makes the skimage.filters.rank.minimum function more efficient for larger images and structuring elements. Examples >>> # Erosion shrinks bright regions
>>> import numpy as np
>>> from skimage.morphology import square
>>> bright_square = np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> erosion(bright_square, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8)
flood
skimage.morphology.flood(image, seed_point, *, selem=None, connectivity=None, tolerance=None) [source]
Mask corresponding to a flood fill. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found. Parameters
imagendarray
An n-dimensional array.
seed_pointtuple or int
The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s and have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected).
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
tolerancefloat or int, optional
If None (default), adjacent values must be strictly equal to the initial value of image at seed_point. This is fastest. If a value is given, a comparison will be done at every point and if within tolerance of the initial value will also be filled (inclusive). Returns
maskndarray
A Boolean array with the same shape as image is returned, with True values for areas connected to and equal (or within tolerance of) the seed point. All other values are False. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. This function returns just the mask representing the fill. If indices are desired rather than masks for memory reasons, the user can simply run numpy.nonzero on the result, save the indices, and discard this mask. Examples >>> from skimage.morphology import flood
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = 1
>>> image[3, 0] = 1
>>> image[1:3, 4:6] = 2
>>> image[3, 6] = 3
>>> image
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 2, 2, 0],
[0, 1, 1, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, with full connectivity (diagonals included): >>> mask = flood(image, (1, 1))
>>> image_flooded = image.copy()
>>> image_flooded[mask] = 5
>>> image_flooded
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[5, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> mask = flood(image, (1, 1), connectivity=1)
>>> image_flooded = image.copy()
>>> image_flooded[mask] = 5
>>> image_flooded
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill with a tolerance: >>> mask = flood(image, (0, 0), tolerance=1)
>>> image_flooded = image.copy()
>>> image_flooded[mask] = 5
>>> image_flooded
array([[5, 5, 5, 5, 5, 5, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 5, 5, 3]])
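Conceptually, the tolerance mask above can be reproduced with scipy.ndimage: threshold the image around the seed value, label the connected candidates, and keep the component containing the seed. This is a sketch only; the actual implementation floods outward from the seed without labeling the whole image.

```python
import numpy as np
from scipy import ndimage as ndi

def flood_sketch(image, seed_point, tolerance=0):
    """Conceptual flood: mask of pixels connected to the seed whose
    values lie within +/- tolerance of the seed value."""
    seed_value = image[seed_point]
    candidates = np.abs(image - seed_value) <= tolerance
    # full connectivity: all 3**ndim - 1 adjacent pixels are neighbors
    structure = np.ones((3,) * image.ndim, dtype=bool)
    labels, _ = ndi.label(candidates, structure=structure)
    return labels == labels[seed_point]

# same image as in the example above
image = np.zeros((4, 7), dtype=int)
image[1:3, 1:3] = 1
image[3, 0] = 1
image[1:3, 4:6] = 2
image[3, 6] = 3
mask = flood_sketch(image, (0, 0), tolerance=1)
```

On this input the sketch reproduces the tolerance example: everything except the 2s and the 3 is connected to the seed.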
flood_fill
skimage.morphology.flood_fill(image, seed_point, new_value, *, selem=None, connectivity=None, tolerance=None, in_place=False, inplace=None) [source]
Perform flood filling on an image. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found, then set to new_value. Parameters
imagendarray
An n-dimensional array.
seed_pointtuple or int
The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer.
new_valueimage type
New value to set the entire fill. This must be chosen in agreement with the dtype of image.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s and have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected).
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
tolerancefloat or int, optional
If None (default), adjacent values must be strictly equal to the value of image at seed_point to be filled. This is fastest. If a tolerance is provided, adjacent points with values within plus or minus tolerance from the seed point are filled (inclusive).
in_placebool, optional
If True, flood filling is applied to image in place. If False, the flood filled result is returned without modifying the input image (default).
inplacebool, optional
This parameter is deprecated and will be removed in version 0.19.0 in favor of in_place. If True, flood filling is applied to image inplace. If False, the flood filled result is returned without modifying the input image (default). Returns
filledndarray
An array with the same shape as image is returned, with values in areas connected to and equal (or within tolerance of) the seed point replaced with new_value. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. Examples >>> from skimage.morphology import flood_fill
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = 1
>>> image[3, 0] = 1
>>> image[1:3, 4:6] = 2
>>> image[3, 6] = 3
>>> image
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 2, 2, 0],
[0, 1, 1, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, with full connectivity (diagonals included): >>> flood_fill(image, (1, 1), 5)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[5, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> flood_fill(image, (1, 1), 5, connectivity=1)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill with a tolerance: >>> flood_fill(image, (0, 0), 5, tolerance=1)
array([[5, 5, 5, 5, 5, 5, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 5, 5, 3]])
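For the strict-equality case (tolerance=None), the behavior above matches the textbook breadth-first flood fill. A self-contained 2D sketch, not the library's implementation:

```python
import numpy as np
from collections import deque

def flood_fill_bfs(image, seed_point, new_value, connectivity=2):
    """Textbook BFS flood fill for the 2D, tolerance=None case."""
    out = image.copy()
    target = out[seed_point]
    if target == new_value:
        return out
    if connectivity == 1:  # orthogonal neighbors only
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # full connectivity: diagonals included
        offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                   if (i, j) != (0, 0)]
    rows, cols = out.shape
    queue = deque([seed_point])
    out[seed_point] = new_value
    while queue:
        r, c = queue.popleft()
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and out[rr, cc] == target:
                out[rr, cc] = new_value
                queue.append((rr, cc))
    return out

# same image as in the example above
image = np.zeros((4, 7), dtype=int)
image[1:3, 1:3] = 1
image[3, 0] = 1
image[1:3, 4:6] = 2
image[3, 6] = 3
filled = flood_fill_bfs(image, (1, 1), 5, connectivity=1)
```

With connectivity=1 the pixel at (3, 0) keeps its value, exactly as in the example; with the default full connectivity it is filled via its diagonal neighbor.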
h_maxima
skimage.morphology.h_maxima(image, h, selem=None) [source]
Determine all maxima of the image with height >= h. The local maxima are defined as connected sets of pixels with equal grey level strictly greater than the grey level of all pixels in the direct neighborhood of the set. A local maximum M of height h is a local maximum for which there is at least one path joining M with an equal or higher local maximum on which the minimal value is f(M) - h (i.e. the values along the path are not decreasing by more than h with respect to the maximum’s value) and no path to an equal or higher local maximum for which the minimal value is greater. The global maxima of the image are also found by this function. Parameters
imagendarray
The input image for which the maxima are to be calculated.
hunsigned integer
The minimal height of all extracted maxima.
selemndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the ball of radius 1 according to the maximum norm (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.) Returns
h_maxndarray
The local maxima of height >= h and the global maxima. The resulting image is a binary image, where pixels belonging to the determined maxima take value 1, the others take value 0. See also
skimage.morphology.extrema.h_minima
skimage.morphology.extrema.local_maxima
skimage.morphology.extrema.local_minima
References
1
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import extrema
We create an image (a quadratic function with a maximum in the center and 4 additional constant maxima). The heights of the maxima are 1, 21, 41, 61, 81. >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 40; f[2:4,7:9] = 60; f[7:9,2:4] = 80; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all maxima with a height of at least 40: >>> maxima = extrema.h_maxima(f, 40)
The resulting image will contain 3 local maxima.
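The h-maxima can also be obtained through morphological reconstruction by dilation: reconstruct image - h under image and keep the pixels whose residue reaches h. Below is a slow, self-contained iterative sketch (the library uses a much faster algorithm); it mirrors the example above.

```python
import numpy as np
from scipy import ndimage as ndi

def h_maxima_sketch(image, h):
    """h-maxima via reconstruction by dilation: iteratively dilate the
    seed (image - h), clip by the mask (image) until stable, then keep
    pixels whose residue image - rec reaches h."""
    rec = image - h
    while True:
        dilated = np.minimum(
            ndi.grey_dilation(rec, size=(3,) * image.ndim), image)
        if np.array_equal(dilated, rec):
            break
        rec = dilated
    return (image - rec >= h).astype(np.uint8)

# same image as in the example above
w = 10
x, y = np.mgrid[0:w, 0:w]
f = 20 - 0.2 * ((x - w / 2) ** 2 + (y - w / 2) ** 2)
f[2:4, 2:4] = 40; f[2:4, 7:9] = 60; f[7:9, 2:4] = 80; f[7:9, 7:9] = 100
f = f.astype(int)
maxima = h_maxima_sketch(f, 40)
```

The three maxima of height at least 40 are the three 2x2 plateaus with values 60, 80 and 100; the plateau of value 40 (height 21) and the central maximum (height 1) are suppressed.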
h_minima
skimage.morphology.h_minima(image, h, selem=None) [source]
Determine all minima of the image with depth >= h. The local minima are defined as connected sets of pixels with equal grey level strictly smaller than the grey levels of all pixels in the direct neighborhood of the set. A local minimum M of depth h is a local minimum for which there is at least one path joining M with an equal or lower local minimum on which the maximal value is f(M) + h (i.e. the values along the path are not increasing by more than h with respect to the minimum’s value) and no path to an equal or lower local minimum for which the maximal value is smaller. The global minima of the image are also found by this function. Parameters
imagendarray
The input image for which the minima are to be calculated.
hunsigned integer
The minimal depth of all extracted minima.
selemndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the ball of radius 1 according to the maximum norm (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.) Returns
h_minndarray
The local minima of depth >= h and the global minima. The resulting image is a binary image, where pixels belonging to the determined minima take value 1, the others take value 0. See also
skimage.morphology.extrema.h_maxima
skimage.morphology.extrema.local_maxima
skimage.morphology.extrema.local_minima
References
1
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import extrema
We create an image (a quadratic function with a minimum in the center and 4 additional constant minima). The depths of the minima are 1, 21, 41, 61, 81. >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 160; f[2:4,7:9] = 140; f[7:9,2:4] = 120; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all minima with a depth of at least 40: >>> minima = extrema.h_minima(f, 40)
The resulting image will contain 3 local minima.
label
skimage.morphology.label(input, background=None, return_num=False, connectivity=None) [source]
Label connected regions of an integer array. Two pixels are connected when they are neighbors and have the same value. In 2D, they can be neighbors either in a 1- or 2-connected sense. The value refers to the maximum number of orthogonal hops to consider a pixel/voxel a neighbor: 1-connectivity 2-connectivity diagonal connection close-up
[ ] [ ] [ ] [ ] [ ]
| \ | / | <- hop 2
[ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
| / | \ hop 1
[ ] [ ] [ ] [ ]
Parameters
inputndarray of dtype int
Image to label.
backgroundint, optional
Consider all pixels with this value as background pixels, and label them as 0. By default, 0-valued pixels are considered as background pixels.
return_numbool, optional
Whether to return the number of assigned labels.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. Returns
labelsndarray of dtype int
Labeled array, where all connected regions are assigned the same integer value.
numint, optional
Number of labels, which equals the maximum label index and is only returned if return_num is True. See also
regionprops
regionprops_table
References
1
Christophe Fiorio and Jens Gustedt, “Two linear time Union-Find strategies for image processing”, Theoretical Computer Science 154 (1996), pp. 165-181.
2
Kensheng Wu, Ekow Otoo and Arie Shoshani, “Optimizing connected component labeling algorithms”, Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864 Examples >>> import numpy as np
>>> x = np.eye(3).astype(int)
>>> print(x)
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
[0 2 0]
[0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
[2 1 2]
[2 2 1]]
>>> x = np.array([[1, 0, 0],
... [1, 1, 5],
... [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
[1 1 2]
[0 0 0]]
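The behavior above can be approximated with scipy.ndimage by labeling each non-background value separately and offsetting the labels. Note that for multi-valued images the label numbering may differ from skimage.morphology.label; this is a sketch, not the union-find algorithm of the references.

```python
import numpy as np
from scipy import ndimage as ndi

def label_sketch(image, background=0, connectivity=None):
    """Label same-valued connected regions; background pixels get 0."""
    if connectivity is None:
        connectivity = image.ndim  # full connectivity by default
    structure = ndi.generate_binary_structure(image.ndim, connectivity)
    out = np.zeros(image.shape, dtype=int)
    offset = 0
    for value in np.unique(image):
        if value == background:
            continue
        # label the connected components of this gray value only
        lab, num = ndi.label(image == value, structure=structure)
        out[lab > 0] = lab[lab > 0] + offset
        offset += num
    return out

x = np.eye(3, dtype=int)
```

On the identity matrix this reproduces the example above: three labels with connectivity=1, a single label with connectivity=2.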
local_maxima
skimage.morphology.local_maxima(image, selem=None, connectivity=None, indices=False, allow_borders=True) [source]
Find local maxima of n-dimensional array. The local maxima are defined as connected sets of pixels with equal gray level (plateaus) strictly greater than the gray levels of all pixels in the neighborhood. Parameters
imagendarray
An n-dimensional array.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel (True denotes a connected pixel). It must be a boolean array and have the same number of dimensions as image. If neither selem nor connectivity are given, all adjacent pixels are considered as part of the neighborhood.
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
indicesbool, optional
If True, the output will be a tuple of one-dimensional arrays representing the indices of local maxima in each dimension. If False, the output will be a boolean array with the same shape as image.
allow_bordersbool, optional
If true, plateaus that touch the image border are valid maxima. Returns
maximandarray or tuple[ndarray]
If indices is false, a boolean array with the same shape as image is returned with True indicating the position of local maxima (False otherwise). If indices is true, a tuple of one-dimensional arrays containing the coordinates (indices) of all found maxima. Warns
UserWarning
If allow_borders is false and any dimension of the given image is shorter than 3 samples, maxima can’t exist and a warning is shown. See also
skimage.morphology.local_minima
skimage.morphology.h_maxima
skimage.morphology.h_minima
Notes This function operates on the following ideas: Make a first pass over the image’s last dimension and flag candidates for local maxima by comparing pixels in only one direction. If the pixels aren’t connected in the last dimension all pixels are flagged as candidates instead. For each candidate: Perform a flood-fill to find all connected pixels that have the same gray value and are part of the plateau. Consider the connected neighborhood of a plateau: if no bordering sample has a higher gray level, mark the plateau as a definite local maximum. Examples >>> from skimage.morphology import local_maxima
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = 1
>>> image[3, 0] = 1
>>> image[1:3, 4:6] = 2
>>> image[3, 6] = 3
>>> image
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 2, 2, 0],
[0, 1, 1, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Find local maxima by comparing to all neighboring pixels (maximal connectivity): >>> local_maxima(image)
array([[False, False, False, False, False, False, False],
[False, True, True, False, False, False, False],
[False, True, True, False, False, False, False],
[ True, False, False, False, False, False, True]])
>>> local_maxima(image, indices=True)
(array([1, 1, 2, 2, 3, 3]), array([1, 2, 1, 2, 0, 6]))
Find local maxima without comparing to diagonal pixels (connectivity 1): >>> local_maxima(image, connectivity=1)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[ True, False, False, False, False, False, True]])
and exclude maxima that border the image edge: >>> local_maxima(image, connectivity=1, allow_borders=False)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[False, False, False, False, False, False, False]])
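The plateau definition above can be checked directly, if slowly: label each connected region of equal gray level and test whether every pixel just outside it is strictly smaller. This is a sketch, not the candidate-then-flood algorithm described in the Notes.

```python
import numpy as np
from scipy import ndimage as ndi

def local_maxima_sketch(image, connectivity=None):
    """Plateau-aware sketch: a connected region of equal gray level is
    a local maximum iff every pixel just outside it is strictly smaller."""
    if connectivity is None:
        connectivity = image.ndim  # full connectivity by default
    structure = ndi.generate_binary_structure(image.ndim, connectivity)
    out = np.zeros(image.shape, dtype=bool)
    for value in np.unique(image):
        plateaus, num = ndi.label(image == value, structure=structure)
        for lab in range(1, num + 1):
            region = plateaus == lab
            # one-pixel border just outside the plateau
            border = ndi.binary_dilation(region, structure=structure) & ~region
            if np.all(image[border] < value):
                out |= region
    return out

# same image as in the example above
image = np.zeros((4, 7), dtype=int)
image[1:3, 1:3] = 1
image[3, 0] = 1
image[1:3, 4:6] = 2
image[3, 6] = 3
maxima = local_maxima_sketch(image)
```

With full connectivity the 2-plateau is rejected because it borders the 3 diagonally; with connectivity=1 it becomes a maximum, matching the outputs above.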
local_minima
skimage.morphology.local_minima(image, selem=None, connectivity=None, indices=False, allow_borders=True) [source]
Find local minima of n-dimensional array. The local minima are defined as connected sets of pixels with equal gray level (plateaus) strictly smaller than the gray levels of all pixels in the neighborhood. Parameters
imagendarray
An n-dimensional array.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel (True denotes a connected pixel). It must be a boolean array and have the same number of dimensions as image. If neither selem nor connectivity are given, all adjacent pixels are considered as part of the neighborhood.
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
indicesbool, optional
If True, the output will be a tuple of one-dimensional arrays representing the indices of local minima in each dimension. If False, the output will be a boolean array with the same shape as image.
allow_bordersbool, optional
If true, plateaus that touch the image border are valid minima. Returns
minimandarray or tuple[ndarray]
If indices is false, a boolean array with the same shape as image is returned with True indicating the position of local minima (False otherwise). If indices is true, a tuple of one-dimensional arrays containing the coordinates (indices) of all found minima. See also
skimage.morphology.local_maxima
skimage.morphology.h_maxima
skimage.morphology.h_minima
Notes This function operates on the following ideas: Make a first pass over the image’s last dimension and flag candidates for local minima by comparing pixels in only one direction. If the pixels aren’t connected in the last dimension all pixels are flagged as candidates instead. For each candidate: Perform a flood-fill to find all connected pixels that have the same gray value and are part of the plateau. Consider the connected neighborhood of a plateau: if no bordering sample has a smaller gray level, mark the plateau as a definite local minimum. Examples >>> from skimage.morphology import local_minima
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = -1
>>> image[3, 0] = -1
>>> image[1:3, 4:6] = -2
>>> image[3, 6] = -3
>>> image
array([[ 0, 0, 0, 0, 0, 0, 0],
[ 0, -1, -1, 0, -2, -2, 0],
[ 0, -1, -1, 0, -2, -2, 0],
[-1, 0, 0, 0, 0, 0, -3]])
Find local minima by comparing to all neighboring pixels (maximal connectivity): >>> local_minima(image)
array([[False, False, False, False, False, False, False],
[False, True, True, False, False, False, False],
[False, True, True, False, False, False, False],
[ True, False, False, False, False, False, True]])
>>> local_minima(image, indices=True)
(array([1, 1, 2, 2, 3, 3]), array([1, 2, 1, 2, 0, 6]))
Find local minima without comparing to diagonal pixels (connectivity 1): >>> local_minima(image, connectivity=1)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[ True, False, False, False, False, False, True]])
and exclude minima that border the image edge: >>> local_minima(image, connectivity=1, allow_borders=False)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[False, False, False, False, False, False, False]])
max_tree
skimage.morphology.max_tree(image, connectivity=1) [source]
Build the max tree from an image. Component trees represent the hierarchical structure of the connected components resulting from sequential thresholding operations applied to an image. A connected component at one level is parent of a component at a higher level if the latter is included in the first. A max-tree is an efficient representation of a component tree. A connected component at one level is represented by one reference pixel at this level, which is parent to all other pixels at that level and to the reference pixel at the level above. The max-tree is the basis for many morphological operators, namely connected operators. Parameters
imagendarray
The input image for which the max-tree is to be calculated. This image can be of any type.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for a 8-neighborhood. Default value is 1. Returns
parentndarray, int64
Array of same shape as image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). References
1
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
2
Berger, C., Geraud, T., Levillain, R., Widynski, N., Baillard, A., Bertin, E. (2007). Effective Component Tree Computation with Application to Pattern Recognition in Astronomical Imaging. In International Conference on Image Processing (ICIP) (pp. 41-44). DOI:10.1109/ICIP.2007.4379949
3
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
4
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create a small sample image (Figure 1 from [4]) and build the max-tree. >>> image = np.array([[15, 13, 16], [12, 12, 10], [16, 12, 14]])
>>> P, S = max_tree(image, connectivity=2)
max_tree_local_maxima
skimage.morphology.max_tree_local_maxima(image, connectivity=1, parent=None, tree_traverser=None) [source]
Determine all local maxima of the image. The local maxima are defined as connected sets of pixels with equal gray level strictly greater than the gray levels of all pixels in direct neighborhood of the set. The function labels the local maxima. Technically, the implementation is based on the max-tree representation of an image. The function is very efficient if the max-tree representation has already been computed. Otherwise, it is preferable to use the function local_maxima. Parameters
imagendarray
The input image for which the maxima are to be calculated.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
local_maxndarray, uint64
Labeled local maxima of the image. See also
skimage.morphology.local_maxima
skimage.morphology.max_tree
References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a maximum in the center and 4 additional constant maxima). >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 40; f[2:4,7:9] = 60; f[7:9,2:4] = 80; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all local maxima: >>> maxima = max_tree_local_maxima(f)
The resulting image contains the labeled local maxima.
medial_axis
skimage.morphology.medial_axis(image, mask=None, return_distance=False) [source]
Compute the medial axis transform of a binary image. Parameters
imagebinary ndarray, shape (M, N)
The image of the shape to be skeletonized.
maskbinary ndarray, shape (M, N), optional
If a mask is given, only those elements in image with a true value in mask are used for computing the medial axis.
return_distancebool, optional
If true, the distance transform is returned as well as the skeleton. Returns
outndarray of bools
Medial axis transform of the image
distndarray of ints, optional
Distance transform of the image (only returned if return_distance is True) See also
skeletonize
Notes This algorithm computes the medial axis transform of an image as the ridges of its distance transform. The different steps of the algorithm are as follows:
1. A lookup table is used that assigns 0 or 1 to each configuration of the 3x3 binary square, depending on whether the central pixel should be removed or kept. We want a point to be removed if it has more than one neighbor and if removing it does not change the number of connected components.
2. The distance transform to the background is computed, as well as the cornerness of the pixel.
3. The foreground (value of 1) points are ordered by the distance transform, then the cornerness.
4. A cython function is called to reduce the image to its skeleton. It processes pixels in the order determined at the previous step, and removes or maintains a pixel according to the lookup table. Because of the ordering, it is possible to process all pixels in only one pass. Examples >>> square = np.zeros((7, 7), dtype=np.uint8)
>>> square[1:-1, 2:-2] = 1
>>> square
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> medial_axis(square).astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
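The Notes describe the medial axis as the ridges of the distance transform. On the square from the example, scipy's Euclidean distance transform makes this visible: its deepest ridge coincides with the central vertical run of the skeleton shown above. This is an illustration of the idea, not the algorithm itself.

```python
import numpy as np
from scipy import ndimage as ndi

# same shape as in the example above
square = np.zeros((7, 7), dtype=np.uint8)
square[1:-1, 2:-2] = 1

# Euclidean distance from each foreground pixel to the background
dist = ndi.distance_transform_edt(square)

# positions where the distance transform is deepest
ridge = np.argwhere(dist == dist.max())
```

The maximum distance is 2.0, attained exactly at (2, 3), (3, 3) and (4, 3): the pixels that the medial axis output above marks in its central column.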
octagon
skimage.morphology.octagon(m, n, dtype=<class 'numpy.uint8'>) [source]
Generates an octagon-shaped structuring element. The octagon has horizontal and vertical sides of size m and slanted sides of height/width n. The slanted sides are at 45 or 135 degrees to the horizontal axis, so the element’s width and height are equal. Parameters
mint
The size of the horizontal and vertical sides.
nint
The height or width of the slanted sides. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element.
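A plausible construction sketch in plain NumPy: start from an (m + 2n)-sided square and cut the four triangular corners of side n. This is a hypothetical re-implementation for illustration; the library's construction may differ in corner details.

```python
import numpy as np

def octagon_sketch(m, n, dtype=np.uint8):
    """Corner-cutting sketch of an octagonal structuring element
    (hypothetical re-implementation, not the library's code)."""
    size = m + 2 * n
    i, j = np.mgrid[0:size, 0:size]
    # a pixel is in a cut corner if it is within taxicab distance n - 1
    # of one of the four array corners
    corners = (
        (i + j < n)
        | (i + (size - 1 - j) < n)
        | ((size - 1 - i) + j < n)
        | ((size - 1 - i) + (size - 1 - j) < n)
    )
    return (~corners).astype(dtype)

selem = octagon_sketch(3, 2)
```

For m=3, n=2 this yields a 7x7 element whose top row has exactly m ones, symmetric under both horizontal and vertical flips.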
octahedron
skimage.morphology.octahedron(radius, dtype=<class 'numpy.uint8'>) [source]
Generates an octahedron-shaped structuring element. This is the 3D equivalent of a diamond. A pixel is part of the neighborhood (i.e. labeled 1) if the city block/Manhattan distance between it and the center of the neighborhood is no greater than radius. Parameters
radiusint
The radius of the octahedron-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element.
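The definition above (city-block distance no greater than radius) translates directly to NumPy; a sketch:

```python
import numpy as np

def octahedron_sketch(radius, dtype=np.uint8):
    """City-block ball: 1 where the Manhattan distance to the center
    is no greater than radius, matching the definition above."""
    n = 2 * radius + 1
    z, y, x = np.mgrid[0:n, 0:n, 0:n]
    dist = np.abs(z - radius) + np.abs(y - radius) + np.abs(x - radius)
    return (dist <= radius).astype(dtype)

selem = octahedron_sketch(1)
```

For radius 1 this is the center plus its six face neighbors (7 voxels); for radius 2 it is the octahedral number 25.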
opening
skimage.morphology.opening(image, selem=None, out=None) [source]
Return greyscale morphological opening of an image. The morphological opening on an image is defined as an erosion followed by a dilation. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
openingarray, same shape and type as image
The result of the morphological opening. Examples >>> # Open up gap between two bright regions (but also shrink regions)
>>> import numpy as np
>>> from skimage.morphology import square
>>> bad_connection = np.array([[1, 0, 0, 0, 1],
... [1, 1, 0, 1, 1],
... [1, 1, 1, 1, 1],
... [1, 1, 0, 1, 1],
... [1, 0, 0, 0, 1]], dtype=np.uint8)
>>> opening(bad_connection, square(3))
array([[0, 0, 0, 0, 0],
[1, 1, 0, 1, 1],
[1, 1, 0, 1, 1],
[1, 1, 0, 1, 1],
[0, 0, 0, 0, 0]], dtype=uint8)
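Since opening is erosion followed by dilation, the example above can be reproduced with scipy.ndimage's grayscale operators. Border handling differs between the libraries in general (scipy reflects at the border by default), but the two agree on this input.

```python
import numpy as np
from scipy import ndimage as ndi

# same input as in the example above
bad_connection = np.array([[1, 0, 0, 0, 1],
                           [1, 1, 0, 1, 1],
                           [1, 1, 1, 1, 1],
                           [1, 1, 0, 1, 1],
                           [1, 0, 0, 0, 1]], dtype=np.uint8)

# opening = erosion followed by dilation with the same 3x3 window
eroded = ndi.grey_erosion(bad_connection, size=(3, 3))
opened = ndi.grey_dilation(eroded, size=(3, 3))
```

The weak one-pixel bridge in the middle column is removed, and the regions shrink at the top and bottom, matching the output shown above.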
reconstruction
skimage.morphology.reconstruction(seed, mask, method='dilation', selem=None, offset=None) [source]
Perform a morphological reconstruction of an image. Morphological reconstruction by dilation is similar to basic morphological dilation: high-intensity values will replace nearby low-intensity values. The basic dilation operator, however, uses a structuring element to determine how far a value in the input image can spread. In contrast, reconstruction uses two images: a “seed” image, which specifies the values that spread, and a “mask” image, which gives the maximum allowed value at each pixel. The mask image, like the structuring element, limits the spread of high-intensity values. Reconstruction by erosion is simply the inverse: low-intensity values spread from the seed image and are limited by the mask image, which represents the minimum allowed value. Alternatively, you can think of reconstruction as a way to isolate the connected regions of an image. For dilation, reconstruction connects regions marked by local maxima in the seed image: neighboring pixels less-than-or-equal-to those seeds are connected to the seeded region. Local maxima with values larger than the seed image will get truncated to the seed value. Parameters
seedndarray
The seed image (a.k.a. marker image), which specifies the values that are dilated or eroded.
maskndarray
The maximum (dilation) / minimum (erosion) allowed value at each pixel.
method{‘dilation’|’erosion’}, optional
Perform reconstruction by dilation or erosion. In dilation (or erosion), the seed image is dilated (or eroded) until limited by the mask image. For dilation, each seed value must be less than or equal to the corresponding mask value; for erosion, the reverse is true. Default is ‘dilation’.
selemndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the n-D square of radius equal to 1 (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.)
offsetndarray, optional
The coordinates of the center of the structuring element. Default is located on the geometrical center of the selem, in that case selem dimensions must be odd. Returns
reconstructedndarray
The result of morphological reconstruction. Notes The algorithm is taken from [1]. Applications for greyscale reconstruction are discussed in [2] and [3]. References
1
Robinson, “Efficient morphological reconstruction: a downhill filter”, Pattern Recognition Letters 25 (2004) 1759-1767.
2
Vincent, L., “Morphological Grayscale Reconstruction in Image Analysis: Applications and Efficient Algorithms”, IEEE Transactions on Image Processing (1993)
3
Soille, P., “Morphological Image Analysis: Principles and Applications”, Chapter 6, 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import reconstruction
First, we create a sinusoidal mask image with peaks at middle and ends. >>> x = np.linspace(0, 4 * np.pi)
>>> y_mask = np.cos(x)
Then, we create a seed image initialized to the minimum mask value (for reconstruction by dilation, min-intensity values don’t spread) and add “seeds” to the left and right peak, but at a fraction of peak value (1). >>> y_seed = y_mask.min() * np.ones_like(x)
>>> y_seed[0] = 0.5
>>> y_seed[-1] = 0
>>> y_rec = reconstruction(y_seed, y_mask)
The reconstructed image (or curve, in this case) is exactly the same as the mask image, except that the peaks are truncated to 0.5 and 0. The middle peak disappears completely: Since there were no seed values in this peak region, its reconstructed value is truncated to the surrounding value (-1). As a more practical example, we try to extract the bright features of an image by subtracting a background image created by reconstruction. >>> y, x = np.mgrid[:20:0.5, :20:0.5]
>>> bumps = np.sin(x) + np.sin(y)
To create the background image, set the mask image to the original image, and the seed image to the original image with an intensity offset, h. >>> h = 0.3
>>> seed = bumps - h
>>> background = reconstruction(seed, bumps)
The resulting reconstructed image looks exactly like the original image, but with the peaks of the bumps cut off. Subtracting this reconstructed image from the original image leaves just the peaks of the bumps >>> hdome = bumps - background
This operation is known as the h-dome of the image and leaves features of height h in the subtracted image.
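Reconstruction by dilation can be sketched as iterated geodesic dilation: dilate the seed, clip by the mask, and repeat until stable. This is far slower than the downhill filter of [1] but makes the definition concrete; the 1D example above is reused below.

```python
import numpy as np
from scipy import ndimage as ndi

def reconstruction_by_dilation(seed, mask):
    """Iterated geodesic dilation sketch: dilate, clip by the mask,
    repeat until the result stops changing (slow but simple)."""
    rec = seed.copy()
    while True:
        dilated = ndi.grey_dilation(rec, size=(3,) * rec.ndim)
        nxt = np.minimum(dilated, mask)
        if np.array_equal(nxt, rec):
            return rec
        rec = nxt

# same 1D example as above
x = np.linspace(0, 4 * np.pi)
y_mask = np.cos(x)
y_seed = y_mask.min() * np.ones_like(x)
y_seed[0] = 0.5
y_seed[-1] = 0
y_rec = reconstruction_by_dilation(y_seed, y_mask)
```

As described above, the left peak is truncated to the seed value 0.5 and the unseeded middle peak is flattened to roughly the surrounding minimum.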
rectangle
skimage.morphology.rectangle(nrows, ncols, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, rectangular-shaped structuring element. Every pixel in the rectangle generated for a given width and given height belongs to the neighborhood. Parameters
nrowsint
The number of rows of the rectangle.
ncolsint
The number of columns of the rectangle. Returns
selemndarray
A structuring element consisting only of ones, i.e. every pixel belongs to the neighborhood. Other Parameters
dtypedata-type
The data type of the structuring element. Notes The use of width and height has been deprecated in version 0.18.0. Use nrows and ncols instead.
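Since every pixel of the rectangle belongs to the neighborhood, the element is simply an array of ones; a sketch:

```python
import numpy as np

def rectangle_sketch(nrows, ncols, dtype=np.uint8):
    # every pixel belongs to the neighborhood, so this is just ones
    return np.ones((nrows, ncols), dtype=dtype)

selem = rectangle_sketch(3, 5)
```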
remove_small_holes
skimage.morphology.remove_small_holes(ar, area_threshold=64, connectivity=1, in_place=False) [source]
Remove contiguous holes smaller than the specified size. Parameters
arndarray (arbitrary shape, int or bool type)
The array containing the connected components of interest.
area_thresholdint, optional (default: 64)
The maximum area, in pixels, of a contiguous hole that will be filled. Replaces min_size.
connectivityint, {1, 2, …, ar.ndim}, optional (default: 1)
The connectivity defining the neighborhood of a pixel.
in_placebool, optional (default: False)
If True, remove the connected components in the input array itself. Otherwise, make a copy. Returns
outndarray, same shape and type as input ar
The input array with small holes within connected components removed. Raises
TypeError
If the input array is of an invalid type, such as float or string. ValueError
If the input array contains negative values. Notes If the array type is int, it is assumed that it contains already-labeled objects. The labels are not kept in the output image (this function always outputs a bool image). It is suggested that labeling is completed after using this function. Examples >>> from skimage import morphology
>>> a = np.array([[1, 1, 1, 1, 1, 0],
... [1, 1, 1, 0, 1, 0],
... [1, 0, 0, 1, 1, 0],
... [1, 1, 1, 1, 1, 0]], bool)
>>> b = morphology.remove_small_holes(a, 2)
>>> b
array([[ True, True, True, True, True, False],
[ True, True, True, True, True, False],
[ True, False, False, True, True, False],
[ True, True, True, True, True, False]])
>>> c = morphology.remove_small_holes(a, 2, connectivity=2)
>>> c
array([[ True, True, True, True, True, False],
[ True, True, True, False, True, False],
[ True, False, False, True, True, False],
[ True, True, True, True, True, False]])
>>> d = morphology.remove_small_holes(a, 2, in_place=True)
>>> d is a
True
Examples using skimage.morphology.remove_small_holes
Measure region properties remove_small_objects
skimage.morphology.remove_small_objects(ar, min_size=64, connectivity=1, in_place=False) [source]
Remove objects smaller than the specified size. Expects ar to be an array with labeled objects, and removes objects smaller than min_size. If ar is bool, the image is first labeled. This leads to potentially different behavior for bool and 0-and-1 arrays. Parameters
arndarray (arbitrary shape, int or bool type)
The array containing the objects of interest. If the array type is int, the ints must be non-negative.
min_sizeint, optional (default: 64)
The smallest allowable object size.
connectivityint, {1, 2, …, ar.ndim}, optional (default: 1)
The connectivity defining the neighborhood of a pixel. Used during labelling if ar is bool.
in_placebool, optional (default: False)
If True, remove the objects in the input array itself. Otherwise, make a copy. Returns
outndarray, same shape and type as input ar
The input array with small connected components removed. Raises
TypeError
If the input array is of an invalid type, such as float or string. ValueError
If the input array contains negative values. Examples >>> from skimage import morphology
>>> a = np.array([[0, 0, 0, 1, 0],
... [1, 1, 1, 0, 0],
... [1, 1, 1, 0, 1]], bool)
>>> b = morphology.remove_small_objects(a, 6)
>>> b
array([[False, False, False, False, False],
[ True, True, True, False, False],
[ True, True, True, False, False]])
>>> c = morphology.remove_small_objects(a, 7, connectivity=2)
>>> c
array([[False, False, False, True, False],
[ True, True, True, False, False],
[ True, True, True, False, False]])
>>> d = morphology.remove_small_objects(a, 6, in_place=True)
>>> d is a
True
Examples using skimage.morphology.remove_small_objects
Measure region properties skeletonize
skimage.morphology.skeletonize(image, *, method=None) [source]
Compute the skeleton of a binary image. Thinning is used to reduce each connected component in a binary image to a single-pixel wide skeleton. Parameters
imagendarray, 2D or 3D
A binary image containing the objects to be skeletonized. Zeros represent background, nonzero values are foreground.
method{‘zhang’, ‘lee’}, optional
Which algorithm to use. Zhang’s algorithm [Zha84] only works for 2D images, and is the default for 2D. Lee’s algorithm [Lee94] works for 2D or 3D images and is the default for 3D. Returns
skeletonndarray
The thinned image. See also
medial_axis
References
Lee94
T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models via 3-D medial surface/axis thinning algorithms. Computer Vision, Graphics, and Image Processing, 56(6):462-478, 1994.
Zha84
A fast parallel algorithm for thinning digital patterns, T. Y. Zhang and C. Y. Suen, Communications of the ACM, March 1984, Volume 27, Number 3. Examples >>> X, Y = np.ogrid[0:9, 0:9]
>>> ellipse = (1./3 * (X - 4)**2 + (Y - 4)**2 < 3**2).astype(np.uint8)
>>> ellipse
array([[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0]], dtype=uint8)
>>> skel = skeletonize(ellipse)
>>> skel.astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
skeletonize_3d
skimage.morphology.skeletonize_3d(image) [source]
Compute the skeleton of a binary image. Thinning is used to reduce each connected component in a binary image to a single-pixel wide skeleton. Parameters
imagendarray, 2D or 3D
A binary image containing the objects to be skeletonized. Zeros represent background, nonzero values are foreground. Returns
skeletonndarray
The thinned image. See also
skeletonize, medial_axis
Notes The method of [Lee94] uses an octree data structure to examine a 3x3x3 neighborhood of a pixel. The algorithm proceeds by iteratively sweeping over the image, and removing pixels at each iteration until the image stops changing. Each iteration consists of two steps: first, a list of candidates for removal is assembled; then pixels from this list are rechecked sequentially, to better preserve connectivity of the image. The algorithm this function implements is different from the algorithms used by either skeletonize or medial_axis, thus for 2D images the results produced by this function are generally different. References
Lee94
T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models via 3-D medial surface/axis thinning algorithms. Computer Vision, Graphics, and Image Processing, 56(6):462-478, 1994.
square
skimage.morphology.square(width, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, square-shaped structuring element. Every pixel along the perimeter has a chessboard distance no greater than radius (radius=floor(width/2)) pixels. Parameters
widthint
The width and height of the square. Returns
selemndarray
A structuring element consisting only of ones, i.e. every pixel belongs to the neighborhood. Other Parameters
dtypedata-type
The data type of the structuring element.
star
skimage.morphology.star(a, dtype=<class 'numpy.uint8'>) [source]
Generates a star-shaped structuring element. The star has 8 vertices and is an overlap of a square of size 2*a + 1 with its 45 degree rotated version. The slanted sides are 45 or 135 degrees to the horizontal axis. Parameters
aint
Parameter deciding the size of the star structuring element. The side of the square array returned is 2*a + 1 + 2*floor(a / 2). Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element.
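The side-length relation quoted above can be checked directly (`star_side` is a hypothetical helper for illustration only):

```python
def star_side(a):
    # Side of the square array returned by star(a): 2*a + 1 + 2*floor(a/2)
    return 2 * a + 1 + 2 * (a // 2)

sides = [star_side(a) for a in (1, 2, 3, 4)]
```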
thin
skimage.morphology.thin(image, max_iter=None) [source]
Perform morphological thinning of a binary image. Parameters
imagebinary (M, N) ndarray
The image to be thinned.
max_iterint, number of iterations, optional
Regardless of the value of this parameter, the thinned image is returned immediately if an iteration produces no change. If this parameter is specified it thus sets an upper bound on the number of iterations performed. Returns
outndarray of bool
Thinned image. See also
skeletonize, medial_axis
Notes This algorithm [1] works by making multiple passes over the image, removing pixels matching a set of criteria designed to thin connected regions while preserving eight-connected components and 2 x 2 squares [2]. In each of the two sub-iterations the algorithm correlates the intermediate skeleton image with a neighborhood mask, then looks up each neighborhood in a lookup table indicating whether the central pixel should be deleted in that sub-iteration. References
1
Z. Guo and R. W. Hall, “Parallel thinning with two-subiteration algorithms,” Comm. ACM, vol. 32, no. 3, pp. 359-373, 1989. DOI:10.1145/62065.62074
2
Lam, L., Seong-Whan Lee, and Ching Y. Suen, “Thinning Methodologies-A Comprehensive Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 14, No. 9, p. 879, 1992. DOI:10.1109/34.161346 Examples >>> square = np.zeros((7, 7), dtype=np.uint8)
>>> square[1:-1, 2:-2] = 1
>>> square[0, 1] = 1
>>> square
array([[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> skel = thin(square)
>>> skel.astype(np.uint8)
array([[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
watershed
skimage.morphology.watershed(image, markers=None, connectivity=1, offset=None, mask=None, compactness=0, watershed_line=False) [source]
Deprecated function. Use skimage.segmentation.watershed instead. Find watershed basins in image flooded from given markers. Parameters
imagendarray (2-D, 3-D, …) of integers
Data array where the lowest value points are labeled first.
markersint, or ndarray of int, same shape as image, optional
The desired number of markers, or an array marking the basins with the values to be assigned in the label matrix. Zero means not a marker. If None (no markers given), the local minima of the image are used as markers.
connectivityndarray, optional
An array with the same number of dimensions as image whose non-zero elements indicate neighbors for connection. Following the scipy convention, default is a one-connected array of the dimension of the image.
offsetarray_like of shape image.ndim, optional
offset of the connectivity (one offset per dimension)
maskndarray of bools or 0s and 1s, optional
Array of same shape as image. Only points at which mask == True will be labeled.
compactnessfloat, optional
Use compact watershed [3] with given compactness parameter. Higher values result in more regularly-shaped watershed basins.
watershed_linebool, optional
If watershed_line is True, a one-pixel wide line separates the regions obtained by the watershed algorithm. The line has the label 0. Returns
out: ndarray
A labeled matrix of the same type and shape as markers See also
skimage.segmentation.random_walker
random walker segmentation A segmentation algorithm based on anisotropic diffusion, usually slower than the watershed but with good results on noisy data and boundaries with holes. Notes This function implements a watershed algorithm [1] [2] that apportions pixels into marked basins. The algorithm uses a priority queue to hold the pixels with the metric for the priority queue being pixel value, then the time of entry into the queue - this settles ties in favor of the closest marker. Some ideas taken from Soille, “Automated Basin Delineation from Digital Elevation Models Using Mathematical Morphology”, Signal Processing 20 (1990) 171-182 The most important insight in the paper is that entry time onto the queue solves two problems: a pixel should be assigned to the neighbor with the largest gradient or, if there is no gradient, pixels on a plateau should be split between markers on opposite sides. This implementation converts all arguments to specific, lowest common denominator types, then passes these to a C algorithm. Markers can be determined manually, or automatically using for example the local minima of the gradient of the image, or the local maxima of the distance function to the background for separating overlapping objects (see example). References
1
https://en.wikipedia.org/wiki/Watershed_%28image_processing%29
2
http://cmm.ensmp.fr/~beucher/wtshed.html
3
Peer Neubert & Peter Protzel (2014). Compact Watershed and Preemptive SLIC: On Improving Trade-offs of Superpixel Segmentation Algorithms. ICPR 2014, pp 996-1001. DOI:10.1109/ICPR.2014.181 https://www.tu-chemnitz.de/etit/proaut/publications/cws_pSLIC_ICPR.pdf Examples The watershed algorithm is useful to separate overlapping objects. We first generate an initial image with two overlapping circles: >>> import numpy as np
>>> x, y = np.indices((80, 80))
>>> x1, y1, x2, y2 = 28, 28, 44, 52
>>> r1, r2 = 16, 20
>>> mask_circle1 = (x - x1)**2 + (y - y1)**2 < r1**2
>>> mask_circle2 = (x - x2)**2 + (y - y2)**2 < r2**2
>>> image = np.logical_or(mask_circle1, mask_circle2)
Next, we want to separate the two circles. We generate markers at the maxima of the distance to the background: >>> from scipy import ndimage as ndi
>>> distance = ndi.distance_transform_edt(image)
>>> from skimage.feature import peak_local_max
>>> local_maxi = peak_local_max(distance, labels=image,
... footprint=np.ones((3, 3)),
... indices=False)
>>> markers = ndi.label(local_maxi)[0]
Finally, we run the watershed on the image and markers: >>> labels = watershed(-distance, markers, mask=image)
The algorithm works also for 3-D images, and can be used for example to separate overlapping spheres.
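The priority-queue mechanism described in the Notes can be sketched in a few lines of pure Python/NumPy. This is a didactic 4-connected flood, not the C implementation used by skimage; `watershed_2d` is a hypothetical name:

```python
import heapq
import numpy as np

def watershed_2d(image, markers):
    # Flood the image from the marked pixels, popping pixels in order of
    # (value, entry time); the entry-time tiebreak settles plateau ties in
    # favour of the closest marker, as described in the Notes.
    labels = markers.copy()
    heap = []
    counter = 0
    for idx in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (image[idx], counter, idx))
        counter += 1
    while heap:
        _, _, (i, j) = heapq.heappop(heap)
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < image.shape[0] and 0 <= nj < image.shape[1]
                    and labels[ni, nj] == 0):
                labels[ni, nj] = labels[i, j]
                heapq.heappush(heap, (image[ni, nj], counter, (ni, nj)))
                counter += 1
    return labels

# Two markers at the ends of a one-row "valley": the ridge in the middle
# splits the row between the two basins.
image = np.array([[0, 1, 2, 1, 0]])
markers = np.array([[1, 0, 0, 0, 2]])
labels = watershed_2d(image, markers)
```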
white_tophat
skimage.morphology.white_tophat(image, selem=None, out=None) [source]
Return white top hat of an image. The white top hat of an image is defined as the image minus its morphological opening. This operation returns the bright spots of the image that are smaller than the structuring element. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
outarray, same shape and type as image
The result of the morphological white top hat. See also
black_tophat
References
1
https://en.wikipedia.org/wiki/Top-hat_transform Examples >>> # Subtract grey background from bright peak
>>> import numpy as np
>>> from skimage.morphology import square
>>> bright_on_grey = np.array([[2, 3, 3, 3, 2],
... [3, 4, 5, 4, 3],
... [3, 5, 9, 5, 3],
... [3, 4, 5, 4, 3],
... [2, 3, 3, 3, 2]], dtype=np.uint8)
>>> white_tophat(bright_on_grey, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 1, 5, 1, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology |
skimage.morphology.area_closing(image, area_threshold=64, connectivity=1, parent=None, tree_traverser=None) [source]
Perform an area closing of the image. Area closing removes all dark structures of an image with a surface smaller than area_threshold. The output image is larger than or equal to the input image for every pixel and all local minima have at least a surface of area_threshold pixels. Area closings are similar to morphological closings, but they do not use a fixed structuring element, but rather a deformable one, with surface = area_threshold. In the binary case, area closings are equivalent to remove_small_holes; this operator is thus extended to gray-level images. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the area_closing is to be calculated. This image can be of any type.
area_thresholdunsigned int
The size parameter (number of pixels). The default value is arbitrarily chosen to be 64.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for a 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the inverted image. The value of each pixel is the index of its parent in the ravelled array. See Note for further details.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as input image. See also
skimage.morphology.area_opening
skimage.morphology.diameter_opening
skimage.morphology.diameter_closing
skimage.morphology.max_tree
skimage.morphology.remove_small_objects
skimage.morphology.remove_small_holes
Notes If a max-tree representation (parent and tree_traverser) are given to the function, they must be calculated from the inverted image for this function, i.e.: >>> P, S = max_tree(invert(f)) >>> closed = area_closing(f, 64, parent=P, tree_traverser=S) References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a minimum in the center and 4 additional local minima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 160; f[2:4,9:11] = 140; f[9:11,2:4] = 120
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the area closing: >>> closed = area_closing(f, 8, connectivity=1)
All small minima are removed, and the remaining minima have at least a size of 8. | skimage.api.skimage.morphology#skimage.morphology.area_closing |
skimage.morphology.area_opening(image, area_threshold=64, connectivity=1, parent=None, tree_traverser=None) [source]
Perform an area opening of the image. Area opening removes all bright structures of an image with a surface smaller than area_threshold. The output image is thus the largest image smaller than the input for which all local maxima have at least a surface of area_threshold pixels. Area openings are similar to morphological openings, but they do not use a fixed structuring element, but rather a deformable one, with surface = area_threshold. Consequently, the area_opening with area_threshold=1 is the identity. In the binary case, area openings are equivalent to remove_small_objects; this operator is thus extended to gray-level images. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the area_opening is to be calculated. This image can be of any type.
area_thresholdunsigned int
The size parameter (number of pixels). The default value is arbitrarily chosen to be 64.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for a 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as the input image. See also
skimage.morphology.area_closing
skimage.morphology.diameter_opening
skimage.morphology.diameter_closing
skimage.morphology.max_tree
skimage.morphology.remove_small_objects
skimage.morphology.remove_small_holes
References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (a quadratic function with a maximum in the center and 4 additional local maxima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 40; f[2:4,9:11] = 60; f[9:11,2:4] = 80
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the area opening: >>> open = area_opening(f, 8, connectivity=1)
The peaks with a surface smaller than 8 are removed. | skimage.api.skimage.morphology#skimage.morphology.area_opening |
skimage.morphology.ball(radius, dtype=<class 'numpy.uint8'>) [source]
Generates a ball-shaped structuring element. This is the 3D equivalent of a disk. A pixel is within the neighborhood if the Euclidean distance between it and the origin is no greater than radius. Parameters
radiusint
The radius of the ball-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.ball |
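The Euclidean-distance criterion described above can be written out directly in NumPy. This is a sketch of the construction (`ball_selem` is a hypothetical helper name; skimage's implementation may differ in details):

```python
import numpy as np

def ball_selem(radius, dtype=np.uint8):
    # A voxel belongs to the neighborhood if its Euclidean distance from
    # the centre is no greater than radius.
    Z, Y, X = np.mgrid[-radius:radius + 1,
                       -radius:radius + 1,
                       -radius:radius + 1]
    return ((X**2 + Y**2 + Z**2) <= radius**2).astype(dtype)

selem = ball_selem(1)  # 3x3x3: the centre plus its six face neighbours
```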
skimage.morphology.binary_closing(image, selem=None, out=None) [source]
Return fast binary morphological closing of an image. This function returns the same result as greyscale closing but performs faster for binary images. The morphological closing on an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
closingndarray of bool
The result of the morphological closing. | skimage.api.skimage.morphology#skimage.morphology.binary_closing |
skimage.morphology.binary_dilation(image, selem=None, out=None) [source]
Return fast binary morphological dilation of an image. This function returns the same result as greyscale dilation but performs faster for binary images. Morphological dilation sets a pixel at (i,j) to the maximum over all pixels in the neighborhood centered at (i,j). Dilation enlarges bright regions and shrinks dark regions. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
dilatedndarray of bool or uint
The result of the morphological dilation with values in [False, True]. | skimage.api.skimage.morphology#skimage.morphology.binary_dilation |
skimage.morphology.binary_erosion(image, selem=None, out=None) [source]
Return fast binary morphological erosion of an image. This function returns the same result as greyscale erosion but performs faster for binary images. Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
erodedndarray of bool or uint
The result of the morphological erosion taking values in [False, True]. | skimage.api.skimage.morphology#skimage.morphology.binary_erosion |
skimage.morphology.binary_opening(image, selem=None, out=None) [source]
Return fast binary morphological opening of an image. This function returns the same result as greyscale opening but performs faster for binary images. The morphological opening on an image is defined as an erosion followed by a dilation. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features. Parameters
imagendarray
Binary input image.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray of bool, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
openingndarray of bool
The result of the morphological opening. | skimage.api.skimage.morphology#skimage.morphology.binary_opening |
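The erosion-then-dilation composition that defines opening can be sketched with a cross-shaped (connectivity=1) neighborhood in pure NumPy. This is an illustrative sketch with hypothetical helper names, using zero padding at the borders rather than skimage's boundary handling:

```python
import numpy as np

def _cross_stack(a):
    # Stack each pixel with its 4-connected (cross) neighbours,
    # padding the borders with False.
    p = np.pad(a, 1)
    return np.stack([p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
                     p[1:-1, :-2], p[1:-1, 2:]])

def binary_erosion_cross(a):
    # A pixel survives erosion only if its whole cross neighbourhood is set.
    return _cross_stack(a).all(axis=0)

def binary_dilation_cross(a):
    # A pixel is set after dilation if anything in its cross neighbourhood is set.
    return _cross_stack(a).any(axis=0)

def binary_opening_cross(a):
    # Opening = erosion followed by dilation.
    return binary_dilation_cross(binary_erosion_cross(a))

square3 = np.zeros((5, 5), dtype=bool)
square3[1:4, 1:4] = True          # a 3x3 square
opened = binary_opening_cross(square3)

speck = np.zeros((5, 5), dtype=bool)
speck[2, 2] = True                # an isolated bright pixel ("salt")
removed = binary_opening_cross(speck)
```

With the cross element, opening the 3x3 square leaves a 5-pixel cross, and the isolated "salt" pixel is removed entirely, illustrating the behavior described above.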
skimage.morphology.black_tophat(image, selem=None, out=None) [source]
Return black top hat of an image. The black top hat of an image is defined as its morphological closing minus the original image. This operation returns the dark spots of the image that are smaller than the structuring element. Note that dark spots in the original image are bright spots after the black top hat. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
outarray, same shape and type as image
The result of the morphological black top hat. See also
white_tophat
References
1
https://en.wikipedia.org/wiki/Top-hat_transform Examples >>> # Change dark peak to bright peak and subtract background
>>> import numpy as np
>>> from skimage.morphology import square
>>> dark_on_grey = np.array([[7, 6, 6, 6, 7],
... [6, 5, 4, 5, 6],
... [6, 4, 0, 4, 6],
... [6, 5, 4, 5, 6],
... [7, 6, 6, 6, 7]], dtype=np.uint8)
>>> black_tophat(dark_on_grey, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 1, 5, 1, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.black_tophat |
skimage.morphology.closing(image, selem=None, out=None) [source]
Return greyscale morphological closing of an image. The morphological closing on an image is defined as a dilation followed by an erosion. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
closingarray, same shape and type as image
The result of the morphological closing. Examples >>> # Close a gap between two bright lines
>>> import numpy as np
>>> from skimage.morphology import square
>>> broken_line = np.array([[0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0],
... [1, 1, 0, 1, 1],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> closing(broken_line, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.closing |
skimage.morphology.convex_hull_image(image, offset_coordinates=True, tolerance=1e-10) [source]
Compute the convex hull image of a binary image. The convex hull is the set of pixels included in the smallest convex polygon that surrounds all white pixels in the input image. Parameters
imagearray
Binary input image. This array is cast to bool before processing.
offset_coordinatesbool, optional
If True, a pixel at coordinate, e.g., (4, 7) will be represented by coordinates (3.5, 7), (4.5, 7), (4, 6.5), and (4, 7.5). This adds some “extent” to a pixel when computing the hull.
tolerancefloat, optional
Tolerance when determining whether a point is inside the hull. Due to numerical floating point errors, a tolerance of 0 can result in some points erroneously being classified as being outside the hull. Returns
hull(M, N) array of bool
Binary image with pixels in convex hull set to True. References
1
https://blogs.mathworks.com/steve/2011/10/04/binary-image-convex-hull-algorithm-notes/ | skimage.api.skimage.morphology#skimage.morphology.convex_hull_image |
skimage.morphology.convex_hull_object(image, *, connectivity=2) [source]
Compute the convex hull image of individual objects in a binary image. The convex hull is the set of pixels included in the smallest convex polygon that surrounds all white pixels in the input image. Parameters
image(M, N) ndarray
Binary input image.
connectivity{1, 2}, int, optional
Determines the neighbors of each pixel. Adjacent elements within a squared distance of connectivity from pixel center are considered neighbors.:
1-connectivity     2-connectivity
     [ ]           [ ]  [ ]  [ ]
      |               \  |  /
[ ]--[x]--[ ]      [ ]--[x]--[ ]
      |               /  |  \
     [ ]           [ ]  [ ]  [ ]
Returns
hullndarray of bool
Binary image with pixels inside convex hull set to True. Notes This function uses skimage.morphology.label to define unique objects, finds the convex hull of each using convex_hull_image, and combines these regions with logical OR. Be aware the convex hulls of unconnected objects may overlap in the result. If this is suspected, consider using convex_hull_image separately on each object or adjust connectivity. | skimage.api.skimage.morphology#skimage.morphology.convex_hull_object |
skimage.morphology.cube(width, dtype=<class 'numpy.uint8'>) [source]
Generates a cube-shaped structuring element. This is the 3D equivalent of a square. Every pixel along the perimeter has a chessboard distance no greater than radius (radius=floor(width/2)) pixels. Parameters
widthint
The width, height and depth of the cube. Returns
selemndarray
A structuring element consisting only of ones, i.e. every pixel belongs to the neighborhood. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.cube |
skimage.morphology.diameter_closing(image, diameter_threshold=8, connectivity=1, parent=None, tree_traverser=None) [source]
Perform a diameter closing of the image. Diameter closing removes all dark structures of an image with maximal extension smaller than diameter_threshold. The maximal extension is defined as the maximal extension of the bounding box. The operator is also called Bounding Box Closing. In practice, the result is similar to a morphological closing, but long and thin structures are not removed. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the diameter_closing is to be calculated. This image can be of any type.
diameter_thresholdunsigned int
The maximal extension parameter (number of pixels). The default value is 8.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Precomputed parent image representing the max tree of the inverted image. This function is fast, if precomputed parent and tree_traverser are provided. See Note for further details.
tree_traverser1D array, int64, optional
Precomputed traverser, where the pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). This function is fast, if precomputed parent and tree_traverser are provided. See Note for further details. Returns
outputndarray
Output image of the same shape and type as input image. See also
skimage.morphology.area_opening
skimage.morphology.area_closing
skimage.morphology.diameter_opening
skimage.morphology.max_tree
Notes If a max-tree representation (parent and tree_traverser) are given to the function, they must be calculated from the inverted image for this function, i.e.: >>> P, S = max_tree(invert(f)) >>> closed = diameter_closing(f, 3, parent=P, tree_traverser=S) References
1
Walter, T., & Klein, J.-C. (2002). Automatic Detection of Microaneurysms in Color Fundus Images of the Human Retina by Means of the Bounding Box Closing. In A. Colosimo, P. Sirabella, A. Giuliani (Eds.), Medical Data Analysis. Lecture Notes in Computer Science, vol 2526, pp. 210-220. Springer Berlin Heidelberg. DOI:10.1007/3-540-36104-9_23
2
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (quadratic function with a minimum in the center and 4 additional local minima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 160; f[2:4,9:11] = 140; f[9:11,2:4] = 120
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the diameter closing: >>> closed = diameter_closing(f, 3, connectivity=1)
All small minima with a maximal extension of 2 or less are removed. The remaining minima all have a maximal extension of at least 3. | skimage.api.skimage.morphology#skimage.morphology.diameter_closing |
skimage.morphology.diameter_opening(image, diameter_threshold=8, connectivity=1, parent=None, tree_traverser=None) [source]
Perform a diameter opening of the image. Diameter opening removes all bright structures of an image with maximal extension smaller than diameter_threshold. The maximal extension is defined as the maximal extension of the bounding box. The operator is also called Bounding Box Opening. In practice, the result is similar to a morphological opening, but long and thin structures are not removed. Technically, this operator is based on the max-tree representation of the image. Parameters
imagendarray
The input image for which the diameter_opening is to be calculated. This image can be of any type.
diameter_thresholdunsigned int
The maximal extension parameter (number of pixels). The default value is 8.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1.
parentndarray, int64, optional
Parent image representing the max tree of the image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
outputndarray
Output image of the same shape and type as the input image. See also
skimage.morphology.area_opening
skimage.morphology.area_closing
skimage.morphology.diameter_closing
skimage.morphology.max_tree
References
1
Walter, T., & Klein, J.-C. (2002). Automatic Detection of Microaneurysms in Color Fundus Images of the Human Retina by Means of the Bounding Box Closing. In A. Colosimo, P. Sirabella, A. Giuliani (Eds.), Medical Data Analysis. Lecture Notes in Computer Science, vol 2526, pp. 210-220. Springer Berlin Heidelberg. DOI:10.1007/3-540-36104-9_23
2
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (quadratic function with a maximum in the center and 4 additional local maxima). >>> w = 12
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:3,1:5] = 40; f[2:4,9:11] = 60; f[9:11,2:4] = 80
>>> f[9:10,9:11] = 100; f[10,10] = 100
>>> f = f.astype(int)
We can calculate the diameter opening: >>> open = diameter_opening(f, 3, connectivity=1)
The peaks with a maximal extension of 2 or less are removed. The remaining peaks all have a maximal extension of at least 3. | skimage.api.skimage.morphology#skimage.morphology.diameter_opening |
skimage.morphology.diamond(radius, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, diamond-shaped structuring element. A pixel is part of the neighborhood (i.e. labeled 1) if the city block/Manhattan distance between it and the center of the neighborhood is no greater than radius. Parameters
radiusint
The radius of the diamond-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.diamond |
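For illustration, the radius-2 diamond contains exactly the pixels whose city-block distance from the center is at most 2:

```python
import numpy as np
from skimage.morphology import diamond

selem = diamond(2)
print(selem)
# [[0 0 1 0 0]
#  [0 1 1 1 0]
#  [1 1 1 1 1]
#  [0 1 1 1 0]
#  [0 0 1 0 0]]
```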
skimage.morphology.dilation(image, selem=None, out=None, shift_x=False, shift_y=False) [source]
Return greyscale morphological dilation of an image. Morphological dilation sets a pixel at (i,j) to the maximum over all pixels in the neighborhood centered at (i,j). Dilation enlarges bright regions and shrinks dark regions. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as a 2-D array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated.
shift_x, shift_ybool, optional
Shift structuring element about center point. This only affects eccentric structuring elements (i.e. selem with even-numbered sides). Returns
dilateduint8 array, same shape and type as image
The result of the morphological dilation. Notes For uint8 (and uint16 up to a certain bit-depth) data, the lower algorithm complexity makes the skimage.filters.rank.maximum function more efficient for larger images and structuring elements. Examples >>> # Dilation enlarges bright regions
>>> import numpy as np
>>> from skimage.morphology import square
>>> bright_pixel = np.array([[0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0],
... [0, 0, 1, 0, 0],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> dilation(bright_pixel, square(3))
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.dilation |
skimage.morphology.disk(radius, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, disk-shaped structuring element. A pixel is within the neighborhood if the Euclidean distance between it and the origin is no greater than radius. Parameters
radiusint
The radius of the disk-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.disk |
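A minimal sketch contrasting the Euclidean ball (disk) with the city-block ball (diamond) at radius 3; the offset (2, 2) has Euclidean distance sqrt(8) <= 3 but city-block distance 4 > 3:

```python
import numpy as np
from skimage.morphology import disk, diamond

d_euc = disk(3)     # Euclidean distance <= 3
d_man = diamond(3)  # city-block distance <= 3
print(int(d_euc.sum()), int(d_man.sum()))  # the disk contains more pixels
# Offset (2, 2) relative to the center at (3, 3):
print(bool(d_euc[3 + 2, 3 + 2]), bool(d_man[3 + 2, 3 + 2]))  # True False
```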
skimage.morphology.erosion(image, selem=None, out=None, shift_x=False, shift_y=False) [source]
Return greyscale morphological erosion of an image. Morphological erosion sets a pixel at (i,j) to the minimum over all pixels in the neighborhood centered at (i,j). Erosion shrinks bright regions and enlarges dark regions. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use a cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated.
shift_x, shift_ybool, optional
Shift structuring element about center point. This only affects eccentric structuring elements (i.e. selem with even-numbered sides). Returns
erodedarray, same shape as image
The result of the morphological erosion. Notes For uint8 (and uint16 up to a certain bit-depth) data, the lower algorithm complexity makes the skimage.filters.rank.minimum function more efficient for larger images and structuring elements. Examples >>> # Erosion shrinks bright regions
>>> import numpy as np
>>> from skimage.morphology import square
>>> bright_square = np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> erosion(bright_square, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.erosion |
skimage.morphology.flood(image, seed_point, *, selem=None, connectivity=None, tolerance=None) [source]
Mask corresponding to a flood fill. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found. Parameters
imagendarray
An n-dimensional array.
seed_pointtuple or int
The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s, have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected).
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
tolerancefloat or int, optional
If None (default), adjacent values must be strictly equal to the initial value of image at seed_point. This is fastest. If a value is given, a comparison will be done at every point and if within tolerance of the initial value will also be filled (inclusive). Returns
maskndarray
A Boolean array with the same shape as image is returned, with True values for areas connected to and equal (or within tolerance of) the seed point. All other values are False. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. This function returns just the mask representing the fill. If indices are desired rather than masks for memory reasons, the user can simply run numpy.nonzero on the result, save the indices, and discard this mask. Examples >>> from skimage.morphology import flood
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = 1
>>> image[3, 0] = 1
>>> image[1:3, 4:6] = 2
>>> image[3, 6] = 3
>>> image
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 2, 2, 0],
[0, 1, 1, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, with full connectivity (diagonals included): >>> mask = flood(image, (1, 1))
>>> image_flooded = image.copy()
>>> image_flooded[mask] = 5
>>> image_flooded
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[5, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> mask = flood(image, (1, 1), connectivity=1)
>>> image_flooded = image.copy()
>>> image_flooded[mask] = 5
>>> image_flooded
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill with a tolerance: >>> mask = flood(image, (0, 0), tolerance=1)
>>> image_flooded = image.copy()
>>> image_flooded[mask] = 5
>>> image_flooded
array([[5, 5, 5, 5, 5, 5, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 5, 5, 3]]) | skimage.api.skimage.morphology#skimage.morphology.flood |
skimage.morphology.flood_fill(image, seed_point, new_value, *, selem=None, connectivity=None, tolerance=None, in_place=False, inplace=None) [source]
Perform flood filling on an image. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found, then set to new_value. Parameters
imagendarray
An n-dimensional array.
seed_pointtuple or int
The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer.
new_valueimage type
New value to set the entire fill. This must be chosen in agreement with the dtype of image.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s, have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected).
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
tolerancefloat or int, optional
If None (default), adjacent values must be strictly equal to the value of image at seed_point to be filled. This is fastest. If a tolerance is provided, adjacent points with values within plus or minus tolerance from the seed point are filled (inclusive).
in_placebool, optional
If True, flood filling is applied to image in place. If False, the flood filled result is returned without modifying the input image (default).
inplacebool, optional
This parameter is deprecated and will be removed in version 0.19.0 in favor of in_place. If True, flood filling is applied to image inplace. If False, the flood filled result is returned without modifying the input image (default). Returns
filledndarray
An array with the same shape as image is returned, with values in areas connected to and equal (or within tolerance of) the seed point replaced with new_value. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. Examples >>> from skimage.morphology import flood_fill
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = 1
>>> image[3, 0] = 1
>>> image[1:3, 4:6] = 2
>>> image[3, 6] = 3
>>> image
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 2, 2, 0],
[0, 1, 1, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, with full connectivity (diagonals included): >>> flood_fill(image, (1, 1), 5)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[5, 0, 0, 0, 0, 0, 3]])
Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> flood_fill(image, (1, 1), 5, connectivity=1)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 5, 5, 0, 2, 2, 0],
[0, 5, 5, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Fill with a tolerance: >>> flood_fill(image, (0, 0), 5, tolerance=1)
array([[5, 5, 5, 5, 5, 5, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 2, 2, 5],
[5, 5, 5, 5, 5, 5, 3]]) | skimage.api.skimage.morphology#skimage.morphology.flood_fill |
skimage.morphology.h_maxima(image, h, selem=None) [source]
Determine all maxima of the image with height >= h. The local maxima are defined as connected sets of pixels with equal grey level strictly greater than the grey level of all pixels in direct neighborhood of the set. A local maximum M of height h is a local maximum for which there is at least one path joining M with an equal or higher local maximum on which the minimal value is f(M) - h (i.e. the values along the path are not decreasing by more than h with respect to the maximum’s value) and no path to an equal or higher local maximum for which the minimal value is greater. The global maxima of the image are also found by this function. Parameters
imagendarray
The input image for which the maxima are to be calculated.
hunsigned integer
The minimal height of all extracted maxima.
selemndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the ball of radius 1 according to the maximum norm (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.) Returns
h_maxndarray
The local maxima of height >= h and the global maxima. The resulting image is a binary image, where pixels belonging to the determined maxima take value 1, the others take value 0. See also
skimage.morphology.extrema.h_minima
skimage.morphology.extrema.local_maxima
skimage.morphology.extrema.local_minima
References
1
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import extrema
We create an image (quadratic function with a maximum in the center and 4 additional constant maxima). The heights of the maxima are: 1, 21, 41, 61, 81 >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 40; f[2:4,7:9] = 60; f[7:9,2:4] = 80; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all maxima with a height of at least 40: >>> maxima = extrema.h_maxima(f, 40)
The resulting image will contain 3 local maxima. | skimage.api.skimage.morphology#skimage.morphology.h_maxima |
skimage.morphology.h_minima(image, h, selem=None) [source]
Determine all minima of the image with depth >= h. The local minima are defined as connected sets of pixels with equal grey level strictly smaller than the grey levels of all pixels in direct neighborhood of the set. A local minimum M of depth h is a local minimum for which there is at least one path joining M with an equal or lower local minimum on which the maximal value is f(M) + h (i.e. the values along the path are not increasing by more than h with respect to the minimum’s value) and no path to an equal or lower local minimum for which the maximal value is smaller. The global minima of the image are also found by this function. Parameters
imagendarray
The input image for which the minima are to be calculated.
hunsigned integer
The minimal depth of all extracted minima.
selemndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the ball of radius 1 according to the maximum norm (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.) Returns
h_minndarray
The local minima of depth >= h and the global minima. The resulting image is a binary image, where pixels belonging to the determined minima take value 1, the others take value 0. See also
skimage.morphology.extrema.h_maxima
skimage.morphology.extrema.local_maxima
skimage.morphology.extrema.local_minima
References
1
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import extrema
We create an image (quadratic function with a minimum in the center and 4 additional constant minima). The depths of the minima are: 1, 21, 41, 61, 81 >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 160; f[2:4,7:9] = 140; f[7:9,2:4] = 120; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all minima with a depth of at least 40: >>> minima = extrema.h_minima(f, 40)
The resulting image will contain 3 local minima. | skimage.api.skimage.morphology#skimage.morphology.h_minima |
skimage.morphology.label(input, background=None, return_num=False, connectivity=None) [source]
Label connected regions of an integer array. Two pixels are connected when they are neighbors and have the same value. In 2D, they can be neighbors either in a 1- or 2-connected sense. The value refers to the maximum number of orthogonal hops to consider a pixel/voxel a neighbor: 1-connectivity 2-connectivity diagonal connection close-up
[ ] [ ] [ ] [ ] [ ]
| \ | / | <- hop 2
[ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
| / | \ hop 1
[ ] [ ] [ ] [ ]
Parameters
inputndarray of dtype int
Image to label.
backgroundint, optional
Consider all pixels with this value as background pixels, and label them as 0. By default, 0-valued pixels are considered as background pixels.
return_numbool, optional
Whether to return the number of assigned labels.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. Returns
labelsndarray of dtype int
Labeled array, where all connected regions are assigned the same integer value.
numint, optional
Number of labels, which equals the maximum label index and is only returned if return_num is True. See also
regionprops
regionprops_table
References
1
Christophe Fiorio and Jens Gustedt, “Two linear time Union-Find strategies for image processing”, Theoretical Computer Science 154 (1996), pp. 165-181.
2
Kensheng Wu, Ekow Otoo and Arie Shoshani, “Optimizing connected component labeling algorithms”, Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864 Examples >>> import numpy as np
>>> x = np.eye(3).astype(int)
>>> print(x)
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
[0 2 0]
[0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
[2 1 2]
[2 2 1]]
>>> x = np.array([[1, 0, 0],
... [1, 1, 5],
... [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
[1 1 2]
[0 0 0]] | skimage.api.skimage.morphology#skimage.morphology.label |
skimage.morphology.local_maxima(image, selem=None, connectivity=None, indices=False, allow_borders=True) [source]
Find local maxima of n-dimensional array. The local maxima are defined as connected sets of pixels with equal gray level (plateaus) strictly greater than the gray levels of all pixels in the neighborhood. Parameters
imagendarray
An n-dimensional array.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel (True denotes a connected pixel). It must be a boolean array and have the same number of dimensions as image. If neither selem nor connectivity are given, all adjacent pixels are considered as part of the neighborhood.
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
indicesbool, optional
If True, the output will be a tuple of one-dimensional arrays representing the indices of local maxima in each dimension. If False, the output will be a boolean array with the same shape as image.
allow_bordersbool, optional
If true, plateaus that touch the image border are valid maxima. Returns
maximandarray or tuple[ndarray]
If indices is false, a boolean array with the same shape as image is returned with True indicating the position of local maxima (False otherwise). If indices is true, a tuple of one-dimensional arrays containing the coordinates (indices) of all found maxima. Warns
UserWarning
If allow_borders is false and any dimension of the given image is shorter than 3 samples, maxima can’t exist and a warning is shown. See also
skimage.morphology.local_minima
skimage.morphology.h_maxima
skimage.morphology.h_minima
Notes This function operates on the following ideas: Make a first pass over the image’s last dimension and flag candidates for local maxima by comparing pixels in only one direction. If the pixels aren’t connected in the last dimension all pixels are flagged as candidates instead. For each candidate: Perform a flood-fill to find all connected pixels that have the same gray value and are part of the plateau. Consider the connected neighborhood of a plateau: if no bordering sample has a higher gray level, mark the plateau as a definite local maximum. Examples >>> from skimage.morphology import local_maxima
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = 1
>>> image[3, 0] = 1
>>> image[1:3, 4:6] = 2
>>> image[3, 6] = 3
>>> image
array([[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 2, 2, 0],
[0, 1, 1, 0, 2, 2, 0],
[1, 0, 0, 0, 0, 0, 3]])
Find local maxima by comparing to all neighboring pixels (maximal connectivity): >>> local_maxima(image)
array([[False, False, False, False, False, False, False],
[False, True, True, False, False, False, False],
[False, True, True, False, False, False, False],
[ True, False, False, False, False, False, True]])
>>> local_maxima(image, indices=True)
(array([1, 1, 2, 2, 3, 3]), array([1, 2, 1, 2, 0, 6]))
Find local maxima without comparing to diagonal pixels (connectivity 1): >>> local_maxima(image, connectivity=1)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[ True, False, False, False, False, False, True]])
and exclude maxima that border the image edge: >>> local_maxima(image, connectivity=1, allow_borders=False)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[False, False, False, False, False, False, False]]) | skimage.api.skimage.morphology#skimage.morphology.local_maxima |
skimage.morphology.local_minima(image, selem=None, connectivity=None, indices=False, allow_borders=True) [source]
Find local minima of n-dimensional array. The local minima are defined as connected sets of pixels with equal gray level (plateaus) strictly smaller than the gray levels of all pixels in the neighborhood. Parameters
imagendarray
An n-dimensional array.
selemndarray, optional
A structuring element used to determine the neighborhood of each evaluated pixel (True denotes a connected pixel). It must be a boolean array and have the same number of dimensions as image. If neither selem nor connectivity are given, all adjacent pixels are considered as part of the neighborhood.
connectivityint, optional
A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None.
indicesbool, optional
If True, the output will be a tuple of one-dimensional arrays representing the indices of local minima in each dimension. If False, the output will be a boolean array with the same shape as image.
allow_bordersbool, optional
If true, plateaus that touch the image border are valid minima. Returns
minimandarray or tuple[ndarray]
If indices is false, a boolean array with the same shape as image is returned with True indicating the position of local minima (False otherwise). If indices is true, a tuple of one-dimensional arrays containing the coordinates (indices) of all found minima. See also
skimage.morphology.local_maxima
skimage.morphology.h_maxima
skimage.morphology.h_minima
Notes This function operates on the following ideas: Make a first pass over the image’s last dimension and flag candidates for local minima by comparing pixels in only one direction. If the pixels aren’t connected in the last dimension all pixels are flagged as candidates instead. For each candidate: Perform a flood-fill to find all connected pixels that have the same gray value and are part of the plateau. Consider the connected neighborhood of a plateau: if no bordering sample has a smaller gray level, mark the plateau as a definite local minimum. Examples >>> from skimage.morphology import local_minima
>>> image = np.zeros((4, 7), dtype=int)
>>> image[1:3, 1:3] = -1
>>> image[3, 0] = -1
>>> image[1:3, 4:6] = -2
>>> image[3, 6] = -3
>>> image
array([[ 0, 0, 0, 0, 0, 0, 0],
[ 0, -1, -1, 0, -2, -2, 0],
[ 0, -1, -1, 0, -2, -2, 0],
[-1, 0, 0, 0, 0, 0, -3]])
Find local minima by comparing to all neighboring pixels (maximal connectivity): >>> local_minima(image)
array([[False, False, False, False, False, False, False],
[False, True, True, False, False, False, False],
[False, True, True, False, False, False, False],
[ True, False, False, False, False, False, True]])
>>> local_minima(image, indices=True)
(array([1, 1, 2, 2, 3, 3]), array([1, 2, 1, 2, 0, 6]))
Find local minima without comparing to diagonal pixels (connectivity 1): >>> local_minima(image, connectivity=1)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[ True, False, False, False, False, False, True]])
and exclude minima that border the image edge: >>> local_minima(image, connectivity=1, allow_borders=False)
array([[False, False, False, False, False, False, False],
[False, True, True, False, True, True, False],
[False, True, True, False, True, True, False],
[False, False, False, False, False, False, False]]) | skimage.api.skimage.morphology#skimage.morphology.local_minima |
skimage.morphology.max_tree(image, connectivity=1) [source]
Build the max tree from an image. Component trees represent the hierarchical structure of the connected components resulting from sequential thresholding operations applied to an image. A connected component at one level is parent of a component at a higher level if the latter is included in the first. A max-tree is an efficient representation of a component tree. A connected component at one level is represented by one reference pixel at this level, which is parent to all other pixels at that level and to the reference pixel at the level above. The max-tree is the basis for many morphological operators, namely connected operators. Parameters
imagendarray
The input image for which the max-tree is to be calculated. This image can be of any type.
connectivityunsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1. Returns
parentndarray, int64
Array of same shape as image. The value of each pixel is the index of its parent in the ravelled array.
tree_traverser1D array, int64
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). References
1
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
2
Berger, C., Geraud, T., Levillain, R., Widynski, N., Baillard, A., Bertin, E. (2007). Effective Component Tree Computation with Application to Pattern Recognition in Astronomical Imaging. In International Conference on Image Processing (ICIP) (pp. 41-44). DOI:10.1109/ICIP.2007.4379949
3
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
4
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create a small sample image (Figure 1 from [4]) and build the max-tree. >>> image = np.array([[15, 13, 16], [12, 12, 10], [16, 12, 14]])
>>> P, S = max_tree(image, connectivity=2) | skimage.api.skimage.morphology#skimage.morphology.max_tree |
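For illustration, the returned pair can be passed to operators that accept a precomputed tree, such as diameter_opening (a minimal sketch; for closings the tree must instead be built from the inverted image, as noted under diameter_closing):

```python
import numpy as np
from skimage.morphology import max_tree, diameter_opening

image = np.array([[15, 13, 16],
                  [12, 12, 10],
                  [16, 12, 14]])
P, S = max_tree(image, connectivity=2)
# Reusing the precomputed tree should give the same result as letting
# diameter_opening build it internally.
opened = diameter_opening(image, 2, connectivity=2,
                          parent=P, tree_traverser=S)
direct = diameter_opening(image, 2, connectivity=2)
print(np.array_equal(opened, direct))
```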
skimage.morphology.max_tree_local_maxima(image, connectivity=1, parent=None, tree_traverser=None) [source]
Determine all local maxima of the image. The local maxima are defined as connected sets of pixels with equal gray level strictly greater than the gray levels of all pixels in the direct neighborhood of the set. The function labels the local maxima. Technically, the implementation is based on the max-tree representation of an image. The function is very efficient if the max-tree representation has already been computed. Otherwise, it is preferable to use the function local_maxima. Parameters
imagendarray
The input image for which the maxima are to be calculated. connectivity: unsigned int, optional
The neighborhood connectivity. The integer represents the maximum number of orthogonal steps to reach a neighbor. In 2D, it is 1 for a 4-neighborhood and 2 for an 8-neighborhood. Default value is 1. parent: ndarray, int64, optional
The value of each pixel is the index of its parent in the ravelled array. tree_traverser: 1D array, int64, optional
The ordered pixel indices (referring to the ravelled array). The pixels are ordered such that every pixel is preceded by its parent (except for the root which has no parent). Returns
local_maxndarray, uint64
Labeled local maxima of the image. See also
skimage.morphology.local_maxima
skimage.morphology.max_tree
References
1
Vincent L., Proc. “Grayscale area openings and closings, their efficient implementation and applications”, EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing, Barcelona, Spain, pp.22-27, May 1993.
2
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. DOI:10.1007/978-3-662-05088-0
3
Salembier, P., Oliveras, A., & Garrido, L. (1998). Antiextensive Connected Operators for Image and Sequence Processing. IEEE Transactions on Image Processing, 7(4), 555-570. DOI:10.1109/83.663500
4
Najman, L., & Couprie, M. (2006). Building the component tree in quasi-linear time. IEEE Transactions on Image Processing, 15(11), 3531-3539. DOI:10.1109/TIP.2006.877518
5
Carlinet, E., & Geraud, T. (2014). A Comparative Review of Component Tree Computation Algorithms. IEEE Transactions on Image Processing, 23(9), 3885-3895. DOI:10.1109/TIP.2014.2336551 Examples We create an image (quadratic function with a maximum in the center and 4 additional constant maxima). >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 20 - 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 40; f[2:4,7:9] = 60; f[7:9,2:4] = 80; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all local maxima: >>> maxima = max_tree_local_maxima(f)
The resulting image contains the labeled local maxima. | skimage.api.skimage.morphology#skimage.morphology.max_tree_local_maxima |
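The definition above (connected sets of equal gray level strictly above every outside neighbor) can be checked by brute force without the max-tree. This pure-Python sketch rebuilds the doctest image and finds its plateau maxima with a flood fill; the helper name plateau_maxima is invented, and this is an illustrative reimplementation of the definition, not the library's algorithm.

```python
def plateau_maxima(image):
    """Label 4-connected plateaus strictly greater than all outside neighbors."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    n_max = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            value = image[r][c]
            plateau, stack, is_max = [], [(r, c)], True
            seen[r][c] = True
            while stack:  # flood-fill the plateau of equal value containing (r, c)
                y, x = stack.pop()
                plateau.append((y, x))
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < rows and 0 <= nx < cols):
                        continue
                    if image[ny][nx] == value:
                        if not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                    elif image[ny][nx] > value:
                        is_max = False  # a higher neighbor disqualifies the plateau
            if is_max:
                n_max += 1
                for y, x in plateau:
                    labels[y][x] = n_max
    return labels, n_max

# rebuild the doctest image: truncated quadratic plus four constant 2x2 plateaus
w = 10
f = [[int(20 - 0.2 * ((x - w / 2) ** 2 + (y - w / 2) ** 2)) for y in range(w)]
     for x in range(w)]
for (r0, c0), v in [((2, 2), 40), ((2, 7), 60), ((7, 2), 80), ((7, 7), 100)]:
    for r in (r0, r0 + 1):
        for c in (c0, c0 + 1):
            f[r][c] = v
labels, n = plateau_maxima(f)
print(n)  # 5: the four constant plateaus plus the quadratic's central peak
```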
skimage.morphology.medial_axis(image, mask=None, return_distance=False) [source]
Compute the medial axis transform of a binary image. Parameters
imagebinary ndarray, shape (M, N)
The image of the shape to be skeletonized.
maskbinary ndarray, shape (M, N), optional
If a mask is given, only those elements in image with a true value in mask are used for computing the medial axis.
return_distancebool, optional
If true, the distance transform is returned as well as the skeleton. Returns
outndarray of bools
Medial axis transform of the image
distndarray of ints, optional
Distance transform of the image (only returned if return_distance is True) See also
skeletonize
Notes This algorithm computes the medial axis transform of an image as the ridges of its distance transform. The steps of the algorithm are as follows:
A lookup table is used that assigns, to each configuration of the 3x3 binary square, 0 or 1 according to whether the central pixel should be removed or kept. We want a point to be removed if it has more than one neighbor and if removing it does not change the number of connected components. The distance transform to the background is computed, as well as the cornerness of each pixel. The foreground (value of 1) points are ordered first by the distance transform, then by cornerness. A Cython function is called to reduce the image to its skeleton. It processes pixels in the order determined at the previous step, and removes or keeps a pixel according to the lookup table. Because of the ordering, it is possible to process all pixels in only one pass. Examples >>> square = np.zeros((7, 7), dtype=np.uint8)
>>> square[1:-1, 2:-2] = 1
>>> square
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> medial_axis(square).astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.medial_axis |
skimage.morphology.octagon(m, n, dtype=<class 'numpy.uint8'>) [source]
Generates an octagon shaped structuring element. For a given size m of the horizontal and vertical sides and a given height or width n of the slanted sides, an octagon is generated. The slanted sides are 45 or 135 degrees to the horizontal axis, and hence the width and height of the element are equal. Parameters
mint
The size of the horizontal and vertical sides.
nint
The height or width of the slanted sides. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.octagon |
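The geometry described above can be sketched in pure Python: a square of side m + 2*n with the four corners cut off at depth n along the 45-degree diagonals. The helper name octagon_sketch and the corner-cut condition are assumptions consistent with the stated geometry, not the library implementation.

```python
def octagon_sketch(m, n):
    """Octagon of side m + 2*n with 45-degree corner cuts of depth n."""
    side = m + 2 * n
    selem = []
    for i in range(side):
        row = []
        for j in range(side):
            di = min(i, side - 1 - i)   # distance to the nearest horizontal border
            dj = min(j, side - 1 - j)   # distance to the nearest vertical border
            # a pixel survives unless it falls inside a cut corner triangle
            row.append(1 if di + dj >= n else 0)
        selem.append(row)
    return selem

for row in octagon_sketch(2, 1):
    print(row)
# [0, 1, 1, 0]
# [1, 1, 1, 1]
# [1, 1, 1, 1]
# [0, 1, 1, 0]
```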
skimage.morphology.octahedron(radius, dtype=<class 'numpy.uint8'>) [source]
Generates an octahedron-shaped structuring element. This is the 3D equivalent of a diamond. A pixel is part of the neighborhood (i.e. labeled 1) if the city block/Manhattan distance between it and the center of the neighborhood is no greater than radius. Parameters
radiusint
The radius of the octahedron-shaped structuring element. Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.octahedron |
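The Manhattan-distance rule above translates directly into code. A minimal pure-Python sketch (the helper name octahedron_sketch is invented; the library returns an ndarray rather than nested lists):

```python
def octahedron_sketch(radius):
    """3-D structuring element: 1 where the Manhattan (city block) distance
    to the center is no greater than radius."""
    n = 2 * radius + 1
    return [[[1 if abs(i - radius) + abs(j - radius) + abs(k - radius) <= radius
              else 0
              for k in range(n)] for j in range(n)] for i in range(n)]

selem = octahedron_sketch(1)
print(selem[1])  # middle slice is the 2-D diamond: [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```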
skimage.morphology.opening(image, selem=None, out=None) [source]
Return greyscale morphological opening of an image. The morphological opening on an image is defined as an erosion followed by a dilation. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
openingarray, same shape and type as image
The result of the morphological opening. Examples >>> # Open up gap between two bright regions (but also shrink regions)
>>> import numpy as np
>>> from skimage.morphology import square
>>> bad_connection = np.array([[1, 0, 0, 0, 1],
... [1, 1, 0, 1, 1],
... [1, 1, 1, 1, 1],
... [1, 1, 0, 1, 1],
... [1, 0, 0, 0, 1]], dtype=np.uint8)
>>> opening(bad_connection, square(3))
array([[0, 0, 0, 0, 0],
[1, 1, 0, 1, 1],
[1, 1, 0, 1, 1],
[1, 1, 0, 1, 1],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.opening |
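The erosion-then-dilation definition can be reproduced in pure Python on the doctest image above. This sketch uses local min/max filters that simply ignore out-of-bounds pixels at the borders (a border-handling assumption that happens to match the doctest output); the helper names _local and opening_sketch are invented for illustration.

```python
def _local(image, size, reduce_fn):
    """Apply reduce_fn over a size x size window centered at each pixel,
    ignoring out-of-bounds positions at the borders."""
    rows, cols = len(image), len(image[0])
    r = size // 2
    return [[reduce_fn(image[y][x]
                       for y in range(max(0, i - r), min(rows, i + r + 1))
                       for x in range(max(0, j - r), min(cols, j + r + 1)))
             for j in range(cols)] for i in range(rows)]

def opening_sketch(image, size=3):
    # opening = erosion (local minimum) followed by dilation (local maximum)
    return _local(_local(image, size, min), size, max)

bad_connection = [[1, 0, 0, 0, 1],
                  [1, 1, 0, 1, 1],
                  [1, 1, 1, 1, 1],
                  [1, 1, 0, 1, 1],
                  [1, 0, 0, 0, 1]]
for row in opening_sketch(bad_connection):
    print(row)
```

The erosion removes the one-pixel bridge in the middle column; the dilation then restores the bulk of the two regions but cannot rebuild the bridge, which is exactly the "opened gap" shown in the doctest.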
skimage.morphology.reconstruction(seed, mask, method='dilation', selem=None, offset=None) [source]
Perform a morphological reconstruction of an image. Morphological reconstruction by dilation is similar to basic morphological dilation: high-intensity values will replace nearby low-intensity values. The basic dilation operator, however, uses a structuring element to determine how far a value in the input image can spread. In contrast, reconstruction uses two images: a “seed” image, which specifies the values that spread, and a “mask” image, which gives the maximum allowed value at each pixel. The mask image, like the structuring element, limits the spread of high-intensity values. Reconstruction by erosion is simply the inverse: low-intensity values spread from the seed image and are limited by the mask image, which represents the minimum allowed value. Alternatively, you can think of reconstruction as a way to isolate the connected regions of an image. For dilation, reconstruction connects regions marked by local maxima in the seed image: neighboring pixels less-than-or-equal-to those seeds are connected to the seeded region. Local maxima with values larger than the seed image will get truncated to the seed value. Parameters
seedndarray
The seed image (a.k.a. marker image), which specifies the values that are dilated or eroded.
maskndarray
The maximum (dilation) / minimum (erosion) allowed value at each pixel.
method{‘dilation’|’erosion’}, optional
Perform reconstruction by dilation or erosion. In dilation (or erosion), the seed image is dilated (or eroded) until limited by the mask image. For dilation, each seed value must be less than or equal to the corresponding mask value; for erosion, the reverse is true. Default is ‘dilation’.
selemndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the n-D square of radius equal to 1 (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.)
offsetndarray, optional
The coordinates of the center of the structuring element. Default is located on the geometrical center of the selem, in that case selem dimensions must be odd. Returns
reconstructedndarray
The result of morphological reconstruction. Notes The algorithm is taken from [1]. Applications for greyscale reconstruction are discussed in [2] and [3]. References
1
Robinson, “Efficient morphological reconstruction: a downhill filter”, Pattern Recognition Letters 25 (2004) 1759-1767.
2
Vincent, L., “Morphological Grayscale Reconstruction in Image Analysis: Applications and Efficient Algorithms”, IEEE Transactions on Image Processing (1993)
3
Soille, P., “Morphological Image Analysis: Principles and Applications”, Chapter 6, 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import reconstruction
First, we create a sinusoidal mask image with peaks at middle and ends. >>> x = np.linspace(0, 4 * np.pi)
>>> y_mask = np.cos(x)
Then, we create a seed image initialized to the minimum mask value (for reconstruction by dilation, min-intensity values don’t spread) and add “seeds” to the left and right peak, but at a fraction of peak value (1). >>> y_seed = y_mask.min() * np.ones_like(x)
>>> y_seed[0] = 0.5
>>> y_seed[-1] = 0
>>> y_rec = reconstruction(y_seed, y_mask)
The reconstructed image (or curve, in this case) is exactly the same as the mask image, except that the peaks are truncated to 0.5 and 0. The middle peak disappears completely: Since there were no seed values in this peak region, its reconstructed value is truncated to the surrounding value (-1). As a more practical example, we try to extract the bright features of an image by subtracting a background image created by reconstruction. >>> y, x = np.mgrid[:20:0.5, :20:0.5]
>>> bumps = np.sin(x) + np.sin(y)
To create the background image, set the mask image to the original image, and the seed image to the original image with an intensity offset, h. >>> h = 0.3
>>> seed = bumps - h
>>> background = reconstruction(seed, bumps)
The resulting reconstructed image looks exactly like the original image, but with the peaks of the bumps cut off. Subtracting this reconstructed image from the original image leaves just the peaks of the bumps >>> hdome = bumps - background
This operation is known as the h-dome of the image and leaves features of height h in the subtracted image. | skimage.api.skimage.morphology#skimage.morphology.reconstruction |
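The h-dome recipe above (seed = mask - h, reconstruct, subtract) can be sketched in 1-D with the naive iterative definition: repeatedly dilate the seed and clamp it by the mask until nothing changes. The library uses the much faster downhill filter of [1], so this loop is illustrative only; the mask values, h, and the helper name reconstruct_1d are made up for the illustration.

```python
def reconstruct_1d(seed, mask):
    """Reconstruction by dilation: repeat (3-point dilation of seed,
    pointwise min with mask) until stable."""
    assert all(s <= m for s, m in zip(seed, mask)), "seed must not exceed mask"
    cur = list(seed)
    while True:
        dilated = [max(cur[max(0, i - 1):i + 2]) for i in range(len(cur))]
        nxt = [min(d, m) for d, m in zip(dilated, mask)]
        if nxt == cur:
            return cur
        cur = nxt

mask = [1, 4, 2, 5, 1]          # two peaks of heights 4 and 5
h = 1
seed = [m - h for m in mask]    # seed = mask - h, as in the h-dome recipe above
rec = reconstruct_1d(seed, mask)
hdome = [m - r for m, r in zip(mask, rec)]
print(rec)     # [1, 3, 2, 4, 1]: each peak cut down by h
print(hdome)   # [0, 1, 0, 1, 0]: features of height h remain
```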
skimage.morphology.rectangle(nrows, ncols, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, rectangular-shaped structuring element. Every pixel in the rectangle generated for a given width and given height belongs to the neighborhood. Parameters
nrowsint
The number of rows of the rectangle.
ncolsint
The number of columns of the rectangle. Returns
selemndarray
A structuring element consisting only of ones, i.e. every pixel belongs to the neighborhood. Other Parameters
dtypedata-type
The data type of the structuring element. Notes The use of width and height has been deprecated in version 0.18.0. Use nrows and ncols instead. | skimage.api.skimage.morphology#skimage.morphology.rectangle |
skimage.morphology.remove_small_holes(ar, area_threshold=64, connectivity=1, in_place=False) [source]
Remove contiguous holes smaller than the specified size. Parameters
arndarray (arbitrary shape, int or bool type)
The array containing the connected components of interest.
area_thresholdint, optional (default: 64)
The maximum area, in pixels, of a contiguous hole that will be filled. Replaces min_size.
connectivityint, {1, 2, …, ar.ndim}, optional (default: 1)
The connectivity defining the neighborhood of a pixel.
in_placebool, optional (default: False)
If True, remove the connected components in the input array itself. Otherwise, make a copy. Returns
outndarray, same shape and type as input ar
The input array with small holes within connected components removed. Raises
TypeError
If the input array is of an invalid type, such as float or string. ValueError
If the input array contains negative values. Notes If the array type is int, it is assumed that it contains already-labeled objects. The labels are not kept in the output image (this function always outputs a bool image). It is suggested that labeling is completed after using this function. Examples >>> from skimage import morphology
>>> a = np.array([[1, 1, 1, 1, 1, 0],
... [1, 1, 1, 0, 1, 0],
... [1, 0, 0, 1, 1, 0],
... [1, 1, 1, 1, 1, 0]], bool)
>>> b = morphology.remove_small_holes(a, 2)
>>> b
array([[ True, True, True, True, True, False],
[ True, True, True, True, True, False],
[ True, False, False, True, True, False],
[ True, True, True, True, True, False]])
>>> c = morphology.remove_small_holes(a, 2, connectivity=2)
>>> c
array([[ True, True, True, True, True, False],
[ True, True, True, False, True, False],
[ True, False, False, True, True, False],
[ True, True, True, True, True, False]])
>>> d = morphology.remove_small_holes(a, 2, in_place=True)
>>> d is a
True | skimage.api.skimage.morphology#skimage.morphology.remove_small_holes |
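Conceptually, a hole is a background component that does not touch the image border, and filling holes means flipping the small ones to foreground. The sketch below reproduces the doctest with that definition in pure Python; note two assumptions: holes strictly smaller than area_threshold are filled (which matches the doctest), and border-touching background is never treated as a hole. The library instead removes small objects of the complement, so behavior for small background regions on the border may differ; the helper names are invented.

```python
from collections import deque

def _components(grid, value):
    """Yield 4-connected components of pixels equal to `value`."""
    rows, cols = len(grid), len(grid[0])
    visited = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == value and (r, c) not in visited:
                comp, queue = [], deque([(r, c)])
                visited.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and (ny, nx) not in visited
                                and grid[ny][nx] == value):
                            visited.add((ny, nx))
                            queue.append((ny, nx))
                yield comp

def remove_small_holes_sketch(ar, area_threshold):
    rows, cols = len(ar), len(ar[0])
    out = [list(row) for row in ar]
    for comp in _components(ar, 0):
        touches_border = any(y in (0, rows - 1) or x in (0, cols - 1)
                             for y, x in comp)
        if not touches_border and len(comp) < area_threshold:
            for y, x in comp:  # fill the small interior hole
                out[y][x] = 1
    return out

a = [[1, 1, 1, 1, 1, 0],
     [1, 1, 1, 0, 1, 0],
     [1, 0, 0, 1, 1, 0],
     [1, 1, 1, 1, 1, 0]]
print(remove_small_holes_sketch(a, 2))  # the size-1 hole is filled, the size-2 hole is not
```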
skimage.morphology.remove_small_objects(ar, min_size=64, connectivity=1, in_place=False) [source]
Remove objects smaller than the specified size. Expects ar to be an array with labeled objects, and removes objects smaller than min_size. If ar is bool, the image is first labeled. This leads to potentially different behavior for bool and 0-and-1 arrays. Parameters
arndarray (arbitrary shape, int or bool type)
The array containing the objects of interest. If the array type is int, the ints must be non-negative.
min_sizeint, optional (default: 64)
The smallest allowable object size.
connectivityint, {1, 2, …, ar.ndim}, optional (default: 1)
The connectivity defining the neighborhood of a pixel. Used during labelling if ar is bool.
in_placebool, optional (default: False)
If True, remove the objects in the input array itself. Otherwise, make a copy. Returns
outndarray, same shape and type as input ar
The input array with small connected components removed. Raises
TypeError
If the input array is of an invalid type, such as float or string. ValueError
If the input array contains negative values. Examples >>> from skimage import morphology
>>> a = np.array([[0, 0, 0, 1, 0],
... [1, 1, 1, 0, 0],
... [1, 1, 1, 0, 1]], bool)
>>> b = morphology.remove_small_objects(a, 6)
>>> b
array([[False, False, False, False, False],
[ True, True, True, False, False],
[ True, True, True, False, False]])
>>> c = morphology.remove_small_objects(a, 7, connectivity=2)
>>> c
array([[False, False, False, True, False],
[ True, True, True, False, False],
[ True, True, True, False, False]])
>>> d = morphology.remove_small_objects(a, 6, in_place=True)
>>> d is a
True | skimage.api.skimage.morphology#skimage.morphology.remove_small_objects |
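The behavior described above amounts to labeling connected components and zeroing those smaller than min_size. A pure-Python sketch on the doctest array (the helper name remove_small_objects_sketch is invented; this is an illustrative reimplementation, not the library's):

```python
def remove_small_objects_sketch(ar, min_size, connectivity=1):
    """Zero out connected components of nonzero pixels smaller than min_size."""
    rows, cols = len(ar), len(ar[0])
    if connectivity == 2:
        steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    else:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    out = [list(row) for row in ar]
    visited = set()
    for r in range(rows):
        for c in range(cols):
            if ar[r][c] and (r, c) not in visited:
                comp, stack = [], [(r, c)]   # flood-fill one component
                visited.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in steps:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and ar[ny][nx] and (ny, nx) not in visited):
                            visited.add((ny, nx))
                            stack.append((ny, nx))
                if len(comp) < min_size:
                    for y, x in comp:
                        out[y][x] = 0
    return out

a = [[0, 0, 0, 1, 0],
     [1, 1, 1, 0, 0],
     [1, 1, 1, 0, 1]]
print(remove_small_objects_sketch(a, 6))
```

With connectivity=2 the lone pixel at (0, 3) joins the main block diagonally, so the component reaches size 7 and survives min_size=7, just as in the doctest.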
skimage.morphology.skeletonize(image, *, method=None) [source]
Compute the skeleton of a binary image. Thinning is used to reduce each connected component in a binary image to a single-pixel wide skeleton. Parameters
imagendarray, 2D or 3D
A binary image containing the objects to be skeletonized. Zeros represent background, nonzero values are foreground.
method{‘zhang’, ‘lee’}, optional
Which algorithm to use. Zhang’s algorithm [Zha84] only works for 2D images, and is the default for 2D. Lee’s algorithm [Lee94] works for 2D or 3D images and is the default for 3D. Returns
skeletonndarray
The thinned image. See also
medial_axis
References
Lee94
T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models via 3-D medial surface/axis thinning algorithms. Computer Vision, Graphics, and Image Processing, 56(6):462-478, 1994.
Zha84
A fast parallel algorithm for thinning digital patterns, T. Y. Zhang and C. Y. Suen, Communications of the ACM, March 1984, Volume 27, Number 3. Examples >>> X, Y = np.ogrid[0:9, 0:9]
>>> ellipse = (1./3 * (X - 4)**2 + (Y - 4)**2 < 3**2).astype(np.uint8)
>>> ellipse
array([[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0]], dtype=uint8)
>>> skel = skeletonize(ellipse)
>>> skel.astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.skeletonize |
skimage.morphology.skeletonize_3d(image) [source]
Compute the skeleton of a binary image. Thinning is used to reduce each connected component in a binary image to a single-pixel wide skeleton. Parameters
imagendarray, 2D or 3D
A binary image containing the objects to be skeletonized. Zeros represent background, nonzero values are foreground. Returns
skeletonndarray
The thinned image. See also
skeletonize, medial_axis
Notes The method of [Lee94] uses an octree data structure to examine a 3x3x3 neighborhood of a pixel. The algorithm proceeds by iteratively sweeping over the image, and removing pixels at each iteration until the image stops changing. Each iteration consists of two steps: first, a list of candidates for removal is assembled; then pixels from this list are rechecked sequentially, to better preserve connectivity of the image. The algorithm this function implements is different from the algorithms used by either skeletonize or medial_axis, thus for 2D images the results produced by this function are generally different. References
Lee94
T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models via 3-D medial surface/axis thinning algorithms. Computer Vision, Graphics, and Image Processing, 56(6):462-478, 1994. | skimage.api.skimage.morphology#skimage.morphology.skeletonize_3d |
skimage.morphology.square(width, dtype=<class 'numpy.uint8'>) [source]
Generates a flat, square-shaped structuring element. Every pixel along the perimeter has a chessboard distance no greater than radius (radius=floor(width/2)) pixels. Parameters
widthint
The width and height of the square. Returns
selemndarray
A structuring element consisting only of ones, i.e. every pixel belongs to the neighborhood. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.square |
skimage.morphology.star(a, dtype=<class 'numpy.uint8'>) [source]
Generates a star shaped structuring element. The star has 8 vertices and is the overlap of a square of size 2*a + 1 with its 45-degree rotated version. The slanted sides are 45 or 135 degrees to the horizontal axis. Parameters
aint
Parameter deciding the size of the star structural element. The side of the square array returned is 2*a + 1 + 2*floor(a / 2). Returns
selemndarray
The structuring element where elements of the neighborhood are 1 and 0 otherwise. Other Parameters
dtypedata-type
The data type of the structuring element. | skimage.api.skimage.morphology#skimage.morphology.star |
skimage.morphology.thin(image, max_iter=None) [source]
Perform morphological thinning of a binary image. Parameters
imagebinary (M, N) ndarray
The image to be thinned.
max_iterint, number of iterations, optional
Regardless of the value of this parameter, the thinned image is returned immediately if an iteration produces no change. If this parameter is specified it thus sets an upper bound on the number of iterations performed. Returns
outndarray of bool
Thinned image. See also
skeletonize, medial_axis
Notes This algorithm [1] works by making multiple passes over the image, removing pixels matching a set of criteria designed to thin connected regions while preserving eight-connected components and 2 x 2 squares [2]. In each of the two sub-iterations the algorithm correlates the intermediate skeleton image with a neighborhood mask, then looks up each neighborhood in a lookup table indicating whether the central pixel should be deleted in that sub-iteration. References
1
Z. Guo and R. W. Hall, “Parallel thinning with two-subiteration algorithms,” Comm. ACM, vol. 32, no. 3, pp. 359-373, 1989. DOI:10.1145/62065.62074
2
Lam, L., Seong-Whan Lee, and Ching Y. Suen, “Thinning Methodologies-A Comprehensive Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 14, No. 9, p. 879, 1992. DOI:10.1109/34.161346 Examples >>> square = np.zeros((7, 7), dtype=np.uint8)
>>> square[1:-1, 2:-2] = 1
>>> square[0, 1] = 1
>>> square
array([[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> skel = thin(square)
>>> skel.astype(np.uint8)
array([[0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.thin |
skimage.morphology.watershed(image, markers=None, connectivity=1, offset=None, mask=None, compactness=0, watershed_line=False) [source]
Deprecated function. Use skimage.segmentation.watershed instead. Find watershed basins in image flooded from given markers. Parameters
imagendarray (2-D, 3-D, …) of integers
Data array where the lowest value points are labeled first.
markersint, or ndarray of int, same shape as image, optional
The desired number of markers, or an array marking the basins with the values to be assigned in the label matrix. Zero means not a marker. If None (no markers given), the local minima of the image are used as markers.
connectivityndarray, optional
An array with the same number of dimensions as image whose non-zero elements indicate neighbors for connection. Following the scipy convention, default is a one-connected array of the dimension of the image.
offsetarray_like of shape image.ndim, optional
offset of the connectivity (one offset per dimension)
maskndarray of bools or 0s and 1s, optional
Array of same shape as image. Only points at which mask == True will be labeled.
compactnessfloat, optional
Use compact watershed [3] with given compactness parameter. Higher values result in more regularly-shaped watershed basins.
watershed_linebool, optional
If watershed_line is True, a one-pixel wide line separates the regions obtained by the watershed algorithm. The line has the label 0. Returns
out: ndarray
A labeled matrix of the same type and shape as markers See also
skimage.segmentation.random_walker
random walker segmentation A segmentation algorithm based on anisotropic diffusion, usually slower than the watershed but with good results on noisy data and boundaries with holes. Notes This function implements a watershed algorithm [1] [2] that apportions pixels into marked basins. The algorithm uses a priority queue to hold the pixels with the metric for the priority queue being pixel value, then the time of entry into the queue - this settles ties in favor of the closest marker. Some ideas taken from Soille, “Automated Basin Delineation from Digital Elevation Models Using Mathematical Morphology”, Signal Processing 20 (1990) 171-182 The most important insight in the paper is that entry time onto the queue solves two problems: a pixel should be assigned to the neighbor with the largest gradient or, if there is no gradient, pixels on a plateau should be split between markers on opposite sides. This implementation converts all arguments to specific, lowest common denominator types, then passes these to a C algorithm. Markers can be determined manually, or automatically using for example the local minima of the gradient of the image, or the local maxima of the distance function to the background for separating overlapping objects (see example). References
1
https://en.wikipedia.org/wiki/Watershed_%28image_processing%29
2
http://cmm.ensmp.fr/~beucher/wtshed.html
3
Peer Neubert & Peter Protzel (2014). Compact Watershed and Preemptive SLIC: On Improving Trade-offs of Superpixel Segmentation Algorithms. ICPR 2014, pp 996-1001. DOI:10.1109/ICPR.2014.181 https://www.tu-chemnitz.de/etit/proaut/publications/cws_pSLIC_ICPR.pdf Examples The watershed algorithm is useful to separate overlapping objects. We first generate an initial image with two overlapping circles: >>> import numpy as np
>>> x, y = np.indices((80, 80))
>>> x1, y1, x2, y2 = 28, 28, 44, 52
>>> r1, r2 = 16, 20
>>> mask_circle1 = (x - x1)**2 + (y - y1)**2 < r1**2
>>> mask_circle2 = (x - x2)**2 + (y - y2)**2 < r2**2
>>> image = np.logical_or(mask_circle1, mask_circle2)
Next, we want to separate the two circles. We generate markers at the maxima of the distance to the background: >>> from scipy import ndimage as ndi
>>> distance = ndi.distance_transform_edt(image)
>>> from skimage.feature import peak_local_max
>>> local_maxi = peak_local_max(distance, labels=image,
... footprint=np.ones((3, 3)),
... indices=False)
>>> markers = ndi.label(local_maxi)[0]
Finally, we run the watershed on the image and markers: >>> labels = watershed(-distance, markers, mask=image)
The algorithm works also for 3-D images, and can be used for example to separate overlapping spheres. | skimage.api.skimage.morphology#skimage.morphology.watershed |
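The priority-queue mechanism described in the notes (pop by pixel value, break ties by entry time) can be sketched in 1-D with the standard library's heapq. This is an illustration of the queue discipline, not the library's C implementation; the function name watershed_1d and the sample profile are invented.

```python
import heapq
from itertools import count

def watershed_1d(image, markers):
    """Flood a 1-D profile from markers, popping pixels by (value, entry time).

    markers maps index -> nonzero label; entry time settles plateau ties
    in favor of the closest marker, as described in the notes above.
    """
    labels = [0] * len(image)
    timer = count()
    heap = []
    for idx, lab in markers.items():
        labels[idx] = lab
        heapq.heappush(heap, (image[idx], next(timer), idx, lab))
    while heap:
        _, _, idx, lab = heapq.heappop(heap)
        for n in (idx - 1, idx + 1):
            if 0 <= n < len(image) and labels[n] == 0:
                labels[n] = lab  # claim the neighbor for this basin
                heapq.heappush(heap, (image[n], next(timer), n, lab))
    return labels

profile = [0, 1, 2, 3, 2, 1, 0]             # a ridge at index 3 between two basins
print(watershed_1d(profile, {0: 1, 6: 2}))  # [1, 1, 1, 1, 2, 2, 2]
```

Both basins fill in lockstep; the ridge pixel is claimed by whichever basin reaches it first in queue order, which is how the real algorithm apportions plateau pixels between markers.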
skimage.morphology.white_tophat(image, selem=None, out=None) [source]
Return white top hat of an image. The white top hat of an image is defined as the image minus its morphological opening. This operation returns the bright spots of the image that are smaller than the structuring element. Parameters
imagendarray
Image array.
selemndarray, optional
The neighborhood expressed as an array of 1’s and 0’s. If None, use cross-shaped structuring element (connectivity=1).
outndarray, optional
The array to store the result of the morphology. If None is passed, a new array will be allocated. Returns
outarray, same shape and type as image
The result of the morphological white top hat. See also
black_tophat
References
1
https://en.wikipedia.org/wiki/Top-hat_transform Examples >>> # Subtract grey background from bright peak
>>> import numpy as np
>>> from skimage.morphology import square
>>> bright_on_grey = np.array([[2, 3, 3, 3, 2],
... [3, 4, 5, 4, 3],
... [3, 5, 9, 5, 3],
... [3, 4, 5, 4, 3],
... [2, 3, 3, 3, 2]], dtype=np.uint8)
>>> white_tophat(bright_on_grey, square(3))
array([[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 1, 5, 1, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.morphology#skimage.morphology.white_tophat |
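The definition "image minus its morphological opening" can be reproduced in pure Python on the doctest image. As in the opening sketch, the local min/max filters ignore out-of-bounds pixels at the borders (an assumption that matches the doctest output); the helper names _local and white_tophat_sketch are invented.

```python
def _local(image, size, reduce_fn):
    """Apply reduce_fn over a size x size window, ignoring out-of-bounds pixels."""
    rows, cols = len(image), len(image[0])
    r = size // 2
    return [[reduce_fn(image[y][x]
                       for y in range(max(0, i - r), min(rows, i + r + 1))
                       for x in range(max(0, j - r), min(cols, j + r + 1)))
             for j in range(cols)] for i in range(rows)]

def white_tophat_sketch(image, size=3):
    # opening = erosion (local min) then dilation (local max);
    # white top hat = image - opening
    opened = _local(_local(image, size, min), size, max)
    return [[v - o for v, o in zip(row, orow)]
            for row, orow in zip(image, opened)]

bright_on_grey = [[2, 3, 3, 3, 2],
                  [3, 4, 5, 4, 3],
                  [3, 5, 9, 5, 3],
                  [3, 4, 5, 4, 3],
                  [2, 3, 3, 3, 2]]
for row in white_tophat_sketch(bright_on_grey):
    print(row)
```

The smooth grey background survives the opening and cancels in the subtraction; only the central peak, which is smaller than the 3x3 window, remains.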
Module: registration
skimage.registration.optical_flow_ilk(…[, …]) Coarse to fine optical flow estimator.
skimage.registration.optical_flow_tvl1(…) Coarse to fine optical flow estimator.
skimage.registration.phase_cross_correlation(…) Efficient subpixel image translation registration by cross-correlation. optical_flow_ilk
skimage.registration.optical_flow_ilk(reference_image, moving_image, *, radius=7, num_warp=10, gaussian=False, prefilter=False, dtype=<class 'numpy.float32'>) [source]
Coarse to fine optical flow estimator. The iterative Lucas-Kanade (iLK) solver is applied at each level of the image pyramid. iLK [1] is a fast and robust alternative to the TV-L1 algorithm, although less accurate for rendering flat surfaces and object boundaries (see [2]). Parameters
reference_imagendarray, shape (M, N[, P[, …]])
The first gray scale image of the sequence.
moving_imagendarray, shape (M, N[, P[, …]])
The second gray scale image of the sequence.
radiusint, optional
Radius of the window considered around each pixel.
num_warpint, optional
Number of times moving_image is warped.
gaussianbool, optional
If True, a Gaussian kernel is used for the local integration. Otherwise, a uniform kernel is used.
prefilterbool, optional
Whether to prefilter the estimated optical flow before each image warp. When True, a median filter with window size 3 along each axis is applied. This helps to remove potential outliers.
dtypedtype, optional
Output data type: must be floating point. Single precision provides good results and saves memory usage and computation time compared to double precision. Returns
flowndarray, shape (reference_image.ndim, M, N[, P[, …]])
The estimated optical flow components for each axis. Notes The implemented algorithm is described in Table2 of [1]. Color images are not supported. References
1(1,2)
Le Besnerais, G., & Champagnat, F. (2005, September). Dense optical flow by iterative local window registration. In IEEE International Conference on Image Processing 2005 (Vol. 1, pp. I-137). IEEE. DOI:10.1109/ICIP.2005.1529706
2
Plyer, A., Le Besnerais, G., & Champagnat, F. (2016). Massively parallel Lucas Kanade optical flow for real-time video processing applications. Journal of Real-Time Image Processing, 11(4), 713-730. DOI:10.1007/s11554-014-0423-0 Examples >>> from skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from skimage.registration import optical_flow_ilk
>>> reference_image, moving_image, disp = stereo_motorcycle()
>>> # --- Convert the images to gray level: color is not supported.
>>> reference_image = rgb2gray(reference_image)
>>> moving_image = rgb2gray(moving_image)
>>> flow = optical_flow_ilk(moving_image, reference_image)
Examples using skimage.registration.optical_flow_ilk
Registration using optical flow optical_flow_tvl1
skimage.registration.optical_flow_tvl1(reference_image, moving_image, *, attachment=15, tightness=0.3, num_warp=5, num_iter=10, tol=0.0001, prefilter=False, dtype=<class 'numpy.float32'>) [source]
Coarse to fine optical flow estimator. The TV-L1 solver is applied at each level of the image pyramid. TV-L1 is a popular algorithm for optical flow estimation introduced by Zach et al. [1], improved in [2] and detailed in [3]. Parameters
reference_imagendarray, shape (M, N[, P[, …]])
The first gray scale image of the sequence.
moving_imagendarray, shape (M, N[, P[, …]])
The second gray scale image of the sequence.
attachmentfloat, optional
Attachment parameter (\(\lambda\) in [1]). The smaller this parameter is, the smoother the returned result will be.
tightnessfloat, optional
Tightness parameter (\(\tau\) in [1]). It should have a small value in order to maintain attachment and regularization parts in correspondence.
num_warpint, optional
Number of times moving_image is warped.
num_iterint, optional
Number of fixed-point iterations.
tolfloat, optional
Tolerance used as stopping criterion based on the L² distance between two consecutive values of (u, v).
prefilterbool, optional
Whether to prefilter the estimated optical flow before each image warp. When True, a median filter with window size 3 along each axis is applied. This helps to remove potential outliers.
dtypedtype, optional
Output data type: must be floating point. Single precision provides good results and saves memory usage and computation time compared to double precision. Returns
flowndarray, shape ((image0.ndim, M, N[, P[, …]])
The estimated optical flow components for each axis. Notes Color images are not supported. References
1(1,2,3)
Zach, C., Pock, T., & Bischof, H. (2007, September). A duality based approach for realtime TV-L 1 optical flow. In Joint pattern recognition symposium (pp. 214-223). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-540-74936-3_22
2
Wedel, A., Pock, T., Zach, C., Bischof, H., & Cremers, D. (2009). An improved algorithm for TV-L 1 optical flow. In Statistical and geometrical approaches to visual motion analysis (pp. 23-45). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-642-03061-1_2
3
Pérez, J. S., Meinhardt-Llopis, E., & Facciolo, G. (2013). TV-L1 optical flow estimation. Image Processing On Line, 2013, 137-150. DOI:10.5201/ipol.2013.26 Examples >>> from skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from skimage.registration import optical_flow_tvl1
>>> image0, image1, disp = stereo_motorcycle()
>>> # --- Convert the images to gray level: color is not supported.
>>> image0 = rgb2gray(image0)
>>> image1 = rgb2gray(image1)
>>> flow = optical_flow_tvl1(image1, image0)
Examples using skimage.registration.optical_flow_tvl1
Registration using optical flow phase_cross_correlation
skimage.registration.phase_cross_correlation(reference_image, moving_image, *, upsample_factor=1, space='real', return_error=True, reference_mask=None, moving_mask=None, overlap_ratio=0.3) [source]
Efficient subpixel image translation registration by cross-correlation. This code gives the same precision as the FFT upsampled cross-correlation in a fraction of the computation time and with reduced memory requirements. It obtains an initial estimate of the cross-correlation peak by an FFT and then refines the shift estimation by upsampling the DFT only in a small neighborhood of that estimate by means of a matrix-multiply DFT. Parameters
reference_imagearray
Reference image.
moving_imagearray
Image to register. Must be same dimensionality as reference_image.
upsample_factorint, optional
Upsampling factor. Images will be registered to within 1 / upsample_factor of a pixel. For example upsample_factor == 20 means the images will be registered within 1/20th of a pixel. Default is 1 (no upsampling). Not used if any of reference_mask or moving_mask is not None.
spacestring, one of “real” or “fourier”, optional
Defines how the algorithm interprets input data. “real” means data will be FFT’d to compute the correlation, while “fourier” data will bypass FFT of input data. Case insensitive. Not used if any of reference_mask or moving_mask is not None.
return_errorbool, optional
Returns error and phase difference if on, otherwise only shifts are returned. Has noeffect if any of reference_mask or moving_mask is not None. In this case only shifts is returned.
reference_maskndarray
Boolean mask for reference_image. The mask should evaluate to True (or 1) on valid pixels. reference_mask should have the same shape as reference_image.
moving_maskndarray or None, optional
Boolean mask for moving_image. The mask should evaluate to True (or 1) on valid pixels. moving_mask should have the same shape as moving_image. If None, reference_mask will be used.
overlap_ratiofloat, optional
Minimum allowed overlap ratio between images. The correlation for translations corresponding with an overlap ratio lower than this threshold will be ignored. A lower overlap_ratio leads to smaller maximum translation, while a higher overlap_ratio leads to greater robustness against spurious matches due to small overlap between masked images. Used only if one of reference_mask or moving_mask is None. Returns
shiftsndarray
Shift vector (in pixels) required to register moving_image with reference_image. Axis ordering is consistent with numpy (e.g. Z, Y, X)
errorfloat
Translation invariant normalized RMS error between reference_image and moving_image.
phasedifffloat
Global phase difference between the two images (should be zero if images are non-negative). References
1
Manuel Guizar-Sicairos, Samuel T. Thurman, and James R. Fienup, “Efficient subpixel image registration algorithms,” Optics Letters 33, 156-158 (2008). DOI:10.1364/OL.33.000156
2
James R. Fienup, “Invariant error metrics for image reconstruction” Optics Letters 36, 8352-8357 (1997). DOI:10.1364/AO.36.008352
3
Dirk Padfield. Masked Object Registration in the Fourier Domain. IEEE Transactions on Image Processing, vol. 21(5), pp. 2706-2718 (2012). DOI:10.1109/TIP.2011.2181402
4
D. Padfield. “Masked FFT registration”. In Proc. Computer Vision and Pattern Recognition, pp. 2918-2925 (2010). DOI:10.1109/CVPR.2010.5540032
Examples using skimage.registration.phase_cross_correlation
Masked Normalized Cross-Correlation | skimage.api.skimage.registration |
skimage.registration.optical_flow_ilk(reference_image, moving_image, *, radius=7, num_warp=10, gaussian=False, prefilter=False, dtype=<class 'numpy.float32'>) [source]
Coarse to fine optical flow estimator. The iterative Lucas-Kanade (iLK) solver is applied at each level of the image pyramid. iLK [1] is a fast and robust alternative to the TV-L1 algorithm, although less accurate for rendering flat surfaces and object boundaries (see [2]). Parameters
reference_image : ndarray, shape (M, N[, P[, …]])
The first gray scale image of the sequence.
moving_image : ndarray, shape (M, N[, P[, …]])
The second gray scale image of the sequence.
radius : int, optional
Radius of the window considered around each pixel.
num_warp : int, optional
Number of times moving_image is warped.
gaussian : bool, optional
If True, a Gaussian kernel is used for the local integration. Otherwise, a uniform kernel is used.
prefilter : bool, optional
Whether to prefilter the estimated optical flow before each image warp. When True, a median filter with window size 3 along each axis is applied. This helps to remove potential outliers.
dtype : dtype, optional
Output data type: must be floating point. Single precision provides good results and saves memory usage and computation time compared to double precision. Returns
flow : ndarray, shape (reference_image.ndim, M, N[, P[, …]])
The estimated optical flow components for each axis. Notes The implemented algorithm is described in Table 2 of [1]. Color images are not supported. References
1(1,2)
Le Besnerais, G., & Champagnat, F. (2005, September). Dense optical flow by iterative local window registration. In IEEE International Conference on Image Processing 2005 (Vol. 1, pp. I-137). IEEE. DOI:10.1109/ICIP.2005.1529706
2
Plyer, A., Le Besnerais, G., & Champagnat, F. (2016). Massively parallel Lucas Kanade optical flow for real-time video processing applications. Journal of Real-Time Image Processing, 11(4), 713-730. DOI:10.1007/s11554-014-0423-0 Examples >>> from skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from skimage.registration import optical_flow_ilk
>>> reference_image, moving_image, disp = stereo_motorcycle()
>>> # --- Convert the images to gray level: color is not supported.
>>> reference_image = rgb2gray(reference_image)
>>> moving_image = rgb2gray(moving_image)
>>> flow = optical_flow_ilk(moving_image, reference_image) | skimage.api.skimage.registration#skimage.registration.optical_flow_ilk |
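The estimated flow can then be used to resample moving_image onto the reference grid, as in the "Registration using optical flow" gallery example. A minimal sketch (radius=15 is an illustrative choice, not a recommended default):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2gray
from skimage.data import stereo_motorcycle
from skimage.registration import optical_flow_ilk

reference_image, moving_image, _ = stereo_motorcycle()
reference_image = rgb2gray(reference_image)
moving_image = rgb2gray(moving_image)

# flow[0] holds the row displacements, flow[1] the column displacements.
v, u = optical_flow_ilk(reference_image, moving_image, radius=15)

# Resample moving_image at the positions predicted by the flow so that
# it lines up with reference_image.
nr, nc = reference_image.shape
row_coords, col_coords = np.meshgrid(np.arange(nr), np.arange(nc),
                                     indexing='ij')
registered = ndi.map_coordinates(moving_image,
                                 [row_coords + v, col_coords + u],
                                 order=1, mode='nearest')
```

After this, registered should be visibly closer to reference_image than the raw moving_image is.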
skimage.registration.optical_flow_tvl1(reference_image, moving_image, *, attachment=15, tightness=0.3, num_warp=5, num_iter=10, tol=0.0001, prefilter=False, dtype=<class 'numpy.float32'>) [source]
Coarse to fine optical flow estimator. The TV-L1 solver is applied at each level of the image pyramid. TV-L1 is a popular algorithm for optical flow estimation introduced by Zach et al. [1], improved in [2] and detailed in [3]. Parameters
reference_image : ndarray, shape (M, N[, P[, …]])
The first gray scale image of the sequence.
moving_image : ndarray, shape (M, N[, P[, …]])
The second gray scale image of the sequence.
attachment : float, optional
Attachment parameter (\(\lambda\) in [1]). The smaller this parameter is, the smoother the returned result will be.
tightness : float, optional
Tightness parameter (\(\tau\) in [1]). It should have a small value in order to keep the attachment and regularization terms in correspondence.
num_warp : int, optional
Number of times moving_image is warped.
num_iter : int, optional
Number of fixed point iterations.
tol : float, optional
Tolerance used as stopping criterion based on the L² distance between two consecutive values of (u, v).
prefilter : bool, optional
Whether to prefilter the estimated optical flow before each image warp. When True, a median filter with window size 3 along each axis is applied. This helps to remove potential outliers.
dtype : dtype, optional
Output data type: must be floating point. Single precision provides good results and saves memory usage and computation time compared to double precision. Returns
flow : ndarray, shape (reference_image.ndim, M, N[, P[, …]])
The estimated optical flow components for each axis. Notes Color images are not supported. References
1(1,2,3)
Zach, C., Pock, T., & Bischof, H. (2007, September). A duality based approach for realtime TV-L 1 optical flow. In Joint pattern recognition symposium (pp. 214-223). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-540-74936-3_22
2
Wedel, A., Pock, T., Zach, C., Bischof, H., & Cremers, D. (2009). An improved algorithm for TV-L 1 optical flow. In Statistical and geometrical approaches to visual motion analysis (pp. 23-45). Springer, Berlin, Heidelberg. DOI:10.1007/978-3-642-03061-1_2
3
Pérez, J. S., Meinhardt-Llopis, E., & Facciolo, G. (2013). TV-L1 optical flow estimation. Image Processing On Line, 2013, 137-150. DOI:10.5201/ipol.2013.26 Examples >>> from skimage.color import rgb2gray
>>> from skimage.data import stereo_motorcycle
>>> from skimage.registration import optical_flow_tvl1
>>> image0, image1, disp = stereo_motorcycle()
>>> # --- Convert the images to gray level: color is not supported.
>>> image0 = rgb2gray(image0)
>>> image1 = rgb2gray(image1)
>>> flow = optical_flow_tvl1(image1, image0) | skimage.api.skimage.registration#skimage.registration.optical_flow_tvl1 |
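As a quick sanity check, the estimator can be run on a synthetically translated image, where the true flow is a known constant. A sketch under arbitrary illustrative choices (image size, smoothing sigma, and shift are not prescribed by the API):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.registration import optical_flow_tvl1

rng = np.random.default_rng(0)
# Smooth random texture, rescaled to [0, 1], so the flow is well
# constrained everywhere in the image.
image0 = ndi.gaussian_filter(rng.random((128, 128)), sigma=3)
image0 = (image0 - image0.min()) / np.ptp(image0)
# Translate the texture by 2 pixels along rows and 1 pixel along columns.
image1 = ndi.shift(image0, shift=(2, 1), mode='nearest')

# flow[0] holds the row displacements, flow[1] the column displacements.
v, u = optical_flow_tvl1(image0, image1)

# Away from the borders, the median flow should be close to the known
# translation (about 2 along rows, 1 along columns).
v_med = float(np.median(v[16:-16, 16:-16]))
u_med = float(np.median(u[16:-16, 16:-16]))
```

A constant flow field has zero total variation, so the TV regularizer does not penalize the correct solution here; the coarse-to-fine pyramid is what lets the solver find a displacement larger than one pixel.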
skimage.registration.phase_cross_correlation(reference_image, moving_image, *, upsample_factor=1, space='real', return_error=True, reference_mask=None, moving_mask=None, overlap_ratio=0.3) [source]
Efficient subpixel image translation registration by cross-correlation. This code gives the same precision as the FFT upsampled cross-correlation in a fraction of the computation time and with reduced memory requirements. It obtains an initial estimate of the cross-correlation peak by an FFT and then refines the shift estimation by upsampling the DFT only in a small neighborhood of that estimate by means of a matrix-multiply DFT. Parameters
reference_image : array
Reference image.
moving_image : array
Image to register. Must have the same dimensionality as reference_image.
upsample_factor : int, optional
Upsampling factor. Images will be registered to within 1 / upsample_factor of a pixel. For example, upsample_factor == 20 means the images will be registered to within 1/20th of a pixel. Default is 1 (no upsampling). Not used if any of reference_mask or moving_mask is not None.
space : string, one of “real” or “fourier”, optional
Defines how the algorithm interprets input data. “real” means the data will be FFT’d to compute the correlation, while “fourier” means the data bypass the FFT of input data. Case insensitive. Not used if any of reference_mask or moving_mask is not None.
return_error : bool, optional
If True, the error and phase difference are returned in addition to the shifts; otherwise only the shifts are returned. Has no effect if any of reference_mask or moving_mask is not None: in that case only shifts is returned.
reference_mask : ndarray
Boolean mask for reference_image. The mask should evaluate to True (or 1) on valid pixels. reference_mask should have the same shape as reference_image.
moving_mask : ndarray or None, optional
Boolean mask for moving_image. The mask should evaluate to True (or 1) on valid pixels. moving_mask should have the same shape as moving_image. If None, reference_mask will be used.
overlap_ratio : float, optional
Minimum allowed overlap ratio between images. The correlation for translations corresponding to an overlap ratio lower than this threshold will be ignored. A lower overlap_ratio leads to a smaller maximum translation, while a higher overlap_ratio leads to greater robustness against spurious matches due to small overlap between masked images. Used only if one of reference_mask or moving_mask is not None. Returns
shifts : ndarray
Shift vector (in pixels) required to register moving_image with reference_image. Axis ordering is consistent with numpy (e.g. Z, Y, X).
error : float
Translation invariant normalized RMS error between reference_image and moving_image.
phasediff : float
Global phase difference between the two images (should be zero if images are non-negative). References
1
Manuel Guizar-Sicairos, Samuel T. Thurman, and James R. Fienup, “Efficient subpixel image registration algorithms,” Optics Letters 33, 156-158 (2008). DOI:10.1364/OL.33.000156
2
James R. Fienup, “Invariant error metrics for image reconstruction,” Applied Optics 36, 8352-8357 (1997). DOI:10.1364/AO.36.008352
3
Dirk Padfield. Masked Object Registration in the Fourier Domain. IEEE Transactions on Image Processing, vol. 21(5), pp. 2706-2718 (2012). DOI:10.1109/TIP.2011.2181402
4
D. Padfield. “Masked FFT registration”. In Proc. Computer Vision and Pattern Recognition, pp. 2918-2925 (2010). DOI:10.1109/CVPR.2010.5540032 | skimage.api.skimage.registration#skimage.registration.phase_cross_correlation |
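A small self-check, sketched with arbitrary illustrative values: translate an image by a known integer offset and recover it. Using mode='wrap' keeps the translation circular, which matches the periodic model underlying the FFT-based correlation, so recovery of an integer shift is essentially exact even without upsampling:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.data import camera
from skimage.registration import phase_cross_correlation

image = camera().astype(float)
# Apply a known translation of 22 pixels along rows and 13 along
# columns; 'wrap' makes it circular, matching the FFT's assumptions.
shifted = ndi.shift(image, shift=(-22, 13), mode='wrap')

# With return_error=True (the default), error and phasediff are also
# returned alongside the shift vector.
shifts, error, phasediff = phase_cross_correlation(image, shifted)
# The magnitude of the detected offset matches the applied translation.
```

For subpixel shifts, the same call with e.g. upsample_factor=100 refines the estimate to 1/100th of a pixel using the matrix-multiply DFT described above.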