skimage.feature.corner_fast(image, n=12, threshold=0.15) [source]
Extract FAST corners for a given image.
Parameters
image : 2D ndarray
Input image.
n : int, optional
Minimum number of consecutive pixels out of 16 pixels on the circle that should all be either brighter or darker w.r.t. the test pixel. A point c on the circle is darker w.r.t. the test pixel p if Ic < Ip - threshold and brighter if Ic > Ip + threshold. Also stands for the n in the FAST-n corner detector.
threshold : float, optional
Threshold used in deciding whether the pixels on the circle are brighter, darker or similar w.r.t. the test pixel. Decrease the threshold when more corners are desired and vice versa.
Returns
response : ndarray
FAST corner response image.
References
1
Rosten, E., & Drummond, T. (2006, May). Machine learning for high-speed corner detection. In European conference on computer vision (pp. 430-443). Springer, Berlin, Heidelberg. DOI:10.1007/11744023_34 http://www.edwardrosten.com/work/rosten_2006_machine.pdf
2
Wikipedia, “Features from accelerated segment test”, https://en.wikipedia.org/wiki/Features_from_accelerated_segment_test
Examples
>>> import numpy as np
>>> from skimage.feature import corner_fast, corner_peaks
>>> square = np.zeros((12, 12))
>>> square[3:9, 3:9] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_fast(square, 9), min_distance=1)
array([[3, 3],
[3, 8],
[8, 3],
[8, 8]])
skimage.feature.corner_foerstner(image, sigma=1) [source]
Compute Foerstner corner measure response image. This corner detector uses information from the auto-correlation matrix A: A = [(imx**2) (imx*imy)] = [Axx Axy]
[(imx*imy) (imy**2)] [Axy Ayy]
Where imx and imy are first derivatives, averaged with a Gaussian filter. The corner measure is then defined as: w = det(A) / trace(A) (size of error ellipse)
q = 4 * det(A) / trace(A)**2 (roundness of error ellipse)
Parameters
image : ndarray
Input image.
sigma : float, optional
Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.
Returns
w : ndarray
Error ellipse sizes.
q : ndarray
Roundness of error ellipse.
References
1
Förstner, W., & Gülch, E. (1987, June). A fast operator for detection and precise location of distinct points, corners and centres of circular features. In Proc. ISPRS intercommission conference on fast processing of photogrammetric data (pp. 281-305). https://cseweb.ucsd.edu/classes/sp02/cse252/foerstner/foerstner.pdf
2
https://en.wikipedia.org/wiki/Corner_detection
Examples
>>> import numpy as np
>>> from skimage.feature import corner_foerstner, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> w, q = corner_foerstner(square)
>>> accuracy_thresh = 0.5
>>> roundness_thresh = 0.3
>>> foerstner = (q > roundness_thresh) * (w > accuracy_thresh) * w
>>> corner_peaks(foerstner, min_distance=1)
array([[2, 2],
[2, 7],
[7, 2],
[7, 7]])
skimage.feature.corner_harris(image, method='k', k=0.05, eps=1e-06, sigma=1) [source]
Compute Harris corner measure response image. This corner detector uses information from the auto-correlation matrix A: A = [(imx**2) (imx*imy)] = [Axx Axy]
[(imx*imy) (imy**2)] [Axy Ayy]
Where imx and imy are first derivatives, averaged with a Gaussian filter. The corner measure is then defined as: det(A) - k * trace(A)**2
or: 2 * det(A) / (trace(A) + eps)
Parameters
image : ndarray
Input image.
method : {‘k’, ‘eps’}, optional
Method to compute the response image from the auto-correlation matrix.
k : float, optional
Sensitivity factor to separate corners from edges, typically in range [0, 0.2]. Small values of k result in detection of sharp corners.
eps : float, optional
Normalisation factor (Noble’s corner measure).
sigma : float, optional
Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.
Returns
response : ndarray
Harris response image.
References
1
https://en.wikipedia.org/wiki/Corner_detection
Examples
>>> import numpy as np
>>> from skimage.feature import corner_harris, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_harris(square), min_distance=1)
array([[2, 2],
[2, 7],
[7, 2],
[7, 7]])
skimage.feature.corner_kitchen_rosenfeld(image, mode='constant', cval=0) [source]
Compute Kitchen and Rosenfeld corner measure response image. The corner measure is calculated as follows: (imxx * imy**2 + imyy * imx**2 - 2 * imxy * imx * imy)
/ (imx**2 + imy**2)
Where imx and imy are the first and imxx, imxy, imyy the second derivatives.
Parameters
image : ndarray
Input image.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries.
Returns
response : ndarray
Kitchen and Rosenfeld response image.
References
1
Kitchen, L., & Rosenfeld, A. (1982). Gray-level corner detection. Pattern recognition letters, 1(2), 95-102. DOI:10.1016/0167-8655(82)90020-4
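Unlike the other detectors above, this entry ships without a worked example. A minimal sketch (an illustration, not part of the official docstring; the test image is the same synthetic square used elsewhere in this module) could look like:

```python
import numpy as np
from skimage.feature import corner_kitchen_rosenfeld, corner_peaks

# A bright square on a dark background; the square's corners should
# produce strong curvature responses.
square = np.zeros((10, 10))
square[2:8, 2:8] = 1

response = corner_kitchen_rosenfeld(square, mode='constant', cval=0)
# The response image has the same shape as the input.
print(response.shape)  # (10, 10)

# Peaks in the response can be localized with corner_peaks, as with
# the other corner detectors in this module.
coords = corner_peaks(response, min_distance=1)
```

As with corner_harris or corner_shi_tomasi, the raw response is usually post-processed with corner_peaks rather than used directly.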
skimage.feature.corner_moravec(image, window_size=1) [source]
Compute Moravec corner measure response image. This is one of the simplest corner detectors and is comparatively fast but has several limitations (e.g. not rotation invariant).
Parameters
image : ndarray
Input image.
window_size : int, optional
Window size.
Returns
response : ndarray
Moravec response image.
References
1
https://en.wikipedia.org/wiki/Corner_detection
Examples
>>> import numpy as np
>>> from skimage.feature import corner_moravec
>>> square = np.zeros([7, 7])
>>> square[3, 3] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
>>> corner_moravec(square).astype(int)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 2, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
skimage.feature.corner_orientations(image, corners, mask) [source]
Compute the orientation of corners. The orientation of corners is computed using the first order central moment, i.e. the center of mass approach. The corner orientation is the angle of the vector from the corner coordinate to the intensity centroid in the local neighborhood around the corner.
Parameters
image : 2D array
Input grayscale image.
corners : (N, 2) array
Corner coordinates as (row, col).
mask : 2D array
Mask defining the local neighborhood of the corner used for the calculation of the central moment.
Returns
orientations : (N, 1) array
Orientations of corners in the range [-pi, pi].
References
1
Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary Bradski “ORB : An efficient alternative to SIFT and SURF” http://www.vision.cs.chubu.ac.jp/CV-R/pdf/Rublee_iccv2011.pdf
2
Paul L. Rosin, “Measuring Corner Properties” http://users.cs.cf.ac.uk/Paul.Rosin/corner2.pdf
Examples
>>> import numpy as np
>>> from skimage.morphology import octagon
>>> from skimage.feature import (corner_fast, corner_peaks,
... corner_orientations)
>>> square = np.zeros((12, 12))
>>> square[3:9, 3:9] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corners = corner_peaks(corner_fast(square, 9), min_distance=1)
>>> corners
array([[3, 3],
[3, 8],
[8, 3],
[8, 8]])
>>> orientations = corner_orientations(square, corners, octagon(3, 2))
>>> np.rad2deg(orientations)
array([ 45., 135., -45., -135.])
skimage.feature.corner_peaks(image, min_distance=1, threshold_abs=None, threshold_rel=None, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None, *, num_peaks_per_label=inf, p_norm=inf) [source]
Find peaks in corner measure response image. This differs from skimage.feature.peak_local_max in that it suppresses multiple connected peaks with the same accumulator value.
Parameters
image : ndarray
Input image.
min_distance : int, optional
The minimal allowed distance separating peaks.
**
See skimage.feature.peak_local_max().
p_norm : float
Which Minkowski p-norm to use. Should be in the range [1, inf]. A finite large p may cause a ValueError if overflow can occur. inf corresponds to the Chebyshev distance and 2 to the Euclidean distance.
Returns
output : ndarray or ndarray of bools
If indices = True : (row, column, …) coordinates of peaks. If indices = False : Boolean array shaped like image, with peaks represented by True values.
See also
skimage.feature.peak_local_max
Notes
Changed in version 0.18: The default value of threshold_rel has changed to None, which corresponds to letting skimage.feature.peak_local_max decide on the default. This is equivalent to threshold_rel=0. The num_peaks limit is applied before suppression of connected peaks. To limit the number of peaks after suppression, set num_peaks=np.inf and post-process the output of this function.
Examples
>>> import numpy as np
>>> from skimage.feature import corner_peaks, peak_local_max
>>> response = np.zeros((5, 5))
>>> response[2:4, 2:4] = 1
>>> response
array([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 1., 1., 0.],
[0., 0., 1., 1., 0.],
[0., 0., 0., 0., 0.]])
>>> peak_local_max(response)
array([[2, 2],
[2, 3],
[3, 2],
[3, 3]])
>>> corner_peaks(response)
array([[2, 2]])
skimage.feature.corner_shi_tomasi(image, sigma=1) [source]
Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image. This corner detector uses information from the auto-correlation matrix A: A = [(imx**2) (imx*imy)] = [Axx Axy]
[(imx*imy) (imy**2)] [Axy Ayy]
Where imx and imy are first derivatives, averaged with a Gaussian filter. The corner measure is then defined as the smaller eigenvalue of A: ((Axx + Ayy) - sqrt((Axx - Ayy)**2 + 4 * Axy**2)) / 2
Parameters
image : ndarray
Input image.
sigma : float, optional
Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.
Returns
response : ndarray
Shi-Tomasi response image.
References
1
https://en.wikipedia.org/wiki/Corner_detection
Examples
>>> import numpy as np
>>> from skimage.feature import corner_shi_tomasi, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_shi_tomasi(square), min_distance=1)
array([[2, 2],
[2, 7],
[7, 2],
[7, 7]])
skimage.feature.corner_subpix(image, corners, window_size=11, alpha=0.99) [source]
Determine subpixel position of corners. A statistical test decides whether the corner is defined as the intersection of two edges or a single peak. Depending on the classification result, the subpixel corner location is determined based on the local covariance of the grey-values. If the significance level for either statistical test is not sufficient, the corner cannot be classified, and the output subpixel position is set to NaN.
Parameters
image : ndarray
Input image.
corners : (N, 2) ndarray
Corner coordinates (row, col).
window_size : int, optional
Search window size for subpixel estimation.
alpha : float, optional
Significance level for corner classification.
Returns
positions : (N, 2) ndarray
Subpixel corner positions. NaN for “not classified” corners.
References
1
Förstner, W., & Gülch, E. (1987, June). A fast operator for detection and precise location of distinct points, corners and centres of circular features. In Proc. ISPRS intercommission conference on fast processing of photogrammetric data (pp. 281-305). https://cseweb.ucsd.edu/classes/sp02/cse252/foerstner/foerstner.pdf
2
https://en.wikipedia.org/wiki/Corner_detection
Examples
>>> import numpy as np
>>> from skimage.feature import corner_harris, corner_peaks, corner_subpix
>>> img = np.zeros((10, 10))
>>> img[:5, :5] = 1
>>> img[5:, 5:] = 1
>>> img.astype(int)
array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1]])
>>> coords = corner_peaks(corner_harris(img), min_distance=2)
>>> coords_subpix = corner_subpix(img, coords, window_size=7)
>>> coords_subpix
array([[4.5, 4.5]])
skimage.feature.daisy(image, step=4, radius=15, rings=3, histograms=8, orientations=8, normalization='l1', sigmas=None, ring_radii=None, visualize=False) [source]
Extract DAISY feature descriptors densely for the given image. DAISY is a feature descriptor similar to SIFT formulated in a way that allows for fast dense extraction. Typically, this is practical for bag-of-features image representations. The implementation follows Tola et al. [1] but deviates on the following points: Histogram bin contributions are smoothed with a circular Gaussian window over the tonal range (the angular range). The sigma values of the spatial Gaussian smoothing in this code do not match the sigma values in the original code by Tola et al. [2]. In their code, spatial smoothing is applied to both the input image and the center histogram. However, this smoothing is not documented in [1] and, therefore, it is omitted.
Parameters
image : (M, N) array
Input image (grayscale).
step : int, optional
Distance between descriptor sampling points.
radius : int, optional
Radius (in pixels) of the outermost ring.
rings : int, optional
Number of rings.
histograms : int, optional
Number of histograms sampled per ring.
orientations : int, optional
Number of orientations (bins) per histogram.
normalization : [ ‘l1’ | ‘l2’ | ‘daisy’ | ‘off’ ], optional
How to normalize the descriptors: ‘l1’: L1-normalization of each descriptor. ‘l2’: L2-normalization of each descriptor. ‘daisy’: L2-normalization of individual histograms. ‘off’: Disable normalization.
sigmas : 1D array of float, optional
Standard deviation of spatial Gaussian smoothing for the center histogram and for each ring of histograms. The array of sigmas should be sorted from the center out, i.e. the first sigma value defines the spatial smoothing of the center histogram and the last sigma value defines the spatial smoothing of the outermost ring. Specifying sigmas overrides the rings parameter by setting rings = len(sigmas) - 1.
ring_radii : 1D array of int, optional
Radius (in pixels) for each ring. Specifying ring_radii overrides the rings and radius parameters by setting rings = len(ring_radii) and radius = ring_radii[-1]. If both sigmas and ring_radii are given, they must satisfy len(ring_radii) == len(sigmas) + 1, since no radius is needed for the center histogram.
visualize : bool, optional
Generate a visualization of the DAISY descriptors.
Returns
descs : array
Grid of DAISY descriptors for the given image as an array of dimensionality (P, Q, R) where P = ceil((M - radius*2) / step), Q = ceil((N - radius*2) / step), and R = (rings * histograms + 1) * orientations.
descs_img : (M, N, 3) array (only if visualize==True)
Visualization of the DAISY descriptors.
References
1
Tola et al. “Daisy: An efficient dense descriptor applied to wide- baseline stereo.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 32.5 (2010): 815-830.
2
http://cvlab.epfl.ch/software/daisy
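This entry has no worked example, so the shape formulas in the Returns section can be checked with a short sketch (the random test image and the parameter values here are illustrative assumptions, not taken from the official docs):

```python
import math
import numpy as np
from skimage.feature import daisy

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # synthetic 64x64 grayscale image

# 2 rings of 6 histograms, 8 orientation bins each, sampled every 16 px.
descs = daisy(img, step=16, radius=15, rings=2, histograms=6,
              orientations=8, normalization='l1')

# Per the Returns section: P = ceil((M - 2*radius) / step) and
# R = (rings * histograms + 1) * orientations.
P = math.ceil((64 - 2 * 15) / 16)
R = (2 * 6 + 1) * 8
print(descs.shape == (P, P, R))  # True
```

With these numbers, each of the 3 x 3 sampling points gets a (2*6 + 1)*8 = 104-dimensional descriptor, so increasing rings or histograms grows the descriptor length quickly.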
skimage.feature.draw_haar_like_feature(image, r, c, width, height, feature_coord, color_positive_block=(1.0, 0.0, 0.0), color_negative_block=(0.0, 1.0, 0.0), alpha=0.5, max_n_features=None, random_state=None) [source]
Visualization of Haar-like features.
Parameters
image : (M, N) ndarray
The region of an integral image for which the features need to be computed.
r : int
Row-coordinate of top left corner of the detection window.
c : int
Column-coordinate of top left corner of the detection window.
width : int
Width of the detection window.
height : int
Height of the detection window.
feature_coord : ndarray of list of tuples or None, optional
The array of coordinates to be extracted. This is useful when you want to recompute only a subset of features. In this case feature_type needs to be an array containing the type of each feature, as returned by haar_like_feature_coord(). By default, all coordinates are computed.
color_positive_block : tuple of 3 floats
Floats specifying the color for the positive block. Corresponding values define (R, G, B) values. Default value is red (1, 0, 0).
color_negative_block : tuple of 3 floats
Floats specifying the color for the negative block. Corresponding values define (R, G, B) values. Default value is green (0, 1, 0).
alpha : float
Value in the range [0, 1] that specifies opacity of visualization. 1 - fully transparent, 0 - opaque.
max_n_features : int, default=None
The maximum number of features to be returned. By default, all features are returned.
random_state : int, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. The random state is used when generating a set of features smaller than the total number of available features.
Returns
features : (M, N) ndarray
An image in which the different features will be added.
Examples
>>> import numpy as np
>>> from skimage.feature import haar_like_feature_coord
>>> from skimage.feature import draw_haar_like_feature
>>> feature_coord, _ = haar_like_feature_coord(2, 2, 'type-4')
>>> image = draw_haar_like_feature(np.zeros((2, 2)),
... 0, 0, 2, 2,
... feature_coord,
... max_n_features=1)
>>> image
array([[[0. , 0.5, 0. ],
[0.5, 0. , 0. ]],
[[0.5, 0. , 0. ],
[0. , 0.5, 0. ]]])
skimage.feature.draw_multiblock_lbp(image, r, c, width, height, lbp_code=0, color_greater_block=(1, 1, 1), color_less_block=(0, 0.69, 0.96), alpha=0.5) [source]
Multi-block local binary pattern visualization. Blocks with higher sums are colored with alpha-blended white rectangles, whereas blocks with lower sums are colored alpha-blended cyan. Colors and the alpha parameter can be changed.
Parameters
image : ndarray of float or uint
Image on which to visualize the pattern.
r : int
Row-coordinate of top left corner of a rectangle containing feature.
c : int
Column-coordinate of top left corner of a rectangle containing feature.
width : int
Width of one of 9 equal rectangles that will be used to compute a feature.
height : int
Height of one of 9 equal rectangles that will be used to compute a feature.
lbp_code : int
The descriptor of the feature to visualize. If not provided, the descriptor with value 0 will be used.
color_greater_block : tuple of 3 floats
Floats specifying the color for the block that has greater intensity value. They should be in the range [0, 1]. Corresponding values define (R, G, B) values. Default value is white (1, 1, 1).
color_less_block : tuple of 3 floats
Floats specifying the color for the block that has less intensity value. They should be in the range [0, 1]. Corresponding values define (R, G, B) values. Default value is cyan (0, 0.69, 0.96).
alpha : float
Value in the range [0, 1] that specifies opacity of visualization. 1 - fully transparent, 0 - opaque.
Returns
outputndarray of float
Image with MB-LBP visualization. References
1
Face Detection Based on Multi-Block LBP Representation. Lun Zhang, Rufeng Chu, Shiming Xiang, Shengcai Liao, Stan Z. Li http://www.cbsr.ia.ac.cn/users/scliao/papers/Zhang-ICB07-MBLBP.pdf
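No example accompanies this entry. A minimal sketch of calling the visualization (the image and block sizes here are illustrative assumptions, not from the official docs) could look like:

```python
import numpy as np
from skimage.feature import draw_multiblock_lbp

# A 9x9 image covered by a 3x3 grid of 3x3 blocks (width=height=3),
# with the feature's top-left corner at (r, c) = (0, 0).
img = np.zeros((9, 9))
img[:3, :] = 1  # top row of blocks brighter than the rest

# lbp_code=0 marks every neighbor block as "less than" the center,
# so all 8 neighbor blocks get the color_less_block (cyan) overlay.
viz = draw_multiblock_lbp(img, 0, 0, 3, 3, lbp_code=0, alpha=0.5)
```

The returned array is a color visualization of the input, suitable for display with e.g. matplotlib's imshow.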
skimage.feature.greycomatrix(image, distances, angles, levels=None, symmetric=False, normed=False) [source]
Calculate the grey-level co-occurrence matrix. A grey level co-occurrence matrix is a histogram of co-occurring greyscale values at a given offset over an image.
Parameters
image : array_like
Integer typed input image. Only positive valued images are supported. If type is other than uint8, the argument levels needs to be set.
distances : array_like
List of pixel pair distance offsets.
angles : array_like
List of pixel pair angles in radians.
levels : int, optional
The input image should contain integers in [0, levels-1], where levels indicates the number of grey-levels counted (typically 256 for an 8-bit image). This argument is required for 16-bit images or higher and is typically the maximum of the image. As the output matrix is at least levels x levels, it might be preferable to use binning of the input image rather than large values for levels.
symmetric : bool, optional
If True, the output matrix P[:, :, d, theta] is symmetric. This is accomplished by ignoring the order of value pairs, so both (i, j) and (j, i) are accumulated when (i, j) is encountered for a given offset. The default is False.
normed : bool, optional
If True, normalize each matrix P[:, :, d, theta] by dividing by the total number of accumulated co-occurrences for the given offset. The elements of the resulting matrix sum to 1. The default is False.
Returns
P : 4-D ndarray
The grey-level co-occurrence histogram. The value P[i,j,d,theta] is the number of times that grey-level j occurs at a distance d and at an angle theta from grey-level i. If normed is False, the output is of type uint32, otherwise it is float64. The dimensions are: levels x levels x number of distances x number of angles.
References
1
The GLCM Tutorial Home Page, http://www.fp.ucalgary.ca/mhallbey/tutorial.htm
2
Haralick, RM.; Shanmugam, K., “Textural features for image classification” IEEE Transactions on systems, man, and cybernetics 6 (1973): 610-621. DOI:10.1109/TSMC.1973.4309314
3
Pattern Recognition Engineering, Morton Nadler & Eric P. Smith
4
Wikipedia, https://en.wikipedia.org/wiki/Co-occurrence_matrix
Examples
Compute 2 GLCMs: one for a 1-pixel offset to the right, and one for a 1-pixel offset upwards.
>>> import numpy as np
>>> from skimage.feature import greycomatrix
>>> image = np.array([[0, 0, 1, 1],
... [0, 0, 1, 1],
... [0, 2, 2, 2],
... [2, 2, 3, 3]], dtype=np.uint8)
>>> result = greycomatrix(image, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4],
... levels=4)
>>> result[:, :, 0, 0]
array([[2, 2, 1, 0],
[0, 2, 0, 0],
[0, 0, 3, 1],
[0, 0, 0, 1]], dtype=uint32)
>>> result[:, :, 0, 1]
array([[1, 1, 3, 0],
[0, 1, 1, 0],
[0, 0, 0, 2],
[0, 0, 0, 0]], dtype=uint32)
>>> result[:, :, 0, 2]
array([[3, 0, 2, 0],
[0, 2, 2, 0],
[0, 0, 1, 2],
[0, 0, 0, 0]], dtype=uint32)
>>> result[:, :, 0, 3]
array([[2, 0, 0, 0],
[1, 1, 2, 0],
[0, 0, 2, 1],
[0, 0, 0, 0]], dtype=uint32)
skimage.feature.greycoprops(P, prop='contrast') [source]
Calculate texture properties of a GLCM. Compute a feature of a grey level co-occurrence matrix to serve as a compact summary of the matrix. The properties are computed as follows: ‘contrast’: \(\sum_{i,j=0}^{levels-1} P_{i,j}(i-j)^2\)
‘dissimilarity’: \(\sum_{i,j=0}^{levels-1}P_{i,j}|i-j|\)
‘homogeneity’: \(\sum_{i,j=0}^{levels-1}\frac{P_{i,j}}{1+(i-j)^2}\)
‘ASM’: \(\sum_{i,j=0}^{levels-1} P_{i,j}^2\)
‘energy’: \(\sqrt{ASM}\)
‘correlation’:
\[\sum_{i,j=0}^{levels-1} P_{i,j}\left[\frac{(i-\mu_i) \ (j-\mu_j)}{\sqrt{(\sigma_i^2)(\sigma_j^2)}}\right]\]
Each GLCM is normalized to have a sum of 1 before the computation of texture properties.
Parameters
P : ndarray
Input array. P is the grey-level co-occurrence histogram for which to compute the specified property. The value P[i,j,d,theta] is the number of times that grey-level j occurs at a distance d and at an angle theta from grey-level i.
prop : {‘contrast’, ‘dissimilarity’, ‘homogeneity’, ‘energy’, ‘correlation’, ‘ASM’}, optional
The property of the GLCM to compute. The default is ‘contrast’.
Returns
results : 2-D ndarray
2-dimensional array. results[d, a] is the property ‘prop’ for the d’th distance and the a’th angle.
References
1
The GLCM Tutorial Home Page, http://www.fp.ucalgary.ca/mhallbey/tutorial.htm
Examples
Compute the contrast for GLCMs with distances [1, 2] and angles [0 degrees, 90 degrees].
>>> import numpy as np
>>> from skimage.feature import greycomatrix, greycoprops
>>> image = np.array([[0, 0, 1, 1],
... [0, 0, 1, 1],
... [0, 2, 2, 2],
... [2, 2, 3, 3]], dtype=np.uint8)
>>> g = greycomatrix(image, [1, 2], [0, np.pi/2], levels=4,
... normed=True, symmetric=True)
>>> contrast = greycoprops(g, 'contrast')
>>> contrast
array([[0.58333333, 1. ],
[1.25 , 2.75 ]])
skimage.feature.haar_like_feature(int_image, r, c, width, height, feature_type=None, feature_coord=None) [source]
Compute the Haar-like features for a region of interest (ROI) of an integral image. Haar-like features have been successfully used for image classification and object detection [1]. They were used in the real-time face detection algorithm proposed in [2].
Parameters
int_image : (M, N) ndarray
Integral image for which the features need to be computed.
r : int
Row-coordinate of top left corner of the detection window.
c : int
Column-coordinate of top left corner of the detection window.
width : int
Width of the detection window.
height : int
Height of the detection window.
feature_type : str or list of str or None, optional
The type of feature to consider: ‘type-2-x’: 2 rectangles varying along the x axis; ‘type-2-y’: 2 rectangles varying along the y axis; ‘type-3-x’: 3 rectangles varying along the x axis; ‘type-3-y’: 3 rectangles varying along the y axis; ‘type-4’: 4 rectangles varying along the x and y axes. By default all features are extracted. If used with feature_coord, it should correspond to the feature type of each associated coordinate feature.
feature_coord : ndarray of list of tuples or None, optional
The array of coordinates to be extracted. This is useful when you want to recompute only a subset of features. In this case feature_type needs to be an array containing the type of each feature, as returned by haar_like_feature_coord(). By default, all coordinates are computed.
Returns
haar_features : (n_features,) ndarray of int or float
Resulting Haar-like features. Each value is equal to the subtraction of sums of the positive and negative rectangles. The data type depends on the data type of int_image: int when the data type of int_image is uint or int and float when the data type of int_image is float.
Notes
When extracting those features in parallel, be aware that the choice of the backend (i.e. multiprocessing vs threading) will have an impact on the performance. The rule of thumb is as follows: use multiprocessing when extracting features for all possible ROIs in an image; use threading when extracting the feature at specific locations for a limited number of ROIs. Refer to the example Face classification using Haar-like feature descriptor for more insights.
References
1
https://en.wikipedia.org/wiki/Haar-like_feature
2
Oren, M., Papageorgiou, C., Sinha, P., Osuna, E., & Poggio, T. (1997, June). Pedestrian detection using wavelet templates. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (pp. 193-199). IEEE. http://tinyurl.com/y6ulxfta DOI:10.1109/CVPR.1997.609319
3
Viola, Paul, and Michael J. Jones. “Robust real-time face detection.” International journal of computer vision 57.2 (2004): 137-154. https://www.merl.com/publications/docs/TR2004-043.pdf DOI:10.1109/CVPR.2001.990517
Examples
>>> import numpy as np
>>> from skimage.transform import integral_image
>>> from skimage.feature import haar_like_feature
>>> img = np.ones((5, 5), dtype=np.uint8)
>>> img_ii = integral_image(img)
>>> feature = haar_like_feature(img_ii, 0, 0, 5, 5, 'type-3-x')
>>> feature
array([-1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -4, -1,
-2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -1, -2, -3, -1, -2, -3, -1,
-2, -1, -2, -1, -2, -1, -1, -1])
You can compute the feature for some pre-computed coordinates. >>> from skimage.feature import haar_like_feature_coord
>>> feature_coord, feature_type = zip(
... *[haar_like_feature_coord(5, 5, feat_t)
... for feat_t in ('type-2-x', 'type-3-x')])
>>> # only select one feature over two
>>> feature_coord = np.concatenate([x[::2] for x in feature_coord])
>>> feature_type = np.concatenate([x[::2] for x in feature_type])
>>> feature = haar_like_feature(img_ii, 0, 0, 5, 5,
... feature_type=feature_type,
... feature_coord=feature_coord)
>>> feature
array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, -1, -3, -1, -3, -1, -3, -1, -3, -1,
-3, -1, -3, -1, -3, -2, -1, -3, -2, -2, -2, -1])
skimage.feature.haar_like_feature_coord(width, height, feature_type=None) [source]
Compute the coordinates of Haar-like features.
Parameters
width : int
Width of the detection window.
height : int
Height of the detection window.
feature_type : str or list of str or None, optional
The type of feature to consider: ‘type-2-x’: 2 rectangles varying along the x axis; ‘type-2-y’: 2 rectangles varying along the y axis; ‘type-3-x’: 3 rectangles varying along the x axis; ‘type-3-y’: 3 rectangles varying along the y axis; ‘type-4’: 4 rectangles varying along the x and y axes. By default all features are extracted.
Returns
feature_coord : (n_features, n_rectangles, 2, 2) ndarray of list of tuple coord
Coordinates of the rectangles for each feature.
feature_type : (n_features,) ndarray of str
The corresponding type for each feature.
Examples
>>> import numpy as np
>>> from skimage.transform import integral_image
>>> from skimage.feature import haar_like_feature_coord
>>> feat_coord, feat_type = haar_like_feature_coord(2, 2, 'type-4')
>>> feat_coord
array([ list([[(0, 0), (0, 0)], [(0, 1), (0, 1)],
[(1, 1), (1, 1)], [(1, 0), (1, 0)]])], dtype=object)
>>> feat_type
array(['type-4'], dtype=object)
skimage.feature.hessian_matrix(image, sigma=1, mode='constant', cval=0, order='rc') [source]
Compute the Hessian matrix. The Hessian matrix is defined as: H = [Hrr Hrc]
    [Hrc Hcc]
which is computed by convolving the image with the second derivatives of the Gaussian kernel in the respective r- and c-directions.
Parameters
image : ndarray
Input image.
sigma : float
Standard deviation of the Gaussian kernel, which is used as a weighting function for the auto-correlation matrix.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries.
order : {‘rc’, ‘xy’}, optional
This parameter allows for the use of reverse or forward order of the image axes in gradient computation. ‘rc’ indicates use of the first axis initially (Hrr, Hrc, Hcc), whilst ‘xy’ indicates use of the last axis initially (Hxx, Hxy, Hyy).
Returns
Hrr : ndarray
Element of the Hessian matrix for each pixel in the input image.
Hrc : ndarray
Element of the Hessian matrix for each pixel in the input image.
Hcc : ndarray
Element of the Hessian matrix for each pixel in the input image.
Examples
>>> from skimage.feature import hessian_matrix
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 4
>>> Hrr, Hrc, Hcc = hessian_matrix(square, sigma=0.1, order='rc')
>>> Hrc
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 0., -1., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., -1., 0., 1., 0.],
[ 0., 0., 0., 0., 0.]]) | skimage.api.skimage.feature#skimage.feature.hessian_matrix |
skimage.feature.hessian_matrix_det(image, sigma=1, approximate=True) [source]
Compute the approximate Hessian determinant over an image. The 2D approximate method uses box filters over integral images to compute the approximate Hessian determinant, as described in [1].
Parameters
image : array
The image over which to compute the Hessian determinant.
sigma : float, optional
Standard deviation of the Gaussian kernel used for the Hessian matrix.
approximate : bool, optional
If True and the image is 2D, use a much faster approximate computation. This argument has no effect on 3D and higher-dimensional images.
Returns
out : array
The array of the determinant of Hessians.
Notes
For 2D images when approximate=True, the running time of this method depends only on the size of the image. It is independent of sigma, as one would expect. The downside is that the result for sigma less than 3 is not accurate, i.e., not similar to the result obtained if someone computed the Hessian and took its determinant.
References
1
Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf | skimage.api.skimage.feature#skimage.feature.hessian_matrix_det |
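The docstring above gives no usage example; a minimal sketch follows. The image contents and the choice of sigma here are arbitrary illustrations, not values prescribed by the documentation:

```python
import numpy as np
from skimage.feature import hessian_matrix_det

# A small synthetic image with a single bright square "blob".
image = np.zeros((30, 30))
image[12:18, 12:18] = 1.0

# Approximate determinant of the Hessian. The response has the same
# shape as the input and peaks near blob-like structures; note the
# docstring's caveat that results for sigma < 3 are inaccurate.
det = hessian_matrix_det(image, sigma=3)
```

The response map can then be fed to a peak detector such as peak_local_max to localize blobs.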
skimage.feature.hessian_matrix_eigvals(H_elems) [source]
Compute eigenvalues of the Hessian matrix.
Parameters
H_elems : list of ndarray
The upper-diagonal elements of the Hessian matrix, as returned by hessian_matrix.
Returns
eigs : ndarray
The eigenvalues of the Hessian matrix, in decreasing order. The eigenvalues are the leading dimension. That is, eigs[i, j, k] contains the ith-largest eigenvalue at position (j, k).
Examples
>>> from skimage.feature import hessian_matrix, hessian_matrix_eigvals
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 4
>>> H_elems = hessian_matrix(square, sigma=0.1, order='rc')
>>> hessian_matrix_eigvals(H_elems)[0]
array([[ 0., 0., 2., 0., 0.],
[ 0., 1., 0., 1., 0.],
[ 2., 0., -2., 0., 2.],
[ 0., 1., 0., 1., 0.],
[ 0., 0., 2., 0., 0.]]) | skimage.api.skimage.feature#skimage.feature.hessian_matrix_eigvals |
skimage.feature.hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(3, 3), block_norm='L2-Hys', visualize=False, transform_sqrt=False, feature_vector=True, multichannel=None) [source]
Extract a Histogram of Oriented Gradients (HOG) for a given image. Compute a HOG by: (optional) global image normalization; computing the gradient image in row and col; computing gradient histograms; normalizing across blocks; flattening into a feature vector.
Parameters
image : (M, N[, C]) ndarray
Input image.
orientations : int, optional
Number of orientation bins.
pixels_per_cell : 2-tuple (int, int), optional
Size (in pixels) of a cell.
cells_per_block : 2-tuple (int, int), optional
Number of cells in each block.
block_norm : str {‘L1’, ‘L1-sqrt’, ‘L2’, ‘L2-Hys’}, optional
Block normalization method:
L1 : normalization using the L1-norm.
L1-sqrt : normalization using the L1-norm, followed by square root.
L2 : normalization using the L2-norm.
L2-Hys : normalization using the L2-norm, followed by limiting the maximum values to 0.2 (Hys stands for hysteresis) and renormalization using the L2-norm (default). For details, see [3], [4].
visualize : bool, optional
Also return an image of the HOG. For each cell and orientation bin, the image contains a line segment that is centered at the cell center, is perpendicular to the midpoint of the range of angles spanned by the orientation bin, and has intensity proportional to the corresponding histogram value.
transform_sqrt : bool, optional
Apply power-law compression to normalize the image before processing. DO NOT use this if the image contains negative values. Also see the Notes section below.
feature_vector : bool, optional
Return the data as a feature vector by calling .ravel() on the result just before returning.
multichannel : boolean, optional
If True, the last image dimension is considered as a color channel; otherwise as spatial.
Returns
out : (n_blocks_row, n_blocks_col, n_cells_row, n_cells_col, n_orient) ndarray
HOG descriptor for the image. If feature_vector is True, a 1D (flattened) array is returned.
hog_image : (M, N) ndarray, optional
A visualization of the HOG image. Only provided if visualize is True.
Notes
The presented code implements the HOG extraction method from [2] with the following changes: (I) blocks of (3, 3) cells are used ((2, 2) in the paper); (II) no smoothing within cells (Gaussian spatial window with sigma=8pix in the paper); (III) L1 block normalization is used (L2-Hys in the paper). Power-law compression, also known as gamma correction, is used to reduce the effects of shadowing and illumination variations. The compression makes the dark regions lighter. When the kwarg transform_sqrt is set to True, the function computes the square root of each color channel and then applies the HOG algorithm to the image.
References
1
https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
2
Dalal, N and Triggs, B, Histograms of Oriented Gradients for Human Detection, IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2005 San Diego, CA, USA, https://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf, DOI:10.1109/CVPR.2005.177
3
Lowe, D.G., Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision (2004) 60: 91, http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf, DOI:10.1023/B:VISI.0000029664.99615.94
4
Dalal, N, Finding People in Images and Videos, Human-Computer Interaction [cs.HC], Institut National Polytechnique de Grenoble - INPG, 2006, https://tel.archives-ouvertes.fr/tel-00390303/file/NavneetDalalThesis.pdf | skimage.api.skimage.feature#skimage.feature.hog |
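A minimal usage sketch of the parameters above; the synthetic image and the cell/block sizes are illustrative choices, not the function's defaults:

```python
import numpy as np
from skimage.feature import hog

# Synthetic 32x32 grayscale gradient; any 2D float image would do.
image = np.linspace(0, 1, 32 * 32).reshape(32, 32)

# 16x16-pixel cells give a 2x2 grid of cells; with one cell per block
# there are 2x2 blocks, so the flattened descriptor has
# 2 * 2 * 1 * 1 * 8 = 32 entries.
fd = hog(image, orientations=8, pixels_per_cell=(16, 16),
         cells_per_block=(1, 1), feature_vector=True)
```

The descriptor length follows directly from the Returns shape documented above: n_blocks_row * n_blocks_col * n_cells_row * n_cells_col * n_orient.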
skimage.feature.local_binary_pattern(image, P, R, method='default') [source]
Gray-scale and rotation-invariant LBP (Local Binary Patterns). LBP is an invariant descriptor that can be used for texture classification.
Parameters
image : (N, M) array
Graylevel image.
P : int
Number of circularly symmetric neighbour set points (quantization of the angular space).
R : float
Radius of circle (spatial resolution of the operator).
method : {‘default’, ‘ror’, ‘uniform’, ‘var’}
Method to determine the pattern.
‘default’: original local binary pattern, which is gray-scale but not rotation invariant.
‘ror’: extension of the default implementation, which is gray-scale and rotation invariant.
‘uniform’: improved rotation invariance with uniform patterns and finer quantization of the angular space, which is gray-scale and rotation invariant.
‘nri_uniform’: non-rotation-invariant uniform patterns variant, which is only gray-scale invariant [2].
‘var’: rotation-invariant variance measures of the contrast of local image texture, which is rotation but not gray-scale invariant.
Returns
output : (N, M) array
LBP image.
References
1
Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. Timo Ojala, Matti Pietikainen, Topi Maenpaa. http://www.ee.oulu.fi/research/mvmp/mvg/files/pdf/pdf_94.pdf, 2002.
2
Face recognition with local binary patterns. Timo Ahonen, Abdenour Hadid, Matti Pietikainen, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.214.6851, 2004. | skimage.api.skimage.feature#skimage.feature.local_binary_pattern |
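A short sketch of the ‘uniform’ method described above; the checkerboard test image is an arbitrary choice for illustration:

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Checkerboard-like test image.
image = np.zeros((8, 8))
image[::2, 1::2] = 1
image[1::2, ::2] = 1

# P=8 neighbours on a circle of radius R=1. The 'uniform' method yields
# at most P + 2 = 10 distinct pattern labels (0..9), which makes the
# per-image histogram compact enough for texture classification.
lbp = local_binary_pattern(image, P=8, R=1, method='uniform')
hist, _ = np.histogram(lbp, bins=np.arange(11))
```

In texture classification, the normalized histogram (not the LBP image itself) is typically used as the feature vector.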
skimage.feature.masked_register_translation(src_image, target_image, src_mask, target_mask=None, overlap_ratio=0.3) [source]
Deprecated function. Use skimage.registration.phase_cross_correlation instead. | skimage.api.skimage.feature#skimage.feature.masked_register_translation |
skimage.feature.match_descriptors(descriptors1, descriptors2, metric=None, p=2, max_distance=inf, cross_check=True, max_ratio=1.0) [source]
Brute-force matching of descriptors. For each descriptor in the first set, this matcher finds the closest descriptor in the second set (and vice versa in the case of enabled cross-checking).
Parameters
descriptors1 : (M, P) array
Descriptors of size P about M keypoints in the first image.
descriptors2 : (N, P) array
Descriptors of size P about N keypoints in the second image.
metric : {‘euclidean’, ‘cityblock’, ‘minkowski’, ‘hamming’, …}, optional
The metric to compute the distance between two descriptors. See scipy.spatial.distance.cdist for all possible types. The Hamming distance should be used for binary descriptors. By default, the L2-norm is used for all descriptors of dtype float or double, and the Hamming distance is used for binary descriptors automatically.
p : int, optional
The p-norm to apply for metric='minkowski'.
max_distance : float, optional
Maximum allowed distance between descriptors of two keypoints in separate images to be regarded as a match.
cross_check : bool, optional
If True, the matched keypoints are returned after cross-checking, i.e. a matched pair (keypoint1, keypoint2) is returned if keypoint2 is the best match for keypoint1 in the second image and keypoint1 is the best match for keypoint2 in the first image.
max_ratio : float, optional
Maximum ratio of distances between the first and second closest descriptors in the second set of descriptors. This threshold is useful to filter ambiguous matches between the two descriptor sets. The choice of this value depends on the statistics of the chosen descriptor, e.g., for SIFT descriptors a value of 0.8 is usually chosen; see D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, 2004.
Returns
matches : (Q, 2) array
Indices of corresponding matches in the first and second sets of descriptors, where matches[:, 0] denote the indices in the first and matches[:, 1] the indices in the second set of descriptors. | skimage.api.skimage.feature#skimage.feature.match_descriptors
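The cross-checking behaviour can be sketched with two tiny hand-made descriptor sets (the values are arbitrary; with float input the L2-norm is used, as documented above):

```python
import numpy as np
from skimage.feature import match_descriptors

# Two float descriptor sets with an obvious correspondence:
# descriptor 0 of the first set is closest to descriptor 1 of the
# second set, and vice versa.
descriptors1 = np.array([[0.0, 0.0], [5.0, 5.0]])
descriptors2 = np.array([[5.1, 5.0], [0.1, 0.0]])

# Cross-checked brute-force matching keeps only mutual nearest
# neighbours, so both pairs survive here.
matches = match_descriptors(descriptors1, descriptors2, cross_check=True)
```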
skimage.feature.match_template(image, template, pad_input=False, mode='constant', constant_values=0) [source]
Match a template to a 2-D or 3-D image using normalized correlation. The output is an array with values between -1.0 and 1.0. The value at a given position corresponds to the correlation coefficient between the image and the template. For pad_input=True, matches correspond to the center, and otherwise to the top-left corner, of the template. To find the best match you must search for peaks in the response (output) image.
Parameters
image : (M, N[, D]) array
2-D or 3-D input image.
template : (m, n[, d]) array
Template to locate. It must satisfy (m <= M, n <= N[, d <= D]).
pad_input : bool
If True, pad image so that the output is the same size as the image, and output values correspond to the template center. Otherwise, the output is an array with shape (M - m + 1, N - n + 1) for an (M, N) image and an (m, n) template, and matches correspond to the origin (top-left corner) of the template.
mode : see numpy.pad, optional
Padding mode.
constant_values : see numpy.pad, optional
Constant values used in conjunction with mode='constant'.
Returns
output : array
Response image with correlation coefficients.
Notes
Details on the cross-correlation are presented in [1]. This implementation uses FFT convolutions of the image and the template. Reference [2] presents similar derivations, but the approximation presented in that reference is not used in our implementation.
References
1
J. P. Lewis, “Fast Normalized Cross-Correlation”, Industrial Light and Magic.
2
Briechle and Hanebeck, “Template Matching using Fast Normalized Cross Correlation”, Proceedings of the SPIE (2001). DOI:10.1117/12.421129
Examples
>>> template = np.zeros((3, 3))
>>> template[1, 1] = 1
>>> template
array([[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.]])
>>> image = np.zeros((6, 6))
>>> image[1, 1] = 1
>>> image[4, 4] = -1
>>> image
array([[ 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., -1., 0.],
[ 0., 0., 0., 0., 0., 0.]])
>>> result = match_template(image, template)
>>> np.round(result, 3)
array([[ 1. , -0.125, 0. , 0. ],
[-0.125, -0.125, 0. , 0. ],
[ 0. , 0. , 0.125, 0.125],
[ 0. , 0. , 0.125, -1. ]])
>>> result = match_template(image, template, pad_input=True)
>>> np.round(result, 3)
array([[-0.125, -0.125, -0.125, 0. , 0. , 0. ],
[-0.125, 1. , -0.125, 0. , 0. , 0. ],
[-0.125, -0.125, -0.125, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0.125, 0.125, 0.125],
[ 0. , 0. , 0. , 0.125, -1. , 0.125],
[ 0. , 0. , 0. , 0.125, 0.125, 0.125]]) | skimage.api.skimage.feature#skimage.feature.match_template |
skimage.feature.multiblock_lbp(int_image, r, c, width, height) [source]
Multi-block local binary pattern (MB-LBP). The features are calculated similarly to local binary patterns (LBPs) (see local_binary_pattern()), except that summed blocks are used instead of individual pixel values. MB-LBP is an extension of LBP that can be computed on multiple scales in constant time using the integral image. Nine equally-sized rectangles are used to compute a feature. For each rectangle, the sum of the pixel intensities is computed. Comparisons of these sums to that of the central rectangle determine the feature, similarly to LBP.
Parameters
int_image : (N, M) array
Integral image.
r : int
Row-coordinate of the top-left corner of a rectangle containing the feature.
c : int
Column-coordinate of the top-left corner of a rectangle containing the feature.
width : int
Width of one of the 9 equal rectangles that will be used to compute a feature.
height : int
Height of one of the 9 equal rectangles that will be used to compute a feature.
Returns
output : int
8-bit MB-LBP feature descriptor.
References
1
Face Detection Based on Multi-Block LBP Representation. Lun Zhang, Rufeng Chu, Shiming Xiang, Shengcai Liao, Stan Z. Li http://www.cbsr.ia.ac.cn/users/scliao/papers/Zhang-ICB07-MBLBP.pdf | skimage.api.skimage.feature#skimage.feature.multiblock_lbp |
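A minimal sketch, assuming a 9x9 image so that the nine 3x3 blocks tile it exactly; the image contents are an arbitrary illustration:

```python
import numpy as np
from skimage.feature import multiblock_lbp
from skimage.transform import integral_image

# Image with a bright central 3x3 region against a dark surround.
image = np.zeros((9, 9))
image[3:6, 3:6] = 10

# MB-LBP works on the integral image, so each block sum costs O(1).
int_image = integral_image(image)

# The 3x3 grid of 3x3-pixel blocks covers the whole 9x9 image; the
# comparison of the eight neighbour-block sums against the (bright)
# central block yields the 8-bit descriptor.
code = multiblock_lbp(int_image, 0, 0, 3, 3)
```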
skimage.feature.multiscale_basic_features(image, multichannel=False, intensity=True, edges=True, texture=True, sigma_min=0.5, sigma_max=16, num_sigma=None, num_workers=None) [source]
Local features for a single- or multi-channel nd image. Intensity, gradient intensity and local structure are computed at different scales thanks to Gaussian blurring.
Parameters
image : ndarray
Input image, which can be grayscale or multichannel.
multichannel : bool, default False
True if the last dimension corresponds to color channels.
intensity : bool, default True
If True, pixel intensities averaged over the different scales are added to the feature set.
edges : bool, default True
If True, intensities of local gradients averaged over the different scales are added to the feature set.
texture : bool, default True
If True, eigenvalues of the Hessian matrix after Gaussian blurring at different scales are added to the feature set.
sigma_min : float, optional
Smallest value of the Gaussian kernel used to average local neighbourhoods before extracting features.
sigma_max : float, optional
Largest value of the Gaussian kernel used to average local neighbourhoods before extracting features.
num_sigma : int, optional
Number of values of the Gaussian kernel between sigma_min and sigma_max. If None, sigma_min multiplied by powers of 2 is used.
num_workers : int or None, optional
The number of parallel threads to use. If set to None, the full set of available cores is used.
Returns
features : np.ndarray
Array of shape image.shape + (n_features,). | skimage.api.skimage.feature#skimage.feature.multiscale_basic_features
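A minimal sketch of the default feature stack (the image and random seed are arbitrary choices for illustration):

```python
import numpy as np
from skimage.feature import multiscale_basic_features

# Small grayscale image with two regions of different intensity.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
image[:, 8:] += 2.0  # brighter right half

# Defaults stack intensity, edge and texture features, each computed at
# several Gaussian scales, along a new trailing axis.
features = multiscale_basic_features(image)
```

The per-pixel feature vectors are typically fed to a classifier (e.g. a random forest) for trainable pixel segmentation.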
class skimage.feature.ORB(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04) [source]
Bases: skimage.feature.util.FeatureDetector, skimage.feature.util.DescriptorExtractor. Oriented FAST and rotated BRIEF (ORB) feature detector and binary descriptor extractor.
Parameters
n_keypoints : int, optional
Number of keypoints to be returned. The function will return the best n_keypoints according to the Harris corner response if more than n_keypoints are detected. If not, then all the detected keypoints are returned.
fast_n : int, optional
The n parameter in skimage.feature.corner_fast. Minimum number of consecutive pixels out of 16 pixels on the circle that should all be either brighter or darker w.r.t. the test pixel. A point c on the circle is darker w.r.t. the test pixel p if Ic < Ip - threshold and brighter if Ic > Ip + threshold. Also stands for the n in the FAST-n corner detector.
fast_threshold : float, optional
The threshold parameter in feature.corner_fast. Threshold used to decide whether the pixels on the circle are brighter, darker or similar w.r.t. the test pixel. Decrease the threshold when more corners are desired, and vice versa.
harris_k : float, optional
The k parameter in skimage.feature.corner_harris. Sensitivity factor to separate corners from edges, typically in the range [0, 0.2]. Small values of k result in the detection of sharp corners.
downscale : float, optional
Downscale factor for the image pyramid. The default value of 1.2 is chosen so that there are more dense scales, which enables robust scale invariance for subsequent feature description.
n_scales : int, optional
Maximum number of scales from the bottom of the image pyramid from which to extract the features.
References
1
Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary Bradski, “ORB: An efficient alternative to SIFT and SURF” http://www.vision.cs.chubu.ac.jp/CV-R/pdf/Rublee_iccv2011.pdf
Examples
>>> from skimage.feature import ORB, match_descriptors
>>> img1 = np.zeros((100, 100))
>>> img2 = np.zeros_like(img1)
>>> np.random.seed(1)
>>> square = np.random.rand(20, 20)
>>> img1[40:60, 40:60] = square
>>> img2[53:73, 53:73] = square
>>> detector_extractor1 = ORB(n_keypoints=5)
>>> detector_extractor2 = ORB(n_keypoints=5)
>>> detector_extractor1.detect_and_extract(img1)
>>> detector_extractor2.detect_and_extract(img2)
>>> matches = match_descriptors(detector_extractor1.descriptors,
... detector_extractor2.descriptors)
>>> matches
array([[0, 0],
[1, 1],
[2, 2],
[3, 3],
[4, 4]])
>>> detector_extractor1.keypoints[matches[:, 0]]
array([[42., 40.],
[47., 58.],
[44., 40.],
[59., 42.],
[45., 44.]])
>>> detector_extractor2.keypoints[matches[:, 1]]
array([[55., 53.],
[60., 71.],
[57., 53.],
[72., 55.],
[58., 57.]])
Attributes
keypoints : (N, 2) array
Keypoint coordinates as (row, col).
scales : (N,) array
Corresponding scales.
orientations : (N,) array
Corresponding orientations in radians.
responses : (N,) array
Corresponding Harris corner responses.
descriptors : (Q, descriptor_size) array of dtype bool
2D array of binary descriptors of size descriptor_size for Q keypoints after filtering out border keypoints, with the value at an index (i, j) being either True or False, representing the outcome of the intensity comparison for the i-th keypoint on the j-th decision pixel-pair. It is Q == np.sum(mask).
__init__(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04) [source]
Initialize self. See help(type(self)) for accurate signature.
detect(image) [source]
Detect oriented FAST keypoints along with the corresponding scale.
Parameters
image : 2D array
Input image.
detect_and_extract(image) [source]
Detect oriented FAST keypoints and extract rBRIEF descriptors. Note that this is faster than first calling detect and then extract.
Parameters
image : 2D array
Input image.
extract(image, keypoints, scales, orientations) [source]
Extract rBRIEF binary descriptors for the given keypoints in the image. Note that the keypoints must be extracted using the same downscale and n_scales parameters. Additionally, if you want to extract both keypoints and descriptors, you should use the faster detect_and_extract.
Parameters
image : 2D array
Input image.
keypoints : (N, 2) array
Keypoint coordinates as (row, col).
scales : (N,) array
Corresponding scales.
orientations : (N,) array
Corresponding orientations in radians. | skimage.api.skimage.feature#skimage.feature.ORB
detect(image) [source]
Detect oriented FAST keypoints along with the corresponding scale.
Parameters
image : 2D array
Input image. | skimage.api.skimage.feature#skimage.feature.ORB.detect
detect_and_extract(image) [source]
Detect oriented FAST keypoints and extract rBRIEF descriptors. Note that this is faster than first calling detect and then extract.
Parameters
image : 2D array
Input image. | skimage.api.skimage.feature#skimage.feature.ORB.detect_and_extract
extract(image, keypoints, scales, orientations) [source]
Extract rBRIEF binary descriptors for the given keypoints in the image. Note that the keypoints must be extracted using the same downscale and n_scales parameters. Additionally, if you want to extract both keypoints and descriptors, you should use the faster detect_and_extract.
Parameters
image : 2D array
Input image.
keypoints : (N, 2) array
Keypoint coordinates as (row, col).
scales : (N,) array
Corresponding scales.
orientations : (N,) array
Corresponding orientations in radians. | skimage.api.skimage.feature#skimage.feature.ORB.extract
__init__(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04) [source]
Initialize self. See help(type(self)) for accurate signature. | skimage.api.skimage.feature#skimage.feature.ORB.__init__ |
skimage.feature.peak_local_max(image, min_distance=1, threshold_abs=None, threshold_rel=None, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None, num_peaks_per_label=inf, p_norm=inf) [source]
Find peaks in an image as a coordinate list or boolean mask. Peaks are the local maxima in a region of 2 * min_distance + 1 (i.e. peaks are separated by at least min_distance). If both threshold_abs and threshold_rel are provided, the maximum of the two is chosen as the minimum intensity threshold of peaks. Changed in version 0.18: Prior to version 0.18, peaks of the same height within a radius of min_distance were all returned, but this could cause unexpected behaviour. From 0.18 onwards, an arbitrary peak within the region is returned. See issue gh-2592.
Parameters
image : ndarray
Input image.
min_distance : int, optional
The minimal allowed distance separating peaks. To find the maximum number of peaks, use min_distance=1.
threshold_abs : float, optional
Minimum intensity of peaks. By default, the absolute threshold is the minimum intensity of the image.
threshold_rel : float, optional
Minimum intensity of peaks, calculated as max(image) * threshold_rel.
exclude_border : int, tuple of ints, or bool, optional
If a positive integer, exclude_border excludes peaks from within exclude_border pixels of the border of the image. If a tuple of non-negative ints, the length of the tuple must match the input array’s dimensionality. Each element of the tuple will exclude peaks from within exclude_border pixels of the border of the image along that dimension. If True, takes the min_distance parameter as the value. If zero or False, peaks are identified regardless of their distance from the border.
indices : bool, optional
If True, the output will be an array representing peak coordinates. The coordinates are sorted according to peak values (larger first). If False, the output will be a boolean array shaped as image.shape, with peaks present at True elements. indices is deprecated and will be removed in version 0.20. Default behavior will be to always return peak coordinates. You can obtain a mask as shown in the example below.
num_peaks : int, optional
Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks peaks based on highest peak intensity.
footprint : ndarray of bools, optional
If provided, footprint == 1 represents the local region within which to search for peaks at every point in image.
labels : ndarray of ints, optional
If provided, each unique region labels == value represents a unique region to search for peaks. Zero is reserved for background.
num_peaks_per_label : int, optional
Maximum number of peaks for each label.
p_norm : float
Which Minkowski p-norm to use. Should be in the range [1, inf]. A finite large p may cause a ValueError if overflow can occur. inf corresponds to the Chebyshev distance and 2 to the Euclidean distance.
Returns
output : ndarray or ndarray of bools
If indices = True: (row, column, …) coordinates of peaks. If indices = False: boolean array shaped like image, with peaks represented by True values.
See also
skimage.feature.corner_peaks
Notes
The peak local maximum function returns the coordinates of local peaks (maxima) in an image. Internally, a maximum filter is used for finding local maxima. This operation dilates the original image. After comparison of the dilated and original images, this function returns the coordinates or a mask of the peaks where the dilated image equals the original image.
Examples
>>> img1 = np.zeros((7, 7))
>>> img1[3, 4] = 1
>>> img1[3, 2] = 1.5
>>> img1
array([[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 1.5, 0. , 1. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 0. , 0. ]])
>>> peak_local_max(img1, min_distance=1)
array([[3, 2],
[3, 4]])
>>> peak_local_max(img1, min_distance=2)
array([[3, 2]])
>>> img2 = np.zeros((20, 20, 20))
>>> img2[10, 10, 10] = 1
>>> img2[15, 15, 15] = 1
>>> peak_idx = peak_local_max(img2, exclude_border=0)
>>> peak_idx
array([[10, 10, 10],
[15, 15, 15]])
>>> peak_mask = np.zeros_like(img2, dtype=bool)
>>> peak_mask[tuple(peak_idx.T)] = True
>>> np.argwhere(peak_mask)
array([[10, 10, 10],
[15, 15, 15]]) | skimage.api.skimage.feature#skimage.feature.peak_local_max |
skimage.feature.plot_matches(ax, image1, image2, keypoints1, keypoints2, matches, keypoints_color='k', matches_color=None, only_matches=False, alignment='horizontal') [source]
Plot matched features.
Parameters
ax : matplotlib.axes.Axes
Matches and images are drawn in this ax.
image1 : (N, M [, 3]) array
First grayscale or color image.
image2 : (N, M [, 3]) array
Second grayscale or color image.
keypoints1 : (K1, 2) array
First keypoint coordinates as (row, col).
keypoints2 : (K2, 2) array
Second keypoint coordinates as (row, col).
matches : (Q, 2) array
Indices of corresponding matches in the first and second sets of descriptors, where matches[:, 0] denote the indices in the first and matches[:, 1] the indices in the second set of descriptors.
keypoints_color : matplotlib color, optional
Color for keypoint locations.
matches_color : matplotlib color, optional
Color for lines which connect keypoint matches. By default the color is chosen randomly.
only_matches : bool, optional
Whether to only plot matches and not plot the keypoint locations.
alignment : {‘horizontal’, ‘vertical’}, optional
Whether to show images side by side, 'horizontal', or one above the other, 'vertical'. | skimage.api.skimage.feature#skimage.feature.plot_matches
skimage.feature.register_translation(src_image, target_image, upsample_factor=1, space='real', return_error=True) [source]
Deprecated function. Use skimage.registration.phase_cross_correlation instead. | skimage.api.skimage.feature#skimage.feature.register_translation |
skimage.feature.shape_index(image, sigma=1, mode='constant', cval=0) [source]
Compute the shape index. The shape index, as defined by Koenderink & van Doorn [1], is a single-valued measure of local curvature, assuming the image is a 3D plane with intensities representing heights. It is derived from the eigenvalues of the Hessian, and its value ranges from -1 to 1 (and is undefined (=NaN) in flat regions), with the following ranges representing the following shapes:
Ranges of the shape index and corresponding shapes.
Interval (s in …) Shape
[ -1, -7/8) Spherical cup
[-7/8, -5/8) Trough
[-5/8, -3/8) Rut
[-3/8, -1/8) Saddle rut
[-1/8, +1/8) Saddle
[+1/8, +3/8) Saddle ridge
[+3/8, +5/8) Ridge
[+5/8, +7/8) Dome
[+7/8, +1] Spherical cap
Parameters
image : ndarray
Input image.
sigma : float, optional
Standard deviation of the Gaussian kernel used for smoothing the input data before Hessian eigenvalue calculation.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries.
Returns
s : ndarray
Shape index.
References
1
Koenderink, J. J. & van Doorn, A. J., “Surface shape and curvature scales”, Image and Vision Computing, 1992, 10, 557-564. DOI:10.1016/0262-8856(92)90076-F
Examples
>>> from skimage.feature import shape_index
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 4
>>> s = shape_index(square, sigma=0.1)
>>> s
array([[ nan, nan, -0.5, nan, nan],
[ nan, -0. , nan, -0. , nan],
[-0.5, nan, -1. , nan, -0.5],
[ nan, -0. , nan, -0. , nan],
[ nan, nan, -0.5, nan, nan]]) | skimage.api.skimage.feature#skimage.feature.shape_index |
skimage.feature.structure_tensor(image, sigma=1, mode='constant', cval=0, order=None) [source]
Compute the structure tensor using the sum of squared differences. The (2-dimensional) structure tensor A is defined as: A = [Arr Arc]
    [Arc Acc]
which is approximated by the weighted sum of squared differences in a local window around each pixel in the image. This formula can be extended to a larger number of dimensions (see [1]).
Parameters
image : ndarray
Input image.
sigma : float, optional
Standard deviation of the Gaussian kernel, which is used as a weighting function for the local summation of squared differences.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries.
order : {‘rc’, ‘xy’}, optional
NOTE: Only applies in 2D. Higher dimensions must always use ‘rc’ order. This parameter allows for the use of reverse or forward order of the image axes in gradient computation. ‘rc’ indicates use of the first axis initially (Arr, Arc, Acc), whilst ‘xy’ indicates use of the last axis initially (Axx, Axy, Ayy).
Returns
A_elems : list of ndarray
Upper-diagonal elements of the structure tensor for each pixel in the input image.
See also
structure_tensor_eigenvalues
References
1
https://en.wikipedia.org/wiki/Structure_tensor
Examples
>>> from skimage.feature import structure_tensor
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 1
>>> Arr, Arc, Acc = structure_tensor(square, sigma=0.1, order='rc')
>>> Acc
array([[0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0.],
[0., 4., 0., 4., 0.],
[0., 1., 0., 1., 0.],
[0., 0., 0., 0., 0.]]) | skimage.api.skimage.feature#skimage.feature.structure_tensor |
skimage.feature.structure_tensor_eigenvalues(A_elems) [source]
Compute eigenvalues of the structure tensor.
Parameters
A_elems : list of ndarray
The upper-diagonal elements of the structure tensor, as returned by structure_tensor.
Returns
ndarray
The eigenvalues of the structure tensor, in decreasing order. The eigenvalues are the leading dimension. That is, the coordinate [i, j, k] corresponds to the ith-largest eigenvalue at position (j, k).
See also
structure_tensor
Examples
>>> from skimage.feature import structure_tensor
>>> from skimage.feature import structure_tensor_eigenvalues
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 1
>>> A_elems = structure_tensor(square, sigma=0.1, order='rc')
>>> structure_tensor_eigenvalues(A_elems)[0]
array([[0., 0., 0., 0., 0.],
[0., 2., 4., 2., 0.],
[0., 4., 0., 4., 0.],
[0., 2., 4., 2., 0.],
[0., 0., 0., 0., 0.]])
skimage.feature.structure_tensor_eigvals(Axx, Axy, Ayy) [source]
Compute eigenvalues of structure tensor. Parameters
Axx : ndarray
Element of the structure tensor for each pixel in the input image.
Axy : ndarray
Element of the structure tensor for each pixel in the input image.
Ayy : ndarray
Element of the structure tensor for each pixel in the input image. Returns
l1 : ndarray
Larger eigenvalue for each input matrix.
l2 : ndarray
Smaller eigenvalue for each input matrix. Examples >>> from skimage.feature import structure_tensor, structure_tensor_eigvals
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 1
>>> Arr, Arc, Acc = structure_tensor(square, sigma=0.1, order='rc')
>>> structure_tensor_eigvals(Acc, Arc, Arr)[0]
array([[0., 0., 0., 0., 0.],
[0., 2., 4., 2., 0.],
[0., 4., 0., 4., 0.],
[0., 2., 4., 2., 0.],
[0., 0., 0., 0., 0.]])
Module: filters
skimage.filters.apply_hysteresis_threshold(…) Apply hysteresis thresholding to image.
skimage.filters.correlate_sparse(image, kernel) Compute valid cross-correlation of padded_array and kernel.
skimage.filters.difference_of_gaussians(…) Find features between low_sigma and high_sigma in size.
skimage.filters.farid(image, *[, mask]) Find the edge magnitude using the Farid transform.
skimage.filters.farid_h(image, *[, mask]) Find the horizontal edges of an image using the Farid transform.
skimage.filters.farid_v(image, *[, mask]) Find the vertical edges of an image using the Farid transform.
skimage.filters.frangi(image[, sigmas, …]) Filter an image with the Frangi vesselness filter.
skimage.filters.gabor(image, frequency[, …]) Return real and imaginary responses to Gabor filter.
skimage.filters.gabor_kernel(frequency[, …]) Return complex 2D Gabor filter kernel.
skimage.filters.gaussian(image[, sigma, …]) Multi-dimensional Gaussian filter.
skimage.filters.hessian(image[, sigmas, …]) Filter an image with the Hybrid Hessian filter.
skimage.filters.inverse(data[, …]) Apply the filter in reverse to the given data.
skimage.filters.laplace(image[, ksize, mask]) Find the edges of an image using the Laplace operator.
skimage.filters.median(image[, selem, out, …]) Return local median of an image.
skimage.filters.meijering(image[, sigmas, …]) Filter an image with the Meijering neuriteness filter.
skimage.filters.prewitt(image[, mask, axis, …]) Find the edge magnitude using the Prewitt transform.
skimage.filters.prewitt_h(image[, mask]) Find the horizontal edges of an image using the Prewitt transform.
skimage.filters.prewitt_v(image[, mask]) Find the vertical edges of an image using the Prewitt transform.
skimage.filters.rank_order(image) Return an image of the same shape where each pixel is the index of the pixel value in the ascending order of the unique values of image, aka the rank-order value.
skimage.filters.roberts(image[, mask]) Find the edge magnitude using Roberts’ cross operator.
skimage.filters.roberts_neg_diag(image[, mask]) Find the cross edges of an image using the Roberts’ Cross operator.
skimage.filters.roberts_pos_diag(image[, mask]) Find the cross edges of an image using Roberts’ cross operator.
skimage.filters.sato(image[, sigmas, …]) Filter an image with the Sato tubeness filter.
skimage.filters.scharr(image[, mask, axis, …]) Find the edge magnitude using the Scharr transform.
skimage.filters.scharr_h(image[, mask]) Find the horizontal edges of an image using the Scharr transform.
skimage.filters.scharr_v(image[, mask]) Find the vertical edges of an image using the Scharr transform.
skimage.filters.sobel(image[, mask, axis, …]) Find edges in an image using the Sobel filter.
skimage.filters.sobel_h(image[, mask]) Find the horizontal edges of an image using the Sobel transform.
skimage.filters.sobel_v(image[, mask]) Find the vertical edges of an image using the Sobel transform.
skimage.filters.threshold_isodata([image, …]) Return threshold value(s) based on ISODATA method.
skimage.filters.threshold_li(image, *[, …]) Compute threshold value by Li’s iterative Minimum Cross Entropy method.
skimage.filters.threshold_local(image, …) Compute a threshold mask image based on local pixel neighborhood.
skimage.filters.threshold_mean(image) Return threshold value based on the mean of grayscale values.
skimage.filters.threshold_minimum([image, …]) Return threshold value based on minimum method.
skimage.filters.threshold_multiotsu(image[, …]) Generate classes-1 threshold values to divide gray levels in image.
skimage.filters.threshold_niblack(image[, …]) Applies Niblack local threshold to an array.
skimage.filters.threshold_otsu([image, …]) Return threshold value based on Otsu’s method.
skimage.filters.threshold_sauvola(image[, …]) Applies Sauvola local threshold to an array.
skimage.filters.threshold_triangle(image[, …]) Return threshold value based on the triangle algorithm.
skimage.filters.threshold_yen([image, …]) Return threshold value based on Yen’s method.
skimage.filters.try_all_threshold(image[, …]) Returns a figure comparing the outputs of different thresholding methods.
skimage.filters.unsharp_mask(image[, …]) Unsharp masking filter.
skimage.filters.wiener(data[, …]) Minimum Mean Square Error (Wiener) inverse filter.
skimage.filters.window(window_type, shape[, …]) Return an n-dimensional window of a given size and dimensionality.
skimage.filters.LPIFilter2D(…) Linear Position-Invariant Filter (2-dimensional)
skimage.filters.rank
apply_hysteresis_threshold
skimage.filters.apply_hysteresis_threshold(image, low, high) [source]
Apply hysteresis thresholding to image. This algorithm finds regions where image is greater than high OR image is greater than low and that region is connected to a region greater than high. Parameters
image : array, shape (M,[ N, …, P])
Grayscale input image.
low : float, or array of same shape as image
Lower threshold.
high : float, or array of same shape as image
Higher threshold. Returns
thresholded : array of bool, same shape as image
Array in which True indicates the locations where image was above the hysteresis threshold. References
1
J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986; vol. 8, pp.679-698. DOI:10.1109/TPAMI.1986.4767851 Examples >>> image = np.array([1, 2, 3, 2, 1, 2, 1, 3, 2])
>>> apply_hysteresis_threshold(image, 1.5, 2.5).astype(int)
array([0, 1, 1, 1, 0, 0, 0, 1, 1])
correlate_sparse
skimage.filters.correlate_sparse(image, kernel, mode='reflect') [source]
Compute valid cross-correlation of padded_array and kernel. This function is fast when kernel is large with many zeros. See scipy.ndimage.correlate for a description of cross-correlation. Parameters
image : ndarray, dtype float, shape (M, N,[ …,] P)
The input array. If mode is ‘valid’, this array should already be padded, as a margin of the same shape as kernel will be stripped off.
kernel : ndarray, dtype float, shape (Q, R,[ …,] S)
The kernel to be correlated. Must have the same number of dimensions as padded_array. For high performance, it should be sparse (few nonzero entries).
mode : string, optional
See scipy.ndimage.correlate for valid modes. Additionally, mode ‘valid’ is accepted, in which case no padding is applied and the result is the result for the smaller image for which the kernel is entirely inside the original data. Returns
result : array of float, shape (M, N,[ …,] P)
The result of cross-correlating image with kernel. If mode ‘valid’ is used, the resulting shape is (M-Q+1, N-R+1,[ …,] P-S+1).
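This entry has no Examples section; below is a minimal sketch (the image and kernel values are arbitrary illustrations, not part of the API). A one-hot kernel makes the expected output easy to verify, since correlating with it reduces to the identity:

```python
import numpy as np
from skimage.filters import correlate_sparse

image = np.arange(36, dtype=float).reshape(6, 6)

# A sparse kernel: mostly zeros, here with a single nonzero tap at the
# center, so correlation simply reproduces the input.
kernel = np.zeros((3, 3))
kernel[1, 1] = 1.0

result = correlate_sparse(image, kernel, mode='reflect')
```

With mode='reflect' the output has the same shape as image; here it equals image exactly because only the central kernel tap is nonzero.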
difference_of_gaussians
skimage.filters.difference_of_gaussians(image, low_sigma, high_sigma=None, *, mode='nearest', cval=0, multichannel=False, truncate=4.0) [source]
Find features between low_sigma and high_sigma in size. This function uses the Difference of Gaussians method for applying band-pass filters to multi-dimensional arrays. The input array is blurred with two Gaussian kernels of differing sigmas to produce two intermediate, filtered images. The more-blurred image is then subtracted from the less-blurred image. The final output image will therefore have had high-frequency components attenuated by the smaller-sigma Gaussian, and low frequency components will have been removed due to their presence in the more-blurred intermediate. Parameters
image : ndarray
Input array to filter.
low_sigma : scalar or sequence of scalars
Standard deviation(s) for the Gaussian kernel with the smaller sigmas across all axes. The standard deviations are given for each axis as a sequence, or as a single number, in which case the single number is used as the standard deviation value for all axes.
high_sigma : scalar or sequence of scalars, optional (default is None)
Standard deviation(s) for the Gaussian kernel with the larger sigmas across all axes. The standard deviations are given for each axis as a sequence, or as a single number, in which case the single number is used as the standard deviation value for all axes. If None is given (default), sigmas for all axes are calculated as 1.6 * low_sigma.
mode : {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
cval : scalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
multichannel : bool, optional (default: False)
Whether the last axis of the image is to be interpreted as multiple channels. If True, each channel is filtered separately (channels are not mixed together).
truncate : float, optional (default is 4.0)
Truncate the filter at this many standard deviations. Returns
filtered_image : ndarray
The filtered array. See also
skimage.feature.blob_dog
Notes This function will subtract an array filtered with a Gaussian kernel with sigmas given by high_sigma from an array filtered with a Gaussian kernel with sigmas provided by low_sigma. The values for high_sigma must always be greater than or equal to the corresponding values in low_sigma, or a ValueError will be raised. When high_sigma is None, the values for high_sigma will be calculated as 1.6x the corresponding values in low_sigma. This ratio was originally proposed by Marr and Hildreth (1980) [1] and is commonly used when approximating the inverted Laplacian of Gaussian, which is used in edge and blob detection. Input image is converted according to the conventions of img_as_float. Except for sigma values, all parameters are used for both filters. References
1
Marr, D. and Hildreth, E. Theory of Edge Detection. Proc. R. Soc. Lond. Series B 207, 187-217 (1980). https://doi.org/10.1098/rspb.1980.0020 Examples Apply a simple Difference of Gaussians filter to a color image: >>> from skimage.data import astronaut
>>> from skimage.filters import difference_of_gaussians
>>> filtered_image = difference_of_gaussians(astronaut(), 2, 10,
... multichannel=True)
Apply a Laplacian of Gaussian filter as approximated by the Difference of Gaussians filter: >>> filtered_image = difference_of_gaussians(astronaut(), 2,
... multichannel=True)
Apply a Difference of Gaussians filter to a grayscale image using different sigma values for each axis: >>> from skimage.data import camera
>>> filtered_image = difference_of_gaussians(camera(), (2,5), (3,20))
farid
skimage.filters.farid(image, *, mask=None) [source]
Find the edge magnitude using the Farid transform. Parameters
image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output : 2-D array
The Farid edge map. See also
sobel, prewitt, canny
Notes Take the square root of the sum of the squares of the horizontal and vertical derivatives to get a magnitude that is somewhat insensitive to direction. Similar to the Scharr operator, this operator is designed with a rotation invariance constraint. References
1
Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819
2
Wikipedia, “Farid and Simoncelli Derivatives.” Available at: <https://en.wikipedia.org/wiki/Image_derivatives#Farid_and_Simoncelli_Derivatives> Examples >>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.farid(camera)
farid_h
skimage.filters.farid_h(image, *, mask=None) [source]
Find the horizontal edges of an image using the Farid transform. Parameters
image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output : 2-D array
The Farid edge map. Notes The kernel was constructed using the 5-tap weights from [1]. References
1
Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819
2
Farid, H. and Simoncelli, E. P. “Optimally rotation-equivariant directional derivative kernels”, In: 7th International Conference on Computer Analysis of Images and Patterns, Kiel, Germany. Sep, 1997.
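No example is given for farid_h; a small sketch on a synthetic image (the test array is an arbitrary illustration) shows where the horizontal-edge response appears:

```python
import numpy as np
from skimage.filters import farid_h

# Synthetic image with one horizontal step edge between rows 3 and 4.
image = np.zeros((8, 8))
image[4:, :] = 1.0

edges = farid_h(image)
# The response magnitude is largest near the step and (near) zero in
# the flat regions away from it.
```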
farid_v
skimage.filters.farid_v(image, *, mask=None) [source]
Find the vertical edges of an image using the Farid transform. Parameters
image : 2-D array
Image to process.
mask : 2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output : 2-D array
The Farid edge map. Notes The kernel was constructed using the 5-tap weights from [1]. References
1
Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819
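Analogously to farid_h, a small sketch for farid_v on a synthetic image (an arbitrary illustration):

```python
import numpy as np
from skimage.filters import farid_v

# Synthetic image with one vertical step edge between columns 3 and 4.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

edges = farid_v(image)
# The response magnitude is largest near the step and (near) zero in
# the flat regions away from it.
```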
frangi
skimage.filters.frangi(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True, mode='reflect', cval=0) [source]
Filter an image with the Frangi vesselness filter. This filter can be used to detect continuous ridges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to vessels, according to the method described in [1]. Parameters
image : (N, M[, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range : 2-tuple of floats, optional
The range of sigmas used.
scale_step : float, optional
Step size between sigmas.
alpha : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
beta : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gamma : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out : (N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
sato
hessian
Notes Written by Marc Schrijver, November 2001. Re-written by D. J. Kroon, University of Twente, May 2009 [2]. Adaptation of the 3D version from D. G. Ellis, January 2017 [3]. References
1
Frangi, A. F., Niessen, W. J., Vincken, K. L., & Viergever, M. A. (1998). Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 130-137). Springer Berlin Heidelberg. DOI:10.1007/BFb0056195
2
Kroon, D. J.: Hessian based Frangi vesselness filter.
3
Ellis, D. G.: https://github.com/ellisdg/frangi3d/tree/master/frangi
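No example accompanies this entry; a minimal sketch (the synthetic image and sigmas are arbitrary choices for illustration) of detecting a dark ridge with the default black_ridges=True:

```python
import numpy as np
from skimage.filters import frangi

# Bright background with a single dark vertical line (a "vessel").
image = np.ones((32, 32))
image[:, 16] = 0.0

# Small sigmas to roughly match the one-pixel-wide ridge.
vesselness = frangi(image, sigmas=[1, 2])
# The vesselness response peaks along the dark line and is near zero
# in the flat background.
```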
gabor
skimage.filters.gabor(image, frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0, mode='reflect', cval=0) [source]
Return real and imaginary responses to Gabor filter. The real and imaginary parts of the Gabor filter kernel are applied to the image and the response is returned as a pair of arrays. Gabor filter is a linear filter with a Gaussian kernel which is modulated by a sinusoidal plane wave. Frequency and orientation representations of the Gabor filter are similar to those of the human visual system. Gabor filter banks are commonly used in computer vision and image processing. They are especially suitable for edge detection and texture classification. Parameters
image : 2-D array
Input image.
frequency : float
Spatial frequency of the harmonic function. Specified in pixels.
theta : float, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidth : float, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_y : float, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stds : scalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations.
offset : float, optional
Phase offset of harmonic function in radians.
mode : {‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional
Mode used to convolve image with a kernel, passed to ndi.convolve.
cval : scalar, optional
Value to fill past edges of input if mode of convolution is ‘constant’. The parameter is passed to ndi.convolve. Returns
real, imag : arrays
Filtered images using the real and imaginary parts of the Gabor filter kernel. Images are of the same dimensions as the input one. References
1
https://en.wikipedia.org/wiki/Gabor_filter
2
https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf Examples >>> from skimage.filters import gabor
>>> from skimage import data, io
>>> from matplotlib import pyplot as plt
>>> image = data.coins()
>>> # detecting edges in a coin image
>>> filt_real, filt_imag = gabor(image, frequency=0.6)
>>> plt.figure()
>>> io.imshow(filt_real)
>>> io.show()
>>> # less sensitivity to finer details with the lower frequency kernel
>>> filt_real, filt_imag = gabor(image, frequency=0.1)
>>> plt.figure()
>>> io.imshow(filt_real)
>>> io.show()
gabor_kernel
skimage.filters.gabor_kernel(frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0) [source]
Return complex 2D Gabor filter kernel. Gabor kernel is a Gaussian kernel modulated by a complex harmonic function. Harmonic function consists of an imaginary sine function and a real cosine function. Spatial frequency is inversely proportional to the wavelength of the harmonic and to the standard deviation of a Gaussian kernel. The bandwidth is also inversely proportional to the standard deviation. Parameters
frequency : float
Spatial frequency of the harmonic function. Specified in pixels.
theta : float, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidth : float, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_y : float, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stds : scalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations.
offset : float, optional
Phase offset of harmonic function in radians. Returns
g : complex array
Complex filter kernel. References
1
https://en.wikipedia.org/wiki/Gabor_filter
2
https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf Examples >>> from skimage.filters import gabor_kernel
>>> from skimage import io
>>> from matplotlib import pyplot as plt
>>> gk = gabor_kernel(frequency=0.2)
>>> plt.figure()
>>> io.imshow(gk.real)
>>> io.show()
>>> # more ripples (equivalent to increasing the size of the
>>> # Gaussian spread)
>>> gk = gabor_kernel(frequency=0.2, bandwidth=0.1)
>>> plt.figure()
>>> io.imshow(gk.real)
>>> io.show()
gaussian
skimage.filters.gaussian(image, sigma=1, output=None, mode='nearest', cval=0, multichannel=None, preserve_range=False, truncate=4.0) [source]
Multi-dimensional Gaussian filter. Parameters
image : array-like
Input image (grayscale or color) to filter.
sigma : scalar or sequence of scalars, optional
Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
output : array, optional
The output parameter passes an array in which to store the filter output.
mode : {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
cval : scalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
multichannel : bool, optional (default: None)
Whether the last axis of the image is to be interpreted as multiple channels. If True, each channel is filtered separately (channels are not mixed together). Only 3 channels are supported. If None, the function will attempt to guess this, and raise a warning if ambiguous, when the array has shape (M, N, 3).
preserve_range : bool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html
truncate : float, optional
Truncate the filter at this many standard deviations. Returns
filtered_image : ndarray
The filtered array. Notes This function is a wrapper around scipy.ndimage.gaussian_filter(). Integer arrays are converted to float. The output should be of floating-point data type, since gaussian converts the provided image to float. If output is not provided, another array will be allocated and returned as the result. The multi-dimensional filter is implemented as a sequence of one-dimensional convolution filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision. Examples >>> a = np.zeros((3, 3))
>>> a[1, 1] = 1
>>> a
array([[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.]])
>>> gaussian(a, sigma=0.4) # mild smoothing
array([[0.00163116, 0.03712502, 0.00163116],
[0.03712502, 0.84496158, 0.03712502],
[0.00163116, 0.03712502, 0.00163116]])
>>> gaussian(a, sigma=1) # more smoothing
array([[0.05855018, 0.09653293, 0.05855018],
[0.09653293, 0.15915589, 0.09653293],
[0.05855018, 0.09653293, 0.05855018]])
>>> # Several modes are possible for handling boundaries
>>> gaussian(a, sigma=1, mode='reflect')
array([[0.08767308, 0.12075024, 0.08767308],
[0.12075024, 0.16630671, 0.12075024],
[0.08767308, 0.12075024, 0.08767308]])
>>> # For RGB images, each is filtered separately
>>> from skimage.data import astronaut
>>> image = astronaut()
>>> filtered_img = gaussian(image, sigma=1, multichannel=True)
hessian
skimage.filters.hessian(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True, mode=None, cval=0) [source]
Filter an image with the Hybrid Hessian filter. This filter can be used to detect continuous edges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Almost equal to Frangi filter, but uses alternative method of smoothing. Refer to [1] to find the differences between Frangi and Hessian filters. Parameters
image : (N, M[, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range : 2-tuple of floats, optional
The range of sigmas used.
scale_step : float, optional
Step size between sigmas.
alpha : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
beta : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gamma : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out : (N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
sato
frangi
Notes Written by Marc Schrijver (November 2001). Re-written by D. J. Kroon, University of Twente (May 2009) [2]. References
1
Ng, C. C., Yap, M. H., Costen, N., & Li, B. (2014). Automatic wrinkle detection using hybrid Hessian filter. In Asian Conference on Computer Vision (pp. 609-622). Springer International Publishing. DOI:10.1007/978-3-319-16811-1_40
2
Kroon, D. J.: Hessian based Frangi vesselness filter.
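As with frangi, no example is given here; the same kind of synthetic dark-line image (an arbitrary illustration) can be used:

```python
import numpy as np
from skimage.filters import hessian

# Bright background with a dark vertical line; with the default
# black_ridges=True the hybrid Hessian filter highlights the ridge.
image = np.ones((32, 32))
image[:, 16] = 0.0

out = hessian(image, sigmas=[1, 2])
# out has the same shape as image, with the response taken as the
# maximum over the given scales.
```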
inverse
skimage.filters.inverse(data, impulse_response=None, filter_params={}, max_gain=2, predefined_filter=None) [source]
Apply the filter in reverse to the given data. Parameters
data : (M, N) ndarray
Input data.
impulse_response : callable f(r, c, **filter_params)
Impulse response of the filter. See LPIFilter2D.__init__.
filter_params : dict
Additional keyword parameters to the impulse_response function.
max_gain : float
Limit the filter gain. Often, the filter contains zeros, which would cause the inverse filter to have infinite gain. High gain causes amplification of artefacts, so a conservative limit is recommended. Other Parameters
predefined_filter : LPIFilter2D
If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.
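This entry has no example; below is a hedged round-trip sketch. The Gaussian-like impulse response and the random test image are arbitrary illustrations; the LPIFilter2D instance is applied directly (it is callable) to produce the forward-filtered data:

```python
import numpy as np
from skimage.filters import LPIFilter2D, inverse

def impulse_response(r, c):
    # An arbitrary smooth low-pass impulse response, for illustration.
    return np.exp(-np.hypot(r, c))

rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Build the forward filter once so it can be reused, blur the image,
# then invert with a conservative gain limit (max_gain=2) to avoid
# amplifying frequencies where the filter response is near zero.
filt = LPIFilter2D(impulse_response)
blurred = filt(image)
restored = inverse(blurred, predefined_filter=filt, max_gain=2)
```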
laplace
skimage.filters.laplace(image, ksize=3, mask=None) [source]
Find the edges of an image using the Laplace operator. Parameters
image : ndarray
Image to process.
ksize : int, optional
Define the size of the discrete Laplacian operator such that it will have a size of (ksize,) * image.ndim.
mask : ndarray, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output : ndarray
The Laplace edge map. Notes The Laplacian operator is generated using the function skimage.restoration.uft.laplacian().
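A short sketch (synthetic arrays chosen for illustration): the Laplace response of a constant image is zero everywhere, while an isolated bright pixel yields a strong local response:

```python
import numpy as np
from skimage.filters import laplace

flat = np.ones((5, 5))
spot = np.zeros((5, 5))
spot[2, 2] = 1.0  # a single bright pixel

flat_edges = laplace(flat)  # zero everywhere: no intensity change
spot_edges = laplace(spot)  # strongest response at the bright pixel
```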
median
skimage.filters.median(image, selem=None, out=None, mode='nearest', cval=0.0, behavior='ndimage') [source]
Return local median of an image. Parameters
image : array-like
Input image.
selem : ndarray, optional
If behavior=='rank', selem is a 2-D array of 1’s and 0’s. If behavior=='ndimage', selem is an N-D array of 1’s and 0’s with the same number of dimensions as image. If None, selem will be an N-D array with 3 elements for each dimension (e.g., vector, square, cube, etc.)
out : ndarray, (same dtype as image), optional
If None, a new array is allocated.
mode : {‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’. New in version 0.15: mode is used when behavior='ndimage'.
cval : scalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0. New in version 0.15: cval was added in 0.15 and is used when behavior='ndimage'.
behavior : {‘ndimage’, ‘rank’}, optional
Either to use the old behavior (i.e., < 0.15) or the new behavior. The old behavior will call skimage.filters.rank.median(). The new behavior will call scipy.ndimage.median_filter(). Default is ‘ndimage’. New in version 0.15: behavior is introduced in 0.15. Changed in version 0.16: Default behavior has been changed from ‘rank’ to ‘ndimage’. Returns
out : 2-D array (same dtype as input image)
Output image. See also
skimage.filters.rank.median
Rank-based implementation of the median filtering offering more flexibility with additional parameters but dedicated for unsigned integer images. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters import median
>>> img = data.camera()
>>> med = median(img, disk(5))
meijering
skimage.filters.meijering(image, sigmas=range(1, 10, 2), alpha=None, black_ridges=True, mode='reflect', cval=0) [source]
Filter an image with the Meijering neuriteness filter. This filter can be used to detect continuous ridges, e.g. neurites, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to neurites, according to the method described in [1]. Parameters
image : (N, M[, …, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter
alpha : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out : (N, M[, …, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
sato
frangi
hessian
References
1
Meijering, E., Jacob, M., Sarria, J. C., Steiner, P., Hirling, H., Unser, M. (2004). Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A, 58(2), 167-176. DOI:10.1002/cyto.a.20022
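No example accompanies this entry; a minimal sketch (synthetic image and sigmas chosen arbitrarily for illustration) mirroring the frangi usage:

```python
import numpy as np
from skimage.filters import meijering

# Bright background with a dark horizontal line (a "neurite"); with
# the default black_ridges=True the response peaks along the line.
image = np.ones((32, 32))
image[16, :] = 0.0

neuriteness = meijering(image, sigmas=[1, 2])
```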
prewitt
skimage.filters.prewitt(image, mask=None, *, axis=None, mode='reflect', cval=0.0) [source]
Find the edge magnitude using the Prewitt transform. Parameters
image : array
The input image.
mask : array of bool, optional
Clip the output image to this mask. (Values where mask=0 will be set to 0.)
axis : int or sequence of int, optional
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as: prw_mag = np.sqrt(sum([prewitt(image, axis=i)**2
                       for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence.
mode : str or sequence of str, optional
The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
cval : float, optional
When mode is 'constant', this is the constant used in values outside the boundary of the image data. Returns
output : array of float
The Prewitt edge map. See also
sobel, scharr
Notes The edge magnitude depends slightly on edge directions, since the approximation of the gradient operator by the Prewitt operator is not completely rotation invariant. For a better rotation invariance, the Scharr operator should be used. The Sobel operator has a better rotation invariance than the Prewitt operator, but a worse rotation invariance than the Scharr operator. Examples >>> from skimage import data
>>> from skimage import filters
>>> camera = data.camera()
>>> edges = filters.prewitt(camera)
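The magnitude formula from the axis parameter above can be checked with plain NumPy. The sketch below hand-rolls a 'valid'-mode correlation (no boundary handling, unlike the real filter) using the Prewitt kernels documented under prewitt_h/prewitt_v; it is an illustration of the combination rule, not the library's implementation.

```python
import numpy as np

def correlate2d(img, kernel):
    # Minimal 'valid'-mode correlation; the real filter also handles borders.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

kernel_h = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]]) / 3.0  # prewitt_h
kernel_v = kernel_h.T                                            # prewitt_v

img = np.zeros((5, 5))
img[:, 2:] = 1.0  # vertical step edge

gh = correlate2d(img, kernel_h)  # zero: the image has no horizontal edges
gv = correlate2d(img, kernel_v)
# Same combination as the docstring's prw_mag formula, for the 2-D case:
magnitude = np.sqrt((gh**2 + gv**2) / 2)
```

The vertical step produces a response only in `gv`, and the magnitude on the edge columns is `|gv| / sqrt(2)` as the formula predicts.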
prewitt_h
skimage.filters.prewitt_h(image, mask=None) [source]
Find the horizontal edges of an image using the Prewitt transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Prewitt edge map. Notes We use the following kernel: 1/3 1/3 1/3
0 0 0
-1/3 -1/3 -1/3
prewitt_v
skimage.filters.prewitt_v(image, mask=None) [source]
Find the vertical edges of an image using the Prewitt transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Prewitt edge map. Notes We use the following kernel: 1/3 0 -1/3
1/3 0 -1/3
1/3 0 -1/3
rank_order
skimage.filters.rank_order(image) [source]
Return an image of the same shape where each pixel is the index of its value in the ascending order of the unique values of image, i.e., its rank-order value. Parameters
imagendarray
Returns
labelsndarray of type np.uint32, of shape image.shape
New array where each pixel has the rank-order value of the corresponding pixel in image. Pixel values are between 0 and n - 1, where n is the number of distinct unique values in image.
original_values1-D ndarray
Unique original values of image Examples >>> a = np.array([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
>>> a
array([[1, 4, 5],
[4, 4, 1],
[5, 1, 1]])
>>> rank_order(a)
(array([[0, 1, 2],
[1, 1, 0],
[2, 0, 0]], dtype=uint32), array([1, 4, 5]))
>>> b = np.array([-1., 2.5, 3.1, 2.5])
>>> rank_order(b)
(array([0, 1, 2, 1], dtype=uint32), array([-1. , 2.5, 3.1]))
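For reference, the same transform can be reproduced with plain NumPy, since np.unique with return_inverse=True maps each pixel to the index of its value among the sorted unique values:

```python
import numpy as np

a = np.array([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
# return_inverse gives, for every element, the index of its value in the
# sorted unique-value array -- precisely the rank-order transform.
original_values, inverse = np.unique(a, return_inverse=True)
labels = inverse.reshape(a.shape).astype(np.uint32)
```

This reproduces the `(labels, original_values)` pair from the doctest above.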
roberts
skimage.filters.roberts(image, mask=None) [source]
Find the edge magnitude using Roberts’ cross operator. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Roberts’ Cross edge map. See also
sobel, scharr, prewitt, feature.canny
Examples >>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.roberts(camera)
roberts_neg_diag
skimage.filters.roberts_neg_diag(image, mask=None) [source]
Find the cross edges of an image using the Roberts’ Cross operator. The kernel is applied to the input image to produce separate measurements of the gradient component in one orientation. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Robert’s edge map. Notes We use the following kernel: 0 1
-1 0
roberts_pos_diag
skimage.filters.roberts_pos_diag(image, mask=None) [source]
Find the cross edges of an image using Roberts’ cross operator. The kernel is applied to the input image to produce separate measurements of the gradient component in one orientation. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Robert’s edge map. Notes We use the following kernel: 1 0
0 -1
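Combining the two 2x2 diagonal kernels gives the Roberts cross magnitude. A minimal NumPy sketch on interior pixels only (no boundary handling, and up to the library's normalization):

```python
import numpy as np

img = np.zeros((4, 4))
img[2:, :] = 1.0  # horizontal step edge between rows 1 and 2

# The 2x2 kernels reduce to differences of diagonal neighbors:
pos = img[:-1, :-1] - img[1:, 1:]   # roberts_pos_diag response
neg = img[:-1, 1:] - img[1:, :-1]   # roberts_neg_diag response
magnitude = np.sqrt(pos**2 + neg**2)
```

On the step row both diagonal differences equal -1, so the magnitude there is sqrt(2); it is zero everywhere else.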
sato
skimage.filters.sato(image, sigmas=range(1, 10, 2), black_ridges=True, mode=None, cval=0) [source]
Filter an image with the Sato tubeness filter. This filter can be used to detect continuous ridges, e.g. tubes, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to tubes, according to the method described in [1]. Parameters
image(N, M[, P]) ndarray
Array with input image data.
sigmasiterable of floats, optional
Sigmas used as scales of filter.
black_ridgesboolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out(N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
frangi
hessian
References
1
Sato, Y., Nakajima, S., Shiraga, N., Atsumi, H., Yoshida, S., Koller, T., …, Kikinis, R. (1998). Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical image analysis, 2(2), 143-168. DOI:10.1016/S1361-8415(98)80009-1
scharr
skimage.filters.scharr(image, mask=None, *, axis=None, mode='reflect', cval=0.0) [source]
Find the edge magnitude using the Scharr transform. Parameters
imagearray
The input image.
maskarray of bool, optional
Clip the output image to this mask. (Values where mask=0 will be set to 0.)
axisint or sequence of int, optional
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as: sch_mag = np.sqrt(sum([scharr(image, axis=i)**2
for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence.
modestr or sequence of str, optional
The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
cvalfloat, optional
When mode is 'constant', this is the constant used in values outside the boundary of the image data. Returns
outputarray of float
The Scharr edge map. See also
sobel, prewitt, canny
Notes The Scharr operator has a better rotation invariance than other edge filters such as the Sobel or the Prewitt operators. References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
2
https://en.wikipedia.org/wiki/Sobel_operator#Alternative_operators Examples >>> from skimage import data
>>> from skimage import filters
>>> camera = data.camera()
>>> edges = filters.scharr(camera)
scharr_h
skimage.filters.scharr_h(image, mask=None) [source]
Find the horizontal edges of an image using the Scharr transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Scharr edge map. Notes We use the following kernel: 3 10 3
0 0 0
-3 -10 -3
References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
scharr_v
skimage.filters.scharr_v(image, mask=None) [source]
Find the vertical edges of an image using the Scharr transform. Parameters
image2-D array
Image to process
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Scharr edge map. Notes We use the following kernel: 3 0 -3
10 0 -10
3 0 -3
References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
sobel
skimage.filters.sobel(image, mask=None, *, axis=None, mode='reflect', cval=0.0) [source]
Find edges in an image using the Sobel filter. Parameters
imagearray
The input image.
maskarray of bool, optional
Clip the output image to this mask. (Values where mask=0 will be set to 0.)
axisint or sequence of int, optional
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as: sobel_mag = np.sqrt(sum([sobel(image, axis=i)**2
for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence.
modestr or sequence of str, optional
The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
cvalfloat, optional
When mode is 'constant', this is the constant used in values outside the boundary of the image data. Returns
outputarray of float
The Sobel edge map. See also
scharr, prewitt, canny
References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
2
https://en.wikipedia.org/wiki/Sobel_operator Examples >>> from skimage import data
>>> from skimage import filters
>>> camera = data.camera()
>>> edges = filters.sobel(camera)
Examples using skimage.filters.sobel
Flood Fill
sobel_h
skimage.filters.sobel_h(image, mask=None) [source]
Find the horizontal edges of an image using the Sobel transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Sobel edge map. Notes We use the following kernel: 1 2 1
0 0 0
-1 -2 -1
sobel_v
skimage.filters.sobel_v(image, mask=None) [source]
Find the vertical edges of an image using the Sobel transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Sobel edge map. Notes We use the following kernel: 1 0 -1
2 0 -2
1 0 -1
threshold_isodata
skimage.filters.threshold_isodata(image=None, nbins=256, return_all=False, *, hist=None) [source]
Return threshold value(s) based on ISODATA method. Histogram-based threshold, known as Ridler-Calvard method or inter-means. Threshold values returned satisfy the following equality: threshold = (image[image <= threshold].mean() +
image[image > threshold].mean()) / 2.0
That is, returned thresholds are intensities that separate the image into two groups of pixels, where the threshold intensity is midway between the mean intensities of these groups. For integer images, the above equality holds to within one; for floating-point images, the equality holds to within the histogram bin-width. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored. Parameters
image(N, M) ndarray, optional
Input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
return_allbool, optional
If False (default), return only the lowest threshold that satisfies the above equality. If True, return all valid thresholds.
histarray, or 2-tuple of arrays, optional
Histogram to determine the threshold from and a corresponding array of bin center intensities. Alternatively, only the histogram can be passed. Returns
thresholdfloat or int or array
Threshold value(s). References
1
Ridler, TW & Calvard, S (1978), “Picture thresholding using an iterative selection method” IEEE Transactions on Systems, Man and Cybernetics 8: 630-632, DOI:10.1109/TSMC.1978.4310039
2
Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf DOI:10.1117/1.1631315
3
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import coins
>>> image = coins()
>>> thresh = threshold_isodata(image)
>>> binary = image > thresh
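The equality above suggests a simple fixed-point iteration on the pixel values themselves. A pure-NumPy sketch on toy bimodal data (the library works on the histogram instead, so results on real images may differ slightly):

```python
import numpy as np

img = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])  # bimodal toy data

t = img.mean()  # initial guess
for _ in range(100):
    # Ridler-Calvard update: midpoint of the two group means
    t_new = (img[img <= t].mean() + img[img > t].mean()) / 2.0
    if abs(t_new - t) < 1e-6:
        break
    t = t_new
# converges to the midpoint of the cluster means: (10 + 200) / 2 = 105
```

At convergence the threshold satisfies the documented equality by construction.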
threshold_li
skimage.filters.threshold_li(image, *, tolerance=None, initial_guess=None, iter_callback=None) [source]
Compute threshold value by Li’s iterative Minimum Cross Entropy method. Parameters
imagendarray
Input image.
tolerancefloat, optional
Finish the computation when the change in the threshold in an iteration is less than this value. By default, this is half the smallest difference between intensity values in image.
initial_guessfloat or Callable[[array[float]], float], optional
Li’s iterative method uses gradient descent to find the optimal threshold. If the image intensity histogram contains more than two modes (peaks), the gradient descent could get stuck in a local optimum. An initial guess for the iteration can help the algorithm find the globally-optimal threshold. A float value defines a specific start point, while a callable should take in an array of image intensities and return a float value. Example valid callables include numpy.mean (default), lambda arr: numpy.quantile(arr, 0.95), or even skimage.filters.threshold_otsu().
iter_callbackCallable[[float], Any], optional
A function that will be called on the threshold at every iteration of the algorithm. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
Li C.H. and Lee C.K. (1993) “Minimum Cross Entropy Thresholding” Pattern Recognition, 26(4): 617-625 DOI:10.1016/0031-3203(93)90115-D
2
Li C.H. and Tam P.K.S. (1998) “An Iterative Algorithm for Minimum Cross Entropy Thresholding” Pattern Recognition Letters, 18(8): 771-776 DOI:10.1016/S0167-8655(98)00057-9
3
Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165 DOI:10.1117/1.1631315
4
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_li(image)
>>> binary = image > thresh
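Li & Tam's one-point iteration (reference [2]) updates the threshold from the background and foreground means. The sketch below shows that update on toy data; it is a simplification of the method, not the library's exact implementation:

```python
import numpy as np

img = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])  # toy bimodal data

t = img.mean()  # numpy.mean is the documented default initial guess
for _ in range(100):
    mean_back = img[img <= t].mean()
    mean_fore = img[img > t].mean()
    # Li & Tam (1998) update for minimum cross entropy thresholding
    t_new = (mean_fore - mean_back) / (np.log(mean_fore) - np.log(mean_back))
    if abs(t_new - t) < 1e-6:
        t = t_new
        break
    t = t_new
```

For these two clusters the iteration settles at 190 / ln(20), between the two means.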
threshold_local
skimage.filters.threshold_local(image, block_size, method='gaussian', offset=0, mode='reflect', param=None, cval=0) [source]
Compute a threshold mask image based on local pixel neighborhood. Also known as adaptive or dynamic thresholding. The threshold value is the weighted mean for the local neighborhood of a pixel subtracted by a constant. Alternatively the threshold can be determined dynamically by a given function, using the ‘generic’ method. Parameters
image(N, M) ndarray
Input image.
block_sizeint
Odd size of pixel neighborhood which is used to calculate the threshold value (e.g. 3, 5, 7, …, 21, …).
method{‘generic’, ‘gaussian’, ‘mean’, ‘median’}, optional
Method used to determine adaptive threshold for local neighbourhood in weighted mean image. ‘generic’: use custom function (see param parameter) ‘gaussian’: apply gaussian filter (see param parameter for custom sigma value) ‘mean’: apply arithmetic mean filter ‘median’: apply median rank filter By default the ‘gaussian’ method is used.
offsetfloat, optional
Constant subtracted from weighted mean of neighborhood to calculate the local threshold value. Default offset is 0.
mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘reflect’.
param{int, function}, optional
Either specify sigma for ‘gaussian’ method or function object for ‘generic’ method. This function takes the flat array of the local neighbourhood as a single argument and returns the calculated threshold for the centre pixel.
cvalfloat, optional
Value to fill past edges of input if mode is ‘constant’. Returns
threshold(N, M) ndarray
Threshold image. All pixels in the input image higher than the corresponding pixel in the threshold image are considered foreground. References
1
https://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#adaptivethreshold Examples >>> from skimage.data import camera
>>> image = camera()[:50, :50]
>>> binary_image1 = image > threshold_local(image, 15, 'mean')
>>> func = lambda arr: arr.mean()
>>> binary_image2 = image > threshold_local(image, 15, 'generic',
... param=func)
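The 'mean' method can be sketched in plain NumPy: the threshold image is the local mean over a block_size window minus offset. The sketch below uses np.pad for reflected boundaries (boundary conventions differ slightly between NumPy and SciPy, so treat this as an approximation):

```python
import numpy as np

def local_mean_threshold(image, block_size, offset=0.0):
    # Sketch of method='mean': threshold = neighborhood mean - offset.
    pad = block_size // 2
    padded = np.pad(image, pad, mode='reflect')
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + block_size, j:j + block_size].mean() - offset
    return out

image = np.full((5, 5), 7.0)
thresh = local_mean_threshold(image, block_size=3, offset=2.0)
# flat image: every local mean is 7, so the threshold is 5 everywhere
```

Pixels of the input brighter than the corresponding threshold pixel are then foreground, as in `image > thresh`.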
threshold_mean
skimage.filters.threshold_mean(image) [source]
Return threshold value based on the mean of grayscale values. Parameters
image(N, M[, …, P]) ndarray
Grayscale input image. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993. DOI:10.1006/cgip.1993.1040 Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_mean(image)
>>> binary = image > thresh
threshold_minimum
skimage.filters.threshold_minimum(image=None, nbins=256, max_iter=10000, *, hist=None) [source]
Return threshold value based on minimum method. The histogram of the input image is computed if not provided and smoothed until there are only two maxima. Then the minimum in between is the threshold value. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored. Parameters
image(M, N) ndarray, optional
Input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
max_iterint, optional
Maximum number of iterations to smooth the histogram.
histarray, or 2-tuple of arrays, optional
Histogram to determine the threshold from and a corresponding array of bin center intensities. Alternatively, only the histogram can be passed. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. Raises
RuntimeError
If unable to find two local maxima in the histogram or if the smoothing takes more than 1e4 iterations. References
1
C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993.
2
Prewitt, JMS & Mendelsohn, ML (1966), “The analysis of cell images”, Annals of the New York Academy of Sciences 128: 1035-1053 DOI:10.1111/j.1749-6632.1965.tb11715.x Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_minimum(image)
>>> binary = image > thresh
threshold_multiotsu
skimage.filters.threshold_multiotsu(image, classes=3, nbins=256) [source]
Generate classes-1 threshold values to divide gray levels in image. The threshold values are chosen to maximize the total sum of pairwise variances between the thresholded graylevel classes. See Notes and [1] for more details. Parameters
image(N, M) ndarray
Grayscale input image.
classesint, optional
Number of classes to be thresholded, i.e. the number of resulting regions.
nbinsint, optional
Number of bins used to calculate the histogram. This value is ignored for integer arrays. Returns
thresharray
Array containing the threshold values for the desired classes. Raises
ValueError
If image contains fewer grayscale values than the desired number of classes. Notes This implementation relies on a Cython function whose complexity is \(O\left(\frac{Ch^{C-1}}{(C-1)!}\right)\), where \(h\) is the number of histogram bins and \(C\) is the number of classes desired. The input image must be grayscale. References
1
Liao, P-S., Chen, T-S. and Chung, P-C., “A fast algorithm for multilevel thresholding”, Journal of Information Science and Engineering 17 (5): 713-727, 2001. Available at: <https://ftp.iis.sinica.edu.tw/JISE/2001/200109_01.pdf> DOI:10.6688/JISE.2001.17.5.1
2
Tosa, Y., “Multi-Otsu Threshold”, a java plugin for ImageJ. Available at: <http://imagej.net/plugins/download/Multi_OtsuThreshold.java> Examples >>> from skimage.color import label2rgb
>>> from skimage import data
>>> image = data.camera()
>>> thresholds = threshold_multiotsu(image)
>>> regions = np.digitize(image, bins=thresholds)
>>> regions_colorized = label2rgb(regions)
Examples using skimage.filters.threshold_multiotsu
Multi-Otsu Thresholding
Segment human cells (in mitosis)
threshold_niblack
skimage.filters.threshold_niblack(image, window_size=15, k=0.2) [source]
Applies Niblack local threshold to an array. A threshold T is calculated for every pixel in the image using the following formula: T = m(x,y) - k * s(x,y)
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. Parameters
imagendarray
Input image.
window_sizeint, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
kfloat, optional
Value of parameter k in threshold formula. Returns
threshold(N, M) ndarray
Threshold mask. All pixels with an intensity higher than the corresponding threshold value are assumed to be foreground. Notes This algorithm is originally designed for text recognition. The Bradley threshold is a particular case of the Niblack one, being equivalent to >>> from skimage import data
>>> image = data.page()
>>> q = 1
>>> threshold_image = threshold_niblack(image, k=0) * q
for some value q. By default, Bradley and Roth use q=1. References
1
W. Niblack, An introduction to Digital Image Processing, Prentice-Hall, 1986.
2
D. Bradley and G. Roth, “Adaptive thresholding using Integral Image”, Journal of Graphics Tools 12(2), pp. 13-21, 2007. DOI:10.1080/2151237X.2007.10129236 Examples >>> from skimage import data
>>> image = data.page()
>>> threshold_image = threshold_niblack(image, window_size=7, k=0.1)
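The formula is easy to check by hand for a single window. The sketch below computes T = m(x,y) - k * s(x,y) for one 3x3 neighborhood; the function performs this computation at every pixel:

```python
import numpy as np

window = np.array([[10., 10., 10.],
                   [10., 200., 10.],
                   [10., 10., 10.]])  # 3x3 neighborhood of the center pixel
k = 0.2
m, s = window.mean(), window.std()  # local mean and standard deviation
T = m - k * s                       # Niblack threshold for the center pixel
# the bright center pixel (200) exceeds T, so it is classified as foreground
```

High local variance pulls the threshold down, which is what makes the method sensitive to isolated bright text on a dark background.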
threshold_otsu
skimage.filters.threshold_otsu(image=None, nbins=256, *, hist=None) [source]
Return threshold value based on Otsu’s method. Either image or hist must be provided. If hist is provided, the actual histogram of the image is ignored. Parameters
image(N, M) ndarray, optional
Grayscale input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
histarray, or 2-tuple of arrays, optional
Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. An alternative use of this function is to pass it only hist. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. Notes The input image must be grayscale. References
1
Wikipedia, https://en.wikipedia.org/wiki/Otsu’s_Method Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_otsu(image)
>>> binary = image <= thresh
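Otsu's criterion itself is a short computation: pick the histogram cut that maximizes the between-class variance of the two resulting groups. A pure-NumPy sketch of the standard algorithm (the library implementation may differ in details such as bin placement):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    # Choose the histogram cut maximizing the between-class variance.
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    weight1 = np.cumsum(hist)                 # pixel count at or below each cut
    weight2 = np.cumsum(hist[::-1])[::-1]     # pixel count above each cut
    mean1 = np.cumsum(hist * centers) / weight1
    mean2 = (np.cumsum((hist * centers)[::-1]) / np.cumsum(hist[::-1]))[::-1]
    between = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return centers[np.argmax(between)]

image = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
t = otsu_threshold(image)  # lands between the two modes
```

On clean bimodal data any cut between the two modes maximizes the criterion, so the returned threshold cleanly separates the groups.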
Examples using skimage.filters.threshold_otsu
Measure region properties
Rank filters
threshold_sauvola
skimage.filters.threshold_sauvola(image, window_size=15, k=0.2, r=None) [source]
Applies Sauvola local threshold to an array. Sauvola is a modification of Niblack technique. In the original method a threshold T is calculated for every pixel in the image using the following formula: T = m(x,y) * (1 + k * ((s(x,y) / R) - 1))
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. R is the maximum standard deviation of a greyscale image. Parameters
imagendarray
Input image.
window_sizeint, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
kfloat, optional
Value of the positive parameter k.
rfloat, optional
Value of R, the dynamic range of standard deviation. If None, set to half of the image dtype range. Returns
threshold(N, M) ndarray
Threshold mask. All pixels with an intensity higher than the corresponding threshold value are assumed to be foreground. Notes This algorithm is originally designed for text recognition. References
1
J. Sauvola and M. Pietikainen, “Adaptive document image binarization,” Pattern Recognition 33(2), pp. 225-236, 2000. DOI:10.1016/S0031-3203(99)00055-2 Examples >>> from skimage import data
>>> image = data.page()
>>> t_sauvola = threshold_sauvola(image, window_size=15, k=0.2)
>>> binary_image = image > t_sauvola
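As with Niblack, the Sauvola formula can be checked for a single window. The sketch below computes T = m(x,y) * (1 + k * ((s(x,y) / R) - 1)) for one 3x3 neighborhood, taking R = 128 as half the range of 8-bit data:

```python
import numpy as np

window = np.array([[10., 10., 10.],
                   [10., 200., 10.],
                   [10., 10., 10.]])  # 3x3 neighborhood of the center pixel
k = 0.2
R = 128.0                            # half the dtype range for uint8 data
m, s = window.mean(), window.std()
T = m * (1 + k * ((s / R) - 1))      # Sauvola threshold for the center pixel
# because s < R, the threshold sits below the plain local mean
```

The multiplicative correction (rather than Niblack's subtractive one) makes the threshold less sensitive to the absolute intensity level.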
threshold_triangle
skimage.filters.threshold_triangle(image, nbins=256) [source]
Return threshold value based on the triangle algorithm. Parameters
image(N, M[, …, P]) ndarray
Grayscale input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
Zack, G. W., Rogers, W. E. and Latt, S. A., 1977, Automatic Measurement of Sister Chromatid Exchange Frequency, Journal of Histochemistry and Cytochemistry 25 (7), pp. 741-753 DOI:10.1177/25.7.70454
2
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_triangle(image)
>>> binary = image > thresh
threshold_yen
skimage.filters.threshold_yen(image=None, nbins=256, *, hist=None) [source]
Return threshold value based on Yen’s method. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored. Parameters
image(N, M) ndarray, optional
Input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
histarray, or 2-tuple of arrays, optional
Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. An alternative use of this function is to pass it only hist. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
Yen J.C., Chang F.J., and Chang S. (1995) “A New Criterion for Automatic Multilevel Thresholding” IEEE Trans. on Image Processing, 4(3): 370-378. DOI:10.1109/83.366472
2
Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, DOI:10.1117/1.1631315 http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf
3
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_yen(image)
>>> binary = image <= thresh
try_all_threshold
skimage.filters.try_all_threshold(image, figsize=(8, 5), verbose=True) [source]
Returns a figure comparing the outputs of different thresholding methods. Parameters
image(N, M) ndarray
Input image.
figsizetuple, optional
Figure size (in inches).
verbosebool, optional
Print function name for each method. Returns
fig, axtuple
Matplotlib figure and axes. Notes The following algorithms are used: isodata li mean minimum otsu triangle yen Examples >>> from skimage.data import text
>>> fig, ax = try_all_threshold(text(), figsize=(10, 6), verbose=False)
unsharp_mask
skimage.filters.unsharp_mask(image, radius=1.0, amount=1.0, multichannel=False, preserve_range=False) [source]
Unsharp masking filter. The sharp details are identified as the difference between the original image and its blurred version. These details are then scaled, and added back to the original image. Parameters
image[P, …, ]M[, N][, C] ndarray
Input image.
radiusscalar or sequence of scalars, optional
If a scalar is given, then its value is used for all dimensions. If sequence is given, then there must be exactly one radius for each dimension except the last dimension for multichannel images. Note that 0 radius means no blurring, and negative values are not allowed.
amountscalar, optional
The details will be amplified with this factor. The factor could be 0 or negative. Typically, it is a small positive number, e.g. 1.0.
multichannelbool, optional
If True, the last image dimension is considered as a color channel, otherwise as spatial. Color channels are processed individually.
preserve_rangebool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns
output[P, …, ]M[, N][, C] ndarray of float
Image with unsharp mask applied. Notes Unsharp masking is an image sharpening technique. It is a linear image operation, and numerically stable, unlike deconvolution which is an ill-posed problem. Because of this stability, it is often preferred over deconvolution. The main idea is as follows: sharp details are identified as the difference between the original image and its blurred version. These details are added back to the original image after a scaling step: enhanced image = original + amount * (original - blurred) When applying this filter to several color layers independently, color bleeding may occur. More visually pleasing result can be achieved by processing only the brightness/lightness/intensity channel in a suitable color space such as HSV, HSL, YUV, or YCbCr. Unsharp masking is described in most introductory digital image processing books. This implementation is based on [1]. References
1
Maria Petrou, Costas Petrou “Image Processing: The Fundamentals”, (2010), ed ii., page 357, ISBN 13: 9781119994398 DOI:10.1002/9781119994398
2
Wikipedia. Unsharp masking https://en.wikipedia.org/wiki/Unsharp_masking Examples >>> array = np.ones(shape=(5,5), dtype=np.uint8)*100
>>> array[2,2] = 120
>>> array
array([[100, 100, 100, 100, 100],
[100, 100, 100, 100, 100],
[100, 100, 120, 100, 100],
[100, 100, 100, 100, 100],
[100, 100, 100, 100, 100]], dtype=uint8)
>>> np.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[0.39, 0.39, 0.39, 0.39, 0.39],
[0.39, 0.39, 0.38, 0.39, 0.39],
[0.39, 0.38, 0.53, 0.38, 0.39],
[0.39, 0.39, 0.38, 0.39, 0.39],
[0.39, 0.39, 0.39, 0.39, 0.39]])
>>> array = np.ones(shape=(5,5), dtype=np.int8)*100
>>> array[2,2] = 127
>>> np.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[0.79, 0.79, 0.79, 0.79, 0.79],
[0.79, 0.78, 0.75, 0.78, 0.79],
[0.79, 0.75, 1. , 0.75, 0.79],
[0.79, 0.78, 0.75, 0.78, 0.79],
[0.79, 0.79, 0.79, 0.79, 0.79]])
>>> np.around(unsharp_mask(array, radius=0.5, amount=2, preserve_range=True), 2)
array([[100. , 100. , 99.99, 100. , 100. ],
[100. , 99.39, 95.48, 99.39, 100. ],
[ 99.99, 95.48, 147.59, 95.48, 99.99],
[100. , 99.39, 95.48, 99.39, 100. ],
[100. , 100. , 99.99, 100. , 100. ]])
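The identity from the Notes, enhanced = original + amount * (original - blurred), can be demonstrated with any blur. Below, a crude 3x3 box blur stands in for the Gaussian blur the function actually uses, so the numbers are illustrative only:

```python
import numpy as np

def box_blur(image):
    # crude 3x3 box blur with edge replication, enough for the demo
    padded = np.pad(image, 1, mode='edge')
    out = np.zeros_like(image, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + image.shape[0], dj:dj + image.shape[1]]
    return out / 9.0

image = np.zeros((5, 5))
image[2, 2] = 9.0                 # single bright impulse
amount = 1.0
blurred = box_blur(image)
enhanced = image + amount * (image - blurred)
# the impulse is boosted (9 -> 17) while its neighbors dip below zero,
# the overshoot/undershoot halo characteristic of unsharp masking
```

The negative halo around the peak is why clipping or preserve_range handling matters in practice.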
wiener
skimage.filters.wiener(data, impulse_response=None, filter_params={}, K=0.25, predefined_filter=None) [source]
Minimum Mean Square Error (Wiener) inverse filter. Parameters
data(M,N) ndarray
Input data.
Kfloat or (M,N) ndarray
Ratio between power spectrum of noise and undegraded image.
impulse_responsecallable f(r, c, **filter_params)
Impulse response of the filter. See LPIFilter2D.__init__.
filter_paramsdict
Additional keyword parameters to the impulse_response function. Other Parameters
predefined_filterLPIFilter2D
If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here.
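In the frequency domain the standard MMSE (Wiener) inverse filter is F_hat = conj(H) * G / (|H|^2 + K), where G is the spectrum of the degraded data and H the transfer function of the degradation. A NumPy sketch with a hypothetical identity degradation, to isolate the role of K (this illustrates the formula, not the library's internals):

```python
import numpy as np

K = 0.25
data = np.arange(16.0).reshape(4, 4)

# Hypothetical degradation: identity transfer function H == 1 everywhere,
# so only the noise-to-signal ratio K damps the output.
H = np.ones((4, 4), dtype=complex)
G = np.fft.fft2(data)

# Wiener (MMSE) inverse filter in the frequency domain
F_hat = np.conj(H) * G / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft2(F_hat))
# with H == 1 this reduces to data / (1 + K)
```

Larger K suppresses frequencies where the (assumed) noise dominates; K = 0 degenerates to a plain, noise-amplifying inverse filter.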
window
skimage.filters.window(window_type, shape, warp_kwargs=None) [source]
Return an n-dimensional window of a given size and dimensionality. Parameters
window_typestring, float, or tuple
The type of window to be created. Any window type supported by scipy.signal.get_window is allowed here. See notes below for a current list, or the SciPy documentation for the version of SciPy on your machine.
shapetuple of int or int
The shape of the window along each axis. If an integer is provided, a 1D window is generated.
warp_kwargsdict
Keyword arguments passed to skimage.transform.warp (e.g., warp_kwargs={'order':3} to change interpolation method). Returns
nd_windowndarray
A window of the specified shape. dtype is np.double. Notes This function is based on scipy.signal.get_window and thus can access all of the window types available to that function (e.g., "hann", "boxcar"). Note that certain window types require parameters that have to be supplied with the window name as a tuple (e.g., ("tukey", 0.8)). If only a float is supplied, it is interpreted as the beta parameter of the Kaiser window. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.get_window.html for more details.
Note that this function generates a double precision array of the specified shape and can thus generate very large arrays that consume a large amount of available memory.
The approach taken here to create nD windows is to first calculate the Euclidean distance from the center of the intended nD window to each position in the array. That distance is used to sample, with interpolation, from a 1D window returned from scipy.signal.get_window. The method of interpolation can be changed with the order keyword argument passed to skimage.transform.warp. Some coordinates in the output window will be outside of the original signal; these will be filled in with zeros.
Window types:
- boxcar
- triang
- blackman
- hamming
- hann
- bartlett
- flattop
- parzen
- bohman
- blackmanharris
- nuttall
- barthann
- kaiser (needs beta)
- gaussian (needs standard deviation)
- general_gaussian (needs power, width)
- slepian (needs width)
- dpss (needs normalized half-bandwidth)
- chebwin (needs attenuation)
- exponential (needs decay scale)
- tukey (needs taper fraction)
References
1
Two-dimensional window design, Wikipedia, https://en.wikipedia.org/wiki/Two_dimensional_window_design Examples Return a Hann window with shape (512, 512): >>> from skimage.filters import window
>>> w = window('hann', (512, 512))
Return a Kaiser window with beta parameter of 16 and shape (256, 256, 35): >>> w = window(16, (256, 256, 35))
Return a Tukey window with an alpha parameter of 0.8 and shape (100, 300): >>> w = window(('tukey', 0.8), (100, 300))
LPIFilter2D
class skimage.filters.LPIFilter2D(impulse_response, **filter_params) [source]
Bases: object Linear Position-Invariant Filter (2-dimensional)
__init__(impulse_response, **filter_params) [source]
Parameters
impulse_responsecallable f(r, c, **filter_params)
Function that yields the impulse response. r and c are 1-dimensional vectors that represent row and column positions, in other words coordinates are (r[0],c[0]),(r[0],c[1]) etc. **filter_params are passed through. In other words, impulse_response would be called like this: >>> def impulse_response(r, c, **filter_params):
... pass
>>>
>>> r = [0,0,0,1,1,1,2,2,2]
>>> c = [0,1,2,0,1,2,0,1,2]
>>> filter_params = {'kw1': 1, 'kw2': 2, 'kw3': 3}
>>> impulse_response(r, c, **filter_params)
Examples Gaussian filter: Use a 1-D gaussian in each direction without normalization coefficients. >>> def filt_func(r, c, sigma = 1):
... return np.exp(-np.hypot(r, c)/sigma)
>>> filter = LPIFilter2D(filt_func) | skimage.api.skimage.filters |
skimage.filters.apply_hysteresis_threshold(image, low, high) [source]
Apply hysteresis thresholding to image. This algorithm finds regions where image is greater than high OR image is greater than low and that region is connected to a region greater than high. Parameters
imagearray, shape (M,[ N, …, P])
Grayscale input image.
lowfloat, or array of same shape as image
Lower threshold.
highfloat, or array of same shape as image
Higher threshold. Returns
thresholdedarray of bool, same shape as image
Array in which True indicates the locations where image was above the hysteresis threshold. References
1
J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986; vol. 8, pp.679-698. DOI:10.1109/TPAMI.1986.4767851 Examples >>> image = np.array([1, 2, 3, 2, 1, 2, 1, 3, 2])
>>> apply_hysteresis_threshold(image, 1.5, 2.5).astype(int)
array([0, 1, 1, 1, 0, 0, 0, 1, 1]) | skimage.api.skimage.filters#skimage.filters.apply_hysteresis_threshold |
skimage.filters.correlate_sparse(image, kernel, mode='reflect') [source]
Compute valid cross-correlation of padded_array and kernel. This function is fast when kernel is large with many zeros. See scipy.ndimage.correlate for a description of cross-correlation. Parameters
imagendarray, dtype float, shape (M, N,[ …,] P)
The input array. If mode is ‘valid’, this array should already be padded, as a margin of the same shape as kernel will be stripped off.
kernelndarray, dtype float shape (Q, R,[ …,] S)
The kernel to be correlated. Must have the same number of dimensions as padded_array. For high performance, it should be sparse (few nonzero entries).
modestring, optional
See scipy.ndimage.correlate for valid modes. Additionally, mode ‘valid’ is accepted, in which case no padding is applied and the result is the result for the smaller image for which the kernel is entirely inside the original data. Returns
resultarray of float, shape (M, N,[ …,] P)
The result of cross-correlating image with kernel. If mode ‘valid’ is used, the resulting shape is (M-Q+1, N-R+1,[ …,] P-S+1). | skimage.api.skimage.filters#skimage.filters.correlate_sparse |
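A minimal sketch of the 'valid' mode described above (the arrays are illustrative): a kernel whose only nonzero entry is its center simply crops out the region where the kernel fits entirely inside the input.

```python
import numpy as np
from skimage.filters import correlate_sparse

image = np.arange(25, dtype=float).reshape(5, 5)

# Mostly-zero kernel: the sparse implementation only visits the
# nonzero entries, here just a single (scaled) center tap.
kernel = np.zeros((3, 3))
kernel[1, 1] = 2.0

# mode='valid': no padding; output shape is (5-3+1, 5-3+1) = (3, 3),
# and each output pixel is 2 * the corresponding interior input pixel.
result = correlate_sparse(image, kernel, mode='valid')
```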
skimage.filters.difference_of_gaussians(image, low_sigma, high_sigma=None, *, mode='nearest', cval=0, multichannel=False, truncate=4.0) [source]
Find features between low_sigma and high_sigma in size. This function uses the Difference of Gaussians method for applying band-pass filters to multi-dimensional arrays. The input array is blurred with two Gaussian kernels of differing sigmas to produce two intermediate, filtered images. The more-blurred image is then subtracted from the less-blurred image. The final output image will therefore have had high-frequency components attenuated by the smaller-sigma Gaussian, and low frequency components will have been removed due to their presence in the more-blurred intermediate. Parameters
imagendarray
Input array to filter.
low_sigmascalar or sequence of scalars
Standard deviation(s) for the Gaussian kernel with the smaller sigmas across all axes. The standard deviations are given for each axis as a sequence, or as a single number, in which case the single number is used as the standard deviation value for all axes.
high_sigmascalar or sequence of scalars, optional (default is None)
Standard deviation(s) for the Gaussian kernel with the larger sigmas across all axes. The standard deviations are given for each axis as a sequence, or as a single number, in which case the single number is used as the standard deviation value for all axes. If None is given (default), sigmas for all axes are calculated as 1.6 * low_sigma.
mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
cvalscalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0
multichannelbool, optional (default: False)
Whether the last axis of the image is to be interpreted as multiple channels. If True, each channel is filtered separately (channels are not mixed together).
truncatefloat, optional (default is 4.0)
Truncate the filter at this many standard deviations. Returns
filtered_imagendarray
the filtered array. See also
skimage.feature.blob_dog
Notes This function will subtract an array filtered with a Gaussian kernel with sigmas given by high_sigma from an array filtered with a Gaussian kernel with sigmas provided by low_sigma. The values for high_sigma must always be greater than or equal to the corresponding values in low_sigma, or a ValueError will be raised. When high_sigma is None, the values for high_sigma will be calculated as 1.6 times the corresponding values in low_sigma. This ratio was originally proposed by Marr and Hildreth (1980) [1] and is commonly used when approximating the inverted Laplacian of Gaussian, which is used in edge and blob detection. Input image is converted according to the conventions of img_as_float. Except for sigma values, all parameters are used for both filters. References
1
Marr, D. and Hildreth, E. Theory of Edge Detection. Proc. R. Soc. Lond. Series B 207, 187-217 (1980). https://doi.org/10.1098/rspb.1980.0020 Examples Apply a simple Difference of Gaussians filter to a color image: >>> from skimage.data import astronaut
>>> from skimage.filters import difference_of_gaussians
>>> filtered_image = difference_of_gaussians(astronaut(), 2, 10,
... multichannel=True)
Apply a Laplacian of Gaussian filter as approximated by the Difference of Gaussians filter: >>> filtered_image = difference_of_gaussians(astronaut(), 2,
... multichannel=True)
Apply a Difference of Gaussians filter to a grayscale image using different sigma values for each axis: >>> from skimage.data import camera
>>> filtered_image = difference_of_gaussians(camera(), (2,5), (3,20)) | skimage.api.skimage.filters#skimage.filters.difference_of_gaussians |
skimage.filters.farid(image, *, mask=None) [source]
Find the edge magnitude using the Farid transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Farid edge map. See also
sobel, prewitt, canny
Notes Take the square root of the sum of the squares of the horizontal and vertical derivatives to get a magnitude that is somewhat insensitive to direction. Similar to the Scharr operator, this operator is designed with a rotation invariance constraint. References
1
Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819
2
Wikipedia, “Farid and Simoncelli Derivatives.” Available at: <https://en.wikipedia.org/wiki/Image_derivatives#Farid_and_Simoncelli_Derivatives> Examples >>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.farid(camera) | skimage.api.skimage.filters#skimage.filters.farid |
skimage.filters.farid_h(image, *, mask=None) [source]
Find the horizontal edges of an image using the Farid transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Farid edge map. Notes The kernel was constructed using the 5-tap weights from [1]. References
1
Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819
2
Farid, H. and Simoncelli, E. P. “Optimally rotation-equivariant directional derivative kernels”, In: 7th International Conference on Computer Analysis of Images and Patterns, Kiel, Germany. Sep, 1997. | skimage.api.skimage.filters#skimage.filters.farid_h |
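A small illustrative sketch (the synthetic step image is an assumption for demonstration): on a purely horizontal step edge, farid_h responds strongly while farid_v stays essentially at zero.

```python
import numpy as np
from skimage.filters import farid_h, farid_v

# Synthetic step image: bright bottom half, so the only edge is horizontal.
image = np.zeros((8, 8))
image[4:, :] = 1.0

h_edges = farid_h(image)  # strong response along the step
v_edges = farid_v(image)  # near zero: no vertical edges present
```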
skimage.filters.farid_v(image, *, mask=None) [source]
Find the vertical edges of an image using the Farid transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Farid edge map. Notes The kernel was constructed using the 5-tap weights from [1]. References
1
Farid, H. and Simoncelli, E. P., “Differentiation of discrete multidimensional signals”, IEEE Transactions on Image Processing 13(4): 496-508, 2004. DOI:10.1109/TIP.2004.823819 | skimage.api.skimage.filters#skimage.filters.farid_v |
skimage.filters.frangi(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True, mode='reflect', cval=0) [source]
Filter an image with the Frangi vesselness filter. This filter can be used to detect continuous ridges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to vessels, according to the method described in [1]. Parameters
image(N, M[, P]) ndarray
Array with input image data.
sigmasiterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range2-tuple of floats, optional
The range of sigmas used.
scale_stepfloat, optional
Step size between sigmas.
alphafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
betafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gammafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridgesboolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out(N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
sato
hessian
Notes Written by Marc Schrijver, November 2001 Re-Written by D. J. Kroon, University of Twente, May 2009, [2] Adaptation of the 3D version from D. G. Ellis, January 2017, [3] References
1
Frangi, A. F., Niessen, W. J., Vincken, K. L., & Viergever, M. A. (1998,). Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 130-137). Springer Berlin Heidelberg. DOI:10.1007/BFb0056195
2
Kroon, D. J.: Hessian based Frangi vesselness filter.
3
Ellis, D. G.: https://github.com/ellisdg/frangi3d/tree/master/frangi | skimage.api.skimage.filters#skimage.filters.frangi |
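As a hedged sketch (the synthetic ridge image and sigma choice are illustrative assumptions), the filter responds along elongated bright structures when black_ridges=False:

```python
import numpy as np
from skimage.filters import frangi

# Synthetic "vessel": a thin bright horizontal ridge on a dark background.
image = np.zeros((32, 32))
image[15:17, :] = 1.0

# The ridge is brighter than its surroundings, so detect white ridges.
response = frangi(image, sigmas=range(1, 4), black_ridges=False)
```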
skimage.filters.gabor(image, frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0, mode='reflect', cval=0) [source]
Return real and imaginary responses to Gabor filter. The real and imaginary parts of the Gabor filter kernel are applied to the image and the response is returned as a pair of arrays. Gabor filter is a linear filter with a Gaussian kernel which is modulated by a sinusoidal plane wave. Frequency and orientation representations of the Gabor filter are similar to those of the human visual system. Gabor filter banks are commonly used in computer vision and image processing. They are especially suitable for edge detection and texture classification. Parameters
image2-D array
Input image.
frequencyfloat
Spatial frequency of the harmonic function. Specified in pixels.
thetafloat, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidthfloat, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_yfloat, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stdsscalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations.
offsetfloat, optional
Phase offset of harmonic function in radians.
mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional
Mode used to convolve image with a kernel, passed to ndi.convolve
cvalscalar, optional
Value to fill past edges of input if mode of convolution is ‘constant’. The parameter is passed to ndi.convolve. Returns
real, imagarrays
Filtered images using the real and imaginary parts of the Gabor filter kernel. Images are of the same dimensions as the input one. References
1
https://en.wikipedia.org/wiki/Gabor_filter
2
https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf Examples >>> from skimage.filters import gabor
>>> from skimage import data, io
>>> from matplotlib import pyplot as plt
>>> image = data.coins()
>>> # detecting edges in a coin image
>>> filt_real, filt_imag = gabor(image, frequency=0.6)
>>> plt.figure()
>>> io.imshow(filt_real)
>>> io.show()
>>> # less sensitivity to finer details with the lower frequency kernel
>>> filt_real, filt_imag = gabor(image, frequency=0.1)
>>> plt.figure()
>>> io.imshow(filt_real)
>>> io.show() | skimage.api.skimage.filters#skimage.filters.gabor |
skimage.filters.gabor_kernel(frequency, theta=0, bandwidth=1, sigma_x=None, sigma_y=None, n_stds=3, offset=0) [source]
Return complex 2D Gabor filter kernel. Gabor kernel is a Gaussian kernel modulated by a complex harmonic function. Harmonic function consists of an imaginary sine function and a real cosine function. Spatial frequency is inversely proportional to the wavelength of the harmonic and to the standard deviation of a Gaussian kernel. The bandwidth is also inversely proportional to the standard deviation. Parameters
frequencyfloat
Spatial frequency of the harmonic function. Specified in pixels.
thetafloat, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidthfloat, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_yfloat, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stdsscalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations
offsetfloat, optional
Phase offset of harmonic function in radians. Returns
gcomplex array
Complex filter kernel. References
1
https://en.wikipedia.org/wiki/Gabor_filter
2
https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf Examples >>> from skimage.filters import gabor_kernel
>>> from skimage import io
>>> from matplotlib import pyplot as plt
>>> gk = gabor_kernel(frequency=0.2)
>>> plt.figure()
>>> io.imshow(gk.real)
>>> io.show()
>>> # more ripples (equivalent to increasing the size of the
>>> # Gaussian spread)
>>> gk = gabor_kernel(frequency=0.2, bandwidth=0.1)
>>> plt.figure()
>>> io.imshow(gk.real)
>>> io.show() | skimage.api.skimage.filters#skimage.filters.gabor_kernel |
skimage.filters.gaussian(image, sigma=1, output=None, mode='nearest', cval=0, multichannel=None, preserve_range=False, truncate=4.0) [source]
Multi-dimensional Gaussian filter. Parameters
imagearray-like
Input image (grayscale or color) to filter.
sigmascalar or sequence of scalars, optional
Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
outputarray, optional
The output parameter passes an array in which to store the filter output.
mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’.
cvalscalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0
multichannelbool, optional (default: None)
Whether the last axis of the image is to be interpreted as multiple channels. If True, each channel is filtered separately (channels are not mixed together). Only 3 channels are supported. If None, the function will attempt to guess this, and raise a warning if ambiguous, when the array has shape (M, N, 3).
preserve_rangebool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html
truncatefloat, optional
Truncate the filter at this many standard deviations. Returns
filtered_imagendarray
the filtered array Notes This function is a wrapper around scipy.ndimage.gaussian_filter(). Integer arrays are converted to float. The output should be of floating point data type, since gaussian converts the provided image to float. If output is not provided, another array will be allocated and returned as the result. The multi-dimensional filter is implemented as a sequence of one-dimensional convolution filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision. Examples >>> a = np.zeros((3, 3))
>>> a[1, 1] = 1
>>> a
array([[0., 0., 0.],
[0., 1., 0.],
[0., 0., 0.]])
>>> gaussian(a, sigma=0.4) # mild smoothing
array([[0.00163116, 0.03712502, 0.00163116],
[0.03712502, 0.84496158, 0.03712502],
[0.00163116, 0.03712502, 0.00163116]])
>>> gaussian(a, sigma=1) # more smoothing
array([[0.05855018, 0.09653293, 0.05855018],
[0.09653293, 0.15915589, 0.09653293],
[0.05855018, 0.09653293, 0.05855018]])
>>> # Several modes are possible for handling boundaries
>>> gaussian(a, sigma=1, mode='reflect')
array([[0.08767308, 0.12075024, 0.08767308],
[0.12075024, 0.16630671, 0.12075024],
[0.08767308, 0.12075024, 0.08767308]])
>>> # For RGB images, each is filtered separately
>>> from skimage.data import astronaut
>>> image = astronaut()
>>> filtered_img = gaussian(image, sigma=1, multichannel=True) | skimage.api.skimage.filters#skimage.filters.gaussian |
skimage.filters.hessian(image, sigmas=range(1, 10, 2), scale_range=None, scale_step=None, alpha=0.5, beta=0.5, gamma=15, black_ridges=True, mode=None, cval=0) [source]
Filter an image with the Hybrid Hessian filter. This filter can be used to detect continuous edges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Almost equal to Frangi filter, but uses alternative method of smoothing. Refer to [1] to find the differences between Frangi and Hessian filters. Parameters
image(N, M[, P]) ndarray
Array with input image data.
sigmasiterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range2-tuple of floats, optional
The range of sigmas used.
scale_stepfloat, optional
Step size between sigmas.
alphafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
betafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gammafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridgesboolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out(N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
sato
frangi
Notes Written by Marc Schrijver (November 2001) Re-Written by D. J. Kroon University of Twente (May 2009) [2] References
1
Ng, C. C., Yap, M. H., Costen, N., & Li, B. (2014,). Automatic wrinkle detection using hybrid Hessian filter. In Asian Conference on Computer Vision (pp. 609-622). Springer International Publishing. DOI:10.1007/978-3-319-16811-1_40
2
Kroon, D. J.: Hessian based Frangi vesselness filter. | skimage.api.skimage.filters#skimage.filters.hessian |
skimage.filters.inverse(data, impulse_response=None, filter_params={}, max_gain=2, predefined_filter=None) [source]
Apply the filter in reverse to the given data. Parameters
data(M,N) ndarray
Input data.
impulse_responsecallable f(r, c, **filter_params)
Impulse response of the filter. See LPIFilter2D.__init__.
filter_paramsdict
Additional keyword parameters to the impulse_response function.
max_gainfloat
Limit the filter gain. Often, the filter contains zeros, which would cause the inverse filter to have infinite gain. High gain causes amplification of artefacts, so a conservative limit is recommended. Other Parameters
predefined_filterLPIFilter2D
If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here. | skimage.api.skimage.filters#skimage.filters.inverse |
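A minimal sketch under stated assumptions (the Gaussian impulse response is hypothetical): blur an impulse with an LPIFilter2D, then undo it with the inverse filter, capping the gain as recommended above.

```python
import numpy as np
from skimage.filters import LPIFilter2D, inverse

def gauss_ir(r, c, sigma=2.0):
    # Hypothetical impulse response: an unnormalized 2-D Gaussian.
    return np.exp(-(r ** 2 + c ** 2) / (2 * sigma ** 2))

filt = LPIFilter2D(gauss_ir)

image = np.zeros((32, 32))
image[16, 16] = 1.0
blurred = filt(image)

# max_gain=2 keeps near-zero frequency responses from blowing up.
restored = inverse(blurred, predefined_filter=filt, max_gain=2)
```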
skimage.filters.laplace(image, ksize=3, mask=None) [source]
Find the edges of an image using the Laplace operator. Parameters
imagendarray
Image to process.
ksizeint, optional
Define the size of the discrete Laplacian operator such that it will have a size of (ksize,) * image.ndim.
maskndarray, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
outputndarray
The Laplace edge map. Notes The Laplacian operator is generated using the function skimage.restoration.uft.laplacian(). | skimage.api.skimage.filters#skimage.filters.laplace |
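A brief sketch (the impulse image is illustrative): applying laplace to a single bright pixel exposes the response of the discrete 3x3 Laplacian operator itself.

```python
import numpy as np
from skimage.filters import laplace

# Unit impulse: the response reproduces the discrete Laplacian kernel.
image = np.zeros((5, 5))
image[2, 2] = 1.0

edges = laplace(image, ksize=3)
```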
class skimage.filters.LPIFilter2D(impulse_response, **filter_params) [source]
Bases: object Linear Position-Invariant Filter (2-dimensional)
__init__(impulse_response, **filter_params) [source]
Parameters
impulse_responsecallable f(r, c, **filter_params)
Function that yields the impulse response. r and c are 1-dimensional vectors that represent row and column positions, in other words coordinates are (r[0],c[0]),(r[0],c[1]) etc. **filter_params are passed through. In other words, impulse_response would be called like this: >>> def impulse_response(r, c, **filter_params):
... pass
>>>
>>> r = [0,0,0,1,1,1,2,2,2]
>>> c = [0,1,2,0,1,2,0,1,2]
>>> filter_params = {'kw1': 1, 'kw2': 2, 'kw3': 3}
>>> impulse_response(r, c, **filter_params)
Examples Gaussian filter: Use a 1-D gaussian in each direction without normalization coefficients. >>> def filt_func(r, c, sigma = 1):
... return np.exp(-np.hypot(r, c)/sigma)
>>> filter = LPIFilter2D(filt_func) | skimage.api.skimage.filters#skimage.filters.LPIFilter2D |
__init__(impulse_response, **filter_params) [source]
Parameters
impulse_responsecallable f(r, c, **filter_params)
Function that yields the impulse response. r and c are 1-dimensional vectors that represent row and column positions, in other words coordinates are (r[0],c[0]),(r[0],c[1]) etc. **filter_params are passed through. In other words, impulse_response would be called like this: >>> def impulse_response(r, c, **filter_params):
... pass
>>>
>>> r = [0,0,0,1,1,1,2,2,2]
>>> c = [0,1,2,0,1,2,0,1,2]
>>> filter_params = {'kw1': 1, 'kw2': 2, 'kw3': 3}
>>> impulse_response(r, c, **filter_params)
Examples Gaussian filter: Use a 1-D gaussian in each direction without normalization coefficients. >>> def filt_func(r, c, sigma = 1):
... return np.exp(-np.hypot(r, c)/sigma)
>>> filter = LPIFilter2D(filt_func) | skimage.api.skimage.filters#skimage.filters.LPIFilter2D.__init__ |
skimage.filters.median(image, selem=None, out=None, mode='nearest', cval=0.0, behavior='ndimage') [source]
Return local median of an image. Parameters
imagearray-like
Input image.
selemndarray, optional
If behavior=='rank', selem is a 2-D array of 1’s and 0’s. If behavior=='ndimage', selem is an N-D array of 1’s and 0’s with the same number of dimensions as image. If None, selem will be an N-D array with 3 elements along each dimension (e.g., vector, square, cube, etc.)
outndarray, (same dtype as image), optional
If None, a new array is allocated.
mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘nearest’. New in version 0.15: mode is used when behavior='ndimage'.
cvalscalar, optional
Value to fill past edges of input if mode is ‘constant’. Default is 0.0 New in version 0.15: cval was added in 0.15 and is used when behavior='ndimage'.
behavior{‘ndimage’, ‘rank’}, optional
Whether to use the old behavior (i.e., < 0.15) or the new behavior. The old behavior calls skimage.filters.rank.median(); the new behavior calls scipy.ndimage.median_filter(). Default is ‘ndimage’. New in version 0.15: behavior was introduced in 0.15 Changed in version 0.16: Default behavior has been changed from ‘rank’ to ‘ndimage’ Returns
out2-D array (same dtype as input image)
Output image. See also
skimage.filters.rank.median
Rank-based implementation of the median filtering offering more flexibility with additional parameters but dedicated for unsigned integer images. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters import median
>>> img = data.camera()
>>> med = median(img, disk(5)) | skimage.api.skimage.filters#skimage.filters.median |
skimage.filters.meijering(image, sigmas=range(1, 10, 2), alpha=None, black_ridges=True, mode='reflect', cval=0) [source]
Filter an image with the Meijering neuriteness filter. This filter can be used to detect continuous ridges, e.g. neurites, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to neurites, according to the method described in [1]. Parameters
image(N, M[, …, P]) ndarray
Array with input image data.
sigmasiterable of floats, optional
Sigmas used as scales of filter
alphafloat, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a plate-like structure.
black_ridgesboolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out(N, M[, …, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
sato
frangi
hessian
References
1
Meijering, E., Jacob, M., Sarria, J. C., Steiner, P., Hirling, H., Unser, M. (2004). Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A, 58(2), 167-176. DOI:10.1002/cyto.a.20022 | skimage.api.skimage.filters#skimage.filters.meijering |
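As with the related ridge filters, a hedged sketch (the synthetic neurite-like image is an illustrative assumption):

```python
import numpy as np
from skimage.filters import meijering

# Thin bright line standing in for a neurite on a dark background.
image = np.zeros((32, 32))
image[15:17, :] = 1.0

# The structure is brighter than its surroundings: detect white ridges.
response = meijering(image, sigmas=[1, 2], black_ridges=False)
```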
skimage.filters.prewitt(image, mask=None, *, axis=None, mode='reflect', cval=0.0) [source]
Find the edge magnitude using the Prewitt transform. Parameters
imagearray
The input image.
maskarray of bool, optional
Clip the output image to this mask. (Values where mask=0 will be set to 0.)
axisint or sequence of int, optional
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as: prw_mag = np.sqrt(sum([prewitt(image, axis=i)**2
for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence.
modestr or sequence of str, optional
The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
cvalfloat, optional
When mode is 'constant', this is the constant used in values outside the boundary of the image data. Returns
outputarray of float
The Prewitt edge map. See also
sobel, scharr
Notes The edge magnitude depends slightly on edge directions, since the approximation of the gradient operator by the Prewitt operator is not completely rotation invariant. For a better rotation invariance, the Scharr operator should be used. The Sobel operator has a better rotation invariance than the Prewitt operator, but a worse rotation invariance than the Scharr operator. Examples >>> from skimage import data
>>> from skimage import filters
>>> camera = data.camera()
>>> edges = filters.prewitt(camera) | skimage.api.skimage.filters#skimage.filters.prewitt |
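The per-axis definition quoted above can be checked directly against the full magnitude. A minimal sketch, assuming a scikit-image version where prewitt accepts the axis parameter as documented:

```python
import numpy as np
from skimage import filters

rng = np.random.default_rng(0)
img = rng.random((16, 16))

# Full edge magnitude vs. the per-axis formula from the axis parameter docs.
mag = filters.prewitt(img)
manual = np.sqrt(
    sum(filters.prewitt(img, axis=i) ** 2 for i in range(img.ndim)) / img.ndim
)
```

The two arrays should agree elementwise, since passing no axis computes exactly this root-mean-square of the per-axis responses.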
skimage.filters.prewitt_h(image, mask=None) [source]
Find the horizontal edges of an image using the Prewitt transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Prewitt edge map. Notes We use the following kernel: 1/3 1/3 1/3
0 0 0
-1/3 -1/3 -1/3 | skimage.api.skimage.filters#skimage.filters.prewitt_h |
skimage.filters.prewitt_v(image, mask=None) [source]
Find the vertical edges of an image using the Prewitt transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Prewitt edge map. Notes We use the following kernel: 1/3 0 -1/3
1/3 0 -1/3
1/3 0 -1/3 | skimage.api.skimage.filters#skimage.filters.prewitt_v |
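A minimal sketch of how the two directional kernels above behave on a synthetic image with a single horizontal edge: prewitt_h responds along the transition row while prewitt_v stays near zero, because the image is constant along each row.

```python
import numpy as np
from skimage import filters

# Bright top half, dark bottom half: one horizontal edge, no vertical edges.
img = np.zeros((8, 8))
img[:4, :] = 1.0

h = filters.prewitt_h(img)  # fires along the edge rows
v = filters.prewitt_v(img)  # ~0 everywhere: no left-right intensity change
```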
Module: filters.rank
skimage.filters.rank.autolevel(image, selem) Auto-level image using local histogram.
skimage.filters.rank.autolevel_percentile(…) Return greyscale local autolevel of an image.
skimage.filters.rank.bottomhat(image, selem) Local bottom-hat of an image.
skimage.filters.rank.enhance_contrast(image, …) Enhance contrast of an image.
skimage.filters.rank.enhance_contrast_percentile(…) Enhance contrast of an image.
skimage.filters.rank.entropy(image, selem[, …]) Local entropy.
skimage.filters.rank.equalize(image, selem) Equalize image using local histogram.
skimage.filters.rank.geometric_mean(image, selem) Return local geometric mean of an image.
skimage.filters.rank.gradient(image, selem) Return local gradient of an image (i.e.
skimage.filters.rank.gradient_percentile(…) Return local gradient of an image (i.e.
skimage.filters.rank.majority(image, selem, *) Majority filter assigns to each pixel the most occurring value within its neighborhood.
skimage.filters.rank.maximum(image, selem[, …]) Return local maximum of an image.
skimage.filters.rank.mean(image, selem[, …]) Return local mean of an image.
skimage.filters.rank.mean_bilateral(image, selem) Apply a flat kernel bilateral filter.
skimage.filters.rank.mean_percentile(image, …) Return local mean of an image.
skimage.filters.rank.median(image[, selem, …]) Return local median of an image.
skimage.filters.rank.minimum(image, selem[, …]) Return local minimum of an image.
skimage.filters.rank.modal(image, selem[, …]) Return local mode of an image.
skimage.filters.rank.noise_filter(image, selem) Noise feature.
skimage.filters.rank.otsu(image, selem[, …]) Local Otsu’s threshold value for each pixel.
skimage.filters.rank.percentile(image, selem) Return local percentile of an image.
skimage.filters.rank.pop(image, selem[, …]) Return the local number (population) of pixels.
skimage.filters.rank.pop_bilateral(image, selem) Return the local number (population) of pixels.
skimage.filters.rank.pop_percentile(image, selem) Return the local number (population) of pixels.
skimage.filters.rank.subtract_mean(image, selem) Return image subtracted from its local mean.
skimage.filters.rank.subtract_mean_percentile(…) Return image subtracted from its local mean.
skimage.filters.rank.sum(image, selem[, …]) Return the local sum of pixels.
skimage.filters.rank.sum_bilateral(image, selem) Apply a flat kernel bilateral filter.
skimage.filters.rank.sum_percentile(image, selem) Return the local sum of pixels.
skimage.filters.rank.threshold(image, selem) Local threshold of an image.
skimage.filters.rank.threshold_percentile(…) Local threshold of an image.
skimage.filters.rank.tophat(image, selem[, …]) Local top-hat of an image.
skimage.filters.rank.windowed_histogram(…) Normalized sliding window histogram autolevel
skimage.filters.rank.autolevel(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Auto-level image using local histogram. This filter locally stretches the histogram of gray values to cover the entire range of values from “white” to “black”. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import autolevel
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> auto = autolevel(img, disk(5))
>>> auto_vol = autolevel(volume, ball(5))
Examples using skimage.filters.rank.autolevel
Rank filters autolevel_percentile
skimage.filters.rank.autolevel_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return greyscale local autolevel of an image. This filter locally stretches the histogram of greyvalues to cover the entire range of values from “white” to “black”. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
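This entry ships without a usage example; a minimal sketch, assuming only the signature above (the percentile bounds chosen here are illustrative):

```python
from skimage import data
from skimage.morphology import disk
from skimage.filters.rank import autolevel_percentile

img = data.camera()
# Stretch each local histogram using only its 10th-90th percentile range,
# which keeps isolated extreme pixels from dominating the stretch.
out = autolevel_percentile(img, disk(5), p0=0.1, p1=0.9)
```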
Examples using skimage.filters.rank.autolevel_percentile
Rank filters bottomhat
skimage.filters.rank.bottomhat(image, selem, out=None, mask=None, shift_x=False, shift_y=False) [source]
Local bottom-hat of an image. This filter computes the morphological closing of the image and then subtracts the result from the original image. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Warns
Deprecated:
Deprecated since version 0.17. This function will be removed in scikit-image 0.19; it was misnamed and we believe its usefulness is narrow. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import bottomhat
>>> img = data.camera()
>>> out = bottomhat(img, disk(5))
enhance_contrast
skimage.filters.rank.enhance_contrast(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Enhance contrast of an image. This replaces each pixel by the local maximum if the pixel gray value is closer to the local maximum than the local minimum. Otherwise it is replaced by the local minimum. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import enhance_contrast
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = enhance_contrast(img, disk(5))
>>> out_vol = enhance_contrast(volume, ball(5))
Examples using skimage.filters.rank.enhance_contrast
Rank filters enhance_contrast_percentile
skimage.filters.rank.enhance_contrast_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Enhance contrast of an image. This replaces each pixel by the local maximum if the pixel greyvalue is closer to the local maximum than the local minimum. Otherwise it is replaced by the local minimum. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
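No example accompanies this entry; a minimal sketch comparing the plain and percentile variants, assuming only the signatures documented above (the percentile bounds are illustrative):

```python
from skimage import data
from skimage.morphology import disk
from skimage.filters.rank import enhance_contrast, enhance_contrast_percentile

img = data.camera()
# The plain version snaps each pixel to the local min or max; the percentile
# version ignores greyvalues outside the local [p0, p1] percentile band.
plain = enhance_contrast(img, disk(5))
robust = enhance_contrast_percentile(img, disk(5), p0=0.1, p1=0.9)
```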
Examples using skimage.filters.rank.enhance_contrast_percentile
Rank filters entropy
skimage.filters.rank.entropy(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local entropy. The entropy is computed using base 2 logarithm i.e. the filter returns the minimum number of bits needed to encode the local gray level distribution. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (float)
Output image. References
1
https://en.wikipedia.org/wiki/Entropy_(information_theory) Examples >>> from skimage import data
>>> from skimage.filters.rank import entropy
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> ent = entropy(img, disk(5))
>>> ent_vol = entropy(volume, ball(5))
Examples using skimage.filters.rank.entropy
Tinting gray-scale images
Entropy
Rank filters equalize
skimage.filters.rank.equalize(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Equalize image using local histogram. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import equalize
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> equ = equalize(img, disk(5))
>>> equ_vol = equalize(volume, ball(5))
Examples using skimage.filters.rank.equalize
Local Histogram Equalization
Rank filters geometric_mean
skimage.filters.rank.geometric_mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local geometric mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
Gonzalez, R. C. and Wood, R. E. “Digital Image Processing (3rd Edition).” Prentice-Hall Inc, 2006. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import geometric_mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = geometric_mean(img, disk(5))
>>> avg_vol = geometric_mean(volume, ball(5))
gradient
skimage.filters.rank.gradient(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local gradient of an image (i.e. local maximum - local minimum). Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import gradient
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = gradient(img, disk(5))
>>> out_vol = gradient(volume, ball(5))
Examples using skimage.filters.rank.gradient
Markers for watershed transform
Rank filters gradient_percentile
skimage.filters.rank.gradient_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return local gradient of an image (i.e. local maximum - local minimum). Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
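This entry has no example; a minimal sketch, assuming only the signature above. Trimming the histogram tails makes the local max - min spread less sensitive to outlier pixels than the plain gradient:

```python
from skimage import data
from skimage.morphology import disk
from skimage.filters.rank import gradient, gradient_percentile

img = data.camera()
g = gradient(img, disk(5))                           # full local spread
g_trim = gradient_percentile(img, disk(5), p0=0.05, p1=0.95)  # trimmed spread
```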
majority
skimage.filters.rank.majority(image, selem, *, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Majority filter assigns to each pixel the most occurring value within its neighborhood. Parameters
imagendarray
Image array (uint8, uint16 array).
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
outndarray (integer or float), optional
If None, a new array will be allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.filters.rank import majority
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> maj_img = majority(img, disk(5))
>>> maj_img_vol = majority(volume, ball(5))
maximum
skimage.filters.rank.maximum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local maximum of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.morphology.dilation
Notes The lower algorithm complexity makes skimage.filters.rank.maximum more efficient for larger images and structuring elements. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import maximum
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = maximum(img, disk(5))
>>> out_vol = maximum(volume, ball(5))
Examples using skimage.filters.rank.maximum
Rank filters mean
skimage.filters.rank.mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = mean(img, disk(5))
>>> avg_vol = mean(volume, ball(5))
Examples using skimage.filters.rank.mean
Segment human cells (in mitosis)
Rank filters mean_bilateral
skimage.filters.rank.mean_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Apply a flat kernel bilateral filter. This is an edge-preserving and noise reducing denoising filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element. Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element and having a greylevel inside this interval are averaged. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import mean_bilateral
>>> import numpy as np
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = mean_bilateral(img, disk(20), s0=10,s1=10)
Examples using skimage.filters.rank.mean_bilateral
Rank filters mean_percentile
skimage.filters.rank.mean_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return local mean of an image. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
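No example ships with this entry; a minimal sketch, assuming only the signature above (percentile bounds are illustrative):

```python
from skimage import data
from skimage.morphology import disk
from skimage.filters.rank import mean_percentile

img = data.camera()
# A trimmed local mean: average only greyvalues inside the local
# [10th, 90th] percentile band, discarding outliers in each window.
avg = mean_percentile(img, disk(5), p0=0.1, p1=0.9)
```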
median
skimage.filters.rank.median(image, selem=None, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local median of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s. If None, a full square of size 3 is used.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.filters.median
An implementation of median filtering that handles images with floating-point precision. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import median
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> med = median(img, disk(5))
>>> med_vol = median(volume, ball(5))
Examples using skimage.filters.rank.median
Markers for watershed transform
Rank filters minimum
skimage.filters.rank.minimum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local minimum of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.morphology.erosion
Notes The lower algorithm complexity makes skimage.filters.rank.minimum more efficient for larger images and structuring elements. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import minimum
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = minimum(img, disk(5))
>>> out_vol = minimum(volume, ball(5))
Examples using skimage.filters.rank.minimum
Rank filters modal
skimage.filters.rank.modal(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local mode of an image. The mode is the value that appears most often in the local histogram. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import modal
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = modal(img, disk(5))
>>> out_vol = modal(volume, ball(5))
noise_filter
skimage.filters.rank.noise_filter(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Noise feature. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
N. Hashimoto et al. Referenceless image quality evaluation for whole slide imaging. J Pathol Inform 2012;3:9. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import noise_filter
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = noise_filter(img, disk(5))
>>> out_vol = noise_filter(volume, ball(5))
otsu
skimage.filters.rank.otsu(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local Otsu’s threshold value for each pixel. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
https://en.wikipedia.org/wiki/Otsu%27s_method Examples >>> from skimage import data
>>> from skimage.filters.rank import otsu
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> local_otsu = otsu(img, disk(5))
>>> thresh_image = img >= local_otsu
>>> local_otsu_vol = otsu(volume, ball(5))
>>> thresh_image_vol = volume >= local_otsu_vol
Examples using skimage.filters.rank.otsu
Rank filters percentile
skimage.filters.rank.percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0) [source]
Return local percentile of an image. Returns the value of the p0 lower percentile of the local greyvalue distribution. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0float in [0, …, 1]
Set the percentile value. Returns
out2-D array (same dtype as input image)
Output image.
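This entry lacks an example; a minimal sketch, assuming only the signature above. Since the filter returns the value at the p0 lower percentile of each local histogram, p0=0.5 gives roughly the local median and smaller p0 a robust local "dark" level:

```python
from skimage import data
from skimage.morphology import disk
from skimage.filters.rank import percentile

img = data.camera()
med_like = percentile(img, disk(5), p0=0.5)  # ~local median
dark = percentile(img, disk(5), p0=0.1)      # robust local dark level
```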
pop
skimage.filters.rank.pop(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> import numpy as np
>>> import skimage.filters.rank as rank
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> rank.pop(img, square(3))
array([[4, 6, 6, 6, 4],
[6, 9, 9, 9, 6],
[6, 9, 9, 9, 6],
[6, 9, 9, 9, 6],
[4, 6, 6, 6, 4]], dtype=uint8)
pop_bilateral
skimage.filters.rank.pop_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Additionally pixels must have a greylevel inside the interval [g-s0, g+s1] where g is the greyvalue of the center pixel. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> import skimage.filters.rank as rank
>>> import numpy as np
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint16)
>>> rank.pop_bilateral(img, square(3), s0=10, s1=10)
array([[3, 4, 3, 4, 3],
[4, 4, 6, 4, 4],
[3, 6, 9, 6, 3],
[4, 4, 6, 4, 4],
[3, 4, 3, 4, 3]], dtype=uint16)
pop_percentile
skimage.filters.rank.pop_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
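Unlike pop, this percentile variant ships without a usage example. A minimal sketch, assuming the documented signature; the small test image mirrors the pop example above:

```python
import numpy as np
from skimage.morphology import square
from skimage.filters.rank import pop_percentile

img = 255 * np.array([[0, 0, 0, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 0, 0, 0]], dtype=np.uint8)
# For each pixel, count the neighbors whose greyvalue falls inside the
# [p0, p1] percentile interval of the local histogram.
out = pop_percentile(img, square(3), p0=0.1, p1=0.9)
```

The output keeps the input dtype; with a 3x3 neighborhood the count can be at most 9, so a uint8 result cannot overflow here.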
subtract_mean
skimage.filters.rank.subtract_mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the image with its local mean subtracted. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Notes Subtracting the mean value may introduce underflow. To compensate for this potential underflow, the obtained difference is downscaled by a factor of 2 and shifted by n_bins / 2 - 1, the median value of the local histogram (n_bins = max(3, image.max()) + 1 for 16-bit images and 256 otherwise). Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import subtract_mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = subtract_mean(img, disk(5))
>>> out_vol = subtract_mean(volume, ball(5))
subtract_mean_percentile
skimage.filters.rank.subtract_mean_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the image with its local mean subtracted. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
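No example is shipped for this variant. A minimal sketch, assuming the documented signature; the random test image below is purely illustrative:

```python
import numpy as np
from skimage.morphology import disk
from skimage.filters.rank import subtract_mean_percentile

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # illustrative image
# Subtract a trimmed local mean: only greyvalues inside the [0.05, 0.95]
# percentile interval of each neighborhood contribute to the mean.
out = subtract_mean_percentile(img, disk(3), p0=0.05, p1=0.95)
```

As with subtract_mean, the difference is rescaled and shifted internally, so the result stays in the input dtype.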
sum
skimage.filters.rank.sum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the local sum of pixels. Note that the sum may overflow depending on the data type of the input array. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> import skimage.filters.rank as rank
>>> import numpy as np
>>> img = np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> rank.sum(img, square(3))
array([[1, 2, 3, 2, 1],
[2, 4, 6, 4, 2],
[3, 6, 9, 6, 3],
[2, 4, 6, 4, 2],
[1, 2, 3, 2, 1]], dtype=uint8)
sum_bilateral
skimage.filters.rank.sum_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Apply a flat kernel bilateral filter. This is an edge-preserving, noise-reducing filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element (selem). Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element AND having a greylevel inside this interval are summed. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import sum_bilateral
>>> import numpy as np
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = sum_bilateral(img, disk(10), s0=10, s1=10)
sum_percentile
skimage.filters.rank.sum_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the local sum of pixels. Only greyvalues between percentiles [p0, p1] are considered in the filter. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
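This variant has no shipped example. A minimal sketch, assuming the documented signature; with the full [0, 1] interval it behaves like the plain sum filter:

```python
import numpy as np
from skimage.morphology import square
from skimage.filters.rank import sum_percentile

img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]], dtype=np.uint8)
# Local sum over the 3x3 neighborhood, restricted to greyvalues inside
# the [p0, p1] percentile interval; the uint8 result may overflow for
# brighter inputs.
out = sum_percentile(img, square(3), p0=0, p1=1)
```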
threshold
skimage.filters.rank.threshold(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local threshold of an image. The resulting binary mask is True if the gray value of the center pixel is greater than the local mean. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> from skimage.filters.rank import threshold
>>> import numpy as np
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> threshold(img, square(3))
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 0, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=uint8)
threshold_percentile
skimage.filters.rank.threshold_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0) [source]
Local threshold of an image. The resulting binary mask is True if the greyvalue of the center pixel is greater than the local mean. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0float in [0, …, 1]
Set the percentile value. Returns
out2-D array (same dtype as input image)
Output image.
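No example is shipped for this variant. A minimal sketch, assuming the documented signature (only p0 is exposed here); the test image mirrors the threshold example:

```python
import numpy as np
from skimage.morphology import square
from skimage.filters.rank import threshold_percentile

img = 255 * np.array([[0, 0, 0, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 0, 0, 0]], dtype=np.uint8)
# Non-zero where the center pixel is brighter than the local percentile
# mean, zero elsewhere.
out = threshold_percentile(img, square(3), p0=0.1)
```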
tophat
skimage.filters.rank.tophat(image, selem, out=None, mask=None, shift_x=False, shift_y=False) [source]
Local top-hat of an image. This filter computes the morphological opening of the image and then subtracts the result from the original image. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Warns
Deprecated:
This function is deprecated since version 0.17 and will be removed in scikit-image 0.19. It was misnamed, and we believe its usefulness is narrow. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import tophat
>>> img = data.camera()
>>> out = tophat(img, disk(5))
windowed_histogram
skimage.filters.rank.windowed_histogram(image, selem, out=None, mask=None, shift_x=False, shift_y=False, n_bins=None) [source]
Normalized sliding window histogram. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
n_binsint or None
The number of histogram bins. Will default to image.max() + 1 if None is passed. Returns
out3-D array (float)
Array of dimensions (H,W,N), where (H,W) are the dimensions of the input image and N is n_bins or image.max() + 1 if no value is provided as a parameter. Effectively, each pixel is a N-D feature vector that is the histogram. The sum of the elements in the feature vector will be 1, unless no pixels in the window were covered by both selem and mask, in which case all elements will be 0. Examples >>> from skimage import data
>>> from skimage.filters.rank import windowed_histogram
>>> from skimage.morphology import disk
>>> import numpy as np
>>> img = data.camera()
>>> hist_img = windowed_histogram(img, disk(5)) | skimage.api.skimage.filters.rank |
skimage.filters.rank.autolevel(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Auto-level image using local histogram. This filter locally stretches the histogram of gray values to cover the entire range of values from “white” to “black”. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import autolevel
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> auto = autolevel(img, disk(5))
>>> auto_vol = autolevel(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.autolevel |
skimage.filters.rank.autolevel_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return greyscale local autolevel of an image. This filter locally stretches the histogram of greyvalues to cover the entire range of values from “white” to “black”. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.autolevel_percentile |
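No example is shipped for this variant. A minimal sketch, assuming the documented signature; the random test image is purely illustrative:

```python
import numpy as np
from skimage.morphology import disk
from skimage.filters.rank import autolevel_percentile

rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # illustrative image
# Stretch each local histogram, ignoring the darkest and brightest 5%
# of greyvalues so that outliers do not dominate the stretch.
out = autolevel_percentile(img, disk(5), p0=0.05, p1=0.95)
```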
skimage.filters.rank.bottomhat(image, selem, out=None, mask=None, shift_x=False, shift_y=False) [source]
Local bottom-hat of an image. This filter computes the morphological closing of the image and then subtracts the result from the original image. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Warns
Deprecated:
This function is deprecated since version 0.17 and will be removed in scikit-image 0.19. It was misnamed, and we believe its usefulness is narrow. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import bottomhat
>>> img = data.camera()
>>> out = bottomhat(img, disk(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.bottomhat |
skimage.filters.rank.enhance_contrast(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Enhance contrast of an image. This replaces each pixel by the local maximum if the pixel gray value is closer to the local maximum than the local minimum. Otherwise it is replaced by the local minimum. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import enhance_contrast
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = enhance_contrast(img, disk(5))
>>> out_vol = enhance_contrast(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.enhance_contrast |
skimage.filters.rank.enhance_contrast_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Enhance contrast of an image. This replaces each pixel by the local maximum if the pixel greyvalue is closer to the local maximum than the local minimum. Otherwise it is replaced by the local minimum. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.enhance_contrast_percentile |
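No example is shipped for this variant. A minimal sketch, assuming the documented signature; the random test image is purely illustrative:

```python
import numpy as np
from skimage.morphology import disk
from skimage.filters.rank import enhance_contrast_percentile

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # illustrative image
# Push each pixel toward the local max or min, where both extrema are
# computed only over the [0.05, 0.95] percentile interval of the
# neighborhood, making the result robust to isolated outliers.
out = enhance_contrast_percentile(img, disk(3), p0=0.05, p1=0.95)
```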
skimage.filters.rank.entropy(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local entropy. The entropy is computed using base 2 logarithm i.e. the filter returns the minimum number of bits needed to encode the local gray level distribution. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (float)
Output image. References
1
https://en.wikipedia.org/wiki/Entropy_(information_theory) Examples >>> from skimage import data
>>> from skimage.filters.rank import entropy
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> ent = entropy(img, disk(5))
>>> ent_vol = entropy(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.entropy |
skimage.filters.rank.equalize(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Equalize image using local histogram. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import equalize
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> equ = equalize(img, disk(5))
>>> equ_vol = equalize(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.equalize |
skimage.filters.rank.geometric_mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local geometric mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
Gonzalez, R. C. and Wood, R. E. “Digital Image Processing (3rd Edition).” Prentice-Hall Inc, 2006. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import geometric_mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = geometric_mean(img, disk(5))
>>> avg_vol = geometric_mean(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.geometric_mean |
skimage.filters.rank.gradient(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local gradient of an image (i.e. local maximum - local minimum). Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import gradient
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = gradient(img, disk(5))
>>> out_vol = gradient(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.gradient |
skimage.filters.rank.gradient_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return local gradient of an image (i.e. local maximum - local minimum). Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.gradient_percentile |
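No example is shipped for this variant. A minimal sketch, assuming the documented signature; the random test image is purely illustrative:

```python
import numpy as np
from skimage.morphology import disk
from skimage.filters.rank import gradient_percentile

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # illustrative image
# Local max minus local min, computed only over greyvalues inside the
# [0.1, 0.9] percentile interval, which suppresses outlier-driven edges.
out = gradient_percentile(img, disk(3), p0=0.1, p1=0.9)
```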
skimage.filters.rank.majority(image, selem, *, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
The majority filter assigns to each pixel the most occurring value within its neighborhood. Parameters
imagendarray
Image array (uint8, uint16 array).
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
outndarray (integer or float), optional
If None, a new array will be allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.filters.rank import majority
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> maj_img = majority(img, disk(5))
>>> maj_img_vol = majority(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.majority |
skimage.filters.rank.maximum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local maximum of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.morphology.dilation
Notes The lower algorithm complexity makes skimage.filters.rank.maximum more efficient for larger images and structuring elements. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import maximum
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = maximum(img, disk(5))
>>> out_vol = maximum(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.maximum |
skimage.filters.rank.mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = mean(img, disk(5))
>>> avg_vol = mean(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.mean |
skimage.filters.rank.mean_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Apply a flat kernel bilateral filter. This is an edge-preserving, noise-reducing filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element. Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element and having a greylevel inside this interval are averaged. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import mean_bilateral
>>> import numpy as np
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = mean_bilateral(img, disk(20), s0=10, s1=10)
skimage.filters.rank.mean_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return local mean of an image. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.mean_percentile |
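To illustrate the semantics described above, here is a naive NumPy sketch of a percentile-restricted local mean (a plain Python loop over square windows with edge padding, an assumption for illustration only — not the scikit-image implementation, which operates on local histograms):

```python
import numpy as np

def naive_mean_percentile(image, radius=1, p0=0.0, p1=1.0):
    # Local mean over greyvalues inside the [p0, p1] percentile
    # interval of each (2*radius+1) x (2*radius+1) neighborhood.
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            lo, hi = np.quantile(window, [p0, p1])
            vals = window[(window >= lo) & (window <= hi)]
            out[i, j] = vals.mean()
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
res = naive_mean_percentile(img)  # with p0=0, p1=1 this is a plain local mean
```

With the full interval [0, 1] every window value is kept, so the result reduces to an ordinary local mean; narrowing [p0, p1] trims outliers before averaging.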
skimage.filters.rank.median(image, selem=None, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local median of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s. If None, a full square of size 3 is used.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.filters.median
Implementation of a median filter that handles images with floating-point precision. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import median
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> med = median(img, disk(5))
>>> med_vol = median(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.median |
skimage.filters.rank.minimum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local minimum of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.morphology.erosion
Notes The lower algorithmic complexity makes skimage.filters.rank.minimum more efficient than skimage.morphology.erosion for larger images and structuring elements. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import minimum
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = minimum(img, disk(5))
>>> out_vol = minimum(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.minimum |
skimage.filters.rank.modal(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local mode of an image. The mode is the value that appears most often in the local histogram. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import modal
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = modal(img, disk(5))
>>> out_vol = modal(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.modal |
skimage.filters.rank.noise_filter(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Noise feature. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
N. Hashimoto et al. Referenceless image quality evaluation for whole slide imaging. J Pathol Inform 2012;3:9. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import noise_filter
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = noise_filter(img, disk(5))
>>> out_vol = noise_filter(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.noise_filter |
skimage.filters.rank.otsu(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local Otsu’s threshold value for each pixel. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
https://en.wikipedia.org/wiki/Otsu%27s_method Examples >>> from skimage import data
>>> from skimage.filters.rank import otsu
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> local_otsu = otsu(img, disk(5))
>>> thresh_image = img >= local_otsu
>>> local_otsu_vol = otsu(volume, ball(5))
>>> thresh_image_vol = volume >= local_otsu_vol | skimage.api.skimage.filters.rank#skimage.filters.rank.otsu |
skimage.filters.rank.percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0) [source]
Return local percentile of an image. Returns the value of the p0 lower percentile of the local greyvalue distribution. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0float in [0, …, 1]
Set the percentile value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.percentile |
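A naive NumPy sketch of the local percentile semantics (square window with edge padding is an assumption for illustration; the scikit-image implementation uses local histograms):

```python
import numpy as np

def naive_local_percentile(image, radius=1, p0=0.5):
    # Value of the p0-th percentile of each local greyvalue distribution.
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            out[i, j] = np.quantile(window, p0)
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
med = naive_local_percentile(img, p0=0.5)  # p0=0.5 gives a local median
```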
skimage.filters.rank.pop(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> import numpy as np
>>> import skimage.filters.rank as rank
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> rank.pop(img, square(3))
array([[4, 6, 6, 6, 4],
[6, 9, 9, 9, 6],
[6, 9, 9, 9, 6],
[6, 9, 9, 9, 6],
[4, 6, 6, 6, 4]], dtype=uint8) | skimage.api.skimage.filters.rank#skimage.filters.rank.pop |
skimage.filters.rank.pop_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Additionally pixels must have a greylevel inside the interval [g-s0, g+s1] where g is the greyvalue of the center pixel. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> import numpy as np
>>> import skimage.filters.rank as rank
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint16)
>>> rank.pop_bilateral(img, square(3), s0=10, s1=10)
array([[3, 4, 3, 4, 3],
[4, 4, 6, 4, 4],
[3, 6, 9, 6, 3],
[4, 4, 6, 4, 4],
[3, 4, 3, 4, 3]], dtype=uint16) | skimage.api.skimage.filters.rank#skimage.filters.rank.pop_bilateral |
skimage.filters.rank.pop_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.pop_percentile |
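A naive NumPy sketch of the pop_percentile semantics, counting per pixel the neighborhood values inside the percentile interval (the square window with edge padding is an illustrative assumption, not the scikit-image implementation):

```python
import numpy as np

def naive_pop_percentile(image, radius=1, p0=0.0, p1=1.0):
    # Count, per pixel, the neighborhood values falling inside the
    # [p0, p1] percentile interval of the local distribution.
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros(image.shape, dtype=int)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            lo, hi = np.quantile(window, [p0, p1])
            out[i, j] = int(((window >= lo) & (window <= hi)).sum())
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
pop = naive_pop_percentile(img)  # full interval: every window pixel counts
```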
skimage.filters.rank.subtract_mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return image subtracted from its local mean. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Notes Subtracting the mean value may introduce underflow. To compensate this potential underflow, the obtained difference is downscaled by a factor of 2 and shifted by n_bins / 2 - 1, the median value of the local histogram (n_bins = max(3, image.max()) +1 for 16-bits images and 256 otherwise). Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import subtract_mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = subtract_mean(img, disk(5))
>>> out_vol = subtract_mean(volume, ball(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.subtract_mean |
skimage.filters.rank.subtract_mean_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return image subtracted from its local mean. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.subtract_mean_percentile |
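A naive NumPy sketch of the subtract_mean_percentile semantics. Note that the real rank filter also rescales and offsets the difference to fit the integer dtype (see the notes under subtract_mean); that step is omitted here for clarity, and the square window with edge padding is an illustrative assumption:

```python
import numpy as np

def naive_subtract_mean_percentile(image, radius=1, p0=0.0, p1=1.0):
    # Center pixel minus the mean of neighborhood values inside the
    # [p0, p1] percentile interval (no rescaling, unlike the real filter).
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            lo, hi = np.quantile(window, [p0, p1])
            vals = window[(window >= lo) & (window <= hi)]
            out[i, j] = float(image[i, j]) - vals.mean()
    return out

img = np.full((5, 5), 7, dtype=np.uint8)
res = naive_subtract_mean_percentile(img)  # constant image -> all zeros
```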
skimage.filters.rank.sum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the local sum of pixels. Note that the sum may overflow depending on the data type of the input array. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> import numpy as np
>>> import skimage.filters.rank as rank
>>> img = np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> rank.sum(img, square(3))
array([[1, 2, 3, 2, 1],
[2, 4, 6, 4, 2],
[3, 6, 9, 6, 3],
[2, 4, 6, 4, 2],
[1, 2, 3, 2, 1]], dtype=uint8) | skimage.api.skimage.filters.rank#skimage.filters.rank.sum |
skimage.filters.rank.sum_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Apply a flat kernel bilateral filter. This is an edge-preserving, noise-reducing filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element (selem). Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element AND having a greylevel inside this interval are summed. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import sum_bilateral
>>> import numpy as np
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = sum_bilateral(img, disk(10), s0=10, s1=10) | skimage.api.skimage.filters.rank#skimage.filters.rank.sum_bilateral |
skimage.filters.rank.sum_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the local sum of pixels. Only greyvalues between percentiles [p0, p1] are considered in the filter. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.sum_percentile |
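A naive NumPy sketch of the sum_percentile semantics. Unlike the real filter, which keeps the input dtype and can overflow as noted above, this sketch accumulates in a wide dtype; the square window with edge padding is an illustrative assumption:

```python
import numpy as np

def naive_sum_percentile(image, radius=1, p0=0.0, p1=1.0):
    # Sum of neighborhood values inside the [p0, p1] percentile interval,
    # accumulated in int64 so this reference version cannot overflow.
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros(image.shape, dtype=np.int64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            lo, hi = np.quantile(window, [p0, p1])
            out[i, j] = int(window[(window >= lo) & (window <= hi)].sum())
    return out

img = np.ones((5, 5), dtype=np.uint8)
s = naive_sum_percentile(img)  # every 3x3 window of ones sums to 9
```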
skimage.filters.rank.threshold(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local threshold of an image. The resulting binary mask is True if the gray value of the center pixel is greater than the local mean. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage.morphology import square
>>> from skimage.filters.rank import threshold
>>> import numpy as np
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> threshold(img, square(3))
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 0, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=uint8) | skimage.api.skimage.filters.rank#skimage.filters.rank.threshold |
skimage.filters.rank.threshold_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0) [source]
Local threshold of an image. The resulting binary mask is True if the greyvalue of the center pixel is greater than the local mean. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0float in [0, …, 1]
Set the percentile value. Returns
out2-D array (same dtype as input image)
Output image. | skimage.api.skimage.filters.rank#skimage.filters.rank.threshold_percentile |
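A naive NumPy sketch of the threshold_percentile semantics: the center pixel is compared with the mean of neighborhood values at or above the local p0-th percentile. The interval interpretation, square window, and edge padding are illustrative assumptions, not the scikit-image implementation:

```python
import numpy as np

def naive_threshold_percentile(image, radius=1, p0=0.0):
    # Binary mask: 1 where the center pixel exceeds the mean of
    # neighborhood greyvalues at or above the p0-th local percentile.
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros(image.shape, dtype=np.uint8)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1].ravel()
            lo = np.quantile(window, p0)
            out[i, j] = np.uint8(image[i, j] > window[window >= lo].mean())
    return out

img = 255 * np.array([[0, 0, 0, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 0, 0, 0]], dtype=np.uint8)
mask = naive_threshold_percentile(img)  # p0=0 reduces to a plain local threshold
```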
skimage.filters.rank.tophat(image, selem, out=None, mask=None, shift_x=False, shift_y=False) [source]
Local top-hat of an image. This filter computes the morphological opening of the image and then subtracts the result from the original image. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Warns
Deprecated:
New in version 0.17. This function is deprecated and will be removed in scikit-image 0.19. This filter was misnamed, and we believe its usefulness is narrow. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import tophat
>>> img = data.camera()
>>> out = tophat(img, disk(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.tophat |
skimage.filters.rank.windowed_histogram(image, selem, out=None, mask=None, shift_x=False, shift_y=False, n_bins=None) [source]
Normalized sliding window histogram. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
n_binsint or None
The number of histogram bins. Will default to image.max() + 1 if None is passed. Returns
out3-D array (float)
Array of dimensions (H,W,N), where (H,W) are the dimensions of the input image and N is n_bins or image.max() + 1 if no value is provided as a parameter. Effectively, each pixel is a N-D feature vector that is the histogram. The sum of the elements in the feature vector will be 1, unless no pixels in the window were covered by both selem and mask, in which case all elements will be 0. Examples >>> from skimage import data
>>> from skimage.filters.rank import windowed_histogram
>>> from skimage.morphology import disk
>>> img = data.camera()
>>> hist_img = windowed_histogram(img, disk(5)) | skimage.api.skimage.filters.rank#skimage.filters.rank.windowed_histogram |
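To make the per-pixel feature vector concrete, here is a naive NumPy sketch of a normalized sliding-window histogram (square window with edge padding is an illustrative assumption, not the scikit-image implementation):

```python
import numpy as np

def naive_windowed_histogram(image, radius=1, n_bins=None):
    # Per-pixel normalized histogram of the square neighborhood:
    # output shape (H, W, n_bins), each feature vector summing to 1.
    if n_bins is None:
        n_bins = int(image.max()) + 1
    h, w = image.shape
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros((h, w, n_bins))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            counts = np.bincount(window.ravel(), minlength=n_bins)
            out[i, j] = counts / counts.sum()
    return out

img = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=np.uint8)
hist = naive_windowed_histogram(img)
```

Every feature vector sums to 1, matching the Returns description (the all-zero case arises only when the mask excludes the whole window, which this sketch does not model).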
skimage.filters.rank_order(image) [source]
Return an image of the same shape where each pixel is the index of the pixel value in the ascending order of the unique values of image, aka the rank-order value. Parameters
imagendarray
Returns
labelsndarray of type np.uint32, of shape image.shape
New array where each pixel has the rank-order value of the corresponding pixel in image. Pixel values are between 0 and n - 1, where n is the number of unique values in image.
original_values1-D ndarray
Unique original values of image. Examples >>> import numpy as np
>>> from skimage.filters import rank_order
>>> a = np.array([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
>>> a
array([[1, 4, 5],
[4, 4, 1],
[5, 1, 1]])
>>> rank_order(a)
(array([[0, 1, 2],
[1, 1, 0],
[2, 0, 0]], dtype=uint32), array([1, 4, 5]))
>>> b = np.array([-1., 2.5, 3.1, 2.5])
>>> rank_order(b)
(array([0, 1, 2, 1], dtype=uint32), array([-1. , 2.5, 3.1])) | skimage.api.skimage.filters#skimage.filters.rank_order |
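The rank-order transform can be reproduced with np.unique, which is a sketch of equivalent behavior rather than the actual implementation:

```python
import numpy as np

a = np.array([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
# np.unique returns the sorted unique values and, with return_inverse=True,
# the index of each pixel's value in that sorted array -- its rank order.
values, inverse = np.unique(a, return_inverse=True)
labels = inverse.reshape(a.shape).astype(np.uint32)
```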
skimage.filters.roberts(image, mask=None) [source]
Find the edge magnitude using Roberts’ cross operator. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Roberts’ Cross edge map. See also
sobel, scharr, prewitt, feature.canny
Examples >>> from skimage import data
>>> camera = data.camera()
>>> from skimage import filters
>>> edges = filters.roberts(camera) | skimage.api.skimage.filters#skimage.filters.roberts |
skimage.filters.roberts_neg_diag(image, mask=None) [source]
Find the cross edges of an image using Roberts’ cross operator. The kernel is applied to the input image to produce a separate measurement of the gradient component in one orientation. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Roberts edge map. Notes We use the following kernel: 0 1
-1 0 | skimage.api.skimage.filters#skimage.filters.roberts_neg_diag |
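A pure-NumPy sketch of applying the anti-diagonal kernel above via cross-correlation (the real filter's boundary handling and correlation/convolution convention may differ; the last row and column are simply left at zero here):

```python
import numpy as np

def roberts_neg_diag_naive(image):
    # Cross-correlate with the anti-diagonal kernel [[0, 1], [-1, 0]]:
    # out(i, j) = image(i, j+1) - image(i+1, j).
    image = image.astype(float)
    out = np.zeros_like(image)
    out[:-1, :-1] = image[:-1, 1:] - image[1:, :-1]
    return out

img = np.array([[0, 0, 1, 1]] * 4, dtype=float)  # vertical step edge
edge = roberts_neg_diag_naive(img)
```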
skimage.filters.roberts_pos_diag(image, mask=None) [source]
Find the cross edges of an image using Roberts’ cross operator. The kernel is applied to the input image to produce a separate measurement of the gradient component in one orientation. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Roberts edge map. Notes We use the following kernel: 1 0
0 -1 | skimage.api.skimage.filters#skimage.filters.roberts_pos_diag |
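A pure-NumPy sketch of applying the diagonal kernel above via cross-correlation (again, boundary handling and the correlation/convolution convention are illustrative assumptions):

```python
import numpy as np

def roberts_pos_diag_naive(image):
    # Cross-correlate with the diagonal kernel [[1, 0], [0, -1]]:
    # out(i, j) = image(i, j) - image(i+1, j+1).
    image = image.astype(float)
    out = np.zeros_like(image)
    out[:-1, :-1] = image[:-1, :-1] - image[1:, 1:]
    return out

img = np.array([[0, 0, 1, 1]] * 4, dtype=float)  # vertical step edge
edge = roberts_pos_diag_naive(img)
```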
skimage.filters.sato(image, sigmas=range(1, 10, 2), black_ridges=True, mode=None, cval=0) [source]
Filter an image with the Sato tubeness filter. This filter can be used to detect continuous ridges, e.g. tubes, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Calculates the eigenvectors of the Hessian to compute the similarity of an image region to tubes, according to the method described in [1]. Parameters
image(N, M[, P]) ndarray
Array with input image data.
sigmasiterable of floats, optional
Sigmas used as scales of filter.
black_ridgesboolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out(N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
frangi
hessian
References
1
Sato, Y., Nakajima, S., Shiraga, N., Atsumi, H., Yoshida, S., Koller, T., …, Kikinis, R. (1998). Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical image analysis, 2(2), 143-168. DOI:10.1016/S1361-8415(98)80009-1 | skimage.api.skimage.filters#skimage.filters.sato |
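The across-scales aggregation in the Returns description (per-pixel maximum over all sigmas) can be sketched generically. The per-scale box response below is only a stand-in for the actual Hessian-eigenvalue tubeness measure of [1], which is much more involved:

```python
import numpy as np

def multiscale_max(image, response_fn, sigmas):
    # Sato-style aggregation: evaluate a per-scale response and keep,
    # per pixel, the maximum across all scales.
    stack = np.stack([response_fn(image, s) for s in sigmas])
    return stack.max(axis=0)

def box_response(image, sigma):
    # Stand-in per-scale response: mean over a (2*sigma + 1) box.
    k = 2 * int(sigma) + 1
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

img = np.zeros((7, 7))
img[3, :] = 1.0  # a one-pixel-wide horizontal "tube"
resp = multiscale_max(img, box_response, sigmas=range(1, 4))
```

The strongest response lands on the tube itself, at the smallest scale that still covers it; the real filter behaves analogously with its tubeness measure.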
skimage.filters.scharr(image, mask=None, *, axis=None, mode='reflect', cval=0.0) [source]
Find the edge magnitude using the Scharr transform. Parameters
imagearray
The input image.
maskarray of bool, optional
Clip the output image to this mask. (Values where mask=0 will be set to 0.)
axisint or sequence of int, optional
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as: sch_mag = np.sqrt(sum([scharr(image, axis=i)**2
for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence.
modestr or sequence of str, optional
The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
cvalfloat, optional
When mode is 'constant', this is the constant used in values outside the boundary of the image data. Returns
outputarray of float
The Scharr edge map. See also
sobel, prewitt, canny
Notes The Scharr operator has a better rotation invariance than other edge filters such as the Sobel or the Prewitt operators. References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
2
https://en.wikipedia.org/wiki/Sobel_operator#Alternative_operators Examples >>> from skimage import data
>>> from skimage import filters
>>> camera = data.camera()
>>> edges = filters.scharr(camera) | skimage.api.skimage.filters#skimage.filters.scharr |
skimage.filters.scharr_h(image, mask=None) [source]
Find the horizontal edges of an image using the Scharr transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Scharr edge map. Notes We use the following kernel: 3 10 3
0 0 0
-3 -10 -3
References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives. | skimage.api.skimage.filters#skimage.filters.scharr_h |