skimage.filters.scharr_v(image, mask=None) [source]
Find the vertical edges of an image using the Scharr transform. Parameters
image2-D array
Image to process
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Scharr edge map. Notes We use the following kernel: 3 0 -3
10 0 -10
3 0 -3
References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives. | skimage.api.skimage.filters#skimage.filters.scharr_v |
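The kernel shown in the Notes can be applied directly as a correlation mask; a minimal sketch with SciPy (unnormalized, so magnitudes differ from skimage's output by a constant scale factor):

```python
import numpy as np
from scipy.ndimage import correlate

# Vertical Scharr kernel exactly as printed in the Notes above
# (skimage normalizes its kernels, so values differ by a constant factor).
kernel_v = np.array([[ 3, 0,  -3],
                     [10, 0, -10],
                     [ 3, 0,  -3]], dtype=float)

# Toy image with a vertical step edge between columns 1 and 2.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

edges = correlate(image, kernel_v, mode='reflect')
# The response is nonzero only along the step edge.
```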
skimage.filters.sobel(image, mask=None, *, axis=None, mode='reflect', cval=0.0) [source]
Find edges in an image using the Sobel filter. Parameters
imagearray
The input image.
maskarray of bool, optional
Clip the output image to this mask. (Values where mask=0 will be set to 0.)
axisint or sequence of int, optional
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as: sobel_mag = np.sqrt(sum([sobel(image, axis=i)**2
for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence.
modestr or sequence of str, optional
The boundary mode for the convolution. See scipy.ndimage.convolve for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
cvalfloat, optional
When mode is 'constant', this is the constant used in values outside the boundary of the image data. Returns
outputarray of float
The Sobel edge map. See also
scharr, prewitt, canny
References
1
D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
2
https://en.wikipedia.org/wiki/Sobel_operator Examples >>> from skimage import data
>>> from skimage import filters
>>> camera = data.camera()
>>> edges = filters.sobel(camera) | skimage.api.skimage.filters#skimage.filters.sobel |
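The magnitude definition given for the axis parameter can be reproduced, up to skimage's kernel normalization, with SciPy's per-axis Sobel derivative; a sketch:

```python
import numpy as np
from scipy import ndimage as ndi

# Edge magnitude as defined in the axis parameter description, using
# scipy.ndimage.sobel along each axis (scaling differs from skimage,
# which normalizes its kernels).
rng = np.random.default_rng(0)
image = rng.random((8, 8))

sobel_mag = np.sqrt(
    sum(ndi.sobel(image, axis=i) ** 2 for i in range(image.ndim)) / image.ndim
)
```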
skimage.filters.sobel_h(image, mask=None) [source]
Find the horizontal edges of an image using the Sobel transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Sobel edge map. Notes We use the following kernel: 1 2 1
0 0 0
-1 -2 -1 | skimage.api.skimage.filters#skimage.filters.sobel_h |
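The kernel from the Notes used as a correlation mask; a minimal SciPy sketch (unnormalized, so values differ from skimage's scaled output by a constant factor):

```python
import numpy as np
from scipy.ndimage import correlate

# Horizontal Sobel kernel exactly as printed in the Notes above.
kernel_h = np.array([[ 1,  2,  1],
                     [ 0,  0,  0],
                     [-1, -2, -1]], dtype=float)

# Toy image with a horizontal step edge between rows 1 and 2.
image = np.zeros((5, 5))
image[2:, :] = 1.0

edges = correlate(image, kernel_h, mode='reflect')
```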
skimage.filters.sobel_v(image, mask=None) [source]
Find the vertical edges of an image using the Sobel transform. Parameters
image2-D array
Image to process.
mask2-D array, optional
An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns
output2-D array
The Sobel edge map. Notes We use the following kernel: 1 0 -1
2 0 -2
1 0 -1 | skimage.api.skimage.filters#skimage.filters.sobel_v |
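The horizontal and vertical kernels documented above combine into a gradient magnitude, which is what the full sobel() filter computes up to normalization; a sketch:

```python
import numpy as np
from scipy.ndimage import correlate

# Vertical Sobel kernel from the Notes; its transpose is the horizontal
# kernel used by sobel_h.
kv = np.array([[1, 0, -1],
               [2, 0, -2],
               [1, 0, -1]], dtype=float)
kh = kv.T

rng = np.random.default_rng(0)
image = rng.random((8, 8))

magnitude = np.hypot(correlate(image, kh, mode='reflect'),
                     correlate(image, kv, mode='reflect'))
```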
skimage.filters.threshold_isodata(image=None, nbins=256, return_all=False, *, hist=None) [source]
Return threshold value(s) based on ISODATA method. Histogram-based threshold, known as Ridler-Calvard method or inter-means. Threshold values returned satisfy the following equality: threshold = (image[image <= threshold].mean() +
image[image > threshold].mean()) / 2.0
That is, returned thresholds are intensities that separate the image into two groups of pixels, where the threshold intensity is midway between the mean intensities of these groups. For integer images, the above equality holds to within one; for floating-point images, the equality holds to within the histogram bin-width. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored. Parameters
image(N, M) ndarray, optional
Input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
return_allbool, optional
If False (default), return only the lowest threshold that satisfies the above equality. If True, return all valid thresholds.
histarray, or 2-tuple of arrays, optional
Histogram to determine the threshold from and a corresponding array of bin center intensities. Alternatively, only the histogram can be passed. Returns
thresholdfloat or int or array
Threshold value(s). References
1
Ridler, TW & Calvard, S (1978), “Picture thresholding using an iterative selection method” IEEE Transactions on Systems, Man and Cybernetics 8: 630-632, DOI:10.1109/TSMC.1978.4310039
2
Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf DOI:10.1117/1.1631315
3
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import coins
>>> image = coins()
>>> thresh = threshold_isodata(image)
>>> binary = image > thresh | skimage.api.skimage.filters#skimage.filters.threshold_isodata |
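The defining equality can be verified directly on a toy image, using only the formula from above (no skimage call):

```python
import numpy as np

# A bimodal toy image: the ISODATA threshold lies midway between the
# mean intensities of the two pixel groups it separates.
image = np.concatenate([np.full(50, 20), np.full(50, 80)]).astype(np.uint8)

t = 50  # candidate threshold
midpoint = (image[image <= t].mean() + image[image > t].mean()) / 2.0
# For this image the equality holds exactly: midpoint == t.
```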
skimage.filters.threshold_li(image, *, tolerance=None, initial_guess=None, iter_callback=None) [source]
Compute threshold value by Li’s iterative Minimum Cross Entropy method. Parameters
imagendarray
Input image.
tolerancefloat, optional
Finish the computation when the change in the threshold in an iteration is less than this value. By default, this is half the smallest difference between intensity values in image.
initial_guessfloat or Callable[[array[float]], float], optional
Li’s iterative method uses gradient descent to find the optimal threshold. If the image intensity histogram contains more than two modes (peaks), the gradient descent could get stuck in a local optimum. An initial guess for the iteration can help the algorithm find the globally-optimal threshold. A float value defines a specific start point, while a callable should take in an array of image intensities and return a float value. Example valid callables include numpy.mean (default), lambda arr: numpy.quantile(arr, 0.95), or even skimage.filters.threshold_otsu().
iter_callbackCallable[[float], Any], optional
A function that will be called on the threshold at every iteration of the algorithm. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
Li C.H. and Lee C.K. (1993) “Minimum Cross Entropy Thresholding” Pattern Recognition, 26(4): 617-625 DOI:10.1016/0031-3203(93)90115-D
2
Li C.H. and Tam P.K.S. (1998) “An Iterative Algorithm for Minimum Cross Entropy Thresholding” Pattern Recognition Letters, 18(8): 771-776 DOI:10.1016/S0167-8655(98)00057-9
3
Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165 DOI:10.1117/1.1631315
4
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_li(image)
>>> binary = image > thresh | skimage.api.skimage.filters#skimage.filters.threshold_li |
skimage.filters.threshold_local(image, block_size, method='gaussian', offset=0, mode='reflect', param=None, cval=0) [source]
Compute a threshold mask image based on local pixel neighborhood. Also known as adaptive or dynamic thresholding. The threshold value is the weighted mean for the local neighborhood of a pixel subtracted by a constant. Alternatively the threshold can be determined dynamically by a given function, using the ‘generic’ method. Parameters
image(N, M) ndarray
Input image.
block_sizeint
Odd size of pixel neighborhood which is used to calculate the threshold value (e.g. 3, 5, 7, …, 21, …).
method{‘generic’, ‘gaussian’, ‘mean’, ‘median’}, optional
Method used to determine adaptive threshold for local neighbourhood in weighted mean image. ‘generic’: use custom function (see param parameter) ‘gaussian’: apply gaussian filter (see param parameter for custom sigma value) ‘mean’: apply arithmetic mean filter ‘median’: apply median rank filter By default the ‘gaussian’ method is used.
offsetfloat, optional
Constant subtracted from weighted mean of neighborhood to calculate the local threshold value. Default offset is 0.
mode{‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. Default is ‘reflect’.
param{int, function}, optional
Either specify sigma for 'gaussian' method or function object for 'generic' method. This function takes the flat array of local neighbourhood as a single argument and returns the calculated threshold for the centre pixel.
cvalfloat, optional
Value to fill past edges of input if mode is ‘constant’. Returns
threshold(N, M) ndarray
Threshold image. All pixels in the input image higher than the corresponding pixel in the threshold image are considered foreground. References
1
https://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#adaptivethreshold Examples >>> from skimage.data import camera
>>> image = camera()[:50, :50]
>>> binary_image1 = image > threshold_local(image, 15, 'mean')
>>> func = lambda arr: arr.mean()
>>> binary_image2 = image > threshold_local(image, 15, 'generic',
... param=func) | skimage.api.skimage.filters#skimage.filters.threshold_local |
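The 'mean' method amounts to a box-filter mean minus the offset; a sketch with SciPy (border handling may differ slightly from skimage's):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# 'mean' method sketched by hand: local threshold = block mean - offset.
rng = np.random.default_rng(0)
image = rng.random((20, 20))
block_size, offset = 5, 0.01

local_thresh = uniform_filter(image, size=block_size, mode='reflect') - offset
binary = image > local_thresh
```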
skimage.filters.threshold_mean(image) [source]
Return threshold value based on the mean of grayscale values. Parameters
image(N, M[, …, P]) ndarray
Grayscale input image. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993. DOI:10.1006/cgip.1993.1040 Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_mean(image)
>>> binary = image > thresh | skimage.api.skimage.filters#skimage.filters.threshold_mean |
skimage.filters.threshold_minimum(image=None, nbins=256, max_iter=10000, *, hist=None) [source]
Return threshold value based on minimum method. The histogram of the input image is computed if not provided and smoothed until there are only two maxima. Then the minimum in between is the threshold value. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored. Parameters
image(M, N) ndarray, optional
Input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
max_iterint, optional
Maximum number of iterations to smooth the histogram.
histarray, or 2-tuple of arrays, optional
Histogram to determine the threshold from and a corresponding array of bin center intensities. Alternatively, only the histogram can be passed. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. Raises
RuntimeError
If unable to find two local maxima in the histogram or if the smoothing takes more than 1e4 iterations. References
1
C. A. Glasbey, “An analysis of histogram-based thresholding algorithms,” CVGIP: Graphical Models and Image Processing, vol. 55, pp. 532-537, 1993.
2
Prewitt, JMS & Mendelsohn, ML (1966), “The analysis of cell images”, Annals of the New York Academy of Sciences 128: 1035-1053 DOI:10.1111/j.1749-6632.1965.tb11715.x Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_minimum(image)
>>> binary = image > thresh | skimage.api.skimage.filters#skimage.filters.threshold_minimum |
skimage.filters.threshold_multiotsu(image, classes=3, nbins=256) [source]
Generate classes-1 threshold values to divide gray levels in image. The threshold values are chosen to maximize the total sum of pairwise variances between the thresholded graylevel classes. See Notes and [1] for more details. Parameters
image(N, M) ndarray
Grayscale input image.
classesint, optional
Number of classes to be thresholded, i.e. the number of resulting regions.
nbinsint, optional
Number of bins used to calculate the histogram. This value is ignored for integer arrays. Returns
thresharray
Array containing the threshold values for the desired classes. Raises
ValueError
If image contains fewer grayscale values than the desired number of classes. Notes This implementation relies on a Cython function whose complexity is \(O\left(\frac{Ch^{C-1}}{(C-1)!}\right)\), where \(h\) is the number of histogram bins and \(C\) is the number of classes desired. The input image must be grayscale. References
1
Liao, P-S., Chen, T-S. and Chung, P-C., “A fast algorithm for multilevel thresholding”, Journal of Information Science and Engineering 17 (5): 713-727, 2001. Available at: <https://ftp.iis.sinica.edu.tw/JISE/2001/200109_01.pdf> DOI:10.6688/JISE.2001.17.5.1
2
Tosa, Y., “Multi-Otsu Threshold”, a java plugin for ImageJ. Available at: <http://imagej.net/plugins/download/Multi_OtsuThreshold.java> Examples >>> from skimage.color import label2rgb
>>> from skimage import data
>>> image = data.camera()
>>> thresholds = threshold_multiotsu(image)
>>> regions = np.digitize(image, bins=thresholds)
>>> regions_colorized = label2rgb(regions) | skimage.api.skimage.filters#skimage.filters.threshold_multiotsu |
skimage.filters.threshold_niblack(image, window_size=15, k=0.2) [source]
Applies Niblack local threshold to an array. A threshold T is calculated for every pixel in the image using the following formula: T = m(x,y) - k * s(x,y)
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. Parameters
imagendarray
Input image.
window_sizeint, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
kfloat, optional
Value of parameter k in threshold formula. Returns
threshold(N, M) ndarray
Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground. Notes This algorithm is originally designed for text recognition. The Bradley threshold is a particular case of the Niblack one, being equivalent to >>> from skimage import data
>>> image = data.page()
>>> q = 1
>>> threshold_image = threshold_niblack(image, k=0) * q
for some value q. By default, Bradley and Roth use q=1. References
1
W. Niblack, An introduction to Digital Image Processing, Prentice-Hall, 1986.
2
D. Bradley and G. Roth, “Adaptive thresholding using Integral Image”, Journal of Graphics Tools 12(2), pp. 13-21, 2007. DOI:10.1080/2151237X.2007.10129236 Examples >>> from skimage import data
>>> image = data.page()
>>> threshold_image = threshold_niblack(image, window_size=7, k=0.1) | skimage.api.skimage.filters#skimage.filters.threshold_niblack |
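The formula T = m(x,y) - k * s(x,y) can be sketched with box-window local statistics (skimage's exact border handling may differ):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Niblack's formula: local mean minus k times local standard deviation.
def niblack_sketch(image, window_size=15, k=0.2):
    m = uniform_filter(image, window_size, mode='reflect')
    m2 = uniform_filter(image * image, window_size, mode='reflect')
    s = np.sqrt(np.clip(m2 - m * m, 0, None))  # local std deviation
    return m - k * s
```

On a constant image the local standard deviation is zero, so the threshold equals the pixel value itself.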
skimage.filters.threshold_otsu(image=None, nbins=256, *, hist=None) [source]
Return threshold value based on Otsu’s method. Either image or hist must be provided. If hist is provided, the actual histogram of the image is ignored. Parameters
image(N, M) ndarray, optional
Grayscale input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
histarray, or 2-tuple of arrays, optional
Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. An alternative use of this function is to pass it only hist. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. Notes The input image must be grayscale. References
1
Wikipedia, https://en.wikipedia.org/wiki/Otsu%27s_method Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_otsu(image)
>>> binary = image <= thresh | skimage.api.skimage.filters#skimage.filters.threshold_otsu |
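Otsu's criterion, choosing the split that maximizes between-class variance of the histogram, can be sketched by brute force (illustrative only; skimage's implementation is vectorized and its tie-breaking may differ):

```python
import numpy as np

def otsu_sketch(image, nbins=256):
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            # keep the bin center just below the best split
            best_var, best_t = var_between, centers[i - 1]
    return best_t
```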
skimage.filters.threshold_sauvola(image, window_size=15, k=0.2, r=None) [source]
Applies Sauvola local threshold to an array. Sauvola is a modification of Niblack technique. In the original method a threshold T is calculated for every pixel in the image using the following formula: T = m(x,y) * (1 + k * ((s(x,y) / R) - 1))
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. R is the maximum standard deviation of a greyscale image. Parameters
imagendarray
Input image.
window_sizeint, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
kfloat, optional
Value of the positive parameter k.
rfloat, optional
Value of R, the dynamic range of standard deviation. If None, set to half of the image dtype range.
threshold(N, M) ndarray
Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground. Notes This algorithm is originally designed for text recognition. References
1
J. Sauvola and M. Pietikainen, “Adaptive document image binarization,” Pattern Recognition 33(2), pp. 225-236, 2000. DOI:10.1016/S0031-3203(99)00055-2 Examples >>> from skimage import data
>>> image = data.page()
>>> t_sauvola = threshold_sauvola(image, window_size=15, k=0.2)
>>> binary_image = image > t_sauvola | skimage.api.skimage.filters#skimage.filters.threshold_sauvola |
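The formula T = m(x,y) * (1 + k * ((s(x,y) / R) - 1)) can be sketched with box-window statistics; for a float image in [0, 1], R defaults to 0.5 (half the dtype range, as described above):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sauvola's formula with box-window local mean and standard deviation
# (skimage's exact border handling may differ).
def sauvola_sketch(image, window_size=15, k=0.2, r=0.5):
    m = uniform_filter(image, window_size, mode='reflect')
    m2 = uniform_filter(image * image, window_size, mode='reflect')
    s = np.sqrt(np.clip(m2 - m * m, 0, None))
    return m * (1 + k * (s / r - 1))
```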
skimage.filters.threshold_triangle(image, nbins=256) [source]
Return threshold value based on the triangle algorithm. Parameters
image(N, M[, …, P]) ndarray
Grayscale input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
Zack, G. W., Rogers, W. E. and Latt, S. A., 1977, Automatic Measurement of Sister Chromatid Exchange Frequency, Journal of Histochemistry and Cytochemistry 25 (7), pp. 741-753 DOI:10.1177/25.7.70454
2
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_triangle(image)
>>> binary = image > thresh | skimage.api.skimage.filters#skimage.filters.threshold_triangle |
skimage.filters.threshold_yen(image=None, nbins=256, *, hist=None) [source]
Return threshold value based on Yen’s method. Either image or hist must be provided. In case hist is given, the actual histogram of the image is ignored. Parameters
image(N, M) ndarray, optional
Input image.
nbinsint, optional
Number of bins used to calculate histogram. This value is ignored for integer arrays.
histarray, or 2-tuple of arrays, optional
Histogram from which to determine the threshold, and optionally a corresponding array of bin center intensities. An alternative use of this function is to pass it only hist. Returns
thresholdfloat
Upper threshold value. All pixels with an intensity higher than this value are assumed to be foreground. References
1
Yen J.C., Chang F.J., and Chang S. (1995) “A New Criterion for Automatic Multilevel Thresholding” IEEE Trans. on Image Processing, 4(3): 370-378. DOI:10.1109/83.366472
2
Sezgin M. and Sankur B. (2004) “Survey over Image Thresholding Techniques and Quantitative Performance Evaluation” Journal of Electronic Imaging, 13(1): 146-165, DOI:10.1117/1.1631315 http://www.busim.ee.boun.edu.tr/~sankur/SankurFolder/Threshold_survey.pdf
3
ImageJ AutoThresholder code, http://fiji.sc/wiki/index.php/Auto_Threshold Examples >>> from skimage.data import camera
>>> image = camera()
>>> thresh = threshold_yen(image)
>>> binary = image <= thresh | skimage.api.skimage.filters#skimage.filters.threshold_yen |
skimage.filters.try_all_threshold(image, figsize=(8, 5), verbose=True) [source]
Returns a figure comparing the outputs of different thresholding methods. Parameters
image(N, M) ndarray
Input image.
figsizetuple, optional
Figure size (in inches).
verbosebool, optional
Print function name for each method. Returns
fig, axtuple
Matplotlib figure and axes. Notes The following algorithms are used: isodata li mean minimum otsu triangle yen Examples >>> from skimage.data import text
>>> fig, ax = try_all_threshold(text(), figsize=(10, 6), verbose=False) | skimage.api.skimage.filters#skimage.filters.try_all_threshold |
skimage.filters.unsharp_mask(image, radius=1.0, amount=1.0, multichannel=False, preserve_range=False) [source]
Unsharp masking filter. The sharp details are identified as the difference between the original image and its blurred version. These details are then scaled, and added back to the original image. Parameters
image[P, …, ]M[, N][, C] ndarray
Input image.
radiusscalar or sequence of scalars, optional
If a scalar is given, then its value is used for all dimensions. If sequence is given, then there must be exactly one radius for each dimension except the last dimension for multichannel images. Note that 0 radius means no blurring, and negative values are not allowed.
amountscalar, optional
The details will be amplified with this factor. The factor could be 0 or negative. Typically, it is a small positive number, e.g. 1.0.
multichannelbool, optional
If True, the last image dimension is considered as a color channel, otherwise as spatial. Color channels are processed individually.
preserve_rangebool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns
output[P, …, ]M[, N][, C] ndarray of float
Image with unsharp mask applied. Notes Unsharp masking is an image sharpening technique. It is a linear image operation, and numerically stable, unlike deconvolution which is an ill-posed problem. Because of this stability, it is often preferred over deconvolution. The main idea is as follows: sharp details are identified as the difference between the original image and its blurred version. These details are added back to the original image after a scaling step: enhanced image = original + amount * (original - blurred) When applying this filter to several color layers independently, color bleeding may occur. More visually pleasing result can be achieved by processing only the brightness/lightness/intensity channel in a suitable color space such as HSV, HSL, YUV, or YCbCr. Unsharp masking is described in most introductory digital image processing books. This implementation is based on [1]. References
1
Maria Petrou, Costas Petrou “Image Processing: The Fundamentals”, (2010), ed ii., page 357, ISBN 13: 9781119994398 DOI:10.1002/9781119994398
2
Wikipedia. Unsharp masking https://en.wikipedia.org/wiki/Unsharp_masking Examples >>> array = np.ones(shape=(5,5), dtype=np.uint8)*100
>>> array[2,2] = 120
>>> array
array([[100, 100, 100, 100, 100],
[100, 100, 100, 100, 100],
[100, 100, 120, 100, 100],
[100, 100, 100, 100, 100],
[100, 100, 100, 100, 100]], dtype=uint8)
>>> np.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[0.39, 0.39, 0.39, 0.39, 0.39],
[0.39, 0.39, 0.38, 0.39, 0.39],
[0.39, 0.38, 0.53, 0.38, 0.39],
[0.39, 0.39, 0.38, 0.39, 0.39],
[0.39, 0.39, 0.39, 0.39, 0.39]])
>>> array = np.ones(shape=(5,5), dtype=np.int8)*100
>>> array[2,2] = 127
>>> np.around(unsharp_mask(array, radius=0.5, amount=2),2)
array([[0.79, 0.79, 0.79, 0.79, 0.79],
[0.79, 0.78, 0.75, 0.78, 0.79],
[0.79, 0.75, 1. , 0.75, 0.79],
[0.79, 0.78, 0.75, 0.78, 0.79],
[0.79, 0.79, 0.79, 0.79, 0.79]])
>>> np.around(unsharp_mask(array, radius=0.5, amount=2, preserve_range=True), 2)
array([[100. , 100. , 99.99, 100. , 100. ],
[100. , 99.39, 95.48, 99.39, 100. ],
[ 99.99, 95.48, 147.59, 95.48, 99.99],
[100. , 99.39, 95.48, 99.39, 100. ],
[100. , 100. , 99.99, 100. , 100. ]]) | skimage.api.skimage.filters#skimage.filters.unsharp_mask |
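The core step from the Notes, enhanced = original + amount * (original - blurred), can be sketched directly (skimage additionally rescales the input via img_as_float unless preserve_range=True):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Unsharp masking: scale the detail layer (image minus its blurred copy)
# and add it back to the image.
def unsharp_sketch(image, radius=1.0, amount=1.0):
    blurred = gaussian_filter(image, sigma=radius)
    return image + amount * (image - blurred)
```

A flat image has no details, so it passes through unchanged.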
skimage.filters.wiener(data, impulse_response=None, filter_params={}, K=0.25, predefined_filter=None) [source]
Minimum Mean Square Error (Wiener) inverse filter. Parameters
data(M,N) ndarray
Input data.
Kfloat or (M,N) ndarray
Ratio between power spectrum of noise and undegraded image.
impulse_responsecallable f(r, c, **filter_params)
Impulse response of the filter. See LPIFilter2D.__init__.
filter_paramsdict
Additional keyword parameters to the impulse_response function. Other Parameters
predefined_filterLPIFilter2D
If you need to apply the same filter multiple times over different images, construct the LPIFilter2D and specify it here. | skimage.api.skimage.filters#skimage.filters.wiener |
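A minimal frequency-domain sketch of the Wiener formula itself, assuming a known frequency response H; this bypasses skimage's LPIFilter2D machinery and is purely illustrative:

```python
import numpy as np

# Wiener inverse filter in the frequency domain:
#   F_hat = conj(H) / (|H|^2 + K) * G
# where G is the degraded image's spectrum and K the noise-to-signal
# power ratio.
def wiener_sketch(degraded, H, K=0.25):
    G = np.fft.fft2(degraded)
    restoration = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(restoration * G))
```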
skimage.filters.window(window_type, shape, warp_kwargs=None) [source]
Return an n-dimensional window of a given size and dimensionality. Parameters
window_typestring, float, or tuple
The type of window to be created. Any window type supported by scipy.signal.get_window is allowed here. See notes below for a current list, or the SciPy documentation for the version of SciPy on your machine.
shapetuple of int or int
The shape of the window along each axis. If an integer is provided, a 1D window is generated.
warp_kwargsdict
Keyword arguments passed to skimage.transform.warp (e.g., warp_kwargs={'order':3} to change interpolation method). Returns
nd_windowndarray
A window of the specified shape. dtype is np.double. Notes This function is based on scipy.signal.get_window and thus can access all of the window types available to that function (e.g., "hann", "boxcar"). Note that certain window types require parameters that have to be supplied with the window name as a tuple (e.g., ("tukey", 0.8)). If only a float is supplied, it is interpreted as the beta parameter of the Kaiser window. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.get_window.html for more details. Note that this function generates a double precision array of the specified shape and can thus generate very large arrays that consume a large amount of available memory. The approach taken here to create nD windows is to first calculate the Euclidean distance from the center of the intended nD window to each position in the array. That distance is used to sample, with interpolation, from a 1D window returned from scipy.signal.get_window. The method of interpolation can be changed with the order keyword argument passed to skimage.transform.warp. Some coordinates in the output window will be outside of the original signal; these will be filled in with zeros. Window types: - boxcar - triang - blackman - hamming - hann - bartlett - flattop - parzen - bohman - blackmanharris - nuttall - barthann - kaiser (needs beta) - gaussian (needs standard deviation) - general_gaussian (needs power, width) - slepian (needs width) - dpss (needs normalized half-bandwidth) - chebwin (needs attenuation) - exponential (needs decay scale) - tukey (needs taper fraction) References
1
Two-dimensional window design, Wikipedia, https://en.wikipedia.org/wiki/Two_dimensional_window_design Examples Return a Hann window with shape (512, 512): >>> from skimage.filters import window
>>> w = window('hann', (512, 512))
Return a Kaiser window with beta parameter of 16 and shape (256, 256, 35): >>> w = window(16, (256, 256, 35))
Return a Tukey window with an alpha parameter of 0.8 and shape (100, 300): >>> w = window(('tukey', 0.8), (100, 300)) | skimage.api.skimage.filters#skimage.filters.window |
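The nD construction described in the Notes can be sketched with nearest-neighbor sampling in place of skimage's warp-based interpolation: sample a 1D window by Euclidean distance from the array center.

```python
import numpy as np
from scipy.signal import get_window

# Build an nD window by sampling a symmetric 1D profile at the Euclidean
# distance of each array position from the center; positions outside the
# original 1D support fall back to the profile's (zero) endpoint.
def window_sketch(window_type, shape, n=1001):
    w1d = get_window(window_type, n, fftbins=False)  # symmetric profile
    grids = np.meshgrid(*[np.linspace(-1, 1, s) for s in shape],
                        indexing='ij')
    dist = np.sqrt(sum(g * g for g in grids))  # 0 at center, 1 at edges
    idx = np.minimum(((n - 1) / 2 * (1 + dist)).round().astype(int), n - 1)
    return w1d[idx]
```

For example, window_sketch('hann', (21, 21)) yields a 2D Hann window peaking at the center and falling to zero outside the inscribed circle.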
Module: future Functionality with an experimental API. Although you can count on the functions in this package being around in the future, the API may change with any version update and will not follow the skimage two-version deprecation path. Therefore, use the functions herein with care, and do not use them in production code that will depend on updated skimage versions.
skimage.future.fit_segmenter(labels, …) Segmentation using labeled parts of the image and a classifier.
skimage.future.manual_lasso_segmentation(image) Return a label image based on freeform selections made with the mouse.
skimage.future.manual_polygon_segmentation(image) Return a label image based on polygon selections made with the mouse.
skimage.future.predict_segmenter(features, clf) Segmentation of images using a pretrained classifier.
skimage.future.TrainableSegmenter([clf, …]) Estimator for classifying pixels.
skimage.future.graph fit_segmenter
skimage.future.fit_segmenter(labels, features, clf) [source]
Segmentation using labeled parts of the image and a classifier. Parameters
labelsndarray of ints
Image of labels. Labels >= 1 correspond to the training set and label 0 to unlabeled pixels to be segmented.
featuresndarray
Array of features, with the first dimension corresponding to the number of features, and the other dimensions corresponding to labels.shape.
clfclassifier object
classifier object, exposing a fit and a predict method as in scikit-learn’s API, for example an instance of RandomForestClassifier or LogisticRegression classifier. Returns
clfclassifier object
classifier trained on labels Raises
NotFittedError if self.clf has not been fitted yet (use self.fit).
Examples using skimage.future.fit_segmenter
Trainable segmentation using local features and random forests manual_lasso_segmentation
skimage.future.manual_lasso_segmentation(image, alpha=0.4, return_all=False) [source]
Return a label image based on freeform selections made with the mouse. Parameters
image(M, N[, 3]) array
Grayscale or RGB image.
alphafloat, optional
Transparency value for polygons drawn over the image.
return_allbool, optional
If True, an array containing each separate polygon drawn is returned. (The polygons may overlap.) If False (default), latter polygons “overwrite” earlier ones where they overlap. Returns
labelsarray of int, shape ([Q, ]M, N)
The segmented regions. If mode is ‘separate’, the leading dimension of the array corresponds to the number of regions that the user drew. Notes Press and hold the left mouse button to draw around each object. Examples >>> from skimage import data, future, io
>>> camera = data.camera()
>>> mask = future.manual_lasso_segmentation(camera)
>>> io.imshow(mask)
>>> io.show()
manual_polygon_segmentation
skimage.future.manual_polygon_segmentation(image, alpha=0.4, return_all=False) [source]
Return a label image based on polygon selections made with the mouse. Parameters
image(M, N[, 3]) array
Grayscale or RGB image.
alphafloat, optional
Transparency value for polygons drawn over the image.
return_allbool, optional
If True, an array containing each separate polygon drawn is returned. (The polygons may overlap.) If False (default), latter polygons “overwrite” earlier ones where they overlap. Returns
labelsarray of int, shape ([Q, ]M, N)
The segmented regions. If mode is ‘separate’, the leading dimension of the array corresponds to the number of regions that the user drew. Notes Use left click to select the vertices of the polygon and right click to confirm the selection once all vertices are selected. Examples >>> from skimage import data, future, io
>>> camera = data.camera()
>>> mask = future.manual_polygon_segmentation(camera)
>>> io.imshow(mask)
>>> io.show()
predict_segmenter
skimage.future.predict_segmenter(features, clf) [source]
Segmentation of images using a pretrained classifier. Parameters
featuresndarray
Array of features, with the last dimension corresponding to the number of features, and the other dimensions are compatible with the shape of the image to segment, or a flattened image.
clfclassifier object
trained classifier object, exposing a predict method as in scikit-learn’s API, for example an instance of RandomForestClassifier or LogisticRegression classifier. The classifier must be already trained, for example with skimage.segmentation.fit_segmenter(). Returns
outputndarray
Labeled array, built from the prediction of the classifier.
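The clf contract described above — a scikit-learn-style object exposing fit and predict on (n_samples, n_features) arrays — can be illustrated with a minimal stand-in. The NearestMeanClassifier and the toy feature array below are assumptions for illustration only; a trained RandomForestClassifier would be a drop-in replacement, and the flatten/predict/reshape steps mirror what predict_segmenter does with its features argument.

```python
import numpy as np

class NearestMeanClassifier:
    """Toy scikit-learn-style classifier: predicts the class whose
    mean feature vector is closest (illustrates the clf contract)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Features shaped (H, W, n_features); labels 0 = unlabeled, >= 1 = training.
rng = np.random.default_rng(0)
features = rng.random((8, 8, 3))
features[:, :4] += 2.0            # make the left half clearly separable
labels = np.zeros((8, 8), dtype=int)
labels[0, 0], labels[0, -1] = 1, 2

# Flatten, train on annotated pixels only, predict everywhere, reshape.
X = features.reshape(-1, features.shape[-1])
y = labels.ravel()
clf = NearestMeanClassifier().fit(X[y > 0], y[y > 0])
segmented = clf.predict(X).reshape(labels.shape)
```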
Examples using skimage.future.predict_segmenter
Trainable segmentation using local features and random forests TrainableSegmenter
class skimage.future.TrainableSegmenter(clf=None, features_func=None) [source]
Bases: object Estimator for classifying pixels. Parameters
clfclassifier object, optional
classifier object, exposing a fit and a predict method as in scikit-learn’s API, for example an instance of RandomForestClassifier or LogisticRegression classifier.
features_funcfunction, optional
function computing features on all pixels of the image, to be passed to the classifier. The output should be of shape (m_features, *labels.shape). If None, skimage.segmentation.multiscale_basic_features() is used. Methods
fit(image, labels) Train classifier using partially labeled (annotated) image.
predict(image) Segment new image using trained internal classifier.
compute_features
__init__(clf=None, features_func=None) [source]
Initialize self. See help(type(self)) for accurate signature.
compute_features(image) [source]
fit(image, labels) [source]
Train classifier using partially labeled (annotated) image. Parameters
imagendarray
Input image, which can be grayscale or multichannel, and must have a number of dimensions compatible with self.features_func.
labelsndarray of ints
Labeled array of shape compatible with image (same shape for a single-channel image). Labels >= 1 correspond to the training set and label 0 to unlabeled pixels to be segmented.
predict(image) [source]
Segment new image using trained internal classifier. Parameters
imagendarray
Input image, which can be grayscale or multichannel, and must have a number of dimensions compatible with self.features_func. Raises
NotFittedError if self.clf has not been fitted yet (use self.fit). | skimage.api.skimage.future |
skimage.future.fit_segmenter(labels, features, clf) [source]
Segmentation using labeled parts of the image and a classifier. Parameters
labelsndarray of ints
Image of labels. Labels >= 1 correspond to the training set and label 0 to unlabeled pixels to be segmented.
featuresndarray
Array of features, with the first dimension corresponding to the number of features, and the other dimensions correspond to labels.shape.
clfclassifier object
classifier object, exposing a fit and a predict method as in scikit-learn’s API, for example an instance of RandomForestClassifier or LogisticRegression classifier. Returns
clfclassifier object
classifier trained on labels Raises
NotFittedError if self.clf has not been fitted yet (use self.fit). | skimage.api.skimage.future#skimage.future.fit_segmenter |
Module: future.graph
skimage.future.graph.cut_normalized(labels, rag) Perform Normalized Graph cut on the Region Adjacency Graph.
skimage.future.graph.cut_threshold(labels, …) Combine regions separated by weight less than threshold.
skimage.future.graph.merge_hierarchical(…) Perform hierarchical merging of a RAG.
skimage.future.graph.ncut(labels, rag[, …]) Perform Normalized Graph cut on the Region Adjacency Graph.
skimage.future.graph.rag_boundary(labels, …) Compute a RAG based on region boundaries.
skimage.future.graph.rag_mean_color(image, …) Compute the Region Adjacency Graph using mean colors.
skimage.future.graph.show_rag(labels, rag, image) Show a Region Adjacency Graph on an image.
skimage.future.graph.RAG([label_image, …]) The Region Adjacency Graph (RAG) of an image, subclasses networkx.Graph cut_normalized
skimage.future.graph.cut_normalized(labels, rag, thresh=0.001, num_cuts=10, in_place=True, max_edge=1.0, *, random_state=None) [source]
Perform Normalized Graph cut on the Region Adjacency Graph. Given an image’s labels and its similarity RAG, recursively perform a 2-way normalized cut on it. All nodes belonging to a subgraph that cannot be cut further are assigned a unique label in the output. Parameters
labelsndarray
The array of labels.
ragRAG
The region adjacency graph.
threshfloat
The threshold. A subgraph won’t be further subdivided if the value of the N-cut exceeds thresh.
num_cutsint
The number of N-cuts to perform before determining the optimal one.
in_placebool
If set, modifies rag in place. For each node n the function will set a new attribute rag.nodes[n]['ncut label'].
max_edgefloat, optional
The maximum possible value of an edge in the RAG. This corresponds to an edge between identical regions. This is used to put self edges in the RAG.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. The random state is used for the starting point of scipy.sparse.linalg.eigsh. Returns
outndarray
The new labeled array. References
1
Shi, J.; Malik, J., “Normalized cuts and image segmentation”, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 8, pp. 888-905, August 2000. Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels, mode='similarity')
>>> new_labels = graph.cut_normalized(labels, rag)
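For reference, the N-cut value being thresholded can be computed directly for a 2-way split (A, B): ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V) [1]. A toy NumPy sketch, where the 3-node similarity matrix is an illustrative assumption:

```python
import numpy as np

def ncut_value(W, in_a):
    """N-cut of the 2-way partition given by boolean mask `in_a` over a
    symmetric similarity matrix W (Shi & Malik, 2000)."""
    in_b = ~in_a
    cut = W[np.ix_(in_a, in_b)].sum()   # weight crossing the split
    assoc_a = W[in_a, :].sum()          # total weight touching A
    assoc_b = W[in_b, :].sum()
    return cut / assoc_a + cut / assoc_b

# Two tightly coupled nodes (0, 1) and a loosely attached node 2.
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
good = ncut_value(W, np.array([True, True, False]))  # split off node 2
bad = ncut_value(W, np.array([True, False, True]))   # split the tight pair
```

Splitting off the weakly attached node yields the smaller N-cut, which is why the recursion prefers such cuts until every remaining value exceeds thresh.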
cut_threshold
skimage.future.graph.cut_threshold(labels, rag, thresh, in_place=True) [source]
Combine regions separated by weight less than threshold. Given an image’s labels and its RAG, output new labels by combining regions whose nodes are separated by a weight less than the given threshold. Parameters
labelsndarray
The array of labels.
ragRAG
The region adjacency graph.
threshfloat
The threshold. Regions connected by edges with smaller weights are combined.
in_placebool
If set, modifies rag in place. The function will remove the edges with weights less than thresh. If set to False the function makes a copy of rag before proceeding. Returns
outndarray
The new labelled array. References
1
Alain Tremeau and Philippe Colantoni “Regions Adjacency Graph Applied To Color Image Segmentation” DOI:10.1109/83.841950 Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels)
>>> new_labels = graph.cut_threshold(labels, rag, 10)
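Conceptually, cut_threshold amounts to a connected-components pass that keeps only edges below the threshold. A stdlib-only union-find sketch of that idea (the node count and edge list are illustrative assumptions, not skimage's implementation):

```python
def cut_threshold_sketch(n_nodes, edges, thresh):
    """Merge nodes joined by an edge with weight < thresh; returns one
    representative label per node (union-find with path halving)."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v, w in edges:
        if w < thresh:
            parent[find(u)] = find(v)
    return [find(x) for x in range(n_nodes)]

# Nodes 0-1 and 2-3 are similar (low weight); 1-2 are dissimilar.
edges = [(0, 1, 2.0), (1, 2, 50.0), (2, 3, 3.0)]
labels = cut_threshold_sketch(4, edges, thresh=10)
```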
merge_hierarchical
skimage.future.graph.merge_hierarchical(labels, rag, thresh, rag_copy, in_place_merge, merge_func, weight_func) [source]
Perform hierarchical merging of a RAG. Greedily merges the most similar pair of nodes until no edges lower than thresh remain. Parameters
labelsndarray
The array of labels.
ragRAG
The Region Adjacency Graph.
threshfloat
Regions connected by an edge with weight smaller than thresh are merged.
rag_copybool
If set, the RAG is copied before modifying.
in_place_mergebool
If set, the nodes are merged in place. Otherwise, a new node is created for each merge.
merge_funccallable
This function is called before merging two nodes. For the RAG graph while merging src and dst, it is called as follows merge_func(graph, src, dst).
weight_funccallable
The function to compute the new weights of the nodes adjacent to the merged node. This is directly supplied as the argument weight_func to merge_nodes. Returns
outndarray
The new labeled array.
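The greedy loop can be sketched on a plain dict-of-dicts graph. This toy stand-in is not skimage's implementation: it uses a fixed min rule in place of weight_func, skips merge_func, and returns the surviving node ids rather than a relabelled array.

```python
def merge_hierarchical_sketch(edges, thresh):
    """Greedy toy version of hierarchical merging: repeatedly merge the
    lowest-weight edge below `thresh`; merged nodes keep min edge weights."""
    graph = {}
    for (u, v), w in edges.items():
        graph.setdefault(u, {})[v] = w
        graph.setdefault(v, {})[u] = w
    while True:
        candidates = [(w, u, v) for u, nbrs in graph.items()
                      for v, w in nbrs.items() if u < v and w < thresh]
        if not candidates:
            return sorted(graph)           # surviving node ids
        _, u, v = min(candidates)          # most similar pair first
        for n, w in graph.pop(v).items():  # merge v into u
            if n != u:
                graph[n].pop(v)
                graph[u][n] = graph[n][u] = min(w, graph[u].get(n, float("inf")))
        graph[u].pop(v, None)

# Chain 0-1-2-3: the end pairs are similar, the middle edge is not.
survivors = merge_hierarchical_sketch({(0, 1): 1.0, (1, 2): 5.0, (2, 3): 1.5},
                                      thresh=2.0)
```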
ncut
skimage.future.graph.ncut(labels, rag, thresh=0.001, num_cuts=10, in_place=True, max_edge=1.0, *, random_state=None) [source]
Perform Normalized Graph cut on the Region Adjacency Graph. Given an image’s labels and its similarity RAG, recursively perform a 2-way normalized cut on it. All nodes belonging to a subgraph that cannot be cut further are assigned a unique label in the output. Parameters
labelsndarray
The array of labels.
ragRAG
The region adjacency graph.
threshfloat
The threshold. A subgraph won’t be further subdivided if the value of the N-cut exceeds thresh.
num_cutsint
The number of N-cuts to perform before determining the optimal one.
in_placebool
If set, modifies rag in place. For each node n the function will set a new attribute rag.nodes[n]['ncut label'].
max_edgefloat, optional
The maximum possible value of an edge in the RAG. This corresponds to an edge between identical regions. This is used to put self edges in the RAG.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. The random state is used for the starting point of scipy.sparse.linalg.eigsh. Returns
outndarray
The new labeled array. References
1
Shi, J.; Malik, J., “Normalized cuts and image segmentation”, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 8, pp. 888-905, August 2000. Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels, mode='similarity')
>>> new_labels = graph.cut_normalized(labels, rag)
rag_boundary
skimage.future.graph.rag_boundary(labels, edge_map, connectivity=2) [source]
Compute a RAG based on region boundaries. Given an image’s initial segmentation and its edge map this method constructs the corresponding Region Adjacency Graph (RAG). Each node in the RAG represents a set of pixels within the image with the same label in labels. The weight between two adjacent regions is the average value in edge_map along their boundary. Parameters
labelsndarray
The labelled image.
edge_mapndarray
This should have the same shape as that of labels. For all pixels along the boundary between 2 adjacent regions, the average value of the corresponding pixels in edge_map is the edge weight between them.
connectivityint, optional
Pixels with a squared distance less than connectivity from each other are considered adjacent. It can range from 1 to labels.ndim. Its behavior is the same as connectivity parameter in scipy.ndimage.filters.generate_binary_structure. Examples >>> from skimage import data, segmentation, filters, color
>>> from skimage.future import graph
>>> img = data.chelsea()
>>> labels = segmentation.slic(img)
>>> edge_map = filters.sobel(color.rgb2gray(img))
>>> rag = graph.rag_boundary(labels, edge_map)
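The boundary-averaging rule can be checked on a toy array. For simplicity this sketch considers only horizontal neighbor pairs, whereas rag_boundary follows its connectivity parameter; the label and edge-map values are illustrative assumptions.

```python
import numpy as np

# Two regions side by side; the boundary runs between columns 1 and 2.
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2]])
edge_map = np.array([[0.0, 0.8, 0.6, 0.0],
                     [0.0, 0.4, 0.2, 0.0]])

# Horizontal neighbor pairs with different labels form the boundary;
# the edge weight is the mean of edge_map over both sides of each pair.
diff = labels[:, :-1] != labels[:, 1:]
boundary_vals = np.concatenate([edge_map[:, :-1][diff], edge_map[:, 1:][diff]])
weight = boundary_vals.mean()
```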
rag_mean_color
skimage.future.graph.rag_mean_color(image, labels, connectivity=2, mode='distance', sigma=255.0) [source]
Compute the Region Adjacency Graph using mean colors. Given an image and its initial segmentation, this method constructs the corresponding Region Adjacency Graph (RAG). Each node in the RAG represents a set of pixels within image with the same label in labels. The weight between two adjacent regions represents how similar or dissimilar two regions are depending on the mode parameter. Parameters
imagendarray, shape(M, N, […, P,] 3)
Input image.
labelsndarray, shape(M, N, […, P])
The labelled image. This should have one dimension less than image. If image has dimensions (M, N, 3) labels should have dimensions (M, N).
connectivityint, optional
Pixels with a squared distance less than connectivity from each other are considered adjacent. It can range from 1 to labels.ndim. Its behavior is the same as connectivity parameter in scipy.ndimage.generate_binary_structure.
mode{‘distance’, ‘similarity’}, optional
The strategy to assign edge weights. ‘distance’ : The weight between two adjacent regions is \(|c_1 - c_2|\), where \(c_1\) and \(c_2\) are the mean colors of the two regions. It represents the Euclidean distance in their average color. ‘similarity’ : The weight between two adjacent regions is \(e^{-d^2/\sigma}\), where \(d=|c_1 - c_2|\) and \(c_1\), \(c_2\) are the mean colors of the two regions. It represents how similar two regions are.
sigmafloat, optional
Used for computation when mode is “similarity”. It governs how close to each other two colors should be, for their corresponding edge weight to be significant. A very large value of sigma could make any two colors behave as though they were similar. Returns
outRAG
The region adjacency graph. References
1
Alain Tremeau and Philippe Colantoni “Regions Adjacency Graph Applied To Color Image Segmentation” DOI:10.1109/83.841950 Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels)
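The two weighting modes reduce to a couple of NumPy expressions; the mean colors below are illustrative assumptions:

```python
import numpy as np

# Mean colors of two adjacent regions (illustrative values).
c1 = np.array([200.0, 120.0, 40.0])
c2 = np.array([190.0, 125.0, 35.0])

d = np.linalg.norm(c1 - c2)         # 'distance' mode weight
sigma = 255.0
similarity = np.exp(-d**2 / sigma)  # 'similarity' mode weight, in (0, 1]
```

A larger sigma flattens the exponential, so even fairly different colors get a similarity weight close to 1, matching the caveat in the sigma description above.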
show_rag
skimage.future.graph.show_rag(labels, rag, image, border_color='black', edge_width=1.5, edge_cmap='magma', img_cmap='bone', in_place=True, ax=None) [source]
Show a Region Adjacency Graph on an image. Given a labelled image and its corresponding RAG, show the nodes and edges of the RAG on the image with the specified colors. Edges are displayed between the centroid of the 2 adjacent regions in the image. Parameters
labelsndarray, shape (M, N)
The labelled image.
ragRAG
The Region Adjacency Graph.
imagendarray, shape (M, N[, 3])
Input image. If colormap is None, the image should be in RGB format.
border_colorcolor spec, optional
Color with which the borders between regions are drawn.
edge_widthfloat, optional
The thickness with which the RAG edges are drawn.
edge_cmapmatplotlib.colors.Colormap, optional
Any matplotlib colormap with which the edges are drawn.
img_cmapmatplotlib.colors.Colormap, optional
Any matplotlib colormap with which the image is drawn. If set to None the image is drawn as it is.
in_placebool, optional
If set, the RAG is modified in place. For each node n the function will set a new attribute rag.nodes[n]['centroid'].
axmatplotlib.axes.Axes, optional
The axes to draw on. If not specified, new axes are created and drawn on. Returns
lcmatplotlib.collections.LineCollection
A collection of lines that represent the edges of the graph. It can be passed to the matplotlib.figure.Figure.colorbar() function. Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> import matplotlib.pyplot as plt
>>>
>>> img = data.coffee()
>>> labels = segmentation.slic(img)
>>> g = graph.rag_mean_color(img, labels)
>>> lc = graph.show_rag(labels, g, img)
>>> cbar = plt.colorbar(lc)
RAG
class skimage.future.graph.RAG(label_image=None, connectivity=1, data=None, **attr) [source]
Bases: networkx.classes.graph.Graph The Region Adjacency Graph (RAG) of an image, subclasses networkx.Graph Parameters
label_imagearray of int
An initial segmentation, with each region labeled as a different integer. Every unique value in label_image will correspond to a node in the graph.
connectivityint in {1, …, label_image.ndim}, optional
The connectivity between pixels in label_image. For a 2D image, a connectivity of 1 corresponds to immediate neighbors up, down, left, and right, while a connectivity of 2 also includes diagonal neighbors. See scipy.ndimage.generate_binary_structure.
datanetworkx Graph specification, optional
Initial or additional edges to pass to the NetworkX Graph constructor. See networkx.Graph. Valid edge specifications include edge list (list of tuples), NumPy arrays, and SciPy sparse matrices.
**attrkeyword arguments, optional
Additional attributes to add to the graph.
__init__(label_image=None, connectivity=1, data=None, **attr) [source]
Initialize a graph with edges, name, or graph attributes. Parameters
incoming_graph_datainput graph (optional, default: None)
Data to initialize graph. If None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. If the corresponding optional Python packages are installed the data can also be a NumPy matrix or 2d ndarray, a SciPy sparse matrix, or a PyGraphviz graph.
attrkeyword arguments, optional (default= no attributes)
Attributes to add to graph as key=value pairs. See also
convert
Examples >>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G = nx.Graph(name="my graph")
>>> e = [(1, 2), (2, 3), (3, 4)] # list of edges
>>> G = nx.Graph(e)
Arbitrary graph attribute pairs (key=value) may be assigned >>> G = nx.Graph(e, day="Friday")
>>> G.graph
{'day': 'Friday'}
add_edge(u, v, attr_dict=None, **attr) [source]
Add an edge between u and v while updating max node id. See also networkx.Graph.add_edge().
add_node(n, attr_dict=None, **attr) [source]
Add node n while updating the maximum node id. See also networkx.Graph.add_node().
copy() [source]
Copy the graph with its max node id. See also networkx.Graph.copy().
fresh_copy() [source]
Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. This is required when subclassing Graph with networkx v2 and does not cause problems for v1. Here is more detail from the networkx 1.x to 2.x migration guide: With the new GraphViews (SubGraph, ReversedGraph, etc)
you can't assume that ``G.__class__()`` will create a new
instance of the same graph type as ``G``. In fact, the
call signature for ``__class__`` differs depending on
whether ``G`` is a view or a base class. For v2.x you
should use ``G.fresh_copy()`` to create a null graph of
the correct type---ready to fill with nodes and edges.
merge_nodes(src, dst, weight_func=<function min_weight>, in_place=True, extra_arguments=[], extra_keywords={}) [source]
Merge node src and dst. The new combined node is adjacent to all the neighbors of src and dst. weight_func is called to decide the weight of edges incident on the new node. Parameters
src, dstint
Nodes to be merged.
weight_funccallable, optional
Function to decide the attributes of edges incident on the new node. For each neighbor n of src and dst, weight_func will be called as follows: weight_func(src, dst, n, *extra_arguments, **extra_keywords). src, dst and n are IDs of vertices in the RAG object, which is in turn a subclass of networkx.Graph. It is expected to return a dict of attributes of the resulting edge.
in_placebool, optional
If set to True, the merged node has the id dst, else merged node has a new id which is returned.
extra_argumentssequence, optional
The sequence of extra positional arguments passed to weight_func.
extra_keywordsdictionary, optional
The dict of keyword arguments passed to the weight_func. Returns
idint
The id of the new node. Notes If in_place is False the resulting node has a new id, rather than dst.
next_id() [source]
Returns the id for the new node to be inserted. The current implementation returns one more than the maximum id. Returns
idint
The id of the new node to be inserted. | skimage.api.skimage.future.graph |
skimage.future.graph.cut_normalized(labels, rag, thresh=0.001, num_cuts=10, in_place=True, max_edge=1.0, *, random_state=None) [source]
Perform Normalized Graph cut on the Region Adjacency Graph. Given an image’s labels and its similarity RAG, recursively perform a 2-way normalized cut on it. All nodes belonging to a subgraph that cannot be cut further are assigned a unique label in the output. Parameters
labelsndarray
The array of labels.
ragRAG
The region adjacency graph.
threshfloat
The threshold. A subgraph won’t be further subdivided if the value of the N-cut exceeds thresh.
num_cutsint
The number of N-cuts to perform before determining the optimal one.
in_placebool
If set, modifies rag in place. For each node n the function will set a new attribute rag.nodes[n]['ncut label'].
max_edgefloat, optional
The maximum possible value of an edge in the RAG. This corresponds to an edge between identical regions. This is used to put self edges in the RAG.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. The random state is used for the starting point of scipy.sparse.linalg.eigsh. Returns
outndarray
The new labeled array. References
1
Shi, J.; Malik, J., “Normalized cuts and image segmentation”, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 8, pp. 888-905, August 2000. Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels, mode='similarity')
>>> new_labels = graph.cut_normalized(labels, rag) | skimage.api.skimage.future.graph#skimage.future.graph.cut_normalized |
skimage.future.graph.cut_threshold(labels, rag, thresh, in_place=True) [source]
Combine regions separated by weight less than threshold. Given an image’s labels and its RAG, output new labels by combining regions whose nodes are separated by a weight less than the given threshold. Parameters
labelsndarray
The array of labels.
ragRAG
The region adjacency graph.
threshfloat
The threshold. Regions connected by edges with smaller weights are combined.
in_placebool
If set, modifies rag in place. The function will remove the edges with weights less than thresh. If set to False the function makes a copy of rag before proceeding. Returns
outndarray
The new labelled array. References
1
Alain Tremeau and Philippe Colantoni “Regions Adjacency Graph Applied To Color Image Segmentation” DOI:10.1109/83.841950 Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels)
>>> new_labels = graph.cut_threshold(labels, rag, 10) | skimage.api.skimage.future.graph#skimage.future.graph.cut_threshold |
skimage.future.graph.merge_hierarchical(labels, rag, thresh, rag_copy, in_place_merge, merge_func, weight_func) [source]
Perform hierarchical merging of a RAG. Greedily merges the most similar pair of nodes until no edges lower than thresh remain. Parameters
labelsndarray
The array of labels.
ragRAG
The Region Adjacency Graph.
threshfloat
Regions connected by an edge with weight smaller than thresh are merged.
rag_copybool
If set, the RAG is copied before modifying.
in_place_mergebool
If set, the nodes are merged in place. Otherwise, a new node is created for each merge.
merge_funccallable
This function is called before merging two nodes. For the RAG graph while merging src and dst, it is called as follows merge_func(graph, src, dst).
weight_funccallable
The function to compute the new weights of the nodes adjacent to the merged node. This is directly supplied as the argument weight_func to merge_nodes. Returns
outndarray
The new labeled array. | skimage.api.skimage.future.graph#skimage.future.graph.merge_hierarchical |
skimage.future.graph.ncut(labels, rag, thresh=0.001, num_cuts=10, in_place=True, max_edge=1.0, *, random_state=None) [source]
Perform Normalized Graph cut on the Region Adjacency Graph. Given an image’s labels and its similarity RAG, recursively perform a 2-way normalized cut on it. All nodes belonging to a subgraph that cannot be cut further are assigned a unique label in the output. Parameters
labelsndarray
The array of labels.
ragRAG
The region adjacency graph.
threshfloat
The threshold. A subgraph won’t be further subdivided if the value of the N-cut exceeds thresh.
num_cutsint
The number of N-cuts to perform before determining the optimal one.
in_placebool
If set, modifies rag in place. For each node n the function will set a new attribute rag.nodes[n]['ncut label'].
max_edgefloat, optional
The maximum possible value of an edge in the RAG. This corresponds to an edge between identical regions. This is used to put self edges in the RAG.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. The random state is used for the starting point of scipy.sparse.linalg.eigsh. Returns
outndarray
The new labeled array. References
1
Shi, J.; Malik, J., “Normalized cuts and image segmentation”, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 8, pp. 888-905, August 2000. Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels, mode='similarity')
>>> new_labels = graph.cut_normalized(labels, rag) | skimage.api.skimage.future.graph#skimage.future.graph.ncut |
class skimage.future.graph.RAG(label_image=None, connectivity=1, data=None, **attr) [source]
Bases: networkx.classes.graph.Graph The Region Adjacency Graph (RAG) of an image, subclasses networkx.Graph Parameters
label_imagearray of int
An initial segmentation, with each region labeled as a different integer. Every unique value in label_image will correspond to a node in the graph.
connectivityint in {1, …, label_image.ndim}, optional
The connectivity between pixels in label_image. For a 2D image, a connectivity of 1 corresponds to immediate neighbors up, down, left, and right, while a connectivity of 2 also includes diagonal neighbors. See scipy.ndimage.generate_binary_structure.
datanetworkx Graph specification, optional
Initial or additional edges to pass to the NetworkX Graph constructor. See networkx.Graph. Valid edge specifications include edge list (list of tuples), NumPy arrays, and SciPy sparse matrices.
**attrkeyword arguments, optional
Additional attributes to add to the graph.
__init__(label_image=None, connectivity=1, data=None, **attr) [source]
Initialize a graph with edges, name, or graph attributes. Parameters
incoming_graph_datainput graph (optional, default: None)
Data to initialize graph. If None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. If the corresponding optional Python packages are installed the data can also be a NumPy matrix or 2d ndarray, a SciPy sparse matrix, or a PyGraphviz graph.
attrkeyword arguments, optional (default= no attributes)
Attributes to add to graph as key=value pairs. See also
convert
Examples >>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G = nx.Graph(name="my graph")
>>> e = [(1, 2), (2, 3), (3, 4)] # list of edges
>>> G = nx.Graph(e)
Arbitrary graph attribute pairs (key=value) may be assigned >>> G = nx.Graph(e, day="Friday")
>>> G.graph
{'day': 'Friday'}
add_edge(u, v, attr_dict=None, **attr) [source]
Add an edge between u and v while updating max node id. See also networkx.Graph.add_edge().
add_node(n, attr_dict=None, **attr) [source]
Add node n while updating the maximum node id. See also networkx.Graph.add_node().
copy() [source]
Copy the graph with its max node id. See also networkx.Graph.copy().
fresh_copy() [source]
Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. This is required when subclassing Graph with networkx v2 and does not cause problems for v1. Here is more detail from the networkx 1.x to 2.x migration guide: With the new GraphViews (SubGraph, ReversedGraph, etc)
you can't assume that ``G.__class__()`` will create a new
instance of the same graph type as ``G``. In fact, the
call signature for ``__class__`` differs depending on
whether ``G`` is a view or a base class. For v2.x you
should use ``G.fresh_copy()`` to create a null graph of
the correct type---ready to fill with nodes and edges.
merge_nodes(src, dst, weight_func=<function min_weight>, in_place=True, extra_arguments=[], extra_keywords={}) [source]
Merge node src and dst. The new combined node is adjacent to all the neighbors of src and dst. weight_func is called to decide the weight of edges incident on the new node. Parameters
src, dstint
Nodes to be merged.
weight_funccallable, optional
Function to decide the attributes of edges incident on the new node. For each neighbor n of src and dst, weight_func will be called as follows: weight_func(src, dst, n, *extra_arguments, **extra_keywords). src, dst and n are IDs of vertices in the RAG object, which is in turn a subclass of networkx.Graph. It is expected to return a dict of attributes of the resulting edge.
in_placebool, optional
If set to True, the merged node has the id dst, else merged node has a new id which is returned.
extra_argumentssequence, optional
The sequence of extra positional arguments passed to weight_func.
extra_keywordsdictionary, optional
The dict of keyword arguments passed to the weight_func. Returns
idint
The id of the new node. Notes If in_place is False the resulting node has a new id, rather than dst.
next_id() [source]
Returns the id for the new node to be inserted. The current implementation returns one more than the maximum id. Returns
idint
The id of the new node to be inserted. | skimage.api.skimage.future.graph#skimage.future.graph.RAG |
add_edge(u, v, attr_dict=None, **attr) [source]
Add an edge between u and v while updating max node id. See also networkx.Graph.add_edge(). | skimage.api.skimage.future.graph#skimage.future.graph.RAG.add_edge |
add_node(n, attr_dict=None, **attr) [source]
Add node n while updating the maximum node id. See also networkx.Graph.add_node(). | skimage.api.skimage.future.graph#skimage.future.graph.RAG.add_node |
copy() [source]
Copy the graph with its max node id. See also networkx.Graph.copy(). | skimage.api.skimage.future.graph#skimage.future.graph.RAG.copy |
fresh_copy() [source]
Return a fresh copy graph with the same data structure. A fresh copy has no nodes, edges or graph attributes. It is the same data structure as the current graph. This method is typically used to create an empty version of the graph. This is required when subclassing Graph with networkx v2 and does not cause problems for v1. Here is more detail from the networkx 1.x to 2.x migration guide: With the new GraphViews (SubGraph, ReversedGraph, etc)
you can't assume that ``G.__class__()`` will create a new
instance of the same graph type as ``G``. In fact, the
call signature for ``__class__`` differs depending on
whether ``G`` is a view or a base class. For v2.x you
should use ``G.fresh_copy()`` to create a null graph of
the correct type---ready to fill with nodes and edges. | skimage.api.skimage.future.graph#skimage.future.graph.RAG.fresh_copy |
merge_nodes(src, dst, weight_func=<function min_weight>, in_place=True, extra_arguments=[], extra_keywords={}) [source]
Merge node src and dst. The new combined node is adjacent to all the neighbors of src and dst. weight_func is called to decide the weight of edges incident on the new node. Parameters
src, dstint
Nodes to be merged.
weight_funccallable, optional
Function to decide the attributes of edges incident on the new node. For each neighbor n of src and dst, weight_func will be called as follows: weight_func(src, dst, n, *extra_arguments, **extra_keywords). src, dst and n are IDs of vertices in the RAG object, which is in turn a subclass of networkx.Graph. It is expected to return a dict of attributes of the resulting edge.
in_placebool, optional
If set to True, the merged node has the id dst, else merged node has a new id which is returned.
extra_argumentssequence, optional
The sequence of extra positional arguments passed to weight_func.
extra_keywordsdictionary, optional
The dict of keyword arguments passed to the weight_func. Returns
idint
The id of the new node. Notes If in_place is False the resulting node has a new id, rather than dst. | skimage.api.skimage.future.graph#skimage.future.graph.RAG.merge_nodes |
next_id() [source]
Returns the id for the new node to be inserted. The current implementation returns one more than the maximum id. Returns
idint
The id of the new node to be inserted. | skimage.api.skimage.future.graph#skimage.future.graph.RAG.next_id |
__init__(label_image=None, connectivity=1, data=None, **attr) [source]
Initialize a graph with edges, name, or graph attributes. Parameters
incoming_graph_datainput graph (optional, default: None)
Data to initialize graph. If None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. If the corresponding optional Python packages are installed the data can also be a NumPy matrix or 2d ndarray, a SciPy sparse matrix, or a PyGraphviz graph.
attrkeyword arguments, optional (default= no attributes)
Attributes to add to graph as key=value pairs. See also
convert
Examples >>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G = nx.Graph(name="my graph")
>>> e = [(1, 2), (2, 3), (3, 4)] # list of edges
>>> G = nx.Graph(e)
Arbitrary graph attribute pairs (key=value) may be assigned >>> G = nx.Graph(e, day="Friday")
>>> G.graph
{'day': 'Friday'} | skimage.api.skimage.future.graph#skimage.future.graph.RAG.__init__ |
skimage.future.graph.rag_boundary(labels, edge_map, connectivity=2) [source]
Compute RAG based on region boundaries. Given an image’s initial segmentation and its edge map this method constructs the corresponding Region Adjacency Graph (RAG). Each node in the RAG represents a set of pixels within the image with the same label in labels. The weight between two adjacent regions is the average value in edge_map along their boundary. Parameters
labelsndarray
The labelled image.
edge_mapndarray
This should have the same shape as that of labels. For all pixels along the boundary between 2 adjacent regions, the average value of the corresponding pixels in edge_map is the edge weight between them.
connectivityint, optional
Pixels with a squared distance less than connectivity from each other are considered adjacent. It can range from 1 to labels.ndim. Its behavior is the same as connectivity parameter in scipy.ndimage.filters.generate_binary_structure. Examples >>> from skimage import data, segmentation, filters, color
>>> from skimage.future import graph
>>> img = data.chelsea()
>>> labels = segmentation.slic(img)
>>> edge_map = filters.sobel(color.rgb2gray(img))
>>> rag = graph.rag_boundary(labels, edge_map) | skimage.api.skimage.future.graph#skimage.future.graph.rag_boundary |
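The per-edge weight described above (the mean of edge_map along the shared boundary) can be illustrated with a simplified stand-in. This sketch uses 4-connectivity only and is not the actual implementation:

```python
import numpy as np

def boundary_weight(labels, edge_map, r1, r2):
    # Average edge_map over the pixels that touch the r1/r2 boundary,
    # using 4-connectivity for simplicity.
    vals = []
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if (ni < h and nj < w
                        and {labels[i, j], labels[ni, nj]} == {r1, r2}):
                    vals.extend([edge_map[i, j], edge_map[ni, nj]])
    return float(np.mean(vals))

labels = np.array([[1, 1, 2],
                   [1, 1, 2]])
edge_map = np.array([[0.0, 0.4, 0.6],
                     [0.0, 0.2, 0.8]])
w = boundary_weight(labels, edge_map, 1, 2)  # mean of 0.4, 0.6, 0.2, 0.8
```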
skimage.future.graph.rag_mean_color(image, labels, connectivity=2, mode='distance', sigma=255.0) [source]
Compute the Region Adjacency Graph using mean colors. Given an image and its initial segmentation, this method constructs the corresponding Region Adjacency Graph (RAG). Each node in the RAG represents a set of pixels within image with the same label in labels. The weight between two adjacent regions represents how similar or dissimilar two regions are depending on the mode parameter. Parameters
imagendarray, shape(M, N, […, P,] 3)
Input image.
labelsndarray, shape(M, N, […, P])
The labelled image. This should have one dimension less than image. If image has dimensions (M, N, 3) labels should have dimensions (M, N).
connectivityint, optional
Pixels with a squared distance less than connectivity from each other are considered adjacent. It can range from 1 to labels.ndim. Its behavior is the same as connectivity parameter in scipy.ndimage.generate_binary_structure.
mode{‘distance’, ‘similarity’}, optional
The strategy to assign edge weights. ‘distance’ : The weight between two adjacent regions is \(|c_1 - c_2|\), where \(c_1\) and \(c_2\) are the mean colors of the two regions. It represents the Euclidean distance in their average color. ‘similarity’ : The weight between two adjacent regions is \(e^{-d^2/\sigma}\) where \(d=|c_1 - c_2|\) and \(c_1\), \(c_2\) are the mean colors of the two regions. It represents how similar two regions are.
sigmafloat, optional
Used for computation when mode is “similarity”. It governs how close to each other two colors should be, for their corresponding edge weight to be significant. A very large value of sigma could make any two colors behave as though they were similar. Returns
outRAG
The region adjacency graph. References
1
Alain Tremeau and Philippe Colantoni “Regions Adjacency Graph Applied To Color Image Segmentation” DOI:10.1109/83.841950 Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> img = data.astronaut()
>>> labels = segmentation.slic(img)
>>> rag = graph.rag_mean_color(img, labels) | skimage.api.skimage.future.graph#skimage.future.graph.rag_mean_color |
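The two weighting modes reduce to a small per-edge computation. A sketch with hypothetical mean colors:

```python
import numpy as np

# Hypothetical mean colors of two adjacent regions (RGB).
c1 = np.array([100.0, 120.0, 140.0])
c2 = np.array([103.0, 124.0, 140.0])

d = np.linalg.norm(c1 - c2)       # 'distance' mode weight: |c1 - c2|
sim = np.exp(-d ** 2 / 255.0)     # 'similarity' mode weight, sigma = 255
```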
skimage.future.graph.show_rag(labels, rag, image, border_color='black', edge_width=1.5, edge_cmap='magma', img_cmap='bone', in_place=True, ax=None) [source]
Show a Region Adjacency Graph on an image. Given a labelled image and its corresponding RAG, show the nodes and edges of the RAG on the image with the specified colors. Edges are displayed between the centroid of the 2 adjacent regions in the image. Parameters
labelsndarray, shape (M, N)
The labelled image.
ragRAG
The Region Adjacency Graph.
imagendarray, shape (M, N[, 3])
Input image. If colormap is None, the image should be in RGB format.
border_colorcolor spec, optional
Color with which the borders between regions are drawn.
edge_widthfloat, optional
The thickness with which the RAG edges are drawn.
edge_cmapmatplotlib.colors.Colormap, optional
Any matplotlib colormap with which the edges are drawn.
img_cmapmatplotlib.colors.Colormap, optional
Any matplotlib colormap with which the image is drawn. If set to None the image is drawn as it is.
in_placebool, optional
If set, the RAG is modified in place. For each node n the function will set a new attribute rag.nodes[n]['centroid'].
axmatplotlib.axes.Axes, optional
The axes to draw on. If not specified, new axes are created and drawn on. Returns
lcmatplotlib.collections.LineCollection
A collection of lines that represent the edges of the graph. It can be passed to the matplotlib.figure.Figure.colorbar() function. Examples >>> from skimage import data, segmentation
>>> from skimage.future import graph
>>> import matplotlib.pyplot as plt
>>>
>>> img = data.coffee()
>>> labels = segmentation.slic(img)
>>> g = graph.rag_mean_color(img, labels)
>>> lc = graph.show_rag(labels, g, img)
>>> cbar = plt.colorbar(lc) | skimage.api.skimage.future.graph#skimage.future.graph.show_rag |
skimage.future.manual_lasso_segmentation(image, alpha=0.4, return_all=False) [source]
Return a label image based on freeform selections made with the mouse. Parameters
image(M, N[, 3]) array
Grayscale or RGB image.
alphafloat, optional
Transparency value for polygons drawn over the image.
return_allbool, optional
If True, an array containing each separate polygon drawn is returned. (The polygons may overlap.) If False (default), latter polygons “overwrite” earlier ones where they overlap. Returns
labelsarray of int, shape ([Q, ]M, N)
The segmented regions. If return_all is True, the leading dimension of the array corresponds to the number of regions that the user drew. Notes Press and hold the left mouse button to draw around each object. Examples >>> from skimage import data, future, io
>>> camera = data.camera()
>>> mask = future.manual_lasso_segmentation(camera)
>>> io.imshow(mask)
>>> io.show() | skimage.api.skimage.future#skimage.future.manual_lasso_segmentation |
skimage.future.manual_polygon_segmentation(image, alpha=0.4, return_all=False) [source]
Return a label image based on polygon selections made with the mouse. Parameters
image(M, N[, 3]) array
Grayscale or RGB image.
alphafloat, optional
Transparency value for polygons drawn over the image.
return_allbool, optional
If True, an array containing each separate polygon drawn is returned. (The polygons may overlap.) If False (default), latter polygons “overwrite” earlier ones where they overlap. Returns
labelsarray of int, shape ([Q, ]M, N)
The segmented regions. If return_all is True, the leading dimension of the array corresponds to the number of regions that the user drew. Notes Use left click to select the vertices of the polygon and right click to confirm the selection once all vertices are selected. Examples >>> from skimage import data, future, io
>>> camera = data.camera()
>>> mask = future.manual_polygon_segmentation(camera)
>>> io.imshow(mask)
>>> io.show() | skimage.api.skimage.future#skimage.future.manual_polygon_segmentation |
skimage.future.predict_segmenter(features, clf) [source]
Segmentation of images using a pretrained classifier. Parameters
featuresndarray
Array of features, with the last dimension corresponding to the number of features, and the other dimensions are compatible with the shape of the image to segment, or a flattened image.
clfclassifier object
trained classifier object, exposing a predict method as in scikit-learn’s API, for example an instance of RandomForestClassifier or LogisticRegression classifier. The classifier must be already trained, for example with skimage.segmentation.fit_segmenter(). Returns
outputndarray
Labeled array, built from the prediction of the classifier. | skimage.api.skimage.future#skimage.future.predict_segmenter |
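The shape contract (trailing feature axis, spatial leading axes) can be sketched with a toy classifier. ThresholdClassifier below is a made-up stand-in for any object exposing a scikit-learn-style predict method:

```python
import numpy as np

class ThresholdClassifier:
    # Made-up stand-in for a trained scikit-learn-style classifier:
    # anything exposing predict() on (n_samples, n_features) works.
    def predict(self, X):
        return np.where(X[:, 0] > 0.5, 2, 1)

def predict_segmenter_sketch(features, clf):
    # Flatten the spatial dimensions, keep the trailing feature axis,
    # predict per pixel, then restore the image shape.
    sh = features.shape
    labels = clf.predict(features.reshape(-1, sh[-1]))
    return labels.reshape(sh[:-1])

features = np.zeros((2, 3, 4))   # a 2x3 "image" with 4 features per pixel
features[0, 0, 0] = 1.0          # one pixel crosses the threshold
out = predict_segmenter_sketch(features, ThresholdClassifier())
```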
class skimage.future.TrainableSegmenter(clf=None, features_func=None) [source]
Bases: object Estimator for classifying pixels. Parameters
clfclassifier object, optional
classifier object, exposing a fit and a predict method as in scikit-learn’s API, for example an instance of RandomForestClassifier or LogisticRegression classifier.
features_funcfunction, optional
function computing features on all pixels of the image, to be passed to the classifier. The output should be of shape (m_features, *labels.shape). If None, skimage.segmentation.multiscale_basic_features() is used. Methods
fit(image, labels) Train classifier using partially labeled (annotated) image.
predict(image) Segment new image using trained internal classifier.
compute_features
__init__(clf=None, features_func=None) [source]
Initialize self. See help(type(self)) for accurate signature.
compute_features(image) [source]
fit(image, labels) [source]
Train classifier using partially labeled (annotated) image. Parameters
imagendarray
Input image, which can be grayscale or multichannel, and must have a number of dimensions compatible with self.features_func.
labelsndarray of ints
Labeled array of shape compatible with image (same shape for a single-channel image). Labels >= 1 correspond to the training set and label 0 to unlabeled pixels to be segmented.
predict(image) [source]
Segment new image using trained internal classifier. Parameters
imagendarray
Input image, which can be grayscale or multichannel, and must have a number of dimensions compatible with self.features_func. Raises
NotFittedError if self.clf has not been fitted yet (use self.fit). | skimage.api.skimage.future#skimage.future.TrainableSegmenter |
compute_features(image) [source] | skimage.api.skimage.future#skimage.future.TrainableSegmenter.compute_features |
fit(image, labels) [source]
Train classifier using partially labeled (annotated) image. Parameters
imagendarray
Input image, which can be grayscale or multichannel, and must have a number of dimensions compatible with self.features_func.
labelsndarray of ints
Labeled array of shape compatible with image (same shape for a single-channel image). Labels >= 1 correspond to the training set and label 0 to unlabeled pixels to be segmented. | skimage.api.skimage.future#skimage.future.TrainableSegmenter.fit |
predict(image) [source]
Segment new image using trained internal classifier. Parameters
imagendarray
Input image, which can be grayscale or multichannel, and must have a number of dimensions compatible with self.features_func. Raises
NotFittedError if self.clf has not been fitted yet (use self.fit). | skimage.api.skimage.future#skimage.future.TrainableSegmenter.predict |
__init__(clf=None, features_func=None) [source]
Initialize self. See help(type(self)) for accurate signature. | skimage.api.skimage.future#skimage.future.TrainableSegmenter.__init__ |
Module: graph
skimage.graph.route_through_array(array, …) Simple example of how to use the MCP and MCP_Geometric classes.
skimage.graph.shortest_path(arr[, reach, …]) Find the shortest path through an n-d array from one side to another.
skimage.graph.MCP(costs[, offsets, …]) A class for finding the minimum cost path through a given n-d costs array.
skimage.graph.MCP_Connect(costs[, offsets, …]) Connect source points using the distance-weighted minimum cost function.
skimage.graph.MCP_Flexible(costs[, offsets, …]) Find minimum cost paths through an N-d costs array.
skimage.graph.MCP_Geometric(costs[, …]) Find distance-weighted minimum cost paths through an n-d costs array. route_through_array
skimage.graph.route_through_array(array, start, end, fully_connected=True, geometric=True) [source]
Simple example of how to use the MCP and MCP_Geometric classes. See the MCP and MCP_Geometric class documentation for explanation of the path-finding algorithm. Parameters
arrayndarray
Array of costs.
startiterable
n-d index into array defining the starting point
enditerable
n-d index into array defining the end point
fully_connectedbool (optional)
If True, diagonal moves are permitted, if False, only axial moves.
geometricbool (optional)
If True, the MCP_Geometric class is used to calculate costs, if False, the MCP base class is used. See the class documentation for an explanation of the differences between MCP and MCP_Geometric. Returns
pathlist
List of n-d index tuples defining the path from start to end.
costfloat
Cost of the path. If geometric is False, the cost of the path is the sum of the values of array along the path. If geometric is True, a finer computation is made (see the documentation of the MCP_Geometric class). See also
MCP, MCP_Geometric
Examples >>> import numpy as np
>>> from skimage.graph import route_through_array
>>>
>>> image = np.array([[1, 3], [10, 12]])
>>> image
array([[ 1, 3],
[10, 12]])
>>> # Forbid diagonal steps
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False)
([(0, 0), (0, 1), (1, 1)], 9.5)
>>> # Now allow diagonal steps: the path goes directly from start to end
>>> route_through_array(image, [0, 0], [1, 1])
([(0, 0), (1, 1)], 9.19238815542512)
>>> # Cost is the sum of array values along the path (16 = 1 + 3 + 12)
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False,
... geometric=False)
([(0, 0), (0, 1), (1, 1)], 16.0)
>>> # Larger array where we display the path that is selected
>>> image = np.arange((36)).reshape((6, 6))
>>> image
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
>>> # Find the path with lowest cost
>>> indices, weight = route_through_array(image, (0, 0), (5, 5))
>>> indices = np.stack(indices, axis=-1)
>>> path = np.zeros_like(image)
>>> path[indices[0], indices[1]] = 1
>>> path
array([[1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1]])
shortest_path
skimage.graph.shortest_path(arr, reach=1, axis=-1, output_indexlist=False) [source]
Find the shortest path through an n-d array from one side to another. Parameters
arrndarray of float64
reachint, optional
By default (reach = 1), the shortest path can only move one row up or down for every step it moves forward (i.e., the path gradient is limited to 1). reach defines the number of elements that can be skipped along each non-axis dimension at each step.
axisint, optional
The axis along which the path must always move forward (default -1)
output_indexlistbool, optional
See return value p for explanation. Returns
piterable of int
For each step along axis, the coordinate of the shortest path. If output_indexlist is True, then the path is returned as a list of n-d tuples that index into arr. If False, then the path is returned as an array listing the coordinates of the path along the non-axis dimensions for each step along the axis dimension. That is, p.shape == (arr.shape[axis], arr.ndim-1) except that p is squeezed before returning so if arr.ndim == 2, then p.shape == (arr.shape[axis],)
costfloat
Cost of path. This is the absolute sum of all the differences along the path.
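For the 2-D case with reach=1 and axis=-1, the search reduces to a simple dynamic program. The following is a sketch of that idea, not the actual implementation:

```python
import numpy as np

def shortest_path_2d(arr, reach=1):
    # DP over columns: the path advances one column (the axis) per step,
    # and the row may change by at most `reach`. The cost of a step is
    # the absolute difference of the array values, matching the cost
    # definition above.
    arr = np.asarray(arr, dtype=float)
    nrows, ncols = arr.shape
    cost = np.full((nrows, ncols), np.inf)
    prev = np.zeros((nrows, ncols), dtype=int)
    cost[:, 0] = 0.0
    for c in range(1, ncols):
        for r in range(nrows):
            lo, hi = max(0, r - reach), min(nrows, r + reach + 1)
            steps = cost[lo:hi, c - 1] + np.abs(arr[r, c] - arr[lo:hi, c - 1])
            k = int(np.argmin(steps))
            cost[r, c] = steps[k]
            prev[r, c] = lo + k
    # Cheapest endpoint in the last column, then walk the predecessors back.
    r = int(np.argmin(cost[:, -1]))
    total = float(cost[r, -1])
    path = [r]
    for c in range(ncols - 1, 0, -1):
        r = int(prev[r, c])
        path.append(r)
    return path[::-1], total

arr = np.array([[1, 1, 9],
                [9, 1, 1],
                [9, 9, 9]])
p, total = shortest_path_2d(arr)   # the path hugs the 1-valued pixels
```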
MCP
class skimage.graph.MCP(costs, offsets=None, fully_connected=True, sampling=None)
Bases: object A class for finding the minimum cost path through a given n-d costs array. Given an n-d costs array, this class can be used to find the minimum-cost path through that array from any set of points to any other set of points. Basic usage is to initialize the class and call find_costs() with one or more starting indices (and an optional list of end indices). After that, call traceback() one or more times to find the path from any given end-position to the closest starting index. New paths through the same costs array can be found by calling find_costs() repeatedly. The cost of a path is calculated simply as the sum of the values of the costs array at each point on the path. The class MCP_Geometric, on the other hand, accounts for the fact that diagonal vs. axial moves are of different lengths, and weights the path cost accordingly. Array elements with infinite or negative costs will simply be ignored, as will paths whose cumulative cost overflows to infinity. Parameters
costsndarray
offsetsiterable, optional
A list of offset tuples: each offset specifies a valid move from a given n-d position. If not provided, offsets corresponding to a singly- or fully-connected n-d neighborhood will be constructed with make_offsets(), using the fully_connected parameter value.
fully_connectedbool, optional
If no offsets are provided, this determines the connectivity of the generated neighborhood. If true, the path may go along diagonals between elements of the costs array; otherwise only axial moves are permitted.
samplingtuple, optional
For each dimension, specifies the distance between two cells/voxels. If not given or None, the distance is assumed unit. Attributes
offsetsndarray
Equivalent to the offsets provided to the constructor, or if none were so provided, the offsets created for the requested n-d neighborhood. These are useful for interpreting the traceback array returned by the find_costs() method.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation.
find_costs()
Find the minimum-cost path from the given starting points. This method finds the minimum-cost path to the specified ending indices from any one of the specified starting indices. If no end positions are given, then the minimum-cost path to every position in the costs array will be found. Parameters
startsiterable
A list of n-d starting indices (where n is the dimension of the costs array). The minimum cost path to the closest/cheapest starting point will be found.
endsiterable, optional
A list of n-d ending indices.
find_all_endsbool, optional
If ‘True’ (default), the minimum-cost-path to every specified end-position will be found; otherwise the algorithm will stop when a path is found to any end-position. (If no ends were specified, then this parameter has no effect.) Returns
cumulative_costsndarray
Same shape as the costs array; this array records the minimum cost path from the nearest/cheapest starting index to each index considered. (If ends were specified, not all elements in the array will necessarily be considered: positions not evaluated will have a cumulative cost of inf. If find_all_ends is ‘False’, only one of the specified end-positions will have a finite cumulative cost.)
tracebackndarray
Same shape as the costs array; this array contains the offset to any given index from its predecessor index. The offset indices index into the offsets attribute, which is an array of n-d offsets. In the 2-d case, if offsets[traceback[x, y]] is (-1, -1), that means that the predecessor of [x, y] in the minimum cost path to some start position is [x+1, y+1]. Note that if the offset_index is -1, then the given index was not considered.
goal_reached()
int goal_reached(int index, float cumcost) This method is called each iteration after popping an index from the heap, before examining the neighbours. This method can be overloaded to modify the behavior of the MCP algorithm. An example might be to stop the algorithm when a certain cumulative cost is reached, or when the front is a certain distance away from the seed point. This method should return 1 if the algorithm should not check the current point’s neighbours and 2 if the algorithm is now done.
traceback(end)
Trace a minimum cost path through the pre-calculated traceback array. This convenience function reconstructs the minimum cost path to a given end position from one of the starting indices provided to find_costs(), which must have been called previously. This function can be called as many times as desired after find_costs() has been run. Parameters
enditerable
An n-d index into the costs array. Returns
tracebacklist of n-d tuples
A list of indices into the costs array, starting with one of the start positions passed to find_costs(), and ending with the given end index. These indices specify the minimum-cost path from any given start index to the end index. (The total cost of that path can be read out from the cumulative_costs array returned by find_costs().)
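The traceback convention described above (each entry indexes into offsets, and subtracting that offset yields the predecessor) can be illustrated with a hand-built toy array; this is a sketch of the convention, not the actual traceback() implementation:

```python
import numpy as np

# Toy offsets table (up, left, down, right) and a hand-built traceback
# array: each entry indexes into `offsets`, -1 marks a start position.
offsets = np.array([(-1, 0), (0, -1), (1, 0), (0, 1)])

def trace(traceback, end):
    # predecessor = index - offsets[traceback[index]]
    path = [end]
    x, y = end
    while traceback[x, y] != -1:
        dx, dy = offsets[traceback[x, y]]
        x, y = int(x - dx), int(y - dy)
        path.append((x, y))
    return path[::-1]

# Encodes the path (0, 0) -> (0, 1) -> (0, 2) -> (1, 2):
tb = np.array([[-1, 3, 3],
               [ 2, 2, 2]])
p = trace(tb, (1, 2))
```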
MCP_Connect
class skimage.graph.MCP_Connect(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Connect source points using the distance-weighted minimum cost function. A front is grown from each seed point simultaneously, while the origin of the front is tracked as well. When two fronts meet, create_connection() is called. This method must be overloaded to deal with the found edges in a way that is appropriate for the application.
__init__(*args, **kwargs)
Initialize self. See help(type(self)) for accurate signature.
create_connection()
create_connection(id1, id2, pos1, pos2, cost1, cost2) Overload this method to keep track of the connections that are found during MCP processing. Note that a connection with the same ids can be found multiple times (but with different positions and costs). At the time that this method is called, both points are “frozen” and will not be visited again by the MCP algorithm. Parameters
id1int
The seed point id where the first neighbor originated from.
id2int
The seed point id where the second neighbor originated from.
pos1tuple
The index of the first neighbour in the connection.
pos2tuple
The index of the second neighbour in the connection.
cost1float
The cumulative cost at pos1.
cost2float
The cumulative cost at pos2.
MCP_Flexible
class skimage.graph.MCP_Flexible(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Find minimum cost paths through an N-d costs array. See the documentation for MCP for full details. This class differs from MCP in that several methods can be overloaded (from pure Python) to modify the behavior of the algorithm and/or create custom algorithms based on MCP. Note that goal_reached can also be overloaded in the MCP class.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation.
examine_neighbor(index, new_index, offset_length)
This method is called once for every pair of neighboring nodes, as soon as both nodes are frozen. This method can be overloaded to obtain information about neighboring nodes, and/or to modify the behavior of the MCP algorithm. One example is the MCP_Connect class, which checks for meeting fronts using this hook.
travel_cost(old_cost, new_cost, offset_length)
This method calculates the travel cost for going from the current node to the next. The default implementation returns new_cost. Overload this method to adapt the behaviour of the algorithm.
update_node(index, new_index, offset_length)
This method is called when a node is updated, right after new_index is pushed onto the heap and the traceback map is updated. This method can be overloaded to keep track of other arrays that are used by a specific implementation of the algorithm. For instance the MCP_Connect class uses it to update an id map.
MCP_Geometric
class skimage.graph.MCP_Geometric(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Find distance-weighted minimum cost paths through an n-d costs array. See the documentation for MCP for full details. This class differs from MCP in that the cost of a path is not simply the sum of the costs along that path. This class instead assumes that the costs array contains at each position the “cost” of a unit distance of travel through that position. For example, a move (in 2-d) from (1, 1) to (1, 2) is assumed to originate in the center of the pixel (1, 1) and terminate in the center of (1, 2). The entire move is of distance 1, half through (1, 1) and half through (1, 2); thus the cost of that move is (1/2)*costs[1,1] + (1/2)*costs[1,2]. On the other hand, a move from (1, 1) to (2, 2) is along the diagonal and is sqrt(2) in length. Half of this move is within the pixel (1, 1) and the other half in (2, 2), so the cost of this move is calculated as (sqrt(2)/2)*costs[1,1] + (sqrt(2)/2)*costs[2,2]. These calculations don’t make a lot of sense with offsets of magnitude greater than 1. Use the sampling argument in order to deal with anisotropic data.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation. | skimage.api.skimage.graph |
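The half-in-each-pixel weighting that MCP_Geometric applies to a single move can be sketched directly. geometric_move_cost below is a hypothetical helper for illustration, not part of the API:

```python
import numpy as np

def geometric_move_cost(costs, p, q, sampling=None):
    # Half of the move's length lies in each of the two pixels, so each
    # contributes half the distance times its per-unit cost.
    p, q = np.asarray(p), np.asarray(q)
    step = (q - p).astype(float)
    if sampling is not None:
        step *= np.asarray(sampling, dtype=float)
    dist = np.sqrt((step ** 2).sum())
    return (dist / 2.0) * (costs[tuple(p)] + costs[tuple(q)])

costs = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
axial = geometric_move_cost(costs, (1, 1), (1, 0))  # 0.5*(4 + 3) = 3.5
diag = geometric_move_cost(costs, (1, 1), (0, 0))   # (sqrt(2)/2)*(4 + 1)
```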
class skimage.graph.MCP(costs, offsets=None, fully_connected=True, sampling=None)
Bases: object A class for finding the minimum cost path through a given n-d costs array. Given an n-d costs array, this class can be used to find the minimum-cost path through that array from any set of points to any other set of points. Basic usage is to initialize the class and call find_costs() with one or more starting indices (and an optional list of end indices). After that, call traceback() one or more times to find the path from any given end-position to the closest starting index. New paths through the same costs array can be found by calling find_costs() repeatedly. The cost of a path is calculated simply as the sum of the values of the costs array at each point on the path. The class MCP_Geometric, on the other hand, accounts for the fact that diagonal vs. axial moves are of different lengths, and weights the path cost accordingly. Array elements with infinite or negative costs will simply be ignored, as will paths whose cumulative cost overflows to infinity. Parameters
costsndarray
offsetsiterable, optional
A list of offset tuples: each offset specifies a valid move from a given n-d position. If not provided, offsets corresponding to a singly- or fully-connected n-d neighborhood will be constructed with make_offsets(), using the fully_connected parameter value.
fully_connectedbool, optional
If no offsets are provided, this determines the connectivity of the generated neighborhood. If true, the path may go along diagonals between elements of the costs array; otherwise only axial moves are permitted.
samplingtuple, optional
For each dimension, specifies the distance between two cells/voxels. If not given or None, the distance is assumed unit. Attributes
offsetsndarray
Equivalent to the offsets provided to the constructor, or if none were so provided, the offsets created for the requested n-d neighborhood. These are useful for interpreting the traceback array returned by the find_costs() method.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation.
find_costs()
Find the minimum-cost path from the given starting points. This method finds the minimum-cost path to the specified ending indices from any one of the specified starting indices. If no end positions are given, then the minimum-cost path to every position in the costs array will be found. Parameters
startsiterable
A list of n-d starting indices (where n is the dimension of the costs array). The minimum cost path to the closest/cheapest starting point will be found.
endsiterable, optional
A list of n-d ending indices.
find_all_endsbool, optional
If ‘True’ (default), the minimum-cost-path to every specified end-position will be found; otherwise the algorithm will stop when a path is found to any end-position. (If no ends were specified, then this parameter has no effect.) Returns
cumulative_costsndarray
Same shape as the costs array; this array records the minimum cost path from the nearest/cheapest starting index to each index considered. (If ends were specified, not all elements in the array will necessarily be considered: positions not evaluated will have a cumulative cost of inf. If find_all_ends is ‘False’, only one of the specified end-positions will have a finite cumulative cost.)
tracebackndarray
Same shape as the costs array; this array contains the offset to any given index from its predecessor index. The offset indices index into the offsets attribute, which is an array of n-d offsets. In the 2-d case, if offsets[traceback[x, y]] is (-1, -1), that means that the predecessor of [x, y] in the minimum cost path to some start position is [x+1, y+1]. Note that if the offset_index is -1, then the given index was not considered.
goal_reached()
int goal_reached(int index, float cumcost) This method is called each iteration after popping an index from the heap, before examining the neighbours. This method can be overloaded to modify the behavior of the MCP algorithm. An example might be to stop the algorithm when a certain cumulative cost is reached, or when the front is a certain distance away from the seed point. This method should return 1 if the algorithm should not check the current point’s neighbours and 2 if the algorithm is now done.
traceback(end)
Trace a minimum cost path through the pre-calculated traceback array. This convenience function reconstructs the minimum cost path to a given end position from one of the starting indices provided to find_costs(), which must have been called previously. This function can be called as many times as desired after find_costs() has been run. Parameters
enditerable
An n-d index into the costs array. Returns
tracebacklist of n-d tuples
A list of indices into the costs array, starting with one of the start positions passed to find_costs(), and ending with the given end index. These indices specify the minimum-cost path from any given start index to the end index. (The total cost of that path can be read out from the cumulative_costs array returned by find_costs().) | skimage.api.skimage.graph#skimage.graph.MCP |
find_costs()
Find the minimum-cost path from the given starting points. This method finds the minimum-cost path to the specified ending indices from any one of the specified starting indices. If no end positions are given, then the minimum-cost path to every position in the costs array will be found. Parameters
startsiterable
A list of n-d starting indices (where n is the dimension of the costs array). The minimum cost path to the closest/cheapest starting point will be found.
endsiterable, optional
A list of n-d ending indices.
find_all_endsbool, optional
If ‘True’ (default), the minimum-cost-path to every specified end-position will be found; otherwise the algorithm will stop when a path is found to any end-position. (If no ends were specified, then this parameter has no effect.) Returns
cumulative_costsndarray
Same shape as the costs array; this array records the minimum cost path from the nearest/cheapest starting index to each index considered. (If ends were specified, not all elements in the array will necessarily be considered: positions not evaluated will have a cumulative cost of inf. If find_all_ends is ‘False’, only one of the specified end-positions will have a finite cumulative cost.)
tracebackndarray
Same shape as the costs array; this array contains the offset to any given index from its predecessor index. The offset indices index into the offsets attribute, which is an array of n-d offsets. In the 2-d case, if offsets[traceback[x, y]] is (-1, -1), that means that the predecessor of [x, y] in the minimum cost path to some start position is [x+1, y+1]. Note that if the offset_index is -1, then the given index was not considered.
goal_reached()
int goal_reached(int index, float cumcost) This method is called each iteration after popping an index from the heap, before examining the neighbours. This method can be overloaded to modify the behavior of the MCP algorithm. An example might be to stop the algorithm when a certain cumulative cost is reached, or when the front is a certain distance away from the seed point. This method should return 1 if the algorithm should not check the current point’s neighbours and 2 if the algorithm is now done. | skimage.api.skimage.graph#skimage.graph.MCP.goal_reached |
traceback(end)
Trace a minimum cost path through the pre-calculated traceback array. This convenience function reconstructs the minimum cost path to a given end position from one of the starting indices provided to find_costs(), which must have been called previously. This function can be called as many times as desired after find_costs() has been run. Parameters
enditerable
An n-d index into the costs array. Returns
tracebacklist of n-d tuples
A list of indices into the costs array, starting with one of the start positions passed to find_costs(), and ending with the given end index. These indices specify the minimum-cost path from any given start index to the end index. (The total cost of that path can be read out from the cumulative_costs array returned by find_costs().) | skimage.api.skimage.graph#skimage.graph.MCP.traceback |
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation. | skimage.api.skimage.graph#skimage.graph.MCP.__init__ |
class skimage.graph.MCP_Connect(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Connect source points using the distance-weighted minimum cost function. A front is grown from each seed point simultaneously, while the origin of the front is tracked as well. When two fronts meet, create_connection() is called. This method must be overloaded to deal with the found edges in a way that is appropriate for the application.
__init__(*args, **kwargs)
Initialize self. See help(type(self)) for accurate signature.
create_connection()
create_connection(id1, id2, pos1, pos2, cost1, cost2) Overload this method to keep track of the connections that are found during MCP processing. Note that a connection with the same ids can be found multiple times (but with different positions and costs). At the time that this method is called, both points are “frozen” and will not be visited again by the MCP algorithm. Parameters
id1int
The seed point id where the first neighbor originated from.
id2int
The seed point id where the second neighbor originated from.
pos1tuple
The index of the first neighbour in the connection.
pos2tuple
The index of the second neighbour in the connection.
cost1float
The cumulative cost at pos1.
cost2float
The cumulative cost at pos2. | skimage.api.skimage.graph#skimage.graph.MCP_Connect |
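A minimal sketch of such an overload (the class name and its bookkeeping are hypothetical, and the seed ids are assumed to be the indices of the points passed to find_costs()):

```python
import numpy as np
from skimage.graph import MCP_Connect

class EdgeRecorder(MCP_Connect):
    """Hypothetical subclass: keep the cheapest connection per id pair."""

    def create_connection(self, id1, id2, pos1, pos2, cost1, cost2):
        # Create the store lazily so the Cython __init__ stays untouched.
        if not hasattr(self, "connections"):
            self.connections = {}
        key = (min(id1, id2), max(id1, id2))
        total = cost1 + cost2
        prev = self.connections.get(key)
        if prev is None or total < prev[0]:
            self.connections[key] = (total, pos1, pos2)

costs = np.ones((5, 5))
mcp = EdgeRecorder(costs)
# Fronts grow from the two opposite corners and meet in the middle.
mcp.find_costs([(0, 0), (4, 4)])
print(mcp.connections)  # one entry: the cheapest meeting of the two fronts
```

Because create_connection() can fire several times for the same id pair, the subclass keeps only the cheapest meeting point.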
create_connection()
create_connection(id1, id2, pos1, pos2, cost1, cost2) Overload this method to keep track of the connections that are found during MCP processing. Note that a connection with the same ids can be found multiple times (but with different positions and costs). At the time that this method is called, both points are “frozen” and will not be visited again by the MCP algorithm. Parameters
id1int
The seed point id where the first neighbor originated from.
id2int
The seed point id where the second neighbor originated from.
pos1tuple
The index of the first neighbour in the connection.
pos2tuple
The index of the second neighbour in the connection.
cost1float
The cumulative cost at pos1.
cost2float
The cumulative cost at pos2. | skimage.api.skimage.graph#skimage.graph.MCP_Connect.create_connection |
__init__(*args, **kwargs)
Initialize self. See help(type(self)) for accurate signature. | skimage.api.skimage.graph#skimage.graph.MCP_Connect.__init__ |
class skimage.graph.MCP_Flexible(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Find minimum cost paths through an N-d costs array. See the documentation for MCP for full details. This class differs from MCP in that several methods can be overloaded (from pure Python) to modify the behavior of the algorithm and/or create custom algorithms based on MCP. Note that goal_reached can also be overloaded in the MCP class.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation.
examine_neighbor(index, new_index, offset_length)
This method is called once for every pair of neighboring nodes, as soon as both nodes are frozen. This method can be overloaded to obtain information about neighboring nodes, and/or to modify the behavior of the MCP algorithm. One example is the MCP_Connect class, which checks for meeting fronts using this hook.
travel_cost(old_cost, new_cost, offset_length)
This method calculates the travel cost for going from the current node to the next. The default implementation returns new_cost. Overload this method to adapt the behaviour of the algorithm.
update_node(index, new_index, offset_length)
This method is called when a node is updated, right after new_index is pushed onto the heap and the traceback map is updated. This method can be overloaded to keep track of other arrays that are used by a specific implementation of the algorithm. For instance the MCP_Connect class uses it to update an id map. | skimage.api.skimage.graph#skimage.graph.MCP_Flexible |
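For instance, travel_cost() can be overloaded to add a fixed per-step penalty, which biases the search toward paths with fewer moves. The class name and the 0.1 penalty below are illustrative:

```python
import numpy as np
from skimage.graph import MCP_Flexible

class PenalizedMCP(MCP_Flexible):
    """Hypothetical subclass: every move costs a little extra."""

    def travel_cost(self, old_cost, new_cost, offset_length):
        # The default implementation returns new_cost unchanged.
        return new_cost + 0.1

costs = np.ones((4, 4))
mcp = PenalizedMCP(costs, fully_connected=True)
cumulative_costs, _ = mcp.find_costs([(0, 0)])

# Start cost 1.0, then three diagonal moves at 1.0 + 0.1 each.
print(cumulative_costs[3, 3])  # ~4.3 (1.0 + 3 * 1.1)
```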
examine_neighbor(index, new_index, offset_length)
This method is called once for every pair of neighboring nodes, as soon as both nodes are frozen. This method can be overloaded to obtain information about neighboring nodes, and/or to modify the behavior of the MCP algorithm. One example is the MCP_Connect class, which checks for meeting fronts using this hook. | skimage.api.skimage.graph#skimage.graph.MCP_Flexible.examine_neighbor |
travel_cost(old_cost, new_cost, offset_length)
This method calculates the travel cost for going from the current node to the next. The default implementation returns new_cost. Overload this method to adapt the behaviour of the algorithm. | skimage.api.skimage.graph#skimage.graph.MCP_Flexible.travel_cost |
update_node(index, new_index, offset_length)
This method is called when a node is updated, right after new_index is pushed onto the heap and the traceback map is updated. This method can be overloaded to keep track of other arrays that are used by a specific implementation of the algorithm. For instance the MCP_Connect class uses it to update an id map. | skimage.api.skimage.graph#skimage.graph.MCP_Flexible.update_node |
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation. | skimage.api.skimage.graph#skimage.graph.MCP_Flexible.__init__ |
class skimage.graph.MCP_Geometric(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Find distance-weighted minimum cost paths through an n-d costs array. See the documentation for MCP for full details. This class differs from MCP in that the cost of a path is not simply the sum of the costs along that path. This class instead assumes that the costs array contains at each position the “cost” of a unit distance of travel through that position. For example, a move (in 2-d) from (1, 1) to (1, 2) is assumed to originate in the center of the pixel (1, 1) and terminate in the center of (1, 2). The entire move is of distance 1, half through (1, 1) and half through (1, 2); thus the cost of that move is (1/2)*costs[1,1] + (1/2)*costs[1,2]. On the other hand, a move from (1, 1) to (2, 2) is along the diagonal and is sqrt(2) in length. Half of this move is within the pixel (1, 1) and the other half in (2, 2), so the cost of this move is calculated as (sqrt(2)/2)*costs[1,1] + (sqrt(2)/2)*costs[2,2]. These calculations don’t make a lot of sense with offsets of magnitude greater than 1. Use the sampling argument in order to deal with anisotropic data.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation. | skimage.api.skimage.graph#skimage.graph.MCP_Geometric |
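The half-pixel accounting described above can be checked numerically on an illustrative 2x2 grid:

```python
import numpy as np
from skimage.graph import MCP_Geometric

costs = np.array([[1.0, 3.0],
                  [10.0, 12.0]])

mcp = MCP_Geometric(costs, fully_connected=True)
cumulative_costs, _ = mcp.find_costs([(0, 0)])

# Diagonal move (0, 0) -> (1, 1): sqrt(2)/2 of the path in each pixel.
diagonal = np.sqrt(2) / 2 * (costs[0, 0] + costs[1, 1])   # ~9.1924
# Axial route via (0, 1): two unit moves, half in each pixel traversed.
axial = 0.5 * (costs[0, 0] + costs[0, 1]) + 0.5 * (costs[0, 1] + costs[1, 1])  # 9.5
print(cumulative_costs[1, 1])  # the cheaper of the two: the diagonal estimate
```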
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation. | skimage.api.skimage.graph#skimage.graph.MCP_Geometric.__init__ |
skimage.graph.route_through_array(array, start, end, fully_connected=True, geometric=True) [source]
Simple example of how to use the MCP and MCP_Geometric classes. See the MCP and MCP_Geometric class documentation for explanation of the path-finding algorithm. Parameters
arrayndarray
Array of costs.
startiterable
n-d index into array defining the starting point
enditerable
n-d index into array defining the end point
fully_connectedbool (optional)
If True, diagonal moves are permitted, if False, only axial moves.
geometricbool (optional)
If True, the MCP_Geometric class is used to calculate costs, if False, the MCP base class is used. See the class documentation for an explanation of the differences between MCP and MCP_Geometric. Returns
pathlist
List of n-d index tuples defining the path from start to end.
costfloat
Cost of the path. If geometric is False, the cost of the path is the sum of the values of array along the path. If geometric is True, a finer computation is made (see the documentation of the MCP_Geometric class). See also
MCP, MCP_Geometric
Examples >>> import numpy as np
>>> from skimage.graph import route_through_array
>>>
>>> image = np.array([[1, 3], [10, 12]])
>>> image
array([[ 1, 3],
[10, 12]])
>>> # Forbid diagonal steps
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False)
([(0, 0), (0, 1), (1, 1)], 9.5)
>>> # Now allow diagonal steps: the path goes directly from start to end
>>> route_through_array(image, [0, 0], [1, 1])
([(0, 0), (1, 1)], 9.19238815542512)
>>> # Cost is the sum of array values along the path (16 = 1 + 3 + 12)
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False,
... geometric=False)
([(0, 0), (0, 1), (1, 1)], 16.0)
>>> # Larger array where we display the path that is selected
>>> image = np.arange((36)).reshape((6, 6))
>>> image
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
>>> # Find the path with lowest cost
>>> indices, weight = route_through_array(image, (0, 0), (5, 5))
>>> indices = np.stack(indices, axis=-1)
>>> path = np.zeros_like(image)
>>> path[indices[0], indices[1]] = 1
>>> path
array([[1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1]]) | skimage.api.skimage.graph#skimage.graph.route_through_array |
skimage.graph.shortest_path(arr, reach=1, axis=-1, output_indexlist=False) [source]
Find the shortest path through an n-d array from one side to another. Parameters
arrndarray of float64
reachint, optional
By default (reach = 1), the shortest path can only move one row up or down for every step it moves forward (i.e., the path gradient is limited to 1). reach defines the number of elements that can be skipped along each non-axis dimension at each step.
axisint, optional
The axis along which the path must always move forward (default -1)
output_indexlistbool, optional
See return value p for explanation. Returns
piterable of int
For each step along axis, the coordinate of the shortest path. If output_indexlist is True, then the path is returned as a list of n-d tuples that index into arr. If False, then the path is returned as an array listing the coordinates of the path along the non-axis dimensions for each step along the axis dimension. That is, p.shape == (arr.shape[axis], arr.ndim-1) except that p is squeezed before returning so if arr.ndim == 2, then p.shape == (arr.shape[axis],)
costfloat
Cost of path. This is the absolute sum of all the differences along the path. | skimage.api.skimage.graph#skimage.graph.shortest_path |
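A small sketch (array values illustrative): with reach=1 the path may shift at most one row per column, and a path whose chosen values are all equal accrues zero cost.

```python
import numpy as np
from skimage.graph import shortest_path

arr = np.array([[1.0, 1.0, 5.0],
                [5.0, 1.0, 1.0]])

# Move left to right (axis=-1), shifting at most one row per step.
p, cost = shortest_path(arr, reach=1, axis=-1)

print(arr[p, np.arange(arr.shape[1])])  # [1. 1. 1.]: the path follows the 1s
print(cost)                             # 0.0: no differences along the path
```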
skimage.img_as_bool(image, force_copy=False) [source]
Convert an image to boolean format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of bool (bool_)
Output image. Notes The upper half of the input dtype’s positive range is True, and the lower half is False. All negative values (if present) are False. | skimage.api.skimage#skimage.img_as_bool |
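For uint8 input, that rule places the cut between 127 and 128 (a minimal sketch):

```python
import numpy as np
from skimage import img_as_bool

img = np.array([0, 100, 127, 128, 255], dtype=np.uint8)
print(img_as_bool(img))  # [False False False  True  True]
```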
skimage.img_as_float(image, force_copy=False) [source]
Convert an image to floating point format. This function is similar to img_as_float64, but will not convert lower-precision floating point arrays to float64. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of float
Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. | skimage.api.skimage#skimage.img_as_float |
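A minimal sketch of both behaviours, scaling from uint8 and passing a float32 image through unchanged:

```python
import numpy as np
from skimage import img_as_float

img = np.array([0, 128, 255], dtype=np.uint8)
f = img_as_float(img)
print(f.dtype)  # float64
print(f)        # [0. 0.50196078 1.]: each uint8 value divided by 255

# Lower-precision float input is passed through, not promoted to float64.
f32 = img_as_float(np.array([0.25], dtype=np.float32))
print(f32.dtype)  # float32
```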
skimage.img_as_float32(image, force_copy=False) [source]
Convert an image to single-precision (32-bit) floating point format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of float32
Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. | skimage.api.skimage#skimage.img_as_float32 |
skimage.img_as_float64(image, force_copy=False) [source]
Convert an image to double-precision (64-bit) floating point format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of float64
Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. | skimage.api.skimage#skimage.img_as_float64 |
skimage.img_as_int(image, force_copy=False) [source]
Convert an image to 16-bit signed integer format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of int16
Output image. Notes The values are scaled between -32768 and 32767. If the input data-type is positive-only (e.g., uint8), then the output image will still only have positive values. | skimage.api.skimage#skimage.img_as_int |
skimage.img_as_ubyte(image, force_copy=False) [source]
Convert an image to 8-bit unsigned integer format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of ubyte (uint8)
Output image. Notes Negative input values will be clipped. Positive values are scaled between 0 and 255. | skimage.api.skimage#skimage.img_as_ubyte |
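A minimal sketch of the scaling from a float image in [0, 1]:

```python
import numpy as np
from skimage import img_as_ubyte

img = np.array([0.0, 0.25, 0.5, 1.0])
u = img_as_ubyte(img)
print(u.dtype)  # uint8
print(u)        # values scaled to [0, 255] and rounded
```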
skimage.img_as_uint(image, force_copy=False) [source]
Convert an image to 16-bit unsigned integer format. Parameters
imagendarray
Input image.
force_copybool, optional
Force a copy of the data, irrespective of its current dtype. Returns
outndarray of uint16
Output image. Notes Negative input values will be clipped. Positive values are scaled between 0 and 65535. | skimage.api.skimage#skimage.img_as_uint |
Module: io Utilities to read and write images in various formats. The following plug-ins are available:
Plugin Description
qt Fast image display using the Qt library. Deprecated since 0.18. Will be removed in 0.20.
imread Image reading and writing via imread
gdal Image reading via the GDAL Library (www.gdal.org)
simpleitk Image reading and writing via SimpleITK
gtk Fast image display using the GTK library
pil Image reading via the Python Imaging Library
fits FITS image reading via PyFITS
matplotlib Display or save images using Matplotlib
tifffile Load and save TIFF and TIFF-based images using tifffile.py
imageio Image reading via the ImageIO Library
skimage.io.call_plugin(kind, *args, **kwargs) Find the appropriate plugin of ‘kind’ and execute it.
skimage.io.concatenate_images(ic) Concatenate all images in the image collection into an array.
skimage.io.find_available_plugins([loaded]) List available plugins.
skimage.io.imread(fname[, as_gray, plugin]) Load an image from file.
skimage.io.imread_collection(load_pattern[, …]) Load a collection of images.
skimage.io.imread_collection_wrapper(imread)
skimage.io.imsave(fname, arr[, plugin, …]) Save an image to file.
skimage.io.imshow(arr[, plugin]) Display an image.
skimage.io.imshow_collection(ic[, plugin]) Display a collection of images.
skimage.io.load_sift(f) Read SIFT or SURF features from externally generated file.
skimage.io.load_surf(f) Read SIFT or SURF features from externally generated file.
skimage.io.plugin_info(plugin) Return plugin meta-data.
skimage.io.plugin_order() Return the currently preferred plugin order.
skimage.io.pop() Pop an image from the shared image stack.
skimage.io.push(img) Push an image onto the shared image stack.
skimage.io.reset_plugins()
skimage.io.show() Display pending images.
skimage.io.use_plugin(name[, kind]) Set the default plugin for a specified operation.
skimage.io.ImageCollection(load_pattern[, …]) Load and manage a collection of image files.
skimage.io.MultiImage(filename[, …]) A class containing all frames from multi-frame images.
skimage.io.collection Data structures to hold collections of images, with optional caching.
skimage.io.manage_plugins Handle image reading, writing and plotting plugins.
skimage.io.sift
skimage.io.util call_plugin
skimage.io.call_plugin(kind, *args, **kwargs) [source]
Find the appropriate plugin of ‘kind’ and execute it. Parameters
kind{‘imshow’, ‘imsave’, ‘imread’, ‘imread_collection’}
Function to look up.
pluginstr, optional
Plugin to load. Defaults to None, in which case the first matching plugin is used.
*args, **kwargsarguments and keyword arguments
Passed to the plugin function.
concatenate_images
skimage.io.concatenate_images(ic) [source]
Concatenate all images in the image collection into an array. Parameters
ican iterable of images
The images to be concatenated. Returns
array_catndarray
An array having one more dimension than the images in ic. Raises
ValueError
If images in ic don’t have identical shapes. See also
ImageCollection.concatenate, MultiImage.concatenate
Notes concatenate_images receives any iterable object containing images, including ImageCollection and MultiImage, and returns a NumPy array.
find_available_plugins
skimage.io.find_available_plugins(loaded=False) [source]
List available plugins. Parameters
loadedbool
If True, show only those plugins currently loaded. By default, all plugins are shown. Returns
pdict
Dictionary with plugin names as keys and exposed functions as values.
imread
skimage.io.imread(fname, as_gray=False, plugin=None, **plugin_args) [source]
Load an image from file. Parameters
fnamestring
Image file name, e.g. test.jpg or URL.
as_graybool, optional
If True, convert color images to gray-scale (64-bit floats). Images that are already in gray-scale format are not converted.
pluginstr, optional
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. If not given and fname is a tiff file, the tifffile plugin will be used. Returns
img_arrayndarray
The different color bands/channels are stored in the third dimension, such that a gray-image is MxN, an RGB-image MxNx3 and an RGBA-image MxNx4. Other Parameters
plugin_argskeywords
Passed to the given plugin.
imread_collection
skimage.io.imread_collection(load_pattern, conserve_memory=True, plugin=None, **plugin_args) [source]
Load a collection of images. Parameters
load_patternstr or list
List of objects to load. These are usually filenames, but may vary depending on the currently active plugin. See the docstring for ImageCollection for the default behaviour of this parameter.
conserve_memorybool, optional
If True, never keep more than one image in memory at a given time. Otherwise, images will be cached once they are loaded. Returns
icImageCollection
Collection of images. Other Parameters
plugin_argskeywords
Passed to the given plugin.
imread_collection_wrapper
skimage.io.imread_collection_wrapper(imread) [source]
imsave
skimage.io.imsave(fname, arr, plugin=None, check_contrast=True, **plugin_args) [source]
Save an image to file. Parameters
fnamestr
Target filename.
arrndarray of shape (M,N) or (M,N,3) or (M,N,4)
Image data.
pluginstr, optional
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. If not given and fname is a tiff file, the tifffile plugin will be used.
check_contrastbool, optional
Check for low contrast and print warning (default: True). Other Parameters
plugin_argskeywords
Passed to the given plugin. Notes When saving a JPEG, the compression ratio may be controlled using the quality keyword argument which is an integer with values in [1, 100] where 1 is worst quality and smallest file size, and 100 is best quality and largest file size (default 75). This is only available when using the PIL and imageio plugins.
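A minimal save/load round trip through a temporary PNG file (filenames are illustrative; PNG is lossless for uint8 data, so the array survives unchanged):

```python
import os
import tempfile

import numpy as np
from skimage import io

img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 255  # a white square, so the contrast check stays quiet

fname = os.path.join(tempfile.mkdtemp(), "square.png")
io.imsave(fname, img)
loaded = io.imread(fname)

print(loaded.shape, loaded.dtype)   # (16, 16) uint8
print(np.array_equal(loaded, img))  # True
```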
imshow
skimage.io.imshow(arr, plugin=None, **plugin_args) [source]
Display an image. Parameters
arrndarray or str
Image data or name of image file.
pluginstr
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. Other Parameters
plugin_argskeywords
Passed to the given plugin.
Examples using skimage.io.imshow
Explore 3D images (of cells) imshow_collection
skimage.io.imshow_collection(ic, plugin=None, **plugin_args) [source]
Display a collection of images. Parameters
icImageCollection
Collection to display.
pluginstr
Name of plugin to use. By default, the different plugins are tried until a suitable candidate is found. Other Parameters
plugin_argskeywords
Passed to the given plugin.
load_sift
skimage.io.load_sift(f) [source]
Read SIFT or SURF features from externally generated file. This routine reads SIFT or SURF files generated by binary utilities from http://people.cs.ubc.ca/~lowe/keypoints/ and http://www.vision.ee.ethz.ch/~surf/. This routine does not generate SIFT/SURF features from an image. These algorithms are patent encumbered. Please use skimage.feature.CENSURE instead. Parameters
filelikestring or open file
Input file generated by the feature detectors from http://people.cs.ubc.ca/~lowe/keypoints/ or http://www.vision.ee.ethz.ch/~surf/ .
mode{‘SIFT’, ‘SURF’}, optional
Kind of descriptor used to generate filelike. Returns
datarecord array with fields
row: int
row position of feature
column: int
column position of feature
scale: float
feature scale
orientation: float
feature orientation
data: array
feature values
load_surf
skimage.io.load_surf(f) [source]
Read SIFT or SURF features from externally generated file. This routine reads SIFT or SURF files generated by binary utilities from http://people.cs.ubc.ca/~lowe/keypoints/ and http://www.vision.ee.ethz.ch/~surf/. This routine does not generate SIFT/SURF features from an image. These algorithms are patent encumbered. Please use skimage.feature.CENSURE instead. Parameters
filelikestring or open file
Input file generated by the feature detectors from http://people.cs.ubc.ca/~lowe/keypoints/ or http://www.vision.ee.ethz.ch/~surf/ .
mode{‘SIFT’, ‘SURF’}, optional
Kind of descriptor used to generate filelike. Returns
datarecord array with fields
row: int
row position of feature
column: int
column position of feature
scale: float
feature scale
orientation: float
feature orientation
data: array
feature values
plugin_info
skimage.io.plugin_info(plugin) [source]
Return plugin meta-data. Parameters
pluginstr
Name of plugin. Returns
mdict
Meta data as specified in plugin .ini.
plugin_order
skimage.io.plugin_order() [source]
Return the currently preferred plugin order. Returns
pdict
Dictionary of preferred plugin order, with function name as key and plugins (in order of preference) as value.
pop
skimage.io.pop() [source]
Pop an image from the shared image stack. Returns
imgndarray
Image popped from the stack.
push
skimage.io.push(img) [source]
Push an image onto the shared image stack. Parameters
imgndarray
Image to push.
reset_plugins
skimage.io.reset_plugins() [source]
show
skimage.io.show() [source]
Display pending images. Launch the event loop of the current gui plugin, and display all pending images, queued via imshow. This is required when using imshow from non-interactive scripts. A call to show will block execution of code until all windows have been closed. Examples >>> import numpy as np
>>> import skimage.io as io
>>> for i in range(4):
... ax_im = io.imshow(np.random.rand(50, 50))
>>> io.show()
use_plugin
skimage.io.use_plugin(name, kind=None) [source]
Set the default plugin for a specified operation. The plugin will be loaded if it hasn’t been already. Parameters
namestr
Name of plugin.
kind{‘imsave’, ‘imread’, ‘imshow’, ‘imread_collection’, ‘imshow_collection’}, optional
Set the plugin for this function. By default, the plugin is set for all functions. See also
available_plugins
List of available plugins Examples To use Matplotlib as the default image reader, you would write: >>> from skimage import io
>>> io.use_plugin('matplotlib', 'imread')
To see a list of available plugins run io.available_plugins. Note that this lists plugins that are defined, but the full list may not be usable if your system does not have the required libraries installed.
ImageCollection
class skimage.io.ImageCollection(load_pattern, conserve_memory=True, load_func=None, **load_func_kwargs) [source]
Bases: object Load and manage a collection of image files. Parameters
load_patternstr or list of str
Pattern string or list of strings to load. The filename path can be absolute or relative.
conserve_memorybool, optional
If True, ImageCollection does not keep more than one image in memory at a given time. Otherwise, images will be cached once they are loaded. Other Parameters
load_funccallable
imread by default. See notes below. Notes Note that files are always returned in alphanumerical order. Also note that slicing returns a new ImageCollection, not a view into the data. ImageCollection can be modified to load images from an arbitrary source by specifying a combination of load_pattern and load_func. For an ImageCollection ic, ic[5] uses load_func(load_pattern[5]) to load the image. Imagine, for example, an ImageCollection that loads every third frame from a video file: video_file = 'no_time_for_that_tiny.gif'
def vidread_step(f, step):
vid = imageio.get_reader(f)
seq = [v for v in vid.iter_data()]
return seq[::step]
ic = ImageCollection(video_file, load_func=vidread_step, step=3)
ic # is an ImageCollection object of length 1 because there is 1 file
x = ic[0] # calls vidread_step(video_file, step=3)
x[5] # is the sixth element of a list of length 8 (24 / 3)
Another use of load_func would be to convert all images to uint8: def imread_convert(f):
return imread(f).astype(np.uint8)
ic = ImageCollection('/tmp/*.png', load_func=imread_convert)
Examples >>> import skimage.io as io
>>> from skimage import data_dir
>>> coll = io.ImageCollection(data_dir + '/chess*.png')
>>> len(coll)
2
>>> coll[0].shape
(200, 200)
>>> ic = io.ImageCollection(['/tmp/work/*.png', '/tmp/other/*.jpg'])
Attributes
fileslist of str
If a pattern string is given for load_pattern, this attribute stores the expanded file list. Otherwise, this is equal to load_pattern.
__init__(load_pattern, conserve_memory=True, load_func=None, **load_func_kwargs) [source]
Load and manage a collection of images.
concatenate() [source]
Concatenate all images in the collection into an array. Returns
arnp.ndarray
An array having one more dimension than the images in self. Raises
ValueError
If images in the ImageCollection don’t have identical shapes. See also
concatenate_images
property conserve_memory
property files
reload(n=None) [source]
Clear the image cache. Parameters
nNone or int
Clear the cache for this image only. By default, the entire cache is erased.
MultiImage
class skimage.io.MultiImage(filename, conserve_memory=True, dtype=None, **imread_kwargs) [source]
Bases: skimage.io.collection.ImageCollection A class containing all frames from multi-frame images. Parameters
load_patternstr or list of str
Pattern glob or filenames to load. The path can be absolute or relative.
conserve_memorybool, optional
Whether to conserve memory by only caching a single frame. Default is True. Other Parameters
load_funccallable
imread by default. See notes below. Notes If conserve_memory=True the memory footprint can be reduced, however the performance can be affected because frames have to be read from file more often. The last accessed frame is cached, all other frames will have to be read from file. The current implementation makes use of tifffile for Tiff files and PIL otherwise. Examples >>> from skimage import data_dir
>>> img = MultiImage(data_dir + '/multipage.tif')
>>> len(img)
2
>>> for frame in img:
... print(frame.shape)
(15, 10)
(15, 10)
__init__(filename, conserve_memory=True, dtype=None, **imread_kwargs) [source]
Load a multi-img.
property filename | skimage.api.skimage.io |
skimage.io.call_plugin(kind, *args, **kwargs) [source]
Find the appropriate plugin of ‘kind’ and execute it. Parameters
kind{‘imshow’, ‘imsave’, ‘imread’, ‘imread_collection’}
Function to look up.
pluginstr, optional
Plugin to load. Defaults to None, in which case the first matching plugin is used.
*args, **kwargsarguments and keyword arguments
Passed to the plugin function. | skimage.api.skimage.io#skimage.io.call_plugin |
skimage.io.concatenate_images(ic) [source]
Concatenate all images in the image collection into an array. Parameters
ican iterable of images
The images to be concatenated. Returns
array_catndarray
An array having one more dimension than the images in ic. Raises
ValueError
If images in ic don’t have identical shapes. See also
ImageCollection.concatenate, MultiImage.concatenate
Notes concatenate_images receives any iterable object containing images, including ImageCollection and MultiImage, and returns a NumPy array. | skimage.api.skimage.io#skimage.io.concatenate_images |
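A minimal sketch using a plain list of frames (the values are illustrative):

```python
import numpy as np
from skimage.io import concatenate_images

# Any iterable of identically shaped images works, not only an ImageCollection.
frames = [np.full((2, 2), i, dtype=np.uint8) for i in range(3)]
stack = concatenate_images(frames)
print(stack.shape)  # (3, 2, 2): one extra leading dimension
```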
skimage.io.find_available_plugins(loaded=False) [source]
List available plugins. Parameters
loadedbool
If True, show only those plugins currently loaded. By default, all plugins are shown. Returns
pdict
Dictionary with plugin names as keys and exposed functions as values. | skimage.api.skimage.io#skimage.io.find_available_plugins |
class skimage.io.ImageCollection(load_pattern, conserve_memory=True, load_func=None, **load_func_kwargs) [source]
Bases: object Load and manage a collection of image files. Parameters
load_patternstr or list of str
Pattern string or list of strings to load. The filename path can be absolute or relative.
conserve_memorybool, optional
If True, ImageCollection does not keep more than one image in memory at a given time. Otherwise, images will be cached once they are loaded. Other Parameters
load_funccallable
imread by default. See notes below. Notes Note that files are always returned in alphanumerical order. Also note that slicing returns a new ImageCollection, not a view into the data. ImageCollection can be modified to load images from an arbitrary source by specifying a combination of load_pattern and load_func. For an ImageCollection ic, ic[5] uses load_func(load_pattern[5]) to load the image. Imagine, for example, an ImageCollection that loads every third frame from a video file: video_file = 'no_time_for_that_tiny.gif'
def vidread_step(f, step):
vid = imageio.get_reader(f)
seq = [v for v in vid.iter_data()]
return seq[::step]
ic = ImageCollection(video_file, load_func=vidread_step, step=3)
ic # is an ImageCollection object of length 1 because there is 1 file
x = ic[0] # calls vidread_step(video_file, step=3)
x[5] # is the sixth element of a list of length 8 (24 / 3)
Another use of load_func would be to convert all images to uint8: def imread_convert(f):
return imread(f).astype(np.uint8)
ic = ImageCollection('/tmp/*.png', load_func=imread_convert)
Examples >>> import skimage.io as io
>>> from skimage import data_dir
>>> coll = io.ImageCollection(data_dir + '/chess*.png')
>>> len(coll)
2
>>> coll[0].shape
(200, 200)
>>> ic = io.ImageCollection(['/tmp/work/*.png', '/tmp/other/*.jpg'])
Attributes
fileslist of str
If a pattern string is given for load_pattern, this attribute stores the expanded file list. Otherwise, this is equal to load_pattern.
__init__(load_pattern, conserve_memory=True, load_func=None, **load_func_kwargs) [source]
Load and manage a collection of images.
concatenate() [source]
Concatenate all images in the collection into an array. Returns
arnp.ndarray
An array having one more dimension than the images in self. Raises
ValueError
If images in the ImageCollection don’t have identical shapes. See also
concatenate_images
property conserve_memory
property files
reload(n=None) [source]
Clear the image cache. Parameters
nNone or int
Clear the cache for this image only. By default, the entire cache is erased. | skimage.api.skimage.io#skimage.io.ImageCollection |
concatenate() [source]
Concatenate all images in the collection into an array. Returns
arnp.ndarray
An array having one more dimension than the images in self. Raises
ValueError
If images in the ImageCollection don’t have identical shapes. See also
concatenate_images | skimage.api.skimage.io#skimage.io.ImageCollection.concatenate |
property conserve_memory | skimage.api.skimage.io#skimage.io.ImageCollection.conserve_memory |
property files | skimage.api.skimage.io#skimage.io.ImageCollection.files |
reload(n=None) [source]
Clear the image cache. Parameters
nNone or int
Clear the cache for this image only. By default, the entire cache is erased. | skimage.api.skimage.io#skimage.io.ImageCollection.reload |
__init__(load_pattern, conserve_memory=True, load_func=None, **load_func_kwargs) [source]
Load and manage a collection of images. | skimage.api.skimage.io#skimage.io.ImageCollection.__init__ |
skimage.io.imread(fname, as_gray=False, plugin=None, **plugin_args) [source]
Load an image from file. Parameters
fnamestring
Image file name, e.g. test.jpg or URL.
as_graybool, optional
If True, convert color images to gray-scale (64-bit floats). Images that are already in gray-scale format are not converted.
pluginstr, optional
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. If not given and fname is a tiff file, the tifffile plugin will be used. Returns
img_arrayndarray
The different color bands/channels are stored in the third dimension, such that a gray-image is MxN, an RGB-image MxNx3 and an RGBA-image MxNx4. Other Parameters
plugin_argskeywords
Passed to the given plugin. | skimage.api.skimage.io#skimage.io.imread |
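A minimal round-trip sketch of imread, assuming a PNG-capable plugin is installed; the file name and gradient data are purely illustrative:

```python
import os
import tempfile

import numpy as np
import skimage.io as io

# Save a small RGB image, then read it back.
path = os.path.join(tempfile.mkdtemp(), "gradient.png")
img = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)
io.imsave(path, img)

loaded = io.imread(path)              # uint8, shape (8, 8, 3)
gray = io.imread(path, as_gray=True)  # floats, shape (8, 8)
```

Note that as_gray converts to 64-bit floats, so the round trip is not dtype-preserving for color inputs.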
skimage.io.imread_collection(load_pattern, conserve_memory=True, plugin=None, **plugin_args) [source]
Load a collection of images. Parameters
load_patternstr or list
List of objects to load. These are usually filenames, but may vary depending on the currently active plugin. See the docstring for ImageCollection for the default behaviour of this parameter.
conserve_memorybool, optional
If True, never keep more than one image in memory at a time. Otherwise, images will be cached once they are loaded. Returns
icImageCollection
Collection of images. Other Parameters
plugin_argskeywords
Passed to the given plugin. | skimage.api.skimage.io#skimage.io.imread_collection |
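A short sketch of imread_collection with a glob pattern, using throwaway frames written to a temporary directory (file names and pixel values are illustrative):

```python
import os
import tempfile

import numpy as np
import skimage.io as io

# Write a few small frames, then load them as a collection.
d = tempfile.mkdtemp()
for i in range(3):
    frame = np.full((4, 4), i * 100, dtype=np.uint8)
    io.imsave(os.path.join(d, f"frame{i}.png"), frame, check_contrast=False)

ic = io.imread_collection(os.path.join(d, "*.png"))
print(len(ic))                 # 3
print(ic.concatenate().shape)  # (3, 4, 4)
```

Files are returned in alphanumerical order, so frame0 is ic[0].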
skimage.io.imread_collection_wrapper(imread) [source] | skimage.api.skimage.io#skimage.io.imread_collection_wrapper |
skimage.io.imsave(fname, arr, plugin=None, check_contrast=True, **plugin_args) [source]
Save an image to file. Parameters
fnamestr
Target filename.
arrndarray of shape (M,N) or (M,N,3) or (M,N,4)
Image data.
pluginstr, optional
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. If not given and fname is a tiff file, the tifffile plugin will be used.
check_contrastbool, optional
Check for low contrast and print warning (default: True). Other Parameters
plugin_argskeywords
Passed to the given plugin. Notes When saving a JPEG, the compression ratio may be controlled using the quality keyword argument which is an integer with values in [1, 100] where 1 is worst quality and smallest file size, and 100 is best quality and largest file size (default 75). This is only available when using the PIL and imageio plugins. | skimage.api.skimage.io#skimage.io.imsave |
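A hedged sketch of the quality keyword described in the Notes, assuming the active plugin (PIL or imageio) honors it; noisy data compresses poorly, so the size gap between qualities is pronounced:

```python
import os
import tempfile

import numpy as np
import skimage.io as io

d = tempfile.mkdtemp()
rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

low = os.path.join(d, "low.jpg")
high = os.path.join(d, "high.jpg")
# Lower quality -> stronger JPEG compression -> smaller file.
io.imsave(low, noise, quality=10)
io.imsave(high, noise, quality=95)
print(os.path.getsize(low) < os.path.getsize(high))  # True
```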
skimage.io.imshow(arr, plugin=None, **plugin_args) [source]
Display an image. Parameters
arrndarray or str
Image data or name of image file.
pluginstr
Name of plugin to use. By default, the different plugins are tried (starting with imageio) until a suitable candidate is found. Other Parameters
plugin_argskeywords
Passed to the given plugin. | skimage.api.skimage.io#skimage.io.imshow |
skimage.io.imshow_collection(ic, plugin=None, **plugin_args) [source]
Display a collection of images. Parameters
icImageCollection
Collection to display.
pluginstr
Name of plugin to use. By default, the different plugins are tried until a suitable candidate is found. Other Parameters
plugin_argskeywords
Passed to the given plugin. | skimage.api.skimage.io#skimage.io.imshow_collection |
skimage.io.load_sift(f) [source]
Read SIFT or SURF features from externally generated file. This routine reads SIFT or SURF files generated by binary utilities from http://people.cs.ubc.ca/~lowe/keypoints/ and http://www.vision.ee.ethz.ch/~surf/. This routine does not generate SIFT/SURF features from an image. These algorithms are patent encumbered. Please use skimage.feature.CENSURE instead. Parameters
filelikestring or open file
Input file generated by the feature detectors from http://people.cs.ubc.ca/~lowe/keypoints/ or http://www.vision.ee.ethz.ch/~surf/ .
mode{‘SIFT’, ‘SURF’}, optional
Kind of descriptor used to generate filelike. Returns
datarecord array with fields
row: int
row position of feature
column: int
column position of feature
scale: float
feature scale
orientation: float
feature orientation
data: array
feature values | skimage.api.skimage.io#skimage.io.load_sift |
skimage.io.load_surf(f) [source]
Read SIFT or SURF features from externally generated file. This routine reads SIFT or SURF files generated by binary utilities from http://people.cs.ubc.ca/~lowe/keypoints/ and http://www.vision.ee.ethz.ch/~surf/. This routine does not generate SIFT/SURF features from an image. These algorithms are patent encumbered. Please use skimage.feature.CENSURE instead. Parameters
filelikestring or open file
Input file generated by the feature detectors from http://people.cs.ubc.ca/~lowe/keypoints/ or http://www.vision.ee.ethz.ch/~surf/ .
mode{‘SIFT’, ‘SURF’}, optional
Kind of descriptor used to generate filelike. Returns
datarecord array with fields
row: int
row position of feature
column: int
column position of feature
scale: float
feature scale
orientation: float
feature orientation
data: array
feature values | skimage.api.skimage.io#skimage.io.load_surf |
class skimage.io.MultiImage(filename, conserve_memory=True, dtype=None, **imread_kwargs) [source]
Bases: skimage.io.collection.ImageCollection A class containing all frames from multi-frame images. Parameters
load_patternstr or list of str
Pattern glob or filenames to load. The path can be absolute or relative.
conserve_memorybool, optional
Whether to conserve memory by only caching a single frame. Default is True. Other Parameters
load_funccallable
imread by default. See notes below. Notes If conserve_memory=True the memory footprint can be reduced, however the performance can be affected because frames have to be read from file more often. The last accessed frame is cached, all other frames will have to be read from file. The current implementation makes use of tifffile for Tiff files and PIL otherwise. Examples >>> from skimage import data_dir
>>> img = MultiImage(data_dir + '/multipage.tif')
>>> len(img)
2
>>> for frame in img:
... print(frame.shape)
(15, 10)
(15, 10)
__init__(filename, conserve_memory=True, dtype=None, **imread_kwargs) [source]
Load a multi-img.
property filename | skimage.api.skimage.io#skimage.io.MultiImage |
property filename | skimage.api.skimage.io#skimage.io.MultiImage.filename |
__init__(filename, conserve_memory=True, dtype=None, **imread_kwargs) [source]
Load a multi-img. | skimage.api.skimage.io#skimage.io.MultiImage.__init__ |
skimage.io.plugin_info(plugin) [source]
Return plugin meta-data. Parameters
pluginstr
Name of plugin. Returns
mdict
Meta data as specified in plugin .ini. | skimage.api.skimage.io#skimage.io.plugin_info |
skimage.io.plugin_order() [source]
Return the currently preferred plugin order. Returns
pdict
Dictionary of preferred plugin order, with function name as key and plugins (in order of preference) as value. | skimage.api.skimage.io#skimage.io.plugin_order |
skimage.io.pop() [source]
Pop an image from the shared image stack. Returns
imgndarray
Image popped from the stack. | skimage.api.skimage.io#skimage.io.pop |
skimage.io.push(img) [source]
Push an image onto the shared image stack. Parameters
imgndarray
Image to push. | skimage.api.skimage.io#skimage.io.push |
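push and pop operate on a shared LIFO stack, so the most recently pushed image comes back first; a minimal sketch:

```python
import numpy as np
import skimage.io as io

a = np.zeros((2, 2))
b = np.ones((2, 2))
io.push(a)
io.push(b)

last = io.pop()   # b: last in, first out
first = io.pop()  # a
```

push only accepts ndarrays; other types raise ValueError.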
skimage.io.reset_plugins() [source] | skimage.api.skimage.io#skimage.io.reset_plugins |
skimage.io.show() [source]
Display pending images. Launch the event loop of the current gui plugin, and display all pending images, queued via imshow. This is required when using imshow from non-interactive scripts. A call to show will block execution of code until all windows have been closed. Examples >>> import skimage.io as io
>>> import numpy as np
>>> for i in range(4):
... ax_im = io.imshow(np.random.rand(50, 50))
>>> io.show() | skimage.api.skimage.io#skimage.io.show |
skimage.io.use_plugin(name, kind=None) [source]
Set the default plugin for a specified operation. The plugin will be loaded if it hasn’t been already. Parameters
namestr
Name of plugin.
kind{‘imsave’, ‘imread’, ‘imshow’, ‘imread_collection’, ‘imshow_collection’}, optional
Set the plugin for this function. By default, the plugin is set for all functions. See also
available_plugins
List of available plugins Examples To use Matplotlib as the default image reader, you would write: >>> from skimage import io
>>> io.use_plugin('matplotlib', 'imread')
To see a list of available plugins run io.available_plugins. Note that this lists plugins that are defined, but the full list may not be usable if your system does not have the required libraries installed. | skimage.api.skimage.io#skimage.io.use_plugin |
skimage.lookfor(what) [source]
Do a keyword search on scikit-image docstrings. Parameters
whatstr
Words to look for. Examples >>> import skimage
>>> skimage.lookfor('regular_grid')
Search results for 'regular_grid'
---------------------------------
skimage.lookfor
Do a keyword search on scikit-image docstrings.
skimage.util.regular_grid
Find `n_points` regularly spaced along `ar_shape`. | skimage.api.skimage#skimage.lookfor |
Module: measure
skimage.measure.approximate_polygon(coords, …) Approximate a polygonal chain with the specified tolerance.
skimage.measure.block_reduce(image, block_size) Downsample image by applying function func to local blocks.
skimage.measure.euler_number(image[, …]) Calculate the Euler characteristic in binary image.
skimage.measure.find_contours(image[, …]) Find iso-valued contours in a 2D array for a given level value.
skimage.measure.grid_points_in_poly(shape, verts) Test whether points on a specified grid are inside a polygon.
skimage.measure.inertia_tensor(image[, mu]) Compute the inertia tensor of the input image.
skimage.measure.inertia_tensor_eigvals(image) Compute the eigenvalues of the inertia tensor of the image.
skimage.measure.label(input[, background, …]) Label connected regions of an integer array.
skimage.measure.marching_cubes(volume[, …]) Marching cubes algorithm to find surfaces in 3d volumetric data.
skimage.measure.marching_cubes_classic(volume) Classic marching cubes algorithm to find surfaces in 3d volumetric data.
skimage.measure.marching_cubes_lewiner(volume) Lewiner marching cubes algorithm to find surfaces in 3d volumetric data.
skimage.measure.mesh_surface_area(verts, faces) Compute surface area, given vertices & triangular faces
skimage.measure.moments(image[, order]) Calculate all raw image moments up to a certain order.
skimage.measure.moments_central(image[, …]) Calculate all central image moments up to a certain order.
skimage.measure.moments_coords(coords[, order]) Calculate all raw image moments up to a certain order.
skimage.measure.moments_coords_central(coords) Calculate all central image moments up to a certain order.
skimage.measure.moments_hu(nu) Calculate Hu’s set of image moments (2D-only).
skimage.measure.moments_normalized(mu[, order]) Calculate all normalized central image moments up to a certain order.
skimage.measure.perimeter(image[, neighbourhood]) Calculate total perimeter of all objects in binary image.
skimage.measure.perimeter_crofton(image[, …]) Calculate total Crofton perimeter of all objects in binary image.
skimage.measure.points_in_poly(points, verts) Test whether points lie inside a polygon.
skimage.measure.profile_line(image, src, dst) Return the intensity profile of an image measured along a scan line.
skimage.measure.ransac(data, model_class, …) Fit a model to data with the RANSAC (random sample consensus) algorithm.
skimage.measure.regionprops(label_image[, …]) Measure properties of labeled image regions.
skimage.measure.regionprops_table(label_image) Compute image properties and return them as a pandas-compatible table.
skimage.measure.shannon_entropy(image[, base]) Calculate the Shannon entropy of an image.
skimage.measure.subdivide_polygon(coords[, …]) Subdivision of polygonal curves using B-Splines.
skimage.measure.CircleModel() Total least squares estimator for 2D circles.
skimage.measure.EllipseModel() Total least squares estimator for 2D ellipses.
skimage.measure.LineModelND() Total least squares estimator for N-dimensional lines. approximate_polygon
skimage.measure.approximate_polygon(coords, tolerance) [source]
Approximate a polygonal chain with the specified tolerance. It is based on the Douglas-Peucker algorithm. Note that the approximated polygon is always within the convex hull of the original polygon. Parameters
coords(N, 2) array
Coordinate array.
tolerancefloat
Maximum distance from original points of polygon to approximated polygonal chain. If tolerance is 0, the original coordinate array is returned. Returns
coords(M, 2) array
Approximated polygonal chain where M <= N. References
1
https://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
block_reduce
skimage.measure.block_reduce(image, block_size, func=<function sum>, cval=0, func_kwargs=None) [source]
Downsample image by applying function func to local blocks. This function is useful for max and mean pooling, for example. Parameters
imagendarray
N-dimensional input image.
block_sizearray_like
Array containing down-sampling integer factor along each axis.
funccallable
Function object which is used to calculate the return value for each local block. This function must implement an axis parameter. Primary functions are numpy.sum, numpy.min, numpy.max, numpy.mean and numpy.median. See also func_kwargs.
cvalfloat
Constant padding value if image is not perfectly divisible by the block size.
func_kwargsdict
Keyword arguments passed to func. Notably useful for passing dtype argument to np.mean. Takes dictionary of inputs, e.g.: func_kwargs={'dtype': np.float16}). Returns
imagendarray
Down-sampled image with same number of dimensions as input image. Examples >>> from skimage.measure import block_reduce
>>> import numpy as np
>>> image = np.arange(3*3*4).reshape(3, 3, 4)
>>> image
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]],
[[24, 25, 26, 27],
[28, 29, 30, 31],
[32, 33, 34, 35]]])
>>> block_reduce(image, block_size=(3, 3, 1), func=np.mean)
array([[[16., 17., 18., 19.]]])
>>> image_max1 = block_reduce(image, block_size=(1, 3, 4), func=np.max)
>>> image_max1
array([[[11]],
[[23]],
[[35]]])
>>> image_max2 = block_reduce(image, block_size=(3, 1, 4), func=np.max)
>>> image_max2
array([[[27],
[31],
[35]]])
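A further sketch of the func_kwargs parameter described above, forwarding a dtype to np.mean (values chosen for easy mental checking):

```python
import numpy as np
from skimage.measure import block_reduce

image = np.arange(16).reshape(4, 4)
# 2x2 mean pooling; func_kwargs passes dtype through to np.mean.
pooled = block_reduce(image, block_size=(2, 2), func=np.mean,
                      func_kwargs={'dtype': np.float16})
print(pooled)
# [[ 2.5  4.5]
#  [10.5 12.5]]
print(pooled.dtype)  # float16
```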
euler_number
skimage.measure.euler_number(image, connectivity=None) [source]
Calculate the Euler characteristic in binary image. For 2D objects, the Euler number is the number of objects minus the number of holes. For 3D objects, the Euler number is obtained as the number of objects plus the number of holes, minus the number of tunnels, or loops. Parameters
image: (N, M) ndarray or (N, M, D) ndarray.
2D or 3D images. If image is not binary, all values strictly greater than zero are considered as the object.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. 4 or 8 neighborhoods are defined for 2D images (connectivity 1 and 2, respectively). 6 or 26 neighborhoods are defined for 3D images, (connectivity 1 and 3, respectively). Connectivity 2 is not defined. Returns
euler_numberint
Euler characteristic of the set of all objects in the image. Notes The Euler characteristic is an integer number that describes the topology of the set of all objects in the input image. If object is 4-connected, then background is 8-connected, and conversely. The computation of the Euler characteristic is based on an integral geometry formula in discretized space. In practice, a neighbourhood configuration is constructed, and a LUT is applied for each configuration. The coefficients used are the ones of Ohser et al. It can be useful to compute the Euler characteristic for several connectivities. A large relative difference between results for different connectivities suggests that the image resolution (with respect to the size of objects and holes) is too low. References
1
S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838
2
Ohser J., Nagel W., Schladitz K. (2002) The Euler Number of Discretized Sets - On the Choice of Adjacency in Homogeneous Lattices. In: Mecke K., Stoyan D. (eds) Morphology of Condensed Matter. Lecture Notes in Physics, vol 600. Springer, Berlin, Heidelberg. Examples >>> import numpy as np
>>> SAMPLE = np.zeros((100,100,100));
>>> SAMPLE[40:60, 40:60, 40:60]=1
>>> euler_number(SAMPLE)
1...
>>> SAMPLE[45:55,45:55,45:55] = 0;
>>> euler_number(SAMPLE)
2...
>>> SAMPLE = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
... [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
... [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0],
... [0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1],
... [0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]])
>>> euler_number(SAMPLE)
0
>>> euler_number(SAMPLE, connectivity=1)
2
Examples using skimage.measure.euler_number
Euler number find_contours
skimage.measure.find_contours(image, level=None, fully_connected='low', positive_orientation='low', *, mask=None) [source]
Find iso-valued contours in a 2D array for a given level value. Uses the “marching squares” method to compute the iso-valued contours of the input 2D array for a particular level value. Array values are linearly interpolated to provide better precision for the output contours. Parameters
image2D ndarray of double
Input image in which to find contours.
levelfloat, optional
Value along which to find contours in the array. By default, the level is set to (max(image) + min(image)) / 2 Changed in version 0.18: This parameter is now optional.
fully_connectedstr, {‘low’, ‘high’}
Indicates whether array elements below the given level value are to be considered fully-connected (and hence elements above the value will only be face connected), or vice-versa. (See notes below for details.)
positive_orientationstr, {‘low’, ‘high’}
Indicates whether the output contours will produce positively-oriented polygons around islands of low- or high-valued elements. If ‘low’ then contours will wind counter- clockwise around elements below the iso-value. Alternately, this means that low-valued elements are always on the left of the contour. (See below for details.)
mask2D ndarray of bool, or None
A boolean mask, True where we want to draw contours. Note that NaN values are always excluded from the considered region (mask is set to False wherever array is NaN). Returns
contourslist of (n,2)-ndarrays
Each contour is an ndarray of shape (n, 2), consisting of n (row, column) coordinates along the contour. See also
skimage.measure.marching_cubes
Notes The marching squares algorithm is a special case of the marching cubes algorithm [1]. A simple explanation is available here: http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html There is a single ambiguous case in the marching squares algorithm: when a given 2 x 2-element square has two high-valued and two low-valued elements, each pair diagonally adjacent. (Where high- and low-valued is with respect to the contour value sought.) In this case, either the high-valued elements can be ‘connected together’ via a thin isthmus that separates the low-valued elements, or vice-versa. When elements are connected together across a diagonal, they are considered ‘fully connected’ (also known as ‘face+vertex-connected’ or ‘8-connected’). Only high-valued or low-valued elements can be fully-connected, the other set will be considered as ‘face-connected’ or ‘4-connected’. By default, low-valued elements are considered fully-connected; this can be altered with the ‘fully_connected’ parameter. Output contours are not guaranteed to be closed: contours which intersect the array edge or a masked-off region (either where mask is False or where array is NaN) will be left open. All other contours will be closed. (The closedness of a contour can be tested by checking whether the beginning point is the same as the end point.) Contours are oriented. By default, array values lower than the contour value are to the left of the contour and values greater than the contour value are to the right. This means that contours will wind counter-clockwise (i.e. in ‘positive orientation’) around islands of low-valued pixels. This behavior can be altered with the ‘positive_orientation’ parameter. The order of the contours in the output list is determined by the position of the smallest x,y (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon. 
Warning Array coordinates/values are assumed to refer to the center of the array element. Take a simple example input: [0, 1]. The interpolated position of 0.5 in this array is midway between the 0-element (at x=0) and the 1-element (at x=1), and thus would fall at x=0.5. This means that to find reasonable contours, it is best to find contours midway between the expected “light” and “dark” values. In particular, given a binarized array, do not choose to find contours at the low or high value of the array. This will often yield degenerate contours, especially around structures that are a single array element wide. Instead choose a middle value, as above. References
1
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422 Examples >>> a = np.zeros((3, 3))
>>> a[0, 0] = 1
>>> a
array([[1., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
>>> find_contours(a, 0.5)
[array([[0. , 0.5],
[0.5, 0. ]])]
Examples using skimage.measure.find_contours
Contour finding
Measure region properties grid_points_in_poly
skimage.measure.grid_points_in_poly(shape, verts) [source]
Test whether points on a specified grid are inside a polygon. For each (r, c) coordinate on a grid, i.e. (0, 0), (0, 1) etc., test whether that point lies inside a polygon. Parameters
shapetuple (M, N)
Shape of the grid.
verts(V, 2) array
Specify the V vertices of the polygon, sorted either clockwise or anti-clockwise. The first point may (but does not need to be) duplicated. Returns
mask(M, N) ndarray of bool
True where the grid falls inside the polygon. See also
points_in_poly
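A minimal sketch of grid_points_in_poly with a small right triangle (the vertex coordinates are illustrative):

```python
import numpy as np
from skimage.measure import grid_points_in_poly

# Right triangle with vertices (1, 1), (1, 6), (6, 1) on an 8x8 grid.
verts = np.array([[1, 1], [1, 6], [6, 1]])
mask = grid_points_in_poly((8, 8), verts)
print(mask.shape)         # (8, 8)
print(bool(mask[2, 2]))   # True: inside the triangle
print(bool(mask[7, 7]))   # False: outside
```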
inertia_tensor
skimage.measure.inertia_tensor(image, mu=None) [source]
Compute the inertia tensor of the input image. Parameters
imagearray
The input image.
muarray, optional
The pre-computed central moments of image. The inertia tensor computation requires the central moments of the image. If an application requires both the central moments and the inertia tensor (for example, skimage.measure.regionprops), then it is more efficient to pre-compute them and pass them to the inertia tensor call. Returns
Tarray, shape (image.ndim, image.ndim)
The inertia tensor of the input image. \(T_{i, j}\) contains the covariance of image intensity along axes \(i\) and \(j\). References
1
https://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_tensor
2
Bernd Jähne. Spatio-Temporal Image Processing: Theory and Scientific Applications. (Chapter 8: Tensor Methods) Springer, 1993.
inertia_tensor_eigvals
skimage.measure.inertia_tensor_eigvals(image, mu=None, T=None) [source]
Compute the eigenvalues of the inertia tensor of the image. The inertia tensor measures covariance of the image intensity along the image axes. (See inertia_tensor.) The relative magnitude of the eigenvalues of the tensor is thus a measure of the elongation of a (bright) object in the image. Parameters
imagearray
The input image.
muarray, optional
The pre-computed central moments of image.
Tarray, shape (image.ndim, image.ndim)
The pre-computed inertia tensor. If T is given, mu and image are ignored. Returns
eigvalslist of float, length image.ndim
The eigenvalues of the inertia tensor of image, in descending order. Notes Computing the eigenvalues requires the inertia tensor of the input image. This is much faster if the central moments (mu) are provided, or, alternatively, one can provide the inertia tensor (T) directly.
label
skimage.measure.label(input, background=None, return_num=False, connectivity=None) [source]
Label connected regions of an integer array. Two pixels are connected when they are neighbors and have the same value. In 2D, they can be neighbors either in a 1- or 2-connected sense. The value refers to the maximum number of orthogonal hops to consider a pixel/voxel a neighbor: 1-connectivity 2-connectivity diagonal connection close-up
[ ] [ ] [ ] [ ] [ ]
| \ | / | <- hop 2
[ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
| / | \ hop 1
[ ] [ ] [ ] [ ]
Parameters
inputndarray of dtype int
Image to label.
backgroundint, optional
Consider all pixels with this value as background pixels, and label them as 0. By default, 0-valued pixels are considered as background pixels.
return_numbool, optional
Whether to return the number of assigned labels.
connectivityint, optional
Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor. Accepted values are ranging from 1 to input.ndim. If None, a full connectivity of input.ndim is used. Returns
labelsndarray of dtype int
Labeled array, where all connected regions are assigned the same integer value.
numint, optional
Number of labels, which equals the maximum label index and is only returned if return_num is True. See also
regionprops
regionprops_table
References
1
Christophe Fiorio and Jens Gustedt, “Two linear time Union-Find strategies for image processing”, Theoretical Computer Science 154 (1996), pp. 165-181.
2
Kensheng Wu, Ekow Otoo and Arie Shoshani, “Optimizing connected component labeling algorithms”, Paper LBNL-56864, 2005, Lawrence Berkeley National Laboratory (University of California), http://repositories.cdlib.org/lbnl/LBNL-56864 Examples >>> import numpy as np
>>> x = np.eye(3).astype(int)
>>> print(x)
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, connectivity=1))
[[1 0 0]
[0 2 0]
[0 0 3]]
>>> print(label(x, connectivity=2))
[[1 0 0]
[0 1 0]
[0 0 1]]
>>> print(label(x, background=-1))
[[1 2 2]
[2 1 2]
[2 2 1]]
>>> x = np.array([[1, 0, 0],
... [1, 1, 5],
... [0, 0, 0]])
>>> print(label(x))
[[1 0 0]
[1 1 2]
[0 0 0]]
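A short sketch of the return_num option (the array is illustrative):

```python
import numpy as np
from skimage.measure import label

x = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 1]])
# With connectivity=1 the corner pixel is not diagonally joined
# to the L-shape, so two regions are found.
labels, num = label(x, return_num=True, connectivity=1)
print(num)           # 2
print(labels.max())  # 2
```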
Examples using skimage.measure.label
Measure region properties
Euler number
Segment human cells (in mitosis) marching_cubes
skimage.measure.marching_cubes(volume, level=None, *, spacing=(1.0, 1.0, 1.0), gradient_direction='descent', step_size=1, allow_degenerate=True, method='lewiner', mask=None) [source]
Marching cubes algorithm to find surfaces in 3d volumetric data. In contrast with the Lorensen et al. approach [2], the Lewiner et al. algorithm is faster, resolves ambiguities, and guarantees topologically correct results. Therefore, this algorithm is generally a better choice. Parameters
volume(M, N, P) array
Input data volume to find isosurfaces. Will internally be converted to float32 if necessary.
levelfloat, optional
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of vol is used.
spacinglength-3 tuple of floats, optional
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring, optional
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite, considering the left-hand rule. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object
step_sizeint, optional
Step size in voxels. Default 1. Larger steps yield faster but coarser results. The result will always be topologically correct though.
allow_degeneratebool, optional
Whether to allow degenerate (i.e. zero-area) triangles in the end-result. Default True. If False, degenerate triangles are removed, at the cost of making the algorithm slower. method: str, optional
One of ‘lewiner’, ‘lorensen’ or ‘_lorensen’. Specify which of the Lewiner et al. or Lorensen et al. methods will be used. The ‘_lorensen’ flag corresponds to an old implementation that will be deprecated in version 0.19.
mask(M, N, P) array, optional
Boolean array. The marching cubes algorithm will be computed only on True elements. This will save computational time when interfaces are located within a certain region of the volume (e.g. the top half of the cube), and also allows computing finite surfaces, i.e. open surfaces that do not end at the border of the cube. Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices.
normals(V, 3) array
The normal direction at each vertex, as calculated from the data.
values(V, ) array
Gives a measure for the maximum value of the data in the local region near each vertex. This can be used by visualization tools to apply a colormap to the mesh. See also
skimage.measure.mesh_surface_area
skimage.measure.find_contours
Notes The algorithm [1] is an improved version of Chernyaev’s Marching Cubes 33 algorithm. It is an efficient algorithm that relies on heavy use of lookup tables to handle the many different cases, keeping the algorithm relatively easy. This implementation is written in Cython, ported from Lewiner’s C++ implementation. To quantify the area of an isosurface generated by this algorithm, pass verts and faces to skimage.measure.mesh_surface_area. Regarding visualization of algorithm output, to contour a volume named myvolume about the level 0.0, using the mayavi package: >>> from mayavi import mlab
>>> verts, faces, _, _ = marching_cubes(myvolume, 0.0)
>>> mlab.triangular_mesh([vert[0] for vert in verts],
...                      [vert[1] for vert in verts],
...                      [vert[2] for vert in verts],
...                      faces)
>>> mlab.show()
Similarly using the visvis package: >>> import visvis as vv
>>> verts, faces, normals, values = marching_cubes(myvolume, 0.0)
>>> vv.mesh(np.fliplr(verts), faces, normals, values)
>>> vv.use().Run()
To reduce the number of triangles in the mesh for better performance, see this example using the mayavi package. References
1
Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares. Efficient implementation of Marching Cubes’ cases with topological guarantees. Journal of Graphics Tools 8(2) pp. 1-15 (december 2003). DOI:10.1080/10867651.2003.10487582
2
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422
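As a runnable counterpart to the visualization snippets above, the mesh can be extracted from a small synthetic level-set volume; this is a sketch, assuming skimage.draw.ellipsoid (which returns a level-set volume when levelset=True) is available:

```python
from skimage.draw import ellipsoid
from skimage.measure import marching_cubes, mesh_surface_area

# Level-set volume of an ellipsoid with semi-axes 6, 10, 16:
# negative inside the surface, positive outside, zero on it.
ellip = ellipsoid(6, 10, 16, levelset=True)

# Extract the zero-level isosurface as a triangular mesh.
verts, faces, normals, values = marching_cubes(ellip, level=0.0)

print(verts.shape, faces.shape)  # (V, 3) vertices, (F, 3) triangle indices
print(mesh_surface_area(verts, faces))
```

The verts/faces pair can be passed directly to skimage.measure.mesh_surface_area, as noted above.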
marching_cubes_classic
skimage.measure.marching_cubes_classic(volume, level=None, spacing=(1.0, 1.0, 1.0), gradient_direction='descent') [source]
Classic marching cubes algorithm to find surfaces in 3d volumetric data. Note that the marching_cubes() algorithm is recommended over this algorithm, because it’s faster and produces better results. Parameters
volume(M, N, P) array of doubles
Input data volume to find isosurfaces. Will be cast to np.float64.
levelfloat
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of volume is used.
spacinglength-3 tuple of floats
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P).
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices. See also
skimage.measure.marching_cubes
skimage.measure.mesh_surface_area
Notes The marching cubes algorithm is implemented as described in [1]. A simple explanation is available here: http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
There are several known ambiguous cases in the marching cubes algorithm. Using point labeling as in [1], Figure 4, as shown:
  v8 ------ v7
 / |       / |        y
/  |      /  |        ^  z
v4 ------ v3 |        | /
|  v5 ----|- v6       |/          (note: NOT right handed!)
| /       | /          ----> x
|/        |/
v1 ------ v2
Most notably, if v4, v8, v2, and v6 are all >= level (or any generalization of this case) two parallel planes are generated by this algorithm, separating v4 and v8 from v2 and v6. An equally valid interpretation would be a single connected thin surface enclosing all four points. This is the best known ambiguity, though there are others. This algorithm does not attempt to resolve such ambiguities; it is a naive implementation of marching cubes as in [1], but may be a good beginning for work with more recent techniques (Dual Marching Cubes, Extended Marching Cubes, Cubic Marching Squares, etc.). Because of interactions between neighboring cubes, the isosurface(s) generated by this algorithm are NOT guaranteed to be closed, particularly for complicated contours. Furthermore, this algorithm does not guarantee a single contour will be returned. Indeed, ALL isosurfaces which cross level will be found, regardless of connectivity. The output is a triangular mesh consisting of a set of unique vertices and connecting triangles. The order of these vertices and triangles in the output list is determined by the position of the smallest x,y,z (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon. The generated mesh guarantees coherent orientation as of version 0.12. To quantify the area of an isosurface generated by this algorithm, pass outputs directly into skimage.measure.mesh_surface_area. References
1(1,2,3)
Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170). DOI:10.1145/37401.37422
marching_cubes_lewiner
skimage.measure.marching_cubes_lewiner(volume, level=None, spacing=(1.0, 1.0, 1.0), gradient_direction='descent', step_size=1, allow_degenerate=True, use_classic=False, mask=None) [source]
Lewiner marching cubes algorithm to find surfaces in 3d volumetric data. In contrast to marching_cubes_classic(), this algorithm is faster, resolves ambiguities, and guarantees topologically correct results. Therefore, this algorithm is generally a better choice, unless there is a specific need for the classic algorithm. Parameters
volume(M, N, P) array
Input data volume to find isosurfaces. Will internally be converted to float32 if necessary.
levelfloat
Contour value to search for isosurfaces in volume. If not given or None, the average of the min and max of volume is used.
spacinglength-3 tuple of floats
Voxel spacing in spatial dimensions corresponding to numpy array indexing dimensions (M, N, P) as in volume.
gradient_directionstring
Controls if the mesh was generated from an isosurface with gradient descent toward objects of interest (the default), or the opposite, considering the left-hand rule. The two options are: * descent : Object was greater than exterior * ascent : Exterior was greater than object
step_sizeint
Step size in voxels. Default 1. Larger steps yield faster but coarser results. The result will always be topologically correct though.
allow_degeneratebool
Whether to allow degenerate (i.e. zero-area) triangles in the end-result. Default True. If False, degenerate triangles are removed, at the cost of making the algorithm slower.
use_classicbool
If given and True, the classic marching cubes by Lorensen (1987) is used. This option is included for reference purposes. Note that this algorithm has ambiguities and is not guaranteed to produce a topologically correct result. The results obtained with this option are not generally the same as those of the marching_cubes_classic() function.
mask(M, N, P) array
Boolean array. The marching cubes algorithm will be computed only on True elements. This will save computational time when interfaces are located within a certain region of the volume M, N, P (e.g. the top half of the cube) and also allows computing finite surfaces, i.e. open surfaces that do not end at the border of the cube. Returns
verts(V, 3) array
Spatial coordinates for V unique mesh vertices. Coordinate order matches input volume (M, N, P). If allow_degenerate is set to True, then the presence of degenerate triangles in the mesh can make this array have duplicate vertices.
faces(F, 3) array
Define triangular faces via referencing vertex indices from verts. This algorithm specifically outputs triangles, so each face has exactly three indices.
normals(V, 3) array
The normal direction at each vertex, as calculated from the data.
values(V, ) array
Gives a measure for the maximum value of the data in the local region near each vertex. This can be used by visualization tools to apply a colormap to the mesh. See also
skimage.measure.marching_cubes
skimage.measure.mesh_surface_area
Notes The algorithm [1] is an improved version of Chernyaev’s Marching Cubes 33 algorithm. It is an efficient algorithm that relies on heavy use of lookup tables to handle the many different cases, keeping the algorithm relatively easy. This implementation is written in Cython, ported from Lewiner’s C++ implementation. To quantify the area of an isosurface generated by this algorithm, pass verts and faces to skimage.measure.mesh_surface_area. Regarding visualization of algorithm output, to contour a volume named myvolume about the level 0.0, using the mayavi package: >>> from mayavi import mlab
>>> verts, faces, normals, values = marching_cubes_lewiner(myvolume, 0.0)
>>> mlab.triangular_mesh([vert[0] for vert in verts],
... [vert[1] for vert in verts],
... [vert[2] for vert in verts],
... faces)
>>> mlab.show()
Similarly using the visvis package: >>> import visvis as vv
>>> verts, faces, normals, values = marching_cubes_lewiner(myvolume, 0.0)
>>> vv.mesh(np.fliplr(verts), faces, normals, values)
>>> vv.use().Run()
References
1
Thomas Lewiner, Helio Lopes, Antonio Wilson Vieira and Geovan Tavares. Efficient implementation of Marching Cubes’ cases with topological guarantees. Journal of Graphics Tools 8(2) pp. 1-15 (december 2003). DOI:10.1080/10867651.2003.10487582
mesh_surface_area
skimage.measure.mesh_surface_area(verts, faces) [source]
Compute surface area, given vertices & triangular faces Parameters
verts(V, 3) array of floats
Array containing (x, y, z) coordinates for V unique mesh vertices.
faces(F, 3) array of ints
List of length-3 lists of integers, referencing vertex coordinates as provided in verts Returns
areafloat
Surface area of mesh. Units are [coordinate units] ** 2. See also
skimage.measure.marching_cubes
skimage.measure.marching_cubes_classic
Notes The arguments expected by this function are the first two outputs from skimage.measure.marching_cubes. For unit correct output, ensure correct spacing was passed to skimage.measure.marching_cubes. This algorithm works properly only if the faces provided are all triangles.
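A quick sanity check of the area computation against a known analytic value; this is a sketch, where the 25-voxel grid and the 5% tolerance are illustrative choices, not part of the API:

```python
import numpy as np
from skimage.measure import marching_cubes, mesh_surface_area

# Scalar field: Euclidean distance from the centre of a 25^3 grid.
r = 10.0
zz, yy, xx = np.mgrid[-12:13, -12:13, -12:13]
dist = np.sqrt(xx ** 2 + yy ** 2 + zz ** 2)

# The level-r isosurface of `dist` is a sphere of radius r voxels.
verts, faces, _, _ = marching_cubes(dist, level=r)
area = mesh_surface_area(verts, faces)

# Compare against the analytic sphere area 4*pi*r**2 (~1256.6).
print(area, 4 * np.pi * r ** 2)
```

Because the default spacing (1.0, 1.0, 1.0) was used here, the result is in squared voxel units; pass a physical spacing to marching_cubes for unit-correct areas.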
moments
skimage.measure.moments(image, order=3) [source]
Calculate all raw image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
imagenD double or uint8 array
Rasterized shape as image.
orderint, optional
Maximum order of moments. Default is 3. Returns
m(order + 1, order + 1) array
Raw image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 14.5)
moments_central
skimage.measure.moments_central(image, center=None, order=3, **kwargs) [source]
Calculate all central image moments up to a certain order. The center coordinates (cr, cc) can be calculated from the raw moments as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that central moments are translation invariant but not scale and rotation invariant. Parameters
imagenD double or uint8 array
Rasterized shape as image.
centertuple of float, optional
Coordinates of the image centroid. This will be computed if it is not provided.
orderint, optional
The maximum order of moments computed. Returns
mu(order + 1, order + 1) array
Central image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> M = moments(image)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> moments_central(image, centroid)
array([[16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[20., 0., 25., 0.],
[ 0., 0., 0., 0.]])
moments_coords
skimage.measure.moments_coords(coords, order=3) [source]
Calculate all raw image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
coords(N, D) double or uint8 array
Array of N points that describe an image of D dimensionality in Cartesian space.
orderint, optional
Maximum order of moments. Default is 3. Returns
M(order + 1, order + 1, …) array
Raw image moments. (D dimensions) References
1
Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001. Examples >>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)], dtype=np.double)
>>> M = moments_coords(coords)
>>> centroid = (M[1, 0] / M[0, 0], M[0, 1] / M[0, 0])
>>> centroid
(14.5, 15.5)
moments_coords_central
skimage.measure.moments_coords_central(coords, center=None, order=3) [source]
Calculate all central image moments up to a certain order. The following properties can be calculated from raw image moments:
Area as: M[0, 0]. Centroid as: {M[1, 0] / M[0, 0], M[0, 1] / M[0, 0]}. Note that raw moments are neither translation, scale nor rotation invariant. Parameters
coords(N, D) double or uint8 array
Array of N points that describe an image of D dimensionality in Cartesian space. A tuple of coordinates as returned by np.nonzero is also accepted as input.
centertuple of float, optional
Coordinates of the image centroid. This will be computed if it is not provided.
orderint, optional
Maximum order of moments. Default is 3. Returns
Mc(order + 1, order + 1, …) array
Central image moments. (D dimensions) References
1
Johannes Kilian. Simple Image Analysis By Moments. Durham University, version 0.2, Durham, 2001. Examples >>> coords = np.array([[row, col]
... for row in range(13, 17)
... for col in range(14, 18)])
>>> moments_coords_central(coords)
array([[16., 0., 20., 0.],
[ 0., 0., 0., 0.],
[20., 0., 25., 0.],
[ 0., 0., 0., 0.]])
As seen above, for symmetric objects, odd-order moments (columns 1 and 3, rows 1 and 3) are zero when centered on the centroid, or center of mass, of the object (the default). If we break the symmetry by adding a new point, this no longer holds: >>> coords2 = np.concatenate((coords, [[17, 17]]), axis=0)
>>> np.round(moments_coords_central(coords2),
... decimals=2)
array([[17. , 0. , 22.12, -2.49],
[ 0. , 3.53, 1.73, 7.4 ],
[25.88, 6.02, 36.63, 8.83],
[ 4.15, 19.17, 14.8 , 39.6 ]])
Image moments and central image moments are equivalent (by definition) when the center is (0, 0): >>> np.allclose(moments_coords(coords),
... moments_coords_central(coords, (0, 0)))
True
moments_hu
skimage.measure.moments_hu(nu) [source]
Calculate Hu’s set of image moments (2D-only). Note that this set of moments is proven to be translation, scale and rotation invariant. Parameters
nu(M, M) array
Normalized central image moments, where M must be >= 4. Returns
nu(7,) array
Hu’s set of image moments. References
1
M. K. Hu, “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Info. Theory, vol. IT-8, pp. 179-187, 1962
2
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
3
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
4
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
5
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 0.5
>>> image[10:12, 10:12] = 1
>>> mu = moments_central(image)
>>> nu = moments_normalized(mu)
>>> moments_hu(nu)
array([7.45370370e-01, 3.51165981e-01, 1.04049179e-01, 4.06442107e-02,
2.64312299e-03, 2.40854582e-02, 4.33680869e-19])
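The claimed rotation invariance can be checked directly: a 90-degree rotation via np.rot90 keeps the pixel grid exact, so Hu’s invariants should agree up to floating-point error. This is a sketch; the asymmetric shape below is an arbitrary illustrative example:

```python
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu

# An arbitrary asymmetric binary shape.
image = np.zeros((20, 20), dtype=np.double)
image[2:10, 3:7] = 1
image[12:15, 5:14] = 1

def hu(img):
    # Raw image -> central moments -> normalized moments -> Hu invariants.
    return moments_hu(moments_normalized(moments_central(img)))

h0 = hu(image)
h1 = hu(np.rot90(image))  # exact 90-degree rotation on the pixel grid

# The two sets of invariants agree up to numerical error.
print(np.abs(h0 - h1).max())
```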
moments_normalized
skimage.measure.moments_normalized(mu, order=3) [source]
Calculate all normalized central image moments up to a certain order. Note that normalized central moments are translation and scale invariant but not rotation invariant. Parameters
mu(M,[ …,] M) array
Central image moments, where M must be greater than or equal to order.
orderint, optional
Maximum order of moments. Default is 3. Returns
nu(order + 1,[ …,] order + 1) array
Normalized central image moments. References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment Examples >>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> m = moments(image)
>>> centroid = (m[0, 1] / m[0, 0], m[1, 0] / m[0, 0])
>>> mu = moments_central(image, centroid)
>>> moments_normalized(mu)
array([[ nan, nan, 0.078125 , 0. ],
[ nan, 0. , 0. , 0. ],
[0.078125 , 0. , 0.00610352, 0. ],
[0. , 0. , 0. , 0. ]])
perimeter
skimage.measure.perimeter(image, neighbourhood=4) [source]
Calculate total perimeter of all objects in binary image. Parameters
image(N, M) ndarray
2D binary image.
neighbourhood4 or 8, optional
Neighborhood connectivity for border pixel determination. It is used to compute the contour. A higher neighbourhood widens the border on which the perimeter is computed. Returns
perimeterfloat
Total perimeter of all objects in binary image. References
1
K. Benkrid, D. Crookes. Design and FPGA Implementation of a Perimeter Estimator. The Queen’s University of Belfast. http://www.cs.qub.ac.uk/~d.crookes/webpubs/papers/perimeter.doc Examples >>> from skimage import data, util
>>> from skimage.measure import label
>>> # coins image (binary)
>>> img_coins = data.coins() > 110
>>> # total perimeter of all objects in the image
>>> perimeter(img_coins, neighbourhood=4)
7796.867...
>>> perimeter(img_coins, neighbourhood=8)
8806.268...
Examples using skimage.measure.perimeter
Different perimeters perimeter_crofton
skimage.measure.perimeter_crofton(image, directions=4) [source]
Calculate total Crofton perimeter of all objects in binary image. Parameters
image(N, M) ndarray
2D image. If image is not binary, all values strictly greater than zero are considered as the object.
directions2 or 4, optional
Number of directions used to approximate the Crofton perimeter. By default, 4 is used: it should be more accurate than 2. Computation time is the same in both cases. Returns
perimeterfloat
Total perimeter of all objects in binary image. Notes This measure is based on Crofton formula [1], which is a measure from integral geometry. It is defined for general curve length evaluation via a double integral along all directions. In a discrete space, 2 or 4 directions give a quite good approximation, 4 being more accurate than 2 for more complex shapes. Similar to perimeter(), this function returns an approximation of the perimeter in continuous space. References
1
https://en.wikipedia.org/wiki/Crofton_formula
2
S. Rivollier. Analyse d’image geometrique et morphometrique par diagrammes de forme et voisinages adaptatifs generaux. PhD thesis, 2010. Ecole Nationale Superieure des Mines de Saint-Etienne. https://tel.archives-ouvertes.fr/tel-00560838 Examples >>> from skimage import data, util
>>> from skimage.measure import label
>>> # coins image (binary)
>>> img_coins = data.coins() > 110
>>> # total perimeter of all objects in the image
>>> perimeter_crofton(img_coins, directions=2)
8144.578...
>>> perimeter_crofton(img_coins, directions=4)
7837.077...
Examples using skimage.measure.perimeter_crofton
Different perimeters points_in_poly
skimage.measure.points_in_poly(points, verts) [source]
Test whether points lie inside a polygon. Parameters
points(N, 2) array
Input points, (x, y).
verts(M, 2) array
Vertices of the polygon, sorted either clockwise or anti-clockwise. The first point may (but does not need to be) duplicated. Returns
mask(N,) array of bool
True if corresponding point is inside the polygon. See also
grid_points_in_poly
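A minimal usage sketch for points_in_poly; the square polygon and the test points are arbitrary illustrative values:

```python
import numpy as np
from skimage.measure import points_in_poly

# Unit-square polygon; vertices given in order, closing point optional.
square = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])

points = np.array([[0.5, 0.5],   # inside the square
                   [2.0, 2.0]])  # outside the square

print(points_in_poly(points, square))  # -> [ True False]
```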
profile_line
skimage.measure.profile_line(image, src, dst, linewidth=1, order=None, mode=None, cval=0.0, *, reduce_func=<function mean>) [source]
Return the intensity profile of an image measured along a scan line. Parameters
imagendarray, shape (M, N[, C])
The image, either grayscale (2D array) or multichannel (3D array, where the final axis contains the channel information).
srcarray_like, shape (2, )
The coordinates of the start point of the scan line.
dstarray_like, shape (2, )
The coordinates of the end point of the scan line. The destination point is included in the profile, in contrast to standard numpy indexing.
linewidthint, optional
Width of the scan, perpendicular to the line
orderint in {0, 1, 2, 3, 4, 5}, optional
The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail.
mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional
How to compute any values falling outside of the image.
cvalfloat, optional
If mode is ‘constant’, what constant value to use outside the image.
reduce_funccallable, optional
Function used to calculate the aggregation of pixel values perpendicular to the profile_line direction when linewidth > 1. If set to None the unreduced array will be returned. Returns
return_valuearray
The intensity profile along the scan line. The length of the profile is the ceil of the computed length of the scan line. Examples >>> x = np.array([[1, 1, 1, 2, 2, 2]])
>>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
>>> img
array([[0, 0, 0, 0, 0, 0],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[1, 1, 1, 2, 2, 2],
[0, 0, 0, 0, 0, 0]])
>>> profile_line(img, (2, 1), (2, 4))
array([1., 1., 2., 2.])
>>> profile_line(img, (1, 0), (1, 6), cval=4)
array([1., 1., 1., 2., 2., 2., 4.])
The destination point is included in the profile, in contrast to standard numpy indexing. For example: >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds
array([1., 1., 1., 2., 2., 2., 0.])
>>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row
array([1., 1., 1., 2., 2., 2.])
For different reduce_func inputs: >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)
array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)
array([1, 1, 1, 2])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)
array([2, 2, 2, 4])
The unreduced array will be returned when reduce_func is None or when reduce_func acts on each pixel value individually. >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
... reduce_func=None)
array([[1, 1, 2],
[1, 1, 2],
[1, 1, 2],
[0, 0, 0]])
>>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)
array([[1. , 1. , 0. ],
[1. , 1. , 0. ],
[1. , 1. , 0. ],
[1.41421356, 1.41421356, 0. ]])
ransac
skimage.measure.ransac(data, model_class, min_samples, residual_threshold, is_data_valid=None, is_model_valid=None, max_trials=100, stop_sample_num=inf, stop_residuals_sum=0, stop_probability=1, random_state=None, initial_inliers=None) [source]
Fit a model to data with the RANSAC (random sample consensus) algorithm. RANSAC is an iterative algorithm for the robust estimation of parameters from a subset of inliers from the complete data set. Each iteration performs the following tasks: Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid). Estimate a model to the random subset (model_cls.estimate(*data[random_subset]) and check whether the estimated model is valid (see is_model_valid). Classify all data as inliers or outliers by calculating the residuals to the estimated model (model_cls.residuals(*data)) - all data samples with residuals smaller than the residual_threshold are considered as inliers. Save estimated model as best model if number of inlier samples is maximal. In case the current estimated model has the same number of inliers, it is only considered as the best model if it has less sum of residuals. These steps are performed either a maximum number of times or until one of the special stop criteria are met. The final model is estimated using all inlier samples of the previously determined best model. Parameters
data[list, tuple of] (N, …) array
Data set to which the model is fitted, where N is the number of data points and the remaining dimensions depend on the model requirements. If the model class requires multiple input data arrays (e.g. source and destination coordinates of skimage.transform.AffineTransform), they can be optionally passed as tuple or list. Note that in this case the functions estimate(*data), residuals(*data), is_model_valid(model, *random_data) and is_data_valid(*random_data) must all take each data array as separate arguments.
model_classobject
Object with the following object methods: success = estimate(*data) residuals(*data) where success indicates whether the model estimation succeeded (True or None for success, False for failure).
min_samplesint in range (0, N)
The minimum number of data points to fit a model to.
residual_thresholdfloat larger than 0
Maximum distance for a data point to be classified as an inlier.
is_data_validfunction, optional
This function is called with the randomly selected data before the model is fitted to it: is_data_valid(*random_data).
is_model_validfunction, optional
This function is called with the estimated model and the randomly selected data: is_model_valid(model, *random_data).
max_trialsint, optional
Maximum number of iterations for random sample selection.
stop_sample_numint, optional
Stop iteration if at least this number of inliers are found.
stop_residuals_sumfloat, optional
Stop iteration if sum of residuals is less than or equal to this threshold.
stop_probabilityfloat in range [0, 1], optional
RANSAC iteration stops if at least one outlier-free set of the training data is sampled with probability >= stop_probability, depending on the current best model’s inlier ratio and the number of trials. This requires generating at least N samples (trials): N >= log(1 - probability) / log(1 - e**m) where the probability (confidence) is typically set to a high value such as 0.99, e is the current fraction of inliers w.r.t. the total number of samples, and m is the min_samples value.
random_stateint, RandomState instance or None, optional
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
initial_inliersarray-like of bool, shape (N,), optional
Initial samples selection for model estimation Returns
modelobject
Best model with largest consensus set.
inliers(N, ) array
Boolean mask of inliers classified as True. References
1
“RANSAC”, Wikipedia, https://en.wikipedia.org/wiki/RANSAC Examples Generate ellipse data without tilt and add noise: >>> t = np.linspace(0, 2 * np.pi, 50)
>>> xc, yc = 20, 30
>>> a, b = 5, 10
>>> x = xc + a * np.cos(t)
>>> y = yc + b * np.sin(t)
>>> data = np.column_stack([x, y])
>>> np.random.seed(seed=1234)
>>> data += np.random.normal(size=data.shape)
Add some faulty data: >>> data[0] = (100, 100)
>>> data[1] = (110, 120)
>>> data[2] = (120, 130)
>>> data[3] = (140, 130)
Estimate ellipse model using all available data: >>> model = EllipseModel()
>>> model.estimate(data)
True
>>> np.round(model.params)
array([ 72., 75., 77., 14., 1.])
Estimate ellipse model using RANSAC: >>> ransac_model, inliers = ransac(data, EllipseModel, 20, 3, max_trials=50)
>>> abs(np.round(ransac_model.params))
array([20., 30., 5., 10., 0.])
>>> inliers
array([False, False, False, False, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True], dtype=bool)
>>> sum(inliers) > 40
True
RANSAC can be used to robustly estimate a geometric transformation. In this section, we also show how to use a proportion of the total samples, rather than an absolute number. >>> from skimage.transform import SimilarityTransform
>>> np.random.seed(0)
>>> src = 100 * np.random.rand(50, 2)
>>> model0 = SimilarityTransform(scale=0.5, rotation=1, translation=(10, 20))
>>> dst = model0(src)
>>> dst[0] = (10000, 10000)
>>> dst[1] = (-100, 100)
>>> dst[2] = (50, 50)
>>> ratio = 0.5 # use half of the samples
>>> min_samples = int(ratio * len(src))
>>> model, inliers = ransac((src, dst), SimilarityTransform, min_samples, 10,
... initial_inliers=np.ones(len(src), dtype=bool))
>>> inliers
array([False, False, False, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True])
regionprops
skimage.measure.regionprops(label_image, intensity_image=None, cache=True, coordinates=None, *, extra_properties=None) [source]
Measure properties of labeled image regions. Parameters
label_image(M, N[, P]) ndarray
Labeled input image. Labels with value 0 are ignored. Changed in version 0.14.1: Previously, label_image was processed by numpy.squeeze and so any number of singleton dimensions was allowed. This resulted in inconsistent handling of images with singleton dimensions. To recover the old behaviour, use regionprops(np.squeeze(label_image), ...).
intensity_image(M, N[, P][, C]) ndarray, optional
Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Default is None. Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.
cachebool, optional
Determine whether to cache calculated properties. The computation is much faster for cached properties, whereas the memory consumption increases.
coordinatesDEPRECATED
This argument is deprecated and will be removed in a future version of scikit-image. See Coordinate conventions for more details. Deprecated since version 0.16.0: Use “rc” coordinates everywhere. It may be sufficient to call numpy.transpose on your label image to get the same values as 0.15 and earlier. However, for some properties, the transformation will be less trivial. For example, the new orientation is \(\frac{\pi}{2}\) plus the old orientation.
extra_propertiesIterable of callables
Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument. Returns
propertieslist of RegionProperties
Each item describes one labeled region, and can be accessed using the attributes listed below. See also
label
Notes The following properties can be accessed as attributes or keys:
areaint
Number of pixels of the region.
bboxtuple
Bounding box (min_row, min_col, max_row, max_col). Pixels belonging to the bounding box are in the half-open interval [min_row; max_row) and [min_col; max_col).
bbox_areaint
Number of pixels of bounding box.
centroidarray
Centroid coordinate tuple (row, col).
convex_areaint
Number of pixels of convex hull image, which is the smallest convex polygon that encloses the region.
convex_image(H, J) ndarray
Binary convex hull image which has the same size as bounding box.
coords(N, 2) ndarray
Coordinate list (row, col) of the region.
eccentricityfloat
Eccentricity of the ellipse that has the same second-moments as the region. The eccentricity is the ratio of the focal distance (distance between focal points) over the major axis length. The value is in the interval [0, 1). When it is 0, the ellipse becomes a circle.
equivalent_diameterfloat
The diameter of a circle with the same area as the region.
euler_numberint
Euler characteristic of the set of non-zero pixels. Computed as the number of connected components minus the number of holes (with input.ndim connectivity). In 3D: the number of connected components, plus the number of holes, minus the number of tunnels.
extentfloat
Ratio of pixels in the region to pixels in the total bounding box. Computed as area / (rows * cols)
feret_diameter_maxfloat
Maximum Feret’s diameter computed as the longest distance between points around a region’s convex hull contour as determined by find_contours. [5]
filled_areaint
Number of pixels of the region with all the holes filled in. Describes the area of the filled_image.
filled_image(H, J) ndarray
Binary region image with filled holes which has the same size as bounding box.
image(H, J) ndarray
Sliced binary region image which has the same size as bounding box.
inertia_tensorndarray
Inertia tensor of the region for the rotation around its mass.
inertia_tensor_eigvalstuple
The eigenvalues of the inertia tensor in decreasing order.
intensity_imagendarray
Image inside region bounding box.
labelint
The label in the labeled input image.
local_centroidarray
Centroid coordinate tuple (row, col), relative to region bounding box.
major_axis_lengthfloat
The length of the major axis of the ellipse that has the same normalized second central moments as the region.
max_intensityfloat
Value with the greatest intensity in the region.
mean_intensityfloat
Mean intensity value in the region.
min_intensityfloat
Value with the least intensity in the region.
minor_axis_lengthfloat
The length of the minor axis of the ellipse that has the same normalized second central moments as the region.
moments(3, 3) ndarray
Spatial moments up to 3rd order: m_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
moments_central(3, 3) ndarray
Central moments (translation invariant) up to 3rd order: mu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s centroid.
moments_hutuple
Hu moments (translation, scale and rotation invariant).
moments_normalized(3, 3) ndarray
Normalized moments (translation and scale invariant) up to 3rd order: nu_ij = mu_ij / m_00^[(i+j)/2 + 1]
where m_00 is the zeroth spatial moment.
orientationfloat
Angle between the 0th axis (rows) and the major axis of the ellipse that has the same second moments as the region, ranging from -pi/2 to pi/2 counter-clockwise.
perimeterfloat
Perimeter of the object, approximating the contour as a line through the centers of border pixels using 4-connectivity.
perimeter_croftonfloat
Perimeter of object approximated by the Crofton formula in 4 directions.
slicetuple of slices
A slice to extract the object from the source image.
solidityfloat
Ratio of pixels in the region to pixels of the convex hull image.
weighted_centroidarray
Centroid coordinate tuple (row, col) weighted with intensity image.
weighted_local_centroidarray
Centroid coordinate tuple (row, col), relative to region bounding box, weighted with intensity image.
weighted_moments(3, 3) ndarray
Spatial moments of intensity image up to 3rd order: wm_ij = sum{ array(row, col) * row^i * col^j }
where the sum is over the row, col coordinates of the region.
weighted_moments_central(3, 3) ndarray
Central moments (translation invariant) of intensity image up to 3rd order: wmu_ij = sum{ array(row, col) * (row - row_c)^i * (col - col_c)^j }
where the sum is over the row, col coordinates of the region, and row_c and col_c are the coordinates of the region’s weighted centroid.
weighted_moments_hutuple
Hu moments (translation, scale and rotation invariant) of intensity image.
weighted_moments_normalized(3, 3) ndarray
Normalized moments (translation and scale invariant) of intensity image up to 3rd order: wnu_ij = wmu_ij / wm_00^[(i+j)/2 + 1]
where wm_00 is the zeroth spatial moment (intensity-weighted area). Each region also supports iteration, so that you can do: for prop in region:
print(prop, region[prop])
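The moment definitions above can be checked directly with NumPy. Below is a minimal sketch (the tiny `region` array and the variable names are illustrative, not part of the skimage API) that computes the raw spatial moments of a small binary region and recovers the centroid as (m_10/m_00, m_01/m_00):

```python
import numpy as np

# Verify m_ij = sum{ array(row, col) * row^i * col^j } over region pixels.
region = np.array([[1, 1],
                   [1, 0]], dtype=float)
rows, cols = np.nonzero(region)
m = {(i, j): np.sum(region[rows, cols] * rows**i * cols**j)
     for i in range(2) for j in range(2)}
# Centroid in (row, col) order, matching the "centroid" property above.
centroid = (m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)])
print(centroid)
```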
References
1
Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
2
B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
3
T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
4
https://en.wikipedia.org/wiki/Image_moment
5
W. Pabst, E. Gregorová. Characterization of particles and particle systems, pp. 27-28. ICT Prague, 2007. https://old.vscht.cz/sil/keramika/Characterization_of_particles/CPPS%20_English%20version_.pdf Examples >>> from skimage import data, util
>>> from skimage.measure import label, regionprops
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img, connectivity=img.ndim)
>>> props = regionprops(label_img)
>>> # centroid of first labeled object
>>> props[0].centroid
(22.72987986048314, 81.91228523446583)
>>> # centroid of first labeled object
>>> props[0]['centroid']
(22.72987986048314, 81.91228523446583)
Add custom measurements by passing functions as extra_properties >>> from skimage import data, util
>>> from skimage.measure import label, regionprops
>>> import numpy as np
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img, connectivity=img.ndim)
>>> def pixelcount(regionmask):
... return np.sum(regionmask)
>>> props = regionprops(label_img, extra_properties=(pixelcount,))
>>> props[0].pixelcount
7741
>>> props[1]['pixelcount']
42
Examples using skimage.measure.regionprops
Measure region properties regionprops_table
skimage.measure.regionprops_table(label_image, intensity_image=None, properties=('label', 'bbox'), *, cache=True, separator='-', extra_properties=None) [source]
Compute image properties and return them as a pandas-compatible table. The table is a dictionary mapping column names to value arrays. See Notes section below for details. New in version 0.16. Parameters
label_image(N, M[, P]) ndarray
Labeled input image. Labels with value 0 are ignored.
intensity_image(M, N[, P][, C]) ndarray, optional
Intensity (i.e., input) image with same size as labeled image, plus optionally an extra dimension for multichannel data. Default is None. Changed in version 0.18.0: The ability to provide an extra dimension for channels was added.
propertiestuple or list of str, optional
Properties that will be included in the resulting dictionary. For a list of available properties, please see regionprops(). Users should remember to add “label” to keep track of region identities.
cachebool, optional
Determine whether to cache calculated properties. Cached properties are much faster to access repeatedly, at the cost of increased memory consumption.
separatorstr, optional
For non-scalar properties not listed in OBJECT_COLUMNS, each element will appear in its own column, with the index of that element separated from the property name by this separator. For example, the inertia tensor of a 2D region will appear in four columns: inertia_tensor-0-0, inertia_tensor-0-1, inertia_tensor-1-0, and inertia_tensor-1-1 (where the separator is -). Object columns are those that cannot be split in this way because the number of columns would change depending on the object. For example, image and coords.
extra_propertiesIterable of callables
Add extra property computation functions that are not included with skimage. The name of the property is derived from the function name, and the dtype is inferred by calling the function on a small sample. If the name of an extra property clashes with the name of an existing property, the extra property will not be visible and a UserWarning is issued. A property computation function must take a region mask as its first argument. If the property requires an intensity image, it must accept the intensity image as the second argument. Returns
out_dictdict
Dictionary mapping property names to an array of values of that property, one value per region. This dictionary can be used as input to pandas DataFrame to map property names to columns in the frame and regions to rows. If the image has no regions, the arrays will have length 0, but the correct type. Notes Each column contains either a scalar property, an object property, or an element in a multidimensional array. Properties with scalar values for each region, such as “eccentricity”, will appear as a float or int array with that property name as key. Multidimensional properties of fixed size for a given image dimension, such as “centroid” (every centroid will have three elements in a 3D image, no matter the region size), will be split into that many columns, with the name {property_name}{separator}{element_num} (for 1D properties), {property_name}{separator}{elem_num0}{separator}{elem_num1} (for 2D properties), and so on. For multidimensional properties that don’t have a fixed size, such as “image” (the image of a region varies in size depending on the region size), an object array will be used, with the corresponding property name as the key. Examples >>> from skimage import data, util, measure
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, image,
... properties=['label', 'inertia_tensor',
... 'inertia_tensor_eigvals'])
>>> props
{'label': array([ 1, 2, ...]), ...
'inertia_tensor-0-0': array([ 4.012...e+03, 8.51..., ...]), ...
...,
'inertia_tensor_eigvals-1': array([ 2.67...e+02, 2.83..., ...])}
The resulting dictionary can be directly passed to pandas, if installed, to obtain a clean DataFrame: >>> import pandas as pd
>>> data = pd.DataFrame(props)
>>> data.head()
label inertia_tensor-0-0 ... inertia_tensor_eigvals-1
0 1 4012.909888 ... 267.065503
1 2 8.514739 ... 2.834806
2 3 0.666667 ... 0.000000
3 4 0.000000 ... 0.000000
4 5 0.222222 ... 0.111111
[5 rows x 7 columns] If we want to measure a feature that does not come as a built-in property, we can define custom functions and pass them as extra_properties. For example, we can create a custom function that measures the intensity quartiles in a region: >>> from skimage import data, util, measure
>>> import numpy as np
>>> def quartiles(regionmask, intensity):
... return np.percentile(intensity[regionmask], q=(25, 50, 75))
>>>
>>> image = data.coins()
>>> label_image = measure.label(image > 110, connectivity=image.ndim)
>>> props = measure.regionprops_table(label_image, intensity_image=image,
... properties=('label',),
... extra_properties=(quartiles,))
>>> import pandas as pd
>>> pd.DataFrame(props).head()
label quartiles-0 quartiles-1 quartiles-2
0 1 117.00 123.0 130.0
1 2 111.25 112.0 114.0
2 3 111.00 111.0 111.0
3 4 111.00 111.5 112.5
4 5 112.50 113.0 114.0
Examples using skimage.measure.regionprops_table
Measure region properties shannon_entropy
skimage.measure.shannon_entropy(image, base=2) [source]
Calculate the Shannon entropy of an image. The Shannon entropy is defined as S = -sum(pk * log(pk)), where pk are frequency/probability of pixels of value k. Parameters
image(N, M) ndarray
Grayscale input image.
basefloat, optional
The logarithmic base to use. Returns
entropyfloat
Notes The returned value is measured in bits or shannon (Sh) for base=2, natural unit (nat) for base=np.e and hartley (Hart) for base=10. References
1
https://en.wikipedia.org/wiki/Entropy_(information_theory)
2
https://en.wiktionary.org/wiki/Shannon_entropy Examples >>> from skimage import data
>>> from skimage.measure import shannon_entropy
>>> shannon_entropy(data.camera())
7.231695011055706
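The definition S = -sum(pk * log(pk)) is easy to verify by hand. A NumPy-only sketch (the toy image is illustrative): for an image with three equally frequent values, the base-2 entropy is log2(3):

```python
import numpy as np

# pk is the frequency/probability of each pixel value k.
image = np.array([[0, 0, 1],
                  [1, 2, 2]])
_, counts = np.unique(image, return_counts=True)
pk = counts / counts.sum()
manual = -np.sum(pk * np.log2(pk))
print(manual)  # log2(3) for three equally likely values
```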
subdivide_polygon
skimage.measure.subdivide_polygon(coords, degree=2, preserve_ends=False) [source]
Subdivision of polygonal curves using B-Splines. Note that the resulting curve is always within the convex hull of the original polygon. Circular polygons stay closed after subdivision. Parameters
coords(N, 2) array
Coordinate array.
degree{1, 2, 3, 4, 5, 6, 7}, optional
Degree of B-Spline. Default is 2.
preserve_endsbool, optional
Preserve first and last coordinate of non-circular polygon. Default is False. Returns
coords(M, 2) array
Subdivided coordinate array. References
1
http://mrl.nyu.edu/publications/subdiv-course2000/coursenotes00.pdf
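A short usage sketch, assuming float coordinates (the square below is illustrative). As stated above, the subdivided curve always stays within the convex hull of the input polygon:

```python
import numpy as np
from skimage.measure import subdivide_polygon

square = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], dtype=float)
# degree=2 B-spline subdivision rounds the corners; preserve_ends keeps
# the first and last vertex of this non-circular polygon fixed.
smoothed = subdivide_polygon(square, degree=2, preserve_ends=True)
print(smoothed.shape)
```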
CircleModel
class skimage.measure.CircleModel [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D circles. The functional model of the circle is: r**2 = (x - xc)**2 + (y - yc)**2
This estimator minimizes the squared distances from all points to the circle: min{ sum((r - sqrt((x_i - xc)**2 + (y_i - yc)**2))**2) }
A minimum number of 3 points is required to solve for the parameters. Examples >>> t = np.linspace(0, 2 * np.pi, 25)
>>> xy = CircleModel().predict_xy(t, params=(2, 3, 4))
>>> model = CircleModel()
>>> model.estimate(xy)
True
>>> tuple(np.round(model.params, 5))
(2.0, 3.0, 4.0)
>>> res = model.residuals(xy)
>>> np.abs(np.round(res, 9))
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Circle model parameters in the following order xc, yc, r.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate circle model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds.
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in radians, measured counter-clockwise from the positive x-axis toward the positive y-axis in a right-handed system.
params(3, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the circle is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point.
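In practice these model classes are often combined with skimage.measure.ransac for robust fitting in the presence of noise and outliers. A hedged sketch on synthetic data (the noise level and threshold values are illustrative):

```python
import numpy as np
from skimage.measure import CircleModel, ransac

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 100)
# Points on a circle with center (5, -2) and radius 3, plus mild noise.
data = np.column_stack([5 + 3 * np.cos(t), -2 + 3 * np.sin(t)])
data += rng.normal(scale=0.05, size=data.shape)

# min_samples=3 matches the minimum number of points CircleModel requires.
model, inliers = ransac(data, CircleModel, min_samples=3,
                        residual_threshold=0.2)
print(np.round(model.params, 1))  # approximately (xc, yc, r) = (5, -2, 3)
```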
EllipseModel
class skimage.measure.EllipseModel [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for 2D ellipses. The functional model of the ellipse is: xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)
yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)
d = sqrt((x - xt)**2 + (y - yt)**2)
where (xt, yt) is the closest point on the ellipse to (x, y). Thus d is the shortest distance from the point to the ellipse. The estimator is based on a least squares minimization. The optimal solution is computed directly, no iterations are required. This leads to a simple, stable and robust fitting method. The params attribute contains the parameters in the following order: xc, yc, a, b, theta
Examples >>> xy = EllipseModel().predict_xy(np.linspace(0, 2 * np.pi, 25),
... params=(10, 15, 4, 8, np.deg2rad(30)))
>>> ellipse = EllipseModel()
>>> ellipse.estimate(xy)
True
>>> np.round(ellipse.params, 2)
array([10. , 15. , 4. , 8. , 0.52])
>>> np.round(abs(ellipse.residuals(xy)), 5)
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
Attributes
paramstuple
Ellipse model parameters in the following order xc, yc, a, b, theta.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate ellipse model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. References
1
Halir, R.; Flusser, J. “Numerically stable direct least squares fitting of ellipses”. In Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization. WSCG (Vol. 98, pp. 125-132).
predict_xy(t, params=None) [source]
Predict x- and y-coordinates using the estimated model. Parameters
tarray
Angles in radians, measured counter-clockwise from the positive x-axis toward the positive y-axis in a right-handed system.
params(5, ) array, optional
Optional custom parameter set. Returns
xy(…, 2) array
Predicted x- and y-coordinates.
residuals(data) [source]
Determine residuals of data to model. For each point the shortest distance to the ellipse is returned. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
residuals(N, ) array
Residual for each data point.
LineModelND
class skimage.measure.LineModelND [source]
Bases: skimage.measure.fit.BaseModel Total least squares estimator for N-dimensional lines. In contrast to ordinary least squares line estimation, this estimator minimizes the orthogonal distances of points to the estimated line. Lines are defined by a point (origin) and a unit vector (direction) according to the following vector equation: X = origin + lambda * direction
Examples >>> x = np.linspace(1, 2, 25)
>>> y = 1.5 * x + 3
>>> lm = LineModelND()
>>> lm.estimate(np.stack([x, y], axis=-1))
True
>>> tuple(np.round(lm.params, 5))
(array([1.5 , 5.25]), array([0.5547 , 0.83205]))
>>> res = lm.residuals(np.stack([x, y], axis=-1))
>>> np.abs(np.round(res, 9))
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0.])
>>> np.round(lm.predict_y(x[:5]), 3)
array([4.5 , 4.562, 4.625, 4.688, 4.75 ])
>>> np.round(lm.predict_x(y[:5]), 3)
array([1. , 1.042, 1.083, 1.125, 1.167])
Attributes
paramstuple
Line model parameters in the following order origin, direction.
__init__() [source]
Initialize self. See help(type(self)) for accurate signature.
estimate(data) [source]
Estimate line model from data. This minimizes the sum of shortest (orthogonal) distances from the given data points to the estimated line. Parameters
data(N, dim) array
N points in a space of dimensionality dim >= 2. Returns
successbool
True, if model estimation succeeds.
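The 2D example above generalizes directly: LineModelND accepts points of any dimensionality dim >= 2. A brief sketch with an exact 3D line (the points and names are illustrative):

```python
import numpy as np
from skimage.measure import LineModelND

t = np.linspace(0, 1, 20)
# Points on the line X = (1, 3, 0) + t * (2, -1, 0.5).
points = np.stack([1 + 2 * t, 3 - t, 0.5 * t], axis=-1)
lm = LineModelND()
lm.estimate(points)
origin, direction = lm.params  # direction is a unit vector
# Residuals are orthogonal distances; essentially zero for an exact line.
print(np.abs(lm.residuals(points)).max())
```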
predict(x, axis=0, params=None) [source]
Predict intersection of the estimated line model with a hyperplane orthogonal to a given axis. Parameters
x(n, 1) array
Coordinates along an axis.
axisint
Axis orthogonal to the hyperplane intersecting the line.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
data(n, m) array
Predicted coordinates. Raises
ValueError
If the line is parallel to the given axis.
predict_x(y, params=None) [source]
Predict x-coordinates for 2D lines using the estimated model. Alias for: predict(y, axis=1)[:, 0]
Parameters
yarray
y-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
xarray
Predicted x-coordinates.
predict_y(x, params=None) [source]
Predict y-coordinates for 2D lines using the estimated model. Alias for: predict(x, axis=0)[:, 1]
Parameters
xarray
x-coordinates.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
yarray
Predicted y-coordinates.
residuals(data, params=None) [source]
Determine residuals of data to model. For each point, the shortest (orthogonal) distance to the line is returned. It is obtained by projecting the data onto the line. Parameters
data(N, dim) array
N points in a space of dimension dim.
params(2, ) array, optional
Optional custom parameter set in the form (origin, direction). Returns
residuals(N, ) array
Residual for each data point.