Module: restoration Image restoration module. skimage.restoration.ball_kernel(radius, ndim) Create a ball kernel for restoration.rolling_ball. skimage.restoration.calibrate_denoiser(…) Calibrate a denoising function and return optimal J-invariant version. skimage.restoration.cycle_spin(x, func, …) Cycle spinning (repeatedly apply func to shifted versions of x). skimage.restoration.denoise_bilateral(image) Denoise image using bilateral filter. skimage.restoration.denoise_nl_means(image) Perform non-local means denoising on 2-D or 3-D grayscale images, and 2-D RGB images. skimage.restoration.denoise_tv_bregman(…) Perform total-variation denoising using split-Bregman optimization. skimage.restoration.denoise_tv_chambolle(image) Perform total-variation denoising on n-dimensional images. skimage.restoration.denoise_wavelet(image[, …]) Perform wavelet denoising on an image. skimage.restoration.ellipsoid_kernel(shape, …) Create an ellipsoid kernel for restoration.rolling_ball. skimage.restoration.estimate_sigma(image[, …]) Robust wavelet-based estimator of the (Gaussian) noise standard deviation. skimage.restoration.inpaint_biharmonic(…) Inpaint masked points in image with biharmonic equations. skimage.restoration.richardson_lucy(image, psf) Richardson-Lucy deconvolution. skimage.restoration.rolling_ball(image, *[, …]) Estimate background intensity by rolling/translating a kernel. skimage.restoration.unsupervised_wiener(…) Unsupervised Wiener-Hunt deconvolution. skimage.restoration.unwrap_phase(image[, …]) Recover the original from a wrapped phase image. skimage.restoration.wiener(image, psf, balance) Wiener-Hunt deconvolution. ball_kernel skimage.restoration.ball_kernel(radius, ndim) [source] Create a ball kernel for restoration.rolling_ball. Parameters radiusint Radius of the ball. ndimint Number of dimensions of the ball. ndim should match the dimensionality of the image the kernel will be applied to. 
Returns kernelndarray The kernel containing the surface intensity of the top half of the ellipsoid. See also rolling_ball calibrate_denoiser skimage.restoration.calibrate_denoiser(image, denoise_function, denoise_parameters, *, stride=4, approximate_loss=True, extra_output=False) [source] Calibrate a denoising function and return optimal J-invariant version. The returned function is partially evaluated with optimal parameter values set for denoising the input image. Parameters imagendarray Input data to be denoised (converted using img_as_float). denoise_functionfunction Denoising function to be calibrated. denoise_parametersdict of list Ranges of parameters for denoise_function to be calibrated over. strideint, optional Stride used in masking procedure that converts denoise_function to J-invariance. approximate_lossbool, optional Whether to approximate the self-supervised loss used to evaluate the denoiser by only computing it on one masked version of the image. If False, the runtime will be a factor of stride**image.ndim longer. extra_outputbool, optional If True, return parameters and losses in addition to the calibrated denoising function Returns best_denoise_functionfunction The optimal J-invariant version of denoise_function. If extra_output is True, the following tuple is also returned: (parameters_tested, losses)tuple (list of dict, list of int) List of parameters tested for denoise_function, as a dictionary of kwargs Self-supervised loss for each set of parameters in parameters_tested. Notes The calibration procedure uses a self-supervised mean-square-error loss to evaluate the performance of J-invariant versions of denoise_function. The minimizer of the self-supervised loss is also the minimizer of the ground-truth loss (i.e., the true MSE error) [1]. The returned function can be used on the original noisy image, or other images with similar characteristics. 
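The masking idea behind the self-supervised loss can be sketched in plain NumPy. The helpers below (`box_denoise`, `masked_self_supervised_loss`) are hypothetical names for exposition only, not part of skimage, and the sketch evaluates a single masked grid (the approximate-loss case) with a simplified 4-neighbour interpolation:

```python
import numpy as np

# Sketch of the J-invariant masking idea (an illustration, not skimage's
# implementation): hide a grid of pixels, let the denoiser see only their
# neighbours, and score it by the MSE on the hidden pixels alone.

def box_denoise(img, n_passes):
    """Hypothetical toy denoiser: repeated 4-neighbour averaging."""
    out = img.copy()
    for _ in range(n_passes):
        out = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
    return out

def masked_self_supervised_loss(noisy, denoise, stride=4):
    """Self-supervised MSE on one masked grid of pixels."""
    mask = np.zeros(noisy.shape, dtype=bool)
    mask[::stride, ::stride] = True
    # Replace masked pixels by the mean of their 4 neighbours so the denoiser
    # never sees the values it is asked to predict.
    neighbour_mean = (np.roll(noisy, 1, 0) + np.roll(noisy, -1, 0)
                      + np.roll(noisy, 1, 1) + np.roll(noisy, -1, 1)) / 4.0
    masked_input = np.where(mask, neighbour_mean, noisy)
    denoised = denoise(masked_input)
    return np.mean((denoised[mask] - noisy[mask]) ** 2)

# Sweep a hypothetical parameter (number of smoothing passes) and keep the
# value that minimizes the self-supervised loss, as calibrate_denoiser does
# over its denoise_parameters grid.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
losses = {n: masked_self_supervised_loss(noisy, lambda x: box_denoise(x, n))
          for n in (1, 2, 4)}
best_n = min(losses, key=losses.get)
```

The key property, per the Notes above, is that minimizing this loss over the parameter grid also approximately minimizes the true (unknown) MSE against the clean image.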
Increasing the stride increases the performance of best_denoise_function at the expense of increasing its runtime. It has no effect on the runtime of the calibration. References 1 J. Batson & L. Royer. Noise2Self: Blind Denoising by Self-Supervision, International Conference on Machine Learning, p. 524-533 (2019). Examples >>> from skimage import color, data >>> from skimage.restoration import calibrate_denoiser, denoise_wavelet >>> import numpy as np >>> img = color.rgb2gray(data.astronaut()[:50, :50]) >>> noisy = img + 0.5 * img.std() * np.random.randn(*img.shape) >>> parameters = {'sigma': np.arange(0.1, 0.4, 0.02)} >>> denoising_function = calibrate_denoiser(noisy, denoise_wavelet, ... denoise_parameters=parameters) >>> denoised_img = denoising_function(img) cycle_spin skimage.restoration.cycle_spin(x, func, max_shifts, shift_steps=1, num_workers=None, multichannel=False, func_kw={}) [source] Cycle spinning (repeatedly apply func to shifted versions of x). Parameters xarray-like Data for input to func. funcfunction A function to apply to circularly shifted versions of x. Should take x as its first argument. Any additional arguments can be supplied via func_kw. max_shiftsint or tuple If an integer, shifts in range(0, max_shifts+1) will be used along each axis of x. If a tuple, range(0, max_shifts[i]+1) will be along axis i. shift_stepsint or tuple, optional The step size for the shifts applied along axis i is given by range(0, max_shifts[i]+1, shift_steps[i]). If an integer is provided, the same step size is used for all axes. num_workersint or None, optional The number of parallel threads to use during cycle spinning. If set to None, the full set of available cores is used. multichannelbool, optional Whether to treat the final axis as channels (no cycle shifts are performed over the channels axis). func_kwdict, optional Additional keyword arguments to supply to func. 
Returns avg_ynp.ndarray The output of func(x, **func_kw) averaged over all combinations of the specified axis shifts. Notes Cycle spinning was proposed as a way to approach shift-invariance via performing several circular shifts of a shift-variant transform [1]. For an n-level discrete wavelet transform, one may wish to perform all shifts up to max_shifts = 2**n - 1. In practice, much of the benefit can often be realized with only a small number of shifts per axis. For transforms such as the blockwise discrete cosine transform, one may wish to evaluate shifts up to the block size used by the transform. References 1 R.R. Coifman and D.L. Donoho. “Translation-Invariant De-Noising”. Wavelets and Statistics, Lecture Notes in Statistics, vol.103. Springer, New York, 1995, pp.125-150. DOI:10.1007/978-1-4612-2544-7_9 Examples >>> import numpy as np >>> import skimage.data >>> from skimage import img_as_float >>> from skimage.restoration import denoise_wavelet, cycle_spin >>> img = img_as_float(skimage.data.camera()) >>> sigma = 0.1 >>> img = img + sigma * np.random.standard_normal(img.shape) >>> denoised = cycle_spin(img, func=denoise_wavelet, ... max_shifts=3) denoise_bilateral skimage.restoration.denoise_bilateral(image, win_size=None, sigma_color=None, sigma_spatial=1, bins=10000, mode='constant', cval=0, multichannel=False) [source] Denoise image using bilateral filter. Parameters imagendarray, shape (M, N[, 3]) Input image, 2D grayscale or RGB. win_sizeint Window size for filtering. If win_size is not specified, it is calculated as max(5, 2 * ceil(3 * sigma_spatial) + 1). sigma_colorfloat Standard deviation for grayvalue/color distance (radiometric similarity). A larger value results in averaging of pixels with larger radiometric differences. Note that the image will be converted using the img_as_float function and thus the standard deviation is with respect to the range [0, 1]. If the value is None, the standard deviation of the image will be used. 
sigma_spatialfloat Standard deviation for range distance. A larger value results in averaging of pixels with larger spatial differences. binsint Number of discrete values for Gaussian weights of color filtering. A larger value results in improved accuracy. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’} How to handle values outside the image borders. See numpy.pad for detail. cvalfloat Used in conjunction with mode ‘constant’, the value outside the image boundaries. multichannelbool Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. Returns denoisedndarray Denoised image. Notes This is an edge-preserving denoising filter. It averages pixels based on their spatial closeness and radiometric similarity [1]. Spatial closeness is measured by the Gaussian function of the Euclidean distance between two pixels and a certain standard deviation (sigma_spatial). Radiometric similarity is measured by the Gaussian function of the Euclidean distance between two color values and a certain standard deviation (sigma_color). References 1 C. Tomasi and R. Manduchi. “Bilateral Filtering for Gray and Color Images.” IEEE International Conference on Computer Vision (1998) 839-846. DOI:10.1109/ICCV.1998.710815 Examples >>> import numpy as np >>> from skimage import data, img_as_float >>> from skimage.restoration import denoise_bilateral >>> astro = img_as_float(data.astronaut()) >>> astro = astro[220:300, 220:320] >>> noisy = astro + 0.6 * astro.std() * np.random.random(astro.shape) >>> noisy = np.clip(noisy, 0, 1) >>> denoised = denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=15, ... multichannel=True) Examples using skimage.restoration.denoise_bilateral Rank filters denoise_nl_means skimage.restoration.denoise_nl_means(image, patch_size=7, patch_distance=11, h=0.1, multichannel=False, fast_mode=True, sigma=0.0, *, preserve_range=None) [source] Perform non-local means denoising on 2-D or 3-D grayscale images, and 2-D RGB images. 
Parameters image2D or 3D ndarray Input image to be denoised, which can be 2D or 3D, and grayscale or RGB (for 2D images only, see multichannel parameter). patch_sizeint, optional Size of patches used for denoising. patch_distanceint, optional Maximal distance in pixels to search for patches used for denoising. hfloat, optional Cut-off distance (in gray levels). The higher h, the more permissive one is in accepting patches. A higher h results in a smoother image, at the expense of blurring features. For a Gaussian noise of standard deviation sigma, a rule of thumb is to choose the value of h to be sigma or slightly less. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. fast_modebool, optional If True (default value), a fast version of the non-local means algorithm is used. If False, the original version of non-local means is used. See the Notes section for more details about the algorithms. sigmafloat, optional The standard deviation of the (Gaussian) noise. If provided, a more robust computation of patch weights is computed that takes the expected noise variance into account (see Notes below). preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns resultndarray Denoised image, of same shape as image. Notes The non-local means algorithm is well suited for denoising images with specific textures. The principle of the algorithm is to average the value of a given pixel with values of other pixels in a limited neighbourhood, provided that the patches centered on the other pixels are similar enough to the patch centered on the pixel of interest. 
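The principle above can be made concrete with a deliberately slow, single-pixel sketch in plain NumPy. `nl_means_one_pixel` is a hypothetical helper name for exposition only; it is not the library routine, which is vectorized and offers the fast mode described below:

```python
import numpy as np

# Single-pixel sketch of non-local means: average the pixels in a search
# window, weighted by the similarity of the patches centred on them.

def nl_means_one_pixel(image, row, col, patch_size=3, patch_distance=5, h=0.1):
    """Denoise image[row, col] only (illustrative, O(window * patch) cost)."""
    pr = patch_size // 2
    ref = image[row - pr:row + pr + 1, col - pr:col + pr + 1]
    weights, values = [], []
    for r in range(row - patch_distance, row + patch_distance + 1):
        for c in range(col - patch_distance, col + patch_distance + 1):
            if r - pr < 0 or c - pr < 0:
                continue  # patch would fall off the top/left border
            patch = image[r - pr:r + pr + 1, c - pr:c + pr + 1]
            if patch.shape != ref.shape:
                continue  # patch falls off the bottom/right border
            dist2 = np.mean((patch - ref) ** 2)       # squared patch distance
            weights.append(np.exp(-dist2 / h ** 2))   # Gaussian weighting
            values.append(image[r, c])
    weights = np.asarray(weights)
    return float(np.sum(weights * np.asarray(values)) / np.sum(weights))
```

On a constant image every patch is identical, so all weights are equal and the result is the constant itself; on a textured image, only pixels whose surrounding patches resemble the reference patch contribute significantly.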
In the original version of the algorithm [1], corresponding to fast_mode=False, the computational complexity is: image.size * patch_size ** image.ndim * patch_distance ** image.ndim Hence, changing the size of patches or their maximal distance has a strong effect on computing times, especially for 3-D images. However, the default behavior corresponds to fast_mode=True, for which another version of non-local means [2] is used, corresponding to a complexity of: image.size * patch_distance ** image.ndim The computing time depends only weakly on the patch size, thanks to the computation of the integral of patches distances for a given shift, that reduces the number of operations [1]. Therefore, this algorithm executes faster than the classic algorithm (fast_mode=False), at the expense of using twice as much memory. This implementation has been proven to be more efficient compared to other alternatives, see e.g. [3]. Compared to the classic algorithm, all pixels of a patch contribute to the distance to another patch with the same weight, no matter their distance to the center of the patch. This coarser computation of the distance can result in a slightly poorer denoising performance. Moreover, for small images (images with a linear size that is only a few times the patch size), the classic algorithm can be faster due to boundary effects. The image is padded using the reflect mode of skimage.util.pad before denoising. If the noise standard deviation, sigma, is provided, a more robust computation of patch weights is used. Subtracting the known noise variance from the computed patch distances improves the estimates of patch similarity, giving a moderate improvement to denoising performance [4]. It was also mentioned as an option for the fast variant of the algorithm in [3]. When sigma is provided, a smaller h should typically be used to avoid oversmoothing. 
The optimal value for h depends on the image content and noise level, but a reasonable starting point is h = 0.8 * sigma when fast_mode is True, or h = 0.6 * sigma when fast_mode is False. References 1(1,2) A. Buades, B. Coll, & J-M. Morel. A non-local algorithm for image denoising. In CVPR 2005, Vol. 2, pp. 60-65, IEEE. DOI:10.1109/CVPR.2005.38 2 J. Darbon, A. Cunha, T.F. Chan, S. Osher, and G.J. Jensen, Fast nonlocal filtering applied to electron cryomicroscopy, in 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008, pp. 1331-1334. DOI:10.1109/ISBI.2008.4541250 3(1,2) Jacques Froment. Parameter-Free Fast Pixelwise Non-Local Means Denoising. Image Processing On Line, 2014, vol. 4, pp. 300-326. DOI:10.5201/ipol.2014.120 4 A. Buades, B. Coll, & J-M. Morel. Non-Local Means Denoising. Image Processing On Line, 2011, vol. 1, pp. 208-212. DOI:10.5201/ipol.2011.bcm_nlm Examples >>> import numpy as np >>> from skimage.restoration import denoise_nl_means >>> a = np.zeros((40, 40)) >>> a[10:-10, 10:-10] = 1. >>> a += 0.3 * np.random.randn(*a.shape) >>> denoised_a = denoise_nl_means(a, 7, 5, 0.1) denoise_tv_bregman skimage.restoration.denoise_tv_bregman(image, weight, max_iter=100, eps=0.001, isotropic=True, *, multichannel=False) [source] Perform total-variation denoising using split-Bregman optimization. Total-variation denoising (also known as total-variation regularization) tries to find an image with less total-variation under the constraint of being similar to the input image, which is controlled by the regularization parameter ([1], [2], [3], [4]). Parameters imagendarray Input data to be denoised (converted using img_as_float). weightfloat Denoising weight. The smaller the weight, the more denoising (at the expense of less similarity to the input). The regularization parameter lambda is chosen as 2 * weight. epsfloat, optional Relative difference of the value of the cost function that determines the stop criterion. 
The algorithm stops when: SUM((u(n) - u(n-1))**2) < eps max_iterint, optional Maximal number of iterations used for the optimization. isotropicboolean, optional Switch between isotropic and anisotropic TV denoising. multichannelbool, optional Apply total-variation denoising separately for each channel. This option should be true for color images, otherwise the denoising is also applied in the channels dimension. Returns undarray Denoised image. References 1 https://en.wikipedia.org/wiki/Total_variation_denoising 2 Tom Goldstein and Stanley Osher, “The Split Bregman Method For L1 Regularized Problems”, ftp://ftp.math.ucla.edu/pub/camreport/cam08-29.pdf 3 Pascal Getreuer, “Rudin–Osher–Fatemi Total Variation Denoising using Split Bregman” in Image Processing On Line on 2012–05–19, https://www.ipol.im/pub/art/2012/g-tvd/article_lr.pdf 4 https://web.math.ucsb.edu/~cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf denoise_tv_chambolle skimage.restoration.denoise_tv_chambolle(image, weight=0.1, eps=0.0002, n_iter_max=200, multichannel=False) [source] Perform total-variation denoising on n-dimensional images. Parameters imagendarray of ints, uints or floats Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image. weightfloat, optional Denoising weight. The greater the weight, the more denoising (at the expense of fidelity to input). epsfloat, optional Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when: (E_(n-1) - E_n) < eps * E_0 n_iter_maxint, optional Maximal number of iterations used for the optimization. multichannelbool, optional Apply total-variation denoising separately for each channel. This option should be true for color images, otherwise the denoising is also applied in the channels dimension. Returns outndarray Denoised image. Notes Make sure to set the multichannel parameter appropriately for color images. 
The principle of total variation denoising, explained in https://en.wikipedia.org/wiki/Total_variation_denoising, is to minimize the total variation of the image, which can be roughly described as the integral of the norm of the image gradient. Total variation denoising tends to produce “cartoon-like” images, that is, piecewise-constant images. This code is an implementation of the algorithm of Rudin, Fatemi and Osher that was proposed by Chambolle in [1]. References 1 A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, Springer, 2004, 20, 89-97. Examples 2D example on astronaut image: >>> import numpy as np >>> from skimage import color, data >>> from skimage.restoration import denoise_tv_chambolle >>> img = color.rgb2gray(data.astronaut())[:50, :50] >>> img += 0.5 * img.std() * np.random.randn(*img.shape) >>> denoised_img = denoise_tv_chambolle(img, weight=60) 3D example on synthetic data: >>> x, y, z = np.ogrid[0:20, 0:20, 0:20] >>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2 >>> mask = mask.astype(float) >>> mask += 0.2*np.random.randn(*mask.shape) >>> res = denoise_tv_chambolle(mask, weight=100) denoise_wavelet skimage.restoration.denoise_wavelet(image, sigma=None, wavelet='db1', mode='soft', wavelet_levels=None, multichannel=False, convert2ycbcr=False, method='BayesShrink', rescale_sigma=True) [source] Perform wavelet denoising on an image. Parameters imagendarray ([M[, N[, …P]][, C]) of ints, uints or floats Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image. sigmafloat or list, optional The noise standard deviation used when computing the wavelet detail coefficient threshold(s). When None (default), the noise standard deviation is estimated via the method in [2]. waveletstring, optional The type of wavelet to use; it can be any of the options that pywt.wavelist outputs. The default is ‘db1’. 
For example, wavelet can be any of {'db2', 'haar', 'sym9'} and many more. mode{‘soft’, ‘hard’}, optional An optional argument to choose the type of denoising performed. Note that choosing soft thresholding given additive noise finds the best approximation of the original image. wavelet_levelsint or None, optional The number of wavelet decomposition levels to use. The default is three less than the maximum number of possible decomposition levels. multichannelbool, optional Apply wavelet denoising separately for each channel (where channels correspond to the final axis of the array). convert2ycbcrbool, optional If True and multichannel is True, do the wavelet denoising in the YCbCr colorspace instead of the RGB color space. This typically results in better performance for RGB images. method{‘BayesShrink’, ‘VisuShrink’}, optional Thresholding method to be used. The currently supported methods are “BayesShrink” [1] and “VisuShrink” [2]. Defaults to “BayesShrink”. rescale_sigmabool, optional If False, no rescaling of the user-provided sigma will be performed. The default of True rescales sigma appropriately if the image is rescaled internally. New in version 0.16: rescale_sigma was introduced in 0.16 Returns outndarray Denoised image. Notes The wavelet domain is a sparse representation of the image, and can be thought of similarly to the frequency domain of the Fourier transform. Sparse representations have most values zero or near-zero and truly random noise is (usually) represented by many small values in the wavelet domain. Setting all values below some threshold to 0 reduces the noise in the image, but larger thresholds also decrease the detail present in the image. If the input is 3D, this function performs wavelet denoising on each color plane separately. Changed in version 0.16: For floating point inputs, the original input range is maintained and there is no clipping applied to the output. 
Other input types will be converted to a floating point value in the range [-1, 1] or [0, 1] depending on the input image range. Unless rescale_sigma = False, any internal rescaling applied to the image will also be applied to sigma to maintain the same relative amplitude. Many wavelet coefficient thresholding approaches have been proposed. By default, denoise_wavelet applies BayesShrink, which is an adaptive thresholding method that computes separate thresholds for each wavelet sub-band as described in [1]. If method == "VisuShrink", a single “universal threshold” is applied to all wavelet detail coefficients as described in [2]. This threshold is designed to remove all Gaussian noise at a given sigma with high probability, but tends to produce images that appear overly smooth. Although any of the wavelets from PyWavelets can be selected, the thresholding methods assume an orthogonal wavelet transform and may not choose the threshold appropriately for biorthogonal wavelets. Orthogonal wavelets are desirable because white noise in the input remains white noise in the subbands. Biorthogonal wavelets lead to colored noise in the subbands. Additionally, the orthogonal wavelets in PyWavelets are orthonormal so that noise variance in the subbands remains identical to the noise variance of the input. Example orthogonal wavelets are the Daubechies (e.g. ‘db2’) or symmlet (e.g. ‘sym2’) families. References 1(1,2) Chang, S. Grace, Bin Yu, and Martin Vetterli. “Adaptive wavelet thresholding for image denoising and compression.” Image Processing, IEEE Transactions on 9.9 (2000): 1532-1546. DOI:10.1109/83.862633 2(1,2,3) D. L. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage.” Biometrika 81.3 (1994): 425-455. 
DOI:10.1093/biomet/81.3.425 Examples >>> import numpy as np >>> from skimage import color, data, img_as_float >>> from skimage.restoration import denoise_wavelet >>> img = img_as_float(data.astronaut()) >>> img = color.rgb2gray(img) >>> img += 0.1 * np.random.randn(*img.shape) >>> img = np.clip(img, 0, 1) >>> denoised_img = denoise_wavelet(img, sigma=0.1, rescale_sigma=True) ellipsoid_kernel skimage.restoration.ellipsoid_kernel(shape, intensity) [source] Create an ellipsoid kernel for restoration.rolling_ball. Parameters shapearraylike Length of the principal axis of the ellipsoid (excluding the intensity axis). The kernel needs to have the same dimensionality as the image it will be applied to. intensityint Length of the intensity axis of the ellipsoid. Returns kernelndarray The kernel containing the surface intensity of the top half of the ellipsoid. See also rolling_ball Examples using skimage.restoration.ellipsoid_kernel Use rolling-ball algorithm for estimating background intensity estimate_sigma skimage.restoration.estimate_sigma(image, average_sigmas=False, multichannel=False) [source] Robust wavelet-based estimator of the (Gaussian) noise standard deviation. Parameters imagendarray Image for which to estimate the noise standard deviation. average_sigmasbool, optional If True, average the channel estimates of sigma. Otherwise return a list of sigmas corresponding to each channel. multichannelbool Estimate sigma separately for each channel. Returns sigmafloat or list Estimated noise standard deviation(s). If multichannel is True and average_sigmas is False, a separate noise estimate for each channel is returned. Otherwise, the average of the individual channel estimates is returned. Notes This function assumes the noise follows a Gaussian distribution. The estimation algorithm is based on the median absolute deviation of the wavelet detail coefficients as described in section 4.2 of [1]. References 1 D. L. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage.” Biometrika 81.3 (1994): 425-455. 
DOI:10.1093/biomet/81.3.425 Examples >>> import numpy as np >>> import skimage.data >>> from skimage import img_as_float >>> from skimage.restoration import estimate_sigma >>> img = img_as_float(skimage.data.camera()) >>> sigma = 0.1 >>> img = img + sigma * np.random.standard_normal(img.shape) >>> sigma_hat = estimate_sigma(img, multichannel=False) inpaint_biharmonic skimage.restoration.inpaint_biharmonic(image, mask, multichannel=False) [source] Inpaint masked points in image with biharmonic equations. Parameters image(M[, N[, …, P]][, C]) ndarray Input image. mask(M[, N[, …, P]]) ndarray Array of pixels to be inpainted. It must have the same shape as one of the image channels. Unknown pixels are represented with 1, known pixels with 0. multichannelboolean, optional If True, the last image dimension is considered as a color channel, otherwise as spatial. Returns out(M[, N[, …, P]][, C]) ndarray Input image with masked pixels inpainted. References 1 N. S. Hoang, S. B. Damelin, “On surface completion and image inpainting by biharmonic functions: numerical aspects”, arXiv:1707.06567 2 C. K. Chui and H. N. Mhaskar, MRA Contextual-Recovery Extension of Smooth Functions on Manifolds, Appl. and Comp. Harmonic Anal., 28 (2010), 104-113, DOI:10.1016/j.acha.2009.04.004 Examples >>> import numpy as np >>> from skimage.restoration import inpaint_biharmonic >>> img = np.tile(np.square(np.linspace(0, 1, 5)), (5, 1)) >>> mask = np.zeros_like(img) >>> mask[2, 2:] = 1 >>> mask[1, 3:] = 1 >>> mask[0, 4:] = 1 >>> out = inpaint_biharmonic(img, mask) richardson_lucy skimage.restoration.richardson_lucy(image, psf, iterations=50, clip=True, filter_epsilon=None) [source] Richardson-Lucy deconvolution. Parameters imagendarray Input degraded image (can be N dimensional). psfndarray The point spread function. iterationsint, optional Number of iterations. This parameter plays the role of regularisation. clipboolean, optional True by default. If true, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. 
filter_epsilon: float, optional Value below which intermediate results become 0 to avoid division by small numbers. Returns im_deconvndarray The deconvolved image. References 1 https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution Examples >>> import numpy as np >>> from skimage import img_as_float, data, restoration >>> camera = img_as_float(data.camera()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> camera = convolve2d(camera, psf, 'same') >>> camera += 0.1 * camera.std() * np.random.standard_normal(camera.shape) >>> deconvolved = restoration.richardson_lucy(camera, psf, 5) rolling_ball skimage.restoration.rolling_ball(image, *, radius=100, kernel=None, nansafe=False, num_threads=None) [source] Estimate background intensity by rolling/translating a kernel. This rolling ball algorithm estimates background intensity for an n-dimensional image in case of uneven exposure. It is a generalization of the frequently used rolling ball algorithm [1]. Parameters imagendarray The image to be filtered. radiusint, optional Radius of a ball shaped kernel to be rolled/translated in the image. Used if kernel = None. kernelndarray, optional The kernel to be rolled/translated in the image. It must have the same number of dimensions as image. Kernel is filled with the intensity of the kernel at that position. nansafe: bool, optional If False (default), it is assumed that none of the values in image are np.nan, and a faster implementation is used. num_threads: int, optional The maximum number of threads to use. If None, use the OpenMP default value; typically equal to the maximum number of virtual cores. Note: This is an upper limit to the number of threads. The exact number is determined by the system’s OpenMP library. Returns backgroundndarray The estimated background of the image. 
Notes For the pixel that has its background intensity estimated (without loss of generality at center) the rolling ball method centers the kernel under it and raises the kernel until the surface touches the image umbra at some pos=(y,x). The background intensity is then estimated using the image intensity at that position (image[pos]) plus the difference of kernel[center] - kernel[pos]. This algorithm assumes that dark pixels correspond to the background. If you have a bright background, invert the image before passing it to the function, e.g., using util.invert. See the gallery example for details. This algorithm is sensitive to noise (in particular salt-and-pepper noise). If this is a problem in your image, you can apply mild Gaussian smoothing before passing the image to this function. References 1 Sternberg, Stanley R. “Biomedical image processing.” Computer 1 (1983): 22-34. DOI:10.1109/MC.1983.1654163 Examples >>> import numpy as np >>> from skimage import data >>> from skimage.restoration import rolling_ball >>> image = data.coins() >>> background = rolling_ball(image) >>> filtered_image = image - background >>> import numpy as np >>> from skimage import data >>> from skimage.restoration import rolling_ball, ellipsoid_kernel >>> image = data.coins() >>> kernel = ellipsoid_kernel((101, 101), 75) >>> background = rolling_ball(image, kernel=kernel) >>> filtered_image = image - background Examples using skimage.restoration.rolling_ball Use rolling-ball algorithm for estimating background intensity unsupervised_wiener skimage.restoration.unsupervised_wiener(image, psf, reg=None, user_params=None, is_real=True, clip=True) [source] Unsupervised Wiener-Hunt deconvolution. Return the deconvolution with a Wiener-Hunt approach, where the hyperparameters are automatically estimated. The algorithm is a stochastic iterative process (Gibbs sampler) described in the reference below. See also the wiener function. Parameters image(M, N) ndarray The input degraded image. 
psfndarray The impulse response (input image’s space) or the transfer function (Fourier space). Both are accepted. The transfer function is automatically recognized as being complex (np.iscomplexobj(psf)). regndarray, optional The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. user_paramsdict, optional Dictionary of parameters for the Gibbs sampler. See below. clipboolean, optional True by default. If true, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. Returns x_postmean(M, N) ndarray The deconvolved image (the posterior mean). chainsdict The keys noise and prior contain the chain list of noise and prior precision respectively. Other Parameters The keys of ``user_params`` are: thresholdfloat The stopping criterion: the norm of the difference between two successive approximated solutions (empirical mean of object samples, see Notes section). 1e-4 by default. burninint The number of samples to ignore before starting computation of the mean. 15 by default. min_iterint The minimum number of iterations. 30 by default. max_iterint The maximum number of iterations if threshold is not satisfied. 200 by default. callbackcallable (None by default) A user-provided callable to which the current image sample is passed at each iteration, for whatever purpose. The user can store the sample, or compute other moments than the mean. It has no influence on the algorithm execution and is only for inspection. Notes The estimated image is designed as the posterior mean of a probability law (from a Bayesian analysis). The mean is defined as a sum over all the possible images weighted by their respective probability. Given the size of the problem, the exact sum is not tractable. This algorithm uses MCMC to draw images under the posterior law. The practical idea is to only draw highly probable images since they have the biggest contribution to the mean. 
Conversely, the less probable images are drawn less often since their contribution is low. Finally, the empirical mean of these samples gives an estimate of the mean; the computation would be exact with an infinite sample set. References 1 François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010) https://www.osapublishing.org/josaa/abstract.cfm?URI=josaa-27-7-1593 http://research.orieux.fr/files/papers/OGR-JOSA10.pdf Examples >>> import numpy as np >>> from skimage import color, data, restoration >>> img = color.rgb2gray(data.astronaut()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> img = convolve2d(img, psf, 'same') >>> img += 0.1 * img.std() * np.random.standard_normal(img.shape) >>> deconvolved_img, chains = restoration.unsupervised_wiener(img, psf) unwrap_phase skimage.restoration.unwrap_phase(image, wrap_around=False, seed=None) [source] Recover the original from a wrapped phase image. From an image wrapped to lie in the interval [-pi, pi), recover the original, unwrapped image. Parameters image1D, 2D or 3D ndarray of floats, optionally a masked array The values should be in the range [-pi, pi). If a masked array is provided, the masked entries will not be changed, and their values will not be used to guide the unwrapping of neighboring, unmasked values. Masked 1D arrays are not allowed, and will raise a ValueError. wrap_aroundbool or sequence of bool, optional When an element of the sequence is True, the unwrapping process will regard the edges along the corresponding axis of the image to be connected and use this connectivity to guide the phase unwrapping process. If only a single boolean is given, it will apply to all axes. Wrap around is not supported for 1D arrays. seedint, optional Unwrapping 2D or 3D images uses random initialization. This sets the seed of the PRNG to achieve deterministic behavior.
Returns image_unwrappedarray_like, double Unwrapped image of the same shape as the input. If the input image was a masked array, the mask will be preserved. Raises ValueError If called with a masked 1D array or called with a 1D array and wrap_around=True. References 1 Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor, and Munther A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path”, Applied Optics, Vol. 41, No. 35 (2002) 7437, 2 Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., “Fast three-dimensional phase-unwrapping algorithm based on sorting by reliability following a non-continuous path. In W. Osten, C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32–40, International Society for Optics and Photonics. Examples >>> import numpy as np >>> from skimage.restoration import unwrap_phase >>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j] >>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2)) >>> image_wrapped = np.angle(np.exp(1j * image)) >>> image_unwrapped = unwrap_phase(image_wrapped) >>> np.std(image_unwrapped - image) < 1e-6 # A constant offset is normal True Examples using skimage.restoration.unwrap_phase Phase Unwrapping wiener skimage.restoration.wiener(image, psf, balance, reg=None, is_real=True, clip=True) [source] Wiener-Hunt deconvolution Return the deconvolution with a Wiener-Hunt approach (i.e. with Fourier diagonalisation). Parameters image(M, N) ndarray Input degraded image psfndarray Point Spread Function. This is assumed to be the impulse response (input image space) if the data-type is real, or the transfer function (Fourier space) if the data-type is complex. There are no constraints on the shape of the impulse response. The transfer function must be of shape (M, N) if is_real is True, (M, N // 2 + 1) otherwise (see np.fft.rfftn).
balancefloat The regularisation parameter value that tunes the balance between the data adequacy, which improves frequency restoration, and the prior adequacy, which reduces frequency restoration (to avoid noise artifacts). regndarray, optional The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. Shape constraint is the same as for the psf parameter. is_realboolean, optional True by default. Specify whether psf and reg are provided under the Hermitian hypothesis, that is, only half of the frequency plane is provided (due to the redundancy of the Fourier transform of a real signal). It applies only if psf and/or reg are provided as a transfer function. For the Hermitian property see the uft module or np.fft.rfftn. clipboolean, optional True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. Returns im_deconv(M, N) ndarray The deconvolved image. Notes This function applies the Wiener filter to a noisy and degraded image by an impulse response (or PSF). If the data model is \[y = Hx + n\] where \(n\) is noise, \(H\) the PSF and \(x\) the unknown original image, the Wiener filter is \[\hat x = F^\dagger (|\Lambda_H|^2 + \lambda |\Lambda_D|^2)^{-1} \Lambda_H^\dagger F y\] where \(F\) and \(F^\dagger\) are the Fourier and inverse Fourier transforms respectively, \(\Lambda_H\) the transfer function (or the Fourier transform of the PSF, see [Hunt] below) and \(\Lambda_D\) the filter to penalize the restored image frequencies (Laplacian by default, that is, penalization of high frequencies). The parameter \(\lambda\) tunes the balance between the data (which tends to increase high frequencies, even those coming from noise) and the regularization. These methods are then specific to a prior model. Consequently, the application or the true image nature must correspond to the prior model. By default, the prior model (Laplacian) introduces image smoothness or pixel correlation.
It can also be interpreted as high-frequency penalization to compensate for the instability of the solution with respect to the data (sometimes called noise amplification or “explosive” solution). Finally, the use of Fourier space implies a circulant property of \(H\), see [Hunt]. References 1 François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010) https://www.osapublishing.org/josaa/abstract.cfm?URI=josaa-27-7-1593 http://research.orieux.fr/files/papers/OGR-JOSA10.pdf 2 B. R. Hunt “A matrix theory proof of the discrete convolution theorem”, IEEE Trans. on Audio and Electroacoustics, vol. au-19, no. 4, pp. 285-288, dec. 1971 Examples >>> import numpy as np >>> from skimage import color, data, restoration >>> img = color.rgb2gray(data.astronaut()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> img = convolve2d(img, psf, 'same') >>> img += 0.1 * img.std() * np.random.standard_normal(img.shape) >>> deconvolved_img = restoration.wiener(img, psf, 1100)
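The psf can be given either as an impulse response or as a precomputed transfer function. A minimal sketch of the equivalence, assuming the impulse response is centered the way the function does it internally (padded to the image shape and rolled so its center sits at the origin); the random image is a stand-in for real degraded data:

```python
import numpy as np
from skimage import restoration

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a degraded image
psf = np.ones((5, 5)) / 25

# Impulse response (real dtype): the transfer function is built internally.
deconv_ir = restoration.wiener(img, psf, balance=0.1)

# Equivalent call with a precomputed transfer function (complex dtype).
# With is_real=True its shape must be (M, N // 2 + 1), as for np.fft.rfftn.
psf_padded = np.zeros_like(img)
psf_padded[:5, :5] = psf
# Center the impulse response at the origin before transforming.
psf_padded = np.roll(psf_padded, shift=(-2, -2), axis=(0, 1))
trans_fn = np.fft.rfftn(psf_padded)
deconv_tf = restoration.wiener(img, trans_fn, balance=0.1)
```

Passing the transfer function avoids recomputing it when deconvolving many images that share the same PSF.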
skimage.restoration.calibrate_denoiser(image, denoise_function, denoise_parameters, *, stride=4, approximate_loss=True, extra_output=False) [source] Calibrate a denoising function and return optimal J-invariant version. The returned function is partially evaluated with optimal parameter values set for denoising the input image. Parameters imagendarray Input data to be denoised (converted using img_as_float). denoise_functionfunction Denoising function to be calibrated. denoise_parametersdict of list Ranges of parameters for denoise_function to be calibrated over. strideint, optional Stride used in masking procedure that converts denoise_function to J-invariance. approximate_lossbool, optional Whether to approximate the self-supervised loss used to evaluate the denoiser by only computing it on one masked version of the image. If False, the runtime will be a factor of stride**image.ndim longer. extra_outputbool, optional If True, return parameters and losses in addition to the calibrated denoising function Returns best_denoise_functionfunction The optimal J-invariant version of denoise_function. If extra_output is True, the following tuple is also returned: (parameters_tested, losses)tuple (list of dict, list of int) List of parameters tested for denoise_function, as a dictionary of kwargs Self-supervised loss for each set of parameters in parameters_tested. Notes The calibration procedure uses a self-supervised mean-square-error loss to evaluate the performance of J-invariant versions of denoise_function. The minimizer of the self-supervised loss is also the minimizer of the ground-truth loss (i.e., the true MSE error) [1]. The returned function can be used on the original noisy image, or other images with similar characteristics. Increasing the stride increases the performance of best_denoise_function at the expense of increasing its runtime. It has no effect on the runtime of the calibration. References 1 J. Batson & L. Royer. 
Noise2Self: Blind Denoising by Self-Supervision, International Conference on Machine Learning, p. 524-533 (2019). Examples >>> import numpy as np >>> from skimage import color, data >>> from skimage.restoration import calibrate_denoiser, denoise_wavelet >>> img = color.rgb2gray(data.astronaut()[:50, :50]) >>> noisy = img + 0.5 * img.std() * np.random.randn(*img.shape) >>> parameters = {'sigma': np.arange(0.1, 0.4, 0.02)} >>> denoising_function = calibrate_denoiser(noisy, denoise_wavelet, ... denoise_parameters=parameters) >>> denoised_img = denoising_function(noisy)
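With extra_output=True the tested parameter grid and the corresponding self-supervised losses are returned as well, which is handy for inspecting the calibration. A sketch, here using denoise_tv_chambolle and an arbitrary illustrative weight grid:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import calibrate_denoiser, denoise_tv_chambolle

img = img_as_float(data.camera()[:64, :64])
rng = np.random.default_rng(1)
noisy = img + 0.2 * rng.standard_normal(img.shape)

# Grid of candidate weights for denoise_tv_chambolle.
parameters = {'weight': [0.05, 0.1, 0.2]}
denoiser, (parameters_tested, losses) = calibrate_denoiser(
    noisy, denoise_tv_chambolle, denoise_parameters=parameters,
    extra_output=True)

# The calibrated function corresponds to the parameters with minimal loss.
best = parameters_tested[int(np.argmin(losses))]
denoised = denoiser(noisy)
```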
skimage.restoration.cycle_spin(x, func, max_shifts, shift_steps=1, num_workers=None, multichannel=False, func_kw={}) [source] Cycle spinning (repeatedly apply func to shifted versions of x). Parameters xarray-like Data for input to func. funcfunction A function to apply to circularly shifted versions of x. Should take x as its first argument. Any additional arguments can be supplied via func_kw. max_shiftsint or tuple If an integer, shifts in range(0, max_shifts+1) will be used along each axis of x. If a tuple, range(0, max_shifts[i]+1) will be used along axis i. shift_stepsint or tuple, optional The step size for the shifts applied along axis i is given by range(0, max_shifts[i]+1, shift_steps[i]). If an integer is provided, the same step size is used for all axes. num_workersint or None, optional The number of parallel threads to use during cycle spinning. If set to None, the full set of available cores is used. multichannelbool, optional Whether to treat the final axis as channels (no cycle shifts are performed over the channels axis). func_kwdict, optional Additional keyword arguments to supply to func. Returns avg_ynp.ndarray The output of func(x, **func_kw) averaged over all combinations of the specified axis shifts. Notes Cycle spinning was proposed as a way to approach shift-invariance by performing several circular shifts of a shift-variant transform [1]. For an n-level discrete wavelet transform, one may wish to perform all shifts up to max_shifts = 2**n - 1. In practice, much of the benefit can often be realized with only a small number of shifts per axis. For transforms such as the blockwise discrete cosine transform, one may wish to evaluate shifts up to the block size used by the transform. References 1 R.R. Coifman and D.L. Donoho. “Translation-Invariant De-Noising”. Wavelets and Statistics, Lecture Notes in Statistics, vol.103. Springer, New York, 1995, pp.125-150.
DOI:10.1007/978-1-4612-2544-7_9 Examples >>> import numpy as np >>> import skimage.data >>> from skimage import img_as_float >>> from skimage.restoration import denoise_wavelet, cycle_spin >>> img = img_as_float(skimage.data.camera()) >>> sigma = 0.1 >>> img = img + sigma * np.random.standard_normal(img.shape) >>> denoised = cycle_spin(img, func=denoise_wavelet, ... max_shifts=3)
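The procedure itself is simple to state: shift, transform, unshift, average. A plain-NumPy sketch of the 2-D case; the blocky helper is a made-up shift-variant transform used purely for illustration:

```python
import numpy as np

def cycle_spin_manual(x, func, max_shift):
    """Average func over all 2-D circular shifts up to max_shift per axis."""
    acc = np.zeros_like(x, dtype=float)
    count = 0
    for s0 in range(max_shift + 1):
        for s1 in range(max_shift + 1):
            shifted = np.roll(x, shift=(s0, s1), axis=(0, 1))
            result = func(shifted)
            # Undo the shift before accumulating.
            acc += np.roll(result, shift=(-s0, -s1), axis=(0, 1))
            count += 1
    return acc / count

# A deliberately shift-variant transform: round every other pixel.
def blocky(a):
    out = a.copy()
    out[::2, ::2] = a[::2, ::2].round()
    return out

x = np.linspace(0, 1, 16).reshape(4, 4)
y = cycle_spin_manual(x, blocky, max_shift=1)
```

For a shift-invariant func the averaging is a no-op; for shift-variant transforms it suppresses the artifacts tied to a particular alignment.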
skimage.restoration.denoise_bilateral(image, win_size=None, sigma_color=None, sigma_spatial=1, bins=10000, mode='constant', cval=0, multichannel=False) [source] Denoise image using bilateral filter. Parameters imagendarray, shape (M, N[, 3]) Input image, 2D grayscale or RGB. win_sizeint Window size for filtering. If win_size is not specified, it is calculated as max(5, 2 * ceil(3 * sigma_spatial) + 1). sigma_colorfloat Standard deviation for grayvalue/color distance (radiometric similarity). A larger value results in averaging of pixels with larger radiometric differences. Note that the image will be converted using the img_as_float function and thus the standard deviation is with respect to the range [0, 1]. If the value is None, the standard deviation of the image will be used. sigma_spatialfloat Standard deviation for range distance. A larger value results in averaging of pixels with larger spatial differences. binsint Number of discrete values for Gaussian weights of color filtering. A larger value results in improved accuracy. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’} How to handle values outside the image borders. See numpy.pad for details. cvalfloat Used in conjunction with mode ‘constant’, the value outside the image boundaries. multichannelbool Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. Returns denoisedndarray Denoised image. Notes This is an edge-preserving, denoising filter. It averages pixels based on their spatial closeness and radiometric similarity [1]. Spatial closeness is measured by the Gaussian function of the Euclidean distance between two pixels and a certain standard deviation (sigma_spatial). Radiometric similarity is measured by the Gaussian function of the Euclidean distance between two color values and a certain standard deviation (sigma_color). References 1 C. Tomasi and R. Manduchi.
“Bilateral Filtering for Gray and Color Images.” IEEE International Conference on Computer Vision (1998) 839-846. DOI:10.1109/ICCV.1998.710815 Examples >>> import numpy as np >>> from skimage import data, img_as_float >>> from skimage.restoration import denoise_bilateral >>> astro = img_as_float(data.astronaut()) >>> astro = astro[220:300, 220:320] >>> noisy = astro + 0.6 * astro.std() * np.random.random(astro.shape) >>> noisy = np.clip(noisy, 0, 1) >>> denoised = denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=15, ... multichannel=True)
skimage.restoration.denoise_nl_means(image, patch_size=7, patch_distance=11, h=0.1, multichannel=False, fast_mode=True, sigma=0.0, *, preserve_range=None) [source] Perform non-local means denoising on 2-D or 3-D grayscale images, and 2-D RGB images. Parameters image2D or 3D ndarray Input image to be denoised, which can be 2D or 3D, and grayscale or RGB (for 2D images only, see multichannel parameter). patch_sizeint, optional Size of patches used for denoising. patch_distanceint, optional Maximal distance in pixels to search for patches used for denoising. hfloat, optional Cut-off distance (in gray levels). The higher h, the more permissive one is in accepting patches. A higher h results in a smoother image, at the expense of blurring features. For a Gaussian noise of standard deviation sigma, a rule of thumb is to choose the value of h to be sigma or slightly less. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. fast_modebool, optional If True (default value), a fast version of the non-local means algorithm is used. If False, the original version of non-local means is used. See the Notes section for more details about the algorithms. sigmafloat, optional The standard deviation of the (Gaussian) noise. If provided, a more robust computation of patch weights is used that takes the expected noise variance into account (see Notes below). preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns resultndarray Denoised image, of same shape as image. Notes The non-local means algorithm is well suited for denoising images with specific textures.
The principle of the algorithm is to average the value of a given pixel with values of other pixels in a limited neighbourhood, provided that the patches centered on the other pixels are similar enough to the patch centered on the pixel of interest. In the original version of the algorithm [1], corresponding to fast=False, the computational complexity is: image.size * patch_size ** image.ndim * patch_distance ** image.ndim Hence, changing the size of patches or their maximal distance has a strong effect on computing times, especially for 3-D images. However, the default behavior corresponds to fast_mode=True, for which another version of non-local means [2] is used, corresponding to a complexity of: image.size * patch_distance ** image.ndim The computing time depends only weakly on the patch size, thanks to the computation of the integral of patch distances for a given shift, which reduces the number of operations [1]. Therefore, this algorithm executes faster than the classic algorithm (fast_mode=False), at the expense of using twice as much memory. This implementation has been proven to be more efficient compared to other alternatives, see e.g. [3]. Compared to the classic algorithm, all pixels of a patch contribute to the distance to another patch with the same weight, no matter their distance to the center of the patch. This coarser computation of the distance can result in a slightly poorer denoising performance. Moreover, for small images (images with a linear size that is only a few times the patch size), the classic algorithm can be faster due to boundary effects. The image is padded using the reflect mode of skimage.util.pad before denoising. If the noise standard deviation, sigma, is provided, a more robust computation of patch weights is used. Subtracting the known noise variance from the computed patch distances improves the estimates of patch similarity, giving a moderate improvement to denoising performance [4].
It was also mentioned as an option for the fast variant of the algorithm in [3]. When sigma is provided, a smaller h should typically be used to avoid oversmoothing. The optimal value for h depends on the image content and noise level, but a reasonable starting point is h = 0.8 * sigma when fast_mode is True, or h = 0.6 * sigma when fast_mode is False. References 1(1,2) A. Buades, B. Coll, & J-M. Morel. A non-local algorithm for image denoising. In CVPR 2005, Vol. 2, pp. 60-65, IEEE. DOI:10.1109/CVPR.2005.38 2 J. Darbon, A. Cunha, T.F. Chan, S. Osher, and G.J. Jensen, Fast nonlocal filtering applied to electron cryomicroscopy, in 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008, pp. 1331-1334. DOI:10.1109/ISBI.2008.4541250 3(1,2) Jacques Froment. Parameter-Free Fast Pixelwise Non-Local Means Denoising. Image Processing On Line, 2014, vol. 4, pp. 300-326. DOI:10.5201/ipol.2014.120 4 A. Buades, B. Coll, & J-M. Morel. Non-Local Means Denoising. Image Processing On Line, 2011, vol. 1, pp. 208-212. DOI:10.5201/ipol.2011.bcm_nlm Examples >>> import numpy as np >>> from skimage.restoration import denoise_nl_means >>> a = np.zeros((40, 40)) >>> a[10:-10, 10:-10] = 1. >>> a += 0.3 * np.random.randn(*a.shape) >>> denoised_a = denoise_nl_means(a, 7, 5, 0.1)
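In practice sigma is often obtained from estimate_sigma and then fed both to sigma and, scaled down, to h, following the rule of thumb above. A sketch; the 0.8 factor is the suggested starting point for fast_mode=True, not a fixed constant, and the patch sizes are arbitrary illustrative choices:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

img = img_as_float(data.camera()[128:192, 128:192])
rng = np.random.default_rng(2)
noisy = img + 0.1 * rng.standard_normal(img.shape)

# Estimate the noise standard deviation from the noisy image itself.
sigma_est = float(estimate_sigma(noisy))

# Pass sigma for the more robust patch-distance computation, and use a
# smaller h than the plain rule of thumb since sigma is accounted for.
denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                            h=0.8 * sigma_est, sigma=sigma_est,
                            fast_mode=True)
```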
skimage.restoration.denoise_tv_bregman(image, weight, max_iter=100, eps=0.001, isotropic=True, *, multichannel=False) [source] Perform total-variation denoising using split-Bregman optimization. Total-variation denoising (also known as total-variation regularization) tries to find an image with less total variation under the constraint of being similar to the input image, which is controlled by the regularization parameter ([1], [2], [3], [4]). Parameters imagendarray Input data to be denoised (converted using img_as_float). weightfloat Denoising weight. The smaller the weight, the more denoising (at the expense of less similarity to the input). The regularization parameter lambda is chosen as 2 * weight. epsfloat, optional Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when: SUM((u(n) - u(n-1))**2) < eps max_iterint, optional Maximal number of iterations used for the optimization. isotropicboolean, optional Switch between isotropic and anisotropic TV denoising. multichannelbool, optional Apply total-variation denoising separately for each channel. This option should be true for color images, otherwise the denoising is also applied in the channels dimension. Returns undarray Denoised image. References 1 https://en.wikipedia.org/wiki/Total_variation_denoising 2 Tom Goldstein and Stanley Osher, “The Split Bregman Method For L1 Regularized Problems”, ftp://ftp.math.ucla.edu/pub/camreport/cam08-29.pdf 3 Pascal Getreuer, “Rudin–Osher–Fatemi Total Variation Denoising using Split Bregman” in Image Processing On Line on 2012–05–19, https://www.ipol.im/pub/art/2012/g-tvd/article_lr.pdf 4 https://web.math.ucsb.edu/~cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf
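The docstring carries no example, so here is a minimal sketch with arbitrary illustrative weights; note that for denoise_tv_bregman a smaller weight means stronger denoising, the opposite of denoise_tv_chambolle's convention:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_bregman

img = img_as_float(data.camera()[:64, :64])
rng = np.random.default_rng(3)
noisy = img + 0.1 * rng.standard_normal(img.shape)

# Smaller weight -> stronger denoising (lambda = 2 * weight weighs the
# data-fidelity term, so lowering it favors the total-variation prior).
mild = denoise_tv_bregman(noisy, weight=10)
strong = denoise_tv_bregman(noisy, weight=1)
```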
skimage.restoration.denoise_tv_chambolle(image, weight=0.1, eps=0.0002, n_iter_max=200, multichannel=False) [source] Perform total-variation denoising on n-dimensional images. Parameters imagendarray of ints, uints or floats Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image. weightfloat, optional Denoising weight. The greater the weight, the more denoising (at the expense of fidelity to input). epsfloat, optional Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when: (E_(n-1) - E_n) < eps * E_0 n_iter_maxint, optional Maximal number of iterations used for the optimization. multichannelbool, optional Apply total-variation denoising separately for each channel. This option should be true for color images, otherwise the denoising is also applied in the channels dimension. Returns outndarray Denoised image. Notes Make sure to set the multichannel parameter appropriately for color images. The principle of total variation denoising, explained in https://en.wikipedia.org/wiki/Total_variation_denoising, is to minimize the total variation of the image, which can be roughly described as the integral of the norm of the image gradient. Total variation denoising tends to produce “cartoon-like” images, that is, piecewise-constant images. This code is an implementation of the algorithm of Rudin, Fatemi and Osher that was proposed by Chambolle in [1]. References 1 A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, Springer, 2004, 20, 89-97.
Examples 2D example on astronaut image: >>> import numpy as np >>> from skimage import color, data >>> from skimage.restoration import denoise_tv_chambolle >>> img = color.rgb2gray(data.astronaut())[:50, :50] >>> img += 0.5 * img.std() * np.random.randn(*img.shape) >>> denoised_img = denoise_tv_chambolle(img, weight=60) 3D example on synthetic data: >>> x, y, z = np.ogrid[0:20, 0:20, 0:20] >>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2 >>> mask = mask.astype(float) >>> mask += 0.2*np.random.randn(*mask.shape) >>> res = denoise_tv_chambolle(mask, weight=100)
skimage.restoration.denoise_wavelet(image, sigma=None, wavelet='db1', mode='soft', wavelet_levels=None, multichannel=False, convert2ycbcr=False, method='BayesShrink', rescale_sigma=True) [source] Perform wavelet denoising on an image. Parameters imagendarray ([M[, N[, …P]][, C]) of ints, uints or floats Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image. sigmafloat or list, optional The noise standard deviation used when computing the wavelet detail coefficient threshold(s). When None (default), the noise standard deviation is estimated via the method in [2]. waveletstring, optional The type of wavelet to use; it can be any of the options pywt.wavelist outputs. The default is ‘db1’. For example, wavelet can be any of {'db2', 'haar', 'sym9'} and many more. mode{‘soft’, ‘hard’}, optional An optional argument to choose the type of denoising performed. It is noted that choosing soft thresholding given additive noise finds the best approximation of the original image. wavelet_levelsint or None, optional The number of wavelet decomposition levels to use. The default is three less than the maximum number of possible decomposition levels. multichannelbool, optional Apply wavelet denoising separately for each channel (where channels correspond to the final axis of the array). convert2ycbcrbool, optional If True and multichannel is True, do the wavelet denoising in the YCbCr colorspace instead of the RGB color space. This typically results in better performance for RGB images. method{‘BayesShrink’, ‘VisuShrink’}, optional Thresholding method to be used. The currently supported methods are “BayesShrink” [1] and “VisuShrink” [2]. Defaults to “BayesShrink”. rescale_sigmabool, optional If False, no rescaling of the user-provided sigma will be performed. The default of True rescales sigma appropriately if the image is rescaled internally.
New in version 0.16: rescale_sigma was introduced in 0.16 Returns outndarray Denoised image. Notes The wavelet domain is a sparse representation of the image, and can be thought of similarly to the frequency domain of the Fourier transform. Sparse representations have most values zero or near-zero and truly random noise is (usually) represented by many small values in the wavelet domain. Setting all values below some threshold to 0 reduces the noise in the image, but larger thresholds also decrease the detail present in the image. If the input is 3D, this function performs wavelet denoising on each color plane separately. Changed in version 0.16: For floating point inputs, the original input range is maintained and there is no clipping applied to the output. Other input types will be converted to a floating point value in the range [-1, 1] or [0, 1] depending on the input image range. Unless rescale_sigma = False, any internal rescaling applied to the image will also be applied to sigma to maintain the same relative amplitude. Many wavelet coefficient thresholding approaches have been proposed. By default, denoise_wavelet applies BayesShrink, which is an adaptive thresholding method that computes separate thresholds for each wavelet sub-band as described in [1]. If method == "VisuShrink", a single “universal threshold” is applied to all wavelet detail coefficients as described in [2]. This threshold is designed to remove all Gaussian noise at a given sigma with high probability, but tends to produce images that appear overly smooth. Although any of the wavelets from PyWavelets can be selected, the thresholding methods assume an orthogonal wavelet transform and may not choose the threshold appropriately for biorthogonal wavelets. Orthogonal wavelets are desirable because white noise in the input remains white noise in the subbands. Biorthogonal wavelets lead to colored noise in the subbands. 
Additionally, the orthogonal wavelets in PyWavelets are orthonormal so that noise variance in the subbands remains identical to the noise variance of the input. Example orthogonal wavelets are the Daubechies (e.g. ‘db2’) or symmlet (e.g. ‘sym2’) families. References 1(1,2) Chang, S. Grace, Bin Yu, and Martin Vetterli. “Adaptive wavelet thresholding for image denoising and compression.” Image Processing, IEEE Transactions on 9.9 (2000): 1532-1546. DOI:10.1109/83.862633 2(1,2,3) D. L. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage.” Biometrika 81.3 (1994): 425-455. DOI:10.1093/biomet/81.3.425 Examples >>> import numpy as np >>> from skimage import color, data, img_as_float >>> from skimage.restoration import denoise_wavelet >>> img = img_as_float(data.astronaut()) >>> img = color.rgb2gray(img) >>> img += 0.1 * np.random.randn(*img.shape) >>> img = np.clip(img, 0, 1) >>> denoised_img = denoise_wavelet(img, sigma=0.1, rescale_sigma=True)
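A short sketch contrasting the two thresholding methods; VisuShrink needs a noise estimate, here taken from estimate_sigma (the noise level and crop are arbitrary illustrative choices):

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_wavelet, estimate_sigma

img = img_as_float(data.camera()[:64, :64])
rng = np.random.default_rng(4)
noisy = img + 0.1 * rng.standard_normal(img.shape)

# BayesShrink (default): an adaptive threshold per wavelet sub-band.
bayes = denoise_wavelet(noisy, method='BayesShrink', mode='soft',
                        rescale_sigma=True)

# VisuShrink: a single universal threshold derived from sigma; it tends
# to produce a smoother, sometimes over-smoothed, result.
sigma_est = estimate_sigma(noisy)
visu = denoise_wavelet(noisy, method='VisuShrink', mode='soft',
                       sigma=sigma_est, rescale_sigma=True)
```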
skimage.restoration.ellipsoid_kernel(shape, intensity) [source] Create an ellipsoid kernel for restoration.rolling_ball. Parameters shapearraylike Lengths of the principal axes of the ellipsoid (excluding the intensity axis). The kernel needs to have the same dimensionality as the image it will be applied to. intensityint Length of the intensity axis of the ellipsoid. Returns kernelndarray The kernel containing the surface intensity of the top half of the ellipsoid. See also rolling_ball
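Since the docstring has no example, here is a minimal sketch of pairing ellipsoid_kernel with rolling_ball; the specific shape, intensity, and crop are arbitrary choices for illustration:

```python
import numpy as np
from skimage import data
from skimage.restoration import ellipsoid_kernel, rolling_ball

image = data.coins()[:100, :100]

# A wide but flat ellipsoid: decouples the spatial extent of the kernel
# from how far it reaches along the intensity axis, unlike a ball whose
# radius fixes both at once.
kernel = ellipsoid_kernel((51, 51), 25)
background = rolling_ball(image, kernel=kernel)
filtered = image - background
```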
skimage.restoration.estimate_sigma(image, average_sigmas=False, multichannel=False) [source] Robust wavelet-based estimator of the (Gaussian) noise standard deviation. Parameters imagendarray Image for which to estimate the noise standard deviation. average_sigmasbool, optional If True, average the channel estimates of sigma. Otherwise return a list of sigmas corresponding to each channel. multichannelbool Estimate sigma separately for each channel. Returns sigmafloat or list Estimated noise standard deviation(s). If multichannel is True and average_sigmas is False, a separate noise estimate for each channel is returned. Otherwise, the average of the individual channel estimates is returned. Notes This function assumes the noise follows a Gaussian distribution. The estimation algorithm is based on the median absolute deviation of the wavelet detail coefficients as described in section 4.2 of [1]. References 1 D. L. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage.” Biometrika 81.3 (1994): 425-455. DOI:10.1093/biomet/81.3.425 Examples >>> import numpy as np >>> import skimage.data >>> from skimage import img_as_float >>> from skimage.restoration import estimate_sigma >>> img = img_as_float(skimage.data.camera()) >>> sigma = 0.1 >>> img = img + sigma * np.random.standard_normal(img.shape) >>> sigma_hat = estimate_sigma(img, multichannel=False)
skimage.restoration.inpaint_biharmonic(image, mask, multichannel=False) [source] Inpaint masked points in image with biharmonic equations. Parameters image(M[, N[, …, P]][, C]) ndarray Input image. mask(M[, N[, …, P]]) ndarray Array of pixels to be inpainted. It has to be the same shape as one of the image channels. Unknown pixels have to be represented with 1, known pixels with 0. multichannelboolean, optional If True, the last image dimension is considered as a color channel, otherwise as spatial. Returns out(M[, N[, …, P]][, C]) ndarray Input image with masked pixels inpainted. References 1 N.S.Hoang, S.B.Damelin, “On surface completion and image inpainting by biharmonic functions: numerical aspects”, arXiv:1707.06567 2 C. K. Chui and H. N. Mhaskar, MRA Contextual-Recovery Extension of Smooth Functions on Manifolds, Appl. and Comp. Harmonic Anal., 28 (2010), 104-113, DOI:10.1016/j.acha.2009.04.004 Examples >>> import numpy as np >>> from skimage.restoration import inpaint_biharmonic >>> img = np.tile(np.square(np.linspace(0, 1, 5)), (5, 1)) >>> mask = np.zeros_like(img) >>> mask[2, 2:] = 1 >>> mask[1, 3:] = 1 >>> mask[0, 4:] = 1 >>> out = inpaint_biharmonic(img, mask)
skimage.restoration.richardson_lucy(image, psf, iterations=50, clip=True, filter_epsilon=None) [source] Richardson-Lucy deconvolution. Parameters imagendarray Input degraded image (can be N dimensional). psfndarray The point spread function. iterationsint, optional Number of iterations. This parameter plays the role of regularisation. clipboolean, optional True by default. If true, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. filter_epsilon: float, optional Value below which intermediate results become 0 to avoid division by small numbers. Returns im_deconvndarray The deconvolved image. References 1 https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution Examples >>> import numpy as np >>> from skimage import img_as_float, data, restoration >>> camera = img_as_float(data.camera()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> camera = convolve2d(camera, psf, 'same') >>> camera += 0.1 * camera.std() * np.random.standard_normal(camera.shape) >>> deconvolved = restoration.richardson_lucy(camera, psf, 5)
skimage.restoration.rolling_ball(image, *, radius=100, kernel=None, nansafe=False, num_threads=None) [source] Estimate background intensity by rolling/translating a kernel. This rolling ball algorithm estimates background intensity for an n-dimensional image in case of uneven exposure. It is a generalization of the frequently used rolling ball algorithm [1]. Parameters imagendarray The image to be filtered. radiusint, optional Radius of a ball-shaped kernel to be rolled/translated in the image. Used if kernel is None. kernelndarray, optional The kernel to be rolled/translated in the image. It must have the same number of dimensions as image. Each entry gives the intensity (height) of the kernel surface at that position. nansafebool, optional If False (default), assumes that none of the values in image are np.nan, and uses a faster implementation. num_threadsint, optional The maximum number of threads to use. If None, use the OpenMP default value; typically equal to the maximum number of virtual cores. Note: This is an upper limit to the number of threads. The exact number is determined by the system’s OpenMP library. Returns backgroundndarray The estimated background of the image. Notes For the pixel that has its background intensity estimated (without loss of generality at center), the rolling ball method centers the kernel under it and raises the kernel until the surface touches the image umbra at some pos=(y,x). The background intensity is then estimated using the image intensity at that position (image[pos]) plus the difference kernel[center] - kernel[pos]. This algorithm assumes that dark pixels correspond to the background. If you have a bright background, invert the image before passing it to the function, e.g., using skimage.util.invert. See the gallery example for details. This algorithm is sensitive to noise (in particular salt-and-pepper noise). If this is a problem in your image, you can apply mild Gaussian smoothing before passing the image to this function. 
References 1 Sternberg, Stanley R. “Biomedical image processing.” Computer 1 (1983): 22-34. DOI:10.1109/MC.1983.1654163 Examples >>> import numpy as np >>> from skimage import data >>> from skimage.restoration import rolling_ball >>> image = data.coins() >>> background = rolling_ball(image) >>> filtered_image = image - background >>> import numpy as np >>> from skimage import data >>> from skimage.restoration import rolling_ball, ellipsoid_kernel >>> image = data.coins() >>> kernel = ellipsoid_kernel((101, 101), 75) >>> background = rolling_ball(image, kernel=kernel) >>> filtered_image = image - background
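For a bright background, the Notes above recommend inverting the image first. A sketch of that recipe (the crop and radius are arbitrary choices to keep the runtime short; the "bright" image is simulated by inverting the coins sample):

```python
import numpy as np
from skimage import data, util
from skimage.restoration import rolling_ball

# Simulate a bright-background image by inverting the coins sample.
image = util.invert(data.coins()[:100, :100])

# Recipe from the Notes: invert, estimate the background on the
# inverted image, subtract it, then invert the result back.
image_inverted = util.invert(image)
background_inverted = rolling_ball(image_inverted, radius=30)
filtered_inverted = image_inverted - background_inverted
filtered = util.invert(filtered_inverted)
```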
skimage.restoration.unsupervised_wiener(image, psf, reg=None, user_params=None, is_real=True, clip=True) [source] Unsupervised Wiener-Hunt deconvolution. Return the deconvolution with a Wiener-Hunt approach, where the hyperparameters are automatically estimated. The algorithm is a stochastic iterative process (Gibbs sampler) described in the reference below. See also the wiener function. Parameters image(M, N) ndarray The input degraded image. psfndarray The impulse response (input image’s space) or the transfer function (Fourier space). Both are accepted. The transfer function is automatically recognized as being complex (np.iscomplexobj(psf)). regndarray, optional The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. user_paramsdict, optional Dictionary of parameters for the Gibbs sampler. See below. clipboolean, optional True by default. If true, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. Returns x_postmean(M, N) ndarray The deconvolved image (the posterior mean). chainsdict The keys noise and prior contain the chain list of noise and prior precision respectively. Other Parameters The keys of ``user_params`` are: thresholdfloat The stopping criterion: the norm of the difference between two successive approximated solutions (empirical mean of object samples, see Notes section). 1e-4 by default. burninint The number of samples to ignore before starting computation of the mean. 15 by default. min_iterint The minimum number of iterations. 30 by default. max_iterint The maximum number of iterations if threshold is not satisfied. 200 by default. callbackcallable (None by default) A user-provided callable to which the current image sample is passed at each iteration, for whatever purpose. The user can store the sample, or compute other moments than the mean. It has no influence on the algorithm execution and is only for inspection. 
Notes The estimated image is defined as the posterior mean of a probability law (from a Bayesian analysis). The mean is defined as a sum over all the possible images weighted by their respective probability. Given the size of the problem, the exact sum is not tractable. This algorithm uses MCMC to draw images under the posterior law. The practical idea is to only draw highly probable images, since they have the biggest contribution to the mean. Conversely, the less probable images are drawn less often, since their contribution is low. Finally, the empirical mean of these samples gives an estimation of the mean, which would become exact with an infinite sample set. References 1 François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010) https://www.osapublishing.org/josaa/abstract.cfm?URI=josaa-27-7-1593 http://research.orieux.fr/files/papers/OGR-JOSA10.pdf Examples >>> import numpy as np >>> from skimage import color, data, restoration >>> img = color.rgb2gray(data.astronaut()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> img = convolve2d(img, psf, 'same') >>> img += 0.1 * img.std() * np.random.standard_normal(img.shape) >>> deconvolved_img = restoration.unsupervised_wiener(img, psf)
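Note that, as listed under Returns, the function yields a tuple of the posterior-mean image and the sampling chains. A sketch that unpacks both (the downsampling is only an assumption to keep the runtime short):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

rng = np.random.default_rng(0)
img = color.rgb2gray(data.astronaut())[::4, ::4]  # 128x128 for speed
psf = np.ones((5, 5)) / 25
img = convolve2d(img, psf, 'same')
img += 0.1 * img.std() * rng.standard_normal(img.shape)

# Two return values: the deconvolved image (posterior mean) and the
# Gibbs-sampler chains of the noise and prior precisions.
deconvolved, chains = restoration.unsupervised_wiener(img, psf)
```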
skimage.restoration.unwrap_phase(image, wrap_around=False, seed=None) [source] Recover the original from a wrapped phase image. From an image wrapped to lie in the interval [-pi, pi), recover the original, unwrapped image. Parameters image1D, 2D or 3D ndarray of floats, optionally a masked array The values should be in the range [-pi, pi). If a masked array is provided, the masked entries will not be changed, and their values will not be used to guide the unwrapping of neighboring, unmasked values. Masked 1D arrays are not allowed, and will raise a ValueError. wrap_aroundbool or sequence of bool, optional When an element of the sequence is True, the unwrapping process will regard the edges along the corresponding axis of the image to be connected and use this connectivity to guide the phase unwrapping process. If only a single boolean is given, it will apply to all axes. Wrap around is not supported for 1D arrays. seedint, optional Unwrapping 2D or 3D images uses random initialization. This sets the seed of the PRNG to achieve deterministic behavior. Returns image_unwrappedarray_like, double Unwrapped image of the same shape as the input. If the input image was a masked array, the mask will be preserved. Raises ValueError If called with a masked 1D array or called with a 1D array and wrap_around=True. References 1 Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor, and Munther A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path”, Journal Applied Optics, Vol. 41, No. 35 (2002) 7437, 2 Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., “Fast three-dimensional phase-unwrapping algorithm based on sorting by reliability following a non-continuous path. In W. Osten, C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32–40, International Society for Optics and Photonics. 
Examples >>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j] >>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2)) >>> image_wrapped = np.angle(np.exp(1j * image)) >>> image_unwrapped = unwrap_phase(image_wrapped) >>> np.std(image_unwrapped - image) < 1e-6 # A constant offset is normal True
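The masked-array behaviour described under Parameters can be exercised directly: masked entries are left unchanged, do not guide the unwrapping of their neighbours, and the mask is preserved in the output. A short sketch (the phase surface and mask placement are illustrative choices):

```python
import numpy as np
from skimage.restoration import unwrap_phase

y, x = np.ogrid[-1:1:64j, -1:1:64j]
image = 6 * np.pi * np.exp(-(x**2 + y**2))
wrapped = np.angle(np.exp(1j * image))  # wrap into [-pi, pi)

mask = np.zeros(image.shape, dtype=bool)
mask[24:40, 24:40] = True  # these entries will not guide the unwrapping

unwrapped = unwrap_phase(np.ma.array(wrapped, mask=mask))
```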
skimage.restoration.wiener(image, psf, balance, reg=None, is_real=True, clip=True) [source] Wiener-Hunt deconvolution. Return the deconvolution with a Wiener-Hunt approach (i.e. with Fourier diagonalisation). Parameters image(M, N) ndarray Input degraded image. psfndarray Point Spread Function. This is assumed to be the impulse response (input image space) if the data-type is real, or the transfer function (Fourier space) if the data-type is complex. There are no constraints on the shape of the impulse response. The transfer function must be of shape (M, N) if is_real is True, (M, N // 2 + 1) otherwise (see np.fft.rfftn). balancefloat The regularisation parameter value that tunes the balance between the data adequacy that improves frequency restoration and the prior adequacy that reduces frequency restoration (to avoid noise artifacts). regndarray, optional The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. Shape constraint is the same as for the psf parameter. is_realboolean, optional True by default. Specify if psf and reg are provided with the Hermitian hypothesis, that is only half of the frequency plane is provided (due to the redundancy of the Fourier transform of a real signal). It applies only if psf and/or reg are provided as transfer functions. For the Hermitian property see the uft module or np.fft.rfftn. clipboolean, optional True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility. Returns im_deconv(M, N) ndarray The deconvolved image. Notes This function applies the Wiener filter to a noisy and degraded image by an impulse response (or PSF). 
If the data model is \[y = Hx + n\] where \(n\) is noise, \(H\) the PSF and \(x\) the unknown original image, the Wiener filter is \[\hat x = F^\dagger (|\Lambda_H|^2 + \lambda |\Lambda_D|^2)^{-1} \Lambda_H^\dagger F y\] where \(F\) and \(F^\dagger\) are the Fourier and inverse Fourier transforms respectively, \(\Lambda_H\) the transfer function (or the Fourier transform of the PSF, see [Hunt] below) and \(\Lambda_D\) the filter to penalize the restored image frequencies (Laplacian by default, that is penalization of high frequency). The parameter \(\lambda\) tunes the balance between the data (that tends to increase high frequency, even those coming from noise), and the regularization. These methods are then specific to a prior model. Consequently, the application or the true image nature must correspond to the prior model. By default, the prior model (Laplacian) introduces image smoothness or pixel correlation. It can also be interpreted as high-frequency penalization to compensate for the instability of the solution with respect to the data (sometimes called noise amplification or “explosive” solution). Finally, the use of Fourier space implies a circulant property of \(H\), see [Hunt]. References 1 François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010) https://www.osapublishing.org/josaa/abstract.cfm?URI=josaa-27-7-1593 http://research.orieux.fr/files/papers/OGR-JOSA10.pdf 2 B. R. Hunt “A matrix theory proof of the discrete convolution theorem”, IEEE Trans. on Audio and Electroacoustics, vol. au-19, no. 4, pp. 285-288, dec. 
1971. Examples >>> import numpy as np >>> from skimage import color, data, restoration >>> img = color.rgb2gray(data.astronaut()) >>> from scipy.signal import convolve2d >>> psf = np.ones((5, 5)) / 25 >>> img = convolve2d(img, psf, 'same') >>> img += 0.1 * img.std() * np.random.standard_normal(img.shape) >>> deconvolved_img = restoration.wiener(img, psf, 1100)
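The effect of balance follows from the formula in the Notes: a larger \(\lambda\) shrinks every non-DC frequency more heavily. A sketch comparing two settings (both balance values and the downsampling are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

rng = np.random.default_rng(0)
img = color.rgb2gray(data.astronaut())[::4, ::4]  # downsample for speed
psf = np.ones((5, 5)) / 25
degraded = convolve2d(img, psf, 'same')
degraded += 0.1 * degraded.std() * rng.standard_normal(degraded.shape)

# Small balance: trust the data (sharper, but noisier restoration).
# Large balance: trust the Laplacian prior (smoother restoration).
sharp = restoration.wiener(degraded, psf, balance=0.05)
smooth = restoration.wiener(degraded, psf, balance=50)
```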
Module: segmentation skimage.segmentation.active_contour(image, snake) Active contour model. skimage.segmentation.chan_vese(image[, mu, …]) Chan-Vese segmentation algorithm. skimage.segmentation.checkerboard_level_set(…) Create a checkerboard level set with binary values. skimage.segmentation.circle_level_set(…[, …]) Create a circle level set with binary values. skimage.segmentation.clear_border(labels[, …]) Clear objects connected to the label image border. skimage.segmentation.disk_level_set(…[, …]) Create a disk level set with binary values. skimage.segmentation.expand_labels(label_image) Expand labels in label image by distance pixels without overlapping. skimage.segmentation.felzenszwalb(image[, …]) Computes Felzenszwalb’s efficient graph based image segmentation. skimage.segmentation.find_boundaries(label_img) Return bool array where boundaries between labeled regions are True. skimage.segmentation.flood(image, seed_point, *) Mask corresponding to a flood fill. skimage.segmentation.flood_fill(image, …) Perform flood filling on an image. skimage.segmentation.inverse_gaussian_gradient(image) Inverse of gradient magnitude. skimage.segmentation.join_segmentations(s1, s2) Return the join of the two input segmentations. skimage.segmentation.mark_boundaries(image, …) Return image with boundaries between labeled regions highlighted. skimage.segmentation.morphological_chan_vese(…) Morphological Active Contours without Edges (MorphACWE) skimage.segmentation.morphological_geodesic_active_contour(…) Morphological Geodesic Active Contours (MorphGAC). skimage.segmentation.quickshift(image[, …]) Segments image using quickshift clustering in Color-(x,y) space. skimage.segmentation.random_walker(data, labels) Random walker algorithm for segmentation from markers. skimage.segmentation.relabel_sequential(…) Relabel arbitrary labels to {offset, … skimage.segmentation.slic(image[, …]) Segments image using k-means clustering in Color-(x,y,z) space. 
skimage.segmentation.watershed(image[, …]) Find watershed basins in image flooded from given markers. active_contour skimage.segmentation.active_contour(image, snake, alpha=0.01, beta=0.1, w_line=0, w_edge=1, gamma=0.01, max_px_move=1.0, max_iterations=2500, convergence=0.1, *, boundary_condition='periodic', coordinates='rc') [source] Active contour model. Active contours by fitting snakes to features of images. Supports single and multichannel 2D images. Snakes can be periodic (for segmentation) or have fixed and/or free ends. The output snake has the same length as the input boundary. As the number of points is constant, make sure that the initial snake has enough points to capture the details of the final contour. Parameters image(N, M) or (N, M, 3) ndarray Input image. snake(N, 2) ndarray Initial snake coordinates. For periodic boundary conditions, endpoints must not be duplicated. alphafloat, optional Snake length shape parameter. Higher values make the snake contract faster. betafloat, optional Snake smoothness shape parameter. Higher values make the snake smoother. w_linefloat, optional Controls attraction to brightness. Use negative values to attract toward dark regions. w_edgefloat, optional Controls attraction to edges. Use negative values to repel the snake from edges. gammafloat, optional Explicit time stepping parameter. max_px_movefloat, optional Maximum pixel distance to move per iteration. max_iterationsint, optional Maximum iterations to optimize snake shape. convergencefloat, optional Convergence criteria. boundary_conditionstring, optional Boundary conditions for the contour. Can be one of ‘periodic’, ‘free’, ‘fixed’, ‘free-fixed’, or ‘fixed-free’. ‘periodic’ attaches the two ends of the snake, ‘fixed’ holds the end-points in place, and ‘free’ allows free movement of the ends. ‘fixed’ and ‘free’ can be combined by passing ‘fixed-free’ or ‘free-fixed’. Passing ‘fixed-fixed’ or ‘free-free’ yields the same behaviour as ‘fixed’ and ‘free’, respectively. 
coordinates{‘rc’}, optional This option remains for compatibility purposes only and has no effect. It was introduced in 0.16 with the 'xy' option, but since 0.18, only the 'rc' option is valid. Coordinates must be set in a row-column format. Returns snake(N, 2) ndarray Optimised snake, same shape as the input parameter. References 1 Kass, M.; Witkin, A.; Terzopoulos, D. “Snakes: Active contour models”. International Journal of Computer Vision 1 (4): 321 (1988). DOI:10.1007/BF00133570 Examples >>> import numpy as np >>> from skimage.segmentation import active_contour >>> from skimage.draw import circle_perimeter >>> from skimage.filters import gaussian Create and smooth image: >>> img = np.zeros((100, 100)) >>> rr, cc = circle_perimeter(35, 45, 25) >>> img[rr, cc] = 1 >>> img = gaussian(img, 2) Initialize spline: >>> s = np.linspace(0, 2*np.pi, 100) >>> init = 50 * np.array([np.sin(s), np.cos(s)]).T + 50 Fit spline to image: >>> snake = active_contour(img, init, w_edge=0, w_line=1, coordinates='rc') >>> dist = np.sqrt((45-snake[:, 0])**2 + (35-snake[:, 1])**2) >>> int(np.mean(dist)) 25 chan_vese skimage.segmentation.chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0, tol=0.001, max_iter=500, dt=0.5, init_level_set='checkerboard', extended_output=False) [source] Chan-Vese segmentation algorithm. Active contour model by evolving a level set. Can be used to segment objects without clearly defined boundaries. Parameters image(M, N) ndarray Grayscale image to be segmented. mufloat, optional ‘edge length’ weight parameter. Higher mu values will produce a ‘round’ edge, while values closer to zero will detect smaller objects. lambda1float, optional ‘difference from average’ weight parameter for the output region with value ‘True’. If it is lower than lambda2, this region will have a larger range of values than the other. lambda2float, optional ‘difference from average’ weight parameter for the output region with value ‘False’. If it is lower than lambda1, this region will have a larger range of values than the other. 
tolfloat, positive, optional Level set variation tolerance between iterations. If the L2 norm difference between the level sets of successive iterations normalized by the area of the image is below this value, the algorithm will assume that the solution was reached. max_iteruint, optional Maximum number of iterations allowed before the algorithm interrupts itself. dtfloat, optional A multiplication factor applied at calculations for each step, serves to accelerate the algorithm. While higher values may speed up the algorithm, they may also lead to convergence problems. init_level_setstr or (M, N) ndarray, optional Defines the starting level set used by the algorithm. If a string is given, a level set that matches the image size will automatically be generated. Alternatively, it is possible to define a custom level set, which should be an array of float values, with the same shape as ‘image’. Accepted string values are as follows. ‘checkerboard’ the starting level set is defined as sin(x/5*pi)*sin(y/5*pi), where x and y are pixel coordinates. This level set has fast convergence, but may fail to detect implicit edges. ‘disk’ the starting level set is defined as the opposite of the distance from the center of the image minus half of the minimum value between image width and image height. This is somewhat slower, but is more likely to properly detect implicit edges. ‘small disk’ the starting level set is defined as the opposite of the distance from the center of the image minus a quarter of the minimum value between image width and image height. extended_outputbool, optional If set to True, the return value will be a tuple containing the three return values (see below). If set to False (the default), only the ‘segmentation’ array will be returned. Returns segmentation(M, N) ndarray, bool Segmentation produced by the algorithm. phi(M, N) ndarray of floats Final level set computed by the algorithm. 
energieslist of floats Shows the evolution of the ‘energy’ for each step of the algorithm. This should allow one to check whether the algorithm converged. Notes The Chan-Vese Algorithm is designed to segment objects without clearly defined boundaries. This algorithm is based on level sets that are evolved iteratively to minimize an energy, which is defined by weighted values corresponding to the sum of differences in intensity from the average value outside the segmented region, the sum of differences from the average value inside the segmented region, and a term which is dependent on the length of the boundary of the segmented region. This algorithm was first proposed by Tony Chan and Luminita Vese, in a publication entitled “An Active Contour Model Without Edges” [1]. This implementation of the algorithm is somewhat simplified in the sense that the area factor ‘nu’ described in the original paper is not implemented, and is only suitable for grayscale images. Typical values for lambda1 and lambda2 are 1. If the ‘background’ is very different from the segmented object in terms of distribution (for example, a uniform black image with figures of varying intensity), then these values should be different from each other. Typical values for mu are between 0 and 1, though higher values can be used when dealing with shapes with very ill-defined contours. The ‘energy’ which this algorithm tries to minimize is defined as the sum of the squared differences from the average within each region, weighted by the ‘lambda’ factors, to which is added the length of the contour multiplied by the ‘mu’ factor. Supports 2D grayscale images only, and does not implement the area term described in the original article. References 1 An Active Contour Model without Edges, Tony Chan and Luminita Vese, Scale-Space Theories in Computer Vision, 1999, DOI:10.1007/3-540-48236-9_13 2 Chan-Vese Segmentation, Pascal Getreuer, Image Processing On Line, 2 (2012), pp. 
214-224, DOI:10.5201/ipol.2012.g-cv 3 The Chan-Vese Algorithm - Project Report, Rami Cohen, 2011 arXiv:1107.2782 checkerboard_level_set skimage.segmentation.checkerboard_level_set(image_shape, square_size=5) [source] Create a checkerboard level set with binary values. Parameters image_shapetuple of positive integers Shape of the image. square_sizeint, optional Size of the squares of the checkerboard. It defaults to 5. Returns outarray with shape image_shape Binary level set of the checkerboard. See also circle_level_set circle_level_set skimage.segmentation.circle_level_set(image_shape, center=None, radius=None) [source] Create a circle level set with binary values. Parameters image_shapetuple of positive integers Shape of the image centertuple of positive integers, optional Coordinates of the center of the circle given in (row, column). If not given, it defaults to the center of the image. radiusfloat, optional Radius of the circle. If not given, it is set to 75% of the smallest image dimension. Returns outarray with shape image_shape Binary level set of the circle with the given radius and center. Warns Deprecated: Deprecated since version 0.17: This function is deprecated and will be removed in scikit-image 0.19. Please use the function named disk_level_set instead. See also checkerboard_level_set clear_border skimage.segmentation.clear_border(labels, buffer_size=0, bgval=0, in_place=False, mask=None) [source] Clear objects connected to the label image border. Parameters labels(M[, N[, …, P]]) array of int or bool Imaging data labels. buffer_sizeint, optional The width of the border examined. By default, only objects that touch the outside of the image are removed. bgvalfloat or int, optional Cleared objects are set to this value. in_placebool, optional Whether or not to manipulate the labels array in-place. maskndarray of bool, same shape as image, optional. Image data mask. Objects in labels image overlapping with False pixels of mask will be removed. 
If defined, the argument buffer_size will be ignored. Returns out(M[, N[, …, P]]) array Imaging data labels with cleared borders. Examples >>> import numpy as np >>> from skimage.segmentation import clear_border >>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 0], ... [1, 1, 0, 0, 1, 0, 0, 1, 0], ... [1, 1, 0, 1, 0, 1, 0, 0, 0], ... [0, 0, 0, 1, 1, 1, 1, 0, 0], ... [0, 1, 1, 1, 1, 1, 1, 1, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> clear_border(labels) array([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> mask = np.array([[0, 0, 1, 1, 1, 1, 1, 1, 1], ... [0, 0, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1]]).astype(bool) >>> clear_border(labels, mask=mask) array([[0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0, 0, 1, 0], [0, 0, 0, 1, 0, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) disk_level_set skimage.segmentation.disk_level_set(image_shape, *, center=None, radius=None) [source] Create a disk level set with binary values. Parameters image_shapetuple of positive integers Shape of the image centertuple of positive integers, optional Coordinates of the center of the disk given in (row, column). If not given, it defaults to the center of the image. radiusfloat, optional Radius of the disk. If not given, it is set to 75% of the smallest image dimension. Returns outarray with shape image_shape Binary level set of the disk with the given radius and center. See also checkerboard_level_set expand_labels skimage.segmentation.expand_labels(label_image, distance=1) [source] Expand labels in label image by distance pixels without overlapping. 
Given a label image, expand_labels grows label regions (connected components) outwards by up to distance pixels without overflowing into neighboring regions. More specifically, each background pixel that is within Euclidean distance of <= distance pixels of a connected component is assigned the label of that connected component. Where multiple connected components are within distance pixels of a background pixel, the label value of the closest connected component will be assigned (see Notes for the case of multiple labels at equal distance). Parameters label_imagendarray of dtype int label image distancefloat Euclidean distance in pixels by which to grow the labels. Default is one. Returns enlarged_labelsndarray of dtype int Labeled array, where all connected regions have been enlarged See also skimage.measure.label(), skimage.segmentation.watershed(), skimage.morphology.dilation() Notes Where labels are spaced more than distance pixels apart, this is equivalent to a morphological dilation with a disc or hyperball of radius distance. However, in contrast to a morphological dilation, expand_labels will not expand a label region into a neighboring region. This implementation of expand_labels is derived from CellProfiler [1], where it is known as module “IdentifySecondaryObjects (Distance-N)” [2]. There is an important edge case when a pixel has the same distance to multiple regions, as it is not defined which region expands into that space. Here, the exact behavior depends on the upstream implementation of scipy.ndimage.distance_transform_edt. 
References 1 https://cellprofiler.org 2 https://github.com/CellProfiler/CellProfiler/blob/082930ea95add7b72243a4fa3d39ae5145995e9c/cellprofiler/modules/identifysecondaryobjects.py#L559 Examples >>> labels = np.array([0, 1, 0, 0, 0, 0, 2]) >>> expand_labels(labels, distance=1) array([1, 1, 1, 0, 0, 2, 2]) Labels will not overwrite each other: >>> expand_labels(labels, distance=3) array([1, 1, 1, 1, 2, 2, 2]) In case of ties, behavior is undefined, but currently resolves to the label closest to (0,) * ndim in lexicographical order. >>> labels_tied = np.array([0, 1, 0, 2, 0]) >>> expand_labels(labels_tied, 1) array([1, 1, 1, 2, 2]) >>> labels2d = np.array( ... [[0, 1, 0, 0], ... [2, 0, 0, 0], ... [0, 3, 0, 0]] ... ) >>> expand_labels(labels2d, 1) array([[2, 1, 1, 0], [2, 2, 0, 0], [2, 3, 3, 0]]) felzenszwalb skimage.segmentation.felzenszwalb(image, scale=1, sigma=0.8, min_size=20, multichannel=True) [source] Computes Felzenszwalb’s efficient graph based image segmentation. Produces an oversegmentation of a multichannel (i.e. RGB) image using a fast, minimum spanning tree based clustering on the image grid. The parameter scale sets an observation level. Higher scale means fewer and larger segments. sigma is the width (standard deviation) of the Gaussian kernel used for smoothing the image prior to segmentation. The number of produced segments as well as their size can only be controlled indirectly through scale. Segment size within an image can vary greatly depending on local contrast. For RGB images, the algorithm uses the Euclidean distance between pixels in color space. Parameters image(width, height, 3) or (width, height) ndarray Input image. scalefloat Free parameter. Higher means larger clusters. sigmafloat Width (standard deviation) of Gaussian kernel used in preprocessing. min_sizeint Minimum component size. Enforced using postprocessing. multichannelbool, optional (default: True) Whether the last axis of the image is to be interpreted as multiple channels. 
A value of False, for a 3D image, is not currently supported. Returns segment_mask(width, height) ndarray Integer mask indicating segment labels. Notes The k parameter used in the original paper is renamed to scale here. References 1 Efficient graph-based image segmentation, Felzenszwalb, P.F. and Huttenlocher, D.P. International Journal of Computer Vision, 2004 Examples >>> from skimage.segmentation import felzenszwalb >>> from skimage.data import coffee >>> img = coffee() >>> segments = felzenszwalb(img, scale=3.0, sigma=0.95, min_size=5) find_boundaries skimage.segmentation.find_boundaries(label_img, connectivity=1, mode='thick', background=0) [source] Return bool array where boundaries between labeled regions are True. Parameters label_imgarray of int or bool An array in which different regions are labeled with either different integers or boolean values. connectivityint in {1, …, label_img.ndim}, optional A pixel is considered a boundary pixel if any of its neighbors has a different label. connectivity controls which pixels are considered neighbors. A connectivity of 1 (default) means pixels sharing an edge (in 2D) or a face (in 3D) will be considered neighbors. A connectivity of label_img.ndim means pixels sharing a corner will be considered neighbors. modestring in {‘thick’, ‘inner’, ‘outer’, ‘subpixel’} How to mark the boundaries: thick: any pixel not completely surrounded by pixels of the same label (defined by connectivity) is marked as a boundary. This results in boundaries that are 2 pixels thick. inner: outline the pixels just inside of objects, leaving background pixels untouched. outer: outline pixels in the background around object boundaries. When two objects touch, their boundary is also marked. subpixel: return a doubled image, with pixels between the original pixels marked as boundary where appropriate. backgroundint, optional For modes ‘inner’ and ‘outer’, a definition of a background label is required. See mode for descriptions of these two. 
Returns boundariesarray of bool, same shape as label_img A bool image where True represents a boundary pixel. For mode equal to ‘subpixel’, boundaries.shape[i] is equal to 2 * label_img.shape[i] - 1 for all i (a pixel is inserted in between all other pairs of pixels). Examples >>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0], ... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0], ... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0], ... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0], ... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8) >>> find_boundaries(labels, mode='thick').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 1, 0, 1, 1, 0], [0, 1, 1, 0, 1, 1, 0, 1, 1, 0], [0, 1, 1, 1, 1, 1, 0, 1, 1, 0], [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> find_boundaries(labels, mode='inner').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 0, 1, 0, 0], [0, 0, 1, 0, 1, 1, 0, 1, 0, 0], [0, 0, 1, 1, 1, 1, 0, 1, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> find_boundaries(labels, mode='outer').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 0, 1, 0], [0, 0, 1, 1, 1, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> labels_small = labels[::2, ::3] >>> labels_small array([[0, 0, 0, 0], [0, 0, 5, 0], [0, 1, 5, 0], [0, 0, 5, 0], [0, 0, 0, 0]], dtype=uint8) >>> find_boundaries(labels_small, mode='subpixel').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 1, 
1, 0], [0, 0, 0, 1, 0, 1, 0], [0, 1, 1, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0], [0, 1, 1, 1, 0, 1, 0], [0, 0, 0, 1, 0, 1, 0], [0, 0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> bool_image = np.array([[False, False, False, False, False], ... [False, False, False, False, False], ... [False, False, True, True, True], ... [False, False, True, True, True], ... [False, False, True, True, True]], ... dtype=bool) >>> find_boundaries(bool_image) array([[False, False, False, False, False], [False, False, True, True, True], [False, True, True, True, True], [False, True, True, False, False], [False, True, True, False, False]]) flood skimage.segmentation.flood(image, seed_point, *, selem=None, connectivity=None, tolerance=None) [source] Mask corresponding to a flood fill. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found. Parameters imagendarray An n-dimensional array. seed_pointtuple or int The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer. selemndarray, optional A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s, and have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected). connectivityint, optional A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None. tolerancefloat or int, optional If None (default), adjacent values must be strictly equal to the initial value of image at seed_point. This is fastest. If a value is given, a comparison will be done at every point, and points whose values are within tolerance of the initial value will also be filled (inclusive). 
Returns maskndarray A Boolean array with the same shape as image is returned, with True values for areas connected to and equal (or within tolerance of) the seed point. All other values are False. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. This function returns just the mask representing the fill. If indices are desired rather than masks for memory reasons, the user can simply run numpy.nonzero on the result, save the indices, and discard this mask. Examples >>> from skimage.morphology import flood >>> image = np.zeros((4, 7), dtype=int) >>> image[1:3, 1:3] = 1 >>> image[3, 0] = 1 >>> image[1:3, 4:6] = 2 >>> image[3, 6] = 3 >>> image array([[0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 2, 2, 0], [0, 1, 1, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, with full connectivity (diagonals included): >>> mask = flood(image, (1, 1)) >>> image_flooded = image.copy() >>> image_flooded[mask] = 5 >>> image_flooded array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [5, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> mask = flood(image, (1, 1), connectivity=1) >>> image_flooded = image.copy() >>> image_flooded[mask] = 5 >>> image_flooded array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill with a tolerance: >>> mask = flood(image, (0, 0), tolerance=1) >>> image_flooded = image.copy() >>> image_flooded[mask] = 5 >>> image_flooded array([[5, 5, 5, 5, 5, 5, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 5, 5, 3]]) Examples using skimage.segmentation.flood Flood Fill flood_fill skimage.segmentation.flood_fill(image, seed_point, new_value, *, selem=None, connectivity=None, tolerance=None, in_place=False, inplace=None) [source] Perform flood filling on an image. 
Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found, then set to new_value. Parameters imagendarray An n-dimensional array. seed_pointtuple or int The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer. new_valueimage type New value to set the entire fill. This must be chosen in agreement with the dtype of image. selemndarray, optional A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s, have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected). connectivityint, optional A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None. tolerancefloat or int, optional If None (default), adjacent values must be strictly equal to the value of image at seed_point to be filled. This is fastest. If a tolerance is provided, adjacent points with values within plus or minus tolerance from the seed point are filled (inclusive). in_placebool, optional If True, flood filling is applied to image in place. If False, the flood filled result is returned without modifying the input image (default). inplacebool, optional This parameter is deprecated and will be removed in version 0.19.0 in favor of in_place. If True, flood filling is applied to image inplace. If False, the flood filled result is returned without modifying the input image (default). Returns filledndarray An array with the same shape as image is returned, with values in areas connected to and equal (or within tolerance of) the seed point replaced with new_value. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. 
Examples >>> from skimage.morphology import flood_fill >>> image = np.zeros((4, 7), dtype=int) >>> image[1:3, 1:3] = 1 >>> image[3, 0] = 1 >>> image[1:3, 4:6] = 2 >>> image[3, 6] = 3 >>> image array([[0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 2, 2, 0], [0, 1, 1, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, with full connectivity (diagonals included): >>> flood_fill(image, (1, 1), 5) array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [5, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> flood_fill(image, (1, 1), 5, connectivity=1) array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill with a tolerance: >>> flood_fill(image, (0, 0), 5, tolerance=1) array([[5, 5, 5, 5, 5, 5, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 5, 5, 3]]) Examples using skimage.segmentation.flood_fill Flood Fill inverse_gaussian_gradient skimage.segmentation.inverse_gaussian_gradient(image, alpha=100.0, sigma=5.0) [source] Inverse of gradient magnitude. Computes the magnitude of the gradients in the image and then inverts the result in the range [0, 1]. Flat areas are assigned values close to 1, while areas close to borders are assigned values close to 0. This function, or a similar one defined by the user, should be applied over the image as a preprocessing step before calling morphological_geodesic_active_contour. Parameters image(M, N) or (L, M, N) array Grayscale image or volume. alphafloat, optional Controls the steepness of the inversion. A larger value will make the transition between the flat areas and border areas steeper in the resulting array. sigmafloat, optional Standard deviation of the Gaussian filter applied over the image. Returns gimage(M, N) or (L, M, N) array Preprocessed image (or volume) suitable for morphological_geodesic_active_contour. 
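inverse_gaussian_gradient has no Examples section in this reference. As a minimal usage sketch (the toy image below is an illustrative assumption, not taken from the original docs):

```python
import numpy as np
from skimage.segmentation import inverse_gaussian_gradient

# Toy image: a bright square on a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

# Preprocess for morphological_geodesic_active_contour.
gimage = inverse_gaussian_gradient(image, alpha=100.0, sigma=2.0)

# Flat areas map to values near 1; values drop toward 0
# near the borders of the square.
print(gimage.shape)  # (64, 64)
```

The resulting gimage can be passed directly as the gimage argument of morphological_geodesic_active_contour.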
join_segmentations skimage.segmentation.join_segmentations(s1, s2) [source] Return the join of the two input segmentations. The join J of S1 and S2 is defined as the segmentation in which two voxels are in the same segment if and only if they are in the same segment in both S1 and S2. Parameters s1, s2numpy arrays s1 and s2 are label fields of the same shape. Returns jnumpy array The join segmentation of s1 and s2. Examples >>> from skimage.segmentation import join_segmentations >>> s1 = np.array([[0, 0, 1, 1], ... [0, 2, 1, 1], ... [2, 2, 2, 1]]) >>> s2 = np.array([[0, 1, 1, 0], ... [0, 1, 1, 0], ... [0, 1, 1, 1]]) >>> join_segmentations(s1, s2) array([[0, 1, 3, 2], [0, 5, 3, 2], [4, 5, 5, 3]]) mark_boundaries skimage.segmentation.mark_boundaries(image, label_img, color=(1, 1, 0), outline_color=None, mode='outer', background_label=0) [source] Return image with boundaries between labeled regions highlighted. Parameters image(M, N[, 3]) array Grayscale or RGB image. label_img(M, N) array of int Label array where regions are marked by different integer values. colorlength-3 sequence, optional RGB color of boundaries in the output image. outline_colorlength-3 sequence, optional RGB color surrounding boundaries in the output image. If None, no outline is drawn. modestring in {‘thick’, ‘inner’, ‘outer’, ‘subpixel’}, optional The mode for finding boundaries. background_labelint, optional Which label to consider background (this is only useful for modes inner and outer). Returns marked(M, N, 3) array of float An image in which the boundaries between labels are superimposed on the original image. 
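mark_boundaries has no Examples section here; a minimal sketch (the tiny label image is an illustrative assumption) showing that the output is always an RGB float image:

```python
import numpy as np
from skimage.segmentation import mark_boundaries

# Grayscale image with one labeled square region.
image = np.zeros((8, 8))
label_img = np.zeros((8, 8), dtype=int)
label_img[2:6, 2:6] = 1

# Draw the region boundary in red over the (grayscale) image.
marked = mark_boundaries(image, label_img, color=(1, 0, 0))
print(marked.shape)  # (8, 8, 3): grayscale input is promoted to RGB float
```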
See also find_boundaries Examples using skimage.segmentation.mark_boundaries Trainable segmentation using local features and random forests morphological_chan_vese skimage.segmentation.morphological_chan_vese(image, iterations, init_level_set='checkerboard', smoothing=1, lambda1=1, lambda2=1, iter_callback=<function <lambda>>) [source] Morphological Active Contours without Edges (MorphACWE). Active contours without edges implemented with morphological operators. It can be used to segment objects in images and volumes without well-defined borders. It is required that the inside of the object looks different on average than the outside (i.e., the inner area of the object should be darker or lighter than the outer area on average). Parameters image(M, N) or (L, M, N) array Grayscale image or volume to be segmented. iterationsuint Number of iterations to run. init_level_setstr, (M, N) array, or (L, M, N) array Initial level set. If an array is given, it will be binarized and used as the initial level set. If a string is given, it defines the method to generate a reasonable initial level set with the shape of the image. Accepted values are ‘checkerboard’ and ‘circle’. See the documentation of checkerboard_level_set and circle_level_set respectively for details about how these level sets are created. smoothinguint, optional Number of times the smoothing operator is applied per iteration. Reasonable values are around 1-4. Larger values lead to smoother segmentations. lambda1float, optional Weight parameter for the outer region. If lambda1 is larger than lambda2, the outer region will contain a larger range of values than the inner region. lambda2float, optional Weight parameter for the inner region. If lambda2 is larger than lambda1, the inner region will contain a larger range of values than the outer region. iter_callbackfunction, optional If given, this function is called once per iteration with the current level set as the only argument. 
This is useful for debugging or for plotting intermediate results during the evolution. Returns out(M, N) or (L, M, N) array Final segmentation (i.e., the final level set) See also circle_level_set, checkerboard_level_set Notes This is a version of the Chan-Vese algorithm that uses morphological operators instead of solving a partial differential equation (PDE) for the evolution of the contour. The set of morphological operators used in this algorithm have been proved to be infinitesimally equivalent to the Chan-Vese PDE (see [1]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (it is not necessary to find the right time step for the evolution), and are computationally faster. The algorithm and its theoretical derivation are described in [1]. References 1(1,2) A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106 morphological_geodesic_active_contour skimage.segmentation.morphological_geodesic_active_contour(gimage, iterations, init_level_set='circle', smoothing=1, threshold='auto', balloon=0, iter_callback=<function <lambda>>) [source] Morphological Geodesic Active Contours (MorphGAC). Geodesic active contours implemented with morphological operators. It can be used to segment objects with visible but noisy, cluttered, broken borders. Parameters gimage(M, N) or (L, M, N) array Preprocessed image or volume to be segmented. This is very rarely the original image. Instead, this is usually a preprocessed version of the original image that enhances and highlights the borders (or other structures) of the object to segment. morphological_geodesic_active_contour will try to stop the contour evolution in areas where gimage is small. See morphsnakes.inverse_gaussian_gradient as an example function to perform this preprocessing. 
Note that the quality of morphological_geodesic_active_contour might greatly depend on this preprocessing. iterationsuint Number of iterations to run. init_level_setstr, (M, N) array, or (L, M, N) array Initial level set. If an array is given, it will be binarized and used as the initial level set. If a string is given, it defines the method to generate a reasonable initial level set with the shape of the image. Accepted values are ‘checkerboard’ and ‘circle’. See the documentation of checkerboard_level_set and circle_level_set respectively for details about how these level sets are created. smoothinguint, optional Number of times the smoothing operator is applied per iteration. Reasonable values are around 1-4. Larger values lead to smoother segmentations. thresholdfloat, optional Areas of the image with a value smaller than this threshold will be considered borders. The evolution of the contour will stop in these areas. balloonfloat, optional Balloon force to guide the contour in non-informative areas of the image, i.e., areas where the gradient of the image is too small to push the contour towards a border. A negative value will shrink the contour, while a positive value will expand the contour in these areas. Setting this to zero will disable the balloon force. iter_callbackfunction, optional If given, this function is called once per iteration with the current level set as the only argument. This is useful for debugging or for plotting intermediate results during the evolution. Returns out(M, N) or (L, M, N) array Final segmentation (i.e., the final level set) See also inverse_gaussian_gradient, circle_level_set, checkerboard_level_set Notes This is a version of the Geodesic Active Contours (GAC) algorithm that uses morphological operators instead of solving partial differential equations (PDEs) for the evolution of the contour. The set of morphological operators used in this algorithm have been proved to be infinitesimally equivalent to the GAC PDEs (see [1]). 
However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (e.g., it is not necessary to find the right time step for the evolution), and are computationally faster. The algorithm and its theoretical derivation are described in [1]. References 1(1,2) A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106 quickshift skimage.segmentation.quickshift(image, ratio=1.0, kernel_size=5, max_dist=10, return_tree=False, sigma=0, convert2lab=True, random_seed=42) [source] Segments image using quickshift clustering in Color-(x,y) space. Produces an oversegmentation of the image using the quickshift mode-seeking algorithm. Parameters image(width, height, channels) ndarray Input image. ratiofloat, optional, between 0 and 1 Balances color-space proximity and image-space proximity. Higher values give more weight to color-space. kernel_sizefloat, optional Width of Gaussian kernel used in smoothing the sample density. Higher means fewer clusters. max_distfloat, optional Cut-off point for data distances. Higher means fewer clusters. return_treebool, optional Whether to return the full segmentation hierarchy tree and distances. sigmafloat, optional Width for Gaussian smoothing as preprocessing. Zero means no smoothing. convert2labbool, optional Whether the input should be converted to Lab colorspace prior to segmentation. For this purpose, the input is assumed to be RGB. random_seedint, optional Random seed used for breaking ties. Returns segment_mask(width, height) ndarray Integer mask indicating segment labels. Notes The authors advocate converting the image to Lab color space prior to segmentation, though this is not strictly necessary. For this to work, the image must be given in RGB format. 
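quickshift has no Examples section in this reference; a minimal usage sketch (the random RGB image is an illustrative assumption):

```python
import numpy as np
from skimage.segmentation import quickshift

# Small random RGB image; quickshift expects (width, height, channels).
rng = np.random.RandomState(0)
image = rng.rand(32, 32, 3)

# Smaller kernel_size/max_dist than the defaults keep this toy run fast
# and produce more, smaller segments.
segments = quickshift(image, kernel_size=3, max_dist=6, ratio=0.5)
print(segments.shape)  # (32, 32)
```

segments is an integer label mask of the same spatial shape as the input.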
References 1 Quick shift and kernel methods for mode seeking, Vedaldi, A. and Soatto, S. European Conference on Computer Vision, 2008 random_walker skimage.segmentation.random_walker(data, labels, beta=130, mode='cg_j', tol=0.001, copy=True, multichannel=False, return_full_prob=False, spacing=None, *, prob_tol=0.001) [source] Random walker algorithm for segmentation from markers. The random walker algorithm is implemented for gray-level or multichannel images. Parameters dataarray_like Image to be segmented in phases. Gray-level data can be two- or three-dimensional; multichannel data can be three- or four-dimensional (multichannel=True) with the highest dimension denoting channels. Data spacing is assumed isotropic unless the spacing keyword argument is used. labelsarray of ints, of same shape as data without channels dimension Array of seed markers labeled with different positive integers for different phases. Zero-labeled pixels are unlabeled pixels. Negative labels correspond to inactive pixels that are not taken into account (they are removed from the graph). If labels are not consecutive integers, the labels array will be transformed so that labels are consecutive. In the multichannel case, labels should have the same shape as a single channel of data, i.e. without the final dimension denoting channels. betafloat, optional Penalization coefficient for the random walker motion (the greater beta, the more difficult the diffusion). modestring, available options {‘cg’, ‘cg_j’, ‘cg_mg’, ‘bf’} Mode for solving the linear system in the random walker algorithm. ‘bf’ (brute force): an LU factorization of the Laplacian is computed. This is fast for small images (<1024x1024), but very slow and memory-intensive for large images (e.g., 3-D volumes). ‘cg’ (conjugate gradient): the linear system is solved iteratively using the Conjugate Gradient method from scipy.sparse.linalg. This is less memory-consuming than the brute force method for large images, but it is quite slow. 
‘cg_j’ (conjugate gradient with Jacobi preconditioner): the Jacobi preconditioner is applied during the Conjugate Gradient method iterations. This may accelerate the convergence of the ‘cg’ method. ‘cg_mg’ (conjugate gradient with multigrid preconditioner): a preconditioner is computed using a multigrid solver, then the solution is computed with the Conjugate Gradient method. This mode requires that the pyamg module is installed. tolfloat, optional Tolerance to achieve when solving the linear system using the conjugate gradient based modes (‘cg’, ‘cg_j’ and ‘cg_mg’). copybool, optional If copy is False, the labels array will be overwritten with the result of the segmentation. Use copy=False if you want to save on memory. multichannelbool, optional If True, input data is parsed as multichannel data (see ‘data’ above for proper input format in this case). return_full_probbool, optional If True, the probability that a pixel belongs to each of the labels will be returned, instead of only the most likely label. spacingiterable of floats, optional Spacing between voxels in each spatial dimension. If None, then the spacing between pixels/voxels in each dimension is assumed 1. prob_tolfloat, optional Tolerance on the resulting probability to be in the interval [0, 1]. If the tolerance is not satisfied, a warning is displayed. Returns outputndarray If return_full_prob is False, array of ints of same shape and data type as labels, in which each pixel has been labeled according to the marker that reached the pixel first by anisotropic diffusion. If return_full_prob is True, array of floats of shape (nlabels, labels.shape). output[label_nb, i, j] is the probability that label label_nb reaches the pixel (i, j) first. See also skimage.morphology.watershed watershed segmentation A segmentation algorithm based on mathematical morphology and “flooding” of regions from markers. Notes Multichannel inputs are scaled with all channel data combined. 
Ensure all channels are separately normalized prior to running this algorithm. The spacing argument is specifically for anisotropic datasets, where data points are spaced differently in one or more spatial dimensions. Anisotropic data is commonly encountered in medical imaging. The algorithm was first proposed in [1]. The algorithm solves the diffusion equation at infinite times for sources placed on markers of each phase in turn. A pixel is labeled with the phase that has the greatest probability to diffuse first to the pixel. The diffusion equation is solved by minimizing x.T L x for each phase, where L is the Laplacian of the weighted graph of the image, and x is the probability that a marker of the given phase arrives first at a pixel by diffusion (x=1 on markers of the phase, x=0 on the other markers, and the other coefficients are solved for). Each pixel is attributed the label for which it has a maximal value of x. The Laplacian L of the image is defined as: L_ii = d_i, the number of neighbors of pixel i (the degree of i) L_ij = -w_ij if i and j are adjacent pixels The weight w_ij is a decreasing function of the norm of the local gradient. This ensures that diffusion is easier between pixels of similar values. When the Laplacian is decomposed into blocks of marked and unmarked pixels: L = [[M, B.T], [B, A]] with first indices corresponding to marked pixels, and then to unmarked pixels, minimizing x.T L x for one phase amounts to solving: A x = - B x_m where x_m = 1 on markers of the given phase, and 0 on other markers. This linear system is solved in the algorithm using a direct method for small images, and an iterative method for larger images. References 1 Leo Grady, Random walks for image segmentation, IEEE Trans Pattern Anal Mach Intell. 2006 Nov;28(11):1768-83. DOI:10.1109/TPAMI.2006.233. 
Examples >>> np.random.seed(0) >>> a = np.zeros((10, 10)) + 0.2 * np.random.rand(10, 10) >>> a[5:8, 5:8] += 1 >>> b = np.zeros_like(a, dtype=np.int32) >>> b[3, 3] = 1 # Marker for first phase >>> b[6, 6] = 2 # Marker for second phase >>> random_walker(a, b) array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 2, 2, 2, 1, 1], [1, 1, 1, 1, 1, 2, 2, 2, 1, 1], [1, 1, 1, 1, 1, 2, 2, 2, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32) relabel_sequential skimage.segmentation.relabel_sequential(label_field, offset=1) [source] Relabel arbitrary labels to {offset, … offset + number_of_labels}. This function also returns the forward map (mapping the original labels to the reduced labels) and the inverse map (mapping the reduced labels back to the original ones). Parameters label_fieldnumpy array of int, arbitrary shape An array of labels, which must be non-negative integers. offsetint, optional The return labels will start at offset, which should be strictly positive. Returns relabelednumpy array of int, same shape as label_field The input label field with labels mapped to {offset, …, number_of_labels + offset - 1}. The data type will be the same as label_field, except when offset + number_of_labels causes overflow of the current data type. forward_mapArrayMap The map from the original label space to the returned label space. Can be used to re-apply the same mapping. See examples for usage. The output data type will be the same as relabeled. inverse_mapArrayMap The map from the new label space to the original space. This can be used to reconstruct the original label field from the relabeled one. The output data type will be the same as label_field. Notes The label 0 is assumed to denote the background and is never remapped. 
The forward map can be extremely big for some inputs, since its length is given by the maximum of the label field. However, in most situations, label_field.max() is much smaller than label_field.size, and in these cases the forward map is guaranteed to be smaller than either the input or output images. Examples >>> from skimage.segmentation import relabel_sequential >>> label_field = np.array([1, 1, 5, 5, 8, 99, 42]) >>> relab, fw, inv = relabel_sequential(label_field) >>> relab array([1, 1, 2, 2, 3, 5, 4]) >>> print(fw) ArrayMap: 1 → 1 5 → 2 8 → 3 42 → 4 99 → 5 >>> np.array(fw) array([0, 1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5]) >>> np.array(inv) array([ 0, 1, 5, 8, 42, 99]) >>> (fw[label_field] == relab).all() True >>> (inv[relab] == label_field).all() True >>> relab, fw, inv = relabel_sequential(label_field, offset=5) >>> relab array([5, 5, 6, 6, 7, 9, 8]) slic skimage.segmentation.slic(image, n_segments=100, compactness=10.0, max_iter=10, sigma=0, spacing=None, multichannel=True, convert2lab=None, enforce_connectivity=True, min_size_factor=0.5, max_size_factor=3, slic_zero=False, start_label=None, mask=None) [source] Segments image using k-means clustering in Color-(x,y,z) space. Parameters image2D, 3D or 4D ndarray Input image, which can be 2D or 3D, and grayscale or multichannel (see multichannel parameter). Input image must either be NaN-free or the NaN’s must be masked out n_segmentsint, optional The (approximate) number of labels in the segmented output image. compactnessfloat, optional Balances color proximity and space proximity. Higher values give more weight to space proximity, making superpixel shapes more square/cubic. In SLICO mode, this is the initial compactness. 
This parameter depends strongly on image contrast and on the shapes of objects in the image. We recommend exploring possible values on a log scale, e.g., 0.01, 0.1, 1, 10, 100, before refining around a chosen value. max_iterint, optional Maximum number of iterations of k-means. sigmafloat or (3,) array-like of floats, optional Width of Gaussian smoothing kernel for pre-processing for each dimension of the image. The same sigma is applied to each dimension in case of a scalar value. Zero means no smoothing. Note that sigma is automatically scaled if it is scalar and a manual voxel spacing is provided (see Notes section). spacing(3,) array-like of floats, optional The voxel spacing along each image dimension. By default, slic assumes uniform spacing (same voxel resolution along z, y and x). This parameter controls the weights of the distances along z, y, and x during k-means clustering. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. convert2labbool, optional Whether the input should be converted to Lab colorspace prior to segmentation. The input image must be RGB. Highly recommended. This option defaults to True when multichannel=True and image.shape[-1] == 3. enforce_connectivitybool, optional Whether the generated segments are connected or not min_size_factorfloat, optional Proportion of the minimum segment size to be removed with respect to the supposed segment size `depth*width*height/n_segments` max_size_factorfloat, optional Proportion of the maximum connected segment size. A value of 3 works in most cases. slic_zerobool, optional Run SLIC-zero, the zero-parameter mode of SLIC. [2] start_label: int, optional The labels’ index start. Should be 0 or 1. 
New in version 0.17: start_label was introduced in 0.17 mask2D ndarray, optional If provided, superpixels are computed only where mask is True, and seed points are homogeneously distributed over the mask using a K-means clustering strategy. New in version 0.17: mask was introduced in 0.17 Returns labels2D or 3D array Integer mask indicating segment labels. Raises ValueError If convert2lab is set to True but the last array dimension is not of length 3. ValueError If start_label is not 0 or 1. Notes If sigma > 0, the image is smoothed using a Gaussian kernel prior to segmentation. If sigma is scalar and spacing is provided, the kernel width is divided along each dimension by the spacing. For example, if sigma=1 and spacing=[5, 1, 1], the effective sigma is [0.2, 1, 1]. This ensures sensible smoothing for anisotropic images. The image is rescaled to be in [0, 1] prior to processing. Images of shape (M, N, 3) are interpreted as 2D RGB images by default. To interpret them as 3D with the last dimension having length 3, use multichannel=False. start_label is introduced to handle the issue [4]. Label indexing starting at 0 will be deprecated in future versions. If mask is not None, label indexing starts at 1 and the masked area is set to 0. References 1 Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk, SLIC Superpixels Compared to State-of-the-art Superpixel Methods, TPAMI, May 2012. DOI:10.1109/TPAMI.2012.120 2 https://www.epfl.ch/labs/ivrl/research/slic-superpixels/#SLICO 3 Irving, Benjamin. 
“maskSLIC: regional superpixel generation with application to local pathology characterisation in medical images.”, 2016, arXiv:1606.09518 4 https://github.com/scikit-image/scikit-image/issues/3722 Examples >>> from skimage.segmentation import slic >>> from skimage.data import astronaut >>> img = astronaut() >>> segments = slic(img, n_segments=100, compactness=10) Increasing the compactness parameter yields more square regions: >>> segments = slic(img, n_segments=100, compactness=20) watershed skimage.segmentation.watershed(image, markers=None, connectivity=1, offset=None, mask=None, compactness=0, watershed_line=False) [source] Find watershed basins in image flooded from given markers. Parameters imagendarray (2-D, 3-D, …) of integers Data array where the lowest value points are labeled first. markersint, or ndarray of int, same shape as image, optional The desired number of markers, or an array marking the basins with the values to be assigned in the label matrix. Zero means not a marker. If None (no markers given), the local minima of the image are used as markers. connectivityndarray, optional An array with the same number of dimensions as image whose non-zero elements indicate neighbors for connection. Following the scipy convention, default is a one-connected array of the dimension of the image. offsetarray_like of shape image.ndim, optional offset of the connectivity (one offset per dimension) maskndarray of bools or 0s and 1s, optional Array of same shape as image. Only points at which mask == True will be labeled. compactnessfloat, optional Use compact watershed [3] with given compactness parameter. Higher values result in more regularly-shaped watershed basins. watershed_linebool, optional If watershed_line is True, a one-pixel wide line separates the regions obtained by the watershed algorithm. The line has the label 0. 
Returns outndarray A labeled matrix of the same type and shape as markers See also skimage.segmentation.random_walker random walker segmentation A segmentation algorithm based on anisotropic diffusion, usually slower than the watershed but with good results on noisy data and boundaries with holes. Notes This function implements a watershed algorithm [1] [2] that apportions pixels into marked basins. The algorithm uses a priority queue to hold the pixels with the metric for the priority queue being pixel value, then the time of entry into the queue - this settles ties in favor of the closest marker. Some ideas taken from Soille, “Automated Basin Delineation from Digital Elevation Models Using Mathematical Morphology”, Signal Processing 20 (1990) 171-182 The most important insight in the paper is that entry time onto the queue solves two problems: a pixel should be assigned to the neighbor with the largest gradient or, if there is no gradient, pixels on a plateau should be split between markers on opposite sides. This implementation converts all arguments to specific, lowest common denominator types, then passes these to a C algorithm. Markers can be determined manually, or automatically using for example the local minima of the gradient of the image, or the local maxima of the distance function to the background for separating overlapping objects (see example). References 1 https://en.wikipedia.org/wiki/Watershed_%28image_processing%29 2 http://cmm.ensmp.fr/~beucher/wtshed.html 3 Peer Neubert & Peter Protzel (2014). Compact Watershed and Preemptive SLIC: On Improving Trade-offs of Superpixel Segmentation Algorithms. ICPR 2014, pp 996-1001. DOI:10.1109/ICPR.2014.181 https://www.tu-chemnitz.de/etit/proaut/publications/cws_pSLIC_ICPR.pdf Examples The watershed algorithm is useful to separate overlapping objects. 
We first generate an initial image with two overlapping circles: >>> x, y = np.indices((80, 80)) >>> x1, y1, x2, y2 = 28, 28, 44, 52 >>> r1, r2 = 16, 20 >>> mask_circle1 = (x - x1)**2 + (y - y1)**2 < r1**2 >>> mask_circle2 = (x - x2)**2 + (y - y2)**2 < r2**2 >>> image = np.logical_or(mask_circle1, mask_circle2) Next, we want to separate the two circles. We generate markers at the maxima of the distance to the background: >>> from scipy import ndimage as ndi >>> distance = ndi.distance_transform_edt(image) >>> from skimage.feature import peak_local_max >>> local_maxi = peak_local_max(distance, labels=image, ... footprint=np.ones((3, 3)), ... indices=False) >>> markers = ndi.label(local_maxi)[0] Finally, we run the watershed on the image and markers: >>> labels = watershed(-distance, markers, mask=image) The algorithm works also for 3-D images, and can be used for example to separate overlapping spheres. Examples using skimage.segmentation.watershed Watershed segmentation Markers for watershed transform Segment human cells (in mitosis)
skimage.segmentation.active_contour(image, snake, alpha=0.01, beta=0.1, w_line=0, w_edge=1, gamma=0.01, max_px_move=1.0, max_iterations=2500, convergence=0.1, *, boundary_condition='periodic', coordinates='rc') [source] Active contour model. Active contours by fitting snakes to features of images. Supports single and multichannel 2D images. Snakes can be periodic (for segmentation) or have fixed and/or free ends. The output snake has the same length as the input boundary. As the number of points is constant, make sure that the initial snake has enough points to capture the details of the final contour. Parameters image(N, M) or (N, M, 3) ndarray Input image. snake(N, 2) ndarray Initial snake coordinates. For periodic boundary conditions, endpoints must not be duplicated. alphafloat, optional Snake length shape parameter. Higher values make the snake contract faster. betafloat, optional Snake smoothness shape parameter. Higher values make the snake smoother. w_linefloat, optional Controls attraction to brightness. Use negative values to attract toward dark regions. w_edgefloat, optional Controls attraction to edges. Use negative values to repel the snake from edges. gammafloat, optional Explicit time stepping parameter. max_px_movefloat, optional Maximum pixel distance to move per iteration. max_iterationsint, optional Maximum iterations to optimize snake shape. convergencefloat, optional Convergence criteria. boundary_conditionstring, optional Boundary conditions for the contour. Can be one of ‘periodic’, ‘free’, ‘fixed’, ‘free-fixed’, or ‘fixed-free’. ‘periodic’ attaches the two ends of the snake, ‘fixed’ holds the end-points in place, and ‘free’ allows free movement of the ends. ‘fixed’ and ‘free’ can be combined by passing ‘fixed-free’ or ‘free-fixed’. Passing ‘fixed-fixed’ or ‘free-free’ yields the same behaviour as ‘fixed’ and ‘free’, respectively. coordinates{‘rc’}, optional This option remains for compatibility purposes only and has no effect. 
It was introduced in 0.16 with the 'xy' option, but since 0.18, only the 'rc' option is valid. Coordinates must be set in a row-column format. Returns snake(N, 2) ndarray Optimised snake, same shape as input parameter. References 1 Kass, M.; Witkin, A.; Terzopoulos, D. “Snakes: Active contour models”. International Journal of Computer Vision 1 (4): 321 (1988). DOI:10.1007/BF00133570 Examples >>> from skimage.draw import circle_perimeter >>> from skimage.filters import gaussian Create and smooth image: >>> img = np.zeros((100, 100)) >>> rr, cc = circle_perimeter(35, 45, 25) >>> img[rr, cc] = 1 >>> img = gaussian(img, 2) Initialize spline: >>> s = np.linspace(0, 2*np.pi, 100) >>> init = 50 * np.array([np.sin(s), np.cos(s)]).T + 50 Fit spline to image: >>> snake = active_contour(img, init, w_edge=0, w_line=1, coordinates='rc') >>> dist = np.sqrt((45-snake[:, 0])**2 + (35-snake[:, 1])**2) >>> int(np.mean(dist)) 25
skimage.segmentation.chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0, tol=0.001, max_iter=500, dt=0.5, init_level_set='checkerboard', extended_output=False) [source] Chan-Vese segmentation algorithm. Active contour model by evolving a level set. Can be used to segment objects without clearly defined boundaries. Parameters image(M, N) ndarray Grayscale image to be segmented. mufloat, optional ‘edge length’ weight parameter. Higher mu values will produce a ‘round’ edge, while values closer to zero will detect smaller objects. lambda1float, optional ‘difference from average’ weight parameter for the output region with value ‘True’. If it is lower than lambda2, this region will have a larger range of values than the other. lambda2float, optional ‘difference from average’ weight parameter for the output region with value ‘False’. If it is lower than lambda1, this region will have a larger range of values than the other. tolfloat, positive, optional Level set variation tolerance between iterations. If the L2 norm difference between the level sets of successive iterations normalized by the area of the image is below this value, the algorithm will assume that the solution was reached. max_iteruint, optional Maximum number of iterations allowed before the algorithm interrupts itself. dtfloat, optional A multiplication factor applied to calculations at each step, serving to accelerate the algorithm. While higher values may speed up the algorithm, they may also lead to convergence problems. init_level_setstr or (M, N) ndarray, optional Defines the starting level set used by the algorithm. If a string is given, a level set that matches the image size will automatically be generated. Alternatively, it is possible to define a custom level set, which should be an array of float values, with the same shape as ‘image’. Accepted string values are as follows. ‘checkerboard’ the starting level set is defined as sin(x/5*pi)*sin(y/5*pi), where x and y are pixel coordinates. 
This level set has fast convergence, but may fail to detect implicit edges. ‘disk’ the starting level set is defined as the opposite of the distance from the center of the image minus half of the minimum value between image width and image height. This is somewhat slower, but is more likely to properly detect implicit edges. ‘small disk’ the starting level set is defined as the opposite of the distance from the center of the image minus a quarter of the minimum value between image width and image height. extended_outputbool, optional If set to True, the return value will be a tuple containing the three return values (see below). If set to False, which is the default value, only the ‘segmentation’ array will be returned. Returns segmentation(M, N) ndarray, bool Segmentation produced by the algorithm. phi(M, N) ndarray of floats Final level set computed by the algorithm. energieslist of floats Shows the evolution of the ‘energy’ for each step of the algorithm. This allows checking whether the algorithm converged. Notes The Chan-Vese Algorithm is designed to segment objects without clearly defined boundaries. This algorithm is based on level sets that are evolved iteratively to minimize an energy, which is defined by weighted values corresponding to the sum of differences in intensity from the average value outside the segmented region, the sum of differences from the average value inside the segmented region, and a term which is dependent on the length of the boundary of the segmented region. This algorithm was first proposed by Tony Chan and Luminita Vese, in a publication entitled “An Active Contour Model Without Edges” [1]. This implementation of the algorithm is somewhat simplified in the sense that the area factor ‘nu’ described in the original paper is not implemented, and is only suitable for grayscale images. Typical values for lambda1 and lambda2 are 1. 
If the ‘background’ is very different from the segmented object in terms of distribution (for example, a uniform black image with figures of varying intensity), then these values should be different from each other. Typical values for mu are between 0 and 1, though higher values can be used when dealing with shapes with very ill-defined contours. The ‘energy’ which this algorithm tries to minimize is defined as the sum of the squared differences from the average within each region, weighted by the ‘lambda’ factors, to which is added the length of the contour multiplied by the ‘mu’ factor. Supports 2D grayscale images only, and does not implement the area term described in the original article. References 1 An Active Contour Model without Edges, Tony Chan and Luminita Vese, Scale-Space Theories in Computer Vision, 1999, DOI:10.1007/3-540-48236-9_13 2 Chan-Vese Segmentation, Pascal Getreuer Image Processing On Line, 2 (2012), pp. 214-224, DOI:10.5201/ipol.2012.g-cv 3 The Chan-Vese Algorithm - Project Report, Rami Cohen, 2011 arXiv:1107.2782
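The entry above has no Examples section. A minimal usage sketch on a hypothetical noisy test image (the image and its sizes are illustration values, not from the original docs); default parameters are kept and extended_output=True returns the level set and energy history as well:

```python
import numpy as np
from skimage.segmentation import chan_vese

# Hypothetical test image: a bright square on a dark background, plus noise.
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
image += 0.1 * rng.standard_normal(image.shape)

# extended_output=True returns (segmentation, phi, energies).
segmentation, phi, energies = chan_vese(image, mu=0.25, extended_output=True)
```

Here segmentation is the boolean region mask, phi the final level set, and the length of energies shows how many iterations ran before the tolerance was met.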
skimage.segmentation.checkerboard_level_set(image_shape, square_size=5) [source] Create a checkerboard level set with binary values. Parameters image_shapetuple of positive integers Shape of the image. square_sizeint, optional Size of the squares of the checkerboard. It defaults to 5. Returns outarray with shape image_shape Binary level set of the checkerboard. See also circle_level_set
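A minimal usage sketch (shape and square size are arbitrary illustration values):

```python
import numpy as np
from skimage.segmentation import checkerboard_level_set

# 10x10 binary checkerboard level set with 5-pixel squares.
level_set = checkerboard_level_set((10, 10), square_size=5)
# Adjacent squares carry opposite binary values.
```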
skimage.segmentation.circle_level_set(image_shape, center=None, radius=None) [source] Create a circle level set with binary values. Parameters image_shapetuple of positive integers Shape of the image centertuple of positive integers, optional Coordinates of the center of the circle given in (row, column). If not given, it defaults to the center of the image. radiusfloat, optional Radius of the circle. If not given, it is set to 75% of the smallest image dimension. Returns outarray with shape image_shape Binary level set of the circle with the given radius and center. Warns Deprecated since version 0.17: This function is deprecated and will be removed in scikit-image 0.19. Please use the function named disk_level_set instead. See also checkerboard_level_set
skimage.segmentation.clear_border(labels, buffer_size=0, bgval=0, in_place=False, mask=None) [source] Clear objects connected to the label image border. Parameters labels(M[, N[, …, P]]) array of int or bool Imaging data labels. buffer_sizeint, optional The width of the border examined. By default, only objects that touch the outside of the image are removed. bgvalfloat or int, optional Cleared objects are set to this value. in_placebool, optional Whether or not to manipulate the labels array in-place. maskndarray of bool, same shape as image, optional. Image data mask. Objects in labels image overlapping with False pixels of mask will be removed. If defined, the argument buffer_size will be ignored. Returns out(M[, N[, …, P]]) array Imaging data labels with cleared borders Examples >>> import numpy as np >>> from skimage.segmentation import clear_border >>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 0], ... [1, 1, 0, 0, 1, 0, 0, 1, 0], ... [1, 1, 0, 1, 0, 1, 0, 0, 0], ... [0, 0, 0, 1, 1, 1, 1, 0, 0], ... [0, 1, 1, 1, 1, 1, 1, 1, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> clear_border(labels) array([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> mask = np.array([[0, 0, 1, 1, 1, 1, 1, 1, 1], ... [0, 0, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1], ... [1, 1, 1, 1, 1, 1, 1, 1, 1]]).astype(bool) >>> clear_border(labels, mask=mask) array([[0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0, 0, 1, 0], [0, 0, 0, 1, 0, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]])
skimage.segmentation.disk_level_set(image_shape, *, center=None, radius=None) [source] Create a disk level set with binary values. Parameters image_shapetuple of positive integers Shape of the image centertuple of positive integers, optional Coordinates of the center of the disk given in (row, column). If not given, it defaults to the center of the image. radiusfloat, optional Radius of the disk. If not given, it is set to 75% of the smallest image dimension. Returns outarray with shape image_shape Binary level set of the disk with the given radius and center. See also checkerboard_level_set
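A minimal usage sketch (the shape, center, and radius are arbitrary illustration values):

```python
from skimage.segmentation import disk_level_set

# Binary disk of radius 3 centered at (5, 5) inside a 10x10 array.
level_set = disk_level_set((10, 10), center=(5, 5), radius=3)
# level_set is 1 inside the disk and 0 outside.
```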
skimage.segmentation.expand_labels(label_image, distance=1) [source] Expand labels in label image by distance pixels without overlapping. Given a label image, expand_labels grows label regions (connected components) outwards by up to distance pixels without overflowing into neighboring regions. More specifically, each background pixel that is within Euclidean distance of <= distance pixels of a connected component is assigned the label of that connected component. Where multiple connected components are within distance pixels of a background pixel, the label value of the closest connected component will be assigned (see Notes for the case of multiple labels at equal distance). Parameters label_imagendarray of dtype int label image distancefloat Euclidean distance in pixels by which to grow the labels. Default is one. Returns enlarged_labelsndarray of dtype int Labeled array, where all connected regions have been enlarged See also skimage.measure.label(), skimage.segmentation.watershed(), skimage.morphology.dilation() Notes Where labels are spaced more than distance pixels apart, this is equivalent to a morphological dilation with a disc or hyperball of radius distance. However, in contrast to a morphological dilation, expand_labels will not expand a label region into a neighboring region. This implementation of expand_labels is derived from CellProfiler [1], where it is known as module “IdentifySecondaryObjects (Distance-N)” [2]. There is an important edge case when a pixel has the same distance to multiple regions, as it is not defined which region expands into that space. Here, the exact behavior depends on the upstream implementation of scipy.ndimage.distance_transform_edt. 
References 1 https://cellprofiler.org 2 https://github.com/CellProfiler/CellProfiler/blob/082930ea95add7b72243a4fa3d39ae5145995e9c/cellprofiler/modules/identifysecondaryobjects.py#L559 Examples >>> labels = np.array([0, 1, 0, 0, 0, 0, 2]) >>> expand_labels(labels, distance=1) array([1, 1, 1, 0, 0, 2, 2]) Labels will not overwrite each other: >>> expand_labels(labels, distance=3) array([1, 1, 1, 1, 2, 2, 2]) In case of ties, behavior is undefined, but currently resolves to the label closest to (0,) * ndim in lexicographical order. >>> labels_tied = np.array([0, 1, 0, 2, 0]) >>> expand_labels(labels_tied, 1) array([1, 1, 1, 2, 2]) >>> labels2d = np.array( ... [[0, 1, 0, 0], ... [2, 0, 0, 0], ... [0, 3, 0, 0]] ... ) >>> expand_labels(labels2d, 1) array([[2, 1, 1, 0], [2, 2, 0, 0], [2, 3, 3, 0]])
skimage.segmentation.felzenszwalb(image, scale=1, sigma=0.8, min_size=20, multichannel=True) [source] Computes Felzenszwalb’s efficient graph based image segmentation. Produces an oversegmentation of a multichannel (i.e. RGB) image using a fast, minimum spanning tree based clustering on the image grid. The parameter scale sets an observation level. Higher scale means fewer and larger segments. sigma is the diameter of a Gaussian kernel, used for smoothing the image prior to segmentation. The number of produced segments as well as their size can only be controlled indirectly through scale. Segment size within an image can vary greatly depending on local contrast. For RGB images, the algorithm uses the Euclidean distance between pixels in color space. Parameters image(width, height, 3) or (width, height) ndarray Input image. scalefloat Free parameter. Higher means larger clusters. sigmafloat Width (standard deviation) of Gaussian kernel used in preprocessing. min_sizeint Minimum component size. Enforced using postprocessing. multichannelbool, optional (default: True) Whether the last axis of the image is to be interpreted as multiple channels. A value of False, for a 3D image, is not currently supported. Returns segment_mask(width, height) ndarray Integer mask indicating segment labels. Notes The k parameter used in the original paper is renamed to scale here. References 1 Efficient graph-based image segmentation, Felzenszwalb, P.F. and Huttenlocher, D.P. International Journal of Computer Vision, 2004 Examples >>> from skimage.segmentation import felzenszwalb >>> from skimage.data import coffee >>> img = coffee() >>> segments = felzenszwalb(img, scale=3.0, sigma=0.95, min_size=5)
skimage.segmentation.find_boundaries(label_img, connectivity=1, mode='thick', background=0) [source] Return bool array where boundaries between labeled regions are True. Parameters label_imgarray of int or bool An array in which different regions are labeled with either different integers or boolean values. connectivityint in {1, …, label_img.ndim}, optional A pixel is considered a boundary pixel if any of its neighbors has a different label. connectivity controls which pixels are considered neighbors. A connectivity of 1 (default) means pixels sharing an edge (in 2D) or a face (in 3D) will be considered neighbors. A connectivity of label_img.ndim means pixels sharing a corner will be considered neighbors. modestring in {‘thick’, ‘inner’, ‘outer’, ‘subpixel’} How to mark the boundaries: thick: any pixel not completely surrounded by pixels of the same label (defined by connectivity) is marked as a boundary. This results in boundaries that are 2 pixels thick. inner: outline the pixels just inside of objects, leaving background pixels untouched. outer: outline pixels in the background around object boundaries. When two objects touch, their boundary is also marked. subpixel: return a doubled image, with pixels between the original pixels marked as boundary where appropriate. backgroundint, optional For modes ‘inner’ and ‘outer’, a definition of a background label is required. See mode for descriptions of these two. Returns boundariesarray of bool, same shape as label_img A bool image where True represents a boundary pixel. For mode equal to ‘subpixel’, boundaries.shape[i] is equal to 2 * label_img.shape[i] - 1 for all i (a pixel is inserted in between all other pairs of pixels). Examples >>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0], ... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0], ... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0], ... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0], ... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0], ... 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8) >>> find_boundaries(labels, mode='thick').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 1, 0, 1, 1, 0], [0, 1, 1, 0, 1, 1, 0, 1, 1, 0], [0, 1, 1, 1, 1, 1, 0, 1, 1, 0], [0, 0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> find_boundaries(labels, mode='inner').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 0, 1, 0, 0], [0, 0, 1, 0, 1, 1, 0, 1, 0, 0], [0, 0, 1, 1, 1, 1, 0, 1, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> find_boundaries(labels, mode='outer').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1, 0, 0, 1, 0], [0, 0, 1, 1, 1, 1, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> labels_small = labels[::2, ::3] >>> labels_small array([[0, 0, 0, 0], [0, 0, 5, 0], [0, 1, 5, 0], [0, 0, 5, 0], [0, 0, 0, 0]], dtype=uint8) >>> find_boundaries(labels_small, mode='subpixel').astype(np.uint8) array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 1, 1, 0], [0, 0, 0, 1, 0, 1, 0], [0, 1, 1, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0], [0, 1, 1, 1, 0, 1, 0], [0, 0, 0, 1, 0, 1, 0], [0, 0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> bool_image = np.array([[False, False, False, False, False], ... [False, False, False, False, False], ... [False, False, True, True, True], ... [False, False, True, True, True], ... [False, False, True, True, True]], ... 
dtype=bool) >>> find_boundaries(bool_image) array([[False, False, False, False, False], [False, False, True, True, True], [False, True, True, True, True], [False, True, True, False, False], [False, True, True, False, False]])
skimage.segmentation.flood(image, seed_point, *, selem=None, connectivity=None, tolerance=None) [source] Mask corresponding to a flood fill. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found. Parameters imagendarray An n-dimensional array. seed_pointtuple or int The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer. selemndarray, optional A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s, and have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected). connectivityint, optional A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None. tolerancefloat or int, optional If None (default), adjacent values must be strictly equal to the initial value of image at seed_point. This is fastest. If a value is given, a comparison will be done at every point and if within tolerance of the initial value will also be filled (inclusive). Returns maskndarray A Boolean array with the same shape as image is returned, with True values for areas connected to and equal (or within tolerance of) the seed point. All other values are False. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. This function returns just the mask representing the fill. If indices are desired rather than masks for memory reasons, the user can simply run numpy.nonzero on the result, save the indices, and discard this mask. 
Examples >>> from skimage.morphology import flood >>> image = np.zeros((4, 7), dtype=int) >>> image[1:3, 1:3] = 1 >>> image[3, 0] = 1 >>> image[1:3, 4:6] = 2 >>> image[3, 6] = 3 >>> image array([[0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 2, 2, 0], [0, 1, 1, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, with full connectivity (diagonals included): >>> mask = flood(image, (1, 1)) >>> image_flooded = image.copy() >>> image_flooded[mask] = 5 >>> image_flooded array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [5, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> mask = flood(image, (1, 1), connectivity=1) >>> image_flooded = image.copy() >>> image_flooded[mask] = 5 >>> image_flooded array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill with a tolerance: >>> mask = flood(image, (0, 0), tolerance=1) >>> image_flooded = image.copy() >>> image_flooded[mask] = 5 >>> image_flooded array([[5, 5, 5, 5, 5, 5, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 5, 5, 3]])
skimage.segmentation.flood_fill(image, seed_point, new_value, *, selem=None, connectivity=None, tolerance=None, in_place=False, inplace=None) [source] Perform flood filling on an image. Starting at a specific seed_point, connected points equal or within tolerance of the seed value are found, then set to new_value. Parameters imagendarray An n-dimensional array. seed_pointtuple or int The point in image used as the starting point for the flood fill. If the image is 1D, this point may be given as an integer. new_valueimage type New value to set the entire fill. This must be chosen in agreement with the dtype of image. selemndarray, optional A structuring element used to determine the neighborhood of each evaluated pixel. It must contain only 1’s and 0’s, have the same number of dimensions as image. If not given, all adjacent pixels are considered as part of the neighborhood (fully connected). connectivityint, optional A number used to determine the neighborhood of each evaluated pixel. Adjacent pixels whose squared distance from the center is less than or equal to connectivity are considered neighbors. Ignored if selem is not None. tolerancefloat or int, optional If None (default), adjacent values must be strictly equal to the value of image at seed_point to be filled. This is fastest. If a tolerance is provided, adjacent points with values within plus or minus tolerance from the seed point are filled (inclusive). in_placebool, optional If True, flood filling is applied to image in place. If False, the flood filled result is returned without modifying the input image (default). inplacebool, optional This parameter is deprecated and will be removed in version 0.19.0 in favor of in_place. If True, flood filling is applied to image inplace. If False, the flood filled result is returned without modifying the input image (default). 
Returns filledndarray An array with the same shape as image is returned, with values in areas connected to and equal (or within tolerance of) the seed point replaced with new_value. Notes The conceptual analogy of this operation is the ‘paint bucket’ tool in many raster graphics programs. Examples >>> from skimage.morphology import flood_fill >>> image = np.zeros((4, 7), dtype=int) >>> image[1:3, 1:3] = 1 >>> image[3, 0] = 1 >>> image[1:3, 4:6] = 2 >>> image[3, 6] = 3 >>> image array([[0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 2, 2, 0], [0, 1, 1, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, with full connectivity (diagonals included): >>> flood_fill(image, (1, 1), 5) array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [5, 0, 0, 0, 0, 0, 3]]) Fill connected ones with 5, excluding diagonal points (connectivity 1): >>> flood_fill(image, (1, 1), 5, connectivity=1) array([[0, 0, 0, 0, 0, 0, 0], [0, 5, 5, 0, 2, 2, 0], [0, 5, 5, 0, 2, 2, 0], [1, 0, 0, 0, 0, 0, 3]]) Fill with a tolerance: >>> flood_fill(image, (0, 0), 5, tolerance=1) array([[5, 5, 5, 5, 5, 5, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 2, 2, 5], [5, 5, 5, 5, 5, 5, 3]])
skimage.segmentation.inverse_gaussian_gradient(image, alpha=100.0, sigma=5.0) [source] Inverse of gradient magnitude. Computes the magnitude of the gradients in the image and then inverts the result in the range [0, 1]. Flat areas are assigned values close to 1, while areas close to borders are assigned values close to 0. This function, or a similar one defined by the user, should be applied over the image as a preprocessing step before calling morphological_geodesic_active_contour. Parameters image(M, N) or (L, M, N) array Grayscale image or volume. alphafloat, optional Controls the steepness of the inversion. A larger value will make the transition between the flat areas and border areas steeper in the resulting array. sigmafloat, optional Standard deviation of the Gaussian filter applied over the image. Returns gimage(M, N) or (L, M, N) array Preprocessed image (or volume) suitable for morphological_geodesic_active_contour.
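A rough sketch of the preprocessing step described above, on a toy image with arbitrary parameter values: flat regions map to values near 1, borders to values near 0.

```python
import numpy as np
from skimage.segmentation import inverse_gaussian_gradient

# Toy image: a bright square on a dark 40x40 background.
image = np.zeros((40, 40))
image[10:30, 10:30] = 1.0

gimage = inverse_gaussian_gradient(image, alpha=100.0, sigma=2.0)
# Far from the square's border the response is close to 1;
# on the border the large gradient magnitude pushes it toward 0.
```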
skimage.segmentation.join_segmentations(s1, s2) [source] Return the join of the two input segmentations. The join J of S1 and S2 is defined as the segmentation in which two voxels are in the same segment if and only if they are in the same segment in both S1 and S2. Parameters s1, s2numpy arrays s1 and s2 are label fields of the same shape. Returns jnumpy array The join segmentation of s1 and s2. Examples >>> from skimage.segmentation import join_segmentations >>> s1 = np.array([[0, 0, 1, 1], ... [0, 2, 1, 1], ... [2, 2, 2, 1]]) >>> s2 = np.array([[0, 1, 1, 0], ... [0, 1, 1, 0], ... [0, 1, 1, 1]]) >>> join_segmentations(s1, s2) array([[0, 1, 3, 2], [0, 5, 3, 2], [4, 5, 5, 3]])
skimage.segmentation.mark_boundaries(image, label_img, color=(1, 1, 0), outline_color=None, mode='outer', background_label=0) [source] Return image with boundaries between labeled regions highlighted. Parameters image(M, N[, 3]) array Grayscale or RGB image. label_img(M, N) array of int Label array where regions are marked by different integer values. colorlength-3 sequence, optional RGB color of boundaries in the output image. outline_colorlength-3 sequence, optional RGB color surrounding boundaries in the output image. If None, no outline is drawn. modestring in {‘thick’, ‘inner’, ‘outer’, ‘subpixel’}, optional The mode for finding boundaries. background_labelint, optional Which label to consider background (this is only useful for modes inner and outer). Returns marked(M, N, 3) array of float An image in which the boundaries between labels are superimposed on the original image. See also find_boundaries
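The entry above has no example; here is a small sketch with a hypothetical two-region label image (the shapes and values are illustration choices):

```python
import numpy as np
from skimage.segmentation import mark_boundaries

# Gray image with a vertical two-region labeling.
image = np.full((6, 6), 0.5)
labels = np.zeros((6, 6), dtype=int)
labels[:, 3:] = 1

# Boundary pixels between the regions are painted yellow in the RGB output.
marked = mark_boundaries(image, labels, color=(1, 1, 0))
```

The grayscale input is promoted to RGB, so marked has shape (6, 6, 3) with the boundary pixels set to the requested color.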
skimage.segmentation.morphological_chan_vese(image, iterations, init_level_set='checkerboard', smoothing=1, lambda1=1, lambda2=1, iter_callback=<function <lambda>>) [source] Morphological Active Contours without Edges (MorphACWE) Active contours without edges implemented with morphological operators. It can be used to segment objects in images and volumes without well defined borders. It is required that the inside of the object looks different on average than the outside (i.e., the inner area of the object should be darker or lighter than the outer area on average). Parameters image(M, N) or (L, M, N) array Grayscale image or volume to be segmented. iterationsuint Number of iterations to run init_level_setstr, (M, N) array, or (L, M, N) array Initial level set. If an array is given, it will be binarized and used as the initial level set. If a string is given, it defines the method to generate a reasonable initial level set with the shape of the image. Accepted values are ‘checkerboard’ and ‘circle’. See the documentation of checkerboard_level_set and circle_level_set respectively for details about how these level sets are created. smoothinguint, optional Number of times the smoothing operator is applied per iteration. Reasonable values are around 1-4. Larger values lead to smoother segmentations. lambda1float, optional Weight parameter for the outer region. If lambda1 is larger than lambda2, the outer region will contain a larger range of values than the inner region. lambda2float, optional Weight parameter for the inner region. If lambda2 is larger than lambda1, the inner region will contain a larger range of values than the outer region. iter_callbackfunction, optional If given, this function is called once per iteration with the current level set as the only argument. This is useful for debugging or for plotting intermediate results during the evolution. 
Returns out(M, N) or (L, M, N) array Final segmentation (i.e., the final level set) See also circle_level_set, checkerboard_level_set Notes This is a version of the Chan-Vese algorithm that uses morphological operators instead of solving a partial differential equation (PDE) for the evolution of the contour. The set of morphological operators used in this algorithm has been proved to be infinitesimally equivalent to the Chan-Vese PDE (see [1]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (it is not necessary to find the right time step for the evolution), and are computationally faster. The algorithm and its theoretical derivation are described in [1]. References 1(1,2) A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106
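A minimal sketch (not part of the official docs) of segmenting a bright square whose interior differs from the background on average, starting from a checkerboard level set:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Object whose inside is brighter than the outside on average
image = np.zeros((32, 32))
image[10:22, 10:22] = 1.0

# 20 iterations from a checkerboard initial level set
level_set = morphological_chan_vese(image, 20,
                                    init_level_set='checkerboard',
                                    smoothing=1)
```

The returned array is a binary level set of the same shape as the input.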
skimage.segmentation.morphological_geodesic_active_contour(gimage, iterations, init_level_set='circle', smoothing=1, threshold='auto', balloon=0, iter_callback=<function <lambda>>) [source] Morphological Geodesic Active Contours (MorphGAC). Geodesic active contours implemented with morphological operators. It can be used to segment objects with visible but noisy, cluttered, or broken borders. Parameters gimage(M, N) or (L, M, N) array Preprocessed image or volume to be segmented. This is very rarely the original image. Instead, this is usually a preprocessed version of the original image that enhances and highlights the borders (or other structures) of the object to segment. morphological_geodesic_active_contour will try to stop the contour evolution in areas where gimage is small. See morphsnakes.inverse_gaussian_gradient as an example function to perform this preprocessing. Note that the quality of morphological_geodesic_active_contour might greatly depend on this preprocessing. iterationsuint Number of iterations to run. init_level_setstr, (M, N) array, or (L, M, N) array Initial level set. If an array is given, it will be binarized and used as the initial level set. If a string is given, it defines the method to generate a reasonable initial level set with the shape of the image. Accepted values are ‘checkerboard’ and ‘circle’. See the documentation of checkerboard_level_set and circle_level_set respectively for details about how these level sets are created. smoothinguint, optional Number of times the smoothing operator is applied per iteration. Reasonable values are around 1-4. Larger values lead to smoother segmentations. thresholdfloat, optional Areas of the image with a value smaller than this threshold will be considered borders. The evolution of the contour will stop in these areas.
balloonfloat, optional Balloon force to guide the contour in non-informative areas of the image, i.e., areas where the gradient of the image is too small to push the contour towards a border. A negative value will shrink the contour, while a positive value will expand the contour in these areas. Setting this to zero will disable the balloon force. iter_callbackfunction, optional If given, this function is called once per iteration with the current level set as the only argument. This is useful for debugging or for plotting intermediate results during the evolution. Returns out(M, N) or (L, M, N) array Final segmentation (i.e., the final level set) See also inverse_gaussian_gradient, circle_level_set, checkerboard_level_set Notes This is a version of the Geodesic Active Contours (GAC) algorithm that uses morphological operators instead of solving partial differential equations (PDEs) for the evolution of the contour. The set of morphological operators used in this algorithm has been proved to be infinitesimally equivalent to the GAC PDEs (see [1]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (e.g., it is not necessary to find the right time step for the evolution), and are computationally faster. The algorithm and its theoretical derivation are described in [1]. References 1(1,2) A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106
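A minimal sketch (not from the official docs) of the usual two-step workflow: preprocess with inverse_gaussian_gradient, then shrink a circular initial contour onto the border with a negative balloon force:

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Object with a visible border; preprocessing makes border values small
image = np.zeros((40, 40))
image[10:30, 10:30] = 1.0
gimage = inverse_gaussian_gradient(image)

# balloon=-1 shrinks the contour where the preprocessed image
# carries no border information
level_set = morphological_geodesic_active_contour(
    gimage, 40, init_level_set='circle', smoothing=1, balloon=-1)
```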
skimage.segmentation.quickshift(image, ratio=1.0, kernel_size=5, max_dist=10, return_tree=False, sigma=0, convert2lab=True, random_seed=42) [source] Segments image using quickshift clustering in Color-(x,y) space. Produces an oversegmentation of the image using the quickshift mode-seeking algorithm. Parameters image(width, height, channels) ndarray Input image. ratiofloat, optional, between 0 and 1 Balances color-space proximity and image-space proximity. Higher values give more weight to color-space. kernel_sizefloat, optional Width of Gaussian kernel used in smoothing the sample density. Higher means fewer clusters. max_distfloat, optional Cut-off point for data distances. Higher means fewer clusters. return_treebool, optional Whether to return the full segmentation hierarchy tree and distances. sigmafloat, optional Width for Gaussian smoothing as preprocessing. Zero means no smoothing. convert2labbool, optional Whether the input should be converted to Lab colorspace prior to segmentation. For this purpose, the input is assumed to be RGB. random_seedint, optional Random seed used for breaking ties. Returns segment_mask(width, height) ndarray Integer mask indicating segment labels. Notes The authors advocate converting the image to Lab color space prior to segmentation, though this is not strictly necessary. For this to work, the image must be given in RGB format. References 1 Quick shift and kernel methods for mode seeking, Vedaldi, A. and Soatto, S. European Conference on Computer Vision, 2008
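A minimal sketch (not part of the official docs) of oversegmenting a small random RGB image; the parameter values here are illustrative, not recommendations:

```python
import numpy as np
from skimage.segmentation import quickshift

# Random RGB image in [0, 1] (quickshift expects a color image)
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))

# Oversegment; smaller kernel_size and max_dist give more segments
segments = quickshift(image, ratio=0.5, kernel_size=3, max_dist=6)
```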
skimage.segmentation.random_walker(data, labels, beta=130, mode='cg_j', tol=0.001, copy=True, multichannel=False, return_full_prob=False, spacing=None, *, prob_tol=0.001) [source] Random walker algorithm for segmentation from markers. The random walker algorithm is implemented for gray-level or multichannel images. Parameters dataarray_like Image to be segmented in phases. Gray-level data can be two- or three-dimensional; multichannel data can be three- or four- dimensional (multichannel=True) with the highest dimension denoting channels. Data spacing is assumed isotropic unless the spacing keyword argument is used. labelsarray of ints, of same shape as data without channels dimension Array of seed markers labeled with different positive integers for different phases. Zero-labeled pixels are unlabeled pixels. Negative labels correspond to inactive pixels that are not taken into account (they are removed from the graph). If labels are not consecutive integers, the labels array will be transformed so that labels are consecutive. In the multichannel case, labels should have the same shape as a single channel of data, i.e. without the final dimension denoting channels. betafloat, optional Penalization coefficient for the random walker motion (the greater beta, the more difficult the diffusion). modestring, available options {‘cg’, ‘cg_j’, ‘cg_mg’, ‘bf’} Mode for solving the linear system in the random walker algorithm. ‘bf’ (brute force): an LU factorization of the Laplacian is computed. This is fast for small images (<1024x1024), but very slow and memory-intensive for large images (e.g., 3-D volumes). ‘cg’ (conjugate gradient): the linear system is solved iteratively using the Conjugate Gradient method from scipy.sparse.linalg. This is less memory-consuming than the brute force method for large images, but it is quite slow. ‘cg_j’ (conjugate gradient with Jacobi preconditioner): the Jacobi preconditioner is applied during the Conjugate Gradient method iterations.
This may accelerate the convergence of the ‘cg’ method. ‘cg_mg’ (conjugate gradient with multigrid preconditioner): a preconditioner is computed using a multigrid solver, then the solution is computed with the Conjugate Gradient method. This mode requires that the pyamg module is installed. tolfloat, optional Tolerance to achieve when solving the linear system using the conjugate gradient based modes (‘cg’, ‘cg_j’ and ‘cg_mg’). copybool, optional If copy is False, the labels array will be overwritten with the result of the segmentation. Use copy=False if you want to save on memory. multichannelbool, optional If True, input data is parsed as multichannel data (see ‘data’ above for proper input format in this case). return_full_probbool, optional If True, the probability that a pixel belongs to each of the labels will be returned, instead of only the most likely label. spacingiterable of floats, optional Spacing between voxels in each spatial dimension. If None, then the spacing between pixels/voxels in each dimension is assumed 1. prob_tolfloat, optional Tolerance on the resulting probability to be in the interval [0, 1]. If the tolerance is not satisfied, a warning is displayed. Returns outputndarray If return_full_prob is False, array of ints of same shape and data type as labels, in which each pixel has been labeled according to the marker that reached the pixel first by anisotropic diffusion. If return_full_prob is True, array of floats of shape (nlabels, labels.shape). output[label_nb, i, j] is the probability that label label_nb reaches the pixel (i, j) first. See also skimage.morphology.watershed watershed segmentation A segmentation algorithm based on mathematical morphology and “flooding” of regions from markers. Notes Multichannel inputs are scaled with all channel data combined. Ensure all channels are separately normalized prior to running this algorithm. 
The spacing argument is specifically for anisotropic datasets, where data points are spaced differently in one or more spatial dimensions. Anisotropic data is commonly encountered in medical imaging. The algorithm was first proposed in [1]. The algorithm solves the diffusion equation at infinite times for sources placed on markers of each phase in turn. A pixel is labeled with the phase that has the greatest probability to diffuse first to the pixel. The diffusion equation is solved by minimizing x.T L x for each phase, where L is the Laplacian of the weighted graph of the image, and x is the probability that a marker of the given phase arrives first at a pixel by diffusion (x=1 on markers of the phase, x=0 on the other markers, and the other coefficients are looked for). Each pixel is attributed the label for which it has a maximal value of x. The Laplacian L of the image is defined as: L_ii = d_i, the number of neighbors of pixel i (the degree of i); L_ij = -w_ij if i and j are adjacent pixels. The weight w_ij is a decreasing function of the norm of the local gradient. This ensures that diffusion is easier between pixels of similar values. When the Laplacian is decomposed into blocks of marked and unmarked pixels:

    L = | M   B.T |
        | B   A   |

with the first indices corresponding to marked pixels, and the remaining indices to unmarked pixels, minimizing x.T L x for one phase amounts to solving:

    A x = - B x_m

where x_m = 1 on markers of the given phase, and 0 on other markers. This linear system is solved in the algorithm using a direct method for small images, and an iterative method for larger images. References 1 Leo Grady, Random walks for image segmentation, IEEE Trans Pattern Anal Mach Intell. 2006 Nov;28(11):1768-83. DOI:10.1109/TPAMI.2006.233.
Examples >>> np.random.seed(0) >>> a = np.zeros((10, 10)) + 0.2 * np.random.rand(10, 10) >>> a[5:8, 5:8] += 1 >>> b = np.zeros_like(a, dtype=np.int32) >>> b[3, 3] = 1 # Marker for first phase >>> b[6, 6] = 2 # Marker for second phase >>> random_walker(a, b) array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 2, 2, 2, 1, 1], [1, 1, 1, 1, 1, 2, 2, 2, 1, 1], [1, 1, 1, 1, 1, 2, 2, 2, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)
skimage.segmentation.relabel_sequential(label_field, offset=1) [source] Relabel arbitrary labels to {offset, … offset + number_of_labels}. This function also returns the forward map (mapping the original labels to the reduced labels) and the inverse map (mapping the reduced labels back to the original ones). Parameters label_fieldnumpy array of int, arbitrary shape An array of labels, which must be non-negative integers. offsetint, optional The returned labels will start at offset, which should be strictly positive. Returns relabelednumpy array of int, same shape as label_field The input label field with labels mapped to {offset, …, number_of_labels + offset - 1}. The data type will be the same as label_field, except when offset + number_of_labels causes overflow of the current data type. forward_mapArrayMap The map from the original label space to the returned label space. Can be used to re-apply the same mapping. See examples for usage. The output data type will be the same as relabeled. inverse_mapArrayMap The map from the new label space to the original space. This can be used to reconstruct the original label field from the relabeled one. The output data type will be the same as label_field. Notes The label 0 is assumed to denote the background and is never remapped. The forward map can be extremely big for some inputs, since its length is given by the maximum of the label field. However, in most situations, label_field.max() is much smaller than label_field.size, and in these cases the forward map is guaranteed to be smaller than either the input or output images.
Examples >>> from skimage.segmentation import relabel_sequential >>> label_field = np.array([1, 1, 5, 5, 8, 99, 42]) >>> relab, fw, inv = relabel_sequential(label_field) >>> relab array([1, 1, 2, 2, 3, 5, 4]) >>> print(fw) ArrayMap: 1 → 1 5 → 2 8 → 3 42 → 4 99 → 5 >>> np.array(fw) array([0, 1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5]) >>> np.array(inv) array([ 0, 1, 5, 8, 42, 99]) >>> (fw[label_field] == relab).all() True >>> (inv[relab] == label_field).all() True >>> relab, fw, inv = relabel_sequential(label_field, offset=5) >>> relab array([5, 5, 6, 6, 7, 9, 8])
skimage.segmentation.slic(image, n_segments=100, compactness=10.0, max_iter=10, sigma=0, spacing=None, multichannel=True, convert2lab=None, enforce_connectivity=True, min_size_factor=0.5, max_size_factor=3, slic_zero=False, start_label=None, mask=None) [source] Segments image using k-means clustering in Color-(x,y,z) space. Parameters image2D, 3D or 4D ndarray Input image, which can be 2D or 3D, and grayscale or multichannel (see multichannel parameter). The input image must either be NaN-free or the NaNs must be masked out. n_segmentsint, optional The (approximate) number of labels in the segmented output image. compactnessfloat, optional Balances color proximity and space proximity. Higher values give more weight to space proximity, making superpixel shapes more square/cubic. In SLICO mode, this is the initial compactness. This parameter depends strongly on image contrast and on the shapes of objects in the image. We recommend exploring possible values on a log scale, e.g., 0.01, 0.1, 1, 10, 100, before refining around a chosen value. max_iterint, optional Maximum number of iterations of k-means. sigmafloat or (3,) array-like of floats, optional Width of Gaussian smoothing kernel for pre-processing for each dimension of the image. The same sigma is applied to each dimension in case of a scalar value. Zero means no smoothing. Note that sigma is automatically scaled if it is scalar and a manual voxel spacing is provided (see Notes section). spacing(3,) array-like of floats, optional The voxel spacing along each image dimension. By default, slic assumes uniform spacing (same voxel resolution along z, y and x). This parameter controls the weights of the distances along z, y, and x during k-means clustering. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. convert2labbool, optional Whether the input should be converted to Lab colorspace prior to segmentation. The input image must be RGB.
Highly recommended. This option defaults to True when multichannel=True and image.shape[-1] == 3. enforce_connectivitybool, optional Whether the generated segments are connected or not min_size_factorfloat, optional Proportion of the minimum segment size to be removed with respect to the supposed segment size `depth*width*height/n_segments` max_size_factorfloat, optional Proportion of the maximum connected segment size. A value of 3 works in most cases. slic_zerobool, optional Run SLIC-zero, the zero-parameter mode of SLIC. [2] start_label: int, optional The labels’ index start. Should be 0 or 1. New in version 0.17: start_label was introduced in 0.17 mask2D ndarray, optional If provided, superpixels are computed only where mask is True, and seed points are homogeneously distributed over the mask using a K-means clustering strategy. New in version 0.17: mask was introduced in 0.17 Returns labels2D or 3D array Integer mask indicating segment labels. Raises ValueError If convert2lab is set to True but the last array dimension is not of length 3. ValueError If start_label is not 0 or 1. Notes If sigma > 0, the image is smoothed using a Gaussian kernel prior to segmentation. If sigma is scalar and spacing is provided, the kernel width is divided along each dimension by the spacing. For example, if sigma=1 and spacing=[5, 1, 1], the effective sigma is [0.2, 1, 1]. This ensures sensible smoothing for anisotropic images. The image is rescaled to be in [0, 1] prior to processing. Images of shape (M, N, 3) are interpreted as 2D RGB images by default. To interpret them as 3D with the last dimension having length 3, use multichannel=False. start_label is introduced to handle the issue [4]. Label indexing starting at 0 will be deprecated in future versions. If mask is not None, label indexing starts at 1 and the masked area is set to 0.
References 1 Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk, SLIC Superpixels Compared to State-of-the-art Superpixel Methods, TPAMI, May 2012. DOI:10.1109/TPAMI.2012.120 2 https://www.epfl.ch/labs/ivrl/research/slic-superpixels/#SLICO 3 Irving, Benjamin. “maskSLIC: regional superpixel generation with application to local pathology characterisation in medical images.”, 2016, arXiv:1606.09518 4 https://github.com/scikit-image/scikit-image/issues/3722 Examples >>> from skimage.segmentation import slic >>> from skimage.data import astronaut >>> img = astronaut() >>> segments = slic(img, n_segments=100, compactness=10) Increasing the compactness parameter yields more square regions: >>> segments = slic(img, n_segments=100, compactness=20)
skimage.segmentation.watershed(image, markers=None, connectivity=1, offset=None, mask=None, compactness=0, watershed_line=False) [source] Find watershed basins in image flooded from given markers. Parameters imagendarray (2-D, 3-D, …) of integers Data array where the lowest value points are labeled first. markersint, or ndarray of int, same shape as image, optional The desired number of markers, or an array marking the basins with the values to be assigned in the label matrix. Zero means not a marker. If None (no markers given), the local minima of the image are used as markers. connectivityndarray, optional An array with the same number of dimensions as image whose non-zero elements indicate neighbors for connection. Following the scipy convention, default is a one-connected array of the dimension of the image. offsetarray_like of shape image.ndim, optional offset of the connectivity (one offset per dimension) maskndarray of bools or 0s and 1s, optional Array of same shape as image. Only points at which mask == True will be labeled. compactnessfloat, optional Use compact watershed [3] with given compactness parameter. Higher values result in more regularly-shaped watershed basins. watershed_linebool, optional If watershed_line is True, a one-pixel wide line separates the regions obtained by the watershed algorithm. The line has the label 0. Returns outndarray A labeled matrix of the same type and shape as markers See also skimage.segmentation.random_walker random walker segmentation A segmentation algorithm based on anisotropic diffusion, usually slower than the watershed but with good results on noisy data and boundaries with holes. Notes This function implements a watershed algorithm [1] [2] that apportions pixels into marked basins. The algorithm uses a priority queue to hold the pixels with the metric for the priority queue being pixel value, then the time of entry into the queue - this settles ties in favor of the closest marker. 
Some ideas taken from Soille, “Automated Basin Delineation from Digital Elevation Models Using Mathematical Morphology”, Signal Processing 20 (1990) 171-182 The most important insight in the paper is that entry time onto the queue solves two problems: a pixel should be assigned to the neighbor with the largest gradient or, if there is no gradient, pixels on a plateau should be split between markers on opposite sides. This implementation converts all arguments to specific, lowest common denominator types, then passes these to a C algorithm. Markers can be determined manually, or automatically using for example the local minima of the gradient of the image, or the local maxima of the distance function to the background for separating overlapping objects (see example). References 1 https://en.wikipedia.org/wiki/Watershed_%28image_processing%29 2 http://cmm.ensmp.fr/~beucher/wtshed.html 3 Peer Neubert & Peter Protzel (2014). Compact Watershed and Preemptive SLIC: On Improving Trade-offs of Superpixel Segmentation Algorithms. ICPR 2014, pp 996-1001. DOI:10.1109/ICPR.2014.181 https://www.tu-chemnitz.de/etit/proaut/publications/cws_pSLIC_ICPR.pdf Examples The watershed algorithm is useful to separate overlapping objects. We first generate an initial image with two overlapping circles: >>> x, y = np.indices((80, 80)) >>> x1, y1, x2, y2 = 28, 28, 44, 52 >>> r1, r2 = 16, 20 >>> mask_circle1 = (x - x1)**2 + (y - y1)**2 < r1**2 >>> mask_circle2 = (x - x2)**2 + (y - y2)**2 < r2**2 >>> image = np.logical_or(mask_circle1, mask_circle2) Next, we want to separate the two circles. We generate markers at the maxima of the distance to the background: >>> from scipy import ndimage as ndi >>> distance = ndi.distance_transform_edt(image) >>> from skimage.feature import peak_local_max >>> local_maxi = peak_local_max(distance, labels=image, ... footprint=np.ones((3, 3)), ... 
indices=False) >>> markers = ndi.label(local_maxi)[0] Finally, we run the watershed on the image and markers: >>> labels = watershed(-distance, markers, mask=image) The algorithm works also for 3-D images, and can be used for example to separate overlapping spheres.
skimage Image Processing for Python scikit-image (a.k.a. skimage) is a collection of algorithms for image processing and computer vision. The main package of skimage only provides a few utilities for converting between image data types; for most features, you need to import one of the following subpackages: Subpackages color Color space conversion. data Test images and example data. draw Drawing primitives (lines, text, etc.) that operate on NumPy arrays. exposure Image intensity adjustment, e.g., histogram equalization, etc. feature Feature detection and extraction, e.g., texture analysis, corners, etc. filters Sharpening, edge finding, rank filters, thresholding, etc. graph Graph-theoretic operations, e.g., shortest paths. io Reading, saving, and displaying images and video. measure Measurement of image properties, e.g., region properties and contours. metrics Metrics corresponding to images, e.g. distance metrics, similarity, etc. morphology Morphological operations, e.g., opening or skeletonization. restoration Restoration algorithms, e.g., deconvolution algorithms, denoising, etc. segmentation Partitioning an image into multiple regions. transform Geometric and other transforms, e.g., rotation or the Radon transform. util Generic utilities. viewer A simple graphical user interface for visualizing results and exploring parameters. Utility Functions img_as_float Convert an image to floating point format, with values in [0, 1]. It is similar to img_as_float64, but will not convert lower-precision floating point arrays to float64. img_as_float32 Convert an image to single-precision (32-bit) floating point format, with values in [0, 1]. img_as_float64 Convert an image to double-precision (64-bit) floating point format, with values in [0, 1]. img_as_uint Convert an image to unsigned integer format, with values in [0, 65535]. img_as_int Convert an image to signed integer format, with values in [-32768, 32767].
img_as_ubyte Convert an image to unsigned byte format, with values in [0, 255]. img_as_bool Convert an image to boolean format, with values either True or False. dtype_limits Return intensity limits, i.e. (min, max) tuple, of the image’s dtype. skimage.dtype_limits(image[, clip_negative]) Return intensity limits, i.e. skimage.ensure_python_version(min_version) skimage.img_as_bool(image[, force_copy]) Convert an image to boolean format. skimage.img_as_float(image[, force_copy]) Convert an image to floating point format. skimage.img_as_float32(image[, force_copy]) Convert an image to single-precision (32-bit) floating point format. skimage.img_as_float64(image[, force_copy]) Convert an image to double-precision (64-bit) floating point format. skimage.img_as_int(image[, force_copy]) Convert an image to 16-bit signed integer format. skimage.img_as_ubyte(image[, force_copy]) Convert an image to 8-bit unsigned integer format. skimage.img_as_uint(image[, force_copy]) Convert an image to 16-bit unsigned integer format. skimage.lookfor(what) Do a keyword search on scikit-image docstrings. skimage.data Standard test images. skimage.util dtype_limits skimage.dtype_limits(image, clip_negative=False) [source] Return intensity limits, i.e. (min, max) tuple, of the image’s dtype. Parameters imagendarray Input image. clip_negativebool, optional If True, clip the negative range (i.e. return 0 for min intensity) even if the image dtype allows negative values. Returns imin, imaxtuple Lower and upper intensity limits. ensure_python_version skimage.ensure_python_version(min_version) [source] img_as_bool skimage.img_as_bool(image, force_copy=False) [source] Convert an image to boolean format. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of bool (bool_) Output image. Notes The upper half of the input dtype’s positive range is True, and the lower half is False. 
All negative values (if present) are False. img_as_float skimage.img_as_float(image, force_copy=False) [source] Convert an image to floating point format. This function is similar to img_as_float64, but will not convert lower-precision floating point arrays to float64. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of float Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. Examples using skimage.img_as_float Tinting gray-scale images 3D adaptive histogram equalization Phase Unwrapping Finding local maxima Use rolling-ball algorithm for estimating background intensity Explore 3D images (of cells) img_as_float32 skimage.img_as_float32(image, force_copy=False) [source] Convert an image to single-precision (32-bit) floating point format. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of float32 Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. img_as_float64 skimage.img_as_float64(image, force_copy=False) [source] Convert an image to double-precision (64-bit) floating point format. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of float64 Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. 
If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. img_as_int skimage.img_as_int(image, force_copy=False) [source] Convert an image to 16-bit signed integer format. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of int16 Output image. Notes The values are scaled between -32768 and 32767. If the input data-type is positive-only (e.g., uint8), then the output image will still only have positive values. img_as_ubyte skimage.img_as_ubyte(image, force_copy=False) [source] Convert an image to 8-bit unsigned integer format. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of ubyte (uint8) Output image. Notes Negative input values will be clipped. Positive values are scaled between 0 and 255. Examples using skimage.img_as_ubyte Local Histogram Equalization Entropy Markers for watershed transform Segment human cells (in mitosis) Rank filters img_as_uint skimage.img_as_uint(image, force_copy=False) [source] Convert an image to 16-bit unsigned integer format. Parameters imagendarray Input image. force_copybool, optional Force a copy of the data, irrespective of its current dtype. Returns outndarray of uint16 Output image. Notes Negative input values will be clipped. Positive values are scaled between 0 and 65535. lookfor skimage.lookfor(what) [source] Do a keyword search on scikit-image docstrings. Parameters whatstr Words to look for. Examples >>> import skimage >>> skimage.lookfor('regular_grid') Search results for 'regular_grid' --------------------------------- skimage.lookfor Do a keyword search on scikit-image docstrings. skimage.util.regular_grid Find `n_points` regularly spaced along `ar_shape`.
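A minimal sketch (not from the official docs) showing dtype_limits and a uint8 → float → uint8 round trip with the conversion utilities above:

```python
import numpy as np
from skimage import dtype_limits, img_as_float, img_as_ubyte

image_u8 = np.array([[0, 128, 255]], dtype=np.uint8)

# uint8 intensity range is [0, 255]
limits = dtype_limits(image_u8)

# uint8 -> float in [0, 1] -> back to uint8; the round trip is exact
image_f = img_as_float(image_u8)
image_back = img_as_ubyte(image_f)
```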
Module: transform skimage.transform.downscale_local_mean(…) Down-sample N-dimensional image by local averaging. skimage.transform.estimate_transform(ttype, …) Estimate 2D geometric transformation parameters. skimage.transform.frt2(a) Compute the 2-dimensional finite radon transform (FRT) for an n x n integer array. skimage.transform.hough_circle(image, radius) Perform a circular Hough transform. skimage.transform.hough_circle_peaks(…[, …]) Return peaks in a circle Hough transform. skimage.transform.hough_ellipse(image[, …]) Perform an elliptical Hough transform. skimage.transform.hough_line(image[, theta]) Perform a straight line Hough transform. skimage.transform.hough_line_peaks(hspace, …) Return peaks in a straight line Hough transform. skimage.transform.ifrt2(a) Compute the 2-dimensional inverse finite radon transform (iFRT) for an (n+1) x n integer array. skimage.transform.integral_image(image) Integral image / summed area table. skimage.transform.integrate(ii, start, end) Use an integral image to integrate over a given window. skimage.transform.iradon(radon_image[, …]) Inverse radon transform. skimage.transform.iradon_sart(radon_image[, …]) Inverse radon transform. skimage.transform.matrix_transform(coords, …) Apply 2D matrix transform. skimage.transform.order_angles_golden_ratio(theta) Order angles to reduce the amount of correlated information in subsequent projections. skimage.transform.probabilistic_hough_line(image) Return lines from a progressive probabilistic line Hough transform. skimage.transform.pyramid_expand(image[, …]) Upsample and then smooth image. skimage.transform.pyramid_gaussian(image[, …]) Yield images of the Gaussian pyramid formed by the input image. skimage.transform.pyramid_laplacian(image[, …]) Yield images of the laplacian pyramid formed by the input image. skimage.transform.pyramid_reduce(image[, …]) Smooth and then downsample image. 
skimage.transform.radon(image[, theta, …]) Calculates the radon transform of an image given specified projection angles. skimage.transform.rescale(image, scale[, …]) Scale image by a certain factor. skimage.transform.resize(image, output_shape) Resize image to match a certain size. skimage.transform.rotate(image, angle[, …]) Rotate image by a certain angle around its center. skimage.transform.swirl(image[, center, …]) Perform a swirl transformation. skimage.transform.warp(image, inverse_map[, …]) Warp an image according to a given coordinate transformation. skimage.transform.warp_coords(coord_map, shape) Build the source coordinates for the output of a 2-D image warp. skimage.transform.warp_polar(image[, …]) Remap image to polar or log-polar coordinates space. skimage.transform.AffineTransform([matrix, …]) 2D affine transformation. skimage.transform.EssentialMatrixTransform([…]) Essential matrix transformation. skimage.transform.EuclideanTransform([…]) 2D Euclidean transformation. skimage.transform.FundamentalMatrixTransform([…]) Fundamental matrix transformation. skimage.transform.PiecewiseAffineTransform() 2D piecewise affine transformation. skimage.transform.PolynomialTransform([params]) 2D polynomial transformation. skimage.transform.ProjectiveTransform([matrix]) Projective transformation. skimage.transform.SimilarityTransform([…]) 2D similarity transformation. downscale_local_mean skimage.transform.downscale_local_mean(image, factors, cval=0, clip=True) [source] Down-sample N-dimensional image by local averaging. The image is padded with cval if it is not perfectly divisible by the integer factors. In contrast to interpolation in skimage.transform.resize and skimage.transform.rescale this function calculates the local mean of elements in each block of size factors in the input image. Parameters imagendarray N-dimensional input image. factorsarray_like Array containing down-sampling integer factor along each axis. 
cvalfloat, optional Constant padding value if image is not perfectly divisible by the integer factors. clipbool, optional Unused, but kept here for API consistency with the other transforms in this module. (The local mean will never fall outside the range of values in the input image, assuming the provided cval also falls within that range.) Returns imagendarray Down-sampled image with same number of dimensions as input image. For integer inputs, the output dtype will be float64. See numpy.mean() for details. Examples >>> a = np.arange(15).reshape(3, 5) >>> a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> downscale_local_mean(a, (2, 3)) array([[3.5, 4. ], [5.5, 4.5]]) estimate_transform skimage.transform.estimate_transform(ttype, src, dst, **kwargs) [source] Estimate 2D geometric transformation parameters. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters ttype{‘euclidean’, ‘similarity’, ‘affine’, ‘piecewise-affine’, ‘projective’, ‘polynomial’} Type of transform. kwargsarray or int Function parameters (src, dst, n, angle): NAME / TTYPE FUNCTION PARAMETERS 'euclidean' `src`, `dst` 'similarity' `src`, `dst` 'affine' `src`, `dst` 'piecewise-affine' `src`, `dst` 'projective' `src`, `dst` 'polynomial' `src`, `dst`, `order` (polynomial order, default order is 2) Also see examples below. Returns tformGeometricTransform Transform object containing the transformation parameters and providing access to forward and inverse transformation functions. 
Examples >>> import numpy as np >>> from skimage import transform >>> # estimate transformation parameters >>> src = np.array([0, 0, 10, 10]).reshape((2, 2)) >>> dst = np.array([12, 14, 1, -20]).reshape((2, 2)) >>> tform = transform.estimate_transform('similarity', src, dst) >>> np.allclose(tform.inverse(tform(src)), src) True >>> # warp image using the estimated transformation >>> from skimage import data >>> image = data.camera() >>> transform.warp(image, inverse_map=tform.inverse) >>> # create transformation with explicit parameters >>> tform2 = transform.SimilarityTransform(scale=1.1, rotation=1, ... translation=(10, 20)) >>> # unite transformations, applied in order from left to right >>> tform3 = tform + tform2 >>> np.allclose(tform3(src), tform2(tform(src))) True frt2 skimage.transform.frt2(a) [source] Compute the 2-dimensional finite radon transform (FRT) for an n x n integer array. Parameters aarray_like A 2-D square n x n integer array. Returns FRT2-D ndarray Finite Radon Transform array of (n+1) x n integer coefficients. See also ifrt2 The two-dimensional inverse FRT. Notes The FRT has a unique inverse if and only if n is prime. [FRT] The idea for this algorithm is due to Vlad Negnevitski. References FRT A. Kingston and I. Svalbe, “Projective transforms on periodic discrete image arrays,” in P. Hawkes (Ed), Advances in Imaging and Electron Physics, 139 (2006) Examples Generate a test image: Use a prime number for the array dimensions >>> SIZE = 59 >>> img = np.tri(SIZE, dtype=np.int32) Apply the Finite Radon Transform: >>> f = frt2(img) hough_circle skimage.transform.hough_circle(image, radius, normalize=True, full_output=False) [source] Perform a circular Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. radiusscalar or sequence of scalars Radii at which to compute the Hough transform. Floats are converted to integers. 
normalizeboolean, optional (default True) Normalize the accumulator with the number of pixels used to draw the radius. full_outputboolean, optional (default False) Extend the output size by twice the largest radius in order to detect centers outside the input picture. Returns H3D ndarray (radius index, (M + 2R, N + 2R) ndarray) Hough transform accumulator for each radius. R designates the larger radius if full_output is True. Otherwise, R = 0. Examples >>> from skimage.transform import hough_circle >>> from skimage.draw import circle_perimeter >>> img = np.zeros((100, 100), dtype=bool) >>> rr, cc = circle_perimeter(25, 35, 23) >>> img[rr, cc] = 1 >>> try_radii = np.arange(5, 50) >>> res = hough_circle(img, try_radii) >>> ridx, r, c = np.unravel_index(np.argmax(res), res.shape) >>> r, c, try_radii[ridx] (25, 35, 23) hough_circle_peaks skimage.transform.hough_circle_peaks(hspaces, radii, min_xdistance=1, min_ydistance=1, threshold=None, num_peaks=inf, total_num_peaks=inf, normalize=False) [source] Return peaks in a circle Hough transform. Identifies most prominent circles separated by certain distances in given Hough spaces. Non-maximum suppression with different sizes is applied separately in the first and second dimension of the Hough space to identify peaks. For circles with different radius but close in distance, only the one with highest peak is kept. Parameters hspaces(N, M) array Hough spaces returned by the hough_circle function. radii(M,) array Radii corresponding to Hough spaces. min_xdistanceint, optional Minimum distance separating centers in the x dimension. min_ydistanceint, optional Minimum distance separating centers in the y dimension. thresholdfloat, optional Minimum intensity of peaks in each Hough space. Default is 0.5 * max(hspace). num_peaksint, optional Maximum number of peaks in each Hough space. When the number of peaks exceeds num_peaks, only num_peaks coordinates based on peak intensity are considered for the corresponding radius. 
total_num_peaksint, optional Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks coordinates based on peak intensity. normalizebool, optional If True, normalize the accumulator by the radius to sort the prominent peaks. Returns accum, cx, cy, radtuple of array Peak values in Hough space, x and y center coordinates and radii. Notes Circles with bigger radius have higher peaks in Hough space. If larger circles are preferred over smaller ones, normalize should be False. Otherwise, circles will be returned in the order of decreasing voting number. Examples >>> from skimage import transform, draw >>> img = np.zeros((120, 100), dtype=int) >>> radius, x_0, y_0 = (20, 99, 50) >>> y, x = draw.circle_perimeter(y_0, x_0, radius) >>> img[x, y] = 1 >>> hspaces = transform.hough_circle(img, radius) >>> accum, cx, cy, rad = hough_circle_peaks(hspaces, [radius,]) hough_ellipse skimage.transform.hough_ellipse(image, threshold=4, accuracy=1, min_size=4, max_size=None) [source] Perform an elliptical Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. thresholdint, optional Accumulator threshold value. accuracydouble, optional Bin size on the minor axis used in the accumulator. min_sizeint, optional Minimal major axis length. max_sizeint, optional Maximal minor axis length. If None, the value is set to the half of the smaller image dimension. Returns resultndarray with fields [(accumulator, yc, xc, a, b, orientation)]. Where (yc, xc) is the center, (a, b) the major and minor axes, respectively. The orientation value follows skimage.draw.ellipse_perimeter convention. Notes The accuracy must be chosen to produce a peak in the accumulator distribution. In other words, a flat accumulator distribution with low values may be caused by a too low bin size. References 1 Xie, Yonghong, and Qiang Ji. “A new efficient ellipse detection method.” Pattern Recognition, 2002. Proceedings. 
16th International Conference on. Vol. 2. IEEE, 2002 Examples >>> from skimage.transform import hough_ellipse >>> from skimage.draw import ellipse_perimeter >>> img = np.zeros((25, 25), dtype=np.uint8) >>> rr, cc = ellipse_perimeter(10, 10, 6, 8) >>> img[cc, rr] = 1 >>> result = hough_ellipse(img, threshold=8) >>> result.tolist() [(10, 10.0, 10.0, 8.0, 6.0, 0.0)] hough_line skimage.transform.hough_line(image, theta=None) [source] Perform a straight line Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. theta1D ndarray of double, optional Angles at which to compute the transform, in radians. Defaults to a vector of 180 angles evenly spaced from -pi/2 to pi/2. Returns hspace2-D ndarray of uint64 Hough transform accumulator. anglesndarray Angles at which the transform is computed, in radians. distancesndarray Distance values. Notes The origin is the top left corner of the original image. X and Y axis are horizontal and vertical edges respectively. The distance is the minimal algebraic distance from the origin to the detected line. The angle accuracy can be improved by decreasing the step size in the theta array. Examples Generate a test image: >>> img = np.zeros((100, 150), dtype=bool) >>> img[30, :] = 1 >>> img[:, 65] = 1 >>> img[35:45, 35:50] = 1 >>> for i in range(90): ... 
img[i, i] = 1 >>> img += np.random.random(img.shape) > 0.95 Apply the Hough transform: >>> out, angles, d = hough_line(img) import numpy as np import matplotlib.pyplot as plt from skimage.transform import hough_line from skimage.draw import line img = np.zeros((100, 150), dtype=bool) img[30, :] = 1 img[:, 65] = 1 img[35:45, 35:50] = 1 rr, cc = line(60, 130, 80, 10) img[rr, cc] = 1 img += np.random.random(img.shape) > 0.95 out, angles, d = hough_line(img) fig, axes = plt.subplots(1, 2, figsize=(7, 4)) axes[0].imshow(img, cmap=plt.cm.gray) axes[0].set_title('Input image') axes[1].imshow( out, cmap=plt.cm.bone, extent=(np.rad2deg(angles[-1]), np.rad2deg(angles[0]), d[-1], d[0])) axes[1].set_title('Hough transform') axes[1].set_xlabel('Angle (degree)') axes[1].set_ylabel('Distance (pixel)') plt.tight_layout() plt.show() hough_line_peaks skimage.transform.hough_line_peaks(hspace, angles, dists, min_distance=9, min_angle=10, threshold=None, num_peaks=inf) [source] Return peaks in a straight line Hough transform. Identifies most prominent lines separated by a certain angle and distance in a Hough transform. Non-maximum suppression with different sizes is applied separately in the first (distances) and second (angles) dimension of the Hough space to identify peaks. Parameters hspace(N, M) array Hough space returned by the hough_line function. angles(M,) array Angles returned by the hough_line function. Assumed to be continuous. (angles[-1] - angles[0] == PI). dists(N, ) array Distances returned by the hough_line function. min_distanceint, optional Minimum distance separating lines (maximum filter size for first dimension of hough space). min_angleint, optional Minimum angle separating lines (maximum filter size for second dimension of hough space). thresholdfloat, optional Minimum intensity of peaks. Default is 0.5 * max(hspace). num_peaksint, optional Maximum number of peaks. 
When the number of peaks exceeds num_peaks, return num_peaks coordinates based on peak intensity. Returns accum, angles, diststuple of array Peak values in Hough space, angles and distances. Examples >>> from skimage.transform import hough_line, hough_line_peaks >>> from skimage.draw import line >>> img = np.zeros((15, 15), dtype=bool) >>> rr, cc = line(0, 0, 14, 14) >>> img[rr, cc] = 1 >>> rr, cc = line(0, 14, 14, 0) >>> img[cc, rr] = 1 >>> hspace, angles, dists = hough_line(img) >>> hspace, angles, dists = hough_line_peaks(hspace, angles, dists) >>> len(angles) 2 ifrt2 skimage.transform.ifrt2(a) [source] Compute the 2-dimensional inverse finite radon transform (iFRT) for an (n+1) x n integer array. Parameters aarray_like A 2-D (n+1) row x n column integer array. Returns iFRT2-D n x n ndarray Inverse Finite Radon Transform array of n x n integer coefficients. See also frt2 The two-dimensional FRT Notes The FRT has a unique inverse if and only if n is prime. See [1] for an overview. The idea for this algorithm is due to Vlad Negnevitski. References 1 A. Kingston and I. Svalbe, “Projective transforms on periodic discrete image arrays,” in P. Hawkes (Ed), Advances in Imaging and Electron Physics, 139 (2006) Examples >>> SIZE = 59 >>> img = np.tri(SIZE, dtype=np.int32) Apply the Finite Radon Transform: >>> f = frt2(img) Apply the Inverse Finite Radon Transform to recover the input >>> fi = ifrt2(f) Check that it’s identical to the original >>> assert len(np.nonzero(img-fi)[0]) == 0 integral_image skimage.transform.integral_image(image) [source] Integral image / summed area table. The integral image contains the sum of all elements above and to the left of it, i.e.: \[S[m, n] = \sum_{i \leq m} \sum_{j \leq n} X[i, j]\] Parameters imagendarray Input image. Returns Sndarray Integral image/summed area table of same shape as input image. References 1 F.C. Crow, “Summed-area tables for texture mapping,” ACM SIGGRAPH Computer Graphics, vol. 18, 1984, pp. 207-212. 
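The summed-area table defined by the formula above can be built with numpy's cumulative sums, and a window sum then follows by inclusion-exclusion over the four corners. The helper names here are illustrative, not the library's API; the values reproduce the integrate example from the docs.

```python
import numpy as np

def summed_area_table(image):
    # S[m, n] = sum of X[i, j] for all i <= m and j <= n
    return image.cumsum(axis=0).cumsum(axis=1)

def window_sum(S, top, left, bottom, right):
    # Inclusion-exclusion over the window's four corners
    # (rows top..bottom and cols left..right, inclusive).
    total = S[bottom, right]
    if top > 0:
        total -= S[top - 1, right]
    if left > 0:
        total -= S[bottom, left - 1]
    if top > 0 and left > 0:
        total += S[top - 1, left - 1]
    return total

arr = np.ones((5, 6))
S = summed_area_table(arr)
```

Each window sum costs O(1) regardless of window size, which is the point of the table.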
integrate skimage.transform.integrate(ii, start, end) [source] Use an integral image to integrate over a given window. Parameters iindarray Integral image. startList of tuples, each tuple of length equal to dimension of ii Coordinates of top left corner of window(s). Each tuple in the list contains the starting row, col, … index i.e [(row_win1, col_win1, …), (row_win2, col_win2,…), …]. endList of tuples, each tuple of length equal to dimension of ii Coordinates of bottom right corner of window(s). Each tuple in the list containing the end row, col, … index i.e [(row_win1, col_win1, …), (row_win2, col_win2, …), …]. Returns Sscalar or ndarray Integral (sum) over the given window(s). Examples >>> arr = np.ones((5, 6), dtype=float) >>> ii = integral_image(arr) >>> integrate(ii, (1, 0), (1, 2)) # sum from (1, 0) to (1, 2) array([3.]) >>> integrate(ii, [(3, 3)], [(4, 5)]) # sum from (3, 3) to (4, 5) array([6.]) >>> # sum from (1, 0) to (1, 2) and from (3, 3) to (4, 5) >>> integrate(ii, [(1, 0), (3, 3)], [(1, 2), (4, 5)]) array([3., 6.]) iradon skimage.transform.iradon(radon_image, theta=None, output_size=None, filter_name='ramp', interpolation='linear', circle=True, preserve_range=True) [source] Inverse radon transform. Reconstruct an image from the radon transform, using the filtered back projection algorithm. Parameters radon_imagearray Image containing radon transform (sinogram). Each column of the image corresponds to a projection along a different angle. The tomography rotation axis should lie at the pixel index radon_image.shape[0] // 2 along the 0th dimension of radon_image. thetaarray_like, optional Reconstruction angles (in degrees). Default: m angles evenly spaced between 0 and 180 (if the shape of radon_image is (N, M)). output_sizeint, optional Number of rows and columns in the reconstruction. filter_namestr, optional Filter used in frequency domain filtering. Ramp filter used by default. Filters available: ramp, shepp-logan, cosine, hamming, hann. 
Assign None to use no filter. interpolationstr, optional Interpolation method used in reconstruction. Methods available: ‘linear’, ‘nearest’, and ‘cubic’ (‘cubic’ is slow). circleboolean, optional Assume the reconstructed image is zero outside the inscribed circle. Also changes the default output_size to match the behaviour of radon called with circle=True. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns reconstructedndarray Reconstructed image. The rotation axis will be located in the pixel with indices (reconstructed.shape[0] // 2, reconstructed.shape[1] // 2). Changed in version 0.19: In iradon, filter argument is deprecated in favor of filter_name. Notes It applies the Fourier slice theorem to reconstruct an image by multiplying the frequency domain of the filter with the FFT of the projection data. This algorithm is called filtered back projection. References 1 AC Kak, M Slaney, “Principles of Computerized Tomographic Imaging”, IEEE Press 1988. 2 B.R. Ramesh, N. Srinivasa, K. Rajgopal, “An Algorithm for Computing the Discrete Radon Transform With Some Applications”, Proceedings of the Fourth IEEE Region 10 International Conference, TENCON ‘89, 1989 iradon_sart skimage.transform.iradon_sart(radon_image, theta=None, image=None, projection_shifts=None, clip=None, relaxation=0.15, dtype=None) [source] Inverse radon transform. Reconstruct an image from the radon transform, using a single iteration of the Simultaneous Algebraic Reconstruction Technique (SART) algorithm. Parameters radon_image2D array Image containing radon transform (sinogram). Each column of the image corresponds to a projection along a different angle. The tomography rotation axis should lie at the pixel index radon_image.shape[0] // 2 along the 0th dimension of radon_image. 
theta1D array, optional Reconstruction angles (in degrees). Default: m angles evenly spaced between 0 and 180 (if the shape of radon_image is (N, M)). image2D array, optional Image containing an initial reconstruction estimate. Shape of this array should be (radon_image.shape[0], radon_image.shape[0]). The default is an array of zeros. projection_shifts1D array, optional Shift the projections contained in radon_image (the sinogram) by this many pixels before reconstructing the image. The i’th value defines the shift of the i’th column of radon_image. cliplength-2 sequence of floats, optional Force all values in the reconstructed tomogram to lie in the range [clip[0], clip[1]] relaxationfloat, optional Relaxation parameter for the update step. A higher value can improve the convergence rate, but one runs the risk of instabilities. Values close to or higher than 1 are not recommended. dtypedtype, optional Output data type, must be floating point. By default, if input data type is not float, input is cast to double, otherwise dtype is set to input data type. Returns reconstructedndarray Reconstructed image. The rotation axis will be located in the pixel with indices (reconstructed.shape[0] // 2, reconstructed.shape[1] // 2). Notes Algebraic Reconstruction Techniques are based on formulating the tomography reconstruction problem as a set of linear equations. Along each ray, the projected value is the sum of all the values of the cross section along the ray. A typical feature of SART (and a few other variants of algebraic techniques) is that it samples the cross section at equidistant points along the ray, using linear interpolation between the pixel values of the cross section. The resulting set of linear equations are then solved using a slightly modified Kaczmarz method. When using SART, a single iteration is usually sufficient to obtain a good reconstruction. Further iterations will tend to enhance high-frequency information, but will also often increase the noise. 
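The Notes above mention that SART solves the resulting linear system with a slightly modified Kaczmarz method. A minimal, generic Kaczmarz iteration for a small made-up system Ax = b looks like this; it is a sketch of the underlying idea, not the SART implementation, which adds ray interpolation and projection ordering.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relaxation=1.0):
    # Cyclically project the estimate onto each row's hyperplane
    # a_i . x = b_i; `relaxation` scales the update step, playing the
    # same role as the relaxation parameter of iradon_sart.
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            residual = b[i] - a @ x
            x = x + relaxation * residual / (a @ a) * a
    return x

# A consistent 2x2 toy system with solution (1, 1):
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)
```

For a consistent system the iteration converges to a solution; as the Notes warn for SART, relaxation values at or above 1 risk instability on noisy data.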
References 1 AC Kak, M Slaney, “Principles of Computerized Tomographic Imaging”, IEEE Press 1988. 2 AH Andersen, AC Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm”, Ultrasonic Imaging 6 pp 81–94 (1984) 3 S Kaczmarz, “Angenäherte auflösung von systemen linearer gleichungen”, Bulletin International de l’Academie Polonaise des Sciences et des Lettres 35 pp 355–357 (1937) 4 Kohler, T. “A projection access scheme for iterative reconstruction based on the golden section.” Nuclear Science Symposium Conference Record, 2004 IEEE. Vol. 6. IEEE, 2004. 5 Kaczmarz’ method, Wikipedia, https://en.wikipedia.org/wiki/Kaczmarz_method matrix_transform skimage.transform.matrix_transform(coords, matrix) [source] Apply 2D matrix transform. Parameters coords(N, 2) array x, y coordinates to transform matrix(3, 3) array Homogeneous transformation matrix. Returns coords(N, 2) array Transformed coordinates. order_angles_golden_ratio skimage.transform.order_angles_golden_ratio(theta) [source] Order angles to reduce the amount of correlated information in subsequent projections. Parameters theta1D array of floats Projection angles in degrees. Duplicate angles are not allowed. Returns indices_generatorgenerator yielding unsigned integers The returned generator yields indices into theta such that theta[indices] gives the approximate golden ratio ordering of the projections. In total, len(theta) indices are yielded. All non-negative integers < len(theta) are yielded exactly once. Notes The method used here is that of the golden ratio introduced by T. Kohler. References 1 Kohler, T. “A projection access scheme for iterative reconstruction based on the golden section.” Nuclear Science Symposium Conference Record, 2004 IEEE. Vol. 6. IEEE, 2004. 2 Winkelmann, Stefanie, et al. “An optimal radial profile order based on the Golden Ratio for time-resolved MRI.” Medical Imaging, IEEE Transactions on 26.1 (2007): 68-76. 
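matrix_transform, described above, amounts to applying a 3 x 3 homogeneous matrix to (N, 2) coordinates: append a 1 to each point, multiply, and divide by the resulting homogeneous coordinate. A numpy sketch (the helper name is illustrative, not the library function):

```python
import numpy as np

def matrix_transform_sketch(coords, matrix):
    coords = np.asarray(coords, dtype=float)
    ones = np.ones((coords.shape[0], 1))
    homog = np.hstack([coords, ones])      # (N, 3) homogeneous points
    mapped = homog @ matrix.T              # apply the 3x3 transform
    return mapped[:, :2] / mapped[:, 2:3]  # de-homogenize

# A pure translation by (tx, ty) = (10, 20):
M = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [1.0, 2.0]])
out = matrix_transform_sketch(pts, M)
```

The final division is what lets the same machinery express projective transforms, whose bottom row is not (0, 0, 1).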
probabilistic_hough_line skimage.transform.probabilistic_hough_line(image, threshold=10, line_length=50, line_gap=10, theta=None, seed=None) [source] Return lines from a progressive probabilistic line Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. thresholdint, optional Threshold line_lengthint, optional Minimum accepted length of detected lines. Increase the parameter to extract longer lines. line_gapint, optional Maximum gap between pixels to still form a line. Increase the parameter to merge broken lines more aggressively. theta1D ndarray, dtype=double, optional Angles at which to compute the transform, in radians. If None, use a range from -pi/2 to pi/2. seedint, optional Seed to initialize the random number generator. Returns lineslist List of lines identified, lines in format ((x0, y0), (x1, y1)), indicating line start and end. References 1 C. Galamhos, J. Matas and J. Kittler, “Progressive probabilistic Hough transform for line detection”, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999. pyramid_expand skimage.transform.pyramid_expand(image, upscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Upsample and then smooth image. Parameters imagendarray Input image. upscalefloat, optional Upscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * upscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of upsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. 
multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns outarray Upsampled and smoothed float image. References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf pyramid_gaussian skimage.transform.pyramid_gaussian(image, max_layer=-1, downscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Yield images of the Gaussian pyramid formed by the input image. Recursively applies the pyramid_reduce function to the image, and yields the downscaled images. Note that the first image of the pyramid will be the original, unscaled image. The total number of images is max_layer + 1. In case all layers are computed, the last image is either a one-pixel image or the image where the reduction does not change its shape. Parameters imagendarray Input image. max_layerint, optional Number of layers for the pyramid. 0th layer is the original image. Default is -1 which builds all possible layers. downscalefloat, optional Downscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. 
multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns pyramidgenerator Generator yielding pyramid layers as float images. References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf pyramid_laplacian skimage.transform.pyramid_laplacian(image, max_layer=-1, downscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Yield images of the laplacian pyramid formed by the input image. Each layer contains the difference between the downsampled and the downsampled, smoothed image: layer = resize(prev_layer) - smooth(resize(prev_layer)) Note that the first image of the pyramid will be the difference between the original, unscaled image and its smoothed version. The total number of images is max_layer + 1. In case all layers are computed, the last image is either a one-pixel image or the image where the reduction does not change its shape. Parameters imagendarray Input image. max_layerint, optional Number of layers for the pyramid. 0th layer is the original image. Default is -1 which builds all possible layers. downscalefloat, optional Downscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. 
cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns pyramidgenerator Generator yielding pyramid layers as float images. References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf 2 http://sepwww.stanford.edu/data/media/public/sep/morgan/texturematch/paper_html/node3.html pyramid_reduce skimage.transform.pyramid_reduce(image, downscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Smooth and then downsample image. Parameters imagendarray Input image. downscalefloat, optional Downscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns outarray Smoothed and downsampled float image. 
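The pyramid generators above stop once downscaling no longer changes the image shape. The progression of layer shapes can be predicted with a small helper; this is illustrative only, and the assumption that each layer rounds dimensions up with ceil is not taken from the library.

```python
import math

def pyramid_layer_shapes(shape, downscale=2):
    # Yield the shape of each pyramid layer, starting with the
    # original, until downscaling stops changing the shape.
    shapes = [tuple(shape)]
    while True:
        nxt = tuple(max(1, math.ceil(s / downscale)) for s in shapes[-1])
        if nxt == shapes[-1]:
            break
        shapes.append(nxt)
    return shapes

layers = pyramid_layer_shapes((100, 80))
```

Under these assumptions a (100, 80) image yields eight layers, ending in the one-pixel image mentioned in the pyramid_gaussian notes.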
References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf radon skimage.transform.radon(image, theta=None, circle=True, *, preserve_range=False) [source] Calculate the Radon transform of an image for the specified projection angles. Parameters imagearray_like Input image. The rotation axis will be located in the pixel with indices (image.shape[0] // 2, image.shape[1] // 2). thetaarray_like, optional Projection angles (in degrees). If None, the value is set to np.arange(180). circleboolean, optional Assume image is zero outside the inscribed circle, making the width of each projection (the first dimension of the sinogram) equal to min(image.shape). preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns radon_imagendarray Radon transform (sinogram). The tomography rotation axis will lie at the pixel index radon_image.shape[0] // 2 along the 0th dimension of radon_image. Notes Based on code of Justin K. Romberg (https://www.clear.rice.edu/elec431/projects96/DSP/bpanalysis.html) References 1 AC Kak, M Slaney, “Principles of Computerized Tomographic Imaging”, IEEE Press 1988. 2 B.R. Ramesh, N. Srinivasa, K. Rajgopal, “An Algorithm for Computing the Discrete Radon Transform With Some Applications”, Proceedings of the Fourth IEEE Region 10 International Conference, TENCON ‘89, 1989 rescale skimage.transform.rescale(image, scale, order=None, mode='reflect', cval=0, clip=True, preserve_range=False, multichannel=False, anti_aliasing=None, anti_aliasing_sigma=None) [source] Scale image by a certain factor. Performs interpolation to up-scale or down-scale N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For down-sampling with an integer factor also see skimage.transform.downscale_local_mean. Parameters imagendarray Input image.
scale{float, tuple of floats} Scale factors. Separate scale factors can be defined as (rows, cols[, …][, dim]). Returns scaledndarray Scaled version of the input. Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. anti_aliasingbool, optional Whether to apply a Gaussian filter to smooth the image prior to down-scaling. It is crucial to filter when down-sampling the image to avoid aliasing artifacts. If input image data type is bool, no anti-aliasing is applied. anti_aliasing_sigma{float, tuple of floats}, optional Standard deviation for Gaussian filtering to avoid aliasing artifacts. By default, this value is chosen as (s - 1) / 2 where s is the down-scaling factor. Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. 
As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import rescale >>> image = data.camera() >>> rescale(image, 0.1).shape (51, 51) >>> rescale(image, 0.5).shape (256, 256) resize skimage.transform.resize(image, output_shape, order=None, mode='reflect', cval=0, clip=True, preserve_range=False, anti_aliasing=None, anti_aliasing_sigma=None) [source] Resize image to match a certain size. Performs interpolation to up-size or down-size N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For down-sampling with an integer factor also see skimage.transform.downscale_local_mean. Parameters imagendarray Input image. output_shapetuple or ndarray Size of the generated output image (rows, cols[, …][, dim]). If dim is not provided, the number of channels is preserved. In case the number of input channels does not equal the number of output channels, an n-dimensional interpolation is applied. Returns resizedndarray Resized version of the input. Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values.
Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html anti_aliasingbool, optional Whether to apply a Gaussian filter to smooth the image prior to down-scaling. It is crucial to filter when down-sampling the image to avoid aliasing artifacts. If input image data type is bool, no anti-aliasing is applied. anti_aliasing_sigma{float, tuple of floats}, optional Standard deviation for Gaussian filtering to avoid aliasing artifacts. By default, this value is chosen as (s - 1) / 2 where s is the down-scaling factor, where s > 1. For the up-size case, s < 1, no anti-aliasing is performed prior to rescaling. Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import resize >>> image = data.camera() >>> resize(image, (100, 100)).shape (100, 100) rotate skimage.transform.rotate(image, angle, resize=False, center=None, order=None, mode='constant', cval=0, clip=True, preserve_range=False) [source] Rotate image by a certain angle around its center. Parameters imagendarray Input image. anglefloat Rotation angle in degrees in counter-clockwise direction. resizebool, optional Determine whether the shape of the output image will be automatically calculated, so the complete rotated image exactly fits. Default is False. centeriterable of length 2 The rotation center. If center=None, the image is rotated around its center, i.e. center=(cols / 2 - 0.5, rows / 2 - 0.5). Please note that this parameter is (cols, rows), contrary to normal skimage ordering. Returns rotatedndarray Rotated version of the input. 
Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import rotate >>> image = data.camera() >>> rotate(image, 2).shape (512, 512) >>> rotate(image, 2, resize=True).shape (530, 530) >>> rotate(image, 90, resize=True).shape (512, 512) Examples using skimage.transform.rotate Different perimeters Measure region properties swirl skimage.transform.swirl(image, center=None, strength=1, radius=100, rotation=0, output_shape=None, order=None, mode='reflect', cval=0, clip=True, preserve_range=False) [source] Perform a swirl transformation. Parameters imagendarray Input image. center(column, row) tuple or (2,) ndarray, optional Center coordinate of transformation. strengthfloat, optional The amount of swirling applied. 
radiusfloat, optional The extent of the swirl in pixels. The effect dies out rapidly beyond radius. rotationfloat, optional Additional rotation applied to the image. Returns swirledndarray Swirled version of the input. Other Parameters output_shapetuple (rows, cols), optional Shape of the output image generated. By default the shape of the input image is preserved. orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode, with ‘constant’ used as the default. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html warp skimage.transform.warp(image, inverse_map, map_args={}, output_shape=None, order=None, mode='constant', cval=0.0, clip=True, preserve_range=False) [source] Warp an image according to a given coordinate transformation. Parameters imagendarray Input image. inverse_maptransformation object, callable cr = f(cr, **kwargs), or ndarray Inverse coordinate map, which transforms coordinates in the output images into their corresponding coordinates in the input image. There are a number of different options to define this map, depending on the dimensionality of the input image. A 2-D image can have 2 dimensions for gray-scale images, or 3 dimensions with color information. 
For 2-D images, you can directly pass a transformation object, e.g. skimage.transform.SimilarityTransform, or its inverse. For 2-D images, you can pass a (3, 3) homogeneous transformation matrix, e.g. skimage.transform.SimilarityTransform.params. For 2-D images, a function that transforms a (M, 2) array of (col, row) coordinates in the output image to their corresponding coordinates in the input image. Extra parameters to the function can be specified through map_args. For N-D images, you can directly pass an array of coordinates. The first dimension specifies the coordinates in the input image, while the subsequent dimensions determine the position in the output image. E.g. in case of 2-D images, you need to pass an array of shape (2, rows, cols), where rows and cols determine the shape of the output image, and the first dimension contains the (row, col) coordinate in the input image. See scipy.ndimage.map_coordinates for further documentation. Note that a (3, 3) matrix is interpreted as a homogeneous transformation matrix, so you cannot interpolate values from a 3-D input if the output is of shape (3,). See example section for usage. map_argsdict, optional Keyword arguments passed to inverse_map. output_shapetuple (rows, cols), optional Shape of the output image generated. By default the shape of the input image is preserved. Note that, even for multi-band images, only rows and columns need to be specified. orderint, optional The order of interpolation. The order has to be in the range 0-5: 0: Nearest-neighbor 1: Bi-linear (default) 2: Bi-quadratic 3: Bi-cubic 4: Bi-quartic 5: Bi-quintic Default is 0 if image.dtype is bool and 1 otherwise. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries.
clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns warpeddouble ndarray The warped input image. Notes The input image is converted to a double image. In case of a SimilarityTransform, AffineTransform and ProjectiveTransform and order in [0, 3] this function uses the underlying transformation matrix to warp the image with a much faster routine. Examples >>> from skimage.transform import warp >>> from skimage import data >>> image = data.camera() The following image warps are all equal but differ substantially in execution time. The image is shifted to the bottom. Use a geometric transform to warp an image (fast): >>> from skimage.transform import SimilarityTransform >>> tform = SimilarityTransform(translation=(0, -10)) >>> warped = warp(image, tform) Use a callable (slow): >>> def shift_down(xy): ... xy[:, 1] -= 10 ... return xy >>> warped = warp(image, shift_down) Use a transformation matrix to warp an image (fast): >>> matrix = np.array([[1, 0, 0], [0, 1, -10], [0, 0, 1]]) >>> warped = warp(image, matrix) >>> from skimage.transform import ProjectiveTransform >>> warped = warp(image, ProjectiveTransform(matrix=matrix)) You can also use the inverse of a geometric transformation (fast): >>> warped = warp(image, tform.inverse) For N-D images you can pass a coordinate array, that specifies the coordinates in the input image for every element in the output image. E.g. 
if you want to rescale a 3-D cube, you can do: >>> cube_shape = np.array([30, 30, 30]) >>> cube = np.random.rand(*cube_shape) Set up the coordinate array that defines the scaling: >>> scale = 0.1 >>> output_shape = (scale * cube_shape).astype(int) >>> coords0, coords1, coords2 = np.mgrid[:output_shape[0], ... :output_shape[1], :output_shape[2]] >>> coords = np.array([coords0, coords1, coords2]) Assume that the cube contains spatial data, where the first array element center is at coordinate (0.5, 0.5, 0.5) in real space, i.e. we have to account for this extra offset when scaling the image: >>> coords = (coords + 0.5) / scale - 0.5 >>> warped = warp(cube, coords) Examples using skimage.transform.warp Registration using optical flow warp_coords skimage.transform.warp_coords(coord_map, shape, dtype=<class 'numpy.float64'>) [source] Build the source coordinates for the output of a 2-D image warp. Parameters coord_mapcallable like GeometricTransform.inverse Return input coordinates for given output coordinates. Coordinates are in the shape (P, 2), where P is the number of coordinates and each element is a (row, col) pair. shapetuple Shape of output image (rows, cols[, bands]). dtypenp.dtype or string dtype for return value (sane choices: float32 or float64). Returns coords(ndim, rows, cols[, bands]) array of dtype dtype Coordinates for scipy.ndimage.map_coordinates, that will yield an image of shape (orows, ocols, bands) by drawing from source points according to the coord_transform_fn. Notes This is a lower-level routine that produces the source coordinates for 2-D images used by warp(). It is provided separately from warp to give additional flexibility to users who would like, for example, to re-use a particular coordinate mapping, to use specific dtypes at various points along the image-warping process, or to implement different post-processing logic than warp performs after the call to ndi.map_coordinates.
Examples Produce a coordinate map that shifts an image up and to the right: >>> from skimage import data >>> from scipy.ndimage import map_coordinates >>> >>> def shift_up10_left20(xy): ... return xy - np.array([-20, 10])[None, :] >>> >>> image = data.astronaut().astype(np.float32) >>> coords = warp_coords(shift_up10_left20, image.shape) >>> warped_image = map_coordinates(image, coords) warp_polar skimage.transform.warp_polar(image, center=None, *, radius=None, output_shape=None, scaling='linear', multichannel=False, **kwargs) [source] Remap image to polar or log-polar coordinates space. Parameters imagendarray Input image. Only 2-D arrays are accepted by default. If multichannel=True, 3-D arrays are accepted and the last axis is interpreted as multiple channels. centertuple (row, col), optional Point in image that represents the center of the transformation (i.e., the origin in cartesian space). Values can be of type float. If no value is given, the center is assumed to be the center point of the image. radiusfloat, optional Radius of the circle that bounds the area to be transformed. output_shapetuple (row, col), optional scaling{‘linear’, ‘log’}, optional Specify whether the image warp is polar or log-polar. Defaults to ‘linear’. multichannelbool, optional Whether the image is a 3-D array in which the third axis is to be interpreted as multiple channels. If set to False (default), only 2-D arrays are accepted. **kwargskeyword arguments Passed to transform.warp. Returns warpedndarray The polar or log-polar warped image. Examples Perform a basic polar warp on a grayscale image: >>> from skimage import data >>> from skimage.transform import warp_polar >>> image = data.checkerboard() >>> warped = warp_polar(image) Perform a log-polar warp on a grayscale image: >>> warped = warp_polar(image, scaling='log') Perform a log-polar warp on a grayscale image while specifying center, radius, and output shape: >>> warped = warp_polar(image, (100,100), radius=100, ... 
output_shape=image.shape, scaling='log') Perform a log-polar warp on a color image: >>> image = data.astronaut() >>> warped = warp_polar(image, scaling='log', multichannel=True) AffineTransform class skimage.transform.AffineTransform(matrix=None, scale=None, rotation=None, shear=None, translation=None) [source] Bases: skimage.transform._geometric.ProjectiveTransform 2D affine transformation. Has the following form: X = a0*x + a1*y + a2 = sx*x*cos(rotation) - sy*y*sin(rotation + shear) + a2 Y = b0*x + b1*y + b2 = sx*x*sin(rotation) + sy*y*cos(rotation + shear) + b2 where sx and sy are scale factors in the x and y directions, and the homogeneous transformation matrix is: [[a0 a1 a2] [b0 b1 b2] [0 0 1]] Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. scale{s as float or (sx, sy) as array, list or tuple}, optional Scale factor(s). If a single value, it will be assigned to both sx and sy. New in version 0.17: Added support for supplying a single scalar value. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. shearfloat, optional Shear angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional Translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, scale=None, rotation=None, shear=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. property rotation property scale property shear property translation EssentialMatrixTransform class skimage.transform.EssentialMatrixTransform(rotation=None, translation=None, matrix=None) [source] Bases: skimage.transform._geometric.FundamentalMatrixTransform Essential matrix transformation. The essential matrix relates corresponding points between a pair of calibrated images. The matrix transforms normalized, homogeneous image points in one image to epipolar lines in the other image.
The essential matrix is only defined for a pair of moving images capturing a non-planar scene. In the case of pure rotation or planar scenes, the homography describes the geometric relation between two images (ProjectiveTransform). If the intrinsic calibration of the images is unknown, the fundamental matrix describes the projective relation between the two images (FundamentalMatrixTransform). Parameters rotation(3, 3) array, optional Rotation matrix of the relative camera motion. translation(3, 1) array, optional Translation vector of the relative camera motion. The vector must have unit length. matrix(3, 3) array, optional Essential matrix. References 1 Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. Attributes params(3, 3) array Essential matrix. __init__(rotation=None, translation=None, matrix=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate essential matrix using 8-point algorithm. The 8-point algorithm requires at least 8 corresponding point pairs for a well-conditioned solution, otherwise the over-determined solution is estimated. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. EuclideanTransform class skimage.transform.EuclideanTransform(matrix=None, rotation=None, translation=None) [source] Bases: skimage.transform._geometric.ProjectiveTransform 2D Euclidean transformation. Has the following form: X = a0 * x - b0 * y + a1 = x * cos(rotation) - y * sin(rotation) + a1 Y = b0 * x + a0 * y + b1 = x * sin(rotation) + y * cos(rotation) + b1 where the homogeneous transformation matrix is: [[a0 b0 a1] [b0 a0 b1] [0 0 1]] The Euclidean transformation is a rigid transformation with rotation and translation parameters. The similarity transformation extends the Euclidean transformation with a single scaling factor.
Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional x, y translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, rotation=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. property rotation property translation FundamentalMatrixTransform class skimage.transform.FundamentalMatrixTransform(matrix=None) [source] Bases: skimage.transform._geometric.GeometricTransform Fundamental matrix transformation. The fundamental matrix relates corresponding points between a pair of uncalibrated images. The matrix transforms homogeneous image points in one image to epipolar lines in the other image. The fundamental matrix is only defined for a pair of moving images. In the case of pure rotation or planar scenes, the homography describes the geometric relation between two images (ProjectiveTransform). If the intrinsic calibration of the images is known, the essential matrix describes the metric relation between the two images (EssentialMatrixTransform). Parameters matrix(3, 3) array, optional Fundamental matrix. References 1 Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. Attributes params(3, 3) array Fundamental matrix. __init__(matrix=None) [source] Initialize self. See help(type(self)) for accurate signature. 
estimate(src, dst) [source] Estimate fundamental matrix using 8-point algorithm. The 8-point algorithm requires at least 8 corresponding point pairs for a well-conditioned solution, otherwise the over-determined solution is estimated. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Parameters coords(N, 2) array Destination coordinates. Returns coords(N, 3) array Epipolar lines in the source image. residuals(src, dst) [source] Compute the Sampson distance. The Sampson distance is the first approximation to the geometric error. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns residuals(N, ) array Sampson distance. PiecewiseAffineTransform class skimage.transform.PiecewiseAffineTransform [source] Bases: skimage.transform._geometric.GeometricTransform 2D piecewise affine transformation. Control points are used to define the mapping. The transform is based on a Delaunay triangulation of the points to form a mesh. Each triangle is used to find a local affine transform. Attributes affineslist of AffineTransform objects Affine transformations for each triangle in the mesh. inverse_affineslist of AffineTransform objects Inverse affine transformations for each triangle in the mesh. __init__() [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Coordinates outside of the mesh will be set to -1. Parameters coords(N, 2) array Source coordinates. Returns coords(N, 2) array Transformed coordinates.
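The estimate(src, dst) pattern described above is shared by the transform classes. As a minimal sketch (assuming scikit-image is available), fitting an AffineTransform to exact point correspondences recovers the matrix of the transform that generated them:

```python
import numpy as np
from skimage.transform import AffineTransform

# Generate exact correspondences from a known transform ...
src = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
tform = AffineTransform(scale=(2, 3), rotation=0.1, translation=(1, -2))
dst = tform(src)  # calling a transform applies it to (N, 2) coordinates

# ... then recover it with the total least-squares estimator.
est = AffineTransform()
ok = est.estimate(src, dst)  # True if estimation succeeded
print(ok, np.allclose(est.params, tform.params))
```

With noisy correspondences the same call returns the least-squares fit instead of an exact recovery.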
PolynomialTransform class skimage.transform.PolynomialTransform(params=None) [source] Bases: skimage.transform._geometric.GeometricTransform 2D polynomial transformation. Has the following form: X = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) Y = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) Parameters params(2, N) array, optional Polynomial coefficients where N * 2 = (order + 1) * (order + 2). So, a_ji is defined in params[0, :] and b_ji in params[1, :]. Attributes params(2, N) array Polynomial coefficients where N * 2 = (order + 1) * (order + 2). So, a_ji is defined in params[0, :] and b_ji in params[1, :]. __init__(params=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst, order=2) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. The transformation is defined as: X = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) Y = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) These equations can be transformed to the following form: 0 = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) - X 0 = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) - Y which exist for each set of corresponding points, so we have a set of N * 2 equations. The coefficients appear linearly so we can write A x = 0, where: A = [[1 x y x**2 x*y y**2 ... 0 ... 0 -X] [0 ... 0 1 x y x**2 x*y y**2 -Y] ... ... ] x.T = [a00 a10 a11 a20 a21 a22 ... ann b00 b10 b11 b20 b21 b22 ... bnn c3] In case of total least-squares the solution of this homogeneous system of equations is the right singular vector of A which corresponds to the smallest singular value normed by the coefficient c3. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. 
orderint, optional Polynomial order (number of coefficients is order + 1). Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Parameters coords(N, 2) array Destination coordinates. Returns coords(N, 2) array Source coordinates. ProjectiveTransform class skimage.transform.ProjectiveTransform(matrix=None) [source] Bases: skimage.transform._geometric.GeometricTransform Projective transformation. Apply a projective transformation (homography) on coordinates. For each homogeneous coordinate \(\mathbf{x} = [x, y, 1]^T\), its target position is calculated by multiplying with the given matrix, \(H\), to give \(H \mathbf{x}\): [[a0 a1 a2] [b0 b1 b2] [c0 c1 1 ]]. E.g., to rotate by theta degrees clockwise, the matrix should be: [[cos(theta) -sin(theta) 0] [sin(theta) cos(theta) 0] [0 0 1]] or, to translate x by 10 and y by 20: [[1 0 10] [0 1 20] [0 0 1 ]]. Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. The transformation is defined as: X = (a0*x + a1*y + a2) / (c0*x + c1*y + 1) Y = (b0*x + b1*y + b2) / (c0*x + c1*y + 1) These equations can be transformed to the following form: 0 = a0*x + a1*y + a2 - c0*x*X - c1*y*X - X 0 = b0*x + b1*y + b2 - c0*x*Y - c1*y*Y - Y which exist for each set of corresponding points, so we have a set of N * 2 equations. The coefficients appear linearly so we can write A x = 0, where: A = [[x y 1 0 0 0 -x*X -y*X -X] [0 0 0 x y 1 -x*Y -y*Y -Y] ... ... 
] x.T = [a0 a1 a2 b0 b1 b2 c0 c1 c3] In case of total least-squares the solution of this homogeneous system of equations is the right singular vector of A which corresponds to the smallest singular value normed by the coefficient c3. In case of the affine transformation the coefficients c0 and c1 are 0. Thus the system of equations is: A = [[x y 1 0 0 0 -X] [0 0 0 x y 1 -Y] ... ... ] x.T = [a0 a1 a2 b0 b1 b2 c3] Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Parameters coords(N, 2) array Destination coordinates. Returns coords(N, 2) array Source coordinates. SimilarityTransform class skimage.transform.SimilarityTransform(matrix=None, scale=None, rotation=None, translation=None) [source] Bases: skimage.transform._geometric.EuclideanTransform 2D similarity transformation. Has the following form: X = a0 * x - b0 * y + a1 = s * x * cos(rotation) - s * y * sin(rotation) + a1 Y = b0 * x + a0 * y + b1 = s * x * sin(rotation) + s * y * cos(rotation) + b1 where s is a scale factor and the homogeneous transformation matrix is: [[a0 b0 a1] [b0 a0 b1] [0 0 1]] The similarity transformation extends the Euclidean transformation with a single scaling factor in addition to the rotation and translation parameters. Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. scalefloat, optional Scale factor. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional x, y translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, scale=None, rotation=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points.
You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. property scale
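The closed form above is easy to verify with plain NumPy. The sketch below builds the homogeneous similarity matrix from scale, rotation, and translation and applies it to points; the helper names `similarity_matrix` and `apply_homogeneous` are made up for illustration and are not part of skimage.

```python
import numpy as np

def similarity_matrix(scale, rotation, translation):
    """Homogeneous similarity matrix in the documented form,
    with a0 = s*cos(rotation) and b0 = s*sin(rotation)."""
    a0 = scale * np.cos(rotation)
    b0 = scale * np.sin(rotation)
    a1, b1 = translation
    return np.array([[a0, -b0, a1],
                     [b0,  a0, b1],
                     [0.0, 0.0, 1.0]])

def apply_homogeneous(matrix, coords):
    """Apply a (3, 3) homogeneous matrix to (N, 2) coordinates."""
    coords = np.asarray(coords, dtype=float)
    homog = np.column_stack([coords, np.ones(len(coords))])
    out = homog @ matrix.T
    return out[:, :2] / out[:, 2:3]

H = similarity_matrix(scale=2.0, rotation=0.0, translation=(1.0, 2.0))
apply_homogeneous(H, [[1.0, 1.0]])  # scale by 2, then shift: (1, 1) -> (3, 4)
```

With `rotation=np.pi / 2` and unit scale, the same helper rotates (1, 0) to (0, 1), matching the counter-clockwise convention stated above.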
class skimage.transform.AffineTransform(matrix=None, scale=None, rotation=None, shear=None, translation=None) [source] Bases: skimage.transform._geometric.ProjectiveTransform 2D affine transformation. Has the following form: X = a0*x + a1*y + a2 = sx*x*cos(rotation) - sy*y*sin(rotation + shear) + a2 Y = b0*x + b1*y + b2 = sx*x*sin(rotation) + sy*y*cos(rotation + shear) + b2 where sx and sy are scale factors in the x and y directions, and the homogeneous transformation matrix is: [[a0 a1 a2] [b0 b1 b2] [0 0 1]] Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. scale{s as float or (sx, sy) as array, list or tuple}, optional Scale factor(s). If a single value, it will be assigned to both sx and sy. New in version 0.17: Added support for supplying a single scalar value. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. shearfloat, optional Shear angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional Translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, scale=None, rotation=None, shear=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. property rotation property scale property shear property translation
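The affine parametrization above can be checked numerically. A minimal sketch, assuming the documented equations; the helper `affine_matrix` is hypothetical, not a skimage function.

```python
import numpy as np

def affine_matrix(scale, rotation, shear, translation):
    """Build the documented homogeneous affine matrix:
    a0 = sx*cos(r), a1 = -sy*sin(r + shear),
    b0 = sx*sin(r), b1 =  sy*cos(r + shear)."""
    sx, sy = scale
    tx, ty = translation
    return np.array([
        [sx * np.cos(rotation), -sy * np.sin(rotation + shear), tx],
        [sx * np.sin(rotation),  sy * np.cos(rotation + shear), ty],
        [0.0, 0.0, 1.0],
    ])

# With no rotation or shear this reduces to independent axis
# scaling plus translation: (1, 1) -> (2*1 + 5, 3*1 + 7) = (7, 10).
A = affine_matrix(scale=(2, 3), rotation=0, shear=0, translation=(5, 7))
X, Y, _ = A @ np.array([1.0, 1.0, 1.0])
```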
skimage.transform.downscale_local_mean(image, factors, cval=0, clip=True) [source] Down-sample N-dimensional image by local averaging. The image is padded with cval if it is not perfectly divisible by the integer factors. In contrast to interpolation in skimage.transform.resize and skimage.transform.rescale this function calculates the local mean of elements in each block of size factors in the input image. Parameters imagendarray N-dimensional input image. factorsarray_like Array containing down-sampling integer factor along each axis. cvalfloat, optional Constant padding value if image is not perfectly divisible by the integer factors. clipbool, optional Unused, but kept here for API consistency with the other transforms in this module. (The local mean will never fall outside the range of values in the input image, assuming the provided cval also falls within that range.) Returns imagendarray Down-sampled image with same number of dimensions as input image. For integer inputs, the output dtype will be float64. See numpy.mean() for details. Examples >>> a = np.arange(15).reshape(3, 5) >>> a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> downscale_local_mean(a, (2, 3)) array([[3.5, 4. ], [5.5, 4.5]])
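The pad-then-average behaviour described above can be sketched in a few lines of NumPy. `downscale_local_mean_sketch` is a hypothetical reimplementation for illustration, not the skimage function; it reproduces the documented example.

```python
import numpy as np

def downscale_local_mean_sketch(image, factors, cval=0):
    """Pad with cval up to a multiple of `factors`, then average each block."""
    image = np.asarray(image, dtype=float)
    pad = [(0, (-s) % f) for s, f in zip(image.shape, factors)]
    padded = np.pad(image, pad, mode='constant', constant_values=cval)
    # Reshape so each block gets its own pair of axes, then average them.
    new_shape = []
    for s, f in zip(padded.shape, factors):
        new_shape += [s // f, f]
    blocks = padded.reshape(new_shape)
    return blocks.mean(axis=tuple(range(1, blocks.ndim, 2)))

a = np.arange(15).reshape(3, 5)
downscale_local_mean_sketch(a, (2, 3))  # [[3.5, 4.], [5.5, 4.5]], as in the example above
```

Note how the zero padding pulls the means of the partially covered blocks (4.0, 5.5, 4.5) below the mean of their in-image pixels, which is why `cval` should be chosen to match the data range.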
class skimage.transform.EssentialMatrixTransform(rotation=None, translation=None, matrix=None) [source] Bases: skimage.transform._geometric.FundamentalMatrixTransform Essential matrix transformation. The essential matrix relates corresponding points between a pair of calibrated images. The matrix transforms normalized, homogeneous image points in one image to epipolar lines in the other image. The essential matrix is only defined for a pair of moving images capturing a non-planar scene. In the case of pure rotation or planar scenes, the homography describes the geometric relation between two images (ProjectiveTransform). If the intrinsic calibration of the images is unknown, the fundamental matrix describes the projective relation between the two images (FundamentalMatrixTransform). Parameters rotation(3, 3) array, optional Rotation matrix of the relative camera motion. translation(3, 1) array, optional Translation vector of the relative camera motion. The vector must have unit length. matrix(3, 3) array, optional Essential matrix. References 1 Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. Attributes params(3, 3) array Essential matrix. __init__(rotation=None, translation=None, matrix=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate essential matrix using 8-point algorithm. The 8-point algorithm requires at least 8 corresponding point pairs for a well-conditioned solution, otherwise the over-determined solution is estimated. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds.
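The relation between relative pose and the essential matrix, E = [t]x R (the cross-product matrix of the translation times the rotation), can be sketched with NumPy. The helper `essential_from_pose` is illustrative; skimage builds this matrix internally from `rotation` and `translation`.

```python
import numpy as np

def essential_from_pose(rotation, translation):
    """E = [t]x @ R for a relative camera rotation R and unit translation t."""
    tx, ty, tz = translation
    t_cross = np.array([[0.0, -tz,  ty],
                        [tz,  0.0, -tx],
                        [-ty, tx,  0.0]])
    return t_cross @ rotation

# Camera translated along x with no rotation (t has unit length, as required):
E = essential_from_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
# The epipolar line of a normalized point p = (x, y, 1) is E @ p; for pure
# x-translation every epipolar line is horizontal (y' = y).
line = E @ np.array([0.5, 0.25, 1.0])  # [0, -1, 0.25], i.e. y' = 0.25
```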
skimage.transform.estimate_transform(ttype, src, dst, **kwargs) [source] Estimate 2D geometric transformation parameters. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters ttype{‘euclidean’, ‘similarity’, ‘affine’, ‘piecewise-affine’, ‘projective’, ‘polynomial’} Type of transform. kwargsarray or int Function parameters (src, dst, n, angle): NAME / TTYPE FUNCTION PARAMETERS 'euclidean' src, dst 'similarity' src, dst 'affine' src, dst 'piecewise-affine' src, dst 'projective' src, dst 'polynomial' src, dst, order (polynomial order, default order is 2) Also see examples below. Returns tformGeometricTransform Transform object containing the transformation parameters and providing access to forward and inverse transformation functions. Examples >>> import numpy as np >>> from skimage import transform >>> # estimate transformation parameters >>> src = np.array([0, 0, 10, 10]).reshape((2, 2)) >>> dst = np.array([12, 14, 1, -20]).reshape((2, 2)) >>> tform = transform.estimate_transform('similarity', src, dst) >>> np.allclose(tform.inverse(tform(src)), src) True >>> # warp image using the estimated transformation >>> from skimage import data >>> image = data.camera() >>> warped = transform.warp(image, inverse_map=tform.inverse) >>> # create transformation with explicit parameters >>> tform2 = transform.SimilarityTransform(scale=1.1, rotation=1, ... translation=(10, 20)) >>> # unite transformations, applied in order from left to right >>> tform3 = tform + tform2 >>> np.allclose(tform3(src), tform2(tform(src))) True
class skimage.transform.EuclideanTransform(matrix=None, rotation=None, translation=None) [source] Bases: skimage.transform._geometric.ProjectiveTransform 2D Euclidean transformation. Has the following form: X = a0 * x - b0 * y + a1 = x * cos(rotation) - y * sin(rotation) + a1 Y = b0 * x + a0 * y + b1 = x * sin(rotation) + y * cos(rotation) + b1 where the homogeneous transformation matrix is: [[a0 -b0 a1] [b0 a0 b1] [0 0 1]] The Euclidean transformation is a rigid transformation with rotation and translation parameters. The similarity transformation extends the Euclidean transformation with a single scaling factor. Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional x, y translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, rotation=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. property rotation property translation
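A defining property of this rigid transformation is that it preserves distances, which is easy to check numerically. This is a sketch with a hypothetical `euclidean_matrix` helper, not the skimage class.

```python
import numpy as np

def euclidean_matrix(rotation, translation):
    """Rigid motion: rotation by `rotation` radians (counter-clockwise),
    then translation, in homogeneous form."""
    c, s = np.cos(rotation), np.sin(rotation)
    tx, ty = translation
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

pts = np.array([[0.0, 0.0], [3.0, 4.0]])          # 5 units apart
T = euclidean_matrix(rotation=np.pi / 2, translation=(1.0, -1.0))
moved = (np.column_stack([pts, [1, 1]]) @ T.T)[:, :2]
# Rigid motions preserve distances: the moved points are still 5 units apart.
```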
skimage.transform.frt2(a) [source] Compute the 2-dimensional finite radon transform (FRT) for an n x n integer array. Parameters aarray_like A 2-D square n x n integer array. Returns FRT2-D ndarray Finite Radon Transform array of (n+1) x n integer coefficients. See also ifrt2 The two-dimensional inverse FRT. Notes The FRT has a unique inverse if and only if n is prime. [FRT] The idea for this algorithm is due to Vlad Negnevitski. References FRT A. Kingston and I. Svalbe, “Projective transforms on periodic discrete image arrays,” in P. Hawkes (Ed), Advances in Imaging and Electron Physics, 139 (2006) Examples Generate a test image: Use a prime number for the array dimensions >>> SIZE = 59 >>> img = np.tri(SIZE, dtype=np.int32) Apply the Finite Radon Transform: >>> f = frt2(img)
class skimage.transform.FundamentalMatrixTransform(matrix=None) [source] Bases: skimage.transform._geometric.GeometricTransform Fundamental matrix transformation. The fundamental matrix relates corresponding points between a pair of uncalibrated images. The matrix transforms homogeneous image points in one image to epipolar lines in the other image. The fundamental matrix is only defined for a pair of moving images. In the case of pure rotation or planar scenes, the homography describes the geometric relation between two images (ProjectiveTransform). If the intrinsic calibration of the images is known, the essential matrix describes the metric relation between the two images (EssentialMatrixTransform). Parameters matrix(3, 3) array, optional Fundamental matrix. References 1 Hartley, Richard, and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. Attributes params(3, 3) array Fundamental matrix. __init__(matrix=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate fundamental matrix using 8-point algorithm. The 8-point algorithm requires at least 8 corresponding point pairs for a well-conditioned solution, otherwise the over-determined solution is estimated. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Parameters coords(N, 2) array Destination coordinates. Returns coords(N, 3) array Epipolar lines in the source image. residuals(src, dst) [source] Compute the Sampson distance. The Sampson distance is the first approximation to the geometric error. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns residuals(N, ) array Sampson distance.
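The Sampson distance returned by residuals can be sketched with NumPy. One common form, assumed here, is |x'T F x| divided by the root of the sum of squares of the first two components of Fx and FTx'; the helper `sampson_distance` is illustrative, not skimage's implementation.

```python
import numpy as np

def sampson_distance(F, src, dst):
    """First-order approximation to the geometric (epipolar) error for one
    correspondence: |x'^T F x| / sqrt(Fx_0^2 + Fx_1^2 + (F^T x')_0^2 + (F^T x')_1^2)."""
    x = np.append(np.asarray(src, dtype=float), 1.0)
    xp = np.append(np.asarray(dst, dtype=float), 1.0)
    Fx = F @ x
    Ftxp = F.T @ xp
    num = np.abs(xp @ Fx)
    den = np.sqrt(Fx[0] ** 2 + Fx[1] ** 2 + Ftxp[0] ** 2 + Ftxp[1] ** 2)
    return num / den

# Fundamental matrix of a pure x-translation: epipolar lines are horizontal,
# so corresponding points share the same y and have zero Sampson distance.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
sampson_distance(F, [0.2, 0.7], [0.9, 0.7])  # 0 for a perfect correspondence
```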
skimage.transform.hough_circle(image, radius, normalize=True, full_output=False) [source] Perform a circular Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. radiusscalar or sequence of scalars Radii at which to compute the Hough transform. Floats are converted to integers. normalizeboolean, optional (default True) Normalize the accumulator with the number of pixels used to draw the radius. full_outputboolean, optional (default False) Extend the output size by twice the largest radius in order to detect centers outside the input picture. Returns H3D ndarray (radius index, (M + 2R, N + 2R) ndarray) Hough transform accumulator for each radius. R designates the larger radius if full_output is True. Otherwise, R = 0. Examples >>> from skimage.transform import hough_circle >>> from skimage.draw import circle_perimeter >>> img = np.zeros((100, 100), dtype=bool) >>> rr, cc = circle_perimeter(25, 35, 23) >>> img[rr, cc] = 1 >>> try_radii = np.arange(5, 50) >>> res = hough_circle(img, try_radii) >>> ridx, r, c = np.unravel_index(np.argmax(res), res.shape) >>> r, c, try_radii[ridx] (25, 35, 23)
skimage.transform.hough_circle_peaks(hspaces, radii, min_xdistance=1, min_ydistance=1, threshold=None, num_peaks=inf, total_num_peaks=inf, normalize=False) [source] Return peaks in a circle Hough transform. Identifies most prominent circles separated by certain distances in given Hough spaces. Non-maximum suppression with different sizes is applied separately in the first and second dimension of the Hough space to identify peaks. For circles with different radius but close in distance, only the one with highest peak is kept. Parameters hspaces(N, M) array Hough spaces returned by the hough_circle function. radii(M,) array Radii corresponding to Hough spaces. min_xdistanceint, optional Minimum distance separating centers in the x dimension. min_ydistanceint, optional Minimum distance separating centers in the y dimension. thresholdfloat, optional Minimum intensity of peaks in each Hough space. Default is 0.5 * max(hspace). num_peaksint, optional Maximum number of peaks in each Hough space. When the number of peaks exceeds num_peaks, only num_peaks coordinates based on peak intensity are considered for the corresponding radius. total_num_peaksint, optional Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks coordinates based on peak intensity. normalizebool, optional If True, normalize the accumulator by the radius to sort the prominent peaks. Returns accum, cx, cy, radtuple of array Peak values in Hough space, x and y center coordinates and radii. Notes Circles with bigger radius have higher peaks in Hough space. If larger circles are preferred over smaller ones, normalize should be False. Otherwise, circles will be returned in the order of decreasing voting number. 
Examples >>> from skimage import transform, draw >>> img = np.zeros((120, 100), dtype=int) >>> radius, x_0, y_0 = (20, 99, 50) >>> y, x = draw.circle_perimeter(y_0, x_0, radius) >>> img[x, y] = 1 >>> hspaces = transform.hough_circle(img, radius) >>> accum, cx, cy, rad = transform.hough_circle_peaks(hspaces, [radius,])
skimage.transform.hough_ellipse(image, threshold=4, accuracy=1, min_size=4, max_size=None) [source] Perform an elliptical Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. thresholdint, optional Accumulator threshold value. accuracydouble, optional Bin size on the minor axis used in the accumulator. min_sizeint, optional Minimal major axis length. max_sizeint, optional Maximal minor axis length. If None, the value is set to the half of the smaller image dimension. Returns resultndarray with fields [(accumulator, yc, xc, a, b, orientation)]. Where (yc, xc) is the center, (a, b) the major and minor axes, respectively. The orientation value follows skimage.draw.ellipse_perimeter convention. Notes The accuracy must be chosen to produce a peak in the accumulator distribution. In other words, a flat accumulator distribution with low values may be caused by a too low bin size. References 1 Xie, Yonghong, and Qiang Ji. “A new efficient ellipse detection method.” Pattern Recognition, 2002. Proceedings. 16th International Conference on. Vol. 2. IEEE, 2002 Examples >>> from skimage.transform import hough_ellipse >>> from skimage.draw import ellipse_perimeter >>> img = np.zeros((25, 25), dtype=np.uint8) >>> rr, cc = ellipse_perimeter(10, 10, 6, 8) >>> img[cc, rr] = 1 >>> result = hough_ellipse(img, threshold=8) >>> result.tolist() [(10, 10.0, 10.0, 8.0, 6.0, 0.0)]
skimage.transform.hough_line(image, theta=None) [source] Perform a straight line Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. theta1D ndarray of double, optional Angles at which to compute the transform, in radians. Defaults to a vector of 180 angles evenly spaced from -pi/2 to pi/2. Returns hspace2-D ndarray of uint64 Hough transform accumulator. anglesndarray Angles at which the transform is computed, in radians. distancesndarray Distance values. Notes The origin is the top left corner of the original image. X and Y axis are horizontal and vertical edges respectively. The distance is the minimal algebraic distance from the origin to the detected line. The angle accuracy can be improved by decreasing the step size in the theta array. Examples Generate a test image: >>> img = np.zeros((100, 150), dtype=bool) >>> img[30, :] = 1 >>> img[:, 65] = 1 >>> img[35:45, 35:50] = 1 >>> for i in range(90): ... img[i, i] = 1 >>> img += np.random.random(img.shape) > 0.95 Apply the Hough transform: >>> out, angles, d = hough_line(img) The following script visualizes an input image together with its Hough transform: import numpy as np import matplotlib.pyplot as plt from skimage.transform import hough_line from skimage.draw import line img = np.zeros((100, 150), dtype=bool) img[30, :] = 1 img[:, 65] = 1 img[35:45, 35:50] = 1 rr, cc = line(60, 130, 80, 10) img[rr, cc] = 1 img += np.random.random(img.shape) > 0.95 out, angles, d = hough_line(img) fig, axes = plt.subplots(1, 2, figsize=(7, 4)) axes[0].imshow(img, cmap=plt.cm.gray) axes[0].set_title('Input image') axes[1].imshow( out, cmap=plt.cm.bone, extent=(np.rad2deg(angles[-1]), np.rad2deg(angles[0]), d[-1], d[0])) axes[1].set_title('Hough transform') axes[1].set_xlabel('Angle (degree)') axes[1].set_ylabel('Distance (pixel)') plt.tight_layout() plt.show()
skimage.transform.hough_line_peaks(hspace, angles, dists, min_distance=9, min_angle=10, threshold=None, num_peaks=inf) [source] Return peaks in a straight line Hough transform. Identifies most prominent lines separated by a certain angle and distance in a Hough transform. Non-maximum suppression with different sizes is applied separately in the first (distances) and second (angles) dimension of the Hough space to identify peaks. Parameters hspace(N, M) array Hough space returned by the hough_line function. angles(M,) array Angles returned by the hough_line function. Assumed to be continuous. (angles[-1] - angles[0] == PI). dists(N, ) array Distances returned by the hough_line function. min_distanceint, optional Minimum distance separating lines (maximum filter size for first dimension of hough space). min_angleint, optional Minimum angle separating lines (maximum filter size for second dimension of hough space). thresholdfloat, optional Minimum intensity of peaks. Default is 0.5 * max(hspace). num_peaksint, optional Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks coordinates based on peak intensity. Returns accum, angles, diststuple of array Peak values in Hough space, angles and distances. Examples >>> from skimage.transform import hough_line, hough_line_peaks >>> from skimage.draw import line >>> img = np.zeros((15, 15), dtype=bool) >>> rr, cc = line(0, 0, 14, 14) >>> img[rr, cc] = 1 >>> rr, cc = line(0, 14, 14, 0) >>> img[cc, rr] = 1 >>> hspace, angles, dists = hough_line(img) >>> hspace, angles, dists = hough_line_peaks(hspace, angles, dists) >>> len(angles) 2
skimage.transform.ifrt2(a) [source] Compute the 2-dimensional inverse finite radon transform (iFRT) for an (n+1) x n integer array. Parameters aarray_like A 2-D (n+1) row x n column integer array. Returns iFRT2-D n x n ndarray Inverse Finite Radon Transform array of n x n integer coefficients. See also frt2 The two-dimensional FRT Notes The FRT has a unique inverse if and only if n is prime. See [1] for an overview. The idea for this algorithm is due to Vlad Negnevitski. References 1 A. Kingston and I. Svalbe, “Projective transforms on periodic discrete image arrays,” in P. Hawkes (Ed), Advances in Imaging and Electron Physics, 139 (2006) Examples >>> SIZE = 59 >>> img = np.tri(SIZE, dtype=np.int32) Apply the Finite Radon Transform: >>> f = frt2(img) Apply the Inverse Finite Radon Transform to recover the input >>> fi = ifrt2(f) Check that it’s identical to the original >>> assert len(np.nonzero(img-fi)[0]) == 0
skimage.transform.integral_image(image) [source] Integral image / summed area table. The integral image contains the sum of all elements above and to the left of it, i.e.: \[S[m, n] = \sum_{i \leq m} \sum_{j \leq n} X[i, j]\] Parameters imagendarray Input image. Returns Sndarray Integral image/summed area table of same shape as input image. References 1 F.C. Crow, “Summed-area tables for texture mapping,” ACM SIGGRAPH Computer Graphics, vol. 18, 1984, pp. 207-212.
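The summed area table and the four-corner window lookup it enables can be sketched with NumPy cumulative sums. The `window_sum` helper is illustrative; see integrate below for the skimage API that performs this lookup.

```python
import numpy as np

image = np.arange(12).reshape(3, 4)
# Summed area table via cumulative sums along each axis (the standard construction).
S = image.cumsum(axis=0).cumsum(axis=1)

def window_sum(S, r0, c0, r1, c1):
    """Sum of image[r0:r1+1, c0:c1+1] from four corner lookups on S."""
    total = S[r1, c1]
    if r0 > 0:
        total -= S[r0 - 1, c1]
    if c0 > 0:
        total -= S[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += S[r0 - 1, c0 - 1]
    return total

window_sum(S, 1, 1, 2, 2)  # equals image[1:3, 1:3].sum() == 30
```

The point of the table is that every such window sum costs at most four array lookups, independent of the window size.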
skimage.transform.integrate(ii, start, end) [source] Use an integral image to integrate over a given window. Parameters iindarray Integral image. startList of tuples, each tuple of length equal to dimension of ii Coordinates of top left corner of window(s). Each tuple in the list contains the starting row, col, … index i.e [(row_win1, col_win1, …), (row_win2, col_win2,…), …]. endList of tuples, each tuple of length equal to dimension of ii Coordinates of bottom right corner of window(s). Each tuple in the list containing the end row, col, … index i.e [(row_win1, col_win1, …), (row_win2, col_win2, …), …]. Returns Sscalar or ndarray Integral (sum) over the given window(s). Examples >>> arr = np.ones((5, 6), dtype=float) >>> ii = integral_image(arr) >>> integrate(ii, (1, 0), (1, 2)) # sum from (1, 0) to (1, 2) array([3.]) >>> integrate(ii, [(3, 3)], [(4, 5)]) # sum from (3, 3) to (4, 5) array([6.]) >>> # sum from (1, 0) to (1, 2) and from (3, 3) to (4, 5) >>> integrate(ii, [(1, 0), (3, 3)], [(1, 2), (4, 5)]) array([3., 6.])
skimage.transform.iradon(radon_image, theta=None, output_size=None, filter_name='ramp', interpolation='linear', circle=True, preserve_range=True) [source] Inverse radon transform. Reconstruct an image from the radon transform, using the filtered back projection algorithm. Parameters radon_imagearray Image containing radon transform (sinogram). Each column of the image corresponds to a projection along a different angle. The tomography rotation axis should lie at the pixel index radon_image.shape[0] // 2 along the 0th dimension of radon_image. thetaarray_like, optional Reconstruction angles (in degrees). Default: m angles evenly spaced between 0 and 180 (if the shape of radon_image is (N, M)). output_sizeint, optional Number of rows and columns in the reconstruction. filter_namestr, optional Filter used in frequency domain filtering. Ramp filter used by default. Filters available: ramp, shepp-logan, cosine, hamming, hann. Assign None to use no filter. interpolationstr, optional Interpolation method used in reconstruction. Methods available: ‘linear’, ‘nearest’, and ‘cubic’ (‘cubic’ is slow). circleboolean, optional Assume the reconstructed image is zero outside the inscribed circle. Also changes the default output_size to match the behaviour of radon called with circle=True. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns reconstructedndarray Reconstructed image. The rotation axis will be located in the pixel with indices (reconstructed.shape[0] // 2, reconstructed.shape[1] // 2). Changed in version 0.19: In iradon, filter argument is deprecated in favor of filter_name. Notes It applies the Fourier slice theorem to reconstruct an image by multiplying the frequency domain of the filter with the FFT of the projection data. This algorithm is called filtered back projection. 
References 1 AC Kak, M Slaney, “Principles of Computerized Tomographic Imaging”, IEEE Press 1988. 2 B.R. Ramesh, N. Srinivasa, K. Rajgopal, “An Algorithm for Computing the Discrete Radon Transform With Some Applications”, Proceedings of the Fourth IEEE Region 10 International Conference, TENCON ‘89, 1989
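The frequency-domain filtering step can be illustrated in isolation: multiply each projection's FFT by a ramp |f| before back projecting. This is a simplified sketch (the actual implementation pads the projections and uses a more careful discrete ramp); `ramp_filter_projection` is a hypothetical helper.

```python
import numpy as np

def ramp_filter_projection(projection):
    """Filter one sinogram column with an ideal ramp |f| in the frequency domain.
    This is the filtering half of filtered back projection."""
    spectrum = np.fft.fft(projection)
    ramp = np.abs(np.fft.fftfreq(len(projection)))
    return np.real(np.fft.ifft(spectrum * ramp))

# The ramp is zero at DC, so a constant projection filters to exactly zero,
# which is what removes the low-frequency blur of unfiltered back projection.
flat = ramp_filter_projection(np.ones(64))
```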
skimage.transform.iradon_sart(radon_image, theta=None, image=None, projection_shifts=None, clip=None, relaxation=0.15, dtype=None) [source] Inverse radon transform. Reconstruct an image from the radon transform, using a single iteration of the Simultaneous Algebraic Reconstruction Technique (SART) algorithm. Parameters radon_image2D array Image containing radon transform (sinogram). Each column of the image corresponds to a projection along a different angle. The tomography rotation axis should lie at the pixel index radon_image.shape[0] // 2 along the 0th dimension of radon_image. theta1D array, optional Reconstruction angles (in degrees). Default: m angles evenly spaced between 0 and 180 (if the shape of radon_image is (N, M)). image2D array, optional Image containing an initial reconstruction estimate. Shape of this array should be (radon_image.shape[0], radon_image.shape[0]). The default is an array of zeros. projection_shifts1D array, optional Shift the projections contained in radon_image (the sinogram) by this many pixels before reconstructing the image. The i’th value defines the shift of the i’th column of radon_image. cliplength-2 sequence of floats, optional Force all values in the reconstructed tomogram to lie in the range [clip[0], clip[1]] relaxationfloat, optional Relaxation parameter for the update step. A higher value can improve the convergence rate, but one runs the risk of instabilities. Values close to or higher than 1 are not recommended. dtypedtype, optional Output data type, must be floating point. By default, if input data type is not float, input is cast to double, otherwise dtype is set to input data type. Returns reconstructedndarray Reconstructed image. The rotation axis will be located in the pixel with indices (reconstructed.shape[0] // 2, reconstructed.shape[1] // 2). Notes Algebraic Reconstruction Techniques are based on formulating the tomography reconstruction problem as a set of linear equations. 
Along each ray, the projected value is the sum of all the values of the cross section along the ray. A typical feature of SART (and a few other variants of algebraic techniques) is that it samples the cross section at equidistant points along the ray, using linear interpolation between the pixel values of the cross section. The resulting set of linear equations are then solved using a slightly modified Kaczmarz method. When using SART, a single iteration is usually sufficient to obtain a good reconstruction. Further iterations will tend to enhance high-frequency information, but will also often increase the noise. References 1 AC Kak, M Slaney, “Principles of Computerized Tomographic Imaging”, IEEE Press 1988. 2 AH Andersen, AC Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm”, Ultrasonic Imaging 6 pp 81–94 (1984) 3 S Kaczmarz, “Angenäherte auflösung von systemen linearer gleichungen”, Bulletin International de l’Academie Polonaise des Sciences et des Lettres 35 pp 355–357 (1937) 4 Kohler, T. “A projection access scheme for iterative reconstruction based on the golden section.” Nuclear Science Symposium Conference Record, 2004 IEEE. Vol. 6. IEEE, 2004. 5 Kaczmarz’ method, Wikipedia, https://en.wikipedia.org/wiki/Kaczmarz_method
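The Kaczmarz update at the heart of SART can be sketched on a small dense system: cyclically project the current estimate onto the hyperplane of each equation, scaled by a relaxation factor. This is a toy illustration with a hypothetical `kaczmarz` helper, not skimage's implementation, which sweeps the rays of the projection geometry instead (and defaults to a much smaller relaxation of 0.15).

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, relaxation=1.0):
    """Kaczmarz's method for A @ x = b: for each row a_i, move the estimate
    toward the hyperplane a_i . x = b_i by `relaxation` times the residual."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relaxation * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
kaczmarz(A, b)  # converges to the exact solution [1., 3.]
```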
skimage.transform.matrix_transform(coords, matrix) [source] Apply 2D matrix transform. Parameters coords(N, 2) array x, y coordinates to transform matrix(3, 3) array Homogeneous transformation matrix. Returns coords(N, 2) array Transformed coordinates.
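What this function computes can be sketched directly, including the divide by the third homogeneous coordinate that handles projective matrices. The helper `matrix_transform_sketch` is hypothetical, for illustration only.

```python
import numpy as np

def matrix_transform_sketch(coords, matrix):
    """Apply a (3, 3) homogeneous matrix to (N, 2) coordinates,
    with the perspective divide by the third homogeneous coordinate."""
    coords = np.asarray(coords, dtype=float)
    homog = np.column_stack([coords, np.ones(len(coords))])
    out = homog @ np.asarray(matrix, dtype=float).T
    return out[:, :2] / out[:, 2:3]

# Pure translation by (10, 20):
matrix_transform_sketch([[0, 0], [1, 1]],
                        [[1, 0, 10], [0, 1, 20], [0, 0, 1]])
```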
skimage.transform.order_angles_golden_ratio(theta) [source] Order angles to reduce the amount of correlated information in subsequent projections. Parameters theta1D array of floats Projection angles in degrees. Duplicate angles are not allowed. Returns indices_generatorgenerator yielding unsigned integers The returned generator yields indices into theta such that theta[indices] gives the approximate golden ratio ordering of the projections. In total, len(theta) indices are yielded. All non-negative integers < len(theta) are yielded exactly once. Notes The method used here is that of the golden ratio introduced by T. Kohler. References 1 Kohler, T. “A projection access scheme for iterative reconstruction based on the golden section.” Nuclear Science Symposium Conference Record, 2004 IEEE. Vol. 6. IEEE, 2004. 2 Winkelmann, Stefanie, et al. “An optimal radial profile order based on the Golden Ratio for time-resolved MRI.” Medical Imaging, IEEE Transactions on 26.1 (2007): 68-76.
class skimage.transform.PiecewiseAffineTransform [source] Bases: skimage.transform._geometric.GeometricTransform 2D piecewise affine transformation. Control points are used to define the mapping. The transform is based on a Delaunay triangulation of the points to form a mesh. Each triangle is used to find a local affine transform. Attributes affineslist of AffineTransform objects Affine transformations for each triangle in the mesh. inverse_affineslist of AffineTransform objects Inverse affine transformations for each triangle in the mesh. __init__() [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Coordinates outside of the mesh will be set to -1. Parameters coords(N, 2) array Source coordinates. Returns coords(N, 2) array Transformed coordinates.
class skimage.transform.PolynomialTransform(params=None) [source] Bases: skimage.transform._geometric.GeometricTransform 2D polynomial transformation. Has the following form: X = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) Y = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) Parameters params(2, N) array, optional Polynomial coefficients where N * 2 = (order + 1) * (order + 2). So, a_ji is defined in params[0, :] and b_ji in params[1, :]. Attributes params(2, N) array Polynomial coefficients where N * 2 = (order + 1) * (order + 2). So, a_ji is defined in params[0, :] and b_ji in params[1, :]. __init__(params=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst, order=2) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. The transformation is defined as: X = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) Y = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) These equations can be transformed to the following form: 0 = sum[j=0:order]( sum[i=0:j]( a_ji * x**(j - i) * y**i )) - X 0 = sum[j=0:order]( sum[i=0:j]( b_ji * x**(j - i) * y**i )) - Y which exist for each set of corresponding points, so we have a set of N * 2 equations. The coefficients appear linearly so we can write A x = 0, where: A = [[1 x y x**2 x*y y**2 ... 0 ... 0 -X] [0 ... 0 1 x y x**2 x*y y**2 -Y] ... ... ] x.T = [a00 a10 a11 a20 a21 a22 ... ann b00 b10 b11 b20 b21 b22 ... bnn c3] In case of total least-squares the solution of this homogeneous system of equations is the right singular vector of A which corresponds to the smallest singular value normed by the coefficient c3. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. orderint, optional Polynomial order (number of coefficients is order + 1). 
Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Parameters coords(N, 2) array Destination coordinates. Returns coords(N, 2) array Source coordinates.
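Evaluating the polynomial form above is straightforward: walk the coefficients in j-major order and accumulate the monomials. A NumPy sketch of how a fitted PolynomialTransform maps coordinates (params has shape (2, N) with N = (order + 1) * (order + 2) / 2 per axis):

```python
import numpy as np

def polynomial_transform(coords, params, order):
    """Evaluate X = sum_j sum_i a_ji * x**(j-i) * y**i (and Y with b_ji)."""
    x, y = np.asarray(coords, dtype=float).T
    out = np.zeros((len(x), 2))
    k = 0
    for j in range(order + 1):
        for i in range(j + 1):
            term = x ** (j - i) * y ** i
            out[:, 0] += params[0][k] * term  # a_ji contributes to X
            out[:, 1] += params[1][k] * term  # b_ji contributes to Y
            k += 1
    return out

# Order-1 identity: X = 0 + 1*x + 0*y, Y = 0 + 0*x + 1*y.
params = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(polynomial_transform([[2.0, 3.0]], params, order=1))  # [[2. 3.]]
```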
skimage.transform.probabilistic_hough_line(image, threshold=10, line_length=50, line_gap=10, theta=None, seed=None) [source] Return lines from a progressive probabilistic line Hough transform. Parameters image(M, N) ndarray Input image with nonzero values representing edges. thresholdint, optional Voting threshold: only candidate lines with at least this many accumulator votes are returned. line_lengthint, optional Minimum accepted length of detected lines. Increase the parameter to extract longer lines. line_gapint, optional Maximum gap between pixels to still form a line. Increase the parameter to merge broken lines more aggressively. theta1D ndarray, dtype=double, optional Angles at which to compute the transform, in radians. If None, use a range from -pi/2 to pi/2. seedint, optional Seed to initialize the random number generator. Returns lineslist List of lines identified, lines in format ((x0, y0), (x1, y1)), indicating line start and end. References 1 C. Galamhos, J. Matas and J. Kittler, “Progressive probabilistic Hough transform for line detection”, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999.
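The documented return format, ((x0, y0), (x1, y1)) per segment, is easy to post-process. A small pure-Python example (the lines list here is hypothetical stand-in data, not actual detector output):

```python
import math

# Hypothetical detections in the documented ((x0, y0), (x1, y1)) format.
lines = [((0, 0), (30, 40)), ((5, 5), (8, 9))]

def line_length(line):
    (x0, y0), (x1, y1) = line
    return math.hypot(x1 - x0, y1 - y0)

# Keep only segments at least 10 pixels long.
long_lines = [ln for ln in lines if line_length(ln) >= 10]
print(long_lines)  # [((0, 0), (30, 40))]
```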
class skimage.transform.ProjectiveTransform(matrix=None) [source] Bases: skimage.transform._geometric.GeometricTransform Projective transformation. Apply a projective transformation (homography) on coordinates. For each homogeneous coordinate \(\mathbf{x} = [x, y, 1]^T\), its target position is calculated by multiplying with the given matrix, \(H\), to give \(H \mathbf{x}\): [[a0 a1 a2] [b0 b1 b2] [c0 c1 1 ]]. E.g., to rotate by theta degrees clockwise, the matrix should be: [[cos(theta) -sin(theta) 0] [sin(theta) cos(theta) 0] [0 0 1]] or, to translate x by 10 and y by 20: [[1 0 10] [0 1 20] [0 0 1 ]]. Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. The transformation is defined as: X = (a0*x + a1*y + a2) / (c0*x + c1*y + 1) Y = (b0*x + b1*y + b2) / (c0*x + c1*y + 1) These equations can be transformed to the following form: 0 = a0*x + a1*y + a2 - c0*x*X - c1*y*X - X 0 = b0*x + b1*y + b2 - c0*x*Y - c1*y*Y - Y which exist for each set of corresponding points, so we have a set of N * 2 equations. The coefficients appear linearly so we can write A x = 0, where: A = [[x y 1 0 0 0 -x*X -y*X -X] [0 0 0 x y 1 -x*Y -y*Y -Y] ... ... ] x.T = [a0 a1 a2 b0 b1 b2 c0 c1 c3] In case of total least-squares the solution of this homogeneous system of equations is the right singular vector of A which corresponds to the smallest singular value normed by the coefficient c3. In case of the affine transformation the coefficients c0 and c1 are 0. Thus the system of equations is: A = [[x y 1 0 0 0 -X] [0 0 0 x y 1 -Y] ... ... 
] x.T = [a0 a1 a2 b0 b1 b2 c3] Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. inverse(coords) [source] Apply inverse transformation. Parameters coords(N, 2) array Destination coordinates. Returns coords(N, 2) array Source coordinates.
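The total least-squares procedure described above (build A, take the right singular vector of the smallest singular value) can be sketched directly in NumPy. This mirrors ProjectiveTransform.estimate in spirit only; the helper name is ours:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct-linear-transform homography estimate via the SVD."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    # Two rows of A per corresponding point pair, as in the docstring.
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -x * X, -y * X, -X])
        rows.append([0, 0, 0, x, y, 1, -x * Y, -y * Y, -Y])
    A = np.array(rows)
    # Right singular vector for the smallest singular value solves A x = 0.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

# Recover a pure translation from four point correspondences.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1, 0, 10], [0, 1, 20], [0, 0, 1]], dtype=float)
dst = src + [10, 20]
H = estimate_homography(src, dst)
assert np.allclose(H, H_true, atol=1e-6)
```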
skimage.transform.pyramid_expand(image, upscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Upsample and then smooth image. Parameters imagendarray Input image. upscalefloat, optional Upscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * upscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of upsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns outarray Upsampled and smoothed float image. References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf
skimage.transform.pyramid_gaussian(image, max_layer=-1, downscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Yield images of the Gaussian pyramid formed by the input image. Recursively applies the pyramid_reduce function to the image, and yields the downscaled images. Note that the first image of the pyramid will be the original, unscaled image. The total number of images is max_layer + 1. In case all layers are computed, the last image is either a one-pixel image or the image where the reduction does not change its shape. Parameters imagendarray Input image. max_layerint, optional Number of layers for the pyramid. 0th layer is the original image. Default is -1 which builds all possible layers. downscalefloat, optional Downscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns pyramidgenerator Generator yielding pyramid layers as float images. References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf
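The looping behaviour described above (yield the original, then keep reducing until the shape stops changing) can be sketched with NumPy. A 2x2 block mean stands in here for the Gaussian smoothing and spline downsampling of the real implementation:

```python
import numpy as np

def pyramid_sketch(image, downscale=2):
    """Yield pyramid layers: the original image, then repeated reductions."""
    layer = np.asarray(image, dtype=float)
    yield layer
    while True:
        h, w = (s // downscale for s in layer.shape)
        if h < 1 or w < 1 or (h, w) == layer.shape:
            break  # reduction no longer changes the shape
        # Block mean: crude smoothing + downsampling in one step.
        layer = (layer[:h * downscale, :w * downscale]
                 .reshape(h, downscale, w, downscale).mean(axis=(1, 3)))
        yield layer

shapes = [a.shape for a in pyramid_sketch(np.zeros((8, 8)))]
print(shapes)  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```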
skimage.transform.pyramid_laplacian(image, max_layer=-1, downscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Yield images of the laplacian pyramid formed by the input image. Each layer contains the difference between the downsampled and the downsampled, smoothed image: layer = resize(prev_layer) - smooth(resize(prev_layer)) Note that the first image of the pyramid will be the difference between the original, unscaled image and its smoothed version. The total number of images is max_layer + 1. In case all layers are computed, the last image is either a one-pixel image or the image where the reduction does not change its shape. Parameters imagendarray Input image. max_layerint, optional Number of layers for the pyramid. 0th layer is the original image. Default is -1 which builds all possible layers. downscalefloat, optional Downscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns pyramidgenerator Generator yielding pyramid layers as float images. 
References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf 2 http://sepwww.stanford.edu/data/media/public/sep/morgan/texturematch/paper_html/node3.html
skimage.transform.pyramid_reduce(image, downscale=2, sigma=None, order=1, mode='reflect', cval=0, multichannel=False, preserve_range=False) [source] Smooth and then downsample image. Parameters imagendarray Input image. downscalefloat, optional Downscale factor. sigmafloat, optional Sigma for Gaussian filter. Default is 2 * downscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution. orderint, optional Order of splines used in interpolation of downsampling. See skimage.transform.warp for detail. mode{‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’. cvalfloat, optional Value to fill past edges of input if mode is ‘constant’. multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns outarray Smoothed and downsampled float image. References 1 http://persci.mit.edu/pub_pdfs/pyramid83.pdf
skimage.transform.radon(image, theta=None, circle=True, *, preserve_range=False) [source] Calculates the radon transform of an image given specified projection angles. Parameters imagearray_like Input image. The rotation axis will be located in the pixel with indices (image.shape[0] // 2, image.shape[1] // 2). thetaarray_like, optional Projection angles (in degrees). If None, the value is set to np.arange(180). circleboolean, optional Assume image is zero outside the inscribed circle, making the width of each projection (the first dimension of the sinogram) equal to min(image.shape). preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns radon_imagendarray Radon transform (sinogram). The tomography rotation axis will lie at the pixel index radon_image.shape[0] // 2 along the 0th dimension of radon_image. Notes Based on code of Justin K. Romberg (https://www.clear.rice.edu/elec431/projects96/DSP/bpanalysis.html) References 1 AC Kak, M Slaney, “Principles of Computerized Tomographic Imaging”, IEEE Press 1988. 2 B.R. Ramesh, N. Srinivasa, K. Rajgopal, “An Algorithm for Computing the Discrete Radon Transform With Some Applications”, Proceedings of the Fourth IEEE Region 10 International Conference, TENCON ‘89, 1989
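Each column of the sinogram is the projection at one angle. At the two axis-aligned angles no rotation or interpolation is needed, and the projection reduces to a plain sum along one image axis; which of the two axes corresponds to theta=0 depends on radon's convention, so this sketch labels them neutrally:

```python
import numpy as np

def axis_aligned_projections(image):
    """The special case of the Radon transform needing no interpolation:
    projections along the two image axes."""
    return {"along_rows": image.sum(axis=0), "along_cols": image.sum(axis=1)}

image = np.zeros((4, 4))
image[1, :] = 1.0  # one bright row
proj = axis_aligned_projections(image)
print(proj["along_rows"])  # [1. 1. 1. 1.]  the row spreads over every bin
print(proj["along_cols"])  # [0. 4. 0. 0.]  the row collapses into one bin
```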
skimage.transform.rescale(image, scale, order=None, mode='reflect', cval=0, clip=True, preserve_range=False, multichannel=False, anti_aliasing=None, anti_aliasing_sigma=None) [source] Scale image by a certain factor. Performs interpolation to up-scale or down-scale N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For down-sampling with an integer factor also see skimage.transform.downscale_local_mean. Parameters imagendarray Input image. scale{float, tuple of floats} Scale factors. Separate scale factors can be defined as (rows, cols[, …][, dim]). Returns scaledndarray Scaled version of the input. Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. anti_aliasingbool, optional Whether to apply a Gaussian filter to smooth the image prior to down-scaling. It is crucial to filter when down-sampling the image to avoid aliasing artifacts. If input image data type is bool, no anti-aliasing is applied. 
anti_aliasing_sigma{float, tuple of floats}, optional Standard deviation for Gaussian filtering to avoid aliasing artifacts. By default, this value is chosen as (s - 1) / 2 where s is the down-scaling factor. Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import rescale >>> image = data.camera() >>> rescale(image, 0.1).shape (51, 51) >>> rescale(image, 0.5).shape (256, 256)
skimage.transform.resize(image, output_shape, order=None, mode='reflect', cval=0, clip=True, preserve_range=False, anti_aliasing=None, anti_aliasing_sigma=None) [source] Resize image to match a certain size. Performs interpolation to up-size or down-size N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For down-sampling with an integer factor also see skimage.transform.downscale_local_mean. Parameters imagendarray Input image. output_shapetuple or ndarray Size of the generated output image (rows, cols[, …][, dim]). If dim is not provided, the number of channels is preserved. In case the number of input channels does not equal the number of output channels a n-dimensional interpolation is applied. Returns resizedndarray Resized version of the input. Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html anti_aliasingbool, optional Whether to apply a Gaussian filter to smooth the image prior to down-scaling. It is crucial to filter when down-sampling the image to avoid aliasing artifacts. 
If input image data type is bool, no anti-aliasing is applied. anti_aliasing_sigma{float, tuple of floats}, optional Standard deviation for Gaussian filtering to avoid aliasing artifacts. By default, this value is chosen as (s - 1) / 2 where s is the down-scaling factor, where s > 1. For the up-size case, s < 1, no anti-aliasing is performed prior to rescaling. Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import resize >>> image = data.camera() >>> resize(image, (100, 100)).shape (100, 100)
skimage.transform.rotate(image, angle, resize=False, center=None, order=None, mode='constant', cval=0, clip=True, preserve_range=False) [source] Rotate image by a certain angle around its center. Parameters imagendarray Input image. anglefloat Rotation angle in degrees in counter-clockwise direction. resizebool, optional Determine whether the shape of the output image will be automatically calculated, so the complete rotated image exactly fits. Default is False. centeriterable of length 2 The rotation center. If center=None, the image is rotated around its center, i.e. center=(cols / 2 - 0.5, rows / 2 - 0.5). Please note that this parameter is (cols, rows), contrary to normal skimage ordering. Returns rotatedndarray Rotated version of the input. Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. 
As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import rotate >>> image = data.camera() >>> rotate(image, 2).shape (512, 512) >>> rotate(image, 2, resize=True).shape (530, 530) >>> rotate(image, 90, resize=True).shape (512, 512)
class skimage.transform.SimilarityTransform(matrix=None, scale=None, rotation=None, translation=None) [source] Bases: skimage.transform._geometric.EuclideanTransform 2D similarity transformation. Has the following form: X = a0 * x - b0 * y + a1 = s * x * cos(rotation) - s * y * sin(rotation) + a1 Y = b0 * x + a0 * y + b1 = s * x * sin(rotation) + s * y * cos(rotation) + b1 where s is a scale factor and the homogeneous transformation matrix is: [[a0 -b0 a1] [b0 a0 b1] [0 0 1]] The similarity transformation extends the Euclidean transformation with a single scaling factor in addition to the rotation and translation parameters. Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. scalefloat, optional Scale factor. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional x, y translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, scale=None, rotation=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. property scale
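Assembling the matrix from scale, rotation, and translation follows directly from the equations X = s*x*cos(r) - s*y*sin(r) + tx and Y = s*x*sin(r) + s*y*cos(r) + ty. A NumPy sketch of what the class builds internally:

```python
import numpy as np

def similarity_matrix(scale=1.0, rotation=0.0, translation=(0.0, 0.0)):
    """Build a (3, 3) similarity matrix: a0 = s*cos(r), b0 = s*sin(r)."""
    a0 = scale * np.cos(rotation)
    b0 = scale * np.sin(rotation)
    tx, ty = translation
    return np.array([[a0, -b0, tx],
                     [b0,  a0, ty],
                     [0.0, 0.0, 1.0]])

# Scale by 2, no rotation, translate by (1, 2): (1, 1) -> (3, 4).
H = similarity_matrix(scale=2.0, rotation=0.0, translation=(1.0, 2.0))
point = H @ [1.0, 1.0, 1.0]
print(point[:2])  # [3. 4.]
```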