| _id | text | title |
|---|---|---|
doc_28700 | Returns the database column data type for fields such as ForeignKey and OneToOneField that point to the Field, taking into account the connection. See Custom database types for usage in custom fields. | |
doc_28701 | Retrieve the specified ANNOTATIONs for mailbox. The method is non-standard, but is supported by the Cyrus server. | |
doc_28702 |
Set the position to use for z-sorting. | |
doc_28703 |
Alias for set_antialiased. | |
doc_28704 | tf.compat.v1.distributions.Laplace(
loc, scale, validate_args=False, allow_nan_stats=True, name='Laplace'
)
Mathematical details The probability density function (pdf) of this distribution is, pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z
Z = 2 sigma
where loc = mu, scale = sigma, and Z is the normalization constant. Note that the Laplace distribution can be thought of as two exponential distributions spliced together "back-to-back." The Laplace distribution is a member of the location-scale family, i.e., it can be constructed as, X ~ Laplace(loc=0, scale=1)
Y = loc + scale * X
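The density formula and the location-scale construction above can be sketched in plain NumPy (a hedged illustration; `laplace_pdf` is a hypothetical helper, not part of the TF API):

```python
import numpy as np

def laplace_pdf(x, loc=0.0, scale=1.0):
    """pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z, with Z = 2 * sigma."""
    return np.exp(-np.abs(x - loc) / scale) / (2.0 * scale)

# Location-scale construction: X ~ Laplace(0, 1), Y = loc + scale * X
rng = np.random.default_rng(0)
x = rng.laplace(loc=0.0, scale=1.0, size=100_000)
y = 3.0 + 2.0 * x  # approximately distributed as Laplace(loc=3, scale=2)
```

At the mode the density is 1 / (2 * scale), which is easy to check against the formula.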
Args
loc Floating point tensor which characterizes the location (center) of the distribution.
scale Positive floating point tensor which characterizes the spread of the distribution.
validate_args Python bool, default False. When True distribution parameters are checked for validity despite possibly degrading runtime performance. When False invalid inputs may silently render incorrect outputs.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Raises
TypeError if loc and scale are of different dtype.
Attributes
allow_nan_stats Python bool describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
batch_shape Shape of a single sample from a single event index as a TensorShape. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape. May be partially defined or unknown.
loc Distribution parameter for the location.
name Name prepended to all ops created by this Distribution.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized. Currently this is one of the static instances distributions.FULLY_REPARAMETERIZED or distributions.NOT_REPARAMETERIZED.
scale Distribution parameter for scale.
validate_args Python bool indicating possibly expensive checks are enabled. Methods batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of a single sample from a single event index as a 1-D Tensor. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.
Args
name name to give to the op
Returns
batch_shape Tensor. cdf View source
cdf(
value, name='cdf'
)
Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x]
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. copy View source
copy(
**override_parameters_kwargs
)
Creates a deep copy of the distribution.
Note: the copy distribution may continue to depend on the original initialization arguments.
Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.
Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs). covariance View source
covariance(
name='covariance'
)
Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-k, vector-valued distribution, it is calculated as, Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e., Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.
Args
name Python str prepended to names of ops created by this function.
Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape). cross_entropy View source
cross_entropy(
other, name='cross_entropy'
)
Computes the (Shannon) cross entropy. Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as: H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)
where F denotes the support of the random variable X ~ P.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy. entropy View source
entropy(
name='entropy'
)
Shannon entropy in nats. event_shape_tensor View source
event_shape_tensor(
name='event_shape_tensor'
)
Shape of a single sample from a single batch as a 1-D int32 Tensor.
Args
name name to give to the op
Returns
event_shape Tensor. is_scalar_batch View source
is_scalar_batch(
name='is_scalar_batch'
)
Indicates that batch_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_batch bool scalar Tensor. is_scalar_event View source
is_scalar_event(
name='is_scalar_event'
)
Indicates that event_shape == [].
Args
name Python str prepended to names of ops created by this function.
Returns
is_scalar_event bool scalar Tensor. kl_divergence View source
kl_divergence(
other, name='kl_divergence'
)
Computes the Kullback--Leibler divergence. Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as: KL[p, q] = E_p[log(p(X)/q(X))]
= -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
= H[p, q] - H[p]
where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.
Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.
Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence. log_cdf View source
log_cdf(
value, name='log_cdf'
)
Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]
Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_prob View source
log_prob(
value, name='log_prob'
)
Log probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. log_survival_function View source
log_survival_function(
value, name='log_survival_function'
)
Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ]
= Log[ 1 - P[X <= x] ]
= Log[ 1 - cdf(x) ]
Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
mean View source
mean(
name='mean'
)
Mean. mode View source
mode(
name='mode'
)
Mode. param_shapes View source
@classmethod
param_shapes(
sample_shape, name='DistributionParamShapes'
)
Shapes of parameters given the desired shape of a call to sample(). This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Subclasses should override class method _param_shapes.
Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.
Returns dict of parameter name to Tensor shapes.
param_static_shapes View source
@classmethod
param_static_shapes(
sample_shape
)
param_shapes with static (i.e. TensorShape) shapes. This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically. Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.
Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().
Returns dict of parameter name to TensorShape.
Raises
ValueError if sample_shape is a TensorShape and is not fully defined. prob View source
prob(
value, name='prob'
)
Probability density/mass function.
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. quantile View source
quantile(
value, name='quantile'
)
Quantile function. Aka "inverse cdf" or "percent point function". Given random variable X and p in [0, 1], the quantile is: quantile(p) := x such that P[X <= x] == p
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. sample View source
sample(
sample_shape=(), seed=None, name='sample'
)
Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.
Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer seed for RNG
name name to give to the op.
Returns
samples a Tensor with prepended dimensions sample_shape. stddev View source
stddev(
name='stddev'
)
Standard deviation. Standard deviation is defined as, stddev = E[(X - E[X])**2]**0.5
where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). survival_function View source
survival_function(
value, name='survival_function'
)
Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x]
= 1 - P[X <= x]
= 1 - cdf(x).
Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
Returns Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
variance View source
variance(
name='variance'
)
Variance. Variance is defined as, Var = E[(X - E[X])**2]
where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.
Args
name Python str prepended to names of ops created by this function.
Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean(). | |
doc_28705 | tf.compat.v1.distributions.ReparameterizationType(
rep_type
)
Two static instances exist in the distributions library, signifying one of two possible properties for samples from a distribution: FULLY_REPARAMETERIZED: Samples from the distribution are fully reparameterized, and straight-through gradients are supported. NOT_REPARAMETERIZED: Samples from the distribution are not fully reparameterized, and straight-through gradients are either partially unsupported or are not supported at all. In this case, for purposes of e.g. RL or variational inference, it is generally safest to wrap the sample results in a stop_gradients call and use policy gradients / surrogate loss instead. Methods __eq__ View source
__eq__(
other
)
Determine if this ReparameterizationType is equal to another. Since ReparameterizationType instances are constant static global instances, equality checks if two instances' id() values are equal.
Args
other Object to compare against.
Returns self is other. | |
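The identity-based equality described above can be sketched as follows (an assumed structure for illustration, not the actual TF source):

```python
class ReparameterizationType:
    """Sketch: a constant marker object whose equality is identity."""

    def __init__(self, rep_type):
        self._rep_type = rep_type

    def __repr__(self):
        return f"<Reparameterization Type: {self._rep_type}>"

    def __eq__(self, other):
        # The two instances are static globals, so comparing id() values
        # (i.e., `is`) is sufficient.
        return self is other

FULLY_REPARAMETERIZED = ReparameterizationType("FULLY_REPARAMETERIZED")
NOT_REPARAMETERIZED = ReparameterizationType("NOT_REPARAMETERIZED")
```

Because equality is identity, even a freshly built instance with the same `rep_type` string compares unequal to the global constant.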
doc_28706 |
Bases: matplotlib.patches.BoxStyle._Base A square box. Parameters
padfloat, default: 0.3
The amount of padding around the original box. __call__(x0, y0, width, height, mutation_size, mutation_aspect=<deprecated parameter>)[source]
Given the location and size of the box, return the path of the box around it. Parameters
x0, y0, width, heightfloat
Location and size of the box.
mutation_sizefloat
A reference scale for the mutation. Returns
Path | |
doc_28707 |
Alias for set_horizontalalignment. | |
doc_28708 |
A string identifying the data type. Will be used for display in, e.g. Series.dtype | |
doc_28709 |
Share the y-axis with other. This is equivalent to passing sharey=other when constructing the axes, and cannot be used if the y-axis is already being shared with another Axes. | |
doc_28710 |
If pass_through is True, all ancestors will always be invalidated, even if 'self' is already invalid. | |
doc_28711 | See Migration guide for more details. tf.compat.v1.keras.layers.UpSampling2D
tf.keras.layers.UpSampling2D(
size=(2, 2), data_format=None, interpolation='nearest', **kwargs
)
Repeats the rows and columns of the data by size[0] and size[1] respectively. Examples:
input_shape = (2, 2, 1, 3)
x = np.arange(np.prod(input_shape)).reshape(input_shape)
print(x)
[[[[ 0 1 2]]
[[ 3 4 5]]]
[[[ 6 7 8]]
[[ 9 10 11]]]]
y = tf.keras.layers.UpSampling2D(size=(1, 2))(x)
print(y)
tf.Tensor(
[[[[ 0 1 2]
[ 0 1 2]]
[[ 3 4 5]
[ 3 4 5]]]
[[[ 6 7 8]
[ 6 7 8]]
[[ 9 10 11]
[ 9 10 11]]]], shape=(2, 2, 2, 3), dtype=int64)
Arguments
size Int, or tuple of 2 integers. The upsampling factors for rows and columns.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
interpolation A string, one of nearest or bilinear. Input shape: 4D tensor with shape: If data_format is "channels_last": (batch_size, rows, cols, channels)
If data_format is "channels_first": (batch_size, channels, rows, cols)
Output shape: 4D tensor with shape: If data_format is "channels_last": (batch_size, upsampled_rows, upsampled_cols, channels)
If data_format is "channels_first": (batch_size, channels, upsampled_rows, upsampled_cols) | |
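The row/column repetition that UpSampling2D performs with `interpolation='nearest'` can be reproduced with `np.repeat` (a NumPy sketch under that assumption, not the Keras implementation):

```python
import numpy as np

def upsample2d(x, size=(2, 2), data_format="channels_last"):
    # Repeat rows by size[0] and columns by size[1] along the spatial axes.
    row_axis, col_axis = (1, 2) if data_format == "channels_last" else (2, 3)
    x = np.repeat(x, size[0], axis=row_axis)
    return np.repeat(x, size[1], axis=col_axis)

x = np.arange(12).reshape((2, 2, 1, 3))
y = upsample2d(x, size=(1, 2))  # same shapes as the documented example
```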
doc_28712 | Return the group database entry for the given numeric group ID. KeyError is raised if the entry asked for cannot be found. Deprecated since version 3.6: Since Python 3.6 the support of non-integer arguments like floats or strings in getgrgid() is deprecated. | |
doc_28713 |
Return the registered default canvas for given file format. Handles deferred import of required backend. | |
doc_28714 | An immutable list. Changelog New in version 0.5. Private | |
doc_28715 | class sklearn.kernel_approximation.PolynomialCountSketch(*, gamma=1.0, degree=2, coef0=0, n_components=100, random_state=None) [source]
Polynomial kernel approximation via Tensor Sketch. Implements Tensor Sketch, which approximates the feature map of the polynomial kernel: K(X, Y) = (gamma * <X, Y> + coef0)^degree
by efficiently computing a Count Sketch of the outer product of a vector with itself using Fast Fourier Transforms (FFT). Read more in the User Guide. New in version 0.24. Parameters
gammafloat, default=1.0
Parameter of the polynomial kernel whose feature map will be approximated.
degreeint, default=2
Degree of the polynomial kernel whose feature map will be approximated.
coef0int, default=0
Constant term of the polynomial kernel whose feature map will be approximated.
n_componentsint, default=100
Dimensionality of the output feature space. Usually, n_components should be greater than the number of features in input samples in order to achieve good performance. The optimal score / run time balance is typically achieved around n_components = 10 * n_features, but this depends on the specific dataset being used.
random_stateint, RandomState instance, default=None
Determines random number generation for indexHash and bitHash initialization. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
indexHash_ndarray of shape (degree, n_features), dtype=int64
Array of indexes in range [0, n_components) used to represent the 2-wise independent hash functions for Count Sketch computation.
bitHash_ndarray of shape (degree, n_features), dtype=float32
Array with random entries in {+1, -1}, used to represent the 2-wise independent hash functions for Count Sketch computation. Examples >>> from sklearn.kernel_approximation import PolynomialCountSketch
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> ps = PolynomialCountSketch(degree=3, random_state=1)
>>> X_features = ps.fit_transform(X)
>>> clf = SGDClassifier(max_iter=10, tol=1e-3)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=10)
>>> clf.score(X_features, y)
1.0
Methods
fit(X[, y]) Fit the model with X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Generate the feature map approximation for X.
fit(X, y=None) [source]
Fit the model with X. Initializes the internal variables. The method needs no information about the distribution of data, so we only care about n_features in X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject
Returns the transformer.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Generate the feature map approximation for X. Parameters
X{array-like}, shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newarray-like, shape (n_samples, n_components)
Examples using sklearn.kernel_approximation.PolynomialCountSketch
Release Highlights for scikit-learn 0.24
Scalable learning with polynomial kernel approximation | |
doc_28716 | Create a new CAB file named cabname. files must be a list of tuples, each containing the name of the file on disk, and the name of the file inside the CAB file. The files are added to the CAB file in the order they appear in the list. All files are added into a single CAB file, using the MSZIP compression algorithm. Callbacks to Python for the various steps of MSI creation are currently not exposed. | |
doc_28717 | Tidy up any resources used by the handler. This version does no output but removes the handler from an internal list of handlers which is closed when shutdown() is called. Subclasses should ensure that this gets called from overridden close() methods. | |
doc_28718 |
Draw a marker at each of path's vertices (excluding control points). This provides a fallback implementation of draw_markers that makes multiple calls to draw_path(). Some backends may want to override this method in order to draw the marker only once and reuse it multiple times. Parameters
gcGraphicsContextBase
The graphics context.
marker_transmatplotlib.transforms.Transform
An affine transform applied to the marker.
transmatplotlib.transforms.Transform
An affine transform applied to the path. | |
doc_28719 |
The last colorbar associated with this ScalarMappable. May be None. | |
doc_28720 |
Set the JoinStyle for the collection (for all its elements). Parameters
jsJoinStyle or {'miter', 'round', 'bevel'} | |
doc_28721 |
Encodes a bytestring to a base64 string for use in URLs, stripping any trailing equal signs. | |
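The described behavior can be sketched with the stdlib `base64` module (the helper names here are hypothetical, chosen for illustration):

```python
import base64

def urlsafe_base64_encode(s: bytes) -> str:
    # URL-safe alphabet, with the trailing '=' padding stripped as described.
    return base64.urlsafe_b64encode(s).rstrip(b"=").decode("ascii")

def urlsafe_base64_decode(s: str) -> bytes:
    # Restore the stripped padding before decoding.
    pad = "=" * (-len(s) % 4)
    return base64.urlsafe_b64decode(s + pad)
```

Stripping the padding is safe because the original length is recoverable modulo 4, so the decoder can re-add the `=` signs.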
doc_28722 |
Calculate TimedeltaArray of difference between index values and index converted to PeriodArray at specified freq. Used for vectorized offsets. Parameters
freq:Period frequency
Returns
TimedeltaArray/Index | |
doc_28723 |
Return the offsets for the collection. | |
doc_28724 | See torch.sign() | |
doc_28725 |
Find indices where elements should be inserted to maintain order. Find the indices into a sorted array a such that, if the corresponding elements in v were inserted before the indices, the order of a would be preserved. Assuming that a is sorted:
side returned index i satisfies
left a[i-1] < v <= a[i]
right a[i-1] <= v < a[i] Parameters
a1-D array_like
Input array. If sorter is None, then it must be sorted in ascending order, otherwise sorter must be an array of indices that sort it.
varray_like
Values to insert into a.
side{‘left’, ‘right’}, optional
If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of a).
sorter1-D array_like, optional
Optional array of integer indices that sort array a into ascending order. They are typically the result of argsort. New in version 1.7.0. Returns
indicesint or array of ints
Array of insertion points with the same shape as v, or an integer if v is a scalar. See also sort
Return a sorted copy of an array. histogram
Produce histogram from 1-D data. Notes Binary search is used to find the required insertion points. As of NumPy 1.4.0 searchsorted works with real/complex arrays containing nan values. The enhanced sort order is documented in sort. This function uses the same algorithm as the builtin python bisect.bisect_left (side='left') and bisect.bisect_right (side='right') functions, which is also vectorized in the v argument. Examples >>> np.searchsorted([1,2,3,4,5], 3)
2
>>> np.searchsorted([1,2,3,4,5], 3, side='right')
3
>>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
array([0, 5, 1, 2]) | |
doc_28726 |
If pass_through is True, all ancestors will always be invalidated, even if 'self' is already invalid. | |
doc_28727 | Joins the applied CAN filters such that only CAN frames that match all given CAN filters are passed to user space. This constant is documented in the Linux documentation. Availability: Linux >= 4.1. New in version 3.9. | |
doc_28728 | See Migration guide for more details. tf.compat.v1.estimator.StopAtStepHook, tf.compat.v1.train.StopAtStepHook
tf.estimator.StopAtStepHook(
num_steps=None, last_step=None
)
Args
num_steps Number of steps to execute.
last_step Step after which to stop.
Raises
ValueError If one of the arguments is invalid. Methods after_create_session View source
after_create_session(
session, coord
)
Called when a new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences from the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains results of requested ops/tensors by before_run(). The run_context argument is the same one sent to the before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exceptions then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors, the TensorFlow Session. At this point graph is finalized and you can not add ops.
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks can not modify the graph anymore. Second call of begin() on the same graph, should not change the graph. end View source
end(
session
)
Called at the end of session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will be soon closed. | |
doc_28729 |
Copy properties from other to self. | |
doc_28730 |
Exports an EventList as a Chrome tracing tools file. The checkpoint can be later loaded and inspected under chrome://tracing URL. Parameters
path (str) – Path where the trace will be written. | |
doc_28731 | Alias for torch.acos(). | |
doc_28732 | A data structure of functions to call at the end of each request, in the format {scope: [functions]}. The scope key is the name of a blueprint the functions are active for, or None for all requests. To register a function, use the after_request() decorator. This data structure is internal. It should not be modified directly and its format may change at any time. | |
doc_28733 |
Calculate the ewm (exponential weighted moment) sample correlation. Parameters
other:Series or DataFrame, optional
If not supplied then will default to self and produce pairwise output.
pairwise:bool, default None
If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndex DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. **kwargs
For NumPy compatibility and will not have an effect on the result. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm
Calling ewm with Series data. pandas.DataFrame.ewm
Calling ewm with DataFrames. pandas.Series.corr
Aggregating corr for Series. pandas.DataFrame.corr
Aggregating corr for DataFrame. | |
doc_28734 | Show my ACLs for a mailbox (i.e. the rights that I have on mailbox). | |
doc_28735 |
Return input with invalid data masked and replaced by a fill value. Invalid data means values of nan, inf, etc. Parameters
aarray_like
Input array, a (subclass of) ndarray.
masksequence, optional
Mask. Must be convertible to an array of booleans with the same shape as data. True indicates a masked (i.e. invalid) data.
copybool, optional
Whether to use a copy of a (True) or to fix a in place (False). Default is True.
fill_valuescalar, optional
Value used for fixing invalid data. Default is None, in which case the a.fill_value is used. Returns
bMaskedArray
The input array with invalid entries fixed. Notes A copy is performed by default. Examples >>> x = np.ma.array([1., -1, np.nan, np.inf], mask=[1] + [0]*3)
>>> x
masked_array(data=[--, -1.0, nan, inf],
mask=[ True, False, False, False],
fill_value=1e+20)
>>> np.ma.fix_invalid(x)
masked_array(data=[--, -1.0, --, --],
mask=[ True, False, True, True],
fill_value=1e+20)
>>> fixed = np.ma.fix_invalid(x)
>>> fixed.data
array([ 1.e+00, -1.e+00, 1.e+20, 1.e+20])
>>> x.data
array([ 1., -1., nan, inf]) | |
doc_28736 | See Migration guide for more details. tf.compat.v1.string_to_hash_bucket_fast, tf.compat.v1.strings.to_hash_bucket_fast
tf.strings.to_hash_bucket_fast(
input, num_buckets, name=None
)
The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong. Examples:
tf.strings.to_hash_bucket_fast(["Hello", "TensorFlow", "2.x"], 3).numpy()
array([0, 2, 2])
Args
input A Tensor of type string. The strings to assign a hash bucket.
num_buckets An int that is >= 1. The number of buckets.
name A name for the operation (optional).
Returns A Tensor of type int64. | |
doc_28737 | See Migration guide for more details. tf.compat.v1.raw_ops.For
tf.raw_ops.For(
    start, limit, delta, input, body, name=None
)
output = input;
for i in range(start, limit, delta)
output = body(i, output);
Args
start A Tensor of type int32. The lower bound. An int32
limit A Tensor of type int32. The upper bound. An int32
delta A Tensor of type int32. The increment. An int32
input A list of Tensor objects. A list of input tensors whose types are T.
body A function decorated with @Defun. A function that takes a list of tensors (int32, T) and returns another list of tensors (T).
name A name for the operation (optional).
Returns A list of Tensor objects. Has the same type as input. | |
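The loop semantics described above can be sketched in plain Python (a hypothetical `for_op` helper for illustration only, not part of TensorFlow):

```python
# Pure-Python reference for the semantics of tf.raw_ops.For:
#   output = input
#   for i in range(start, limit, delta): output = body(i, output)
def for_op(start, limit, delta, inputs, body):
    """Run `body` over the counter range, threading the tensor list through."""
    output = inputs
    for i in range(start, limit, delta):
        output = body(i, output)
    return output

# Example: add the loop counter to every element on each iteration.
result = for_op(0, 5, 1, [0, 100], lambda i, ts: [t + i for t in ts])
print(result)  # each element gains 0+1+2+3+4 = 10, i.e. [10, 110]
```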
doc_28738 | tf.compat.v1.to_float(
x, name='ToFloat'
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.cast instead.
Args
x A Tensor or SparseTensor or IndexedSlices.
name A name for the operation (optional).
Returns A Tensor or SparseTensor or IndexedSlices with same shape as x with type float32.
Raises
TypeError If x cannot be cast to the float32. | |
doc_28739 | Extract all members from the archive to the current working directory or directory path. If optional members is given, it must be a subset of the list returned by getmembers(). Directory information like owner, modification time and permissions are set after all members have been extracted. This is done to work around two problems: A directory’s modification time is reset each time a file is created in it. And, if a directory’s permissions do not allow writing, extracting files to it will fail. If numeric_owner is True, the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Warning Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of path, e.g. members that have absolute filenames starting with "/" or filenames with two dots "..". Changed in version 3.5: Added the numeric_owner parameter. Changed in version 3.6: The path parameter accepts a path-like object. | |
doc_28740 | class sklearn.metrics.DetCurveDisplay(*, fpr, fnr, estimator_name=None, pos_label=None) [source]
DET curve visualization. It is recommended to use plot_det_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. New in version 0.24. Parameters
fprndarray
False positive rate.
fnrndarray
False negative rate.
estimator_namestr, default=None
Name of estimator. If None, the estimator name is not shown.
pos_labelstr or int, default=None
The label of the positive class. Attributes
line_matplotlib Artist
DET Curve.
ax_matplotlib Axes
Axes with DET Curve.
figure_matplotlib Figure
Figure containing the curve. See also
det_curve
Compute error rates for different probability thresholds.
plot_det_curve
Plot detection error tradeoff (DET) curve. Examples >>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([0, 0, 1, 1])
>>> pred = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, fnr, thresholds = metrics.det_curve(y, pred)
>>> display = metrics.DetCurveDisplay(
... fpr=fpr, fnr=fnr, estimator_name='example estimator'
... )
>>> display.plot()
>>> plt.show()
Methods
plot([ax, name]) Plot visualization.
plot(ax=None, *, name=None, **kwargs) [source]
Plot visualization. Parameters
axmatplotlib axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
namestr, default=None
Name of DET curve for labeling. If None, use the name of the estimator. Returns
displayDetCurveDisplay
Object that stores computed values. | |
doc_28741 | Return a tuple containing all schemes currently supported in sysconfig. | |
doc_28742 |
Calculate the Hausdorff distance between nonzero elements of given images. The Hausdorff distance [1] is the maximum distance between any point on image0 and its nearest point on image1, and vice-versa. Parameters
image0, image1ndarray
Arrays where True represents a point that is included in a set of points. Both arrays must have the same shape. Returns
distancefloat
The Hausdorff distance between coordinates of nonzero pixels in image0 and image1, using the Euclidean distance. References
1
http://en.wikipedia.org/wiki/Hausdorff_distance Examples >>> points_a = (3, 0)
>>> points_b = (6, 0)
>>> shape = (7, 1)
>>> image_a = np.zeros(shape, dtype=bool)
>>> image_b = np.zeros(shape, dtype=bool)
>>> image_a[points_a] = True
>>> image_b[points_b] = True
>>> hausdorff_distance(image_a, image_b)
3.0 | |
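The definition above (the maximum over nearest-point distances, taken in both directions) can be sketched in plain Python, without skimage:

```python
from math import dist  # Python 3.8+

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two finite point sets."""
    def directed(src, dst):
        # For each point in src, find its nearest neighbour in dst,
        # then take the worst (largest) of those nearest distances.
        return max(min(dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

# The sets from the example above: single nonzero pixels at rows 3 and 6.
d = hausdorff([(3, 0)], [(6, 0)])
print(d)  # 3.0
```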
doc_28743 |
Computes a partial inverse of MaxPool2d. See MaxUnpool2d for details. | |
doc_28744 |
Error raised for unsupported Numba engine routines. | |
doc_28745 |
Transform via the mapping y = x^exponent.
doc_28746 |
Return local gradient of an image (i.e. local maximum - local minimum). Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. | |
doc_28747 | operator.__xor__(a, b)
Return the bitwise exclusive or of a and b. | |
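A quick check that operator.__xor__ mirrors the ^ operator:

```python
import operator

# operator.xor (alias operator.__xor__) behaves exactly like a ^ b.
result = operator.xor(0b1100, 0b1010)
print(result)  # 6, i.e. 0b0110
```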
doc_28748 | List of all bands of the source, as GDALBand instances. >>> rst = GDALRaster({"width": 1, "height": 2, 'srid': 4326,
... "bands": [{"data": [0, 1]}, {"data": [2, 3]}]})
>>> len(rst.bands)
2
>>> rst.bands[1].data()
array([[ 2., 3.]], dtype=float32) | |
doc_28749 |
Return whether the x-axis is autoscaled. | |
doc_28750 | Reallocate storage for a curses window to adjust its dimensions to the specified values. If either dimension is larger than the current values, the window’s data is filled with blanks that have the current background rendition (as set by bkgdset()) merged into them. | |
doc_28751 | See Migration guide for more details. tf.compat.v1.raw_ops.AvgPool3DGrad
tf.raw_ops.AvgPool3DGrad(
orig_input_shape, grad, ksize, strides, padding, data_format='NDHWC',
name=None
)
Args
orig_input_shape A Tensor of type int32. The original input dimensions.
grad A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Output backprop of shape [batch, depth, rows, cols, channels].
ksize A list of ints that has length >= 5. 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have ksize[0] = ksize[4] = 1.
strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
data_format An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
name A name for the operation (optional).
Returns A Tensor. Has the same type as grad. | |
doc_28752 | See Migration guide for more details. tf.compat.v1.raw_ops.InitializeTableFromTextFile
tf.raw_ops.InitializeTableFromTextFile(
table_handle, filename, key_index, value_index, vocab_size=-1,
delimiter='\t', name=None
)
It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the split line based on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index. A value of -1 means use the line number (starting from zero); expects int64. A value of -2 means use the whole line content; expects string. A value >= 0 means use the index (starting at zero) of the split line based on delimiter.
Args
table_handle A Tensor of type mutable string. Handle to a table which will be initialized.
filename A Tensor of type string. Filename of a vocabulary text file.
key_index An int that is >= -2. Column index in a line to get the table key values from.
value_index An int that is >= -2. Column index that represents information of a line to get the table value values from.
vocab_size An optional int that is >= -1. Defaults to -1. Number of elements of the file, use -1 if unknown.
delimiter An optional string. Defaults to "\t". Delimiter to separate fields in a line.
name A name for the operation (optional).
Returns The created Operation. | |
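The extraction rules for key_index / value_index can be sketched in plain Python (a hypothetical `extract` helper, no TensorFlow required):

```python
# Sketch of the per-line extraction rules described above:
#   index == -1 -> line number, index == -2 -> whole line,
#   index >= 0 -> field of the line split on `delimiter`.
def extract(lines, index, delimiter="\t"):
    out = []
    for lineno, line in enumerate(lines):
        if index == -1:
            out.append(lineno)
        elif index == -2:
            out.append(line)
        else:
            out.append(line.split(delimiter)[index])
    return out

vocab = ["emerson\t10", "lake\t20", "palmer\t30"]
keys = extract(vocab, 0)     # field 0 of each line
values = extract(vocab, -1)  # line numbers
print(keys, values)  # ['emerson', 'lake', 'palmer'] [0, 1, 2]
```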
doc_28753 | See Migration guide for more details. tf.compat.v1.raw_ops.IFFT
tf.raw_ops.IFFT(
input, name=None
)
Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of input.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
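For intuition, the transform can be written out as a naive O(n^2) inverse DFT in pure Python (illustrative only; the op itself uses an FFT):

```python
import cmath

def dft(x):
    """Naive forward discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(x):
    """Naive inverse discrete Fourier transform (note the 1/n scaling)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# Round-tripping a signal through dft then idft recovers it.
signal = [1 + 0j, 2 + 0j, 3 + 0j, 4 + 0j]
roundtrip = idft(dft(signal))
print([round(v.real, 6) for v in roundtrip])  # [1.0, 2.0, 3.0, 4.0]
```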
doc_28754 |
Return an xarray object from the pandas object. Returns
xarray.DataArray or xarray.Dataset
Data in the pandas structure converted to Dataset if the object is a DataFrame, or a DataArray if the object is a Series. See also DataFrame.to_hdf
Write DataFrame to an HDF5 file. DataFrame.to_parquet
Write a DataFrame to the binary parquet format. Notes See the xarray docs Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2),
... ('parrot', 'bird', 24.0, 2),
... ('lion', 'mammal', 80.5, 4),
... ('monkey', 'mammal', np.nan, 4)],
... columns=['name', 'class', 'max_speed',
... 'num_legs'])
>>> df
name class max_speed num_legs
0 falcon bird 389.0 2
1 parrot bird 24.0 2
2 lion mammal 80.5 4
3 monkey mammal NaN 4
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 4)
Coordinates:
* index (index) int64 0 1 2 3
Data variables:
name (index) object 'falcon' 'parrot' 'lion' 'monkey'
class (index) object 'bird' 'bird' 'mammal' 'mammal'
max_speed (index) float64 389.0 24.0 80.5 nan
num_legs (index) int64 2 2 4 4
>>> df['max_speed'].to_xarray()
<xarray.DataArray 'max_speed' (index: 4)>
array([389. , 24. , 80.5, nan])
Coordinates:
* index (index) int64 0 1 2 3
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-01',
... '2018-01-02', '2018-01-02'])
>>> df_multiindex = pd.DataFrame({'date': dates,
... 'animal': ['falcon', 'parrot',
... 'falcon', 'parrot'],
... 'speed': [350, 18, 361, 15]})
>>> df_multiindex = df_multiindex.set_index(['date', 'animal'])
>>> df_multiindex
speed
date animal
2018-01-01 falcon 350
parrot 18
2018-01-02 falcon 361
parrot 15
>>> df_multiindex.to_xarray()
<xarray.Dataset>
Dimensions: (animal: 2, date: 2)
Coordinates:
* date (date) datetime64[ns] 2018-01-01 2018-01-02
* animal (animal) object 'falcon' 'parrot'
Data variables:
speed (date, animal) int64 350 18 361 15 | |
doc_28755 |
Return item and drop from frame. Raise KeyError if not found. Parameters
item:label
Label of column to be popped. Returns
Series
Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0),
... ('parrot', 'bird', 24.0),
... ('lion', 'mammal', 80.5),
... ('monkey', 'mammal', np.nan)],
... columns=('name', 'class', 'max_speed'))
>>> df
name class max_speed
0 falcon bird 389.0
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
>>> df.pop('class')
0 bird
1 bird
2 mammal
3 mammal
Name: class, dtype: object
>>> df
name max_speed
0 falcon 389.0
1 parrot 24.0
2 lion 80.5
3 monkey NaN | |
doc_28756 | find a specific font on the system match_font(name, bold=False, italic=False) -> path Returns the full path to a font file on the system. If bold or italic are set to true, this will attempt to find the correct family of font. The font name can also be an iterable of font names, a string of comma-separated font names, or a bytes of comma-separated font names, in which case the set of names will be searched in order. If none of the given names are found, None is returned. New in pygame 2.0.1: Accept an iterable of font names. Example: print pygame.font.match_font('bitstreamverasans')
# output is: /usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf
# (but only if you have Vera on your system) | |
doc_28757 | Returns True if value is naive, False if it is aware. This function assumes that value is a datetime. | |
doc_28758 |
Set the snapping behavior. Snapping aligns positions with the pixel grid, which results in clearer images. For example, if a black line of 1px width was defined at a position in between two pixels, the resulting image would contain the interpolated value of that line in the pixel grid, which would be a grey value on both adjacent pixel positions. In contrast, snapping will move the line to the nearest integer pixel value, so that the resulting image will really contain a 1px wide black line. Snapping is currently only supported by the Agg and MacOSX backends. Parameters
snapbool or None
Possible values:
True: Snap vertices to the nearest pixel center.
False: Do not modify vertex positions.
None: (auto) If the path contains only rectilinear line segments, round to the nearest pixel center. | |
doc_28759 | Make the response object ready to be pickled. Does the following: Buffer the response into a list, ignoring implicity_sequence_conversion and direct_passthrough. Set the Content-Length header. Generate an ETag header if one is not already set. Changed in version 2.0: An ETag header is added, the no_etag parameter is deprecated and will be removed in Werkzeug 2.1. Changelog Changed in version 0.6: The Content-Length header is set. Parameters
no_etag (None) – Return type
None | |
doc_28760 | See Migration guide for more details. tf.compat.v1.raw_ops.Exit
tf.raw_ops.Exit(
data, name=None
)
Exit makes its input data available to the parent frame.
Args
data A Tensor. The tensor to be made available to the parent frame.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | |
doc_28761 | Returns a new instance of the SocketHandler class intended to communicate with a remote machine whose address is given by host and port. Changed in version 3.4: If port is specified as None, a Unix domain socket is created using the value in host - otherwise, a TCP socket is created.
close()
Closes the socket.
emit()
Pickles the record’s attribute dictionary and writes it to the socket in binary format. If there is an error with the socket, silently drops the packet. If the connection was previously lost, re-establishes the connection. To unpickle the record at the receiving end into a LogRecord, use the makeLogRecord() function.
handleError()
Handles an error which has occurred during emit(). The most likely cause is a lost connection. Closes the socket so that we can retry on the next event.
makeSocket()
This is a factory method which allows subclasses to define the precise type of socket they want. The default implementation creates a TCP socket (socket.SOCK_STREAM).
makePickle(record)
Pickles the record’s attribute dictionary in binary format with a length prefix, and returns it ready for transmission across the socket. The details of this operation are equivalent to: data = pickle.dumps(record_attr_dict, 1)
datalen = struct.pack('>L', len(data))
return datalen + data
Note that pickles aren’t completely secure. If you are concerned about security, you may want to override this method to implement a more secure mechanism. For example, you can sign pickles using HMAC and then verify them on the receiving end, or alternatively you can disable unpickling of global objects on the receiving end.
send(packet)
Send a pickled byte-string packet to the socket. The format of the sent byte-string is as described in the documentation for makePickle(). This function allows for partial sends, which can happen when the network is busy.
createSocket()
Tries to create a socket; on failure, uses an exponential back-off algorithm. On initial failure, the handler will drop the message it was trying to send. When subsequent messages are handled by the same instance, it will not try connecting until some time has passed. The default parameters are such that the initial delay is one second, and if after that delay the connection still can’t be made, the handler will double the delay each time up to a maximum of 30 seconds. This behaviour is controlled by the following handler attributes:
retryStart (initial delay, defaulting to 1.0 seconds).
retryFactor (multiplier, defaulting to 2.0).
retryMax (maximum delay, defaulting to 30.0 seconds). This means that if the remote listener starts up after the handler has been used, you could lose messages (since the handler won’t even attempt a connection until the delay has elapsed, but just silently drop messages during the delay period). | |
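The framing performed by makePickle() and its inverse at the receiving end can be demonstrated with the stdlib alone:

```python
import pickle
import struct

# Frame a record dict the way makePickle() does: a 4-byte big-endian
# length prefix followed by the pickled payload.
record_attr_dict = {"msg": "disk full", "levelname": "ERROR"}
data = pickle.dumps(record_attr_dict, 1)
packet = struct.pack('>L', len(data)) + data

# Parse it back the way a receiver would: read the length, then unpickle.
(payload_len,) = struct.unpack('>L', packet[:4])
restored = pickle.loads(packet[4:4 + payload_len])
print(restored["msg"])  # disk full
```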
doc_28762 | See Migration guide for more details. tf.compat.v1.train.queue_runner.QueueRunner
tf.compat.v1.train.QueueRunner(
queue=None, enqueue_ops=None, close_op=None, cancel_op=None,
queue_closed_exception_types=None, queue_runner_def=None, import_scope=None
)
Queues are a convenient TensorFlow mechanism to compute tensors asynchronously using multiple threads. For example in the canonical 'Input Reader' setup one set of threads generates filenames in a queue; a second set of threads read records from the files, processes them, and enqueues tensors on a second queue; a third set of threads dequeues these input records to construct batches and runs them through training operations. There are several delicate issues when running multiple threads that way: closing the queues in sequence as the input is exhausted, correctly catching and reporting exceptions, etc. The QueueRunner, combined with the Coordinator, helps handle these issues.
Args
queue A Queue.
enqueue_ops List of enqueue ops to run in threads later.
close_op Op to close the queue. Pending enqueue ops are preserved.
cancel_op Op to close the queue and cancel pending enqueue ops.
queue_closed_exception_types Optional tuple of Exception types that indicate that the queue has been closed when raised during an enqueue operation. Defaults to (tf.errors.OutOfRangeError,). Another common case includes (tf.errors.OutOfRangeError, tf.errors.CancelledError), when some of the enqueue ops may dequeue from other Queues.
queue_runner_def Optional QueueRunnerDef protocol buffer. If specified, recreates the QueueRunner from its contents. queue_runner_def and the other arguments are mutually exclusive.
import_scope Optional string. Name scope to add. Only used when initializing from protocol buffer.
Raises
ValueError If both queue_runner_def and queue are both specified.
ValueError If queue or enqueue_ops are not provided when not restoring from queue_runner_def.
RuntimeError If eager execution is enabled. Eager Compatibility QueueRunners are not compatible with eager execution. Instead, please use tf.data to get data into your model.
Attributes
cancel_op
close_op
enqueue_ops
exceptions_raised Exceptions raised but not handled by the QueueRunner threads. Exceptions raised in queue runner threads are handled in one of two ways depending on whether or not a Coordinator was passed to create_threads(): With a Coordinator, exceptions are reported to the coordinator and forgotten by the QueueRunner. Without a Coordinator, exceptions are captured by the QueueRunner and made available in this exceptions_raised property.
name The string name of the underlying Queue.
queue
queue_closed_exception_types
Methods create_threads View source
create_threads(
sess, coord=None, daemon=False, start=False
)
Create threads to run the enqueue ops for the given session. This method requires a session in which the graph was launched. It creates a list of threads, optionally starting them. There is one thread for each op passed in enqueue_ops. The coord argument is an optional coordinator that the threads will use to terminate together and report exceptions. If a coordinator is given, this method starts an additional thread to close the queue when the coordinator requests a stop. If previously created threads for the given session are still running, no new threads will be created.
Args
sess A Session.
coord Optional Coordinator object for reporting errors and checking stop conditions.
daemon Boolean. If True make the threads daemon threads.
start Boolean. If True starts the threads. If False the caller must call the start() method of the returned threads.
Returns A list of threads.
from_proto View source
@staticmethod
from_proto(
queue_runner_def, import_scope=None
)
Returns a QueueRunner object created from queue_runner_def. to_proto View source
to_proto(
export_scope=None
)
Converts this QueueRunner to a QueueRunnerDef protocol buffer.
Args
export_scope Optional string. Name scope to remove.
Returns A QueueRunnerDef protocol buffer, or None if the Variable is not in the specified name scope. | |
doc_28763 |
Down/up samples the input to either the given size or the given scale_factor The algorithm used for interpolation is determined by mode. Currently temporal, spatial and volumetric sampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. The modes available for resizing are: nearest, linear (3D-only), bilinear, bicubic (4D-only), trilinear (5D-only), area Parameters
input (Tensor) – the input tensor
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str) – algorithm used for upsampling: 'nearest' | 'linear' | 'bilinear' | 'bicubic' | 'trilinear' | 'area'. Default: 'nearest'
align_corners (bool, optional) – Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: False
recompute_scale_factor (bool, optional) – recompute the scale_factor for use in the interpolation calculation. When scale_factor is passed as a parameter, it is used to compute the output_size. If recompute_scale_factor is False or not specified, the passed-in scale_factor will be used in the interpolation computation. Otherwise, a new scale_factor will be computed based on the output and input sizes for use in the interpolation computation (i.e. the computation will be identical to if the computed output_size were passed-in explicitly). Note that when scale_factor is floating-point, the recomputed scale_factor may differ from the one passed in due to rounding and precision issues. Note With mode='bicubic', it’s possible to cause overshoot, in other words it can produce negative values or values greater than 255 for images. Explicitly call result.clamp(min=0, max=255) if you want to reduce the overshoot when displaying the image. Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs. Warning When scale_factor is specified, if recompute_scale_factor=True, scale_factor is used to compute the output_size which will then be used to infer new scales for the interpolation. The default behavior for recompute_scale_factor changed to False in 1.6.0, and scale_factor is used in the interpolation calculation. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. | |
doc_28764 |
Returns True if a certain feature exists and is covered within _Config.conf_features. Parameters
name: str
feature name in uppercase.
doc_28765 | See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingMomentumParameters
tf.raw_ops.RetrieveTPUEmbeddingMomentumParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, momenta). parameters A Tensor of type float32.
momenta A Tensor of type float32. | |
doc_28766 |
Write the figure to a PNG file. Parameters
filename_or_objstr or path-like or file-like
The file to write to.
metadatadict, optional
Metadata in the PNG file as key-value pairs of bytes or latin-1 encodable strings. According to the PNG specification, keys must be shorter than 79 chars. The PNG specification defines some common keywords that may be used as appropriate: Title: Short (one line) title or caption for image. Author: Name of image's creator. Description: Description of image (possibly long). Copyright: Copyright notice. Creation Time: Time of original image creation (usually RFC 1123 format). Software: Software used to create the image. Disclaimer: Legal disclaimer. Warning: Warning of nature of content. Source: Device used to create the image. Comment: Miscellaneous comment; conversion from other image format. Other keywords may be invented for other purposes. If 'Software' is not given, an autogenerated value for Matplotlib will be used. This can be removed by setting it to None. For more details see the PNG specification.
pil_kwargsdict, optional
Keyword arguments passed to PIL.Image.Image.save. If the 'pnginfo' key is present, it completely overrides metadata, including the default 'Software' key. | |
doc_28767 | Returns a datetime of the last modified time of the file. For storage systems unable to return the last modified time this will raise NotImplementedError. If USE_TZ is True, returns an aware datetime, otherwise returns a naive datetime in the local timezone. | |
doc_28768 | In-place version of not_equal(). | |
doc_28769 | Assign a ctypes type to specify the result type of the foreign function. Use None for void, a function not returning anything. It is possible to assign a callable Python object that is not a ctypes type, in this case the function is assumed to return a C int, and the callable will be called with this integer, allowing further processing or error checking. Using this is deprecated, for more flexible post processing or error checking use a ctypes data type as restype and assign a callable to the errcheck attribute. | |
doc_28770 | Return True if this entry is a file or a symbolic link pointing to a file; return False if the entry is or points to a directory or other non-file entry, or if it doesn’t exist anymore. If follow_symlinks is False, return True only if this entry is a file (without following symlinks); return False if the entry is a directory or other non-file entry, or if it doesn’t exist anymore. The result is cached on the os.DirEntry object. Caching, system calls made, and exceptions raised are as per is_dir(). | |
doc_28771 | tf.optimizers.get Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.get
tf.keras.optimizers.get(
identifier
)
Arguments
identifier Optimizer identifier, one of: String: name of an optimizer. Dictionary: configuration dictionary. Keras Optimizer instance (it will be returned unchanged). TensorFlow Optimizer instance (it will be wrapped as a Keras Optimizer).
Returns A Keras Optimizer instance.
Raises
ValueError If identifier cannot be interpreted. | |
doc_28772 |
Applies weight normalization to a parameter in the given module: w = g * v / ||v||
Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by name (e.g. 'weight') with two parameters: one specifying the magnitude (e.g. 'weight_g') and one specifying the direction (e.g. 'weight_v'). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before every forward() call. By default, with dim=0, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, use dim=None. See https://arxiv.org/abs/1602.07868 Parameters
module (Module) – containing module
name (str, optional) – name of weight parameter
dim (int, optional) – dimension over which to compute the norm Returns
The original module with the weight norm hook Example: >>> m = weight_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20]) | |
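The reparameterization w = g * v / ||v|| itself is simple arithmetic; a pure-Python sketch for a single weight row (illustrative, not the actual PyTorch hook, which applies this per output channel):

```python
from math import sqrt

# Recompute a weight row from its magnitude g and direction v.
def weight_from_norm(g, v):
    norm = sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

# ||v|| = 5, so the direction (3, 4)/5 is rescaled to magnitude 10.
w = weight_from_norm(10.0, [3.0, 4.0])
print(w)  # [6.0, 8.0]
```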
doc_28773 |
Scalar method identical to the corresponding array attribute. Please see ndarray.trace. | |
doc_28774 | Return a tuple (real_value, coded_value) from a string representation. real_value can be any type. This method does no decoding in BaseCookie — it exists so it can be overridden. | |
doc_28775 | See torch.all() | |
doc_28776 |
The real part of the array. See also numpy.real
equivalent function Examples >>> x = np.sqrt([1+0j, 0+1j])
>>> x.real
array([ 1. , 0.70710678])
>>> x.real.dtype
dtype('float64') | |
doc_28777 |
Remove a callback based on its observer id. See also add_callback | |
doc_28778 | See Migration guide for more details. tf.compat.v1.nn.with_space_to_batch
tf.nn.with_space_to_batch(
input, dilation_rate, padding, op, filter_shape=None, spatial_dims=None,
data_format=None
)
This has the effect of transforming sliding window operations into the corresponding "atrous" operation in which the input is sampled at the specified dilation_rate. In the special case that dilation_rate is uniformly 1, this simply returns: op(input, num_spatial_dims, padding) Otherwise, it returns: batch_to_space_nd( op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings), num_spatial_dims, "VALID"), adjusted_dilation_rate, adjusted_crops), where: adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)], adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2] defined as follows: We first define two int64 tensors paddings and crops of shape [num_spatial_dims, 2] based on the value of padding and the spatial dimensions of the input: If padding = "VALID", then: paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate) If padding = "SAME", then: dilated_filter_shape = filter_shape + (filter_shape - 1) * (dilation_rate - 1) paddings, crops = required_space_to_batch_paddings( input_shape[spatial_dims], dilation_rate, [(dilated_filter_shape - 1) // 2, dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2]) Because space_to_batch_nd and batch_to_space_nd assume that the spatial dimensions are contiguous starting at the second dimension, but the specified spatial_dims may not be, we must adjust dilation_rate, paddings and crops in order to be usable with these operations. For a given dimension, if the block size is 1, and both the starting and ending padding and crop amounts are 0, then space_to_batch_nd effectively leaves that dimension alone, which is what is needed for dimensions not part of spatial_dims. Furthermore, space_to_batch_nd and batch_to_space_nd handle this case efficiently for any number of leading and trailing dimensions.
For 0 <= i < len(spatial_dims), we assign: adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i] adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :] adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :] All unassigned values of adjusted_dilation_rate default to 1, while all unassigned values of adjusted_paddings and adjusted_crops default to 0. Note in the case that dilation_rate is not uniformly 1, specifying "VALID" padding is equivalent to specifying padding = "SAME" with a filter_shape of [1]*N. Advanced usage. Note the following optimization: A sequence of with_space_to_batch operations with identical (not uniformly 1) dilation_rate parameters and "VALID" padding net = with_space_to_batch(net, dilation_rate, "VALID", op_1) ... net = with_space_to_batch(net, dilation_rate, "VALID", op_k) can be combined into a single with_space_to_batch operation as follows: def combined_op(converted_input, num_spatial_dims, _): result = op_1(converted_input, num_spatial_dims, "VALID") ... result = op_k(result, num_spatial_dims, "VALID") net = with_space_to_batch(net, dilation_rate, "VALID", combined_op) This eliminates the overhead of k-1 calls to space_to_batch_nd and batch_to_space_nd. Similarly, a sequence of with_space_to_batch operations with identical (not uniformly 1) dilation_rate parameters, "SAME" padding, and odd filter dimensions net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1) ... net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k) can be combined into a single with_space_to_batch operation as follows: def combined_op(converted_input, num_spatial_dims, _): result = op_1(converted_input, num_spatial_dims, "SAME") ... result = op_k(result, num_spatial_dims, "SAME") net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
Args
input Tensor of rank > max(spatial_dims).
dilation_rate int32 Tensor of known shape [num_spatial_dims].
padding str constant equal to "VALID" or "SAME"
op Function that maps (input, num_spatial_dims, padding) -> output
filter_shape If padding = "SAME", specifies the shape of the convolution kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims]. If padding = "VALID", filter_shape is ignored and need not be specified.
spatial_dims Monotonically increasing sequence of num_spatial_dims integers (which are >= 1) specifying the spatial dimensions of input and output. Defaults to: range(1, num_spatial_dims+1).
data_format A string or None. Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), or the second dimension (if data_format starts with "NC"). For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For N=3, the valid values are "NDHWC" (default) and "NCDHW".
Returns The output Tensor as described above; dimensions will vary based on the op provided.
Raises
ValueError if padding is invalid or the arguments are incompatible.
ValueError if spatial_dims are invalid. | |
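As a sketch of how op plugs in, the following (assuming tf.nn.pool as the wrapped sliding-window operation, an arbitrary window of 3, and a small 1-D input chosen purely for illustration) turns an ordinary average pool into its "atrous" counterpart:

```python
import numpy as np
import tensorflow as tf

# pool_op matches the required (input, num_spatial_dims, padding) -> output
# signature. Window size 3 is an arbitrary choice for this sketch.
def pool_op(converted_input, num_spatial_dims, padding):
    return tf.nn.pool(converted_input, window_shape=[3],
                      pooling_type="AVG", padding=padding)

x = tf.constant(np.arange(20, dtype=np.float32).reshape(1, 20, 1))
# With dilation_rate=[2], each pooling window samples every second element,
# implemented via space_to_batch_nd / batch_to_space_nd around pool_op.
y = tf.nn.with_space_to_batch(x, dilation_rate=[2], padding="VALID", op=pool_op)
# y[0, 0, 0] averages x[0], x[2], x[4] -> (0 + 2 + 4) / 3 = 2.0
```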
doc_28779 | See Migration guide for more details. tf.compat.v1.get_static_value
tf.get_static_value(
tensor, partial=False
)
This function attempts to partially evaluate the given tensor, and returns its value as a numpy ndarray if this succeeds. Compatibility(V1): If constant_value(tensor) returns a non-None result, it will no longer be possible to feed a different value for tensor. This allows the result of this function to influence the graph that is constructed, and permits static shape optimizations.
Args
tensor The Tensor to be evaluated.
partial If True, the returned numpy array is allowed to have partially evaluated values. Values that can't be evaluated will be None.
Returns A numpy ndarray containing the constant value of the given tensor, or None if it cannot be calculated.
Raises
TypeError if tensor is not an ops.Tensor. | |
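For instance, a tensor computed purely from constants can be folded at construction time:

```python
import tensorflow as tf

# `b` is a pure function of constants, so its value is known while the
# graph is being built; get_static_value folds it without executing a session.
a = tf.constant([1, 2, 3])
b = a * 2
value = tf.get_static_value(b)  # -> array([2, 4, 6], dtype=int32)
```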
doc_28780 | See Migration guide for more details. tf.compat.v1.raw_ops.MaxPoolWithArgmax
tf.raw_ops.MaxPoolWithArgmax(
input, ksize, strides, padding, Targmax=tf.dtypes.int64,
include_batch_in_index=False, name=None
)
The indices in argmax are flattened, so that a maximum value at position [b, y, x, c] becomes flattened index: (y * width + x) * channels + c if include_batch_in_index is False; ((b * height + y) * width + x) * channels + c if include_batch_in_index is True. The indices returned are always in [0, height) x [0, width) before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, height, width, channels]. Input to pool over.
ksize A list of ints that has length >= 4. The size of the window for each dimension of the input tensor.
strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
Targmax An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
include_batch_in_index An optional bool. Defaults to False. Whether to include batch dimension in flattened index of argmax.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, argmax). output A Tensor. Has the same type as input.
argmax A Tensor of type Targmax. | |
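A small worked example of the flattened argmax index (leaving include_batch_in_index at its default False, and using a single 2x2 window chosen for illustration):

```python
import tensorflow as tf

# 1x2x2x1 input laid out as [[1, 3], [2, 4]]: pooling the single 2x2 window
# picks 4.0 at position (y=1, x=1, c=0), whose flattened index is
# (y * width + x) * channels + c = (1 * 2 + 1) * 1 + 0 = 3.
x = tf.reshape(tf.constant([1.0, 3.0, 2.0, 4.0]), [1, 2, 2, 1])
output, argmax = tf.raw_ops.MaxPoolWithArgmax(
    input=x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
```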
doc_28781 |
Return a dataframe of the components (days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds) of the Timedeltas. Returns
DataFrame | |
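A short sketch on a hypothetical TimedeltaIndex, showing the resulting column layout:

```python
import pandas as pd

# components splits each Timedelta into its constituent fields:
# days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds.
tdi = pd.to_timedelta(["1 days 02:03:04.000005", "6 hours"])
df = tdi.components
# df.loc[0]: days=1, hours=2, minutes=3, seconds=4, microseconds=5
# df.loc[1]: hours=6, all other fields 0
```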
doc_28782 |
Return whether any element is Truthy. Parameters
*args
Required for compatibility with numpy. **kwargs
Required for compatibility with numpy. Returns
any:bool or array-like (if axis is specified)
A single element array-like may be converted to bool. See also Index.all
Return whether all elements are True. Series.all
Return whether all elements are True. Notes Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal to zero. Examples
>>> index = pd.Index([0, 1, 2])
>>> index.any()
True
>>> index = pd.Index([0, 0, 0])
>>> index.any()
False | |
doc_28783 | Get the current default floating point torch.dtype. Example: >>> torch.get_default_dtype() # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.get_default_dtype() # default is now changed to torch.float64
torch.float64
>>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this
>>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor
torch.float32 | |
doc_28784 | Adds the specified filter filter to this logger. | |
doc_28785 | Specifies that the STARTUPINFO.wShowWindow attribute contains additional information. | |
doc_28786 | skimage.filters.rank.autolevel(image, selem) Auto-level image using local histogram.
skimage.filters.rank.autolevel_percentile(…) Return greyscale local autolevel of an image.
skimage.filters.rank.bottomhat(image, selem) Local bottom-hat of an image.
skimage.filters.rank.enhance_contrast(image, …) Enhance contrast of an image.
skimage.filters.rank.enhance_contrast_percentile(…) Enhance contrast of an image.
skimage.filters.rank.entropy(image, selem[, …]) Local entropy.
skimage.filters.rank.equalize(image, selem) Equalize image using local histogram.
skimage.filters.rank.geometric_mean(image, selem) Return local geometric mean of an image.
skimage.filters.rank.gradient(image, selem) Return local gradient of an image (i.e.
skimage.filters.rank.gradient_percentile(…) Return local gradient of an image (i.e.
skimage.filters.rank.majority(image, selem, *) Majority filter assigns to each pixel the most occurring value within its neighborhood.
skimage.filters.rank.maximum(image, selem[, …]) Return local maximum of an image.
skimage.filters.rank.mean(image, selem[, …]) Return local mean of an image.
skimage.filters.rank.mean_bilateral(image, selem) Apply a flat kernel bilateral filter.
skimage.filters.rank.mean_percentile(image, …) Return local mean of an image.
skimage.filters.rank.median(image[, selem, …]) Return local median of an image.
skimage.filters.rank.minimum(image, selem[, …]) Return local minimum of an image.
skimage.filters.rank.modal(image, selem[, …]) Return local mode of an image.
skimage.filters.rank.noise_filter(image, selem) Noise feature.
skimage.filters.rank.otsu(image, selem[, …]) Local Otsu’s threshold value for each pixel.
skimage.filters.rank.percentile(image, selem) Return local percentile of an image.
skimage.filters.rank.pop(image, selem[, …]) Return the local number (population) of pixels.
skimage.filters.rank.pop_bilateral(image, selem) Return the local number (population) of pixels.
skimage.filters.rank.pop_percentile(image, selem) Return the local number (population) of pixels.
skimage.filters.rank.subtract_mean(image, selem) Return image subtracted from its local mean.
skimage.filters.rank.subtract_mean_percentile(…) Return image subtracted from its local mean.
skimage.filters.rank.sum(image, selem[, …]) Return the local sum of pixels.
skimage.filters.rank.sum_bilateral(image, selem) Apply a flat kernel bilateral filter.
skimage.filters.rank.sum_percentile(image, selem) Return the local sum of pixels.
skimage.filters.rank.threshold(image, selem) Local threshold of an image.
skimage.filters.rank.threshold_percentile(…) Local threshold of an image.
skimage.filters.rank.tophat(image, selem[, …]) Local top-hat of an image.
skimage.filters.rank.windowed_histogram(…) Normalized sliding window histogram.
autolevel
skimage.filters.rank.autolevel(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Auto-level image using local histogram. This filter locally stretches the histogram of gray values to cover the entire range of values from “white” to “black”. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import autolevel
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> auto = autolevel(img, disk(5))
>>> auto_vol = autolevel(volume, ball(5))
Examples using skimage.filters.rank.autolevel
Rank filters
autolevel_percentile
skimage.filters.rank.autolevel_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return greyscale local autolevel of an image. This filter locally stretches the histogram of greyvalues to cover the entire range of values from “white” to “black”. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
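This variant has no doctest in this section; a minimal sketch on a small synthetic uint8 image (an assumed input, not from the original docs) could be:

```python
import numpy as np
from skimage.filters.rank import autolevel_percentile
from skimage.morphology import disk

# p0/p1 discard the darkest and brightest 10% of greyvalues in each
# neighborhood before stretching the local histogram.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
out = autolevel_percentile(img, disk(2), p0=0.1, p1=0.9)
```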
Examples using skimage.filters.rank.autolevel_percentile
Rank filters
bottomhat
skimage.filters.rank.bottomhat(image, selem, out=None, mask=None, shift_x=False, shift_y=False) [source]
Local bottom-hat of an image. This filter computes the morphological closing of the image and then subtracts the result from the original image. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Warns
Deprecated:
New in version 0.17. This function is deprecated and will be removed in scikit-image 0.19. This filter was misnamed and we believe that the usefulness is narrow. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import bottomhat
>>> img = data.camera()
>>> out = bottomhat(img, disk(5))
enhance_contrast
skimage.filters.rank.enhance_contrast(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Enhance contrast of an image. This replaces each pixel by the local maximum if the pixel gray value is closer to the local maximum than the local minimum. Otherwise it is replaced by the local minimum. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import enhance_contrast
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = enhance_contrast(img, disk(5))
>>> out_vol = enhance_contrast(volume, ball(5))
Examples using skimage.filters.rank.enhance_contrast
Rank filters
enhance_contrast_percentile
skimage.filters.rank.enhance_contrast_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Enhance contrast of an image. This replaces each pixel by the local maximum if the pixel greyvalue is closer to the local maximum than the local minimum. Otherwise it is replaced by the local minimum. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
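No doctest accompanies this percentile variant; a minimal sketch on a synthetic uint8 image (an assumption for illustration) might look like:

```python
import numpy as np
from skimage.filters.rank import enhance_contrast_percentile
from skimage.morphology import disk

# Each pixel is pushed to the local max or min, computed only over
# greyvalues inside the [p0, p1] percentile band of its neighborhood.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
out = enhance_contrast_percentile(img, disk(2), p0=0.1, p1=0.9)
```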
Examples using skimage.filters.rank.enhance_contrast_percentile
Rank filters
entropy
skimage.filters.rank.entropy(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local entropy. The entropy is computed using base 2 logarithm i.e. the filter returns the minimum number of bits needed to encode the local gray level distribution. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (float)
Output image. References
1
https://en.wikipedia.org/wiki/Entropy_(information_theory) Examples >>> from skimage import data
>>> from skimage.filters.rank import entropy
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> ent = entropy(img, disk(5))
>>> ent_vol = entropy(volume, ball(5))
Examples using skimage.filters.rank.entropy
Tinting gray-scale images
Entropy
Rank filters
equalize
skimage.filters.rank.equalize(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Equalize image using local histogram. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import equalize
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> equ = equalize(img, disk(5))
>>> equ_vol = equalize(volume, ball(5))
Examples using skimage.filters.rank.equalize
Local Histogram Equalization
Rank filters
geometric_mean
skimage.filters.rank.geometric_mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local geometric mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
Gonzalez, R. C. and Wood, R. E. “Digital Image Processing (3rd Edition).” Prentice-Hall Inc, 2006. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = geometric_mean(img, disk(5))
>>> avg_vol = geometric_mean(volume, ball(5))
gradient
skimage.filters.rank.gradient(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local gradient of an image (i.e. local maximum - local minimum). Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import gradient
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = gradient(img, disk(5))
>>> out_vol = gradient(volume, ball(5))
Examples using skimage.filters.rank.gradient
Markers for watershed transform
Rank filters
gradient_percentile
skimage.filters.rank.gradient_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return local gradient of an image (i.e. local maximum - local minimum). Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
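This percentile variant lacks an Examples section; a minimal sketch on a synthetic uint8 image (an assumed input) could be:

```python
import numpy as np
from skimage.filters.rank import gradient_percentile
from skimage.morphology import disk

# Local (max - min), where both extrema are taken only over greyvalues
# inside the [p0, p1] percentile interval of each neighborhood.
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
out = gradient_percentile(img, disk(2), p0=0.1, p1=0.9)
```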
majority
skimage.filters.rank.majority(image, selem, *, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Majority filter assigns to each pixel the most occurring value within its neighborhood. Parameters
imagendarray
Image array (uint8, uint16 array).
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
outndarray (integer or float), optional
If None, a new array will be allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.filters.rank import majority
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> maj_img = majority(img, disk(5))
>>> maj_img_vol = majority(volume, ball(5))
maximum
skimage.filters.rank.maximum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local maximum of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.morphology.dilation
Notes The lower algorithm complexity makes skimage.filters.rank.maximum more efficient for larger images and structuring elements. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import maximum
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = maximum(img, disk(5))
>>> out_vol = maximum(volume, ball(5))
Examples using skimage.filters.rank.maximum
Rank filters
mean
skimage.filters.rank.mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local mean of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> avg = mean(img, disk(5))
>>> avg_vol = mean(volume, ball(5))
Examples using skimage.filters.rank.mean
Segment human cells (in mitosis)
Rank filters
mean_bilateral
skimage.filters.rank.mean_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Apply a flat kernel bilateral filter. This is an edge-preserving and noise reducing denoising filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element. Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element and having a greylevel inside this interval are averaged. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import mean_bilateral
>>> import numpy as np
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = mean_bilateral(img, disk(20), s0=10,s1=10)
Examples using skimage.filters.rank.mean_bilateral
Rank filters
mean_percentile
skimage.filters.rank.mean_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return local mean of an image. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
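No doctest is given for this variant; a minimal sketch on a synthetic uint8 image (an assumption, not from the original docs) might be:

```python
import numpy as np
from skimage.filters.rank import mean_percentile
from skimage.morphology import disk

# The local mean is computed only over greyvalues inside the [p0, p1]
# percentile band of each pixel's neighborhood.
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
out = mean_percentile(img, disk(2), p0=0.1, p1=0.9)
```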
median
skimage.filters.rank.median(image, selem=None, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local median of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s. If None, a full square of size 3 is used.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.filters.median
Implementation of a median filtering which handles images with floating precision. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import median
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> med = median(img, disk(5))
>>> med_vol = median(volume, ball(5))
Examples using skimage.filters.rank.median
Markers for watershed transform
Rank filters
minimum
skimage.filters.rank.minimum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local minimum of an image. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. See also
skimage.morphology.erosion
Notes The lower algorithm complexity makes skimage.filters.rank.minimum more efficient for larger images and structuring elements. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import minimum
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = minimum(img, disk(5))
>>> out_vol = minimum(volume, ball(5))
Examples using skimage.filters.rank.minimum
Rank filters
modal
skimage.filters.rank.modal(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return local mode of an image. The mode is the value that appears most often in the local histogram. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import modal
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = modal(img, disk(5))
>>> out_vol = modal(volume, ball(5))
noise_filter
skimage.filters.rank.noise_filter(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Noise feature. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
N. Hashimoto et al. Referenceless image quality evaluation for whole slide imaging. J Pathol Inform 2012;3:9. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import noise_filter
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = noise_filter(img, disk(5))
>>> out_vol = noise_filter(volume, ball(5))
otsu
skimage.filters.rank.otsu(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local Otsu’s threshold value for each pixel. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. References
1
https://en.wikipedia.org/wiki/Otsu%27s_method Examples >>> from skimage import data
>>> from skimage.filters.rank import otsu
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> local_otsu = otsu(img, disk(5))
>>> thresh_image = img >= local_otsu
>>> local_otsu_vol = otsu(volume, ball(5))
>>> thresh_image_vol = volume >= local_otsu_vol
Examples using skimage.filters.rank.otsu: Rank filters
percentile
skimage.filters.rank.percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0) [source]
Return local percentile of an image. Returns the value of the p0 lower percentile of the local greyvalue distribution. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0float in [0, …, 1]
Set the percentile value. Returns
out2-D array (same dtype as input image)
Output image.
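This entry has no Examples section; the sketch below (with an assumed random test image, in the style of the other rank-filter examples) shows a local-percentile computation:

```python
import numpy as np
from skimage.filters.rank import percentile
from skimage.morphology import disk

# Any (M, N) uint8 or uint16 array works; a random test image here.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Value of the 25th local percentile within a radius-3 disk neighborhood.
out = percentile(img, disk(3), p0=0.25)
```

With p0=0 the result should reduce to the local minimum of each neighborhood.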
pop
skimage.filters.rank.pop(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> import numpy as np
>>> from skimage.morphology import square
>>> import skimage.filters.rank as rank
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> rank.pop(img, square(3))
array([[4, 6, 6, 6, 4],
[6, 9, 9, 9, 6],
[6, 9, 9, 9, 6],
[6, 9, 9, 9, 6],
[4, 6, 6, 6, 4]], dtype=uint8)
pop_bilateral
skimage.filters.rank.pop_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Additionally pixels must have a greylevel inside the interval [g-s0, g+s1] where g is the greyvalue of the center pixel. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. Examples >>> import numpy as np
>>> from skimage.morphology import square
>>> import skimage.filters.rank as rank
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint16)
>>> rank.pop_bilateral(img, square(3), s0=10, s1=10)
array([[3, 4, 3, 4, 3],
[4, 4, 6, 4, 4],
[3, 6, 9, 6, 3],
[4, 4, 6, 4, 4],
[3, 4, 3, 4, 3]], dtype=uint16)
pop_percentile
skimage.filters.rank.pop_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the local number (population) of pixels. The number of pixels is defined as the number of pixels which are included in the structuring element and the mask. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
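No Examples block is given for pop_percentile; a minimal sketch (values assumed) that counts, per pixel, the neighbors whose greyvalue falls in the [p0, p1] percentile interval:

```python
import numpy as np
from skimage.filters.rank import pop_percentile
from skimage.morphology import disk

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

selem = disk(3)
# Count neighbors between the 10th and 90th local percentiles.
out = pop_percentile(img, selem, p0=0.1, p1=0.9)
```

The counts are bounded above by the number of pixels in the structuring element (here selem.sum()).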
subtract_mean
skimage.filters.rank.subtract_mean(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the image with its local mean subtracted. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Notes Subtracting the mean value may introduce underflow. To compensate this potential underflow, the obtained difference is downscaled by a factor of 2 and shifted by n_bins / 2 - 1, the median value of the local histogram (n_bins = max(3, image.max()) +1 for 16-bits images and 256 otherwise). Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import subtract_mean
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = subtract_mean(img, disk(5))
>>> out_vol = subtract_mean(volume, ball(5))
subtract_mean_percentile
skimage.filters.rank.subtract_mean_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the image with its local mean subtracted. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
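This entry also lacks an Examples section; a short sketch (with an assumed random test image) of the percentile-restricted mean subtraction:

```python
import numpy as np
from skimage.filters.rank import subtract_mean_percentile
from skimage.morphology import disk

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Subtract the local mean computed only over the [0.1, 0.9] percentile interval.
out = subtract_mean_percentile(img, disk(3), p0=0.1, p1=0.9)
```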
sum
skimage.filters.rank.sum(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Return the local sum of pixels. Note that the sum may overflow depending on the data type of the input array. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> import numpy as np
>>> from skimage.morphology import square
>>> import skimage.filters.rank as rank
>>> img = np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> rank.sum(img, square(3))
array([[1, 2, 3, 2, 1],
[2, 4, 6, 4, 2],
[3, 6, 9, 6, 3],
[2, 4, 6, 4, 2],
[1, 2, 3, 2, 1]], dtype=uint8)
sum_bilateral
skimage.filters.rank.sum_bilateral(image, selem, out=None, mask=None, shift_x=False, shift_y=False, s0=10, s1=10) [source]
Apply a flat kernel bilateral filter. This is an edge-preserving and noise reducing denoising filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element (selem). Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element AND having a greylevel inside this interval are summed. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import sum_bilateral
>>> import numpy as np
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = sum_bilateral(img, disk(10), s0=10, s1=10)
sum_percentile
skimage.filters.rank.sum_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0, p1=1) [source]
Return the local sum of pixels. Only greyvalues between percentiles [p0, p1] are considered in the filter. Note that the sum may overflow depending on the data type of the input array. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0, p1float in [0, …, 1]
Define the [p0, p1] percentile interval to be considered for computing the value. Returns
out2-D array (same dtype as input image)
Output image.
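No example is given for sum_percentile; the sketch below reuses the small binary image from the rank.sum example, where local sums stay well below the uint8 overflow threshold:

```python
import numpy as np
from skimage.filters.rank import sum_percentile
from skimage.morphology import square

img = np.array([[0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 0, 0]], dtype=np.uint8)

# With p0=0, p1=1 the whole local histogram is considered,
# so this should behave like rank.sum on this image.
out = sum_percentile(img, square(3), p0=0, p1=1)
```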
threshold
skimage.filters.rank.threshold(image, selem, out=None, mask=None, shift_x=False, shift_y=False, shift_z=False) [source]
Local threshold of an image. The resulting binary mask is True if the gray value of the center pixel is greater than the local mean. Parameters
image([P,] M, N) ndarray (uint8, uint16)
Input image.
selemndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_zint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> import numpy as np
>>> from skimage.morphology import square
>>> from skimage.filters.rank import threshold
>>> img = 255 * np.array([[0, 0, 0, 0, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 1, 1, 1, 0],
... [0, 0, 0, 0, 0]], dtype=np.uint8)
>>> threshold(img, square(3))
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 0, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=uint8)
threshold_percentile
skimage.filters.rank.threshold_percentile(image, selem, out=None, mask=None, shift_x=False, shift_y=False, p0=0) [source]
Local threshold of an image. The resulting binary mask is True if the greyvalue of the center pixel is greater than the local mean. Only greyvalues between percentiles [p0, p1] are considered in the filter. Parameters
image2-D array (uint8, uint16)
Input image.
selem2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (same dtype as input)
If None, a new array is allocated.
maskndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
p0float in [0, …, 1]
Set the percentile value. Returns
out2-D array (same dtype as input image)
Output image.
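This entry has no Examples section either; a minimal sketch (reusing the image from the threshold example, p0 value arbitrary):

```python
import numpy as np
from skimage.filters.rank import threshold_percentile
from skimage.morphology import square

img = 255 * np.array([[0, 0, 0, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 0, 0, 0]], dtype=np.uint8)

# Binary mask marking pixels above the percentile-restricted local mean.
out = threshold_percentile(img, square(3), p0=0.1)
```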
tophat
skimage.filters.rank.tophat(image, selem, out=None, mask=None, shift_x=False, shift_y=False) [source]
Local top-hat of an image. This filter computes the morphological opening of the image and then subtracts the result from the original image. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out2-D array (same dtype as input image)
Output image. Warns
Deprecated:
Deprecated since version 0.17: this function will be removed in scikit-image 0.19. This filter was misnamed and we believe its usefulness is narrow. Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import tophat
>>> img = data.camera()
>>> out = tophat(img, disk(5))
windowed_histogram
skimage.filters.rank.windowed_histogram(image, selem, out=None, mask=None, shift_x=False, shift_y=False, n_bins=None) [source]
Normalized sliding window histogram. Parameters
image2-D array (integer or float)
Input image.
selem2-D array (integer or float)
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out2-D array (integer or float), optional
If None, a new array is allocated.
maskndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_yint, optional
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
n_binsint or None
The number of histogram bins. Will default to image.max() + 1 if None is passed. Returns
out3-D array (float)
Array of dimensions (H,W,N), where (H,W) are the dimensions of the input image and N is n_bins or image.max() + 1 if no value is provided as a parameter. Effectively, each pixel is a N-D feature vector that is the histogram. The sum of the elements in the feature vector will be 1, unless no pixels in the window were covered by both selem and mask, in which case all elements will be 0. Examples >>> from skimage import data
>>> from skimage.filters.rank import windowed_histogram
>>> from skimage.morphology import disk, ball
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> hist_img = windowed_histogram(img, disk(5))
doc_28787
A boolean that controls whether or not authenticated users accessing the login page will be redirected as if they had just successfully logged in. Defaults to False. Warning: If you enable redirect_authenticated_user, other websites will be able to determine if their visitors are authenticated on your site by requesting redirect URLs to image files on your website. To avoid this “social media fingerprinting” information leakage, host all images and your favicon on a separate domain. Enabling redirect_authenticated_user can also result in a redirect loop when using the permission_required() decorator unless the raise_exception parameter is used.
doc_28788
Return the current protocol.
doc_28789
Bases: object A base class for path effects. Subclasses should override the draw_path method to add effect functionality. Parameters
offset(float, float), default: (0, 0)
The (x, y) offset to apply to the path, measured in points. draw_path(renderer, gc, tpath, affine, rgbFace=None)[source]
Derived classes should override this method. The arguments are the same as matplotlib.backend_bases.RendererBase.draw_path() except the first argument is a renderer.
class matplotlib.patheffects.Normal(offset=(0.0, 0.0))[source]
Bases: matplotlib.patheffects.AbstractPathEffect The "identity" PathEffect. The Normal PathEffect's sole purpose is to draw the original artist with no special path effect. Parameters
offset(float, float), default: (0, 0)
The (x, y) offset to apply to the path, measured in points.
class matplotlib.patheffects.PathEffectRenderer(path_effects, renderer)[source]
Bases: matplotlib.backend_bases.RendererBase Implements a Renderer which contains another renderer. This proxy then intercepts draw calls, calling the appropriate AbstractPathEffect draw method. Note Not all methods have been overridden on this RendererBase subclass. It may be necessary to add further methods to extend the PathEffects capabilities further. Parameters
path_effectsiterable of AbstractPathEffect
The path effects which this renderer represents.
renderermatplotlib.backend_bases.RendererBase subclass
copy_with_path_effect(path_effects)[source]
draw_markers(gc, marker_path, marker_trans, path, *args, **kwargs)[source]
Draw a marker at each of path's vertices (excluding control points). This provides a fallback implementation of draw_markers that makes multiple calls to draw_path(). Some backends may want to override this method in order to draw the marker only once and reuse it multiple times. Parameters
gcGraphicsContextBase
The graphics context.
marker_transmatplotlib.transforms.Transform
An affine transform applied to the marker.
transmatplotlib.transforms.Transform
An affine transform applied to the path.
draw_path(gc, tpath, affine, rgbFace=None)[source]
Draw a Path instance using the given affine transform.
draw_path_collection(gc, master_transform, paths, *args, **kwargs)[source]
Draw a collection of paths selecting drawing properties from the lists facecolors, edgecolors, linewidths, linestyles and antialiaseds. offsets is a list of offsets to apply to each of the paths. The offsets in offsets are first transformed by offsetTrans before being applied. offset_position is unused now, but the argument is kept for backwards compatibility. This provides a fallback implementation of draw_path_collection() that makes multiple calls to draw_path(). Some backends may want to override this in order to render each set of path data only once, and then reference that path multiple times with the different offsets, colors, styles etc. The generator methods _iter_collection_raw_paths() and _iter_collection() are provided to help with (and standardize) the implementation across backends. It is highly recommended to use those generators, so that changes to the behavior of draw_path_collection() can be made globally.
class matplotlib.patheffects.PathPatchEffect(offset=(0, 0), **kwargs)[source]
Bases: matplotlib.patheffects.AbstractPathEffect Draws a PathPatch instance whose Path comes from the original PathEffect artist. Parameters
offset(float, float), default: (0, 0)
The (x, y) offset to apply to the path, in points. **kwargs
All keyword arguments are passed through to the PathPatch constructor. The properties which cannot be overridden are "path", "clip_box" "transform" and "clip_path". draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Derived classes should override this method. The arguments are the same as matplotlib.backend_bases.RendererBase.draw_path() except the first argument is a renderer.
class matplotlib.patheffects.SimpleLineShadow(offset=(2, -2), shadow_color='k', alpha=0.3, rho=0.3, **kwargs)[source]
Bases: matplotlib.patheffects.AbstractPathEffect A simple shadow via a line. Parameters
offset(float, float), default: (2, -2)
The (x, y) offset to apply to the path, in points.
shadow_colorcolor, default: 'black'
The shadow color. A value of None takes the original artist's color with a scale factor of rho.
alphafloat, default: 0.3
The alpha transparency of the created shadow patch.
rhofloat, default: 0.3
A scale factor to apply to the rgbFace color if shadow_color is None. **kwargs
Extra keywords are stored and passed through to AbstractPathEffect._update_gc(). draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Overrides the standard draw_path to add the shadow offset and necessary color changes for the shadow.
class matplotlib.patheffects.SimplePatchShadow(offset=(2, -2), shadow_rgbFace=None, alpha=None, rho=0.3, **kwargs)[source]
Bases: matplotlib.patheffects.AbstractPathEffect A simple shadow via a filled patch. Parameters
offset(float, float), default: (2, -2)
The (x, y) offset of the shadow in points.
shadow_rgbFacecolor
The shadow color.
alphafloat, default: 0.3
The alpha transparency of the created shadow patch. http://matplotlib.1069221.n5.nabble.com/path-effects-question-td27630.html
rhofloat, default: 0.3
A scale factor to apply to the rgbFace color if shadow_rgbFace is not specified. **kwargs
Extra keywords are stored and passed through to AbstractPathEffect._update_gc(). draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Overrides the standard draw_path to add the shadow offset and necessary color changes for the shadow.
class matplotlib.patheffects.Stroke(offset=(0, 0), **kwargs)[source]
Bases: matplotlib.patheffects.AbstractPathEffect A line based PathEffect which re-draws a stroke. The path will be stroked with its gc updated with the given keyword arguments, i.e., the keyword arguments should be valid gc parameter values. draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Draw the path with updated gc.
class matplotlib.patheffects.TickedStroke(offset=(0, 0), spacing=10.0, angle=45.0, length=1.4142135623730951, **kwargs)[source]
Bases: matplotlib.patheffects.AbstractPathEffect A line-based PathEffect which draws a path with a ticked style. This line style is frequently used to represent constraints in optimization. The ticks may be used to indicate that one side of the line is invalid or to represent a closed boundary of a domain (i.e. a wall or the edge of a pipe). The spacing, length, and angle of ticks can be controlled. This line style is sometimes referred to as a hatched line. See also the contour demo example. See also the contours in optimization example. Parameters
offset(float, float), default: (0, 0)
The (x, y) offset to apply to the path, in points.
spacingfloat, default: 10.0
The spacing between ticks in points.
anglefloat, default: 45.0
The angle between the path and the tick in degrees. The angle is measured as if you were an ant walking along the curve, with zero degrees pointing directly ahead, 90 to your left, -90 to your right, and 180 behind you.
lengthfloat, default: 1.414
The length of the tick relative to spacing. Recommended length = 1.414 (sqrt(2)) when angle=45, length=1.0 when angle=90 and length=2.0 when angle=60. **kwargs
Extra keywords are stored and passed through to AbstractPathEffect._update_gc(). Examples See TickedStroke patheffect. draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Draw the path with updated gc.
class matplotlib.patheffects.withSimplePatchShadow(offset=(2, -2), shadow_rgbFace=None, alpha=None, rho=0.3, **kwargs)[source]
Bases: matplotlib.patheffects.SimplePatchShadow A shortcut PathEffect for applying SimplePatchShadow and then drawing the original Artist. With this class you can use artist.set_path_effects([path_effects.withSimplePatchShadow()])
as a shortcut for artist.set_path_effects([path_effects.SimplePatchShadow(),
path_effects.Normal()])
Parameters
offset(float, float), default: (2, -2)
The (x, y) offset of the shadow in points.
shadow_rgbFacecolor
The shadow color.
alphafloat, default: 0.3
The alpha transparency of the created shadow patch. http://matplotlib.1069221.n5.nabble.com/path-effects-question-td27630.html
rhofloat, default: 0.3
A scale factor to apply to the rgbFace color if shadow_rgbFace is not specified. **kwargs
Extra keywords are stored and passed through to AbstractPathEffect._update_gc(). draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Overrides the standard draw_path to add the shadow offset and necessary color changes for the shadow.
class matplotlib.patheffects.withStroke(offset=(0, 0), **kwargs)[source]
Bases: matplotlib.patheffects.Stroke A shortcut PathEffect for applying Stroke and then drawing the original Artist. With this class you can use artist.set_path_effects([path_effects.withStroke()])
as a shortcut for artist.set_path_effects([path_effects.Stroke(),
path_effects.Normal()])
The path will be stroked with its gc updated with the given keyword arguments, i.e., the keyword arguments should be valid gc parameter values. draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Draw the path with updated gc.
class matplotlib.patheffects.withTickedStroke(offset=(0, 0), spacing=10.0, angle=45.0, length=1.4142135623730951, **kwargs)[source]
Bases: matplotlib.patheffects.TickedStroke A shortcut PathEffect for applying TickedStroke and then drawing the original Artist. With this class you can use artist.set_path_effects([path_effects.withTickedStroke()])
as a shortcut for artist.set_path_effects([path_effects.TickedStroke(),
path_effects.Normal()])
Parameters
offset(float, float), default: (0, 0)
The (x, y) offset to apply to the path, in points.
spacingfloat, default: 10.0
The spacing between ticks in points.
anglefloat, default: 45.0
The angle between the path and the tick in degrees. The angle is measured as if you were an ant walking along the curve, with zero degrees pointing directly ahead, 90 to your left, -90 to your right, and 180 behind you.
lengthfloat, default: 1.414
The length of the tick relative to spacing. Recommended length = 1.414 (sqrt(2)) when angle=45, length=1.0 when angle=90 and length=2.0 when angle=60. **kwargs
Extra keywords are stored and passed through to AbstractPathEffect._update_gc(). Examples See TickedStroke patheffect. draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Draw the path with updated gc. | |
doc_28790
Call the exception as WSGI application. Parameters
environ (WSGIEnvironment) – the WSGI environment.
start_response (StartResponse) – the response callable provided by the WSGI server. Return type
Iterable[bytes] | |
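A minimal sketch of calling an HTTPException as a WSGI application (the route and host used with werkzeug.test.create_environ are made up):

```python
from werkzeug.exceptions import NotFound
from werkzeug.test import create_environ

# Build a throwaway WSGI environ and a start_response that records the status.
environ = create_environ("/missing", "http://localhost")
collected = {}

def start_response(status, headers):
    collected["status"] = status

# Calling the exception instance yields the WSGI response body iterable.
body = b"".join(NotFound()(environ, start_response))
```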
doc_28791
If the value for a header in the Message object originated from a parser (as opposed to being set by a program), this attribute indicates whether or not a generator should refold that value when transforming the message back into serialized form. The possible values are:
none: all source values use original folding
long: source values that have any line that is longer than max_line_length will be refolded
all: all values are refolded. The default is long.
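A short illustration using the stdlib email package (the header text and the max_line_length of 20 are made up) of how refold_source changes serialization:

```python
from email import message_from_string, policy

raw = "Subject: a quite long example subject that exceeds the limit\n\nbody\n"

# refold_source='none': keep the parsed (unfolded) header verbatim.
keep = policy.default.clone(refold_source='none', max_line_length=20)
# refold_source='all': the generator refolds every header value on output.
fold = policy.default.clone(refold_source='all', max_line_length=20)

kept = message_from_string(raw, policy=keep).as_string()
folded = message_from_string(raw, policy=fold).as_string()
# The refolded form wraps the long subject using continuation lines ("\n ").
```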
doc_28792
Sets the current device. Usage of this function is discouraged in favor of device. In most cases it’s better to use CUDA_VISIBLE_DEVICES environmental variable. Parameters
device (torch.device or int) – selected device. This function is a no-op if this argument is negative. | |
doc_28793
Set the path effects. Parameters
path_effectsAbstractPathEffect | |
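A minimal sketch of attaching path effects to an artist (assuming the Agg backend; the Stroke parameters are arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects

fig, ax = plt.subplots()
txt = ax.text(0.5, 0.5, "outlined text", fontsize=24)

# Stroke draws a widened outline; Normal then redraws the original glyphs on top.
txt.set_path_effects([path_effects.Stroke(linewidth=3, foreground="black"),
                      path_effects.Normal()])
fig.canvas.draw()
```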
doc_28794
Returns
transformTransform
The transform used for drawing y-axis labels, which will add pad_points of padding (in points) between the axis and the label. The x-direction is in axis coordinates and the y-direction is in data coordinates.
valign{'center', 'top', 'bottom', 'baseline', 'center_baseline'}
The text vertical alignment.
halign{'center', 'left', 'right'}
The text horizontal alignment. Notes This transformation is primarily used by the Axis class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations. | |
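Assuming this entry documents Matplotlib's Axes.get_yaxis_text1_transform (an assumption from the wording), a quick sketch of inspecting the returned triple:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Request 5 points of padding between the axis and its labels;
# the method returns (transform, valign, halign).
transform, valign, halign = ax.get_yaxis_text1_transform(5)
```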
doc_28795 |
Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data | |
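A hedged sketch of the default behavior, assuming this entry documents Matplotlib's Artist.format_cursor_data (exact numeric formatting may vary by version):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

line, = plt.plot([1, 2, 3])
# Lists/arrays of numbers are rendered as a comma-separated string
# enclosed in square brackets.
formatted = line.format_cursor_data([1.0, 2.5])
```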
doc_28796 | See torch.arctanh() | |
doc_28797 |
First discrete difference of element. Calculates the difference of a DataFrame element compared with another element in the DataFrame (default is element in previous row). Parameters
periods : int, default 1
Periods to shift for calculating difference, accepts negative values.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Take difference over rows (0) or columns (1). Returns
DataFrame
First differences of the DataFrame. See also DataFrame.pct_change
Percent change over given number of periods. DataFrame.shift
Shift index by desired number of periods with an optional time freq. Series.diff
First discrete difference of object. Notes For boolean dtypes, this uses operator.xor() rather than operator.sub(). The result is calculated according to the current dtype in the DataFrame; however, the dtype of the result is always float64. Examples Difference with previous row
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
... 'b': [1, 1, 2, 3, 5, 8],
... 'c': [1, 4, 9, 16, 25, 36]})
>>> df
a b c
0 1 1 1
1 2 1 4
2 3 2 9
3 4 3 16
4 5 5 25
5 6 8 36
>>> df.diff()
a b c
0 NaN NaN NaN
1 1.0 0.0 3.0
2 1.0 1.0 5.0
3 1.0 1.0 7.0
4 1.0 2.0 9.0
5 1.0 3.0 11.0
Difference with previous column
>>> df.diff(axis=1)
a b c
0 NaN 0 0
1 NaN -1 3
2 NaN -1 7
3 NaN -1 13
4 NaN 0 20
5 NaN 2 28
Difference with 3rd previous row
>>> df.diff(periods=3)
a b c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 3.0 2.0 15.0
4 3.0 4.0 21.0
5 3.0 6.0 27.0
Difference with following row
>>> df.diff(periods=-1)
a b c
0 -1.0 0.0 -3.0
1 -1.0 -1.0 -5.0
2 -1.0 -1.0 -7.0
3 -1.0 -2.0 -9.0
4 -1.0 -3.0 -11.0
5 NaN NaN NaN
Overflow in input dtype
>>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8)
>>> df.diff()
a
0 NaN
1 255.0 | |
doc_28798 | This class implements the portion of the TestCase interface which allows the test runner to drive the test, but does not provide the methods which test code can use to check and report errors. This is used to create test cases using legacy test code, allowing it to be integrated into a unittest-based test framework. | |
doc_28799 |
Return index of first occurrence of maximum over requested axis. NA/null values are excluded. Parameters
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA. Returns
Series
Indexes of maxima along the specified axis. Raises
ValueError
If the row/column is empty. See also Series.idxmax
Return index of the maximum element. Notes This method is the DataFrame version of ndarray.argmax. Examples Consider a dataset containing food consumption in Argentina.
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
>>> df
consumption co2_emissions
Pork 10.51 37.20
Wheat Products 103.11 19.66
Beef 55.48 1712.00
By default, it returns the index for the maximum value in each column.
>>> df.idxmax()
consumption Wheat Products
co2_emissions Beef
dtype: object
To return the index for the maximum value in each row, use axis="columns".
>>> df.idxmax(axis="columns")
Pork co2_emissions
Wheat Products consumption
Beef co2_emissions
dtype: object |