doc_3900 | class sklearn.ensemble.HistGradientBoostingClassifier(loss='auto', *, learning_rate=0.1, max_iter=100, max_leaf_nodes=31, max_depth=None, min_samples_leaf=20, l2_regularization=0.0, max_bins=255, categorical_features=None, monotonic_cst=None, warm_start=False, early_stopping='auto', scoring='loss', validation_fraction=0.1, n_iter_no_change=10, tol=1e-07, verbose=0, random_state=None) [source]
Histogram-based Gradient Boosting Classification Tree. This estimator is much faster than GradientBoostingClassifier for big datasets (n_samples >= 10 000). This estimator has native support for missing values (NaNs). During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child consequently. If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples. This implementation is inspired by LightGBM. Note This estimator is still experimental for now: the predictions and the API might change without any deprecation cycle. To use it, you need to explicitly import enable_hist_gradient_boosting: >>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_hist_gradient_boosting # noqa
>>> # now you can import normally from ensemble
>>> from sklearn.ensemble import HistGradientBoostingClassifier
Read more in the User Guide. New in version 0.21. Parameters
loss{‘auto’, ‘binary_crossentropy’, ‘categorical_crossentropy’}, default=’auto’
The loss function to use in the boosting process. ‘binary_crossentropy’ (also known as logistic loss) is used for binary classification and generalizes to ‘categorical_crossentropy’ for multiclass classification. ‘auto’ will automatically choose either loss depending on the nature of the problem.
learning_ratefloat, default=0.1
The learning rate, also known as shrinkage. This is used as a multiplicative factor for the leaves values. Use 1 for no shrinkage.
max_iterint, default=100
The maximum number of iterations of the boosting process, i.e. the maximum number of trees for binary classification. For multiclass classification, n_classes trees per iteration are built.
max_leaf_nodesint or None, default=31
The maximum number of leaves for each tree. Must be strictly greater than 1. If None, there is no maximum limit.
max_depthint or None, default=None
The maximum depth of each tree. The depth of a tree is the number of edges to go from the root to the deepest leaf. Depth isn’t constrained by default.
min_samples_leafint, default=20
The minimum number of samples per leaf. For small datasets with fewer than a few hundred samples, it is recommended to lower this value since only very shallow trees would be built.
l2_regularizationfloat, default=0
The L2 regularization parameter. Use 0 for no regularization.
max_binsint, default=255
The maximum number of bins to use for non-missing values. Before training, each feature of the input array X is binned into integer-valued bins, which allows for a much faster training stage. Features with a small number of unique values may use less than max_bins bins. In addition to the max_bins bins, one more bin is always reserved for missing values. Must be no larger than 255.
monotonic_cstarray-like of int of shape (n_features), default=None
Indicates the monotonic constraint to enforce on each feature. -1, 1 and 0 respectively correspond to a negative constraint, positive constraint and no constraint. Read more in the User Guide. New in version 0.23.
categorical_featuresarray-like of {bool, int} of shape (n_features) or shape (n_categorical_features,), default=None.
Indicates the categorical features. None : no feature will be considered categorical. boolean array-like : boolean mask indicating categorical features. integer array-like : integer indices indicating categorical features. For each categorical feature, there must be at most max_bins unique categories, and each categorical value must be in [0, max_bins -1]. Read more in the User Guide. New in version 0.24.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble. For results to be valid, the estimator should be re-trained on the same data only. See the Glossary.
early_stopping‘auto’ or bool, default=’auto’
If ‘auto’, early stopping is enabled if the sample size is larger than 10000. If True, early stopping is enabled, otherwise early stopping is disabled. New in version 0.23.
scoringstr or callable or None, default=’loss’
Scoring parameter to use for early stopping. It can be a single string (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions). If None, the estimator’s default scorer is used. If scoring='loss', early stopping is checked w.r.t the loss value. Only used if early stopping is performed.
validation_fractionint or float or None, default=0.1
Proportion (or absolute size) of training data to set aside as validation data for early stopping. If None, early stopping is done on the training data. Only used if early stopping is performed.
n_iter_no_changeint, default=10
Used to determine when to “early stop”. The fitting process is stopped when none of the last n_iter_no_change scores are better than the n_iter_no_change - 1 -th-to-last one, up to some tolerance. Only used if early stopping is performed.
tolfloat or None, default=1e-7
The absolute tolerance to use when comparing scores. The higher the tolerance, the more likely we are to early stop: higher tolerance means that it will be harder for subsequent iterations to be considered an improvement upon the reference score.
verboseint, default=0
The verbosity level. If not zero, print some information about the fitting process.
random_stateint, RandomState instance or None, default=None
Pseudo-random number generator to control the subsampling in the binning process, and the train/validation data split if early stopping is enabled. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
classes_array, shape = (n_classes,)
Class labels.
do_early_stopping_bool
Indicates whether early stopping is used during training.
n_iter_int
The number of iterations as selected by early stopping, depending on the early_stopping parameter. Otherwise it corresponds to max_iter.
n_trees_per_iteration_int
The number of trees that are built at each iteration. This is equal to 1 for binary classification, and to n_classes for multiclass classification.
train_score_ndarray, shape (n_iter_+1,)
The scores at each iteration on the training data. The first entry is the score of the ensemble before the first iteration. Scores are computed according to the scoring parameter. If scoring is not ‘loss’, scores are computed on a subset of at most 10 000 samples. Empty if no early stopping.
validation_score_ndarray, shape (n_iter_+1,)
The scores at each iteration on the held-out validation data. The first entry is the score of the ensemble before the first iteration. Scores are computed according to the scoring parameter. Empty if no early stopping or if validation_fraction is None.
is_categorical_ndarray, shape (n_features, ) or None
Boolean mask for the categorical features. None if there are no categorical features. Examples >>> # To use this experimental feature, we need to explicitly ask for it:
>>> from sklearn.experimental import enable_hist_gradient_boosting # noqa
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.datasets import load_iris
>>> X, y = load_iris(return_X_y=True)
>>> clf = HistGradientBoostingClassifier().fit(X, y)
>>> clf.score(X, y)
1.0
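A minimal sketch of forcing early stopping on (synthetic data; the parameter values below are illustrative, not recommendations):

```python
try:  # the experimental import is only needed for scikit-learn < 1.0
    from sklearn.experimental import enable_hist_gradient_boosting  # noqa
except ImportError:
    pass
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
clf = HistGradientBoostingClassifier(
    max_iter=100,
    early_stopping=True,       # force early stopping even on a small dataset
    validation_fraction=0.2,   # hold out 20% of the data for scoring
    n_iter_no_change=5,
    random_state=0,
).fit(X, y)

n_used = clf.n_iter_                    # iterations actually run (<= max_iter)
n_scores = len(clf.validation_score_)   # n_iter_ + 1 scores on the held-out set
```

With `scoring='loss'` (the default) the held-out scores in `validation_score_` are negative loss values, so higher is better.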
Methods
decision_function(X) Compute the decision function of X.
fit(X, y[, sample_weight]) Fit the gradient boosting model.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict classes for X.
predict_proba(X) Predict class probabilities for X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
staged_decision_function(X) Compute decision function of X for each iteration.
staged_predict(X) Predict classes at each iteration.
staged_predict_proba(X) Predict class probabilities at each iteration.
decision_function(X) [source]
Compute the decision function of X. Parameters
Xarray-like, shape (n_samples, n_features)
The input samples. Returns
decisionndarray, shape (n_samples,) or (n_samples, n_trees_per_iteration)
The raw predicted values (i.e. the sum of the trees leaves) for each sample. n_trees_per_iteration is equal to the number of classes in multiclass classification.
fit(X, y, sample_weight=None) [source]
Fit the gradient boosting model. Parameters
Xarray-like of shape (n_samples, n_features)
The input samples.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,) default=None
Weights of training data. New in version 0.23. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict classes for X. Parameters
Xarray-like, shape (n_samples, n_features)
The input samples. Returns
yndarray, shape (n_samples,)
The predicted classes.
predict_proba(X) [source]
Predict class probabilities for X. Parameters
Xarray-like, shape (n_samples, n_features)
The input samples. Returns
pndarray, shape (n_samples, n_classes)
The class probabilities of the input samples.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
staged_decision_function(X) [source]
Compute decision function of X for each iteration. This method allows monitoring (i.e. determine error on testing set) after each stage. Parameters
Xarray-like of shape (n_samples, n_features)
The input samples. Yields
decisiongenerator of ndarray of shape (n_samples,) or (n_samples, n_trees_per_iteration)
The decision function of the input samples, which corresponds to the raw values predicted from the trees of the ensemble. The classes correspond to those in the attribute classes_.
staged_predict(X) [source]
Predict classes at each iteration. This method allows monitoring (i.e. determine error on testing set) after each stage. New in version 0.24. Parameters
Xarray-like of shape (n_samples, n_features)
The input samples. Yields
ygenerator of ndarray of shape (n_samples,)
The predicted classes of the input samples, for each iteration.
staged_predict_proba(X) [source]
Predict class probabilities at each iteration. This method allows monitoring (i.e. determine error on testing set) after each stage. Parameters
Xarray-like of shape (n_samples, n_features)
The input samples. Yields
ygenerator of ndarray of shape (n_samples,)
The predicted class probabilities of the input samples, for each iteration.
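The staged methods can be used to monitor held-out performance after each boosting iteration; a hedged sketch with synthetic data and illustrative settings:

```python
try:  # the experimental import is only needed for scikit-learn < 1.0
    from sklearn.experimental import enable_hist_gradient_boosting  # noqa
except ImportError:
    pass
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = HistGradientBoostingClassifier(max_iter=20, random_state=0)
clf.fit(X_train, y_train)

# One accuracy value per boosting iteration of the fitted ensemble.
scores = [accuracy_score(y_test, y_pred)
          for y_pred in clf.staged_predict(X_test)]
```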
Examples using sklearn.ensemble.HistGradientBoostingClassifier
Release Highlights for scikit-learn 0.23
Release Highlights for scikit-learn 0.24
Release Highlights for scikit-learn 0.22 | |
doc_3901 |
Reduce X to the selected features. Parameters
Xarray of shape [n_samples, n_features]
The input samples. Returns
X_rarray of shape [n_samples, n_selected_features]
The input samples with only the selected features. | |
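The effect of this transform can be sketched with a boolean support mask over the columns of X (the mask here is hypothetical; a fitted selector would derive it from the data):

```python
import numpy as np

X = np.array([[1., 2., 3., 4.],
              [5., 6., 7., 8.]])

# Hypothetical support mask, as e.g. a fitted selector's get_support() would return.
support = np.array([True, False, True, False])

X_r = X[:, support]   # keep only the selected features: shape (2, 2)
```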
doc_3902 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha array-like or scalar or None
animated bool
antialiased or aa or antialiaseds bool or list of bools
array array-like or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clim (vmin: float, vmax: float)
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
cmap Colormap or str or None
color color or list of rgba tuples
edgecolor or ec or edgecolors color or list of colors or 'face'
facecolor or facecolors or fc color or list of colors
figure Figure
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or dashes or linestyles or ls str or tuple or list thereof
linewidth or linewidths or lw float or list of floats
norm Normalize or None
offset_transform Transform
offsets (N, 2) or (2,) array-like
path_effects AbstractPathEffect
paths unknown
picker None or bool or float or callable
pickradius float
rasterized bool
sizes ndarray or None
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
urls list of str or None
visible bool
zorder float | |
doc_3903 |
Get Less than or equal to of dataframe and other, element-wise (binary operator le). Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators. Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison. Parameters
other:scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis:{0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level:int or label
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
DataFrame of bool
Result of the comparison. See also DataFrame.eq
Compare DataFrames for equality elementwise. DataFrame.ne
Compare DataFrames for inequality elementwise. DataFrame.le
Compare DataFrames for less than inequality or equality elementwise. DataFrame.lt
Compare DataFrames for strictly less than inequality elementwise. DataFrame.ge
Compare DataFrames for greater than inequality or equality elementwise. DataFrame.gt
Compare DataFrames for strictly greater than inequality elementwise. Notes Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN). Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100],
... 'revenue': [100, 250, 300]},
... index=['A', 'B', 'C'])
>>> df
cost revenue
A 250 100
B 150 250
C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100
cost revenue
A False True
B False False
C True False
>>> df.eq(100)
cost revenue
A False True
B False False
C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])
cost revenue
A True True
B True False
C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index')
cost revenue
A True False
B True True
C True True
D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100]
cost revenue
A True True
B False False
C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index')
cost revenue
A True False
B False True
C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]},
... index=['A', 'B', 'C', 'D'])
>>> other
revenue
A 300
B 250
C 100
D 150
>>> df.gt(other)
cost revenue
A False False
B False False
C False True
D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220],
... 'revenue': [100, 250, 300, 200, 175, 225]},
... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
... ['A', 'B', 'C', 'A', 'B', 'C']])
>>> df_multindex
cost revenue
Q1 A 250 100
B 150 250
C 100 300
Q2 A 150 200
B 300 175
C 220 225
>>> df.le(df_multindex, level=1)
cost revenue
Q1 A True True
B True True
C True True
Q2 A False True
B True False
C True False | |
doc_3904 | tf.logical_and Compat aliases for migration See Migration guide for more details. tf.compat.v1.logical_and, tf.compat.v1.math.logical_and
tf.math.logical_and(
x, y, name=None
)
The operation works for the following input types: Two single elements of type bool
One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical AND with the single element to each element in the larger Tensor. Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical AND of the two input tensors. Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_and(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_and(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_and(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False, True])>
Args
x A tf.Tensor type bool.
y A tf.Tensor of type bool.
name A name for the operation (optional).
Returns A tf.Tensor of type bool with the same size as that of x or y. | |
doc_3905 | Return whether the call has completed. | |
doc_3906 |
The length of one element in bytes. | |
doc_3907 | A raise statement. exc is the exception object to be raised, normally a Call or Name, or None for a standalone raise. cause is the optional part for y in raise x from y. >>> print(ast.dump(ast.parse('raise x from y'), indent=4))
Module(
body=[
Raise(
exc=Name(id='x', ctx=Load()),
cause=Name(id='y', ctx=Load()))],
type_ignores=[]) | |
doc_3908 |
Function that measures the Binary Cross Entropy between the target and the output. See BCELoss for details. Parameters
input – Tensor of arbitrary shape
target – Tensor of the same shape as input
weight (Tensor, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Examples: >>> input = torch.randn((3, 2), requires_grad=True)
>>> target = torch.rand((3, 2), requires_grad=False)
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
>>> loss.backward() | |
doc_3909 |
Return offset of the container. | |
doc_3910 |
Fits the imputer on X and return the transformed X. Parameters
Xarray-like, shape (n_samples, n_features)
Input data, where “n_samples” is the number of samples and “n_features” is the number of features.
yignored.
Returns
Xtarray-like, shape (n_samples, n_features)
The imputed input data. | |
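The fit-then-transform step can be sketched with plain NumPy for mean imputation (a conceptual sketch of what e.g. a mean-strategy imputer does, not the library's actual implementation):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# "Fit": compute per-feature statistics, ignoring missing entries.
col_means = np.nanmean(X, axis=0)          # [4.0, 2.5]

# "Transform": fill each NaN with its column's mean.
Xt = np.where(np.isnan(X), col_means, X)
```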
doc_3911 | See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNdUpdate
tf.raw_ops.ScatterNdUpdate(
ref, indices, updates, use_locking=True, name=None
)
Applies sparse updates to individual values or slices within a given variable according to indices. ref is a Tensor with rank P and indices is a Tensor of rank Q. indices must be an integer tensor, containing indices into ref. It must be shape \([d_0, ..., d_{Q-2}, K]\) where 0 < K <= P. The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref. updates is a Tensor of rank Q-1+P-K with shape: $$[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].$$ For example, say we want to update 4 scattered elements of a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1] ,[7]])
updates = tf.constant([9, 10, 11, 12])
update = tf.scatter_nd_update(ref, indices, updates)
with tf.Session() as sess:
    print(sess.run(update))
The resulting update to ref would look like this: [1, 11, 3, 10, 9, 6, 7, 12]
See tf.scatter_nd for more details about how to make updates to slices. See also tf.scatter_update and tf.batch_scatter_update.
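For rank-1 indices (K = 1), the update semantics match NumPy fancy-index assignment; a sketch mirroring the example above:

```python
import numpy as np

ref = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([4, 3, 1, 7])
updates = np.array([9, 10, 11, 12])

# Scatter the updates into ref at the given positions.
ref[indices] = updates
# ref is now [1, 11, 3, 10, 9, 6, 7, 12]
```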
Args
ref A mutable Tensor. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into ref.
updates A Tensor. Must have the same type as ref. A tensor of updated values to add to ref.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as ref. | |
doc_3912 | Adds a buffer to the module. This is typically used to register a buffer that should not to be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict. Buffers can be accessed as attributes using given names. Parameters
name (string) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor) – buffer to be registered.
persistent (bool) – whether the buffer is part of this module’s state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features)) | |
doc_3913 | CRC-32 of the uncompressed file. | |
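This field holds the CRC-32 checksum of the member's uncompressed data; a quick standard-library check (the archive member name and contents here are made up):

```python
import io
import zipfile
import zlib

data = b"hello world"
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("greeting.txt", data)

with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("greeting.txt")

# The stored CRC matches zlib.crc32 of the uncompressed bytes.
matches = info.CRC == zlib.crc32(data)
```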
doc_3914 |
Return a list of the child Artists of this Artist. | |
doc_3915 |
Return the array with each element rounded to the given number of decimals. Refer to numpy.around for full documentation. See also numpy.around
equivalent function | |
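A quick sketch of the behaviour on a plain ndarray (the method delegates to the same rounding as numpy.around):

```python
import numpy as np

a = np.array([1.234, 5.678])
rounded = a.round(1)   # round each element to 1 decimal place
```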
doc_3916 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_3917 |
Computes the additive chi-squared kernel between observations in X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms. The chi-squared kernel is given by: k(x, y) = -Sum [(x - y)^2 / (x + y)]
It can be interpreted as a weighted difference per entry. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples_X, n_features)
Yndarray of shape (n_samples_Y, n_features), default=None
Returns
kernel_matrixndarray of shape (n_samples_X, n_samples_Y)
See also
chi2_kernel
The exponentiated version of the kernel, which is usually preferable.
sklearn.kernel_approximation.AdditiveChi2Sampler
A Fourier approximation to this kernel. Notes As the negative of a distance, this kernel is only conditionally positive definite. References Zhang, J. and Marszalek, M. and Lazebnik, S. and Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study International Journal of Computer Vision 2007 https://research.microsoft.com/en-us/um/people/manik/projects/trade-off/papers/ZhangIJCV06.pdf | |
doc_3918 | Captured stderr from the child process. A bytes sequence, or a string if run() was called with an encoding, errors, or text=True. None if stderr was not captured. | |
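Both behaviours (text capture and the None default) can be shown with a small child process:

```python
import subprocess
import sys

# Run a child that writes to stderr, capturing its output as text.
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('oops')"],
    capture_output=True, text=True,
)
# result.stderr == 'oops'

# Without capture, the attribute is None.
plain = subprocess.run([sys.executable, "-c", "pass"])
# plain.stderr is None
```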
doc_3919 | Gets or sets whether the font should be rendered with an underline. underline -> bool Whether the font should be rendered in underline. When set to True, all rendered fonts will include an underline. The underline is always one pixel thick, regardless of font size. This can be mixed with the bold and italic modes. New in pygame 2.0.0. | |
doc_3920 |
Explicitly mark a string as safe for (HTML) output purposes. The returned object can be used everywhere a string is appropriate. Can be called multiple times on a single string. Can also be used as a decorator. For building up fragments of HTML, you should normally be using django.utils.html.format_html() instead. String marked safe will become unsafe again if modified. For example: >>> mystr = '<b>Hello World</b> '
>>> mystr = mark_safe(mystr)
>>> type(mystr)
<class 'django.utils.safestring.SafeString'>
>>> mystr = mystr.strip() # removing whitespace
>>> type(mystr)
<class 'str'> | |
doc_3921 |
Return the maximum theta limit in degrees. | |
doc_3922 |
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | |
doc_3923 | See Migration guide for more details. tf.compat.v1.signal.irfft3d, tf.compat.v1.spectral.irfft3d
tf.signal.irfft3d(
input_tensor, fft_length=None, name=None
)
Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input. The inner-most 3 dimensions of input are assumed to be the result of RFFT3D: The inner-most dimension contains the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most 3 dimensions of input. If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly. Along each axis IRFFT3D is computed on, if fft_length (or fft_length / 2 + 1 for the inner-most dimension) is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
fft_length A Tensor of type int32. An int32 tensor of shape [3]. The FFT length for each dimension.
Treal An optional tf.DType from: tf.float32, tf.float64. Defaults to tf.float32.
name A name for the operation (optional).
Returns A Tensor of type Treal. | |
doc_3924 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_3925 |
Return the Bbox bounding the text, in display units. In addition to being used internally, this is useful for specifying clickable regions in a png file on a web page. Parameters
rendererRenderer, optional
A renderer is needed to compute the bounding box. If the artist has already been drawn, the renderer is cached; thus, it is only necessary to pass this argument when calling get_window_extent before the first draw. In practice, it is usually easier to trigger a draw first (e.g. by saving the figure).
dpifloat, optional
The dpi value for computing the bbox, defaults to self.figure.dpi (not the renderer dpi); should be set e.g. if to match regions with a figure saved with a custom dpi value. | |
doc_3926 |
Bases: matplotlib.patches.Patch A regular polygon patch. Parameters
xy(float, float)
The center position.
numVerticesint
The number of vertices.
radiusfloat
The distance from the center to each of the vertices.
orientationfloat
The polygon rotation angle (in radians). **kwargs
Patch properties:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha unknown
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float get_patch_transform()[source]
Return the Transform instance mapping patch coordinates to data coordinates. For example, one may define a patch of a circle which represents a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinate) by 5.
get_path()[source]
Return the path of this patch.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, capstyle=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, fill=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
Examples using matplotlib.patches.RegularPolygon
Reference for Matplotlib artists
Radar chart (aka spider or star chart) | |
doc_3927 | types.LambdaType
The type of user-defined functions and functions created by lambda expressions. Raises an auditing event function.__new__ with argument code. The audit event only occurs for direct instantiation of function objects, and is not raised for normal compilation. | |
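As a quick illustration, a `def` function and a lambda expression share this type (in CPython, `types.LambdaType` is the same object as `types.FunctionType`):

```python
import types

# Both user-defined functions and lambda expressions have the same type.
add_one = lambda x: x + 1

def add_two(x):
    return x + 2

print(isinstance(add_one, types.LambdaType))  # True
print(isinstance(add_two, types.LambdaType))  # True
```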
doc_3928 |
Return an iterable of the ParameterDict key/value pairs. | |
doc_3929 | tf.experimental.numpy.take(
a, indices, axis=None, out=None, mode='clip'
)
out argument is not supported, and default mode is clip. See the NumPy documentation for numpy.take. | |
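The default clip behavior can be sketched with plain `numpy.take` (illustrative; this uses NumPy directly rather than the tf wrapper):

```python
import numpy as np

a = np.array([10, 20, 30])
# mode='clip' clamps out-of-range indices into [0, len(a) - 1]:
# 5 is clipped to 2, and -1 is clipped to 0.
out = np.take(a, [0, 5, -1], mode='clip')
print(out)  # [10 30 10]
```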
doc_3930 | Specifies the type pointed to. | |
doc_3931 |
Return local mode of an image. The mode is the value that appears most often in the local histogram. Parameters
image ([P,] M, N) ndarray (uint8, uint16)
Input image.
selem ndarray
The neighborhood expressed as an ndarray of 1’s and 0’s.
out ([P,] M, N) array (same dtype as input)
If None, a new array is allocated.
mask ndarray (integer or float), optional
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y, shift_z int
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element). Returns
out ([P,] M, N) ndarray (same dtype as input image)
Output image. Examples >>> from skimage import data
>>> from skimage.morphology import disk, ball
>>> from skimage.filters.rank import modal
>>> import numpy as np
>>> img = data.camera()
>>> volume = np.random.randint(0, 255, size=(10,10,10), dtype=np.uint8)
>>> out = modal(img, disk(5))
>>> out_vol = modal(volume, ball(5)) | |
doc_3932 | Returns the standard deviation of the data in the provided expression. Default alias: <field>__stddev
Return type: float if input is int, otherwise same as input field, or output_field if supplied Has one optional argument:
sample
By default, StdDev returns the population standard deviation. However, if sample=True, the return value will be the sample standard deviation. | |
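The population/sample distinction that `sample=True` toggles can be sketched with the standard library (illustrative only; this is plain Python, not Django ORM code):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population standard deviation (divides by n) -- the StdDev default.
print(statistics.pstdev(data))  # 2.0

# Sample standard deviation (divides by n - 1) -- what sample=True returns.
print(statistics.stdev(data))   # ~2.138, always >= the population value
```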
doc_3933 | Kicks off the distributed backward pass using the provided roots. This currently implements the FAST mode algorithm which assumes all RPC messages sent in the same distributed autograd context across workers would be part of the autograd graph during the backward pass. We use the provided roots to discover the autograd graph and compute appropriate dependencies. This method blocks until the entire autograd computation is done. We accumulate the gradients in the appropriate torch.distributed.autograd.context on each of the nodes. The autograd context to be used is looked up given the context_id that is passed in when torch.distributed.autograd.backward() is called. If there is no valid autograd context corresponding to the given ID, we throw an error. You can retrieve the accumulated gradients using the get_gradients() API. Parameters
context_id (int) – The autograd context id for which we should retrieve the gradients.
roots (list) – Tensors which represent the roots of the autograd computation. All the tensors should be scalars.
retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Usually, you need to set this to True to run backward multiple times. Example::
>>> import torch.distributed.autograd as dist_autograd
>>> with dist_autograd.context() as context_id:
>>> pred = model.forward()
>>> loss = loss_func(pred, target)
>>> dist_autograd.backward(context_id, loss) | |
doc_3934 | Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module | |
doc_3935 | Make an iterator that returns evenly spaced values starting with number start. Often used as an argument to map() to generate consecutive data points. Also, used with zip() to add sequence numbers. Roughly equivalent to: def count(start=0, step=1):
# count(10) --> 10 11 12 13 14 ...
# count(2.5, 0.5) -> 2.5 3.0 3.5 ...
n = start
while True:
yield n
n += step
When counting with floating point numbers, better accuracy can sometimes be achieved by substituting multiplicative code such as: (start + step * i for i in count()). Changed in version 3.1: Added step argument and allowed non-integer arguments. | |
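For example, count() pairs naturally with zip() for sequence numbering, and the multiplicative form avoids accumulated floating point error:

```python
from itertools import count, islice

# zip() with count() adds consecutive sequence numbers.
print(list(zip(count(1), ['a', 'b', 'c'])))  # [(1, 'a'), (2, 'b'), (3, 'c')]

# Multiplicative form: each value is computed directly, not accumulated.
evenly_spaced = (0.5 + 0.25 * i for i in count())
print(list(islice(evenly_spaced, 4)))  # [0.5, 0.75, 1.0, 1.25]
```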
doc_3936 |
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precision array, shape=(n_features, n_features)
Estimated precision of data. | |
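Numerically, the precision matrix is the inverse of the covariance; a minimal NumPy check of that relationship (illustrative only, using a plain inverse rather than the matrix inversion lemma the estimator uses for efficiency):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))

cov = np.cov(X, rowvar=False)   # (n_features, n_features) covariance
precision = np.linalg.inv(cov)  # plain inverse, for illustration

# precision @ cov is (numerically) the identity matrix.
print(np.allclose(precision @ cov, np.eye(3)))  # True
```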
doc_3937 | sklearn.datasets.make_hastie_10_2(n_samples=12000, *, random_state=None) [source]
Generates data for binary classification used in Hastie et al. 2009, Example 10.2. The ten features are standard independent Gaussian and the target y is defined by: y[i] = 1 if np.sum(X[i] ** 2) > 9.34 else -1
Read more in the User Guide. Parameters
n_samples int, default=12000
The number of samples.
random_state int, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
X ndarray of shape (n_samples, 10)
The input samples.
y ndarray of shape (n_samples,)
The output values. See also
make_gaussian_quantiles
A generalization of this dataset approach. References
1
T. Hastie, R. Tibshirani and J. Friedman, “Elements of Statistical Learning Ed. 2”, Springer, 2009.
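The generating rule above can be sketched directly in NumPy (an illustrative re-implementation of the stated formula, not the library code; the threshold 9.34 is roughly the median of a chi-squared distribution with 10 degrees of freedom, so the classes come out approximately balanced):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 12000
X = rng.standard_normal((n_samples, 10))          # 10 standard Gaussian features
y = np.where((X ** 2).sum(axis=1) > 9.34, 1.0, -1.0)

print(X.shape, y.shape)  # (12000, 10) (12000,)
print(sorted(set(y)))    # [-1.0, 1.0]
```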
Examples using sklearn.datasets.make_hastie_10_2
Gradient Boosting regularization
Discrete versus Real AdaBoost
Early stopping of Gradient Boosting
Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV | |
doc_3938 | Initialize curses and call another callable object, func, which should be the rest of your curses-using application. If the application raises an exception, this function will restore the terminal to a sane state before re-raising the exception and generating a traceback. The callable object func is then passed the main window ‘stdscr’ as its first argument, followed by any other arguments passed to wrapper(). Before calling func, wrapper() turns on cbreak mode, turns off echo, enables the terminal keypad, and initializes colors if the terminal has color support. On exit (whether normally or by exception) it restores cooked mode, turns on echo, and disables the terminal keypad. | |
doc_3939 |
Determine all minima of the image with depth >= h. The local minima are defined as connected sets of pixels with equal grey level strictly smaller than the grey levels of all pixels in direct neighborhood of the set. A local minimum M of depth h is a local minimum for which there is at least one path joining M with an equal or lower local minimum on which the maximal value is f(M) + h (i.e. the values along the path are not increasing by more than h with respect to the minimum’s value) and no path to an equal or lower local minimum for which the maximal value is smaller. The global minima of the image are also found by this function. Parameters
image ndarray
The input image for which the minima are to be calculated.
h unsigned integer
The minimal depth of all extracted minima.
selem ndarray, optional
The neighborhood expressed as an n-D array of 1’s and 0’s. Default is the ball of radius 1 according to the maximum norm (i.e. a 3x3 square for 2D images, a 3x3x3 cube for 3D images, etc.) Returns
h_min ndarray
The local minima of depth >= h and the global minima. The resulting image is a binary image, where pixels belonging to the determined minima take value 1, the others take value 0. See also
skimage.morphology.extrema.h_maxima
skimage.morphology.extrema.local_maxima
skimage.morphology.extrema.local_minima
References
1
Soille, P., “Morphological Image Analysis: Principles and Applications” (Chapter 6), 2nd edition (2003), ISBN 3540429883. Examples >>> import numpy as np
>>> from skimage.morphology import extrema
We create an image (a quadratic function with a minimum in the center and 4 additional constant minima). The depths of the minima are: 1, 21, 41, 61, 81 >>> w = 10
>>> x, y = np.mgrid[0:w,0:w]
>>> f = 180 + 0.2*((x - w/2)**2 + (y-w/2)**2)
>>> f[2:4,2:4] = 160; f[2:4,7:9] = 140; f[7:9,2:4] = 120; f[7:9,7:9] = 100
>>> f = f.astype(int)
We can calculate all minima with a depth of at least 40: >>> minima = extrema.h_minima(f, 40)
The resulting image will contain 3 local minima. | |
doc_3940 | The name of the URLConf keyword argument that contains the primary key. By default, pk_url_kwarg is 'pk'. | |
doc_3941 | accessor for ‘max-age’ | |
doc_3942 |
Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N. Note New code should use the poisson method of a default_rng() instance instead; please see the Quick Start. Parameters
lam float or array_like of floats
Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size.
size int or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if lam is a scalar. Otherwise, np.array(lam).size samples are drawn. Returns
out ndarray or scalar
Drawn samples from the parameterized Poisson distribution. See also Generator.poisson
which should be used for new code. Notes The Poisson distribution \[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\] For events with an expected separation \(\lambda\) the Poisson distribution \(f(k; \lambda)\) describes the probability of \(k\) events occurring within the observed interval \(\lambda\). Because the output is limited to the range of the C int64 type, a ValueError is raised when lam is within 10 sigma of the maximum representable value. References 1
Weisstein, Eric W. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/PoissonDistribution.html 2
Wikipedia, “Poisson distribution”, https://en.wikipedia.org/wiki/Poisson_distribution Examples Draw samples from the distribution: >>> import numpy as np
>>> s = np.random.poisson(5, 10000)
Display histogram of the sample: >>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 14, density=True)
>>> plt.show()
Draw each 100 values for lambda 100 and 500: >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2)) | |
doc_3943 | Height and width of the widget map (default is 400x600). | |
doc_3944 | The Package type is defined as Union[str, ModuleType]. This means that where the function describes accepting a Package, you can pass in either a string or a module. Module objects must have a resolvable __spec__.submodule_search_locations that is not None. | |
doc_3945 | Releases the lock. The lock must have been acquired earlier, but not necessarily by the same thread. | |
doc_3946 | Token value for "[". | |
doc_3947 | Parses an XML section from a string constant. This function can be used to embed “XML literals” in Python code. text is a string containing XML data. parser is an optional parser instance. If not given, the standard XMLParser parser is used. Returns an Element instance. | |
doc_3948 | See Migration guide for more details. tf.compat.v1.raw_ops.MatrixSquareRoot
tf.raw_ops.MatrixSquareRoot(
input, name=None
)
matmul(sqrtm(A), sqrtm(A)) = A
The input matrix should be invertible. If the input matrix is real, it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed). The matrix square root is computed by first reducing the matrix to quasi-triangular form with the real Schur decomposition. The square root of the quasi-triangular matrix is then computed directly. Details of the algorithm can be found in: Nicholas J. Higham, "Computing real square roots of a real matrix", Linear Algebra Appl., 1987. The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the matrix square root for all input submatrices [..., :, :].
Args
input A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Shape is [..., M, M].
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
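The defining identity matmul(sqrtm(A), sqrtm(A)) = A can be checked with NumPy for a simple diagonal case (illustrative; TensorFlow handles general matrices via the Schur decomposition as described above):

```python
import numpy as np

# For a diagonal matrix with nonnegative entries, the matrix square root
# is just the element-wise square root of the diagonal.
A = np.diag([4.0, 9.0, 16.0])
sqrt_A = np.diag(np.sqrt(np.diag(A)))

print(np.allclose(sqrt_A @ sqrt_A, A))  # True
```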
doc_3949 | The suite() function parses the parameter source as if it were an input to compile(source, 'file.py', 'exec'). If the parse succeeds, an ST object is created to hold the internal parse tree representation, otherwise an appropriate exception is raised. | |
doc_3950 | See Migration guide for more details. tf.compat.v1.keras.applications.imagenet_utils.preprocess_input
tf.keras.applications.imagenet_utils.preprocess_input(
x, data_format=None, mode='caffe'
)
Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)
x = tf.cast(i, tf.float32)
x = tf.keras.applications.mobilenet.preprocess_input(x)
core = tf.keras.applications.MobileNet()
x = core(x)
model = tf.keras.Model(inputs=[i], outputs=[x])
image = tf.image.decode_png(tf.io.read_file('file.png'))
result = model(image)
Arguments
x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used.
data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
mode One of "caffe", "tf" or "torch". Defaults to "caffe".
caffe: will convert the images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling.
tf: will scale pixels between -1 and 1, sample-wise.
torch: will scale pixels between 0 and 1 and then will normalize each channel with respect to the ImageNet dataset.
Returns Preprocessed numpy.array or a tf.Tensor with type float32.
Raises
ValueError In case of unknown mode or data_format argument. | |
doc_3951 |
Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function; the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. | |
doc_3952 |
For GUI backends, show the figure window and redraw. For non-GUI backends, raise an exception, unless running headless (i.e. on Linux with an unset DISPLAY); this exception is converted to a warning in Figure.show. | |
doc_3953 | Return the number of Thread objects currently alive. The returned count is equal to the length of the list returned by enumerate(). | |
doc_3954 | get the number of trackballs on a Joystick get_numballs() -> int Returns the number of trackball devices on a Joystick. These devices work similar to a mouse but they have no absolute position; they only have relative amounts of movement. The pygame.JOYBALLMOTION event will be sent when the trackball is rolled. It will report the amount of movement on the trackball. | |
doc_3955 |
Convert an image to floating point format. This function is similar to img_as_float64, but will not convert lower-precision floating point arrays to float64. Parameters
image ndarray
Input image.
force_copy bool, optional
Force a copy of the data, irrespective of its current dtype. Returns
out ndarray of float
Output image. Notes The range of a floating point image is [0.0, 1.0] or [-1.0, 1.0] when converting from unsigned or signed datatypes, respectively. If the input image has a float type, intensity values are not modified and can be outside the ranges [0.0, 1.0] or [-1.0, 1.0]. | |
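The documented range mapping for an unsigned input can be sketched with plain NumPy (illustrative; not the library's exact conversion code):

```python
import numpy as np

img_u8 = np.array([0, 128, 255], dtype=np.uint8)
# uint8 values in [0, 255] map onto the floating point range [0.0, 1.0].
img_f = img_u8 / 255.0

print(img_f.min(), img_f.max())  # 0.0 1.0
```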
doc_3956 | Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation. Parameters
from (dtype) – The original torch.dtype.
to (dtype) – The target torch.dtype. Example: >>> torch.can_cast(torch.double, torch.float)
True
>>> torch.can_cast(torch.float, torch.int)
False | |
doc_3957 |
Other Members
COMPILER_VERSION '7.3.1 20180303'
GIT_VERSION 'v2.4.0-rc4-71-g582c8d236cb'
GRAPH_DEF_VERSION 561
GRAPH_DEF_VERSION_MIN_CONSUMER 0
GRAPH_DEF_VERSION_MIN_PRODUCER 0
VERSION '2.4.0' | |
doc_3958 | True if arbitrary Unicode strings can be used as file names (within limitations imposed by the file system). | |
doc_3959 | tf.data.experimental.RandomDataset(
seed=None
)
Attributes
element_spec The type specification of an element of this dataset.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Methods apply View source
apply(
transformation_func
)
Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset.
dataset = tf.data.Dataset.range(100)
def dataset_fn(ds):
return ds.filter(lambda x: x < 5)
dataset = dataset.apply(dataset_fn)
list(dataset.as_numpy_iterator())
[0, 1, 2, 3, 4]
Args
transformation_func A function that takes one Dataset argument and returns a Dataset.
Returns
Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source
as_numpy_iterator()
Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset:
print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for element in dataset.as_numpy_iterator():
print(element)
1
2
3
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
as_numpy_iterator() will preserve the nested structure of dataset elements.
dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
'b': [5, 6]})
list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
{'a': (2, 4), 'b': 6}]
True
Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays.
Raises
TypeError if an element contains a non-Tensor value.
RuntimeError if eager execution is not enabled. batch View source
batch(
batch_size, drop_remainder=False
)
Combines consecutive elements of this dataset into batches.
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
dataset = tf.data.Dataset.range(8)
dataset = dataset.batch(3, drop_remainder=True)
list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset. cache View source
cache(
filename=''
)
Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data.
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x**2)
dataset = dataset.cache()
# The first time reading through the data will generate the data using
# `range` and `map`.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
# Subsequent iterations read from the cache.
list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed.
dataset = tf.data.Dataset.range(5)
dataset = dataset.cache("/path/to/file") # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache.
Args
filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory.
Returns
Dataset A Dataset. cardinality View source
cardinality()
Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file).
dataset = tf.data.Dataset.range(42)
print(dataset.cardinality().numpy())
42
dataset = dataset.repeat()
cardinality = dataset.cardinality()
print((cardinality == tf.data.INFINITE_CARDINALITY).numpy())
True
dataset = dataset.filter(lambda x: True)
cardinality = dataset.cardinality()
print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy())
True
Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively.
concatenate View source
concatenate(
dataset
)
Creates a Dataset by concatenating the given dataset with this dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
ds = a.concatenate(b)
list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
# The input dataset and dataset to be concatenated should have the same
# nested structures and output types.
c = tf.data.Dataset.zip((a, b))
a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args
dataset Dataset to be concatenated.
Returns
Dataset A Dataset. enumerate View source
enumerate(
start=0
)
Enumerates the elements of this dataset. It is similar to python's enumerate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.enumerate(start=5)
for element in dataset.as_numpy_iterator():
print(element)
(5, 1)
(6, 2)
(7, 3)
# The nested structure of the input dataset determines the structure of
# elements in the resulting dataset.
dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
dataset = dataset.enumerate()
for element in dataset.as_numpy_iterator():
print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args
start A tf.int64 scalar tf.Tensor, representing the start value for enumeration.
Returns
Dataset A Dataset. filter View source
filter(
predicate
)
Filters this dataset according to predicate.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.filter(lambda x: x < 3)
list(dataset.as_numpy_iterator())
[1, 2]
# `tf.math.equal(x, y)` is required for equality comparison
def filter_fn(x):
return tf.math.equal(x, 1)
dataset = dataset.filter(filter_fn)
list(dataset.as_numpy_iterator())
[1]
Args
predicate A function mapping a dataset element to a boolean.
Returns
Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source
flat_map(
map_func
)
Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements:
dataset = tf.data.Dataset.from_tensor_slices(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]])
dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x))
list(dataset.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7, 8, 9]
tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1)
Args
map_func A function mapping a dataset element to a dataset.
Returns
Dataset A Dataset. from_generator View source
@staticmethod
from_generator(
generator, output_types=None, output_shapes=None, args=None,
output_signature=None
)
Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from output_signature argument:
def gen():
ragged_tensor = tf.ragged.constant([[1, 2], [3]])
yield 42, ragged_tensor
dataset = tf.data.Dataset.from_generator(
gen,
output_signature=(
tf.TensorSpec(shape=(), dtype=tf.int32),
tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))
list(dataset.take(1))
[(<tf.Tensor: shape=(), dtype=int32, numpy=42>,
<tf.RaggedTensor [[1, 2], [3]]>)]
There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes.
Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator().
Args
generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args.
output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator.
output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator.
args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments.
output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator.
Returns
Dataset A Dataset. from_tensor_slices View source
@staticmethod
from_tensor_slices(
tensors
)
Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions.
# Slicing a 1D tensor produces scalar tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
list(dataset.as_numpy_iterator())
[1, 2, 3]
# Slicing a 2D tensor produces 1D tensor elements.
dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
# Slicing a tuple of 1D tensors produces tuple elements containing
# scalar tensors.
dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
# Dictionary structure is also preserved.
dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
{'a': 2, 'b': 4}]
True
# Two tensors can be combined into one Dataset object.
features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor
dataset = Dataset.from_tensor_slices((features, labels))
# Both the features and the labels tensors can be converted
# to a Dataset object separately and combined after.
features_dataset = Dataset.from_tensor_slices(features)
labels_dataset = Dataset.from_tensor_slices(labels)
dataset = Dataset.zip((features_dataset, labels_dataset))
# A batched feature and label set can be converted to a Dataset
# in similar fashion.
batched_features = tf.constant([[[1, 3], [2, 3]],
[[2, 1], [1, 2]],
[[3, 3], [3, 2]]], shape=(3, 2, 2))
batched_labels = tf.constant([['A', 'A'],
['B', 'B'],
['A', 'B']], shape=(3, 2, 1))
dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
for element in dataset.as_numpy_iterator():
print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element, with each component having the same size in the first dimension.
Returns
Dataset A Dataset. from_tensors View source
@staticmethod
from_tensors(
tensors
)
Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead.
dataset = tf.data.Dataset.from_tensors([1, 2, 3])
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
# You can use `from_tensors` to produce a dataset which repeats
# the same example many times.
example = tf.constant([1,2,3])
dataset = tf.data.Dataset.from_tensors(example).repeat(2)
list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide.
Args
tensors A dataset element.
Returns
Dataset A Dataset. interleave View source
interleave(
map_func, cycle_length=None, block_length=None, num_parallel_calls=None,
deterministic=None
)
Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently:
# Preprocess 4 files concurrently, and interleave blocks of 16 records
# from each file.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
def parse_fn(filename):
return tf.data.Dataset.range(10)
dataset = dataset.interleave(lambda x:
tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
cycle_length=4, block_length=16)
The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example:
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
# NOTE: New lines indicate "block" boundaries.
dataset = dataset.interleave(
lambda x: Dataset.from_tensors(x).repeat(6),
cycle_length=2, block_length=4)
list(dataset.as_numpy_iterator())
[1, 1, 1, 1,
2, 2, 2, 2,
1, 1,
2, 2,
3, 3, 3, 3,
4, 4, 4, 4,
3, 3,
4, 4,
5, 5, 5, 5,
5, 5]
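The cycling rule described above can be modeled in plain Python. The sketch below is illustrative only (it is not part of the tf.data API and ignores parallelism); it reproduces the block ordering of the example, assuming map_func returns an ordinary iterable:

```python
def interleave(inputs, map_func, cycle_length, block_length):
    """Plain-Python model of interleave's cycling order (illustrative only)."""
    src = iter(inputs)
    slots = []  # one open iterator per cycle element
    for _ in range(cycle_length):
        try:
            slots.append(iter(map_func(next(src))))
        except StopIteration:
            break
    out = []
    i = 0
    while slots:
        if i >= len(slots):
            i = 0  # wrap around to the first cycle slot
        emitted = 0
        exhausted = False
        while emitted < block_length:
            try:
                out.append(next(slots[i]))
                emitted += 1
            except StopIteration:
                exhausted = True
                break
        if exhausted:
            try:
                # consume the next input element to refill this slot
                slots[i] = iter(map_func(next(src)))
                i += 1
            except StopIteration:
                del slots[i]  # no inputs left; drop the slot
        else:
            i += 1
    return out

order = interleave(range(1, 6), lambda x: [x] * 6,
                   cycle_length=2, block_length=4)
```

With cycle_length=1 and block_length=1 this degenerates to flat_map ordering, matching the note in the text.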
Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined.
Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False.
filenames = ["/var/data/file1.txt", "/var/data/file2.txt",
"/var/data/file3.txt", "/var/data/file4.txt"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to a dataset.
cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism.
block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1.
num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. list_files View source
@staticmethod
list_files(
file_pattern, shuffle=None, seed=None
)
A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems.
Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order.
Example: If we had the following files on our filesystem:
/path/to/dir/a.txt
/path/to/dir/b.py
/path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the pattern, the dataset would produce:
/path/to/dir/b.py
/path/to/dir/c.py
Args
file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched.
shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
Returns
Dataset A Dataset of strings corresponding to file names. map View source
map(
map_func, num_parallel_calls=None, deterministic=None
)
Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1)
list(dataset.as_numpy_iterator())
[2, 3, 4, 5, 6]
The input signature of map_func is determined by the structure of each element in this dataset.
dataset = Dataset.range(5)
# `map_func` takes a single argument of type `tf.Tensor` with the same
# shape and dtype.
result = dataset.map(lambda x: x + 1)
# Each element is a tuple containing two `tf.Tensor` objects.
elements = [(1, "foo"), (2, "bar"), (3, "baz")]
dataset = tf.data.Dataset.from_generator(
lambda: elements, (tf.int32, tf.string))
# `map_func` takes two arguments of type `tf.Tensor`. This function
# projects out just the first component.
result = dataset.map(lambda x_int, y_str: x_int)
list(result.as_numpy_iterator())
[1, 2, 3]
# Each element is a dictionary mapping strings to `tf.Tensor` objects.
elements = ([{"a": 1, "b": "foo"},
{"a": 2, "b": "bar"},
{"a": 3, "b": "baz"}])
dataset = tf.data.Dataset.from_generator(
lambda: elements, {"a": tf.int32, "b": tf.string})
# `map_func` takes a single argument of type `dict` with the same keys
# as the elements.
result = dataset.map(lambda d: str(d["a"]) + d["b"])
The value or values returned by map_func determine the structure of each element in the returned dataset.
dataset = tf.data.Dataset.range(3)
# `map_func` returns two `tf.Tensor` objects.
def g(x):
return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"])
result = dataset.map(g)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None))
# Python primitives, lists, and NumPy arrays are implicitly converted to
# `tf.Tensor`.
def h(x):
return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64)
result = dataset.map(h)
result.element_spec
(TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None))
# `map_func` can return nested structures.
def i(x):
return (37.0, [42, 16]), "foo"
result = dataset.map(i)
result.element_spec
((TensorSpec(shape=(), dtype=tf.float32, name=None),
TensorSpec(shape=(2,), dtype=tf.int32, name=None)),
TensorSpec(shape=(), dtype=tf.string, name=None))
map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
# transform a string tensor to upper case string using a Python function
def upper_case_fn(t: tf.Tensor):
return t.numpy().decode('utf-8').upper()
d = d.map(lambda x: tf.py_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example:
d = tf.data.Dataset.from_tensor_slices(['hello', 'world'])
def upper_case_fn(t: np.ndarray):
return t.decode('utf-8').upper()
d = d.map(lambda x: tf.numpy_function(func=upper_case_fn,
inp=[x], Tout=tf.string))
list(d.as_numpy_iterator())
[b'HELLO', b'WORLD']
Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False.
dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ]
dataset = dataset.map(lambda x: x + 1,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=False)
Args
map_func A function mapping a dataset element to another dataset element.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU.
deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically.
Returns
Dataset A Dataset. options View source
options()
Returns the options for this dataset and its inputs.
Returns A tf.data.Options object representing the dataset options.
padded_batch View source
padded_batch(
batch_size, padded_shapes=None, padding_values=None, drop_remainder=False
)
Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension.
A = (tf.data.Dataset
.range(1, 5, output_type=tf.int32)
.map(lambda x: tf.fill([x], x)))
# Pad to the smallest per-batch size that fits all elements.
B = A.padded_batch(2)
for element in B.as_numpy_iterator():
print(element)
[[1 0]
[2 2]]
[[3 3 3 0]
[4 4 4 4]]
# Pad to a fixed size.
C = A.padded_batch(2, padded_shapes=5)
for element in C.as_numpy_iterator():
print(element)
[[1 0 0 0 0]
[2 2 0 0 0]]
[[3 3 3 0 0]
[4 4 4 4 0]]
# Pad with a custom value.
D = A.padded_batch(2, padded_shapes=5, padding_values=-1)
for element in D.as_numpy_iterator():
print(element)
[[ 1 -1 -1 -1 -1]
[ 2 2 -1 -1 -1]]
[[ 3 3 3 -1 -1]
[ 4 4 4 4 -1]]
# Components of nested elements can be padded independently.
elements = [([1, 2, 3], [10]),
([4, 5], [11, 12])]
dataset = tf.data.Dataset.from_generator(
lambda: iter(elements), (tf.int32, tf.int32))
# Pad the first component of the tuple to length 4, and the second
# component to the smallest size that fits.
dataset = dataset.padded_batch(2,
padded_shapes=([4], [None]),
padding_values=(-1, 100))
list(dataset.as_numpy_iterator())
[(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32),
array([[ 10, 100], [ 11, 12]], dtype=int32))]
# Pad with a single value and multiple components.
E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1)
for element in E.as_numpy_iterator():
print(element)
(array([[ 1, -1],
[ 2, 2]], dtype=int32), array([[ 1, -1],
[ 2, 2]], dtype=int32))
(array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1],
[ 4, 4, 4, 4]], dtype=int32))
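The padding rule can be sketched in plain Python for 1-D components. This is an illustrative model only (names like `padded_batch` here are stand-ins, not the tf.data implementation): a fixed `padded_shape` pads every element to that length, while `None` pads to the longest element in each batch.

```python
def padded_batch(elements, batch_size, padded_shape=None, padding_value=0):
    """Pad each 1-D element to a common length, then batch (illustrative only)."""
    out = []
    for i in range(0, len(elements), batch_size):
        batch = elements[i:i + batch_size]
        # fixed width if given, else the longest element in this batch
        width = padded_shape if padded_shape is not None else max(len(e) for e in batch)
        out.append([list(e) + [padding_value] * (width - len(e)) for e in batch])
    return out

# same elements as dataset A above: [1], [2, 2], [3, 3, 3], [4, 4, 4, 4]
A = [[1], [2, 2], [3, 3, 3], [4, 4, 4, 4]]
```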
See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor.
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
padded_shapes (Optional.) A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank.
padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component.
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch.
Returns
Dataset A Dataset.
Raises
ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source
prefetch(
buffer_size
)
Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements.
Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each).
dataset = tf.data.Dataset.range(3)
dataset = dataset.prefetch(2)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.
Returns
Dataset A Dataset. range View source
@staticmethod
range(
*args, **kwargs
)
Creates a Dataset of a step-separated range of values.
list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
list(Dataset.range(5, 1).as_numpy_iterator())
[]
list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args
*args follows the same semantics as Python's built-in range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs output_type: Its expected dtype. (Optional, default: tf.int64).
Returns
Dataset A RangeDataset.
Raises
ValueError if len(args) == 0. reduce View source
reduce(
initial_state, reduce_func
)
Reduces the input dataset to a single element. The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result.
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy()
5
tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy()
10
Args
initial_state An element representing the initial state of the transformation.
reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new state element. The structure of new_state must match the structure of initial_state.
Returns A dataset element corresponding to the final state of the transformation.
repeat View source
repeat(
count=None
)
Repeats this dataset so each original value is seen count times.
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements.
Args
count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset be repeated indefinitely.
Returns
Dataset A Dataset. shard View source
shard(
num_shards, index
)
Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i.
A = tf.data.Dataset.range(10)
B = A.shard(num_shards=3, index=0)
list(B.as_numpy_iterator())
[0, 3, 6, 9]
C = A.shard(num_shards=3, index=1)
list(C.as_numpy_iterator())
[1, 4, 7]
D = A.shard(num_shards=3, index=2)
list(D.as_numpy_iterator())
[2, 5, 8]
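The "index mod n = i" rule above is simple enough to state as a one-line plain-Python model (illustrative only, not the tf.data implementation):

```python
def shard(elements, num_shards, index):
    # keep the elements whose position satisfies position % num_shards == index
    return [x for position, x in enumerate(elements) if position % num_shards == index]
```

Applied to range(10) with num_shards=3, this reproduces the three shards shown above.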
This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows:
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline:
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
Args
num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel.
index A tf.int64 scalar tf.Tensor, representing the worker index.
Returns
Dataset A Dataset.
Raises
InvalidArgumentError if num_shards or index are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source
shuffle(
buffer_size, seed=None, reshuffle_each_iteration=None
)
Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration:
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
dataset = tf.data.Dataset.range(3)
dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
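The buffer mechanism described above (fill a buffer, sample from it, refill the vacated slot) can be modeled in plain Python. This sketch is illustrative only and uses Python's random module rather than TensorFlow's RNG, so it does not reproduce tf.data's actual orderings:

```python
import random

def buffered_shuffle(elements, buffer_size, seed=None):
    """Illustrative model of buffer-based shuffling (not the tf.data implementation)."""
    rng = random.Random(seed)
    it = iter(elements)
    buf = []
    for _ in range(buffer_size):       # fill the buffer
        try:
            buf.append(next(it))
        except StopIteration:
            break
    out = []
    while buf:
        pos = rng.randrange(len(buf))  # sample a random buffered element
        out.append(buf[pos])
        try:
            buf[pos] = next(it)        # replace it with the next input element
        except StopIteration:
            buf.pop(pos)               # input exhausted; shrink the buffer
    return out
```

Note the consequence described in the text: the first emitted element can only come from the first buffer_size input elements.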
Args
buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample.
seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior.
reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.)
Returns
Dataset A Dataset. skip View source
skip(
count
)
Creates a Dataset that skips count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.skip(7)
list(dataset.as_numpy_iterator())
[7, 8, 9]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset.
Returns
Dataset A Dataset. take View source
take(
count
)
Creates a Dataset with at most count elements from this dataset.
dataset = tf.data.Dataset.range(10)
dataset = dataset.take(3)
list(dataset.as_numpy_iterator())
[0, 1, 2]
Args
count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset.
Returns
Dataset A Dataset. unbatch View source
unbatch()
Splits elements of a dataset into multiple elements. For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...].
elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ]
dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64)
dataset = dataset.unbatch()
list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 1, 2, 3, 4]
Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch.
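The shape rule above amounts to flattening one level of nesting. A plain-Python model (illustrative only) over lists of varying lengths, mirroring the example:

```python
def unbatch(batches):
    # flatten one level: each batch of B items contributes B consecutive elements
    return [x for batch in batches for x in batch]
```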
Returns A Dataset.
window View source
window(
size, shift=None, stride=1, drop_remainder=False
)
Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example:
dataset = tf.data.Dataset.range(7).window(2)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1]
[2, 3]
[4, 5]
[6]
dataset = tf.data.Dataset.range(7).window(3, 2, 1, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 1, 2]
[2, 3, 4]
[4, 5, 6]
dataset = tf.data.Dataset.range(7).window(3, 1, 2, True)
for window in dataset:
print(list(window.as_numpy_iterator()))
[0, 2, 4]
[1, 3, 5]
[2, 4, 6]
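The interaction of size, shift, stride, and drop_remainder can be modeled with list slicing in plain Python. This sketch is illustrative only (not the tf.data implementation, which yields windows as nested datasets rather than lists); it reproduces the three examples above:

```python
def window(elements, size, shift=None, stride=1, drop_remainder=False):
    """Illustrative model of window() over a list."""
    if shift is None:
        shift = size  # default: non-overlapping windows
    elements = list(elements)
    out = []
    for start in range(0, len(elements), shift):
        # take up to `size` elements, `stride` apart, starting at `start`
        w = elements[start:start + size * stride:stride]
        if not w or (drop_remainder and len(w) < size):
            continue
        out.append(w)
    return out
```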
Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows.
nested = ([1, 2, 3, 4], [5, 6, 7, 8])
dataset = tf.data.Dataset.from_tensor_slices(nested).window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print(tuple(to_numpy(component) for component in window))
([1, 2], [5, 6])
([3, 4], [7, 8])
dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]})
dataset = dataset.window(2)
for window in dataset:
def to_numpy(ds):
return list(ds.as_numpy_iterator())
print({'a': to_numpy(window['a'])})
{'a': [1, 2]}
{'a': [3, 4]}
Args
size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive.
shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive.
stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element".
drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size.
Returns
Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source
with_options(
options
)
Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset. If options are set multiple times, they are merged as long as different options do not use different non-default values.
ds = tf.data.Dataset.range(5)
ds = ds.interleave(lambda x: tf.data.Dataset.range(5),
cycle_length=3,
num_parallel_calls=3)
options = tf.data.Options()
# This will make the interleave order non-deterministic.
options.experimental_deterministic = False
ds = ds.with_options(options)
Args
options A tf.data.Options that identifies the options to use.
Returns
Dataset A Dataset with the given options.
Raises
ValueError when an option is set more than once to a non-default value. zip View source
@staticmethod
zip(
datasets
)
Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects.
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
ds = tf.data.Dataset.zip((a, b))
list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
ds = tf.data.Dataset.zip((b, a))
list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
# The `datasets` argument may contain an arbitrary number of datasets.
c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
# [9, 10],
# [11, 12] ]
ds = tf.data.Dataset.zip((a, b, c))
for element in ds.as_numpy_iterator():
print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
ds = tf.data.Dataset.zip((a, d))
list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args
datasets A nested structure of datasets.
Returns
Dataset A Dataset. __bool__ View source
__bool__()
__iter__ View source
__iter__()
Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol.
Returns A tf.data.Iterator for the elements of this dataset.
Raises
RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source
__len__()
Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead.
Returns An integer representing the length of the dataset.
Raises
RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source
__nonzero__() | |
doc_3960 |
Creates a biclustering for X. Parameters
Xarray-like of shape (n_samples, n_features)
yIgnored | |
doc_3961 |
Alias for set_linewidth. | |
doc_3962 | Platform: Windows
The ioctl() method is a limited interface to the WSAIoctl system interface. Please refer to the Win32 documentation for more information. On other platforms, the generic fcntl.fcntl() and fcntl.ioctl() functions may be used; they accept a socket object as their first argument. Currently only the following control codes are supported: SIO_RCVALL, SIO_KEEPALIVE_VALS, and SIO_LOOPBACK_FAST_PATH. Changed in version 3.6: SIO_LOOPBACK_FAST_PATH was added. | |
doc_3963 | Set an attribute value from a string, given a namespaceURI and a qname. Note that a qname is the whole attribute name. This is different than above. | |
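In the DOM API this corresponds to Element.setAttributeNS, where the qname carries the prefix while namespace lookups use only the local name; a small sketch (the namespace URI and prefix are made up for illustration):

```python
from xml.dom import minidom

doc = minidom.parseString("<root/>")
root = doc.documentElement

# The qname ("ex:attr") is the whole attribute name, prefix included;
# getAttributeNS takes the namespace URI plus the local name ("attr").
root.setAttributeNS("http://example.com/ns", "ex:attr", "value")

print(root.getAttributeNS("http://example.com/ns", "attr"))  # value
```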
doc_3964 |
Add a callback function that will be called whenever one of the Artist's properties changes. Parameters
funccallable
The callback function. It must have the signature: def func(artist: Artist) -> Any
where artist is the calling Artist. Return values may exist but are ignored. Returns
int
The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback | |
doc_3965 | interactively scale an image using smoothscale scaletest.main(imagefile, convert_alpha=False, run_speed_test=True) -> None arguments: imagefile - file name of source image (required)
convert_alpha - use convert_alpha() on the surf (default False)
run_speed_test - (default True) A smoothscale example that resizes an image on the screen. Vertical and horizontal arrow keys are used to change the width and height of the displayed image. If the convert_alpha option is True then the source image is forced to have source alpha, whether or not the original image does. If run_speed_test is True then a background timing test is performed instead of the interactive scaler. If scaletest.py is run as a program then the command line options are: ImageFile [-t] [-convert_alpha]
[-t] = Run Speed Test
[-convert_alpha] = Use convert_alpha() on the surf. | |
doc_3966 | Write audio frames, without correcting nframes. Changed in version 3.4: Any bytes-like object is now accepted. | |
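A small round-trip sketch using an in-memory file: because writeframesraw skips the nframes fix-up, it is close() that patches the header before the data is read back:

```python
import io
import wave

buf = io.BytesIO()
w = wave.open(buf, "wb")
w.setnchannels(1)
w.setsampwidth(2)      # 16-bit samples
w.setframerate(8000)
w.writeframesraw(bytearray(b"\x00\x00" * 4))  # any bytes-like object works
w.close()              # close() corrects nframes in the header

buf.seek(0)
r = wave.open(buf, "rb")
print(r.getnframes())  # 4
```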
doc_3967 | See Migration guide for more details. tf.compat.v1.raw_ops.SegmentSum
tf.raw_ops.SegmentSum(
data, segment_ids, name=None
)
Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over j such that segment_ids[j] == i. If the sum is empty for a given segment ID i, output[i] = 0. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
tf.segment_sum(c, tf.constant([0, 0, 1]))
# ==> [[5, 5, 5, 5],
# [5, 6, 7, 8]]
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | |
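The segment-ids semantics can be reproduced in plain Python; a sketch of the rule output_i = sum over j of data_j where segment_ids[j] == i (rows are lists of ints here, an assumption for illustration):

```python
def segment_sum(data, segment_ids):
    """Sum rows of `data` into buckets keyed by sorted `segment_ids`.

    A segment id with no rows yields a row of zeros, matching the op's
    rule that an empty sum gives output[i] = 0.
    """
    n_segments = segment_ids[-1] + 1
    width = len(data[0])
    out = [[0] * width for _ in range(n_segments)]
    for row, sid in zip(data, segment_ids):
        for k, v in enumerate(row):
            out[sid][k] += v
    return out

c = [[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]]
print(segment_sum(c, [0, 0, 1]))  # [[5, 5, 5, 5], [5, 6, 7, 8]]
```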
doc_3968 |
Bases: matplotlib.transforms.TransformedPath A TransformedPatchPath caches a non-affine transformed copy of the Patch. This cached copy is automatically updated when the non-affine part of the transform or the patch changes. Parameters
patchPatch
__init__(patch)[source]
Parameters
patchPatch
__module__='matplotlib.transforms' | |
doc_3969 |
For each element in self, return a copy of the string with uppercase characters converted to lowercase and vice versa. See also char.swapcase | |
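The per-element behavior matches the stdlib str.swapcase; for example:

```python
# str.swapcase flips the case of each character, mirroring the
# element-wise behavior described above.
print("Hello World".swapcase())  # hELLO wORLD
```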
doc_3970 | Apply RFC 2965 rules on unverifiable transactions even to Netscape cookies. | |
doc_3971 | Search mailbox for matching messages. charset may be None, in which case no CHARSET will be specified in the request to the server. The IMAP protocol requires that at least one criterion be specified; an exception will be raised when the server returns an error. charset must be None if the UTF8=ACCEPT capability was enabled using the enable() command. Example: # M is a connected IMAP4 instance...
typ, msgnums = M.search(None, 'FROM', '"LDJ"')
# or:
typ, msgnums = M.search(None, '(FROM "LDJ")') | |
doc_3972 | reserve channels from being automatically used set_reserved(count) -> None The mixer can reserve any number of channels that will not be automatically selected for playback by Sounds. If sounds are currently playing on the reserved channels they will not be stopped. This allows the application to reserve a specific number of channels for important sounds that must not be dropped or have a guaranteed channel to play on. | |
doc_3973 | alias of werkzeug.datastructures.ImmutableMultiDict | |
doc_3974 | See Migration guide for more details. tf.compat.v1.keras.experimental.CosineDecay
tf.keras.experimental.CosineDecay(
initial_learning_rate, decay_steps, alpha=0.0, name=None
)
See [Loshchilov & Hutter, ICLR2016], SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/abs/1608.03983 When training a model, it is often recommended to lower the learning rate as the training progresses. This schedule applies a cosine decay function to an optimizer step, given a provided initial learning rate. It requires a step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The schedule is a 1-arg callable that produces a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions. It is computed as: def decayed_learning_rate(step):
step = min(step, decay_steps)
cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))
decayed = (1 - alpha) * cosine_decay + alpha
return initial_learning_rate * decayed
Example usage: decay_steps = 1000
lr_decayed_fn = tf.keras.experimental.CosineDecay(
initial_learning_rate, decay_steps)
You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate. The learning rate schedule is also serializable and deserializable using tf.keras.optimizers.schedules.serialize and tf.keras.optimizers.schedules.deserialize.
Returns A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar Tensor of the same type as initial_learning_rate.
Args
initial_learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
decay_steps A scalar int32 or int64 Tensor or a Python number. Number of steps to decay over.
alpha A scalar float32 or float64 Tensor or a Python number. Minimum learning rate value as a fraction of initial_learning_rate.
name String. Optional name of the operation. Defaults to 'CosineDecay'. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates a LearningRateSchedule from its config.
Args
config Output of get_config().
Returns A LearningRateSchedule instance.
get_config View source
get_config()
__call__ View source
__call__(
step
)
Call self as a function. | |
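The decayed_learning_rate pseudocode above translates directly to plain Python; a sketch of the decay curve with scalar floats only (no TensorFlow):

```python
import math

def cosine_decay(initial_learning_rate, decay_steps, step, alpha=0.0):
    # Same formula as the pseudocode: cosine anneal from the initial
    # rate down to alpha * initial_learning_rate over decay_steps.
    step = min(step, decay_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * step / decay_steps))
    decayed = (1 - alpha) * cosine + alpha
    return initial_learning_rate * decayed

print(cosine_decay(0.1, 1000, 0))     # 0.1  (no decay at step 0)
print(cosine_decay(0.1, 1000, 1000))  # 0.0  (fully decayed, alpha=0)
```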
doc_3975 |
Set the properties by parsing a fontconfig pattern. This support does not depend on fontconfig; we are merely borrowing its pattern syntax for use here. | |
doc_3976 |
Logical indicating if the date belongs to a leap year. | |
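The underlying Gregorian rule is available in the stdlib via calendar.isleap; a quick check:

```python
import calendar

# Leap years are divisible by 4, excluding centuries not divisible by 400.
print(calendar.isleap(2000))  # True  (divisible by 400)
print(calendar.isleap(1900))  # False (century not divisible by 400)
```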
doc_3977 |
Return the kerning pair distance (possibly 0) for chars name1 and name2. | |
doc_3978 |
Get the current Axes. If there is currently no Axes on this Figure, a new one is created using Figure.add_subplot. (To test whether there is currently an Axes on a Figure, check whether figure.axes is empty. To test whether there is currently a Figure on the pyplot figure stack, check whether pyplot.get_fignums() is empty.) The following kwargs are supported for ensuring the returned Axes adheres to the given projection etc., and for Axes creation if the active Axes does not exist:
Property Description
adjustable {'box', 'datalim'}
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
animated bool
aspect {'auto', 'equal'} or float
autoscale_on bool
autoscalex_on bool
autoscaley_on bool
axes_locator Callable[[Axes, Renderer], Bbox]
axisbelow bool or 'line'
box_aspect float or None
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
facecolor or fc color
figure Figure
frame_on bool
gid str
in_layout bool
label object
navigate bool
navigate_mode unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
position [left, bottom, width, height] or Bbox
prop_cycle unknown
rasterization_zorder float or None
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
title str
transform Transform
url str
visible bool
xbound unknown
xlabel str
xlim (bottom: float, top: float)
xmargin float greater than -0.5
xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
xticklabels unknown
xticks unknown
ybound unknown
ylabel str
ylim (bottom: float, top: float)
ymargin float greater than -0.5
yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
yticklabels unknown
yticks unknown
zorder float
Examples using matplotlib.pyplot.gca
Creating annotated heatmaps
Managing multiple figures in pyplot
Scale invariant angle label
Rainbow text
Infinite lines
Set and get properties
Hinton diagrams
Tight Layout guide | |
doc_3979 | New in version 3.7.
safe
The UUID was generated by the platform in a multiprocessing-safe way.
unsafe
The UUID was not generated in a multiprocessing-safe way.
unknown
The platform does not provide information on whether the UUID was generated safely or not. | |
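These three states are the members of uuid.SafeUUID, exposed on a UUID via its is_safe attribute; a quick check (uuid4 is built from random bytes rather than platform generation, so it reports unknown):

```python
import uuid

u = uuid.uuid4()  # not generated by the platform's safe mechanism
print(u.is_safe is uuid.SafeUUID.unknown)  # True
```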
doc_3980 |
Alias for set_linestyle. | |
doc_3981 | See Migration guide for more details. tf.compat.v1.test.TestCase
tf.test.TestCase(
methodName='runTest'
)
Child Classes class failureException Methods addCleanup
addCleanup(
*args, **kwargs
)
Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success. Cleanup items are called even if setUp fails (unlike tearDown). addTypeEqualityFunc
addTypeEqualityFunc(
typeobj, function
)
Add a type specific assertEqual style function to compare a type. This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.
Args
typeobj The data type to call this function on when both values are of the same type in assertEqual().
function The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal. assertAllClose View source
assertAllClose(
a, b, rtol=1e-06, atol=1e-06, msg=None
)
Asserts that two structures of numpy arrays or Tensors, have near values. a and b can be arbitrarily nested structures. A layer of a nested structure can be a dict, namedtuple, tuple or list.
Note: the implementation follows numpy.allclose (and numpy.testing.assert_allclose). It checks whether two arrays are element-wise equal within a tolerance. The relative difference (rtol * abs(b)) and the absolute difference atol are added together to compare against the absolute difference between a and b.
Args
a The expected numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor), or any arbitrarily nested structure of these.
b The actual numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor), or any arbitrarily nested structure of these.
rtol relative tolerance.
atol absolute tolerance.
msg Optional message to report on failure.
Raises
ValueError if only one of a[p] and b[p] is a dict or a[p] and b[p] have different length, where [p] denotes a path to the nested structure, e.g. given a = [(1, 1), {'d': (6, 7)}] and [p] = [1]['d'], then a[p] = (6, 7). assertAllCloseAccordingToType View source
assertAllCloseAccordingToType(
a, b, rtol=1e-06, atol=1e-06, float_rtol=1e-06, float_atol=1e-06,
half_rtol=0.001, half_atol=0.001, bfloat16_rtol=0.01, bfloat16_atol=0.01,
msg=None
)
Like assertAllClose, but also suitable for comparing fp16 arrays. In particular, the tolerance is relaxed to 1e-3 if at least one of the arguments is of type float16.
Args
a the expected numpy ndarray or anything that can be converted to one.
b the actual numpy ndarray or anything that can be converted to one.
rtol relative tolerance.
atol absolute tolerance.
float_rtol relative tolerance for float32.
float_atol absolute tolerance for float32.
half_rtol relative tolerance for float16.
half_atol absolute tolerance for float16.
bfloat16_rtol relative tolerance for bfloat16.
bfloat16_atol absolute tolerance for bfloat16.
msg Optional message to report on failure. assertAllEqual View source
assertAllEqual(
a, b, msg=None
)
Asserts that two numpy arrays or Tensors have the same values.
Args
a the expected numpy ndarray or anything that can be converted to one.
b the actual numpy ndarray or anything that can be converted to one.
msg Optional message to report on failure. assertAllGreater View source
assertAllGreater(
a, comparison_target
)
Assert element values are all greater than a target value.
Args
a The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
comparison_target The target value of comparison. assertAllGreaterEqual View source
assertAllGreaterEqual(
a, comparison_target
)
Assert element values are all greater than or equal to a target value.
Args
a The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
comparison_target The target value of comparison. assertAllInRange View source
assertAllInRange(
target, lower_bound, upper_bound, open_lower_bound=False, open_upper_bound=False
)
Assert that elements in a Tensor are all in a given range.
Args
target The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
lower_bound lower bound of the range
upper_bound upper bound of the range
open_lower_bound (bool) whether the lower bound is open (i.e., > rather than the default >=)
open_upper_bound (bool) whether the upper bound is open (i.e., < rather than the default <=)
Raises
AssertionError if the value tensor does not have an ordered numeric type (float* or int*), or if there are nan values, or if any of the elements do not fall in the specified range. assertAllInSet View source
assertAllInSet(
target, expected_set
)
Assert that elements of a Tensor are all in a given closed set.
Args
target The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
expected_set (list, tuple or set) The closed set that the elements of the value of target are expected to fall into.
Raises
AssertionError if any of the elements do not fall into expected_set. assertAllLess View source
assertAllLess(
a, comparison_target
)
Assert element values are all less than a target value.
Args
a The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
comparison_target The target value of comparison. assertAllLessEqual View source
assertAllLessEqual(
a, comparison_target
)
Assert element values are all less than or equal to a target value.
Args
a The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
comparison_target The target value of comparison. assertAlmostEqual
assertAlmostEqual(
first, second, places=None, msg=None, delta=None
)
Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta. Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit). If the two objects compare equal then they will automatically compare almost equal. assertAlmostEquals
assertAlmostEquals(
*args, **kwargs
)
assertArrayNear View source
assertArrayNear(
farray1, farray2, err, msg=None
)
Asserts that two float arrays are near each other. Checks that for all elements of farray1 and farray2 |f1 - f2| < err. Asserts a test failure if not.
Args
farray1 a list of float values.
farray2 a list of float values.
err a float value.
msg Optional message to report on failure. assertBetween
assertBetween(
value, minv, maxv, msg=None
)
Asserts that value is between minv and maxv (inclusive). assertCommandFails
assertCommandFails(
command, regexes, env=None, close_fds=True, msg=None
)
Asserts a shell command fails and the error matches a regex in a list.
Args
command List or string representing the command to run.
regexes the list of regular expression strings.
env Dictionary of environment variable settings. If None, no environment variables will be set for the child process. This is to make tests more hermetic. NOTE: this behavior is different than the standard subprocess module.
close_fds Whether or not to close all open fd's in the child after forking.
msg Optional message to report on failure. assertCommandSucceeds
assertCommandSucceeds(
command, regexes=(b'',), env=None, close_fds=True, msg=None
)
Asserts that a shell command succeeds (i.e. exits with code 0).
Args
command List or string representing the command to run.
regexes List of regular expression byte strings that match success.
env Dictionary of environment variable settings. If None, no environment variables will be set for the child process. This is to make tests more hermetic. NOTE: this behavior is different than the standard subprocess module.
close_fds Whether or not to close all open fd's in the child after forking.
msg Optional message to report on failure. assertContainsExactSubsequence
assertContainsExactSubsequence(
container, subsequence, msg=None
)
Asserts that "container" contains "subsequence" as an exact subsequence. Asserts that "container" contains all the elements of "subsequence", in order, and without other elements interspersed. For example, [1, 2, 3] is an exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0].
Args
container the list we're testing for subsequence inclusion.
subsequence the list we hope will be an exact subsequence of container.
msg Optional message to report on failure. assertContainsInOrder
assertContainsInOrder(
strings, target, msg=None
)
Asserts that the strings provided are found in the target in order. This may be useful for checking HTML output.
Args
strings A list of strings, such as [ 'fox', 'dog' ]
target A target string in which to look for the strings, such as 'The quick brown fox jumped over the lazy dog'.
msg Optional message to report on failure. assertContainsSubsequence
assertContainsSubsequence(
container, subsequence, msg=None
)
Asserts that "container" contains "subsequence" as a subsequence. Asserts that "container" contains all the elements of "subsequence", in order, but possibly with other elements interspersed. For example, [1, 2, 3] is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0].
Args
container the list we're testing for subsequence inclusion.
subsequence the list we hope will be a subsequence of container.
msg Optional message to report on failure. assertContainsSubset
assertContainsSubset(
expected_subset, actual_set, msg=None
)
Checks whether actual iterable is a superset of expected iterable. assertCountEqual
assertCountEqual(
first, second, msg=None
)
An unordered sequence comparison asserting that both sequences contain the same elements, regardless of order. If the same element occurs more than once, it verifies that the elements occur the same number of times. self.assertEqual(Counter(list(first)),
Counter(list(second)))
Example: - [0, 1, 1] and [1, 0, 1] compare equal.
- [0, 0, 1] and [0, 1] compare unequal.
assertDTypeEqual View source
assertDTypeEqual(
target, expected_dtype
)
Assert ndarray data type is equal to expected.
Args
target The numpy ndarray, or anything that can be converted into a numpy ndarray (including Tensor).
expected_dtype Expected data type. assertDeviceEqual View source
assertDeviceEqual(
device1, device2, msg=None
)
Asserts that the two given devices are the same.
Args
device1 A string device name or TensorFlow DeviceSpec object.
device2 A string device name or TensorFlow DeviceSpec object.
msg Optional message to report on failure. assertDictContainsSubset
assertDictContainsSubset(
subset, dictionary, msg=None
)
Checks whether dictionary is a superset of subset. assertDictEqual
assertDictEqual(
a, b, msg=None
)
Raises AssertionError if a and b are not equal dictionaries.
Args
a A dict, the expected value.
b A dict, the actual value.
msg An optional str, the associated message.
Raises
AssertionError if the dictionaries are not equal. assertEmpty
assertEmpty(
container, msg=None
)
Asserts that an object has zero length.
Args
container Anything that implements the collections.abc.Sized interface.
msg Optional message to report on failure. assertEndsWith
assertEndsWith(
actual, expected_end, msg=None
)
Asserts that actual.endswith(expected_end) is True.
Args
actual str
expected_end str
msg Optional message to report on failure. assertEqual
assertEqual(
first, second, msg=None
)
Fail if the two objects are unequal as determined by the '==' operator. assertEquals
assertEquals(
*args, **kwargs
)
assertFalse
assertFalse(
expr, msg=None
)
Check that the expression is false. assertGreater
assertGreater(
a, b, msg=None
)
Just like self.assertTrue(a > b), but with a nicer default message. assertGreaterEqual
assertGreaterEqual(
a, b, msg=None
)
Just like self.assertTrue(a >= b), but with a nicer default message. assertIn
assertIn(
member, container, msg=None
)
Just like self.assertTrue(a in b), but with a nicer default message. assertIs
assertIs(
expr1, expr2, msg=None
)
Just like self.assertTrue(a is b), but with a nicer default message. assertIsInstance
assertIsInstance(
obj, cls, msg=None
)
Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message. assertIsNone
assertIsNone(
obj, msg=None
)
Same as self.assertTrue(obj is None), with a nicer default message. assertIsNot
assertIsNot(
expr1, expr2, msg=None
)
Just like self.assertTrue(a is not b), but with a nicer default message. assertIsNotNone
assertIsNotNone(
obj, msg=None
)
Included for symmetry with assertIsNone. assertItemsEqual
assertItemsEqual(
first, second, msg=None
)
An unordered sequence comparison asserting that both sequences contain the same elements, regardless of order. If the same element occurs more than once, it verifies that the elements occur the same number of times. self.assertEqual(Counter(list(first)),
Counter(list(second)))
Example: - [0, 1, 1] and [1, 0, 1] compare equal.
- [0, 0, 1] and [0, 1] compare unequal.
assertJsonEqual
assertJsonEqual(
first, second, msg=None
)
Asserts that the JSON objects defined in two strings are equal. A summary of the differences will be included in the failure message using assertSameStructure.
Args
first A string containing JSON to decode and compare to second.
second A string containing JSON to decode and compare to first.
msg Additional text to include in the failure message. assertLen
assertLen(
container, expected_len, msg=None
)
Asserts that an object has the expected length.
Args
container Anything that implements the collections.abc.Sized interface.
expected_len The expected length of the container.
msg Optional message to report on failure. assertLess
assertLess(
a, b, msg=None
)
Just like self.assertTrue(a < b), but with a nicer default message. assertLessEqual
assertLessEqual(
a, b, msg=None
)
Just like self.assertTrue(a <= b), but with a nicer default message. assertListEqual
assertListEqual(
list1, list2, msg=None
)
A list-specific equality assertion.
Args
list1 The first list to compare.
list2 The second list to compare.
msg Optional message to use on failure instead of a list of differences. assertLogs
assertLogs(
logger=None, level=None
)
Fail unless a log message of level level or higher is emitted on logger_name or its children. If omitted, level defaults to INFO and logger defaults to the root logger. This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects. Example:: with self.assertLogs('foo', level='INFO') as cm:
logging.getLogger('foo').info('first message')
logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
'ERROR:foo.bar:second message'])
assertMultiLineEqual
assertMultiLineEqual(
first, second, msg=None, **kwargs
)
Asserts that two multi-line strings are equal. assertNDArrayNear View source
assertNDArrayNear(
ndarray1, ndarray2, err, msg=None
)
Asserts that two numpy arrays have near values.
Args
ndarray1 a numpy ndarray.
ndarray2 a numpy ndarray.
err a float. The maximum absolute difference allowed.
msg Optional message to report on failure. assertNear View source
assertNear(
f1, f2, err, msg=None
)
Asserts that two floats are near each other. Checks that |f1 - f2| < err and asserts a test failure if not.
Args
f1 A float value.
f2 A float value.
err A float value.
msg An optional string message to append to the failure message. assertNoCommonElements
assertNoCommonElements(
expected_seq, actual_seq, msg=None
)
Checks whether actual iterable and expected iterable are disjoint. assertNotAllClose View source
assertNotAllClose(
a, b, **kwargs
)
Assert that two numpy arrays, or Tensors, do not have near values.
Args
a the first value to compare.
b the second value to compare.
**kwargs additional keyword arguments to be passed to the underlying assertAllClose call.
Raises
AssertionError If a and b are unexpectedly close at all elements. assertNotAllEqual View source
assertNotAllEqual(
a, b, msg=None
)
Asserts that two numpy arrays or Tensors do not have the same values.
Args
a the expected numpy ndarray or anything that can be converted to one.
b the actual numpy ndarray or anything that can be converted to one.
msg Optional message to report on failure. assertNotAlmostEqual
assertNotAlmostEqual(
first, second, places=None, msg=None, delta=None
)
Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta. Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit). Objects that are equal automatically fail. assertNotAlmostEquals
assertNotAlmostEquals(
*args, **kwargs
)
assertNotEmpty
assertNotEmpty(
container, msg=None
)
Asserts that an object has non-zero length.
Args
container Anything that implements the collections.abc.Sized interface.
msg Optional message to report on failure. assertNotEndsWith
assertNotEndsWith(
actual, unexpected_end, msg=None
)
Asserts that actual.endswith(unexpected_end) is False.
Args
actual str
unexpected_end str
msg Optional message to report on failure. assertNotEqual
assertNotEqual(
first, second, msg=None
)
Fail if the two objects are equal as determined by the '!=' operator. assertNotEquals
assertNotEquals(
*args, **kwargs
)
assertNotIn
assertNotIn(
member, container, msg=None
)
Just like self.assertTrue(a not in b), but with a nicer default message. assertNotIsInstance
assertNotIsInstance(
obj, cls, msg=None
)
Included for symmetry with assertIsInstance. assertNotRegex
assertNotRegex(
text, unexpected_regex, msg=None
)
Fail the test if the text matches the regular expression. assertNotRegexpMatches
assertNotRegexpMatches(
*args, **kwargs
)
assertNotStartsWith
assertNotStartsWith(
actual, unexpected_start, msg=None
)
Asserts that actual.startswith(unexpected_start) is False.
Args
actual str
unexpected_start str
msg Optional message to report on failure. assertProtoEquals View source
assertProtoEquals(
expected_message_maybe_ascii, message, msg=None
)
Asserts that message is same as parsed expected_message_ascii. Creates another prototype of message, reads the ascii message into it and then compares them using self._AssertProtoEqual().
Args
expected_message_maybe_ascii proto message in original or ascii form.
message the message to validate.
msg Optional message to report on failure. assertProtoEqualsVersion View source
assertProtoEqualsVersion(
expected, actual, producer=versions.GRAPH_DEF_VERSION,
min_consumer=versions.GRAPH_DEF_VERSION_MIN_CONSUMER, msg=None
)
assertRaises
assertRaises(
expected_exception, *args, **kwargs
)
Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. If called with the callable and arguments omitted, will return a context object used like this:: with self.assertRaises(SomeException):
do_something()
An optional keyword argument 'msg' can be provided when assertRaises is used as a context object. The context manager keeps a reference to the exception as the 'exception' attribute. This allows you to inspect the exception after the assertion:: with self.assertRaises(SomeException) as cm:
do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
assertRaisesOpError View source
assertRaisesOpError(
expected_err_re_or_predicate
)
assertRaisesRegex
assertRaisesRegex(
expected_exception, expected_regex, *args, **kwargs
)
Asserts that the message in a raised exception matches a regex.
Args
expected_exception Exception class expected to be raised.
expected_regex Regex (re.Pattern object or string) expected to be found in error message.
args Function to be called and extra positional args.
kwargs Extra kwargs.
msg Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager. assertRaisesRegexp
assertRaisesRegexp(
expected_exception, expected_regex, *args, **kwargs
)
Asserts that the message in a raised exception matches a regex.
Args
expected_exception Exception class expected to be raised.
expected_regex Regex (re.Pattern object or string) expected to be found in error message.
args Function to be called and extra positional args.
kwargs Extra kwargs.
msg Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager. assertRaisesWithLiteralMatch
assertRaisesWithLiteralMatch(
expected_exception, expected_exception_message, callable_obj=None, *args,
**kwargs
)
Asserts that the message in a raised exception equals the given string. Unlike assertRaisesRegex, this method takes a literal string, not a regular expression. with self.assertRaisesWithLiteralMatch(ExType, 'message'): DoSomething()
Args
expected_exception Exception class expected to be raised.
expected_exception_message String message expected in the raised exception. For a raised exception e, expected_exception_message must equal str(e).
callable_obj Function to be called, or None to return a context.
*args Extra args.
**kwargs Extra kwargs.
Returns A context manager if callable_obj is None. Otherwise, None.
Raises self.failureException if callable_obj does not raise a matching exception.
assertRaisesWithPredicateMatch View source
@contextlib.contextmanager
assertRaisesWithPredicateMatch(
exception_type, expected_err_re_or_predicate
)
Returns a context manager to enclose code expected to raise an exception. If the exception is an OpError, the op stack is also included in the message predicate search.
Args
exception_type The expected type of exception that should be raised.
expected_err_re_or_predicate If this is callable, it should be a function of one argument that inspects the passed-in exception and returns True (success) or False (please fail the test). Otherwise, the error message is expected to match this regular expression partially.
Returns A context manager to surround code that is expected to raise an exception.
assertRegex
assertRegex(
text, expected_regex, msg=None
)
Fail the test unless the text matches the regular expression. assertRegexMatch
assertRegexMatch(
actual_str, regexes, message=None
)
Asserts that at least one regex in regexes matches str. If possible you should use assertRegex, which is a simpler version of this method. assertRegex takes a single regular expression (a string or re compiled object) instead of a list. Notes:
This function uses substring matching, i.e. the matching succeeds if any substring of the error message matches any regex in the list. This is more convenient for the user than full-string matching.
If regexes is the empty list, the matching will always fail. Use regexes=[''] for a regex that will always pass.
'.' matches any single character except the newline. To match any character, use '(.|\n)'.
'^' matches the beginning of each line, not just the beginning of the string. Similarly, '$' matches the end of each line.
An exception will be thrown if regexes contains an invalid regex.
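The any-of, substring-matching rule above can be sketched in a few lines. This is our own illustrative helper (not absl's implementation):

```python
import re

# Success if ANY regex in the list finds a partial (substring) match;
# an empty list of regexes always fails, as described above.
def regex_match_any(actual_str, regexes):
    return any(re.search(r, actual_str) for r in regexes)

print(regex_match_any("error: file not found", ["not found", "denied"]))  # True
print(regex_match_any("error", []))                                       # False
```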
Args
actual_str The string we try to match with the items in regexes.
regexes The regular expressions we want to match against str. See "Notes" above for detailed notes on how this is interpreted.
message The message to be printed if the test fails. assertRegexpMatches
assertRegexpMatches(
*args, **kwargs
)
assertSameElements
assertSameElements(
expected_seq, actual_seq, msg=None
)
Asserts that two sequences have the same elements (in any order). This method, unlike assertCountEqual, doesn't care about any duplicates in the expected and actual sequences. assertSameElements([1, 1, 1, 0, 0, 0], [0, 1]) # Doesn't raise an AssertionError If possible, you should use assertCountEqual instead of assertSameElements.
Args
expected_seq A sequence containing elements we are expecting.
actual_seq The sequence that we are testing.
msg The message to be printed if the test fails. assertSameStructure
assertSameStructure(
a, b, aname='a', bname='b', msg=None
)
Asserts that two values contain the same structural content. The two arguments should be data trees consisting of trees of dicts and lists. They will be deeply compared by walking into the contents of dicts and lists; other items will be compared using the == operator. If the two structures differ in content, the failure message will indicate the location within the structures where the first difference is found. This may be helpful when comparing large structures. Mixed Sequence and Set types are supported. Mixed Mapping types are supported, but the order of the keys will not be considered in the comparison.
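The deep walk described above can be sketched as a small recursive helper that reports the path of the first difference. This is our own illustrative code (not absl's implementation, which also handles mixed sequence/set and mapping types):

```python
# Walk dicts and lists; compare other values with ==; return the path of
# the first difference, or None when the structures match.
def first_difference(a, b, path="a"):
    if isinstance(a, dict) and isinstance(b, dict):
        for k in set(a) | set(b):
            if k not in a or k not in b:
                return f"{path}[{k!r}] missing on one side"
            diff = first_difference(a[k], b[k], f"{path}[{k!r}]")
            if diff:
                return diff
        return None
    if isinstance(a, list) and isinstance(b, list):
        if len(a) != len(b):
            return f"{path} lengths differ"
        for i, (x, y) in enumerate(zip(a, b)):
            diff = first_difference(x, y, f"{path}[{i}]")
            if diff:
                return diff
        return None
    return None if a == b else f"{path}: {a!r} != {b!r}"

print(first_difference({"x": [1, 2]}, {"x": [1, 3]}))  # a['x'][1]: 2 != 3
```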
Args
a The first structure to compare.
b The second structure to compare.
aname Variable name to use for the first structure in assertion messages.
bname Variable name to use for the second structure.
msg Additional text to include in the failure message. assertSequenceAlmostEqual
assertSequenceAlmostEqual(
expected_seq, actual_seq, places=None, msg=None, delta=None
)
An approximate equality assertion for ordered sequences. Fail if the two sequences are unequal as determined by their value differences rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between each value in the two sequences is more than the given delta. Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit). If the two sequences compare equal then they will automatically compare almost equal.
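The places/delta comparison rule above can be sketched as a boolean helper. This is our own illustrative code (not absl's implementation, which raises failureException instead of returning False):

```python
# Element-wise approximate comparison: with delta, each |e - a| must be
# within delta; otherwise each difference rounded to `places` decimal
# places must be zero.
def sequences_almost_equal(expected, actual, places=7, delta=None):
    if len(expected) != len(actual):
        return False
    for e, a in zip(expected, actual):
        if delta is not None:
            if abs(e - a) > delta:
                return False
        elif round(e - a, places) != 0:
            return False
    return True

print(sequences_almost_equal([1.0, 2.0], [1.0, 2.00000001]))       # True
print(sequences_almost_equal([1.0, 2.0], [1.0, 2.1], delta=0.05))  # False
```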
Args
expected_seq A sequence containing elements we are expecting.
actual_seq The sequence that we are testing.
places The number of decimal places to compare.
msg The message to be printed if the test fails.
delta The OK difference between compared values. assertSequenceEqual
assertSequenceEqual(
seq1, seq2, msg=None, seq_type=None
)
An equality assertion for ordered sequences (like lists and tuples). For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.
Args
seq1 The first sequence to compare.
seq2 The second sequence to compare.
seq_type The expected datatype of the sequences, or None if no datatype should be enforced.
msg Optional message to use on failure instead of a list of differences. assertSequenceStartsWith
assertSequenceStartsWith(
prefix, whole, msg=None
)
An equality assertion for the beginning of ordered sequences. If prefix is an empty sequence, it will raise an error unless whole is also an empty sequence. If prefix is not a sequence, it will raise an error if the first element of whole does not match.
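The prefix rules above (empty prefix, non-sequence prefix) can be sketched as a boolean helper. This is our own illustrative code, not absl's implementation:

```python
# Mirror the documented rules: an empty prefix only matches an empty
# whole; a non-sequence prefix is compared against whole's first element.
def sequence_starts_with(prefix, whole):
    try:
        prefix = list(prefix)
    except TypeError:
        # Non-sequence prefix: compare against the first element only.
        return len(whole) > 0 and whole[0] == prefix
    if not prefix:
        return len(whole) == 0
    return list(whole[:len(prefix)]) == prefix

print(sequence_starts_with([1, 2], [1, 2, 3]))  # True
print(sequence_starts_with([], [1]))            # False
```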
Args
prefix A sequence expected at the beginning of the whole parameter.
whole The sequence in which to look for prefix.
msg Optional message to report on failure. assertSetEqual
assertSetEqual(
set1, set2, msg=None
)
A set-specific equality assertion.
Args
set1 The first set to compare.
set2 The second set to compare.
msg Optional message to use on failure instead of a list of differences. assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method). assertShapeEqual View source
assertShapeEqual(
np_array, tf_tensor, msg=None
)
Asserts that a Numpy ndarray and a TensorFlow tensor have the same shape.
Args
np_array A Numpy ndarray or Numpy scalar.
tf_tensor A Tensor.
msg Optional message to report on failure.
Raises
TypeError If the arguments have the wrong type. assertStartsWith View source
assertStartsWith(
actual, expected_start, msg=None
)
Assert that actual.startswith(expected_start) is True.
Args
actual str
expected_start str
msg Optional message to report on failure. assertTotallyOrdered
assertTotallyOrdered(
*groups, **kwargs
)
Asserts that total ordering has been implemented correctly. For example, say you have a class A that compares only on its attribute x. Comparators other than __lt__ are omitted for brevity.

class A(object):
  def __init__(self, x, y):
    self.x = x
    self.y = y

  def __hash__(self):
    return hash(self.x)

  def __lt__(self, other):
    try:
      return self.x < other.x
    except AttributeError:
      return NotImplemented

assertTotallyOrdered will check that instances can be ordered correctly. For example,

self.assertTotallyOrdered(
    [None],  # None should come before everything else.
    [1],     # Integers sort earlier.
    [A(1, 'a')],
    [A(2, 'b')],  # 2 is after 1.
    [A(3, 'c'), A(3, 'd')],  # The second argument is irrelevant.
    [A(4, 'z')],
    ['foo'])  # Strings sort last.
Args
*groups A list of groups of elements. Each group of elements is a list of objects that are equal. The elements in each group must be less than the elements in the group after it. For example, these groups are totally ordered: [None], [1], [2, 2], [3]. **kwargs: optional msg keyword argument can be passed. assertTrue
assertTrue(
expr, msg=None
)
Check that the expression is true. assertTupleEqual
assertTupleEqual(
tuple1, tuple2, msg=None
)
A tuple-specific equality assertion.
Args
tuple1 The first tuple to compare.
tuple2 The second tuple to compare.
msg Optional message to use on failure instead of a list of differences. assertUrlEqual
assertUrlEqual(
a, b, msg=None
)
Asserts that urls are equal, ignoring ordering of query params. assertWarns
assertWarns(
expected_warning, *args, **kwargs
)
Fail unless a warning of class warnClass is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception. If called with the callable and arguments omitted, will return a context object used like this:

with self.assertWarns(SomeWarning):
do_something()
An optional keyword argument 'msg' can be provided when assertWarns is used as a context object. The context manager keeps a reference to the first matching warning as the 'warning' attribute; similarly, the 'filename' and 'lineno' attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion:

with self.assertWarns(SomeWarning) as cm:
do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
assertWarnsRegex
assertWarnsRegex(
expected_warning, expected_regex, *args, **kwargs
)
Asserts that the message in a triggered warning matches a regexp. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.
Args
expected_warning Warning class expected to be triggered.
expected_regex Regex (re.Pattern object or string) expected to be found in error message.
args Function to be called and extra positional args.
kwargs Extra kwargs.
msg Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager. assert_
assert_(
*args, **kwargs
)
cached_session View source
@contextlib.contextmanager
cached_session(
graph=None, config=None, use_gpu=False, force_gpu=False
)
Returns a TensorFlow Session for use in executing tests. This method behaves differently than self.session(): for performance reasons cached_session will by default reuse the same session within the same test. The session returned by this function will only be closed at the end of the test (in the tearDown method). Use the use_gpu and force_gpu options to control where ops are run. If force_gpu is True, all ops are pinned to /device:GPU:0. Otherwise, if use_gpu is True, TensorFlow tries to run as many ops on the GPU as possible. If both force_gpu and use_gpu are False, all ops are pinned to the CPU. Example:

class MyOperatorTest(test_util.TensorFlowTestCase):
  def testMyOperator(self):
    with self.cached_session(use_gpu=True) as sess:
      valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
      result = MyOperator(valid_input).eval()
      self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
      invalid_input = [-1.0, 2.0, 7.0]
      with self.assertRaisesOpError("negative input not supported"):
        MyOperator(invalid_input).eval()
Args
graph Optional graph to use during the returned session.
config An optional config_pb2.ConfigProto to use to configure the session.
use_gpu If True, attempt to run as many ops as possible on GPU.
force_gpu If True, pin all ops to /device:GPU:0.
Yields A Session object that should be used as a context manager to surround the graph building and execution code in a test case.
captureWritesToStream View source
@contextlib.contextmanager
captureWritesToStream(
stream
)
A context manager that captures the writes to a given stream. This context manager captures all writes to a given stream inside of a CapturedWrites object. When this context manager is created, it yields the CapturedWrites object. The captured contents can be accessed by calling .contents() on the CapturedWrites. For this function to work, the stream must have a file descriptor that can be modified using os.dup and os.dup2, and the stream must support a .flush() method. The default python sys.stdout and sys.stderr are examples of this. Note that this does not work in Colab or Jupyter notebooks, because those use alternate stdout streams. Example: class MyOperatorTest(test_util.TensorFlowTestCase):
def testMyOperator(self):
input = [1.0, 2.0, 3.0, 4.0, 5.0]
with self.captureWritesToStream(sys.stdout) as captured:
result = MyOperator(input).eval()
self.assertStartsWith(captured.contents(), "This was printed.")
Args
stream The stream whose writes should be captured. This stream must have a file descriptor, support writing via using that file descriptor, and must have a .flush() method.
Yields A CapturedWrites object that contains all writes to the specified stream made during this context.
checkedThread View source
checkedThread(
target, args=None, kwargs=None
)
Returns a Thread wrapper that asserts 'target' completes successfully. This method should be used to create all threads in test cases, as otherwise there is a risk that a thread will silently fail, and/or assertions made in the thread will not be respected.
Args
target A callable object to be executed in the thread.
args The argument tuple for the target invocation. Defaults to ().
kwargs A dictionary of keyword arguments for the target invocation. Defaults to {}.
Returns A wrapper for threading.Thread that supports start() and join() methods.
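The failure-propagation idea behind checkedThread can be sketched with a plain threading.Thread wrapper whose join() re-raises any exception from the target. This is our own illustrative code, not TensorFlow's implementation:

```python
import threading

# A thread wrapper that captures any exception raised by the target and
# re-raises it from join(), so a failing thread cannot fail silently.
class CheckedThread:
    def __init__(self, target, args=(), kwargs=None):
        self._exc = None
        kwargs = kwargs or {}

        def runner():
            try:
                target(*args, **kwargs)
            except BaseException as e:  # captured for re-raise in join()
                self._exc = e

        self._thread = threading.Thread(target=runner)

    def start(self):
        self._thread.start()

    def join(self):
        self._thread.join()
        if self._exc is not None:
            raise self._exc

def boom():
    raise ValueError("thread failed")

t = CheckedThread(target=boom)
t.start()
try:
    t.join()      # re-raises the ValueError from the worker thread
except ValueError as e:
    print(e)      # thread failed
```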
countTestCases
countTestCases()
create_tempdir
create_tempdir(
name=None, cleanup=None
)
Create a temporary directory specific to the test.
Note: The directory and its contents will be recursively cleared before creation. This ensures that there is no pre-existing state.
This creates a named directory on disk that is isolated to this test, and will be properly cleaned up by the test. This avoids several pitfalls of creating temporary directories for test purposes, as well as makes it easier to setup directories and verify their contents. For example: def test_foo(self):
out_dir = self.create_tempdir()
out_log = out_dir.create_file('output.log')
expected_outputs = [
os.path.join(out_dir, 'data-0.txt'),
os.path.join(out_dir, 'data-1.txt'),
]
code_under_test(out_dir)
self.assertTrue(os.path.exists(expected_outputs[0]))
self.assertTrue(os.path.exists(expected_outputs[1]))
self.assertEqual('foo', out_log.read_text())
See also: create_tempfile() for creating temporary files.
Args
name Optional name of the directory. If not given, a unique name will be generated and used.
cleanup Optional cleanup policy on when/if to remove the directory (and all its contents) at the end of the test. If None, then uses self.tempfile_cleanup.
Returns A _TempDir representing the created directory; see _TempDir class docs for usage.
create_tempfile
create_tempfile(
file_path=None, content=None, mode='w', encoding='utf8',
errors='strict', cleanup=None
)
Create a temporary file specific to the test. This creates a named file on disk that is isolated to this test, and will be properly cleaned up by the test. This avoids several pitfalls of creating temporary files for test purposes, as well as makes it easier to setup files, their data, read them back, and inspect them when a test fails. For example: def test_foo(self):
output = self.create_tempfile()
code_under_test(output)
self.assertGreater(os.path.getsize(output), 0)
self.assertEqual('foo', output.read_text())
Note: If the file already exists, it will be made writable and overwritten. The file is cleared first, so there is no pre-existing state.
See also: create_tempdir() for creating temporary directories, and _TempDir.create_file for creating files within a temporary directory.
Args
file_path Optional file path for the temp file. If not given, a unique file name will be generated and used. Slashes are allowed in the name; any missing intermediate directories will be created. NOTE: This path is the path that will be cleaned up, including any directories in the path, e.g., 'foo/bar/baz.txt' will rm -r foo.
content Optional string or bytes to initially write to the file. If not specified, then an empty file is created.
mode Mode string to use when writing content. Only used if content is non-empty.
encoding Encoding to use when writing string content. Only used if content is text.
errors How to handle text to bytes encoding errors. Only used if content is text.
cleanup Optional cleanup policy on when/if to remove the directory (and all its contents) at the end of the test. If None, then uses self.tempfile_cleanup.
Returns A _TempFile representing the created file; see _TempFile class docs for usage.
debug
debug()
Run the test without collecting errors in a TestResult defaultTestResult
defaultTestResult()
doCleanups
doCleanups()
Execute all cleanup functions. Normally called for you after tearDown. enter_context
enter_context(
manager
)
Returns the CM's value after registering it with the exit stack. Entering a context pushes it onto a stack of contexts. The context is exited when the test completes. Contexts are exited in the reverse order of entering. They will always be exited, regardless of test failure/success. The context stack is specific to the test being run. This is useful to eliminate per-test boilerplate when context managers are used. For example, instead of decorating every test with @mock.patch, simply do self.foo = self.enter_context(mock.patch(...)) in setUp().
Note: The context managers will always be exited without any error information. This is an unfortunate implementation detail due to some internals of how unittest runs tests.
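The stack behavior described above can be demonstrated with a plain contextlib.ExitStack, which is the standard-library mechanism this pattern is built on (an illustrative sketch of the idea, not absl's internals):

```python
import contextlib

events = []

@contextlib.contextmanager
def resource(name):
    events.append(f"open {name}")
    yield name
    events.append(f"close {name}")

# Enter contexts onto a stack; closing the stack exits them in reverse
# order of entry, just as enter_context does at the end of a test.
stack = contextlib.ExitStack()
a = stack.enter_context(resource("a"))
b = stack.enter_context(resource("b"))
stack.close()
print(events)  # ['open a', 'open b', 'close b', 'close a']
```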
Args
manager The context manager to enter. evaluate View source
evaluate(
tensors
)
Evaluates tensors and returns numpy values.
Args
tensors A Tensor or a nested list/tuple of Tensors.
Returns tensors numpy values.
fail
fail(
msg=None, prefix=None
)
Fail immediately with the given message, optionally prefixed. failIf
failIf(
*args, **kwargs
)
failIfAlmostEqual
failIfAlmostEqual(
*args, **kwargs
)
failIfEqual
failIfEqual(
*args, **kwargs
)
failUnless
failUnless(
*args, **kwargs
)
failUnlessAlmostEqual
failUnlessAlmostEqual(
*args, **kwargs
)
failUnlessEqual
failUnlessEqual(
*args, **kwargs
)
failUnlessRaises
failUnlessRaises(
*args, **kwargs
)
get_temp_dir View source
get_temp_dir()
Returns a unique temporary directory for the test to use. If you call this method multiple times during a test, it will return the same folder. However, across different runs the directories will be different. This ensures that tests will not be able to pollute each other's environment across different runs. If you need multiple unique directories within a single test, you should use tempfile.mkdtemp as follows: tempfile.mkdtemp(dir=self.get_temp_dir())
Returns string, the path to the unique temporary directory created for this test.
id
id()
run
run(
result=None
)
session View source
@contextlib.contextmanager
session(
graph=None, config=None, use_gpu=False, force_gpu=False
)
A context manager for a TensorFlow Session for use in executing tests. Note that this will set this session and the graph as global defaults. Use the use_gpu and force_gpu options to control where ops are run. If force_gpu is True, all ops are pinned to /device:GPU:0. Otherwise, if use_gpu is True, TensorFlow tries to run as many ops on the GPU as possible. If both force_gpu and use_gpu are False, all ops are pinned to the CPU. Example:

class MyOperatorTest(test_util.TensorFlowTestCase):
  def testMyOperator(self):
    with self.session(use_gpu=True):
      valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
      result = MyOperator(valid_input).eval()
      self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
      invalid_input = [-1.0, 2.0, 7.0]
      with self.assertRaisesOpError("negative input not supported"):
        MyOperator(invalid_input).eval()
Args
graph Optional graph to use during the returned session.
config An optional config_pb2.ConfigProto to use to configure the session.
use_gpu If True, attempt to run as many ops as possible on GPU.
force_gpu If True, pin all ops to /device:GPU:0.
Yields A Session object that should be used as a context manager to surround the graph building and execution code in a test case.
setUp View source
setUp()
Hook method for setting up the test fixture before exercising it. setUpClass
@classmethod
setUpClass()
Hook method for setting up class fixture before running tests in the class. shortDescription
shortDescription()
Formats both the test method name and the first line of its docstring. If no docstring is given, only returns the method name. This method overrides unittest.TestCase.shortDescription(), which only returns the first line of the docstring, obscuring the name of the test upon failure.
Returns
desc A short description of a test method. skipTest
skipTest(
reason
)
Skip this test. subTest
@contextlib.contextmanager
subTest(
msg=_subtest_msg_sentinel, **params
)
Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed. tearDown View source
tearDown()
Hook method for deconstructing the test fixture after testing it. tearDownClass
@classmethod
tearDownClass()
Hook method for deconstructing the class fixture after running all tests in the class. test_session View source
@contextlib.contextmanager
test_session(
graph=None, config=None, use_gpu=False, force_gpu=False
)
Use cached_session instead. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use self.session() or self.cached_session() instead. __call__
__call__(
*args, **kwds
)
Call self as a function. __eq__
__eq__(
other
)
Return self==value.
Class Variables
longMessage True
maxDiff 1600
tempfile_cleanup
doc_3982 |
Add a label near the point (x, y). Parameters
x, yfloat
The approximate location of the label.
inlinebool, default: True
If True remove the segment of the contour beneath the label.
inline_spacingint, default: 5
Space in pixels to leave on each side of label when placing inline. This spacing will be exact for labels at locations where the contour is straight, less so for labels on curved contours.
transformTransform or False, default: self.axes.transData
A transform applied to (x, y) before labeling. The default causes (x, y) to be interpreted as data coordinates. False is a synonym for IdentityTransform; i.e. (x, y) should be interpreted as display coordinates.
doc_3983 |
Kernel Density Estimation. Read more in the User Guide. Parameters
bandwidthfloat, default=1.0
The bandwidth of the kernel.
algorithm{‘kd_tree’, ‘ball_tree’, ‘auto’}, default=’auto’
The tree algorithm to use.
kernel{‘gaussian’, ‘tophat’, ‘epanechnikov’, ‘exponential’, ‘linear’, ‘cosine’}, default=’gaussian’
The kernel to use.
metricstr, default=’euclidean’
The distance metric to use. Note that not all metrics are valid with all algorithms. Refer to the documentation of BallTree and KDTree for a description of available algorithms. Note that the normalization of the density output is correct only for the Euclidean distance metric. Default is ‘euclidean’.
atolfloat, default=0
The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution.
rtolfloat, default=0
The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution.
breadth_firstbool, default=True
If true (default), use a breadth-first approach to the problem. Otherwise use a depth-first approach.
leaf_sizeint, default=40
Specify the leaf size of the underlying tree. See BallTree or KDTree for details.
metric_paramsdict, default=None
Additional parameters to be passed to the tree for use with the metric. For more information, see the documentation of BallTree or KDTree. Attributes
tree_BinaryTree instance
The tree algorithm for fast generalized N-point problems. See also
sklearn.neighbors.KDTree
K-dimensional tree for fast generalized N-point problems.
sklearn.neighbors.BallTree
Ball tree for fast generalized N-point problems. Examples Compute a gaussian kernel density estimate with a fixed bandwidth. >>> import numpy as np
>>> rng = np.random.RandomState(42)
>>> X = rng.random_sample((100, 3))
>>> kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X)
>>> log_density = kde.score_samples(X[:3])
>>> log_density
array([-1.52955942, -1.51462041, -1.60244657])
Methods
fit(X[, y, sample_weight]) Fit the Kernel Density model on the data.
get_params([deep]) Get parameters for this estimator.
sample([n_samples, random_state]) Generate random samples from the model.
score(X[, y]) Compute the total log probability density under the model.
score_samples(X) Evaluate the log density model on the data.
set_params(**params) Set the parameters of this estimator.
fit(X, y=None, sample_weight=None) [source]
Fit the Kernel Density model on the data. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.
yNone
Ignored. This parameter exists only for compatibility with Pipeline.
sample_weightarray-like of shape (n_samples,), default=None
List of sample weights attached to the data X. New in version 0.20. Returns
selfobject
Returns instance of object.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
sample(n_samples=1, random_state=None) [source]
Generate random samples from the model. Currently, this is implemented only for gaussian and tophat kernels. Parameters
n_samplesint, default=1
Number of samples to generate.
random_stateint, RandomState instance or None, default=None
Determines random number generation used to generate random samples. Pass an int for reproducible results across multiple function calls. See the Glossary entry for random_state. Returns
Xarray-like of shape (n_samples, n_features)
List of samples.
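A short usage sketch for sample(), assuming scikit-learn and NumPy are installed; the data here is arbitrary illustrative input:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 2))          # arbitrary 2-D training data

# Fit a gaussian KDE, then draw new points from the fitted density.
kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X)
samples = kde.sample(n_samples=5, random_state=0)
print(samples.shape)  # (5, 2)
```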
score(X, y=None) [source]
Compute the total log probability density under the model. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.
yNone
Ignored. This parameter exists only for compatibility with Pipeline. Returns
logprobfloat
Total log-likelihood of the data in X. This is normalized to be a probability density, so the value will be low for high-dimensional data.
score_samples(X) [source]
Evaluate the log density model on the data. Parameters
Xarray-like of shape (n_samples, n_features)
An array of points to query. Last dimension should match dimension of training data (n_features). Returns
densityndarray of shape (n_samples,)
The array of log(density) evaluations. These are normalized to be probability densities, so values will be low for high-dimensional data.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
doc_3984 |
Return the sketch parameters for the artist. Returns
tuple or None
A 3-tuple with the following elements:
scale: The amplitude of the wiggle perpendicular to the source line.
length: The length of the wiggle along the line.
randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set.
doc_3985 | Represents the C unsigned long long datatype. The constructor accepts an optional integer initializer; no overflow checking is done. | |
doc_3986 | class collections.abc.MutableMapping
ABCs for read-only and mutable mappings.
doc_3987 |
New in version 1.13. Any class, ndarray subclass or not, can define this method or set it to None in order to override the behavior of NumPy’s ufuncs. This works quite similarly to Python’s __mul__ and other binary operation routines.
ufunc is the ufunc object that was called.
method is a string indicating which Ufunc method was called (one of "__call__", "reduce", "reduceat", "accumulate", "outer", "inner").
inputs is a tuple of the input arguments to the ufunc.
kwargs is a dictionary containing the optional input arguments of the ufunc. If given, any out arguments, both positional and keyword, are passed as a tuple in kwargs. See the discussion in Universal functions (ufunc) for details. The method should return either the result of the operation, or NotImplemented if the operation requested is not implemented. If one of the input or output arguments has a __array_ufunc__ method, it is executed instead of the ufunc. If more than one of the arguments implements __array_ufunc__, they are tried in the order: subclasses before superclasses, inputs before outputs, otherwise left to right. The first routine returning something other than NotImplemented determines the result. If all of the __array_ufunc__ operations return NotImplemented, a TypeError is raised. Note We intend to re-implement numpy functions as (generalized) Ufunc, in which case it will become possible for them to be overridden by the __array_ufunc__ method. A prime candidate is matmul, which currently is not a Ufunc, but could be relatively easily be rewritten as a (set of) generalized Ufuncs. The same may happen with functions such as median, amin, and argsort. Like with some other special methods in python, such as __hash__ and __iter__, it is possible to indicate that your class does not support ufuncs by setting __array_ufunc__ = None. Ufuncs always raise TypeError when called on an object that sets __array_ufunc__ = None. The presence of __array_ufunc__ also influences how ndarray handles binary operations like arr + obj and arr
< obj when arr is an ndarray and obj is an instance of a custom class. There are two possibilities. If obj.__array_ufunc__ is present and not None, then ndarray.__add__ and friends will delegate to the ufunc machinery, meaning that arr + obj becomes np.add(arr, obj), and then add invokes obj.__array_ufunc__. This is useful if you want to define an object that acts like an array. Alternatively, if obj.__array_ufunc__ is set to None, then as a special case, special methods like ndarray.__add__ will notice this and unconditionally raise TypeError. This is useful if you want to create objects that interact with arrays via binary operations, but are not themselves arrays. For example, a units handling system might have an object m representing the “meters” unit, and want to support the syntax arr * m to represent that the array has units of “meters”, but not want to otherwise interact with arrays via ufuncs or otherwise. This can be done by setting __array_ufunc__ = None and defining __mul__ and __rmul__ methods. (Note that this means that writing an __array_ufunc__ that always returns NotImplemented is not quite the same as setting __array_ufunc__ = None: in the former case, arr + obj will raise TypeError, while in the latter case it is possible to define a __radd__ method to prevent this.) The above does not hold for in-place operators, for which ndarray never returns NotImplemented. Hence, arr += obj would always lead to a TypeError. This is because for arrays in-place operations cannot generically be replaced by a simple reverse operation. (For instance, by default, arr += obj would be translated to arr =
arr + obj, i.e., arr would be replaced, contrary to what is expected for in-place array operations.) Note If you define __array_ufunc__: If you are not a subclass of ndarray, we recommend your class define special methods like __add__ and __lt__ that delegate to ufuncs just like ndarray does. An easy way to do this is to subclass from NDArrayOperatorsMixin. If you subclass ndarray, we recommend that you put all your override logic in __array_ufunc__ and not also override special methods. This ensures the class hierarchy is determined in only one place rather than separately by the ufunc machinery and by the binary operation rules (which gives preference to special methods of subclasses; the alternative way to enforce a one-place only hierarchy, of setting __array_ufunc__ to None, would seem very unexpected and thus confusing, as then the subclass would not work at all with ufuncs).
ndarray defines its own __array_ufunc__, which evaluates the ufunc if no arguments have overrides, and returns NotImplemented otherwise. This may be useful for subclasses for which __array_ufunc__ converts any instances of its own class to ndarray: it can then pass these on to its superclass using super().__array_ufunc__(*inputs, **kwargs), and finally return the results after possible back-conversion. The advantage of this practice is that it ensures that it is possible to have a hierarchy of subclasses that extend the behaviour. See Subclassing ndarray for details. Note If a class defines the __array_ufunc__ method, this disables the __array_wrap__, __array_prepare__, __array_priority__ mechanism described below for ufuncs (which may eventually be deprecated).
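The units-style opt-out described above can be shown concretely. Setting __array_ufunc__ = None makes ndarray's binary operators return NotImplemented (so Python falls back to our __rmul__), while ufunc calls and operators without a reverse method raise TypeError. The Meters class here is an illustrative stand-in, not a real units library:

```python
import numpy as np

class Meters:
    # Opt out of ufuncs entirely: arr * m is NOT turned into np.multiply.
    __array_ufunc__ = None

    def __mul__(self, other):
        return ("meters", other)

    __rmul__ = __mul__

m = Meters()
arr = np.arange(3)
tagged = arr * m      # dispatches to Meters.__rmul__, not np.multiply
print(tagged[0])      # meters
```

Because Meters defines no __radd__, arr + m raises TypeError, exactly as the text above describes.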
class.__array_function__(func, types, args, kwargs)
New in version 1.16. Note In NumPy 1.17, the protocol is enabled by default, but can be disabled with NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0. In NumPy 1.16, you need to set the environment variable NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1 before importing NumPy to use NumPy function overrides. Eventually, expect __array_function__ to always be enabled.
func is an arbitrary callable exposed by NumPy’s public API, which was called in the form func(*args, **kwargs).
types is a collections.abc.Collection of unique argument types from the original NumPy function call that implement __array_function__. The tuple args and dict kwargs are directly passed on from the original call. As a convenience for __array_function__ implementors, types provides all argument types with an '__array_function__' attribute. This allows implementors to quickly identify cases where they should defer to __array_function__ implementations on other arguments. Implementations should not rely on the iteration order of types. Most implementations of __array_function__ will start with two checks: Is the given function something that we know how to overload? Are all arguments of a type that we know how to handle? If these conditions hold, __array_function__ should return the result from calling its implementation for func(*args, **kwargs). Otherwise, it should return the sentinel value NotImplemented, indicating that the function is not implemented by these types. There are no general requirements on the return value from __array_function__, although most sensible implementations should probably return array(s) with the same type as one of the function’s arguments. It may also be convenient to define a custom decorator (implements below) for registering __array_function__ implementations.
import numpy as np

HANDLED_FUNCTIONS = {}
class MyArray:
def __array_function__(self, func, types, args, kwargs):
if func not in HANDLED_FUNCTIONS:
return NotImplemented
# Note: this allows subclasses that don't override
# __array_function__ to handle MyArray objects
if not all(issubclass(t, MyArray) for t in types):
return NotImplemented
return HANDLED_FUNCTIONS[func](*args, **kwargs)
def implements(numpy_function):
"""Register an __array_function__ implementation for MyArray objects."""
def decorator(func):
HANDLED_FUNCTIONS[numpy_function] = func
return func
return decorator
@implements(np.concatenate)
def concatenate(arrays, axis=0, out=None):
... # implementation of concatenate for MyArray objects
@implements(np.broadcast_to)
def broadcast_to(array, shape):
... # implementation of broadcast_to for MyArray objects
Note that it is not required for __array_function__ implementations to include all of the corresponding NumPy function’s optional arguments (e.g., broadcast_to above omits the irrelevant subok argument). Optional arguments are only passed in to __array_function__ if they were explicitly used in the NumPy function call. Just like the case for builtin special methods like __add__, properly written __array_function__ methods should always return NotImplemented when an unknown type is encountered. Otherwise, it will be impossible to correctly override NumPy functions from another object if the operation also includes one of your objects. For the most part, the rules for dispatch with __array_function__ match those for __array_ufunc__. In particular: NumPy will gather implementations of __array_function__ from all specified inputs and call them in order: subclasses before superclasses, and otherwise left to right. Note that in some edge cases involving subclasses, this differs slightly from the current behavior of Python. Implementations of __array_function__ indicate that they can handle the operation by returning any value other than NotImplemented. If all __array_function__ methods return NotImplemented, NumPy will raise TypeError. If no __array_function__ methods exist, NumPy will default to calling its own implementation, intended for use on NumPy arrays. This case arises, for example, when all array-like arguments are Python numbers or lists. (NumPy arrays do have a __array_function__ method, given below, but it always returns NotImplemented if any argument other than a NumPy array subclass implements __array_function__.) One deviation from the current behavior of __array_ufunc__ is that NumPy will only call __array_function__ on the first argument of each unique type. This matches Python’s rule for calling reflected methods, and this ensures that checking overloads has acceptable performance even when there are a large number of overloaded arguments.
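The registration pattern shown above can be exercised end to end with a runnable miniature; the DiagArray class, its handled-function table, and diag_sum are hypothetical names used only for illustration:

```python
import numpy as np

HANDLED = {}

class DiagArray:
    """Hypothetical array-like that stores only a diagonal."""
    def __init__(self, diag):
        self.diag = np.asarray(diag)

    def __array_function__(self, func, types, args, kwargs):
        if func not in HANDLED:
            return NotImplemented
        if not all(issubclass(t, DiagArray) for t in types):
            return NotImplemented
        return HANDLED[func](*args, **kwargs)

def implements(np_func):
    """Register an __array_function__ implementation for DiagArray."""
    def decorator(func):
        HANDLED[np_func] = func
        return func
    return decorator

@implements(np.sum)
def diag_sum(arr):
    return arr.diag.sum()

a = DiagArray([1, 2, 3])
total = np.sum(a)   # dispatches to diag_sum via __array_function__ -> 6
```

Calling an unregistered function such as np.mean(a) makes __array_function__ return NotImplemented, so NumPy raises TypeError, exactly as described above.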
class.__array_finalize__(obj)
This method is called whenever the system internally allocates a new array from obj, where obj is a subclass (subtype) of the ndarray. It can be used to change attributes of self after construction (so as to ensure a 2-d matrix for example), or to update meta-information from the “parent.” Subclasses inherit a default implementation of this method that does nothing.
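A common use of __array_finalize__ is carrying metadata from the “parent” through views and slices; a sketch with a hypothetical MetaArray subclass:

```python
import numpy as np

class MetaArray(np.ndarray):
    """Hypothetical subclass that propagates an `info` attribute."""
    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # Called on view casting and new-from-template (e.g. slicing);
        # obj is None only during explicit construction via __new__.
        if obj is None:
            return
        self.info = getattr(obj, 'info', None)

a = MetaArray([1, 2, 3], info="parent")
b = a[1:]   # slicing triggers __array_finalize__, so b.info == "parent"
```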
class.__array_prepare__(array, context=None)
At the beginning of every ufunc, this method is called on the input object with the highest array priority, or the output object if one was specified. The output array is passed in and whatever is returned is passed to the ufunc. Subclasses inherit a default implementation of this method which simply returns the output array unmodified. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the ufunc for computation. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
class.__array_wrap__(array, context=None)
At the end of every ufunc, this method is called on the input object with the highest array priority, or the output object if one was specified. The ufunc-computed array is passed in and whatever is returned is passed to the user. Subclasses inherit a default implementation of this method, which transforms the array into a new instance of the object’s class. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
class.__array_priority__
The value of this attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. Subclasses inherit a default value of 0.0 for this attribute. Note For ufuncs, it is hoped to eventually deprecate this method in favour of __array_ufunc__.
class.__array__([dtype])
If a class (ndarray subclass or not) having the __array__ method is used as the output object of a ufunc, results will not be written to the object returned by __array__. This practice will raise a TypeError.
Matrix objects Note It is strongly advised not to use the matrix subclass. As described below, it makes writing functions that deal consistently with matrices and regular arrays very difficult. Currently, they are mainly used for interacting with scipy.sparse. We hope to provide an alternative for this use, however, and eventually remove the matrix subclass. matrix objects inherit from the ndarray and therefore, they have the same attributes and methods of ndarrays. There are six important differences of matrix objects, however, that may lead to unexpected results when you use matrices but expect them to act like arrays: Matrix objects can be created using a string notation to allow Matlab-style syntax where spaces separate columns and semicolons (‘;’) separate rows. Matrix objects are always two-dimensional. This has far-reaching implications, in that m.ravel() is still two-dimensional (with a 1 in the first dimension) and item selection returns two-dimensional objects so that sequence behavior is fundamentally different than arrays. Matrix objects over-ride multiplication to be matrix-multiplication. Make sure you understand this for functions that you may want to receive matrices. Especially in light of the fact that asanyarray(m) returns a matrix when m is a matrix.
Matrix objects over-ride power to be matrix raised to a power. The same warning about using power inside a function that uses asanyarray(…) to get an array object holds for this fact. The default __array_priority__ of matrix objects is 10.0, and therefore mixed operations with ndarrays always produce matrices.
Matrices have special attributes which make calculations easier. These are
matrix.T Returns the transpose of the matrix.
matrix.H Returns the (complex) conjugate transpose of self.
matrix.I Returns the (multiplicative) inverse of invertible self.
matrix.A Return self as an ndarray object. Warning Matrix objects over-ride multiplication, ‘*’, and power, ‘**’, to be matrix-multiplication and matrix power, respectively. If your subroutine can accept sub-classes and you do not convert to base-class arrays, then you must use the ufuncs multiply and power to be sure that you are performing the correct operation for all inputs. The matrix class is a Python subclass of the ndarray and can be used as a reference for how to construct your own subclass of the ndarray. Matrices can be created from other matrices, strings, and anything else that can be converted to an ndarray. The name “mat” is an alias for “matrix” in NumPy.
matrix(data[, dtype, copy])
Note It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.
asmatrix(data[, dtype]) Interpret the input as a matrix.
bmat(obj[, ldict, gdict]) Build a matrix object from a string, nested sequence, or array. Example 1: Matrix creation from a string >>> a = np.mat('1 2 3; 4 5 3')
>>> print((a*a.T).I)
[[ 0.29239766 -0.13450292]
[-0.13450292 0.08187135]]
Example 2: Matrix creation from nested sequence >>> np.mat([[1,5,10],[1.0,3,4j]])
matrix([[ 1.+0.j, 5.+0.j, 10.+0.j],
[ 1.+0.j, 3.+0.j, 0.+4.j]])
Example 3: Matrix creation from an array >>> np.mat(np.random.rand(3,3)).T
matrix([[4.17022005e-01, 3.02332573e-01, 1.86260211e-01],
[7.20324493e-01, 1.46755891e-01, 3.45560727e-01],
[1.14374817e-04, 9.23385948e-02, 3.96767474e-01]])
Memory-mapped file arrays Memory-mapped files are useful for reading and/or modifying small segments of a large file with regular layout, without reading the entire file into memory. A simple subclass of the ndarray uses a memory-mapped file for the data buffer of the array. For small files, the overhead of reading the entire file into memory is typically not significant; however, for large files using memory mapping can save considerable resources. Memory-mapped-file arrays have one additional method (besides those they inherit from the ndarray): .flush() which must be called manually by the user to ensure that any changes to the array actually get written to disk.
memmap(filename[, dtype, mode, offset, ...]) Create a memory-map to an array stored in a binary file on disk.
memmap.flush() Write any changes in the array to the file on disk. Example: >>> a = np.memmap('newfile.dat', dtype=float, mode='w+', shape=1000)
>>> a[10] = 10.0
>>> a[30] = 30.0
>>> del a
>>> b = np.fromfile('newfile.dat', dtype=float)
>>> print(b[10], b[30])
10.0 30.0
>>> a = np.memmap('newfile.dat', dtype=float)
>>> print(a[10], a[30])
10.0 30.0
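The example above relies on del a to write changes out; flush() can instead be called explicitly while keeping the memmap usable. A sketch using a temporary file:

```python
import os
import tempfile
import numpy as np

# Write through a memmap, flush explicitly, then read back from disk.
path = os.path.join(tempfile.mkdtemp(), 'mm.dat')
a = np.memmap(path, dtype=float, mode='w+', shape=100)
a[7] = 7.0
a.flush()                       # changes are now on disk; `a` is still usable
b = np.fromfile(path, dtype=float)
```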
Character arrays (numpy.char) See also Creating character arrays (numpy.char) Note The chararray class exists for backwards compatibility with Numarray, it is not recommended for new development. Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of dtype object_, bytes_ or str_, and use the free functions in the numpy.char module for fast vectorized string operations. These are enhanced arrays of either str_ type or bytes_ type. These arrays inherit from the ndarray, but specially-define the operations +, *, and % on a (broadcasting) element-by-element basis. These operations are not available on the standard ndarray of character type. In addition, the chararray has all of the standard str (and bytes) methods, executing them on an element-by-element basis. Perhaps the easiest way to create a chararray is to use self.view(chararray) where self is an ndarray of str or unicode data-type. However, a chararray can also be created using the numpy.chararray constructor, or via the numpy.char.array function:
chararray(shape[, itemsize, unicode, ...]) Provides a convenient view on arrays of string and unicode values.
core.defchararray.array(obj[, itemsize, ...]) Create a chararray. Another difference with the standard ndarray of str data-type is that the chararray inherits the feature introduced by Numarray that white-space at the end of any element in the array will be ignored on item retrieval and comparison operations. Record arrays (numpy.rec) See also Creating record arrays (numpy.rec), Data type routines, Data type objects (dtype). NumPy provides the recarray class which allows accessing the fields of a structured array as attributes, and a corresponding scalar data type object record.
recarray(shape[, dtype, buf, offset, ...]) Construct an ndarray that allows field access using attributes.
record A data-type scalar that allows field access as attribute lookup. Masked arrays (numpy.ma) See also Masked arrays Standard container class For backward compatibility and as a standard “container” class, the UserArray from Numeric has been brought over to NumPy and named numpy.lib.user_array.container. The container class is a Python class whose self.array attribute is an ndarray. Multiple inheritance is probably easier with numpy.lib.user_array.container than with the ndarray itself and so it is included by default. It is not documented here beyond mentioning its existence because you are encouraged to use the ndarray class directly if you can.
numpy.lib.user_array.container(data[, ...]) Standard container-class for easy multiple-inheritance. Array Iterators Iterators are a powerful concept for array processing. Essentially, iterators implement a generalized for-loop. If myiter is an iterator object, then the Python code: for val in myiter:
...
some code involving val
...
calls val = next(myiter) repeatedly until StopIteration is raised by the iterator. There are several ways to iterate over an array that may be useful: default iteration, flat iteration, and \(N\)-dimensional enumeration. Default iteration The default iterator of an ndarray object is the default Python iterator of a sequence type. Thus, when the array object itself is used as an iterator, the default behavior is equivalent to: for i in range(arr.shape[0]):
val = arr[i]
This default iterator selects a sub-array of dimension \(N-1\) from the array. This can be a useful construct for defining recursive algorithms. To loop over the entire array requires \(N\) for-loops. >>> a = np.arange(24).reshape(3,2,4)+10
>>> for val in a:
... print('item:', val)
item: [[10 11 12 13]
[14 15 16 17]]
item: [[18 19 20 21]
[22 23 24 25]]
item: [[26 27 28 29]
[30 31 32 33]]
Flat iteration
ndarray.flat A 1-D iterator over the array. As mentioned previously, the flat attribute of ndarray objects returns an iterator that will cycle over the entire array in C-style contiguous order. >>> for i, val in enumerate(a.flat):
... if i%5 == 0: print(i, val)
0 10
5 15
10 20
15 25
20 30
Here, I’ve used the built-in enumerate iterator to return the iterator index as well as the value. N-dimensional enumeration
ndenumerate(arr) Multidimensional index iterator. Sometimes it may be useful to get the N-dimensional index while iterating. The ndenumerate iterator can achieve this. >>> for i, val in np.ndenumerate(a):
... if sum(i)%5 == 0: print(i, val)
(0, 0, 0) 10
(1, 1, 3) 25
(2, 0, 3) 29
(2, 1, 2) 32
Iterator for broadcasting
broadcast Produce an object that mimics broadcasting. The general concept of broadcasting is also available from Python using the broadcast iterator. This object takes \(N\) objects as inputs and returns an iterator that returns tuples providing each of the input sequence elements in the broadcasted result. >>> for val in np.broadcast([[1,0],[2,3]],[0,1]):
... print(val)
(1, 0)
(0, 1)
(2, 0)
(3, 1)
doc_3988
Return the values of the located ticks given vmin and vmax. Note To get tick locations with the vmin and vmax values defined automatically for the associated axis simply call the Locator instance: >>> print(type(loc))
<type 'Locator'>
>>> print(loc())
[1, 2, 3, 4]
doc_3989
router = routers.SimpleRouter()
router.register(r'users', UserViewSet)
router.register(r'accounts', AccountViewSet)
urlpatterns = router.urls
There are two mandatory arguments to the register() method:
prefix - The URL prefix to use for this set of routes.
viewset - The viewset class. Optionally, you may also specify an additional argument:
basename - The base to use for the URL names that are created. If unset the basename will be automatically generated based on the queryset attribute of the viewset, if it has one. Note that if the viewset does not include a queryset attribute then you must set basename when registering the viewset. The example above would generate the following URL patterns: URL pattern: ^users/$ Name: 'user-list'
URL pattern: ^users/{pk}/$ Name: 'user-detail'
URL pattern: ^accounts/$ Name: 'account-list'
URL pattern: ^accounts/{pk}/$ Name: 'account-detail'
Note: The basename argument is used to specify the initial part of the view name pattern. In the example above, that's the user or account part. Typically you won't need to specify the basename argument, but if you have a viewset where you've defined a custom get_queryset method, then the viewset may not have a .queryset attribute set. If you try to register that viewset you'll see an error like this: 'basename' argument not specified, and could not automatically determine the name from the viewset, as it does not have a '.queryset' attribute.
This means you'll need to explicitly set the basename argument when registering the viewset, as it could not be automatically determined from the model name. Using include with routers The .urls attribute on a router instance is simply a standard list of URL patterns. There are a number of different styles for how you can include these URLs. For example, you can append router.urls to a list of existing views... router = routers.SimpleRouter()
router.register(r'users', UserViewSet)
router.register(r'accounts', AccountViewSet)
urlpatterns = [
path('forgot-password/', ForgotPasswordFormView.as_view()),
]
urlpatterns += router.urls
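Relatedly, when a viewset overrides get_queryset() and has no queryset attribute (the basename situation described earlier), registration must name the basename explicitly. A hedged sketch, where CustomQuerysetViewSet is a hypothetical viewset:

```python
router = routers.SimpleRouter()
# CustomQuerysetViewSet (hypothetical) defines get_queryset() and sets no
# .queryset attribute, so the basename cannot be inferred automatically.
router.register(r'users', CustomQuerysetViewSet, basename='user')
urlpatterns = router.urls
```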
Alternatively you can use Django's include function, like so... urlpatterns = [
path('forgot-password', ForgotPasswordFormView.as_view()),
path('', include(router.urls)),
]
You may use include with an application namespace: urlpatterns = [
path('forgot-password/', ForgotPasswordFormView.as_view()),
path('api/', include((router.urls, 'app_name'))),
]
Or both an application and instance namespace: urlpatterns = [
path('forgot-password/', ForgotPasswordFormView.as_view()),
path('api/', include((router.urls, 'app_name'), namespace='instance_name')),
]
See Django's URL namespaces docs and the include API reference for more details. Note: If using namespacing with hyperlinked serializers you'll also need to ensure that any view_name parameters on the serializers correctly reflect the namespace. In the examples above you'd need to include a parameter such as view_name='app_name:user-detail' for serializer fields hyperlinked to the user detail view. The automatic view_name generation uses a pattern like %(model_name)-detail. Unless your model names actually clash you may be better off not namespacing your Django REST Framework views when using hyperlinked serializers. Routing for extra actions A viewset may mark extra actions for routing by decorating a method with the @action decorator. These extra actions will be included in the generated routes. For example, given the set_password method on the UserViewSet class: from myapp.permissions import IsAdminOrIsSelf
from rest_framework.decorators import action
class UserViewSet(ModelViewSet):
...
@action(methods=['post'], detail=True, permission_classes=[IsAdminOrIsSelf])
def set_password(self, request, pk=None):
...
The following route would be generated: URL pattern: ^users/{pk}/set_password/$
URL name: 'user-set-password'
By default, the URL pattern is based on the method name, and the URL name is the combination of the ViewSet.basename and the hyphenated method name. If you don't want to use the defaults for either of these values, you can instead provide the url_path and url_name arguments to the @action decorator. For example, if you want to change the URL for our custom action to ^users/{pk}/change-password/$, you could write: from myapp.permissions import IsAdminOrIsSelf
from rest_framework.decorators import action
class UserViewSet(ModelViewSet):
...
@action(methods=['post'], detail=True, permission_classes=[IsAdminOrIsSelf],
url_path='change-password', url_name='change_password')
def set_password(self, request, pk=None):
...
The above example would now generate the following URL pattern: URL path: ^users/{pk}/change-password/$
URL name: 'user-change_password'
API Guide SimpleRouter This router includes routes for the standard set of list, create, retrieve, update, partial_update and destroy actions. The viewset can also mark additional methods to be routed, using the @action decorator.
URL Style                      HTTP Method                             Action                                     URL Name
{prefix}/                      GET                                     list                                       {basename}-list
                               POST                                    create
{prefix}/{url_path}/           GET, or as specified by `methods`       `@action(detail=False)` decorated method   {basename}-{url_name}
{prefix}/{lookup}/             GET                                     retrieve                                   {basename}-detail
                               PUT                                     update
                               PATCH                                   partial_update
                               DELETE                                  destroy
{prefix}/{lookup}/{url_path}/  GET, or as specified by `methods`       `@action(detail=True)` decorated method    {basename}-{url_name}
By default the URLs created by SimpleRouter are appended with a trailing slash. This behavior can be modified by setting the trailing_slash argument to False when instantiating the router. For example: router = SimpleRouter(trailing_slash=False)
Trailing slashes are conventional in Django, but are not used by default in some other frameworks such as Rails. Which style you choose to use is largely a matter of preference, although some javascript frameworks may expect a particular routing style. The router will match lookup values containing any characters except slashes and period characters. For a more restrictive (or lenient) lookup pattern, set the lookup_value_regex attribute on the viewset. For example, you can limit the lookup to valid UUIDs: class MyModelViewSet(mixins.RetrieveModelMixin, viewsets.GenericViewSet):
lookup_field = 'my_model_id'
lookup_value_regex = '[0-9a-f]{32}'
DefaultRouter This router is similar to SimpleRouter as above, but additionally includes a default API root view, that returns a response containing hyperlinks to all the list views. It also generates routes for optional .json style format suffixes.
URL Style                               HTTP Method                        Action                                     URL Name
[.format]                               GET                                automatically generated root view          api-root
{prefix}/[.format]                      GET                                list                                       {basename}-list
                                        POST                               create
{prefix}/{url_path}/[.format]           GET, or as specified by `methods`  `@action(detail=False)` decorated method   {basename}-{url_name}
{prefix}/{lookup}/[.format]             GET                                retrieve                                   {basename}-detail
                                        PUT                                update
                                        PATCH                              partial_update
                                        DELETE                             destroy
{prefix}/{lookup}/{url_path}/[.format]  GET, or as specified by `methods`  `@action(detail=True)` decorated method    {basename}-{url_name}
As with SimpleRouter the trailing slashes on the URL routes can be removed by setting the trailing_slash argument to False when instantiating the router. router = DefaultRouter(trailing_slash=False)
Custom Routers Implementing a custom router isn't something you'd need to do very often, but it can be useful if you have specific requirements about how the URLs for your API are structured. Doing so allows you to encapsulate the URL structure in a reusable way that ensures you don't have to write your URL patterns explicitly for each new view. The simplest way to implement a custom router is to subclass one of the existing router classes. The .routes attribute is used to template the URL patterns that will be mapped to each viewset. The .routes attribute is a list of Route named tuples. The arguments to the Route named tuple are: url: A string representing the URL to be routed. May include the following format strings:
{prefix} - The URL prefix to use for this set of routes.
{lookup} - The lookup field used to match against a single instance.
{trailing_slash} - Either a '/' or an empty string, depending on the trailing_slash argument. mapping: A mapping of HTTP method names to the view methods name: The name of the URL as used in reverse calls. May include the following format string:
{basename} - The base to use for the URL names that are created. initkwargs: A dictionary of any additional arguments that should be passed when instantiating the view. Note that the detail, basename, and suffix arguments are reserved for viewset introspection and are also used by the browsable API to generate the view name and breadcrumb links. Customizing dynamic routes You can also customize how the @action decorator is routed. Include the DynamicRoute named tuple in the .routes list, setting the detail argument as appropriate for the list-based and detail-based routes. In addition to detail, the arguments to DynamicRoute are: url: A string representing the URL to be routed. May include the same format strings as Route, and additionally accepts the {url_path} format string. name: The name of the URL as used in reverse calls. May include the following format strings:
{basename} - The base to use for the URL names that are created.
{url_name} - The url_name provided to the @action. initkwargs: A dictionary of any additional arguments that should be passed when instantiating the view. Example The following example will only route to the list and retrieve actions, and does not use the trailing slash convention. from rest_framework.routers import Route, DynamicRoute, SimpleRouter
class CustomReadOnlyRouter(SimpleRouter):
"""
A router for read-only APIs, which doesn't use trailing slashes.
"""
routes = [
Route(
url=r'^{prefix}$',
mapping={'get': 'list'},
name='{basename}-list',
detail=False,
initkwargs={'suffix': 'List'}
),
Route(
url=r'^{prefix}/{lookup}$',
mapping={'get': 'retrieve'},
name='{basename}-detail',
detail=True,
initkwargs={'suffix': 'Detail'}
),
DynamicRoute(
url=r'^{prefix}/{lookup}/{url_path}$',
name='{basename}-{url_name}',
detail=True,
initkwargs={}
)
]
Let's take a look at the routes our CustomReadOnlyRouter would generate for a simple viewset. views.py: class UserViewSet(viewsets.ReadOnlyModelViewSet):
"""
A viewset that provides the standard actions
"""
queryset = User.objects.all()
serializer_class = UserSerializer
lookup_field = 'username'
@action(detail=True)
def group_names(self, request, pk=None):
"""
Returns a list of all the group names that the given
user belongs to.
"""
user = self.get_object()
groups = user.groups.all()
return Response([group.name for group in groups])
urls.py: router = CustomReadOnlyRouter()
router.register('users', UserViewSet)
urlpatterns = router.urls
The following mappings would be generated...
URL                              HTTP Method   Action        URL Name
/users                           GET           list          user-list
/users/{username}                GET           retrieve      user-detail
/users/{username}/group_names    GET           group_names   user-group-names
For another example of setting the .routes attribute, see the source code for the SimpleRouter class. Advanced custom routers If you want to provide totally custom behavior, you can override BaseRouter and override the get_urls(self) method. The method should inspect the registered viewsets and return a list of URL patterns. The registered prefix, viewset and basename tuples may be inspected by accessing the self.registry attribute. You may also want to override the get_default_basename(self, viewset) method, or else always explicitly set the basename argument when registering your viewsets with the router. Third Party Packages The following third party packages are also available. DRF Nested Routers The drf-nested-routers package provides routers and relationship fields for working with nested resources. ModelRouter (wq.db.rest) The wq.db package provides an advanced ModelRouter class (and singleton instance) that extends DefaultRouter with a register_model() API. Much like Django's admin.site.register, the only required argument to rest.router.register_model is a model class. Reasonable defaults for a url prefix, serializer, and viewset will be inferred from the model and global configuration. from wq.db import rest
from myapp.models import MyModel
rest.router.register_model(MyModel)
DRF-extensions The DRF-extensions package provides routers for creating nested viewsets, collection level controllers with customizable endpoint names. routers.py
doc_3990 | Optional. Either True or False. Default is True. Specifies whether files in the specified location should be included. Either this or allow_folders must be True.
doc_3991 | True when verbose output is enabled. Should be checked when more detailed information is desired about a running test. verbose is set by test.regrtest.
doc_3992 | See Migration guide for more details. tf.compat.v1.python_io.tf_record_iterator
tf.compat.v1.io.tf_record_iterator(
path, options=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use eager execution and: tf.data.TFRecordDataset(path)
Args
path The path to the TFRecords file.
options (optional) A TFRecordOptions object.
Returns An iterator of serialized TFRecords.
Raises
IOError If path cannot be opened for reading.
doc_3993 | tf.summary.create_noop_writer()
This is useful as a placeholder in code that expects a context manager.
doc_3994 | Send signal sig to the process pid. Constants for the specific signals available on the host platform are defined in the signal module. Windows: The signal.CTRL_C_EVENT and signal.CTRL_BREAK_EVENT signals are special signals which can only be sent to console processes which share a common console window, e.g., some subprocesses. Any other value for sig will cause the process to be unconditionally killed by the TerminateProcess API, and the exit code will be set to sig. The Windows version of kill() additionally takes process handles to be killed. See also signal.pthread_kill(). Raises an auditing event os.kill with arguments pid, sig. New in version 3.2: Windows support.
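A small POSIX sketch of the behavior described above, using the standard sleep utility (on Windows the semantics differ as noted):

```python
import os
import signal
import subprocess

# Start a child process, terminate it with os.kill, and inspect the status.
proc = subprocess.Popen(['sleep', '60'])
os.kill(proc.pid, signal.SIGTERM)
ret = proc.wait()   # on POSIX, a negative value: terminated by that signal
```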
doc_3995
Similar to @classmethod, the @classproperty decorator converts the result of a method with a single cls argument into a property that can be accessed directly from the class.
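The descriptor pattern behind such a decorator can be sketched in plain Python; this is an illustrative stand-in, not Django's actual implementation:

```python
class classproperty:
    """Minimal descriptor: call the wrapped method with the owning class."""
    def __init__(self, method):
        self.fget = method

    def __get__(self, instance, cls=None):
        return self.fget(cls)

class Config:
    _name = "demo"

    @classproperty
    def name(cls):
        return cls._name

value = Config.name   # accessed directly on the class -> "demo"
```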
doc_3996 | Return the number of items currently in the history. (This is different from get_history_length(), which returns the maximum number of lines that will be written to a history file.)
doc_3997
Number of elements in the array. Equal to np.prod(a.shape), i.e., the product of the array’s dimensions. Notes a.size returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested np.prod(a.shape), which returns an instance of np.int_), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. Examples >>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30
doc_3998
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters
bbool
doc_3999
Bases: skimage.transform._geometric.ProjectiveTransform 2D affine transformation. Has the following form:
X = a0*x + a1*y + a2
  = sx*x*cos(rotation) - sy*y*sin(rotation + shear) + a2
Y = b0*x + b1*y + b2
  = sx*x*sin(rotation) + sy*y*cos(rotation + shear) + b2
where sx and sy are scale factors in the x and y directions, and the homogeneous transformation matrix is:
[[a0 a1 a2]
 [b0 b1 b2]
 [0  0  1]]
Parameters
matrix(3, 3) array, optional
Homogeneous transformation matrix.
scale{s as float or (sx, sy) as array, list or tuple}, optional
Scale factor(s). If a single value, it will be assigned to both sx and sy. New in version 0.17: Added support for supplying a single scalar value.
rotationfloat, optional
Rotation angle in counter-clockwise direction as radians.
shearfloat, optional
Shear angle in counter-clockwise direction as radians.
translation(tx, ty) as array, list or tuple, optional
Translation parameters. Attributes
params(3, 3) array
Homogeneous transformation matrix.
__init__(matrix=None, scale=None, rotation=None, shear=None, translation=None) [source]
Initialize self. See help(type(self)) for accurate signature.
property rotation
property scale
property shear
property translation
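The documented form can be checked with a small pure-NumPy sketch that builds the homogeneous matrix from the parameters; the affine_matrix helper name is illustrative, not part of skimage:

```python
import numpy as np

def affine_matrix(scale=(1, 1), rotation=0.0, shear=0.0, translation=(0, 0)):
    """Build the 3x3 homogeneous affine matrix in the documented form."""
    sx, sy = scale
    tx, ty = translation
    return np.array([
        [sx * np.cos(rotation), -sy * np.sin(rotation + shear), tx],
        [sx * np.sin(rotation),  sy * np.cos(rotation + shear), ty],
        [0, 0, 1],
    ])

M = affine_matrix(scale=(2, 3), translation=(5, 7))
pt = M @ np.array([1, 1, 1])   # maps (1, 1) to (2*1 + 5, 3*1 + 7) = (7, 10)
```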