doc_27400
Add an etag for the current response if there is none yet. Changed in version 2.0: SHA-1 is used to generate the value. MD5 may not be available in some environments. Parameters overwrite (bool) – weak (bool) – Return type None
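The behaviour described above can be sketched in plain Python. This is a hypothetical helper (`make_etag` is not Werkzeug's actual implementation), only illustrating the SHA-1 hashing mentioned in the changelog:

```python
import hashlib

def make_etag(body: bytes, weak: bool = False) -> str:
    # Hash the response body with SHA-1 (as in Werkzeug 2.0+, since MD5
    # may be unavailable in some environments) and quote the digest.
    digest = hashlib.sha1(body).hexdigest()
    etag = '"%s"' % digest
    # A weak ETag is marked with the W/ prefix.
    return "W/" + etag if weak else etag

print(make_etag(b"hello"))
print(make_etag(b"hello", weak=True))
```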
doc_27401
The number of threads in the thread-pool used by ProcessGroupAgent.
doc_27402
The number of days in the month.
doc_27403
Get the offset for the location of 0 in radians.
doc_27404
tf.compat.v1.data.experimental.sample_from_datasets( datasets, weights=None, seed=None ) Args datasets A list of tf.data.Dataset objects with compatible structure. weights (Optional.) A list of len(datasets) floating-point values where weights[i] represents the probability with which an element should be sampled from datasets[i], or a tf.data.Dataset object where each element is such a list. Defaults to a uniform distribution across datasets. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns A dataset that interleaves elements from datasets at random, according to weights if provided, otherwise with uniform probability. Raises TypeError If the datasets or weights arguments have the wrong type. ValueError If the weights argument is specified and does not match the length of the datasets element.
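The sampling semantics can be illustrated without TensorFlow. This is a pure-Python sketch (`sample_from_iterables` is a made-up helper, not the tf.data implementation): each step picks one still-unexhausted input according to `weights` and yields its next element.

```python
import random

def sample_from_iterables(iterables, weights=None, seed=None):
    # Repeatedly choose one of the remaining inputs with probability
    # proportional to its weight and yield its next element; drop an
    # input (and its weight) once it is exhausted.
    rng = random.Random(seed)
    iters = [iter(it) for it in iterables]
    weights = list(weights) if weights else [1.0] * len(iters)
    while iters:
        (i,) = rng.choices(range(len(iters)), weights=weights)
        try:
            yield next(iters[i])
        except StopIteration:
            del iters[i], weights[i]

mixed = list(sample_from_iterables([[1, 2, 3], "abc"], weights=[0.5, 0.5], seed=0))
```

Note that within each input the original order is preserved; only the interleaving between inputs is random.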
doc_27405
Return the Discrete Fourier Transform sample frequencies. The returned float array f contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. Given a window length n and a sample spacing d: f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n) if n is even f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n) if n is odd Parameters nint Window length. dscalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. Returns fndarray Array of length n containing the sample frequencies. Examples >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=float) >>> fourier = np.fft.fft(signal) >>> n = signal.size >>> timestep = 0.1 >>> freq = np.fft.fftfreq(n, d=timestep) >>> freq array([ 0. , 1.25, 2.5 , ..., -3.75, -2.5 , -1.25])
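The two formulas above can be written out directly. This is a pure-Python sketch that mirrors `np.fft.fftfreq` for both even and odd `n` (floating-point rounding may differ from NumPy in the last bit):

```python
def fftfreq(n, d=1.0):
    # Non-negative bin centres first, then the negative ones,
    # all scaled by 1 / (d * n), exactly as in the formulas above.
    half = (n - 1) // 2 + 1            # number of non-negative frequencies
    pos = [k / (d * n) for k in range(half)]
    neg = [k / (d * n) for k in range(-(n // 2), 0)]
    return pos + neg

print(fftfreq(4, d=0.25))   # even n
print(fftfreq(5, d=0.2))    # odd n
```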
doc_27406
Represent a categorical variable in classic R / S-plus fashion. Categoricals can take on only a limited, and usually fixed, number of possible values (categories). In contrast to statistical categorical variables, a Categorical might have an order, but numerical operations (additions, divisions, …) are not possible. All values of the Categorical are either in categories or np.nan. Assigning values outside of categories will raise a ValueError. Order is defined by the order of the categories, not lexical order of the values. Parameters values:list-like The values of the categorical. If categories are given, values not in categories will be replaced with NaN. categories:Index-like (unique), optional The unique categories for this categorical. If not given, the categories are assumed to be the unique values of values (sorted, if possible, otherwise in the order in which they appear). ordered:bool, default False Whether or not this categorical is treated as an ordered categorical. If True, the resulting categorical will be ordered. An ordered categorical respects, when sorted, the order of its categories attribute (which in turn is the categories argument, if provided). dtype:CategoricalDtype An instance of CategoricalDtype to use for this categorical. Raises ValueError If the categories do not validate. TypeError If an explicit ordered=True is given but no categories and the values are not sortable. See also CategoricalDtype Type for categorical data. CategoricalIndex An Index with an underlying Categorical. Notes See the user guide for more. Examples >>> pd.Categorical([1, 2, 3, 1, 2, 3]) [1, 2, 3, 1, 2, 3] Categories (3, int64): [1, 2, 3] >>> pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c']) ['a', 'b', 'c', 'a', 'b', 'c'] Categories (3, object): ['a', 'b', 'c'] Missing values are not included as a category.
>>> c = pd.Categorical([1, 2, 3, 1, 2, 3, np.nan]) >>> c [1, 2, 3, 1, 2, 3, NaN] Categories (3, int64): [1, 2, 3] However, their presence is indicated in the codes attribute by code -1. >>> c.codes array([ 0, 1, 2, 0, 1, 2, -1], dtype=int8) Ordered Categoricals can be sorted according to the custom order of the categories and can have a min and max value. >>> c = pd.Categorical(['a', 'b', 'c', 'a', 'b', 'c'], ordered=True, ... categories=['c', 'b', 'a']) >>> c ['a', 'b', 'c', 'a', 'b', 'c'] Categories (3, object): ['c' < 'b' < 'a'] >>> c.min() 'c' Attributes categories The categories of this categorical. codes The category codes of this categorical. ordered Whether the categories have an ordered relationship. dtype The CategoricalDtype for this instance. Methods from_codes(codes[, categories, ordered, dtype]) Make a Categorical type from codes and categories or dtype. __array__([dtype]) The numpy array interface.
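The categories/codes split behind the attributes above can be sketched without pandas. `encode_categorical` is a hypothetical helper, using `None` to stand in for NaN:

```python
def encode_categorical(values, categories=None):
    # Each value is stored as an integer code into the unique
    # categories list; anything not in categories (including None,
    # our stand-in for NaN) gets the sentinel code -1.
    if categories is None:
        categories = sorted({v for v in values if v is not None})
    index = {c: i for i, c in enumerate(categories)}
    codes = [index.get(v, -1) for v in values]
    return categories, codes

cats, codes = encode_categorical([1, 2, 3, 1, 2, 3, None])
```

Passing an explicit `categories` list fixes both the set of allowed values and, for an ordered categorical, their sort order.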
doc_27407
In-place version of igamma().
doc_27408
Represents the C size_t datatype.
doc_27409
Note: Functions taking Tensor arguments can also take anything accepted by tf.convert_to_tensor. Note: Elementwise binary operations in TensorFlow follow numpy-style broadcasting. TensorFlow provides a variety of math functions including: Basic arithmetic operators and trigonometric functions. Special math functions (like: tf.math.igamma and tf.math.zeta) Complex number functions (like: tf.math.imag and tf.math.angle) Reductions and scans (like: tf.math.reduce_mean and tf.math.cumsum) Segment functions (like: tf.math.segment_sum) See: tf.linalg for matrix and tensor functions. About Segmentation TensorFlow provides several operations that you can use to perform common math computations on tensor segments. Here a segmentation is a partitioning of a tensor along the first dimension, i.e. it defines a mapping from the first dimension onto segment_ids. The segment_ids tensor should be the size of the first dimension, d0, with consecutive IDs in the range 0 to k, where k<d0. In particular, a segmentation of a matrix tensor is a mapping of rows to segments. For example: c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) tf.math.segment_sum(c, tf.constant([0, 0, 1])) # ==> [[0 0 0 0] # [5 6 7 8]] The standard segment_* functions assert that the segment indices are sorted. If you have unsorted indices use the equivalent unsorted_segment_ function. These functions take an additional argument num_segments so that the output tensor can be efficiently allocated. c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]]) tf.math.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2) # ==> [[ 6, 8, 10, 12], # [-1, -2, -3, -4]] Modules special module: Public API for tf.math.special namespace. Functions abs(...): Computes the absolute value of a tensor. accumulate_n(...): Returns the element-wise sum of a list of tensors. acos(...): Computes acos of x element-wise. acosh(...): Computes inverse hyperbolic cosine of x element-wise. add(...): Returns x + y element-wise. 
add_n(...): Adds all input tensors element-wise. angle(...): Returns the element-wise argument of a complex (or real) tensor. argmax(...): Returns the index with the largest value across axes of a tensor. (deprecated arguments) argmin(...): Returns the index with the smallest value across axes of a tensor. (deprecated arguments) asin(...): Computes the trigonometric inverse sine of x element-wise. asinh(...): Computes inverse hyperbolic sine of x element-wise. atan(...): Computes the trigonometric inverse tangent of x element-wise. atan2(...): Computes arctangent of y/x element-wise, respecting signs of the arguments. atanh(...): Computes inverse hyperbolic tangent of x element-wise. bessel_i0(...): Computes the Bessel i0 function of x element-wise. bessel_i0e(...): Computes the Bessel i0e function of x element-wise. bessel_i1(...): Computes the Bessel i1 function of x element-wise. bessel_i1e(...): Computes the Bessel i1e function of x element-wise. betainc(...): Compute the regularized incomplete beta integral \(I_x(a, b)\). bincount(...): Counts the number of occurrences of each value in an integer array. ceil(...): Return the ceiling of the input, element-wise. confusion_matrix(...): Computes the confusion matrix from predictions and labels. conj(...): Returns the complex conjugate of a complex number. cos(...): Computes cos of x element-wise. cosh(...): Computes hyperbolic cosine of x element-wise. count_nonzero(...): Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) cumprod(...): Compute the cumulative product of the tensor x along axis. cumsum(...): Compute the cumulative sum of the tensor x along axis. cumulative_logsumexp(...): Compute the cumulative log-sum-exp of the tensor x along axis. digamma(...): Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise. divide(...): Computes Python style division of x by y. 
divide_no_nan(...): Computes a safe divide which returns 0 if the y is zero. equal(...): Returns the truth value of (x == y) element-wise. erf(...): Computes the Gauss error function of x element-wise. erfc(...): Computes the complementary error function of x element-wise. erfcinv(...): Computes the inverse of complementary error function. erfinv(...): Compute inverse error function. exp(...): Computes exponential of x element-wise. \(y = e^x\). expm1(...): Computes exp(x) - 1 element-wise. floor(...): Returns element-wise largest integer not greater than x. floordiv(...): Divides x / y elementwise, rounding toward the most negative integer. floormod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is true, the result is consistent with a flooring divide, following Python semantics. greater(...): Returns the truth value of (x > y) element-wise. greater_equal(...): Returns the truth value of (x >= y) element-wise. igamma(...): Compute the lower regularized incomplete Gamma function P(a, x). igammac(...): Compute the upper regularized incomplete Gamma function Q(a, x). imag(...): Returns the imaginary part of a complex (or real) tensor. in_top_k(...): Says whether the targets are in the top K predictions. invert_permutation(...): Computes the inverse permutation of a tensor. is_finite(...): Returns which elements of x are finite. is_inf(...): Returns which elements of x are Inf. is_nan(...): Returns which elements of x are NaN. is_non_decreasing(...): Returns True if x is non-decreasing. is_strictly_increasing(...): Returns True if x is strictly increasing. l2_normalize(...): Normalizes along dimension axis using an L2 norm. (deprecated arguments) lbeta(...): Computes \(ln(|Beta(x)|)\), reducing along the last dimension. less(...): Returns the truth value of (x < y) element-wise. less_equal(...): Returns the truth value of (x <= y) element-wise. lgamma(...): Computes the log of the absolute value of Gamma(x) element-wise. log(...): Computes natural logarithm of x element-wise. 
log1p(...): Computes natural logarithm of (1 + x) element-wise. log_sigmoid(...): Computes log sigmoid of x element-wise. log_softmax(...): Computes log softmax activations. (deprecated arguments) logical_and(...): Logical AND function. logical_not(...): Returns the truth value of NOT x element-wise. logical_or(...): Returns the truth value of x OR y element-wise. logical_xor(...): Logical XOR function. maximum(...): Returns the max of x and y (i.e. x > y ? x : y) element-wise. minimum(...): Returns the min of x and y (i.e. x < y ? x : y) element-wise. mod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is true, the result is consistent with a flooring divide, following Python semantics. multiply(...): Returns an element-wise x * y. multiply_no_nan(...): Computes the product of x and y and returns 0 if the y is zero, even if x is NaN or infinite. ndtri(...): Compute quantile of Standard Normal. negative(...): Computes numerical negative value element-wise. nextafter(...): Returns the next representable value of x1 in the direction of x2, element-wise. not_equal(...): Returns the truth value of (x != y) element-wise. polygamma(...): Compute the polygamma function \(\psi^{(n)}(x)\). polyval(...): Computes the elementwise value of a polynomial. pow(...): Computes the power of one value to another. real(...): Returns the real part of a complex (or real) tensor. reciprocal(...): Computes the reciprocal of x element-wise. reciprocal_no_nan(...): Performs a safe reciprocal operation, element wise. reduce_all(...): Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) reduce_any(...): Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) reduce_euclidean_norm(...): Computes the Euclidean norm of elements across dimensions of a tensor. reduce_logsumexp(...): Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) reduce_max(...): Computes the maximum of elements across dimensions of a tensor. 
(deprecated arguments) reduce_mean(...): Computes the mean of elements across dimensions of a tensor. reduce_min(...): Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) reduce_prod(...): Computes the product of elements across dimensions of a tensor. (deprecated arguments) reduce_std(...): Computes the standard deviation of elements across dimensions of a tensor. reduce_sum(...): Computes the sum of elements across dimensions of a tensor. (deprecated arguments) reduce_variance(...): Computes the variance of elements across dimensions of a tensor. rint(...): Returns element-wise integer closest to x. round(...): Rounds the values of a tensor to the nearest integer, element-wise. rsqrt(...): Computes reciprocal of square root of x element-wise. scalar_mul(...): Multiplies a scalar times a Tensor or IndexedSlices object. segment_max(...): Computes the maximum along segments of a tensor. segment_mean(...): Computes the mean along segments of a tensor. segment_min(...): Computes the minimum along segments of a tensor. segment_prod(...): Computes the product along segments of a tensor. segment_sum(...): Computes the sum along segments of a tensor. sigmoid(...): Computes sigmoid of x element-wise. sign(...): Returns an element-wise indication of the sign of a number. sin(...): Computes sine of x element-wise. sinh(...): Computes hyperbolic sine of x element-wise. sobol_sample(...): Generates points from the Sobol sequence. softmax(...): Computes softmax activations. (deprecated arguments) softplus(...): Computes softplus: log(exp(features) + 1). softsign(...): Computes softsign: features / (abs(features) + 1). sqrt(...): Computes element-wise square root of the input tensor. square(...): Computes square of x element-wise. squared_difference(...): Returns conj(x - y)(x - y) element-wise. subtract(...): Returns x - y element-wise. tan(...): Computes tan of x element-wise. tanh(...): Computes hyperbolic tangent of x element-wise. 
top_k(...): Finds values and indices of the k largest entries for the last dimension. truediv(...): Divides x / y elementwise (using Python 3 division operator semantics). unsorted_segment_max(...): Computes the maximum along segments of a tensor. unsorted_segment_mean(...): Computes the mean along segments of a tensor. unsorted_segment_min(...): Computes the minimum along segments of a tensor. unsorted_segment_prod(...): Computes the product along segments of a tensor. unsorted_segment_sqrt_n(...): Computes the sum along segments of a tensor divided by the sqrt(N). unsorted_segment_sum(...): Computes the sum along segments of a tensor. xdivy(...): Returns 0 if x == 0, and x / y otherwise, elementwise. xlog1py(...): Compute x * log1p(y). xlogy(...): Returns 0 if x == 0, and x * log(y) otherwise, elementwise. zero_fraction(...): Returns the fraction of zeros in value. zeta(...): Compute the Hurwitz zeta function \(\zeta(x, q)\).
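The segmentation example near the top of this entry can be reproduced without TensorFlow. This pure-Python sketch (`segment_sum` here is a made-up stand-in, not the TF kernel) shows the row-to-segment mapping; like the real sorted-segment ops, it assumes `segment_ids` is sorted:

```python
def segment_sum(data, segment_ids):
    # Row i of the input is accumulated into output row
    # segment_ids[i]; the output has max(segment_ids) + 1 rows.
    n_cols = len(data[0])
    out = [[0] * n_cols for _ in range(max(segment_ids) + 1)]
    for row, seg in zip(data, segment_ids):
        for j, v in enumerate(row):
            out[seg][j] += v
    return out

c = [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]
result = segment_sum(c, [0, 0, 1])   # [[0, 0, 0, 0], [5, 6, 7, 8]]
```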
doc_27410
See Migration guide for more details. tf.compat.v1.raw_ops.SparseTensorSliceDataset tf.raw_ops.SparseTensorSliceDataset( indices, values, dense_shape, name=None ) Args indices A Tensor of type int64. values A Tensor. dense_shape A Tensor of type int64. name A name for the operation (optional). Returns A Tensor of type variant.
doc_27411
Scheduling policy for CPU-intensive processes that tries to preserve interactivity on the rest of the computer.
doc_27412
sklearn.datasets.fetch_species_distributions(*, data_home=None, download_if_missing=True) [source] Loader for species distribution dataset from Phillips et al. (2006) Read more in the User Guide. Parameters data_homestr, default=None Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders. download_if_missingbool, default=True If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site. Returns dataBunch Dictionary-like object, with the following attributes. coveragesarray, shape = [14, 1592, 1212] These represent the 14 features measured at each point of the map grid. The latitude/longitude values for the grid are discussed below. Missing data is represented by the value -9999. trainrecord array, shape = (1624,) The training points for the data. Each point has three fields: train[‘species’] is the species name train[‘dd long’] is the longitude, in degrees train[‘dd lat’] is the latitude, in degrees testrecord array, shape = (620,) The test points for the data. Same format as the training data. Nx, Nyintegers The number of longitudes (x) and latitudes (y) in the grid x_left_lower_corner, y_left_lower_cornerfloats The (x,y) position of the lower-left corner, in degrees grid_sizefloat The spacing between points of the grid, in degrees Notes This dataset represents the geographic distribution of species. The dataset is provided by Phillips et al. (2006). The two species are: “Bradypus variegatus”, the Brown-throated Sloth. “Microryzomys minutus”, also known as the Forest Small Rice Rat, a rodent that lives in Colombia, Ecuador, Peru, and Venezuela. For an example of using this dataset with scikit-learn, see examples/applications/plot_species_distribution_modeling.py. References “Maximum entropy modeling of species geographic distributions” S. J. Phillips, R. P. Anderson, R. E. 
Schapire - Ecological Modelling, 190:231-259, 2006. Examples using sklearn.datasets.fetch_species_distributions Species distribution modeling Kernel Density Estimate of Species Distributions
doc_27413
Return True if the object is an asynchronous generator function, for example: >>> async def agen(): ... yield 1 ... >>> inspect.isasyncgenfunction(agen) True New in version 3.6. Changed in version 3.8: Functions wrapped in functools.partial() now return True if the wrapped function is an asynchronous generator function.
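A runnable illustration of the distinction: only `async def` functions that contain a `yield` are asynchronous generator functions; a plain coroutine function is not.

```python
import inspect
from functools import partial

async def agen():
    yield 1          # the yield makes this an async generator function

async def coro():
    return 1         # no yield: a coroutine function, not an async generator

print(inspect.isasyncgenfunction(agen))           # True
print(inspect.isasyncgenfunction(coro))           # False
print(inspect.isasyncgenfunction(partial(agen)))  # True on Python 3.8+
```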
doc_27414
class sklearn.linear_model.LassoLarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True, positive=False) [source] Cross-validated Lasso, using the LARS algorithm. See glossary entry for cross-validation estimator. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1 Read more in the User Guide. Parameters fit_interceptbool, default=True whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). verbosebool or int, default=False Sets the verbosity amount. max_iterint, default=500 Maximum number of iterations to perform. normalizebool, default=True This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. precomputebool or ‘auto’ , default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix cannot be passed as argument since we will use only subsets of X. cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. 
max_n_alphasint, default=1000 The maximum number of points on the path used to compute the residuals in the cross-validation n_jobsint or None, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. epsfloat, default=np.finfo(float).eps The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. positivebool, default=False Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients do not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator. As a consequence using LassoLarsCV only makes sense for problems where a sparse solution is expected and/or reached. Attributes coef_array-like of shape (n_features,) parameter vector (w in the formulation formula) intercept_float independent term in decision function. 
coef_path_array-like of shape (n_features, n_alphas) the varying values of the coefficients along the path alpha_float the estimated regularization parameter alpha alphas_array-like of shape (n_alphas,) the different values of alpha along the path cv_alphas_array-like of shape (n_cv_alphas,) all the values of alpha along the path for the different folds mse_path_array-like of shape (n_folds, n_cv_alphas) the mean square error on left-out for each fold along the path (alpha values given by cv_alphas) n_iter_array-like or int the number of iterations run by Lars with the optimal alpha. active_list of int Indices of active variables at the end of the path. See also lars_path, LassoLars, LarsCV, LassoCV Notes The object solves the same problem as the LassoCV object. However, unlike the LassoCV, it finds the relevant alpha values by itself. In general, because of this property, it will be more stable. However, it is more fragile to heavily multicollinear datasets. It is more efficient than the LassoCV if only a small number of features are selected compared to the total number, for instance if there are very few samples compared to the number of features. Examples >>> from sklearn.linear_model import LassoLarsCV >>> from sklearn.datasets import make_regression >>> X, y = make_regression(noise=4.0, random_state=0) >>> reg = LassoLarsCV(cv=5).fit(X, y) >>> reg.score(X, y) 0.9992... >>> reg.alpha_ 0.0484... >>> reg.predict(X[:1,]) array([-77.8723...]) Methods fit(X, y) Fit the model using X, y as training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the model using X, y as training data. Parameters Xarray-like of shape (n_samples, n_features) Training data. yarray-like of shape (n_samples,) Target values. 
Returns selfobject returns an instance of self. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). 
The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.LassoLarsCV Lasso model selection: Cross-Validation / AIC / BIC
doc_27415
This function creates a mutable character buffer. The returned object is a ctypes array of c_char. init_or_size must be an integer which specifies the size of the array, or a bytes object which will be used to initialize the array items. If a bytes object is specified as first argument, the buffer is made one item larger than its length so that the last element in the array is a NUL termination character. An integer can be passed as second argument which allows specifying the size of the array if the length of the bytes should not be used. Raises an auditing event ctypes.create_string_buffer with arguments init, size.
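The three calling conventions described above, shown with the real `ctypes` API:

```python
import ctypes

# From a bytes initializer: one extra item is allocated for the
# trailing NUL terminator, so b"hello" yields a 6-byte buffer.
buf = ctypes.create_string_buffer(b"hello")
print(ctypes.sizeof(buf))   # 6
print(buf.value)            # b'hello'

# From an integer: a zero-filled, mutable buffer of exactly that size.
buf2 = ctypes.create_string_buffer(10)
buf2.value = b"hi"

# Bytes plus an explicit size: room to grow past the initializer.
buf3 = ctypes.create_string_buffer(b"hi", 8)
```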
doc_27416
Handler for Errorbars. Parameters marker_padfloat Padding between points in legend entry. numpointsint Number of points to show in legend entry. **kwargs Keyword arguments forwarded to HandlerNpoints. create_artists(legend, orig_handle, xdescent, ydescent, width, height, fontsize, trans)[source] get_err_size(legend, xdescent, ydescent, width, height, fontsize)[source]
doc_27417
Return the first operand with exponent adjusted by the second. Equivalently, return the first operand multiplied by 10**other. The second operand must be an integer.
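For example, with `decimal.Decimal.scaleb`, shifting the exponent by k multiplies the value by 10**k:

```python
from decimal import Decimal

# scaleb(2) multiplies by 10**2; scaleb(-1) divides by 10.
print(Decimal("7.50").scaleb(2))    # 750
print(Decimal("7.50").scaleb(-1))   # 0.750
```

A non-integer second operand is rejected (it signals InvalidOperation).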
doc_27418
Return the sum along diagonals of the array. If a is 2-D, the sum along its diagonal with the given offset is returned, i.e., the sum of elements a[i,i+offset] for all i. If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-arrays whose traces are returned. The shape of the resulting array is the same as that of a with axis1 and axis2 removed. Parameters aarray_like Input array, from which the diagonals are taken. offsetint, optional Offset of the diagonal from the main diagonal. Can be both positive and negative. Defaults to 0. axis1, axis2int, optional Axes to be used as the first and second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults are the first two axes of a. dtypedtype, optional Determines the data-type of the returned array and of the accumulator where the elements are summed. If dtype has the value None and a is of integer type of precision less than the default integer precision, then the default integer precision is used. Otherwise, the precision is the same as that of a. outndarray, optional Array into which the output is placed. Its type is preserved and it must be of the right shape to hold the output. Returns sum_along_diagonalsndarray If a is 2-D, the sum along the diagonal is returned. If a has larger dimensions, then an array of sums along diagonals is returned. See also diag, diagonal, diagflat Examples >>> np.trace(np.eye(3)) 3.0 >>> a = np.arange(8).reshape((2,2,2)) >>> np.trace(a) array([6, 8]) >>> a = np.arange(24).reshape((2,2,2,3)) >>> np.trace(a).shape (2, 3)
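The 2-D rule (sum of `a[i, i+offset]` for all valid `i`) can be written out directly. `trace2d` is a pure-Python sketch, not NumPy's implementation, covering only the 2-D case:

```python
def trace2d(a, offset=0):
    # Sum a[i][i + offset] over every i for which the column index
    # stays in range; offset may be positive or negative.
    n_rows, n_cols = len(a), len(a[0])
    return sum(
        a[i][i + offset]
        for i in range(n_rows)
        if 0 <= i + offset < n_cols
    )

eye3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(trace2d(eye3))             # 3, like np.trace(np.eye(3))
print(trace2d(eye3, offset=1))   # 0 (the superdiagonal is all zeros)
```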
doc_27419
See Migration guide for more details. tf.compat.v1.distribute.NcclAllReduce tf.distribute.NcclAllReduce( num_packs=1 ) It uses Nvidia NCCL for all-reduce. For the batch API, tensors will be repacked or aggregated for more efficient cross-device transportation. For reduces that are not all-reduce, it falls back to tf.distribute.ReductionToOneDevice. Here is how you can use NcclAllReduce in tf.distribute.MirroredStrategy: strategy = tf.distribute.MirroredStrategy( cross_device_ops=tf.distribute.NcclAllReduce()) Args num_packs a non-negative integer. The number of packs to split values into. If zero, no packing will be done. Raises ValueError if num_packs is negative. Methods batch_reduce View source batch_reduce( reduce_op, value_destination_pairs, options=None ) Reduce values to destinations in batches. See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context. Args reduce_op a tf.distribute.ReduceOp specifying how values should be combined. value_destination_pairs a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions. options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details. Returns A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs. Raises ValueError if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations. broadcast View source broadcast( tensor, destinations ) Broadcast tensor to destinations. This can only be called in the cross-replica context. Args tensor a tf.Tensor like object. The value to broadcast. destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcasted to the devices of that variable, this method doesn't update the variable. 
Returns A tf.Tensor or tf.distribute.DistributedValues. reduce View source reduce( reduce_op, per_replica_value, destinations, options=None ) Reduce per_replica_value to destinations. See tf.distribute.StrategyExtended.reduce_to. This can only be called in the cross-replica context. Args reduce_op a tf.distribute.ReduceOp specifying how values should be combined. per_replica_value a tf.distribute.DistributedValues, or a tf.Tensor like object. destinations a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor alike object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same to value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable. options a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details. Returns A tf.Tensor or tf.distribute.DistributedValues. Raises ValueError if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable or tf.distribute.DistributedValues.
doc_27420
tf.compat.v1.logging.TaskLevelStatusMessage( msg )
doc_27421
Merge multiple sorted inputs into a single sorted output (for example, merge timestamped entries from multiple log files). Returns an iterator over the sorted values. Similar to sorted(itertools.chain(*iterables)) but returns an iterable, does not pull the data into memory all at once, and assumes that each of the input streams is already sorted (smallest to largest). Has two optional arguments which must be specified as keyword arguments. key specifies a key function of one argument that is used to extract a comparison key from each input element. The default value is None (compare the elements directly). reverse is a boolean value. If set to True, then the input elements are merged as if each comparison were reversed. To achieve behavior similar to sorted(itertools.chain(*iterables), reverse=True), all iterables must be sorted from largest to smallest. Changed in version 3.5: Added the optional key and reverse parameters.
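A short example of the behavior described above, including the keyword-only key and reverse arguments:

```python
import heapq

# Merge already-sorted inputs lazily into one sorted stream.
merged = list(heapq.merge([1, 3, 5], [2, 4, 6]))
print(merged)  # [1, 2, 3, 4, 5, 6]

# With reverse=True, the inputs must be sorted largest-to-smallest.
desc = list(heapq.merge([5, 3, 1], [6, 4, 2], reverse=True))
print(desc)  # [6, 5, 4, 3, 2, 1]

# key extracts the comparison value from each element; on ties,
# elements from earlier iterables come first.
words = list(heapq.merge(["ant", "bee"], ["cow", "deer"], key=len))
print(words)  # ['ant', 'bee', 'cow', 'deer']
```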
doc_27422
Remove this colorbar from the figure. If the colorbar was created with use_gridspec=True the previous gridspec is restored.
doc_27423
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
doc_27424
Write all data associated with the window into the provided file object. This information can be later retrieved using the getwin() function.
doc_27425
Returns the size of the self tensor. The returned value is a subclass of tuple. Example: >>> torch.empty(3, 4, 5).size() torch.Size([3, 4, 5])
doc_27426
Return the complementary error function at x. The complementary error function is defined as 1.0 - erf(x). It is used for large values of x where a subtraction from one would cause a loss of significance. New in version 3.2.
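The loss-of-significance point is easy to demonstrate: for large x, erf(x) rounds to exactly 1.0 in double precision, so the subtraction 1.0 - erf(x) yields 0.0, while erfc(x) retains the tiny true value.

```python
import math

# erfc(x) equals 1 - erf(x), but is computed directly.
x = 0.5
assert math.isclose(math.erfc(x), 1.0 - math.erf(x))

# For large x the naive subtraction loses all significance.
big = 10.0
print(1.0 - math.erf(big))  # 0.0 -- erf(10) rounds to exactly 1.0
print(math.erfc(big))       # tiny but nonzero (on the order of 1e-45)
```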
doc_27427
Set the position to use for z-sorting.
doc_27428
The default ordering for the object, for use when obtaining lists of objects: ordering = ['-order_date'] This is a tuple or list of strings and/or query expressions. Each string is a field name with an optional “-” prefix, which indicates descending order. Fields without a leading “-” will be ordered ascending. Use the string “?” to order randomly. For example, to order by a pub_date field ascending, use this: ordering = ['pub_date'] To order by pub_date descending, use this: ordering = ['-pub_date'] To order by pub_date descending, then by author ascending, use this: ordering = ['-pub_date', 'author'] You can also use query expressions. To order by author ascending and make null values sort last, use this: from django.db.models import F ordering = [F('author').asc(nulls_last=True)]
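The ordering semantics above can be sketched in plain Python (an illustration only; Django itself compiles ordering into an SQL ORDER BY clause, and the dicts below are stand-ins for model instances): "-pub_date" means descending on the primary key, then "author" means ascending on the secondary key. Because Python's sort is stable, sorting on the secondary key first and then the primary key reproduces the combined ordering.

```python
# Pure-Python sketch of ordering = ['-pub_date', 'author']:
# descending on pub_date, then ascending on author for ties.
articles = [
    {"pub_date": "2021-01-02", "author": "bob"},
    {"pub_date": "2021-01-02", "author": "alice"},
    {"pub_date": "2021-01-01", "author": "carol"},
]
# Stable sort: secondary key first, then primary key.
articles.sort(key=lambda a: a["author"])
articles.sort(key=lambda a: a["pub_date"], reverse=True)
print([a["author"] for a in articles])  # ['alice', 'bob', 'carol']
```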
doc_27429
Return True if s is a Python soft keyword. New in version 3.9.
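A quick example distinguishing hard keywords from soft keywords (the exact contents of the soft-keyword list vary by Python version):

```python
import keyword

# Hard keywords are always reserved; soft keywords are reserved only in
# specific grammatical positions (e.g. match/case on Python >= 3.10).
assert keyword.iskeyword("if")
assert not keyword.issoftkeyword("if")
assert not keyword.iskeyword("spam")

# The full list of soft keywords for the running interpreter:
print(keyword.softkwlist)  # contents vary by Python version
```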
doc_27430
Create a RequestContext representing a WSGI environment. Use a with block to push the context, which will make request point at this request. See The Request Context. Typically you should not call this from your own code. A request context is automatically pushed by the wsgi_app() when handling a request. Use test_request_context() to create an environment and context instead of this method. Parameters environ (dict) – a WSGI environment Return type flask.ctx.RequestContext
doc_27431
Get parameters of this kernel. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
doc_27432
Create and return a pointer to a new pad data structure with the given number of lines and columns. Return a pad as a window object. A pad is like a window, except that it is not restricted by the screen size, and is not necessarily associated with a particular part of the screen. Pads can be used when a large window is needed, and only a part of the window will be on the screen at one time. Automatic refreshes of pads (such as from scrolling or echoing of input) do not occur. The refresh() and noutrefresh() methods of a pad require 6 arguments to specify the part of the pad to be displayed and the location on the screen to be used for the display. The arguments are pminrow, pmincol, sminrow, smincol, smaxrow, smaxcol; the p arguments refer to the upper left corner of the pad region to be displayed and the s arguments define a clipping box on the screen within which the pad region is to be displayed.
doc_27433
Indicates if an interval is empty, meaning it contains no points. New in version 0.25.0. Returns bool or ndarray A boolean indicating if a scalar Interval is empty, or a boolean ndarray positionally indicating if an Interval in an IntervalArray or IntervalIndex is empty. Examples An Interval that contains points is not empty: >>> pd.Interval(0, 1, closed='right').is_empty False An Interval that does not contain any points is empty: >>> pd.Interval(0, 0, closed='right').is_empty True >>> pd.Interval(0, 0, closed='left').is_empty True >>> pd.Interval(0, 0, closed='neither').is_empty True An Interval that contains a single point is not empty: >>> pd.Interval(0, 0, closed='both').is_empty False An IntervalArray or IntervalIndex returns a boolean ndarray positionally indicating if an Interval is empty: >>> ivs = [pd.Interval(0, 0, closed='neither'), ... pd.Interval(1, 2, closed='neither')] >>> pd.arrays.IntervalArray(ivs).is_empty array([ True, False]) Missing values are not considered empty: >>> ivs = [pd.Interval(0, 0, closed='neither'), np.nan] >>> pd.IntervalIndex(ivs).is_empty array([ True, False])
doc_27434
Pointer to start of data.
doc_27435
Set the artist's clip Bbox. Parameters clipboxBbox
doc_27436
See Migration guide for more details. tf.compat.v1.bitwise.invert tf.bitwise.invert( x, name=None ) Flip each bit of supported types. For example, type int8 (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. This operation is performed on each element of the tensor argument x. Example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
from tensorflow.python.framework import dtypes

# flip 2 (00000010) to -3 (11111101)
tf.assert_equal(-3, bitwise_ops.invert(2))

dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64,
              dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64]
inputs = [0, 5, 3, 14]
for dtype in dtype_list:
    # Because of issues with negative numbers, let's test this indirectly.
    # 1. invert(a) and a = 0
    # 2. invert(a) or a = invert(0)
    input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)
    not_a_and_a = bitwise_ops.bitwise_and(input_tensor, bitwise_ops.invert(input_tensor))
    not_a_or_a = bitwise_ops.bitwise_or(input_tensor, bitwise_ops.invert(input_tensor))
    not_0 = bitwise_ops.invert(tf.constant(0, dtype=dtype))

    expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)
    tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)

    expected = tf.cast([not_0] * 4, tf.float32)
    tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)

    # For unsigned dtypes let's also check the result directly.
    if dtype.is_unsigned:
        inverted = bitwise_ops.invert(input_tensor)
        expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)
        tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))

Args x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
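The identities the example above checks hold for plain Python integers too, since Python's `~` operator is the same two's-complement bitwise NOT; this stdlib-only sketch illustrates them without TensorFlow:

```python
# Two's-complement bitwise NOT in plain Python mirrors tf.bitwise.invert
# for signed integer types: ~x == -x - 1.
x = 2                       # 0b00000010
print(~x)                   # -3, i.e. 0b11111101 in 8-bit two's complement
assert ~x == -3
assert x & ~x == 0          # invert(a) AND a == 0
assert x | ~x == -1         # invert(a) OR a == invert(0) (all bits set)

# For an unsigned width w, inversion is equivalent to max_value - x.
w = 8
assert (~x) & (2**w - 1) == (2**w - 1) - x  # 253 for x == 2
```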
doc_27437
See Migration guide for more details. tf.compat.v1.raw_ops.DeserializeIterator tf.raw_ops.DeserializeIterator( resource_handle, serialized, name=None ) Args resource_handle A Tensor of type resource. A handle to an iterator resource. serialized A Tensor of type variant. A variant tensor storing the state of the iterator contained in the resource. name A name for the operation (optional). Returns The created Operation.
doc_27438
Return: obj argument as is, if obj is a Future, a Task, or a Future-like object (isfuture() is used for the test.) a Task object wrapping obj, if obj is a coroutine (iscoroutine() is used for the test); in this case the coroutine will be scheduled by ensure_future(). a Task object that would await on obj, if obj is an awaitable (inspect.isawaitable() is used for the test.) If obj is neither of the above a TypeError is raised. Important See also the create_task() function which is the preferred way for creating new Tasks. Changed in version 3.5.1: The function accepts any awaitable object.
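The three cases above can be seen in a small script (the coroutine name `compute` is just an illustration):

```python
import asyncio

async def compute():
    return 42

async def main():
    # A coroutine is wrapped in a Task and scheduled.
    task = asyncio.ensure_future(compute())
    assert isinstance(task, asyncio.Task)

    # A Future (or Task) is returned as-is.
    fut = asyncio.get_running_loop().create_future()
    assert asyncio.ensure_future(fut) is fut
    fut.set_result(None)

    return await task

result = asyncio.run(main())
print(result)  # 42
```

Passing something that is neither awaitable nor a Future, e.g. `asyncio.ensure_future(42)`, raises TypeError.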
doc_27439
Create a virtual environment by specifying the target directory (absolute or relative to the current directory) which is to contain the virtual environment. The create method will either create the environment in the specified directory, or raise an appropriate exception. The create method of the EnvBuilder class illustrates the hooks available for subclass customization:

def create(self, env_dir):
    """
    Create a virtualized Python environment in a directory.
    env_dir is the target directory to create an environment in.
    """
    env_dir = os.path.abspath(env_dir)
    context = self.ensure_directories(env_dir)
    self.create_configuration(context)
    self.setup_python(context)
    self.setup_scripts(context)
    self.post_setup(context)

Each of the methods ensure_directories(), create_configuration(), setup_python(), setup_scripts() and post_setup() can be overridden.
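A minimal subclass using one of those hooks might look like this (a sketch: the class name and marker file are illustrative, and with_pip=False is chosen only to keep creation fast):

```python
import os
import tempfile
import venv

class MarkerEnvBuilder(venv.EnvBuilder):
    def post_setup(self, context):
        # post_setup runs last in create(); drop a marker file into
        # the freshly created environment.
        with open(os.path.join(context.env_dir, "MARKER"), "w") as f:
            f.write("created\n")

target = os.path.join(tempfile.mkdtemp(), "venv")
MarkerEnvBuilder(with_pip=False).create(target)
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))  # True
print(os.path.exists(os.path.join(target, "MARKER")))      # True
```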
doc_27440
skimage.viewer.canvastools.LineTool(manager) Widget for line selection in a plot. skimage.viewer.canvastools.PaintTool(…[, …]) Widget for painting on top of a plot. skimage.viewer.canvastools.RectangleTool(manager) Widget for selecting a rectangular region in a plot. skimage.viewer.canvastools.ThickLineTool(manager) Widget for line selection in a plot. skimage.viewer.canvastools.base skimage.viewer.canvastools.linetool skimage.viewer.canvastools.painttool skimage.viewer.canvastools.recttool LineTool class skimage.viewer.canvastools.LineTool(manager, on_move=None, on_release=None, on_enter=None, maxdist=10, line_props=None, handle_props=None, **kwargs) [source] Bases: skimage.viewer.canvastools.base.CanvasToolBase Widget for line selection in a plot. Parameters managerViewer or PlotPlugin. Skimage viewer or plot plugin object. on_movefunction Function called whenever a control handle is moved. This function must accept the end points of line as the only argument. on_releasefunction Function called whenever the control handle is released. on_enterfunction Function called whenever the “enter” key is pressed. maxdistfloat Maximum pixel distance allowed when selecting control handle. line_propsdict Properties for matplotlib.lines.Line2D. handle_propsdict Marker properties for the handles (also see matplotlib.lines.Line2D). Attributes end_points2D array End points of line ((x1, y1), (x2, y2)). __init__(manager, on_move=None, on_release=None, on_enter=None, maxdist=10, line_props=None, handle_props=None, **kwargs) [source] Initialize self. See help(type(self)) for accurate signature. property end_points property geometry Geometry information that gets passed to callback functions. 
hit_test(event) [source] on_mouse_press(event) [source] on_mouse_release(event) [source] on_move(event) [source] update(x=None, y=None) [source] PaintTool class skimage.viewer.canvastools.PaintTool(manager, overlay_shape, radius=5, alpha=0.3, on_move=None, on_release=None, on_enter=None, rect_props=None) [source] Bases: skimage.viewer.canvastools.base.CanvasToolBase Widget for painting on top of a plot. Parameters managerViewer or PlotPlugin. Skimage viewer or plot plugin object. overlay_shapeshape tuple 2D shape tuple used to initialize overlay image. radiusint The size of the paint cursor. alphafloat (between [0, 1]) Opacity of overlay. on_movefunction Function called whenever a control handle is moved. This function must accept the end points of line as the only argument. on_releasefunction Function called whenever the control handle is released. on_enterfunction Function called whenever the “enter” key is pressed. rect_propsdict Properties for matplotlib.patches.Rectangle. This class redefines defaults in matplotlib.widgets.RectangleSelector. Examples >>> from skimage.data import camera >>> import matplotlib.pyplot as plt >>> from skimage.viewer.canvastools import PaintTool >>> import numpy as np >>> img = camera() >>> ax = plt.subplot(111) >>> plt.imshow(img, cmap=plt.cm.gray) >>> p = PaintTool(ax,np.shape(img[:-1]),10,0.2) >>> plt.show() >>> mask = p.overlay >>> plt.imshow(mask,cmap=plt.cm.gray) >>> plt.show() Attributes overlayarray Overlay of painted labels displayed on top of image. labelint Current paint color. __init__(manager, overlay_shape, radius=5, alpha=0.3, on_move=None, on_release=None, on_enter=None, rect_props=None) [source] Initialize self. See help(type(self)) for accurate signature. property geometry Geometry information that gets passed to callback functions. 
property label on_key_press(event) [source] on_mouse_press(event) [source] on_mouse_release(event) [source] on_move(event) [source] property overlay property radius property shape update_cursor(x, y) [source] update_overlay(x, y) [source] RectangleTool class skimage.viewer.canvastools.RectangleTool(manager, on_move=None, on_release=None, on_enter=None, maxdist=10, rect_props=None) [source] Bases: skimage.viewer.canvastools.base.CanvasToolBase, matplotlib.widgets.RectangleSelector Widget for selecting a rectangular region in a plot. After making the desired selection, press “Enter” to accept the selection and call the on_enter callback function. Parameters managerViewer or PlotPlugin. Skimage viewer or plot plugin object. on_movefunction Function called whenever a control handle is moved. This function must accept the rectangle extents as the only argument. on_releasefunction Function called whenever the control handle is released. on_enterfunction Function called whenever the “enter” key is pressed. maxdistfloat Maximum pixel distance allowed when selecting control handle. rect_propsdict Properties for matplotlib.patches.Rectangle. This class redefines defaults in matplotlib.widgets.RectangleSelector. Examples >>> from skimage import data >>> from skimage.viewer import ImageViewer >>> from skimage.viewer.canvastools import RectangleTool >>> from skimage.draw import line >>> from skimage.draw import set_color >>> viewer = ImageViewer(data.coffee()) >>> def print_the_rect(extents): ... global viewer ... im = viewer.image ... coord = np.int64(extents) ... [rr1, cc1] = line(coord[2],coord[0],coord[2],coord[1]) ... [rr2, cc2] = line(coord[2],coord[1],coord[3],coord[1]) ... [rr3, cc3] = line(coord[3],coord[1],coord[3],coord[0]) ... [rr4, cc4] = line(coord[3],coord[0],coord[2],coord[0]) ... set_color(im, (rr1, cc1), [255, 255, 0]) ... set_color(im, (rr2, cc2), [0, 255, 255]) ... set_color(im, (rr3, cc3), [255, 0, 255]) ... set_color(im, (rr4, cc4), [0, 0, 0]) ... 
viewer.image=im >>> rect_tool = RectangleTool(viewer, on_enter=print_the_rect) >>> viewer.show() Attributes extentstuple Return (xmin, xmax, ymin, ymax). __init__(manager, on_move=None, on_release=None, on_enter=None, maxdist=10, rect_props=None) [source] Parameters axAxes The parent axes for the widget. onselectfunction A callback function that is called after a selection is completed. It must have the signature: def onselect(eclick: MouseEvent, erelease: MouseEvent) where eclick and erelease are the mouse click and release MouseEvents that start and complete the selection. drawtype{“box”, “line”, “none”}, default: “box” Whether to draw the full rectangle box, the diagonal line of the rectangle, or nothing at all. minspanxfloat, default: 0 Selections with an x-span less than minspanx are ignored. minspanyfloat, default: 0 Selections with an y-span less than minspany are ignored. useblitbool, default: False Whether to use blitting for faster drawing (if supported by the backend). linepropsdict, optional Properties with which the line is drawn, if drawtype == "line". Default: dict(color="black", linestyle="-", linewidth=2, alpha=0.5) rectpropsdict, optional Properties with which the rectangle is drawn, if drawtype == "box". Default: dict(facecolor="red", edgecolor="black", alpha=0.2, fill=True) spancoords{“data”, “pixels”}, default: “data” Whether to interpret minspanx and minspany in data or in pixel coordinates. buttonMouseButton, list of MouseButton, default: all buttons Button(s) that trigger rectangle selection. maxdistfloat, default: 10 Distance in pixels within which the interactive tool handles can be activated. marker_propsdict Properties with which the interactive handles are drawn. Currently not implemented and ignored. interactivebool, default: False Whether to draw a set of handles that allow interaction with the widget after it is drawn. state_modifier_keysdict, optional Keyboard modifiers which affect the widget’s behavior. Values amend the defaults. 
“move”: Move the existing shape, default: no modifier. “clear”: Clear the current shape, default: “escape”. “square”: Makes the shape square, default: “shift”. “center”: Make the initial point the center of the shape, default: “ctrl”. “square” and “center” can be combined. property corners Corners of rectangle from lower left, moving clockwise. property edge_centers Midpoint of rectangle edges from left, moving clockwise. property extents Return (xmin, xmax, ymin, ymax). property geometry Geometry information that gets passed to callback functions. on_mouse_press(event) [source] on_mouse_release(event) [source] on_move(event) [source] ThickLineTool class skimage.viewer.canvastools.ThickLineTool(manager, on_move=None, on_enter=None, on_release=None, on_change=None, maxdist=10, line_props=None, handle_props=None) [source] Bases: skimage.viewer.canvastools.linetool.LineTool Widget for line selection in a plot. The thickness of the line can be varied using the mouse scroll wheel, or with the ‘+’ and ‘-‘ keys. Parameters managerViewer or PlotPlugin. Skimage viewer or plot plugin object. on_movefunction Function called whenever a control handle is moved. This function must accept the end points of line as the only argument. on_releasefunction Function called whenever the control handle is released. on_enterfunction Function called whenever the “enter” key is pressed. on_changefunction Function called whenever the line thickness is changed. maxdistfloat Maximum pixel distance allowed when selecting control handle. line_propsdict Properties for matplotlib.lines.Line2D. handle_propsdict Marker properties for the handles (also see matplotlib.lines.Line2D). Attributes end_points2D array End points of line ((x1, y1), (x2, y2)). __init__(manager, on_move=None, on_enter=None, on_release=None, on_change=None, maxdist=10, line_props=None, handle_props=None) [source] Initialize self. See help(type(self)) for accurate signature. on_key_press(event) [source] on_scroll(event) [source]
doc_27441
Registry entries subordinate to this key define types (or classes) of documents and the properties associated with those types. Shell and COM applications use the information stored under this key.
doc_27442
Apply decision function to an array of samples. The decision function is equal (up to a constant factor) to the log-posterior of the model, i.e. log p(y = k | x). In a binary classification setting this instead corresponds to the difference log p(y = 1 | x) - log p(y = 0 | x). See Mathematical formulation of the LDA and QDA classifiers. Parameters Xarray-like of shape (n_samples, n_features) Array of samples (test vectors). Returns Cndarray of shape (n_samples,) or (n_samples, n_classes) Decision function values related to each class, per sample. In the two-class case, the shape is (n_samples,), giving the log likelihood ratio of the positive class.
doc_27443
Estimate ∫ y dx along dim, using the trapezoid rule. Parameters y (Tensor) – The values of the function to integrate x (Tensor) – The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively to the estimated integral (i.e., the convention ∫_a^b f = −∫_b^a f is followed). dim (int) – The dimension along which to integrate. By default, use the last dimension. Returns A Tensor with the same shape as the input, except with dim removed. Each element of the returned tensor represents the estimated integral ∫ y dx along dim. Example: >>> y = torch.randn((2, 3)) >>> y tensor([[-2.1156, 0.6857, -0.2700], [-1.2145, 0.5540, 2.0431]]) >>> x = torch.tensor([[1, 3, 4], [1, 2, 3]]) >>> torch.trapz(y, x) tensor([-1.2220, 0.9683]) torch.trapz(y, *, dx=1, dim=-1) → Tensor As above, but the sample points are spaced uniformly at a distance of dx. Parameters y (Tensor) – The values of the function to integrate Keyword Arguments dx (float) – The distance between points at which y is sampled. dim (int) – The dimension along which to integrate. By default, use the last dimension. Returns A Tensor with the same shape as the input, except with dim removed. Each element of the returned tensor represents the estimated integral ∫ y dx along dim.
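The trapezoid rule itself is simple enough to sketch in plain Python (a 1-D illustration without torch): each interval [x[i], x[i+1]] contributes (x[i+1] - x[i]) * (y[i] + y[i+1]) / 2 to the estimate.

```python
# 1-D trapezoid rule, the estimate torch.trapz computes along dim.
def trapz(y, x):
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2.0
               for i in range(len(y) - 1))

# Integrate y = 1 over [0, 2]: the area is exactly 2.
print(trapz([1.0, 1.0, 1.0], [0.0, 1.0, 2.0]))  # 2.0

# Decreasing x contributes negatively (the convention that
# integrating from a to b is minus integrating from b to a).
print(trapz([1.0, 1.0, 1.0], [2.0, 1.0, 0.0]))  # -2.0
```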
doc_27444
turtle.pu() turtle.up() Pull the pen up – no drawing when moving.
doc_27445
Return the maximum along a given axis. Refer to numpy.amax for full documentation. See also numpy.amax equivalent function
doc_27446
Bases: object A series of possibly disconnected, possibly closed, line and curve segments. The underlying storage is made up of two parallel numpy arrays: vertices: an Nx2 float array of vertices codes: an N-length uint8 array of path codes, or None These two arrays always have the same length in the first dimension. For example, to represent a cubic curve, you must provide three vertices and three CURVE4 codes. The code types are: STOP (1 vertex, ignored): A marker for the end of the entire path (currently not required and ignored). MOVETO (1 vertex): Pick up the pen and move to the given vertex. LINETO (1 vertex): Draw a line from the current position to the given vertex. CURVE3 (1 control point, 1 endpoint): Draw a quadratic Bezier curve from the current position, with the given control point, to the given end point. CURVE4 (2 control points, 1 endpoint): Draw a cubic Bezier curve from the current position, with the given control points, to the given end point. CLOSEPOLY (1 vertex, ignored): Draw a line segment to the start point of the current polyline. If codes is None, it is interpreted as a MOVETO followed by a series of LINETO. Users of Path objects should not access the vertices and codes arrays directly. Instead, they should use iter_segments or cleaned to get the vertex/code pairs. This helps, in particular, to consistently handle the case of codes being None. Some behavior of Path objects can be controlled by rcParams. See the rcParams whose keys start with 'path.'. Note The vertices and codes arrays should be treated as immutable -- there are a number of optimizations and assumptions made up front in the constructor that will not change when the data changes. Create a new path with the given vertices and codes. Parameters vertices(N, 2) array-like The path vertices, as an array, masked array or sequence of pairs. 
Masked values, if any, will be converted to NaNs, which are then handled correctly by the Agg PathIterator and other consumers of path data, such as iter_segments(). codesarray-like or None, optional N-length array of integers representing the codes of the path. If not None, codes must be the same length as vertices. If None, vertices will be treated as a series of line segments. _interpolation_stepsint, optional Used as a hint to certain projections, such as Polar, that this path should be linearly interpolated immediately before drawing. This attribute is primarily an implementation detail and is not intended for public use. closedbool, optional If codes is None and closed is True, vertices will be treated as line segments of a closed polygon. Note that the last vertex will then be ignored (as the corresponding code will be set to CLOSEPOLY). readonlybool, optional Makes the path behave in an immutable way and sets the vertices and codes as read-only arrays. CLOSEPOLY=79 CURVE3=3 CURVE4=4 LINETO=2 MOVETO=1 NUM_VERTICES_FOR_CODE={0: 1, 1: 1, 2: 1, 3: 2, 4: 3, 79: 1} A dictionary mapping Path codes to the number of vertices that the code expects. STOP=0 classmethodarc(theta1, theta2, n=None, is_wedge=False)[source] Return a Path for the unit circle arc from angles theta1 to theta2 (in degrees). theta2 is unwrapped to produce the shortest arc within 360 degrees. That is, if theta2 > theta1 + 360, the arc will be from theta1 to theta2 - 360 and not a full circle plus some extra overlap. If n is provided, it is the number of spline segments to make. If n is not provided, the number of spline segments is determined based on the delta between theta1 and theta2. Masionobe, L. 2003. Drawing an elliptical arc using polylines, quadratic or cubic Bezier curves. classmethodcircle(center=(0.0, 0.0), radius=1.0, readonly=False)[source] Return a Path representing a circle of a given radius and center. Parameters center(float, float), default: (0, 0) The center of the circle. 
radiusfloat, default: 1 The radius of the circle. readonlybool Whether the created path should have the "readonly" argument set when creating the Path instance. Notes The circle is approximated using 8 cubic Bezier curves, as described in Lancaster, Don. Approximating a Circle or an Ellipse Using Four Bezier Cubic Splines. cleaned(transform=None, remove_nans=False, clip=None, *, simplify=False, curves=False, stroke_width=1.0, snap=False, sketch=None)[source] Return a new Path with vertices and codes cleaned according to the parameters. See also Path.iter_segments for details of the keyword arguments. clip_to_bbox(bbox, inside=True)[source] Clip the path to the given bounding box. The path must be made up of one or more closed polygons. This algorithm will not behave correctly for unclosed paths. If inside is True, clip to the inside of the box, otherwise to the outside of the box. code_type alias of numpy.uint8 propertycodes The list of codes in the Path as a 1D numpy array. Each code is one of STOP, MOVETO, LINETO, CURVE3, CURVE4 or CLOSEPOLY. For codes that correspond to more than one vertex (CURVE3 and CURVE4), that code will be repeated so that the length of self.vertices and self.codes is always the same. contains_path(path, transform=None)[source] Return whether this (closed) path completely contains the given path. If transform is not None, the path will be transformed before checking for containment. contains_point(point, transform=None, radius=0.0)[source] Return whether the area enclosed by the path contains the given point. The path is always treated as closed; i.e. if the last code is not CLOSEPOLY an implicit segment connecting the last vertex to the first vertex is assumed. Parameters point(float, float) The point (x, y) to check. transformmatplotlib.transforms.Transform, optional If not None, point will be compared to self transformed by transform; i.e. for a correct check, transform should transform the path into the coordinate system of point. 
radiusfloat, default: 0 Add an additional margin on the path in coordinates of point. The path is extended tangentially by radius/2; i.e. if you would draw the path with a linewidth of radius, all points on the line would still be considered to be contained in the area. Conversely, negative values shrink the area: Points on the imaginary line will be considered outside the area. Returns bool Notes The current algorithm has some limitations: The result is undefined for points exactly at the boundary (i.e. at the path shifted by radius/2). The result is undefined if there is no enclosed area, i.e. all vertices are on a straight line. If bounding lines start to cross each other due to radius shift, the result is not guaranteed to be correct. contains_points(points, transform=None, radius=0.0)[source] Return whether the area enclosed by the path contains the given points. The path is always treated as closed; i.e. if the last code is not CLOSEPOLY an implicit segment connecting the last vertex to the first vertex is assumed. Parameters points(N, 2) array The points to check. Columns contain x and y values. transformmatplotlib.transforms.Transform, optional If not None, points will be compared to self transformed by transform; i.e. for a correct check, transform should transform the path into the coordinate system of points. radiusfloat, default: 0 Add an additional margin on the path in coordinates of points. The path is extended tangentially by radius/2; i.e. if you would draw the path with a linewidth of radius, all points on the line would still be considered to be contained in the area. Conversely, negative values shrink the area: Points on the imaginary line will be considered outside the area. Returns length-N bool array Notes The current algorithm has some limitations: The result is undefined for points exactly at the boundary (i.e. at the path shifted by radius/2). The result is undefined if there is no enclosed area, i.e. all vertices are on a straight line. 
If bounding lines start to cross each other due to radius shift, the result is not guaranteed to be correct. copy()[source] Return a shallow copy of the Path, which will share the vertices and codes with the source Path. deepcopy(memo=None)[source] Return a deepcopy of the Path. The Path will not be readonly, even if the source Path is. get_extents(transform=None, **kwargs)[source] Get Bbox of the path. Parameters transformmatplotlib.transforms.Transform, optional Transform to apply to path before computing extents, if any. **kwargs Forwarded to iter_bezier. Returns matplotlib.transforms.Bbox The extents of the path Bbox([[xmin, ymin], [xmax, ymax]]) statichatch(hatchpattern, density=6)[source] Given a hatch specifier, hatchpattern, generates a Path that can be used in a repeated hatching pattern. density is the number of lines per unit square. interpolated(steps)[source] Return a new path resampled to length N x steps. Codes other than LINETO are not handled correctly. intersects_bbox(bbox, filled=True)[source] Return whether this path intersects a given Bbox. If filled is True, then this also returns True if the path completely encloses the Bbox (i.e., the path is treated as filled). The bounding box is always considered filled. intersects_path(other, filled=True)[source] Return whether if this path intersects another given path. If filled is True, then this also returns True if one path completely encloses the other (i.e., the paths are treated as filled). iter_bezier(**kwargs)[source] Iterate over each bezier curve (lines included) in a Path. Parameters **kwargs Forwarded to iter_segments. Yields Bmatplotlib.bezier.BezierSegment The bezier curves that make up the current path. Note in particular that freestanding points are bezier curves of order 0, and lines are bezier curves of order 1 (with two control points). codePath.code_type The code describing what kind of curve is being returned. 
Path.MOVETO, Path.LINETO, Path.CURVE3, Path.CURVE4 correspond to bezier curves with 1, 2, 3, and 4 control points (respectively). Path.CLOSEPOLY is a Path.LINETO with the control points correctly chosen based on the start/end points of the current stroke. iter_segments(transform=None, remove_nans=True, clip=None, snap=False, stroke_width=1.0, simplify=None, curves=True, sketch=None)[source] Iterate over all curve segments in the path. Each iteration returns a pair (vertices, code), where vertices is a sequence of 1-3 coordinate pairs, and code is a Path code. Additionally, this method can provide a number of standard cleanups and conversions to the path. Parameters transformNone or Transform If not None, the given affine transformation will be applied to the path. remove_nansbool, optional Whether to remove all NaNs from the path and skip over them using MOVETO commands. clipNone or (float, float, float, float), optional If not None, must be a four-tuple (x1, y1, x2, y2) defining a rectangle in which to clip the path. snapNone or bool, optional If True, snap all nodes to pixels; if False, don't snap them. If None, snap if the path contains only segments parallel to the x or y axes, and no more than 1024 of them. stroke_widthfloat, optional The width of the stroke being drawn (used for path snapping). simplifyNone or bool, optional Whether to simplify the path by removing vertices that do not affect its appearance. If None, use the should_simplify attribute. See also rcParams["path.simplify"] (default: True) and rcParams["path.simplify_threshold"] (default: 0.111111111111). curvesbool, optional If True, curve segments will be returned as curve segments. If False, all curves will be converted to line segments. sketchNone or sequence, optional If not None, must be a 3-tuple of the form (scale, length, randomness), representing the sketch parameters. classmethodmake_compound_path(*args)[source] Make a compound path from a list of Path objects. 
Blindly removes all Path.STOP control points. classmethodmake_compound_path_from_polys(XY)[source] Make a compound path object to draw a number of polygons with equal numbers of sides. XY is a (numpolys x numsides x 2) numpy array of vertices. Return object is a Path. propertyreadonly True if the Path is read-only. propertyshould_simplify True if the vertices array should be simplified. propertysimplify_threshold The fraction of a pixel difference below which vertices will be simplified out. to_polygons(transform=None, width=0, height=0, closed_only=True)[source] Convert this path to a list of polygons or polylines. Each polygon/polyline is an Nx2 array of vertices. In other words, each polygon has no MOVETO instructions or curves. This is useful for displaying in backends that do not support compound paths or Bezier curves. If width and height are both non-zero then the lines will be simplified so that vertices outside of (0, 0), (width, height) will be clipped. If closed_only is True (default), only closed polygons, with the last point being the same as the first point, will be returned. Any unclosed polylines in the path will be explicitly closed. If closed_only is False, any unclosed polygons in the path will be returned as unclosed polygons, and the closed polygons will be returned explicitly closed by setting the last point to the same as the first point. transformed(transform)[source] Return a transformed copy of the path. See also matplotlib.transforms.TransformedPath A specialized path class that will cache the transformed result and automatically update when the transform changes. classmethodunit_circle()[source] Return the readonly Path of the unit circle. For most cases, Path.circle() will be what you want. classmethodunit_circle_righthalf()[source] Return a Path of the right half of a unit circle. See Path.circle for the reference on the approximation used. 
classmethodunit_rectangle()[source] Return a Path instance of the unit rectangle from (0, 0) to (1, 1). classmethodunit_regular_asterisk(numVertices)[source] Return a Path for a unit regular asterisk with the given numVertices and radius of 1.0, centered at (0, 0). classmethodunit_regular_polygon(numVertices)[source] Return a Path instance for a unit regular polygon with the given numVertices such that the circumscribing circle has radius 1.0, centered at (0, 0). classmethodunit_regular_star(numVertices, innerCircle=0.5)[source] Return a Path for a unit regular star with the given numVertices and radius of 1.0, centered at (0, 0). propertyvertices The list of vertices in the Path as an Nx2 numpy array. classmethodwedge(theta1, theta2, n=None)[source] Return a Path for the unit circle wedge from angles theta1 to theta2 (in degrees). theta2 is unwrapped to produce the shortest wedge within 360 degrees. That is, if theta2 > theta1 + 360, the wedge will be from theta1 to theta2 - 360 and not a full circle plus some extra overlap. If n is provided, it is the number of spline segments to make. If n is not provided, the number of spline segments is determined based on the delta between theta1 and theta2. See Path.arc for the reference on the approximation used. matplotlib.path.get_path_collection_extents(master_transform, paths, transforms, offsets, offset_transform)[source] Given a sequence of Paths, Transforms objects, and offsets, as found in a PathCollection, returns the bounding box that encapsulates all of them. Parameters master_transformTransform Global transformation applied to all paths. pathslist of Path transformslist of Affine2D offsets(N, 2) array-like offset_transformAffine2D Transform applied to the offsets before offsetting the path. 
Notes The way that paths, transforms and offsets are combined follows the same method as for collections: Each is iterated over independently, so if you have 3 paths, 2 transforms and 1 offset, their combinations are as follows: (A, A, A), (B, B, A), (C, A, A)
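A few of the Path methods documented above in action (a minimal sketch, not part of the original reference page):

```python
from matplotlib.path import Path
from matplotlib.transforms import Bbox

rect = Path.unit_rectangle()        # closed unit square from (0, 0) to (1, 1)
bbox = rect.get_extents()           # Bbox([[xmin, ymin], [xmax, ymax]])
print(bbox.extents)                 # [0. 0. 1. 1.]

# to_polygons() flattens the path into plain vertex arrays (no codes).
polys = rect.to_polygons()
print(len(polys))                   # 1

# A filled path "intersects" a bbox that it completely encloses.
inner = Bbox.from_extents(0.25, 0.25, 0.75, 0.75)
print(rect.intersects_bbox(inner, filled=True))   # True
```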
doc_27447
tf.data.experimental.service.DispatchServer( config=None, start=True ) A tf.data.experimental.service.DispatchServer coordinates a cluster of tf.data.experimental.service.WorkerServers. When the workers start, they register themselves with the dispatcher. dispatcher = tf.data.experimental.service.DispatchServer() dispatcher_address = dispatcher.target.split("://")[1] worker = tf.data.experimental.service.WorkerServer(WorkerConfig( dispatcher_address=dispatcher_address)) dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) print(list(dataset.as_numpy_iterator())) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] When starting a dedicated tf.data dispatch process, use join() to block indefinitely after starting up the server. dispatcher = tf.data.experimental.service.DispatchServer( tf.data.experimental.service.DispatcherConfig(port=5050)) dispatcher.join() To start a DispatchServer in fault-tolerant mode, set work_dir and fault_tolerant_mode like below: dispatcher = tf.data.experimental.service.DispatchServer( tf.data.experimental.service.DispatcherConfig( port=5050, work_dir="gs://my-bucket/dispatcher/work_dir", fault_tolerant_mode=True)) Args config (Optional.) A tf.data.experimental.service.DispatcherConfig configuration. If None, the dispatcher will use default configuration values. start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True. Attributes target Returns a target that can be used to connect to the server. dispatcher = tf.data.experimental.service.DispatchServer() dataset = tf.data.Dataset.range(10) dataset = dataset.apply(tf.data.experimental.service.distribute( processing_mode="parallel_epochs", service=dispatcher.target)) The returned string will be in the form protocol://address, e.g. "grpc://localhost:5050". Methods join View source join() Blocks until the server has shut down. 
This is useful when starting a dedicated dispatch process. dispatcher = tf.data.experimental.service.DispatchServer( tf.data.experimental.service.DispatcherConfig(port=5050)) dispatcher.join() Raises tf.errors.OpError Or one of its subclasses if an error occurs while joining the server. start View source start() Starts this server. dispatcher = tf.data.experimental.service.DispatchServer(start=False) dispatcher.start() Raises tf.errors.OpError Or one of its subclasses if an error occurs while starting the server.
doc_27448
Applies a 2D bilinear upsampling to an input signal composed of several input channels. To specify the scale, it takes either the size or the scale_factor as its constructor argument. When size is given, it is the output size of the image (h, w). Parameters size (int or Tuple[int, int], optional) – output spatial sizes scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size. Warning This class is deprecated in favor of interpolate(). It is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True). Shape: Input: (N, C, H_in, W_in) Output: (N, C, H_out, W_out) where H_out = ⌊H_in × scale_factor⌋ and W_out = ⌊W_in × scale_factor⌋ Examples: >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2) >>> input tensor([[[[ 1., 2.], [ 3., 4.]]]]) >>> m = nn.UpsamplingBilinear2d(scale_factor=2) >>> m(input) tensor([[[[ 1.0000, 1.3333, 1.6667, 2.0000], [ 1.6667, 2.0000, 2.3333, 2.6667], [ 2.3333, 2.6667, 3.0000, 3.3333], [ 3.0000, 3.3333, 3.6667, 4.0000]]]])
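Since the class is deprecated, the stated equivalence can be checked directly (a small sketch):

```python
import torch
import torch.nn.functional as F

x = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
a = torch.nn.UpsamplingBilinear2d(scale_factor=2)(x)   # deprecated module form
b = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(torch.allclose(a, b))   # True
print(a.shape)                # torch.Size([1, 1, 4, 4])
```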
doc_27449
See Migration guide for more details. tf.compat.v1.raw_ops.TensorListScatterIntoExistingList tf.raw_ops.TensorListScatterIntoExistingList( input_handle, tensor, indices, name=None ) Each member of the TensorList corresponds to one row of the input tensor, specified by the given index (see tf.gather). input_handle: The list to scatter into. tensor: The input tensor. indices: The indices used to index into the list. output_handle: The TensorList. Args input_handle A Tensor of type variant. tensor A Tensor. indices A Tensor of type int32. name A name for the operation (optional). Returns A Tensor of type variant.
doc_27450
See Migration guide for more details. tf.compat.v1.raw_ops.SparseReduceSumSparse tf.raw_ops.SparseReduceSumSparse( input_indices, input_values, input_shape, reduction_axes, keep_dims=False, name=None ) This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). In contrast to SparseReduceSum, this Op returns a SparseTensor. Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1. If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, which are interpreted according to the indexing rules in Python. Args input_indices A Tensor of type int64. 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, possibly not in canonical ordering. input_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. N non-empty values corresponding to input_indices. input_shape A Tensor of type int64. 1-D. Shape of the input SparseTensor. reduction_axes A Tensor of type int32. 1-D. Length-K vector containing the reduction axes. keep_dims An optional bool. Defaults to False. If true, retain reduced dimensions with length 1. name A name for the operation (optional). Returns A tuple of Tensor objects (output_indices, output_values, output_shape). output_indices A Tensor of type int64. output_values A Tensor. Has the same type as input_values. output_shape A Tensor of type int64.
doc_27451
Insert x after this node in the list of nodes in the graph. Equivalent to self.next.prepend(x) Parameters x (Node) – The node to put after this node. Must be a member of the same graph.
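The append/prepend relationship can be pictured with a toy doubly linked list (purely illustrative; the real torch.fx Node also tracks graph membership and rejects nodes from other graphs):

```python
class Node:
    """A toy doubly linked list node."""
    def __init__(self, name):
        self.name, self.prev, self.next = name, None, None

    def prepend(self, x):
        # Insert x immediately before self.
        x.prev, x.next = self.prev, self
        if self.prev is not None:
            self.prev.next = x
        self.prev = x

    def append(self, x):
        # Equivalent to self.next.prepend(x): insert x immediately after self.
        x.prev, x.next = self, self.next
        if self.next is not None:
            self.next.prev = x
        self.next = x

a, b, c = Node("a"), Node("b"), Node("c")
a.append(c)     # a <-> c
a.append(b)     # a <-> b <-> c
print([n.name for n in (a, a.next, a.next.next)])   # ['a', 'b', 'c']
```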
doc_27452
Get the status of the file descriptor fd. Return a stat_result object. As of Python 3.3, this is equivalent to os.stat(fd). See also The stat() function.
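A minimal sketch using the standard library:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
info = os.fstat(fd)                 # stat_result for the open descriptor
print(info.st_size)                 # 5
print(stat.S_ISREG(info.st_mode))   # True: a regular file
os.close(fd)
os.remove(path)
```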
doc_27453
The django.contrib.sitemaps.GenericSitemap class allows you to create a sitemap by passing it a dictionary which has to contain at least a queryset entry. This queryset will be used to generate the items of the sitemap. It may also have a date_field entry that specifies a date field for objects retrieved from the queryset. This will be used for the lastmod attribute in the generated sitemap. The priority, changefreq, and protocol keyword arguments allow specifying these attributes for all URLs.
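A sketch of a typical urls.py entry (BlogEntry and its pub_date field are hypothetical models, not part of the documentation above):

```python
# Hypothetical urls.py for a Django project with a `blog` app.
from django.contrib.sitemaps import GenericSitemap
from django.contrib.sitemaps.views import sitemap
from django.urls import path
from blog.models import BlogEntry  # assumed model

info_dict = {
    "queryset": BlogEntry.objects.all(),   # required entry
    "date_field": "pub_date",              # used for <lastmod>
}

urlpatterns = [
    path(
        "sitemap.xml",
        sitemap,
        {"sitemaps": {"blog": GenericSitemap(info_dict, priority=0.6)}},
        name="django.contrib.sitemaps.views.sitemap",
    ),
]
```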
doc_27454
Return the Charset instance associated with the message’s payload. This is a legacy method. On the EmailMessage class it always returns None.
doc_27455
Returns True if any of the elements of a evaluate to True. Refer to numpy.any for full documentation. See also numpy.any equivalent function
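A short illustration (not from the original page):

```python
import numpy as np

a = np.array([[0, 0], [0, 1]])
print(a.any())                   # True: at least one element is truthy
print(a.any(axis=0))             # [False  True]
print(np.array([0, 0]).any())    # False
```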
doc_27456
Given a dictionary of key/value pairs, generates a fontconfig pattern string.
doc_27457
Return arrays with the results of pyfunc broadcast (vectorized) over args and kwargs not in excluded.
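A small sketch of this broadcasting behaviour, including the excluded parameter:

```python
import numpy as np

def myfunc(a, b):
    """Return a - b if a > b, otherwise a + b."""
    return a - b if a > b else a + b

vfunc = np.vectorize(myfunc)
print(vfunc([1, 2, 3, 4], 2))     # [3 4 1 2]

# `excluded` keeps an argument out of broadcasting entirely, so it is
# passed whole to every call (useful for coefficient lists and the like).
def poly(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

vpoly = np.vectorize(poly, excluded=['coeffs'])
print(vpoly(coeffs=[1, 2], x=[0, 1, 2]))   # [1 3 5]
```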
doc_27458
See torch.nonzero()
doc_27459
The address from which the last accepted connection came. If this is unavailable then it is None.
doc_27460
A single interactive example, consisting of a Python statement and its expected output. The constructor arguments are used to initialize the attributes of the same names. Example defines the following attributes. They are initialized by the constructor, and should not be modified directly. source A string containing the example’s source code. This source code consists of a single Python statement, and always ends with a newline; the constructor adds a newline when necessary. want The expected output from running the example’s source code (either from stdout, or a traceback in case of exception). want ends with a newline unless no output is expected, in which case it’s an empty string. The constructor adds a newline when necessary. exc_msg The exception message generated by the example, if the example is expected to generate an exception; or None if it is not expected to generate an exception. This exception message is compared against the return value of traceback.format_exception_only(). exc_msg ends with a newline unless it’s None. The constructor adds a newline if needed. lineno The line number within the string containing this example where the example begins. This line number is zero-based with respect to the beginning of the containing string. indent The example’s indentation in the containing string, i.e., the number of space characters that precede the example’s first prompt. options A dictionary mapping from option flags to True or False, which is used to override default options for this example. Any option flags not contained in this dictionary are left at their default value (as specified by the DocTestRunner’s optionflags). By default, no options are set.
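Example objects are normally produced by a DocTestParser rather than constructed by hand; a short sketch:

```python
import doctest

parser = doctest.DocTestParser()
examples = parser.get_examples("""
    >>> 2 + 2
    4
""")
ex = examples[0]
print(repr(ex.source))   # '2 + 2\n'  (always newline-terminated)
print(repr(ex.want))     # '4\n'
print(ex.indent)         # 4 (spaces before the example's first prompt)
print(ex.exc_msg)        # None (no exception expected)
```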
doc_27461
Run the test suite. test_labels allows you to specify which tests to run and supports several formats (see DiscoverRunner.build_suite() for a list of supported formats). Deprecated since version 4.0: extra_tests is a list of extra TestCase instances to add to the suite that is executed by the test runner. These extra tests are run in addition to those discovered in the modules listed in test_labels. This method should return the number of tests that failed.
doc_27462
Default behaviour adds OK and Cancel buttons. Override for custom button layouts.
doc_27463
Partially fit underlying estimators Should be used when memory is inefficient to train all data. Chunks of data can be passed in several iteration. Parameters X(sparse) array-like of shape (n_samples, n_features) Data. y(sparse) array-like of shape (n_samples,) or (n_samples, n_classes) Multi-class targets. An indicator matrix turns on multilabel classification. classesarray, shape (n_classes, ) Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is only required in the first call of partial_fit and can be omitted in the subsequent calls. Returns self
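A minimal sketch, assuming this is OneVsRestClassifier.partial_fit wrapping an incrementally trainable base estimator (the two-chunk split here is purely illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)
order = rng.permutation(len(X))      # shuffle so each chunk sees all classes
X, y = X[order], y[order]

clf = OneVsRestClassifier(SGDClassifier(random_state=0))
clf.partial_fit(X[:75], y[:75], classes=np.unique(y))  # classes required on first call
clf.partial_fit(X[75:], y[75:])                        # and may be omitted afterwards
print(clf.predict(X[:5]))
```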
doc_27464
class sklearn.feature_selection.SelectFpr(score_func=<function f_classif>, *, alpha=0.05) [source] Filter: Select the pvalues below alpha based on a FPR test. FPR test stands for False Positive Rate test. It controls the total amount of false detections. Read more in the User Guide. Parameters score_funccallable, default=f_classif Function taking two arrays X and y, and returning a pair of arrays (scores, pvalues). Default is f_classif (see below “See Also”). The default function only works with classification tasks. alphafloat, default=5e-2 The highest p-value for features to be kept. Attributes scores_array-like of shape (n_features,) Scores of features. pvalues_array-like of shape (n_features,) p-values of feature scores. See also f_classif ANOVA F-value between label/feature for classification tasks. chi2 Chi-squared stats of non-negative features for classification tasks. mutual_info_classif Mutual information for a discrete target. f_regression F-value between label/feature for regression tasks. mutual_info_regression Mutual information for a continuous target. SelectPercentile Select features based on percentile of the highest scores. SelectKBest Select features based on the k highest scores. SelectFdr Select features based on an estimated false discovery rate. SelectFwe Select features based on family-wise error rate. GenericUnivariateSelect Univariate feature selector with configurable mode. Examples >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.feature_selection import SelectFpr, chi2 >>> X, y = load_breast_cancer(return_X_y=True) >>> X.shape (569, 30) >>> X_new = SelectFpr(chi2, alpha=0.01).fit_transform(X, y) >>> X_new.shape (569, 16) Methods fit(X, y) Run score function on (X, y) and get the appropriate features. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
get_support([indices]) Get a mask, or integer index, of the features selected inverse_transform(X) Reverse the transformation operation set_params(**params) Set the parameters of this estimator. transform(X) Reduce X to the selected features. fit(X, y) [source] Run score function on (X, y) and get the appropriate features. Parameters Xarray-like of shape (n_samples, n_features) The training input samples. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression). Returns selfobject fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. get_support(indices=False) [source] Get a mask, or integer index, of the features selected Parameters indicesbool, default=False If True, the return value will be an array of integers, rather than a boolean mask. Returns supportarray An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector. 
inverse_transform(X) [source] Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Reduce X to the selected features. Parameters Xarray of shape [n_samples, n_features] The input samples. Returns X_rarray of shape [n_samples, n_selected_features] The input samples with only the selected features.
doc_27465
Return the product of the values over the requested axis. Parameters axis:{index (0), columns (1)} Axis for the function to be applied on. skipna:bool, default True Exclude NA/null values when computing the result. level:int or level name, default None If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series. numeric_only:bool, default None Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. min_count:int, default 0 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. **kwargs Additional keyword arguments to be passed to the function. Returns Series or DataFrame (if level specified) See also Series.sum Return the sum. Series.min Return the minimum. Series.max Return the maximum. Series.idxmin Return the index of the minimum. Series.idxmax Return the index of the maximum. DataFrame.sum Return the sum over the requested axis. DataFrame.min Return the minimum over the requested axis. DataFrame.max Return the maximum over the requested axis. DataFrame.idxmin Return the index of the minimum over the requested axis. DataFrame.idxmax Return the index of the maximum over the requested axis. Examples By default, the product of an empty or all-NA Series is 1 >>> pd.Series([], dtype="float64").prod() 1.0 This can be controlled with the min_count parameter >>> pd.Series([], dtype="float64").prod(min_count=1) nan Thanks to the skipna parameter, min_count handles all-NA and empty series identically. >>> pd.Series([np.nan]).prod() 1.0 >>> pd.Series([np.nan]).prod(min_count=1) nan
doc_27466
See Migration guide for more details. tf.compat.v1.raw_ops.Unique tf.raw_ops.Unique( x, out_idx=tf.dtypes.int32, name=None ) This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x; x does not need to be sorted. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words: y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1] Examples: # tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8] y, idx = unique(x) y ==> [1, 2, 4, 7, 8] idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4] # tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5] y, idx = unique(x) y ==> [4, 5, 1, 2, 3] idx ==> [0, 1, 2, 3, 4, 4, 0, 1] Args x A Tensor. 1-D. out_idx An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32. name A name for the operation (optional). Returns A tuple of Tensor objects (y, idx). y A Tensor. Has the same type as x. idx A Tensor of type out_idx.
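The semantics can be reproduced in plain Python (a sketch mirroring the first example above):

```python
def unique_with_idx(xs):
    """First-occurrence-order unique, like tf.raw_ops.Unique (plain-Python sketch)."""
    y, idx, seen = [], [], {}
    for v in xs:
        if v not in seen:
            seen[v] = len(y)   # position of v in the unique output
            y.append(v)
        idx.append(seen[v])
    return y, idx

y, idx = unique_with_idx([1, 1, 2, 4, 4, 4, 7, 8, 8])
print(y)     # [1, 2, 4, 7, 8]
print(idx)   # [0, 0, 1, 2, 2, 2, 3, 4, 4]
```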
doc_27467
Raised when some mailbox-related condition beyond the control of the program causes it to be unable to proceed, such as failing to acquire a lock that another program already holds, or when a uniquely-generated file name already exists.
doc_27468
Deprecated since version 3.9: Deprecated in favor of headers.
doc_27469
Return the Hanning window. The Hanning window is a taper formed by using a weighted cosine. Parameters Mint Number of points in the output window. If zero or less, an empty array is returned. Returns outndarray, shape(M,) The window, with the maximum value normalized to one (the value one appears only if M is odd). See also bartlett, blackman, hamming, kaiser Notes The Hanning window is defined as \[w(n) = 0.5 - 0.5cos\left(\frac{2\pi{n}}{M-1}\right) \qquad 0 \leq n \leq M-1\] The Hanning was named for Julius von Hann, an Austrian meteorologist. It is also known as the Cosine Bell. Some authors prefer that it be called a Hann window, to help avoid confusion with the very similar Hamming window. Most references to the Hanning window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. References 1 Blackman, R.B. and Tukey, J.W., (1958) The measurement of power spectra, Dover Publications, New York. 2 E.R. Kanasewich, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 106-108. 3 Wikipedia, “Window function”, https://en.wikipedia.org/wiki/Window_function 4 W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, “Numerical Recipes”, Cambridge University Press, 1986, page 425. Examples >>> np.hanning(12) array([0. , 0.07937323, 0.29229249, 0.57115742, 0.82743037, 0.97974649, 0.97974649, 0.82743037, 0.57115742, 0.29229249, 0.07937323, 0. 
]) Plot the window and its frequency response: >>> import matplotlib.pyplot as plt >>> from numpy.fft import fft, fftshift >>> window = np.hanning(51) >>> plt.plot(window) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Hann window") Text(0.5, 1.0, 'Hann window') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("Sample") Text(0.5, 0, 'Sample') >>> plt.show() >>> plt.figure() <Figure size 640x480 with 0 Axes> >>> A = fft(window, 2048) / 25.5 >>> mag = np.abs(fftshift(A)) >>> freq = np.linspace(-0.5, 0.5, len(A)) >>> with np.errstate(divide='ignore', invalid='ignore'): ... response = 20 * np.log10(mag) ... >>> response = np.clip(response, -100, 100) >>> plt.plot(freq, response) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Frequency response of the Hann window") Text(0.5, 1.0, 'Frequency response of the Hann window') >>> plt.ylabel("Magnitude [dB]") Text(0, 0.5, 'Magnitude [dB]') >>> plt.xlabel("Normalized frequency [cycles per sample]") Text(0.5, 0, 'Normalized frequency [cycles per sample]') >>> plt.axis('tight') ... >>> plt.show()
doc_27470
Derived from BrokenExecutor (formerly RuntimeError), this exception class is raised when one of the workers of a ProcessPoolExecutor has terminated in a non-clean fashion (for example, if it was killed from the outside). New in version 3.3.
doc_27471
Alias for get_linestyle.
doc_27472
Same as feature_ahead(), but if two features imply each other, keep only the one of higher interest. Parameters ‘names’: sequence sequence of CPU feature names in uppercase. Returns list of CPU features in the same order as ‘names’ Examples >>> self.feature_untied(["SSE2", "SSE3", "SSE41"]) ["SSE2", "SSE3", "SSE41"] # assume AVX2 and FMA3 imply each other >>> self.feature_untied(["SSE2", "SSE3", "SSE41", "FMA3", "AVX2"]) ["SSE2", "SSE3", "SSE41", "AVX2"]
doc_27473
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters module (nn.Module) – module containing the tensor to prune name (str) – parameter name within module on which pruning will act. amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represents the absolute number of parameters to prune. dim (int, optional) – index of the dim along which we define channels to prune. Default: -1.
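The signature above matches torch.nn.utils.prune.random_structured; assuming that function, a small sketch:

```python
import torch
import torch.nn.utils.prune as prune

m = torch.nn.Linear(in_features=4, out_features=3)
prune.random_structured(m, name="weight", amount=1, dim=0)  # prune 1 of 3 rows

# The forward pre-hook reparametrizes `weight` as weight_orig * weight_mask.
print(hasattr(m, "weight_orig"), hasattr(m, "weight_mask"))  # True True
pruned_rows = int((m.weight_mask.sum(dim=1) == 0).sum())
print(pruned_rows)                                           # 1
```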
doc_27474
Open a gdbm database and return a gdbm object. The filename argument is the name of the database file. The optional flag argument can be: Value Meaning 'r' Open existing database for reading only (default) 'w' Open existing database for reading and writing 'c' Open database for reading and writing, creating it if it doesn’t exist 'n' Always create a new, empty database, open for reading and writing The following additional characters may be appended to the flag to control how the database is opened: Value Meaning 'f' Open the database in fast mode. Writes to the database will not be synchronized. 's' Synchronized mode. This will cause changes to the database to be immediately written to the file. 'u' Do not lock database. Not all flags are valid for all versions of gdbm. The module constant open_flags is a string of supported flag characters. The exception error is raised if an invalid flag is specified. The optional mode argument is the Unix mode of the file, used only when the database has to be created. It defaults to octal 0o666. In addition to the dictionary-like methods, gdbm objects have the following methods: gdbm.firstkey() It’s possible to loop over every key in the database using this method and the nextkey() method. The traversal is ordered by gdbm’s internal hash values, and won’t be sorted by the key values. This method returns the starting key. gdbm.nextkey(key) Returns the key that follows key in the traversal. The following code prints every key in the database db, without having to create a list in memory that contains them all: k = db.firstkey() while k is not None: print(k) k = db.nextkey(k) gdbm.reorganize() If you have carried out a lot of deletions and would like to shrink the space used by the gdbm file, this routine will reorganize the database. gdbm objects will not shorten the length of a database file except by using this reorganization; otherwise, deleted file space will be kept and reused as new (key, value) pairs are added. 
gdbm.sync() When the database has been opened in fast mode, this method forces any unwritten data to be written to the disk. gdbm.close() Close the gdbm database.
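In Python 3 this interface is provided by dbm.gnu. The dictionary-style core is shared by all dbm backends, so here is a portable sketch using the always-available dbm.dumb (which lacks the gdbm-only firstkey/nextkey/reorganize extras):

```python
import dbm.dumb
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo")
db = dbm.dumb.open(path, "c")     # 'c': open read/write, create if missing
db[b"spam"] = b"eggs"
db[b"ham"] = b"toast"
keys = sorted(db.keys())
print(keys)                       # [b'ham', b'spam']
print(db[b"spam"])                # b'eggs'
db.close()
```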
doc_27475
Alias for set_linewidth.
doc_27476
Token objects are returned by the ContextVar.set() method. They can be passed to the ContextVar.reset() method to revert the value of the variable to what it was before the corresponding set. Token.var A read-only property. Points to the ContextVar object that created the token. Token.old_value A read-only property. Set to the value the variable had before the ContextVar.set() method call that created the token. It points to Token.MISSING if the variable was not set before the call. Token.MISSING A marker object used by Token.old_value.
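A short sketch of the set/reset round trip:

```python
import contextvars

var = contextvars.ContextVar("var", default="initial")
token = var.set("updated")

print(token.var is var)                              # True
print(token.old_value is contextvars.Token.MISSING)  # True: set() was never called before
var.reset(token)                                     # revert to the pre-set state
print(var.get())                                     # initial (the default applies again)
```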
doc_27477
See Migration guide for more details. tf.compat.v1.raw_ops.TensorArrayConcatV3 tf.raw_ops.TensorArrayConcatV3( handle, flow_in, dtype, element_shape_except0=None, name=None ) Takes T elements of shapes (n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...) and concatenates them into a Tensor of shape: (n0 + n1 + ... + n(T-1) x d0 x d1 x ...) All elements must have the same shape (excepting the first dimension). Args handle A Tensor of type resource. The handle to a TensorArray. flow_in A Tensor of type float32. A float scalar that enforces proper chaining of operations. dtype A tf.DType. The type of the elem that is returned. element_shape_except0 An optional tf.TensorShape or list of ints. Defaults to None. The expected shape of an element, if known, excluding the first dimension. Used to validate the shapes of TensorArray elements. If this shape is not fully specified, concatenating zero-size TensorArrays is an error. name A name for the operation (optional). Returns A tuple of Tensor objects (value, lengths). value A Tensor of type dtype. lengths A Tensor of type int64.
doc_27478
REST_FRAMEWORK = {
'DEFAULT_RENDERER_CLASSES': [ 'rest_framework.renderers.JSONRenderer', ], 'DEFAULT_PARSER_CLASSES': [ 'rest_framework.parsers.JSONParser', ] } Accessing settings If you need to access the values of REST framework's API settings in your project, you should use the api_settings object. For example. from rest_framework.settings import api_settings print(api_settings.DEFAULT_AUTHENTICATION_CLASSES) The api_settings object will check for any user-defined settings, and otherwise fall back to the default values. Any setting that uses string import paths to refer to a class will automatically import and return the referenced class, instead of the string literal. API Reference API policy settings The following settings control the basic API policies, and are applied to every APIView class-based view, or @api_view function based view. DEFAULT_RENDERER_CLASSES A list or tuple of renderer classes, that determines the default set of renderers that may be used when returning a Response object. Default: [ 'rest_framework.renderers.JSONRenderer', 'rest_framework.renderers.BrowsableAPIRenderer', ] DEFAULT_PARSER_CLASSES A list or tuple of parser classes, that determines the default set of parsers used when accessing the request.data property. Default: [ 'rest_framework.parsers.JSONParser', 'rest_framework.parsers.FormParser', 'rest_framework.parsers.MultiPartParser' ] DEFAULT_AUTHENTICATION_CLASSES A list or tuple of authentication classes, that determines the default set of authenticators used when accessing the request.user or request.auth properties. Default: [ 'rest_framework.authentication.SessionAuthentication', 'rest_framework.authentication.BasicAuthentication' ] DEFAULT_PERMISSION_CLASSES A list or tuple of permission classes, that determines the default set of permissions checked at the start of a view. Permission must be granted by every class in the list. 
Default: [ 'rest_framework.permissions.AllowAny', ] DEFAULT_THROTTLE_CLASSES A list or tuple of throttle classes, that determines the default set of throttles checked at the start of a view. Default: [] DEFAULT_CONTENT_NEGOTIATION_CLASS A content negotiation class, that determines how a renderer is selected for the response, given an incoming request. Default: 'rest_framework.negotiation.DefaultContentNegotiation' DEFAULT_SCHEMA_CLASS A view inspector class that will be used for schema generation. Default: 'rest_framework.schemas.openapi.AutoSchema' Generic view settings The following settings control the behavior of the generic class-based views. DEFAULT_FILTER_BACKENDS A list of filter backend classes that should be used for generic filtering. If set to None then generic filtering is disabled. DEFAULT_PAGINATION_CLASS The default class to use for queryset pagination. If set to None, pagination is disabled by default. See the pagination documentation for further guidance on setting and modifying the pagination style. Default: None PAGE_SIZE The default page size to use for pagination. If set to None, pagination is disabled by default. Default: None SEARCH_PARAM The name of a query parameter, which can be used to specify the search term used by SearchFilter. Default: search ORDERING_PARAM The name of a query parameter, which can be used to specify the ordering of results returned by OrderingFilter. Default: ordering Versioning settings DEFAULT_VERSION The value that should be used for request.version when no versioning information is present. Default: None ALLOWED_VERSIONS If set, this value will restrict the set of versions that may be returned by the versioning scheme, and will raise an error if the provided version is not in this set. Default: None VERSION_PARAM The string that should be used for any versioning parameters, such as in the media type or URL query parameters. 
Default: 'version' Authentication settings The following settings control the behavior of unauthenticated requests. UNAUTHENTICATED_USER The class that should be used to initialize request.user for unauthenticated requests. (If removing authentication entirely, e.g. by removing django.contrib.auth from INSTALLED_APPS, set UNAUTHENTICATED_USER to None.) Default: django.contrib.auth.models.AnonymousUser UNAUTHENTICATED_TOKEN The class that should be used to initialize request.auth for unauthenticated requests. Default: None Test settings The following settings control the behavior of APIRequestFactory and APIClient TEST_REQUEST_DEFAULT_FORMAT The default format that should be used when making test requests. This should match up with the format of one of the renderer classes in the TEST_REQUEST_RENDERER_CLASSES setting. Default: 'multipart' TEST_REQUEST_RENDERER_CLASSES The renderer classes that are supported when building test requests. The format of any of these renderer classes may be used when constructing a test request, for example: client.post('/users', {'username': 'jamie'}, format='json') Default: [ 'rest_framework.renderers.MultiPartRenderer', 'rest_framework.renderers.JSONRenderer' ] Schema generation controls SCHEMA_COERCE_PATH_PK If set, this maps the 'pk' identifier in the URL conf onto the actual field name when generating a schema path parameter. Typically this will be 'id'. This gives a more suitable representation as "primary key" is an implementation detail, whereas "identifier" is a more general concept. Default: True SCHEMA_COERCE_METHOD_NAMES If set, this is used to map internal viewset method names onto external action names used in the schema generation. This allows us to generate names that are more suitable for an external representation than those that are used internally in the codebase. 
Default: {'retrieve': 'read', 'destroy': 'delete'} Content type controls URL_FORMAT_OVERRIDE The name of a URL parameter that may be used to override the default content negotiation Accept header behavior, by using a format=… query parameter in the request URL. For example: http://example.com/organizations/?format=csv If the value of this setting is None then URL format overrides will be disabled. Default: 'format' FORMAT_SUFFIX_KWARG The name of a parameter in the URL conf that may be used to provide a format suffix. This setting is applied when using format_suffix_patterns to include suffixed URL patterns. For example: http://example.com/organizations.csv/ Default: 'format' Date and time formatting The following settings are used to control how date and time representations may be parsed and rendered. DATETIME_FORMAT A format string that should be used by default for rendering the output of DateTimeField serializer fields. If None, then DateTimeField serializer fields will return Python datetime objects, and the datetime encoding will be determined by the renderer. May be any of None, 'iso-8601' or a Python strftime format string. Default: 'iso-8601' DATETIME_INPUT_FORMATS A list of format strings that should be used by default for parsing inputs to DateTimeField serializer fields. May be a list including the string 'iso-8601' or Python strftime format strings. Default: ['iso-8601'] DATE_FORMAT A format string that should be used by default for rendering the output of DateField serializer fields. If None, then DateField serializer fields will return Python date objects, and the date encoding will be determined by the renderer. May be any of None, 'iso-8601' or a Python strftime format string. Default: 'iso-8601' DATE_INPUT_FORMATS A list of format strings that should be used by default for parsing inputs to DateField serializer fields. May be a list including the string 'iso-8601' or Python strftime format strings. 
Default: ['iso-8601'] TIME_FORMAT A format string that should be used by default for rendering the output of TimeField serializer fields. If None, then TimeField serializer fields will return Python time objects, and the time encoding will be determined by the renderer. May be any of None, 'iso-8601' or a Python strftime format string. Default: 'iso-8601' TIME_INPUT_FORMATS A list of format strings that should be used by default for parsing inputs to TimeField serializer fields. May be a list including the string 'iso-8601' or Python strftime format strings. Default: ['iso-8601'] Encodings UNICODE_JSON When set to True, JSON responses will allow unicode characters in responses. For example: {"unicode black star":"★"} When set to False, JSON responses will escape non-ascii characters, like so: {"unicode black star":"\u2605"} Both styles conform to RFC 4627, and are syntactically valid JSON. The unicode style is preferred as being more user-friendly when inspecting API responses. Default: True COMPACT_JSON When set to True, JSON responses will return compact representations, with no spacing after ':' and ',' characters. For example: {"is_admin":false,"email":"jane@example"} When set to False, JSON responses will return slightly more verbose representations, like so: {"is_admin": false, "email": "jane@example"} The default style is to return minified responses, in line with Heroku's API design guidelines. Default: True STRICT_JSON When set to True, JSON rendering and parsing will only observe syntactically valid JSON, raising an exception for the extended float values (nan, inf, -inf) accepted by Python's json module. This is the recommended setting, as these values are not generally supported. e.g., neither JavaScript's JSON.parse nor PostgreSQL's JSON data type accept these values. When set to False, JSON rendering and parsing will be permissive. However, these values are still invalid and will need to be specially handled in your code. 
Default: True COERCE_DECIMAL_TO_STRING When returning decimal objects in API representations that do not support a native decimal type, it is normally best to return the value as a string. This avoids the loss of precision that occurs with binary floating point implementations. When set to True, the serializer DecimalField class will return strings instead of Decimal objects. When set to False, serializers will return Decimal objects, which the default JSON encoder will return as floats. Default: True View names and descriptions The following settings are used to generate the view names and descriptions, as used in responses to OPTIONS requests, and as used in the browsable API. VIEW_NAME_FUNCTION A string representing the function that should be used when generating view names. This should be a function with the following signature: view_name(self) self: The view instance. Typically the name function would inspect the name of the class when generating a descriptive name, by accessing self.__class__.__name__. If the view instance inherits ViewSet, it may have been initialized with several optional arguments: name: A name explicitly provided to a view in the viewset. Typically, this value should be used as-is when provided. suffix: Text used when differentiating individual views in a viewset. This argument is mutually exclusive to name. detail: Boolean that differentiates an individual view in a viewset as either being a 'list' or 'detail' view. Default: 'rest_framework.views.get_view_name' VIEW_DESCRIPTION_FUNCTION A string representing the function that should be used when generating view descriptions. This setting can be changed to support markup styles other than the default markdown. For example, you can use it to support rst markup in your view docstrings being output in the browsable API. This should be a function with the following signature: view_description(self, html=False) self: The view instance. 
Typically the description function would inspect the docstring of the class when generating a description, by accessing self.__class__.__doc__ html: A boolean indicating if HTML output is required. True when used in the browsable API, and False when used in generating OPTIONS responses. If the view instance inherits ViewSet, it may have been initialized with several optional arguments: description: A description explicitly provided to the view in the viewset. Typically, this is set by extra viewset actions, and should be used as-is. Default: 'rest_framework.views.get_view_description' HTML Select Field cutoffs Global settings for select field cutoffs for rendering relational fields in the browsable API. HTML_SELECT_CUTOFF Global setting for the html_cutoff value. Must be an integer. Default: 1000 HTML_SELECT_CUTOFF_TEXT A string representing a global setting for html_cutoff_text. Default: "More than {count} items..." Miscellaneous settings EXCEPTION_HANDLER A string representing the function that should be used when returning a response for any given exception. If the function returns None, a 500 error will be raised. This setting can be changed to support error responses other than the default {"detail": "Failure..."} responses. For example, you can use it to provide API responses like {"errors": [{"message": "Failure...", "code": ""} ...]}. This should be a function with the following signature: exception_handler(exc, context) exc: The exception. Default: 'rest_framework.views.exception_handler' NON_FIELD_ERRORS_KEY A string representing the key that should be used for serializer errors that do not refer to a specific field, but are instead general errors. Default: 'non_field_errors' URL_FIELD_NAME A string representing the key that should be used for the URL fields generated by HyperlinkedModelSerializer. Default: 'url' NUM_PROXIES An integer of 0 or more, that may be used to specify the number of application proxies that the API runs behind. 
This allows throttling to more accurately identify client IP addresses. If set to None then less strict IP matching will be used by the throttle classes. Default: None
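Pulling several of the settings above together, a project-level configuration might look like the following. The values shown are illustrative choices, not the defaults:

```python
# settings.py -- illustrative values, not the framework defaults
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
    ],
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
    'PAGE_SIZE': 25,
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
    ],
    'DATETIME_FORMAT': '%Y-%m-%d %H:%M',
    'NON_FIELD_ERRORS_KEY': 'errors',
}
```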
Alias for set_facecolor.
Bases: torch.distributions.distribution.Distribution

Creates a Binomial distribution parameterized by total_count and either probs or logits (but not both). total_count must be broadcastable with probs/logits.

Example:

>>> m = Binomial(100, torch.tensor([0., .2, .8, 1.]))
>>> x = m.sample()
tensor([  0.,  22.,  71., 100.])

>>> m = Binomial(torch.tensor([[5.], [10.]]), torch.tensor([0.5, 0.8]))
>>> x = m.sample()
tensor([[ 4.,  5.],
        [ 7.,  6.]])

Parameters
total_count (int or Tensor) – number of Bernoulli trials
probs (Tensor) – Event probabilities
logits (Tensor) – Event log-odds

arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0), 'total_count': IntegerGreaterThan(lower_bound=0)}
enumerate_support(expand=True)
expand(batch_shape, _instance=None)
has_enumerate_support = True
log_prob(value)
logits
property mean
property param_shape
probs
sample(sample_shape=torch.Size([]))
property support
property variance
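Beyond sampling, the distribution exposes its moments and log-probabilities. A short sketch (the batch values here are illustrative):

```python
import torch
from torch.distributions import Binomial

m = Binomial(total_count=10, probs=torch.tensor([0.25, 0.5, 0.75]))
print(m.mean)       # total_count * probs -> tensor([2.5000, 5.0000, 7.5000])
print(m.variance)   # total_count * probs * (1 - probs)

# log P(X = k), evaluated elementwise against the batch of probs
lp = m.log_prob(torch.tensor([2.0, 5.0, 8.0]))
```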
Rotate the deque n steps to the right. If n is negative, rotate to the left. When the deque is not empty, rotating one step to the right is equivalent to d.appendleft(d.pop()), and rotating one step to the left is equivalent to d.append(d.popleft()).
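A quick runnable illustration of both directions, and of the appendleft/pop equivalence noted above:

```python
from collections import deque

d = deque([1, 2, 3, 4, 5])
d.rotate(2)       # right rotation: the last two items wrap around to the front
print(d)          # deque([4, 5, 1, 2, 3])
d.rotate(-2)      # left rotation undoes it
print(d)          # deque([1, 2, 3, 4, 5])

# Rotating one step to the right is equivalent to d.appendleft(d.pop())
e = deque([1, 2, 3])
e.appendleft(e.pop())
print(e)          # deque([3, 1, 2])
```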
string.ascii_letters The concatenation of the ascii_lowercase and ascii_uppercase constants described below. This value is not locale-dependent. string.ascii_lowercase The lowercase letters 'abcdefghijklmnopqrstuvwxyz'. This value is not locale-dependent and will not change. string.ascii_uppercase The uppercase letters 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'. This value is not locale-dependent and will not change. string.digits The string '0123456789'. string.hexdigits The string '0123456789abcdefABCDEF'. string.octdigits The string '01234567'. string.punctuation String of ASCII characters which are considered punctuation characters in the C locale: !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~. string.printable String of ASCII characters which are considered printable. This is a combination of digits, ascii_letters, punctuation, and whitespace. string.whitespace A string containing all ASCII characters that are considered whitespace. This includes the characters space, tab, linefeed, return, formfeed, and vertical tab. Custom String Formatting The built-in string class provides the ability to do complex variable substitutions and value formatting via the format() method described in PEP 3101. The Formatter class in the string module allows you to create and customize your own string formatting behaviors using the same implementation as the built-in format() method. class string.Formatter The Formatter class has the following public methods: format(format_string, /, *args, **kwargs) The primary API method. It takes a format string and an arbitrary set of positional and keyword arguments. It is just a wrapper that calls vformat(). Changed in version 3.7: A format string argument is now positional-only. vformat(format_string, args, kwargs) This function does the actual work of formatting. 
It is exposed as a separate function for cases where you want to pass in a predefined dictionary of arguments, rather than unpacking and repacking the dictionary as individual arguments using the *args and **kwargs syntax. vformat() does the work of breaking up the format string into character data and replacement fields. It calls the various methods described below. In addition, the Formatter defines a number of methods that are intended to be replaced by subclasses: parse(format_string) Loop over the format_string and return an iterable of tuples (literal_text, field_name, format_spec, conversion). This is used by vformat() to break the string into either literal text, or replacement fields. The values in the tuple conceptually represent a span of literal text followed by a single replacement field. If there is no literal text (which can happen if two replacement fields occur consecutively), then literal_text will be a zero-length string. If there is no replacement field, then the values of field_name, format_spec and conversion will be None. get_field(field_name, args, kwargs) Given field_name as returned by parse() (see above), convert it to an object to be formatted. Returns a tuple (obj, used_key). The default version takes strings of the form defined in PEP 3101, such as “0[name]” or “label.title”. args and kwargs are as passed in to vformat(). The return value used_key has the same meaning as the key parameter to get_value(). get_value(key, args, kwargs) Retrieve a given field value. The key argument will be either an integer or a string. If it is an integer, it represents the index of the positional argument in args; if it is a string, then it represents a named argument in kwargs. The args parameter is set to the list of positional arguments to vformat(), and the kwargs parameter is set to the dictionary of keyword arguments. 
For compound field names, these functions are only called for the first component of the field name; subsequent components are handled through normal attribute and indexing operations. So for example, the field expression ‘0.name’ would cause get_value() to be called with a key argument of 0. The name attribute will be looked up after get_value() returns by calling the built-in getattr() function. If the index or keyword refers to an item that does not exist, then an IndexError or KeyError should be raised. check_unused_args(used_args, args, kwargs) Implement checking for unused arguments if desired. The arguments to this function are the set of all argument keys that were actually referred to in the format string (integers for positional arguments, and strings for named arguments), and a reference to the args and kwargs that were passed to vformat. The set of unused args can be calculated from these parameters. check_unused_args() is assumed to raise an exception if the check fails. format_field(value, format_spec) format_field() simply calls the global format() built-in. The method is provided so that subclasses can override it. convert_field(value, conversion) Converts the value (returned by get_field()) given a conversion type (as in the tuple returned by the parse() method). The default version understands ‘s’ (str), ‘r’ (repr) and ‘a’ (ascii) conversion types. Format String Syntax The str.format() method and the Formatter class share the same syntax for format strings (although in the case of Formatter, subclasses can define their own format string syntax). The syntax is related to that of formatted string literals, but it is less sophisticated and, in particular, does not support arbitrary expressions. Format strings contain “replacement fields” surrounded by curly braces {}. Anything that is not contained in braces is considered literal text, which is copied unchanged to the output. 
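As a concrete illustration of the overridable Formatter methods described above, the following sketch (a hypothetical DefaultFormatter, not part of the module) overrides get_value() so that missing keyword arguments render as a placeholder instead of raising KeyError:

```python
from string import Formatter

class DefaultFormatter(Formatter):
    """Substitute a placeholder for keyword arguments that were not supplied."""
    def get_value(self, key, args, kwargs):
        # String keys name keyword arguments; fall back to a placeholder
        # when the key is absent instead of letting KeyError propagate.
        if isinstance(key, str) and key not in kwargs:
            return '<missing>'
        return super().get_value(key, args, kwargs)

f = DefaultFormatter()
print(f.format('Hello {name}, your role is {role}', name='Ada'))
# Hello Ada, your role is <missing>
```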
If you need to include a brace character in the literal text, it can be escaped by doubling: {{ and }}. The grammar for a replacement field is as follows: replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}" field_name ::= arg_name ("." attribute_name | "[" element_index "]")* arg_name ::= [identifier | digit+] attribute_name ::= identifier element_index ::= digit+ | index_string index_string ::= <any source character except "]"> + conversion ::= "r" | "s" | "a" format_spec ::= <described in the next section> In less formal terms, the replacement field can start with a field_name that specifies the object whose value is to be formatted and inserted into the output instead of the replacement field. The field_name is optionally followed by a conversion field, which is preceded by an exclamation point '!', and a format_spec, which is preceded by a colon ':'. These specify a non-default format for the replacement value. See also the Format Specification Mini-Language section. The field_name itself begins with an arg_name that is either a number or a keyword. If it’s a number, it refers to a positional argument, and if it’s a keyword, it refers to a named keyword argument. If the numerical arg_names in a format string are 0, 1, 2, … in sequence, they can all be omitted (not just some) and the numbers 0, 1, 2, … will be automatically inserted in that order. Because arg_name is not quote-delimited, it is not possible to specify arbitrary dictionary keys (e.g., the strings '10' or ':-]') within a format string. The arg_name can be followed by any number of index or attribute expressions. An expression of the form '.name' selects the named attribute using getattr(), while an expression of the form '[index]' does an index lookup using __getitem__(). Changed in version 3.1: The positional argument specifiers can be omitted for str.format(), so '{} {}'.format(a, b) is equivalent to '{0} {1}'.format(a, b). 
Changed in version 3.4: The positional argument specifiers can be omitted for Formatter. Some simple format string examples: "First, thou shalt count to {0}" # References first positional argument "Bring me a {}" # Implicitly references the first positional argument "From {} to {}" # Same as "From {0} to {1}" "My quest is {name}" # References keyword argument 'name' "Weight in tons {0.weight}" # 'weight' attribute of first positional arg "Units destroyed: {players[0]}" # First element of keyword argument 'players'. The conversion field causes a type coercion before formatting. Normally, the job of formatting a value is done by the __format__() method of the value itself. However, in some cases it is desirable to force a type to be formatted as a string, overriding its own definition of formatting. By converting the value to a string before calling __format__(), the normal formatting logic is bypassed. Three conversion flags are currently supported: '!s' which calls str() on the value, '!r' which calls repr() and '!a' which calls ascii(). Some examples: "Harold's a clever {0!s}" # Calls str() on the argument first "Bring out the holy {name!r}" # Calls repr() on the argument first "More {!a}" # Calls ascii() on the argument first The format_spec field contains a specification of how the value should be presented, including such details as field width, alignment, padding, decimal precision and so on. Each value type can define its own “formatting mini-language” or interpretation of the format_spec. Most built-in types support a common formatting mini-language, which is described in the next section. A format_spec field can also include nested replacement fields within it. These nested replacement fields may contain a field name, conversion flag and format specification, but deeper nesting is not allowed. The replacement fields within the format_spec are substituted before the format_spec string is interpreted. 
This allows the formatting of a value to be dynamically specified. See the Format examples section for some examples. Format Specification Mini-Language “Format specifications” are used within replacement fields contained within a format string to define how individual values are presented (see Format String Syntax and Formatted string literals). They can also be passed directly to the built-in format() function. Each formattable type may define how the format specification is to be interpreted. Most built-in types implement the following options for format specifications, although some of the formatting options are only supported by the numeric types. A general convention is that an empty format specification produces the same result as if you had called str() on the value. A non-empty format specification typically modifies the result. The general form of a standard format specifier is: format_spec ::= [[fill]align][sign][#][0][width][grouping_option][.precision][type] fill ::= <any character> align ::= "<" | ">" | "=" | "^" sign ::= "+" | "-" | " " width ::= digit+ grouping_option ::= "_" | "," precision ::= digit+ type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%" If a valid align value is specified, it can be preceded by a fill character that can be any character and defaults to a space if omitted. It is not possible to use a literal curly brace (“{” or “}”) as the fill character in a formatted string literal or when using the str.format() method. However, it is possible to insert a curly brace with a nested replacement field. This limitation doesn’t affect the format() function. The meaning of the various alignment options is as follows: Option Meaning '<' Forces the field to be left-aligned within the available space (this is the default for most objects). '>' Forces the field to be right-aligned within the available space (this is the default for numbers). 
'=' Forces the padding to be placed after the sign (if any) but before the digits. This is used for printing fields in the form ‘+000000120’. This alignment option is only valid for numeric types. It becomes the default when ‘0’ immediately precedes the field width. '^' Forces the field to be centered within the available space. Note that unless a minimum field width is defined, the field width will always be the same size as the data to fill it, so that the alignment option has no meaning in this case. The sign option is only valid for number types, and can be one of the following: Option Meaning '+' indicates that a sign should be used for both positive as well as negative numbers. '-' indicates that a sign should be used only for negative numbers (this is the default behavior). space indicates that a leading space should be used on positive numbers, and a minus sign on negative numbers. The '#' option causes the “alternate form” to be used for the conversion. The alternate form is defined differently for different types. This option is only valid for integer, float and complex types. For integers, when binary, octal, or hexadecimal output is used, this option adds the prefix respective '0b', '0o', or '0x' to the output value. For float and complex the alternate form causes the result of the conversion to always contain a decimal-point character, even if no digits follow it. Normally, a decimal-point character appears in the result of these conversions only if a digit follows it. In addition, for 'g' and 'G' conversions, trailing zeros are not removed from the result. The ',' option signals the use of a comma for a thousands separator. For a locale aware separator, use the 'n' integer presentation type instead. Changed in version 3.1: Added the ',' option (see also PEP 378). The '_' option signals the use of an underscore for a thousands separator for floating point presentation types and for integer presentation type 'd'. 
For integer presentation types 'b', 'o', 'x', and 'X', underscores will be inserted every 4 digits. For other presentation types, specifying this option is an error. Changed in version 3.6: Added the '_' option (see also PEP 515). width is a decimal integer defining the minimum total field width, including any prefixes, separators, and other formatting characters. If not specified, then the field width will be determined by the content. When no explicit alignment is given, preceding the width field by a zero ('0') character enables sign-aware zero-padding for numeric types. This is equivalent to a fill character of '0' with an alignment type of '='. The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating point value formatted with 'f' and 'F', or before and after the decimal point for a floating point value formatted with 'g' or 'G'. For non-number types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The precision is not allowed for integer values. Finally, the type determines how the data should be presented. The available string presentation types are: Type Meaning 's' String format. This is the default type for strings and may be omitted. None The same as 's'. The available integer presentation types are: Type Meaning 'b' Binary format. Outputs the number in base 2. 'c' Character. Converts the integer to the corresponding unicode character before printing. 'd' Decimal Integer. Outputs the number in base 10. 'o' Octal format. Outputs the number in base 8. 'x' Hex format. Outputs the number in base 16, using lower-case letters for the digits above 9. 'X' Hex format. Outputs the number in base 16, using upper-case letters for the digits above 9. 'n' Number. This is the same as 'd', except that it uses the current locale setting to insert the appropriate number separator characters. None The same as 'd'. 
In addition to the above presentation types, integers can be formatted with the floating point presentation types listed below (except 'n' and None). When doing so, float() is used to convert the integer to a floating point number before formatting. The available presentation types for float and Decimal values are: Type Meaning 'e' Scientific notation. For a given precision p, formats the number in scientific notation with the letter ‘e’ separating the coefficient from the exponent. The coefficient has one digit before and p digits after the decimal point, for a total of p + 1 significant digits. With no precision given, uses a precision of 6 digits after the decimal point for float, and shows all coefficient digits for Decimal. If no digits follow the decimal point, the decimal point is also removed unless the # option is used. 'E' Scientific notation. Same as 'e' except it uses an upper case ‘E’ as the separator character. 'f' Fixed-point notation. For a given precision p, formats the number as a decimal number with exactly p digits following the decimal point. With no precision given, uses a precision of 6 digits after the decimal point for float, and uses a precision large enough to show all coefficient digits for Decimal. If no digits follow the decimal point, the decimal point is also removed unless the # option is used. 'F' Fixed-point notation. Same as 'f', but converts nan to NAN and inf to INF. 'g' General format. For a given precision p >= 1, this rounds the number to p significant digits and then formats the result in either fixed-point format or in scientific notation, depending on its magnitude. A precision of 0 is treated as equivalent to a precision of 1. The precise rules are as follows: suppose that the result formatted with presentation type 'e' and precision p-1 would have exponent exp. Then, if m <= exp < p, where m is -4 for floats and -6 for Decimals, the number is formatted with presentation type 'f' and precision p-1-exp. 
Otherwise, the number is formatted with presentation type 'e' and precision p-1. In both cases insignificant trailing zeros are removed from the significand, and the decimal point is also removed if there are no remaining digits following it, unless the '#' option is used. With no precision given, uses a precision of 6 significant digits for float. For Decimal, the coefficient of the result is formed from the coefficient digits of the value; scientific notation is used for values smaller than 1e-6 in absolute value and values where the place value of the least significant digit is larger than 1, and fixed-point notation is used otherwise. Positive and negative infinity, positive and negative zero, and nans, are formatted as inf, -inf, 0, -0 and nan respectively, regardless of the precision. 'G' General format. Same as 'g' except switches to 'E' if the number gets too large. The representations of infinity and NaN are uppercased, too. 'n' Number. This is the same as 'g', except that it uses the current locale setting to insert the appropriate number separator characters. '%' Percentage. Multiplies the number by 100 and displays in fixed ('f') format, followed by a percent sign. None For float this is the same as 'g', except that when fixed-point notation is used to format the result, it always includes at least one digit past the decimal point. The precision used is as large as needed to represent the given value faithfully. For Decimal, this is the same as either 'g' or 'G' depending on the value of context.capitals for the current decimal context. The overall effect is to match the output of str() as altered by the other format modifiers. Format examples This section contains examples of the str.format() syntax and comparison with the old %-formatting. In most of the cases the syntax is similar to the old %-formatting, with the addition of the {} and with : used instead of %. For example, '%03.2f' can be translated to '{:03.2f}'. 
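The translation mentioned above can be checked directly:

```python
value = 3.14159
old_style = '%03.2f' % value          # old %-formatting
new_style = '{:03.2f}'.format(value)  # equivalent str.format() spec
print(old_style, new_style)           # 3.14 3.14
```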
The new format syntax also supports new and different options, shown in the following examples.

Accessing arguments by position:

>>> '{0}, {1}, {2}'.format('a', 'b', 'c')
'a, b, c'
>>> '{}, {}, {}'.format('a', 'b', 'c')  # 3.1+ only
'a, b, c'
>>> '{2}, {1}, {0}'.format('a', 'b', 'c')
'c, b, a'
>>> '{2}, {1}, {0}'.format(*'abc')      # unpacking argument sequence
'c, b, a'
>>> '{0}{1}{0}'.format('abra', 'cad')   # arguments' indices can be repeated
'abracadabra'

Accessing arguments by name:

>>> 'Coordinates: {latitude}, {longitude}'.format(latitude='37.24N', longitude='-115.81W')
'Coordinates: 37.24N, -115.81W'
>>> coord = {'latitude': '37.24N', 'longitude': '-115.81W'}
>>> 'Coordinates: {latitude}, {longitude}'.format(**coord)
'Coordinates: 37.24N, -115.81W'

Accessing arguments’ attributes:

>>> c = 3-5j
>>> ('The complex number {0} is formed from the real part {0.real} '
...  'and the imaginary part {0.imag}.').format(c)
'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.'
>>> class Point:
...     def __init__(self, x, y):
...         self.x, self.y = x, y
...     def __str__(self):
...         return 'Point({self.x}, {self.y})'.format(self=self)
...
>>> str(Point(4, 2))
'Point(4, 2)'

Accessing arguments’ items:

>>> coord = (3, 5)
>>> 'X: {0[0]}; Y: {0[1]}'.format(coord)
'X: 3; Y: 5'

Replacing %s and %r:

>>> "repr() shows quotes: {!r}; str() doesn't: {!s}".format('test1', 'test2')
"repr() shows quotes: 'test1'; str() doesn't: test2"

Aligning the text and specifying a width:

>>> '{:<30}'.format('left aligned')
'left aligned                  '
>>> '{:>30}'.format('right aligned')
'                 right aligned'
>>> '{:^30}'.format('centered')
'           centered           '
>>> '{:*^30}'.format('centered')  # use '*' as a fill char
'***********centered***********'

Replacing %+f, %-f, and % f and specifying a sign:

>>> '{:+f}; {:+f}'.format(3.14, -3.14)  # show it always
'+3.140000; -3.140000'
>>> '{: f}; {: f}'.format(3.14, -3.14)  # show a space for positive numbers
' 3.140000; -3.140000'
>>> '{:-f}; {:-f}'.format(3.14, -3.14)  # show only the minus -- same as '{:f}; {:f}'
'3.140000; -3.140000'

Replacing %x and %o and converting the value to different bases:

>>> # format also supports binary numbers
>>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)
'int: 42; hex: 2a; oct: 52; bin: 101010'
>>> # with 0x, 0o, or 0b as prefix:
>>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)
'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010'

Using the comma as a thousands separator:

>>> '{:,}'.format(1234567890)
'1,234,567,890'

Expressing a percentage:

>>> points = 19
>>> total = 22
>>> 'Correct answers: {:.2%}'.format(points/total)
'Correct answers: 86.36%'

Using type-specific formatting:

>>> import datetime
>>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)
>>> '{:%Y-%m-%d %H:%M:%S}'.format(d)
'2010-07-04 12:15:58'

Nesting arguments and more complex examples:

>>> for align, text in zip('<^>', ['left', 'center', 'right']):
...     '{0:{fill}{align}16}'.format(text, fill=align, align=align)
...
'left<<<<<<<<<<<<'
'^^^^^center^^^^^'
'>>>>>>>>>>>right'
>>>
>>> octets = [192, 168, 0, 1]
>>> '{:02X}{:02X}{:02X}{:02X}'.format(*octets)
'C0A80001'
>>> int(_, 16)
3232235521
>>>
>>> width = 5
>>> for num in range(5,12):
...     for base in 'dXob':
...         print('{0:{width}{base}}'.format(num, base=base, width=width), end=' ')
...     print()
...
    5     5     5   101
    6     6     6   110
    7     7     7   111
    8     8    10  1000
    9     9    11  1001
   10     A    12  1010
   11     B    13  1011

Template strings

Template strings provide simpler string substitutions as described in PEP 292. A primary use case for template strings is for internationalization (i18n) since in that context, the simpler syntax and functionality makes it easier to translate than other built-in string formatting facilities in Python. As an example of a library built on template strings for i18n, see the flufl.i18n package.

Template strings support $-based substitutions, using the following rules:

$$ is an escape; it is replaced with a single $.

$identifier names a substitution placeholder matching a mapping key of "identifier". By default, "identifier" is restricted to any case-insensitive ASCII alphanumeric string (including underscores) that starts with an underscore or ASCII letter. The first non-identifier character after the $ character terminates this placeholder specification.

${identifier} is equivalent to $identifier. It is required when valid identifier characters follow the placeholder but are not part of the placeholder, such as "${noun}ification".

Any other appearance of $ in the string will result in a ValueError being raised.

The string module provides a Template class that implements these rules. The methods of Template are:

class string.Template(template)
The constructor takes a single argument which is the template string.

substitute(mapping={}, /, **kwds)
Performs the template substitution, returning a new string. mapping is any dictionary-like object with keys that match the placeholders in the template.
Alternatively, you can provide keyword arguments, where the keywords are the placeholders. When both mapping and kwds are given and there are duplicates, the placeholders from kwds take precedence.

safe_substitute(mapping={}, /, **kwds)
Like substitute(), except that if placeholders are missing from mapping and kwds, instead of raising a KeyError exception, the original placeholder will appear in the resulting string intact. Also, unlike with substitute(), any other appearances of the $ will simply return $ instead of raising ValueError.

While other exceptions may still occur, this method is called “safe” because it always tries to return a usable string instead of raising an exception. In another sense, safe_substitute() may be anything other than safe, since it will silently ignore malformed templates containing dangling delimiters, unmatched braces, or placeholders that are not valid Python identifiers.

Template instances also provide one public data attribute:

template
This is the object passed to the constructor’s template argument. In general, you shouldn’t change it, but read-only access is not enforced.

Here is an example of how to use a Template:

>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
...
ValueError: Invalid placeholder in string: line 1, col 11
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
...
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'

Advanced usage: you can derive subclasses of Template to customize the placeholder syntax, delimiter character, or the entire regular expression used to parse template strings. To do this, you can override these class attributes:

delimiter – This is the literal string describing a placeholder introducing delimiter. The default value is $.
Note that this should not be a regular expression, as the implementation will call re.escape() on this string as needed. Note further that you cannot change the delimiter after class creation (i.e. a different delimiter must be set in the subclass’s class namespace).

idpattern – This is the regular expression describing the pattern for non-braced placeholders. The default value is the regular expression (?a:[_a-z][_a-z0-9]*). If this is given and braceidpattern is None this pattern will also apply to braced placeholders.

Note: Since the default flags value is re.IGNORECASE, the pattern [a-z] can match some non-ASCII characters. That’s why the local a flag is used here.

Changed in version 3.7: braceidpattern can be used to define separate patterns used inside and outside the braces.

braceidpattern – This is like idpattern but describes the pattern for braced placeholders. Defaults to None which means to fall back to idpattern (i.e. the same pattern is used both inside and outside braces). If given, this allows you to define different patterns for braced and unbraced placeholders.

New in version 3.7.

flags – The regular expression flags that will be applied when compiling the regular expression used for recognizing substitutions. The default value is re.IGNORECASE. Note that re.VERBOSE will always be added to the flags, so custom idpatterns must follow conventions for verbose regular expressions.

New in version 3.2.

Alternatively, you can provide the entire regular expression pattern by overriding the class attribute pattern. If you do this, the value must be a regular expression object with four named capturing groups. The capturing groups correspond to the rules given above, along with the invalid placeholder rule:

escaped – This group matches the escape sequence, e.g. $$, in the default pattern.

named – This group matches the unbraced placeholder name; it should not include the delimiter in the capturing group.
braced – This group matches the brace enclosed placeholder name; it should not include either the delimiter or braces in the capturing group.

invalid – This group matches any other delimiter pattern (usually a single delimiter), and it should appear last in the regular expression.

Helper functions

string.capwords(s, sep=None)
Split the argument into words using str.split(), capitalize each word using str.capitalize(), and join the capitalized words using str.join(). If the optional second argument sep is absent or None, runs of whitespace characters are replaced by a single space and leading and trailing whitespace are removed, otherwise sep is used to split and join the words.
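A short sketch tying these pieces together: a Template subclass that swaps the delimiter for '%' (the class name PercentTemplate is just an illustrative choice), plus the capwords() helper:

```python
from string import Template, capwords

class PercentTemplate(Template):
    # Override the class attribute to use '%' as the delimiter;
    # the parsing pattern is rebuilt automatically for the subclass.
    delimiter = '%'

t = PercentTemplate('%greeting, %{name}!')
print(t.substitute(greeting='hello', name='world'))  # hello, world!

# capwords() splits on whitespace, capitalizes each word, and rejoins:
print(capwords('the quick  brown fox'))  # The Quick Brown Fox
```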
doc_27483
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters
X : array-like of shape (n_samples, n_features)
    Test samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True labels for X.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns
score : float
    Mean accuracy of self.predict(X) wrt. y.
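The subset-accuracy rule described above can be sketched in plain Python; this illustrates the metric only, not scikit-learn's implementation:

```python
def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose entire label set is predicted exactly."""
    matches = sum(t == p for t, p in zip(y_true, y_pred))
    return matches / len(y_true)

y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
# Only rows 0 and 2 match their label sets exactly, so the score is 2/3.
print(subset_accuracy(y_true, y_pred))
```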
doc_27484
If the prompt argument is present, it is written to standard output without a trailing newline. The function then reads a line from input, converts it to a string (stripping a trailing newline), and returns that. When EOF is read, EOFError is raised. Example:

>>> s = input('--> ')
--> Monty Python's Flying Circus
>>> s
"Monty Python's Flying Circus"

If the readline module was loaded, then input() will use it to provide elaborate line editing and history features.

Raises an auditing event builtins.input with argument prompt before reading input.

Raises an auditing event builtins.input/result with the result after successfully reading input.
doc_27485
A dictionary mapping suffixes into MIME types, contains custom overrides for the default system mappings. The mapping is used case-insensitively, and so should contain only lower-cased keys. Changed in version 3.9: This dictionary is no longer filled with the default system mappings, but only contains overrides.
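Assuming this describes the override mapping consulted by the mimetypes module, custom entries are normally installed through add_type() rather than by writing to the dictionary directly:

```python
import mimetypes

mimetypes.init()
mimetypes.add_type('application/x-custom', '.cst')  # hypothetical custom type

# The override is now consulted by guess_type():
print(mimetypes.guess_type('report.cst'))    # ('application/x-custom', None)
print(mimetypes.guess_type('page.html')[0])  # 'text/html'
```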
doc_27486
Protect an awaitable object from being cancelled.

If aw is a coroutine it is automatically scheduled as a Task.

The statement:

res = await shield(something())

is equivalent to:

res = await something()

except that if the coroutine containing it is cancelled, the Task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. Although its caller is still cancelled, so the “await” expression still raises a CancelledError.

If something() is cancelled by other means (i.e. from within itself) that would also cancel shield().

If it is desired to completely ignore cancellation (not recommended) the shield() function should be combined with a try/except clause, as follows:

try:
    res = await shield(something())
except CancelledError:
    res = None

Deprecated since version 3.8, will be removed in version 3.10: The loop parameter.
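A runnable sketch of the behavior described above: cancelling the outer task still raises CancelledError at the await, but the shielded inner task runs to completion (helper names are illustrative):

```python
import asyncio

async def inner():
    await asyncio.sleep(0.05)
    return "done"

async def outer(task):
    # shield() protects `task` from cancellation of this coroutine.
    return await asyncio.shield(task)

async def main():
    inner_task = asyncio.ensure_future(inner())
    outer_task = asyncio.ensure_future(outer(inner_task))
    await asyncio.sleep(0.01)   # let both tasks start running
    outer_task.cancel()
    try:
        await outer_task        # the await still raises CancelledError...
    except asyncio.CancelledError:
        pass
    return await inner_task     # ...but the shielded task completes normally

print(asyncio.run(main()))      # done
```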
doc_27487
See Migration guide for more details.

tf.compat.v1.raw_ops.SpaceToBatch

tf.raw_ops.SpaceToBatch(
    input, paddings, block_size, name=None
)

This is a legacy version of the more general SpaceToBatchND.

Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. After the zero-padding, both height and width of the input must be divisible by the block size.

Args

input: A Tensor. 4-D with shape [batch, height, width, depth].

paddings: A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies the padding of the input with zeros across the spatial dimensions as follows:

    paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]

The effective spatial dimensions of the zero-padded input tensor will be:

    height_pad = pad_top + height + pad_bottom
    width_pad = pad_left + width + pad_right

The attr block_size must be greater than one. It indicates the block size. Non-overlapping blocks of size block_size x block_size in the height and width dimensions are rearranged into the batch dimension at each location. The batch of the output tensor is batch * block_size * block_size. Both height_pad and width_pad must be divisible by block_size.
The shape of the output will be:

    [batch * block_size * block_size, height_pad/block_size, width_pad/block_size, depth]

Some examples:

(1) For the following input of shape [1, 2, 2, 1] and block_size of 2:

x = [[[[1], [2]], [[3], [4]]]]

The output tensor has shape [4, 1, 1, 1] and value:

[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

(2) For the following input of shape [1, 2, 2, 3] and block_size of 2:

x = [[[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]]

The output tensor has shape [4, 1, 1, 3] and value:

[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]

(3) For the following input of shape [1, 4, 4, 1] and block_size of 2:

x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]]

The output tensor has shape [4, 2, 2, 1] and value:

x = [[[[1], [3]], [[9], [11]]], [[[2], [4]], [[10], [12]]], [[[5], [7]], [[13], [15]]], [[[6], [8]], [[14], [16]]]]

(4) For the following input of shape [2, 2, 4, 1] and block_size of 2:

x = [[[[1], [2], [3], [4]], [[5], [6], [7], [8]]], [[[9], [10], [11], [12]], [[13], [14], [15], [16]]]]

The output tensor has shape [8, 1, 2, 1] and value:

x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]], [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]

Among others, this operation is useful for reducing atrous convolution into regular convolution.

block_size: An int that is >= 2.

name: A name for the operation (optional).

Returns

A Tensor. Has the same type as input.
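The rearrangement (here without padding) can be sketched in plain Python on NHWC nested lists; this illustrates the permutation, not TensorFlow's implementation:

```python
def space_to_batch(x, block_size):
    """Pure-Python sketch of SpaceToBatch (no padding) on NHWC nested lists."""
    batch = len(x)
    h, w = len(x[0]), len(x[0][0])
    out = []
    # Output batch ordering: intra-block row offset, then column offset,
    # then the original batch index.
    for bh in range(block_size):
        for bw in range(block_size):
            for b in range(batch):
                out.append([[x[b][i * block_size + bh][j * block_size + bw]
                             for j in range(w // block_size)]
                            for i in range(h // block_size)])
    return out

# Example (1) from the docs: shape [1, 2, 2, 1] -> [4, 1, 1, 1]
x = [[[[1], [2]], [[3], [4]]]]
print(space_to_batch(x, 2))  # [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```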
doc_27488
Return the Python representation of s (a str instance containing a JSON document). JSONDecodeError will be raised if the given JSON document is not valid.
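A quick round trip; malformed input raises json.JSONDecodeError:

```python
import json

obj = json.loads('{"name": "ada", "scores": [1, 2.5, null]}')
print(obj)  # {'name': 'ada', 'scores': [1, 2.5, None]}

try:
    json.loads('{not valid}')
except json.JSONDecodeError as exc:
    print(type(exc).__name__)  # JSONDecodeError
```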
doc_27489
tf.experimental.numpy.split( ary, indices_or_sections, axis=0 ) See the NumPy documentation for numpy.split.
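This function follows numpy.split's semantics; a pure-Python sketch of the axis-0 behavior on plain lists (an illustration, not the TensorFlow implementation):

```python
def split(seq, indices_or_sections):
    """Sketch of numpy.split semantics along axis 0 for a plain sequence."""
    n = len(seq)
    if isinstance(indices_or_sections, int):
        k = indices_or_sections
        if n % k:
            raise ValueError("array split does not result in an equal division")
        idx = [n // k * i for i in range(1, k)]
    else:
        idx = list(indices_or_sections)
    bounds = [0] + idx + [n]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

print(split(list(range(6)), 3))       # [[0, 1], [2, 3], [4, 5]]
print(split(list(range(6)), [2, 5]))  # [[0, 1], [2, 3, 4], [5]]
```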
doc_27490
Look up the codec for the given encoding and return its decoder function. Raises a LookupError in case the encoding cannot be found.
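The returned decoder function takes a bytes-like object and returns a (decoded object, bytes consumed) pair:

```python
import codecs

decode = codecs.getdecoder('utf-8')
text, consumed = decode(b'caf\xc3\xa9')
print(text, consumed)  # café 5

try:
    codecs.getdecoder('no-such-codec')
except LookupError:
    print('unknown encoding raises LookupError')
```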
doc_27491
enum.IntEnum collection of ALERT_DESCRIPTION_* constants. New in version 3.6.
doc_27492
Format specified values of self and return them. Deprecated since version 1.2.0. Parameters slicer:int, array-like An indexer into self that specifies which values are used in the formatting process. kwargs:dict Options for specifying how the values should be formatted. These options include the following: na_rep:str The value that serves as a placeholder for NULL values quoting:bool or None Whether or not there are quoted values in self date_format:str The format used to represent date-like values. Returns numpy.ndarray Formatted values.
doc_27493
Return the source code for the specified module. Raise ZipImportError if the module couldn’t be found, return None if the archive does contain the module, but has no source for it.
doc_27494
Return a bytes array convertible to a human-readable description of the type of compression used in the audio file. For AIFF files, the returned value is b'not compressed'.
doc_27495
The suffix to append to the auto-generated candidate template name. Default suffix is _detail.
doc_27496
Iterates over all blueprints by the order they were registered. Changelog New in version 0.11. Return type ValuesView[Blueprint]
doc_27497
Return the Axes box aspect, i.e. the ratio of height to width. The box aspect is None (i.e. chosen depending on the available figure space) unless explicitly specified. See also matplotlib.axes.Axes.set_box_aspect for a description of box aspect. matplotlib.axes.Axes.set_aspect for a description of aspect handling.
doc_27498
tf.compat.v1.TFRecordReader( name=None, options=None ) See ReaderBase for supported methods. Args name A name for the operation (optional). options A TFRecordOptions object (optional). Eager Compatibility Readers are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes reader_ref Op that implements the reader. supports_serialize Whether the Reader implementation can serialize its state. Methods num_records_produced View source num_records_produced( name=None ) Returns the number of records this reader has produced. This is the same as the number of Read executions that have succeeded. Args name A name for the operation (optional). Returns An int64 Tensor. num_work_units_completed View source num_work_units_completed( name=None ) Returns the number of work units this reader has finished processing. Args name A name for the operation (optional). Returns An int64 Tensor. read View source read( queue, name=None ) Returns the next record (key, value) pair produced by a reader. Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. name A name for the operation (optional). Returns A tuple of Tensors (key, value). key A string scalar Tensor. value A string scalar Tensor. read_up_to View source read_up_to( queue, num_records, name=None ) Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than num_records even before the last batch. Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records Number of records to read. name A name for the operation (optional). 
Returns A tuple of Tensors (keys, values). keys A 1-D string Tensor. values A 1-D string Tensor. reset View source reset( name=None ) Restore a reader to its initial clean state. Args name A name for the operation (optional). Returns The created Operation. restore_state View source restore_state( state, name=None ) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args state A string Tensor. Result of a SerializeState of a Reader with matching type. name A name for the operation (optional). Returns The created Operation. serialize_state View source serialize_state( name=None ) Produce a string tensor that encodes the state of a reader. Not all Readers support being serialized, so this can produce an Unimplemented error. Args name A name for the operation (optional). Returns A string Tensor.
doc_27499
Convert the bytes-like object to a value. If no valid value is found, raise EOFError, ValueError or TypeError. Extra bytes in the input are ignored.
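This description matches marshal.loads(); a quick round trip, including the documented behavior that trailing extra bytes are ignored:

```python
import marshal

data = marshal.dumps([1, "two", 3.0])
print(marshal.loads(data))            # [1, 'two', 3.0]

# Extra bytes after the serialized value are ignored:
print(marshal.loads(data + b'junk'))  # [1, 'two', 3.0]
```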