doc_1900
See torch.matrix_power()
doc_1901
Creates a new Decimal instance from num but using self as context. Unlike the Decimal constructor, the context precision, rounding method, flags, and traps are applied to the conversion. This is useful because constants are often given to a greater precision than is needed by the application. Another benefit is that rounding immediately eliminates unintended effects from digits beyond the current precision. In the following example, using unrounded inputs means that adding zero to a sum can change the result: >>> getcontext().prec = 3 >>> Decimal('3.4445') + Decimal('1.0023') Decimal('4.45') >>> Decimal('3.4445') + Decimal(0) + Decimal('1.0023') Decimal('4.44') This method implements the to-number operation of the IBM specification. If the argument is a string, no leading or trailing whitespace or underscores are permitted.
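A minimal sketch of the rounding-on-creation behaviour described above, using create_decimal on a three-digit context:
>>> from decimal import Context, Decimal
>>> ctx = Context(prec=3)
>>> ctx.create_decimal('3.4445')  # rounded to the context precision on creation
Decimal('3.44')
>>> Decimal('3.4445')             # the constructor ignores context precision
Decimal('3.4445')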
doc_1902
Use an integral image to integrate over a given window. Parameters iindarray Integral image. startList of tuples, each tuple of length equal to dimension of ii Coordinates of top left corner of window(s). Each tuple in the list contains the starting row, col, … index i.e. [(row_win1, col_win1, …), (row_win2, col_win2, …), …]. endList of tuples, each tuple of length equal to dimension of ii Coordinates of bottom right corner of window(s). Each tuple in the list contains the end row, col, … index i.e. [(row_win1, col_win1, …), (row_win2, col_win2, …), …]. Returns Sscalar or ndarray Integral (sum) over the given window(s). Examples >>> arr = np.ones((5, 6), dtype=float) >>> ii = integral_image(arr) >>> integrate(ii, (1, 0), (1, 2)) # sum from (1, 0) to (1, 2) array([3.]) >>> integrate(ii, [(3, 3)], [(4, 5)]) # sum from (3, 3) to (4, 5) array([6.]) >>> # sum from (1, 0) to (1, 2) and from (3, 3) to (4, 5) >>> integrate(ii, [(1, 0), (3, 3)], [(1, 2), (4, 5)]) array([3., 6.])
doc_1903
Return the ticks position ("left", "right", "default", or "unknown").
doc_1904
Return the Transform instance mapping patch coordinates to data coordinates. For example, one may define a circle patch of radius 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinates) by 5.
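A minimal sketch of the scaling transform described above; Affine2D is matplotlib's standard affine transform, and the unit-circle point is chosen for illustration:
>>> from matplotlib.transforms import Affine2D
>>> scale5 = Affine2D().scale(5)    # maps unit-circle (patch) coordinates to radius 5
>>> scale5.transform([[1.0, 0.0]])  # a point on the unit circle lands at radius 5
array([[5., 0.]])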
doc_1905
Haematoxylin-Eosin-DAB (HED) to RGB color space conversion. Parameters hed(…, 3) array_like The image in the HED color space. Final dimension denotes channels. Returns out(…, 3) ndarray The image in RGB. Same dimensions as input. Raises ValueError If hed is not at least 2-D with shape (…, 3). References 1 A. C. Ruifrok and D. A. Johnston, “Quantification of histochemical staining by color deconvolution.,” Analytical and quantitative cytology and histology / the International Academy of Cytology [and] American Society of Cytology, vol. 23, no. 4, pp. 291-9, Aug. 2001. Examples >>> from skimage import data >>> from skimage.color import rgb2hed, hed2rgb >>> ihc = data.immunohistochemistry() >>> ihc_hed = rgb2hed(ihc) >>> ihc_rgb = hed2rgb(ihc_hed)
doc_1906
Remove the artist from the figure if possible. The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry.
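A short sketch of the removal workflow described above (the figure and artist are illustrative):
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> line, = ax.plot([0, 1], [0, 1])
>>> line.remove()            # detach the artist from the axes
>>> ax.relim()               # recompute the data limits without it
>>> ax.autoscale_view()
>>> fig.canvas.draw_idle()   # the change becomes visible on the next redraw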
doc_1907
See torch.count_nonzero()
doc_1908
The error code used by ValidationError if validation fails. Defaults to "invalid".
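For illustration, a sketch of a Django validator that overrides the default "invalid" code (the message, code, and validator name are arbitrary):
from django.core.exceptions import ValidationError

def validate_even(value):
    if value % 2 != 0:
        # supply a custom code instead of the default 'invalid'
        raise ValidationError('%(value)s is not even', code='not_even',
                              params={'value': value})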
doc_1909
Logistic Regression CV (aka logit, MaxEnt) classifier. See glossary entry for cross-validation estimator. This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizers. The newton-cg, sag and lbfgs solvers support only L2 regularization with primal formulation. The liblinear solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. Elastic-Net penalty is only supported by the saga solver. For the grid of Cs values and l1_ratios values, the best hyperparameter is selected by the cross-validator StratifiedKFold, but it can be changed using the cv parameter. The ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers can warm-start the coefficients (see Glossary). Read more in the User Guide. Parameters Csint or list of floats, default=10 Each of the values in Cs describes the inverse of regularization strength. If Cs is an int, then a grid of Cs values is chosen in a logarithmic scale between 1e-4 and 1e4. Like in support vector machines, smaller values specify stronger regularization. fit_interceptbool, default=True Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function. cvint or cross-validation generator, default=None The default cross-validation generator used is Stratified K-Folds. If an integer is provided, then it is the number of folds used. See the sklearn.model_selection module for the list of possible cross-validation objects. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. dualbool, default=False Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features. penalty{‘l1’, ‘l2’, ‘elasticnet’}, default=’l2’ Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver. scoringstr or callable, default=None A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y). For a list of scoring functions that can be used, look at sklearn.metrics. The default scoring option used is ‘accuracy’. solver{‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’ Algorithm to use in the optimization problem. For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones. For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes. ‘newton-cg’, ‘lbfgs’ and ‘sag’ only handle L2 penalty, whereas ‘liblinear’ and ‘saga’ handle L1 penalty. ‘liblinear’ might be slower in LogisticRegressionCV because it does not handle warm-starting. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver. tolfloat, default=1e-4 Tolerance for stopping criteria. max_iterint, default=100 Maximum number of iterations of the optimization algorithm. class_weightdict or ‘balanced’, default=None Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. 
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. New in version 0.17: class_weight == ‘balanced’ n_jobsint, default=None Number of CPU cores used during the cross-validation loop. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verboseint, default=0 For the ‘liblinear’, ‘sag’ and ‘lbfgs’ solvers set verbose to any positive number for verbosity. refitbool, default=True If set to True, the scores are averaged across all folds, and the coefs and the C that correspond to the best score are taken, and a final refit is done using these parameters. Otherwise the coefs, intercepts and C that correspond to the best scores across folds are averaged. intercept_scalingfloat, default=1 Useful only when the solver ‘liblinear’ is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight. Note that the synthetic feature weight is subject to l1/l2 regularization, like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased. multi_class{‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’ If the option chosen is ‘ovr’, then a binary problem is fit for each label. For ‘multinomial’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘multinomial’ is unavailable when solver=’liblinear’. ‘auto’ selects ‘ovr’ if the data is binary, or if solver=’liblinear’, and otherwise selects ‘multinomial’. New in version 0.18: Stochastic Average Gradient descent solver for ‘multinomial’ case. Changed in version 0.22: Default changed from ‘ovr’ to ‘auto’ in 0.22. random_stateint, RandomState instance, default=None Used when solver='sag', ‘saga’ or ‘liblinear’ to shuffle the data. Note that this only applies to the solver and not the cross-validation generator. See Glossary for details. l1_ratioslist of float, default=None The list of Elastic-Net mixing parameters, with 0 <= l1_ratio <= 1. Only used if penalty='elasticnet'. A value of 0 is equivalent to using penalty='l2', while 1 is equivalent to using penalty='l1'. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. Attributes classes_ndarray of shape (n_classes, ) A list of class labels known to the classifier. coef_ndarray of shape (1, n_features) or (n_classes, n_features) Coefficient of the features in the decision function. coef_ is of shape (1, n_features) when the given problem is binary. intercept_ndarray of shape (1,) or (n_classes,) Intercept (a.k.a. bias) added to the decision function. If fit_intercept is set to False, the intercept is set to zero. intercept_ is of shape (1,) when the problem is binary. Cs_ndarray of shape (n_cs) Array of C, i.e. inverse of regularization parameter values used for cross-validation. l1_ratios_ndarray of shape (n_l1_ratios) Array of l1_ratios used for cross-validation. If no l1_ratio is used (i.e. 
penalty is not ‘elasticnet’), this is set to [None]. coefs_paths_ndarray of shape (n_folds, n_cs, n_features) or (n_folds, n_cs, n_features + 1) dict with classes as the keys, and the path of coefficients obtained during cross-validating across each fold and then across each Cs after doing an OvR for the corresponding class as values. If the ‘multi_class’ option is set to ‘multinomial’, then the coefs_paths are the coefficients corresponding to each class. Each dict value has shape (n_folds, n_cs, n_features) or (n_folds, n_cs, n_features + 1) depending on whether the intercept is fit or not. If penalty='elasticnet', the shape is (n_folds, n_cs, n_l1_ratios_, n_features) or (n_folds, n_cs, n_l1_ratios_, n_features + 1). scores_dict dict with classes as the keys, and the values as the grid of scores obtained during cross-validating each fold, after doing an OvR for the corresponding class. If the ‘multi_class’ option given is ‘multinomial’ then the same scores are repeated across all classes, since this is the multinomial class. Each dict value has shape (n_folds, n_cs) or (n_folds, n_cs, n_l1_ratios) if penalty='elasticnet'. C_ndarray of shape (n_classes,) or (n_classes - 1,) Array of C that maps to the best scores across every class. If refit is set to False, then for each class, the best C is the average of the C’s that correspond to the best scores for each fold. C_ is of shape (n_classes,) when the problem is binary. l1_ratio_ndarray of shape (n_classes,) or (n_classes - 1,) Array of l1_ratio that maps to the best scores across every class. If refit is set to False, then for each class, the best l1_ratio is the average of the l1_ratio’s that correspond to the best scores for each fold. l1_ratio_ is of shape (n_classes,) when the problem is binary. n_iter_ndarray of shape (n_classes, n_folds, n_cs) or (1, n_folds, n_cs) Actual number of iterations for all classes, folds and Cs. In the binary or multinomial cases, the first dimension is equal to 1. If penalty='elasticnet', the shape is (n_classes, n_folds, n_cs, n_l1_ratios) or (1, n_folds, n_cs, n_l1_ratios). See also LogisticRegression Examples >>> from sklearn.datasets import load_iris >>> from sklearn.linear_model import LogisticRegressionCV >>> X, y = load_iris(return_X_y=True) >>> clf = LogisticRegressionCV(cv=5, random_state=0).fit(X, y) >>> clf.predict(X[:2, :]) array([0, 0]) >>> clf.predict_proba(X[:2, :]).shape (2, 3) >>> clf.score(X, y) 0.98... Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, sample_weight]) Fit the model according to the given training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict class labels for samples in X. predict_log_proba(X) Predict logarithm of probability estimates. predict_proba(X) Probability estimates. score(X, y[, sample_weight]) Returns the score using the scoring option on the given test data and labels. set_params(**params) Set the parameters of this estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. 
In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, sample_weight=None) Fit the model according to the given training data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target vector relative to X. sample_weightarray-like of shape (n_samples,), default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. Returns selfobject get_params(deep=True) Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. predict_log_proba(X) Predict logarithm of probability estimates. The returned estimates for all classes are ordered by the label of classes. Parameters Xarray-like of shape (n_samples, n_features) Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns Tarray-like of shape (n_samples, n_classes) Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. predict_proba(X) Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to be “multinomial” the softmax function is used to find the predicted probability of each class. Else use a one-vs-rest approach, i.e. calculate the probability of each class assuming it to be positive using the logistic function, and normalize these values across all the classes. Parameters Xarray-like of shape (n_samples, n_features) Vector to be scored, where n_samples is the number of samples and n_features is the number of features. Returns Tarray-like of shape (n_samples, n_classes) Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. score(X, y, sample_weight=None) Returns the score using the scoring option on the given test data and labels. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Score of self.predict(X) wrt. y. set_params(**params) Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. sparsify() Convert coefficient matrix to sparse format. 
Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
doc_1910
Creates the module object from the given specification in accordance with PEP 489. New in version 3.5.
doc_1911
See Migration guide for more details. tf.compat.v1.keras.callbacks.LambdaCallback tf.keras.callbacks.LambdaCallback( on_epoch_begin=None, on_epoch_end=None, on_batch_begin=None, on_batch_end=None, on_train_begin=None, on_train_end=None, **kwargs ) This callback is constructed with anonymous functions that will be called at the appropriate time. Note that these callbacks expect positional arguments, as: on_epoch_begin and on_epoch_end expect two positional arguments: epoch, logs; on_batch_begin and on_batch_end expect two positional arguments: batch, logs; on_train_begin and on_train_end expect one positional argument: logs. Arguments on_epoch_begin called at the beginning of every epoch. on_epoch_end called at the end of every epoch. on_batch_begin called at the beginning of every batch. on_batch_end called at the end of every batch. on_train_begin called at the beginning of model training. on_train_end called at the end of model training. Example:
# Print the batch number at the beginning of every batch.
batch_print_callback = LambdaCallback(
    on_batch_begin=lambda batch, logs: print(batch))

# Stream the epoch loss to a file in JSON format. The file content
# is not well-formed JSON but rather has a JSON object per line.
import json
json_log = open('loss_log.json', mode='wt', buffering=1)
json_logging_callback = LambdaCallback(
    on_epoch_end=lambda epoch, logs: json_log.write(
        json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\n'),
    on_train_end=lambda logs: json_log.close()
)

# Terminate some processes after having finished model training.
processes = ...
cleanup_callback = LambdaCallback(
    on_train_end=lambda logs: [
        p.terminate() for p in processes if p.is_alive()])

model.fit(...,
          callbacks=[batch_print_callback,
                     json_logging_callback,
                     cleanup_callback])
Methods set_model set_model( model ) set_params set_params( params )
doc_1912
Return the offset as a tuple (x, y). The extent parameters have to be provided to handle the case where the offset is dynamically determined by a callable (see set_offset). Parameters width, height, xdescent, ydescent Extent parameters. rendererRendererBase subclass
doc_1913
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters Xarray-like of shape (n_samples_X, n_features) or list of object Argument to the kernel. Returns K_diagndarray of shape (n_samples_X,) Diagonal of kernel k(X, X)
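A quick check of the equivalence stated above, using the RBF kernel as an example (any scikit-learn Gaussian-process kernel exposes the same method):
>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import RBF
>>> X = np.random.RandomState(0).rand(5, 2)
>>> kernel = RBF(length_scale=1.0)
>>> np.allclose(kernel.diag(X), np.diag(kernel(X)))
True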
doc_1914
A tuple containing the five components of the version number: major, minor, micro, releaselevel, and serial. All values except releaselevel are integers; the release level is 'alpha', 'beta', 'candidate', or 'final'. The version_info value corresponding to the Python version 2.0 is (2, 0, 0, 'final', 0). The components can also be accessed by name, so sys.version_info[0] is equivalent to sys.version_info.major and so on. Changed in version 3.1: Added named component attributes.
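For example, both access styles refer to the same component, and the value also supports ordinary tuple comparison:
>>> import sys
>>> sys.version_info[0] == sys.version_info.major
True
>>> sys.version_info >= (3, 1)
True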
doc_1915
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters fn (Module -> None) – function to be applied to each submodule Returns self Return type Module Example: >>> @torch.no_grad() >>> def init_weights(m): >>> print(m) >>> if type(m) == nn.Linear: >>> m.weight.fill_(1.0) >>> print(m.weight) >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) >>> net.apply(init_weights) Linear(in_features=2, out_features=2, bias=True) Parameter containing: tensor([[ 1., 1.], [ 1., 1.]]) Linear(in_features=2, out_features=2, bias=True) Parameter containing: tensor([[ 1., 1.], [ 1., 1.]]) Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) )
doc_1916
Return a new path with expanded ~ and ~user constructs, as returned by os.path.expanduser(): >>> p = PosixPath('~/films/Monty Python') >>> p.expanduser() PosixPath('/home/eric/films/Monty Python') New in version 3.5.
doc_1917
Fit the model with X and apply the dimensionality reduction on X. Parameters Xarray-like of shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. yIgnored Returns X_newndarray of shape (n_samples, n_components) Transformed values. Notes This method returns a Fortran-ordered array. To convert it to a C-ordered array, use ‘np.ascontiguousarray’.
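A minimal sketch, assuming a PCA-like transformer (the estimator choice is illustrative), including the C-ordering conversion mentioned in the notes:
>>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> X = np.random.RandomState(0).rand(10, 4)
>>> X_new = PCA(n_components=2).fit_transform(X)
>>> X_new.shape
(10, 2)
>>> X_c = np.ascontiguousarray(X_new)  # convert the Fortran-ordered result to C order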
doc_1918
Set if artist is to be included in layout calculations, e.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters in_layoutbool
doc_1919
Return whether units are set on any axis.
doc_1920
Like Artist.get_window_extent, but includes any clipping. Parameters rendererRendererBase subclass renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer()) Returns Bbox The enclosing bounding box (in figure pixel coordinates).
doc_1921
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
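A small usage sketch of the re-raise pattern:
try:
    raise TypeError("bad type")
except TypeError as exc:
    tb = exc.__traceback__
    # re-raise a different exception carrying the original traceback
    raise ValueError("wrapped").with_traceback(tb)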
doc_1922
tf.compat.v1.nn.rnn_cell.LSTMStateTuple( c, h ) Stores two elements, (c, h), in that order, where c is the cell state and h is the output (hidden state). Only used when state_is_tuple=True. Attributes c h dtype
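A minimal construction sketch (TF1-style API; the shapes are illustrative):
import tensorflow as tf

c = tf.zeros([32, 128])  # cell state
h = tf.zeros([32, 128])  # output (hidden state)
state = tf.compat.v1.nn.rnn_cell.LSTMStateTuple(c=c, h=h)
print(state.c.shape, state.h.shape, state.dtype)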
doc_1923
Reset the iterator to its initial state.
doc_1924
Bases: mpl_toolkits.axisartist.angle_helper.LocatorBase __call__(v1, v2) Call self as a function.
doc_1925
Convert the reduced distance to the true distance. The reduced distance, defined for some metrics, is a computationally more efficient measure which preserves the rank of the true distance. For example, in the Euclidean distance metric, the reduced distance is the squared-euclidean distance.
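A sketch of the round trip for the Euclidean case (this assumes a scikit-learn version where DistanceMetric lives in sklearn.neighbors):
>>> from sklearn.neighbors import DistanceMetric
>>> metric = DistanceMetric.get_metric('euclidean')
>>> metric.rdist_to_dist(9.0)  # reduced (squared) distance 9 -> true distance 3
3.0
>>> metric.dist_to_rdist(3.0)
9.0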
doc_1926
Instances are replaced with an appropriate value for Enum members. By default, the initial value starts at 1.
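A short example of the behaviour described (this entry appears to document enum.auto):
>>> from enum import Enum, auto
>>> class Color(Enum):
...     RED = auto()
...     GREEN = auto()
...     BLUE = auto()
>>> Color.GREEN.value
2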
doc_1927
Return filter function to be used for agg filter.
doc_1928
Broadcasts input to the shape shape. Equivalent to calling input.expand(shape). See expand() for details. Parameters input (Tensor) – the input tensor. shape (list, tuple, or torch.Size) – the new shape. Example: >>> x = torch.tensor([1, 2, 3]) >>> torch.broadcast_to(x, (3, 3)) tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
doc_1929
See Migration guide for more details. tf.compat.v1.raw_ops.DebugGradientRefIdentity tf.raw_ops.DebugGradientRefIdentity( input, name=None ) This op is hidden from public in Python. It is used by TensorFlow Debugger to register gradient tensors for gradient debugging. This op operates on reference-type tensors. Args input A mutable Tensor. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as input.
doc_1930
Return True if the file descriptors fp1 and fp2 refer to the same file. Availability: Unix, Windows. Changed in version 3.2: Added Windows support. Changed in version 3.6: Accepts a path-like object.
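A small sketch, assuming this documents os.path.sameopenfile (which compares open file descriptors):
>>> import os, tempfile
>>> f = tempfile.NamedTemporaryFile()
>>> fd2 = os.dup(f.fileno())
>>> os.path.sameopenfile(f.fileno(), fd2)
True
>>> os.close(fd2)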
doc_1931
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
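The same quantity can be computed directly with sklearn.metrics.r2_score; a quick numeric check of the definition:
>>> from sklearn.metrics import r2_score
>>> r2_score([3, -0.5, 2, 7], [2.5, 0.0, 2, 8])
0.948...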
doc_1932
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
doc_1933
Access a group of rows and columns by label(s) or a boolean array. .loc[] is primarily label based, but may also be used with a boolean array. Allowed inputs are: A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an integer position along the index). A list or array of labels, e.g. ['a', 'b', 'c']. A slice object with labels, e.g. 'a':'f'. Warning Note that contrary to usual python slices, both the start and the stop are included A boolean array of the same length as the axis being sliced, e.g. [True, False, True]. An alignable boolean Series. The index of the key will be aligned before masking. An alignable Index. The Index of the returned selection will be the input. A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above) See more at Selection by Label. Raises KeyError If any items are not found. IndexingError If an indexed key is passed and its index is unalignable to the frame index. See also DataFrame.at Access a single value for a row/column label pair. DataFrame.iloc Access group of rows and columns by integer position(s). DataFrame.xs Returns a cross-section (row(s) or column(s)) from the Series/DataFrame. Series.loc Access group of values using labels. Examples Getting values >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]], ... index=['cobra', 'viper', 'sidewinder'], ... columns=['max_speed', 'shield']) >>> df max_speed shield cobra 1 2 viper 4 5 sidewinder 7 8 Single label. Note this returns the row as a Series. >>> df.loc['viper'] max_speed 4 shield 5 Name: viper, dtype: int64 List of labels. Note using [[]] returns a DataFrame. >>> df.loc[['viper', 'sidewinder']] max_speed shield viper 4 5 sidewinder 7 8 Single label for row and column >>> df.loc['cobra', 'shield'] 2 Slice with labels for row and single label for column. As mentioned above, note that both the start and stop of the slice are included. >>> df.loc['cobra':'viper', 'max_speed'] cobra 1 viper 4 Name: max_speed, dtype: int64 Boolean list with the same length as the row axis >>> df.loc[[False, False, True]] max_speed shield sidewinder 7 8 Alignable boolean Series: >>> df.loc[pd.Series([False, True, False], ... index=['viper', 'sidewinder', 'cobra'])] max_speed shield sidewinder 7 8 Index (same behavior as df.reindex) >>> df.loc[pd.Index(["cobra", "viper"], name="foo")] max_speed shield foo cobra 1 2 viper 4 5 Conditional that returns a boolean Series >>> df.loc[df['shield'] > 6] max_speed shield sidewinder 7 8 Conditional that returns a boolean Series with column labels specified >>> df.loc[df['shield'] > 6, ['max_speed']] max_speed sidewinder 7 Callable that returns a boolean Series >>> df.loc[lambda df: df['shield'] == 8] max_speed shield sidewinder 7 8 Setting values Set value for all items matching the list of labels >>> df.loc[['viper', 'sidewinder'], ['shield']] = 50 >>> df max_speed shield cobra 1 2 viper 4 50 sidewinder 7 50 Set value for an entire row >>> df.loc['cobra'] = 10 >>> df max_speed shield cobra 10 10 viper 4 50 sidewinder 7 50 Set value for an entire column >>> df.loc[:, 'max_speed'] = 30 >>> df max_speed shield cobra 30 10 viper 30 50 sidewinder 30 50 Set value for rows matching callable condition >>> df.loc[df['shield'] > 35] = 0 >>> df max_speed shield cobra 30 10 viper 0 0 sidewinder 0 0 Getting values on a DataFrame with an index that has integer labels Another example using integers for the index >>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]], ... 
index=[7, 8, 9], columns=['max_speed', 'shield']) >>> df max_speed shield 7 1 2 8 4 5 9 7 8 Slice with integer labels for rows. As mentioned above, note that both the start and stop of the slice are included. >>> df.loc[7:9] max_speed shield 7 1 2 8 4 5 9 7 8 Getting values with a MultiIndex A number of examples using a DataFrame with a MultiIndex >>> tuples = [ ... ('cobra', 'mark i'), ('cobra', 'mark ii'), ... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'), ... ('viper', 'mark ii'), ('viper', 'mark iii') ... ] >>> index = pd.MultiIndex.from_tuples(tuples) >>> values = [[12, 2], [0, 4], [10, 20], ... [1, 4], [7, 1], [16, 36]] >>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index) >>> df max_speed shield cobra mark i 12 2 mark ii 0 4 sidewinder mark i 10 20 mark ii 1 4 viper mark ii 7 1 mark iii 16 36 Single label. Note this returns a DataFrame with a single index. >>> df.loc['cobra'] max_speed shield mark i 12 2 mark ii 0 4 Single index tuple. Note this returns a Series. >>> df.loc[('cobra', 'mark ii')] max_speed 0 shield 4 Name: (cobra, mark ii), dtype: int64 Single label for row and column. Similar to passing in a tuple, this returns a Series. >>> df.loc['cobra', 'mark i'] max_speed 12 shield 2 Name: (cobra, mark i), dtype: int64 Single tuple. Note using [[]] returns a DataFrame. >>> df.loc[[('cobra', 'mark ii')]] max_speed shield cobra mark ii 0 4 Single tuple for the index with a single label for the column >>> df.loc[('cobra', 'mark i'), 'shield'] 2 Slice from index tuple to single label >>> df.loc[('cobra', 'mark i'):'viper'] max_speed shield cobra mark i 12 2 mark ii 0 4 sidewinder mark i 10 20 mark ii 1 4 viper mark ii 7 1 mark iii 16 36 Slice from index tuple to index tuple >>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')] max_speed shield cobra mark i 12 2 mark ii 0 4 sidewinder mark i 10 20 mark ii 1 4 viper mark ii 7 1
doc_1934
Return the index of the leaf that each sample is predicted as. New in version 0.17. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. check_inputbool, default=True Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing. Returns X_leavesarray-like of shape (n_samples,) For each datapoint x in X, return the index of the leaf x ends up in. Leaves are numbered within [0; self.tree_.node_count), possibly with gaps in the numbering.
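A brief sketch with a decision tree (the estimator and data are illustrative; any fitted tree exposes apply):
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
>>> leaves = clf.apply(X[:3])  # the leaf index each sample lands in
>>> leaves.shape
(3,)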
doc_1935
Alias for field number 1
doc_1936
Scalar method identical to the corresponding array attribute. Please see ndarray.flatten.
doc_1937
The character conventionally used by the operating system to separate search path components (as in PATH), such as ':' for POSIX or ';' for Windows. Also available via os.path.
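Its typical use is splitting PATH-like variables:
>>> import os
>>> paths = os.environ['PATH'].split(os.pathsep)  # e.g. ['/usr/local/bin', '/usr/bin', ...]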
doc_1938
Request a widget redraw once control returns to the GUI event loop. Even if multiple calls to draw_idle occur before control returns to the GUI event loop, the figure will only be rendered once. Notes Backends may choose to override the method and implement their own strategy to prevent multiple renderings.
doc_1939
A dictionary object including validation. Validating functions are defined and associated with rc parameters in matplotlib.rcsetup. The list of rcParams is: _internal.classic_mode agg.path.chunksize animation.bitrate animation.codec animation.convert_args animation.convert_path animation.embed_limit animation.ffmpeg_args animation.ffmpeg_path animation.frame_format animation.html animation.writer axes.autolimit_mode axes.axisbelow axes.edgecolor axes.facecolor axes.formatter.limits axes.formatter.min_exponent axes.formatter.offset_threshold axes.formatter.use_locale axes.formatter.use_mathtext axes.formatter.useoffset axes.grid axes.grid.axis axes.grid.which axes.labelcolor axes.labelpad axes.labelsize axes.labelweight axes.linewidth axes.prop_cycle axes.spines.bottom axes.spines.left axes.spines.right axes.spines.top axes.titlecolor axes.titlelocation axes.titlepad axes.titlesize axes.titleweight axes.titley axes.unicode_minus axes.xmargin axes.ymargin axes.zmargin axes3d.grid backend backend_fallback boxplot.bootstrap boxplot.boxprops.color boxplot.boxprops.linestyle boxplot.boxprops.linewidth boxplot.capprops.color boxplot.capprops.linestyle boxplot.capprops.linewidth boxplot.flierprops.color boxplot.flierprops.linestyle boxplot.flierprops.linewidth boxplot.flierprops.marker boxplot.flierprops.markeredgecolor boxplot.flierprops.markeredgewidth boxplot.flierprops.markerfacecolor boxplot.flierprops.markersize boxplot.meanline boxplot.meanprops.color boxplot.meanprops.linestyle boxplot.meanprops.linewidth boxplot.meanprops.marker boxplot.meanprops.markeredgecolor boxplot.meanprops.markerfacecolor boxplot.meanprops.markersize boxplot.medianprops.color boxplot.medianprops.linestyle boxplot.medianprops.linewidth boxplot.notch boxplot.patchartist boxplot.showbox boxplot.showcaps boxplot.showfliers boxplot.showmeans boxplot.vertical boxplot.whiskerprops.color boxplot.whiskerprops.linestyle boxplot.whiskerprops.linewidth boxplot.whiskers contour.corner_mask contour.linewidth contour.negative_linestyle date.autoformatter.day date.autoformatter.hour date.autoformatter.microsecond date.autoformatter.minute date.autoformatter.month date.autoformatter.second date.autoformatter.year date.converter date.epoch date.interval_multiples docstring.hardcopy errorbar.capsize figure.autolayout figure.constrained_layout.h_pad figure.constrained_layout.hspace figure.constrained_layout.use figure.constrained_layout.w_pad figure.constrained_layout.wspace figure.dpi figure.edgecolor figure.facecolor figure.figsize figure.frameon figure.max_open_warning figure.raise_window figure.subplot.bottom figure.subplot.hspace figure.subplot.left figure.subplot.right figure.subplot.top figure.subplot.wspace figure.titlesize figure.titleweight font.cursive font.family font.fantasy font.monospace font.sans-serif font.serif font.size font.stretch font.style font.variant font.weight grid.alpha grid.color grid.linestyle grid.linewidth hatch.color hatch.linewidth hist.bins image.aspect image.cmap image.composite_image image.interpolation image.lut image.origin image.resample interactive keymap.back keymap.copy keymap.forward keymap.fullscreen keymap.grid keymap.grid_minor keymap.help keymap.home keymap.pan keymap.quit keymap.quit_all keymap.save keymap.xscale keymap.yscale keymap.zoom legend.borderaxespad legend.borderpad legend.columnspacing legend.edgecolor legend.facecolor legend.fancybox legend.fontsize legend.framealpha legend.frameon legend.handleheight legend.handlelength legend.handletextpad legend.labelcolor 
legend.labelspacing legend.loc legend.markerscale legend.numpoints legend.scatterpoints legend.shadow legend.title_fontsize lines.antialiased lines.color lines.dash_capstyle lines.dash_joinstyle lines.dashdot_pattern lines.dashed_pattern lines.dotted_pattern lines.linestyle lines.linewidth lines.marker lines.markeredgecolor lines.markeredgewidth lines.markerfacecolor lines.markersize lines.scale_dashes lines.solid_capstyle lines.solid_joinstyle markers.fillstyle mathtext.bf mathtext.cal mathtext.default mathtext.fallback mathtext.fontset mathtext.it mathtext.rm mathtext.sf mathtext.tt patch.antialiased patch.edgecolor patch.facecolor patch.force_edgecolor patch.linewidth path.effects path.simplify path.simplify_threshold path.sketch path.snap pcolor.shading pcolormesh.snap pdf.compression pdf.fonttype pdf.inheritcolor pdf.use14corefonts pgf.preamble pgf.rcfonts pgf.texsystem polaraxes.grid ps.distiller.res ps.fonttype ps.papersize ps.useafm ps.usedistiller savefig.bbox savefig.directory savefig.dpi savefig.edgecolor savefig.facecolor savefig.format savefig.orientation savefig.pad_inches savefig.transparent scatter.edgecolors scatter.marker svg.fonttype svg.hashsalt svg.image_inline text.antialiased text.color text.hinting text.hinting_factor text.kerning_factor text.latex.preamble text.usetex timezone tk.window_focus toolbar webagg.address webagg.open_in_browser webagg.port webagg.port_retries xaxis.labellocation xtick.alignment xtick.bottom xtick.color xtick.direction xtick.labelbottom xtick.labelcolor xtick.labelsize xtick.labeltop xtick.major.bottom xtick.major.pad xtick.major.size xtick.major.top xtick.major.width xtick.minor.bottom xtick.minor.pad xtick.minor.size xtick.minor.top xtick.minor.visible xtick.minor.width xtick.top yaxis.labellocation ytick.alignment ytick.color ytick.direction ytick.labelcolor ytick.labelleft ytick.labelright ytick.labelsize ytick.left ytick.major.left ytick.major.pad ytick.major.right ytick.major.size ytick.major.width ytick.minor.left ytick.minor.pad ytick.minor.right ytick.minor.size ytick.minor.visible ytick.minor.width ytick.right See also The matplotlibrc file find_all(pattern) Return the subset of this RcParams dictionary whose keys match, using re.search(), the given pattern. Note Changes to the returned dictionary are not propagated to the parent RcParams dictionary.
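For instance, find_all can be used to inspect related settings (the pattern is chosen for illustration):
>>> import matplotlib as mpl
>>> grid_params = mpl.rcParams.find_all(r'grid')  # every rcParam whose key matches 'grid'
>>> sorted(grid_params)[:3]
['axes.grid', 'axes.grid.axis', 'axes.grid.which']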
doc_1940
(default: 8) If expand_tabs is true, then all tab characters in text will be expanded to zero or more spaces, depending on the current column and the given tab size. New in version 3.3.
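A sketch assuming this documents the tabsize attribute of textwrap.TextWrapper:
>>> import textwrap
>>> w = textwrap.TextWrapper(expand_tabs=True, tabsize=4)
>>> w.fill("name\tvalue")
'name    value'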
doc_1941
Set the marker edge color. Parameters eccolor
doc_1942
By default, this is set to False. When False, each value from the repeated fields is stored. When set to True, any trailing values which are blank will be stripped from the result. If the underlying field has required=True, but remove_trailing_nulls is True, then null values are only allowed at the end, and will be stripped. Some examples: SplitArrayField(IntegerField(required=True), size=3, remove_trailing_nulls=False) ['1', '2', '3'] # -> [1, 2, 3] ['1', '2', ''] # -> ValidationError - third entry required. ['1', '', '3'] # -> ValidationError - second entry required. ['', '2', ''] # -> ValidationError - first and third entries required. SplitArrayField(IntegerField(required=False), size=3, remove_trailing_nulls=False) ['1', '2', '3'] # -> [1, 2, 3] ['1', '2', ''] # -> [1, 2, None] ['1', '', '3'] # -> [1, None, 3] ['', '2', ''] # -> [None, 2, None] SplitArrayField(IntegerField(required=True), size=3, remove_trailing_nulls=True) ['1', '2', '3'] # -> [1, 2, 3] ['1', '2', ''] # -> [1, 2] ['1', '', '3'] # -> ValidationError - second entry required. ['', '2', ''] # -> ValidationError - first entry required. SplitArrayField(IntegerField(required=False), size=3, remove_trailing_nulls=True) ['1', '2', '3'] # -> [1, 2, 3] ['1', '2', ''] # -> [1, 2] ['1', '', '3'] # -> [1, None, 3] ['', '2', ''] # -> [None, 2]
doc_1943
See Migration guide for more details. tf.compat.v1.math.polygamma, tf.compat.v1.polygamma tf.math.polygamma( a, x, name=None ) The polygamma function is defined as: \(\psi^{(a)}(x) = \frac{d^a}{dx^a} \psi(x)\) where \(\psi(x)\) is the digamma function. The polygamma function is defined only for non-negative integer orders \(a\). Args a A Tensor. Must be one of the following types: float32, float64. x A Tensor. Must have the same type as a. name A name for the operation (optional). Returns A Tensor. Has the same type as a.
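A quick numeric check (the order-1 polygamma at x=1 is the trigamma value π²/6 ≈ 1.6449):
import tensorflow as tf

# polygamma of order a=1 (trigamma) evaluated at x=1
y = tf.math.polygamma(tf.constant(1.0), tf.constant(1.0))
print(y.numpy())  # ~1.6449341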
doc_1944
Return whether the artist is pickable. See also set_picker, get_picker, pick
doc_1945
Return the offsets for the collection.
doc_1946
When using CreateView you have access to self.object, which is the object being created. If the object hasn’t been created yet, the value will be None.
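A sketch of where the distinction matters (the model and fields are hypothetical):
from django.views.generic.edit import CreateView
from myapp.models import Author  # hypothetical model

class AuthorCreateView(CreateView):
    model = Author
    fields = ['name']

    def get_context_data(self, **kwargs):
        # on the initial GET, the form has not been saved yet,
        # so self.object is still None
        assert self.object is None
        return super().get_context_data(**kwargs)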
doc_1947
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterNd tf.raw_ops.ScatterNd( indices, updates, shape, name=None ) Creates a new tensor by applying sparse updates to individual values or slices within a tensor (initially zero for numeric, empty for string) of the given shape according to indices. This operator is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor. This operation is similar to tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values) If indices contains duplicates, then their updates are accumulated (summed). Warning: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates -- because of some numerical approximation issues, numbers summed in different order may yield different results. indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape: indices.shape[-1] <= shape.rank The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape indices.shape[:-1] + shape[indices.shape[-1]:] The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements. In Python, this scatter operation would look like this:
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [0, 11, 0, 10, 9, 0, 0, 12] We can also insert entire slices of a higher-rank tensor all at once. For example, say we want to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values. In Python, this scatter operation would look like this:
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
The resulting tensor would look like this: [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]] Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored. Args indices A Tensor. Must be one of the following types: int32, int64. Index tensor. updates A Tensor. Updates to scatter into output. shape A Tensor. Must have the same type as indices. 1-D. The shape of the resulting tensor. name A name for the operation (optional). Returns A Tensor. Has the same type as updates.
doc_1948
Returns a true division of the inputs, element-wise. Unlike ‘floor division’, true division adjusts the output type to present the best answer, regardless of input types. Parameters x1array_like Dividend array. x2array_like Divisor array. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output). outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. wherearray_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs For other keyword-only arguments, see the ufunc docs. Returns outndarray or scalar This is a scalar if both x1 and x2 are scalars. Notes In Python, // is the floor division operator and / the true division operator. The true_divide(x1, x2) function is equivalent to true division in Python. Examples >>> x = np.arange(5) >>> np.true_divide(x, 4) array([ 0. , 0.25, 0.5 , 0.75, 1. ]) >>> x/4 array([ 0. , 0.25, 0.5 , 0.75, 1. ]) >>> x//4 array([0, 0, 0, 0, 1]) The / operator can be used as a shorthand for np.true_divide on ndarrays. >>> x = np.arange(5) >>> x / 4 array([0. , 0.25, 0.5 , 0.75, 1. ])
doc_1949
Cumulative max for each group. Returns Series or DataFrame See also Series.groupby Apply a function groupby to a Series. DataFrame.groupby Apply a function groupby to each row or column of a DataFrame.
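For example:
>>> import pandas as pd
>>> df = pd.DataFrame({'g': ['a', 'a', 'a', 'b'], 'v': [1, 3, 2, 5]})
>>> df.groupby('g')['v'].cummax()
0    1
1    3
2    3
3    5
Name: v, dtype: int64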
doc_1950
Flush and close this stream. This method has no effect if the file is already closed. Once the file is closed, any operation on the file (e.g. reading or writing) will raise a ValueError. As a convenience, it is allowed to call this method more than once; only the first call, however, will have an effect.
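For example:
>>> f = open('data.txt', 'w')
>>> f.close()
>>> f.close()     # a second close is a no-op
>>> f.write('x')  # any further operation fails
Traceback (most recent call last):
  ...
ValueError: I/O operation on closed file.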
doc_1951
Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the transformers of the ColumnTransformer. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
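A short sketch showing the nested parameters (the transformer name 'scale' is illustrative):
>>> from sklearn.compose import ColumnTransformer
>>> from sklearn.preprocessing import StandardScaler
>>> ct = ColumnTransformer([('scale', StandardScaler(), [0, 1])])
>>> 'scale__with_mean' in ct.get_params()  # parameters of the contained estimators
True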
doc_1952
Execute a code object. When an exception occurs, showtraceback() is called to display a traceback. All exceptions are caught except SystemExit, which is allowed to propagate. A note about KeyboardInterrupt: this exception may occur elsewhere in this code, and may not always be caught. The caller should be prepared to deal with it.
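A minimal sketch, assuming this documents code.InteractiveInterpreter.runcode:
>>> from code import InteractiveInterpreter
>>> interp = InteractiveInterpreter()
>>> interp.runcode(compile('print(6 * 7)', '<input>', 'exec'))
42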
doc_1953
Type code  C Type              Python Type        Minimum size in bytes  Notes
'b'        signed char         int                1
'B'        unsigned char       int                1
'u'        wchar_t             Unicode character  2                      (1)
'h'        signed short        int                2
'H'        unsigned short      int                2
'i'        signed int          int                2
'I'        unsigned int        int                2
'l'        signed long         int                4
'L'        unsigned long       int                4
'q'        signed long long    int                8
'Q'        unsigned long long  int                8
'f'        float               float              4
'd'        double              float              8
Notes: (1) It can be 16 bits or 32 bits depending on the platform. Changed in version 3.9: array('u') now uses wchar_t as C type instead of deprecated Py_UNICODE. This change doesn’t affect its behavior because Py_UNICODE is an alias of wchar_t since Python 3.3. Deprecated since version 3.3, will be removed in version 4.0. The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed through the itemsize attribute. The module defines the following type: class array.array(typecode[, initializer]) A new array whose items are restricted by typecode, and initialized from the optional initializer value, which must be a list, a bytes-like object, or iterable over elements of the appropriate type. If given a list or string, the initializer is passed to the new array’s fromlist(), frombytes(), or fromunicode() method (see below) to add initial items to the array. Otherwise, the iterable initializer is passed to the extend() method. Raises an auditing event array.__new__ with arguments typecode, initializer. array.typecodes A string with all available type codes. Array objects support the ordinary sequence operations of indexing, slicing, concatenation, and multiplication. When using slice assignment, the assigned value must be an array object with the same type code; in all other cases, TypeError is raised. Array objects also implement the buffer interface, and may be used wherever bytes-like objects are supported. The following data items and methods are also supported: array.typecode The typecode character used to create the array. array.itemsize The length in bytes of one array item in the internal representation. array.append(x) Append a new item with value x to the end of the array. array.buffer_info() Return a tuple (address, length) giving the current memory address and the length in elements of the buffer used to hold array’s contents. The size of the memory buffer in bytes can be computed as array.buffer_info()[1] * array.itemsize. This is occasionally useful when working with low-level (and inherently unsafe) I/O interfaces that require memory addresses, such as certain ioctl() operations. The returned numbers are valid as long as the array exists and no length-changing operations are applied to it. Note When using array objects from code written in C or C++ (the only way to effectively make use of this information), it makes more sense to use the buffer interface supported by array objects. This method is maintained for backward compatibility and should be avoided in new code. The buffer interface is documented in Buffer Protocol. array.byteswap() “Byteswap” all items of the array. This is only supported for values which are 1, 2, 4, or 8 bytes in size; for other types of values, RuntimeError is raised. It is useful when reading data from a file written on a machine with a different byte order. array.count(x) Return the number of occurrences of x in the array. array.extend(iterable) Append items from iterable to the end of the array. 
If iterable is another array, it must have exactly the same type code; if not, TypeError will be raised. If iterable is not an array, it must be iterable and its elements must be the right type to be appended to the array. array.frombytes(s) Appends items from the string, interpreting the string as an array of machine values (as if it had been read from a file using the fromfile() method). New in version 3.2: fromstring() is renamed to frombytes() for clarity. array.fromfile(f, n) Read n items (as machine values) from the file object f and append them to the end of the array. If less than n items are available, EOFError is raised, but the items that were available are still inserted into the array. array.fromlist(list) Append items from the list. This is equivalent to for x in list: a.append(x) except that if there is a type error, the array is unchanged. array.fromunicode(s) Extends this array with data from the given unicode string. The array must be a type 'u' array; otherwise a ValueError is raised. Use array.frombytes(unicodestring.encode(enc)) to append Unicode data to an array of some other type. array.index(x) Return the smallest i such that i is the index of the first occurrence of x in the array. array.insert(i, x) Insert a new item with value x in the array before position i. Negative values are treated as being relative to the end of the array. array.pop([i]) Removes the item with the index i from the array and returns it. The optional argument defaults to -1, so that by default the last item is removed and returned. array.remove(x) Remove the first occurrence of x from the array. array.reverse() Reverse the order of the items in the array. array.tobytes() Convert the array to an array of machine values and return the bytes representation (the same sequence of bytes that would be written to a file by the tofile() method.) New in version 3.2: tostring() is renamed to tobytes() for clarity. array.tofile(f) Write all items (as machine values) to the file object f. array.tolist() Convert the array to an ordinary list with the same items. array.tounicode() Convert the array to a unicode string. The array must be a type 'u' array; otherwise a ValueError is raised. Use array.tobytes().decode(enc) to obtain a unicode string from an array of some other type. When an array object is printed or converted to a string, it is represented as array(typecode, initializer). The initializer is omitted if the array is empty, otherwise it is a string if the typecode is 'u', otherwise it is a list of numbers. The string is guaranteed to be able to be converted back to an array with the same type and value using eval(), so long as the array class has been imported using from array import array. Examples: array('l') array('u', 'hello \u2641') array('l', [1, 2, 3, 4, 5]) array('d', [1.0, 2.0, 3.14]) See also Module struct Packing and unpacking of heterogeneous binary data. Module xdrlib Packing and unpacking of External Data Representation (XDR) data as used in some remote procedure call systems. The Numerical Python Documentation The Numeric Python extension (NumPy) defines another array type; see http://www.numpy.org/ for further information about Numerical Python.
doc_1954
Returns the matrix exponential. Supports batched input. For a matrix A, the matrix exponential is defined as \(\mathrm{e}^A = \sum_{k=0}^{\infty} A^k / k!\). The implementation is based on: Bader, P.; Blanes, S.; Casas, F. Computing the Matrix Exponential with an Optimized Taylor Polynomial Approximation. Mathematics 2019, 7, 1174. Parameters input (Tensor) – the input tensor. Example: >>> a = torch.randn(2, 2, 2) >>> a[0, :, :] = torch.eye(2, 2) >>> a[1, :, :] = 2 * torch.eye(2, 2) >>> a tensor([[[1., 0.], [0., 1.]], [[2., 0.], [0., 2.]]]) >>> torch.matrix_exp(a) tensor([[[2.7183, 0.0000], [0.0000, 2.7183]], [[7.3891, 0.0000], [0.0000, 7.3891]]]) >>> import math >>> x = torch.tensor([[0, math.pi/3], [-math.pi/3, 0]]) >>> x.matrix_exp() # should be [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]] tensor([[ 0.5000, 0.8660], [-0.8660, 0.5000]])
doc_1955
Toplevel widget of Tix which represents mostly the main window of an application. It has an associated Tcl interpreter. The classes in the tkinter.tix module subclass the classes in tkinter. The former imports the latter, so to use tkinter.tix with Tkinter, all you need to do is to import one module. In general, you can just import tkinter.tix, and replace the toplevel call to tkinter.Tk with tix.Tk:
from tkinter import tix
from tkinter.constants import *
root = tix.Tk()
doc_1956
Set to True by stop() when the execution of tests should stop.
doc_1957
alias of numpy.str_
doc_1958
Constructs a test suite that matches the test labels provided. test_labels is a list of strings describing the tests to be run. A test label can take one of four forms: path.to.test_module.TestCase.test_method – Run a single test method in a test case. path.to.test_module.TestCase – Run all the test methods in a test case. path.to.module – Search for and run all tests in the named Python package or module. path/to/directory – Search for and run all tests below the named directory. If test_labels has a value of None, the test runner will search for tests in all files below the current directory whose names match its pattern. Deprecated since version 4.0: extra_tests is a list of extra TestCase instances to add to the suite that is executed by the test runner. These extra tests are run in addition to those discovered in the modules listed in test_labels. Returns a TestSuite instance ready to be run.
doc_1959
The request was not successfully authenticated, and the highest priority authentication class does not use WWW-Authenticate headers. — An HTTP 403 Forbidden response will be returned. The request was not successfully authenticated, and the highest priority authentication class does use WWW-Authenticate headers. — An HTTP 401 Unauthorized response, with an appropriate WWW-Authenticate header will be returned. Object level permissions REST framework permissions also support object-level permissioning. Object level permissions are used to determine if a user should be allowed to act on a particular object, which will typically be a model instance. Object level permissions are run by REST framework's generic views when .get_object() is called. As with view level permissions, an exceptions.PermissionDenied exception will be raised if the user is not allowed to act on the given object. If you're writing your own views and want to enforce object level permissions, or if you override the get_object method on a generic view, then you'll need to explicitly call the .check_object_permissions(request, obj) method on the view at the point at which you've retrieved the object. This will either raise a PermissionDenied or NotAuthenticated exception, or simply return if the view has the appropriate permissions. For example:
def get_object(self):
    obj = get_object_or_404(self.get_queryset(), pk=self.kwargs["pk"])
    self.check_object_permissions(self.request, obj)
    return obj
Note: With the exception of DjangoObjectPermissions, the provided permission classes in rest_framework.permissions do not implement the methods necessary to check object permissions. If you wish to use the provided permission classes in order to check object permissions, you must subclass them and implement the has_object_permission() method described in the Custom permissions section (below). Limitations of object level permissions For performance reasons the generic views will not automatically apply object level permissions to each instance in a queryset when returning a list of objects. Often when you're using object level permissions you'll also want to filter the queryset appropriately, to ensure that users only have visibility onto instances that they are permitted to view. Because the get_object() method is not called, object level permissions from the has_object_permission() method are not applied when creating objects. In order to restrict object creation you need to implement the permission check either in your Serializer class or override the perform_create() method of your ViewSet class. Setting the permission policy The default permission policy may be set globally, using the DEFAULT_PERMISSION_CLASSES setting. For example:
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
    ]
}
If not specified, this setting defaults to allowing unrestricted access:
'DEFAULT_PERMISSION_CLASSES': [
    'rest_framework.permissions.AllowAny',
]
You can also set the permission policy on a per-view, or per-viewset basis, using the APIView class-based views.
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

class ExampleView(APIView):
    permission_classes = [IsAuthenticated]

    def get(self, request, format=None):
        content = {
            'status': 'request was permitted'
        }
        return Response(content)
Or, if you're using the @api_view decorator with function based views.
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response

@api_view(['GET'])
@permission_classes([IsAuthenticated])
def example_view(request, format=None):
    content = {
        'status': 'request was permitted'
    }
    return Response(content)
Note: when you set new permission classes via the class attribute or decorators you're telling the view to ignore the default list set in the settings.py file. Provided they inherit from rest_framework.permissions.BasePermission, permissions can be composed using standard Python bitwise operators. For example, IsAuthenticatedOrReadOnly could be written:
from rest_framework.permissions import BasePermission, IsAuthenticated, SAFE_METHODS
from rest_framework.response import Response
from rest_framework.views import APIView

class ReadOnly(BasePermission):
    def has_permission(self, request, view):
        return request.method in SAFE_METHODS

class ExampleView(APIView):
    permission_classes = [IsAuthenticated|ReadOnly]

    def get(self, request, format=None):
        content = {
            'status': 'request was permitted'
        }
        return Response(content)
Note: it supports & (and), | (or) and ~ (not). API Reference AllowAny The AllowAny permission class will allow unrestricted access, regardless of whether the request was authenticated or unauthenticated. This permission is not strictly required, since you can achieve the same result by using an empty list or tuple for the permissions setting, but you may find it useful to specify this class because it makes the intention explicit. IsAuthenticated The IsAuthenticated permission class will deny permission to any unauthenticated user, and allow permission otherwise. This permission is suitable if you want your API to only be accessible to registered users. IsAdminUser The IsAdminUser permission class will deny permission to any user, unless user.is_staff is True in which case permission will be allowed. This permission is suitable if you want your API to only be accessible to a subset of trusted administrators. IsAuthenticatedOrReadOnly The IsAuthenticatedOrReadOnly permission class will allow authenticated users to perform any request. Requests for unauthorised users will only be permitted if the request method is one of the "safe" methods: GET, HEAD or OPTIONS. This permission is suitable if you want your API to allow read permissions to anonymous users, and only allow write permissions to authenticated users. DjangoModelPermissions This permission class ties into Django's standard django.contrib.auth model permissions. This permission must only be applied to views that have a .queryset property or get_queryset() method. Authorization will only be granted if the user is authenticated and has the relevant model permissions assigned. POST requests require the user to have the add permission on the model. PUT and PATCH requests require the user to have the change permission on the model. DELETE requests require the user to have the delete permission on the model. The default behaviour can also be overridden to support custom model permissions. For example, you might want to include a view model permission for GET requests. To use custom model permissions, override DjangoModelPermissions and set the .perms_map property, as sketched below. Refer to the source code for details. DjangoModelPermissionsOrAnonReadOnly Similar to DjangoModelPermissions, but also allows unauthenticated users to have read-only access to the API.
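A hedged sketch of the perms_map override mentioned under DjangoModelPermissions above; the subclass name is illustrative, and the format strings mirror the defaults:

from copy import deepcopy
from rest_framework.permissions import DjangoModelPermissions

class DjangoModelPermissionsWithView(DjangoModelPermissions):
    # Copy the default map, then require the 'view' model permission
    # for read requests as well as the add/change/delete defaults.
    perms_map = deepcopy(DjangoModelPermissions.perms_map)
    perms_map['GET'] = ['%(app_label)s.view_%(model_name)s']
    perms_map['HEAD'] = ['%(app_label)s.view_%(model_name)s']
    perms_map['OPTIONS'] = ['%(app_label)s.view_%(model_name)s']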
DjangoObjectPermissions This permission class ties into Django's standard object permissions framework that allows per-object permissions on models. In order to use this permission class, you'll also need to add a permission backend that supports object-level permissions, such as django-guardian. As with DjangoModelPermissions, this permission must only be applied to views that have a .queryset property or .get_queryset() method. Authorization will only be granted if the user is authenticated and has the relevant per-object permissions and relevant model permissions assigned. POST requests require the user to have the add permission on the model instance. PUT and PATCH requests require the user to have the change permission on the model instance. DELETE requests require the user to have the delete permission on the model instance. Note that DjangoObjectPermissions does not require the django-guardian package, and should support other object-level backends equally well. As with DjangoModelPermissions you can use custom model permissions by overriding DjangoObjectPermissions and setting the .perms_map property. Refer to the source code for details. Note: If you need object level view permissions for GET, HEAD and OPTIONS requests and are using django-guardian for your object-level permissions backend, you'll want to consider using the DjangoObjectPermissionsFilter class provided by the djangorestframework-guardian package. It ensures that list endpoints only return results including objects for which the user has appropriate view permissions. Custom permissions To implement a custom permission, override BasePermission and implement either, or both, of the following methods: .has_permission(self, request, view) .has_object_permission(self, request, view, obj) The methods should return True if the request should be granted access, and False otherwise. If you need to test if a request is a read operation or a write operation, you should check the request method against the constant SAFE_METHODS, which is a tuple containing 'GET', 'OPTIONS' and 'HEAD'. For example: if request.method in permissions.SAFE_METHODS: # Check permissions for read-only request else: # Check permissions for write request Note: The instance-level has_object_permission method will only be called if the view-level has_permission checks have already passed. Also note that in order for the instance-level checks to run, the view code should explicitly call .check_object_permissions(request, obj). If you are using the generic views then this will be handled for you by default. (Function-based views will need to check object permissions explicitly, raising PermissionDenied on failure.) Custom permissions will raise a PermissionDenied exception if the test fails. To change the error message associated with the exception, implement a message attribute directly on your custom permission. Otherwise the default_detail attribute from PermissionDenied will be used. Similarly, to change the code identifier associated with the exception, implement a code attribute directly on your custom permission - otherwise the default_code attribute from PermissionDenied will be used. from rest_framework import permissions class CustomerAccessPermission(permissions.BasePermission): message = 'Adding customers not allowed.' def has_permission(self, request, view): ... Examples The following is an example of a permission class that checks the incoming request's IP address against a blocklist, and denies the request if the IP has been blocked. 
from rest_framework import permissions

class BlocklistPermission(permissions.BasePermission):
    """
    Global permission check for blocked IPs.
    """

    def has_permission(self, request, view):
        ip_addr = request.META['REMOTE_ADDR']
        blocked = Blocklist.objects.filter(ip_addr=ip_addr).exists()
        return not blocked
As well as global permissions, that are run against all incoming requests, you can also create object-level permissions, that are only run against operations that affect a particular object instance. For example:
class IsOwnerOrReadOnly(permissions.BasePermission):
    """
    Object-level permission to only allow owners of an object to edit it.
    Assumes the model instance has an `owner` attribute.
    """

    def has_object_permission(self, request, view, obj):
        # Read permissions are allowed to any request,
        # so we'll always allow GET, HEAD or OPTIONS requests.
        if request.method in permissions.SAFE_METHODS:
            return True

        # Instance must have an attribute named `owner`.
        return obj.owner == request.user
Note that the generic views will check the appropriate object level permissions, but if you're writing your own custom views, you'll need to make sure you perform the object level permission checks yourself. You can do so by calling self.check_object_permissions(request, obj) from the view once you have the object instance. This call will raise an appropriate APIException if any object-level permission checks fail, and will otherwise simply return. Also note that the generic views will only check the object-level permissions for views that retrieve a single model instance. If you require object-level filtering of list views, you'll need to filter the queryset separately. See the filtering documentation for more details. Overview of access restriction methods REST framework offers three different methods to customize access restrictions on a case-by-case basis. These apply in different scenarios and have different effects and limitations. queryset/get_queryset(): Limits the general visibility of existing objects from the database. The queryset limits which objects will be listed and which objects can be modified or deleted. The get_queryset() method can apply different querysets based on the current action. permission_classes/get_permissions(): General permission checks based on the current action, request and targeted object. Object level permissions can only be applied to retrieve, modify and deletion actions. Permission checks for list and create will be applied to the entire object type. (In case of list: subject to restrictions in the queryset.) serializer_class/get_serializer(): Instance level restrictions that apply to all objects on input and output. The serializer may have access to the request context. The get_serializer() method can apply different serializers based on the current action. The following table lists the access restriction methods and the level of control they offer over which actions.

                                     queryset   permission_classes   serializer_class
Action: list                         global     no                   object-level*
Action: create                       no         global               object-level
Action: retrieve                     global     object-level         object-level
Action: update                       global     object-level         object-level
Action: partial_update               global     object-level         object-level
Action: destroy                      global     object-level         no
Can reference action in decision     no**       yes                  no**
Can reference request in decision    no**       yes                  yes

* A Serializer class should not raise PermissionDenied in a list action, or the entire list would not be returned.
** The get_*() methods have access to the current view and can return different Serializer or QuerySet instances based on the request or action. Third party packages The following third party packages are also available. DRF - Access Policy The Django REST - Access Policy package provides a way to define complex access rules in declarative policy classes that are attached to view sets or function-based views. The policies are defined in JSON in a format similar to AWS' Identity & Access Management policies. Composed Permissions The Composed Permissions package provides a simple way to define complex and multi-depth (with logic operators) permission objects, using small and reusable components. REST Condition The REST Condition package is another extension for building complex permissions in a simple and convenient way. The extension allows you to combine permissions with logical operators. DRY Rest Permissions The DRY Rest Permissions package provides the ability to define different permissions for individual default and custom actions. This package is made for apps with permissions that are derived from relationships defined in the app's data model. It also supports permission checks being returned to a client app through the API's serializer. Additionally it supports adding permissions to the default and custom list actions to restrict the data they retrieve per user. Django Rest Framework Roles The Django Rest Framework Roles package makes it easier to parameterize your API over multiple types of users. Django REST Framework API Key The Django REST Framework API Key package provides permissions classes, models and helpers to add API key authorization to your API. It can be used to authorize internal or third-party backends and services (i.e. machines) which do not have a user account. API keys are stored securely using Django's password hashing infrastructure, and they can be viewed, edited and revoked at anytime in the Django admin. Django Rest Framework Role Filters The Django Rest Framework Role Filters package provides simple filtering over multiple types of roles. Django Rest Framework PSQ The Django Rest Framework PSQ package is an extension that gives support for having action-based permission_classes, serializer_class, and queryset dependent on permission-based rules.
doc_1960
A one-dimensional polynomial class. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. A convenience class, used to encapsulate “natural” operations on polynomials so that said operations may take on their customary form in code (see Examples). Parameters c_or_r array_like The polynomial’s coefficients, in decreasing powers, or if the value of the second parameter is True, the polynomial’s roots (values where the polynomial evaluates to 0). For example, poly1d([1, 2, 3]) returns an object that represents \(x^2 + 2x + 3\), whereas poly1d([1, 2, 3], True) returns one that represents \((x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x -6\). r bool, optional If True, c_or_r specifies the polynomial’s roots; the default is False. variable str, optional Changes the variable used when printing p from x to variable (see Examples). Examples Construct the polynomial \(x^2 + 2x + 3\):
>>> p = np.poly1d([1, 2, 3])
>>> print(np.poly1d(p))
   2
1 x + 2 x + 3
Evaluate the polynomial at \(x = 0.5\):
>>> p(0.5)
4.25
Find the roots:
>>> p.r
array([-1.+1.41421356j, -1.-1.41421356j])
>>> p(p.r)
array([ -4.44089210e-16+0.j, -4.44089210e-16+0.j]) # may vary
These numbers in the previous line represent (0, 0) to machine precision. Show the coefficients:
>>> p.c
array([1, 2, 3])
Display the order (the leading zero-coefficients are removed):
>>> p.order
2
Show the coefficient of the k-th power in the polynomial (which is equivalent to p.c[-(k+1)]):
>>> p[1]
2
Polynomials can be added, subtracted, multiplied, and divided (returns quotient and remainder):
>>> p * p
poly1d([ 1, 4, 10, 12, 9])
>>> (p**3 + 4) / p
(poly1d([ 1., 4., 10., 12., 9.]), poly1d([4.]))
asarray(p) gives the coefficient array, so polynomials can be used in all functions that accept arrays:
>>> p**2 # square of polynomial
poly1d([ 1, 4, 10, 12, 9])
>>> np.square(p) # square of individual coefficients
array([1, 4, 9])
The variable used in the string representation of p can be modified, using the variable parameter:
>>> p = np.poly1d([1,2,3], variable='z')
>>> print(p)
   2
1 z + 2 z + 3
Construct a polynomial from its roots:
>>> np.poly1d([1, 2], True)
poly1d([ 1., -3., 2.])
This is the same polynomial as obtained by:
>>> np.poly1d([1, -1]) * np.poly1d([1, -2])
poly1d([ 1, -3, 2])
Attributes
c The polynomial coefficients
coef The polynomial coefficients
coefficients The polynomial coefficients
coeffs The polynomial coefficients
o The order or degree of the polynomial
order The order or degree of the polynomial
r The roots of the polynomial, where self(x) == 0
roots The roots of the polynomial, where self(x) == 0
variable The name of the polynomial variable
Methods
__call__(val) Call self as a function.
deriv([m]) Return a derivative of this polynomial.
integ([m, k]) Return an antiderivative (indefinite integral) of this polynomial.
doc_1961
New in Django 4.0. Returns the context for rendering a formset in a template. The available context is: formset : The instance of the formset.
doc_1962
tf.nn.nce_loss(
    weights, biases, labels, inputs, num_sampled, num_classes, num_true=1,
    sampled_values=None, remove_accidental_hits=False, name='nce_loss'
)
See Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. Also see our Candidate Sampling Algorithms Reference. A common use case is to use this method for training, and calculate the full sigmoid loss for evaluation or inference as in the following example:
if mode == "train":
    loss = tf.nn.nce_loss(
        weights=weights,
        biases=biases,
        labels=labels,
        inputs=inputs,
        ...)
elif mode == "eval":
    logits = tf.matmul(inputs, tf.transpose(weights))
    logits = tf.nn.bias_add(logits, biases)
    labels_one_hot = tf.one_hot(labels, n_classes)
    loss = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=labels_one_hot,
        logits=logits)
    loss = tf.reduce_sum(loss, axis=1)
Note: when doing embedding lookup on weights and bias, "div" partition strategy will be used. Support for other partition strategies will be added later. Note: By default this uses a log-uniform (Zipfian) distribution for sampling, so your labels must be sorted in order of decreasing frequency to achieve good results. For more details, see tf.random.log_uniform_candidate_sampler. Note: In the case where num_true > 1, we assign to each target class the target probability 1 / num_true so that the target probabilities sum to 1 per-example. Note: It would be useful to allow a variable number of target classes per example. We hope to provide this functionality in a future release. For now, if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class. Args weights A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-partitioned) class embeddings. biases A Tensor of shape [num_classes]. The class biases. labels A Tensor of type int64 and shape [batch_size, num_true]. The target classes. inputs A Tensor of shape [batch_size, dim]. The forward activations of the input network. num_sampled An int. The number of negative classes to randomly sample per batch. This single sample of negative classes is evaluated for each element in the batch. num_classes An int. The number of possible classes. num_true An int. The number of target classes per training example. sampled_values a tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. (if None, we default to log_uniform_candidate_sampler) remove_accidental_hits A bool. Whether to remove "accidental hits" where a sampled class equals one of the target classes. If set to True, this is a "Sampled Logistic" loss instead of NCE, and we are learning to generate log-odds instead of log probabilities. See our Candidate Sampling Algorithms Reference. Default is False. name A name for the operation (optional). Returns A batch_size 1-D tensor of per-example NCE losses.
doc_1963
Returns an instance of LoggerAdapter initialized with an underlying Logger instance and a dict-like object. process(msg, kwargs) Modifies the message and/or keyword arguments passed to a logging call in order to insert contextual information. This implementation takes the object passed as extra to the constructor and adds it to kwargs using key ‘extra’. The return value is a (msg, kwargs) tuple which has the (possibly modified) versions of the arguments passed in.
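A minimal sketch, with illustrative context keys:

import logging

logging.basicConfig(format='%(message)s (ip=%(ip)s, user=%(user)s)')
logger = logging.getLogger(__name__)
adapter = logging.LoggerAdapter(logger, {'ip': '203.0.113.7', 'user': 'alice'})
# process() merges the dict into each call's kwargs under the 'extra' key,
# so the emitted record gains `ip` and `user` attributes.
adapter.warning('login failed')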
doc_1964
Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvTranspose2d for details and output shape. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Parameters input – input tensor of shape (minibatch, in_channels, iH, iW) weight – filters of shape (in_channels, out_channels/groups, kH, kW) bias – optional bias of shape (out_channels). Default: None stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1 padding – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padH, padW). Default: 0 output_padding – additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padH, out_padW). Default: 0 groups – split input into groups; in_channels should be divisible by the number of groups. Default: 1 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 Examples:
>>> # With square kernels and equal stride
>>> inputs = torch.randn(1, 4, 5, 5)
>>> weights = torch.randn(4, 8, 3, 3)
>>> F.conv_transpose2d(inputs, weights, padding=1)
doc_1965
Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed. Parameters dim (int) – the dimension to slice index (int) – the index to select with Note select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
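A minimal sketch of the view semantics:

import torch

t = torch.arange(24).reshape(2, 3, 4)
row = t.select(1, 2)   # same as t[:, 2]; dim 1 is removed
print(row.shape)       # torch.Size([2, 4])
row[0, 0] = -1         # a view: the write shows up in t
print(t[0, 2, 0])      # tensor(-1)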
doc_1966
Extract cookies from HTTP response and store them in the CookieJar, where allowed by policy. The CookieJar will look for allowable Set-Cookie and Set-Cookie2 headers in the response argument, and store cookies as appropriate (subject to the CookiePolicy.set_ok() method’s approval). The response object (usually the result of a call to urllib.request.urlopen(), or similar) should support an info() method, which returns an email.message.Message instance. The request object (usually a urllib.request.Request instance) must support the methods get_full_url(), get_host(), unverifiable(), and origin_req_host attribute, as documented by urllib.request. The request is used to set default values for cookie-attributes as well as for checking that the cookie is allowed to be set. Changed in version 3.3: request object needs origin_req_host attribute. Dependency on a deprecated method get_origin_req_host() has been removed.
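A hedged sketch, assuming network access to an illustrative URL:

import urllib.request
from http.cookiejar import CookieJar

jar = CookieJar()
req = urllib.request.Request("https://example.com/")
resp = urllib.request.urlopen(req)
# Store any Set-Cookie headers from the response that the policy allows.
jar.extract_cookies(resp, req)
print(len(jar))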
doc_1967
Bases: skimage.viewer.plugins.base.Plugin
__init__(maxdist=10, **kwargs) Initialize self. See help(type(self)) for accurate signature.
attach(image_viewer) Attach the plugin to an ImageViewer. Note that the ImageViewer will automatically call this method when the plugin is added to the ImageViewer. For example: viewer += Plugin(...) Also note that attach automatically calls the filter function so that the image matches the filtered value specified by attached widgets.
crop(extents)
help()
name = 'Crop'
reset()
doc_1968
tf.metrics.SpecificityAtSensitivity Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.SpecificityAtSensitivity
tf.keras.metrics.SpecificityAtSensitivity(
    sensitivity, num_thresholds=200, name=None, dtype=None
)
Sensitivity measures the proportion of actual positives that are correctly identified as such (tp / (tp + fn)). Specificity measures the proportion of actual negatives that are correctly identified as such (tn / (tn + fp)). This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the specificity at the given sensitivity. The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. For additional information, consult a standard reference on specificity and sensitivity. Args sensitivity A scalar value in range [0, 1]. num_thresholds (Optional) Defaults to 200. The number of thresholds to use for matching the given sensitivity. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage:
m = tf.keras.metrics.SpecificityAtSensitivity(0.5)
m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8])
m.result().numpy()
0.66666667
m.reset_states()
m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8],
               sample_weight=[1, 1, 2, 2, 2])
m.result().numpy()
0.5
Usage with compile() API:
model.compile(
    optimizer='sgd',
    loss='mse',
    metrics=[tf.keras.metrics.SpecificityAtSensitivity()])
Methods reset_states
reset_states()
Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result
result()
Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state
update_state(
    y_true, y_pred, sample_weight=None
)
Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
doc_1969
See Migration guide for more details. tf.compat.v1.sets.set_union, tf.compat.v1.sets.union
tf.sets.union(
    a, b, validate_indices=True
)
All but the last dimension of a and b must match. Example:
import tensorflow as tf
import collections

# [[{1, 2}, {3}], [{4}, {5, 6}]]
a = collections.OrderedDict([
    ((0, 0, 0), 1),
    ((0, 0, 1), 2),
    ((0, 1, 0), 3),
    ((1, 0, 0), 4),
    ((1, 1, 0), 5),
    ((1, 1, 1), 6),
])
a = tf.sparse.SparseTensor(list(a.keys()), list(a.values()),
                           dense_shape=[2, 2, 2])

# [[{1, 3}, {2}], [{4, 5}, {5, 6, 7, 8}]]
b = collections.OrderedDict([
    ((0, 0, 0), 1),
    ((0, 0, 1), 3),
    ((0, 1, 0), 2),
    ((1, 0, 0), 4),
    ((1, 0, 1), 5),
    ((1, 1, 0), 5),
    ((1, 1, 1), 6),
    ((1, 1, 2), 7),
    ((1, 1, 3), 8),
])
b = tf.sparse.SparseTensor(list(b.keys()), list(b.values()),
                           dense_shape=[2, 2, 4])

# `set_union` is applied to each aligned pair of sets.
tf.sets.union(a, b)

# The result will be equivalent to either of:
#
# np.array([[{1, 2, 3}, {2, 3}], [{4, 5}, {5, 6, 7, 8}]])
#
# collections.OrderedDict([
#     ((0, 0, 0), 1),
#     ((0, 0, 1), 2),
#     ((0, 0, 2), 3),
#     ((0, 1, 0), 2),
#     ((0, 1, 1), 3),
#     ((1, 0, 0), 4),
#     ((1, 0, 1), 5),
#     ((1, 1, 0), 5),
#     ((1, 1, 1), 6),
#     ((1, 1, 2), 7),
#     ((1, 1, 3), 8),
# ])
Args a Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order. b Tensor or SparseTensor of the same type as a. If sparse, indices must be sorted in row-major order. validate_indices Whether to validate the order and range of sparse indices in a and b. Returns A SparseTensor whose shape is the same rank as a and b, and all but the last dimension the same. Elements along the last dimension contain the unions.
doc_1970
See Migration guide for more details. tf.compat.v1.raw_ops.CudnnRNNCanonicalToParams
tf.raw_ops.CudnnRNNCanonicalToParams(
    num_layers, num_units, input_size, weights, biases, rnn_mode='lstm',
    input_mode='linear_input', direction='unidirectional', dropout=0, seed=0,
    seed2=0, name=None
)
Writes a set of weights into the opaque params buffer so they can be used in upcoming training or inferences. Note that the params buffer may not be compatible across different GPUs. So any save and restoration should be converted to and from the canonical weights and biases.
num_layers: Specifies the number of layers in the RNN model.
num_units: Specifies the size of the hidden state.
input_size: Specifies the size of the input state.
weights: the canonical form of weights that can be used for saving and restoration. They are more likely to be compatible across different generations.
biases: the canonical form of biases that can be used for saving and restoration. They are more likely to be compatible across different generations.
num_params: number of parameter sets for all layers. Each layer may contain multiple parameter sets, with each set consisting of a weight matrix and a bias vector.
rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and the actual computation before the first layer. 'skip_input' is only allowed when input_size == num_units; 'auto_select' implies 'skip_input' when input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. dir = (direction == bidirectional) ? 2 : 1
dropout: dropout probability. When set to 0., dropout is disabled.
seed: the 1st part of a seed to initialize dropout.
seed2: the 2nd part of a seed to initialize dropout.
Args num_layers A Tensor of type int32. num_units A Tensor of type int32. input_size A Tensor of type int32. weights A list of at least 1 Tensor objects with the same type in: half, float32, float64. biases A list with the same length as weights of Tensor objects with the same type as weights. rnn_mode An optional string from: "rnn_relu", "rnn_tanh", "lstm", "gru". Defaults to "lstm". input_mode An optional string from: "linear_input", "skip_input", "auto_select". Defaults to "linear_input". direction An optional string from: "unidirectional", "bidirectional". Defaults to "unidirectional". dropout An optional float. Defaults to 0. seed An optional int. Defaults to 0. seed2 An optional int. Defaults to 0. name A name for the operation (optional). Returns A Tensor. Has the same type as weights.
doc_1971
Compute number of output features. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data. yNone Ignored. Returns selfobject Fitted transformer.
doc_1972
Whether the OpenSSL library has built-in support for the TLS 1.3 protocol. New in version 3.7.
doc_1973
This function causes the cgitb module to take over the interpreter’s default handling for exceptions by setting the value of sys.excepthook. The optional argument display defaults to 1 and can be set to 0 to suppress sending the traceback to the browser. If the argument logdir is present, the traceback reports are written to files. The value of logdir should be a directory where these files will be placed. The optional argument context is the number of lines of context to display around the current line of source code in the traceback; this defaults to 5. If the optional argument format is "html", the output is formatted as HTML. Any other value forces plain text output. The default value is "html".
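A minimal sketch; the log directory is illustrative:

import cgitb

# Write plain-text tracebacks to files instead of sending HTML to the browser.
cgitb.enable(display=0, logdir="/tmp/cgitb-logs", context=5, format="text")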
doc_1974
Bases: object __call__(renderer)[source] Call self as a function.
doc_1975
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterMax tf.raw_ops.ScatterMax( ref, indices, updates, use_locking=False, name=None ) This operation computes # Scalar indices ref[indices, ...] = max(ref[indices, ...], updates[...]) # Vector indices (for each i) ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...]) # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...]) This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions combine. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32, int64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to reduce into ref. use_locking An optional bool. Defaults to False. If True, the update will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
doc_1976
The sort command is a variant of search with sorting semantics for the results. Returned data contains a space separated list of matching message numbers. Sort has two arguments before the search_criterion argument(s); a parenthesized list of sort_criteria, and the searching charset. Note that unlike search, the searching charset argument is mandatory. There is also a uid sort command which corresponds to sort the way that uid search corresponds to search. The sort command first searches the mailbox for messages that match the given searching criteria using the charset argument for the interpretation of strings in the searching criteria. It then returns the numbers of matching messages. This is an IMAP4rev1 extension command.
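A hedged sketch, assuming valid credentials on an illustrative server:

import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user", "password")
conn.select("INBOX")
# Sort criteria first, then the mandatory charset, then search criteria.
typ, data = conn.sort("(DATE)", "UTF-8", "ALL")
print(data[0].split())  # matching message numbers, ordered by date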
doc_1977
Compare this network to other. In this comparison only the network addresses are considered; host bits aren’t. Returns either -1, 0 or 1. >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.2/32')) -1 >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.0/32')) 1 >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.1/32')) 0 Deprecated since version 3.7: It uses the same ordering and comparison algorithm as “<”, “==”, and “>”
doc_1978
Initialize QApplication. The QApplication needs to be initialized before creating any QWidgets.
doc_1979
Raised when an operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.
doc_1980
Runs the application on a local development server. Do not use run() in a production setting. It is not intended to meet security and performance requirements for a production server. Instead, see Deployment Options for WSGI server recommendations. If the debug flag is set the server will automatically reload for code changes and show a debugger in case an exception happened. If you want to run the application in debug mode, but disable the code execution on the interactive debugger, you can pass use_evalex=False as parameter. This will keep the debugger’s traceback screen active, but disable code execution. It is not recommended to use this function for development with automatic reloading as this is badly supported. Instead you should be using the flask command line script’s run support. Keep in Mind Flask will suppress any server error with a generic error page unless it is in debug mode. As such to enable just the interactive debugger without the code reloading, you have to invoke run() with debug=True and use_reloader=False. Setting use_debugger to True without being in debug mode won’t catch any exceptions because there won’t be any to catch. Parameters host (Optional[str]) – the hostname to listen on. Set this to '0.0.0.0' to have the server available externally as well. Defaults to '127.0.0.1' or the host in the SERVER_NAME config variable if present. port (Optional[int]) – the port of the webserver. Defaults to 5000 or the port defined in the SERVER_NAME config variable if present. debug (Optional[bool]) – if given, enable or disable debug mode. See debug. load_dotenv (bool) – Load the nearest .env and .flaskenv files to set environment variables. Will also change the working directory to the directory containing the first file found. options (Any) – the options to be forwarded to the underlying Werkzeug server. See werkzeug.serving.run_simple() for more information. Return type None Changelog Changed in version 1.0: If installed, python-dotenv will be used to load environment variables from .env and .flaskenv files. If set, the FLASK_ENV and FLASK_DEBUG environment variables will override env and debug. Threaded mode is enabled by default. Changed in version 0.10: The default port is now picked from the SERVER_NAME variable.
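A minimal development-only sketch, keeping the debugger active but disabling the reloader as described above:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    app.run(debug=True, use_reloader=False)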
doc_1981
Return the ZAxis (Axis) instance.
doc_1982
Return x with its trend removed. Parameters xarray or sequence Array or sequence containing the data. key{'default', 'constant', 'mean', 'linear', 'none'} or function The detrending algorithm to use. 'default', 'mean', and 'constant' are the same as detrend_mean. 'linear' is the same as detrend_linear. 'none' is the same as detrend_none. The default is 'mean'. See the corresponding functions for more details regarding the algorithms. Can also be a function that carries out the detrend operation. axisint The axis along which to do the detrending. See also detrend_mean Implementation of the 'mean' algorithm. detrend_linear Implementation of the 'linear' algorithm. detrend_none Implementation of the 'none' algorithm.
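A minimal sketch, assuming matplotlib.mlab exposes this function in your version:

import numpy as np
from matplotlib import mlab

x = 0.5 * np.arange(100) + np.random.randn(100)  # linear trend plus noise
flat = mlab.detrend(x, key='linear')             # subtract the fitted line
print(flat.mean())                               # close to 0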
doc_1983
Write the contents of buffers to file descriptor fd at offset offset, leaving the file offset unchanged. buffers must be a sequence of bytes-like objects. Buffers are processed in array order. The entire contents of the first buffer are written before proceeding to the second, and so on. The flags argument contains a bitwise OR of zero or more of the following flags:
RWF_DSYNC
RWF_SYNC
Return the total number of bytes actually written. The operating system may set a limit (sysconf() value 'SC_IOV_MAX') on the number of buffers that can be used. Combine the functionality of os.writev() and os.pwrite(). Availability: Linux 2.6.30 and newer, FreeBSD 6.0 and newer, OpenBSD 2.7 and newer, AIX 7.1 and newer. Using flags requires Linux 4.7 or newer. New in version 3.7.
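A minimal sketch under the stated availability constraints; the file name is illustrative:

import os

fd = os.open("data.bin", os.O_RDWR | os.O_CREAT)
os.write(fd, b"0123456789")
# Write two buffers back-to-back at offset 2 without moving the file offset.
written = os.pwritev(fd, [b"ab", b"cd"], 2)
print(written)  # 4
os.close(fd)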
doc_1984
Default widget: Select Empty value: '' (an empty string) Normalizes to: A string. Validates that the selected choice exists in the list of choices. Error message keys: required, invalid_choice The field allows choosing from files inside a certain directory. It takes five extra arguments; only path is required: path The absolute path to the directory whose contents you want listed. This directory must exist. recursive If False (the default) only the direct contents of path will be offered as choices. If True, the directory will be descended into recursively and all descendants will be listed as choices. match A regular expression pattern; only files with names matching this expression will be allowed as choices. allow_files Optional. Either True or False. Default is True. Specifies whether files in the specified location should be included. Either this or allow_folders must be True. allow_folders Optional. Either True or False. Default is False. Specifies whether folders in the specified location should be included. Either this or allow_files must be True.
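A hedged sketch of the field in a form, with an illustrative directory path and pattern:

from django import forms

class ReportForm(forms.Form):
    # Offer every PDF below /srv/reports (the path must exist on the server).
    document = forms.FilePathField(
        path="/srv/reports",
        recursive=True,
        match=r".*\.pdf$",
    )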
doc_1985
set the mouse cursor to a system variant
set_system_cursor(constant) -> None
When the mouse cursor is visible, it will be displayed as an operating-system-specific variant of the options below.
Pygame Cursor Constant           Description
--------------------------------------------
pygame.SYSTEM_CURSOR_ARROW       arrow
pygame.SYSTEM_CURSOR_IBEAM       i-beam
pygame.SYSTEM_CURSOR_WAIT        wait
pygame.SYSTEM_CURSOR_CROSSHAIR   crosshair
pygame.SYSTEM_CURSOR_WAITARROW   small wait cursor (or wait if not available)
pygame.SYSTEM_CURSOR_SIZENWSE    double arrow pointing northwest and southeast
pygame.SYSTEM_CURSOR_SIZENESW    double arrow pointing northeast and southwest
pygame.SYSTEM_CURSOR_SIZEWE      double arrow pointing west and east
pygame.SYSTEM_CURSOR_SIZENS      double arrow pointing north and south
pygame.SYSTEM_CURSOR_SIZEALL     four pointed arrow pointing north, south, east, and west
pygame.SYSTEM_CURSOR_NO          slashed circle or crossbones
pygame.SYSTEM_CURSOR_HAND        hand
New in pygame 2.0.0.
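A minimal sketch, assuming a pygame 2.0+ display has been created:

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
# Switch to the OS-native hand cursor while the window is open.
pygame.mouse.set_system_cursor(pygame.SYSTEM_CURSOR_HAND)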
doc_1986
See Migration guide for more details. tf.compat.v1.linalg.pinv
tf.linalg.pinv(
    a, rcond=None, validate_args=False, name=None
)
Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all large singular values. The pseudo-inverse of a matrix A, is defined as: 'the matrix that 'solves' [the least-squares problem] A @ x = b,' i.e., if x_hat is a solution, then A_pinv is the matrix such that x_hat = A_pinv @ b. It can be shown that if U @ Sigma @ V.T = A is the singular value decomposition of A, then A_pinv = V @ inv(Sigma) @ U^T. [(Strang, 1980)][1] This function is analogous to numpy.linalg.pinv. It differs only in default value of rcond. In numpy.linalg.pinv, the default rcond is 1e-15. Here the default is 10. * max(num_rows, num_cols) * np.finfo(dtype).eps. Args a (Batch of) float-like matrix-shaped Tensor(s) which are to be pseudo-inverted. rcond Tensor of small singular value cutoffs. Singular values smaller (in modulus) than rcond * largest_singular_value (again, in modulus) are set to zero. Must broadcast against tf.shape(a)[:-2]. Default value: 10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps. validate_args When True, additional assertions might be embedded in the graph. Default value: False (i.e., no graph assertions are added). name Python str prefixed to ops created by this function. Default value: 'pinv'. Returns a_pinv (Batch of) pseudo-inverse of input a. Has same shape as a except rightmost two dimensions are transposed. Raises TypeError if input a does not have float-like dtype. ValueError if input a has fewer than 2 dimensions. Examples
import tensorflow as tf
import tensorflow_probability as tfp

a = tf.constant([[1.,  0.4,  0.5],
                 [0.4, 0.2,  0.25],
                 [0.5, 0.25, 0.35]])
tf.matmul(tf.linalg.pinv(a), a)
# ==> array([[1., 0., 0.],
#            [0., 1., 0.],
#            [0., 0., 1.]], dtype=float32)

a = tf.constant([[1.,  0.4,  0.5,  1.],
                 [0.4, 0.2,  0.25, 2.],
                 [0.5, 0.25, 0.35, 3.]])
tf.matmul(tf.linalg.pinv(a), a)
# ==> array([[ 0.76,  0.37,  0.21, -0.02],
#            [ 0.37,  0.43, -0.33,  0.02],
#            [ 0.21, -0.33,  0.81,  0.01],
#            [-0.02,  0.02,  0.01,  1.  ]], dtype=float32)
References [1]: G. Strang. 'Linear Algebra and Its Applications, 2nd Ed.' Academic Press, Inc., 1980, pp. 139-142.
doc_1987
A hook for the initial data on admin change forms. By default, fields are given initial values from GET parameters. For instance, ?name=initial_value will set the name field’s initial value to be initial_value. This method should return a dictionary in the form {'fieldname': 'fieldval'}: def get_changeform_initial_data(self, request): return {'name': 'custom_initial_value'}
doc_1988
For each element in self, return a copy of the string with uppercase characters converted to lowercase and vice versa. See also char.swapcase
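A minimal sketch via the equivalent free function np.char.swapcase:

import numpy as np

a = np.array(['Hello', 'WORLD'])
print(np.char.swapcase(a))  # ['hELLO' 'world']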
doc_1989
Method representing the process’s activity. You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
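A minimal sketch of overriding run() in a subclass:

from multiprocessing import Process

class Worker(Process):
    def run(self):
        # Replaces the default behaviour of invoking the `target` callable.
        print("working in", self.name)

if __name__ == "__main__":
    p = Worker()
    p.start()
    p.join()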
doc_1990
Return a new path object representing the user’s home directory (as returned by os.path.expanduser() with ~ construct): >>> Path.home() PosixPath('/home/antoine') New in version 3.5.
doc_1991
Token value for ":".
doc_1992
Set the current process’s effective user id. Availability: Unix.
doc_1993
Sets the value of the specified option. Available options: compute.[use_bottleneck, use_numba, use_numexpr] display.[chop_threshold, colheader_justify, column_space, date_dayfirst, date_yearfirst, encoding, expand_frame_repr, float_format] display.html.[border, table_schema, use_mathjax] display.[large_repr] display.latex.[escape, longtable, multicolumn, multicolumn_format, multirow, repr] display.[max_categories, max_columns, max_colwidth, max_dir_items, max_info_columns, max_info_rows, max_rows, max_seq_items, memory_usage, min_rows, multi_sparse, notebook_repr_html, pprint_nest_depth, precision, show_dimensions] display.unicode.[ambiguous_as_wide, east_asian_width] display.[width] io.excel.ods.[reader, writer] io.excel.xls.[reader, writer] io.excel.xlsb.[reader] io.excel.xlsm.[reader, writer] io.excel.xlsx.[reader, writer] io.hdf.[default_format, dropna_table] io.parquet.[engine] io.sql.[engine] mode.[chained_assignment, data_manager, sim_interactive, string_storage, use_inf_as_na, use_inf_as_null] plotting.[backend] plotting.matplotlib.[register_converters] styler.format.[decimal, escape, formatter, na_rep, precision, thousands] styler.html.[mathjax] styler.latex.[environment, hrules, multicol_align, multirow_align] styler.render.[encoding, max_columns, max_elements, max_rows, repr] styler.sparse.[columns, index] Parameters pat:str Regexp which should match a single option. Note: partial matches are supported for convenience, but unless you use the full option name (e.g. x.y.z.option_name), your code may break in future versions if new options with similar names are introduced. value:object New value of option. Returns None Raises OptionError if no such option exists Notes The available options with their descriptions: compute.use_bottleneck:bool Use the bottleneck library to accelerate if it is installed, the default is True Valid values: False,True [default: True] [currently: True] compute.use_numba:bool Use the numba engine option for select operations if it is installed, the default is False Valid values: False,True [default: False] [currently: False] compute.use_numexpr:bool Use the numexpr library to accelerate computation if it is installed, the default is True Valid values: False,True [default: True] [currently: True] display.chop_threshold:float or None if set to a float value, all float values smaller than the given threshold will be displayed as exactly 0 by repr and friends. [default: None] [currently: None] display.colheader_justify:‘left’/’right’ Controls the justification of column headers; used by DataFrameFormatter. [default: right] [currently: right] display.column_space No description available. [default: 12] [currently: 12] display.date_dayfirst:boolean When True, prints and parses dates with the day first, eg 20/01/2005 [default: False] [currently: False] display.date_yearfirst:boolean When True, prints and parses dates with the year first, eg 2005/01/20 [default: False] [currently: False] display.encoding:str/unicode Defaults to the detected encoding of the console. Specifies the encoding to be used for strings returned by to_string, these are generally strings meant to be displayed on the console. [default: utf-8] [currently: utf-8] display.expand_frame_repr:boolean Whether to print out the full DataFrame repr for wide DataFrames across multiple lines, max_columns is still respected, but the output will wrap-around across multiple “pages” if its width exceeds display.width.
[default: True] [currently: True] display.float_format:callable The callable should accept a floating point number and return a string with the desired format of the number. This is used in some places like SeriesFormatter. See formats.format.EngFormatter for an example. [default: None] [currently: None] display.html.border:int A border=value attribute is inserted in the <table> tag for the DataFrame HTML repr. [default: 1] [currently: 1] display.html.table_schema:boolean Whether to publish a Table Schema representation for frontends that support it. (default: False) [default: False] [currently: False] display.html.use_mathjax:boolean When True, Jupyter notebook will process table contents using MathJax, rendering mathematical expressions enclosed by the dollar symbol. (default: True) [default: True] [currently: True] display.large_repr:‘truncate’/’info’ For DataFrames exceeding max_rows/max_cols, the repr (and HTML repr) can show a truncated table (the default from 0.13), or switch to the view from df.info() (the behaviour in earlier versions of pandas). [default: truncate] [currently: truncate] display.latex.escape:bool This specifies if the to_latex method of a Dataframe escapes special characters. Valid values: False,True [default: True] [currently: True] display.latex.longtable:bool This specifies if the to_latex method of a Dataframe uses the longtable format. Valid values: False,True [default: False] [currently: False] display.latex.multicolumn:bool This specifies if the to_latex method of a Dataframe uses multicolumns to pretty-print MultiIndex columns. Valid values: False,True [default: True] [currently: True] display.latex.multicolumn_format:str This specifies the alignment used when the to_latex method of a Dataframe uses multicolumns to pretty-print MultiIndex columns. [default: l] [currently: l] display.latex.multirow:bool This specifies if the to_latex method of a Dataframe uses multirows to pretty-print MultiIndex rows. Valid values: False,True [default: False] [currently: False] display.latex.repr:boolean Whether to produce a latex DataFrame representation for jupyter environments that support it. (default: False) [default: False] [currently: False] display.max_categories:int This sets the maximum number of categories pandas should output when printing out a Categorical or a Series of dtype “category”. [default: 8] [currently: 8] display.max_columns:int If max_cols is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the width of the terminal and print a truncated object which fits the screen width. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 0] [currently: 0] display.max_colwidth:int or None The maximum width in characters of a column in the repr of a pandas data structure. When the column overflows, a “…” placeholder is embedded in the output. A ‘None’ value means unlimited. [default: 50] [currently: 50] display.max_dir_items:int The number of items that will be added to dir(…). ‘None’ value means unlimited. Because dir is cached, changing this option will not immediately affect already existing dataframes until a column is deleted or added. This is for instance used to suggest columns from a dataframe to tab completion.
[default: 100] [currently: 100] display.max_info_columns:int max_info_columns is used in DataFrame.info method to decide if per column information will be printed. [default: 100] [currently: 100] display.max_info_rows:int or None df.info() will usually show null-counts for each column. For large frames this can be quite slow. max_info_rows and max_info_cols limit this null check only to frames with smaller dimensions than specified. [default: 1690785] [currently: 1690785] display.max_rows:int If max_rows is exceeded, switch to truncate view. Depending on large_repr, objects are either centrally truncated or printed as a summary view. ‘None’ value means unlimited. In case python/IPython is running in a terminal and large_repr equals ‘truncate’ this can be set to 0 and pandas will auto-detect the height of the terminal and print a truncated object which fits the screen height. The IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to do correct auto-detection. [default: 60] [currently: 60] display.max_seq_items:int or None When pretty-printing a long sequence, no more than max_seq_items will be printed. If items are omitted, they will be denoted by the addition of “…” to the resulting string. If set to None, the number of items to be printed is unlimited. [default: 100] [currently: 100] display.memory_usage:bool, string or None This specifies if the memory usage of a DataFrame should be displayed when df.info() is called. Valid values True,False,’deep’ [default: True] [currently: True] display.min_rows:int The number of rows to show in a truncated view (when max_rows is exceeded). Ignored when max_rows is set to None or 0. When set to None, follows the value of max_rows. [default: 10] [currently: 10] display.multi_sparse:boolean “sparsify” MultiIndex display (don’t display repeated elements in outer levels within groups) [default: True] [currently: True] display.notebook_repr_html:boolean When True, IPython notebook will use html representation for pandas objects (if it is available). [default: True] [currently: True] display.pprint_nest_depth:int Controls the number of nested levels to process when pretty-printing [default: 3] [currently: 3] display.precision:int Floating point output precision in terms of number of places after the decimal, for regular formatting as well as scientific notation. Similar to precision in numpy.set_printoptions(). [default: 6] [currently: 6] display.show_dimensions:boolean or ‘truncate’ Whether to print out dimensions at the end of DataFrame repr. If ‘truncate’ is specified, only print out the dimensions if the frame is truncated (e.g. not display all rows and/or columns) [default: truncate] [currently: truncate] display.unicode.ambiguous_as_wide:boolean Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect the performance (default: False) [default: False] [currently: False] display.unicode.east_asian_width:boolean Whether to use the Unicode East Asian Width to calculate the display text width. Enabling this may affect the performance (default: False) [default: False] [currently: False] display.width:int Width of the display in characters. In case python/IPython is running in a terminal this can be set to None and pandas will correctly auto-detect the width. Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a terminal and hence it is not possible to correctly detect the width.
    [default: 80] [currently: 80]

io.excel.ods.reader : string
    The default Excel reader engine for 'ods' files. Available options: auto, odf.
    [default: auto] [currently: auto]

io.excel.ods.writer : string
    The default Excel writer engine for 'ods' files. Available options: auto, odf.
    [default: auto] [currently: auto]

io.excel.xls.reader : string
    The default Excel reader engine for 'xls' files. Available options: auto, xlrd.
    [default: auto] [currently: auto]

io.excel.xls.writer : string
    The default Excel writer engine for 'xls' files. Available options: auto, xlwt.
    [default: auto] [currently: auto] (Deprecated, use `` instead.)

io.excel.xlsb.reader : string
    The default Excel reader engine for 'xlsb' files. Available options: auto, pyxlsb.
    [default: auto] [currently: auto]

io.excel.xlsm.reader : string
    The default Excel reader engine for 'xlsm' files. Available options: auto, xlrd, openpyxl.
    [default: auto] [currently: auto]

io.excel.xlsm.writer : string
    The default Excel writer engine for 'xlsm' files. Available options: auto, openpyxl.
    [default: auto] [currently: auto]

io.excel.xlsx.reader : string
    The default Excel reader engine for 'xlsx' files. Available options: auto, xlrd, openpyxl.
    [default: auto] [currently: auto]

io.excel.xlsx.writer : string
    The default Excel writer engine for 'xlsx' files. Available options: auto, openpyxl, xlsxwriter.
    [default: auto] [currently: auto]

io.hdf.default_format : format
    The default format for writing; if None, then put will default to 'fixed' and append will default to 'table'.
    [default: None] [currently: None]

io.hdf.dropna_table : boolean
    Drop ALL nan rows when appending to a table.
    [default: False] [currently: False]

io.parquet.engine : string
    The default parquet reader/writer engine. Available options: 'auto', 'pyarrow', 'fastparquet'; the default is 'auto'.
    [default: auto] [currently: auto]

io.sql.engine : string
    The default sql reader/writer engine. Available options: 'auto', 'sqlalchemy'; the default is 'auto'.
    [default: auto] [currently: auto]

mode.chained_assignment : string
    Raise an exception, warn, or take no action if trying to use chained assignment. The default is warn.
    [default: warn] [currently: warn]

mode.data_manager : string
    Internal data manager type; can be "block" or "array". Defaults to "block", unless overridden by the 'PANDAS_DATA_MANAGER' environment variable (needs to be set before pandas is imported).
    [default: block] [currently: block]

mode.sim_interactive : boolean
    Whether to simulate interactive mode for purposes of testing.
    [default: False] [currently: False]

mode.string_storage : string
    The default storage for StringDtype.
    [default: python] [currently: python]

mode.use_inf_as_na : boolean
    True means treat None, NaN, INF, -INF as NA (old way); False means None and NaN are null, but INF, -INF are not NA (new way).
    [default: False] [currently: False]

mode.use_inf_as_null : boolean
    use_inf_as_null has been deprecated and will be removed in a future version. Use use_inf_as_na instead.
    [default: False] [currently: False] (Deprecated, use mode.use_inf_as_na instead.)

plotting.backend : str
    The plotting backend to use. The default value is "matplotlib", the backend provided with pandas. Other backends can be specified by providing the name of the module that implements the backend.
    [default: matplotlib] [currently: matplotlib]

plotting.matplotlib.register_converters : bool or 'auto'
    Whether to register converters with matplotlib's units registry for dates, times, datetimes, and Periods. Toggling to False will remove the converters, restoring any converters that pandas overwrote.
    [default: auto] [currently: auto]

styler.format.decimal : str
    The character representation for the decimal separator for floats and complex.
    [default: .] [currently: .]

styler.format.escape : str, optional
    Whether to escape certain characters according to the given context; html or latex.
    [default: None] [currently: None]

styler.format.formatter : str, callable, dict, optional
    A formatter object to be used as default within Styler.format.
    [default: None] [currently: None]

styler.format.na_rep : str, optional
    The string representation for values identified as missing.
    [default: None] [currently: None]

styler.format.precision : int
    The precision for floats and complex numbers.
    [default: 6] [currently: 6]

styler.format.thousands : str, optional
    The character representation for thousands separator for floats, int and complex.
    [default: None] [currently: None]

styler.html.mathjax : bool
    If False, will render special CSS classes to table attributes that indicate Mathjax will not be used in Jupyter Notebook.
    [default: True] [currently: True]

styler.latex.environment : str
    The environment to replace \begin{table}. If "longtable" is used, results in a specific longtable environment format.
    [default: None] [currently: None]

styler.latex.hrules : bool
    Whether to add horizontal rules on top and bottom and below the headers.
    [default: False] [currently: False]

styler.latex.multicol_align : {"r", "c", "l", "naive-l", "naive-r"}
    The specifier for horizontal alignment of sparsified LaTeX multicolumns. Pipe decorators can also be added to non-naive values to draw vertical rules, e.g. "|r" will draw a rule on the left side of right-aligned merged cells.
    [default: r] [currently: r]

styler.latex.multirow_align : {"c", "t", "b"}
    The specifier for vertical alignment of sparsified LaTeX multirows.
    [default: c] [currently: c]

styler.render.encoding : str
    The encoding used for output HTML and LaTeX files.
    [default: utf-8] [currently: utf-8]

styler.render.max_columns : int, optional
    The maximum number of columns that will be rendered. May still be reduced to satisfy max_elements, which takes precedence.
    [default: None] [currently: None]

styler.render.max_elements : int
    The maximum number of data-cell (<td>) elements that will be rendered before trimming will occur over columns, rows or both if needed.
    [default: 262144] [currently: 262144]

styler.render.max_rows : int, optional
    The maximum number of rows that will be rendered. May still be reduced to satisfy max_elements, which takes precedence.
    [default: None] [currently: None]

styler.render.repr : str
    Determine which output to use in Jupyter Notebook in {"html", "latex"}.
    [default: html] [currently: html]

styler.sparse.columns : bool
    Whether to sparsify the display of hierarchical columns. Setting to False will display each explicit level element in a hierarchical key for each column.
    [default: True] [currently: True]

styler.sparse.index : bool
    Whether to sparsify the display of a hierarchical index. Setting to False will display each explicit level element in a hierarchical key for each row.
    [default: True] [currently: True]
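All of the options above are read and written through the pandas option API. A minimal sketch (the option names come from the listing above; the particular values are arbitrary):

>>> import pandas as pd
>>> pd.set_option("display.max_rows", 20)   # write one option
>>> pd.get_option("display.max_rows")       # read it back
20
>>> # display.float_format takes a callable mapping a float to a string
>>> pd.set_option("display.float_format", lambda x: f"{x:,.2f}")
>>> with pd.option_context("display.precision", 2, "display.max_columns", 5):
...     pass  # options are overridden only inside this block
>>> pd.reset_option("display.float_format")  # restore the default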
doc_1994
See Migration guide for more details. tf.compat.v1.raw_ops.DatasetToGraphV2 tf.raw_ops.DatasetToGraphV2( input_dataset, external_state_policy=0, strip_device_assignment=False, name=None ) Returns a graph representation for input_dataset. Args input_dataset A Tensor of type variant. A variant tensor representing the dataset to return the graph representation for. external_state_policy An optional int. Defaults to 0. strip_device_assignment An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor of type string.
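A minimal sketch of calling the op; note that reaching the dataset's variant tensor through the private _variant_tensor attribute is an assumption about TensorFlow internals, not part of the documented signature:

>>> import tensorflow as tf
>>> ds = tf.data.Dataset.range(5)
>>> # _variant_tensor is private and may change between versions
>>> serialized = tf.raw_ops.DatasetToGraphV2(
...     input_dataset=ds._variant_tensor,
...     external_state_policy=0,        # assumed: 0 warns about external state
...     strip_device_assignment=False)
>>> graph_def = tf.compat.v1.GraphDef()
>>> _ = graph_def.ParseFromString(serialized.numpy())
>>> len(graph_def.node) > 0             # the dataset pipeline as a GraphDef
True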
doc_1995
Bases: torch.distributions.transformed_distribution.TransformedDistribution

Samples from a Gumbel Distribution.

Examples:

>>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))
>>> m.sample()  # sample from Gumbel distribution with loc=1, scale=2
tensor([ 1.0124])

Parameters

loc (float or Tensor) – Location parameter of the distribution
scale (float or Tensor) – Scale parameter of the distribution

arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}

entropy() [source]

expand(batch_shape, _instance=None) [source]

log_prob(value) [source]

property mean

property stddev

support = Real()

property variance
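A short sketch exercising the members listed above; the closed-form values in the comments follow from the Gumbel distribution itself (mean = loc + scale * Euler-Mascheroni constant, stddev = scale * pi / sqrt(6)), not from this page:

>>> import torch
>>> from torch.distributions import Gumbel
>>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))
>>> x = m.sample((3,))      # three draws, shape (3, 1)
>>> m.log_prob(x).shape     # log density evaluated at each draw
torch.Size([3, 1])
>>> m.mean                  # loc + scale * Euler-Mascheroni constant
tensor([2.1544])
>>> m.stddev                # scale * pi / sqrt(6)
tensor([2.5651])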
doc_1996
A string describing the specific codec error.
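This reads like the reason attribute of Python's UnicodeError subclasses; a minimal sketch under that assumption:

>>> try:
...     b'\xff'.decode('utf-8')   # invalid UTF-8 start byte
... except UnicodeDecodeError as exc:
...     exc.reason                # the attribute described above (assumed)
...
'invalid start byte'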
doc_1997
tf.compat.v1.layers.Dropout( rate=0.5, noise_shape=None, seed=None, name=None, **kwargs ) Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting. The units that are kept are scaled by 1 / (1 - rate), so that their sum is unchanged at training time and inference time. Arguments rate The dropout rate, between 0 and 1. E.g. rate=0.1 would drop out 10% of input units. noise_shape 1D tensor of type int32 representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features), and you want the dropout mask to be the same for all timesteps, you can use noise_shape=[batch_size, 1, features]. seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior. name The name of the layer (string). Attributes graph scope_name
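A minimal graph-mode sketch; it is the training argument passed at call time that switches dropout on or off (the rate and shapes here are arbitrary):

>>> import tensorflow.compat.v1 as tf
>>> tf.disable_eager_execution()
>>> x = tf.ones((4, 10))
>>> layer = tf.layers.Dropout(rate=0.2)  # tf.compat.v1.layers.Dropout
>>> y_train = layer(x, training=True)    # ~20% of units zeroed, survivors scaled by 1/0.8
>>> y_infer = layer(x, training=False)   # identity: dropout disabled at inference
>>> with tf.Session() as sess:
...     train_out, infer_out = sess.run([y_train, y_infer])
...
>>> (infer_out == 1.0).all()
True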
doc_1998
Drop specified labels from rows or columns.

Remove rows or columns by specifying label names and corresponding axis, or by specifying directly index or column names. When using a multi-index, labels on different levels can be removed by specifying the level. See the user guide for more information about the now unused levels.

Parameters

labels : single label or list-like
    Index or column labels to drop. A tuple will be used as a single label and not treated as a list-like.
axis : {0 or 'index', 1 or 'columns'}, default 0
    Whether to drop labels from the index (0 or 'index') or columns (1 or 'columns').
index : single label or list-like
    Alternative to specifying axis (labels, axis=0 is equivalent to index=labels).
columns : single label or list-like
    Alternative to specifying axis (labels, axis=1 is equivalent to columns=labels).
level : int or level name, optional
    For MultiIndex, level from which the labels will be removed.
inplace : bool, default False
    If False, return a copy. Otherwise, do operation inplace and return None.
errors : {'ignore', 'raise'}, default 'raise'
    If 'ignore', suppress error and only existing labels are dropped.

Returns

DataFrame or None
    DataFrame without the removed index or column labels, or None if inplace=True.

Raises

KeyError
    If any of the labels is not found in the selected axis.

See also

DataFrame.loc : Label-location based indexer for selection by label.
DataFrame.dropna : Return DataFrame with labels on given axis omitted where (all or any) data are missing.
DataFrame.drop_duplicates : Return DataFrame with duplicate rows removed, optionally only considering certain columns.
Series.drop : Return Series with specified index labels removed.

Examples

>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
...                   columns=['A', 'B', 'C', 'D'])
>>> df
   A  B   C   D
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11

Drop columns

>>> df.drop(['B', 'C'], axis=1)
   A   D
0  0   3
1  4   7
2  8  11

>>> df.drop(columns=['B', 'C'])
   A   D
0  0   3
1  4   7
2  8  11

Drop a row by index

>>> df.drop([0, 1])
   A  B   C   D
2  8  9  10  11

Drop columns and/or rows of MultiIndex DataFrame

>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
...                              ['speed', 'weight', 'length']],
...                      codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
...                             [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
...                   data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
...                         [250, 150], [1.5, 0.8], [320, 250],
...                         [1, 0.8], [0.3, 0.2]])
>>> df
                 big  small
lama   speed    45.0   30.0
       weight  200.0  100.0
       length    1.5    1.0
cow    speed    30.0   20.0
       weight  250.0  150.0
       length    1.5    0.8
falcon speed   320.0  250.0
       weight    1.0    0.8
       length    0.3    0.2

Drop a specific index combination from the MultiIndex DataFrame, i.e., drop the combination 'falcon' and 'weight', which deletes only the corresponding row

>>> df.drop(index=('falcon', 'weight'))
                 big  small
lama   speed    45.0   30.0
       weight  200.0  100.0
       length    1.5    1.0
cow    speed    30.0   20.0
       weight  250.0  150.0
       length    1.5    0.8
falcon speed   320.0  250.0
       length    0.3    0.2

>>> df.drop(index='cow', columns='small')
                 big
lama   speed    45.0
       weight  200.0
       length    1.5
falcon speed   320.0
       weight    1.0
       length    0.3

>>> df.drop(index='length', level=1)
                 big  small
lama   speed    45.0   30.0
       weight  200.0  100.0
cow    speed    30.0   20.0
       weight  250.0  150.0
falcon speed   320.0  250.0
       weight    1.0    0.8
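The errors parameter is not exercised above; a short sketch (the column label 'E' is arbitrary and deliberately absent, so with errors='ignore' the frame comes back unchanged):

>>> df.drop(columns=['E'], errors='ignore').equals(df)  # no KeyError raised
True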
doc_1999
tf.experimental.numpy.issubdtype( arg1, arg2 ) Parameters arg1, arg2 : dtype_like dtype or string representing a typecode. Returns out : bool See Also issubsctype, issubclass_ numpy.core.numerictypes : Overview of numpy type hierarchy. Examples >>> np.issubdtype('S1', np.string_) True >>> np.issubdtype(np.float64, np.float32) False