doc_24800
class sklearn.mixture.BayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, mean_prior=None, degrees_of_freedom_prior=None, covariance_prior=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10) [source]

Variational Bayesian estimation of a Gaussian mixture.

This class allows inference of an approximate posterior distribution over the parameters of a Gaussian mixture distribution. The effective number of components can be inferred from the data.

This class implements two types of prior for the weights distribution: a finite mixture model with a Dirichlet distribution and an infinite mixture model with the Dirichlet Process. In practice, the Dirichlet Process inference algorithm is approximated and uses a truncated distribution with a fixed maximum number of components (called the stick-breaking representation). The number of components actually used almost always depends on the data.

New in version 0.18. Read more in the User Guide.

Parameters

n_components : int, default=1
    The number of mixture components. Depending on the data and the value of weight_concentration_prior, the model can decide not to use all the components by setting some component weights_ to values very close to zero. The number of effective components is therefore smaller than n_components.

covariance_type : {'full', 'tied', 'diag', 'spherical'}, default='full'
    String describing the type of covariance parameters to use. Must be one of: 'full' (each component has its own general covariance matrix), 'tied' (all components share the same general covariance matrix), 'diag' (each component has its own diagonal covariance matrix), 'spherical' (each component has its own single variance).

tol : float, default=1e-3
    The convergence threshold. EM iterations will stop when the average gain in the lower bound on the likelihood (of the training data with respect to the model) falls below this threshold.

reg_covar : float, default=1e-6
    Non-negative regularization added to the diagonal of covariance. Ensures that the covariance matrices are all positive.

max_iter : int, default=100
    The number of EM iterations to perform.

n_init : int, default=1
    The number of initializations to perform. The result with the highest lower bound value on the likelihood is kept.

init_params : {'kmeans', 'random'}, default='kmeans'
    The method used to initialize the weights, the means and the covariances. Must be one of: 'kmeans' (responsibilities are initialized using kmeans), 'random' (responsibilities are initialized randomly).

weight_concentration_prior_type : str, default='dirichlet_process'
    String describing the type of the weight concentration prior. Must be one of: 'dirichlet_process' (using the stick-breaking representation), 'dirichlet_distribution' (can favor more uniform weights).

weight_concentration_prior : float or None, default=None
    The Dirichlet concentration of each component on the weight distribution (Dirichlet). This is commonly called gamma in the literature. A higher concentration puts more mass in the center and will lead to more components being active, while a lower concentration parameter will lead to more mass at the edge of the mixture weights simplex. The value of the parameter must be greater than 0. If it is None, it is set to 1. / n_components.

mean_precision_prior : float or None, default=None
    The precision prior on the mean distribution (Gaussian). Controls the extent to which means can be placed away from mean_prior. Larger values concentrate the cluster means around mean_prior. The value of the parameter must be greater than 0. If it is None, it is set to 1.

mean_prior : array-like of shape (n_features,), default=None
    The prior on the mean distribution (Gaussian). If it is None, it is set to the mean of X.

degrees_of_freedom_prior : float or None, default=None
    The prior of the number of degrees of freedom on the covariance distributions (Wishart). If it is None, it is set to n_features.

covariance_prior : float or array-like, default=None
    The prior on the covariance distribution (Wishart). If it is None, the empirical covariance prior is initialized using the covariance of X. The shape depends on covariance_type: (n_features, n_features) if 'full', (n_features, n_features) if 'tied', (n_features,) if 'diag', float if 'spherical'.

random_state : int, RandomState instance or None, default=None
    Controls the random seed given to the method chosen to initialize the parameters (see init_params). In addition, it controls the generation of random samples from the fitted distribution (see the method sample). Pass an int for reproducible output across multiple function calls. See Glossary.

warm_start : bool, default=False
    If warm_start is True, the solution of the last fitting is used as initialization for the next call of fit(). This can speed up convergence when fit is called several times on similar problems. See the Glossary.

verbose : int, default=0
    Enable verbose output. If 1, it prints the current initialization and each iteration step. If greater than 1, it also prints the log probability and the time needed for each step.

verbose_interval : int, default=10
    Number of iterations done before the next print.

Attributes

weights_ : array-like of shape (n_components,)
    The weight of each mixture component.

means_ : array-like of shape (n_components, n_features)
    The mean of each mixture component.

covariances_ : array-like
    The covariance of each mixture component. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'.

precisions_ : array-like
    The precision matrices for each component in the mixture. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite, so the mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'.

precisions_cholesky_ : array-like
    The Cholesky decomposition of the precision matrices of each mixture component. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite, so the mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time.
    The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'.

converged_ : bool
    True when convergence was reached in fit(), False otherwise.

n_iter_ : int
    Number of steps used by the best fit of inference to reach convergence.

lower_bound_ : float
    Lower bound value on the likelihood (of the training data with respect to the model) of the best fit of inference.

weight_concentration_prior_ : tuple or float
    The Dirichlet concentration of each component on the weight distribution (Dirichlet). The type depends on weight_concentration_prior_type: (float, float) if 'dirichlet_process' (Beta parameters), float if 'dirichlet_distribution' (Dirichlet parameters). A higher concentration puts more mass in the center and will lead to more components being active, while a lower concentration parameter will lead to more mass at the edge of the simplex.

weight_concentration_ : array-like of shape (n_components,)
    The Dirichlet concentration of each component on the weight distribution (Dirichlet).

mean_precision_prior_ : float
    The precision prior on the mean distribution (Gaussian). Controls the extent to which means can be placed away from mean_prior. Larger values concentrate the cluster means around mean_prior. If mean_precision_prior is set to None, mean_precision_prior_ is set to 1.

mean_precision_ : array-like of shape (n_components,)
    The precision of each component on the mean distribution (Gaussian).

mean_prior_ : array-like of shape (n_features,)
    The prior on the mean distribution (Gaussian).

degrees_of_freedom_prior_ : float
    The prior of the number of degrees of freedom on the covariance distributions (Wishart).

degrees_of_freedom_ : array-like of shape (n_components,)
    The number of degrees of freedom of each component in the model.

covariance_prior_ : float or array-like
    The prior on the covariance distribution (Wishart). The shape depends on covariance_type: (n_features, n_features) if 'full', (n_features, n_features) if 'tied', (n_features,) if 'diag', float if 'spherical'.

See also

GaussianMixture : Finite Gaussian mixture fit with EM.

References

1. Bishop, Christopher M. (2006). "Pattern Recognition and Machine Learning". New York: Springer.
2. Hagai Attias. (2000). "A Variational Bayesian Framework for Graphical Models". In Advances in Neural Information Processing Systems 12.
3. Blei, David M. and Michael I. Jordan. (2006). "Variational inference for Dirichlet process mixtures". Bayesian Analysis 1.1.

Examples

>>> import numpy as np
>>> from sklearn.mixture import BayesianGaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [12, 4], [10, 7]])
>>> bgm = BayesianGaussianMixture(n_components=2, random_state=42).fit(X)
>>> bgm.means_
array([[2.49..., 2.29...],
       [8.45..., 4.52...]])
>>> bgm.predict([[0, 0], [9, 3]])
array([0, 1])

Methods

fit(X[, y]) : Estimate model parameters with the EM algorithm.
fit_predict(X[, y]) : Estimate model parameters using X and predict the labels for X.
get_params([deep]) : Get parameters for this estimator.
predict(X) : Predict the labels for the data samples in X using the trained model.
predict_proba(X) : Predict posterior probability of each component given the data.
sample([n_samples]) : Generate random samples from the fitted Gaussian distribution.
score(X[, y]) : Compute the per-sample average log-likelihood of the given data X.
score_samples(X) : Compute the weighted log probabilities for each sample.
set_params(**params) : Set the parameters of this estimator.
fit(X, y=None) [source]
    Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol; otherwise, a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off.
    Parameters: X : array-like of shape (n_samples, n_features). List of n_features-dimensional data points. Each row corresponds to a single data point.
    Returns: self

fit_predict(X, y=None) [source]
    Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol; otherwise, a ConvergenceWarning is raised. After fitting, it predicts the most probable label for the input data points. New in version 0.20.
    Parameters: X : array-like of shape (n_samples, n_features). List of n_features-dimensional data points. Each row corresponds to a single data point.
    Returns: labels : array, shape (n_samples,). Component labels.

get_params(deep=True) [source]
    Get parameters for this estimator.
    Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
    Returns: params : dict. Parameter names mapped to their values.

predict(X) [source]
    Predict the labels for the data samples in X using the trained model.
    Parameters: X : array-like of shape (n_samples, n_features). List of n_features-dimensional data points. Each row corresponds to a single data point.
    Returns: labels : array, shape (n_samples,). Component labels.

predict_proba(X) [source]
    Predict posterior probability of each component given the data.
    Parameters: X : array-like of shape (n_samples, n_features). List of n_features-dimensional data points. Each row corresponds to a single data point.
    Returns: resp : array, shape (n_samples, n_components). The probability of each Gaussian (state) in the model given each sample.

sample(n_samples=1) [source]
    Generate random samples from the fitted Gaussian distribution.
    Parameters: n_samples : int, default=1. Number of samples to generate.
    Returns: X : array, shape (n_samples, n_features). Randomly generated sample. y : array, shape (n_samples,). Component labels.

score(X, y=None) [source]
    Compute the per-sample average log-likelihood of the given data X.
    Parameters: X : array-like of shape (n_samples, n_features). List of n_features-dimensional data points. Each row corresponds to a single data point.
    Returns: log_likelihood : float. Log-likelihood of the Gaussian mixture given X.

score_samples(X) [source]
    Compute the weighted log probabilities for each sample.
    Parameters: X : array-like of shape (n_samples, n_features). List of n_features-dimensional data points. Each row corresponds to a single data point.
    Returns: log_prob : array, shape (n_samples,). Log probabilities of each data point in X.

set_params(**params) [source]
    Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
    Parameters: **params : dict. Estimator parameters.
    Returns: self : estimator instance.

Examples using sklearn.mixture.BayesianGaussianMixture: Gaussian Mixture Model Ellipsoids; Gaussian Mixture Model Sine Curve; Concentration Prior Type Analysis of Variation Bayesian Gaussian Mixture
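A quick sketch of the component-pruning behavior described above; the two-blob data and the deliberately oversized n_components=5 are illustrative assumptions, not part of the original example:

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
# Two well-separated blobs, but five candidate components.
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(8, 1, size=(100, 2))])

bgm = BayesianGaussianMixture(n_components=5, random_state=0).fit(X)
print(bgm.weights_)              # expect roughly two dominant weights
print(bgm.predict_proba(X[:2]))  # per-sample posterior over all 5 components

With the default Dirichlet process prior, the surplus components should receive weights close to zero, which is the sense in which the effective number of components is inferred from the data.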
doc_24801
See Migration guide for more details. tf.compat.v1.raw_ops.SkipDataset

tf.raw_ops.SkipDataset(
    input_dataset, count, output_types, output_shapes, name=None
)

Args

input_dataset : A Tensor of type variant.
count : A Tensor of type int64. A scalar representing the number of elements from the input_dataset that should be skipped. If count is -1, skips everything.
output_types : A list of tf.DTypes that has length >= 1.
output_shapes : A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name : A name for the operation (optional).

Returns

A Tensor of type variant.
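tf.raw_ops.SkipDataset is the low-level op behind tf.data.Dataset.skip, which is the intended entry point in user code. A minimal sketch of the equivalent high-level behavior (assuming TensorFlow 2.x):

import tensorflow as tf

ds = tf.data.Dataset.range(5)
print(list(ds.skip(2).as_numpy_iterator()))   # [2, 3, 4]
print(list(ds.skip(-1).as_numpy_iterator()))  # [] -- count=-1 skips everything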
doc_24802
Compiles the given template code and returns a Template object.
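This one-line description matches Django's Engine.from_string(template_code); assuming that is the method being documented, a minimal sketch:

from django.template import Context, Engine

engine = Engine()                                    # standalone engine, no project settings needed
template = engine.from_string("Hello, {{ name }}!")  # compiles and returns a Template object
print(template.render(Context({"name": "Ada"})))     # Hello, Ada!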
doc_24803
tf.summary.should_record_summaries()
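The bare signature above returns a boolean tensor indicating whether summary ops would currently write, which is controlled by tf.summary.record_if. A minimal sketch (the log directory and the every-10-steps condition are illustrative):

import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/logs")
step = tf.Variable(0, dtype=tf.int64)
tf.summary.experimental.set_step(step)

with writer.as_default():
    with tf.summary.record_if(lambda: step % 10 == 0):
        print(tf.summary.should_record_summaries())  # True only when step % 10 == 0
        tf.summary.scalar("loss", 0.5)                # written only when recording is on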
doc_24804
Formats the specified exception information (a standard exception tuple as returned by sys.exc_info()) as a string. This default implementation just uses traceback.print_exception(). The resulting string is returned.
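This describes logging.Formatter.formatException(ei); subclasses commonly override it to change traceback rendering. A minimal sketch (collapsing the traceback to one line is an illustrative choice):

import logging
import traceback

class OneLineExceptionFormatter(logging.Formatter):
    def formatException(self, exc_info):
        # Render the standard traceback, then fold it onto a single line.
        text = "".join(traceback.format_exception(*exc_info))
        return text.replace("\n", " | ")

handler = logging.StreamHandler()
handler.setFormatter(OneLineExceptionFormatter("%(levelname)s %(message)s"))
logging.getLogger().addHandler(handler)

try:
    1 / 0
except ZeroDivisionError:
    logging.exception("division failed")  # exc_info rendered by formatException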
doc_24805
tf.keras.applications.resnet.ResNet152

Compat aliases for migration. See Migration guide for more details. tf.compat.v1.keras.applications.ResNet152, tf.compat.v1.keras.applications.resnet.ResNet152

tf.keras.applications.ResNet152(
    include_top=True, weights='imagenet', input_tensor=None,
    input_shape=None, pooling=None, classes=1000, **kwargs
)

Reference: Deep Residual Learning for Image Recognition (CVPR 2015)

Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. Note: each Keras Application expects a specific kind of input preprocessing. For ResNet, call tf.keras.applications.resnet.preprocess_input on your inputs before passing them to the model.

Arguments

include_top : whether to include the fully-connected layer at the top of the network.
weights : one of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded.
input_tensor : optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape : optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) with 'channels_last' data format, or (3, 224, 224) with 'channels_first' data format). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value.
pooling : optional pooling mode for feature extraction when include_top is False. None means that the output of the model will be the 4D tensor output of the last convolutional block. 'avg' means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor. 'max' means that global max pooling will be applied.
classes : optional number of classes to classify images into, only to be specified if include_top is True and no weights argument is specified.

Returns

A Keras model instance.
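A minimal usage sketch following the preprocessing note above (the image file name is a placeholder):

import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet152(weights='imagenet')

img = tf.keras.preprocessing.image.load_img('cat.jpg', target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.resnet.preprocess_input(x)   # required preprocessing

preds = model.predict(x)
print(tf.keras.applications.resnet.decode_predictions(preds, top=3))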
doc_24806
Prevent deletion of the referenced object by raising ProtectedError, a subclass of django.db.IntegrityError.
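This describes the on_delete=PROTECT option for Django relational fields. A minimal sketch with hypothetical models:

from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    # While any Book references an Author, author.delete() raises
    # django.db.models.ProtectedError.
    author = models.ForeignKey(Author, on_delete=models.PROTECT)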
doc_24807
Returns the square of the correlation coefficient as a float, or default if there aren’t any matching rows.
doc_24808
Set the agg filter.

Parameters

filter_func : callable
    A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array.
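This matches matplotlib's Artist.set_agg_filter. A minimal sketch with an illustrative filter that dims the rendered artist; note that matplotlib's own agg-filter demos return the filtered array plus two pixel offsets, and the buffer arrives as a float RGBA image in practice:

import matplotlib.pyplot as plt

def dim_filter(im, dpi):
    out = im.copy()
    out[..., :3] *= 0.5   # darken the color channels, keep alpha
    return out, 0, 0      # image plus (x, y) offsets, per the demos

fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 0], linewidth=4)
line.set_agg_filter(dim_filter)     # applied when drawn by the Agg renderer
fig.savefig("dimmed.png")           # output path is illustrative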
doc_24809
GroupBy.__iter__() Groupby iterator. GroupBy.groups Dict {group name -> group labels}. GroupBy.indices Dict {group name -> group indices}. GroupBy.get_group(name[, obj]) Construct DataFrame from group with provided name. Grouper(*args, **kwargs) A Grouper allows the user to specify a groupby instruction for an object. Function application GroupBy.apply(func, *args, **kwargs) Apply function func group-wise and combine the results together. GroupBy.agg(func, *args, **kwargs) SeriesGroupBy.aggregate([func, engine, ...]) Aggregate using one or more operations over the specified axis. DataFrameGroupBy.aggregate([func, engine, ...]) Aggregate using one or more operations over the specified axis. SeriesGroupBy.transform(func, *args[, ...]) Call function producing a like-indexed Series on each group and return a Series having the same indexes as the original object filled with the transformed values. DataFrameGroupBy.transform(func, *args[, ...]) Call function producing a like-indexed DataFrame on each group and return a DataFrame having the same indexes as the original object filled with the transformed values. GroupBy.pipe(func, *args, **kwargs) Apply a function func with arguments to this GroupBy object and return the function's result. Computations / descriptive stats GroupBy.all([skipna]) Return True if all values in the group are truthful, else False. GroupBy.any([skipna]) Return True if any value in the group is truthful, else False. GroupBy.bfill([limit]) Backward fill the values. GroupBy.backfill([limit]) Backward fill the values. GroupBy.count() Compute count of group, excluding missing values. GroupBy.cumcount([ascending]) Number each item in each group from 0 to the length of that group - 1. GroupBy.cummax([axis]) Cumulative max for each group. GroupBy.cummin([axis]) Cumulative min for each group. GroupBy.cumprod([axis]) Cumulative product for each group. GroupBy.cumsum([axis]) Cumulative sum for each group. GroupBy.ffill([limit]) Forward fill the values. GroupBy.first([numeric_only, min_count]) Compute first of group values. GroupBy.head([n]) Return first n rows of each group. GroupBy.last([numeric_only, min_count]) Compute last of group values. GroupBy.max([numeric_only, min_count]) Compute max of group values. GroupBy.mean([numeric_only, engine, ...]) Compute mean of groups, excluding missing values. GroupBy.median([numeric_only]) Compute median of groups, excluding missing values. GroupBy.min([numeric_only, min_count]) Compute min of group values. GroupBy.ngroup([ascending]) Number each group from 0 to the number of groups - 1. GroupBy.nth(n[, dropna]) Take the nth row from each group if n is an int, otherwise a subset of rows. GroupBy.ohlc() Compute open, high, low and close values of a group, excluding missing values. GroupBy.pad([limit]) Forward fill the values. GroupBy.prod([numeric_only, min_count]) Compute prod of group values. GroupBy.rank([method, ascending, na_option, ...]) Provide the rank of values within each group. GroupBy.pct_change([periods, fill_method, ...]) Calculate pct_change of each value to previous entry in group. GroupBy.size() Compute group sizes. GroupBy.sem([ddof]) Compute standard error of the mean of groups, excluding missing values. GroupBy.std([ddof, engine, engine_kwargs]) Compute standard deviation of groups, excluding missing values. GroupBy.sum([numeric_only, min_count, ...]) Compute sum of group values. GroupBy.var([ddof, engine, engine_kwargs]) Compute variance of groups, excluding missing values. 
GroupBy.tail([n]) Return last n rows of each group. The following methods are available in both SeriesGroupBy and DataFrameGroupBy objects, but may differ slightly, usually in that the DataFrameGroupBy version usually permits the specification of an axis argument, and often an argument indicating whether to restrict application to columns of a specific data type. DataFrameGroupBy.all([skipna]) Return True if all values in the group are truthful, else False. DataFrameGroupBy.any([skipna]) Return True if any value in the group is truthful, else False. DataFrameGroupBy.backfill([limit]) Backward fill the values. DataFrameGroupBy.bfill([limit]) Backward fill the values. DataFrameGroupBy.corr Compute pairwise correlation of columns, excluding NA/null values. DataFrameGroupBy.count() Compute count of group, excluding missing values. DataFrameGroupBy.cov Compute pairwise covariance of columns, excluding NA/null values. DataFrameGroupBy.cumcount([ascending]) Number each item in each group from 0 to the length of that group - 1. DataFrameGroupBy.cummax([axis]) Cumulative max for each group. DataFrameGroupBy.cummin([axis]) Cumulative min for each group. DataFrameGroupBy.cumprod([axis]) Cumulative product for each group. DataFrameGroupBy.cumsum([axis]) Cumulative sum for each group. DataFrameGroupBy.describe(**kwargs) Generate descriptive statistics. DataFrameGroupBy.diff First discrete difference of element. DataFrameGroupBy.ffill([limit]) Forward fill the values. DataFrameGroupBy.fillna Fill NA/NaN values using the specified method. DataFrameGroupBy.filter(func[, dropna]) Return a copy of a DataFrame excluding filtered elements. DataFrameGroupBy.hist Make a histogram of the DataFrame's columns. DataFrameGroupBy.idxmax([axis, skipna]) Return index of first occurrence of maximum over requested axis. DataFrameGroupBy.idxmin([axis, skipna]) Return index of first occurrence of minimum over requested axis. DataFrameGroupBy.mad Return the mean absolute deviation of the values over the requested axis. DataFrameGroupBy.nunique([dropna]) Return DataFrame with counts of unique elements in each position. DataFrameGroupBy.pad([limit]) Forward fill the values. DataFrameGroupBy.pct_change([periods, ...]) Calculate pct_change of each value to previous entry in group. DataFrameGroupBy.plot Class implementing the .plot attribute for groupby objects. DataFrameGroupBy.quantile([q, interpolation]) Return group values at the given quantile, a la numpy.percentile. DataFrameGroupBy.rank([method, ascending, ...]) Provide the rank of values within each group. DataFrameGroupBy.resample(rule, *args, **kwargs) Provide resampling when using a TimeGrouper. DataFrameGroupBy.sample([n, frac, replace, ...]) Return a random sample of items from each group. DataFrameGroupBy.shift([periods, freq, ...]) Shift each group by periods observations. DataFrameGroupBy.size() Compute group sizes. DataFrameGroupBy.skew Return unbiased skew over requested axis. DataFrameGroupBy.take Return the elements in the given positional indices along an axis. DataFrameGroupBy.tshift (DEPRECATED) Shift the time index, using the index's frequency if available. DataFrameGroupBy.value_counts([subset, ...]) Return a Series or DataFrame containing counts of unique rows. The following methods are available only for SeriesGroupBy objects. SeriesGroupBy.hist Draw histogram of the input series using matplotlib. SeriesGroupBy.nlargest([n, keep]) Return the largest n elements. SeriesGroupBy.nsmallest([n, keep]) Return the smallest n elements. 
SeriesGroupBy.nunique([dropna]) Return number of unique elements in the group. SeriesGroupBy.unique Return unique values of Series object. SeriesGroupBy.value_counts([normalize, ...]) SeriesGroupBy.is_monotonic_increasing Alias for is_monotonic. SeriesGroupBy.is_monotonic_decreasing Return boolean if values in the object are monotonic_decreasing. The following methods are available only for DataFrameGroupBy objects. DataFrameGroupBy.corrwith Compute pairwise correlation. DataFrameGroupBy.boxplot([subplots, column, ...]) Make box plots from DataFrameGroupBy data.
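A minimal sketch tying a few of the listed methods together (the frame and column names are illustrative):

import pandas as pd

df = pd.DataFrame({"team": ["a", "a", "b", "b"],
                   "score": [1, 2, 3, 4]})
g = df.groupby("team")

print(g.sum())                             # GroupBy.sum per group
print(g.agg({"score": ["mean", "max"]}))   # DataFrameGroupBy.aggregate
print(g["score"].transform("mean"))        # like-indexed per-group means
print(g.size())                            # GroupBy.size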
doc_24810
Escape '&', '<', and '>' in a string of data. You can escape other strings of data by passing a dictionary as the optional entities parameter. The keys and values must all be strings; each key will be replaced with its corresponding value. The characters '&', '<' and '>' are always escaped, even if entities is provided.
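This is xml.sax.saxutils.escape; a minimal sketch:

from xml.sax.saxutils import escape

print(escape('a < b & c'))                  # a &lt; b &amp; c
print(escape('say "hi"', {'"': '&quot;'}))  # say &quot;hi&quot;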
doc_24811
Remove an attribute by name. If there is no matching attribute, a NotFoundErr is raised.
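This matches Element.removeAttribute from xml.dom; assuming the minidom implementation, a minimal sketch:

from xml.dom import NotFoundErr
from xml.dom.minidom import parseString

elem = parseString('<item id="42"/>').documentElement
elem.removeAttribute('id')       # attribute removed
try:
    elem.removeAttribute('id')   # no longer present
except NotFoundErr:
    print('attribute already removed')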
doc_24812
Update the location of children if necessary and draw them to the given renderer.
doc_24813
Acquire a lock, blocking or non-blocking. When invoked with the block argument set to True, block until the lock is in an unlocked state (not owned by any process or thread) unless the lock is already owned by the current process or thread. The current process or thread then takes ownership of the lock (if it does not already have ownership) and the recursion level inside the lock increments by one, resulting in a return value of True. Note that there are several differences in this first argument’s behavior compared to the implementation of threading.RLock.acquire(), starting with the name of the argument itself. When invoked with the block argument set to False, do not block. If the lock has already been acquired (and thus is owned) by another process or thread, the current process or thread does not take ownership and the recursion level within the lock is not changed, resulting in a return value of False. If the lock is in an unlocked state, the current process or thread takes ownership and the recursion level is incremented, resulting in a return value of True. Use and behaviors of the timeout argument are the same as in Lock.acquire(). Note that some of these behaviors of timeout differ from the implemented behaviors in threading.RLock.acquire().
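A minimal sketch of the recursive acquire/release behavior described above, using multiprocessing.RLock in a single process:

from multiprocessing import RLock

lock = RLock()
print(lock.acquire())             # True: ownership taken, recursion level 1
print(lock.acquire(block=False))  # True: same process, level rises to 2
lock.release()
lock.release()                    # level back to 0; the lock is free again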
doc_24814
Returns the minimum value of the given expression.

Default alias: <field>__min
Return type: same as input field, or output_field if supplied
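This is Django's Min aggregate. A minimal sketch with a hypothetical Book model:

from django.db.models import Min

# Assuming a Book model with a price field, inside a Django project:
cheapest = Book.objects.aggregate(Min('price'))
print(cheapest)   # e.g. {'price__min': ...} -- note the default <field>__min alias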
doc_24815
New in Django 3.2. Optional. The database collation name of the field.

Note: Collation names are not standardized. As such, this will not be portable across multiple database backends.

Oracle: Oracle supports collations only when the MAX_STRING_SIZE database initialization parameter is set to EXTENDED.
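A minimal sketch of the db_collation option (the ICU collation name shown is PostgreSQL-specific and illustrative, per the portability note above):

from django.db import models

class Customer(models.Model):
    # db_collation is new in Django 3.2; 'de-x-icu' is an example
    # PostgreSQL collation and will differ on other backends.
    name = models.CharField(max_length=100, db_collation='de-x-icu')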
doc_24816
This method is not defined in BaseHandler, but subclasses should override it if they intend to provide a catch-all for otherwise unhandled HTTP errors. It will be called automatically by the OpenerDirector getting the error, and should not normally be called in other circumstances. req will be a Request object, fp will be a file-like object with the HTTP error body, code will be the three-digit code of the error, msg will be the user-visible explanation of the code and hdrs will be a mapping object with the headers of the error. Return values and exceptions raised should be the same as those of urlopen().
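A minimal sketch of such a catch-all handler for urllib.request (the subclass and its raise-an-HTTPError behavior are illustrative):

import urllib.request
from urllib.error import HTTPError

class RaisingDefaultHandler(urllib.request.BaseHandler):
    # Called by OpenerDirector for HTTP error codes no other handler claims.
    def http_error_default(self, req, fp, code, msg, hdrs):
        raise HTTPError(req.full_url, code, msg, hdrs, fp)

opener = urllib.request.build_opener(RaisingDefaultHandler())
# opener.open('https://example.com/missing') would now raise HTTPError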
doc_24817
Returns an array containing the same data with a new shape. Refer to MaskedArray.reshape for full documentation.

See also: MaskedArray.reshape (equivalent function)
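A minimal sketch; the mask travels with the data through the reshape:

import numpy as np

a = np.ma.array([1, 2, 3, 4], mask=[0, 1, 0, 0])
print(a.reshape(2, 2))
# [[1 --]
#  [3 4]]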
doc_24818
Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters

file : str or Path
    A string naming the dump file. Changed in version 1.17.0: pathlib.Path objects are now accepted.
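A minimal round-trip sketch (the file name is illustrative):

import numpy as np

a = np.arange(6).reshape(2, 3)
a.dump('arr.pkl')                          # write the pickled array

b = np.load('arr.pkl', allow_pickle=True)  # read it back
print(np.array_equal(a, b))                # True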
doc_24819
In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants.

The decimal module incorporates a notion of significant places so that 1.30 + 1.20 is 2.50. The trailing zero is kept to indicate significance. This is the customary presentation for monetary applications. For multiplication, the "schoolbook" approach uses all the figures in the multiplicands. For instance, 1.3 * 1.2 gives 1.56 while 1.30 * 1.20 gives 1.5600.

Unlike hardware based binary floating point, the decimal module has a user-alterable precision (defaulting to 28 places) which can be as large as needed for a given problem:

>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')

Both binary and decimal floating point are implemented in terms of published standards. While the built-in float type exposes only a modest portion of its capabilities, the decimal module exposes all required parts of the standard. When needed, the programmer has full control over rounding and signal handling. This includes an option to enforce exact arithmetic by using exceptions to block any inexact operations.

The decimal module was designed to support "without prejudice, both exact unrounded decimal arithmetic (sometimes called fixed-point arithmetic) and rounded floating-point arithmetic." -- excerpt from the decimal arithmetic specification.

The module design is centered around three concepts: the decimal number, the context for arithmetic, and signals.

A decimal number is immutable. It has a sign, coefficient digits, and an exponent. To preserve significance, the coefficient digits do not truncate trailing zeros. Decimals also include special values such as Infinity, -Infinity, and NaN. The standard also differentiates -0 from +0.

The context for arithmetic is an environment specifying precision, rounding rules, limits on exponents, flags indicating the results of operations, and trap enablers which determine whether signals are treated as exceptions. Rounding options include ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR, ROUND_HALF_DOWN, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_UP, and ROUND_05UP.

Signals are groups of exceptional conditions arising during the course of computation. Depending on the needs of the application, signals may be ignored, considered as informational, or treated as exceptions. The signals in the decimal module are: Clamped, InvalidOperation, DivisionByZero, Inexact, Rounded, Subnormal, Overflow, Underflow and FloatOperation. For each signal there is a flag and a trap enabler. When a signal is encountered, its flag is set to one, then, if the trap enabler is set to one, an exception is raised. Flags are sticky, so the user needs to reset them before monitoring a calculation.

See also: IBM's General Decimal Arithmetic Specification, The General Decimal Arithmetic Specification.
Quick-start Tutorial

The usual start to using decimals is importing the module, viewing the current context with getcontext() and, if necessary, setting new values for precision, rounding, or enabled traps:

>>> from decimal import *
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
        capitals=1, clamp=0, flags=[], traps=[Overflow, DivisionByZero,
        InvalidOperation])
>>> getcontext().prec = 7       # Set a new precision

Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float. Decimal numbers include special values such as NaN which stands for "Not a number", positive and negative Infinity, and -0:

>>> getcontext().prec = 28
>>> Decimal(10)
Decimal('10')
>>> Decimal('3.14')
Decimal('3.14')
>>> Decimal(3.14)
Decimal('3.140000000000000124344978758017532527446746826171875')
>>> Decimal((0, (3, 1, 4), -2))
Decimal('3.14')
>>> Decimal(str(2.0 ** 0.5))
Decimal('1.4142135623730951')
>>> Decimal(2) ** Decimal('0.5')
Decimal('1.414213562373095048801688724')
>>> Decimal('NaN')
Decimal('NaN')
>>> Decimal('-Infinity')
Decimal('-Infinity')

If the FloatOperation signal is trapped, accidental mixing of decimals and floats in constructors or ordering comparisons raises an exception:

>>> c = getcontext()
>>> c.traps[FloatOperation] = True
>>> Decimal(3.14)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
decimal.FloatOperation: [<class 'decimal.FloatOperation'>]
>>> Decimal('3.5') < 3.7
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
decimal.FloatOperation: [<class 'decimal.FloatOperation'>]
>>> Decimal('3.5') == 3.5
True

New in version 3.3.

The significance of a new Decimal is determined solely by the number of digits input. Context precision and rounding only come into play during arithmetic operations.

>>> getcontext().prec = 6
>>> Decimal('3.0')
Decimal('3.0')
>>> Decimal('3.1415926535')
Decimal('3.1415926535')
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85987')
>>> getcontext().rounding = ROUND_UP
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85988')

If the internal limits of the C version are exceeded, constructing a decimal raises InvalidOperation:

>>> Decimal("1e9999999999999999999")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]

Changed in version 3.3.

Decimals interact well with much of the rest of Python. Here is a small decimal floating point flying circus:

>>> data = list(map(Decimal, '1.34 1.87 3.45 2.35 1.00 0.03 9.25'.split()))
>>> max(data)
Decimal('9.25')
>>> min(data)
Decimal('0.03')
>>> sorted(data)
[Decimal('0.03'), Decimal('1.00'), Decimal('1.34'), Decimal('1.87'),
 Decimal('2.35'), Decimal('3.45'), Decimal('9.25')]
>>> sum(data)
Decimal('19.29')
>>> a, b, c = data[:3]
>>> str(a)
'1.34'
>>> float(a)
1.34
>>> round(a, 1)
Decimal('1.3')
>>> int(a)
1
>>> a * 5
Decimal('6.70')
>>> a * b
Decimal('2.5058')
>>> c % a
Decimal('0.77')

And some mathematical functions are also available to Decimal:

>>> getcontext().prec = 28
>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724')
>>> Decimal(1).exp()
Decimal('2.718281828459045235360287471')
>>> Decimal('10').ln()
Decimal('2.302585092994045684017991455')
>>> Decimal('10').log10()
Decimal('1')

The quantize() method rounds a number to a fixed exponent.
This method is useful for monetary applications that often round results to a fixed number of places:

>>> Decimal('7.325').quantize(Decimal('.01'), rounding=ROUND_DOWN)
Decimal('7.32')
>>> Decimal('7.325').quantize(Decimal('1.'), rounding=ROUND_UP)
Decimal('8')

As shown above, the getcontext() function accesses the current context and allows the settings to be changed. This approach meets the needs of most applications.

For more advanced work, it may be useful to create alternate contexts using the Context() constructor. To make an alternate active, use the setcontext() function. In accordance with the standard, the decimal module provides two ready to use standard contexts, BasicContext and ExtendedContext. The former is especially useful for debugging because many of the traps are enabled:

>>> myothercontext = Context(prec=60, rounding=ROUND_HALF_DOWN)
>>> setcontext(myothercontext)
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857142857142857142857142857')
>>> ExtendedContext
Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
        capitals=1, clamp=0, flags=[], traps=[])
>>> setcontext(ExtendedContext)
>>> Decimal(1) / Decimal(7)
Decimal('0.142857143')
>>> Decimal(42) / Decimal(0)
Decimal('Infinity')
>>> setcontext(BasicContext)
>>> Decimal(42) / Decimal(0)
Traceback (most recent call last):
  File "<pyshell#143>", line 1, in -toplevel-
    Decimal(42) / Decimal(0)
DivisionByZero: x / 0

Contexts also have signal flags for monitoring exceptional conditions encountered during computations. The flags remain set until explicitly cleared, so it is best to clear the flags before each set of monitored computations by using the clear_flags() method.

>>> setcontext(ExtendedContext)
>>> getcontext().clear_flags()
>>> Decimal(355) / Decimal(113)
Decimal('3.14159292')
>>> getcontext()
Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
        capitals=1, clamp=0, flags=[Inexact, Rounded], traps=[])

The flags entry shows that the rational approximation to Pi was rounded (digits beyond the context precision were thrown away) and that the result is inexact (some of the discarded digits were non-zero).

Individual traps are set using the dictionary in the traps field of a context:

>>> setcontext(ExtendedContext)
>>> Decimal(1) / Decimal(0)
Decimal('Infinity')
>>> getcontext().traps[DivisionByZero] = 1
>>> Decimal(1) / Decimal(0)
Traceback (most recent call last):
  File "<pyshell#112>", line 1, in -toplevel-
    Decimal(1) / Decimal(0)
DivisionByZero: x / 0

Most programs adjust the current context only once, at the beginning of the program. And, in many applications, data is converted to Decimal with a single cast inside a loop. With context set and decimals created, the bulk of the program manipulates the data no differently than with other Python numeric types.

Decimal objects

class decimal.Decimal(value="0", context=None)

Construct a new Decimal object based on value. value can be an integer, string, tuple, float, or another Decimal object. If no value is given, returns Decimal('0'). If value is a string, it should conform to the decimal numeric string syntax after leading and trailing whitespace characters, as well as underscores throughout, are removed:

sign           ::=  '+' | '-'
digit          ::=  '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
indicator      ::=  'e' | 'E'
digits         ::=  digit [digit]...
decimal-part   ::=  digits '.' [digits] | ['.'] digits
exponent-part  ::=  indicator [sign] digits
infinity       ::=  'Infinity' | 'Inf'
nan            ::=  'NaN' [digits] | 'sNaN' [digits]
numeric-value  ::=  decimal-part [exponent-part] | infinity
numeric-string ::=  [sign] numeric-value | [sign] nan

Other Unicode decimal digits are also permitted where digit appears above. These include decimal digits from various other alphabets (for example, Arabic-Indic and Devanāgarī digits) along with the fullwidth digits '\uff10' through '\uff19'.

If value is a tuple, it should have three components, a sign (0 for positive or 1 for negative), a tuple of digits, and an integer exponent. For example, Decimal((0, (1, 4, 1, 4), -3)) returns Decimal('1.414').

If value is a float, the binary floating point value is losslessly converted to its exact decimal equivalent. This conversion can often require 53 or more digits of precision. For example, Decimal(float('1.1')) converts to Decimal('1.100000000000000088817841970012523233890533447265625').

The context precision does not affect how many digits are stored. That is determined exclusively by the number of digits in value. For example, Decimal('3.00000') records all five zeros even if the context precision is only three.

The purpose of the context argument is determining what to do if value is a malformed string. If the context traps InvalidOperation, an exception is raised; otherwise, the constructor returns a new Decimal with the value of NaN. Once constructed, Decimal objects are immutable.

Changed in version 3.2: The argument to the constructor is now permitted to be a float instance.
Changed in version 3.3: float arguments raise an exception if the FloatOperation trap is set. By default the trap is off.
Changed in version 3.6: Underscores are allowed for grouping, as with integral and floating-point literals in code.

Decimal floating point objects share many properties with the other built-in numeric types such as float and int. All of the usual math operations and special methods apply. Likewise, decimal objects can be copied, pickled, printed, used as dictionary keys, used as set elements, compared, sorted, and coerced to another type (such as float or int).

There are some small differences between arithmetic on Decimal objects and arithmetic on integers and floats. When the remainder operator % is applied to Decimal objects, the sign of the result is the sign of the dividend rather than the sign of the divisor:

>>> (-7) % 4
1
>>> Decimal(-7) % Decimal(4)
Decimal('-3')

The integer division operator // behaves analogously, returning the integer part of the true quotient (truncating towards zero) rather than its floor, so as to preserve the usual identity x == (x // y) * y + x % y:

>>> -7 // 4
-2
>>> Decimal(-7) // Decimal(4)
Decimal('-1')

The % and // operators implement the remainder and divide-integer operations (respectively) as described in the specification.

Decimal objects cannot generally be combined with floats or instances of fractions.Fraction in arithmetic operations: an attempt to add a Decimal to a float, for example, will raise a TypeError. However, it is possible to use Python's comparison operators to compare a Decimal instance x with another number y. This avoids confusing results when doing equality comparisons between numbers of different types.

Changed in version 3.2: Mixed-type comparisons between Decimal instances and other numeric types are now fully supported.
In addition to the standard numeric properties, decimal floating point objects also have a number of specialized methods:

adjusted()
Return the adjusted exponent after shifting out the coefficient's rightmost digits until only the lead digit remains: Decimal('321e+5').adjusted() returns seven. Used for determining the position of the most significant digit with respect to the decimal point.

as_integer_ratio()
Return a pair (n, d) of integers that represent the given Decimal instance as a fraction, in lowest terms and with a positive denominator:

>>> Decimal('-3.14').as_integer_ratio()
(-157, 50)

The conversion is exact. Raise OverflowError on infinities and ValueError on NaNs. New in version 3.6.

as_tuple()
Return a named tuple representation of the number: DecimalTuple(sign, digits, exponent).

canonical()
Return the canonical encoding of the argument. Currently, the encoding of a Decimal instance is always canonical, so this operation returns its argument unchanged.

compare(other, context=None)
Compare the values of two Decimal instances. compare() returns a Decimal instance, and if either operand is a NaN then the result is a NaN:

a or b is a NaN ==> Decimal('NaN')
a < b           ==> Decimal('-1')
a == b          ==> Decimal('0')
a > b           ==> Decimal('1')

compare_signal(other, context=None)
This operation is identical to the compare() method, except that all NaNs signal. That is, if neither operand is a signaling NaN then any quiet NaN operand is treated as though it were a signaling NaN.

compare_total(other, context=None)
Compare two operands using their abstract representation rather than their numerical value. Similar to the compare() method, but the result gives a total ordering on Decimal instances. Two Decimal instances with the same numeric value but different representations compare unequal in this ordering:

>>> Decimal('12.0').compare_total(Decimal('12'))
Decimal('-1')

Quiet and signaling NaNs are also included in the total ordering. The result of this function is Decimal('0') if both operands have the same representation, Decimal('-1') if the first operand is lower in the total order than the second, and Decimal('1') if the first operand is higher in the total order than the second operand. See the specification for details of the total order. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.

compare_total_mag(other, context=None)
Compare two operands using their abstract representation rather than their value as in compare_total(), but ignoring the sign of each operand. x.compare_total_mag(y) is equivalent to x.copy_abs().compare_total(y.copy_abs()). This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.

conjugate()
Just returns self; this method exists only to comply with the Decimal Specification.

copy_abs()
Return the absolute value of the argument. This operation is unaffected by the context and is quiet: no flags are changed and no rounding is performed.

copy_negate()
Return the negation of the argument. This operation is unaffected by the context and is quiet: no flags are changed and no rounding is performed.

copy_sign(other, context=None)
Return a copy of the first operand with the sign set to be the same as the sign of the second operand.
For example:

>>> Decimal('2.3').copy_sign(Decimal('-1.5'))
Decimal('-2.3')

This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.

exp(context=None)
Return the value of the (natural) exponential function e**x at the given number. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode.

>>> Decimal(1).exp()
Decimal('2.718281828459045235360287471')
>>> Decimal(321).exp()
Decimal('2.561702493119680037517373933E+139')

from_float(f)
Classmethod that converts a float to a decimal number, exactly.

Note: Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. That equivalent value in decimal is 0.1000000000000000055511151231257827021181583404541015625.

Note: From Python 3.2 onwards, a Decimal instance can also be constructed directly from a float.

>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal.from_float(float('nan'))
Decimal('NaN')
>>> Decimal.from_float(float('inf'))
Decimal('Infinity')
>>> Decimal.from_float(float('-inf'))
Decimal('-Infinity')

New in version 3.1.

fma(other, third, context=None)
Fused multiply-add. Return self*other+third with no rounding of the intermediate product self*other.

>>> Decimal(2).fma(3, 5)
Decimal('11')

is_canonical()
Return True if the argument is canonical and False otherwise. Currently, a Decimal instance is always canonical, so this operation always returns True.

is_finite()
Return True if the argument is a finite number, and False if the argument is an infinity or a NaN.

is_infinite()
Return True if the argument is either positive or negative infinity and False otherwise.

is_nan()
Return True if the argument is a (quiet or signaling) NaN and False otherwise.

is_normal(context=None)
Return True if the argument is a normal finite number. Return False if the argument is zero, subnormal, infinite or a NaN.

is_qnan()
Return True if the argument is a quiet NaN, and False otherwise.

is_signed()
Return True if the argument has a negative sign and False otherwise. Note that zeros and NaNs can both carry signs.

is_snan()
Return True if the argument is a signaling NaN and False otherwise.

is_subnormal(context=None)
Return True if the argument is subnormal, and False otherwise.

is_zero()
Return True if the argument is a (positive or negative) zero and False otherwise.

ln(context=None)
Return the natural (base e) logarithm of the operand. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode.

log10(context=None)
Return the base ten logarithm of the operand. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode.

logb(context=None)
For a nonzero number, return the adjusted exponent of its operand as a Decimal instance. If the operand is a zero then Decimal('-Infinity') is returned and the DivisionByZero flag is raised. If the operand is an infinity then Decimal('Infinity') is returned.

logical_and(other, context=None)
logical_and() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise and of the two operands.

logical_invert(context=None)
logical_invert() is a logical operation. The result is the digit-wise inversion of the operand.
logical_or(other, context=None)
logical_or() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise or of the two operands.

logical_xor(other, context=None)
logical_xor() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise exclusive or of the two operands.

max(other, context=None)
Like max(self, other) except that the context rounding rule is applied before returning and that NaN values are either signaled or ignored (depending on the context and whether they are signaling or quiet).

max_mag(other, context=None)
Similar to the max() method, but the comparison is done using the absolute values of the operands.

min(other, context=None)
Like min(self, other) except that the context rounding rule is applied before returning and that NaN values are either signaled or ignored (depending on the context and whether they are signaling or quiet).

min_mag(other, context=None)
Similar to the min() method, but the comparison is done using the absolute values of the operands.

next_minus(context=None)
Return the largest number representable in the given context (or in the current thread's context if no context is given) that is smaller than the given operand.

next_plus(context=None)
Return the smallest number representable in the given context (or in the current thread's context if no context is given) that is larger than the given operand.

next_toward(other, context=None)
If the two operands are unequal, return the number closest to the first operand in the direction of the second operand. If both operands are numerically equal, return a copy of the first operand with the sign set to be the same as the sign of the second operand.

normalize(context=None)
Normalize the number by stripping the rightmost trailing zeros and converting any result equal to Decimal('0') to Decimal('0e0'). Used for producing canonical values for attributes of an equivalence class. For example, Decimal('32.100') and Decimal('0.321000e+2') both normalize to the equivalent value Decimal('32.1').

number_class(context=None)
Return a string describing the class of the operand. The returned value is one of the following ten strings:

"-Infinity", indicating that the operand is negative infinity.
"-Normal", indicating that the operand is a negative normal number.
"-Subnormal", indicating that the operand is negative and subnormal.
"-Zero", indicating that the operand is a negative zero.
"+Zero", indicating that the operand is a positive zero.
"+Subnormal", indicating that the operand is positive and subnormal.
"+Normal", indicating that the operand is a positive normal number.
"+Infinity", indicating that the operand is positive infinity.
"NaN", indicating that the operand is a quiet NaN (Not a Number).
"sNaN", indicating that the operand is a signaling NaN.

quantize(exp, rounding=None, context=None)
Return a value equal to the first operand after rounding and having the exponent of the second operand.

>>> Decimal('1.41421356').quantize(Decimal('1.000'))
Decimal('1.414')

Unlike other operations, if the length of the coefficient after the quantize operation would be greater than precision, then an InvalidOperation is signaled. This guarantees that, unless there is an error condition, the quantized exponent is always equal to that of the right-hand operand. Also unlike other operations, quantize never signals Underflow, even if the result is subnormal and inexact.
If the exponent of the second operand is larger than that of the first then rounding may be necessary. In this case, the rounding mode is determined by the rounding argument if given, else by the given context argument; if neither argument is given the rounding mode of the current thread's context is used. An error is returned whenever the resulting exponent is greater than Emax or less than Etiny.

radix()
Return Decimal(10), the radix (base) in which the Decimal class does all its arithmetic. Included for compatibility with the specification.

remainder_near(other, context=None)
Return the remainder from dividing self by other. This differs from self % other in that the sign of the remainder is chosen so as to minimize its absolute value. More precisely, the return value is self - n * other where n is the integer nearest to the exact value of self / other, and if two integers are equally near then the even one is chosen. If the result is zero then its sign will be the sign of self.

>>> Decimal(18).remainder_near(Decimal(10))
Decimal('-2')
>>> Decimal(25).remainder_near(Decimal(10))
Decimal('5')
>>> Decimal(35).remainder_near(Decimal(10))
Decimal('-5')

rotate(other, context=None)
Return the result of rotating the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to rotate. If the second operand is positive then rotation is to the left; otherwise rotation is to the right. The coefficient of the first operand is padded on the left with zeros to length precision if necessary. The sign and exponent of the first operand are unchanged.

same_quantum(other, context=None)
Test whether self and other have the same exponent or whether both are NaN. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.

scaleb(other, context=None)
Return the first operand with exponent adjusted by the second. Equivalently, return the first operand multiplied by 10**other. The second operand must be an integer.

shift(other, context=None)
Return the result of shifting the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to shift. If the second operand is positive then the shift is to the left; otherwise the shift is to the right. Digits shifted into the coefficient are zeros. The sign and exponent of the first operand are unchanged.

sqrt(context=None)
Return the square root of the argument to full precision.

to_eng_string(context=None)
Convert to a string, using engineering notation if an exponent is needed. Engineering notation has an exponent which is a multiple of 3. This can leave up to 3 digits to the left of the decimal place and may require the addition of either one or two trailing zeros. For example, this converts Decimal('123E+1') to Decimal('1.23E+3').

to_integral(rounding=None, context=None)
Identical to the to_integral_value() method. The to_integral name has been kept for compatibility with older versions.

to_integral_exact(rounding=None, context=None)
Round to the nearest integer, signaling Inexact or Rounded as appropriate if rounding occurs.
The rounding mode is determined by the rounding parameter if given, else by the given context. If neither parameter is given then the rounding mode of the current context is used.

to_integral_value(rounding=None, context=None)
Round to the nearest integer without signaling Inexact or Rounded. If given, applies rounding; otherwise, uses the rounding method in either the supplied context or the current context.

Logical operands

The logical_and(), logical_invert(), logical_or(), and logical_xor() methods expect their arguments to be logical operands. A logical operand is a Decimal instance whose exponent and sign are both zero, and whose digits are all either 0 or 1.

Context objects

Contexts are environments for arithmetic operations. They govern precision, set rules for rounding, determine which signals are treated as exceptions, and limit the range for exponents.

Each thread has its own current context which is accessed or changed using the getcontext() and setcontext() functions:

decimal.getcontext()
Return the current context for the active thread.

decimal.setcontext(c)
Set the current context for the active thread to c.

You can also use the with statement and the localcontext() function to temporarily change the active context.

decimal.localcontext(ctx=None)
Return a context manager that will set the current context for the active thread to a copy of ctx on entry to the with-statement and restore the previous context when exiting the with-statement. If no context is specified, a copy of the current context is used.

For example, the following code sets the current decimal precision to 42 places, performs a calculation, and then automatically restores the previous context:

from decimal import localcontext

with localcontext() as ctx:
    ctx.prec = 42           # Perform a high precision calculation
    s = calculate_something()
s = +s                      # Round the final result back to the default precision

New contexts can also be created using the Context constructor described below. In addition, the module provides three pre-made contexts:

class decimal.BasicContext
This is a standard context defined by the General Decimal Arithmetic Specification. Precision is set to nine. Rounding is set to ROUND_HALF_UP. All flags are cleared. All traps are enabled (treated as exceptions) except Inexact, Rounded, and Subnormal. Because many of the traps are enabled, this context is useful for debugging.

class decimal.ExtendedContext
This is a standard context defined by the General Decimal Arithmetic Specification. Precision is set to nine. Rounding is set to ROUND_HALF_EVEN. All flags are cleared. No traps are enabled (so that exceptions are not raised during computations). Because the traps are disabled, this context is useful for applications that prefer to have a result value of NaN or Infinity instead of raising exceptions. This allows an application to complete a run in the presence of conditions that would otherwise halt the program.

class decimal.DefaultContext
This context is used by the Context constructor as a prototype for new contexts. Changing a field (such as precision) has the effect of changing the default for new contexts created by the Context constructor. This context is most useful in multi-threaded environments. Changing one of the fields before threads are started has the effect of setting system-wide defaults. Changing the fields after threads have started is not recommended as it would require thread synchronization to prevent race conditions.
In single threaded environments, it is preferable to not use this context at all. Instead, simply create contexts explicitly as described below. The default values are prec=28, rounding=ROUND_HALF_EVEN, and enabled traps for Overflow, InvalidOperation, and DivisionByZero. In addition to the three supplied contexts, new contexts can be created with the Context constructor. class decimal.Context(prec=None, rounding=None, Emin=None, Emax=None, capitals=None, clamp=None, flags=None, traps=None) Creates a new context. If a field is not specified or is None, the default values are copied from the DefaultContext. If the flags field is not specified or is None, all flags are cleared. prec is an integer in the range [1, MAX_PREC] that sets the precision for arithmetic operations in the context. The rounding option is one of the constants listed in the section Rounding Modes. The traps and flags fields list any signals to be set. Generally, new contexts should only set traps and leave the flags clear. The Emin and Emax fields are integers specifying the outer limits allowable for exponents. Emin must be in the range [MIN_EMIN, 0], Emax in the range [0, MAX_EMAX]. The capitals field is either 0 or 1 (the default). If set to 1, exponents are printed with a capital E; otherwise, a lowercase e is used: Decimal('6.02e+23'). The clamp field is either 0 (the default) or 1. If set to 1, the exponent e of a Decimal instance representable in this context is strictly limited to the range Emin - prec + 1 <= e <= Emax - prec + 1. If clamp is 0 then a weaker condition holds: the adjusted exponent of the Decimal instance is at most Emax. When clamp is 1, a large normal number will, where possible, have its exponent reduced and a corresponding number of zeros added to its coefficient, in order to fit the exponent constraints; this preserves the value of the number but loses information about significant trailing zeros. For example: >>> Context(prec=6, Emax=999, clamp=1).create_decimal('1.23e999') Decimal('1.23000E+999') A clamp value of 1 allows compatibility with the fixed-width decimal interchange formats specified in IEEE 754. The Context class defines several general purpose methods as well as a large number of methods for doing arithmetic directly in a given context. In addition, for each of the Decimal methods described above (with the exception of the adjusted() and as_tuple() methods) there is a corresponding Context method. For example, for a Context instance C and Decimal instance x, C.exp(x) is equivalent to x.exp(context=C). Each Context method accepts a Python integer (an instance of int) anywhere that a Decimal instance is accepted. clear_flags() Resets all of the flags to 0. clear_traps() Resets all of the traps to 0. New in version 3.3. copy() Return a duplicate of the context. copy_decimal(num) Return a copy of the Decimal instance num. create_decimal(num) Creates a new Decimal instance from num but using self as context. Unlike the Decimal constructor, the context precision, rounding method, flags, and traps are applied to the conversion. This is useful because constants are often given to a greater precision than is needed by the application. Another benefit is that rounding immediately eliminates unintended effects from digits beyond the current precision. 
In the following example, using unrounded inputs means that adding zero to a sum can change the result: >>> getcontext().prec = 3 >>> Decimal('3.4445') + Decimal('1.0023') Decimal('4.45') >>> Decimal('3.4445') + Decimal(0) + Decimal('1.0023') Decimal('4.44') This method implements the to-number operation of the IBM specification. If the argument is a string, no leading or trailing whitespace or underscores are permitted. create_decimal_from_float(f) Creates a new Decimal instance from a float f but rounding using self as the context. Unlike the Decimal.from_float() class method, the context precision, rounding method, flags, and traps are applied to the conversion. >>> context = Context(prec=5, rounding=ROUND_DOWN) >>> context.create_decimal_from_float(math.pi) Decimal('3.1415') >>> context = Context(prec=5, traps=[Inexact]) >>> context.create_decimal_from_float(math.pi) Traceback (most recent call last): ... decimal.Inexact: None New in version 3.1. Etiny() Returns a value equal to Emin - prec + 1 which is the minimum exponent value for subnormal results. When underflow occurs, the exponent is set to Etiny. Etop() Returns a value equal to Emax - prec + 1. The usual approach to working with decimals is to create Decimal instances and then apply arithmetic operations which take place within the current context for the active thread. An alternative approach is to use context methods for calculating within a specific context. The methods are similar to those for the Decimal class and are only briefly recounted here. abs(x) Returns the absolute value of x. add(x, y) Return the sum of x and y. canonical(x) Returns the same Decimal object x. compare(x, y) Compares x and y numerically. compare_signal(x, y) Compares the values of the two operands numerically. compare_total(x, y) Compares two operands using their abstract representation. compare_total_mag(x, y) Compares two operands using their abstract representation, ignoring sign. copy_abs(x) Returns a copy of x with the sign set to 0. copy_negate(x) Returns a copy of x with the sign inverted. copy_sign(x, y) Copies the sign from y to x. divide(x, y) Return x divided by y. divide_int(x, y) Return x divided by y, truncated to an integer. divmod(x, y) Divides two numbers and returns both the integer quotient and the remainder as a tuple. exp(x) Returns e ** x. fma(x, y, z) Returns x multiplied by y, plus z. is_canonical(x) Returns True if x is canonical; otherwise returns False. is_finite(x) Returns True if x is finite; otherwise returns False. is_infinite(x) Returns True if x is infinite; otherwise returns False. is_nan(x) Returns True if x is a qNaN or sNaN; otherwise returns False. is_normal(x) Returns True if x is a normal number; otherwise returns False. is_qnan(x) Returns True if x is a quiet NaN; otherwise returns False. is_signed(x) Returns True if x is negative; otherwise returns False. is_snan(x) Returns True if x is a signaling NaN; otherwise returns False. is_subnormal(x) Returns True if x is subnormal; otherwise returns False. is_zero(x) Returns True if x is a zero; otherwise returns False. ln(x) Returns the natural (base e) logarithm of x. log10(x) Returns the base 10 logarithm of x. logb(x) Returns the exponent of the magnitude of the operand's MSD. logical_and(x, y) Applies the logical operation and between each operand's digits. logical_invert(x) Invert all the digits in x. logical_or(x, y) Applies the logical operation or between each operand's digits. logical_xor(x, y) Applies the logical operation xor between each operand's digits.
max(x, y) Compares two values numerically and returns the maximum. max_mag(x, y) Compares the values numerically with their sign ignored. min(x, y) Compares two values numerically and returns the minimum. min_mag(x, y) Compares the values numerically with their sign ignored. minus(x) Minus corresponds to the unary prefix minus operator in Python. multiply(x, y) Return the product of x and y. next_minus(x) Returns the largest representable number smaller than x. next_plus(x) Returns the smallest representable number larger than x. next_toward(x, y) Returns the number closest to x, in direction towards y. normalize(x) Reduces x to its simplest form. number_class(x) Returns an indication of the class of x. plus(x) Plus corresponds to the unary prefix plus operator in Python. This operation applies the context precision and rounding, so it is not an identity operation. power(x, y, modulo=None) Return x to the power of y, reduced modulo modulo if given. With two arguments, compute x**y. If x is negative then y must be integral. The result will be inexact unless y is integral and the result is finite and can be expressed exactly in 'precision' digits. The rounding mode of the context is used. Results are always correctly-rounded in the Python version. Decimal(0) ** Decimal(0) results in InvalidOperation, and if InvalidOperation is not trapped, then results in Decimal('NaN'). Changed in version 3.3: The C module computes power() in terms of the correctly-rounded exp() and ln() functions. The result is well-defined but only "almost always correctly-rounded". With three arguments, compute (x**y) % modulo. For the three argument form, the following restrictions on the arguments hold: all three arguments must be integral; y must be nonnegative; at least one of x or y must be nonzero; and modulo must be nonzero and have at most 'precision' digits. The value resulting from Context.power(x, y, modulo) is equal to the value that would be obtained by computing (x**y) % modulo with unbounded precision, but is computed more efficiently. The exponent of the result is zero, regardless of the exponents of x, y and modulo. The result is always exact. quantize(x, y) Returns a value equal to x (rounded), having the exponent of y. radix() Just returns 10, as this is Decimal, :) remainder(x, y) Returns the remainder from integer division. The sign of the result, if non-zero, is the same as that of the original dividend. remainder_near(x, y) Returns x - y * n, where n is the integer nearest the exact value of x / y (if the result is 0 then its sign will be the sign of x). rotate(x, y) Returns a copy of x with its digits rotated by y places. same_quantum(x, y) Returns True if the two operands have the same exponent. scaleb(x, y) Returns the first operand with its exponent adjusted by the second value; equivalently, the first operand multiplied by 10**y. shift(x, y) Returns a copy of x with its digits shifted by y places. sqrt(x) Square root of a non-negative number to context precision. subtract(x, y) Return the difference between x and y. to_eng_string(x) Convert to a string, using engineering notation if an exponent is needed. Engineering notation has an exponent which is a multiple of 3. This can leave up to 3 digits to the left of the decimal place and may require the addition of either one or two trailing zeros. to_integral_exact(x) Rounds to an integer. to_sci_string(x) Converts a number to a string using scientific notation. Constants The constants in this section are only relevant for the C module. They are also included in the pure Python version for compatibility.
                     32-bit        64-bit
decimal.MAX_PREC     425000000     999999999999999999
decimal.MAX_EMAX     425000000     999999999999999999
decimal.MIN_EMIN    -425000000    -999999999999999999
decimal.MIN_ETINY   -849999999   -1999999999999999997

decimal.HAVE_THREADS The value is True. Deprecated, because Python now always has threads. Deprecated since version 3.9. decimal.HAVE_CONTEXTVAR The default value is True. If Python is compiled --without-decimal-contextvar, the C version uses a thread-local rather than a coroutine-local context and the value is False. This is slightly faster in some nested context scenarios. New in version 3.9: backported to 3.7 and 3.8. Rounding modes decimal.ROUND_CEILING Round towards Infinity. decimal.ROUND_DOWN Round towards zero. decimal.ROUND_FLOOR Round towards -Infinity. decimal.ROUND_HALF_DOWN Round to nearest with ties going towards zero. decimal.ROUND_HALF_EVEN Round to nearest with ties going to nearest even integer. decimal.ROUND_HALF_UP Round to nearest with ties going away from zero. decimal.ROUND_UP Round away from zero. decimal.ROUND_05UP Round away from zero if last digit after rounding towards zero would have been 0 or 5; otherwise round towards zero. Signals Signals represent conditions that arise during computation. Each corresponds to one context flag and one context trap enabler. The context flag is set whenever the condition is encountered. After the computation, flags may be checked for informational purposes (for instance, to determine whether a computation was exact). After checking the flags, be sure to clear all flags before starting the next computation. If the context's trap enabler is set for the signal, then the condition causes a Python exception to be raised. For example, if the DivisionByZero trap is set, then a DivisionByZero exception is raised upon encountering the condition. class decimal.Clamped Altered an exponent to fit representation constraints. Typically, clamping occurs when an exponent falls outside the context's Emin and Emax limits. If possible, the exponent is reduced to fit by adding zeros to the coefficient. class decimal.DecimalException Base class for other signals and a subclass of ArithmeticError. class decimal.DivisionByZero Signals the division of a non-infinite number by zero. Can occur with division, modulo division, or when raising a number to a negative power. If this signal is not trapped, returns Infinity or -Infinity with the sign determined by the inputs to the calculation. class decimal.Inexact Indicates that rounding occurred and the result is not exact. Signals when non-zero digits were discarded during rounding. The rounded result is returned. The signal flag or trap is used to detect when results are inexact. class decimal.InvalidOperation An invalid operation was performed. Indicates that an operation was requested that does not make sense. If not trapped, returns NaN. Possible causes include:

Infinity - Infinity
0 * Infinity
Infinity / Infinity
x % 0
Infinity % x
sqrt(-x) and x > 0
0 ** 0
x ** (non-integer)
x ** Infinity

class decimal.Overflow Numerical overflow. Indicates the exponent is larger than Emax after rounding has occurred. If not trapped, the result depends on the rounding mode, either pulling inward to the largest representable finite number or rounding outward to Infinity. In either case, Inexact and Rounded are also signaled. class decimal.Rounded Rounding occurred though possibly no information was lost. Signaled whenever rounding discards digits, even if those digits are zero (such as rounding 5.00 to 5.0).
If not trapped, returns the result unchanged. This signal is used to detect loss of significant digits. class decimal.Subnormal Exponent was lower than Emin prior to rounding. Occurs when an operation result is subnormal (the exponent is too small). If not trapped, returns the result unchanged. class decimal.Underflow Numerical underflow with result rounded to zero. Occurs when a subnormal result is pushed to zero by rounding. Inexact and Subnormal are also signaled. class decimal.FloatOperation Enable stricter semantics for mixing floats and Decimals. If the signal is not trapped (default), mixing floats and Decimals is permitted in the Decimal constructor, create_decimal() and all comparison operators. Both conversion and comparisons are exact. Any occurrence of a mixed operation is silently recorded by setting FloatOperation in the context flags. Explicit conversions with from_float() or create_decimal_from_float() do not set the flag. Otherwise (the signal is trapped), only equality comparisons and explicit conversions are silent. All other mixed operations raise FloatOperation. The following table summarizes the hierarchy of signals:

exceptions.ArithmeticError(exceptions.Exception)
    DecimalException
        Clamped
        DivisionByZero(DecimalException, exceptions.ZeroDivisionError)
        Inexact
            Overflow(Inexact, Rounded)
            Underflow(Inexact, Rounded, Subnormal)
        InvalidOperation
        Rounded
        Subnormal
        FloatOperation(DecimalException, exceptions.TypeError)

Floating Point Notes Mitigating round-off error with increased precision The use of decimal floating point eliminates decimal representation error (making it possible to represent 0.1 exactly); however, some operations can still incur round-off error when non-zero digits exceed the fixed precision. The effects of round-off error can be amplified by the addition or subtraction of nearly offsetting quantities resulting in loss of significance. Knuth provides two instructive examples where rounded floating point arithmetic with insufficient precision causes the breakdown of the associative and distributive properties of addition: # Examples from Seminumerical Algorithms, Section 4.2.2. >>> from decimal import Decimal, getcontext >>> getcontext().prec = 8 >>> u, v, w = Decimal(11111113), Decimal(-11111111), Decimal('7.51111111') >>> (u + v) + w Decimal('9.5111111') >>> u + (v + w) Decimal('10') >>> u, v, w = Decimal(20000), Decimal(-6), Decimal('6.0000003') >>> (u*v) + (u*w) Decimal('0.01') >>> u * (v+w) Decimal('0.0060000') The decimal module makes it possible to restore the identities by expanding the precision sufficiently to avoid loss of significance: >>> getcontext().prec = 20 >>> u, v, w = Decimal(11111113), Decimal(-11111111), Decimal('7.51111111') >>> (u + v) + w Decimal('9.51111111') >>> u + (v + w) Decimal('9.51111111') >>> >>> u, v, w = Decimal(20000), Decimal(-6), Decimal('6.0000003') >>> (u*v) + (u*w) Decimal('0.0060000') >>> u * (v+w) Decimal('0.0060000') Special values The number system for the decimal module provides special values including NaN, sNaN, -Infinity, Infinity, and two zeros, +0 and -0. Infinities can be constructed directly with: Decimal('Infinity'). Also, they can arise from dividing by zero when the DivisionByZero signal is not trapped. Likewise, when the Overflow signal is not trapped, infinity can result from rounding beyond the limits of the largest representable number. The infinities are signed (affine) and can be used in arithmetic operations where they get treated as very large, indeterminate numbers.
For instance, adding a constant to infinity gives another infinite result. Some operations are indeterminate and return NaN, or if the InvalidOperation signal is trapped, raise an exception. For example, 0/0 returns NaN which means "not a number". This variety of NaN is quiet and, once created, will flow through other computations always resulting in another NaN. This behavior can be useful for a series of computations that occasionally have missing inputs; it allows the calculation to proceed while flagging specific results as invalid. A variant is sNaN which signals rather than remaining quiet after every operation. This is a useful return value when an invalid result needs to interrupt a calculation for special handling. The behavior of Python's comparison operators can be a little surprising where a NaN is involved. A test for equality where one of the operands is a quiet or signaling NaN always returns False (even when doing Decimal('NaN')==Decimal('NaN')), while a test for inequality always returns True. An attempt to compare two Decimals using any of the <, <=, > or >= operators will raise the InvalidOperation signal if either operand is a NaN, and return False if this signal is not trapped. Note that the General Decimal Arithmetic specification does not specify the behavior of direct comparisons; these rules for comparisons involving a NaN were taken from the IEEE 854 standard (see Table 3 in section 5.7). To ensure strict standards-compliance, use the compare() and compare_signal() methods instead. The signed zeros can result from calculations that underflow. They keep the sign that would have resulted if the calculation had been carried out to greater precision. Since their magnitude is zero, both positive and negative zeros are treated as equal and their sign is informational. In addition to the two signed zeros which are distinct yet equal, there are various representations of zero with differing precisions yet equivalent in value. This takes a bit of getting used to. For an eye accustomed to normalized floating point representations, it is not immediately obvious that the following calculation returns a value equal to zero: >>> 1 / Decimal('Infinity') Decimal('0E-1000026') Working with threads The getcontext() function accesses a different Context object for each thread. Having separate thread contexts means that threads may make changes (such as getcontext().prec=10) without interfering with other threads. Likewise, the setcontext() function automatically assigns its target to the current thread. If setcontext() has not been called before getcontext(), then getcontext() will automatically create a new context for use in the current thread. The new context is copied from a prototype context called DefaultContext. To control the defaults so that each thread will use the same values throughout the application, directly modify the DefaultContext object. This should be done before any threads are started so that there won't be a race condition between threads calling getcontext(). For example:

# Set applicationwide defaults for all threads about to be launched
DefaultContext.prec = 12
DefaultContext.rounding = ROUND_DOWN
DefaultContext.traps = ExtendedContext.traps.copy()
DefaultContext.traps[InvalidOperation] = 1
setcontext(DefaultContext)

# Afterwards, the threads can be started
t1.start()
t2.start()
t3.start()
 . . .
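To make the per-thread context behavior concrete, here is a minimal sketch (not from the original docs; the precision values and thread count are arbitrary) showing that a precision change in one thread does not leak into another:

import threading
from decimal import Decimal, getcontext

def worker(prec):
    getcontext().prec = prec   # affects only this thread's context
    print(threading.current_thread().name, Decimal(1) / Decimal(7))

threads = [threading.Thread(target=worker, args=(p,)) for p in (6, 12)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Each worker prints 1/7 at its own precision because getcontext() hands every thread a fresh copy of DefaultContext.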
Recipes Here are a few recipes that serve as utility functions and that demonstrate ways to work with the Decimal class:

def moneyfmt(value, places=2, curr='', sep=',', dp='.',
             pos='', neg='-', trailneg=''):
    """Convert Decimal to a money formatted string.

    places:  required number of places after the decimal point
    curr:    optional currency symbol before the sign (may be blank)
    sep:     optional grouping separator (comma, period, space, or blank)
    dp:      decimal point indicator (comma or period)
             only specify as blank when places is zero
    pos:     optional sign for positive numbers: '+', space or blank
    neg:     optional sign for negative numbers: '-', '(', space or blank
    trailneg:optional trailing minus indicator:  '-', ')', space or blank

    >>> d = Decimal('-1234567.8901')
    >>> moneyfmt(d, curr='$')
    '-$1,234,567.89'
    >>> moneyfmt(d, places=0, sep='.', dp='', neg='', trailneg='-')
    '1.234.568-'
    >>> moneyfmt(d, curr='$', neg='(', trailneg=')')
    '($1,234,567.89)'
    >>> moneyfmt(Decimal(123456789), sep=' ')
    '123 456 789.00'
    >>> moneyfmt(Decimal('-0.02'), neg='<', trailneg='>')
    '<0.02>'

    """
    q = Decimal(10) ** -places      # 2 places --> '0.01'
    sign, digits, exp = value.quantize(q).as_tuple()
    result = []
    digits = list(map(str, digits))
    build, next = result.append, digits.pop
    if sign:
        build(trailneg)
    for i in range(places):
        build(next() if digits else '0')
    if places:
        build(dp)
    if not digits:
        build('0')
    i = 0
    while digits:
        build(next())
        i += 1
        if i == 3 and digits:
            i = 0
            build(sep)
    build(curr)
    build(neg if sign else pos)
    return ''.join(reversed(result))

def pi():
    """Compute Pi to the current precision.

    >>> print(pi())
    3.141592653589793238462643383

    """
    getcontext().prec += 2  # extra digits for intermediate steps
    three = Decimal(3)      # substitute "three=3.0" for regular floats
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n+na, na+8
        d, da = d+da, da+32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s               # unary plus applies the new precision

def exp(x):
    """Return e raised to the power of x.  Result type matches input type.

    >>> print(exp(Decimal(1)))
    2.718281828459045235360287471
    >>> print(exp(Decimal(2)))
    7.389056098930650227230427461
    >>> print(exp(2.0))
    7.38905609893
    >>> print(exp(2+0j))
    (7.38905609893+0j)

    """
    getcontext().prec += 2
    i, lasts, s, fact, num = 0, 0, 1, 1, 1
    while s != lasts:
        lasts = s
        i += 1
        fact *= i
        num *= x
        s += num / fact
    getcontext().prec -= 2
    return +s

def cos(x):
    """Return the cosine of x as measured in radians.

    The Taylor series approximation works best for a small value of x.
    For larger values, first compute x = x % (2 * pi).

    >>> print(cos(Decimal('0.5')))
    0.8775825618903727161162815826
    >>> print(cos(0.5))
    0.87758256189
    >>> print(cos(0.5+0j))
    (0.87758256189+0j)

    """
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 0, 0, 1, 1, 1, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i-1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

def sin(x):
    """Return the sine of x as measured in radians.

    The Taylor series approximation works best for a small value of x.
    For larger values, first compute x = x % (2 * pi).

    >>> print(sin(Decimal('0.5')))
    0.4794255386042030002732879352
    >>> print(sin(0.5))
    0.479425538604
    >>> print(sin(0.5+0j))
    (0.479425538604+0j)

    """
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i-1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

Decimal FAQ Q. It is cumbersome to type decimal.Decimal('1234.5').
Is there a way to minimize typing when using the interactive interpreter? A. Some users abbreviate the constructor to just a single letter: >>> D = decimal.Decimal >>> D('1.23') + D('3.45') Decimal('4.68') Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used? A. The quantize() method rounds to a fixed number of decimal places. If the Inexact trap is set, it is also useful for validation: >>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01') >>> # Round to two places >>> Decimal('3.214').quantize(TWOPLACES) Decimal('3.21') >>> # Validate that a number does not exceed two places >>> Decimal('3.21').quantize(TWOPLACES, context=Context(traps=[Inexact])) Decimal('3.21') >>> Decimal('3.214').quantize(TWOPLACES, context=Context(traps=[Inexact])) Traceback (most recent call last): ... Inexact: None Q. Once I have valid two place inputs, how do I maintain that invariant throughout an application? A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. Other operations, like division and non-integer multiplication, will change the number of decimal places and need to be followed up with a quantize() step: >>> a = Decimal('102.72') # Initial fixed-point values >>> b = Decimal('3.17') >>> a + b # Addition preserves fixed-point Decimal('105.89') >>> a - b Decimal('99.55') >>> a * 42 # So does integer multiplication Decimal('4314.24') >>> (a * b).quantize(TWOPLACES) # Must quantize non-integer multiplication Decimal('325.62') >>> (b / a).quantize(TWOPLACES) # And quantize division Decimal('0.03') In developing fixed-point applications, it is convenient to define functions to handle the quantize() step: >>> def mul(x, y, fp=TWOPLACES): ... return (x * y).quantize(fp) >>> def div(x, y, fp=TWOPLACES): ... return (x / y).quantize(fp) >>> mul(a, b) # Automatically preserve fixed-point Decimal('325.62') >>> div(b, a) Decimal('0.03') Q. There are many ways to express the same value. The numbers 200, 200.000, 2E2, and .02E+4 all have the same value at various precisions. Is there a way to transform them to a single recognizable canonical value? A. The normalize() method maps all equivalent values to a single representative: >>> values = map(Decimal, '200 200.000 2E2 .02E+4'.split()) >>> [v.normalize() for v in values] [Decimal('2E+2'), Decimal('2E+2'), Decimal('2E+2'), Decimal('2E+2')] Q. Some decimal values always print with exponential notation. Is there a way to get a non-exponential representation? A. For some values, exponential notation is the only way to express the number of significant places in the coefficient. For example, expressing 5.0E+3 as 5000 keeps the value constant but cannot show the original's two-place significance. If an application does not care about tracking significance, it is easy to remove the exponent and trailing zeroes, losing significance, but keeping the value unchanged: >>> def remove_exponent(d): ... return d.quantize(Decimal(1)) if d == d.to_integral() else d.normalize() >>> remove_exponent(Decimal('5E+3')) Decimal('5000') Q. Is there a way to convert a regular float to a Decimal? A. Yes, any binary floating point number can be exactly expressed as a Decimal though an exact conversion may take more precision than intuition would suggest: >>> Decimal(math.pi) Decimal('3.141592653589793115997963468544185161590576171875') Q.
Within a complex calculation, how can I make sure that I haven't gotten a spurious result because of insufficient precision or rounding anomalies? A. The decimal module makes it easy to test results. A best practice is to re-run calculations using greater precision and with various rounding modes. Widely differing results indicate insufficient precision, rounding mode issues, ill-conditioned inputs, or a numerically unstable algorithm. Q. I noticed that context precision is applied to the results of operations but not to the inputs. Is there anything to watch out for when mixing values of different precisions? A. Yes. The principle is that all values are considered to be exact and so is the arithmetic on those values. Only the results are rounded. The advantage for inputs is that "what you type is what you get". A disadvantage is that the results can look odd if you forget that the inputs haven't been rounded: >>> getcontext().prec = 3 >>> Decimal('3.104') + Decimal('2.104') Decimal('5.21') >>> Decimal('3.104') + Decimal('0.000') + Decimal('2.104') Decimal('5.20') The solution is either to increase precision or to force rounding of inputs using the unary plus operation: >>> getcontext().prec = 3 >>> +Decimal('1.23456789') # unary plus triggers rounding Decimal('1.23') Alternatively, inputs can be rounded upon creation using the Context.create_decimal() method: >>> Context(prec=5, rounding=ROUND_DOWN).create_decimal('1.2345678') Decimal('1.2345') Q. Is the CPython implementation fast for large numbers? A. Yes. In the CPython and PyPy3 implementations, the C/CFFI versions of the decimal module integrate the high speed libmpdec library for arbitrary precision correctly-rounded decimal floating point arithmetic 1. libmpdec uses Karatsuba multiplication for medium-sized numbers and the Number Theoretic Transform for very large numbers. The context must be adapted for exact arbitrary precision arithmetic. Emin and Emax should always be set to the maximum values, clamp should always be 0 (the default). Setting prec requires some care. The easiest approach for trying out bignum arithmetic is to use the maximum value for prec as well 2: >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)) >>> x = Decimal(2) ** 256 >>> x / 128 Decimal('904625697166532776746648320380374280103671755200316906558262375061821325312') For inexact results, MAX_PREC is far too large on 64-bit platforms and the available memory will be insufficient: >>> Decimal(1) / 3 Traceback (most recent call last): File "<stdin>", line 1, in <module> MemoryError On systems with overallocation (e.g. Linux), a more sophisticated approach is to adjust prec to the amount of available RAM.
Suppose that you have 8GB of RAM and expect 10 simultaneous operands using a maximum of 500MB each: >>> import sys >>> >>> # Maximum number of digits for a single operand using 500MB in 8-byte words >>> # with 19 digits per word (4-byte and 9 digits for the 32-bit build): >>> maxdigits = 19 * ((500 * 1024**2) // 8) >>> >>> # Check that this works: >>> c = Context(prec=maxdigits, Emax=MAX_EMAX, Emin=MIN_EMIN) >>> c.traps[Inexact] = True >>> setcontext(c) >>> >>> # Fill the available precision with nines: >>> x = Decimal(0).logical_invert() * 9 >>> sys.getsizeof(x) 524288112 >>> x + 2 Traceback (most recent call last): File "<stdin>", line 1, in <module> decimal.Inexact: [<class 'decimal.Inexact'>] In general (and especially on systems without overallocation), it is recommended to estimate even tighter bounds and set the Inexact trap if all calculations are expected to be exact. 1 New in version 3.3. 2 Changed in version 3.9: This approach now works for all exact results except for non-integer powers.
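Rounding out the Signals material above, here is a minimal sketch (not from the original docs) of inspecting and clearing context flags between computations; it assumes the default context precision of 28, so the exact digits shown may differ under other settings:

>>> from decimal import Decimal, getcontext, Inexact
>>> ctx = getcontext()
>>> ctx.clear_flags()
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
>>> ctx.flags[Inexact]   # the division was rounded, so the flag is set
True
>>> ctx.clear_flags()    # reset before starting the next computation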
doc_24820
See torch.bitwise_not()
doc_24821
Get the current window caption get_caption() -> (title, icontitle) Returns the title and icontitle for the display Surface. These will often be the same value.
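A minimal usage sketch (the window size and caption are arbitrary; assumes a display can be initialized):

import pygame

pygame.init()
pygame.display.set_mode((320, 240))
pygame.display.set_caption('My Game')
title, icontitle = pygame.display.get_caption()
print(title, icontitle)   # typically both 'My Game'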
doc_24822
Create a new figure manager instance for the given figure.
doc_24823
>>> g = GeoIP2() >>> g.country('google.com') {'country_code': 'US', 'country_name': 'United States'} >>> g.city('72.14.207.99') {'city': 'Mountain View', 'continent_code': 'NA', 'continent_name': 'North America', 'country_code': 'US', 'country_name': 'United States', 'dma_code': 807, 'is_in_european_union': False, 'latitude': 37.419200897216797, 'longitude': -122.05740356445312, 'postal_code': '94043', 'region': 'CA', 'time_zone': 'America/Los_Angeles'} >>> g.lat_lon('salon.com') (39.0437, -77.4875) >>> g.lon_lat('uh.edu') (-95.4342, 29.834) >>> g.geos('24.124.1.80').wkt 'POINT (-97 38)' API Reference class GeoIP2(path=None, cache=0, country=None, city=None) The GeoIP2 object does not require any parameters to use the default settings. However, at the very least the GEOIP_PATH setting should be set with the path of the location of your GeoIP datasets. The following initialization keywords may be used to customize any of the defaults. Keyword Arguments Description path Base directory to where GeoIP data is located or the full path to where the city or country data files (.mmdb) are located. Assumes that both the city and country datasets are located in this directory; overrides the GEOIP_PATH setting. cache The cache settings when opening up the GeoIP datasets. May be an integer in (0, 1, 2, 4, 8) corresponding to the MODE_AUTO, MODE_MMAP_EXT, MODE_MMAP, MODE_FILE, and MODE_MEMORY C API settings, respectively. Defaults to 0 (MODE_AUTO). country The name of the GeoIP country data file. Defaults to GeoLite2-Country.mmdb. Setting this keyword overrides the GEOIP_COUNTRY setting. city The name of the GeoIP city data file. Defaults to GeoLite2-City.mmdb. Setting this keyword overrides the GEOIP_CITY setting. Methods Instantiating classmethod GeoIP2.open(path, cache) This classmethod instantiates the GeoIP2 object from the given database path and given cache setting. Querying All the following querying routines may take either a string IP address or a fully qualified domain name (FQDN). For example, both '205.186.163.125' and 'djangoproject.com' would be valid query parameters. GeoIP2.city(query) Returns a dictionary of city information for the given query. Some of the values in the dictionary may be undefined (None). GeoIP2.country(query) Returns a dictionary with the country code and country for the given query. GeoIP2.country_code(query) Returns the country code corresponding to the query. GeoIP2.country_name(query) Returns the country name corresponding to the query. Coordinate Retrieval GeoIP2.coords(query) Returns a coordinate tuple of (longitude, latitude). GeoIP2.lon_lat(query) Returns a coordinate tuple of (longitude, latitude). GeoIP2.lat_lon(query) Returns a coordinate tuple of (latitude, longitude). GeoIP2.geos(query) Returns a Point object corresponding to the query. Settings GEOIP_PATH A string or pathlib.Path specifying the directory where the GeoIP data files are located. This setting is required unless manually specified with path keyword when initializing the GeoIP2 object. GEOIP_COUNTRY The basename to use for the GeoIP country data file. Defaults to 'GeoLite2-Country.mmdb'. GEOIP_CITY The basename to use for the GeoIP city data file. Defaults to 'GeoLite2-City.mmdb'. Exceptions exception GeoIP2Exception The exception raised when an error occurs in a call to the underlying geoip2 library. Footnotes [1] GeoIP(R) is a registered trademark of MaxMind, Inc.
doc_24824
Raises an AssertionError if two items are not equal up to significant digits. Note It is recommended to use one of assert_allclose, assert_array_almost_equal_nulp or assert_array_max_ulp instead of this function for more consistent floating point comparisons. Given two numbers, check that they are approximately equal. Approximately equal is defined as the number of significant digits that agree. Parameters actualscalar The object to check. desiredscalar The expected object. significantint, optional Desired precision, default is 7. err_msgstr, optional The error message to be printed in case of failure. verbosebool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also assert_allclose Compare two array_like objects for equality with desired relative and/or absolute precision. assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal Examples >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, ... significant=8) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, ... significant=8) Traceback (most recent call last): ... AssertionError: Items are not equal to 8 significant digits: ACTUAL: 1.234567e-21 DESIRED: 1.2345672e-21 the evaluated condition that raises the exception is >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) True
doc_24825
Enables workarounds for various bugs present in other SSL implementations. This option is set by default. It does not necessarily set the same flags as OpenSSL’s SSL_OP_ALL constant. New in version 3.2.
doc_24826
Returns the string representation of the Period, depending on the selected fmt. fmt must be a string containing one or several directives. The method recognizes the same directives as the time.strftime() function of the standard Python distribution, as well as the specific additional directives %f, %F, %q. (formatting & docs originally from scikits.timeseries). Directive Meaning Notes %a Locale's abbreviated weekday name. %A Locale's full weekday name. %b Locale's abbreviated month name. %B Locale's full month name. %c Locale's appropriate date and time representation. %d Day of the month as a decimal number [01,31]. %f 'Fiscal' year without a century as a decimal number [00,99] (1) %F 'Fiscal' year with a century as a decimal number (2) %H Hour (24-hour clock) as a decimal number [00,23]. %I Hour (12-hour clock) as a decimal number [01,12]. %j Day of the year as a decimal number [001,366]. %m Month as a decimal number [01,12]. %M Minute as a decimal number [00,59]. %p Locale's equivalent of either AM or PM. (3) %q Quarter as a decimal number [01,04] %S Second as a decimal number [00,61]. (4) %U Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0. (5) %w Weekday as a decimal number [0(Sunday),6]. %W Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0. (5) %x Locale's appropriate date representation. %X Locale's appropriate time representation. %y Year without century as a decimal number [00,99]. %Y Year with century as a decimal number. %Z Time zone name (no characters if no time zone exists). %% A literal '%' character. Notes (1) The %f directive is the same as %y if the frequency is not quarterly. Otherwise, it corresponds to the 'fiscal' year, as defined by the qyear attribute. (2) The %F directive is the same as %Y if the frequency is not quarterly. Otherwise, it corresponds to the 'fiscal' year, as defined by the qyear attribute. (3) The %p directive only affects the output hour field if the %I directive is used to parse the hour. (4) The range really is 0 to 61; this accounts for leap seconds and the (very rare) double leap seconds. (5) The %U and %W directives are only used in calculations when the day of the week and the year are specified. Examples >>> a = Period(freq='Q-JUL', year=2006, quarter=1) >>> a.strftime('%F-Q%q') '2006-Q1' >>> # Output the last month in the quarter of this date >>> a.strftime('%b-%Y') 'Oct-2005' >>> >>> a = Period(freq='D', year=2001, month=1, day=1) >>> a.strftime('%d-%b-%Y') '01-Jan-2001' >>> a.strftime('%b. %d, %Y was a %A') 'Jan. 01, 2001 was a Monday'
doc_24827
Learn model for the data X with variational Bayes method. When learning_method is ‘online’, use mini-batch update. Otherwise, use batch update. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Document word matrix. yIgnored Returns self
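Assuming this method belongs to scikit-learn's LatentDirichletAllocation, a minimal batch-fit sketch with a fabricated toy document-word matrix might look like this:

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

X = np.random.randint(0, 5, size=(100, 50))   # toy document-word count matrix
lda = LatentDirichletAllocation(n_components=10, learning_method='batch')
lda.fit(X)                                    # batch variational Bayes update
print(lda.components_.shape)                  # (10, 50): topic-word matrix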
doc_24828
Bases: matplotlib.patches.ArrowStyle._Curve A simple curve without any arrow head. Parameters head_lengthfloat, default: 0.4 Length of the arrow head, relative to mutation_scale. head_widthfloat, default: 0.2 Width of the arrow head, relative to mutation_scale. widthAfloat, default: 1.0 Width of the bracket at the beginning of the arrow widthBfloat, default: 1.0 Width of the bracket at the end of the arrow lengthAfloat, default: 0.2 Length of the bracket at the beginning of the arrow lengthBfloat, default: 0.2 Length of the bracket at the end of the arrow angleAfloat, default 0 Orientation of the bracket at the beginning, as a counterclockwise angle. 0 degrees means perpendicular to the line. angleBfloat, default 0 Orientation of the bracket at the end, as a counterclockwise angle. 0 degrees means perpendicular to the line. scaleAfloat, default mutation_size The mutation_size for the beginning bracket scaleBfloat, default mutation_size The mutation_size for the end bracket
doc_24829
This implementation registers a SIGCHLD signal handler on instantiation. That can break third-party code that installs a custom handler for the SIGCHLD signal. The watcher avoids disrupting other code spawning processes by polling every process explicitly on a SIGCHLD signal. There is no limitation for running subprocesses from different threads once the watcher is installed. The solution is safe but it has a significant overhead when handling a large number of processes (O(n) each time a SIGCHLD is received). New in version 3.8.
doc_24830
Make a 2D hexagonal binning plot of points x, y. If C is None, the value of the hexagon is determined by the number of points in the hexagon. Otherwise, C specifies values at the coordinate (x[i], y[i]). For each hexagon, these values are reduced using reduce_C_function. Parameters x, yarray-like The data positions. x and y must be of the same length. Carray-like, optional If given, these values are accumulated in the bins. Otherwise, every point has a value of 1. Must be of the same length as x and y. gridsizeint or (int, int), default: 100 If a single int, the number of hexagons in the x-direction. The number of hexagons in the y-direction is chosen such that the hexagons are approximately regular. Alternatively, if a tuple (nx, ny), the number of hexagons in the x-direction and the y-direction. bins'log' or int or sequence, default: None Discretization of the hexagon values. If None, no binning is applied; the color of each hexagon directly corresponds to its count value. If 'log', use a logarithmic scale for the colormap. Internally, \(log_{10}(i+1)\) is used to determine the hexagon color. This is equivalent to norm=LogNorm(). If an integer, divide the counts in the specified number of bins, and color the hexagons accordingly. If a sequence of values, the values of the lower bound of the bins to be used. xscale{'linear', 'log'}, default: 'linear' Use a linear or log10 scale on the horizontal axis. yscale{'linear', 'log'}, default: 'linear' Use a linear or log10 scale on the vertical axis. mincntint > 0, default: None If not None, only display cells with more than mincnt number of points in the cell. marginalsbool, default: False If marginals is True, plot the marginal density as colormapped rectangles along the bottom of the x-axis and left of the y-axis. extent4-tuple of float, default: None The limits of the bins (xmin, xmax, ymin, ymax). The default assigns the limits based on gridsize, x, y, xscale and yscale. If xscale or yscale is set to 'log', the limits are expected to be the exponent for a power of 10. E.g. for x-limits of 1 and 50 in 'linear' scale and y-limits of 10 and 1000 in 'log' scale, enter (1, 50, 1, 3). Returns PolyCollection A PolyCollection defining the hexagonal bins. PolyCollection.get_offsets contains a Mx2 array containing the x, y positions of the M hexagon centers. PolyCollection.get_array contains the values of the M hexagons. If marginals is True, horizontal bar and vertical bar (both PolyCollections) will be attached to the return collection as attributes hbar and vbar. Other Parameters cmapstr or Colormap, default: rcParams["image.cmap"] (default: 'viridis') The Colormap instance or registered colormap name used to map the bin values to colors. normNormalize, optional The Normalize instance scales the bin values to the canonical colormap range [0, 1] for mapping to colors. By default, the data range is mapped to the colorbar range using linear scaling. vmin, vmaxfloat, default: None The colorbar range. If None, suitable min/max values are automatically chosen by the Normalize instance (defaults to the respective min/max values of the bins in case of the default linear scaling). It is an error to use vmin/vmax when norm is given. alphafloat between 0 and 1, optional The alpha blending value, between 0 (transparent) and 1 (opaque). linewidthsfloat, default: None If None, defaults to 1.0. edgecolors{'face', 'none', None} or color, default: 'face' The color of the hexagon edges. Possible values are: 'face': Draw the edges in the same color as the fill color. 
'none': No edges are drawn. This can sometimes lead to unsightly unpainted pixels between the hexagons. None: Draw outlines in the default color. An explicit color. reduce_C_functioncallable, default: numpy.mean The function to aggregate C within the bins. It is ignored if C is not given. This must have the signature: def reduce_C_function(C: array) -> float Commonly used functions are: numpy.mean: average of the points numpy.sum: integral of the point values numpy.amax: value taken from the largest point dataindexable object, optional If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): x, y, C **kwargsPolyCollection properties All other keyword arguments are passed on to PolyCollection: Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha array-like or scalar or None animated bool antialiased or aa or antialiaseds bool or list of bools array array-like or None capstyle CapStyle or {'butt', 'projecting', 'round'} clim (vmin: float, vmax: float) clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None cmap Colormap or str or None color color or list of rgba tuples edgecolor or ec or edgecolors color or list of colors or 'face' facecolor or facecolors or fc color or list of colors figure Figure gid str hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} in_layout bool joinstyle JoinStyle or {'miter', 'round', 'bevel'} label object linestyle or dashes or linestyles or ls str or tuple or list thereof linewidth or linewidths or lw float or list of floats norm Normalize or None offset_transform Transform offsets (N, 2) or (2,) array-like path_effects AbstractPathEffect paths list of array-like picker None or bool or float or callable pickradius float rasterized bool sizes ndarray or None sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str urls list of str or None verts list of array-like verts_and_codes unknown visible bool zorder float See also hist2d 2D histogram rectangular bins Examples using matplotlib.axes.Axes.hexbin Hexagonal binned plot hexbin(x, y, C)
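A minimal sketch of a count-based hexbin plot (the random data and styling choices are arbitrary):

import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 10000))
fig, ax = plt.subplots()
hb = ax.hexbin(x, y, gridsize=30, mincnt=1)   # hexagon color = point count
fig.colorbar(hb, ax=ax, label='counts')
plt.show()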
doc_24831
Add a mark with the given id (greater than 0) and the given name at the given position. This method can be called at any time before close().
doc_24832
Remove a file descriptor being tracked by a polling object. Just like the register() method, fd can be an integer or an object with a fileno() method that returns an integer. Attempting to remove a file descriptor that was never registered causes a KeyError exception to be raised.
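A minimal sketch on a platform that provides select.poll() (the socket is just a convenient object with a fileno() method):

import select
import socket

sock = socket.socket()
poller = select.poll()
poller.register(sock, select.POLLIN)   # an object with fileno() is accepted
poller.unregister(sock)                # KeyError if sock was never registered
sock.close()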
doc_24833
An alias for the built-in OSError exception.
doc_24834
Release a semaphore, incrementing the internal counter by n. When it was zero on entry and other threads are waiting for it to become larger than zero again, wake up n of those threads. Changed in version 3.9: Added the n parameter to release multiple waiting threads at once.
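A minimal sketch of waking multiple waiters at once (requires Python 3.9+ for the n argument; the worker count is arbitrary):

import threading

sem = threading.Semaphore(0)

def worker(n):
    sem.acquire()
    print('worker', n, 'woke up')

for i in range(2):
    threading.Thread(target=worker, args=(i,)).start()

sem.release(n=2)   # increment by 2, waking both waiting threads at once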
doc_24835
Raise a square matrix to the (integer) power n. For positive integers n, the power is computed by repeated matrix squarings and matrix multiplications. If n == 0, the identity matrix of the same shape as M is returned. If n < 0, the inverse is computed and then raised to the abs(n). Note Stacks of object matrices are not currently supported. Parameters a(…, M, M) array_like Matrix to be “powered”. nint The exponent can be any integer or long integer, positive, negative, or zero. Returns a**n(…, M, M) ndarray or matrix object The return value is the same shape and type as M; if the exponent is positive or zero then the type of the elements is the same as those of M. If the exponent is negative the elements are floating-point. Raises LinAlgError For matrices that are not square or that (for negative powers) cannot be inverted numerically. Examples >>> from numpy.linalg import matrix_power >>> i = np.array([[0, 1], [-1, 0]]) # matrix equiv. of the imaginary unit >>> matrix_power(i, 3) # should = -i array([[ 0, -1], [ 1, 0]]) >>> matrix_power(i, 0) array([[1, 0], [0, 1]]) >>> matrix_power(i, -3) # should = 1/(-i) = i, but w/ f.p. elements array([[ 0., 1.], [-1., 0.]]) Somewhat more sophisticated example >>> q = np.zeros((4, 4)) >>> q[0:2, 0:2] = -i >>> q[2:4, 2:4] = i >>> q # one of the three quaternion units not equal to 1 array([[ 0., -1., 0., 0.], [ 1., 0., 0., 0.], [ 0., 0., 0., 1.], [ 0., 0., -1., 0.]]) >>> matrix_power(q, 2) # = -np.eye(4) array([[-1., 0., 0., 0.], [ 0., -1., 0., 0.], [ 0., 0., -1., 0.], [ 0., 0., 0., -1.]])
doc_24836
Return True if either the real or the imaginary part of x is a NaN, and False otherwise.
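A short doctest-style sketch:

>>> import cmath
>>> cmath.isnan(complex(float('nan'), 0.0))
True
>>> cmath.isnan(1 + 2j)
False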
doc_24837
html.entities.html5 A dictionary that maps HTML5 named character references 1 to the equivalent Unicode character(s), e.g. html5['gt;'] == '>'. Note that the trailing semicolon is included in the name (e.g. 'gt;'), however some of the names are accepted by the standard even without the semicolon: in this case the name is present with and without the ';'. See also html.unescape(). New in version 3.3. html.entities.entitydefs A dictionary mapping XHTML 1.0 entity definitions to their replacement text in ISO Latin-1. html.entities.name2codepoint A dictionary that maps HTML entity names to the Unicode code points. html.entities.codepoint2name A dictionary that maps Unicode code points to HTML entity names. Footnotes 1 See https://www.w3.org/TR/html5/syntax.html#named-character-references
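A short doctest-style sketch of the four mappings:

>>> from html.entities import html5, name2codepoint, codepoint2name
>>> html5['gt;']
'>'
>>> html5['gt']        # some names are also accepted without the semicolon
'>'
>>> name2codepoint['amp']
38
>>> codepoint2name[38]
'amp'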
doc_24838
See Migration guide for more details. tf.compat.v1.app.flags.FlagNameConflictsWithMethodError
doc_24839
A string containing an encoded and serialized session dictionary.
doc_24840
See Migration guide for more details. tf.compat.v1.keras.regularizers.serialize tf.keras.regularizers.serialize( regularizer )
doc_24841
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
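A minimal sketch using the nested <component>__<parameter> syntax on a scikit-learn Pipeline (the step names and estimator choices are arbitrary):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()), ('clf', LogisticRegression())])
pipe.set_params(clf__C=10.0)          # reaches into the nested 'clf' step
print(pipe.get_params()['clf__C'])    # 10.0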
doc_24842
Return the drawstyle. See also set_drawstyle.
doc_24843
Convert a tagged representation back to the original type. Parameters value (Dict[str, Any]) – Return type Any
doc_24844
Return the values (min, max) that are mapped to the colormap limits.
doc_24845
Find the horizontal edges of an image using the Scharr transform. Parameters image2-D array Image to process. mask2-D array, optional An optional mask to limit the application to a certain area. Note that pixels surrounding masked regions are also masked to prevent masked regions from affecting the result. Returns output2-D array The Scharr edge map. Notes We use the following kernel:

  3   10   3
  0    0   0
 -3  -10  -3

References 1 D. Kroon, 2009, Short Paper University Twente, Numerical Optimization of Kernel Based Image Derivatives.
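Assuming this is skimage.filters.scharr_h, a minimal usage sketch on a built-in sample image:

from skimage import data, filters

image = data.camera()             # built-in 512x512 sample image
edges = filters.scharr_h(image)   # horizontal edge map, same shape as input
print(edges.shape)                # (512, 512)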
doc_24846
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
doc_24847
set the current error message set_error(error_msg) -> None SDL maintains an internal error message. This message will usually be given to you when pygame.error() is raised, so this function will rarely be needed.
doc_24848
Render object to a LaTeX tabular, longtable, or nested table. Requires \usepackage{booktabs}. The output can be copy/pasted into a main LaTeX document or read from an external file with \input{table.tex}. Changed in version 1.0.0: Added caption and label arguments. Changed in version 1.2.0: Added position argument, changed meaning of caption argument. Parameters buf:str, Path or StringIO-like, optional, default None Buffer to write to. If None, the output is returned as a string. columns:list of label, optional The subset of columns to write. Writes all columns by default. col_space:int, optional The minimum width of each column. header:bool or list of str, default True Write out the column names. If a list of strings is given, it is assumed to be aliases for the column names. index:bool, default True Write row names (index). na_rep:str, default ‘NaN’ Missing data representation. formatters:list of functions or dict of {str: function}, optional Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List must be of length equal to the number of columns. float_format:one-parameter function or str, optional, default None Formatter for floating point numbers. For example float_format="%.2f" and float_format="{:0.2f}".format will both result in 0.1234 being formatted as 0.12. sparsify:bool, optional Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row. By default, the value will be read from the config module. index_names:bool, default True Prints the names of the indexes. bold_rows:bool, default False Make the row labels bold in the output. column_format:str, optional The columns format as specified in LaTeX table format e.g. ‘rcl’ for 3 columns. By default, ‘l’ will be used for all columns except columns of numbers, which default to ‘r’. longtable:bool, optional By default, the value will be read from the pandas config module. Use a longtable environment instead of tabular. Requires adding a usepackage{longtable} to your LaTeX preamble. escape:bool, optional By default, the value will be read from the pandas config module. When set to False prevents from escaping latex special characters in column names. encoding:str, optional A string representing the encoding to use in the output file, defaults to ‘utf-8’. decimal:str, default ‘.’ Character recognized as decimal separator, e.g. ‘,’ in Europe. multicolumn:bool, default True Use multicolumn to enhance MultiIndex columns. The default will be read from the config module. multicolumn_format:str, default ‘l’ The alignment for multicolumns, similar to column_format The default will be read from the config module. multirow:bool, default False Use multirow to enhance MultiIndex rows. Requires adding a usepackage{multirow} to your LaTeX preamble. Will print centered labels (instead of top-aligned) across the contained rows, separating groups via clines. The default will be read from the pandas config module. caption:str or tuple, optional Tuple (full_caption, short_caption), which results in \caption[short_caption]{full_caption}; if a single string is passed, no short caption will be set. New in version 1.0.0. Changed in version 1.2.0: Optionally allow caption to be a tuple (full_caption, short_caption). label:str, optional The LaTeX label to be placed inside \label{} in the output. This is used with \ref{} in the main .tex file. New in version 1.0.0. 
position:str, optional The LaTeX positional argument for tables, to be placed after \begin{} in the output. New in version 1.2.0. Returns str or None If buf is None, returns the result as a string. Otherwise returns None. See also Styler.to_latex Render a DataFrame to LaTeX with conditional formatting. DataFrame.to_string Render a DataFrame to a console-friendly tabular output. DataFrame.to_html Render a DataFrame as an HTML table. Examples >>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'], ... mask=['red', 'purple'], ... weapon=['sai', 'bo staff'])) >>> print(df.to_latex(index=False)) \begin{tabular}{lll} \toprule name & mask & weapon \\ \midrule Raphael & red & sai \\ Donatello & purple & bo staff \\ \bottomrule \end{tabular}
doc_24849
Alternative to the errorhandler() decorator for attaching an error handler; more straightforward for non-decorator usage. Changelog New in version 0.7. Parameters code_or_exception (Union[Type[Exception], int]) – f (Callable[[Exception], Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None], Tuple[Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None]], Union[Headers, Dict[str, Union[str, List[str], Tuple[str, ...]]], List[Tuple[str, Union[str, List[str], Tuple[str, ...]]]]]], Tuple[Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None]], int], Tuple[Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None]], int, Union[Headers, Dict[str, Union[str, List[str], Tuple[str, ...]]], List[Tuple[str, Union[str, List[str], Tuple[str, ...]]]]]], WSGIApplication]]) – Return type None
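A minimal sketch, assuming this is the Flask/Quart-style app.register_error_handler method (the handler body and the 404 code are illustrative):
from flask import Flask  # assumption: the Flask variant of this API

app = Flask(__name__)

def handle_not_found(error):
    # Any of the return types listed above works; here a (body, status) tuple.
    return {"error": "not found"}, 404

# Equivalent to decorating handle_not_found with @app.errorhandler(404).
app.register_error_handler(404, handle_not_found)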
doc_24850
Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters rasterizedbool
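A minimal sketch of the typical use, rasterizing one heavyweight artist inside an otherwise vector figure (the data and file name are illustrative):
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
line, = ax.plot(np.random.rand(10000))
line.set_rasterized(True)  # store this artist as a bitmap in vector output
fig.savefig("plot.pdf")    # only matters for vector backends such as PDF or SVG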
doc_24851
tf.estimator.DNNEstimator( head, hidden_units, feature_columns, model_dir=None, optimizer='Adagrad', activation_fn=tf.nn.relu, dropout=None, config=None, warm_start_from=None, batch_norm=False ) Example: sparse_feature_a = sparse_column_with_hash_bucket(...) sparse_feature_b = sparse_column_with_hash_bucket(...) sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a, ...) sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b, ...) estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256]) # Or estimator using the ProximalAdagradOptimizer optimizer with # regularization. estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256], optimizer=tf.compat.v1.train.ProximalAdagradOptimizer( learning_rate=0.1, l1_regularization_strength=0.001 )) # Or estimator using an optimizer with a learning rate decay. estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256], optimizer=lambda: tf.keras.optimizers.Adam( learning_rate=tf.compat.v1.train.exponential_decay( learning_rate=0.1, global_step=tf.compat.v1.train.get_global_step(), decay_steps=10000, decay_rate=0.96))) # Or estimator with warm-starting from a previous checkpoint. estimator = tf.estimator.DNNEstimator( head=tf.estimator.MultiLabelHead(n_classes=3), feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb], hidden_units=[1024, 512, 256], warm_start_from="/path/to/checkpoint/dir") # Input builders def input_fn_train(): # Returns tf.data.Dataset of (x, y) tuple where y represents label's class # index. pass def input_fn_eval(): # Returns tf.data.Dataset of (x, y) tuple where y represents label's class # index. pass def input_fn_predict(): # Returns tf.data.Dataset of (x, None) tuple. pass estimator.train(input_fn=input_fn_train) metrics = estimator.evaluate(input_fn=input_fn_eval) predictions = estimator.predict(input_fn=input_fn_predict) Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss and predicted output are determined by the specified head. Args head A _Head instance constructed with a method such as tf.contrib.estimator.multi_label_head. hidden_units Iterable of the number of hidden units per layer. All layers are fully connected. Ex. [64, 32] means the first layer has 64 nodes and the second one has 32. feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from _FeatureColumn. model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. optimizer An instance of tf.keras.optimizers.* used to train the model.
Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to Adagrad optimizer. activation_fn Activation function applied to each layer. If None, will use tf.nn.relu. dropout When not None, the probability we will drop out a given coordinate. config RunConfig object to configure the runtime settings. warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. batch_norm Whether to use batch normalization after each hidden layer. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config export_savedmodel model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of the directory containing evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample).
Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir.
For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of strings, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work.
See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then the rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If the batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None. max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations.
On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iterations, since the first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saves. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
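A minimal input_fn sketch matching the (features, labels) contract described above (the feature name "x" and the toy values are illustrative):
import tensorflow as tf

def input_fn_train():
    # Dataset of (features, labels); features is a dict of name -> Tensor.
    features = {"x": tf.constant([[1.0], [2.0], [3.0]])}
    labels = tf.constant([0, 1, 0])
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)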
doc_24852
Predict class or regression value for X. For a classification model, the predicted class for each sample in X is returned. For a regression model, the predicted value based on X is returned. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. check_inputbool, default=True Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing. Returns yarray-like of shape (n_samples,) or (n_samples, n_outputs) The predicted classes, or the predicted values.
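For instance, with DecisionTreeClassifier, one of the estimators exposing this method:
>>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> clf = DecisionTreeClassifier(random_state=0).fit(X, y)
>>> clf.predict(X[:2])
array([0, 0])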
doc_24853
See Migration guide for more details. tf.compat.v1.raw_ops.Greater tf.raw_ops.Greater( x, y, name=None ) Note: math.greater supports broadcasting. More about broadcasting here Example: x = tf.constant([5, 4, 6]) y = tf.constant([5, 2, 5]) tf.math.greater(x, y) ==> [False, True, True] x = tf.constant([5, 4, 6]) y = tf.constant([5]) tf.math.greater(x, y) ==> [False, False, True] Args x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor of type bool.
doc_24854
See Migration guide for more details. tf.compat.v1.raw_ops.FakeQueue tf.raw_ops.FakeQueue( resource, name=None ) Args resource A Tensor of type resource. name A name for the operation (optional). Returns A Tensor of type mutable string.
doc_24855
Make an iterator that filters elements from data returning only those that have a corresponding element in selectors that evaluates to True. Stops when either the data or selectors iterables has been exhausted. Roughly equivalent to: def compress(data, selectors): # compress('ABCDEF', [1,0,1,0,1,1]) --> A C E F return (d for d, s in zip(data, selectors) if s) New in version 3.1.
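The example from the comment above, run directly:
>>> from itertools import compress
>>> list(compress('ABCDEF', [1, 0, 1, 0, 1, 1]))
['A', 'C', 'E', 'F']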
doc_24856
Set the artist's clip Bbox. Parameters clipboxBbox
doc_24857
Return True if the stream can be read from. If False, read() will raise OSError.
doc_24858
Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate.
doc_24859
The name of this lookup, used to identify it on parsing query expressions. It cannot contain the string "__".
doc_24860
Get a copy of the iterator in its current state. Examples >>> x = np.arange(10) >>> y = x + 1 >>> it = np.nditer([x, y]) >>> next(it) (array(0), array(1)) >>> it2 = it.copy() >>> next(it2) (array(1), array(2))
doc_24861
Upgrade an existing transport-based connection to TLS. Return a new transport instance that the protocol must start using immediately after the await. The transport instance passed to the start_tls method should never be used again. Parameters: transport and protocol: instances that methods like create_server() and create_connection() return. sslcontext: a configured instance of SSLContext. server_side: pass True when a server-side connection is being upgraded (like the one created by create_server()). server_hostname: sets or overrides the host name that the target server’s certificate will be matched against. ssl_handshake_timeout: (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default). New in version 3.7.
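A hedged sketch of the client-side upgrade path; the host, port and the bare asyncio.Protocol are illustrative, and real code would typically negotiate something like STARTTLS over the plain connection first:
import asyncio
import ssl

async def upgrade(host, port):
    loop = asyncio.get_running_loop()
    # Plain TCP connection first.
    transport, protocol = await loop.create_connection(asyncio.Protocol, host, port)
    ctx = ssl.create_default_context()
    new_transport = await loop.start_tls(transport, protocol, ctx,
                                         server_hostname=host)
    # Use only new_transport from here on; the old transport must not be reused.
    return new_transport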
doc_24862
See Migration guide for more details. tf.compat.v1.raw_ops.Equal tf.raw_ops.Equal( x, y, incompatible_shape_error=True, name=None ) Note: Equal supports broadcasting. More about broadcasting here x = tf.constant([2, 4]) y = tf.constant(2) tf.math.equal(x, y) ==> array([True, False]) x = tf.constant([2, 4]) y = tf.constant([2, 4]) tf.math.equal(x, y) ==> array([True, True]) Args x A Tensor. y A Tensor. Must have the same type as x. incompatible_shape_error An optional bool. Defaults to True. name A name for the operation (optional). Returns A Tensor of type bool.
doc_24863
Return a property attribute. fget is a function for getting an attribute value. fset is a function for setting an attribute value. fdel is a function for deleting an attribute value. And doc creates a docstring for the attribute. A typical use is to define a managed attribute x: class C: def __init__(self): self._x = None def getx(self): return self._x def setx(self, value): self._x = value def delx(self): del self._x x = property(getx, setx, delx, "I'm the 'x' property.") If c is an instance of C, c.x will invoke the getter, c.x = value will invoke the setter and del c.x the deleter. If given, doc will be the docstring of the property attribute. Otherwise, the property will copy fget’s docstring (if it exists). This makes it possible to create read-only properties easily using property() as a decorator: class Parrot: def __init__(self): self._voltage = 100000 @property def voltage(self): """Get the current voltage.""" return self._voltage The @property decorator turns the voltage() method into a “getter” for a read-only attribute with the same name, and it sets the docstring for voltage to “Get the current voltage.” A property object has getter, setter, and deleter methods usable as decorators that create a copy of the property with the corresponding accessor function set to the decorated function. This is best explained with an example: class C: def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x This code is exactly equivalent to the first example. Be sure to give the additional functions the same name as the original property (x in this case). The returned property object also has the attributes fget, fset, and fdel corresponding to the constructor arguments. Changed in version 3.5: The docstrings of property objects are now writeable.
doc_24864
tf.experimental.numpy.divide( x1, x2 ) Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.divide.
doc_24865
Return a new partial object which when called will behave like func called with the positional arguments args and keyword arguments keywords. If more arguments are supplied to the call, they are appended to args. If additional keyword arguments are supplied, they extend and override keywords. Roughly equivalent to: def partial(func, /, *args, **keywords): def newfunc(*fargs, **fkeywords): newkeywords = {**keywords, **fkeywords} return func(*args, *fargs, **newkeywords) newfunc.func = func newfunc.args = args newfunc.keywords = keywords return newfunc The partial() is used for partial function application which “freezes” some portion of a function’s arguments and/or keywords resulting in a new object with a simplified signature. For example, partial() can be used to create a callable that behaves like the int() function where the base argument defaults to two: >>> from functools import partial >>> basetwo = partial(int, base=2) >>> basetwo.__doc__ = 'Convert base 2 string to an int.' >>> basetwo('10010') 18
doc_24866
Create a new Surface from a string buffer. fromstring(string, size, format, flipped=False) -> Surface This function takes arguments similar to pygame.image.tostring(). The size argument is a pair of numbers representing the width and height. Once the new Surface is created you can destroy the string buffer. The size and format of the image must compute to the exact same size as the passed string buffer. Otherwise an exception will be raised. See the pygame.image.frombuffer() method for a potentially faster way to transfer images into pygame.
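A minimal sketch (the pixel data is illustrative):
import pygame

width, height = 2, 2
raw = b'\xff\x00\x00' * (width * height)  # four red RGB pixels, 3 bytes each
surface = pygame.image.fromstring(raw, (width, height), 'RGB')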
doc_24867
tf.keras.layers.Convolution3DTranspose Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.Conv3DTranspose, tf.compat.v1.keras.layers.Convolution3DTranspose tf.keras.layers.Conv3DTranspose( filters, kernel_size, strides=(1, 1, 1), padding='valid', output_padding=None, data_format=None, dilation_rate=(1, 1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs ) The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(128, 128, 128, 3) for a 128x128x128 volume with 3 channels if data_format="channels_last". Arguments filters Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions. strides An integer or tuple/list of 3 integers, specifying the strides of the convolution along the depth, height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. padding one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input. output_padding An integer or tuple/list of 3 integers, specifying the amount of padding along the depth, height, and width. Can be a single integer to specify the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to None (default), the output shape is inferred. data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". dilation_rate an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. activation Activation function to use. If you don't specify anything, no activation is applied (see keras.activations). use_bias Boolean, whether the layer uses a bias vector. kernel_initializer Initializer for the kernel weights matrix. bias_initializer Initializer for the bias vector. kernel_regularizer Regularizer function applied to the kernel weights matrix (see keras.regularizers).
bias_regularizer Regularizer function applied to the bias vector (see keras.regularizers). activity_regularizer Regularizer function applied to the output of the layer (its "activation") (see keras.regularizers). kernel_constraint Constraint function applied to the kernel matrix (see keras.constraints). bias_constraint Constraint function applied to the bias vector (see keras.constraints). Input shape: 5D tensor with shape: (batch_size, channels, depth, rows, cols) if data_format='channels_first' or 5D tensor with shape: (batch_size, depth, rows, cols, channels) if data_format='channels_last'. Output shape: 5D tensor with shape: (batch_size, filters, new_depth, new_rows, new_cols) if data_format='channels_first' or 5D tensor with shape: (batch_size, new_depth, new_rows, new_cols, filters) if data_format='channels_last'. depth and rows and cols values might have changed due to padding. If output_padding is specified: new_depth = ((depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] + output_padding[0]) new_rows = ((rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] + output_padding[1]) new_cols = ((cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] + output_padding[2]) Returns A tensor of rank 5 representing activation(conv3dtranspose(inputs, kernel) + bias). Raises ValueError if padding is "causal". ValueError when both strides > 1 and dilation_rate > 1. References: A guide to convolution arithmetic for deep learning Deconvolutional Networks
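A minimal usage sketch (shapes illustrative): with padding='same' and strides=2, each spatial dimension doubles.
import tensorflow as tf

layer = tf.keras.layers.Conv3DTranspose(filters=4, kernel_size=3,
                                        strides=2, padding='same')
x = tf.random.normal((1, 8, 8, 8, 1))  # (batch, depth, rows, cols, channels)
y = layer(x)
print(y.shape)  # (1, 16, 16, 16, 4)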
doc_24868
The week ordinal of the year. Deprecated since version 1.1.0. Series.dt.weekofyear and Series.dt.week have been deprecated. Please use Series.dt.isocalendar().week instead.
doc_24869
See Migration guide for more details. tf.compat.v1.raw_ops.MatrixSetDiagV2 tf.raw_ops.MatrixSetDiagV2( input, diagonal, k, name=None ) Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal. input has r+1 dimensions [I, J, ..., L, M, N]. When k is scalar or k[0] == k[1], diagonal has r dimensions [I, J, ..., L, max_diag_len]. Otherwise, it has r+1 dimensions [I, J, ..., L, num_diags, max_diag_len]. num_diags is the number of diagonals, num_diags = k[1] - k[0] + 1. max_diag_len is the longest diagonal in the range [k[0], k[1]], max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0)) The output is a tensor of rank r+1 with dimensions [I, J, ..., L, M, N]. If k is scalar or k[0] == k[1]: output[i, j, ..., l, m, n] = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1] input[i, j, ..., l, m, n] ; otherwise Otherwise, output[i, j, ..., l, m, n] = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1] input[i, j, ..., l, m, n] ; otherwise where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0). For example: # The main diagonal. input = np.array([[[7, 7, 7, 7], # Input shape: (2, 3, 4) [7, 7, 7, 7], [7, 7, 7, 7]], [[7, 7, 7, 7], [7, 7, 7, 7], [7, 7, 7, 7]]]) diagonal = np.array([[1, 2, 3], # Diagonal shape: (2, 3) [4, 5, 6]]) tf.matrix_set_diag(input, diagonal) ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4) [7, 2, 7, 7], [7, 7, 3, 7]], [[4, 7, 7, 7], [7, 5, 7, 7], [7, 7, 6, 7]]] # A superdiagonal (per batch). tf.matrix_set_diag(input, diagonal, k = 1) ==> [[[7, 1, 7, 7], # Output shape: (2, 3, 4) [7, 7, 2, 7], [7, 7, 7, 3]], [[7, 4, 7, 7], [7, 7, 5, 7], [7, 7, 7, 6]]] # A band of diagonals. diagonals = np.array([[[1, 2, 3], # Diagonal shape: (2, 2, 3) [4, 5, 0]], [[6, 1, 2], [3, 4, 0]]]) tf.matrix_set_diag(input, diagonals, k = (-1, 0)) ==> [[[1, 7, 7, 7], # Output shape: (2, 3, 4) [4, 2, 7, 7], [0, 5, 3, 7]], [[6, 7, 7, 7], [3, 1, 7, 7], [7, 4, 2, 7]]] Args input A Tensor. Rank r+1, where r >= 1. diagonal A Tensor. Must have the same type as input. Rank r when k is an integer or k[0] == k[1]. Otherwise, it has rank r+1. k >= 1. k A Tensor of type int32. Diagonal offset(s). Positive value means superdiagonal, 0 refers to the main diagonal, and negative value means subdiagonals. k can be a single integer (for a single diagonal) or a pair of integers specifying the low and high ends of a matrix band. k[0] must not be larger than k[1]. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
doc_24870
Rename the file or directory src to dst. If dst is a directory, OSError will be raised. If dst exists and is a file, it will be replaced silently if the user has permission. The operation may fail if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). This function can support specifying src_dir_fd and/or dst_dir_fd to supply paths relative to directory descriptors. Raises an auditing event os.rename with arguments src, dst, src_dir_fd, dst_dir_fd. New in version 3.3. Changed in version 3.6: Accepts a path-like object for src and dst.
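For example (file names hypothetical):
>>> import os
>>> os.rename('draft.txt', 'final.txt')  # replaces an existing 'final.txt' file where permitted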
doc_24871
Loads templates from a Python dictionary. This is useful for testing. This loader takes a dictionary of templates as its first argument: TEMPLATES = [{ 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'OPTIONS': { 'loaders': [ ('django.template.loaders.locmem.Loader', { 'index.html': 'content here', }), ], }, }] This loader is disabled by default.
doc_24872
Convert a packed IP address (a bytes-like object of some number of bytes) to its standard, family-specific string representation (for example, '7.10.0.5' or '5aef:2b::8'). inet_ntop() is useful when a library or network protocol returns an object of type struct in_addr (similar to inet_ntoa()) or struct in6_addr. Supported values for address_family are currently AF_INET and AF_INET6. If the bytes object packed_ip is not the correct length for the specified address family, ValueError will be raised. OSError is raised for errors from the call to inet_ntop(). Availability: Unix (maybe not all platforms), Windows. Changed in version 3.4: Windows support added Changed in version 3.5: Writable bytes-like object is now accepted.
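For example, unpacking the IPv4 address from the description above:
>>> import socket
>>> socket.inet_ntop(socket.AF_INET, b'\x07\x0a\x00\x05')
'7.10.0.5'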
doc_24873
Restore the internal state of the masked array, for pickling purposes. state is typically the output of __getstate__, and is a 5-tuple: the class name, a tuple giving the shape of the data, a typecode for the data, a binary string for the data, and a binary string for the mask.
doc_24874
Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Make a function raise a SkipTest exception if a given condition is true. If the condition is a callable, it is used at runtime to dynamically make the decision. This is useful for tests that may require costly imports, to delay the cost until the test suite is actually executed. Parameters skip_conditionbool or callable Flag to determine whether to skip the decorated test. msgstr, optional Message to give on raising a SkipTest exception. Default is None. Returns decoratorfunction Decorator which, when applied to a function, causes SkipTest to be raised when skip_condition is True, and the function to be called normally otherwise. Notes The decorator itself is decorated with the nose.tools.make_decorator function in order to transmit the function name and various other metadata.
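A usage sketch, on NumPy versions that still ship the deprecated numpy.testing.dec alias (the test body is illustrative):
import sys
from numpy.testing import dec  # deprecated; see the note above

@dec.skipif(sys.platform == 'win32', "POSIX-only test")
def test_unix_paths():
    assert '/'.join(['a', 'b']) == 'a/b'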
doc_24875
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceStridedSliceAssign tf.raw_ops.ResourceStridedSliceAssign( ref, begin, end, strides, value, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, name=None ) The values of value are assigned to the positions in the variable ref that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice. NOTE this op currently does not support broadcasting and so value's shape must be exactly the shape produced by the slice of ref. Args ref A Tensor of type resource. begin A Tensor. Must be one of the following types: int32, int64. end A Tensor. Must have the same type as begin. strides A Tensor. Must have the same type as begin. value A Tensor. begin_mask An optional int. Defaults to 0. end_mask An optional int. Defaults to 0. ellipsis_mask An optional int. Defaults to 0. new_axis_mask An optional int. Defaults to 0. shrink_axis_mask An optional int. Defaults to 0. name A name for the operation (optional). Returns The created Operation.
doc_24876
Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback remove_callback
doc_24877
tf.experimental.numpy.trace( a, offset=0, axis1=0, axis2=1, dtype=None ) Unsupported arguments: out. See the NumPy documentation for numpy.trace.
doc_24878
Returns the alignment requirements of a ctypes type. obj_or_type must be a ctypes type or instance.
doc_24879
The day of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="D") ... ) >>> datetime_series 0 2000-01-01 1 2000-01-02 2 2000-01-03 dtype: datetime64[ns] >>> datetime_series.dt.day 0 1 1 2 2 3 dtype: int64
doc_24880
A boolean which is True for server-side sockets and False for client-side sockets. New in version 3.2.
doc_24881
Return a Numpy representation of the DataFrame. Warning We recommend using DataFrame.to_numpy() instead. Only the values in the DataFrame will be returned, the axes labels will be removed. Returns numpy.ndarray The values of the DataFrame. See also DataFrame.to_numpy Recommended alternative to this method. DataFrame.index Retrieve the index labels. DataFrame.columns Retrieve the column names. Notes The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes (even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if you are not dealing with the blocks. e.g. If the dtypes are float16 and float32, dtype will be upcast to float32. If dtypes are int32 and uint8, dtype will be upcast to int32. By numpy.find_common_type() convention, mixing int64 and uint64 will result in a float64 dtype. Examples A DataFrame where all columns are the same type (e.g., int64) results in an array of the same type. >>> df = pd.DataFrame({'age': [ 3, 29], ... 'height': [94, 170], ... 'weight': [31, 115]}) >>> df age height weight 0 3 94 31 1 29 170 115 >>> df.dtypes age int64 height int64 weight int64 dtype: object >>> df.values array([[ 3, 94, 31], [ 29, 170, 115]]) A DataFrame with mixed type columns (e.g., str/object, int64, float32) results in an ndarray of the broadest type that accommodates these mixed types (e.g., object). >>> df2 = pd.DataFrame([('parrot', 24.0, 'second'), ... ('lion', 80.5, 1), ... ('monkey', np.nan, None)], ... columns=('name', 'max_speed', 'rank')) >>> df2.dtypes name object max_speed float64 rank object dtype: object >>> df2.values array([['parrot', 24.0, 'second'], ['lion', 80.5, 1], ['monkey', nan, None]], dtype=object)
doc_24882
Bases: object [Deprecated] Notes Deprecated since version 3.4. apply_aspect(position=None)[source] get_viewlim_mode()[source] set_viewlim_mode(mode)[source] update_viewlim()[source] [Deprecated] Notes Deprecated since version 3.4.
doc_24883
Update displayed image. This method can be overridden or extended in subclasses and plugins to react to image changes.
doc_24884
Load a model from a github repo or a local directory. Note: Loading a model is the typical use case, but this can also be used for loading other objects such as tokenizers, loss functions, etc. If source is 'github', repo_or_dir is expected to be of the form repo_owner/repo_name[:tag_name] with an optional tag/branch. If source is 'local', repo_or_dir is expected to be a path to a local directory. Parameters repo_or_dir (string) – repo name (repo_owner/repo_name[:tag_name]), if source = 'github'; or a path to a local directory, if source = 'local'. model (string) – the name of a callable (entrypoint) defined in the repo/dir’s hubconf.py. *args (optional) – the corresponding args for callable model. source (string, optional) – 'github' | 'local'. Specifies how repo_or_dir is to be interpreted. Default is 'github'. force_reload (bool, optional) – whether to force a fresh download of the github repo unconditionally. Does not have any effect if source = 'local'. Default is False. verbose (bool, optional) – If False, mute messages about hitting local caches. Note that the message about first download cannot be muted. Does not have any effect if source = 'local'. Default is True. **kwargs (optional) – the corresponding kwargs for callable model. Returns The output of the model callable when called with the given *args and **kwargs. Example >>> # from a github repo >>> repo = 'pytorch/vision' >>> model = torch.hub.load(repo, 'resnet50', pretrained=True) >>> # from a local directory >>> path = '/some/local/path/pytorch/vision' >>> model = torch.hub.load(path, 'resnet50', pretrained=True)
doc_24885
Multiply other by self, and return a new masked array.
doc_24886
Should return True if viewing obj is permitted, False otherwise. If obj is None, should return True or False to indicate whether viewing of objects of this type is permitted in general (e.g., False will be interpreted as meaning that the current user is not permitted to view any object of this type). The default implementation returns True if the user has either the “change” or “view” permission.
doc_24887
xml.sax.saxutils.escape(data, entities={}) Escape '&', '<', and '>' in a string of data. You can escape other strings of data by passing a dictionary as the optional entities parameter. The keys and values must all be strings; each key will be replaced with its corresponding value. The characters '&', '<' and '>' are always escaped, even if entities is provided. xml.sax.saxutils.unescape(data, entities={}) Unescape '&amp;', '&lt;', and '&gt;' in a string of data. You can unescape other strings of data by passing a dictionary as the optional entities parameter. The keys and values must all be strings; each key will be replaced with its corresponding value. '&amp;', '&lt;', and '&gt;' are always unescaped, even if entities is provided. xml.sax.saxutils.quoteattr(data, entities={}) Similar to escape(), but also prepares data to be used as an attribute value. The return value is a quoted version of data with any additional required replacements. quoteattr() will select a quote character based on the content of data, attempting to avoid encoding any quote characters in the string. If both single- and double-quote characters are already in data, the double-quote characters will be encoded and data will be wrapped in double-quotes. The resulting string can be used directly as an attribute value: >>> print("<element attr=%s>" % quoteattr("ab ' cd \" ef")) <element attr="ab ' cd &quot; ef"> This function is useful when generating attribute values for HTML or any SGML using the reference concrete syntax. class xml.sax.saxutils.XMLGenerator(out=None, encoding='iso-8859-1', short_empty_elements=False) This class implements the ContentHandler interface by writing SAX events back into an XML document. In other words, using an XMLGenerator as the content handler will reproduce the original document being parsed. out should be a file-like object which will default to sys.stdout. encoding is the encoding of the output stream which defaults to 'iso-8859-1'. short_empty_elements controls the formatting of elements that contain no content: if False (the default) they are emitted as a pair of start/end tags, if set to True they are emitted as a single self-closed tag. New in version 3.2: The short_empty_elements parameter. class xml.sax.saxutils.XMLFilterBase(base) This class is designed to sit between an XMLReader and the client application’s event handlers. By default, it does nothing but pass requests up to the reader and events on to the handlers unmodified, but subclasses can override specific methods to modify the event stream or the configuration requests as they pass through. xml.sax.saxutils.prepare_input_source(source, base='') This function takes an input source and an optional base URL and returns a fully resolved InputSource object ready for reading. The input source can be given as a string, a file-like object, or an InputSource object; parsers will use this function to implement the polymorphic source argument to their parse() method.
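For example (the extra entity mapping for the double quote is illustrative):
>>> from xml.sax.saxutils import escape, unescape
>>> escape('Tom & "Jerry" <cartoon>', {'"': '&quot;'})
'Tom &amp; &quot;Jerry&quot; &lt;cartoon&gt;'
>>> unescape('Tom &amp; &lt;cartoon&gt;')
'Tom & <cartoon>'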
doc_24888
sklearn.calibration.calibration_curve(y_true, y_prob, *, normalize=False, n_bins=5, strategy='uniform') [source] Compute true and predicted probabilities for a calibration curve. The method assumes the inputs come from a binary classifier, and discretizes the [0, 1] interval into bins. Calibration curves may also be referred to as reliability diagrams. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) True targets. y_probarray-like of shape (n_samples,) Probabilities of the positive class. normalizebool, default=False Whether y_prob needs to be normalized into the [0, 1] interval, i.e. is not a proper probability. If True, the smallest value in y_prob is linearly mapped onto 0 and the largest one onto 1. n_binsint, default=5 Number of bins to discretize the [0, 1] interval. A bigger number requires more data. Bins with no samples (i.e. without corresponding values in y_prob) will not be returned, thus the returned arrays may have fewer than n_bins values. strategy{‘uniform’, ‘quantile’}, default=’uniform’ Strategy used to define the widths of the bins. uniform The bins have identical widths. quantile The bins have the same number of samples and depend on y_prob. Returns prob_truendarray of shape (n_bins,) or smaller The proportion of samples whose class is the positive class, in each bin (fraction of positives). prob_predndarray of shape (n_bins,) or smaller The mean predicted probability in each bin. References Alexandru Niculescu-Mizil and Rich Caruana (2005) Predicting Good Probabilities With Supervised Learning, in Proceedings of the 22nd International Conference on Machine Learning (ICML). See section 4 (Qualitative Analysis of Predictions). Examples >>> import numpy as np >>> from sklearn.calibration import calibration_curve >>> y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1]) >>> y_pred = np.array([0.1, 0.2, 0.3, 0.4, 0.65, 0.7, 0.8, 0.9, 1.]) >>> prob_true, prob_pred = calibration_curve(y_true, y_pred, n_bins=3) >>> prob_true array([0. , 0.5, 1. ]) >>> prob_pred array([0.2 , 0.525, 0.85 ]) Examples using sklearn.calibration.calibration_curve Comparison of Calibration of Classifiers Probability Calibration curves
doc_24889
Recursively descend the directory tree named by dir, compiling all .py files along the way. Return a true value if all the files compiled successfully, and a false value otherwise. The maxlevels parameter is used to limit the depth of the recursion; it defaults to sys.getrecursionlimit(). If ddir is given, it is prepended to the path to each file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If force is true, modules are re-compiled even if the timestamps are up to date. If rx is given, its search method is called on the complete path to each file considered for compilation, and if it returns a true value, the file is skipped. If quiet is False or 0 (the default), the filenames and other information are printed to standard out. Set to 1, only errors are printed. Set to 2, all output is suppressed. If legacy is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their PEP 3147 locations and names, which allows byte-code files from multiple versions of Python to coexist. optimize specifies the optimization level for the compiler. It is passed to the built-in compile() function. Also accepts a sequence of optimization levels, which leads to multiple compilations of one .py file in one call. The argument workers specifies how many workers are used to compile files in parallel. The default is to not use multiple workers. If the platform can’t use multiple workers and the workers argument is given, then sequential compilation will be used as a fallback. If workers is 0, the number of cores in the system is used. If workers is lower than 0, a ValueError will be raised. invalidation_mode should be a member of the py_compile.PycInvalidationMode enum and controls how the generated pycs are invalidated at runtime. The stripdir, prependdir and limit_sl_dest arguments correspond to the -s, -p and -e options described above. They may be specified as str, bytes or os.PathLike. If hardlink_dupes is true and two .pyc files with different optimization level have the same content, use hard links to consolidate duplicate files. Changed in version 3.2: Added the legacy and optimize parameter. Changed in version 3.5: Added the workers parameter. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.6: Accepts a path-like object. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None. Changed in version 3.8: Setting workers to 0 now chooses the optimal number of cores. Changed in version 3.9: Added stripdir, prependdir, limit_sl_dest and hardlink_dupes arguments. Default value of maxlevels was changed from 10 to sys.getrecursionlimit().
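For example (directory name hypothetical):
import compileall

# Recompile everything up to two levels deep, printing errors only.
ok = compileall.compile_dir('mylib/', maxlevels=2, force=True, quiet=1)
# ok is true only if every .py file compiled successfully.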
doc_24890
If item is specified, sets the focus item to item. Otherwise, returns the current focus item, or ‘’ if there is none.
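A small sketch with ttk.Treeview (the widget setup and item text are illustrative):
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
tree = ttk.Treeview(root)
item = tree.insert('', 'end', text='first row')
tree.focus(item)     # set the focus item
print(tree.focus())  # -> the same item id, e.g. 'I001'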
doc_24891
class sklearn.ensemble.BaggingClassifier(base_estimator=None, n_estimators=10, *, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0) [source] A Bagging classifier. A Bagging classifier is an ensemble meta-estimator that fits base classifiers each on random subsets of the original dataset and then aggregates their individual predictions (either by voting or by averaging) to form a final prediction. Such a meta-estimator can typically be used as a way to reduce the variance of a black-box estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it. This algorithm encompasses several works from the literature. When random subsets of the dataset are drawn as random subsets of the samples, then this algorithm is known as Pasting [1]. If samples are drawn with replacement, then the method is known as Bagging [2]. When random subsets of the dataset are drawn as random subsets of the features, then the method is known as Random Subspaces [3]. Finally, when base estimators are built on subsets of both samples and features, then the method is known as Random Patches [4]. Read more in the User Guide. New in version 0.15. Parameters base_estimatorobject, default=None The base estimator to fit on random subsets of the dataset. If None, then the base estimator is a DecisionTreeClassifier. n_estimatorsint, default=10 The number of base estimators in the ensemble. max_samplesint or float, default=1.0 The number of samples to draw from X to train each base estimator (with replacement by default, see bootstrap for more details). If int, then draw max_samples samples. If float, then draw max_samples * X.shape[0] samples. max_featuresint or float, default=1.0 The number of features to draw from X to train each base estimator (without replacement by default, see bootstrap_features for more details). If int, then draw max_features features. If float, then draw max_features * X.shape[1] features. bootstrapbool, default=True Whether samples are drawn with replacement. If False, sampling without replacement is performed. bootstrap_featuresbool, default=False Whether features are drawn with replacement. oob_scorebool, default=False Whether to use out-of-bag samples to estimate the generalization error. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new ensemble. See the Glossary. New in version 0.17: warm_start constructor parameter. n_jobsint, default=None The number of jobs to run in parallel for both fit and predict. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. random_stateint, RandomState instance or None, default=None Controls the random resampling of the original dataset (sample wise and feature wise). If the base estimator accepts a random_state attribute, a different seed is generated for each instance in the ensemble. Pass an int for reproducible output across multiple function calls. See Glossary. verboseint, default=0 Controls the verbosity when fitting and predicting. Attributes base_estimator_estimator The base estimator from which the ensemble is grown. n_features_int The number of features when fit is performed. estimators_list of estimators The collection of fitted base estimators.
estimators_samples_list of arrays The subset of drawn samples for each base estimator. estimators_features_list of arrays The subset of drawn features for each base estimator. classes_ndarray of shape (n_classes,) The classes labels. n_classes_int or list The number of classes. oob_score_float Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True. oob_decision_function_ndarray of shape (n_samples, n_classes) Decision function computed with out-of-bag estimate on the training set. If n_estimators is small it might be possible that a data point was never left out during the bootstrap. In this case, oob_decision_function_ might contain NaN. This attribute exists only when oob_score is True. References 1 L. Breiman, “Pasting small votes for classification in large databases and on-line”, Machine Learning, 36(1), 85-103, 1999. 2 L. Breiman, “Bagging predictors”, Machine Learning, 24(2), 123-140, 1996. 3 T. Ho, “The random subspace method for constructing decision forests”, Pattern Analysis and Machine Intelligence, 20(8), 832-844, 1998. 4 G. Louppe and P. Geurts, “Ensembles on Random Patches”, Machine Learning and Knowledge Discovery in Databases, 346-361, 2012. Examples >>> from sklearn.svm import SVC >>> from sklearn.ensemble import BaggingClassifier >>> from sklearn.datasets import make_classification >>> X, y = make_classification(n_samples=100, n_features=4, ... n_informative=2, n_redundant=0, ... random_state=0, shuffle=False) >>> clf = BaggingClassifier(base_estimator=SVC(), ... n_estimators=10, random_state=0).fit(X, y) >>> clf.predict([[0, 0, 0, 0]]) array([1]) Methods decision_function(X) Average of the decision functions of the base classifiers. fit(X, y[, sample_weight]) Build a Bagging ensemble of estimators from the training set (X, y). get_params([deep]) Get parameters for this estimator. predict(X) Predict class for X. predict_log_proba(X) Predict class log-probabilities for X. predict_proba(X) Predict class probabilities for X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. decision_function(X) [source] Average of the decision functions of the base classifiers. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. Returns scorendarray of shape (n_samples, k) The decision function of the input samples. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Regression and binary classification are special cases with k == 1, otherwise k==n_classes. property estimators_samples_ The subset of drawn samples for each base estimator. Returns a dynamically generated list of indices identifying the samples used for fitting each member of the ensemble, i.e., the in-bag samples. Note: the list is re-created at each call to the property in order to reduce the object memory footprint by not storing the sampling data. Thus fetching the property may be slower than expected. fit(X, y, sample_weight=None) [source] Build a Bagging ensemble of estimators from the training set (X, y). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The training input samples. Sparse matrices are accepted only if they are supported by the base estimator. yarray-like of shape (n_samples,) The target values (class labels in classification, real numbers in regression).
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, samples are equally weighted. Note that this is supported only if the base estimator supports sample weighting.

Returns
self : object

get_params(deep=True) [source]
Get parameters for this estimator.

Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict
Parameter names mapped to their values.

predict(X) [source]
Predict class for X. The predicted class of an input sample is computed as the class with the highest mean predicted probability. If base estimators do not implement a predict_proba method, the ensemble resorts to voting.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns
y : ndarray of shape (n_samples,)
The predicted classes.

predict_log_proba(X) [source]
Predict class log-probabilities for X. The predicted class log-probabilities of an input sample are computed as the log of the mean predicted class probabilities of the base estimators in the ensemble.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns
p : ndarray of shape (n_samples, n_classes)
The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.

predict_proba(X) [source]
Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the base estimators in the ensemble. If base estimators do not implement a predict_proba method, the ensemble resorts to voting, and the predicted class probabilities of an input sample represent the proportion of estimators predicting each class.

Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns
p : ndarray of shape (n_samples, n_classes)
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.

score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

Parameters
X : array-like of shape (n_samples, n_features)
Test samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

Returns
score : float
Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters
**params : dict
Estimator parameters.

Returns
self : estimator instance
Estimator instance.
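As a supplement to the doctest above, here is a minimal sketch of out-of-bag error estimation with a Random Patches-style ensemble; the dataset and hyperparameter values are illustrative choices, not taken from the original page:

from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Subsample both rows (max_samples) and columns (max_features), i.e. the
# "Random Patches" setting described above, with out-of-bag scoring on.
clf = BaggingClassifier(
    base_estimator=DecisionTreeClassifier(),
    n_estimators=50,
    max_samples=0.8,      # fraction of samples drawn per estimator
    max_features=0.5,     # fraction of features drawn per estimator
    oob_score=True,       # estimate generalization error on held-out samples
    random_state=0,
).fit(X, y)

print(clf.oob_score_)  # out-of-bag accuracy estimate

Because bootstrap=True by default, each estimator is fit on a resample of the rows, and the rows it never saw supply the out-of-bag estimate in oob_score_.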
doc_24892
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters
**params : dict
Estimator parameters.

Returns
self : estimator instance
Estimator instance.
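A minimal sketch of the <component>__<parameter> syntax on a nested object; the Pipeline steps and the value chosen here are hypothetical, picked only to illustrate the mechanism:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# <component>__<parameter>: route C=10.0 to the step named "clf".
pipe.set_params(clf__C=10.0)
print(pipe.get_params()["clf__C"])  # 10.0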
doc_24893
tf.compat.v1.nn.sampled_softmax_loss(
    weights, biases, labels, inputs, num_sampled, num_classes, num_true=1,
    sampled_values=None, remove_accidental_hits=True, partition_strategy='mod',
    name='sampled_softmax_loss', seed=None
)

This is a faster way to train a softmax classifier over a huge number of classes.

This operation is for training only. It is generally an underestimate of the full softmax loss. A common use case is to use this method for training and to calculate the full softmax loss for evaluation or inference. In this case, you must set partition_strategy="div" for the two losses to be consistent, as in the following example:

if mode == "train":
    loss = tf.nn.sampled_softmax_loss(
        weights=weights,
        biases=biases,
        labels=labels,
        inputs=inputs,
        ...,
        partition_strategy="div")
elif mode == "eval":
    logits = tf.matmul(inputs, tf.transpose(weights))
    logits = tf.nn.bias_add(logits, biases)
    labels_one_hot = tf.one_hot(labels, n_classes)
    loss = tf.nn.softmax_cross_entropy_with_logits(
        labels=labels_one_hot,
        logits=logits)

See our Candidate Sampling Algorithms Reference (pdf). Also see Section 3 of (Jean et al., 2014) for the math.

Args
weights : A Tensor of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly sharded) class embeddings.
biases : A Tensor of shape [num_classes]. The class biases.
labels : A Tensor of type int64 and shape [batch_size, num_true]. The target classes. Note that this format differs from the labels argument of nn.softmax_cross_entropy_with_logits.
inputs : A Tensor of shape [batch_size, dim]. The forward activations of the input network.
num_sampled : An int. The number of classes to randomly sample per batch.
num_classes : An int. The number of possible classes.
num_true : An int. The number of target classes per training example.
sampled_values : A tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. If None, defaults to log_uniform_candidate_sampler.
remove_accidental_hits : A bool. Whether to remove "accidental hits", where a sampled class equals one of the target classes. Default is True.
partition_strategy : A string specifying the partitioning strategy, relevant if len(weights) > 1. Currently "div" and "mod" are supported. Default is "mod". See tf.nn.embedding_lookup for more details.
name : A name for the operation (optional).
seed : Random seed for candidate sampling. Defaults to None, which does not set the op-level random seed for candidate sampling.

Returns
A batch_size 1-D tensor of per-example sampled softmax losses.

References:
On Using Very Large Target Vocabulary for Neural Machine Translation: Jean et al., 2014 (pdf)
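To make the tensor shapes concrete, here is a minimal self-contained sketch, assuming TensorFlow 2.x with the compat.v1 API available; all of the sizes below are illustrative, not from the original page:

import tensorflow as tf

num_classes, dim, batch_size, num_sampled = 1000, 64, 32, 10

weights = tf.Variable(tf.random.normal([num_classes, dim]))  # class embeddings
biases = tf.Variable(tf.zeros([num_classes]))                # class biases
inputs = tf.random.normal([batch_size, dim])                 # forward activations
labels = tf.random.uniform(                                  # [batch_size, num_true]
    [batch_size, 1], maxval=num_classes, dtype=tf.int64)

loss = tf.compat.v1.nn.sampled_softmax_loss(
    weights=weights, biases=biases, labels=labels, inputs=inputs,
    num_sampled=num_sampled, num_classes=num_classes,
    partition_strategy="div")  # "div" keeps training consistent with a full softmax at eval

print(loss.shape)  # (32,): one sampled softmax loss per example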
doc_24894
Return the value of field as a string where possible. field must be an integer.
doc_24895
tf.keras.metrics.binary_crossentropy, tf.losses.binary_crossentropy, tf.metrics.binary_crossentropy

Compat aliases for migration. See Migration guide for more details: tf.compat.v1.keras.losses.binary_crossentropy, tf.compat.v1.keras.metrics.binary_crossentropy

tf.keras.losses.binary_crossentropy(
    y_true, y_pred, from_logits=False, label_smoothing=0
)

Standalone usage:

y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
assert loss.shape == (2,)
loss.numpy()
# array([0.916, 0.714], dtype=float32)

Args
y_true : Ground truth values. shape = [batch_size, d0, .. dN].
y_pred : The predicted values. shape = [batch_size, d0, .. dN].
from_logits : Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.
label_smoothing : Float in [0, 1]. If > 0, smooth the labels.

Returns
Binary crossentropy loss value. shape = [batch_size, d0, .. dN-1].
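A short companion sketch for the from_logits path; the raw scores below are made up purely for illustration:

import tensorflow as tf

y_true = [[0.0, 1.0], [0.0, 0.0]]
logits = [[2.0, -1.0], [-3.0, 4.0]]  # raw scores, not probabilities

# from_logits=True applies the sigmoid internally, which is more
# numerically stable than squashing the scores yourself first.
loss = tf.keras.losses.binary_crossentropy(y_true, logits, from_logits=True)
print(loss.shape)  # (2,): the last axis is reduced to one loss per sample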
doc_24896
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters
**params : dict
Estimator parameters.

Returns
self : estimator instance
Estimator instance.
doc_24897
class ast.SetComp(elt, generators)
class ast.GeneratorExp(elt, generators)
class ast.DictComp(key, value, generators)

List and set comprehensions, generator expressions, and dictionary comprehensions. elt (or key and value) is a single node representing the part that will be evaluated for each item. generators is a list of comprehension nodes.

>>> print(ast.dump(ast.parse('[x for x in numbers]', mode='eval'), indent=4))
Expression(
    body=ListComp(
        elt=Name(id='x', ctx=Load()),
        generators=[
            comprehension(
                target=Name(id='x', ctx=Store()),
                iter=Name(id='numbers', ctx=Load()),
                ifs=[],
                is_async=0)]))
>>> print(ast.dump(ast.parse('{x: x**2 for x in numbers}', mode='eval'), indent=4))
Expression(
    body=DictComp(
        key=Name(id='x', ctx=Load()),
        value=BinOp(
            left=Name(id='x', ctx=Load()),
            op=Pow(),
            right=Constant(value=2)),
        generators=[
            comprehension(
                target=Name(id='x', ctx=Store()),
                iter=Name(id='numbers', ctx=Load()),
                ifs=[],
                is_async=0)]))
>>> print(ast.dump(ast.parse('{x for x in numbers}', mode='eval'), indent=4))
Expression(
    body=SetComp(
        elt=Name(id='x', ctx=Load()),
        generators=[
            comprehension(
                target=Name(id='x', ctx=Store()),
                iter=Name(id='numbers', ctx=Load()),
                ifs=[],
                is_async=0)]))
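The is_async field shown in the dumps above distinguishes ordinary comprehensions from async ones; a small sketch (the function and variable names are arbitrary):

>>> import ast
>>> tree = ast.parse("async def f(items):\n    return [x async for x in items]")
>>> tree.body[0].body[0].value.generators[0].is_async
1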
doc_24898
Return True if there are maxsize items in the queue. If the queue was initialized with maxsize=0 (the default), then full() never returns True.
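A minimal sketch using the standard-library queue.Queue, which exposes the same full()/maxsize contract as described above:

from queue import Queue

bounded = Queue(maxsize=2)
bounded.put("a")
bounded.put("b")
print(bounded.full())  # True: maxsize items are enqueued

unbounded = Queue()    # maxsize=0: capacity is unbounded
unbounded.put("a")
print(unbounded.full())  # False, and it never becomes True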
doc_24899
Reads the file into a list of byte strings. It calls readline() until the file has been read to the end. It supports the optional size argument if the underlying stream's readline() supports it.

Parameters
size (Optional[int])

Return type
List[bytes]
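The owning class is not identified in this fragment; io.BytesIO is used below purely as a stand-in stream with the same readline()/readlines() contract:

import io

stream = io.BytesIO(b"first\nsecond\nthird\n")
lines = stream.readlines()
print(lines)  # [b'first\n', b'second\n', b'third\n']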