doc_28800 |
Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters
filestr or Path
A string naming the dump file. Changed in version 1.17.0: pathlib.Path objects are now accepted. | |
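A minimal sketch of round-tripping an array through dump (the temporary file name and array values are illustrative):

```python
import os
import pickle
import tempfile

import numpy as np

a = np.arange(6).reshape(2, 3)
path = os.path.join(tempfile.mkdtemp(), "arr.pkl")
a.dump(path)  # write a pickle of the array to the file

# read it back with pickle.load, as described above
with open(path, "rb") as f:
    b = pickle.load(f)
print(np.array_equal(a, b))  # True
```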
doc_28801 |
Add widget to plugin. Alternatively, Plugin’s __add__ method is overloaded to add widgets: plugin += Widget(...)
Widgets can adjust required or optional arguments of filter function or parameters for the plugin. This is specified by the Widget’s ptype. | |
doc_28802 |
Return unbiased skew over requested axis. Normalized by N-1. Parameters
axis:{index (0), columns (1)}
Axis for the function to be applied on.
skipna:bool, default True
Exclude NA/null values when computing the result.
level:int or level name, default None
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only:bool, default None
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series. **kwargs
Additional keyword arguments to be passed to the function. Returns
Series or DataFrame (if level specified) | |
doc_28803 | See Migration guide for more details. tf.compat.v1.FixedLenSequenceFeature, tf.compat.v1.io.FixedLenSequenceFeature
tf.io.FixedLenSequenceFeature(
shape, dtype, allow_missing=False, default_value=None
)
The resulting Tensor of parsing a single SequenceExample or Example has a static shape of [None] + shape and the specified dtype. The resulting Tensor of parsing a batch_size many Examples has a static shape of [batch_size, None] + shape and the specified dtype. The entries in the batch from different Examples will be padded with default_value to the maximum length present in the batch. To treat a sparse input as dense, provide allow_missing=True; otherwise, the parse functions will fail on any examples missing this feature. Fields:
shape: Shape of input data for dimension 2 and higher. First dimension is of variable length None.
dtype: Data type of input.
allow_missing: Whether to allow this feature to be missing from a feature list item. Is available only for parsing SequenceExample not for parsing Examples.
default_value: Scalar value to be used to pad multiple Examples to their maximum length. Irrelevant for parsing a single Example or SequenceExample. Defaults to "" for dtype string and 0 otherwise (optional).
Attributes
shape
dtype
allow_missing
default_value | |
doc_28804 |
Linear classifiers (SVM, logistic regression, etc.) with SGD training. This estimator implements regularized linear models with stochastic gradient descent (SGD) learning: the gradient of the loss is estimated each sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). SGD allows minibatch (online/out-of-core) learning via the partial_fit method. For best results using the default learning rate schedule, the data should have zero mean and unit variance. This implementation works with data represented as dense or sparse arrays of floating point values for the features. The model it fits can be controlled with the loss parameter; by default, it fits a linear support vector machine (SVM). The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared euclidean norm L2 or the absolute norm L1 or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection. Read more in the User Guide. Parameters
lossstr, default=’hinge’
The loss function to be used. Defaults to ‘hinge’, which gives a linear SVM. The possible options are ‘hinge’, ‘log’, ‘modified_huber’, ‘squared_hinge’, ‘perceptron’, or a regression loss: ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. The ‘log’ loss gives logistic regression, a probabilistic classifier. ‘modified_huber’ is another smooth loss that brings tolerance to outliers as well as probability estimates. ‘squared_hinge’ is like hinge but is quadratically penalized. ‘perceptron’ is the linear loss used by the perceptron algorithm. The other losses are designed for regression but can be useful in classification as well; see SGDRegressor for a description. More details about the losses formulas can be found in the User Guide.
penalty{‘l2’, ‘l1’, ‘elasticnet’}, default=’l2’
The penalty (aka regularization term) to be used. Defaults to ‘l2’ which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’.
alphafloat, default=0.0001
Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’.
l1_ratiofloat, default=0.15
The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’.
fit_interceptbool, default=True
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.
max_iterint, default=1000
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19.
tolfloat, default=1e-3
The stopping criterion. If it is not None, training will stop when (loss > best_loss - tol) for n_iter_no_change consecutive epochs. New in version 0.19.
shufflebool, default=True
Whether or not the training data should be shuffled after each epoch.
verboseint, default=0
The verbosity level.
epsilonfloat, default=0.1
Epsilon in the epsilon-insensitive loss functions; only if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. For ‘huber’, determines the threshold at which it becomes less important to get the prediction exactly right. For epsilon-insensitive, any differences between the current prediction and the correct label are ignored if they are less than this threshold.
n_jobsint, default=None
The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance, default=None
Used for shuffling the data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary.
learning_ratestr, default=’optimal’
The learning rate schedule: ‘constant’: eta = eta0
‘optimal’: eta = 1.0 / (alpha * (t + t0)) where t0 is chosen by a heuristic proposed by Leon Bottou. ‘invscaling’: eta = eta0 / pow(t, power_t)
‘adaptive’: eta = eta0, as long as the training loss keeps decreasing. Each time n_iter_no_change consecutive epochs fail to decrease the training loss by tol or fail to increase validation score by tol if early_stopping is True, the current learning rate is divided by 5. New in version 0.20: Added ‘adaptive’ option
eta0double, default=0.0
The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. The default value is 0.0 as eta0 is not used by the default schedule ‘optimal’.
power_tdouble, default=0.5
The exponent for inverse scaling learning rate [default 0.5].
early_stoppingbool, default=False
Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a stratified fraction of training data as validation and terminate training when validation score returned by the score method is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20: Added ‘early_stopping’ option
validation_fractionfloat, default=0.1
The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20: Added ‘validation_fraction’ option
n_iter_no_changeint, default=5
Number of iterations with no improvement to wait before early stopping. New in version 0.20: Added ‘n_iter_no_change’ option
class_weightdict, {class_label: weight} or “balanced”, default=None
Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled. If a dynamic learning rate is used, the learning rate is adapted depending on the number of samples already seen. Calling fit resets this counter, while partial_fit will result in increasing the existing counter.
averagebool or int, default=False
When set to True, computes the averaged SGD weights across all updates and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. Attributes
coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features)
Weights assigned to the features.
intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,)
Constants in decision function.
n_iter_int
The actual number of iterations before reaching the stopping criterion. For multiclass fits, it is the maximum over every binary fit.
loss_function_concrete LossFunction
classes_array of shape (n_classes,)
t_int
Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also
sklearn.svm.LinearSVC
Linear support vector classification.
LogisticRegression
Logistic regression.
Perceptron
Inherits from SGDClassifier. Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None). Examples >>> import numpy as np
>>> from sklearn.linear_model import SGDClassifier
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.pipeline import make_pipeline
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> Y = np.array([1, 1, 2, 2])
>>> # Always scale the input. The most convenient way is to use a pipeline.
>>> clf = make_pipeline(StandardScaler(),
... SGDClassifier(max_iter=1000, tol=1e-3))
>>> clf.fit(X, Y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('sgdclassifier', SGDClassifier())])
>>> print(clf.predict([[-0.8, -1]]))
[1]
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent.
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Perform one epoch of stochastic gradient descent on given samples.
predict(X) Predict class labels for samples in X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**kwargs) Set and validate the parameters of estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None) [source]
Fit linear model with Stochastic Gradient Descent. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Training data.
yndarray of shape (n_samples,)
Target values.
coef_initndarray of shape (n_classes, n_features), default=None
The initial coefficients to warm-start the optimization.
intercept_initndarray of shape (n_classes,), default=None
The initial intercept to warm-start the optimization.
sample_weightarray-like, shape (n_samples,), default=None
Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns
self :
Returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Perform one epoch of stochastic gradient descent on given samples. Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Subset of the training data.
yndarray of shape (n_samples,)
Subset of the target values.
classesndarray of shape (n_classes,), default=None
Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes.
sample_weightarray-like, shape (n_samples,), default=None
Weights applied to individual samples. If not provided, uniform weights are assumed. Returns
self :
Returns an instance of self.
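A minimal sketch of incremental training with partial_fit (toy data; note that classes must be supplied on the first call, even when that batch covers only a subset of the labels):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
y = np.array([1, 1, 2, 2])

clf = SGDClassifier(random_state=0)
# first call: pass the full set of classes up front
clf.partial_fit(X[:2], y[:2], classes=np.array([1, 2]))
# subsequent calls: classes may be omitted
clf.partial_fit(X[2:], y[2:])
print(clf.predict([[-0.8, -1.]])[0] in (1, 2))  # True
```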
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
property predict_log_proba
Log of probability estimates. This method is only available for log loss and modified Huber loss. When loss=”modified_huber”, probability estimates may be hard zeros and ones, so taking the logarithm is not possible. See predict_proba for details. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Input data for prediction. Returns
Tarray-like, shape (n_samples, n_classes)
Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_.
property predict_proba
Probability estimates. This method is only available for log loss and modified Huber loss. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan. Binary probability estimates for loss=”modified_huber” are given by (clip(decision_function(X), -1, 1) + 1) / 2. For other loss functions it is necessary to perform proper probability calibration by wrapping the classifier with CalibratedClassifierCV instead. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Input data for prediction. Returns
ndarray of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. References Zadrozny and Elkan, “Transforming classifier scores into multiclass probability estimates”, SIGKDD’02, http://www.research.ibm.com/people/z/zadrozny/kdd2002-Transf.pdf The justification for the formula in the loss=”modified_huber” case is in the appendix B in: http://jmlr.csail.mit.edu/papers/volume2/zhang02c/zhang02c.pdf
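A short sketch using the modified Huber loss, one of the two losses for which predict_proba is available (data and seed are illustrative); per-row probabilities sum to 1:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
y = np.array([0, 0, 1, 1])

clf = SGDClassifier(loss="modified_huber", random_state=0).fit(X, y)
proba = clf.predict_proba([[2., 2.]])
print(proba.shape)                    # (1, 2): one row, one column per class
print(np.isclose(proba.sum(), 1.0))  # True
```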
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**kwargs) [source]
Set and validate the parameters of estimator. Parameters
**kwargsdict
Estimator parameters. Returns
selfobject
Estimator instance.
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. | |
doc_28805 | Logs a message with level DEBUG on the root logger. The msg is the message format string, and the args are the arguments which are merged into msg using the string formatting operator. (Note that this means that you can use keywords in the format string, together with a single dictionary argument.) There are three keyword arguments in kwargs which are inspected: exc_info which, if it does not evaluate as false, causes exception information to be added to the logging message. If an exception tuple (in the format returned by sys.exc_info()) or an exception instance is provided, it is used; otherwise, sys.exc_info() is called to get the exception information. The second optional keyword argument is stack_info, which defaults to False. If true, stack information is added to the logging message, including the actual logging call. Note that this is not the same stack information as that displayed through specifying exc_info: The former is stack frames from the bottom of the stack up to the logging call in the current thread, whereas the latter is information about stack frames which have been unwound, following an exception, while searching for exception handlers. You can specify stack_info independently of exc_info, e.g. to just show how you got to a certain point in your code, even when no exceptions were raised. The stack frames are printed following a header line which says: Stack (most recent call last):
This mimics the Traceback (most recent call last): which is used when displaying exception frames. The third optional keyword argument is extra which can be used to pass a dictionary which is used to populate the __dict__ of the LogRecord created for the logging event with user-defined attributes. These custom attributes can then be used as you like. For example, they could be incorporated into logged messages. For example: FORMAT = '%(asctime)-15s %(clientip)s %(user)-8s %(message)s'
logging.basicConfig(format=FORMAT)
d = {'clientip': '192.168.0.1', 'user': 'fbloggs'}
logging.warning('Protocol problem: %s', 'connection reset', extra=d)
would print something like: 2006-02-08 22:20:02,165 192.168.0.1 fbloggs Protocol problem: connection reset
The keys in the dictionary passed in extra should not clash with the keys used by the logging system. (See the Formatter documentation for more information on which keys are used by the logging system.) If you choose to use these attributes in logged messages, you need to exercise some care. In the above example, for instance, the Formatter has been set up with a format string which expects ‘clientip’ and ‘user’ in the attribute dictionary of the LogRecord. If these are missing, the message will not be logged because a string formatting exception will occur. So in this case, you always need to pass the extra dictionary with these keys. While this might be annoying, this feature is intended for use in specialized circumstances, such as multi-threaded servers where the same code executes in many contexts, and interesting conditions which arise are dependent on this context (such as remote client IP address and authenticated user name, in the above example). In such circumstances, it is likely that specialized Formatters would be used with particular Handlers. Changed in version 3.2: The stack_info parameter was added. | |
doc_28806 | Return True for ignorable characters. The character ch is ignorable if ch is a space or tab, otherwise it is not ignorable. Used as a default for parameter charjunk in ndiff(). | |
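A minimal illustration of the default behavior described above:

```python
from difflib import IS_CHARACTER_JUNK

print(IS_CHARACTER_JUNK(" "))   # True: space is ignorable
print(IS_CHARACTER_JUNK("\t"))  # True: tab is ignorable
print(IS_CHARACTER_JUNK("x"))   # False: any other character is not
```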
doc_28807 |
Return the canvas width and height in display coords. | |
doc_28808 |
Parameters
position{'left', 'right'} | |
doc_28809 | The native integral thread ID of this thread. This is a non-negative integer, or None if the thread has not been started. See the get_native_id() function. This represents the Thread ID (TID) as assigned to the thread by the OS (kernel). Its value may be used to uniquely identify this particular thread system-wide (until the thread terminates, after which the value may be recycled by the OS). Note Similar to Process IDs, Thread IDs are only valid (guaranteed unique system-wide) from the time the thread is created until the thread has been terminated. Availability: Requires get_native_id() function. New in version 3.8. | |
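A small sketch (Python 3.8+, where get_native_id() is available): native_id is None before the thread starts and an OS-assigned integer afterwards:

```python
import threading

t = threading.Thread(target=lambda: None)
print(t.native_id)  # None: the thread has not been started
t.start()
t.join()
print(isinstance(t.native_id, int))  # True: TID assigned by the OS
```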
doc_28810 | Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe. By default if a process is not the creator of the queue then on exit it will attempt to join the queue’s background thread. The process can call cancel_join_thread() to make join_thread() do nothing. | |
doc_28811 |
Return a fixed frequency DatetimeIndex. Returns the range of equally spaced time points (where the difference between any two adjacent points is specified by the given frequency) such that they all satisfy start <[=] x <[=] end, where the first one and the last one are, resp., the first and last time points in that range that fall on the boundary of freq (if given as a frequency string) or that are valid for freq (if given as a pandas.tseries.offsets.DateOffset). (If exactly one of start, end, or freq is not specified, this missing parameter can be computed given periods, the number of timesteps in the range. See the note below.) Parameters
start:str or datetime-like, optional
Left bound for generating dates.
end:str or datetime-like, optional
Right bound for generating dates.
periods:int, optional
Number of periods to generate.
freq:str or DateOffset, default ‘D’
Frequency strings can have multiples, e.g. ‘5H’. See here for a list of frequency aliases.
tz:str or tzinfo, optional
Time zone name for returning localized DatetimeIndex, for example ‘Asia/Hong_Kong’. By default, the resulting DatetimeIndex is timezone-naive.
normalize:bool, default False
Normalize start/end dates to midnight before generating date range.
name:str, default None
Name of the resulting DatetimeIndex.
closed:{None, ‘left’, ‘right’}, optional
Make the interval closed with respect to the given frequency to the ‘left’, ‘right’, or both sides (None, the default). Deprecated since version 1.4.0: Argument closed has been deprecated to standardize boundary inputs. Use inclusive instead, to set each bound as closed or open.
inclusive:{“both”, “neither”, “left”, “right”}, default “both”
Include boundaries; Whether to set each bound as closed or open. New in version 1.4.0. **kwargs
For compatibility. Has no effect on the result. Returns
rng:DatetimeIndex
See also DatetimeIndex
An immutable container for datetimes. timedelta_range
Return a fixed frequency TimedeltaIndex. period_range
Return a fixed frequency PeriodIndex. interval_range
Return a fixed frequency IntervalIndex. Notes Of the four parameters start, end, periods, and freq, exactly three must be specified. If freq is omitted, the resulting DatetimeIndex will have periods linearly spaced elements between start and end (closed on both sides). To learn more about the frequency strings, please see this link. Examples Specifying the values The next four examples generate the same DatetimeIndex, but vary the combination of start, end and periods. Specify start and end, with the default daily frequency.
>>> pd.date_range(start='1/1/2018', end='1/08/2018')
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify start and periods, the number of periods (days).
>>> pd.date_range(start='1/1/2018', periods=8)
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify end and periods, the number of periods (days).
>>> pd.date_range(end='1/1/2018', periods=8)
DatetimeIndex(['2017-12-25', '2017-12-26', '2017-12-27', '2017-12-28',
'2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
Specify start, end, and periods; the frequency is generated automatically (linearly spaced).
>>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3)
DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00',
'2018-04-27 00:00:00'],
dtype='datetime64[ns]', freq=None)
Other Parameters Change the freq (frequency) to 'M' (month end frequency).
>>> pd.date_range(start='1/1/2018', periods=5, freq='M')
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31', '2018-04-30',
'2018-05-31'],
dtype='datetime64[ns]', freq='M')
Multiples are allowed
>>> pd.date_range(start='1/1/2018', periods=5, freq='3M')
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
freq can also be specified as an Offset object.
>>> pd.date_range(start='1/1/2018', periods=5, freq=pd.offsets.MonthEnd(3))
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
Specify tz to set the timezone.
>>> pd.date_range(start='1/1/2018', periods=5, tz='Asia/Tokyo')
DatetimeIndex(['2018-01-01 00:00:00+09:00', '2018-01-02 00:00:00+09:00',
'2018-01-03 00:00:00+09:00', '2018-01-04 00:00:00+09:00',
'2018-01-05 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq='D')
inclusive controls whether to include start and end that are on the boundary. The default, “both”, includes boundary points on either end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive="both")
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
Use inclusive='left' to exclude end if it falls on the boundary.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='left')
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03'],
dtype='datetime64[ns]', freq='D')
Use inclusive='right' to exclude start if it falls on the boundary, and similarly inclusive='neither' will exclude both start and end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='right')
DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D') | |
doc_28812 |
Return a (tight) bounding box of the figure in inches. Note that FigureBase differs from all other artists, which return their Bbox in pixels. Artists that have artist.set_in_layout(False) are not included in the bbox. Parameters
rendererRendererBase subclass
renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer())
bbox_extra_artistslist of Artist or None
List of artists to include in the tight bounding box. If None (default), then all artist children of each Axes are included in the tight bounding box. Returns
BboxBase
containing the bounding box (in figure inches). | |
doc_28813 | See Migration guide for more details. tf.compat.v1.raw_ops.StringLength
tf.raw_ops.StringLength(
input, unit='BYTE', name=None
)
Computes the length of each string given in the input tensor.
strings = tf.constant(['Hello','TensorFlow', '\U0001F642'])
tf.strings.length(strings).numpy() # default counts bytes
array([ 5, 10, 4], dtype=int32)
tf.strings.length(strings, unit="UTF8_CHAR").numpy()
array([ 5, 10, 1], dtype=int32)
Args
input A Tensor of type string. The strings for which to compute the length for each element.
unit An optional string from: "BYTE", "UTF8_CHAR". Defaults to "BYTE". The unit that is counted to compute string length. One of: "BYTE" (for the number of bytes in each string) or "UTF8_CHAR" (for the number of UTF-8 encoded Unicode code points in each string). Results are undefined if unit=UTF8_CHAR and the input strings do not contain structurally valid UTF-8.
name A name for the operation (optional).
Returns A Tensor of type int32. | |
doc_28814 | Get the memory usage in bytes of the tracemalloc module used to store traces of memory blocks. Return an int. | |
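A small sketch: once tracing is started, the module's own bookkeeping overhead is a positive integer (the allocation loop is only there to give it something to trace):

```python
import tracemalloc

tracemalloc.start()
data = [bytes(1000) for _ in range(100)]  # allocations to trace
overhead = tracemalloc.get_tracemalloc_memory()
print(isinstance(overhead, int))  # True
print(overhead > 0)               # True
tracemalloc.stop()
```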
doc_28815 | sklearn.metrics.consensus_score(a, b, *, similarity='jaccard') [source]
The similarity of two sets of biclusters. Similarity between individual biclusters is computed. Then the best matching between sets is found using the Hungarian algorithm. The final score is the sum of similarities divided by the size of the larger set. Read more in the User Guide. Parameters
a(rows, columns)
Tuple of row and column indicators for a set of biclusters.
b(rows, columns)
Another set of biclusters like a.
similarity‘jaccard’ or callable, default=’jaccard’
May be the string “jaccard” to use the Jaccard coefficient, or any function that takes four arguments, each of which is a 1d indicator vector: (a_rows, a_columns, b_rows, b_columns). References Hochreiter, Bodenhofer, et al., 2010. FABIA: factor analysis for bicluster acquisition.
Examples using sklearn.metrics.consensus_score
A demo of the Spectral Co-Clustering algorithm
A demo of the Spectral Biclustering algorithm | |
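A minimal sketch (indicator matrices are illustrative) comparing a set of two biclusters with itself; since every bicluster matches itself with Jaccard similarity 1, the consensus score is 1.0:

```python
import numpy as np
from sklearn.metrics import consensus_score

# two biclusters over a 4x4 data matrix, given as row/column indicators
rows = np.array([[True, True, False, False],
                 [False, False, True, True]])
cols = np.array([[True, True, False, False],
                 [False, False, True, True]])
print(consensus_score((rows, cols), (rows, cols)))  # 1.0
```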
doc_28816 |
Scanned page. This image of printed text is useful for demonstrations requiring uneven background illumination. Returns
page(191, 384) uint8 ndarray
Page image. | |
doc_28817 | The name that will be used by default for the relation from a related object back to this one. The default is <model_name>_set. This option also sets related_query_name. As the reverse name for a field should be unique, be careful if you intend to subclass your model. To work around name collisions, part of the name should contain '%(app_label)s' and '%(model_name)s', which are replaced respectively by the name of the application the model is in, and the name of the model, both lowercased. See the paragraph on related names for abstract models. | |
doc_28818 | Alias for torch.asinh(). | |
doc_28819 | See torch.argmin() | |
doc_28820 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_28821 |
An N-dimensional tuple of current coordinates. Examples >>> x = np.arange(6).reshape(2, 3)
>>> fl = x.flat
>>> fl.coords
(0, 0)
>>> next(fl)
0
>>> fl.coords
(0, 1) | |
doc_28822 | Statistic on memory allocations. Snapshot.statistics() returns a list of Statistic instances. See also the StatisticDiff class.
count
Number of memory blocks (int).
size
Total size of memory blocks in bytes (int).
traceback
Traceback where the memory block was allocated, Traceback instance. | |
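A short sketch of how these attributes are read in practice (the allocations are made purely for illustration):

```python
import tracemalloc

tracemalloc.start()
data = [bytearray(1000) for _ in range(100)]  # allocate some memory blocks
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

stats = snapshot.statistics("lineno")  # list of Statistic instances
top = stats[0]
count, size, tb = top.count, top.size, top.traceback
```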
doc_28823 | Represents a parsed URL. This behaves like a regular tuple but also has some extra attributes that give further insight into the URL. Create new instance of _URLTuple(scheme, netloc, path, query, fragment) Parameters
scheme (str) –
netloc (str) –
path (str) –
query (str) –
fragment (str) –
encode(charset='utf-8', errors='replace')
Encodes the URL to a tuple made out of bytes. The charset is only used for the path, query and fragment. Parameters
charset (str) –
errors (str) – Return type
werkzeug.urls.BytesURL | |
doc_28824 |
Add a toolitem to the container. This method must be implemented per backend. The callback associated with the button click event must be exactly self.trigger_tool(name). Parameters
namestr
Name of the tool to add, this gets used as the tool's ID and as the default label of the buttons.
groupstr
Name of the group that this tool belongs to.
positionint
Position of the tool within its group, if -1 it goes at the end.
imagestr
Filename of the image for the button or None.
descriptionstr
Description of the tool, used for the tooltips.
togglebool
True : The button is a toggle (change the pressed/unpressed state between consecutive clicks).
False : The button is a normal button (returns to unpressed state after release). | |
doc_28825 | Module containing the models, e.g. <module 'django.contrib.admin.models'
from 'django/contrib/admin/models.py'>. It may be None if the application doesn’t contain a models module. Note that the database related signals such as pre_migrate and post_migrate are only emitted for applications that have a models module. | |
doc_28826 |
Scalar method identical to the corresponding array attribute. Please see ndarray.byteswap. | |
doc_28827 | class sklearn.dummy.DummyClassifier(*, strategy='prior', random_state=None, constant=None) [source]
DummyClassifier is a classifier that makes predictions using simple rules. This classifier is useful as a simple baseline to compare with other (real) classifiers. Do not use it for real problems. Read more in the User Guide. New in version 0.13. Parameters
strategy{“stratified”, “most_frequent”, “prior”, “uniform”, “constant”}, default=”prior”
Strategy to use to generate predictions. “stratified”: generates predictions by respecting the training set’s class distribution. “most_frequent”: always predicts the most frequent label in the training set. “prior”: always predicts the class that maximizes the class prior (like “most_frequent”) and predict_proba returns the class prior. “uniform”: generates predictions uniformly at random.
“constant”: always predicts a constant label that is provided by the user. This is useful for metrics that evaluate a non-majority class. Changed in version 0.24: The default value of strategy has changed to “prior” in version 0.24.
random_stateint, RandomState instance or None, default=None
Controls the randomness to generate the predictions when strategy='stratified' or strategy='uniform'. Pass an int for reproducible output across multiple function calls. See Glossary.
constantint or str or array-like of shape (n_outputs,)
The explicit constant as predicted by the “constant” strategy. This parameter is useful only for the “constant” strategy. Attributes
classes_ndarray of shape (n_classes,) or list of such arrays
Class labels for each output.
n_classes_int or list of int
Number of labels for each output.
class_prior_ndarray of shape (n_classes,) or list of such arrays
Probability of each class for each output.
n_outputs_int
Number of outputs.
sparse_output_bool
True if the array returned from predict is to be in sparse CSC format. Is automatically set to True if the input y is passed in sparse format. Examples >>> import numpy as np
>>> from sklearn.dummy import DummyClassifier
>>> X = np.array([-1, 1, 1, 1])
>>> y = np.array([0, 1, 1, 1])
>>> dummy_clf = DummyClassifier(strategy="most_frequent")
>>> dummy_clf.fit(X, y)
DummyClassifier(strategy='most_frequent')
>>> dummy_clf.predict(X)
array([1, 1, 1, 1])
>>> dummy_clf.score(X, y)
0.75
Methods
fit(X, y[, sample_weight]) Fit the random classifier.
get_params([deep]) Get parameters for this estimator.
predict(X) Perform classification on test vectors X.
predict_log_proba(X) Return log probability estimates for the test vectors X.
predict_proba(X) Return probability estimates for the test vectors X.
score(X, y[, sample_weight]) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit the random classifier. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Perform classification on test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Test data. Returns
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
Predicted target values for X.
predict_log_proba(X) [source]
Return log probability estimates for the test vectors X. Parameters
X{array-like, object with finite length or shape}
Training data, requires length = n_samples Returns
Pndarray of shape (n_samples, n_classes) or list of such arrays
Returns the log probability of the sample for each class in the model, where classes are ordered arithmetically for each output.
predict_proba(X) [source]
Return probability estimates for the test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Test data. Returns
Pndarray of shape (n_samples, n_classes) or list of such arrays
Returns the probability of the sample for each class in the model, where classes are ordered arithmetically, for each output.
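As a small illustration of predict_proba under the default “prior” strategy (the data here is invented for this sketch), every row equals the empirical class prior of the training labels:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.zeros((4, 1))          # features are ignored by the dummy classifier
y = np.array([0, 1, 1, 1])    # empirical class prior is [0.25, 0.75]

clf = DummyClassifier(strategy="prior").fit(X, y)
proba = clf.predict_proba(X)  # each row repeats the class prior
```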
score(X, y, sample_weight=None) [source]
Returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
XNone or array-like of shape (n_samples, n_features)
Test samples. Passing None as test samples gives the same result as passing real test samples, since DummyClassifier operates independently of the sampled observations.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_28828 | Finder for modules declared in the Windows registry. This class implements the importlib.abc.MetaPathFinder ABC. Only class methods are defined by this class to alleviate the need for instantiation. New in version 3.3. Deprecated since version 3.6: Use site configuration instead. Future versions of Python may not enable this finder by default. | |
doc_28829 |
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters
levelfloat | |
doc_28830 |
Bases: tuple Represents the information on a composite element of a composite char. Create new instance of CompositePart(name, dx, dy) dx
x-displacement of the part from the origin.
dy
y-displacement of the part from the origin.
name
Name of the part, e.g. 'acute'. | |
doc_28831 | Alias for torch.div(). | |
doc_28832 |
The nanoseconds of the datetime. Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="ns")
... )
>>> datetime_series
0 2000-01-01 00:00:00.000000000
1 2000-01-01 00:00:00.000000001
2 2000-01-01 00:00:00.000000002
dtype: datetime64[ns]
>>> datetime_series.dt.nanosecond
0 0
1 1
2 2
dtype: int64 | |
doc_28833 |
Check whether the provided array or dtype is of an integer dtype. Unlike in is_any_int_dtype, timedelta64 instances will return False. The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also considered as integer by this function. Parameters
arr_or_dtype:array-like or dtype
The array or dtype to check. Returns
boolean
Whether or not the array or dtype is of an integer dtype and not an instance of timedelta64. Examples
>>> is_integer_dtype(str)
False
>>> is_integer_dtype(int)
True
>>> is_integer_dtype(float)
False
>>> is_integer_dtype(np.uint64)
True
>>> is_integer_dtype('int8')
True
>>> is_integer_dtype('Int8')
True
>>> is_integer_dtype(pd.Int8Dtype)
True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
False
>>> is_integer_dtype(np.array(['a', 'b']))
False
>>> is_integer_dtype(pd.Series([1, 2]))
True
>>> is_integer_dtype(np.array([], dtype=np.timedelta64))
False
>>> is_integer_dtype(pd.Index([1, 2.])) # float
False | |
doc_28834 | tf.compat.v1.profiler.write_op_log(
graph, log_dir, op_log=None, run_meta=None, add_trace=True
)
The API also assigns ops in tf.compat.v1.trainable_variables() an op type called '_trainable_variables'. The API also logs 'flops' statistics for ops with op.RegisterStatistics() defined. flops calculation depends on Tensor shapes defined in 'graph', which might not be complete. 'run_meta', if provided, completes the shape information with best effort.
Args
graph tf.Graph. If None and eager execution is not enabled, use default graph.
log_dir directory to write the log file.
op_log (Optional) OpLogProto proto to be written. If not provided, a new one is created.
run_meta (Optional) RunMetadata proto that helps flops computation using run time shape information.
add_trace Whether to add python code trace information. Used to support "code" view. | |
doc_28835 |
Return whether the artist uses clipping. | |
doc_28836 | Alias to SIGCHLD. | |
doc_28837 |
Fill NaN values using an interpolation method. Please note that only method='linear' is supported for DataFrame/Series with a MultiIndex. Parameters
method:str, default ‘linear’
Interpolation technique to use. One of: ‘linear’: Ignore the index and treat the values as equally spaced. This is the only method supported on MultiIndexes. ‘time’: Works on daily and higher resolution data to interpolate given length of interval. ‘index’, ‘values’: use the actual numerical values of the index. ‘pad’: Fill in NaNs using existing values. ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘spline’, ‘barycentric’, ‘polynomial’: Passed to scipy.interpolate.interp1d. These methods use the numerical values of the index. Both ‘polynomial’ and ‘spline’ require that you also specify an order (int), e.g. df.interpolate(method='polynomial', order=5). ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’, ‘akima’, ‘cubicspline’: Wrappers around the SciPy interpolation methods of similar names. See Notes. ‘from_derivatives’: Refers to scipy.interpolate.BPoly.from_derivatives which replaces ‘piecewise_polynomial’ interpolation method in scipy 0.18.
axis:{{0 or ‘index’, 1 or ‘columns’, None}}, default None
Axis to interpolate along.
limit:int, optional
Maximum number of consecutive NaNs to fill. Must be greater than 0.
inplace:bool, default False
Update the data in place if possible.
limit_direction:{{‘forward’, ‘backward’, ‘both’}}, Optional
Consecutive NaNs will be filled in this direction. If limit is specified:
If ‘method’ is ‘pad’ or ‘ffill’, ‘limit_direction’ must be ‘forward’. If ‘method’ is ‘backfill’ or ‘bfill’, ‘limit_direction’ must be ‘backward’. If ‘limit’ is not specified:
If ‘method’ is ‘backfill’ or ‘bfill’, the default is ‘backward’; otherwise the default is ‘forward’. Changed in version 1.1.0: raises ValueError if limit_direction is ‘forward’ or ‘both’ and method is ‘backfill’ or ‘bfill’; raises ValueError if limit_direction is ‘backward’ or ‘both’ and method is ‘pad’ or ‘ffill’.
limit_area:{{None, ‘inside’, ‘outside’}}, default None
If limit is specified, consecutive NaNs will be filled with this restriction. None: No fill restriction. ‘inside’: Only fill NaNs surrounded by valid values (interpolate). ‘outside’: Only fill NaNs outside valid values (extrapolate).
downcast:optional, ‘infer’ or None, defaults to None
Downcast dtypes if possible.
**kwargs:optional
Keyword arguments to pass on to the interpolating function. Returns
Series or DataFrame or None
Returns the same object type as the caller, interpolated at some or all NaN values or None if inplace=True. See also fillna
Fill missing values using different methods. scipy.interpolate.Akima1DInterpolator
Piecewise cubic polynomials (Akima interpolator). scipy.interpolate.BPoly.from_derivatives
Piecewise polynomial in the Bernstein basis. scipy.interpolate.interp1d
Interpolate a 1-D function. scipy.interpolate.KroghInterpolator
Interpolate polynomial (Krogh interpolator). scipy.interpolate.PchipInterpolator
PCHIP 1-d monotonic cubic interpolation. scipy.interpolate.CubicSpline
Cubic spline data interpolator. Notes The ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’ methods are wrappers around the respective SciPy implementations of similar names. These use the actual numerical values of the index. For more information on their behavior, see the SciPy documentation and SciPy tutorial. Examples Filling in NaN in a Series via linear interpolation.
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
Filling in NaN in a Series by padding, but filling at most two consecutive NaN at a time.
>>> s = pd.Series([np.nan, "single_one", np.nan,
... "fill_two_more", np.nan, np.nan, np.nan,
... 4.71, np.nan])
>>> s
0 NaN
1 single_one
2 NaN
3 fill_two_more
4 NaN
5 NaN
6 NaN
7 4.71
8 NaN
dtype: object
>>> s.interpolate(method='pad', limit=2)
0 NaN
1 single_one
2 single_one
3 fill_two_more
4 fill_two_more
5 fill_two_more
6 NaN
7 4.71
8 4.71
dtype: object
Filling in NaN in a Series via polynomial interpolation or splines: Both ‘polynomial’ and ‘spline’ methods require that you also specify an order (int).
>>> s = pd.Series([0, 2, np.nan, 8])
>>> s.interpolate(method='polynomial', order=2)
0 0.000000
1 2.000000
2 4.666667
3 8.000000
dtype: float64
Fill the DataFrame forward (that is, going down) along each column using linear interpolation. Note how the last entry in column ‘a’ is interpolated differently, because there is no entry after it to use for interpolation. Note how the first entry in column ‘b’ remains NaN, because there is no entry before it to use for interpolation.
>>> df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0),
... (np.nan, 2.0, np.nan, np.nan),
... (2.0, 3.0, np.nan, 9.0),
... (np.nan, 4.0, -4.0, 16.0)],
... columns=list('abcd'))
>>> df
a b c d
0 0.0 NaN -1.0 1.0
1 NaN 2.0 NaN NaN
2 2.0 3.0 NaN 9.0
3 NaN 4.0 -4.0 16.0
>>> df.interpolate(method='linear', limit_direction='forward', axis=0)
a b c d
0 0.0 NaN -1.0 1.0
1 1.0 2.0 -2.0 5.0
2 2.0 3.0 -3.0 9.0
3 2.0 4.0 -4.0 16.0
Using polynomial interpolation.
>>> df['d'].interpolate(method='polynomial', order=2)
0 1.0
1 4.0
2 9.0
3 16.0
Name: d, dtype: float64 | |
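The limit_area option can be sketched as follows (series invented for illustration): with 'inside', only NaNs surrounded by valid values are interpolated, while leading and trailing NaNs are left untouched.

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1.0, np.nan, 3.0, np.nan])
filled = s.interpolate(limit_area="inside")
# Only the interior NaN (index 2) is interpolated, to 2.0;
# the leading and trailing NaNs remain NaN.
```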
doc_28838 |
Reverse the transformation operation Parameters
Xarray of shape [n_samples, n_selected_features]
The input samples. Returns
X_rarray of shape [n_samples, n_original_features]
X with columns of zeros inserted where features would have been removed by transform. | |
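A minimal sketch using VarianceThreshold as a concrete selector (chosen for illustration; any transformer exposing this method behaves the same way): the removed constant column comes back as zeros.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[0.0, 1.0, 2.0],
              [0.0, 3.0, 4.0]])
sel = VarianceThreshold().fit(X)   # drops the zero-variance first column
Xt = sel.transform(X)              # shape (2, 2)
Xr = sel.inverse_transform(Xt)     # zeros reinserted for the dropped column
```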
doc_28839 |
Set the Figure instance the artist belongs to. Parameters
figFigure | |
doc_28840 | Returns the maximum number of extra inline forms to use. By default, returns the InlineModelAdmin.max_num attribute. Override this method to programmatically determine the maximum number of inline forms. For example, this may be based on the model instance (passed as the keyword argument obj): class BinaryTreeAdmin(admin.TabularInline):
model = BinaryTree
def get_max_num(self, request, obj=None, **kwargs):
max_num = 10
if obj and obj.parent:
return max_num - 5
return max_num | |
doc_28841 | See Migration guide for more details. tf.compat.v1.raw_ops.IteratorGetNext
tf.raw_ops.IteratorGetNext(
iterator, output_types, output_shapes, name=None
)
Args
iterator A Tensor of type resource.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A list of Tensor objects of type output_types. | |
doc_28842 | Do not dump the file. | |
doc_28843 |
Return the current callback function used on floating-point errors. When the error handling for a floating-point error (one of “divide”, “over”, “under”, or “invalid”) is set to ‘call’ or ‘log’, the function that is called or the log instance that is written to is returned by geterrcall. This function or log instance has been set with seterrcall. Returns
errobjcallable, log instance or None
The current error handler. If no handler was set through seterrcall, None is returned. See also
seterrcall, seterr, geterr
Notes For complete documentation of the types of floating-point exceptions and treatment options, see seterr. Examples >>> np.geterrcall() # we did not yet set a handler, returns None
>>> oldsettings = np.seterr(all='call')
>>> def err_handler(type, flag):
... print("Floating point error (%s), with flag %s" % (type, flag))
>>> oldhandler = np.seterrcall(err_handler)
>>> np.array([1, 2, 3]) / 0.0
Floating point error (divide by zero), with flag 1
array([inf, inf, inf])
>>> cur_handler = np.geterrcall()
>>> cur_handler is err_handler
True | |
doc_28844 | Return the current state of the decoder. This must be a tuple with two items, the first must be the buffer containing the still undecoded input. The second must be an integer and can be additional state info. (The implementation should make sure that 0 is the most common additional state info.) If this additional state info is 0 it must be possible to set the decoder to the state which has no input buffered and 0 as the additional state info, so that feeding the previously buffered input to the decoder returns it to the previous state without producing any output. (Additional state info that is more complicated than integers can be converted into an integer by marshaling/pickling the info and encoding the bytes of the resulting string into an integer.) | |
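A brief sketch with the stdlib UTF-8 incremental decoder: after feeding the first byte of a two-byte sequence, getstate() exposes the still-undecoded buffered input and 0 as the additional state info.

```python
import codecs

dec = codecs.getincrementaldecoder("utf-8")()
out = dec.decode(b"\xc3")         # incomplete sequence: nothing decoded yet
buffered, extra = dec.getstate()  # buffered input, additional state info
dec.decode(b"\xa9")               # completes the sequence for U+00E9
```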
doc_28845 |
Set the Figure instance the artist belongs to. Parameters
figFigure | |
doc_28846 | This read-only attribute provides the column names of the last query. To remain compatible with the Python DB API, it returns a 7-tuple for each column where the last six items of each tuple are None. It is set for SELECT statements without any matching rows as well. | |
doc_28847 | Return the index of the first occurrence of b in a. | |
doc_28848 | The method the request was made with, such as GET. | |
doc_28849 |
Predict class for X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns
yndarray of shape (n_samples,)
The predicted values. | |
doc_28850 |
Read-only name identifying the axis. | |
doc_28851 | class socketserver.ThreadingMixIn
Forking and threading versions of each type of server can be created using these mix-in classes. For instance, ThreadingUDPServer is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer):
pass
The mix-in class comes first, since it overrides a method defined in UDPServer. Setting the various attributes also changes the behavior of the underlying server mechanism. ForkingMixIn and the Forking classes mentioned below are only available on POSIX platforms that support fork(). socketserver.ForkingMixIn.server_close() waits until all child processes complete, except if socketserver.ForkingMixIn.block_on_close attribute is false. socketserver.ThreadingMixIn.server_close() waits until all non-daemon threads complete, except if socketserver.ThreadingMixIn.block_on_close attribute is false. Use daemonic threads by setting ThreadingMixIn.daemon_threads to True to not wait until threads complete. Changed in version 3.7: socketserver.ForkingMixIn.server_close() and socketserver.ThreadingMixIn.server_close() now waits until all child processes and non-daemonic threads complete. Add a new socketserver.ForkingMixIn.block_on_close class attribute to opt-in for the pre-3.7 behaviour. | |
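A compact sketch of the threading variant in action, echoing one request over a loopback socket (the handler and the use of port 0 for an ephemeral port are choices made for this illustration):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Each request is served in its own thread by ThreadingMixIn.
        self.request.sendall(self.request.recv(1024))

# ThreadingTCPServer is ThreadingMixIn combined with TCPServer.
with socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler) as server:
    threading.Thread(target=server.serve_forever, daemon=True).start()
    with socket.create_connection(server.server_address) as conn:
        conn.sendall(b"ping")
        reply = conn.recv(1024)
    server.shutdown()
```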
doc_28852 | Set the transparent colorkey set_colorkey(Color, flags=0) -> None set_colorkey(None) -> None Set the current color key for the Surface. When blitting this Surface onto a destination, any pixels that have the same color as the colorkey will be transparent. The color can be an RGB color or a mapped color integer. If None is passed, the colorkey will be unset. The colorkey will be ignored if the Surface is formatted to use per pixel alpha values. The colorkey can be mixed with the full Surface alpha value. The optional flags argument can be set to pygame.RLEACCEL to provide better performance on non accelerated displays. An RLEACCEL Surface will be slower to modify, but quicker to blit as a source. | |
doc_28853 | Flag indicating whether to print only the filenames of files containing whitespace related problems. This is set to true by the -q option if called as a script. | |
doc_28854 |
Return str(self). | |
doc_28855 | A valid email address. By default Django uses the DEFAULT_FROM_EMAIL. | |
doc_28856 | Sets the system identifier of this InputSource. | |
doc_28857 | class graphlib.TopologicalSorter(graph=None)
Provides functionality to topologically sort a graph of hashable nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes before vertex v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this example, a topological ordering is just a valid sequence for the tasks. A complete topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph. If the optional graph argument is provided it must be a dictionary representing a directed acyclic graph where the keys are nodes and the values are iterables of all predecessors of that node in the graph (the nodes that have edges that point to the value in the key). Additional nodes can be added to the graph using the add() method. In the general case, the steps required to perform the sorting of a given graph are as follows: Create an instance of the TopologicalSorter with an optional initial graph. Add additional nodes to the graph. Call prepare() on the graph. While is_active() is True, iterate over the nodes returned by get_ready() and process them. Call done() on each node as it finishes processing. In case just an immediate sorting of the nodes in the graph is required and no parallelism is involved, the convenience method TopologicalSorter.static_order() can be used directly: >>> graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}}
>>> ts = TopologicalSorter(graph)
>>> tuple(ts.static_order())
('A', 'C', 'B', 'D')
The class is designed to easily support parallel processing of the nodes as they become ready. For instance: topological_sorter = TopologicalSorter()
# Add nodes to 'topological_sorter'...
topological_sorter.prepare()
while topological_sorter.is_active():
for node in topological_sorter.get_ready():
# Worker threads or processes take nodes to work on off the
# 'task_queue' queue.
task_queue.put(node)
# When the work for a node is done, workers put the node in
# 'finalized_tasks_queue' so we can get more nodes to work on.
# The definition of 'is_active()' guarantees that, at this point, at
# least one node has been placed on 'task_queue' that hasn't yet
# been passed to 'done()', so this blocking 'get()' must (eventually)
# succeed. After calling 'done()', we loop back to call 'get_ready()'
# again, so put newly freed nodes on 'task_queue' as soon as
# logically possible.
node = finalized_tasks_queue.get()
topological_sorter.done(node)
add(node, *predecessors)
Add a new node and its predecessors to the graph. Both the node and all elements in predecessors must be hashable. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. It is possible to add a node with no dependencies (predecessors is not provided) or to provide a dependency twice. If a node that has not been provided before is included among predecessors it will be automatically added to the graph with no predecessors of its own. Raises ValueError if called after prepare().
prepare()
Mark the graph as finished and check for cycles in the graph. If any cycle is detected, CycleError will be raised, but get_ready() can still be used to obtain as many nodes as possible until cycles block more progress. After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using add().
is_active()
Returns True if more progress can be made and False otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven’t yet been returned by TopologicalSorter.get_ready() or the number of nodes marked TopologicalSorter.done() is less than the number that have been returned by TopologicalSorter.get_ready(). The __bool__() method of this class defers to this function, so instead of: if ts.is_active():
...
it is possible to simply do: if ts:
...
Raises ValueError if called without calling prepare() previously.
done(*nodes)
Marks a set of nodes returned by TopologicalSorter.get_ready() as processed, unblocking any successor of each node in nodes for being returned in the future by a call to TopologicalSorter.get_ready(). Raises ValueError if any node in nodes has already been marked as processed by a previous call to this method or if a node was not added to the graph by using TopologicalSorter.add(), if called without calling prepare() or if node has not yet been returned by get_ready().
get_ready()
Returns a tuple with all the nodes that are ready. Initially it returns all nodes with no predecessors, and once those are marked as processed by calling TopologicalSorter.done(), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned. Raises ValueError if called without calling prepare() previously.
static_order()
Returns an iterable of nodes in a topological order. Using this method does not require calling TopologicalSorter.prepare() or TopologicalSorter.done(). This method is equivalent to: def static_order(self):
self.prepare()
while self.is_active():
node_group = self.get_ready()
yield from node_group
self.done(*node_group)
The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example: >>> ts = TopologicalSorter()
>>> ts.add(3, 2, 1)
>>> ts.add(1, 0)
>>> print([*ts.static_order()])
[2, 0, 1, 3]
>>> ts2 = TopologicalSorter()
>>> ts2.add(1, 0)
>>> ts2.add(3, 2, 1)
>>> print([*ts2.static_order()])
[0, 2, 1, 3]
This is due to the fact that “0” and “2” are in the same level in the graph (they would have been returned in the same call to get_ready()) and the order between them is determined by the order of insertion. If any cycle is detected, CycleError will be raised.
New in version 3.9.
Exceptions The graphlib module defines the following exception classes:
exception graphlib.CycleError
Subclass of ValueError raised by TopologicalSorter.prepare() if cycles exist in the working graph. If multiple cycles exist, only one undefined choice among them will be reported and included in the exception. The detected cycle can be accessed via the second element in the args attribute of the exception instance and consists in a list of nodes, such that each node is, in the graph, an immediate predecessor of the next node in the list. In the reported list, the first and the last node will be the same, to make it clear that it is cyclic. | |
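A short sketch of the cycle reporting (the two-node graph is invented for illustration): the detected cycle is available as the second element of the exception's args, with the first and last node repeated.

```python
from graphlib import TopologicalSorter, CycleError

ts = TopologicalSorter({"a": {"b"}, "b": {"a"}})  # a depends on b, b on a
cycle = None
try:
    ts.prepare()
except CycleError as exc:
    cycle = exc.args[1]  # list of nodes forming the cycle
```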
doc_28858 |
Return the current hatching pattern. | |
doc_28859 | Construct a new Decimal object based from value. value can be an integer, string, tuple, float, or another Decimal object. If no value is given, returns Decimal('0'). If value is a string, it should conform to the decimal numeric string syntax after leading and trailing whitespace characters, as well as underscores throughout, are removed: sign ::= '+' | '-'
digit ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
indicator ::= 'e' | 'E'
digits ::= digit [digit]...
decimal-part ::= digits '.' [digits] | ['.'] digits
exponent-part ::= indicator [sign] digits
infinity ::= 'Infinity' | 'Inf'
nan ::= 'NaN' [digits] | 'sNaN' [digits]
numeric-value ::= decimal-part [exponent-part] | infinity
numeric-string ::= [sign] numeric-value | [sign] nan
Other Unicode decimal digits are also permitted where digit appears above. These include decimal digits from various other alphabets (for example, Arabic-Indic and Devanāgarī digits) along with the fullwidth digits '\uff10' through '\uff19'. If value is a tuple, it should have three components, a sign (0 for positive or 1 for negative), a tuple of digits, and an integer exponent. For example, Decimal((0, (1, 4, 1, 4), -3)) returns Decimal('1.414'). If value is a float, the binary floating point value is losslessly converted to its exact decimal equivalent. This conversion can often require 53 or more digits of precision. For example, Decimal(float('1.1')) converts to Decimal('1.100000000000000088817841970012523233890533447265625'). The context precision does not affect how many digits are stored. That is determined exclusively by the number of digits in value. For example, Decimal('3.00000') records all five zeros even if the context precision is only three. The purpose of the context argument is determining what to do if value is a malformed string. If the context traps InvalidOperation, an exception is raised; otherwise, the constructor returns a new Decimal with the value of NaN. Once constructed, Decimal objects are immutable. Changed in version 3.2: The argument to the constructor is now permitted to be a float instance. Changed in version 3.3: float arguments raise an exception if the FloatOperation trap is set. By default the trap is off. Changed in version 3.6: Underscores are allowed for grouping, as with integral and floating-point literals in code. Decimal floating point objects share many properties with the other built-in numeric types such as float and int. All of the usual math operations and special methods apply. Likewise, decimal objects can be copied, pickled, printed, used as dictionary keys, used as set elements, compared, sorted, and coerced to another type (such as float or int).
There are some small differences between arithmetic on Decimal objects and arithmetic on integers and floats. When the remainder operator % is applied to Decimal objects, the sign of the result is the sign of the dividend rather than the sign of the divisor: >>> (-7) % 4
1
>>> Decimal(-7) % Decimal(4)
Decimal('-3')
The integer division operator // behaves analogously, returning the integer part of the true quotient (truncating towards zero) rather than its floor, so as to preserve the usual identity x == (x // y) * y + x % y: >>> -7 // 4
-2
>>> Decimal(-7) // Decimal(4)
Decimal('-1')
The % and // operators implement the remainder and divide-integer operations (respectively) as described in the specification. Decimal objects cannot generally be combined with floats or instances of fractions.Fraction in arithmetic operations: an attempt to add a Decimal to a float, for example, will raise a TypeError. However, it is possible to use Python’s comparison operators to compare a Decimal instance x with another number y. This avoids confusing results when doing equality comparisons between numbers of different types. Changed in version 3.2: Mixed-type comparisons between Decimal instances and other numeric types are now fully supported. In addition to the standard numeric properties, decimal floating point objects also have a number of specialized methods:
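A short sketch exercising the sign conventions and mixed-type rules above (standard library only):

```python
from decimal import Decimal

# % keeps the sign of the dividend; // truncates toward zero:
assert Decimal(-7) % Decimal(4) == Decimal(-3)
assert Decimal(-7) // Decimal(4) == Decimal(-1)

# Mixing Decimal with float in arithmetic raises TypeError...
try:
    Decimal('1.1') + 1.1
    raised = False
except TypeError:
    raised = True
assert raised

# ...but cross-type comparisons are fully supported:
assert Decimal('1.5') == 1.5
assert Decimal('0.1') != 0.1   # the float 0.1 is not exactly 0.1
```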
adjusted()
Return the adjusted exponent after shifting out the coefficient’s rightmost digits until only the lead digit remains: Decimal('321e+5').adjusted() returns seven. Used for determining the position of the most significant digit with respect to the decimal point.
as_integer_ratio()
Return a pair (n, d) of integers that represent the given Decimal instance as a fraction, in lowest terms and with a positive denominator: >>> Decimal('-3.14').as_integer_ratio()
(-157, 50)
The conversion is exact. Raise OverflowError on infinities and ValueError on NaNs.
New in version 3.6.
as_tuple()
Return a named tuple representation of the number: DecimalTuple(sign, digits, exponent).
canonical()
Return the canonical encoding of the argument. Currently, the encoding of a Decimal instance is always canonical, so this operation returns its argument unchanged.
compare(other, context=None)
Compare the values of two Decimal instances. compare() returns a Decimal instance, and if either operand is a NaN then the result is a NaN: a or b is a NaN ==> Decimal('NaN')
a < b ==> Decimal('-1')
a == b ==> Decimal('0')
a > b ==> Decimal('1')
compare_signal(other, context=None)
This operation is identical to the compare() method, except that all NaNs signal. That is, if neither operand is a signaling NaN then any quiet NaN operand is treated as though it were a signaling NaN.
compare_total(other, context=None)
Compare two operands using their abstract representation rather than their numerical value. Similar to the compare() method, but the result gives a total ordering on Decimal instances. Two Decimal instances with the same numeric value but different representations compare unequal in this ordering: >>> Decimal('12.0').compare_total(Decimal('12'))
Decimal('-1')
Quiet and signaling NaNs are also included in the total ordering. The result of this function is Decimal('0') if both operands have the same representation, Decimal('-1') if the first operand is lower in the total order than the second, and Decimal('1') if the first operand is higher in the total order than the second operand. See the specification for details of the total order. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.
compare_total_mag(other, context=None)
Compare two operands using their abstract representation rather than their value as in compare_total(), but ignoring the sign of each operand. x.compare_total_mag(y) is equivalent to x.copy_abs().compare_total(y.copy_abs()). This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.
conjugate()
Just returns self; this method exists only to comply with the Decimal Specification.
copy_abs()
Return the absolute value of the argument. This operation is unaffected by the context and is quiet: no flags are changed and no rounding is performed.
copy_negate()
Return the negation of the argument. This operation is unaffected by the context and is quiet: no flags are changed and no rounding is performed.
copy_sign(other, context=None)
Return a copy of the first operand with the sign set to be the same as the sign of the second operand. For example: >>> Decimal('2.3').copy_sign(Decimal('-1.5'))
Decimal('-2.3')
This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.
exp(context=None)
Return the value of the (natural) exponential function e**x at the given number. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode. >>> Decimal(1).exp()
Decimal('2.718281828459045235360287471')
>>> Decimal(321).exp()
Decimal('2.561702493119680037517373933E+139')
from_float(f)
Classmethod that converts a float to a decimal number, exactly. Note Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. That equivalent value in decimal is 0.1000000000000000055511151231257827021181583404541015625. Note From Python 3.2 onwards, a Decimal instance can also be constructed directly from a float. >>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal.from_float(float('nan'))
Decimal('NaN')
>>> Decimal.from_float(float('inf'))
Decimal('Infinity')
>>> Decimal.from_float(float('-inf'))
Decimal('-Infinity')
New in version 3.1.
fma(other, third, context=None)
Fused multiply-add. Return self*other+third with no rounding of the intermediate product self*other. >>> Decimal(2).fma(3, 5)
Decimal('11')
is_canonical()
Return True if the argument is canonical and False otherwise. Currently, a Decimal instance is always canonical, so this operation always returns True.
is_finite()
Return True if the argument is a finite number, and False if the argument is an infinity or a NaN.
is_infinite()
Return True if the argument is either positive or negative infinity and False otherwise.
is_nan()
Return True if the argument is a (quiet or signaling) NaN and False otherwise.
is_normal(context=None)
Return True if the argument is a normal finite number. Return False if the argument is zero, subnormal, infinite or a NaN.
is_qnan()
Return True if the argument is a quiet NaN, and False otherwise.
is_signed()
Return True if the argument has a negative sign and False otherwise. Note that zeros and NaNs can both carry signs.
is_snan()
Return True if the argument is a signaling NaN and False otherwise.
is_subnormal(context=None)
Return True if the argument is subnormal, and False otherwise.
is_zero()
Return True if the argument is a (positive or negative) zero and False otherwise.
ln(context=None)
Return the natural (base e) logarithm of the operand. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode.
log10(context=None)
Return the base ten logarithm of the operand. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode.
logb(context=None)
For a nonzero number, return the adjusted exponent of its operand as a Decimal instance. If the operand is a zero then Decimal('-Infinity') is returned and the DivisionByZero flag is raised. If the operand is an infinity then Decimal('Infinity') is returned.
logical_and(other, context=None)
logical_and() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise and of the two operands.
logical_invert(context=None)
logical_invert() is a logical operation. The result is the digit-wise inversion of the operand.
logical_or(other, context=None)
logical_or() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise or of the two operands.
logical_xor(other, context=None)
logical_xor() is a logical operation which takes two logical operands (see Logical operands). The result is the digit-wise exclusive or of the two operands.
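As a sketch of the logical operations above: logical operands are Decimals whose digits are all 0 or 1, and the digit-wise operations then behave like bitwise operations on those digit strings. Note that logical_invert() pads the operand with zeros to the context precision before inverting, so a small local precision keeps the result readable:

```python
from decimal import Decimal, localcontext

a, b = Decimal('1100'), Decimal('1010')
assert a.logical_and(b) == Decimal('1000')
assert a.logical_or(b) == Decimal('1110')
assert a.logical_xor(b) == Decimal('110')   # the leading zero of 0110 is dropped

# Invert within a 4-digit context so the operand is not padded to 28 digits:
with localcontext() as ctx:
    ctx.prec = 4
    assert Decimal('1010').logical_invert() == Decimal('101')
```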
max(other, context=None)
Like max(self, other) except that the context rounding rule is applied before returning and that NaN values are either signaled or ignored (depending on the context and whether they are signaling or quiet).
max_mag(other, context=None)
Similar to the max() method, but the comparison is done using the absolute values of the operands.
min(other, context=None)
Like min(self, other) except that the context rounding rule is applied before returning and that NaN values are either signaled or ignored (depending on the context and whether they are signaling or quiet).
min_mag(other, context=None)
Similar to the min() method, but the comparison is done using the absolute values of the operands.
next_minus(context=None)
Return the largest number representable in the given context (or in the current thread’s context if no context is given) that is smaller than the given operand.
next_plus(context=None)
Return the smallest number representable in the given context (or in the current thread’s context if no context is given) that is larger than the given operand.
next_toward(other, context=None)
If the two operands are unequal, return the number closest to the first operand in the direction of the second operand. If both operands are numerically equal, return a copy of the first operand with the sign set to be the same as the sign of the second operand.
normalize(context=None)
Normalize the number by stripping the rightmost trailing zeros and converting any result equal to Decimal('0') to Decimal('0e0'). Used for producing canonical values for attributes of an equivalence class. For example, Decimal('32.100') and Decimal('0.321000e+2') both normalize to the equivalent value Decimal('32.1').
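The equivalence-class behavior described above can be checked directly:

```python
from decimal import Decimal

# Different representations of the same value normalize identically:
assert str(Decimal('32.100').normalize()) == '32.1'
assert str(Decimal('0.321000e+2').normalize()) == '32.1'

# Results equal to zero are reduced to a single zero digit:
assert str(Decimal('0.00').normalize()) == '0'
```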
number_class(context=None)
Return a string describing the class of the operand. The returned value is one of the following ten strings.
"-Infinity", indicating that the operand is negative infinity.
"-Normal", indicating that the operand is a negative normal number.
"-Subnormal", indicating that the operand is negative and subnormal.
"-Zero", indicating that the operand is a negative zero.
"+Zero", indicating that the operand is a positive zero.
"+Subnormal", indicating that the operand is positive and subnormal.
"+Normal", indicating that the operand is a positive normal number.
"+Infinity", indicating that the operand is positive infinity.
"NaN", indicating that the operand is a quiet NaN (Not a Number).
"sNaN", indicating that the operand is a signaling NaN.
quantize(exp, rounding=None, context=None)
Return a value equal to the first operand after rounding and having the exponent of the second operand. >>> Decimal('1.41421356').quantize(Decimal('1.000'))
Decimal('1.414')
Unlike other operations, if the length of the coefficient after the quantize operation would be greater than precision, then an InvalidOperation is signaled. This guarantees that, unless there is an error condition, the quantized exponent is always equal to that of the right-hand operand. Also unlike other operations, quantize never signals Underflow, even if the result is subnormal and inexact. If the exponent of the second operand is larger than that of the first then rounding may be necessary. In this case, the rounding mode is determined by the rounding argument if given, else by the given context argument; if neither argument is given the rounding mode of the current thread’s context is used. An error is returned whenever the resulting exponent is greater than Emax or less than Etiny.
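The interaction between the exp pattern and the rounding argument can be sketched as follows; the explicit rounding mode overrides whatever the context specifies:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

# Round to two places using the context's default ROUND_HALF_EVEN:
assert Decimal('1.41421356').quantize(Decimal('0.01')) == Decimal('1.41')

# An explicit rounding mode changes the tie-breaking behavior:
assert Decimal('2.5').quantize(Decimal('1')) == Decimal('2')                      # half-even
assert Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP) == Decimal('3')
assert Decimal('2.9').quantize(Decimal('1'), rounding=ROUND_DOWN) == Decimal('2')
```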
radix()
Return Decimal(10), the radix (base) in which the Decimal class does all its arithmetic. Included for compatibility with the specification.
remainder_near(other, context=None)
Return the remainder from dividing self by other. This differs from self % other in that the sign of the remainder is chosen so as to minimize its absolute value. More precisely, the return value is self - n * other where n is the integer nearest to the exact value of self / other, and if two integers are equally near then the even one is chosen. If the result is zero then its sign will be the sign of self. >>> Decimal(18).remainder_near(Decimal(10))
Decimal('-2')
>>> Decimal(25).remainder_near(Decimal(10))
Decimal('5')
>>> Decimal(35).remainder_near(Decimal(10))
Decimal('-5')
rotate(other, context=None)
Return the result of rotating the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to rotate. If the second operand is positive then rotation is to the left; otherwise rotation is to the right. The coefficient of the first operand is padded on the left with zeros to length precision if necessary. The sign and exponent of the first operand are unchanged.
same_quantum(other, context=None)
Test whether self and other have the same exponent or whether both are NaN. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise InvalidOperation if the second operand cannot be converted exactly.
scaleb(other, context=None)
Return the first operand with exponent adjusted by the second. Equivalently, return the first operand multiplied by 10**other. The second operand must be an integer.
shift(other, context=None)
Return the result of shifting the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to shift. If the second operand is positive then the shift is to the left; otherwise the shift is to the right. Digits shifted into the coefficient are zeros. The sign and exponent of the first operand are unchanged.
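A quick sketch of the shift semantics described above:

```python
from decimal import Decimal

# A positive shift moves digits left; zeros are shifted in on the right:
assert Decimal(12345).shift(2) == Decimal(1234500)

# A negative shift moves digits right, discarding low-order digits:
assert Decimal(12345).shift(-2) == Decimal(123)

# The sign and exponent are unchanged (coefficient 12345, exponent -4):
assert Decimal('-1.2345').shift(-2) == Decimal('-0.0123')
```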
sqrt(context=None)
Return the square root of the argument to full precision.
to_eng_string(context=None)
Convert to a string, using engineering notation if an exponent is needed. Engineering notation has an exponent which is a multiple of 3. This can leave up to 3 digits to the left of the decimal place and may require the addition of either one or two trailing zeros. For example, this converts Decimal('123E+1') to Decimal('1.23E+3').
to_integral(rounding=None, context=None)
Identical to the to_integral_value() method. The to_integral name has been kept for compatibility with older versions.
to_integral_exact(rounding=None, context=None)
Round to the nearest integer, signaling Inexact or Rounded as appropriate if rounding occurs. The rounding mode is determined by the rounding parameter if given, else by the given context. If neither parameter is given then the rounding mode of the current context is used.
to_integral_value(rounding=None, context=None)
Round to the nearest integer without signaling Inexact or Rounded. If given, applies rounding; otherwise, uses the rounding method in either the supplied context or the current context. | |
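A short sketch of the quiet integer rounding described above, including an explicit rounding override:

```python
from decimal import Decimal, ROUND_CEILING

# Default rounding comes from the context (ROUND_HALF_EVEN by default):
assert Decimal('1.5').to_integral_value() == Decimal('2')
assert Decimal('2.5').to_integral_value() == Decimal('2')   # ties go to even

# An explicit rounding mode overrides the context:
assert Decimal('1.1').to_integral_value(rounding=ROUND_CEILING) == Decimal('2')
```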
doc_28860 | Return the message’s main content type. This is the maintype part of the string returned by get_content_type(). | |
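A minimal sketch with the standard library's email package: after set_content() the message carries Content-Type: text/plain, whose maintype part is 'text'.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content('hello')   # sets Content-Type: text/plain
assert msg.get_content_type() == 'text/plain'
assert msg.get_content_maintype() == 'text'
```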
doc_28861 | Set to os.getcwd(). | |
doc_28862 | This is the default backend. Email will be sent through a SMTP server. The value for each argument is retrieved from the matching setting if the argument is None:
host: EMAIL_HOST
port: EMAIL_PORT
username: EMAIL_HOST_USER
password: EMAIL_HOST_PASSWORD
use_tls: EMAIL_USE_TLS
use_ssl: EMAIL_USE_SSL
timeout: EMAIL_TIMEOUT
ssl_keyfile: EMAIL_SSL_KEYFILE
ssl_certfile: EMAIL_SSL_CERTFILE
The SMTP backend is the default configuration inherited by Django. If you want to specify it explicitly, put the following in your settings: EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
If unspecified, the default timeout will be the one provided by socket.getdefaulttimeout(), which defaults to None (no timeout). | |
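As a hedged sketch, a settings module wiring the SMTP backend explicitly might look like this; the host, port, and credentials below are placeholders, not recommendations:

```python
# settings.py (hypothetical fragment; all values are placeholders)
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'user@example.com'
EMAIL_HOST_PASSWORD = 'app-password'
EMAIL_USE_TLS = True
EMAIL_TIMEOUT = 10
```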
doc_28863 | Return True if the queue is empty, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable. | |
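A quick sketch of the caveat: empty() reflects the state only at the moment of the call, so it is dependable in single-threaded use but another thread may change the queue immediately afterwards.

```python
import queue

q = queue.Queue()
assert q.empty()
q.put('task')
assert not q.empty()
q.get()
assert q.empty()
```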
doc_28864 |
Alias for set_linestyle. | |
doc_28865 | See Migration guide for more details. tf.compat.v1.make_ndarray
tf.make_ndarray(
tensor
)
Create a numpy ndarray with the same shape and data as the tensor. For example: # Tensor a has shape (2,3)
a = tf.constant([[1,2,3],[4,5,6]])
proto_tensor = tf.make_tensor_proto(a) # convert `tensor a` to a proto tensor
tf.make_ndarray(proto_tensor) # output: array([[1, 2, 3],
# [4, 5, 6]], dtype=int32)
# output has shape (2,3)
Args
tensor A TensorProto.
Returns A numpy array with the tensor contents.
Raises
TypeError if tensor has unsupported type. | |
doc_28866 |
Bases: matplotlib.patches._Style ConnectionStyle is a container class which defines several connectionstyle classes, which are used to create a path between two points. These are mainly used with FancyArrowPatch. A connectionstyle object can be either created as: ConnectionStyle.Arc3(rad=0.2)
or: ConnectionStyle("Arc3", rad=0.2)
or: ConnectionStyle("Arc3, rad=0.2")
The following classes are defined
Class Name Attrs
Arc3 arc3 rad=0.0
Angle3 angle3 angleA=90, angleB=0
Angle angle angleA=90, angleB=0, rad=0.0
Arc arc angleA=0, angleB=0, armA=None, armB=None, rad=0.0
Bar bar armA=0.0, armB=0.0, fraction=0.3, angle=None An instance of any connection style class is a callable object, whose call signature is: __call__(self, posA, posB,
patchA=None, patchB=None,
shrinkA=2., shrinkB=2.)
and it returns a Path instance. posA and posB are tuples of (x, y) coordinates of the two points to be connected. If patchA (or patchB) is given, the returned path is clipped so that it starts (or ends) at the boundary of the patch. The path is further shrunk by shrinkA (or shrinkB), which is given in points. Return the instance of the subclass with the given style name. class Angle(angleA=90, angleB=0, rad=0.0)[source]
Bases: matplotlib.patches.ConnectionStyle._Base Creates a piecewise continuous quadratic Bezier path between two points. The path has one passing-through point placed at the intersecting point of two lines which cross the start and end point, and have a slope of angleA and angleB, respectively. The connecting edges are rounded with rad. angleA
starting angle of the path angleB
ending angle of the path rad
rounding radius of the edge connect(posA, posB)[source]
class Angle3(angleA=90, angleB=0)[source]
Bases: matplotlib.patches.ConnectionStyle._Base Creates a simple quadratic Bezier curve between two points. The middle control point is placed at the intersecting point of two lines which cross the start and end point, and have a slope of angleA and angleB, respectively. angleA
starting angle of the path angleB
ending angle of the path connect(posA, posB)[source]
class Arc(angleA=0, angleB=0, armA=None, armB=None, rad=0.0)[source]
Bases: matplotlib.patches.ConnectionStyle._Base Creates a piecewise continuous quadratic Bezier path between two points. The path can have two passing-through points, a point placed at the distance of armA and angle of angleA from point A, another point with respect to point B. The edges are rounded with rad.
angleA :
starting angle of the path
angleB :
ending angle of the path
armA :
length of the starting arm
armB :
length of the ending arm
rad :
rounding radius of the edges connect(posA, posB)[source]
class Arc3(rad=0.0)[source]
Bases: matplotlib.patches.ConnectionStyle._Base Creates a simple quadratic Bezier curve between two points. The curve is created so that the middle control point (C1) is located at the same distance from the start (C0) and end points(C2) and the distance of the C1 to the line connecting C0-C2 is rad times the distance of C0-C2. rad
curvature of the curve. connect(posA, posB)[source]
class Bar(armA=0.0, armB=0.0, fraction=0.3, angle=None)[source]
Bases: matplotlib.patches.ConnectionStyle._Base A line with angle between A and B with armA and armB. One of the arms is extended so that they are connected in a right angle. The length of armA is determined by (armA + fraction x AB distance). Same for armB. Parameters
armAfloat
minimum length of armA
armBfloat
minimum length of armB
fractionfloat
a fraction of the distance between two points that will be added to armA and armB.
anglefloat or None
angle of the connecting line (if None, parallel to A and B) connect(posA, posB)[source] | |
doc_28867 | A RegexValidator instance that ensures a value consists of only letters, numbers, underscores or hyphens. | |
doc_28868 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
edgecolor color
facecolor color
figure Figure
frameon bool
gid str
in_layout bool
label object
linewidth number
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float | |
doc_28869 |
Blit the canvas in bbox (default entire canvas). | |
doc_28870 |
Find good place to draw a label (relatively flat part of the contour). | |
doc_28871 | The loader which loaded the module. Defaults to None. This attribute is to match importlib.machinery.ModuleSpec.loader as stored in the __spec__ object. Note A future version of Python may stop setting this attribute by default. To guard against this potential change, preferably read from the __spec__ attribute instead or use getattr(module, "__loader__", None) if you explicitly need to use this attribute. Changed in version 3.4: Defaults to None. Previously the attribute was optional. |
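The guarded access pattern recommended above can be sketched as:

```python
import math

# Prefer __spec__.loader; fall back to __loader__ only if the spec is missing:
spec = getattr(math, "__spec__", None)
loader = spec.loader if spec is not None else getattr(math, "__loader__", None)
assert loader is not None
```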
doc_28872 | tf.keras.losses.kl_divergence, tf.keras.losses.kld, tf.keras.losses.kullback_leibler_divergence, tf.keras.metrics.KLD, tf.keras.metrics.kl_divergence, tf.keras.metrics.kld, tf.keras.metrics.kullback_leibler_divergence, tf.losses.KLD, tf.losses.kl_divergence, tf.losses.kld, tf.losses.kullback_leibler_divergence, tf.metrics.KLD, tf.metrics.kl_divergence, tf.metrics.kld, tf.metrics.kullback_leibler_divergence Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.KLD, tf.compat.v1.keras.losses.kl_divergence, tf.compat.v1.keras.losses.kld, tf.compat.v1.keras.losses.kullback_leibler_divergence, tf.compat.v1.keras.metrics.KLD, tf.compat.v1.keras.metrics.kl_divergence, tf.compat.v1.keras.metrics.kld, tf.compat.v1.keras.metrics.kullback_leibler_divergence
tf.keras.losses.KLD(
y_true, y_pred
)
loss = y_true * log(y_true / y_pred) See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Standalone usage:
y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)
y_pred = np.random.random(size=(2, 3))
loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)
assert loss.shape == (2,)
y_true = tf.keras.backend.clip(y_true, 1e-7, 1)
y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)
assert np.array_equal(
loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))
Args
y_true Tensor of true targets.
y_pred Tensor of predicted targets.
Returns A Tensor with loss.
Raises
TypeError If y_true cannot be cast to the y_pred.dtype. | |
doc_28873 | The node that immediately precedes this one with the same parent. For instance, the element whose end-tag comes just before this element's start-tag. Of course, XML documents are made up of more than just elements, so the previous sibling could be text, a comment, or something else. If this node is the first child of the parent, this attribute will be None. This is a read-only attribute. |
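A short sketch with xml.dom.minidom (note that any whitespace between tags would itself appear as text-node siblings):

```python
from xml.dom.minidom import parseString

doc = parseString('<root><a/><b/></root>')
a, b = doc.documentElement.childNodes
assert b.previousSibling is a
assert a.previousSibling is None
```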
doc_28874 |
Load an SPSS file from the file path, returning a DataFrame. New in version 0.25.0. Parameters
path:str or Path
File path.
usecols:list-like, optional
Return a subset of the columns. If None, return all columns.
convert_categoricals:bool, default is True
Convert categorical columns into pd.Categorical. Returns
DataFrame | |
doc_28875 | tf.compat.v1.losses.get_losses(
scope=None, loss_collection=tf.GraphKeys.LOSSES
)
Args
scope An optional scope name for filtering the losses to return.
loss_collection Optional losses collection.
Returns a list of loss tensors. | |
doc_28876 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_28877 |
See torch.stft() Warning This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result. | |
doc_28878 | select_template() is just like get_template(), except it takes a list of template names. It tries each name in order and returns the first template that exists. | |
doc_28879 | Alias for torch.atanh(). | |
doc_28880 |
Convert bytes in the encoding used by a subprocess into a filesystem-appropriate str. Inherited from exec_command, and possibly incorrect. | |
doc_28881 | See Migration guide for more details. tf.compat.v1.raw_ops.NonMaxSuppressionWithOverlaps
tf.raw_ops.NonMaxSuppressionWithOverlaps(
overlaps, scores, max_output_size, overlap_threshold, score_threshold, name=None
)
Prunes away boxes that have high overlaps with previously selected boxes. Bounding boxes with score less than score_threshold are removed. N-by-n overlap values are supplied as a square matrix, which allows for defining a custom overlap criterion (e.g. intersection over union, intersection over area, etc.). The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:
selected_indices = tf.image.non_max_suppression_with_overlaps(
    overlaps, scores, max_output_size, overlap_threshold, score_threshold)
selected_boxes = tf.gather(boxes, selected_indices)
Args
overlaps A Tensor of type float32. A 2-D float tensor of shape [num_boxes, num_boxes] representing the n-by-n box overlap values.
scores A Tensor of type float32. A 1-D float tensor of shape [num_boxes] representing a single score corresponding to each box (each row of boxes).
max_output_size A Tensor of type int32. A scalar integer tensor representing the maximum number of boxes to be selected by non max suppression.
overlap_threshold A Tensor of type float32. A 0-D float tensor representing the threshold for deciding whether boxes overlap too much.
score_threshold A Tensor of type float32. A 0-D float tensor representing the threshold for deciding when to remove boxes based on score.
name A name for the operation (optional).
Returns A Tensor of type int32. | |
doc_28882 |
Define how the connection between two line segments is drawn. For a visual impression of each JoinStyle, view these docs online, or run JoinStyle.demo. Lines in Matplotlib are typically defined by a 1D Path and a finite linewidth, where the underlying 1D Path represents the center of the stroked line. By default, GraphicsContextBase defines the boundaries of a stroked line to simply be every point within some radius, linewidth/2, away from any point of the center line. However, this results in corners appearing "rounded", which may not be the desired behavior if you are drawing, for example, a polygon or pointed star. Supported values: 'miter'
the "arrow-tip" style. Each boundary of the filled-in area will extend in a straight line parallel to the tangent vector of the centerline at the point it meets the corner, until they meet in a sharp point. 'round'
strokes every point within a radius of linewidth/2 of the center lines. 'bevel'
the "squared-off" style. It can be thought of as a rounded corner where the "circular" part of the corner has been cut off. Note Very long miter tips are cut off (to form a bevel) after a backend-dependent limit called the "miter limit", which specifies the maximum allowed ratio of miter length to line width. For example, the PDF backend uses the default value of 10 specified by the PDF standard, while the SVG backend does not even specify the miter limit, resulting in a default value of 4 per the SVG specification. Matplotlib does not currently allow the user to adjust this parameter. A more detailed description of the effect of a miter limit can be found in the Mozilla Developer Docs (Source code, png, pdf) staticdemo()[source]
Demonstrate how each JoinStyle looks for various join angles. | |
doc_28883 |
Dequantizes an incoming tensor. Examples::
>>> input = torch.tensor([[1., -1.], [1., -1.]])
>>> scale, zero_point, dtype = 1.0, 2, torch.qint8
>>> qm = Quantize(scale, zero_point, dtype)
>>> quantized_input = qm(input)
>>> dqm = DeQuantize()
>>> dequantized = dqm(quantized_input)
>>> print(dequantized)
tensor([[ 1., -1.],
[ 1., -1.]], dtype=torch.float32) | |
doc_28884 |
Return the bottom coord of the rectangle. | |
doc_28885 | tf.summary.experimental.set_step(
step
)
For convenience, this function sets a default value for the step parameter used in summary-writing functions elsewhere in the API so that it need not be explicitly passed in every such invocation. The value can be a constant or a variable, and can be retrieved via tf.summary.experimental.get_step().
Note: when using this with @tf.functions, the step value will be captured at the time the function is traced, so changes to the step outside the function will not be reflected inside the function unless using a tf.Variable step.
Args
step An int64-castable default step value, or None to unset. | |
doc_28886 | If there is no certificate for the peer on the other end of the connection, return None. If the SSL handshake hasn’t been done yet, raise ValueError. If the binary_form parameter is False, and a certificate was received from the peer, this method returns a dict instance. If the certificate was not validated, the dict is empty. If the certificate was validated, it returns a dict with several keys, amongst them subject (the principal for which the certificate was issued) and issuer (the principal issuing the certificate). If a certificate contains an instance of the Subject Alternative Name extension (see RFC 3280), there will also be a subjectAltName key in the dictionary. The subject and issuer fields are tuples containing the sequence of relative distinguished names (RDNs) given in the certificate’s data structure for the respective fields, and each RDN is a sequence of name-value pairs. Here is a real-world example: {'issuer': ((('countryName', 'IL'),),
(('organizationName', 'StartCom Ltd.'),),
(('organizationalUnitName',
'Secure Digital Certificate Signing'),),
(('commonName',
'StartCom Class 2 Primary Intermediate Server CA'),)),
'notAfter': 'Nov 22 08:15:19 2013 GMT',
'notBefore': 'Nov 21 03:09:52 2011 GMT',
'serialNumber': '95F0',
'subject': ((('description', '571208-SLe257oHY9fVQ07Z'),),
(('countryName', 'US'),),
(('stateOrProvinceName', 'California'),),
(('localityName', 'San Francisco'),),
(('organizationName', 'Electronic Frontier Foundation, Inc.'),),
(('commonName', '*.eff.org'),),
(('emailAddress', 'hostmaster@eff.org'),)),
'subjectAltName': (('DNS', '*.eff.org'), ('DNS', 'eff.org')),
'version': 3}
Note To validate a certificate for a particular service, you can use the match_hostname() function. If the binary_form parameter is True, and a certificate was provided, this method returns the DER-encoded form of the entire certificate as a sequence of bytes, or None if the peer did not provide a certificate. Whether the peer provides a certificate depends on the SSL socket’s role: for a client SSL socket, the server will always provide a certificate, regardless of whether validation was required; for a server SSL socket, the client will only provide a certificate when requested by the server; therefore getpeercert() will return None if you used CERT_NONE (rather than CERT_OPTIONAL or CERT_REQUIRED). Changed in version 3.2: The returned dictionary includes additional items such as issuer and notBefore. Changed in version 3.4: ValueError is raised when the handshake isn’t done. The returned dictionary includes additional X509v3 extension items such as crlDistributionPoints, caIssuers and OCSP URIs. Changed in version 3.9: IPv6 address strings no longer have a trailing new line. | |
doc_28887 | See Migration guide for more details. tf.compat.v1.raw_ops.DenseToCSRSparseMatrix
tf.raw_ops.DenseToCSRSparseMatrix(
dense_input, indices, name=None
)
Args
dense_input A Tensor. Must be one of the following types: float32, float64, complex64, complex128. A Dense tensor.
indices A Tensor of type int64. Indices of nonzero elements.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_28888 | tf.compat.v1.disable_eager_execution()
This function can only be called before any Graphs, Ops, or Tensors have been created. It can be used at the beginning of the program for complex migration projects from TensorFlow 1.x to 2.x. | |
doc_28889 |
Generate random samples from the model. Currently, this is implemented only for gaussian and tophat kernels. Parameters
n_samplesint, default=1
Number of samples to generate.
random_stateint, RandomState instance or None, default=None
Determines random number generation used to generate random samples. Pass an int for reproducible results across multiple function calls. See the Glossary entry for random_state. Returns
Xarray-like of shape (n_samples, n_features)
List of samples. | |
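For the gaussian kernel, sampling from a kernel density estimate amounts to picking fitted points at random and perturbing them with Gaussian noise scaled by the bandwidth. A minimal numpy sketch of that idea (function name is illustrative, not part of the scikit-learn API):

```python
import numpy as np

# A minimal numpy sketch of sampling from a gaussian-kernel KDE: pick
# training points uniformly at random and add Gaussian noise whose
# scale is the bandwidth. Illustrative only, not the scikit-learn code.

def gaussian_kde_sample(X_fit, bandwidth, n_samples=1, random_state=None):
    rng = np.random.default_rng(random_state)
    X_fit = np.asarray(X_fit, dtype=float)
    idx = rng.integers(0, X_fit.shape[0], size=n_samples)  # base points
    noise = rng.normal(scale=bandwidth, size=(n_samples, X_fit.shape[1]))
    return X_fit[idx] + noise

X = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
samples = gaussian_kde_sample(X, bandwidth=0.2, n_samples=5, random_state=0)
print(samples.shape)  # (5, 2)
```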
doc_28890 | See Migration guide for more details. tf.compat.v1.raw_ops.DirectedInterleaveDataset
tf.raw_ops.DirectedInterleaveDataset(
selector_input_dataset, data_input_datasets, output_types, output_shapes,
name=None
)
Args
selector_input_dataset A Tensor of type variant. A dataset of scalar DT_INT64 elements that determines which of the N data inputs should produce the next output element.
data_input_datasets A list of at least 1 Tensor objects with type variant. N datasets with the same type that will be interleaved according to the values of selector_input_dataset.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
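The op's semantics can be modeled in plain Python: each element of the selector stream is an index choosing which data input supplies the next output element. This sketch models behavior only; it is not the TensorFlow implementation:

```python
# Plain-Python model of DirectedInterleaveDataset's semantics: the
# selector stream yields indices, and each index pulls the next element
# from the corresponding data input.

def directed_interleave(selector, data_inputs):
    iterators = [iter(d) for d in data_inputs]
    for i in selector:
        yield next(iterators[i])

out = list(directed_interleave([0, 1, 0, 1, 1],
                               [['a', 'b'], [1, 2, 3]]))
print(out)  # ['a', 1, 'b', 2, 3]
```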
doc_28891 |
Return whether the artist uses clipping. | |
doc_28892 | A hook allowing the expression to coerce value into a more appropriate type. expression is the same as self. | |
doc_28893 | andrews_curves(frame, class_column[, ax, ...]) Generate a matplotlib plot of Andrews curves, for visualising clusters of multivariate data.
autocorrelation_plot(series[, ax]) Autocorrelation plot for time series.
bootstrap_plot(series[, fig, size, samples]) Bootstrap plot on mean, median and mid-range statistics.
boxplot(data[, column, by, ax, fontsize, ...]) Make a box plot from DataFrame columns.
deregister_matplotlib_converters() Remove pandas formatters and converters.
lag_plot(series[, lag, ax]) Lag plot for time series.
parallel_coordinates(frame, class_column[, ...]) Parallel coordinates plotting.
plot_params Stores pandas plotting options.
radviz(frame, class_column[, ax, color, ...]) Plot a multidimensional dataset in 2D.
register_matplotlib_converters() Register pandas formatters and converters with matplotlib.
scatter_matrix(frame[, alpha, figsize, ax, ...]) Draw a matrix of scatter plots.
table(ax, data[, rowLabels, colLabels]) Helper function to convert DataFrame and Series to matplotlib.table. | |
doc_28894 | Turns a MIDI note off (the note must be on). note_off(note, velocity=None, channel=0) -> None Turn a note off in the output stream. The note must already be on for this to work correctly. | 
doc_28895 | Load MIME information from a file named filename. This uses readfp() to parse the file. If strict is True, information will be added to list of standard types, else to the list of non-standard types. | |
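A short stdlib sketch of loading extra extension mappings from a file in the classic mime.types format (the `.foo` extension here is made up for illustration):

```python
import mimetypes
import os
import tempfile

# Write a tiny mime.types-format file, then load it into a MimeTypes
# instance with read(). strict=True adds the mapping to the standard
# types map. The application/x-foo type is a made-up example.
with tempfile.NamedTemporaryFile('w', suffix='.types', delete=False) as f:
    f.write('application/x-foo  foo fooz\n')
    path = f.name

mt = mimetypes.MimeTypes()
mt.read(path, strict=True)
os.unlink(path)

print(mt.guess_type('example.foo'))  # ('application/x-foo', None)
```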
doc_28896 | Token value for "*". | |
doc_28897 | DistanceMetric class This class provides a uniform interface to fast distance metric functions. The various metrics can be accessed via the get_metric class method and the metric string identifier (see below). Examples >>> from sklearn.neighbors import DistanceMetric
>>> dist = DistanceMetric.get_metric('euclidean')
>>> X = [[0, 1, 2],
[3, 4, 5]]
>>> dist.pairwise(X)
array([[ 0. , 5.19615242],
[ 5.19615242, 0. ]])
Available Metrics The following lists the string metric identifiers and the associated distance metric classes: Metrics intended for real-valued vector spaces:
identifier class name args distance function
“euclidean” EuclideanDistance
sqrt(sum((x - y)^2))
“manhattan” ManhattanDistance
sum(|x - y|)
“chebyshev” ChebyshevDistance
max(|x - y|)
“minkowski” MinkowskiDistance p sum(|x - y|^p)^(1/p)
“wminkowski” WMinkowskiDistance p, w sum(|w * (x - y)|^p)^(1/p)
“seuclidean” SEuclideanDistance V sqrt(sum((x - y)^2 / V))
“mahalanobis” MahalanobisDistance V or VI sqrt((x - y)' V^-1 (x - y)) Metrics intended for two-dimensional vector spaces: Note that the haversine distance metric requires data in the form of [latitude, longitude] and both inputs and outputs are in units of radians.
identifier class name distance function
“haversine” HaversineDistance 2 arcsin(sqrt(sin^2(0.5*dx) + cos(x1)cos(x2)sin^2(0.5*dy))) Metrics intended for integer-valued vector spaces: Though intended for integer-valued vectors, these are also valid metrics in the case of real-valued vectors.
identifier class name distance function
“hamming” HammingDistance N_unequal(x, y) / N_tot
“canberra” CanberraDistance sum(|x - y| / (|x| + |y|))
“braycurtis” BrayCurtisDistance sum(|x - y|) / (sum(|x|) + sum(|y|)) Metrics intended for boolean-valued vector spaces: Any nonzero entry is evaluated to “True”. In the listings below, the following abbreviations are used: N : number of dimensions NTT : number of dims in which both values are True NTF : number of dims in which the first value is True, second is False NFT : number of dims in which the first value is False, second is True NFF : number of dims in which both values are False NNEQ : number of non-equal dimensions, NNEQ = NTF + NFT NNZ : number of nonzero dimensions, NNZ = NTF + NFT + NTT
identifier class name distance function
“jaccard” JaccardDistance NNEQ / NNZ
“matching” MatchingDistance NNEQ / N
“dice” DiceDistance NNEQ / (NTT + NNZ)
“kulsinski” KulsinskiDistance (NNEQ + N - NTT) / (NNEQ + N)
“rogerstanimoto” RogersTanimotoDistance 2 * NNEQ / (N + NNEQ)
“russellrao” RussellRaoDistance NNZ / N
“sokalmichener” SokalMichenerDistance 2 * NNEQ / (N + NNEQ)
“sokalsneath” SokalSneathDistance NNEQ / (NNEQ + 0.5 * NTT) User-defined distance:
identifier class name args
“pyfunc” PyFuncDistance func Here func is a function which takes two one-dimensional numpy arrays, and returns a distance. Note that in order to be used within the BallTree, the distance must be a true metric: i.e. it must satisfy the following properties Non-negativity: d(x, y) >= 0 Identity: d(x, y) = 0 if and only if x == y Symmetry: d(x, y) = d(y, x) Triangle Inequality: d(x, y) + d(y, z) >= d(x, z) Because of the Python object overhead involved in calling the python function, this will be fairly slow, but it will have the same scaling as other distances. Methods
dist_to_rdist Convert the true distance to the reduced distance.
get_metric Get the given distance metric from the string identifier.
pairwise Compute the pairwise distances between X and Y.
rdist_to_dist Convert the Reduced distance to the true distance.
dist_to_rdist()
Convert the true distance to the reduced distance. The reduced distance, defined for some metrics, is a computationally more efficient measure which preserves the rank of the true distance. For example, in the Euclidean distance metric, the reduced distance is the squared-euclidean distance.
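The idea can be checked with plain numpy: for the Euclidean metric the reduced distance is the squared distance, so it preserves ranking, and taking the square root recovers the true distance. This mirrors dist_to_rdist / rdist_to_dist without using scikit-learn:

```python
import numpy as np

# For the Euclidean metric, the reduced distance is the squared
# distance: cheaper to compute, same ranking, and sqrt() recovers the
# true distance. Small self-contained check of that claim.

x = np.array([0.0, 0.0])
points = np.array([[3.0, 4.0], [1.0, 1.0], [0.0, 2.0]])

dist = np.sqrt(((points - x) ** 2).sum(axis=1))   # true distances
rdist = ((points - x) ** 2).sum(axis=1)           # reduced distances

print(list(np.argsort(dist)) == list(np.argsort(rdist)))  # True
print(np.allclose(np.sqrt(rdist), dist))                  # True
```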
get_metric()
Get the given distance metric from the string identifier. See the docstring of DistanceMetric for a list of available metrics. Parameters
metricstring or class name
The distance metric to use. **kwargs
Additional arguments will be passed to the requested metric.
pairwise()
Compute the pairwise distances between X and Y. This is a convenience routine for the sake of testing. For many metrics, the utilities in scipy.spatial.distance.cdist and scipy.spatial.distance.pdist will be faster. Parameters
Xarray-like
Array of shape (Nx, D), representing Nx points in D dimensions.
Yarray-like (optional)
Array of shape (Ny, D), representing Ny points in D dimensions. If not specified, then Y=X. Returns
distndarray
The shape (Nx, Ny) array of pairwise distances between points in X and Y.
rdist_to_dist()
Convert the Reduced distance to the true distance. The reduced distance, defined for some metrics, is a computationally more efficient measure which preserves the rank of the true distance. For example, in the Euclidean distance metric, the reduced distance is the squared-euclidean distance. | |
doc_28898 |
Rename categories. Parameters
new_categories:list-like, dict-like or callable
New categories which will replace old categories. list-like: all items must be unique and the number of items in the new categories must match the existing number of categories. dict-like: specifies a mapping from old categories to new. Categories not contained in the mapping are passed through and extra categories in the mapping are ignored. callable : a callable that is called on all items in the old categories and whose return values comprise the new categories.
inplace:bool, default False
Whether or not to rename the categories inplace or return a copy of this categorical with renamed categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with renamed categories or None if inplace=True. Raises
ValueError
If new categories are list-like and do not have the same number of items as the current categories, or do not validate as categories. See also reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
For dict-like new_categories, extra keys are ignored and categories not in the dictionary are passed through
>>> c.rename_categories({'a': 'A', 'c': 'C'})
['A', 'A', 'b']
Categories (2, object): ['A', 'b']
You may also provide a callable to create the new categories
>>> c.rename_categories(lambda x: x.upper())
['A', 'A', 'B']
Categories (2, object): ['A', 'B'] | |
doc_28899 |
Packs a Tensor containing padded sequences of variable length. input can be of size T x B x * where T is the length of the longest sequence (equal to lengths[0]), B is the batch size, and * is any number of dimensions (including 0). If batch_first is True, B x T x * input is expected. For unsorted sequences, use enforce_sorted = False. If enforce_sorted is True, the sequences should be sorted by length in a decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one. enforce_sorted = True is only necessary for ONNX export. Note This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Tensor can be retrieved from a PackedSequence object by accessing its .data attribute. Parameters
input (Tensor) – padded batch of variable length sequences.
lengths (Tensor or list(int)) – list of sequence lengths of each batch element (must be on the CPU if provided as a tensor).
batch_first (bool, optional) – if True, the input is expected in B x T x * format.
enforce_sorted (bool, optional) – if True, the input is expected to contain sequences sorted by length in a decreasing order. If False, the input will get sorted unconditionally. Default: True. Returns
a PackedSequence object |
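The packed layout described above can be sketched in plain Python (this models the data layout only, not the PyTorch implementation): for sequences sorted by decreasing length, the packed data holds, for each timestep t, the t-th element of every sequence still active, and batch_sizes[t] records how many that is.

```python
# Plain-Python model of the PackedSequence layout. Batch-major input
# here for simplicity; 0 is used as the pad value in the example.

def pack_padded(padded, lengths):
    """padded: list of B sequences padded to the same length;
    lengths: per-sequence lengths, sorted in decreasing order."""
    T = lengths[0]
    data, batch_sizes = [], []
    for t in range(T):
        active = sum(1 for L in lengths if L > t)  # sequences still running
        data.extend(padded[b][t] for b in range(active))
        batch_sizes.append(active)
    return data, batch_sizes

padded = [[1, 2, 3], [4, 5, 0], [6, 0, 0]]   # lengths 3, 2, 1 (0 = pad)
data, batch_sizes = pack_padded(padded, [3, 2, 1])
print(data)         # [1, 4, 6, 2, 5, 3]
print(batch_sizes)  # [3, 2, 1]
```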