| _id | text | title |
|---|---|---|
doc_3400 | The latest representable time, time(23, 59, 59, 999999). | |
doc_3401 | The default value is True. If Python is compiled --without-decimal-contextvar, the C version uses a thread-local rather than a coroutine-local context and the value is False. This is slightly faster in some nested context scenarios. | |
doc_3402 | See Migration guide for more details. tf.compat.v1.raw_ops.MergeV2Checkpoints
tf.raw_ops.MergeV2Checkpoints(
checkpoint_prefixes, destination_prefix, delete_old_dirs=True, name=None
)
The result is one logical checkpoint, with one physical metadata file and renamed data files. Intended for "grouping" multiple checkpoints in a sharded checkpoint setup. If delete_old_dirs is true, attempts to recursively delete the dirname of each path in the input checkpoint_prefixes. This is useful when those paths are non-user-facing temporary locations.
Args
checkpoint_prefixes A Tensor of type string. Prefixes of V2 checkpoints to merge.
destination_prefix A Tensor of type string. Scalar. The desired final prefix. Allowed to be the same as one of the checkpoint_prefixes.
delete_old_dirs An optional bool. Defaults to True. See above.
name A name for the operation (optional).
Returns The created Operation. | |
doc_3403 | Interrupt from keyboard (CTRL + BREAK). Availability: Windows. | |
doc_3404 |
Autoscale the scalar limits on the norm instance using the current array | |
doc_3405 | See Migration guide for more details. tf.compat.v1.random.truncated_normal, tf.compat.v1.truncated_normal
tf.random.truncated_normal(
shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, seed=None, name=None
)
The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
Args
shape A 1-D integer Tensor or Python array. The shape of the output tensor.
mean A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
stddev A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation.
dtype The type of the output.
seed A Python integer. Used to create a random seed for the distribution. See tf.random.set_seed for behavior.
name A name for the operation (optional).
Returns A tensor of the specified shape filled with random truncated normal values. | |
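The drop-and-re-pick rule described above can be sketched in plain NumPy (a conceptual illustration with a made-up helper name, not TensorFlow's implementation):

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, rng=None):
    """Draw normal samples, re-drawing any value whose magnitude is more
    than 2 standard deviations from the mean (hypothetical helper)."""
    rng = np.random.default_rng() if rng is None else rng
    out = rng.normal(mean, stddev, size=shape)
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():                      # re-pick only the rejected values
        out[bad] = rng.normal(mean, stddev, size=bad.sum())
        bad = np.abs(out - mean) > 2 * stddev
    return out
```

Every returned value is guaranteed to lie within two standard deviations of the mean, which is the property the TensorFlow op documents.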
doc_3406 |
Set and validate the parameters of the estimator. Parameters
**kwargsdict
Estimator parameters. Returns
selfobject
Estimator instance. | |
doc_3407 |
Return filter function to be used for agg filter. | |
doc_3408 | See Migration guide for more details. tf.compat.v1.raw_ops.While
tf.raw_ops.While(
input, cond, body, output_shapes=[], parallel_iterations=10, name=None
)
Args
input A list of Tensor objects. A list of input tensors whose types are T.
cond A function decorated with @Defun. A function that takes 'input' and returns a tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, non-emptiness means True and emptiness means False.
body A function decorated with @Defun. A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T.
output_shapes An optional list of shapes (each a tf.TensorShape or list of ints). Defaults to [].
parallel_iterations An optional int. Defaults to 10.
name A name for the operation (optional).
Returns A list of Tensor objects. Has the same type as input. | |
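The cond/body contract described above can be illustrated with a pure-Python analogue (a sketch of the loop semantics only — tf.raw_ops.While itself runs cond and body as graph functions):

```python
def while_op(inputs, cond, body):
    """Repeatedly apply body to the input list while cond is truthy,
    mirroring the cond/body contract (pure-Python sketch, not TensorFlow)."""
    while cond(*inputs):
        inputs = body(*inputs)
    return inputs

# Count up to 5, carrying (i, total) as the loop-variable list.
result = while_op([0, 0],
                  cond=lambda i, total: i < 5,
                  body=lambda i, total: [i + 1, total + i])
```

Here `while_op` is a hypothetical helper name; the real op additionally converts non-boolean scalar cond outputs to booleans as described above.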
doc_3409 | Returns a linear interpolation to the given Color. lerp(Color, float) -> Color Returns a Color which is a linear interpolation between self and the given Color in RGBA space. The second parameter determines how far between self and other the result is; it must be a value between 0 and 1, where 0 means self and 1 means other will be returned. New in pygame 2.0.1. | |
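The interpolation rule pygame describes can be sketched without pygame at all (a hypothetical helper operating on plain RGBA tuples, not pygame's implementation):

```python
def lerp_rgba(self_color, other, t):
    """Linear interpolation in RGBA space: t=0 returns self_color,
    t=1 returns other (sketch of the documented rule, not pygame's code)."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must be between 0 and 1")
    # Interpolate each of the R, G, B, A channels independently.
    return tuple(a + (b - a) * t for a, b in zip(self_color, other))
```

pygame's Color additionally rounds back to integer channel values; the sketch keeps floats to stay explicit.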
doc_3410 |
Build a CF Tree for the input data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Input data.
yIgnored
Not used, present here for API consistency by convention. Returns
self
Fitted estimator. | |
doc_3411 | SSL 3.0 to TLS 1.3. | |
doc_3412 | Token value for "=". | |
doc_3413 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
bbox_to_anchor unknown
child unknown
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float | |
doc_3414 |
Calculate the ewm (exponential weighted moment) mean. Parameters
*args
For NumPy compatibility and will not have an effect on the result.
engine:str, default None
'cython' : Runs the operation through C-extensions from cython. 'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or to the global setting compute.use_numba. New in version 1.3.0.
engine_kwargs:dict, default None
For 'cython' engine, there are no accepted engine_kwargs
For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} New in version 1.3.0. **kwargs
For NumPy compatibility and will not have an effect on the result. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also pandas.Series.ewm
Calling ewm with Series data. pandas.DataFrame.ewm
Calling ewm with DataFrames. pandas.Series.mean
Aggregating mean for Series. pandas.DataFrame.mean
Aggregating mean for DataFrame. Notes See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine. | |
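The adjusted formula behind ewm(...).mean() can be sketched in NumPy, using weights (1-α)^(t-i) (a conceptual sketch of the adjust=True semantics, not pandas' implementation):

```python
import numpy as np

def ewm_mean(values, alpha):
    """Exponentially weighted mean with adjust=True semantics:
    y_t = sum_i (1-alpha)^(t-i) * x_i / sum_i (1-alpha)^(t-i)."""
    x = np.asarray(values, dtype=float)
    out = np.empty_like(x)
    for t in range(len(x)):
        w = (1 - alpha) ** np.arange(t, -1, -1)   # oldest -> newest weights
        out[t] = (w * x[:t + 1]).sum() / w.sum()
    return out
```

For [1, 2, 3] with alpha=0.5 this yields 1, 5/3, 17/7, matching the weighted-average definition above.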
doc_3415 | msvcrt.LK_NBRLCK
Locks the specified bytes. If the bytes cannot be locked, OSError is raised. | |
doc_3416 | See Migration guide for more details. tf.compat.v1.where_v2
tf.where(
condition, x=None, y=None, name=None
)
This operator has two modes: in one mode both x and y are provided, in another mode neither are provided. condition is always expected to be a tf.Tensor of type bool. Retrieving indices of True elements If x and y are not provided (both are None): tf.where will return the indices of condition that are True, in the form of a 2-D tensor with shape (n, d). (Where n is the number of matching indices in condition, and d is the number of dimensions in condition). Indices are output in row-major order.
tf.where([True, False, False, True])
<tf.Tensor: shape=(2, 1), dtype=int64, numpy=
array([[0],
[3]])>
tf.where([[True, False], [False, True]])
<tf.Tensor: shape=(2, 2), dtype=int64, numpy=
array([[0, 0],
[1, 1]])>
tf.where([[[True, False], [False, True], [True, True]]])
<tf.Tensor: shape=(4, 3), dtype=int64, numpy=
array([[0, 0, 0],
[0, 1, 1],
[0, 2, 0],
[0, 2, 1]])>
Multiplexing between x and y
If x and y are provided (both have non-None values): tf.where will choose an output shape from the shapes of condition, x, and y that all three shapes are broadcastable to. The condition tensor acts as a mask that chooses whether the corresponding element / row in the output should be taken from x (if the element in condition is True) or y (if it is false).
tf.where([True, False, False, True], [1,2,3,4], [100,200,300,400])
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([ 1, 200, 300, 4],
dtype=int32)>
tf.where([True, False, False, True], [1,2,3,4], [100])
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([ 1, 100, 100, 4],
dtype=int32)>
tf.where([True, False, False, True], [1,2,3,4], 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([ 1, 100, 100, 4],
dtype=int32)>
tf.where([True, False, False, True], 1, 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([ 1, 100, 100, 1],
dtype=int32)>
tf.where(True, [1,2,3,4], 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 2, 3, 4],
dtype=int32)>
tf.where(False, [1,2,3,4], 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([100, 100, 100, 100],
dtype=int32)>
Note that if the gradient of either branch of the tf.where generates a NaN, then the gradient of the entire tf.where will be NaN. A workaround is to use an inner tf.where to ensure the function has no asymptote, and to avoid computing a value whose gradient is NaN by replacing dangerous inputs with safe inputs. Instead of this,
y = tf.constant(-1, dtype=tf.float32)
tf.where(y > 0, tf.sqrt(y), y)
<tf.Tensor: shape=(), dtype=float32, numpy=-1.0>
Use this
tf.where(y > 0, tf.sqrt(tf.where(y > 0, y, 1)), y)
<tf.Tensor: shape=(), dtype=float32, numpy=-1.0>
Args
condition A tf.Tensor of type bool
x If provided, a Tensor which is of the same type as y, and has a shape broadcastable with condition and y.
y If provided, a Tensor which is of the same type as x, and has a shape broadcastable with condition and x.
name A name of the operation (optional).
Returns If x and y are provided: A Tensor with the same type as x and y, and shape that is broadcast from condition, x, and y. Otherwise, a Tensor with shape (num_true, dim_size(condition)).
Raises
ValueError When exactly one of x or y is non-None, or the shapes are not all broadcastable. | |
doc_3417 | Query the tree for neighbors within a radius r. Parameters
Xarray-like of shape (n_samples, n_features)
An array of points to query
rdistance within which neighbors are returned
r can be a single value, or an array of values of shape x.shape[:-1] if different radii are desired for each point.
return_distancebool, default=False
If True, return distances to neighbors of each point; if False, return only neighbors. Note that unlike the query() method, setting return_distance=True here adds to the computation time: not all distances need to be calculated explicitly for return_distance=False. Results are not sorted by default: see the sort_results keyword.
count_onlybool, default=False
If True, return only the count of points within distance r; if False, return the indices of all points within distance r. If return_distance==True, setting count_only=True will result in an error.
sort_resultsbool, default=False
If True, the distances and indices will be sorted before being returned; if False, the results will not be sorted. If return_distance == False, setting sort_results = True will result in an error. Returns
countif count_only == True
indif count_only == False and return_distance == False
(ind, dist)if count_only == False and return_distance == True
countndarray of shape X.shape[:-1], dtype=int
Each entry gives the number of neighbors within a distance r of the corresponding point.
indndarray of shape X.shape[:-1], dtype=object
Each element is a numpy integer array listing the indices of neighbors of the corresponding point. Note that unlike the results of a k-neighbors query, the returned neighbors are not sorted by distance by default.
distndarray of shape X.shape[:-1], dtype=object
Each element is a numpy double array listing the distances corresponding to the indices in ind. | |
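The semantics above (per-point radii, optional counts, optional sorting) can be sketched with a brute-force NumPy version (illustration only — BallTree/KDTree answer these queries with a tree, not a full distance scan):

```python
import numpy as np

def query_radius(data, X, r, count_only=False, sort_results=False,
                 return_distance=False):
    """Brute-force sketch of a radius query over `data` of shape (n, d)."""
    if count_only and return_distance:
        raise ValueError("count_only and return_distance are incompatible")
    if sort_results and not return_distance:
        raise ValueError("sort_results requires return_distance=True")
    X = np.atleast_2d(X)
    d = np.linalg.norm(X[:, None, :] - data[None, :, :], axis=-1)
    r = np.broadcast_to(np.asarray(r, dtype=float), (len(X),))  # per-point radii
    if count_only:
        return (d <= r[:, None]).sum(axis=1)
    ind = [np.flatnonzero(row <= ri) for row, ri in zip(d, r)]
    if not return_distance:
        return ind
    dist = [row[i] for row, i in zip(d, ind)]
    if sort_results:  # sort each point's neighbors by distance
        order = [np.argsort(di) for di in dist]
        ind = [i[o] for i, o in zip(ind, order)]
        dist = [di[o] for di, o in zip(dist, order)]
    return ind, dist
```

As in the real API, results come back unsorted unless sort_results=True, and count_only short-circuits the index bookkeeping.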
doc_3418 |
Set the norm limits for image scaling. Parameters
vmin, vmaxfloat
The limits. The limits may also be passed as a tuple (vmin, vmax) as a single positional argument. | |
doc_3419 | See Migration guide for more details. tf.compat.v1.keras.layers.Attention
tf.keras.layers.Attention(
use_scale=False, **kwargs
)
Inputs are query tensor of shape [batch_size, Tq, dim], value tensor of shape [batch_size, Tv, dim] and key tensor of shape [batch_size, Tv, dim]. The calculation follows the steps: Calculate scores with shape [batch_size, Tq, Tv] as a query-key dot product: scores = tf.matmul(query, key, transpose_b=True). Use scores to calculate a distribution with shape [batch_size, Tq, Tv]: distribution = tf.nn.softmax(scores). Use distribution to create a linear combination of value with shape [batch_size, Tq, dim]: return tf.matmul(distribution, value).
Args
use_scale If True, will create a scalar variable to scale the attention scores.
causal Boolean. Set to True for decoder self-attention. Adds a mask such that position i cannot attend to positions j > i. This prevents the flow of information from the future towards the past.
dropout Float between 0 and 1. Fraction of the units to drop for the attention scores. Call Arguments:
inputs: List of the following tensors: query: Query Tensor of shape [batch_size, Tq, dim]. value: Value Tensor of shape [batch_size, Tv, dim]. key: Optional key Tensor of shape [batch_size, Tv, dim]. If not given, will use value for both key and value, which is the most common case.
mask: List of the following tensors: query_mask: A boolean mask Tensor of shape [batch_size, Tq]. If given, the output will be zero at the positions where mask==False. value_mask: A boolean mask Tensor of shape [batch_size, Tv]. If given, will apply the mask such that values at positions where mask==False do not contribute to the result.
return_attention_scores: bool, if True, returns the attention scores (after masking and softmax) as an additional output argument.
training: Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Output: Attention outputs of shape [batch_size, Tq, dim]. [Optional] Attention scores after masking and softmax with shape [batch_size, Tq, Tv]. The meaning of query, value and key depend on the application. In the case of text similarity, for example, query is the sequence embeddings of the first piece of text and value is the sequence embeddings of the second piece of text. key is usually the same tensor as value. Here is a code example for using Attention in a CNN+Attention network: # Variable-length int sequences.
query_input = tf.keras.Input(shape=(None,), dtype='int32')
value_input = tf.keras.Input(shape=(None,), dtype='int32')
# Embedding lookup.
token_embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=64)
# Query embeddings of shape [batch_size, Tq, dimension].
query_embeddings = token_embedding(query_input)
# Value embeddings of shape [batch_size, Tv, dimension].
value_embeddings = token_embedding(value_input)
# CNN layer.
cnn_layer = tf.keras.layers.Conv1D(
filters=100,
kernel_size=4,
# Use 'same' padding so outputs have the same shape as inputs.
padding='same')
# Query encoding of shape [batch_size, Tq, filters].
query_seq_encoding = cnn_layer(query_embeddings)
# Value encoding of shape [batch_size, Tv, filters].
value_seq_encoding = cnn_layer(value_embeddings)
# Query-value attention of shape [batch_size, Tq, filters].
query_value_attention_seq = tf.keras.layers.Attention()(
[query_seq_encoding, value_seq_encoding])
# Reduce over the sequence axis to produce encodings of shape
# [batch_size, filters].
query_encoding = tf.keras.layers.GlobalAveragePooling1D()(
query_seq_encoding)
query_value_attention = tf.keras.layers.GlobalAveragePooling1D()(
query_value_attention_seq)
# Concatenate query and document encodings to produce a DNN input layer.
input_layer = tf.keras.layers.Concatenate()(
[query_encoding, query_value_attention])
# Add DNN layers, and create Model.
# ... | |
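The three calculation steps listed above (dot-product scores, softmax distribution, weighted sum of values) can also be checked in isolation with NumPy (a sketch of the math only, not the Keras layer and without masking or scaling):

```python
import numpy as np

def dot_product_attention(query, key, value):
    """scores = q @ k^T; distribution = softmax(scores); out = distribution @ v."""
    scores = query @ key.transpose(0, 2, 1)          # [batch, Tq, Tv]
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    dist = np.exp(scores)
    dist /= dist.sum(axis=-1, keepdims=True)         # softmax over Tv
    return dist @ value                              # [batch, Tq, dim]

# With a zero query, all scores are equal, the softmax is uniform, and the
# output reduces to the mean of the values -- an easy sanity check.
rng = np.random.default_rng(0)
v = rng.normal(size=(1, 3, 4))
out = dot_product_attention(np.zeros((1, 2, 4)), rng.normal(size=(1, 3, 4)), v)
```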
doc_3420 | ZIP flag bits. | |
doc_3421 |
Return a copy of the array. Parameters
order{‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and numpy.copy are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also numpy.copy
Similar function with different default behavior numpy.copyto
Notes This function is the preferred method for creating an array copy. The function numpy.copy is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. Examples >>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
[0, 0, 0]])
>>> y
array([[1, 2, 3],
[4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True | |
doc_3422 |
Add a key to a quiver plot. The positioning of the key depends on X, Y, coordinates, and labelpos. If labelpos is 'N' or 'S', X, Y give the position of the middle of the key arrow. If labelpos is 'E', X, Y positions the head, and if labelpos is 'W', X, Y positions the tail; in either of these two cases, X, Y is somewhere in the middle of the arrow+label key object. Parameters
Qmatplotlib.quiver.Quiver
A Quiver object as returned by a call to quiver().
X, Yfloat
The location of the key.
Ufloat
The length of the key.
labelstr
The key label (e.g., length and units of the key).
anglefloat, default: 0
The angle of the key arrow, in degrees anti-clockwise from the x-axis.
coordinates{'axes', 'figure', 'data', 'inches'}, default: 'axes'
Coordinate system and units for X, Y: 'axes' and 'figure' are normalized coordinate systems with (0, 0) in the lower left and (1, 1) in the upper right; 'data' are the axes data coordinates (used for the locations of the vectors in the quiver plot itself); 'inches' is position in the figure in inches, with (0, 0) at the lower left corner.
colorcolor
Overrides face and edge colors from Q.
labelpos{'N', 'S', 'E', 'W'}
Position the label above, below, to the right, to the left of the arrow, respectively.
labelsepfloat, default: 0.1
Distance in inches between the arrow and the label.
labelcolorcolor, default: rcParams["text.color"] (default: 'black')
Label color.
fontpropertiesdict, optional
A dictionary with keyword arguments accepted by the FontProperties initializer: family, style, variant, size, weight. **kwargs
Any additional keyword arguments are used to override vector properties taken from Q.
| |
doc_3423 |
Set a label that will be displayed in the legend. Parameters
sobject
s will be converted to a string by calling str. | |
doc_3424 |
Set the artist's visibility. Parameters
bbool | |
doc_3425 |
Linear Model trained with L1 prior as regularizer (aka the Lasso). The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Technically the Lasso model is optimizing the same objective function as the Elastic Net with l1_ratio=1.0 (no L2 penalty). Read more in the User Guide. Parameters
alphafloat, default=1.0
Constant that multiplies the L1 term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=False
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. For sparse input this option is always True to preserve sparsity.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
positivebool, default=False
When set to True, forces the coefficients to be positive.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
coef_ndarray of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the cost function formula).
dual_gap_float or ndarray of shape (n_targets,)
Given param alpha, the dual gaps at the end of the optimization, same shape as each observation of y.
sparse_coef_sparse matrix of shape (n_features, 1) or (n_targets, n_features)
Sparse representation of the fitted coef_.
intercept_float or ndarray of shape (n_targets,)
Independent term in decision function.
n_iter_int or list of int
Number of iterations run by the coordinate descent solver to reach the specified tolerance. See also
lars_path
lasso_path
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes The algorithm used to fit the model is coordinate descent. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Examples >>> from sklearn import linear_model
>>> clf = linear_model.Lasso(alpha=0.1)
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
Lasso(alpha=0.1)
>>> print(clf.coef_)
[0.85 0. ]
>>> print(clf.intercept_)
0.15...
Methods
fit(X, y[, sample_weight, check_input]) Fit model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None, check_input=True) [source]
Fit model with coordinate descent. Parameters
X{ndarray, sparse matrix} of (n_samples, n_features)
Data.
y{ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets)
Target. Will be cast to X’s dtype if necessary.
sample_weightfloat or array-like of shape (n_samples,), default=None
Sample weight. New in version 0.23.
check_inputbool, default=True
Allows bypassing several input checks. Don't use this parameter unless you know what you are doing. Notes Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^Fro_2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred)
** 2).sum() and \(v\) is the total sum of squares ((y_true -
y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
property sparse_coef_
Sparse representation of the fitted coef_. | |
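The coordinate-descent algorithm mentioned in the Notes can be sketched in NumPy: each pass soft-thresholds one coefficient at a time against the documented objective (a teaching sketch, not sklearn's Cython solver; no convergence check, fixed pass count):

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=100):
    """Minimize (1/(2n))*||y - Xw - b||^2 + alpha*||w||_1 by coordinate descent."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    x_mean, y_mean = X.mean(axis=0), y.mean()        # handle the intercept
    Xc, yc = X - x_mean, y - y_mean
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(len(w)):
            r = yc - Xc @ w + Xc[:, j] * w[j]        # residual without feature j
            rho = Xc[:, j] @ r / n
            z = (Xc[:, j] ** 2).sum() / n
            if z == 0.0:                             # constant column: coefficient stays 0
                w[j] = 0.0
                continue
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z  # soft threshold
    b = y_mean - x_mean @ w
    return w, b
```

On the documented example (X=[[0,0],[1,1],[2,2]], y=[0,1,2], alpha=0.1) this recovers coef [0.85, 0] and intercept 0.15, matching the sklearn Lasso output shown above.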
doc_3426 |
Container for the BitGenerators. Generator exposes a number of methods for generating random numbers drawn from a variety of probability distributions. In addition to the distribution-specific arguments, each method takes a keyword argument size that defaults to None. If size is None, then a single value is generated and returned. If size is an integer, then a 1-D array filled with generated values is returned. If size is a tuple, then an array with that shape is filled and returned. The function numpy.random.default_rng will instantiate a Generator with numpy’s default BitGenerator. No Compatibility Guarantee Generator does not provide a version compatibility guarantee. In particular, as better algorithms evolve the bit stream may change. Parameters
bit_generatorBitGenerator
BitGenerator to use as the core generator. See also default_rng
Recommended constructor for Generator. Notes The Python stdlib module random contains pseudo-random number generator with a number of methods that are similar to the ones available in Generator. It uses Mersenne Twister, and this bit generator can be accessed using MT19937. Generator, besides being NumPy-aware, has the advantage that it provides a much larger number of probability distributions to choose from. Examples >>> from numpy.random import Generator, PCG64
>>> rng = Generator(PCG64())
>>> rng.standard_normal()
-0.203 # random | |
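The size semantics described above (None gives a scalar, an int gives a 1-D array, a tuple gives an array of that shape) can be verified directly (assumes NumPy >= 1.17 for default_rng):

```python
import numpy as np

rng = np.random.default_rng(42)
scalar = rng.standard_normal()           # size=None  -> a single value
vec = rng.standard_normal(5)             # int size   -> 1-D array of length 5
mat = rng.standard_normal((2, 3))        # tuple size -> array of that shape
```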
doc_3427 |
Bases: matplotlib.backend_tools.ToolBase Toggleable tool. Every time it is triggered, it switches between enable and disable. Parameters
``*args``
Variable length arguments to be used by the Tool. ``**kwargs``
Arbitrary keyword arguments to be consumed by the Tool; toggled, if present and True, sets the initial state of the Tool. cursor=None
Cursor to use when the tool is active.
default_toggled=False
Default of toggled state.
disable(event=None)[source]
Disable the toggle tool. trigger calls this method when toggled is True. This can happen in different circumstances: a click on the toolbar tool button; a call to matplotlib.backend_managers.ToolManager.trigger_tool; another ToolToggleBase derived tool being triggered (from the same ToolManager).
enable(event=None)[source]
Enable the toggle tool. trigger calls this method when toggled is False.
radio_group=None
Attribute to group 'radio' like tools (mutually exclusive). str that identifies the group or None if not belonging to a group.
set_figure(figure)[source]
property toggled
State of the toggled tool.
trigger(sender, event, data=None)[source]
Calls enable or disable based on toggled value. | |
doc_3428 |
Apply the non-affine part of this transform to Path path, returning a new Path. transform_path(path) is equivalent to transform_path_affine(transform_path_non_affine(path)). | |
doc_3429 | This method is called after close has been called to reset the parser so that it is ready to parse new documents. The results of calling parse or feed after close without calling reset are undefined. | |
doc_3430 | os.WSTOPPED
os.WNOWAIT
Flags that can be used in options in waitid() that specify what child signal to wait for. Availability: Unix. New in version 3.3. | |
doc_3431 | A dictionary of context data that will be added to the default context data passed to the template. | |
doc_3432 | The sitemaps.FlatPageSitemap class looks at all publicly visible flatpages defined for the current SITE_ID (see the sites documentation) and creates an entry in the sitemap. These entries include only the location attribute – not lastmod, changefreq or priority. | |
doc_3433 |
Return the canvas width and height in display coords. | |
doc_3434 | Return a tuple (y, x) of co-ordinates of upper-left corner. | |
doc_3435 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
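As a quick illustration of the <component>__<parameter> convention described above, here is a minimal sketch using a scikit-learn Pipeline (the step names "scaler" and "clf" are chosen only for this example):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Pipeline steps are named; nested parameters are addressed as "<step>__<param>".
pipe = Pipeline([("scaler", StandardScaler()), ("clf", LogisticRegression())])

# Update parameters of the nested steps via set_params.
pipe.set_params(clf__C=10.0, scaler__with_mean=False)

assert pipe.get_params()["clf__C"] == 10.0
assert pipe.named_steps["scaler"].with_mean is False
```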
doc_3436 | The email package calls this method with a list of strings, each string ending with the line separation characters found in the source being parsed. The first line includes the field header name and separator. All whitespace in the source is preserved. The method should return the (name, value) tuple that is to be stored in the Message to represent the parsed header. If an implementation wishes to retain compatibility with the existing email package policies, name should be the case preserved name (all characters up to the ‘:’ separator), while value should be the unfolded value (all line separator characters removed, but whitespace kept intact), stripped of leading whitespace. sourcelines may contain surrogateescaped binary data. There is no default implementation. | |
doc_3437 |
Set the font weight. May be either a numeric value in the range 0-1000 or one of 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'. | |
doc_3438 | Return the current value of the flags that are used for dlopen() calls. Symbolic names for the flag values can be found in the os module (RTLD_xxx constants, e.g. os.RTLD_LAZY). Availability: Unix. | |
doc_3439 |
Compute pairwise correlation. Pairwise correlation is computed between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations. Parameters
other:DataFrame, Series
Object with which to compute correlations.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
The axis to use. 0 or ‘index’ to compute column-wise, 1 or ‘columns’ for row-wise.
drop:bool, default False
Drop missing indices from result.
method:{‘pearson’, ‘kendall’, ‘spearman’} or callable
Method of correlation: ‘pearson’ (standard correlation coefficient), ‘kendall’ (Kendall Tau correlation coefficient), ‘spearman’ (Spearman rank correlation), or a callable taking two 1-D ndarrays and returning a float. Returns
Series
Pairwise correlations. See also DataFrame.corr
Compute pairwise correlation of columns. | |
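A minimal sketch of the pairwise-correlation behaviour described above, assuming this entry documents pandas' DataFrame.corrwith (the data values are made up for illustration):

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0], "b": [4.0, 3.0, 2.0, 1.0]})
df2 = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0], "b": [1.0, 2.0, 3.0, 4.0]})

# Column-wise (axis=0) Pearson correlation between matching column labels.
result = df1.corrwith(df2)

assert abs(result["a"] - 1.0) < 1e-9   # identical columns correlate at +1
assert abs(result["b"] + 1.0) < 1e-9   # perfectly reversed column gives -1
```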
doc_3440 | See Migration guide for more details. tf.compat.v1.estimator.ProfilerHook, tf.compat.v1.train.ProfilerHook
tf.estimator.ProfilerHook(
save_steps=None, save_secs=None, output_dir='', show_dataflow=True,
show_memory=False
)
This produces files called "timeline-.json", which are in Chrome Trace format. For more information see: https://github.com/catapult-project/catapult/blob/master/tracing/README.md
Args
save_steps int, save profile traces every N steps. Exactly one of save_secs and save_steps should be set.
save_secs int or float, save profile traces every N seconds.
output_dir string, the directory to save the profile traces to. Defaults to the current directory.
show_dataflow bool, if True, add flow events to the trace connecting producers and consumers of tensors.
show_memory bool, if True, add object snapshot events to the trace showing the sizes and lifetimes of tensors. Methods after_create_session View source
after_create_session(
session, coord
)
Called when new TensorFlow session is created. This is called to signal the hooks that a new session has been created. This has two essential differences with the situation in which begin is called: When this is called, the graph is finalized and ops can no longer be added to the graph. This method will also be called as a result of recovering a wrapped session, not only at the beginning of the overall session.
Args
session A TensorFlow Session that has been created.
coord A Coordinator object which keeps track of all threads. after_run View source
after_run(
run_context, run_values
)
Called after each call to run(). The run_values argument contains results of requested ops/tensors by before_run(). The run_context argument is the same one sent to the before_run call. run_context.request_stop() can be called to stop the iteration. If session.run() raises any exceptions then after_run() is not called.
Args
run_context A SessionRunContext object.
run_values A SessionRunValues object. before_run View source
before_run(
run_context
)
Called before each call to run(). You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the original run() call. The run args you return can also contain feeds to be added to the run() call. The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors, the TensorFlow Session. At this point graph is finalized and you can not add ops.
Args
run_context A SessionRunContext object.
Returns None or a SessionRunArgs object.
begin View source
begin()
Called once before using the session. When called, the default graph is the one that will be launched in the session. The hook can modify the graph by adding new operations to it. After the begin() call the graph will be finalized and the other callbacks can not modify the graph anymore. Second call of begin() on the same graph, should not change the graph. end View source
end(
session
)
Called at the end of session. The session argument can be used in case the hook wants to run final ops, such as saving a last checkpoint. If session.run() raises exception other than OutOfRangeError or StopIteration then end() is not called. Note the difference between end() and after_run() behavior when session.run() raises OutOfRangeError or StopIteration. In that case end() is called but after_run() is not called.
Args
session A TensorFlow Session that will be soon closed. | |
doc_3441 | tf.losses.categorical_hinge Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.categorical_hinge
tf.keras.losses.categorical_hinge(
y_true, y_pred
)
loss = maximum(neg - pos + 1, 0) where neg=maximum((1-y_true)*y_pred) and pos=sum(y_true*y_pred) Standalone usage:
import numpy as np
import tensorflow as tf
y_true = np.random.randint(0, 3, size=(2,))
y_true = tf.keras.utils.to_categorical(y_true, num_classes=3)
y_pred = np.random.random(size=(2, 3))
loss = tf.keras.losses.categorical_hinge(y_true, y_pred)
assert loss.shape == (2,)
pos = np.sum(y_true * y_pred, axis=-1)
neg = np.amax((1. - y_true) * y_pred, axis=-1)
assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.))
Args
y_true The ground truth values. y_true values are expected to be 0 or 1.
y_pred The predicted values.
Returns Categorical hinge loss values. | |
doc_3442 | Return a message object structure from a bytes-like object. This is equivalent to BytesParser().parsebytes(s). Optional _class and policy are interpreted as with the BytesParser class constructor. New in version 3.2. Changed in version 3.3: Removed the strict argument. Added the policy keyword. | |
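A short sketch of the call described above, using a tiny hand-written message (the header values are invented for illustration):

```python
from email import message_from_bytes

# Equivalent to BytesParser().parsebytes(raw).
raw = b"Subject: Hello\nFrom: alice@example.com\n\nBody text\n"
msg = message_from_bytes(raw)

assert msg["Subject"] == "Hello"
assert msg["From"] == "alice@example.com"
assert msg.get_payload() == "Body text\n"
```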
doc_3443 | See Migration guide for more details. tf.compat.v1.raw_ops.WholeFileReader
tf.raw_ops.WholeFileReader(
container='', shared_name='', name=None
)
To use, enqueue filenames in a Queue. The output of ReaderRead will be a filename (key) and the contents of that file (value).
Args
container An optional string. Defaults to "". If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
shared_name An optional string. Defaults to "". If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
name A name for the operation (optional).
Returns A Tensor of type mutable string. | |
doc_3444 |
Fast lookup of value from 1-dimensional ndarray. Only use this if you know what you’re doing. Returns
scalar or Series | |
doc_3445 |
Set the path effects. Parameters
path_effectsAbstractPathEffect | |
doc_3446 | Removes the tab specified by tab_id, unmaps and unmanages the associated window. | |
doc_3447 |
Return an array converted to a float type. Parameters
aarray_like
The input array.
dtypestr or dtype object, optional
Float type code to coerce input array a. If dtype is one of the ‘int’ dtypes, it is replaced with float64. Returns
outndarray
The input a as a float ndarray. Examples >>> np.asfarray([2, 3])
array([2., 3.])
>>> np.asfarray([2, 3], dtype='float')
array([2., 3.])
>>> np.asfarray([2, 3], dtype='int8')
array([2., 3.]) | |
doc_3448 |
Return the cumulative product of the elements along the given axis. Refer to numpy.cumprod for full documentation. See also numpy.cumprod
equivalent function | |
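A brief sketch of the running-product behaviour, on small arrays chosen for this example:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
# Running product along the flattened array: [1, 1*2, 1*2*3, 1*2*3*4].
assert np.array_equal(a.cumprod(), np.array([1, 2, 6, 24]))

m = np.array([[1, 2], [3, 4]])
# axis=0 multiplies down the columns.
assert np.array_equal(m.cumprod(axis=0), np.array([[1, 2], [3, 8]]))
```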
doc_3449 | An ordered mapping of parameters’ names to the corresponding Parameter objects. Parameters appear in strict definition order, including keyword-only parameters. Changed in version 3.7: Python only explicitly guaranteed that it preserved the declaration order of keyword-only parameters as of version 3.7, although in practice this order had always been preserved in Python 3. | |
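A minimal sketch of the ordered mapping described above, using a throwaway function defined for this example:

```python
import inspect

def greet(name, *, punctuation="!", loud=False):
    return name

sig = inspect.signature(greet)

# Parameters appear in strict definition order, keyword-only ones included.
assert list(sig.parameters) == ["name", "punctuation", "loud"]
assert sig.parameters["punctuation"].kind is inspect.Parameter.KEYWORD_ONLY
assert sig.parameters["punctuation"].default == "!"
```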
doc_3450 |
Return the font size. | |
doc_3451 | See Migration guide for more details. tf.compat.v1.raw_ops.Conv3DBackpropInput
tf.raw_ops.Conv3DBackpropInput(
input, filter, out_backprop, strides, padding, dilations=[1, 1, 1, 1, 1],
name=None
)
Args
input A Tensor. Must be one of the following types: half, float32, float64. Shape [batch, depth, rows, cols, in_channels].
filter A Tensor. Must have the same type as input. Shape [depth, rows, cols, in_channels, out_channels]. in_channels must match between input and filter.
out_backprop A Tensor. Must have the same type as input. Backprop signal of shape [batch, out_depth, out_rows, out_cols, out_channels].
strides A list of ints that has length >= 5. 1-D tensor of length 5. The stride of the sliding window for each dimension of input. Must have strides[0] = strides[4] = 1.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
dilations An optional list of ints. Defaults to [1, 1, 1, 1, 1].
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
doc_3452 |
Returns the rank of the current process in the given process group. Rank is a unique identifier assigned to each process within a distributed process group. Ranks are always consecutive integers ranging from 0 to world_size - 1. Parameters
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Returns
The rank of the current process in the process group; -1 if not part of the group. | |
doc_3453 | The dict of keyword arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment. | |
doc_3454 | Specify that the file descriptor fd be used for typeahead checking. If fd is -1, then no typeahead checking is done. The curses library does “line-breakout optimization” by looking for typeahead periodically while updating the screen. If input is found, and it is coming from a tty, the current update is postponed until refresh or doupdate is called again, allowing faster response to commands typed in advance. This function allows specifying a different file descriptor for typeahead checking. | |
doc_3455 | See Migration guide for more details. tf.compat.v1.raw_ops.TensorSummaryV2
tf.raw_ops.TensorSummaryV2(
tag, tensor, serialized_summary_metadata, name=None
)
Args
tag A Tensor of type string. A string attached to this summary. Used for organization in TensorBoard.
tensor A Tensor. A tensor to serialize.
serialized_summary_metadata A Tensor of type string. A serialized SummaryMetadata proto. Contains plugin data.
name A name for the operation (optional).
Returns A Tensor of type string. | |
doc_3456 | Provides simple access to WWW-Authenticate headers.
property algorithm
A string indicating a pair of algorithms used to produce the digest and a checksum. If this is not present it is assumed to be “MD5”. If the algorithm is not understood, the challenge should be ignored (and a different one used, if there is more than one).
static auth_property(name, doc=None)
A static helper function for Authentication subclasses to add extra authentication system properties onto a class: class FooAuthenticate(WWWAuthenticate):
special_realm = auth_property('special_realm')
For more information have a look at the sourcecode to see how the regular properties (realm etc.) are implemented.
property domain
A list of URIs that define the protection space. If a URI is an absolute path, it is relative to the canonical root URL of the server being accessed.
property nonce
A server-specified data string which should be uniquely generated each time a 401 response is made. It is recommended that this string be base64 or hexadecimal data.
property opaque
A string of data, specified by the server, which should be returned by the client unchanged in the Authorization header of subsequent requests with URIs in the same protection space. It is recommended that this string be base64 or hexadecimal data.
property qop
A set of quality-of-privacy directives such as auth and auth-int.
property realm
A string to be displayed to users so they know which username and password to use. This string should contain at least the name of the host performing the authentication and might additionally indicate the collection of users who might have access.
set_basic(realm='authentication required')
Clear the auth info and enable basic auth.
set_digest(realm, nonce, qop=('auth',), opaque=None, algorithm=None, stale=False)
Clear the auth info and enable digest auth.
property stale
A flag, indicating that the previous request from the client was rejected because the nonce value was stale.
to_header()
Convert the stored values into a WWW-Authenticate header.
property type
The type of the auth mechanism. HTTP currently specifies Basic and Digest. | |
doc_3457 |
Set the sketch parameters. Parameters
scalefloat, optional
The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided.
lengthfloat, optional
The length of the wiggle along the line, in pixels (default 128.0)
randomnessfloat, optional
The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. Using the same seed yields the same random shape. | |
doc_3458 | Calls importlib.abc.PathEntryFinder.invalidate_caches() on all finders stored in sys.path_importer_cache that define the method. Otherwise entries in sys.path_importer_cache set to None are deleted. Changed in version 3.7: Entries of None in sys.path_importer_cache are deleted. | |
doc_3459 | Convert samples in the audio fragment to a-LAW encoding and return this as a bytes object. a-LAW is an audio encoding format whereby you get a dynamic range of about 13 bits using only 8 bit samples. It is used by the Sun audio hardware, among others. | |
doc_3460 | tf.compat.v1.nn.relu_layer(
x, weights, biases, name=None
)
Args
x a 2D tensor. Dimensions typically: batch, in_units
weights a 2D tensor. Dimensions typically: in_units, out_units
biases a 1D tensor. Dimensions: out_units
name A name for the operation (optional). If not specified "nn_relu_layer" is used.
Returns A 2-D Tensor computing relu(matmul(x, weights) + biases). Dimensions typically: batch, out_units. | |
doc_3461 | sklearn.model_selection.learning_curve(estimator, X, y, *, groups=None, train_sizes=array([0.1, 0.325, 0.55, 0.775, 1.0]), cv=None, scoring=None, exploit_incremental_learning=False, n_jobs=None, pre_dispatch='all', verbose=0, shuffle=False, random_state=None, error_score=nan, return_times=False, fit_params=None) [source]
Learning curve. Determines cross-validated training and test scores for different training set sizes. A cross-validation generator splits the whole dataset k times in training and test data. Subsets of the training set with varying sizes will be used to train the estimator and a score for each training subset size and the test set will be computed. Afterwards, the scores will be averaged over all k runs for each training subset size. Read more in the User Guide. Parameters
estimatorobject type that implements the “fit” and “predict” methods
An object of that type which is cloned for each validation.
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
Target relative to X for classification or regression; None for unsupervised learning.
groupsarray-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold).
train_sizesarray-like of shape (n_ticks,), default=np.linspace(0.1, 1.0, 5)
Relative or absolute numbers of training examples that will be used to generate the learning curve. If the dtype is float, it is regarded as a fraction of the maximum size of the training set (that is determined by the selected validation method), i.e. it has to be within (0, 1]. Otherwise it is interpreted as absolute sizes of the training sets. Note that for classification the number of samples usually has to be big enough to contain at least one sample from each class.
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation; an int, to specify the number of folds in a (Stratified)KFold; a CV splitter; or an iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
scoringstr or callable, default=None
A str (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).
exploit_incremental_learningbool, default=False
If the estimator supports incremental learning, this will be used to speed up fitting for different training set sizes.
n_jobsint, default=None
Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
pre_dispatchint or str, default=’all’
Number of predispatched jobs for parallel execution (default is all). The option can reduce the allocated memory. The str can be an expression like ‘2*n_jobs’.
verboseint, default=0
Controls the verbosity: the higher, the more messages.
shufflebool, default=False
Whether to shuffle training data before taking prefixes of it based on train_sizes.
random_stateint, RandomState instance or None, default=None
Used when shuffle is True. Pass an int for reproducible output across multiple function calls. See Glossary.
error_score‘raise’ or numeric, default=np.nan
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. New in version 0.20.
return_timesbool, default=False
Whether to return the fit and score times.
fit_paramsdict, default=None
Parameters to pass to the fit method of the estimator. New in version 0.24. Returns
train_sizes_absarray of shape (n_unique_ticks,)
Numbers of training examples that have been used to generate the learning curve. Note that the number of ticks might be less than n_ticks because duplicate entries will be removed.
train_scoresarray of shape (n_ticks, n_cv_folds)
Scores on training sets.
test_scoresarray of shape (n_ticks, n_cv_folds)
Scores on test set.
fit_timesarray of shape (n_ticks, n_cv_folds)
Times spent for fitting in seconds. Only present if return_times is True.
score_timesarray of shape (n_ticks, n_cv_folds)
Times spent for scoring in seconds. Only present if return_times is True. Notes See examples/model_selection/plot_learning_curve.py
Examples using sklearn.model_selection.learning_curve
Comparison of kernel ridge regression and SVR
Plotting Learning Curves | |
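A minimal usage sketch, with a synthetic dataset and an estimator chosen only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=200, random_state=0)

# Three training-set sizes, 5-fold CV (the default when cv=None).
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=[0.3, 0.6, 1.0], cv=5)

assert train_sizes.shape == (3,)
assert train_scores.shape == (3, 5)   # (n_ticks, n_cv_folds)
assert test_scores.shape == (3, 5)
```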
doc_3462 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseSparseMinimum
tf.raw_ops.SparseSparseMinimum(
a_indices, a_values, a_shape, b_indices, b_values, b_shape, name=None
)
Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
Args
a_indices A Tensor of type int64. 2-D. N x R matrix with the indices of non-empty values in a SparseTensor, in the canonical lexicographic ordering.
a_values A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. N non-empty values corresponding to a_indices.
a_shape A Tensor of type int64. 1-D. Shape of the input SparseTensor.
b_indices A Tensor of type int64. counterpart to a_indices for the other operand.
b_values A Tensor. Must have the same type as a_values. counterpart to a_values for the other operand; must be of the same dtype.
b_shape A Tensor of type int64. counterpart to a_shape for the other operand; the two shapes must be equal.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output_indices, output_values). output_indices A Tensor of type int64.
output_values A Tensor. Has the same type as a_values. | |
doc_3463 | Return the value of the Boolean capability corresponding to the terminfo capability name capname as an integer. Return the value -1 if capname is not a Boolean capability, or 0 if it is canceled or absent from the terminal description. | |
doc_3464 | Prevent client side from requesting a session ticket. New in version 3.6. | |
doc_3465 | See Migration guide for more details. tf.compat.v1.feature_column.bucketized_column
tf.feature_column.bucketized_column(
source_column, boundaries
)
Buckets include the left boundary, and exclude the right boundary. Namely, boundaries=[0., 1., 2.] generates buckets (-inf, 0.), [0., 1.), [1., 2.), and [2., +inf). For example, if the inputs are boundaries = [0, 10, 100]
input tensor = [[-5, 10000],
                [150, 10],
                [5, 100]]
then the output will be output = [[0, 3],
                                  [3, 2],
                                  [1, 3]]
Example: price = tf.feature_column.numeric_column('price')
bucketized_price = tf.feature_column.bucketized_column(
price, boundaries=[...])
columns = [bucketized_price, ...]
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)
A bucketized_column can also be crossed with another categorical column using crossed_column: price = tf.feature_column.numeric_column('price')
# bucketized_column converts numerical feature to a categorical one.
bucketized_price = tf.feature_column.bucketized_column(
price, boundaries=[...])
# 'keywords' is a string feature.
price_x_keywords = tf.feature_column.crossed_column(
[bucketized_price, 'keywords'], 50000)
columns = [price_x_keywords, ...]
features = tf.io.parse_example(
..., features=tf.feature_column.make_parse_example_spec(columns))
dense_tensor = tf.keras.layers.DenseFeatures(columns)(features)
linear_model = tf.keras.experimental.LinearModel(units=...)(dense_tensor)
Args
source_column A one-dimensional dense column which is generated with numeric_column.
boundaries A sorted list or tuple of floats specifying the boundaries.
Returns A BucketizedColumn.
Raises
ValueError If source_column is not a numeric column, or if it is not one-dimensional.
ValueError If boundaries is not a sorted list or tuple. | |
doc_3466 | Read window related data stored in the file by an earlier putwin() call. The routine then creates and initializes a new window using that data, returning the new window object. | |
doc_3467 |
The number of output dimensions of this transform. Must be overridden (with integers) in the subclass. | |
doc_3468 |
Set the artist transform. Parameters
tTransform | |
doc_3469 |
Build a layout of Axes based on ASCII art or nested lists. This is a helper function to build complex GridSpec layouts visually. Note This API is provisional and may be revised in the future based on early user feedback. Parameters
mosaiclist of list of {hashable or nested} or str
A visual layout of how you want your Axes to be arranged labeled as strings. For example x = [['A panel', 'A panel', 'edge'],
['C panel', '.', 'edge']]
produces 4 axes: 'A panel', which is 1 row high and spans the first two columns; 'edge', which is 2 rows high and is on the right edge; 'C panel', which is 1 row high and 1 column wide, in the bottom left; and a blank space 1 row and 1 column wide in the bottom center. Any of the entries in the layout can be a list of lists of the same form to create nested layouts. If input is a str, then it must be of the form '''
AAE
C.E
'''
where each character is a column and each line is a row. This allows only single-character Axes labels and does not allow nesting, but is very terse.
sharex, shareybool, default: False
If True, the x-axis (sharex) or y-axis (sharey) will be shared among all subplots. In that case, tick label visibility and axis units behave as for subplots. If False, each subplot's x- or y-axis will be independent.
subplot_kwdict, optional
Dictionary with keywords passed to the Figure.add_subplot call used to create each subplot.
gridspec_kwdict, optional
Dictionary with keywords passed to the GridSpec constructor used to create the grid the subplots are placed on.
empty_sentinelobject, optional
Entry in the layout to mean "leave this space empty". Defaults to '.'. Note, if layout is a string, it is processed via inspect.cleandoc to remove leading white space, which may interfere with using white-space as the empty sentinel. **fig_kw
All additional keyword arguments are passed to the pyplot.figure call. Returns
figFigure
The new figure. dict[label, Axes]
A dictionary mapping the labels to the Axes objects. The order of the axes is left-to-right and top-to-bottom of their position in the total layout.
Examples using matplotlib.pyplot.subplot_mosaic
Image Demo
Labelling subplots
Basic Usage
Legend guide
Arranging multiple Axes in a Figure | |
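A minimal sketch of the string form described above (run on the headless Agg backend; the labels are arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Each character is a column, each line a row; '.' marks an empty slot,
# and a repeated letter spans cells. Leading whitespace is cleaned up
# via inspect.cleandoc.
fig, axd = plt.subplot_mosaic(
    """
    AAE
    C.E
    """)

assert set(axd) == {"A", "C", "E"}
for label, ax in axd.items():
    ax.set_title(label)
plt.close(fig)
```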
doc_3470 | Write the results of the current profile to filename. | |
doc_3471 |
Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. | |
doc_3472 |
Transform X using the inverse function. Parameters
Xarray-like, shape (n_samples, n_features)
Input array. Returns
X_outarray-like, shape (n_samples, n_features)
Transformed input. | |
doc_3473 |
Remove key from the ModuleDict and return its module. Parameters
key (string) – key to pop from the ModuleDict | |
doc_3474 | New in Django 3.2. Optional. A boolean attribute. When True the alternate links generated by alternates will contain a hreflang="x-default" fallback entry with a value of LANGUAGE_CODE. The default is False. | |
doc_3475 | os.O_WRONLY
os.O_RDWR
os.O_APPEND
os.O_CREAT
os.O_EXCL
os.O_TRUNC
The above constants are available on Unix and Windows. | |
doc_3476 |
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. | |
doc_3477 | URL name: password_change_done The page shown after a user has changed their password. Attributes:
template_name
The full name of a template to use. Defaults to registration/password_change_done.html if not supplied.
extra_context
A dictionary of context data that will be added to the default context data passed to the template. | |
doc_3478 | Adds the specified handler hdlr to this logger. | |
doc_3479 | Validates whether the password meets a minimum length. The minimum length can be customized with the min_length parameter. | |
doc_3480 | Retrieve whole message number which, and set its seen flag. Result is in form (response, ['line', ...], octets). | |
doc_3481 |
Compute data covariance with the generative model. cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances. Returns
covarray, shape=(n_features, n_features)
Estimated covariance of data. | |
doc_3482 |
Return the Transform instance used by this artist. | |
doc_3483 |
Bases: TypeError | |
doc_3484 | A list of callables that take a path argument to try to create a finder for the path. If a finder can be created, it is to be returned by the callable, else raise ImportError. Originally specified in PEP 302. | |
doc_3485 |
Return an ndarray of indices that sort the array along the specified axis. Masked values are filled beforehand to fill_value. Parameters
axisint, optional
Axis along which to sort. If None, the default, the flattened array is used. Changed in version 1.13.0: Previously, the default was documented to be -1, but that was in error. At some future date, the default will change to -1, as originally intended. Until then, the axis should be given explicitly when arr.ndim > 1, to avoid a FutureWarning.
kind{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional
The sorting algorithm used.
orderlist, optional
When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be specified.
endwith{True, False}, optional
Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False). When the array contains unmasked values at the same extremes of the datatype, the ordering of these values and the masked values is undefined.
fill_valuescalar or None, optional
Value used internally for the masked values. If fill_value is not None, it supersedes endwith. Returns
index_arrayndarray, int
Array of indices that sort a along the specified axis. In other words, a[index_array] yields a sorted a. See also ma.MaskedArray.sort
Describes sorting algorithms used. lexsort
Indirect stable sort with multiple keys. numpy.ndarray.sort
Inplace sort. Notes See sort for notes on the different sorting algorithms. Examples >>> a = np.ma.array([3,2,1], mask=[False, False, True])
>>> a
masked_array(data=[3, 2, --],
mask=[False, False, True],
fill_value=999999)
>>> a.argsort()
array([1, 0, 2]) | |
doc_3486 |
Return the corresponding inverse transformation. It holds x == self.inverted().transform(self.transform(x)). The return value of this method should be treated as temporary. An update to self does not cause a corresponding update to its inverted copy. | |
doc_3487 |
Generate a swiss roll dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the Swiss Roll.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 3)
The points.
tndarray of shape (n_samples,)
The univariate position of the sample according to the main dimension of the points in the manifold. Notes The algorithm is from Marsland [1]. References
1
S. Marsland, “Machine Learning: An Algorithmic Perspective”, Chapter 10, 2009. http://seat.massey.ac.nz/personal/s.r.marsland/Code/10/lle.py | |
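The construction behind the generator can be sketched with plain numpy. This is a hedged illustration of the standard swiss-roll parameterization, not sklearn's implementation; the function name and the width constant 21 are assumptions chosen to mirror make_swiss_roll's interface:

```python
import numpy as np

def swiss_roll_sketch(n_samples=100, noise=0.0, random_state=None):
    # t parameterizes position along the roll; the spiral lives in the
    # (x, z) plane while y spans the roll's width.
    rng = np.random.default_rng(random_state)
    t = 1.5 * np.pi * (1 + 2 * rng.random(n_samples))
    y = 21 * rng.random(n_samples)
    X = np.column_stack([t * np.cos(t), y, t * np.sin(t)])
    X += noise * rng.standard_normal(X.shape)
    return X, t

X, t = swiss_roll_sketch(n_samples=200, random_state=0)
print(X.shape, t.shape)  # (200, 3) (200,)
```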
doc_3488 |
Set the text string s. It may contain newlines (\n) or math in LaTeX syntax. Parameters
sobject
Any object gets converted to its str representation, except for None which is converted to an empty string. | |
doc_3489 | Sent with a preflight request to indicate which method will be used for the cross origin request. Set access_control_allow_methods on the response to indicate which methods are allowed. | |
doc_3490 |
Extended-precision floating-point number type, compatible with C long double but not necessarily with IEEE 754 quadruple-precision. Character code
'g' Alias
numpy.longfloat Alias on this platform (Linux x86_64)
numpy.float128: 128-bit extended-precision floating-point number type. | |
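A quick check of the character code and precision, assuming numpy is available (the exact bit width and precision reported depend on the platform's long double):

```python
import numpy as np

# 'g' is the dtype character code for numpy.longdouble.
print(np.dtype(np.longdouble).char)  # 'g'

# On Linux x86_64 this is typically the 80-bit x87 format padded to
# 16 bytes: more precision than float64, but not true IEEE quad.
info = np.finfo(np.longdouble)
print(info.bits, info.precision)
```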
doc_3491 | Return the number of occurrences of b in a. | |
doc_3492 |
Copy properties from other to self. | |
doc_3493 | Alternative error attach function to the errorhandler() decorator that is more straightforward to use for non decorator usage. Changelog New in version 0.7. Parameters
code_or_exception (Union[Type[Exception], int]) –
f (Callable[[Exception], Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None], Tuple[Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None]], Union[Headers, Dict[str, Union[str, List[str], Tuple[str, ...]]], List[Tuple[str, Union[str, List[str], Tuple[str, ...]]]]]], Tuple[Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None]], int], Tuple[Union[Response, AnyStr, Dict[str, Any], Generator[AnyStr, None, None]], int, Union[Headers, Dict[str, Union[str, List[str], Tuple[str, ...]]], List[Tuple[str, Union[str, List[str], Tuple[str, ...]]]]]], WSGIApplication]]) – Return type
None | |
doc_3494 |
Return True if ‘CCompiler.compile()’ is able to compile a source file with certain flags.
doc_3495 |
Evaluate a 3-D Legendre series on the Cartesian product of x, y, and z. This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c)\] where the points (a, b, c) consist of all triples formed by taking a from x, b from y, and c from z. The resulting points form a grid with x in the first dimension, y in the second, and z in the third. The parameters x, y, and z are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either x, y, and z or their elements must support multiplication and addition both with themselves and with the elements of c. If c has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters
x, y, zarray_like, compatible objects
The three dimensional series is evaluated at the points in the Cartesian product of x, y, and z. If x, y, or z is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar.
carray_like
Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in c[i,j,k]. If c has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns
valuesndarray, compatible object
The values of the three dimensional polynomial at points in the Cartesian product of x, y, and z. See also
legval, legval2d, leggrid2d, leggrid3d
Notes New in version 1.7.0. | |
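The coefficient ordering can be checked with a tiny example: with c[0,0,0] = 1 and c[1,0,0] = 2, the series is p(x, y, z) = 1 + 2·L_1(x) = 1 + 2x, since L_0 = 1 and L_1(x) = x:

```python
import numpy as np
from numpy.polynomial.legendre import legval3d

# c[i, j, k] multiplies L_i(x) * L_j(y) * L_k(z).
c = np.zeros((2, 1, 1))
c[0, 0, 0] = 1.0
c[1, 0, 0] = 2.0

# p(x, y, z) = 1 + 2x, independent of y and z.
print(legval3d(0.5, 0.3, 0.2, c))  # 2.0
```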
doc_3496 |
Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it. Note Dataset is assumed to be of constant size. Parameters
dataset – Dataset used for sampling.
num_replicas (int, optional) – Number of processes participating in distributed training. By default, world_size is retrieved from the current distributed group.
rank (int, optional) – Rank of the current process within num_replicas. By default, rank is retrieved from the current distributed group.
shuffle (bool, optional) – If True (default), sampler will shuffle the indices.
seed (int, optional) – random seed used to shuffle the sampler if shuffle=True. This number should be identical across all processes in the distributed group. Default: 0.
drop_last (bool, optional) – if True, then the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False. Warning In distributed mode, calling the set_epoch() method at the beginning of each epoch before creating the DataLoader iterator is necessary to make shuffling work properly across multiple epochs. Otherwise, the same ordering will be always used. Example: >>> sampler = DistributedSampler(dataset) if is_distributed else None
>>> loader = DataLoader(dataset, shuffle=(sampler is None),
... sampler=sampler)
>>> for epoch in range(start_epoch, n_epochs):
... if is_distributed:
... sampler.set_epoch(epoch)
... train(loader) | |
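The partitioning idea can be sketched in plain Python. This is a hedged illustration of how a distributed sampler splits indices across ranks, not torch's actual code; the real DistributedSampler also handles seeded shuffling per epoch:

```python
def partition_indices(dataset_len, num_replicas, rank, drop_last=False):
    """Sketch: split [0, dataset_len) into mutually exclusive per-rank subsets."""
    indices = list(range(dataset_len))
    if drop_last:
        # Drop the tail so the total is evenly divisible across replicas.
        usable = (dataset_len // num_replicas) * num_replicas
        indices = indices[:usable]
    else:
        # Pad by repeating leading indices until evenly divisible.
        pad = (-len(indices)) % num_replicas
        indices += indices[:pad]
    # Each rank takes a strided slice, exclusive of the other ranks.
    return indices[rank::num_replicas]

print(partition_indices(10, 4, 0))                  # [0, 4, 8]
print(partition_indices(10, 4, 2))                  # [2, 6, 0]
print(partition_indices(10, 4, 1, drop_last=True))  # [1, 5]
```

Note how, without drop_last, ranks 2 and 3 receive repeated indices (0 and 1) as padding, while drop_last=True discards the tail instead.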
doc_3497 |
Set whether the Axes rectangle patch is drawn. Parameters
bbool | |
doc_3498 |
Set the alpha value used for blending - not supported on all backends. Parameters
alphaarray-like or scalar or None
All values must be within the 0-1 range, inclusive. Masked values and nans are not supported. | |
doc_3499 | Return a Signature object for the given callable: >>> from inspect import signature
>>> def foo(a, *, b:int, **kwargs):
... pass
>>> sig = signature(foo)
>>> str(sig)
'(a, *, b:int, **kwargs)'
>>> str(sig.parameters['b'])
'b:int'
>>> sig.parameters['b'].annotation
<class 'int'>
Accepts a wide range of Python callables, from plain functions and classes to functools.partial() objects. Raises ValueError if no signature can be provided, and TypeError if that type of object is not supported. A slash(/) in the signature of a function denotes that the parameters prior to it are positional-only. For more info, see the FAQ entry on positional-only parameters. New in version 3.5: follow_wrapped parameter. Pass False to get a signature of callable specifically (callable.__wrapped__ will not be used to unwrap decorated callables.) Note Some callables may not be introspectable in certain implementations of Python. For example, in CPython, some built-in functions defined in C provide no metadata about their arguments. |
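Beyond inspection, a Signature can map call arguments to parameters without invoking the callable, via bind() and apply_defaults() (both standard inspect API):

```python
from inspect import signature

def greet(name, greeting="hello"):
    return f"{greeting}, {name}"

sig = signature(greet)

# bind() matches positional/keyword arguments to parameters;
# apply_defaults() fills in any unsupplied defaults.
bound = sig.bind("world")
bound.apply_defaults()
print(bound.arguments)                   # {'name': 'world', 'greeting': 'hello'}
print(greet(*bound.args, **bound.kwargs))  # hello, world
```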