doc_30500 |
Draw the text instance.

Parameters
gc : GraphicsContextBase
The graphics context.
x : float
The x location of the text in display coords.
y : float
The y location of the text baseline in display coords.
s : str
The text string.
prop : matplotlib.font_manager.FontProperties
The font properties.
angle : float
The rotation angle in degrees anti-clockwise.
mtext : matplotlib.text.Text
The original text object to be rendered.

Notes
Note for backend implementers: when you are trying to determine if you have gotten your bounding box right (which is what enables the text layout/alignment to work properly), it helps to change the line in text.py:
if 0: bbox_artist(self, renderer)
to if 1, and then the actual bounding box will be plotted along with your text. | |
doc_30501 | See Migration guide for more details. tf.compat.v1.raw_ops.ResourceScatterSub
tf.raw_ops.ResourceScatterSub(
resource, indices, updates, name=None
)
This operation computes:

# Scalar indices
ref[indices, ...] -= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].
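The scatter-subtract semantics above, including the accumulation of duplicate indices, can be sketched with plain NumPy (an illustrative stand-in, not the TensorFlow op itself):

```python
import numpy as np

# NumPy stand-in for the ResourceScatterSub semantics: subtract the rows
# of `updates` from the rows of `ref` selected by `indices`.
ref = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0]])
indices = np.array([0, 2, 0])          # index 0 appears twice
updates = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])

# np.subtract.at applies the operation unbuffered, so duplicate indices
# accumulate, matching the op's "contributions add" behavior.
np.subtract.at(ref, indices, updates)
print(ref)
# row 0: 10 - 1 - 3 = 6; row 2: 30 - 2 = 28
```

Note that a buffered `ref[indices] -= updates` would NOT accumulate duplicates, which is why the unbuffered form is needed here.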
Args
resource A Tensor of type resource. Should be from a Variable node.
indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref.
updates A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. A tensor of updated values to add to ref.
name A name for the operation (optional).
Returns The created Operation. | |
doc_30502 | Convert a tuple or struct_time representing a time as returned by gmtime() or localtime() to a string as specified by the format argument. If t is not provided, the current time as returned by localtime() is used. format must be a string. ValueError is raised if any field in t is outside of the allowed range. 0 is a legal argument for any position in the time tuple; if it is normally illegal the value is forced to a correct one. The following directives can be embedded in the format string. They are shown without the optional field width and precision specification, and are replaced by the indicated characters in the strftime() result:
Directive Meaning Notes
%a Locale’s abbreviated weekday name.
%A Locale’s full weekday name.
%b Locale’s abbreviated month name.
%B Locale’s full month name.
%c Locale’s appropriate date and time representation.
%d Day of the month as a decimal number [01,31].
%H Hour (24-hour clock) as a decimal number [00,23].
%I Hour (12-hour clock) as a decimal number [01,12].
%j Day of the year as a decimal number [001,366].
%m Month as a decimal number [01,12].
%M Minute as a decimal number [00,59].
%p Locale’s equivalent of either AM or PM. (1)
%S Second as a decimal number [00,61]. (2)
%U Week number of the year (Sunday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Sunday are considered to be in week 0. (3)
%w Weekday as a decimal number [0(Sunday),6].
%W Week number of the year (Monday as the first day of the week) as a decimal number [00,53]. All days in a new year preceding the first Monday are considered to be in week 0. (3)
%x Locale’s appropriate date representation.
%X Locale’s appropriate time representation.
%y Year without century as a decimal number [00,99].
%Y Year with century as a decimal number.
%z Time zone offset indicating a positive or negative time difference from UTC/GMT of the form +HHMM or -HHMM, where H represents decimal hour digits and M represents decimal minute digits [-23:59, +23:59].
%Z Time zone name (no characters if no time zone exists).
%% A literal '%' character.
Notes:
(1) When used with the strptime() function, the %p directive only affects the output hour field if the %I directive is used to parse the hour.
(2) The range really is 0 to 61; value 60 is valid in timestamps representing leap seconds and value 61 is supported for historical reasons.
(3) When used with the strptime() function, %U and %W are only used in calculations when the day of the week and the year are specified.
Here is an example, a format for dates compatible with that specified in the RFC 2822 Internet email standard:
>>> from time import gmtime, strftime
>>> strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())
'Thu, 28 Jun 2001 14:17:15 +0000'
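For a reproducible variant of the example above, an explicit struct_time can be passed instead of the current time (gmtime(0) is the Unix epoch, so the output is fixed):

```python
from time import gmtime, strftime

# gmtime(0) is 1970-01-01 00:00:00 UTC, so these results are deterministic
# (the %Y/%m/%d/%H/%M/%S and %j directives are locale-independent).
epoch = gmtime(0)
print(strftime("%Y-%m-%d %H:%M:%S", epoch))   # 1970-01-01 00:00:00
print(strftime("%j", epoch))                  # 001  (day of year, field width 3)
```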
Additional directives may be supported on certain platforms, but only the ones listed here have a meaning standardized by ANSI C. To see the full set of format codes supported on your platform, consult the strftime(3) documentation. On some platforms, an optional field width and precision specification can immediately follow the initial '%' of a directive in the following order; this is also not portable. The field width is normally 2 except for %j where it is 3. | |
doc_30503 | tf.compat.v1.keras.initializers.lecun_uniform(
seed=None
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) stddev = sqrt(scale / n) where n is: number of input units in the weight tensor, if mode = "fan_in" number of output units, if mode = "fan_out" average of the numbers of input and output units, if mode = "fan_avg" With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
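The uniform limit described above can be computed directly. This is a sketch of the formula only, assuming the lecun_uniform configuration (scale=1.0, mode="fan_in", distribution="uniform"); the fan_in value is a hypothetical layer size:

```python
import math

# limit = sqrt(3 * scale / n), where n is the fan for the chosen mode.
def uniform_limit(scale, n):
    return math.sqrt(3.0 * scale / n)

fan_in = 128                       # hypothetical number of input units
limit = uniform_limit(1.0, fan_in) # samples are drawn from [-limit, limit]
print(limit)
```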
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "normal", "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode", or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | |
doc_30504 | Returns whether the user_obj has any permissions on the app app_label. | |
doc_30505 | tf.experimental.numpy.dstack(
tup
)
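tf.experimental.numpy.dstack mirrors numpy.dstack; a quick sketch with NumPy itself shows the stacking along the third axis:

```python
import numpy as np

# Two 1-D arrays are promoted to shape (1, 3, 1) and stacked along axis 2.
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
stacked = np.dstack((a, b))
print(stacked.shape)   # (1, 3, 2)
```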
See the NumPy documentation for numpy.dstack. | |
doc_30506 | tf.initializers.Zeros, tf.initializers.zeros, tf.keras.initializers.zeros Also available via the shortcut function tf.keras.initializers.zeros. Examples:
# Standalone usage:
initializer = tf.keras.initializers.Zeros()
values = initializer(shape=(2, 2))
# Usage in a Keras layer:
initializer = tf.keras.initializers.Zeros()
layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)
Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, **kwargs
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. Only numeric or boolean dtypes are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)).
**kwargs Additional keyword arguments. | |
doc_30507 | Encodes the object input and returns a tuple (output object, length consumed). For instance, text encoding converts a string object to a bytes object using a particular character set encoding (e.g., cp1252 or iso-8859-1). The errors argument defines the error handling to apply. It defaults to 'strict' handling. The method may not store state in the Codec instance. Use StreamWriter for codecs which have to keep state in order to make encoding efficient. The encoder must be able to handle zero length input and return an empty object of the output object type in this situation. | |
doc_30508 | Look up the codec for the given encoding and return its StreamReader class or factory function. Raises a LookupError in case the encoding cannot be found. | |
doc_30509 |
YPbPr to RGB color space conversion.

Parameters
ypbpr : (…, 3) array_like
The image in YPbPr format. Final dimension denotes channels.

Returns
out : (…, 3) ndarray
The image in RGB format. Same dimensions as input.

Raises
ValueError
If ypbpr is not at least 2-D with shape (…, 3).

References
1. https://en.wikipedia.org/wiki/YPbPr | |
doc_30510 | tf.compat.v1.keras.layers.CuDNNGRU(
units, kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', kernel_regularizer=None,
recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, recurrent_constraint=None, bias_constraint=None,
return_sequences=False, return_state=False, go_backwards=False, stateful=False,
**kwargs
)
More information about cuDNN can be found on the NVIDIA developer website. Can only be run on GPU.
Arguments
units Positive integer, dimensionality of the output space.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean. Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
Attributes
cell
states
Methods reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the values for the initial state, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size wise or dtype wise. | |
doc_30511 | Verify that cert (in decoded format as returned by SSLSocket.getpeercert()) matches the given hostname. The rules applied are those for checking the identity of HTTPS servers as outlined in RFC 2818, RFC 5280 and RFC 6125. In addition to HTTPS, this function should be suitable for checking the identity of servers in various SSL-based protocols such as FTPS, IMAPS, POPS and others. CertificateError is raised on failure. On success, the function returns nothing: >>> cert = {'subject': ((('commonName', 'example.com'),),)}
>>> ssl.match_hostname(cert, "example.com")
>>> ssl.match_hostname(cert, "example.org")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/py3k/Lib/ssl.py", line 130, in match_hostname
ssl.CertificateError: hostname 'example.org' doesn't match 'example.com'
New in version 3.2. Changed in version 3.3.3: The function now follows RFC 6125, section 6.4.3 and does neither match multiple wildcards (e.g. *.*.com or *a*.example.org) nor a wildcard inside an internationalized domain name (IDN) fragment. IDN A-labels such as www*.xn--pthon-kva.org are still supported, but x*.python.org no longer matches xn--tda.python.org. Changed in version 3.5: Matching of IP addresses, when present in the subjectAltName field of the certificate, is now supported. Changed in version 3.7: The function is no longer used for TLS connections. Hostname matching is now performed by OpenSSL. Allow wildcard when it is the leftmost and the only character in that segment. Partial wildcards like www*.example.com are no longer supported. Deprecated since version 3.7. | |
doc_30512 | Return the current EntityResolver. | |
doc_30513 | See Migration guide for more details. tf.compat.v1.raw_ops.RaggedTensorFromVariant
tf.raw_ops.RaggedTensorFromVariant(
encoded_ragged, input_ragged_rank, output_ragged_rank, Tvalues,
Tsplits=tf.dtypes.int64, name=None
)
Decodes the given variant Tensor and returns a RaggedTensor. The input could be a scalar, meaning it encodes a single RaggedTensor with ragged_rank output_ragged_rank. It could also have an arbitrary rank, in which case each element is decoded into a RaggedTensor with ragged_rank input_ragged_rank and these are then stacked according to the input shape to output a single RaggedTensor with ragged_rank output_ragged_rank. Each variant element in the input Tensor is decoded by retrieving from the element a 1-D variant Tensor with input_ragged_rank + 1 Tensors, corresponding to the splits and values of the decoded RaggedTensor. If input_ragged_rank is -1, then it is inferred as output_ragged_rank - rank(encoded_ragged). See RaggedTensorToVariant for the corresponding encoding logic.
Args
encoded_ragged A Tensor of type variant. A variant Tensor containing encoded RaggedTensors.
input_ragged_rank An int that is >= -1. The ragged rank of each encoded RaggedTensor component in the input. If set to -1, this is inferred as output_ragged_rank - rank(encoded_ragged)
output_ragged_rank An int that is >= 0. The expected ragged rank of the output RaggedTensor. The following must hold: output_ragged_rank = rank(encoded_ragged) + input_ragged_rank.
Tvalues A tf.DType.
Tsplits An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output_nested_splits, output_dense_values). output_nested_splits A list of output_ragged_rank Tensor objects with type Tsplits.
output_dense_values A Tensor of type Tvalues. | |
doc_30514 | See Migration guide for more details. tf.compat.v1.feature_column.categorical_column_with_hash_bucket
tf.feature_column.categorical_column_with_hash_bucket(
key, hash_bucket_size, dtype=tf.dtypes.string
)
Use this when your sparse features are in string or integer format, and you want to distribute your inputs into a finite number of buckets by hashing. output_id = Hash(input_feature_string) % bucket_size for string type input. For int type input, the value is converted to its string representation first and then hashed by the same formula. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. Example: keywords = categorical_column_with_hash_bucket("keywords", 10000)
columns = [keywords, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction = linear_model(features, columns)
# or
keywords_embedded = embedding_column(keywords, 16)
columns = [keywords_embedded, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
dense_tensor = input_layer(features, columns)
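The bucketing rule output_id = Hash(input) % bucket_size can be sketched in plain Python. Note that TensorFlow uses its own fingerprint hash internally; md5 here is only a deterministic stand-in to illustrate the idea:

```python
import hashlib

# Deterministic stand-in for the hashed-bucket rule: hash the string form
# of the feature, then take the remainder modulo the bucket count.
def hash_bucket(feature, bucket_size):
    digest = hashlib.md5(str(feature).encode("utf-8")).hexdigest()
    return int(digest, 16) % bucket_size

# Int features are converted to their string representation first,
# so 42 and "42" land in the same bucket.
print(hash_bucket("sports", 10))
print(hash_bucket(42, 10))   # same bucket as hash_bucket("42", 10)
```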
Args
key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.
hash_bucket_size An int > 1. The number of buckets.
dtype The type of features. Only string and integer types are supported.
Returns A HashedCategoricalColumn.
Raises
ValueError hash_bucket_size is not greater than 1.
ValueError dtype is neither string nor integer. | |
doc_30515 |
K-fold iterator variant with non-overlapping groups. The same group will not appear in two different folds (the number of distinct groups has to be at least equal to the number of folds). The folds are approximately balanced in the sense that the number of distinct groups is approximately the same in each fold. Read more in the User Guide. Parameters
n_splits : int, default=5
Number of folds. Must be at least 2. Changed in version 0.22: n_splits default value changed from 3 to 5. See also
LeaveOneGroupOut
For splitting the data according to explicit domain-specific stratification of the dataset. Examples >>> import numpy as np
>>> from sklearn.model_selection import GroupKFold
>>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> y = np.array([1, 2, 3, 4])
>>> groups = np.array([0, 0, 2, 2])
>>> group_kfold = GroupKFold(n_splits=2)
>>> group_kfold.get_n_splits(X, y, groups)
2
>>> print(group_kfold)
GroupKFold(n_splits=2)
>>> for train_index, test_index in group_kfold.split(X, y, groups):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
... print(X_train, X_test, y_train, y_test)
...
TRAIN: [0 1] TEST: [2 3]
[[1 2]
[3 4]] [[5 6]
[7 8]] [1 2] [3 4]
TRAIN: [2 3] TEST: [0 1]
[[5 6]
[7 8]] [[1 2]
[3 4]] [3 4] [1 2]
Methods
get_n_splits([X, y, groups]) Returns the number of splitting iterations in the cross-validator
split(X[, y, groups]) Generate indices to split data into training and test set.
get_n_splits(X=None, y=None, groups=None) [source]
Returns the number of splitting iterations in the cross-validator Parameters
X : object
Always ignored, exists for compatibility.
y : object
Always ignored, exists for compatibility.
groups : object
Always ignored, exists for compatibility. Returns
n_splits : int
Returns the number of splitting iterations in the cross-validator.
split(X, y=None, groups=None) [source]
Generate indices to split data into training and test set. Parameters
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
y : array-like of shape (n_samples,), default=None
The target variable for supervised learning problems.
groups : array-like of shape (n_samples,)
Group labels for the samples used while splitting the dataset into train/test set. Yields
train : ndarray
The training set indices for that split.
test : ndarray
The testing set indices for that split. | |
doc_30516 | class sklearn.preprocessing.LabelBinarizer(*, neg_label=0, pos_label=1, sparse_output=False) [source]
Binarize labels in a one-vs-all fashion. Several regression and binary classification algorithms are available in scikit-learn. A simple way to extend these algorithms to the multi-class classification case is to use the so-called one-vs-all scheme. At learning time, this simply consists in learning one regressor or binary classifier per class. In doing so, one needs to convert multi-class labels to binary labels (belong or does not belong to the class). LabelBinarizer makes this process easy with the transform method. At prediction time, one assigns the class for which the corresponding model gave the greatest confidence. LabelBinarizer makes this easy with the inverse_transform method. Read more in the User Guide. Parameters
neg_label : int, default=0
Value with which negative labels must be encoded.
pos_label : int, default=1
Value with which positive labels must be encoded.
sparse_output : bool, default=False
True if the returned array from transform is desired to be in sparse CSR format. Attributes
classes_ : ndarray of shape (n_classes,)
Holds the label for each class.
y_type_ : str
Represents the type of the target data as evaluated by utils.multiclass.type_of_target. Possible types are ‘continuous’, ‘continuous-multioutput’, ‘binary’, ‘multiclass’, ‘multiclass-multioutput’, ‘multilabel-indicator’, and ‘unknown’.
sparse_input_ : bool
True if the input data to transform is given as a sparse matrix, False otherwise. See also
label_binarize
Function to perform the transform operation of LabelBinarizer with fixed classes.
OneHotEncoder
Encode categorical features using a one-hot aka one-of-K scheme. Examples >>> from sklearn import preprocessing
>>> lb = preprocessing.LabelBinarizer()
>>> lb.fit([1, 2, 6, 4, 2])
LabelBinarizer()
>>> lb.classes_
array([1, 2, 4, 6])
>>> lb.transform([1, 6])
array([[1, 0, 0, 0],
[0, 0, 0, 1]])
Binary targets transform to a column vector >>> lb = preprocessing.LabelBinarizer()
>>> lb.fit_transform(['yes', 'no', 'no', 'yes'])
array([[1],
[0],
[0],
[1]])
Passing a 2D matrix for multilabel classification >>> import numpy as np
>>> lb.fit(np.array([[0, 1, 1], [1, 0, 0]]))
LabelBinarizer()
>>> lb.classes_
array([0, 1, 2])
>>> lb.transform([0, 1, 2, 1])
array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1],
[0, 1, 0]])
Methods
fit(y) Fit label binarizer.
fit_transform(y) Fit label binarizer and transform multi-class labels to binary labels.
get_params([deep]) Get parameters for this estimator.
inverse_transform(Y[, threshold]) Transform binary labels back to multi-class labels.
set_params(**params) Set the parameters of this estimator.
transform(y) Transform multi-class labels to binary labels.
fit(y) [source]
Fit label binarizer. Parameters
y : ndarray of shape (n_samples,) or (n_samples, n_classes)
Target values. A 2-d matrix should only contain 0 and 1, representing multilabel classification. Returns
self : returns an instance of self.
fit_transform(y) [source]
Fit label binarizer and transform multi-class labels to binary labels. The output of transform is sometimes referred to as the 1-of-K coding scheme. Parameters
y : {ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_classes)
Target values. A 2-d matrix should only contain 0 and 1, representing multilabel classification. Sparse matrix can be CSR, CSC, COO, DOK, or LIL. Returns
Y : {ndarray, sparse matrix} of shape (n_samples, n_classes)
Shape will be (n_samples, 1) for binary problems. Sparse matrix will be of CSR format.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
inverse_transform(Y, threshold=None) [source]
Transform binary labels back to multi-class labels. Parameters
Y : {ndarray, sparse matrix} of shape (n_samples, n_classes)
Target values. All sparse matrices are converted to CSR before inverse transformation.
threshold : float, default=None
Threshold used in the binary and multi-label cases. Use 0 when Y contains the output of decision_function (classifier). Use 0.5 when Y contains the output of predict_proba. If None, the threshold is assumed to be half way between neg_label and pos_label. Returns
y : {ndarray, sparse matrix} of shape (n_samples,)
Target values. Sparse matrix will be of CSR format. Notes In the case when the binary labels are fractional (probabilistic), inverse_transform chooses the class with the greatest value. Typically, this allows using the output of a linear model’s decision_function method directly as the input of inverse_transform.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance.
transform(y) [source]
Transform multi-class labels to binary labels. The output of transform is sometimes referred to by some authors as the 1-of-K coding scheme. Parameters
y : {array, sparse matrix} of shape (n_samples,) or (n_samples, n_classes)
Target values. A 2-d matrix should only contain 0 and 1, representing multilabel classification. Sparse matrix can be CSR, CSC, COO, DOK, or LIL. Returns
Y : {ndarray, sparse matrix} of shape (n_samples, n_classes)
Shape will be (n_samples, 1) for binary problems. Sparse matrix will be of CSR format. | |
doc_30517 | The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field. | |
doc_30518 | sklearn.metrics.coverage_error(y_true, y_score, *, sample_weight=None) [source]
Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample. Ties in y_scores are broken by giving maximal rank that would have been assigned to all tied values. Note: Our implementation’s score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels. Read more in the User Guide. Parameters
y_true : ndarray of shape (n_samples, n_labels)
True binary labels in binary indicator format.
y_score : ndarray of shape (n_samples, n_labels)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers).
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
coverage_error : float
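A minimal NumPy sketch of the (unweighted) measure described above: for each sample, take the rank of the lowest-scored true label, with ties broken by assigning the maximal rank, then average over samples. This is an illustration of the definition, not the scikit-learn implementation:

```python
import numpy as np

def coverage(y_true, y_score):
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score)
    # Score of the lowest-ranked true label per sample (inf if none,
    # which yields a coverage of 0 for samples with no true labels).
    min_true = np.where(y_true, y_score, np.inf).min(axis=1)
    # Maximal rank: count every score >= that threshold, so ties count.
    ranks = (y_score >= min_true[:, None]).sum(axis=1)
    return ranks.mean()

y_true = [[1, 0, 0], [0, 0, 1]]
y_score = [[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]]
print(coverage(y_true, y_score))   # 2.5
```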
References
1
Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US. | |
doc_30519 | List of database features that the current connection should have so that the model is considered during the migration phase. For example, if you set this list to ['gis_enabled'], the model will only be synchronized on GIS-enabled databases. It’s also useful to skip some models when testing with several database backends. Avoid relations between models that may or may not be created as the ORM doesn’t handle this. | |
doc_30520 |
Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. | |
doc_30521 | Gets the currently active array type. get_arraytype () -> str DEPRECATED: Returns the currently active array type. This will be a value of the get_arraytypes() tuple and indicates which type of array module is used for the array creation. New in pygame 1.8. | |
doc_30522 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | |
doc_30523 |
Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. | |
doc_30524 | The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822. Changed in version 2.0: The datetime object is timezone-aware. | |
doc_30525 | A short description of the command, which will be printed in the help message when the user runs the command python manage.py help <command>. | |
doc_30526 |
Return the current hatching pattern. | |
doc_30527 | Return a list with the n smallest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key)[:n]. | |
doc_30528 | Returns a date string as per RFC 2822, e.g.: Fri, 09 Nov 2001 01:08:47 -0000
Optional timeval if given is a floating point time value as accepted by time.gmtime() and time.localtime(), otherwise the current time is used. Optional localtime is a flag that when True, interprets timeval, and returns a date relative to the local timezone instead of UTC, properly taking daylight savings time into account. The default is False meaning UTC is used. Optional usegmt is a flag that when True, outputs a date string with the timezone as an ascii string GMT, rather than a numeric -0000. This is needed for some protocols (such as HTTP). This only applies when localtime is False. The default is False. | |
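With an explicit timeval the output is deterministic (0.0 is the Unix epoch), which shows both the usegmt and the default -0000 forms:

```python
from email.utils import formatdate

# usegmt=True renders the timezone as the string "GMT" rather than -0000.
print(formatdate(0.0, usegmt=True))   # Thu, 01 Jan 1970 00:00:00 GMT
print(formatdate(0.0))                # Thu, 01 Jan 1970 00:00:00 -0000
```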
doc_30529 | The flag is set when the code object is a generator function, i.e. a generator object is returned when the code object is executed. | |
doc_30530 | The name of the table to create for storing the many-to-many data. If this is not provided, Django will assume a default name based upon the names of: the table for the model defining the relationship and the name of the field itself. | |
doc_30531 | Registers a namespace prefix. The registry is global, and any existing mapping for either the given prefix or the namespace URI will be removed. prefix is a namespace prefix. uri is a namespace uri. Tags and attributes in this namespace will be serialized with the given prefix, if at all possible. New in version 3.2. | |
doc_30532 | Raised when an invalid or illegal string is specified. | |
doc_30533 | tf.experimental.numpy.deg2rad(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.deg2rad. | |
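tf.experimental.numpy.deg2rad mirrors numpy.deg2rad; shown here with NumPy itself, the conversion is rad = deg * pi / 180:

```python
import math
import numpy as np

# Element-wise degrees-to-radians conversion.
angles = np.array([0.0, 90.0, 180.0])
rads = np.deg2rad(angles)
print(rads)   # [0, pi/2, pi]
```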
doc_30534 |
Set the path effects. Parameters
path_effects : AbstractPathEffect | |
doc_30535 |
Surface of the moon. This low-contrast image of the surface of the moon is useful for illustrating histogram equalization and contrast stretching. Returns
moon : (512, 512) uint8 ndarray
Moon image. | |
doc_30536 |
Copy properties of other into self. | |
doc_30537 |
Set the artist transform. Parameters
t : Transform | |
doc_30538 | sys.__stdout__
sys.__stderr__
These objects contain the original values of stdin, stderr and stdout at the start of the program. They are used during finalization, and could be useful to print to the actual standard stream no matter if the sys.std* object has been redirected. It can also be used to restore the actual files to known working file objects in case they have been overwritten with a broken object. However, the preferred way to do this is to explicitly save the previous stream before replacing it, and restore the saved object. Note Under some conditions stdin, stdout and stderr as well as the original values __stdin__, __stdout__ and __stderr__ can be None. It is usually the case for Windows GUI apps that aren’t connected to a console and Python apps started with pythonw. | |
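A minimal sketch of the preferred save-and-restore pattern mentioned above, using an io.StringIO as the replacement stream:

```python
import io
import sys

# Preferred pattern: explicitly save the previous stream before replacing
# it, and restore the saved object afterwards (rather than relying on
# sys.__stdout__, which may itself have been tampered with or be None).
saved = sys.stdout
sys.stdout = io.StringIO()
print("captured")                  # written to the StringIO buffer
captured = sys.stdout.getvalue()
sys.stdout = saved                 # restore the saved stream
print(repr(captured))  # 'captured\n'
```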
doc_30539 | See Migration guide for more details. tf.compat.v1.keras.activations.get
tf.keras.activations.get(
identifier
)
Arguments
identifier Function or string
Returns Function corresponding to the input string or input function.
For example:
tf.keras.activations.get('softmax')
<function softmax at 0x1222a3d90>
tf.keras.activations.get(tf.keras.activations.softmax)
<function softmax at 0x1222a3d90>
tf.keras.activations.get(None)
<function linear at 0x1239596a8>
tf.keras.activations.get(abs)
<built-in function abs>
tf.keras.activations.get('abcd')
Traceback (most recent call last):
ValueError: Unknown activation function:abcd
Raises
ValueError Input is an unknown function or string, i.e., the input does not denote any defined function. | |
doc_30540 | Group identifier of the file owner. | |
doc_30541 | Return font-specific data. Options include:
ascent - distance between baseline and highest point that a
character of the font can occupy
descent - distance between baseline and lowest point that a
character of the font can occupy
linespace - minimum vertical separation necessary between any two
characters of the font that ensures no vertical overlap between lines.
fixed - 1 if font is fixed-width else 0 | |
doc_30542 |
Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, dfnum (degrees of freedom in numerator) and dfden (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. Note New code should use the f method of a default_rng() instance instead; please see the Quick Start. Parameters
dfnumfloat or array_like of floats
Degrees of freedom in numerator, must be > 0.
dfdenfloat or array_like of float
Degrees of freedom in denominator, must be > 0.
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if dfnum and dfden are both scalars. Otherwise, np.broadcast(dfnum, dfden).size samples are drawn. Returns
outndarray or scalar
Drawn samples from the parameterized Fisher distribution. See also scipy.stats.f
probability density function, distribution or cumulative density function, etc. Generator.f
which should be used for new code. Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable dfnum is the number of samples minus one, the between-groups degrees of freedom, while dfden is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. References 1
Glantz, Stanton A. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 2
Wikipedia, “F-distribution”, https://en.wikipedia.org/wiki/F-distribution Examples An example from Glantz[1], pp 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). Fasting blood glucose was measured, case group had a mean value of 86.1, controls had a mean value of 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these data consistent with the null hypothesis that the parents diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: >>> dfnum = 1. # between group degrees of freedom
>>> dfden = 48. # within groups degrees of freedom
>>> s = np.random.f(dfnum, dfden, 1000)
The lower bound for the top 1% of the samples is : >>> np.sort(s)[-10]
7.61988120985 # random
So there is about a 1% chance that the F statistic will exceed 7.62, the measured value is 36, so the null hypothesis is rejected at the 1% level. | |
doc_30543 | tf.experimental.numpy.random.seed(
s
)
Sets the seed for the random number generator. Uses tf.set_random_seed. Args: s: an integer. See the NumPy documentation for numpy.random.seed. | |
doc_30544 | tf.experimental.numpy.roll(
a, shift, axis=None
)
See the NumPy documentation for numpy.roll. | |
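tf.experimental.numpy.roll follows numpy.roll semantics; for a 1-D input those semantics reduce to the following pure-Python sketch (an illustration only; roll_1d is our own helper, not a library function):

```python
def roll_1d(seq, shift):
    """Circularly shift a 1-D sequence: elements that roll past one end
    reappear at the other, as numpy.roll does for 1-D input."""
    n = len(seq)
    if n == 0:
        return list(seq)
    shift %= n  # negative and oversized shifts wrap around
    return list(seq[-shift:]) + list(seq[:-shift])

print(roll_1d([1, 2, 3, 4, 5], 2))   # [4, 5, 1, 2, 3]
print(roll_1d([1, 2, 3, 4, 5], -1))  # [2, 3, 4, 5, 1]
```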
doc_30545 |
Convenience method for controlling tick locators. See matplotlib.axes.Axes.locator_params() for full documentation. Note that this is for Axes3D objects; therefore, setting axis to 'both' will result in the parameters being set for all three axes. In addition, axis can take a value of 'z' to apply parameters to the z axis. | |
doc_30546 |
Determine if the first argument is a subclass of the second argument. Parameters
arg1, arg2dtype or dtype specifier
Data-types. Returns
outbool
The result. See also
issctype, issubdtype, obj2sctype
Examples >>> np.issubsctype('S8', str)
False
>>> np.issubsctype(np.array([1]), int)
True
>>> np.issubsctype(np.array([1]), float)
False | |
doc_30547 | Parse data as JSON. Useful during testing. If the mimetype does not indicate JSON (application/json, see is_json()), this returns None. Unlike Request.get_json(), the result is not cached. Parameters
force (bool) – Ignore the mimetype and always try to parse JSON.
silent (bool) – Silence parsing errors and return None instead. Return type
Optional[Any] | |
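The force/silent semantics described above can be sketched with a toy stdlib-only model (get_json_like is a hypothetical helper for illustration, not Flask's implementation):

```python
import json

def get_json_like(body, mimetype, force=False, silent=False):
    """Toy model of the parse-as-JSON behavior: return None unless the
    mimetype indicates JSON (or force=True); with silent=True, swallow
    parse errors and return None instead of raising."""
    if not force and mimetype != "application/json":
        return None
    try:
        return json.loads(body)
    except ValueError:
        if silent:
            return None
        raise

print(get_json_like('{"a": 1}', "text/plain"))                     # None
print(get_json_like('{"a": 1}', "text/plain", force=True))         # {'a': 1}
print(get_json_like('not json', "application/json", silent=True))  # None
```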
doc_30548 |
Used if copy.copy is called on an array. Returns a copy of the array. Equivalent to a.copy(order='K'). | |
doc_30549 |
Return True if date is last day of the year. Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.is_year_end
False
>>> ts = pd.Timestamp(2020, 12, 31)
>>> ts.is_year_end
True | |
doc_30550 | Return the current local datetime, with tzinfo None. Equivalent to: datetime.fromtimestamp(time.time())
See also now(), fromtimestamp(). This method is functionally equivalent to now(), but without a tz parameter. | |
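A small stdlib example of the equivalence noted above (the result is naive, i.e. its tzinfo is None):

```python
from datetime import datetime
import time

# today() returns the current local datetime with no tzinfo attached.
now_local = datetime.today()
print(now_local.tzinfo)  # None

# The documented equivalent form:
also_local = datetime.fromtimestamp(time.time())
# now() with no tz argument yields the same kind of naive local datetime.
```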
doc_30551 | See Migration guide for more details. tf.compat.v1.image.transpose, tf.compat.v1.image.transpose_image
tf.image.transpose(
image, name=None
)
Usage Example:
x = [[[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]]
tf.image.transpose(x)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=
array([[[ 1., 2., 3.],
[ 7., 8., 9.]],
[[ 4., 5., 6.],
[10., 11., 12.]]], dtype=float32)>
Args
image 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
name A name for this operation (optional).
Returns If image was 4-D, a 4-D float Tensor of shape [batch, width, height, channels] If image was 3-D, a 3-D float Tensor of shape [width, height, channels]
Raises
ValueError if the shape of image not supported. Usage Example:
image = [[[1, 2], [3, 4]],
[[5, 6], [7, 8]],
[[9, 10], [11, 12]]]
image = tf.constant(image)
tf.image.transpose(image)
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[ 1, 2],
[ 5, 6],
[ 9, 10]],
[[ 3, 4],
[ 7, 8],
[11, 12]]], dtype=int32)> | |
doc_30552 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_30553 | See Migration guide for more details. tf.compat.v1.math.unsorted_segment_max, tf.compat.v1.unsorted_segment_max
tf.math.unsorted_segment_max(
data, segment_ids, num_segments, name=None
)
Read the section on segmentation for an explanation of segments. This operator is similar to the unsorted segment sum operator found (here). Instead of computing the sum over segments, it computes the maximum such that: \(output_i = \max_{j...} data[j...]\) where max is over tuples j... such that segment_ids[j...] == i. If the maximum is empty for a given segment ID i, it outputs the smallest possible value for the specific numeric type, output[i] = numeric_limits<T>::lowest(). If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result. For example: c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2)
# ==> [[ 4, 3, 3, 4],
# [5, 6, 7, 8]]
Args
data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
segment_ids A Tensor. Must be one of the following types: int32, int64. A tensor whose shape is a prefix of data.shape.
num_segments A Tensor. Must be one of the following types: int32, int64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as data. | |
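The example above can be reproduced with a pure-Python sketch of the op's semantics (a simplified model; segment_max_rows is our own helper, assuming 2-D data and elementwise max over rows):

```python
def segment_max_rows(data, segment_ids, num_segments, lowest=float("-inf")):
    """Elementwise max of the rows of `data`, grouped by `segment_ids`.
    Empty segments are filled with `lowest`, mirroring the op's
    numeric_limits<T>::lowest() behavior; negative ids are dropped."""
    width = len(data[0])
    out = [[lowest] * width for _ in range(num_segments)]
    for row, seg in zip(data, segment_ids):
        if seg < 0:
            continue
        out[seg] = [max(a, b) for a, b in zip(out[seg], row)]
    return out

c = [[1, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]]
print(segment_max_rows(c, [0, 1, 0], num_segments=2))
# [[4, 3, 3, 4], [5, 6, 7, 8]]
```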
doc_30554 |
Bases: matplotlib.patheffects.AbstractPathEffect A line based PathEffect which re-draws a stroke. The path will be stroked with its gc updated with the given keyword arguments, i.e., the keyword arguments should be valid gc parameter values. draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Draw the path with updated gc. | |
doc_30555 | Construct an IPv6 interface. The meaning of address is as in the constructor of IPv6Network, except that arbitrary host addresses are always accepted. IPv6Interface is a subclass of IPv6Address, so it inherits all the attributes from that class. In addition, the following attributes are available:
ip
network
with_prefixlen
with_netmask
with_hostmask
Refer to the corresponding attribute documentation in IPv4Interface. | |
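A short stdlib example showing the attributes listed above:

```python
import ipaddress

# An arbitrary host address is accepted, unlike the IPv6Network constructor.
iface = ipaddress.IPv6Interface("2001:db8::42/64")
print(iface.ip)              # 2001:db8::42
print(iface.network)         # 2001:db8::/64
print(iface.with_prefixlen)  # 2001:db8::42/64
```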
doc_30556 | window.addch(y, x, ch[, attr])
Paint character ch at (y, x) with attributes attr, overwriting any character previously painted at that location. By default, the character position and attributes are the current settings for the window object. Note Writing outside the window, subwindow, or pad raises a curses.error. Attempting to write to the lower right corner of a window, subwindow, or pad will cause an exception to be raised after the character is printed. | |
doc_30557 |
Bases: object A decorator that performs %-substitution on an object's docstring. This decorator should be robust even if obj.__doc__ is None (for example, if -OO was passed to the interpreter). Usage: construct a docstring.Substitution with a sequence or dictionary suitable for performing substitution; then decorate a suitable function with the constructed object, e.g.: sub_author_name = Substitution(author='Jason')
@sub_author_name
def some_function(x):
"%(author)s wrote this function"
# note that some_function.__doc__ is now "Jason wrote this function"
One can also use positional arguments: sub_first_last_names = Substitution('Edgar Allen', 'Poe')
@sub_first_last_names
def some_function(x):
"%s %s wrote the Raven"
update(*args, **kwargs)[source]
Update self.params (which must be a dict) with the supplied args.
matplotlib.docstring.copy(source)[source]
Copy a docstring from another source function (if present). | |
doc_30558 |
Returns a copy of the calling offset object with n=1 and all other attributes equal. | |
doc_30559 |
Marks given tensors as modified in an in-place operation. This should be called at most once, only from inside the forward() method, and all arguments should be inputs. Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness of our checks. It doesn’t matter whether the function is called before or after modification. | |
doc_30560 | See Migration guide for more details. tf.compat.v1.math.multiply_no_nan
tf.math.multiply_no_nan(
x, y, name=None
)
Args
x A Tensor. Must be one of the following types: float32, float64.
y A Tensor whose dtype is compatible with x.
name A name for the operation (optional).
Returns The element-wise value of the x times y. | |
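The op's documented behavior is to return 0 wherever y is zero, even if x is NaN or infinite (where a plain product would be NaN). A scalar pure-Python sketch of that rule (a simplified model, not the TensorFlow implementation):

```python
import math

def multiply_no_nan(x, y):
    """Scalar sketch: the result is 0 when y is 0, even if x is NaN or
    infinite; otherwise it is the ordinary product x * y."""
    return 0.0 if y == 0 else x * y

print(multiply_no_nan(math.inf, 0.0))  # 0.0 (plain math.inf * 0.0 is nan)
print(multiply_no_nan(3.0, 2.0))       # 6.0
```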
doc_30561 | Send a HELP command. Return a pair (response, list) where list is a list of help strings. | |
doc_30562 | Convert the color from HLS coordinates to RGB coordinates. | |
doc_30563 | True or False. Determines whether or not a user object is created if not already in the database Defaults to True. | |
doc_30564 | Filter action.
Constant Meaning
KQ_EV_ADD Adds or modifies an event
KQ_EV_DELETE Removes an event from the queue
KQ_EV_ENABLE Permits control() to return the event
KQ_EV_DISABLE Disables event
KQ_EV_ONESHOT Removes event after first occurrence
KQ_EV_CLEAR Resets the state after an event is retrieved
KQ_EV_SYSFLAGS internal event
KQ_EV_FLAG1 internal event
KQ_EV_EOF Filter specific EOF condition
KQ_EV_ERROR See return values | |
doc_30565 |
For each element in self, return a list of the words in the string, using sep as the delimiter string. See also char.split | |
doc_30566 | class smtpd.SMTPServer(localaddr, remoteaddr, data_size_limit=33554432, map=None, enable_SMTPUTF8=False, decode_data=False)
Create a new SMTPServer object, which binds to local address localaddr. It will treat remoteaddr as an upstream SMTP relayer. Both localaddr and remoteaddr should be a (host, port) tuple. The object inherits from asyncore.dispatcher, and so will insert itself into asyncore’s event loop on instantiation. data_size_limit specifies the maximum number of bytes that will be accepted in a DATA command. A value of None or 0 means no limit. map is the socket map to use for connections (an initially empty dictionary is a suitable value). If not specified the asyncore global socket map is used. enable_SMTPUTF8 determines whether the SMTPUTF8 extension (as defined in RFC 6531) should be enabled. The default is False. When True, SMTPUTF8 is accepted as a parameter to the MAIL command and when present is passed to process_message() in the kwargs['mail_options'] list. decode_data and enable_SMTPUTF8 cannot be set to True at the same time. decode_data specifies whether the data portion of the SMTP transaction should be decoded using UTF-8. When decode_data is False (the default), the server advertises the 8BITMIME extension (RFC 6152), accepts the BODY=8BITMIME parameter to the MAIL command, and when present passes it to process_message() in the kwargs['mail_options'] list. decode_data and enable_SMTPUTF8 cannot be set to True at the same time.
process_message(peer, mailfrom, rcpttos, data, **kwargs)
Raise a NotImplementedError exception. Override this in subclasses to do something useful with this message. Whatever was passed in the constructor as remoteaddr will be available as the _remoteaddr attribute. peer is the remote host’s address, mailfrom is the envelope originator, rcpttos are the envelope recipients and data is a string containing the contents of the e-mail (which should be in RFC 5321 format). If the decode_data constructor keyword is set to True, the data argument will be a unicode string. If it is set to False, it will be a bytes object. kwargs is a dictionary containing additional information. It is empty if decode_data=True was given as an init argument, otherwise it contains the following keys:
mail_options:
a list of all received parameters to the MAIL command (the elements are uppercase strings; example: ['BODY=8BITMIME', 'SMTPUTF8']).
rcpt_options:
same as mail_options but for the RCPT command. Currently no RCPT TO options are supported, so for now this will always be an empty list. Implementations of process_message should use the **kwargs signature to accept arbitrary keyword arguments, since future feature enhancements may add keys to the kwargs dictionary. Return None to request a normal 250 Ok response; otherwise return the desired response string in RFC 5321 format.
channel_class
Override this in subclasses to use a custom SMTPChannel for managing SMTP clients.
New in version 3.4: The map constructor argument. Changed in version 3.5: localaddr and remoteaddr may now contain IPv6 addresses. New in version 3.5: The decode_data and enable_SMTPUTF8 constructor parameters, and the kwargs parameter to process_message() when decode_data is False. Changed in version 3.6: decode_data is now False by default.
DebuggingServer Objects
class smtpd.DebuggingServer(localaddr, remoteaddr)
Create a new debugging server. Arguments are as per SMTPServer. Messages will be discarded, and printed on stdout.
PureProxy Objects
class smtpd.PureProxy(localaddr, remoteaddr)
Create a new pure proxy server. Arguments are as per SMTPServer. Everything will be relayed to remoteaddr. Note that running this has a good chance to make you into an open relay, so please be careful.
MailmanProxy Objects
class smtpd.MailmanProxy(localaddr, remoteaddr)
Deprecated since version 3.9, will be removed in version 3.11: MailmanProxy is deprecated, it depends on a Mailman module which no longer exists and therefore is already broken. Create a new pure proxy server. Arguments are as per SMTPServer. Everything will be relayed to remoteaddr, unless local mailman configurations knows about an address, in which case it will be handled via mailman. Note that running this has a good chance to make you into an open relay, so please be careful.
SMTPChannel Objects
class smtpd.SMTPChannel(server, conn, addr, data_size_limit=33554432, map=None, enable_SMTPUTF8=False, decode_data=False)
Create a new SMTPChannel object which manages the communication between the server and a single SMTP client. conn and addr are as per the instance variables described below. data_size_limit specifies the maximum number of bytes that will be accepted in a DATA command. A value of None or 0 means no limit. enable_SMTPUTF8 determines whether the SMTPUTF8 extension (as defined in RFC 6531) should be enabled. The default is False. decode_data and enable_SMTPUTF8 cannot be set to True at the same time. A dictionary can be specified in map to avoid using a global socket map. decode_data specifies whether the data portion of the SMTP transaction should be decoded using UTF-8. The default is False. decode_data and enable_SMTPUTF8 cannot be set to True at the same time. To use a custom SMTPChannel implementation you need to override the SMTPServer.channel_class of your SMTPServer. Changed in version 3.5: The decode_data and enable_SMTPUTF8 parameters were added. Changed in version 3.6: decode_data is now False by default. The SMTPChannel has the following instance variables:
smtp_server
Holds the SMTPServer that spawned this channel.
conn
Holds the socket object connecting to the client.
addr
Holds the address of the client, the second value returned by socket.accept
received_lines
Holds a list of the line strings (decoded using UTF-8) received from the client. The lines have their "\r\n" line ending translated to "\n".
smtp_state
Holds the current state of the channel. This will be either COMMAND initially and then DATA after the client sends a “DATA” line.
seen_greeting
Holds a string containing the greeting sent by the client in its “HELO”.
mailfrom
Holds a string containing the address identified in the “MAIL FROM:” line from the client.
rcpttos
Holds a list of strings containing the addresses identified in the “RCPT TO:” lines from the client.
received_data
Holds a string containing all of the data sent by the client during the DATA state, up to but not including the terminating "\r\n.\r\n".
fqdn
Holds the fully-qualified domain name of the server as returned by socket.getfqdn().
peer
Holds the name of the client peer as returned by conn.getpeername() where conn is conn.
The SMTPChannel operates by invoking methods named smtp_<command> upon reception of a command line from the client. Built into the base SMTPChannel class are methods for handling the following commands (and responding to them appropriately):
Command Action taken
HELO Accepts the greeting from the client and stores it in seen_greeting. Sets server to base command mode.
EHLO Accepts the greeting from the client and stores it in seen_greeting. Sets server to extended command mode.
NOOP Takes no action.
QUIT Closes the connection cleanly.
MAIL Accepts the “MAIL FROM:” syntax and stores the supplied address as mailfrom. In extended command mode, accepts the RFC 1870 SIZE attribute and responds appropriately based on the value of data_size_limit.
RCPT Accepts the “RCPT TO:” syntax and stores the supplied addresses in the rcpttos list.
RSET Resets the mailfrom, rcpttos, and received_data, but not the greeting.
DATA Sets the internal state to DATA and stores remaining lines from the client in received_data until the terminator "\r\n.\r\n" is received.
HELP Returns minimal information on command syntax
VRFY Returns code 252 (the server doesn’t know if the address is valid)
EXPN Reports that the command is not implemented. | |
doc_30567 | class socketserver.DatagramRequestHandler
These BaseRequestHandler subclasses override the setup() and finish() methods, and provide self.rfile and self.wfile attributes. The self.rfile and self.wfile attributes can be read or written, respectively, to get the request data or return data to the client. The rfile attributes of both classes support the io.BufferedIOBase readable interface, and DatagramRequestHandler.wfile supports the io.BufferedIOBase writable interface. Changed in version 3.6: StreamRequestHandler.wfile also supports the io.BufferedIOBase writable interface. | |
doc_30568 |
Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and dispersion (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution. Note New code should use the vonmises method of a default_rng() instance instead; please see the Quick Start. Parameters
mufloat or array_like of floats
Mode (“center”) of the distribution.
kappafloat or array_like of floats
Dispersion of the distribution, has to be >=0.
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if mu and kappa are both scalars. Otherwise, np.broadcast(mu, kappa).size samples are drawn. Returns
outndarray or scalar
Drawn samples from the parameterized von Mises distribution. See also scipy.stats.vonmises
probability density function, distribution, or cumulative density function, etc. Generator.vonmises
which should be used for new code. Notes The probability density for the von Mises distribution is \[p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},\] where \(\mu\) is the mode and \(\kappa\) the dispersion, and \(I_0(\kappa)\) is the modified Bessel function of order 0. The von Mises is named for Richard Edler von Mises, who was born in Austria-Hungary, in what is now the Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science. References 1
Abramowitz, M. and Stegun, I. A. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. 2
von Mises, R., “Mathematical Theory of Probability and Statistics”, New York: Academic Press, 1964. Examples Draw samples from the distribution: >>> mu, kappa = 0.0, 4.0 # mean and dispersion
>>> s = np.random.vonmises(mu, kappa, 1000)
Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt
>>> from scipy.special import i0
>>> plt.hist(s, 50, density=True)
>>> x = np.linspace(-np.pi, np.pi, num=51)
>>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa))
>>> plt.plot(x, y, linewidth=2, color='r')
>>> plt.show() | |
doc_30569 | unlock()
Three locking mechanisms are used—dot locking and, if available, the flock() and lockf() system calls. For MH mailboxes, locking the mailbox means locking the .mh_sequences file and, only for the duration of any operations that affect them, locking individual message files. | |
doc_30570 | See Migration guide for more details. tf.compat.v1.keras.backend.clear_session
tf.keras.backend.clear_session()
Keras manages a global state, which it uses to implement the Functional model-building API and to uniquify autogenerated layer names. If you are creating many models in a loop, this global state will consume an increasing amount of memory over time, and you may want to clear it. Calling clear_session() releases the global state: this helps avoid clutter from old models and layers, especially when memory is limited. Example 1: calling clear_session() when creating models in a loop for _ in range(100):
# Without `clear_session()`, each iteration of this loop will
# slightly increase the size of the global state managed by Keras
model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])
for _ in range(100):
# With `clear_session()` called at the beginning,
# Keras starts with a blank state at each iteration
# and memory consumption is constant over time.
tf.keras.backend.clear_session()
model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])
Example 2: resetting the layer name generation counter
import tensorflow as tf
layers = [tf.keras.layers.Dense(10) for _ in range(10)]
new_layer = tf.keras.layers.Dense(10)
print(new_layer.name)
dense_10
tf.keras.backend.set_learning_phase(1)
print(tf.keras.backend.learning_phase())
1
tf.keras.backend.clear_session()
new_layer = tf.keras.layers.Dense(10)
print(new_layer.name)
dense | |
doc_30571 |
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | |
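Ignoring sample weights, the mean accuracy reduces to the fraction of exact label matches, as in this pure-Python sketch (mean_accuracy is our own helper, not part of scikit-learn):

```python
def mean_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label equals the true label
    (the unweighted score described above)."""
    matches = sum(t == p for t, p in zip(y_true, y_pred))
    return matches / len(y_true)

print(mean_accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # 0.75
```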
doc_30572 |
Expands the dimension dim of the self tensor over multiple dimensions of sizes given by sizes.
sizes is the new shape of the unflattened dimension and it can be a Tuple[int] as well as torch.Size if self is a Tensor, or namedshape (Tuple[(name: str, size: int)]) if self is a NamedTensor. The total number of elements in sizes must match the number of elements in the original dim being unflattened. Parameters
dim (Union[int, str]) – Dimension to unflatten
sizes (Union[Tuple[int] or torch.Size, Tuple[Tuple[str, int]]]) – New shape of the unflattened dimension Examples >>> torch.randn(3, 4, 1).unflatten(1, (2, 2)).shape
torch.Size([3, 2, 2, 1])
>>> torch.randn(2, 4, names=('A', 'B')).unflatten('B', (('B1', 2), ('B2', 2)))
tensor([[[-1.1772,  0.0180],
         [ 0.2412,  0.1431]],
        [[-1.1819, -0.8899],
         [ 1.5813,  0.2274]]], names=('A', 'B1', 'B2'))
Warning The named tensor API is experimental and subject to change. | |
doc_30573 |
Compute the normalized root mean-squared error (NRMSE) between two images. Parameters
image_truendarray
Ground-truth image, same shape as im_test.
image_testndarray
Test image.
normalization{‘euclidean’, ‘min-max’, ‘mean’}, optional
Controls the normalization method to use in the denominator of the NRMSE. There is no standard method of normalization across the literature [1]. The methods available here are as follows:
‘euclidean’ : normalize by the averaged Euclidean norm of im_true: NRMSE = RMSE * sqrt(N) / || im_true ||
where || . || denotes the Frobenius norm and N = im_true.size. This result is equivalent to: NRMSE = || im_true - im_test || / || im_true ||.
‘min-max’ : normalize by the intensity range of im_true. ‘mean’ : normalize by the mean of im_true
Returns
nrmsefloat
The NRMSE metric. Notes Changed in version 0.16: This function was renamed from skimage.measure.compare_nrmse to skimage.metrics.normalized_root_mse. References
1
https://en.wikipedia.org/wiki/Root-mean-square_deviation | |
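The default 'euclidean' normalization can be sketched for flat 1-D inputs in pure Python (a simplified illustration, not the scikit-image implementation; the Frobenius norm reduces to the Euclidean norm here):

```python
import math

def normalized_root_mse(image_true, image_test):
    """NRMSE with 'euclidean' normalization for 1-D sequences:
    || true - test || / || true ||."""
    err = math.sqrt(sum((t - e) ** 2 for t, e in zip(image_true, image_test)))
    denom = math.sqrt(sum(t ** 2 for t in image_true))
    return err / denom

print(normalized_root_mse([3.0, 4.0], [3.0, 4.0]))  # 0.0
print(normalized_root_mse([3.0, 4.0], [0.0, 0.0]))  # 1.0
```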
doc_30574 | Enables support for PEP 484 and PEP 526 style type comments (# type: <type>, # type: ignore <stuff>). New in version 3.8. | |
doc_30575 | tf.nn.moments(
x, axes, shift=None, keepdims=False, name=None
)
The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.
Note: shift is currently not used; the true mean is computed and used.
When using these moments for batch normalization (see tf.nn.batch_normalization): for so-called "global normalization", used with convolutional filters with shape [batch, height, width, depth], pass axes=[0, 1, 2]. for simple batch normalization pass axes=[0] (batch only).
Args
x A Tensor.
axes Array of ints. Axes along which to compute mean and variance.
shift Not used in the current implementation.
keepdims produce moments with the same dimensionality as the input.
name Name used to scope the operations that compute the moments.
Returns Two Tensor objects: mean and variance. | |
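For a 1-D input with axes=[0], the computation reduces to the ordinary mean and population variance, as in this pure-Python sketch (an illustration of the math, not the TensorFlow implementation):

```python
def moments_1d(values):
    """Mean and population variance of a 1-D sequence, matching what
    tf.nn.moments computes for a vector with axes=[0]."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, variance

print(moments_1d([1.0, 2.0, 3.0, 4.0]))  # (2.5, 1.25)
```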
doc_30576 | Raises a ValidationError with a code of 'invalid_extension' if the extension of value.name (value is a File) isn’t found in allowed_extensions. The extension is compared case-insensitively with allowed_extensions. Warning Don’t rely on validation of the file extension to determine a file’s type. Files can be renamed to have any extension no matter what data they contain. | |
doc_30577 | Create a UUID from either a string of 32 hexadecimal digits, a string of 16 bytes in big-endian order as the bytes argument, a string of 16 bytes in little-endian order as the bytes_le argument, a tuple of six integers (32-bit time_low, 16-bit time_mid, 16-bit time_hi_version, 8-bit clock_seq_hi_variant, 8-bit clock_seq_low, 48-bit node) as the fields argument, or a single 128-bit integer as the int argument. When a string of hex digits is given, curly braces, hyphens, and a URN prefix are all optional. For example, these expressions all yield the same UUID: UUID('{12345678-1234-5678-1234-567812345678}')
UUID('12345678123456781234567812345678')
UUID('urn:uuid:12345678-1234-5678-1234-567812345678')
UUID(bytes=b'\x12\x34\x56\x78'*4)
UUID(bytes_le=b'\x78\x56\x34\x12\x34\x12\x78\x56' +
b'\x12\x34\x56\x78\x12\x34\x56\x78')
UUID(fields=(0x12345678, 0x1234, 0x5678, 0x12, 0x34, 0x567812345678))
UUID(int=0x12345678123456781234567812345678)
Exactly one of hex, bytes, bytes_le, fields, or int must be given. The version argument is optional; if given, the resulting UUID will have its variant and version number set according to RFC 4122, overriding bits in the given hex, bytes, bytes_le, fields, or int. Comparison of UUID objects are made by way of comparing their UUID.int attributes. Comparison with a non-UUID object raises a TypeError. str(uuid) returns a string in the form 12345678-1234-5678-1234-567812345678 where the 32 hexadecimal digits represent the UUID. | |
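The equivalence of those constructor forms is easy to verify with the standard library:

```python
import uuid

# Several of the equivalent constructions listed above:
u1 = uuid.UUID('{12345678-1234-5678-1234-567812345678}')
u2 = uuid.UUID('12345678123456781234567812345678')
u3 = uuid.UUID(bytes=b'\x12\x34\x56\x78' * 4)
u4 = uuid.UUID(int=0x12345678123456781234567812345678)
print(u1 == u2 == u3 == u4)  # True
print(str(u1))               # 12345678-1234-5678-1234-567812345678
```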
doc_30578 | Load the type map given in the file filename, if it exists. The type map is returned as a dictionary mapping filename extensions, including the leading dot ('.'), to strings of the form 'type/subtype'. If the file filename does not exist or cannot be read, None is returned. | |
doc_30579 | Return a fragment that has all samples in the original fragment multiplied by the floating-point value factor. Samples are truncated in case of overflow. | |
doc_30580 | Merge in data from another CoverageResults object. | |
doc_30581 | The class of the original traceback. | |
doc_30582 | Return the MIME part that is the best candidate to be the “body” of the message. preferencelist must be a sequence of strings from the set related, html, and plain, and indicates the order of preference for the content type of the part returned. Start looking for candidate matches with the object on which the get_body method is called. If related is not included in preferencelist, consider the root part (or subpart of the root part) of any related encountered as a candidate if the (sub-)part matches a preference. When encountering a multipart/related, check the start parameter and if a part with a matching Content-ID is found, consider only it when looking for candidate matches. Otherwise consider only the first (default root) part of the multipart/related. If a part has a Content-Disposition header, only consider the part a candidate match if the value of the header is inline. If none of the candidates matches any of the preferences in preferencelist, return None. Notes: (1) For most applications the only preferencelist combinations that really make sense are ('plain',), ('html', 'plain'), and the default ('related', 'html', 'plain'). (2) Because matching starts with the object on which get_body is called, calling get_body on a multipart/related will return the object itself unless preferencelist has a non-default value. (3) Messages (or message parts) that do not specify a Content-Type or whose Content-Type header is invalid will be treated as if they are of type text/plain, which may occasionally cause get_body to return unexpected results. | |
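A small sketch with the stdlib email package: for a multipart/alternative message, the default preferencelist picks the HTML part, while ('plain',) forces the text/plain part.

```python
from email.message import EmailMessage

# Build a multipart/alternative message with plain and HTML parts.
msg = EmailMessage()
msg['Subject'] = 'demo'
msg.set_content('plain text body')
msg.add_alternative('<p>html body</p>', subtype='html')

# Default preferencelist ('related', 'html', 'plain') picks the HTML part here.
body = msg.get_body()
assert body.get_content_type() == 'text/html'
# Asking for plain text first returns the text/plain part instead.
assert msg.get_body(preferencelist=('plain',)).get_content_type() == 'text/plain'
```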
doc_30583 |
Do a keyword search on docstrings. A list of objects that matched the search is displayed, sorted by relevance. All given keywords need to be found in the docstring for it to be returned as a result, but the order does not matter. Parameters
what : str
String containing words to look for.
module : str or list, optional
Name of module(s) whose docstrings to go through.
import_modules : bool, optional
Whether to import sub-modules in packages. Default is True.
regenerate : bool, optional
Whether to re-generate the docstring cache. Default is False.
output : file-like, optional
File-like object to write the output to. If omitted, use a pager. See also
source, info
Notes Relevance is determined only roughly, by checking if the keywords occur in the function name, at the start of a docstring, etc. Examples >>> np.lookfor('binary representation')
Search results for 'binary representation'
------------------------------------------
numpy.binary_repr
Return the binary representation of the input number as a string.
numpy.core.setup_common.long_double_representation
Given a binary dump as given by GNU od -b, look for long double
numpy.base_repr
Return a string representation of a number in the given base system.
... | |
doc_30584 |
Return the xaxis' major tick labels, as a list of Text. | |
doc_30585 | Example: >>> from django.contrib.gis.geos import WKBReader
>>> wkb_r = WKBReader()
>>> wkb_r.read('0101000000000000000000F03F000000000000F03F')
<Point object at 0x103a88910> | |
doc_30586 | See Migration guide for more details. tf.compat.v1.manip.tile, tf.compat.v1.tile
tf.tile(
input, multiples, name=None
)
This operation creates a new tensor by replicating input multiples times. The output tensor's i'th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the 'i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].
a = tf.constant([[1,2,3],[4,5,6]], tf.int32)
b = tf.constant([1,2], tf.int32)
tf.tile(a, b)
<tf.Tensor: shape=(2, 6), dtype=int32, numpy=
array([[1, 2, 3, 1, 2, 3],
[4, 5, 6, 4, 5, 6]], dtype=int32)>
c = tf.constant([2,1], tf.int32)
tf.tile(a, c)
<tf.Tensor: shape=(4, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6],
[1, 2, 3],
[4, 5, 6]], dtype=int32)>
d = tf.constant([2,2], tf.int32)
tf.tile(a, d)
<tf.Tensor: shape=(4, 6), dtype=int32, numpy=
array([[1, 2, 3, 1, 2, 3],
[4, 5, 6, 4, 5, 6],
[1, 2, 3, 1, 2, 3],
[4, 5, 6, 4, 5, 6]], dtype=int32)>
Args
input A Tensor. 1-D or higher.
multiples A Tensor. Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
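tf.tile's semantics mirror NumPy's np.tile, so the examples above can be sanity-checked without TensorFlow installed; a sketch:

```python
import numpy as np

# np.tile follows the same rule described above: output dimension i has
# input.shape[i] * multiples[i] elements.
a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int32)

# Matches the first tf.tile example: multiples [1, 2] doubles the columns.
assert np.array_equal(np.tile(a, (1, 2)),
                      np.array([[1, 2, 3, 1, 2, 3],
                                [4, 5, 6, 4, 5, 6]]))
# Multiples [2, 1] doubles the rows; [2, 2] doubles both.
assert np.tile(a, (2, 1)).shape == (4, 3)
assert np.tile(a, (2, 2)).shape == (4, 6)
```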
doc_30587 |
Extract a diagonal or construct a diagonal array. This function is the equivalent of numpy.diag that takes masked values into account, see numpy.diag for details. See also numpy.diag
Equivalent function for ndarrays. | |
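A short sketch of both uses (extraction and construction), with a masked element to show the difference from numpy.diag:

```python
import numpy as np
import numpy.ma as ma

# Extraction: masked entries on the diagonal stay masked.
x = ma.array(np.arange(9).reshape(3, 3),
             mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]])
d = ma.diag(x)
assert d.shape == (3,)
assert bool(d.mask[1])            # element (1, 1) was masked

# Construction: a 1-D input builds a diagonal 2-D array.
assert ma.diag(np.array([1, 2, 3])).shape == (3, 3)
```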
doc_30588 | See Migration guide for more details. tf.compat.v1.raw_ops.SdcaShrinkL1
tf.raw_ops.SdcaShrinkL1(
weights, l1, l2, name=None
)
Args
weights A list of Tensor objects with type mutable float32. a list of vectors where each value is the weight associated with a feature group.
l1 A float. Symmetric l1 regularization strength.
l2 A float. Symmetric l2 regularization strength. Should be a positive float.
name A name for the operation (optional).
Returns The created Operation. | |
doc_30589 |
Weight function of the Hermite polynomials. The weight function is \(\exp(-x^2)\) and the interval of integration is \((-\infty, \infty)\). The Hermite polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters
x : array_like
Values at which the weight function will be computed. Returns
w : ndarray
The weight function at x. Notes New in version 1.7.0. | |
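A minimal check of the weight function via numpy.polynomial.hermite.hermweight:

```python
import numpy as np
from numpy.polynomial.hermite import hermweight

x = np.array([0.0, 1.0, 2.0])
w = hermweight(x)
# The weight function is exp(-x**2).
assert np.allclose(w, np.exp(-x ** 2))
```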
doc_30590 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu
tf.raw_ops.QuantizedDepthwiseConv2DWithBiasAndRelu(
input, filter, bias, min_input, max_input, min_filter, max_filter, strides,
padding, out_type=tf.dtypes.qint32, dilations=[1, 1, 1, 1], padding_list=[],
name=None
)
Args
input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The original input tensor.
filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The original filter tensor.
bias A Tensor of type float32. The original bias tensor.
min_input A Tensor of type float32. The float value that the minimum quantized input value represents.
max_input A Tensor of type float32. The float value that the maximum quantized input value represents.
min_filter A Tensor of type float32. The float value that the minimum quantized filter value represents.
max_filter A Tensor of type float32. The float value that the maximum quantized filter value represents.
strides A list of ints. List of stride values.
padding A string from: "SAME", "VALID".
out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32. The type of the output.
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. List of dilation values.
padding_list An optional list of ints. Defaults to [].
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type.
min_output A Tensor of type float32.
max_output A Tensor of type float32. | |
doc_30591 | Return x factorial as an integer. Raises ValueError if x is not integral or is negative. Deprecated since version 3.9: Accepting floats with integral values (like 5.0) is deprecated. | |
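A quick illustration of the contract described above:

```python
import math

assert math.factorial(5) == 120
assert math.factorial(0) == 1   # 0! is defined as 1

# Negative input raises ValueError.
try:
    math.factorial(-1)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for negative input")
```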
doc_30592 |
Return the sizes ('areas') of the elements in the collection. Returns
array
The 'area' of each element. | |
doc_30593 | Update the is_authenticated flag for the given uri or list of URIs. | |
doc_30594 | tf.compat.v1.keras.initializers.he_uniform(
seed=None
)
With distribution="truncated_normal" or "untruncated_normal", samples are drawn from a truncated/untruncated normal distribution with a mean of zero and a standard deviation (after truncation, if used) stddev = sqrt(scale / n) where n is: number of input units in the weight tensor, if mode = "fan_in" number of output units, if mode = "fan_out" average of the numbers of input and output units, if mode = "fan_avg" With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
Args
scale Scaling factor (positive float).
mode One of "fan_in", "fan_out", "fan_avg".
distribution Random distribution to use. One of "normal", "uniform".
seed A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
dtype Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises
ValueError In case of an invalid value for the "scale", "mode", or "distribution" arguments. Methods from_config View source
@classmethod
from_config(
config
)
Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
Args
config A Python dictionary. It will typically be the output of get_config.
Returns An Initializer instance.
get_config View source
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
Returns A JSON-serializable Python dict.
__call__ View source
__call__(
shape, dtype=None, partition_info=None
)
Returns a tensor object initialized as specified by the initializer.
Args
shape Shape of the tensor.
dtype Optional dtype of the tensor. If not provided use the initializer dtype.
partition_info Optional information about the possible partitioning of a tensor. | |
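He-uniform initialization corresponds to scale=2, mode="fan_in", distribution="uniform"; the sampling rule described above can be sketched in plain NumPy, without TensorFlow (this is an illustration, not TF's implementation, and treating the first shape dimension as fan_in is a simplification):

```python
import numpy as np

def he_uniform_sample(shape, rng=None):
    """Sketch of He-uniform init: scale=2, mode="fan_in",
    distribution="uniform", so limit = sqrt(3 * 2 / fan_in)."""
    if rng is None:
        rng = np.random.default_rng(0)
    fan_in = shape[0]          # simplification: first dim as fan_in
    limit = np.sqrt(3.0 * 2.0 / fan_in)
    return rng.uniform(-limit, limit, size=shape)

w = he_uniform_sample((64, 32))
assert w.shape == (64, 32)
# All samples fall within [-limit, limit] = [-sqrt(6/fan_in), sqrt(6/fan_in)].
assert np.all(np.abs(w) <= np.sqrt(6.0 / 64))
```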
doc_30595 |
Single-precision floating-point number type, compatible with C float. Character code: 'f'. Alias on this platform (Linux x86_64): numpy.float32 (32-bit-precision floating-point number type: sign bit, 8 bits exponent, 23 bits mantissa). | |
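The character code and bit layout stated above can be checked directly:

```python
import numpy as np

# 'f' is the character code for single precision.
assert np.dtype('f') == np.float32

# 32 bits total: 1 sign bit + 8 exponent bits + 23 mantissa bits.
info = np.finfo(np.float32)
assert info.bits == 32
assert info.nmant == 23   # mantissa bits
assert info.iexp == 8     # exponent bits
```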
doc_30596 |
Allows the model to jointly attend to information from different representation subspaces. See Attention Is All You Need. \(\text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \dots, \text{head}_h)W^O\)
where \(\text{head}_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)\). Parameters
embed_dim – total dimension of the model.
num_heads – parallel attention heads.
dropout – a Dropout layer on attn_output_weights. Default: 0.0.
bias – add bias as module parameter. Default: True.
add_bias_kv – add bias to the key and value sequences at dim=0.
add_zero_attn – add a new batch of zeros to the key and value sequences at dim=1.
kdim – total number of features in key. Default: None.
vdim – total number of features in value. Default: None. Note that if kdim and vdim are None, they will be set to embed_dim such that query, key, and value have the same number of features. Examples: >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
>>> attn_output, attn_output_weights = multihead_attn(query, key, value)
forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None) [source]
Parameters
query, key, value – map a query and a set of key-value pairs to an output. See “Attention Is All You Need” for more details.
key_padding_mask – if provided, specified padding elements in the key will be ignored by the attention. When given a binary mask and a value is True, the corresponding value on the attention layer will be ignored. When given a byte mask and a value is non-zero, the corresponding value on the attention layer will be ignored
need_weights – output attn_output_weights.
attn_mask – 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all the batches while a 3D mask allows to specify a different mask for the entries of each batch. Shapes for inputs:
query: (L, N, E) where L is the target sequence length, N is the batch size, E is the embedding dimension. key: (S, N, E), where S is the source sequence length, N is the batch size, E is the embedding dimension. value: (S, N, E) where S is the source sequence length, N is the batch size, E is the embedding dimension. key_padding_mask: (N, S) where N is the batch size, S is the source sequence length. If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions will be unchanged. If a BoolTensor is provided, the positions with the value of True will be ignored while the positions with the value of False will be unchanged.
attn_mask: if a 2D mask: (L, S) where L is the target sequence length, S is the source sequence length. If a 3D mask: (N·num_heads, L, S) where N is the batch size, L is the target sequence length, S is the source sequence length. attn_mask ensures that position i is allowed to attend to the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend while the zero positions will be unchanged. If a BoolTensor is provided, positions with True are not allowed to attend while False values will be unchanged. If a FloatTensor is provided, it will be added to the attention weight. Shapes for outputs:
attn_output: (L, N, E) where L is the target sequence length, N is the batch size, E is the embedding dimension. attn_output_weights: (N, L, S) where N is the batch size, L is the target sequence length, S is the source sequence length. | |
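The shape contract above can be illustrated with a plain NumPy sketch of single-head scaled dot-product attention (this is not PyTorch's implementation; batch-first layout is used here for simplicity):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Sketch of a single attention head: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
    # Numerically stable softmax over the source dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# N=batch, L=target length, S=source length, E=embedding dim.
N, L, S, E = 2, 4, 5, 8
rng = np.random.default_rng(0)
Q = rng.standard_normal((N, L, E))
K = rng.standard_normal((N, S, E))
V = rng.standard_normal((N, S, E))

out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (N, L, E) and w.shape == (N, L, S)
assert np.allclose(w.sum(axis=-1), 1.0)   # attention weights sum to 1
```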
doc_30597 | A string indicating the fault type. | |
doc_30598 |
Build a decision tree classifier from the training set (X, y). Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csc_matrix.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels) as integers or strings.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. Splits are also ignored if they would result in any single class carrying a negative weight in either child node.
check_input : bool, default=True
Allows bypassing several input checks. Don’t use this parameter unless you know what you are doing.
X_idx_sorted : deprecated, default=”deprecated”
This parameter is deprecated and has no effect. It will be removed in 1.1 (renaming of 0.26). Deprecated since version 0.24. Returns
self : DecisionTreeClassifier
Fitted estimator. |
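A minimal fit/predict round-trip, assuming scikit-learn is installed (the toy data is made up):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: the class is fully determined by the first feature.
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 1, 0, 1]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
assert clf.score(X, y) == 1.0

# sample_weight lets individual rows count more or less when searching splits.
clf_w = DecisionTreeClassifier(random_state=0).fit(X, y,
                                                   sample_weight=[2, 1, 1, 2])
assert clf_w.predict([[1, 1]])[0] == 1
```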