doc_29900 | See torch.mean() | |
doc_29901 |
Returns whether the kernel is stationary. | |
doc_29902 | tf.estimator.LinearEstimator(
head, feature_columns, model_dir=None, optimizer='Ftrl', config=None,
sparse_combiner='sum', warm_start_from=None
)
Example:

categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001))
def input_fn_train():
    # Returns tf.data.Dataset of (x, y) tuple where y represents label's
    # class index.
    pass
def input_fn_eval():
    # Returns tf.data.Dataset of (x, y) tuple where y represents label's
    # class index.
    pass
def input_fn_predict():
    # Returns tf.data.Dataset of (x, None) tuple.
    pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
for each column in feature_columns:
if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss and predicted output are determined by the specified head.
Args
head A Head instance constructed with a method such as tf.estimator.MultiLabelHead.
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model.
optimizer An instance of tf.keras.optimizers.* used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
config RunConfig object to configure the runtime settings.
sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. For more details, see tf.feature_column.linear_model.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.
For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.
For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied.
For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
A tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple -- in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | |
doc_29903 | Some actions are best if they’re made available to any object in the admin site – the export action defined above would be a good candidate. You can make an action globally available using AdminSite.add_action(). For example:

from django.contrib import admin

admin.site.add_action(export_selected_objects)
This makes the export_selected_objects action globally available as an action named “export_selected_objects”. You can explicitly give the action a name – good if you later want to programmatically remove the action – by passing a second argument to AdminSite.add_action(): admin.site.add_action(export_selected_objects, 'export_selected') | |
doc_29904 | The UUID variant, which determines the internal layout of the UUID. This will be one of the constants RESERVED_NCS, RFC_4122, RESERVED_MICROSOFT, or RESERVED_FUTURE. | |
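UUIDs produced by the standard library's generators carry the RFC 4122 variant, which can be checked directly; a minimal sketch:

```python
import uuid

# Generate a random (version 4) UUID and inspect its variant.
u = uuid.uuid4()
print(u.variant)  # the RFC_4122 constant for stdlib-generated UUIDs
```

The same `variant` attribute works on UUIDs parsed from strings, so it can be used to validate externally supplied identifiers.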
doc_29905 | os.RTLD_NOW
os.RTLD_GLOBAL
os.RTLD_LOCAL
os.RTLD_NODELETE
os.RTLD_NOLOAD
os.RTLD_DEEPBIND
Flags for use with the setdlopenflags() and getdlopenflags() functions. See the Unix manual page dlopen(3) for what the different flags mean. New in version 3.3. | |
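These flags are combined with bitwise OR and handed to sys.setdlopenflags() before importing an extension module; a minimal Unix-only sketch:

```python
import os
import sys

# Save the current flags so we can restore them afterwards.
old_flags = sys.getdlopenflags()

# Resolve all symbols at load time (RTLD_NOW) and make them available
# to subsequently loaded shared libraries (RTLD_GLOBAL).
sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
print(sys.getdlopenflags() == (os.RTLD_NOW | os.RTLD_GLOBAL))

sys.setdlopenflags(old_flags)  # restore the previous behavior
```

The flags only affect extension modules imported after the call; RTLD_DEEPBIND and RTLD_NOLOAD are glibc-specific and may be absent on other platforms.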
doc_29906 |
Callback for dragging in zoom mode. | |
doc_29907 |
[Deprecated] Notes: Deprecated since version 3.5. | |
doc_29908 |
Return complex 2D Gabor filter kernel. Gabor kernel is a Gaussian kernel modulated by a complex harmonic function. Harmonic function consists of an imaginary sine function and a real cosine function. Spatial frequency is inversely proportional to the wavelength of the harmonic and to the standard deviation of a Gaussian kernel. The bandwidth is also inversely proportional to the standard deviation. Parameters
frequencyfloat
Spatial frequency of the harmonic function. Specified in pixels.
thetafloat, optional
Orientation in radians. If 0, the harmonic is in the x-direction.
bandwidthfloat, optional
The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user.
sigma_x, sigma_yfloat, optional
Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction.
n_stdsscalar, optional
The linear size of the kernel is n_stds (3 by default) standard deviations.
offsetfloat, optional
Phase offset of harmonic function in radians. Returns
gcomplex array
Complex filter kernel. References
1
https://en.wikipedia.org/wiki/Gabor_filter
2
https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf Examples >>> from skimage.filters import gabor_kernel
>>> from skimage import io
>>> from matplotlib import pyplot as plt
>>> gk = gabor_kernel(frequency=0.2)
>>> plt.figure()
>>> io.imshow(gk.real)
>>> io.show()
>>> # more ripples (equivalent to increasing the size of the
>>> # Gaussian spread)
>>> gk = gabor_kernel(frequency=0.2, bandwidth=0.1)
>>> plt.figure()
>>> io.imshow(gk.real)
>>> io.show() | |
doc_29909 | Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility. | |
doc_29910 |
Return the picking behavior of the artist. The possible values are described in set_picker. See also
set_picker, pickable, pick | |
doc_29911 |
Provide the rank of values within each group. Parameters
method:{‘average’, ‘min’, ‘max’, ‘first’, ‘dense’}, default ‘average’
average: average rank of group. min: lowest rank in group. max: highest rank in group. first: ranks assigned in order they appear in the array. dense: like ‘min’, but rank always increases by 1 between groups.
ascending:bool, default True
False for ranks by high (1) to low (N).
na_option:{‘keep’, ‘top’, ‘bottom’}, default ‘keep’
keep: leave NA values where they are. top: smallest rank if ascending. bottom: smallest rank if descending.
pct:bool, default False
Compute percentage rank of data within each group.
axis:int, default 0
The axis of the object over which to compute the rank. Returns
DataFrame with ranking of values within each group
See also Series.groupby
Apply a function groupby to a Series. DataFrame.groupby
Apply a function groupby to each row or column of a DataFrame. Examples
>>> df = pd.DataFrame(
... {
... "group": ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
... "value": [2, 4, 2, 3, 5, 1, 2, 4, 1, 5],
... }
... )
>>> df
group value
0 a 2
1 a 4
2 a 2
3 a 3
4 a 5
5 b 1
6 b 2
7 b 4
8 b 1
9 b 5
>>> for method in ['average', 'min', 'max', 'dense', 'first']:
... df[f'{method}_rank'] = df.groupby('group')['value'].rank(method)
>>> df
group value average_rank min_rank max_rank dense_rank first_rank
0 a 2 1.5 1.0 2.0 1.0 1.0
1 a 4 4.0 4.0 4.0 3.0 4.0
2 a 2 1.5 1.0 2.0 1.0 2.0
3 a 3 3.0 3.0 3.0 2.0 3.0
4 a 5 5.0 5.0 5.0 4.0 5.0
5 b 1 1.5 1.0 2.0 1.0 1.0
6 b 2 3.0 3.0 3.0 2.0 3.0
7 b 4 4.0 4.0 4.0 3.0 4.0
8 b 1 1.5 1.0 2.0 1.0 2.0
9 b 5 5.0 5.0 5.0 4.0 5.0 | |
doc_29912 | Returns True if the user account is currently active. | |
doc_29913 | tf.compat.v1.estimator.tpu.TPUEstimatorSpec(
mode, predictions=None, loss=None, train_op=None, eval_metrics=None,
export_outputs=None, scaffold_fn=None, host_call=None, training_hooks=None,
evaluation_hooks=None, prediction_hooks=None
)
See EstimatorSpec for mode, predictions, loss, train_op, and export_outputs.
For evaluation, eval_metrics is a tuple of metric_fn and tensors, where metric_fn runs on CPU to generate metrics and tensors represents the Tensors transferred from TPU system to CPU host and passed to metric_fn. To be precise, TPU evaluation expects a slightly different signature from the tf.estimator.Estimator. While EstimatorSpec.eval_metric_ops expects a dict, TPUEstimatorSpec.eval_metrics is a tuple of metric_fn and tensors. The tensors could be a list of Tensors or dict of names to Tensors. The tensors usually specify the model logits, which are transferred back from TPU system to CPU host. All tensors must be batch-major, i.e., the batch size is the first dimension. Once all tensors are available at CPU host from all shards, they are concatenated (on CPU) and passed as positional arguments to the metric_fn if tensors is a list, or keyword arguments if tensors is a dict. metric_fn takes the tensors and returns a dict from metric string name to the result of calling a metric function, namely a (metric_tensor, update_op) tuple. See TPUEstimator for an MNIST example of how to specify the eval_metrics.
scaffold_fn is a function running on CPU to generate the Scaffold. This function should not capture any Tensors in model_fn.
host_call is a tuple of a function and a list or dictionary of tensors to pass to that function; the function returns a list of Tensors. host_call currently works for train() and evaluate(). The Tensors returned by the function are executed on the CPU on every step, so there is communication overhead when sending tensors from TPU to CPU. To reduce the overhead, try reducing the size of the tensors. The tensors are concatenated along their major (batch) dimension, and so must be >= rank 1. The host_call is useful for writing summaries with tf.contrib.summary.create_file_writer.
Attributes
mode
predictions
loss
train_op
eval_metrics
export_outputs
scaffold_fn
host_call
training_hooks
evaluation_hooks
prediction_hooks
Methods as_estimator_spec View source
as_estimator_spec()
Creates an equivalent EstimatorSpec used by CPU train/eval. | |
doc_29914 | Class method that attempts to find a spec for the module specified by fullname on sys.path or, if defined, on path. For each path entry that is searched, sys.path_importer_cache is checked. If a non-false object is found then it is used as the path entry finder to look for the module being searched for. If no entry is found in sys.path_importer_cache, then sys.path_hooks is searched for a finder for the path entry and, if found, is stored in sys.path_importer_cache along with being queried about the module. If no finder is ever found then None is both stored in the cache and returned. New in version 3.4. Changed in version 3.5: If the current working directory – represented by an empty string – is no longer valid then None is returned but no value is cached in sys.path_importer_cache. | |
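The lookup described above can be invoked directly through importlib.machinery.PathFinder; a minimal sketch:

```python
from importlib.machinery import PathFinder

# Find the spec for a stdlib module the same way a normal import would,
# consulting sys.path_importer_cache and sys.path_hooks along the way.
spec = PathFinder.find_spec('json')
print(spec.name)            # the resolved module name
print(spec.origin)          # filesystem path to json/__init__.py

# A name that exists nowhere on sys.path yields None.
print(PathFinder.find_spec('no_such_module_xyz'))
```

Passing an explicit `path` list as the second argument restricts the search to those directories, which is how package submodules are located.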
doc_29915 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
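The `<component>__<parameter>` syntax can be seen on a small pipeline; a sketch (estimator choices here are illustrative, not mandated by the docs):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression())])

# Reach into nested components with the double-underscore syntax.
pipe.set_params(clf__C=0.5, scale__with_mean=False)
print(pipe.get_params()['clf__C'])            # 0.5
print(pipe.get_params()['scale__with_mean'])  # False
```

Because set_params returns self, calls can be chained, which is what enables grid search to reconfigure an estimator in place.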
doc_29916 |
Predict on the data matrix X using the ClassifierChain model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input data. Returns
Y_predarray-like of shape (n_samples, n_classes)
The predicted values. | |
doc_29917 |
Return the GridSpec instance associated with the subplot. | |
doc_29918 |
Bases: matplotlib.patches.Patch An axis spine -- the line noting the data area boundaries. Spines are the lines connecting the axis tick marks and noting the boundaries of the data area. They can be placed at arbitrary positions. See set_position for more information. The default position is ('outward', 0). Spines are subclasses of Patch, and inherit much of their behavior. Spines draw a line, a circle, or an arc depending on whether set_patch_line, set_patch_circle, or set_patch_arc has been called. Line-like is the default. Parameters
axesAxes
The Axes instance containing the spine.
spine_typestr
The spine type.
pathPath
The Path instance used to draw the spine. Other Parameters
**kwargs
Valid keyword arguments are:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha unknown
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float classmethodarc_spine(axes, spine_type, center, radius, theta1, theta2, **kwargs)[source]
Create and return an arc Spine.
classmethodcircular_spine(axes, center, radius, **kwargs)[source]
Create and return a circular Spine.
cla()[source]
[Deprecated] Notes Deprecated since version 3.4:
clear()[source]
Clear the current spine.
draw(renderer)[source]
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses.
get_bounds()[source]
Get the bounds of the spine.
get_patch_transform()[source]
Return the Transform instance mapping patch coordinates to data coordinates. For example, one may define a patch of a circle which represents a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinate) by 5.
get_path()[source]
Return the path of this patch.
get_position()[source]
Return the spine position.
get_spine_transform()[source]
Return the spine transform.
get_window_extent(renderer=None)[source]
Return the window extent of the spines in display space, including padding for ticks (but not their labels) See also matplotlib.axes.Axes.get_tightbbox
matplotlib.axes.Axes.get_window_extent
classmethodlinear_spine(axes, spine_type, **kwargs)[source]
Create and return a linear Spine.
register_axis(axis)[source]
Register an axis. An axis should be registered with its corresponding spine from the Axes instance. This allows the spine to clear any axis properties when needed.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, bounds=<UNSET>, capstyle=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, fill=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, patch_arc=<UNSET>, patch_circle=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, position=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
bounds (low: float, high: float)
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
patch_arc unknown
patch_circle unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
position unknown
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
set_bounds(low=None, high=None)[source]
Set the spine bounds. Parameters
lowfloat or None, optional
The lower spine bound. Passing None leaves the limit unchanged. The bounds may also be passed as the tuple (low, high) as the first positional argument.
highfloat or None, optional
The higher spine bound. Passing None leaves the limit unchanged.
set_color(c)[source]
Set the edgecolor. Parameters
ccolor
Notes This method does not modify the facecolor (which defaults to "none"), unlike the Patch.set_color method defined in the parent class. Use Patch.set_facecolor to set the facecolor.
set_patch_arc(center, radius, theta1, theta2)[source]
Set the spine to be arc-like.
set_patch_circle(center, radius)[source]
Set the spine to be circular.
set_patch_line()[source]
Set the spine to be linear.
set_position(position)[source]
Set the position of the spine. Spine position is specified by a 2 tuple of (position type, amount). The position types are: 'outward': place the spine out from the data area by the specified number of points. (Negative values place the spine inwards.) 'axes': place the spine at the specified Axes coordinate (0 to 1). 'data': place the spine at the specified data coordinate. Additionally, shorthand notations define special positions: 'center' -> ('axes', 0.5), 'zero' -> ('data', 0.0).
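The position modes above can be sketched as follows (a minimal illustration; it assumes matplotlib 3.4+ for the list-indexing broadcast shown, and uses the Agg backend so nothing needs a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; nothing is drawn on screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([-2, -1, 0, 1, 2], [4, 1, 0, 1, 4])

# Place the left spine at the data x=0 line, and the bottom spine via
# the 'zero' shorthand (equivalent to ('data', 0.0)).
ax.spines["left"].set_position(("data", 0.0))
ax.spines["bottom"].set_position("zero")

# Clip the left spine to y in [0, 4] and hide the remaining two spines
# (list indexing returns a SpinesProxy that broadcasts the set_* call).
ax.spines["left"].set_bounds(0, 4)
ax.spines[["top", "right"]].set_visible(False)
```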
classmatplotlib.spines.Spines(**kwargs)[source]
Bases: collections.abc.MutableMapping The container of all Spines in an Axes. The interface is dict-like mapping names (e.g. 'left') to Spine objects. Additionally it implements some pandas.Series-like features like accessing elements by attribute: spines['top'].set_visible(False)
spines.top.set_visible(False)
Multiple spines can be addressed simultaneously by passing a list: spines[['top', 'right']].set_visible(False)
Use an open slice to address all spines: spines[:].set_visible(False)
The latter two indexing methods will return a SpinesProxy that broadcasts all set_* calls to its members, but cannot be used for any other operation. classmethodfrom_dict(d)[source]
classmatplotlib.spines.SpinesProxy(spine_dict)[source]
Bases: object A proxy to broadcast set_* method calls to all contained Spines. The proxy cannot be used for any other operations on its members. The supported methods are determined dynamically based on the contained spines. If not all spines support a given method, it's executed only on the subset of spines that support it. | |
doc_29919 |
Performs logarithmic correction on the input image. This function transforms the input image pixelwise according to the equation O = gain*log(1 + I) after scaling each pixel to the range 0 to 1. For inverse logarithmic correction, the equation is O = gain*(2**I - 1). Parameters
imagendarray
Input image.
gainfloat, optional
The constant multiplier. Default value is 1.
invbool, optional
If True, it performs inverse logarithmic correction, else correction will be logarithmic. Defaults to False. Returns
outndarray
Logarithm corrected output image. See also
adjust_gamma
References
1
http://www.ece.ucsb.edu/Faculty/Manjunath/courses/ece178W03/EnhancePart1.pdf | |
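The two equations above can be sketched directly in NumPy (a hand-rolled illustration, not skimage's implementation; it assumes the input is a float array already scaled to [0, 1], and uses base-2 logarithms so that gain=1 maps the full range back onto [0, 1]):

```python
import numpy as np

def adjust_log_sketch(image, gain=1, inv=False):
    """Hand-rolled sketch of the log-correction equations above.

    Assumes `image` is a float array already in [0, 1]; the real
    skimage function also rescales other dtypes first.
    """
    image = np.asarray(image, dtype=float)
    if inv:
        # Inverse logarithmic correction: O = gain * (2**I - 1)
        return gain * (2.0 ** image - 1.0)
    # Logarithmic correction: O = gain * log2(1 + I)
    return gain * np.log2(1.0 + image)

img = np.array([[0.0, 0.25], [0.5, 1.0]])
out = adjust_log_sketch(img)             # brightens mid-tones
back = adjust_log_sketch(out, inv=True)  # the inverse roughly undoes it
```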
doc_29920 | Auto-negotiate the highest protocol version like PROTOCOL_TLS, but only support server-side SSLSocket connections. New in version 3.6. | |
doc_29921 | Signoff: commit changes, unlock mailbox, drop connection. | |
doc_29922 | Play the SystemExclamation sound. | |
doc_29923 |
Alias for set_antialiased. | |
doc_29924 | bytearray.removeprefix(prefix, /)
If the binary data starts with the prefix string, return bytes[len(prefix):]. Otherwise, return a copy of the original binary data: >>> b'TestHook'.removeprefix(b'Test')
b'Hook'
>>> b'BaseTestCase'.removeprefix(b'Test')
b'BaseTestCase'
The prefix may be any bytes-like object. Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made. New in version 3.9. | |
doc_29925 |
pygame object for video overlay graphics Overlay(format, (width, height)) -> Overlay The Overlay objects provide support for accessing hardware video overlays. Video overlays do not use standard RGB pixel formats, and can use multiple resolutions of data to create a single image. The Overlay objects represent lower level access to the display hardware. To use the object you must understand the technical details of video overlays. The Overlay format determines the type of pixel data used. Not all hardware will support all types of overlay formats. Here is a list of available format types: YV12_OVERLAY, IYUV_OVERLAY, YUY2_OVERLAY, UYVY_OVERLAY, YVYU_OVERLAY The width and height arguments control the size for the overlay image data. The overlay image can be displayed at any size, not just the resolution of the overlay. The overlay objects are always visible, and always show above the regular display contents. display()
set the overlay pixel data display((y, u, v)) -> None display() -> None Display the YUV data in SDL's overlay planes. The y, u, and v arguments are strings of binary data. The data must be in the correct format used to create the Overlay. If no argument is passed in, the Overlay will simply be redrawn with the current data. This can be useful when the Overlay is not really hardware accelerated. The strings are not validated, and improperly sized strings could crash the program.
set_location()
control where the overlay is displayed set_location(rect) -> None Set the location for the overlay. The overlay will always be shown relative to the main display Surface. This does not actually redraw the overlay, it will be updated on the next call to Overlay.display().
get_hardware()
test if the Overlay is hardware accelerated get_hardware() -> int Returns a True value when the Overlay is hardware accelerated. If the platform does not support acceleration, software rendering is used. | |
doc_29926 |
Return whether the artist is animated. | |
doc_29927 |
Rename the fields from a flexible-datatype ndarray or recarray. Nested fields are supported. Parameters
basendarray
Input array whose fields must be modified.
namemapperdictionary
Dictionary mapping old field names to their new version. Examples >>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))],
... dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])])
>>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'})
array([(1, (2., [ 3., 30.])), (4, (5., [ 6., 60.]))],
dtype=[('A', '<i8'), ('b', [('ba', '<f8'), ('BB', '<f8', (2,))])]) | |
doc_29928 | Return the time in seconds since the epoch as a floating point number. The specific date of the epoch and the handling of leap seconds is platform dependent. On Windows and most Unix systems, the epoch is January 1, 1970, 00:00:00 (UTC) and leap seconds are not counted towards the time in seconds since the epoch. This is commonly referred to as Unix time. To find out what the epoch is on a given platform, look at gmtime(0). Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls. The number returned by time() may be converted into a more common time format (i.e. year, month, day, hour, etc…) in UTC by passing it to gmtime() function or in local time by passing it to the localtime() function. In both cases a struct_time object is returned, from which the components of the calendar date may be accessed as attributes. | |
doc_29929 | Return value as a plist-formatted bytes object. See the documentation for dump() for an explanation of the keyword arguments of this function. New in version 3.4. | |
doc_29930 |
Set the dimension of the drawing canvas. | |
doc_29931 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
center unknown
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path_effects AbstractPathEffect
picker None or bool or float or callable
radius unknown
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
theta1 unknown
theta2 unknown
transform Transform
url str
visible bool
width unknown
zorder float | |
doc_29932 | You can also override the get_test_func() method to have the mixin use a differently named function for its checks (instead of test_func()). | |
doc_29933 |
Bases: matplotlib.transforms.Transform The base polar transform. This handles the projection of theta and r into Cartesian coordinate space x and y, but does not perform the ultimate affine transformation into the correct position. Parameters
shorthand_namestr
A string representing the "name" of the transform. The name carries no significance other than to improve the readability of str(transform) when DEBUG=True. has_inverse=True
True if this transform has a corresponding inverse transform.
input_dims=2
The number of input dimensions of this transform. Must be overridden (with integers) in the subclass.
inverted()[source]
Return the corresponding inverse transformation. It holds x == self.inverted().transform(self.transform(x)). The return value of this method should be treated as temporary. An update to self does not cause a corresponding update to its inverted copy.
output_dims=2
The number of output dimensions of this transform. Must be overridden (with integers) in the subclass.
transform_non_affine(tr)[source]
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters
valuesarray
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length input_dims or shape (N x output_dims), depending on the input.
transform_path_non_affine(path)[source]
Apply the non-affine part of this transform to Path path, returning a new Path. transform_path(path) is equivalent to transform_path_affine(transform_path_non_affine(values)). | |
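The projection and its inverse can be sketched in plain NumPy (the underlying math of the non-affine part, independent of matplotlib's actual implementation), including the round-trip identity `x == self.inverted().transform(self.transform(x))`:

```python
import numpy as np

def polar_to_cartesian(tr):
    """Sketch of the non-affine polar projection: (theta, r) -> (x, y)."""
    theta, r = tr[:, 0], tr[:, 1]
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def cartesian_to_polar(xy):
    """Sketch of the inverse transform: (x, y) -> (theta, r)."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.arctan2(y, x), np.hypot(x, y)])

pts = np.array([[0.0, 1.0], [np.pi / 2, 2.0]])  # (theta, r) pairs
xy = polar_to_cartesian(pts)                    # shape (N, 2) in, (N, 2) out
# Round trip mirrors x == self.inverted().transform(self.transform(x)):
assert np.allclose(cartesian_to_polar(xy), pts)
```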
doc_29934 |
For each element, return True if there are only numeric characters in the element. Calls unicode.isnumeric element-wise. Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. U+2155, VULGAR FRACTION ONE FIFTH. Parameters
aarray_like, unicode
Input array. Returns
outndarray, bool
Array of booleans of same shape as a. See also unicode.isnumeric | |
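A quick illustration; note that U+2155 (VULGAR FRACTION ONE FIFTH) counts as numeric even though it is not a digit:

```python
import numpy as np

# Element-wise numeric check over a unicode array.
a = np.array(['123', 'abc', '\u2155', '1a'])
mask = np.char.isnumeric(a)
print(mask)  # [ True False  True False]
```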
doc_29935 | sklearn.cluster.dbscan(X, eps=0.5, *, min_samples=5, metric='minkowski', metric_params=None, algorithm='auto', leaf_size=30, p=2, sample_weight=None, n_jobs=None) [source]
Perform DBSCAN clustering from vector array or distance matrix. Read more in the User Guide. Parameters
X{array-like, sparse (CSR) matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
A feature array, or array of distances between samples if metric='precomputed'.
epsfloat, default=0.5
The maximum distance between two samples for one to be considered as in the neighborhood of the other. This is not a maximum bound on the distances of points within a cluster. This is the most important DBSCAN parameter to choose appropriately for your data set and distance function.
min_samplesint, default=5
The number of samples (or total weight) in a neighborhood for a point to be considered as a core point. This includes the point itself.
metricstr or callable, default=’minkowski’
The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by sklearn.metrics.pairwise_distances for its metric parameter. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors.
metric_paramsdict, default=None
Additional keyword arguments for the metric function. New in version 0.19.
algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
The algorithm to be used by the NearestNeighbors module to compute pointwise distances and find nearest neighbors. See NearestNeighbors module documentation for details.
leaf_sizeint, default=30
Leaf size passed to BallTree or cKDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
pfloat, default=2
The power of the Minkowski metric to be used to calculate distance between points.
sample_weightarray-like of shape (n_samples,), default=None
Weight of each sample, such that a sample with a weight of at least min_samples is by itself a core sample; a sample with negative weight may inhibit its eps-neighbor from being core. Note that weights are absolute, and default to 1.
n_jobsint, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. If precomputed distances are used, parallel execution is not available and thus n_jobs will have no effect. Returns
core_samplesndarray of shape (n_core_samples,)
Indices of core samples.
labelsndarray of shape (n_samples,)
Cluster labels for each point. Noisy samples are given the label -1. See also
DBSCAN
An estimator interface for this clustering algorithm.
OPTICS
A similar estimator interface clustering at multiple values of eps. Our implementation is optimized for memory usage. Notes For an example, see examples/cluster/plot_dbscan.py. This implementation bulk-computes all neighborhood queries, which increases the memory complexity to O(n.d) where d is the average number of neighbors, while original DBSCAN had memory complexity O(n). It may attract a higher memory complexity when querying these nearest neighborhoods, depending on the algorithm. One way to avoid the query complexity is to pre-compute sparse neighborhoods in chunks using NearestNeighbors.radius_neighbors_graph with mode='distance', then using metric='precomputed' here. Another way to reduce memory and computation time is to remove (near-)duplicate points and use sample_weight instead. cluster.optics provides a similar clustering with lower memory usage. References Ester, M., H. P. Kriegel, J. Sander, and X. Xu, “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise”. In: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI Press, pp. 226-231. 1996 Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu, X. (2017). DBSCAN revisited, revisited: why and how you should (still) use DBSCAN. ACM Transactions on Database Systems (TODS), 42(3), 19. | |
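A minimal usage sketch with toy data (hypothetical values chosen so the structure is obvious: two tight blobs and one distant outlier, which DBSCAN labels -1):

```python
import numpy as np
from sklearn.cluster import dbscan

# Two well-separated blobs plus one far-away outlier.
X = np.array([[1.0, 1.0], [1.1, 1.0], [0.9, 1.1],
              [8.0, 8.0], [8.1, 8.0], [7.9, 8.1],
              [50.0, 50.0]])

core_samples, labels = dbscan(X, eps=0.5, min_samples=2)
print(labels)  # the lone outlier is labeled -1 (noise)
```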
doc_29936 |
Returns the information about the current DataLoader iterator worker process. When called in a worker, this returns an object guaranteed to have the following attributes:
id: the current worker id.
num_workers: the total number of workers.
seed: the random seed set for the current worker. This value is determined by main process RNG and the worker id. See DataLoader’s documentation for more details.
dataset: the copy of the dataset object in this process. Note that this will be a different object in a different process than the one in the main process. When called in the main process, this returns None. Note When used in a worker_init_fn passed over to DataLoader, this method can be useful to set up each worker process differently, for instance, using worker_id to configure the dataset object to only read a specific fraction of a sharded dataset, or use seed to seed other libraries used in dataset code (e.g., NumPy). | |
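The sharding arithmetic a worker_init_fn typically performs can be sketched as a pure function (in real use, worker_id and num_workers would come from torch.utils.data.get_worker_info()):

```python
def shard_bounds(dataset_len, worker_id, num_workers):
    """Contiguous [start, end) slice of the dataset for one worker.

    Sketch of the per-worker split a worker_init_fn might apply using
    get_worker_info().id and .num_workers; when the length does not
    divide evenly, the first workers each take one extra item.
    """
    per_worker, remainder = divmod(dataset_len, num_workers)
    start = worker_id * per_worker + min(worker_id, remainder)
    end = start + per_worker + (1 if worker_id < remainder else 0)
    return start, end

# 10 items across 3 workers -> shard sizes 4, 3, 3 covering the whole range.
print([shard_bounds(10, w, 3) for w in range(3)])  # [(0, 4), (4, 7), (7, 10)]
```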
doc_29937 |
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses. | |
doc_29938 | Out-of-place version of torch.Tensor.scatter_add_() | |
doc_29939 | Encapsulate an XML error or warning. This class can contain basic error or warning information from either the XML parser or the application: it can be subclassed to provide additional functionality or to add localization. Note that although the handlers defined in the ErrorHandler interface receive instances of this exception, it is not required to actually raise the exception — it is also useful as a container for information. When instantiated, msg should be a human-readable description of the error. The optional exception parameter, if given, should be None or an exception that was caught by the parsing code and is being passed along as information. This is the base class for the other SAX exception classes. | |
doc_29940 |
This is equivalent to self.log_prob(input).argmax(dim=1), but is more efficient in some cases. Parameters
input (Tensor) – a minibatch of examples Returns
a class with the highest probability for each example Return type
output (Tensor) Shape:
Input: (N, in_features)
Output: (N) | |
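The equivalence rests on log being monotonic, so the argmax of the log-probabilities matches the argmax of the probabilities; a torch-free sketch in NumPy:

```python
import numpy as np

# predict() picks the class with the highest probability per example,
# i.e. the argmax over the class dimension of the log-probabilities.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
log_probs = np.log(probs)
pred = log_probs.argmax(axis=1)
print(pred)  # [1 0]
assert (pred == probs.argmax(axis=1)).all()  # log is monotonic
```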
doc_29941 | Hides the tab specified by tab_id. The tab will not be displayed, but the associated window remains managed by the notebook and its configuration remembered. Hidden tabs may be restored with the add() command. | |
doc_29942 | The subdomain that the blueprint should be active for, None otherwise. | |
doc_29943 | This method is called to handle an HTML doctype declaration (e.g. <!DOCTYPE html>). The decl parameter will be the entire contents of the declaration inside the <!...> markup (e.g. 'DOCTYPE html'). | |
doc_29944 |
Instances of autocast serve as context managers or decorators that allow regions of your script to run in mixed precision. In these regions, CUDA ops run in an op-specific dtype chosen by autocast to improve performance while maintaining accuracy. See the Autocast Op Reference for details. When entering an autocast-enabled region, Tensors may be any type. You should not call .half() on your model(s) or inputs when using autocasting. autocast should wrap only the forward pass(es) of your network, including the loss computation(s). Backward passes under autocast are not recommended. Backward ops run in the same type that autocast used for corresponding forward ops. Example: # Creates model and optimizer in default precision
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)
for input, target in data:
optimizer.zero_grad()
# Enables autocasting for the forward pass (model + loss)
with autocast():
output = model(input)
loss = loss_fn(output, target)
# Exits the context manager before backward()
loss.backward()
optimizer.step()
See the Automatic Mixed Precision examples for usage (along with gradient scaling) in more complex scenarios (e.g., gradient penalty, multiple models/losses, custom autograd functions). autocast can also be used as a decorator, e.g., on the forward method of your model: class AutocastModel(nn.Module):
...
@autocast()
def forward(self, input):
...
Floating-point Tensors produced in an autocast-enabled region may be float16. After returning to an autocast-disabled region, using them with floating-point Tensors of different dtypes may cause type mismatch errors. If so, cast the Tensor(s) produced in the autocast region back to float32 (or other dtype if desired). If a Tensor from the autocast region is already float32, the cast is a no-op, and incurs no additional overhead. Example: # Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
# torch.mm is on autocast's list of ops that should run in float16.
# Inputs are float32, but the op runs in float16 and produces float16 output.
# No manual casts are required.
e_float16 = torch.mm(a_float32, b_float32)
# Also handles mixed input types
f_float16 = torch.mm(d_float32, e_float16)
# After exiting autocast, calls f_float16.float() to use with d_float32
g_float32 = torch.mm(d_float32, f_float16.float())
Type mismatch errors in an autocast-enabled region are a bug; if this is what you observe, please file an issue. autocast(enabled=False) subregions can be nested in autocast-enabled regions. Locally disabling autocast can be useful, for example, if you want to force a subregion to run in a particular dtype. Disabling autocast gives you explicit control over the execution type. In the subregion, inputs from the surrounding region should be cast to dtype before use: # Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")
with autocast():
e_float16 = torch.mm(a_float32, b_float32)
with autocast(enabled=False):
# Calls e_float16.float() to ensure float32 execution
# (necessary because e_float16 was created in an autocasted region)
f_float32 = torch.mm(c_float32, e_float16.float())
# No manual casts are required when re-entering the autocast-enabled region.
# torch.mm again runs in float16 and produces float16 output, regardless of input types.
g_float16 = torch.mm(d_float32, f_float32)
The autocast state is thread-local. If you want it enabled in a new thread, the context manager or decorator must be invoked in that thread. This affects torch.nn.DataParallel and torch.nn.parallel.DistributedDataParallel when used with more than one GPU per process (see Working with Multiple GPUs). Parameters
enabled (bool, optional, default=True) – Whether autocasting should be enabled in the region. | |
doc_29945 | tf.distribute.OneDeviceStrategy(
device
)
Using this strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via strategy.run will also be placed on the specified device. Typical usage of this strategy could be testing your code with the tf.distribute.Strategy API before switching to other strategies which actually distribute to multiple devices/machines. For example: strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
with strategy.scope():
v = tf.Variable(1.0)
print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0
def step_fn(x):
return x * 2
result = 0
for i in range(10):
result += strategy.run(step_fn, args=(i,))
print(result) # 90
Args
device Device string identifier for the device on which the variables should be placed. See class docs for more details on how the device is used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0"
Attributes
cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"],
'ps': ["localhost:34567"]
},
'task': {'type': 'worker', 'index': 0}
})
# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
# Perform something that's only applicable on workers. Since we set this
# as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
# Perform something that's only applicable on parameter servers. Since we
# set this as a worker above, this block will not run on this particular
# instance.
For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.
extended tf.distribute.StrategyExtended with additional methods.
num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source
distribute_datasets_from_function(
dataset_fn, options=None
)
Distributes tf.data.Dataset instances created by calls to dataset_fn. dataset_fn will be called once for each worker in the strategy. In this case, we only have one worker and one device so dataset_fn is called once. The dataset_fn should take an tf.distribute.InputContext instance where information about batching and input replication can be accessed: def dataset_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
return d.shard(
input_context.num_input_pipelines, input_context.input_pipeline_id)
inputs = strategy.distribute_datasets_from_function(dataset_fn)
for batch in inputs:
replica_results = strategy.run(replica_fn, args=(batch,))
Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size.
Args
dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns A "distributed Dataset", which the caller can iterate over like regular datasets.
experimental_distribute_dataset View source
experimental_distribute_dataset(
dataset, options=None
)
Distributes a tf.data.Dataset instance provided via dataset. In this case, there is only one device, so this is only a thin wrapper around the input dataset. It will, however, prefetch the input data to the specified device. The returned distributed dataset can be iterated over similar to how regular datasets can.
Note: Currently, the user cannot add any more transformations to a distributed dataset.
Example: strategy = tf.distribute.OneDeviceStrategy("/gpu:0")
dataset = tf.data.Dataset.range(10).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
for x in dist_dataset:
print(x) # [0, 1], [2, 3],...
Args: dataset: tf.data.Dataset to be prefetched to device. options: tf.distribute.InputOptions used to control options on how this dataset is distributed. Returns: A "distributed Dataset" that the caller can iterate over. experimental_distribute_values_from_function View source
experimental_distribute_values_from_function(
value_fn
)
Generates tf.distribute.DistributedValues from value_fn. This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args
value_fn The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns A tf.distribute.DistributedValues containing a value for each replica.
Example usage: Return constant value per replica:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return tf.constant(1.)
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
Distribute values in array based on replica_id:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(3.0, 2.0)
Specify values using num_replicas_in_sync:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
return ctx.num_replicas_in_sync
distributed_values = (
strategy.experimental_distribute_values_from_function(
value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
(2, 2)
Place values on devices and distribute: strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
with tf.device(worker_devices[i]):
multiple_values.append(tf.constant(1.0))
def value_fn(ctx):
return multiple_values[ctx.replica_id_in_sync_group]
distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
experimental_local_results View source
experimental_local_results(
value
)
Returns the list of all local per-replica values contained in value. In OneDeviceStrategy, the value is always expected to be a single value, so the result is just the value in a tuple.
Args
value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope.
Returns A tuple of values contained in value. If value represents a single value, this returns (value,).
gather View source
gather(
value, axis
)
Gather value across replicas along axis to the current device. Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal to the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
[2],
[1],
[2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]],
[[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5],
[0, 1, 2],
[3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
[3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args
value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns A Tensor that's the concatenation of value across replicas along axis dimension.
reduce View source
reduce(
reduce_op, value, axis
)
Reduce value across replicas. In OneDeviceStrategy, there is only one replica, so if axis=None, value is simply returned. If axis is specified as something other than None, such as axis=0, value is reduced along that axis and returned. Example:
t = tf.range(10)
result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=None).numpy()
# result: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
result = strategy.reduce(tf.distribute.ReduceOp.SUM, t, axis=0).numpy()
# result: 45
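The difference between axis=None and an integer axis can be sketched in plain Python (a hypothetical helper, not part of the tf.distribute API):

```python
# Pure-Python sketch of the two reduce modes shown above, for a
# single-replica strategy such as OneDeviceStrategy. Hypothetical
# helper, not part of the tf.distribute API.
def reduce_sum(value, axis=None):
    # axis=None: with one replica there is nothing to combine, so the
    # value is returned unchanged; axis=0 reduces within the replica's
    # 1-D "tensor".
    if axis is None:
        return value
    return sum(value)

t = list(range(10))
no_axis = reduce_sum(t, axis=None)   # the original [0, 1, ..., 9]
with_axis = reduce_sum(t, axis=0)    # 45
```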
Args
reduce_op A tf.distribute.ReduceOp value specifying how values should be combined.
value A "per replica" value, e.g. returned by run to be combined into a single tensor.
axis Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns A Tensor.
run View source
run(
fn, args=(), kwargs=None, options=None
)
Run fn on each replica, with the given arguments. In OneDeviceStrategy, fn is simply called within a device scope for the given device, with the provided arguments.
Args
fn The function to run. The output must be a tf.nest of Tensors.
args (Optional) Positional arguments to fn.
kwargs (Optional) Keyword arguments to fn.
options (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn.
Returns Return value from running fn.
scope View source
scope()
Returns a context manager selecting this Strategy as current. Inside a with strategy.scope(): code block, this thread will use a variable creator set by strategy, and will enter its "cross-replica context". In OneDeviceStrategy, all variables created inside strategy.scope() will be on device specified at strategy construction time. See example in the docs for this class.
Returns A context manager to use for creating variables with this strategy. | |
doc_29946 | Create a kqueue object from a given file descriptor. | |
doc_29947 |
Set the artist offset transform. Parameters
transOffsetTransform | |
doc_29948 |
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization: y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. The mean and standard-deviation are calculated separately over each group. \gamma and \beta are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). This layer uses statistics computed from input data in both training and evaluation modes. Parameters
num_groups (int) – number of groups to separate the channels into
num_channels (int) – number of channels expected in input
eps – a value added to the denominator for numerical stability. Default: 1e-5
affine – a boolean value that when set to True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True. Shape:
Input: (N, C, *) where C = num_channels
Output: (N, C, *) (same shape as input)
Examples:
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input) | |
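The normalization above can be illustrated with a pure-Python sketch for a single sample, using the biased variance estimator mentioned in the description (a hypothetical helper, not the torch.nn implementation):

```python
import math

# Pure-Python sketch of GroupNorm for one sample (affine disabled).
# Hypothetical helper, not the torch.nn implementation.
def group_norm(x, num_groups, eps=1e-5):
    # x: per-channel scalar values for a single sample (C values).
    group_size = len(x) // num_groups
    out = []
    for g in range(num_groups):
        group = x[g * group_size:(g + 1) * group_size]
        mean = sum(group) / len(group)
        # Biased estimator, matching torch.var(input, unbiased=False).
        var = sum((v - mean) ** 2 for v in group) / len(group)
        out.extend((v - mean) / math.sqrt(var + eps) for v in group)
    return out

# Two channels in one group: mean = 2.0, biased var = 1.0,
# so the output is approximately [-1.0, 1.0] (up to eps).
y = group_norm([1.0, 3.0], num_groups=1)
```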
doc_29949 |
Applies fit_predict of last step in pipeline after transforms. Applies fit_transforms of a pipeline to the data, followed by the fit_predict method of the final estimator in the pipeline. Valid only if the final estimator implements fit_predict. Parameters
Xiterable
Training data. Must fulfill input requirements of first step of the pipeline.
yiterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
**fit_paramsdict of string -> object
Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p. Returns
y_predarray-like | |
doc_29950 | See Migration guide for more details. tf.compat.v1.raw_ops.PartitionedCall
tf.raw_ops.PartitionedCall(
args, Tout, f, config='', config_proto='',
executor_type='', name=None
)
Args
args A list of Tensor objects. A list of input tensors.
Tout A list of tf.DTypes. A list of output types.
f A function decorated with @Defun. A function that takes 'args', a list of tensors, and returns 'output', another list of tensors. Input and output types are specified by 'Tin' and 'Tout'. The function body of f will be placed and partitioned across devices, setting this op apart from the regular Call op.
config An optional string. Defaults to "".
config_proto An optional string. Defaults to "".
executor_type An optional string. Defaults to "".
name A name for the operation (optional).
Returns A list of Tensor objects of type Tout. | |
doc_29951 |
Return a list of (width, height, descent) tuples for ticklabels. Empty labels are left out. | |
doc_29952 |
Initializes the default distributed process group, and this will also initialize the distributed package. There are 2 main ways to initialize a process group:
Specify store, rank, and world_size explicitly.
Specify init_method (a URL string) which indicates where/how to discover peers. Optionally specify rank and world_size, or encode all required parameters in the URL and omit them.
If neither is specified, init_method is assumed to be “env://”. Parameters
backend (str or Backend) – The backend to use. Depending on build-time configurations, valid values include mpi, gloo, and nccl. This field should be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). If using multiple processes per machine with nccl backend, each process must have exclusive access to every GPU it uses, as sharing GPUs between processes can result in deadlocks.
init_method (str, optional) – URL specifying how to initialize the process group. Default is “env://” if no init_method or store is specified. Mutually exclusive with store.
world_size (int, optional) – Number of processes participating in the job. Required if store is specified.
rank (int, optional) – Rank of the current process (it should be a number between 0 and world_size-1). Required if store is specified.
store (Store, optional) – Key/value store accessible to all workers, used to exchange connection/address information. Mutually exclusive with init_method.
timeout (timedelta, optional) – Timeout for operations executed against the process group. Default value equals 30 minutes. This is applicable for the gloo backend. For nccl, this is applicable only if the environment variable NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1. When NCCL_BLOCKING_WAIT is set, this is the duration for which the process will block and wait for collectives to complete before throwing an exception. When NCCL_ASYNC_ERROR_HANDLING is set, this is the duration after which collectives will be aborted asynchronously and the process will crash. NCCL_BLOCKING_WAIT will provide errors to the user which can be caught and handled, but due to its blocking nature, it has a performance overhead. On the other hand, NCCL_ASYNC_ERROR_HANDLING has very little performance overhead, but crashes the process on errors. This is done since CUDA execution is async and it is no longer safe to continue executing user code since failed async NCCL operations might result in subsequent CUDA operations running on corrupted data. Only one of these two environment variables should be set.
group_name (str, optional, deprecated) – Group name. To enable backend == Backend.MPI, PyTorch needs to be built from source on a system that supports MPI. | |
doc_29953 | Acts just like HttpResponse but uses a 403 status code. | |
doc_29954 | Close all file descriptors from fd_low (inclusive) to fd_high (exclusive), ignoring errors. Equivalent to (but much faster than): for fd in range(fd_low, fd_high):
try:
os.close(fd)
except OSError:
pass | |
doc_29955 | Called when the test case test raises an unexpected exception. err is a tuple of the form returned by sys.exc_info(): (type, value,
traceback). The default implementation appends a tuple (test, formatted_err) to the instance’s errors attribute, where formatted_err is a formatted traceback derived from err. | |
doc_29956 | tf.compat.v1.train.polynomial_decay(
learning_rate, global_step, decay_steps, end_learning_rate=0.0001, power=1.0,
cycle=False, name=None
)
It is commonly observed that a monotonically decreasing learning rate, whose degree of change is carefully chosen, results in a better performing model. This function applies a polynomial decay function to a provided initial learning_rate to reach an end_learning_rate in the given decay_steps. It requires a global_step value to compute the decayed learning rate. You can just pass a TensorFlow variable that you increment at each training step. The function returns the decayed learning rate. It is computed as:
global_step = min(global_step, decay_steps)
decayed_learning_rate = (learning_rate - end_learning_rate) *
(1 - global_step / decay_steps) ^ (power) +
end_learning_rate
If cycle is True then a multiple of decay_steps is used, the first one that is bigger than global_step:
decay_steps = decay_steps * ceil(global_step / decay_steps)
decayed_learning_rate = (learning_rate - end_learning_rate) *
(1 - global_step / decay_steps) ^ (power) +
end_learning_rate
Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5):
...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
end_learning_rate = 0.01
decay_steps = 10000
learning_rate = tf.compat.v1.train.polynomial_decay(starter_learning_rate,
global_step,
decay_steps, end_learning_rate,
power=0.5)
# Passing global_step to minimize() will increment it at each step.
learning_step = (
tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
.minimize(...my loss..., global_step=global_step)
)
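The decay formula itself can be checked in plain Python (a hypothetical helper reproducing the cycle=False computation above, not the TensorFlow implementation):

```python
# Pure-Python sketch of the decay formula above (cycle=False case).
# Hypothetical helper, not the TensorFlow implementation.
def polynomial_decay(lr, global_step, decay_steps,
                     end_lr=0.0001, power=1.0):
    step = min(global_step, decay_steps)
    return (lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

# Matches the example: decay from 0.1 to 0.01 over 10000 steps with
# power=0.5. A quarter of the way through:
lr = polynomial_decay(0.1, 2500, 10000, end_lr=0.01, power=0.5)
# lr = 0.09 * sqrt(0.75) + 0.01 ≈ 0.0879
```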
Args
learning_rate A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
global_step A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative.
decay_steps A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
end_learning_rate A scalar float32 or float64 Tensor or a Python number. The minimal end learning rate.
power A scalar float32 or float64 Tensor or a Python number. The power of the polynomial. Defaults to linear, 1.0.
cycle A boolean, whether or not it should cycle beyond decay_steps.
name String. Optional name of the operation. Defaults to 'PolynomialDecay'.
Returns A scalar Tensor of the same type as learning_rate. The decayed learning rate.
Raises
ValueError if global_step is not supplied. Eager Compatibility When eager execution is enabled, this function returns a function which in turn returns the decayed learning rate Tensor. This can be useful for changing the learning rate value across different invocations of optimizer functions. | |
doc_29957 | Creates a collation with the specified name and callable. The callable will be passed two string arguments. It should return -1 if the first is ordered lower than the second, 0 if they are ordered equal and 1 if the first is ordered higher than the second. Note that this controls sorting (ORDER BY in SQL) so your comparisons don’t affect other SQL operations. Note that the callable will get its parameters as Python bytestrings, which will normally be encoded in UTF-8. The following example shows a custom collation that sorts “the wrong way”: import sqlite3
def collate_reverse(string1, string2):
if string1 == string2:
return 0
elif string1 < string2:
return 1
else:
return -1
con = sqlite3.connect(":memory:")
con.create_collation("reverse", collate_reverse)
cur = con.cursor()
cur.execute("create table test(x)")
cur.executemany("insert into test(x) values (?)", [("a",), ("b",)])
cur.execute("select x from test order by x collate reverse")
for row in cur:
print(row)
con.close()
To remove a collation, call create_collation with None as callable: con.create_collation("reverse", None) | |
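The -1/0/1 comparator contract can be exercised without a database by sorting with functools.cmp_to_key, which applies the same ordering SQLite would use for ORDER BY ... COLLATE:

```python
from functools import cmp_to_key

# Same comparator contract as create_collation: return -1, 0, or 1.
def collate_reverse(string1, string2):
    if string1 == string2:
        return 0
    elif string1 < string2:
        return 1
    else:
        return -1

# sorted() with cmp_to_key gives the "wrong way" (descending) order
# that the SQLite example above produces.
rows = sorted(["a", "c", "b"], key=cmp_to_key(collate_reverse))
# rows == ['c', 'b', 'a']
```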
doc_29958 |
Classifier implementing the k-nearest neighbors vote. Read more in the User Guide. Parameters
n_neighborsint, default=5
Number of neighbors to use by default for kneighbors queries.
weights{‘uniform’, ‘distance’} or callable, default=’uniform’
weight function used in prediction. Possible values: ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally. ‘distance’ : weight points by the inverse of their distance. in this case, closer neighbors of a query point will have a greater influence than neighbors which are further away. [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights.
algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree
‘kd_tree’ will use KDTree
‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force.
leaf_sizeint, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
pint, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
metricstr or callable, default=’minkowski’
the distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. See the documentation of DistanceMetric for a list of available metrics. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors.
metric_paramsdict, default=None
Additional keyword arguments for the metric function.
n_jobsint, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Doesn’t affect fit method. Attributes
classes_array of shape (n_classes,)
Class labels known to the classifier
effective_metric_str or callable
The distance metric used. It will be same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter set to ‘minkowski’ and p parameter set to 2.
effective_metric_params_dict
Additional keyword arguments for the metric function. For most metrics will be same with metric_params parameter, but may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’.
n_samples_fit_int
Number of samples in the fitted data.
outputs_2d_bool
False when y’s shape is (n_samples, ) or (n_samples, 1) during fit otherwise True. See also
RadiusNeighborsClassifier
KNeighborsRegressor
RadiusNeighborsRegressor
NearestNeighbors
Notes See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size. Warning Regarding the Nearest Neighbors algorithms, if it is found that two neighbors, neighbor k+1 and k, have identical distances but different labels, the results will depend on the ordering of the training data. https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm Examples
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 0, 1, 1]
>>> from sklearn.neighbors import KNeighborsClassifier
>>> neigh = KNeighborsClassifier(n_neighbors=3)
>>> neigh.fit(X, y)
KNeighborsClassifier(...)
>>> print(neigh.predict([[1.1]]))
[0]
>>> print(neigh.predict_proba([[0.9]]))
[[0.66666667 0.33333333]]
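The majority vote behind the example can be sketched in plain Python for the weights='uniform' case (a hypothetical helper, not the scikit-learn implementation):

```python
from collections import Counter

# Minimal sketch of the k-nearest-neighbors vote with uniform weights
# (the weights='uniform' case). Hypothetical helper, not the
# scikit-learn implementation.
def knn_predict(X, y, query, k=3):
    # 1-D features: rank training points by distance to the query,
    # then take the majority label among the k closest.
    order = sorted(range(len(X)), key=lambda i: abs(X[i][0] - query[0]))
    votes = Counter(y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

X, y = [[0], [1], [2], [3]], [0, 0, 1, 1]
# Neighbors of 1.1 are points 1, 2, 0 -> labels 0, 1, 0 -> class 0,
# matching neigh.predict([[1.1]]) in the example above.
pred = knn_predict(X, y, [1.1], k=3)
```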
Methods
fit(X, y) Fit the k-nearest neighbors classifier from the training dataset.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X
predict(X) Predict the class labels for the provided data.
predict_proba(X) Return probability estimates for the test data X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit the k-nearest neighbors classifier from the training dataset. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’
Training data.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values. Returns
selfKNeighborsClassifier
The fitted k-nearest neighbors classifier.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
kneighbors(X=None, n_neighbors=None, return_distance=True) [source]
Finds the K-neighbors of a point. Returns indices of and distances to the neighbors of each point. Parameters
Xarray-like, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
n_neighborsint, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
return_distancebool, default=True
Whether or not to return the distances. Returns
neigh_distndarray of shape (n_queries, n_neighbors)
Array representing the lengths to points, only present if return_distance=True
neigh_indndarray of shape (n_queries, n_neighbors)
Indices of the nearest points in the population matrix. Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1]:
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points:
>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
[2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity') [source]
Computes the (weighted) graph of k-Neighbors for points in X Parameters
Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For metric='precomputed' the shape should be (n_queries, n_indexed). Otherwise the shape should be (n_queries, n_features).
n_neighborsint, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
mode{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. Returns
Asparse-matrix of shape (n_queries, n_samples_fit)
n_samples_fit is the number of samples in the fitted data A[i, j] is assigned the weight of edge that connects i to j. The matrix is of CSR format. See also
NearestNeighbors.radius_neighbors_graph
Examples
>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
predict(X) [source]
Predict the class labels for the provided data. Parameters
Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’
Test samples. Returns
yndarray of shape (n_queries,) or (n_queries, n_outputs)
Class labels for each data sample.
predict_proba(X) [source]
Return probability estimates for the test data X. Parameters
Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’
Test samples. Returns
pndarray of shape (n_queries, n_classes), or a list of n_outputs
of such arrays if n_outputs > 1. The class probabilities of the input samples. Classes are ordered by lexicographic order.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_29959 |
ctypes interface Returns
interfacenamedtuple
Named tuple containing ctypes wrapper:
state_address - Memory address of the state struct
state - pointer to the state struct
next_uint64 - function pointer to produce 64 bit integers
next_uint32 - function pointer to produce 32 bit integers
next_double - function pointer to produce doubles
bitgen - pointer to the bit generator struct | |
doc_29960 | tf.compat.v1.train.slice_input_producer(
tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32,
shared_name=None, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(tuple(tensor_list)).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). Implemented using a Queue -- a QueueRunner for the Queue is added to the current Graph's QUEUE_RUNNER collection.
Args
tensor_list A list of Tensor objects. Every Tensor in tensor_list must have the same size in the first dimension.
num_epochs An integer (optional). If specified, slice_input_producer produces each slice num_epochs times before generating an OutOfRange error. If not specified, slice_input_producer can cycle through the slices an unlimited number of times.
shuffle Boolean. If true, the integers are randomly shuffled within each epoch.
seed An integer (optional). Seed used if shuffle == True.
capacity An integer. Sets the queue capacity.
shared_name (optional). If set, this queue will be shared under the given name across multiple sessions.
name A name for the operations (optional).
Returns A list of tensors, one for each element of tensor_list. If the tensor in tensor_list has shape [N, a, b, .., z], then the corresponding output tensor will have shape [a, b, ..., z].
Raises
ValueError if slice_input_producer produces nothing from tensor_list. Eager Compatibility Input pipelines based on Queues are not supported when eager execution is enabled. Please use the tf.data API to ingest data under eager execution. | |
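The slicing behavior described under Returns can be sketched in plain Python: each tensor is sliced along its first dimension, producing one tuple of slices per step (a hypothetical helper, not the queue-based TensorFlow implementation):

```python
# Pure-Python sketch of the slicing described above: every "tensor" in
# tensor_list is sliced along its first dimension, so a tensor of
# shape [N, a, b, ..., z] yields slices of shape [a, b, ..., z].
# Hypothetical helper, not the queue-based TensorFlow implementation.
def slice_input(tensor_list, num_epochs=1):
    n = len(tensor_list[0])  # all tensors share the same first dimension
    for _ in range(num_epochs):
        for i in range(n):
            yield tuple(t[i] for t in tensor_list)

features = [[1, 2], [3, 4], [5, 6]]   # shape [3, 2]
labels = [0, 1, 1]                    # shape [3]
slices = list(slice_input([features, labels]))
# slices == [([1, 2], 0), ([3, 4], 1), ([5, 6], 1)]
```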
doc_29961 | Represents the C PyObject * datatype. Calling this without an argument creates a NULL PyObject * pointer. | |
doc_29962 | Sets the elements of the color update(r, g, b) -> None update(r, g, b, a=255) -> None update(color_value) -> None Sets the elements of the color. See parameters for pygame.Color() for the parameters of this function. If the alpha value was not set it will not change. New in pygame 2.0.1. | |
doc_29963 | Sets a new string as response. The value must be a string or bytes. If a string is set it’s encoded to the charset of the response (utf-8 by default). Changelog New in version 0.9. Parameters
value (Union[bytes, str])
Return type
None | |
doc_29964 |
Set the offsets for the collection. Parameters
offsets(N, 2) or (2,) array-like | |
doc_29965 | See Migration guide for more details. tf.compat.v1.estimator.export.RegressionOutput
tf.estimator.export.RegressionOutput(
value
)
Args
value a float Tensor giving the predicted values. Required.
Raises
ValueError if the value is not a Tensor with dtype tf.float32.
Attributes
value
Methods as_signature_def View source
as_signature_def(
receiver_tensors
)
Generate a SignatureDef proto for inclusion in a MetaGraphDef. The SignatureDef will specify outputs as described in this ExportOutput, and will use the provided receiver_tensors as inputs.
Args
receiver_tensors a Tensor, or a dict of string to Tensor, specifying input nodes that will be fed. | |
doc_29966 | The file may not be changed. | |
doc_29967 | Return a list of all available group entries, in arbitrary order. | |
doc_29968 |
Alias for get_horizontalalignment. | |
doc_29969 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha float or 2D array-like or None
animated bool
array unknown
clim (vmin: float, vmax: float)
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
cmap unknown
data unknown
extent 4-tuple of float
figure Figure
filternorm unknown
filterrad unknown
gid str
in_layout bool
interpolation {'nearest', 'bilinear'} or None
interpolation_stage {'data', 'rgba'} or None
label object
norm unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
resample bool or None
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float | |
doc_29970 | operator.__getitem__(a, b)
Return the value of a at index b. | |
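For example, operator.getitem (the public spelling of this function) is the functional form of the subscript a[b] and works with any object supporting indexing:

```python
import operator

# operator.getitem(a, b) is equivalent to a[b].
seq = [10, 20, 30]
item = operator.getitem(seq, 1)      # 20, same as seq[1]

d = {"key": "value"}
val = operator.getitem(d, "key")     # "value", same as d["key"]
```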
doc_29971 | Returns the message ID for the record. If you are using your own messages, you could do this by having the msg passed to the logger being an ID rather than a format string. Then, in here, you could use a dictionary lookup to get the message ID. This version returns 1, which is the base message ID in win32service.pyd. | |
doc_29972 | Detect all offsets in the raw compiled bytecode string code which are jump targets, and return a list of these offsets. | |
doc_29973 |
The last colorbar associated with this ScalarMappable. May be None. | |
doc_29974 |
For each element, return the lowest index in the string where substring sub is found. See also char.find | |
doc_29975 | The code object is optimized, using fast locals. | |
doc_29976 | type_comment is an optional string with the type annotation as a comment | |
doc_29977 | Fills the tensor with numbers drawn from the Cauchy distribution: f(x)=1πσ(x−median)2+σ2f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2} | |
doc_29978 | Get data and node arrays. Returns
arrays: tuple of array
Arrays for storing tree data, index, node data and node bounds. | |
doc_29979 |
Disconnect all events created by this widget. | |
doc_29980 |
Open a new element. Attributes can be given as keyword arguments, or as a string/string dictionary. The method returns an opaque identifier that can be passed to the close() method, to close all open elements up to and including this one. Parameters
tag
Element tag.
attrib
Attribute dictionary. Alternatively, attributes can be given as keyword arguments. Returns
An element identifier. | |
doc_29981 | A compiled regular expression used to parse section headers. The default matches [section] to the name "section". Whitespace is considered part of the section name, thus [ larch ] will be read as a section of name " larch ". Override this attribute if that’s unsuitable. For example: >>> import re
>>> config = """
... [Section 1]
... option = value
...
... [ Section 2 ]
... another = val
... """
>>> typical = configparser.ConfigParser()
>>> typical.read_string(config)
>>> typical.sections()
['Section 1', ' Section 2 ']
>>> custom = configparser.ConfigParser()
>>> custom.SECTCRE = re.compile(r"\[ *(?P<header>[^]]+?) *\]")
>>> custom.read_string(config)
>>> custom.sections()
['Section 1', 'Section 2']
Note While ConfigParser objects also use an OPTCRE attribute for recognizing option lines, it’s not recommended to override it because that would interfere with constructor options allow_no_value and delimiters. | |
doc_29982 |
True if this transform has a corresponding inverse transform. | |
doc_29983 |
The day of the datetime. Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="D")
... )
>>> datetime_series
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
>>> datetime_series.dt.day
0 1
1 2
2 3
dtype: int64 | |
doc_29984 | class sklearn.ensemble.VotingRegressor(estimators, *, weights=None, n_jobs=None, verbose=False) [source]
Prediction voting regressor for unfitted estimators. A voting regressor is an ensemble meta-estimator that fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction. Read more in the User Guide. New in version 0.21. Parameters
estimatorslist of (str, estimator) tuples
Invoking the fit method on the VotingRegressor will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to 'drop' using set_params. Changed in version 0.21: 'drop' is accepted. Using None was deprecated in 0.22 and support was removed in 0.24.
weightsarray-like of shape (n_regressors,), default=None
Sequence of weights (float or int) to weight the occurrences of predicted values before averaging. Uses uniform weights if None.
n_jobsint, default=None
The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbosebool, default=False
If True, the time elapsed while fitting will be printed as it is completed. New in version 0.23. Attributes
estimators_list of regressors
The collection of fitted sub-estimators as defined in estimators that are not ‘drop’.
named_estimators_Bunch
Attribute to access any fitted sub-estimators by name. New in version 0.20. See also
VotingClassifier
Soft Voting/Majority Rule classifier. Examples >>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.ensemble import VotingRegressor
>>> r1 = LinearRegression()
>>> r2 = RandomForestRegressor(n_estimators=10, random_state=1)
>>> X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
>>> y = np.array([2, 6, 12, 20, 30, 42])
>>> er = VotingRegressor([('lr', r1), ('rf', r2)])
>>> print(er.fit(X, y).predict(X))
[ 3.3 5.7 11.8 19.7 28. 40.3]
Methods
fit(X, y[, sample_weight]) Fit the estimators.
fit_transform(X[, y]) Fit to data, then return predictions for X for each estimator.
get_params([deep]) Get the parameters of an estimator from the ensemble.
predict(X) Predict regression target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of an estimator from the ensemble.
transform(X) Return predictions for X for each estimator.
fit(X, y, sample_weight=None) [source]
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns
selfobject
Fitted estimator.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then return predictions for X for each estimator. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
Input samples.
yndarray of shape (n_samples,), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters
deepbool, default=True
Setting it to True gets the various estimators and the parameters of the estimators as well.
predict(X) [source]
Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the estimators in the ensemble. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
yndarray of shape (n_samples,)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
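To make the definition above concrete, here is a minimal pure-Python sketch (the sample values are illustrative, not from scikit-learn) computing \(R^2\) directly from the residual and total sums of squares:

```python
# Compute R^2 = 1 - u/v by hand, mirroring the definition in the docs.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

mean_true = sum(y_true) / len(y_true)
u = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
v = sum((t - mean_true) ** 2 for t in y_true)          # total sum of squares
r2 = 1 - u / v
print(round(r2, 4))  # 0.9486
```

The same numbers fed to sklearn.metrics.r2_score should give an identical result, since score uses that function under the hood.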
set_params(**params) [source]
Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters
**paramskeyword arguments
Specific parameters using e.g. set_params(parameter_name=new_value). In addition, to setting the parameters of the estimator, the individual estimator of the estimators can also be set, or can be removed by setting them to ‘drop’.
transform(X) [source]
Return predictions for X for each estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Returns
predictions: ndarray of shape (n_samples, n_regressors)
Values predicted by each regressor.
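A short sketch (assuming scikit-learn and NumPy are installed), reusing the two-estimator ensemble from the example above, showing that transform returns one prediction column per fitted estimator and that predict averages those columns under uniform weights:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression

X = np.array([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36]])
y = np.array([2, 6, 12, 20, 30, 42])

er = VotingRegressor([('lr', LinearRegression()),
                      ('rf', RandomForestRegressor(n_estimators=10, random_state=1))])
er.fit(X, y)

per_estimator = er.transform(X)  # one column per fitted estimator
print(per_estimator.shape)       # (6, 2): 6 samples, 2 regressors
# With weights=None, predict() is the row-wise mean of these columns.
assert np.allclose(per_estimator.mean(axis=1), er.predict(X))
```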
Examples using sklearn.ensemble.VotingRegressor
Plot individual and voting regression predictions | |
doc_29985 | Reserved for Microsoft compatibility. | |
doc_29986 | Move cursor to (new_y, new_x). | |
doc_29987 | Return True if the argument is either positive or negative infinity and False otherwise. | |
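This entry appears to describe math.isinf from the Python standard library; a quick sketch under that assumption:

```python
import math

print(math.isinf(float('inf')))   # True
print(math.isinf(float('-inf')))  # True: negative infinity counts too
print(math.isinf(float('nan')))   # False: NaN is not an infinity
print(math.isinf(1e308))          # False: very large but still finite
```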
doc_29988 |
Autoscale the view limits using the data limits. Parameters
tightbool or None
If True, only expand the axis limits using the margins. Note that unlike for autoscale, tight=True does not set the margins to zero. If False and rcParams["axes.autolimit_mode"] (default: 'data') is 'round_numbers', then after expansion by the margins, further expand the axis limits using the axis major locator. If None (the default), reuse the value set in the previous call to autoscale_view (the initial value is False, but the default style sets rcParams["axes.autolimit_mode"] (default: 'data') to 'data', in which case this behaves like True).
scalexbool, default: True
Whether to autoscale the x axis.
scaleybool, default: True
Whether to autoscale the y axis. Notes The autoscaling preserves any preexisting axis direction reversal. The data limits are not updated automatically when artist data are changed after the artist has been added to an Axes instance. In that case, use matplotlib.axes.Axes.relim() prior to calling autoscale_view. If the views of the Axes are fixed, e.g. via set_xlim, they will not be changed by autoscale_view(). See matplotlib.axes.Axes.autoscale() for an alternative.
Examples using matplotlib.axes.Axes.autoscale_view
Line, Poly and RegularPoly Collection with autoscaling
Compound path
Ellipse Collection
Packed-bubble chart
Group barchart with units
Textbox
Autoscaling | |
doc_29989 |
Extracts sliding local blocks from a batched input tensor. Consider a batched input tensor of shape (N, C, *), where N is the batch dimension, C is the channel dimension, and * represents arbitrary spatial dimensions. This operation flattens each sliding kernel_size-sized block within the spatial dimensions of input into a column (i.e., last dimension) of a 3-D output tensor of shape (N, C × ∏(kernel_size), L), where C × ∏(kernel_size) is the total number of values within each block (a block has ∏(kernel_size) spatial locations, each containing a C-channeled vector), and L is the total number of such blocks:

L = ∏_d ⌊(spatial_size[d] + 2 × padding[d] − dilation[d] × (kernel_size[d] − 1) − 1) / stride[d] + 1⌋

where spatial_size is formed by the spatial dimensions of input (* above), and d ranges over all spatial dimensions. Therefore, indexing output at the last dimension (column dimension) gives all values within a certain block. The padding, stride and dilation arguments specify how the sliding blocks are retrieved.
stride controls the stride for the sliding blocks.
padding controls the amount of implicit zero-paddings on both sides for padding number of points for each dimension before reshaping.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does. Parameters
kernel_size (int or tuple) – the size of the sliding blocks
stride (int or tuple, optional) – the stride of the sliding blocks in the input spatial dimensions. Default: 1
padding (int or tuple, optional) – implicit zero padding to be added on both sides of input. Default: 0
dilation (int or tuple, optional) – a parameter that controls the stride of elements within the neighborhood. Default: 1 If kernel_size, dilation, padding or stride is an int or a tuple of length 1, their values will be replicated across all spatial dimensions. For the case of two input spatial dimensions this operation is sometimes called im2col. Note Fold calculates each combined value in the resulting large tensor by summing all values from all containing blocks. Unfold extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other. In general, folding and unfolding operations are related as follows. Consider Fold and Unfold instances created with the same parameters: >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)
>>> fold = nn.Fold(output_size=..., **fold_params)
>>> unfold = nn.Unfold(**fold_params)
Then for any (supported) input tensor the following equality holds: fold(unfold(input)) == divisor * input
where divisor is a tensor that depends only on the shape and dtype of the input: >>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))
When the divisor tensor contains no zero elements, then fold and unfold operations are inverses of each other (up to constant divisor). Warning Currently, only 4-D input tensors (batched image-like tensors) are supported. Shape:
Input: (N, C, *)
Output: (N, C × ∏(kernel_size), L) as described above Examples: >>> unfold = nn.Unfold(kernel_size=(2, 3))
>>> input = torch.randn(2, 5, 3, 4)
>>> output = unfold(input)
>>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels)
>>> # 4 blocks (2x3 kernels) in total in the 3x4 input
>>> output.size()
torch.Size([2, 30, 4])
>>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape)
>>> inp = torch.randn(1, 3, 10, 12)
>>> w = torch.randn(2, 3, 4, 5)
>>> inp_unf = torch.nn.functional.unfold(inp, (4, 5))
>>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
>>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1))
>>> # or equivalently (and avoiding a copy),
>>> # out = out_unf.view(1, 2, 7, 8)
>>> (torch.nn.functional.conv2d(inp, w) - out).abs().max()
tensor(1.9073e-06) | |
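The block-count formula for L can be checked in plain Python (no torch required) against the examples above: a 3×4 spatial input with a 2×3 kernel and default padding/dilation/stride should give L = 4, matching the last dimension of torch.Size([2, 30, 4]); the 10×12 input with a 4×5 kernel gives the 7×8 output spatial shape, i.e. 56 blocks. The helper name num_blocks is illustrative:

```python
import math

def num_blocks(spatial_size, kernel_size, padding=(0, 0), dilation=(1, 1), stride=(1, 1)):
    """L = prod_d floor((spatial[d] + 2*pad[d] - dil[d]*(k[d]-1) - 1) / stride[d] + 1)."""
    L = 1
    for d in range(len(spatial_size)):
        L *= math.floor(
            (spatial_size[d] + 2 * padding[d]
             - dilation[d] * (kernel_size[d] - 1) - 1) / stride[d] + 1
        )
    return L

print(num_blocks((3, 4), (2, 3)))    # 4, as in torch.Size([2, 30, 4])
print(num_blocks((10, 12), (4, 5)))  # 56 = 7 * 8, the conv example's output spatial size
```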
doc_29990 | Return HTML version of exception report. Used for HTML version of debug 500 HTTP error page. | |
doc_29991 | In-place version of tan() | |
doc_29992 |
Returns whether the kernel is stationary. | |
doc_29993 |
Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added. np.atleast_2d(a).T achieves this, as does a[:, np.newaxis]. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]). Parameters
axesNone, tuple of ints, or n ints
None or no argument: reverses the order of the axes. tuple of ints: i in the j-th place in the tuple means a’s i-th axis becomes a.transpose()’s j-th axis.
n ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form) Returns
outndarray
View of a, with axes suitably permuted. See also transpose
Equivalent function ndarray.T
Array property returning the array transposed. ndarray.reshape
Give a new shape to an array without changing its data. Examples >>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
[3, 4]])
>>> a.transpose()
array([[1, 3],
[2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
[2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
[2, 4]]) | |
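As a complement (assuming NumPy is installed), the 1-D case mentioned above: .T leaves a vector unchanged, while adding an axis produces a 2-D column vector:

```python
import numpy as np

v = np.array([1, 2, 3])
print(v.T.shape)          # (3,): transposing a 1-D array is a no-op
col = np.atleast_2d(v).T  # same result as v[:, np.newaxis]
print(col.shape)          # (3, 1)
print(np.array_equal(col, v[:, np.newaxis]))  # True
```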
doc_29994 |
Return the url. | |
doc_29995 | Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the input tensor Return type
pruned_tensor (torch.Tensor) | |
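Conceptually, the pruned tensor is just the elementwise product of the original tensor and its generated binary mask; a torch-free sketch of that multiplication (the values are illustrative):

```python
# Elementwise original * mask, mimicking how pruning combines the saved
# original parameter with its mask: zeros in the mask zero out entries.
original = [0.5, 1.2, 0.0, 3.3]
mask     = [1,   0,   1,   1]    # 0 marks a pruned entry

pruned = [w * m for w, m in zip(original, mask)]
print(pruned)  # [0.5, 0.0, 0.0, 3.3]
```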
doc_29996 | See Migration guide for more details. tf.compat.v1.config.optimizer.get_experimental_options
tf.config.optimizer.get_experimental_options()
Refer to tf.config.optimizer.set_experimental_options for a list of current options. Note that optimizations are only applied in graph mode (within tf.function). In addition, as these are experimental options, the list is subject to change.
Returns Dictionary of configured experimental optimizer options | |
doc_29997 | Axes(fig, rect, *[, facecolor, frameon, ...]) Build an Axes in a figure.
SimpleAxisArtist(axis, axisnum, spine)
SimpleChainedObjects(objects) | |
doc_29998 |
Returns the element-wise remainder of division. This is the NumPy implementation of the C library function fmod, the remainder has the same sign as the dividend x1. It is equivalent to the Matlab(TM) rem function and should not be confused with the Python modulus operator x1 % x2. Parameters
x1array_like
Dividend.
x2array_like
Divisor. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yarray_like
The remainder of the division of x1 by x2. This is a scalar if both x1 and x2 are scalars. See also remainder
Equivalent to the Python % operator. divide
Notes The result of the modulo operation for negative dividend and divisors is bound by conventions. For fmod, the sign of result is the sign of the dividend, while for remainder the sign of the result is the sign of the divisor. The fmod function is equivalent to the Matlab(TM) rem function. Examples >>> np.fmod([-3, -2, -1, 1, 2, 3], 2)
array([-1, 0, -1, 1, 0, 1])
>>> np.remainder([-3, -2, -1, 1, 2, 3], 2)
array([1, 0, 1, 1, 0, 1])
>>> np.fmod([5, 3], [2, 2.])
array([ 1., 1.])
>>> a = np.arange(-3, 3).reshape(3, 2)
>>> a
array([[-3, -2],
[-1, 0],
[ 1, 2]])
>>> np.fmod(a, [2,2])
array([[-1, 0],
[-1, 0],
[ 1, 0]]) | |
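The C-library sign convention described above is also what Python's math.fmod follows, so it can be contrasted with the % operator (whose result takes the divisor's sign, like np.remainder) without NumPy:

```python
import math

# fmod: the result takes the sign of the dividend
print(math.fmod(-3, 2))  # -1.0
print(math.fmod(3, -2))  # 1.0

# %: the result takes the sign of the divisor
print(-3 % 2)            # 1
print(3 % -2)            # -1
```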
doc_29999 |
Bases: matplotlib.transforms.Affine2DBase The affine part of the polar projection. Scales the output so that maximum radius rests on the edge of the axes circle. limits is the view limit of the data. The only part of its bounds that is used is the y limits (for the radius limits). The theta range is handled by the non-affine transform. get_matrix()[source]
Get the matrix for the affine part of this transform. |