| _id | text | title |
|---|---|---|
doc_1100 |
Set the z-axis scale. Parameters
value{"linear"}
The axis scale type to apply. 3D axes currently only support linear scales; other scales yield nonsensical results. **kwargs
Keyword arguments are nominally forwarded to the scale class, but none of them is applicable for linear scales. | |
doc_1101 |
Get the hour of the day component of the Period. Returns
int
The hour as an integer, between 0 and 23. See also Period.second
Get the second component of the Period. Period.minute
Get the minute component of the Period. Examples
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.hour
13
Period longer than a day
>>> p = pd.Period("2018-03-11", freq="M")
>>> p.hour
0 | |
doc_1102 | See Migration guide for more details. tf.compat.v1.raw_ops.DummySeedGenerator
tf.raw_ops.DummySeedGenerator(
name=None
)
Args
name A name for the operation (optional).
Returns A Tensor of type resource. | |
doc_1103 | alias of Linear | |
doc_1104 | Return the parent’s process id. When the parent process has exited, on Unix the id returned is the one of the init process (1), on Windows it is still the same id, which may be already reused by another process. Availability: Unix, Windows. Changed in version 3.2: Added support for Windows. | |
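A minimal stdlib sketch of the call described above (which process the returned id names depends entirely on how the interpreter was launched, e.g. a shell or a test runner):

```python
import os

# The parent's process id; on Unix this becomes 1 (init) if the
# original parent has already exited.
ppid = os.getppid()
print(os.getpid(), ppid)
```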
doc_1105 | Holds a string containing the greeting sent by the client in its “HELO”. | |
doc_1106 | Fork. Connect the child’s controlling terminal to a pseudo-terminal. Return value is (pid, fd). Note that the child gets pid 0, and the fd is invalid. The parent’s return value is the pid of the child, and fd is a file descriptor connected to the child’s controlling terminal (and also to the child’s standard input and output). | |
doc_1107 |
Array interface to get at this Transform's affine matrix. | |
doc_1108 | Length of the network prefix, in bits. | |
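This attribute comes from the stdlib ipaddress module; a quick illustration for both address families:

```python
import ipaddress

# prefixlen is the number of leading bits that form the network prefix
net4 = ipaddress.ip_network("192.168.0.0/24")
net6 = ipaddress.ip_network("2001:db8::/32")
print(net4.prefixlen, net6.prefixlen)  # 24 32
```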
doc_1109 | get version number of the SDL_Image library being used get_sdl_image_version() -> None get_sdl_image_version() -> (major, minor, patch) If pygame is built with extended image formats, then this function will return the SDL_Image library's version number as a tuple of 3 integers (major, minor, patch). If not, then it will return None. New in pygame 2.0.0.dev11. | |
doc_1110 |
Returns the average of the array elements along the given axis. Refer to numpy.mean for full documentation. See also numpy.mean
equivalent function | |
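A brief sketch of the axis argument (assuming NumPy is available):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
print(a.mean())         # 2.5, average over all elements
print(a.mean(axis=0))   # [1.5 2.5 3.5], per-column means
print(a.mean(axis=1))   # [1. 4.], per-row means
```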
doc_1111 |
Create a new figure, or activate an existing figure. Parameters
numint or str or Figure, optional
A unique identifier for the figure. If a figure with that identifier already exists, this figure is made active and returned. An integer refers to the Figure.number attribute, a string refers to the figure label. If there is no figure with the identifier or num is not given, a new figure is created, made active and returned. If num is an int, it will be used for the Figure.number attribute, otherwise, an auto-generated integer value is used (starting at 1 and incremented for each new figure). If num is a string, the figure label and the window title is set to this value.
figsize(float, float), default: rcParams["figure.figsize"] (default: [6.4, 4.8])
Width, height in inches.
dpifloat, default: rcParams["figure.dpi"] (default: 100.0)
The resolution of the figure in dots-per-inch.
facecolorcolor, default: rcParams["figure.facecolor"] (default: 'white')
The background color.
edgecolorcolor, default: rcParams["figure.edgecolor"] (default: 'white')
The border color.
frameonbool, default: True
If False, suppress drawing the figure frame.
FigureClasssubclass of Figure
Optionally use a custom Figure instance.
clearbool, default: False
If True and the figure already exists, then it is cleared.
tight_layoutbool or dict, default: rcParams["figure.autolayout"] (default: False)
If False use subplotpars. If True adjust subplot parameters using tight_layout with default padding. When providing a dict containing the keys pad, w_pad, h_pad, and rect, the default tight_layout paddings will be overridden.
constrained_layoutbool, default: rcParams["figure.constrained_layout.use"] (default: False)
If True use constrained layout to adjust positioning of plot elements. Like tight_layout, but designed to be more flexible. See Constrained Layout Guide for examples. (Note: does not work with add_subplot or subplot2grid.) **kwargs : optional
See Figure for other possible arguments. Returns
Figure
The Figure instance returned will also be passed to new_figure_manager in the backends, which allows to hook custom Figure classes into the pyplot interface. Additional kwargs will be passed to the Figure init function. Notes If you are creating many figures, make sure you explicitly call pyplot.close on the figures you are not using, because this will enable pyplot to properly clean up the memory. rcParams defines the default values, which can be modified in the matplotlibrc file.
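A short sketch of the num semantics described above (using the non-interactive Agg backend so it runs headless; an illustration, not part of the original docs):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig1 = plt.figure(num=10)          # int num becomes Figure.number
fig2 = plt.figure(num="my label")  # str num becomes the figure label
print(fig1.number, fig2.get_label())

# Re-using an existing identifier activates and returns the same Figure
assert plt.figure(num=10) is fig1

plt.close("all")  # explicitly close figures to free memory
```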
Examples using matplotlib.pyplot.figure
Curve with error band
Errorbar limit selection
EventCollection Demo
Filled polygon
Linestyles
Markevery Demo
prop_cycle property markevery in rcParams
Psd Demo
Scatter plot with histograms
Barcode
Figimage Demo
Layer Images
Streamplot
Aligning Labels
Axes Zoom Effect
Custom Figure subclasses
Resizing axes with constrained layout
Resizing axes with tight layout
Geographic Projections
Using Gridspec to make multi-column/row subplot layouts
Nested Gridspecs
Managing multiple figures in pyplot
Figure subfigures
Creating multiple subplots using plt.subplots
Polar Legend
Scatter plot on polar axis
Arrow Demo
Auto-wrapping text
Text Rotation Mode
The difference between \dfrac and \frac
Annotation arrow style reference
Convert texts to images
Mathtext Examples
Rainbow text
STIX Fonts
Unicode minus
Usetex Baseline Test
Usetex Fonteffects
Annotation Polar
Fig Axes Customize Simple
Simple axes labels
Adding lines to figures
Pyplot Two Subplots
Text Commands
Text Layout
Drawing fancy boxes
Hatch demo
Axes Divider
Demo Axes Grid
Axes Grid2
Showing RGB channels using RGBAxes
Per-row or per-column colorbars
Axes with a fixed physical size
Setting a fixed aspect on ImageGrid cells
Inset Locator Demo
Make Room For Ylabel Using Axesgrid
Parasite Simple2
Simple Axes Divider 1
Simple Axes Divider 3
Simple ImageGrid
Simple ImageGrid 2
Axis Direction
axis_direction demo
Axis line styles
Curvilinear grid demo
Demo CurveLinear Grid2
mpl_toolkits.axisartist.floating_axes features
floating_axis demo
Parasite Axes demo
Ticklabel alignment
Ticklabel direction
Simple Axis Direction01
Simple Axis Direction03
Simple Axis Pad
Custom spines with axisartist
Simple Axisline
Simple Axisline3
Anatomy of a figure
Firefox
Shaded & power normalized rendering
XKCD
The double pendulum problem
Frame grabbing
Rain simulation
Animated 3D random walk
MATPLOTLIB UNCHAINED
Close Event
Interactive functions
Hyperlinks
Matplotlib logo
Multipage PDF
SVG Filter Line
SVG Filter Pie
transforms.offset_copy
Zorder Demo
Plot 2D data on 3D plot
Demo of 3D bar charts
Create 2D bar graphs in different planes
3D box surface plot
Demonstrates plotting contour (level) curves in 3D
Demonstrates plotting contour (level) curves in 3D using the extend3d option
Projecting contour profiles onto a graph
Filled contours
Projecting filled contour onto a graph
3D errorbars
Create 3D histogram of 2D data
Parametric Curve
Lorenz Attractor
2D and 3D Axes in same Figure
Automatic Text Offsetting
Draw flat objects in 3D plot
Generate polygons to fill under 3D line graph
3D quiver plot
Rotating a 3D plot
3D scatterplot
3D plots as subplots
3D surface (solid color)
3D surface (checkerboard)
3D surface with polar coordinates
Text annotations in 3D
Triangular 3D contour plot
Triangular 3D filled contour plot
Triangular 3D surfaces
More triangular 3D surfaces
3D voxel / volumetric plot
3D voxel plot of the numpy logo
3D voxel / volumetric plot with rgb colors
3D voxel / volumetric plot with cylindrical coordinates
3D wireframe plot
Rotating 3D wireframe plot
MRI With EEG
The Sankey class
Long chain of connections using Sankey
Rankine power cycle
SkewT-logP diagram: using transforms and custom projections
Spine Placement
Ellipse With Units
SVG Histogram
Tool Manager
subplot2grid demo
GridSpec demo
Nested GridSpecs
Simple Legend01
Menu
Rectangle and ellipse selectors
Basic Usage
Pyplot tutorial
Image tutorial
Artist tutorial
Constrained Layout Guide
Tight Layout guide
Arranging multiple Axes in a Figure
origin and extent in imshow
Path effects guide
Transformations Tutorial
Specifying Colors
Complex and semantic figure composition
Text in Matplotlib Plots
Text properties and layout | |
doc_1112 | turtle.fd(distance)
Parameters
distance – a number (integer or float) Move the turtle forward by the specified distance, in the direction the turtle is headed. >>> turtle.position()
(0.00,0.00)
>>> turtle.forward(25)
>>> turtle.position()
(25.00,0.00)
>>> turtle.forward(-75)
>>> turtle.position()
(-50.00,0.00) | |
doc_1113 | This decorator marks a view as being exempt from the protection ensured by the middleware. Example: from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def my_view(request):
return HttpResponse('Hello world') | |
doc_1114 |
Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate. | |
doc_1115 |
Bases: matplotlib.backend_bases.Event A pick event. This event is fired when the user picks a location on the canvas sufficiently close to an artist that has been made pickable with Artist.set_picker. A PickEvent has a number of special attributes in addition to those defined by the parent Event class. Examples Bind a function on_pick() to pick events that prints the coordinates of the picked data point: ax.plot(np.random.rand(100), 'o', picker=5)  # 5 points tolerance
def on_pick(event):
line = event.artist
xdata, ydata = line.get_data()
ind = event.ind
print('on pick line:', np.array([xdata[ind], ydata[ind]]).T)
cid = fig.canvas.mpl_connect('pick_event', on_pick)
Attributes
mouseeventMouseEvent
The mouse event that generated the pick.
artistmatplotlib.artist.Artist
The picked artist. Note that artists are not pickable by default (see Artist.set_picker). other
Additional attributes may be present depending on the type of the picked object; e.g., a Line2D pick may define different extra attributes than a PatchCollection pick. | |
doc_1116 | A mixin class that performs template-based response rendering for views that operate upon a single object instance. Requires that the view it is mixed with provides self.object, the object instance that the view is operating on. self.object will usually be, but is not required to be, an instance of a Django model. It may be None if the view is in the process of constructing a new instance. Extends TemplateResponseMixin Methods and Attributes
template_name_field
The field on the current object instance that can be used to determine the name of a candidate template. If either template_name_field itself or the value of the template_name_field on the current object instance is None, the object will not be used for a candidate template name.
template_name_suffix
The suffix to append to the auto-generated candidate template name. Default suffix is _detail.
get_template_names()
Returns a list of candidate template names. Returns the following list: the value of template_name on the view (if provided) the contents of the template_name_field field on the object instance that the view is operating upon (if available) <app_label>/<model_name><template_name_suffix>.html | |
doc_1117 | Run awaitable objects in the aws iterable concurrently and block until the condition specified by return_when. The aws iterable must not be empty. Returns two sets of Tasks/Futures: (done, pending). Usage: done, pending = await asyncio.wait(aws)
timeout (a float or int), if specified, can be used to control the maximum number of seconds to wait before returning. Note that this function does not raise asyncio.TimeoutError. Futures or Tasks that aren’t done when the timeout occurs are simply returned in the second set. return_when indicates when this function should return. It must be one of the following constants:
Constant Description
FIRST_COMPLETED The function will return when any future finishes or is cancelled.
FIRST_EXCEPTION The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED.
ALL_COMPLETED The function will return when all futures finish or are cancelled. Unlike wait_for(), wait() does not cancel the futures when a timeout occurs. Deprecated since version 3.8: If any awaitable in aws is a coroutine, it is automatically scheduled as a Task. Passing coroutine objects to wait() directly is deprecated as it leads to confusing behavior. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Note wait() schedules coroutines as Tasks automatically and later returns those implicitly created Task objects in (done, pending) sets. Therefore the following code won’t work as expected: async def foo():
return 42
coro = foo()
done, pending = await asyncio.wait({coro})
if coro in done:
# This branch will never be run!
Here is how the above snippet can be fixed: async def foo():
return 42
task = asyncio.create_task(foo())
done, pending = await asyncio.wait({task})
if task in done:
# Everything will work as expected now.
Deprecated since version 3.8, will be removed in version 3.11: Passing coroutine objects to wait() directly is deprecated. | |
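Putting the pieces above together, a minimal runnable sketch that wraps coroutines in Tasks before calling wait(), as the note recommends:

```python
import asyncio

async def work(n):
    await asyncio.sleep(0.01 * n)
    return n

async def main():
    # Wrap coroutines in Tasks explicitly, per the deprecation note above
    tasks = {asyncio.create_task(work(n)) for n in (1, 2)}
    done, pending = await asyncio.wait(tasks, return_when=asyncio.ALL_COMPLETED)
    assert not pending  # ALL_COMPLETED: nothing left pending
    return sorted(t.result() for t in done)

print(asyncio.run(main()))  # [1, 2]
```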
doc_1118 | Applies element-wise LogSigmoid(x_i) = log(1 / (1 + exp(-x_i))). See LogSigmoid for more details. | |
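The formula can be sketched in plain Python; the two-branch form below is a numerically stable rewrite (an illustration, not the PyTorch implementation):

```python
import math

def logsigmoid(x):
    # log(1 / (1 + exp(-x))) rewritten to avoid overflow:
    #   x >= 0: -log1p(exp(-x))
    #   x <  0:  x - log1p(exp(x))
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

print(logsigmoid(0.0))  # -0.693..., i.e. -log(2)
```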
doc_1119 | test if a Group contains Sprites has(*sprites) -> bool Return True if the Group contains all of the given sprites. This is similar to using the "in" operator on the Group ("if sprite in group: ..."), which tests if a single Sprite belongs to a Group. Each sprite argument can also be an iterator containing Sprites. | |
doc_1120 |
Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. See also char.istitle | |
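A quick sketch of the element-wise behavior (assuming NumPy is available):

```python
import numpy as np

arr = np.array(["Hello World", "hello world", ""])
# Titlecased with at least one character -> True, else False
print(np.char.istitle(arr))  # [ True False False]
```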
doc_1121 | tf.keras.applications.nasnet.NASNetMobile Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.applications.NASNetMobile, tf.compat.v1.keras.applications.nasnet.NASNetMobile
tf.keras.applications.NASNetMobile(
input_shape=None, include_top=True, weights='imagenet',
input_tensor=None, pooling=None, classes=1000
)
Reference:
Learning Transferable Architectures for Scalable Image Recognition (CVPR 2018) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json.
Note: each Keras Application expects a specific kind of input preprocessing. For NASNet, call tf.keras.applications.nasnet.preprocess_input on your inputs before passing them to the model.
Arguments
input_shape Optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) for NASNetMobile). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (224, 224, 3) would be one valid value.
include_top Whether to include the fully-connected layer at the top of the network.
weights None (random initialization) or imagenet (ImageNet weights) For loading imagenet weights, input_shape should be (224, 224, 3)
input_tensor Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
pooling Optional pooling mode for feature extraction when include_top is False.
None means that the output of the model will be the 4D tensor output of the last convolutional layer.
avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor.
max means that global max pooling will be applied.
classes Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.
Returns A Keras model instance.
Raises
ValueError In case of invalid argument for weights, or invalid input shape.
RuntimeError If attempting to run this model with a backend that does not support separable convolutions. | |
doc_1122 |
Return the internal pad in points. See set_pad for more details. | |
doc_1123 | returns a vector with the same direction but length 1. normalize() -> Vector3 Returns a new vector that has length equal to 1 and the same direction as self. | |
doc_1124 | rotates a vector by a given angle in radians. rotate_rad(angle, Vector3) -> Vector3 Returns a vector which has the same length as self but is rotated counterclockwise by the given angle in radians around the given axis. New in pygame 2.0.0. | |
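The rotation described above can be sketched in plain Python via Rodrigues' rotation formula (a hypothetical helper for illustration, not pygame's implementation):

```python
import math

def rotate_rad(v, angle, axis):
    # Rodrigues' formula: v' = v cos(a) + (u x v) sin(a) + u (u . v)(1 - cos(a)),
    # where u is the unit vector along axis; rotation is counterclockwise.
    norm = math.sqrt(sum(c * c for c in axis))
    u = tuple(c / norm for c in axis)
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    dot = sum(ui * vi for ui, vi in zip(u, v))
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return tuple(v[i] * cos_a + cross[i] * sin_a + u[i] * dot * (1 - cos_a)
                 for i in range(3))

print(rotate_rad((1, 0, 0), math.pi / 2, (0, 0, 1)))  # ~(0, 1, 0)
```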
doc_1125 | Load “dotenv” files in order of precedence to set environment variables. If an env var is already set it is not overwritten, so earlier files in the list are preferred over later files. This is a no-op if python-dotenv is not installed. Parameters
path – Load the file at this location instead of searching. Returns
True if a file was loaded. Changed in version 2.0: When loading the env files, set the default encoding to UTF-8. Changelog Changed in version 1.1.0: Returns False when python-dotenv is not installed, or when the given path isn’t a file. New in version 1.0. | |
doc_1126 | tf.experimental.numpy.vander(
x, N=None, increasing=False
)
See the NumPy documentation for numpy.vander. | |
doc_1127 | Remove a child node. oldChild must be a child of this node; if not, ValueError is raised. oldChild is returned on success. If oldChild will not be used further, its unlink() method should be called. | |
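A short stdlib sketch of the contract described above, using xml.dom.minidom:

```python
from xml.dom.minidom import parseString

doc = parseString("<root><a/><b/></root>")
root = doc.documentElement
old_child = root.getElementsByTagName("a")[0]

removed = root.removeChild(old_child)  # returns oldChild on success
assert removed is old_child
removed.unlink()                       # release the detached subtree

print(root.toxml())  # <root><b/></root>
```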
doc_1128 | Handle a defect found on obj. When the email package calls this method, defect will always be a subclass of Defect. The default implementation checks the raise_on_defect flag. If it is True, defect is raised as an exception. If it is False (the default), obj and defect are passed to register_defect(). | |
doc_1129 | See Migration guide for more details. tf.compat.v1.raw_ops.DepthwiseConv2dNative
tf.raw_ops.DepthwiseConv2dNative(
input, filter, strides, padding, explicit_paddings=[],
data_format='NHWC', dilations=[1, 1, 1, 1], name=None
)
Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels. for k in 0..in_channels-1
for q in 0..channel_multiplier-1
output[b, i, j, k * channel_multiplier + q] =
sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
filter[di, dj, k, q]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
Args
input A Tensor. Must be one of the following types: half, bfloat16, float32, float64.
filter A Tensor. Must have the same type as input.
strides A list of ints. 1-D of length 4. The stride of the sliding window for each dimension of input.
padding A string from: "SAME", "VALID", "EXPLICIT". The type of padding algorithm to use.
explicit_paddings An optional list of ints. Defaults to [].
data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, height, width, channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, channels, height, width].
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
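The summation above can be sketched as a plain NumPy reference implementation (VALID padding, unit dilation; an illustration of the indexing, not the optimized kernel):

```python
import numpy as np

def depthwise_conv2d(x, w, strides=(1, 1)):
    """NHWC depthwise conv, VALID padding, no dilation.

    x: [batch, in_h, in_w, in_c]; w: [f_h, f_w, in_c, mult].
    Output channel k * mult + q matches the summation in the docs above.
    """
    b, in_h, in_w, in_c = x.shape
    f_h, f_w, _, mult = w.shape
    sh, sw = strides
    out_h = (in_h - f_h) // sh + 1
    out_w = (in_w - f_w) // sw + 1
    out = np.zeros((b, out_h, out_w, in_c * mult))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i * sh:i * sh + f_h, j * sw:j * sw + f_w, :]
            # [b, f_h, f_w, in_c, mult]: each input channel times its filters
            prod = patch[..., None] * w[None, ...]
            out[:, i, j, :] = prod.sum(axis=(1, 2)).reshape(b, in_c * mult)
    return out

x = np.arange(8, dtype=float).reshape(1, 2, 2, 2)
w = np.ones((2, 2, 2, 1))
print(depthwise_conv2d(x, w)[0, 0, 0])  # [12. 16.]
```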
doc_1130 | class sklearn.feature_selection.VarianceThreshold(threshold=0.0) [source]
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning. Read more in the User Guide. Parameters
thresholdfloat, default=0
Features with a training-set variance lower than this threshold will be removed. The default is to keep all features with non-zero variance, i.e. remove the features that have the same value in all samples. Attributes
variances_array, shape (n_features,)
Variances of individual features. Notes Allows NaN in the input. Raises ValueError if no feature in X meets the variance threshold. Examples The following dataset has integer features, two of which are the same in every sample. These are removed with the default setting for threshold: >>> X = [[0, 2, 0, 3], [0, 1, 4, 3], [0, 1, 1, 3]]
>>> selector = VarianceThreshold()
>>> selector.fit_transform(X)
array([[2, 0],
[1, 4],
[1, 1]])
Methods
fit(X[, y]) Learn empirical variances from X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
get_support([indices]) Get a mask, or integer index, of the features selected
inverse_transform(X) Reverse the transformation operation
set_params(**params) Set the parameters of this estimator.
transform(X) Reduce X to the selected features.
fit(X, y=None) [source]
Learn empirical variances from X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Sample vectors from which to compute variances.
yany, default=None
Ignored. This parameter exists only for compatibility with sklearn.pipeline.Pipeline. Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_support(indices=False) [source]
Get a mask, or integer index, of the features selected Parameters
indicesbool, default=False
If True, the return value will be an array of integers, rather than a boolean mask. Returns
supportarray
An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
inverse_transform(X) [source]
Reverse the transformation operation Parameters
Xarray of shape [n_samples, n_selected_features]
The input samples. Returns
X_rarray of shape [n_samples, n_original_features]
X with columns of zeros inserted where features would have been removed by transform.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Reduce X to the selected features. Parameters
Xarray of shape [n_samples, n_features]
The input samples. Returns
X_rarray of shape [n_samples, n_selected_features]
The input samples with only the selected features. | |
doc_1131 | show lots of sprites moving around testsprite.main(update_rects = True, use_static = False, use_FastRenderGroup = False, screen_dims = [640, 480], use_alpha = False, flags = 0) -> None Optional keyword arguments: update_rects - use the RenderUpdate sprite group class
use_static - include non-moving images
use_FastRenderGroup - Use the FastRenderGroup sprite group
screen_dims - pygame window dimensions
use_alpha - use alpha blending
flags - additional display mode flags Like the testsprite.c that comes with SDL, this pygame version shows lots of sprites moving around. If run as a stand-alone program then no command line arguments are taken. | |
doc_1132 | See Migration guide for more details. tf.compat.v1.raw_ops.ReciprocalGrad
tf.raw_ops.ReciprocalGrad(
y, dy, name=None
)
Specifically, grad = -dy * y*y, where y = 1/x, and dy is the corresponding input gradient.
Args
y A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
dy A Tensor. Must have the same type as y.
name A name for the operation (optional).
Returns A Tensor. Has the same type as y. | |
doc_1133 |
Set the norm limits for image scaling. Parameters
vmin, vmaxfloat
The limits. The limits may also be passed as a tuple (vmin, vmax) as a single positional argument. | |
doc_1134 |
Bases: object A base class for providing timer events, useful for things like animations. Backends need to implement a few specific methods in order to use their own timing mechanisms so that the timer events are integrated into their event loops. Subclasses must override the following methods:
_timer_start: Backend-specific code for starting the timer.
_timer_stop: Backend-specific code for stopping the timer. Subclasses may additionally override the following methods:
_timer_set_single_shot: Code for setting the timer to single shot operating mode, if supported by the timer object. If not, the Timer class itself will store the flag and the _on_timer method should be overridden to support such behavior.
_timer_set_interval: Code for setting the interval on the timer, if there is a method for doing so on the timer object.
_on_timer: The internal function that any timer object should call, which will handle the task of running all callbacks that have been set. Parameters
intervalint, default: 1000ms
The time between timer events in milliseconds. Will be stored as timer.interval.
callbackslist[tuple[callable, tuple, dict]]
List of (func, args, kwargs) tuples that will be called upon timer events. This list is accessible as timer.callbacks and can be manipulated directly, or the functions add_callback and remove_callback can be used. add_callback(func, *args, **kwargs)[source]
Register func to be called by timer when the event fires. Any additional arguments provided will be passed to func. This function returns func, which makes it possible to use it as a decorator.
propertyinterval
The time between timer events, in milliseconds.
remove_callback(func, *args, **kwargs)[source]
Remove func from list of callbacks. args and kwargs are optional and used to distinguish between copies of the same function registered to be called with different arguments. This behavior is deprecated. In the future, *args, **kwargs won't be considered anymore; to keep a specific callback removable by itself, pass it to add_callback as a functools.partial object.
propertysingle_shot
Whether this timer should stop after a single run.
start(interval=None)[source]
Start the timer object. Parameters
intervalint, optional
Timer interval in milliseconds; overrides a previously set interval if provided.
stop()[source]
Stop the timer. | |
doc_1135 |
Principal component analysis (PCA). Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD. It uses the LAPACK implementation of the full SVD or a randomized truncated SVD by the method of Halko et al. 2009, depending on the shape of the input data and the number of components to extract. It can also use the scipy.sparse.linalg ARPACK implementation of the truncated SVD. Notice that this class does not support sparse input. See TruncatedSVD for an alternative with sparse data. Read more in the User Guide. Parameters
n_componentsint, float or ‘mle’, default=None
Number of components to keep. if n_components is not set all components are kept: n_components == min(n_samples, n_features)
If n_components == 'mle' and svd_solver == 'full', Minka’s MLE is used to guess the dimension. Use of n_components == 'mle' will interpret svd_solver == 'auto' as svd_solver == 'full'. If 0 < n_components < 1 and svd_solver == 'full', select the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components. If svd_solver == 'arpack', the number of components must be strictly less than the minimum of n_features and n_samples. Hence, the None case results in: n_components == min(n_samples, n_features) - 1
copybool, default=True
If False, data passed to fit are overwritten and running fit(X).transform(X) will not yield the expected results, use fit_transform(X) instead.
whitenbool, default=False
When True (False by default) the components_ vectors are multiplied by the square root of n_samples and then divided by the singular values to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometime improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions.
svd_solver{‘auto’, ‘full’, ‘arpack’, ‘randomized’}, default=’auto’
If auto :
The solver is selected by a default policy based on X.shape and n_components: if the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient ‘randomized’ method is enabled. Otherwise the exact full SVD is computed and optionally truncated afterwards. If full :
run exact full SVD calling the standard LAPACK solver via scipy.linalg.svd and select the components by postprocessing If arpack :
run SVD truncated to n_components calling ARPACK solver via scipy.sparse.linalg.svds. It requires strictly 0 < n_components < min(X.shape) If randomized :
run randomized SVD by the method of Halko et al. New in version 0.18.0.
tolfloat, default=0.0
Tolerance for singular values computed by svd_solver == ‘arpack’. Must be of range [0.0, infinity). New in version 0.18.0.
iterated_powerint or ‘auto’, default=’auto’
Number of iterations for the power method computed by svd_solver == ‘randomized’. Must be of range [0, infinity). New in version 0.18.0.
random_stateint, RandomState instance or None, default=None
Used when the ‘arpack’ or ‘randomized’ solvers are used. Pass an int for reproducible results across multiple function calls. See Glossary. New in version 0.18.0. Attributes
components_ndarray of shape (n_components, n_features)
Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.
explained_variance_ndarray of shape (n_components,)
The amount of variance explained by each of the selected components. Equal to n_components largest eigenvalues of the covariance matrix of X. New in version 0.18.
explained_variance_ratio_ndarray of shape (n_components,)
Percentage of variance explained by each of the selected components. If n_components is not set then all components are stored and the sum of the ratios is equal to 1.0.
singular_values_ndarray of shape (n_components,)
The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space. New in version 0.19.
mean_ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0).
n_components_int
The estimated number of components. When n_components is set to ‘mle’ or a number between 0 and 1 (with svd_solver == ‘full’) this number is estimated from input data. Otherwise it equals the parameter n_components, or the lesser value of n_features and n_samples if n_components is None.
n_features_int
Number of features in the training data.
n_samples_int
Number of samples in the training data.
noise_variance_float
The estimated noise covariance following the Probabilistic PCA model from Tipping and Bishop 1999. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf. It is required to compute the estimated data covariance and score samples. Equal to the average of (min(n_features, n_samples) - n_components) smallest eigenvalues of the covariance matrix of X. See also
KernelPCA
Kernel Principal Component Analysis.
SparsePCA
Sparse Principal Component Analysis.
TruncatedSVD
Dimensionality reduction using truncated SVD.
IncrementalPCA
Incremental Principal Component Analysis. References For n_components == ‘mle’, this class uses the method of Minka, T. P. “Automatic choice of dimensionality for PCA”. In NIPS, pp. 598-604 Implements the probabilistic PCA model from: Tipping, M. E., and Bishop, C. M. (1999). “Probabilistic principal component analysis”. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3), 611-622. via the score and score_samples methods. See http://www.miketipping.com/papers/met-mppca.pdf For svd_solver == ‘arpack’, refer to scipy.sparse.linalg.svds. For svd_solver == ‘randomized’, see: Halko, N., Martinsson, P. G., and Tropp, J. A. (2011). “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions”. SIAM review, 53(2), 217-288. and also Martinsson, P. G., Rokhlin, V., and Tygert, M. (2011). “A randomized algorithm for the decomposition of matrices”. Applied and Computational Harmonic Analysis, 30(1), 47-68. Examples >>> import numpy as np
>>> from sklearn.decomposition import PCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = PCA(n_components=2)
>>> pca.fit(X)
PCA(n_components=2)
>>> print(pca.explained_variance_ratio_)
[0.9924... 0.0075...]
>>> print(pca.singular_values_)
[6.30061... 0.54980...]
>>> pca = PCA(n_components=2, svd_solver='full')
>>> pca.fit(X)
PCA(n_components=2, svd_solver='full')
>>> print(pca.explained_variance_ratio_)
[0.9924... 0.00755...]
>>> print(pca.singular_values_)
[6.30061... 0.54980...]
>>> pca = PCA(n_components=1, svd_solver='arpack')
>>> pca.fit(X)
PCA(n_components=1, svd_solver='arpack')
>>> print(pca.explained_variance_ratio_)
[0.99244...]
>>> print(pca.singular_values_)
[6.30061...]
Methods
fit(X[, y]) Fit the model with X.
fit_transform(X[, y]) Fit the model with X and apply the dimensionality reduction on X.
get_covariance() Compute data covariance with the generative model.
get_params([deep]) Get parameters for this estimator.
get_precision() Compute data precision matrix with the generative model.
inverse_transform(X) Transform data back to its original space.
score(X[, y]) Return the average log-likelihood of all samples.
score_samples(X) Return the log-likelihood of each sample.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply dimensionality reduction to X.
fit(X, y=None) [source]
Fit the model with X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the instance itself.
fit_transform(X, y=None) [source]
Fit the model with X and apply the dimensionality reduction on X. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
X_newndarray of shape (n_samples, n_components)
Transformed values. Notes This method returns a Fortran-ordered array. To convert it to a C-ordered array, use ‘np.ascontiguousarray’.
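A quick sketch of the conversion the Notes suggest, using a toy Fortran-ordered array as a stand-in for the fit_transform output:

```python
import numpy as np

# fit_transform returns a Fortran-ordered array; np.ascontiguousarray
# converts it to C order without changing the values.
X_new = np.asfortranarray(np.arange(6.0).reshape(2, 3))  # stand-in output
X_c = np.ascontiguousarray(X_new)
print(X_new.flags['F_CONTIGUOUS'], X_c.flags['C_CONTIGUOUS'])  # True True
```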
get_covariance() [source]
Compute data covariance with the generative model. cov = components_.T * S**2 * components_ + sigma2 * eye(n_features) where S**2 contains the explained variances, and sigma2 contains the noise variances. Returns
covarray, shape=(n_features, n_features)
Estimated covariance of data.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precisionarray, shape=(n_features, n_features)
Estimated precision of data.
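The matrix inversion lemma mentioned above (the Woodbury identity) can be checked numerically on a low-rank-plus-noise covariance of the kind the probabilistic PCA model produces. This is a sketch with synthetic data, not scikit-learn's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_components = 6, 2
W = rng.normal(size=(n_features, n_components))   # low-rank factor
sigma2 = 0.5                                      # noise variance
cov = W @ W.T + sigma2 * np.eye(n_features)

# Woodbury identity: invert a small k x k core instead of the n x n cov.
core = np.linalg.inv(np.eye(n_components) + W.T @ W / sigma2)
precision = (np.eye(n_features) - W @ core @ W.T / sigma2) / sigma2

print(np.allclose(precision, np.linalg.inv(cov)))  # True
```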
inverse_transform(X) [source]
Transform data back to its original space. In other words, return an input X_original whose transform would be X. Parameters
Xarray-like, shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of components. Returns
X_original array-like, shape (n_samples, n_features)
Notes If whitening is enabled, inverse_transform will compute the exact inverse operation, which includes reversing whitening.
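A minimal round-trip sketch (assuming scikit-learn is available): with n_components equal to n_features and no whitening, inverse_transform recovers the original data up to floating-point error.

```python
import numpy as np
from sklearn.decomposition import PCA

# Same toy data as the class-level example above.
X = np.array([[-1., -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca = PCA(n_components=2).fit(X)
X_back = pca.inverse_transform(pca.transform(X))
print(np.allclose(X, X_back))  # True
```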
score(X, y=None) [source]
Return the average log-likelihood of all samples. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf. Parameters
Xarray-like of shape (n_samples, n_features)
The data.
yIgnored
Returns
llfloat
Average log-likelihood of the samples under the current model.
score_samples(X) [source]
Return the log-likelihood of each sample. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf. Parameters
Xarray-like of shape (n_samples, n_features)
The data. Returns
llndarray of shape (n_samples,)
Log-likelihood of each sample under the current model.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply dimensionality reduction to X. X is projected on the first principal components previously extracted from a training set. Parameters
Xarray-like, shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newarray-like, shape (n_samples, n_components)
Examples >>> import numpy as np
>>> from sklearn.decomposition import IncrementalPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> ipca = IncrementalPCA(n_components=2, batch_size=3)
>>> ipca.fit(X)
IncrementalPCA(batch_size=3, n_components=2)
>>> ipca.transform(X) | |
doc_1136 | Remove sequence from the list of sequences that include this message. | |
doc_1137 | See Migration guide for more details. tf.compat.v1.keras.layers.InputSpec, tf.compat.v1.layers.InputSpec
tf.keras.layers.InputSpec(
dtype=None, shape=None, ndim=None, max_ndim=None, min_ndim=None, axes=None,
allow_last_axis_squeeze=False, name=None
)
Layers can expose (if appropriate) an input_spec attribute: an instance of InputSpec, or a nested structure of InputSpec instances (one per input tensor). These objects enable the layer to run input compatibility checks for input structure, input rank, input shape, and input dtype. A None entry in a shape is compatible with any dimension, a None shape is compatible with any shape.
Arguments
dtype Expected DataType of the input.
shape Shape tuple, expected shape of the input (may include None for unchecked axes). Includes the batch size.
ndim Integer, expected rank of the input.
max_ndim Integer, maximum rank of the input.
min_ndim Integer, minimum rank of the input.
axes Dictionary mapping integer axes to a specific dimension value.
allow_last_axis_squeeze If True, then allow inputs of rank N+1 as long as the last axis of the input is 1, as well as inputs of rank N-1 as long as the last axis of the spec is 1.
name Expected key corresponding to this input when passing data as a dictionary. Example: class MyLayer(Layer):
def __init__(self):
super(MyLayer, self).__init__()
# The layer will accept inputs with shape (?, 28, 28) & (?, 28, 28, 1)
# and raise an appropriate error message otherwise.
self.input_spec = InputSpec(
shape=(None, 28, 28, 1),
allow_last_axis_squeeze=True)
Methods from_config View source
@classmethod
from_config(
config
)
get_config View source
get_config() | |
doc_1138 |
Compute the standard deviation along the specified axis. Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis. Parameters
aarray_like
Calculate the standard deviation of these values.
axisNone or int or tuple of ints, optional
Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. New in version 1.7.0. If this is a tuple of ints, a standard deviation is performed over multiple axes, instead of a single axis or all the axes as before.
dtypedtype, optional
Type to use in computing the standard deviation. For arrays of integer type the default is float64, for arrays of float types it is the same as the array type.
outndarray, optional
Alternative output array in which to place the result. It must have the same shape as the expected output but the type (of the calculated values) will be cast if necessary.
ddofint, optional
Means Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is zero.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the std method of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method does not implement keepdims any exceptions will be raised.
wherearray_like of bool, optional
Elements to include in the standard deviation. See reduce for details. New in version 1.20.0. Returns
standard_deviationndarray, see dtype parameter above.
If out is None, return a new array containing the standard deviation, otherwise return a reference to the output array. See also
var, mean, nanmean, nanstd, nanvar
Output type determination
Notes The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean(x)), where x = abs(a - a.mean())**2. The average squared deviation is typically calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se. Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative. For floating-point input, the std is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue. Examples >>> a = np.array([[1, 2], [3, 4]])
>>> np.std(a)
1.1180339887498949 # may vary
>>> np.std(a, axis=0)
array([1., 1.])
>>> np.std(a, axis=1)
array([0.5, 0.5])
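The ddof behaviour discussed in the Notes above can be seen directly on the same toy array (a hedged sketch; printed values are rounded in the comments):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
# ddof=0 (the default) divides by N; ddof=1 divides by N - 1, giving the
# square root of the unbiased variance estimate.
print(np.std(a, ddof=0))  # ~1.118
print(np.std(a, ddof=1))  # ~1.291
```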
In single precision, std() can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.std(a)
0.45000005
Computing the standard deviation in float64 is more accurate: >>> np.std(a, dtype=np.float64)
0.44999999925494177 # may vary
Specifying a where argument: >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
>>> np.std(a)
2.614064523559687 # may vary
>>> np.std(a, where=[[True], [True], [False]])
2.0 | |
doc_1139 |
Count non-NA cells for each column or row. The values None, NaN, NaT, and optionally numpy.inf (depending on pandas.options.mode.use_inf_as_na) are considered NA. Parameters
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.
level:int or str, optional
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a DataFrame. A str specifies the level name.
numeric_only:bool, default False
Include only float, int or boolean data. Returns
Series or DataFrame
For each column/row the number of non-NA/null entries. If level is specified returns a DataFrame. See also Series.count
Number of non-NA elements in a Series. DataFrame.value_counts
Count unique combinations of columns. DataFrame.shape
Number of DataFrame rows and columns (including NA elements). DataFrame.isna
Boolean same-sized DataFrame showing places of NA elements. Examples Constructing DataFrame from a dictionary:
>>> df = pd.DataFrame({"Person":
... ["John", "Myla", "Lewis", "John", "Myla"],
... "Age": [24., np.nan, 21., 33, 26],
... "Single": [False, True, True, True, False]})
>>> df
Person Age Single
0 John 24.0 False
1 Myla NaN True
2 Lewis 21.0 True
3 John 33.0 True
4 Myla 26.0 False
Notice the uncounted NA values:
>>> df.count()
Person 5
Age 4
Single 5
dtype: int64
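A hedged sketch of the numeric_only parameter on the same frame: bool columns count as numeric, while the object column "Person" is dropped.

```python
import numpy as np
import pandas as pd

# Same frame as the example above.
df = pd.DataFrame({"Person": ["John", "Myla", "Lewis", "John", "Myla"],
                   "Age": [24., np.nan, 21., 33, 26],
                   "Single": [False, True, True, True, False]})
# Only float/int/bool columns are counted; "Person" (object) is excluded.
print(df.count(numeric_only=True))
```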
Counts for each row:
>>> df.count(axis='columns')
0 3
1 2
2 3
3 3
4 3
dtype: int64 | |
doc_1140 | See Migration guide for more details. tf.compat.v1.estimator.VocabInfo, tf.compat.v1.train.VocabInfo
tf.estimator.VocabInfo(
new_vocab, new_vocab_size, num_oov_buckets, old_vocab, old_vocab_size=-1,
backup_initializer=None, axis=0
)
See tf.estimator.WarmStartSettings for examples of using VocabInfo to warm-start. Args: new_vocab: [Required] A path to the new vocabulary file (used with the model to be trained). new_vocab_size: [Required] An integer indicating how many entries of the new vocabulary will be used in training. num_oov_buckets: [Required] An integer indicating how many OOV buckets are associated with the vocabulary. old_vocab: [Required] A path to the old vocabulary file (used with the checkpoint to be warm-started from). old_vocab_size: [Optional] An integer indicating how many entries of the old vocabulary were used in the creation of the checkpoint. If not provided, the entire old vocabulary will be used. backup_initializer: [Optional] A variable initializer used for variables corresponding to new vocabulary entries and OOV. If not provided, these entries will be zero-initialized. axis: [Optional] Denotes what axis the vocabulary corresponds to. The default, 0, corresponds to the most common use case (embeddings or linear weights for binary classification / regression). An axis of 1 could be used for warm-starting output layers with class vocabularies. Returns: A VocabInfo which represents the vocabulary information for warm-starting. Raises: ValueError: axis is neither 0 nor 1. Example Usage:
embeddings_vocab_info = tf.VocabInfo(
new_vocab='embeddings_vocab',
new_vocab_size=100,
num_oov_buckets=1,
old_vocab='pretrained_embeddings_vocab',
old_vocab_size=10000,
backup_initializer=tf.compat.v1.truncated_normal_initializer(
mean=0.0, stddev=(1 / math.sqrt(embedding_dim))),
axis=0)
softmax_output_layer_kernel_vocab_info = tf.VocabInfo(
new_vocab='class_vocab',
new_vocab_size=5,
num_oov_buckets=0, # No OOV for classes.
old_vocab='old_class_vocab',
old_vocab_size=8,
backup_initializer=tf.compat.v1.glorot_uniform_initializer(),
axis=1)
softmax_output_layer_bias_vocab_info = tf.VocabInfo(
new_vocab='class_vocab',
new_vocab_size=5,
num_oov_buckets=0, # No OOV for classes.
old_vocab='old_class_vocab',
old_vocab_size=8,
backup_initializer=tf.compat.v1.zeros_initializer(),
axis=0)
# Currently, only axis=0 and axis=1 are supported.
Attributes: new_vocab, new_vocab_size, num_oov_buckets, old_vocab, old_vocab_size, backup_initializer, axis. | |
doc_1141 | sklearn.metrics.log_loss(y_true, y_pred, *, eps=1e-15, normalize=True, sample_weight=None, labels=None) [source]
Log loss, aka logistic loss or cross-entropy loss. This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood of a logistic model that returns y_pred probabilities for its training data y_true. The log loss is only defined for two or more labels. For a single sample with true label \(y \in \{0,1\}\) and a probability estimate \(p = \operatorname{Pr}(y = 1)\), the log loss is: \[L_{\log}(y, p) = -(y \log (p) + (1 - y) \log (1 - p))\] Read more in the User Guide. Parameters
y_truearray-like or label indicator matrix
Ground truth (correct) labels for n_samples samples.
y_predarray-like of float, shape = (n_samples, n_classes) or (n_samples,)
Predicted probabilities, as returned by a classifier’s predict_proba method. If y_pred.shape = (n_samples,) the probabilities provided are assumed to be that of the positive class. The labels in y_pred are assumed to be ordered alphabetically, as done by preprocessing.LabelBinarizer.
epsfloat, default=1e-15
Log loss is undefined for p=0 or p=1, so probabilities are clipped to max(eps, min(1 - eps, p)).
normalizebool, default=True
If true, return the mean loss per sample. Otherwise, return the sum of the per-sample losses.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
labelsarray-like, default=None
If not provided, labels will be inferred from y_true. If labels is None and y_pred has shape (n_samples,) the labels are assumed to be binary and are inferred from y_true. New in version 0.18. Returns
lossfloat
Notes The logarithm used is the natural logarithm (base-e). References C.M. Bishop (2006). Pattern Recognition and Machine Learning. Springer, p. 209. Examples >>> from sklearn.metrics import log_loss
>>> log_loss(["spam", "ham", "ham", "spam"],
... [[.1, .9], [.9, .1], [.8, .2], [.35, .65]])
0.21616...
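The 0.21616... above can be reproduced by hand from the formula (a sketch, assuming alphabetical label ordering: ham maps to 0, spam maps to 1):

```python
import numpy as np

# Each sample contributes -log(probability assigned to its true class).
# From the example: spam, ham, ham, spam with probabilities
# [[.1,.9],[.9,.1],[.8,.2],[.35,.65]] -> true-class probs below.
p_true = np.array([0.9, 0.9, 0.8, 0.65])
loss = -np.mean(np.log(p_true))
print(round(loss, 5))  # 0.21616
```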
Examples using sklearn.metrics.log_loss
Probability Calibration for 3-class classification
Probabilistic predictions with Gaussian process classification (GPC) | |
doc_1142 | calculates the Euclidean distance to a given vector. distance_to(Vector3) -> float | |
doc_1143 |
Remove an event from the event list -- by default, the last. Note that this does not check that there are events, much like the normal pop method. If no events exist, this will throw an exception. | |
doc_1144 | Optional. This attribute defines the maximum number of URLs included on each page of the sitemap. Its value should not exceed the default value of 50000, which is the upper limit allowed in the Sitemaps protocol. | |
doc_1145 |
pygame module to transform surfaces A Surface transform is an operation that moves or resizes the pixels. All these functions take a Surface to operate on and return a new Surface with the results. Some of the transforms are considered destructive. This means that every time they are performed, they lose pixel data. Common examples of this are resizing and rotating. For this reason, it is better to re-transform the original surface than to keep transforming an image multiple times. (For example, suppose you are animating a bouncing spring which expands and contracts. If you applied the size changes incrementally to the previous images, you would lose detail. Instead, always begin with the original image and scale to the desired size.) pygame.transform.flip()
flip vertically and horizontally flip(Surface, xbool, ybool) -> Surface This can flip a Surface either vertically, horizontally, or both. Flipping a Surface is non-destructive and returns a new Surface with the same dimensions.
pygame.transform.scale()
resize to new resolution scale(Surface, (width, height), DestSurface = None) -> Surface Resizes the Surface to a new resolution. This is a fast scale operation that does not sample the results. An optional destination surface can be used, rather than have it create a new one. This is quicker if you want to repeatedly scale something. However the destination must be the same size as the (width, height) passed in. Also the destination surface must be the same format.
pygame.transform.rotate()
rotate an image rotate(Surface, angle) -> Surface Unfiltered counterclockwise rotation. The angle argument represents degrees and can be any floating point value. Negative angle amounts will rotate clockwise. Unless rotating by 90 degree increments, the image will be padded larger to hold the new size. If the image has pixel alphas, the padded area will be transparent. Otherwise pygame will pick a color that matches the Surface colorkey or the topleft pixel value.
pygame.transform.rotozoom()
filtered scale and rotation rotozoom(Surface, angle, scale) -> Surface This is a combined scale and rotation transform. The resulting Surface will be a filtered 32-bit Surface. The scale argument is a floating point value that will be multiplied by the current resolution. The angle argument is a floating point value that represents the counterclockwise degrees to rotate. A negative rotation angle will rotate clockwise.
pygame.transform.scale2x()
specialized image doubler scale2x(Surface, DestSurface = None) -> Surface This will return a new image that is double the size of the original. It uses the AdvanceMAME Scale2X algorithm which does a 'jaggie-less' scale of bitmap graphics. This really only has an effect on simple images with solid colors. On photographic and antialiased images it will look like a regular unfiltered scale. An optional destination surface can be used, rather than have it create a new one. This is quicker if you want to repeatedly scale something. However the destination must be twice the size of the source surface passed in. Also the destination surface must be the same format.
pygame.transform.smoothscale()
scale a surface to an arbitrary size smoothly smoothscale(Surface, (width, height), DestSurface = None) -> Surface Uses one of two different algorithms for scaling each dimension of the input surface as required. For shrinkage, the output pixels are area averages of the colors they cover. For expansion, a bilinear filter is used. For the x86-64 and i686 architectures, optimized MMX routines are included and will run much faster than other machine types. The size is a 2 number sequence for (width, height). This function only works for 24-bit or 32-bit surfaces. An exception will be thrown if the input surface bit depth is less than 24. New in pygame 1.8.
pygame.transform.get_smoothscale_backend()
return smoothscale filter version in use: 'GENERIC', 'MMX', or 'SSE' get_smoothscale_backend() -> String Shows whether or not smoothscale is using MMX or SSE acceleration. If no acceleration is available then "GENERIC" is returned. For a x86 processor the level of acceleration to use is determined at runtime. This function is provided for pygame testing and debugging.
pygame.transform.set_smoothscale_backend()
set smoothscale filter version to one of: 'GENERIC', 'MMX', or 'SSE' set_smoothscale_backend(type) -> None Sets smoothscale acceleration. Takes a string argument. A value of 'GENERIC' turns off acceleration. 'MMX' uses MMX instructions only. 'SSE' allows SSE extensions as well. A value error is raised if type is not recognized or not supported by the current processor. This function is provided for pygame testing and debugging. If smoothscale causes an invalid instruction error then it is a pygame/SDL bug that should be reported. Use this function as a temporary fix only.
pygame.transform.chop()
gets a copy of an image with an interior area removed chop(Surface, rect) -> Surface Extracts a portion of an image. All vertical and horizontal pixels surrounding the given rectangle area are removed. The corner areas (diagonal to the rect) are then brought together. (The original image is not altered by this operation.) NOTE: If you want a "crop" that returns the part of an image within a rect, you can blit with a rect to a new surface or copy a subsurface.
pygame.transform.laplacian()
find edges in a surface laplacian(Surface, DestSurface = None) -> Surface Finds the edges in a surface using the laplacian algorithm. New in pygame 1.8.
pygame.transform.average_surfaces()
find the average surface from many surfaces. average_surfaces(Surfaces, DestSurface = None, palette_colors = 1) -> Surface Takes a sequence of surfaces and returns a surface with average colors from each of the surfaces. palette_colors - if true we average the colors in palette, otherwise we average the pixel values. This is useful if the surface is actually greyscale colors, and not palette colors. Note, this function currently does not handle palette using surfaces correctly. New in pygame 1.8. New in pygame 1.9: palette_colors argument
pygame.transform.average_color()
finds the average color of a surface average_color(Surface, Rect = None) -> Color Finds the average color of a Surface or a region of a surface specified by a Rect, and returns it as a Color.
pygame.transform.threshold()
finds which, and how many pixels in a surface are within a threshold of a 'search_color' or a 'search_surf'. threshold(dest_surf, surf, search_color, threshold=(0,0,0,0), set_color=(0,0,0,0), set_behavior=1, search_surf=None, inverse_set=False) -> num_threshold_pixels This versatile function can be used for find colors in a 'surf' close to a 'search_color' or close to colors in a separate 'search_surf'. It can also be used to transfer pixels into a 'dest_surf' that match or don't match. By default it sets pixels in the 'dest_surf' where all of the pixels NOT within the threshold are changed to set_color. If inverse_set is optionally set to True, the pixels that ARE within the threshold are changed to set_color. If the optional 'search_surf' surface is given, it is used to threshold against rather than the specified 'set_color'. That is, it will find each pixel in the 'surf' that is within the 'threshold' of the pixel at the same coordinates of the 'search_surf'.
Parameters:
dest_surf (pygame.Surface or None) -- Surface we are changing. See 'set_behavior'. Should be None if counting (set_behavior is 0).
surf (pygame.Surface) -- Surface we are looking at.
search_color (pygame.Color) -- Color we are searching for.
threshold (pygame.Color) -- Within this distance from search_color (or search_surf). You can use a threshold of (r,g,b,a) where the r,g,b can have different thresholds. So you could use an r threshold of 40 and a blue threshold of 2 if you like.
set_color (pygame.Color or None) -- Color we set in dest_surf.
set_behavior (int) -- set_behavior=1 (default). Pixels in dest_surface will be changed to 'set_color'. set_behavior=0 we do not change 'dest_surf', just count. Make dest_surf=None. set_behavior=2 pixels set in 'dest_surf' will be from 'surf'.
search_surf (pygame.Surface or None) -- search_surf=None (default). Search against 'search_color' instead. search_surf=Surface. Look at the color in 'search_surf' rather than using 'search_color'.
inverse_set (bool) -- False, default. Pixels outside of threshold are changed. True, Pixels within threshold are changed.
Return type:
int
Returns:
The number of pixels that are within the 'threshold' in 'surf' compared to either 'search_color' or search_surf.
Examples:
See the threshold tests for a full of examples: https://github.com/pygame/pygame/blob/master/test/transform_test.py def test_threshold_dest_surf_not_change(self):
    """All pixels are within the threshold.
    Pixels not within the threshold are changed to set_color,
    so none should be changed in this test.
    """
(w, h) = size = (32, 32)
threshold = (20, 20, 20, 20)
original_color = (25, 25, 25, 25)
original_dest_color = (65, 65, 65, 55)
threshold_color = (10, 10, 10, 10)
set_color = (255, 10, 10, 10)
surf = pygame.Surface(size, pygame.SRCALPHA, 32)
dest_surf = pygame.Surface(size, pygame.SRCALPHA, 32)
search_surf = pygame.Surface(size, pygame.SRCALPHA, 32)
surf.fill(original_color)
search_surf.fill(threshold_color)
dest_surf.fill(original_dest_color)
# set_behavior=1, set dest_surface from set_color.
# all within threshold of third_surface, so no color is set.
THRESHOLD_BEHAVIOR_FROM_SEARCH_COLOR = 1
pixels_within_threshold = pygame.transform.threshold(
dest_surf=dest_surf,
surf=surf,
search_color=None,
threshold=threshold,
set_color=set_color,
set_behavior=THRESHOLD_BEHAVIOR_FROM_SEARCH_COLOR,
search_surf=search_surf,
)
# Returned count of pixels within threshold is correct
self.assertEqual(w * h, pixels_within_threshold)
# Size of dest surface is correct
dest_rect = dest_surf.get_rect()
dest_size = dest_rect.size
self.assertEqual(size, dest_size)
# No pixel was changed to set_color, as all
# pixels are within the threshold
for pt in test_utils.rect_area_pts(dest_rect):
self.assertNotEqual(dest_surf.get_at(pt), set_color)
self.assertEqual(dest_surf.get_at(pt), original_dest_color) New in pygame 1.8. Changed in pygame 1.9.4: Fixed a lot of bugs and added keyword arguments. Test your code. | |
doc_1146 |
lookup_name = 'second' | |
doc_1147 | mimetypes.guess_type(url, strict=True)
Guess the type of a file based on its filename, path or URL, given by url. URL can be a string or a path-like object. The return value is a tuple (type, encoding) where type is None if the type can’t be guessed (missing or unknown suffix) or a string of the form 'type/subtype', usable for a MIME content-type header. encoding is None for no encoding or the name of the program used to encode (e.g. compress or gzip). The encoding is suitable for use as a Content-Encoding header, not as a Content-Transfer-Encoding header. The mappings are table driven. Encoding suffixes are case sensitive; type suffixes are first tried case sensitively, then case insensitively. The optional strict argument is a flag specifying whether the list of known MIME types is limited to only the official types registered with IANA. When strict is True (the default), only the IANA types are supported; when strict is False, some additional non-standard but commonly used MIME types are also recognized. Changed in version 3.8: Added support for url being a path-like object.
mimetypes.guess_all_extensions(type, strict=True)
Guess the extensions for a file based on its MIME type, given by type. The return value is a list of strings giving all possible filename extensions, including the leading dot ('.'). The extensions are not guaranteed to have been associated with any particular data stream, but would be mapped to the MIME type type by guess_type(). The optional strict argument has the same meaning as with the guess_type() function.
mimetypes.guess_extension(type, strict=True)
Guess the extension for a file based on its MIME type, given by type. The return value is a string giving a filename extension, including the leading dot ('.'). The extension is not guaranteed to have been associated with any particular data stream, but would be mapped to the MIME type type by guess_type(). If no extension can be guessed for type, None is returned. The optional strict argument has the same meaning as with the guess_type() function.
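A short sketch of the three guess functions above (the file names here are arbitrary examples):

```python
import mimetypes

mimetypes.init()

# Type and encoding are guessed separately for compound suffixes.
print(mimetypes.guess_type("archive.tar.gz"))   # ('application/x-tar', 'gzip')

# A plain suffix has no encoding.
print(mimetypes.guess_type("page.html"))        # ('text/html', None)

# The inverse direction: an extension for a MIME type.
print(mimetypes.guess_extension("application/x-tar"))  # '.tar'
```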
Some additional functions and data items are available for controlling the behavior of the module.
mimetypes.init(files=None)
Initialize the internal data structures. If given, files must be a sequence of file names which should be used to augment the default type map. If omitted, the file names to use are taken from knownfiles; on Windows, the current registry settings are loaded. Each file named in files or knownfiles takes precedence over those named before it. Calling init() repeatedly is allowed. Specifying an empty list for files will prevent the system defaults from being applied: only the well-known values will be present from a built-in list. If files is None the internal data structure is completely rebuilt to its initial default value. This is a stable operation and will produce the same results when called multiple times. Changed in version 3.2: Previously, Windows registry settings were ignored.
mimetypes.read_mime_types(filename)
Load the type map given in the file filename, if it exists. The type map is returned as a dictionary mapping filename extensions, including the leading dot ('.'), to strings of the form 'type/subtype'. If the file filename does not exist or cannot be read, None is returned.
mimetypes.add_type(type, ext, strict=True)
Add a mapping from the MIME type type to the extension ext. When the extension is already known, the new type will replace the old one. When the type is already known the extension will be added to the list of known extensions. When strict is True (the default), the mapping will be added to the official MIME types, otherwise to the non-standard ones.
mimetypes.inited
Flag indicating whether or not the global data structures have been initialized. This is set to True by init().
mimetypes.knownfiles
List of type map file names commonly installed. These files are typically named mime.types and are installed in different locations by different packages.
mimetypes.suffix_map
Dictionary mapping suffixes to suffixes. This is used to allow recognition of encoded files for which the encoding and the type are indicated by the same extension. For example, the .tgz extension is mapped to .tar.gz to allow the encoding and type to be recognized separately.
mimetypes.encodings_map
Dictionary mapping filename extensions to encoding types.
mimetypes.types_map
Dictionary mapping filename extensions to MIME types.
mimetypes.common_types
Dictionary mapping filename extensions to non-standard, but commonly found MIME types.
An example usage of the module: >>> import mimetypes
>>> mimetypes.init()
>>> mimetypes.knownfiles
['/etc/mime.types', '/etc/httpd/mime.types', ... ]
>>> mimetypes.suffix_map['.tgz']
'.tar.gz'
>>> mimetypes.encodings_map['.gz']
'gzip'
>>> mimetypes.types_map['.tgz']
'application/x-tar-gz'
MimeTypes Objects The MimeTypes class may be useful for applications which may want more than one MIME-type database; it provides an interface similar to the one of the mimetypes module.
class mimetypes.MimeTypes(filenames=(), strict=True)
This class represents a MIME-types database. By default, it provides access to the same database as the rest of this module. The initial database is a copy of that provided by the module, and may be extended by loading additional mime.types-style files into the database using the read() or readfp() methods. The mapping dictionaries may also be cleared before loading additional data if the default data is not desired. The optional filenames parameter can be used to cause additional files to be loaded “on top” of the default database.
suffix_map
Dictionary mapping suffixes to suffixes. This is used to allow recognition of encoded files for which the encoding and the type are indicated by the same extension. For example, the .tgz extension is mapped to .tar.gz to allow the encoding and type to be recognized separately. This is initially a copy of the global suffix_map defined in the module.
encodings_map
Dictionary mapping filename extensions to encoding types. This is initially a copy of the global encodings_map defined in the module.
types_map
Tuple containing two dictionaries, mapping filename extensions to MIME types: the first dictionary is for the non-standard types and the second one is for the standard types. They are initialized by common_types and types_map.
types_map_inv
Tuple containing two dictionaries, mapping MIME types to a list of filename extensions: the first dictionary is for the non-standard types and the second one is for the standard types. They are initialized by common_types and types_map.
guess_extension(type, strict=True)
Similar to the guess_extension() function, using the tables stored as part of the object.
guess_type(url, strict=True)
Similar to the guess_type() function, using the tables stored as part of the object.
guess_all_extensions(type, strict=True)
Similar to the guess_all_extensions() function, using the tables stored as part of the object.
read(filename, strict=True)
Load MIME information from a file named filename. This uses readfp() to parse the file. If strict is True, information will be added to the list of standard types, else to the list of non-standard types.
readfp(fp, strict=True)
Load MIME type information from an open file fp. The file must have the format of the standard mime.types files. If strict is True, information will be added to the list of standard types, else to the list of non-standard types.
read_windows_registry(strict=True)
Load MIME type information from the Windows registry. Availability: Windows. If strict is True, information will be added to the list of standard types, else to the list of non-standard types. New in version 3.2. | |
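A minimal sketch of a private database as described above; `.demo` is a made-up extension used only for illustration:

```python
import mimetypes

db = mimetypes.MimeTypes()                  # independent copy of the default database
db.add_type("application/x-demo", ".demo")  # ".demo" is a hypothetical extension
print(db.guess_type("sample.demo"))         # ('application/x-demo', None)

# The module-level tables are not affected by changes to the instance:
print(mimetypes.guess_type("sample.demo"))  # (None, None)
```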
doc_1148 | The maximum resident set size that should be made available to the process. | |
doc_1149 |
The degree of the series. New in version 1.5.0. Returns
degreeint
Degree of the series, one less than the number of coefficients. | |
doc_1150 |
Transform dataset. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Input data to be transformed. Use dtype=np.float32 for maximum efficiency. Sparse matrices are also supported, use sparse csr_matrix for maximum efficiency. Returns
X_transformedsparse matrix of shape (n_samples, n_out)
Transformed dataset. | |
doc_1151 | os.PRIO_PGRP
os.PRIO_USER
Parameters for the getpriority() and setpriority() functions. Availability: Unix. New in version 3.3. | |
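For illustration, reading the calling process's own priority with the related os.PRIO_PROCESS constant (Unix only; the value is typically 0 unless it has been changed):

```python
import os

# The first argument selects process / process group / user;
# a "who" of 0 means the calling process itself.
prio = os.getpriority(os.PRIO_PROCESS, 0)
print(prio)
```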
doc_1152 |
Add a colorbar to a plot. Parameters
mappable
The matplotlib.cm.ScalarMappable (i.e., AxesImage, ContourSet, etc.) described by this colorbar. This argument is mandatory for the Figure.colorbar method but optional for the pyplot.colorbar function, which sets the default to the current image. Note that one can create a ScalarMappable "on-the-fly" to generate colorbars not attached to a previously drawn artist, e.g. fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax)
caxAxes, optional
Axes into which the colorbar will be drawn.
axAxes, list of Axes, optional
One or more parent axes from which space for a new colorbar axes will be stolen, if cax is None. This has no effect if cax is set.
use_gridspecbool, optional
If cax is None, a new cax is created as an instance of Axes. If ax is an instance of Subplot and use_gridspec is True, cax is created as an instance of Subplot using the gridspec module. Returns
colorbarColorbar
Notes Additional keyword arguments are of two kinds: axes properties: locationNone or {'left', 'right', 'top', 'bottom'}
The location, relative to the parent axes, where the colorbar axes is created. It also determines the orientation of the colorbar (colorbars on the left and right are vertical, colorbars at the top and bottom are horizontal). If None, the location will come from the orientation if it is set (vertical colorbars on the right, horizontal ones at the bottom), or default to 'right' if orientation is unset. orientationNone or {'vertical', 'horizontal'}
The orientation of the colorbar. It is preferable to set the location of the colorbar, as that also determines the orientation; passing incompatible values for location and orientation raises an exception. fractionfloat, default: 0.15
Fraction of original axes to use for colorbar. shrinkfloat, default: 1.0
Fraction by which to multiply the size of the colorbar. aspectfloat, default: 20
Ratio of long to short dimensions. padfloat, default: 0.05 if vertical, 0.15 if horizontal
Fraction of original axes between colorbar and new image axes. anchor(float, float), optional
The anchor point of the colorbar axes. Defaults to (0.0, 0.5) if vertical; (0.5, 1.0) if horizontal. panchor(float, float), or False, optional
The anchor point of the colorbar parent axes. If False, the parent axes' anchor will be unchanged. Defaults to (1.0, 0.5) if vertical; (0.5, 0.0) if horizontal. colorbar properties:
Property Description
extend {'neither', 'both', 'min', 'max'} If not 'neither', make pointed end(s) for out-of- range values. These are set for a given colormap using the colormap set_under and set_over methods.
extendfrac {None, 'auto', length, lengths} If set to None, both the minimum and maximum triangular colorbar extensions will have a length of 5% of the interior colorbar length (this is the default setting). If set to 'auto', makes the triangular colorbar extensions the same lengths as the interior boxes (when spacing is set to 'uniform') or the same lengths as the respective adjacent interior boxes (when spacing is set to 'proportional'). If a scalar, indicates the length of both the minimum and maximum triangular colorbar extensions as a fraction of the interior colorbar length. A two-element sequence of fractions may also be given, indicating the lengths of the minimum and maximum colorbar extensions respectively as a fraction of the interior colorbar length.
extendrect bool If False the minimum and maximum colorbar extensions will be triangular (the default). If True the extensions will be rectangular.
spacing {'uniform', 'proportional'} Uniform spacing gives each discrete color the same space; proportional makes the space proportional to the data interval.
ticks None or list of ticks or Locator If None, ticks are determined automatically from the input.
format None or str or Formatter If None, ScalarFormatter is used. If a format string is given, e.g., '%.3f', that is used. An alternative Formatter may be given instead.
drawedges bool Whether to draw lines at color boundaries.
label str The label on the colorbar's long axis. The following will probably be useful only in the context of indexed colors (that is, when the mappable has norm=NoNorm()), or other unusual circumstances.
Property Description
boundaries None or a sequence
values None or a sequence which must be of length 1 less than the sequence of boundaries. For each region delimited by adjacent entries in boundaries, the color mapped to the corresponding value in values will be used. If mappable is a ContourSet, its extend kwarg is included automatically. The shrink kwarg provides a simple way to scale the colorbar with respect to the axes. Note that if cax is specified, it determines the size of the colorbar and shrink and aspect kwargs are ignored. For more precise control, you can manually specify the positions of the axes objects in which the mappable and the colorbar are drawn. In this case, do not use any of the axes properties kwargs. It is known that some vector graphics viewers (svg and pdf) render white gaps between segments of the colorbar. This is due to bugs in the viewers, not Matplotlib. As a workaround, the colorbar can be rendered with overlapping segments: cbar = colorbar()
cbar.solids.set_edgecolor("face")
draw()
However this has negative consequences in other circumstances, e.g. with semi-transparent images (alpha < 1) and colorbar extensions; therefore, this workaround is not used by default (see issue #1188).
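A minimal sketch of the on-the-fly ScalarMappable pattern mentioned above (backend, colormap, and value range chosen arbitrarily):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize

fig, ax = plt.subplots()
norm = Normalize(vmin=0, vmax=1)
# Colorbar for a mappable not attached to any previously drawn artist;
# space is stolen from ax, and location follows the orientation.
cbar = fig.colorbar(cm.ScalarMappable(norm=norm, cmap="viridis"),
                    ax=ax, orientation="horizontal", shrink=0.8)
```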
| |
doc_1153 | A method that takes a template_name and yields Origin instances for each possible source. For example, the filesystem loader may receive 'index.html' as a template_name argument. This method would yield origins for the full path of index.html as it appears in each template directory the loader looks at. The method doesn’t need to verify that the template exists at a given path, but it should ensure the path is valid. For instance, the filesystem loader makes sure the path lies under a valid template directory. | |
doc_1154 |
Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example. | |
doc_1155 |
Computes Felsenszwalb’s efficient graph based image segmentation. Produces an oversegmentation of a multichannel (i.e. RGB) image using a fast, minimum spanning tree based clustering on the image grid. The parameter scale sets an observation level. Higher scale means less and larger segments. sigma is the diameter of a Gaussian kernel, used for smoothing the image prior to segmentation. The number of produced segments as well as their size can only be controlled indirectly through scale. Segment size within an image can vary greatly depending on local contrast. For RGB images, the algorithm uses the euclidean distance between pixels in color space. Parameters
image(width, height, 3) or (width, height) ndarray
Input image.
scalefloat
Free parameter. Higher means larger clusters.
sigmafloat
Width (standard deviation) of Gaussian kernel used in preprocessing.
min_sizeint
Minimum component size. Enforced using postprocessing.
multichannelbool, optional (default: True)
Whether the last axis of the image is to be interpreted as multiple channels. A value of False, for a 3D image, is not currently supported. Returns
segment_mask(width, height) ndarray
Integer mask indicating segment labels. Notes The k parameter used in the original paper is renamed to scale here. References
1
Efficient graph-based image segmentation, Felzenszwalb, P.F. and Huttenlocher, D.P. International Journal of Computer Vision, 2004 Examples >>> from skimage.segmentation import felzenszwalb
>>> from skimage.data import coffee
>>> img = coffee()
>>> segments = felzenszwalb(img, scale=3.0, sigma=0.95, min_size=5) | |
doc_1156 |
Compute the truth value of x1 AND x2 element-wise. Parameters
x1, x2array_like
Input arrays. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray or bool
Boolean result of the logical AND operation applied to the elements of x1 and x2; the shape is determined by broadcasting. This is a scalar if both x1 and x2 are scalars. See also
logical_or, logical_not, logical_xor
bitwise_and
Examples >>> np.logical_and(True, False)
False
>>> np.logical_and([True, False], [False, False])
array([False, False])
>>> x = np.arange(5)
>>> np.logical_and(x>1, x<4)
array([False, False, True, True, False])
The & operator can be used as a shorthand for np.logical_and on boolean ndarrays. >>> a = np.array([True, False])
>>> b = np.array([False, False])
>>> a & b
array([False, False]) | |
doc_1157 | Disable all renegotiation in TLSv1.2 and earlier. Do not send HelloRequest messages, and ignore renegotiation requests via ClientHello. This option is only available with OpenSSL 1.1.0h and later. New in version 3.7. | |
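A short sketch of setting this option on a context (the client protocol is an arbitrary choice for illustration):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.options |= ssl.OP_NO_RENEGOTIATION  # requires OpenSSL 1.1.0h or later
print(bool(ctx.options & ssl.OP_NO_RENEGOTIATION))  # True
```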
doc_1158 |
Extension dtype for string data. New in version 1.0.0. Warning StringDtype is considered experimental. The implementation and parts of the API may change without warning. In particular, StringDtype.na_value may change to no longer be numpy.nan. Parameters
storage:{“python”, “pyarrow”}, optional
If not given, the value of pd.options.mode.string_storage. Examples
>>> pd.StringDtype()
string[python]
>>> pd.StringDtype(storage="pyarrow")
string[pyarrow]
Attributes
None Methods
None | |
doc_1159 | os.RTLD_NOW
os.RTLD_GLOBAL
os.RTLD_LOCAL
os.RTLD_NODELETE
os.RTLD_NOLOAD
os.RTLD_DEEPBIND
Flags for use with the setdlopenflags() and getdlopenflags() functions. See the Unix manual page dlopen(3) for what the different flags mean. New in version 3.3. | |
doc_1160 | Set the current value of the ctypes-private copy of the system errno variable in the calling thread to value and return the previous value. Raises an auditing event ctypes.set_errno with argument errno. | |
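For illustration, a round-trip with the companion get_errno() (the value 13 is arbitrary):

```python
import ctypes

prev = ctypes.set_errno(13)  # returns the previous thread-local value
print(ctypes.get_errno())    # 13
ctypes.set_errno(prev)       # restore the old value
```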
doc_1161 | An optional boolean argument that determines if concatenated values will be distinct. Defaults to False. | |
doc_1162 |
Divides (“unscales”) the optimizer’s gradient tensors by the scale factor. unscale_() is optional, serving cases where you need to modify or inspect gradients between the backward pass(es) and step(). If unscale_() is not called explicitly, gradients will be unscaled automatically during step(). Simple example, using unscale_() to enable clipping of unscaled gradients: ...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
Parameters
optimizer (torch.optim.Optimizer) – Optimizer that owns the gradients to be unscaled. Note unscale_() does not incur a CPU-GPU sync. Warning unscale_() should only be called once per optimizer per step() call, and only after all gradients for that optimizer’s assigned parameters have been accumulated. Calling unscale_() twice for a given optimizer between each step() triggers a RuntimeError. Warning unscale_() may unscale sparse gradients out of place, replacing the .grad attribute. | |
doc_1163 | tf.device(
device_name
)
This function specifies the device to be used for ops created/executed in a particular context. Nested contexts will inherit and also create/execute their ops on the specified device. If a specific device is not required, consider not using this function so that a device can be automatically assigned. In general the use of this function is optional. device_name can be fully specified, as in "/job:worker/task:1/device:cpu:0", or partially specified, containing only a subset of the "/"-separated fields. Any fields which are specified will override device annotations from outer scopes. For example: with tf.device('/job:foo'):
# ops created here have devices with /job:foo
with tf.device('/job:bar/task:0/device:gpu:2'):
# ops created here have the fully specified device above
with tf.device('/device:gpu:1'):
# ops created here have the device '/job:foo/device:gpu:1'
Args
device_name The device name to use in the context.
Returns A context manager that specifies the default device to use for newly created ops.
Raises
RuntimeError If a function is passed in. | |
doc_1164 |
Bases: matplotlib.patches.ArrowStyle._Base Wedge(?) shape. Only works with a quadratic Bezier curve. The begin point has a width of the tail_width and the end point has a width of 0. At the middle, the width is shrink_factor*tail_width. Parameters
tail_widthfloat, default: 0.3
Width of the tail.
shrink_factorfloat, default: 0.5
Fraction of the arrow width at the middle point. transmute(path, mutation_size, linewidth)[source]
The transmute method is the very core of the ArrowStyle class and must be overridden in the subclasses. It receives the path object along which the arrow will be drawn, and the mutation_size, with which the arrow head etc. will be scaled. The linewidth may be used to adjust the path so that it does not pass beyond the given points. It returns a tuple of a Path instance and a boolean. The boolean value indicate whether the path can be filled or not. The return value can also be a list of paths and list of booleans of a same length. | |
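A hedged sketch of using this style through FancyArrowPatch (coordinates and sizes are arbitrary); the "arc3" connection style produces the quadratic Bezier curve the wedge style requires:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import FancyArrowPatch

fig, ax = plt.subplots()
# tail_width and shrink_factor are the parameters documented above.
arrow = FancyArrowPatch((0.1, 0.1), (0.9, 0.9),
                        connectionstyle="arc3,rad=0.2",
                        arrowstyle="wedge,tail_width=0.6,shrink_factor=0.4",
                        mutation_scale=50)
ax.add_patch(arrow)
fig.canvas.draw()  # rendering invokes transmute() on the style
```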
doc_1165 | See Migration guide for more details. tf.compat.v1.ragged.constant
tf.ragged.constant(
pylist, dtype=None, ragged_rank=None, inner_shape=None, name=None,
row_splits_dtype=tf.dtypes.int64
)
Example:
tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
<tf.RaggedTensor [[1, 2], [3], [4, 5, 6]]>
All scalar values in pylist must have the same nesting depth K, and the returned RaggedTensor will have rank K. If pylist contains no scalar values, then K is one greater than the maximum depth of empty lists in pylist. All scalar values in pylist must be compatible with dtype.
Args
pylist A nested list, tuple or np.ndarray. Any nested element that is not a list, tuple or np.ndarray must be a scalar value compatible with dtype.
dtype The type of elements for the returned RaggedTensor. If not specified, then a default is chosen based on the scalar values in pylist.
ragged_rank An integer specifying the ragged rank of the returned RaggedTensor. Must be nonnegative and less than K. Defaults to max(0, K - 1) if inner_shape is not specified. Defaults to max(0, K - 1 - len(inner_shape)) if inner_shape is specified.
inner_shape A tuple of integers specifying the shape for individual inner values in the returned RaggedTensor. Defaults to () if ragged_rank is not specified. If ragged_rank is specified, then a default is chosen based on the contents of pylist.
name A name prefix for the returned tensor (optional).
row_splits_dtype data type for the constructed RaggedTensor's row_splits. One of tf.int32 or tf.int64.
Returns A potentially ragged tensor with rank K and the specified ragged_rank, containing the values from pylist.
Raises
ValueError If the scalar values in pylist have inconsistent nesting depth; or if ragged_rank or inner_shape are incompatible with pylist. | |
doc_1166 | A decorator that is used to register custom template filter. You can specify a name for the filter, otherwise the function name will be used. Example: @app.template_filter()
def reverse(s):
return s[::-1]
Parameters
name (Optional[str]) – the optional name of the filter, otherwise the function name will be used. Return type
Callable | |
doc_1167 |
Estimate circle model from data using total least squares. Parameters
data(N, 2) array
N points with (x, y) coordinates, respectively. Returns
successbool
True, if model estimation succeeds. | |
doc_1168 | tf.compat.v1.layers.max_pooling1d(
inputs, pool_size, strides, padding='valid',
data_format='channels_last', name=None
)
Arguments
inputs The tensor over which to pool. Must have rank 3.
pool_size An integer or tuple/list of a single integer, representing the size of the pooling window.
strides An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
padding A string. The padding method, either 'valid' or 'same'. Case-insensitive.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).
name A string, the name of the layer.
Returns The output tensor, of rank 3.
Raises
ValueError if eager execution is enabled. | |
doc_1169 | sklearn.datasets.load_diabetes(*, return_X_y=False, as_frame=False) [source]
Load and return the diabetes dataset (regression).
Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346 Read more in the User Guide. Parameters
return_X_ybool, default=False.
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (442, 10)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (442,)
The regression target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. frame: DataFrame of shape (442, 11)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset. data_filename: str
The path to the location of the data. target_filename: str
The path to the location of the target.
(data, target)tuple if return_X_y is True
New in version 0.18.
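A minimal usage sketch of the two return forms described above:

```python
from sklearn.datasets import load_diabetes

# Bunch form: dictionary-like object with the attributes described above.
bunch = load_diabetes()
print(bunch.data.shape)                # (442, 10)

# Tuple form: plain (data, target) arrays via return_X_y.
X, y = load_diabetes(return_X_y=True)
print(X.shape, y.shape)                # (442, 10) (442,)
```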
| |
doc_1170 | See Migration guide for more details. tf.compat.v1.stop_gradient
tf.stop_gradient(
input, name=None
)
When executed in a graph, this op outputs its input tensor as-is. When building ops to compute gradients, this op prevents the contribution of its inputs to be taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding out inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients. This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include: The EM algorithm where the M-step should not involve backpropagation through the output of the E-step. Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model. Adversarial training, where no backprop should happen through the adversarial example generation process.
Args
input A Tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
doc_1171 |
Subtract one Hermite series from another. Returns the difference of two Hermite series c1 - c2. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series P_0 + 2*P_1 + 3*P_2. Parameters
c1, c2array_like
1-D arrays of Hermite series coefficients ordered from low to high. Returns
outndarray
Of Hermite series coefficients representing their difference. See also
hermeadd, hermemulx, hermemul, hermediv, hermepow
Notes Unlike multiplication, division, etc., the difference of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” Examples >>> from numpy.polynomial.hermite_e import hermesub
>>> hermesub([1, 2, 3, 4], [1, 2, 3])
array([0., 0., 0., 4.]) | |
doc_1172 | See Migration guide for more details. tf.compat.v1.config.experimental.enable_mlir_graph_optimization
tf.config.experimental.enable_mlir_graph_optimization()
DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.
Note: MLIR-Based TensorFlow Compiler is under active development and has missing features, please refrain from using. This API exists for development and testing only.
TensorFlow compiler optimizations are responsible for general graph-level optimizations that, in the current stack, are mostly done by the Grappler graph optimizers. | |
doc_1173 | See Migration guide for more details. tf.compat.v1.raw_ops.TensorArrayPack
tf.raw_ops.TensorArrayPack(
handle, flow_in, dtype, element_shape=None, name=None
)
Args
handle A Tensor of type mutable string.
flow_in A Tensor of type float32.
dtype A tf.DType.
element_shape An optional tf.TensorShape or list of ints. Defaults to None.
name A name for the operation (optional).
Returns A Tensor of type dtype. | |
doc_1174 | See Migration guide for more details. tf.compat.v1.math.rint, tf.compat.v1.rint
tf.math.rint(
x, name=None
)
If the result is midway between two representable values, the even representable is chosen. For example: rint(-1.5) ==> -2.0
rint(0.5000001) ==> 1.0
rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_1175 | Send an IHAVE command. message_id is the id of the message to send to the server (enclosed in '<' and '>'). The data parameter and the return value are the same as for post(). | |
doc_1176 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
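For illustration with an arbitrary scikit-learn estimator (LinearRegression is just an example):

```python
from sklearn.linear_model import LinearRegression

# get_params returns constructor arguments mapped to their current values.
params = LinearRegression().get_params(deep=True)
print(params["fit_intercept"])  # True (the default)
```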
doc_1177 |
Unflattens a tensor dim expanding it to a desired shape. For use with Sequential.
dim specifies the dimension of the input tensor to be unflattened, and it can be either int or str when Tensor or NamedTensor is used, respectively.
unflattened_size is the new shape of the unflattened dimension of the tensor and it can be a tuple of ints or a list of ints or torch.Size for Tensor input; a NamedShape (tuple of (name, size) tuples) for NamedTensor input. Shape:
Input: (N, *dims)
Output: (N, C_out, H_out, W_out)
Parameters
dim (Union[int, str]) – Dimension to be unflattened
unflattened_size (Union[torch.Size, Tuple, List, NamedShape]) – New shape of the unflattened dimension Examples >>> input = torch.randn(2, 50)
>>> # With tuple of ints
>>> m = nn.Sequential(
>>> nn.Linear(50, 50),
>>> nn.Unflatten(1, (2, 5, 5))
>>> )
>>> output = m(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
>>> # With torch.Size
>>> m = nn.Sequential(
>>> nn.Linear(50, 50),
>>> nn.Unflatten(1, torch.Size([2, 5, 5]))
>>> )
>>> output = m(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
>>> # With namedshape (tuple of tuples)
>>> input = torch.randn(2, 50, names=('N', 'features'))
>>> unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5)))
>>> output = unflatten(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters
name (string) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module.
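A short sketch of add_module, assuming PyTorch is installed; the name "head" is illustrative, not part of the API.

```python
# Sketch: registering a child module dynamically with add_module.
import torch.nn as nn

model = nn.Module()
model.add_module("head", nn.Linear(4, 2))

# The child is now reachable as an attribute and appears in named_children().
assert model.head.out_features == 2
assert any(name == "head" for name, _ in model.named_children())
```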
apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) – function to be applied to each submodule Returns
self Return type
Module Example: >>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1., 1.],
[ 1., 1.]])
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16()
Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module
buffers(recurse=True)
Returns an iterator over module buffers. Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor – module buffer Example: >>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children()
Returns an iterator over immediate children modules. Yields
Module – a child module
cpu()
Moves all model parameters and buffers to the CPU. Returns
self Return type
Module
cuda(device=None)
Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be copied to that device Returns
self Return type
Module
double()
Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module
eval()
Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns
self Return type
Module
float()
Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module
half()
Casts all floating point parameters and buffers to half datatype. Returns
self Return type
Module
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
Returns
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys Return type
NamedTuple with missing_keys and unexpected_keys fields
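A hedged sketch of load_state_dict, assuming PyTorch: copying one Linear's state into another of the same shape and checking the returned NamedTuple.

```python
# Sketch: loading a matching state_dict and inspecting the result.
import torch
import torch.nn as nn

src = nn.Linear(3, 3)
dst = nn.Linear(3, 3)
result = dst.load_state_dict(src.state_dict(), strict=True)

# Nothing was missing or unexpected, and the weights now match.
assert result.missing_keys == [] and result.unexpected_keys == []
assert torch.equal(dst.weight, src.weight)
```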
modules()
Returns an iterator over all modules in the network. Yields
Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
print(idx, '->', m)
0 -> Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
(string, torch.Tensor) – Tuple containing the name and buffer Example: >>> for name, buf in self.named_buffers():
>>> if name in ['running_var']:
>>> print(buf.size())
named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> print(module)
named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
print(idx, '->', m)
0 -> ('', Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
(string, Parameter) – Tuple containing the name and parameter Example: >>> for name, param in self.named_parameters():
>>> if name in ['bias']:
>>> print(param.size())
parameters(recurse=True)
Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
Parameter – module parameter Example: >>> for param in model.parameters():
>>> print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook)
Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict. Buffers can be accessed as attributes using given names. Parameters
name (string) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor) – buffer to be registered.
persistent (bool) – whether the buffer is part of this module’s state_dict. Example: >>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook)
Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input in-place, but this will have no effect on forward, since the hook is called after forward() has run. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
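The hook mechanics above can be sketched as follows, assuming PyTorch; capturing activations is one common use of a forward hook.

```python
# Sketch: capturing a layer's output with a forward hook.
import torch
import torch.nn as nn

captured = {}

def save_output(module, inputs, output):
    # Called after forward(); returning None leaves the output unchanged.
    captured["out"] = output.detach()

layer = nn.Linear(4, 2)
handle = layer.register_forward_hook(save_output)
y = layer(torch.randn(1, 4))
handle.remove()  # detach the hook once it is no longer needed

assert torch.equal(captured["out"], y)
```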
register_forward_pre_hook(hook)
Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
register_full_backward_hook(hook)
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all keyword arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs in-place is not allowed when using backward hooks and will raise an error. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle
register_parameter(name, param)
Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) – parameter to be added to the module.
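A minimal sketch of register_parameter, assuming PyTorch; "scale" is an illustrative name, not part of the documented API.

```python
# Sketch: adding a Parameter to a bare Module.
import torch
import torch.nn as nn

m = nn.Module()
m.register_parameter("scale", nn.Parameter(torch.ones(3)))

# The parameter is now an attribute and is reported by named_parameters().
assert m.scale.requires_grad
assert "scale" in dict(m.named_parameters())
```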
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parameters
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True. Returns
self Return type
Module
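The freezing use case mentioned above can be sketched as follows, assuming PyTorch.

```python
# Sketch: freezing one submodule in-place with requires_grad_.
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))
net[0].requires_grad_(False)  # freeze the first layer only

assert all(not p.requires_grad for p in net[0].parameters())
assert all(p.requires_grad for p in net[1].parameters())
```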
state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return type
dict Example: >>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs)
Moves and/or casts the parameters and buffers. This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples. Note This method modifies the module in-place. Parameters
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument) Returns
self Return type
Module Examples: >>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j, 0.2382+0.j],
[ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
train(mode=True)
Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True. Returns
self Return type
Module
type(dst_type)
Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) – the desired type Returns
self Return type
Module
xpu(device=None)
Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on XPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be copied to that device Returns
self Return type
Module
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | |
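The set_to_none behavior above can be sketched as follows, assuming PyTorch.

```python
# Sketch: clearing gradients after a backward pass with zero_grad.
import torch
import torch.nn as nn

m = nn.Linear(3, 1)
m(torch.randn(2, 3)).sum().backward()
assert m.weight.grad is not None

m.zero_grad(set_to_none=True)  # grads become None rather than zero tensors
assert m.weight.grad is None
```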
doc_1178 | GLOBAL_VARIABLES: the default collection of Variable objects, shared across distributed environment (model variables are a subset of these). See tf.compat.v1.global_variables for more details. Commonly, all TRAINABLE_VARIABLES variables will be in MODEL_VARIABLES, and all MODEL_VARIABLES variables will be in GLOBAL_VARIABLES.
LOCAL_VARIABLES: the subset of Variable objects that are local to each machine. Usually used for temporary variables, like counters. Note: use tf.contrib.framework.local_variable to add to this collection.
MODEL_VARIABLES: the subset of Variable objects that are used in the model for inference (feed forward). Note: use tf.contrib.framework.model_variable to add to this collection.
TRAINABLE_VARIABLES: the subset of Variable objects that will be trained by an optimizer. See tf.compat.v1.trainable_variables for more details.
SUMMARIES: the summary Tensor objects that have been created in the graph. See tf.compat.v1.summary.merge_all for more details.
QUEUE_RUNNERS: the QueueRunner objects that are used to produce input for a computation. See tf.compat.v1.train.start_queue_runners for more details.
MOVING_AVERAGE_VARIABLES: the subset of Variable objects that will also keep moving averages. See tf.compat.v1.moving_average_variables for more details.
REGULARIZATION_LOSSES: regularization losses collected during graph construction. The following standard keys are defined, but their collections are not automatically populated as many of the others are: WEIGHTS BIASES ACTIVATIONS
Class Variables
ACTIVATIONS 'activations'
ASSET_FILEPATHS 'asset_filepaths'
BIASES 'biases'
CONCATENATED_VARIABLES 'concatenated_variables'
COND_CONTEXT 'cond_context'
EVAL_STEP 'eval_step'
GLOBAL_STEP 'global_step'
GLOBAL_VARIABLES 'variables'
INIT_OP 'init_op'
LOCAL_INIT_OP 'local_init_op'
LOCAL_RESOURCES 'local_resources'
LOCAL_VARIABLES 'local_variables'
LOSSES 'losses'
METRIC_VARIABLES 'metric_variables'
MODEL_VARIABLES 'model_variables'
MOVING_AVERAGE_VARIABLES 'moving_average_variables'
QUEUE_RUNNERS 'queue_runners'
READY_FOR_LOCAL_INIT_OP 'ready_for_local_init_op'
READY_OP 'ready_op'
REGULARIZATION_LOSSES 'regularization_losses'
RESOURCES 'resources'
SAVEABLE_OBJECTS 'saveable_objects'
SAVERS 'savers'
SUMMARIES 'summaries'
SUMMARY_OP 'summary_op'
TABLE_INITIALIZERS 'table_initializer'
TRAINABLE_RESOURCE_VARIABLES 'trainable_resource_variables'
TRAINABLE_VARIABLES 'trainable_variables'
TRAIN_OP 'train_op'
UPDATE_OPS 'update_ops'
VARIABLES 'variables'
WEIGHTS 'weights'
WHILE_CONTEXT 'while_context' | |
doc_1179 |
Access a group of rows and columns by label(s) or a boolean array. .loc[] is primarily label based, but may also be used with a boolean array. Allowed inputs are: A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an integer position along the index). A list or array of labels, e.g. ['a', 'b', 'c'].
A slice object with labels, e.g. 'a':'f'. Warning Note that contrary to usual python slices, both the start and the stop are included A boolean array of the same length as the axis being sliced, e.g. [True, False, True]. An alignable boolean Series. The index of the key will be aligned before masking. An alignable Index. The Index of the returned selection will be the input. A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above) See more at Selection by Label. Raises
KeyError
If any items are not found. IndexingError
If an indexed key is passed and its index is unalignable to the frame index. See also DataFrame.at
Access a single value for a row/column label pair. DataFrame.iloc
Access group of rows and columns by integer position(s). DataFrame.xs
Returns a cross-section (row(s) or column(s)) from the Series/DataFrame. Series.loc
Access group of values using labels. Examples Getting values
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=['cobra', 'viper', 'sidewinder'],
... columns=['max_speed', 'shield'])
>>> df
max_speed shield
cobra 1 2
viper 4 5
sidewinder 7 8
Single label. Note this returns the row as a Series.
>>> df.loc['viper']
max_speed 4
shield 5
Name: viper, dtype: int64
List of labels. Note using [[]] returns a DataFrame.
>>> df.loc[['viper', 'sidewinder']]
max_speed shield
viper 4 5
sidewinder 7 8
Single label for row and column
>>> df.loc['cobra', 'shield']
2
Slice with labels for row and single label for column. As mentioned above, note that both the start and stop of the slice are included.
>>> df.loc['cobra':'viper', 'max_speed']
cobra 1
viper 4
Name: max_speed, dtype: int64
Boolean list with the same length as the row axis
>>> df.loc[[False, False, True]]
max_speed shield
sidewinder 7 8
Alignable boolean Series:
>>> df.loc[pd.Series([False, True, False],
... index=['viper', 'sidewinder', 'cobra'])]
max_speed shield
sidewinder 7 8
Index (same behavior as df.reindex)
>>> df.loc[pd.Index(["cobra", "viper"], name="foo")]
max_speed shield
foo
cobra 1 2
viper 4 5
Conditional that returns a boolean Series
>>> df.loc[df['shield'] > 6]
max_speed shield
sidewinder 7 8
Conditional that returns a boolean Series with column labels specified
>>> df.loc[df['shield'] > 6, ['max_speed']]
max_speed
sidewinder 7
Callable that returns a boolean Series
>>> df.loc[lambda df: df['shield'] == 8]
max_speed shield
sidewinder 7 8
Setting values Set value for all items matching the list of labels
>>> df.loc[['viper', 'sidewinder'], ['shield']] = 50
>>> df
max_speed shield
cobra 1 2
viper 4 50
sidewinder 7 50
Set value for an entire row
>>> df.loc['cobra'] = 10
>>> df
max_speed shield
cobra 10 10
viper 4 50
sidewinder 7 50
Set value for an entire column
>>> df.loc[:, 'max_speed'] = 30
>>> df
max_speed shield
cobra 30 10
viper 30 50
sidewinder 30 50
Set value for rows matching callable condition
>>> df.loc[df['shield'] > 35] = 0
>>> df
max_speed shield
cobra 30 10
viper 0 0
sidewinder 0 0
Getting values on a DataFrame with an index that has integer labels Another example using integers for the index
>>> df = pd.DataFrame([[1, 2], [4, 5], [7, 8]],
... index=[7, 8, 9], columns=['max_speed', 'shield'])
>>> df
max_speed shield
7 1 2
8 4 5
9 7 8
Slice with integer labels for rows. As mentioned above, note that both the start and stop of the slice are included.
>>> df.loc[7:9]
max_speed shield
7 1 2
8 4 5
9 7 8
Getting values with a MultiIndex A number of examples using a DataFrame with a MultiIndex
>>> tuples = [
... ('cobra', 'mark i'), ('cobra', 'mark ii'),
... ('sidewinder', 'mark i'), ('sidewinder', 'mark ii'),
... ('viper', 'mark ii'), ('viper', 'mark iii')
... ]
>>> index = pd.MultiIndex.from_tuples(tuples)
>>> values = [[12, 2], [0, 4], [10, 20],
... [1, 4], [7, 1], [16, 36]]
>>> df = pd.DataFrame(values, columns=['max_speed', 'shield'], index=index)
>>> df
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
Single label. Note this returns a DataFrame with a single index.
>>> df.loc['cobra']
max_speed shield
mark i 12 2
mark ii 0 4
Single index tuple. Note this returns a Series.
>>> df.loc[('cobra', 'mark ii')]
max_speed 0
shield 4
Name: (cobra, mark ii), dtype: int64
Single label for row and column. Similar to passing in a tuple, this returns a Series.
>>> df.loc['cobra', 'mark i']
max_speed 12
shield 2
Name: (cobra, mark i), dtype: int64
Single tuple. Note using [[]] returns a DataFrame.
>>> df.loc[[('cobra', 'mark ii')]]
max_speed shield
cobra mark ii 0 4
Single tuple for the index with a single label for the column
>>> df.loc[('cobra', 'mark i'), 'shield']
2
Slice from index tuple to single label
>>> df.loc[('cobra', 'mark i'):'viper']
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
mark iii 16 36
Slice from index tuple to index tuple
>>> df.loc[('cobra', 'mark i'):('viper', 'mark ii')]
max_speed shield
cobra mark i 12 2
mark ii 0 4
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1 | |
doc_1180 | See Migration guide for more details. tf.compat.v1.test.is_built_with_rocm
tf.test.is_built_with_rocm() | |
doc_1181 |
Return the picking behavior of the artist. The possible values are described in set_picker. See also
set_picker, pickable, pick | |
doc_1182 | An object that stores some headers. It has a dict-like interface, but is ordered, can store the same key multiple times, and iterating yields (key, value) pairs instead of only keys. This data structure is useful if you want a nicer way to handle WSGI headers which are stored as tuples in a list. From Werkzeug 0.3 onwards, the KeyError raised by this class is also a subclass of the BadRequest HTTP exception and will render a page for a 400 BAD REQUEST if caught in a catch-all for HTTP exceptions. Headers is mostly compatible with the Python wsgiref.headers.Headers class, with the exception of __getitem__. wsgiref will return None for headers['missing'], whereas Headers will raise a KeyError. To create a new Headers object pass it a list or dict of headers which are used as default values. This does not reuse the list passed to the constructor for internal usage. Parameters
defaults – The list of default values for the Headers. Changelog Changed in version 0.9: This data structure now stores unicode values similar to how the multi dicts do it. The main difference is that bytes can be set as well which will automatically be latin1 decoded. Changed in version 0.9: The linked() function was removed without replacement as it was an API that does not support the changes to the encoding model.
add(_key, _value, **kw)
Add a new header tuple to the list. Keyword arguments can specify additional parameters for the header value, with underscores converted to dashes: >>> d = Headers()
>>> d.add('Content-Type', 'text/plain')
>>> d.add('Content-Disposition', 'attachment', filename='foo.png')
The keyword argument dumping uses dump_options_header() behind the scenes. Changelog New in version 0.4.1: keyword arguments were added for wsgiref compatibility.
add_header(_key, _value, **_kw)
Add a new header tuple to the list. An alias for add() for compatibility with the wsgiref add_header() method.
clear()
Clears all headers.
extend(*args, **kwargs)
Extend headers in this object with items from another object containing header items as well as keyword arguments. To replace existing keys instead of extending, use update() instead. If provided, the first argument can be another Headers object, a MultiDict, dict, or iterable of pairs. Changelog Changed in version 1.0: Support MultiDict. Allow passing kwargs.
get(key, default=None, type=None, as_bytes=False)
Return the default value if the requested data doesn’t exist. If type is provided and is a callable it should convert the value, return it or raise a ValueError if that is not possible. In this case the function will return the default as if the value was not found: >>> d = Headers([('Content-Length', '42')])
>>> d.get('Content-Length', type=int)
42
Changelog New in version 0.9: Added support for as_bytes. Parameters
key – The key to be looked up.
default – The default value to be returned if the key can’t be looked up. If not further specified None is returned.
type – A callable that is used to cast the value in the Headers. If a ValueError is raised by this callable the default value is returned.
as_bytes – return bytes instead of strings.
get_all(name)
Return a list of all the values for the named field. This method is compatible with the wsgiref get_all() method.
getlist(key, type=None, as_bytes=False)
Return the list of items for a given key. If that key is not in the Headers, the return value will be an empty list. Just like get(), getlist() accepts a type parameter. All items will be converted with the callable defined there. Changelog New in version 0.9: Added support for as_bytes. Parameters
key – The key to be looked up.
type – A callable that is used to cast the value in the Headers. If a ValueError is raised by this callable the value will be removed from the list.
as_bytes – return bytes instead of strings. Returns
a list of all the values for the key.
has_key(key)
Deprecated since version 2.0: Will be removed in Werkzeug 2.1. Use key in data instead.
pop(key=None, default=no value)
Removes and returns a key or index. Parameters
key – The key to be popped. If this is an integer the item at that position is removed, if it’s a string the value for that key is. If the key is omitted or None the last item is removed. Returns
an item.
popitem()
Removes a key or index and returns a (key, value) item.
remove(key)
Remove a key. Parameters
key – The key to be removed.
set(_key, _value, **kw)
Remove all header tuples for key and add a new one. The newly added key either appears at the end of the list if there was no entry or replaces the first one. Keyword arguments can specify additional parameters for the header value, with underscores converted to dashes. See add() for more information. Changelog Changed in version 0.6.1: set() now accepts the same arguments as add(). Parameters
key – The key to be inserted.
value – The value to be inserted.
setdefault(key, default)
Return the first value for the key if it is in the headers, otherwise set the header to the value given by default and return that. Parameters
key – The header key to get.
default – The value to set for the key if it is not in the headers.
setlist(key, values)
Remove any existing values for a header and add new ones. Parameters
key – The header key to set.
values – An iterable of values to set for the key. Changelog New in version 1.0.
setlistdefault(key, default)
Return the list of values for the key if it is in the headers, otherwise set the header to the list of values given by default and return that. Unlike MultiDict.setlistdefault(), modifying the returned list will not affect the headers. Parameters
key – The header key to get.
default – An iterable of values to set for the key if it is not in the headers. Changelog New in version 1.0.
to_wsgi_list()
Convert the headers into a list suitable for WSGI. Returns
list
update(*args, **kwargs)
Replace headers in this object with items from another headers object and keyword arguments. To extend existing keys instead of replacing, use extend() instead. If provided, the first argument can be another Headers object, a MultiDict, dict, or iterable of pairs. Changelog New in version 1.0. | |
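A short sketch tying the Headers methods above together, assuming Werkzeug is installed; the header names are illustrative.

```python
# Sketch: multi-valued keys, set() vs add(), and WSGI conversion.
from werkzeug.datastructures import Headers

h = Headers()
h.add("Content-Type", "text/plain")
h.add("X-Tag", "a")
h.add("X-Tag", "b")              # the same key may be stored multiple times

assert h.get_all("X-Tag") == ["a", "b"]
h.set("X-Tag", "c")              # set() replaces all existing values for a key
assert h.get_all("X-Tag") == ["c"]
assert ("Content-Type", "text/plain") in h.to_wsgi_list()
```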
doc_1183 |
Set the alpha value used for blending - not supported on all backends. If alpha=None (the default), the alpha components of the foreground and fill colors will be used to set their respective transparencies (where applicable); otherwise, alpha will override them. | |
doc_1184 | The class to which a class instance belongs. | |
doc_1185 |
Split a Bezier segment defined by its control points beta into two separate segments divided at t and return their control points. | |
doc_1186 |
Alias for get_facecolor. | |
doc_1187 | See Migration guide for more details. tf.compat.v1.ifft2d, tf.compat.v1.signal.ifft2d, tf.compat.v1.spectral.ifft2d
tf.signal.ifft2d(
input, name=None
)
Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.
Args
input A Tensor. Must be one of the following types: complex64, complex128. A complex tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
doc_1188 | Return a ZipInfo object with information about the archive member name. Calling getinfo() for a name not currently contained in the archive will raise a KeyError. | |
doc_1189 | See Migration guide for more details. tf.compat.v1.raw_ops.Conv2DBackpropInput
tf.raw_ops.Conv2DBackpropInput(
input_sizes, filter, out_backprop, strides, padding, use_cudnn_on_gpu=True,
explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1],
name=None
)
Args
input_sizes A Tensor of type int32. An integer vector representing the shape of input, where input is a 4-D [batch, height, width, channels] tensor.
filter A Tensor. Must be one of the following types: half, bfloat16, float32, float64, int32. 4-D with shape [filter_height, filter_width, in_channels, out_channels].
out_backprop A Tensor. Must have the same type as filter. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution.
strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format.
padding A string from: "SAME", "VALID", "EXPLICIT". The type of padding algorithm to use.
use_cudnn_on_gpu An optional bool. Defaults to True.
explicit_paddings An optional list of ints. Defaults to []. If padding is "EXPLICIT", the list of explicit padding amounts. For the ith dimension, the amount of padding inserted before and after the dimension is explicit_paddings[2 * i] and explicit_paddings[2 * i + 1], respectively. If padding is not "EXPLICIT", explicit_paddings must be empty.
data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width].
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A Tensor. Has the same type as filter. | |
doc_1190 | class sklearn.gaussian_process.kernels.Matern(length_scale=1.0, length_scale_bounds=(1e-05, 100000.0), nu=1.5) [source]
Matern kernel. The class of Matern kernels is a generalization of the RBF. It has an additional parameter \(\nu\) which controls the smoothness of the resulting function. The smaller \(\nu\), the less smooth the approximated function is. As \(\nu\rightarrow\infty\), the kernel becomes equivalent to the RBF kernel. When \(\nu = 1/2\), the Matérn kernel becomes identical to the absolute exponential kernel. Important intermediate values are \(\nu=1.5\) (once differentiable functions) and \(\nu=2.5\) (twice differentiable functions). The kernel is given by: \[k(x_i, x_j) = \frac{1}{\Gamma(\nu)2^{\nu-1}}\Bigg( \frac{\sqrt{2\nu}}{l} d(x_i , x_j ) \Bigg)^\nu K_\nu\Bigg( \frac{\sqrt{2\nu}}{l} d(x_i , x_j )\Bigg)\] where \(d(\cdot,\cdot)\) is the Euclidean distance, \(K_{\nu}(\cdot)\) is a modified Bessel function and \(\Gamma(\cdot)\) is the gamma function. See [1], Chapter 4, Section 4.2, for details regarding the different variants of the Matern kernel. Read more in the User Guide. New in version 0.18. Parameters
length_scalefloat or ndarray of shape (n_features,), default=1.0
The length scale of the kernel. If a float, an isotropic kernel is used. If an array, an anisotropic kernel is used where each dimension of l defines the length-scale of the respective feature dimension.
length_scale_boundspair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on ‘length_scale’. If set to “fixed”, ‘length_scale’ cannot be changed during hyperparameter tuning.
nufloat, default=1.5
The parameter nu controlling the smoothness of the learned function. The smaller nu, the less smooth the approximated function is. For nu=inf, the kernel becomes equivalent to the RBF kernel and for nu=0.5 to the absolute exponential kernel. Important intermediate values are nu=1.5 (once differentiable functions) and nu=2.5 (twice differentiable functions). Note that values of nu not in [0.5, 1.5, 2.5, inf] incur a considerably higher computational cost (approx. 10 times higher) since they require evaluating the modified Bessel function. Furthermore, in contrast to l, nu is kept fixed to its initial value and not optimized. Attributes
anisotropic
bounds
Returns the log-transformed bounds on the theta. hyperparameter_length_scale
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. References
1
Carl Edward Rasmussen, Christopher K. I. Williams (2006). “Gaussian Processes for Machine Learning”. The MIT Press. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.gaussian_process import GaussianProcessClassifier
>>> from sklearn.gaussian_process.kernels import Matern
>>> X, y = load_iris(return_X_y=True)
>>> kernel = 1.0 * Matern(length_scale=1.0, nu=1.5)
>>> gpc = GaussianProcessClassifier(kernel=kernel,
... random_state=0).fit(X, y)
>>> gpc.score(X, y)
0.9866...
>>> gpc.predict_proba(X[:2,:])
array([[0.8513..., 0.0368..., 0.1117...],
[0.8086..., 0.0693..., 0.1220...]])
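For the special value nu=1.5 mentioned above, the kernel reduces to a closed form that avoids the modified Bessel function; a minimal numpy sketch (assuming the standard closed-form expression, with l the length scale):

```python
import numpy as np

# Closed form of the Matern kernel at nu=1.5:
#   k(d) = (1 + sqrt(3) * d / l) * exp(-sqrt(3) * d / l)
def matern_15(d, length_scale=1.0):
    s = np.sqrt(3.0) * np.asarray(d) / length_scale
    return (1.0 + s) * np.exp(-s)

assert matern_15(0.0) == 1.0             # kernel equals 1 at zero distance
assert matern_15(1.0) < matern_15(0.5)   # and decays with distance
```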
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Yndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y) Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel
Examples using sklearn.gaussian_process.kernels.Matern
Illustration of prior and posterior Gaussian process for different kernels | |
doc_1191 | Takes an optional file object fp, which is ignored by the base class. Initializes “protected” instance variables _info and _charset which are set by derived classes, as well as _fallback, which is set through add_fallback(). It then calls self._parse(fp) if fp is not None.
_parse(fp)
No-op in the base class, this method takes file object fp, and reads the data from the file, initializing its message catalog. If you have an unsupported message catalog file format, you should override this method to parse your format.
add_fallback(fallback)
Add fallback as the fallback object for the current translation object. A translation object should consult the fallback if it cannot provide a translation for a given message.
gettext(message)
If a fallback has been set, forward gettext() to the fallback. Otherwise, return message. Overridden in derived classes.
ngettext(singular, plural, n)
If a fallback has been set, forward ngettext() to the fallback. Otherwise, return singular if n is 1; return plural otherwise. Overridden in derived classes.
pgettext(context, message)
If a fallback has been set, forward pgettext() to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8.
npgettext(context, singular, plural, n)
If a fallback has been set, forward npgettext() to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8.
lgettext(message)
lngettext(singular, plural, n)
Equivalent to gettext() and ngettext(), but the translation is returned as a byte string encoded in the preferred system encoding if no encoding was explicitly set with set_output_charset(). Overridden in derived classes. Warning These methods should be avoided in Python 3. See the warning for the lgettext() function. Deprecated since version 3.8, will be removed in version 3.10.
info()
Return the “protected” _info variable, a dictionary containing the metadata found in the message catalog file.
charset()
Return the encoding of the message catalog file.
output_charset()
Return the encoding used to return translated messages in lgettext() and lngettext(). Deprecated since version 3.8, will be removed in version 3.10.
set_output_charset(charset)
Change the encoding used to return translated messages. Deprecated since version 3.8, will be removed in version 3.10.
install(names=None)
This method installs gettext() into the built-in namespace, binding it to _. If the names parameter is given, it must be a sequence containing the names of functions you want to install in the builtins namespace in addition to _(). Supported names are 'gettext', 'ngettext', 'pgettext', 'npgettext', 'lgettext', and 'lngettext'. Note that this is only one way, albeit the most convenient way, to make the _() function available to your application. Because it affects the entire application globally, and specifically the built-in namespace, localized modules should never install _(). Instead, they should use this code to make _() available to their module: import gettext
t = gettext.translation('mymodule', ...)
_ = t.gettext
This puts _() only in the module’s global namespace and so only affects calls within this module. Changed in version 3.8: Added 'pgettext' and 'npgettext'. | |
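The fallback chain described above can be exercised with NullTranslations, the base class this entry documents; with no catalog loaded, gettext() echoes its argument and ngettext() picks the plural form purely from n:

```python
import gettext

base = gettext.NullTranslations()
# The fallback is consulted when the current object has no translation.
base.add_fallback(gettext.NullTranslations())

print(base.gettext("Hello"))              # no catalog: returns "Hello"
print(base.ngettext("file", "files", 1))  # n == 1: returns "file"
print(base.ngettext("file", "files", 2))  # otherwise: returns "files"
```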
doc_1192 | Assume the end of the document. That will check well-formedness conditions that can be checked only at the end, invoke handlers, and may clean up resources allocated during parsing. | |
doc_1193 | Open url in a new page (“tab”) of the default browser, if possible, otherwise equivalent to open_new(). | |
doc_1194 | Return the round-robin quantum in seconds for the process with PID pid. A pid of 0 means the calling process. | |
doc_1195 |
Fit the model to data matrix X and target(s) y. Parameters
Xndarray or sparse matrix of shape (n_samples, n_features)
The input data.
yndarray of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression). Returns
selfreturns a trained MLP model. | |
doc_1196 | Return whether this path points to the same file as other_path, which can be either a Path object, or a string. The semantics are similar to os.path.samefile() and os.path.samestat(). An OSError can be raised if either file cannot be accessed for some reason. >>> p = Path('spam')
>>> q = Path('eggs')
>>> p.samefile(q)
False
>>> p.samefile('spam')
True
New in version 3.5. | |
doc_1197 | tf.compat.v1.nn.moments(
x, axes, shift=None, name=None, keep_dims=None, keepdims=None
)
The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.
Note: shift is currently not used; the true mean is computed and used.
When using these moments for batch normalization (see tf.nn.batch_normalization): for so-called "global normalization", used with convolutional filters with shape [batch, height, width, depth], pass axes=[0, 1, 2]; for simple batch normalization, pass axes=[0] (batch only).
Args
x A Tensor.
axes Array of ints. Axes along which to compute mean and variance.
shift Not used in the current implementation
name Name used to scope the operations that compute the moments.
keep_dims If True, produce moments with the same dimensionality as the input.
keepdims Alias for keep_dims.
Returns Two Tensor objects: mean and variance. | |
doc_1198 | Windows only: Represents a HRESULT value, which contains success or error information for a function or method call. | |
doc_1199 | tf.autodiff.ForwardAccumulator(
primals, tangents
)
Compare to tf.GradientTape which computes vector-Jacobian products ("VJP"s) using reverse-mode autodiff (backprop). Reverse mode is more attractive when computing gradients of a scalar-valued function with respect to many inputs (e.g. a neural network with many parameters and a scalar loss). Forward mode works best on functions with many outputs and few inputs. Since it does not hold on to intermediate activations, it is much more memory efficient than backprop where it is applicable. Consider a simple linear regression:
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
with tf.autodiff.ForwardAccumulator(
primals=dense.kernel,
tangents=tf.constant([[1.], [0.]])) as acc:
loss = tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.)
acc.jvp(loss)
<tf.Tensor: shape=(), dtype=float32, numpy=...>
The example has two variables containing parameters, dense.kernel (2 parameters) and dense.bias (1 parameter). Considering the training data x as a constant, this means the Jacobian matrix for the function mapping from parameters to loss has one row and three columns. With forwardprop, we specify a length-three vector in advance which multiplies the Jacobian. The primals constructor argument is the parameter (a tf.Tensor or tf.Variable) we're specifying a vector for, and the tangents argument is the "vector" in Jacobian-vector product. If our goal is to compute the entire Jacobian matrix, forwardprop computes one column at a time while backprop computes one row at a time. Since the Jacobian in the linear regression example has only one row, backprop requires fewer invocations:
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
loss_fn = lambda: tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.)
kernel_fprop = []
with tf.autodiff.ForwardAccumulator(
dense.kernel, tf.constant([[1.], [0.]])) as acc:
kernel_fprop.append(acc.jvp(loss_fn()))
with tf.autodiff.ForwardAccumulator(
dense.kernel, tf.constant([[0.], [1.]])) as acc:
kernel_fprop.append(acc.jvp(loss_fn()))
with tf.autodiff.ForwardAccumulator(dense.bias, tf.constant([1.])) as acc:
bias_fprop = acc.jvp(loss_fn())
with tf.GradientTape() as tape:
loss = loss_fn()
kernel_grad, bias_grad = tape.gradient(loss, (dense.kernel, dense.bias))
np.testing.assert_allclose(
kernel_grad, tf.stack(kernel_fprop)[:, tf.newaxis])
np.testing.assert_allclose(bias_grad, bias_fprop[tf.newaxis])
Implicit in the tape.gradient call is a length-one vector which left-multiplies the Jacobian, a vector-Jacobian product. ForwardAccumulator maintains JVPs corresponding to primal tensors it is watching, derived from the original primals specified in the constructor. As soon as a primal tensor is deleted, ForwardAccumulator deletes the corresponding JVP. acc.jvp(x) retrieves acc's JVP corresponding to the primal tensor x. It does not perform any computation. acc.jvp calls can be repeated as long as acc is accessible, whether the context manager is active or not. New JVPs are only computed while the context manager is active. Note that ForwardAccumulators are always applied in the order their context managers were entered, so inner accumulators will not see JVP computation from outer accumulators. Take higher-order JVPs from outer accumulators:
primal = tf.constant(1.1)
with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as outer:
with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as inner:
primal_out = primal ** tf.constant(3.5)
inner_jvp = inner.jvp(primal_out)
inner_jvp # 3.5 * 1.1 ** 2.5
<tf.Tensor: shape=(), dtype=float32, numpy=4.4417057>
outer.jvp(inner_jvp) # 3.5 * 2.5 * 1.1 ** 1.5
<tf.Tensor: shape=(), dtype=float32, numpy=10.094786>
Reversing the collection in the last line to instead retrieve inner.jvp(outer.jvp(primal_out)) will not work. Strict nesting also applies to combinations of ForwardAccumulator and tf.GradientTape. More deeply nested GradientTape objects will ignore the products of outer ForwardAccumulator objects. This allows (for example) memory-efficient forward-over-backward computation of Hessian-vector products, where the inner GradientTape would otherwise hold on to all intermediate JVPs:
v = tf.Variable([1., 2.])
with tf.autodiff.ForwardAccumulator(
v,
# The "vector" in Hessian-vector product.
tf.constant([1., 0.])) as acc:
with tf.GradientTape() as tape:
y = tf.reduce_sum(v ** 3.)
backward = tape.gradient(y, v)
backward # gradient from backprop
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 3., 12.], dtype=float32)>
acc.jvp(backward) # forward-over-backward Hessian-vector product
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([6., 0.], dtype=float32)>
Args
primals A tensor or nested structure of tensors to watch.
tangents A tensor or nested structure of tensors, with the same nesting structure as primals, with each element being a vector with the same size as the corresponding primal element.
Raises
ValueError If the same tensor or variable is specified multiple times in primals. Methods jvp View source
jvp(
primals, unconnected_gradients=tf.UnconnectedGradients.NONE
)
Fetches the Jacobian-vector product computed for primals. Note that this method performs no computation, and simply looks up a JVP that was already computed (unlike backprop using a tf.GradientTape, where the computation happens on the call to tape.gradient).
Args
primals A watched Tensor or structure of Tensors to fetch the JVPs for.
unconnected_gradients A value which can either hold 'none' or 'zero' and alters the value which will be returned if no JVP was computed for primals. The possible values and effects are detailed in 'tf.UnconnectedGradients' and it defaults to 'none'.
Returns Tensors with the same shapes and dtypes as primals, or None if no JVP is available.
__enter__ View source
__enter__()
__exit__ View source
__exit__(
typ, value, traceback
) |
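The JVP this accumulator computes is just a directional derivative; a numpy sketch (mirroring the v = [1., 2.], y = sum(v**3) example above, without TF) checks the analytic value against a central finite difference:

```python
import numpy as np

# JVP of f(v) = sum(v**3) along tangent t is grad_f(v) . t = (3 * v**2) . t,
# which is what ForwardAccumulator produces without materializing the Jacobian.
def f(v):
    return np.sum(v ** 3)

v = np.array([1.0, 2.0])
t = np.array([1.0, 0.0])                 # the "vector" in the JVP
analytic_jvp = np.dot(3 * v ** 2, t)     # 3 * 1**2 * 1 = 3.0
eps = 1e-6
numeric_jvp = (f(v + eps * t) - f(v - eps * t)) / (2 * eps)
assert abs(analytic_jvp - numeric_jvp) < 1e-3
```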