doc_25600 |
Bases: matplotlib.projections.geo._GeoTransform. The base Hammer transform. Create a new geographical transform. resolution is the number of steps to interpolate between each input line segment to approximate its path in curved space. has_inverse = True
True if this transform has a corresponding inverse transform.
inverted()[source]
Return the corresponding inverse transformation. It holds x == self.inverted().transform(self.transform(x)). The return value of this method should be treated as temporary. An update to self does not cause a corresponding update to its inverted copy.
transform_non_affine(ll)[source]
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters
values : array
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input. | |
doc_25601 | Get the size of the terminal window. For each of the two dimensions, the environment variable, COLUMNS and LINES respectively, is checked. If the variable is defined and the value is a positive integer, it is used. When COLUMNS or LINES is not defined, which is the common case, the terminal connected to sys.__stdout__ is queried by invoking os.get_terminal_size(). If the terminal size cannot be successfully queried, either because the system doesn’t support querying, or because we are not connected to a terminal, the value given in fallback parameter is used. fallback defaults to (80, 24) which is the default size used by many terminal emulators. The value returned is a named tuple of type os.terminal_size. See also: The Single UNIX Specification, Version 2, Other Environment Variables. New in version 3.3. | |
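A minimal sketch of the lookup described in the entry above, using Python's shutil.get_terminal_size; the fallback applies when output is piped or no terminal is attached:

```python
import shutil

# Checks COLUMNS/LINES first, then queries the terminal of sys.__stdout__;
# falls back to (80, 24) when the size cannot be determined.
size = shutil.get_terminal_size(fallback=(80, 24))
print(size.columns, size.lines)  # named tuple of type os.terminal_size
```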
doc_25602 |
Returns a new bit generator with the state jumped. Jumps the state as-if jumps * 210306068529402873165736369884012333109 random numbers have been generated. Parameters
jumps : integer, positive
Number of times to jump the state of the bit generator. Returns
bit_generator : PCG64
New instance of the bit generator with the state jumped jumps times. Notes The step size is phi-1 when multiplied by 2**128, where phi is the golden ratio. | |
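A short sketch of using jumped to derive independent streams for parallel workers, assuming NumPy (≥ 1.17) is available:

```python
from numpy.random import PCG64, Generator

bg = PCG64(seed=1234)
# Each jump skips ahead by a fixed, astronomically large stride,
# so the resulting streams do not overlap.
streams = [Generator(bg.jumped(i)) for i in range(1, 5)]
draws = [g.random() for g in streams]
print(draws)  # four draws from four independent streams
```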
doc_25603 | flip vertically and horizontally flip(Surface, xbool, ybool) -> Surface This can flip a Surface either vertically, horizontally, or both. Flipping a Surface is non-destructive and returns a new Surface with the same dimensions. | |
doc_25604 | A callback wrapper object returned by loop.call_soon() and loop.call_soon_threadsafe().
cancel()
Cancel the callback. If the callback has already been cancelled or executed, this method has no effect.
cancelled()
Return True if the callback was cancelled. New in version 3.7. | |
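A minimal sketch of cancelling a Handle before the loop runs it (requires Python 3.7+ for Handle.cancelled):

```python
import asyncio

loop = asyncio.new_event_loop()
calls = []
h1 = loop.call_soon(calls.append, "ran")
h2 = loop.call_soon(calls.append, "never")
h2.cancel()                 # cancelling again would have no effect
loop.call_soon(loop.stop)   # stop after the pending callbacks are processed
loop.run_forever()
loop.close()
print(calls)  # ['ran'] -- the cancelled callback never executed
```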
doc_25605 | Returns the size of the mask get_size() -> (width, height)
Returns:
the size of the mask, (width, height)
Return type:
tuple(int, int) | |
doc_25606 |
Matches all possible non-overlapping sets of operators and their data dependencies (pattern) in the Graph of a GraphModule (gm), then replaces each of these matched subgraphs with another subgraph (replacement). Parameters
gm – The GraphModule that wraps the Graph to operate on
pattern – The subgraph to match in gm for replacement
replacement – The subgraph to replace pattern with Returns
A list of Match objects representing the places in the original graph that pattern was matched to. The list is empty if there are no matches. Match is defined as: class Match(NamedTuple):
    # Node from which the match was found
    anchor: Node
    # Maps nodes in the pattern subgraph to nodes in the larger graph
    nodes_map: Dict[Node, Node]
Return type
List[Match] Examples:

import torch
from torch.fx import symbolic_trace, subgraph_rewriter

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, w1, w2):
        m1 = torch.cat([w1, w2]).sum()
        m2 = torch.cat([w1, w2]).sum()
        return x + torch.max(m1) + torch.max(m2)

def pattern(w1, w2):
    return torch.cat([w1, w2]).sum()

def replacement(w1, w2):
    return torch.stack([w1, w2])

traced_module = symbolic_trace(M())
subgraph_rewriter.replace_pattern(traced_module, pattern, replacement)
The above code will first match pattern in the forward method of traced_module. Pattern-matching is done based on use-def relationships, not node names. For example, if you had p = torch.cat([a, b]) in pattern, you could match m = torch.cat([a, b]) in the original forward function, despite the variable names being different (p vs m). The return statement in pattern is matched based on its value only; it may or may not match to the return statement in the larger graph. In other words, the pattern doesn’t have to extend to the end of the larger graph. When the pattern is matched, it will be removed from the larger function and replaced by replacement. If there are multiple matches for pattern in the larger function, each non-overlapping match will be replaced. In the case of a match overlap, the first found match in the set of overlapping matches will be replaced. (“First” here being defined as the first in a topological ordering of the Nodes’ use-def relationships. In most cases, the first Node is the parameter that appears directly after self, while the last Node is whatever the function returns.) One important thing to note is that the parameters of the pattern Callable must be used in the Callable itself, and the parameters of the replacement Callable must match the pattern. The first rule is why, in the above code block, the forward function has parameters x, w1, w2, but the pattern function only has parameters w1, w2. pattern doesn’t use x, so it shouldn’t specify x as a parameter. As an example of the second rule, consider replacing

def pattern(x, y):
    return torch.neg(x) + torch.relu(y)

with

def replacement(x, y):
    return torch.relu(x)

In this case, replacement needs the same number of parameters as pattern (both x and y), even though the parameter y isn’t used in replacement. After calling subgraph_rewriter.replace_pattern, the generated Python code looks like this:

def forward(self, x, w1, w2):
    stack_1 = torch.stack([w1, w2])
    sum_1 = stack_1.sum()
    stack_2 = torch.stack([w1, w2])
    sum_2 = stack_2.sum()
    max_1 = torch.max(sum_1)
    add_1 = x + max_1
    max_2 = torch.max(sum_2)
    add_2 = add_1 + max_2
    return add_2 | |
doc_25607 | sklearn.metrics.pairwise.cosine_similarity(X, Y=None, dense_output=True) [source]
Compute cosine similarity between samples in X and Y. Cosine similarity, or the cosine kernel, computes similarity as the normalized dot product of X and Y: K(X, Y) = <X, Y> / (||X||*||Y||) On L2-normalized data, this function is equivalent to linear_kernel. Read more in the User Guide. Parameters
X : {ndarray, sparse matrix} of shape (n_samples_X, n_features)
Input data.
Y : {ndarray, sparse matrix} of shape (n_samples_Y, n_features), default=None
Input data. If None, the output will be the pairwise similarities between all samples in X.
dense_output : bool, default=True
Whether to return dense output even when the input is sparse. If False, the output is sparse if both input arrays are sparse. New in version 0.17: parameter dense_output for dense output. Returns
kernel matrix : ndarray of shape (n_samples_X, n_samples_Y) | |
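The normalized-dot-product formula K(X, Y) = &lt;X, Y&gt; / (||X||*||Y||) above can be sketched in plain Python for a single pair of vectors; this illustrates the math, not sklearn's vectorized implementation:

```python
import math

def cosine_sim(x, y):
    # K(x, y) = <x, y> / (||x|| * ||y||)
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

print(cosine_sim([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_sim([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```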
doc_25608 |
Scalar method identical to the corresponding array attribute. Please see ndarray.dumps. | |
doc_25609 | User defined value. | |
doc_25610 | See Migration guide for more details. tf.compat.v1.test.main
tf.test.main(
argv=None
) | |
doc_25611 | tf.keras.layers.Convolution3D Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.Conv3D, tf.compat.v1.keras.layers.Convolution3D
tf.keras.layers.Conv3D(
filters, kernel_size, strides=(1, 1, 1), padding='valid',
data_format=None, dilation_rate=(1, 1, 1), groups=1, activation=None,
use_bias=True, kernel_initializer='glorot_uniform',
bias_initializer='zeros', kernel_regularizer=None,
bias_regularizer=None, activity_regularizer=None, kernel_constraint=None,
bias_constraint=None, **kwargs
)
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well. When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers, does not include the sample axis), e.g. input_shape=(128, 128, 128, 1) for 128x128x128 volumes with a single channel, in data_format="channels_last". Examples:
# The inputs are 28x28x28 volumes with a single channel, and the
# batch size is 4
input_shape = (4, 28, 28, 28, 1)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv3D(
2, 3, activation='relu', input_shape=input_shape[1:])(x)
print(y.shape)
(4, 26, 26, 26, 2)
# With extended batch shape [4, 7], e.g. a batch of 4 videos of 3D frames,
# with 7 frames per video.
input_shape = (4, 7, 28, 28, 28, 1)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv3D(
2, 3, activation='relu', input_shape=input_shape[2:])(x)
print(y.shape)
(4, 7, 26, 26, 26, 2)
Arguments
filters Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
kernel_size An integer or tuple/list of 3 integers, specifying the depth, height and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
strides An integer or tuple/list of 3 integers, specifying the strides of the convolution along each spatial dimension. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
padding one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape batch_shape + (spatial_dim1, spatial_dim2, spatial_dim3, channels) while channels_first corresponds to inputs with shape batch_shape + (channels, spatial_dim1, spatial_dim2, spatial_dim3). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
dilation_rate an integer or tuple/list of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
groups A positive integer specifying the number of groups in which the input is split along the channel axis. Each group is convolved separately with filters / groups filters. The output is the concatenation of all the groups results along the channel axis. Input channels and filters must both be divisible by groups.
activation Activation function to use. If you don't specify anything, no activation is applied (see keras.activations).
use_bias Boolean, whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix (see keras.initializers).
bias_initializer Initializer for the bias vector (see keras.initializers).
kernel_regularizer Regularizer function applied to the kernel weights matrix (see keras.regularizers).
bias_regularizer Regularizer function applied to the bias vector (see keras.regularizers).
activity_regularizer Regularizer function applied to the output of the layer (its "activation") (see keras.regularizers).
kernel_constraint Constraint function applied to the kernel matrix (see keras.constraints).
bias_constraint Constraint function applied to the bias vector (see keras.constraints). Input shape: 5+D tensor with shape: batch_shape + (channels, conv_dim1, conv_dim2, conv_dim3) if data_format='channels_first' or 5+D tensor with shape: batch_shape + (conv_dim1, conv_dim2, conv_dim3, channels) if data_format='channels_last'. Output shape: 5+D tensor with shape: batch_shape + (filters, new_conv_dim1, new_conv_dim2, new_conv_dim3) if data_format='channels_first' or 5+D tensor with shape: batch_shape + (new_conv_dim1, new_conv_dim2, new_conv_dim3, filters) if data_format='channels_last'. new_conv_dim1, new_conv_dim2 and new_conv_dim3 values might have changed due to padding.
Returns A tensor of rank 5+ representing activation(conv3d(inputs, kernel) + bias).
Raises
ValueError if padding is "causal".
ValueError when both strides > 1 and dilation_rate > 1. | |
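The new_conv_dim values mentioned above follow standard convolution arithmetic. A hypothetical helper (conv_output_dim is not a Keras API) sketches the computation for one spatial dimension:

```python
def conv_output_dim(input_dim, kernel_size, stride=1, padding="valid", dilation=1):
    """Spatial output size of one convolution dimension (hypothetical helper)."""
    effective_k = dilation * (kernel_size - 1) + 1
    if padding == "same":
        return -(-input_dim // stride)  # ceil(input_dim / stride)
    return (input_dim - effective_k) // stride + 1

# Matches the doc example: 28x28x28 volumes, kernel 3, 'valid' padding -> 26
print(conv_output_dim(28, 3))  # 26
```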
doc_25612 | Whiteout. New in version 3.4. | |
doc_25613 |
Aggregate using one or more operations over the specified axis. Parameters
func:function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply. Accepted combinations are: function string function name list of functions and/or function names, e.g. [np.sum, 'mean'] dict of axis labels -> functions, function names or list of such.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
If 0 or ‘index’: apply function to each column. If 1 or ‘columns’: apply function to each row. *args
Positional arguments to pass to func. **kwargs
Keyword arguments to pass to func. Returns
scalar, Series or DataFrame
The return can be: scalar when Series.agg is called with a single function; Series when DataFrame.agg is called with a single function; DataFrame when DataFrame.agg is called with several functions. The aggregation operations are always performed over an axis, either the
index (default) or the column axis. This behavior is different from
numpy aggregation functions (mean, median, prod, sum, std,
var), where the default is to compute the aggregation of the flattened
array, e.g., numpy.mean(arr_2d) as opposed to
numpy.mean(arr_2d, axis=0).
See also DataFrame.apply
Perform any type of operations. DataFrame.transform
Perform transformation type operations. core.groupby.GroupBy
Perform operations over groups. core.resample.Resampler
Perform operations over resampled bins. core.window.Rolling
Perform operations over rolling window. core.window.Expanding
Perform operations over expanding window. core.window.ExponentialMovingWindow
Perform operation over exponential weighted window. Notes agg is an alias for aggregate. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods for more details. A passed user-defined-function will be passed a Series for evaluation. Examples
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
Aggregate these functions over the rows.
>>> df.agg(['sum', 'min'])
A B C
sum 12.0 15.0 18.0
min 1.0 2.0 3.0
Different aggregations per column.
>>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
A B
sum 12.0 NaN
min 1.0 2.0
max NaN 8.0
Aggregate different functions over the columns and rename the index of the resulting DataFrame.
>>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean))
A B C
x 7.0 NaN NaN
y NaN 2.0 NaN
z NaN NaN 6.0
Aggregate over the columns.
>>> df.agg("mean", axis="columns")
0 2.0
1 5.0
2 8.0
3 NaN
dtype: float64 | |
doc_25614 |
Derived classes should override this method. The arguments are the same as matplotlib.backend_bases.RendererBase.draw_path(), except the first argument is a renderer. | |
doc_25615 |
Parameters:
string (str) – string that contains spatial data
srid (int) – spatial reference identifier
Return type:
a GEOSGeometry corresponding to the spatial data in the string fromstr(string, srid) is equivalent to GEOSGeometry(string, srid). Example: >>> from django.contrib.gis.geos import fromstr
>>> pnt = fromstr('POINT(-90.5 29.5)', srid=4326) | |
doc_25616 | The value to be used for the wsgi.multiprocess environment variable. It defaults to true in BaseHandler, but may have a different default (or be set by the constructor) in the other subclasses. | |
doc_25617 | Default return value for get_login_url(). Defaults to None in which case get_login_url() falls back to settings.LOGIN_URL. | |
doc_25618 | This is an optional argument which validates that the array reaches at least the stated length. | |
doc_25619 |
A gated recurrent unit (GRU) cell: a dynamically quantized GRUCell module with floating point tensors as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.GRUCell; please see https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell for documentation. Examples: >>> rnn = nn.quantized.dynamic.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx) | |
doc_25620 | Play the sound repeatedly. The SND_ASYNC flag must also be used to avoid blocking. Cannot be used with SND_MEMORY. | |
doc_25621 | the system name of the cdrom drive get_name() -> name Return the string name of the drive. This is the system name used to represent the drive. It is often the drive letter or device name. This method can work on an uninitialized CD. | |
doc_25622 | See Migration guide for more details. tf.compat.v1.keras.applications.efficientnet.preprocess_input
tf.keras.applications.efficientnet.preprocess_input(
x, data_format=None
) | |
doc_25623 | Represents the C unsigned char datatype, it interprets the value as small integer. The constructor accepts an optional integer initializer; no overflow checking is done. | |
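A quick illustration of the wrap-around behaviour that follows from "no overflow checking is done" in the entry above:

```python
from ctypes import c_ubyte

x = c_ubyte(200)
print(x.value)   # 200
x.value = 300    # out of range for unsigned char: silently wraps mod 256
print(x.value)   # 44
print(c_ubyte(-1).value)  # 255
```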
doc_25624 | Returns the name of the session cookie. Uses app.session_cookie_name which is set to SESSION_COOKIE_NAME Parameters
app (Flask) – Return type
str | |
doc_25625 | Return the window contents as a string; whether blanks in the window are included is affected by the stripspaces member. | |
doc_25626 |
The transposed array. Same as self.transpose(). See also transpose
Examples >>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1., 2.],
[ 3., 4.]])
>>> x.T
array([[ 1., 3.],
[ 2., 4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1., 2., 3., 4.])
>>> x.T
array([ 1., 2., 3., 4.]) | |
doc_25627 |
Return the Transform instance mapping patch coordinates to data coordinates. For example, one may define a patch of a circle which represents a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinate) by 5. | |
doc_25628 |
Return the string path of the cache directory. The procedure used to find the directory is the same as for _get_config_dir, except using $XDG_CACHE_HOME/$HOME/.cache instead. | |
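The lookup described above can be sketched roughly as follows; the helper name and exact precedence are assumptions for illustration, not the library's actual code:

```python
import os

def get_cache_dir():
    # Prefer $XDG_CACHE_HOME; fall back to ~/.cache (assumed precedence)
    xdg = os.environ.get("XDG_CACHE_HOME")
    if xdg:
        return xdg
    return os.path.join(os.path.expanduser("~"), ".cache")
```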
doc_25629 | See torch.baddbmm() | |
doc_25630 | A model class. Can be explicitly provided, otherwise will be determined by examining self.object or queryset. | |
doc_25631 |
Return the alpha value used for blending - not supported on all backends. | |
doc_25632 |
Confirm s is string 'figure' or convert s to float or raise. | |
doc_25633 | Define how a single command-line argument should be parsed. Each parameter has its own more detailed description below, but in short they are:
name or flags - Either a name or a list of option strings, e.g. foo or -f, --foo.
action - The basic type of action to be taken when this argument is encountered at the command line.
nargs - The number of command-line arguments that should be consumed.
const - A constant value required by some action and nargs selections.
default - The value produced if the argument is absent from the command line and if it is absent from the namespace object.
type - The type to which the command-line argument should be converted.
choices - A container of the allowable values for the argument.
required - Whether or not the command-line option may be omitted (optionals only).
help - A brief description of what the argument does.
metavar - A name for the argument in usage messages.
dest - The name of the attribute to be added to the object returned by parse_args(). | |
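A small sketch exercising several of the parameters listed above:

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("name")                                  # positional
parser.add_argument("-v", "--verbose", action="store_true")  # boolean flag
parser.add_argument("--level", type=int, choices=[1, 2, 3],
                    default=1, metavar="N",
                    help="difficulty level")
args = parser.parse_args(["hello", "-v", "--level", "2"])
print(args.name, args.verbose, args.level)  # hello True 2
```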
doc_25634 |
Set the agg filter. Parameters
filter_func : callable
A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array. | |
doc_25635 | See Migration guide for more details. tf.compat.v1.config.experimental.enable_tensor_float_32_execution
tf.config.experimental.enable_tensor_float_32_execution(
enabled
)
TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere GPUs. TensorFloat-32 execution causes certain float32 ops, such as matrix multiplications and convolutions, to run much faster on Ampere GPUs but with reduced precision. This reduced precision should not impact convergence of deep learning models in practice. TensorFloat-32 is enabled by default. TensorFloat-32 is only supported on Ampere GPUs, so all other hardware will use the full float32 precision regardless of whether TensorFloat-32 is enabled or not. If you want to use the full float32 precision on Ampere, you can disable TensorFloat-32 execution with this function. For example: x = tf.fill((2, 2), 1.0001)
y = tf.fill((2, 2), 1.)
# TensorFloat-32 is enabled, so matmul is run with reduced precision
print(tf.linalg.matmul(x, y)) # [[2., 2.], [2., 2.]]
tf.config.experimental.enable_tensor_float_32_execution(False)
# Matmul is run with full precision
print(tf.linalg.matmul(x, y)) # [[2.0002, 2.0002], [2.0002, 2.0002]]
To check whether TensorFloat-32 execution is currently enabled, use tf.config.experimental.tensor_float_32_execution_enabled. If TensorFloat-32 is enabled, float32 inputs of supported ops, such as tf.linalg.matmul, will be rounded from 23 bits of precision to 10 bits of precision in most cases. This allows the ops to execute much faster by utilizing the GPU's tensor cores. TensorFloat-32 has the same dynamic range as float32, meaning it is no more likely to underflow or overflow than float32. Ops still use float32 accumulation when TensorFloat-32 is enabled. Enabling or disabling TensorFloat-32 only affects Ampere GPUs and subsequent GPUs that support TensorFloat-32. Note: TensorFloat-32 is not always used in supported ops, as only inputs of certain shapes are supported. Support for more input shapes and more ops may be added in the future. As a result, precision of float32 ops may decrease in minor versions of TensorFlow. TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32 is used in fewer cases for complex64 than it is for float32.
Args
enabled Bool indicating whether to enable TensorFloat-32 execution. | |
doc_25636 |
Return filter function to be used for agg filter. | |
doc_25637 |
Set the artist's clip path. Parameters
path : Patch or Path or TransformedPath or None
The clip path. If given a Path, transform must be provided as well. If None, a previously set clip path is removed.
transform : Transform, optional
Only used if path is a Path, in which case the given Path is converted to a TransformedPath using transform. Notes For efficiency, if path is a Rectangle this method will set the clipping box to the corresponding rectangle and set the clipping path to None. For technical reasons (support of set), a tuple (path, transform) is also accepted as a single positional parameter. | |
doc_25638 | Return a list of all the values for the field named name. If there are no such named headers in the message, failobj is returned (defaults to None). | |
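For example, with the stdlib email package's legacy Message class (which permits duplicate headers):

```python
from email.message import Message

msg = Message()
msg["To"] = "alice@example.com"
msg["To"] = "bob@example.com"  # headers can repeat; both values are kept

print(msg.get_all("To"))              # ['alice@example.com', 'bob@example.com']
print(msg.get_all("Cc"))              # None (failobj defaults to None)
print(msg.get_all("Cc", failobj=[]))  # []
```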
doc_25639 |
Return the sum of the array elements over the given axis. Masked elements are set to 0 internally. Refer to numpy.sum for full documentation. See also numpy.ndarray.sum
corresponding function for ndarrays numpy.sum
equivalent function Examples >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
data=[[1, --, 3],
[--, 5, --],
[7, --, 9]],
mask=[[False, True, False],
[ True, False, True],
[False, True, False]],
fill_value=999999)
>>> x.sum()
25
>>> x.sum(axis=1)
masked_array(data=[4, 5, 16],
mask=[False, False, False],
fill_value=999999)
>>> x.sum(axis=0)
masked_array(data=[8, 5, 12],
mask=[False, False, False],
fill_value=999999)
>>> print(type(x.sum(axis=0, dtype=np.int64)[0]))
<class 'numpy.int64'> | |
doc_25640 | Redirects to get_success_url(). | |
doc_25641 | See Migration guide for more details. tf.compat.v1.raw_ops.ExperimentalDenseToSparseBatchDataset
tf.raw_ops.ExperimentalDenseToSparseBatchDataset(
input_dataset, batch_size, row_shape, output_types, output_shapes, name=None
)
Args
input_dataset A Tensor of type variant. A handle to an input dataset. Must have a single component.
batch_size A Tensor of type int64. A scalar representing the number of elements to accumulate in a batch.
row_shape A Tensor of type int64. A vector representing the dense shape of each row in the produced SparseTensor. The shape may be partially specified, using -1 to indicate that a particular dimension should use the maximum size of all batch elements.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_25642 |
Return the complex conjugate, element-wise. Refer to numpy.conjugate for full documentation. See also numpy.conjugate
equivalent function | |
doc_25643 | Get a format string for time.strftime() to represent a date in a locale-specific era-based way. | |
doc_25644 |
Transformer is a special type of interpreter that produces a new Module. It exposes a transform() method that returns the transformed Module. Transformer does not require arguments to run, as Interpreter does. Transformer works entirely symbolically. Example Suppose we want to swap all instances of torch.neg with torch.sigmoid and vice versa (including their Tensor method equivalents). We could subclass Transformer like so: class NegSigmSwapXformer(Transformer):
    def call_function(self, target: 'Target', args: Tuple[Argument, ...], kwargs: Dict[str, Any]) -> Any:
        if target == torch.sigmoid:
            return torch.neg(*args, **kwargs)
        return super().call_function(target, args, kwargs)

    def call_method(self, target: 'Target', args: Tuple[Argument, ...], kwargs: Dict[str, Any]) -> Any:
        if target == 'neg':
            call_self, *args_tail = args
            return call_self.sigmoid(*args_tail, **kwargs)
        return super().call_method(target, args, kwargs)

def fn(x):
    return torch.sigmoid(x).neg()

gm = torch.fx.symbolic_trace(fn)
transformed: torch.nn.Module = NegSigmSwapXformer(gm).transform()
input = torch.randn(3, 4)
torch.testing.assert_allclose(transformed(input), torch.neg(input).sigmoid())
Parameters
module (GraphModule) – The Module to be transformed.
get_attr(target, args, kwargs) [source]
Execute a get_attr node. In Transformer, this is overridden to insert a new get_attr node into the output graph. Parameters
target (Target) – The call target for this node. See Node for details on semantics
args (Tuple) – Tuple of positional args for this invocation
kwargs (Dict) – Dict of keyword arguments for this invocation
placeholder(target, args, kwargs) [source]
Execute a placeholder node. In Transformer, this is overridden to insert a new placeholder into the output graph. Parameters
target (Target) – The call target for this node. See Node for details on semantics
args (Tuple) – Tuple of positional args for this invocation
kwargs (Dict) – Dict of keyword arguments for this invocation
transform() [source]
Transform self.module and return the transformed GraphModule. | |
doc_25645 | A tuple of two strings: the first is the name of the local non-DST timezone, the second is the name of the local DST timezone. If no DST timezone is defined, the second string should not be used. See note below. | |
doc_25646 | Counts the number of non-zero values in the tensor input along the given dim. If no dim is specified then all non-zeros in the tensor are counted. Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints, optional) – Dim or tuple of dims along which to count non-zeros. Example: >>> x = torch.zeros(3,3)
>>> x[torch.randn(3,3) > 0.5] = 1
>>> x
tensor([[0., 1., 1.],
[0., 0., 0.],
[0., 0., 1.]])
>>> torch.count_nonzero(x)
tensor(3)
>>> torch.count_nonzero(x, dim=0)
tensor([0, 1, 2]) | |
doc_25647 |
For each element in self, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. See also char.title | |
doc_25648 |
Default of toggled state. | |
doc_25649 | Called at the start of a CDATA section. This and EndCdataSectionHandler are needed to be able to identify the syntactical start and end for CDATA sections. | |
doc_25650 |
Find artist objects. Recursively find all Artist instances contained in the artist. Parameters
match
A filter criterion for the matches. This can be
None: return all objects contained in artist.
A function with signature def match(artist: Artist) -> bool: the result will only contain artists for which the function returns True.
A class instance, e.g. Line2D: the result will only contain artists of this class or its subclasses (isinstance check).
include_selfbool
Include self in the list to be checked for a match. Returns
list of Artist | |
doc_25651 |
Filter an image with the Hybrid Hessian filter. This filter can be used to detect continuous edges, e.g. vessels, wrinkles, rivers. It can be used to calculate the fraction of the whole image containing such objects. Defined only for 2-D and 3-D images. Almost equal to Frangi filter, but uses alternative method of smoothing. Refer to [1] to find the differences between Frangi and Hessian filters. Parameters
image : (N, M[, P]) ndarray
Array with input image data.
sigmas : iterable of floats, optional
Sigmas used as scales of filter, i.e., np.arange(scale_range[0], scale_range[1], scale_step)
scale_range : 2-tuple of floats, optional
The range of sigmas used.
scale_step : float, optional
Step size between sigmas.
beta : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to deviation from a blob-like structure.
gamma : float, optional
Frangi correction constant that adjusts the filter’s sensitivity to areas of high variance/texture/structure.
black_ridges : boolean, optional
When True (the default), the filter detects black ridges; when False, it detects white ridges.
mode : {‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cval : float, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries. Returns
out : (N, M[, P]) ndarray
Filtered image (maximum of pixels across all scales). See also
meijering
sato
frangi
Notes Written by Marc Schrijver (November 2001) Re-Written by D. J. Kroon University of Twente (May 2009) [2] References
1
Ng, C. C., Yap, M. H., Costen, N., & Li, B. (2014). Automatic wrinkle detection using hybrid Hessian filter. In Asian Conference on Computer Vision (pp. 609-622). Springer International Publishing. DOI:10.1007/978-3-319-16811-1_40
2
Kroon, D. J.: Hessian based Frangi vesselness filter. | |
doc_25652 | Cast a memoryview to a new format or shape. shape defaults to [byte_length//new_itemsize], which means that the result view will be one-dimensional. The return value is a new memoryview, but the buffer itself is not copied. Supported casts are 1D -> C-contiguous and C-contiguous -> 1D. The destination format is restricted to a single element native format in struct syntax. One of the formats must be a byte format (‘B’, ‘b’ or ‘c’). The byte length of the result must be the same as the original length. Cast 1D/long to 1D/unsigned bytes: >>> import array
>>> a = array.array('l', [1,2,3])
>>> x = memoryview(a)
>>> x.format
'l'
>>> x.itemsize
8
>>> len(x)
3
>>> x.nbytes
24
>>> y = x.cast('B')
>>> y.format
'B'
>>> y.itemsize
1
>>> len(y)
24
>>> y.nbytes
24
Cast 1D/unsigned bytes to 1D/char: >>> b = bytearray(b'zyz')
>>> x = memoryview(b)
>>> x[0] = b'a'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: memoryview: invalid value for format "B"
>>> y = x.cast('c')
>>> y[0] = b'a'
>>> b
bytearray(b'ayz')
Cast 1D/bytes to 3D/ints to 1D/signed char: >>> import struct
>>> buf = struct.pack("i"*12, *list(range(12)))
>>> x = memoryview(buf)
>>> y = x.cast('i', shape=[2,2,3])
>>> y.tolist()
[[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]]
>>> y.format
'i'
>>> y.itemsize
4
>>> len(y)
2
>>> y.nbytes
48
>>> z = y.cast('b')
>>> z.format
'b'
>>> z.itemsize
1
>>> len(z)
48
>>> z.nbytes
48
Cast 1D/unsigned long to 2D/unsigned long: >>> buf = struct.pack("L"*6, *list(range(6)))
>>> x = memoryview(buf)
>>> y = x.cast('L', shape=[2,3])
>>> len(y)
2
>>> y.nbytes
48
>>> y.tolist()
[[0, 1, 2], [3, 4, 5]]
New in version 3.3. Changed in version 3.5: The source format is no longer restricted when casting to a byte view. | |
doc_25653 | A Future-like object that runs a Python coroutine. Not thread-safe. Tasks are used to run coroutines in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the completion of the Future. When the Future is done, the execution of the wrapped coroutine resumes. Event loops use cooperative scheduling: an event loop runs one Task at a time. While a Task awaits for the completion of a Future, the event loop runs other Tasks, callbacks, or performs IO operations. Use the high-level asyncio.create_task() function to create Tasks, or the low-level loop.create_task() or ensure_future() functions. Manual instantiation of Tasks is discouraged. To cancel a running Task use the cancel() method. Calling it will cause the Task to throw a CancelledError exception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled. cancelled() can be used to check if the Task was cancelled. The method returns True if the wrapped coroutine did not suppress the CancelledError exception and was actually cancelled. asyncio.Task inherits from Future all of its APIs except Future.set_result() and Future.set_exception(). Tasks support the contextvars module. When a Task is created it copies the current context and later runs its coroutine in the copied context. Changed in version 3.7: Added support for the contextvars module. Changed in version 3.8: Added the name parameter. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter.
cancel(msg=None)
Request the Task to be cancelled. This arranges for a CancelledError exception to be thrown into the wrapped coroutine on the next cycle of the event loop. The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try … except CancelledError … finally block. Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. Changed in version 3.9: Added the msg parameter. The following example illustrates how coroutines can intercept the cancellation request: async def cancel_me():
print('cancel_me(): before sleep')
try:
# Wait for 1 hour
await asyncio.sleep(3600)
except asyncio.CancelledError:
print('cancel_me(): cancel sleep')
raise
finally:
print('cancel_me(): after sleep')
async def main():
# Create a "cancel_me" Task
task = asyncio.create_task(cancel_me())
# Wait for 1 second
await asyncio.sleep(1)
task.cancel()
try:
await task
except asyncio.CancelledError:
print("main(): cancel_me is cancelled now")
asyncio.run(main())
# Expected output:
#
# cancel_me(): before sleep
# cancel_me(): cancel sleep
# cancel_me(): after sleep
# main(): cancel_me is cancelled now
cancelled()
Return True if the Task is cancelled. The Task is cancelled when the cancellation was requested with cancel() and the wrapped coroutine propagated the CancelledError exception thrown into it.
done()
Return True if the Task is done. A Task is done when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled.
result()
Return the result of the Task. If the Task is done, the result of the wrapped coroutine is returned (or if the coroutine raised an exception, that exception is re-raised). If the Task has been cancelled, this method raises a CancelledError exception. If the Task’s result isn’t yet available, this method raises an InvalidStateError exception.
exception()
Return the exception of the Task. If the wrapped coroutine raised an exception that exception is returned. If the wrapped coroutine returned normally this method returns None. If the Task has been cancelled, this method raises a CancelledError exception. If the Task isn’t done yet, this method raises an InvalidStateError exception.
add_done_callback(callback, *, context=None)
Add a callback to be run when the Task is done. This method should only be used in low-level callback-based code. See the documentation of Future.add_done_callback() for more details.
remove_done_callback(callback)
Remove callback from the callbacks list. This method should only be used in low-level callback-based code. See the documentation of Future.remove_done_callback() for more details.
get_stack(*, limit=None)
Return the list of stack frames for this Task. If the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames. The frames are always ordered from oldest to newest. Only one stack frame is returned for a suspended coroutine. The optional limit argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.)
print_stack(*, limit=None, file=None)
Print the stack or traceback for this Task. This produces output similar to that of the traceback module for the frames retrieved by get_stack(). The limit argument is passed to get_stack() directly. The file argument is an I/O stream to which the output is written; by default output is written to sys.stderr.
get_coro()
Return the coroutine object wrapped by the Task. New in version 3.8.
get_name()
Return the name of the Task. If no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation. New in version 3.8.
set_name(value)
Set the name of the Task. The value argument can be any object, which is then converted to a string. In the default Task implementation, the name will be visible in the repr() output of a task object. New in version 3.8. | |
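The introspection methods above can be combined into a short runnable sketch (standard library only; the name argument to create_task() requires Python 3.8+):

```python
import asyncio

# Create a named Task, let it finish, then inspect it with the
# get_name()/done()/cancelled() methods documented above.
async def work():
    await asyncio.sleep(0)
    return 42

async def main():
    task = asyncio.create_task(work(), name="worker-1")
    result = await task
    return task.get_name(), task.done(), task.cancelled(), result

print(asyncio.run(main()))  # ('worker-1', True, False, 42)
```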
doc_25654 |
Roll provided date forward to next offset only if not on offset. Returns
TimeStamp
Rolled timestamp if not on offset, otherwise unchanged timestamp. | |
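An illustrative pure-Python analogue of this behavior (not the pandas implementation), using "is a Monday" as the hypothetical offset: the date is moved forward only if it is not already on the offset.

```python
from datetime import date, timedelta

def rollforward_to_monday(d):
    # Already "on offset" (a Monday): return the date unchanged.
    if d.weekday() == 0:
        return d
    # Otherwise roll forward to the next Monday.
    return d + timedelta(days=7 - d.weekday())

print(rollforward_to_monday(date(2021, 6, 2)))  # Wednesday -> 2021-06-07
print(rollforward_to_monday(date(2021, 6, 7)))  # already Monday -> unchanged
```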
doc_25655 | Set the current process’s real, effective, and saved user ids. Availability: Unix. New in version 3.2. | |
doc_25656 | Return the day of the week as an integer, where Monday is 1 and Sunday is 7. For example, date(2002, 12, 4).isoweekday() == 3, a Wednesday. See also weekday(), isocalendar(). | |
doc_25657 | Checks if the function accepts the arguments and keyword arguments. Returns a new (args, kwargs) tuple that can safely be passed to the function without causing a TypeError because the function signature is incompatible. If drop_extra is set to True (which is the default) any extra positional or keyword arguments are dropped automatically. The exception raised provides three attributes:
missing
A set of argument names that the function expected but were missing.
extra
A dict of keyword arguments that the function cannot handle but were provided.
extra_positional
A list of values that were given by positional argument but the function cannot accept. This can be useful for decorators that forward user submitted data to a view function: from werkzeug.utils import ArgumentValidationError, validate_arguments
def sanitize(f):
def proxy(request):
data = request.values.to_dict()
try:
args, kwargs = validate_arguments(f, (request,), data)
except ArgumentValidationError:
raise BadRequest('The browser failed to transmit all '
'the data expected.')
return f(*args, **kwargs)
return proxy
Parameters
func – the function the validation is performed against.
args – a tuple of positional arguments.
kwargs – a dict of keyword arguments.
drop_extra – set to False if you don’t want extra arguments to be silently dropped. Returns
tuple in the form (args, kwargs). Deprecated since version 2.0: Will be removed in Werkzeug 2.1. Use inspect.signature() instead. | |
doc_25658 |
Transform binary labels back to multi-class labels. Parameters
Y : {ndarray, sparse matrix} of shape (n_samples, n_classes)
Target values. All sparse matrices are converted to CSR before inverse transformation.
threshold : float, default=None
Threshold used in the binary and multi-label cases. Use 0 when Y contains the output of decision_function (classifier). Use 0.5 when Y contains the output of predict_proba. If None, the threshold is assumed to be half way between neg_label and pos_label. Returns
y : {ndarray, sparse matrix} of shape (n_samples,)
Target values. Sparse matrix will be of CSR format. Notes In the case when the binary labels are fractional (probabilistic), inverse_transform chooses the class with the greatest value. Typically, this allows to use the output of a linear model’s decision_function method directly as the input of inverse_transform. | |
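A pure-Python sketch (not the sklearn implementation) of the thresholding step described above, for the binary case with neg_label=0 and pos_label=1, where the default threshold is half way between the two labels:

```python
def binarize_inverse(scores, threshold=0.5, neg_label=0, pos_label=1):
    # Scores above the threshold map to pos_label, the rest to neg_label.
    return [pos_label if s > threshold else neg_label for s in scores]

print(binarize_inverse([0.1, 0.7, 0.5, 0.9]))  # [0, 1, 0, 1]
```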
doc_25659 | Returns a new instance of the FileHandler class. The specified file is opened and used as the stream for logging. If mode is not specified, 'a' is used. If encoding is not None, it is used to open the file with that encoding. If delay is true, then file opening is deferred until the first call to emit(). By default, the file grows indefinitely. If errors is specified, it’s used to determine how encoding errors are handled. Changed in version 3.6: As well as string values, Path objects are also accepted for the filename argument. Changed in version 3.9: The errors parameter was added.
close()
Closes the file.
emit(record)
Outputs the record to the file. | |
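A minimal runnable sketch of the handler in use: log to a temporary file, close the handler, and read the file back.

```python
import logging
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.log")
handler = logging.FileHandler(path, mode="w", encoding="utf-8")
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))

logger = logging.getLogger("filehandler-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)
logger.info("started")
handler.close()

with open(path, encoding="utf-8") as f:
    print(f.read())  # INFO:started
```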
doc_25660 | See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.load_img
tf.keras.preprocessing.image.load_img(
path, grayscale=False, color_mode='rgb', target_size=None,
interpolation='nearest'
)
Usage: image = tf.keras.preprocessing.image.load_img(image_path)
input_arr = keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
predictions = model.predict(input_arr)
Arguments
path Path to image file.
grayscale DEPRECATED use color_mode="grayscale".
color_mode One of "grayscale", "rgb", "rgba". Default: "rgb". The desired image format.
target_size Either None (default to original size) or tuple of ints (img_height, img_width).
interpolation Interpolation method used to resample the image if the target size is different from that of the loaded image. Supported methods are "nearest", "bilinear", and "bicubic". If PIL version 1.1.3 or newer is installed, "lanczos" is also supported. If PIL version 3.4.0 or newer is installed, "box" and "hamming" are also supported. By default, "nearest" is used.
Returns A PIL Image instance.
Raises
ImportError if PIL is not available.
ValueError if interpolation method is not supported. | |
doc_25661 | Set the ForeignKey to the value passed to SET(), or if a callable is passed in, the result of calling it. In most cases, passing a callable will be necessary to avoid executing queries at the time your models.py is imported: from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import models
def get_sentinel_user():
return get_user_model().objects.get_or_create(username='deleted')[0]
class MyModel(models.Model):
user = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.SET(get_sentinel_user),
) | |
doc_25662 | The ContentType of the modified object. | |
doc_25663 |
Set the agg filter. Parameters
filter_func : callable
A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array. | |
doc_25664 |
Return a dictionary mapping property name -> value. | |
doc_25665 |
Opposite of the value of X on the K-means objective. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
New data.
y : Ignored
Not used, present here for API consistency by convention.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
score : float
Opposite of the value of X on the K-means objective.
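A pure-Python sketch (not the sklearn implementation) of what this computes: the K-means objective is the inertia, i.e. the sum of squared distances of each sample to its closest cluster center, and score returns its negative so that higher is better.

```python
def kmeans_score(X, centers):
    # Accumulate, per sample, the squared distance to the nearest center.
    inertia = 0.0
    for x in X:
        inertia += min(
            sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers
        )
    return -inertia  # "opposite of the value of X on the K-means objective"

X = [[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]]
centers = [[0.5, 0.0], [10.0, 0.0]]
print(kmeans_score(X, centers))  # -0.5
```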
doc_25666 | Given a string representing one Unicode character, return an integer representing the Unicode code point of that character. For example, ord('a') returns the integer 97 and ord('€') (Euro sign) returns 8364. This is the inverse of chr(). | |
doc_25667 | sklearn.model_selection.permutation_test_score(estimator, X, y, *, groups=None, cv=None, n_permutations=100, n_jobs=None, random_state=0, verbose=0, scoring=None, fit_params=None) [source]
Evaluate the significance of a cross-validated score with permutations Permutes targets to generate ‘randomized data’ and compute the empirical p-value against the null hypothesis that features and targets are independent. The p-value represents the fraction of randomized data sets where the estimator performed as well or better than in the original data. A small p-value suggests that there is a real dependency between features and targets which has been used by the estimator to give good predictions. A large p-value may be due to lack of real dependency between features and targets or the estimator was not able to use the dependency to give good predictions. Read more in the User Guide. Parameters
estimator : estimator object implementing ‘fit’
The object to use to fit the data.
X : array-like of shape at least 2D
The data to fit.
y : array-like of shape (n_samples,) or (n_samples, n_outputs) or None
The target variable to try to predict in the case of supervised learning.
groups : array-like of shape (n_samples,), default=None
Labels to constrain permutation within groups, i.e. y values are permuted among samples with the same group identifier. When not specified, y values are permuted among all samples. When a grouped cross-validator is used, the group labels are also passed on to the split method of the cross-validator. The cross-validator uses them for grouping the samples while splitting the dataset into train/test set.
scoring : str or callable, default=None
A single str (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions) to evaluate the predictions on the test set. If None the estimator’s score method is used.
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross validation; int, to specify the number of folds in a (Stratified)KFold;
a CV splitter; an iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
n_permutations : int, default=100
Number of times to permute y.
n_jobs : int, default=None
Number of jobs to run in parallel. Training the estimator and computing the cross-validated score are parallelized over the permutations. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_state : int, RandomState instance or None, default=0
Pass an int for reproducible output for permutation of y values among samples. See Glossary.
verbose : int, default=0
The verbosity level.
fit_params : dict, default=None
Parameters to pass to the fit method of the estimator. New in version 0.24. Returns
score : float
The true score without permuting targets.
permutation_scores : array of shape (n_permutations,)
The scores obtained for each permutation.
pvalue : float
The p-value, which approximates the probability that the score would be obtained by chance. This is calculated as: (C + 1) / (n_permutations + 1), where C is the number of permutations whose score >= the true score. The best possible p-value is 1/(n_permutations + 1), the worst is 1.0. Notes This function implements Test 1 in: Ojala and Garriga. Permutation Tests for Studying Classifier Performance. The Journal of Machine Learning Research (2010) vol. 11
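The p-value formula above can be sketched directly in pure Python (illustrative only; the scores themselves would come from the cross-validated estimator):

```python
def permutation_pvalue(true_score, permutation_scores):
    # C = number of permutation scores that reach or beat the true score.
    c = sum(1 for s in permutation_scores if s >= true_score)
    return (c + 1) / (len(permutation_scores) + 1)

perm_scores = [0.50, 0.55, 0.48, 0.91, 0.52]
print(permutation_pvalue(0.90, perm_scores))  # (1 + 1) / (5 + 1) = 0.333...
```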
doc_25668 | Close the event loop. The loop must not be running when this function is called. Any pending callbacks will be discarded. This method clears all queues and shuts down the executor, but does not wait for the executor to finish. This method is idempotent and irreversible. No other methods should be called after the event loop is closed. | |
doc_25669 |
Scale each feature by its maximum absolute value. This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity. This scaler can also be applied to sparse CSR or CSC matrices. New in version 0.17. Parameters
copy : bool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array). Attributes
scale_ : ndarray of shape (n_features,)
Per feature relative scaling of the data. New in version 0.17: scale_ attribute.
max_abs_ : ndarray of shape (n_features,)
Per feature maximum absolute value.
n_samples_seen_ : int
The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls. See also
maxabs_scale
Equivalent function without the estimator API. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Examples >>> from sklearn.preprocessing import MaxAbsScaler
>>> X = [[ 1., -1., 2.],
... [ 2., 0., 0.],
... [ 0., 1., -1.]]
>>> transformer = MaxAbsScaler().fit(X)
>>> transformer
MaxAbsScaler()
>>> transformer.transform(X)
array([[ 0.5, -1. , 1. ],
[ 1. , 0. , 0. ],
[ 0. , 1. , -0.5]])
Methods
fit(X[, y]) Compute the maximum absolute value to be used for later scaling.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Scale back the data to the original representation
partial_fit(X[, y]) Online computation of max absolute value of X for later scaling.
set_params(**params) Set the parameters of this estimator.
transform(X) Scale the data
fit(X, y=None) [source]
Compute the maximum absolute value to be used for later scaling. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the per-feature minimum and maximum used for later scaling along the features axis.
y : None
Ignored. Returns
self : object
Fitted scaler.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
X : array-like of shape (n_samples, n_features)
Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_params : dict
Additional fit parameters. Returns
X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
inverse_transform(X) [source]
Scale back the data to the original representation Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data that should be transformed back. Returns
X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
partial_fit(X, y=None) [source]
Online computation of max absolute value of X for later scaling. All of X is processed as a single batch. This is intended for cases when fit is not feasible due to very large number of n_samples or because X is read from a continuous stream. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the mean and standard deviation used for later scaling along the features axis.
y : None
Ignored. Returns
self : object
Fitted scaler.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance.
transform(X) [source]
Scale the data Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data that should be scaled. Returns
X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
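The fit/transform arithmetic can be illustrated with a pure-Python sketch (not the sklearn implementation; the all-zero-column guard here is an assumption of this sketch), reproducing the array from the Examples section above:

```python
def max_abs_fit_transform(X):
    # Per column, divide by the maximum absolute value seen in the data
    # (falling back to 1.0 for an all-zero column to avoid division by zero).
    n_features = len(X[0])
    scale = [max(abs(row[j]) for row in X) or 1.0 for j in range(n_features)]
    return [[row[j] / scale[j] for j in range(n_features)] for row in X]

X = [[1.0, -1.0, 2.0], [2.0, 0.0, 0.0], [0.0, 1.0, -1.0]]
print(max_abs_fit_transform(X))
# [[0.5, -1.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, -0.5]]
```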
doc_25670 | Create a filesystem node (file, device special file or named pipe) named path. mode specifies both the permissions to use and the type of node to be created, being combined (bitwise OR) with one of stat.S_IFREG, stat.S_IFCHR, stat.S_IFBLK, and stat.S_IFIFO (those constants are available in stat). For stat.S_IFCHR and stat.S_IFBLK, device defines the newly created device special file (probably using os.makedev()), otherwise it is ignored. This function can also support paths relative to directory descriptors. Availability: Unix. New in version 3.3: The dir_fd argument. Changed in version 3.6: Accepts a path-like object. | |
doc_25671 | If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal. If input is a tensor with more than one dimension, then returns a 2-D tensor with diagonal elements equal to a flattened input. The argument offset controls which diagonal to consider: If offset = 0, it is the main diagonal. If offset > 0, it is above the main diagonal. If offset < 0, it is below the main diagonal. Parameters
input (Tensor) – the input tensor.
offset (int, optional) – the diagonal to consider. Default: 0 (main diagonal). Examples: >>> a = torch.randn(3)
>>> a
tensor([-0.2956, -0.9068, 0.1695])
>>> torch.diagflat(a)
tensor([[-0.2956, 0.0000, 0.0000],
[ 0.0000, -0.9068, 0.0000],
[ 0.0000, 0.0000, 0.1695]])
>>> torch.diagflat(a, 1)
tensor([[ 0.0000, -0.2956, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.9068, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.1695],
[ 0.0000, 0.0000, 0.0000, 0.0000]])
>>> a = torch.randn(2, 2)
>>> a
tensor([[ 0.2094, -0.3018],
[-0.1516, 1.9342]])
>>> torch.diagflat(a)
tensor([[ 0.2094, 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.3018, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.1516, 0.0000],
[ 0.0000, 0.0000, 0.0000, 1.9342]]) | |
doc_25672 |
Computes and returns a mask for the input tensor t. Starting from a base default_mask (which should be a mask of ones if the tensor has not been pruned yet), generate a random mask to apply on top of the default_mask according to the specific pruning method recipe. Parameters
t (torch.Tensor) – tensor representing the importance scores of the parameter to prune.
default_mask (torch.Tensor) – base mask from previous pruning iterations that needs to be respected after the new mask is applied. Same dims as t. Returns
mask to apply to t, of same dims as t Return type
mask (torch.Tensor)
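An illustrative pure-Python sketch (not the torch implementation) of one such pruning recipe, random unstructured pruning: starting from the base default_mask, a given fraction of the still-unpruned entries is zeroed out at random.

```python
import random

def random_unstructured_mask(default_mask, amount, seed=0):
    # Only entries still at 1 in the base mask are candidates for pruning.
    rng = random.Random(seed)
    unpruned = [i for i, m in enumerate(default_mask) if m == 1]
    to_prune = set(rng.sample(unpruned, int(amount * len(unpruned))))
    return [0 if i in to_prune else m for i, m in enumerate(default_mask)]

mask = random_unstructured_mask([1, 1, 1, 1, 1, 1, 1, 1], amount=0.5)
print(sum(mask))  # 4 entries remain unpruned
```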
doc_25673 |
Keymap to associate with this tool. list[str]: List of keys that will trigger this tool when a keypress event is emitted on self.figure.canvas. | |
doc_25674 |
Create a memory-map to an array stored in a binary file on disk. Memory-mapped files are used for accessing small segments of large files on disk, without reading the entire file into memory. NumPy’s memmap’s are array-like objects. This differs from Python’s mmap module, which uses file-like objects. This subclass of ndarray has some unpleasant interactions with some operations, because it doesn’t quite fit properly as a subclass. An alternative to using this subclass is to create the mmap object yourself, then create an ndarray with ndarray.__new__ directly, passing the object created in its ‘buffer=’ parameter. This class may at some point be turned into a factory function which returns a view into an mmap buffer. Flush the memmap instance to write the changes to the file. Currently there is no API to close the underlying mmap. It is tricky to ensure the resource is actually closed, since it may be shared between different memmap instances. Parameters
filename : str, file-like object, or pathlib.Path instance
The file name or file object to be used as the array data buffer.
dtype : data-type, optional
The data-type used to interpret the file contents. Default is uint8.
mode : {‘r+’, ‘r’, ‘w+’, ‘c’}, optional
The file is opened in this mode:
‘r’ Open existing file for reading only.
‘r+’ Open existing file for reading and writing.
‘w+’ Create or overwrite existing file for reading and writing.
‘c’ Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only. Default is ‘r+’.
offset : int, optional
In the file, array data starts at this offset. Since offset is measured in bytes, it should normally be a multiple of the byte-size of dtype. When mode != 'r', even positive offsets beyond end of file are valid; the file will be extended to accommodate the additional data. By default, memmap will start at the beginning of the file, even if filename is a file pointer fp and fp.tell() != 0.
shape : tuple, optional
The desired shape of the array. If mode == 'r' and the number of remaining bytes after offset is not a multiple of the byte-size of dtype, you must specify shape. By default, the returned array will be 1-D with the number of elements determined by file size and data-type.
order : {‘C’, ‘F’}, optional
Specify the order of the ndarray memory layout: row-major, C-style or column-major, Fortran-style. This only has an effect if the shape is greater than 1-D. The default order is ‘C’. See also lib.format.open_memmap
Create or load a memory-mapped .npy file. Notes The memmap object can be used anywhere an ndarray is accepted. Given a memmap fp, isinstance(fp, numpy.ndarray) returns True. Memory-mapped files cannot be larger than 2GB on 32-bit systems. When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes. Examples >>> data = np.arange(12, dtype='float32')
>>> data.resize((3,4))
This example uses a temporary file so that doctest doesn’t write files to your directory. You would use a ‘normal’ filename. >>> from tempfile import mkdtemp
>>> import os.path as path
>>> filename = path.join(mkdtemp(), 'newfile.dat')
Create a memmap with dtype and shape that matches our data: >>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4))
>>> fp
memmap([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]], dtype=float32)
Write data to memmap array: >>> fp[:] = data[:]
>>> fp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fp.filename == path.abspath(filename)
True
Flushes memory changes to disk in order to read them back >>> fp.flush()
Load the memmap and verify data was stored: >>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> newfp
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Read-only memmap: >>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4))
>>> fpr.flags.writeable
False
Copy-on-write memmap: >>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4))
>>> fpc.flags.writeable
True
It’s possible to assign to copy-on-write array, but values are only written into the memory copy of the array, and not written to disk: >>> fpc
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
>>> fpc[0,:] = 0
>>> fpc
memmap([[ 0., 0., 0., 0.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
File on disk is unchanged: >>> fpr
memmap([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]], dtype=float32)
Offset into a memmap: >>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16)
>>> fpo
memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32)
Attributes
filename : str or pathlib.Path instance
Path to the mapped file.
offset : int
Offset position in the file.
mode : str
File mode. Methods
flush() Write any changes in the array to the file on disk. | |
doc_25675 |
Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate. | |
doc_25676 |
Return self!=value. | |
doc_25677 | See Migration guide for more details. tf.compat.v1.raw_ops.BoostedTreesPredict
tf.raw_ops.BoostedTreesPredict(
tree_ensemble_handle, bucketized_features, logits_dimension, name=None
)
Computes the logits. It is designed to be used during prediction. It traverses all the trees and calculates the final score for each instance.
Args
tree_ensemble_handle A Tensor of type resource.
bucketized_features A list of at least 1 Tensor objects with type int32. A list of rank 1 Tensors containing bucket id for each feature.
logits_dimension An int. scalar, dimension of the logits, to be used for partial logits shape.
name A name for the operation (optional).
Returns A Tensor of type float32. | |
doc_25678 |
Return total duration of each element expressed in seconds. This method is available directly on TimedeltaArray, TimedeltaIndex and on Series containing timedelta values under the .dt namespace. Returns
seconds : ndarray, Float64Index, or Series
When the calling object is a TimedeltaArray, the return type is ndarray. When the calling object is a TimedeltaIndex, the return type is a Float64Index. When the calling object is a Series, the return type is Series of type float64 whose index is the same as the original. See also datetime.timedelta.total_seconds
Standard library version of this method. TimedeltaIndex.components
Return a DataFrame with components of each Timedelta. Examples Series
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='d'))
>>> s
0 0 days
1 1 days
2 2 days
3 3 days
4 4 days
dtype: timedelta64[ns]
>>> s.dt.total_seconds()
0 0.0
1 86400.0
2 172800.0
3 259200.0
4 345600.0
dtype: float64
TimedeltaIndex
>>> idx = pd.to_timedelta(np.arange(5), unit='d')
>>> idx
TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq=None)
>>> idx.total_seconds()
Float64Index([0.0, 86400.0, 172800.0, 259200.00000000003, 345600.0],
dtype='float64') | |
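The standard-library counterpart referenced in the See also section, runnable:

```python
from datetime import timedelta

# datetime.timedelta.total_seconds(), the stdlib version of this method,
# returns the same per-element values as the pandas examples above.
print(timedelta(days=1).total_seconds())  # 86400.0
print(timedelta(days=4).total_seconds())  # 345600.0
```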
doc_25679 | turtle.pu()
turtle.up()
Pull the pen up – no drawing when moving. | |
doc_25680 |
The masked constant is a special case of MaskedArray, with a float datatype and a null shape. It is used to test whether a specific entry of a masked array is masked, or to mask one or several entries of a masked array: >>> x = ma.array([1, 2, 3], mask=[0, 1, 0])
>>> x[1] is ma.masked
True
>>> x[-1] = ma.masked
>>> x
masked_array(data=[1, --, --],
mask=[False, True, True],
fill_value=999999)
numpy.ma.nomask
Value indicating that a masked array has no invalid entry. nomask is used internally to speed up computations when the mask is not needed. It is represented internally as np.False_.
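A minimal sketch of both constants in use, relying only on the behavior described here:
```python
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0])
print(x[1] is ma.masked)       # test whether a specific entry is masked
x[0] = ma.masked               # mask an entry by assignment
print(x.mask.tolist())         # entries 0 and 1 are now masked

y = ma.array([1, 2, 3])        # no mask supplied
print(y.mask is ma.nomask)     # the mask defaults to nomask
print(bool(ma.nomask))         # nomask is falsy (np.False_)
```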
numpy.ma.masked_print_option
String used in lieu of missing data when a masked array is printed. By default, this string is '--'. | |
doc_25681 |
Attach the plugin to an ImageViewer. Note that the ImageViewer will automatically call this method when the plugin is added to the ImageViewer. For example: viewer += Plugin(...)
Also note that attach automatically calls the filter function so that the image matches the filtered value specified by attached widgets. | |
doc_25682 | Takes an instance of Form and the name of the field. The return value will be used when accessing the field in a template. Most likely it will be an instance of a subclass of BoundField. | |
doc_25683 |
Add an AxesBase to the Axes' children; return the child Axes. This is the low-level version. See axes.Axes.inset_axes. | |
doc_25684 | sklearn.neighbors.kneighbors_graph(X, n_neighbors, *, mode='connectivity', metric='minkowski', p=2, metric_params=None, include_self=False, n_jobs=None) [source]
Computes the (weighted) graph of k-Neighbors for points in X. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features) or BallTree
Sample data, in the form of a numpy array or a precomputed BallTree.
n_neighborsint
Number of neighbors for each sample.
mode{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric.
metricstr, default=’minkowski’
The distance metric used to calculate the k-Neighbors for each sample point. The DistanceMetric class gives a list of available metrics. The default distance is ‘euclidean’ (the ‘minkowski’ metric with p equal to 2).
pint, default=2
Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
metric_paramsdict, default=None
Additional keyword arguments for the metric function.
include_selfbool or ‘auto’, default=False
Whether or not to mark each sample as the first nearest neighbor to itself. If ‘auto’, then True is used for mode=’connectivity’ and False for mode=’distance’.
n_jobsint, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Returns
Asparse matrix of shape (n_samples, n_samples)
Graph where A[i, j] is assigned the weight of the edge that connects i to j. The matrix is of CSR format. See also
radius_neighbors_graph
Examples >>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import kneighbors_graph
>>> A = kneighbors_graph(X, 2, mode='connectivity', include_self=True)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
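With mode='distance' the same data yields actual metric distances instead of ones (a sketch using only the call shown above):
>>> A = kneighbors_graph(X, 2, mode='distance', include_self=False)
>>> A.toarray()
array([[0., 3., 1.],
       [3., 0., 2.],
       [1., 2., 0.]])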
Examples using sklearn.neighbors.kneighbors_graph
Agglomerative clustering with and without structure
Hierarchical clustering: structured vs unstructured ward
Comparing different clustering algorithms on toy datasets | |
doc_25685 |
Plot a histogram. Compute and draw the histogram of x. The return value is a tuple (n, bins, patches) or ([n0, n1, ...], bins, [patches0, patches1, ...]) if the input contains multiple data. See the documentation of the weights parameter to draw a histogram of already-binned data. Multiple data can be provided via x as a list of datasets of potentially different length ([x0, x1, ...]), or as a 2D ndarray in which each column is a dataset. Note that the ndarray form is transposed relative to the list form. Masked arrays are not supported. The bins, range, weights, and density parameters behave as in numpy.histogram. Parameters
x(n,) array or sequence of (n,) arrays
Input values; this takes either a single array or a sequence of arrays that are not required to be of the same length.
binsint or sequence or str, default: rcParams["hist.bins"] (default: 10)
If bins is an integer, it defines the number of equal-width bins in the range. If bins is a sequence, it defines the bin edges, including the left edge of the first bin and the right edge of the last bin; in this case, bins may be unequally spaced. All but the last (righthand-most) bin is half-open. In other words, if bins is: [1, 2, 3, 4]
then the first bin is [1, 2) (including 1, but excluding 2) and the second [2, 3). The last bin, however, is [3, 4], which includes 4. If bins is a string, it is one of the binning strategies supported by numpy.histogram_bin_edges: 'auto', 'fd', 'doane', 'scott', 'stone', 'rice', 'sturges', or 'sqrt'.
rangetuple or None, default: None
The lower and upper range of the bins. Lower and upper outliers are ignored. If not provided, range is (x.min(), x.max()). Range has no effect if bins is a sequence. If bins is a sequence or range is specified, autoscaling is based on the specified bin range instead of the range of x.
densitybool, default: False
If True, draw and return a probability density: each bin will display the bin's raw count divided by the total number of counts and the bin width (density = counts / (sum(counts) * np.diff(bins))), so that the area under the histogram integrates to 1 (np.sum(density * np.diff(bins)) == 1). If stacked is also True, the sum of the histograms is normalized to 1.
weights(n,) array-like or None, default: None
An array of weights, of the same shape as x. Each value in x only contributes its associated weight towards the bin count (instead of 1). If density is True, the weights are normalized, so that the integral of the density over the range remains 1. This parameter can be used to draw a histogram of data that has already been binned, e.g. using numpy.histogram (by treating each bin as a single point with a weight equal to its count) counts, bins = np.histogram(data)
plt.hist(bins[:-1], bins, weights=counts)
(or you may alternatively use bar()).
cumulativebool or -1, default: False
If True, then a histogram is computed where each bin gives the counts in that bin plus all bins for smaller values. The last bin gives the total number of datapoints. If density is also True then the histogram is normalized such that the last bin equals 1. If cumulative is a number less than 0 (e.g., -1), the direction of accumulation is reversed. In this case, if density is also True, then the histogram is normalized such that the first bin equals 1.
bottomarray-like, scalar, or None, default: None
Location of the bottom of each bin, i.e. bins are drawn from bottom to bottom + hist(x, bins). If a scalar, the bottom of each bin is shifted by the same amount. If an array, each bin is shifted independently and the length of bottom must match the number of bins. If None, defaults to 0.
histtype{'bar', 'barstacked', 'step', 'stepfilled'}, default: 'bar'
The type of histogram to draw. 'bar' is a traditional bar-type histogram. If multiple data are given the bars are arranged side by side. 'barstacked' is a bar-type histogram where multiple data are stacked on top of each other. 'step' generates a lineplot that is by default unfilled. 'stepfilled' generates a lineplot that is by default filled.
align{'left', 'mid', 'right'}, default: 'mid'
The horizontal alignment of the histogram bars. 'left': bars are centered on the left bin edges. 'mid': bars are centered between the bin edges. 'right': bars are centered on the right bin edges.
orientation{'vertical', 'horizontal'}, default: 'vertical'
If 'horizontal', barh will be used for bar-type histograms and the bottom kwarg will be the left edges.
rwidthfloat or None, default: None
The relative width of the bars as a fraction of the bin width. If None, automatically compute the width. Ignored if histtype is 'step' or 'stepfilled'.
logbool, default: False
If True, the histogram axis will be set to a log scale.
colorcolor or array-like of colors or None, default: None
Color or sequence of colors, one per dataset. Default (None) uses the standard line color sequence.
labelstr or None, default: None
String, or sequence of strings to match multiple datasets. Bar charts yield multiple patches per dataset, but only the first gets the label, so that legend will work as expected.
stackedbool, default: False
If True, multiple data are stacked on top of each other. If False, multiple data are arranged side by side if histtype is 'bar', or on top of each other if histtype is 'step'. Returns
narray or list of arrays
The values of the histogram bins. See density and weights for a description of the possible semantics. If input x is an array, then this is an array of length nbins. If input is a sequence of arrays [data1, data2, ...], then this is a list of arrays with the values of the histograms for each of the arrays in the same order. The dtype of the array n (or of its element arrays) will always be float even if no weighting or normalization is used.
binsarray
The edges of the bins. Length nbins + 1 (nbins left edges and right edge of last bin). Always a single array even when multiple data sets are passed in.
patchesBarContainer or list of a single Polygon or list of such objects
Container of individual artists used to create the histogram or list of such containers if there are multiple input datasets. Other Parameters
dataindexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): x, weights **kwargs
Patch properties See also hist2d
2D histogram with rectangular bins hexbin
2D histogram with hexagonal bins Notes For large numbers of bins (>1000), 'step' and 'stepfilled' can be significantly faster than 'bar' and 'barstacked'.
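The weights trick for already-binned data described under the weights parameter can be sketched end to end (using the non-interactive Agg backend so no window is required):
```python
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

# Bin the data once with numpy...
counts, bin_edges = np.histogram(data, bins=20)

# ...then re-draw it with hist() by weighting one point per bin.
fig, ax = plt.subplots()
n, bins, patches = ax.hist(bin_edges[:-1], bin_edges, weights=counts)
assert np.allclose(n, counts)  # same histogram as the original binning
```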
Examples using matplotlib.pyplot.hist
Pyplot Text
Animated histogram
SVG Histogram
Pyplot tutorial
Image tutorial | |
doc_25686 |
A torch.nn.Conv2d module with lazy initialization of the in_channels argument of the Conv2d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
See also torch.nn.Conv2d and torch.nn.modules.lazy.LazyModuleMixin
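A minimal sketch of the lazy initialization (assuming a standard PyTorch install): the weight is only materialized on the first forward pass, and the module then becomes a regular Conv2d.
```python
import torch
import torch.nn as nn

# in_channels is still unknown; no weight is allocated yet.
conv = nn.LazyConv2d(out_channels=8, kernel_size=3)

x = torch.randn(2, 5, 16, 16)   # a batch with 5 input channels
y = conv(x)                     # first forward pass materializes the module

print(conv.in_channels)         # 5, inferred from input.size(1)
print(tuple(y.shape))           # (2, 8, 14, 14)
print(type(conv).__name__)      # Conv2d, via cls_to_become
```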
cls_to_become
alias of Conv2d | |
doc_25687 |
Bases: matplotlib.widgets.Widget Widget connected to a single Axes. To guarantee that the widget remains responsive and not garbage-collected, a reference to the object should be maintained by the user. This is necessary because the callback registry maintains only weak-refs to the functions, which are member functions of the widget. If there are no references to the widget object it may be garbage collected which will disconnect the callbacks. Attributes
axAxes
The parent axes for the widget.
canvasFigureCanvasBase
The parent figure canvas for the widget.
activebool
Is the widget active? propertycids[source]
connect_event(event, callback)[source]
Connect a callback function with an event. This should be used in lieu of figure.canvas.mpl_connect since this function stores callback ids for later clean up.
disconnect_events()[source]
Disconnect all events created by this widget.
classmatplotlib.widgets.Button(ax, label, image=None, color='0.85', hovercolor='0.95')[source]
Bases: matplotlib.widgets.AxesWidget A GUI neutral button. For the button to remain responsive you must keep a reference to it. Call on_clicked to connect to the button. Attributes
ax
The matplotlib.axes.Axes the button renders into. label
A matplotlib.text.Text instance. color
The color of the button when not hovering. hovercolor
The color of the button when hovering. Parameters
axAxes
The Axes instance the button will be placed into.
labelstr
The button text.
imagearray-like or PIL Image
The image to place in the button, if not None. The parameter is directly forwarded to imshow.
colorcolor
The color of the button when not activated.
hovercolorcolor
The color of the button when the mouse is over it. propertycnt[source]
disconnect(cid)[source]
Remove the callback function with connection id cid.
propertyobservers[source]
on_clicked(func)[source]
Connect the callback function func to button click events. Returns a connection id, which can be used to disconnect the callback.
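A minimal headless sketch of the reference-keeping and connect/disconnect pattern for Button (the Agg backend means no window is shown; button_ax position values are illustrative):
```python
import matplotlib
matplotlib.use("Agg")           # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.widgets import Button

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.2)
button_ax = fig.add_axes([0.4, 0.05, 0.2, 0.075])

# Keep a reference so the widget is not garbage collected.
button = Button(button_ax, "Go")
cid = button.on_clicked(lambda event: print("clicked"))

# The returned connection id can later remove the callback.
button.disconnect(cid)
```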
classmatplotlib.widgets.CheckButtons(ax, labels, actives=None)[source]
Bases: matplotlib.widgets.AxesWidget A GUI neutral set of check buttons. For the check buttons to remain responsive you must keep a reference to this object. Connect to the CheckButtons with the on_clicked method. Attributes
axAxes
The parent axes for the widget.
labelslist of Text
rectangleslist of Rectangle
lineslist of (Line2D, Line2D) pairs
List of lines for the x's in the check boxes. These lines exist for each box, but have set_visible(False) when its box is not checked. Add check buttons to matplotlib.axes.Axes instance ax. Parameters
axAxes
The parent axes for the widget.
labelslist of str
The labels of the check buttons.
activeslist of bool, optional
The initial check states of the buttons. The list must have the same length as labels. If not given, all buttons are unchecked. propertycnt[source]
disconnect(cid)[source]
Remove the observer with connection id cid.
get_status()[source]
Return a tuple of the status (True/False) of all of the check buttons.
propertyobservers[source]
on_clicked(func)[source]
Connect the callback function func to button click events. Returns a connection id, which can be used to disconnect the callback.
set_active(index)[source]
Toggle (activate or deactivate) a check button by index. Callbacks will be triggered if eventson is True. Parameters
indexint
Index of the check button to toggle. Raises
ValueError
If index is invalid.
classmatplotlib.widgets.Cursor(ax, horizOn=True, vertOn=True, useblit=False, **lineprops)[source]
Bases: matplotlib.widgets.AxesWidget A crosshair cursor that spans the axes and moves with mouse cursor. For the cursor to remain responsive you must keep a reference to it. Parameters
axmatplotlib.axes.Axes
The Axes to attach the cursor to.
horizOnbool, default: True
Whether to draw the horizontal line.
vertOnbool, default: True
Whether to draw the vertical line.
useblitbool, default: False
Use blitting for faster drawing if supported by the backend. Other Parameters
**lineprops
Line2D properties that control the appearance of the lines. See also axhline. Examples See Cursor. clear(event)[source]
Internal event handler to clear the cursor.
onmove(event)[source]
Internal event handler to draw the cursor when the mouse moves.
classmatplotlib.widgets.EllipseSelector(ax, onselect, drawtype=<deprecated parameter>, minspanx=0, minspany=0, useblit=False, lineprops=<deprecated parameter>, props=None, spancoords='data', button=None, grab_range=10, handle_props=None, interactive=False, state_modifier_keys=None, drag_from_anywhere=False, ignore_event_outside=False)[source]
Bases: matplotlib.widgets.RectangleSelector Select an elliptical region of an axes. For the cursor to remain responsive you must keep a reference to it. Press and release events triggered at the same coordinates outside the selection will clear the selector, except when ignore_event_outside=True. Parameters
axAxes
The parent axes for the widget. onselectfunction
A callback function that is called after a release event and the selection is created, changed or removed. It must have the signature: def onselect(eclick: MouseEvent, erelease: MouseEvent)
where eclick and erelease are the mouse click and release MouseEvents that start and complete the selection. minspanxfloat, default: 0
Selections with an x-span less than or equal to minspanx are removed (when already existing) or cancelled. minspanyfloat, default: 0
Selections with a y-span less than or equal to minspany are removed (when already existing) or cancelled. useblitbool, default: False
Whether to use blitting for faster drawing (if supported by the backend). propsdict, optional
Properties with which the ellipse is drawn. See matplotlib.patches.Patch for valid properties. Default: dict(facecolor='red', edgecolor='black', alpha=0.2, fill=True) spancoords{"data", "pixels"}, default: "data"
Whether to interpret minspanx and minspany in data or in pixel coordinates. buttonMouseButton, list of MouseButton, default: all buttons
Button(s) that trigger rectangle selection. grab_rangefloat, default: 10
Distance in pixels within which the interactive tool handles can be activated. handle_propsdict, optional
Properties with which the interactive handles (marker artists) are drawn. See the marker arguments in matplotlib.lines.Line2D for valid properties. Default values are defined in mpl.rcParams except for the default value of markeredgecolor which will be the same as the edgecolor property in props. interactivebool, default: False
Whether to draw a set of handles that allow interaction with the widget after it is drawn. state_modifier_keysdict, optional
Keyboard modifiers which affect the widget's behavior. Values amend the defaults. "move": Move the existing shape, default: no modifier. "clear": Clear the current shape, default: "escape". "square": Make the shape square, default: "shift". "center": Make the initial point the center of the shape, default: "ctrl". "square" and "center" can be combined. drag_from_anywherebool, default: False
If True, the widget can be moved by clicking anywhere within its bounds. ignore_event_outsidebool, default: False
If True, the event triggered outside the span selector will be ignored. Examples Rectangle and ellipse selectors propertydraw_shape[source]
classmatplotlib.widgets.Lasso(ax, xy, callback=None, useblit=True)[source]
Bases: matplotlib.widgets.AxesWidget Selection curve of an arbitrary shape. The selected path can be used in conjunction with contains_point to select data points from an image. Unlike LassoSelector, this must be initialized with a starting point xy, and the Lasso events are destroyed upon release. Parameters
axAxes
The parent axes for the widget.
xy(float, float)
Coordinates of the start of the lasso.
useblitbool, default: True
Whether to use blitting for faster drawing (if supported by the backend).
callbackcallable
Whenever the lasso is released, the callback function is called and passed the vertices of the selected path. onmove(event)[source]
onrelease(event)[source]
classmatplotlib.widgets.LassoSelector(ax, onselect=None, useblit=True, props=None, button=None)[source]
Bases: matplotlib.widgets._SelectorWidget Selection curve of an arbitrary shape. For the selector to remain responsive you must keep a reference to it. The selected path can be used in conjunction with contains_point to select data points from an image. In contrast to Lasso, LassoSelector is written with an interface similar to RectangleSelector and SpanSelector, and will continue to interact with the axes until disconnected. Example usage: ax = plt.subplot()
ax.plot(x, y)
def onselect(verts):
print(verts)
lasso = LassoSelector(ax, onselect)
Parameters
axAxes
The parent axes for the widget.
onselectfunction
Whenever the lasso is released, the onselect function is called and passed the vertices of the selected path.
useblitbool, default: True
Whether to use blitting for faster drawing (if supported by the backend).
propsdict, optional
Properties with which the line is drawn, see matplotlib.lines.Line2D for valid properties. Default values are defined in mpl.rcParams.
buttonMouseButton or list of MouseButton, optional
The mouse buttons used for rectangle selection. Default is None, which corresponds to all buttons. onpress(event)[source]
[Deprecated] Notes Deprecated since version 3.5:
onrelease(event)[source]
[Deprecated] Notes Deprecated since version 3.5:
classmatplotlib.widgets.LockDraw[source]
Bases: object Some widgets, like the cursor, draw onto the canvas, and this is not desirable under all circumstances, like when the toolbar is in zoom-to-rect mode and drawing a rectangle. To avoid this, a widget can acquire a canvas' lock with canvas.widgetlock(widget) before drawing on the canvas; this will prevent other widgets from doing so at the same time (if they also try to acquire the lock first). available(o)[source]
Return whether drawing is available to o.
isowner(o)[source]
Return whether o owns this lock.
locked()[source]
Return whether the lock is currently held by an owner.
release(o)[source]
Release the lock from o.
classmatplotlib.widgets.MultiCursor(canvas, axes, useblit=True, horizOn=False, vertOn=True, **lineprops)[source]
Bases: matplotlib.widgets.Widget Provide a vertical (default) and/or horizontal line cursor shared between multiple axes. For the cursor to remain responsive you must keep a reference to it. Parameters
canvasmatplotlib.backend_bases.FigureCanvasBase
The FigureCanvas that contains all the axes.
axeslist of matplotlib.axes.Axes
The Axes to attach the cursor to.
useblitbool, default: True
Use blitting for faster drawing if supported by the backend.
horizOnbool, default: False
Whether to draw the horizontal line. vertOn: bool, default: True
Whether to draw the vertical line. Other Parameters
**lineprops
Line2D properties that control the appearance of the lines. See also axhline. Examples See Multicursor. clear(event)[source]
Clear the cursor.
connect()[source]
Connect events.
disconnect()[source]
Disconnect events.
onmove(event)[source]
classmatplotlib.widgets.PolygonSelector(ax, onselect, useblit=False, props=None, handle_props=None, grab_range=10)[source]
Bases: matplotlib.widgets._SelectorWidget Select a polygon region of an axes. Place vertices with each mouse click, and make the selection by completing the polygon (clicking on the first vertex). Once drawn individual vertices can be moved by clicking and dragging with the left mouse button, or removed by clicking the right mouse button. In addition, the following modifier keys can be used: Hold ctrl and click and drag a vertex to reposition it before the polygon has been completed. Hold the shift key and click and drag anywhere in the axes to move all vertices. Press the esc key to start a new polygon. For the selector to remain responsive you must keep a reference to it. Parameters
axAxes
The parent axes for the widget.
onselectfunction
When a polygon is completed or modified after completion, the onselect function is called and passed a list of the vertices as (xdata, ydata) tuples.
useblitbool, default: False
Whether to use blitting for faster drawing (if supported by the backend).
propsdict, optional
Properties with which the line is drawn, see matplotlib.lines.Line2D for valid properties. Default: dict(color='k', linestyle='-', linewidth=2, alpha=0.5)
handle_propsdict, optional
Artist properties for the markers drawn at the vertices of the polygon. See the marker arguments in matplotlib.lines.Line2D for valid properties. Default values are defined in mpl.rcParams except for the default value of markeredgecolor which will be the same as the color property in props.
grab_rangefloat, default: 10
A vertex is selected (to complete the polygon or to move a vertex) if the mouse click is within grab_range pixels of the vertex. Notes If only one point remains after removing points, the selector reverts to an incomplete state and you can start drawing a new polygon from the existing point. Examples Polygon Selector propertyline[source]
onmove(event)[source]
Cursor move event handler and validator.
propertyvertex_select_radius[source]
propertyverts
The polygon vertices, as a list of (x, y) pairs.
classmatplotlib.widgets.RadioButtons(ax, labels, active=0, activecolor='blue')[source]
Bases: matplotlib.widgets.AxesWidget A GUI neutral radio button. For the buttons to remain responsive you must keep a reference to this object. Connect to the RadioButtons with the on_clicked method. Attributes
axAxes
The parent axes for the widget.
activecolorcolor
The color of the selected button.
labelslist of Text
The button labels.
circleslist of Circle
The buttons.
value_selectedstr
The label text of the currently selected button. Add radio buttons to an Axes. Parameters
axAxes
The axes to add the buttons to.
labelslist of str
The button labels.
activeint
The index of the initially selected button.
activecolorcolor
The color of the selected button. propertycnt[source]
disconnect(cid)[source]
Remove the observer with connection id cid.
propertyobservers[source]
on_clicked(func)[source]
Connect the callback function func to button click events. Returns a connection id, which can be used to disconnect the callback.
set_active(index)[source]
Select button with number index. Callbacks will be triggered if eventson is True.
classmatplotlib.widgets.RangeSlider(ax, label, valmin, valmax, valinit=None, valfmt=None, closedmin=True, closedmax=True, dragging=True, valstep=None, orientation='horizontal', track_color='lightgrey', handle_style=None, **kwargs)[source]
Bases: matplotlib.widgets.SliderBase A slider representing a range of floating point values. Defines the min and max of the range via the val attribute as a tuple of (min, max). Create a slider that defines a range contained within [valmin, valmax] in axes ax. For the slider to remain responsive you must maintain a reference to it. Call on_changed() to connect to the slider event. Attributes
valtuple of float
Slider value. Parameters
axAxes
The Axes to put the slider in.
labelstr
Slider label.
valminfloat
The minimum value of the slider.
valmaxfloat
The maximum value of the slider.
valinittuple of float or None, default: None
The initial positions of the slider. If None the initial positions will be at the 25th and 75th percentiles of the range.
valfmtstr, default: None
%-format string used to format the slider values. If None, a ScalarFormatter is used instead.
closedminbool, default: True
Whether the slider interval is closed on the bottom.
closedmaxbool, default: True
Whether the slider interval is closed on the top.
draggingbool, default: True
If True the slider can be dragged by the mouse.
valstepfloat, default: None
If given, the slider will snap to multiples of valstep.
orientation{'horizontal', 'vertical'}, default: 'horizontal'
The orientation of the slider.
track_colorcolor, default: 'lightgrey'
The color of the background track. The track is accessible for further styling via the track attribute.
handle_styledict
Properties of the slider handles. Default values are
Key Value Default Description
facecolor color 'white' The facecolor of the slider handles.
edgecolor color '.75' The edgecolor of the slider handles.
size int 10 The size of the slider handles in points. Other values will be transformed as marker{foo} and passed to the Line2D constructor. e.g. handle_style = {'style': 'x'} will result in markerstyle = 'x'. Notes Additional kwargs are passed on to self.poly which is the Polygon that draws the slider knob. See the Polygon documentation for valid property names (facecolor, edgecolor, alpha, etc.). on_changed(func)[source]
Connect func as callback function to changes of the slider value. Parameters
funccallable
Function to call when slider is changed. The function must accept a numpy array with shape (2,) as its argument. Returns
int
Connection id (which can be used to disconnect func).
set_max(max)[source]
Set the upper value of the slider to max. Parameters
maxfloat
set_min(min)[source]
Set the lower value of the slider to min. Parameters
minfloat
set_val(val)[source]
Set slider value to val. Parameters
valtuple or array-like of float
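A minimal headless sketch of RangeSlider with on_changed and set_val (the Agg backend means no window is shown; slider_ax position values are illustrative):
```python
import matplotlib
matplotlib.use("Agg")            # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.widgets import RangeSlider

fig, ax = plt.subplots()
slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.04])

# Keep a reference so the slider stays responsive.
slider = RangeSlider(slider_ax, "Range", valmin=0.0, valmax=10.0,
                     valinit=(2.0, 8.0))

observed = []
slider.on_changed(lambda val: observed.append(tuple(val)))  # val is (min, max)

slider.set_val((1.0, 9.0))       # triggers the callback
print(tuple(slider.val))         # the new (min, max) range
```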
classmatplotlib.widgets.RectangleSelector(ax, onselect, drawtype=<deprecated parameter>, minspanx=0, minspany=0, useblit=False, lineprops=<deprecated parameter>, props=None, spancoords='data', button=None, grab_range=10, handle_props=None, interactive=False, state_modifier_keys=None, drag_from_anywhere=False, ignore_event_outside=False)[source]
Bases: matplotlib.widgets._SelectorWidget Select a rectangular region of an axes. For the cursor to remain responsive you must keep a reference to it. Press and release events triggered at the same coordinates outside the selection will clear the selector, except when ignore_event_outside=True. Parameters
axAxes
The parent axes for the widget. onselectfunction
A callback function that is called after a release event and the selection is created, changed or removed. It must have the signature: def onselect(eclick: MouseEvent, erelease: MouseEvent)
where eclick and erelease are the mouse click and release MouseEvents that start and complete the selection. minspanxfloat, default: 0
Selections with an x-span less than or equal to minspanx are removed (when already existing) or cancelled. minspanyfloat, default: 0
Selections with a y-span less than or equal to minspany are removed (when already existing) or cancelled. useblitbool, default: False
Whether to use blitting for faster drawing (if supported by the backend). propsdict, optional
Properties with which the rectangle is drawn. See matplotlib.patches.Patch for valid properties. Default: dict(facecolor='red', edgecolor='black', alpha=0.2, fill=True) spancoords{"data", "pixels"}, default: "data"
Whether to interpret minspanx and minspany in data or in pixel coordinates. buttonMouseButton, list of MouseButton, default: all buttons
Button(s) that trigger rectangle selection. grab_rangefloat, default: 10
Distance in pixels within which the interactive tool handles can be activated. handle_propsdict, optional
Properties with which the interactive handles (marker artists) are drawn. See the marker arguments in matplotlib.lines.Line2D for valid properties. Default values are defined in mpl.rcParams except for the default value of markeredgecolor which will be the same as the edgecolor property in props. interactivebool, default: False
Whether to draw a set of handles that allow interaction with the widget after it is drawn. state_modifier_keysdict, optional
Keyboard modifiers which affect the widget's behavior. Values amend the defaults. "move": Move the existing shape, default: no modifier. "clear": Clear the current shape, default: "escape". "square": Make the shape square, default: "shift". "center": Make the initial point the center of the shape, default: "ctrl". "square" and "center" can be combined. drag_from_anywherebool, default: False
If True, the widget can be moved by clicking anywhere within its bounds. ignore_event_outsidebool, default: False
If True, the event triggered outside the span selector will be ignored. Examples >>> import matplotlib.pyplot as plt
>>> import matplotlib.widgets as mwidgets
>>> fig, ax = plt.subplots()
>>> ax.plot([1, 2, 3], [10, 50, 100])
>>> def onselect(eclick, erelease):
... print(eclick.xdata, eclick.ydata)
... print(erelease.xdata, erelease.ydata)
>>> props = dict(facecolor='blue', alpha=0.5)
>>> rect = mwidgets.RectangleSelector(ax, onselect, interactive=True,
props=props)
>>> fig.show()
See also: Rectangle and ellipse selectors propertyactive_handle[source]
propertycenter
Center of rectangle.
propertycorners
Corners of rectangle from lower left, moving clockwise.
propertydraw_shape[source]
propertydrawtype[source]
propertyedge_centers
Midpoint of rectangle edges from left, moving anti-clockwise.
propertyextents
Return (xmin, xmax, ymin, ymax).
propertygeometry
Return an array of shape (2, 5) containing the x (RectangleSelector.geometry[1, :]) and y (RectangleSelector.geometry[0, :]) coordinates of the four corners of the rectangle starting and ending in the top left corner.
propertyinteractive[source]
propertymaxdist[source]
propertyto_draw[source]
classmatplotlib.widgets.Slider(ax, label, valmin, valmax, valinit=0.5, valfmt=None, closedmin=True, closedmax=True, slidermin=None, slidermax=None, dragging=True, valstep=None, orientation='horizontal', *, initcolor='r', track_color='lightgrey', handle_style=None, **kwargs)[source]
Bases: matplotlib.widgets.SliderBase A slider representing a floating point range. Create a slider from valmin to valmax in axes ax. For the slider to remain responsive you must maintain a reference to it. Call on_changed() to connect to the slider event. Attributes
valfloat
Slider value. Parameters
axAxes
The Axes to put the slider in.
labelstr
Slider label.
valminfloat
The minimum value of the slider.
valmaxfloat
The maximum value of the slider.
valinitfloat, default: 0.5
The slider initial position.
valfmtstr, default: None
%-format string used to format the slider value. If None, a ScalarFormatter is used instead.
closedminbool, default: True
Whether the slider interval is closed on the bottom.
closedmaxbool, default: True
Whether the slider interval is closed on the top.
sliderminSlider, default: None
Do not allow the current slider to have a value less than the value of the Slider slidermin.
slidermaxSlider, default: None
Do not allow the current slider to have a value greater than the value of the Slider slidermax.
draggingbool, default: True
If True the slider can be dragged by the mouse.
valstepfloat or array-like, default: None
If a float, the slider will snap to multiples of valstep. If an array the slider will snap to the values in the array.
orientation{'horizontal', 'vertical'}, default: 'horizontal'
The orientation of the slider.
initcolorcolor, default: 'r'
The color of the line at the valinit position. Set to 'none' for no line.
track_colorcolor, default: 'lightgrey'
The color of the background track. The track is accessible for further styling via the track attribute.
handle_styledict
Properties of the slider handle. Default values are
Key Value Default Description
facecolor color 'white' The facecolor of the slider handle.
edgecolor color '.75' The edgecolor of the slider handle.
size int 10 The size of the slider handle in points. Other values will be transformed as marker{foo} and passed to the Line2D constructor. e.g. handle_style = {'style': 'x'} will result in markerstyle = 'x'. Notes Additional kwargs are passed on to self.poly which is the Polygon that draws the slider knob. See the Polygon documentation for valid property names (facecolor, edgecolor, alpha, etc.). propertycnt[source]
propertyobservers[source]
on_changed(func)[source]
Connect func as callback function to changes of the slider value. Parameters
funccallable
Function to call when slider is changed. The function must accept a single float as its arguments. Returns
int
Connection id (which can be used to disconnect func).
set_val(val)[source]
Set slider value to val. Parameters
valfloat
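A minimal runnable sketch of the Slider workflow described above (create, connect a callback with on_changed, drive it with set_val). The Agg backend is used here only so the example runs headless; that choice is an assumption of the example, not a requirement of the widget.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)               # leave room for the slider axes
slider_ax = fig.add_axes([0.25, 0.1, 0.5, 0.03])

# valstep=0.1 makes interactive dragging snap to multiples of 0.1
freq = Slider(slider_ax, "Freq", valmin=0.1, valmax=10.0,
              valinit=3.0, valstep=0.1)

seen = []
cid = freq.on_changed(seen.append)   # keep a reference to the slider itself too

freq.set_val(5.0)                    # programmatic changes also fire the callback
freq.disconnect(cid)                 # stop observing via the connection id
```

Note that a reference to `freq` must be kept alive for the widget to stay responsive, as the class docstring warns.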
classmatplotlib.widgets.SliderBase(ax, orientation, closedmin, closedmax, valmin, valmax, valfmt, dragging, valstep)[source]
Bases: matplotlib.widgets.AxesWidget The base class for constructing Slider widgets. Not intended for direct usage. For the slider to remain responsive you must maintain a reference to it. disconnect(cid)[source]
Remove the observer with connection id cid. Parameters
cidint
Connection id of the observer to be removed.
reset()[source]
Reset the slider to the initial value.
classmatplotlib.widgets.SpanSelector(ax, onselect, direction, minspan=0, useblit=False, props=None, onmove_callback=None, interactive=False, button=None, handle_props=None, grab_range=10, drag_from_anywhere=False, ignore_event_outside=False)[source]
Bases: matplotlib.widgets._SelectorWidget Visually select a min/max range on a single axis and call a function with those values. To guarantee that the selector remains responsive, keep a reference to it. In order to turn off the SpanSelector, set span_selector.active to False. To turn it back on, set it to True. Press and release events triggered at the same coordinates outside the selection will clear the selector, except when ignore_event_outside=True. Parameters
axmatplotlib.axes.Axes
onselectcallable
A callback function that is called after a release event and the selection is created, changed or removed. It must have the signature: def on_select(min: float, max: float) -> Any
direction{"horizontal", "vertical"}
The direction along which to draw the span selector.
minspanfloat, default: 0
If selection is less than or equal to minspan, the selection is removed (when already existing) or cancelled.
useblitbool, default: False
If True, use the backend-dependent blitting features for faster canvas updates.
propsdict, optional
Dictionary of matplotlib.patches.Patch properties. Default: dict(facecolor='red', alpha=0.5)
onmove_callbackfunc(min, max), min/max are floats, default: None
Called on mouse move while the span is being selected.
span_staysbool, default: False
If True, the span stays visible after the mouse is released. Deprecated, use interactive instead.
interactivebool, default: False
Whether to draw a set of handles that allow interaction with the widget after it is drawn.
buttonMouseButton or list of MouseButton, default: all buttons
The mouse buttons which activate the span selector.
handle_propsdict, default: None
Properties of the handle lines at the edges of the span. Only used when interactive is True. See matplotlib.lines.Line2D for valid properties.
grab_rangefloat, default: 10
Distance in pixels within which the interactive tool handles can be activated.
drag_from_anywherebool, default: False
If True, the widget can be moved by clicking anywhere within its bounds.
ignore_event_outsidebool, default: False
If True, the event triggered outside the span selector will be ignored. Examples >>> import matplotlib.pyplot as plt
>>> import matplotlib.widgets as mwidgets
>>> fig, ax = plt.subplots()
>>> ax.plot([1, 2, 3], [10, 50, 100])
>>> def onselect(vmin, vmax):
... print(vmin, vmax)
>>> span = mwidgets.SpanSelector(ax, onselect, 'horizontal',
... props=dict(facecolor='blue', alpha=0.5))
>>> fig.show()
See also: Span Selector propertyactive_handle[source]
connect_default_events()[source]
Connect the major canvas events to methods.
propertydirection
Direction of the span selector: 'vertical' or 'horizontal'.
propertyextents
Return extents of the span selector.
new_axes(ax)[source]
Set SpanSelector to operate on a new Axes.
propertypressv[source]
propertyprev[source]
propertyrect[source]
propertyrectprops[source]
propertyspan_stays[source]
classmatplotlib.widgets.SubplotTool(targetfig, toolfig)[source]
Bases: matplotlib.widgets.Widget A tool to adjust the subplot params of a matplotlib.figure.Figure. Parameters
targetfigFigure
The figure instance to adjust.
toolfigFigure
The figure instance to embed the subplot tool into.
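A short sketch of embedding a SubplotTool in a second figure and driving one of its sliders programmatically. The per-parameter slider attribute name (`sliderleft`) is taken from a typical matplotlib build and may differ between versions; treat it as an assumption.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the example
import matplotlib.pyplot as plt
from matplotlib.widgets import SubplotTool

targetfig, ax = plt.subplots()
toolfig = plt.figure()                 # second figure hosts the adjustment sliders
tool = SubplotTool(targetfig, toolfig)

# Moving a slider updates targetfig's subplot params; drive it in code here:
tool.sliderleft.set_val(0.3)
```

After the call, `targetfig.subplotpars.left` reflects the new value, exactly as if the slider had been dragged.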
classmatplotlib.widgets.TextBox(ax, label, initial='', color='.95', hovercolor='1', label_pad=0.01, textalignment='left')[source]
Bases: matplotlib.widgets.AxesWidget A GUI neutral text input box. For the text box to remain responsive you must keep a reference to it. Call on_text_change to be updated whenever the text changes. Call on_submit to be updated whenever the user hits enter or leaves the text entry field. Attributes
axAxes
The parent axes for the widget.
labelText
colorcolor
The color of the text box when not hovering.
hovercolorcolor
The color of the text box when hovering. Parameters
axAxes
The Axes instance the button will be placed into.
labelstr
Label for this text box.
initialstr
Initial value in the text box.
colorcolor
The color of the box.
hovercolorcolor
The color of the box when the mouse is over it.
label_padfloat
The distance between the label and the right side of the textbox.
textalignment{'left', 'center', 'right'}
The horizontal location of the text. propertyDIST_FROM_LEFT[source]
begin_typing(x)[source]
propertychange_observers[source]
propertycnt[source]
disconnect(cid)[source]
Remove the observer with connection id cid.
on_submit(func)[source]
When the user hits enter or leaves the submission box, call this func with event. A connection id is returned which can be used to disconnect.
on_text_change(func)[source]
When the text changes, call this func with event. A connection id is returned which can be used to disconnect.
position_cursor(x)[source]
set_val(val)[source]
stop_typing()[source]
propertysubmit_observers[source]
propertytext
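A small sketch tying together the TextBox behaviors described above: set_val replaces the text and, with events on, notifies both the text-change and submit observers.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the example
import matplotlib.pyplot as plt
from matplotlib.widgets import TextBox

fig = plt.figure()
box_ax = fig.add_axes([0.2, 0.45, 0.6, 0.1])
tb = TextBox(box_ax, "Name", initial="hello")

changes, submitted = [], []
tb.on_text_change(changes.append)   # fires on every text change
tb.on_submit(submitted.append)      # fires on enter/leave -- and on set_val

tb.set_val("42")                    # replaces the text, triggers both observers
```

As with all widgets here, keep a reference to `tb` for it to remain responsive; `tb.text` afterwards holds the current value.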
classmatplotlib.widgets.ToolHandles(ax, x, y, marker='o', marker_props=None, useblit=True)[source]
Bases: object Control handles for canvas tools. Parameters
axmatplotlib.axes.Axes
Matplotlib axes where tool handles are displayed.
x, y1D arrays
Coordinates of control handles.
markerstr, default: 'o'
Shape of marker used to display handle. See matplotlib.pyplot.plot.
marker_propsdict, optional
Additional marker properties. See matplotlib.lines.Line2D.
useblitbool, default: True
Whether to use blitting for faster drawing (if supported by the backend). propertyartists
closest(x, y)[source]
Return index and pixel distance to closest handle.
set_animated(val)[source]
set_data(pts, y=None)[source]
Set x and y positions of handles.
set_visible(val)[source]
propertyx
propertyy
classmatplotlib.widgets.ToolLineHandles(ax, positions, direction, line_props=None, useblit=True)[source]
Bases: object Control handles for canvas tools. Parameters
axmatplotlib.axes.Axes
Matplotlib axes where tool handles are displayed.
positions1D array
Positions of handles in data coordinates.
direction{"horizontal", "vertical"}
Direction of handles, either 'vertical' or 'horizontal'
line_propsdict, optional
Additional line properties. See matplotlib.lines.Line2D.
useblitbool, default: True
Whether to use blitting for faster drawing (if supported by the backend). propertyartists
closest(x, y)[source]
Return index and pixel distance to closest handle. Parameters
x, yfloat
x, y position from which the distance will be calculated to determine the closest handle Returns
index, distanceindex of the handle and its distance from
position x, y
propertydirection
Direction of the handle: 'vertical' or 'horizontal'.
propertypositions
Positions of the handle in data coordinates.
remove()[source]
Remove the handles artist from the figure.
set_animated(value)[source]
Set the animated state of the handles artist.
set_data(positions)[source]
Set x or y positions of handles, depending on whether the lines are vertical or horizontal. Parameters
positionstuple of length 2
Set the positions of the handle in data coordinates
set_visible(value)[source]
Set the visibility state of the handles artist.
classmatplotlib.widgets.Widget[source]
Bases: object Abstract base class for GUI neutral widgets. propertyactive
Is the widget active?
drawon=True
eventson=True
get_active()[source]
Get whether the widget is active.
ignore(event)[source]
Return whether event should be ignored. This method should be called at the beginning of any event callback.
set_active(active)[source]
Set whether the widget is active. | |
doc_25688 |
The Connectionist Temporal Classification loss. See CTCLoss for details. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information. Note This operation may produce nondeterministic gradients when given tensors on a CUDA device. See Reproducibility for more information. Parameters
log_probs – (T, N, C) where C = number of characters in alphabet including blank, T = input length, and N = batch size. The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()).
targets – (N, S) or (sum(target_lengths),). Targets cannot be blank. In the second form, the targets are assumed to be concatenated.
input_lengths – (N). Lengths of the inputs (must each be ≤ T)
target_lengths – (N). Lengths of the targets
blank (int, optional) – Blank label. Default 0.
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken, 'sum': the output will be summed. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Example: >>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_()
>>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long)
>>> input_lengths = torch.full((16,), 50, dtype=torch.long)
>>> target_lengths = torch.randint(10,30,(16,), dtype=torch.long)
>>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
>>> loss.backward() | |
doc_25689 |
Compute data precision matrix with the FactorAnalysis model. Returns
precisionndarray of shape (n_features, n_features)
Estimated precision of data. | |
doc_25690 | The name of the module the loader will handle. | |
doc_25691 |
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses. | |
doc_25692 | See Migration guide for more details. tf.compat.v1.raw_ops.DummyIterationCounter
tf.raw_ops.DummyIterationCounter(
name=None
)
Args
name A name for the operation (optional).
Returns A Tensor of type resource. | |
doc_25693 | A concrete implementation of Finder.find_module() which is equivalent to self.find_loader(fullname)[0]. Deprecated since version 3.4: Use find_spec() instead. | |
doc_25694 |
Set the font style. Parameters
fontstyle{'normal', 'italic', 'oblique'}
See also font_manager.FontProperties.set_style | |
doc_25695 |
pygame module for interacting with midi input and output. New in pygame 1.9.0. The midi module can send output to midi devices and get input from midi devices. It can also list midi devices on the system. The midi module supports real and virtual midi devices. It uses the portmidi library. Is portable to whichever platforms portmidi supports (currently Windows, Mac OS X, and Linux). This uses pyportmidi for now, but may use its own bindings at some point in the future. The pyportmidi bindings are included with pygame. New in pygame 2.0.0. These are pygame events (pygame.event) reserved for midi use. The MIDIIN event is used by pygame.midi.midis2events() when converting midi events to pygame events. MIDIIN
MIDIOUT pygame.midi.init()
initialize the midi module init() -> None Initializes the pygame.midi module. Must be called before using the pygame.midi module. It is safe to call this more than once.
pygame.midi.quit()
uninitialize the midi module quit() -> None Uninitializes the pygame.midi module. If pygame.midi.init() was called to initialize the pygame.midi module, then this function will be called automatically when your program exits. It is safe to call this function more than once.
pygame.midi.get_init()
returns True if the midi module is currently initialized get_init() -> bool Gets the initialization state of the pygame.midi module.
Returns:
True if the pygame.midi module is currently initialized.
Return type:
bool New in pygame 1.9.5.
pygame.midi.Input
Input is used to get midi input from midi devices. Input(device_id) -> None Input(device_id, buffer_size) -> None
Parameters:
device_id (int) -- midi device id
buffer_size (int) -- (optional) the number of input events to be buffered close()
closes a midi stream, flushing any pending buffers. close() -> None PortMidi attempts to close open streams when the application exits. Note This is particularly difficult under Windows.
poll()
returns True if there's data, or False if not. poll() -> bool Used to indicate if any data exists.
Returns:
True if there is data, False otherwise
Return type:
bool
Raises:
MidiException -- on error
read()
reads num_events midi events from the buffer. read(num_events) -> midi_event_list Reads from the input buffer and gives back midi events.
Parameters:
num_events (int) -- number of input events to read
Returns:
the format for midi_event_list is [[[status, data1, data2, data3], timestamp], ...]
Return type:
list
pygame.midi.Output
Output is used to send midi to an output device Output(device_id) -> None Output(device_id, latency=0) -> None Output(device_id, buffer_size=256) -> None Output(device_id, latency, buffer_size) -> None The buffer_size specifies the number of output events to be buffered waiting for output. In some cases (see below) PortMidi does not buffer output at all and merely passes data to a lower-level API, in which case buffersize is ignored. latency is the delay in milliseconds applied to timestamps to determine when the output should actually occur. If latency is < 0, 0 is assumed. If latency is zero, timestamps are ignored and all output is delivered immediately. If latency is greater than zero, output is delayed until the message timestamp plus the latency. In some cases, PortMidi can obtain better timing than your application by passing timestamps along to the device driver or hardware. Latency may also help you to synchronize midi data to audio data by matching midi latency to the audio buffer latency. Note Time is measured relative to the time source indicated by time_proc. Timestamps are absolute, not relative delays or offsets.
terminates outgoing messages immediately abort() -> None The caller should immediately close the output port; this call may result in transmission of a partial midi message. There is no abort for Midi input because the user can simply ignore messages in the buffer and close an input device at any time.
close()
closes a midi stream, flushing any pending buffers. close() -> None PortMidi attempts to close open streams when the application exits. Note This is particularly difficult under Windows.
note_off()
turns a midi note off (note must be on) note_off(note, velocity=None, channel=0) -> None Turn a note off in the output stream. The note must already be on for this to work correctly.
note_on()
turns a midi note on (note must be off) note_on(note, velocity=None, channel=0) -> None Turn a note on in the output stream. The note must already be off for this to work correctly.
set_instrument()
select an instrument, with a value between 0 and 127 set_instrument(instrument_id, channel=0) -> None Select an instrument.
pitch_bend()
modify the pitch of a channel. pitch_bend(value=0, channel=0) -> None Adjust the pitch of a channel. The value is a signed integer from -8192 to +8191. For example, 0 means "no change", +4096 is typically a semitone higher, and -8192 is 1 whole tone lower (though the musical range corresponding to the pitch bend range can also be changed in some synthesizers). If no value is given, the pitch bend is returned to "no change". New in pygame 1.9.4.
write()
writes a list of midi data to the Output write(data) -> None Writes series of MIDI information in the form of a list.
Parameters:
data (list) -- data to write, the expected format is [[[status, data1=0, data2=0, ...], timestamp], ...] with the data# fields being optional
Raises:
IndexError -- if more than 1024 elements in the data list Example: # Program change at time 20000 and 500ms later send note 65 with
# velocity 100.
write([[[0xc0, 0, 0], 20000], [[0x90, 60, 100], 20500]]) Note Timestamps will be ignored if latency = 0 To get a note to play immediately, send MIDI info with timestamp read from function Time Optional data fields: write([[[0xc0, 0, 0], 20000]]) is equivalent to write([[[0xc0], 20000]])
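The `[[status, data1, data2, ...], timestamp]` payload format above can be built and inspected without opening a device. The helper and constant names below are illustrative, not part of pygame.

```python
# Illustrative helpers (not part of pygame) for building an Output.write()
# payload.  Each entry is [[status, data1, data2, ...], timestamp].
PROGRAM_CHANGE = 0xC0
NOTE_ON = 0x90

def note_on_event(note, velocity, timestamp, channel=0):
    # The low nibble of a channel-voice status byte carries the channel.
    return [[NOTE_ON | channel, note, velocity], timestamp]

payload = [
    [[PROGRAM_CHANGE, 0, 0], 20000],   # program change at t = 20000 ms
    note_on_event(60, 100, 20500),     # note 60, velocity 100, 500 ms later
]
# An opened Output would consume this directly: output.write(payload)
```

The two entries mirror the documentation's program-change-then-note-on example; remember timestamps are honored only when the Output was created with latency > 0.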
write_short()
writes up to 3 bytes of midi data to the Output write_short(status) -> None write_short(status, data1=0, data2=0) -> None Output MIDI information of 3 bytes or less. The data fields are optional and assumed to be 0 if omitted. Examples of status byte values: 0xc0 # program change
0x90 # note on
# etc. Example: # note 65 on with velocity 100
write_short(0x90, 65, 100)
write_sys_ex()
writes a timestamped system-exclusive midi message. write_sys_ex(when, msg) -> None Writes a timestamped system-exclusive midi message.
Parameters:
msg (list[int] or str) -- midi message
when -- timestamp in milliseconds Example: midi_output.write_sys_ex(0, '\xF0\x7D\x10\x11\x12\x13\xF7')
# is equivalent to
midi_output.write_sys_ex(pygame.midi.time(),
[0xF0, 0x7D, 0x10, 0x11, 0x12, 0x13, 0xF7])
pygame.midi.get_count()
gets the number of devices. get_count() -> num_devices Device ids range from 0 to get_count() - 1
pygame.midi.get_default_input_id()
gets default input device number get_default_input_id() -> default_id The following describes the usage details for this function and the get_default_output_id() function. Return the default device ID or -1 if there are no devices. The result can be passed to the Input/Output class. On a PC the user can specify a default device by setting an environment variable. To use device #1, for example: set PM_RECOMMENDED_INPUT_DEVICE=1
or
set PM_RECOMMENDED_OUTPUT_DEVICE=1 The user should first determine the available device ID by using the supplied application "testin" or "testout". In general, the registry is a better place for this kind of info. With USB devices that can come and go, using integers is not very reliable for device identification. Under Windows, if PM_RECOMMENDED_INPUT_DEVICE (or PM_RECOMMENDED_OUTPUT_DEVICE) is NOT found in the environment, then the default device is obtained by looking for a string in the registry under: HKEY_LOCAL_MACHINE/SOFTWARE/PortMidi/Recommended_Input_Device
or
HKEY_LOCAL_MACHINE/SOFTWARE/PortMidi/Recommended_Output_Device The number of the first device with a substring that matches the string exactly is returned. For example, if the string in the registry is "USB" and device 1 is named "In USB MidiSport 1x1", then that will be the default input because it contains the string "USB". In addition to the name, get_device_info() returns "interf", which is the interface name. The "interface" is the underlying software system or API used by PortMidi to access devices. Supported interfaces: MMSystem # the only Win32 interface currently supported
ALSA # the only Linux interface currently supported
CoreMIDI # the only Mac OS X interface currently supported
# DirectX - not implemented
# OSS - not implemented To specify both the interface and the device name in the registry, separate the two with a comma and a space. The string before the comma must be a substring of the "interf" string and the string after the space must be a substring of the "name" name string in order to match the device. e.g.: MMSystem, In USB MidiSport 1x1 Note In the current release, the default is simply the first device (the input or output device with the lowest PmDeviceID).
pygame.midi.get_default_output_id()
gets default output device number get_default_output_id() -> default_id See get_default_input_id() for usage details.
pygame.midi.get_device_info()
returns information about a midi device get_device_info(an_id) -> (interf, name, input, output, opened) get_device_info(an_id) -> None Gets the device info for a given id.
Parameters:
an_id (int) -- id of the midi device being queried
Returns:
if the id is out of range None is returned, otherwise a tuple of (interf, name, input, output, opened) is returned.
interf: string describing the device interface (e.g. 'ALSA') name: string name of the device (e.g. 'Midi Through Port-0') input: 1 if the device is an input device, otherwise 0 output: 1 if the device is an output device, otherwise 0 opened: 1 if the device is opened, otherwise 0
Return type:
tuple or None
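A sketch of enumerating devices with get_count() and get_device_info(). Decoding interf/name assumes they are returned as bytes, which is the case in recent pygame builds; a machine with no MIDI hardware simply prints nothing.

```python
import pygame.midi

pygame.midi.init()
device_count = pygame.midi.get_count()
for device_id in range(device_count):
    interf, name, is_input, is_output, opened = pygame.midi.get_device_info(device_id)
    kind = "input" if is_input else "output"
    # interf and name are bytes in recent pygame versions
    print(device_id, interf.decode(), name.decode(), kind,
          "open" if opened else "closed")
pygame.midi.quit()
```

Device ids produced this way are what Input and Output accept, and what get_default_input_id()/get_default_output_id() return.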
pygame.midi.midis2events()
converts midi events to pygame events midis2events(midi_events, device_id) -> [Event, ...] Takes a sequence of midi events and returns list of pygame events. The midi_events data is expected to be a sequence of ((status, data1, data2, data3), timestamp) midi events (all values required).
Returns:
a list of pygame events of event type MIDIIN
Return type:
list
pygame.midi.time()
returns the current time in ms of the PortMidi timer time() -> time The time is reset to 0 when the pygame.midi module is initialized.
pygame.midi.frequency_to_midi()
Converts a frequency into a MIDI note. Rounds to the closest midi note. frequency_to_midi(frequency) -> midi_note example: frequency_to_midi(27.5) == 21 New in pygame 1.9.5.
pygame.midi.midi_to_frequency()
Converts a midi note to a frequency. midi_to_frequency(midi_note) -> frequency example: midi_to_frequency(21) == 27.5 New in pygame 1.9.5.
pygame.midi.midi_to_ansi_note()
Returns the Ansi Note name for a midi number. midi_to_ansi_note(midi_note) -> ansi_note example: midi_to_ansi_note(21) == 'A0' New in pygame 1.9.5.
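The three conversion helpers above follow the standard MIDI tuning relationship (A4 = midi note 69 = 440 Hz). A pure-Python sketch of that math, written independently of pygame's own implementation:

```python
import math

def frequency_to_midi(frequency):
    # 12 semitones per octave; doubling the frequency raises the note by 12
    return int(round(69 + 12 * math.log2(frequency / 440.0)))

def midi_to_frequency(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def midi_to_ansi_note(midi_note):
    octave = midi_note // 12 - 1   # midi notes 0-11 sit in octave -1
    return f"{NOTES[midi_note % 12]}{octave}"
```

These reproduce the documented examples: 27.5 Hz maps to midi note 21, and note 21 is 'A0'.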
exception pygame.midi.MidiException
exception that pygame.midi functions and classes can raise MidiException(errno) -> None | |
doc_25696 |
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters | |
doc_25697 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_25698 | Generate n random bytes. This method should not be used for generating security tokens. Use secrets.token_bytes() instead. New in version 3.9. | |
doc_25699 | The maximum value allowed for the timeout parameter of Lock.acquire(). Specifying a timeout greater than this value will raise an OverflowError. New in version 3.2. |