_id | text | title
|---|---|---|
doc_4400 | The maximum number of processes the current process may create. | |
doc_4401 |
Set the offsets for the collection. Parameters
offsets : (N, 2) or (2,) array-like | |
doc_4402 | See Migration guide for more details. tf.compat.v1.keras.layers.ActivityRegularization
tf.keras.layers.ActivityRegularization(
l1=0.0, l2=0.0, **kwargs
)
Arguments
l1 L1 regularization factor (positive float).
l2 L2 regularization factor (positive float). Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape: Same shape as input. | |
doc_4403 | Returns the ContentType instance uniquely identified by the given application label and model name. The primary purpose of this method is to allow ContentType objects to be referenced via a natural key during deserialization. | |
doc_4404 | Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval [0, 1). torch.rand_like(input) is equivalent to torch.rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Parameters
input (Tensor) – the size of input will determine the size of the output tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | |
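A minimal runnable sketch of the torch.rand_like snippet above (assuming PyTorch is installed; the template tensor here is hypothetical):

```python
import torch

# Hypothetical template tensor; rand_like copies its size, dtype,
# layout, and device for the new random tensor.
t = torch.ones(2, 3, dtype=torch.float64)
r = torch.rand_like(t)

assert r.shape == t.shape and r.dtype == t.dtype
# Values are drawn from the half-open interval [0, 1)
assert bool((r >= 0).all()) and bool((r < 1).all())
```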
doc_4405 |
Autoscale the scalar limits on the norm instance using the current array, changing only limits that are None | |
doc_4406 |
Evaluate a 3-D HermiteE series on the Cartesian product of x, y, and z. This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * He_i(a) * He_j(b) * He_k(c)\] where the points (a, b, c) consist of all triples formed by taking a from x, b from y, and c from z. The resulting points form a grid with x in the first dimension, y in the second, and z in the third. The parameters x, y, and z are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either x, y, and z or their elements must support multiplication and addition both with themselves and with the elements of c. If c has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters
x, y, z : array_like, compatible objects
The three dimensional series is evaluated at the points in the Cartesian product of x, y, and z. If x, y, or z is a list or tuple, it is first converted to an ndarray; otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar.
c : array_like
Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in c[i,j,k]. If c has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns
values : ndarray, compatible object
The values of the three dimensional polynomial at points in the Cartesian product of x, y, and z. See also
hermeval, hermeval2d, hermegrid2d, hermeval3d
Notes New in version 1.7.0. | |
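The snippet above appears to describe numpy.polynomial.hermite_e.hermegrid3d; a short sketch under that assumption, using the simplest coefficient array:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegrid3d

x = np.array([0.0, 1.0])
y = np.array([0.0])
z = np.array([0.0, 1.0, 2.0])
# c[i, j, k] multiplies He_i(a) * He_j(b) * He_k(c); the single
# coefficient c[0, 0, 0] = 1 gives the constant polynomial p = 1.
c = np.ones((1, 1, 1))

vals = hermegrid3d(x, y, z, c)
# The result grid has shape x.shape + y.shape + z.shape
assert vals.shape == (2, 1, 3)
assert np.allclose(vals, 1.0)
```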
doc_4407 | A Fault object encapsulates the content of an XML-RPC fault tag. Fault objects have the following attributes:
faultCode
A string indicating the fault type.
faultString
A string containing a diagnostic message associated with the fault. | |
doc_4408 | Return an iterator which yields the same values as glob() without actually storing them all simultaneously. Raises an auditing event glob.glob with arguments pathname, recursive. | |
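A self-contained sketch of the lazy iterator described above (assuming this is glob.iglob; the file names are hypothetical, created in a throwaway directory):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ("a.txt", "b.txt", "notes.log"):
        open(os.path.join(d, name), "w").close()
    # iglob yields matches lazily instead of materializing the full list
    matches = glob.iglob(os.path.join(d, "*.txt"))
    found = sorted(os.path.basename(p) for p in matches)

assert found == ["a.txt", "b.txt"]
```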
doc_4409 |
Computes the gradient of current tensor w.r.t. graph leaves. The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self. This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it. See Default gradient layouts for details on the memory layout of accumulated gradients. Note If you run any forward ops, create gradient, and/or call backward in a user-specified CUDA stream context, see Stream semantics of backward passes. Parameters
gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional.
retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be accumulated into .grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the current tensor. All the provided inputs must be leaf Tensors. | |
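A minimal sketch of the backward() behavior described above (assuming PyTorch is installed), covering the scalar case, gradient accumulation, and the explicit `gradient` argument for non-scalar tensors:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # scalar output, so no `gradient` argument is needed
y.backward()
assert torch.equal(x.grad, 2 * x.detach())

# Gradients accumulate in the leaves, hence the advice to zero .grad
(x ** 2).sum().backward()
assert torch.equal(x.grad, 4 * x.detach())
x.grad = None        # reset before the next backward pass

# A non-scalar tensor needs an explicit `gradient` of matching shape
z = x ** 2
z.backward(gradient=torch.ones_like(z))
assert torch.equal(x.grad, 2 * x.detach())
```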
doc_4410 | Try to set the current audio format to format—see getfmts() for a list. Returns the audio format that the device was set to, which may not be the requested format. May also be used to return the current audio format—do this by passing an “audio format” of AFMT_QUERY. | |
doc_4411 |
Find the coefficients of a polynomial with the given sequence of roots. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in numpy.polynomial is preferred. A summary of the differences can be found in the transition guide. Returns the coefficients of the polynomial whose leading coefficient is one for the given sequence of zeros (multiple roots must be included in the sequence as many times as their multiplicity; see Examples). A square matrix (or array, which will be treated as a matrix) can also be given, in which case the coefficients of the characteristic polynomial of the matrix are returned. Parameters
seq_of_zeros : array_like, shape (N,) or (N, N)
A sequence of polynomial roots, or a square array or matrix object. Returns
c : ndarray
1D array of polynomial coefficients from highest to lowest degree: c[0] * x**(N) + c[1] * x**(N-1) + ... + c[N-1] * x + c[N] where c[0] always equals 1. Raises
ValueError
If input is the wrong shape (the input must be a 1-D or square 2-D array). See also polyval
Compute polynomial values. roots
Return the roots of a polynomial. polyfit
Least squares polynomial fit. poly1d
A one-dimensional polynomial class. Notes Specifying the roots of a polynomial still leaves one degree of freedom, typically represented by an undetermined leading coefficient. [1] In the case of this function, that coefficient - the first one in the returned array - is always taken as one. (If for some reason you have one other point, the only automatic way presently to leverage that information is to use polyfit.) The characteristic polynomial, \(p_a(t)\), of an n-by-n matrix A is given by \(p_a(t) = \mathrm{det}(t\, \mathbf{I} - \mathbf{A})\), where I is the n-by-n identity matrix. [2] References 1
M. Sullivan and M. Sullivan, III, “Algebra and Trigonometry, Enhanced With Graphing Utilities,” Prentice-Hall, pg. 318, 1996. 2
G. Strang, “Linear Algebra and Its Applications, 2nd Edition,” Academic Press, pg. 182, 1980. Examples Given a sequence of a polynomial’s zeros: >>> np.poly((0, 0, 0)) # Multiple root example
array([1., 0., 0., 0.])
The line above represents z**3 + 0*z**2 + 0*z + 0. >>> np.poly((-1./2, 0, 1./2))
array([ 1. , 0. , -0.25, 0. ])
The line above represents z**3 - z/4 >>> np.poly((np.random.random(1)[0], 0, np.random.random(1)[0]))
array([ 1. , -0.77086955, 0.08618131, 0. ]) # random
Given a square array object: >>> P = np.array([[0, 1./3], [-1./2, 0]])
>>> np.poly(P)
array([1. , 0. , 0.16666667])
Note how in all cases the leading coefficient is always 1. | |
doc_4412 |
Set the color for masked values. | |
doc_4413 | False
The false value of the bool type. Assignments to False are illegal and raise a SyntaxError.
True
The true value of the bool type. Assignments to True are illegal and raise a SyntaxError.
None
The sole value of the type NoneType. None is frequently used to represent the absence of a value, as when default arguments are not passed to a function. Assignments to None are illegal and raise a SyntaxError.
NotImplemented
Special value which should be returned by the binary special methods (e.g. __eq__(), __lt__(), __add__(), __rsub__(), etc.) to indicate that the operation is not implemented with respect to the other type; may be returned by the in-place binary special methods (e.g. __imul__(), __iand__(), etc.) for the same purpose. It should not be evaluated in a boolean context. Note When a binary (or in-place) method returns NotImplemented the interpreter will try the reflected operation on the other type (or some other fallback, depending on the operator). If all attempts return NotImplemented, the interpreter will raise an appropriate exception. Incorrectly returning NotImplemented will result in a misleading error message or the NotImplemented value being returned to Python code. See Implementing the arithmetic operations for examples. Note NotImplementedError and NotImplemented are not interchangeable, even though they have similar names and purposes. See NotImplementedError for details on when to use it. Changed in version 3.9: Evaluating NotImplemented in a boolean context is deprecated. While it currently evaluates as true, it will emit a DeprecationWarning. It will raise a TypeError in a future version of Python.
Ellipsis
The same as the ellipsis literal “...”. Special value used mostly in conjunction with extended slicing syntax for user-defined container data types.
__debug__
This constant is true if Python was not started with an -O option. See also the assert statement.
Note The names None, False, True and __debug__ cannot be reassigned (assignments to them, even as an attribute name, raise SyntaxError), so they can be considered “true” constants.
Constants added by the site module
The site module (which is imported automatically during startup, except if the -S command-line option is given) adds several constants to the built-in namespace. They are useful for the interactive interpreter shell and should not be used in programs.
quit(code=None)
exit(code=None)
Objects that when printed, print a message like “Use quit() or Ctrl-D (i.e. EOF) to exit”, and when called, raise SystemExit with the specified exit code.
copyright
credits
Objects that when printed or called, print the text of copyright or credits, respectively.
license
Object that when printed, prints the message “Type license() to see the full license text”, and when called, displays the full license text in a pager-like fashion (one screen at a time). | |
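The NotImplemented protocol described above can be sketched with a small made-up class (the `Meters` name is purely illustrative):

```python
class Meters:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        # Returning NotImplemented (not raising) lets the interpreter
        # try the reflected operation on the other operand.
        if not isinstance(other, Meters):
            return NotImplemented
        return self.value == other.value

assert Meters(3) == Meters(3)
# Neither operand handles the mixed comparison, so Python falls
# back to identity comparison and the result is simply False.
assert (Meters(3) == "3 m") is False
```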
doc_4414 | See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedDepthwiseConv2D
tf.raw_ops.QuantizedDepthwiseConv2D(
input, filter, min_input, max_input, min_filter, max_filter, strides, padding,
out_type=tf.dtypes.qint32, dilations=[1, 1, 1, 1], name=None
)
Args
input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The original input tensor.
filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. The original filter tensor.
min_input A Tensor of type float32. The float value that the minimum quantized input value represents.
max_input A Tensor of type float32. The float value that the maximum quantized input value represents.
min_filter A Tensor of type float32. The float value that the minimum quantized filter value represents.
max_filter A Tensor of type float32. The float value that the maximum quantized filter value represents.
strides A list of ints. List of stride values.
padding A string from: "SAME", "VALID".
out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32. The type of the output.
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. List of dilation values.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type.
min_output A Tensor of type float32.
max_output A Tensor of type float32. | |
doc_4415 | Return True if the terminal has insert- and delete-character capabilities. This function is included for historical reasons only, as all modern software terminal emulators have such capabilities. | |
doc_4416 | tf.compat.v1.arg_min(
input, dimension, output_type=tf.dtypes.int64, name=None
)
Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input = a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
dimension A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A Tensor of type output_type. | |
doc_4417 | See Migration guide for more details. tf.compat.v1.angle, tf.compat.v1.math.angle
tf.math.angle(
input, name=None
)
Given a tensor input, this operation returns a tensor of type float that is the argument of each element in input considered as a complex number. The elements in input are considered to be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part. If input is real then b is zero by definition. The argument returned by this function is of the form \(atan2(b, a)\). If input is real, a tensor of all zeros is returned. For example: input = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j], dtype=tf.complex64)
tf.math.angle(input).numpy()
# ==> array([2.0131705, 1.056345 ], dtype=float32)
Args
input A Tensor. Must be one of the following types: float, double, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor of type float32 or float64. | |
doc_4418 |
Reload the style library. | |
doc_4419 |
Draw a glyph described by info to the reference point (ox, oy). | |
doc_4420 | (1, 2)
$ python -m pickletools x.pickle
0: \x80 PROTO 3
2: K BININT1 1
4: K BININT1 2
6: \x86 TUPLE2
7: q BINPUT 0
9: . STOP
highest protocol among opcodes = 2
Command line options
-a, --annotate
Annotate each line with a short opcode description.
-o, --output=<file>
Name of a file where the output should be written.
-l, --indentlevel=<num>
The number of blanks by which to indent a new MARK level.
-m, --memo
When multiple objects are disassembled, preserve memo between disassemblies.
-p, --preamble=<preamble>
When more than one pickle file are specified, print given preamble before each disassembly.
Programmatic Interface
pickletools.dis(pickle, out=None, memo=None, indentlevel=4, annotate=0)
Outputs a symbolic disassembly of the pickle to the file-like object out, defaulting to sys.stdout. pickle can be a string or a file-like object. memo can be a Python dictionary that will be used as the pickle’s memo; it can be used to perform disassemblies across multiple pickles created by the same pickler. Successive levels, indicated by MARK opcodes in the stream, are indented by indentlevel spaces. If a nonzero value is given to annotate, each opcode in the output is annotated with a short description. The value of annotate is used as a hint for the column where annotation should start. New in version 3.2: The annotate argument.
pickletools.genops(pickle)
Provides an iterator over all of the opcodes in a pickle, returning a sequence of (opcode, arg, pos) triples. opcode is an instance of an OpcodeInfo class; arg is the decoded value, as a Python object, of the opcode’s argument; pos is the position at which this opcode is located. pickle can be a string or a file-like object.
pickletools.optimize(picklestring)
Returns a new equivalent pickle string after eliminating unused PUT opcodes. The optimized pickle is shorter, takes less transmission time, requires less storage space, and unpickles more efficiently. | |
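A short sketch of the programmatic interface described above (the pickled payload is made up for illustration):

```python
import pickle
import pickletools

data = pickle.dumps({"a": 1, "b": [1, 2]})

# optimize() drops unused PUT opcodes; the result unpickles identically
slim = pickletools.optimize(data)
assert pickle.loads(slim) == {"a": 1, "b": [1, 2]}
assert len(slim) <= len(data)

# genops() yields (opcode, arg, pos) triples; every stream ends in STOP
opcodes = list(pickletools.genops(slim))
assert opcodes[-1][0].name == "STOP"
```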
doc_4421 | Instances of this class represent loaded shared libraries. Functions in these libraries use the standard C calling convention, and are assumed to return int. On Windows creating a CDLL instance may fail even if the DLL name exists. When a dependent DLL of the loaded DLL is not found, a OSError error is raised with the message “[WinError 126] The specified module could not be found”. This error message does not contain the name of the missing DLL because the Windows API does not return this information making this error hard to diagnose. To resolve this error and determine which DLL is not found, you need to find the list of dependent DLLs and determine which one is not found using Windows debugging and tracing tools. | |
doc_4422 | This class works like AdminConfig, except it doesn’t call autodiscover().
default_site
A dotted import path to the default admin site’s class or to a callable that returns a site instance. Defaults to 'django.contrib.admin.sites.AdminSite'. See Overriding the default admin site for usage. | |
doc_4423 | See Migration guide for more details. tf.compat.v1.keras.layers.Average
tf.keras.layers.Average(
**kwargs
)
It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape). Example:
x1 = np.ones((2, 2))
x2 = np.zeros((2, 2))
y = tf.keras.layers.Average()([x1, x2])
y.numpy().tolist()
[[0.5, 0.5], [0.5, 0.5]]
Usage in a functional model:
input1 = tf.keras.layers.Input(shape=(16,))
x1 = tf.keras.layers.Dense(8, activation='relu')(input1)
input2 = tf.keras.layers.Input(shape=(32,))
x2 = tf.keras.layers.Dense(8, activation='relu')(input2)
avg = tf.keras.layers.Average()([x1, x2])
out = tf.keras.layers.Dense(4)(avg)
model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
Raises
ValueError If there is a shape mismatch between the inputs and the shapes cannot be broadcasted to match.
Arguments
**kwargs standard layer keyword arguments. | |
doc_4424 | Receive data from the socket. The return value is a bytes object representing the data received. The maximum amount of data to be received at once is specified by bufsize. See the Unix manual page recv(2) for the meaning of the optional argument flags; it defaults to zero. Note For best match with hardware and network realities, the value of bufsize should be a relatively small power of 2, for example, 4096. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an InterruptedError exception (see PEP 475 for the rationale). | |
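A self-contained sketch of recv() (using socket.socketpair(), an assumption made here so no server is needed):

```python
import socket

# socketpair() returns two already-connected sockets, which keeps the
# example self-contained.
a, b = socket.socketpair()
a.sendall(b"hello")
chunk = b.recv(4096)   # bufsize: a small power of 2, as advised above
assert chunk == b"hello"
a.close()
b.close()
```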
doc_4425 |
Alias for set_edgecolor. | |
doc_4426 | This exception is raised when a weak reference proxy, created by the weakref.proxy() function, is used to access an attribute of the referent after it has been garbage collected. For more information on weak references, see the weakref module. | |
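A minimal sketch of the ReferenceError described above (the `Resource` class is hypothetical; CPython's reference counting makes the collection immediate here):

```python
import weakref

class Resource:
    pass

obj = Resource()
obj.name = "db-handle"
proxy = weakref.proxy(obj)
assert proxy.name == "db-handle"   # the proxy forwards attribute access

del obj   # in CPython the referent is collected immediately
try:
    proxy.name
    raised = False
except ReferenceError:
    raised = True
assert raised
```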
doc_4427 | Sent with a preflight request to indicate which headers will be sent with the cross origin request. Set access_control_allow_headers on the response to indicate which headers are allowed. | |
doc_4428 | See Migration guide for more details. tf.compat.v1.nn.compute_average_loss
tf.nn.compute_average_loss(
per_example_loss, sample_weight=None, global_batch_size=None
)
Usage with distribution strategy and custom training loop: with strategy.scope():
  def compute_loss(labels, predictions, sample_weight=None):
    # If you are using a `Loss` class instead, set reduction to `NONE` so that
    # we can do the reduction afterwards and divide by global batch size.
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    return tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)
Args
per_example_loss Per-example loss.
sample_weight Optional weighting for each example.
global_batch_size Optional global batch size value. Defaults to (size of first dimension of losses) * (number of replicas).
Returns Scalar loss value. | |
doc_4429 | Convert the color from RGB coordinates to HSV coordinates. | |
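The snippet above appears to describe colorsys.rgb_to_hsv; a short sketch under that assumption:

```python
import colorsys

# Pure red maps to hue 0, full saturation, full value
assert colorsys.rgb_to_hsv(1.0, 0.0, 0.0) == (0.0, 1.0, 1.0)

# Converting to HSV and back recovers the original RGB triple
h, s, v = colorsys.rgb_to_hsv(0.2, 0.4, 0.4)
r, g, b = colorsys.hsv_to_rgb(h, s, v)
assert (round(r, 9), round(g, 9), round(b, 9)) == (0.2, 0.4, 0.4)
```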
doc_4430 |
Compute a mask from polygon. Parameters
image_shape : tuple of size 2
The shape of the mask.
polygon : array_like
The polygon coordinates of shape (N, 2) where N is the number of points. Returns
mask : 2-D ndarray of type bool
The mask that corresponds to the input polygon. Notes This function does not do any border checking, so all polygon vertices must lie within the given shape. Examples >>> image_shape = (128, 128)
>>> polygon = np.array([[60, 100], [100, 40], [40, 40]])
>>> mask = polygon2mask(image_shape, polygon)
>>> mask.shape
(128, 128) | |
doc_4431 | See Migration guide for more details. tf.compat.v1.raw_ops.CrossReplicaSum
tf.raw_ops.CrossReplicaSum(
input, group_assignment, name=None
)
Each instance supplies its own input. For example, suppose there are 8 TPU instances: [A, B, C, D, E, F, G, H]. Passing group_assignment=[[0,2,4,6],[1,3,5,7]] sets A, C, E, G as group 0, and B, D, F, H as group 1. Thus we get the outputs: [A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H].
Args
input A Tensor. Must be one of the following types: half, bfloat16, float32, int32, uint32. The local input to the sum.
group_assignment A Tensor of type int32. An int32 tensor with shape [num_groups, num_replicas_per_group]. group_assignment[i] represents the replica ids in the ith subgroup.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
doc_4432 |
Return the log-likelihood of each sample. See “Pattern Recognition and Machine Learning” by C. Bishop, 12.2.1 p. 574 or http://www.miketipping.com/papers/met-mppca.pdf Parameters
X : array-like of shape (n_samples, n_features)
The data. Returns
ll : ndarray of shape (n_samples,)
Log-likelihood of each sample under the current model. | |
doc_4433 |
Get Floating division of dataframe and other, element-wise (binary operator truediv). Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv. Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **. Parameters
other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing. Returns
DataFrame
Result of the arithmetic operation. See also DataFrame.add
Add DataFrames. DataFrame.sub
Subtract DataFrames. DataFrame.mul
Multiply DataFrames. DataFrame.div
Divide DataFrames (float division). DataFrame.truediv
Divide DataFrames (float division). DataFrame.floordiv
Divide DataFrames (integer division). DataFrame.mod
Calculate modulo (remainder after division). DataFrame.pow
Calculate exponential power. Notes Mismatched indices will be unioned together. Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with operator version which return the same results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0 | |
doc_4434 | Register a Blueprint on this blueprint. Keyword arguments passed to this method will override the defaults set on the blueprint. New in version 2.0. Parameters
blueprint (flask.blueprints.Blueprint) –
options (Any) – Return type
None | |
doc_4435 | The FileEntry widget can be used to input a filename. The user can type in the filename manually. Alternatively, the user can press the button widget that sits next to the entry, which will bring up a file selection dialog. | |
doc_4436 | The string '0123456789'. | |
doc_4437 | Send data to the remote end-point of the socket. | |
doc_4438 | Returns a named tuple with paths to OpenSSL’s default cafile and capath. The paths are the same as used by SSLContext.set_default_verify_paths(). The return value is a named tuple DefaultVerifyPaths:
cafile - resolved path to cafile or None if the file doesn’t exist,
capath - resolved path to capath or None if the directory doesn’t exist,
openssl_cafile_env - OpenSSL’s environment key that points to a cafile,
openssl_cafile - hard coded path to a cafile,
openssl_capath_env - OpenSSL’s environment key that points to a capath,
openssl_capath - hard coded path to a capath directory Availability: LibreSSL ignores the environment vars openssl_cafile_env and openssl_capath_env. New in version 3.4. | |
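A minimal sketch of the ssl.get_default_verify_paths() call described above (field values vary by platform, so only types are checked):

```python
import ssl

paths = ssl.get_default_verify_paths()
# cafile/capath are resolved paths or None when absent; the
# openssl_* fields report OpenSSL's compiled-in defaults.
assert paths.cafile is None or isinstance(paths.cafile, str)
assert paths.capath is None or isinstance(paths.capath, str)
assert isinstance(paths.openssl_cafile, str)
assert isinstance(paths.openssl_cafile_env, str)
```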
doc_4439 | Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end has closed. If maxlength is specified and the message is longer than maxlength then OSError is raised and the connection will no longer be readable. Changed in version 3.3: This function used to raise IOError, which is now an alias of OSError. | |
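A self-contained sketch of recv_bytes() and the documented EOFError (assuming this snippet describes multiprocessing.connection.Connection; both ends live in one process here):

```python
from multiprocessing import Pipe

a, b = Pipe()
a.send_bytes(b"payload")
msg = b.recv_bytes()      # blocks until the complete message arrives
assert msg == b"payload"

# Once the other end closes and nothing is left, EOFError is raised
a.close()
try:
    b.recv_bytes()
    raised = False
except EOFError:
    raised = True
assert raised
b.close()
```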
doc_4440 |
Return the AxisInfo for unit. unit is a tzinfo instance or None. The axis argument is required but not used. | |
doc_4441 | Accept: application/json
Might receive an error response indicating that the DELETE method is not allowed on that resource: HTTP/1.1 405 Method Not Allowed
Content-Type: application/json
Content-Length: 42
{"detail": "Method 'DELETE' not allowed."}
Validation errors are handled slightly differently, and will include the field names as the keys in the response. If the validation error was not specific to a particular field then it will use the "non_field_errors" key, or whatever string value has been set for the NON_FIELD_ERRORS_KEY setting. An example validation error might look like this: HTTP/1.1 400 Bad Request
Content-Type: application/json
Content-Length: 94
{"amount": ["A valid integer is required."], "description": ["This field may not be blank."]}
Custom exception handling You can implement custom exception handling by creating a handler function that converts exceptions raised in your API views into response objects. This allows you to control the style of error responses used by your API. The function must take a pair of arguments, the first is the exception to be handled, and the second is a dictionary containing any extra context such as the view currently being handled. The exception handler function should either return a Response object, or return None if the exception cannot be handled. If the handler returns None then the exception will be re-raised and Django will return a standard HTTP 500 'server error' response. For example, you might want to ensure that all error responses include the HTTP status code in the body of the response, like so: HTTP/1.1 405 Method Not Allowed
Content-Type: application/json
Content-Length: 62
{"status_code": 405, "detail": "Method 'DELETE' not allowed."}
In order to alter the style of the response, you could write the following custom exception handler: from rest_framework.views import exception_handler
def custom_exception_handler(exc, context):
    # Call REST framework's default exception handler first,
    # to get the standard error response.
    response = exception_handler(exc, context)

    # Now add the HTTP status code to the response.
    if response is not None:
        response.data['status_code'] = response.status_code

    return response
The context argument is not used by the default handler, but can be useful if the exception handler needs further information such as the view currently being handled, which can be accessed as context['view']. The exception handler must also be configured in your settings, using the EXCEPTION_HANDLER setting key. For example: REST_FRAMEWORK = {
    'EXCEPTION_HANDLER': 'my_project.my_app.utils.custom_exception_handler'
}
If not specified, the 'EXCEPTION_HANDLER' setting defaults to the standard exception handler provided by REST framework: REST_FRAMEWORK = {
    'EXCEPTION_HANDLER': 'rest_framework.views.exception_handler'
}
Note that the exception handler will only be called for responses generated by raised exceptions. It will not be used for any responses returned directly by the view, such as the HTTP_400_BAD_REQUEST responses that are returned by the generic views when serializer validation fails. API Reference APIException Signature: APIException() The base class for all exceptions raised inside an APIView class or @api_view. To provide a custom exception, subclass APIException and set the .status_code, .default_detail, and default_code attributes on the class. For example, if your API relies on a third party service that may sometimes be unreachable, you might want to implement an exception for the "503 Service Unavailable" HTTP response code. You could do this like so: from rest_framework.exceptions import APIException
class ServiceUnavailable(APIException):
status_code = 503
default_detail = 'Service temporarily unavailable, try again later.'
default_code = 'service_unavailable'
Inspecting API exceptions There are a number of different properties available for inspecting the status of an API exception. You can use these to build custom exception handling for your project. The available attributes and methods are:
.detail - Return the textual description of the error.
.get_codes() - Return the code identifier of the error.
.get_full_details() - Return both the textual description and the code identifier. In most cases the error detail will be a simple item: >>> print(exc.detail)
You do not have permission to perform this action.
>>> print(exc.get_codes())
permission_denied
>>> print(exc.get_full_details())
{'message':'You do not have permission to perform this action.','code':'permission_denied'}
In the case of validation errors the error detail will be either a list or dictionary of items: >>> print(exc.detail)
{"name":"This field is required.","age":"A valid integer is required."}
>>> print(exc.get_codes())
{"name":"required","age":"invalid"}
>>> print(exc.get_full_details())
{"name":{"message":"This field is required.","code":"required"},"age":{"message":"A valid integer is required.","code":"invalid"}}
ParseError Signature: ParseError(detail=None, code=None) Raised if the request contains malformed data when accessing request.data. By default this exception results in a response with the HTTP status code "400 Bad Request". AuthenticationFailed Signature: AuthenticationFailed(detail=None, code=None) Raised when an incoming request includes incorrect authentication. By default this exception results in a response with the HTTP status code "401 Unauthenticated", but it may also result in a "403 Forbidden" response, depending on the authentication scheme in use. See the authentication documentation for more details. NotAuthenticated Signature: NotAuthenticated(detail=None, code=None) Raised when an unauthenticated request fails the permission checks. By default this exception results in a response with the HTTP status code "401 Unauthenticated", but it may also result in a "403 Forbidden" response, depending on the authentication scheme in use. See the authentication documentation for more details. PermissionDenied Signature: PermissionDenied(detail=None, code=None) Raised when an authenticated request fails the permission checks. By default this exception results in a response with the HTTP status code "403 Forbidden". NotFound Signature: NotFound(detail=None, code=None) Raised when a resource does not exist at the given URL. This exception is equivalent to the standard Http404 Django exception. By default this exception results in a response with the HTTP status code "404 Not Found". MethodNotAllowed Signature: MethodNotAllowed(method, detail=None, code=None) Raised when an incoming request occurs that does not map to a handler method on the view. By default this exception results in a response with the HTTP status code "405 Method Not Allowed". NotAcceptable Signature: NotAcceptable(detail=None, code=None) Raised when an incoming request occurs with an Accept header that cannot be satisfied by any of the available renderers. 
By default this exception results in a response with the HTTP status code "406 Not Acceptable". UnsupportedMediaType Signature: UnsupportedMediaType(media_type, detail=None, code=None) Raised if there are no parsers that can handle the content type of the request data when accessing request.data. By default this exception results in a response with the HTTP status code "415 Unsupported Media Type". Throttled Signature: Throttled(wait=None, detail=None, code=None) Raised when an incoming request fails the throttling checks. By default this exception results in a response with the HTTP status code "429 Too Many Requests". ValidationError Signature: ValidationError(detail, code=None) The ValidationError exception is slightly different from the other APIException classes: The detail argument is mandatory, not optional. The detail argument may be a list or dictionary of error details, and may also be a nested data structure. By using a dictionary, you can specify field-level errors while performing object-level validation in the validate() method of a serializer. For example. raise serializers.ValidationError({'name': 'Please enter a valid name.'})
By convention you should import the serializers module and use a fully qualified ValidationError style, in order to differentiate it from Django's built-in validation error. For example. raise serializers.ValidationError('This field must be an integer value.')
The ValidationError class should be used for serializer and field validation, and by validator classes. It is also raised when calling serializer.is_valid with the raise_exception keyword argument: serializer.is_valid(raise_exception=True)
The generic views use the raise_exception=True flag, which means that you can override the style of validation error responses globally in your API. To do so, use a custom exception handler, as described above. By default this exception results in a response with the HTTP status code "400 Bad Request". Generic Error Views Django REST Framework provides two error views suitable for providing generic JSON 500 Server Error and 400 Bad Request responses. (Django's default error views provide HTML responses, which may not be appropriate for an API-only application.) Use these as per Django's Customizing error views documentation. rest_framework.exceptions.server_error Returns a response with status code 500 and application/json content type. Set as handler500: handler500 = 'rest_framework.exceptions.server_error'
rest_framework.exceptions.bad_request Returns a response with status code 400 and application/json content type. Set as handler400: handler400 = 'rest_framework.exceptions.bad_request'
exceptions.py | |
doc_4442 | See Migration guide for more details. tf.compat.v1.saved_model.loader.load
tf.compat.v1.saved_model.load(
sess, tags, export_dir, import_scope=None, **saver_kwargs
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
Args
sess The TensorFlow session to restore the variables.
tags Set of string tags to identify the required MetaGraphDef. These should correspond to the tags used when saving the variables using the SavedModel save() API.
export_dir Directory in which the SavedModel protocol buffer and variables to be loaded are located.
import_scope Optional string -- if specified, prepend this string followed by '/' to all loaded tensor names. This scope is applied to tensor instances loaded into the passed session, but it is not written through to the static MetaGraphDef protocol buffer that is returned.
**saver_kwargs Optional keyword arguments passed through to Saver.
Returns The MetaGraphDef protocol buffer loaded in the provided session. This can be used to further extract signature-defs, collection-defs, etc.
Raises
RuntimeError MetaGraphDef associated with the tags cannot be found. | |
doc_4443 | Get statistics as a sorted list of Statistic instances grouped by key_type:
key_type description
'filename' filename
'lineno' filename and line number
'traceback' traceback If cumulative is True, cumulate size and count of memory blocks of all frames of the traceback of a trace, not only the most recent frame. The cumulative mode can only be used with key_type equals to 'filename' and 'lineno'. The result is sorted from the biggest to the smallest by: Statistic.size, Statistic.count and then by Statistic.traceback. | |
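For example, a minimal stdlib-only sketch of taking a snapshot and grouping its traces by 'lineno' (the allocation below is purely illustrative):

```python
import tracemalloc

tracemalloc.start()
data = [bytes(1000) for _ in range(200)]   # allocate some traced memory
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# Statistic instances, sorted biggest-first by size, then count.
stats = snapshot.statistics('lineno')
for stat in stats[:3]:
    print(stat)
```

The list comprehension above dominates the snapshot, so its filename/lineno appears at the top of the sorted result.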
doc_4444 |
Close all open figures and set the Matplotlib backend. The argument is case-insensitive. Switching to an interactive backend is possible only if no event loop for another interactive backend has started. Switching to and from non-interactive backends is always possible. Parameters
newbackendstr
The name of the backend to use. | |
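For example, switching to the non-interactive Agg backend (a minimal sketch, assuming only that Matplotlib is importable) is always possible:

```python
import matplotlib

# Non-interactive backends can be switched to and from at any time.
matplotlib.use('agg')
print(matplotlib.get_backend())
```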
doc_4445 |
Check that left and right Series are equal. Parameters
left:Series
right:Series
check_dtype:bool, default True
Whether to check the Series dtype is identical.
check_index_type:bool or {‘equiv’}, default ‘equiv’
Whether to check the Index class, dtype and inferred_type are identical.
check_series_type:bool, default True
Whether to check the Series class is identical.
check_less_precise:bool or int, default False
Specify comparison precision. Only used when check_exact is False. 5 digits (False) or 3 digits (True) after decimal points are compared. If int, then specify the digits to compare. When comparing two numbers, if the first number has magnitude less than 1e-5, we compare the two numbers directly and check whether they are equivalent within the specified precision. Otherwise, we compare the ratio of the second number to the first number and check whether it is equivalent to 1 within the specified precision. Deprecated since version 1.1.0: Use rtol and atol instead to define relative/absolute tolerance, respectively. Similar to math.isclose().
check_names:bool, default True
Whether to check the Series and Index names attribute.
check_exact:bool, default False
Whether to compare number exactly.
check_datetimelike_compat:bool, default False
Compare datetime-like which is comparable ignoring dtype.
check_categorical:bool, default True
Whether to compare internal Categorical exactly.
check_category_order:bool, default True
Whether to compare category order of internal Categoricals. New in version 1.0.2.
check_freq:bool, default True
Whether to check the freq attribute on a DatetimeIndex or TimedeltaIndex. New in version 1.1.0.
check_flags:bool, default True
Whether to check the flags attribute. New in version 1.2.0.
rtol:float, default 1e-5
Relative tolerance. Only used when check_exact is False. New in version 1.1.0.
atol:float, default 1e-8
Absolute tolerance. Only used when check_exact is False. New in version 1.1.0.
obj:str, default ‘Series’
Specify object name being compared, internally used to show appropriate assertion message.
check_index:bool, default True
Whether to check index equivalence. If False, then compare only values. New in version 1.3.0. Examples
>>> from pandas import testing as tm
>>> a = pd.Series([1, 2, 3, 4])
>>> b = pd.Series([1, 2, 3, 4])
>>> tm.assert_series_equal(a, b) | |
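A short sketch of how the name check and tolerances interact (the series values and names here are ad hoc): the strict default comparison fails when names differ and the values drift beyond rtol, while relaxing both checks lets it pass.

```python
import pandas as pd
from pandas import testing as tm

a = pd.Series([1.0, 2.0], name='x')
b = pd.Series([1.0001, 2.0], name='y')

# Default check: names differ, and the 1e-4 relative error
# exceeds the default rtol of 1e-5, so an AssertionError is raised.
try:
    tm.assert_series_equal(a, b)
    strict_ok = True
except AssertionError:
    strict_ok = False

# Ignore the name attribute and widen the relative tolerance.
tm.assert_series_equal(a, b, check_names=False, rtol=1e-3)
print(strict_ok)
```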
doc_4446 | Input is used to get midi input from midi devices. Input(device_id) -> None Input(device_id, buffer_size) -> None
Parameters:
device_id (int) -- midi device id
buffer_size (int) -- (optional) the number of input events to be buffered close()
closes a midi stream, flushing any pending buffers. close() -> None PortMidi attempts to close open streams when the application exits. Note This is particularly difficult under Windows.
poll()
returns True if there's data, or False if not. poll() -> bool Used to indicate if any data exists.
Returns:
True if there is data, False otherwise
Return type:
bool
Raises:
MidiException -- on error
read()
reads num_events midi events from the buffer. read(num_events) -> midi_event_list Reads from the input buffer and gives back midi events.
Parameters:
num_events (int) -- number of input events to read
Returns:
the format for midi_event_list is [[[status, data1, data2, data3], timestamp], ...]
Return type:
list | |
doc_4447 |
Register a new colormap. The colormap name can then be used as a string argument to any cmap parameter in Matplotlib. It is also available in pyplot.get_cmap. The colormap registry stores a copy of the given colormap, so that future changes to the original colormap instance do not affect the registered colormap. Think of this as the registry taking a snapshot of the colormap at registration. Parameters
cmapmatplotlib.colors.Colormap
The colormap to register.
namestr, optional
The name for the colormap. If not given, cmap.name is used. force: bool, default: False
If False, a ValueError is raised if trying to overwrite an already registered name. True supports overwriting registered colormaps other than the builtin colormaps. | |
doc_4448 | class sklearn.kernel_approximation.RBFSampler(*, gamma=1.0, n_components=100, random_state=None) [source]
Approximates feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform. It implements a variant of Random Kitchen Sinks.[1] Read more in the User Guide. Parameters
gammafloat, default=1.0
Parameter of RBF kernel: exp(-gamma * x^2)
n_componentsint, default=100
Number of Monte Carlo samples per original feature. Equals the dimensionality of the computed feature space.
random_stateint, RandomState instance or None, default=None
Pseudo-random number generator to control the generation of the random weights and random offset when fitting the training data. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
random_offset_ndarray of shape (n_components,), dtype=float64
Random offset used to compute the projection in the n_components dimensions of the feature space.
random_weights_ndarray of shape (n_features, n_components), dtype=float64
Random projection directions drawn from the Fourier transform of the RBF kernel. Notes See “Random Features for Large-Scale Kernel Machines” by A. Rahimi and Benjamin Recht. [1] “Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning” by A. Rahimi and Benjamin Recht. (https://people.eecs.berkeley.edu/~brecht/papers/08.rah.rec.nips.pdf) Examples >>> from sklearn.kernel_approximation import RBFSampler
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> rbf_feature = RBFSampler(gamma=1, random_state=1)
>>> X_features = rbf_feature.fit_transform(X)
>>> clf = SGDClassifier(max_iter=5, tol=1e-3)
>>> clf.fit(X_features, y)
SGDClassifier(max_iter=5)
>>> clf.score(X_features, y)
1.0
Methods
fit(X[, y]) Fit the model with X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply the approximate feature map to X.
fit(X, y=None) [source]
Fit the model with X. Samples random projection according to n_features. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features. Returns
selfobject
Returns the transformer.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply the approximate feature map to X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
New data, where n_samples is the number of samples and n_features is the number of features. Returns
X_newarray-like, shape (n_samples, n_components)
Examples using sklearn.kernel_approximation.RBFSampler
Explicit feature map approximation for RBF kernels | |
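As an illustration of the underlying idea (not the scikit-learn implementation itself), a random Fourier feature map for the RBF kernel can be sketched with NumPy alone; the variable names below are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, D = 1.0, 2000                       # kernel width, Monte Carlo samples
X = rng.normal(size=(5, 3))

# Random Fourier features: w ~ N(0, 2*gamma*I), b ~ U(0, 2*pi),
# z(x) = sqrt(2/D) * cos(x @ W + b).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(3, D))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Z @ Z.T approximates the exact RBF Gram matrix exp(-gamma*||x - y||^2).
K_approx = Z @ Z.T
K_exact = np.exp(-gamma * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())
```

The approximation error shrinks roughly as 1/sqrt(D), which is why n_components controls the quality of the feature map.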
doc_4449 | Return struct.calcsize() for nPn{fmt}0n or, if gettotalrefcount exists, 2PnPn{fmt}0P. | |
doc_4450 | See Migration guide for more details. tf.compat.v1.config.experimental.ClusterDeviceFilters
tf.config.experimental.ClusterDeviceFilters()
Note: this is an experimental API and subject to changes.
Set device filters for selective jobs and tasks. For each remote worker, the device filters are a list of strings. When any filters are present, the remote worker will ignore all devices which do not match any of its filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc. Note that a device is always visible to the worker it is located on. For example, to set the device filters for a parameter server cluster: cdf = tf.config.experimental.ClusterDeviceFilters()
for i in range(num_workers):
cdf.set_device_filters('worker', i, ['/job:ps'])
for i in range(num_ps):
cdf.set_device_filters('ps', i, ['/job:worker'])
tf.config.experimental_connect_to_cluster(cluster_def,
cluster_device_filters=cdf)
The device filters can be partially specified. For remote tasks that do not have device filters specified, all devices will be visible to them. Methods set_device_filters View source
set_device_filters(
job_name, task_index, device_filters
)
Set the device filters for given job name and task id. | |
doc_4451 | See Migration guide for more details. tf.compat.v1.estimator.experimental.build_raw_supervised_input_receiver_fn
tf.estimator.experimental.build_raw_supervised_input_receiver_fn(
features, labels, default_batch_size=None
)
This function wraps tensor placeholders in a supervised_receiver_fn with the expectation that the features and labels appear precisely as the model_fn expects them. Features and labels can therefore be dicts of tensors, or raw tensors.
Args
features a dict of string to Tensor or Tensor.
labels a dict of string to Tensor or Tensor.
default_batch_size the number of query examples expected per batch. Leave unset for variable batch size (recommended).
Returns A supervised_input_receiver_fn.
Raises
ValueError if features and labels have overlapping keys. | |
doc_4452 |
Set the left coordinate of the rectangle. | |
doc_4453 |
Bases: tornado.web.RequestHandler get()[source] | |
doc_4454 | Display the syntax error that just occurred. This does not display a stack trace because there isn’t one for syntax errors. If filename is given, it is stuffed into the exception instead of the default filename provided by Python’s parser, because it always uses '<string>' when reading from a string. The output is written by the write() method. | |
doc_4455 | New in Django 3.2. A translatable string used as a substitute for elided page numbers in the page range returned by get_elided_page_range(). Default is '…'. | |
doc_4456 | tf.compat.v1.tpu.replicate(
computation, inputs=None, infeed_queue=None, device_assignment=None, name=None,
maximum_shapes=None, padding_spec=None, xla_options=None
)
Example for the basic usage that inputs has static shape:
def computation(x):
x = x + 1
return tf.math.reduce_mean(x)
x = tf.convert_to_tensor([1., 2., 3.])
y = tf.convert_to_tensor([4., 5., 6.])
tf.compat.v1.tpu.replicate(computation, inputs=[[x], [y]])
If the inputs has dynamic shapes and you would like to automatically bucketize the inputs to avoid XLA recompilation. See the advanced example below:
def computation(x):
x = x + 1
return tf.math.reduce_mean(x)
# Assume input tensors in two replicas `x` and `y` both have dynamic shape
# ([None, 2]).
tf.compat.v1.tpu.replicate(
computation,
inputs=[x, y],
maximum_shapes=[tf.TensorShape([None, None])],
padding_spec=tf.compat.v1.tpu.PaddingSpec.POWER_OF_TWO)
Args
computation A Python function that builds the computation to replicate.
inputs A list of lists of input tensors or None (equivalent to [[]]), indexed by [replica_num][input_num]. All replicas must have the same number of inputs. Each input can be a nested structure containing values that are convertible to tensors. Note that passing an N-dimensional list of compatible values will result in an N-dimensional list of scalar tensors rather than a single rank-N tensor. If you need different behavior, convert part of inputs to tensors with tf.convert_to_tensor.
infeed_queue If not None, the InfeedQueue from which to append a tuple of arguments as inputs to computation.
device_assignment If not None, a DeviceAssignment describing the mapping between logical cores in the computation with physical cores in the TPU topology. Uses a default device assignment if None. The DeviceAssignment may be omitted if each replica of the computation uses only one core, and there is either only one replica, or the number of replicas is equal to the number of cores in the TPU system.
name (Deprecated) Does nothing.
maximum_shapes A nested structure of tf.TensorShape representing the shape to which the respective component of each input element in each replica should be padded. Any unknown dimensions (e.g. tf.compat.v1.Dimension(None) in a tf.TensorShape or -1 in a tensor-like object) will be padded to the maximum size of that dimension over all replicas. The structure of maximum_shapes needs to be the same as inputs[0].
padding_spec An enum specified by tpu.PaddingSpec. This describes the padding policy when the inputs to tpu.replicate is dynamic. One usage is to enable automatic bucketizing on the inputs by setting the value to tpu.PaddingSpec.POWER_OF_TWO, which can help to reduce the recompilation in the XLA side.
xla_options An instance of tpu.XLAOptions which indicates the options passed to XLA compiler. Use None for default options.
Returns A list of outputs, indexed by [replica_num] each output can be a nested structure same as what computation() returns with a few exceptions. Exceptions include: 1) None output: a NoOp would be returned which control-depends on computation. 2) Single value output: A tuple containing the value would be returned. 3) Operation-only outputs: a NoOp would be returned which control-depends on computation.
Raises
ValueError If all replicas do not have equal numbers of input tensors.
ValueError If the number of inputs per replica does not match the number of formal parameters to computation.
ValueError If the static inputs dimensions don't match with the values given in maximum_shapes.
ValueError If the structure of inputs per replica does not match the structure of maximum_shapes. | |
doc_4457 | tf.compat.v1.nn.rnn_cell.GRUCell(
num_units, activation=None, reuse=None, kernel_initializer=None,
bias_initializer=None, name=None, dtype=None, **kwargs
)
Note that this cell is not optimized for performance. Please use tf.contrib.cudnn_rnn.CudnnGRU for better performance on GPU, or tf.contrib.rnn.GRUBlockCellV2 for better performance on CPU.
Args
num_units int, The number of units in the GRU cell.
activation Nonlinearity to use. Default: tanh.
reuse (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
kernel_initializer (optional) The initializer to use for the weight and projection matrices.
bias_initializer (optional) The initializer to use for the bias.
name String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases.
dtype Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call.
**kwargs Dict, keyword named properties for common layer attributes, like trainable etc when constructing the cell from configs of get_config(). References: Learning Phrase Representations using RNN Encoder Decoder for Statistical Machine Translation: Cho et al., 2014 (pdf)
Attributes
graph
output_size Integer or TensorShape: size of outputs produced by this cell.
scope_name
state_size size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes.
Methods get_initial_state View source
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
zero_state View source
zero_state(
batch_size, dtype
)
Return zero-filled state tensor(s).
Args
batch_size int, float, or unit Tensor representing the batch size.
dtype the data type to use for the state.
Returns If state_size is an int or TensorShape, then the return value is a N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size. | |
doc_4458 |
Return the distutils distribution object for self. | |
doc_4459 |
Computes the paired euclidean distances between X and Y. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Yarray-like of shape (n_samples, n_features)
Returns
distancesndarray of shape (n_samples,) | |
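The computation itself is elementwise over paired rows (a NumPy sketch, not the scikit-learn code):

```python
import numpy as np

X = np.array([[0., 0.], [3., 4.]])
Y = np.array([[1., 0.], [0., 0.]])

# Entry i of the result is the Euclidean distance between X[i] and Y[i].
distances = np.sqrt(((X - Y) ** 2).sum(axis=1))
print(distances)   # [1. 5.]
```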
doc_4460 | Eliminates all but the first element from every consecutive group of equivalent elements. Note This function is different from torch.unique() in the sense that this function only eliminates consecutive duplicate values. This semantics is similar to std::unique in C++. Parameters
input (Tensor) – the input tensor
return_inverse (bool) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.
return_counts (bool) – Whether to also return the counts for each unique element.
dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. default: None
Returns
A tensor or a tuple of tensors containing
output (Tensor): the output list of unique scalar elements.
inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor.
counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor. Return type
(Tensor, Tensor (optional), Tensor (optional)) Example: >>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
>>> output = torch.unique_consecutive(x)
>>> output
tensor([1, 2, 3, 1, 2])
>>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> inverse_indices
tensor([0, 0, 1, 1, 2, 3, 3, 4])
>>> output, counts = torch.unique_consecutive(x, return_counts=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> counts
tensor([2, 2, 1, 2, 1]) | |
doc_4461 | mmap.MADV_RANDOM
mmap.MADV_SEQUENTIAL
mmap.MADV_WILLNEED
mmap.MADV_DONTNEED
mmap.MADV_REMOVE
mmap.MADV_DONTFORK
mmap.MADV_DOFORK
mmap.MADV_HWPOISON
mmap.MADV_MERGEABLE
mmap.MADV_UNMERGEABLE
mmap.MADV_SOFT_OFFLINE
mmap.MADV_HUGEPAGE
mmap.MADV_NOHUGEPAGE
mmap.MADV_DONTDUMP
mmap.MADV_DODUMP
mmap.MADV_FREE
mmap.MADV_NOSYNC
mmap.MADV_AUTOSYNC
mmap.MADV_NOCORE
mmap.MADV_CORE
mmap.MADV_PROTECT
These options can be passed to mmap.madvise(). Not every option will be present on every system. Availability: Systems with the madvise() system call. New in version 3.8. | |
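A minimal sketch on an anonymous mapping; the hasattr guards matter because neither madvise() nor every MADV_* constant is present on every system:

```python
import mmap

m = mmap.mmap(-1, mmap.PAGESIZE)           # anonymous mapping, one page
if hasattr(m, 'madvise') and hasattr(mmap, 'MADV_SEQUENTIAL'):
    # Hint to the kernel that access will be sequential.
    m.madvise(mmap.MADV_SEQUENTIAL)
m.write(b'hello')
m.seek(0)
data = m.read(5)
m.close()
```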
doc_4462 |
Compute data precision matrix with the generative model. Equals the inverse of the covariance but computed with the matrix inversion lemma for efficiency. Returns
precisionarray, shape=(n_features, n_features)
Estimated precision of data. | |
doc_4463 | Revert control channel back to plaintext. This can be useful to take advantage of firewalls that know how to handle NAT with non-secure FTP without opening fixed ports. New in version 3.3. | |
doc_4464 |
Equivalent to shift without copying data. The shifted data will not include the dropped periods and the shifted axis will be smaller than the original. Deprecated since version 1.2.0: slice_shift is deprecated, use DataFrame/Series.shift instead. Parameters
periods:int
Number of periods to move, can be positive or negative. Returns
shifted:same type as caller
Notes While slice_shift is faster than shift, you may pay for it later during alignment. | |
doc_4465 | See Migration guide for more details. tf.compat.v1.raw_ops.CudnnRNN
tf.raw_ops.CudnnRNN(
input, input_h, input_c, params, rnn_mode='lstm',
input_mode='linear_input', direction='unidirectional',
dropout=0, seed=0, seed2=0, is_training=True, name=None
)
Computes the RNN from the input and initial states, with respect to the params buffer. rnn_mode: Indicates the type of the RNN model. input_mode: Indicates whether there is a linear projection between the input and the actual computation before the first layer. 'skip_input' is only allowed when input_size == num_units; 'auto_select' implies 'skip_input' when input_size == num_units; otherwise, it implies 'linear_input'. direction: Indicates whether a bidirectional model will be used. Should be "unidirectional" or "bidirectional". dropout: Dropout probability. When set to 0., dropout is disabled. seed: The 1st part of a seed to initialize dropout. seed2: The 2nd part of a seed to initialize dropout. input: A 3-D tensor with the shape of [seq_length, batch_size, input_size]. input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units]. input_c: For LSTM, a 3-D tensor with the shape of [num_layer * dir, batch, num_units]. For other models, it is ignored. params: A 1-D tensor that contains the weights and biases in an opaque layout. The size must be created through CudnnRNNParamsSize, and initialized separately. Note that they might not be compatible across different generations. So it is a good idea to save and restore output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units]. output_h: The same shape as input_h. output_c: The same shape as input_c for LSTM. An empty tensor for other models. is_training: Indicates whether this operation is used for inference or training. reserve_space: An opaque tensor that can be used in backprop calculation. It is only produced if is_training is true.
Args
input A Tensor. Must be one of the following types: half, float32, float64.
input_h A Tensor. Must have the same type as input.
input_c A Tensor. Must have the same type as input.
params A Tensor. Must have the same type as input.
rnn_mode An optional string from: "rnn_relu", "rnn_tanh", "lstm", "gru". Defaults to "lstm".
input_mode An optional string from: "linear_input", "skip_input", "auto_select". Defaults to "linear_input".
direction An optional string from: "unidirectional", "bidirectional". Defaults to "unidirectional".
dropout An optional float. Defaults to 0.
seed An optional int. Defaults to 0.
seed2 An optional int. Defaults to 0.
is_training An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, output_h, output_c, reserve_space). output A Tensor. Has the same type as input.
output_h A Tensor. Has the same type as input.
output_c A Tensor. Has the same type as input.
reserve_space A Tensor. Has the same type as input. | |
doc_4466 |
Run score function on (X, y) and get the appropriate features. Parameters
Xarray-like of shape (n_samples, n_features)
The training input samples.
yarray-like of shape (n_samples,)
The target values (class labels in classification, real numbers in regression). Returns
selfobject | |
doc_4467 | class ast.USub
class ast.Not
class ast.Invert
Unary operator tokens. Not is the not keyword, Invert is the ~ operator. >>> print(ast.dump(ast.parse('not x', mode='eval'), indent=4))
Expression(
body=UnaryOp(
op=Not(),
operand=Name(id='x', ctx=Load()))) | |
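The other two tokens surface the same way when parsing; for instance:

```python
import ast

# Each unary expression parses to a UnaryOp whose .op names the operator.
for src in ('-x', '~x', 'not x'):
    node = ast.parse(src, mode='eval').body
    print(src, '->', type(node.op).__name__)
# -x -> USub
# ~x -> Invert
# not x -> Not
```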
doc_4468 | class sklearn.linear_model.ElasticNetCV(*, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=None, positive=False, random_state=None, selection='cyclic') [source]
Elastic Net model with iterative fitting along a regularization path. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters
l1_ratiofloat or list of float, default=0.5
float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and fewer close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1].
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path, used for each l1_ratio.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
max_iterint, default=1000
The maximum number of iterations.
tolfloat, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
cvint, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, int, to specify the number of folds.
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
verbosebool or int, default=0
Amount of verbosity.
n_jobsint, default=None
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
positivebool, default=False
When set to True, forces the coefficients to be positive.
random_stateint, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection{‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes
alpha_float
The amount of penalization chosen by cross validation.
l1_ratio_float
The compromise between l1 and l2 penalization chosen by cross validation.
coef_ndarray of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the cost function formula).
intercept_float or ndarray of shape (n_targets, n_features)
Independent term in the decision function.
mse_path_ndarray of shape (n_l1_ratio, n_alpha, n_folds)
Mean square error for the test set on each fold, varying l1_ratio and alpha.
alphas_ndarray of shape (n_alphas,) or (n_l1_ratio, n_alphas)
The grid of alphas used for fitting, for each l1_ratio.
dual_gap_float
The dual gaps at the end of the optimization for the optimal alpha.
n_iter_int
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha. See also
enet_path
ElasticNet
Notes For an example, see examples/linear_model/plot_lasso_model_selection.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. More specifically, the optimization objective is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to: a * L1 + b * L2
for: alpha = a + b and l1_ratio = a / (a + b).
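That (a, b) to (alpha, l1_ratio) conversion can be sketched as a tiny helper (the name ab_to_elastic_net is hypothetical, not part of scikit-learn):

```python
def ab_to_elastic_net(a, b):
    """Convert separate L1 weight a and L2 weight b into the
    (alpha, l1_ratio) parameterization used by ElasticNet."""
    alpha = a + b
    l1_ratio = a / (a + b)
    return alpha, l1_ratio

# An L1 penalty of weight 0.5 plus an L2 penalty of weight 1.5:
print(ab_to_elastic_net(0.5, 1.5))  # (2.0, 0.25)
```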
Examples >>> from sklearn.linear_model import ElasticNetCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNetCV(cv=5, random_state=0)
>>> regr.fit(X, y)
ElasticNetCV(cv=5, random_state=0)
>>> print(regr.alpha_)
0.199...
>>> print(regr.intercept_)
0.398...
>>> print(regr.predict([[0, 0]]))
[0.398...]
Methods
fit(X, y) Fit linear model with coordinate descent.
get_params([deep]) Get parameters for this estimator.
path(*args, **kwargs) Compute elastic net path with coordinate descent.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit linear model with coordinate descent. Fit is on grid of alphas and best alpha estimated by cross-validation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse.
yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
static path(*args, **kwargs) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^Fro_2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_4469 |
Return probability estimates for the test data X. Parameters
Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’
Test samples. Returns
pndarray of shape (n_queries, n_classes), or a list of n_outputs of such arrays if n_outputs > 1. The class probabilities of the input samples. Classes are ordered by lexicographic order. | |
doc_4470 |
Scalar method identical to the corresponding array attribute. Please see ndarray.clip. | |
doc_4471 |
Process a pick event. Each child artist will fire a pick event if mouseevent is over the artist and the artist has picker set. See also
set_picker, get_picker, pickable | |
doc_4472 |
Implement a function with checks for __torch_function__ overrides. See torch::autograd::handle_torch_function for the equivalent of this function in the C++ implementation. Parameters
public_api (function) – Function exposed by the public torch API originally called like public_api(*args, **kwargs) on which arguments are now being checked.
relevant_args (iterable) – Iterable of arguments to check for __torch_function__ methods.
args (tuple) – Arbitrary positional arguments originally passed into public_api.
kwargs (tuple) – Arbitrary keyword arguments originally passed into public_api. Returns
Result from calling implementation or a __torch_function__ method, as appropriate. Return type
object
Raises TypeError if no implementation is found. Example >>> def func(a):
... if type(a) is not torch.Tensor: # This will make func dispatchable by __torch_function__
... return handle_torch_function(func, (a,), a)
... return a + 0 | |
doc_4473 | Convert the array to an ordinary list with the same items. | |
doc_4474 |
Return whether x, y is in the bounding box, but not on its edge. | |
doc_4475 | New in version 3.2. Deprecated since version 3.3: It is now possible to use staticmethod with abstractmethod(), making this decorator redundant. A subclass of the built-in staticmethod(), indicating an abstract staticmethod. Otherwise it is similar to abstractmethod(). This special case is deprecated, as the staticmethod() decorator is now correctly identified as abstract when applied to an abstract method: class C(ABC):
@staticmethod
@abstractmethod
def my_abstract_staticmethod(...):
... | |
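A short sketch of the behavior: a subclass must override the abstract staticmethod before it can be instantiated, while the abstract base itself cannot be.

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @staticmethod
    @abstractmethod
    def make():
        ...

class Impl(Base):
    @staticmethod
    def make():
        return 42

print(Impl().make())  # 42 -- concrete subclass is instantiable
try:
    Base()  # instantiating the abstract base fails
except TypeError:
    print("Base() raised TypeError")
```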
doc_4476 | An integer giving the value of the largest Unicode code point, i.e. 1114111 (0x10FFFF in hexadecimal). Changed in version 3.3: Before PEP 393, sys.maxunicode used to be either 0xFFFF or 0x10FFFF, depending on the configuration option that specified whether Unicode characters were stored as UCS-2 or UCS-4. | |
doc_4477 | codecs.BOM_BE
codecs.BOM_LE
codecs.BOM_UTF8
codecs.BOM_UTF16
codecs.BOM_UTF16_BE
codecs.BOM_UTF16_LE
codecs.BOM_UTF32
codecs.BOM_UTF32_BE
codecs.BOM_UTF32_LE
These constants define various byte sequences, being Unicode byte order marks (BOMs) for several encodings. They are used in UTF-16 and UTF-32 data streams to indicate the byte order used, and in UTF-8 as a Unicode signature. BOM_UTF16 is either BOM_UTF16_BE or BOM_UTF16_LE depending on the platform’s native byte order, BOM is an alias for BOM_UTF16, BOM_LE for BOM_UTF16_LE and BOM_BE for BOM_UTF16_BE. The others represent the BOM in UTF-8 and UTF-32 encodings. | |
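A small sketch using these constants to sniff the encoding of a byte stream from its leading BOM (the sniff_bom helper name is an assumption, not part of the codecs module):

```python
import codecs

def sniff_bom(data: bytes):
    """Return (encoding, bom_length) guessed from a leading BOM, or (None, 0)."""
    # Check UTF-32 before UTF-16: BOM_UTF32_LE begins with the BOM_UTF16_LE bytes.
    for bom, enc in [
        (codecs.BOM_UTF32_LE, "utf-32-le"),
        (codecs.BOM_UTF32_BE, "utf-32-be"),
        (codecs.BOM_UTF8, "utf-8-sig"),
        (codecs.BOM_UTF16_LE, "utf-16-le"),
        (codecs.BOM_UTF16_BE, "utf-16-be"),
    ]:
        if data.startswith(bom):
            return enc, len(bom)
    return None, 0

enc, n = sniff_bom(codecs.BOM_UTF8 + "hello".encode("utf-8"))
print(enc, n)  # utf-8-sig 3
```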
doc_4478 | Updates the config like update() ignoring items with non-upper keys. Changelog New in version 0.11. Parameters
mapping (Optional[Mapping[str, Any]]) –
kwargs (Any) – Return type
bool | |
doc_4479 | tf.experimental.numpy.heaviside(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.heaviside. | |
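The underlying NumPy function behaves as follows: heaviside(x1, x2) is 0 where x1 < 0, x2 where x1 == 0, and 1 where x1 > 0.

```python
import numpy as np

x = np.array([-1.5, 0.0, 2.0])
print(np.heaviside(x, 0.5))  # [0.  0.5 1. ]
```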
doc_4480 |
Compute structure tensor using sum of squared differences. The (2-dimensional) structure tensor A is defined as: A = [Arr Arc]
[Arc Acc]
which is approximated by the weighted sum of squared differences in a local window around each pixel in the image. This formula can be extended to a larger number of dimensions (see [1]). Parameters
imagendarray
Input image.
sigmafloat, optional
Standard deviation used for the Gaussian kernel, which is used as a weighting function for the local summation of squared differences.
mode{‘constant’, ‘reflect’, ‘wrap’, ‘nearest’, ‘mirror’}, optional
How to handle values outside the image borders.
cvalfloat, optional
Used in conjunction with mode ‘constant’, the value outside the image boundaries.
order{‘rc’, ‘xy’}, optional
NOTE: Only applies in 2D. Higher dimensions must always use ‘rc’ order. This parameter allows for the use of reverse or forward order of the image axes in gradient computation. ‘rc’ indicates the use of the first axis initially (Arr, Arc, Acc), whilst ‘xy’ indicates the usage of the last axis initially (Axx, Axy, Ayy). Returns
A_elemslist of ndarray
Upper-diagonal elements of the structure tensor for each pixel in the input image. See also
structure_tensor_eigenvalues
References
1
https://en.wikipedia.org/wiki/Structure_tensor Examples >>> from skimage.feature import structure_tensor
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 1
>>> Arr, Arc, Acc = structure_tensor(square, sigma=0.1, order='rc')
>>> Acc
array([[0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0.],
[0., 4., 0., 4., 0.],
[0., 1., 0., 1., 0.],
[0., 0., 0., 0., 0.]]) | |
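The weighted sum of squared differences can be sketched in plain NumPy. This is only an approximation of skimage's behavior: it uses np.gradient for the derivatives and a uniform win x win window in place of the Gaussian weighting, and the helper name structure_tensor_2d is an assumption.

```python
import numpy as np

def structure_tensor_2d(image, win=3):
    """Approximate 2-D structure tensor: gradient products summed
    over a uniform win x win window (skimage uses a Gaussian)."""
    gr, gc = np.gradient(image.astype(float))  # derivatives along rows, cols

    def window_sum(a):
        pad = win // 2
        ap = np.pad(a, pad)  # zero-pad the borders
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = ap[i:i + win, j:j + win].sum()
        return out

    # Upper-diagonal elements Arr, Arc, Acc ('rc' order)
    return window_sum(gr * gr), window_sum(gr * gc), window_sum(gc * gc)

img = np.zeros((5, 5))
img[2, 2] = 1.0
Arr, Arc, Acc = structure_tensor_2d(img)
print(Acc.shape)  # (5, 5)
```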
doc_4481 |
Define the picking behavior of the artist. Parameters
pickerNone or bool or float or callable
This can be one of the following:
None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event.
A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent)
to determine the hit test. If the mouse event is over the artist, return hit=True and props is a dictionary of properties you want added to the PickEvent attributes. | |
doc_4482 |
Compute the symmetric difference of two Index objects. Parameters
other:Index or array-like
result_name:str
sort:False or None, default None
Whether to sort the resulting index. By default, the values are attempted to be sorted, but any TypeError from incomparable elements is caught by pandas. None : Attempt to sort the result, but catch any TypeErrors from comparing incomparable elements. False : Do not sort the result. Returns
symmetric_difference:Index
Notes symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equivalent to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with duplicates dropped. Examples
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([2, 3, 4, 5])
>>> idx1.symmetric_difference(idx2)
Int64Index([1, 5], dtype='int64') | |
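The semantics match Python's built-in set symmetric difference, so the example above can be reproduced without pandas:

```python
a = {1, 2, 3, 4}
b = {2, 3, 4, 5}

# Elements appearing in exactly one of the two sets,
# i.e. (a - b) | (b - a), same as the ^ operator.
print(sorted(a ^ b))              # [1, 5]
print(sorted((a - b) | (b - a)))  # [1, 5]
```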
doc_4483 |
Return self//value. | |
doc_4484 |
Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of p possible outcomes. An example of such an experiment is throwing a dice, where the outcome can be 1 through 6. Each sample drawn from the distribution represents n such experiments. Its values, X_i = [X_0, X_1, ..., X_p], represent the number of times the outcome was i. Note New code should use the multinomial method of a default_rng() instance instead; please see the Quick Start. Parameters
nint
Number of experiments.
pvalssequence of floats, length p
Probabilities of each of the p different outcomes. These must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as sum(pvals[:-1]) <= 1).
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned. Returns
outndarray
The drawn samples, of shape size, if that was provided. If not, the shape is (N,). In other words, each entry out[i,j,...,:] is an N-dimensional value drawn from the distribution. See also Generator.multinomial
which should be used for new code. Examples Throw a dice 20 times: >>> np.random.multinomial(20, [1/6.]*6, size=1)
array([[4, 1, 7, 5, 2, 1]]) # random
It landed 4 times on 1, once on 2, etc. Now, throw the dice 20 times, and 20 times again: >>> np.random.multinomial(20, [1/6.]*6, size=2)
array([[3, 4, 3, 3, 4, 3], # random
[2, 4, 3, 4, 0, 7]])
For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. A loaded die is more likely to land on number 6: >>> np.random.multinomial(100, [1/7.]*5 + [2/7.])
array([11, 16, 14, 17, 16, 26]) # random
The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so: >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT
array([38, 62]) # random
not like: >>> np.random.multinomial(100, [1.0, 2.0]) # WRONG
Traceback (most recent call last):
ValueError: pvals < 0, pvals > 1 or pvals contains NaNs | |
doc_4485 |
related_query_name
The relation on the related object back to this object doesn’t exist by default. Setting related_query_name creates a relation from the related object back to this one. This allows querying and filtering from the related object. | |
doc_4486 | bytearray.decode(encoding="utf-8", errors="strict")
Return a string decoded from the given bytes. Default encoding is 'utf-8'. errors may be given to set a different error handling scheme. The default for errors is 'strict', meaning that encoding errors raise a UnicodeError. Other possible values are 'ignore', 'replace' and any other name registered via codecs.register_error(), see section Error Handlers. For a list of possible encodings, see section Standard Encodings. By default, the errors argument is not checked for best performances, but only used at the first decoding error. Enable the Python Development Mode, or use a debug build to check errors. Note Passing the encoding argument to str allows decoding any bytes-like object directly, without needing to make a temporary bytes or bytearray object. Changed in version 3.1: Added support for keyword arguments. Changed in version 3.9: The errors is now checked in development mode and in debug mode. | |
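A quick sketch of the three error-handling schemes on a bytearray holding non-ASCII UTF-8 data:

```python
ba = bytearray("café", "utf-8")          # b'caf\xc3\xa9' -- 'é' is two bytes

print(ba.decode())                       # café  (default: utf-8, strict)
print(ba.decode("ascii", "replace"))     # caf + two U+FFFD replacement chars
try:
    ba.decode("ascii")                   # strict (the default) raises
except UnicodeDecodeError:
    print("UnicodeDecodeError")
```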
doc_4487 | Return a string representation suitable to be sent as HTTP headers. attrs and header are sent to each Morsel’s output() method. sep is used to join the headers together, and is by default the combination '\r\n' (CRLF). | |
doc_4488 | Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | |
doc_4489 |
Align the xlabels and ylabels of subplots with the same subplots row or column (respectively) if label alignment is being done automatically (i.e. the label position is not manually set). Alignment persists for draw events after this is called. Parameters
axslist of Axes
Optional list (or ndarray) of Axes to align the labels. Default is to align all Axes on the figure. See also matplotlib.figure.Figure.align_xlabels
matplotlib.figure.Figure.align_ylabels | |
doc_4490 | Required to create a subkey of a registry key. | |
doc_4491 |
Return a numpy.datetime64 object with ‘ns’ precision. | |
doc_4492 |
Function to reset the SobolEngine to base state. | |
doc_4493 | Format a datetime object or timestamp into an RFC 2822 date string for Set-Cookie expires. Deprecated since version 2.0: Will be removed in Werkzeug 2.1. Use http_date() instead. Parameters
expires (Optional[Union[datetime.datetime, datetime.date, int, float, time.struct_time]]) – Return type
str | |
doc_4494 | See Migration guide for more details. tf.compat.v1.CriticalSection
tf.CriticalSection(
name=None, shared_name=None, critical_section_def=None, import_scope=None
)
A CriticalSection object is a resource in the graph which executes subgraphs in serial order. A common example of a subgraph one may wish to run exclusively is the one given by the following function: v = resource_variable_ops.ResourceVariable(0.0, name="v")
def count():
value = v.read_value()
with tf.control_dependencies([value]):
with tf.control_dependencies([v.assign_add(1)]):
return tf.identity(value)
Here, a snapshot of v is captured in value; and then v is updated. The snapshot value is returned. If multiple workers or threads all execute count in parallel, there is no guarantee that access to the variable v is atomic at any point within any thread's calculation of count. In fact, even implementing an atomic counter that guarantees that the user will see each value 0, 1, ..., is currently impossible. The solution is to ensure any access to the underlying resource v is only processed through a critical section: cs = CriticalSection()
f1 = cs.execute(count)
f2 = cs.execute(count)
output = f1 + f2
session.run(output)
The functions f1 and f2 will be executed serially, and updates to v will be atomic. NOTES All resource objects, including the critical section and any captured variables of functions executed on that critical section, will be colocated to the same device (host and cpu/gpu). When using multiple critical sections on the same resources, there is no guarantee of exclusive access to those resources. This behavior is disallowed by default (but see the kwarg exclusive_resource_access). For example, running the same function in two separate critical sections will not ensure serial execution: v = tf.compat.v1.get_variable("v", initializer=0.0, use_resource=True)
def accumulate(up):
x = v.read_value()
with tf.control_dependencies([x]):
with tf.control_dependencies([v.assign_add(up)]):
return tf.identity(x)
ex1 = CriticalSection().execute(
accumulate, 1.0, exclusive_resource_access=False)
ex2 = CriticalSection().execute(
accumulate, 1.0, exclusive_resource_access=False)
bad_sum = ex1 + ex2
sess.run(v.initializer)
sess.run(bad_sum) # May return 0.0
Attributes
name
Methods execute View source
execute(
fn, exclusive_resource_access=True, name=None
)
Execute function fn() inside the critical section. fn should not accept any arguments. To add extra arguments to when calling fn in the critical section, create a lambda: critical_section.execute(lambda: fn(*my_args, **my_kwargs))
Args
fn The function to execute. Must return at least one tensor.
exclusive_resource_access Whether the resources required by fn should be exclusive to this CriticalSection. Default: True. You may want to set this to False if you will be accessing a resource in read-only mode in two different CriticalSections.
name The name to use when creating the execute operation.
Returns The tensors returned from fn().
Raises
ValueError If fn attempts to lock this CriticalSection in any nested or lazy way that may cause a deadlock.
ValueError If exclusive_resource_access == True and another CriticalSection has an execution requesting the same resources as fn. Note, even if exclusive_resource_access is True, if another execution in another CriticalSection was created without exclusive_resource_access=True, a ValueError will be raised. | |
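Outside TensorFlow, the serial-execution guarantee that CriticalSection provides for the count subgraph is analogous to guarding a read-then-increment with a plain lock; a hedged Python sketch of that analogy (not the TF implementation):

```python
import threading

counter = 0
lock = threading.Lock()

def count():
    """Read-then-increment made atomic by the lock, analogous to
    executing the subgraph inside a CriticalSection."""
    global counter
    with lock:
        value = counter
        counter += 1
        return value

threads = [threading.Thread(target=count) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100 -- every increment observed exactly once
```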
doc_4495 |
Return the yaxis' minor tick labels, as a list of Text. | |
doc_4496 | tf.metrics.MeanAbsolutePercentageError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanAbsolutePercentageError
tf.keras.metrics.MeanAbsolutePercentageError(
name='mean_absolute_percentage_error', dtype=None
)
Args
name (Optional) string name of the metric instance.
dtype (Optional) data type of the metric result. Standalone usage:
m = tf.keras.metrics.MeanAbsolutePercentageError()
m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]])
m.result().numpy()
250000000.0
m.reset_states()
m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]],
sample_weight=[1, 0])
m.result().numpy()
500000000.0
Usage with compile() API: model.compile(
optimizer='sgd',
loss='mse',
metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
Methods reset_states View source
reset_states()
Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source
result()
Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source
update_state(
y_true, y_pred, sample_weight=None
)
Accumulates metric statistics. y_true and y_pred should have the same shape.
Args
y_true Ground truth values. shape = [batch_size, d0, .. dN].
y_pred The predicted values. shape = [batch_size, d0, .. dN].
sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)).
Returns Update op. | |
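The standalone result above (250000000.0) can be reproduced with a plain NumPy sketch, assuming Keras clips |y_true| to its backend epsilon (taken here as 1e-7) before dividing; both that epsilon and the helper name mape are assumptions.

```python
import numpy as np

EPS = 1e-7  # assumed Keras backend epsilon

def mape(y_true, y_pred):
    """Per-sample mean absolute percentage error, reduced over the last axis."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    diff = np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), EPS)
    return 100.0 * np.mean(diff, axis=-1)

per_sample = mape([[0, 1], [0, 0]], [[1, 1], [0, 0]])
print(per_sample.mean())  # 250000000.0 -- matches the standalone usage above
```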
doc_4497 | Exception raised when a specified option is not found in the specified section. | |
doc_4498 |
Return an iterable of the ModuleDict keys. | |
doc_4499 |
Bases: matplotlib.backend_tools.ToolBase Send message with the current pointer position. This tool runs in the background reporting the position of the cursor. send_message(event)[source]
Call matplotlib.backend_managers.ToolManager.message_event.
set_figure(figure)[source] |