_id | text | title
|---|---|---|
doc_29100 | Interface used by the parser to present error and warning messages to the application. The methods of this object control whether errors are immediately converted to exceptions or are handled in some other way. | |
doc_29101 | This function securely creates a temporary directory using the same rules as mkdtemp(). The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the temporary directory object the newly created temporary directory and all its contents are removed from the filesystem. The directory name can be retrieved from the name attribute of the returned object. When the returned object is used as a context manager, the name will be assigned to the target of the as clause in the with statement, if there is one. The directory can be explicitly cleaned up by calling the cleanup() method. Raises an auditing event tempfile.mkdtemp with argument fullpath. New in version 3.2. | |
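The context-manager behaviour described above can be seen in a short sketch (file names here are arbitrary):

```python
import os
import tempfile

# The directory exists for the duration of the with block...
with tempfile.TemporaryDirectory() as tmpdir:
    scratch = os.path.join(tmpdir, "scratch.txt")  # arbitrary file name
    with open(scratch, "w") as f:
        f.write("hello")
    print(os.path.exists(scratch))   # True

# ...and is removed, with all its contents, on exit.
print(os.path.exists(tmpdir))        # False
```

The `as` target is the directory name (a string), so it remains usable after the block for checks like the one above.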
doc_29102 | Parse lists as described by RFC 2068 Section 2. In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing. It works like parse_set_header(), except that items may appear multiple times and case sensitivity is preserved. The return value is a standard list: >>> parse_list_header('token, "quoted value"')
['token', 'quoted value']
To create a header from the list again, use the dump_header() function. Parameters
value (str) – a string with a list header. Returns
list Return type
List[str] | |
doc_29103 |
Bases: mpl_toolkits.axisartist.angle_helper.LocatorBase __call__(v1, v2)[source]
Call self as a function. | |
doc_29104 | URL name: password_reset_confirm Presents a form for entering a new password. Keyword arguments from the URL:
uidb64: The user’s id encoded in base 64.
token: Token to check that the password is valid. Attributes:
template_name
The full name of a template to display the confirm password view. Default value is registration/password_reset_confirm.html.
token_generator
Instance of the class to check the password. This defaults to default_token_generator, an instance of django.contrib.auth.tokens.PasswordResetTokenGenerator.
post_reset_login
A boolean indicating if the user should be automatically authenticated after a successful password reset. Defaults to False.
post_reset_login_backend
A dotted path to the authentication backend to use when authenticating a user if post_reset_login is True. Required only if you have multiple AUTHENTICATION_BACKENDS configured. Defaults to None.
form_class
Form that will be used to set the password. Defaults to SetPasswordForm.
success_url
URL to redirect to after the password reset is done. Defaults to 'password_reset_complete'.
extra_context
A dictionary of context data that will be added to the default context data passed to the template.
reset_url_token
Token parameter displayed as a component of password reset URLs. Defaults to 'set-password'.
Template context:
form: The form (see form_class above) for setting the new user’s password.
validlink: Boolean, True if the link (combination of uidb64 and token) is valid and has not been used yet. | |
doc_29105 | Deprecated since version 3.9, will be removed in version 3.11: MailmanProxy is deprecated; it depends on a Mailman module which no longer exists and is therefore already broken. Create a new pure proxy server. Arguments are as per SMTPServer. Everything will be relayed to remoteaddr, unless the local mailman configuration knows about an address, in which case it will be handled via mailman. Note that running this has a good chance of turning you into an open relay, so please be careful. | |
doc_29106 |
Performs a (local) reduce with specified slices over a single axis. For i in range(len(indices)), reduceat computes ufunc.reduce(array[indices[i]:indices[i+1]]), which becomes the i-th generalized “row” parallel to axis in the final result (i.e., in a 2-D array, for example, if axis = 0, it becomes the i-th row, but if axis = 1, it becomes the i-th column). There are three exceptions to this: when i = len(indices) - 1 (so for the last index), indices[i+1] = array.shape[axis]. if indices[i] >= indices[i + 1], the i-th generalized “row” is simply array[indices[i]]. if indices[i] >= len(array) or indices[i] < 0, an error is raised. The shape of the output depends on the size of indices, and may be larger than array (this happens if len(indices) > array.shape[axis]). Parameters
arrayarray_like
The array to act on.
indicesarray_like
Paired indices, comma separated (not colon), specifying slices to reduce.
axisint, optional
The axis along which to apply the reduceat.
dtypedata-type code, optional
The type used to represent the intermediate results. Defaults to the data type of the output array if this is provided, or the data type of the input array if no output array is provided.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with ufunc.__call__, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for keyword argument. Returns
rndarray
The reduced values. If out was supplied, r is a reference to out. Notes A descriptive example: If array is 1-D, the function ufunc.accumulate(array) is the same as ufunc.reduceat(array, indices)[::2] where indices is range(len(array) - 1) with a zero placed in every other element: indices = zeros(2 * len(array) - 1), indices[1::2] = range(1, len(array)). Don’t be fooled by this attribute’s name: reduceat(array) is not necessarily smaller than array. Examples To take the running sum of four successive values: >>> np.add.reduceat(np.arange(8),[0,4, 1,5, 2,6, 3,7])[::2]
array([ 6, 10, 14, 18])
A 2-D example: >>> x = np.linspace(0, 15, 16).reshape(4,4)
>>> x
array([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]])
# reduce such that the result has the following five rows:
# [row1 + row2 + row3]
# [row4]
# [row2]
# [row3]
# [row1 + row2 + row3 + row4]
>>> np.add.reduceat(x, [0, 3, 1, 2, 0])
array([[12., 15., 18., 21.],
[12., 13., 14., 15.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[24., 28., 32., 36.]])
# reduce such that result has the following two columns:
# [col1 * col2 * col3, col4]
>>> np.multiply.reduceat(x, [0, 3], 1)
array([[ 0., 3.],
[ 120., 7.],
[ 720., 11.],
[2184., 15.]]) | |
doc_29107 | See Migration guide for more details. tf.compat.v1.config.threading.set_inter_op_parallelism_threads
tf.config.threading.set_inter_op_parallelism_threads(
num_threads
)
Determines the number of threads used by independent non-blocking operations. 0 means the system picks an appropriate number.
Args
num_threads Number of parallel threads | |
doc_29108 | Recreate the URL for a request from the parts in a WSGI environment. The URL is an IRI, not a URI, so it may contain Unicode characters. Use iri_to_uri() to convert it to ASCII. Parameters
environ (WSGIEnvironment) – The WSGI environment to get the URL parts from.
root_only (bool) – Only build the root path, don’t include the remaining path or query string.
strip_querystring (bool) – Don’t include the query string.
host_only (bool) – Only build the scheme and host.
trusted_hosts (Optional[Iterable[str]]) – A list of trusted host names to validate the host against. Return type
str | |
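As a rough, hypothetical sketch of the reconstruction this function performs (the name current_url is made up, and this ignores port handling, trusted_hosts validation, and IRI details):

```python
from urllib.parse import quote

def current_url(environ, root_only=False, strip_querystring=False, host_only=False):
    # Rebuild the request URL from the standard WSGI environ keys.
    scheme = environ.get("wsgi.url_scheme", "http")
    host = environ.get("HTTP_HOST") or environ["SERVER_NAME"]
    url = f"{scheme}://{host}"
    if host_only:
        return url + "/"
    url += quote(environ.get("SCRIPT_NAME", ""))
    if root_only:
        return url + "/"
    url += quote(environ.get("PATH_INFO", ""))
    qs = environ.get("QUERY_STRING", "")
    if qs and not strip_querystring:
        url += "?" + qs
    return url

env = {"wsgi.url_scheme": "https", "HTTP_HOST": "example.com",
       "SCRIPT_NAME": "", "PATH_INFO": "/docs", "QUERY_STRING": "q=wsgi"}
print(current_url(env))   # https://example.com/docs?q=wsgi
```

The keyword flags mirror the documented parameters: host_only stops after scheme and host, root_only stops after the script root.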
doc_29109 | PKZIP version which created ZIP archive. | |
doc_29110 |
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters
bbool | |
doc_29111 |
Reduce X to the selected features and then predict using the
underlying estimator. Parameters
Xarray of shape [n_samples, n_features]
The input samples. Returns
yarray of shape [n_samples]
The predicted target values. | |
doc_29112 | See Migration guide for more details. tf.compat.v1.raw_ops.NotEqual
tf.raw_ops.NotEqual(
x, y, incompatible_shape_error=True, name=None
)
Note: NotEqual supports broadcasting.
Args
x A Tensor.
y A Tensor. Must have the same type as x.
incompatible_shape_error An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor of type bool. | |
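TensorFlow is not assumed available here; the broadcasting behaviour the note refers to follows the same rules as the analogous NumPy comparison:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])
y = np.array([1, 5, 3])   # shape (3,) broadcasts against each row of x
print(x != y)
# [[False  True False]
#  [ True False  True]]
```

tf.raw_ops.NotEqual(x=..., y=...) would produce the same boolean pattern as a bool Tensor.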
doc_29113 |
Return the picking behavior of the artist. The possible values are described in set_picker. See also
set_picker, pickable, pick | |
doc_29114 | Return the “login name” of the user. This function checks the environment variables LOGNAME, USER, LNAME and USERNAME, in order, and returns the value of the first one which is set to a non-empty string. If none are set, the login name from the password database is returned on systems which support the pwd module, otherwise, an exception is raised. In general, this function should be preferred over os.getlogin(). | |
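The documented lookup order can be sketched in pure Python (login_name is a made-up name for illustration; the real API is getpass.getuser(); the pwd fallback is POSIX-only):

```python
import os
import pwd  # password database fallback (POSIX only)

def login_name():
    # Check the documented variables in order; first non-empty value wins.
    for var in ("LOGNAME", "USER", "LNAME", "USERNAME"):
        value = os.environ.get(var)
        if value:
            return value
    # None set: fall back to the password database.
    return pwd.getpwuid(os.getuid()).pw_name

print(login_name())
```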
doc_29115 | A store implementation that uses a file to store the underlying key-value pairs. Parameters
file_name (str) – path of the file in which to store the key-value pairs
world_size (int) – The total number of processes using the store Example::
>>> import torch.distributed as dist
>>> store1 = dist.FileStore("/tmp/filestore", 2)
>>> store2 = dist.FileStore("/tmp/filestore", 2)
>>> # Use any of the store methods from either the client or server after initialization
>>> store1.set("first_key", "first_value")
>>> store2.get("first_key") | |
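Without assuming PyTorch is installed, the idea of two handles sharing one backing file can be sketched with a toy JSON-backed store (FileKVStore is invented for illustration and omits the locking, blocking, and world_size semantics of the real FileStore):

```python
import json
import os
import tempfile

class FileKVStore:
    """Toy file-backed key-value store; NOT torch.distributed.FileStore."""
    def __init__(self, file_name):
        self.file_name = file_name

    def _load(self):
        if not os.path.exists(self.file_name):
            return {}
        with open(self.file_name) as f:
            return json.load(f)

    def set(self, key, value):
        data = self._load()
        data[key] = value
        with open(self.file_name, "w") as f:
            json.dump(data, f)

    def get(self, key):
        return self._load()[key]

path = os.path.join(tempfile.mkdtemp(), "kvstore.json")
s1 = FileKVStore(path)   # two handles share the same backing file
s2 = FileKVStore(path)
s1.set("first_key", "first_value")
print(s2.get("first_key"))   # first_value
```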
doc_29116 | Returns the value of the max_count attribute of the specialized class used to represent the header with the given name. | |
doc_29117 | One more than the number of the highest signal number. | |
doc_29118 | Reads one line from the remote server. You may override this method. | |
doc_29119 | See Migration guide for more details. tf.compat.v1.raw_ops.ExperimentalUniqueDataset
tf.raw_ops.ExperimentalUniqueDataset(
input_dataset, output_types, output_shapes, name=None
)
Args
input_dataset A Tensor of type variant.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_29120 | Special value which should be returned by the binary special methods (e.g. __eq__(), __lt__(), __add__(), __rsub__(), etc.) to indicate that the operation is not implemented with respect to the other type; may be returned by the in-place binary special methods (e.g. __imul__(), __iand__(), etc.) for the same purpose. It should not be evaluated in a boolean context. Note When a binary (or in-place) method returns NotImplemented the interpreter will try the reflected operation on the other type (or some other fallback, depending on the operator). If all attempts return NotImplemented, the interpreter will raise an appropriate exception. Incorrectly returning NotImplemented will result in a misleading error message or the NotImplemented value being returned to Python code. See Implementing the arithmetic operations for examples. Note NotImplementedError and NotImplemented are not interchangeable, even though they have similar names and purposes. See NotImplementedError for details on when to use it. Changed in version 3.9: Evaluating NotImplemented in a boolean context is deprecated. While it currently evaluates as true, it will emit a DeprecationWarning. It will raise a TypeError in a future version of Python. | |
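The reflected-operation fallback can be demonstrated with two toy classes (Meters and Feet are invented for illustration):

```python
class Meters:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        if not isinstance(other, Meters):
            return NotImplemented   # let the other operand's method try
        return self.value == other.value

class Feet:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        if isinstance(other, Meters):
            return abs(self.value * 0.3048 - other.value) < 1e-9
        return NotImplemented

# Meters.__eq__ returns NotImplemented, so Python tries the reflected
# operation Feet.__eq__, which knows how to compare against Meters.
print(Meters(0.3048) == Feet(1))   # True

# If both sides return NotImplemented, == falls back to identity.
print(Meters(1) == 5)              # False
```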
doc_29121 | tf.keras.layers.experimental.SyncBatchNormalization(
axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
beta_initializer='zeros', gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones', beta_regularizer=None,
gamma_regularizer=None, beta_constraint=None, gamma_constraint=None,
renorm=False, renorm_clipping=None, renorm_momentum=0.99, trainable=True,
adjustment=None, name=None, **kwargs
)
Applies batch normalization to activations of the previous layer at each batch by synchronizing the global batch statistics across all devices that are training the model. For specific details about batch normalization please refer to the tf.keras.layers.BatchNormalization layer docs. If this layer is used when using tf.distribute strategy to train models across devices/workers, there will be an allreduce call to aggregate batch statistics across all replicas at every training step. Without tf.distribute strategy, this layer behaves as a regular tf.keras.layers.BatchNormalization layer. Example usage: strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(16))
model.add(tf.keras.layers.experimental.SyncBatchNormalization())
Arguments
axis Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
momentum Momentum for the moving average.
epsilon Small float added to variance to avoid dividing by zero.
center If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
beta_initializer Initializer for the beta weight.
gamma_initializer Initializer for the gamma weight.
moving_mean_initializer Initializer for the moving mean.
moving_variance_initializer Initializer for the moving variance.
beta_regularizer Optional regularizer for the beta weight.
gamma_regularizer Optional regularizer for the gamma weight.
beta_constraint Optional constraint for the beta weight.
gamma_constraint Optional constraint for the gamma weight.
renorm Whether to use Batch Renormalization. This adds extra variables during training. The inference is the same for either value of this parameter.
renorm_clipping A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
renorm_momentum Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.
trainable Boolean, if True the variables will be marked as trainable. Call arguments:
inputs: Input tensor (of any rank).
training: Python boolean indicating whether the layer should behave in training mode or in inference mode.
training=True: The layer will normalize its inputs using the mean and variance of the current batch of inputs.
training=False: The layer will normalize its inputs using the mean and variance of its moving statistics, learned during training.
Input shape: Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model. Output shape: Same shape as input. | |
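Without assuming TensorFlow, the core idea of pooling batch statistics across replicas before normalizing can be sketched in NumPy (sync_batch_norm here is a toy function, not the Keras layer; it omits beta/gamma, moving statistics, and the real allreduce):

```python
import numpy as np

def sync_batch_norm(replica_batches, epsilon=1e-3):
    # Pool the statistics across all replicas (the role of the allreduce),
    # then normalize each replica's batch with the *global* mean/variance.
    global_batch = np.concatenate(replica_batches, axis=0)
    mean = global_batch.mean(axis=0)
    var = global_batch.var(axis=0)
    return [(b - mean) / np.sqrt(var + epsilon) for b in replica_batches]

r1 = np.array([[1.0, 2.0], [3.0, 4.0]])   # batch on replica 1
r2 = np.array([[5.0, 6.0], [7.0, 8.0]])   # batch on replica 2
out = sync_batch_norm([r1, r2])
print(np.concatenate(out).mean(axis=0))   # ~0: normalized with global stats
```

A plain BatchNormalization layer would instead compute mean/var per replica, which is exactly the difference this layer removes.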
doc_29122 |
Set the hatching pattern. hatch can be one of: / - diagonal hatching
\ - back diagonal
| - vertical
- - horizontal
+ - crossed
x - crossed diagonal
o - small circle
O - large circle
. - dots
* - stars
Letters can be combined, in which case all the specified hatchings are done. If same letter repeats, it increases the density of hatching of that pattern. Hatching is supported in the PostScript, PDF, SVG and Agg backends only. Unlike other properties such as linewidth and colors, hatching can only be specified for the collection as a whole, not separately for each member. Parameters
hatch{'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} | |
doc_29123 |
Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. See also char.isupper | |
doc_29124 |
Set the formatter of the major ticker. In addition to a Formatter instance, this also accepts a str or function. For a str a StrMethodFormatter is used. The field used for the value must be labeled 'x' and the field used for the position must be labeled 'pos'. See the StrMethodFormatter documentation for more information. For a function, a FuncFormatter is used. The function must take two inputs (a tick value x and a position pos), and return a string containing the corresponding tick label. See the FuncFormatter documentation for more information. Parameters
formatterFormatter, str, or function
| |
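The two-argument function signature described above can be shown without matplotlib itself; dollars below is a made-up formatter that could be passed to set_major_formatter (it would be wrapped in a FuncFormatter):

```python
def dollars(x, pos):
    # Matches the (tick value, tick position) signature FuncFormatter expects.
    # pos is unused here but must be accepted.
    return f"${x:,.0f}"

labels = [dollars(v, i) for i, v in enumerate([0, 2500, 50000])]
print(labels)   # ['$0', '$2,500', '$50,000']
```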
doc_29125 | Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | |
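The log transformation described above is simply np.log applied to the hyperparameter values, e.g. length-scales (the variable names below are illustrative, not the kernel API):

```python
import numpy as np

# Length-scales spanning orders of magnitude...
length_scale = np.array([0.1, 1.0, 10.0])

# ...become evenly spaced in the log-transformed search space.
theta = np.log(length_scale)
print(theta)            # [-2.30258509  0.          2.30258509]
print(np.exp(theta))    # back to the original hyperparameters
```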
doc_29126 |
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters
(for the __new__ method; see Notes below)
shapetuple of ints
Shape of created array.
dtypedata-type, optional
Any object that can be interpreted as a numpy data type.
bufferobject exposing buffer interface, optional
Used to fill the array with data.
offsetint, optional
Offset of array data in buffer.
stridestuple of ints, optional
Strides of data in memory.
order{‘C’, ‘F’}, optional
Row-major (C-style) or column-major (Fortran-style) order. See also array
Construct an array. zeros
Create an array, each element of which is zero. empty
Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype
Create a data-type. numpy.typing.NDArray
An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e-323]])
Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
Attributes
Tndarray
Transpose of the array.
databuffer
The array’s elements, in memory.
dtypedtype object
Describes the format of the elements in the array.
flagsdict
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
flatnumpy.flatiter object
Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO).
imagndarray
Imaginary part of the array.
realndarray
Real part of the array.
sizeint
Number of elements in the array.
itemsizeint
The memory use of each array element in bytes.
nbytesint
The total number of bytes required to store the array data, i.e., itemsize * size.
ndimint
The array’s number of dimensions.
shapetuple of ints
Shape of the array.
stridestuple of ints
The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4).
ctypesctypes object
Class containing properties of the array needed for interaction with ctypes.
basendarray
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored. | |
doc_29127 |
Plot Precision Recall Curve for binary classifiers. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. Parameters
estimatorestimator instance
Fitted classifier or a fitted Pipeline in which the last estimator is a classifier.
X{array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
yarray-like of shape (n_samples,)
Binary target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
response_method{‘predict_proba’, ‘decision_function’, ‘auto’}, default=’auto’
Specifies whether to use predict_proba or decision_function as the target response. If set to ‘auto’, predict_proba is tried first and if it does not exist decision_function is tried next.
namestr, default=None
Name for labeling curve. If None, the name of the estimator is used.
axmatplotlib axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
pos_labelstr or int, default=None
The class considered as the positive class when computing the precision and recall metrics. By default, estimators.classes_[1] is considered as the positive class. New in version 0.24.
**kwargsdict
Keyword arguments to be passed to matplotlib’s plot. Returns
displayPrecisionRecallDisplay
Object that stores computed values. See also
precision_recall_curve
Compute precision-recall pairs for different probability thresholds.
PrecisionRecallDisplay
Precision Recall visualization. | |
doc_29128 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
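The `<component>__<parameter>` routing can be sketched with a toy class (Estimator here is invented and far simpler than scikit-learn's implementation):

```python
class Estimator:
    """Toy estimator: plain params plus named nested components."""
    def __init__(self, **params):
        self.params = params
        self.steps = {}

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                # Route "component__parameter" to the nested object.
                comp, _, sub = key.partition("__")
                self.steps[comp].set_params(**{sub: value})
            else:
                self.params[key] = value
        return self

inner = Estimator(C=1.0)
outer = Estimator()
outer.steps["clf"] = inner
outer.set_params(clf__C=10.0)   # updates the nested component's C
print(inner.params["C"])        # 10.0
```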
doc_29129 | A list of queued input lines. The cmdqueue list is checked in cmdloop() when new input is needed; if it is nonempty, its elements will be processed in order, as if entered at the prompt. | |
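A minimal sketch of pre-loading cmdqueue (the Todo class and its commands are made up):

```python
import cmd
import io

class Todo(cmd.Cmd):
    prompt = ""          # suppress the "(Cmd) " prompt for this scripted run
    def do_add(self, arg):
        self.stdout.write(f"added {arg}\n")
    def do_quit(self, arg):
        return True      # a truthy return value stops cmdloop()

out = io.StringIO()
app = Todo(stdout=out)
# cmdloop() consumes cmdqueue before ever reading from stdin,
# so ending the queue with "quit" makes the loop exit cleanly.
app.cmdqueue = ["add milk", "add eggs", "quit"]
app.cmdloop()
print(out.getvalue())
```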
doc_29130 | Return the integer square root of the nonnegative integer n. This is the floor of the exact square root of n, or equivalently the greatest integer a such that a² ≤ n. For some applications, it may be more convenient to have the least integer a such that n ≤ a², or in other words the ceiling of the exact square root of n. For positive n, this can be computed using a = 1 + isqrt(n - 1). New in version 3.8. | |
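Both the floor behaviour and the ceiling recipe from the text:

```python
import math

n = 17
floor_root = math.isqrt(n)          # greatest a with a*a <= n
ceil_root = 1 + math.isqrt(n - 1)   # least a with n <= a*a (valid for n > 0)
print(floor_root, ceil_root)        # 4 5

# For a perfect square both agree:
print(math.isqrt(16), 1 + math.isqrt(15))   # 4 4
```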
doc_29131 | See Migration guide for more details. tf.compat.v1.keras.applications.nasnet.preprocess_input
tf.keras.applications.nasnet.preprocess_input(
x, data_format=None
)
Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8)
x = tf.cast(i, tf.float32)
x = tf.keras.applications.mobilenet.preprocess_input(x)
core = tf.keras.applications.MobileNet()
x = core(x)
model = tf.keras.Model(inputs=[i], outputs=[x])
image = tf.image.decode_png(tf.io.read_file('file.png'))
result = model(image)
Arguments
x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used.
data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
Returns Preprocessed numpy.array or a tf.Tensor with type float32. The inputs pixel values are scaled between -1 and 1, sample-wise.
Raises
ValueError In case of unknown data_format argument. | |
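The documented [0, 255] to [-1, 1] scaling can be sketched without TensorFlow (scale_to_unit_range is a made-up name; the real API is the preprocess_input call above, which also handles data_format):

```python
import numpy as np

def scale_to_unit_range(x):
    # Pixel values in [0, 255] -> [-1, 1], sample-wise.
    return x / 127.5 - 1.0

img = np.array([0.0, 127.5, 255.0])
print(scale_to_unit_range(img))   # [-1.  0.  1.]
```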
doc_29132 |
Bases: matplotlib.offsetbox.PackerBase VPacker packs its children vertically, automatically adjusting their relative positions at draw time. Parameters
padfloat, optional
The boundary padding in points.
sepfloat, optional
The spacing between items in points.
width, heightfloat, optional
Width and height of the container box in pixels, calculated if None.
align{'top', 'bottom', 'left', 'right', 'center', 'baseline'}, default: 'baseline'
Alignment of boxes.
mode{'fixed', 'expand', 'equal'}, default: 'fixed'
The packing mode. 'fixed' packs the given Artists tight with sep spacing. 'expand' uses the maximal available space to distribute the artists with equal spacing in between. 'equal': Each artist gets an equal fraction of the available space and is left-aligned (or top-aligned) therein.
childrenlist of Artist
The artists to pack. Notes pad and sep are in points and will be scaled with the renderer dpi, while width and height are in pixels. get_extent_offsets(renderer)[source]
Update offset of the children and return the extent of the box. Parameters
rendererRendererBase subclass
Returns
width
height
xdescent
ydescent
list of (xoffset, yoffset) pairs
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float) or callable
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
width float
zorder float | |
doc_29133 |
Return filter function to be used for agg filter. | |
doc_29134 |
Return len(self). | |
doc_29135 | enum.IntEnum collection of SSL and TLS versions for SSLContext.maximum_version and SSLContext.minimum_version. New in version 3.7. | |
doc_29136 | operator.__delitem__(a, b)
Remove the value of a at index b. | |
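For example (operator.delitem is the functional spelling of the same operation):

```python
import operator

lst = ["a", "b", "c"]
operator.delitem(lst, 1)   # equivalent to: del lst[1]
print(lst)                 # ['a', 'c']
```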
doc_29137 | Return an iterable of the weak references to the values. | |
doc_29138 | Class used to record warnings for unit tests. See documentation of check_warnings() above for more details. | |
doc_29139 | tf.compat.v1.image.sample_distorted_bounding_box(
image_size, bounding_boxes, seed=None, seed2=None, min_object_covered=0.1,
aspect_ratio_range=None, area_range=None, max_attempts=None,
use_image_if_no_bounding_boxes=None, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: seed2 arg is deprecated. Use sample_distorted_bounding_box_v2 instead. Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an image_size, bounding_boxes and a series of constraints. The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like. Bounding boxes are supplied and returned as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and height of the underlying image. For example, # Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
tf.shape(image),
bounding_boxes=bounding_boxes,
min_object_covered=0.1)
# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
bbox_for_draw)
tf.compat.v1.summary.image('images_with_box', image_with_box)
# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)
Note that if no bounding box information is available, setting use_image_if_no_bounding_boxes = True will assume there is a single implicit bounding box covering the whole image. If use_image_if_no_bounding_boxes is false and no bounding boxes are supplied, an error is raised.
Args
image_size A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64. 1-D, containing [height, width, channels].
bounding_boxes A Tensor of type float32. 3-D with shape [batch, N, 4] describing the N bounding boxes associated with the image.
seed An optional int. Defaults to 0. If either seed or seed2 are set to non-zero, the random number generator is seeded by the given seed. Otherwise, it is seeded by a random seed.
seed2 An optional int. Defaults to 0. A second seed to avoid seed collision.
min_object_covered A Tensor of type float32. Defaults to 0.1. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
aspect_ratio_range An optional list of floats. Defaults to [0.75, 1.33]. The cropped area of the image must have an aspect ratio = width / height within this range.
area_range An optional list of floats. Defaults to [0.05, 1]. The cropped area of the image must contain a fraction of the supplied image within this range.
max_attempts An optional int. Defaults to 100. Number of attempts at generating a cropped region of the image of the specified constraints. After max_attempts failures, return the entire image.
use_image_if_no_bounding_boxes An optional bool. Defaults to False. Controls behavior if no bounding boxes supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
name A name for the operation (optional).
Returns A tuple of Tensor objects (begin, size, bboxes). begin A Tensor. Has the same type as image_size. 1-D, containing [offset_height, offset_width, 0]. Provide as input to tf.slice.
size A Tensor. Has the same type as image_size. 1-D, containing [target_height, target_width, -1]. Provide as input to tf.slice.
bboxes A Tensor of type float32. 3-D with shape [1, 1, 4] containing the distorted bounding box. Provide as input to tf.image.draw_bounding_boxes. | |
doc_29140 | See Migration guide for more details. tf.compat.v1.raw_ops.Round
tf.raw_ops.Round(
x, name=None
)
Rounds half to even. Also known as banker's rounding. If you want to round according to the current system rounding mode, use std::rint.
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
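Python's built-in round() uses the same round-half-to-even rule, which makes the behaviour easy to check without TensorFlow:

```python
# Halfway cases go to the nearest even integer (banker's rounding).
print([round(x) for x in (0.5, 1.5, 2.5, -0.5, -1.5)])  # [0, 2, 2, 0, -2]
```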
doc_29141 | A ModelForm for creating a new user. It has three fields: username (from the user model), password1, and password2. It verifies that password1 and password2 match, validates the password using validate_password(), and sets the user’s password using set_password(). | |
doc_29142 | See Migration guide for more details. tf.compat.v1.ragged.boolean_mask
tf.ragged.boolean_mask(
data, mask, name=None
)
Returns a potentially ragged tensor that is formed by retaining the elements in data where the corresponding value in mask is True.
output[a1...aA, i, b1...bB] = data[a1...aA, j, b1...bB] Where j is the ith True entry of mask[a1...aA].
Note that output preserves the mask dimensions a1...aA; this differs from tf.boolean_mask, which flattens those dimensions.
Args
data A potentially ragged tensor.
mask A potentially ragged boolean tensor. mask's shape must be a prefix of data's shape. rank(mask) must be known statically.
name A name prefix for the returned tensor (optional).
Returns A potentially ragged tensor that is formed by retaining the elements in data where the corresponding value in mask is True.
rank(output) = rank(data).
output.ragged_rank = max(data.ragged_rank, rank(mask) - 1).
Raises
ValueError if rank(mask) is not known statically; or if mask.shape is not a prefix of data.shape. Examples:
# Aliases for True & False so data and mask line up.
T, F = (True, False)
tf.ragged.boolean_mask( # Mask a 2D Tensor.
data=[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
mask=[[T, F, T], [F, F, F], [T, F, F]]).to_list()
[[1, 3], [], [7]]
tf.ragged.boolean_mask( # Mask a 2D RaggedTensor.
tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),
tf.ragged.constant([[F, F, T], [F], [T, T]])).to_list()
[[3], [], [5, 6]]
tf.ragged.boolean_mask( # Mask rows of a 2D RaggedTensor.
tf.ragged.constant([[1, 2, 3], [4], [5, 6]]),
tf.ragged.constant([True, False, True])).to_list()
[[1, 2, 3], [5, 6]] | |
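For 2-D inputs the semantics above can be mimicked with plain Python lists (an illustrative sketch, not the TensorFlow implementation; the function name is made up):

```python
def boolean_mask_2d(data, mask):
    """Keep data[i][j] where mask[i][j] is True; rows may become ragged."""
    if mask and isinstance(mask[0], list):           # element-wise mask
        return [[d for d, m in zip(row, mrow) if m]
                for row, mrow in zip(data, mask)]
    return [row for row, m in zip(data, mask) if m]  # row mask

print(boolean_mask_2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                      [[True, False, True], [False, False, False],
                       [True, False, False]]))
# [[1, 3], [], [7]]
```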
doc_29143 |
Take values from the input array by matching 1d index and data slices. This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to look up values in the latter. These slices can be different lengths. Functions returning an index along an axis, like argsort and argpartition, produce suitable indices for this function. New in version 1.15.0. Parameters
arrndarray (Ni…, M, Nk…)
Source array
indicesndarray (Ni…, J, Nk…)
Indices to take along each 1d slice of arr. This must match the dimension of arr, but dimensions Ni… and Nk… only need to broadcast against arr.
axisint
The axis to take 1d slices along. If axis is None, the input array is treated as if it had first been flattened to 1d, for consistency with sort and argsort. Returns
out: ndarray (Ni…, J, Nk…)
The indexed result. See also take
Take along an axis, using the same indices for every 1d slice put_along_axis
Put values into the destination array by matching 1d index and data slices Notes This is equivalent to (but faster than) the following use of ndindex and s_, which sets each of ii and kk to a tuple of indices: Ni, M, Nk = a.shape[:axis], a.shape[axis], a.shape[axis+1:]
J = indices.shape[axis] # Need not equal M
out = np.empty(Ni + (J,) + Nk)
for ii in ndindex(Ni):
for kk in ndindex(Nk):
a_1d = a [ii + s_[:,] + kk]
indices_1d = indices[ii + s_[:,] + kk]
out_1d = out [ii + s_[:,] + kk]
for j in range(J):
out_1d[j] = a_1d[indices_1d[j]]
Equivalently, eliminating the inner loop, the last two lines would be: out_1d[:] = a_1d[indices_1d]
Examples For this sample array >>> a = np.array([[10, 30, 20], [60, 40, 50]])
We can sort either by using sort directly, or argsort and this function >>> np.sort(a, axis=1)
array([[10, 20, 30],
[40, 50, 60]])
>>> ai = np.argsort(a, axis=1); ai
array([[0, 2, 1],
[1, 2, 0]])
>>> np.take_along_axis(a, ai, axis=1)
array([[10, 20, 30],
[40, 50, 60]])
The same works for max and min, if you expand the dimensions: >>> np.expand_dims(np.max(a, axis=1), axis=1)
array([[30],
[60]])
>>> ai = np.expand_dims(np.argmax(a, axis=1), axis=1)
>>> ai
array([[1],
[0]])
>>> np.take_along_axis(a, ai, axis=1)
array([[30],
[60]])
If we want to get the max and min at the same time, we can stack the indices first >>> ai_min = np.expand_dims(np.argmin(a, axis=1), axis=1)
>>> ai_max = np.expand_dims(np.argmax(a, axis=1), axis=1)
>>> ai = np.concatenate([ai_min, ai_max], axis=1)
>>> ai
array([[0, 1],
[1, 0]])
>>> np.take_along_axis(a, ai, axis=1)
array([[10, 30],
[40, 60]]) | |
doc_29144 |
Predefined split cross-validator Provides train/test indices to split data into train/test sets using a predefined scheme specified by the user with the test_fold parameter. Read more in the User Guide. New in version 0.16. Parameters
test_foldarray-like of shape (n_samples,)
The entry test_fold[i] represents the index of the test set that sample i belongs to. It is possible to exclude sample i from any test set (i.e. include sample i in every training set) by setting test_fold[i] equal to -1. Examples >>> import numpy as np
>>> from sklearn.model_selection import PredefinedSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([0, 0, 1, 1])
>>> test_fold = [0, 1, -1, 1]
>>> ps = PredefinedSplit(test_fold)
>>> ps.get_n_splits()
2
>>> print(ps)
PredefinedSplit(test_fold=array([ 0, 1, -1, 1]))
>>> for train_index, test_index in ps.split():
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [1 2 3] TEST: [0]
TRAIN: [0 2] TEST: [1 3]
Methods
get_n_splits([X, y, groups]) Returns the number of splitting iterations in the cross-validator
split([X, y, groups]) Generate indices to split data into training and test set.
get_n_splits(X=None, y=None, groups=None) [source]
Returns the number of splitting iterations in the cross-validator Parameters
Xobject
Always ignored, exists for compatibility.
yobject
Always ignored, exists for compatibility.
groupsobject
Always ignored, exists for compatibility. Returns
n_splitsint
Returns the number of splitting iterations in the cross-validator.
split(X=None, y=None, groups=None) [source]
Generate indices to split data into training and test set. Parameters
Xobject
Always ignored, exists for compatibility.
yobject
Always ignored, exists for compatibility.
groupsobject
Always ignored, exists for compatibility. Yields
trainndarray
The training set indices for that split.
testndarray
The testing set indices for that split. | |
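The splitting scheme is simple enough to sketch without scikit-learn (illustrative only; predefined_split is a made-up name):

```python
def predefined_split(test_fold):
    """Yield (train, test) index lists for each fold id != -1."""
    folds = sorted({f for f in test_fold if f != -1})
    for f in folds:
        test = [i for i, v in enumerate(test_fold) if v == f]
        train = [i for i, v in enumerate(test_fold) if v != f]
        yield train, test

for train, test in predefined_split([0, 1, -1, 1]):
    print("TRAIN:", train, "TEST:", test)
# TRAIN: [1, 2, 3] TEST: [0]
# TRAIN: [0, 2] TEST: [1, 3]
```

Note how sample 2 (test_fold == -1) appears in every training set and no test set, as described above.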
doc_29145 | Dictionary mapping host names to (login, account, password) tuples. The ‘default’ entry, if any, is represented as a pseudo-host by that name. | |
doc_29146 | Return True if the object is a generator. | |
doc_29147 | See Migration guide for more details. tf.compat.v1.xla.experimental.compile
tf.xla.experimental.compile(
computation, inputs=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: xla.experimental.compile is deprecated. Consider using tf.function(experimental_compile=True)
Note: In eager mode, computation will have @tf.function semantics.
Args
computation A Python function that builds a computation to apply to the input. If the function takes n inputs, 'inputs' should be a list of n tensors. computation may return a list of operations and tensors. Tensors must come before operations in the returned list. The return value of compile is a list of tensors corresponding to the tensors from the output of computation. All Operations returned from computation will be executed when evaluating any of the returned output tensors.
inputs A list of inputs or None (equivalent to an empty list). Each input can be a nested structure containing values that are convertible to tensors. Note that passing an N-dimension list of compatible values will result in a N-dimension list of scalar tensors rather than a single Rank-N tensors. If you need different behavior, convert part of inputs to tensors with tf.convert_to_tensor.
Returns Same data structure as if computation(*inputs) is called directly with some exceptions for correctness. Exceptions include: 1) None output: a NoOp would be returned which control-depends on computation. 2) Single value output: A tuple containing the value would be returned. 3) Operation-only outputs: a NoOp would be returned which control-depends on computation.
Raises
RuntimeError if called when eager execution is enabled. Known issues: When a tf.random operation is built with XLA, the implementation doesn't pass the user provided seed to the XLA compiler. As such, the XLA compiler generates a random number and uses it as a seed when compiling the operation. This implementation causes a violation of the Tensorflow defined semantics in two aspects. First, changing the value of the user defined seed doesn't change the numbers generated by the operation. Second, when a seed is not specified, running the program multiple times will generate the same numbers. | |
doc_29148 |
Returns pointers to the end-points of an array. Parameters
andarray
Input array. It must conform to the Python-side of the array interface. Returns
(low, high)tuple of 2 integers
The first integer is the first byte of the array, the second integer is just past the last byte of the array. If a is not contiguous it will not use every byte between the (low, high) values. Examples >>> I = np.eye(2, dtype='f'); I.dtype
dtype('float32')
>>> low, high = np.byte_bounds(I)
>>> high - low == I.size*I.itemsize
True
>>> I = np.eye(2); I.dtype
dtype('float64')
>>> low, high = np.byte_bounds(I)
>>> high - low == I.size*I.itemsize
True | |
doc_29149 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_29150 | get the number of bytes used per Surface row get_pitch() -> int Return the number of bytes separating each row in the Surface. Surfaces in video memory are not always linearly packed. Subsurfaces will also have a larger pitch than their real width. This value is not needed for normal pygame usage. | |
doc_29151 | Returns True if the session cookie should be httponly. This currently just returns the value of the SESSION_COOKIE_HTTPONLY config var. Parameters
app (Flask) – Return type
bool | |
doc_29152 |
Get the color for low out-of-range values. | |
doc_29153 | True if the address is reserved for multicast use. See RFC 3171 (for IPv4) or RFC 2373 (for IPv6). | |
doc_29154 |
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xndarray of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Yndarray of shape (n_samples_Y, n_features), default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. | |
doc_29155 | See Migration guide for more details. tf.compat.v1.math.erfinv
tf.math.erfinv(
x, name=None
)
Given x, compute the inverse error function of x. This function is the inverse of tf.math.erf.
Args
x Tensor with type float or double.
name A name for the operation (optional).
Returns Inverse error function of x. | |
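Since erfinv is the inverse of tf.math.erf, the relationship can be checked in plain Python using math.erf and bisection (an illustrative sketch, not TensorFlow's implementation):

```python
import math

def erfinv(y, lo=-6.0, hi=6.0):
    """Invert math.erf on (-1, 1) by bisection; erf is monotonic."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(math.erf(erfinv(0.5)))  # ≈ 0.5
```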
doc_29156 | See Migration guide for more details. tf.compat.v1.keras.preprocessing.image.img_to_array
tf.keras.preprocessing.image.img_to_array(
img, data_format=None, dtype=None
)
Usage: from PIL import Image
img_data = np.random.random(size=(100, 100, 3))
img = tf.keras.preprocessing.image.array_to_img(img_data)
array = tf.keras.preprocessing.image.img_to_array(img)
Arguments
img Input PIL Image instance.
data_format Image data format, can be either "channels_first" or "channels_last". Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last").
dtype Dtype to use. Defaults to None, in which case the global setting tf.keras.backend.floatx() is used (unless you changed it, it defaults to "float32").
Returns A 3D Numpy array.
Raises
ValueError if invalid img or data_format is passed. | |
doc_29157 | Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the ith row of tensor is copied to the jth row of self. The dimth dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised. Note If index contains duplicate entries, multiple elements from tensor will be copied to the same index of self. The result is nondeterministic since it depends on which copy occurs last. Parameters
dim (int) – dimension along which to index
index (LongTensor) – indices of tensor to select from
tensor (Tensor) – the tensor containing values to copy Example: >>> x = torch.zeros(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_copy_(0, index, t)
tensor([[ 1., 2., 3.],
[ 0., 0., 0.],
[ 7., 8., 9.],
[ 0., 0., 0.],
[ 4., 5., 6.]]) | |
doc_29158 | true if this cd device initialized get_init() -> bool Test if this CDROM device is initialized. This is different than the pygame.cdrom.init() since each drive must also be initialized individually. | |
doc_29159 |
Read an array header from a filelike object using the 1.0 file format version. This will leave the file object located just after the header. Parameters
fpfilelike object
A file object or something with a read() method like a file. Returns
shapetuple of int
The shape of the array.
fortran_orderbool
True if the array data is stored in Fortran (column-major) order; False if it is stored in C (row-major) order.
dtypedtype
The dtype of the file’s data. Raises
ValueError
If the data is invalid. | |
doc_29160 | Returns an AppConfig for the application with the given app_label. Raises LookupError if no such application exists. | |
doc_29161 |
Agglomerate features. Similar to AgglomerativeClustering, but recursively merges features instead of samples. Read more in the User Guide. Parameters
n_clustersint, default=2
The number of clusters to find. It must be None if distance_threshold is not None.
affinitystr or callable, default=’euclidean’
Metric used to compute the linkage. Can be “euclidean”, “l1”, “l2”, “manhattan”, “cosine”, or ‘precomputed’. If linkage is “ward”, only “euclidean” is accepted.
memorystr or object with the joblib.Memory interface, default=None
Used to cache the output of the computation of the tree. By default, no caching is done. If a string is given, it is the path to the caching directory.
connectivityarray-like or callable, default=None
Connectivity matrix. Defines for each feature the neighboring features following a given structure of the data. This can be a connectivity matrix itself or a callable that transforms the data into a connectivity matrix, such as derived from kneighbors_graph. Default is None, i.e, the hierarchical clustering algorithm is unstructured.
compute_full_tree‘auto’ or bool, default=’auto’
Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of features. This option is useful only when specifying a connectivity matrix. Note also that when varying the number of clusters and using caching, it may be advantageous to compute the full tree. It must be True if distance_threshold is not None. By default compute_full_tree is "auto", which is equivalent to True when distance_threshold is not None or when n_clusters is smaller than the maximum of 100 and 0.02 * n_samples. Otherwise, "auto" is equivalent to False.
linkage{‘ward’, ‘complete’, ‘average’, ‘single’}, default=’ward’
Which linkage criterion to use. The linkage criterion determines which distance to use between sets of features. The algorithm will merge the pairs of cluster that minimize this criterion. ward minimizes the variance of the clusters being merged. average uses the average of the distances of each feature of the two sets. complete or maximum linkage uses the maximum distances between all features of the two sets. single uses the minimum of the distances between all observations of the two sets.
pooling_funccallable, default=np.mean
This combines the values of agglomerated features into a single value, and should accept an array of shape [M, N] and the keyword argument axis=1, and reduce it to an array of size [M].
distance_thresholdfloat, default=None
The linkage distance threshold above which, clusters will not be merged. If not None, n_clusters must be None and compute_full_tree must be True. New in version 0.21.
compute_distancesbool, default=False
Computes distances between clusters even if distance_threshold is not used. This can be used to make dendrogram visualization, but introduces a computational and memory overhead. New in version 0.24. Attributes
n_clusters_int
The number of clusters found by the algorithm. If distance_threshold=None, it will be equal to the given n_clusters.
labels_array-like of (n_features,)
cluster labels for each feature.
n_leaves_int
Number of leaves in the hierarchical tree.
n_connected_components_int
The estimated number of connected components in the graph. New in version 0.21: n_connected_components_ was added to replace n_components_.
children_array-like of shape (n_nodes-1, 2)
The children of each non-leaf node. Values less than n_features correspond to leaves of the tree which are the original samples. A node i greater than or equal to n_features is a non-leaf node and has children children_[i - n_features]. Alternatively at the i-th iteration, children[i][0] and children[i][1] are merged to form node n_features + i
distances_array-like of shape (n_nodes-1,)
Distances between nodes in the corresponding place in children_. Only computed if distance_threshold is used or compute_distances is set to True. Examples >>> import numpy as np
>>> from sklearn import datasets, cluster
>>> digits = datasets.load_digits()
>>> images = digits.images
>>> X = np.reshape(images, (len(images), -1))
>>> agglo = cluster.FeatureAgglomeration(n_clusters=32)
>>> agglo.fit(X)
FeatureAgglomeration(n_clusters=32)
>>> X_reduced = agglo.transform(X)
>>> X_reduced.shape
(1797, 32)
Methods
fit(X[, y]) Fit the hierarchical clustering on the data
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(Xred) Invert the transformation.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform a new matrix using the built clustering
fit(X, y=None, **params) [source]
Fit the hierarchical clustering on the data Parameters
Xarray-like of shape (n_samples, n_features)
The data
yIgnored
Returns
self
property fit_predict
Fit the hierarchical clustering from features or distance matrix, and return cluster labels. Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples)
Training instances to cluster, or distances between instances if affinity='precomputed'.
yIgnored
Not used, present here for API consistency by convention. Returns
labelsndarray of shape (n_samples,)
Cluster labels.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(Xred) [source]
Invert the transformation. Return a vector of size n_features with the values of Xred assigned to each group of features. Parameters
Xredarray-like of shape (n_samples, n_clusters) or (n_clusters,)
The values to be assigned to each cluster of samples Returns
Xndarray of shape (n_samples, n_features) or (n_features,)
A vector of size n_samples with the values of Xred assigned to each of the cluster of samples.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Transform a new matrix using the built clustering Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples,)
An M by N array of M observations in N dimensions, or a length-M array of M one-dimensional observations.
Yndarray of shape (n_samples, n_clusters) or (n_clusters,)
The pooled values for each feature cluster. | |
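The pooling that transform performs — reducing each cluster of features to one value with pooling_func — can be sketched without scikit-learn (pool_features is a made-up name; mean pooling by default, as in the estimator):

```python
def pool_features(X, labels, pooling_func=lambda vals: sum(vals) / len(vals)):
    """Pool the columns of X (a list of rows) by cluster label."""
    n_clusters = max(labels) + 1
    return [[pooling_func([row[j] for j, lab in enumerate(labels) if lab == c])
             for c in range(n_clusters)]
            for row in X]

# Features 0 and 1 form cluster 0, feature 2 forms cluster 1.
print(pool_features([[1.0, 3.0, 10.0]], [0, 0, 1]))  # [[2.0, 10.0]]
```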
doc_29162 | Returns the sum of all elements, treating Not a Numbers (NaNs) as zero. Parameters
input (Tensor) – the input tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> a = torch.tensor([1., 2., float('nan'), 4.])
>>> torch.nansum(a)
tensor(7.)
torch.nansum(input, dim, keepdim=False, *, dtype=None) → Tensor
Returns the sum of each row of the input tensor in the given dimension dim, treating Not a Numbers (NaNs) as zero. If dim is a list of dimensions, reduce over all of them. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). Parameters
input (Tensor) – the input tensor.
dim (int or tuple of python:ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None. Example: >>> torch.nansum(torch.tensor([1., float("nan")]))
1.0
>>> a = torch.tensor([[1, 2], [3., float("nan")]])
>>> torch.nansum(a)
tensor(6.)
>>> torch.nansum(a, dim=0)
tensor([4., 2.])
>>> torch.nansum(a, dim=1)
tensor([3., 3.]) | |
doc_29163 |
Return Addition of series and other, element-wise (binary operator add). Equivalent to series + other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. See also Series.radd
Reverse of the Addition operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.add(b, fill_value=0)
a 2.0
b 1.0
c 1.0
d 1.0
e NaN
dtype: float64 | |
doc_29164 | os.P_PGID
os.P_ALL
These are the possible values for idtype in waitid(). They affect how id is interpreted. Availability: Unix. New in version 3.3. | |
doc_29165 | An abstract base class for a loader which implements the optional PEP 302 protocol for loading arbitrary resources from the storage back-end. Deprecated since version 3.7: This ABC is deprecated in favour of supporting resource loading through importlib.abc.ResourceReader.
abstractmethod get_data(path)
An abstract method to return the bytes for the data located at path. Loaders that have a file-like storage back-end that allows storing arbitrary data can implement this abstract method to give direct access to the data stored. OSError is to be raised if the path cannot be found. The path is expected to be constructed using a module’s __file__ attribute or an item from a package’s __path__. Changed in version 3.4: Raises OSError instead of NotImplementedError. | |
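A minimal illustrative subclass might look as follows (the class name and dict-backed storage are invented for this sketch; a real loader would also implement module loading):

```python
import importlib.abc

class DictResourceLoader(importlib.abc.ResourceLoader):
    """Toy loader whose storage back-end is an in-memory dict."""

    def __init__(self, store):
        self._store = store

    def get_data(self, path):
        try:
            return self._store[path]
        except KeyError:
            # The ABC's contract: raise OSError if the path cannot be found.
            raise OSError(f"no data at {path!r}")

loader = DictResourceLoader({"pkg/data.txt": b"hello"})
print(loader.get_data("pkg/data.txt"))  # b'hello'
```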
doc_29166 | Handle a record. This just loops through the handlers offering them the record to handle. The actual object passed to the handlers is that which is returned from prepare(). | |
doc_29167 |
A torch.nn.ConvTranspose2d module with lazy initialization of the in_channels argument of the ConvTranspose2d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1 See also torch.nn.ConvTranspose2d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of ConvTranspose2d | |
doc_29168 | Retrieves a map from Tensor to the appropriate gradient for that Tensor accumulated in the provided context corresponding to the given context_id as part of the distributed autograd backward pass. Parameters
context_id (int) – The autograd context id for which we should retrieve the gradients. Returns
A map where the key is the Tensor and the value is the associated gradient for that Tensor. Example:
>>> import torch.distributed.autograd as dist_autograd
>>> with dist_autograd.context() as context_id:
>>> t1 = torch.rand((3, 3), requires_grad=True)
>>> t2 = torch.rand((3, 3), requires_grad=True)
>>> loss = t1 + t2
>>> dist_autograd.backward(context_id, [loss.sum()])
>>> grads = dist_autograd.get_gradients(context_id)
>>> print(grads[t1])
>>> print(grads[t2]) | |
doc_29169 | Return a named tuple object with three components: year, week and weekday. The ISO calendar is a widely used variant of the Gregorian calendar. 3 The ISO year consists of 52 or 53 full weeks, and where a week starts on a Monday and ends on a Sunday. The first week of an ISO year is the first (Gregorian) calendar week of a year containing a Thursday. This is called week number 1, and the ISO year of that Thursday is the same as its Gregorian year. For example, 2004 begins on a Thursday, so the first week of ISO year 2004 begins on Monday, 29 Dec 2003 and ends on Sunday, 4 Jan 2004: >>> from datetime import date
>>> date(2003, 12, 29).isocalendar()
datetime.IsoCalendarDate(year=2004, week=1, weekday=1)
>>> date(2004, 1, 4).isocalendar()
datetime.IsoCalendarDate(year=2004, week=1, weekday=7)
Changed in version 3.9: Result changed from a tuple to a named tuple. | |
doc_29170 | Dictionary mapping filename extensions to encoding types. | |
doc_29171 |
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters
bbool | |
doc_29172 | sklearn.metrics.zero_one_loss(y_true, y_pred, *, normalize=True, sample_weight=None) [source]
Zero-one classification loss. If normalize is True, return the fraction of misclassifications (float), else it returns the number of misclassifications (int). The best performance is 0. Read more in the User Guide. Parameters
y_true1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels.
y_pred1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier.
normalizebool, default=True
If False, return the number of misclassifications. Otherwise, return the fraction of misclassifications.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
lossfloat or int,
If normalize == True, return the fraction of misclassifications (float), else it returns the number of misclassifications (int). See also
accuracy_score, hamming_loss, jaccard_score
Notes In multilabel classification, the zero_one_loss function corresponds to the subset zero-one loss: for each sample, the entire set of labels must be correctly predicted, otherwise the loss for that sample is equal to one. Examples >>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
1
In the multilabel case with binary label indicators: >>> import numpy as np
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
Examples using sklearn.metrics.zero_one_loss
Discrete versus Real AdaBoost | |
doc_29173 | Raised on dbm.gnu-specific errors, such as I/O errors. KeyError is raised for general mapping errors like specifying an incorrect key. | |
doc_29174 | Return a float with the magnitude (absolute value) of x but the sign of y. On platforms that support signed zeros, copysign(1.0, -0.0) returns -1.0. | |
doc_29175 | See Migration guide for more details. tf.compat.v1.raw_ops.ScanDataset
tf.raw_ops.ScanDataset(
input_dataset, initial_state, other_arguments, f, output_types, output_shapes,
preserve_cardinality=False, use_default_device=True, name=None
)
Args
input_dataset A Tensor of type variant.
initial_state A list of Tensor objects.
other_arguments A list of Tensor objects.
f A function decorated with @Defun.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
preserve_cardinality An optional bool. Defaults to False.
use_default_device An optional bool. Defaults to True.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
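The op threads a state through a dataset with f, starting from initial_state. The underlying scan pattern can be sketched with itertools.accumulate (an illustration of the semantics only — here the running state is also the emitted value, and unlike the op the initial state is included in the output):

```python
from itertools import accumulate

def f(state, x):  # takes (state, element), returns the new state
    return state + x

print(list(accumulate([1, 2, 3, 4], f, initial=0)))  # [0, 1, 3, 6, 10]
```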
doc_29176 | Set to a name that is safe to use as the name of a temporary file. Any temporary file that is created should be closed and unlinked (removed). | |
doc_29177 |
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters
bbool | |
doc_29178 | Django view for the model instances change list/actions page. See note below. | |
doc_29179 | See Migration guide for more details. tf.compat.v1.image.adjust_saturation
tf.image.adjust_saturation(
image, saturation_factor, name=None
)
This is a convenience method that converts RGB images to float representation, converts them to HSV, scales the saturation channel, converts back to RGB and then back to the original data type. If several adjustments are chained it is advisable to minimize the number of redundant conversions. image is an RGB image or images. The image saturation is adjusted by converting the images to HSV and multiplying the saturation (S) channel by saturation_factor and clipping. The images are then converted back to RGB. Usage Example:
x = [[[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]]
tf.image.adjust_saturation(x, 0.5)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=
array([[[ 2. , 2.5, 3. ],
[ 5. , 5.5, 6. ]],
[[ 8. , 8.5, 9. ],
[11. , 11.5, 12. ]]], dtype=float32)>
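The RGB → HSV → scale-S → RGB round trip described above can be sketched with the stdlib colorsys module (illustrative only, for a single float pixel; not the TensorFlow implementation):

```python
import colorsys

# Scale the HSV saturation channel of one RGB pixel and clip to [0, 1].
def adjust_saturation(rgb, saturation_factor):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(max(s * saturation_factor, 0.0), 1.0)
    return colorsys.hsv_to_rgb(h, s, v)

print(adjust_saturation((0.2, 0.4, 0.6), 0.0))  # fully desaturated: (0.6, 0.6, 0.6)
```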
Args
image RGB image or images. The size of the last dimension must be 3.
saturation_factor float. Factor to multiply the saturation by.
name A name for this operation (optional).
Returns Adjusted image(s), same shape and DType as image.
Raises
InvalidArgumentError input must have 3 channels | |
doc_29180 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
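A minimal sketch of how a get_params-style method can be written with introspection (illustrative only; the class and names below are hypothetical, not sklearn's BaseEstimator):

```python
import inspect

class Estimator:
    def __init__(self, alpha=1.0, fit_intercept=True):
        self.alpha = alpha
        self.fit_intercept = fit_intercept

    def get_params(self, deep=True):
        # Parameter names are taken from __init__'s signature.
        names = inspect.signature(type(self).__init__).parameters
        return {n: getattr(self, n) for n in names if n != "self"}

print(Estimator(alpha=0.5).get_params())  # {'alpha': 0.5, 'fit_intercept': True}
```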
doc_29181 |
Alias for set_linewidth. | |
doc_29182 | A Row instance serves as a highly optimized row_factory for Connection objects. It tries to mimic a tuple in most of its features. It supports mapping access by column name and index, iteration, representation, equality testing and len(). If two Row objects have exactly the same columns and their members are equal, they compare equal.
keys()
This method returns a list of column names. Immediately after a query, it is the first member of each tuple in Cursor.description.
Changed in version 3.5: Added support of slicing. | |
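A quick stdlib demonstration of the access styles described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row          # rows come back as sqlite3.Row
conn.execute("CREATE TABLE t (name TEXT, age INTEGER)")
conn.execute("INSERT INTO t VALUES ('alice', 30)")
row = conn.execute("SELECT name, age FROM t").fetchone()
print(row.keys())           # ['name', 'age']
print(row["name"], row[1])  # mapping access by column name and by index
conn.close()
```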
doc_29183 |
Bases: object Helper to deprecate public access to an attribute (or method). This helper should only be used at class scope, as follows: class Foo:
attr = _deprecate_privatize_attribute(*args, **kwargs)
where all parameters are forwarded to deprecated. This form makes attr a property which forwards read and write access to self._attr (same name but with a leading underscore), with a deprecation warning. Note that the attribute name is derived from the name this helper is assigned to. This helper also works for deprecating methods. | |
doc_29184 | Remove a registered result. Once a result has been removed then stop() will no longer be called on that result object in response to a control-c. | |
doc_29185 |
Compute clustering and transform X to cluster-distance space. Equivalent to fit(X).transform(X), but more efficiently implemented. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data to transform.
yIgnored
Not used, present here for API consistency by convention.
sample_weightarray-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
X_newndarray of shape (n_samples, n_clusters)
X transformed in the new space. | |
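The "cluster-distance space" above can be illustrated in pure Python: each sample is replaced by its distances to the cluster centers (centers are given here rather than fitted; illustrative, not sklearn's implementation):

```python
import math

def transform(X, centers):
    # One row per sample, one column per cluster center.
    return [[math.dist(x, c) for c in centers] for x in X]

print(transform([[0, 0], [3, 4]], [[0, 0], [3, 4]]))
# [[0.0, 5.0], [5.0, 0.0]]
```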
doc_29186 | This handler sends an email to the site ADMINS for each log message it receives. If the log record contains a request attribute, the full details of the request will be included in the email. The email subject will include the phrase “internal IP” if the client’s IP address is in the INTERNAL_IPS setting; if not, it will include “EXTERNAL IP”. If the log record contains stack trace information, that stack trace will be included in the email. The include_html argument of AdminEmailHandler is used to control whether the traceback email includes an HTML attachment containing the full content of the debug web page that would have been produced if DEBUG were True. To set this value in your configuration, include it in the handler definition for django.utils.log.AdminEmailHandler, like this: 'handlers': {
'mail_admins': {
'level': 'ERROR',
'class': 'django.utils.log.AdminEmailHandler',
'include_html': True,
},
},
Be aware of the security implications of logging when using the AdminEmailHandler. By setting the email_backend argument of AdminEmailHandler, the email backend that is being used by the handler can be overridden, like this: 'handlers': {
'mail_admins': {
'level': 'ERROR',
'class': 'django.utils.log.AdminEmailHandler',
'email_backend': 'django.core.mail.backends.filebased.EmailBackend',
},
},
By default, an instance of the email backend specified in EMAIL_BACKEND will be used. The reporter_class argument of AdminEmailHandler allows providing a django.views.debug.ExceptionReporter subclass to customize the traceback text sent in the email body. You provide a string import path to the class you wish to use, like this: 'handlers': {
'mail_admins': {
'level': 'ERROR',
'class': 'django.utils.log.AdminEmailHandler',
'include_html': True,
'reporter_class': 'somepackage.error_reporter.CustomErrorReporter',
},
},
send_mail(subject, message, *args, **kwargs)
Sends emails to admin users. To customize this behavior, you can subclass the AdminEmailHandler class and override this method. | |
doc_29187 | Return the SCRIPT_NAME from the WSGI environment and decode it unless charset is set to None. Parameters
environ (WSGIEnvironment) – WSGI environment to get the path from.
charset (str) – The charset for the path, or None if no decoding should be performed.
errors (str) – The decoding error handling. Return type
str Changelog New in version 0.9. | |
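The decoding step can be sketched against a plain WSGI environ dict (illustrative sketch under the WSGI "bytes-as-latin-1" convention; werkzeug's actual helper handles more edge cases):

```python
# WSGI strings carry bytes as latin-1 code points; decoding re-interprets
# them in the requested charset (illustrative, not werkzeug's code).
def get_script_name(environ, charset="utf-8", errors="replace"):
    path = environ.get("SCRIPT_NAME", "")
    if charset is None:
        return path
    return path.encode("latin-1").decode(charset, errors)

print(get_script_name({"SCRIPT_NAME": "/app"}))  # /app
```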
doc_29188 |
Return self%=value. | |
doc_29189 |
Remove the rubberband. | |
doc_29190 | Return the system description of the signal signalnum, such as “Interrupt”, “Segmentation fault”, etc. Returns None if the signal is not recognized. New in version 3.8. | |
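For example, with the stdlib signal module (Python 3.8+); the exact description text is platform-dependent:

```python
import signal

# Human-readable description of a signal, or None if unrecognized.
desc = signal.strsignal(signal.SIGINT)
print(desc)  # e.g. "Interrupt" on Linux
```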
doc_29191 |
Return self[key]. | |
doc_29192 | Provides an overriding level level for all loggers which takes precedence over the logger’s own level. When the need arises to temporarily throttle logging output down across the whole application, this function can be useful. Its effect is to disable all logging calls of severity level and below, so that if you call it with a value of INFO, then all INFO and DEBUG events would be discarded, whereas those of severity WARNING and above would be processed according to the logger’s effective level. If logging.disable(logging.NOTSET) is called, it effectively removes this overriding level, so that logging output again depends on the effective levels of individual loggers. Note that if you have defined any custom logging level higher than CRITICAL (this is not recommended), you won’t be able to rely on the default value for the level parameter, but will have to explicitly supply a suitable value. Changed in version 3.7: The level parameter was defaulted to level CRITICAL. See bpo-28524 for more information about this change. | |
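A quick illustration of the overriding level and its removal:

```python
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)

logging.disable(logging.INFO)    # discard INFO and DEBUG everywhere
print(logger.isEnabledFor(logging.INFO))     # False
print(logger.isEnabledFor(logging.WARNING))  # True

logging.disable(logging.NOTSET)  # remove the override
print(logger.isEnabledFor(logging.INFO))     # True
```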
doc_29193 | Given a quantized Tensor, dequantize it and return the dequantized float Tensor. | |
doc_29194 |
Return reshaped DataFrame organized by given index / column values. Reshape data (produce a “pivot” table) based on column values. Uses unique values from specified index / columns to form axes of the resulting DataFrame. This function does not support data aggregation, multiple values will result in a MultiIndex in the columns. See the User Guide for more on reshaping. Parameters
index:str or object or a list of str, optional
Column to use to make new frame’s index. If None, uses existing index. Changed in version 1.1.0: Also accept list of index names.
columns:str or object or a list of str
Column to use to make new frame’s columns. Changed in version 1.1.0: Also accept list of columns names.
values:str, object or a list of the previous, optional
Column(s) to use for populating new frame’s values. If not specified, all remaining columns will be used and the result will have hierarchically indexed columns. Returns
DataFrame
Returns reshaped DataFrame. Raises
ValueError:
When there are any index, columns combinations with multiple values. Use DataFrame.pivot_table when you need to aggregate. See also DataFrame.pivot_table
Generalization of pivot that can handle duplicate values for one index/column pair. DataFrame.unstack
Pivot based on the index values instead of a column. wide_to_long
Wide panel to long format. Less flexible but more user-friendly than melt. Notes For finer-tuned control, see hierarchical indexing documentation along with the related stack/unstack methods. Examples
>>> df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two',
... 'two'],
... 'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
... 'baz': [1, 2, 3, 4, 5, 6],
... 'zoo': ['x', 'y', 'z', 'q', 'w', 't']})
>>> df
foo bar baz zoo
0 one A 1 x
1 one B 2 y
2 one C 3 z
3 two A 4 q
4 two B 5 w
5 two C 6 t
>>> df.pivot(index='foo', columns='bar', values='baz')
bar A B C
foo
one 1 2 3
two 4 5 6
>>> df.pivot(index='foo', columns='bar')['baz']
bar A B C
foo
one 1 2 3
two 4 5 6
>>> df.pivot(index='foo', columns='bar', values=['baz', 'zoo'])
baz zoo
bar A B C A B C
foo
one 1 2 3 x y z
two 4 5 6 q w t
You could also assign a list of column names or a list of index names.
>>> df = pd.DataFrame({
... "lev1": [1, 1, 1, 2, 2, 2],
... "lev2": [1, 1, 2, 1, 1, 2],
... "lev3": [1, 2, 1, 2, 1, 2],
... "lev4": [1, 2, 3, 4, 5, 6],
... "values": [0, 1, 2, 3, 4, 5]})
>>> df
lev1 lev2 lev3 lev4 values
0 1 1 1 1 0
1 1 1 2 2 1
2 1 2 1 3 2
3 2 1 2 4 3
4 2 1 1 5 4
5 2 2 2 6 5
>>> df.pivot(index="lev1", columns=["lev2", "lev3"],values="values")
lev2 1 2
lev3 1 2 1 2
lev1
1 0.0 1.0 2.0 NaN
2 4.0 3.0 NaN 5.0
>>> df.pivot(index=["lev1", "lev2"], columns=["lev3"],values="values")
lev3 1 2
lev1 lev2
1 1 0.0 1.0
2 2.0 NaN
2 1 4.0 3.0
2 NaN 5.0
A ValueError is raised if there are any duplicates.
>>> df = pd.DataFrame({"foo": ['one', 'one', 'two', 'two'],
... "bar": ['A', 'A', 'B', 'C'],
... "baz": [1, 2, 3, 4]})
>>> df
foo bar baz
0 one A 1
1 one A 2
2 two B 3
3 two C 4
Notice that the first two rows are the same for our index and columns arguments.
>>> df.pivot(index='foo', columns='bar', values='baz')
Traceback (most recent call last):
...
ValueError: Index contains duplicate entries, cannot reshape | |
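The core reshaping rule above (and the duplicate-entry ValueError) can be sketched in pure Python (illustrative, not pandas' implementation):

```python
def pivot(rows, index, columns, values):
    out = {}
    for r in rows:
        key = (r[index], r[columns])
        if key in out:  # same index/column pair seen twice
            raise ValueError("Index contains duplicate entries, cannot reshape")
        out[key] = r[values]
    return out

rows = [{"foo": "one", "bar": "A", "baz": 1},
        {"foo": "one", "bar": "B", "baz": 2},
        {"foo": "two", "bar": "A", "baz": 4}]
print(pivot(rows, "foo", "bar", "baz"))
# {('one', 'A'): 1, ('one', 'B'): 2, ('two', 'A'): 4}
```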
doc_29195 |
Set whether the Axes responds to navigation toolbar commands. Parameters
bbool | |
doc_29196 | See Migration guide for more details. tf.compat.v1.experimental.async_clear_error
tf.experimental.async_clear_error()
In async execution mode, an error in op/function execution can lead to errors in subsequent ops/functions that are scheduled but not yet executed. Calling this method clears all pending operations and resets the async execution state. Example: while True:
try:
# Step function updates the metric `loss` internally
train_step_fn()
except tf.errors.OutOfRangeError:
tf.experimental.async_clear_error()
break
logging.info('loss = %s', loss.numpy()) | |
doc_29197 | Computes input * log(other) with the following cases. outi={NaNif otheri=NaN0if inputi=0.0inputi∗log(otheri)otherwise\text{out}_{i} = \begin{cases} \text{NaN} & \text{if } \text{other}_{i} = \text{NaN} \\ 0 & \text{if } \text{input}_{i} = 0.0 \\ \text{input}_{i} * \log{(\text{other}_{i})} & \text{otherwise} \end{cases}
Similar to SciPy’s scipy.special.xlogy. Parameters
input (Number or Tensor) –
other (Number or Tensor) – Note At least one of input or other must be a tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.zeros(5,)
>>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])
>>> torch.xlogy(x, y)
tensor([0., 0., 0., 0., nan])
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([3, 2, 1])
>>> torch.xlogy(x, y)
tensor([1.0986, 1.3863, 0.0000])
>>> torch.xlogy(x, 4)
tensor([1.3863, 2.7726, 4.1589])
>>> torch.xlogy(2, y)
tensor([2.1972, 1.3863, 0.0000]) | |
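The piecewise rule above can be checked element-wise in pure Python (a scalar sketch, not torch's implementation; note that NaN in other takes precedence even when input is zero, matching the torch example):

```python
import math

def xlogy(x, y):
    if math.isnan(y):   # NaN in `other` wins, even when x == 0
        return math.nan
    if x == 0.0:
        return 0.0
    return x * math.log(y)

print(xlogy(0.0, -1.0))  # 0.0 -- log is never evaluated
print(xlogy(1.0, 3.0))   # log(3), about 1.0986
```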
doc_29198 | transfer image to string buffer tostring(Surface, format, flipped=False) -> string Creates a string that can be transferred with the 'fromstring' method in other Python imaging packages. Some Python image packages prefer their images in bottom-to-top format (PyOpenGL for example). If you pass True for the flipped argument, the string buffer will be vertically flipped. The format argument is a string of one of the following values. Note that only 8-bit Surfaces can use the "P" format. The other formats will work for any Surface. Also note that other Python image packages support more formats than pygame.
P, 8-bit palettized Surfaces
RGB, 24-bit image
RGBX, 32-bit image with unused space
RGBA, 32-bit image with an alpha channel
ARGB, 32-bit image with alpha channel first
RGBA_PREMULT, 32-bit image with colors scaled by alpha channel
ARGB_PREMULT, 32-bit image with colors scaled by alpha channel, alpha channel first | |
doc_29199 | The HTTP request method to use. By default its value is None, which means that get_method() will do its normal computation of the method to be used. Its value can be set (thus overriding the default computation in get_method()) either by providing a default value by setting it at the class level in a Request subclass, or by passing a value in to the Request constructor via the method argument. New in version 3.3. Changed in version 3.4: A default value can now be set in subclasses; previously it could only be set via the constructor argument. |
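For instance, with the stdlib urllib.request (no network access needed to inspect the method):

```python
import urllib.request

# Passing method= overrides get_method()'s default GET/POST inference.
req = urllib.request.Request("http://example.com/", method="HEAD")
print(req.get_method())   # HEAD

# Without method=, the presence of data selects POST.
req2 = urllib.request.Request("http://example.com/", data=b"payload")
print(req2.get_method())  # POST
```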