_id | text | title
|---|---|---|
doc_26000 |
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device. By default, this returns the peak cached memory since the beginning of this program. reset_peak_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory amount of each iteration in a training loop. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note See Memory management for more details about GPU memory management. | |
doc_26001 | Resizes the map and the underlying file, if any. If the mmap was created with ACCESS_READ or ACCESS_COPY, resizing the map will raise a TypeError exception. | |
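A minimal sketch of the resize behaviour described above, using the standard-library mmap module (the temporary-file setup is our own illustration, and the behaviour shown assumes a POSIX system):

```python
import mmap
import tempfile

resize_raised = False
with tempfile.TemporaryFile() as f:
    f.write(b"hello")
    f.flush()
    # Default ACCESS_WRITE map: resize() grows both the map and the file.
    mm = mmap.mmap(f.fileno(), 0)
    mm.resize(10)
    assert len(mm) == 10
    # An ACCESS_READ map cannot be resized: TypeError is raised.
    ro = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        ro.resize(20)
    except TypeError:
        resize_raised = True
    ro.close()
    mm.close()
```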
doc_26002 | An entity reference contained another reference to the same entity, possibly via a different name and possibly indirectly. | |
doc_26003 |
Fill the area between two vertical curves. The curves are defined by the points (y, x1) and (y, x2). This creates one or multiple polygons describing the filled area. You may exclude some vertical sections from filling using where. By default, the edges connect the given points directly. Use step if the filling should be a step function, i.e. constant in between y. Parameters
yarray (length N)
The y coordinates of the nodes defining the curves.
x1array (length N) or scalar
The x coordinates of the nodes defining the first curve.
x2array (length N) or scalar, default: 0
The x coordinates of the nodes defining the second curve.
wherearray of bool (length N), optional
Define where to exclude some vertical regions from being filled. The filled regions are defined by the coordinates y[where]. More precisely, fill between y[i] and y[i+1] if where[i] and where[i+1]. Note that this definition implies that an isolated True value between two False values in where will not result in filling. Both sides of the True position remain unfilled due to the adjacent False values.
interpolatebool, default: False
This option is only relevant if where is used and the two curves are crossing each other. Semantically, where is often used for x1 > x2 or similar. By default, the nodes of the polygon defining the filled region will only be placed at the positions in the y array. Such a polygon cannot describe the above semantics close to the intersection. The y-sections containing the intersection are simply clipped. Setting interpolate to True will calculate the actual intersection point and extend the filled region up to this point.
step{'pre', 'post', 'mid'}, optional
Define step if the filling should be a step function, i.e. constant in between y. The value determines where the step will occur: 'pre': the y value is continued constantly to the left from every x position, i.e. the interval (x[i-1], x[i]] has the value y[i]; 'post': the y value is continued constantly to the right from every x position, i.e. the interval [x[i], x[i+1]) has the value y[i]; 'mid': steps occur half-way between the x positions. Returns
PolyCollection
A PolyCollection containing the plotted polygons. Other Parameters
dataindexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): y, x1, x2, where **kwargs
All other keyword arguments are passed on to PolyCollection. They control the Polygon properties:
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha array-like or scalar or None
animated bool
antialiased or aa or antialiaseds bool or list of bools
array array-like or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clim (vmin: float, vmax: float)
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
cmap Colormap or str or None
color color or list of rgba tuples
edgecolor or ec or edgecolors color or list of colors or 'face'
facecolor or facecolors or fc color or list of colors
figure Figure
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or dashes or linestyles or ls str or tuple or list thereof
linewidth or linewidths or lw float or list of floats
norm Normalize or None
offset_transform Transform
offsets (N, 2) or (2,) array-like
path_effects AbstractPathEffect
paths list of array-like
picker None or bool or float or callable
pickradius float
rasterized bool
sizes ndarray or None
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
urls list of str or None
verts list of array-like
verts_and_codes unknown
visible bool
zorder float See also fill_between
Fill between two sets of y-values. fill_betweenx
Fill between two sets of x-values. | |
doc_26004 | tf.test.compute_gradient(
f, x, delta=0.001
)
With y = f(x), computes the theoretical and numeric Jacobian dy/dx.
Args
f the function.
x the arguments for the function as a list or tuple of values convertible to a Tensor.
delta (optional) perturbation used to compute numeric Jacobian.
Returns A pair of lists, where the first is a list of 2-d numpy arrays representing the theoretical Jacobians for each argument, and the second list is the numerical ones. Each 2-d array has "y_size" rows and "x_size" columns where "x_size" is the number of elements in the corresponding argument and "y_size" is the number of elements in f(x).
Raises
ValueError If result is empty but the gradient is nonzero.
ValueError If x is not a list, but some other type. Example: @tf.function
def test_func(x):
return x*x
theoretical, numerical = tf.test.compute_gradient(test_func, [1.0])
theoretical, numerical
# ((array([[2.]], dtype=float32),), (array([[2.000004]], dtype=float32),)) | |
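The "numeric" half of the pair returned above can be sketched in plain numpy with a central finite difference; the helper name and shapes below are our own illustration, not the tf.test implementation:

```python
import numpy as np

def numeric_jacobian(f, x, delta=1e-3):
    """Central-difference approximation of the Jacobian dy/dx.

    Returns a (y_size, x_size) array, mirroring the shape convention
    described above (rows = elements of f(x), columns = elements of x).
    """
    x = np.asarray(x, dtype=float)
    y = np.atleast_1d(f(x))
    J = np.zeros((y.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp.flat[j] += delta
        xm = x.copy()
        xm.flat[j] -= delta
        # Perturb one input element at a time and difference the outputs.
        J[:, j] = (np.atleast_1d(f(xp)) - np.atleast_1d(f(xm))).ravel() / (2 * delta)
    return J

J = numeric_jacobian(lambda x: x * x, np.array([1.0]))
# J is close to [[2.0]], matching the theoretical Jacobian of f(x) = x*x at x = 1.
```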
doc_26005 | Override to construct the dialog’s interface and return the widget that should have initial focus. | |
doc_26006 | See Migration guide for more details. tf.compat.v1.raw_ops.EnsureShape
tf.raw_ops.EnsureShape(
input, shape, name=None
)
Raises an error if the input tensor's shape does not match the specified shape. Returns the input tensor otherwise.
Args
input A Tensor. A tensor, whose shape is to be validated.
shape A tf.TensorShape or list of ints. The expected (possibly partially specified) shape of the input tensor.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
doc_26007 |
Plot a set of filled voxels. All voxels are plotted as 1x1x1 cubes on the axis, with filled[0, 0, 0] placed with its lower corner at the origin. Occluded faces are not plotted. Parameters
filled3D np.array of bool
A 3D array of values, with truthy values indicating which voxels to fill
x, y, z3D np.array, optional
The coordinates of the corners of the voxels. This should broadcast to a shape one larger in every dimension than the shape of filled. These can be used to plot non-cubic voxels. If not specified, defaults to increasing integers along each axis, like those returned by indices(). As indicated by the / in the function signature, these arguments can only be passed positionally.
facecolors, edgecolorsarray-like, optional
The color to draw the faces and edges of the voxels. Can only be passed as keyword arguments. These parameters can be: a single color value, to color all voxels the same color (either a string or a 1D rgb/rgba array); None, the default, to use a single color for the faces and the style default for the edges; a 3D ndarray of color names, with each item the color for the corresponding voxel (the size must match the voxels); or a 4D ndarray of rgb/rgba data, with the components along the last axis.
shadebool, default: True
Whether to shade the facecolors. Shading is always disabled when cmap is specified.
lightsourceLightSource
The lightsource to use when shade is True. **kwargs
Additional keyword arguments to pass onto Poly3DCollection. Returns
facesdict
A dictionary indexed by coordinate, where faces[i, j, k] is a Poly3DCollection of the faces drawn for the voxel filled[i, j, k]. If no faces were drawn for a given voxel, either because it was not asked to be drawn, or it is fully occluded, then (i, j, k) not in faces. | |
doc_26008 | See Migration guide for more details. tf.compat.v1.nn.collapse_repeated
tf.nn.collapse_repeated(
labels, seq_length, name=None
)
Args
labels Tensor of shape [batch, max value in seq_length]
seq_length Tensor of shape [batch], sequence length of each batch element.
name A name for this Op. Defaults to "collapse_repeated_labels".
Returns A tuple (collapsed_labels, new_seq_length) where collapsed_labels Tensor of shape [batch, max_seq_length] with repeated labels collapsed and padded to max_seq_length, eg: [[A, A, B, B, A], [A, B, C, D, E]] => [[A, B, A, 0, 0], [A, B, C, D, E]]
new_seq_length int tensor of shape [batch] with new sequence lengths. | |
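The collapsing-and-padding semantics described above can be sketched in pure Python (this helper and its names are our own, not the tf.nn implementation, and it works on plain lists rather than Tensors):

```python
def collapse_repeated(labels, seq_length, pad=0):
    """Collapse runs of repeated labels per row, then pad rows to equal width."""
    collapsed = []
    for row, n in zip(labels, seq_length):
        out = []
        prev = object()  # sentinel that compares unequal to any label
        for v in row[:n]:
            if v != prev:
                out.append(v)
            prev = v
        collapsed.append(out)
    new_seq_length = [len(r) for r in collapsed]
    width = max(new_seq_length, default=0)
    padded = [r + [pad] * (width - len(r)) for r in collapsed]
    return padded, new_seq_length

labels = [["A", "A", "B", "B", "A"], ["A", "B", "C", "D", "E"]]
padded, new_seq_length = collapse_repeated(labels, [5, 5])
# padded == [["A", "B", "A", 0, 0], ["A", "B", "C", "D", "E"]]
# new_seq_length == [3, 5]
```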
doc_26009 | This function is analogous to getgeneratorlocals(), but works for coroutine objects created by async def functions. New in version 3.5. | |
doc_26010 | Returns the WKB (Well-Known Binary) representation of this Geometry as a Python buffer. SRID value is not included, use the GEOSGeometry.ewkb property instead. | |
doc_26011 | Decodes the URL to a tuple made out of strings. The charset is only being used for the path, query and fragment. Parameters
charset (str) –
errors (str) – Return type
werkzeug.urls.URL | |
doc_26012 | A list of logging.LogRecord objects of the matching log messages. | |
doc_26013 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_26014 |
Reduce X to the selected features. Parameters
Xarray of shape [n_samples, n_features]
The input samples. Returns
X_rarray of shape [n_samples, n_selected_features]
The input samples with only the selected features. | |
doc_26015 |
Set StepPatch values, edges and baseline. Parameters
values1D array-like or None
Values are not updated if None is passed.
edges1D array-like, optional
baselinefloat, 1D array-like or None | |
doc_26016 |
The last colorbar associated with this ScalarMappable. May be None. | |
doc_26017 |
A torch.nn.Conv1d module with lazy initialization of the in_channels argument of the Conv1d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
See also torch.nn.Conv1d and torch.nn.modules.lazy.LazyModuleMixin
cls_to_become
alias of Conv1d | |
doc_26018 |
Set the colormap to 'hot'. This changes the default colormap as well as the colormap of the current image if there is one. See help(colormaps) for more information. | |
doc_26019 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_26020 |
The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. Sobol sequences are an example of low discrepancy quasi-random sequences. This implementation of an engine for Sobol sequences is capable of sampling sequences up to a maximum dimension of 21201. It uses direction numbers from https://web.maths.unsw.edu.au/~fkuo/sobol/ obtained using the search criterion D(6) up to the dimension 21201. This is the recommended choice by the authors. References Art B. Owen. Scrambling Sobol and Niederreiter-Xing points. Journal of Complexity, 14(4):466-489, December 1998. I. M. Sobol. The distribution of points in a cube and the accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys., 7:784-802, 1967. Parameters
dimension (Int) – The dimensionality of the sequence to be drawn
scramble (bool, optional) – Setting this to True will produce scrambled Sobol sequences. Scrambling is capable of producing better Sobol sequences. Default: False.
seed (Int, optional) – This is the seed for the scrambling. The seed of the random number generator is set to this, if specified. Otherwise, it uses a random seed. Default: None
Examples: >>> soboleng = torch.quasirandom.SobolEngine(dimension=5)
>>> soboleng.draw(3)
tensor([[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.7500, 0.2500, 0.7500, 0.2500, 0.7500],
[0.2500, 0.7500, 0.2500, 0.7500, 0.2500]])
draw(n=1, out=None, dtype=torch.float32) [source]
Function to draw a sequence of n points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (n, dimension). Parameters
n (Int, optional) – The length of sequence of points to draw. Default: 1
out (Tensor, optional) – The output tensor
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: torch.float32
draw_base2(m, out=None, dtype=torch.float32) [source]
Function to draw a sequence of 2**m points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (2**m, dimension). Parameters
m (Int) – The (base2) exponent of the number of points to draw.
out (Tensor, optional) – The output tensor
dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: torch.float32
fast_forward(n) [source]
Function to fast-forward the state of the SobolEngine by n steps. This is equivalent to drawing n samples without using the samples. Parameters
n (Int) – The number of steps to fast-forward by.
reset() [source]
Function to reset the SobolEngine to base state. | |
doc_26021 | sklearn.metrics.pairwise_distances_argmin_min(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source]
Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (according to the specified distance). The minimal distances are also returned. This is mostly equivalent to calling: (pairwise_distances(X, Y=Y, metric=metric).argmin(axis=axis),
pairwise_distances(X, Y=Y, metric=metric).min(axis=axis)) but uses much less memory, and is faster for large arrays. Parameters
X{array-like, sparse matrix} of shape (n_samples_X, n_features)
Array containing points.
Y{array-like, sparse matrix} of shape (n_samples_Y, n_features)
Array containing points.
axisint, default=1
Axis along which the argmin and distances are to be computed.
metricstr or callable, default=’euclidean’
Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics.
metric_kwargsdict, default=None
Keyword arguments to pass to specified metric function. Returns
argminndarray
Y[argmin[i], :] is the row in Y that is closest to X[i, :].
distancesndarray
distances[i] is the distance between the i-th row in X and the argmin[i]-th row in Y. See also
sklearn.metrics.pairwise_distances
sklearn.metrics.pairwise_distances_argmin | |
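For the default euclidean metric, the returned argmin/distances pair can be sketched with dense numpy broadcasting (our own illustration; the sklearn function itself chunks the computation to use far less memory):

```python
import numpy as np

# Two points in X, two candidate points in Y (illustrative data).
X = np.array([[0.0, 0.0], [3.0, 4.0]])
Y = np.array([[0.0, 1.0], [3.0, 3.0]])

# Full (n_samples_X, n_samples_Y) euclidean distance matrix via broadcasting.
D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
argmin = D.argmin(axis=1)       # index of the closest row of Y for each row of X
distances = D.min(axis=1)       # the corresponding minimal distance
# argmin == [0, 1] and distances == [1.0, 1.0] for the data above
```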
doc_26022 |
Set the text value of the axis label. Parameters
labelstr
Text string.
fontdictdict
Text properties. **kwargs
Merged into fontdict. | |
doc_26023 |
Return (x1 > x2) element-wise. Unlike numpy.greater, this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray. Parameters
x1, x2array_like of str or unicode
Input arrays of the same shape. Returns
outndarray
Output array of bools. See also
equal, not_equal, greater_equal, less_equal, less | |
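A quick sketch of the trailing-whitespace rule described above: "b " compares like "b", so it is greater than "a" (the arrays here are our own illustration):

```python
import numpy as np

x1 = np.array(["b ", "a"])
x2 = np.array(["a", "b "])

# Trailing whitespace is stripped before the element-wise comparison,
# so "b " > "a" is True and "a" > "b " (i.e. "a" > "b") is False.
out = np.char.greater(x1, x2)
# out == [True, False]
```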
doc_26024 |
Remove a click (by default, the last) from the list of clicks. Parameters
eventMouseEvent | |
doc_26025 |
Calculate the rolling rank. New in version 1.4.0. Parameters
method:{‘average’, ‘min’, ‘max’}, default ‘average’
How to rank the group of records that have the same value (i.e. ties): 'average': average rank of the group; 'min': lowest rank in the group; 'max': highest rank in the group
ascending:bool, default True
Whether or not the elements should be ranked in ascending order.
pct:bool, default False
Whether or not to display the returned rankings in percentile form. **kwargs
For NumPy compatibility and will not have an effect on the result. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling
Calling rolling with Series data. pandas.DataFrame.rolling
Calling rolling with DataFrames. pandas.Series.rank
Aggregating rank for Series. pandas.DataFrame.rank
Aggregating rank for DataFrame. Examples
>>> s = pd.Series([1, 4, 2, 3, 5, 3])
>>> s.rolling(3).rank()
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.5
dtype: float64
>>> s.rolling(3).rank(method="max")
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 2.0
dtype: float64
>>> s.rolling(3).rank(method="min")
0 NaN
1 NaN
2 2.0
3 2.0
4 3.0
5 1.0
dtype: float64 | |
doc_26026 | The binary data encapsulated by the Binary instance. The data is provided as a bytes object. | |
doc_26027 |
Returns the id for the new node to be inserted. The current implementation returns one more than the maximum id. Returns
idint
The id of the new node to be inserted. | |
doc_26028 |
Return the imaginary part of the complex argument. Parameters
valarray_like
Input array. Returns
outndarray or scalar
The imaginary component of the complex argument. If val is real, the type of val is used for the output. If val has complex elements, the returned type is float. See also
real, angle, real_if_close
Examples >>> a = np.array([1+2j, 3+4j, 5+6j])
>>> a.imag
array([2., 4., 6.])
>>> a.imag = np.array([8, 10, 12])
>>> a
array([1. +8.j, 3.+10.j, 5.+12.j])
>>> np.imag(1 + 1j)
1.0 | |
doc_26029 | tf.compat.v1.enable_v2_tensorshape()
This enables the new behavior. Concretely, tensor_shape[i] returned a Dimension instance in V1, but in V2 it returns either an integer, or None. Examples: #######################
# If you had this in V1:
value = tensor_shape[i].value
# Do this in V2 instead:
value = tensor_shape[i]
#######################
# If you had this in V1:
for dim in tensor_shape:
value = dim.value
print(value)
# Do this in V2 instead:
for value in tensor_shape:
print(value)
#######################
# If you had this in V1:
dim = tensor_shape[i]
dim.assert_is_compatible_with(other_shape) # or using any other shape method
# Do this in V2 instead:
if tensor_shape.rank is None:
dim = Dimension(None)
else:
dim = tensor_shape.dims[i]
dim.assert_is_compatible_with(other_shape) # or using any other shape method
# The V2 suggestion above is more explicit, which will save you from
# the following trap (present in V1):
# you might do in-place modifications to `dim` and expect them to be reflected
# in `tensor_shape[i]`, but they would not be. | |
doc_26030 | class torch.futures.Future
Wrapper around a torch._C.Future which encapsulates an asynchronous execution of a callable, e.g. rpc_async(). It also exposes a set of APIs to add callback functions and set results.
add_done_callback(self: torch._C.Future, arg0: function) → None
done() [source]
Return True if this Future is done. A Future is done if it has a result or an exception.
set_exception(result) [source]
Set an exception for this Future, which will mark this Future as completed with an error and trigger all attached callbacks. Note that when calling wait()/value() on this Future, the exception set here will be raised inline. Parameters
result (BaseException) – the exception for this Future. Example::
>>> import torch
>>>
>>> fut = torch.futures.Future()
>>> fut.set_exception(ValueError("foo"))
>>> fut.wait()
>>>
>>> # Output:
>>> # This will run after the future has finished.
>>> ValueError: foo
set_result(result) [source]
Set the result for this Future, which will mark this Future as completed and trigger all attached callbacks. Note that a Future cannot be marked completed twice. Parameters
result (object) – the result object of this Future. Example::
>>> import threading
>>> import time
>>> import torch
>>>
>>> def slow_set_future(fut, value):
>>> time.sleep(0.5)
>>> fut.set_result(value)
>>>
>>> fut = torch.futures.Future()
>>> t = threading.Thread(
>>> target=slow_set_future,
>>> args=(fut, torch.ones(2) * 3)
>>> )
>>> t.start()
>>>
>>> print(fut.wait()) # tensor([3., 3.])
>>> t.join()
then(callback) [source]
Append the given callback function to this Future, which will be run when the Future is completed. Multiple callbacks can be added to the same Future, and will be invoked in the same order as they were added. The callback must take one argument, which is the reference to this Future. The callback function can use the Future.wait() API to get the value. Note that if this Future is already completed, the given callback will be run immediately inline. Parameters
callback (Callable) – a Callable that takes this Future as the only argument. Returns
A new Future object that holds the return value of the callback and will be marked as completed when the given callback finishes. Example::
>>> import torch
>>>
>>> def callback(fut):
>>> print(f"RPC return value is {fut.wait()}.")
>>>
>>> fut = torch.futures.Future()
>>> # The inserted callback will print the return value when
>>> # receiving the response from "worker1"
>>> cb_fut = fut.then(callback)
>>> chain_cb_fut = cb_fut.then(
>>> lambda x : print(f"Chained cb done. {x.wait()}")
>>> )
>>> fut.set_result(5)
>>>
>>> # Outputs are:
>>> # RPC return value is 5.
>>> # Chained cb done. None
value(self: torch._C.Future) → object
wait() [source]
Block until the value of this Future is ready. Returns
The value held by this Future. If the function (callback or RPC) creating the value has thrown an error, this wait method will also throw an error.
torch.futures.collect_all(futures) [source]
Collects the provided Future objects into a single combined Future that is completed when all of the sub-futures are completed. Parameters
futures (list) – a list of Future objects. Returns
Returns a Future object to a list of the passed in Futures. Example::
>>> import torch
>>>
>>> fut0 = torch.futures.Future()
>>> fut1 = torch.futures.Future()
>>>
>>> fut = torch.futures.collect_all([fut0, fut1])
>>>
>>> fut0.set_result(0)
>>> fut1.set_result(1)
>>>
>>> fut_list = fut.wait()
>>> print(f"fut0 result = {fut_list[0].wait()}")
>>> print(f"fut1 result = {fut_list[1].wait()}")
>>> # outputs:
>>> # fut0 result = 0
>>> # fut1 result = 1
torch.futures.wait_all(futures) [source]
Waits for all provided futures to be complete, and returns the list of completed values. Parameters
futures (list) – a list of Future objects. Returns
A list of the completed Future results. This method will throw an error if wait on any Future throws. | |
doc_26031 |
Return the artist's zorder. | |
doc_26032 | A base class that provides default implementations for all required methods. By default, it will reject any user and provide no permissions.
get_user_permissions(user_obj, obj=None)
Returns an empty set.
get_group_permissions(user_obj, obj=None)
Returns an empty set.
get_all_permissions(user_obj, obj=None)
Uses get_user_permissions() and get_group_permissions() to get the set of permission strings the user_obj has.
has_perm(user_obj, perm, obj=None)
Uses get_all_permissions() to check if user_obj has the permission string perm. | |
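The default behaviour described above can be sketched as a plain Python class (the class name is ours; the real base class lives in django.contrib.auth.backends and is framework-aware):

```python
class BaseBackendSketch:
    """Minimal sketch of the default backend: no users, no permissions."""

    def get_user_permissions(self, user_obj, obj=None):
        return set()

    def get_group_permissions(self, user_obj, obj=None):
        return set()

    def get_all_permissions(self, user_obj, obj=None):
        # Union of user-level and group-level permission strings.
        return {*self.get_user_permissions(user_obj, obj),
                *self.get_group_permissions(user_obj, obj)}

    def has_perm(self, user_obj, perm, obj=None):
        return perm in self.get_all_permissions(user_obj, obj)

backend = BaseBackendSketch()
# Both permission sets are empty, so has_perm is always False.
```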
doc_26033 | The URL to redirect to after logout. Defaults to LOGOUT_REDIRECT_URL. | |
doc_26034 |
Perform clustering on X and returns cluster labels. Parameters
Xarray-like of shape (n_samples, n_features)
Input data.
yIgnored
Not used, present for API consistency by convention. Returns
labelsndarray of shape (n_samples,), dtype=np.int64
Cluster labels. | |
doc_26035 | class xml.sax.xmlreader.XMLReader
Base class which can be inherited by SAX parsers.
class xml.sax.xmlreader.IncrementalParser
In some cases, it is desirable not to parse an input source at once, but to feed chunks of the document as they get available. Note that the reader will normally not read the entire file, but read it in chunks as well; still parse() won’t return until the entire document is processed. So these interfaces should be used if the blocking behaviour of parse() is not desirable. When the parser is instantiated it is ready to begin accepting data from the feed method immediately. After parsing has been finished with a call to close the reset method must be called to make the parser ready to accept new data, either from feed or using the parse method. Note that these methods must not be called during parsing, that is, after parse has been called and before it returns. By default, the class also implements the parse method of the XMLReader interface using the feed, close and reset methods of the IncrementalParser interface as a convenience to SAX 2.0 driver writers.
class xml.sax.xmlreader.Locator
Interface for associating a SAX event with a document location. A locator object will return valid results only during calls to DocumentHandler methods; at any other time, the results are unpredictable. If information is not available, methods may return None.
class xml.sax.xmlreader.InputSource(system_id=None)
Encapsulation of the information needed by the XMLReader to read entities. This class may include information about the public identifier, system identifier, byte stream (possibly with character encoding information) and/or the character stream of an entity. Applications will create objects of this class for use in the XMLReader.parse() method and for returning from EntityResolver.resolveEntity. An InputSource belongs to the application, the XMLReader is not allowed to modify InputSource objects passed to it from the application, although it may make copies and modify those.
class xml.sax.xmlreader.AttributesImpl(attrs)
This is an implementation of the Attributes interface (see section The Attributes Interface). This is a dictionary-like object which represents the element attributes in a startElement() call. In addition to the most useful dictionary operations, it supports a number of other methods as described by the interface. Objects of this class should be instantiated by readers; attrs must be a dictionary-like object containing a mapping from attribute names to attribute values.
class xml.sax.xmlreader.AttributesNSImpl(attrs, qnames)
Namespace-aware variant of AttributesImpl, which will be passed to startElementNS(). It is derived from AttributesImpl, but understands attribute names as two-tuples of namespaceURI and localname. In addition, it provides a number of methods expecting qualified names as they appear in the original document. This class implements the AttributesNS interface (see section The AttributesNS Interface).
XMLReader Objects The XMLReader interface supports the following methods:
XMLReader.parse(source)
Process an input source, producing SAX events. The source object can be a system identifier (a string identifying the input source – typically a file name or a URL), a pathlib.Path or path-like object, or an InputSource object. When parse() returns, the input is completely processed, and the parser object can be discarded or reset. Changed in version 3.5: Added support of character streams. Changed in version 3.8: Added support of path-like objects.
XMLReader.getContentHandler()
Return the current ContentHandler.
XMLReader.setContentHandler(handler)
Set the current ContentHandler. If no ContentHandler is set, content events will be discarded.
XMLReader.getDTDHandler()
Return the current DTDHandler.
XMLReader.setDTDHandler(handler)
Set the current DTDHandler. If no DTDHandler is set, DTD events will be discarded.
XMLReader.getEntityResolver()
Return the current EntityResolver.
XMLReader.setEntityResolver(handler)
Set the current EntityResolver. If no EntityResolver is set, attempts to resolve an external entity will result in opening the system identifier for the entity, and fail if it is not available.
XMLReader.getErrorHandler()
Return the current ErrorHandler.
XMLReader.setErrorHandler(handler)
Set the current error handler. If no ErrorHandler is set, errors will be raised as exceptions, and warnings will be printed.
XMLReader.setLocale(locale)
Allow an application to set the locale for errors and warnings. SAX parsers are not required to provide localization for errors and warnings; if they cannot support the requested locale, however, they must raise a SAX exception. Applications may request a locale change in the middle of a parse.
XMLReader.getFeature(featurename)
Return the current setting for feature featurename. If the feature is not recognized, SAXNotRecognizedException is raised. The well-known featurenames are listed in the module xml.sax.handler.
XMLReader.setFeature(featurename, value)
Set the featurename to value. If the feature is not recognized, SAXNotRecognizedException is raised. If the feature or its setting is not supported by the parser, SAXNotSupportedException is raised.
XMLReader.getProperty(propertyname)
Return the current setting for property propertyname. If the property is not recognized, a SAXNotRecognizedException is raised. The well-known propertynames are listed in the module xml.sax.handler.
XMLReader.setProperty(propertyname, value)
Set the propertyname to value. If the property is not recognized, SAXNotRecognizedException is raised. If the property or its setting is not supported by the parser, SAXNotSupportedException is raised.
IncrementalParser Objects Instances of IncrementalParser offer the following additional methods:
IncrementalParser.feed(data)
Process a chunk of data.
IncrementalParser.close()
Assume the end of the document. That will check well-formedness conditions that can be checked only at the end, invoke handlers, and may clean up resources allocated during parsing.
IncrementalParser.reset()
This method is called after close has been called to reset the parser so that it is ready to parse new documents. The results of calling parse or feed after close without calling reset are undefined.
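The feed/close cycle above can be sketched with the default expat-based parser, which implements IncrementalParser (the counting handler is our own illustration):

```python
import xml.sax
from xml.sax.handler import ContentHandler

class CountingHandler(ContentHandler):
    """Counts start-element events as the document is fed in chunks."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        self.count += 1

parser = xml.sax.make_parser()  # expat driver supports feed()/close()/reset()
handler = CountingHandler()
parser.setContentHandler(handler)

# The document may be split at arbitrary byte boundaries, even mid-tag.
for chunk in ["<root><a/>", "<b/></ro", "ot>"]:
    parser.feed(chunk)
parser.close()
# handler.count == 3  (root, a, b)
```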
Locator Objects Instances of Locator provide these methods:
Locator.getColumnNumber()
Return the column number where the current event begins.
Locator.getLineNumber()
Return the line number where the current event begins.
Locator.getPublicId()
Return the public identifier for the current event.
Locator.getSystemId()
Return the system identifier for the current event.
InputSource Objects
InputSource.setPublicId(id)
Sets the public identifier of this InputSource.
InputSource.getPublicId()
Returns the public identifier of this InputSource.
InputSource.setSystemId(id)
Sets the system identifier of this InputSource.
InputSource.getSystemId()
Returns the system identifier of this InputSource.
InputSource.setEncoding(encoding)
Sets the character encoding of this InputSource. The encoding must be a string acceptable for an XML encoding declaration (see section 4.3.3 of the XML recommendation). The encoding attribute of the InputSource is ignored if the InputSource also contains a character stream.
InputSource.getEncoding()
Get the character encoding of this InputSource.
InputSource.setByteStream(bytefile)
Set the byte stream (a binary file) for this input source. The SAX parser will ignore this if there is also a character stream specified, but it will use a byte stream in preference to opening a URI connection itself. If the application knows the character encoding of the byte stream, it should set it with the setEncoding method.
InputSource.getByteStream()
Get the byte stream for this input source. The getEncoding method will return the character encoding for this byte stream, or None if unknown.
InputSource.setCharacterStream(charfile)
Set the character stream (a text file) for this input source. If there is a character stream specified, the SAX parser will ignore any byte stream and will not attempt to open a URI connection to the system identifier.
InputSource.getCharacterStream()
Get the character stream for this input source.
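A sketch of wiring an InputSource to a parser; the system identifier below is a made-up URN, and because a byte stream is set, the parser reads from it instead of trying to open that identifier:

```python
import io
import xml.sax
from xml.sax.handler import ContentHandler
from xml.sax.xmlreader import InputSource

class Names(ContentHandler):
    def __init__(self):
        super().__init__()
        self.seen = []

    def startElement(self, name, attrs):
        self.seen.append(name)

source = InputSource("urn:example:doc")           # system id, never opened here
source.setByteStream(io.BytesIO(b"<doc><child/></doc>"))
source.setEncoding("utf-8")                       # we know the stream's encoding

handler = Names()
parser = xml.sax.make_parser()
parser.setContentHandler(handler)
parser.parse(source)
print(handler.seen)  # ['doc', 'child']
```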
The Attributes Interface Attributes objects implement a portion of the mapping protocol, including the methods copy(), get(), __contains__(), items(), keys(), and values(). The following methods are also provided:
Attributes.getLength()
Return the number of attributes.
Attributes.getNames()
Return the names of the attributes.
Attributes.getType(name)
Returns the type of the attribute name, which is normally 'CDATA'.
Attributes.getValue(name)
Return the value of attribute name.
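Attributes objects are only handed to you inside handler callbacks, so a short sketch using a throwaway handler and document:

```python
import xml.sax
from xml.sax.handler import ContentHandler

class AttrInfo(ContentHandler):
    def startElement(self, name, attrs):
        # attrs is an Attributes instance
        self.length = attrs.getLength()
        self.names = sorted(attrs.getNames())
        self.a_type = attrs.getType("a")    # expat reports 'CDATA'
        self.a_value = attrs.getValue("a")
        self.has_b = "b" in attrs           # mapping protocol also works

handler = AttrInfo()
xml.sax.parseString(b'<e a="1" b="2"/>', handler)
print(handler.length, handler.names, handler.a_value)  # 2 ['a', 'b'] 1
```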
The AttributesNS Interface This interface is a subtype of the Attributes interface (see section The Attributes Interface). All methods supported by that interface are also available on AttributesNS objects. The following methods are also available:
AttributesNS.getValueByQName(name)
Return the value for a qualified name.
AttributesNS.getNameByQName(name)
Return the (namespace, localname) pair for a qualified name.
AttributesNS.getQNameByName(name)
Return the qualified name for a (namespace, localname) pair.
AttributesNS.getQNames()
Return the qualified names of all attributes. | |
doc_26036 |
Generate a HermiteE series with given roots. The function returns the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] in HermiteE form, where the r_n are the roots specified in roots. If a zero has multiplicity n, then it must appear in roots n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then roots looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are c, then \[p(x) = c_0 + c_1 * He_1(x) + ... + c_n * He_n(x)\] The coefficient of the last term is not generally 1 for monic polynomials in HermiteE form. Parameters
rootsarray_like
Sequence containing the roots. Returns
outndarray
1-D array of coefficients. If all roots are real then out is a real array, if some of the roots are complex, then out is complex even if all the coefficients in the result are real (see Examples below). See also numpy.polynomial.polynomial.polyfromroots
numpy.polynomial.legendre.legfromroots
numpy.polynomial.laguerre.lagfromroots
numpy.polynomial.hermite.hermfromroots
numpy.polynomial.chebyshev.chebfromroots
Examples >>> from numpy.polynomial.hermite_e import hermefromroots, hermeval
>>> coef = hermefromroots((-1, 0, 1))
>>> hermeval((-1, 0, 1), coef)
array([0., 0., 0.])
>>> coef = hermefromroots((-1j, 1j))
>>> hermeval((-1j, 1j), coef)
array([0.+0.j, 0.+0.j]) | |
doc_26037 |
Calculate the rolling weighted window variance. New in version 1.0.0. Parameters
**kwargs
Keyword arguments to configure the SciPy weighted window type. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling
Calling rolling with Series data. pandas.DataFrame.rolling
Calling rolling with DataFrames. pandas.Series.var
Aggregating var for Series. pandas.DataFrame.var
Aggregating var for DataFrame. | |
doc_26038 | A named tuple holding information about the float type. It contains low level information about the precision and internal representation. The values correspond to the various floating-point constants defined in the standard header file float.h for the ‘C’ programming language; see section 5.2.4.2.2 of the 1999 ISO/IEC C standard [C99], ‘Characteristics of floating types’, for details.
attribute float.h macro explanation
epsilon DBL_EPSILON difference between 1.0 and the least value greater than 1.0 that is representable as a float. See also math.ulp().
dig DBL_DIG maximum number of decimal digits that can be faithfully represented in a float; see below
mant_dig DBL_MANT_DIG float precision: the number of base-radix digits in the significand of a float
max DBL_MAX maximum representable positive finite float
max_exp DBL_MAX_EXP maximum integer e such that radix**(e-1) is a representable finite float
max_10_exp DBL_MAX_10_EXP maximum integer e such that 10**e is in the range of representable finite floats
min DBL_MIN minimum representable positive normalized float. Use math.ulp(0.0) to get the smallest positive denormalized representable float.
min_exp DBL_MIN_EXP minimum integer e such that radix**(e-1) is a normalized float
min_10_exp DBL_MIN_10_EXP minimum integer e such that 10**e is a normalized float
radix FLT_RADIX radix of exponent representation
rounds FLT_ROUNDS integer constant representing the rounding mode used for arithmetic operations. This reflects the value of the system FLT_ROUNDS macro at interpreter startup time. See section 5.2.4.2.2 of the C99 standard for an explanation of the possible values and their meanings. The attribute sys.float_info.dig needs further explanation. If s is any string representing a decimal number with at most sys.float_info.dig significant digits, then converting s to a float and back again will recover a string representing the same decimal value: >>> import sys
>>> sys.float_info.dig
15
>>> s = '3.14159265358979' # decimal string with 15 significant digits
>>> format(float(s), '.15g') # convert to float and back -> same value
'3.14159265358979'
But for strings with more than sys.float_info.dig significant digits, this isn’t always true: >>> s = '9876543211234567' # 16 significant digits is too many!
>>> format(float(s), '.16g') # conversion changes value
'9876543211234568' | |
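Several of the relationships in the table can be checked directly, assuming the usual IEEE 754 binary64 representation (radix 2):

```python
import sys

# epsilon is the gap between 1.0 and the next representable float:
assert 1.0 + sys.float_info.epsilon > 1.0

# a quarter of epsilon is below half the gap, so the sum rounds back to 1.0:
assert 1.0 + sys.float_info.epsilon / 4 == 1.0

# max is the all-ones significand at the top exponent (radix 2 assumed):
assert sys.float_info.max == (2.0 - sys.float_info.epsilon) * 2.0 ** (sys.float_info.max_exp - 1)
```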
doc_26039 |
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features. | |
doc_26040 |
Return the url. | |
doc_26041 |
Learn empirical variances from X. Parameters
X{array-like, sparse matrix}, shape (n_samples, n_features)
Sample vectors from which to compute variances.
yany, default=None
Ignored. This parameter exists only for compatibility with sklearn.pipeline.Pipeline. Returns
self | |
doc_26042 |
Performs unbuffered in place operation on operand ‘a’ for elements specified by ‘indices’. For addition ufunc, this method is equivalent to a[indices] += b, except that results are accumulated for elements that are indexed more than once. For example, a[[0,0]] += 1 will only increment the first element once because of buffering, whereas add.at(a, [0,0], 1) will increment the first element twice. New in version 1.8.0. Parameters
aarray_like
The array to perform in place operation on.
indicesarray_like or tuple
Array like index object or slice object for indexing into first operand. If first operand has multiple dimensions, indices can be a tuple of array like index objects or slice objects.
barray_like
Second operand for ufuncs requiring two operands. Operand must be broadcastable over first operand after indexing or slicing. Examples Set items 0 and 1 to their negative values: >>> a = np.array([1, 2, 3, 4])
>>> np.negative.at(a, [0, 1])
>>> a
array([-1, -2, 3, 4])
Increment items 0 and 1, and increment item 2 twice: >>> a = np.array([1, 2, 3, 4])
>>> np.add.at(a, [0, 1, 2, 2], 1)
>>> a
array([2, 3, 5, 4])
Add items 0 and 1 in first array to second array, and store results in first array: >>> a = np.array([1, 2, 3, 4])
>>> b = np.array([1, 2])
>>> np.add.at(a, [0, 1], b)
>>> a
array([2, 4, 3, 4]) | |
doc_26043 |
Return the units for axis. | |
doc_26044 | See Migration guide for more details. tf.compat.v1.linalg.LinearOperatorCirculant
tf.linalg.LinearOperatorCirculant(
spectrum, input_output_dtype=tf.dtypes.complex64, is_non_singular=None,
is_self_adjoint=None, is_positive_definite=None, is_square=True,
name='LinearOperatorCirculant'
)
This operator acts like a circulant matrix A with shape [B1,...,Bb, N, N] for some b >= 0. The first b indices index a batch member. For every batch index (i1,...,ib), A[i1,...,ib, : :] is an N x N matrix. This matrix A is not materialized, but for purposes of broadcasting this shape will be relevant. Description in terms of circulant matrices Circulant means the entries of A are generated by a single vector, the convolution kernel h: A_{mn} := h_{m-n mod N}. With h = [w, x, y, z], A = |w z y x|
|x w z y|
|y x w z|
|z y x w|
This means that the result of matrix multiplication v = Au has Lth column given by circular convolution of h with the Lth column of u. Description in terms of the frequency spectrum There is an equivalent description in terms of the [batch] spectrum H and Fourier transforms. Here we consider A.shape = [N, N] and ignore batch dimensions. Define the discrete Fourier transform (DFT) and its inverse by DFT[ h[n] ] = H[k] := sum_{n = 0}^{N - 1} h_n e^{-i 2pi k n / N}
IDFT[ H[k] ] = h[n] = N^{-1} sum_{k = 0}^{N - 1} H_k e^{i 2pi k n / N}
From these definitions, we see that H[0] = sum_{n = 0}^{N - 1} h_n
H[1] = "the first positive frequency"
H[N - 1] = "the first negative frequency"
Loosely speaking, with * element-wise multiplication, matrix multiplication is equal to the action of a Fourier multiplier: A u = IDFT[ H * DFT[u] ]. Precisely speaking, given [N, R] matrix u, let DFT[u] be the [N, R] matrix with rth column equal to the DFT of the rth column of u. Define the IDFT similarly. Matrix multiplication may be expressed columnwise: (A u)_r = IDFT[ H * (DFT[u])_r ] Operator properties deduced from the spectrum. Letting u be the kth Euclidean basis vector, and U = IDFT[u]. The above formulas show that A U = H_k * U. We conclude that the elements of H are the eigenvalues of this operator. Therefore, this operator is positive definite if and only if Real{H} > 0. A general property of Fourier transforms is the correspondence between Hermitian functions and real valued transforms. Suppose H.shape = [B1,...,Bb, N]. We say that H is a Hermitian spectrum if, with % meaning modulus division, H[..., n % N] = ComplexConjugate[ H[..., (-n) % N] ] This operator corresponds to a real matrix if and only if H is Hermitian. This operator is self-adjoint if and only if H is real. See e.g. "Discrete-Time Signal Processing", Oppenheim and Schafer. Example of a self-adjoint positive definite operator # spectrum is real ==> operator is self-adjoint
# spectrum is positive ==> operator is positive definite
spectrum = [6., 4, 2]
operator = LinearOperatorCirculant(spectrum)
# IFFT[spectrum]
operator.convolution_kernel()
==> [4 + 0j, 1 + 0.58j, 1 - 0.58j]
operator.to_dense()
==> [[4 + 0.0j, 1 - 0.6j, 1 + 0.6j],
[1 + 0.6j, 4 + 0.0j, 1 - 0.6j],
[1 - 0.6j, 1 + 0.6j, 4 + 0.0j]]
Example of defining in terms of a real convolution kernel # convolution_kernel is real ==> spectrum is Hermitian.
convolution_kernel = [1., 2., 1.]
spectrum = tf.signal.fft(tf.cast(convolution_kernel, tf.complex64))
# spectrum is Hermitian ==> operator is real.
# spectrum is shape [3] ==> operator is shape [3, 3]
# We force the input/output type to be real, which allows this to operate
# like a real matrix.
operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)
operator.to_dense()
==> [[ 1, 1, 2],
[ 2, 1, 1],
[ 1, 2, 1]]
Example of Hermitian spectrum # spectrum is shape [3] ==> operator is shape [3, 3]
# spectrum is Hermitian ==> operator is real.
spectrum = [1, 1j, -1j]
operator = LinearOperatorCirculant(spectrum)
operator.to_dense()
==> [[ 0.33 + 0j, 0.91 + 0j, -0.24 + 0j],
[-0.24 + 0j, 0.33 + 0j, 0.91 + 0j],
[ 0.91 + 0j, -0.24 + 0j, 0.33 + 0j]]
Example of forcing real dtype when spectrum is Hermitian # spectrum is shape [4] ==> operator is shape [4, 4]
# spectrum is real ==> operator is self-adjoint
# spectrum is Hermitian ==> operator is real
# spectrum has positive real part ==> operator is positive-definite.
spectrum = [6., 4, 2, 4]
# Force the input dtype to be float32.
# Cast the output to float32. This is fine because the operator will be
# real due to Hermitian spectrum.
operator = LinearOperatorCirculant(spectrum, input_output_dtype=tf.float32)
operator.shape
==> [4, 4]
operator.to_dense()
==> [[4, 1, 0, 1],
[1, 4, 1, 0],
[0, 1, 4, 1],
[1, 0, 1, 4]]
# convolution_kernel = tf.signal.ifft(spectrum)
operator.convolution_kernel()
==> [4, 1, 0, 1]
Performance Suppose operator is a LinearOperatorCirculant of shape [N, N], and x.shape = [N, R]. Then
operator.matmul(x) is O(R*N*Log[N])
operator.solve(x) is O(R*N*Log[N])
operator.determinant() involves a size N reduce_prod. If instead operator and x have shape [B1,...,Bb, N, N] and [B1,...,Bb, N, R], every operation increases in complexity by B1*...*Bb. Matrix property hints This LinearOperator is initialized with boolean flags of the form is_X, for X = non_singular, self_adjoint, positive_definite, square. These have the following meaning: If is_X == True, callers should expect the operator to have the property X. This is a promise that should be fulfilled, but is not a runtime assert. For example, finite floating point precision may result in these promises being violated. If is_X == False, callers should expect the operator to not have X. If is_X == None (the default), callers should have no expectation either way. References: Toeplitz and Circulant Matrices - A Review: Gray, 2006 (pdf)
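The Fourier-multiplier identity A u = IDFT[ H * DFT[u] ] underlying this operator can be verified outside TensorFlow with a tiny pure-Python DFT; the kernel h and vector u below are made-up example values, and this is a sanity check, not how the operator is implemented:

```python
import cmath

N = 4
h = [4.0, 1.0, 2.0, 3.0]   # example convolution kernel (first column of A)
u = [1.0, -2.0, 0.5, 3.0]  # example input vector

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

# Dense circulant matrix: A_{mn} = h_{(m - n) mod N}
A = [[h[(m - n) % N] for n in range(N)] for m in range(N)]

# Direct matrix-vector product: (A u)_m = sum_n A[m][n] u[n]
direct = [sum(A[m][n] * u[n] for n in range(N)) for m in range(N)]

# Fourier-multiplier form: A u = IDFT[ H * DFT[u] ], with H = DFT[h]
H = dft(h)
U = dft(u)
fourier = idft([H[k] * U[k] for k in range(N)])

# Both routes agree up to floating-point round-off.
assert all(abs(direct[m] - fourier[m]) < 1e-9 for m in range(N))
```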
Args
spectrum Shape [B1,...,Bb, N] Tensor. Allowed dtypes: float16, float32, float64, complex64, complex128. Type can be different than input_output_dtype
input_output_dtype dtype for input/output.
is_non_singular Expect that this operator is non-singular.
is_self_adjoint Expect that this operator is equal to its hermitian transpose. If spectrum is real, this will always be true.
is_positive_definite Expect that this operator is positive definite, meaning the quadratic form x^H A x has positive real part for all nonzero x. Note that we do not require the operator to be self-adjoint to be positive-definite. See: https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
is_square Expect that this operator acts like square [batch] matrices.
name A name to prepend to all ops created by this class.
Attributes
H Returns the adjoint of the current LinearOperator. Given A representing this LinearOperator, return A*. Note that calling self.adjoint() and self.H are equivalent.
batch_shape TensorShape of batch dimensions of this LinearOperator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns TensorShape([B1,...,Bb]), equivalent to A.shape[:-2]
block_depth Depth of recursively defined circulant blocks defining this Operator. With A the dense representation of this Operator, block_depth = 1 means A is symmetric circulant. For example, A = |w z y x|
|x w z y|
|y x w z|
|z y x w|
block_depth = 2 means A is block symmetric circulant with symmetric circulant blocks. For example, with W, X, Y, Z symmetric circulant, A = |W Z Y X|
|X W Z Y|
|Y X W Z|
|Z Y X W|
block_depth = 3 means A is block symmetric circulant with block symmetric circulant blocks.
block_shape
domain_dimension Dimension (in the sense of vector spaces) of the domain of this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns N.
dtype The DType of Tensors handled by this LinearOperator.
graph_parents List of graph dependencies of this LinearOperator. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Do not call graph_parents.
is_non_singular
is_positive_definite
is_self_adjoint
is_square Return True/False depending on if this operator is square.
parameters Dictionary of parameters used to instantiate this LinearOperator.
range_dimension Dimension (in the sense of vector spaces) of the range of this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns M.
shape TensorShape of this LinearOperator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns TensorShape([B1,...,Bb, M, N]), equivalent to A.shape.
spectrum
tensor_rank Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns b + 2.
Methods add_to_tensor View source
add_to_tensor(
x, name='add_to_tensor'
)
Add matrix represented by this operator to x. Equivalent to A + x.
Args
x Tensor with same dtype and shape broadcastable to self.shape.
name A name to give this Op.
Returns A Tensor with broadcast shape and same dtype as self.
adjoint View source
adjoint(
name='adjoint'
)
Returns the adjoint of the current LinearOperator. Given A representing this LinearOperator, return A*. Note that calling self.adjoint() and self.H are equivalent.
Args
name A name for this Op.
Returns LinearOperator which represents the adjoint of this LinearOperator.
assert_hermitian_spectrum View source
assert_hermitian_spectrum(
name='assert_hermitian_spectrum'
)
Returns an Op that asserts this operator has Hermitian spectrum. This operator corresponds to a real-valued matrix if and only if its spectrum is Hermitian.
Args
name A name to give this Op.
Returns An Op that asserts this operator has Hermitian spectrum.
assert_non_singular View source
assert_non_singular(
name='assert_non_singular'
)
Returns an Op that asserts this operator is non singular. This operator is considered non-singular if ConditionNumber < max{100, range_dimension, domain_dimension} * eps,
eps := np.finfo(self.dtype.as_numpy_dtype).eps
Args
name A string name to prepend to created ops.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is singular.
assert_positive_definite View source
assert_positive_definite(
name='assert_positive_definite'
)
Returns an Op that asserts this operator is positive definite. Here, positive definite means that the quadratic form x^H A x has positive real part for all nonzero x. Note that we do not require the operator to be self-adjoint to be positive definite.
Args
name A name to give this Op.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is not positive definite.
assert_self_adjoint View source
assert_self_adjoint(
name='assert_self_adjoint'
)
Returns an Op that asserts this operator is self-adjoint. Here we check that this operator is exactly equal to its hermitian transpose.
Args
name A string name to prepend to created ops.
Returns An Assert Op, that, when run, will raise an InvalidArgumentError if the operator is not self-adjoint.
batch_shape_tensor View source
batch_shape_tensor(
name='batch_shape_tensor'
)
Shape of batch dimensions of this operator, determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns a Tensor holding [B1,...,Bb].
Args
name A name for this Op.
Returns int32 Tensor
block_shape_tensor View source
block_shape_tensor()
Shape of the block dimensions of self.spectrum. cholesky View source
cholesky(
name='cholesky'
)
Returns a Cholesky factor as a LinearOperator. Given A representing this LinearOperator, if A is positive definite self-adjoint, return L, where A = L L^T, i.e. the cholesky decomposition.
Args
name A name for this Op.
Returns LinearOperator which represents the lower triangular matrix in the Cholesky decomposition.
Raises
ValueError When the LinearOperator is not hinted to be positive definite and self adjoint. cond View source
cond(
name='cond'
)
Returns the condition number of this linear operator.
Args
name A name for this Op.
Returns Shape [B1,...,Bb] Tensor of same dtype as self.
convolution_kernel View source
convolution_kernel(
name='convolution_kernel'
)
Convolution kernel corresponding to self.spectrum. The D dimensional DFT of this kernel is the frequency domain spectrum of this operator.
Args
name A name to give this Op.
Returns Tensor with dtype self.dtype.
determinant View source
determinant(
name='det'
)
Determinant for every batch member.
Args
name A name for this Op.
Returns Tensor with shape self.batch_shape and same dtype as self.
Raises
NotImplementedError If self.is_square is False. diag_part View source
diag_part(
name='diag_part'
)
Efficiently get the [batch] diagonal part of this operator. If this operator has shape [B1,...,Bb, M, N], this returns a Tensor diagonal, of shape [B1,...,Bb, min(M, N)], where diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]. my_operator = LinearOperatorDiag([1., 2.])
# Efficiently get the diagonal
my_operator.diag_part()
==> [1., 2.]
# Equivalent, but inefficient method
tf.linalg.diag_part(my_operator.to_dense())
==> [1., 2.]
Args
name A name for this Op.
Returns
diag_part A Tensor of same dtype as self. domain_dimension_tensor View source
domain_dimension_tensor(
name='domain_dimension_tensor'
)
Dimension (in the sense of vector spaces) of the domain of this operator. Determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns N.
Args
name A name for this Op.
Returns int32 Tensor
eigvals View source
eigvals(
name='eigvals'
)
Returns the eigenvalues of this linear operator. If the operator is marked as self-adjoint (via is_self_adjoint) this computation can be more efficient.
Note: This currently only supports self-adjoint operators.
Args
name A name for this Op.
Returns Shape [B1,...,Bb, N] Tensor of same dtype as self.
inverse View source
inverse(
name='inverse'
)
Returns the Inverse of this LinearOperator. Given A representing this LinearOperator, return a LinearOperator representing A^-1.
Args
name A name scope to use for ops added by this method.
Returns LinearOperator representing inverse of this matrix.
Raises
ValueError When the LinearOperator is not hinted to be non_singular. log_abs_determinant View source
log_abs_determinant(
name='log_abs_det'
)
Log absolute value of determinant for every batch member.
Args
name A name for this Op.
Returns Tensor with shape self.batch_shape and same dtype as self.
Raises
NotImplementedError If self.is_square is False. matmul View source
matmul(
x, adjoint=False, adjoint_arg=False, name='matmul'
)
Transform [batch] matrix x with left multiplication: x --> Ax. # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
X = ... # shape [..., N, R], batch matrix, R > 0.
Y = operator.matmul(X)
Y.shape
==> [..., M, R]
Y[..., :, r] = sum_j A[..., :, j] X[j, r]
Args
x LinearOperator or Tensor with compatible shape and same dtype as self. See class docstring for definition of compatibility.
adjoint Python bool. If True, left multiply by the adjoint: A^H x.
adjoint_arg Python bool. If True, compute A x^H where x^H is the hermitian transpose (transposition and complex conjugation).
name A name for this Op.
Returns A LinearOperator or Tensor with shape [..., M, R] and same dtype as self.
matvec View source
matvec(
x, adjoint=False, name='matvec'
)
Transform [batch] vector x with left multiplication: x --> Ax. # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
X = ... # shape [..., N], batch vector
Y = operator.matvec(X)
Y.shape
==> [..., M]
Y[..., :] = sum_j A[..., :, j] X[..., j]
Args
x Tensor with compatible shape and same dtype as self. x is treated as a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility.
adjoint Python bool. If True, left multiply by the adjoint: A^H x.
name A name for this Op.
Returns A Tensor with shape [..., M] and same dtype as self.
range_dimension_tensor View source
range_dimension_tensor(
name='range_dimension_tensor'
)
Dimension (in the sense of vector spaces) of the range of this operator. Determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns M.
Args
name A name for this Op.
Returns int32 Tensor
shape_tensor View source
shape_tensor(
name='shape_tensor'
)
Shape of this LinearOperator, determined at runtime. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns a Tensor holding [B1,...,Bb, M, N], equivalent to tf.shape(A).
Args
name A name for this Op.
Returns int32 Tensor
solve View source
solve(
rhs, adjoint=False, adjoint_arg=False, name='solve'
)
Solve (exact or approx) R (batch) systems of equations: A X = rhs. The returned Tensor will be close to an exact solution if A is well conditioned. Otherwise closeness will vary. See class docstring for details. Examples: # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve R > 0 linear systems for every member of the batch.
RHS = ... # shape [..., M, R]
X = operator.solve(RHS)
# X[..., :, r] is the solution to the r'th linear system
# sum_j A[..., :, j] X[..., j, r] = RHS[..., :, r]
operator.matmul(X)
==> RHS
Args
rhs Tensor with same dtype as this operator and compatible shape. rhs is treated like a [batch] matrix meaning for every set of leading dimensions, the last two dimensions defines a matrix. See class docstring for definition of compatibility.
adjoint Python bool. If True, solve the system involving the adjoint of this LinearOperator: A^H X = rhs.
adjoint_arg Python bool. If True, solve A X = rhs^H where rhs^H is the hermitian transpose (transposition and complex conjugation).
name A name scope to use for ops added by this method.
Returns Tensor with shape [...,N, R] and same dtype as rhs.
Raises
NotImplementedError If self.is_non_singular or is_square is False. solvevec View source
solvevec(
rhs, adjoint=False, name='solve'
)
Solve single equation with best effort: A X = rhs. The returned Tensor will be close to an exact solution if A is well conditioned. Otherwise closeness will vary. See class docstring for details. Examples: # Make an operator acting like batch matrix A. Assume A.shape = [..., M, N]
operator = LinearOperator(...)
operator.shape = [..., M, N]
# Solve one linear system for every member of the batch.
RHS = ... # shape [..., M]
X = operator.solvevec(RHS)
# X is the solution to the linear system
# sum_j A[..., :, j] X[..., j] = RHS[..., :]
operator.matvec(X)
==> RHS
Args
rhs Tensor with same dtype as this operator. rhs is treated like a [batch] vector meaning for every set of leading dimensions, the last dimension defines a vector. See class docstring for definition of compatibility regarding batch dimensions.
adjoint Python bool. If True, solve the system involving the adjoint of this LinearOperator: A^H X = rhs.
name A name scope to use for ops added by this method.
Returns Tensor with shape [...,N] and same dtype as rhs.
Raises
NotImplementedError If self.is_non_singular or is_square is False. tensor_rank_tensor View source
tensor_rank_tensor(
name='tensor_rank_tensor'
)
Rank (in the sense of tensors) of matrix corresponding to this operator. If this operator acts like the batch matrix A with A.shape = [B1,...,Bb, M, N], then this returns b + 2.
Args
name A name for this Op.
Returns int32 Tensor, determined at runtime.
to_dense View source
to_dense(
name='to_dense'
)
Return a dense (batch) matrix representing this operator. trace View source
trace(
name='trace'
)
Trace of the linear operator, equal to sum of self.diag_part(). If the operator is square, this is also the sum of the eigenvalues.
Args
name A name for this Op.
Returns Shape [B1,...,Bb] Tensor of same dtype as self.
__matmul__ View source
__matmul__(
other
) | |
doc_26045 | Header-encode the string string. The type of encoding (base64 or quoted-printable) will be based on the header_encoding attribute. | |
doc_26046 |
Return the edge color. | |
doc_26047 | Example: 'fav_color' in request.session | |
doc_26048 | Checks for an ASCII control character (ordinal values 0 to 31). | |
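Assuming this refers to curses.ascii.iscntrl, which accepts either a one-character string or an integer ordinal:

```python
from curses.ascii import iscntrl

# Control characters (ordinals 0-31) test true:
assert iscntrl('\n') and iscntrl('\x00') and iscntrl(31)
# Printable characters do not:
assert not iscntrl('A') and not iscntrl(' ')
```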
doc_26049 |
Render a DataFrame as an HTML table. Parameters
buf:str, Path or StringIO-like, optional, default None
Buffer to write to. If None, the output is returned as a string.
columns:sequence, optional, default None
The subset of columns to write. Writes all columns by default.
col_space:str or int, list or dict of int or str, optional
The minimum width of each column in CSS length units. An int is assumed to be px units. New in version 0.25.0: Ability to use str.
header:bool, optional
Whether to print column labels, default True.
index:bool, optional, default True
Whether to print index (row) labels.
na_rep:str, optional, default ‘NaN’
String representation of NaN to use.
formatters:list, tuple or dict of one-param. functions, optional
Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List/tuple must be of length equal to the number of columns.
float_format:one-parameter function, optional, default None
Formatter function to apply to columns’ elements if they are floats. This function must return a unicode string and will be applied only to the non-NaN elements, with NaN being handled by na_rep. Changed in version 1.2.0.
sparsify:bool, optional, default True
Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.
index_names:bool, optional, default True
Prints the names of the indexes.
justify:str, default None
How to justify the column labels. If None uses the option from the print configuration (controlled by set_option), ‘right’ out of the box. Valid values are left right center justify justify-all start end inherit match-parent initial unset.
max_rows:int, optional
Maximum number of rows to display in the console.
max_cols:int, optional
Maximum number of columns to display in the console.
show_dimensions:bool, default False
Display DataFrame dimensions (number of rows by number of columns).
decimal:str, default ‘.’
Character recognized as decimal separator, e.g. ‘,’ in Europe.
bold_rows:bool, default True
Make the row labels bold in the output.
classes:str or list or tuple, default None
CSS class(es) to apply to the resulting html table.
escape:bool, default True
Convert the characters <, >, and & to HTML-safe sequences.
notebook:{True, False}, default False
Whether the generated HTML is for IPython Notebook.
border:int
A border=border attribute is included in the opening <table> tag. Default pd.options.display.html.border.
table_id:str, optional
A css id is included in the opening <table> tag if specified.
render_links:bool, default False
Convert URLs to HTML links.
encoding:str, default “utf-8”
Set character encoding. New in version 1.0. Returns
str or None
If buf is None, returns the result as a string. Otherwise returns None. See also to_string
Convert DataFrame to a string. | |
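The parameters above appear to belong to pandas.DataFrame.to_html. A minimal sketch exercising float_format, na_rep, and classes (the frame contents are illustrative; assumes pandas is installed):

```python
import pandas as pd

# Hypothetical two-column frame with one NaN to show na_rep handling.
df = pd.DataFrame({"price": [1.23456, None], "item": ["apple", "banana"]})
html = df.to_html(
    float_format=lambda v: f"{v:.2f}",  # applied only to non-NaN floats
    na_rep="missing",                    # NaN cells rendered as "missing"
    classes="table table-striped",       # CSS classes added to the <table> tag
)
print("1.23" in html, "missing" in html, "table table-striped" in html)
# → True True True
```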
doc_26050 |
Perform dimensionality reduction on X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data. Returns
X_newndarray of shape (n_samples, n_components)
Reduced version of X. This will always be a dense array. | |
doc_26051 |
Return the sizes ('areas') of the elements in the collection. Returns
array
The 'area' of each element. | |
doc_26052 | Enables support for top-level await, async for, async with and async comprehensions. New in version 3.8. | |
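This flag appears to be ast.PyCF_ALLOW_TOP_LEVEL_AWAIT, passed to compile(). A quick sketch: with the flag, top-level await compiles to a code object marked as a coroutine; without it, the same source is a SyntaxError:

```python
import ast
import inspect

src = "import asyncio\nawait asyncio.sleep(0)"

# With the flag, the resulting code object carries the coroutine flag.
code = compile(src, "<demo>", "exec", flags=ast.PyCF_ALLOW_TOP_LEVEL_AWAIT)
print(bool(code.co_flags & inspect.CO_COROUTINE))  # → True
```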
doc_26053 | Absolute path to the package on the filesystem. Used to look up resources contained in the package. | |
doc_26054 | Converts this geometry to canonical form: >>> g = MultiPoint(Point(0, 0), Point(2, 2), Point(1, 1))
>>> print(g)
MULTIPOINT (0 0, 2 2, 1 1)
>>> g.normalize()
>>> print(g)
MULTIPOINT (2 2, 1 1, 0 0) | |
doc_26055 | Create a video display information object Info() -> VideoInfo Creates a simple object containing several attributes to describe the current graphics environment. If this is called before pygame.display.set_mode() some platforms can provide information about the default display mode. This can also be called after setting the display mode to verify specific display options were satisfied. The VidInfo object has several attributes: hw: 1 if the display is hardware accelerated
wm: 1 if windowed display modes can be used
video_mem: The megabytes of video memory on the display. This is 0 if
unknown
bitsize: Number of bits used to store each pixel
bytesize: Number of bytes used to store each pixel
masks: Four values used to pack RGBA values into pixels
shifts: Four values used to pack RGBA values into pixels
losses: Four values used to pack RGBA values into pixels
blit_hw: 1 if hardware Surface blitting is accelerated
blit_hw_CC: 1 if hardware Surface colorkey blitting is accelerated
blit_hw_A: 1 if hardware Surface pixel alpha blitting is accelerated
blit_sw: 1 if software Surface blitting is accelerated
blit_sw_CC: 1 if software Surface colorkey blitting is accelerated
blit_sw_A: 1 if software Surface pixel alpha blitting is accelerated
current_h, current_w: Height and width of the current video mode, or of the
            desktop mode if called before display.set_mode() is called.
            (current_h and current_w are available since SDL 1.2.10 and
            pygame 1.8.0; they are -1 on error or if an old SDL is being
            used.) | |
doc_26056 | tf.compat.v1.nn.dilation2d(
input, filter=None, strides=None, rates=None, padding=None, name=None,
filters=None, dilations=None
)
The input tensor has shape [batch, in_height, in_width, depth] and the filter tensor has shape [filter_height, filter_width, depth], i.e., each input channel is processed independently of the others with its own structuring function. The output tensor has shape [batch, out_height, out_width, depth]. The spatial dimensions of the output tensor depend on the padding algorithm. We currently only support the default "NHWC" data_format. In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with conv2d, we use unmirrored filters): output[b, y, x, c] =
max_{dy, dx} input[b,
strides[1] * y + rates[1] * dy,
strides[2] * x + rates[2] * dx,
c] +
filter[dy, dx, c]
Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros. Note on duality: The dilation of input by the filter is equal to the negation of the erosion of -input by the reflected filter.
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, in_height, in_width, depth].
filter A Tensor. Must have the same type as input. 3-D with shape [filter_height, filter_width, depth].
strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. Must be: [1, stride_height, stride_width, 1].
rates A list of ints that has length >= 4. The input stride for atrous morphological dilation. Must be: [1, rate_height, rate_width, 1].
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
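The max-sum formula above can be sketched in plain NumPy for a single channel with stride 1, rate 1, and VALID padding (a reference loop for clarity, not the TensorFlow implementation):

```python
import numpy as np

def dilation2d_valid(image, filt):
    """Grayscale morphological dilation:
    out[y, x] = max over (dy, dx) of image[y + dy, x + dx] + filt[dy, dx]."""
    fh, fw = filt.shape
    oh, ow = image.shape[0] - fh + 1, image.shape[1] - fw + 1
    out = np.empty((oh, ow), dtype=image.dtype)
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.max(image[y:y + fh, x:x + fw] + filt)
    return out

img = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
# With an all-zero filter, dilation reduces to 2x2 max-pooling.
print(dilation2d_valid(img, np.zeros((2, 2))))
```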
doc_26057 | Returns a new tensor with the hyperbolic tangent of the elements of input. outi=tanh(inputi)\text{out}_{i} = \tanh(\text{input}_{i})
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.8986, -0.7279, 1.1745, 0.2611])
>>> torch.tanh(a)
tensor([ 0.7156, -0.6218, 0.8257, 0.2553]) | |
doc_26058 | The metadata of this raster, represented as a nested dictionary. The first-level key is the metadata domain. The second-level contains the metadata item names and values from each domain. To set or update a metadata item, pass the corresponding metadata item to the method using the nested structure described above. Only keys that are in the specified dictionary are updated; the rest of the metadata remains unchanged. To remove a metadata item, use None as the metadata value. >>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
>>> rst.metadata
{}
>>> rst.metadata = {'DEFAULT': {'OWNER': 'Django', 'VERSION': '1.0'}}
>>> rst.metadata
{'DEFAULT': {'OWNER': 'Django', 'VERSION': '1.0'}}
>>> rst.metadata = {'DEFAULT': {'OWNER': None, 'VERSION': '2.0'}}
>>> rst.metadata
{'DEFAULT': {'VERSION': '2.0'}} | |
doc_26059 | The value of the start parameter (or 0 if the parameter was not supplied) | |
doc_26060 |
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on which pruning will act. | |
doc_26061 |
[Deprecated] Set the cursor. Notes Deprecated since version 3.5. | |
doc_26062 |
Round elements of the array to the nearest integer. Parameters
xarray_like
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
outndarray or scalar
Output array is same shape and type as x. This is a scalar if x is a scalar. See also
fix, ceil, floor, trunc
Notes For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Examples >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> np.rint(a)
array([-2., -2., -0., 0., 2., 2., 2.]) | |
doc_26063 |
Set a Python function to be used when pretty printing arrays. Parameters
ffunction or None
Function to be used to pretty print arrays. The function should expect a single array argument and return a string of the representation of the array. If None, the function is reset to the default NumPy function to print arrays.
reprbool, optional
If True (default), the function for pretty printing (__repr__) is set, if False the function that returns the default string representation (__str__) is set. See also
set_printoptions, get_printoptions
Examples >>> def pprint(arr):
... return 'HA! - What are you going to do now?'
...
>>> np.set_string_function(pprint)
>>> a = np.arange(10)
>>> a
HA! - What are you going to do now?
>>> _ = a
>>> # [0 1 2 3 4 5 6 7 8 9]
We can reset the function to the default: >>> np.set_string_function(None)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
repr affects either pretty printing or normal string representation. Note that __repr__ is still affected by setting __str__ because the width of each array element in the returned string becomes equal to the length of the result of __str__(). >>> x = np.arange(4)
>>> np.set_string_function(lambda x:'random', repr=False)
>>> x.__str__()
'random'
>>> x.__repr__()
'array([0, 1, 2, 3])' | |
doc_26064 | Browsers will not allow JavaScript access to cookies marked as “HTTP only” for security. Default: True | |
doc_26065 | The debugging flags necessary for the collector to print information about a leaking program (equal to DEBUG_COLLECTABLE | DEBUG_UNCOLLECTABLE |
DEBUG_SAVEALL). | |
doc_26066 |
Bases: matplotlib.patheffects.TickedStroke A shortcut PathEffect for applying TickedStroke and then drawing the original Artist. With this class you can use artist.set_path_effects([path_effects.withTickedStroke()])
as a shortcut for artist.set_path_effects([path_effects.TickedStroke(),
path_effects.Normal()])
Parameters
offset(float, float), default: (0, 0)
The (x, y) offset to apply to the path, in points.
spacingfloat, default: 10.0
The spacing between ticks in points.
anglefloat, default: 45.0
The angle between the path and the tick in degrees. The angle is measured as if you were an ant walking along the curve, with zero degrees pointing directly ahead, 90 to your left, -90 to your right, and 180 behind you.
lengthfloat, default: 1.414
The length of the tick relative to spacing. Recommended length = 1.414 (sqrt(2)) when angle=45, length=1.0 when angle=90 and length=2.0 when angle=60. **kwargs
Extra keywords are stored and passed through to AbstractPathEffect._update_gc(). Examples See TickedStroke patheffect. draw_path(renderer, gc, tpath, affine, rgbFace)[source]
Draw the path with updated gc. | |
doc_26067 |
Color difference from the CMC l:c standard. This color difference was developed by the Colour Measurement Committee (CMC) of the Society of Dyers and Colourists (United Kingdom). It is intended for use in the textile industry. The scale factors kL, kC set the weight given to differences in lightness and chroma relative to differences in hue. The usual values are kL=2, kC=1 for “acceptability” and kL=1, kC=1 for “imperceptibility”. Colors with dE > 1 are “different” for the given scale factors. Parameters
lab1array_like
reference color (Lab colorspace)
lab2array_like
comparison color (Lab colorspace) Returns
dEarray_like
distance between colors lab1 and lab2 Notes deltaE_cmc defines the scales for the lightness, hue, and chroma in terms of the first color. Consequently deltaE_cmc(lab1, lab2) != deltaE_cmc(lab2, lab1). References
1
https://en.wikipedia.org/wiki/Color_difference
2
http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CIE94.html
3
F. J. J. Clarke, R. McDonald, and B. Rigg, “Modification to the JPC79 colour-difference formula,” J. Soc. Dyers Colour. 100, 128-132 (1984). | |
doc_26068 |
Alias for set_linewidth. | |
doc_26069 |
Return the Transform instance used by this artist. | |
doc_26070 |
Set the artist offset transform. Parameters
transOffsetTransform | |
doc_26071 |
Get a fontconfig pattern suitable for looking up the font as specified with fontconfig's fc-match utility. This support does not depend on fontconfig; we are merely borrowing its pattern syntax for use here. | |
doc_26072 | sklearn.metrics.roc_curve(y_true, y_score, *, pos_label=None, sample_weight=None, drop_intermediate=True) [source]
Compute Receiver operating characteristic (ROC). Note: this implementation is restricted to the binary classification task. Read more in the User Guide. Parameters
y_truendarray of shape (n_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given.
y_scorendarray of shape (n_samples,)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers).
pos_labelint or str, default=None
The label of the positive class. When pos_label=None, if y_true is in {-1, 1} or {0, 1}, pos_label is set to 1, otherwise an error will be raised.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
drop_intermediatebool, default=True
Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves. New in version 0.17: parameter drop_intermediate. Returns
fprndarray of shape (>2,)
Increasing false positive rates such that element i is the false positive rate of predictions with score >= thresholds[i].
tprndarray of shape (>2,)
Increasing true positive rates such that element i is the true positive rate of predictions with score >= thresholds[i].
thresholdsndarray of shape (n_thresholds,)
Decreasing thresholds on the decision function used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(y_score) + 1. See also
plot_roc_curve
Plot Receiver operating characteristic (ROC) curve.
RocCurveDisplay
ROC Curve visualization.
det_curve
Compute error rates for different probability thresholds.
roc_auc_score
Compute the area under the ROC curve. Notes Since the thresholds are sorted from low to high values, they are reversed upon returning them to ensure they correspond to both fpr and tpr, which are sorted in reversed order during their calculation. References
1
Wikipedia entry for the Receiver operating characteristic
2
Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition Letters, 2006, 27(8):861-874. Examples >>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
>>> fpr
array([0. , 0. , 0.5, 0.5, 1. ])
>>> tpr
array([0. , 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([1.8 , 0.8 , 0.4 , 0.35, 0.1 ])
Examples using sklearn.metrics.roc_curve
Feature transformations with ensembles of trees
Species distribution modeling
Visualizations with Display Objects
Detection error tradeoff (DET) curve
Receiver Operating Characteristic (ROC) | |
doc_26073 |
Applies Niblack local threshold to an array. A threshold T is calculated for every pixel in the image using the following formula: T = m(x,y) - k * s(x,y)
where m(x,y) and s(x,y) are the mean and standard deviation of pixel (x,y) neighborhood defined by a rectangular window with size w times w centered around the pixel. k is a configurable parameter that weights the effect of standard deviation. Parameters
imagendarray
Input image.
window_sizeint, or iterable of int, optional
Window size specified as a single odd integer (3, 5, 7, …), or an iterable of length image.ndim containing only odd integers (e.g. (1, 5, 5)).
kfloat, optional
Value of parameter k in threshold formula. Returns
threshold(N, M) ndarray
Threshold mask. All pixels with an intensity higher than this value are assumed to be foreground. Notes This algorithm is originally designed for text recognition. The Bradley threshold is a particular case of the Niblack one, being equivalent to >>> from skimage import data
>>> image = data.page()
>>> q = 1
>>> threshold_image = threshold_niblack(image, k=0) * q
for some value q. By default, Bradley and Roth use q=1. References
1
W. Niblack, An introduction to Digital Image Processing, Prentice-Hall, 1986.
2
D. Bradley and G. Roth, “Adaptive thresholding using Integral Image”, Journal of Graphics Tools 12(2), pp. 13-21, 2007. DOI:10.1080/2151237X.2007.10129236 Examples >>> from skimage import data
>>> image = data.page()
>>> threshold_image = threshold_niblack(image, window_size=7, k=0.1) | |
doc_26074 | Get the character encoding of this InputSource. | |
doc_26075 | Return any data available in the cooked queue (very lazy). Raise EOFError if connection closed and no data available. Return b'' if no cooked data available otherwise. This method never blocks. | |
doc_26076 | An optional string of a field name (with an optional "-" prefix which indicates descending order) or an expression (or a tuple or list of strings and/or expressions) that specifies the ordering of the elements in the result string. Examples are the same as for ArrayAgg.ordering. | |
doc_26077 |
Alias for the signed integer type (one of numpy.byte, numpy.short, numpy.intc, numpy.int_ and np.longlong) that is the same size as a pointer. Compatible with the C intptr_t. Character code
'p' | |
doc_26078 | See Migration guide for more details. tf.compat.v1.math.rsqrt, tf.compat.v1.rsqrt
tf.math.rsqrt(
x, name=None
)
For example:
x = tf.constant([2., 0., -2.])
tf.math.rsqrt(x)
<tf.Tensor: shape=(3,), dtype=float32,
numpy=array([0.707, inf, nan], dtype=float32)>
Args
x A tf.Tensor. Must be one of the following types: bfloat16, half, float32, float64.
name A name for the operation (optional).
Returns A tf.Tensor. Has the same type as x. | |
doc_26079 | Exception raised when non-blocking put() (or put_nowait()) is called on a Queue object which is full. | |
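A minimal sketch of triggering this exception with a bounded queue.Queue:

```python
import queue

q = queue.Queue(maxsize=1)
q.put_nowait("first")           # fills the queue to its maxsize
try:
    q.put_nowait("second")      # non-blocking put on a full queue
except queue.Full:
    print("queue.Full raised")  # → queue.Full raised
```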
doc_26080 | bytearray.translate(table, /, delete=b'')
Return a copy of the bytes or bytearray object where all bytes occurring in the optional argument delete are removed, and the remaining bytes have been mapped through the given translation table, which must be a bytes object of length 256. You can use the bytes.maketrans() method to create a translation table. Set the table argument to None for translations that only delete characters: >>> b'read this short text'.translate(None, b'aeiou')
b'rd ths shrt txt'
Changed in version 3.6: delete is now supported as a keyword argument. | |
doc_26081 |
Test whether the mouse event occurred on the figure. Returns
bool, {} | |
doc_26082 |
Apply only the affine part of this transformation on the given array of values. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally a no-op. In affine transformations, this is equivalent to transform(values). Parameters
valuesarray
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length input_dims or shape (N x output_dims), depending on the input. | |
doc_26083 | Return True if the argument is a (positive or negative) zero and False otherwise. | |
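This description matches decimal.Decimal.is_zero(); a quick check that negative zero counts:

```python
from decimal import Decimal

print(Decimal("0").is_zero())      # → True
print(Decimal("-0.00").is_zero())  # → True  (negative zero is still zero)
print(Decimal("0.1").is_zero())    # → False
```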
doc_26084 | Returns the current setting for the weekday to start each week. | |
doc_26085 | load new BMP image from a file (or file-like object) load_basic(file) -> Surface Load an image from a file source. You can pass either a filename or a Python file-like object. This function only supports loading "basic" image format, ie BMP format. This function is always available, no matter how pygame was built. | |
doc_26086 | In range(60). | |
doc_26087 | tf.keras.mixed_precision.experimental.set_policy
tf.keras.mixed_precision.set_global_policy(
policy
)
The global policy is the default tf.keras.mixed_precision.Policy used for layers, if no policy is passed to the layer constructor.
tf.keras.mixed_precision.set_global_policy('mixed_float16')
tf.keras.mixed_precision.global_policy()
<Policy "mixed_float16">
tf.keras.layers.Dense(10).dtype_policy
<Policy "mixed_float16">
# Global policy is not used if a policy is directly passed to constructor
tf.keras.layers.Dense(10, dtype='float64').dtype_policy
<Policy "float64">
tf.keras.mixed_precision.set_global_policy('float32')
If no global policy is set, layers will instead default to a Policy constructed from tf.keras.backend.floatx(). To use mixed precision, the global policy should be set to 'mixed_float16' or 'mixed_bfloat16', so that every layer uses a 16-bit compute dtype and float32 variable dtype by default. Only floating point policies can be set as the global policy, such as 'float32' and 'mixed_float16'. Non-floating point policies such as 'int32' and 'complex64' cannot be set as the global policy because most layers do not support such policies. See tf.keras.mixed_precision.Policy for more information.
Args
policy A Policy, or a string that will be converted to a Policy. Can also be None, in which case the global policy will be constructed from tf.keras.backend.floatx() | |
doc_26088 | The field on the current object instance that can be used to determine the name of a candidate template. If either template_name_field itself or the value of the template_name_field on the current object instance is None, the object will not be used for a candidate template name. | |
doc_26089 | Control the number of TLS 1.3 session tickets of a TLS_PROTOCOL_SERVER context. The setting has no impact on TLS 1.0 to 1.2 connections. Note This attribute is not available unless the ssl module is compiled with OpenSSL 1.1.1 or newer. New in version 3.8. | |
doc_26090 | Returns the day of the week (0 is Monday) for year (1970–…), month (1–12), day (1–31). | |
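This matches calendar.weekday(); for example, 1 January 2000 fell on a Saturday:

```python
import calendar

# 0 is Monday ... 6 is Sunday
print(calendar.weekday(2000, 1, 1))                     # → 5
print(calendar.day_name[calendar.weekday(2000, 1, 1)])  # → Saturday
```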
doc_26091 |
[Deprecated] Notes Deprecated since version 3.5. | |
doc_26092 | Exception raised when the put_nowait() method is called on a queue that has reached its maxsize. | |
doc_26093 | This class provides run_script() and report() methods to determine the set of modules imported by a script. path can be a list of directories to search for modules; if not specified, sys.path is used. debug sets the debugging level; higher values make the class print debugging messages about what it’s doing. excludes is a list of module names to exclude from the analysis. replace_paths is a list of (oldpath, newpath) tuples that will be replaced in module paths.
report()
Print a report to standard output that lists the modules imported by the script and their paths, as well as modules that are missing or seem to be missing.
run_script(pathname)
Analyze the contents of the pathname file, which must contain Python code.
modules
A dictionary mapping module names to modules. See Example usage of ModuleFinder. | |
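A minimal sketch of the run_script()/modules workflow described above, using a throwaway script written to a temporary file (the script contents are illustrative):

```python
import os
import tempfile
from modulefinder import ModuleFinder

# Write a tiny script whose imports we want to discover.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("import json\n")

finder = ModuleFinder()
finder.run_script(path)   # analyze the file's Python code
os.remove(path)

print("json" in finder.modules)  # → True
```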
doc_26094 | class frozenset([iterable])
Return a new set or frozenset object whose elements are taken from iterable. The elements of a set must be hashable. To represent sets of sets, the inner sets must be frozenset objects. If iterable is not specified, a new empty set is returned. Sets can be created by several means: Use a comma-separated list of elements within braces: {'jack', 'sjoerd'}
Use a set comprehension: {c for c in 'abracadabra' if c not in 'abc'}
Use the type constructor: set(), set('foobar'), set(['a', 'b', 'foo'])
Instances of set and frozenset provide the following operations:
len(s)
Return the number of elements in set s (cardinality of s).
x in s
Test x for membership in s.
x not in s
Test x for non-membership in s.
isdisjoint(other)
Return True if the set has no elements in common with other. Sets are disjoint if and only if their intersection is the empty set.
issubset(other)
set <= other
Test whether every element in the set is in other.
set < other
Test whether the set is a proper subset of other, that is, set <= other and set != other.
issuperset(other)
set >= other
Test whether every element in other is in the set.
set > other
Test whether the set is a proper superset of other, that is, set >=
other and set != other.
union(*others)
set | other | ...
Return a new set with elements from the set and all others.
intersection(*others)
set & other & ...
Return a new set with elements common to the set and all others.
difference(*others)
set - other - ...
Return a new set with elements in the set that are not in the others.
symmetric_difference(other)
set ^ other
Return a new set with elements in either the set or other but not both.
copy()
Return a shallow copy of the set.
Note, the non-operator versions of union(), intersection(), difference(), and symmetric_difference(), issubset(), and issuperset() methods will accept any iterable as an argument. In contrast, their operator based counterparts require their arguments to be sets. This precludes error-prone constructions like set('abc') & 'cbs' in favor of the more readable set('abc').intersection('cbs'). Both set and frozenset support set to set comparisons. Two sets are equal if and only if every element of each set is contained in the other (each is a subset of the other). A set is less than another set if and only if the first set is a proper subset of the second set (is a subset, but is not equal). A set is greater than another set if and only if the first set is a proper superset of the second set (is a superset, but is not equal). Instances of set are compared to instances of frozenset based on their members. For example, set('abc') == frozenset('abc') returns True and so does set('abc') in set([frozenset('abc')]). The subset and equality comparisons do not generalize to a total ordering function. For example, any two nonempty disjoint sets are not equal and are not subsets of each other, so all of the following return False: a<b, a==b, or a>b. Since sets only define partial ordering (subset relationships), the output of the list.sort() method is undefined for lists of sets. Set elements, like dictionary keys, must be hashable. Binary operations that mix set instances with frozenset return the type of the first operand. For example: frozenset('ab') |
set('bc') returns an instance of frozenset. The following table lists operations available for set that do not apply to immutable instances of frozenset:
update(*others)
set |= other | ...
Update the set, adding elements from all others.
intersection_update(*others)
set &= other & ...
Update the set, keeping only elements found in it and all others.
difference_update(*others)
set -= other | ...
Update the set, removing elements found in others.
symmetric_difference_update(other)
set ^= other
Update the set, keeping only elements found in either set, but not in both.
add(elem)
Add element elem to the set.
remove(elem)
Remove element elem from the set. Raises KeyError if elem is not contained in the set.
discard(elem)
Remove element elem from the set if it is present.
pop()
Remove and return an arbitrary element from the set. Raises KeyError if the set is empty.
clear()
Remove all elements from the set.
Note, the non-operator versions of the update(), intersection_update(), difference_update(), and symmetric_difference_update() methods will accept any iterable as an argument. Note, the elem argument to the __contains__(), remove(), and discard() methods may be a set. To support searching for an equivalent frozenset, a temporary one is created from elem. | |
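A quick illustration of the operator and non-operator forms described above, including the rule that mixed binary operations return the type of the first operand:

```python
a = set("abracadabra")   # {'a', 'b', 'r', 'c', 'd'}
b = set("alacazam")      # {'a', 'l', 'c', 'z', 'm'}

print(sorted(a - b))   # in a but not b → ['b', 'd', 'r']
print(sorted(a & b))   # in both        → ['a', 'c']
print(sorted(a ^ b))   # in exactly one → ['b', 'd', 'l', 'm', 'r', 'z']

# Non-operator forms accept any iterable, unlike the operators:
print(sorted(a.intersection("cbs")))  # → ['b', 'c']

# Mixed set/frozenset operations return the first operand's type:
print(type(frozenset("ab") | set("bc")).__name__)  # → frozenset
```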
doc_26095 |
Set the linestyle(s) for the collection.
linestyle description
'-' or 'solid' solid line
'--' or 'dashed' dashed line
'-.' or 'dashdot' dash-dotted line
':' or 'dotted' dotted line Alternatively a dash tuple of the following form can be provided: (offset, onoffseq),
where onoffseq is an even length tuple of on and off ink in points. Parameters
lsstr or tuple or list thereof
Valid values for individual linestyles include {'-', '--', '-.', ':', '', (offset, on-off-seq)}. See Line2D.set_linestyle for a complete description. | |
doc_26096 |
Return the bool of a single element Series or DataFrame. This must be a boolean scalar value, either True or False. It will raise a ValueError if the Series or DataFrame does not have exactly 1 element, or that element is not boolean (integer values 0 and 1 will also raise an exception). Returns
bool
The value in the Series or DataFrame. See also Series.astype
Change the data type of a Series, including to boolean. DataFrame.astype
Change the data type of a DataFrame, including to boolean. numpy.bool_
NumPy boolean data type, used by pandas for boolean values. Examples The method will only work for single element objects with a boolean value:
>>> pd.Series([True]).bool()
True
>>> pd.Series([False]).bool()
False
>>> pd.DataFrame({'col': [True]}).bool()
True
>>> pd.DataFrame({'col': [False]}).bool()
False | |
doc_26097 | What environment the app is running in. Flask and extensions may enable behaviors based on the environment, such as enabling debug mode. This maps to the ENV config key. This is set by the FLASK_ENV environment variable and may not behave as expected if set in code. Do not enable development when deploying in production. Default: 'production' | |
doc_26098 |
Computes a partial inverse of MaxPool1d. See MaxUnpool1d for details. | |
doc_26099 | Method called when an empty line is entered in response to the prompt. If this method is not overridden, it repeats the last nonempty command entered. |