| _id | text | title |
|---|---|---|
doc_1400 | Return a new view of the dictionary’s values. See the documentation of view objects. An equality comparison between one dict.values() view and another will always return False. This also applies when comparing dict.values() to itself: >>> d = {'a': 1}
>>> d.values() == d.values()
False | |
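Since values views never compare equal, a content comparison needs an explicit conversion — a minimal sketch:

```python
d = {'a': 1, 'b': 2}

# values views never compare equal, not even to themselves
assert d.values() != d.values()

# converting to a list compares the actual contents
assert list(d.values()) == list(d.values())  # [1, 2] == [1, 2]
```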
doc_1401 |
Alias for set_fontstretch. | |
doc_1402 |
Set the vertices of the polygon. Parameters
xy(N, 2) array-like
The coordinates of the vertices. Notes Unlike Path, we do not ignore the last input vertex. If the polygon is meant to be closed, and the last point of the polygon is not equal to the first, we assume that the user has not explicitly passed a CLOSEPOLY vertex, and add it ourselves. | |
doc_1403 | Return a frame object from the call stack. If optional integer depth is given, return the frame object that many calls below the top of the stack. If that is deeper than the call stack, ValueError is raised. The default for depth is zero, returning the frame at the top of the call stack. Raises an auditing event sys._getframe with no arguments. CPython implementation detail: This function should be used for internal and specialized purposes only. It is not guaranteed to exist in all implementations of Python. | |
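The depth parameter can be sketched with a pair of nested calls (illustrative only; as the text notes, _getframe is a CPython implementation detail):

```python
import sys

def outer():
    return inner()

def inner():
    # depth 0: this frame; depth 1: the caller's frame
    here = sys._getframe(0)
    caller = sys._getframe(1)
    return here.f_code.co_name, caller.f_code.co_name

print(outer())  # ('inner', 'outer')
```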
doc_1404 |
Bind function func to event s. Parameters
sstr
One of the following event ids: 'button_press_event', 'button_release_event', 'draw_event', 'key_press_event', 'key_release_event', 'motion_notify_event', 'pick_event', 'resize_event', 'scroll_event', 'figure_enter_event', 'figure_leave_event', 'axes_enter_event', 'axes_leave_event', 'close_event'.
funccallable
The callback function to be executed, which must have the signature: def func(event: Event) -> Any
For the location events (button and key press/release), if the mouse is over the axes, the inaxes attribute of the event will be set to the Axes over which the event occurred, and additionally, the xdata and ydata attributes will be set to the mouse location in data coordinates. See KeyEvent and MouseEvent for more info. Returns
cid
A connection id that can be used with FigureCanvasBase.mpl_disconnect. Examples def on_press(event):
print('you pressed', event.button, event.xdata, event.ydata)
cid = canvas.mpl_connect('button_press_event', on_press)
Examples using matplotlib.pyplot.connect
Mouse move and click events | |
doc_1405 |
Find artist objects. Recursively find all Artist instances contained in the artist. Parameters
match
A filter criterion for the matches. This can be
None: Return all objects contained in artist. A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True. A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check).
include_selfbool
Include self in the list to be checked for a match. Returns
list of Artist | |
doc_1406 | Pop an item from the dict. | |
doc_1407 |
Set the canvas that contains the figure. Parameters
canvasFigureCanvas | |
doc_1408 | class sklearn.random_projection.SparseRandomProjection(n_components='auto', *, density='auto', eps=0.1, dense_output=False, random_state=None) [source]
Reduce dimensionality through sparse random projection. A sparse random matrix is an alternative to a dense random projection matrix that guarantees similar embedding quality while being much more memory efficient and allowing faster computation of the projected data. If we denote s = 1 / density, the components of the random matrix are drawn from:
-sqrt(s) / sqrt(n_components) with probability 1 / (2s)
0 with probability 1 - 1 / s
+sqrt(s) / sqrt(n_components) with probability 1 / (2s)
Read more in the User Guide. New in version 0.13. Parameters
n_componentsint or ‘auto’, default=’auto’
Dimensionality of the target projection space. n_components can be automatically adjusted according to the number of samples in the dataset and the bound given by the Johnson-Lindenstrauss lemma. In that case the quality of the embedding is controlled by the eps parameter. It should be noted that the Johnson-Lindenstrauss lemma can yield very conservative estimates of the required number of components as it makes no assumption on the structure of the dataset.
densityfloat or ‘auto’, default=’auto’
Ratio in the range (0, 1] of non-zero component in the random projection matrix. If density = ‘auto’, the value is set to the minimum density as recommended by Ping Li et al.: 1 / sqrt(n_features). Use density = 1 / 3.0 if you want to reproduce the results from Achlioptas, 2001.
epsfloat, default=0.1
Parameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n_components is set to ‘auto’. This value should be strictly positive. Smaller values lead to better embedding and higher number of dimensions (n_components) in the target projection space.
dense_outputbool, default=False
If True, ensure that the output of the random projection is a dense numpy array even if the input and random projection matrix are both sparse. In practice, if the number of components is small the number of zero components in the projected data will be very small and it will be more CPU and memory efficient to use a dense representation. If False, the projected data uses a sparse representation if the input is sparse.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generator used to generate the projection matrix at fit time. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
n_components_int
Concrete number of components computed when n_components=”auto”.
components_sparse matrix of shape (n_components, n_features)
Random matrix used for the projection. Sparse matrix will be of CSR format.
density_float in range 0.0 - 1.0
Concrete density computed when density = “auto”. See also
GaussianRandomProjection
References
1
Ping Li, T. Hastie and K. W. Church, 2006, “Very Sparse Random Projections”. https://web.stanford.edu/~hastie/Papers/Ping/KDD06_rp.pdf
2
D. Achlioptas, 2001, “Database-friendly random projections”, https://users.soe.ucsc.edu/~optas/papers/jl.pdf Examples >>> import numpy as np
>>> from sklearn.random_projection import SparseRandomProjection
>>> rng = np.random.RandomState(42)
>>> X = rng.rand(100, 10000)
>>> transformer = SparseRandomProjection(random_state=rng)
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)
>>> # very few components are non-zero
>>> np.mean(transformer.components_ != 0)
0.0100...
Methods
fit(X[, y]) Generate a sparse random projection matrix.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Project the data using a matrix product with the random matrix.
fit(X, y=None) [source]
Generate a sparse random projection matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Training set: only the shape is used to find optimal random matrix dimensions based on the theory referenced in the aforementioned papers. y
Ignored Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Project the data using a matrix product with the random matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
The input data to project into a smaller dimensional space. Returns
X_new{ndarray, sparse matrix} of shape (n_samples, n_components)
Projected array.
Examples using sklearn.random_projection.SparseRandomProjection
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
The Johnson-Lindenstrauss bound for embedding with random projections | |
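The three-point entry distribution quoted in the SparseRandomProjection entry above can be sketched in pure Python (an illustration of the sampling rule only, not sklearn's implementation; draw_entry is a hypothetical name):

```python
import math
import random

def draw_entry(density, n_components, rng=random):
    """Draw one entry of the sparse random projection matrix.

    With s = 1 / density, the entry is:
      -sqrt(s) / sqrt(n_components)  with probability 1 / (2s)
       0                             with probability 1 - 1 / s
      +sqrt(s) / sqrt(n_components)  with probability 1 / (2s)
    """
    s = 1.0 / density
    scale = math.sqrt(s) / math.sqrt(n_components)
    u = rng.random()
    if u < 1.0 / (2 * s):
        return -scale
    if u < 1.0 / s:
        return +scale
    return 0.0

# with density = 1/3 (the Achlioptas choice), about 1/3 of entries are non-zero
entries = [draw_entry(density=1 / 3, n_components=100) for _ in range(10000)]
nonzero = sum(1 for e in entries if e != 0) / len(entries)
```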
doc_1409 | See Migration guide for more details. tf.compat.v1.app.flags.EnumParser
tf.compat.v1.flags.EnumParser(
enum_values, case_sensitive=True
)
Args
enum_values [str], a non-empty list of string values in the enum.
case_sensitive bool, whether or not the enum is to be case-sensitive.
Raises
ValueError When enum_values is empty. Methods flag_type
flag_type()
See base class. parse
parse(
argument
)
Determines validity of argument and returns the correct element of enum.
Args
argument str, the supplied flag value.
Returns The first matching element from enum_values.
Raises
ValueError Raised when argument didn't match anything in enum.
Class Variables
syntactic_help '' | |
doc_1410 | Return True if the symbol is created from an import statement. | |
doc_1411 |
[Deprecated] Notes Deprecated since version 3.5: | |
doc_1412 | See Migration guide for more details. tf.compat.v1.raw_ops.MultiDeviceIteratorToStringHandle
tf.raw_ops.MultiDeviceIteratorToStringHandle(
multi_device_iterator, name=None
)
Args
multi_device_iterator A Tensor of type resource. A MultiDeviceIterator resource.
name A name for the operation (optional).
Returns A Tensor of type string. | |
doc_1413 |
Flush the GUI events for the figure. Interactive backends need to reimplement this method. | |
doc_1414 |
Performs DBSCAN extraction for an arbitrary epsilon. Extracting the clusters runs in linear time. Note that this results in labels_ which are close to a DBSCAN with similar settings and eps, only if eps is close to max_eps. Parameters
reachabilityarray of shape (n_samples,)
Reachability distances calculated by OPTICS (reachability_)
core_distancesarray of shape (n_samples,)
Distances at which points become core (core_distances_)
orderingarray of shape (n_samples,)
OPTICS ordered point indices (ordering_)
epsfloat
DBSCAN eps parameter. Must be set to < max_eps. Results will be close to DBSCAN algorithm if eps and max_eps are close to one another. Returns
labels_array of shape (n_samples,)
The estimated labels. | |
doc_1415 |
Draw samples from a noncentral chi-square distribution. The noncentral \(\chi^2\) distribution is a generalization of the \(\chi^2\) distribution. Parameters
dffloat or array_like of floats
Degrees of freedom, must be > 0. Changed in version 1.10.0: Earlier NumPy versions required df > 1.
noncfloat or array_like of floats
Non-centrality, must be non-negative.
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if df and nonc are both scalars. Otherwise, np.broadcast(df, nonc).size samples are drawn. Returns
outndarray or scalar
Drawn samples from the parameterized noncentral chi-square distribution. Notes The probability density function for the noncentral Chi-square distribution is \[P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),\] where \(Y_{q}\) is the Chi-square with q degrees of freedom. References 1
Wikipedia, “Noncentral chi-squared distribution” https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution Examples Draw values from the distribution and plot the histogram >>> rng = np.random.default_rng()
>>> import matplotlib.pyplot as plt
>>> values = plt.hist(rng.noncentral_chisquare(3, 20, 100000),
... bins=200, density=True)
>>> plt.show()
Draw values from a noncentral chisquare with very small noncentrality, and compare to a chisquare. >>> plt.figure()
>>> values = plt.hist(rng.noncentral_chisquare(3, .0000001, 100000),
... bins=np.arange(0., 25, .1), density=True)
>>> values2 = plt.hist(rng.chisquare(3, 100000),
... bins=np.arange(0., 25, .1), density=True)
>>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')
>>> plt.show()
Demonstrate how large values of non-centrality lead to a more symmetric distribution. >>> plt.figure()
>>> values = plt.hist(rng.noncentral_chisquare(3, 20, 100000),
... bins=200, density=True)
>>> plt.show() | |
doc_1416 |
Sets whether PyTorch operations must use “deterministic” algorithms. That is, algorithms which, given the same input, and when run on the same software and hardware, always produce the same output. When True, operations will use deterministic algorithms when available, and if only nondeterministic algorithms are available they will throw a RuntimeError when called. Warning This feature is in beta, and its design and implementation may change in the future. The following normally-nondeterministic operations will act deterministically when d=True:
torch.nn.Conv1d when called on CUDA tensor
torch.nn.Conv2d when called on CUDA tensor
torch.nn.Conv3d when called on CUDA tensor
torch.nn.ConvTranspose1d when called on CUDA tensor
torch.nn.ConvTranspose2d when called on CUDA tensor
torch.nn.ConvTranspose3d when called on CUDA tensor
torch.bmm() when called on sparse-dense CUDA tensors
torch.__getitem__() backward when self is a CPU tensor and indices is a list of tensors
torch.index_put() with accumulate=True when called on a CPU tensor The following normally-nondeterministic operations will throw a RuntimeError when d=True:
torch.nn.AvgPool3d when called on a CUDA tensor that requires grad
torch.nn.AdaptiveAvgPool2d when called on a CUDA tensor that requires grad
torch.nn.AdaptiveAvgPool3d when called on a CUDA tensor that requires grad
torch.nn.MaxPool3d when called on a CUDA tensor that requires grad
torch.nn.AdaptiveMaxPool2d when called on a CUDA tensor that requires grad
torch.nn.FractionalMaxPool2d when called on a CUDA tensor that requires grad
torch.nn.FractionalMaxPool3d when called on a CUDA tensor that requires grad
torch.nn.functional.interpolate() when called on a CUDA tensor that requires grad and one of the following modes is used: linear bilinear bicubic trilinear
torch.nn.ReflectionPad1d when called on a CUDA tensor that requires grad
torch.nn.ReflectionPad2d when called on a CUDA tensor that requires grad
torch.nn.ReplicationPad1d when called on a CUDA tensor that requires grad
torch.nn.ReplicationPad2d when called on a CUDA tensor that requires grad
torch.nn.ReplicationPad3d when called on a CUDA tensor that requires grad
torch.nn.NLLLoss when called on a CUDA tensor that requires grad
torch.nn.CTCLoss when called on a CUDA tensor that requires grad
torch.nn.EmbeddingBag when called on a CUDA tensor that requires grad
torch.scatter_add_() when called on a CUDA tensor
torch.index_add_() when called on a CUDA tensor
torch.index_copy()
torch.index_select() when called on a CUDA tensor that requires grad
torch.repeat_interleave() when called on a CUDA tensor that requires grad
torch.histc() when called on a CUDA tensor
torch.bincount() when called on a CUDA tensor
torch.kthvalue() when called on a CUDA tensor
torch.median() with indices output when called on a CUDA tensor A handful of CUDA operations are nondeterministic if the CUDA version is 10.2 or greater, unless the environment variable CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8 is set. See the CUDA documentation for more details: https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility If one of these environment variable configurations is not set, a RuntimeError will be raised from these operations when called with CUDA tensors: torch.mm() torch.mv() torch.bmm() Note that deterministic operations tend to have worse performance than non-deterministic operations. Parameters
d (bool) – If True, force operations to be deterministic. If False, allow non-deterministic operations. | |
doc_1417 | If newindex is specified, sets the combobox value to the element position newindex. Otherwise, returns the index of the current value or -1 if the current value is not in the values list. | |
doc_1418 | This is similar to str.format(), except that it is appropriate for building up HTML fragments. All args and kwargs are passed through conditional_escape() before being passed to str.format(). For the case of building up small HTML fragments, this function is to be preferred over string interpolation using % or str.format() directly, because it applies escaping to all arguments - just like the template system applies escaping by default. So, instead of writing: mark_safe("%s <b>%s</b> %s" % (
some_html,
escape(some_text),
escape(some_other_text),
))
You should instead use: format_html("{} <b>{}</b> {}",
mark_safe(some_html),
some_text,
some_other_text,
)
This has the advantage that you don’t need to apply escape() to each argument and risk a bug and an XSS vulnerability if you forget one. Note that although this function uses str.format() to do the interpolation, some of the formatting options provided by str.format() (e.g. number formatting) will not work, since all arguments are passed through conditional_escape() which (ultimately) calls force_str() on the values. | |
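The escape-everything-then-interpolate idea can be sketched with the standard library's html.escape (a toy analogue only; unlike Django's format_html, it does not honour conditional_escape's handling of already-safe strings):

```python
from html import escape

def toy_format_html(fmt, *args):
    # escape every argument before interpolation, like format_html does
    return fmt.format(*(escape(str(a)) for a in args))

html_out = toy_format_html("{} <b>{}</b>", "x < y", "a & b")
# 'x &lt; y <b>a &amp; b</b>'
```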
doc_1419 | tf.compat.v1.keras.layers.LSTM(
units, activation='tanh',
recurrent_activation='hard_sigmoid', use_bias=True,
kernel_initializer='glorot_uniform',
recurrent_initializer='orthogonal',
bias_initializer='zeros', unit_forget_bias=True,
kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None,
activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None,
bias_constraint=None, dropout=0.0, recurrent_dropout=0.0,
return_sequences=False, return_state=False, go_backwards=False, stateful=False,
unroll=False, **kwargs
)
Note that this cell is not optimized for performance on GPU. Please use tf.compat.v1.keras.layers.CuDNNLSTM for better performance on GPU.
Arguments
units Positive integer, dimensionality of the output space.
activation Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
recurrent_activation Activation function to use for the recurrent step. Default: hard sigmoid (hard_sigmoid). If you pass None, no activation is applied (ie. "linear" activation: a(x) = x).
use_bias Boolean, whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
bias_initializer Initializer for the bias vector.
unit_forget_bias Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to true will also force bias_initializer="zeros". This is recommended in Jozefowicz et al., 2015.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix.
bias_constraint Constraint function applied to the bias vector.
dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence.
return_state Boolean. Whether to return the last state in addition to the output.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
unroll Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
time_major The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape (timesteps, batch, ...), whereas in the False case, it will be (batch, timesteps, ...). Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. Call arguments:
inputs: A 3D tensor.
mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked.
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.
initial_state: List of initial state tensors to be passed to the first call of the cell.
Attributes
activation
bias_constraint
bias_initializer
bias_regularizer
dropout
implementation
kernel_constraint
kernel_initializer
kernel_regularizer
recurrent_activation
recurrent_constraint
recurrent_dropout
recurrent_initializer
recurrent_regularizer
states
unit_forget_bias
units
use_bias
Methods reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when RNN layer is constructed with stateful = True. Args: states: Numpy arrays that contain the value for the initial state, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise. | |
doc_1420 | Start a new thread and return its identifier. The thread executes the function function with the argument list args (which must be a tuple). The optional kwargs argument specifies a dictionary of keyword arguments. When the function returns, the thread silently exits. When the function terminates with an unhandled exception, sys.unraisablehook() is called to handle the exception. The object attribute of the hook argument is function. By default, a stack trace is printed and then the thread exits (but other threads continue to run). When the function raises a SystemExit exception, it is silently ignored. Changed in version 3.8: sys.unraisablehook() is now used to handle unhandled exceptions. | |
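A minimal sketch of the args/kwargs calling convention (using a threading.Event only so the main thread can observe the result):

```python
import _thread
import threading

done = threading.Event()
results = []

def worker(n, step):
    results.append(n + step)
    done.set()

# args must be a tuple; kwargs is an optional dict
_thread.start_new_thread(worker, (40,), {'step': 2})

done.wait(timeout=5)
print(results)  # [42]
```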
doc_1421 | start time of a cdrom track get_track_start(track) -> seconds Return the absolute time in seconds at the start of the cdrom track. Note, track 0 is the first track on the CD. Track numbers start at zero. | |
doc_1422 | Return an iterator object. The object is required to support the iterator protocol described below. If a container supports different types of iteration, additional methods can be provided to specifically request iterators for those iteration types. (An example of an object supporting multiple forms of iteration would be a tree structure which supports both breadth-first and depth-first traversal.) This method corresponds to the tp_iter slot of the type structure for Python objects in the Python/C API. | |
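A minimal sketch of a container participating in the iterator protocol by delegating to a built-in iterator:

```python
class Countdown:
    """A container whose __iter__ returns a fresh iterator each time."""

    def __init__(self, start):
        self.start = start

    def __iter__(self):
        return iter(range(self.start, 0, -1))

c = Countdown(3)
assert list(c) == [3, 2, 1]
# each call to iter() yields an independent iterator
assert list(c) == [3, 2, 1]
```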
doc_1423 | Returns the value specified for option in style. If state is specified, it is expected to be a sequence of one or more states. If the default argument is set, it is used as a fallback value in case no specification for option is found. To check what font a Button uses by default: from tkinter import ttk
print(ttk.Style().lookup("TButton", "font")) | |
doc_1424 | Returns the current value of the configuration option given by option. Option may be any of the configuration options. | |
doc_1425 | This property returns the data for this BoundField extracted by the widget’s value_from_datadict() method, or None if it wasn’t given: >>> unbound_form = ContactForm()
>>> print(unbound_form['subject'].data)
None
>>> bound_form = ContactForm(data={'subject': 'My Subject'})
>>> print(bound_form['subject'].data)
My Subject | |
doc_1426 |
Return whether the artist is animated. | |
doc_1427 | Installs activation scripts appropriate to the platform into the virtual environment. | |
doc_1428 | math.ceil(x)
Return the ceiling of x, the smallest integer greater than or equal to x. If x is not a float, delegates to x.__ceil__(), which should return an Integral value.
math.comb(n, k)
Return the number of ways to choose k items from n items without repetition and without order. Evaluates to n! / (k! * (n - k)!) when k <= n and evaluates to zero when k > n. Also called the binomial coefficient because it is equivalent to the coefficient of k-th term in polynomial expansion of the expression (1 + x) ** n. Raises TypeError if either of the arguments are not integers. Raises ValueError if either of the arguments are negative. New in version 3.8.
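A quick check of the identity quoted above:

```python
import math

# C(5, 2): ways to pick 2 of 5 items, order irrelevant
assert math.comb(5, 2) == 10
assert math.comb(5, 2) == math.factorial(5) // (math.factorial(2) * math.factorial(3))
# k > n evaluates to zero
assert math.comb(2, 5) == 0
```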
math.copysign(x, y)
Return a float with the magnitude (absolute value) of x but the sign of y. On platforms that support signed zeros, copysign(1.0, -0.0) returns -1.0.
math.fabs(x)
Return the absolute value of x.
math.factorial(x)
Return x factorial as an integer. Raises ValueError if x is not integral or is negative. Deprecated since version 3.9: Accepting floats with integral values (like 5.0) is deprecated.
math.floor(x)
Return the floor of x, the largest integer less than or equal to x. If x is not a float, delegates to x.__floor__(), which should return an Integral value.
math.fmod(x, y)
Return fmod(x, y), as defined by the platform C library. Note that the Python expression x % y may not return the same result. The intent of the C standard is that fmod(x, y) be exactly (mathematically; to infinite precision) equal to x - n*y for some integer n such that the result has the same sign as x and magnitude less than abs(y). Python’s x % y returns a result with the sign of y instead, and may not be exactly computable for float arguments. For example, fmod(-1e-100, 1e100) is -1e-100, but the result of Python’s -1e-100 % 1e100 is 1e100-1e-100, which cannot be represented exactly as a float, and rounds to the surprising 1e100. For this reason, function fmod() is generally preferred when working with floats, while Python’s x % y is preferred when working with integers.
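The sign difference between fmod() and % can be checked directly:

```python
import math

# fmod keeps the sign of x; Python's % takes the sign of y
assert math.fmod(-7.0, 3.0) == -1.0
assert -7.0 % 3.0 == 2.0

# the extreme example from the text
assert math.fmod(-1e-100, 1e100) == -1e-100
assert -1e-100 % 1e100 == 1e100
```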
math.frexp(x)
Return the mantissa and exponent of x as the pair (m, e). m is a float and e is an integer such that x == m * 2**e exactly. If x is zero, returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. This is used to “pick apart” the internal representation of a float in a portable way.
math.fsum(iterable)
Return an accurate floating point sum of values in the iterable. Avoids loss of precision by tracking multiple intermediate partial sums: >>> sum([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1])
0.9999999999999999
>>> fsum([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1])
1.0
The algorithm’s accuracy depends on IEEE-754 arithmetic guarantees and the typical case where the rounding mode is half-even. On some non-Windows builds, the underlying C library uses extended precision addition and may occasionally double-round an intermediate sum causing it to be off in its least significant bit. For further discussion and two alternative approaches, see the ASPN cookbook recipes for accurate floating point summation.
math.gcd(*integers)
Return the greatest common divisor of the specified integer arguments. If any of the arguments is nonzero, then the returned value is the largest positive integer that is a divisor of all arguments. If all arguments are zero, then the returned value is 0. gcd() without arguments returns 0. New in version 3.5. Changed in version 3.9: Added support for an arbitrary number of arguments. Formerly, only two arguments were supported.
math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
Return True if the values a and b are close to each other and False otherwise. Whether or not two values are considered close is determined according to given absolute and relative tolerances. rel_tol is the relative tolerance – it is the maximum allowed difference between a and b, relative to the larger absolute value of a or b. For example, to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is 1e-09, which assures that the two values are the same within about 9 decimal digits. rel_tol must be greater than zero. abs_tol is the minimum absolute tolerance – useful for comparisons near zero. abs_tol must be at least zero. If no errors occur, the result will be: abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol). The IEEE 754 special values of NaN, inf, and -inf will be handled according to IEEE rules. Specifically, NaN is not considered close to any other value, including NaN. inf and -inf are only considered close to themselves. New in version 3.5. See also PEP 485 – A function for testing approximate equality
math.isfinite(x)
Return True if x is neither an infinity nor a NaN, and False otherwise. (Note that 0.0 is considered finite.) New in version 3.2.
math.isinf(x)
Return True if x is a positive or negative infinity, and False otherwise.
math.isnan(x)
Return True if x is a NaN (not a number), and False otherwise.
math.isqrt(n)
Return the integer square root of the nonnegative integer n. This is the floor of the exact square root of n, or equivalently the greatest integer a such that a² ≤ n. For some applications, it may be more convenient to have the least integer a such that n ≤ a², or in other words the ceiling of the exact square root of n. For positive n, this can be computed using a = 1 + isqrt(n - 1). New in version 3.8.
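The floor/ceiling pair described above can be checked directly:

```python
import math

n = 17
floor_root = math.isqrt(n)           # largest a with a*a <= n
ceil_root = 1 + math.isqrt(n - 1)    # smallest a with n <= a*a

assert floor_root == 4   # 4*4 = 16 <= 17
assert ceil_root == 5    # 5*5 = 25 >= 17
# for perfect squares the two agree
assert math.isqrt(16) == 1 + math.isqrt(15) == 4
```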
math.lcm(*integers)
Return the least common multiple of the specified integer arguments. If all arguments are nonzero, then the returned value is the smallest positive integer that is a multiple of all arguments. If any of the arguments is zero, then the returned value is 0. lcm() without arguments returns 1. New in version 3.9.
math.ldexp(x, i)
Return x * (2**i). This is essentially the inverse of function frexp().
math.modf(x)
Return the fractional and integer parts of x. Both results carry the sign of x and are floats.
math.nextafter(x, y)
Return the next floating-point value after x towards y. If x is equal to y, return y. Examples:
math.nextafter(x, math.inf) goes up: towards positive infinity.
math.nextafter(x, -math.inf) goes down: towards minus infinity.
math.nextafter(x, 0.0) goes towards zero.
math.nextafter(x, math.copysign(math.inf, x)) goes away from zero. See also math.ulp(). New in version 3.9.
math.perm(n, k=None)
Return the number of ways to choose k items from n items without repetition and with order. Evaluates to n! / (n - k)! when k <= n and evaluates to zero when k > n. If k is not specified or is None, then k defaults to n and the function returns n!. Raises TypeError if either of the arguments are not integers. Raises ValueError if either of the arguments are negative. New in version 3.8.
math.prod(iterable, *, start=1)
Calculate the product of all the elements in the input iterable. The default start value for the product is 1. When the iterable is empty, return the start value. This function is intended specifically for use with numeric values and may reject non-numeric types. New in version 3.8.
math.remainder(x, y)
Return the IEEE 754-style remainder of x with respect to y. For finite x and finite nonzero y, this is the difference x - n*y, where n is the closest integer to the exact value of the quotient x / y. If x / y is exactly halfway between two consecutive integers, the nearest even integer is used for n. The remainder r = remainder(x, y) thus always satisfies abs(r) <= 0.5 * abs(y). Special cases follow IEEE 754: in particular, remainder(x, math.inf) is x for any finite x, and remainder(x, 0) and remainder(math.inf, x) raise ValueError for any non-NaN x. If the result of the remainder operation is zero, that zero will have the same sign as x. On platforms using IEEE 754 binary floating-point, the result of this operation is always exactly representable: no rounding error is introduced. New in version 3.7.
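A sketch contrasting remainder() (round-to-nearest quotient, ties to even) with the truncating math.fmod():

```python
import math

print(math.remainder(7, 4))   # -1.0: n = round(1.75) = 2, so 7 - 2*4
print(math.fmod(7, 4))        # 3.0: fmod truncates the quotient instead
print(math.remainder(6, 4))   # -2.0: 6/4 = 1.5 is halfway, so even n = 2 wins
print(math.remainder(5.0, math.inf))  # 5.0, per the IEEE 754 special case
```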
math.trunc(x)
Return the Real value x truncated to an Integral (usually an integer). Delegates to x.__trunc__().
math.ulp(x)
Return the value of the least significant bit of the float x: If x is a NaN (not a number), return x. If x is negative, return ulp(-x). If x is a positive infinity, return x. If x is equal to zero, return the smallest positive denormalized representable float (smaller than the minimum positive normalized float, sys.float_info.min). If x is equal to the largest positive representable float, return the value of the least significant bit of x, such that the first float smaller than x is x - ulp(x). Otherwise (x is a positive finite number), return the value of the least significant bit of x, such that the first float bigger than x is x + ulp(x). ULP stands for “Unit in the Last Place”. See also math.nextafter() and sys.float_info.epsilon. New in version 3.9.
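A few of these branches, checked against sys.float_info (Python 3.9+):

```python
import math
import sys

print(math.ulp(1.0) == sys.float_info.epsilon)  # True
print(math.ulp(0.0))                             # 5e-324, the smallest subnormal
print(math.ulp(-1.0) == math.ulp(1.0))           # True: ulp(x) == ulp(-x)
# consistency with nextafter(): the first float above x is x + ulp(x)
x = 3.5
print(math.nextafter(x, math.inf) == x + math.ulp(x))  # True
```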
Note that frexp() and modf() have a different call/return pattern than their C equivalents: they take a single argument and return a pair of values, rather than returning their second return value through an ‘output parameter’ (there is no such thing in Python). For the ceil(), floor(), and modf() functions, note that all floating-point numbers of sufficiently large magnitude are exact integers. Python floats typically carry no more than 53 bits of precision (the same as the platform C double type), in which case any float x with abs(x) >= 2**52 necessarily has no fractional bits.
Power and logarithmic functions
math.exp(x)
Return e raised to the power x, where e = 2.718281… is the base of natural logarithms. This is usually more accurate than math.e ** x or pow(math.e, x).
math.expm1(x)
Return e raised to the power x, minus 1. Here e is the base of natural logarithms. For small floats x, the subtraction in exp(x) - 1 can result in a significant loss of precision; the expm1() function provides a way to compute this quantity to full precision:
>>> from math import exp, expm1
>>> exp(1e-5) - 1 # gives result accurate to 11 places
1.0000050000069649e-05
>>> expm1(1e-5) # result accurate to full precision
1.0000050000166668e-05
New in version 3.2.
math.log(x[, base])
With one argument, return the natural logarithm of x (to base e). With two arguments, return the logarithm of x to the given base, calculated as log(x)/log(base).
math.log1p(x)
Return the natural logarithm of 1+x (base e). The result is calculated in a way which is accurate for x near zero.
math.log2(x)
Return the base-2 logarithm of x. This is usually more accurate than log(x, 2). New in version 3.3. See also int.bit_length(), which returns the number of bits necessary to represent an integer in binary, excluding the sign and leading zeros.
math.log10(x)
Return the base-10 logarithm of x. This is usually more accurate than log(x, 10).
math.pow(x, y)
Return x raised to the power y. Exceptional cases follow Annex ‘F’ of the C99 standard as far as possible. In particular, pow(1.0, x) and pow(x, 0.0) always return 1.0, even when x is a zero or a NaN. If both x and y are finite, x is negative, and y is not an integer then pow(x, y) is undefined, and raises ValueError. Unlike the built-in ** operator, math.pow() converts both its arguments to type float. Use ** or the built-in pow() function for computing exact integer powers.
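The float-conversion and special-case rules can be seen side by side with the built-in operator:

```python
import math

print(math.pow(2, 10))               # 1024.0 -- always a float
print(2 ** 10)                       # 1024   -- exact integer power
print(math.pow(float('nan'), 0.0))   # 1.0, per the Annex F rule above
print(math.pow(1.0, float('inf')))   # 1.0: pow(1.0, x) is always 1.0
```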
math.sqrt(x)
Return the square root of x.
Trigonometric functions
math.acos(x)
Return the arc cosine of x, in radians. The result is between 0 and pi.
math.asin(x)
Return the arc sine of x, in radians. The result is between -pi/2 and pi/2.
math.atan(x)
Return the arc tangent of x, in radians. The result is between -pi/2 and pi/2.
math.atan2(y, x)
Return atan(y / x), in radians. The result is between -pi and pi. The vector in the plane from the origin to point (x, y) makes this angle with the positive X axis. The point of atan2() is that the signs of both inputs are known to it, so it can compute the correct quadrant for the angle. For example, atan(1) and atan2(1, 1) are both pi/4, but atan2(-1, -1) is -3*pi/4.
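For example, the quadrant handling looks like this:

```python
import math

print(math.atan2(1, 1))    # pi/4: (1, 1) lies in the first quadrant
print(math.atan2(-1, -1))  # -3*pi/4: third quadrant, which plain atan(1)
                           # cannot distinguish from the first
print(math.atan2(1, 0))    # pi/2: well-defined even though x == 0
```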
math.cos(x)
Return the cosine of x radians.
math.dist(p, q)
Return the Euclidean distance between two points p and q, each given as a sequence (or iterable) of coordinates. The two points must have the same dimension. Roughly equivalent to: sqrt(sum((px - qx) ** 2.0 for px, qx in zip(p, q)))
New in version 3.8.
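A minimal check of the 2-D and 3-D cases (Python 3.8+):

```python
import math

print(math.dist((0, 0), (3, 4)))        # 5.0, the classic 3-4-5 triangle
print(math.dist([1, 0, 0], [0, 1, 0]))  # sqrt(2); any same-length sequences work
```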
math.hypot(*coordinates)
Return the Euclidean norm, sqrt(sum(x**2 for x in coordinates)). This is the length of the vector from the origin to the point given by the coordinates. For a two dimensional point (x, y), this is equivalent to computing the hypotenuse of a right triangle using the Pythagorean theorem, sqrt(x*x + y*y). Changed in version 3.8: Added support for n-dimensional points. Formerly, only the two dimensional case was supported.
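For example:

```python
import math

print(math.hypot(3, 4))     # 5.0, the 2-D hypotenuse
print(math.hypot(2, 3, 6))  # 7.0 = sqrt(4 + 9 + 36), n-dimensional (3.8+)
print(math.hypot())         # 0.0 -- the empty call is allowed
```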
math.sin(x)
Return the sine of x radians.
math.tan(x)
Return the tangent of x radians.
Angular conversion
math.degrees(x)
Convert angle x from radians to degrees.
math.radians(x)
Convert angle x from degrees to radians.
Hyperbolic functions
Hyperbolic functions are analogs of trigonometric functions that are based on hyperbolas instead of circles.
math.acosh(x)
Return the inverse hyperbolic cosine of x.
math.asinh(x)
Return the inverse hyperbolic sine of x.
math.atanh(x)
Return the inverse hyperbolic tangent of x.
math.cosh(x)
Return the hyperbolic cosine of x.
math.sinh(x)
Return the hyperbolic sine of x.
math.tanh(x)
Return the hyperbolic tangent of x.
Special functions
math.erf(x)
Return the error function at x. The erf() function can be used to compute traditional statistical functions such as the cumulative standard normal distribution:
def phi(x):
'Cumulative distribution function for the standard normal distribution'
return (1.0 + erf(x / sqrt(2.0))) / 2.0
New in version 3.2.
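The phi() snippet above can be sanity-checked against well-known values of the standard normal CDF; here it is repeated in self-contained form:

```python
from math import erf, sqrt

def phi(x):
    'Cumulative distribution function for the standard normal distribution'
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

print(phi(0.0))             # 0.5 exactly, by symmetry
print(round(phi(1.96), 3))  # 0.975, the familiar two-sided 95% quantile
print(phi(-8.0) < 1e-14)    # True: the far-left tail is essentially zero
```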
math.erfc(x)
Return the complementary error function at x. The complementary error function is defined as 1.0 - erf(x). It is used for large values of x where a subtraction from one would cause a loss of significance. New in version 3.2.
math.gamma(x)
Return the Gamma function at x. New in version 3.2.
math.lgamma(x)
Return the natural logarithm of the absolute value of the Gamma function at x. New in version 3.2.
Constants
math.pi
The mathematical constant π = 3.141592…, to available precision.
math.e
The mathematical constant e = 2.718281…, to available precision.
math.tau
The mathematical constant τ = 6.283185…, to available precision. Tau is a circle constant equal to 2π, the ratio of a circle’s circumference to its radius. To learn more about Tau, check out Vi Hart’s video Pi is (still) Wrong, and start celebrating Tau day by eating twice as much pie! New in version 3.6.
math.inf
A floating-point positive infinity. (For negative infinity, use -math.inf.) Equivalent to the output of float('inf'). New in version 3.5.
math.nan
A floating-point “not a number” (NaN) value. Equivalent to the output of float('nan'). New in version 3.5.
CPython implementation detail: The math module consists mostly of thin wrappers around the platform C math library functions. Behavior in exceptional cases follows Annex F of the C99 standard where appropriate. The current implementation will raise ValueError for invalid operations like sqrt(-1.0) or log(0.0) (where C99 Annex F recommends signaling invalid operation or divide-by-zero), and OverflowError for results that overflow (for example, exp(1000.0)). A NaN will not be returned from any of the functions above unless one or more of the input arguments was a NaN; in that case, most functions will return a NaN, but (again following C99 Annex F) there are some exceptions to this rule, for example pow(float('nan'), 0.0) or hypot(float('nan'), float('inf')). Note that Python makes no effort to distinguish signaling NaNs from quiet NaNs, and behavior for signaling NaNs remains unspecified. Typical behavior is to treat all NaNs as though they were quiet. See also
Module cmath
Complex number versions of many of these functions. | |
doc_1429 | Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied. | |
doc_1430 | Return the set of valid signal numbers on this platform. This can be less than range(1, NSIG) if some signals are reserved by the system for internal use. New in version 3.8. | |
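A small sketch of the guarantees above (Python 3.8+; the exact set is platform-dependent):

```python
import signal

sigs = signal.valid_signals()   # a set of signal.Signals members
print(signal.SIGINT in sigs)    # True on every supported platform
print(sigs <= set(range(1, signal.NSIG)))  # True: never more than range(1, NSIG)
```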
doc_1431 | Computes the reciprocal of x element-wise; see the Migration guide for more details. Compat alias: tf.compat.v1.raw_ops.Reciprocal
tf.raw_ops.Reciprocal(
x, name=None
)
I.e., \(y = 1 / x\).
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
doc_1432 | A legacy method for finding a loader for the specified module. If this is a top-level import, path will be None. Otherwise, this is a search for a subpackage or module and path will be the value of __path__ from the parent package. If a loader cannot be found, None is returned. If find_spec() is defined, backwards-compatible functionality is provided. Changed in version 3.4: Returns None when called instead of raising NotImplementedError. Can use find_spec() to provide functionality. Deprecated since version 3.4: Use find_spec() instead. | |
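The recommended replacement returns a module spec rather than a loader; for example:

```python
import importlib.util

spec = importlib.util.find_spec('json')   # modern replacement for find_module()
print(spec.name)                          # 'json'
# a missing top-level module yields None instead of raising
print(importlib.util.find_spec('no_such_module_xyz'))  # None
```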
doc_1433 |
Compute the balanced accuracy. The balanced accuracy in binary and multiclass classification problems is used to deal with imbalanced datasets. It is defined as the average of recall obtained on each class. The best value is 1 and the worst value is 0 when adjusted=False. Read more in the User Guide. New in version 0.20. Parameters
y_true1d array-like
Ground truth (correct) target values.
y_pred1d array-like
Estimated targets as returned by a classifier.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
adjustedbool, default=False
When true, the result is adjusted for chance, so that random performance would score 0, and perfect performance scores 1. Returns
balanced_accuracyfloat
See also
recall_score, roc_auc_score
Notes
Some literature promotes alternative definitions of balanced accuracy. Our definition is equivalent to accuracy_score with class-balanced sample weights, and shares desirable properties with the binary case. See the User Guide.
References
1
Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. (2010). The balanced accuracy and its posterior distribution. Proceedings of the 20th International Conference on Pattern Recognition, 3121-24.
2
John D. Kelleher, Brian Mac Namee, Aoife D’Arcy, (2015). Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies.
Examples
>>> from sklearn.metrics import balanced_accuracy_score
>>> y_true = [0, 1, 0, 0, 1, 0]
>>> y_pred = [0, 1, 0, 0, 0, 1]
>>> balanced_accuracy_score(y_true, y_pred)
0.625 | |
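As a cross-check, the adjusted=False definition (mean of per-class recalls) can be reproduced in plain Python; this is a sketch of the formula, not the scikit-learn implementation:

```python
def balanced_accuracy(y_true, y_pred):
    """Average recall over the classes present in y_true."""
    recalls = []
    for c in set(y_true):
        true_idx = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in true_idx if y_pred[i] == c)
        recalls.append(hits / len(true_idx))
    return sum(recalls) / len(recalls)

# recall is 3/4 for class 0 and 1/2 for class 1, averaging to 0.625
print(balanced_accuracy([0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 1]))  # 0.625
```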
doc_1434 | Return True if it is a hard link. | |
doc_1435 |
[Deprecated] Notes Deprecated since version 3.5: | |
doc_1436 |
Resample time-series data. Convenience method for frequency conversion and resampling of time series. The object must have a datetime-like index (DatetimeIndex, PeriodIndex, or TimedeltaIndex), or the caller must pass the label of a datetime-like series/index to the on/level keyword parameter. Parameters
rule:DateOffset, Timedelta or str
The offset string or object representing target conversion.
axis:{0 or ‘index’, 1 or ‘columns’}, default 0
Which axis to use for up- or down-sampling. For Series this will default to 0, i.e. along the rows. Must be DatetimeIndex, TimedeltaIndex or PeriodIndex.
closed:{‘right’, ‘left’}, default None
Which side of bin interval is closed. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
label:{‘right’, ‘left’}, default None
Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
convention:{‘start’, ‘end’, ‘s’, ‘e’}, default ‘start’
For PeriodIndex only, controls whether to use the start or end of rule.
kind:{‘timestamp’, ‘period’}, optional, default None
Pass ‘timestamp’ to convert the resulting index to a DateTimeIndex or ‘period’ to convert it to a PeriodIndex. By default the input representation is retained.
loffset:timedelta, default None
Adjust the resampled time labels. Deprecated since version 1.1.0: You should add the loffset to the df.index after the resample. See below.
base:int, default 0
For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for ‘5min’ frequency, base could range from 0 through 4. Defaults to 0. Deprecated since version 1.1.0: The new arguments that you should use are ‘offset’ or ‘origin’.
on:str, optional
For a DataFrame, column to use instead of index for resampling. Column must be datetime-like.
level:str or int, optional
For a MultiIndex, level (name or number) to use for resampling. level must be datetime-like.
origin:Timestamp or str, default ‘start_day’
The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. If string, must be one of the following: ‘epoch’: origin is 1970-01-01 ‘start’: origin is the first value of the timeseries ‘start_day’: origin is the first day at midnight of the timeseries New in version 1.1.0. ‘end’: origin is the last value of the timeseries ‘end_day’: origin is the ceiling midnight of the last day New in version 1.3.0.
offset:Timedelta or str, default is None
An offset timedelta added to the origin. New in version 1.1.0. Returns
pandas.core.Resampler
Resampler object. See also Series.resample
Resample a Series. DataFrame.resample
Resample a DataFrame. groupby
Group DataFrame by mapping, function, label, or list of labels. asfreq
Reindex a DataFrame with the given frequency without grouping. Notes See the user guide for more. To learn more about the offset strings, please see this link. Examples Start by creating a series with 9 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
2000-01-01 00:01:00 1
2000-01-01 00:02:00 2
2000-01-01 00:03:00 3
2000-01-01 00:04:00 4
2000-01-01 00:05:00 5
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
Freq: T, dtype: int64
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
Freq: 3T, dtype: int64
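The left-closed, left-labeled binning behind this result can be mimicked in plain Python; this illustrates the bin logic only, not pandas itself:

```python
from datetime import datetime

times = [datetime(2000, 1, 1, 0, m) for m in range(9)]
values = range(9)

# floor each timestamp to the start of its 3-minute bin (the left edge,
# which is also the default label for minute-based frequencies)
bins = {}
for t, v in zip(times, values):
    label = t.replace(minute=t.minute - t.minute % 3)
    bins[label] = bins.get(label, 0) + v

for label in sorted(bins):
    print(label.time(), bins[label])  # 00:00:00 3, 00:03:00 12, 00:06:00 21
```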
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as illustrated in the example below this one.
>>> series.resample('3T', label='right').sum()
2000-01-01 00:03:00 3
2000-01-01 00:06:00 12
2000-01-01 00:09:00 21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
>>> series.resample('3T', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
Freq: 3T, dtype: int64
Upsample the series into 30 second bins.
>>> series.resample('30S').asfreq()[0:5] # Select first 5 rows
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 1.0
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
Freq: 30S, dtype: float64
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
Freq: 30S, dtype: int64
Pass a custom function via apply
>>> def custom_resampler(arraylike):
... return np.sum(arraylike) + 5
...
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or end of rule. Resample a year by quarter using ‘start’ convention. Values are assigned to the first quarter of the period.
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
... freq='A',
... periods=2))
>>> s
2012 1
2013 2
Freq: A-DEC, dtype: int64
>>> s.resample('Q', convention='start').asfreq()
2012Q1 1.0
2012Q2 NaN
2012Q3 NaN
2012Q4 NaN
2013Q1 2.0
2013Q2 NaN
2013Q3 NaN
2013Q4 NaN
Freq: Q-DEC, dtype: float64
Resample quarters by month using ‘end’ convention. Values are assigned to the last month of the period.
>>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01',
... freq='Q',
... periods=4))
>>> q
2018Q1 1
2018Q2 2
2018Q3 3
2018Q4 4
Freq: Q-DEC, dtype: int64
>>> q.resample('M', convention='end').asfreq()
2018-03 1.0
2018-04 NaN
2018-05 NaN
2018-06 2.0
2018-07 NaN
2018-08 NaN
2018-09 3.0
2018-10 NaN
2018-11 NaN
2018-12 4.0
Freq: M, dtype: float64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resampling.
>>> d = {'price': [10, 11, 9, 13, 14, 18, 17, 19],
... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df = pd.DataFrame(d)
>>> df['week_starting'] = pd.date_range('01/01/2018',
... periods=8,
... freq='W')
>>> df
price volume week_starting
0 10 50 2018-01-07
1 11 60 2018-01-14
2 9 40 2018-01-21
3 13 100 2018-01-28
4 14 50 2018-02-04
5 18 100 2018-02-11
6 17 40 2018-02-18
7 19 50 2018-02-25
>>> df.resample('M', on='week_starting').mean()
price volume
week_starting
2018-01-31 10.75 62.5
2018-02-28 17.00 60.0
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
>>> days = pd.date_range('1/1/2000', periods=4, freq='D')
>>> d2 = {'price': [10, 11, 9, 13, 14, 18, 17, 19],
... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}
>>> df2 = pd.DataFrame(
... d2,
... index=pd.MultiIndex.from_product(
... [days, ['morning', 'afternoon']]
... )
... )
>>> df2
price volume
2000-01-01 morning 10 50
afternoon 11 60
2000-01-02 morning 9 40
afternoon 13 100
2000-01-03 morning 14 50
afternoon 18 100
2000-01-04 morning 17 40
afternoon 19 50
>>> df2.resample('D', level=0).sum()
price volume
2000-01-01 21 110
2000-01-02 22 140
2000-01-03 32 150
2000-01-04 36 90
If you want to adjust the start of the bins based on a fixed timestamp:
>>> start, end = '2000-10-01 23:30:00', '2000-10-02 00:30:00'
>>> rng = pd.date_range(start, end, freq='7min')
>>> ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
>>> ts
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
>>> ts.resample('17min').sum()
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', origin='epoch').sum()
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', origin='2000-01-01').sum()
2000-10-01 23:24:00 3
2000-10-01 23:41:00 15
2000-10-01 23:58:00 45
2000-10-02 00:15:00 45
Freq: 17T, dtype: int64
If you want to adjust the start of the bins with an offset Timedelta, the two following lines are equivalent:
>>> ts.resample('17min', origin='start').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
>>> ts.resample('17min', offset='23h30min').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If you want to take the largest Timestamp as the end of the bins:
>>> ts.resample('17min', origin='end').sum()
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In contrast with the start_day, you can use end_day to take the ceiling midnight of the largest Timestamp as the end of the bins and drop the bins not containing data:
>>> ts.resample('17min', origin='end_day').sum()
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
To replace the use of the deprecated base argument, you can now use offset, in this example it is equivalent to have base=2:
>>> ts.resample('17min', offset='2min').sum()
2000-10-01 23:16:00 0
2000-10-01 23:33:00 9
2000-10-01 23:50:00 36
2000-10-02 00:07:00 39
2000-10-02 00:24:00 24
Freq: 17T, dtype: int64
To replace the use of the deprecated loffset argument:
>>> from pandas.tseries.frequencies import to_offset
>>> loffset = '19min'
>>> ts_out = ts.resample('17min').sum()
>>> ts_out.index = ts_out.index + to_offset(loffset)
>>> ts_out
2000-10-01 23:33:00 0
2000-10-01 23:50:00 9
2000-10-02 00:07:00 21
2000-10-02 00:24:00 54
2000-10-02 00:41:00 24
Freq: 17T, dtype: int64 | |
doc_1437 | Execute the SQL query of the view, through MSIViewExecute(). If params is not None, it is a record describing actual values of the parameter tokens in the query. | |
doc_1438 | In-place version of digamma() | |
doc_1439 |
async def tcp_echo_client(message):
reader, writer = await asyncio.open_connection(
'127.0.0.1', 8888)
print(f'Send: {message!r}')
writer.write(message.encode())
await writer.drain()
data = await reader.read(100)
print(f'Received: {data.decode()!r}')
print('Close the connection')
writer.close()
await writer.wait_closed()
asyncio.run(tcp_echo_client('Hello World!'))
See also the Examples section below.
Stream Functions
The following top-level asyncio functions can be used to create and work with streams:
coroutine asyncio.open_connection(host=None, port=None, *, loop=None, limit=None, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None)
Establish a network connection and return a pair of (reader, writer) objects. The returned reader and writer objects are instances of StreamReader and StreamWriter classes. The loop argument is optional and can always be determined automatically when this function is awaited from a coroutine. limit determines the buffer size limit used by the returned StreamReader instance. By default the limit is set to 64 KiB. The rest of the arguments are passed directly to loop.create_connection(). New in version 3.7: The ssl_handshake_timeout parameter.
coroutine asyncio.start_server(client_connected_cb, host=None, port=None, *, loop=None, limit=None, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, ssl_handshake_timeout=None, start_serving=True)
Start a socket server. The client_connected_cb callback is called whenever a new client connection is established. It receives a (reader, writer) pair as two arguments, instances of the StreamReader and StreamWriter classes. client_connected_cb can be a plain callable or a coroutine function; if it is a coroutine function, it will be automatically scheduled as a Task. The loop argument is optional and can always be determined automatically when this method is awaited from a coroutine. limit determines the buffer size limit used by the returned StreamReader instance. By default the limit is set to 64 KiB. The rest of the arguments are passed directly to loop.create_server(). New in version 3.7: The ssl_handshake_timeout and start_serving parameters.
Unix Sockets
coroutine asyncio.open_unix_connection(path=None, *, loop=None, limit=None, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None)
Establish a Unix socket connection and return a pair of (reader, writer). Similar to open_connection() but operates on Unix sockets. See also the documentation of loop.create_unix_connection(). Availability: Unix. New in version 3.7: The ssl_handshake_timeout parameter. Changed in version 3.7: The path parameter can now be a path-like object
coroutine asyncio.start_unix_server(client_connected_cb, path=None, *, loop=None, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, start_serving=True)
Start a Unix socket server. Similar to start_server() but works with Unix sockets. See also the documentation of loop.create_unix_server(). Availability: Unix. New in version 3.7: The ssl_handshake_timeout and start_serving parameters. Changed in version 3.7: The path parameter can now be a path-like object.
StreamReader
class asyncio.StreamReader
Represents a reader object that provides APIs to read data from the IO stream. It is not recommended to instantiate StreamReader objects directly; use open_connection() and start_server() instead.
coroutine read(n=-1)
Read up to n bytes. If n is not provided, or set to -1, read until EOF and return all read bytes. If EOF was received and the internal buffer is empty, return an empty bytes object.
coroutine readline()
Read one line, where “line” is a sequence of bytes ending with \n. If EOF is received and \n was not found, the method returns partially read data. If EOF is received and the internal buffer is empty, return an empty bytes object.
coroutine readexactly(n)
Read exactly n bytes. Raise an IncompleteReadError if EOF is reached before n can be read. Use the IncompleteReadError.partial attribute to get the partially read data.
coroutine readuntil(separator=b'\n')
Read data from the stream until separator is found. On success, the data and separator will be removed from the internal buffer (consumed). Returned data will include the separator at the end. If the amount of data read exceeds the configured stream limit, a LimitOverrunError exception is raised, and the data is left in the internal buffer and can be read again. If EOF is reached before the complete separator is found, an IncompleteReadError exception is raised, and the internal buffer is reset. The IncompleteReadError.partial attribute may contain a portion of the separator. New in version 3.5.2.
at_eof()
Return True if the buffer is empty and feed_eof() was called.
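The reader coroutines above can be exercised without any network I/O by feeding a StreamReader by hand (direct instantiation is discouraged for real use, but convenient for illustration):

```python
import asyncio

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b'one\ntwo!three\n')
    reader.feed_eof()
    print(await reader.readline())        # b'one\n'
    print(await reader.readuntil(b'!'))   # b'two!' -- separator included
    print(await reader.readexactly(5))    # b'three'
    print(await reader.read())            # b'\n', the remainder up to EOF
    print(reader.at_eof())                # True: buffer empty, EOF received

asyncio.run(demo())
```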
StreamWriter
class asyncio.StreamWriter
Represents a writer object that provides APIs to write data to the IO stream. It is not recommended to instantiate StreamWriter objects directly; use open_connection() and start_server() instead.
write(data)
The method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method: stream.write(data)
await stream.drain()
writelines(data)
The method writes a list (or any iterable) of bytes to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method: stream.writelines(lines)
await stream.drain()
close()
The method closes the stream and the underlying socket. The method should be used along with the wait_closed() method: stream.close()
await stream.wait_closed()
can_write_eof()
Return True if the underlying transport supports the write_eof() method, False otherwise.
write_eof()
Close the write end of the stream after the buffered write data is flushed.
transport
Return the underlying asyncio transport.
get_extra_info(name, default=None)
Access optional transport information; see BaseTransport.get_extra_info() for details.
coroutine drain()
Wait until it is appropriate to resume writing to the stream. Example: writer.write(data)
await writer.drain()
This is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. When there is nothing to wait for, the drain() returns immediately.
is_closing()
Return True if the stream is closed or in the process of being closed. New in version 3.7.
coroutine wait_closed()
Wait until the stream is closed. Should be called after close() to wait until the underlying connection is closed. New in version 3.7.
Examples
TCP echo client using streams
TCP echo client using the asyncio.open_connection() function:
import asyncio
async def tcp_echo_client(message):
reader, writer = await asyncio.open_connection(
'127.0.0.1', 8888)
print(f'Send: {message!r}')
writer.write(message.encode())
data = await reader.read(100)
print(f'Received: {data.decode()!r}')
print('Close the connection')
writer.close()
asyncio.run(tcp_echo_client('Hello World!'))
See also The TCP echo client protocol example uses the low-level loop.create_connection() method.
TCP echo server using streams
TCP echo server using the asyncio.start_server() function:
import asyncio
async def handle_echo(reader, writer):
data = await reader.read(100)
message = data.decode()
addr = writer.get_extra_info('peername')
print(f"Received {message!r} from {addr!r}")
print(f"Send: {message!r}")
writer.write(data)
await writer.drain()
print("Close the connection")
writer.close()
async def main():
server = await asyncio.start_server(
handle_echo, '127.0.0.1', 8888)
addr = server.sockets[0].getsockname()
print(f'Serving on {addr}')
async with server:
await server.serve_forever()
asyncio.run(main())
See also The TCP echo server protocol example uses the loop.create_server() method.
Get HTTP headers
Simple example querying HTTP headers of the URL passed on the command line:
import asyncio
import urllib.parse
import sys
async def print_http_headers(url):
url = urllib.parse.urlsplit(url)
if url.scheme == 'https':
reader, writer = await asyncio.open_connection(
url.hostname, 443, ssl=True)
else:
reader, writer = await asyncio.open_connection(
url.hostname, 80)
query = (
f"HEAD {url.path or '/'} HTTP/1.0\r\n"
f"Host: {url.hostname}\r\n"
f"\r\n"
)
writer.write(query.encode('latin-1'))
while True:
line = await reader.readline()
if not line:
break
line = line.decode('latin1').rstrip()
if line:
print(f'HTTP header> {line}')
# Ignore the body, close the socket
writer.close()
url = sys.argv[1]
asyncio.run(print_http_headers(url))
Usage: python example.py http://example.com/path/page.html
or with HTTPS: python example.py https://example.com/path/page.html
Register an open socket to wait for data using streams
Coroutine waiting until a socket receives data using the open_connection() function:
import asyncio
import socket
async def wait_for_data():
    # Get a reference to the current event loop because
    # we want to access low-level APIs.
    loop = asyncio.get_running_loop()

    # Create a pair of connected sockets.
    rsock, wsock = socket.socketpair()

    # Register the open socket to wait for data.
    reader, writer = await asyncio.open_connection(sock=rsock)

    # Simulate the reception of data from the network
    loop.call_soon(wsock.send, 'abc'.encode())

    # Wait for data
    data = await reader.read(100)

    # Got data, we are done: close the socket
    print("Received:", data.decode())
    writer.close()

    # Close the second socket
    wsock.close()
asyncio.run(wait_for_data())
See also The register an open socket to wait for data using a protocol example uses a low-level protocol and the loop.create_connection() method. The watch a file descriptor for read events example uses the low-level loop.add_reader() method to watch a file descriptor. | |
doc_1440 | Decorator for skipping tests if gzip doesn’t exist. | |
doc_1441 |
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters
levelfloat | |
doc_1442 | Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. Warning Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
torch.utils.hooks.RemovableHandle | |
doc_1443 | Return an instance of the test result class that should be used for this test case class (if no other result instance is provided to the run() method). For TestCase instances, this will always be an instance of TestResult; subclasses of TestCase should override this as necessary. | |
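The defaultTestResult entry above can be sketched with the standard library alone; ExampleTest and test_nothing are hypothetical names used only for illustration:

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_nothing(self):
        pass

# run() with no explicit result argument falls back to defaultTestResult().
case = ExampleTest('test_nothing')
result = case.defaultTestResult()
print(type(result).__name__)                    # TestResult
print(isinstance(result, unittest.TestResult))  # True
```

A subclass of TestCase could override defaultTestResult to return a custom TestResult subclass instead.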
doc_1444 |
Return the default fill value for the argument object. The default filling value depends on the datatype of the input array or the type of the input scalar:
datatype default
bool True
int 999999
float 1.e20
complex 1.e20+0j
object ‘?’
string ‘N/A’ For structured types, a structured scalar is returned, with each field the default fill value for its type. For subarray types, the fill value is an array of the same size containing the default scalar fill value. Parameters
objndarray, dtype or scalar
The array data-type or scalar for which the default fill value is returned. Returns
fill_valuescalar
The default fill value. Examples >>> np.ma.default_fill_value(1)
999999
>>> np.ma.default_fill_value(np.array([1.1, 2., np.pi]))
1e+20
>>> np.ma.default_fill_value(np.dtype(complex))
(1e+20+0j) | |
doc_1445 |
Check whether module is pruned by looking for forward_pre_hooks in its modules that inherit from the BasePruningMethod. Parameters
module (nn.Module) – object that is either pruned or unpruned Returns
binary answer to whether module is pruned. Examples >>> m = nn.Linear(5, 7)
>>> print(prune.is_pruned(m))
False
>>> prune.random_unstructured(m, name='weight', amount=0.2)
>>> print(prune.is_pruned(m))
True | |
doc_1446 | Return e raised to the power x, where e = 2.718281… is the base of natural logarithms. This is usually more accurate than math.e ** x or pow(math.e, x). | |
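A minimal sketch of the math.exp behavior described above:

```python
import math

# math.exp(x) computes e**x; at x=1 it yields Euler's number itself.
print(math.exp(0))   # 1.0
print(math.exp(1))   # 2.718281828459045

# The dedicated function is the recommended (and usually more accurate)
# spelling of math.e ** x, though the two agree to within float tolerance.
print(math.isclose(math.exp(2), math.e ** 2))   # True
```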
doc_1447 |
Returns the greatest common divisor of |x1| and |x2| Parameters
x1, x2array_like, int
Arrays of values. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output). Returns
yndarray or scalar
The greatest common divisor of the absolute value of the inputs. This is a scalar if both x1 and x2 are scalars. See also lcm
The lowest common multiple Examples >>> np.gcd(12, 20)
4
>>> np.gcd.reduce([15, 25, 35])
5
>>> np.gcd(np.arange(6), 20)
array([20, 1, 2, 1, 4, 5]) | |
doc_1448 | This content manager provides only a minimum interface beyond that provided by Message itself: it deals only with text, raw byte strings, and Message objects. Nevertheless, it provides significant advantages compared to the base API: get_content on a text part will return a unicode string without the application needing to manually decode it, set_content provides a rich set of options for controlling the headers added to a part and controlling the content transfer encoding, and it enables the use of the various add_ methods, thereby simplifying the creation of multipart messages.
email.contentmanager.get_content(msg, errors='replace')
Return the payload of the part as either a string (for text parts), an EmailMessage object (for message/rfc822 parts), or a bytes object (for all other non-multipart types). Raise a KeyError if called on a multipart. If the part is a text part and errors is specified, use it as the error handler when decoding the payload to unicode. The default error handler is replace.
email.contentmanager.set_content(msg, <'str'>, subtype="plain", charset='utf-8', cte=None, disposition=None, filename=None, cid=None, params=None, headers=None)
email.contentmanager.set_content(msg, <'bytes'>, maintype, subtype, cte="base64", disposition=None, filename=None, cid=None, params=None, headers=None)
email.contentmanager.set_content(msg, <'EmailMessage'>, cte=None, disposition=None, filename=None, cid=None, params=None, headers=None)
Add headers and payload to msg: Add a Content-Type header with a maintype/subtype value. For str, set the MIME maintype to text, and set the subtype to subtype if it is specified, or plain if it is not. For bytes, use the specified maintype and subtype, or raise a TypeError if they are not specified. For EmailMessage objects, set the maintype to message, and set the subtype to subtype if it is specified or rfc822 if it is not. If subtype is partial, raise an error (bytes objects must be used to construct message/partial parts). If charset is provided (which is valid only for str), encode the string to bytes using the specified character set. The default is utf-8. If the specified charset is a known alias for a standard MIME charset name, use the standard charset instead. If cte is set, encode the payload using the specified content transfer encoding, and set the Content-Transfer-Encoding header to that value. Possible values for cte are quoted-printable, base64, 7bit, 8bit, and binary. If the input cannot be encoded in the specified encoding (for example, specifying a cte of 7bit for an input that contains non-ASCII values), raise a ValueError. For str objects, if cte is not set use heuristics to determine the most compact encoding. For EmailMessage, per RFC 2046, raise an error if a cte of quoted-printable or base64 is requested for subtype rfc822, and for any cte other than 7bit for subtype external-body. For message/rfc822, use 8bit if cte is not specified. For all other values of subtype, use 7bit. Note A cte of binary does not actually work correctly yet. The EmailMessage object as modified by set_content is correct, but BytesGenerator does not serialize it correctly. If disposition is set, use it as the value of the Content-Disposition header. If not specified, and filename is specified, add the header with the value attachment. If disposition is not specified and filename is also not specified, do not add the header. 
The only valid values for disposition are attachment and inline. If filename is specified, use it as the value of the filename parameter of the Content-Disposition header. If cid is specified, add a Content-ID header with cid as its value. If params is specified, iterate its items method and use the resulting (key, value) pairs to set additional parameters on the Content-Type header. If headers is specified and is a list of strings of the form headername: headervalue or a list of header objects (distinguished from strings by having a name attribute), add the headers to msg. | |
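The set_content rules above are easiest to see through EmailMessage, whose set_content method delegates to the registered content manager (raw_data_manager by default); the filename blob.bin is a hypothetical example:

```python
from email.message import EmailMessage

msg = EmailMessage()
# A str payload: maintype becomes text, subtype defaults to plain,
# and the charset (utf-8 here) is used to encode the body.
msg.set_content("Hello, world!\n", subtype="plain", charset="utf-8")
print(msg.get_content_type())   # text/plain

blob = EmailMessage()
# A bytes payload requires an explicit maintype/subtype; supplying a
# filename without a disposition adds Content-Disposition: attachment.
blob.set_content(b"\x00\x01", maintype="application",
                 subtype="octet-stream", filename="blob.bin")
print(blob.get_content_type())                      # application/octet-stream
print(blob["Content-Disposition"].split(";")[0])    # attachment
```

get_content() then round-trips each payload: a unicode string for the text part, the original bytes for the binary part.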
doc_1449 |
Return the height of the rectangle. | |
doc_1450 |
Draw samples from a Rayleigh distribution. The \(\chi\) and Weibull distributions are generalizations of the Rayleigh. Parameters
scalefloat or array_like of floats, optional
Scale, also equals the mode. Must be non-negative. Default is 1.
sizeint or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if scale is a scalar. Otherwise, np.array(scale).size samples are drawn. Returns
outndarray or scalar
Drawn samples from the parameterized Rayleigh distribution. Notes The probability density function for the Rayleigh distribution is \[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. References 1
Brighton Webs Ltd., “Rayleigh Distribution,” https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp 2
Wikipedia, “Rayleigh distribution” https://en.wikipedia.org/wiki/Rayleigh_distribution Examples Draw values from the distribution and plot the histogram >>> from matplotlib.pyplot import hist
>>> rng = np.random.default_rng()
>>> values = hist(rng.rayleigh(3, 100000), bins=200, density=True)
Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? >>> meanvalue = 1
>>> modevalue = np.sqrt(2 / np.pi) * meanvalue
>>> s = rng.rayleigh(modevalue, 1000000)
The percentage of waves larger than 3 meters is: >>> 100.*sum(s>3)/1000000.
0.087300000000000003 # random | |
doc_1451 | See Migration guide for more details. tf.compat.v1.estimator.LogisticRegressionHead
tf.estimator.LogisticRegressionHead(
    weight_column=None, loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE,
    name=None
)
Uses sigmoid_cross_entropy_with_logits loss, which is the same as BinaryClassHead. The differences compared to BinaryClassHead are: Does not support label_vocabulary. Instead, labels must be float in the range [0, 1]. Does not calculate some metrics that do not make sense, such as AUC. In PREDICT mode, only returns logits and predictions (=tf.sigmoid(logits)), whereas BinaryClassHead also returns probabilities, classes, and class_ids. Export output defaults to RegressionOutput, whereas BinaryClassHead defaults to PredictOutput. The head expects logits with shape [D0, D1, ... DN, 1]. In many applications, the shape is [batch_size, 1]. The labels shape must match logits, namely [D0, D1, ... DN] or [D0, D1, ... DN, 1]. If weight_column is specified, weights must be of shape [D0, D1, ... DN] or [D0, D1, ... DN, 1]. This is implemented as a generalized linear model, see https://en.wikipedia.org/wiki/Generalized_linear_model The head can be used with a canned estimator. Example: my_head = tf.estimator.LogisticRegressionHead()
my_estimator = tf.estimator.DNNEstimator(
    head=my_head,
    hidden_units=...,
    feature_columns=...)
It can also be used with a custom model_fn. Example: def _my_model_fn(features, labels, mode):
    my_head = tf.estimator.LogisticRegressionHead()
    logits = tf.keras.Model(...)(features)
    return my_head.create_estimator_spec(
        features=features,
        mode=mode,
        labels=labels,
        optimizer=tf.keras.optimizers.Adagrad(lr=0.1),
        logits=logits)

my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
Args
weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example.
loss_reduction One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch and label dimension. Defaults to SUM_OVER_BATCH_SIZE, namely weighted sum of losses divided by batch size * label_dimension.
name name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops.
Attributes
logits_dimension See base_head.Head for details.
loss_reduction See base_head.Head for details.
name See base_head.Head for details. Methods create_estimator_spec View source
create_estimator_spec(
    features, mode, logits, labels=None, optimizer=None, trainable_variables=None,
    train_op_fn=None, update_ops=None, regularization_losses=None
)
Returns EstimatorSpec that a model_fn can return. It is recommended to pass all args via name.
Args
features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor.
mode Estimator's ModeKeys.
logits Logits Tensor to be used by the head.
labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values.
optimizer An tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss.
trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here.
train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients.
update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here.
regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses.
Returns EstimatorSpec.
loss View source
loss(
    labels, logits, features=None, mode=None, regularization_losses=None
)
Returns regularized training loss. See base_head.Head for details. metrics View source
metrics(
    regularization_losses=None
)
Creates metrics. See base_head.Head for details. predictions View source
predictions(
    logits
)
Return predictions based on keys. See base_head.Head for details.
Args
logits logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension].
Returns A dict of predictions.
update_metrics View source
update_metrics(
    eval_metrics, features, logits, labels, regularization_losses=None
)
Updates eval metrics. See base_head.Head for details. | |
doc_1452 |
A compatibility alias for tobytes, with exactly the same behavior. Despite its name, it returns bytes not strs. Deprecated since version 1.19.0. | |
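A short sketch of the aliasing described above; since tostring emits a DeprecationWarning (and has been removed in recent NumPy releases), the example calls tobytes directly:

```python
import numpy as np

a = np.array([1, 2], dtype=np.uint8)

# tobytes() is the non-deprecated spelling; tostring() returned the
# identical bytes object despite its name.
raw = a.tobytes()
print(raw)        # b'\x01\x02'
print(type(raw))  # <class 'bytes'>
```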
doc_1453 | Mapping class that references keys weakly. Entries in the dictionary will be discarded when there is no longer a strong reference to the key. This can be used to associate additional data with an object owned by other parts of an application without adding attributes to those objects. This can be especially useful with objects that override attribute accesses. Changed in version 3.9: Added support for | and |= operators, specified in PEP 584. | |
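The WeakKeyDictionary behavior above, sketched on CPython (where dropping the last strong reference reclaims the key immediately); Widget is a hypothetical class used only for illustration:

```python
import gc
import weakref

class Widget:
    pass

extra = weakref.WeakKeyDictionary()

w = Widget()
extra[w] = {"tooltip": "click me"}   # attach data without touching Widget
print(len(extra))   # 1

del w          # drop the only strong reference to the key
gc.collect()   # defensive; CPython reclaims via refcounting anyway
print(len(extra))   # 0 -- the entry was discarded automatically
```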
doc_1454 | Create a new WSGI environ dict based on the values passed. The first parameter should be the path of the request which defaults to ‘/’. The second one can either be an absolute path (in that case the host is localhost:80) or a full path to the request with scheme, netloc port and the path to the script. This accepts the same arguments as the EnvironBuilder constructor. Changelog Changed in version 0.5: This function is now a thin wrapper over EnvironBuilder which was added in 0.5. The headers, environ_base, environ_overrides and charset parameters were added. Parameters
args (Any) –
kwargs (Any) – Return type
WSGIEnvironment | |
doc_1455 |
Load and return the breast cancer wisconsin dataset (classification). The breast cancer dataset is a classic and very easy binary classification dataset.
Classes 2
Samples per class 212(M),357(B)
Samples total 569
Dimensionality 30
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (569, 30)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (569,)
The classification target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. target_names: list
The names of target classes. frame: DataFrame of shape (569, 31)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset. filename: str
The path to the location of the data. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. The copy of UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is
downloaded from:
https://goo.gl/U2Uwz2
Examples Let’s say you are interested in the samples 10, 50, and 85, and want to know their class name. >>> from sklearn.datasets import load_breast_cancer
>>> data = load_breast_cancer()
>>> data.target[[10, 50, 85]]
array([0, 1, 0])
>>> list(data.target_names)
['malignant', 'benign'] | |
doc_1456 |
Set the markevery property to subsample the plot when using markers. e.g., if every=5, every 5-th marker will be plotted. Parameters
everyNone or int or (int, int) or slice or list[int] or float or (float, float) or list[bool]
Which markers to plot.
every=None: every point will be plotted.
every=N: every N-th marker will be plotted starting with marker 0.
every=(start, N): every N-th marker, starting at index start, will be plotted.
every=slice(start, end, N): every N-th marker, starting at index start, up to but not including index end, will be plotted.
every=[i, j, m, ...]: only markers at the given indices will be plotted.
every=[True, False, True, ...]: only positions that are True will be plotted. The list must have the same length as the data points.
every=0.1, (i.e. a float): markers will be spaced at approximately equal visual distances along the line; the distance along the line between markers is determined by multiplying the display-coordinate distance of the axes bounding-box diagonal by the value of every.
every=(0.5, 0.1) (i.e. a length-2 tuple of float): similar to every=0.1 but the first marker will be offset along the line by 0.5 multiplied by the display-coordinate-diagonal-distance along the line. For examples see Markevery Demo. Notes Setting markevery will still only draw markers at actual data points. While the float argument form aims for uniform visual spacing, it has to coerce from the ideal spacing to the nearest available data point. Depending on the number and distribution of data points, the result may still not look evenly spaced. When using a start offset to specify the first marker, the offset will be from the first data point which may be different from the first the visible data point if the plot is zoomed in. If zooming in on a plot when using float arguments then the actual data points that have markers will change because the distance between markers is always determined from the display-coordinates axes-bounding-box-diagonal regardless of the actual axes data limits. | |
doc_1457 |
Insert scalar into an array (scalar is cast to array’s dtype, if possible). There must be at least 1 argument; the last argument is the value item to be inserted. Then, a.itemset(*args) is equivalent to but faster than a[args] = item. The item should be a scalar value and args must select a single item in the array a. Parameters
*argsArguments
If one argument: a scalar, only used in case a is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple. Notes Compared to indexing syntax, itemset provides some speed increase for placing a scalar into a particular location in an ndarray, if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using itemset (and item) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration. Examples >>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
[1, 3, 6],
[1, 0, 1]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[2, 2, 6],
[1, 0, 6],
[1, 0, 9]]) | |
doc_1458 | Return True if the string ends with the specified suffix, otherwise return False. suffix can also be a tuple of suffixes to look for. With optional start, test beginning at that position. With optional end, stop comparing at that position. | |
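A sketch of the str.endswith variants described above (the filename is an arbitrary example):

```python
path = "report_final.tar.gz"

print(path.endswith(".gz"))                 # True
print(path.endswith((".zip", ".tar.gz")))   # True -- tuple of suffixes
print(path.endswith("report", 0, 6))        # True -- compares path[0:6] only
print(path.endswith(".tar"))                # False
```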
doc_1459 | Difference of number of memory blocks between the old and the new snapshots (int): 0 if the memory blocks have been allocated in the new snapshot. | |
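The count_diff attribute above belongs to tracemalloc.StatisticDiff, produced by comparing two snapshots; a minimal sketch (variable names are illustrative):

```python
import tracemalloc

tracemalloc.start()
snap_old = tracemalloc.take_snapshot()

data = [bytearray(256) for _ in range(100)]   # allocate ~100 new blocks

snap_new = tracemalloc.take_snapshot()
tracemalloc.stop()

# compare_to() yields StatisticDiff objects; count_diff is the change in
# the number of memory blocks between the old and the new snapshot.
diffs = snap_new.compare_to(snap_old, 'lineno')
print(any(d.count_diff > 0 for d in diffs))   # True -- new blocks appeared
```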
doc_1460 | Allow an application to set the locale for errors and warnings. SAX parsers are not required to provide localization for errors and warnings; if they cannot support the requested locale, however, they must raise a SAX exception. Applications may request a locale change in the middle of a parse. | |
doc_1461 | tf.compat.v1.metrics.recall(
    labels, predictions, weights=None, metrics_collections=None,
    updates_collections=None, name=None
)
The recall function creates two local variables, true_positives and false_negatives, that are used to compute the recall. This value is ultimately returned as recall, an idempotent operation that simply divides true_positives by the sum of true_positives and false_negatives. For estimation of the metric over a stream of data, the function creates an update_op that updates these variables and returns the recall. update_op weights each prediction by the corresponding value in weights. If weights is None, weights default to 1. Use weights of 0 to mask values.
Args
labels The ground truth values, a Tensor whose dimensions must match predictions. Will be cast to bool.
predictions The predicted values, a Tensor of arbitrary dimensions. Will be cast to bool.
weights Optional Tensor whose rank is either 0, or the same rank as labels, and must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension).
metrics_collections An optional list of collections that recall should be added to.
updates_collections An optional list of collections that update_op should be added to.
name An optional variable_scope name.
Returns
recall Scalar float Tensor with the value of true_positives divided by the sum of true_positives and false_negatives.
update_op Operation that increments true_positives and false_negatives variables appropriately and whose value matches recall.
Raises
ValueError If predictions and labels have mismatched shapes, or if weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple.
RuntimeError If eager execution is enabled. | |
doc_1462 | Construct a StackSummary object from a supplied list of FrameSummary objects or old-style list of tuples. Each tuple should be a 4-tuple with filename, lineno, name, line as the elements. | |
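A sketch of StackSummary.from_list with old-style 4-tuples; the filenames and function names are hypothetical:

```python
import traceback

# Each old-style tuple is (filename, lineno, name, line).
frames = [
    ("app.py", 10, "main", "run()"),
    ("app.py", 42, "run", "1 / 0"),
]
summary = traceback.StackSummary.from_list(frames)

print(len(summary))          # 2
print(summary[0].filename)   # app.py
print(summary.format()[1])   # '  File "app.py", line 42, in run ...'
```

The resulting FrameSummary objects format exactly like frames captured from a live traceback.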
doc_1463 | Return True if automatic collection is enabled. | |
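The gc.isenabled entry above in a quick sketch, toggling the collector and watching the flag track it:

```python
import gc

gc.disable()            # turn automatic collection off
print(gc.isenabled())   # False

gc.enable()             # turn it back on (the default state)
print(gc.isenabled())   # True
```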
doc_1464 |
Fit the model according to the given training data and parameters. Changed in version 0.19: store_covariances has been moved to main constructor as store_covariance Changed in version 0.19: tol has been moved to main constructor. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values (integers) | |
doc_1465 | Returns True if semaphore can not be acquired immediately. | |
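This describes asyncio.Semaphore.locked(); a minimal sketch of how the flag tracks acquisition:

```python
import asyncio

async def main():
    sem = asyncio.Semaphore(1)
    states = [sem.locked()]        # False -- one slot free
    await sem.acquire()
    states.append(sem.locked())    # True  -- acquire() would now block
    sem.release()
    states.append(sem.locked())    # False again
    return states

states = asyncio.run(main())
print(states)   # [False, True, False]
```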
doc_1466 | sklearn.datasets.make_friedman1(n_samples=100, n_features=10, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #1” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are independent features uniformly distributed on the interval [0, 1]. The output y is created according to the formula: y(X) = 10 * sin(pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4] + noise * N(0, 1).
Out of the n_features features, only 5 are actually used to compute y. The remaining features are independent of y. The number of features has to be >= 5. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=10
The number of features. Should be at least 5.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | |
doc_1467 | The version string of the OpenSSL library loaded by the interpreter: >>> ssl.OPENSSL_VERSION
'OpenSSL 1.0.2k 26 Jan 2017'
New in version 3.2. | |
doc_1468 | Create an object used to boxcar method calls. server is the eventual target of the call. Calls can be made to the result object, but they will immediately return None, and only store the call name and parameters in the MultiCall object. Calling the object itself causes all stored calls to be transmitted as a single system.multicall request. The result of this call is a generator; iterating over this generator yields the individual results. | |
doc_1469 |
The differences between consecutive elements of an array. Parameters
aryarray_like
If necessary, will be flattened before the differences are taken.
to_endarray_like, optional
Number(s) to append at the end of the returned differences.
to_beginarray_like, optional
Number(s) to prepend at the beginning of the returned differences. Returns
ediff1dndarray
The differences. Loosely, this is ary.flat[1:] - ary.flat[:-1]. See also
diff, gradient
Notes When applied to masked arrays, this function drops the mask information if the to_begin and/or to_end parameters are used. Examples >>> x = np.array([1, 2, 4, 7, 0])
>>> np.ediff1d(x)
array([ 1, 2, 3, -7])
>>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99]))
array([-99, 1, 2, ..., -7, 88, 99])
The returned array is always 1D. >>> y = [[1, 2, 4], [1, 6, 24]]
>>> np.ediff1d(y)
array([ 1, 2, -3, 5, 18]) | |
doc_1470 | Called on listening channels (passive openers) when a connection can be established with a new remote endpoint that has issued a connect() call for the local endpoint. Deprecated in version 3.2; use handle_accepted() instead. Deprecated since version 3.2. | |
doc_1471 | Axes(*args[, grid_helper]) Build an Axes in a figure.
AxesZero(*args[, grid_helper]) Build an Axes in a figure.
AxisArtistHelper() AxisArtistHelper should define following method with given APIs. Note that the first axes argument will be axes attribute of the caller artist.::.
AxisArtistHelperRectlinear()
GridHelperBase()
GridHelperRectlinear(axes) | |
doc_1472 |
Reduces the tensor data across all machines in such a way that all get the final result. After the call tensor is going to be bitwise identical in all processes. Complex tensors are supported. Parameters
tensor (Tensor) – Input and output of the collective. The function operates in-place.
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Examples >>> # All tensors below are of torch.int64 type.
>>> # We have 2 process groups, 2 ranks.
>>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> tensor
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> dist.all_reduce(tensor, op=ReduceOp.SUM)
>>> tensor
tensor([4, 6]) # Rank 0
tensor([4, 6]) # Rank 1
>>> # All tensors below are of torch.cfloat type.
>>> # We have 2 process groups, 2 ranks.
>>> tensor = torch.tensor([1+1j, 2+2j], dtype=torch.cfloat) + 2 * rank * (1+1j)
>>> tensor
tensor([1.+1.j, 2.+2.j]) # Rank 0
tensor([3.+3.j, 4.+4.j]) # Rank 1
>>> dist.all_reduce(tensor, op=ReduceOp.SUM)
>>> tensor
tensor([4.+4.j, 6.+6.j]) # Rank 0
tensor([4.+4.j, 6.+6.j]) # Rank 1 | |
doc_1473 | Make an entry into the EventMapping table for this control. | |
doc_1474 |
Round an array to the given number of decimals. See also around
equivalent function; see for details. | |
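Since round_ is only an alias for around (and is deprecated in recent NumPy releases), a sketch calling around directly; note that values exactly halfway between rounded decimals go to the nearest even value:

```python
import numpy as np

# Exact binary ties round to the nearest even value ("banker's rounding").
print(np.around(np.array([0.5, 1.5, 2.5])))        # [0. 2. 2.]
print(np.around(np.array([3.14159, 2.71828]), 3))  # [3.142 2.718]
```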
doc_1475 | This is the base class for exceptions raised by the Parser class. It is derived from MessageError. This class is also used internally by the parser used by headerregistry. | |
doc_1476 | class sklearn.naive_bayes.ComplementNB(*, alpha=1.0, fit_prior=True, class_prior=None, norm=False) [source]
The Complement Naive Bayes classifier described in Rennie et al. (2003). The Complement Naive Bayes classifier was designed to correct the “severe assumptions” made by the standard Multinomial Naive Bayes classifier. It is particularly suited for imbalanced data sets. Read more in the User Guide. New in version 0.20. Parameters
alphafloat, default=1.0
Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
fit_priorbool, default=True
Only used in edge case with a single class in the training set.
class_priorarray-like of shape (n_classes,), default=None
Prior probabilities of the classes. Not used.
normbool, default=False
Whether or not a second normalization of the weights is performed. The default behavior mirrors the implementations found in Mahout and Weka, which do not follow the full algorithm described in Table 9 of the paper. Attributes
class_count_ndarray of shape (n_classes,)
Number of samples encountered for each class during fitting. This value is weighted by the sample weight when provided.
class_log_prior_ndarray of shape (n_classes,)
Smoothed empirical log probability for each class. Only used in edge case with a single class in the training set.
classes_ndarray of shape (n_classes,)
Class labels known to the classifier
coef_ndarray of shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting ComplementNB as a linear model. Deprecated since version 0.24: coef_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
feature_all_ndarray of shape (n_features,)
Number of samples encountered for each feature during fitting. This value is weighted by the sample weight when provided.
feature_count_ndarray of shape (n_classes, n_features)
Number of samples encountered for each (class, feature) during fitting. This value is weighted by the sample weight when provided.
feature_log_prob_ndarray of shape (n_classes, n_features)
Empirical weights for class complements.
intercept_ndarray of shape (n_classes,)
Mirrors class_log_prior_ for interpreting ComplementNB as a linear model. Deprecated since version 0.24: intercept_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).
n_features_int
Number of features of each sample. References Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003). Tackling the poor assumptions of naive bayes text classifiers. In ICML (Vol. 3, pp. 616-623). https://people.csail.mit.edu/jrennie/papers/icml03-nb.pdf Examples >>> import numpy as np
>>> rng = np.random.RandomState(1)
>>> X = rng.randint(5, size=(6, 100))
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> from sklearn.naive_bayes import ComplementNB
>>> clf = ComplementNB()
>>> clf.fit(X, y)
ComplementNB()
>>> print(clf.predict(X[2:3]))
[3]
Methods
fit(X, y[, sample_weight]) Fit Naive Bayes classifier according to X, y
get_params([deep]) Get parameters for this estimator.
partial_fit(X, y[, classes, sample_weight]) Incremental fit on a batch of samples.
predict(X) Perform classification on an array of test vectors X.
predict_log_proba(X) Return log-probability estimates for the test vector X.
predict_proba(X) Return probability estimates for the test vector X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit Naive Bayes classifier according to X, y. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
partial_fit(X, y, classes=None, sample_weight=None) [source]
Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning. This is especially useful when the whole dataset is too big to fit in memory at once. This method has some performance overhead hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
classesarray-like of shape (n_classes), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit, can be omitted in subsequent calls.
sample_weightarray-like of shape (n_samples,), default=None
Weights applied to individual samples (1. for unweighted). Returns
selfobject
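A minimal out-of-core sketch of partial_fit as described above (the data here is made up for illustration):

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(6, 10))
y = np.array([1, 2, 1, 2, 1, 2])

clf = ComplementNB()
# classes must be provided on the first call to partial_fit ...
clf.partial_fit(X[:3], y[:3], classes=np.array([1, 2]))
# ... and may be omitted on subsequent calls.
clf.partial_fit(X[3:], y[3:])
print(clf.predict(X[:1]).shape)  # (1,)
```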
predict(X) [source]
Perform classification on an array of test vectors X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Cndarray of shape (n_samples,)
Predicted target values for X
predict_log_proba(X) [source]
Return log-probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
predict_proba(X) [source]
Return probability estimates for the test vector X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Carray-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.naive_bayes.ComplementNB
Classification of text documents using sparse features | |
doc_1477 |
Return vertical sizes. | |
doc_1478 | See Migration guide for more details. tf.compat.v1.saved_model.signature_def_utils.predict_signature_def
tf.compat.v1.saved_model.predict_signature_def(
inputs, outputs
)
This function produces signatures intended for use with the TensorFlow Serving Predict API (tensorflow_serving/apis/prediction_service.proto). This API imposes no constraints on the input and output types.
Args
inputs dict of string to Tensor.
outputs dict of string to Tensor.
Returns A prediction-flavored signature_def.
Raises
ValueError If inputs or outputs is None. | |
doc_1479 | Map character sets to their email properties. This class provides information about the requirements imposed on email for a specific character set. It also provides convenience routines for converting between character sets, given the availability of the applicable codecs. Given a character set, it will do its best to provide information on how to use that character set in an email message in an RFC-compliant way. Certain character sets must be encoded with quoted-printable or base64 when used in email headers or bodies. Certain character sets must be converted outright, and are not allowed in email. Optional input_charset is as described below; it is always coerced to lower case. After being alias normalized it is also used as a lookup into the registry of character sets to find out the header encoding, body encoding, and output conversion codec to be used for the character set. For example, if input_charset is iso-8859-1, then headers and bodies will be encoded using quoted-printable and no output conversion codec is necessary. If input_charset is euc-jp, then headers will be encoded with base64, bodies will not be encoded, but output text will be converted from the euc-jp character set to the iso-2022-jp character set. Charset instances have the following data attributes:
input_charset
The initial character set specified. Common aliases are converted to their official email names (e.g. latin_1 is converted to iso-8859-1). Defaults to 7-bit us-ascii.
header_encoding
If the character set must be encoded before it can be used in an email header, this attribute will be set to Charset.QP (for quoted-printable), Charset.BASE64 (for base64 encoding), or Charset.SHORTEST for the shortest of QP or BASE64 encoding. Otherwise, it will be None.
body_encoding
Same as header_encoding, but describes the encoding for the mail message’s body, which indeed may be different than the header encoding. Charset.SHORTEST is not allowed for body_encoding.
output_charset
Some character sets must be converted before they can be used in email headers or bodies. If the input_charset is one of them, this attribute will contain the name of the character set output will be converted to. Otherwise, it will be None.
input_codec
The name of the Python codec used to convert the input_charset to Unicode. If no conversion codec is necessary, this attribute will be None.
output_codec
The name of the Python codec used to convert Unicode to the output_charset. If no conversion codec is necessary, this attribute will have the same value as the input_codec.
Charset instances also have the following methods:
get_body_encoding()
Return the content transfer encoding used for body encoding. This is either the string quoted-printable or base64 depending on the encoding used, or it is a function, in which case you should call the function with a single argument, the Message object being encoded. The function should then set the Content-Transfer-Encoding header itself to whatever is appropriate. Returns the string quoted-printable if body_encoding is QP, returns the string base64 if body_encoding is BASE64, and returns the string 7bit otherwise.
get_output_charset()
Return the output character set. This is the output_charset attribute if that is not None, otherwise it is input_charset.
header_encode(string)
Header-encode the string string. The type of encoding (base64 or quoted-printable) will be based on the header_encoding attribute.
header_encode_lines(string, maxlengths)
Header-encode a string by converting it first to bytes. This is similar to header_encode() except that the string is fit into maximum line lengths as given by the argument maxlengths, which must be an iterator: each element returned from this iterator will provide the next maximum line length.
body_encode(string)
Body-encode the string string. The type of encoding (base64 or quoted-printable) will be based on the body_encoding attribute.
The Charset class also provides a number of methods to support standard operations and built-in functions.
__str__()
Returns input_charset as a string coerced to lower case. __repr__() is an alias for __str__().
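The alias normalization and encoding lookups described above can be seen directly; a small sketch:

```python
from email.charset import Charset

c = Charset('latin_1')            # common alias, normalized on input
print(str(c))                     # iso-8859-1
print(c.get_body_encoding())      # quoted-printable
print(c.get_output_charset())     # iso-8859-1 (no output conversion needed)
```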
__eq__(other)
This method allows you to compare two Charset instances for equality.
__ne__(other)
This method allows you to compare two Charset instances for inequality. | |
doc_1480 | Tix maintains a list of directories under which the tix_getimage() and tix_getbitmap() methods will search for image files. The standard bitmap directory is $TIX_LIBRARY/bitmaps. The tix_addbitmapdir() method adds directory into this list. By using this method, the image files of an applications can also be located using the tix_getimage() or tix_getbitmap() method. | |
doc_1481 | Send a pickled byte-string to a socket. The format of the sent byte-string is as described in the documentation for SocketHandler.makePickle(). | |
doc_1482 |
Return handles and labels for legend. ax.legend() is equivalent to: h, l = ax.get_legend_handles_labels()
ax.legend(h, l)
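A minimal sketch of this equivalence (using the non-interactive Agg backend so no display is needed):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label="line")
h, l = ax.get_legend_handles_labels()
ax.legend(h, l)  # produces the same legend as ax.legend()
print(l)  # ['line']
```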
Examples using matplotlib.axes.Axes.get_legend_handles_labels
Legend guide | |
doc_1483 | See Migration guide for more details. tf.compat.v1.experimental.function_executor_type
@tf_contextlib.contextmanager
tf.experimental.function_executor_type(
executor_type
)
Eager defined functions are functions decorated by tf.contrib.eager.defun.
Args
executor_type a string for the name of the executor to be used to execute functions defined by tf.contrib.eager.defun.
Yields Context manager for setting the executor of eager defined functions. | |
doc_1484 |
Fit the hierarchical clustering from features or distance matrix, and return cluster labels. Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples, n_samples)
Training instances to cluster, or distances between instances if affinity='precomputed'.
yIgnored
Not used, present here for API consistency by convention. Returns
labelsndarray of shape (n_samples,)
Cluster labels. | |
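This entry reads like sklearn.cluster.AgglomerativeClustering.fit_predict; a minimal sketch under that assumption, with made-up data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Two well-separated groups of points.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels.shape)  # (4,)
```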
doc_1485 | Convert sound fragments in u-LAW encoding to linearly encoded sound fragments. u-LAW encoding always uses 8 bits samples, so width refers only to the sample width of the output fragment here. | |
doc_1486 |
Get Bbox of the path. Parameters
transformmatplotlib.transforms.Transform, optional
Transform to apply to path before computing extents, if any. **kwargs
Forwarded to iter_bezier. Returns
matplotlib.transforms.Bbox
The extents of the path Bbox([[xmin, ymin], [xmax, ymax]]) | |
doc_1487 |
The week ordinal of the year. Deprecated since version 1.1.0. weekofyear and week have been deprecated. Please use DatetimeIndex.isocalendar().week instead. | |
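A short sketch of the recommended replacement for the deprecated accessor:

```python
import pandas as pd

idx = pd.DatetimeIndex(["2020-01-01", "2020-12-31"])
weeks = idx.isocalendar().week  # replaces the deprecated idx.weekofyear
print(list(weeks))  # [1, 53]
```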
doc_1488 | Return immediately, allowing sounds to play asynchronously. | |
doc_1489 | Returns the given text with ampersands, quotes and angle brackets encoded for use in HTML. The input is first coerced to a string and the output has mark_safe() applied. | |
doc_1490 |
Generate synthetic binary image with several rounded blob-like objects. Parameters
lengthint, optional
Linear size of output image.
blob_size_fractionfloat, optional
Typical linear size of blob, as a fraction of length, should be smaller than 1.
n_dimint, optional
Number of dimensions of output image.
volume_fractionfloat, default 0.5
Fraction of image pixels covered by the blobs (where the output is 1). Should be in [0, 1].
seedint, optional
Seed to initialize the random number generator. If None, a random seed from the operating system is used. Returns
blobsndarray of bools
Output binary image Examples >>> from skimage import data
>>> data.binary_blobs(length=5, blob_size_fraction=0.2, seed=1)
array([[ True, False, True, True, True],
[ True, True, True, False, True],
[False, True, False, True, True],
[ True, False, False, True, True],
[ True, False, False, False, True]])
>>> blobs = data.binary_blobs(length=256, blob_size_fraction=0.1)
>>> # Finer structures
>>> blobs = data.binary_blobs(length=256, blob_size_fraction=0.05)
>>> # Blobs cover a smaller volume fraction of the image
>>> blobs = data.binary_blobs(length=256, volume_fraction=0.3) | |
doc_1491 | Takes the power of each element in input with exponent and returns a tensor with the result. exponent can be either a single float number or a Tensor with the same number of elements as input. When exponent is a scalar value, the operation applied is: outi=xiexponent\text{out}_i = x_i ^ \text{exponent}
When exponent is a tensor, the operation applied is: outi=xiexponenti\text{out}_i = x_i ^ {\text{exponent}_i}
When exponent is a tensor, the shapes of input and exponent must be broadcastable. Parameters
input (Tensor) – the input tensor.
exponent (float or tensor) – the exponent value Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.4331, 1.2475, 0.6834, -0.2791])
>>> torch.pow(a, 2)
tensor([ 0.1875, 1.5561, 0.4670, 0.0779])
>>> exp = torch.arange(1., 5.)
>>> a = torch.arange(1., 5.)
>>> a
tensor([ 1., 2., 3., 4.])
>>> exp
tensor([ 1., 2., 3., 4.])
>>> torch.pow(a, exp)
tensor([ 1., 4., 27., 256.])
torch.pow(self, exponent, *, out=None) → Tensor
self is a scalar float value, and exponent is a tensor. The returned tensor out is of the same shape as exponent. The operation applied is: \text{out}_i = \text{self}^{\text{exponent}_i}
Parameters
self (float) – the scalar base value for the power operation
exponent (Tensor) – the exponent tensor Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> exp = torch.arange(1., 5.)
>>> base = 2
>>> torch.pow(base, exp)
tensor([ 2., 4., 8., 16.]) | |
doc_1492 | Default return value for get_permission_denied_message(). Defaults to an empty string. | |
doc_1493 | Registry entries subordinate to this key define the default user configuration for new users on the local computer and the user configuration for the current user. | |
doc_1494 | tf.compat.v1.distributions.kl_divergence(
distribution_a, distribution_b, allow_nan_stats=True, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2019-01-01. Instructions for updating: The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use tfp.distributions instead of tf.distributions. If there is no KL method registered specifically for type(distribution_a) and type(distribution_b), then the class hierarchies of these types are searched. If one KL method is registered between any pairs of classes in these two parent hierarchies, it is used. If more than one such registered method exists, the method whose registered classes have the shortest sum MRO paths to the input types is used. If more than one such shortest path exists, the first method identified in the search is used (favoring a shorter MRO distance to type(distribution_a)).
Args
distribution_a The first distribution.
distribution_b The second distribution.
allow_nan_stats Python bool, default True. When True, statistics (e.g., mean, mode, variance) use the value "NaN" to indicate the result is undefined. When False, an exception is raised if one or more of the statistic's batch members are undefined.
name Python str name prefixed to Ops created by this class.
Returns A Tensor with the batchwise KL-divergence between distribution_a and distribution_b.
Raises
NotImplementedError If no KL method is defined for distribution types of distribution_a and distribution_b. | |
doc_1495 |
[Deprecated] Get the subplot geometry, e.g., (2, 2, 3). Notes Deprecated since version 3.4. | |
doc_1496 |
Predict the closest cluster each sample in X belongs to. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data to predict. Returns
labelsndarray of shape (n_samples,)
Index of the cluster each sample belongs to. | |
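This entry reads like sklearn.cluster.KMeans.predict; a minimal sketch under that assumption, with made-up data:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# New points are assigned to the nearest learned centroid.
labels = km.predict(np.array([[0.0, 0.5], [10.0, 10.5]]))
print(labels.shape)  # (2,)
```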
doc_1497 | Called when the test case test is about to be run. | |
doc_1498 | Extract all members from the archive to the current working directory. path specifies a different directory to extract to. members is optional and must be a subset of the list returned by namelist(). pwd is the password used for encrypted files. Warning Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of path, e.g. members that have absolute filenames starting with "/" or filenames with two dots "..". This module attempts to prevent that. See extract() note. Changed in version 3.6: Calling extractall() on a closed ZipFile will raise a ValueError. Previously, a RuntimeError was raised. Changed in version 3.6.2: The path parameter accepts a path-like object. | |
doc_1499 | Device inode resides on. |