doc_3800
Bases: matplotlib.patches.ArrowStyle._Curve
An arrow with an outward square bracket at its end and a head at the start. Parameters
widthB : float, default: 1.0
Width of the bracket.
lengthB : float, default: 0.2
Length of the bracket.
angleB : float, default: 0 degrees
Orientation of the bracket, as a counterclockwise angle. 0 degrees means perpendicular to the line. arrow='<-['
doc_3801
Lasso model fit with Least Angle Regression, a.k.a. Lars. It is a Linear Model trained with an L1 prior as regularizer. The optimization objective for Lasso is:
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the User Guide. Parameters
alpha : float, default=1.0
Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to ordinary least squares, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.
fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
verbose : bool or int, default=False
Sets the verbosity amount.
normalize : bool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precompute : bool, ‘auto’ or array-like, default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
max_iter : int, default=500
Maximum number of iterations to perform.
eps : float, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_X : bool, default=True
If True, X will be copied; else, it may be overwritten.
fit_path : bool, default=True
If True the full path is stored in the coef_path_ attribute. If you compute the solution for a large problem or many targets, setting fit_path to False will lead to a speedup, especially with a small alpha.
positive : bool, default=False
Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.
jitter : float, default=None
Upper bound on a uniform noise parameter to be added to the y values, to satisfy the model’s assumption of one-at-a-time computations. Might help with stability. New in version 0.23.
random_state : int, RandomState instance or None, default=None
Determines random number generation for jittering. Pass an int for reproducible output across multiple function calls. See Glossary. Ignored if jitter is None. New in version 0.23. Attributes
alphas_ : array-like of shape (n_alphas + 1,) or list of such arrays
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller. If this is a list of array-like, the length of the outer list is n_targets.
active_ : list of length n_alphas or list of such lists
Indices of active variables at the end of the path. If this is a list of lists, the length of the outer list is n_targets.
coef_path_ : array-like of shape (n_features, n_alphas + 1) or list of such arrays
If a list is passed it’s expected to be one of n_targets such arrays. The varying values of the coefficients along the path. It is not present if the fit_path parameter is False. If this is a list of array-like, the length of the outer list is n_targets.
coef_ : array-like of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formulation formula).
intercept_ : float or array-like of shape (n_targets,)
Independent term in decision function.
n_iter_ : array-like or int
The number of iterations taken by lars_path to find the grid of alphas for each target. See also
lars_path
lasso_path
Lasso
LassoCV
LassoLarsCV
LassoLarsIC
sklearn.decomposition.sparse_encode
Examples
>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=0.01)
>>> reg.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
LassoLars(alpha=0.01)
>>> print(reg.coef_)
[ 0. -0.963257...]
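As a hedged follow-up to the example above (an addition to the original), prediction and scoring use the standard estimator API:
>>> y_pred = reg.predict([[-1, 1], [0, 0], [1, 1]])  # array of shape (3,)
>>> r2 = reg.score([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])  # R^2 on the training data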
Methods
fit(X, y[, Xy]) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, Xy=None) [source]
Fit the model using X, y as training data. Parameters
X : array-like of shape (n_samples, n_features)
Training data.
y : array-like of shape (n_samples,) or (n_samples, n_targets)
Target values.
Xy : array-like of shape (n_samples,) or (n_samples, n_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. Returns
self : object
Returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
X : array-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
C : array, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
score : float
\(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
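As an illustrative aside (an addition to the original), the definition above is easy to check by hand with NumPy:
>>> import numpy as np
>>> y_true = np.array([3.0, 5.0, 7.0])
>>> y_pred = np.array([2.5, 5.0, 8.0])
>>> u = ((y_true - y_pred) ** 2).sum()  # residual sum of squares = 1.25
>>> v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares = 8.0
>>> 1 - u / v
0.84375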
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance.
doc_3802
Find artist objects. Recursively find all Artist instances contained in the artist. Parameters
match
A filter criterion for the matches. This can be
None: Return all objects contained in artist.
A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True.
A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check).
include_self : bool
Include self in the list to be checked for a match. Returns
list of Artist
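A short hedged sketch of a typical call (an addition; findobj is available on any Artist, including Figure and Axes):
>>> import matplotlib.pyplot as plt
>>> from matplotlib.lines import Line2D
>>> fig, ax = plt.subplots()
>>> _ = ax.plot([0, 1], [0, 1])  # adds a Line2D to the axes
>>> lines = fig.findobj(Line2D)  # every Line2D contained in the figure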
doc_3803
Set the name of the axis for the index or columns. Parameters
mapper : scalar, list-like, optional
Value to set the axis name attribute.
index, columns : scalar, list-like, dict-like or function, optional
A scalar, list-like, dict-like or function transformation to apply to that axis’ values. Note that the columns parameter is not allowed if the object is a Series. This parameter only applies for DataFrame type objects. Use either mapper and axis to specify the axis to target with mapper, or index and/or columns.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The axis to rename.
copy : bool, default True
Also copy underlying data.
inplace : bool, default False
Modifies the object directly, instead of creating a new Series or DataFrame. Returns
Series, DataFrame, or None
The same type as the caller or None if inplace=True. See also Series.rename
Alter Series index labels or name. DataFrame.rename
Alter DataFrame index labels or name. Index.rename
Set new names on index. Notes
DataFrame.rename_axis supports two calling conventions:
(index=index_mapper, columns=columns_mapper, ...)
(mapper, axis={'index', 'columns'}, ...)
The first calling convention will only modify the names of the index and/or the names of the Index object that is the columns. In this case, the parameter copy is ignored. The second calling convention will modify the names of the corresponding index if mapper is a list or a scalar. However, if mapper is dict-like or a function, it will use the deprecated behavior of modifying the axis labels. We highly recommend using keyword arguments to clarify your intent. Examples
Series
>>> s = pd.Series(["dog", "cat", "monkey"])
>>> s
0 dog
1 cat
2 monkey
dtype: object
>>> s.rename_axis("animal")
animal
0 dog
1 cat
2 monkey
dtype: object
DataFrame
>>> df = pd.DataFrame({"num_legs": [4, 4, 2],
... "num_arms": [0, 0, 2]},
... ["dog", "cat", "monkey"])
>>> df
num_legs num_arms
dog 4 0
cat 4 0
monkey 2 2
>>> df = df.rename_axis("animal")
>>> df
num_legs num_arms
animal
dog 4 0
cat 4 0
monkey 2 2
>>> df = df.rename_axis("limbs", axis="columns")
>>> df
limbs num_legs num_arms
animal
dog 4 0
cat 4 0
monkey 2 2
MultiIndex
>>> df.index = pd.MultiIndex.from_product([['mammal'],
... ['dog', 'cat', 'monkey']],
... names=['type', 'name'])
>>> df
limbs num_legs num_arms
type name
mammal dog 4 0
cat 4 0
monkey 2 2
>>> df.rename_axis(index={'type': 'class'})
limbs num_legs num_arms
class name
mammal dog 4 0
cat 4 0
monkey 2 2
>>> df.rename_axis(columns=str.upper)
LIMBS num_legs num_arms
type name
mammal dog 4 0
cat 4 0
monkey 2 2
doc_3804
Similar to Widget.attrs. A dictionary containing HTML attributes to be set on the rendered DateInput and TimeInput widgets, respectively. If these attributes aren’t set, Widget.attrs is used instead.
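A hedged sketch of where these attributes are used (an addition; this matches Django's SplitDateTimeWidget, whose date_attrs and time_attrs arguments this entry describes):
from django.forms.widgets import SplitDateTimeWidget

widget = SplitDateTimeWidget(
    date_attrs={"class": "date-picker"},  # set on the rendered DateInput
    time_attrs={"class": "time-picker"},  # set on the rendered TimeInput
)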
doc_3805
Transfers the callback stack to a fresh ExitStack instance and returns it. No callbacks are invoked by this operation - instead, they will now be invoked when the new stack is closed (either explicitly or implicitly at the end of a with statement). For example, a group of files can be opened as an “all or nothing” operation as follows:
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    # Hold onto the close method, but don't call it yet.
    close_files = stack.pop_all().close
    # If opening any file fails, all previously opened files will be
    # closed automatically. If all files are opened successfully,
    # they will remain open even after the with statement ends.
    # close_files() can then be invoked explicitly to close them all.
doc_3806
Call function producing a like-indexed DataFrame on each group and return a DataFrame having the same indexes as the original object filled with the transformed values. Parameters
f : function
Function to apply to each group. Can also accept a Numba JIT function with engine='numba' specified. If the 'numba' engine is chosen, the function must be a user defined function with values and index as the first and second arguments respectively in the function signature. Each group’s index will be passed to the user defined function and optionally available for use. Changed in version 1.1.0.
*args
Positional arguments to pass to func.
engine : str, default None
'cython' : Runs the function through C-extensions from cython.
'numba' : Runs the function through JIT compiled code from numba.
None : Defaults to 'cython' or the global setting compute.use_numba. New in version 1.1.0.
engine_kwargs : dict, default None
For 'cython' engine, there are no accepted engine_kwargs. For 'numba' engine, the engine can accept nopython, nogil and parallel dictionary keys. The values must either be True or False. The default engine_kwargs for the 'numba' engine is {'nopython': True, 'nogil': False, 'parallel': False} and will be applied to the function. New in version 1.1.0.
**kwargs
Keyword arguments to be passed into func. Returns
DataFrame
See also DataFrame.groupby.apply
Apply function func group-wise and combine the results together. DataFrame.groupby.aggregate
Aggregate using one or more operations over the specified axis. DataFrame.transform
Call func on self producing a DataFrame with the same axis shape as self. Notes
Each group is endowed the attribute ‘name’ in case you need to know which group you are working on. The current implementation imposes three requirements on f:
f must return a value that either has the same shape as the input subframe or can be broadcast to the shape of the input subframe. For example, if f returns a scalar it will be broadcast to have the same shape as the input subframe.
If this is a DataFrame, f must support application column-by-column in the subframe. If f also supports application to the entire subframe, then a fast path is used starting from the second chunk.
f must not mutate groups. Mutation is not supported and may produce unexpected results. See Mutating with User Defined Function (UDF) methods for more details.
When using engine='numba', there will be no “fall back” behavior internally. The group data and group index will be passed as numpy arrays to the JITed user defined function, and no alternative execution attempts will be tried. Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, see the examples below. Examples
>>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
... 'foo', 'bar'],
... 'B' : ['one', 'one', 'two', 'three',
... 'two', 'two'],
... 'C' : [1, 5, 5, 2, 5, 5],
... 'D' : [2.0, 5., 8., 1., 2., 9.]})
>>> grouped = df.groupby('A')
>>> grouped.transform(lambda x: (x - x.mean()) / x.std())
C D
0 -1.154701 -0.577350
1 0.577350 0.000000
2 0.577350 1.154701
3 -1.154701 -1.000000
4 0.577350 -0.577350
5 0.577350 1.000000
Broadcast result of the transformation
>>> grouped.transform(lambda x: x.max() - x.min())
C D
0 4 6.0
1 3 8.0
2 4 6.0
3 3 8.0
4 4 6.0
5 3 8.0
Changed in version 1.3.0: The resulting dtype will reflect the return value of the passed func, for example:
>>> grouped[['C', 'D']].transform(lambda x: x.astype(int).max())
C D
0 5 8
1 5 9
2 5 8
3 5 9
4 5 8
5 5 9
doc_3807
Return a Document from the given input. filename_or_file may be either a file name, or a file-like object. parser, if given, must be a SAX2 parser object. This function will change the document handler of the parser and activate namespace support; other parser configuration (like setting an entity resolver) must have been done in advance.
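A minimal hedged sketch (an addition; "example.xml" is a hypothetical file name):
from xml.dom.minidom import parse

document = parse("example.xml")  # parse by file name
with open("example.xml") as f:
    document = parse(f)  # or parse a file-like object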
doc_3808
tf.compat.v1.resource_loader.get_path_to_datafile(
    path
)
The path is relative to tensorflow/
Args
path a string resource path relative to tensorflow/
Returns The path to the specified file present in the data attribute of py_test or py_binary.
Raises
IOError If the path is not found, or the resource can't be opened.
doc_3809
Return whether image composition by Matplotlib should be skipped. Raster backends should usually return False (letting the C-level rasterizer take care of image composition); vector backends should usually return not rcParams["image.composite_image"].
doc_3810
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization:
\(y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\)
The mean and standard-deviation are calculated separately over the last certain number of dimensions which have to be of the shape specified by normalized_shape. \(\gamma\) and \(\beta\) are learnable affine transform parameters of normalized_shape if elementwise_affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). Note Unlike Batch Normalization and Instance Normalization, which applies scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine. This layer uses statistics computed from input data in both training and evaluation modes. Parameters
normalized_shape (int or list or torch.Size) –
input shape from an expected input of size \([* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]]\)
If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size.
eps – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine – a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True. Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input) Examples:
>>> input = torch.randn(20, 5, 10, 10)
>>> # With Learnable Parameters
>>> m = nn.LayerNorm(input.size()[1:])
>>> # Without Learnable Parameters
>>> m = nn.LayerNorm(input.size()[1:], elementwise_affine=False)
>>> # Normalize over last two dimensions
>>> m = nn.LayerNorm([10, 10])
>>> # Normalize over last dimension of size 10
>>> m = nn.LayerNorm(10)
>>> # Activating the module
>>> output = m(input)
doc_3811
tf.keras.applications.efficientnet.EfficientNetB4
Compat aliases for migration: See Migration guide for more details. tf.compat.v1.keras.applications.EfficientNetB4, tf.compat.v1.keras.applications.efficientnet.EfficientNetB4
tf.keras.applications.EfficientNetB4(
    include_top=True, weights='imagenet', input_tensor=None,
    input_shape=None, pooling=None, classes=1000,
    classifier_activation='softmax', **kwargs
)
Reference:
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019) Optionally loads weights pre-trained on ImageNet. Note that the data format convention used by the model is the one specified in your Keras config at ~/.keras/keras.json. If you have never configured it, it defaults to "channels_last".
Arguments
include_top Whether to include the fully-connected layer at the top of the network. Defaults to True.
weights One of None (random initialization), 'imagenet' (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to 'imagenet'.
input_tensor Optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape Optional shape tuple, only to be specified if include_top is False. It should have exactly 3 input channels.
pooling Optional pooling mode for feature extraction when include_top is False. Defaults to None.
None means that the output of the model will be the 4D tensor output of the last convolutional layer.
avg means that global average pooling will be applied to the output of the last convolutional layer, and thus the output of the model will be a 2D tensor.
max means that global max pooling will be applied.
classes Optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified. Defaults to 1000 (number of ImageNet classes).
classifier_activation A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. Defaults to 'softmax'.
Returns A keras.Model instance.
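A brief hedged instantiation sketch (an addition; the 380x380 input size is EfficientNet-B4's usual default resolution, stated here as an assumption):
import tensorflow as tf

# Full classifier with ImageNet weights.
model = tf.keras.applications.EfficientNetB4(weights='imagenet')

# Headless feature extractor with global average pooling.
base = tf.keras.applications.EfficientNetB4(
    include_top=False, pooling='avg', input_shape=(380, 380, 3))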
doc_3812
Represents an outgoing WSGI HTTP response with body, status, and headers. Has properties and methods for using the functionality defined by various HTTP specs. The response body is flexible to support different use cases. The simple form is passing bytes, or a string which will be encoded as UTF-8. Passing an iterable of bytes or strings makes this a streaming response. A generator is particularly useful for building a CSV file in memory or using SSE (Server Sent Events). A file-like object is also iterable, although the send_file() helper should be used in that case. The response object is itself a WSGI application callable. When called (__call__()) with environ and start_response, it will pass its status and headers to start_response then return its body as an iterable.
from werkzeug.wrappers.response import Response

def index():
    return Response("Hello, World!")

def application(environ, start_response):
    path = environ.get("PATH_INFO") or "/"
    if path == "/":
        response = index()
    else:
        response = Response("Not Found", status=404)
    return response(environ, start_response)
Parameters
response (Union[Iterable[str], Iterable[bytes]]) – The data for the body of the response. A string or bytes, or tuple or list of strings or bytes, for a fixed-length response, or any other iterable of strings or bytes for a streaming response. Defaults to an empty body.
status (Optional[Union[int, str, http.HTTPStatus]]) – The status code for the response. Either an int, in which case the default status message is added, or a string in the form {code} {message}, like 404 Not Found. Defaults to 200.
headers (werkzeug.datastructures.Headers) – A Headers object, or a list of (key, value) tuples that will be converted to a Headers object.
mimetype (Optional[str]) – The mime type (content type without charset or other parameters) of the response. If the value starts with text/ (or matches some other special cases), the charset will be added to create the content_type.
content_type (Optional[str]) – The full content type of the response. Overrides building the value from mimetype.
direct_passthrough (bool) – Pass the response body directly through as the WSGI iterable. This can be used when the body is a binary file or other iterator of bytes, to skip some unnecessary checks. Use send_file() instead of setting this manually. Return type
None Changed in version 2.0: Combine BaseResponse and mixins into a single Response class. Using the old classes is deprecated and will be removed in Werkzeug 2.1. Changelog Changed in version 0.5: The direct_passthrough parameter was added.
__call__(environ, start_response)
Process this response as WSGI application. Parameters
environ (WSGIEnvironment) – the WSGI environment.
start_response (StartResponse) – the response callable provided by the WSGI server. Returns
an application iterator Return type
Iterable[bytes]
_ensure_sequence(mutable=False)
This method can be called by methods that need a sequence. If mutable is true, it will also ensure that the response sequence is a standard Python list. Changelog New in version 0.6. Parameters
mutable (bool) – Return type
None
accept_ranges
The Accept-Ranges header. Even though the name would indicate that multiple values are supported, it must be one string token only. The values 'bytes' and 'none' are common. Changelog New in version 0.7.
property access_control_allow_credentials: bool
Whether credentials can be shared by the browser to JavaScript code. As part of the preflight request it indicates whether credentials can be used on the cross origin request.
access_control_allow_headers
Which headers can be sent with the cross origin request.
access_control_allow_methods
Which methods can be used for the cross origin request.
access_control_allow_origin
The origin or ‘*’ for any origin that may make cross origin requests.
access_control_expose_headers
Which headers can be shared by the browser to JavaScript code.
access_control_max_age
The maximum age in seconds the access control settings can be cached for.
add_etag(overwrite=False, weak=False)
Add an etag for the current response if there is none yet. Changed in version 2.0: SHA-1 is used to generate the value. MD5 may not be available in some environments. Parameters
overwrite (bool) –
weak (bool) – Return type
None
age
The Age response-header field conveys the sender’s estimate of the amount of time since the response (or its revalidation) was generated at the origin server. Age values are non-negative decimal integers, representing time in seconds.
property allow: werkzeug.datastructures.HeaderSet
The Allow entity-header field lists the set of methods supported by the resource identified by the Request-URI. The purpose of this field is strictly to inform the recipient of valid methods associated with the resource. An Allow header field MUST be present in a 405 (Method Not Allowed) response.
autocorrect_location_header = True
Should this response object correct the location header to be RFC conformant? This is true by default. Changelog New in version 0.8.
automatically_set_content_length = True
Should this response object automatically set the content-length header if possible? This is true by default. Changelog New in version 0.8.
property cache_control: werkzeug.datastructures.ResponseCacheControl
The Cache-Control general-header field is used to specify directives that MUST be obeyed by all caching mechanisms along the request/response chain.
calculate_content_length()
Returns the content length if available or None otherwise. Return type
Optional[int]
call_on_close(func)
Adds a function to the internal list of functions that should be called as part of closing down the response. Since 0.7 this function also returns the function that was passed so that this can be used as a decorator. Changelog New in version 0.6. Parameters
func (Callable[[], Any]) – Return type
Callable[[], Any]
close()
Close the wrapped response if possible. You can also use the object in a with statement which will automatically close it. Changelog New in version 0.9: Can now be used in a with statement. Return type
None
content_encoding
The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field.
property content_language: werkzeug.datastructures.HeaderSet
The Content-Language entity-header field describes the natural language(s) of the intended audience for the enclosed entity. Note that this might not be equivalent to all the languages used within the entity-body.
content_length
The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.
content_location
The Content-Location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource’s URI.
content_md5
The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.)
property content_range: werkzeug.datastructures.ContentRange
The Content-Range header as a ContentRange object. Available even if the header is not set. Changelog New in version 0.7.
content_security_policy
The Content-Security-Policy header adds an additional layer of security to help detect and mitigate certain types of attacks.
content_security_policy_report_only
The Content-Security-Policy-Report-Only header adds a csp policy that is not enforced but is reported thereby helping detect certain types of attacks.
content_type
The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.
cross_origin_embedder_policy
Prevents a document from loading any cross-origin resources that do not explicitly grant the document permission. Values must be a member of the werkzeug.http.COEP enum.
cross_origin_opener_policy
Allows control over sharing of browsing context group with cross-origin documents. Values must be a member of the werkzeug.http.COOP enum.
property data: Union[bytes, str]
A descriptor that calls get_data() and set_data().
date
The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822. Changed in version 2.0: The datetime object is timezone-aware.
delete_cookie(key, path='/', domain=None, secure=False, httponly=False, samesite=None)
Delete a cookie. Fails silently if key doesn’t exist. Parameters
key (str) – the key (name) of the cookie to be deleted.
path (str) – if the cookie that should be deleted was limited to a path, the path has to be defined here.
domain (Optional[str]) – if the cookie that should be deleted was limited to a domain, that domain has to be defined here.
secure (bool) – If True, the cookie will only be available via HTTPS.
httponly (bool) – Disallow JavaScript access to the cookie.
samesite (Optional[str]) – Limit the scope of the cookie to only be attached to requests that are “same-site”. Return type
None
direct_passthrough
Pass the response body directly through as the WSGI iterable. This can be used when the body is a binary file or other iterator of bytes, to skip some unnecessary checks. Use send_file() instead of setting this manually.
expires
The Expires entity-header field gives the date/time after which the response is considered stale. A stale cache entry may not normally be returned by a cache. Changed in version 2.0: The datetime object is timezone-aware.
classmethod force_type(response, environ=None)
Enforce that the WSGI response is a response object of the current type. Werkzeug will use the Response internally in many situations like the exceptions. If you call get_response() on an exception you will get back a regular Response object, even if you are using a custom subclass. This method can enforce a given response type, and it will also convert arbitrary WSGI callables into response objects if an environ is provided:
# convert a Werkzeug response object into an instance of the
# MyResponseClass subclass.
response = MyResponseClass.force_type(response)

# convert any WSGI application into a response object
response = MyResponseClass.force_type(response, environ)
This is especially useful if you want to post-process responses in the main dispatcher and use functionality provided by your subclass. Keep in mind that this will modify response objects in place if possible! Parameters
response (Response) – a response object or wsgi application.
environ (Optional[WSGIEnvironment]) – a WSGI environment object. Returns
a response object. Return type
Response
freeze(no_etag=None)
Make the response object ready to be pickled. Does the following:
Buffer the response into a list, ignoring implicit_sequence_conversion and direct_passthrough.
Set the Content-Length header.
Generate an ETag header if one is not already set.
Changed in version 2.0: An ETag header is added, the no_etag parameter is deprecated and will be removed in Werkzeug 2.1. Changelog Changed in version 0.6: The Content-Length header is set. Parameters
no_etag (None) – Return type
None
classmethod from_app(app, environ, buffered=False)
Create a new response object from an application output. This works best if you pass it an application that returns a generator all the time. Sometimes applications may use the write() callable returned by the start_response function. This tries to resolve such edge cases automatically. But if you don’t get the expected output you should set buffered to True which enforces buffering. Parameters
app (WSGIApplication) – the WSGI application to execute.
environ (WSGIEnvironment) – the WSGI environment to execute against.
buffered (bool) – set to True to enforce buffering. Returns
a response object. Return type
Response
get_app_iter(environ)
Returns the application iterator for the given environ. Depending on the request method and the current status code the return value might be an empty response rather than the one from the response. If the request method is HEAD or the status code is in a range where the HTTP specification requires an empty response, an empty iterable is returned. Changelog New in version 0.6. Parameters
environ (WSGIEnvironment) – the WSGI environment of the request. Returns
a response iterable. Return type
Iterable[bytes]
get_data(as_text=False)
The string representation of the response body. Whenever you call this property the response iterable is encoded and flattened. This can lead to unwanted behavior if you stream big data. This behavior can be disabled by setting implicit_sequence_conversion to False. If as_text is set to True the return value will be a decoded string. Changelog New in version 0.9. Parameters
as_text (bool) – Return type
Union[bytes, str]
get_etag()
Return a tuple in the form (etag, is_weak). If there is no ETag the return value is (None, None). Return type
Union[Tuple[str, bool], Tuple[None, None]]
get_json(force=False, silent=False)
Parse data as JSON. Useful during testing. If the mimetype does not indicate JSON (application/json, see is_json()), this returns None. Unlike Request.get_json(), the result is not cached. Parameters
force (bool) – Ignore the mimetype and always try to parse JSON.
silent (bool) – Silence parsing errors and return None instead. Return type
Optional[Any]
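A small hedged sketch of get_json() (an addition):
from werkzeug.wrappers.response import Response

resp = Response('{"id": 1}', mimetype="application/json")
data = resp.get_json()  # {'id': 1}; would be None if the mimetype were not JSON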
get_wsgi_headers(environ)
This is automatically called right before the response is started and returns headers modified for the given environment. It returns a copy of the headers from the response with some modifications applied if necessary. For example the location header (if present) is joined with the root URL of the environment. Also the content length is automatically set to zero here for certain status codes. Changelog Changed in version 0.6: Previously that function was called fix_headers and modified the response object in place. Also since 0.6, IRIs in location and content-location headers are handled properly. Also starting with 0.6, Werkzeug will attempt to set the content length if it is able to figure it out on its own. This is the case if all the strings in the response iterable are already encoded and the iterable is buffered. Parameters
environ (WSGIEnvironment) – the WSGI environment of the request. Returns
returns a new Headers object. Return type
werkzeug.datastructures.Headers
get_wsgi_response(environ)
Returns the final WSGI response as tuple. The first item in the tuple is the application iterator, the second the status and the third the list of headers. The response returned is created specially for the given environment. For example if the request method in the WSGI environment is 'HEAD' the response will be empty and only the headers and status code will be present. Changelog New in version 0.6. Parameters
environ (WSGIEnvironment) – the WSGI environment of the request. Returns
an (app_iter, status, headers) tuple. Return type
Tuple[Iterable[bytes], str, List[Tuple[str, str]]]
implicit_sequence_conversion = True
if set to False accessing properties on the response object will not try to consume the response iterator and convert it into a list. Changelog New in version 0.6.2: That attribute was previously called implicit_seqence_conversion. (Notice the typo). If you did use this feature, you have to adapt your code to the name change.
property is_json: bool
Check if the mimetype indicates JSON data, either application/json or application/*+json.
property is_sequence: bool
If the iterator is buffered, this property will be True. A response object will consider an iterator to be buffered if the response attribute is a list or tuple. Changelog New in version 0.6.
property is_streamed: bool
If the response is streamed (the response is not an iterable with a length information) this property is True. In this case streamed means that there is no information about the number of iterations. This is usually True if a generator is passed to the response object. This is useful for checking before applying some sort of post filtering that should not take place for streamed responses.
iter_encoded()
Iter the response encoded with the encoding of the response. If the response object is invoked as WSGI application the return value of this method is used as application iterator unless direct_passthrough was activated. Return type
Iterator[bytes]
property json: Optional[Any]
The parsed JSON data if mimetype indicates JSON (application/json, see is_json()). Calls get_json() with default arguments.
json_module
A module or other object that has dumps and loads functions that match the API of the built-in json module. Defaults to the built-in json module.
last_modified
The Last-Modified entity-header field indicates the date and time at which the origin server believes the variant was last modified. Changed in version 2.0: The datetime object is timezone-aware.
location
The Location response-header field is used to redirect the recipient to a location other than the Request-URI for completion of the request or identification of a new resource.
make_conditional(request_or_environ, accept_ranges=False, complete_length=None)
Make the response conditional to the request. This method works best if an etag was defined for the response already. The add_etag method can be used to do that. If called without etag just the date header is set. This does nothing if the request method in the request or environ is anything but GET or HEAD. For optimal performance when handling range requests, it’s recommended that your response data object implements seekable, seek and tell methods as described by io.IOBase. Objects returned by wrap_file() automatically implement those methods. It does not remove the body of the response because that’s something the __call__() function does for us automatically. Returns self so that you can do return resp.make_conditional(req) but modifies the object in-place. Parameters
request_or_environ (WSGIEnvironment) – a request object or WSGI environment to be used to make the response conditional against.
accept_ranges (Union[bool, str]) – This parameter dictates the value of Accept-Ranges header. If False (default), the header is not set. If True, it will be set to "bytes". If None, it will be set to "none". If it’s a string, it will use this value.
complete_length (Optional[int]) – Will be used only in valid Range Requests. It will set Content-Range complete length value and compute Content-Length real value. This parameter is mandatory for successful Range Requests completion. Raises
RequestedRangeNotSatisfiable if Range header could not be parsed or satisfied. Return type
Response Changed in version 2.0: Range processing is skipped if length is 0 instead of raising a 416 Range Not Satisfiable error.
make_sequence()
Converts the response iterator in a list. By default this happens automatically if required. If implicit_sequence_conversion is disabled, this method is not automatically called and some properties might raise exceptions. This also encodes all the items. Changelog New in version 0.6. Return type
None
property mimetype: Optional[str]
The mimetype (content type without charset etc.)
property mimetype_params: Dict[str, str]
The mimetype parameters as dict. For example if the content type is text/html; charset=utf-8 the params would be {'charset': 'utf-8'}. Changelog New in version 0.5.
response: Union[Iterable[str], Iterable[bytes]]
The response body to send as the WSGI iterable. A list of strings or bytes represents a fixed-length response, any other iterable is a streaming response. Strings are encoded to bytes as UTF-8. Do not set to a plain string or bytes, that will cause sending the response to be very inefficient as it will iterate one byte at a time.
property retry_after: Optional[datetime.datetime]
The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client. Time in seconds until expiration or date. Changed in version 2.0: The datetime object is timezone-aware.
set_cookie(key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, samesite=None)
Sets a cookie. A warning is raised if the size of the cookie header exceeds max_cookie_size, but the header will still be set. Parameters
key (str) – the key (name) of the cookie to be set.
value (str) – the value of the cookie.
max_age (Optional[Union[datetime.timedelta, int]]) – should be a number of seconds, or None (default) if the cookie should last only as long as the client’s browser session.
expires (Optional[Union[str, datetime.datetime, int, float]]) – should be a datetime object or UNIX timestamp.
path (Optional[str]) – limits the cookie to a given path, per default it will span the whole domain.
domain (Optional[str]) – if you want to set a cross-domain cookie. For example, domain=".example.com" will set a cookie that is readable by the domain www.example.com, foo.example.com etc. Otherwise, a cookie will only be readable by the domain that set it.
secure (bool) – If True, the cookie will only be available via HTTPS.
httponly (bool) – Disallow JavaScript access to the cookie.
samesite (Optional[str]) – Limit the scope of the cookie to only be attached to requests that are “same-site”. Return type
None
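A brief hedged usage sketch of the cookie helpers (an addition; the cookie names and values are illustrative):
from werkzeug.wrappers.response import Response

resp = Response("ok")
resp.set_cookie("session_id", "abc123", max_age=3600,
                httponly=True, samesite="Lax")
resp.delete_cookie("old_cookie", path="/")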
set_data(value)
Sets a new string as response. The value must be a string or bytes. If a string is set it’s encoded to the charset of the response (utf-8 by default). Changelog New in version 0.9. Parameters
value (Union[bytes, str]) – Return type
None
set_etag(etag, weak=False)
Set the etag, and override the old one if there was one. Parameters
etag (str) –
weak (bool) – Return type
None
property status: str
The HTTP status code as a string.
property status_code: int
The HTTP status code as a number.
property stream: werkzeug.wrappers.response.ResponseStream
The response iterable as write-only stream.
property vary: werkzeug.datastructures.HeaderSet
The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation.
property www_authenticate: werkzeug.datastructures.WWWAuthenticate
The WWW-Authenticate header in a parsed form.
doc_3813
tf.keras.layers.GRU(
    units, activation='tanh', recurrent_activation='sigmoid',
    use_bias=True, kernel_initializer='glorot_uniform',
    recurrent_initializer='orthogonal',
    bias_initializer='zeros', kernel_regularizer=None,
    recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None,
    kernel_constraint=None, recurrent_constraint=None, bias_constraint=None,
    dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False,
    go_backwards=False, stateful=False, unroll=False, time_major=False,
    reset_after=True, **kwargs
)
See the Keras RNN API guide for details about the usage of RNN API. Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the CuDNN kernel (see below for details), the layer will use a fast cuDNN implementation. The requirements to use the cuDNN implementation are:
activation == tanh
recurrent_activation == sigmoid
recurrent_dropout == 0
unroll is False
use_bias is True
reset_after is True
Inputs, if masking is used, are strictly right-padded. Eager execution is enabled in the outermost context. There are two variants of the GRU implementation. The default one is based on v3 and has the reset gate applied to the hidden state before matrix multiplication. The other one is based on the original and has the order reversed. The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU. Thus it has separate biases for kernel and recurrent_kernel. To use this variant, set reset_after=True and recurrent_activation='sigmoid'. For example:
inputs = tf.random.normal([32, 10, 8])
gru = tf.keras.layers.GRU(4)
output = gru(inputs)
print(output.shape)
(32, 4)
gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True)
whole_sequence_output, final_state = gru(inputs)
print(whole_sequence_output.shape)
(32, 10, 4)
print(final_state.shape)
(32, 4)
Arguments
units Positive integer, dimensionality of the output space.
activation Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
recurrent_activation Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias Boolean, (default True), whether the layer uses a bias vector.
kernel_initializer Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.
recurrent_initializer Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.
bias_initializer Initializer for the bias vector. Default: zeros.
kernel_regularizer Regularizer function applied to the kernel weights matrix. Default: None.
recurrent_regularizer Regularizer function applied to the recurrent_kernel weights matrix. Default: None.
bias_regularizer Regularizer function applied to the bias vector. Default: None.
activity_regularizer Regularizer function applied to the output of the layer (its "activation"). Default: None.
kernel_constraint Constraint function applied to the kernel weights matrix. Default: None.
recurrent_constraint Constraint function applied to the recurrent_kernel weights matrix. Default: None.
bias_constraint Constraint function applied to the bias vector. Default: None.
dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.
recurrent_dropout Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.
return_sequences Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.
return_state Boolean. Whether to return the last state in addition to the output. Default: False.
go_backwards Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
stateful Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
unroll Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
time_major The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape [timesteps, batch, feature], whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
reset_after GRU convention (whether to apply reset gate after or before matrix multiplication). False = "before", True = "after" (default and CuDNN compatible). Call arguments:
inputs: A 3D tensor, with shape [batch, timesteps, feature].
mask: Binary tensor of shape [samples, timesteps] indicating whether a given timestep should be masked (optional, defaults to None).
training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional, defaults to None).
initial_state: List of initial state tensors to be passed to the first call of the cell (optional, defaults to None which causes creation of zero-filled initial state tensors).
Attributes
activation
bias_constraint
bias_initializer
bias_regularizer
dropout
implementation
kernel_constraint
kernel_initializer
kernel_regularizer
recurrent_activation
recurrent_constraint
recurrent_dropout
recurrent_initializer
recurrent_regularizer
reset_after
states
units
use_bias
Methods get_dropout_mask_for_cell View source
get_dropout_mask_for_cell(
inputs, training, count=1
)
Get the dropout mask for the RNN cell's input. It will create a mask based on context if there isn't an existing cached mask. If a new mask is generated, it will update the cache in the cell.
Args
inputs The input tensor whose shape will be used to generate the dropout mask.
training Boolean tensor, whether it's in training mode; dropout will be ignored in non-training mode.
count Int, how many dropout masks will be generated. It is useful for cells that have internal weights fused together.
Returns List of mask tensors, generated or cached masks based on context.
get_recurrent_dropout_mask_for_cell View source
get_recurrent_dropout_mask_for_cell(
inputs, training, count=1
)
Get the recurrent dropout mask for the RNN cell. It will create a mask based on context if there isn't an existing cached mask. If a new mask is generated, it will update the cache in the cell.
Args
inputs The input tensor whose shape will be used to generate the dropout mask.
training Boolean tensor, whether it's in training mode; dropout will be ignored in non-training mode.
count Int, how many dropout masks will be generated. It is useful for cells that have internal weights fused together.
Returns List of mask tensors, generated or cached masks based on context.
reset_dropout_mask View source
reset_dropout_mask()
Reset the cached dropout masks if any. This is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across the timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. reset_recurrent_dropout_mask View source
reset_recurrent_dropout_mask()
Reset the cached recurrent dropout masks if any. This is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across the timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch. reset_states View source
reset_states(
states=None
)
Reset the recorded states for the stateful RNN layer. Can only be used when the RNN layer is constructed with stateful=True. Args: states: Numpy arrays that contain the value for the initial state, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.
Raises
AttributeError When the RNN layer is not stateful.
ValueError When the batch size of the RNN layer is unknown.
ValueError When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise.
doc_3814
Draw samples from a binomial distribution. Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success where n is an integer >= 0 and p is in the interval [0, 1]. (n may be input as a float, but it is truncated to an integer in use) Note New code should use the binomial method of a default_rng() instance instead; please see the Quick Start. Parameters
n : int or array_like of ints
Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers.
p : float or array_like of floats
Parameter of the distribution, >= 0 and <= 1.
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if n and p are both scalars. Otherwise, np.broadcast(n, p).size samples are drawn. Returns
out : ndarray or scalar
Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also scipy.stats.binom
probability density function, distribution or cumulative density function, etc. Generator.binomial
which should be used for new code. Notes
The probability density for the binomial distribution is
\[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\]
where \(n\) is the number of trials, \(p\) is the probability of success, and \(N\) is the number of successes. When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <= 5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. For example, a sample of 15 people shows 4 who are left handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. References
1. Dalgaard, Peter, “Introductory Statistics with R”, Springer-Verlag, 2002.
2. Glantz, Stanton A., “Primer of Biostatistics”, McGraw-Hill, Fifth Edition, 2002.
3. Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972.
4. Weisstein, Eric W., “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/BinomialDistribution.html
5. Wikipedia, “Binomial distribution”, https://en.wikipedia.org/wiki/Binomial_distribution
Examples
Draw samples from the distribution:
>>> n, p = 10, .5  # number of trials, probability of each trial
>>> s = np.random.binomial(n, p, 1000)
# result of flipping a coin 10 times, tested 1000 times.
A real world example. A company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of 0.1. All nine wells fail. What is the probability of that happening? Let’s do 20,000 trials of the model, and count the number that generate zero positive results.
>>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000.
# answer = 0.38885, or 38%.
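As a sanity check (an addition), the exact answer is available in closed form:
>>> 0.9 ** 9  # probability all nine wells fail; ≈ 0.3874, close to the simulated 0.38885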
doc_3815
Returns an iterator over module buffers. Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor – module buffer Example:
>>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
doc_3816
Alias for set_linewidth.
doc_3817
AxesGrid: alias of mpl_toolkits.axisartist.axes_grid.ImageGrid
CbarAxes(*args, orientation, **kwargs) [Deprecated]
Grid(fig, rect, nrows_ncols[, ngrids, ...])
ImageGrid(fig, rect, nrows_ncols[, ngrids, ...])
doc_3818
Only available on Windows.
doc_3819
Returns the current setting for the given locale category as a sequence containing language code, encoding. category may be one of the LC_* values except LC_ALL. It defaults to LC_CTYPE. Except for the code 'C', the language code corresponds to RFC 1766. language code and encoding may be None if their values cannot be determined.
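A hedged usage sketch (an addition; the returned values depend on the environment):
>>> import locale
>>> _ = locale.setlocale(locale.LC_CTYPE, "")  # initialize from the environment
>>> locale.getlocale(locale.LC_CTYPE)  # e.g. ('en_US', 'UTF-8'), or (None, None)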
doc_3820
Iterate over DataFrame rows as namedtuples. Parameters
index : bool, default True
If True, return the index as the first element of the tuple.
name : str or None, default “Pandas”
The name of the returned namedtuples or None to return regular tuples. Returns
iterator
An object to iterate over namedtuples for each row in the DataFrame with the first field possibly being the index and following fields being the column values. See also DataFrame.iterrows
Iterate over DataFrame rows as (index, Series) pairs. DataFrame.items
Iterate over (column name, Series) pairs. Notes The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. On python versions < 3.7 regular tuples are returned for DataFrames with a large number of columns (>254). Examples
>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
... index=['dog', 'hawk'])
>>> df
num_legs num_wings
dog 4 0
hawk 2 2
>>> for row in df.itertuples():
... print(row)
...
Pandas(Index='dog', num_legs=4, num_wings=0)
Pandas(Index='hawk', num_legs=2, num_wings=2)
By setting the index parameter to False we can remove the index as the first element of the tuple:
>>> for row in df.itertuples(index=False):
... print(row)
...
Pandas(num_legs=4, num_wings=0)
Pandas(num_legs=2, num_wings=2)
With the name parameter set we set a custom name for the yielded namedtuples:
>>> for row in df.itertuples(name='Animal'):
... print(row)
...
Animal(Index='dog', num_legs=4, num_wings=0)
Animal(Index='hawk', num_legs=2, num_wings=2) | |
doc_3821 |
Series.isnull is an alias for Series.isna. Detect missing values. Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). Returns
Series
Mask of bool values for each element in Series that indicates whether an element is an NA value. See also Series.isnull
Alias of isna. Series.notna
Boolean inverse of isna. Series.dropna
Omit axes labels with missing values. isna
Top-level isna. Examples Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.isna()
0 False
1 False
2 True
dtype: bool | |
doc_3822 |
Compute D^2, the percentage of deviance explained. D^2 is a generalization of the coefficient of determination R^2. R^2 uses squared error and D^2 uses deviance. Note that those two are equal for family='normal'. D^2 is defined as \(D^2 = 1-\frac{D(y_{true},y_{pred})}{D_{null}}\), where \(D_{null}\) is the null deviance, i.e. the deviance of a model with intercept alone, which corresponds to \(y_{pred} = \bar{y}\). The mean \(\bar{y}\) is averaged by sample_weight. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,)
True values of target.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
D^2 of self.predict(X) w.r.t. y. | |
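A minimal usage sketch of the score method above (assuming scikit-learn's TweedieRegressor, one estimator that exposes it; the data is made up for illustration):
>>> import numpy as np
>>> from sklearn.linear_model import TweedieRegressor
>>> X = np.arange(1, 9).reshape(-1, 1)
>>> y = np.array([2., 3., 3., 5., 6., 8., 9., 11.])
>>> reg = TweedieRegressor(power=0).fit(X, y)
>>> d2 = reg.score(X, y)  # 1.0 is a perfect fit; the score can be negative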
doc_3823 | Cancel the callback. If the callback has already been canceled or executed, this method has no effect. | |
doc_3824 | Add a codec that maps characters in the given character set to and from Unicode. charset is the canonical name of a character set. codecname is the name of a Python codec, as appropriate for the second argument to the str’s encode() method. | |
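A minimal sketch of the entry above (assuming it describes email.charset.add_codec; the charset-to-codec mapping shown is purely illustrative):
>>> from email.charset import add_codec
>>> add_codec('x-mac-roman', 'mac_roman')  # map the charset name to Python's mac_roman codec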
doc_3825 | Display the exception that just occurred. We remove the first stack item because it is within the interpreter object implementation. The output is written by the write() method. Changed in version 3.5: The full chained traceback is displayed instead of just the primary traceback. | |
doc_3826 |
Print a concise summary of a Series. This method prints information about a Series including the index dtype, non-null values and memory usage. New in version 1.4.0. Parameters
data:Series
Series to print information about.
verbose:bool, optional
Whether to print the full summary. By default, the setting in pandas.options.display.max_info_columns is followed.
buf:writable buffer, defaults to sys.stdout
Where to send the output. By default, the output is printed to sys.stdout. Pass a writable buffer if you need to further process the output.
memory_usage:bool, str, optional
Specifies whether total memory usage of the Series elements (including the index) should be displayed. By default, this follows the pandas.options.display.memory_usage setting. True always shows memory usage. False never shows memory usage. A value of ‘deep’ is equivalent to “True with deep introspection”. Memory usage is shown in human-readable units (base-2 representation). Without deep introspection a memory estimation is made based on column dtype and number of rows assuming values consume the same memory amount for corresponding dtypes. With deep memory introspection, a real memory usage calculation is performed at the cost of computational resources.
show_counts:bool, optional
Whether to show the non-null counts. By default, this is shown only if the DataFrame is smaller than pandas.options.display.max_info_rows and pandas.options.display.max_info_columns. A value of True always shows the counts, and False never shows the counts. Returns
None
This method prints a summary of a Series and returns None. See also Series.describe
Generate descriptive statistics of Series. Series.memory_usage
Memory usage of Series. Examples
>>> int_values = [1, 2, 3, 4, 5]
>>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
>>> s = pd.Series(text_values, index=int_values)
>>> s.info()
<class 'pandas.core.series.Series'>
Int64Index: 5 entries, 1 to 5
Series name: None
Non-Null Count Dtype
-------------- -----
5 non-null object
dtypes: object(1)
memory usage: 80.0+ bytes
Prints a summary excluding information about its values:
>>> s.info(verbose=False)
<class 'pandas.core.series.Series'>
Int64Index: 5 entries, 1 to 5
dtypes: object(1)
memory usage: 80.0+ bytes
Pipe the output of Series.info to a buffer instead of sys.stdout, get the buffer content and write it to a text file:
>>> import io
>>> buffer = io.StringIO()
>>> s.info(buf=buffer)
>>> s = buffer.getvalue()
>>> with open("df_info.txt", "w",
... encoding="utf-8") as f:
... f.write(s)
260
The memory_usage parameter allows deep introspection mode, especially useful for big Series and for fine-tuning memory optimization:
>>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
>>> s = pd.Series(np.random.choice(['a', 'b', 'c'], 10 ** 6))
>>> s.info()
<class 'pandas.core.series.Series'>
RangeIndex: 1000000 entries, 0 to 999999
Series name: None
Non-Null Count Dtype
-------------- -----
1000000 non-null object
dtypes: object(1)
memory usage: 7.6+ MB
>>> s.info(memory_usage='deep')
<class 'pandas.core.series.Series'>
RangeIndex: 1000000 entries, 0 to 999999
Series name: None
Non-Null Count Dtype
-------------- -----
1000000 non-null object
dtypes: object(1)
memory usage: 55.3 MB | |
doc_3827 | The StdButtonBox widget is a group of standard buttons for Motif-like dialog boxes. | |
doc_3828 |
The percent of non-fill_value points, as a decimal. Examples
>>> s = SparseArray([0, 0, 1, 1, 1], fill_value=0)
>>> s.density
0.6 | |
doc_3829 | sklearn.utils.sparsefuncs.inplace_row_scale(X, scale) [source]
Inplace row scaling of a CSR or CSC matrix. Scale each row of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix to be scaled. It should be of CSR or CSC format.
scalendarray of shape (n_samples,), dtype={np.float32, np.float64}
Array of precomputed sample-wise values to use for scaling. | |
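A minimal sketch of inplace_row_scale (data made up for illustration; the call mutates X in place and returns None):
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.sparsefuncs import inplace_row_scale
>>> X = csr_matrix(np.array([[1., 2.], [3., 4.]]))
>>> inplace_row_scale(X, np.array([10., 0.5]))
>>> X.toarray()  # rows scaled by 10 and 0.5 respectively
array([[10. , 20. ],
       [ 1.5,  2. ]])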
doc_3830 |
Return the visibility. | |
doc_3831 | Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module | |
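A minimal sketch of the bfloat16 cast above (using a plain torch.nn module):
>>> import torch.nn as nn
>>> model = nn.Linear(4, 2).bfloat16()  # returns self, so calls can be chained
>>> model.weight.dtype
torch.bfloat16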
doc_3832 |
Multi-block local binary pattern visualization. Blocks with higher sums are colored with alpha-blended white rectangles, whereas blocks with lower sums are colored alpha-blended cyan. Colors and the alpha parameter can be changed. Parameters
imagendarray of float or uint
Image on which to visualize the pattern.
rint
Row-coordinate of top left corner of a rectangle containing feature.
cint
Column-coordinate of top left corner of a rectangle containing feature.
widthint
Width of one of 9 equal rectangles that will be used to compute a feature.
heightint
Height of one of 9 equal rectangles that will be used to compute a feature.
lbp_codeint
The descriptor of feature to visualize. If not provided, the descriptor with 0 value will be used.
color_greater_blocktuple of 3 floats
Floats specifying the color for the block that has greater intensity value. They should be in the range [0, 1]. Corresponding values define (R, G, B) values. Default value is white (1, 1, 1).
color_lesser_blocktuple of 3 floats
Floats specifying the color for the block that has lesser intensity value. They should be in the range [0, 1]. Corresponding values define (R, G, B) values. Default value is cyan (0, 0.69, 0.96).
alphafloat
Value in the range [0, 1] that specifies opacity of visualization. 1 - fully transparent, 0 - opaque. Returns
outputndarray of float
Image with MB-LBP visualization. References
1
Face Detection Based on Multi-Block LBP Representation. Lun Zhang, Rufeng Chu, Shiming Xiang, Shengcai Liao, Stan Z. Li http://www.cbsr.ia.ac.cn/users/scliao/papers/Zhang-ICB07-MBLBP.pdf | |
doc_3833 |
Fit the model from data in X. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Returns
selfobject
Returns the object itself. | |
doc_3834 | This is a convenience function for invoking update_wrapper() as a function decorator when defining a wrapper function. It is equivalent to partial(update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated). For example: >>> from functools import wraps
>>> def my_decorator(f):
... @wraps(f)
... def wrapper(*args, **kwds):
... print('Calling decorated function')
... return f(*args, **kwds)
... return wrapper
...
>>> @my_decorator
... def example():
... """Docstring"""
... print('Called example function')
...
>>> example()
Calling decorated function
Called example function
>>> example.__name__
'example'
>>> example.__doc__
'Docstring'
Without the use of this decorator factory, the name of the example function would have been 'wrapper', and the docstring of the original example() would have been lost. | |
doc_3835 |
Calculate the rolling weighted window sum. Parameters
**kwargs
Keyword arguments to configure the SciPy weighted window type. Returns
Series or DataFrame
Return type is the same as the original object with np.float64 dtype. See also pandas.Series.rolling
Calling rolling with Series data. pandas.DataFrame.rolling
Calling rolling with DataFrames. pandas.Series.sum
Aggregating sum for Series. pandas.DataFrame.sum
Aggregating sum for DataFrame. | |
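A minimal sketch of the weighted rolling sum above (weighted windows require SciPy; the gaussian window takes its std through **kwargs):
>>> import pandas as pd
>>> s = pd.Series([0., 1., 2., 3., 4., 5.])
>>> s.rolling(3, win_type='gaussian').sum(std=1.0)  # one weighted sum per window; first two entries are NaN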
doc_3836 |
Draw a stacked area plot. An area plot displays quantitative data visually. This function wraps the matplotlib area function. Parameters
x:label or position, optional
Coordinates for the X axis. By default uses the index.
y:label or position, optional
Column to plot. By default uses all columns.
stacked:bool, default True
Area plots are stacked by default. Set to False to create an unstacked plot. **kwargs
Additional keyword arguments are documented in DataFrame.plot(). Returns
matplotlib.axes.Axes or numpy.ndarray
Area plot, or array of area plots if subplots is True. See also DataFrame.plot
Make plots of DataFrame using matplotlib / pylab. Examples Draw an area plot based on basic business metrics:
>>> df = pd.DataFrame({
... 'sales': [3, 2, 3, 9, 10, 6],
... 'signups': [5, 5, 6, 12, 14, 13],
... 'visits': [20, 42, 28, 62, 81, 50],
... }, index=pd.date_range(start='2018/01/01', end='2018/07/01',
... freq='M'))
>>> ax = df.plot.area()
Area plots are stacked by default. To produce an unstacked plot, pass stacked=False:
>>> ax = df.plot.area(stacked=False)
Draw an area plot for a single column:
>>> ax = df.plot.area(y='sales')
Draw with a different x:
>>> df = pd.DataFrame({
... 'sales': [3, 2, 3],
... 'visits': [20, 42, 28],
... 'day': [1, 2, 3],
... })
>>> ax = df.plot.area(x='day') | |
doc_3837 | Windows only: Returns the last error code set by Windows in the calling thread. This function calls the Windows GetLastError() function directly, it does not return the ctypes-private copy of the error code. | |
doc_3838 |
Align the xlabels and ylabels of subplots with the same subplots row or column (respectively) if label alignment is being done automatically (i.e. the label position is not manually set). Alignment persists for draw events after this is called. Parameters
axslist of Axes
Optional list (or ndarray) of Axes to align the labels. Default is to align all Axes on the figure. See also matplotlib.figure.Figure.align_xlabels
matplotlib.figure.Figure.align_ylabels | |
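A minimal sketch of Figure.align_labels (made-up data; the wide tick labels on the first Axes would otherwise push its ylabel to a different offset than the second):
>>> import matplotlib.pyplot as plt
>>> fig, axs = plt.subplots(2, 1)
>>> axs[0].plot(range(10), [x * 1e5 for x in range(10)])
>>> axs[0].set_ylabel('big numbers')
>>> axs[1].set_ylabel('y')
>>> fig.align_labels()  # aligns both x- and y-labels across the subplots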
doc_3839 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_3840 | tf.estimator.LinearClassifier(
feature_columns, model_dir=None, n_classes=2, weight_column=None,
label_vocabulary=None, optimizer='Ftrl', config=None,
warm_start_from=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE,
sparse_combiner='sum'
)
Train a linear model to classify instances into one of multiple possible classes. When number of possible classes is 2, this is binary classification. Example: categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.exponential_decay(
learning_rate=0.1,
global_step=tf.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearClassifier(
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
warm_start_from="/path/to/checkpoint/dir")
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have the following features, otherwise there will be a KeyError:
if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
for each column in feature_columns:
if column is a SparseColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedSparseColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a RealValuedColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using softmax cross entropy.
Args
feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model.
n_classes number of label classes. Default is binary classification. Note that class labels are integers representing the class index (i.e. values from 0 to n_classes-1). For arbitrary label values (e.g. string labels), convert to class indices first.
weight_column A string or a _NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch weight tensor from the features. If it is a _NumericColumn, raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get weight tensor.
label_vocabulary A list of strings representing possible label values. If given, labels must be string type and have any value in label_vocabulary. If it is not given, that means labels are already encoded as integer or float within [0, 1] for n_classes=2 and encoded as integer values in {0, 1,..., n_classes-1} for n_classes>2. Also there will be errors if vocabulary is not provided and labels are strings.
optimizer An instance of tf.keras.optimizers.* or tf.estimator.experimental.LinearSDCA used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to FTRL optimizer.
config RunConfig object to configure the runtime settings.
warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE.
sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. for more details, see tf.feature_column.linear_model.
Raises
ValueError if n_classes < 2. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory containing evaluation metrics.
evaluate
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method builds a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, then training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, please set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | |
doc_3841 |
Return each element rounded to the given number of decimals. Refer to numpy.around for full documentation. See also numpy.ndarray.round
corresponding function for ndarrays numpy.around
equivalent function | |
doc_3842 |
Compute the arithmetic mean along the specified axis. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. float64 intermediate and return values are used for integer inputs. Parameters
aarray_like
Array containing numbers whose mean is desired. If a is not an array, a conversion is attempted.
axisNone or int or tuple of ints, optional
Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. New in version 1.7.0. If this is a tuple of ints, a mean is performed over multiple axes, instead of a single axis or all the axes as before.
dtypedata-type, optional
Type to use in computing the mean. For integer inputs, the default is float64; for floating point inputs, it is the same as the input dtype.
outndarray, optional
Alternate output array in which to place the result. The default is None; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See Output type determination for more details.
keepdimsbool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the mean method of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method does not implement keepdims any exceptions will be raised.
wherearray_like of bool, optional
Elements to include in the mean. See reduce for details. New in version 1.20.0. Returns
mndarray, see dtype parameter above
If out=None, returns a new array containing the mean values, otherwise a reference to the output array is returned. See also average
Weighted average
std, var, nanmean, nanstd, nanvar
Notes The arithmetic mean is the sum of the elements along the axis divided by the number of elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-precision accumulator using the dtype keyword can alleviate this issue. By default, float16 results are computed using float32 intermediates for extra precision. Examples >>> a = np.array([[1, 2], [3, 4]])
>>> np.mean(a)
2.5
>>> np.mean(a, axis=0)
array([2., 3.])
>>> np.mean(a, axis=1)
array([1.5, 3.5])
In single precision, mean can be inaccurate: >>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.mean(a)
0.54999924
Computing the mean in float64 is more accurate: >>> np.mean(a, dtype=np.float64)
0.55000000074505806 # may vary
Specifying a where argument: >>> a = np.array([[5, 9, 13], [14, 10, 12], [11, 15, 19]])
>>> np.mean(a)
12.0
>>> np.mean(a, where=[[True], [False], [False]])
9.0 | |
doc_3843 | Returns three values: the formatted version of object as a string, a flag indicating whether the result is readable, and a flag indicating whether recursion was detected. The first argument is the object to be presented. The second is a dictionary which contains the id() of objects that are part of the current presentation context (direct and indirect containers for object that are affecting the presentation) as the keys; if an object needs to be presented which is already represented in context, the third return value should be True. Recursive calls to the format() method should add additional entries for containers to this dictionary. The third argument, maxlevels, gives the requested limit to recursion; this will be 0 if there is no requested limit. This argument should be passed unmodified to recursive calls. The fourth argument, level, gives the current level; recursive calls should be passed a value less than that of the current call. | |
doc_3844 |
The ordinal day of the year. | |
doc_3845 | Set the blocking mode of the specified file descriptor. Set the O_NONBLOCK flag if blocking is False, clear the flag otherwise. See also get_blocking() and socket.socket.setblocking(). Availability: Unix. New in version 3.5. | |
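A minimal sketch of the entry above (assuming it describes os.set_blocking; Unix only, as noted):
>>> import os
>>> r, w = os.pipe()
>>> os.set_blocking(r, False)  # sets O_NONBLOCK on the read end
>>> os.get_blocking(r)
False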
doc_3846 | Formats a number val according to the current LC_NUMERIC setting. The format follows the conventions of the % operator. For floating point values, the decimal point is modified if appropriate. If grouping is true, also takes the grouping into account. If monetary is true, the conversion uses monetary thousands separator and grouping strings. Processes formatting specifiers as in format % val, but takes the current locale settings into account. Changed in version 3.7: The monetary keyword parameter was added. | |
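A minimal sketch of the entry above (assuming it describes locale.format_string; the locale name is platform-dependent and may not be installed everywhere):
>>> import locale
>>> locale.setlocale(locale.LC_NUMERIC, 'en_US.UTF-8')
'en_US.UTF-8'
>>> locale.format_string('%.2f', 1234567.891, grouping=True)
'1,234,567.89'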
doc_3847 | winreg.OpenKeyEx(key, sub_key, reserved=0, access=KEY_READ)
Opens the specified key, returning a handle object. key is an already open key, or one of the predefined HKEY_* constants. sub_key is a string that identifies the sub_key to open. reserved is a reserved integer, and must be zero. The default is zero. access is an integer that specifies an access mask that describes the desired security access for the key. Default is KEY_READ. See Access Rights for other allowed values. The result is a new handle to the specified key. If the function fails, OSError is raised. Raises an auditing event winreg.OpenKey with arguments key, sub_key, access. Raises an auditing event winreg.OpenKey/result with argument key. Changed in version 3.2: Allow the use of named arguments. Changed in version 3.3: See above. | |
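A minimal sketch of OpenKeyEx (Windows only; the registry path and value name are only illustrations):
>>> import winreg
>>> key = winreg.OpenKeyEx(winreg.HKEY_LOCAL_MACHINE,
...                        r"SOFTWARE\Microsoft\Windows NT\CurrentVersion")
>>> value, vtype = winreg.QueryValueEx(key, "ProductName")
>>> key.Close()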
doc_3848 |
Computes and returns the sum of gradients of outputs w.r.t. the inputs. grad_outputs should be a sequence of length matching outputs, containing the “vector” in the Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be None. If only_inputs is True, the function will only return a list of gradients w.r.t. the specified inputs. If it’s False, then gradient w.r.t. all remaining leaves will still be computed, and will be accumulated into their .grad attribute. Note If you run any forward ops, create grad_outputs, and/or call grad in a user-specified CUDA stream context, see Stream semantics of backward passes. Parameters
outputs (sequence of Tensor) – outputs of the differentiated function.
inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be returned (and not accumulated into .grad).
grad_outputs (sequence of Tensor) – The “vector” in the Jacobian-vector product. Usually gradients w.r.t. each output. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable for all grad_tensors, then this argument is optional. Default: None.
retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Default: False.
allow_unused (bool, optional) – If False, specifying inputs that were not used when computing outputs (and therefore their grad is always zero) is an error. Defaults to False. | |
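A minimal sketch of torch.autograd.grad computing a Jacobian-vector product (made-up tensors):
>>> import torch
>>> x = torch.tensor([1., 2.], requires_grad=True)
>>> y = (x ** 2).sum()
>>> torch.autograd.grad(outputs=y, inputs=x)  # d(sum(x**2))/dx = 2x
(tensor([2., 4.]),)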
doc_3849 | In-place version of fmod() | |
doc_3850 | See Migration guide for more details. tf.compat.v1.manip.reshape, tf.compat.v1.reshape
tf.reshape(
tensor, shape, name=None
)
Given tensor, this operation returns a new tf.Tensor that has the same values as tensor in the same order, except with a new shape given by shape.
t1 = [[1, 2, 3],
[4, 5, 6]]
print(tf.shape(t1).numpy())
[2 3]
t2 = tf.reshape(t1, [6])
t2
<tf.Tensor: shape=(6,), dtype=int32,
numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
tf.reshape(t2, [3, 2])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)>
The tf.reshape does not change the order of or the total number of elements in the tensor, and so it can reuse the underlying data buffer. This makes it a fast operation independent of how big of a tensor it is operating on.
tf.reshape([1, 2, 3], [2, 2])
Traceback (most recent call last):
InvalidArgumentError: Input to reshape is a tensor with 3 values, but the
requested shape has 4
To instead reorder the data to rearrange the dimensions of a tensor, see tf.transpose.
t = [[1, 2, 3],
[4, 5, 6]]
tf.reshape(t, [3, 2]).numpy()
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)
tf.transpose(t, perm=[1, 0]).numpy()
array([[1, 4],
[2, 5],
[3, 6]], dtype=int32)
If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.
t = [[1, 2, 3],
[4, 5, 6]]
tf.reshape(t, [-1])
<tf.Tensor: shape=(6,), dtype=int32,
numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
tf.reshape(t, [3, -1])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)>
tf.reshape(t, [-1, 2])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4],
[5, 6]], dtype=int32)>
tf.reshape(t, []) reshapes a tensor t with one element to a scalar.
tf.reshape([7], []).numpy()
7
More examples:
t = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(tf.shape(t).numpy())
[9]
tf.reshape(t, [3, 3])
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]], dtype=int32)>
t = [[[1, 1], [2, 2]],
[[3, 3], [4, 4]]]
print(tf.shape(t).numpy())
[2 2 2]
tf.reshape(t, [2, 4])
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[1, 1, 2, 2],
[3, 3, 4, 4]], dtype=int32)>
t = [[[1, 1, 1],
[2, 2, 2]],
[[3, 3, 3],
[4, 4, 4]],
[[5, 5, 5],
[6, 6, 6]]]
print(tf.shape(t).numpy())
[3 2 3]
# Pass '[-1]' to flatten 't'.
tf.reshape(t, [-1])
<tf.Tensor: shape=(18,), dtype=int32,
numpy=array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
dtype=int32)>
# -- Using -1 to infer the shape --
# Here -1 is inferred to be 9:
tf.reshape(t, [2, -1])
<tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]], dtype=int32)>
# -1 is inferred to be 2:
tf.reshape(t, [-1, 9])
<tf.Tensor: shape=(2, 9), dtype=int32, numpy=
array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]], dtype=int32)>
# -1 is inferred to be 3:
tf.reshape(t, [ 2, -1, 3])
<tf.Tensor: shape=(2, 3, 3), dtype=int32, numpy=
array([[[1, 1, 1],
[2, 2, 2],
[3, 3, 3]],
[[4, 4, 4],
[5, 5, 5],
[6, 6, 6]]], dtype=int32)>
Args
tensor A Tensor.
shape A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor.
name Optional string. A name for the operation.
Returns A Tensor. Has the same type as tensor. | |
doc_3851 | Return the subprocess process id as an integer. | |
doc_3852 | Parse XML data reading from the object file. file only needs to provide the read(nbytes) method, returning the empty string when there’s no more data. | |
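A minimal sketch of the entry above (assuming it describes xmlparser.ParseFile from xml.parsers.expat; the filename is hypothetical):
>>> import xml.parsers.expat
>>> parser = xml.parsers.expat.ParserCreate()
>>> with open('doc.xml', 'rb') as f:  # any object with a read(nbytes) method works
...     status = parser.ParseFile(f)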
doc_3853 | Return a string which specifies the terminal device associated with file descriptor fd. If fd is not associated with a terminal device, an exception is raised. Availability: Unix. | |
doc_3854 | Round towards Infinity. | |
doc_3855 | incrementaldecoder
Incremental encoder and decoder classes or factory functions. These have to provide the interface defined by the base classes IncrementalEncoder and IncrementalDecoder, respectively. Incremental codecs can maintain state. | |
doc_3856 | See Migration guide for more details. tf.compat.v1.data.experimental.get_single_element
tf.data.experimental.get_single_element(
dataset
)
This function enables you to use a tf.data.Dataset in a stateless "tensor-in tensor-out" expression, without creating an iterator. This can be useful when your preprocessing transformations are expressed as a Dataset, and you want to use the transformation at serving time. For example: def preprocessing_fn(input_str):
# ...
return image, label
input_batch = ... # input batch of BATCH_SIZE elements
dataset = (tf.data.Dataset.from_tensor_slices(input_batch)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
image_batch, label_batch = tf.data.experimental.get_single_element(dataset)
Args
dataset A tf.data.Dataset object containing a single element.
Returns A nested structure of tf.Tensor objects, corresponding to the single element of dataset.
Raises
TypeError if dataset is not a tf.data.Dataset object. InvalidArgumentError (at runtime): if dataset does not contain exactly one element. | |
doc_3857 | This method for the Stats class prints a list of all functions that called each function in the profiled database. The ordering is identical to that provided by print_stats(), and the definition of the restricting argument is also identical. Each caller is reported on its own line. The format differs slightly depending on the profiler that produced the stats: With profile, a number is shown in parentheses after each caller to show how many times this specific call was made. For convenience, a second non-parenthesized number repeats the cumulative time spent in the function at the right. With cProfile, each caller is preceded by three numbers: the number of times this specific call was made, and the total and cumulative times spent in the current function while it was invoked by this specific caller. | |
doc_3858 | A base view for updating an existing object instance. It is not intended to be used directly, but rather as a parent class of the django.views.generic.edit.UpdateView. Ancestors (MRO) This view inherits methods and attributes from the following views: django.views.generic.edit.ModelFormMixin django.views.generic.edit.ProcessFormView Methods
get(request, *args, **kwargs)
Sets the current object instance (self.object).
post(request, *args, **kwargs)
Sets the current object instance (self.object). | |
doc_3859 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_3860 | Boolean. Designates whether this user can access the admin site. | |
doc_3861 | bytearray.center(width[, fillbyte])
Return a copy of the object centered in a sequence of length width. Padding is done using the specified fillbyte (default is an ASCII space). For bytes objects, the original sequence is returned if width is less than or equal to len(s). Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made. | |
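A minimal sketch of bytearray.center:
>>> bytearray(b'abc').center(7, b'-')
bytearray(b'--abc--')
>>> bytearray(b'abc').center(2)  # width <= len(s): still a new object, unchanged content
bytearray(b'abc')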
doc_3862 | tf.compat.v1.disable_resource_variables()
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: non-resource variables are not supported in the long term If your code needs tf.disable_resource_variables() to be called to work properly please file a bug. | |
doc_3863 |
Hyperbolic cosine, element-wise. Equivalent to 1/2 * (np.exp(x) + np.exp(-x)) and np.cos(1j*x). Parameters
xarray_like
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
outndarray or scalar
Output array of same shape as x. This is a scalar if x is a scalar. Examples >>> np.cosh(0)
1.0
The hyperbolic cosine describes the shape of a hanging cable: >>> import matplotlib.pyplot as plt
>>> x = np.linspace(-4, 4, 1000)
>>> plt.plot(x, np.cosh(x))
>>> plt.show() | |
doc_3864 | A list of the field names that will be prompted for when creating a user via the createsuperuser management command. The user will be prompted to supply a value for each of these fields. It must include any field for which blank is False or undefined and may include additional fields you want prompted for when a user is created interactively. REQUIRED_FIELDS has no effect in other parts of Django, like creating a user in the admin. For example, here is the partial definition for a user model that defines two required fields - a date of birth and height: class MyUser(AbstractBaseUser):
...
date_of_birth = models.DateField()
height = models.FloatField()
...
REQUIRED_FIELDS = ['date_of_birth', 'height']
Note REQUIRED_FIELDS must contain all required fields on your user model, but should not contain the USERNAME_FIELD or password as these fields will always be prompted for. | |
doc_3865 |
Transform X into a (weighted) graph of neighbors nearer than a radius. The transformed data is a sparse graph as returned by radius_neighbors_graph. Read more in the User Guide. New in version 0.22. Parameters
mode{‘distance’, ‘connectivity’}, default=’distance’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric.
radiusfloat, default=1.
Radius of neighborhood in the transformed sparse graph.
algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree
‘kd_tree’ will use KDTree
‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force.
leaf_sizeint, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
metricstr or callable, default=’minkowski’
metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics.
pint, default=2
Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
metric_paramsdict, default=None
Additional keyword arguments for the metric function.
n_jobsint, default=1
The number of parallel jobs to run for neighbors search. If -1, then the number of jobs is set to the number of CPU cores. Attributes
effective_metric_str or callable
The distance metric used. It will be same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter set to ‘minkowski’ and p parameter set to 2.
effective_metric_params_dict
Additional keyword arguments for the metric function. For most metrics will be same with metric_params parameter, but may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’.
n_samples_fit_int
Number of samples in the fitted data. Examples >>> from sklearn.cluster import DBSCAN
>>> from sklearn.neighbors import RadiusNeighborsTransformer
>>> from sklearn.pipeline import make_pipeline
>>> estimator = make_pipeline(
... RadiusNeighborsTransformer(radius=42.0, mode='distance'),
... DBSCAN(min_samples=30, metric='precomputed'))
Methods
fit(X[, y]) Fit the radius neighbors transformer from the training dataset.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
radius_neighbors([X, radius, …]) Finds the neighbors within a given radius of a point or points.
radius_neighbors_graph([X, radius, mode, …]) Computes the (weighted) graph of Neighbors for points in X
set_params(**params) Set the parameters of this estimator.
transform(X) Computes the (weighted) graph of Neighbors for points in X
fit(X, y=None) [source]
Fit the radius neighbors transformer from the training dataset. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’
Training data. Returns
selfRadiusNeighborsTransformer
The fitted radius neighbors transformer.
fit_transform(X, y=None) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Training set.
yignored
Returns
Xtsparse matrix of shape (n_samples, n_samples)
Xt[i, j] is assigned the weight of edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source]
Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters
Xarray-like of (n_samples, n_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
radiusfloat, default=None
Limiting distance of neighbors to return. The default is the value passed to the constructor.
return_distancebool, default=True
Whether or not to return the distances.
sort_resultsbool, default=False
If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns
neigh_distndarray of shape (n_samples,) of arrays
Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter.
neigh_indndarray of shape (n_samples,) of arrays
An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NeighborsClassifier class from an array representing our data set and ask who’s the closest point to [1, 1, 1]: >>> import numpy as np
>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.6)
>>> neigh.fit(samples)
NearestNeighbors(radius=1.6)
>>> rng = neigh.radius_neighbors([[1., 1., 1.]])
>>> print(np.asarray(rng[0][0]))
[1.5 0.5]
>>> print(np.asarray(rng[1][0]))
[1 2]
The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time.
radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source]
Compute the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius. Parameters
Xarray-like of shape (n_samples, n_features), default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
radiusfloat, default=None
Radius of neighborhoods. The default is the value passed to the constructor.
mode{‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points.
sort_resultsbool, default=False
If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. Returns
Asparse-matrix of shape (n_queries, n_samples_fit)
n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format. See also
kneighbors_graph
Examples >>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(radius=1.5)
>>> neigh.fit(X)
NearestNeighbors(radius=1.5)
>>> A = neigh.radius_neighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 0.],
[1., 0., 1.]])
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Compute the (weighted) graph of neighbors for points in X. Parameters
Xarray-like of shape (n_samples_transform, n_features)
Sample data. Returns
Xtsparse matrix of shape (n_samples_transform, n_samples_fit)
Xt[i, j] is assigned the weight of edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format. | |
doc_3866 | tf.compat.v1.nn.quantized_conv2d(
input, filter, min_input, max_input, min_filter, max_filter, strides, padding,
out_type=tf.dtypes.qint32, dilations=[1, 1, 1, 1], name=None
)
The inputs are quantized tensors where the lowest value represents the real number of the associated minimum, and the highest represents the maximum. This means that you can only interpret the quantized output in the same way, by taking the returned minimum and maximum values into account.
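A hedged sketch of one way to call it, first quantizing float tensors with tf.quantization.quantize (the shapes and quantization ranges here are illustrative assumptions): import tensorflow as tf
x = tf.random.uniform([1, 4, 4, 1], 0.0, 1.0)
w = tf.random.uniform([2, 2, 1, 1], 0.0, 1.0)
# quantize both operands to quint8, keeping their float ranges
qx, min_x, max_x = tf.quantization.quantize(x, 0.0, 1.0, tf.quint8)
qw, min_w, max_w = tf.quantization.quantize(w, 0.0, 1.0, tf.quint8)
out, min_out, max_out = tf.compat.v1.nn.quantized_conv2d(
    qx, qw, min_x, max_x, min_w, max_w,
    strides=[1, 1, 1, 1], padding='SAME')  # out is qint32 by default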
Args
input A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
filter A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. filter's input_depth dimension must match input's depth dimension.
min_input A Tensor of type float32. The float value that the lowest quantized input value represents.
max_input A Tensor of type float32. The float value that the highest quantized input value represents.
min_filter A Tensor of type float32. The float value that the lowest quantized filter value represents.
max_filter A Tensor of type float32. The float value that the highest quantized filter value represents.
strides A list of ints. The stride of the sliding window for each dimension of the input tensor.
padding A string from: "SAME", "VALID". The type of padding algorithm to use.
out_type An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.qint32.
dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output, min_output, max_output). output A Tensor of type out_type.
min_output A Tensor of type float32.
max_output A Tensor of type float32. | |
doc_3867 | Construct a new directory comparison object, to compare the directories a and b. ignore is a list of names to ignore, and defaults to filecmp.DEFAULT_IGNORES. hide is a list of names to hide, and defaults to [os.curdir, os.pardir]. The dircmp class compares files by doing shallow comparisons as described for filecmp.cmp(). The dircmp class provides the following methods:
report()
Print (to sys.stdout) a comparison between a and b.
report_partial_closure()
Print a comparison between a and b and common immediate subdirectories.
report_full_closure()
Print a comparison between a and b and common subdirectories (recursively).
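A short sketch of the report methods (the directory names are illustrative and assumed to exist): import filecmp
cmp = filecmp.dircmp('dir_a', 'dir_b')  # shallow comparison of two directories
cmp.report()                            # top-level differences only
cmp.report_full_closure()               # recurse into common subdirectories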
The dircmp class offers a number of interesting attributes that may be used to get various bits of information about the directory trees being compared. Note that via __getattr__() hooks, all attributes are computed lazily, so there is no speed penalty if only those attributes which are lightweight to compute are used.
left
The directory a.
right
The directory b.
left_list
Files and subdirectories in a, filtered by hide and ignore.
right_list
Files and subdirectories in b, filtered by hide and ignore.
common
Files and subdirectories in both a and b.
left_only
Files and subdirectories only in a.
right_only
Files and subdirectories only in b.
common_dirs
Subdirectories in both a and b.
common_files
Files in both a and b.
common_funny
Names in both a and b, such that the type differs between the directories, or names for which os.stat() reports an error.
same_files
Files which are identical in both a and b, using the class’s file comparison operator.
diff_files
Files which are in both a and b, whose contents differ according to the class’s file comparison operator.
funny_files
Files which are in both a and b, but could not be compared.
subdirs
A dictionary mapping names in common_dirs to dircmp objects. | |
doc_3868 |
Clear the Axes. | |
doc_3869 | If the maintype is multipart, raise a TypeError; otherwise look up a handler function based on the type of obj (see next paragraph), call clear_content() on the msg, and call the handler function, passing through all arguments. The expectation is that the handler will transform and store obj into msg, possibly making other changes to msg as well, such as adding various MIME headers to encode information needed to interpret the stored data. To find the handler, obtain the type of obj (typ = type(obj)), and look for the following keys in the registry, stopping with the first one found: the type itself (typ); the type’s fully qualified name (typ.__module__ + '.' + typ.__qualname__); the type’s qualname (typ.__qualname__); the type’s name (typ.__name__). If none of the above match, repeat all of the checks above for each of the types in the MRO (typ.__mro__). Finally, if no other key yields a handler, check for a handler for the key None. If there is no handler for None, raise a KeyError for the fully qualified name of the type. Also add a MIME-Version header if one is not present (see also MIMEPart). | |
doc_3870 | Sets the given header name to the given value. Both header and value should be strings. | |
doc_3871 | Returns True if x is infinite; otherwise returns False. | |
doc_3872 | True if the system is Android. | |
doc_3873 |
Determine whether y is monotonically correlated with x. y is found increasing or decreasing with respect to x based on a Spearman correlation test. Parameters
xarray-like of shape (n_samples,)
Training data.
yarray-like of shape (n_samples,)
Training target. Returns
increasing_boolboolean
Whether the relationship is increasing or decreasing. Notes The Spearman correlation coefficient is estimated from the data, and the sign of the resulting estimate is used as the result. In the event that the 95% confidence interval based on Fisher transform spans zero, a warning is raised. References Fisher transformation. Wikipedia. https://en.wikipedia.org/wiki/Fisher_transformation
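This entry appears to describe sklearn.isotonic.check_increasing; assuming so, a minimal sketch with illustrative data: >>> from sklearn.isotonic import check_increasing
>>> x = [1, 2, 3, 4, 5]
>>> y = [10, 20, 30, 35, 40]
>>> increasing = check_increasing(x, y)
Here increasing is True because y grows monotonically with x. | |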
doc_3874 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyProximalGradientDescent
tf.raw_ops.SparseApplyProximalGradientDescent(
var, alpha, l1, l2, grad, indices, use_locking=False, name=None
)
That is, for rows we have grad for, we update var as follows: $$\text{prox}_v = \text{var} - \alpha \cdot \text{grad}$$ $$\text{var} = \frac{\operatorname{sign}(\text{prox}_v)}{1 + \alpha \cdot l_2} \cdot \max\{|\text{prox}_v| - \alpha \cdot l_1, 0\}$$
Args
var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
alpha A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar.
grad A Tensor. Must have the same type as var. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as var. | |
doc_3875 | Write the bytes in bytes into memory at the current position of the file pointer and return the number of bytes written (never less than len(bytes), since if the write fails, a ValueError will be raised). The file position is updated to point after the bytes that were written. If the mmap was created with ACCESS_READ, then writing to it will raise a TypeError exception. Changed in version 3.5: Writable bytes-like object is now accepted. Changed in version 3.6: The number of bytes written is now returned.
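A minimal sketch using an anonymous, writable mapping (the size is arbitrary): >>> import mmap
>>> buf = mmap.mmap(-1, 16)  # anonymous mapping, readable and writable
>>> buf.write(b'hello')
5
>>> buf.tell()
5
The returned 5 reflects the Python 3.6+ behaviour described above. | |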
doc_3876 | Exception raised when a browser control error occurs. | |
doc_3877 |
Sends a tensor synchronously. Parameters
tensor (Tensor) – Tensor to send.
dst (int) – Destination rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to match send with remote recv.
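A hedged two-rank sketch, assuming init_process_group has already been called on both processes (setup code omitted): import torch
import torch.distributed as dist
if dist.get_rank() == 0:
    dist.send(torch.arange(4.), dst=1)  # rank 0 sends the tensor
else:
    buf = torch.empty(4)
    dist.recv(buf, src=0)               # rank 1 receives into buf
The send blocks until the matching recv on the destination rank completes. | |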
doc_3878 | tupled integers of the SDL library version SDL = '(2, 0, 12)' This is the SDL library version represented as an extended tuple. It also has attributes 'major', 'minor' & 'patch' that can be accessed like this: >>> pygame.version.SDL.major
2 printing the whole thing returns a string like this: >>> pygame.version.SDL
SDLVersion(major=2, minor=0, patch=12) New in pygame 2.0.0. | |
doc_3879 |
pygame module for loading and rendering fonts The font module allows for rendering TrueType fonts into a new Surface object. It accepts any UCS-2 character ('\u0001' to '\uFFFF'). This module is optional and requires SDL_ttf as a dependency. You should test that pygame.font is available and initialized before attempting to use the module. Most of the work done with fonts is done by using the actual Font objects. The module by itself only has routines to initialize the module and create Font objects with pygame.font.Font(). You can load fonts from the system by using the pygame.font.SysFont() function. There are a few other functions to help look up the system fonts. Pygame comes with a builtin default font. This can always be accessed by passing None as the font name. To use the pygame.freetype based pygame.ftfont as pygame.font, define the environment variable PYGAME_FREETYPE before the first import of pygame. Module pygame.ftfont is a pygame.font compatible module that passes all but one of the font module unit tests: it does not have the UCS-2 limitation of the SDL_ttf based font module, so fails to raise an exception for a code point greater than '\uFFFF'. If pygame.freetype is unavailable then the SDL_ttf font module will be loaded instead. pygame.font.init()
initialize the font module init() -> None This method is called automatically by pygame.init(). It initializes the font module. The module must be initialized before any other functions will work. It is safe to call this function more than once.
pygame.font.quit()
uninitialize the font module quit() -> None Manually uninitialize SDL_ttf's font system. This is called automatically by pygame.quit(). It is safe to call this function even if font is currently not initialized.
pygame.font.get_init()
true if the font module is initialized get_init() -> bool Test if the font module is initialized or not.
pygame.font.get_default_font()
get the filename of the default font get_default_font() -> string Return the filename of the system font. This is not the full path to the file. This file can usually be found in the same directory as the font module, but it can also be bundled in separate archives.
pygame.font.get_fonts()
get all available fonts get_fonts() -> list of strings Returns a list of all the fonts available on the system. The names of the fonts will be set to lowercase with all spaces and punctuation removed. This works on most systems, but some will return an empty list if they cannot find fonts.
pygame.font.match_font()
find a specific font on the system match_font(name, bold=False, italic=False) -> path Returns the full path to a font file on the system. If bold or italic are set to true, this will attempt to find the correct family of font. The font name can also be an iterable of font names, a string of comma-separated font names, or a bytes of comma-separated font names, in which case the set of names will be searched in order. If none of the given names are found, None is returned. New in pygame 2.0.1: Accept an iterable of font names. Example: print(pygame.font.match_font('bitstreamverasans'))
# output is: /usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf
# (but only if you have Vera on your system)
pygame.font.SysFont()
create a Font object from the system fonts SysFont(name, size, bold=False, italic=False) -> Font Return a new Font object that is loaded from the system fonts. The font will match the requested bold and italic flags. Pygame uses a small set of common font aliases. If the specific font you ask for is not available, a reasonable alternative may be used. If a suitable system font is not found this will fall back on loading the default pygame font. The font name can also be an iterable of font names, a string of comma-separated font names, or a bytes of comma-separated font names, in which case the set of names will be searched in order. New in pygame 2.0.1: Accept an iterable of font names.
pygame.font.Font
create a new Font object from a file Font(filename, size) -> Font Font(object, size) -> Font Load a new font from a given filename or a python file object. The size is the height of the font in pixels. If the filename is None the pygame default font will be loaded. If a font cannot be loaded from the arguments given an exception will be raised. Once the font is created the size cannot be changed. Font objects are mainly used to render text into new Surface objects. The render can emulate bold or italic features, but it is better to load from a font with actual italic or bold glyphs. The rendered text can be regular strings or unicode. bold
Gets or sets whether the font should be rendered in (faked) bold. bold -> bool Whether the font should be rendered in bold. When set to True, this enables the bold rendering of text. This is a fake stretching of the font that doesn't look good on many font types. If possible load the font from a real bold font file. While bold, the font will have a different width than when normal. This can be mixed with the italic and underline modes. New in pygame 2.0.0.
italic
Gets or sets whether the font should be rendered in (faked) italics. italic -> bool Whether the font should be rendered in italic. When set to True, this enables fake rendering of italic text. This is a fake skewing of the font that doesn't look good on many font types. If possible load the font from a real italic font file. While italic the font will have a different width than when normal. This can be mixed with the bold and underline modes. New in pygame 2.0.0.
underline
Gets or sets whether the font should be rendered with an underline. underline -> bool Whether the font should be rendered in underline. When set to True, all rendered fonts will include an underline. The underline is always one pixel thick, regardless of font size. This can be mixed with the bold and italic modes. New in pygame 2.0.0.
render()
draw text on a new Surface render(text, antialias, color, background=None) -> Surface This creates a new Surface with the specified text rendered on it. pygame provides no way to directly draw text on an existing Surface: instead you must use Font.render() to create an image (Surface) of the text, then blit this image onto another Surface. The text can only be a single line: newline characters are not rendered. Null characters ('\x00') raise a TypeError. Both Unicode and char (byte) strings are accepted. For Unicode strings only UCS-2 characters ('\u0001' to '\uFFFF') are recognized. Anything greater raises a UnicodeError. For char strings a LATIN1 encoding is assumed. The antialias argument is a boolean: if true the characters will have smooth edges. The color argument is the color of the text [e.g.: (0,0,255) for blue]. The optional background argument is a color to use for the text background. If no background is passed the area outside the text will be transparent. The Surface returned will be of the dimensions required to hold the text (the same as those returned by Font.size()). If an empty string is passed for the text, a blank surface will be returned that is zero pixels wide and the height of the font. Depending on the type of background and antialiasing used, this returns different types of Surfaces. For performance reasons, it is good to know what type of image will be used. If antialiasing is not used, the return image will always be an 8-bit image with a two-color palette. If the background is transparent a colorkey will be set. Antialiased images are rendered to 24-bit RGB images. If the background is transparent a pixel alpha will be included. Optimization: if you know that the final destination for the text (on the screen) will always have a solid background, and the text is antialiased, you can improve performance by specifying the background color. This will cause the resulting image to maintain transparency information by colorkey rather than (much less efficient) alpha values. If you render '\n', an unknown char will be rendered (usually a rectangle); instead you need to handle newlines yourself. Font rendering is not thread safe: only a single thread can render text at any time.
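A short sketch of the usual render-then-blit pattern (assumes pygame and the font module are initialized, and that 'screen' is an existing display Surface): font = pygame.font.Font(None, 36)  # default font, 36 pixels tall
text_surface = font.render("Hello", True, (255, 255, 255))
screen.blit(text_surface, (20, 20))  # draw the text image onto the screen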
size()
determine the amount of space needed to render text size(text) -> (width, height) Returns the dimensions needed to render the text. This can be used to help determine the positioning needed for text before it is rendered. It can also be used for wordwrapping and other layout effects. Be aware that most fonts use kerning which adjusts the widths for specific letter pairs. For example, the width for "ae" will not always match the width for "a" + "e".
set_underline()
control if text is rendered with an underline set_underline(bool) -> None When enabled, all rendered fonts will include an underline. The underline is always one pixel thick, regardless of font size. This can be mixed with the bold and italic modes. Note This is the same as the underline attribute.
get_underline()
check if text will be rendered with an underline get_underline() -> bool Return True when the font underline is enabled.
Note This is the same as the underline attribute.
set_bold()
enable fake rendering of bold text set_bold(bool) -> None Enables the bold rendering of text. This is a fake stretching of the font that doesn't look good on many font types. If possible load the font from a real bold font file. While bold, the font will have a different width than when normal. This can be mixed with the italic and underline modes. Note This is the same as the bold attribute.
get_bold()
check if text will be rendered bold get_bold() -> bool Return True when the font bold rendering mode is enabled. Note This is the same as the bold attribute.
set_italic()
enable fake rendering of italic text set_italic(bool) -> None Enables fake rendering of italic text. This is a fake skewing of the font that doesn't look good on many font types. If possible load the font from a real italic font file. While italic the font will have a different width than when normal. This can be mixed with the bold and underline modes. Note This is the same as the italic attribute.
metrics()
gets the metrics for each character in the passed string metrics(text) -> list The list contains tuples for each character, which contain the minimum X offset, the maximum X offset, the minimum Y offset, the maximum Y offset and the advance offset (bearing plus width) of the character. [(minx, maxx, miny, maxy, advance), (minx, maxx, miny, maxy, advance), ...]. None is entered in the list for each unrecognized character.
get_italic()
check if the text will be rendered italic get_italic() -> bool Return True when the font italic rendering mode is enabled. Note This is the same as the italic attribute.
get_linesize()
get the line space of the font text get_linesize() -> int Return the height in pixels for a line of text with the font. When rendering multiple lines of text this is the recommended amount of space between lines.
get_height()
get the height of the font get_height() -> int Return the height in pixels of the actual rendered text. This is the average size for each glyph in the font.
get_ascent()
get the ascent of the font get_ascent() -> int Return the height in pixels for the font ascent. The ascent is the number of pixels from the font baseline to the top of the font.
get_descent()
get the descent of the font get_descent() -> int Return the height in pixels for the font descent. The descent is the number of pixels from the font baseline to the bottom of the font. | |
doc_3880 | The URL to redirect to after a successful password change. Defaults to 'password_change_done'. | |
doc_3881 | tkinter.simpledialog.askfloat(title, prompt, **kw)
tkinter.simpledialog.askinteger(title, prompt, **kw)
tkinter.simpledialog.askstring(title, prompt, **kw)
The above three functions provide dialogs that prompt the user to enter a value of the desired type.
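A minimal usage sketch (withdrawing the root window avoids showing a bare empty window behind the dialogs): import tkinter as tk
from tkinter import simpledialog
root = tk.Tk()
root.withdraw()  # hide the bare root window
age = simpledialog.askinteger("Profile", "How old are you?")
name = simpledialog.askstring("Profile", "What is your name?")
Each call returns the entered value, or None if the dialog is cancelled. | |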
doc_3882 |
Reduces the tensor data on multiple GPUs across all machines. Each tensor in tensor_list should reside on a separate GPU. Only the GPU of tensor_list[dst_tensor] on the process with rank dst is going to receive the final result. Only the nccl backend is currently supported; tensors should only be GPU tensors. Parameters
tensor_list (List[Tensor]) – Input and output GPU tensors of the collective. The function operates in-place. You also need to make sure that len(tensor_list) is the same for all the distributed processes calling this function.
dst (int) – Destination rank
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op
dst_tensor (int, optional) – Destination tensor rank within tensor_list
Returns
Async work handle if async_op is set to True; None otherwise.
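A hedged sketch for a single process driving two GPUs, assuming init_process_group was already called with the nccl backend in a PyTorch release that still ships this API (setup code omitted): import torch
import torch.distributed as dist
# one tensor per GPU; len(tensor_list) must match across all ranks
tensor_list = [torch.ones(4, device=f'cuda:{i}') for i in range(2)]
dist.reduce_multigpu(tensor_list, dst=0, op=dist.ReduceOp.SUM)
Afterwards only tensor_list[0] on rank 0 holds the reduced result. | |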
doc_3883 |
Information about the memory layout of the array. Notes The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE']), or by using lowercased attribute names (as in a.flags.writeable). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags. The array flags cannot be set arbitrarily: UPDATEIFCOPY can only be set False. WRITEBACKIFCOPY can only be set False. ALIGNED can only be set True if the data is truly aligned. WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays or self.strides[0] == self.itemsize for Fortran-style contiguous arrays is true. Attributes
C_CONTIGUOUS (C)
The data is in a single, C-style contiguous segment. F_CONTIGUOUS (F)
The data is in a single, Fortran-style contiguous segment. OWNDATA (O)
The array owns the memory it uses or borrows it from another object. WRITEABLE (W)
The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. ALIGNED (A)
The data and all elements are aligned appropriately for the hardware. WRITEBACKIFCOPY (X)
This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array so that the base array is updated with the contents of this array. UPDATEIFCOPY (U)
(Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array. FNC
F_CONTIGUOUS and not C_CONTIGUOUS. FORC
F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). BEHAVED (B)
ALIGNED and WRITEABLE. CARRAY (CA)
BEHAVED and C_CONTIGUOUS. FARRAY (FA)
BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. | |
doc_3884 | See Migration guide for more details. tf.compat.v1.feature_column.sequence_categorical_column_with_vocabulary_list
tf.feature_column.sequence_categorical_column_with_vocabulary_list(
key, vocabulary_list, dtype=None, default_value=-1, num_oov_buckets=0
)
Pass this to embedding_column or indicator_column to convert sequence categorical data into dense representation for input to sequence NN, such as RNN. Example: colors = sequence_categorical_column_with_vocabulary_list(
key='colors', vocabulary_list=('R', 'G', 'B', 'Y'),
num_oov_buckets=2)
colors_embedding = embedding_column(colors, dimension=3)
columns = [colors_embedding]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_feature_layer = SequenceFeatures(columns)
sequence_input, sequence_length = sequence_feature_layer(features)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size)
rnn_layer = tf.keras.layers.RNN(rnn_cell)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
Args
key A unique string identifying the input feature.
vocabulary_list An ordered iterable defining the vocabulary. Each feature is mapped to the index of its value (if present) in vocabulary_list. Must be castable to dtype.
dtype The type of features. Only string and integer types are supported. If None, it will be inferred from vocabulary_list.
default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets.
num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [len(vocabulary_list), len(vocabulary_list)+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value.
Returns A SequenceCategoricalColumn.
Raises
ValueError if vocabulary_list is empty, or contains duplicate keys.
ValueError num_oov_buckets is a negative integer.
ValueError num_oov_buckets and default_value are both specified.
ValueError if dtype is not integer or string. | |
doc_3885 | autocomplete_fields is a list of ForeignKey and/or ManyToManyField fields you would like to change to Select2 autocomplete inputs. By default, the admin uses a select-box interface (<select>) for those fields. Sometimes you don’t want to incur the overhead of selecting all the related instances to display in the dropdown. The Select2 input looks similar to the default input but comes with a search feature that loads the options asynchronously. This is faster and more user-friendly if the related model has many instances. You must define search_fields on the related object’s ModelAdmin because the autocomplete search uses it. To avoid unauthorized data disclosure, users must have the view or change permission to the related object in order to use autocomplete. Ordering and pagination of the results are controlled by the related ModelAdmin’s get_ordering() and get_paginator() methods. In the following example, ChoiceAdmin has an autocomplete field for the ForeignKey to the Question. The results are filtered by the question_text field and ordered by the date_created field: class QuestionAdmin(admin.ModelAdmin):
    ordering = ['date_created']
    search_fields = ['question_text']
class ChoiceAdmin(admin.ModelAdmin):
    autocomplete_fields = ['question']
Performance considerations for large datasets Ordering using ModelAdmin.ordering may cause performance problems as sorting on a large queryset will be slow. Also, if your search fields include fields that aren’t indexed by the database, you might encounter poor performance on extremely large tables. For those cases, it’s a good idea to write your own ModelAdmin.get_search_results() implementation using a full-text indexed search. You may also want to change the Paginator on very large tables as the default paginator always performs a count() query. For example, you could override the default implementation of the Paginator.count property.
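A hedged sketch of one way to swap in a cheaper paginator (the constant count is a placeholder tactic, not part of Django's API): from django.core.paginator import Paginator
from django.utils.functional import cached_property
class NoCountPaginator(Paginator):
    @cached_property
    def count(self):
        # avoid the expensive COUNT(*) query on very large tables
        return 9999999
class ChoiceAdmin(admin.ModelAdmin):
    paginator = NoCountPaginator
    show_full_result_count = False
Here paginator and show_full_result_count are standard ModelAdmin options; the fixed count trades accurate page numbers for speed. | |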
doc_3886 |
Set both the edgecolor and the facecolor. Parameters
ccolor or list of rgba tuples
See also
Collection.set_facecolor, Collection.set_edgecolor
For setting the edge or face color individually. | |
doc_3887 |
Return whether the artist is to be rasterized. | |
doc_3888 | Return the number of attributes. | |
doc_3889 | tf.math.special.bessel_i1 Compat aliases for migration See Migration guide for more details. tf.compat.v1.math.bessel_i1, tf.compat.v1.math.special.bessel_i1
tf.math.bessel_i1(
x, name=None
)
Modified Bessel function of order 1. It is preferable to use the numerically stabler function i1e(x) instead.
tf.math.special.bessel_i1([-1., -0.5, 0.5, 1.]).numpy()
array([-0.5651591 , -0.25789431, 0.25789431, 0.5651591 ], dtype=float32)
Args
x A Tensor or SparseTensor. Must be one of the following types: half, float32, float64.
name A name for the operation (optional).
Returns A Tensor or SparseTensor, respectively. Has the same type as x.
Scipy Compatibility Equivalent to scipy.special.i1 | |
doc_3890 |
template_name: 'django/forms/widgets/select.html'
option_template_name: 'django/forms/widgets/select_option.html'
Select widget with options ‘Unknown’, ‘Yes’ and ‘No’ | |
doc_3891 | Command that was used to spawn the child process. | |
doc_3892 | A legacy method for finding a loader for the specified module. Returns a 2-tuple of (loader, portion) where portion is a sequence of file system locations contributing to part of a namespace package. The loader may be None while specifying portion to signify the contribution of the file system locations to a namespace package. An empty list can be used for portion to signify the loader is not part of a namespace package. If loader is None and portion is the empty list then no loader or location for a namespace package were found (i.e. failure to find anything for the module). If find_spec() is defined then backwards-compatible functionality is provided. Changed in version 3.4: Returns (None, []) instead of raising NotImplementedError. Uses find_spec() when available to provide functionality. Deprecated since version 3.4: Use find_spec() instead. | |
doc_3893 | This is a low-level interface to the functionality of warn(), passing in explicitly the message, category, filename and line number, and optionally the module name and the registry (which should be the __warningregistry__ dictionary of the module). The module name defaults to the filename with .py stripped; if no registry is passed, the warning is never suppressed. message must be a string and category a subclass of Warning or message may be a Warning instance, in which case category will be ignored. module_globals, if supplied, should be the global namespace in use by the code for which the warning is issued. (This argument is used to support displaying source for modules found in zipfiles or other non-filesystem import sources). source, if supplied, is the destroyed object which emitted a ResourceWarning. Changed in version 3.6: Add the source parameter. | |
doc_3894 | The epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message. You do not need to set the epilogue to the empty string in order for the Generator to print a newline at the end of the file. | |
doc_3895 |
Returns the currently selected Stream for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns the currently selected Stream for the current device, given by current_device(), if device is None (default).
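A small usage sketch (requires a CUDA-enabled build with at least one device): >>> import torch
>>> stream = torch.cuda.current_stream()    # stream for the current device
>>> stream0 = torch.cuda.current_stream(0)  # stream for device 0
Both calls return torch.cuda.Stream objects. | |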
doc_3896 |
Implements stochastic gradient descent (optionally with momentum). Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning. Parameters
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float) – learning rate
momentum (float, optional) – momentum factor (default: 0)
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
dampening (float, optional) – dampening for momentum (default: 0)
nesterov (bool, optional) – enables Nesterov momentum (default: False) Example >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
Note The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks. Considering the specific case of Momentum, the update can be written as $$\begin{aligned} v_{t+1} & = \mu * v_{t} + g_{t+1}, \\ p_{t+1} & = p_{t} - \text{lr} * v_{t+1}, \end{aligned}$$
where $p$, $g$, $v$ and $\mu$ denote the parameters, gradient, velocity, and momentum respectively. This is in contrast to Sutskever et al. and other frameworks which employ an update of the form $$\begin{aligned} v_{t+1} & = \mu * v_{t} + \text{lr} * g_{t+1}, \\ p_{t+1} & = p_{t} - v_{t+1}. \end{aligned}$$
The Nesterov version is analogously modified.
step(closure=None) [source]
Performs a single optimization step. Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss. | |
doc_3897 | A readonly int that shows the number of plans currently in the cuFFT plan cache. | |
doc_3898 |
Set Index or MultiIndex name. Able to set new names partially and by level. Parameters
names:label or list of label or dict-like for MultiIndex
Name(s) to set. Changed in version 1.3.0.
level:int, label or list of int or label, optional
If the index is a MultiIndex and names is not dict-like, level(s) to set (None for all levels). Otherwise level must be None. Changed in version 1.3.0.
inplace:bool, default False
Modifies the object directly, instead of creating a new Index or MultiIndex. Returns
Index or None
The same type as the caller or None if inplace=True. See also Index.rename
Able to set new names without level. Examples
>>> idx = pd.Index([1, 2, 3, 4])
>>> idx
Int64Index([1, 2, 3, 4], dtype='int64')
>>> idx.set_names('quarter')
Int64Index([1, 2, 3, 4], dtype='int64', name='quarter')
>>> idx = pd.MultiIndex.from_product([['python', 'cobra'],
... [2018, 2019]])
>>> idx
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
)
>>> idx.set_names(['kind', 'year'], inplace=True)
>>> idx
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['kind', 'year'])
>>> idx.set_names('species', level=0)
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['species', 'year'])
When renaming levels with a dict, levels can not be passed.
>>> idx.set_names({'kind': 'snake'})
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['snake', 'year']) | |
doc_3899 |
Return the color of the text. | |