| _id | text | title |
|---|---|---|
doc_30000 | See Migration guide for more details. tf.compat.v1.image.random_contrast
tf.image.random_contrast(
image, lower, upper, seed=None
)
Equivalent to adjust_contrast() but uses a contrast_factor randomly picked in the interval [lower, upper).
Args
image An image tensor with 3 or more dimensions.
lower float. Lower bound for the random contrast factor.
upper float. Upper bound for the random contrast factor.
seed A Python integer. Used to create a random seed. See tf.compat.v1.set_random_seed for behavior. Usage Example:
x = [[[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0],
[10.0, 11.0, 12.0]]]
tf.image.random_contrast(x, 0.2, 0.5)
<tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy=...>
Returns The contrast-adjusted image(s).
Raises
ValueError if upper <= lower or if lower < 0. | |
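Since random_contrast is adjust_contrast with a uniformly drawn factor, the underlying math is easy to sketch: each channel is rescaled about its mean as (x - mean) * contrast_factor + mean. Below is a minimal NumPy stand-in (the function names adjust_contrast and random_contrast here are illustrative, not TensorFlow's kernels):

```python
import numpy as np

def adjust_contrast(image, contrast_factor):
    """Rescale each channel about its mean: (x - mean) * factor + mean."""
    mean = image.mean(axis=(-3, -2), keepdims=True)  # per-channel mean over H, W
    return (image - mean) * contrast_factor + mean

def random_contrast(image, lower, upper, rng=None):
    """Pick a contrast factor uniformly from [lower, upper) and apply it."""
    if upper <= lower or lower < 0:
        raise ValueError("require 0 <= lower < upper")
    rng = rng or np.random.default_rng()
    return adjust_contrast(image, rng.uniform(lower, upper))

x = np.arange(1.0, 13.0).reshape(2, 2, 3)  # a 2x2 image with 3 channels
y = adjust_contrast(x, 0.5)
```

Note that the transformation preserves each channel's mean exactly, so only the spread around the mean changes.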
doc_30001 | This function works like getopt(), except that GNU style scanning mode is used by default. This means that option and non-option arguments may be intermixed. The getopt() function stops processing options as soon as a non-option argument is encountered. If the first character of the option string is '+', or if the environment variable POSIXLY_CORRECT is set, then option processing stops as soon as a non-option argument is encountered. | |
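In Python this is getopt.gnu_getopt; a quick illustration of intermixed arguments and of the '+' prefix that restores getopt()-style stopping:

```python
import getopt

# GNU-style scanning: non-option arguments may appear before options
argv = ["in.txt", "-v", "-o", "out.txt", "extra"]
opts, args = getopt.gnu_getopt(argv, "vo:")
# opts collects the options; args keeps the non-option arguments, in order

# With a leading '+' the scan stops at the first non-option argument,
# mimicking POSIXLY_CORRECT behavior
popts, pargs = getopt.gnu_getopt(argv, "+vo:")
```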
doc_30002 | DefaultStorage provides lazy access to the current default storage system as defined by DEFAULT_FILE_STORAGE. DefaultStorage uses get_storage_class() internally. | |
doc_30003 | Dictionary mapping names accepted by pathconf() and fpathconf() to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system. Availability: Unix. | |
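A guarded sketch of how os.pathconf_names is typically used (the attribute lookup is guarded because the mapping only exists on Unix):

```python
import os

# Availability: Unix. getattr degrades gracefully on other platforms.
names = getattr(os, "pathconf_names", {})
if "PC_NAME_MAX" in names:
    # query the maximum filename length for the current directory
    name_max = os.pathconf(".", "PC_NAME_MAX")
```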
doc_30004 | See Migration guide for more details. tf.compat.v1.raw_ops.Elu
tf.raw_ops.Elu(
features, name=None
)
See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
Args
features A Tensor. Must be one of the following types: half, bfloat16, float32, float64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as features. | |
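The referenced paper defines ELU as x for x > 0 and exp(x) - 1 otherwise (with alpha = 1, as this raw op uses). A NumPy sketch of that formula, not the TensorFlow kernel:

```python
import numpy as np

def elu(x):
    """Exponential Linear Unit: x for x > 0, exp(x) - 1 otherwise."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > 0, x, np.expm1(x))  # expm1 is exp(x) - 1, more accurate near 0

y = elu([-2.0, 0.0, 3.0])
```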
doc_30005 | Compress data (a bytes object), returning the compressed data as a bytes object. See LZMACompressor above for a description of the format, check, preset and filters arguments. | |
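A one-shot round trip with lzma.compress and lzma.decompress:

```python
import lzma

data = b"The quick brown fox jumps over the lazy dog. " * 100
compressed = lzma.compress(data, preset=6)  # default XZ container format
restored = lzma.decompress(compressed)
```

For incremental compression of data that does not fit in memory, use LZMACompressor instead of the one-shot function.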
doc_30006 |
View inputs as arrays with at least two dimensions. Parameters
arys1, arys2, …array_like
One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have two or more dimensions are preserved. Returns
res, res2, …ndarray
An array, or list of arrays, each with a.ndim >= 2. Copies are avoided where possible, and views with two or more dimensions are returned. See also
atleast_1d, atleast_3d
Examples >>> np.atleast_2d(3.0)
array([[3.]])
>>> x = np.arange(3.0)
>>> np.atleast_2d(x)
array([[0., 1., 2.]])
>>> np.atleast_2d(x).base is x
True
>>> np.atleast_2d(1, [1, 2], [[1, 2]])
[array([[1]]), array([[1, 2]]), array([[1, 2]])] | |
doc_30007 |
Draw the text instance. Parameters
gcGraphicsContextBase
The graphics context.
xfloat
The x location of the text in display coords.
yfloat
The y location of the text baseline in display coords.
sstr
The text string.
propmatplotlib.font_manager.FontProperties
The font properties.
anglefloat
The rotation angle in degrees anti-clockwise.
mtextmatplotlib.text.Text
The original text object to be rendered. Notes Note for backend implementers: When you are trying to determine if you have gotten your bounding box right (which is what enables the text layout/alignment to work properly), it helps to change the line in text.py: if 0: bbox_artist(self, renderer)
to if 1, and then the actual bounding box will be plotted along with your text. | |
doc_30008 | Return the names of the attributes. | |
doc_30009 | tf.sparse.concat(
axis, sp_inputs, expand_nonconcat_dims=False, name=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (concat_dim). They will be removed in a future version. Instructions for updating: concat_dim is deprecated, use axis instead. Concatenation is with respect to the dense versions of each sparse input. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number. If expand_nonconcat_dim is False, all inputs' shapes must match, except for the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are allowed to vary among all inputs. The indices, values, and shapes lists must have the same length. If expand_nonconcat_dim is False, then the output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension. If expand_nonconcat_dim is True, then the output shape along the non-concat dimensions will be expanded to the largest among all inputs, and along the concat dimension it is the sum of the inputs' sizes along that dimension. The output elements will be re-sorted to preserve the sort order along increasing dimension number. This op runs in O(M log M) time, where M is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension. For example, if axis = 1 and the inputs are sp_inputs[0]: shape = [2, 3]
[0, 2]: "a"
[1, 0]: "b"
[1, 1]: "c"
sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"
then the output will be shape = [2, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[1, 1]: "c"
Graphically this is equivalent to doing [ a] concat [ d e ] = [ a d e ]
[b c ] [ ] [b c ]
Another example, if 'axis = 1' and the inputs are sp_inputs[0]: shape = [3, 3]
[0, 2]: "a"
[1, 0]: "b"
[2, 1]: "c"
sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"
if expand_nonconcat_dim = False, this will result in an error. But if expand_nonconcat_dim = True, this will result in: shape = [3, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[2, 1]: "c"
Graphically this is equivalent to doing [ a] concat [ d e ] = [ a d e ]
[b ] [ ] [b ]
[ c ] [ c ]
Args
axis Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor.
sp_inputs List of SparseTensor to concatenate.
name A name prefix for the returned tensors (optional).
expand_nonconcat_dim Whether to allow the expansion in the non-concat dimensions. Defaulted to False.
concat_dim The old (deprecated) name for axis.
expand_nonconcat_dims alias for expand_nonconcat_dim
Returns A SparseTensor with the concatenated output.
Raises
TypeError If sp_inputs is not a list of SparseTensor. | |
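The dense-equivalent behavior above can be sketched without TensorFlow on plain coordinate-to-value maps (a toy model of a SparseTensor, not the real op; this sketch also expands the row dimension to the largest input, i.e. the expand_nonconcat_dim=True behavior):

```python
def sparse_concat_axis1(inputs):
    """Concatenate [(shape, {(row, col): value})] maps along axis 1."""
    out, offset = {}, 0
    max_rows = max(shape[0] for shape, _ in inputs)  # expand non-concat dimension
    for shape, entries in inputs:
        for (r, c), v in entries.items():
            out[(r, c + offset)] = v   # shift columns by the widths consumed so far
        offset += shape[1]
    # re-sort by increasing dimension number, as the real op does
    return (max_rows, offset), dict(sorted(out.items()))

# the first example from the text: shapes [2, 3] and [2, 4]
sp0 = ((2, 3), {(0, 2): "a", (1, 0): "b", (1, 1): "c"})
sp1 = ((2, 4), {(0, 1): "d", (0, 2): "e"})
shape, values = sparse_concat_axis1([sp0, sp1])
```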
doc_30010 |
Inverse hyperbolic cosine, element-wise. Parameters
xarray_like
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
arccoshndarray
Array of the same shape as x. This is a scalar if x is a scalar. See also
cosh, arcsinh, sinh, arctanh, tanh
Notes arccosh is a multivalued function: for each x there are infinitely many numbers z such that cosh(z) = x. The convention is to return the z whose imaginary part lies in [-pi, pi] and the real part in [0, inf]. For real-valued input data types, arccosh always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag. For complex-valued input, arccosh is a complex analytical function that has a branch cut [-inf, 1] and is continuous from above on it. References 1
M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. https://personal.math.ubc.ca/~cbm/aands/page_86.htm 2
Wikipedia, “Inverse hyperbolic function”, https://en.wikipedia.org/wiki/Arccosh Examples >>> np.arccosh([np.e, 10.0])
array([ 1.65745445, 2.99322285])
>>> np.arccosh(1)
0.0 | |
doc_30011 | class sklearn.linear_model.ARDRegression(*, n_iter=300, tol=0.001, alpha_1=1e-06, alpha_2=1e-06, lambda_1=1e-06, lambda_2=1e-06, compute_score=False, threshold_lambda=10000.0, fit_intercept=True, normalize=False, copy_X=True, verbose=False) [source]
Bayesian ARD regression. Fit the weights of a regression model, using an ARD prior. The weights of the regression model are assumed to follow Gaussian distributions. Also estimate the parameters lambda (precisions of the distributions of the weights) and alpha (precision of the distribution of the noise). The estimation is done by an iterative procedure (evidence maximization). Read more in the User Guide. Parameters
n_iterint, default=300
Maximum number of iterations.
tolfloat, default=1e-3
Stop the algorithm if w has converged.
alpha_1float, default=1e-6
Hyper-parameter : shape parameter for the Gamma distribution prior over the alpha parameter.
alpha_2float, default=1e-6
Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter.
lambda_1float, default=1e-6
Hyper-parameter : shape parameter for the Gamma distribution prior over the lambda parameter.
lambda_2float, default=1e-6
Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter.
compute_scorebool, default=False
If True, compute the objective function at each step of the model.
threshold_lambdafloat, default=10000.0
Threshold for removing (pruning) weights with high precision from the computation.
fit_interceptbool, default=True
whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
normalizebool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
verbosebool, default=False
Verbose mode when fitting the model. Attributes
coef_array-like of shape (n_features,)
Coefficients of the regression model (mean of distribution)
alpha_float
estimated precision of the noise.
lambda_array-like of shape (n_features,)
estimated precisions of the weights.
sigma_array-like of shape (n_features, n_features)
estimated variance-covariance matrix of the weights
scores_float
if computed, value of the objective function (to be maximized)
intercept_float
Independent term in decision function. Set to 0.0 if fit_intercept = False.
X_offset_float
If normalize=True, offset subtracted for centering data to a zero mean.
X_scale_float
If normalize=True, parameter used to scale data to a unit standard deviation. Notes For an example, see examples/linear_model/plot_ard.py. References D. J. C. MacKay, Bayesian nonlinear modeling for the prediction competition, ASHRAE Transactions, 1994. R. Salakhutdinov, Lecture notes on Statistical Machine Learning, http://www.utstat.toronto.edu/~rsalakhu/sta4273/notes/Lecture2.pdf#page=15 Their beta is our self.alpha_ Their alpha is our self.lambda_ ARD is a little different than the slide: only dimensions/features for which self.lambda_ < self.threshold_lambda are kept and the rest are discarded. Examples >>> from sklearn import linear_model
>>> clf = linear_model.ARDRegression()
>>> clf.fit([[0,0], [1, 1], [2, 2]], [0, 1, 2])
ARDRegression()
>>> clf.predict([[1, 1]])
array([1.])
Methods
fit(X, y) Fit the ARDRegression model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
predict(X[, return_std]) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit the ARDRegression model according to the given training data and parameters. Iterative procedure to maximize the evidence Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values (integers). Will be cast to X’s dtype if necessary Returns
selfreturns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X, return_std=False) [source]
Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Samples.
return_stdbool, default=False
Whether to return the standard deviation of posterior prediction. Returns
y_meanarray-like of shape (n_samples,)
Mean of predictive distribution of query points.
y_stdarray-like of shape (n_samples,)
Standard deviation of predictive distribution of query points.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.linear_model.ARDRegression
Automatic Relevance Determination Regression (ARD) | |
doc_30012 |
Compute the weighted log probabilities for each sample. Parameters
Xarray-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point. Returns
log_probarray, shape (n_samples,)
Log probabilities of each data point in X. | |
doc_30013 | Set the file’s current position. whence argument is optional and defaults to os.SEEK_SET or 0 (absolute file positioning); other values are os.SEEK_CUR or 1 (seek relative to the current position) and os.SEEK_END or 2 (seek relative to the file’s end). | |
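The three whence modes in action on an in-memory file:

```python
import io
import os

f = io.BytesIO(b"0123456789")
f.seek(3)                 # absolute position (default whence = os.SEEK_SET)
f.seek(2, os.SEEK_CUR)    # relative to the current position -> 5
f.seek(-1, os.SEEK_END)   # relative to the end -> 9
last = f.read()           # reads the final byte
```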
doc_30014 | A variant of HTTPPasswordMgrWithDefaultRealm that also has a database of uri -> is_authenticated mappings. Can be used by a BasicAuth handler to determine when to send authentication credentials immediately instead of waiting for a 401 response first. New in version 3.5. | |
doc_30015 |
Test whether the mouse event occurred on the figure. Returns
bool, {} | |
doc_30016 | Try to match a single dict with the supplied arguments. | |
doc_30017 | Write the pickled representation of obj to the open file object given in the constructor. | |
doc_30018 | Name of the module the loader supports. | |
doc_30019 |
Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned.
doc_30020 |
Bases: skimage.viewer.widgets.core.BaseWidget Buttons to save image to io.stack or to a file.
__init__(name='Save to:', default_format='png') [source]
Initialize self. See help(type(self)) for accurate signature.
save_to_file(filename=None) [source]
save_to_stack() [source] | |
doc_30021 | The Storage class provides a standardized API for storing files, along with a set of default behaviors that all other storage systems can inherit or override as necessary. Note When methods return naive datetime objects, the effective timezone used will be the current value of os.environ['TZ']; note that this is usually set from Django’s TIME_ZONE.
delete(name)
Deletes the file referenced by name. If deletion is not supported on the target storage system this will raise NotImplementedError instead.
exists(name)
Returns True if a file referenced by the given name already exists in the storage system, or False if the name is available for a new file.
get_accessed_time(name)
Returns a datetime of the last accessed time of the file. For storage systems unable to return the last accessed time this will raise NotImplementedError. If USE_TZ is True, returns an aware datetime, otherwise returns a naive datetime in the local timezone.
get_alternative_name(file_root, file_ext)
Returns an alternative filename based on the file_root and file_ext parameters: an underscore plus a random 7-character alphanumeric string is appended to the filename before the extension.
get_available_name(name, max_length=None)
Returns a filename based on the name parameter that’s free and available for new content to be written to on the target storage system. The length of the filename will not exceed max_length, if provided. If a free unique filename cannot be found, a SuspiciousFileOperation exception will be raised. If a file with name already exists, get_alternative_name() is called to obtain an alternative name.
get_created_time(name)
Returns a datetime of the creation time of the file. For storage systems unable to return the creation time this will raise NotImplementedError. If USE_TZ is True, returns an aware datetime, otherwise returns a naive datetime in the local timezone.
get_modified_time(name)
Returns a datetime of the last modified time of the file. For storage systems unable to return the last modified time this will raise NotImplementedError. If USE_TZ is True, returns an aware datetime, otherwise returns a naive datetime in the local timezone.
get_valid_name(name)
Returns a filename based on the name parameter that’s suitable for use on the target storage system.
generate_filename(filename)
Validates the filename by calling get_valid_name() and returns a filename to be passed to the save() method. The filename argument may include a path as returned by FileField.upload_to. In that case, the path won’t be passed to get_valid_name() but will be prepended back to the resulting name. The default implementation uses os.path operations. Override this method if that’s not appropriate for your storage.
listdir(path)
Lists the contents of the specified path, returning a 2-tuple of lists; the first item being directories, the second item being files. For storage systems that aren’t able to provide such a listing, this will raise a NotImplementedError instead.
open(name, mode='rb')
Opens the file given by name. Note that although the returned file is guaranteed to be a File object, it might actually be some subclass. In the case of remote file storage this means that reading/writing could be quite slow, so be warned.
path(name)
The local filesystem path where the file can be opened using Python’s standard open(). For storage systems that aren’t accessible from the local filesystem, this will raise NotImplementedError instead.
save(name, content, max_length=None)
Saves a new file using the storage system, preferably with the name specified. If there already exists a file with this name, the storage system may modify the filename as necessary to get a unique name. The actual name of the stored file will be returned. The max_length argument is passed along to get_available_name(). The content argument must be an instance of django.core.files.File or a file-like object that can be wrapped in File.
size(name)
Returns the total size, in bytes, of the file referenced by name. For storage systems that aren’t able to return the file size this will raise NotImplementedError instead.
url(name)
Returns the URL where the contents of the file referenced by name can be accessed. For storage systems that don’t support access by URL this will raise NotImplementedError instead. | |
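The exists / get_available_name / get_alternative_name contract can be sketched with a toy in-memory storage (illustrative only, not Django's implementation; the suffix follows the documented underscore plus 7 random alphanumeric characters):

```python
import os.path
import secrets
import string

class ToyStorage:
    """In-memory stand-in that mimics Django's name-collision contract."""
    def __init__(self):
        self.files = set()

    def exists(self, name):
        return name in self.files

    def get_alternative_name(self, file_root, file_ext):
        # underscore + random 7-character alphanumeric string before the extension
        suffix = "".join(secrets.choice(string.ascii_lowercase + string.digits)
                         for _ in range(7))
        return f"{file_root}_{suffix}{file_ext}"

    def get_available_name(self, name):
        file_root, file_ext = os.path.splitext(name)
        while self.exists(name):
            name = self.get_alternative_name(file_root, file_ext)
        return name

    def save(self, name, content):
        name = self.get_available_name(name)
        self.files.add(name)
        return name   # the actual stored name, as Storage.save does

s = ToyStorage()
first = s.save("report.txt", b"v1")   # name is free -> kept as-is
second = s.save("report.txt", b"v2")  # collision -> alternative name
```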
doc_30022 |
Return the group id. | |
doc_30023 | A record of the the module’s import-system-related state. Expected to be an instance of importlib.machinery.ModuleSpec. New in version 3.4. | |
doc_30024 |
Computes the inverse of rfft. This function computes the inverse of the one-dimensional n-point discrete Fourier Transform of real input computed by rfft. In other words, irfft(rfft(a), len(a)) == a to within numerical accuracy. (See Notes below for why len(a) is necessary here.) The input is expected to be in the form returned by rfft, i.e. the real zero-frequency term followed by the complex positive frequency terms in order of increasing frequency. Since the discrete Fourier Transform of real input is Hermitian-symmetric, the negative frequency terms are taken to be the complex conjugates of the corresponding positive frequency terms. Parameters
aarray_like
The input array.
nint, optional
Length of the transformed axis of the output. For n output points, n//2+1 input points are necessary. If the input is longer than this, it is cropped. If it is shorter than this, it is padded with zeros. If n is not given, it is taken to be 2*(m-1) where m is the length of the input along the axis specified by axis.
axisint, optional
Axis over which to compute the inverse FFT. If not given, the last axis is used.
norm{“backward”, “ortho”, “forward”}, optional
New in version 1.10.0. Normalization mode (see numpy.fft). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns
outndarray
The truncated or zero-padded input, transformed along the axis indicated by axis, or the last one if axis is not specified. The length of the transformed axis is n, or, if n is not given, 2*(m-1) where m is the length of the transformed axis of the input. To get an odd number of output points, n must be specified. Raises
IndexError
If axis is not a valid axis of a. See also numpy.fft
For definition of the DFT and conventions used. rfft
The one-dimensional FFT of real input, of which irfft is inverse. fft
The one-dimensional FFT. irfft2
The inverse of the two-dimensional FFT of real input. irfftn
The inverse of the n-dimensional FFT of real input. Notes Returns the real valued n-point inverse discrete Fourier transform of a, where a contains the non-negative frequency terms of a Hermitian-symmetric sequence. n is the length of the result, not the input. If you specify an n such that a must be zero-padded or truncated, the extra/removed values will be added/removed at high frequencies. One can thus resample a series to m points via Fourier interpolation by: a_resamp = irfft(rfft(a), m). The correct interpretation of the hermitian input depends on the length of the original data, as given by n. This is because each input shape could correspond to either an odd or even length signal. By default, irfft assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. By Hermitian symmetry, the value is thus treated as purely real. To avoid losing information, the correct length of the real input must be given. Examples >>> np.fft.ifft([1, -1j, -1, 1j])
array([0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]) # may vary
>>> np.fft.irfft([1, -1j, -1])
array([0., 1., 0., 0.])
Notice how the last term in the input to the ordinary ifft is the complex conjugate of the second term, and the output has zero imaginary part everywhere. When calling irfft, the negative frequencies are not specified, and the output array is purely real. | |
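The documented identity irfft(rfft(a), len(a)) == a, and the n//2 + 1 output length of rfft, can be checked directly:

```python
import numpy as np

a = np.array([0.5, 1.0, -2.0, 3.0, 0.25, -1.5])   # even length n = 6
spec = np.fft.rfft(a)          # n//2 + 1 = 4 complex terms
back = np.fft.irfft(spec, len(a))  # pass n explicitly: the Hermitian input
                                   # alone cannot distinguish odd from even n
```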
doc_30025 | tf.compat.v1.get_default_graph()
The returned graph will be the innermost graph on which a Graph.as_default() context has been entered, or a global default graph if none has been explicitly created.
Note: The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.
Returns The default Graph being used in the current thread. | |
doc_30026 | tf.compat.v1.ones_like(
tensor, dtype=None, name=None, optimize=True
)
See also tf.ones. Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to 1. Optionally, you can specify a new type (dtype) for the returned tensor. For example: tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.ones_like(tensor) # [[1, 1, 1], [1, 1, 1]]
Args
tensor A Tensor.
dtype A type for the returned Tensor. Must be float32, float64, int8, uint8, int16, uint16, int32, int64, complex64, complex128 or bool.
name A name for the operation (optional).
optimize if true, attempt to statically determine the shape of 'tensor' and encode it as a constant.
Returns A Tensor with all elements set to 1. | |
doc_30027 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_30028 |
Remove axes of length one from a. Refer to numpy.squeeze for full documentation. See also numpy.squeeze
equivalent function | |
doc_30029 | A SelectorKey is a namedtuple used to associate a file object to its underlying file descriptor, selected event mask and attached data. It is returned by several BaseSelector methods.
fileobj
File object registered.
fd
Underlying file descriptor.
events
Events that must be waited for on this file object.
data
Optional opaque data associated to this file object: for example, this could be used to store a per-client session ID. | |
doc_30030 | The name and value are returned unmodified. | |
doc_30031 | Note: In TensorFlow 2.0, AutoGraph is automatically applied when using tf.function. This module contains lower-level APIs for advanced use.
For more information, see the AutoGraph guide. By equivalent graph code we mean code that generates a TensorFlow graph when run. The generated graph has the same effects as the original code when executed (for example with tf.function or tf.compat.v1.Session.run). In other words, using AutoGraph can be thought of as running Python in TensorFlow. Modules experimental module: Public API for tf.autograph.experimental namespace. Functions set_verbosity(...): Sets the AutoGraph verbosity level. to_code(...): Returns the source code generated by AutoGraph, as a string. to_graph(...): Converts a Python entity into a TensorFlow graph. trace(...): Traces argument information at compilation time. | |
doc_30032 | Unset the flag(s) specified by flag without changing other flags. To remove more than one flag at a time, flag maybe a string of more than one character. | |
doc_30033 | See Migration guide for more details. tf.compat.v1.raw_ops.LoopCond
tf.raw_ops.LoopCond(
input, name=None
)
This operator represents the loop termination condition used by the "pivot" switches of a loop.
Args
input A Tensor of type bool. A boolean scalar, representing the branch predicate of the Switch op.
name A name for the operation (optional).
Returns A Tensor of type bool. | |
doc_30034 | (os.POSIX_SPAWN_OPEN, fd, path, flags, mode) Performs os.dup2(os.open(path, flags, mode), fd). | |
doc_30035 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatrixMul
tf.raw_ops.SparseMatrixMul(
a, b, name=None
)
Returns a sparse matrix. The dense tensor b may be a scalar; otherwise a must be a rank-3 SparseMatrix, b must be shaped [batch_size, 1, 1], and the multiply operation broadcasts.
Note: even if b is zero, the sparsity structure of the output does not change.
Args
a A Tensor of type variant. A CSRSparseMatrix.
b A Tensor. A dense tensor.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_30036 | Return a list of 2-tuples containing all the message’s field headers and values. | |
doc_30037 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_30038 | bytearray.upper()
Return a copy of the sequence with all the lowercase ASCII characters converted to their corresponding uppercase counterpart. For example: >>> b'Hello World'.upper()
b'HELLO WORLD'
Lowercase ASCII characters are those byte values in the sequence b'abcdefghijklmnopqrstuvwxyz'. Uppercase ASCII characters are those byte values in the sequence b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'. Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made. | |
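The note about not operating in place is easy to verify: the original bytearray is untouched and a new object is returned.

```python
ba = bytearray(b"hello")
up = ba.upper()   # returns a NEW bytearray; ba is unchanged
```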
doc_30039 |
Set the locations of the ticks. This method is called before computing the tick labels because some formatters need to know all tick locations to do so. | |
doc_30040 |
Other Members
CXX11_ABI_FLAG 0
MONOLITHIC_BUILD 0 | |
doc_30041 | Process this response as WSGI application. Parameters
environ (WSGIEnvironment) – the WSGI environment.
start_response (StartResponse) – the response callable provided by the WSGI server. Returns
an application iterator Return type
Iterable[bytes] | |
doc_30042 |
Parameters
axAxes
The parent axes for the widget.
onselectfunction
A callback function that is called after a selection is completed. It must have the signature: def onselect(eclick: MouseEvent, erelease: MouseEvent)
where eclick and erelease are the mouse click and release MouseEvents that start and complete the selection.
drawtype{“box”, “line”, “none”}, default: “box”
Whether to draw the full rectangle box, the diagonal line of the rectangle, or nothing at all.
minspanxfloat, default: 0
Selections with an x-span less than minspanx are ignored.
minspanyfloat, default: 0
Selections with a y-span less than minspany are ignored.
useblitbool, default: False
Whether to use blitting for faster drawing (if supported by the backend).
linepropsdict, optional
Properties with which the line is drawn, if drawtype == "line". Default: dict(color="black", linestyle="-", linewidth=2, alpha=0.5)
rectpropsdict, optional
Properties with which the rectangle is drawn, if drawtype == "box". Default: dict(facecolor="red", edgecolor="black", alpha=0.2, fill=True)
spancoords{“data”, “pixels”}, default: “data”
Whether to interpret minspanx and minspany in data or in pixel coordinates.
buttonMouseButton, list of MouseButton, default: all buttons
Button(s) that trigger rectangle selection.
maxdistfloat, default: 10
Distance in pixels within which the interactive tool handles can be activated.
marker_propsdict
Properties with which the interactive handles are drawn. Currently not implemented and ignored.
interactivebool, default: False
Whether to draw a set of handles that allow interaction with the widget after it is drawn.
state_modifier_keysdict, optional
Keyboard modifiers which affect the widget’s behavior. Values amend the defaults. “move”: Move the existing shape, default: no modifier. “clear”: Clear the current shape, default: “escape”. “square”: Makes the shape square, default: “shift”. “center”: Make the initial point the center of the shape, default: “ctrl”. “square” and “center” can be combined. | |
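A minimal usage sketch, assuming Matplotlib is installed (the non-interactive Agg backend is used so no window is needed). Some of the parameters listed above (e.g. drawtype, lineprops, rectprops, maxdist) have been deprecated or removed in newer Matplotlib releases, so the sketch sticks to long-stable ones:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

def onselect(eclick, erelease):
    # eclick/erelease are the press and release MouseEvents.
    print(f"({eclick.xdata:.2f}, {eclick.ydata:.2f}) -> "
          f"({erelease.xdata:.2f}, {erelease.ydata:.2f})")

fig, ax = plt.subplots()
ax.plot(range(10))
selector = RectangleSelector(ax, onselect,
                             useblit=False,
                             minspanx=0.1, minspany=0.1,
                             interactive=True)
```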
doc_30043 |
Set the center of the ellipse. Parameters
xy : (float, float) | |
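A minimal sketch, assuming this refers to matplotlib.patches.Ellipse and that Matplotlib is installed:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
from matplotlib.patches import Ellipse

# Create an ellipse, then move its center after construction.
e = Ellipse(xy=(0, 0), width=2, height=1)
e.set_center((3, 4))
assert tuple(e.get_center()) == (3, 4)
```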
doc_30044 | get the string name from an event id event_name(type) -> string Returns a string representing the name (in CapWords style) of the given event type. "UserEvent" is returned for all values in the user event id range. "Unknown" is returned when the event type does not exist. | |
doc_30045 | tf.compat.v1.train.RMSPropOptimizer(
learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False,
centered=False, name='RMSProp'
)
References: Coursera slide 29: Hinton, 2012 (pdf)
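As a rough sketch of the update rule (a hypothetical NumPy re-implementation, not TensorFlow's code), one uncentered RMSProp step using the learning_rate, decay, momentum, and epsilon arguments below looks like this:

```python
import numpy as np

# Hypothetical sketch of a single RMSProp step (uncentered variant).
# ms holds a decayed moving average of squared gradients; mom is the
# momentum buffer; var is the variable being updated.
def rmsprop_step(var, grad, ms, mom,
                 learning_rate=0.001, decay=0.9,
                 momentum=0.0, epsilon=1e-10):
    ms = decay * ms + (1.0 - decay) * grad ** 2
    mom = momentum * mom + learning_rate * grad / np.sqrt(ms + epsilon)
    return var - mom, ms, mom

var = np.array([1.0, -2.0])
ms = np.zeros_like(var)
mom = np.zeros_like(var)
grad = np.array([0.5, -0.5])
var, ms, mom = rmsprop_step(var, grad, ms, mom, learning_rate=0.1)
```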
Args
learning_rate A Tensor or a floating point value. The learning rate.
decay Discounting factor for the history/coming gradient
momentum A scalar tensor.
epsilon Small value to avoid zero denominator.
use_locking If True use locks for update operation.
centered If True, gradients are normalized by the estimated variance of the gradient; if False, by the uncentered second moment. Setting this to True may help with training, but is slightly more expensive in terms of computation and memory. Defaults to False.
name Optional name prefix for the operations created when applying gradients. Defaults to "RMSProp". Methods apply_gradients View source
apply_gradients(
grads_and_vars, global_step=None, name=None
)
Apply gradients to variables. This is the second part of minimize(). It returns an Operation that applies gradients.
Args
grads_and_vars List of (gradient, variable) pairs as returned by compute_gradients().
global_step Optional Variable to increment by one after the variables have been updated.
name Optional name for the returned operation. Default to the name passed to the Optimizer constructor.
Returns An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step.
Raises
TypeError If grads_and_vars is malformed.
ValueError If none of the variables have gradients.
RuntimeError If you should use _distributed_apply() instead. compute_gradients View source
compute_gradients(
loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None,
colocate_gradients_with_ops=False, grad_loss=None
)
Compute gradients of loss for the variables in var_list. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable.
Args
loss A Tensor containing the value to minimize or a callable taking no arguments which returns the value to minimize. When eager execution is enabled it must be a callable.
var_list Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops If True, try colocating gradients with the corresponding op.
grad_loss Optional. A Tensor holding the gradient computed for loss.
Returns A list of (gradient, variable) pairs. Variable is always present, but gradient can be None.
Raises
TypeError If var_list contains anything else than Variable objects.
ValueError If some arguments are invalid.
RuntimeError If called with eager execution enabled and loss is not callable. Eager Compatibility When eager execution is enabled, gate_gradients, aggregation_method, and colocate_gradients_with_ops are ignored. get_name View source
get_name()
get_slot View source
get_slot(
var, name
)
Return a slot named name created for var by the Optimizer. Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them. Use get_slot_names() to get the list of slot names created by the Optimizer.
Args
var A variable passed to minimize() or apply_gradients().
name A string.
Returns The Variable for the slot if it was created, None otherwise.
get_slot_names View source
get_slot_names()
Return a list of the names of slots created by the Optimizer. See get_slot().
Returns A list of strings.
minimize View source
minimize(
loss, global_step=None, var_list=None, gate_gradients=GATE_OP,
aggregation_method=None, colocate_gradients_with_ops=False, name=None,
grad_loss=None
)
Add operations to minimize loss by updating var_list. This method simply combines calls compute_gradients() and apply_gradients(). If you want to process the gradient before applying them call compute_gradients() and apply_gradients() explicitly instead of using this function.
Args
loss A Tensor containing the value to minimize.
global_step Optional Variable to increment by one after the variables have been updated.
var_list Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops If True, try colocating gradients with the corresponding op.
name Optional name for the returned operation.
grad_loss Optional. A Tensor holding the gradient computed for loss.
Returns An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.
Raises
ValueError If some of the variables are not Variable objects. Eager Compatibility When eager execution is enabled, loss should be a Python function that takes no arguments and computes the value to be minimized. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled. variables View source
variables()
A list of variables which encode the current state of Optimizer. Includes slot variables and additional global variables created by the optimizer in the current default graph.
Returns A list of variables.
Class Variables
GATE_GRAPH 2
GATE_NONE 0
GATE_OP 1 | |
doc_30046 |
An instance of RcParams for handling default Matplotlib values. | |
doc_30047 | Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of torch.norm(input[:, None] - input, dim=2, p=p). This function will be faster if the rows are contiguous. If input has shape N×MN \times M then the output will have shape 12N(N−1)\frac{1}{2} N (N - 1) . This function is equivalent to scipy.spatial.distance.pdist(input, ‘minkowski’, p=p) if p∈(0,∞)p \in (0, \infty) . When p=0p = 0 it is equivalent to scipy.spatial.distance.pdist(input, ‘hamming’) * M. When p=∞p = \infty , the closest scipy function is scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x - y).max()). Parameters
input – input tensor of shape N×MN \times M .
p – p value for the p-norm distance to calculate between each vector pair ∈[0,∞]\in [0, \infty] . | |
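The same quantity can be sketched in NumPy rather than PyTorch (a hypothetical helper mirroring torch.pdist's output order, i.e. the strictly upper triangle in row-major order):

```python
import numpy as np

# p-norm distances between every pair of rows, upper triangle only.
def pdist_np(x, p=2.0):
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]       # (N, N, M) pairwise diffs
    d = np.linalg.norm(diffs, ord=p, axis=2)    # (N, N) distance matrix
    iu = np.triu_indices(n, k=1)                # strictly upper triangle
    return d[iu]                                # shape (N*(N-1)/2,)

x = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 1.0]])
out = pdist_np(x)   # pairs (0,1), (0,2), (1,2)
```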
doc_30048 |
Set the pick radius used for containment tests. Parameters
pr : float
Pick radius, in points. | |
doc_30049 | Returns whether this LinearRing is counterclockwise. | |
doc_30050 | See Migration guide for more details. tf.compat.v1.raw_ops.ImageProjectiveTransformV3
tf.raw_ops.ImageProjectiveTransformV3(
images, transforms, output_shape, fill_value, interpolation,
fill_mode='CONSTANT', name=None
)
If one row of transforms is [a0, a1, a2, b0, b1, b2, c0, c1], then it maps the output point (x, y) to a transformed input point (x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k), where k = c0 x + c1 y + 1. If the transformed point lies outside of the input image, the output pixel is set to fill_value.
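The coordinate mapping above can be checked with a small NumPy sketch (a hypothetical helper, independent of the TensorFlow op):

```python
import numpy as np

# Map one output point (x, y) to its input point (x', y') using a
# flattened projective transform [a0, a1, a2, b0, b1, b2, c0, c1].
def project_point(transform, x, y):
    a0, a1, a2, b0, b1, b2, c0, c1 = transform
    k = c0 * x + c1 * y + 1.0
    return (a0 * x + a1 * y + a2) / k, (b0 * x + b1 * y + b2) / k

# The identity transform maps every point to itself.
identity = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
assert project_point(identity, 5.0, 7.0) == (5.0, 7.0)

# A transform with a2 = 2, b2 = 3 offsets the sampled input point.
shift = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 3.0, 0.0, 0.0])
assert project_point(shift, 5.0, 7.0) == (7.0, 10.0)
```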
Args
images A Tensor. Must be one of the following types: uint8, int32, int64, half, float32, float64. 4-D with shape [batch, height, width, channels].
transforms A Tensor of type float32. 2-D Tensor, [batch, 8] or [1, 8] matrix, where each row corresponds to a 3 x 3 projective transformation matrix, with the last entry assumed to be 1. If there is one row, the same transformation will be applied to all images.
output_shape A Tensor of type int32. 1-D Tensor [new_height, new_width].
fill_value A Tensor of type float32. float, the value to be filled when fill_mode is "CONSTANT".
interpolation A string. Interpolation method, "NEAREST" or "BILINEAR".
fill_mode An optional string. Defaults to "CONSTANT". Fill mode, "REFLECT", "WRAP", or "CONSTANT".
name A name for the operation (optional).
Returns A Tensor. Has the same type as images. | |
doc_30051 | Profile func(*args, **kwargs) | |
doc_30052 | Creates a test client for this application. For information about unit testing head over to Testing Flask Applications. Note that if you are testing for assertions or exceptions in your application code, you must set app.testing = True in order for the exceptions to propagate to the test client. Otherwise, the exception will be handled by the application (not visible to the test client) and the only indication of an AssertionError or other exception will be a 500 status code response to the test client. See the testing attribute. For example: app.testing = True
client = app.test_client()
The test client can be used in a with block to defer the closing down of the context until the end of the with block. This is useful if you want to access the context locals for testing: with app.test_client() as c:
    rv = c.get('/?vodka=42')
    assert request.args['vodka'] == '42'
Additionally, you may pass optional keyword arguments that will then be passed to the application’s test_client_class constructor. For example: from flask.testing import FlaskClient
class CustomClient(FlaskClient):
    def __init__(self, *args, **kwargs):
        self._authentication = kwargs.pop("authentication")
        super(CustomClient, self).__init__(*args, **kwargs)
app.test_client_class = CustomClient
client = app.test_client(authentication='Basic ....')
See FlaskClient for more information. Changelog Changed in version 0.11: Added **kwargs to support passing additional keyword arguments to the constructor of test_client_class. New in version 0.7: The use_cookies parameter was added as well as the ability to override the client to be used by setting the test_client_class attribute. Changed in version 0.4: added support for with block usage for the client. Parameters
use_cookies (bool) –
kwargs (Any) – Return type
FlaskClient | |
doc_30053 |
Return a new matrix of given shape and type, without initializing entries. Parameters
shape : int or tuple of int
Shape of the empty matrix.
dtype : data-type, optional
Desired output data-type.
order : {‘C’, ‘F’}, optional
Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. See also
empty_like, zeros
Notes empty, unlike zeros, does not set the matrix values to zero, and may therefore be marginally faster. On the other hand, it requires the user to manually set all the values in the array, and should be used with caution. Examples >>> import numpy.matlib
>>> np.matlib.empty((2, 2)) # filled with random data
matrix([[ 6.76425276e-320, 9.79033856e-307], # random
[ 7.39337286e-309, 3.22135945e-309]])
>>> np.matlib.empty((2, 2), dtype=int)
matrix([[ 6600475, 0], # random
[ 6586976, 22740995]]) | |
doc_30054 | tf.compat.v1.summary.tensor_summary(
name, tensor, summary_description=None, collections=None, summary_metadata=None,
family=None, display_name=None
)
Args
name A name for the generated node. If display_name is not set, it will also serve as the tag name in TensorBoard. (In that case, the tag name will inherit tf name scopes.)
tensor A tensor of any type and shape to serialize.
summary_description A long description of the summary sequence. Markdown is supported.
collections Optional list of graph collections keys. The new summary op is added to these collections. Defaults to [GraphKeys.SUMMARIES].
summary_metadata Optional SummaryMetadata proto (which describes which plugins may use the summary value).
family Optional; if provided, used as the prefix of the summary tag, which controls the name used for display on TensorBoard when display_name is not set.
display_name A string used to name this data in TensorBoard. If this is not set, then the node name will be used instead.
Returns A scalar Tensor of type string. The serialized Summary protocol buffer. | |
doc_30055 |
Given a math expression, renders it in a closely-clipped bounding box to an image file. Parameters
s : str
A math expression. The math portion must be enclosed in dollar signs.
filename_or_obj : str or path-like or file-like
Where to write the image data.
prop : FontProperties, optional
The size and style of the text.
dpi : float, optional
The output dpi. If not set, the dpi is determined as for Figure.savefig.
format : str, optional
The output format, e.g., 'svg', 'pdf', 'ps' or 'png'. If not set, the format is determined as for Figure.savefig. | |
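A minimal usage sketch, assuming this describes matplotlib.mathtext.math_to_image and that Matplotlib is installed; the image is written to an in-memory buffer rather than a file:

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
from matplotlib import mathtext

# Render a math expression to an in-memory PNG.
buf = io.BytesIO()
mathtext.math_to_image(r"$\frac{a}{b} = c$", buf, dpi=120, format="png")
png_bytes = buf.getvalue()
assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n"   # PNG magic number
```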
doc_30056 | A string of bytes read before the end of stream was reached. | |
doc_30057 |
[Deprecated] Extract code from a piece of text, which contains either Python code or doctests. Notes Deprecated since version 3.5. | |
doc_30058 | Set a Cookie if policy says it’s OK to do so. | |
doc_30059 | By default, Variance returns the population variance. However, if sample=True, the return value will be the sample variance. | |
doc_30060 | Send an object to the other end of the connection which should be read using recv(). The object must be picklable. Very large pickles (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception. | |
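A minimal sketch of send()/recv() over a multiprocessing Pipe (both connection ends used in one process for brevity; real code would pass one end to a child process):

```python
from multiprocessing import Pipe

parent_end, child_end = Pipe()
parent_end.send({"answer": 42, "items": [1, 2, 3]})
received = child_end.recv()
assert received == {"answer": 42, "items": [1, 2, 3]}

# Unpicklable objects (e.g. a lambda) raise at send() time.
try:
    parent_end.send(lambda: None)
except Exception:
    pass
else:
    raise AssertionError("expected pickling to fail")
```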
doc_30061 | Return True if the current context references a file. | |
doc_30062 |
Set the artist transform. Parameters
tTransform | |
doc_30063 |
Upsample and then smooth image. Parameters
image : ndarray
Input image.
upscale : float, optional
Upscale factor.
sigma : float, optional
Sigma for Gaussian filter. Default is 2 * upscale / 6.0 which corresponds to a filter mask twice the size of the scale factor that covers more than 99% of the Gaussian distribution.
order : int, optional
Order of splines used in interpolation of upsampling. See skimage.transform.warp for detail.
mode : {‘reflect’, ‘constant’, ‘edge’, ‘symmetric’, ‘wrap’}, optional
The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to ‘constant’.
cval : float, optional
Value to fill past edges of input if mode is ‘constant’.
multichannel : bool, optional
Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension.
preserve_range : bool, optional
Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html Returns
out : array
Upsampled and smoothed float image. References
1
http://persci.mit.edu/pub_pdfs/pyramid83.pdf | |
doc_30064 |
Compute the ANOVA F-value for the provided sample. Read more in the User Guide. Parameters
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
The set of regressors that will be tested sequentially.
y : array of shape (n_samples,)
The target vector (class labels for each sample). Returns
F : array, shape = [n_features,]
The set of F values.
pval : array, shape = [n_features,]
The set of p-values. See also
chi2
Chi-squared stats of non-negative features for classification tasks.
f_regression
F-value between label/feature for regression tasks. | |
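The statistic can be sketched in plain NumPy (a hypothetical re-implementation of one-way ANOVA per feature, not scikit-learn's code):

```python
import numpy as np

# One-way ANOVA F-value for each feature (column of X), comparing
# between-group variance to within-group variance.
def anova_f(X, y):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    n, k = X.shape[0], len(classes)
    grand_mean = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    ms_between = ss_between / (k - 1)   # between-group mean square
    ms_within = ss_within / (n - k)     # within-group mean square
    return ms_between / ms_within

X = [[1.0], [2.0], [4.0], [5.0]]
y = ["a", "a", "b", "b"]
F = anova_f(X, y)
```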
doc_30065 |
Return group values at the given quantile, a la numpy.percentile. Parameters
q : float or array-like, default 0.5 (50% quantile)
Value(s) between 0 and 1 providing the quantile(s) to compute.
interpolation : {‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}
Method to use when the desired quantile falls between two points. Returns
Series or DataFrame
Return type determined by caller of GroupBy object. See also Series.quantile
Similar method for Series. DataFrame.quantile
Similar method for DataFrame. numpy.percentile
NumPy method to compute qth percentile. Examples
>>> df = pd.DataFrame([
... ['a', 1], ['a', 2], ['a', 3],
... ['b', 1], ['b', 3], ['b', 5]
... ], columns=['key', 'val'])
>>> df.groupby('key').quantile()
val
key
a 2.0
b 3.0 | |
doc_30066 | Retry the request with authentication information, if available. | |
doc_30067 |
Initialize self. See help(type(self)) for accurate signature. | |
doc_30068 |
Returns a boolean array where two arrays are element-wise equal within a tolerance. The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the absolute difference atol are added together to compare against the absolute difference between a and b. Warning The default atol is not appropriate for comparing numbers that are much smaller than one (see Notes). Parameters
a, b : array_like
Input arrays to compare.
rtol : float
The relative tolerance parameter (see Notes).
atol : float
The absolute tolerance parameter (see Notes).
equal_nan : bool
Whether to compare NaN’s as equal. If True, NaN’s in a will be considered equal to NaN’s in b in the output array. Returns
y : array_like
Returns a boolean array of where a and b are equal within the given tolerance. If both a and b are scalars, returns a single boolean value. See also allclose
math.isclose
Notes New in version 1.7.0. For finite values, isclose uses the following equation to test whether two floating point values are equivalent. absolute(a - b) <= (atol + rtol * absolute(b)) Unlike the built-in math.isclose, the above equation is not symmetric in a and b – it assumes b is the reference value – so that isclose(a, b) might be different from isclose(b, a). Furthermore, the default value of atol is not zero, and is used to determine what small values should be considered close to zero. The default value is appropriate for expected values of order unity: if the expected values are significantly smaller than one, it can result in false positives. atol should be carefully selected for the use case at hand. A zero value for atol will result in False if either a or b is zero. isclose is not defined for non-numeric data types. bool is considered a numeric data-type for this purpose. Examples >>> np.isclose([1e10,1e-7], [1.00001e10,1e-8])
array([ True, False])
>>> np.isclose([1e10,1e-8], [1.00001e10,1e-9])
array([ True, True])
>>> np.isclose([1e10,1e-8], [1.0001e10,1e-9])
array([False, True])
>>> np.isclose([1.0, np.nan], [1.0, np.nan])
array([ True, False])
>>> np.isclose([1.0, np.nan], [1.0, np.nan], equal_nan=True)
array([ True, True])
>>> np.isclose([1e-8, 1e-7], [0.0, 0.0])
array([ True, False])
>>> np.isclose([1e-100, 1e-7], [0.0, 0.0], atol=0.0)
array([False, False])
>>> np.isclose([1e-10, 1e-10], [1e-20, 0.0])
array([ True, True])
>>> np.isclose([1e-10, 1e-10], [1e-20, 0.999999e-10], atol=0.0)
array([False, True]) | |
doc_30069 | Computes the 2 dimensional inverse discrete Fourier transform of input. Equivalent to ifftn() but IFFTs only the last two dimensions by default. Parameters
input (Tensor) – the input tensor
s (Tuple[int], optional) – Signal size in the transformed dimensions. If given, each dimension dim[i] will either be zero-padded or trimmed to the length s[i] before computing the IFFT. If a length -1 is specified, no padding is done in that dimension. Default: s = [input.size(d) for d in dim]
dim (Tuple[int], optional) – Dimensions to be transformed. Default: last two dimensions.
norm (str, optional) –
Normalization mode. For the backward transform (ifft2()), these correspond to:
"forward" - no normalization
"backward" - normalize by 1/n
"ortho" - normalize by 1/sqrt(n) (making the IFFT orthonormal) Where n = prod(s) is the logical IFFT size. Calling the forward transform (fft2()) with the same normalization mode will apply an overall normalization of 1/n between the two transforms. This is required to make ifft2() the exact inverse. Default is "backward" (normalize by 1/n). Example >>> x = torch.rand(10, 10, dtype=torch.complex64)
>>> ifft2 = torch.fft.ifft2(x)
The discrete Fourier transform is separable, so ifft2() here is equivalent to two one-dimensional ifft() calls: >>> two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)
>>> torch.allclose(ifft2, two_iffts) | |
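The separability claim can be illustrated with NumPy's FFT module, which follows the same default "backward" normalization convention (shown in NumPy so the sketch needs no PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))

# The 2-D inverse FFT is separable into two 1-D inverse FFTs.
full = np.fft.ifft2(x)
two_iffts = np.fft.ifft(np.fft.ifft(x, axis=0), axis=1)
assert np.allclose(full, two_iffts)

# Round trip: fft2 followed by ifft2 recovers the input.
assert np.allclose(np.fft.ifft2(np.fft.fft2(x)), x)
```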
doc_30070 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | |
doc_30071 | Traceback where the memory block was allocated, Traceback instance. | |
doc_30072 | REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': [
        'rest_framework.renderers.JSONRenderer',
        'rest_framework.renderers.BrowsableAPIRenderer',
    ]
}
You can also set the renderers used for an individual view, or viewset, using the APIView class-based views. from django.contrib.auth.models import User
from rest_framework.renderers import JSONRenderer
from rest_framework.response import Response
from rest_framework.views import APIView
class UserCountView(APIView):
"""
A view that returns the count of active users in JSON.
"""
renderer_classes = [JSONRenderer]
def get(self, request, format=None):
user_count = User.objects.filter(active=True).count()
content = {'user_count': user_count}
return Response(content)
Or, if you're using the @api_view decorator with function based views. @api_view(['GET'])
@renderer_classes([JSONRenderer])
def user_count_view(request, format=None):
    """
    A view that returns the count of active users in JSON.
    """
    user_count = User.objects.filter(active=True).count()
    content = {'user_count': user_count}
    return Response(content)
Ordering of renderer classes It's important when specifying the renderer classes for your API to think about what priority you want to assign to each media type. If a client underspecifies the representations it can accept, such as sending an Accept: */* header, or not including an Accept header at all, then REST framework will select the first renderer in the list to use for the response. For example if your API serves JSON responses and the HTML browsable API, you might want to make JSONRenderer your default renderer, in order to send JSON responses to clients that do not specify an Accept header. If your API includes views that can serve both regular webpages and API responses depending on the request, then you might consider making TemplateHTMLRenderer your default renderer, in order to play nicely with older browsers that send broken accept headers. API Reference JSONRenderer Renders the request data into JSON, using utf-8 encoding. Note that the default style is to include unicode characters, and render the response using a compact style with no unnecessary whitespace: {"unicode black star":"★","value":999}
The client may additionally include an 'indent' media type parameter, in which case the returned JSON will be indented. For example Accept: application/json; indent=4. {
"unicode black star": "★",
"value": 999
}
The default JSON encoding style can be altered using the UNICODE_JSON and COMPACT_JSON settings keys. .media_type: application/json .format: 'json' .charset: None TemplateHTMLRenderer Renders data to HTML, using Django's standard template rendering. Unlike other renderers, the data passed to the Response does not need to be serialized. Also, unlike other renderers, you may want to include a template_name argument when creating the Response. The TemplateHTMLRenderer will create a RequestContext, using the response.data as the context dict, and determine a template name to use to render the context. Note: When used with a view that makes use of a serializer the Response sent for rendering may not be a dictionary and will need to be wrapped in a dict before returning to allow the TemplateHTMLRenderer to render it. For example: response.data = {'results': response.data}
The template name is determined by (in order of preference): An explicit template_name argument passed to the response. An explicit .template_name attribute set on this class. The return result of calling view.get_template_names(). An example of a view that uses TemplateHTMLRenderer: class UserDetail(generics.RetrieveAPIView):
"""
A view that returns a templated HTML representation of a given user.
"""
queryset = User.objects.all()
renderer_classes = [TemplateHTMLRenderer]
def get(self, request, *args, **kwargs):
self.object = self.get_object()
return Response({'user': self.object}, template_name='user_detail.html')
You can use TemplateHTMLRenderer either to return regular HTML pages using REST framework, or to return both HTML and API responses from a single endpoint. If you're building websites that use TemplateHTMLRenderer along with other renderer classes, you should consider listing TemplateHTMLRenderer as the first class in the renderer_classes list, so that it will be prioritised first even for browsers that send poorly formed ACCEPT: headers. See the HTML & Forms Topic Page for further examples of TemplateHTMLRenderer usage. .media_type: text/html .format: 'html' .charset: utf-8 See also: StaticHTMLRenderer StaticHTMLRenderer A simple renderer that simply returns pre-rendered HTML. Unlike other renderers, the data passed to the response object should be a string representing the content to be returned. An example of a view that uses StaticHTMLRenderer: @api_view(['GET'])
@renderer_classes([StaticHTMLRenderer])
def simple_html_view(request):
    data = '<html><body><h1>Hello, world</h1></body></html>'
    return Response(data)
You can use StaticHTMLRenderer either to return regular HTML pages using REST framework, or to return both HTML and API responses from a single endpoint. .media_type: text/html .format: 'html' .charset: utf-8 See also: TemplateHTMLRenderer BrowsableAPIRenderer Renders data into HTML for the Browsable API: This renderer will determine which other renderer would have been given highest priority, and use that to display an API style response within the HTML page. .media_type: text/html .format: 'api' .charset: utf-8 .template: 'rest_framework/api.html' Customizing BrowsableAPIRenderer By default the response content will be rendered with the highest priority renderer apart from BrowsableAPIRenderer. If you need to customize this behavior, for example to use HTML as the default return format, but use JSON in the browsable API, you can do so by overriding the get_default_renderer() method. For example: class CustomBrowsableAPIRenderer(BrowsableAPIRenderer):
    def get_default_renderer(self, view):
        return JSONRenderer()
AdminRenderer Renders data into HTML for an admin-like display: This renderer is suitable for CRUD-style web APIs that should also present a user-friendly interface for managing the data. Note that views that have nested or list serializers for their input won't work well with the AdminRenderer, as the HTML forms are unable to properly support them. Note: The AdminRenderer is only able to include links to detail pages when a properly configured URL_FIELD_NAME (url by default) attribute is present in the data. For HyperlinkedModelSerializer this will be the case, but for ModelSerializer or plain Serializer classes you'll need to make sure to include the field explicitly. For example here we use the model's get_absolute_url method: class AccountSerializer(serializers.ModelSerializer):
    url = serializers.CharField(source='get_absolute_url', read_only=True)

    class Meta:
        model = Account
.media_type: text/html .format: 'admin' .charset: utf-8 .template: 'rest_framework/admin.html' HTMLFormRenderer Renders data returned by a serializer into an HTML form. The output of this renderer does not include the enclosing <form> tags, a hidden CSRF input or any submit buttons. This renderer is not intended to be used directly, but can instead be used in templates by passing a serializer instance to the render_form template tag. {% load rest_framework %}
<form action="/submit-report/" method="post">
{% csrf_token %}
{% render_form serializer %}
<input type="submit" value="Save" />
</form>
For more information see the HTML & Forms documentation. .media_type: text/html .format: 'form' .charset: utf-8 .template: 'rest_framework/horizontal/form.html' MultiPartRenderer This renderer is used for rendering HTML multipart form data. It is not suitable as a response renderer, but is instead used for creating test requests, using REST framework's test client and test request factory. .media_type: multipart/form-data; boundary=BoUnDaRyStRiNg .format: 'multipart' .charset: utf-8 Custom renderers To implement a custom renderer, you should override BaseRenderer, set the .media_type and .format properties, and implement the .render(self, data, media_type=None, renderer_context=None) method. The method should return a bytestring, which will be used as the body of the HTTP response. The arguments passed to the .render() method are: data The request data, as set by the Response() instantiation. media_type=None Optional. If provided, this is the accepted media type, as determined by the content negotiation stage. Depending on the client's Accept: header, this may be more specific than the renderer's media_type attribute, and may include media type parameters. For example "application/json; nested=true". renderer_context=None Optional. If provided, this is a dictionary of contextual information provided by the view. By default this will include the following keys: view, request, response, args, kwargs. Example The following is an example plaintext renderer that will return a response with the data parameter as the content of the response. from django.utils.encoding import smart_text
from rest_framework import renderers
class PlainTextRenderer(renderers.BaseRenderer):
    media_type = 'text/plain'
    format = 'txt'

    def render(self, data, media_type=None, renderer_context=None):
        return smart_text(data, encoding=self.charset)
Setting the character set By default renderer classes are assumed to be using the UTF-8 encoding. To use a different encoding, set the charset attribute on the renderer. class PlainTextRenderer(renderers.BaseRenderer):
    media_type = 'text/plain'
    format = 'txt'
    charset = 'iso-8859-1'

    def render(self, data, media_type=None, renderer_context=None):
        return data.encode(self.charset)
Note that if a renderer class returns a unicode string, then the response content will be coerced into a bytestring by the Response class, with the charset attribute set on the renderer used to determine the encoding. If the renderer returns a bytestring representing raw binary content, you should set a charset value of None, which will ensure the Content-Type header of the response will not have a charset value set. In some cases you may also want to set the render_style attribute to 'binary'. Doing so will also ensure that the browsable API will not attempt to display the binary content as a string. class JPEGRenderer(renderers.BaseRenderer):
media_type = 'image/jpeg'
format = 'jpg'
charset = None
render_style = 'binary'
def render(self, data, media_type=None, renderer_context=None):
return data
Advanced renderer usage You can do some pretty flexible things using REST framework's renderers. Some examples... Provide either flat or nested representations from the same endpoint, depending on the requested media type. Serve both regular HTML webpages, and JSON based API responses from the same endpoints. Specify multiple types of HTML representation for API clients to use. Underspecify a renderer's media type, such as using media_type = 'image/*', and use the Accept header to vary the encoding of the response. Varying behaviour by media type In some cases you might want your view to use different serialization styles depending on the accepted media type. If you need to do this you can access request.accepted_renderer to determine the negotiated renderer that will be used for the response. For example: @api_view(['GET'])
@renderer_classes([TemplateHTMLRenderer, JSONRenderer])
def list_users(request):
"""
A view that can return JSON or HTML representations
of the users in the system.
"""
queryset = Users.objects.filter(active=True)
if request.accepted_renderer.format == 'html':
# TemplateHTMLRenderer takes a context dict,
# and additionally requires a 'template_name'.
# It does not require serialization.
data = {'users': queryset}
return Response(data, template_name='list_users.html')
# JSONRenderer requires serialized data as normal.
serializer = UserSerializer(instance=queryset)
data = serializer.data
return Response(data)
Underspecifying the media type In some cases you might want a renderer to serve a range of media types. In this case you can underspecify the media types it should respond to, by using a media_type value such as image/*, or */*. If you underspecify the renderer's media type, you should make sure to specify the media type explicitly when you return the response, using the content_type attribute. For example: return Response(data, content_type='image/png')
Designing your media types For the purposes of many Web APIs, simple JSON responses with hyperlinked relations may be sufficient. If you want to fully embrace RESTful design and HATEOAS you'll need to consider the design and usage of your media types in more detail. In the words of Roy Fielding, "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types.". For good examples of custom media types, see GitHub's use of a custom application/vnd.github+json media type, and Mike Amundsen's IANA approved application/vnd.collection+json JSON-based hypermedia. HTML error views Typically a renderer will behave the same regardless of if it's dealing with a regular response, or with a response caused by an exception being raised, such as an Http404 or PermissionDenied exception, or a subclass of APIException. If you're using either the TemplateHTMLRenderer or the StaticHTMLRenderer and an exception is raised, the behavior is slightly different, and mirrors Django's default handling of error views. Exceptions raised and handled by an HTML renderer will attempt to render using one of the following methods, by order of precedence. Load and render a template named {status_code}.html. Load and render a template named api_exception.html. Render the HTTP status code and text, for example "404 Not Found". Templates will render with a RequestContext which includes the status_code and details keys. Note: If DEBUG=True, Django's standard traceback error page will be displayed instead of rendering the HTTP status code and text. Third party packages The following third party packages are also available. YAML REST framework YAML provides YAML parsing and rendering support. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. 
Installation & configuration Install using pip. $ pip install djangorestframework-yaml
Modify your REST framework settings. REST_FRAMEWORK = {
'DEFAULT_PARSER_CLASSES': [
'rest_framework_yaml.parsers.YAMLParser',
],
'DEFAULT_RENDERER_CLASSES': [
'rest_framework_yaml.renderers.YAMLRenderer',
],
}
XML REST Framework XML provides a simple informal XML format. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. Installation & configuration Install using pip. $ pip install djangorestframework-xml
Modify your REST framework settings. REST_FRAMEWORK = {
'DEFAULT_PARSER_CLASSES': [
'rest_framework_xml.parsers.XMLParser',
],
'DEFAULT_RENDERER_CLASSES': [
'rest_framework_xml.renderers.XMLRenderer',
],
}
JSONP REST framework JSONP provides JSONP rendering support. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. Warning: If you require cross-domain AJAX requests, you should generally be using the more modern approach of CORS as an alternative to JSONP. See the CORS documentation for more details. The jsonp approach is essentially a browser hack, and is only appropriate for globally readable API endpoints, where GET requests are unauthenticated and do not require any user permissions. Installation & configuration Install using pip. $ pip install djangorestframework-jsonp
Modify your REST framework settings. REST_FRAMEWORK = {
'DEFAULT_RENDERER_CLASSES': [
'rest_framework_jsonp.renderers.JSONPRenderer',
],
}
MessagePack MessagePack is a fast, efficient binary serialization format. Juan Riaza maintains the djangorestframework-msgpack package which provides MessagePack renderer and parser support for REST framework. XLSX (Binary Spreadsheet Endpoints) XLSX is the world's most popular binary spreadsheet format. Tim Allen of The Wharton School maintains drf-renderer-xlsx, which renders an endpoint as an XLSX spreadsheet using OpenPyXL, and allows the client to download it. Spreadsheets can be styled on a per-view basis. Installation & configuration Install using pip. $ pip install drf-renderer-xlsx
Modify your REST framework settings. REST_FRAMEWORK = {
...
'DEFAULT_RENDERER_CLASSES': [
'rest_framework.renderers.JSONRenderer',
'rest_framework.renderers.BrowsableAPIRenderer',
'drf_renderer_xlsx.renderers.XLSXRenderer',
],
}
To avoid having a file streamed without a filename (which the browser will often default to the filename "download", with no extension), we need to use a mixin to override the Content-Disposition header. If no filename is provided, it will default to export.xlsx. For example: from rest_framework.viewsets import ReadOnlyModelViewSet
from drf_renderer_xlsx.mixins import XLSXFileMixin
from drf_renderer_xlsx.renderers import XLSXRenderer
from .models import MyExampleModel
from .serializers import MyExampleSerializer
class MyExampleViewSet(XLSXFileMixin, ReadOnlyModelViewSet):
queryset = MyExampleModel.objects.all()
serializer_class = MyExampleSerializer
renderer_classes = [XLSXRenderer]
filename = 'my_export.xlsx'
CSV Comma-separated values are a plain-text tabular data format that can be easily imported into spreadsheet applications. Mjumbe Poe maintains the djangorestframework-csv package which provides CSV renderer support for REST framework. UltraJSON UltraJSON is an optimized C JSON encoder which can give significantly faster JSON rendering. Adam Mertz maintains drf_ujson2, a fork of the now unmaintained drf-ujson-renderer, which implements JSON rendering using the UJSON package. CamelCase JSON djangorestframework-camel-case provides camel case JSON renderers and parsers for REST framework. This allows serializers to use Python-style underscored field names, but be exposed in the API as Javascript-style camel case field names. It is maintained by Vitaly Babiy. Pandas (CSV, Excel, PNG) Django REST Pandas provides a serializer and renderers that support additional data processing and output via the Pandas DataFrame API. Django REST Pandas includes renderers for Pandas-style CSV files, Excel workbooks (both .xls and .xlsx), and a number of other formats. It is maintained by S. Andrew Sheppard as part of the wq Project. LaTeX Rest Framework Latex provides a renderer that outputs PDFs using Lualatex. It is maintained by Pebble (S/F Software). |
doc_30073 |
Return the artist's zorder. | |
doc_30074 | A string to issue as an intro or banner. May be overridden by giving the cmdloop() method an argument. | |
doc_30075 | Abstract. | |
doc_30076 | See Migration guide for more details. tf.compat.v1.nn.space_to_depth
tf.compat.v1.space_to_depth(
input, block_size, name=None, data_format='NHWC'
)
Rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. The attr block_size indicates the input block size. Non-overlapping blocks of size block_size x block_size are rearranged into depth at each location. The depth of the output tensor is block_size * block_size * input_depth. The Y, X coordinates within each block of the input become the high order component of the output channel index. The input tensor's height and width must be divisible by block_size. The data_format attr specifies the layout of the input and output tensors with the following options: "NHWC": [ batch, height, width, channels ] "NCHW": [ batch, channels, height, width ] "NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ] It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data_format = NHWC, each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,oY,bY,oX,bX,iC (where n=batch index, oX, oY means X or Y coordinates within the output image, bX, bY means coordinates within the input block, iC means input channels). The output would be a transpose to the following layout: n,oY,oX,bY,bX,iC This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models. For example, given an input of shape [1, 2, 2, 1], data_format = "NHWC" and block_size = 2: x = [[[[1], [2]],
[[3], [4]]]]
This operation will output a tensor of shape [1, 1, 1, 4]: [[[[1, 2, 3, 4]]]]
Here, the input has a batch of 1 and each batch element has shape [2, 2, 1], the corresponding output will have a single element (i.e. width and height are both 1) and will have a depth of 4 channels (1 * block_size * block_size). The output element shape is [1, 1, 4]. For an input tensor with larger depth, here of shape [1, 2, 2, 3], e.g. x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
This operation, for block_size of 2, will return the following tensor of shape [1, 1, 1, 12] [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
Similarly, for the following input of shape [1, 4, 4, 1], and a block size of 2: x = [[[[1], [2], [5], [6]],
[[3], [4], [7], [8]],
[[9], [10], [13], [14]],
[[11], [12], [15], [16]]]]
the operator will return the following tensor of shape [1, 2, 2, 4]: x = [[[[1, 2, 3, 4],
[5, 6, 7, 8]],
[[9, 10, 11, 12],
[13, 14, 15, 16]]]]
Args
input A Tensor.
block_size An int that is >= 2. The size of the spatial block.
data_format An optional string from: "NHWC", "NCHW", "NCHW_VECT_C". Defaults to "NHWC".
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | |
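The block rearrangement described above can be sketched in plain NumPy for the NHWC layout. This is an illustrative sketch of the semantics, not TensorFlow's kernel; the name space_to_depth_nhwc is invented for the example.

```python
import numpy as np

def space_to_depth_nhwc(x, block_size):
    # Move block_size x block_size spatial blocks into the channel axis.
    # x has shape [batch, height, width, channels]; height and width must
    # be divisible by block_size.
    n, h, w, c = x.shape
    assert h % block_size == 0 and w % block_size == 0
    # Split height and width into (outer, block) pairs: n, oY, bY, oX, bX, iC
    x = x.reshape(n, h // block_size, block_size,
                  w // block_size, block_size, c)
    # Reorder to n, oY, oX, bY, bX, iC, then merge the last three axes.
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h // block_size, w // block_size,
                     block_size * block_size * c)

# Reproduces the [1, 2, 2, 3] example from the text:
x = np.arange(1, 13).reshape(1, 2, 2, 3)
out = space_to_depth_nhwc(x, 2)  # shape (1, 1, 1, 12), values 1..12 in order
```

The transpose order matches the n,oY,oX,bY,bX,iC layout given in the description.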
doc_30077 | A memoryview of contents of the shared memory block. | |
doc_30078 | Out-of-place version of torch.Tensor.index_fill_(). tensor1 corresponds to self in torch.Tensor.index_fill_(). | |
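A NumPy sketch of the out-of-place semantics (copy first, then fill the slices selected along a dimension). The helper name index_fill is invented for illustration; this is not PyTorch's implementation.

```python
import numpy as np

def index_fill(x, dim, index, value):
    # Out-of-place: copy x, then set the selected slices along `dim` to value.
    out = np.array(x, copy=True)
    idx = [slice(None)] * out.ndim
    idx[dim] = np.asarray(index)
    out[tuple(idx)] = value
    return out

x = np.arange(9).reshape(3, 3)
filled = index_fill(x, 0, [0, 2], -1)  # rows 0 and 2 become -1; x is unchanged
```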
doc_30079 | See Migration guide for more details. tf.compat.v1.nn.l2_loss
tf.nn.l2_loss(
t, name=None
)
Computes half the L2 norm of a tensor without the sqrt: output = sum(t ** 2) / 2
Args
t A Tensor. Must be one of the following types: half, bfloat16, float32, float64. Typically 2-D, but may have any dimensions.
name A name for the operation (optional).
Returns A Tensor. Has the same type as t. | |
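The formula can be checked by hand with a small NumPy sketch (an illustration of the math, not the TF kernel):

```python
import numpy as np

def l2_loss(t):
    # Half the squared L2 norm, summed over all elements -- no sqrt.
    return np.sum(np.square(t)) / 2

t = np.array([1.0, 2.0, 3.0])
l2_loss(t)  # (1 + 4 + 9) / 2 = 7.0
```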
doc_30080 |
Create color map from linear mapping segments segmentdata argument is a dictionary with a red, green and blue entries. Each entry should be a list of x, y0, y1 tuples, forming rows in a table. Entries for alpha are optional. Example: suppose you want red to increase from 0 to 1 over the bottom half, green to do the same over the middle half, and blue over the top half. Then you would use: cdict = {'red': [(0.0, 0.0, 0.0),
(0.5, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'green': [(0.0, 0.0, 0.0),
(0.25, 0.0, 0.0),
(0.75, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'blue': [(0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)]}
Each row in the table for a given color is a sequence of x, y0, y1 tuples. In each sequence, x must increase monotonically from 0 to 1. For any input value z falling between x[i] and x[i+1], the output value of a given color will be linearly interpolated between y1[i] and y0[i+1]:
row i:    x  y0  y1
                 /
                /
row i+1:  x  y0  y1
Hence y0 in the first row and y1 in the last row are never used. See also
LinearSegmentedColormap.from_list
Static method; factory function for generating a smoothly-varying LinearSegmentedColormap.
makeMappingArray
For information about making a mapping array. | |
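The interpolation rule above can be sketched for a single channel in plain Python. This is a hedged illustration of the table lookup, not Matplotlib's implementation; interp_channel is an invented name.

```python
def interp_channel(segments, z):
    """Evaluate one channel of a segmentdata table at z in [0, 1].

    segments is a list of (x, y0, y1) rows with x increasing from 0 to 1.
    Between x[i] and x[i+1] the value runs linearly from y1[i] to y0[i+1],
    so y0 of the first row and y1 of the last row are never used.
    """
    for (x0, _, y1_left), (x1, y0_right, _) in zip(segments, segments[1:]):
        if x0 <= z <= x1:
            frac = (z - x0) / (x1 - x0)
            return y1_left + frac * (y0_right - y1_left)
    raise ValueError("z outside [0, 1]")

# The 'red' table from the cdict example: ramps 0 -> 1 over the bottom half.
red = [(0.0, 0.0, 0.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)]
interp_channel(red, 0.25)  # halfway up the ramp -> 0.5
```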
doc_30081 |
Evaluate a 2-D Legendre series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * L_i(x) * L_j(y)\] The parameters x and y are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion. In either case, either x and y or their elements must support multiplication and addition both with themselves and with the elements of c. If c is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters
x, yarray_like, compatible objects
The two dimensional series is evaluated at the points (x, y), where x and y must have the same shape. If x or y is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar.
carray_like
Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in c[i,j]. If c has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns
valuesndarray, compatible object
The values of the two dimensional Legendre series at points formed from pairs of corresponding values from x and y. See also
legval, leggrid2d, legval3d, leggrid3d
Notes New in version 1.7.0. | |
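A quick numerical check of the formula: with c[0,0] = c[1,1] = 1 and all other coefficients zero, p(x, y) = L_0(x)L_0(y) + L_1(x)L_1(y) = 1 + x*y.

```python
import numpy as np
from numpy.polynomial.legendre import legval2d

# c[i, j] multiplies L_i(x) * L_j(y); since L_0 = 1 and L_1(t) = t,
# this coefficient array gives p(x, y) = 1 + x*y.
c = np.array([[1.0, 0.0],
              [0.0, 1.0]])
legval2d(2.0, 3.0, c)  # 1 + 2*3 = 7.0
```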
doc_30082 |
Read SIFT or SURF features from externally generated file. This routine reads SIFT or SURF files generated by binary utilities from http://people.cs.ubc.ca/~lowe/keypoints/ and http://www.vision.ee.ethz.ch/~surf/. This routine does not generate SIFT/SURF features from an image. These algorithms are patent encumbered. Please use skimage.feature.CENSURE instead. Parameters
filelikestring or open file
Input file generated by the feature detectors from http://people.cs.ubc.ca/~lowe/keypoints/ or http://www.vision.ee.ethz.ch/~surf/ .
mode{‘SIFT’, ‘SURF’}, optional
Kind of descriptor used to generate filelike. Returns
datarecord array with fields
row: int
row position of feature
column: int
column position of feature
scale: float
feature scale
orientation: float
feature orientation
data: array
feature values | |
doc_30083 | See Migration guide for more details. tf.compat.v1.raw_ops.IdentityReaderV2
tf.raw_ops.IdentityReaderV2(
container='', shared_name='', name=None
)
To use, enqueue strings in a Queue. ReaderRead will take the front work string and output (work, work).
Args
container An optional string. Defaults to "". If non-empty, this reader is placed in the given container. Otherwise, a default container is used.
shared_name An optional string. Defaults to "". If non-empty, this reader is named in the given bucket with this shared_name. Otherwise, the node name is used instead.
name A name for the operation (optional).
Returns A Tensor of type resource. | |
doc_30084 | Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries '.' and '..' even if they are present in the directory. If a file is removed from or added to the directory during the call of this function, whether a name for that file be included is unspecified. path may be a path-like object. If path is of type bytes (directly or indirectly through the PathLike interface), the filenames returned will also be of type bytes; in all other circumstances, they will be of type str. This function can also support specifying a file descriptor; the file descriptor must refer to a directory. Raises an auditing event os.listdir with argument path. Note To encode str filenames to bytes, use fsencode(). See also The scandir() function returns directory entries along with file attribute information, giving better performance for many common use cases. Changed in version 3.2: The path parameter became optional. New in version 3.3: Added support for specifying path as an open file descriptor. Changed in version 3.6: Accepts a path-like object. | |
doc_30085 |
Add a colormap to the set recognized by get_cmap(). Register a new colormap to be accessed by name: swirly_cmap = LinearSegmentedColormap('swirly', data, lut)
register_cmap(cmap=swirly_cmap)
Parameters
namestr, optional
The name that can be used in get_cmap() or rcParams["image.cmap"] (default: 'viridis') If absent, the name will be the name attribute of the cmap.
cmapmatplotlib.colors.Colormap
Despite being the second argument and having a default value, this is a required argument.
override_builtinbool
Allow built-in colormaps to be overridden by a user-supplied colormap. Please do not use this unless you are sure you need it. Notes Registering a colormap stores a reference to the colormap object which can currently be modified and inadvertently change the global colormap state. This behavior is deprecated and in Matplotlib 3.5 the registered colormap will be immutable. | |
doc_30086 | See Migration guide for more details. tf.compat.v1.feature_column.weighted_categorical_column
tf.feature_column.weighted_categorical_column(
categorical_column, weight_feature_key, dtype=tf.dtypes.float32
)
Use this when each of your sparse inputs has both an ID and a value. For example, if you're representing text documents as a collection of word frequencies, you can provide 2 parallel sparse input features ('terms' and 'frequencies' below). Example: Input tf.Example objects: [
features {
feature {
key: "terms"
value {bytes_list {value: "very" value: "model"} }
}
feature {
key: "frequencies"
value {float_list {value: 0.3 value: 0.1} }
}
},
features {
feature {
key: "terms"
value {bytes_list {value: "when" value: "course" value: "human"} }
}
feature {
key: "frequencies"
value {float_list {value: 0.4 value: 0.1 value: 0.2} }
}
}
]
categorical_column = categorical_column_with_hash_bucket(
column_name='terms', hash_bucket_size=1000)
weighted_column = weighted_categorical_column(
categorical_column=categorical_column, weight_feature_key='frequencies')
columns = [weighted_column, ...]
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
linear_prediction, _, _ = linear_model(features, columns)
This assumes the input dictionary contains a SparseTensor for key 'terms', and a SparseTensor for key 'frequencies'. These 2 tensors must have the same indices and dense shape.
Args
categorical_column A CategoricalColumn created by categorical_column_with_* functions.
weight_feature_key String key for weight values.
dtype Type of weights, such as tf.float32. Only float and integer weights are supported.
Returns A CategoricalColumn composed of two sparse features: one represents id, the other represents weight (value) of the id feature in that example.
Raises
ValueError if dtype is not convertible to float. | |
doc_30087 |
Return an ndarray of the flattened values of the underlying data. Returns
numpy.ndarray
Flattened array. See also numpy.ndarray.ravel
Return a flattened array. | |
doc_30088 | Create a new request object based on the values provided. If environ is given missing values are filled from there. This method is useful for small scripts when you need to simulate a request from an URL. Do not use this method for unittesting, there is a full featured client object (Client) that allows to create multipart requests, support for cookies etc. This accepts the same options as the EnvironBuilder. Changelog Changed in version 0.5: This method now accepts the same arguments as EnvironBuilder. Because of this the environ parameter is now called environ_overrides. Returns
request object Parameters
args (Any) –
kwargs (Any) – Return type
werkzeug.wrappers.request.Request | |
doc_30089 | Return a Document that represents the string. This method creates an io.StringIO object for the string and passes that on to parse(). | |
doc_30090 |
[Deprecated] Set the title text of the window containing the figure. Note that this has no effect if there is no window (e.g., a PS backend). Notes Deprecated since version 3.4. | |
doc_30091 |
Highlight values defined by a quantile with a style. New in version 1.3.0. Parameters
subset:label, array-like, IndexSlice, optional
A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
color:str, default ‘yellow’
Background color to use for highlighting.
axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0
Axis along which to determine and highlight quantiles. If None quantiles are measured over the entire DataFrame. See examples.
q_left:float, default 0
Left bound, in [0, q_right), for the target quantile range.
q_right:float, default 1
Right bound, in (q_left, 1], for the target quantile range.
interpolation:{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}
Argument passed to Series.quantile or DataFrame.quantile for quantile estimation.
inclusive:{‘both’, ‘neither’, ‘left’, ‘right’}
Identify whether quantile bounds are closed or open.
props:str, default None
CSS properties to use for highlighting. If props is given, color is not used. Returns
self:Styler
See also Styler.highlight_null
Highlight missing values with a style. Styler.highlight_max
Highlight the maximum with a style. Styler.highlight_min
Highlight the minimum with a style. Styler.highlight_between
Highlight a defined range with a style. Notes This function does not work with str dtypes. Examples Using axis=None and apply a quantile to all collective data
>>> df = pd.DataFrame(np.arange(10).reshape(2,5) + 1)
>>> df.style.highlight_quantile(axis=None, q_left=0.8, color="#fffd75")
...
Or highlight quantiles row-wise or column-wise, in this case by row-wise
>>> df.style.highlight_quantile(axis=1, q_left=0.8, color="#fffd75")
...
Use props instead of default background coloring
>>> df.style.highlight_quantile(axis=None, q_left=0.2, q_right=0.8,
... props='font-weight:bold;color:#e83e8c') | |
doc_30092 |
Estimate line model from data. This minimizes the sum of shortest (orthogonal) distances from the given data points to the estimated line. Parameters
data(N, dim) array
N points in a space of dimensionality dim >= 2. Returns
successbool
True, if model estimation succeeds. | |
doc_30093 | A Semaphore object. Not thread-safe. A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some task calls release(). The optional value argument gives the initial value for the internal counter (1 by default). If the given value is less than 0 a ValueError is raised. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. The preferred way to use a Semaphore is an async with statement: sem = asyncio.Semaphore(10)
# ... later
async with sem:
# work with shared resource
which is equivalent to: sem = asyncio.Semaphore(10)
# ... later
await sem.acquire()
try:
# work with shared resource
finally:
sem.release()
coroutine acquire()
Acquire a semaphore. If the internal counter is greater than zero, decrement it by one and return True immediately. If it is zero, wait until a release() is called and return True.
locked()
Returns True if the semaphore cannot be acquired immediately.
release()
Release a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore. Unlike BoundedSemaphore, Semaphore allows making more release() calls than acquire() calls. | |
doc_30094 | An abstract method that executes the module in its own namespace when a module is imported or reloaded. The module should already be initialized when exec_module() is called. When this method exists, create_module() must be defined. New in version 3.4. Changed in version 3.6: create_module() must also be defined. | |
doc_30095 |
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | |
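Ignoring edge cases, the metric is just the (optionally weighted) fraction of exact matches. A sketch of what score computes; mean_accuracy is an invented helper name, not the library API.

```python
import numpy as np

def mean_accuracy(y_true, y_pred, sample_weight=None):
    # Fraction of samples whose prediction matches the label exactly;
    # with weights, a weighted average of the 0/1 match indicator.
    match = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    return np.average(match, weights=sample_weight)

mean_accuracy([0, 1, 2, 1], [0, 1, 1, 1])  # 3 of 4 correct -> 0.75
```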
doc_30096 | Creates a strided copy of self. Warning Throws an error if self is a strided tensor. Example: >>> s = torch.sparse_coo_tensor(
... torch.tensor([[1, 1],
... [0, 2]]),
... torch.tensor([9, 10]),
... size=(3, 3))
>>> s.to_dense()
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]]) | |
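The conversion can be mimicked with NumPy for the COO example above (an illustration of the layout, not PyTorch's implementation; coo_to_dense is an invented name):

```python
import numpy as np

def coo_to_dense(indices, values, size):
    """Scatter COO (indices, values) into a dense array of the given size.

    indices has shape (ndim, nnz), the same layout torch.sparse_coo_tensor
    takes: column k of indices is the coordinate of values[k].
    """
    dense = np.zeros(size, dtype=np.asarray(values).dtype)
    dense[tuple(indices)] = values
    return dense

# Same data as the example: entry 9 at (1, 0) and entry 10 at (1, 2).
indices = np.array([[1, 1],
                    [0, 2]])
dense = coo_to_dense(indices, np.array([9, 10]), (3, 3))
# -> [[0, 0, 0], [9, 0, 10], [0, 0, 0]]
```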
doc_30097 | Token value for ".". | |
doc_30098 | Used internally for PIL-style arrays. The value is informational only. | |
doc_30099 |
Return Less than of series and other, element-wise (binary operator lt). Equivalent to series < other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other:Series or scalar value
fill_value:None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level:int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. Examples
>>> a = pd.Series([1, 1, 1, np.nan, 1], index=['a', 'b', 'c', 'd', 'e'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
e 1.0
dtype: float64
>>> b = pd.Series([0, 1, 2, np.nan, 1], index=['a', 'b', 'c', 'd', 'f'])
>>> b
a 0.0
b 1.0
c 2.0
d NaN
f 1.0
dtype: float64
>>> a.lt(b, fill_value=0)
a False
b False
c True
d False
e False
f True
dtype: bool |