| _id | text | title |
|---|---|---|
doc_28300 | The Example that failed. | |
doc_28301 | Performs the element-wise multiplication of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input. \(\text{out}_i = \text{input}_i + \text{value} \times \text{tensor1}_i \times \text{tensor2}_i\)
The shapes of input, tensor1, and tensor2 must be broadcastable. For inputs of type FloatTensor or DoubleTensor, value must be a real number; otherwise, an integer. Parameters
input (Tensor) – the tensor to be added
tensor1 (Tensor) – the tensor to be multiplied
tensor2 (Tensor) – the tensor to be multiplied Keyword Arguments
value (Number, optional) – multiplier for tensor1 .* tensor2
out (Tensor, optional) – the output tensor. Example: >>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
>>> torch.addcmul(t, t1, t2, value=0.1)
tensor([[-0.8635, -0.6391, 1.6174],
[-0.7617, -0.5879, 1.7388],
[-0.8353, -0.6249, 1.6511]]) | |
doc_28302 | from myapp.serializers import UserSerializer
from rest_framework import generics
from rest_framework.permissions import IsAdminUser
class UserList(generics.ListCreateAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
permission_classes = [IsAdminUser]
For more complex cases you might also want to override various methods on the view class. For example: class UserList(generics.ListCreateAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
permission_classes = [IsAdminUser]
def list(self, request):
# Note the use of `get_queryset()` instead of `self.queryset`
queryset = self.get_queryset()
serializer = UserSerializer(queryset, many=True)
return Response(serializer.data)
For very simple cases you might want to pass through any class attributes using the .as_view() method. For example, your URLconf might include something like the following entry: path('users/', ListCreateAPIView.as_view(queryset=User.objects.all(), serializer_class=UserSerializer), name='user-list')
API Reference GenericAPIView This class extends REST framework's APIView class, adding commonly required behavior for standard list and detail views. Each of the concrete generic views provided is built by combining GenericAPIView, with one or more mixin classes. Attributes Basic settings: The following attributes control the basic view behavior.
queryset - The queryset that should be used for returning objects from this view. Typically, you must either set this attribute, or override the get_queryset() method. If you are overriding a view method, it is important that you call get_queryset() instead of accessing this property directly, as queryset will get evaluated once, and those results will be cached for all subsequent requests.
serializer_class - The serializer class that should be used for validating and deserializing input, and for serializing output. Typically, you must either set this attribute, or override the get_serializer_class() method.
lookup_field - The model field that should be used to for performing object lookup of individual model instances. Defaults to 'pk'. Note that when using hyperlinked APIs you'll need to ensure that both the API views and the serializer classes set the lookup fields if you need to use a custom value.
lookup_url_kwarg - The URL keyword argument that should be used for object lookup. The URL conf should include a keyword argument corresponding to this value. If unset this defaults to using the same value as lookup_field. Pagination: The following attributes are used to control pagination when used with list views.
pagination_class - The pagination class that should be used when paginating list results. Defaults to the same value as the DEFAULT_PAGINATION_CLASS setting, which is 'rest_framework.pagination.PageNumberPagination'. Setting pagination_class=None will disable pagination on this view. Filtering:
filter_backends - A list of filter backend classes that should be used for filtering the queryset. Defaults to the same value as the DEFAULT_FILTER_BACKENDS setting. Methods Base methods: get_queryset(self) Returns the queryset that should be used for list views, and that should be used as the base for lookups in detail views. Defaults to returning the queryset specified by the queryset attribute. This method should always be used rather than accessing self.queryset directly, as self.queryset gets evaluated only once, and those results are cached for all subsequent requests. May be overridden to provide dynamic behavior, such as returning a queryset that is specific to the user making the request. For example: def get_queryset(self):
user = self.request.user
return user.accounts.all()
get_object(self) Returns an object instance that should be used for detail views. Defaults to using the lookup_field parameter to filter the base queryset. May be overridden to provide more complex behavior, such as object lookups based on more than one URL kwarg. For example: def get_object(self):
queryset = self.get_queryset()
filter = {}
for field in self.multiple_lookup_fields:
filter[field] = self.kwargs[field]
obj = get_object_or_404(queryset, **filter)
self.check_object_permissions(self.request, obj)
return obj
Note that if your API doesn't include any object level permissions, you may optionally exclude the self.check_object_permissions call and simply return the object from the get_object_or_404 lookup. filter_queryset(self, queryset) Given a queryset, filter it with whichever filter backends are in use, returning a new queryset. For example: def filter_queryset(self, queryset):
filter_backends = [CategoryFilter]
if 'geo_route' in self.request.query_params:
filter_backends = [GeoRouteFilter, CategoryFilter]
elif 'geo_point' in self.request.query_params:
filter_backends = [GeoPointFilter, CategoryFilter]
for backend in list(filter_backends):
queryset = backend().filter_queryset(self.request, queryset, view=self)
return queryset
get_serializer_class(self) Returns the class that should be used for the serializer. Defaults to returning the serializer_class attribute. May be overridden to provide dynamic behavior, such as using different serializers for read and write operations, or providing different serializers to different types of users. For example: def get_serializer_class(self):
if self.request.user.is_staff:
return FullAccountSerializer
return BasicAccountSerializer
Save and deletion hooks: The following methods are provided by the mixin classes, and provide easy overriding of the object save or deletion behavior.
perform_create(self, serializer) - Called by CreateModelMixin when saving a new object instance.
perform_update(self, serializer) - Called by UpdateModelMixin when saving an existing object instance.
perform_destroy(self, instance) - Called by DestroyModelMixin when deleting an object instance. These hooks are particularly useful for setting attributes that are implicit in the request, but are not part of the request data. For instance, you might set an attribute on the object based on the request user, or based on a URL keyword argument. def perform_create(self, serializer):
serializer.save(user=self.request.user)
These override points are also particularly useful for adding behavior that occurs before or after saving an object, such as emailing a confirmation, or logging the update. def perform_update(self, serializer):
instance = serializer.save()
send_email_confirmation(user=self.request.user, modified=instance)
You can also use these hooks to provide additional validation, by raising a ValidationError(). This can be useful if you need some validation logic to apply at the point of database save. For example: def perform_create(self, serializer):
queryset = SignupRequest.objects.filter(user=self.request.user)
if queryset.exists():
raise ValidationError('You have already signed up')
serializer.save(user=self.request.user)
Other methods: You won't typically need to override the following methods, although you might need to call into them if you're writing custom views using GenericAPIView.
get_serializer_context(self) - Returns a dictionary containing any extra context that should be supplied to the serializer. Defaults to including 'request', 'view' and 'format' keys.
get_serializer(self, instance=None, data=None, many=False, partial=False) - Returns a serializer instance.
get_paginated_response(self, data) - Returns a paginated style Response object.
paginate_queryset(self, queryset) - Paginate a queryset if required, either returning a page object, or None if pagination is not configured for this view.
filter_queryset(self, queryset) - Given a queryset, filter it with whichever filter backends are in use, returning a new queryset.
Mixins
The mixin classes provide the actions that are used to provide the basic view behavior. Note that the mixin classes provide action methods rather than defining the handler methods, such as .get() and .post(), directly. This allows for more flexible composition of behavior. The mixin classes can be imported from rest_framework.mixins.
ListModelMixin
Provides a .list(request, *args, **kwargs) method, that implements listing a queryset. If the queryset is populated, this returns a 200 OK response, with a serialized representation of the queryset as the body of the response. The response data may optionally be paginated.
CreateModelMixin
Provides a .create(request, *args, **kwargs) method, that implements creating and saving a new model instance. If an object is created this returns a 201 Created response, with a serialized representation of the object as the body of the response. If the representation contains a key named url, then the Location header of the response will be populated with that value. If the request data provided for creating the object was invalid, a 400 Bad Request response will be returned, with the error details as the body of the response.
RetrieveModelMixin
Provides a .retrieve(request, *args, **kwargs) method, that implements returning an existing model instance in a response. If an object can be retrieved this returns a 200 OK response, with a serialized representation of the object as the body of the response. Otherwise it will return a 404 Not Found.
UpdateModelMixin
Provides a .update(request, *args, **kwargs) method, that implements updating and saving an existing model instance. Also provides a .partial_update(request, *args, **kwargs) method, which is similar to the update method, except that all fields for the update will be optional. This allows support for HTTP PATCH requests. If an object is updated this returns a 200 OK response, with a serialized representation of the object as the body of the response. If the request data provided for updating the object was invalid, a 400 Bad Request response will be returned, with the error details as the body of the response.
DestroyModelMixin
Provides a .destroy(request, *args, **kwargs) method, that implements deletion of an existing model instance. If an object is deleted this returns a 204 No Content response, otherwise it will return a 404 Not Found.
Concrete View Classes
The following classes are the concrete generic views. If you're using generic views this is normally the level you'll be working at unless you need heavily customized behavior. The view classes can be imported from rest_framework.generics.
CreateAPIView
Used for create-only endpoints. Provides a post method handler. Extends: GenericAPIView, CreateModelMixin
ListAPIView
Used for read-only endpoints to represent a collection of model instances. Provides a get method handler. Extends: GenericAPIView, ListModelMixin
RetrieveAPIView
Used for read-only endpoints to represent a single model instance. Provides a get method handler. Extends: GenericAPIView, RetrieveModelMixin
DestroyAPIView
Used for delete-only endpoints for a single model instance. Provides a delete method handler. Extends: GenericAPIView, DestroyModelMixin
UpdateAPIView
Used for update-only endpoints for a single model instance. Provides put and patch method handlers.
Extends: GenericAPIView, UpdateModelMixin
ListCreateAPIView
Used for read-write endpoints to represent a collection of model instances. Provides get and post method handlers. Extends: GenericAPIView, ListModelMixin, CreateModelMixin
RetrieveUpdateAPIView
Used for read or update endpoints to represent a single model instance. Provides get, put and patch method handlers. Extends: GenericAPIView, RetrieveModelMixin, UpdateModelMixin
RetrieveDestroyAPIView
Used for read or delete endpoints to represent a single model instance. Provides get and delete method handlers. Extends: GenericAPIView, RetrieveModelMixin, DestroyModelMixin
RetrieveUpdateDestroyAPIView
Used for read-write-delete endpoints to represent a single model instance. Provides get, put, patch and delete method handlers. Extends: GenericAPIView, RetrieveModelMixin, UpdateModelMixin, DestroyModelMixin
Customizing the generic views
Often you'll want to use the existing generic views, but use some slightly customized behavior. If you find yourself reusing some bit of customized behavior in multiple places, you might want to refactor the behavior into a common class that you can then just apply to any view or viewset as needed.
Creating custom mixins
For example, if you need to look up objects based on multiple fields in the URL conf, you could create a mixin class like the following: class MultipleFieldLookupMixin:
"""
Apply this mixin to any view or viewset to get multiple field filtering
based on a `lookup_fields` attribute, instead of the default single field filtering.
"""
def get_object(self):
queryset = self.get_queryset() # Get the base queryset
queryset = self.filter_queryset(queryset) # Apply any filter backends
filter = {}
for field in self.lookup_fields:
if self.kwargs[field]: # Ignore empty fields.
filter[field] = self.kwargs[field]
obj = get_object_or_404(queryset, **filter) # Lookup the object
self.check_object_permissions(self.request, obj)
return obj
You can then simply apply this mixin to a view or viewset anytime you need to apply the custom behavior. class RetrieveUserView(MultipleFieldLookupMixin, generics.RetrieveAPIView):
queryset = User.objects.all()
serializer_class = UserSerializer
lookup_fields = ['account', 'username']
Using custom mixins is a good option if you have custom behavior that needs to be reused across a handful of views.
Creating custom base classes
If you are using a mixin across multiple views, you can take this a step further and create your own set of base views that can then be used throughout your project. For example: class BaseRetrieveView(MultipleFieldLookupMixin,
generics.RetrieveAPIView):
pass
class BaseRetrieveUpdateDestroyView(MultipleFieldLookupMixin,
generics.RetrieveUpdateDestroyAPIView):
pass
Using custom base classes is a good option if you have custom behavior that consistently needs to be repeated across a large number of views throughout your project.
PUT as create
Prior to version 3.0 the REST framework mixins treated PUT as either an update or a create operation, depending on whether the object already existed or not. Allowing PUT as create operations is problematic, as it necessarily exposes information about the existence or non-existence of objects. It's also not obvious that transparently allowing re-creating of previously deleted instances is necessarily a better default behavior than simply returning 404 responses. Both styles "PUT as 404" and "PUT as create" can be valid in different circumstances, but from version 3.0 onwards we now use 404 behavior as the default, due to it being simpler and more obvious. If you need generic PUT-as-create behavior you may want to include something like this AllowPUTAsCreateMixin class as a mixin to your views.
Third party packages
The following third party packages provide additional generic view implementations.
Django Rest Multiple Models
Django Rest Multiple Models provides a generic view (and mixin) for sending multiple serialized models and/or querysets via a single API request. | |
doc_28303 |
Alias for get_verticalalignment. | |
doc_28304 |
Integral image / summed area table. The integral image contains the sum of all elements above and to the left of it, i.e.: \[S[m, n] = \sum_{i \leq m} \sum_{j \leq n} X[i, j]\] Parameters
image : ndarray
Input image. Returns
S : ndarray
Integral image/summed area table of same shape as input image. References
1
F.C. Crow, “Summed-area tables for texture mapping,” ACM SIGGRAPH Computer Graphics, vol. 18, 1984, pp. 207-212.
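Examples
A minimal sketch (not part of the original page), assuming integral_image is imported from skimage.transform; for an all-ones input, S[m, n] = (m + 1) * (n + 1):
>>> import numpy as np
>>> from skimage.transform import integral_image
>>> X = np.ones((3, 3), dtype=int)
>>> integral_image(X)
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]]) | |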
doc_28305 |
Return Floating division of series and other, element-wise (binary operator truediv). Equivalent to series / other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters
other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level. Returns
Series
The result of the operation. See also Series.rtruediv
Reverse of the Floating division operator, see Python documentation for more details. Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divide(b, fill_value=0)
a 1.0
b inf
c inf
d 0.0
e NaN
dtype: float64 | |
doc_28306 | See Migration guide for more details. tf.compat.v1.raw_ops.CountUpTo
tf.raw_ops.CountUpTo(
ref, limit, name=None
)
Args
ref A mutable Tensor. Must be one of the following types: int32, int64. Should be from a scalar Variable node.
limit An int. If incrementing ref would bring it above limit, instead generates an 'OutOfRange' error.
name A name for the operation (optional).
Returns A Tensor. Has the same type as ref. | |
doc_28307 |
Unsupervised Outlier Detection using Local Outlier Factor (LOF) The anomaly score of each sample is called Local Outlier Factor. It measures the local deviation of density of a given sample with respect to its neighbors. It is local in that the anomaly score depends on how isolated the object is with respect to the surrounding neighborhood. More precisely, locality is given by k-nearest neighbors, whose distance is used to estimate the local density. By comparing the local density of a sample to the local densities of its neighbors, one can identify samples that have a substantially lower density than their neighbors. These are considered outliers. New in version 0.19. Parameters
n_neighbors : int, default=20
Number of neighbors to use by default for kneighbors queries. If n_neighbors is larger than the number of samples provided, all samples will be used.
algorithm : {‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree
‘kd_tree’ will use KDTree
‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to the fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force.
leaf_size : int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
metric : str or callable, default=’minkowski’
metric used for the distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is “precomputed”, X is assumed to be a distance matrix and must be square. X may be a sparse matrix, in which case only “nonzero” elements may be considered neighbors. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics: https://docs.scipy.org/doc/scipy/reference/spatial.distance.html
p : int, default=2
Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
metric_params : dict, default=None
Additional keyword arguments for the metric function.
contamination : ‘auto’ or float, default=’auto’
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. When fitting this is used to define the threshold on the scores of the samples. If ‘auto’, the threshold is determined as in the original paper; if a float, the contamination should be in the range [0, 0.5]. Changed in version 0.22: The default value of contamination changed from 0.1 to 'auto'.
novelty : bool, default=False
By default, LocalOutlierFactor is only meant to be used for outlier detection (novelty=False). Set novelty to True if you want to use LocalOutlierFactor for novelty detection. In this case be aware that you should only use predict, decision_function and score_samples on new unseen data and not on the training set. New in version 0.20.
n_jobs : int, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
negative_outlier_factor_ : ndarray of shape (n_samples,)
The opposite LOF of the training samples. The higher, the more normal. Inliers tend to have a LOF score close to 1 (negative_outlier_factor_ close to -1), while outliers tend to have a larger LOF score. The local outlier factor (LOF) of a sample captures its supposed ‘degree of abnormality’. It is the average of the ratio of the local reachability density of a sample and those of its k-nearest neighbors.
n_neighbors_ : int
The actual number of neighbors used for kneighbors queries.
offset_ : float
Offset used to obtain binary labels from the raw scores. Observations having a negative_outlier_factor smaller than offset_ are detected as abnormal. The offset is set to -1.5 (inliers score around -1), except when a contamination parameter different than “auto” is provided. In that case, the offset is defined in such a way we obtain the expected number of outliers in training. New in version 0.20.
effective_metric_ : str
The effective metric used for the distance computation.
effective_metric_params_ : dict
The effective additional keyword arguments for the metric function.
n_samples_fit_ : int
It is the number of samples in the fitted data. References
1
Breunig, M. M., Kriegel, H. P., Ng, R. T., & Sander, J. (2000, May). LOF: identifying density-based local outliers. In ACM sigmod record. Examples >>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> clf = LocalOutlierFactor(n_neighbors=2)
>>> clf.fit_predict(X)
array([ 1, 1, -1, 1])
>>> clf.negative_outlier_factor_
array([ -0.9821..., -1.0370..., -73.3697..., -0.9821...])
Methods
fit(X[, y]) Fit the local outlier factor detector from the training dataset.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X
set_params(**params) Set the parameters of this estimator.
property decision_function
Shifted opposite of the Local Outlier Factor of X. Bigger is better, i.e. large values correspond to inliers. Only available for novelty detection (when novelty is set to True). The shift offset allows a zero threshold for being an outlier. The argument X is supposed to contain new data: if X contains a point from training, it considers the latter in its own neighborhood. Also, the samples in X are not considered in the neighborhood of any point. Parameters
X : array-like of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples. Returns
shifted_opposite_lof_scores : ndarray of shape (n_samples,)
The shifted opposite of the Local Outlier Factor of each input samples. The lower, the more abnormal. Negative scores represent outliers, positive scores represent inliers.
fit(X, y=None)
Fit the local outlier factor detector from the training dataset. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’
Training data.
y : Ignored
Not used, present for API consistency by convention. Returns
self : LocalOutlierFactor
The fitted local outlier factor detector.
property fit_predict
Fits the model to the training set X and returns the labels. Not available for novelty detection (when novelty is set to True). Label is 1 for an inlier and -1 for an outlier according to the LOF score and the contamination parameter. Parameters
X : array-like of shape (n_samples, n_features), default=None
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples.
y : Ignored
Not used, present for API consistency by convention. Returns
is_inlier : ndarray of shape (n_samples,)
Returns -1 for anomalies/outliers and 1 for inliers.
get_params(deep=True)
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
kneighbors(X=None, n_neighbors=None, return_distance=True)
Finds the K-neighbors of a point. Returns indices of and distances to the neighbors of each point. Parameters
X : array-like, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
n_neighbors : int, default=None
Number of neighbors required for each sample. The default is the value passed to the constructor.
return_distance : bool, default=True
Whether or not to return the distances. Returns
neigh_dist : ndarray of shape (n_queries, n_neighbors)
Array representing the lengths to points, only present if return_distance=True
neigh_ind : ndarray of shape (n_queries, n_neighbors)
Indices of the nearest points in the population matrix. Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1] >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(n_neighbors=1)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))
As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points: >>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
[2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity')
Computes the (weighted) graph of k-Neighbors for points in X Parameters
X : array-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None
The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For metric='precomputed' the shape should be (n_queries, n_indexed). Otherwise the shape should be (n_queries, n_features).
n_neighbors : int, default=None
Number of neighbors for each sample. The default is the value passed to the constructor.
mode : {‘connectivity’, ‘distance’}, default=’connectivity’
Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. Returns
A : sparse matrix of shape (n_queries, n_samples_fit)
n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format. See also
NearestNeighbors.radius_neighbors_graph
Examples >>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(n_neighbors=2)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
[0., 1., 1.],
[1., 0., 1.]])
property predict
Predict the labels (1 inlier, -1 outlier) of X according to LOF. Only available for novelty detection (when novelty is set to True). This method allows generalizing prediction to new observations (not in the training set). Parameters
X : array-like of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples. Returns
is_inlier : ndarray of shape (n_samples,)
Returns -1 for anomalies/outliers and +1 for inliers.
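A minimal novelty-detection sketch (not from the original page; the predicted labels are illustrative): with novelty=True you fit on training data and call predict only on new samples.
>>> from sklearn.neighbors import LocalOutlierFactor
>>> clf = LocalOutlierFactor(n_neighbors=2, novelty=True)
>>> clf.fit([[-1.1], [0.2], [0.3], [0.4]])
LocalOutlierFactor(n_neighbors=2, novelty=True)
>>> clf.predict([[0.25], [101.1]])
array([ 1, -1])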
property score_samples
Opposite of the Local Outlier Factor of X. It is the opposite as bigger is better, i.e. large values correspond to inliers. Only available for novelty detection (when novelty is set to True). The argument X is supposed to contain new data: if X contains a point from training, it considers the latter in its own neighborhood. Also, the samples in X are not considered in the neighborhood of any point. The score_samples on training data is available by considering the negative_outlier_factor_ attribute. Parameters
X : array-like of shape (n_samples, n_features)
The query sample or samples to compute the Local Outlier Factor w.r.t. the training samples. Returns
opposite_lof_scores : ndarray of shape (n_samples,)
The opposite of the Local Outlier Factor of each input samples. The lower, the more abnormal.
set_params(**params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance. | |
doc_28308 | See Migration guide for more details. tf.compat.v1.raw_ops.TopKV2
tf.raw_ops.TopKV2(
input, k, sorted=True, name=None
)
If the input is a vector (rank-1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j]. For matrices (resp. higher rank input), computes the top k entries in each row (resp. vector along the last dimension). Thus, values.shape = indices.shape = input.shape[:-1] + [k]
If two elements are equal, the lower-index element appears first.
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 1-D or higher with last dimension at least k.
k A Tensor of type int32. 0-D. Number of top elements to look for along the last dimension (along each row for matrices).
sorted An optional bool. Defaults to True. If true the resulting k elements will be sorted by the values in descending order.
name A name for the operation (optional).
Returns A tuple of Tensor objects (values, indices). values A Tensor. Has the same type as input.
indices A Tensor of type int32.
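A hedged eager-mode sketch (not from the original page); raw ops take keyword arguments only, and a Python int passed as k is converted to an int32 tensor:
>>> import tensorflow as tf
>>> values, indices = tf.raw_ops.TopKV2(input=tf.constant([1., 4., 2., 3.]), k=2)
>>> values.numpy()
array([4., 3.], dtype=float32)
>>> indices.numpy()
array([1, 3], dtype=int32) | |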
doc_28309 | Returns the key that follows key in the traversal. The following code prints every key in the database db, without having to create a list in memory that contains them all: k = db.firstkey()
while k is not None:
print(k)
k = db.nextkey(k) | |
doc_28310 | An XML declaration was found somewhere other than the start of the input data. | |
doc_28311 | True if the session is new, False otherwise. | |
doc_28312 |
Return the snap setting. See set_snap for details. | |
doc_28313 |
Sparse Principal Components Analysis (SparsePCA). Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Read more in the User Guide. Parameters
n_components : int, default=None
Number of sparse atoms to extract.
alpha : float, default=1
Sparsity controlling parameter. Higher values lead to sparser components.
ridge_alpha : float, default=0.01
Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.
max_iter : int, default=1000
Maximum number of iterations to perform.
tol : float, default=1e-8
Tolerance for the stopping condition.
method : {‘lars’, ‘cd’}, default=’lars’
lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path)
cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobs : int, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
U_init : ndarray of shape (n_samples, n_components), default=None
Initial values for the loadings for warm restart scenarios.
V_init : ndarray of shape (n_components, n_features), default=None
Initial values for the components for warm restart scenarios.
verbose : int or bool, default=False
Controls the verbosity; the higher, the more messages. Defaults to 0.
random_state : int, RandomState instance or None, default=None
Used during dictionary learning. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
components_ : ndarray of shape (n_components, n_features)
Sparse components extracted from the data.
error_ : ndarray
Vector of errors at each iteration.
n_components_ : int
Estimated number of components. New in version 0.23.
n_iter_ : int
Number of iterations run.
mean_ : ndarray of shape (n_features,)
Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0). See also
PCA
MiniBatchSparsePCA
DictionaryLearning
Examples >>> import numpy as np
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.decomposition import SparsePCA
>>> X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)
>>> transformer = SparsePCA(n_components=5, random_state=0)
>>> transformer.fit(X)
SparsePCA(...)
>>> X_transformed = transformer.transform(X)
>>> X_transformed.shape
(200, 5)
>>> # most values in the components_ are zero (sparsity)
>>> np.mean(transformer.components_ == 0)
0.9666...
Methods
fit(X[, y]) Fit the model from data in X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Least Squares projection of the data onto the sparse components.
fit(X, y=None)
Fit the model from data in X. Parameters
X : array-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
y : Ignored
Returns
self : object
Returns the instance itself.
fit_transform(X, y=None, **fit_params)
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
X : array-like of shape (n_samples, n_features)
Input samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_params : dict
Additional fit parameters. Returns
X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True)
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values.
set_params(**params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters. Returns
self : estimator instance
Estimator instance.
transform(X)
Least Squares projection of the data onto the sparse components. To avoid instability issues in case the system is under-determined, regularization can be applied (Ridge regression) via the ridge_alpha parameter. Note that Sparse PCA components orthogonality is not enforced as in PCA hence one cannot use a simple linear projection. Parameters
X : ndarray of shape (n_samples, n_features)
Test data to be transformed, must have the same number of features as the data used to train the model. Returns
X_new : ndarray of shape (n_samples, n_components)
Transformed data. | |
doc_28314 | Simple lightweight unbounded function cache. Sometimes called “memoize”. Returns the same as lru_cache(maxsize=None), creating a thin wrapper around a dictionary lookup for the function arguments. Because it never needs to evict old values, this is smaller and faster than lru_cache() with a size limit. For example: @cache
def factorial(n):
return n * factorial(n-1) if n else 1
>>> factorial(10) # no previously cached result, makes 11 recursive calls
3628800
>>> factorial(5) # just looks up cached value result
120
>>> factorial(12) # makes two new recursive calls, the other 10 are cached
479001600
New in version 3.9. | |
doc_28315 |
Check that all items of arrays differ in at most N Units in the Last Place. Parameters
a, b : array_like
Input arrays to be compared.
maxulp : int, optional
The maximum number of units in the last place that elements of a and b can differ. Default is 1.
dtype : dtype, optional
Data-type to convert a and b to if given. Default is None. Returns
ret : ndarray
Array containing number of representable floating point numbers between items in a and b. Raises
AssertionError
If one or more elements differ by more than maxulp. See also assert_array_almost_equal_nulp
Compare two arrays relatively to their spacing. Notes For computing the ULP difference, this API does not differentiate between various representations of NAN (ULP difference between 0x7fc00000 and 0xffc00000 is zero). Examples >>> a = np.linspace(0., 1., 100)
>>> res = np.testing.assert_array_max_ulp(a, np.arcsin(np.sin(a))) | |
doc_28316 | tf.keras.callbacks.experimental.BackupAndRestore(
backup_dir
)
BackupAndRestore callback is intended to recover from interruptions that happened in the middle of a model.fit execution, by backing up the training states in a temporary checkpoint file (based on TF CheckpointManager) at the end of each epoch. If training is restarted before completion, the training state and model are restored to the most recently saved state at the beginning of a new model.fit() run. Note that the user is responsible for bringing jobs back up. This callback is important for the backup and restore mechanism for fault-tolerance purposes, and the model to be restored from a previous checkpoint is expected to be the same as the one used to back up. If the user changes arguments passed to compile or fit, the checkpoint saved for fault tolerance can become invalid. Note: This callback is not compatible with disabling eager execution. A checkpoint is saved at the end of each epoch; when restoring, we'll redo any partial work from an unfinished epoch in which the training got restarted (so the work done before an interruption doesn't affect the final model state). This works for both single-worker and multi-worker modes; only MirroredStrategy and MultiWorkerMirroredStrategy are supported for now. Example:
import numpy as np
import tensorflow as tf

class InterruptingCallback(tf.keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs=None):
if epoch == 4:
raise RuntimeError('Interrupting!')
callback = tf.keras.callbacks.experimental.BackupAndRestore(
backup_dir="/tmp")
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')
try:
model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10,
batch_size=1, callbacks=[callback, InterruptingCallback()],
verbose=0)
except:
pass
history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10,
batch_size=1, callbacks=[callback], verbose=0)
# Only 6 more epochs are run, since the first training got interrupted at
# zero-indexed epoch 4, second training will continue from 4 to 9.
len(history.history['loss'])
6
Arguments
backup_dir String, path to save the model file. This is the directory in which the system stores temporary files to recover the model from jobs terminated unexpectedly. The directory cannot be reused elsewhere to store other checkpoints, e.g. by BackupAndRestore callback of another training, or by another callback (ModelCheckpoint) of the same training. Methods set_model View source
set_model(
model
)
set_params View source
set_params(
params
) | |
doc_28317 | In-place version of sgn() | |
doc_28318 |
Enable the toggle tool. trigger calls this method when toggled is False. | |
doc_28319 | Seal will disable the automatic creation of mocks when accessing an attribute of the mock being sealed or any of its attributes that are already mocks recursively. If a mock instance with a name or a spec is assigned to an attribute it won’t be considered in the sealing chain. This allows one to prevent seal from fixing part of the mock object. >>> mock = Mock()
>>> mock.submock.attribute1 = 2
>>> mock.not_submock = mock.Mock(name="sample_name")
>>> seal(mock)
>>> mock.new_attribute # This will raise AttributeError.
>>> mock.submock.attribute2 # This will raise AttributeError.
>>> mock.not_submock.attribute2 # This won't raise.
New in version 3.7. | |
doc_28320 |
Random integers of type np.int_ between low and high, inclusive. Return random integers of type np.int_ from the “discrete uniform” distribution in the closed interval [low, high]. If high is None (the default), then results are from [1, low]. The np.int_ type translates to the C long integer type and its precision is platform dependent. This function has been deprecated. Use randint instead. Deprecated since version 1.11.0. Parameters
low : int
Lowest (signed) integer to be drawn from the distribution (unless high=None, in which case this parameter is the highest such integer).
high : int, optional
If provided, the largest (signed) integer to be drawn from the distribution (see above for behavior if high=None).
size : int or tuple of ints, optional
Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned. Returns
out : int or ndarray of ints
size-shaped array of random integers from the appropriate distribution, or a single such random int if size not provided. See also randint
Similar to random_integers, only for the half-open interval [low, high), and 0 is the lowest value if high is omitted. Notes To sample from N evenly spaced floating-point numbers between a and b, use: a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.)
Examples >>> np.random.random_integers(5)
4 # random
>>> type(np.random.random_integers(5))
<class 'numpy.int64'>
>>> np.random.random_integers(5, size=(3,2))
array([[5, 4], # random
[3, 3],
[4, 5]])
Choose five random numbers from the set of five evenly-spaced numbers between 0 and 2.5, inclusive (i.e., from the set \(\{0, 5/8, 10/8, 15/8, 20/8\}\)): >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.
array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) # random
Roll two six sided dice 1000 times and sum the results: >>> d1 = np.random.random_integers(1, 6, 1000)
>>> d2 = np.random.random_integers(1, 6, 1000)
>>> dsums = d1 + d2
Display results as a histogram: >>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(dsums, 11, density=True)
>>> plt.show() | |
doc_28321 |
Fit linear model with Stochastic Gradient Descent. Parameters
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Training data.
y : ndarray of shape (n_samples,)
Target values.
coef_init : ndarray of shape (n_classes, n_features), default=None
The initial coefficients to warm-start the optimization.
intercept_init : ndarray of shape (n_classes,), default=None
The initial intercept to warm-start the optimization.
sample_weight : array-like, shape (n_samples,), default=None
Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified. Returns
self :
Returns an instance of self.
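This fit signature matches SGD-based estimators such as SGDClassifier; a minimal hedged sketch (outputs typical, not from the original page):
>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0., 0.], [1., 1.]]
>>> y = [0, 1]
>>> clf = SGDClassifier()
>>> clf.fit(X, y)
SGDClassifier()
>>> clf.predict([[2., 2.]])
array([1]) | |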
doc_28322 |
Call this whenever the mappable is changed to notify all the callbackSM listeners to the 'changed' signal. | |
doc_28323 | Like the default UnicodeConverter, but it also matches slashes. This is useful for wikis and similar applications: Rule('/<path:wikipage>')
Rule('/<path:wikipage>/edit')
Parameters
map (Map) – the Map.
args (Any) –
kwargs (Any) – Return type
None | |
doc_28324 | A base class for connection-related issues. Subclasses are BrokenPipeError, ConnectionAbortedError, ConnectionRefusedError and ConnectionResetError. | |
doc_28325 | operator.__imul__(a, b)
a = imul(a, b) is equivalent to a *= b.
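A quick sketch (not from the original page): for mutable sequences, imul mutates the object in place, exactly as *= would.
>>> from operator import imul
>>> a = [1, 2]
>>> imul(a, 2)
[1, 2, 1, 2]
>>> a
[1, 2, 1, 2] | |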
doc_28326 | An integer specifying the number of “overflow” objects the last page can contain. This extends the paginate_by limit on the last page by up to paginate_orphans, in order to keep the last page from having a very small number of objects.
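A hedged sketch (ArticleListView and Article are hypothetical names): with 23 objects, paginate_by = 10 and paginate_orphans = 3 produce pages of 10 and 13 objects instead of 10, 10 and 3.
from django.views.generic import ListView
from myapp.models import Article  # hypothetical model

class ArticleListView(ListView):
    model = Article
    paginate_by = 10
    paginate_orphans = 3  # the last page may absorb up to 3 overflow objects | |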
doc_28327 |
Bases: matplotlib.backend_tools.ViewsPositionsBase Move forward in the view lim stack. default_keymap=['right', 'v', 'MouseButton.FORWARD']
Keymap to associate with this tool. list[str]: List of keys that will trigger this tool when a keypress event is emitted on self.figure.canvas.
description='Forward to next view'
Description of the Tool. str: Tooltip used if the Tool is included in a Toolbar.
image='forward'
Filename of the image. str: Filename of the image to use in a Toolbar. If None, the name is used as a label in the toolbar button. | |
doc_28328 | This is a standard context defined by the General Decimal Arithmetic Specification. Precision is set to nine. Rounding is set to ROUND_HALF_UP. All flags are cleared. All traps are enabled (treated as exceptions) except Inexact, Rounded, and Subnormal. Because many of the traps are enabled, this context is useful for debugging.
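A short sketch (not from the original page) of switching to this context; 1/7 is rounded to nine significant digits with ROUND_HALF_UP:
>>> import decimal
>>> decimal.setcontext(decimal.BasicContext)
>>> decimal.getcontext().prec
9
>>> decimal.Decimal(1) / decimal.Decimal(7)
Decimal('0.142857143') | |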
doc_28329 | A RegexValidator subclass that ensures a value looks like a URL, and raises an error code of 'invalid' if it doesn’t. Loopback addresses and reserved IP spaces are considered valid. Literal IPv6 addresses (RFC 3986#section-3.2.2) and Unicode domains are both supported. In addition to the optional arguments of its parent RegexValidator class, URLValidator accepts an extra optional attribute:
schemes
URL/URI scheme list to validate against. If not provided, the default list is ['http', 'https', 'ftp', 'ftps']. As a reference, the IANA website provides a full list of valid URI schemes.
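A minimal hedged sketch (not from the original page) restricting the scheme list:
>>> from django.core.validators import URLValidator
>>> validate = URLValidator(schemes=['https'])
>>> validate('https://www.example.com/')  # passes silently (returns None)
>>> validate('ftp://example.com/')  # raises ValidationError: scheme not allowed | |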
doc_28330 | REST_FRAMEWORK = {
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.AnonRateThrottle',
'rest_framework.throttling.UserRateThrottle'
],
'DEFAULT_THROTTLE_RATES': {
'anon': '100/day',
'user': '1000/day'
}
}
The rate descriptions used in DEFAULT_THROTTLE_RATES may include second, minute, hour or day as the throttle period. You can also set the throttling policy on a per-view or per-viewset basis, using the APIView class-based views. from rest_framework.response import Response
from rest_framework.throttling import UserRateThrottle
from rest_framework.views import APIView
class ExampleView(APIView):
throttle_classes = [UserRateThrottle]
def get(self, request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
If you're using the @api_view decorator with function based views you can use the following decorator. @api_view(['GET'])
@throttle_classes([UserRateThrottle])
def example_view(request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
It's also possible to set throttle classes for routes that are created using the @action decorator. Throttle classes set in this way will override any viewset level class settings. @action(detail=True, methods=["post"], throttle_classes=[UserRateThrottle])
def example_adhoc_method(request, pk=None):
content = {
'status': 'request was permitted'
}
return Response(content)
How clients are identified
The X-Forwarded-For HTTP header and REMOTE_ADDR WSGI variable are used to uniquely identify client IP addresses for throttling. If the X-Forwarded-For header is present then it will be used, otherwise the value of the REMOTE_ADDR variable from the WSGI environment will be used. If you need to strictly identify unique client IP addresses, you'll need to first configure the number of application proxies that the API runs behind by setting the NUM_PROXIES setting. This setting should be an integer of zero or more. If set to non-zero then the client IP will be identified as being the last IP address in the X-Forwarded-For header, once any application proxy IP addresses have first been excluded. If set to zero, then the REMOTE_ADDR value will always be used as the identifying IP address. It is important to understand that if you configure the NUM_PROXIES setting, then all clients behind a unique NAT'd gateway will be treated as a single client. Further context on how the X-Forwarded-For header works, and identifying a remote client IP can be found here.
Setting up the cache
The throttle classes provided by REST framework use Django's cache backend. You should make sure that you've set appropriate cache settings. The default value of LocMemCache backend should be okay for simple setups. See Django's cache documentation for more details. If you need to use a cache other than 'default', you can do so by creating a custom throttle class and setting the cache attribute. For example: from django.core.cache import caches
class CustomAnonRateThrottle(AnonRateThrottle):
cache = caches['alternate']
You'll need to remember to also set your custom throttle class in the 'DEFAULT_THROTTLE_CLASSES' settings key, or using the throttle_classes view attribute. API Reference AnonRateThrottle The AnonRateThrottle will only ever throttle unauthenticated users. The IP address of the incoming request is used to generate a unique key to throttle against. The allowed request rate is determined from one of the following (in order of preference). The rate property on the class, which may be provided by overriding AnonRateThrottle and setting the property. The DEFAULT_THROTTLE_RATES['anon'] setting. AnonRateThrottle is suitable if you want to restrict the rate of requests from unknown sources. UserRateThrottle The UserRateThrottle will throttle users to a given rate of requests across the API. The user id is used to generate a unique key to throttle against. Unauthenticated requests will fall back to using the IP address of the incoming request to generate a unique key to throttle against. The allowed request rate is determined from one of the following (in order of preference). The rate property on the class, which may be provided by overriding UserRateThrottle and setting the property. The DEFAULT_THROTTLE_RATES['user'] setting. An API may have multiple UserRateThrottles in place at the same time. To do so, override UserRateThrottle and set a unique "scope" for each class. For example, multiple user throttle rates could be implemented by using the following classes... class BurstRateThrottle(UserRateThrottle):
scope = 'burst'
class SustainedRateThrottle(UserRateThrottle):
scope = 'sustained'
...and the following settings. REST_FRAMEWORK = {
'DEFAULT_THROTTLE_CLASSES': [
'example.throttles.BurstRateThrottle',
'example.throttles.SustainedRateThrottle'
],
'DEFAULT_THROTTLE_RATES': {
'burst': '60/min',
'sustained': '1000/day'
}
}
UserRateThrottle is suitable if you want simple global rate restrictions per-user. ScopedRateThrottle The ScopedRateThrottle class can be used to restrict access to specific parts of the API. This throttle will only be applied if the view that is being accessed includes a .throttle_scope property. The unique throttle key will then be formed by concatenating the "scope" of the request with the unique user id or IP address. The allowed request rate is determined by the DEFAULT_THROTTLE_RATES setting using a key from the request "scope". For example, given the following views... class ContactListView(APIView):
throttle_scope = 'contacts'
...
class ContactDetailView(APIView):
throttle_scope = 'contacts'
...
class UploadView(APIView):
throttle_scope = 'uploads'
...
...and the following settings. REST_FRAMEWORK = {
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.ScopedRateThrottle',
],
'DEFAULT_THROTTLE_RATES': {
'contacts': '1000/day',
'uploads': '20/day'
}
}
User requests to either ContactListView or ContactDetailView would be restricted to a total of 1000 requests per-day. User requests to UploadView would be restricted to 20 requests per day. Custom throttles To create a custom throttle, override BaseThrottle and implement .allow_request(self, request, view). The method should return True if the request should be allowed, and False otherwise. Optionally you may also override the .wait() method. If implemented, .wait() should return a recommended number of seconds to wait before attempting the next request, or None. The .wait() method will only be called if .allow_request() has previously returned False. If the .wait() method is implemented and the request is throttled, then a Retry-After header will be included in the response. Example The following is an example of a rate throttle, that will randomly throttle 1 in every 10 requests. import random
class RandomRateThrottle(throttling.BaseThrottle):
def allow_request(self, request, view):
return random.randint(1, 10) != 1
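The .wait() hook is not exercised above; the following is a hedged sketch of a throttle that implements it (the class name and ten-second window are illustrative, and the class-level history dict is per-process state, so this is not production-ready):
import time

from rest_framework import throttling

class OncePerTenSecondsThrottle(throttling.BaseThrottle):
    TIMEOUT = 10  # seconds a client must wait between allowed requests
    history = {}  # client identity -> timestamp of the last allowed request

    def allow_request(self, request, view):
        ident = self.get_ident(request)  # BaseThrottle helper: identifies the client IP
        now = time.time()
        last = self.history.get(ident)
        if last is not None and now - last < self.TIMEOUT:
            return False
        self.history[ident] = now
        return True

    def wait(self):
        # Recommended seconds before retrying; surfaced as a Retry-After header.
        return self.TIMEOUT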
| |
doc_28331 |
Convert x using the unit type of the xaxis. If the artist is not contained in an Axes or if the xaxis does not have units, x itself is returned. | |
doc_28332 |
Get parameters for this estimator. Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
params : dict
Parameter names mapped to their values. | |
doc_28333 | Determine if the formatted representation of object is “readable”, or can be used to reconstruct the value using eval(). This always returns False for recursive objects. >>> pprint.isreadable(stuff)
False | |
doc_28334 | tf.experimental.numpy.add(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.add.
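A hedged eager-mode sketch (not from the original page; the exact repr can vary across TF versions):
>>> import tensorflow.experimental.numpy as tnp
>>> tnp.add(tnp.asarray([1, 2]), 3)
<tf.Tensor: shape=(2,), dtype=int64, numpy=array([4, 5])> | |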
doc_28335 | Print the formatted representation of object on the configured stream, followed by a newline. | |
doc_28336 |
Pandas ExtensionArray for storing Period data. Users should use period_array() to create new instances. Alternatively, array() can be used to create new instances from a sequence of Period scalars. Parameters
values : Union[PeriodArray, Series[period], ndarray[int], PeriodIndex]
The data to store. These should be arrays that can be directly converted to ordinals without inference or copy (PeriodArray, ndarray[int64]), or a box around such an array (Series[period], PeriodIndex).
dtype : PeriodDtype, optional
A PeriodDtype instance from which to extract a freq. If both freq and dtype are specified, then the frequencies must match.
freq : str or DateOffset
The freq to use for the array. Mostly applicable when values is an ndarray of integers, when freq is required. When values is a PeriodArray (or box around), it’s checked that values.freq matches freq.
copy : bool, default False
Whether to copy the ordinals before storing. See also Period
Represents a period of time. PeriodIndex
Immutable Index for period data. period_range
Create a fixed-frequency PeriodArray. array
Construct a pandas array. Notes There are two components to a PeriodArray ordinals : integer ndarray freq : pd.tseries.offsets.Offset The values are physically stored as a 1-D ndarray of integers. These are called “ordinals” and represent some kind of offset from a base. The freq indicates the span covered by each element of the array. All elements in the PeriodArray have the same freq. Attributes
None Methods
None | |
doc_28337 | Returns an iterator over module buffers. Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor – module buffer Example: >>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L) | |
doc_28338 | Parse a query string given as a string argument (data of type application/x-www-form-urlencoded). Data are returned as a list of name, value pairs. The optional argument keep_blank_values is a flag indicating whether blank values in percent-encoded queries should be treated as blank strings. A true value indicates that blanks should be retained as blank strings. The default false value indicates that blank values are to be ignored and treated as if they were not included. The optional argument strict_parsing is a flag indicating what to do with parsing errors. If false (the default), errors are silently ignored. If true, errors raise a ValueError exception. The optional encoding and errors parameters specify how to decode percent-encoded sequences into Unicode characters, as accepted by the bytes.decode() method. The optional argument max_num_fields is the maximum number of fields to read. If set, a ValueError is raised if there are more than max_num_fields fields read. The optional argument separator is the symbol to use for separating the query arguments. It defaults to &. Use the urllib.parse.urlencode() function to convert such lists of pairs into query strings. Changed in version 3.2: Added encoding and errors parameters. Changed in version 3.8: Added max_num_fields parameter. Changed in version 3.9.2: Added separator parameter with the default value of &. Python versions earlier than Python 3.9.2 allowed using both ; and & as query parameter separators. This has been changed to allow only a single separator key, with & as the default separator.
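A short usage sketch of the round trip described above:
from urllib.parse import parse_qsl, urlencode

pairs = parse_qsl("a=1&b=&b=2", keep_blank_values=True)
print(pairs)             # [('a', '1'), ('b', ''), ('b', '2')]
print(urlencode(pairs))  # 'a=1&b=&b=2' -- back to a query string | |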
doc_28339 | Remove this file or symbolic link. If the path points to a directory, use Path.rmdir() instead. If missing_ok is false (the default), FileNotFoundError is raised if the path does not exist. If missing_ok is true, FileNotFoundError exceptions will be ignored (same behavior as the POSIX rm -f command). Changed in version 3.8: The missing_ok parameter was added.
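A minimal sketch of the missing_ok behavior (the file name is illustrative):
from pathlib import Path

p = Path("example.txt")
p.touch()
p.unlink()                 # file removed
p.unlink(missing_ok=True)  # second call is a no-op on Python 3.8+ | |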
doc_28340 |
Determine the type of data indicated by the target. Note that this type is the most specific type that can be inferred. For example:
binary is more specific but compatible with multiclass.
multiclass of integers is more specific but compatible with continuous.
multilabel-indicator is more specific but compatible with multiclass-multioutput. Parameters
y : array-like
Returns
target_type : str
One of: ‘continuous’: y is an array-like of floats that are not all integers, and is 1d or a column vector. ‘continuous-multioutput’: y is a 2d array of floats that are not all integers, and both dimensions are of size > 1. ‘binary’: y contains <= 2 discrete values and is 1d or a column vector. ‘multiclass’: y contains more than two discrete values, is not a sequence of sequences, and is 1d or a column vector. ‘multiclass-multioutput’: y is a 2d array that contains more than two discrete values, is not a sequence of sequences, and both dimensions are of size > 1. ‘multilabel-indicator’: y is a label indicator matrix, an array of two dimensions with at least two columns, and at most 2 unique values. ‘unknown’: y is array-like but none of the above, such as a 3d array, sequence of sequences, or an array of non-sequence objects. Examples >>> import numpy as np
>>> type_of_target([0.1, 0.6])
'continuous'
>>> type_of_target([1, -1, -1, 1])
'binary'
>>> type_of_target(['a', 'b', 'a'])
'binary'
>>> type_of_target([1.0, 2.0])
'binary'
>>> type_of_target([1, 0, 2])
'multiclass'
>>> type_of_target([1.0, 0.0, 3.0])
'multiclass'
>>> type_of_target(['a', 'b', 'c'])
'multiclass'
>>> type_of_target(np.array([[1, 2], [3, 1]]))
'multiclass-multioutput'
>>> type_of_target([[1, 2]])
'multilabel-indicator'
>>> type_of_target(np.array([[1.5, 2.0], [3.0, 1.6]]))
'continuous-multioutput'
>>> type_of_target(np.array([[0, 1], [1, 1]]))
'multilabel-indicator' | |
doc_28341 | Open the database file file and return a corresponding object. If the database file already exists, the whichdb() function is used to determine its type and the appropriate module is used; if it does not exist, the first module listed above that can be imported is used. The optional flag argument can be:
Value Meaning
'r' Open existing database for reading only (default)
'w' Open existing database for reading and writing
'c' Open database for reading and writing, creating it if it doesn’t exist
'n' Always create a new, empty database, open for reading and writing The optional mode argument is the Unix mode of the file, used only when the database has to be created. It defaults to octal 0o666 (and will be modified by the prevailing umask).
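A minimal usage sketch of the 'c' and 'r' flags (the file name is illustrative):
import dbm

with dbm.open("cache", "c") as db:  # create the database if it doesn't exist
    db["key"] = "value"             # keys and values are stored as bytes
with dbm.open("cache", "r") as db:  # reopen read-only
    print(db["key"])                # b'value' | |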
doc_28342 | Set to sys.maxsize for big memory tests. | |
doc_28343 | tf.experimental.numpy.cos(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.cos. | |
doc_28344 | Set-group-ID bit. This bit has several special uses. For a directory it indicates that BSD semantics is to be used for that directory: files created there inherit their group ID from the directory, not from the effective group ID of the creating process, and directories created there will also get the S_ISGID bit set. For a file that does not have the group execution bit (S_IXGRP) set, the set-group-ID bit indicates mandatory file/record locking (see also S_ENFMT). | |
doc_28345 |
Bases: matplotlib.colors.LinearSegmentedColormap A LinearSegmentedColormap in which color varies smoothly. This class is a simplification of LinearSegmentedColormap; unlike the base class, it does not support jumps in color intensities. Parameters
name : str
Name of colormap.
segmented_data : dict
Dictionary of ‘red’, ‘green’, ‘blue’, and (optionally) ‘alpha’ values. Each color key contains a list of x, y tuples. x must increase monotonically from 0 to 1 and corresponds to input values for a mappable object (e.g. an image). y corresponds to the color intensity.
__init__(name, segmented_data, **kwargs) [source]
Create a color map from linear mapping segments. The segmentdata argument is a dictionary with red, green and blue entries. Each entry should be a list of x, y0, y1 tuples, forming rows in a table. Entries for alpha are optional. Example: suppose you want red to increase from 0 to 1 over the bottom half, green to do the same over the middle half, and blue over the top half. Then you would use: cdict = {'red': [(0.0, 0.0, 0.0),
(0.5, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'green': [(0.0, 0.0, 0.0),
(0.25, 0.0, 0.0),
(0.75, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'blue': [(0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)]}
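For context, a hedged sketch of turning the cdict above into a usable colormap via the base class (the colormap name is illustrative):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

cmap = LinearSegmentedColormap("my_cmap", segmentdata=cdict)
plt.imshow(np.linspace(0, 1, 256).reshape(1, -1), aspect="auto", cmap=cmap)
plt.show()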
Each row in the table for a given color is a sequence of x, y0, y1 tuples. In each sequence, x must increase monotonically from 0 to 1. For any input value z falling between x[i] and x[i+1], the output value of a given color will be linearly interpolated between y1[i] and y0[i+1]:
row i:    x  y0  y1
                 /
                /
row i+1:  x  y0  y1
Hence y0 in the first row and y1 in the last row are never used. See also
LinearSegmentedColormap.from_list
Static method; factory function for generating a smoothly-varying LinearSegmentedColormap.
makeMappingArray
For information about making a mapping array. | |
doc_28346 |
Average anomaly score of X of the base classifiers. The anomaly score of an input sample is computed as the mean anomaly score of the trees in the forest. The measure of normality of an observation given a tree is the depth of the leaf containing this observation, which is equivalent to the number of splittings required to isolate this point. In case of several observations n_left in the leaf, the average path length of a n_left samples isolation tree is added. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. Returns
scores : ndarray of shape (n_samples,)
The anomaly score of the input samples. The lower, the more abnormal. Negative scores represent outliers, positive scores represent inliers.
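A short usage sketch (assuming this method belongs to sklearn.ensemble.IsolationForest; the toy data are illustrative):
import numpy as np
from sklearn.ensemble import IsolationForest

X = np.array([[-1.1], [0.3], [0.5], [100.0]])
clf = IsolationForest(random_state=0).fit(X)
print(clf.score_samples(X))  # the outlier at 100.0 receives the lowest score | |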
doc_28347 | Compile the source into a code or AST object. Code objects can be executed by exec() or eval(). source can either be a normal string, a byte string, or an AST object. Refer to the ast module documentation for information on how to work with AST objects. The filename argument should give the file from which the code was read; pass some recognizable value if it wasn’t read from a file ('<string>' is commonly used). The mode argument specifies what kind of code must be compiled; it can be 'exec' if source consists of a sequence of statements, 'eval' if it consists of a single expression, or 'single' if it consists of a single interactive statement (in the latter case, expression statements that evaluate to something other than None will be printed). The optional arguments flags and dont_inherit control which compiler options should be activated and which future features should be allowed. If neither is present (or both are zero) the code is compiled with the same flags that affect the code that is calling compile(). If the flags argument is given and dont_inherit is not (or is zero) then the compiler options and the future statements specified by the flags argument are used in addition to those that would be used anyway. If dont_inherit is a non-zero integer then the flags argument is it – the flags (future features and compiler options) in the surrounding code are ignored. Compiler options and future statements are specified by bits which can be bitwise ORed together to specify multiple options. The bitfield required to specify a given future feature can be found as the compiler_flag attribute on the _Feature instance in the __future__ module. Compiler flags can be found in ast module, with PyCF_ prefix. The argument optimize specifies the optimization level of the compiler; the default value of -1 selects the optimization level of the interpreter as given by -O options. Explicit levels are 0 (no optimization; __debug__ is true), 1 (asserts are removed, __debug__ is false) or 2 (docstrings are removed too). This function raises SyntaxError if the compiled source is invalid, and ValueError if the source contains null bytes. If you want to parse Python code into its AST representation, see ast.parse().
Raises an auditing event compile with arguments source and filename. This event may also be raised by implicit compilation. Note When compiling a string with multi-line code in 'single' or 'eval' mode, input must be terminated by at least one newline character. This is to facilitate detection of incomplete and complete statements in the code module. Warning It is possible to crash the Python interpreter with a sufficiently large/complex string when compiling to an AST object due to stack depth limitations in Python’s AST compiler. Changed in version 3.2: Allowed use of Windows and Mac newlines. Also input in 'exec' mode does not have to end in a newline anymore. Added the optimize parameter. Changed in version 3.5: Previously, TypeError was raised when null bytes were encountered in source. New in version 3.8: ast.PyCF_ALLOW_TOP_LEVEL_AWAIT can now be passed in flags to enable support for top-level await, async for, and async with. | |
doc_28348 |
Plot detection error tradeoff (DET) curve. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. New in version 0.24. Parameters
estimator : estimator instance
Fitted classifier or a fitted Pipeline in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
response_method : {‘predict_proba’, ‘decision_function’, ‘auto’}, default=’auto’
Specifies whether to use predict_proba or decision_function as the predicted target response. If set to ‘auto’, predict_proba is tried first and if it does not exist decision_function is tried next.
name : str, default=None
Name of DET curve for labeling. If None, use the name of the estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
pos_label : str or int, default=None
The label of the positive class. When pos_label=None, if y_true is in {-1, 1} or {0, 1}, pos_label is set to 1, otherwise an error will be raised. Returns
display : DetCurveDisplay
Object that stores computed values. See also
det_curve
Compute error rates for different probability thresholds.
DetCurveDisplay
DET curve visualization.
plot_roc_curve
Plot Receiver operating characteristic (ROC) curve. Examples >>> import matplotlib.pyplot as plt
>>> from sklearn import datasets, metrics, model_selection, svm
>>> X, y = datasets.make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = model_selection.train_test_split(
... X, y, random_state=0)
>>> clf = svm.SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> metrics.plot_det_curve(clf, X_test, y_test)
>>> plt.show() | |
doc_28349 | See Migration guide for more details. tf.compat.v1.keras.layers.experimental.EinsumDense
tf.keras.layers.experimental.EinsumDense(
equation, output_shape, activation=None, bias_axes=None,
kernel_initializer='glorot_uniform',
bias_initializer='zeros', kernel_regularizer=None,
bias_regularizer=None, activity_regularizer=None, kernel_constraint=None,
bias_constraint=None, **kwargs
)
This layer can perform einsum calculations of arbitrary dimensionality.
Arguments
equation An equation describing the einsum to perform. This equation must be a valid einsum string of the form ab,bc->ac, ...ab,bc->...ac, or ab...,bc->ac... where 'ab', 'bc', and 'ac' can be any valid einsum axis expression sequence.
output_shape The expected shape of the output tensor (excluding the batch dimension and any dimensions represented by ellipses). You can specify None for any dimension that is unknown or can be inferred from the input shape.
activation Activation function to use. If you don't specify anything, no activation is applied (that is, a "linear" activation: a(x) = x).
bias_axes A string containing the output dimension(s) to apply a bias to. Each character in the bias_axes string should correspond to a character in the output portion of the equation string.
kernel_initializer Initializer for the kernel weights matrix.
bias_initializer Initializer for the bias vector.
kernel_regularizer Regularizer function applied to the kernel weights matrix.
bias_regularizer Regularizer function applied to the bias vector.
activity_regularizer Regularizer function applied to the output of the layer (its "activation").
kernel_constraint Constraint function applied to the kernel weights matrix.
bias_constraint Constraint function applied to the bias vector. Examples: Biased dense layer with einsums This example shows how to instantiate a standard Keras dense layer using einsum operations. This example is equivalent to tf.keras.layers.Dense(64, use_bias=True).
layer = EinsumDense("ab,bc->ac", output_shape=64, bias_axes="c")
input_tensor = tf.keras.Input(shape=[32])
output_tensor = layer(input_tensor)
output_tensor
<... shape=(None, 64) dtype=...>
Applying a dense layer to a sequence This example shows how to instantiate a layer that applies the same dense operation to every element in a sequence. Here, the 'output_shape' has two values (since there are two non-batch dimensions in the output); the first dimension in the output_shape is None, because the sequence dimension b has an unknown shape.
layer = EinsumDense("abc,cd->abd",
output_shape=(None, 64),
bias_axes="d")
input_tensor = tf.keras.Input(shape=[32, 128])
output_tensor = layer(input_tensor)
output_tensor
<... shape=(None, 32, 64) dtype=...>
Applying a dense layer to a sequence using ellipses This example shows how to instantiate a layer that applies the same dense operation to every element in a sequence, but uses the ellipsis notation instead of specifying the batch and sequence dimensions. Because we are using ellipsis notation and have specified only one axis, the output_shape arg is a single value. When instantiated in this way, the layer can handle any number of sequence dimensions - including the case where no sequence dimension exists.
layer = EinsumDense("...x,xy->...y", output_shape=64, bias_axes="y")
input_tensor = tf.keras.Input(shape=[32, 128])
output_tensor = layer(input_tensor)
output_tensor
<... shape=(None, 32, 64) dtype=...> | |
doc_28350 | window.insstr(y, x, str[, attr])
Insert a character string (as many characters as will fit on the line) before the character under the cursor. All characters to the right of the cursor are shifted right, with the rightmost characters on the line being lost. The cursor position does not change (after moving to y, x, if specified). | |
doc_28351 | Like decode(), except that it accepts a source bytes object and returns the corresponding decoded bytes object. | |
doc_28352 | See Migration guide for more details. tf.compat.v1.raw_ops.RetrieveTPUEmbeddingADAMParameters
tf.raw_ops.RetrieveTPUEmbeddingADAMParameters(
num_shards, shard_id, table_id=-1, table_name='', config='',
name=None
)
An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
Args
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns A tuple of Tensor objects (parameters, momenta, velocities). parameters A Tensor of type float32.
momenta A Tensor of type float32.
velocities A Tensor of type float32. | |
doc_28353 |
# Using the standard RequestFactory API to create a form POST request
factory = APIRequestFactory()
request = factory.post('/notes/', {'title': 'new idea'})
Using the format argument Methods which create a request body, such as post, put and patch, include a format argument, which makes it easy to generate requests using a content type other than multipart form data. For example: # Create a JSON POST request
factory = APIRequestFactory()
request = factory.post('/notes/', {'title': 'new idea'}, format='json')
By default the available formats are 'multipart' and 'json'. For compatibility with Django's existing RequestFactory the default format is 'multipart'. To support a wider set of request formats, or change the default format, see the configuration section. Explicitly encoding the request body If you need to explicitly encode the request body, you can do so by setting the content_type flag. For example: request = factory.post('/notes/', json.dumps({'title': 'new idea'}), content_type='application/json')
PUT and PATCH with form data One difference worth noting between Django's RequestFactory and REST framework's APIRequestFactory is that multipart form data will be encoded for methods other than just .post(). For example, using APIRequestFactory, you can make a form PUT request like so: factory = APIRequestFactory()
request = factory.put('/notes/547/', {'title': 'remember to email dave'})
Using Django's RequestFactory, you'd need to explicitly encode the data yourself: from django.test.client import encode_multipart, RequestFactory
factory = RequestFactory()
data = {'title': 'remember to email dave'}
content = encode_multipart('BoUnDaRyStRiNg', data)
content_type = 'multipart/form-data; boundary=BoUnDaRyStRiNg'
request = factory.put('/notes/547/', content, content_type=content_type)
Forcing authentication When testing views directly using a request factory, it's often convenient to be able to directly authenticate the request, rather than having to construct the correct authentication credentials. To forcibly authenticate a request, use the force_authenticate() method. from rest_framework.test import force_authenticate
factory = APIRequestFactory()
user = User.objects.get(username='olivia')
view = AccountDetail.as_view()
# Make an authenticated request to the view...
request = factory.get('/accounts/django-superstars/')
force_authenticate(request, user=user)
response = view(request)
The signature for the method is force_authenticate(request, user=None, token=None). When making the call, either or both of the user and token may be set. For example, when forcibly authenticating using a token, you might do something like the following: user = User.objects.get(username='olivia')
request = factory.get('/accounts/django-superstars/')
force_authenticate(request, user=user, token=user.auth_token)
Note: force_authenticate directly sets request.user to the in-memory user instance. If you are re-using the same user instance across multiple tests that update the saved user state, you may need to call refresh_from_db() between tests. Note: When using APIRequestFactory, the object that is returned is Django's standard HttpRequest, and not REST framework's Request object, which is only generated once the view is called. This means that setting attributes directly on the request object may not always have the effect you expect. For example, setting .token directly will have no effect, and setting .user directly will only work if session authentication is being used. # Request will only authenticate if `SessionAuthentication` is in use.
request = factory.get('/accounts/django-superstars/')
request.user = user
response = view(request)
Forcing CSRF validation By default, requests created with APIRequestFactory will not have CSRF validation applied when passed to a REST framework view. If you need to explicitly turn CSRF validation on, you can do so by setting the enforce_csrf_checks flag when instantiating the factory. factory = APIRequestFactory(enforce_csrf_checks=True)
Note: It's worth noting that Django's standard RequestFactory doesn't need to include this option, because when using regular Django the CSRF validation takes place in middleware, which is not run when testing views directly. When using REST framework, CSRF validation takes place inside the view, so the request factory needs to disable view-level CSRF checks. APIClient Extends Django's existing Client class. Making requests The APIClient class supports the same request interface as Django's standard Client class. This means that the standard .get(), .post(), .put(), .patch(), .delete(), .head() and .options() methods are all available. For example: from rest_framework.test import APIClient
client = APIClient()
client.post('/notes/', {'title': 'new idea'}, format='json')
To support a wider set of request formats, or change the default format, see the configuration section. Authenticating .login(**kwargs) The login method functions exactly as it does with Django's regular Client class. This allows you to authenticate requests against any views which include SessionAuthentication. # Make all requests in the context of a logged in session.
client = APIClient()
client.login(username='lauren', password='secret')
To logout, call the logout method as usual. # Log out
client.logout()
The login method is appropriate for testing APIs that use session authentication, for example web sites which include AJAX interaction with the API. .credentials(**kwargs) The credentials method can be used to set headers that will then be included on all subsequent requests by the test client. from rest_framework.authtoken.models import Token
from rest_framework.test import APIClient
# Include an appropriate `Authorization:` header on all requests.
token = Token.objects.get(user__username='lauren')
client = APIClient()
client.credentials(HTTP_AUTHORIZATION='Token ' + token.key)
Note that calling credentials a second time overwrites any existing credentials. You can unset any existing credentials by calling the method with no arguments. # Stop including any credentials
client.credentials()
The credentials method is appropriate for testing APIs that require authentication headers, such as basic authentication, OAuth1a and OAuth2 authentication, and simple token authentication schemes. .force_authenticate(user=None, token=None) Sometimes you may want to bypass authentication entirely and force all requests by the test client to be automatically treated as authenticated. This can be a useful shortcut if you're testing the API but don't want to have to construct valid authentication credentials in order to make test requests. user = User.objects.get(username='lauren')
client = APIClient()
client.force_authenticate(user=user)
To unauthenticate subsequent requests, call force_authenticate setting the user and/or token to None. client.force_authenticate(user=None)
CSRF validation By default CSRF validation is not applied when using APIClient. If you need to explicitly enable CSRF validation, you can do so by setting the enforce_csrf_checks flag when instantiating the client. client = APIClient(enforce_csrf_checks=True)
As usual CSRF validation will only apply to any session authenticated views. This means CSRF validation will only occur if the client has been logged in by calling login(). RequestsClient REST framework also includes a client for interacting with your application using the popular Python library, requests. This may be useful if: You are expecting to interface with the API primarily from another Python service, and want to test the service at the same level as the client will see. You want to write tests in such a way that they can also be run against a staging or live environment. (See "Live tests" below.) This exposes exactly the same interface as if you were using a requests session directly. from rest_framework.test import RequestsClient
client = RequestsClient()
response = client.get('http://testserver/users/')
assert response.status_code == 200
Note that the requests client requires you to pass fully qualified URLs. RequestsClient and working with the database The RequestsClient class is useful if you want to write tests that solely interact with the service interface. This is a little stricter than using the standard Django test client, as it means that all interactions should be via the API. If you're using RequestsClient you'll want to ensure that test setup and result assertions are performed as regular API calls, rather than by interacting with the database models directly. For example, rather than checking that Customer.objects.count() == 3 you would list the customers endpoint, and ensure that it contains three records. Headers & Authentication Custom headers and authentication credentials can be provided in the same way as when using a standard requests.Session instance. from requests.auth import HTTPBasicAuth
client.auth = HTTPBasicAuth('user', 'pass')
client.headers.update({'x-test': 'true'})
CSRF If you're using SessionAuthentication then you'll need to include a CSRF token for any POST, PUT, PATCH or DELETE requests. You can do so by following the same flow that a JavaScript based client would use. First make a GET request in order to obtain a CSRF token, then present that token in the following request. For example... client = RequestsClient()
# Obtain a CSRF token.
response = client.get('http://testserver/homepage/')
assert response.status_code == 200
csrftoken = response.cookies['csrftoken']
# Interact with the API.
response = client.post('http://testserver/organisations/', json={
'name': 'MegaCorp',
'status': 'active'
}, headers={'X-CSRFToken': csrftoken})
assert response.status_code == 200
Live tests With careful usage both the RequestsClient and the CoreAPIClient provide the ability to write test cases that can run either in development or directly against your staging server or production environment. Using this style to create basic tests of a few core pieces of functionality is a powerful way to validate your live service. Doing so may require some careful attention to setup and teardown to ensure that the tests run in a way that they do not directly affect customer data. CoreAPIClient The CoreAPIClient allows you to interact with your API using the Python coreapi client library. # Fetch the API schema
client = CoreAPIClient()
schema = client.get('http://testserver/schema/')
# Create a new organisation
params = {'name': 'MegaCorp', 'status': 'active'}
client.action(schema, ['organisations', 'create'], params)
# Ensure that the organisation exists in the listing
data = client.action(schema, ['organisations', 'list'])
assert(len(data) == 1)
assert(data == [{'name': 'MegaCorp', 'status': 'active'}])
Headers & Authentication Custom headers and authentication may be used with CoreAPIClient in a similar way as with RequestsClient. from requests.auth import HTTPBasicAuth
client = CoreAPIClient()
client.session.auth = HTTPBasicAuth('user', 'pass')
client.session.headers.update({'x-test': 'true'})
API Test cases REST framework includes the following test case classes that mirror the existing Django test case classes, but use APIClient instead of Django's default Client. APISimpleTestCase APITransactionTestCase APITestCase APILiveServerTestCase Example You can use any of REST framework's test case classes as you would for the regular Django test case classes. The self.client attribute will be an APIClient instance. from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase
from myproject.apps.core.models import Account
class AccountTests(APITestCase):
def test_create_account(self):
"""
Ensure we can create a new account object.
"""
url = reverse('account-list')
data = {'name': 'DabApps'}
response = self.client.post(url, data, format='json')
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertEqual(Account.objects.count(), 1)
self.assertEqual(Account.objects.get().name, 'DabApps')
URLPatternsTestCase REST framework also provides a test case class for isolating urlpatterns on a per-class basis. Note that this inherits from Django's SimpleTestCase, and will most likely need to be mixed with another test case class. Example from django.urls import include, path, reverse
from rest_framework.test import APITestCase, URLPatternsTestCase
class AccountTests(APITestCase, URLPatternsTestCase):
urlpatterns = [
path('api/', include('api.urls')),
]
def test_create_account(self):
"""
Ensure we can create a new account object.
"""
url = reverse('account-list')
response = self.client.get(url, format='json')
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(len(response.data), 1)
Testing responses Checking the response data When checking the validity of test responses it's often more convenient to inspect the data that the response was created with, rather than inspecting the fully rendered response. For example, it's easier to inspect response.data: response = self.client.get('/users/4/')
self.assertEqual(response.data, {'id': 4, 'username': 'lauren'})
Instead of inspecting the result of parsing response.content: response = self.client.get('/users/4/')
self.assertEqual(json.loads(response.content), {'id': 4, 'username': 'lauren'})
Rendering responses If you're testing views directly using APIRequestFactory, the responses that are returned will not yet be rendered, as rendering of template responses is performed by Django's internal request-response cycle. In order to access response.content, you'll first need to render the response. view = UserDetail.as_view()
request = factory.get('/users/4')
response = view(request, pk='4')
response.render() # Cannot access `response.content` without this.
self.assertEqual(response.content, '{"username": "lauren", "id": 4}')
Configuration Setting the default format The default format used to make test requests may be set using the TEST_REQUEST_DEFAULT_FORMAT setting key. For example, to always use JSON for test requests by default instead of standard multipart form requests, set the following in your settings.py file: REST_FRAMEWORK = {
...
'TEST_REQUEST_DEFAULT_FORMAT': 'json'
}
Setting the available formats If you need to test requests using something other than multipart or json requests, you can do so by setting the TEST_REQUEST_RENDERER_CLASSES setting. For example, to add support for using format='html' in test requests, you might have something like this in your settings.py file. REST_FRAMEWORK = {
...
'TEST_REQUEST_RENDERER_CLASSES': [
'rest_framework.renderers.MultiPartRenderer',
'rest_framework.renderers.JSONRenderer',
'rest_framework.renderers.TemplateHTMLRenderer'
]
}
test.py | |
doc_28354 |
Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm. Parameters
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represents the absolute number of parameters to prune.
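For orientation before the method reference, a minimal sketch using the functional wrapper (assuming this class backs torch.nn.utils.prune.l1_unstructured):
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(4, 2)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero the 50% smallest-|w| entries
print([name for name, _ in layer.named_buffers()])       # includes 'weight_mask'
prune.remove(layer, "weight")                            # make the pruning permanent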
classmethod apply(module, name, amount, importance_scores=None) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on which pruning will act.
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represents the absolute number of parameters to prune.
importance_scores (torch.Tensor) – tensor of importance scores (of same shape as module parameter) used to compute mask for pruning. The values in this tensor indicate the importance of the corresponding elements in the parameter being pruned. If unspecified or None, the module parameter will be used in its place.
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the input tensor Return type
pruned_tensor (torch.Tensor)
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores (of same shape as t) used to compute mask for pruning t. The values in this tensor indicate the importance of the corresponding elements in the t that is being pruned. If unspecified or None, the tensor t will be used in its place.
default_mask (torch.Tensor, optional) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns
pruned version of tensor t.
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed! | |
doc_28355 | Returns an instance of the paginator to use for this view. By default, instantiates an instance of paginator_class. | |
doc_28356 |
Create a record array from a (flat) list of arrays Parameters
arrayList : list or tuple
List of array-like objects (such as lists, tuples, and ndarrays).
dtype : data-type, optional
valid dtype for all arrays
shape : int or tuple of ints, optional
Shape of the resulting array. If not provided, inferred from arrayList[0]. formats, names, titles, aligned, byteorder :
If dtype is None, these arguments are passed to numpy.format_parser to construct a dtype. See that function for detailed documentation. Returns
np.recarray
Record array consisting of given arrayList columns. Examples >>> x1=np.array([1,2,3,4])
>>> x2=np.array(['a','dd','xyz','12'])
>>> x3=np.array([1.1,2,3,4])
>>> r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c')
>>> print(r[1])
(2, 'dd', 2.0) # may vary
>>> x1[1]=34
>>> r.a
array([1, 2, 3, 4])
>>> x1 = np.array([1, 2, 3, 4])
>>> x2 = np.array(['a', 'dd', 'xyz', '12'])
>>> x3 = np.array([1.1, 2, 3,4])
>>> r = np.core.records.fromarrays(
... [x1, x2, x3],
... dtype=np.dtype([('a', np.int32), ('b', 'S3'), ('c', np.float32)]))
>>> r
rec.array([(1, b'a', 1.1), (2, b'dd', 2. ), (3, b'xyz', 3. ),
(4, b'12', 4. )],
dtype=[('a', '<i4'), ('b', 'S3'), ('c', '<f4')]) | |
doc_28357 | Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor. | |
doc_28358 |
Fetch and parse the configuration statements required for defining the targeted CPU features. Statements should be declared at the top of the source, inside a C comment, and start with the special mark @targets. Configuration statements are keywords representing CPU feature names, groups of statements, and policies, combined together to determine the required optimization. Parameters
source : str
The path of the C source file. Returns
bool, True if group has the ‘baseline’ option
list, list of CPU features
list, list of extra compiler flags | |
doc_28359 | tf.greater Compat aliases for migration See Migration guide for more details. tf.compat.v1.greater, tf.compat.v1.math.greater
tf.math.greater(
x, y, name=None
)
Note: math.greater supports broadcasting.
Example: x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args
x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor of type bool. | |
doc_28360 | Instead of referring to User directly, you should reference the user model using django.contrib.auth.get_user_model(). This method will return the currently active user model – the custom user model if one is specified, or User otherwise. When you define a foreign key or many-to-many relations to the user model, you should specify the custom model using the AUTH_USER_MODEL setting. For example: from django.conf import settings
from django.db import models
class Article(models.Model):
author = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
)
When connecting to signals sent by the user model, you should specify the custom model using the AUTH_USER_MODEL setting. For example: from django.conf import settings
from django.db.models.signals import post_save
def post_save_receiver(sender, instance, created, **kwargs):
pass
post_save.connect(post_save_receiver, sender=settings.AUTH_USER_MODEL)
Generally speaking, it’s easiest to refer to the user model with the AUTH_USER_MODEL setting in code that’s executed at import time; however, it’s also possible to call get_user_model() while Django is importing models, so you could use models.ForeignKey(get_user_model(), ...). If your app is tested with multiple user models, using @override_settings(AUTH_USER_MODEL=...) for example, and you cache the result of get_user_model() in a module-level variable, you may need to listen to the setting_changed signal to clear the cache. For example: from django.apps import apps
from django.contrib.auth import get_user_model
from django.core.signals import setting_changed
from django.dispatch import receiver
@receiver(setting_changed)
def user_model_swapped(**kwargs):
if kwargs['setting'] == 'AUTH_USER_MODEL':
apps.clear_cache()
from myapp import some_module
some_module.UserModel = get_user_model() | |
doc_28361 |
Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback
remove_callback | |
doc_28362 | File type. | |
doc_28363 |
Remove a colormap recognized by get_cmap(). You may not remove built-in colormaps: if the named colormap is not registered, the function returns with no error, but trying to de-register a default colormap raises an error. Warning Colormap names are currently a shared namespace that may be used by multiple packages. Use unregister_cmap only if you know you have registered that name before. In particular, do not unregister a name just in case, to clean it before registering a new colormap. Parameters
name : str
The name of the colormap to be un-registered. Returns
ColorMap or None
If the colormap was registered, it is returned; if not, None is returned. Raises
ValueError
If you try to de-register a default built-in colormap. | |
doc_28364 | tf.compat.v1.estimator.DNNLinearCombinedClassifier(
model_dir=None, linear_feature_columns=None, linear_optimizer='Ftrl',
dnn_feature_columns=None, dnn_optimizer='Adagrad',
dnn_hidden_units=None, dnn_activation_fn=tf.nn.relu, dnn_dropout=None,
n_classes=2, weight_column=None, label_vocabulary=None,
input_layer_partitioner=None, config=None, warm_start_from=None,
loss_reduction=tf.compat.v1.losses.Reduction.SUM, batch_norm=False,
linear_sparse_combiner='sum'
)
Note: This estimator is also known as wide-n-deep.
Example: numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
categorical_id_column=categorical_feature_b, ...)
estimator = tf.estimator.DNNLinearCombinedClassifier(
# wide settings
linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
linear_optimizer=tf.keras.optimizers.Ftrl(...),
# deep settings
dnn_feature_columns=[
categorical_feature_a_emb, categorical_feature_b_emb,
numeric_feature],
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.keras.optimizers.Adagrad(...),
# warm-start settings
warm_start_from="/path/to/checkpoint/dir")
# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001,
l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# It is the same for linear_optimizer.
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict():
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have following features, otherwise there will be a KeyError: for each column in dnn_feature_columns + linear_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.
Loss is calculated by using softmax cross entropy.
Args
model_fn Model function. Follows the signature:
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in params parameter. This allows to configure Estimators from hyper parameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be same. If both are None, a temporary directory will be used.
config estimator.RunConfig configuration object.
params dict of hyper parameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.
Raises
ValueError parameters of model_fn don't match params.
ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of the directory that contains the evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source
export_savedmodel(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, strip_default_attrs=False
)
Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
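A typical prediction loop is sketched below (estimator, predict_input_fn, and the 'probabilities' key are illustrative assumptions, not part of this signature):
for pred_dict in estimator.predict(input_fn=predict_input_fn):
    # with yield_single_examples=True (the default), each pred_dict is one example
    print(pred_dict['probabilities'])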
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
A tf.data.Dataset object: Outputs of Dataset object must have same constraints as below.
features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
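As a sketch (estimator and train_input_fn are illustrative), the incremental behavior of steps versus max_steps described below works like this:
estimator.train(input_fn=train_input_fn, steps=10)      # runs 10 steps
estimator.train(input_fn=train_input_fn, steps=10)      # runs 10 more, 20 in total
estimator.train(input_fn=train_input_fn, max_steps=20)  # no-op: 20 steps already done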
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | |
doc_28365 | Return the system default NIS domain. | |
doc_28366 | De-initialize the library, and return the terminal to normal status. | |
doc_28367 | Returns a new tensor with the same data as the self tensor but of a different shape. The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions \(d, d+1, \dots, d+k\) that satisfy the following contiguity-like condition: \(\forall i = d, \dots, d+k-1\), \(\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]\)
Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous()). When it is unclear whether a view() can be performed, it is advisable to use reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise. Parameters
shape (torch.Size or int...) – the desired size Example: >>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False
view(dtype) → Tensor
Returns a new tensor with the same data as the self tensor but of a different dtype. dtype must have the same number of bytes per element as self’s dtype. Warning This overload is not supported by TorchScript, and using it in a TorchScript program will cause undefined behavior. Parameters
dtype (torch.dtype) – the desired dtype Example: >>> x = torch.randn(4, 4)
>>> x
tensor([[ 0.9482, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.dtype
torch.float32
>>> y = x.view(torch.int32)
>>> y
tensor([[ 1064483442, -1124191867, 1069546515, -1089989247],
[-1105482831, 1061112040, 1057999968, -1084397505],
[-1071760287, -1123489973, -1097310419, -1084649136],
[-1101533110, 1073668768, -1082790149, -1088634448]],
dtype=torch.int32)
>>> y[0, 0] = 1000000000
>>> x
tensor([[ 0.0047, -0.0310, 1.4999, -0.5316],
[-0.1520, 0.7472, 0.5617, -0.8649],
[-2.4724, -0.0334, -0.2976, -0.8499],
[-0.2109, 1.9913, -0.9607, -0.6123]])
>>> x.view(torch.int16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Viewing a tensor as a new dtype with a different number of bytes per element is not supported. | |
doc_28368 |
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
antialiased or aa bool or None
capstyle CapStyle or {'butt', 'projecting', 'round'}
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
color color
data 1D array-like or None
edgecolor or ec color or None
facecolor or fc color or None
figure Figure
fill bool
gid str
hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout bool
joinstyle JoinStyle or {'miter', 'round', 'bevel'}
label object
linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...}
linewidth or lw float or None
path unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
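For example, several of the properties above can be set in one call; a sketch using a Line2D artist returned by plot:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])
line.set(color='tab:red', linewidth=2, linestyle='--', alpha=0.5, zorder=3)
plt.show() | |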
doc_28369 | Returns the Euclidean length of the vector. length() -> float. Calculates the Euclidean length of the vector, which follows from the Pythagorean theorem: vec.length() == math.sqrt(vec.x**2 + vec.y**2 + vec.z**2)
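For instance, assuming this documents pygame's math.Vector3 (which matches the x/y/z attributes above):
>>> import pygame
>>> pygame.math.Vector3(1, 2, 2).length()
3.0 | |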
doc_28370 | Return a shallow copy of the underlying mapping. | |
doc_28371 |
Set the axisline style. The new style is completely defined by the passed attributes. Existing style attributes are forgotten. Parameters
axisline_stylestr or None
The line style, e.g. '->', optionally followed by a comma-separated list of attributes. Alternatively, the attributes can be provided as keywords. If None this returns a string containing the available styles. Examples The following two commands are equivalent: >>> set_axisline_style("->,size=1.5")
>>> set_axisline_style("->", size=1.5) | |
doc_28372 | Sequence of all compare operation names. | |
doc_28373 |
An ExtensionDtype for uint64 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes
None Methods
None
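A short usage sketch, since the dtype is most often selected via its "UInt64" string alias (the repr shown is typical of recent pandas versions):
>>> import pandas as pd
>>> pd.array([1, 2, None], dtype="UInt64")
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: UInt64 | |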
doc_28374 | A prefix added to a session key to build a cache key string. | |
doc_28375 | Moves item to position index in parent’s list of children. It is illegal to move an item under one of its descendants. If index is less than or equal to zero, item is moved to the beginning; if greater than or equal to the number of children, it is moved to the end. If item was detached it is reattached. | |
doc_28376 | Provides a mechanism for looking up an object associated with the current HTTP request. Methods and Attributes
model
The model that this view will display data for. Specifying model = Foo is effectively the same as specifying queryset = Foo.objects.all(), where objects stands for Foo’s default manager.
queryset
A QuerySet that represents the objects. If provided, the value of queryset supersedes the value provided for model. Warning queryset is a class attribute with a mutable value so care must be taken when using it directly. Before using it, either call its all() method or retrieve it with get_queryset() which takes care of the cloning behind the scenes.
slug_field
The name of the field on the model that contains the slug. By default, slug_field is 'slug'.
slug_url_kwarg
The name of the URLConf keyword argument that contains the slug. By default, slug_url_kwarg is 'slug'.
pk_url_kwarg
The name of the URLConf keyword argument that contains the primary key. By default, pk_url_kwarg is 'pk'.
context_object_name
Designates the name of the variable to use in the context.
query_pk_and_slug
If True, causes get_object() to perform its lookup using both the primary key and the slug. Defaults to False. This attribute can help mitigate insecure direct object reference attacks. When applications allow access to individual objects by a sequential primary key, an attacker could brute-force guess all URLs, thereby obtaining a list of all objects in the application. If users with access to individual objects should be prevented from obtaining this list, setting query_pk_and_slug to True will help prevent the guessing of URLs as each URL will require two correct, non-sequential arguments. Using a unique slug may serve the same purpose, but this scheme allows you to have non-unique slugs.
get_object(queryset=None)
Returns the single object that this view will display. If queryset is provided, that queryset will be used as the source of objects; otherwise, get_queryset() will be used. get_object() looks for a pk_url_kwarg argument in the arguments to the view; if this argument is found, this method performs a primary-key based lookup using that value. If this argument is not found, it looks for a slug_url_kwarg argument, and performs a slug lookup using the slug_field. When query_pk_and_slug is True, get_object() will perform its lookup using both the primary key and the slug.
get_queryset()
Returns the queryset that will be used to retrieve the object that this view will display. By default, get_queryset() returns the value of the queryset attribute if it is set, otherwise it constructs a QuerySet by calling the all() method on the model attribute’s default manager.
get_context_object_name(obj)
Return the context variable name that will be used to contain the data that this view is manipulating. If context_object_name is not set, the context name will be constructed from the model_name of the model that the queryset is composed from. For example, the model Article would have context object named 'article'.
get_context_data(**kwargs)
Returns context data for displaying the object. The base implementation of this method requires that the self.object attribute be set by the view (even if None). Be sure to do this if you are using this mixin without one of the built-in views that does so. It returns a dictionary with these contents:
object: The object that this view is displaying (self.object).
context_object_name: self.object will also be stored under the name returned by get_context_object_name(), which defaults to the lowercased version of the model name. Context variables override values from template context processors Any variables from get_context_data() take precedence over context variables from context processors. For example, if your view sets the model attribute to User, the default context object name of user would override the user variable from the django.contrib.auth.context_processors.auth() context processor. Use get_context_object_name() to avoid a clash.
get_slug_field()
Returns the name of a slug field to be used to look up by slug. By default this returns the value of slug_field.
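These attributes and methods match Django's SingleObjectMixin as combined into DetailView; a minimal sketch (the Article model and its headline field are hypothetical):
from django.views.generic import DetailView
from myapp.models import Article  # hypothetical app and model

class ArticleDetail(DetailView):
    model = Article           # get_queryset() falls back to Article.objects.all()
    slug_field = 'headline'   # get_object() slug lookups use this model field
    query_pk_and_slug = True  # the URLconf must supply both pk and slug kwargs | |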
doc_28377 | class socketserver.ForkingUDPServer
class socketserver.ThreadingTCPServer
class socketserver.ThreadingUDPServer
These classes are pre-defined using the mix-in classes.
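For instance, a minimal threaded echo server can be sketched as follows (host and port are illustrative):
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)  # for TCP servers, self.request is the client socket
        self.request.sendall(data)

with socketserver.ThreadingTCPServer(('127.0.0.1', 9999), EchoHandler) as server:
    server.serve_forever()  # each connection is handled in its own thread | |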
doc_28378 | The spatial reference system of the raster, as a SpatialReference instance. The SRS can be changed by setting it to another SpatialReference or providing any input that is accepted by the SpatialReference constructor. >>> rst = GDALRaster({'width': 10, 'height': 20, 'srid': 4326})
>>> rst.srs.srid
4326
>>> rst.srs = 3086
>>> rst.srs.srid
3086 | |
doc_28379 |
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
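For example (a sketch assuming pre-split X_train/y_train and X_test/y_test arrays):
>>> from sklearn.linear_model import LogisticRegression
>>> clf = LogisticRegression().fit(X_train, y_train)
>>> clf.score(X_test, y_test)  # fraction of test labels predicted exactly right | |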
doc_28380 | Read from a file descriptor fd at a position of offset into mutable bytes-like objects buffers, leaving the file offset unchanged. Transfer data into each buffer until it is full and then move on to the next buffer in the sequence to hold the rest of the data. The flags argument contains a bitwise OR of zero or more of the following flags: RWF_HIPRI RWF_NOWAIT Return the total number of bytes actually read which can be less than the total capacity of all the objects. The operating system may set a limit (sysconf() value 'SC_IOV_MAX') on the number of buffers that can be used. Combine the functionality of os.readv() and os.pread(). Availability: Linux 2.6.30 and newer, FreeBSD 6.0 and newer, OpenBSD 2.7 and newer, AIX 7.1 and newer. Using flags requires Linux 4.6 or newer. New in version 3.7.
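A short sketch of scatter-reading at an offset (the file name and buffer sizes are illustrative):
import os

fd = os.open('data.bin', os.O_RDONLY)
buf1, buf2 = bytearray(4), bytearray(4)
nread = os.preadv(fd, [buf1, buf2], 8)  # fill buf1, then buf2, starting at byte offset 8
os.close(fd)                            # the file offset of fd was never moved | |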
doc_28381 | format_field() simply calls the global format() built-in. The method is provided so that subclasses can override it.
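For example, a subclass can post-process every formatted field (a sketch):
import string

class UpperFormatter(string.Formatter):
    def format_field(self, value, format_spec):
        # defer to the normal formatting machinery, then upper-case the result
        return super().format_field(value, format_spec).upper()

UpperFormatter().format('{0:>5}', 'abc')  # '  ABC' | |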
doc_28382 |
Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. Parameters
a, barray_like
Input arrays to compare.
fill_valuebool, optional
Whether masked values in a or b are considered equal (True) or not (False). Returns
ybool
Returns True if all entries of the two arrays are equal, False otherwise. If either array contains NaN, then False is returned.
all, any
numpy.ma.allclose
Examples >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1])
>>> a
masked_array(data=[10000000000.0, 1e-07, --],
mask=[False, False, True],
fill_value=1e+20)
>>> b = np.array([1e10, 1e-7, -42.0])
>>> b
array([ 1.00000000e+10, 1.00000000e-07, -4.20000000e+01])
>>> np.ma.allequal(a, b, fill_value=False)
False
>>> np.ma.allequal(a, b)
True | |
doc_28383 |
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | |
doc_28384 |
Remove tool named name. Parameters
namestr
Name of the tool. | |
doc_28385 |
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses. | |
doc_28386 | path is the path to a directory that should contain subdirectories “common”, “posix”, “nt”, each containing scripts destined for the bin directory in the environment. The contents of “common” and the directory corresponding to os.name are copied after some text replacement of placeholders:
__VENV_DIR__ is replaced with the absolute path of the environment directory.
__VENV_NAME__ is replaced with the environment name (final path segment of environment directory).
__VENV_PROMPT__ is replaced with the prompt (the environment name surrounded by parentheses and with a following space).
__VENV_BIN_NAME__ is replaced with the name of the bin directory (either bin or Scripts).
__VENV_PYTHON__ is replaced with the absolute path of the environment’s executable. The directories are allowed to exist (for when an existing environment is being upgraded). | |
doc_28387 |
Default object formatter. | |
doc_28388 | Finds text for the first subelement matching match. match may be a tag name or a path. Returns the text content of the first matching element, or default if no element was found. Note that if the matching element has no text content an empty string is returned. namespaces is an optional mapping from namespace prefix to full name. Pass '' as prefix to move all unprefixed tag names in the expression into the given namespace.
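For example (a sketch):
import xml.etree.ElementTree as ET

root = ET.fromstring('<conf><name>demo</name><empty/></conf>')
root.findtext('name')            # 'demo'
root.findtext('empty')           # '' -- the element matched but has no text content
root.findtext('missing', 'n/a')  # 'n/a' -- the default, since nothing matched | |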
doc_28389 |
Set the offsets for the collection. Parameters
offsets(N, 2) or (2,) array-like
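For example, repositioning the points of an existing scatter collection (a sketch):
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
pts = ax.scatter([0, 1], [0, 1])
pts.set_offsets(np.column_stack([[0.5, 1.5], [0.2, 0.8]]))  # new (N, 2) positions | |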
doc_28390 |
Callback for mouse button release in pan/zoom mode. | |
doc_28391 | Change if autograd should record operations on this tensor: sets this tensor’s requires_grad attribute in-place. Returns this tensor. requires_grad_()’s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor. Parameters
requires_grad (bool) – If autograd should record operations on this tensor. Default: True. Example: >>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights) # some function
>>> weights
tensor([-0.5503, 0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007, 0.9853, -4.2316, -1.6606]) | |
doc_28392 |
Alias for set_linestyle. | |
doc_28393 | See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi_string
tf.compat.v1.flags.DEFINE_multi_string(
name, default, help, flag_values=_flagvalues.FLAGS, **args
)
Use the flag on the command line multiple times to place multiple string values into the list. The 'default' may be a single string (which will be converted into a single-element list) or a list of strings.
Args
name str, the flag name.
default Union[Iterable[Text], Text, None], the default value of the flag; see DEFINE_multi.
help str, the help message.
flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden.
**args Dictionary with extra keyword args that are passed to the Flag init.
Returns a handle to the defined flag.
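A usage sketch (the flag name and help text are illustrative; tf.compat.v1.flags is based on absl.flags):
from absl import flags

flags.DEFINE_multi_string('tag', None, 'A tag; pass --tag repeatedly for several.')
FLAGS = flags.FLAGS
# command line: prog --tag=alpha --tag=beta  =>  FLAGS.tag == ['alpha', 'beta'] | |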
doc_28394 |
Get the current colorable artist. Specifically, returns the current ScalarMappable instance (Image created by imshow or figimage, Collection created by pcolor or scatter, etc.), or None if no such instance has been defined. The current image is an attribute of the current Axes, or the nearest earlier Axes in the current figure that contains an image. Notes Historically, the only colorable artists were images; hence the name gci (get current image). | |
doc_28395 | class sklearn.cross_decomposition.CCA(n_components=2, *, scale=True, max_iter=500, tol=1e-06, copy=True) [source]
Canonical Correlation Analysis, also known as “Mode B” PLS. Read more in the User Guide. Parameters
n_componentsint, default=2
Number of components to keep. Should be in [1, min(n_samples, n_features, n_targets)].
scalebool, default=True
Whether to scale X and Y.
max_iterint, default=500
The maximum number of iterations of the power method.
tolfloat, default=1e-06
The tolerance used as convergence criteria in the power method: the algorithm stops whenever the squared norm of u_i - u_{i-1} is less than tol, where u corresponds to the left singular vector.
copybool, default=True
Whether to copy X and Y in fit before applying centering, and potentially scaling. If False, these operations will be done inplace, modifying both arrays. Attributes
x_weights_ndarray of shape (n_features, n_components)
The left singular vectors of the cross-covariance matrices of each iteration.
y_weights_ndarray of shape (n_targets, n_components)
The right singular vectors of the cross-covariance matrices of each iteration.
x_loadings_ndarray of shape (n_features, n_components)
The loadings of X.
y_loadings_ndarray of shape (n_targets, n_components)
The loadings of Y.
x_scores_ndarray of shape (n_samples, n_components)
The transformed training samples. Deprecated since version 0.24: x_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
y_scores_ndarray of shape (n_samples, n_components)
The transformed training targets. Deprecated since version 0.24: y_scores_ is deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26). You can just call transform on the training data instead.
x_rotations_ndarray of shape (n_features, n_components)
The projection matrix used to transform X.
y_rotations_ndarray of shape (n_targets, n_components)
The projection matrix used to transform Y.
coef_ndarray of shape (n_features, n_targets)
The coefficients of the linear model such that Y is approximated as Y = X @ coef_.
n_iter_list of shape (n_components,)
Number of iterations of the power method, for each component. See also
PLSCanonical
PLSSVD
Examples >>> from sklearn.cross_decomposition import CCA
>>> X = [[0., 0., 1.], [1.,0.,0.], [2.,2.,2.], [3.,5.,4.]]
>>> Y = [[0.1, -0.2], [0.9, 1.1], [6.2, 5.9], [11.9, 12.3]]
>>> cca = CCA(n_components=1)
>>> cca.fit(X, Y)
CCA(n_components=1)
>>> X_c, Y_c = cca.transform(X, Y)
Methods
fit(X, Y) Fit model to data.
fit_transform(X[, y]) Learn and apply the dimension reduction on the train data.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Transform data back to its original space.
predict(X[, copy]) Predict targets of given samples.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
transform(X[, Y, copy]) Apply the dimension reduction.
fit(X, Y) [source]
Fit model to data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
Yarray-like of shape (n_samples,) or (n_samples, n_targets)
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables.
fit_transform(X, y=None) [source]
Learn and apply the dimension reduction on the train data. Parameters
Xarray-like of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of predictors.
yarray-like of shape (n_samples, n_targets), default=None
Target vectors, where n_samples is the number of samples and n_targets is the number of response variables. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Transform data back to its original space. Parameters
Xarray-like of shape (n_samples, n_components)
New data, where n_samples is the number of samples and n_components is the number of pls components. Returns
x_reconstructedarray-like of shape (n_samples, n_features)
Notes This transformation will only be exact if n_components=n_features.
predict(X, copy=True) [source]
Predict targets of given samples. Parameters
Xarray-like of shape (n_samples, n_features)
Samples.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Notes This call requires the estimation of a matrix of shape (n_features, n_targets), which may be an issue in high dimensional space.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X, Y=None, copy=True) [source]
Apply the dimension reduction. Parameters
Xarray-like of shape (n_samples, n_features)
Samples to transform.
Yarray-like of shape (n_samples, n_targets), default=None
Target vectors.
copybool, default=True
Whether to copy X and Y, or perform in-place normalization. Returns
x_scores if Y is not given, (x_scores, y_scores) otherwise.
Examples using sklearn.cross_decomposition.CCA
Compare cross decomposition methods
Multilabel classification | |
doc_28396 | tf.compat.v1.keras.layers.experimental.preprocessing.Normalization(
axis=-1, dtype=None, **kwargs
)
This layer will coerce its inputs into a distribution centered around 0 with standard deviation 1. It accomplishes this by precomputing the mean and variance of the data, and calling (input-mean)/sqrt(var) at runtime. What happens in adapt: Compute mean and variance of the data and store them as the layer's weights. adapt should be called before fit, evaluate, or predict. Examples: Calculate the mean and variance by analyzing the dataset in adapt.
adapt_data = np.array([[1.], [2.], [3.], [4.], [5.]], dtype=np.float32)
input_data = np.array([[1.], [2.], [3.]], np.float32)
layer = Normalization()
layer.adapt(adapt_data)
layer(input_data)
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[-1.4142135 ],
[-0.70710677],
[ 0. ]], dtype=float32)>
Pass the mean and variance directly.
input_data = np.array([[1.], [2.], [3.]], np.float32)
layer = Normalization(mean=3., variance=2.)
layer(input_data)
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[-1.4142135 ],
[-0.70710677],
[ 0. ]], dtype=float32)>
Attributes
axis Integer or tuple of integers, the axis or axes that should be "kept". These axes are not summed over when calculating the normalization statistics. By default the last axis, the features axis, is kept and any space or time axes are summed. Each element in the axes that are kept is normalized independently. If axis is set to 'None', the layer will perform scalar normalization (dividing the input by a single scalar value). The batch axis, 0, is always summed over (axis=0 is not allowed).
mean The mean value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's build() method is called.
variance The variance value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's build() method is called. Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the data being passed.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt, or whether to start from the existing state. Subclasses may choose to throw if reset_state is set to 'False'. | |
doc_28397 |
Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word frequency distribution approximately follows Zipf’s law. Adaptive softmax partitions the labels into several clusters, according to their frequency. These clusters may each contain a different number of targets. Additionally, clusters containing less frequent labels assign lower dimensional embeddings to those labels, which speeds up the computation. For each minibatch, only clusters for which at least one target is present are evaluated. The idea is that the clusters which are accessed frequently (like the first one, containing most frequent labels), should also be cheap to compute – that is, contain a small number of assigned labels. We highly recommend taking a look at the original paper for more details.
cutoffs should be an ordered Sequence of integers sorted in the increasing order. It controls number of clusters and the partitioning of targets into clusters. For example setting cutoffs = [10, 100, 1000] means that first 10 targets will be assigned to the ‘head’ of the adaptive softmax, targets 11, 12, …, 100 will be assigned to the first cluster, and targets 101, 102, …, 1000 will be assigned to the second cluster, while targets 1001, 1002, …, n_classes - 1 will be assigned to the last, third cluster.
div_value is used to compute the size of each additional cluster, which is given as \(\left\lfloor\frac{\texttt{in\_features}}{\texttt{div\_value}^{idx}}\right\rfloor\), where \(idx\) is the cluster index (with clusters for less frequent words having larger indices, and indices starting from \(1\)).
head_bias if set to True, adds a bias term to the ‘head’ of the adaptive softmax. See paper for details. Set to False in the official implementation. Warning Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index 0, and the least frequent label should be represented by the index n_classes - 1. Note This module returns a NamedTuple with output and loss fields. See further documentation for details. Note To compute log-probabilities for all classes, the log_prob method can be used. Parameters
in_features (int) – Number of features in the input tensor
n_classes (int) – Number of classes in the dataset
cutoffs (Sequence) – Cutoffs used to assign targets to their buckets
div_value (float, optional) – value used as an exponent to compute sizes of the clusters. Default: 4.0
head_bias (bool, optional) – If True, adds a bias term to the ‘head’ of the adaptive softmax. Default: False
Returns
output is a Tensor of size N containing computed target log probabilities for each example
loss is a Scalar representing the computed negative log likelihood loss Return type
NamedTuple with output and loss fields Shape:
input: \((N, \texttt{in\_features})\)
target: \((N)\) where each value satisfies \(0 <= \texttt{target[i]} <= \texttt{n\_classes}\)
output1: \((N)\)
output2: Scalar
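A minimal usage sketch (the sizes, cutoffs, and random targets are illustrative):
>>> import torch
>>> import torch.nn as nn
>>> asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=64, n_classes=1000, cutoffs=[10, 100])
>>> input = torch.randn(128, 64)
>>> target = torch.randint(0, 1000, (128,))
>>> out, loss = asm(input, target)  # out: per-example target log-probs, loss: scalar NLL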
log_prob(input) [source]
Computes log probabilities for all \(\texttt{n\_classes}\) classes. Parameters
input (Tensor) – a minibatch of examples Returns
log-probabilities for each class \(c\) in range \(0 <= c <= \texttt{n\_classes}\), where \(\texttt{n\_classes}\) is a parameter passed to the AdaptiveLogSoftmaxWithLoss constructor. Shape:
Input: \((N, \texttt{in\_features})\)
Output: \((N, \texttt{n\_classes})\)
predict(input) [source]
This is equivalent to self.log_prob(input).argmax(dim=1), but is more efficient in some cases. Parameters
input (Tensor) – a minibatch of examples Returns
a class with the highest probability for each example Return type
output (Tensor) Shape:
Input: \((N, \texttt{in\_features})\)
Output: \((N)\) | |
doc_28398 | A class that represents thread-local data. For more details and extensive examples, see the documentation string of the _threading_local module.
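A short sketch of the per-thread behavior:
import threading

local_data = threading.local()
local_data.user = 'alice'  # set in the main thread only

def worker():
    # every thread gets its own attribute namespace, so 'user' is unset here
    print(getattr(local_data, 'user', 'no user'))  # prints 'no user'

threading.Thread(target=worker).start() | |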
doc_28399 |
Node is the data structure that represents individual operations within a Graph. For the most part, Nodes represent callsites to various entities, such as operators, methods, and Modules (some exceptions include nodes that specify function inputs and outputs). Each Node has a function specified by its op property. The Node semantics for each value of op are as follows:
placeholder represents a function input. The name attribute specifies the name this value will take on. target is similarly the name of the argument. args holds either: 1) nothing, or 2) a single argument denoting the default parameter of the function input. kwargs is don’t-care. Placeholders correspond to the function parameters (e.g. x) in the graph printout.
get_attr retrieves a parameter from the module hierarchy. name is similarly the name the result of the fetch is assigned to. target is the fully-qualified name of the parameter’s position in the module hierarchy. args and kwargs are don’t-care
call_function applies a free function to some values. name is similarly the name of the value to assign to. target is the function to be applied. args and kwargs represent the arguments to the function, following the Python calling convention
call_module applies a module in the module hierarchy’s forward() method to given arguments. name is as previous. target is the fully-qualified name of the module in the module hierarchy to call. args and kwargs represent the arguments to invoke the module on, including the self argument.
call_method calls a method on a value. name is as previous. target is the string name of the method to apply to the self argument. args and kwargs represent the arguments to invoke the method on, including the self argument
output contains the output of the traced function in its args[0] attribute. This corresponds to the “return” statement in the Graph printout.
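As a quick illustration of these opcodes, the sketch below traces a small module and prints each node (the module itself is illustrative):
import torch
from torch.fx import symbolic_trace

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x + 1)

gm = symbolic_trace(M())
for node in gm.graph.nodes:
    # prints one line per node: placeholder, call_function (add, relu), output
    print(node.op, node.target)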
property all_input_nodes
Return all Nodes that are inputs to this Node. This is equivalent to iterating over args and kwargs and only collecting the values that are Nodes. Returns
List of Nodes that appear in the args and kwargs of this Node, in that order.
append(x) [source]
Insert x after this node in the list of nodes in the graph. Equivalent to self.next.prepend(x) Parameters
x (Node) – The node to put after this node. Must be a member of the same graph.
property args
The tuple of arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.
property kwargs
The dict of keyword arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.
property next
Returns the next Node in the linked list of Nodes. Returns
The next Node in the linked list of Nodes.
prepend(x) [source]
Insert x before this node in the list of nodes in the graph. Example: Before: p -> self
bx -> x -> ax
After: p -> x -> self
bx -> ax
Parameters
x (Node) – The node to put before this node. Must be a member of the same graph.
property prev
Returns the previous Node in the linked list of Nodes. Returns
The previous Node in the linked list of Nodes.
replace_all_uses_with(replace_with) [source]
Replace all uses of self in the Graph with the Node replace_with. Parameters
replace_with (Node) – The node to replace all uses of self with. Returns
The list of Nodes on which this change was made. | |