- The request was not successfully authenticated, and the highest priority authentication class does not use WWW-Authenticate headers: an HTTP 403 Forbidden response will be returned.
- The request was not successfully authenticated, and the highest priority authentication class does use WWW-Authenticate headers: an HTTP 401 Unauthorized response, with an appropriate WWW-Authenticate header, will be returned.
Object level permissions

REST framework permissions also support object-level permissioning. Object level permissions are used to determine if a user should be allowed to act on a particular object, which will typically be a model instance.

Object level permissions are run by REST framework's generic views when .get_object() is called. As with view level permissions, an exceptions.PermissionDenied exception will be raised if the user is not allowed to act on the given object.

If you're writing your own views and want to enforce object level permissions, or if you override the get_object method on a generic view, then you'll need to explicitly call the .check_object_permissions(request, obj) method on the view at the point at which you've retrieved the object. This will either raise a PermissionDenied or NotAuthenticated exception, or simply return if the view has the appropriate permissions. For example:

def get_object(self):
obj = get_object_or_404(self.get_queryset(), pk=self.kwargs["pk"])
self.check_object_permissions(self.request, obj)
return obj
Note: With the exception of DjangoObjectPermissions, the provided permission classes in rest_framework.permissions do not implement the methods necessary to check object permissions. If you wish to use the provided permission classes in order to check object permissions, you must subclass them and implement the has_object_permission() method described in the Custom permissions section (below).

Limitations of object level permissions

For performance reasons the generic views will not automatically apply object level permissions to each instance in a queryset when returning a list of objects. Often when you're using object level permissions you'll also want to filter the queryset appropriately, to ensure that users only have visibility onto instances that they are permitted to view.

Because the get_object() method is not called, object level permissions from the has_object_permission() method are not applied when creating objects. In order to restrict object creation you need to implement the permission check either in your Serializer class or override the perform_create() method of your ViewSet class.

Setting the permission policy

The default permission policy may be set globally, using the DEFAULT_PERMISSION_CLASSES setting. For example:

REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
]
}
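As noted above, has_object_permission() is never consulted when creating objects, so a creation restriction has to live in the Serializer or in a perform_create() override. Below is a framework-free sketch of the kind of check you might place inside perform_create(); PermissionDenied is a local stand-in for rest_framework.exceptions.PermissionDenied, and the team/membership model is purely illustrative:

```python
class PermissionDenied(Exception):
    """Local stand-in for rest_framework.exceptions.PermissionDenied."""


def check_create_allowed(user, team):
    # The body of a hypothetical ViewSet.perform_create() override:
    # reject creation unless the requesting user belongs to the target team.
    if user not in team.members:
        raise PermissionDenied("You may only create objects in your own team.")


# Illustrative usage with plain objects standing in for model instances.
class Team:
    def __init__(self, members):
        self.members = members


alice, bob = "alice", "bob"
team = Team(members={alice})

check_create_allowed(alice, team)  # allowed: returns None silently
```

In real DRF code the same check would run inside the ViewSet, raising PermissionDenied so the framework returns a 403 response.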
If not specified, this setting defaults to allowing unrestricted access:

'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.AllowAny',
]
You can also set the permission policy on a per-view, or per-viewset basis, using the APIView class-based views.

from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView
class ExampleView(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
Or, if you're using the @api_view decorator with function-based views:

from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
@api_view(['GET'])
@permission_classes([IsAuthenticated])
def example_view(request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
Note: when you set new permission classes via the class attribute or decorators you're telling the view to ignore the default list set in the settings.py file.

Provided they inherit from rest_framework.permissions.BasePermission, permissions can be composed using standard Python bitwise operators. For example, IsAuthenticatedOrReadOnly could be written:

from rest_framework.permissions import BasePermission, IsAuthenticated, SAFE_METHODS
from rest_framework.response import Response
from rest_framework.views import APIView
class ReadOnly(BasePermission):
def has_permission(self, request, view):
return request.method in SAFE_METHODS
class ExampleView(APIView):
permission_classes = [IsAuthenticated|ReadOnly]
def get(self, request, format=None):
content = {
'status': 'request was permitted'
}
return Response(content)
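The composition above works because BasePermission (via its metaclass) overloads the bitwise operators on the permission classes themselves. Here is a simplified, framework-free model of the resulting semantics; it composes instances rather than classes and uses a minimal fake request object, so it runs without DRF installed:

```python
from types import SimpleNamespace


class Permission:
    """Simplified model of BasePermission composition. DRF composes the
    classes via a metaclass, but the combined behaviour is the same."""

    def __init__(self, fn):
        self.fn = fn

    def has_permission(self, request, view):
        return self.fn(request, view)

    def __and__(self, other):
        return Permission(lambda r, v: self.has_permission(r, v)
                          and other.has_permission(r, v))

    def __or__(self, other):
        return Permission(lambda r, v: self.has_permission(r, v)
                          or other.has_permission(r, v))

    def __invert__(self):
        return Permission(lambda r, v: not self.has_permission(r, v))


is_authenticated = Permission(lambda r, v: r.user_is_authenticated)
read_only = Permission(lambda r, v: r.method in ('GET', 'HEAD', 'OPTIONS'))

# IsAuthenticated | ReadOnly: anonymous users may read, but only
# authenticated users may write.
combined = is_authenticated | read_only

anon_get = SimpleNamespace(method='GET', user_is_authenticated=False)
anon_post = SimpleNamespace(method='POST', user_is_authenticated=False)
auth_post = SimpleNamespace(method='POST', user_is_authenticated=True)
```

An anonymous GET passes, an anonymous POST is denied, and an authenticated POST passes, which is exactly the IsAuthenticatedOrReadOnly behaviour.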
Note: it supports & (and), | (or) and ~ (not).

API Reference

AllowAny

The AllowAny permission class will allow unrestricted access, regardless of whether the request was authenticated or unauthenticated. This permission is not strictly required, since you can achieve the same result by using an empty list or tuple for the permissions setting, but you may find it useful to specify this class because it makes the intention explicit.

IsAuthenticated

The IsAuthenticated permission class will deny permission to any unauthenticated user, and allow permission otherwise. This permission is suitable if you want your API to only be accessible to registered users.

IsAdminUser

The IsAdminUser permission class will deny permission to any user, unless user.is_staff is True, in which case permission will be allowed. This permission is suitable if you want your API to only be accessible to a subset of trusted administrators.

IsAuthenticatedOrReadOnly

The IsAuthenticatedOrReadOnly permission class will allow authenticated users to perform any request. Requests from unauthenticated users will only be permitted if the request method is one of the "safe" methods: GET, HEAD or OPTIONS. This permission is suitable if you want your API to allow read permissions to anonymous users, and only allow write permissions to authenticated users.

DjangoModelPermissions

This permission class ties into Django's standard django.contrib.auth model permissions. This permission must only be applied to views that have a .queryset property or get_queryset() method. Authorization will only be granted if the user is authenticated and has the relevant model permissions assigned.
- POST requests require the user to have the add permission on the model.
- PUT and PATCH requests require the user to have the change permission on the model.
- DELETE requests require the user to have the delete permission on the model.

The default behaviour can also be overridden to support custom model permissions. For example, you might want to include a view model permission for GET requests. To use custom model permissions, override DjangoModelPermissions and set the .perms_map property. Refer to the source code for details.

DjangoModelPermissionsOrAnonReadOnly

Similar to DjangoModelPermissions, but also allows unauthenticated users to have read-only access to the API.

DjangoObjectPermissions

This permission class ties into Django's standard object permissions framework that allows per-object permissions on models. In order to use this permission class, you'll also need to add a permission backend that supports object-level permissions, such as django-guardian. As with DjangoModelPermissions, this permission must only be applied to views that have a .queryset property or .get_queryset() method. Authorization will only be granted if the user is authenticated and has the relevant per-object permissions and relevant model permissions assigned.
- POST requests require the user to have the add permission on the model instance.
- PUT and PATCH requests require the user to have the change permission on the model instance.
- DELETE requests require the user to have the delete permission on the model instance.

Note that DjangoObjectPermissions does not require the django-guardian package, and should support other object-level backends equally well.

As with DjangoModelPermissions you can use custom model permissions by overriding DjangoObjectPermissions and setting the .perms_map property. Refer to the source code for details.

Note: If you need object level view permissions for GET, HEAD and OPTIONS requests and are using django-guardian for your object-level permissions backend, you'll want to consider using the DjangoObjectPermissionsFilter class provided by the djangorestframework-guardian package. It ensures that list endpoints only return results including objects for which the user has appropriate view permissions.

Custom permissions

To implement a custom permission, override BasePermission and implement either, or both, of the following methods:

- .has_permission(self, request, view)
- .has_object_permission(self, request, view, obj)

The methods should return True if the request should be granted access, and False otherwise.

If you need to test if a request is a read operation or a write operation, you should check the request method against the constant SAFE_METHODS, which is a tuple containing 'GET', 'OPTIONS' and 'HEAD'. For example:

if request.method in permissions.SAFE_METHODS:
# Check permissions for read-only request
else:
# Check permissions for write request
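Since SAFE_METHODS is a plain tuple, the read/write split above can be exercised directly. A framework-free sketch, with the tuple redefined locally (same values as DRF's constant) so it runs without rest_framework installed:

```python
# Same values as rest_framework.permissions.SAFE_METHODS.
SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')


def is_read_request(method):
    """Return True for read-only HTTP methods, False for write methods."""
    return method in SAFE_METHODS
```

A custom permission would branch on this test to apply looser rules to reads than to writes.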
Note: The instance-level has_object_permission method will only be called if the view-level has_permission checks have already passed. Also note that in order for the instance-level checks to run, the view code should explicitly call .check_object_permissions(request, obj). If you are using the generic views then this will be handled for you by default. (Function-based views will need to check object permissions explicitly, raising PermissionDenied on failure.)

Custom permissions will raise a PermissionDenied exception if the test fails. To change the error message associated with the exception, implement a message attribute directly on your custom permission. Otherwise the default_detail attribute from PermissionDenied will be used. Similarly, to change the code identifier associated with the exception, implement a code attribute directly on your custom permission; otherwise the default_code attribute from PermissionDenied will be used.

from rest_framework import permissions
class CustomerAccessPermission(permissions.BasePermission):
message = 'Adding customers not allowed.'
def has_permission(self, request, view):
...
Examples

The following is an example of a permission class that checks the incoming request's IP address against a blocklist, and denies the request if the IP has been blocked.

from rest_framework import permissions
class BlocklistPermission(permissions.BasePermission):
"""
Global permission check for blocked IPs.
"""
def has_permission(self, request, view):
ip_addr = request.META['REMOTE_ADDR']
blocked = Blocklist.objects.filter(ip_addr=ip_addr).exists()
return not blocked
As well as global permissions, which are run against all incoming requests, you can also create object-level permissions, which are only run against operations that affect a particular object instance. For example:

class IsOwnerOrReadOnly(permissions.BasePermission):
"""
Object-level permission to only allow owners of an object to edit it.
Assumes the model instance has an `owner` attribute.
"""
def has_object_permission(self, request, view, obj):
# Read permissions are allowed to any request,
# so we'll always allow GET, HEAD or OPTIONS requests.
if request.method in permissions.SAFE_METHODS:
return True
# Instance must have an attribute named `owner`.
return obj.owner == request.user
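Because has_object_permission() only receives plain (request, view, obj) arguments, a permission like this can be exercised without the framework. A sketch using stand-in objects; the permission body is copied from above, with SAFE_METHODS redefined locally so it runs without DRF installed:

```python
from types import SimpleNamespace

# Mirrors rest_framework.permissions.SAFE_METHODS.
SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')


class IsOwnerOrReadOnly:
    def has_object_permission(self, request, view, obj):
        # Read permissions are allowed to any request.
        if request.method in SAFE_METHODS:
            return True
        # Write permissions are only allowed to the object's owner.
        return obj.owner == request.user


perm = IsOwnerOrReadOnly()
alice, bob = "alice", "bob"
article = SimpleNamespace(owner=alice)
```

Anyone may GET the article, but only its owner may PUT to it, which is the behaviour a DRF generic view would enforce after calling check_object_permissions().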
Note that the generic views will check the appropriate object level permissions, but if you're writing your own custom views, you'll need to make sure you perform the object level permission checks yourself. You can do so by calling self.check_object_permissions(request, obj) from the view once you have the object instance. This call will raise an appropriate APIException if any object-level permission checks fail, and will otherwise simply return.

Also note that the generic views will only check the object-level permissions for views that retrieve a single model instance. If you require object-level filtering of list views, you'll need to filter the queryset separately. See the filtering documentation for more details.

Overview of access restriction methods

REST framework offers three different methods to customize access restrictions on a case-by-case basis. These apply in different scenarios and have different effects and limitations.
- queryset/get_queryset(): Limits the general visibility of existing objects from the database. The queryset limits which objects will be listed and which objects can be modified or deleted. The get_queryset() method can apply different querysets based on the current action.
- permission_classes/get_permissions(): General permission checks based on the current action, request and targeted object. Object level permissions can only be applied to retrieve, modify and delete actions. Permission checks for list and create will be applied to the entire object type. (In case of list: subject to restrictions in the queryset.)
- serializer_class/get_serializer(): Instance level restrictions that apply to all objects on input and output. The serializer may have access to the request context. The get_serializer() method can apply different serializers based on the current action.

The following table lists the access restriction methods and the level of control they offer over which actions.

| | queryset | permission_classes | serializer_class |
|---|---|---|---|
| Action: list | global | no | object-level* |
| Action: create | no | global | object-level |
| Action: retrieve | global | object-level | object-level |
| Action: update | global | object-level | object-level |
| Action: partial_update | global | object-level | object-level |
| Action: destroy | global | object-level | no |
| Can reference action in decision | no** | yes | no** |
| Can reference request in decision | no** | yes | yes |

* A Serializer class should not raise PermissionDenied in a list action, or the entire list would not be returned.

** The get_*() methods have access to the current view and can return different Serializer or QuerySet instances based on the request or action.

Third party packages

The following third party packages are also available.

DRF - Access Policy

The Django REST - Access Policy package provides a way to define complex access rules in declarative policy classes that are attached to view sets or function-based views. The policies are defined in JSON in a format similar to AWS' Identity & Access Management policies.

Composed Permissions

The Composed Permissions package provides a simple way to define complex and multi-depth (with logic operators) permission objects, using small and reusable components.

REST Condition

The REST Condition package is another extension for building complex permissions in a simple and convenient way. The extension allows you to combine permissions with logical operators.

DRY Rest Permissions

The DRY Rest Permissions package provides the ability to define different permissions for individual default and custom actions.
This package is made for apps with permissions that are derived from relationships defined in the app's data model. It also supports permission checks being returned to a client app through the API's serializer. Additionally it supports adding permissions to the default and custom list actions to restrict the data they retrieve per user.

Django Rest Framework Roles

The Django Rest Framework Roles package makes it easier to parameterize your API over multiple types of users.

Django REST Framework API Key

The Django REST Framework API Key package provides permissions classes, models and helpers to add API key authorization to your API. It can be used to authorize internal or third-party backends and services (i.e. machines) which do not have a user account. API keys are stored securely using Django's password hashing infrastructure, and they can be viewed, edited and revoked at any time in the Django admin.

Django Rest Framework Role Filters

The Django Rest Framework Role Filters package provides simple filtering over multiple types of roles.

Django Rest Framework PSQ

The Django Rest Framework PSQ package is an extension that gives support for having action-based permission_classes, serializer_class, and queryset dependent on permission-based rules.
keys()

Return an iterator over all keys if called as iterkeys() or return a list of keys if called as keys().
Return a representation of the proxy object.
The Axes class

class matplotlib.axes.Axes(fig, rect, *, facecolor=None, frameon=True, sharex=None, sharey=None, label='', xscale=None, yscale=None, box_aspect=None, **kwargs)
Bases: matplotlib.axes._base._AxesBase

The Axes contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. The Axes instance supports callbacks through a callbacks attribute which is a CallbackRegistry instance. The events you can connect to are 'xlim_changed' and 'ylim_changed' and the callback will be called with func(ax) where ax is the Axes instance.

Attributes
dataLim : Bbox
    The bounding box enclosing all data displayed in the Axes.
viewLim : Bbox
    The view limits in data coordinates.

Build an Axes in a figure.

Parameters
fig : Figure
    The Axes is built in the Figure fig.
rect : [left, bottom, width, height]
    The Axes is built in the rectangle rect. rect is in Figure coordinates.
sharex, sharey : Axes, optional
    The x or y axis is shared with the x or y axis in the input Axes.
frameon : bool, default: True
    Whether the Axes frame is visible.
box_aspect : float, optional
    Set a fixed aspect for the Axes box, i.e. the ratio of height to width. See set_box_aspect for details.
**kwargs
    Other optional keyword arguments:
Property Description
adjustable {'box', 'datalim'}
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
animated bool
aspect {'auto', 'equal'} or float
autoscale_on bool
autoscalex_on bool
autoscaley_on bool
axes_locator Callable[[Axes, Renderer], Bbox]
axisbelow bool or 'line'
box_aspect float or None
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
facecolor or fc color
figure Figure
frame_on bool
gid str
in_layout bool
label object
navigate bool
navigate_mode unknown
path_effects AbstractPathEffect
picker None or bool or float or callable
position [left, bottom, width, height] or Bbox
prop_cycle unknown
rasterization_zorder float or None
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
title str
transform Transform
url str
visible bool
xbound unknown
xlabel str
xlim (bottom: float, top: float)
xmargin float greater than -0.5
xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
xticklabels unknown
xticks unknown
ybound unknown
ylabel str
ylim (bottom: float, top: float)
ymargin float greater than -0.5
yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase
yticklabels unknown
yticks unknown
zorder float

Returns

Axes
    The new Axes object.
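Axes objects are rarely constructed directly; Figure.add_axes builds one from the same rect and keyword arguments, and the 'xlim_changed'/'ylim_changed' callbacks described above can be attached through the callbacks registry. A minimal sketch, using the non-interactive Agg backend so it runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; no display needed
import matplotlib.pyplot as plt

fig = plt.figure()
# rect is [left, bottom, width, height] in figure coordinates.
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8], facecolor='0.95', frameon=True)

# Register a callback fired whenever the x view limits change.
events = []
ax.callbacks.connect('xlim_changed', lambda ax: events.append(ax.get_xlim()))
ax.set_xlim(0, 5)
```

After set_xlim runs, the registered callback has been invoked once with the new limits.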
Subplots
SubplotBase Base class for subplots, which are Axes instances with additional methods to facilitate generating and manipulating a set of Axes within a figure.
subplot_class_factory

Plotting

Basic
Axes.plot Plot y versus x as lines and/or markers.
Axes.errorbar Plot y versus x as lines and/or markers with attached errorbars.
Axes.scatter A scatter plot of y vs. x with varying marker size and/or color.
Axes.plot_date Plot coercing the axis to treat floats as dates.
Axes.step Make a step plot.
Axes.loglog Make a plot with log scaling on both the x and y axis.
Axes.semilogx Make a plot with log scaling on the x axis.
Axes.semilogy Make a plot with log scaling on the y axis.
Axes.fill_between Fill the area between two horizontal curves.
Axes.fill_betweenx Fill the area between two vertical curves.
Axes.bar Make a bar plot.
Axes.barh Make a horizontal bar plot.
Axes.bar_label Label a bar plot.
Axes.stem Create a stem plot.
Axes.eventplot Plot identical parallel lines at the given positions.
Axes.pie Plot a pie chart.
Axes.stackplot Draw a stacked area plot.
Axes.broken_barh Plot a horizontal sequence of rectangles.
Axes.vlines Plot vertical lines at each x from ymin to ymax.
Axes.hlines Plot horizontal lines at each y from xmin to xmax.
Axes.fill Plot filled polygons.

Spans
Axes.axhline Add a horizontal line across the axis.
Axes.axhspan Add a horizontal span (rectangle) across the Axes.
Axes.axvline Add a vertical line across the Axes.
Axes.axvspan Add a vertical span (rectangle) across the Axes.
Axes.axline Add an infinitely long straight line.

Spectral
Axes.acorr Plot the autocorrelation of x.
Axes.angle_spectrum Plot the angle spectrum.
Axes.cohere Plot the coherence between x and y.
Axes.csd Plot the cross-spectral density.
Axes.magnitude_spectrum Plot the magnitude spectrum.
Axes.phase_spectrum Plot the phase spectrum.
Axes.psd Plot the power spectral density.
Axes.specgram Plot a spectrogram.
Axes.xcorr Plot the cross correlation between x and y.

Statistics
Axes.boxplot Draw a box and whisker plot.
Axes.violinplot Make a violin plot.
Axes.violin Drawing function for violin plots.
Axes.bxp Drawing function for box and whisker plots.

Binned
Axes.hexbin Make a 2D hexagonal binning plot of points x, y.
Axes.hist Plot a histogram.
Axes.hist2d Make a 2D histogram plot.
Axes.stairs A stepwise constant function as a line with bounding edges or a filled plot.

Contours
Axes.clabel Label a contour plot.
Axes.contour Plot contour lines.
Axes.contourf Plot filled contours.

2D arrays
Axes.imshow Display data as an image, i.e., on a 2D regular raster.
Axes.matshow Plot the values of a 2D matrix or array as color-coded image.
Axes.pcolor Create a pseudocolor plot with a non-regular rectangular grid.
Axes.pcolorfast Create a pseudocolor plot with a non-regular rectangular grid.
Axes.pcolormesh Create a pseudocolor plot with a non-regular rectangular grid.
Axes.spy Plot the sparsity pattern of a 2D array.

Unstructured triangles
Axes.tripcolor Create a pseudocolor plot of an unstructured triangular grid.
Axes.triplot Draw an unstructured triangular grid as lines and/or markers.
Axes.tricontour Draw contour lines on an unstructured triangular grid.
Axes.tricontourf Draw contour regions on an unstructured triangular grid.

Text and annotations
Axes.annotate Annotate the point xy with text text.
Axes.text Add text to the Axes.
Axes.table Add a table to an Axes.
Axes.arrow Add an arrow to the Axes.
Axes.inset_axes Add a child inset Axes to this existing Axes.
Axes.indicate_inset Add an inset indicator to the Axes.
Axes.indicate_inset_zoom Add an inset indicator rectangle to the Axes based on the axis limits for an inset_ax and draw connectors between inset_ax and the rectangle.
Axes.secondary_xaxis Add a second x-axis to this Axes.
Axes.secondary_yaxis Add a second y-axis to this Axes.

Vector fields
Axes.barbs Plot a 2D field of barbs.
Axes.quiver Plot a 2D field of arrows.
Axes.quiverkey Add a key to a quiver plot.
Axes.streamplot Draw streamlines of a vector flow.

Clearing
Axes.cla Clear the Axes.
Axes.clear Clear the Axes.

Appearance
Axes.axis Convenience method to get or set some axis properties.
Axes.set_axis_off Turn the x- and y-axis off.
Axes.set_axis_on Turn the x- and y-axis on.
Axes.set_frame_on Set whether the Axes rectangle patch is drawn.
Axes.get_frame_on Get whether the Axes rectangle patch is drawn.
Axes.set_axisbelow Set whether axis ticks and gridlines are above or below most artists.
Axes.get_axisbelow Get whether axis ticks and gridlines are above or below most artists.
Axes.grid Configure the grid lines.
Axes.get_facecolor Get the facecolor of the Axes.
Axes.set_facecolor Set the facecolor of the Axes.

Property cycle
Axes.set_prop_cycle Set the property cycle of the Axes.

Axis / limits
Axes.get_xaxis Return the XAxis instance.
Axes.get_yaxis Return the YAxis instance.

Axis limits and direction
Axes.invert_xaxis Invert the x-axis.
Axes.xaxis_inverted Return whether the xaxis is oriented in the "inverse" direction.
Axes.invert_yaxis Invert the y-axis.
Axes.yaxis_inverted Return whether the yaxis is oriented in the "inverse" direction.
Axes.set_xlim Set the x-axis view limits.
Axes.get_xlim Return the x-axis view limits.
Axes.set_ylim Set the y-axis view limits.
Axes.get_ylim Return the y-axis view limits.
Axes.update_datalim Extend the dataLim Bbox to include the given points.
Axes.set_xbound Set the lower and upper numerical bounds of the x-axis.
Axes.get_xbound Return the lower and upper x-axis bounds, in increasing order.
Axes.set_ybound Set the lower and upper numerical bounds of the y-axis.
Axes.get_ybound Return the lower and upper y-axis bounds, in increasing order.

Axis labels, title, and legend
Axes.set_xlabel Set the label for the x-axis.
Axes.get_xlabel Get the xlabel text string.
Axes.set_ylabel Set the label for the y-axis.
Axes.get_ylabel Get the ylabel text string.
Axes.set_title Set a title for the Axes.
Axes.get_title Get an Axes title.
Axes.legend Place a legend on the Axes.
Axes.get_legend Return the Legend instance, or None if no legend is defined.
Axes.get_legend_handles_labels Return handles and labels for legend.

Axis scales
Axes.set_xscale Set the x-axis scale.
Axes.get_xscale Return the xaxis' scale (as a str).
Axes.set_yscale Set the y-axis scale.
Axes.get_yscale Return the yaxis' scale (as a str).

Autoscaling and margins
Axes.use_sticky_edges When autoscaling, whether to obey all Artist.sticky_edges.
Axes.margins Set or retrieve autoscaling margins.
Axes.set_xmargin Set padding of X data limits prior to autoscaling.
Axes.set_ymargin Set padding of Y data limits prior to autoscaling.
Axes.relim Recompute the data limits based on current artists.
Axes.autoscale Autoscale the axis view to the data (toggle).
Axes.autoscale_view Autoscale the view limits using the data limits.
Axes.set_autoscale_on Set whether autoscaling is applied to each axis on the next draw or call to Axes.autoscale_view.
Axes.get_autoscale_on Return True if each axis is autoscaled, False otherwise.
Axes.set_autoscalex_on Set whether the x-axis is autoscaled on the next draw or call to Axes.autoscale_view.
Axes.get_autoscalex_on Return whether the x-axis is autoscaled.
Axes.set_autoscaley_on Set whether the y-axis is autoscaled on the next draw or call to Axes.autoscale_view.
Axes.get_autoscaley_on Return whether the y-axis is autoscaled.

Aspect ratio
Axes.apply_aspect Adjust the Axes for a specified data aspect ratio.
Axes.set_aspect Set the aspect ratio of the axes scaling, i.e. y/x-scale.
Axes.get_aspect Return the aspect ratio of the axes scaling.
Axes.set_box_aspect Set the Axes box aspect, i.e. the ratio of height to width.
Axes.get_box_aspect Return the Axes box aspect, i.e. the ratio of height to width.
Axes.set_adjustable Set how the Axes adjusts to achieve the required aspect ratio.
Axes.get_adjustable Return whether the Axes will adjust its physical dimension ('box') or its data limits ('datalim') to achieve the desired aspect ratio.

Ticks and tick labels
Axes.set_xticks Set the xaxis' tick locations and optionally labels.
Axes.get_xticks Return the xaxis' tick locations in data coordinates.
Axes.set_xticklabels Set the xaxis' labels with list of string labels.
Axes.get_xticklabels Get the xaxis' tick labels.
Axes.get_xmajorticklabels Return the xaxis' major tick labels, as a list of Text.
Axes.get_xminorticklabels Return the xaxis' minor tick labels, as a list of Text.
Axes.get_xgridlines Return the xaxis' grid lines as a list of Line2Ds.
Axes.get_xticklines Return the xaxis' tick lines as a list of Line2Ds.
Axes.xaxis_date Set up axis ticks and labels to treat data along the xaxis as dates.
Axes.set_yticks Set the yaxis' tick locations and optionally labels.
Axes.get_yticks Return the yaxis' tick locations in data coordinates.
Axes.set_yticklabels Set the yaxis' labels with list of string labels.
Axes.get_yticklabels Get the yaxis' tick labels.
Axes.get_ymajorticklabels Return the yaxis' major tick labels, as a list of Text.
Axes.get_yminorticklabels Return the yaxis' minor tick labels, as a list of Text.
Axes.get_ygridlines Return the yaxis' grid lines as a list of Line2Ds.
Axes.get_yticklines Return the yaxis' tick lines as a list of Line2Ds.
Axes.yaxis_date Set up axis ticks and labels to treat data along the yaxis as dates.
Axes.minorticks_off Remove minor ticks from the Axes.
Axes.minorticks_on Display minor ticks on the Axes.
Axes.ticklabel_format Configure the ScalarFormatter used by default for linear axes.
Axes.tick_params Change the appearance of ticks, tick labels, and gridlines.
Axes.locator_params Control behavior of major tick locators.

Units
Axes.convert_xunits Convert x using the unit type of the xaxis.
Axes.convert_yunits Convert y using the unit type of the yaxis.
Axes.have_units Return whether units are set on any axis.

Adding artists
Axes.add_artist Add an Artist to the Axes; return the artist.
Axes.add_child_axes Add an AxesBase to the Axes' children; return the child Axes.
Axes.add_collection Add a Collection to the Axes; return the collection.
Axes.add_container Add a Container to the axes' containers; return the container.
Axes.add_image Add an AxesImage to the Axes; return the image.
Axes.add_line Add a Line2D to the Axes; return the line.
Axes.add_patch Add a Patch to the Axes; return the patch.
Axes.add_table Add a Table to the Axes; return the table.

Twinning and sharing
Axes.twinx Create a twin Axes sharing the xaxis.
Axes.twiny Create a twin Axes sharing the yaxis.
Axes.sharex Share the x-axis with other.
Axes.sharey Share the y-axis with other.
Axes.get_shared_x_axes Return a reference to the shared axes Grouper object for x axes.
Axes.get_shared_y_axes Return a reference to the shared axes Grouper object for y axes.

Axes position
Axes.get_anchor Get the anchor location.
Axes.set_anchor Define the anchor location.
Axes.get_axes_locator Return the axes_locator.
Axes.set_axes_locator Set the Axes locator.
Axes.reset_position Reset the active position to the original position.
Axes.get_position Return the position of the Axes within the figure as a Bbox.
Axes.set_position Set the Axes position.

Async/event based
Axes.stale Whether the artist is 'stale' and needs to be re-drawn for the output to match the internal state of the artist.
Axes.pchanged Call all of the registered callbacks.
Axes.add_callback Add a callback function that will be called whenever one of the Artist's properties changes.
Axes.remove_callback Remove a callback based on its observer id.

Interactive
Axes.can_pan Return whether this Axes supports any pan/zoom button functionality.
Axes.can_zoom Return whether this Axes supports the zoom box button functionality.
Axes.get_navigate Get whether the Axes responds to navigation commands.
Axes.set_navigate Set whether the Axes responds to navigation toolbar commands.
Axes.get_navigate_mode Get the navigation toolbar button status: 'PAN', 'ZOOM', or None.
Axes.set_navigate_mode Set the navigation toolbar button status.
Axes.start_pan Called when a pan operation has started.
Axes.drag_pan Called when the mouse moves during a pan operation.
Axes.end_pan Called when a pan operation completes (when the mouse button is up).
Axes.format_coord Return a format string formatting the x, y coordinates.
Axes.format_cursor_data Return a string representation of data.
Axes.format_xdata Return x formatted as an x-value.
Axes.format_ydata Return y formatted as a y-value.
Axes.mouseover If this property is set to True, the artist will be queried for custom context information when the mouse cursor moves over it.
Axes.in_axes Return whether the given event (in display coords) is in the Axes.
Axes.contains Test whether the artist contains the mouse event.
Axes.contains_point Return whether point (pair of pixel coordinates) is inside the axes patch.
Axes.get_cursor_data Return the cursor data for a given event. Children
Axes.get_children Return a list of the child Artists of this Artist.
Axes.get_images Return a list of AxesImages contained by the Axes.
Axes.get_lines Return a list of lines contained by the Axes.
Axes.findobj Find artist objects. Drawing
Axes.draw Draw the Artist (and its children) using the given renderer.
Axes.draw_artist Efficiently redraw a single artist.
Axes.redraw_in_frame Efficiently redraw Axes data, but not axis ticks, labels, etc.
Axes.get_renderer_cache
Axes.get_rasterization_zorder Return the zorder value below which artists will be rasterized.
Axes.set_rasterization_zorder Set the zorder threshold for rasterization for vector graphics output.
Axes.get_window_extent Return the Axes bounding box in display space; args and kwargs are empty.
Axes.get_tightbbox Return the tight bounding box of the axes, including axis and their decorators (xlabel, title, etc). Projection Methods used by Axis that must be overridden for non-rectilinear Axes.
Axes.name
Axes.get_xaxis_transform Get the transformation used for drawing x-axis labels, ticks and gridlines.
Axes.get_yaxis_transform Get the transformation used for drawing y-axis labels, ticks and gridlines.
Axes.get_data_ratio Return the aspect ratio of the scaled data.
Axes.get_xaxis_text1_transform
Axes.get_xaxis_text2_transform
Axes.get_yaxis_text1_transform
Axes.get_yaxis_text2_transform Other
Axes.zorder
Axes.get_default_bbox_extra_artists Return a default list of artists that are used for the bounding box calculation.
Axes.get_transformed_clip_path_and_affine Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation.
Axes.has_data Return whether any artists have been added to the Axes.
Axes.set Set multiple properties at once. | |
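Axes.set is a batch front-end to the individual property setters; a minimal sketch (using the non-interactive Agg backend so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Axes.set forwards each keyword to the matching setter
# (set_xlabel, set_ylabel, set_title, set_xlim, ...).
ax.set(xlabel="time [s]", ylabel="voltage [V]", title="Sample", xlim=(0, 10))

print(ax.get_xlabel())  # time [s]
print(ax.get_title())   # Sample
```

Because every keyword maps onto a `set_<name>` method, this is equivalent to calling the individual setters one by one.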
doc_25104 |
Insert a placeholder node into the Graph. A placeholder represents a function input. Parameters
name (str) – A name for the input value. This corresponds to the name of the positional argument to the function this Graph represents.
type_expr (Optional[Any]) – an optional type annotation representing the Python type the output of this node will have. This is needed in some cases for proper code generation (e.g. when the function is used subsequently in TorchScript compilation). Note The same insertion point and type expression rules apply for this method as Graph.create_node. | |
doc_25105 | Called when the connection is lost or closed. The argument is either an exception object or None. The latter means a regular EOF is received, or the connection was aborted or closed by this side of the connection. | |
doc_25106 | Return the qualified name for a (namespace, localname) pair. | |
doc_25107 | from django.contrib.auth.models import User
from django.shortcuts import get_object_or_404
from myapps.serializers import UserSerializer
from rest_framework import viewsets
from rest_framework.response import Response
class UserViewSet(viewsets.ViewSet):
    """
    A simple ViewSet for listing or retrieving users.
    """
    def list(self, request):
        queryset = User.objects.all()
        serializer = UserSerializer(queryset, many=True)
        return Response(serializer.data)

    def retrieve(self, request, pk=None):
        queryset = User.objects.all()
        user = get_object_or_404(queryset, pk=pk)
        serializer = UserSerializer(user)
        return Response(serializer.data)
If we need to, we can bind this viewset into two separate views, like so: user_list = UserViewSet.as_view({'get': 'list'})
user_detail = UserViewSet.as_view({'get': 'retrieve'})
Typically we wouldn't do this, but would instead register the viewset with a router, and allow the urlconf to be automatically generated. from myapp.views import UserViewSet
from rest_framework.routers import DefaultRouter
router = DefaultRouter()
router.register(r'users', UserViewSet, basename='user')
urlpatterns = router.urls
Rather than writing your own viewsets, you'll often want to use the existing base classes that provide a default set of behavior. For example: class UserViewSet(viewsets.ModelViewSet):
    """
    A viewset for viewing and editing user instances.
    """
    serializer_class = UserSerializer
    queryset = User.objects.all()
There are two main advantages of using a ViewSet class over using a View class. Repeated logic can be combined into a single class. In the above example, we only need to specify the queryset once, and it'll be used across multiple views. By using routers, we no longer need to deal with wiring up the URL conf ourselves. Both of these come with a trade-off. Using regular views and URL confs is more explicit and gives you more control. ViewSets are helpful if you want to get up and running quickly, or when you have a large API and you want to enforce a consistent URL configuration throughout. ViewSet actions The default routers included with REST framework will provide routes for a standard set of create/retrieve/update/destroy style actions, as shown below: class UserViewSet(viewsets.ViewSet):
    """
    Example empty viewset demonstrating the standard
    actions that will be handled by a router class.

    If you're using format suffixes, make sure to also include
    the `format=None` keyword argument for each action.
    """
    def list(self, request):
        pass

    def create(self, request):
        pass

    def retrieve(self, request, pk=None):
        pass

    def update(self, request, pk=None):
        pass

    def partial_update(self, request, pk=None):
        pass

    def destroy(self, request, pk=None):
        pass
Introspecting ViewSet actions During dispatch, the following attributes are available on the ViewSet.
basename - the base to use for the URL names that are created.
action - the name of the current action (e.g., list, create).
detail - boolean indicating if the current action is configured for a list or detail view.
suffix - the display suffix for the viewset type - mirrors the detail attribute.
name - the display name for the viewset. This argument is mutually exclusive to suffix.
description - the display description for the individual view of a viewset. You may inspect these attributes to adjust behaviour based on the current action. For example, you could restrict permissions to everything except the list action similar to this: def get_permissions(self):
    """
    Instantiates and returns the list of permissions that this view requires.
    """
    if self.action == 'list':
        permission_classes = [IsAuthenticated]
    else:
        permission_classes = [IsAdminUser]
    return [permission() for permission in permission_classes]
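The per-action selection above is plain Python, so the pattern can be sketched without any framework. In this sketch `IsAuthenticated` and `IsAdminUser` are hypothetical stand-in stubs (not DRF's real permission classes), and the request is modelled as a dict:

```python
# Framework-free sketch of per-action permission selection.
# IsAuthenticated / IsAdminUser below are stand-in stubs, not DRF's classes.
class IsAuthenticated:
    def has_permission(self, request):
        return request.get("user") is not None

class IsAdminUser:
    def has_permission(self, request):
        return bool(request.get("is_admin"))

class UserViewSet:
    def __init__(self, action):
        self.action = action  # set by the dispatcher in a real viewset

    def get_permissions(self):
        if self.action == "list":
            permission_classes = [IsAuthenticated]
        else:
            permission_classes = [IsAdminUser]
        return [permission() for permission in permission_classes]

view = UserViewSet(action="list")
perms = view.get_permissions()
print(type(perms[0]).__name__)  # IsAuthenticated
```

The key point is that `get_permissions` returns *instances*, chosen at dispatch time from `self.action`.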
Marking extra actions for routing If you have ad-hoc methods that should be routable, you can mark them as such with the @action decorator. Like regular actions, extra actions may be intended for either a single object, or an entire collection. To indicate this, set the detail argument to True or False. The router will configure its URL patterns accordingly. e.g., the DefaultRouter will configure detail actions to contain pk in their URL patterns. A more complete example of extra actions: from django.contrib.auth.models import User
from rest_framework import status, viewsets
from rest_framework.decorators import action
from rest_framework.response import Response
from myapp.serializers import UserSerializer, PasswordSerializer
class UserViewSet(viewsets.ModelViewSet):
    """
    A viewset that provides the standard actions
    """
    queryset = User.objects.all()
    serializer_class = UserSerializer

    @action(detail=True, methods=['post'])
    def set_password(self, request, pk=None):
        user = self.get_object()
        serializer = PasswordSerializer(data=request.data)
        if serializer.is_valid():
            user.set_password(serializer.validated_data['password'])
            user.save()
            return Response({'status': 'password set'})
        else:
            return Response(serializer.errors,
                            status=status.HTTP_400_BAD_REQUEST)

    @action(detail=False)
    def recent_users(self, request):
        recent_users = User.objects.all().order_by('-last_login')

        page = self.paginate_queryset(recent_users)
        if page is not None:
            serializer = self.get_serializer(page, many=True)
            return self.get_paginated_response(serializer.data)

        serializer = self.get_serializer(recent_users, many=True)
        return Response(serializer.data)
The action decorator will route GET requests by default, but may also accept other HTTP methods by setting the methods argument. For example: @action(detail=True, methods=['post', 'delete'])
def unset_password(self, request, pk=None):
    ...
The decorator allows you to override any viewset-level configuration such as permission_classes, serializer_class, filter_backends...: @action(detail=True, methods=['post'], permission_classes=[IsAdminOrIsSelf])
def set_password(self, request, pk=None):
    ...
The two new actions will then be available at the urls ^users/{pk}/set_password/$ and ^users/{pk}/unset_password/$. Use the url_path and url_name parameters to change the URL segment and the reverse URL name of the action. To view all extra actions, call the .get_extra_actions() method. Routing additional HTTP methods for extra actions Extra actions can map additional HTTP methods to separate ViewSet methods. For example, the above password set/unset methods could be consolidated into a single route. Note that additional mappings do not accept arguments. @action(detail=True, methods=['put'], name='Change Password')
def password(self, request, pk=None):
    """Update the user's password."""
    ...

@password.mapping.delete
def delete_password(self, request, pk=None):
    """Delete the user's password."""
    ...
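Under the hood, an extra action carries a mapping from HTTP method to handler name. The dispatch idea can be sketched without the framework as a plain dict lookup (the handler names mirror the example above; the `dispatch` helper is illustrative, not DRF's):

```python
# Minimal sketch of an action's method->handler mapping as a plain dict.
# In DRF this mapping is built by @action and @<action>.mapping.<method>.
mapping = {"put": "password", "delete": "delete_password"}

class UserViewSet:
    def password(self, pk):
        return f"updated password for user {pk}"

    def delete_password(self, pk):
        return f"deleted password for user {pk}"

    def dispatch(self, method, pk):
        handler_name = mapping[method.lower()]   # one route, several methods
        handler = getattr(self, handler_name)
        return handler(pk)

view = UserViewSet()
print(view.dispatch("PUT", 1))     # updated password for user 1
print(view.dispatch("DELETE", 1))  # deleted password for user 1
```

This is why additional mappings take no arguments: they only register another method name under the same route.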
Reversing action URLs If you need to get the URL of an action, use the .reverse_action() method. This is a convenience wrapper for reverse(), automatically passing the view's request object and prepending the url_name with the .basename attribute. Note that the basename is provided by the router during ViewSet registration. If you are not using a router, then you must provide the basename argument to the .as_view() method. Using the example from the previous section: >>> view.reverse_action('set-password', args=['1'])
'http://localhost:8000/api/users/1/set_password'
Alternatively, you can use the url_name attribute set by the @action decorator. >>> view.reverse_action(view.set_password.url_name, args=['1'])
'http://localhost:8000/api/users/1/set_password'
The url_name argument for .reverse_action() should match the same argument to the @action decorator. Additionally, this method can be used to reverse the default actions, such as list and create. API Reference ViewSet The ViewSet class inherits from APIView. You can use any of the standard attributes such as permission_classes, authentication_classes in order to control the API policy on the viewset. The ViewSet class does not provide any implementations of actions. In order to use a ViewSet class you'll override the class and define the action implementations explicitly. GenericViewSet The GenericViewSet class inherits from GenericAPIView, and provides the default set of get_object, get_queryset methods and other generic view base behavior, but does not include any actions by default. In order to use a GenericViewSet class you'll override the class and either mixin the required mixin classes, or define the action implementations explicitly. ModelViewSet The ModelViewSet class inherits from GenericAPIView and includes implementations for various actions, by mixing in the behavior of the various mixin classes. The actions provided by the ModelViewSet class are .list(), .retrieve(), .create(), .update(), .partial_update(), and .destroy(). Example Because ModelViewSet extends GenericAPIView, you'll normally need to provide at least the queryset and serializer_class attributes. For example: class AccountViewSet(viewsets.ModelViewSet):
    """
    A simple ViewSet for viewing and editing accounts.
    """
    queryset = Account.objects.all()
    serializer_class = AccountSerializer
    permission_classes = [IsAccountAdminOrReadOnly]
Note that you can use any of the standard attributes or method overrides provided by GenericAPIView. For example, to use a ViewSet that dynamically determines the queryset it should operate on, you might do something like this: class AccountViewSet(viewsets.ModelViewSet):
    """
    A simple ViewSet for viewing and editing the accounts
    associated with the user.
    """
    serializer_class = AccountSerializer
    permission_classes = [IsAccountAdminOrReadOnly]

    def get_queryset(self):
        return self.request.user.accounts.all()
Note however that upon removal of the queryset property from your ViewSet, any associated router will be unable to derive the basename of your Model automatically, and so you will have to specify the basename kwarg as part of your router registration. Also note that although this class provides the complete set of create/list/retrieve/update/destroy actions by default, you can restrict the available operations by using the standard permission classes. ReadOnlyModelViewSet The ReadOnlyModelViewSet class also inherits from GenericAPIView. As with ModelViewSet it also includes implementations for various actions, but unlike ModelViewSet only provides the 'read-only' actions, .list() and .retrieve(). Example As with ModelViewSet, you'll normally need to provide at least the queryset and serializer_class attributes. For example: class AccountViewSet(viewsets.ReadOnlyModelViewSet):
    """
    A simple ViewSet for viewing accounts.
    """
    queryset = Account.objects.all()
    serializer_class = AccountSerializer
Again, as with ModelViewSet, you can use any of the standard attributes and method overrides available to GenericAPIView. Custom ViewSet base classes You may need to provide custom ViewSet classes that do not have the full set of ModelViewSet actions, or that customize the behavior in some other way. Example To create a base viewset class that provides create, list and retrieve operations, inherit from GenericViewSet, and mixin the required actions: from rest_framework import mixins
class CreateListRetrieveViewSet(mixins.CreateModelMixin,
                                mixins.ListModelMixin,
                                mixins.RetrieveModelMixin,
                                viewsets.GenericViewSet):
    """
    A viewset that provides `retrieve`, `create`, and `list` actions.

    To use it, override the class and set the `.queryset` and
    `.serializer_class` attributes.
    """
    pass
By creating your own base ViewSet classes, you can provide common behavior that can be reused in multiple viewsets across your API. viewsets.py | |
doc_25108 | Required. The maximum length (in characters) of the field. The max_length is enforced at the database level and in Django’s validation using MaxLengthValidator. Note If you are writing an application that must be portable to multiple database backends, you should be aware that there are restrictions on max_length for some backends. Refer to the database backend notes for details. | |
doc_25109 | Saves a new file using the storage system, preferably with the name specified. If there already exists a file with this name name, the storage system may modify the filename as necessary to get a unique name. The actual name of the stored file will be returned. The max_length argument is passed along to get_available_name(). The content argument must be an instance of django.core.files.File or a file-like object that can be wrapped in File. | |
doc_25110 | sklearn.isotonic.check_increasing(x, y)
Determine whether y is monotonically correlated with x. y is found increasing or decreasing with respect to x based on a Spearman correlation test. Parameters
x : array-like of shape (n_samples,)
Training data.
y : array-like of shape (n_samples,)
Training target. Returns
increasing_bool : boolean
Whether the relationship is increasing or decreasing. Notes The Spearman correlation coefficient is estimated from the data, and the sign of the resulting estimate is used as the result. In the event that the 95% confidence interval based on Fisher transform spans zero, a warning is raised. References Fisher transformation. Wikipedia. https://en.wikipedia.org/wiki/Fisher_transformation | |
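The Spearman-sign idea can be sketched in plain NumPy (this is an illustration of the principle only, not sklearn's implementation, which also computes the Fisher-transform confidence interval):

```python
import numpy as np

def check_increasing_sketch(x, y):
    """Return True if y trends upward with x, judged by the sign of
    the Spearman rank correlation (adequate for tie-free data)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Ranks via double argsort.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rho = np.corrcoef(rx, ry)[0, 1]  # Pearson correlation of the ranks
    return bool(rho >= 0)

print(check_increasing_sketch([1, 2, 3, 4], [10, 20, 25, 40]))  # True
print(check_increasing_sketch([1, 2, 3, 4], [9, 7, 4, 1]))      # False
```

Because only the *sign* of the estimate is used, the magnitude of the correlation is irrelevant to the returned flag.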
doc_25111 | Round away from zero if last digit after rounding towards zero would have been 0 or 5; otherwise round towards zero. | |
doc_25112 | See Migration guide for more details. tf.compat.v1.raw_ops.BatchFFT3D
tf.raw_ops.BatchFFT3D(
input, name=None
)
Args
input A Tensor of type complex64.
name A name for the operation (optional).
Returns A Tensor of type complex64. | |
doc_25113 |
Return the position of the rectangle. | |
doc_25114 |
Get the Timestamp for the end of the period. Returns
Timestamp
See also
Period.start_time : Return the start Timestamp.
Period.dayofyear : Return the day of year.
Period.daysinmonth : Return the days in that month.
Period.dayofweek : Return the day of the week. | |
doc_25115 |
Set the tail patch. Parameters
patchA : patches.Patch | |
doc_25116 |
Apply only the affine part of this transformation on the given array of values. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally a no-op. In affine transformations, this is equivalent to transform(values). Parameters
values : array
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input. | |
doc_25117 |
Roll provided date backward to next offset only if not on offset. Returns
Timestamp
Rolled timestamp if not on offset, otherwise unchanged timestamp. | |
doc_25118 | See Migration guide for more details. tf.compat.v1.math.reduce_prod
tf.compat.v1.reduce_prod(
input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None,
keep_dims=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead. Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned.
Args
input_tensor The tensor to reduce. Should have numeric type.
axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
keepdims If true, retains reduced dimensions with length 1.
name A name for the operation (optional).
reduction_indices The old (deprecated) name for axis.
keep_dims Deprecated alias for keepdims.
Returns The reduced tensor.
Numpy Compatibility Equivalent to np.prod | |
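The NumPy-compatibility note can be checked directly; a small `np.prod` sketch mirroring the axis/keepdims semantics:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])

print(np.prod(x))          # 24 (all dimensions reduced)
print(np.prod(x, axis=0))  # [3 8]
print(np.prod(x, axis=1))  # [ 2 12]
# keepdims=True retains the reduced axis with length 1.
print(np.prod(x, axis=1, keepdims=True).shape)  # (2, 1)
```

As with reduce_prod, omitting axis reduces everything to a scalar, and keepdims controls whether the reduced dimensions survive with length 1.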
doc_25119 |
Return (x1 != x2) element-wise. Parameters
x1, x2 : array_like
Input arrays. If x1.shape != x2.shape, they must be broadcastable to a common shape (which becomes the shape of the output).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
where : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized.
**kwargs
For other keyword-only arguments, see the ufunc docs. Returns
out : ndarray or scalar
Output array, element-wise comparison of x1 and x2. Typically of type bool, unless dtype=object is passed. This is a scalar if both x1 and x2 are scalars. See also
equal, greater, greater_equal, less, less_equal
Examples >>> np.not_equal([1.,2.], [1., 3.])
array([False, True])
>>> np.not_equal([1, 2], [[1, 3],[1, 4]])
array([[False, True],
[False, True]])
The != operator can be used as a shorthand for np.not_equal on ndarrays. >>> a = np.array([1., 2.])
>>> b = np.array([1., 3.])
>>> a != b
array([False, True]) | |
doc_25120 | Open FTP URLs, keeping a cache of open FTP connections to minimize delays. | |
doc_25121 | Optional. Use this when you don’t want to have a last page with very few items. If the last page would normally have a number of items less than or equal to orphans, then those items will be added to the previous page (which becomes the last page) instead of leaving the items on a page by themselves. For example, with 23 items, per_page=10, and orphans=3, there will be two pages; the first page with 10 items and the second (and last) page with 13 items. orphans defaults to zero, which means pages are never combined and the last page may have one item. | |
doc_25122 |
Like Artist.get_window_extent, but includes any clipping. Parameters
renderer : RendererBase subclass
renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer()) Returns
Bbox
The enclosing bounding box (in figure pixel coordinates). | |
doc_25123 |
Bases: matplotlib.backends.backend_cairo.FigureCanvasCairo, matplotlib.backends._backend_tk.FigureCanvasTk draw()[source]
Render the Figure. It is important that this method actually walk the artist tree even if no output is produced, because this will trigger deferred work (like computing auto-limits and tick values) that users may want to access before saving to disk. | |
doc_25124 | Optionally the import path for the Flask application. | |
doc_25125 | See Migration guide for more details. tf.compat.v1.data.experimental.ThreadingOptions
tf.data.experimental.ThreadingOptions()
You can set the threading options of a dataset through the experimental_threading property of tf.data.Options; the property is an instance of tf.data.experimental.ThreadingOptions. options = tf.data.Options()
options.experimental_threading.private_threadpool_size = 10
dataset = dataset.with_options(options)
Attributes
max_intra_op_parallelism If set, it overrides the maximum degree of intra-op parallelism.
private_threadpool_size If set, the dataset will use a private threadpool of the given size. Methods __eq__ View source
__eq__(
other
)
Return self==value. __ne__ View source
__ne__(
other
)
Return self!=value. | |
doc_25126 |
Similar to smart_str(), except that lazy instances are resolved to strings, rather than kept as lazy objects. If strings_only is True, don’t convert (some) non-string-like objects. | |
doc_25127 |
Helper decorator for backward methods of custom autograd functions (subclasses of torch.autograd.Function). Ensures that backward executes with the same autocast state as forward. See the example page for more detail. | |
doc_25128 |
Set the label properties - color, fontsize, text. | |
doc_25129 |
Set if artist is to be included in layout calculations, e.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters
in_layoutbool | |
doc_25130 |
Concatenate all images in the collection into an array. Returns
ar : np.ndarray
An array having one more dimension than the images in self. Raises
ValueError
If images in the ImageCollection don’t have identical shapes. See also
concatenate_images | |
doc_25131 | Determine the prefix for the generated form. Returns prefix by default. | |
doc_25132 |
Set the locators and formatters of axis to instances suitable for this scale. | |
doc_25133 | Reference pixels into a 3d array pixels3d(Surface) -> array Create a new 3D array that directly references the pixel values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This will only work on Surfaces that have 24-bit or 32-bit formats. Lower pixel formats cannot be referenced. The Surface this references will remain locked for the lifetime of the array (see the pygame.Surface.lock() - lock the Surface memory for pixel access method). | |
doc_25134 | Request message list, result is in the form (response, ['mesg_num octets',
...], octets). If which is set, it is the message to list. | |
doc_25135 |
Learn a NMF model for the data X. Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Data matrix to be decomposed.
y : Ignored
Returns
self | |
doc_25136 | at the end of your dataset, and see how it affects your model's step time. take(1).cache().repeat() will cache the first element of your dataset and produce it repeatedly. This should make the dataset very fast, so that the model becomes the bottleneck and you can identify the ideal model speed. With enough workers, the tf.data service should be able to achieve similar speed. Running the tf.data service tf.data servers should be brought up alongside your training jobs, and brought down when the jobs are finished. The tf.data service uses one DispatchServer and any number of WorkerServers. See https://github.com/tensorflow/ecosystem/tree/master/data_service for an example of using Google Kubernetes Engine (GKE) to manage the tf.data service. The server implementation in tf_std_data_server.py is not GKE-specific, and can be used to run the tf.data service in other contexts. Fault tolerance By default, the tf.data dispatch server stores its state in-memory, making it a single point of failure during training. To avoid this, pass fault_tolerant_mode=True when creating your DispatchServer. Dispatcher fault tolerance requires work_dir to be configured and accessible from the dispatcher both before and after restart (e.g. a GCS path). With fault tolerant mode enabled, the dispatcher will journal its state to the work directory so that no state is lost when the dispatcher is restarted. WorkerServers may be freely restarted, added, or removed during training. At startup, workers will register with the dispatcher and begin processing all outstanding jobs from the beginning. Using the tf.data service from your training job Once you have a tf.data service cluster running, take note of the dispatcher IP address and port. To connect to the service, you will use a string in the format "grpc://<dispatcher_address>:<dispatcher_port>". # Create the dataset however you were before using the tf.data service.
dataset = your_dataset_factory()
service = "grpc://{}:{}".format(dispatcher_address, dispatcher_port)
# This will register the dataset with the tf.data service cluster so that
# tf.data workers can run the dataset to produce elements. The dataset returned
# from applying `distribute` will fetch elements produced by tf.data workers.
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=service))
Below is a toy example that you can run yourself.
dispatcher = tf.data.experimental.service.DispatchServer()
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(
dispatcher_address=dispatcher_address))
dataset = tf.data.Dataset.range(10)
dataset = dataset.apply(tf.data.experimental.service.distribute(
processing_mode="parallel_epochs", service=dispatcher.target))
print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
See the documentation of tf.data.experimental.service.distribute for more details about using the distribute transformation. Classes class DispatcherConfig: Configuration class for tf.data service dispatchers. class WorkerConfig: Configuration class for tf.data service workers. Functions distribute(...): A transformation that moves dataset processing to the tf.data service. from_dataset_id(...): Creates a dataset which reads data from the tf.data service. register_dataset(...): Registers a dataset with the tf.data service. | |
doc_25137 |
Apply a flat kernel bilateral filter. This is an edge-preserving and noise reducing denoising filter. It averages pixels based on their spatial closeness and radiometric similarity. Spatial closeness is measured by considering only the local pixel neighborhood given by a structuring element. Radiometric similarity is defined by the greylevel interval [g-s0, g+s1] where g is the current pixel greylevel. Only pixels belonging to the structuring element and having a greylevel inside this interval are averaged. Parameters
image : 2-D array (uint8, uint16)
Input image.
selem : 2-D array
The neighborhood expressed as a 2-D array of 1’s and 0’s.
out : 2-D array (same dtype as input)
If None, a new array is allocated.
mask : ndarray
Mask array that defines (>0) area of the image included in the local neighborhood. If None, the complete image is used (default).
shift_x, shift_y : int
Offset added to the structuring element center point. Shift is bounded to the structuring element sizes (center must be inside the given structuring element).
s0, s1 : int
Define the [s0, s1] interval around the greyvalue of the center pixel to be considered for computing the value. Returns
out : 2-D array (same dtype as input image)
Output image. See also
denoise_bilateral
Examples >>> from skimage import data
>>> from skimage.morphology import disk
>>> from skimage.filters.rank import mean_bilateral
>>> img = data.camera().astype(np.uint16)
>>> bilat_img = mean_bilateral(img, disk(20), s0=10,s1=10) | |
doc_25138 |
Predict class labels for samples in X. Parameters
X : array-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
C : array, shape [n_samples]
Predicted class label per sample. | |
doc_25139 |
Performs Sigmoid Correction on the input image. Also known as Contrast Adjustment. This function transforms the input image pixelwise according to the equation O = 1/(1 + exp(gain*(cutoff - I))) after scaling each pixel to the range 0 to 1. Parameters
image : ndarray
Input image.
cutoff : float, optional
Cutoff of the sigmoid function that shifts the characteristic curve in horizontal direction. Default value is 0.5.
gain : float, optional
The constant multiplier in exponential’s power of sigmoid function. Default value is 10.
inv : bool, optional
If True, returns the negative sigmoid correction. Defaults to False. Returns
out : ndarray
Sigmoid corrected output image. See also
adjust_gamma
References
1
Gustav J. Braun, “Image Lightness Rescaling Using Sigmoidal Contrast Enhancement Functions”, http://www.cis.rit.edu/fairchild/PDFs/PAP07.pdf | |
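The transform itself is a one-liner in NumPy; a sketch on an already [0, 1]-scaled image (skimage's version additionally handles dtype rescaling), with the helper name chosen for illustration:

```python
import numpy as np

def sigmoid_correct(image, cutoff=0.5, gain=10, inv=False):
    """Apply O = 1 / (1 + exp(gain * (cutoff - I))) pixelwise.
    `image` is assumed to be a float array already scaled to [0, 1]."""
    out = 1.0 / (1.0 + np.exp(gain * (cutoff - np.asarray(image, float))))
    return 1.0 - out if inv else out

img = np.array([0.0, 0.5, 1.0])
print(sigmoid_correct(img))  # approx [0.0067, 0.5, 0.9933]
```

Pixels at the cutoff map to exactly 0.5; raising gain steepens the S-curve, pushing values below the cutoff towards 0 and above it towards 1.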
doc_25140 | See Migration guide for more details. tf.compat.v1.raw_ops.ConcatenateDataset
tf.raw_ops.ConcatenateDataset(
input_dataset, another_dataset, output_types, output_shapes, name=None
)
Args
input_dataset A Tensor of type variant.
another_dataset A Tensor of type variant.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_25141 | See Migration guide for more details. tf.compat.v1.keras.layers.experimental.preprocessing.Discretization
tf.keras.layers.experimental.preprocessing.Discretization(
bins, **kwargs
)
This layer will place each element of its input data into one of several contiguous ranges and output an integer index indicating which range each element was placed in. Input shape: Any tf.Tensor or tf.RaggedTensor of dimension 2 or higher. Output shape: Same as input shape. Examples: Bucketize float values based on provided buckets. >>> input = np.array([[-1.5, 1.0, 3.4, .5], [0.0, 3.0, 1.3, 0.0]])
>>> layer = tf.keras.layers.experimental.preprocessing.Discretization(
... bins=[0., 1., 2.])
>>> layer(input)
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[0, 1, 3, 1],
[0, 3, 2, 0]], dtype=int32)>
Attributes
bins Optional boundary specification. Bins exclude the left boundary and include the right boundary, so bins=[0., 1., 2.] generates bins (-inf, 0.], (0., 1.], (1., 2.], and (2., +inf). Methods adapt View source
adapt(
data, reset_state=True
)
Fits the state of the preprocessing layer to the data being passed.
Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt, or whether to start from the existing state. This argument may not be relevant to all preprocessing layers: a subclass of PreprocessingLayer may choose to throw if 'reset_state' is set to False. | |
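The bucketing in the example above can be reproduced with NumPy alone; np.digitize with right=True uses the same boundary convention that the numeric output reflects (exclude left, include right). This is a rough analogue for illustration, not the layer's implementation:

```python
import numpy as np

# Same input as the example above.
data = np.array([[-1.5, 1.0, 3.4, 0.5],
                 [0.0, 3.0, 1.3, 0.0]])

# right=True: np.digitize assigns index i such that bins[i-1] < x <= bins[i],
# i.e. the intervals (-inf, 0.], (0., 1.], (1., 2.], (2., +inf) -> 0..3.
indices = np.digitize(data, bins=[0.0, 1.0, 2.0], right=True)
print(indices)   # matches the tensor shown in the example
```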
doc_25142 |
Measure many replicates while keeping timer overhead to a minimum. At a high level, blocked_autorange executes the following pseudo-code: `setup`
total_time = 0
while total_time < min_run_time
start = timer()
for _ in range(block_size):
`stmt`
total_time += (timer() - start)
Note the variable block_size in the inner loop. The choice of block size is important to measurement quality, and must balance two competing objectives: A small block size results in more replicates and generally better statistics. A large block size better amortizes the cost of timer invocation, and results in a less biased measurement. This is important because CUDA synchronization time is non-trivial (order single to low double digit microseconds) and would otherwise bias the measurement. blocked_autorange sets block_size by running a warmup period, increasing block size until timer overhead is less than 0.1% of the overall computation. This value is then used for the main measurement loop. Returns
A Measurement object that contains measured runtimes and repetition counts, and can be used to compute statistics. (mean, median, etc.) | |
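The pseudo-code above can be sketched in plain Python with a fixed block_size (the function name is illustrative, not part of the torch.utils.benchmark API; the real implementation additionally picks block_size during a warmup phase):

```python
import timeit

def blocked_measure(stmt, block_size=100, min_run_time=0.1):
    """Sketch of the blocked measurement loop from the pseudo-code above.

    block_size is fixed here; the real blocked_autorange grows it during
    warmup until timer overhead is below ~0.1% of the measurement.
    """
    timer = timeit.default_timer
    times = []          # per-block runtimes (the "replicates")
    total_time = 0.0
    while total_time < min_run_time:
        start = timer()
        for _ in range(block_size):
            stmt()
        elapsed = timer() - start
        times.append(elapsed)
        total_time += elapsed
    # Mean time per single invocation of stmt.
    return sum(times) / (len(times) * block_size)

per_call = blocked_measure(lambda: sum(range(100)))
print(per_call)
```

Because the timer is read once per block rather than once per statement, its overhead is amortized across block_size invocations, which is the bias-vs-statistics trade-off the text describes.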
doc_25143 |
Return the maximum along a given axis. Refer to numpy.amax for full documentation. See also numpy.amax
equivalent function | |
doc_25144 | Asserts that the HTML fragment needle is contained in the haystack one. If the count integer argument is specified, then additionally the number of needle occurrences will be strictly verified. Whitespace in most cases is ignored, and attribute ordering is not significant. See assertHTMLEqual() for more details. | |
doc_25145 |
The minutes of the datetime. Examples
>>> datetime_series = pd.Series(
... pd.date_range("2000-01-01", periods=3, freq="T")
... )
>>> datetime_series
0 2000-01-01 00:00:00
1 2000-01-01 00:01:00
2 2000-01-01 00:02:00
dtype: datetime64[ns]
>>> datetime_series.dt.minute
0 0
1 1
2 2
dtype: int64 | |
doc_25146 |
Access a single value for a row/column pair by integer position. Similar to iloc, in that both provide integer-based lookups. Use iat if you only need to get or set a single value in a DataFrame or Series. Raises
IndexError
When integer position is out of bounds. See also DataFrame.at
Access a single value for a row/column label pair. DataFrame.loc
Access a group of rows and columns by label(s). DataFrame.iloc
Access a group of rows and columns by integer position(s). Examples
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... columns=['A', 'B', 'C'])
>>> df
A B C
0 0 2 3
1 0 4 1
2 10 20 30
Get value at specified row/column pair
>>> df.iat[1, 2]
1
Set value at specified row/column pair
>>> df.iat[1, 2] = 10
>>> df.iat[1, 2]
10
Get value within a series
>>> df.loc[0].iat[1]
2 | |
doc_25147 |
Remove contiguous holes smaller than the specified size. Parameters
arndarray (arbitrary shape, int or bool type)
The array containing the connected components of interest.
area_thresholdint, optional (default: 64)
The maximum area, in pixels, of a contiguous hole that will be filled. Replaces min_size.
connectivityint, {1, 2, …, ar.ndim}, optional (default: 1)
The connectivity defining the neighborhood of a pixel.
in_placebool, optional (default: False)
If True, remove the connected components in the input array itself. Otherwise, make a copy. Returns
outndarray, same shape and type as input ar
The input array with small holes within connected components removed. Raises
TypeError
If the input array is of an invalid type, such as float or string. ValueError
If the input array contains negative values. Notes If the array type is int, it is assumed that it contains already-labeled objects. The labels are not kept in the output image (this function always outputs a bool image). It is suggested that labeling is completed after using this function. Examples >>> from skimage import morphology
>>> a = np.array([[1, 1, 1, 1, 1, 0],
... [1, 1, 1, 0, 1, 0],
... [1, 0, 0, 1, 1, 0],
... [1, 1, 1, 1, 1, 0]], bool)
>>> b = morphology.remove_small_holes(a, 2)
>>> b
array([[ True, True, True, True, True, False],
[ True, True, True, True, True, False],
[ True, False, False, True, True, False],
[ True, True, True, True, True, False]])
>>> c = morphology.remove_small_holes(a, 2, connectivity=2)
>>> c
array([[ True, True, True, True, True, False],
[ True, True, True, False, True, False],
[ True, False, False, True, True, False],
[ True, True, True, True, True, False]])
>>> d = morphology.remove_small_holes(a, 2, in_place=True)
>>> d is a
True | |
doc_25148 | Tries to execute this command, performing system checks if needed (as controlled by the requires_system_checks attribute). If the command raises a CommandError, it’s intercepted and printed to stderr. | |
doc_25149 |
Return whether face is colored. | |
doc_25150 | An async def function definition. Has the same fields as FunctionDef. | |
doc_25151 | See Migration guide for more details. tf.compat.v1.linalg.norm
tf.compat.v1.norm(
tensor, ord='euclidean', axis=None, keepdims=None, name=None,
keep_dims=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (keep_dims). They will be removed in a future version. Instructions for updating: keep_dims is deprecated, use keepdims instead This function can compute several different vector norms (the 1-norm, the Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).
Args
tensor Tensor of types float32, float64, complex64, complex128
ord Order of the norm. Supported values are 'fro', 'euclidean', 1, 2, np.inf and any positive real number yielding the corresponding p-norm. Default is 'euclidean' which is equivalent to Frobenius norm if tensor is a matrix and equivalent to 2-norm for vectors. Some restrictions apply: a) The Frobenius norm fro is not defined for vectors, b) If axis is a 2-tuple (matrix norm), only 'euclidean', 'fro', 1, 2, np.inf are supported. See the description of axis on how to compute norms for a batch of vectors or matrices stored in a tensor.
axis If axis is None (the default), the input is considered a vector and a single vector norm is computed over the entire set of values in the tensor, i.e. norm(tensor, ord=ord) is equivalent to norm(reshape(tensor, [-1]), ord=ord). If axis is a Python integer, the input is considered a batch of vectors, and axis determines the axis in tensor over which to compute vector norms. If axis is a 2-tuple of Python integers it is considered a batch of matrices and axis determines the axes in tensor over which to compute a matrix norm. Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass axis=[-2,-1] instead of axis=None to make sure that matrix norms are computed.
keepdims If True, the axis indicated in axis are kept with size 1. Otherwise, the dimensions in axis are removed from the output shape.
name The name of the op.
keep_dims Deprecated alias for keepdims.
Returns
output A Tensor of the same type as tensor, containing the vector or matrix norms. If keepdims is True then the rank of output is equal to the rank of tensor. Otherwise, if axis is none the output is a scalar, if axis is an integer, the rank of output is one less than the rank of tensor, if axis is a 2-tuple the rank of output is two less than the rank of tensor.
Raises
ValueError If ord or axis is invalid. Numpy Compatibility Mostly equivalent to numpy.linalg.norm. Not supported: ord <= 0, 2-norm for matrices, nuclear norm. Other differences: a) If axis is None, treats the flattened tensor as a vector regardless of rank. b) Explicitly supports 'euclidean' norm as the default, including for higher order tensors. | |
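Per the Numpy Compatibility note above, the axis semantics mirror numpy.linalg.norm, so the three cases (flattened vector, batch of vectors, batch of matrices) can be sketched with NumPy alone — an illustration of the semantics, not TensorFlow itself:

```python
import numpy as np

t = np.arange(24, dtype=np.float64).reshape(2, 3, 4)

# axis=None: flatten and take a single vector norm over all values.
flat_norm = np.linalg.norm(t.ravel())

# axis as an int: treated as a batch of vectors along that axis.
vec_norms = np.linalg.norm(t, axis=-1)                    # shape (2, 3)

# axis as a 2-tuple: a batch of matrices, one Frobenius norm per matrix.
mat_norms = np.linalg.norm(t, ord='fro', axis=(-2, -1))   # shape (2,)

print(flat_norm, vec_norms.shape, mat_norms.shape)
```

Passing axis=[-2, -1] rather than axis=None is the safe choice when the input may be either a single matrix or a batch of matrices, as the axis description recommends.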
doc_25152 |
Set the locators and formatters of axis to instances suitable for this scale. | |
doc_25153 | Create a RequestContext for a WSGI environment created from the given values. This is mostly useful during testing, where you may want to run a function that uses request data without dispatching a full request. See The Request Context. Use a with block to push the context, which will make request point at the request for the created environment. with test_request_context(...):
generate_report()
When using the shell, it may be easier to push and pop the context manually to avoid indentation. ctx = app.test_request_context(...)
ctx.push()
...
ctx.pop()
Takes the same arguments as Werkzeug’s EnvironBuilder, with some defaults from the application. See the linked Werkzeug docs for most of the available arguments. Flask-specific behavior is listed here. Parameters
path – URL path being requested.
base_url – Base URL where the app is being served, which path is relative to. If not given, built from PREFERRED_URL_SCHEME, subdomain, SERVER_NAME, and APPLICATION_ROOT.
subdomain – Subdomain name to append to SERVER_NAME.
url_scheme – Scheme to use instead of PREFERRED_URL_SCHEME.
data – The request body, either as a string or a dict of form keys and values.
json – If given, this is serialized as JSON and passed as data. Also defaults content_type to application/json.
args (Any) – other positional arguments passed to EnvironBuilder.
kwargs (Any) – other keyword arguments passed to EnvironBuilder. Return type
flask.ctx.RequestContext | |
doc_25154 |
Loads the scheduler's state. Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict(). | |
doc_25155 | A subclass of Shelf which accepts a filename instead of a dict-like object. The underlying file will be opened using dbm.open(). By default, the file will be created and opened for both read and write. The optional flag parameter has the same interpretation as for the open() function. The optional protocol and writeback parameters have the same interpretation as for the Shelf class. | |
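A small usage sketch via the shelve.open() shortcut mentioned above (the file path is illustrative):

```python
import os
import shelve
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "example_shelf")

# 'c' (the default flag) creates the file if needed, opened for read/write.
with shelve.open(path, flag="c") as db:
    db["answer"] = 42          # values are pickled transparently
    db["items"] = [1, 2, 3]

# Reopen read-only; the same flag values as dbm.open() apply.
with shelve.open(path, flag="r") as db:
    print(db["answer"], db["items"])
```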
doc_25156 | Get a list of frame records for a traceback’s frame and all inner frames. These frames represent calls made as a consequence of frame. The first entry in the list represents traceback; the last entry represents where the exception was raised. Changed in version 3.5: A list of named tuples FrameInfo(frame, filename, lineno, function, code_context, index) is returned. | |
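A stdlib sketch showing the ordering of the returned frame records — outermost call first, the raising frame last:

```python
import inspect
import sys

def inner():
    raise ValueError("boom")

def outer():
    inner()

try:
    outer()
except ValueError:
    tb = sys.exc_info()[2]
    frames = inspect.getinnerframes(tb)

# The first entry is where the traceback starts (the `try` block);
# the last entry is where the exception was raised.
print([f.function for f in frames])
```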
doc_25157 | bytearray.rjust(width[, fillbyte])
Return a copy of the object right justified in a sequence of length width. Padding is done using the specified fillbyte (default is an ASCII space). For bytes objects, the original sequence is returned if width is less than or equal to len(s). Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made. | |
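A short sketch of the behaviors described above, including the copy-not-in-place note:

```python
ba = bytearray(b"abc")

padded = ba.rjust(5)            # default fill is an ASCII space
print(padded)                   # bytearray(b'  abc')

# The fill argument must be a single byte.
print(ba.rjust(5, b"-"))        # bytearray(b'--abc')

# width <= len(s): an equal copy is returned, never the same object.
copy = ba.rjust(2)
print(copy == ba, copy is ba)   # True False
```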
doc_25158 | When serving files, set the cache control max age to this number of seconds. Can be a datetime.timedelta or an int. Override this value on a per-file basis using get_send_file_max_age() on the application or blueprint. If None, send_file tells the browser to use conditional requests instead of a timed cache, which is usually preferable. Default: None | |
doc_25159 |
Inverse short time Fourier Transform. This is expected to be the inverse of stft(). It has the same parameters (plus an additional optional parameter length) and it should return the least squares estimation of the original signal. The algorithm will check using the NOLA condition (nonzero overlap). An important consideration for the parameters window and center is that the envelope created by the summation of all the windows must never be zero at any point in time. Specifically, \(\sum_{t=-\infty}^{\infty} |w|^2[n - t \times hop\_length] \neq 0\). Since stft() discards elements at the end of the signal if they do not fit in a frame, istft may return a shorter signal than the original signal (this can occur if center is False, since the signal isn't padded). If center is True, then there will be padding, e.g. 'constant', 'reflect', etc. Left padding can be trimmed off exactly because it can be calculated, but right padding cannot be calculated without additional information. Example: Suppose the last window is: [17, 18, 0, 0, 0] vs [18, 0, 0, 0, 0]. The n_fft, hop_length, win_length are all the same, which prevents the calculation of right padding. These additional values could be zeros or a reflection of the signal, so providing length could be useful. If length is None then padding will be aggressively removed (some loss of signal). [1] D. W. Griffin and J. S. Lim, “Signal estimation from modified short-time Fourier transform,” IEEE Trans. ASSP, vol.32, no.2, pp.236-243, Apr. 1984. Parameters
input (Tensor) –
The input tensor. Expected to be output of stft(), can either be complex (channel, fft_size, n_frame), or real (channel, fft_size, n_frame, 2) where the channel dimension is optional. Deprecated since version 1.8.0: Real input is deprecated, use complex inputs as returned by stft(..., return_complex=True) instead.
n_fft (int) – Size of Fourier transform
hop_length (Optional[int]) – The distance between neighboring sliding window frames. (Default: n_fft // 4)
win_length (Optional[int]) – The size of window frame and STFT filter. (Default: n_fft)
window (Optional[torch.Tensor]) – The optional window function. (Default: torch.ones(win_length))
center (bool) – Whether input was padded on both sides so that the t-th frame is centered at time t * hop_length. (Default: True)
normalized (bool) – Whether the STFT was normalized. (Default: False)
onesided (Optional[bool]) – Whether the STFT was onesided. (Default: True if n_fft != fft_size in the input size)
length (Optional[int]) – The amount to trim the signal by (i.e. the original signal length). (Default: whole signal)
return_complex (Optional[bool]) – Whether the output should be complex, or if the input should be assumed to derive from a real signal and window. Note that this is incompatible with onesided=True. (Default: False) Returns
Least squares estimation of the original signal of size (…, signal_length) Return type
Tensor | |
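The NOLA condition above can be checked directly: sum the squared, shifted copies of the window and verify the resulting envelope never reaches zero. A NumPy sketch of that check (over one window length rather than the full infinite sum, and not the torch implementation):

```python
import numpy as np

def nola_ok(window, hop_length):
    """Check the nonzero-overlap-add condition over one window length.

    Accumulates |w|^2 shifted by multiples of hop_length and verifies the
    envelope is bounded away from zero at every sample position.
    """
    n = len(window)
    envelope = np.zeros(n)
    for t in range(-(n // hop_length), n // hop_length + 1):
        shift = t * hop_length
        lo, hi = max(0, shift), min(n, n + shift)
        envelope[lo:hi] += np.abs(window[lo - shift:hi - shift]) ** 2
    return bool(np.min(envelope) > 1e-10)

hann = np.hanning(256)
print(nola_ok(hann, hop_length=64))    # 75% overlap: condition holds
print(nola_ok(hann, hop_length=256))   # no overlap: Hann endpoints are zero
```

This is why the defaults (hop_length = n_fft // 4 with a tapered window) are safe, while non-overlapping tapered windows are not invertible.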
doc_25160 | alias of werkzeug.useragents._UserAgent | |
doc_25161 | Alternative constructor. The tarfile.open() function is actually a shortcut to this classmethod. | |
doc_25162 |
Matrix or vector norm. This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the ord parameter. Parameters
xarray_like
Input array. If axis is None, x must be 1-D or 2-D, unless ord is None. If both axis and ord are None, the 2-norm of x.ravel will be returned.
ord{non-zero int, inf, -inf, ‘fro’, ‘nuc’}, optional
Order of the norm (see table under Notes). inf means numpy’s inf object. The default is None.
axis{None, int, 2-tuple of ints}, optional.
If axis is an integer, it specifies the axis of x along which to compute the vector norms. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If axis is None then either a vector norm (when x is 1-D) or a matrix norm (when x is 2-D) is returned. The default is None. New in version 1.8.0.
keepdimsbool, optional
If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original x. New in version 1.10.0. Returns
nfloat or ndarray
Norm of the matrix or vector(s). See also scipy.linalg.norm
Similar function in SciPy. Notes For values of ord < 1, the result is, strictly speaking, not a mathematical ‘norm’, but it may still be useful for various numerical purposes. The following norms can be calculated:
ord norm for matrices norm for vectors
None Frobenius norm 2-norm
‘fro’ Frobenius norm –
‘nuc’ nuclear norm –
inf max(sum(abs(x), axis=1)) max(abs(x))
-inf min(sum(abs(x), axis=1)) min(abs(x))
0 – sum(x != 0)
1 max(sum(abs(x), axis=0)) as below
-1 min(sum(abs(x), axis=0)) as below
2 2-norm (largest sing. value) as below
-2 smallest singular value as below
other – sum(abs(x)**ord)**(1./ord) The Frobenius norm is given by [1]: \(||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2}\) The nuclear norm is the sum of the singular values. Both the Frobenius and nuclear norm orders are only defined for matrices and raise a ValueError when x.ndim != 2. References 1
G. H. Golub and C. F. Van Loan, Matrix Computations, Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15 Examples >>> from numpy import linalg as LA
>>> a = np.arange(9) - 4
>>> a
array([-4, -3, -2, ..., 2, 3, 4])
>>> b = a.reshape((3, 3))
>>> b
array([[-4, -3, -2],
[-1, 0, 1],
[ 2, 3, 4]])
>>> LA.norm(a)
7.745966692414834
>>> LA.norm(b)
7.745966692414834
>>> LA.norm(b, 'fro')
7.745966692414834
>>> LA.norm(a, np.inf)
4.0
>>> LA.norm(b, np.inf)
9.0
>>> LA.norm(a, -np.inf)
0.0
>>> LA.norm(b, -np.inf)
2.0
>>> LA.norm(a, 1)
20.0
>>> LA.norm(b, 1)
7.0
>>> LA.norm(a, -1)
-4.6566128774142013e-010
>>> LA.norm(b, -1)
6.0
>>> LA.norm(a, 2)
7.745966692414834
>>> LA.norm(b, 2)
7.3484692283495345
>>> LA.norm(a, -2)
0.0
>>> LA.norm(b, -2)
1.8570331885190563e-016 # may vary
>>> LA.norm(a, 3)
5.8480354764257312 # may vary
>>> LA.norm(a, -3)
0.0
Using the axis argument to compute vector norms: >>> c = np.array([[ 1, 2, 3],
... [-1, 1, 4]])
>>> LA.norm(c, axis=0)
array([ 1.41421356, 2.23606798, 5. ])
>>> LA.norm(c, axis=1)
array([ 3.74165739, 4.24264069])
>>> LA.norm(c, ord=1, axis=1)
array([ 6., 6.])
Using the axis argument to compute matrix norms: >>> m = np.arange(8).reshape(2,2,2)
>>> LA.norm(m, axis=(1,2))
array([ 3.74165739, 11.22497216])
>>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])
(3.7416573867739413, 11.224972160321824) | |
doc_25163 |
Fit all transformers, transform the data and concatenate results. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
Input data, of which specified subsets are used to fit the transformers.
yarray-like of shape (n_samples,), default=None
Targets for supervised learning. Returns
X_t{array-like, sparse matrix} of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices. | |
doc_25164 | Exception failing because of RFC 2109 invalidity: incorrect attributes, incorrect Set-Cookie header, etc. | |
doc_25165 | sklearn.metrics.recall_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source]
Compute the recall. The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0. Read more in the User Guide. Parameters
y_true1d array-like, or label indicator array / sparse matrix
Ground truth (correct) target values.
y_pred1d array-like, or label indicator array / sparse matrix
Estimated targets as returned by a classifier.
labelsarray-like, default=None
The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem.
pos_labelstr or int, default=1
The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only.
average{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} default=’binary’
This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
'binary':
Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.
'micro':
Calculate metrics globally by counting the total true positives, false negatives and false positives.
'macro':
Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
'weighted':
Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.
'samples':
Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).
sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
zero_division“warn”, 0 or 1, default=”warn”
Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised. Returns
recallfloat (if average is not None) or array of float of shape
(n_unique_labels,) Recall of the positive class in binary classification or weighted average of the recall of each class for the multiclass task. See also
precision_recall_fscore_support, balanced_accuracy_score
multilabel_confusion_matrix
Notes When true positive + false negative == 0, recall returns 0 and raises UndefinedMetricWarning. This behavior can be modified with zero_division. Examples >>> from sklearn.metrics import recall_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> recall_score(y_true, y_pred, average='macro')
0.33...
>>> recall_score(y_true, y_pred, average='micro')
0.33...
>>> recall_score(y_true, y_pred, average='weighted')
0.33...
>>> recall_score(y_true, y_pred, average=None)
array([1., 0., 0.])
>>> y_true = [0, 0, 0, 0, 0, 0]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 0. , 0. ])
>>> recall_score(y_true, y_pred, average=None, zero_division=1)
array([0.5, 1. , 1. ])
Examples using sklearn.metrics.recall_score
Probability Calibration curves
Precision-Recall | |
doc_25166 |
Multiply one Chebyshev series by another. Returns the product of two Chebyshev series c1 * c2. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series T_0 + 2*T_1 + 3*T_2. Parameters
c1, c2array_like
1-D arrays of Chebyshev series coefficients ordered from low to high. Returns
outndarray
Of Chebyshev series coefficients representing their product. See also
chebadd, chebsub, chebmulx, chebdiv, chebpow
Notes In general, the (polynomial) product of two C-series results in terms that are not in the Chebyshev polynomial basis set. Thus, to express the product as a C-series, it is typically necessary to “reproject” the product onto said basis set, which typically produces “unintuitive” (but correct) results; see Examples section below. Examples >>> from numpy.polynomial import chebyshev as C
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> C.chebmul(c1,c2) # multiplication requires "reprojection"
array([ 6.5, 12. , 12. , 4. , 1.5]) | |
doc_25167 | tf.compat.v1.get_variable_scope() | |
doc_25168 | In-place version of lcm() | |
doc_25169 |
Set a label that will be displayed in the legend. Parameters
sobject
s will be converted to a string by calling str. | |
doc_25170 |
Return the artist's zorder. | |
doc_25171 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_25172 |
Set and validate the parameters of estimator. Parameters
**kwargsdict
Estimator parameters. Returns
selfobject
Estimator instance. | |
doc_25173 | stat.FILE_ATTRIBUTE_COMPRESSED
stat.FILE_ATTRIBUTE_DEVICE
stat.FILE_ATTRIBUTE_DIRECTORY
stat.FILE_ATTRIBUTE_ENCRYPTED
stat.FILE_ATTRIBUTE_HIDDEN
stat.FILE_ATTRIBUTE_INTEGRITY_STREAM
stat.FILE_ATTRIBUTE_NORMAL
stat.FILE_ATTRIBUTE_NOT_CONTENT_INDEXED
stat.FILE_ATTRIBUTE_NO_SCRUB_DATA
stat.FILE_ATTRIBUTE_OFFLINE
stat.FILE_ATTRIBUTE_READONLY
stat.FILE_ATTRIBUTE_REPARSE_POINT
stat.FILE_ATTRIBUTE_SPARSE_FILE
stat.FILE_ATTRIBUTE_SYSTEM
stat.FILE_ATTRIBUTE_TEMPORARY
stat.FILE_ATTRIBUTE_VIRTUAL
New in version 3.5. | |
doc_25174 | The Forms API Bound and unbound forms Using forms to validate data Initial form values Checking which form data has changed Accessing the fields from the form Accessing “clean” data Outputting forms as HTML More granular output Customizing BoundField Binding uploaded files to a form Subclassing forms Prefixes for forms
Form fields Core field arguments Checking if the field data has changed Built-in Field classes Slightly complex built-in Field classes Fields which handle relationships Creating custom fields
Model Form Functions modelform_factory modelformset_factory inlineformset_factory
Formset Functions formset_factory
The form rendering API The low-level render API Built-in-template form renderers Context available in formset templates Context available in form templates Context available in widget templates Overriding built-in formset templates Overriding built-in form templates Overriding built-in widget templates
Widgets Specifying widgets Setting arguments for widgets Widgets inheriting from the Select widget Customizing widget instances Base widget classes Built-in widgets
Form and field validation Raising ValidationError Using validation in practice | |
doc_25175 |
alias of matplotlib.backends.backend_template.FigureCanvasTemplate | |
doc_25176 |
Adjust the Axes for a specified data aspect ratio. Depending on get_adjustable this will modify either the Axes box (position) or the view limits. In the former case, get_anchor will affect the position. See also matplotlib.axes.Axes.set_aspect
For a description of aspect ratio handling. matplotlib.axes.Axes.set_adjustable
Set how the Axes adjusts to achieve the required aspect ratio. matplotlib.axes.Axes.set_anchor
Set the position in case of extra space. Notes This is called automatically when each Axes is drawn. You may need to call it yourself if you need to update the Axes position and/or view limits before the Figure is drawn. | |
doc_25177 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_25178 | This is the base class for all registered handlers — and handles only the simple mechanics of registration. | |
doc_25179 |
Given the location and size of the box, return the path of the box around it. Parameters
x0, y0, width, heightfloat
Location and size of the box.
mutation_sizefloat
A reference scale for the mutation. Returns
Path | |
doc_25180 |
Set the current Axes to be a and return a. | |
doc_25181 | See Migration guide for more details. tf.compat.v1.raw_ops.DeleteRandomSeedGenerator
tf.raw_ops.DeleteRandomSeedGenerator(
handle, deleter, name=None
)
Args
handle A Tensor of type resource.
deleter A Tensor of type variant.
name A name for the operation (optional).
Returns The created Operation. | |
doc_25182 |
Evaluate a 2-D Laguerre series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * L_i(x) * L_j(y)\] The parameters x and y are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion. In either case, either x and y or their elements must support multiplication and addition both with themselves and with the elements of c. If c is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters
x, yarray_like, compatible objects
The two dimensional series is evaluated at the points (x, y), where x and y must have the same shape. If x or y is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar.
carray_like
Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in c[i,j]. If c has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns
valuesndarray, compatible object
The values of the two dimensional polynomial at points formed with pairs of corresponding values from x and y. See also
lagval, laggrid2d, lagval3d, laggrid3d
Notes New in version 1.7.0. | |
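A small usage sketch checking the definition above against a manual double sum over the basis functions:

```python
import numpy as np
from numpy.polynomial.laguerre import lagval, lagval2d

# c[i, j] holds the coefficient of L_i(x) * L_j(y)
c = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x, y = 0.5, 1.5

value = lagval2d(x, y, c)

# Manual expansion: sum_{i,j} c[i, j] * L_i(x) * L_j(y), where
# lagval(x, [0]*i + [1]) evaluates the single basis function L_i(x).
manual = sum(
    c[i, j] * lagval(x, [0] * i + [1]) * lagval(y, [0] * j + [1])
    for i in range(2) for j in range(2)
)
print(value, manual)
```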
doc_25183 |
Creates a SummaryWriter that will write out events and summaries to the event file. Parameters
log_dir (string) – Save directory location. Default is runs/CURRENT_DATETIME_HOSTNAME, which changes after each run. Use hierarchical folder structure to compare between runs easily. e.g. pass in ‘runs/exp1’, ‘runs/exp2’, etc. for each new experiment to compare across them.
comment (string) – Comment log_dir suffix appended to the default log_dir. If log_dir is assigned, this argument has no effect.
purge_step (int) – When logging crashes at step T+X and restarts at step T, any events whose global_step is larger than or equal to T will be purged and hidden from TensorBoard. Note that crashed and resumed experiments should have the same log_dir.
max_queue (int) – Size of the queue for pending events and summaries before one of the ‘add’ calls forces a flush to disk. Default is ten items.
flush_secs (int) – How often, in seconds, to flush the pending events and summaries to disk. Default is every two minutes.
filename_suffix (string) – Suffix added to all event filenames in the log_dir directory. More details on filename construction in tensorboard.summary.writer.event_file_writer.EventFileWriter. Examples: from torch.utils.tensorboard import SummaryWriter
# create a summary writer with automatically generated folder name.
writer = SummaryWriter()
# folder location: runs/May04_22-14-54_s-MacBook-Pro.local/
# create a summary writer using the specified folder name.
writer = SummaryWriter("my_experiment")
# folder location: my_experiment
# create a summary writer with comment appended.
writer = SummaryWriter(comment="LR_0.1_BATCH_16")
# folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/ | |
doc_25184 | See Migration guide for more details. tf.compat.v1.raw_ops.StringNGrams
tf.raw_ops.StringNGrams(
data, data_splits, separator, ngram_widths, left_pad, right_pad, pad_width,
preserve_short_sequences, name=None
)
This op accepts a ragged tensor with 1 ragged dimension containing only strings and outputs a ragged tensor with 1 ragged dimension containing ngrams of that string, joined along the innermost axis.
Args
data A Tensor of type string. The values tensor of the ragged string tensor to make ngrams out of. Must be a 1D string tensor.
data_splits A Tensor. Must be one of the following types: int32, int64. The splits tensor of the ragged string tensor to make ngrams out of.
separator A string. The string to append between elements of the token. Use "" for no separator.
ngram_widths A list of ints. The sizes of the ngrams to create.
left_pad A string. The string to use to pad the left side of the ngram sequence. Only used if pad_width != 0.
right_pad A string. The string to use to pad the right side of the ngram sequence. Only used if pad_width != 0.
pad_width An int. The number of padding elements to add to each side of each sequence. Note that padding will never be greater than 'ngram_widths'-1 regardless of this value. If pad_width=-1, then add max(ngram_widths)-1 elements.
preserve_short_sequences A bool.
name A name for the operation (optional).
Returns A tuple of Tensor objects (ngrams, ngrams_splits). ngrams A Tensor of type string.
ngrams_splits A Tensor. Has the same type as data_splits.
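The windowing and padding behavior described above can be mimicked in plain Python for a single sequence and a single ngram width (a hedged sketch, not the op's actual implementation; the helper name `string_ngrams` is invented):

```python
def string_ngrams(tokens, width, separator=" ", left_pad="", right_pad="", pad_width=0):
    """Sketch of one ngram width of tf.raw_ops.StringNGrams for one sequence.

    Padding is clamped to width - 1, matching the documented pad_width
    behavior; pad_width=-1 means "maximal" padding of width - 1 elements.
    """
    pad = width - 1 if pad_width == -1 else min(pad_width, width - 1)
    padded = [left_pad] * pad + list(tokens) + [right_pad] * pad
    # Slide a window of `width` tokens and join each window with the separator.
    return [separator.join(padded[i:i + width])
            for i in range(len(padded) - width + 1)]

print(string_ngrams(["a", "b", "c"], width=2))
# bigrams, no padding: ['a b', 'b c']
print(string_ngrams(["a", "b", "c"], width=3, left_pad="LP", right_pad="RP", pad_width=-1))
# maximal padding: ['LP LP a', 'LP a b', 'a b c', 'b c RP', 'c RP RP']
```

The real op additionally takes the ragged `data_splits` tensor so it can apply this per-sequence logic across a whole batch at once.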
doc_25185 | class sklearn.linear_model.LarsCV(*, fit_intercept=True, verbose=False, max_iter=500, normalize=True, precompute='auto', cv=None, max_n_alphas=1000, n_jobs=None, eps=2.220446049250313e-16, copy_X=True) [source]
Cross-validated Least Angle Regression model. See glossary entry for cross-validation estimator. Read more in the User Guide. Parameters
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
verbosebool or int, default=False
Sets the verbosity amount.
max_iterint, default=500
Maximum number of iterations to perform.
normalizebool, default=True
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
precomputebool, ‘auto’ or array-like , default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix cannot be passed as argument since we will use only subsets of X.
cvint, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, integer, to specify the number of folds.
CV splitter, An iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
max_n_alphasint, default=1000
The maximum number of points on the path used to compute the residuals in the cross-validation
n_jobsint or None, default=None
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten. Attributes
active_list of length n_alphas or list of such lists
Indices of active variables at the end of the path. If this is a list of lists, the outer list length is n_targets.
coef_array-like of shape (n_features,)
parameter vector (w in the formulation formula)
intercept_float
independent term in decision function
coef_path_array-like of shape (n_features, n_alphas)
the varying values of the coefficients along the path
alpha_float
the estimated regularization parameter alpha
alphas_array-like of shape (n_alphas,)
the different values of alpha along the path
cv_alphas_array-like of shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
mse_path_array-like of shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path (alpha values given by cv_alphas)
n_iter_array-like or int
the number of iterations run by Lars with the optimal alpha. See also
lars_path, LassoLars, LassoLarsCV
Examples >>> from sklearn.linear_model import LarsCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_samples=200, noise=4.0, random_state=0)
>>> reg = LarsCV(cv=5).fit(X, y)
>>> reg.score(X, y)
0.9996...
>>> reg.alpha_
0.0254...
>>> reg.predict(X[:1,])
array([154.0842...])
Methods
fit(X, y) Fit the model using X, y as training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit the model using X, y as training data. Parameters
Xarray-like of shape (n_samples, n_features)
Training data.
yarray-like of shape (n_samples,)
Target values. Returns
selfobject
returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
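The \(R^2\) definition above can be checked with a small pure-Python computation (an illustrative sketch; the function name is mine, not scikit-learn's):

```python
def r2_score_simple(y_true, y_pred):
    """R^2 = 1 - u/v, with u the residual and v the total sum of squares."""
    mean = sum(y_true) / len(y_true)
    u = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    v = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1 - u / v

y = [1.0, 2.0, 3.0, 4.0]
print(r2_score_simple(y, [2.5] * 4))  # constant model predicting the mean -> 0.0
print(r2_score_simple(y, y))          # perfect predictions -> 1.0
```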
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
doc_25186 | An integer specifying the number of “overflow” objects the last page can contain. By default this returns the value of paginate_orphans.
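The effect of an orphan threshold on pagination can be sketched in plain Python (mirroring the page-count logic Django's Paginator documents, where up to `orphans` leftover objects are merged into the previous page; the helper name `num_pages` is illustrative):

```python
from math import ceil

def num_pages(count, per_page, orphans=0):
    """Pages needed when a final page holding <= orphans items is merged away."""
    if count == 0:
        return 1
    # Ignore up to `orphans` trailing objects when computing the page count.
    hits = max(1, count - orphans)
    return ceil(hits / per_page)

print(num_pages(23, 10, orphans=0))  # 3 pages: 10 + 10 + 3
print(num_pages(23, 10, orphans=3))  # 2 pages: the 3 leftovers join page 2
```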
doc_25187 | get the bytes used per Surface pixel get_bytesize() -> int Return the number of bytes used per pixel.
doc_25188 | tf.compat.v1.strings.split(
input=None, sep=None, maxsplit=-1, result_type='SparseTensor',
source=None, name=None
)
Let N be the size of input (typically N will be the batch size). Split each element of input based on sep and return a SparseTensor or RaggedTensor containing the split tokens. Empty tokens are ignored. Examples:
print(tf.compat.v1.strings.split(['hello world', 'a b c']))
SparseTensor(indices=tf.Tensor( [[0 0] [0 1] [1 0] [1 1] [1 2]], ...),
values=tf.Tensor([b'hello' b'world' b'a' b'b' b'c'], ...),
dense_shape=tf.Tensor([2 3], shape=(2,), dtype=int64))
print(tf.compat.v1.strings.split(['hello world', 'a b c'],
result_type="RaggedTensor"))
<tf.RaggedTensor [[b'hello', b'world'], [b'a', b'b', b'c']]>
If sep is given, consecutive delimiters are not grouped together and are deemed to delimit empty strings. For example, input of "1<>2<><>3" and sep of "<>" returns ["1", "2", "", "3"]. If sep is None or an empty string, consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Note that the above mentioned behavior matches python's str.split.
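Because the behavior above matches Python's str.split, both cases can be demonstrated directly with plain strings:

```python
# With an explicit separator, consecutive delimiters yield empty strings.
print("1<>2<><>3".split("<>"))     # ['1', '2', '', '3']

# With sep=None, runs of whitespace collapse into a single separator and
# leading/trailing whitespace produces no empty strings.
print("  hello   world ".split())  # ['hello', 'world']
```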
Args
input A string Tensor of rank N, the strings to split. If rank(input) is not known statically, then it is assumed to be 1.
sep 0-D string Tensor, the delimiter character.
maxsplit An int. If maxsplit > 0, limit of the split of the result.
result_type The tensor type for the result: one of "RaggedTensor" or "SparseTensor".
source alias for "input" argument.
name A name for the operation (optional).
Raises
ValueError If sep is not a string.
Returns A SparseTensor or RaggedTensor of rank N+1, the strings split according to the delimiter.
doc_25189 |
Set whether the artist uses clipping. When False artists will be visible outside of the axes which can lead to unexpected results. Parameters
bbool
doc_25190 |
Return width, height, xdescent, ydescent of box.
doc_25191 |
Alias for set_linestyle.
doc_25192 |
Write object to an Excel sheet. To write a single object to an Excel .xlsx file it is only necessary to specify a target file name. To write to multiple sheets it is necessary to create an ExcelWriter object with a target file name, and specify a sheet in the file to write to. Multiple sheets may be written to by specifying unique sheet_name. With all data written to the file it is necessary to save the changes. Note that creating an ExcelWriter object with a file name that already exists will result in the contents of the existing file being erased. Parameters
excel_writer:path-like, file-like, or ExcelWriter object
File path or existing ExcelWriter.
sheet_name:str, default ‘Sheet1’
Name of sheet which will contain DataFrame.
na_rep:str, default ‘’
Missing data representation.
float_format:str, optional
Format string for floating point numbers. For example float_format="%.2f" will format 0.1234 to 0.12.
columns:sequence or list of str, optional
Columns to write.
header:bool or list of str, default True
Write out the column names. If a list of string is given it is assumed to be aliases for the column names.
index:bool, default True
Write row names (index).
index_label:str or sequence, optional
Column label for index column(s) if desired. If not specified, and header and index are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex.
startrow:int, default 0
Upper left cell row to dump data frame.
startcol:int, default 0
Upper left cell column to dump data frame.
engine:str, optional
Write engine to use, ‘openpyxl’ or ‘xlsxwriter’. You can also set this via the options io.excel.xlsx.writer, io.excel.xls.writer, and io.excel.xlsm.writer. Deprecated since version 1.2.0: As the xlwt package is no longer maintained, the xlwt engine will be removed in a future version of pandas.
merge_cells:bool, default True
Write MultiIndex and Hierarchical Rows as merged cells.
encoding:str, optional
Encoding of the resulting excel file. Only necessary for xlwt, other writers support unicode natively.
inf_rep:str, default ‘inf’
Representation for infinity (there is no native representation for infinity in Excel).
verbose:bool, default True
Display more information in the error logs.
freeze_panes:tuple of int (length 2), optional
Specifies the one-based bottommost row and rightmost column that is to be frozen.
storage_options:dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib as header options. For other URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are forwarded to fsspec. Please see fsspec and urllib for more details. New in version 1.2.0. See also to_csv
Write DataFrame to a comma-separated values (csv) file. ExcelWriter
Class for writing DataFrame objects into excel sheets. read_excel
Read an Excel file into a pandas DataFrame. read_csv
Read a comma-separated values (csv) file into DataFrame. Notes For compatibility with to_csv(), to_excel serializes lists and dicts to strings before writing. Once a workbook has been saved it is not possible to write further data without rewriting the whole workbook. Examples Create, write to and save a workbook:
>>> df1 = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
>>> df1.to_excel("output.xlsx")
To specify the sheet name:
>>> df1.to_excel("output.xlsx",
... sheet_name='Sheet_name_1')
If you wish to write to more than one sheet in the workbook, it is necessary to specify an ExcelWriter object:
>>> df2 = df1.copy()
>>> with pd.ExcelWriter('output.xlsx') as writer:
... df1.to_excel(writer, sheet_name='Sheet_name_1')
... df2.to_excel(writer, sheet_name='Sheet_name_2')
ExcelWriter can also be used to append to an existing Excel file:
>>> with pd.ExcelWriter('output.xlsx',
... mode='a') as writer:
... df.to_excel(writer, sheet_name='Sheet_name_3')
To set the library that is used to write the Excel file, you can pass the engine keyword (the default engine is automatically chosen depending on the file extension):
>>> df1.to_excel('output1.xlsx', engine='xlsxwriter')
doc_25193 |
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
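The fit/transform contract described above can be sketched with a toy transformer (an illustration of the pattern, not scikit-learn's implementation; `ToyScaler` is an invented name):

```python
class ToyScaler:
    """Minimal transformer: fit learns per-column means, transform subtracts them."""

    def fit(self, X, y=None):
        n = len(X)
        self.means_ = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
        return self  # returning self enables chaining, as in scikit-learn

    def transform(self, X):
        return [[x - m for x, m in zip(row, self.means_)] for row in X]

    def fit_transform(self, X, y=None, **fit_params):
        # Equivalent to fit(X, y).transform(X).
        return self.fit(X, y).transform(X)

print(ToyScaler().fit_transform([[1.0, 10.0], [3.0, 30.0]]))
# [[-1.0, -10.0], [1.0, 10.0]]
```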
doc_25194 |
Unsigned integer type, compatible with C unsigned char. Character code
'B' Alias on this platform (Linux x86_64)
numpy.uint8: 8-bit unsigned integer (0 to 255).
doc_25195 | tf.image.is_jpeg Compat aliases for migration See Migration guide for more details. tf.compat.v1.image.is_jpeg, tf.compat.v1.io.is_jpeg
tf.io.is_jpeg(
contents, name=None
)
Args
contents 0-D string. The encoded image bytes.
name A name for the operation (optional)
Returns A scalar boolean tensor indicating if 'contents' may be a JPEG image. is_jpeg is susceptible to false positives.
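Such a check essentially inspects the JPEG magic bytes at the start of the buffer; a plain-Python sketch of the same idea (`looks_like_jpeg` is an invented helper, and like is_jpeg it can report false positives, since any buffer starting with these bytes passes):

```python
def looks_like_jpeg(contents: bytes) -> bool:
    """JPEG streams begin with the SOI marker FF D8 followed by FF."""
    return contents[:3] == b"\xff\xd8\xff"

print(looks_like_jpeg(b"\xff\xd8\xff\xe0" + b"\x00" * 4))  # True
print(looks_like_jpeg(b"\x89PNG\r\n\x1a\n"))               # False
```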
doc_25196 |
Bases: matplotlib.ticker.Locator Dynamically find minor tick positions based on the positions of major ticks. The scale must be linear with major ticks evenly spaced. n is the number of subdivisions of the interval between major ticks; e.g., n=2 will place a single minor tick midway between major ticks. If n is omitted or None, it will be set to 5 or 4. tick_values(vmin, vmax)[source]
Return the values of the located ticks given vmin and vmax. Note To get tick locations with the vmin and vmax values defined automatically for the associated axis simply call the Locator instance: >>> print(type(loc))
<type 'Locator'>
>>> print(loc())
[1, 2, 3, 4]
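The subdivision rule can be sketched in plain Python: with evenly spaced major ticks and n subdivisions, minor ticks fall at the interior division points between each pair of majors (an illustration of the idea, not matplotlib's code; `minor_ticks` is an invented helper):

```python
def minor_ticks(majors, n):
    """Interior subdivision points between consecutive, evenly spaced major ticks."""
    step = (majors[1] - majors[0]) / n
    ticks = []
    for lo in majors[:-1]:
        # n - 1 minor ticks strictly between each pair of major ticks.
        ticks.extend(lo + k * step for k in range(1, n))
    return ticks

print(minor_ticks([0.0, 1.0, 2.0], n=2))  # [0.5, 1.5] -- one midway tick per gap
print(minor_ticks([0.0, 1.0], n=4))       # [0.25, 0.5, 0.75]
```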
doc_25197 |
Set the threshold for labelling minor ticks. Parameters
minor_thresholdint
Maximum number of locations for labelling some minor ticks. This parameter has no effect if minor is False.
doc_25198 |
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters
(for the __new__ method; see Notes below)
shapetuple of ints
Shape of created array.
dtypedata-type, optional
Any object that can be interpreted as a numpy data type.
bufferobject exposing buffer interface, optional
Used to fill the array with data.
offsetint, optional
Offset of array data in buffer.
stridestuple of ints, optional
Strides of data in memory.
order{‘C’, ‘F’}, optional
Row-major (C-style) or column-major (Fortran-style) order. See also array
Construct an array. zeros
Create an array, each element of which is zero. empty
Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype
Create a data-type. numpy.typing.NDArray
An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
[ nan, 2.5e-323]])
Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]),
... offset=np.int_().itemsize,
... dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
Attributes
Tndarray
Transpose of the array.
databuffer
The array’s elements, in memory.
dtypedtype object
Describes the format of the elements in the array.
flagsdict
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.
flatnumpy.flatiter object
Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO).
imagndarray
Imaginary part of the array.
realndarray
Real part of the array.
sizeint
Number of elements in the array.
itemsizeint
The memory use of each array element in bytes.
nbytesint
The total number of bytes required to store the array data, i.e., itemsize * size.
ndimint
The array’s number of dimensions.
shapetuple of ints
Shape of the array.
stridestuple of ints
The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4).
ctypesctypes object
Class containing properties of the array needed for interaction with ctypes.
basendarray
If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
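The C-order strides described above follow directly from the shape and the item size; a plain-Python sketch reproducing the (3, 4) int16 example (`c_order_strides` is an invented helper):

```python
def c_order_strides(shape, itemsize):
    """Bytes to step per axis for a contiguous row-major (C-order) array."""
    strides = []
    jump = itemsize
    # Walk axes from innermost to outermost, accumulating the byte jump.
    for dim in reversed(shape):
        strides.append(jump)
        jump *= dim
    return tuple(reversed(strides))

# A contiguous (3, 4) int16 array: 2 bytes between elements, 8 between rows.
print(c_order_strides((3, 4), itemsize=2))  # (8, 2)
```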
doc_25199 |
Imputes all missing values in X. Note that this is stochastic, and that if random_state is not fixed, repeated calls, or permuted input, will yield different results. Parameters
Xarray-like of shape (n_samples, n_features)
The input data to complete. Returns
Xtarray-like, shape (n_samples, n_features)
The imputed input data.
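Column-mean filling gives a simple stand-in for what an imputer's transform does (a toy sketch only; the actual estimator's imputation is model-based and stochastic, and `mean_impute` is an invented helper):

```python
def mean_impute(X, missing=None):
    """Replace `missing` entries with the mean of the observed values per column."""
    n_cols = len(X[0])
    means = []
    for j in range(n_cols):
        observed = [row[j] for row in X if row[j] is not missing]
        means.append(sum(observed) / len(observed))
    # Fill each missing cell with its column mean; leave observed cells alone.
    return [[means[j] if row[j] is missing else row[j] for j in range(n_cols)]
            for row in X]

print(mean_impute([[1.0, None], [3.0, 4.0], [None, 8.0]]))
# [[1.0, 6.0], [3.0, 4.0], [2.0, 8.0]]
```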