doc_3100
Alias for set_linewidth.
doc_3101
Return a normalized absolutized version of the pathname path. On most platforms, this is equivalent to calling the function normpath() as follows: normpath(join(os.getcwd(), path)). Changed in version 3.6: Accepts a path-like object.
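A brief doctest-style illustration of the equivalence described above (added for illustration, not part of the original docs; the paths are hypothetical): >>> import os, os.path >>> os.chdir('/usr/local') >>> os.path.abspath('bin/../lib') '/usr/local/lib' >>> os.path.normpath(os.path.join(os.getcwd(), 'bin/../lib')) '/usr/local/lib'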
doc_3102
The smallest year number allowed in a date or datetime object. MINYEAR is 1.
doc_3103
A floating-point “not a number” (NaN) value. Equivalent to the output of float('nan'). New in version 3.5.
doc_3104
Return the spine position.
doc_3105
Window resize signal. Availability: Unix.
doc_3106
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
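A minimal usage sketch (added for illustration; Ridge is an arbitrary example estimator): >>> from sklearn.linear_model import Ridge >>> est = Ridge(alpha=0.5) >>> est.get_params()['alpha'] 0.5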
doc_3107
tf.compat.v1.train.global_step( sess, global_step_tensor ) # Create a variable to hold the global_step. global_step_tensor = tf.Variable(10, trainable=False, name='global_step') # Create a session. sess = tf.compat.v1.Session() # Initialize the variable sess.run(global_step_tensor.initializer) # Get the variable value. print('global_step: %s' % tf.compat.v1.train.global_step(sess, global_step_tensor)) global_step: 10 Args sess A TensorFlow Session object. global_step_tensor Tensor or the name of the operation that contains the global step. Returns The global step value.
doc_3108
Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data
doc_3109
Get the transformation used for drawing x-axis labels, ticks and gridlines. The x-direction is in data coordinates and the y-direction is in axis coordinates. Note This transformation is primarily used by the Axis class, and is meant to be overridden by new kinds of projections that may need to place axis elements in different locations.
doc_3110
Set the children of the transform, to let the invalidation system know which transforms can invalidate this transform. Should be called from the constructor of any transforms that depend on other transforms.
doc_3111
A tuple of Group objects encoding the addresses and groups found in the header value. Addresses that are not part of a group are represented in this list as single-address Groups whose display_name is None.
doc_3112
See Migration guide for more details. tf.compat.v1.io.RaggedFeature.RowSplits tf.io.RaggedFeature.RowSplits( key ) Attributes key
doc_3113
Bases: matplotlib.collections.PolyCollection A collection of horizontal bars spanning yrange with a sequence of xranges. Parameters xrangeslist of (float, float) The sequence of (left-edge-position, width) pairs for each bar. yrange(float, float) The (lower-edge, height) common to all bars. **kwargs Forwarded to Collection. add_callback(func)[source] Add a callback function that will be called whenever one of the Artist's properties changes. Parameters funccallable The callback function. It must have the signature: def func(artist: Artist) -> Any where artist is the calling Artist. Return values may exist but are ignored. Returns int The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback autoscale()[source] Autoscale the scalar limits on the norm instance using the current array autoscale_None()[source] Autoscale the scalar limits on the norm instance using the current array, changing only limits that are None propertyaxes The Axes instance the artist resides in, or None. propertycallbacksSM[source] changed()[source] Call this whenever the mappable is changed to notify all the callbackSM listeners to the 'changed' signal. colorbar The last colorbar associated with this ScalarMappable. May be None. contains(mouseevent)[source] Test whether the mouse event occurred in the collection. Returns bool, dict(ind=itemlist), where every item in itemlist contains the event. convert_xunits(x)[source] Convert x using the unit type of the xaxis. If the artist is not contained in an Axes or if the xaxis does not have units, x itself is returned. convert_yunits(y)[source] Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned. draw(renderer)[source] Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters rendererRendererBase subclass. Notes This method is overridden in the Artist subclasses. findobj(match=None, include_self=True)[source] Find artist objects. Recursively find all Artist instances contained in the artist. Parameters match A filter criterion for the matches. This can be None: Return all objects contained in artist. A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True. A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check). include_selfbool Include self in the list to be checked for a match. Returns list of Artist format_cursor_data(data)[source] Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data get_agg_filter()[source] Return filter function to be used for agg filter. get_alpha()[source] Return the alpha value used for blending - not supported on all backends. get_animated()[source] Return whether the artist is animated. get_array()[source] Return the array of values that are mapped to colors. 
The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the array. get_capstyle()[source] get_children()[source] Return a list of the child Artists of this Artist. get_clim()[source] Return the values (min, max) that are mapped to the colormap limits. get_clip_box()[source] Return the clipbox. get_clip_on()[source] Return whether the artist uses clipping. get_clip_path()[source] Return the clip path. get_cmap()[source] Return the Colormap instance. get_cursor_data(event)[source] Return the cursor data for a given event. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. Cursor data can be used by Artists to provide additional context information for a given event. The default implementation just returns None. Subclasses can override the method and return arbitrary data. However, when doing so, they must ensure that format_cursor_data can convert the data to a string representation. The only current use case is displaying the z-value of an AxesImage in the status bar of a plot window, while moving the mouse. Parameters eventmatplotlib.backend_bases.MouseEvent See also format_cursor_data get_dashes()[source] Alias for get_linestyle. get_datalim(transData)[source] get_ec()[source] Alias for get_edgecolor. get_edgecolor()[source] get_edgecolors()[source] Alias for get_edgecolor. get_facecolor()[source] get_facecolors()[source] Alias for get_facecolor. get_fc()[source] Alias for get_facecolor. get_figure()[source] Return the Figure instance the artist belongs to. get_fill()[source] Return whether face is colored. get_gid()[source] Return the group id. get_hatch()[source] Return the current hatching pattern. get_in_layout()[source] Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). get_joinstyle()[source] get_label()[source] Return the label used for this artist in the legend. get_linestyle()[source] get_linestyles()[source] Alias for get_linestyle. get_linewidth()[source] get_linewidths()[source] Alias for get_linewidth. get_ls()[source] Alias for get_linestyle. get_lw()[source] Alias for get_linewidth. get_offset_transform()[source] Return the Transform instance used by this artist offset. get_offsets()[source] Return the offsets for the collection. get_path_effects()[source] get_paths()[source] get_picker()[source] Return the picking behavior of the artist. The possible values are described in set_picker. See also set_picker, pickable, pick get_pickradius()[source] get_rasterized()[source] Return whether the artist is to be rasterized. get_sizes()[source] Return the sizes ('areas') of the elements in the collection. Returns array The 'area' of each element. get_sketch_params()[source] Return the sketch parameters for the artist. Returns tuple or None A 3-tuple with the following elements: scale: The amplitude of the wiggle perpendicular to the source line. length: The length of the wiggle along the line. randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set. get_snap()[source] Return the snap setting. See set_snap for details. get_tightbbox(renderer)[source] Like Artist.get_window_extent, but includes any clipping. Parameters rendererRendererBase subclass renderer that will be used to draw the figures (i.e. 
fig.canvas.get_renderer()) Returns Bbox The enclosing bounding box (in figure pixel coordinates). get_transform()[source] Return the Transform instance used by this artist. get_transformed_clip_path_and_affine()[source] Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. get_transforms()[source] get_url()[source] Return the url. get_urls()[source] Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example. get_visible()[source] Return the visibility. get_window_extent(renderer)[source] Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function, the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. get_zorder()[source] Return the artist's zorder. have_units()[source] Return whether units are set on any axis. is_transform_set()[source] Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. propertymouseover If this property is set to True, the artist will be queried for custom context information when the mouse cursor moves over it. See also get_cursor_data(), ToolCursorPosition and NavigationToolbar2. propertynorm pchanged()[source] Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback remove_callback pick(mouseevent)[source] Process a pick event. Each child artist will fire a pick event if mouseevent is over the artist and the artist has picker set. See also set_picker, get_picker, pickable pickable()[source] Return whether the artist is pickable. See also set_picker, get_picker, pick properties()[source] Return a dictionary of all the properties of the artist. remove()[source] Remove the artist from the figure if possible. The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry. remove_callback(oid)[source] Remove a callback based on its observer id. See also add_callback set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, array=<UNSET>, capstyle=<UNSET>, clim=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, cmap=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, norm=<UNSET>, offset_transform=<UNSET>, offsets=<UNSET>, path_effects=<UNSET>, paths=<UNSET>, picker=<UNSET>, pickradius=<UNSET>, rasterized=<UNSET>, sizes=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, urls=<UNSET>, verts=<UNSET>, verts_and_codes=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. 
Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha array-like or scalar or None animated bool antialiased or aa or antialiaseds bool or list of bools array array-like or None capstyle CapStyle or {'butt', 'projecting', 'round'} clim (vmin: float, vmax: float) clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None cmap Colormap or str or None color color or list of rgba tuples edgecolor or ec or edgecolors color or list of colors or 'face' facecolor or facecolors or fc color or list of colors figure Figure gid str hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} in_layout bool joinstyle JoinStyle or {'miter', 'round', 'bevel'} label object linestyle or dashes or linestyles or ls str or tuple or list thereof linewidth or linewidths or lw float or list of floats norm Normalize or None offset_transform Transform offsets (N, 2) or (2,) array-like path_effects AbstractPathEffect paths list of array-like picker None or bool or float or callable pickradius float rasterized bool sizes ndarray or None sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str urls list of str or None verts list of array-like verts_and_codes unknown visible bool zorder float set_aa(aa)[source] Alias for set_antialiased. set_agg_filter(filter_func)[source] Set the agg filter. Parameters filter_funccallable A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array. set_alpha(alpha)[source] Set the alpha value used for blending - not supported on all backends. Parameters alphaarray-like or scalar or None All values must be within the 0-1 range, inclusive. Masked values and nans are not supported. set_animated(b)[source] Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters bbool set_antialiased(aa)[source] Set the antialiasing state for rendering. Parameters aabool or list of bools set_antialiaseds(aa)[source] Alias for set_antialiased. set_array(A)[source] Set the value array from array-like A. Parameters Aarray-like or None The values that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the value array A. set_capstyle(cs)[source] Set the CapStyle for the collection (for all its elements). Parameters csCapStyle or {'butt', 'projecting', 'round'} set_clim(vmin=None, vmax=None)[source] Set the norm limits for image scaling. Parameters vmin, vmaxfloat The limits. The limits may also be passed as a tuple (vmin, vmax) as a single positional argument. set_clip_box(clipbox)[source] Set the artist's clip Bbox. Parameters clipboxBbox set_clip_on(b)[source] Set whether the artist uses clipping. When False artists will be visible outside of the axes which can lead to unexpected results. Parameters bbool set_clip_path(path, transform=None)[source] Set the artist's clip path. Parameters pathPatch or Path or TransformedPath or None The clip path. If given a Path, transform must be provided as well. If None, a previously set clip path is removed. 
transformTransform, optional Only used if path is a Path, in which case the given Path is converted to a TransformedPath using transform. Notes For efficiency, if path is a Rectangle this method will set the clipping box to the corresponding rectangle and set the clipping path to None. For technical reasons (support of set), a tuple (path, transform) is also accepted as a single positional parameter. set_cmap(cmap)[source] Set the colormap for luminance data. Parameters cmapColormap or str or None set_color(c)[source] Set both the edgecolor and the facecolor. Parameters ccolor or list of rgba tuples See also Collection.set_facecolor, Collection.set_edgecolor For setting the edge or face color individually. set_dashes(ls)[source] Alias for set_linestyle. set_ec(c)[source] Alias for set_edgecolor. set_edgecolor(c)[source] Set the edgecolor(s) of the collection. Parameters ccolor or list of colors or 'face' The collection edgecolor(s). If a sequence, the patches cycle through it. If 'face', match the facecolor. set_edgecolors(c)[source] Alias for set_edgecolor. set_facecolor(c)[source] Set the facecolor(s) of the collection. c can be a color (all patches have same color), or a sequence of colors; if it is a sequence the patches will cycle through the sequence. If c is 'none', the patch will not be filled. Parameters ccolor or list of colors set_facecolors(c)[source] Alias for set_facecolor. set_fc(c)[source] Alias for set_facecolor. set_figure(fig)[source] Set the Figure instance the artist belongs to. Parameters figFigure set_gid(gid)[source] Set the (group) id for the artist. Parameters gidstr set_hatch(hatch)[source] Set the hatching pattern hatch can be one of: / - diagonal hatching \ - back diagonal | - vertical - - horizontal + - crossed x - crossed diagonal o - small circle O - large circle . - dots * - stars Letters can be combined, in which case all the specified hatchings are done. If same letter repeats, it increases the density of hatching of that pattern. Hatching is supported in the PostScript, PDF, SVG and Agg backends only. Unlike other properties such as linewidth and colors, hatching can only be specified for the collection as a whole, not separately for each member. Parameters hatch{'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} set_in_layout(in_layout)[source] Set if artist is to be included in layout calculations, E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters in_layoutbool set_joinstyle(js)[source] Set the JoinStyle for the collection (for all its elements). Parameters jsJoinStyle or {'miter', 'round', 'bevel'} set_label(s)[source] Set a label that will be displayed in the legend. Parameters sobject s will be converted to a string by calling str. set_linestyle(ls)[source] Set the linestyle(s) for the collection. linestyle description '-' or 'solid' solid line '--' or 'dashed' dashed line '-.' or 'dashdot' dash-dotted line ':' or 'dotted' dotted line Alternatively a dash tuple of the following form can be provided: (offset, onoffseq), where onoffseq is an even length tuple of on and off ink in points. Parameters lsstr or tuple or list thereof Valid values for individual linestyles include {'-', '--', '-.', ':', '', (offset, on-off-seq)}. See Line2D.set_linestyle for a complete description. set_linestyles(ls)[source] Alias for set_linestyle. set_linewidth(lw)[source] Set the linewidth(s) for the collection. 
lw can be a scalar or a sequence; if it is a sequence the patches will cycle through the sequence Parameters lwfloat or list of floats set_linewidths(lw)[source] Alias for set_linewidth. set_ls(ls)[source] Alias for set_linestyle. set_lw(lw)[source] Alias for set_linewidth. set_norm(norm)[source] Set the normalization instance. Parameters normNormalize or None Notes If there are any colorbars using the mappable for this norm, setting the norm of the mappable will reset the norm, locator, and formatters on the colorbar to default. set_offset_transform(transOffset)[source] Set the artist offset transform. Parameters transOffsetTransform set_offsets(offsets)[source] Set the offsets for the collection. Parameters offsets(N, 2) or (2,) array-like set_path_effects(path_effects)[source] Set the path effects. Parameters path_effectsAbstractPathEffect set_paths(verts, closed=True)[source] Set the vertices of the polygons. Parameters vertslist of array-like The sequence of polygons [verts0, verts1, ...] where each element verts_i defines the vertices of polygon i as a 2D array-like of shape (M, 2). closedbool, default: True Whether the polygon should be closed by adding a CLOSEPOLY connection at the end. set_picker(picker)[source] Define the picking behavior of the artist. Parameters pickerNone or bool or float or callable This can be one of the following: None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent) to determine the hit test. if the mouse event is over the artist, return hit=True and props is a dictionary of properties you want added to the PickEvent attributes. set_pickradius(pr)[source] Set the pick radius used for containment tests. Parameters prfloat Pick radius, in points. set_rasterized(rasterized)[source] Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters rasterizedbool set_sizes(sizes, dpi=72.0)[source] Set the sizes of each member of the collection. Parameters sizesndarray or None The size to set for each element of the collection. The value is the 'area' of the element. dpifloat, default: 72 The dpi of the canvas. set_sketch_params(scale=None, length=None, randomness=None)[source] Set the sketch parameters. Parameters scalefloat, optional The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided. lengthfloat, optional The length of the wiggle along the line, in pixels (default 128.0) randomnessfloat, optional The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. 
Using the same seed yields the same random shape. set_snap(snap)[source] Set the snapping behavior. Snapping aligns positions with the pixel grid, which results in clearer images. For example, if a black line of 1px width was defined at a position in between two pixels, the resulting image would contain the interpolated value of that line in the pixel grid, which would be a grey value on both adjacent pixel positions. In contrast, snapping will move the line to the nearest integer pixel value, so that the resulting image will really contain a 1px wide black line. Snapping is currently only supported by the Agg and MacOSX backends. Parameters snapbool or None Possible values: True: Snap vertices to the nearest pixel center. False: Do not modify vertex positions. None: (auto) If the path contains only rectilinear line segments, round to the nearest pixel center. set_transform(t)[source] Set the artist transform. Parameters tTransform set_url(url)[source] Set the url for the artist. Parameters urlstr set_urls(urls)[source] Parameters urlslist of str or None Notes URLs are currently only implemented by the SVG backend. They are ignored by all other backends. set_verts(verts, closed=True)[source] Set the vertices of the polygons. Parameters vertslist of array-like The sequence of polygons [verts0, verts1, ...] where each element verts_i defines the vertices of polygon i as a 2D array-like of shape (M, 2). closedbool, default: True Whether the polygon should be closed by adding a CLOSEPOLY connection at the end. set_verts_and_codes(verts, codes)[source] Initialize vertices with path codes. set_visible(b)[source] Set the artist's visibility. Parameters bbool set_zorder(level)[source] Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters levelfloat classmethodspan_where(x, ymin, ymax, where, **kwargs)[source] Return a BrokenBarHCollection that plots horizontal bars from over the regions in x where where is True. The bars range on the y-axis from ymin to ymax kwargs are passed on to the collection. propertystale Whether the artist is 'stale' and needs to be re-drawn for the output to match the internal state of the artist. propertysticky_edges x and y sticky edge lists for autoscaling. When performing autoscaling, if a data limit coincides with a value in the corresponding sticky_edges list, then no margin will be added--the view limit "sticks" to the edge. A typical use case is histograms, where one usually expects no margin on the bottom edge (0) of the histogram. Moreover, margin expansion "bumps" against sticky edges and cannot cross them. For example, if the upper data limit is 1.0, the upper view limit computed by simple margin application is 1.2, but there is a sticky edge at 1.1, then the actual upper view limit will be 1.1. This attribute cannot be assigned to; however, the x and y lists can be modified in place as needed. Examples >>> artist.sticky_edges.x[:] = (xmin, xmax) >>> artist.sticky_edges.y[:] = (ymin, ymax) to_rgba(x, alpha=None, bytes=False, norm=True)[source] Return a normalized rgba array corresponding to x. In the normal case, x is a 1D or 2D sequence of scalars, and the corresponding ndarray of rgba values will be returned, based on the norm and colormap set for this ScalarMappable. There is one special case, for handling images that are already rgb or rgba, such as might have been read from an image file. 
If x is an ndarray with 3 dimensions, and the last dimension is either 3 or 4, then it will be treated as an rgb or rgba array, and no mapping will be done. The array can be uint8, or it can be floating point with values in the 0-1 range; otherwise a ValueError will be raised. If it is a masked array, the mask will be ignored. If the last dimension is 3, the alpha kwarg (defaulting to 1) will be used to fill in the transparency. If the last dimension is 4, the alpha kwarg is ignored; it does not replace the pre-existing alpha. A ValueError will be raised if the third dimension is other than 3 or 4. In either case, if bytes is False (default), the rgba array will be floats in the 0-1 range; if it is True, the returned rgba array will be uint8 in the 0 to 255 range. If norm is False, no normalization of the input data is performed, and it is assumed to be in the range (0-1). update(props)[source] Update this artist's properties from the dict props. Parameters propsdict update_from(other)[source] Copy properties from other to self. update_scalarmappable()[source] Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate. zorder=0
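A minimal usage sketch for the class documented above (added for illustration, not part of the original docs; the coordinates and color are arbitrary):
import matplotlib.pyplot as plt
from matplotlib.collections import BrokenBarHCollection

fig, ax = plt.subplots()
# Two bars: x from 1 to 3 and from 5 to 6, both spanning y in [10, 19].
col = BrokenBarHCollection(xranges=[(1, 2), (5, 1)], yrange=(10, 9),
                           facecolors='tab:blue')
ax.add_collection(col)
ax.set_xlim(0, 8)
ax.set_ylim(5, 25)
plt.show()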
doc_3114
Number of dimensions of the sub-array if this data type describes a sub-array, and 0 otherwise. New in version 1.13.0. Examples >>> x = np.dtype(float) >>> x.ndim 0 >>> x = np.dtype((float, 8)) >>> x.ndim 1 >>> x = np.dtype(('i4', (3, 4))) >>> x.ndim 2
doc_3115
Return the artist's zorder.
doc_3116
Block until the internal flag is true. If the internal flag is true on entry, return immediately. Otherwise, block until another thread calls set() to set the flag to true, or until the optional timeout occurs. When the timeout argument is present and not None, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). This method returns True if and only if the internal flag has been set to true, either before the wait call or after the wait starts, so it will always return True except if a timeout is given and the operation times out. Changed in version 3.1: Previously, the method always returned None.
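A short sketch of the blocking behavior described above (added for illustration):
import threading

evt = threading.Event()

def waiter():
    # Blocks for at most 2 seconds; returns True only if the flag was set.
    if evt.wait(timeout=2):
        print('flag set')
    else:
        print('timed out')

t = threading.Thread(target=waiter)
t.start()
evt.set()  # wakes the waiting thread, so wait() returns True
t.join()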
doc_3117
Add a callback function that will be called whenever one of the Artist's properties changes. Parameters funccallable The callback function. It must have the signature: def func(artist: Artist) -> Any where artist is the calling Artist. Return values may exist but are ignored. Returns int The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback
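A minimal sketch (added for illustration; assumes a Line2D artist, and that set_label is among the setters that notify observers via pchanged):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 1])

def on_change(artist):
    print('property changed on', artist)

oid = line.add_callback(on_change)
line.set_label('updated')   # triggers the callback
line.remove_callback(oid)   # detach using the returned observer id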
doc_3118
The Connectionist Temporal Classification loss. Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be ≤ the input length. Parameters blank (int, optional) – blank label. Default 0. reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken. Default: 'mean' zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Shape: Log_probs: Tensor of size (T, N, C), where T = input length, N = batch size, and C = number of classes (including blank). The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()). Targets: Tensor of size (N, S) or (sum(target_lengths)), where N = batch size and S = max target length if shape is (N, S). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default=0). In the (N, S) form, targets are padded to the length of the longest sequence, and stacked. In the (sum(target_lengths)) form, the targets are assumed to be un-padded and concatenated within 1 dimension. Input_lengths: Tuple or tensor of size (N), where N = batch size. It represents the lengths of the inputs (each must be ≤ T). The lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. Target_lengths: Tuple or tensor of size (N), where N = batch size. It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If target shape is (N, S), target_lengths are effectively the stop index s_n for each target sequence, such that target_n = targets[n,0:s_n] for each target in a batch. Lengths must each be ≤ S. If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor. Output: scalar. If reduction is 'none', then (N), where N = batch size. 
Examples: >>> # Target are to be padded >>> T = 50 # Input sequence length >>> C = 20 # Number of classes (including blank) >>> N = 16 # Batch size >>> S = 30 # Target sequence length of longest target in batch (padding length) >>> S_min = 10 # Minimum target length, for demonstration purposes >>> >>> # Initialize random batch of input vectors, for *size = (T,N,C) >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_() >>> >>> # Initialize random batch of targets (0 = blank, 1:C = classes) >>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long) >>> >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long) >>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long) >>> ctc_loss = nn.CTCLoss() >>> loss = ctc_loss(input, target, input_lengths, target_lengths) >>> loss.backward() >>> >>> >>> # Target are to be un-padded >>> T = 50 # Input sequence length >>> C = 20 # Number of classes (including blank) >>> N = 16 # Batch size >>> >>> # Initialize random batch of input vectors, for *size = (T,N,C) >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_() >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long) >>> >>> # Initialize random batch of targets (0 = blank, 1:C = classes) >>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long) >>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long) >>> ctc_loss = nn.CTCLoss() >>> loss = ctc_loss(input, target, input_lengths, target_lengths) >>> loss.backward() Reference: A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf Note In order to use CuDNN, the following must be satisfied: targets must be in concatenated format, all input_lengths must be T, blank=0, target_lengths ≤ 256, and the integer arguments must be of dtype torch.int32. The regular implementation uses the (more common in PyTorch) torch.long dtype. Note In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.
doc_3119
Test element-wise for NaN and return result as a boolean array. Parameters xarray_like Input array. outndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. wherearray_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs For other keyword-only arguments, see the ufunc docs. Returns yndarray or bool True where x is NaN, false otherwise. This is a scalar if x is a scalar. See also isinf, isneginf, isposinf, isfinite, isnat Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Examples >>> np.isnan(np.nan) True >>> np.isnan(np.inf) False >>> np.isnan([np.log(-1.),1.,np.log(0)]) array([ True, False, False])
doc_3120
A user updateable list of mappings. The list is ordered from first-searched to last-searched. It is the only stored state and can be modified to change which mappings are searched. The list should always contain at least one mapping.
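A short doctest-style sketch of reordering the search path in place (added for illustration): >>> from collections import ChainMap >>> cm = ChainMap({'a': 1}, {'a': 2, 'b': 3}) >>> cm['a'] # first-searched mapping wins 1 >>> cm.maps.reverse() # mutate the list to change the search order >>> cm['a'] 2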
doc_3121
Transform a sequence of documents to a document-term matrix. Parameters Xiterable over raw text documents, length = n_samples Samples. Each sample must be a text document (either bytes or unicode strings, file name or file object depending on the constructor argument) which will be tokenized and hashed. yany Ignored. This parameter exists only for compatibility with sklearn.pipeline.Pipeline. Returns Xsparse matrix of shape (n_samples, n_features) Document-term matrix.
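A minimal sketch with two toy documents (added for illustration): >>> from sklearn.feature_extraction.text import HashingVectorizer >>> docs = ['the quick brown fox', 'jumped over the lazy dog'] >>> X = HashingVectorizer(n_features=2**8).transform(docs) >>> X.shape (2, 256)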
doc_3122
See Migration guide for more details. tf.compat.v1.keras.activations.sigmoid tf.keras.activations.sigmoid( x ) Applies the sigmoid activation function. For small values (<-5), sigmoid returns a value close to zero, and for large values (>5) the result of the function gets close to 1. Sigmoid is equivalent to a 2-element Softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1. For example: a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32) b = tf.keras.activations.sigmoid(a) b.numpy() array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01, 1.0000000e+00], dtype=float32) Arguments x Input tensor. Returns Tensor with the sigmoid activation: 1 / (1 + exp(-x)).
doc_3123
tf.compat.v1.data.get_output_classes( dataset_or_iterator ) Args dataset_or_iterator A tf.data.Dataset or tf.data.Iterator. Returns A nested structure of Python type objects matching the structure of the dataset / iterator elements and specifying the class of the individual components.
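A small sketch (added for illustration; assumes TF 2.x with the v1 compat module available):
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
# Prints the component class, e.g. a Tensor class object.
print(tf.compat.v1.data.get_output_classes(dataset))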
doc_3124
Polls the set of registered file descriptors, and returns a possibly-empty list containing (fd, event) 2-tuples for the descriptors that have events or errors to report. fd is the file descriptor, and event is a bitmask with bits set for the reported events for that descriptor — POLLIN for waiting input, POLLOUT to indicate that the descriptor can be written to, and so forth. An empty list indicates that the call timed out and no file descriptors had any events to report. If timeout is given, it specifies the length of time in milliseconds which the system will wait for events before returning. If timeout is omitted, negative, or None, the call will block until there is an event for this poll object. Changed in version 3.5: The function is now retried with a recomputed timeout when interrupted by a signal, except if the signal handler raises an exception (see PEP 475 for the rationale), instead of raising InterruptedError.
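A minimal Unix-only sketch (added for illustration): register a listening socket and wait up to one second for readability.
import select
import socket

srv = socket.socket()
srv.bind(('localhost', 0))
srv.listen()

poller = select.poll()
poller.register(srv.fileno(), select.POLLIN)

# poll() returns [] on timeout, else (fd, event) pairs.
for fd, event in poller.poll(1000):
    if event & select.POLLIN:
        conn, addr = srv.accept()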
doc_3125
The test client class that is used when test_client is used. Changelog New in version 0.7.
doc_3126
See Migration guide for more details. tf.compat.v1.raw_ops.StatefulPartitionedCall tf.raw_ops.StatefulPartitionedCall( args, Tout, f, config='', config_proto='', executor_type='', name=None ) Args args A list of Tensor objects. A list of input tensors. Tout A list of tf.DTypes. A list of output types. f A function decorated with @Defun. A function that takes 'args', a list of tensors, and returns 'output', another list of tensors. Input and output types are specified by 'Tin' and 'Tout'. The function body of f will be placed and partitioned across devices, setting this op apart from the regular Call op. This op is stateful. config An optional string. Defaults to "". config_proto An optional string. Defaults to "". executor_type An optional string. Defaults to "". name A name for the operation (optional). Returns A list of Tensor objects of type Tout.
doc_3127
Add element elem to the set.
doc_3128
An array that represents the days of the week in the current locale.
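A doctest-style sketch (added for illustration; output shown for an English locale): >>> import calendar >>> list(calendar.day_name) ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']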
doc_3129
Returns True if any of the elements of a evaluate to True. Refer to numpy.any for full documentation. See also numpy.any equivalent function
doc_3130
Raised when an attempt is made to modify the type of a node.
doc_3131
Possible value for SSLContext.verify_flags. In this mode, only the peer cert is checked but none of the intermediate CA certificates. The mode requires a valid CRL that is signed by the peer cert’s issuer (its direct ancestor CA). If no proper CRL has been loaded with SSLContext.load_verify_locations, validation will fail. New in version 3.4.
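A minimal sketch of enabling this flag (added for illustration; 'ca.pem' and 'crl.pem' are placeholder file names):
import ssl

ctx = ssl.create_default_context()
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF
# Both the CA certificate and a CRL issued by it must be loaded,
# otherwise every handshake will fail certificate verification.
ctx.load_verify_locations('ca.pem')
ctx.load_verify_locations('crl.pem')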
doc_3132
Copies values from one array to another, broadcasting as necessary. Raises a TypeError if the casting rule is violated, and if where is provided, it selects which elements to copy. New in version 1.7.0. Parameters dstndarray The array into which values are copied. srcarray_like The array from which values are copied. casting{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur when copying. ‘no’ means the data types should not be cast at all. ‘equiv’ means only byte-order changes are allowed. ‘safe’ means only casts which can preserve values are allowed. ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. ‘unsafe’ means any data conversions may be done. wherearray_like of bool, optional A boolean array which is broadcasted to match the dimensions of dst, and selects elements to copy from src to dst wherever it contains the value True.
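A doctest-style sketch of masked copying with where (added for illustration): >>> import numpy as np >>> dst = np.zeros(4) >>> np.copyto(dst, [1.5, 2.5, 3.5, 4.5], where=[True, False, True, False]) >>> dst array([1.5, 0. , 3.5, 0. ])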
doc_3133
See Migration guide for more details. tf.compat.v1.raw_ops.ScatterSub tf.raw_ops.ScatterSub( ref, indices, updates, use_locking=False, name=None ) # Scalar indices ref[indices, ...] -= updates[...] # Vector indices (for each i) ref[indices[i], ...] -= updates[i, ...] # High rank indices (for each i, ..., j) ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...] This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Duplicate entries are handled correctly: if multiple indices reference the same location, their (negated) contributions add. Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = []. Args ref A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable node. indices A Tensor. Must be one of the following types: int32, int64. A tensor of indices into the first dimension of ref. updates A Tensor. Must have the same type as ref. A tensor of updated values to subtract from ref. use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as ref.
doc_3134
Partitions the given `shape` and returns the partition results. See docstring of `__call__` method for the format of partition results. Methods __call__ View source __call__( shape, dtype, axis=0 ) Partitions the given shape and returns the partition results. Examples of a partitioner that allocates a fixed number of shards: partitioner = FixedShardsPartitioner(num_shards=2) partitions = partitioner(tf.TensorShape([10, 3]), tf.float32, axis=0) print(partitions) # [2, 0] Args shape a tf.TensorShape, the shape to partition. dtype a tf.dtypes.Dtype indicating the type of the partition value. axis The axis to partition along. Default: outermost axis. Returns A list of integers representing the number of partitions on each axis, where i-th value corresponds to i-th axis.
doc_3135
Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility.
doc_3136
See torch.fmin()
doc_3137
get the dimensions of the Surface get_size() -> (width, height) Return the width and height of the Surface in pixels.
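A minimal sketch (added for illustration):
import pygame

surf = pygame.Surface((640, 480))
width, height = surf.get_size()  # (640, 480)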
doc_3138
Decode a message header value without converting the character set. The header value is in header. This function returns a list of (decoded_string, charset) pairs containing each of the decoded parts of the header. charset is None for non-encoded parts of the header, otherwise a lower case string containing the name of the character set specified in the encoded string. Here’s an example: >>> from email.header import decode_header >>> decode_header('=?iso-8859-1?q?p=F6stal?=') [(b'p\xf6stal', 'iso-8859-1')]
doc_3139
Create an instance of the FileInput class. The instance will be used as global state for the functions of this module, and is also returned to use during iteration. The parameters to this function will be passed along to the constructor of the FileInput class. The FileInput instance can be used as a context manager in the with statement. In this example, input is closed after the with statement is exited, even if an exception occurs: with fileinput.input(files=('spam.txt', 'eggs.txt')) as f: for line in f: process(line) Changed in version 3.2: Can be used as a context manager. Changed in version 3.8: The keyword parameters mode and openhook are now keyword-only.
doc_3140
Return in a single string any lines of comments immediately preceding the object’s source code (for a class, function, or method), or at the top of the Python source file (if the object is a module). If the object’s source code is unavailable, return None. This could happen if the object has been defined in C or the interactive shell.
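A minimal sketch (added for illustration; it must be run from a file, since source is unavailable in the interactive shell):
import inspect

# Helper used only for illustration.
def add(a, b):
    return a + b

print(inspect.getcomments(add))
# -> '# Helper used only for illustration.\n'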
doc_3141
Segments image using quickshift clustering in Color-(x,y) space. Produces an oversegmentation of the image using the quickshift mode-seeking algorithm. Parameters image(width, height, channels) ndarray Input image. ratiofloat, optional, between 0 and 1 Balances color-space proximity and image-space proximity. Higher values give more weight to color-space. kernel_sizefloat, optional Width of Gaussian kernel used in smoothing the sample density. Higher means fewer clusters. max_distfloat, optional Cut-off point for data distances. Higher means fewer clusters. return_treebool, optional Whether to return the full segmentation hierarchy tree and distances. sigmafloat, optional Width for Gaussian smoothing as preprocessing. Zero means no smoothing. convert2labbool, optional Whether the input should be converted to Lab colorspace prior to segmentation. For this purpose, the input is assumed to be RGB. random_seedint, optional Random seed used for breaking ties. Returns segment_mask(width, height) ndarray Integer mask indicating segment labels. Notes The authors advocate to convert the image to Lab color space prior to segmentation, though this is not strictly necessary. For this to work, the image must be given in RGB format. References 1 Quick shift and kernel methods for mode seeking, Vedaldi, A. and Soatto, S. European Conference on Computer Vision, 2008
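A minimal sketch on a bundled sample image (added for illustration):
from skimage import data
from skimage.segmentation import quickshift

img = data.astronaut()  # RGB sample image
segments = quickshift(img, kernel_size=3, max_dist=6, ratio=0.5)
# Integer label mask with the same spatial shape as the input.
print(segments.shape, segments.max() + 1)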
doc_3142
Return the time formatted according to ISO 8601. The full format looks like ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn’. By default, the fractional part is omitted if self.microsecond == 0 and self.nanosecond == 0. If self.tzinfo is not None, the UTC offset is also attached, giving a full format of ‘YYYY-MM-DD HH:MM:SS.mmmmmmnnn+HH:MM’. Parameters sep:str, default ‘T’ String used as the separator between the date and time. timespec:str, default ‘auto’ Specifies the number of additional terms of the time to include. The valid values are ‘auto’, ‘hours’, ‘minutes’, ‘seconds’, ‘milliseconds’, ‘microseconds’, and ‘nanoseconds’. Returns str Examples >>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651') >>> ts.isoformat() '2020-03-14T15:32:52.192548651' >>> ts.isoformat(timespec='microseconds') '2020-03-14T15:32:52.192548'
doc_3143
Getter for the precision matrix. Returns precision_array-like of shape (n_features, n_features) The precision matrix associated to the current covariance object.
doc_3144
Apply the non-affine part of this transform to Path path, returning a new Path. transform_path(path) is equivalent to transform_path_affine(transform_path_non_affine(path)).
doc_3145
tf.image.encode_png Compat aliases for migration See Migration guide for more details. tf.compat.v1.image.encode_png, tf.compat.v1.io.encode_png tf.io.encode_png( image, compression=-1, name=None ) image is a 3-D uint8 or uint16 Tensor of shape [height, width, channels] where channels is: 1: for grayscale. 2: for grayscale + alpha. 3: for RGB. 4: for RGBA. The ZLIB compression level, compression, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower. Args image A Tensor. Must be one of the following types: uint8, uint16. 3-D with shape [height, width, channels]. compression An optional int. Defaults to -1. Compression level. name A name for the operation (optional). Returns A Tensor of type string.
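A minimal sketch (added for illustration; 'out.png' is a placeholder path):
import tensorflow as tf

image = tf.zeros([16, 16, 3], dtype=tf.uint8)  # black RGB image
png_bytes = tf.io.encode_png(image, compression=9)
tf.io.write_file('out.png', png_bytes)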
doc_3146
An alias to collections.abc.Hashable
doc_3147
See Migration guide for more details. tf.compat.v1.image.adjust_gamma tf.image.adjust_gamma( image, gamma=1, gain=1 ) Performs Gamma Correction on the input image. Also known as Power Law Transform. This function converts the input images at first to float representation, then transforms them pixelwise according to the equation Out = gain * In**gamma, and then converts them back to the original data type. Usage Example: x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]] tf.image.adjust_gamma(x, 0.2) <tf.Tensor: shape=(2, 2, 3), dtype=float32, numpy= array([[[1. , 1.1486983, 1.2457309], [1.319508 , 1.3797297, 1.4309691]], [[1.4757731, 1.5157166, 1.5518456], [1.5848932, 1.6153942, 1.6437519]]], dtype=float32)> Args image RGB image or images to adjust. gamma A scalar or tensor. Non-negative real number. gain A scalar or tensor. The constant multiplier. Returns A Tensor. A Gamma-adjusted tensor of the same shape and type as image. Raises ValueError If gamma is negative. Notes: For gamma greater than 1, the histogram will shift towards left and the output image will be darker than the input image. For gamma less than 1, the histogram will shift towards right and the output image will be brighter than the input image. References: Wikipedia
doc_3148
A class attribute describing the aggregate function that will be generated. Specifically, the function will be interpolated as the function placeholder within template. Defaults to None.
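A hedged sketch of a custom aggregate that sets this attribute (added for illustration; the Median class is hypothetical, and PERCENTILE_CONT is a PostgreSQL function):
from django.db.models import Aggregate, FloatField

class Median(Aggregate):
    function = 'PERCENTILE_CONT'  # interpolated as %(function)s in template
    name = 'median'
    template = '%(function)s(0.5) WITHIN GROUP (ORDER BY %(expressions)s)'
    output_field = FloatField()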
doc_3149
tf.compat.v1.data.FixedLengthRecordDataset( filenames, record_bytes, header_bytes=None, footer_bytes=None, buffer_size=None, compression_type=None, num_parallel_reads=None ) Args filenames A tf.string tensor or tf.data.Dataset containing one or more filenames. record_bytes A tf.int64 scalar representing the number of bytes in each record. header_bytes (Optional.) A tf.int64 scalar representing the number of bytes to skip at the start of a file. footer_bytes (Optional.) A tf.int64 scalar representing the number of bytes to ignore at the end of a file. buffer_size (Optional.) A tf.int64 scalar representing the number of bytes to buffer when reading. compression_type (Optional.) A tf.string scalar evaluating to one of "" (no compression), "ZLIB", or "GZIP". num_parallel_reads (Optional.) A tf.int64 scalar representing the number of files to read in parallel. If greater than one, the records of files read in parallel are outputted in an interleaved order. If your input pipeline is I/O bottlenecked, consider setting this parameter to a value greater than one to parallelize the I/O. If None, files will be read sequentially. Attributes element_spec The type specification of an element of this dataset. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset.element_spec TensorSpec(shape=(), dtype=tf.int32, name=None) output_classes Returns the class of each component of an element of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_classes(dataset). output_shapes Returns the shape of each component of an element of this dataset. (deprecated)Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_shapes(dataset). output_types Returns the type of each component of an element of this dataset. (deprecated)Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.compat.v1.data.get_output_types(dataset). Methods apply View source apply( transformation_func ) Applies a transformation function to this dataset. apply enables chaining of custom Dataset transformations, which are represented as functions that take one Dataset argument and return a transformed Dataset. dataset = tf.data.Dataset.range(100) def dataset_fn(ds): return ds.filter(lambda x: x < 5) dataset = dataset.apply(dataset_fn) list(dataset.as_numpy_iterator()) [0, 1, 2, 3, 4] Args transformation_func A function that takes one Dataset argument and returns a Dataset. Returns Dataset The Dataset returned by applying transformation_func to this dataset. as_numpy_iterator View source as_numpy_iterator() Returns an iterator which converts all elements of the dataset to numpy. Use as_numpy_iterator to inspect the content of your dataset. To see element shapes and types, print dataset elements directly instead of using as_numpy_iterator. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset: print(element) tf.Tensor(1, shape=(), dtype=int32) tf.Tensor(2, shape=(), dtype=int32) tf.Tensor(3, shape=(), dtype=int32) This method requires that you are running in eager mode and the dataset's element_spec contains only TensorSpec components. 
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) for element in dataset.as_numpy_iterator(): print(element) 1 2 3 dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) print(list(dataset.as_numpy_iterator())) [1, 2, 3] as_numpy_iterator() will preserve the nested structure of dataset elements. dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]), 'b': [5, 6]}) list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5}, {'a': (2, 4), 'b': 6}] True Returns An iterable over the elements of the dataset, with their tensors converted to numpy arrays. Raises TypeError if an element contains a non-Tensor value. RuntimeError if eager execution is not enabled. batch View source batch( batch_size, drop_remainder=False ) Combines consecutive elements of this dataset into batches. dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5]), array([6, 7])] dataset = tf.data.Dataset.range(8) dataset = dataset.batch(3, drop_remainder=True) list(dataset.as_numpy_iterator()) [array([0, 1, 2]), array([3, 4, 5])] The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. cache View source cache( filename='' ) Caches the elements in this dataset. The first time the dataset is iterated over, its elements will be cached either in the specified file or in memory. Subsequent iterations will use the cached data. Note: For the cache to be finalized, the input dataset must be iterated through in its entirety. Otherwise, subsequent iterations will not use cached data. dataset = tf.data.Dataset.range(5) dataset = dataset.map(lambda x: x**2) dataset = dataset.cache() # The first time reading through the data will generate the data using # `range` and `map`. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] # Subsequent iterations read from the cache. list(dataset.as_numpy_iterator()) [0, 1, 4, 9, 16] When caching to a file, the cached data will persist across runs. Even the first iteration through the data will read from the cache file. Changing the input pipeline before the call to .cache() will have no effect until the cache file is removed or the filename is changed. dataset = tf.data.Dataset.range(5) dataset = dataset.cache("/path/to/file") # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] dataset = tf.data.Dataset.range(10) dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP list(dataset.as_numpy_iterator()) # doctest: +SKIP [0, 1, 2, 3, 4] Note: cache will produce exactly the same elements during each iteration through the dataset. If you wish to randomize the iteration order, make sure to call shuffle after calling cache. 
Args filename A tf.string scalar tf.Tensor, representing the name of a directory on the filesystem to use for caching elements in this Dataset. If a filename is not provided, the dataset will be cached in memory. Returns Dataset A Dataset. cardinality View source cardinality() Returns the cardinality of the dataset, if known. cardinality may return tf.data.INFINITE_CARDINALITY if the dataset contains an infinite number of elements or tf.data.UNKNOWN_CARDINALITY if the analysis fails to determine the number of elements in the dataset (e.g. when the dataset source is a file). dataset = tf.data.Dataset.range(42) print(dataset.cardinality().numpy()) 42 dataset = dataset.repeat() cardinality = dataset.cardinality() print((cardinality == tf.data.INFINITE_CARDINALITY).numpy()) True dataset = dataset.filter(lambda x: True) cardinality = dataset.cardinality() print((cardinality == tf.data.UNKNOWN_CARDINALITY).numpy()) True Returns A scalar tf.int64 Tensor representing the cardinality of the dataset. If the cardinality is infinite or unknown, cardinality returns the named constants tf.data.INFINITE_CARDINALITY and tf.data.UNKNOWN_CARDINALITY respectively. concatenate View source concatenate( dataset ) Creates a Dataset by concatenating the given dataset with this dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ] ds = a.concatenate(b) list(ds.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7] # The input dataset and dataset to be concatenated should have the same # nested structures and output types. c = tf.data.Dataset.zip((a, b)) a.concatenate(c) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and (tf.int64, tf.int64) d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"]) a.concatenate(d) Traceback (most recent call last): TypeError: Two datasets to concatenate have different types <dtype: 'int64'> and <dtype: 'string'> Args dataset Dataset to be concatenated. Returns Dataset A Dataset. enumerate View source enumerate( start=0 ) Enumerates the elements of this dataset. It is similar to python's enumerate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.enumerate(start=5) for element in dataset.as_numpy_iterator(): print(element) (5, 1) (6, 2) (7, 3) # The nested structure of the input dataset determines the structure of # elements in the resulting dataset. dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)]) dataset = dataset.enumerate() for element in dataset.as_numpy_iterator(): print(element) (0, array([7, 8], dtype=int32)) (1, array([ 9, 10], dtype=int32)) Args start A tf.int64 scalar tf.Tensor, representing the start value for enumeration. Returns Dataset A Dataset. filter View source filter( predicate ) Filters this dataset according to predicate. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.filter(lambda x: x < 3) list(dataset.as_numpy_iterator()) [1, 2] # `tf.math.equal(x, y)` is required for equality comparison def filter_fn(x): return tf.math.equal(x, 1) dataset = dataset.filter(filter_fn) list(dataset.as_numpy_iterator()) [1] Args predicate A function mapping a dataset element to a boolean. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. filter_with_legacy_function View source filter_with_legacy_function( predicate ) Filters this dataset according to predicate. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. 
Instructions for updating: Use tf.data.Dataset.filter(). Note: This is an escape hatch for existing uses of filter that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to filter as this method will be removed in V2. Args predicate A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to a scalar tf.bool tensor. Returns Dataset The Dataset containing the elements of this dataset for which predicate is True. flat_map View source flat_map( map_func ) Maps map_func across this dataset and flattens the result. Use flat_map if you want to make sure that the order of your dataset stays the same. For example, to flatten a dataset of batches into a dataset of their elements: dataset = tf.data.Dataset.from_tensor_slices( [[1, 2, 3], [4, 5, 6], [7, 8, 9]]) dataset = dataset.flat_map(lambda x: Dataset.from_tensor_slices(x)) list(dataset.as_numpy_iterator()) [1, 2, 3, 4, 5, 6, 7, 8, 9] tf.data.Dataset.interleave() is a generalization of flat_map, since flat_map produces the same output as tf.data.Dataset.interleave(cycle_length=1). Args map_func A function mapping a dataset element to a dataset. Returns Dataset A Dataset. from_generator View source @staticmethod from_generator( generator, output_types=None, output_shapes=None, args=None, output_signature=None ) Creates a Dataset whose elements are generated by generator. (deprecated arguments) Warning: SOME ARGUMENTS ARE DEPRECATED: (output_shapes, output_types). They will be removed in a future version. Instructions for updating: Use output_signature instead. The generator argument must be a callable object that returns an object that supports the iter() protocol (e.g. a generator function). The elements generated by generator must be compatible with either the given output_signature argument or with the given output_types and (optionally) output_shapes arguments, whichever was specified. The recommended way to call from_generator is to use the output_signature argument. In this case the output will be assumed to consist of objects with the classes, shapes and types defined by tf.TypeSpec objects from the output_signature argument: def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) list(dataset.take(1)) [(<tf.Tensor: shape=(), dtype=int32, numpy=42>, <tf.RaggedTensor [[1, 2], [3]]>)] There is also a deprecated way to call from_generator, either with the output_types argument alone or together with the output_shapes argument. In this case the output of the function will be assumed to consist of tf.Tensor objects with the types defined by output_types and with the shapes which are either unknown or defined by output_shapes. Note: The current implementation of Dataset.from_generator() uses tf.numpy_function and inherits the same constraints. In particular, it requires the dataset and iterator related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
Note: If generator depends on mutable global variables or other external state, be aware that the runtime may invoke generator multiple times (in order to support repeating the Dataset) and at any time between the call to Dataset.from_generator() and the production of the first element from the generator. Mutating global variables or external state can cause undefined behavior, and we recommend that you explicitly cache any external state in generator before calling Dataset.from_generator(). Args generator A callable object that returns an object that supports the iter() protocol. If args is not specified, generator must take no arguments; otherwise it must take as many arguments as there are values in args. output_types (Optional.) A nested structure of tf.DType objects corresponding to each component of an element yielded by generator. output_shapes (Optional.) A nested structure of tf.TensorShape objects corresponding to each component of an element yielded by generator. args (Optional.) A tuple of tf.Tensor objects that will be evaluated and passed to generator as NumPy-array arguments. output_signature (Optional.) A nested structure of tf.TypeSpec objects corresponding to each component of an element yielded by generator. Returns Dataset A Dataset. from_sparse_tensor_slices View source @staticmethod from_sparse_tensor_slices( sparse_tensor ) Splits each rank-N tf.sparse.SparseTensor in this dataset row-wise. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.from_tensor_slices(). Args sparse_tensor A tf.sparse.SparseTensor. Returns Dataset A Dataset of rank-(N-1) sparse tensors. from_tensor_slices View source @staticmethod from_tensor_slices( tensors ) Creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. All input tensors must have the same size in their first dimensions. # Slicing a 1D tensor produces scalar tensor elements. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) list(dataset.as_numpy_iterator()) [1, 2, 3] # Slicing a 2D tensor produces 1D tensor elements. dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]) list(dataset.as_numpy_iterator()) [array([1, 2], dtype=int32), array([3, 4], dtype=int32)] # Slicing a tuple of 1D tensors produces tuple elements containing # scalar tensors. dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6])) list(dataset.as_numpy_iterator()) [(1, 3, 5), (2, 4, 6)] # Dictionary structure is also preserved. dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]}) list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3}, {'a': 2, 'b': 4}] True # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = Dataset.from_tensor_slices(features) labels_dataset = Dataset.from_tensor_slices(labels) dataset = Dataset.zip((features_dataset, labels_dataset)) # A batched feature and label set can be converted to a Dataset # in similar fashion. 
batched_features = tf.constant([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]], shape=(3, 2, 2)) batched_labels = tf.constant([['A', 'A'], ['B', 'B'], ['A', 'B']], shape=(3, 2, 1)) dataset = Dataset.from_tensor_slices((batched_features, batched_labels)) for element in dataset.as_numpy_iterator(): print(element) (array([[1, 3], [2, 3]], dtype=int32), array([[b'A'], [b'A']], dtype=object)) (array([[2, 1], [1, 2]], dtype=int32), array([[b'B'], [b'B']], dtype=object)) (array([[3, 3], [3, 2]], dtype=int32), array([[b'A'], [b'B']], dtype=object)) Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element, with each component having the same size in the first dimension. Returns Dataset A Dataset. from_tensors View source @staticmethod from_tensors( tensors ) Creates a Dataset with a single element, comprising the given tensors. from_tensors produces a dataset containing only a single element. To slice the input tensor into multiple elements, use from_tensor_slices instead. dataset = tf.data.Dataset.from_tensors([1, 2, 3]) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32)] dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A')) list(dataset.as_numpy_iterator()) [(array([1, 2, 3], dtype=int32), b'A')] # You can use `from_tensors` to produce a dataset which repeats # the same example many times. example = tf.constant([1,2,3]) dataset = tf.data.Dataset.from_tensors(example).repeat(2) list(dataset.as_numpy_iterator()) [array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)] Note that if tensors contains a NumPy array, and eager execution is not enabled, the values will be embedded in the graph as one or more tf.constant operations. For large datasets (> 1 GB), this can waste memory and run into byte limits of graph serialization. If tensors contains one or more large NumPy arrays, consider the alternative described in this guide. Args tensors A dataset element. Returns Dataset A Dataset. interleave View source interleave( map_func, cycle_length=None, block_length=None, num_parallel_calls=None, deterministic=None ) Maps map_func across this dataset, and interleaves the results. For example, you can use Dataset.interleave() to process many input files concurrently: # Preprocess 4 files concurrently, and interleave blocks of 16 records # from each file. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) def parse_fn(filename): return tf.data.Dataset.range(10) dataset = dataset.interleave(lambda x: tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1), cycle_length=4, block_length=16) The cycle_length and block_length arguments control the order in which elements are produced. cycle_length controls the number of input elements that are processed concurrently. If you set cycle_length to 1, this transformation will handle one input element at a time, and will produce identical results to tf.data.Dataset.flat_map. 
In general, this transformation will apply map_func to cycle_length input elements, open iterators on the returned Dataset objects, and cycle through them producing block_length consecutive elements from each iterator, and consuming the next input element each time it reaches the end of an iterator. For example: dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] # NOTE: New lines indicate "block" boundaries. dataset = dataset.interleave( lambda x: Dataset.from_tensors(x).repeat(6), cycle_length=2, block_length=4) list(dataset.as_numpy_iterator()) [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5] Note: The order of elements yielded by this transformation is deterministic, as long as map_func is a pure function and deterministic=True. If map_func contains any stateful operations, the order in which that state is accessed is undefined. Performance can often be improved by setting num_parallel_calls so that interleave will use multiple threads to fetch elements. If determinism isn't required, it can also improve performance to set deterministic=False. filenames = ["/var/data/file1.txt", "/var/data/file2.txt", "/var/data/file3.txt", "/var/data/file4.txt"] dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(lambda x: tf.data.TFRecordDataset(x), cycle_length=4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to a dataset. cycle_length (Optional.) The number of input elements that will be processed concurrently. If not set, the tf.data runtime decides what it should be based on available CPU. If num_parallel_calls is set to tf.data.AUTOTUNE, the cycle_length argument identifies the maximum degree of parallelism. block_length (Optional.) The number of consecutive elements to produce from each input element before cycling to another input element. If not set, defaults to 1. num_parallel_calls (Optional.) If specified, the implementation creates a threadpool, which is used to fetch inputs from cycle elements asynchronously and in parallel. The default behavior is to fetch inputs from cycle elements synchronously with no parallelism. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. list_files View source @staticmethod list_files( file_pattern, shuffle=None, seed=None ) A dataset of all files matching one or more glob patterns. The file_pattern argument should be a small number of glob patterns. If your filenames have already been globbed, use Dataset.from_tensor_slices(filenames) instead, as re-globbing every filename with list_files may result in poor performance with remote storage systems. Note: The default behavior of this method is to return filenames in a non-deterministic random shuffled order. Pass a seed or shuffle=False to get results in a deterministic order. 
Example: If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py If we pass "/path/to/dir/*.py" as the file_pattern, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py Args file_pattern A string, a list of strings, or a tf.Tensor of string type (scalar or vector), representing the filename glob (i.e. shell wildcard) pattern(s) that will be matched. shuffle (Optional.) If True, the file names will be shuffled randomly. Defaults to True. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. Returns Dataset A Dataset of strings corresponding to file names. make_initializable_iterator View source make_initializable_iterator( shared_name=None ) Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_initializable_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code. Note: The returned iterator will be in an uninitialized state, and you must run the iterator.initializer operation before using it: # Building graph ... dataset = ... iterator = dataset.make_initializable_iterator() next_value = iterator.get_next() # This is a Tensor. # ... from within a session ... sess.run(iterator.initializer) try: while True: value = sess.run(next_value) ... except tf.errors.OutOfRangeError: pass Args shared_name (Optional.) If non-empty, the returned iterator will be shared under the given name across multiple sessions that share the same devices (e.g. when using a remote server). Returns A tf.data.Iterator for elements of this dataset. Raises RuntimeError If eager execution is enabled. make_one_shot_iterator View source make_one_shot_iterator() Creates an iterator for elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This is a deprecated API that should only be used in TF 1 graph mode and legacy TF 2 graph mode available through tf.compat.v1. In all other situations -- namely, eager mode and inside tf.function -- you can consume dataset elements using for elem in dataset: ... or by explicitly creating an iterator via iterator = iter(dataset) and fetching its elements via values = next(iterator). Furthermore, this API is not available in TF 2. During the transition from TF 1 to TF 2 you can use tf.compat.v1.data.make_one_shot_iterator(dataset) to create a TF 1 graph mode style iterator for a dataset created through TF 2 APIs. Note that this should be a transient state of your code base as there are in general no guarantees about the interoperability of TF 1 and TF 2 code. Note: The returned iterator will be initialized automatically.
A "one-shot" iterator does not currently support re-initialization. For that see make_initializable_iterator. Example: # Building graph ... dataset = ... next_value = dataset.make_one_shot_iterator().get_next() # ... from within a session ... try: while True: value = sess.run(next_value) ... except tf.errors.OutOfRangeError: pass Returns An tf.data.Iterator for elements of this dataset. map View source map( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. This transformation applies map_func to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. map_func can be used to change both the values and the structure of a dataset's elements. For example, adding 1 to each element, or projecting a subset of element components. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1) list(dataset.as_numpy_iterator()) [2, 3, 4, 5, 6] The input signature of map_func is determined by the structure of each element in this dataset. dataset = Dataset.range(5) # `map_func` takes a single argument of type `tf.Tensor` with the same # shape and dtype. result = dataset.map(lambda x: x + 1) # Each element is a tuple containing two `tf.Tensor` objects. elements = [(1, "foo"), (2, "bar"), (3, "baz")] dataset = tf.data.Dataset.from_generator( lambda: elements, (tf.int32, tf.string)) # `map_func` takes two arguments of type `tf.Tensor`. This function # projects out just the first component. result = dataset.map(lambda x_int, y_str: x_int) list(result.as_numpy_iterator()) [1, 2, 3] # Each element is a dictionary mapping strings to `tf.Tensor` objects. elements = ([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}, {"a": 3, "b": "baz"}]) dataset = tf.data.Dataset.from_generator( lambda: elements, {"a": tf.int32, "b": tf.string}) # `map_func` takes a single argument of type `dict` with the same keys # as the elements. result = dataset.map(lambda d: str(d["a"]) + d["b"]) The value or values returned by map_func determine the structure of each element in the returned dataset. dataset = tf.data.Dataset.range(3) # `map_func` returns two `tf.Tensor` objects. def g(x): return tf.constant(37.0), tf.constant(["Foo", "Bar", "Baz"]) result = dataset.map(g) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(3,), dtype=tf.string, name=None)) # Python primitives, lists, and NumPy arrays are implicitly converted to # `tf.Tensor`. def h(x): return 37.0, ["Foo", "Bar"], np.array([1.0, 2.0], dtype=np.float64) result = dataset.map(h) result.element_spec (TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.string, name=None), TensorSpec(shape=(2,), dtype=tf.float64, name=None)) # `map_func` can return nested structures. def i(x): return (37.0, [42, 16]), "foo" result = dataset.map(i) result.element_spec ((TensorSpec(shape=(), dtype=tf.float32, name=None), TensorSpec(shape=(2,), dtype=tf.int32, name=None)), TensorSpec(shape=(), dtype=tf.string, name=None)) map_func can accept as arguments and return any type of dataset element. Note that irrespective of the context in which map_func is defined (eager vs. graph), tf.data traces the function and executes it as a graph. To use Python code inside of the function you have a few options: 1) Rely on AutoGraph to convert Python code into an equivalent graph computation. 
The downside of this approach is that AutoGraph can convert some but not all Python code. 2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1). For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) # transform a string tensor to upper case string using a Python function def upper_case_fn(t: tf.Tensor): return t.numpy().decode('utf-8').upper() d = d.map(lambda x: tf.py_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] 3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays. For example: d = tf.data.Dataset.from_tensor_slices(['hello', 'world']) def upper_case_fn(t: np.ndarray): return t.decode('utf-8').upper() d = d.map(lambda x: tf.numpy_function(func=upper_case_fn, inp=[x], Tout=tf.string)) list(d.as_numpy_iterator()) [b'HELLO', b'WORLD'] Note that the use of tf.numpy_function and tf.py_function in general precludes the possibility of executing user-defined transformations in parallel (because of the Python GIL). Performance can often be improved by setting num_parallel_calls so that map will use multiple threads to process elements. If deterministic order isn't required, it can also improve performance to set deterministic=False. dataset = Dataset.range(1, 6) # ==> [ 1, 2, 3, 4, 5 ] dataset = dataset.map(lambda x: x + 1, num_parallel_calls=tf.data.AUTOTUNE, deterministic=False) Args map_func A function mapping a dataset element to another dataset element. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order. If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. map_with_legacy_function View source map_with_legacy_function( map_func, num_parallel_calls=None, deterministic=None ) Maps map_func across the elements of this dataset. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map(). Note: This is an escape hatch for existing uses of map that do not work with V2 functions. New uses are strongly discouraged and existing uses should migrate to map as this method will be removed in V2. Args map_func A function mapping a nested structure of tensors (having shapes and types defined by self.output_shapes and self.output_types) to another nested structure of tensors. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process asynchronously in parallel. If not specified, elements will be processed sequentially. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. deterministic (Optional.) A boolean controlling whether determinism should be traded for performance by allowing elements to be produced out of order.
If deterministic is None, the tf.data.Options.experimental_deterministic dataset option (True by default) is used to decide whether to produce elements deterministically. Returns Dataset A Dataset. options View source options() Returns the options for this dataset and its inputs. Returns A tf.data.Options object representing the dataset options. padded_batch View source padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False ) Combines consecutive elements of this dataset into padded batches. This transformation combines multiple consecutive elements of the input dataset into a single element. Like tf.data.Dataset.batch, the components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced. Unlike tf.data.Dataset.batch, the input elements to be batched may have different shapes, and this transformation will pad each component to the respective shape in padded_shapes. The padded_shapes argument determines the resulting shape for each dimension of each component in an output element: If the dimension is a constant, the component will be padded out to that length in that dimension. If the dimension is unknown, the component will be padded out to the maximum length of all elements in that dimension. A = (tf.data.Dataset .range(1, 5, output_type=tf.int32) .map(lambda x: tf.fill([x], x))) # Pad to the smallest per-batch size that fits all elements. B = A.padded_batch(2) for element in B.as_numpy_iterator(): print(element) [[1 0] [2 2]] [[3 3 3 0] [4 4 4 4]] # Pad to a fixed size. C = A.padded_batch(2, padded_shapes=5) for element in C.as_numpy_iterator(): print(element) [[1 0 0 0 0] [2 2 0 0 0]] [[3 3 3 0 0] [4 4 4 4 0]] # Pad with a custom value. D = A.padded_batch(2, padded_shapes=5, padding_values=-1) for element in D.as_numpy_iterator(): print(element) [[ 1 -1 -1 -1 -1] [ 2 2 -1 -1 -1]] [[ 3 3 3 -1 -1] [ 4 4 4 4 -1]] # Components of nested elements can be padded independently. elements = [([1, 2, 3], [10]), ([4, 5], [11, 12])] dataset = tf.data.Dataset.from_generator( lambda: iter(elements), (tf.int32, tf.int32)) # Pad the first component of the tuple to length 4, and the second # component to the smallest size that fits. dataset = dataset.padded_batch(2, padded_shapes=([4], [None]), padding_values=(-1, 100)) list(dataset.as_numpy_iterator()) [(array([[ 1, 2, 3, -1], [ 4, 5, -1, -1]], dtype=int32), array([[ 10, 100], [ 11, 12]], dtype=int32))] # Pad with a single value and multiple components. E = tf.data.Dataset.zip((A, A)).padded_batch(2, padding_values=-1) for element in E.as_numpy_iterator(): print(element) (array([[ 1, -1], [ 2, 2]], dtype=int32), array([[ 1, -1], [ 2, 2]], dtype=int32)) (array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32), array([[ 3, 3, 3, -1], [ 4, 4, 4, 4]], dtype=int32)) See also tf.data.experimental.dense_to_sparse_batch, which combines elements that may have different shapes into a tf.sparse.SparseTensor. Args batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. padded_shapes (Optional.) 
A nested structure of tf.TensorShape or tf.int64 vector tensor-like objects representing the shape to which the respective component of each input element should be padded prior to batching. Any unknown dimensions will be padded to the maximum size of that dimension in each batch. If unset, all dimensions of all components are padded to the maximum size in the batch. padded_shapes must be set if any component has an unknown rank. padding_values (Optional.) A nested structure of scalar-shaped tf.Tensor, representing the padding values to use for the respective components. None represents that the nested structure should be padded with default values. Defaults are 0 for numeric types and the empty string for string types. The padding_values should have the same structure as the input dataset. If padding_values is a single element and the input dataset has multiple components, then the same padding_values will be used to pad every component of the dataset. If padding_values is a scalar, then its value will be broadcasted to match the shape of each component. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in the case it has fewer than batch_size elements; the default behavior is not to drop the smaller batch. Returns Dataset A Dataset. Raises ValueError If a component has an unknown rank, and the padded_shapes argument is not set. prefetch View source prefetch( buffer_size ) Creates a Dataset that prefetches elements from this dataset. Most dataset input pipelines should end with a call to prefetch. This allows later elements to be prepared while the current element is being processed. This often improves latency and throughput, at the cost of using additional memory to store prefetched elements. Note: Like other Dataset methods, prefetch operates on the elements of the input dataset. It has no concept of examples vs. batches. examples.prefetch(2) will prefetch two elements (2 examples), while examples.batch(20).prefetch(2) will prefetch 2 elements (2 batches, of 20 examples each). dataset = tf.data.Dataset.range(3) dataset = dataset.prefetch(2) list(dataset.as_numpy_iterator()) [0, 1, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. Returns Dataset A Dataset. range View source @staticmethod range( *args, **kwargs ) Creates a Dataset of a step-separated range of values. list(Dataset.range(5).as_numpy_iterator()) [0, 1, 2, 3, 4] list(Dataset.range(2, 5).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2).as_numpy_iterator()) [1, 3] list(Dataset.range(1, 5, -2).as_numpy_iterator()) [] list(Dataset.range(5, 1).as_numpy_iterator()) [] list(Dataset.range(5, 1, -2).as_numpy_iterator()) [5, 3] list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator()) [2, 3, 4] list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator()) [1.0, 3.0] Args *args follows the same semantics as Python's built-in range. len(args) == 1 -> start = 0, stop = args[0], step = 1. len(args) == 2 -> start = args[0], stop = args[1], step = 1. len(args) == 3 -> start = args[0], stop = args[1], step = args[2]. **kwargs output_type: Its expected dtype. (Optional, default: tf.int64). Returns Dataset A RangeDataset. Raises ValueError if len(args) == 0. reduce View source reduce( initial_state, reduce_func ) Reduces the input dataset to a single element.
The transformation calls reduce_func successively on every element of the input dataset until the dataset is exhausted, aggregating information in its internal state. The initial_state argument is used for the initial state and the final state is returned as the result. tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, _: x + 1).numpy() 5 tf.data.Dataset.range(5).reduce(np.int64(0), lambda x, y: x + y).numpy() 10 Args initial_state An element representing the initial state of the transformation. reduce_func A function that maps (old_state, input_element) to new_state. It must take two arguments and return a new element. The structure of new_state must match the structure of initial_state. Returns A dataset element corresponding to the final state of the transformation. repeat View source repeat( count=None ) Repeats this dataset so each original value is seen count times. dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]) dataset = dataset.repeat(3) list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 3, 1, 2, 3] Note: If this dataset is a function of global state (e.g. a random number generator), then different repetitions may produce different elements. Args count (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely. Returns Dataset A Dataset. shard View source shard( num_shards, index ) Creates a Dataset that includes only 1/num_shards of this dataset. shard is deterministic. The Dataset produced by A.shard(n, i) will contain all elements of A whose index mod n = i. A = tf.data.Dataset.range(10) B = A.shard(num_shards=3, index=0) list(B.as_numpy_iterator()) [0, 3, 6, 9] C = A.shard(num_shards=3, index=1) list(C.as_numpy_iterator()) [1, 4, 7] D = A.shard(num_shards=3, index=2) list(D.as_numpy_iterator()) [2, 5, 8] This dataset operator is very useful when running distributed training, as it allows each worker to read a unique subset. When reading a single input file, you can shard elements as follows: d = tf.data.TFRecordDataset(input_file) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Important caveats: Be sure to shard before you use any randomizing operator (such as shuffle). Generally it is best if the shard operator is used early in the dataset pipeline. For example, when reading from a set of TFRecord files, shard before converting the dataset to input samples. This avoids reading every file on every worker. The following is an example of an efficient sharding strategy within a complete pipeline: d = Dataset.list_files(pattern) d = d.shard(num_workers, worker_index) d = d.repeat(num_epochs) d = d.shuffle(shuffle_buffer_size) d = d.interleave(tf.data.TFRecordDataset, cycle_length=num_readers, block_length=1) d = d.map(parser_fn, num_parallel_calls=num_map_threads) Args num_shards A tf.int64 scalar tf.Tensor, representing the number of shards operating in parallel. index A tf.int64 scalar tf.Tensor, representing the worker index. Returns Dataset A Dataset. Raises InvalidArgumentError if num_shards or index are illegal values. Note: error checking is done on a best-effort basis, and errors aren't guaranteed to be caught upon dataset creation. (e.g. providing a placeholder tensor bypasses the early checking, and will instead result in an error during a session.run call.)
shuffle View source shuffle( buffer_size, seed=None, reshuffle_each_iteration=None ) Randomly shuffles the elements of this dataset. This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but buffer_size is set to 1,000, then shuffle will initially select a random element from only the first 1,000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1,000 element buffer. reshuffle_each_iteration controls whether the shuffle order should be different for each epoch. In TF 1.X, the idiomatic way to create epochs was through the repeat transformation: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) dataset = dataset.repeat(2) # doctest: +SKIP [1, 0, 2, 1, 0, 2] In TF 2.0, tf.data.Dataset objects are Python iterables which makes it possible to also create epochs through Python iteration: dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=True) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 2, 0] dataset = tf.data.Dataset.range(3) dataset = dataset.shuffle(3, reshuffle_each_iteration=False) list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] list(dataset.as_numpy_iterator()) # doctest: +SKIP [1, 0, 2] Args buffer_size A tf.int64 scalar tf.Tensor, representing the number of elements from this dataset from which the new dataset will sample. seed (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. reshuffle_each_iteration (Optional.) A boolean, which if true indicates that the dataset should be pseudorandomly reshuffled each time it is iterated over. (Defaults to True.) Returns Dataset A Dataset. skip View source skip( count ) Creates a Dataset that skips count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.skip(7) list(dataset.as_numpy_iterator()) [7, 8, 9] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be skipped to form the new dataset. If count is greater than the size of this dataset, the new dataset will contain no elements. If count is -1, skips the entire dataset. Returns Dataset A Dataset. take View source take( count ) Creates a Dataset with at most count elements from this dataset. dataset = tf.data.Dataset.range(10) dataset = dataset.take(3) list(dataset.as_numpy_iterator()) [0, 1, 2] Args count A tf.int64 scalar tf.Tensor, representing the number of elements of this dataset that should be taken to form the new dataset. If count is -1, or if count is greater than the size of this dataset, the new dataset will contain all elements of this dataset. Returns Dataset A Dataset. unbatch View source unbatch() Splits elements of a dataset into multiple elements. 
For example, if elements of the dataset are shaped [B, a0, a1, ...], where B may vary for each input element, then for each element in the dataset, the unbatched dataset will contain B consecutive elements of shape [a0, a1, ...]. elements = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] dataset = tf.data.Dataset.from_generator(lambda: elements, tf.int64) dataset = dataset.unbatch() list(dataset.as_numpy_iterator()) [1, 2, 3, 1, 2, 1, 2, 3, 4] Note: unbatch requires a data copy to slice up the batched tensor into smaller, unbatched tensors. When optimizing performance, try to avoid unnecessary usage of unbatch. Returns A Dataset. window View source window( size, shift=None, stride=1, drop_remainder=False ) Combines (nests of) input elements into a dataset of (nests of) windows. A "window" is a finite dataset of flat elements of size size (or possibly fewer if there are not enough input elements to fill the window and drop_remainder evaluates to False). The shift argument determines the number of input elements by which the window moves on each iteration. If windows and elements are both numbered starting at 0, the first element in window k will be element k * shift of the input dataset. In particular, the first element of the first window will always be the first element of the input dataset. The stride argument determines the stride of the input elements, and the shift argument determines the shift of the window. For example: dataset = tf.data.Dataset.range(7).window(2) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1] [2, 3] [4, 5] [6] dataset = tf.data.Dataset.range(7).window(3, 2, 1, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 1, 2] [2, 3, 4] [4, 5, 6] dataset = tf.data.Dataset.range(7).window(3, 1, 2, True) for window in dataset: print(list(window.as_numpy_iterator())) [0, 2, 4] [1, 3, 5] [2, 4, 6] Note that when the window transformation is applied to a dataset of nested elements, it produces a dataset of nested windows. nested = ([1, 2, 3, 4], [5, 6, 7, 8]) dataset = tf.data.Dataset.from_tensor_slices(nested).window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print(tuple(to_numpy(component) for component in window)) ([1, 2], [5, 6]) ([3, 4], [7, 8]) dataset = tf.data.Dataset.from_tensor_slices({'a': [1, 2, 3, 4]}) dataset = dataset.window(2) for window in dataset: def to_numpy(ds): return list(ds.as_numpy_iterator()) print({'a': to_numpy(window['a'])}) {'a': [1, 2]} {'a': [3, 4]} Args size A tf.int64 scalar tf.Tensor, representing the number of elements of the input dataset to combine into a window. Must be positive. shift (Optional.) A tf.int64 scalar tf.Tensor, representing the number of input elements by which the window moves in each iteration. Defaults to size. Must be positive. stride (Optional.) A tf.int64 scalar tf.Tensor, representing the stride of the input elements in the sliding window. Must be positive. The default value of 1 means "retain every input element". drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last windows should be dropped if their size is smaller than size. Returns Dataset A Dataset of (nests of) windows -- finite datasets of flat elements created from the (nests of) input elements. with_options View source with_options( options ) Returns a new tf.data.Dataset with the given options set. The options are "global" in the sense they apply to the entire dataset.
If options are set multiple times, they are merged as long as different options do not use different non-default values. ds = tf.data.Dataset.range(5) ds = ds.interleave(lambda x: tf.data.Dataset.range(5), cycle_length=3, num_parallel_calls=3) options = tf.data.Options() # This will make the interleave order non-deterministic. options.experimental_deterministic = False ds = ds.with_options(options) Args options A tf.data.Options that identifies the options to use. Returns Dataset A Dataset with the given options. Raises ValueError when an option is set more than once to a non-default value. zip View source @staticmethod zip( datasets ) Creates a Dataset by zipping together the given datasets. This method has similar semantics to the built-in zip() function in Python, with the main difference being that the datasets argument can be an arbitrary nested structure of Dataset objects. # The nested structure of the `datasets` argument determines the # structure of elements in the resulting dataset. a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ] b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ] ds = tf.data.Dataset.zip((a, b)) list(ds.as_numpy_iterator()) [(1, 4), (2, 5), (3, 6)] ds = tf.data.Dataset.zip((b, a)) list(ds.as_numpy_iterator()) [(4, 1), (5, 2), (6, 3)] # The `datasets` argument may contain an arbitrary number of datasets. c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8], # [9, 10], # [11, 12] ] ds = tf.data.Dataset.zip((a, b, c)) for element in ds.as_numpy_iterator(): print(element) (1, 4, array([7, 8])) (2, 5, array([ 9, 10])) (3, 6, array([11, 12])) # The number of elements in the resulting dataset is the same as # the size of the smallest dataset in `datasets`. d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ] ds = tf.data.Dataset.zip((a, d)) list(ds.as_numpy_iterator()) [(1, 13), (2, 14)] Args datasets A nested structure of datasets. Returns Dataset A Dataset. __bool__ View source __bool__() __iter__ View source __iter__() Creates an iterator for elements of this dataset. The returned iterator implements the Python Iterator protocol. Returns A tf.data.Iterator for the elements of this dataset. Raises RuntimeError If not inside of tf.function and not executing eagerly. __len__ View source __len__() Returns the length of the dataset if it is known and finite. This method requires that you are running in eager mode, and that the length of the dataset is known and non-infinite. When the length may be unknown or infinite, or if you are running in graph mode, use tf.data.Dataset.cardinality instead. Returns An integer representing the length of the dataset. Raises RuntimeError If the dataset length is unknown or infinite, or if eager execution is not enabled. __nonzero__ View source __nonzero__()
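To tie the class documentation together, a hedged construction sketch for FixedLengthRecordDataset itself; the file path and the 4-byte record layout here are hypothetical:
import tensorflow as tf
# Each element is a tf.string scalar containing exactly record_bytes raw bytes.
dataset = tf.compat.v1.data.FixedLengthRecordDataset(
    filenames=["/tmp/records.bin"],  # hypothetical file of fixed-length records
    record_bytes=4)
dataset = dataset.map(lambda raw: tf.io.decode_raw(raw, tf.int32))  # decode each record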
doc_3150
See Migration guide for more details. tf.compat.v1.raw_ops.SparseSliceGrad tf.raw_ops.SparseSliceGrad( backprop_val_grad, input_indices, input_start, output_indices, name=None ) This op takes in the upstream gradient w.r.t. non-empty values of the sliced SparseTensor, and outputs the gradients w.r.t. the non-empty values of the input SparseTensor. Args backprop_val_grad A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. 1-D. The gradient with respect to the non-empty values of the sliced SparseTensor. input_indices A Tensor of type int64. 2-D. The indices of the input SparseTensor. input_start A Tensor of type int64. 1-D. Tensor representing the start of the slice. output_indices A Tensor of type int64. 2-D. The indices of the sliced SparseTensor. name A name for the operation (optional). Returns A Tensor. Has the same type as backprop_val_grad.
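An illustrative call, assuming a slice that starts at the origin and keeps both non-empty values, so the upstream gradient should pass through unchanged (values chosen for demonstration only):
import tensorflow as tf
grad = tf.raw_ops.SparseSliceGrad(
    backprop_val_grad=tf.constant([10.0, 20.0]),
    input_indices=tf.constant([[0, 0], [1, 1]], dtype=tf.int64),
    input_start=tf.constant([0, 0], dtype=tf.int64),
    output_indices=tf.constant([[0, 0], [1, 1]], dtype=tf.int64))
print(grad.numpy())  # expected: [10. 20.]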
doc_3151
Return the current frame number.
doc_3152
Base class for syntax errors related to incorrect indentation. This is a subclass of SyntaxError.
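A small sketch of code that triggers this exception, by compiling a function whose body is not indented:
try:
    compile("def f():\nreturn 1", "<test>", "exec")  # body lacks indentation
except IndentationError as exc:
    print(type(exc).__name__)  # IndentationError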
doc_3153
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
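A brief illustration with scikit-learn (LogisticRegression is just an example estimator):
from sklearn.linear_model import LogisticRegression
est = LogisticRegression(C=0.5)
params = est.get_params()  # dict mapping parameter names to values
print(params["C"])  # 0.5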
doc_3154
Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example.
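A minimal sketch, assuming a scatter collection whose first point is given a URL via set_urls (the URL is hypothetical):
import matplotlib.pyplot as plt
sc = plt.scatter([0, 1], [0, 1])
sc.set_urls(["https://example.com/point0", None])
print(sc.get_urls())  # ['https://example.com/point0', None]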
doc_3155
Update this artist's properties from the dict props. Parameters propsdict
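For example, a sketch that updates two properties of a Line2D in one call:
import matplotlib.pyplot as plt
line, = plt.plot([0, 1, 2])
line.update({"color": "crimson", "linewidth": 3})
print(line.get_color(), line.get_linewidth())  # crimson 3.0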
doc_3156
Creator of the file.
doc_3157
Finds and returns the closest Fraction to self that has denominator at most max_denominator. This method is useful for finding rational approximations to a given floating-point number: >>> from fractions import Fraction >>> Fraction('3.1415926535897932').limit_denominator(1000) Fraction(355, 113) or for recovering a rational number that’s represented as a float: >>> from math import pi, cos >>> Fraction(cos(pi/3)) Fraction(4503599627370497, 9007199254740992) >>> Fraction(cos(pi/3)).limit_denominator() Fraction(1, 2) >>> Fraction(1.1).limit_denominator() Fraction(11, 10)
doc_3158
Find n_points regularly spaced along ar_shape. The returned points (as slices) should be as close to cubically-spaced as possible. Essentially, the points are spaced by the Nth root of the input array size, where N is the number of dimensions. However, if an array dimension cannot fit a full step size, it is “discarded”, and the computation is done for only the remaining dimensions. Parameters ar_shapearray-like of ints The shape of the space embedding the grid. len(ar_shape) is the number of dimensions. n_pointsint The (approximate) number of points to embed in the space. Returns slicestuple of slice objects A slice along each dimension of ar_shape, such that the intersection of all the slices give the coordinates of regularly spaced points. Changed in version 0.14.1: In scikit-image 0.14.1 and 0.15, the return type was changed from a list to a tuple to ensure compatibility with Numpy 1.15 and higher. If your code requires the returned result to be a list, you may convert the output of this function to a list with: >>> result = list(regular_grid(ar_shape=(3, 20, 40), n_points=8)) Examples >>> ar = np.zeros((20, 40)) >>> g = regular_grid(ar.shape, 8) >>> g (slice(5, None, 10), slice(5, None, 10)) >>> ar[g] = 1 >>> ar.sum() 8.0 >>> ar = np.zeros((20, 40)) >>> g = regular_grid(ar.shape, 32) >>> g (slice(2, None, 5), slice(2, None, 5)) >>> ar[g] = 1 >>> ar.sum() 32.0 >>> ar = np.zeros((3, 20, 40)) >>> g = regular_grid(ar.shape, 8) >>> g (slice(1, None, 3), slice(5, None, 10), slice(5, None, 10)) >>> ar[g] = 1 >>> ar.sum() 8.0
doc_3159
Computes the bounds of a window. Parameters num_values:int, default 0 number of values that will be aggregated over window_size:int, default 0 the number of rows in a window min_periods:int, default None min_periods passed from the top level rolling API center:bool, default None center passed from the top level rolling API closed:str, default None closed passed from the top level rolling API win_type:str, default None win_type passed from the top level rolling API Returns A tuple of ndarray[int64]s, indicating the boundaries of each window
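A sketch of the contract using pandas' FixedForwardWindowIndexer as an example subclass (exact keyword defaults may vary across pandas versions):
from pandas.api.indexers import FixedForwardWindowIndexer
indexer = FixedForwardWindowIndexer(window_size=2)
start, end = indexer.get_window_bounds(num_values=4)
print(start)  # [0 1 2 3]
print(end)    # [2 3 4 4] -- each window looks forward, clipped at num_values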
doc_3160
Bases: matplotlib.patches.Patch Wedge shaped patch. A wedge centered at x, y center with radius r that sweeps theta1 to theta2 (in degrees). If width is given, then a partial wedge is drawn from inner radius r - width to outer radius r. Valid keyword arguments are: Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha unknown animated bool antialiased or aa bool or None capstyle CapStyle or {'butt', 'projecting', 'round'} clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None color color edgecolor or ec color or None facecolor or fc color or None figure Figure fill bool gid str hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} in_layout bool joinstyle JoinStyle or {'miter', 'round', 'bevel'} label object linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} linewidth or lw float or None path_effects AbstractPathEffect picker None or bool or float or callable rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str visible bool zorder float get_path()[source] Return the path of this patch. set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, capstyle=<UNSET>, center=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, fill=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, radius=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, theta1=<UNSET>, theta2=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None animated bool antialiased or aa bool or None capstyle CapStyle or {'butt', 'projecting', 'round'} center unknown clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None color color edgecolor or ec color or None facecolor or fc color or None figure Figure fill bool gid str hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} in_layout bool joinstyle JoinStyle or {'miter', 'round', 'bevel'} label object linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} linewidth or lw float or None path_effects AbstractPathEffect picker None or bool or float or callable radius unknown rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None theta1 unknown theta2 unknown transform Transform url str visible bool width unknown zorder float set_center(center)[source] set_radius(radius)[source] set_theta1(theta1)[source] set_theta2(theta2)[source] set_width(width)[source] Examples using matplotlib.patches.Wedge Labeling a pie and a donut Reference for Matplotlib artists Circles, Wedges and Polygons SVG Filter Pie
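A short sketch drawing a full wedge and a partial (annular) wedge with the geometry described above:
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge
fig, ax = plt.subplots()
ax.add_patch(Wedge((0.3, 0.5), 0.25, 30, 270))              # solid wedge from theta1 to theta2
ax.add_patch(Wedge((0.7, 0.5), 0.25, 30, 270, width=0.08))  # annulus from r - width to r
ax.set_aspect("equal")
plt.show()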
doc_3161
tf.experimental.numpy.array( val, dtype=None, copy=True, ndmin=0 ) Since Tensors are immutable, a copy is made only if val is placed on a different device than the current one. Even if copy is False, a new Tensor may need to be built to satisfy dtype and ndmin. This is used only if val is an ndarray or a Tensor. See the NumPy documentation for numpy.array.
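For instance, a minimal sketch (requires TensorFlow's tf.experimental.numpy module):
import tensorflow.experimental.numpy as tnp
a = tnp.array([1, 2, 3], dtype=tnp.float32, ndmin=2)  # ndmin pads leading dimensions
print(a.shape)  # (1, 3)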
doc_3162
See Migration guide for more details. tf.compat.v1.raw_ops.ModelDataset tf.raw_ops.ModelDataset( input_dataset, output_types, output_shapes, algorithm=0, cpu_budget=0, ram_budget=0, name=None ) Identity transformation that models performance. Args input_dataset A Tensor of type variant. A variant tensor representing the input dataset. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. algorithm An optional int. Defaults to 0. cpu_budget An optional int. Defaults to 0. ram_budget An optional int. Defaults to 0. name A name for the operation (optional). Returns A Tensor of type variant.
doc_3163
Multiplies input by 2 ** other. \(\text{out}_i = \text{input}_i * 2^{\text{other}_i}\) Typically this function is used to construct floating point numbers by multiplying mantissas in input with integral powers of two created from the exponents in other. Parameters input (Tensor) – the input tensor. other (Tensor) – a tensor of exponents, typically integers. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> torch.ldexp(torch.tensor([1.]), torch.tensor([1])) tensor([2.]) >>> torch.ldexp(torch.tensor([1.0]), torch.tensor([1, 2, 3, 4])) tensor([ 2., 4., 8., 16.])
doc_3164
Return True if date is first day of the year. Examples >>> ts = pd.Timestamp(2020, 3, 14) >>> ts.is_year_start False >>> ts = pd.Timestamp(2020, 1, 1) >>> ts.is_year_start True
doc_3165
Test whether two objects contain the same elements. This function allows two Series or DataFrames to be compared against each other to see if they have the same shape and elements. NaNs in the same location are considered equal. The row/column index does not need to have the same type, as long as the values are considered equal. Corresponding columns must be of the same dtype. Parameters other:Series or DataFrame The other Series or DataFrame to be compared with the first. Returns bool True if all elements are the same in both objects, False otherwise. See also Series.eq Compare two Series objects of the same length and return a Series where each element is True if the element in each Series is equal, False otherwise. DataFrame.eq Compare two DataFrame objects of the same shape and return a DataFrame where each element is True if the respective element in each DataFrame is equal, False otherwise. testing.assert_series_equal Raises an AssertionError if left and right are not equal. Provides an easy interface to ignore inequality in dtypes, indexes and precision among others. testing.assert_frame_equal Like assert_series_equal, but targets DataFrames. numpy.array_equal Return True if two arrays have the same shape and elements, False otherwise. Examples >>> df = pd.DataFrame({1: [10], 2: [20]}) >>> df 1 2 0 10 20 DataFrames df and exactly_equal have the same types and values for their elements and column labels, which will return True. >>> exactly_equal = pd.DataFrame({1: [10], 2: [20]}) >>> exactly_equal 1 2 0 10 20 >>> df.equals(exactly_equal) True DataFrames df and different_column_type have the same element types and values, but have different types for the column labels, which will still return True. >>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]}) >>> different_column_type 1.0 2.0 0 10 20 >>> df.equals(different_column_type) True DataFrames df and different_data_type have different types for the same values for their elements, and will return False even though their column labels are the same values and types. >>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]}) >>> different_data_type 1 2 0 10.0 20.0 >>> df.equals(different_data_type) False
doc_3166
Redraw plot.
doc_3167
A compiled regular expression object used to match settings and request.META values considered as sensitive. By default equivalent to: import re re.compile(r'API|TOKEN|KEY|SECRET|PASS|SIGNATURE', flags=re.IGNORECASE)
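A sketch of widening the pattern, assuming Django 3.1+, where the regex lives on SafeExceptionReporterFilter.hidden_settings and a custom filter is activated through the DEFAULT_EXCEPTION_REPORTER_FILTER setting (the module path is hypothetical):

# myapp/filters.py
import re
from django.views.debug import SafeExceptionReporterFilter

class CustomExceptionReporterFilter(SafeExceptionReporterFilter):
    # Also treat anything containing DATABASE_URL as sensitive.
    hidden_settings = re.compile(
        r'API|TOKEN|KEY|SECRET|PASS|SIGNATURE|DATABASE_URL', flags=re.IGNORECASE
    )

# settings.py
DEFAULT_EXCEPTION_REPORTER_FILTER = 'myapp.filters.CustomExceptionReporterFilter'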
doc_3168
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
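For example, with a scikit-learn Pipeline the nested <component>__<parameter> syntax updates parameters of an inner step:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC())])
# Reach into the nested 'svc' step and update two of its parameters.
pipe.set_params(svc__C=10.0, svc__kernel='linear')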
doc_3169
Return obj.method with a deprecation if it was overridden, else None. Parameters method An unbound method, i.e. an expression of the form Class.method_name. Remember that within the body of a method, one can always use __class__ to refer to the class that is currently being defined. obj Either an object of the class where method is defined, or a subclass of that class. allow_emptybool, default: False Whether to allow overrides by "empty" methods without emitting a warning. **kwargs Additional parameters passed to warn_deprecated to generate the deprecation warning; must at least include the "since" key.
doc_3170
Loads a template with the given name, compiles it and returns a Template object.
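Assuming this describes a Django-style template loader, usage looks roughly like the following sketch (the template name and context are hypothetical):

from django.template import loader

template = loader.get_template('index.html')  # hypothetical template name
html = template.render({'user': 'alice'})     # render with a context dict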
doc_3171
Scale image by a certain factor. Performs interpolation to up-scale or down-scale N-dimensional images. Note that anti-aliasing should be enabled when down-sizing images to avoid aliasing artifacts. For down-sampling with an integer factor also see skimage.transform.downscale_local_mean. Parameters imagendarray Input image. scale{float, tuple of floats} Scale factors. Separate scale factors can be defined as (rows, cols[, …][, dim]). Returns scaledndarray Scaled version of the input. Other Parameters orderint, optional The order of the spline interpolation, default is 0 if image.dtype is bool and 1 otherwise. The order has to be in the range 0-5. See skimage.transform.warp for detail. mode{‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}, optional Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad. cvalfloat, optional Used in conjunction with mode ‘constant’, the value outside the image boundaries. clipbool, optional Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range. preserve_rangebool, optional Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float. Also see https://scikit-image.org/docs/dev/user_guide/data_types.html multichannelbool, optional Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. anti_aliasingbool, optional Whether to apply a Gaussian filter to smooth the image prior to down-scaling. It is crucial to filter when down-sampling the image to avoid aliasing artifacts. If input image data type is bool, no anti-aliasing is applied. anti_aliasing_sigma{float, tuple of floats}, optional Standard deviation for Gaussian filtering to avoid aliasing artifacts. By default, this value is chosen as (s - 1) / 2 where s is the down-scaling factor. Notes Modes ‘reflect’ and ‘symmetric’ are similar, but differ in whether the edge pixels are duplicated during the reflection. As an example, if an array has values [0, 1, 2] and was padded to the right by four values using symmetric, the result would be [0, 1, 2, 2, 1, 0, 0], while for reflect it would be [0, 1, 2, 1, 0, 1, 2]. Examples >>> from skimage import data >>> from skimage.transform import rescale >>> image = data.camera() >>> rescale(image, 0.1).shape (51, 51) >>> rescale(image, 0.5).shape (256, 256)
doc_3172
Evaluate an Hermite series at points x. If c is of length n + 1, this function returns the value: \[p(x) = c_0 * H_0(x) + c_1 * H_1(x) + ... + c_n * H_n(x)\] The parameter x is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either x or its elements must support multiplication and addition both with themselves and with the elements of c. If c is a 1-D array, then p(x) will have the same shape as x. If c is multidimensional, then the shape of the result depends on the value of tensor. If tensor is true the shape will be c.shape[1:] + x.shape. If tensor is false the shape will be c.shape[1:]. Note that scalars have shape (,). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters xarray_like, compatible object If x is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, x or its elements must support addition and multiplication with themselves and with the elements of c. carray_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If c is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of c. tensorboolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of x. Scalars have dimension 0 for this action. The result is that every column of coefficients in c is evaluated for every element of x. If False, x is broadcast over the columns of c for the evaluation. This keyword is useful when c is multidimensional. The default value is True. New in version 1.7.0. Returns valuesndarray, algebra_like The shape of the return value is described above. See also hermval2d, hermgrid2d, hermval3d, hermgrid3d Notes The evaluation uses Clenshaw recursion, aka synthetic division. Examples >>> from numpy.polynomial.hermite import hermval >>> coef = [1,2,3] >>> hermval(1, coef) 11.0 >>> hermval([[1,2],[3,4]], coef) array([[ 11., 51.], [115., 203.]])
doc_3173
Optional. Either a method or attribute. If it’s a method, it should take one argument – an object as returned by items() – and return that object’s change frequency as a string. If it’s an attribute, its value should be a string representing the change frequency of every object returned by items(). Possible values for changefreq, whether you use a method or attribute, are: 'always' 'hourly' 'daily' 'weekly' 'monthly' 'yearly' 'never'
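Both forms in one sketch; BlogPost and its fields are hypothetical:

from django.contrib.sitemaps import Sitemap
from myapp.models import BlogPost  # hypothetical model

class BlogSitemap(Sitemap):
    changefreq = 'weekly'  # attribute form: one frequency for every item

    def items(self):
        return BlogPost.objects.all()

class NewsSitemap(Sitemap):
    def items(self):
        return BlogPost.objects.filter(is_news=True)  # hypothetical field

    def changefreq(self, obj):
        # Method form: a per-object frequency.
        return 'hourly' if obj.is_breaking else 'daily'  # hypothetical field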
doc_3174
Bases: tornado.web.RequestHandler get(fignum)[source]
doc_3175
Bases: object Convert strings to dvi files using TeX, caching the results to a directory. Repeated calls to this constructor always return the same instance. propertyfont_families[source] propertyfont_family[source] propertyfont_info[source] get_basefile(tex, fontsize, dpi=None)[source] Return a filename based on a hash of the string, fontsize, and dpi. get_custom_preamble()[source] Return a string containing user additions to the tex preamble. get_font_config()[source] get_font_preamble()[source] Return a string containing font configuration for the tex preamble. get_grey(tex, fontsize=None, dpi=None)[source] Return the alpha channel. get_rgba(tex, fontsize=None, dpi=None, rgb=(0, 0, 0))[source] Return latex's rendering of the tex string as an rgba array. Examples >>> texmanager = TexManager() >>> s = r"\TeX\ is $\displaystyle\sum_n\frac{-e^{i\pi}}{2^n}$!" >>> Z = texmanager.get_rgba(s, fontsize=12, dpi=80, rgb=(1, 0, 0)) get_text_width_height_descent(tex, fontsize, renderer=None)[source] Return width, height and descent of the text. propertygrey_arrayd[source] make_dvi(tex, fontsize)[source] Generate a dvi file containing latex's layout of tex string. Return the file name. make_png(tex, fontsize, dpi)[source] Generate a png file containing latex's rendering of tex string. Return the file name. make_tex(tex, fontsize)[source] Generate a tex file to render the tex string at a specific font size. Return the file name. texcache='/home/elliott/.cache/matplotlib/tex.cache'
doc_3176
Return a shallow copy of the set.
doc_3177
Construct an open mesh from multiple sequences. This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions. Using ix_ one can quickly construct index arrays that will index the cross product. a[np.ix_([1,3],[2,5])] returns the array [[a[1,2] a[1,5]], [a[3,2] a[3,5]]]. Parameters args1-D sequences Each sequence should be of integer or boolean type. Boolean sequences will be interpreted as boolean masks for the corresponding dimension (equivalent to passing in np.nonzero(boolean_sequence)). Returns outtuple of ndarrays N arrays with N dimensions each, with N the number of input sequences. Together these arrays form an open mesh. See also ogrid, mgrid, meshgrid Examples >>> a = np.arange(10).reshape(2, 5) >>> a array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> ixgrid = np.ix_([0, 1], [2, 4]) >>> ixgrid (array([[0], [1]]), array([[2, 4]])) >>> ixgrid[0].shape, ixgrid[1].shape ((2, 1), (1, 2)) >>> a[ixgrid] array([[2, 4], [7, 9]]) >>> ixgrid = np.ix_([True, True], [2, 4]) >>> a[ixgrid] array([[2, 4], [7, 9]]) >>> ixgrid = np.ix_([True, True], [False, False, True, False, True]) >>> a[ixgrid] array([[2, 4], [7, 9]])
doc_3178
PKZIP version needed to extract archive.
doc_3179
This class is normally only used if more precise control over profiling is needed than what the cProfile.run() function provides. A custom timer can be supplied for measuring how long code takes to run via the timer argument. This must be a function that returns a single number representing the current time. If the number is an integer, the timeunit specifies a multiplier that specifies the duration of each unit of time. For example, if the timer returns times measured in thousands of seconds, the time unit would be .001. Directly using the Profile class allows formatting profile results without writing the profile data to a file: import cProfile, pstats, io from pstats import SortKey pr = cProfile.Profile() pr.enable() # ... do something ... pr.disable() s = io.StringIO() sortby = SortKey.CUMULATIVE ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) The Profile class can also be used as a context manager (supported only in the cProfile module; see Context Manager Types): import cProfile with cProfile.Profile() as pr: # ... do something ... pr.print_stats() Changed in version 3.8: Added context manager support. enable() Start collecting profiling data. Only in cProfile. disable() Stop collecting profiling data. Only in cProfile. create_stats() Stop collecting profiling data and record the results internally as the current profile. print_stats(sort=-1) Create a Stats object based on the current profile and print the results to stdout. dump_stats(filename) Write the results of the current profile to filename. run(cmd) Profile the cmd via exec(). runctx(cmd, globals, locals) Profile the cmd via exec() with the specified global and local environment. runcall(func, /, *args, **kwargs) Profile func(*args, **kwargs)
doc_3180
Check whether the provided array or dtype is of the string dtype. Parameters arr_or_dtype:array-like or dtype The array or dtype to check. Returns boolean Whether or not the array or dtype is of the string dtype. Examples >>> is_string_dtype(str) True >>> is_string_dtype(object) True >>> is_string_dtype(int) False >>> is_string_dtype(np.array(['a', 'b'])) True >>> is_string_dtype(pd.Series([1, 2])) False
doc_3181
Set tick locations. Parameters tickslist of floats List of tick locations. labelslist of str, optional List of tick labels. If not set, the labels show the data value. minorbool, default: False If False, set the major ticks; if True, the minor ticks. **kwargs Text properties for the labels. These take effect only if you pass labels. In other cases, please use tick_params.
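A minimal example, assuming Matplotlib 3.5+ where the labels argument exists as documented above:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
# Place major ticks at the data points with custom labels.
ax.xaxis.set_ticks([0, 1, 2], labels=['start', 'mid', 'end'])
plt.show()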
doc_3182
Figures out the full host name for the given domain part. The domain part is a subdomain when host matching is disabled, or a full host name when it is enabled. Parameters domain_part (Optional[str]) – Return type str
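Assuming this is werkzeug.routing.MapAdapter.get_host, a sketch with host matching disabled:

from werkzeug.routing import Map, Rule

url_map = Map([Rule('/', endpoint='index')])
adapter = url_map.bind('example.com', subdomain='api')
print(adapter.get_host(None))   # 'api.example.com' (falls back to the bound subdomain)
print(adapter.get_host('www'))  # 'www.example.com'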
doc_3183
Abstract base class for structures in native byte order. Concrete structure and union types must be created by subclassing one of these types, and at least define a _fields_ class variable. ctypes will create descriptors which allow reading and writing the fields by direct attribute accesses. The relevant class variables are: _fields_ A sequence defining the structure fields. The items must be 2-tuples or 3-tuples. The first item is the name of the field, the second item specifies the type of the field; it can be any ctypes data type. For integer type fields like c_int, a third optional item can be given. It must be a small positive integer defining the bit width of the field. Field names must be unique within one structure or union. This is not checked; when names are repeated, only one field can be accessed. It is possible to define the _fields_ class variable after the class statement that defines the Structure subclass; this allows creating data types that directly or indirectly reference themselves: class List(Structure): pass List._fields_ = [("pnext", POINTER(List)), ... ] The _fields_ class variable must, however, be defined before the type is first used (an instance is created, sizeof() is called on it, and so on). Later assignments to the _fields_ class variable will raise an AttributeError. It is possible to define sub-subclasses of structure types; they inherit the fields of the base class plus the _fields_ defined in the sub-subclass, if any. _pack_ An optional small integer that allows overriding the alignment of structure fields in the instance. _pack_ must already be defined when _fields_ is assigned, otherwise it will have no effect. _anonymous_ An optional sequence that lists the names of unnamed (anonymous) fields. _anonymous_ must be already defined when _fields_ is assigned, otherwise it will have no effect. The fields listed in this variable must be structure or union type fields. ctypes will create descriptors in the structure type that allow accessing the nested fields directly, without the need to create the structure or union field. Here is an example type (Windows): class _U(Union): _fields_ = [("lptdesc", POINTER(TYPEDESC)), ("lpadesc", POINTER(ARRAYDESC)), ("hreftype", HREFTYPE)] class TYPEDESC(Structure): _anonymous_ = ("u",) _fields_ = [("u", _U), ("vt", VARTYPE)] The TYPEDESC structure describes a COM data type; the vt field specifies which one of the union fields is valid. Since the u field is defined as an anonymous field, it is now possible to access the members directly off the TYPEDESC instance. td.lptdesc and td.u.lptdesc are equivalent, but the former is faster since it does not need to create a temporary union instance: td = TYPEDESC() td.vt = VT_PTR td.lptdesc = POINTER(some_type) td.u.lptdesc = POINTER(some_type) It is possible to define sub-subclasses of structures; they inherit the fields of the base class. If the subclass definition has a separate _fields_ variable, the fields specified in this are appended to the fields of the base class. Structure and union constructors accept both positional and keyword arguments. Positional arguments are used to initialize member fields in the same order as they appear in _fields_. Keyword arguments in the constructor are interpreted as attribute assignments, so they will initialize _fields_ with the same name, or create new attributes for names not present in _fields_.
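A self-contained example tying the pieces together (the Point type is illustrative):

import ctypes

class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int),
                ("y", ctypes.c_int)]

p = Point(1, y=2)            # positional and keyword initialization
p.x = 10                     # descriptor-based attribute access
print(p.x, p.y)              # 10 2
print(ctypes.sizeof(Point))  # platform-dependent; typically 8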
doc_3184
Add a non-resampled image to the figure. The image is attached to the lower or upper left corner depending on origin. Parameters X The image data. This is an array of one of the following shapes: MxN: luminance (grayscale) values MxNx3: RGB values MxNx4: RGBA values xo, yoint The x/y image offset in pixels. alphaNone or float The alpha blending value. normmatplotlib.colors.Normalize A Normalize instance to map the luminance to the interval [0, 1]. cmapstr or matplotlib.colors.Colormap, default: rcParams["image.cmap"] (default: 'viridis') The colormap to use. vmin, vmaxfloat If norm is not given, these values set the data limits for the colormap. origin{'upper', 'lower'}, default: rcParams["image.origin"] (default: 'upper') Indicates where the [0, 0] index of the array is in the upper left or lower left corner of the axes. resizebool If True, resize the figure to match the given image size. Returns matplotlib.image.FigureImage Other Parameters **kwargs Additional kwargs are Artist kwargs passed on to FigureImage. Notes figimage complements the Axes image (imshow) which will be resampled to fit the current Axes. If you want a resampled image to fill the entire figure, you can define an Axes with extent [0, 0, 1, 1]. Examples f = plt.figure() nx = int(f.get_figwidth() * f.dpi) ny = int(f.get_figheight() * f.dpi) data = np.random.random((ny, nx)) f.figimage(data) plt.show()
doc_3185
Bases: matplotlib.backends.backend_webagg_core.FigureCanvasWebAggCore
doc_3186
calculates the angle to a given vector in degrees. angle_to(Vector2) -> float Returns the angle between self and the given vector.
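For example:

import pygame

v = pygame.math.Vector2(1, 0)
w = pygame.math.Vector2(0, 1)
print(v.angle_to(w))  # 90.0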
doc_3187
Computes the solution to the least squares and least norm problems for a full rank matrix \(A\) of size \((m \times n)\) and a matrix \(B\) of size \((m \times k)\). If \(m \geq n\), lstsq() solves the least-squares problem: \[\min_X \|AX-B\|_2.\] If \(m < n\), lstsq() solves the least-norm problem: \[\min_X \|X\|_2 \quad \text{subject to} \quad AX = B.\] The returned tensor \(X\) has shape \((\max(m, n) \times k)\). The first \(n\) rows of \(X\) contain the solution. If \(m \geq n\), the residual sum of squares for the solution in each column is given by the sum of squares of elements in the remaining \(m - n\) rows of that column. Note The case when \(m < n\) is not supported on the GPU. Parameters input (Tensor) – the matrix \(B\) A (Tensor) – the \(m\) by \(n\) matrix \(A\) Keyword Arguments out (tuple, optional) – the optional destination tensor Returns A namedtuple (solution, QR) containing: solution (Tensor): the least squares solution QR (Tensor): the details of the QR factorization Return type (Tensor, Tensor) Note The returned matrices will always be transposed, irrespective of the strides of the input matrices. That is, they will have stride (1, m) instead of (m, 1). Example: >>> A = torch.tensor([[1., 1, 1], ... [2, 3, 4], ... [3, 5, 2], ... [4, 2, 5], ... [5, 4, 3]]) >>> B = torch.tensor([[-10., -3], ... [ 12, 14], ... [ 14, 12], ... [ 16, 16], ... [ 18, 16]]) >>> X, _ = torch.lstsq(B, A) >>> X tensor([[ 2.0000, 1.0000], [ 1.0000, 1.0000], [ 1.0000, 2.0000], [ 10.9635, 4.8501], [ 8.9332, 5.2418]])
doc_3188
Index Attribute Meaning
0 sp_namp Login name
1 sp_pwdp Encrypted password
2 sp_lstchg Date of last change
3 sp_min Minimal number of days between changes
4 sp_max Maximum number of days between changes
5 sp_warn Number of days before password expires to warn user about it
6 sp_inact Number of days after password expires until account is disabled
7 sp_expire Number of days since 1970-01-01 when account expires
8 sp_flag Reserved
The sp_namp and sp_pwdp items are strings, all others are integers. KeyError is raised if the entry asked for cannot be found. The following functions are defined: spwd.getspnam(name) Return the shadow password database entry for the given user name. Changed in version 3.6: Raises a PermissionError instead of KeyError if the user doesn’t have privileges. spwd.getspall() Return a list of all available shadow password database entries, in arbitrary order. See also Module grp An interface to the group database, similar to this. Module pwd An interface to the normal password database, similar to this.
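A short sketch; reading the shadow database requires sufficient privileges (typically root), and the module is Unix-only:

import spwd

entry = spwd.getspnam('root')
print(entry.sp_namp)    # 'root'
print(entry.sp_lstchg)  # days since 1970-01-01 of the last password change

for e in spwd.getspall():
    print(e.sp_namp)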
doc_3189
A quantized linear module with quantized tensor as inputs and outputs. We adopt the same interface as torch.nn.Linear; please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation. Similar to Linear, attributes will be randomly initialized at module creation time and will be overwritten later. Variables weight (Tensor) – the non-learnable quantized weights of the module of shape \((\text{out\_features}, \text{in\_features})\). bias (Tensor) – the non-learnable bias of the module of shape \((\text{out\_features})\). If bias is True, the values are initialized to zero. scale – scale parameter of output Quantized Tensor, type: double zero_point – zero_point parameter for output Quantized Tensor, type: long Examples: >>> m = nn.quantized.Linear(20, 30) >>> input = torch.randn(128, 20) >>> input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8) >>> output = m(input) >>> print(output.size()) torch.Size([128, 30]) classmethod from_float(mod) [source] Create a quantized module from a float module or qparams_dict Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
doc_3190
Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Parameters afloat or array_like of floats Parameter of the distribution. Must be non-negative. sizeint or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if a is a scalar. Otherwise, np.array(a).size samples are drawn. Returns outndarray or scalar Drawn samples from the parameterized power distribution. Raises ValueError If a <= 0. Notes The probability density function is \[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. It is used, for example, in modeling the over-reporting of insurance claims. References 1 Christian Kleiber, Samuel Kotz, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. 2 Heckert, N. A. and Filliben, James J. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf Examples Draw samples from the distribution: >>> rng = np.random.default_rng() >>> a = 5. # shape >>> samples = 1000 >>> s = rng.power(a, samples) Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() Compare the power function distribution to the inverse of the Pareto. >>> from scipy import stats >>> rvs = rng.power(5, 1000000) >>> rvsp = rng.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('power(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + Generator.pareto(5)') >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)')
doc_3191
Called when the mouse moves during a pan operation. Parameters buttonMouseButton The pressed mouse button. keystr or None The pressed key, if any. x, yfloat The mouse coordinates in display coords. Notes This is intended to be overridden by new projection types.
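A sketch of the override pattern; the projection name is hypothetical. Passing key='x' to the base implementation restricts the pan to the x direction:

import matplotlib.pyplot as plt
from matplotlib.axes import Axes
from matplotlib.projections import register_projection

class XOnlyPanAxes(Axes):
    name = 'x_only_pan'  # hypothetical projection name

    def drag_pan(self, button, key, x, y):
        # Delegate to the stock implementation, forcing x-only panning.
        super().drag_pan(button, 'x', x, y)

register_projection(XOnlyPanAxes)
fig, ax = plt.subplots(subplot_kw={'projection': 'x_only_pan'})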
doc_3192
Data-type of the array’s elements. Parameters None Returns dnumpy dtype object See also numpy.dtype Examples >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <type 'numpy.dtype'>
doc_3193
See Migration guide for more details. tf.compat.v1.raw_ops.Assert tf.raw_ops.Assert( condition, data, summarize=3, name=None ) If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print. Args condition A Tensor of type bool. The condition to evaluate. data A list of Tensor objects. The tensors to print out when condition is false. summarize An optional int. Defaults to 3. Print this many entries of each tensor. name A name for the operation (optional). Returns The created Operation.
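A short sketch in eager mode; the op does nothing when the condition holds and raises tf.errors.InvalidArgumentError otherwise:

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
# Passes silently: every entry is positive.
tf.raw_ops.Assert(condition=tf.reduce_all(x > 0), data=[x], summarize=3)
# tf.raw_ops.Assert(condition=tf.reduce_all(x > 5), data=[x])  # would raise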
doc_3194
The inverse of format_datetime(). Performs the same function as parsedate(), but on success returns a datetime. If the input date has a timezone of -0000, the datetime will be a naive datetime, and if the date is conforming to the RFCs it will represent a time in UTC but with no indication of the actual source timezone of the message the date comes from. If the input date has any other valid timezone offset, the datetime will be an aware datetime with the corresponding timezone tzinfo. New in version 3.3.
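For example:

from email.utils import parsedate_to_datetime

naive = parsedate_to_datetime('Fri, 09 Nov 2001 01:08:47 -0000')
print(naive.tzinfo)  # None: -0000 means UTC with unknown source timezone

aware = parsedate_to_datetime('Fri, 09 Nov 2001 01:08:47 +0530')
print(aware.tzinfo)  # timezone(timedelta(seconds=19800))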
doc_3195
Singular Value Decomposition. When a is a 2D array, it is factorized as u @ np.diag(s) @ vh = (u * s) @ vh, where u and vh are 2D unitary arrays and s is a 1D array of a’s singular values. When a is higher-dimensional, SVD is applied in stacked mode as explained below. Parameters a(…, M, N) array_like A real or complex array with a.ndim >= 2. full_matricesbool, optional If True (default), u and vh have the shapes (..., M, M) and (..., N, N), respectively. Otherwise, the shapes are (..., M, K) and (..., K, N), respectively, where K = min(M, N). compute_uvbool, optional Whether or not to compute u and vh in addition to s. True by default. hermitianbool, optional If True, a is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.17.0. Returns u{ (…, M, M), (…, M, K) } array Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True. s(…, K) array Vector(s) with the singular values, within each vector sorted in descending order. The first a.ndim - 2 dimensions have the same size as those of the input a. vh{ (…, N, N), (…, K, N) } array Unitary array(s). The first a.ndim - 2 dimensions have the same size as those of the input a. The size of the last two dimensions depends on the value of full_matrices. Only returned when compute_uv is True. Raises LinAlgError If SVD computation does not converge. See also scipy.linalg.svd Similar function in SciPy. scipy.linalg.svdvals Compute singular values of a matrix. Notes Changed in version 1.8.0: Broadcasting rules apply, see the numpy.linalg documentation for details. The decomposition is performed using LAPACK routine _gesdd. SVD is usually described for the factorization of a 2D matrix \(A\). The higher-dimensional case will be discussed below. In the 2D case, SVD is written as \(A = U S V^H\), where \(A = a\), \(U= u\), \(S= \mathtt{np.diag}(s)\) and \(V^H = vh\). The 1D array s contains the singular values of a and u and vh are unitary. The rows of vh are the eigenvectors of \(A^H A\) and the columns of u are the eigenvectors of \(A A^H\). In both cases the corresponding (possibly non-zero) eigenvalues are given by s**2. If a has more than two dimensions, then broadcasting rules apply, as explained in Linear algebra on several matrices at once. This means that SVD is working in “stacked” mode: it iterates over all indices of the first a.ndim - 2 dimensions and for each combination SVD is applied to the last two indices. The matrix a can be reconstructed from the decomposition with either (u * s[..., None, :]) @ vh or u @ (s[..., None] * vh). (The @ operator can be replaced by the function np.matmul for python versions below 3.5.) If a is a matrix object (as opposed to an ndarray), then so are all the return values. 
Examples >>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6) >>> b = np.random.randn(2, 7, 8, 3) + 1j*np.random.randn(2, 7, 8, 3) Reconstruction based on full SVD, 2D case: >>> u, s, vh = np.linalg.svd(a, full_matrices=True) >>> u.shape, s.shape, vh.shape ((9, 9), (6,), (6, 6)) >>> np.allclose(a, np.dot(u[:, :6] * s, vh)) True >>> smat = np.zeros((9, 6), dtype=complex) >>> smat[:6, :6] = np.diag(s) >>> np.allclose(a, np.dot(u, np.dot(smat, vh))) True Reconstruction based on reduced SVD, 2D case: >>> u, s, vh = np.linalg.svd(a, full_matrices=False) >>> u.shape, s.shape, vh.shape ((9, 6), (6,), (6, 6)) >>> np.allclose(a, np.dot(u * s, vh)) True >>> smat = np.diag(s) >>> np.allclose(a, np.dot(u, np.dot(smat, vh))) True Reconstruction based on full SVD, 4D case: >>> u, s, vh = np.linalg.svd(b, full_matrices=True) >>> u.shape, s.shape, vh.shape ((2, 7, 8, 8), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(u[..., :3] * s[..., None, :], vh)) True >>> np.allclose(b, np.matmul(u[..., :3], s[..., None] * vh)) True Reconstruction based on reduced SVD, 4D case: >>> u, s, vh = np.linalg.svd(b, full_matrices=False) >>> u.shape, s.shape, vh.shape ((2, 7, 8, 3), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(u * s[..., None, :], vh)) True >>> np.allclose(b, np.matmul(u, s[..., None] * vh)) True
doc_3196
Init file The following functions relate to the init file and user configuration: readline.parse_and_bind(string) Execute the init line provided in the string argument. This calls rl_parse_and_bind() in the underlying library. readline.read_init_file([filename]) Execute a readline initialization file. The default filename is the last filename used. This calls rl_read_init_file() in the underlying library. Line buffer The following functions operate on the line buffer: readline.get_line_buffer() Return the current contents of the line buffer (rl_line_buffer in the underlying library). readline.insert_text(string) Insert text into the line buffer at the cursor position. This calls rl_insert_text() in the underlying library, but ignores the return value. readline.redisplay() Change what’s displayed on the screen to reflect the current contents of the line buffer. This calls rl_redisplay() in the underlying library. History file The following functions operate on a history file: readline.read_history_file([filename]) Load a readline history file, and append it to the history list. The default filename is ~/.history. This calls read_history() in the underlying library. readline.write_history_file([filename]) Save the history list to a readline history file, overwriting any existing file. The default filename is ~/.history. This calls write_history() in the underlying library. readline.append_history_file(nelements[, filename]) Append the last nelements items of history to a file. The default filename is ~/.history. The file must already exist. This calls append_history() in the underlying library. This function only exists if Python was compiled for a version of the library that supports it. New in version 3.5. readline.get_history_length() readline.set_history_length(length) Set or return the desired number of lines to save in the history file. The write_history_file() function uses this value to truncate the history file, by calling history_truncate_file() in the underlying library. Negative values imply unlimited history file size. History list The following functions operate on a global history list: readline.clear_history() Clear the current history. This calls clear_history() in the underlying library. The Python function only exists if Python was compiled for a version of the library that supports it. readline.get_current_history_length() Return the number of items currently in the history. (This is different from get_history_length(), which returns the maximum number of lines that will be written to a history file.) readline.get_history_item(index) Return the current contents of history item at index. The item index is one-based. This calls history_get() in the underlying library. readline.remove_history_item(pos) Remove history item specified by its position from the history. The position is zero-based. This calls remove_history() in the underlying library. readline.replace_history_item(pos, line) Replace history item specified by its position with line. The position is zero-based. This calls replace_history_entry() in the underlying library. readline.add_history(line) Append line to the history buffer, as if it was the last line typed. This calls add_history() in the underlying library. readline.set_auto_history(enabled) Enable or disable automatic calls to add_history() when reading input via readline. The enabled argument should be a Boolean value that when true, enables auto history, and that when false, disables auto history. New in version 3.6.
CPython implementation detail: Auto history is enabled by default, and changes to this do not persist across multiple sessions. Startup hooks readline.set_startup_hook([function]) Set or remove the function invoked by the rl_startup_hook callback of the underlying library. If function is specified, it will be used as the new hook function; if omitted or None, any function already installed is removed. The hook is called with no arguments just before readline prints the first prompt. readline.set_pre_input_hook([function]) Set or remove the function invoked by the rl_pre_input_hook callback of the underlying library. If function is specified, it will be used as the new hook function; if omitted or None, any function already installed is removed. The hook is called with no arguments after the first prompt has been printed and just before readline starts reading input characters. This function only exists if Python was compiled for a version of the library that supports it. Completion The following functions relate to implementing a custom word completion function. This is typically operated by the Tab key, and can suggest and automatically complete a word being typed. By default, Readline is set up to be used by rlcompleter to complete Python identifiers for the interactive interpreter. If the readline module is to be used with a custom completer, a different set of word delimiters should be set. readline.set_completer([function]) Set or remove the completer function. If function is specified, it will be used as the new completer function; if omitted or None, any completer function already installed is removed. The completer function is called as function(text, state), for state in 0, 1, 2, …, until it returns a non-string value. It should return the next possible completion starting with text. The installed completer function is invoked by the entry_func callback passed to rl_completion_matches() in the underlying library. The text string comes from the first parameter to the rl_attempted_completion_function callback of the underlying library. readline.get_completer() Get the completer function, or None if no completer function has been set. readline.get_completion_type() Get the type of completion being attempted. This returns the rl_completion_type variable in the underlying library as an integer. readline.get_begidx() readline.get_endidx() Get the beginning or ending index of the completion scope. These indexes are the start and end arguments passed to the rl_attempted_completion_function callback of the underlying library. readline.set_completer_delims(string) readline.get_completer_delims() Set or get the word delimiters for completion. These determine the start of the word to be considered for completion (the completion scope). These functions access the rl_completer_word_break_characters variable in the underlying library. readline.set_completion_display_matches_hook([function]) Set or remove the completion display function. If function is specified, it will be used as the new completion display function; if omitted or None, any completion display function already installed is removed. This sets or clears the rl_completion_display_matches_hook callback in the underlying library. The completion display function is called as function(substitution, [matches], longest_match_length) once each time matches need to be displayed. 
Example The following example demonstrates how to use the readline module’s history reading and writing functions to automatically load and save a history file named .python_history from the user’s home directory. The code below would normally be executed automatically during interactive sessions from the user’s PYTHONSTARTUP file. import atexit import os import readline histfile = os.path.join(os.path.expanduser("~"), ".python_history") try: readline.read_history_file(histfile) # default history len is -1 (infinite), which may grow unruly readline.set_history_length(1000) except FileNotFoundError: pass atexit.register(readline.write_history_file, histfile) This code is actually automatically run when Python is run in interactive mode (see Readline configuration). The following example achieves the same goal but supports concurrent interactive sessions, by only appending the new history. import atexit import os import readline histfile = os.path.join(os.path.expanduser("~"), ".python_history") try: readline.read_history_file(histfile) h_len = readline.get_current_history_length() except FileNotFoundError: open(histfile, 'wb').close() h_len = 0 def save(prev_h_len, histfile): new_h_len = readline.get_current_history_length() readline.set_history_length(1000) readline.append_history_file(new_h_len - prev_h_len, histfile) atexit.register(save, h_len, histfile) The following example extends the code.InteractiveConsole class to support history save/restore. import atexit import code import os import readline class HistoryConsole(code.InteractiveConsole): def __init__(self, locals=None, filename="<console>", histfile=os.path.expanduser("~/.console-history")): code.InteractiveConsole.__init__(self, locals, filename) self.init_history(histfile) def init_history(self, histfile): readline.parse_and_bind("tab: complete") if hasattr(readline, "read_history_file"): try: readline.read_history_file(histfile) except FileNotFoundError: pass atexit.register(self.save_history, histfile) def save_history(self, histfile): readline.set_history_length(1000) readline.write_history_file(histfile)
doc_3197
See Migration guide for more details. tf.compat.v1.raw_ops.StackV2 tf.raw_ops.StackV2( max_size, elem_type, stack_name='', name=None ) Args max_size A Tensor of type int32. The maximum size of the stack if non-negative. If negative, the stack size is unlimited. elem_type A tf.DType. The type of the elements on the stack. stack_name An optional string. Defaults to "". Overrides the name used for the temporary stack resource. Default value is the name of the 'Stack' op (which is guaranteed unique). name A name for the operation (optional). Returns A Tensor of type resource.
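A sketch pairing this op with its companion push/pop raw ops, assuming eager execution:

import tensorflow as tf

handle = tf.raw_ops.StackV2(max_size=10, elem_type=tf.float32, stack_name='demo')
tf.raw_ops.StackPushV2(handle=handle, elem=tf.constant(1.5))
top = tf.raw_ops.StackPopV2(handle=handle, elem_type=tf.float32)
print(top)  # tf.Tensor(1.5, shape=(), dtype=float32)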
doc_3198
Set the projection type. Parameters proj_type{'persp', 'ortho'}
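For example, switching a 3D Axes from the default perspective projection to orthographic:

import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_proj_type('ortho')
plt.show()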
doc_3199
The configuration dictionary as Config. This behaves exactly like a regular dictionary but supports additional methods to load a config from files.
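A sketch, assuming this refers to Flask's app.config (the file and module names are hypothetical):

from flask import Flask

app = Flask(__name__)
app.config['DEBUG'] = True                      # plain dict-style access
app.config.from_pyfile('settings.cfg')          # hypothetical config file
app.config.from_object('myapp.default_config')  # hypothetical module path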