doc_2100
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters: **params (dict) Estimator parameters. Returns: self (estimator instance) Estimator instance.
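As a quick illustration of the <component>__<parameter> form, here is a minimal sketch; the pipeline steps and values are illustrative, not taken from the text above:

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scaler", StandardScaler()), ("clf", LogisticRegression())])
# "clf__C" reaches into the "clf" step of the nested Pipeline;
# "scaler__with_mean" reaches into the "scaler" step.
pipe.set_params(clf__C=10.0, scaler__with_mean=False)

Because set_params returns the estimator instance itself, such calls can be chained or used inside parameter searches.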
doc_2101
Set the alpha value used for blending - not supported on all backends. Parameters: alpha (scalar or None) alpha must be within the 0-1 range, inclusive.
doc_2102
[<Author: Terry Gilliam>, <Author: Terry Jones>] This is a very fragile solution, as it requires the user to know an exact substring of the author’s name. A better approach could be a case-insensitive match (icontains), but this is only marginally better. A database’s more advanced comparison functions If you’re using PostgreSQL, Django provides a selection of database-specific tools to allow you to leverage more complex querying options. Other databases have different selections of tools, possibly via plugins or user-defined functions. Django doesn’t include any support for them at this time. We’ll use some examples from PostgreSQL to demonstrate the kind of functionality databases may have. Searching in other databases All of the searching tools provided by django.contrib.postgres are constructed entirely on public APIs such as custom lookups and database functions. Depending on your database, you should be able to construct queries to allow similar APIs. If there are specific things which cannot be achieved this way, please open a ticket. In the above example, we determined that a case-insensitive lookup would be more useful. When dealing with non-English names, a further improvement is to use unaccented comparison: >>> Author.objects.filter(name__unaccent__icontains='Helen') [<Author: Helen Mirren>, <Author: Helena Bonham Carter>, <Author: Hélène Joy>] This shows another issue, where we are matching against a different spelling of the name. In this case we have an asymmetry, though - a search for Helen will pick up Helena or Hélène, but not the reverse. Another option would be to use a trigram_similar comparison, which compares sequences of letters. For example: >>> Author.objects.filter(name__unaccent__lower__trigram_similar='Hélène') [<Author: Helen Mirren>, <Author: Hélène Joy>] Now we have a different problem - the longer name of “Helena Bonham Carter” doesn’t show up because it is much longer. Trigram searches consider all combinations of three letters, and compare how many appear in both search and source strings. For the longer name, there are more combinations that don’t appear in the source string, so it is no longer considered a close match. The correct choice of comparison functions here depends on your particular data set, for example the language(s) used and the type of text being searched. All of the examples we’ve seen are on short strings where the user is likely to enter something close (by varying definitions) to the source data. Document-based search Standard database operations stop being a useful approach when you start considering large blocks of text. Whereas the examples above can be thought of as operations on a string of characters, full text search looks at the actual words. Depending on the system used, it’s likely to use some of the following ideas: Ignoring “stop words” such as “a”, “the”, “and”. Stemming words, so that “pony” and “ponies” are considered similar. Weighting words based on different criteria such as how frequently they appear in the text, or the importance of the fields, such as the title or keywords, that they appear in. There are many alternatives for search software; some of the most prominent are Elastic and Solr. These are full document-based search solutions. To use them with data from Django models, you’ll need a layer which translates your data into a textual document, including back-references to the database ids. When a search using the engine returns a certain document, you can then look it up in the database.
There are a variety of third-party libraries which are designed to help with this process. PostgreSQL support PostgreSQL has its own full text search implementation built-in. While not as powerful as some other search engines, it has the advantage of being inside your database and so can easily be combined with other relational queries such as categorization. The django.contrib.postgres module provides some helpers to make these queries. For example, a query might select all the blog entries which mention “cheese”: >>> Entry.objects.filter(body_text__search='cheese') [<Entry: Cheese on Toast recipes>, <Entry: Pizza recipes>] You can also filter on a combination of fields and on related models: >>> Entry.objects.annotate( ... search=SearchVector('blog__tagline', 'body_text'), ... ).filter(search='cheese') [ <Entry: Cheese on Toast recipes>, <Entry: Pizza Recipes>, <Entry: Dairy farming in Argentina>, ] See the contrib.postgres Full text search document for complete details.
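Putting the pieces above together, a ranked full text search might look like the following sketch; it assumes the same Entry model as the examples above and the SearchQuery/SearchRank classes from django.contrib.postgres.search:

from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector

vector = SearchVector('blog__tagline', 'body_text')
query = SearchQuery('cheese')
# Annotate each entry with a relevance rank and return the best matches first.
Entry.objects.annotate(rank=SearchRank(vector, query)).order_by('-rank')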
doc_2103
Bases: matplotlib.collections.Collection A generic collection of patches. This makes it easier to assign a colormap to a heterogeneous collection of patches. This also may improve plotting speed, since PatchCollection will draw faster than a large number of patches. patches a sequence of Patch objects. This list may include a heterogeneous assortment of different patch types. match_original If True, use the colors and linewidths of the original patches. If False, new colors may be assigned by providing the standard collection arguments, facecolor, edgecolor, linewidths, norm or cmap. If any of edgecolors, facecolors, linewidths, antialiaseds are None, they default to their rcParams patch setting, in sequence form. The use of ScalarMappable functionality is optional. If the ScalarMappable matrix _A has been set (via a call to set_array), at draw time a call to scalar mappable will be made to set the face colors. add_callback(func)[source] Add a callback function that will be called whenever one of the Artist's properties changes. Parameters funccallable The callback function. It must have the signature: def func(artist: Artist) -> Any where artist is the calling Artist. Return values may exist but are ignored. Returns int The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback autoscale()[source] Autoscale the scalar limits on the norm instance using the current array autoscale_None()[source] Autoscale the scalar limits on the norm instance using the current array, changing only limits that are None propertyaxes The Axes instance the artist resides in, or None. propertycallbacksSM[source] changed()[source] Call this whenever the mappable is changed to notify all the callbackSM listeners to the 'changed' signal. colorbar The last colorbar associated with this ScalarMappable. May be None. contains(mouseevent)[source] Test whether the mouse event occurred in the collection. Returns bool, dict(ind=itemlist), where every item in itemlist contains the event. convert_xunits(x)[source] Convert x using the unit type of the xaxis. If the artist is not contained in an Axes or if the xaxis does not have units, x itself is returned. convert_yunits(y)[source] Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned. draw(renderer)[source] Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters rendererRendererBase subclass. Notes This method is overridden in the Artist subclasses. findobj(match=None, include_self=True)[source] Find artist objects. Recursively find all Artist instances contained in the artist. Parameters match A filter criterion for the matches. This can be None: Return all objects contained in artist. A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True. A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check). include_selfbool Include self in the list to be checked for a match. Returns list of Artist format_cursor_data(data)[source] Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself.
The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data get_agg_filter()[source] Return filter function to be used for agg filter. get_alpha()[source] Return the alpha value used for blending - not supported on all backends. get_animated()[source] Return whether the artist is animated. get_array()[source] Return the array of values, that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the array. get_capstyle()[source] get_children()[source] Return a list of the child Artists of this Artist. get_clim()[source] Return the values (min, max) that are mapped to the colormap limits. get_clip_box()[source] Return the clipbox. get_clip_on()[source] Return whether the artist uses clipping. get_clip_path()[source] Return the clip path. get_cmap()[source] Return the Colormap instance. get_cursor_data(event)[source] Return the cursor data for a given event. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. Cursor data can be used by Artists to provide additional context information for a given event. The default implementation just returns None. Subclasses can override the method and return arbitrary data. However, when doing so, they must ensure that format_cursor_data can convert the data to a string representation. The only current use case is displaying the z-value of an AxesImage in the status bar of a plot window, while moving the mouse. Parameters eventmatplotlib.backend_bases.MouseEvent See also format_cursor_data get_dashes()[source] Alias for get_linestyle. get_datalim(transData)[source] get_ec()[source] Alias for get_edgecolor. get_edgecolor()[source] get_edgecolors()[source] Alias for get_edgecolor. get_facecolor()[source] get_facecolors()[source] Alias for get_facecolor. get_fc()[source] Alias for get_facecolor. get_figure()[source] Return the Figure instance the artist belongs to. get_fill()[source] Return whether face is colored. get_gid()[source] Return the group id. get_hatch()[source] Return the current hatching pattern. get_in_layout()[source] Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). get_joinstyle()[source] get_label()[source] Return the label used for this artist in the legend. get_linestyle()[source] get_linestyles()[source] Alias for get_linestyle. get_linewidth()[source] get_linewidths()[source] Alias for get_linewidth. get_ls()[source] Alias for get_linestyle. get_lw()[source] Alias for get_linewidth. get_offset_transform()[source] Return the Transform instance used by this artist offset. get_offsets()[source] Return the offsets for the collection. get_path_effects()[source] get_paths()[source] get_picker()[source] Return the picking behavior of the artist. The possible values are described in set_picker. See also set_picker, pickable, pick get_pickradius()[source] get_rasterized()[source] Return whether the artist is to be rasterized. get_sketch_params()[source] Return the sketch parameters for the artist. Returns tuple or None A 3-tuple with the following elements: scale: The amplitude of the wiggle perpendicular to the source line. 
length: The length of the wiggle along the line. randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set. get_snap()[source] Return the snap setting. See set_snap for details. get_tightbbox(renderer)[source] Like Artist.get_window_extent, but includes any clipping. Parameters rendererRendererBase subclass renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer()) Returns Bbox The enclosing bounding box (in figure pixel coordinates). get_transform()[source] Return the Transform instance used by this artist. get_transformed_clip_path_and_affine()[source] Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. get_transforms()[source] get_url()[source] Return the url. get_urls()[source] Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example. get_visible()[source] Return the visibility. get_window_extent(renderer)[source] Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. get_zorder()[source] Return the artist's zorder. have_units()[source] Return whether units are set on any axis. is_transform_set()[source] Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. propertymouseover If this property is set to True, the artist will be queried for custom context information when the mouse cursor moves over it. See also get_cursor_data(), ToolCursorPosition and NavigationToolbar2. propertynorm pchanged()[source] Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback remove_callback pick(mouseevent)[source] Process a pick event. Each child artist will fire a pick event if mouseevent is over the artist and the artist has picker set. See also set_picker, get_picker, pickable pickable()[source] Return whether the artist is pickable. See also set_picker, get_picker, pick properties()[source] Return a dictionary of all the properties of the artist. remove()[source] Remove the artist from the figure if possible. The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry. remove_callback(oid)[source] Remove a callback based on its observer id.
See also add_callback set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, array=<UNSET>, capstyle=<UNSET>, clim=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, cmap=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, norm=<UNSET>, offset_transform=<UNSET>, offsets=<UNSET>, path_effects=<UNSET>, paths=<UNSET>, picker=<UNSET>, pickradius=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, urls=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha array-like or scalar or None animated bool antialiased or aa or antialiaseds bool or list of bools array array-like or None capstyle CapStyle or {'butt', 'projecting', 'round'} clim (vmin: float, vmax: float) clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None cmap Colormap or str or None color color or list of rgba tuples edgecolor or ec or edgecolors color or list of colors or 'face' facecolor or facecolors or fc color or list of colors figure Figure gid str hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} in_layout bool joinstyle JoinStyle or {'miter', 'round', 'bevel'} label object linestyle or dashes or linestyles or ls str or tuple or list thereof linewidth or linewidths or lw float or list of floats norm Normalize or None offset_transform Transform offsets (N, 2) or (2,) array-like path_effects AbstractPathEffect paths unknown picker None or bool or float or callable pickradius float rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str urls list of str or None visible bool zorder float set_aa(aa)[source] Alias for set_antialiased. set_agg_filter(filter_func)[source] Set the agg filter. Parameters filter_funccallable A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array. set_alpha(alpha)[source] Set the alpha value used for blending - not supported on all backends. Parameters alphaarray-like or scalar or None All values must be within the 0-1 range, inclusive. Masked values and nans are not supported. set_animated(b)[source] Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters bbool set_antialiased(aa)[source] Set the antialiasing state for rendering. Parameters aabool or list of bools set_antialiaseds(aa)[source] Alias for set_antialiased. set_array(A)[source] Set the value array from array-like A. Parameters Aarray-like or None The values that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the value array A. set_capstyle(cs)[source] Set the CapStyle for the collection (for all its elements). Parameters csCapStyle or {'butt', 'projecting', 'round'} set_clim(vmin=None, vmax=None)[source] Set the norm limits for image scaling. Parameters vmin, vmaxfloat The limits.
The limits may also be passed as a tuple (vmin, vmax) as a single positional argument. set_clip_box(clipbox)[source] Set the artist's clip Bbox. Parameters clipboxBbox set_clip_on(b)[source] Set whether the artist uses clipping. When False artists will be visible outside of the axes which can lead to unexpected results. Parameters bbool set_clip_path(path, transform=None)[source] Set the artist's clip path. Parameters pathPatch or Path or TransformedPath or None The clip path. If given a Path, transform must be provided as well. If None, a previously set clip path is removed. transformTransform, optional Only used if path is a Path, in which case the given Path is converted to a TransformedPath using transform. Notes For efficiency, if path is a Rectangle this method will set the clipping box to the corresponding rectangle and set the clipping path to None. For technical reasons (support of set), a tuple (path, transform) is also accepted as a single positional parameter. set_cmap(cmap)[source] Set the colormap for luminance data. Parameters cmapColormap or str or None set_color(c)[source] Set both the edgecolor and the facecolor. Parameters ccolor or list of rgba tuples See also Collection.set_facecolor, Collection.set_edgecolor For setting the edge or face color individually. set_dashes(ls)[source] Alias for set_linestyle. set_ec(c)[source] Alias for set_edgecolor. set_edgecolor(c)[source] Set the edgecolor(s) of the collection. Parameters ccolor or list of colors or 'face' The collection edgecolor(s). If a sequence, the patches cycle through it. If 'face', match the facecolor. set_edgecolors(c)[source] Alias for set_edgecolor. set_facecolor(c)[source] Set the facecolor(s) of the collection. c can be a color (all patches have same color), or a sequence of colors; if it is a sequence the patches will cycle through the sequence. If c is 'none', the patch will not be filled. Parameters ccolor or list of colors set_facecolors(c)[source] Alias for set_facecolor. set_fc(c)[source] Alias for set_facecolor. set_figure(fig)[source] Set the Figure instance the artist belongs to. Parameters figFigure set_gid(gid)[source] Set the (group) id for the artist. Parameters gidstr set_hatch(hatch)[source] Set the hatching pattern hatch can be one of: / - diagonal hatching \ - back diagonal | - vertical - - horizontal + - crossed x - crossed diagonal o - small circle O - large circle . - dots * - stars Letters can be combined, in which case all the specified hatchings are done. If same letter repeats, it increases the density of hatching of that pattern. Hatching is supported in the PostScript, PDF, SVG and Agg backends only. Unlike other properties such as linewidth and colors, hatching can only be specified for the collection as a whole, not separately for each member. Parameters hatch{'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} set_in_layout(in_layout)[source] Set if artist is to be included in layout calculations, E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters in_layoutbool set_joinstyle(js)[source] Set the JoinStyle for the collection (for all its elements). Parameters jsJoinStyle or {'miter', 'round', 'bevel'} set_label(s)[source] Set a label that will be displayed in the legend. Parameters sobject s will be converted to a string by calling str. set_linestyle(ls)[source] Set the linestyle(s) for the collection. linestyle description '-' or 'solid' solid line '--' or 'dashed' dashed line '-.' 
or 'dashdot' dash-dotted line ':' or 'dotted' dotted line Alternatively a dash tuple of the following form can be provided: (offset, onoffseq), where onoffseq is an even length tuple of on and off ink in points. Parameters lsstr or tuple or list thereof Valid values for individual linestyles include {'-', '--', '-.', ':', '', (offset, on-off-seq)}. See Line2D.set_linestyle for a complete description. set_linestyles(ls)[source] Alias for set_linestyle. set_linewidth(lw)[source] Set the linewidth(s) for the collection. lw can be a scalar or a sequence; if it is a sequence the patches will cycle through the sequence. Parameters lwfloat or list of floats set_linewidths(lw)[source] Alias for set_linewidth. set_ls(ls)[source] Alias for set_linestyle. set_lw(lw)[source] Alias for set_linewidth. set_norm(norm)[source] Set the normalization instance. Parameters normNormalize or None Notes If there are any colorbars using the mappable for this norm, setting the norm of the mappable will reset the norm, locator, and formatters on the colorbar to default. set_offset_transform(transOffset)[source] Set the artist offset transform. Parameters transOffsetTransform set_offsets(offsets)[source] Set the offsets for the collection. Parameters offsets(N, 2) or (2,) array-like set_path_effects(path_effects)[source] Set the path effects. Parameters path_effectsAbstractPathEffect set_paths(patches)[source] set_picker(picker)[source] Define the picking behavior of the artist. Parameters pickerNone or bool or float or callable This can be one of the following: None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event. A function: If picker is callable, it is a user-supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent) to determine the hit test. If the mouse event is over the artist, return hit=True and props is a dictionary of properties you want added to the PickEvent attributes. set_pickradius(pr)[source] Set the pick radius used for containment tests. Parameters prfloat Pick radius, in points. set_rasterized(rasterized)[source] Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters rasterizedbool set_sketch_params(scale=None, length=None, randomness=None)[source] Set the sketch parameters. Parameters scalefloat, optional The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided. lengthfloat, optional The length of the wiggle along the line, in pixels (default 128.0) randomnessfloat, optional The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. Using the same seed yields the same random shape.
set_snap(snap)[source] Set the snapping behavior. Snapping aligns positions with the pixel grid, which results in clearer images. For example, if a black line of 1px width was defined at a position in between two pixels, the resulting image would contain the interpolated value of that line in the pixel grid, which would be a grey value on both adjacent pixel positions. In contrast, snapping will move the line to the nearest integer pixel value, so that the resulting image will really contain a 1px wide black line. Snapping is currently only supported by the Agg and MacOSX backends. Parameters snapbool or None Possible values: True: Snap vertices to the nearest pixel center. False: Do not modify vertex positions. None: (auto) If the path contains only rectilinear line segments, round to the nearest pixel center. set_transform(t)[source] Set the artist transform. Parameters tTransform set_url(url)[source] Set the url for the artist. Parameters urlstr set_urls(urls)[source] Parameters urlslist of str or None Notes URLs are currently only implemented by the SVG backend. They are ignored by all other backends. set_visible(b)[source] Set the artist's visibility. Parameters bbool set_zorder(level)[source] Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters levelfloat propertystale Whether the artist is 'stale' and needs to be re-drawn for the output to match the internal state of the artist. propertysticky_edges x and y sticky edge lists for autoscaling. When performing autoscaling, if a data limit coincides with a value in the corresponding sticky_edges list, then no margin will be added--the view limit "sticks" to the edge. A typical use case is histograms, where one usually expects no margin on the bottom edge (0) of the histogram. Moreover, margin expansion "bumps" against sticky edges and cannot cross them. For example, if the upper data limit is 1.0, the upper view limit computed by simple margin application is 1.2, but there is a sticky edge at 1.1, then the actual upper view limit will be 1.1. This attribute cannot be assigned to; however, the x and y lists can be modified in place as needed. Examples >>> artist.sticky_edges.x[:] = (xmin, xmax) >>> artist.sticky_edges.y[:] = (ymin, ymax) to_rgba(x, alpha=None, bytes=False, norm=True)[source] Return a normalized rgba array corresponding to x. In the normal case, x is a 1D or 2D sequence of scalars, and the corresponding ndarray of rgba values will be returned, based on the norm and colormap set for this ScalarMappable. There is one special case, for handling images that are already rgb or rgba, such as might have been read from an image file. If x is an ndarray with 3 dimensions, and the last dimension is either 3 or 4, then it will be treated as an rgb or rgba array, and no mapping will be done. The array can be uint8, or it can be floating point with values in the 0-1 range; otherwise a ValueError will be raised. If it is a masked array, the mask will be ignored. If the last dimension is 3, the alpha kwarg (defaulting to 1) will be used to fill in the transparency. If the last dimension is 4, the alpha kwarg is ignored; it does not replace the pre-existing alpha. A ValueError will be raised if the third dimension is other than 3 or 4. In either case, if bytes is False (default), the rgba array will be floats in the 0-1 range; if it is True, the returned rgba array will be uint8 in the 0 to 255 range. 
If norm is False, no normalization of the input data is performed, and it is assumed to be in the range (0-1). update(props)[source] Update this artist's properties from the dict props. Parameters propsdict update_from(other)[source] Copy properties from other to self. update_scalarmappable()[source] Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate. zorder=0
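A minimal sketch of the ScalarMappable workflow described above (the patch shapes and data values are illustrative): build a PatchCollection, call set_array, and the face colors are mapped through the colormap at draw time.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Circle, Rectangle

fig, ax = plt.subplots()
patches = [Circle((0.3, 0.5), 0.1), Rectangle((0.55, 0.4), 0.2, 0.2)]
pc = PatchCollection(patches, cmap='viridis')
pc.set_array(np.array([0.2, 0.8]))  # values mapped to face colors at draw time
ax.add_collection(pc)
fig.colorbar(pc, ax=ax)
plt.show()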
doc_2104
Serialize obj as a JSON formatted stream to fp (a .write()-supporting file-like object) using this conversion table. If skipkeys is true (default: False), then dict keys that are not of a basic type (str, int, float, bool, None) will be skipped instead of raising a TypeError. The json module always produces str objects, not bytes objects. Therefore, fp.write() must support str input. If ensure_ascii is true (the default), the output is guaranteed to have all incoming non-ASCII characters escaped. If ensure_ascii is false, these characters will be output as-is. If check_circular is false (default: True), then the circular reference check for container types will be skipped and a circular reference will result in an OverflowError (or worse). If allow_nan is false (default: True), then it will be a ValueError to serialize out of range float values (nan, inf, -inf) in strict compliance with the JSON specification. If allow_nan is true, their JavaScript equivalents (NaN, Infinity, -Infinity) will be used. If indent is a non-negative integer or string, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0, negative, or "" will only insert newlines. None (the default) selects the most compact representation. Using a positive integer indent indents that many spaces per level. If indent is a string (such as "\t"), that string is used to indent each level. Changed in version 3.2: Allow strings for indent in addition to integers. If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace. Changed in version 3.4: Use (',', ': ') as default if indent is not None. If specified, default should be a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError. If not specified, TypeError is raised. If sort_keys is true (default: False), then the output of dictionaries will be sorted by key. To use a custom JSONEncoder subclass (e.g. one that overrides the default() method to serialize additional types), specify it with the cls kwarg; otherwise JSONEncoder is used. Changed in version 3.6: All optional parameters are now keyword-only. Note Unlike pickle and marshal, JSON is not a framed protocol, so trying to serialize multiple objects with repeated calls to dump() using the same fp will result in an invalid JSON file.
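A short sketch of the keyword arguments described above (the file name and payload are illustrative):

import datetime
import json

record = {'b': 2, 'a': 1, 'when': datetime.datetime(2020, 3, 14)}
with open('out.json', 'w') as fp:
    # indent=2 pretty-prints, sort_keys orders the dict keys, and default=str
    # is called for objects (here the datetime) that json cannot serialize itself.
    json.dump(record, fp, indent=2, sort_keys=True, default=str)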
doc_2105
Add a subfigure to this figure or subfigure. A subfigure has the same artist methods as a figure, and is logically the same as a figure, but cannot print itself. See Figure subfigures. Parameters: nrows, ncols (int, default: 1) Number of rows/columns of the subfigure grid. squeeze (bool, default: True) If True, extra dimensions are squeezed out from the returned array of subfigures. wspace, hspace (float, default: None) The amount of width/height reserved for space between subfigures, expressed as a fraction of the average subfigure width/height. If not given, the values will be inferred from a figure or rcParams when necessary. width_ratios (array-like of length ncols, optional) Defines the relative widths of the columns. Each column gets a relative width of width_ratios[i] / sum(width_ratios). If not given, all columns will have the same width. height_ratios (array-like of length nrows, optional) Defines the relative heights of the rows. Each row gets a relative height of height_ratios[i] / sum(height_ratios). If not given, all rows will have the same height.
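The parameter list above matches matplotlib's Figure.subfigures; a minimal sketch under that assumption:

import matplotlib.pyplot as plt

fig = plt.figure(constrained_layout=True)
# A 1x2 grid of subfigures, the left one twice as wide as the right.
left, right = fig.subfigures(1, 2, width_ratios=[2, 1])
left.suptitle('left subfigure')
axs = left.subplots(2, 1)            # subfigures create Axes just like figures
right.subplots().plot([0, 1], [0, 1])
plt.show()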
doc_2106
Set the alpha value used for blending - not supported on all backends. Parameters: alpha (array-like or scalar or None) All values must be within the 0-1 range, inclusive. Masked values and nans are not supported.
doc_2107
Convert the Timestamp to a NumPy datetime64. New in version 0.25.0. This is an alias method for Timestamp.to_datetime64(). The dtype and copy parameters are available here only for compatibility. Their values will not affect the return value. Returns numpy.datetime64 See also DatetimeIndex.to_numpy Similar method for DatetimeIndex. Examples >>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651') >>> ts.to_numpy() numpy.datetime64('2020-03-14T15:32:52.192548651') Analogous for pd.NaT: >>> pd.NaT.to_numpy() numpy.datetime64('NaT')
doc_2108
See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedReshape tf.raw_ops.QuantizedReshape( tensor, shape, input_min, input_max, name=None ) Args tensor A Tensor. shape A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor. input_min A Tensor of type float32. The minimum value of the input. input_max A Tensor of type float32. The maximum value of the input. name A name for the operation (optional). Returns A tuple of Tensor objects (output, output_min, output_max). output A Tensor. Has the same type as tensor. output_min A Tensor of type float32. output_max A Tensor of type float32.
doc_2109
Set the mouse buttons for 3D rotation and zooming. Parameters: rotate_btn (int or list of int, default: 1) The mouse button or buttons to use for 3D rotation of the axes. zoom_btn (int or list of int, default: 3) The mouse button or buttons to use to zoom the 3D axes.
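The snippet does not name the method, but the signature matches Axes3D.mouse_init from mpl_toolkits.mplot3d; a sketch under that assumption:

import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# Button 1 (left) rotates and button 3 (right) zooms -- the defaults above.
ax.mouse_init(rotate_btn=1, zoom_btn=3)
plt.show()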
doc_2110
Return Not equal to of series and other, element-wise (binary operator ne). Equivalent to series != other, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters other:Series or scalar value fill_value:None or float value, default None (NaN) Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. level:int or name Broadcast across a level, matching Index values on the passed MultiIndex level. Returns Series The result of the operation. Examples >>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.ne(b, fill_value=0) a False b True c True d True e True dtype: bool
doc_2111
alias of matplotlib.backends.backend_pgf.FigureCanvasPgf classmatplotlib.backends.backend_pgf.FigureCanvasPgf(figure=None)[source] Bases: matplotlib.backend_bases.FigureCanvasBase draw()[source] Render the Figure. It is important that this method actually walk the artist tree even if no output is produced, because this will trigger deferred work (like computing limits, auto-limits, and tick values) that users may want access to before saving to disk. filetypes={'pdf': 'LaTeX compiled PGF picture', 'pgf': 'LaTeX PGF picture', 'png': 'Portable Network Graphics'} get_default_filetype()[source] Return the default savefig file format as specified in rcParams["savefig.format"] (default: 'png'). The returned string does not include a period. This method is overridden in backends that only support a single file type. get_renderer()[source] print_pdf(fname_or_fh, *, metadata=None, **kwargs)[source] Use LaTeX to compile a pgf generated figure to pdf. print_pgf(fname_or_fh, **kwargs)[source] Output pgf macros for drawing the figure so it can be included and rendered in LaTeX documents. print_png(fname_or_fh, **kwargs)[source] Use LaTeX to compile a pgf figure to pdf and convert it to png. exceptionmatplotlib.backends.backend_pgf.LatexError(message, latex_output='')[source] Bases: Exception classmatplotlib.backends.backend_pgf.LatexManager[source] Bases: object The LatexManager opens an instance of the LaTeX application for determining the metrics of text elements. The LaTeX environment can be modified by setting fonts and/or a custom preamble in rcParams. get_width_height_descent(text, prop)[source] Get the width, total height, and descent (in TeX points) for a text typeset by the current LaTeX environment. propertystr_cache[source] classmatplotlib.backends.backend_pgf.PdfPages(filename, *, keep_empty=True, metadata=None)[source] Bases: object A multi-page PDF file using the pgf backend Examples >>> import matplotlib.pyplot as plt >>> # Initialize: >>> with PdfPages('foo.pdf') as pdf: ... # As many times as you like, create a figure fig and save it: ... fig = plt.figure() ... pdf.savefig(fig) ... # When no figure is specified the current figure is saved ... pdf.savefig() Create a new PdfPages object. Parameters filenamestr or path-like Plots using PdfPages.savefig will be written to a file at this location. Any older file with the same name is overwritten. keep_emptybool, default: True If set to False, then empty pdf files will be deleted automatically when closed. metadatadict, optional Information dictionary object (see PDF reference section 10.2.1 'Document Information Dictionary'), e.g.: {'Creator': 'My software', 'Author': 'Me', 'Title': 'Awesome'}. The standard keys are 'Title', 'Author', 'Subject', 'Keywords', 'Creator', 'Producer', 'CreationDate', 'ModDate', and 'Trapped'. Values have been predefined for 'Creator', 'Producer' and 'CreationDate'. They can be removed by setting them to None. Note that some versions of LaTeX engines may ignore the 'Producer' key and set it to themselves. close()[source] Finalize this object, running LaTeX in a temporary directory and moving the final pdf file to filename. get_pagecount()[source] Return the current number of pages in the multipage pdf file. keep_empty savefig(figure=None, **kwargs)[source] Save a Figure to this file as a new page. Any other keyword arguments are passed to savefig. Parameters figureFigure or int, default: the active figure The figure, or index of the figure, that is saved to the file.
classmatplotlib.backends.backend_pgf.RendererPgf(figure, fh)[source] Bases: matplotlib.backend_bases.RendererBase Create a new PGF renderer that translates any drawing instruction into text commands to be interpreted in a latex pgfpicture environment. Attributes figurematplotlib.figure.Figure Matplotlib figure to initialize height, width and dpi from. fhfile-like File handle for the output of the drawing commands. draw_image(gc, x, y, im, transform=None)[source] Draw an RGBA image. Parameters gcGraphicsContextBase A graphics context with clipping information. xscalar The distance in physical units (i.e., dots or pixels) from the left hand side of the canvas. yscalar The distance in physical units (i.e., dots or pixels) from the bottom side of the canvas. im(N, M, 4) array-like of np.uint8 An array of RGBA pixels. transformmatplotlib.transforms.Affine2DBase If and only if the concrete backend is written such that option_scale_image() returns True, an affine transformation (i.e., an Affine2DBase) may be passed to draw_image(). The translation vector of the transformation is given in physical units (i.e., dots or pixels). Note that the transformation does not override x and y, and has to be applied before translating the result by x and y (this can be accomplished by adding x and y to the translation vector defined by transform). draw_markers(gc, marker_path, marker_trans, path, trans, rgbFace=None)[source] Draw a marker at each of path's vertices (excluding control points). This provides a fallback implementation of draw_markers that makes multiple calls to draw_path(). Some backends may want to override this method in order to draw the marker only once and reuse it multiple times. Parameters gcGraphicsContextBase The graphics context. marker_transmatplotlib.transforms.Transform An affine transform applied to the marker. transmatplotlib.transforms.Transform An affine transform applied to the path. draw_path(gc, path, transform, rgbFace=None)[source] Draw a Path instance using the given affine transform. draw_tex(gc, x, y, s, prop, angle, ismath='TeX', mtext=None)[source] draw_text(gc, x, y, s, prop, angle, ismath=False, mtext=None)[source] Draw the text instance. Parameters gcGraphicsContextBase The graphics context. xfloat The x location of the text in display coords. yfloat The y location of the text baseline in display coords. sstr The text string. propmatplotlib.font_manager.FontProperties The font properties. anglefloat The rotation angle in degrees anti-clockwise. mtextmatplotlib.text.Text The original text object to be rendered. Notes Note for backend implementers: When you are trying to determine if you have gotten your bounding box right (which is what enables the text layout/alignment to work properly), it helps to change the line in text.py: if 0: bbox_artist(self, renderer) to if 1, and then the actual bounding box will be plotted along with your text. flipy()[source] Return whether y values increase from top to bottom. Note that this only affects drawing of texts and images. get_canvas_width_height()[source] Return the canvas width and height in display coords. get_text_width_height_descent(s, prop, ismath)[source] Get the width, height, and descent (offset from the bottom to the baseline), in display coords, of the string s with FontProperties prop. option_image_nocomposite()[source] Return whether image composition by Matplotlib should be skipped. 
Raster backends should usually return False (letting the C-level rasterizer take care of image composition); vector backends should usually return not rcParams["image.composite_image"]. option_scale_image()[source] Return whether arbitrary affine transformations in draw_image() are supported (True for most vector backends). points_to_pixels(points)[source] Convert points to display units. You need to override this function (unless your backend doesn't have a dpi, e.g., postscript or svg). Some imaging systems assume some value for pixels per inch: points to pixels = points * pixels_per_inch/72 * dpi/72 Parameters pointsfloat or array-like a float or a numpy array of float Returns Points converted to pixels classmatplotlib.backends.backend_pgf.TmpDirCleaner(*args, **kwargs)[source] Bases: object [Deprecated] Notes Deprecated since version 3.4: staticadd(tmpdir)[source] [Deprecated] Notes Deprecated since version 3.4: staticcleanup_remaining_tmpdirs()[source] [Deprecated] Notes Deprecated since version 3.4: remaining_tmpdirs={} matplotlib.backends.backend_pgf.common_texification(text)[source] Do some necessary and/or useful substitutions for texts to be included in LaTeX documents. This distinguishes text-mode and math-mode by replacing the math separator $ with \(\displaystyle %s\). Escaped math separators (\$) are ignored. The following characters are escaped in text segments: _^$% matplotlib.backends.backend_pgf.get_fontspec()[source] Build fontspec preamble from rc. matplotlib.backends.backend_pgf.get_preamble()[source] Get LaTeX preamble from rc. matplotlib.backends.backend_pgf.make_pdf_to_png_converter()[source] Return a function that converts a pdf file to a png file. matplotlib.backends.backend_pgf.writeln(fh, line)[source]
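End to end, the backend is typically used like this minimal sketch (it assumes a working LaTeX installation on the PATH, which the pgf backend requires):

import matplotlib
matplotlib.use('pgf')  # select the backend before creating any figures
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig('figure.pgf')  # PGF macros, for \input{} in a LaTeX document
fig.savefig('figure.pdf')  # compiled to PDF via print_pdf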
doc_2112
Applies a 1D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool1d for details and output shape. Parameters output_size – the target output size (single integer) return_indices – whether to return pooling indices. Default: False
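A minimal sketch of the functional form, torch.nn.functional.adaptive_max_pool1d (the tensor shapes are illustrative):

import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 64)                    # (batch, channels, length)
out = F.adaptive_max_pool1d(x, output_size=8)
print(out.shape)                              # torch.Size([1, 16, 8])
# With return_indices=True the pooling indices are returned as well,
# e.g. for later use with max_unpool1d.
out, indices = F.adaptive_max_pool1d(x, 8, return_indices=True)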
doc_2113
Clear the Axes.
doc_2114
See Migration guide for more details. tf.compat.v1.raw_ops.Conv2DBackpropFilter tf.raw_ops.Conv2DBackpropFilter( input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu=True, explicit_paddings=[], data_format='NHWC', dilations=[1, 1, 1, 1], name=None ) Args input A Tensor. Must be one of the following types: half, bfloat16, float32, float64. 4-D with shape [batch, in_height, in_width, in_channels]. filter_sizes A Tensor of type int32. An integer vector representing the tensor shape of filter, where filter is a 4-D [filter_height, filter_width, in_channels, out_channels] tensor. out_backprop A Tensor. Must have the same type as input. 4-D with shape [batch, out_height, out_width, out_channels]. Gradients w.r.t. the output of the convolution. strides A list of ints. The stride of the sliding window for each dimension of the input of the convolution. Must be in the same order as the dimension specified with format. padding A string from: "SAME", "VALID", "EXPLICIT". The type of padding algorithm to use. use_cudnn_on_gpu An optional bool. Defaults to True. explicit_paddings An optional list of ints. Defaults to []. If padding is "EXPLICIT", the list of explicit padding amounts. For the ith dimension, the amount of padding inserted before and after the dimension is explicit_paddings[2 * i] and explicit_paddings[2 * i + 1], respectively. If padding is not "EXPLICIT", explicit_paddings must be empty. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. dilations An optional list of ints. Defaults to [1, 1, 1, 1]. 1-D tensor of length 4. The dilation factor for each dimension of input. If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. The dimension order is determined by the value of data_format, see above for details. Dilations in the batch and depth dimensions must be 1. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
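A minimal sketch of calling this op directly; the shapes are illustrative, and with padding="SAME" and unit strides the output spatial size equals the input's, so out_backprop is [1, 8, 8, 4]:

import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])    # [batch, in_height, in_width, in_channels]
dy = tf.random.normal([1, 8, 8, 4])   # gradients w.r.t. the convolution output
grad = tf.raw_ops.Conv2DBackpropFilter(
    input=x,
    filter_sizes=[3, 3, 3, 4],        # [filter_h, filter_w, in_channels, out_channels]
    out_backprop=dy,
    strides=[1, 1, 1, 1],
    padding='SAME')
print(grad.shape)                     # (3, 3, 3, 4)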
doc_2115
Return filter function to be used for agg filter.
doc_2116
Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones. classmethod apply(module, name) [source] Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters module (nn.Module) – module containing the tensor to prune name (str) – parameter name within module on which pruning will act. apply_mask(module) Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters module (nn.Module) – module containing the tensor to prune Returns pruned version of the input tensor Return type pruned_tensor (torch.Tensor) prune(t, default_mask=None, importance_scores=None) Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters t (torch.Tensor) – tensor to prune (of same dimensions as default_mask). importance_scores (torch.Tensor) – tensor of importance scores (of same shape as t) used to compute mask for pruning t. The values in this tensor indicate the importance of the corresponding elements in the t that is being pruned. If unspecified or None, the tensor t will be used in its place. default_mask (torch.Tensor, optional) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns pruned version of tensor t. remove(module) Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed!
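This reads like the documentation of torch.nn.utils.prune.Identity; under that assumption, a minimal sketch using the prune.identity convenience function:

import torch.nn as nn
import torch.nn.utils.prune as prune

m = nn.Linear(4, 2)
prune.identity(m, name='weight')   # adds weight_orig and a weight_mask of ones
print(m.weight_mask.unique())      # tensor([1.])
prune.remove(m, 'weight')          # make the reparametrization permanent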
doc_2117
Return a tuple containing all the subgroups of the match, from 1 up to however many groups are in the pattern. The default argument is used for groups that did not participate in the match; it defaults to None. For example: >>> m = re.match(r"(\d+)\.(\d+)", "24.1632") >>> m.groups() ('24', '1632') If we make the decimal place and everything after it optional, not all groups might participate in the match. These groups will default to None unless the default argument is given: >>> m = re.match(r"(\d+)\.?(\d+)?", "24") >>> m.groups() # Second group defaults to None. ('24', None) >>> m.groups('0') # Now, the second group defaults to '0'. ('24', '0')
doc_2118
tf.compat.v1.distribute.OneDeviceStrategy( device ) Using this strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via strategy.run will also be placed on the specified device as well. Typical usage of this strategy could be testing your code with the tf.distribute.Strategy API before switching to other strategies which actually distribute to multiple devices/machines. For example: tf.enable_eager_execution() strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0") with strategy.scope(): v = tf.Variable(1.0) print(v.device) # /job:localhost/replica:0/task:0/device:GPU:0 def step_fn(x): return x * 2 result = 0 for i in range(10): result += strategy.run(step_fn, args=(i,)) print(result) # 90 Args device Device string identifier for the device on which the variables should be placed. See class docs for more details on how the device is used. Examples: "/cpu:0", "/gpu:0", "/device:CPU:0", "/device:GPU:0" Attributes cluster_resolver Returns the cluster resolver associated with this strategy. In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property. Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property. Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None. The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example, os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"], 'ps': ["localhost:34567"] }, 'task': {'type': 'worker', 'index': 0} }) # This implicitly uses TF_CONFIG for the cluster and current task info. strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ... if strategy.cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. Since we set this # as a worker above, this block will run on this particular instance. elif strategy.cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. Since we # set this as a worker above, this block will not run on this particular # instance. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring. extended tf.distribute.StrategyExtended with additional methods. num_replicas_in_sync Returns number of replicas over which gradients are aggregated. Methods distribute_datasets_from_function View source distribute_datasets_from_function( dataset_fn, options=None ) Distributes tf.data.Dataset instances created by calls to dataset_fn. The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. 
tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step). This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed. The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed. You can use element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example. Key Point: The tf.data.Dataset returned by dataset_fn should have a per-replica batch size, unlike experimental_distribute_dataset, which uses the global batch size. This may be computed using input_context.get_per_replica_batch_size. Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs. Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e the random seed) on the local machine where the python process is being executed. For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section. Args dataset_fn A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset. options tf.distribute.InputOptions used to control options on how this dataset is distributed. Returns A tf.distribute.DistributedDataset. experimental_distribute_dataset View source experimental_distribute_dataset( dataset, options=None ) Creates tf.distribute.DistributedDataset from tf.data.Dataset. The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more.
The following is an example: global_batch_size = 2 # Passing the devices is optional. strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) # Create a dataset dataset = tf.data.Dataset.range(4).batch(global_batch_size) # Distribute that dataset dist_dataset = strategy.experimental_distribute_dataset(dataset) @tf.function def replica_fn(input): return input*2 result = [] # Iterate over the `tf.distribute.DistributedDataset` for x in dist_dataset: # process dataset elements result.append(strategy.run(replica_fn, args=(x,))) print(result) [PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])> }, PerReplica:{ 0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, 1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])> }] Three key actions happening under the hood of this method are batching, sharding, and prefetching. In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica. Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding. Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have fewer than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF. By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The argument to the prefetch transformation, buffer_size, is equal to the number of replicas in sync. If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you. 
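To make the alternative concrete, here is a minimal sketch of the distribute_datasets_from_function path, in which the input function does its own per-replica batching and sharding; the two-GPU setup, dataset, and batch sizes are illustrative assumptions rather than part of the API documented above.

import tensorflow as tf

# A minimal sketch (assumed 2-GPU setup): the input function batches by the
# per-replica batch size and shards by input pipeline id itself.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
global_batch_size = 8

def dataset_fn(input_context):
    per_replica_batch = input_context.get_per_replica_batch_size(global_batch_size)
    ds = tf.data.Dataset.range(64).batch(per_replica_batch)
    # Manual sharding: each input pipeline reads a disjoint slice of the data.
    return ds.shard(input_context.num_input_pipelines,
                    input_context.input_pipeline_id)

dist_ds = strategy.distribute_datasets_from_function(dataset_fn)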
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs. Note: Stateful dataset transformations are currently not supported with tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the Python process is being executed. For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section. Args dataset tf.data.Dataset that will be sharded across all replicas using the rules stated above. options tf.distribute.InputOptions used to control options on how this dataset is distributed. Returns A tf.distribute.DistributedDataset. experimental_local_results View source experimental_local_results( value ) Returns the list of all local per-replica values contained in value. Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker. Args value A value returned by experimental_run(), run(), extended.call_for_each_replica(), or a variable created in scope. Returns A tuple of values contained in value. If value represents a single value, this returns (value,). experimental_make_numpy_dataset View source experimental_make_numpy_dataset( numpy_input, session=None ) Makes a tf.data.Dataset for input provided via a numpy array. This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input. Note that you will likely need to use tf.distribute.Strategy.experimental_distribute_dataset with the returned dataset to further distribute it with the strategy. Example: numpy_input = np.ones([10], dtype=np.float32) dataset = strategy.experimental_make_numpy_dataset(numpy_input) dist_dataset = strategy.experimental_distribute_dataset(dataset) Args numpy_input A nest of NumPy input arrays that will be converted into a dataset. Note that lists of NumPy arrays are stacked, as that is normal tf.data.Dataset behavior. session (TensorFlow v1.x graph execution only) A session used for initialization. Returns A tf.data.Dataset representing numpy_input. experimental_run View source experimental_run( fn, input_iterator=None ) Runs ops in fn on each replica, with inputs from input_iterator. DEPRECATED: This method is not available in TF 2.x. Please switch to using run instead. When eager execution is enabled, executes ops specified by fn on each replica. Otherwise, builds a graph to execute the ops on each replica. Each replica will take a single, different input from the inputs provided by one get_next call on the input iterator. fn may call tf.distribute.get_replica_context() to access members such as replica_id_in_sync_group. 
Key Point: Depending on the tf.distribute.Strategy implementation being used, and whether eager execution is enabled, fn may be called one or more times (once for each replica). Args fn The function to run. The inputs to the function must match the outputs of input_iterator.get_next(). The output must be a tf.nest of Tensors. input_iterator (Optional) input iterator from which the inputs are taken. Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be PerReplica (if the values are unsynchronized), Mirrored (if the values are kept in sync), or Tensor (if running on a single replica). make_dataset_iterator View source make_dataset_iterator( dataset ) Makes an iterator for input provided via dataset. DEPRECATED: This method is not available in TF 2.x. Data from the given dataset will be distributed evenly across all the compute replicas. We will assume that the input dataset is batched by the global batch size. With this assumption, we will make a best effort to divide each batch across all the replicas (one or more workers). If this effort fails, an error will be thrown, and the user should instead use make_input_fn_iterator which provides more control to the user, and does not try to divide a batch across replicas. The user could also use make_input_fn_iterator if they want to customize which input is fed to which replica/worker, etc. Args dataset tf.data.Dataset that will be distributed evenly across all replicas. Returns A tf.distribute.InputIterator which returns inputs for each step of the computation. The user should call initialize on the returned iterator. make_input_fn_iterator View source make_input_fn_iterator( input_fn, replication_mode=tf.distribute.InputReplicationMode.PER_WORKER ) Returns an iterator split across replicas created from an input function. DEPRECATED: This method is not available in TF 2.x. The input_fn should take a tf.distribute.InputContext object where information about batching and input sharding can be accessed: def input_fn(input_context): batch_size = input_context.get_per_replica_batch_size(global_batch_size) d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size) return d.shard(input_context.num_input_pipelines, input_context.input_pipeline_id) with strategy.scope(): iterator = strategy.make_input_fn_iterator(input_fn) replica_results = strategy.experimental_run(replica_fn, iterator) The tf.data.Dataset returned by input_fn should have a per-replica batch size, which may be computed using input_context.get_per_replica_batch_size. Args input_fn A function taking a tf.distribute.InputContext object and returning a tf.data.Dataset. replication_mode an enum value of tf.distribute.InputReplicationMode. Only PER_WORKER is supported currently, which means there will be a single call to input_fn per worker. Replicas will dequeue from the local tf.data.Dataset on their worker. Returns An iterator object that should first be .initialize()-ed. It may then either be passed to strategy.experimental_run() or you can call iterator.get_next() to get the next value to pass to strategy.extended.call_for_each_replica(). reduce View source reduce( reduce_op, value, axis=None ) Reduce value across replicas and return result on current device. 
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) total = strategy.reduce("SUM", per_replica_result, axis=None) total <tf.Tensor: shape=(), dtype=int32, numpy=1> To see which devices hold the per-replica and reduced results, consider the same example again with MirroredStrategy on 2 GPUs: strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"]) def step_fn(): i = tf.distribute.get_replica_context().replica_id_in_sync_group return tf.identity(i) per_replica_result = strategy.run(step_fn) # Check devices on which per replica result is: strategy.experimental_local_results(per_replica_result)[0].device # /job:localhost/replica:0/task:0/device:GPU:0 strategy.experimental_local_results(per_replica_result)[1].device # /job:localhost/replica:0/task:0/device:GPU:1 total = strategy.reduce("SUM", per_replica_result, axis=None) # Check device on which reduced result is: total.device # /job:localhost/replica:0/task:0/device:CPU:0 This API is typically used for aggregating the results returned from different replicas, for reporting, etc. For example, loss computed from different replicas can be averaged using this API before printing. Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi client MultiWorkerMirroredStrategy, this is the CPU of each worker. There are a number of different tf.distribute APIs for reducing values across replicas: tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should be typically used for reductions inside the training step such as gradients. tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross replica context. What should axis be? Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss). strategy.reduce("sum", per_replica_result, axis=None) Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7. strategy.reduce("sum", per_replica_result, axis=0) If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. 
Contrast this with computing reduce_mean to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8 and others 1/4. Args reduce_op a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". value a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy. axis specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension). Returns A Tensor. run View source run( fn, args=(), kwargs=None, options=None ) Invokes fn on each replica, with the given arguments. This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica. fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context. All arguments in args or kwargs should either be a nested structure of tensors or Python values, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Key Point: Depending on the implementation of tf.distribute.Strategy and whether eager execution is enabled, fn may be called one or more times. If fn is annotated with tf.function or tf.distribute.Strategy.run is called inside a tf.function (eager execution is disabled inside a tf.function by default), fn is called once per replica to generate a TensorFlow graph, which will then be reused for execution with new inputs. Otherwise, if eager execution is enabled, fn will be called once per replica every step just like regular Python code. Example usage: Constant tensor input. strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) tensor_input = tf.constant(3.0) @tf.function def replica_fn(input): return input*2.0 result = strategy.run(replica_fn, args=(tensor_input,)) result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>, 1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0> } DistributedValues input. strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) @tf.function def run(): def value_fn(value_context): return value_context.num_replicas_in_sync distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn2(input): return input*2 return strategy.run(replica_fn2, args=(distributed_values,)) result = run() result <tf.Tensor: shape=(), dtype=int32, numpy=4> Use tf.distribute.ReplicaContext to allreduce values. 
strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"]) @tf.function def run(): def value_fn(value_context): return tf.constant(value_context.replica_id_in_sync_group) distributed_values = ( strategy.experimental_distribute_values_from_function( value_fn)) def replica_fn(input): return tf.distribute.get_replica_context().all_reduce("sum", input) return strategy.run(replica_fn, args=(distributed_values,)) result = run() result PerReplica:{ 0: <tf.Tensor: shape=(), dtype=int32, numpy=1>, 1: <tf.Tensor: shape=(), dtype=int32, numpy=1> } Args fn The function to run on each replica. args Optional positional arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues. kwargs Optional keyword arguments to fn. Its element can be a Python value, a tensor or a tf.distribute.DistributedValues. options An optional instance of tf.distribute.RunOptions specifying the options to run fn. Returns Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues or Tensor objects (for example, if running on a single replica). scope View source scope() Context manager to make the strategy current and distribute variables. This method returns a context manager, and is used as follows: strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # Variable created inside scope: with strategy.scope(): mirrored_variable = tf.Variable(1.) mirrored_variable MirroredVariable:{ 0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>, 1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0> } # Variable created outside scope: regular_variable = tf.Variable(1.) regular_variable <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0> What happens when Strategy.scope is entered? strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy. Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts. Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope. In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker. Note: Entering a scope does not automatically distribute a computation, except in the case of high level training frameworks like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial. What should be in scope and what should be outside? There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK). Anything that creates variables that should be distributed variables must be in strategy.scope. 
This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope. Some strategy APIs (such as strategy.run and strategy.reduce), which require being in a strategy's scope, enter the scope for you automatically, which means that when using those APIs you don't need to enter the scope yourself. When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training etc. See the detailed example in the distributed Keras tutorial. Note that simply calling the model (model(...)) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope. The following can be either inside or outside the scope: Creating the input datasets Defining tf.functions that represent your training step Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way. Checkpoint saving. As mentioned above - checkpoint.restore may sometimes need to be inside scope if it creates variables. Returns A context manager. update_config_proto View source update_config_proto( config_proto ) Returns a copy of config_proto modified for use with this strategy. DEPRECATED: This method is not available in TF 2.x. The updated config has whatever is needed to run a strategy, e.g. configuration to run collective ops, or device filters to improve distributed training performance. Args config_proto a tf.ConfigProto object. Returns The updated copy of the config_proto.
doc_2119
Returns the EWKB of this Geometry in hexadecimal form. This is an extension of the WKB specification that includes the SRID value that is a part of this geometry.
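A short sketch of reading this property in GeoDjango (assumes the GEOS library is available; the coordinates and SRID are illustrative):

from django.contrib.gis.geos import Point

# A sketch: hexewkb embeds the SRID alongside the geometry bytes.
pt = Point(1.0, 2.0, srid=4326)
print(pt.hexewkb)  # hex-encoded EWKB, including SRID 4326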
doc_2120
See Migration guide for more details. tf.compat.v1.raw_ops.QueueSize tf.raw_ops.QueueSize( handle, name=None ) Args handle A Tensor of type mutable string. The handle to a queue. name A name for the operation (optional). Returns A Tensor of type int32.
doc_2121
Callback processing for the mouse cursor leaving the canvas. Backend derived classes should call this function when the mouse cursor leaves the canvas. Parameters guiEvent The native UI event that generated the Matplotlib event.
doc_2122
Get parameters of this kernel. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
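A brief sketch with an RBF kernel from sklearn.gaussian_process.kernels (the specific kernel and values are illustrative):

from sklearn.gaussian_process.kernels import RBF

# A sketch: inspecting a kernel's parameters as a dict.
kernel = RBF(length_scale=2.0)
print(kernel.get_params())
# e.g. {'length_scale': 2.0, 'length_scale_bounds': (1e-05, 100000.0)}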
doc_2123
Add prefix to the beginning of selected lines in text. Lines are separated by calling text.splitlines(True). By default, prefix is added to all lines that do not consist solely of whitespace (including any line endings). For example: >>> s = 'hello\n\n \nworld' >>> indent(s, ' ') ' hello\n\n \n world' The optional predicate argument can be used to control which lines are indented. For example, it is easy to add prefix to even empty and whitespace-only lines: >>> print(indent(s, '+ ', lambda line: True)) + hello + + + world New in version 3.3.
doc_2124
See Migration guide for more details. tf.compat.v1.raw_ops.MatrixInverse tf.raw_ops.MatrixInverse( input, adjoint=False, name=None ) The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :]. The op uses LU decomposition with partial pivoting to compute the inverses. If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result. Args input A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Shape is [..., M, M]. adjoint An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor. Has the same type as input.
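A minimal sketch of calling this raw op in TF 2.x eager mode (the matrix values are illustrative):

import tensorflow as tf

# A sketch: inverting a single 2x2 matrix via the raw op.
m = tf.constant([[2.0, 0.0], [0.0, 4.0]])
inv = tf.raw_ops.MatrixInverse(input=m, adjoint=False)
print(inv.numpy())  # [[0.5, 0.], [0., 0.25]]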
doc_2125
Return self-value.
doc_2126
See Migration guide for more details. tf.compat.v1.raw_ops.Exp tf.raw_ops.Exp( x, name=None ) This function computes the exponential of every element in the input tensor. i.e. exp(x) or e^(x), where x is the input tensor. e denotes Euler's number and is approximately equal to 2.718281. Output is positive for any real input. x = tf.constant(2.0) tf.math.exp(x) ==> 7.389056 x = tf.constant([2.0, 8.0]) tf.math.exp(x) ==> array([7.389056, 2980.958], dtype=float32) For complex numbers, the exponential value is calculated as follows: e^(x+iy) = e^x * e^iy = e^x * (cos y + i sin y) Let's consider complex number 1+1j as an example. e^1 * (cos 1 + i sin 1) = 2.7182818284590 * (0.54030230586+0.8414709848j) x = tf.constant(1 + 1j) tf.math.exp(x) ==> 1.4686939399158851+2.2873552871788423j Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_2127
See Migration guide for more details. tf.compat.v1.errors.InternalError tf.errors.InternalError( node_def, op, message ) This exception is raised when some invariant expected by the runtime has been broken. Catching this exception is not recommended. Attributes error_code The integer error code that describes the error. message The error message that describes the error. node_def The NodeDef proto representing the op that failed. op The operation that failed, if known. Note: If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding tf.Operation object. In that case, this will return None, and you should instead use the tf.errors.OpError.node_def to discover information about the op.
doc_2128
Purely integer-location based indexing for selection by position. .iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. Allowed inputs are: An integer, e.g. 5. A list or array of integers, e.g. [4, 3, 0]. A slice object with ints, e.g. 1:7. A boolean array. A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above). This is useful in method chains, when you don’t have a reference to the calling object, but would like to base your selection on some value. .iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which allow out-of-bounds indexing (this conforms with python/numpy slice semantics). See more at Selection by Position. See also DataFrame.iat Fast integer location scalar accessor. DataFrame.loc Purely label-location based indexer for selection by label. Series.iloc Purely integer-location based indexing for selection by position. Examples >>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, ... {'a': 100, 'b': 200, 'c': 300, 'd': 400}, ... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }] >>> df = pd.DataFrame(mydict) >>> df a b c d 0 1 2 3 4 1 100 200 300 400 2 1000 2000 3000 4000 Indexing just the rows With a scalar integer. >>> type(df.iloc[0]) <class 'pandas.core.series.Series'> >>> df.iloc[0] a 1 b 2 c 3 d 4 Name: 0, dtype: int64 With a list of integers. >>> df.iloc[[0]] a b c d 0 1 2 3 4 >>> type(df.iloc[[0]]) <class 'pandas.core.frame.DataFrame'> >>> df.iloc[[0, 1]] a b c d 0 1 2 3 4 1 100 200 300 400 With a slice object. >>> df.iloc[:3] a b c d 0 1 2 3 4 1 100 200 300 400 2 1000 2000 3000 4000 With a boolean mask the same length as the index. >>> df.iloc[[True, False, True]] a b c d 0 1 2 3 4 2 1000 2000 3000 4000 With a callable, useful in method chains. The x passed to the lambda is the DataFrame being sliced. This selects the rows whose index label is even. >>> df.iloc[lambda x: x.index % 2 == 0] a b c d 0 1 2 3 4 2 1000 2000 3000 4000 Indexing both axes You can mix the indexer types for the index and columns. Use : to select the entire axis. With scalar integers. >>> df.iloc[0, 1] 2 With lists of integers. >>> df.iloc[[0, 2], [1, 3]] b d 0 2 4 2 2000 4000 With slice objects. >>> df.iloc[1:3, 0:3] a b c 1 100 200 300 2 1000 2000 3000 With a boolean array whose length matches the columns. >>> df.iloc[:, [True, False, True, False]] a c 0 1 3 1 100 300 2 1000 3000 With a callable function that expects the Series or DataFrame. >>> df.iloc[:, lambda df: [0, 2]] a c 0 1 3 1 100 300 2 1000 3000
doc_2129
A tuple of positional argument values. Dynamically computed from the arguments attribute.
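This entry reads like inspect.BoundArguments.args; assuming so, a minimal sketch (the function and arguments are illustrative):

import inspect

def f(a, b, *extra):
    pass

# Assuming this entry describes inspect.BoundArguments.args:
bound = inspect.signature(f).bind(1, 2, 3)
print(bound.args)       # (1, 2, 3) -- computed from bound.arguments
print(bound.arguments)  # {'a': 1, 'b': 2, 'extra': (3,)}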
doc_2130
Replace values where the condition is False. Parameters cond:bool Series/DataFrame, array-like, or callable Where cond is True, keep the original value. Where False, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it). other:scalar, Series/DataFrame, or callable Entries where cond is False are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it). inplace:bool, default False Whether to perform the operation in place on the data. axis:int, default None Alignment axis if needed. level:int, default None Alignment level if needed. errors:str, {‘raise’, ‘ignore’}, default ‘raise’ Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype. ‘raise’ : allow exceptions to be raised. ‘ignore’ : suppress exceptions. On error return original object. try_cast:bool, default None Try to cast the result back to the input type (if possible). Deprecated since version 1.3.0: Manually cast back if necessary. Returns Same type as caller or None if inplace=True. See also DataFrame.mask() Return an object of same shape as self. Notes The where method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is True the element is used; otherwise the corresponding element from the DataFrame other is used. The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2). For further details and examples see the where documentation in indexing. Examples >>> s = pd.Series(range(5)) >>> s.where(s > 0) 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64 >>> s.mask(s > 0) 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64 >>> s.where(s > 1, 10) 0 10 1 10 2 2 3 3 4 4 dtype: int64 >>> s.mask(s > 1, 10) 0 0 1 1 2 10 3 10 4 10 dtype: int64 >>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) >>> df A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 >>> df.where(m, -df) A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) A B 0 True True 1 True True 2 True True 3 True True 4 True True
doc_2131
turtle.turtlesize(stretch_wid=None, stretch_len=None, outline=None) Parameters stretch_wid – positive number stretch_len – positive number outline – positive number Return or set the pen’s attributes x/y-stretchfactors and/or outline. Set resizemode to “user”. If and only if resizemode is set to “user”, the turtle will be displayed stretched according to its stretchfactors: stretch_wid is stretchfactor perpendicular to its orientation, stretch_len is stretchfactor in direction of its orientation, outline determines the width of the shape’s outline. >>> turtle.shapesize() (1.0, 1.0, 1) >>> turtle.resizemode("user") >>> turtle.shapesize(5, 5, 12) >>> turtle.shapesize() (5, 5, 12) >>> turtle.shapesize(outline=8) >>> turtle.shapesize() (5, 5, 8)
doc_2132
Module: email.mime.nonmultipart A subclass of MIMEBase, this is an intermediate base class for MIME messages that are not multipart. The primary purpose of this class is to prevent the use of the attach() method, which only makes sense for multipart messages. If attach() is called, a MultipartConversionError exception is raised.
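A short sketch of the guard in action, using MIMEText (a MIMENonMultipart subclass):

from email.mime.text import MIMEText  # MIMEText derives from MIMENonMultipart
from email.errors import MultipartConversionError

msg = MIMEText("hello")
try:
    msg.attach(MIMEText("world"))  # attach() is not allowed on non-multipart messages
except MultipartConversionError as err:
    print("attach() refused:", err)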
doc_2133
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
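A brief sketch of the <component>__<parameter> form on a nested estimator (the pipeline and value are illustrative):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# A sketch: updating a component of a nested object by name.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
pipe.set_params(clf__C=10.0)        # updates the LogisticRegression step
print(pipe.get_params()["clf__C"])  # 10.0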
doc_2134
axes_grid1.anchored_artists axes_grid1.axes_divider Helper classes to adjust the positions of multiple axes at drawing time. axes_grid1.axes_grid axes_grid1.axes_rgb axes_grid1.axes_size Provides classes of simple units that will be used with AxesDivider class (or others) to determine the size of each axes. axes_grid1.inset_locator A collection of functions and objects for creating or placing inset axes. axes_grid1.mpl_axes axes_grid1.parasite_axes
doc_2135
See Migration guide for more details. tf.compat.v1.raw_ops.LookupTableExportV2 tf.raw_ops.LookupTableExportV2( table_handle, Tkeys, Tvalues, name=None ) Args table_handle A Tensor of type resource. Handle to the table. Tkeys A tf.DType. Tvalues A tf.DType. name A name for the operation (optional). Returns A tuple of Tensor objects (keys, values). keys A Tensor of type Tkeys. values A Tensor of type Tvalues.
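A hedged sketch of exporting a table's contents via this op, using a mutable lookup table (assumes TF 2.x eager mode; the keys and values are illustrative):

import tensorflow as tf

# A sketch: the raw op reads all key/value pairs out of a lookup table.
table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)
table.insert(tf.constant(["a", "b"]), tf.constant([1, 2], dtype=tf.int64))
keys, values = tf.raw_ops.LookupTableExportV2(
    table_handle=table.resource_handle, Tkeys=tf.string, Tvalues=tf.int64)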
doc_2136
Computes the inverse of rfftn. This function computes the inverse of the N-dimensional discrete Fourier Transform for real input over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT). In other words, irfftn(rfftn(a), a.shape) == a to within numerical accuracy. (The a.shape is necessary like len(a) is for irfft, and for the same reason.) The input should be ordered in the same way as is returned by rfftn, i.e. as for irfft for the final transformation axis, and as for ifftn along all the other axes. Parameters aarray_like Input array. ssequence of ints, optional Shape (length of each transformed axis) of the output (s[0] refers to axis 0, s[1] to axis 1, etc.). s is also the number of input points used along this axis, except for the last axis, where s[-1]//2+1 points of the input are used. Along any axis, if the shape indicated by s is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If s is not given, the shape of the input along the axes specified by axes is used. Except for the last axis which is taken to be 2*(m-1) where m is the length of the input along that axis. axessequence of ints, optional Axes over which to compute the inverse FFT. If not given, the last len(s) axes are used, or all axes if s is also not specified. Repeated indices in axes means that the inverse transform over that axis is performed multiple times. norm{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see numpy.fft). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns outndarray The truncated or zero-padded input, transformed along the axes indicated by axes, or by a combination of s or a, as explained in the parameters section above. The length of each transformed axis is as given by the corresponding element of s, or the length of the input in every axis except for the last one if s is not given. In the final transformed axis the length of the output when s is not given is 2*(m-1) where m is the length of the final transformed axis of the input. To get an odd number of output points in the final axis, s must be specified. Raises ValueError If s and axes have different length. IndexError If an element of axes is larger than the number of axes of a. See also rfftn The forward n-dimensional FFT of real input, of which ifftn is the inverse. fft The one-dimensional FFT, with definitions and conventions used. irfft The inverse of the one-dimensional FFT of real input. irfft2 The inverse of the two-dimensional FFT of real input. Notes See fft for definitions and conventions used. See rfft for definitions and conventions used for real input. The correct interpretation of the hermitian input depends on the shape of the original data, as given by s. This is because each input shape could correspond to either an odd or even length signal. By default, irfftn assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. When performing the final complex to real transform, the last value is thus treated as purely real. To avoid losing information, the correct shape of the real input must be given. Examples >>> a = np.zeros((3, 2, 2)) >>> a[0, 0, 0] = 3 * 2 * 2 >>> np.fft.irfftn(a) array([[[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]]])
doc_2137
See Migration guide for more details. tf.compat.v1.keras.layers.Dot tf.keras.layers.Dot( axes, normalize=False, **kwargs ) E.g. if applied to a list of two tensors a and b of shape (batch_size, n), the output will be a tensor of shape (batch_size, 1) where each entry i will be the dot product between a[i] and b[i]. x = np.arange(10).reshape(1, 5, 2) print(x) [[[0 1] [2 3] [4 5] [6 7] [8 9]]] y = np.arange(10, 20).reshape(1, 2, 5) print(y) [[[10 11 12 13 14] [15 16 17 18 19]]] tf.keras.layers.Dot(axes=(1, 2))([x, y]) <tf.Tensor: shape=(1, 2, 2), dtype=int64, numpy= array([[[260, 360], [320, 445]]])> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2)) x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2)) dotted = tf.keras.layers.Dot(axes=1)([x1, x2]) dotted.shape TensorShape([5, 1]) Arguments axes Integer or tuple of integers, axis or axes along which to take the dot product. If a tuple, should be two integers corresponding to the desired axis from the first input and the desired axis from the second input, respectively. Note that the size of the two selected axes must match. normalize Whether to L2-normalize samples along the dot product axis before taking the dot product. If set to True, then the output of the dot product is the cosine proximity between the two samples. **kwargs Standard layer keyword arguments.
doc_2138
See Migration guide for more details. tf.compat.v1.raw_ops.If tf.raw_ops.If( cond, input, Tout, then_branch, else_branch, output_shapes=[], name=None ) Args cond A Tensor. A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True. input A list of Tensor objects. A list of input tensors. Tout A list of tf.DTypes. A list of output types. then_branch A function decorated with @Defun. A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns. else_branch A function decorated with @Defun. A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns. output_shapes An optional list of shapes (each a tf.TensorShape or list of ints). Defaults to []. name A name for the operation (optional). Returns A list of Tensor objects of type Tout.
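In TF 2.x user code, conditionals of this shape are usually written with tf.cond, which in graph mode typically lowers to an If-style op with a then branch and an else branch; a minimal sketch (the values are illustrative):

import tensorflow as tf

# A sketch: tf.cond is the user-facing counterpart of the If op.
x = tf.constant(3)
result = tf.cond(x > 0, lambda: x * 2, lambda: -x)
print(result.numpy())  # 6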
doc_2139
class sklearn.preprocessing.MinMaxScaler(feature_range=(0, 1), *, copy=True, clip=False) [source] Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one. The transformation is given by: X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) X_scaled = X_std * (max - min) + min where min, max = feature_range. This transformation is often used as an alternative to zero mean, unit variance scaling. Read more in the User Guide. Parameters feature_rangetuple (min, max), default=(0, 1) Desired range of transformed data. copybool, default=True Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array). clipbool, default=False Set to True to clip transformed values of held-out data to provided feature range. New in version 0.24. Attributes min_ndarray of shape (n_features,) Per feature adjustment for minimum. Equivalent to min - X.min(axis=0) * self.scale_ scale_ndarray of shape (n_features,) Per feature relative scaling of the data. Equivalent to (max - min) / (X.max(axis=0) - X.min(axis=0)) New in version 0.17: scale_ attribute. data_min_ndarray of shape (n_features,) Per feature minimum seen in the data New in version 0.17: data_min_ data_max_ndarray of shape (n_features,) Per feature maximum seen in the data New in version 0.17: data_max_ data_range_ndarray of shape (n_features,) Per feature range (data_max_ - data_min_) seen in the data New in version 0.17: data_range_ n_samples_seen_int The number of samples processed by the estimator. It will be reset on new calls to fit, but increments across partial_fit calls. See also minmax_scale Equivalent function without the estimator API. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Examples >>> from sklearn.preprocessing import MinMaxScaler >>> data = [[-1, 2], [-0.5, 6], [0, 10], [1, 18]] >>> scaler = MinMaxScaler() >>> print(scaler.fit(data)) MinMaxScaler() >>> print(scaler.data_max_) [ 1. 18.] >>> print(scaler.transform(data)) [[0. 0. ] [0.25 0.25] [0.5 0.5 ] [1. 1. ]] >>> print(scaler.transform([[2, 2]])) [[1.5 0. ]] Methods fit(X[, y]) Compute the minimum and maximum to be used for later scaling. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. inverse_transform(X) Undo the scaling of X according to feature_range. partial_fit(X[, y]) Online computation of min and max on X for later scaling. set_params(**params) Set the parameters of this estimator. transform(X) Scale features of X according to feature_range. fit(X, y=None) [source] Compute the minimum and maximum to be used for later scaling. Parameters Xarray-like of shape (n_samples, n_features) The data used to compute the per-feature minimum and maximum used for later scaling along the features axis. yNone Ignored. Returns selfobject Fitted scaler. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). 
**fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. inverse_transform(X) [source] Undo the scaling of X according to feature_range. Parameters Xarray-like of shape (n_samples, n_features) Input data that will be transformed. It cannot be sparse. Returns Xtndarray of shape (n_samples, n_features) Transformed data. partial_fit(X, y=None) [source] Online computation of min and max on X for later scaling. All of X is processed as a single batch. This is intended for cases when fit is not feasible due to very large number of n_samples or because X is read from a continuous stream. Parameters Xarray-like of shape (n_samples, n_features) The data used to compute the per-feature minimum and maximum used for later scaling along the features axis. yNone Ignored. Returns selfobject Fitted scaler. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Scale features of X according to feature_range. Parameters Xarray-like of shape (n_samples, n_features) Input data that will be transformed. Returns Xtndarray of shape (n_samples, n_features) Transformed data. Examples using sklearn.preprocessing.MinMaxScaler Release Highlights for scikit-learn 0.24 Univariate Feature Selection Scalable learning with polynomial kernel approximation Compare Stochastic learning strategies for MLPClassifier Compare the effect of different scalers on data with outliers
doc_2140
Write the string s to the stream and return the number of characters written.
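This entry reads like the write() method of a text stream (io module); assuming so, a minimal sketch with io.StringIO:

import io

# Assuming this entry describes the text-stream write() method:
buf = io.StringIO()
n = buf.write("hello")
print(n)               # 5 -- the number of characters written
print(buf.getvalue())  # 'hello'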
doc_2141
See Migration guide for more details. tf.compat.v1.distribute.cluster_resolver.ClusterResolver This defines the skeleton for all implementations of ClusterResolvers. ClusterResolvers are a way for TensorFlow to communicate with various cluster management systems (e.g. GCE, AWS, etc...) and give TensorFlow the necessary information to set up distributed training. By letting TensorFlow communicate with these systems, we will be able to automatically discover and resolve IP addresses for various TensorFlow workers. This will eventually allow us to automatically recover from underlying machine failures and scale TensorFlow worker clusters up and down. Note to implementors of tf.distribute.cluster_resolver.ClusterResolver subclasses: In addition to these abstract methods, when task_type, task_id, and rpc_layer attributes are applicable, you should also implement them either as properties with getters or setters, or directly set the attributes self._task_type, self._task_id, or self._rpc_layer so the base class' getters and setters are used. See tf.distribute.cluster_resolver.SimpleClusterResolver.__init__ for an example. In general, multi-client tf.distribute strategies such as tf.distribute.experimental.MultiWorkerMirroredStrategy require task_type and task_id properties to be available in the ClusterResolver they are using. On the other hand, these concepts are not applicable in single-client strategies, such as tf.distribute.experimental.TPUStrategy, because the program is only expected to be run on one task, so there should not be a need to have code branches according to task type and task id. task_type is the name of the server's current named job (e.g. 'worker', 'ps' in a distributed parameterized training job). task_id is the ordinal index of the server within the task type. rpc_layer is the protocol used by TensorFlow to communicate with other TensorFlow servers in a distributed environment. Attributes environment Returns the current environment which TensorFlow is running in. There are two possible return values, "google" (when TensorFlow is running in a Google-internal environment) or an empty string (when TensorFlow is running elsewhere). If you are implementing a ClusterResolver that works in both the Google environment and the open-source world (for instance, a TPU ClusterResolver or similar), you will have to return the appropriate string depending on the environment, which you will have to detect. Otherwise, if you are implementing a ClusterResolver that will only work in open-source TensorFlow, you do not need to implement this property. task_id Returns the task id this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task id, which is the index of the instance within its task type. This is useful when the user needs to run specific code according to task index. For example, cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=0) ... if cluster_resolver.task_type == 'worker' and cluster_resolver.task_id == 0: # Perform something that's only applicable on 'worker' type, id 0. This # block will run on this particular instance since we've specified this # task to be a 'worker', id 0 in above cluster resolver. 
else: # Perform something that's only applicable on other ids. This block will # not run on this particular instance. Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.cluster_resolver.TPUClusterResolver. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring. task_type Returns the task type this ClusterResolver indicates. In a TensorFlow distributed environment, each job may have an applicable task type. Valid task types in TensorFlow include 'chief': a worker that is designated with more responsibility, 'worker': a regular worker for training/evaluation, 'ps': a parameter server, or 'evaluator': an evaluator that evaluates the checkpoints for metrics. See Multi-worker configuration for more information about 'chief' and 'worker' task type, which are most commonly used. Having access to such information is useful when the user needs to run specific code according to task types. For example, cluster_spec = tf.train.ClusterSpec({ "ps": ["localhost:2222", "localhost:2223"], "worker": ["localhost:2224", "localhost:2225", "localhost:2226"] }) # SimpleClusterResolver is used here for illustration; other cluster # resolvers may be used for other source of task type/id. simple_resolver = SimpleClusterResolver(cluster_spec, task_type="worker", task_id=1) ... if cluster_resolver.task_type == 'worker': # Perform something that's only applicable on workers. This block # will run on this particular instance since we've specified this task to # be a worker in above cluster resolver. elif cluster_resolver.task_type == 'ps': # Perform something that's only applicable on parameter servers. This # block will not run on this particular instance. Returns None if such information is not available or is not applicable in the current distributed environment, such as training with tf.distribute.experimental.TPUStrategy. For more information, please see tf.distribute.cluster_resolver.ClusterResolver's class docstring. Methods cluster_spec View source @abc.abstractmethod cluster_spec() Retrieve the current state of the cluster and return a tf.train.ClusterSpec. Returns A tf.train.ClusterSpec representing the state of the cluster at the moment this function is called. Implementors of this function must take care in ensuring that the ClusterSpec returned is up-to-date at the time of calling this function. This usually means retrieving the information from the underlying cluster management system every time this function is invoked and reconstructing a cluster_spec, rather than attempting to cache anything. master View source @abc.abstractmethod master( task_type=None, task_id=None, rpc_layer=None ) Retrieves the name or URL of the session master. Note: this is only useful for TensorFlow 1.x. Args task_type (Optional) The type of the TensorFlow task of the master. task_id (Optional) The index of the TensorFlow task of the master. rpc_layer (Optional) The RPC protocol for the given cluster. Returns The name or URL of the session master. Implementors of this function must take care in ensuring that the master returned is up-to-date at the time of calling this function. This usually means retrieving the master every time this function is invoked. num_accelerators View source num_accelerators( task_type=None, task_id=None, config_proto=None ) Returns the number of accelerator cores per worker. This returns the number of accelerator cores (such as GPUs and TPUs) available per worker. 
Optionally, callers may specify the task_type and task_id if they want to target a specific TensorFlow task when querying the number of accelerators. This supports heterogeneous environments, where the number of accelerator cores per host differs. Args task_type (Optional) The type of the TensorFlow task of the machine we want to query. task_id (Optional) The index of the TensorFlow task of the machine we want to query. config_proto (Optional) Configuration for starting a new session to query how many accelerator cores it has. Returns A map of accelerator types to number of cores.
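A brief sketch of resolving task information from TF_CONFIG with the concrete TFConfigClusterResolver (the cluster layout is an illustrative assumption):

import json
import os
import tensorflow as tf

# A sketch (assumes a TF_CONFIG-driven setup, as in the examples above).
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:12345", "localhost:23456"]},
    "task": {"type": "worker", "index": 0},
})
resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
print(resolver.task_type, resolver.task_id)  # worker 0
print(resolver.cluster_spec())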
doc_2142
Alias for get_facecolor.
doc_2143
Check whether two Interval objects overlap. Two intervals overlap if they share a common point, including closed endpoints. Intervals that only have an open endpoint in common do not overlap. Parameters other:Interval Interval to check against for an overlap. Returns bool True if the two intervals overlap. See also IntervalArray.overlaps The corresponding method for IntervalArray. IntervalIndex.overlaps The corresponding method for IntervalIndex. Examples >>> i1 = pd.Interval(0, 2) >>> i2 = pd.Interval(1, 3) >>> i1.overlaps(i2) True >>> i3 = pd.Interval(4, 5) >>> i1.overlaps(i3) False Intervals that share closed endpoints overlap: >>> i4 = pd.Interval(0, 1, closed='both') >>> i5 = pd.Interval(1, 2, closed='both') >>> i4.overlaps(i5) True Intervals that only have an open endpoint in common do not overlap: >>> i6 = pd.Interval(1, 2, closed='neither') >>> i4.overlaps(i6) False
doc_2144
Set multiple properties at once. Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None animated bool antialiased or aa bool clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None color or c unknown dash_capstyle CapStyle or {'butt', 'projecting', 'round'} dash_joinstyle JoinStyle or {'miter', 'round', 'bevel'} dashes sequence of floats (on/off ink in points) or (None, None) data (2, N) array or two 1D arrays drawstyle or ds {'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'}, default: 'default' figure Figure fillstyle {'full', 'left', 'right', 'bottom', 'top', 'none'} gid str in_layout bool label object linestyle or ls {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} linewidth or lw float locs_angles unknown marker marker style string, Path or MarkerStyle markeredgecolor or mec color markeredgewidth or mew float markerfacecolor or mfc color markerfacecoloralt or mfcalt color markersize or ms float markevery None or int or (int, int) or slice or list[int] or float or (float, float) or list[bool] path_effects AbstractPathEffect picker float or callable[[Artist, Event], tuple[bool, dict]] pickradius float rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None solid_capstyle CapStyle or {'butt', 'projecting', 'round'} solid_joinstyle JoinStyle or {'miter', 'round', 'bevel'} tick_out unknown ticksize unknown transform Transform url str visible bool xdata 1D array ydata 1D array zorder float
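A minimal sketch of batch-setting a few of the properties above on a Line2D-like artist (the property values are illustrative):

import matplotlib.pyplot as plt

# A sketch: set() applies several keyword properties in one call.
fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])
line.set(linewidth=2.0, linestyle="--", color="red", alpha=0.5)
plt.close(fig)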
doc_2145
Returns the log-transformed bounds on theta. Returns boundsndarray of shape (n_dims, 2) The log-transformed bounds on the kernel’s hyperparameters theta
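A brief sketch with an RBF kernel (the bounds are illustrative); note the values come back in log-space:

import numpy as np
from sklearn.gaussian_process.kernels import RBF

# A sketch: bounds is an array of shape (n_dims, 2), log-transformed.
kernel = RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e2))
print(kernel.bounds)          # log-space bounds
print(np.exp(kernel.bounds))  # back to the original (1e-2, 1e2) scale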
doc_2146
This is an alias of random_sample. See random_sample for the complete documentation.
doc_2147
tf.metrics.MeanSquaredLogarithmicError Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanSquaredLogarithmicError tf.keras.metrics.MeanSquaredLogarithmicError( name='mean_squared_logarithmic_error', dtype=None ) Args name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.MeanSquaredLogarithmicError() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) m.result().numpy() 0.12011322 m.reset_states() m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], sample_weight=[1, 0]) m.result().numpy() 0.24022643 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanSquaredLogarithmicError()]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates metric statistics. y_true and y_pred should have the same shape. Args y_true Ground truth values. shape = [batch_size, d0, .. dN]. y_pred The predicted values. shape = [batch_size, d0, .. dN]. sample_weight Optional sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each metric element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis (-1)). Returns Update op.
doc_2148
Bases: torch.distributions.exp_family.ExponentialFamily Beta distribution parameterized by concentration1 and concentration0. Example: >>> m = Beta(torch.tensor([0.5]), torch.tensor([0.5])) >>> m.sample() # Beta distributed with concentration concentration1 and concentration0 tensor([ 0.1046]) Parameters concentration1 (float or Tensor) – 1st concentration parameter of the distribution (often referred to as alpha) concentration0 (float or Tensor) – 2nd concentration parameter of the distribution (often referred to as beta) arg_constraints = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)} property concentration0 property concentration1 entropy() [source] expand(batch_shape, _instance=None) [source] has_rsample = True log_prob(value) [source] property mean rsample(sample_shape=()) [source] support = Interval(lower_bound=0.0, upper_bound=1.0) property variance
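Because has_rsample is True, samples drawn with rsample() are reparameterized, so gradients can flow back to the concentration parameters; a minimal sketch (the parameter values and loss are illustrative):

import torch
from torch.distributions import Beta

concentration1 = torch.tensor([2.0], requires_grad=True)
concentration0 = torch.tensor([3.0], requires_grad=True)
m = Beta(concentration1, concentration0)
x = m.rsample()                    # differentiable sample in (0, 1)
loss = (x - 0.5).pow(2).sum()
loss.backward()                    # populates .grad on both parameters
print(concentration1.grad, concentration0.grad)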
doc_2149
Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item.
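A minimal sketch of the blocking behavior, assuming the synchronous queue.Queue (a consumer thread frees a slot so the final put() can proceed):

import queue
import threading

q = queue.Queue(maxsize=2)
q.put("a")
q.put("b")                          # queue is now full
# Start a consumer that removes one item, freeing a slot.
threading.Thread(target=q.get).start()
q.put("c")                          # blocks until the consumer's get() runs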
doc_2150
Identify yourself to the SMTP server using HELO. The hostname argument defaults to the fully qualified domain name of the local host. The message returned by the server is stored as the helo_resp attribute of the object. In normal operation it should not be necessary to call this method explicitly. It will be implicitly called by sendmail() when necessary.
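A minimal sketch (assumes an SMTP server reachable at localhost:25; in normal use sendmail() issues the greeting for you):

import smtplib

with smtplib.SMTP("localhost", 25) as server:
    code, message = server.helo()   # explicit HELO; rarely needed
    print(code, server.helo_resp)   # helo_resp stores the server's reply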
doc_2151
Return the group database entry for the given group name. KeyError is raised if the entry asked for cannot be found.
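A minimal sketch (the group name is illustrative and may not exist on your system):

import grp

try:
    entry = grp.getgrnam("staff")   # hypothetical group name
    print(entry.gr_name, entry.gr_gid, entry.gr_mem)
except KeyError:
    print("no such group")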
doc_2152
Return the Transform instance mapping patch coordinates to data coordinates. For example, one may define a circle patch with a radius of 5 by providing coordinates for a unit circle, and a transform which scales the coordinates (the patch coordinates) by 5.
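A sketch of that example, using matplotlib's Circle patch and an explicit Affine2D scaling via Artist.set_transform (the radius and axis limits are illustrative):

import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from matplotlib.transforms import Affine2D

fig, ax = plt.subplots()
circle = Circle((0, 0), radius=1)                         # unit-circle coordinates
circle.set_transform(Affine2D().scale(5) + ax.transData)  # scale to radius 5 in data space
ax.add_patch(circle)
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
plt.show()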
doc_2153
New in Django 3.2. Property that returns a pathlib.Path representing the absolute filesystem path to a template for rendering the plain-text representation of the exception. Defaults to the Django provided template.
doc_2154
'blogs.blog': lambda o: "/blogs/%s/" % o.slug, 'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug), } The model name used in this setting should be all lowercase, regardless of the case of the actual model class name. ADMINS Default: [] (Empty list) A list of all the people who get code error notifications. When DEBUG=False and AdminEmailHandler is configured in LOGGING (done by default), Django emails these people the details of exceptions raised in the request/response cycle. Each item in the list should be a tuple of (Full name, email address). Example: [('John', 'john@example.com'), ('Mary', 'mary@example.com')] ALLOWED_HOSTS Default: [] (Empty list) A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations. Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request’s Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible for providing your own validation of the Host header (perhaps in a middleware; if so, this middleware must be listed first in MIDDLEWARE). Django also allows the fully qualified domain name (FQDN) of any entries. Some browsers include a trailing dot in the Host header which Django strips when performing host validation. If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation. When DEBUG is True and ALLOWED_HOSTS is empty, the host is validated against ['.localhost', '127.0.0.1', '[::1]']. ALLOWED_HOSTS is also checked when running tests. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection. APPEND_SLASH Default: True When set to True, if the request URL does not match any of the patterns in the URLconf and it doesn’t end in a slash, an HTTP redirect is issued to the same URL with a slash appended. Note that the redirect may cause any data submitted in a POST request to be lost. The APPEND_SLASH setting is only used if CommonMiddleware is installed (see Middleware). See also PREPEND_WWW. CACHES Default: { 'default': { 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache', } } A dictionary containing the settings for all caches to be used with Django. It is a nested dictionary whose contents map cache aliases to a dictionary containing the options for an individual cache. The CACHES setting must configure a default cache; any number of additional caches may also be specified. If you are using a cache backend other than the local memory cache, or you need to define multiple caches, other options will be required. The following cache options are available. BACKEND Default: '' (Empty string) The cache backend to use. 
The built-in cache backends are: 'django.core.cache.backends.db.DatabaseCache' 'django.core.cache.backends.dummy.DummyCache' 'django.core.cache.backends.filebased.FileBasedCache' 'django.core.cache.backends.locmem.LocMemCache' 'django.core.cache.backends.memcached.PyMemcacheCache' 'django.core.cache.backends.memcached.PyLibMCCache' 'django.core.cache.backends.redis.RedisCache' You can use a cache backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path of a cache backend class (i.e. mypackage.backends.whatever.WhateverCache). Changed in Django 3.2: The PyMemcacheCache backend was added. Changed in Django 4.0: The RedisCache backend was added. KEY_FUNCTION A string containing a dotted path to a function (or any callable) that defines how to compose a prefix, version and key into a final cache key. The default implementation is equivalent to the function: def make_key(key, key_prefix, version): return ':'.join([key_prefix, str(version), key]) You may use any key function you want, as long as it has the same argument signature. See the cache documentation for more information. KEY_PREFIX Default: '' (Empty string) A string that will be automatically included (prepended by default) to all cache keys used by the Django server. See the cache documentation for more information. LOCATION Default: '' (Empty string) The location of the cache to use. This might be the directory for a file system cache, a host and port for a memcache server, or an identifying name for a local memory cache. e.g.: CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache', 'LOCATION': '/var/tmp/django_cache', } } OPTIONS Default: None Extra parameters to pass to the cache backend. Available parameters vary depending on your cache backend. Some information on available parameters can be found in the cache arguments documentation. For more information, consult your backend module’s own documentation. TIMEOUT Default: 300 The number of seconds before a cache entry is considered stale. If the value of this setting is None, cache entries will not expire. A value of 0 causes keys to immediately expire (effectively “don’t cache”). VERSION Default: 1 The default version number for cache keys generated by the Django server. See the cache documentation for more information. CACHE_MIDDLEWARE_ALIAS Default: 'default' The cache connection to use for the cache middleware. CACHE_MIDDLEWARE_KEY_PREFIX Default: '' (Empty string) A string which will be prefixed to the cache keys generated by the cache middleware. This prefix is combined with the KEY_PREFIX setting; it does not replace it. See Django’s cache framework. CACHE_MIDDLEWARE_SECONDS Default: 600 The default number of seconds to cache a page for the cache middleware. See Django’s cache framework. CSRF_COOKIE_AGE Default: 31449600 (approximately 1 year, in seconds) The age of CSRF cookies, in seconds. The reason for setting a long-lived expiration time is to avoid problems in the case of a user closing a browser or bookmarking a page and then loading that page from a browser cache. Without persistent cookies, the form submission would fail in this case. Some browsers (specifically Internet Explorer) can disallow the use of persistent cookies or can have the indexes to the cookie jar corrupted on disk, thereby causing CSRF protection checks to (sometimes intermittently) fail. Change this setting to None to use session-based CSRF cookies, which keep the cookies in-memory instead of on persistent storage. 
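A minimal sketch of opting into session-based CSRF cookies as described above:

# settings.py
CSRF_COOKIE_AGE = None   # keep the CSRF cookie in memory for the browser session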
CSRF_COOKIE_DOMAIN Default: None The domain to be used when setting the CSRF cookie. This can be useful for easily allowing cross-subdomain requests to be excluded from the normal cross site request forgery protection. It should be set to a string such as ".example.com" to allow a POST request from a form on one subdomain to be accepted by a view served from another subdomain. Please note that the presence of this setting does not imply that Django’s CSRF protection is safe from cross-subdomain attacks by default - please see the CSRF limitations section. CSRF_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the CSRF cookie. If this is set to True, client-side JavaScript will not be able to access the CSRF cookie. Designating the CSRF cookie as HttpOnly doesn’t offer any practical protection because CSRF is only to protect against cross-domain attacks. If an attacker can read the cookie via JavaScript, they’re already on the same domain as far as the browser knows, so they can do anything they like anyway. (XSS is a much bigger hole than CSRF.) Although the setting offers little practical benefit, it’s sometimes required by security auditors. If you enable this and need to send the value of the CSRF token with an AJAX request, your JavaScript must pull the value from a hidden CSRF token form input instead of from the cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. CSRF_COOKIE_NAME Default: 'csrftoken' The name of the cookie to use for the CSRF authentication token. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Cross Site Request Forgery protection. CSRF_COOKIE_PATH Default: '/' The path set on the CSRF cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own CSRF cookie. CSRF_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the CSRF cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. CSRF_COOKIE_SECURE Default: False Whether to use a secure cookie for the CSRF cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent with an HTTPS connection. CSRF_USE_SESSIONS Default: False Whether to store the CSRF token in the user’s session instead of in a cookie. It requires the use of django.contrib.sessions. Storing the CSRF token in a cookie (Django’s default) is safe, but storing it in the session is common practice in other web frameworks and therefore sometimes demanded by security auditors. Since the default error views require the CSRF token, SessionMiddleware must appear in MIDDLEWARE before any middleware that may raise an exception to trigger an error view (such as PermissionDenied) if you’re using CSRF_USE_SESSIONS. See Middleware ordering. CSRF_FAILURE_VIEW Default: 'django.views.csrf.csrf_failure' A dotted path to the view function to be used when an incoming request is rejected by the CSRF protection. The function should have this signature: def csrf_failure(request, reason=""): ... where reason is a short message (intended for developers or logging, not for end users) indicating the reason the request was rejected. It should return an HttpResponseForbidden. 
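A minimal sketch of a custom failure view with that signature (the module path and message are illustrative):

# settings.py
CSRF_FAILURE_VIEW = 'myapp.views.csrf_failure'

# myapp/views.py
from django.http import HttpResponseForbidden

def csrf_failure(request, reason=""):
    return HttpResponseForbidden("CSRF verification failed: %s" % reason)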
django.views.csrf.csrf_failure() accepts an additional template_name parameter that defaults to '403_csrf.html'. If a template with that name exists, it will be used to render the page. CSRF_HEADER_NAME Default: 'HTTP_X_CSRFTOKEN' The name of the request header used for CSRF authentication. As with other HTTP headers in request.META, the header name received from the server is normalized by converting all characters to uppercase, replacing any hyphens with underscores, and adding an 'HTTP_' prefix to the name. For example, if your client sends a 'X-XSRF-TOKEN' header, the setting should be 'HTTP_X_XSRF_TOKEN'. CSRF_TRUSTED_ORIGINS Default: [] (Empty list) A list of trusted origins for unsafe requests (e.g. POST). For requests that include the Origin header, Django’s CSRF protection requires that header match the origin present in the Host header. For a secure unsafe request that doesn’t include the Origin header, the request must have a Referer header that matches the origin present in the Host header. These checks prevent, for example, a POST request from subdomain.example.com from succeeding against api.example.com. If you need cross-origin unsafe requests, continuing the example, add 'https://subdomain.example.com' to this list (and/or http://... if requests originate from an insecure page). The setting also supports subdomains, so you could add 'https://*.example.com', for example, to allow access from all subdomains of example.com. Changed in Django 4.0: The values in older versions must only include the hostname (possibly with a leading dot) and not the scheme or an asterisk. Also, Origin header checking isn’t performed in older versions. DATABASES Default: {} (Empty dictionary) A dictionary containing the settings for all databases to be used with Django. It is a nested dictionary whose contents map a database alias to a dictionary containing the options for an individual database. The DATABASES setting must configure a default database; any number of additional databases may also be specified. The simplest possible settings file is for a single-database setup using SQLite. This can be configured using the following: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'mydatabase', } } When connecting to other database backends, such as MariaDB, MySQL, Oracle, or PostgreSQL, additional connection parameters will be required. See the ENGINE setting below on how to specify other database types. This example is for PostgreSQL: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'mydatabase', 'USER': 'mydatabaseuser', 'PASSWORD': 'mypassword', 'HOST': '127.0.0.1', 'PORT': '5432', } } The following inner options that may be required for more complex configurations are available: ATOMIC_REQUESTS Default: False Set this to True to wrap each view in a transaction on this database. See Tying transactions to HTTP requests. AUTOCOMMIT Default: True Set this to False if you want to disable Django’s transaction management and implement your own. ENGINE Default: '' (Empty string) The database backend to use. The built-in database backends are: 'django.db.backends.postgresql' 'django.db.backends.mysql' 'django.db.backends.sqlite3' 'django.db.backends.oracle' You can use a database backend that doesn’t ship with Django by setting ENGINE to a fully-qualified path (i.e. mypackage.backends.whatever). HOST Default: '' (Empty string) Which host to use when connecting to the database. An empty string means localhost. Not used with SQLite. 
If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket. For example: "HOST": '/var/run/mysql' If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host. If you’re using PostgreSQL, by default (empty HOST), the connection to the database is done through UNIX domain sockets (‘local’ lines in pg_hba.conf). If your UNIX domain socket is not in the standard location, use the same value of unix_socket_directory from postgresql.conf. If you want to connect through TCP sockets, set HOST to ‘localhost’ or ‘127.0.0.1’ (‘host’ lines in pg_hba.conf). On Windows, you should always define HOST, as UNIX domain sockets are not available. NAME Default: '' (Empty string) The name of the database to use. For SQLite, it’s the full path to the database file. When specifying the path, always use forward slashes, even on Windows (e.g. C:/homes/user/mysite/sqlite3.db). CONN_MAX_AGE Default: 0 The lifetime of a database connection, as an integer of seconds. Use 0 to close database connections at the end of each request — Django’s historical behavior — and None for unlimited persistent connections. OPTIONS Default: {} (Empty dictionary) Extra parameters to use when connecting to the database. Available parameters vary depending on your database backend. Some information on available parameters can be found in the Database Backends documentation. For more information, consult your backend module’s own documentation. PASSWORD Default: '' (Empty string) The password to use when connecting to the database. Not used with SQLite. PORT Default: '' (Empty string) The port to use when connecting to the database. An empty string means the default port. Not used with SQLite. TIME_ZONE Default: None A string representing the time zone for this database connection or None. This inner option of the DATABASES setting accepts the same values as the general TIME_ZONE setting. When USE_TZ is True and this option is set, reading datetimes from the database returns aware datetimes in this time zone instead of UTC. When USE_TZ is False, it is an error to set this option. If the database backend doesn’t support time zones (e.g. SQLite, MySQL, Oracle), Django reads and writes datetimes in local time according to this option if it is set and in UTC if it isn’t. Changing the connection time zone changes how datetimes are read from and written to the database. If Django manages the database and you don’t have a strong reason to do otherwise, you should leave this option unset. It’s best to store datetimes in UTC because it avoids ambiguous or nonexistent datetimes during daylight saving time changes. Also, receiving datetimes in UTC keeps datetime arithmetic simple — there’s no need to consider potential offset changes over a DST transition. If you’re connecting to a third-party database that stores datetimes in a local time rather than UTC, then you must set this option to the appropriate time zone. Likewise, if Django manages the database but third-party systems connect to the same database and expect to find datetimes in local time, then you must set this option. If the database backend supports time zones (e.g. PostgreSQL), the TIME_ZONE option is very rarely needed. It can be changed at any time; the database takes care of converting datetimes to the desired time zone. 
Setting the time zone of the database connection may be useful for running raw SQL queries involving date/time functions provided by the database, such as date_trunc, because their results depend on the time zone. However, this has a downside: receiving all datetimes in local time makes datetime arithmetic more tricky — you must account for possible offset changes over DST transitions. Consider converting to local time explicitly with AT TIME ZONE in raw SQL queries instead of setting the TIME_ZONE option. DISABLE_SERVER_SIDE_CURSORS Default: False Set this to True if you want to disable the use of server-side cursors with QuerySet.iterator(). Transaction pooling and server-side cursors describes the use case. This is a PostgreSQL-specific setting. USER Default: '' (Empty string) The username to use when connecting to the database. Not used with SQLite. TEST Default: {} (Empty dictionary) A dictionary of settings for test databases; for more details about the creation and use of test databases, see The test database. Here’s an example with a test database configuration: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'USER': 'mydatabaseuser', 'NAME': 'mydatabase', 'TEST': { 'NAME': 'mytestdatabase', }, }, } The following keys in the TEST dictionary are available: CHARSET Default: None The character set encoding used to create the test database. The value of this string is passed directly through to the database, so its format is backend-specific. Supported by the PostgreSQL (postgresql) and MySQL (mysql) backends. COLLATION Default: None The collation order to use when creating the test database. This value is passed directly to the backend, so its format is backend-specific. Only supported for the mysql backend (see the MySQL manual for details). DEPENDENCIES Default: ['default'], for all databases other than default, which has no dependencies. The creation-order dependencies of the database. See the documentation on controlling the creation order of test databases for details. MIGRATE Default: True When set to False, migrations won’t run when creating the test database. This is similar to setting None as a value in MIGRATION_MODULES, but for all apps. MIRROR Default: None The alias of the database that this database should mirror during testing. This setting exists to allow for testing of primary/replica (referred to as master/slave by some databases) configurations of multiple databases. See the documentation on testing primary/replica configurations for details. NAME Default: None The name of database to use when running the test suite. If the default value (None) is used with the SQLite database engine, the tests will use a memory resident database. For all other database engines the test database will use the name 'test_' + DATABASE_NAME. See The test database. SERIALIZE Boolean value to control whether or not the default test runner serializes the database into an in-memory JSON string before running tests (used to restore the database state between tests if you don’t have transactions). You can set this to False to speed up creation time if you don’t have any test classes with serialized_rollback=True. Deprecated since version 4.0: This setting is deprecated as it can be inferred from the databases with the serialized_rollback option enabled. TEMPLATE This is a PostgreSQL-specific setting. The name of a template (e.g. 'template0') from which to create the test database. CREATE_DB Default: True This is an Oracle-specific setting. 
If it is set to False, the test tablespaces won’t be automatically created at the beginning of the tests or dropped at the end. CREATE_USER Default: True This is an Oracle-specific setting. If it is set to False, the test user won’t be automatically created at the beginning of the tests and dropped at the end. USER Default: None This is an Oracle-specific setting. The username to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will use 'test_' + USER. PASSWORD Default: None This is an Oracle-specific setting. The password to use when connecting to the Oracle database that will be used when running tests. If not provided, Django will generate a random password. ORACLE_MANAGED_FILES Default: False This is an Oracle-specific setting. If set to True, Oracle Managed Files (OMF) tablespaces will be used. DATAFILE and DATAFILE_TMP will be ignored. TBLSPACE Default: None This is an Oracle-specific setting. The name of the tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER. TBLSPACE_TMP Default: None This is an Oracle-specific setting. The name of the temporary tablespace that will be used when running tests. If not provided, Django will use 'test_' + USER + '_temp'. DATAFILE Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE. If not provided, Django will use TBLSPACE + '.dbf'. DATAFILE_TMP Default: None This is an Oracle-specific setting. The name of the datafile to use for the TBLSPACE_TMP. If not provided, Django will use TBLSPACE_TMP + '.dbf'. DATAFILE_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE is allowed to grow to. DATAFILE_TMP_MAXSIZE Default: '500M' This is an Oracle-specific setting. The maximum size that the DATAFILE_TMP is allowed to grow to. DATAFILE_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE. DATAFILE_TMP_SIZE Default: '50M' This is an Oracle-specific setting. The initial size of the DATAFILE_TMP. DATAFILE_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE is extended when more space is required. DATAFILE_TMP_EXTSIZE Default: '25M' This is an Oracle-specific setting. The amount by which the DATAFILE_TMP is extended when more space is required. DATA_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size in bytes that a request body may be before a SuspiciousOperation (RequestDataTooBig) is raised. The check is done when accessing request.body or request.POST and is calculated against the total request size excluding any file upload data. You can set this to None to disable the check. Applications that are expected to receive unusually large form posts should tune this setting. The amount of request data is correlated to the amount of memory needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. See also FILE_UPLOAD_MAX_MEMORY_SIZE. DATA_UPLOAD_MAX_NUMBER_FIELDS Default: 1000 The maximum number of parameters that may be received via GET or POST before a SuspiciousOperation (TooManyFields) is raised. You can set this to None to disable the check. Applications that are expected to receive an unusually large number of form fields should tune this setting. 
The number of request parameters is correlated to the amount of time needed to process the request and populate the GET and POST dictionaries. Large requests could be used as a denial-of-service attack vector if left unchecked. Since web servers don’t typically perform deep request inspection, it’s not possible to perform a similar check at that level. DATABASE_ROUTERS Default: [] (Empty list) The list of routers that will be used to determine which database to use when performing a database query. See the documentation on automatic database routing in multi database configurations. DATE_FORMAT Default: 'N j, Y' (e.g. Feb. 4, 2003) The default formatting to use for displaying date fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATETIME_FORMAT, TIME_FORMAT and SHORT_DATE_FORMAT. DATE_INPUT_FORMATS Default: [ '%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', # '2006-10-25', '10/25/2006', '10/25/06' '%b %d %Y', '%b %d, %Y', # 'Oct 25 2006', 'Oct 25, 2006' '%d %b %Y', '%d %b, %Y', # '25 Oct 2006', '25 Oct, 2006' '%B %d %Y', '%B %d, %Y', # 'October 25 2006', 'October 25, 2006' '%d %B %Y', '%d %B, %Y', # '25 October 2006', '25 October, 2006' ] A list of formats that will be accepted when inputting data on a date field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATETIME_INPUT_FORMATS and TIME_INPUT_FORMATS. DATETIME_FORMAT Default: 'N j, Y, P' (e.g. Feb. 4, 2003, 4 p.m.) The default formatting to use for displaying datetime fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT, TIME_FORMAT and SHORT_DATETIME_FORMAT. DATETIME_INPUT_FORMATS Default: [ '%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59' '%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200' '%Y-%m-%d %H:%M', # '2006-10-25 14:30' '%m/%d/%Y %H:%M:%S', # '10/25/2006 14:30:59' '%m/%d/%Y %H:%M:%S.%f', # '10/25/2006 14:30:59.000200' '%m/%d/%Y %H:%M', # '10/25/2006 14:30' '%m/%d/%y %H:%M:%S', # '10/25/06 14:30:59' '%m/%d/%y %H:%M:%S.%f', # '10/25/06 14:30:59.000200' '%m/%d/%y %H:%M', # '10/25/06 14:30' ] A list of formats that will be accepted when inputting data on a datetime field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. Date-only formats are not included as datetime fields will automatically try DATE_INPUT_FORMATS as a last resort. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and TIME_INPUT_FORMATS. DEBUG Default: False A boolean that turns on/off debug mode. Never deploy a site into production with DEBUG turned on. One of the main features of debug mode is the display of detailed error pages. If your app raises an exception when DEBUG is True, Django will display a detailed traceback, including a lot of metadata about your environment, such as all the currently defined Django settings (from settings.py). 
As a security measure, Django will not include settings that might be sensitive, such as SECRET_KEY. Specifically, it will exclude any setting whose name includes any of the following: 'API' 'KEY' 'PASS' 'SECRET' 'SIGNATURE' 'TOKEN' Note that these are partial matches. 'PASS' will also match PASSWORD, just as 'TOKEN' will also match TOKENIZED and so on. Still, note that there are always going to be sections of your debug output that are inappropriate for public consumption. File paths, configuration options and the like all give attackers extra information about your server. It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you’re debugging, but it’ll rapidly consume memory on a production server. Finally, if DEBUG is False, you also need to properly set the ALLOWED_HOSTS setting. Failing to do so will result in all requests being returned as “Bad Request (400)”. Note The default settings.py file created by django-admin startproject sets DEBUG = True for convenience. DEBUG_PROPAGATE_EXCEPTIONS Default: False If set to True, Django’s exception handling of view functions (handler500, or the debug view if DEBUG is True) and logging of 500 responses (django.request) is skipped and exceptions propagate upward. This can be useful for some test setups. It shouldn’t be used on a live site unless you want your web server (instead of Django) to generate “Internal Server Error” responses. In that case, make sure your server doesn’t show the stack trace or other sensitive information in the response. DECIMAL_SEPARATOR Default: '.' (Dot) Default decimal separator used when formatting decimal numbers. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. DEFAULT_AUTO_FIELD New in Django 3.2. Default: 'django.db.models.AutoField' Default primary key field type to use for models that don’t have a field with primary_key=True. Migrating auto-created through tables The value of DEFAULT_AUTO_FIELD will be respected when creating new auto-created through tables for many-to-many relationships. Unfortunately, the primary keys of existing auto-created through tables cannot currently be updated by the migrations framework. This means that if you switch the value of DEFAULT_AUTO_FIELD and then generate migrations, the primary keys of the related models will be updated, as will the foreign keys from the through table, but the primary key of the auto-created through table will not be migrated. In order to address this, you should add a RunSQL operation to your migrations to perform the required ALTER TABLE step. You can check the existing table name through sqlmigrate, dbshell, or with the field’s remote_field.through._meta.db_table property. Explicitly defined through models are already handled by the migrations system. Allowing automatic migrations for the primary key of existing auto-created through tables may be implemented at a later date. DEFAULT_CHARSET Default: 'utf-8' Default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. Used when constructing the Content-Type header. DEFAULT_EXCEPTION_REPORTER Default: 'django.views.debug.ExceptionReporter' Default exception reporter class to be used if none has been assigned to the HttpRequest instance yet. See Custom error reports. 
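A minimal sketch of plugging in a custom exception reporter as described above (the module path and the extra field are illustrative; get_traceback_data() is the documented hook on ExceptionReporter):

# settings.py
DEFAULT_EXCEPTION_REPORTER = 'myapp.reporters.CustomExceptionReporter'

# myapp/reporters.py
from django.views.debug import ExceptionReporter

class CustomExceptionReporter(ExceptionReporter):
    def get_traceback_data(self):
        data = super().get_traceback_data()
        # Add an extra key to the template context for error reports.
        data['request_path'] = getattr(self.request, 'path', None)
        return data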
DEFAULT_EXCEPTION_REPORTER_FILTER Default: 'django.views.debug.SafeExceptionReporterFilter' Default exception reporter filter class to be used if none has been assigned to the HttpRequest instance yet. See Filtering error reports. DEFAULT_FILE_STORAGE Default: 'django.core.files.storage.FileSystemStorage' Default file storage class to be used for any file-related operations that don’t specify a particular storage system. See Managing files. DEFAULT_FROM_EMAIL Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. DEFAULT_INDEX_TABLESPACE Default: '' (Empty string) Default tablespace to use for indexes on fields that don’t specify one, if the backend supports it (see Tablespaces). DEFAULT_TABLESPACE Default: '' (Empty string) Default tablespace to use for models that don’t specify one, if the backend supports it (see Tablespaces). DISALLOWED_USER_AGENTS Default: [] (Empty list) List of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bots/crawlers. This is only used if CommonMiddleware is installed (see Middleware). EMAIL_BACKEND Default: 'django.core.mail.backends.smtp.EmailBackend' The backend to use for sending emails. For the list of available backends see Sending email. EMAIL_FILE_PATH Default: Not defined The directory used by the file email backend to store output files. EMAIL_HOST Default: 'localhost' The host to use for sending email. See also EMAIL_PORT. EMAIL_HOST_PASSWORD Default: '' (Empty string) Password to use for the SMTP server defined in EMAIL_HOST. This setting is used in conjunction with EMAIL_HOST_USER when authenticating to the SMTP server. If either of these settings is empty, Django won’t attempt authentication. See also EMAIL_HOST_USER. EMAIL_HOST_USER Default: '' (Empty string) Username to use for the SMTP server defined in EMAIL_HOST. If empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD. EMAIL_PORT Default: 25 Port to use for the SMTP server defined in EMAIL_HOST. EMAIL_SUBJECT_PREFIX Default: '[Django] ' Subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space. EMAIL_USE_LOCALTIME Default: False Whether to send the SMTP Date header of email messages in the local time zone (True) or in UTC (False). EMAIL_USE_TLS Default: False Whether to use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_USE_SSL. EMAIL_USE_SSL Default: False Whether to use an implicit TLS (secure) connection when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_USE_TLS. Note that EMAIL_USE_TLS/EMAIL_USE_SSL are mutually exclusive, so only set one of those settings to True. EMAIL_SSL_CERTFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. 
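A minimal sketch combining the SMTP email settings above (the host and credentials are illustrative):

# settings.py
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'mailer'
EMAIL_HOST_PASSWORD = 'secret'
EMAIL_USE_TLS = True   # explicit TLS on port 587; mutually exclusive with EMAIL_USE_SSL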
EMAIL_SSL_KEYFILE Default: None If EMAIL_USE_SSL or EMAIL_USE_TLS is True, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. Note that setting EMAIL_SSL_CERTFILE and EMAIL_SSL_KEYFILE doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled. EMAIL_TIMEOUT Default: None Specifies a timeout in seconds for blocking operations like the connection attempt. FILE_UPLOAD_HANDLERS Default: [ 'django.core.files.uploadhandler.MemoryFileUploadHandler', 'django.core.files.uploadhandler.TemporaryFileUploadHandler', ] A list of handlers to use for uploading. Changing this setting allows complete customization – even replacement – of Django’s upload process. See Managing files for details. FILE_UPLOAD_MAX_MEMORY_SIZE Default: 2621440 (i.e. 2.5 MB). The maximum size (in bytes) that an upload will be before it gets streamed to the file system. See Managing files for details. See also DATA_UPLOAD_MAX_MEMORY_SIZE. FILE_UPLOAD_DIRECTORY_PERMISSIONS Default: None The numeric mode to apply to directories created in the process of uploading files. This setting also determines the default permissions for collected static directories when using the collectstatic management command. See collectstatic for details on overriding it. This value mirrors the functionality and caveats of the FILE_UPLOAD_PERMISSIONS setting. FILE_UPLOAD_PERMISSIONS Default: 0o644 The numeric mode (i.e. 0o644) to set newly uploaded files to. For more information about what these modes mean, see the documentation for os.chmod(). If None, you’ll get operating-system dependent behavior. On most platforms, temporary files will have a mode of 0o600, and files saved from memory will be saved using the system’s standard umask. For security reasons, these permissions aren’t applied to the temporary files that are stored in FILE_UPLOAD_TEMP_DIR. This setting also determines the default permissions for collected static files when using the collectstatic management command. See collectstatic for details on overriding it. Warning Always prefix the mode with 0o . If you’re not familiar with file modes, please note that the 0o prefix is very important: it indicates an octal number, which is the way that modes must be specified. If you try to use 644, you’ll get totally incorrect behavior. FILE_UPLOAD_TEMP_DIR Default: None The directory to store data to (typically files larger than FILE_UPLOAD_MAX_MEMORY_SIZE) temporarily while uploading files. If None, Django will use the standard temporary directory for the operating system. For example, this will default to /tmp on *nix-style operating systems. See Managing files for details. FIRST_DAY_OF_WEEK Default: 0 (Sunday) A number representing the first day of the week. This is especially useful when displaying a calendar. This value is only used when not using format internationalization, or when a format cannot be found for the current locale. The value must be an integer from 0 to 6, where 0 means Sunday, 1 means Monday and so on. FIXTURE_DIRS Default: [] (Empty list) List of directories searched for fixture files, in addition to the fixtures directory of each application, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Providing data with fixtures and Fixture loading. 
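A minimal sketch of the upload permission settings above (the modes are illustrative; note the required 0o octal prefix):

# settings.py
FILE_UPLOAD_PERMISSIONS = 0o644                 # uploaded files: rw-r--r--
FILE_UPLOAD_DIRECTORY_PERMISSIONS = 0o755       # created directories: rwxr-xr-x
FILE_UPLOAD_MAX_MEMORY_SIZE = 5 * 1024 * 1024   # stream uploads over 5 MB to disk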
FORCE_SCRIPT_NAME Default: None If not None, this will be used as the value of the SCRIPT_NAME environment variable in any HTTP request. This setting can be used to override the server-provided value of SCRIPT_NAME, which may be a rewritten version of the preferred value or not supplied at all. It is also used by django.setup() to set the URL resolver script prefix outside of the request/response cycle (e.g. in management commands and standalone scripts) to generate correct URLs when SCRIPT_NAME is not /. FORM_RENDERER Default: 'django.forms.renderers.DjangoTemplates' The class that renders forms and form widgets. It must implement the low-level render API. Included form renderers are: 'django.forms.renderers.DjangoTemplates' 'django.forms.renderers.Jinja2' FORMAT_MODULE_PATH Default: None A full Python path to a Python package that contains custom format definitions for project locales. If not None, Django will check for a formats.py file, under the directory named as the current locale, and will use the formats defined in this file. For example, if FORMAT_MODULE_PATH is set to mysite.formats, and current language is en (English), Django will expect a directory tree like: mysite/ formats/ __init__.py en/ __init__.py formats.py You can also set this setting to a list of Python paths, for example: FORMAT_MODULE_PATH = [ 'mysite.formats', 'some_app.formats', ] When Django searches for a certain format, it will go through all given Python paths until it finds a module that actually defines the given format. This means that formats defined in packages farther up in the list will take precedence over the same formats in packages farther down. Available formats are: DATE_FORMAT DATE_INPUT_FORMATS DATETIME_FORMAT, DATETIME_INPUT_FORMATS DECIMAL_SEPARATOR FIRST_DAY_OF_WEEK MONTH_DAY_FORMAT NUMBER_GROUPING SHORT_DATE_FORMAT SHORT_DATETIME_FORMAT THOUSAND_SEPARATOR TIME_FORMAT TIME_INPUT_FORMATS YEAR_MONTH_FORMAT IGNORABLE_404_URLS Default: [] (Empty list) List of compiled regular expression objects describing URLs that should be ignored when reporting HTTP 404 errors via email (see How to manage error reporting). Regular expressions are matched against request's full paths (including query string, if any). Use this if your site does not provide a commonly requested file such as favicon.ico or robots.txt. This is only used if BrokenLinkEmailsMiddleware is enabled (see Middleware). INSTALLED_APPS Default: [] (Empty list) A list of strings designating all applications that are enabled in this Django installation. Each string should be a dotted Python path to: an application configuration class (preferred), or a package containing an application. Learn more about application configurations. Use the application registry for introspection Your code should never access INSTALLED_APPS directly. Use django.apps.apps instead. Application names and labels must be unique in INSTALLED_APPS Application names — the dotted Python path to the application package — must be unique. There is no way to include the same application twice, short of duplicating its code under another name. Application labels — by default the final part of the name — must be unique too. For example, you can’t include both django.contrib.auth and myproject.auth. However, you can relabel an application with a custom configuration that defines a different label. These rules apply regardless of whether INSTALLED_APPS references application configuration classes or application packages. 
When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. INTERNAL_IPS Default: [] (Empty list) A list of IP addresses, as strings, that: Allow the debug() context processor to add some variables to the template context. Can use the admindocs bookmarklets even if not logged in as a staff user. Are marked as “internal” (as opposed to “EXTERNAL”) in AdminEmailHandler emails. LANGUAGE_CODE Default: 'en-us' A string representing the language code for this installation. This should be in standard language ID format. For example, U.S. English is "en-us". See also the list of language identifiers and Internationalization and localization. USE_I18N must be active for this setting to have any effect. It serves two purposes: If the locale middleware isn’t in use, it decides which translation is served to all users. If the locale middleware is active, it provides a fallback language in case the user’s preferred language can’t be determined or is not supported by the website. It also provides the fallback translation when a translation for a given literal doesn’t exist for the user’s preferred language. See How Django discovers language preference for more details. LANGUAGE_COOKIE_AGE Default: None (expires at browser close) The age of the language cookie, in seconds. LANGUAGE_COOKIE_DOMAIN Default: None The domain to use for the language cookie. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. Be cautious when updating this setting on a production site. If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies that have the old domain will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting) and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_HTTPONLY Default: False Whether to use HttpOnly flag on the language cookie. If this is set to True, client-side JavaScript will not be able to access the language cookie. See SESSION_COOKIE_HTTPONLY for details on HttpOnly. LANGUAGE_COOKIE_NAME Default: 'django_language' The name of the cookie to use for the language cookie. This can be whatever you want (as long as it’s different from the other cookie names in your application). See Internationalization and localization. LANGUAGE_COOKIE_PATH Default: '/' The path set on the language cookie. This should either match the URL path of your Django installation or be a parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths and each instance will only see its own language cookie. Be cautious when updating this setting on a production site. If you update this setting to use a deeper path than it previously used, existing user cookies that have the old path will not be updated. This will result in site users being unable to switch the language as long as these cookies persist. 
The only safe and reliable option to perform the switch is to change the language cookie name permanently (via the LANGUAGE_COOKIE_NAME setting), and to add a middleware that copies the value from the old cookie to a new one and then deletes the old one. LANGUAGE_COOKIE_SAMESITE Default: None The value of the SameSite flag on the language cookie. This flag prevents the cookie from being sent in cross-site requests. See SESSION_COOKIE_SAMESITE for details about SameSite. LANGUAGE_COOKIE_SECURE Default: False Whether to use a secure cookie for the language cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. LANGUAGES Default: A list of all available languages. This list is continually growing and including a copy here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py. The list is a list of two-tuples in the format (language code, language name) – for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Internationalization and localization. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, you can mark the language names as translation strings using the gettext_lazy() function. Here’s a sample settings file: from django.utils.translation import gettext_lazy as _ LANGUAGES = [ ('de', _('German')), ('en', _('English')), ] LANGUAGES_BIDI Default: A list of all language codes that are written right-to-left. You can see the current list of these languages by looking in django/conf/global_settings.py. The list contains language codes for languages that are written right-to-left. Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages. If you define a custom LANGUAGES setting, the list of bidirectional languages may contain language codes which are not enabled on a given site. LOCALE_PATHS Default: [] (Empty list) A list of directories where Django looks for translation files. See How Django discovers translations. Example: LOCALE_PATHS = [ '/home/www/project/common_files/locale', '/var/local/translations/locale', ] Django will look within each of these paths for the <locale_code>/LC_MESSAGES directories containing the actual translation files. LOGGING Default: A logging configuration dictionary. A data structure containing configuration information. The contents of this data structure will be passed as the argument to the configuration method described in LOGGING_CONFIG. Among other things, the default logging configuration passes HTTP 500 server errors to an email log handler when DEBUG is False. See also Configuring logging. You can see the default logging configuration by looking in django/utils/log.py. LOGGING_CONFIG Default: 'logging.config.dictConfig' A path to a callable that will be used to configure logging in the Django project. Points at an instance of Python’s dictConfig configuration method by default. If you set LOGGING_CONFIG to None, the logging configuration process will be skipped. MANAGERS Default: [] (Empty list) A list in the same format as ADMINS that specifies who should get broken link notifications when BrokenLinkEmailsMiddleware is enabled. 
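A minimal sketch of a LOGGING dictionary in the dictConfig format described above (the handler and level choices are illustrative):

# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}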
MEDIA_ROOT Default: '' (Empty string) Absolute filesystem path to the directory that will hold user-uploaded files. Example: "/var/www/example.com/media/" See also MEDIA_URL. Warning MEDIA_ROOT and STATIC_ROOT must have different values. Before STATIC_ROOT was introduced, it was common to rely or fallback on MEDIA_ROOT to also serve static files; however, since this can have serious security implications, there is a validation check to prevent it. MEDIA_URL Default: '' (Empty string) URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set to a non-empty value. You will need to configure these files to be served in both development and production environments. If you want to use {{ MEDIA_URL }} in your templates, add 'django.template.context_processors.media' in the 'context_processors' option of TEMPLATES. Example: "http://media.example.com/" Warning There are security risks if you are accepting uploaded content from untrusted users! See the security guide’s topic on User-uploaded content for mitigation details. Warning MEDIA_URL and STATIC_URL must have different values. See MEDIA_ROOT for more details. Note If MEDIA_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. MIDDLEWARE Default: None A list of middleware to use. See Middleware. MIGRATION_MODULES Default: {} (Empty dictionary) A dictionary specifying the package where migration modules can be found on a per-app basis. The default value of this setting is an empty dictionary, but the default package name for migration modules is migrations. Example: {'blog': 'blog.db_migrations'} In this case, migrations pertaining to the blog app will be contained in the blog.db_migrations package. If you provide the app_label argument, makemigrations will automatically create the package if it doesn’t already exist. When you supply None as a value for an app, Django will consider the app as an app without migrations regardless of an existing migrations submodule. This can be used, for example, in a test settings file to skip migrations while testing (tables will still be created for the apps’ models). To disable migrations for all apps during tests, you can set the MIGRATE to False instead. If MIGRATION_MODULES is used in your general project settings, remember to use the migrate --run-syncdb option if you want to create tables for the app. MONTH_DAY_FORMAT Default: 'F j' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the month and day are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would say “January 1,” whereas Spanish might say “1 Enero.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and YEAR_MONTH_FORMAT. NUMBER_GROUPING Default: 0 Number of digits grouped together on the integer part of a number. Common use is to display a thousand separator. If this setting is 0, then no grouping will be applied to the number. 
If this setting is greater than 0, then THOUSAND_SEPARATOR will be used as the separator between those groups. Some locales use non-uniform digit grouping, e.g. 10,00,00,000 in en_IN. For this case, you can provide a sequence with the number of digit group sizes to be applied. The first number defines the size of the group preceding the decimal delimiter, and each number that follows defines the size of preceding groups. If the sequence is terminated with -1, no further grouping is performed. If the sequence terminates with a 0, the last group size is used for the remainder of the number. Example tuple for en_IN: NUMBER_GROUPING = (3, 2, 0) Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also DECIMAL_SEPARATOR, THOUSAND_SEPARATOR and USE_THOUSAND_SEPARATOR. PREPEND_WWW Default: False Whether to prepend the “www.” subdomain to URLs that don’t have it. This is only used if CommonMiddleware is installed (see Middleware). See also APPEND_SLASH. ROOT_URLCONF Default: Not defined A string representing the full Python import path to your root URLconf, for example "mydjangoapps.urls". Can be overridden on a per-request basis by setting the attribute urlconf on the incoming HttpRequest object. See How Django processes a request for details. SECRET_KEY Default: '' (Empty string) A secret key for a particular Django installation. This is used to provide cryptographic signing, and should be set to a unique, unpredictable value. django-admin startproject automatically adds a randomly-generated SECRET_KEY to each new project. Uses of the key shouldn’t assume that it’s text or bytes. Every use should go through force_str() or force_bytes() to convert it to the desired type. Django will refuse to start if SECRET_KEY is not set. Warning Keep this value secret. Running Django with a known SECRET_KEY defeats many of Django’s security protections, and can lead to privilege escalation and remote code execution vulnerabilities. The secret key is used for: All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash(). All messages if you are using CookieStorage or FallbackStorage. All PasswordResetView tokens. Any usage of cryptographic signing, unless a different key is provided. If you rotate your secret key, all of the above will be invalidated. Secret keys are not used for passwords of users and key rotation will not affect them. Note The default settings.py file created by django-admin startproject creates a unique SECRET_KEY for convenience. SECURE_CONTENT_TYPE_NOSNIFF Default: True If True, the SecurityMiddleware sets the X-Content-Type-Options: nosniff header on all responses that do not already have it. SECURE_CROSS_ORIGIN_OPENER_POLICY New in Django 4.0. Default: 'same-origin' Unless set to None, the SecurityMiddleware sets the Cross-Origin Opener Policy header on all responses that do not already have it to the value provided. SECURE_HSTS_INCLUDE_SUBDOMAINS Default: False If True, the SecurityMiddleware adds the includeSubDomains directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. Warning Setting this incorrectly can irreversibly (for the value of SECURE_HSTS_SECONDS) break your site. Read the HTTP Strict Transport Security documentation first. 
SECURE_HSTS_PRELOAD Default: False If True, the SecurityMiddleware adds the preload directive to the HTTP Strict Transport Security header. It has no effect unless SECURE_HSTS_SECONDS is set to a non-zero value. SECURE_HSTS_SECONDS Default: 0 If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not already have it. Warning Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first. SECURE_PROXY_SSL_HEADER Default: None A tuple representing an HTTP header/value combination that signifies a request is secure. This controls the behavior of the request object’s is_secure() method. By default, is_secure() determines if a request is secure by confirming that a requested URL uses https://. This method is important for Django’s CSRF protection, and it may be used by your own code or third-party apps. If your Django app is behind a proxy, though, the proxy may be “swallowing” whether the original request uses HTTPS or not. If there is a non-HTTPS connection between the proxy and Django then is_secure() would always return False – even for requests that were made via HTTPS by the end user. In contrast, if there is an HTTPS connection between the proxy and Django then is_secure() would always return True – even for requests that were made originally via HTTP. In this situation, configure your proxy to set a custom HTTP header that tells Django whether the request came in via HTTPS, and set SECURE_PROXY_SSL_HEADER so that Django knows what header to look for. Set a tuple with two elements – the name of the header to look for and the required value. For example: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') This tells Django to trust the X-Forwarded-Proto header that comes from our proxy, and any time its value is 'https', then the request is guaranteed to be secure (i.e., it originally came in via HTTPS). You should only set this setting if you control your proxy or have some other guarantee that it sets/strips this header appropriately. Note that the header needs to be in the format as used by request.META – all caps and likely starting with HTTP_. (Remember, Django automatically adds 'HTTP_' to the start of x-header names before making the header available in request.META.) Warning Modifying this setting can compromise your site’s security. Ensure you fully understand your setup before changing it. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your Django app is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to Django, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None and find another way of determining HTTPS, perhaps via custom middleware. SECURE_REDIRECT_EXEMPT Default: [] (Empty list) If a URL path matches a regular expression in this list, the request will not be redirected to HTTPS. The SecurityMiddleware strips leading slashes from URL paths, so patterns shouldn’t include them, e.g. SECURE_REDIRECT_EXEMPT = [r'^no-ssl/$', …]. If SECURE_SSL_REDIRECT is False, this setting has no effect. 
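To see how these deployment settings fit together, here is a minimal settings sketch for a site behind a TLS-terminating proxy. This is an illustration, not a recommendation: the header name must match what your proxy actually sets and strips, and the HSTS timeout is deliberately small for an initial rollout.

# settings.py (illustrative values; verify your proxy's behavior first)
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = True            # send plain-HTTP requests to HTTPS
SECURE_HSTS_SECONDS = 3600            # start small; increase once HTTPS is confirmed working
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True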
SECURE_REFERRER_POLICY Default: 'same-origin' If configured, the SecurityMiddleware sets the Referrer Policy header on all responses that do not already have it to the value provided. SECURE_SSL_HOST Default: None If a string (e.g. secure.example.com), all SSL redirects will be directed to this host rather than the originally-requested host (e.g. www.example.com). If SECURE_SSL_REDIRECT is False, this setting has no effect. SECURE_SSL_REDIRECT Default: False If True, the SecurityMiddleware redirects all non-HTTPS requests to HTTPS (except for those URLs matching a regular expression listed in SECURE_REDIRECT_EXEMPT). Note If turning this to True causes infinite redirects, it probably means your site is running behind a proxy and can’t tell which requests are secure and which are not. Your proxy likely sets a header to indicate secure requests; you can correct the problem by finding out what that header is and configuring the SECURE_PROXY_SSL_HEADER setting accordingly. SERIALIZATION_MODULES Default: Not defined A dictionary of modules containing serializer definitions (provided as strings), keyed by a string identifier for that serialization type. For example, to define a YAML serializer, use: SERIALIZATION_MODULES = {'yaml': 'path.to.yaml_serializer'} SERVER_EMAIL Default: 'root@localhost' The email address that error messages come from, such as those sent to ADMINS and MANAGERS. Why are my emails sent from a different address? This address is used only for error messages. It is not the address that regular email messages sent with send_mail() come from; for that, see DEFAULT_FROM_EMAIL. SHORT_DATE_FORMAT Default: 'm/d/Y' (e.g. 12/31/2003) An available formatting that can be used for displaying date fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATETIME_FORMAT. SHORT_DATETIME_FORMAT Default: 'm/d/Y P' (e.g. 12/31/2003 4 p.m.) An available formatting that can be used for displaying datetime fields on templates. Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT and SHORT_DATE_FORMAT. SIGNING_BACKEND Default: 'django.core.signing.TimestampSigner' The backend used for signing cookies and other data. See also the Cryptographic signing documentation. SILENCED_SYSTEM_CHECKS Default: [] (Empty list) A list of identifiers of messages generated by the system check framework (i.e. ["models.W001"]) that you wish to permanently acknowledge and ignore. Silenced checks will not be output to the console. See also the System check framework documentation. TEMPLATES Default: [] (Empty list) A list containing the settings for all template engines to be used with Django. Each item of the list is a dictionary containing the options for an individual engine. Here’s a setup that tells the Django template engine to load templates from the templates subdirectory inside each installed application: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'APP_DIRS': True, }, ] The following options are available for all backends. BACKEND Default: Not defined The template backend to use. 
The built-in template backends are: 'django.template.backends.django.DjangoTemplates' 'django.template.backends.jinja2.Jinja2' You can use a template backend that doesn’t ship with Django by setting BACKEND to a fully-qualified path (i.e. 'mypackage.whatever.Backend'). NAME Default: see below The alias for this particular template engine. It’s an identifier that allows selecting an engine for rendering. Aliases must be unique across all configured template engines. It defaults to the name of the module defining the engine class, i.e. the next to last piece of BACKEND, when it isn’t provided. For example if the backend is 'mypackage.whatever.Backend' then its default name is 'whatever'. DIRS Default: [] (Empty list) Directories where the engine should look for template source files, in search order. APP_DIRS Default: False Whether the engine should look for template source files inside installed applications. Note The default settings.py file created by django-admin startproject sets 'APP_DIRS': True. OPTIONS Default: {} (Empty dict) Extra parameters to pass to the template backend. Available parameters vary depending on the template backend. See DjangoTemplates and Jinja2 for the options of the built-in backends. TEST_RUNNER Default: 'django.test.runner.DiscoverRunner' The name of the class to use for starting the test suite. See Using different testing frameworks. TEST_NON_SERIALIZED_APPS Default: [] (Empty list) In order to restore the database state between tests for TransactionTestCases and database backends without transactions, Django will serialize the contents of all apps when it starts the test run so it can then reload from that copy before running tests that need it. This slows down the startup time of the test runner; if you have apps that you know don’t need this feature, you can add their full names in here (e.g. 'django.contrib.contenttypes') to exclude them from this serialization process. THOUSAND_SEPARATOR Default: ',' (Comma) Default thousand separator used when formatting numbers. This setting is used only when USE_THOUSAND_SEPARATOR is True and NUMBER_GROUPING is greater than 0. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See also NUMBER_GROUPING, DECIMAL_SEPARATOR and USE_THOUSAND_SEPARATOR. TIME_FORMAT Default: 'P' (e.g. 4 p.m.) The default formatting to use for displaying time fields in any part of the system. Note that if USE_L10N is set to True, then the locale-dictated format has higher precedence and will be applied instead. See allowed date format strings. See also DATE_FORMAT and DATETIME_FORMAT. TIME_INPUT_FORMATS Default: [ '%H:%M:%S', # '14:30:59' '%H:%M:%S.%f', # '14:30:59.000200' '%H:%M', # '14:30' ] A list of formats that will be accepted when inputting data on a time field. Formats will be tried in order, using the first valid one. Note that these format strings use Python’s datetime module syntax, not the format strings from the date template filter. When USE_L10N is True, the locale-dictated format has higher precedence and will be applied instead. See also DATE_INPUT_FORMATS and DATETIME_INPUT_FORMATS. TIME_ZONE Default: 'America/Chicago' A string representing the time zone for this installation. See the list of time zones. Note Since Django was first released with the TIME_ZONE set to 'America/Chicago', the global setting (used if nothing is defined in your project’s settings.py) remains 'America/Chicago' for backwards compatibility. New project templates default to 'UTC'. 
Note that this isn’t necessarily the time zone of the server. For example, one server may serve multiple Django-powered sites, each with a separate time zone setting. When USE_TZ is False, this is the time zone in which Django will store all datetimes. When USE_TZ is True, this is the default time zone that Django will use to display datetimes in templates and to interpret datetimes entered in forms. On Unix environments (where time.tzset() is implemented), Django sets the os.environ['TZ'] variable to the time zone you specify in the TIME_ZONE setting. Thus, all your views and models will automatically operate in this time zone. However, Django won’t set the TZ environment variable if you’re using the manual configuration option as described in manually configuring settings. If Django doesn’t set the TZ environment variable, it’s up to you to ensure your processes are running in the correct environment. Note Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, TIME_ZONE must be set to match the system time zone. USE_DEPRECATED_PYTZ New in Django 4.0. Default: False A boolean that specifies whether to use pytz, rather than zoneinfo, as the default time zone implementation. Deprecated since version 4.0: This transitional setting is deprecated. Support for using pytz will be removed in Django 5.0. USE_I18N Default: True A boolean that specifies whether Django’s translation system should be enabled. This provides a way to turn it off, for performance. If this is set to False, Django will make some optimizations so as not to load the translation machinery. See also LANGUAGE_CODE, USE_L10N and USE_TZ. Note The default settings.py file created by django-admin startproject includes USE_I18N = True for convenience. USE_L10N Default: True A boolean that specifies if localized formatting of data will be enabled by default or not. If this is set to True, e.g. Django will display numbers and dates using the format of the current locale. See also LANGUAGE_CODE, USE_I18N and USE_TZ. Changed in Django 4.0: In older versions, the default value is False. Deprecated since version 4.0: This setting is deprecated. Starting with Django 5.0, localized formatting of data will always be enabled. For example Django will display numbers and dates using the format of the current locale. USE_THOUSAND_SEPARATOR Default: False A boolean that specifies whether to display numbers using a thousand separator. When set to True and USE_L10N is also True, Django will format numbers using the NUMBER_GROUPING and THOUSAND_SEPARATOR settings. These settings may also be dictated by the locale, which takes precedence. See also DECIMAL_SEPARATOR, NUMBER_GROUPING and THOUSAND_SEPARATOR. USE_TZ Default: False Note In Django 5.0, the default value will change from False to True. A boolean that specifies if datetimes will be timezone-aware by default or not. If this is set to True, Django will use timezone-aware datetimes internally. When USE_TZ is False, Django will use naive datetimes in local time, except when parsing ISO 8601 formatted strings, where timezone information will always be retained if present. See also TIME_ZONE, USE_I18N and USE_L10N. Note The default settings.py file created by django-admin startproject includes USE_TZ = True for convenience. USE_X_FORWARDED_HOST Default: False A boolean that specifies whether to use the X-Forwarded-Host header in preference to the Host header. This should only be enabled if a proxy which sets this header is in use. 
This setting takes priority over USE_X_FORWARDED_PORT. Per RFC 7239#section-5.3, the X-Forwarded-Host header can include the port number, in which case you shouldn’t use USE_X_FORWARDED_PORT. USE_X_FORWARDED_PORT Default: False A boolean that specifies whether to use the X-Forwarded-Port header in preference to the SERVER_PORT META variable. This should only be enabled if a proxy which sets this header is in use. USE_X_FORWARDED_HOST takes priority over this setting. WSGI_APPLICATION Default: None The full Python path of the WSGI application object that Django’s built-in servers (e.g. runserver) will use. The django-admin startproject management command will create a standard wsgi.py file with an application callable in it, and point this setting to that application. If not set, the return value of django.core.wsgi.get_wsgi_application() will be used. In this case, the behavior of runserver will be identical to previous Django versions. YEAR_MONTH_FORMAT Default: 'F Y' The default formatting to use for date fields on Django admin change-list pages – and, possibly, by other parts of the system – in cases when only the year and month are displayed. For example, when a Django admin change-list page is being filtered by a date drilldown, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would say “January 2006,” whereas another locale might say “2006/January.” Note that if USE_L10N is set to True, then the corresponding locale-dictated format has higher precedence and will be applied. See allowed date format strings. See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT and MONTH_DAY_FORMAT. X_FRAME_OPTIONS Default: 'DENY' The default value for the X-Frame-Options header used by XFrameOptionsMiddleware. See the clickjacking protection documentation. Auth Settings for django.contrib.auth. AUTHENTICATION_BACKENDS Default: ['django.contrib.auth.backends.ModelBackend'] A list of authentication backend classes (as strings) to use when attempting to authenticate a user. See the authentication backends documentation for details. AUTH_USER_MODEL Default: 'auth.User' The model to use to represent a User. See Substituting a custom User model. Warning You cannot change the AUTH_USER_MODEL setting during the lifetime of a project (i.e. once you have made and migrated models that depend on it) without serious effort. It is intended to be set at the project start, and the model it refers to must be available in the first migration of the app that it lives in. See Substituting a custom User model for more details. LOGIN_REDIRECT_URL Default: '/accounts/profile/' The URL or named URL pattern where requests are redirected after login when the LoginView doesn’t get a next GET parameter. LOGIN_URL Default: '/accounts/login/' The URL or named URL pattern where requests are redirected for login when using the login_required() decorator, LoginRequiredMixin, or AccessMixin. LOGOUT_REDIRECT_URL Default: None The URL or named URL pattern where requests are redirected after logout if LogoutView doesn’t have a next_page attribute. If None, no redirect will be performed and the logout view will be rendered. PASSWORD_RESET_TIMEOUT Default: 259200 (3 days, in seconds) The number of seconds a password reset link is valid for. Used by the PasswordResetConfirmView. Note Reducing the value of this timeout doesn’t make any difference to the ability of an attacker to brute-force a password reset token. 
Tokens are designed to be safe from brute-forcing without any timeout. This timeout exists to protect against some unlikely attack scenarios, such as someone gaining access to email archives that may contain old, unused password reset tokens. PASSWORD_HASHERS See How Django stores passwords. Default: [ 'django.contrib.auth.hashers.PBKDF2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher', 'django.contrib.auth.hashers.Argon2PasswordHasher', 'django.contrib.auth.hashers.BCryptSHA256PasswordHasher', ] AUTH_PASSWORD_VALIDATORS Default: [] (Empty list) The list of validators that are used to check the strength of user’s passwords. See Password validation for more details. By default, no validation is performed and all passwords are accepted. Messages Settings for django.contrib.messages. MESSAGE_LEVEL Default: messages.INFO Sets the minimum message level that will be recorded by the messages framework. See message levels for more details. Important If you override MESSAGE_LEVEL in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_LEVEL = message_constants.DEBUG If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. MESSAGE_STORAGE Default: 'django.contrib.messages.storage.fallback.FallbackStorage' Controls where Django stores message data. Valid values are: 'django.contrib.messages.storage.fallback.FallbackStorage' 'django.contrib.messages.storage.session.SessionStorage' 'django.contrib.messages.storage.cookie.CookieStorage' See message storage backends for more details. The backends that use cookies – CookieStorage and FallbackStorage – use the value of SESSION_COOKIE_DOMAIN, SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY when setting their cookies. MESSAGE_TAGS Default: { messages.DEBUG: 'debug', messages.INFO: 'info', messages.SUCCESS: 'success', messages.WARNING: 'warning', messages.ERROR: 'error', } This sets the mapping of message level to message tag, which is typically rendered as a CSS class in HTML. If you specify a value, it will extend the default. This means you only have to specify those values which you need to override. See Displaying messages above for more details. Important If you override MESSAGE_TAGS in your settings file and rely on any of the built-in constants, you must import the constants module directly to avoid the potential for circular imports, e.g.: from django.contrib.messages import constants as message_constants MESSAGE_TAGS = {message_constants.INFO: ''} If desired, you may specify the numeric values for the constants directly according to the values in the above constants table. Sessions Settings for django.contrib.sessions. SESSION_CACHE_ALIAS Default: 'default' If you’re using cache-based session storage, this selects the cache to use. SESSION_COOKIE_AGE Default: 1209600 (2 weeks, in seconds) The age of session cookies, in seconds. SESSION_COOKIE_DOMAIN Default: None The domain to use for session cookies. Set this to a string such as "example.com" for cross-domain cookies, or use None for a standard domain cookie. To use cross-domain cookies with CSRF_USE_SESSIONS, you must include a leading dot (e.g. ".example.com") to accommodate the CSRF middleware’s referer checking. Be cautious when updating this setting on a production site. 
If you update this setting to enable cross-domain cookies on a site that previously used standard domain cookies, existing user cookies will be set to the old domain. This may result in them being unable to log in as long as these cookies persist. This setting also affects cookies set by django.contrib.messages. SESSION_COOKIE_HTTPONLY Default: True Whether to use HttpOnly flag on the session cookie. If this is set to True, client-side JavaScript will not be able to access the session cookie. HttpOnly is a flag included in a Set-Cookie HTTP response header. It’s part of the RFC 6265#section-4.1.2.6 standard for cookies and can be a useful way to mitigate the risk of a client-side script accessing the protected cookie data. This makes it less trivial for an attacker to escalate a cross-site scripting vulnerability into full hijacking of a user’s session. There aren’t many good reasons for turning this off. Your code shouldn’t read session cookies from JavaScript. SESSION_COOKIE_NAME Default: 'sessionid' The name of the cookie to use for sessions. This can be whatever you want (as long as it’s different from the other cookie names in your application). SESSION_COOKIE_PATH Default: '/' The path set on the session cookie. This should either match the URL path of your Django installation or be parent of that path. This is useful if you have multiple Django instances running under the same hostname. They can use different cookie paths, and each instance will only see its own session cookie. SESSION_COOKIE_SAMESITE Default: 'Lax' The value of the SameSite flag on the session cookie. This flag prevents the cookie from being sent in cross-site requests thus preventing CSRF attacks and making some methods of stealing session cookie impossible. Possible values for the setting are: 'Strict': prevents the cookie from being sent by the browser to the target site in all cross-site browsing context, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user won’t be able to access the project. A bank website, however, most likely doesn’t want to allow any transactional pages to be linked from external sites so the 'Strict' flag would be appropriate. 'Lax' (default): provides a balance between security and usability for websites that want to maintain user’s logged-in session after the user arrives from an external link. In the GitHub scenario, the session cookie would be allowed when following a regular link from an external website and be blocked in CSRF-prone request methods (e.g. POST). 'None' (string): the session cookie will be sent with all same-site and cross-site requests. False: disables the flag. Note Modern browsers provide a more secure default policy for the SameSite flag and will assume Lax for cookies without an explicit value set. SESSION_COOKIE_SECURE Default: False Whether to use a secure cookie for the session cookie. If this is set to True, the cookie will be marked as “secure”, which means browsers may ensure that the cookie is only sent under an HTTPS connection. Leaving this setting off isn’t a good idea because an attacker could capture an unencrypted session cookie with a packet sniffer and use the cookie to hijack the user’s session. SESSION_ENGINE Default: 'django.contrib.sessions.backends.db' Controls where Django stores session data. 
Included engines are: 'django.contrib.sessions.backends.db' 'django.contrib.sessions.backends.file' 'django.contrib.sessions.backends.cache' 'django.contrib.sessions.backends.cached_db' 'django.contrib.sessions.backends.signed_cookies' See Configuring the session engine for more details. SESSION_EXPIRE_AT_BROWSER_CLOSE Default: False Whether to expire the session when the user closes their browser. See Browser-length sessions vs. persistent sessions. SESSION_FILE_PATH Default: None If you’re using file-based session storage, this sets the directory in which Django will store session data. When the default value (None) is used, Django will use the standard temporary directory for the system. SESSION_SAVE_EVERY_REQUEST Default: False Whether to save the session data on every request. If this is False (default), then the session data will only be saved if it has been modified – that is, if any of its dictionary values have been assigned or deleted. Empty sessions won’t be created, even if this setting is active. SESSION_SERIALIZER Default: 'django.contrib.sessions.serializers.JSONSerializer' Full import path of a serializer class to use for serializing session data. Included serializers are: 'django.contrib.sessions.serializers.PickleSerializer' 'django.contrib.sessions.serializers.JSONSerializer' See Session serialization for details, including a warning regarding possible remote code execution when using PickleSerializer. Sites Settings for django.contrib.sites. SITE_ID Default: Not defined The ID, as an integer, of the current site in the django_site database table. This is used so that application data can hook into specific sites and a single database can manage content for multiple sites. Static Files Settings for django.contrib.staticfiles. STATIC_ROOT Default: None The absolute path to the directory where collectstatic will collect static files for deployment. Example: "/var/www/example.com/static/" If the staticfiles contrib app is enabled (as in the default project template), the collectstatic management command will collect static files into this directory. See the how-to on managing static files for more details about usage. Warning This should be an initially empty destination directory for collecting your static files from their permanent locations into one directory for ease of deployment; it is not a place to store your static files permanently. You should do that in directories that will be found by staticfiles’s finders, which by default, are 'static/' app sub-directories and any directories you include in STATICFILES_DIRS). STATIC_URL Default: None URL to use when referring to static files located in STATIC_ROOT. Example: "static/" or "http://static.example.com/" If not None, this will be used as the base path for asset definitions (the Media class) and the staticfiles app. It must end in a slash if set to a non-empty value. You may need to configure these files to be served in development and will definitely need to do so in production. Note If STATIC_URL is a relative path, then it will be prefixed by the server-provided value of SCRIPT_NAME (or / if not set). This makes it easier to serve a Django application in a subpath without adding an extra configuration to the settings. STATICFILES_DIRS Default: [] (Empty list) This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view. 
This should be set to a list of strings that contain full paths to your additional files directory(ies) e.g.: STATICFILES_DIRS = [ "/home/special.polls.com/polls/static", "/home/polls.com/polls/static", "/opt/webfiles/common", ] Note that these paths should use Unix-style forward slashes, even on Windows (e.g. "C:/Users/user/mysite/extra_static_content"). Prefixes (optional) In case you want to refer to files in one of the locations with an additional namespace, you can optionally provide a prefix as (prefix, path) tuples, e.g.: STATICFILES_DIRS = [ # ... ("downloads", "/opt/webfiles/stats"), ] For example, assuming you have STATIC_URL set to 'static/', the collectstatic management command would collect the “stats” files in a 'downloads' subdirectory of STATIC_ROOT. This would allow you to refer to the local file '/opt/webfiles/stats/polls_20101022.tar.gz' with '/static/downloads/polls_20101022.tar.gz' in your templates, e.g.: <a href="{% static 'downloads/polls_20101022.tar.gz' %}"> STATICFILES_STORAGE Default: 'django.contrib.staticfiles.storage.StaticFilesStorage' The file storage engine to use when collecting static files with the collectstatic management command. A ready-to-use instance of the storage backend defined in this setting can be found at django.contrib.staticfiles.storage.staticfiles_storage. For an example, see Serving static files from a cloud service or CDN. STATICFILES_FINDERS Default: [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] The list of finder backends that know how to find static files in various locations. The default will find files stored in the STATICFILES_DIRS setting (using django.contrib.staticfiles.finders.FileSystemFinder) and in a static subdirectory of each app (using django.contrib.staticfiles.finders.AppDirectoriesFinder). If multiple files with the same name are present, the first file that is found will be used. One finder is disabled by default: django.contrib.staticfiles.finders.DefaultStorageFinder. If added to your STATICFILES_FINDERS setting, it will look for static files in the default file storage as defined by the DEFAULT_FILE_STORAGE setting. Note When using the AppDirectoriesFinder finder, make sure your apps can be found by staticfiles by adding the app to the INSTALLED_APPS setting of your site. Static file finders are currently considered a private interface, and this interface is thus undocumented. 
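As a rough sketch of how the static files settings above combine (all paths are hypothetical):

# settings.py (hypothetical paths)
STATIC_URL = 'static/'
STATIC_ROOT = '/var/www/example.com/static/'   # collectstatic gathers files here
STATICFILES_DIRS = [
    '/opt/webfiles/common',                    # extra source found by FileSystemFinder
    ('downloads', '/opt/webfiles/stats'),      # namespaced under a 'downloads/' prefix
]

Running manage.py collectstatic would then copy each app's static/ directory and these extra locations into STATIC_ROOT for deployment.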
Core Settings Topical Index

Cache: CACHES, CACHE_MIDDLEWARE_ALIAS, CACHE_MIDDLEWARE_KEY_PREFIX, CACHE_MIDDLEWARE_SECONDS
Database: DATABASES, DATABASE_ROUTERS, DEFAULT_INDEX_TABLESPACE, DEFAULT_TABLESPACE
Debugging: DEBUG, DEBUG_PROPAGATE_EXCEPTIONS
Email: ADMINS, DEFAULT_CHARSET, DEFAULT_FROM_EMAIL, EMAIL_BACKEND, EMAIL_FILE_PATH, EMAIL_HOST, EMAIL_HOST_PASSWORD, EMAIL_HOST_USER, EMAIL_PORT, EMAIL_SSL_CERTFILE, EMAIL_SSL_KEYFILE, EMAIL_SUBJECT_PREFIX, EMAIL_TIMEOUT, EMAIL_USE_LOCALTIME, EMAIL_USE_TLS, MANAGERS, SERVER_EMAIL
Error reporting: DEFAULT_EXCEPTION_REPORTER, DEFAULT_EXCEPTION_REPORTER_FILTER, IGNORABLE_404_URLS, MANAGERS, SILENCED_SYSTEM_CHECKS
File uploads: DEFAULT_FILE_STORAGE, FILE_UPLOAD_HANDLERS, FILE_UPLOAD_MAX_MEMORY_SIZE, FILE_UPLOAD_PERMISSIONS, FILE_UPLOAD_TEMP_DIR, MEDIA_ROOT, MEDIA_URL
Forms: FORM_RENDERER
Globalization (i18n/l10n): DATE_FORMAT, DATE_INPUT_FORMATS, DATETIME_FORMAT, DATETIME_INPUT_FORMATS, DECIMAL_SEPARATOR, FIRST_DAY_OF_WEEK, FORMAT_MODULE_PATH, LANGUAGE_CODE, LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN, LANGUAGE_COOKIE_HTTPONLY, LANGUAGE_COOKIE_NAME, LANGUAGE_COOKIE_PATH, LANGUAGE_COOKIE_SAMESITE, LANGUAGE_COOKIE_SECURE, LANGUAGES, LANGUAGES_BIDI, LOCALE_PATHS, MONTH_DAY_FORMAT, NUMBER_GROUPING, SHORT_DATE_FORMAT, SHORT_DATETIME_FORMAT, THOUSAND_SEPARATOR, TIME_FORMAT, TIME_INPUT_FORMATS, TIME_ZONE, USE_I18N, USE_L10N, USE_THOUSAND_SEPARATOR, USE_TZ, YEAR_MONTH_FORMAT
HTTP: DATA_UPLOAD_MAX_MEMORY_SIZE, DATA_UPLOAD_MAX_NUMBER_FIELDS, DEFAULT_CHARSET, DISALLOWED_USER_AGENTS, FORCE_SCRIPT_NAME, INTERNAL_IPS, MIDDLEWARE
HTTP / Security: SECURE_CONTENT_TYPE_NOSNIFF, SECURE_CROSS_ORIGIN_OPENER_POLICY, SECURE_HSTS_INCLUDE_SUBDOMAINS, SECURE_HSTS_PRELOAD, SECURE_HSTS_SECONDS, SECURE_PROXY_SSL_HEADER, SECURE_REDIRECT_EXEMPT, SECURE_REFERRER_POLICY, SECURE_SSL_HOST, SECURE_SSL_REDIRECT, SIGNING_BACKEND, USE_X_FORWARDED_HOST, USE_X_FORWARDED_PORT, WSGI_APPLICATION
Logging: LOGGING, LOGGING_CONFIG
Models: ABSOLUTE_URL_OVERRIDES, FIXTURE_DIRS, INSTALLED_APPS
Security / Cross Site Request Forgery Protection: CSRF_COOKIE_DOMAIN, CSRF_COOKIE_NAME, CSRF_COOKIE_PATH, CSRF_COOKIE_SAMESITE, CSRF_COOKIE_SECURE, CSRF_FAILURE_VIEW, CSRF_HEADER_NAME, CSRF_TRUSTED_ORIGINS, CSRF_USE_SESSIONS
Security: SECRET_KEY, X_FRAME_OPTIONS
Serialization: DEFAULT_CHARSET, SERIALIZATION_MODULES
Templates: TEMPLATES
Testing: Database: TEST, TEST_NON_SERIALIZED_APPS, TEST_RUNNER
URLs: APPEND_SLASH, PREPEND_WWW, ROOT_URLCONF
doc_2155
In-place version of atanh()
doc_2156
See Migration guide for more details. tf.compat.v1.distribute.Server, tf.compat.v1.train.Server tf.distribute.Server( server_or_cluster_def, job_name=None, task_index=None, protocol=None, config=None, start=True ) A tf.distribute.Server instance encapsulates a set of devices and a tf.compat.v1.Session target that can participate in distributed training. A server belongs to a cluster (specified by a tf.train.ClusterSpec), and corresponds to a particular task in a named job. The server can communicate with any other server in the same cluster. Args server_or_cluster_def A tf.train.ServerDef or tf.train.ClusterDef protocol buffer, or a tf.train.ClusterSpec object, describing the server to be created and/or the cluster of which it is a member. job_name (Optional.) Specifies the name of the job of which the server is a member. Defaults to the value in server_or_cluster_def, if specified. task_index (Optional.) Specifies the task index of the server in its job. Defaults to the value in server_or_cluster_def, if specified. Otherwise defaults to 0 if the server's job has only one task. protocol (Optional.) Specifies the protocol to be used by the server. Acceptable values include "grpc", "grpc+verbs". Defaults to the value in server_or_cluster_def, if specified. Otherwise defaults to "grpc". config (Optional.) A tf.compat.v1.ConfigProto that specifies default configuration options for all sessions that run on this server. start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True. Raises tf.errors.OpError or one of its subclasses if an error occurs while creating the TensorFlow server. Attributes server_def Returns the tf.train.ServerDef for this server. target Returns the target for a tf.compat.v1.Session to connect to this server. To create a tf.compat.v1.Session that connects to this server, use the following snippet:

server = tf.distribute.Server(...)
with tf.compat.v1.Session(server.target):
  # ...

Methods create_local_server View source @staticmethod create_local_server( config=None, start=True ) Creates a new single-process cluster running on the local host. This method is a convenience wrapper for creating a tf.distribute.Server with a tf.train.ServerDef that specifies a single-process cluster containing a single task in a job called "local". Args config (Optional.) A tf.compat.v1.ConfigProto that specifies default configuration options for all sessions that run on this server. start (Optional.) Boolean, indicating whether to start the server after creating it. Defaults to True. Returns A local tf.distribute.Server. join View source join() Blocks until the server has shut down. This method currently blocks forever. Raises tf.errors.OpError or one of its subclasses if an error occurs while joining the TensorFlow server. start View source start() Starts this server. Raises tf.errors.OpError or one of its subclasses if an error occurs while starting the TensorFlow server.
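A minimal sketch of the create_local_server convenience method in use; this assumes TF2 with v1 compatibility behavior, since Session is a graph-mode API:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # Session.run requires graph mode

# Single-process cluster with one task in a job named "local"; starts immediately.
server = tf.distribute.Server.create_local_server()

with tf.compat.v1.Session(server.target) as sess:
    print(sess.run(tf.constant(42)))  # 42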
doc_2157
See Migration guide for more details. tf.compat.v1.raw_ops.DeviceIndex tf.raw_ops.DeviceIndex( device_names, name=None ) Given a list of device names, this operation returns the index of the device this op runs on. The length of the list is returned instead in two cases: (1) the device does not exist in the given device list, or (2) the op is being compiled with XLA. Args device_names A list of strings. name A name for the operation (optional). Returns A Tensor of type int32.
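A rough illustration of the call shape (the device names are hypothetical, and the returned value depends on where the op actually executes, per the semantics above):

import tensorflow as tf

device_names = ['/device:CPU:0', '/device:GPU:0']
with tf.device('/device:CPU:0'):
    idx = tf.raw_ops.DeviceIndex(device_names=device_names)
# idx is a scalar int32 Tensor: 0 would indicate CPU:0, while a value equal to
# len(device_names) signals "not in the list" or XLA compilation.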
doc_2158
Alias for get_linestyle.
doc_2159
Checks for buffer full or a record at the flushLevel or higher.
doc_2160
Read a data URL. This kind of URL contains the content encoded in the URL itself. The data URL syntax is specified in RFC 2397. This implementation ignores white spaces in base64 encoded data URLs so the URL may be wrapped in whatever source file it comes from. Even though some browsers don’t mind a missing padding at the end of a base64 encoded data URL, this implementation will raise a ValueError in that case.
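For illustration, the default urllib.request opener includes this handler, so data URLs can be opened directly:

>>> from urllib.request import urlopen
>>> # "SGVsbG8sIHdvcmxkIQ==" is the base64 encoding of "Hello, world!"
>>> with urlopen("data:text/plain;base64,SGVsbG8sIHdvcmxkIQ==") as resp:
...     resp.read()
b'Hello, world!'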
doc_2161
class sklearn.preprocessing.PowerTransformer(method='yeo-johnson', *, standardize=True, copy=True) [source] Apply a power transform featurewise to make data more Gaussian-like. Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, PowerTransformer supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive or negative data. By default, zero-mean, unit-variance normalization is applied to the transformed data. Read more in the User Guide. New in version 0.20. Parameters method{‘yeo-johnson’, ‘box-cox’}, default=’yeo-johnson’ The power transform method. Available methods are: ‘yeo-johnson’ [1], works with positive and negative values ‘box-cox’ [2], only works with strictly positive values standardizebool, default=True Set to True to apply zero-mean, unit-variance normalization to the transformed output. copybool, default=True Set to False to perform inplace computation during transformation. Attributes lambdas_ndarray of float of shape (n_features,) The parameters of the power transformation for the selected features. See also power_transform Equivalent function without the estimator API. QuantileTransformer Maps data to a standard normal distribution with the parameter output_distribution='normal'. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. References 1 I.K. Yeo and R.A. Johnson, “A new family of power transformations to improve normality or symmetry.” Biometrika, 87(4), pp.954-959, (2000). 2 G.E.P. Box and D.R. Cox, “An Analysis of Transformations”, Journal of the Royal Statistical Society B, 26, 211-252 (1964). Examples >>> import numpy as np >>> from sklearn.preprocessing import PowerTransformer >>> pt = PowerTransformer() >>> data = [[1, 2], [3, 2], [4, 5]] >>> print(pt.fit(data)) PowerTransformer() >>> print(pt.lambdas_) [ 1.386... -3.100...] >>> print(pt.transform(data)) [[-1.316... -0.707...] [ 0.209... -0.707...] [ 1.106... 1.414...]] Methods fit(X[, y]) Estimate the optimal parameter lambda for each feature. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. inverse_transform(X) Apply the inverse power transformation using the fitted lambdas. set_params(**params) Set the parameters of this estimator. transform(X) Apply the power transform to each feature using the fitted lambdas. fit(X, y=None) [source] Estimate the optimal parameter lambda for each feature. The optimal lambda parameter for minimizing skewness is estimated on each feature independently using maximum likelihood. Parameters Xarray-like of shape (n_samples, n_features) The data used to estimate the optimal transformation parameters. yNone Ignored. Returns selfobject Fitted transformer. fit_transform(X, y=None) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. 
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. inverse_transform(X) [source] Apply the inverse power transformation using the fitted lambdas. The inverse of the Box-Cox transformation is given by:

if lambda_ == 0:
    X = exp(X_trans)
else:
    X = (X_trans * lambda_ + 1) ** (1 / lambda_)

The inverse of the Yeo-Johnson transformation is given by:

if X >= 0 and lambda_ == 0:
    X = exp(X_trans) - 1
elif X >= 0 and lambda_ != 0:
    X = (X_trans * lambda_ + 1) ** (1 / lambda_) - 1
elif X < 0 and lambda_ != 2:
    X = 1 - (-(2 - lambda_) * X_trans + 1) ** (1 / (2 - lambda_))
elif X < 0 and lambda_ == 2:
    X = 1 - exp(-X_trans)

Parameters Xarray-like of shape (n_samples, n_features) The transformed data. Returns Xndarray of shape (n_samples, n_features) The original data. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Apply the power transform to each feature using the fitted lambdas. Parameters Xarray-like of shape (n_samples, n_features) The data to be transformed using a power transformation. Returns X_transndarray of shape (n_samples, n_features) The transformed data. Examples using sklearn.preprocessing.PowerTransformer Map data to a normal distribution Compare the effect of different scalers on data with outliers
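A short round-trip sketch: transform, then inverse_transform recovers the original data up to floating-point error.

>>> import numpy as np
>>> from sklearn.preprocessing import PowerTransformer
>>> pt = PowerTransformer(method='yeo-johnson', standardize=True)
>>> data = np.array([[1.0, 2.0], [3.0, 2.0], [4.0, 5.0]])
>>> X_trans = pt.fit_transform(data)
>>> np.allclose(pt.inverse_transform(X_trans), data)
True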
doc_2162
See Migration guide for more details. tf.compat.v1.raw_ops.Const tf.raw_ops.Const( value, dtype, name=None ) Args value A tf.TensorProto. Attr value is the tensor to return. dtype A tf.DType. name A name for the operation (optional). Returns A Tensor of type dtype.
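Most code uses tf.constant instead, but as a rough sketch the raw op can be fed a TensorProto built with tf.make_tensor_proto (the dtype argument must match the proto):

import tensorflow as tf

proto = tf.make_tensor_proto([1, 2, 3], dtype=tf.int32)
t = tf.raw_ops.Const(value=proto, dtype=tf.int32)
print(t)  # tf.Tensor([1 2 3], shape=(3,), dtype=int32)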
doc_2163
String used in lieu of missing data when a masked array is printed. By default, this string is '--'.
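A quick doctest-style illustration with a numpy masked array:

>>> import numpy as np
>>> a = np.ma.array([1, 2, 3], mask=[False, True, False])
>>> print(a)
[1 -- 3]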
doc_2164
Returns the ISO-8601 week day with day 1 being Monday and day 7 being Sunday. lookup_name = 'iso_week_day'
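For instance, with a hypothetical Entry model that has a pub_date field, the lookup can be used like any other date lookup:

# Entries published on a Monday.
Entry.objects.filter(pub_date__iso_week_day=1)
# Entries published on anything but a Sunday.
Entry.objects.exclude(pub_date__iso_week_day=7)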
doc_2165
Compute the coordinates of Haar-like features. Parameters widthint Width of the detection window. heightint Height of the detection window. feature_typestr or list of str or None, optional The type of feature to consider: ‘type-2-x’: 2 rectangles varying along the x axis; ‘type-2-y’: 2 rectangles varying along the y axis; ‘type-3-x’: 3 rectangles varying along the x axis; ‘type-3-y’: 3 rectangles varying along the y axis; ‘type-4’: 4 rectangles varying along x and y axis. By default all features are extracted. Returns feature_coord(n_features, n_rectangles, 2, 2), ndarray of list of tuple coord Coordinates of the rectangles for each feature. feature_type(n_features,), ndarray of str The corresponding type for each feature. Examples >>> import numpy as np >>> from skimage.transform import integral_image >>> from skimage.feature import haar_like_feature_coord >>> feat_coord, feat_type = haar_like_feature_coord(2, 2, 'type-4') >>> feat_coord array([ list([[(0, 0), (0, 0)], [(0, 1), (0, 1)], [(1, 1), (1, 1)], [(1, 0), (1, 0)]])], dtype=object) >>> feat_type array(['type-4'], dtype=object)
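The coordinates returned above are typically passed on to haar_like_feature together with an integral image; a minimal sketch:

import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature, haar_like_feature_coord

img = np.ones((5, 5), dtype=np.uint8)
ii = integral_image(img)
coords, types = haar_like_feature_coord(5, 5, 'type-4')
feats = haar_like_feature(ii, 0, 0, 5, 5, feature_type=types, feature_coord=coords)
print(feats)  # every type-4 feature evaluates to 0 on a constant image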
doc_2166
A form for generating and emailing a one-time use link to reset a user’s password. send_mail(subject_template_name, email_template_name, context, from_email, to_email, html_email_template_name=None) Uses the arguments to send an EmailMultiAlternatives. Can be overridden to customize how the email is sent to the user. Parameters: subject_template_name – the template for the subject. email_template_name – the template for the email body. context – context passed to the subject_template, email_template, and html_email_template (if it is not None). from_email – the sender’s email. to_email – the email of the requester. html_email_template_name – the template for the HTML body; defaults to None, in which case a plain text email is sent. By default, save() populates the context with the same variables that PasswordResetView passes to its email context.
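A sketch of a subclass that overrides send_mail while closely following the default flow; the extra X-Mailer header is an illustrative addition, not something Django sets:

from django.contrib.auth.forms import PasswordResetForm
from django.core.mail import EmailMultiAlternatives
from django.template import loader

class TaggedPasswordResetForm(PasswordResetForm):
    def send_mail(self, subject_template_name, email_template_name,
                  context, from_email, to_email, html_email_template_name=None):
        subject = loader.render_to_string(subject_template_name, context)
        subject = ''.join(subject.splitlines())  # subjects must not contain newlines
        body = loader.render_to_string(email_template_name, context)
        message = EmailMultiAlternatives(subject, body, from_email, [to_email],
                                         headers={'X-Mailer': 'myapp'})
        if html_email_template_name is not None:
            html_body = loader.render_to_string(html_email_template_name, context)
            message.attach_alternative(html_body, 'text/html')
        message.send()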
doc_2167
imghdr.what(filename, h=None) Tests the image data contained in the file named by filename, and returns a string describing the image type. If optional h is provided, the filename is ignored and h is assumed to contain the byte stream to test. Changed in version 3.6: Accepts a path-like object. The following image types are recognized, as listed below with the return value from what():

Value    Image format
'rgb'    SGI ImgLib Files
'gif'    GIF 87a and 89a Files
'pbm'    Portable Bitmap Files
'pgm'    Portable Graymap Files
'ppm'    Portable Pixmap Files
'tiff'   TIFF Files
'rast'   Sun Raster Files
'xbm'    X Bitmap Files
'jpeg'   JPEG data in JFIF or Exif formats
'bmp'    BMP files
'png'    Portable Network Graphics
'webp'   WebP files
'exr'    OpenEXR Files

New in version 3.5: The exr and webp formats were added. You can extend the list of file types imghdr can recognize by appending to this variable: imghdr.tests A list of functions performing the individual tests. Each function takes two arguments: the byte-stream and an open file-like object. When what() is called with a byte-stream, the file-like object will be None. The test function should return a string describing the image type if the test succeeded, or None if it failed. Example:

>>> import imghdr
>>> imghdr.what('bass.gif')
'gif'
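As a sketch of the extension hook described above, here is a naive custom test for a type imghdr doesn't recognize (real SVG detection would need to be more careful than this prefix check):

import imghdr

def test_svg(h, f):
    # h is the leading byte stream; f is an open file object or None.
    if h.startswith(b'<?xml') or h.startswith(b'<svg'):
        return 'svg'
    return None

imghdr.tests.append(test_svg)
print(imghdr.what(None, h=b'<svg xmlns="http://www.w3.org/2000/svg">'))  # 'svg'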
doc_2168
Close the iterator and free acquired resources. This is called automatically when the iterator is exhausted or garbage collected, or when an error happens during iterating. However it is advisable to call it explicitly or use the with statement. New in version 3.6.
doc_2169
The WSGI input stream. In general it’s a bad idea to use this one because you can easily read past the boundary. Use the stream instead.
doc_2170
A string that is ready to be safely inserted into an HTML or XML document, either because it was escaped or because it was marked safe. Passing an object to the constructor converts it to text and wraps it to mark it safe without escaping. To escape the text, use the escape() class method instead.

>>> Markup("Hello, <em>World</em>!")
Markup('Hello, <em>World</em>!')
>>> Markup(42)
Markup('42')
>>> Markup.escape("Hello, <em>World</em>!")
Markup('Hello, &lt;em&gt;World&lt;/em&gt;!')

This implements the __html__() interface that some frameworks use. Passing an object that implements __html__() will wrap the output of that method, marking it safe.

>>> class Foo:
...     def __html__(self):
...         return '<a href="/foo">foo</a>'
...
>>> Markup(Foo())
Markup('<a href="/foo">foo</a>')

This is a subclass of str. It has the same methods, but escapes their arguments and returns a Markup instance.

>>> Markup("<em>%s</em>") % ("foo & bar",)
Markup('<em>foo &amp; bar</em>')
>>> Markup("<em>Hello</em> ") + "<foo>"
Markup('<em>Hello</em> &lt;foo&gt;')

Parameters base (Any) – encoding (Optional[str]) – errors (str) – Return type Markup

classmethod escape(s) Escape a string. Calls escape() and ensures that for subclasses the correct type is returned. Parameters s (Any) – Return type markupsafe.Markup

striptags() unescape() the markup, remove tags, and normalize whitespace to single spaces.

>>> Markup("Main &raquo; <em>About</em>").striptags()
'Main » About'

Return type str

unescape() Convert escaped markup back into a text string. This replaces HTML entities with the characters they represent.

>>> Markup("Main &raquo; <em>About</em>").unescape()
'Main » <em>About</em>'

Return type str
doc_2171
Check whether all characters in each string are alphanumeric. This is equivalent to running the Python string method str.isalnum() for each element of the Series/Index. If a string has zero characters, False is returned for that check. Returns Series or Index of bool Series or Index of boolean values with the same length as the original Series/Index. See also Series.str.isalpha Check whether all characters are alphabetic. Series.str.isnumeric Check whether all characters are numeric. Series.str.isalnum Check whether all characters are alphanumeric. Series.str.isdigit Check whether all characters are digits. Series.str.isdecimal Check whether all characters are decimal. Series.str.isspace Check whether all characters are whitespace. Series.str.islower Check whether all characters are lowercase. Series.str.isupper Check whether all characters are uppercase. Series.str.istitle Check whether all characters are titlecase. Examples Checks for Alphabetic and Numeric Characters >>> s1 = pd.Series(['one', 'one1', '1', '']) >>> s1.str.isalpha() 0 True 1 False 2 False 3 False dtype: bool >>> s1.str.isnumeric() 0 False 1 False 2 True 3 False dtype: bool >>> s1.str.isalnum() 0 True 1 True 2 True 3 False dtype: bool Note that checks against characters mixed with any additional punctuation or whitespace will evaluate to false for an alphanumeric check. >>> s2 = pd.Series(['A B', '1.5', '3,000']) >>> s2.str.isalnum() 0 False 1 False 2 False dtype: bool More Detailed Checks for Numeric Characters There are several different but overlapping sets of numeric characters that can be checked for. >>> s3 = pd.Series(['23', '³', '⅕', '']) The s3.str.isdecimal method checks for characters used to form numbers in base 10. >>> s3.str.isdecimal() 0 True 1 False 2 False 3 False dtype: bool The s.str.isdigit method is the same as s3.str.isdecimal but also includes special digits, like superscripted and subscripted digits in unicode. >>> s3.str.isdigit() 0 True 1 True 2 False 3 False dtype: bool The s.str.isnumeric method is the same as s3.str.isdigit but also includes other characters that can represent quantities such as unicode fractions. >>> s3.str.isnumeric() 0 True 1 True 2 True 3 False dtype: bool Checks for Whitespace >>> s4 = pd.Series([' ', '\t\r\n ', '']) >>> s4.str.isspace() 0 True 1 True 2 False dtype: bool Checks for Character Case >>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', '']) >>> s5.str.islower() 0 True 1 False 2 False 3 False dtype: bool >>> s5.str.isupper() 0 False 1 False 2 True 3 False dtype: bool The s5.str.istitle method checks for whether all words are in title case (whether only the first letter of each word is capitalized). Words are assumed to be as any sequence of non-numeric characters separated by whitespace characters. >>> s5.str.istitle() 0 False 1 True 2 False 3 False dtype: bool
doc_2172
faulthandler.dump_traceback(file=sys.stderr, all_threads=True) Dump the tracebacks of all threads into file. If all_threads is False, dump only the current thread. Changed in version 3.5: Added support for passing file descriptor to this function. Fault handler state faulthandler.enable(file=sys.stderr, all_threads=True) Enable the fault handler: install handlers for the SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL signals to dump the Python traceback. If all_threads is True, produce tracebacks for every running thread. Otherwise, dump only the current thread. The file must be kept open until the fault handler is disabled: see issue with file descriptors. Changed in version 3.5: Added support for passing file descriptor to this function. Changed in version 3.6: On Windows, a handler for Windows exception is also installed. faulthandler.disable() Disable the fault handler: uninstall the signal handlers installed by enable(). faulthandler.is_enabled() Check if the fault handler is enabled. Dumping the tracebacks after a timeout faulthandler.dump_traceback_later(timeout, repeat=False, file=sys.stderr, exit=False) Dump the tracebacks of all threads, after a timeout of timeout seconds, or every timeout seconds if repeat is True. If exit is True, call _exit() with status=1 after dumping the tracebacks. (Note _exit() exits the process immediately, which means it doesn’t do any cleanup like flushing file buffers.) If the function is called twice, the new call replaces previous parameters and resets the timeout. The timer has a sub-second resolution. The file must be kept open until the traceback is dumped or cancel_dump_traceback_later() is called: see issue with file descriptors. This function is implemented using a watchdog thread. Changed in version 3.7: This function is now always available. Changed in version 3.5: Added support for passing file descriptor to this function. faulthandler.cancel_dump_traceback_later() Cancel the last call to dump_traceback_later(). Dumping the traceback on a user signal faulthandler.register(signum, file=sys.stderr, all_threads=True, chain=False) Register a user signal: install a handler for the signum signal to dump the traceback of all threads, or of the current thread if all_threads is False, into file. Call the previous handler if chain is True. The file must be kept open until the signal is unregistered by unregister(): see issue with file descriptors. Not available on Windows. Changed in version 3.5: Added support for passing file descriptor to this function. faulthandler.unregister(signum) Unregister a user signal: uninstall the handler of the signum signal installed by register(). Return True if the signal was registered, False otherwise. Not available on Windows. Issue with file descriptors enable(), dump_traceback_later() and register() keep the file descriptor of their file argument. If the file is closed and its file descriptor is reused by a new file, or if os.dup2() is used to replace the file descriptor, the traceback will be written into a different file. Call these functions again each time that the file is replaced. 
Example

Example of a segmentation fault on Linux with and without enabling the fault handler:

$ python3 -c "import ctypes; ctypes.string_at(0)"
Segmentation fault

$ python3 -q -X faulthandler
>>> import ctypes
>>> ctypes.string_at(0)
Fatal Python error: Segmentation fault

Current thread 0x00007fb899f39700 (most recent call first):
  File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at
  File "<stdin>", line 1 in <module>
Segmentation fault
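A small sketch combining the APIs above: enable the handler, arm the watchdog around potentially hanging work, then cancel it once done (the 60-second timeout is an arbitrary illustration):

import faulthandler
import time

faulthandler.enable()  # dump tracebacks on SIGSEGV, SIGFPE, SIGABRT, SIGBUS, SIGILL
faulthandler.dump_traceback_later(60, repeat=True)  # dump all threads every 60s
try:
    time.sleep(1)  # stand-in for work that might hang
finally:
    faulthandler.cancel_dump_traceback_later()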
doc_2173
Generate a square mask for the sequence. The masked positions are filled with float(‘-inf’). Unmasked positions are filled with float(0.0).
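A brief sketch of the mask for a sequence of length 4; it is shown here as an instance method call, though depending on the PyTorch version it may also be callable as a static method:

import torch.nn as nn

model = nn.Transformer(d_model=8, nhead=2)
mask = model.generate_square_subsequent_mask(4)
print(mask)
# tensor([[0., -inf, -inf, -inf],
#         [0., 0., -inf, -inf],
#         [0., 0., 0., -inf],
#         [0., 0., 0., 0.]])

When added to attention scores, this lets position i attend only to positions at or before i.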
doc_2174
Callback function for URL defaults for all view functions of the application. It’s called with the endpoint and values and should update the values passed in place. Parameters f (Callable[[str, dict], None]) – Return type Callable[[str, dict], None]
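A hypothetical sketch: injecting a default language code into generated URLs so views don't have to pass it to url_for() explicitly:

from flask import Flask, g

app = Flask(__name__)

@app.url_defaults
def add_language_code(endpoint, values):
    # Fill in 'lang_code' when the caller didn't supply one.
    if 'lang_code' not in values and hasattr(g, 'lang_code'):
        values['lang_code'] = g.lang_code

@app.route('/<lang_code>/about')
def about(lang_code):
    return f'About ({lang_code})'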
doc_2175
See Migration guide for more details. tf.compat.v1.raw_ops.ImmutableConst tf.raw_ops.ImmutableConst( dtype, shape, memory_region_name, name=None ) The current implementation memmaps the tensor from a file. Args dtype A tf.DType. Type of the returned tensor. shape A tf.TensorShape or list of ints. Shape of the returned tensor. memory_region_name A string. Name of readonly memory region used by the tensor, see NewReadOnlyMemoryRegionFromFile in tensorflow::Env. name A name for the operation (optional). Returns A Tensor of type dtype.
doc_2176
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
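A quick usage sketch with a held-out split; any fitted classifier exposing this method works the same way:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the held-out set, e.g. ~0.97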
doc_2177
Sometimes it is necessary to set additional headers in a view. Because views do not have to return response objects but can return a value that is converted into a response object by Flask itself, it becomes tricky to add headers to it. This function can be called instead of using a return and you will get a response object which you can use to attach headers. If view looked like this and you want to add a new header: def index(): return render_template('index.html', foo=42) You can now do something like this: def index(): response = make_response(render_template('index.html', foo=42)) response.headers['X-Parachutes'] = 'parachutes are cool' return response This function accepts the very same arguments you can return from a view function. This for example creates a response with a 404 error code: response = make_response(render_template('not_found.html'), 404) The other use case of this function is to force the return value of a view function into a response which is helpful with view decorators: response = make_response(view_function()) response.headers['X-Parachutes'] = 'parachutes are cool' Internally this function does the following things: if no arguments are passed, it creates a new response argument if one argument is passed, flask.Flask.make_response() is invoked with it. if more than one argument is passed, the arguments are passed to the flask.Flask.make_response() function as tuple. Changelog New in version 0.6. Parameters args (Any) – Return type Response
doc_2178
Disable the data property to avoid reading from the input stream. Deprecated since version 2.0: Will be removed in Werkzeug 2.1. Create the request with shallow=True instead. Changelog New in version 0.9.
doc_2179
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters rendererRendererBase subclass. Notes This method is overridden in the Artist subclasses.
doc_2180
Get the current way of handling floating-point errors. Returns resdict A dictionary with keys “divide”, “over”, “under”, and “invalid”, whose values are from the strings “ignore”, “print”, “log”, “warn”, “raise”, and “call”. The keys represent possible floating-point exceptions, and the values define how these exceptions are handled. See also geterrcall, seterr, seterrcall Notes For complete documentation of the types of floating-point exceptions and treatment options, see seterr. Examples >>> np.geterr() {'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'} >>> np.arange(3.) / np.arange(3.) array([nan, 1., 1.]) >>> oldsettings = np.seterr(all='warn', over='raise') >>> np.geterr() {'divide': 'warn', 'over': 'raise', 'under': 'warn', 'invalid': 'warn'} >>> np.arange(3.) / np.arange(3.) array([nan, 1., 1.])
doc_2181
Return the week number of the year. Examples >>> ts = pd.Timestamp(2020, 3, 14) >>> ts.week 11
doc_2182
Draw samples from a Gumbel distribution.

Draw samples from a Gumbel distribution with specified location and scale. For more information on the Gumbel distribution, see Notes and References below.

Note New code should use the gumbel method of a default_rng() instance instead; please see the Quick Start.

Parameters
locfloat or array_like of floats, optional The location of the mode of the distribution. Default is 0.
scalefloat or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non-negative.
sizeint or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn.

Returns
outndarray or scalar Drawn samples from the parameterized Gumbel distribution.

See also
scipy.stats.gumbel_l
scipy.stats.gumbel_r
scipy.stats.genextreme
weibull
Generator.gumbel which should be used for new code.

Notes
The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails.

The probability density for the Gumbel distribution is

\[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\]

where \(\mu\) is the mode, a location parameter, and \(\beta\) is the scale parameter.

The Gumbel (named for German mathematician Emil Julius Gumbel) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events.

It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet.

The function has a mean of \(\mu + 0.57721\beta\) and a variance of \(\frac{\pi^2}{6}\beta^2\).

References
1 Gumbel, E. J., “Statistics of Extremes,” New York: Columbia University Press, 1958.
2 Reiss, R.-D. and Thomas, M., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001.

Examples
Draw samples from the distribution:

>>> mu, beta = 0, 0.1 # location and scale
>>> s = np.random.gumbel(mu, beta, 1000)

Display the histogram of the samples, along with the probability density function:

>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
...          * np.exp( -np.exp( -(bins - mu) /beta) ),
...          linewidth=2, color='r')
>>> plt.show()

Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian:

>>> means = []
>>> maxima = []
>>> for i in range(0, 1000):
...     a = np.random.normal(mu, beta, 1000)
...     means.append(a.mean())
...     maxima.append(a.max())
>>> count, bins, ignored = plt.hist(maxima, 30, density=True)
>>> beta = np.std(maxima) * np.sqrt(6) / np.pi
>>> mu = np.mean(maxima) - 0.57721*beta
>>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta)
...          * np.exp(-np.exp(-(bins - mu)/beta)),
...          linewidth=2, color='r')
>>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi))
...          * np.exp(-(bins - mu)**2 / (2 * beta**2)),
...          linewidth=2, color='g')
>>> plt.show()
doc_2183
Fit the radius neighbors transformer from the training dataset.

Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric='precomputed' Training data.

Returns
selfRadiusNeighborsTransformer The fitted radius neighbors transformer.
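A sketch of fitting and applying the transformer (the radius and the toy data are illustrative):

from sklearn.datasets import make_blobs
from sklearn.neighbors import RadiusNeighborsTransformer

X, _ = make_blobs(n_samples=20, random_state=0)
transformer = RadiusNeighborsTransformer(radius=2.0).fit(X)  # fit returns self

# Sparse (n_samples, n_samples) matrix of distances to neighbors
# that fall within the fitted radius.
graph = transformer.transform(X)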
doc_2184
Get the width, height, and descent (offset from the bottom to the baseline), in display coords, of the string s with FontProperties prop.
doc_2185
Compute the F1 score, also known as balanced F-score or F-measure.

The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score are equal. The formula for the F1 score is:

F1 = 2 * (precision * recall) / (precision + recall)

In the multi-class and multi-label case, this is the average of the F1 score of each class with weighting depending on the average parameter. Read more in the User Guide.

Parameters
y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values.
y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier.
labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem.
pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only.
average{'micro', 'macro', 'samples', 'weighted', 'binary'} or None, default='binary' This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.
'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance; it can result in an F-score that is not between precision and recall.
'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).
sample_weightarray-like of shape (n_samples,), default=None Sample weights.
zero_division"warn", 0 or 1, default="warn" Sets the value to return when there is a zero division, i.e. when all predictions and labels are negative. If set to "warn", this acts as 0, but warnings are also raised.

Returns
f1_scorefloat or array of float, shape = [n_unique_labels] F1 score of the positive class in binary classification or weighted average of the F1 scores of each class for the multiclass task.

See also
fbeta_score, precision_recall_fscore_support, jaccard_score, multilabel_confusion_matrix

Notes
When true positive + false positive == 0, precision is undefined. When true positive + false negative == 0, recall is undefined. In such cases, by default the metric will be set to 0, as will f-score, and UndefinedMetricWarning will be raised. This behavior can be modified with zero_division.

References
1 Wikipedia entry for the F1-score.
Examples >>> from sklearn.metrics import f1_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> f1_score(y_true, y_pred, average='macro') 0.26... >>> f1_score(y_true, y_pred, average='micro') 0.33... >>> f1_score(y_true, y_pred, average='weighted') 0.26... >>> f1_score(y_true, y_pred, average=None) array([0.8, 0. , 0. ]) >>> y_true = [0, 0, 0, 0, 0, 0] >>> y_pred = [0, 0, 0, 0, 0, 0] >>> f1_score(y_true, y_pred, zero_division=1) 1.0...
doc_2186
Divide one Hermite series by another. Returns the quotient-with-remainder of two Hermite series c1 / c2. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series P_0 + 2*P_1 + 3*P_2. Parameters c1, c2array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns [quo, rem]ndarrays Of Hermite series coefficients representing the quotient and remainder. See also hermadd, hermsub, hermmulx, hermmul, hermpow Notes In general, the (polynomial) division of one Hermite series by another results in quotient and remainder terms that are not in the Hermite polynomial basis set. Thus, to express these results as a Hermite series, it is necessary to “reproject” the results onto the Hermite basis set, which may produce “unintuitive” (but correct) results; see Examples section below. Examples >>> from numpy.polynomial.hermite import hermdiv >>> hermdiv([ 52., 29., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> hermdiv([ 54., 31., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([2., 2.])) >>> hermdiv([ 53., 30., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 1.]))
doc_2187
Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example.
doc_2188
Other Members version '2.4.0'
doc_2189
Call self as a function.
doc_2190
""" Build the Cython demonstrations of low-level access to NumPy random Usage: python setup.py build_ext -i """ import setuptools # triggers monkeypatching distutils from distutils.core import setup from os.path import dirname, join, abspath import numpy as np from Cython.Build import cythonize from numpy.distutils.misc_util import get_info from setuptools.extension import Extension path = dirname(__file__) src_dir = join(dirname(path), '..', 'src') defs = [('NPY_NO_DEPRECATED_API', 0)] inc_path = np.get_include() lib_path = [abspath(join(np.get_include(), '..', '..', 'random', 'lib'))] lib_path += get_info('npymath')['library_dirs'] extending = Extension("extending", sources=[join('.', 'extending.pyx')], include_dirs=[ np.get_include(), join(path, '..', '..') ], define_macros=defs, ) distributions = Extension("extending_distributions", sources=[join('.', 'extending_distributions.pyx')], include_dirs=[inc_path], library_dirs=lib_path, libraries=['npyrandom', 'npymath'], define_macros=defs, ) extensions = [extending, distributions] setup( ext_modules=cythonize(extensions) )
doc_2191
Set the agg filter. Parameters filter_funccallable A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array.
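A sketch of a conforming filter function, following the signature described above (the darkening effect is a hypothetical example, and the filter is only honored by Agg-based backends):

import numpy as np
import matplotlib.pyplot as plt

def darken(im, dpi):
    # Receives an (m, n, 3) float array plus the dpi; returns an array
    # of the same shape. Here we simply scale the channels down by 20%.
    return np.clip(im * 0.8, 0.0, 1.0)

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 1])
line.set_agg_filter(darken)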
doc_2192
Returns a str object representing arbitrary object s. Treats bytestrings using the encoding codec. If strings_only is True, don’t convert (some) non-string-like objects.
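A sketch of the behavior (the commented values are what Django's implementation is expected to produce):

from django.utils.encoding import force_str

force_str(b'caf\xc3\xa9')          # 'café' (decoded with the given codec, UTF-8 by default)
force_str(42)                      # '42'
force_str(42, strings_only=True)   # 42 -- protected types are returned unchanged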
doc_2193
Bit-flags describing how this data type is to be interpreted. Bit-masks are in numpy.core.multiarray as the constants ITEM_HASOBJECT, LIST_PICKLE, ITEM_IS_POINTER, NEEDS_INIT, NEEDS_PYAPI, USE_GETITEM, USE_SETITEM. A full explanation of these flags is in the C-API documentation; they are largely useful for user-defined data-types. The following example demonstrates that operations on this particular dtype require the Python C-API.

Examples
>>> x = np.dtype([('a', np.int32, 8), ('b', np.float64, 6)])
>>> x.flags
16
>>> np.core.multiarray.NEEDS_PYAPI
16
doc_2194
class sklearn.preprocessing.RobustScaler(*, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False) [source]

Scale features using statistics that are robust to outliers.

This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the transform method.

Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results.

New in version 0.17.

Read more in the User Guide.

Parameters
with_centeringbool, default=True If True, center the data before scaling. This will cause transform to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
with_scalingbool, default=True If True, scale the data to interquartile range.
quantile_rangetuple (q_min, q_max), 0.0 < q_min < q_max < 100.0, default=(25.0, 75.0), == (1st quantile, 3rd quantile), == IQR Quantile range used to calculate scale_. New in version 0.18.
copybool, default=True If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
unit_variancebool, default=False If True, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of q_max and q_min for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. New in version 0.24.

Attributes
center_array of floats The median value for each feature in the training set.
scale_array of floats The (scaled) interquartile range for each feature in the training set. New in version 0.17: scale_ attribute.

See also
robust_scale Equivalent function without the estimator API.
PCA Further removes the linear correlation across features with 'whiten=True'.

Notes
For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py.
https://en.wikipedia.org/wiki/Median
https://en.wikipedia.org/wiki/Interquartile_range

Examples
>>> from sklearn.preprocessing import RobustScaler
>>> X = [[ 1., -2., 2.],
...      [ -2., 1., 3.],
...      [ 4., 1., -2.]]
>>> transformer = RobustScaler().fit(X)
>>> transformer
RobustScaler()
>>> transformer.transform(X)
array([[ 0. , -2. , 0. ],
       [-1. , 0. , 0.4],
       [ 1. , 0. , -1.6]])

Methods
fit(X[, y]) Compute the median and quantiles to be used for scaling.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Scale back the data to the original representation.
set_params(**params) Set the parameters of this estimator.
transform(X) Center and scale the data.

fit(X, y=None) [source]
Compute the median and quantiles to be used for scaling.
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the median and quantiles used for later scaling along the features axis.
yNone Ignored.
Returns selfobject Fitted scaler.

fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters Xarray-like of shape (n_samples, n_features) Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations).
**fit_paramsdict Additional fit parameters.
Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.

get_params(deep=True) [source]
Get parameters for this estimator.
Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns paramsdict Parameter names mapped to their values.

inverse_transform(X) [source]
Scale back the data to the original representation.
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The rescaled data to be transformed back.
Returns X_tr{ndarray, sparse matrix} of shape (n_samples, n_features) Transformed array.

set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Parameters **paramsdict Estimator parameters.
Returns selfestimator instance Estimator instance.

transform(X) [source]
Center and scale the data.
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data used to scale along the specified axis.
Returns X_tr{ndarray, sparse matrix} of shape (n_samples, n_features) Transformed array.

Examples using sklearn.preprocessing.RobustScaler
Compare the effect of different scalers on data with outliers
doc_2195
Draw contour lines on an unstructured triangular grid.

The triangulation can be specified in one of two ways; either

tricontour(triangulation, ...)

where triangulation is a Triangulation object, or

tricontour(x, y, ...)
tricontour(x, y, triangles, ...)
tricontour(x, y, triangles=triangles, ...)
tricontour(x, y, mask=mask, ...)
tricontour(x, y, triangles, mask=mask, ...)

in which case a Triangulation object will be created. See that class' docstring for an explanation of these cases.

The remaining arguments may be:

tricontour(..., Z)

where Z is the array of values to contour, one per point in the triangulation. The level values are chosen automatically.

tricontour(..., Z, levels)

With an int levels, contour up to levels+1 automatically chosen contour levels (levels intervals). With a sequence levels, draw contour lines at the values specified, which must be in increasing order.

tricontour(Z, **kwargs)

Use keyword arguments to control colors, linewidth, origin, cmap ... see below for more details.

Parameters
triangulationTriangulation, optional The unstructured triangular grid. If specified, then x, y, triangles, and mask are not accepted.
x, yarray-like, optional The coordinates of the values in Z.
triangles(ntri, 3) array-like of int, optional For each triangle, the indices of the three points that make up the triangle, ordered in an anticlockwise manner. If not specified, the Delaunay triangulation is calculated.
mask(ntri,) array-like of bool, optional Which triangles are masked out.
Z2D array-like The height values over which the contour is drawn.
levelsint or array-like, optional Determines the number and positions of the contour lines / regions. If an int n, use MaxNLocator, which tries to automatically choose no more than n+1 "nice" contour levels between vmin and vmax. If array-like, draw contour lines at the specified levels. The values must be in increasing order.

Returns
TriContourSet

Other Parameters
colorscolor string or sequence of colors, optional The colors of the levels, i.e., the contour lines. The sequence is cycled for the levels in ascending order. If the sequence is shorter than the number of levels, it's repeated. As a shortcut, single color strings may be used in place of one-element lists, i.e. 'red' instead of ['red'] to color all levels with the same color. This shortcut does only work for color strings, not for other ways of specifying colors. By default (value None), the colormap specified by cmap will be used.
alphafloat, default: 1 The alpha blending value, between 0 (transparent) and 1 (opaque).
cmapstr or Colormap, default: rcParams["image.cmap"] (default: 'viridis') A Colormap instance or registered colormap name. The colormap maps the level values to colors. If both colors and cmap are given, an error is raised.
normNormalize, optional If a colormap is used, the Normalize instance scales the level values to the canonical colormap range [0, 1] for mapping to colors. If not given, the default linear scaling is used.
vmin, vmaxfloat, optional If not None, either or both of these values will be supplied to the Normalize instance, overriding the default color scaling based on levels.
origin{None, 'upper', 'lower', 'image'}, default: None Determines the orientation and exact position of Z by specifying the position of Z[0, 0]. This is only relevant, if X, Y are not given.
None: Z[0, 0] is at X=0, Y=0 in the lower left corner.
'lower': Z[0, 0] is at X=0.5, Y=0.5 in the lower left corner.
'upper': Z[0, 0] is at X=N+0.5, Y=0.5 in the upper left corner.
'image': Use the value from rcParams["image.origin"] (default: 'upper'). extent(x0, x1, y0, y1), optional If origin is not None, then extent is interpreted as in imshow: it gives the outer pixel boundaries. In this case, the position of Z[0, 0] is the center of the pixel, not a corner. If origin is None, then (x0, y0) is the position of Z[0, 0], and (x1, y1) is the position of Z[-1, -1]. This argument is ignored if X and Y are specified in the call to contour. locatorticker.Locator subclass, optional The locator is used to determine the contour levels if they are not given explicitly via levels. Defaults to MaxNLocator. extend{'neither', 'both', 'min', 'max'}, default: 'neither' Determines the tricontour-coloring of values that are outside the levels range. If 'neither', values outside the levels range are not colored. If 'min', 'max' or 'both', color the values below, above or below and above the levels range. Values below min(levels) and above max(levels) are mapped to the under/over values of the Colormap. Note that most colormaps do not have dedicated colors for these by default, so that the over and under values are the edge values of the colormap. You may want to set these values explicitly using Colormap.set_under and Colormap.set_over. Note An existing TriContourSet does not get notified if properties of its colormap are changed. Therefore, an explicit call to ContourSet.changed() is needed after modifying the colormap. The explicit call can be left out, if a colorbar is assigned to the TriContourSet because it internally calls ContourSet.changed(). xunits, yunitsregistered units, optional Override axis units by specifying an instance of a matplotlib.units.ConversionInterface. antialiasedbool, optional Enable antialiasing, overriding the defaults. For filled contours, the default is True. For line contours, it is taken from rcParams["lines.antialiased"] (default: True). linewidthsfloat or array-like, default: rcParams["contour.linewidth"] (default: None) The line width of the contour lines. If a number, all levels will be plotted with this linewidth. If a sequence, the levels in ascending order will be plotted with the linewidths in the order specified. If None, this falls back to rcParams["lines.linewidth"] (default: 1.5). linestyles{None, 'solid', 'dashed', 'dashdot', 'dotted'}, optional If linestyles is None, the default is 'solid' unless the lines are monochrome. In that case, negative contours will take their linestyle from rcParams["contour.negative_linestyle"] (default: 'dashed') setting. linestyles can also be an iterable of the above strings specifying a set of linestyles to be used. If this iterable is shorter than the number of contour levels it will be repeated as necessary. Examples using matplotlib.axes.Axes.tricontour Contour plot of irregularly spaced data Tricontour Demo Tricontour Smooth Delaunay Tricontour Smooth User Trigradient Demo Triangular 3D contour plot tricontour(x, y, z)
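A minimal sketch using the x, y form, letting the Delaunay triangulation be computed internally (the data is illustrative):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = rng.uniform(-2, 2, 200)
z = x * np.exp(-x**2 - y**2)        # unstructured samples of a smooth field

fig, ax = plt.subplots()
cs = ax.tricontour(x, y, z, levels=8)   # int levels: ~8 "nice" intervals
ax.clabel(cs)                           # label the contour lines
plt.show()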
doc_2196
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters rendererRendererBase subclass. Notes This method is overridden in the Artist subclasses.
doc_2197
Returns the normalized RGBA values of the Color.

normalize() -> tuple

Returns the normalized RGBA values of the Color as floating point values.
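A sketch (the exact floats depend on the integer channel values):

import pygame

c = pygame.Color(255, 128, 0, 255)
print(c.normalize())   # roughly (1.0, 0.502, 0.0, 1.0)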
doc_2198
See Migration guide for more details. tf.compat.v1.io.parse_sequence_example

tf.io.parse_sequence_example(
    serialized, context_features=None, sequence_features=None,
    example_names=None, name=None
)

Parses a vector of serialized SequenceExample protos given in serialized. This op parses serialized sequence examples into a tuple of dictionaries, each mapping keys to Tensor and SparseTensor objects. The first dictionary contains mappings for keys appearing in context_features, and the second dictionary contains mappings for keys appearing in sequence_features.

At least one of context_features and sequence_features must be provided and non-empty.

The context_features keys are associated with a SequenceExample as a whole, independent of time / frame. In contrast, the sequence_features keys provide a way to access variable-length data within the FeatureList section of the SequenceExample proto. While the shapes of context_features values are fixed with respect to frame, the frame dimension (the first dimension) of sequence_features values may vary between SequenceExample protos, and even between feature_list keys within the same SequenceExample.

context_features contains VarLenFeature, RaggedFeature, and FixedLenFeature objects. Each VarLenFeature is mapped to a SparseTensor; each RaggedFeature is mapped to a RaggedTensor; and each FixedLenFeature is mapped to a Tensor, of the specified type, shape, and default value.

sequence_features contains VarLenFeature, RaggedFeature, and FixedLenSequenceFeature objects. Each VarLenFeature is mapped to a SparseTensor; each RaggedFeature is mapped to a RaggedTensor; and each FixedLenSequenceFeature is mapped to a Tensor, each of the specified type. The shape will be (B, T,) + df.dense_shape for FixedLenSequenceFeature df, where B is the batch size, and T is the length of the associated FeatureList in the SequenceExample. For instance, FixedLenSequenceFeature([]) yields a scalar 2-D Tensor of static shape [None, None] and dynamic shape [B, T], while FixedLenSequenceFeature([k]) (for int k >= 1) yields a 3-D matrix Tensor of static shape [None, None, k] and dynamic shape [B, T, k].

Like the input, the resulting output tensors have a batch dimension. This means that the original per-example shapes of VarLenFeatures and FixedLenSequenceFeatures can be lost. To handle that situation, this op also provides dicts of shape tensors as part of the output. There is one dict for the context features, and one for the feature_list features. Context features of type FixedLenFeature will not be present, since their shapes are already known by the caller. In situations where the input FixedLenSequenceFeatures are of different lengths across examples, the shorter examples will be padded with default datatype values: 0 for numeric types, and the empty string for string types.

Each SparseTensor corresponding to sequence_features represents a ragged vector. Its indices are [time, index], where time is the FeatureList entry and index is the value's index in the list of values associated with that time.

FixedLenFeature entries with a default_value and FixedLenSequenceFeature entries with allow_missing=True are optional; otherwise, we will fail if that Feature or FeatureList is missing from any example in serialized.

example_name may contain a descriptive name for the corresponding serialized proto. This may be useful for debugging purposes, but it has no effect on the output. If not None, example_name must be a scalar.
Args serialized A vector (1-D Tensor) of type string containing binary serialized SequenceExample protos. context_features A dict mapping feature keys to FixedLenFeature or VarLenFeature or RaggedFeature values. These features are associated with a SequenceExample as a whole. sequence_features A dict mapping feature keys to FixedLenSequenceFeature or VarLenFeature or RaggedFeature values. These features are associated with data within the FeatureList section of the SequenceExample proto. example_names A vector (1-D Tensor) of strings (optional), the name of the serialized protos. name A name for this operation (optional). Returns A tuple of three dicts, each mapping keys to Tensors, SparseTensors, and RaggedTensor. The first dict contains the context key/values, the second dict contains the feature_list key/values, and the final dict contains the lengths of any dense feature_list features. Raises ValueError if any feature is invalid.
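A sketch of a typical call; the 'label' and 'tokens' feature names are hypothetical, and a single tiny SequenceExample is built inline so the call is runnable:

import tensorflow as tf

# Build one SequenceExample with a scalar context feature and a
# variable-length sequence feature.
ex = tf.train.SequenceExample()
ex.context.feature['label'].int64_list.value.append(1)
ex.feature_lists.feature_list['tokens'].feature.add().int64_list.value.extend([3, 4])

context, sequences, dense_lengths = tf.io.parse_sequence_example(
    serialized=tf.constant([ex.SerializeToString()]),
    context_features={'label': tf.io.FixedLenFeature([], tf.int64)},
    sequence_features={'tokens': tf.io.VarLenFeature(tf.int64)})

# context['label']    -> dense Tensor of shape [B]
# sequences['tokens'] -> SparseTensor of the token values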
doc_2199
torch.cuda.can_device_access_peer(device, peer_device) [source]
Checks if peer access between two devices is possible.

torch.cuda.current_blas_handle() [source]
Returns cublasHandle_t pointer to the current cuBLAS handle.

torch.cuda.current_device() [source]
Returns the index of a currently selected device.

torch.cuda.current_stream(device=None) [source]
Returns the currently selected Stream for a given device.
Parameters device (torch.device or int, optional) – selected device. Returns the currently selected Stream for the current device, given by current_device(), if device is None (default).

torch.cuda.default_stream(device=None) [source]
Returns the default Stream for a given device.
Parameters device (torch.device or int, optional) – selected device. Returns the default Stream for the current device, given by current_device(), if device is None (default).

class torch.cuda.device(device) [source]
Context-manager that changes the selected device.
Parameters device (torch.device or int) – device index to select. It’s a no-op if this argument is a negative integer or None.

torch.cuda.device_count() [source]
Returns the number of GPUs available.

class torch.cuda.device_of(obj) [source]
Context-manager that changes the current device to that of given object. You can use both tensors and storages as arguments. If a given object is not allocated on a GPU, this is a no-op.
Parameters obj (Tensor or Storage) – object allocated on the selected device.

torch.cuda.get_arch_list() [source]
Returns the list of CUDA architectures this library was compiled for.

torch.cuda.get_device_capability(device=None) [source]
Gets the CUDA capability of a device.
Parameters device (torch.device or int, optional) – device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default).
Returns the major and minor CUDA capability of the device
Return type tuple(int, int)

torch.cuda.get_device_name(device=None) [source]
Gets the name of a device.
Parameters device (torch.device or int, optional) – device for which to return the name. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default).
Returns the name of the device
Return type str

torch.cuda.get_device_properties(device) [source]
Gets the properties of a device.
Parameters device (torch.device or int or str) – device for which to return the properties of the device.
Returns the properties of the device
Return type _CudaDeviceProperties

torch.cuda.get_gencode_flags() [source]
Returns the NVCC gencode flags this library was compiled with.

torch.cuda.init() [source]
Initialize PyTorch’s CUDA state. You may need to call this explicitly if you are interacting with PyTorch via its C API, as Python bindings for CUDA functionality will not be available until this initialization takes place. Ordinary users should not need this, as all of PyTorch’s CUDA methods automatically initialize CUDA state on-demand. Does nothing if the CUDA state is already initialized.

torch.cuda.ipc_collect() [source]
Force collects GPU memory after it has been released by CUDA IPC.
Note Checks if any sent CUDA tensors could be cleaned from the memory. Force closes the shared memory file used for reference counting if there are no active counters. Useful when the producer process has stopped actively sending tensors and you want to release unused memory.
torch.cuda.is_available() [source]
Returns a bool indicating if CUDA is currently available.

torch.cuda.is_initialized() [source]
Returns whether PyTorch’s CUDA state has been initialized.

torch.cuda.set_device(device) [source]
Sets the current device. Usage of this function is discouraged in favor of device. In most cases it’s better to use the CUDA_VISIBLE_DEVICES environment variable.
Parameters device (torch.device or int) – selected device. This function is a no-op if this argument is negative.

torch.cuda.stream(stream) [source]
Context-manager that selects a given stream. All CUDA kernels queued within its context will be enqueued on a selected stream.
Parameters stream (Stream) – selected stream. This manager is a no-op if it’s None.
Note Streams are per-device. If the selected stream is not on the current device, this function will also change the current device to match the stream.

torch.cuda.synchronize(device=None) [source]
Waits for all kernels in all streams on a CUDA device to complete.
Parameters device (torch.device or int, optional) – device for which to synchronize. It uses the current device, given by current_device(), if device is None (default).

Random Number Generator

torch.cuda.get_rng_state(device='cuda') [source]
Returns the random number generator state of the specified GPU as a ByteTensor.
Parameters device (torch.device or int, optional) – The device to return the RNG state of. Default: 'cuda' (i.e., torch.device('cuda'), the current CUDA device).
Warning This function eagerly initializes CUDA.

torch.cuda.get_rng_state_all() [source]
Returns a list of ByteTensor representing the random number states of all devices.

torch.cuda.set_rng_state(new_state, device='cuda') [source]
Sets the random number generator state of the specified GPU.
Parameters new_state (torch.ByteTensor) – The desired state
device (torch.device or int, optional) – The device to set the RNG state. Default: 'cuda' (i.e., torch.device('cuda'), the current CUDA device).

torch.cuda.set_rng_state_all(new_states) [source]
Sets the random number generator state of all devices.
Parameters new_states (Iterable of torch.ByteTensor) – The desired state for each device

torch.cuda.manual_seed(seed) [source]
Sets the seed for generating random numbers for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.
Parameters seed (int) – The desired seed.
Warning If you are working with a multi-GPU model, this function is insufficient to get determinism. To seed all GPUs, use manual_seed_all().

torch.cuda.manual_seed_all(seed) [source]
Sets the seed for generating random numbers on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.
Parameters seed (int) – The desired seed.

torch.cuda.seed() [source]
Sets the seed for generating random numbers to a random number for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.
Warning If you are working with a multi-GPU model, this function will only initialize the seed on one GPU. To initialize all GPUs, use seed_all().

torch.cuda.seed_all() [source]
Sets the seed for generating random numbers to a random number on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored.

torch.cuda.initial_seed() [source]
Returns the current random seed of the current GPU.
Warning This function eagerly initializes CUDA.
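A sketch combining the device and RNG helpers above (guarded so it is a no-op on machines without CUDA):

import torch

if torch.cuda.is_available():
    torch.cuda.manual_seed_all(0)          # seed every visible GPU
    print(torch.cuda.device_count())
    print(torch.cuda.get_device_name(0))
    with torch.cuda.device(0):             # context-manager device selection
        x = torch.randn(3, device='cuda')
    torch.cuda.synchronize()               # wait for all queued kernels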
Communication collectives

torch.cuda.comm.broadcast(tensor, devices=None, *, out=None) [source]
Broadcasts a tensor to specified GPU devices.
Parameters tensor (Tensor) – tensor to broadcast. Can be on CPU or GPU.
devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to broadcast.
out (Sequence[Tensor], optional, keyword-only) – the GPU tensors to store output results.
Note Exactly one of devices and out must be specified.
Returns If devices is specified, a tuple containing copies of tensor, placed on devices. If out is specified, a tuple containing out tensors, each containing a copy of tensor.

torch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760) [source]
Broadcasts a sequence of tensors to the specified GPUs. Small tensors are first coalesced into a buffer to reduce the number of synchronizations.
Parameters tensors (sequence) – tensors to broadcast. Must be on the same device, either CPU or GPU.
devices (Iterable[torch.device, str or int]) – an iterable of GPU devices, among which to broadcast.
buffer_size (int) – maximum size of the buffer used for coalescing
Returns A tuple containing copies of tensor, placed on devices.

torch.cuda.comm.reduce_add(inputs, destination=None) [source]
Sums tensors from multiple GPUs. All inputs should have matching shapes, dtype, and layout. The output tensor will be of the same shape, dtype, and layout.
Parameters inputs (Iterable[Tensor]) – an iterable of tensors to add.
destination (int, optional) – a device on which the output will be placed (default: current device).
Returns A tensor containing an elementwise sum of all inputs, placed on the destination device.

torch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None) [source]
Scatters tensor across multiple GPUs.
Parameters tensor (Tensor) – tensor to scatter. Can be on CPU or GPU.
devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to scatter.
chunk_sizes (Iterable[int], optional) – sizes of chunks to be placed on each device. It should match devices in length and sum to tensor.size(dim). If not specified, tensor will be divided into equal chunks.
dim (int, optional) – A dimension along which to chunk tensor. Default: 0.
streams (Iterable[Stream], optional) – an iterable of Streams, among which to execute the scatter. If not specified, the default stream will be utilized.
out (Sequence[Tensor], optional, keyword-only) – the GPU tensors to store output results. Sizes of these tensors must match that of tensor, except for dim, where the total size must sum to tensor.size(dim).
Note Exactly one of devices and out must be specified. When out is specified, chunk_sizes must not be specified and will be inferred from sizes of out.
Returns If devices is specified, a tuple containing chunks of tensor, placed on devices. If out is specified, a tuple containing out tensors, each containing a chunk of tensor.

torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None) [source]
Gathers tensors from multiple GPU devices.
Parameters tensors (Iterable[Tensor]) – an iterable of tensors to gather. Tensor sizes in all dimensions other than dim have to match.
dim (int, optional) – a dimension along which the tensors will be concatenated. Default: 0.
destination (torch.device, str, or int, optional) – the output device. Can be CPU or CUDA. Default: the current CUDA device.
out (Tensor, optional, keyword-only) – the tensor to store gather result.
Its sizes must match those of tensors, except for dim, where the size must equal sum(tensor.size(dim) for tensor in tensors). Can be on CPU or CUDA. Note destination must not be specified when out is specified. Returns If destination is specified, a tensor located on destination device, that is a result of concatenating tensors along dim. If out is specified, the out tensor, now containing results of concatenating tensors along dim. Streams and events class torch.cuda.Stream [source] Wrapper around a CUDA stream. A CUDA stream is a linear sequence of execution that belongs to a specific device, independent from other streams. See CUDA semantics for details. Parameters device (torch.device or int, optional) – a device on which to allocate the stream. If device is None (default) or a negative integer, this will use the current device. priority (int, optional) – priority of the stream. Can be either -1 (high priority) or 0 (low priority). By default, streams have priority 0. Note Although CUDA versions >= 11 support more than two levels of priorities, in PyTorch, we only support two levels of priorities. query() [source] Checks if all the work submitted has been completed. Returns A boolean indicating if all kernels in this stream are completed. record_event(event=None) [source] Records an event. Parameters event (Event, optional) – event to record. If not given, a new one will be allocated. Returns Recorded event. synchronize() [source] Wait for all the kernels in this stream to complete. Note This is a wrapper around cudaStreamSynchronize(): see CUDA Stream documentation for more info. wait_event(event) [source] Makes all future work submitted to the stream wait for an event. Parameters event (Event) – an event to wait for. Note This is a wrapper around cudaStreamWaitEvent(): see CUDA Stream documentation for more info. This function returns without waiting for event: only future operations are affected. wait_stream(stream) [source] Synchronizes with another stream. All future work submitted to this stream will wait until all kernels submitted to a given stream at the time of call complete. Parameters stream (Stream) – a stream to synchronize. Note This function returns without waiting for currently enqueued kernels in stream: only future operations are affected. class torch.cuda.Event [source] Wrapper around a CUDA event. CUDA events are synchronization markers that can be used to monitor the device’s progress, to accurately measure timing, and to synchronize CUDA streams. The underlying CUDA events are lazily initialized when the event is first recorded or exported to another process. After creation, only streams on the same device may record the event. However, streams on any device can wait on the event. Parameters enable_timing (bool, optional) – indicates if the event should measure time (default: False) blocking (bool, optional) – if True, wait() will be blocking (default: False) interprocess (bool) – if True, the event can be shared between processes (default: False) elapsed_time(end_event) [source] Returns the time elapsed in milliseconds after the event was recorded and before the end_event was recorded. classmethod from_ipc_handle(device, handle) [source] Reconstruct an event from an IPC handle on the given device. ipc_handle() [source] Returns an IPC handle of this event. If not recorded yet, the event will use the current device. query() [source] Checks if all work currently captured by event has completed. 
Returns A boolean indicating if all work currently captured by event has completed.

record(stream=None) [source]
Records the event in a given stream. Uses torch.cuda.current_stream() if no stream is specified. The stream’s device must match the event’s device.

synchronize() [source]
Waits for the event to complete. Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes.
Note This is a wrapper around cudaEventSynchronize(): see CUDA Event documentation for more info.

wait(stream=None) [source]
Makes all future work submitted to the given stream wait for this event. Uses torch.cuda.current_stream() if no stream is specified.

Memory management

torch.cuda.empty_cache() [source]
Releases all unoccupied cached memory currently held by the caching allocator so that it can be used by other GPU applications and is visible in nvidia-smi.
Note empty_cache() doesn’t increase the amount of GPU memory available for PyTorch. However, it may help reduce fragmentation of GPU memory in certain cases. See Memory management for more details about GPU memory management.

torch.cuda.list_gpu_processes(device=None) [source]
Returns a human-readable printout of the running processes and their GPU memory use for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions.
Parameters device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).

torch.cuda.memory_stats(device=None) [source]
Returns a dictionary of CUDA memory allocator statistics for a given device. The return value of this function is a dictionary of statistics, each of which is a non-negative integer.
Core statistics:
"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of allocation requests received by the memory allocator.
"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of allocated memory.
"segment.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of reserved segments from cudaMalloc().
"reserved_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of reserved memory.
"active.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of active memory blocks.
"active_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of active memory.
"inactive_split.{all,large_pool,small_pool}.{current,peak,allocated,freed}": number of inactive, non-releasable memory blocks.
"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}": amount of inactive, non-releasable memory.
For these core statistics, values are broken down as follows.
Pool type:
all: combined statistics across all memory pools.
large_pool: statistics for the large allocation pool (as of October 2019, for size >= 1MB allocations).
small_pool: statistics for the small allocation pool (as of October 2019, for size < 1MB allocations).
Metric type:
current: current value of this metric.
peak: maximum value of this metric.
allocated: historical total increase in this metric.
freed: historical total decrease in this metric.
In addition to the core statistics, we also provide some simple event counters:
"num_alloc_retries": number of failed cudaMalloc calls that result in a cache flush and retry.
"num_ooms": number of out-of-memory errors thrown.
Parameters device (torch.device or int, optional) – selected device. Returns statistics for the current device, given by current_device(), if device is None (default).
Note See Memory management for more details about GPU memory management.

torch.cuda.memory_summary(device=None, abbreviated=False) [source]
Returns a human-readable printout of the current memory allocator statistics for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions.
Parameters device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).
abbreviated (bool, optional) – whether to return an abbreviated summary (default: False).
Note See Memory management for more details about GPU memory management.

torch.cuda.memory_snapshot() [source]
Returns a snapshot of the CUDA memory allocator state across all devices. Interpreting the output of this function requires familiarity with the memory allocator internals.
Note See Memory management for more details about GPU memory management.

torch.cuda.memory_allocated(device=None) [source]
Returns the current GPU memory occupied by tensors in bytes for a given device.
Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Note This is likely less than the amount shown in nvidia-smi since some unused memory can be held by the caching allocator and some context needs to be created on GPU. See Memory management for more details about GPU memory management.

torch.cuda.max_memory_allocated(device=None) [source]
Returns the maximum GPU memory occupied by tensors in bytes for a given device. By default, this returns the peak allocated memory since the beginning of this program. reset_peak_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop.
Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Note See Memory management for more details about GPU memory management.

torch.cuda.reset_max_memory_allocated(device=None) [source]
Resets the starting point in tracking maximum GPU memory occupied by tensors for a given device. See max_memory_allocated() for details.
Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Warning This function now calls reset_peak_memory_stats(), which resets all peak memory stats.
Note See Memory management for more details about GPU memory management.

torch.cuda.memory_reserved(device=None) [source]
Returns the current GPU memory managed by the caching allocator in bytes for a given device.
Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Note See Memory management for more details about GPU memory management.

torch.cuda.max_memory_reserved(device=None) [source]
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device. By default, this returns the peak cached memory since the beginning of this program. reset_peak_stats() can be used to reset the starting point in tracking this metric.
For example, these two functions can measure the peak cached memory amount of each iteration in a training loop.
Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Note See Memory management for more details about GPU memory management.

torch.cuda.set_per_process_memory_fraction(fraction, device=None) [source]
Set memory fraction for a process. The fraction is used to limit how much memory the caching allocator may allocate on a CUDA device. The allowed value equals the total visible memory multiplied by the fraction. If a process tries to allocate more than the allowed value, an out-of-memory error is raised by the allocator.
Parameters fraction (float) – Range: 0~1. Allowed memory equals total_memory * fraction.
device (torch.device or int, optional) – selected device. If it is None the default CUDA device is used.
Note In general, the total available free memory is less than the total capacity.

torch.cuda.memory_cached(device=None) [source]
Deprecated; see memory_reserved().

torch.cuda.max_memory_cached(device=None) [source]
Deprecated; see max_memory_reserved().

torch.cuda.reset_max_memory_cached(device=None) [source]
Resets the starting point in tracking maximum GPU memory managed by the caching allocator for a given device. See max_memory_cached() for details.
Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).
Warning This function now calls reset_peak_memory_stats(), which resets all peak memory stats.
Note See Memory management for more details about GPU memory management.

NVIDIA Tools Extension (NVTX)

torch.cuda.nvtx.mark(msg) [source]
Describe an instantaneous event that occurred at some point.
Parameters msg (string) – ASCII message to associate with the event.

torch.cuda.nvtx.range_push(msg) [source]
Pushes a range onto a stack of nested range spans. Returns the zero-based depth of the range that is started.
Parameters msg (string) – ASCII message to associate with range

torch.cuda.nvtx.range_pop() [source]
Pops a range off of a stack of nested range spans. Returns the zero-based depth of the range that is ended.
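A sketch touching the stream, memory, and NVTX APIs above (guarded so it is a no-op on machines without CUDA):

import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()
    with torch.cuda.stream(s):                      # kernels below enqueue on s
        torch.cuda.nvtx.range_push("scaled_ones")   # open a named profiling range
        y = torch.ones(1 << 20, device='cuda') * 2
        torch.cuda.nvtx.range_pop()                 # close the most recent range
    torch.cuda.current_stream().wait_stream(s)      # order later work after s
    print(torch.cuda.memory_allocated())            # bytes held by live tensors
    del y
    torch.cuda.empty_cache()                        # return cached blocks to the driver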