doc_3200
class KeysValidator(keys, strict=False, messages=None) Validates that the given keys are contained in the value. If strict is True, then it also checks that there are no other keys present. The messages passed should be a dict containing the keys missing_keys and/or extra_keys. Note Note that this checks only for the existence of a given key, not that the value of a key is non-empty. Range validators RangeMaxValueValidator class RangeMaxValueValidator(limit_value, message=None) Validates that the upper bound of the range is not greater than limit_value. RangeMinValueValidator class RangeMinValueValidator(limit_value, message=None) Validates that the lower bound of the range is not less than the limit_value.
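The described key-checking semantics can be sketched in plain Python (an illustration only, not the library's actual code; the real validator reports errors through its messages dict under missing_keys and extra_keys):

```python
def validate_keys(value, keys, strict=False):
    # Sketch of the KeysValidator semantics described above: every key in
    # `keys` must exist in `value`; with strict=True, no other keys may exist.
    missing = set(keys) - set(value)
    if missing:
        raise ValueError("missing_keys: %s" % sorted(missing))
    if strict:
        extra = set(value) - set(keys)
        if extra:
            raise ValueError("extra_keys: %s" % sorted(extra))

# Only key existence is checked, not that the value is non-empty:
validate_keys({"a": 1, "b": None}, ["a", "b"])
```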
doc_3201
rotate_z_ip(angle) -> None Rotates the vector counterclockwise around the z-axis by the given angle in degrees, in place. The length of the vector is not changed.
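The underlying math can be sketched in plain Python (the real method mutates the vector in place; this returns a new tuple but applies the same counterclockwise rotation, which preserves length):

```python
import math

def rotate_z(v, angle_deg):
    # Counterclockwise rotation of 3-vector v about the z-axis by
    # angle_deg degrees; the z component is unchanged.
    x, y, z = v
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)
```

For example, rotating (1, 0, 0) by 90 degrees lands (up to rounding) on (0, 1, 0).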
doc_3202
tf.debugging.assert_rank_in( x, ranks, message=None, name=None ) This Op checks that the rank of x is in ranks. If x has a different rank, message, as well as the shape of x are printed, and InvalidArgumentError is raised. Args x Tensor. ranks Iterable of scalar Tensor objects. message A string to prefix to the default message. name A name for this operation (optional). Defaults to "assert_rank_in". Returns Op raising InvalidArgumentError unless rank of x is in ranks. If static checks determine x has matching rank, a no_op is returned. This can be used with tf.control_dependencies inside of tf.functions to block followup computation until the check has executed. Raises InvalidArgumentError x does not have rank in ranks, but the rank cannot be statically determined. ValueError If static checks determine x has mismatched rank. Eager Compatibility returns None
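The documented check can be sketched without TensorFlow (this is not TensorFlow's implementation; rank is approximated here as the nesting depth of non-empty Python lists):

```python
def _rank(x):
    # Approximate tensor rank as the nesting depth of (non-empty) lists;
    # a scalar has rank 0.
    r = 0
    while isinstance(x, list) and x:
        r += 1
        x = x[0]
    return r

def assert_rank_in(x, ranks, message=None):
    # Mirror the documented behavior: raise unless rank(x) is in ranks,
    # prefixing the optional message.
    if _rank(x) not in ranks:
        raise ValueError("%s: rank %d not in %s"
                         % (message or "assert_rank_in", _rank(x), list(ranks)))
```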
doc_3203
A positive integer specifying the number of elements in the array. Out-of-range subscripts result in an IndexError. Will be returned by len().
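This reads like the _length_ attribute of ctypes array types; assuming so, a quick standard-library illustration:

```python
from ctypes import c_int

IntArray5 = c_int * 5        # array type whose _length_ is 5
arr = IntArray5(1, 2, 3, 4, 5)

n = len(arr)                 # taken from _length_
last = arr[4]                # highest valid subscript
try:
    arr[5]                   # out-of-range subscript
except IndexError:
    pass
```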
doc_3204
Return the day of the year. Examples >>> ts = pd.Timestamp(2020, 3, 14) >>> ts.day_of_year 74
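The same value can be cross-checked with the standard library, which exposes the day of the year as tm_yday:

```python
from datetime import date

# 2020 is a leap year, so March 14 is day 31 + 29 + 14 = 74.
doy = date(2020, 3, 14).timetuple().tm_yday
```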
doc_3205
codecs.BOM_BE codecs.BOM_LE codecs.BOM_UTF8 codecs.BOM_UTF16 codecs.BOM_UTF16_BE codecs.BOM_UTF16_LE codecs.BOM_UTF32 codecs.BOM_UTF32_BE codecs.BOM_UTF32_LE These constants define various byte sequences, being Unicode byte order marks (BOMs) for several encodings. They are used in UTF-16 and UTF-32 data streams to indicate the byte order used, and in UTF-8 as a Unicode signature. BOM_UTF16 is either BOM_UTF16_BE or BOM_UTF16_LE depending on the platform’s native byte order, BOM is an alias for BOM_UTF16, BOM_LE for BOM_UTF16_LE and BOM_BE for BOM_UTF16_BE. The others represent the BOM in UTF-8 and UTF-32 encodings.
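Because BOM_UTF16_LE is a byte prefix of BOM_UTF32_LE, any BOM sniffer must test the longer UTF-32 sequences first. A minimal sketch using these constants (the codec names chosen here are one reasonable mapping, not a standard API):

```python
import codecs

def sniff_bom(data):
    # Guess an encoding from a leading BOM; return None if no BOM is found.
    # UTF-32 BOMs are checked before UTF-16 ones, since BOM_UTF16_LE
    # (b'\xff\xfe') is a prefix of BOM_UTF32_LE (b'\xff\xfe\x00\x00').
    boms = [
        (codecs.BOM_UTF8, "utf-8-sig"),
        (codecs.BOM_UTF32_BE, "utf-32-be"),
        (codecs.BOM_UTF32_LE, "utf-32-le"),
        (codecs.BOM_UTF16_BE, "utf-16-be"),
        (codecs.BOM_UTF16_LE, "utf-16-le"),
    ]
    for bom, name in boms:
        if data.startswith(bom):
            return name
    return None
```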
doc_3206
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
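The <component>__<parameter> convention can be illustrated on a plain dict of dicts (a sketch of the naming scheme, not scikit-learn's actual implementation):

```python
def set_nested_params(params, **updates):
    # Keys of the form "component__parameter" update a nested entry;
    # all other keys update the top level directly.
    for key, value in updates.items():
        if "__" in key:
            component, _, name = key.partition("__")
            params[component][name] = value
        else:
            params[key] = value
    return params

# Hypothetical pipeline parameters for illustration:
pipeline_params = {"scaler": {"with_mean": True}, "clf": {"C": 1.0}}
set_nested_params(pipeline_params, clf__C=10.0)
```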
doc_3207
Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of p possible outcomes. An example of such an experiment is throwing a die, where the outcome can be 1 through 6. Each sample drawn from the distribution represents n such experiments. Its values, X_i = [X_0, X_1, ..., X_p], represent the number of times the outcome was i. Note New code should use the multinomial method of a default_rng() instance instead; please see the Quick Start. Parameters nint Number of experiments. pvalssequence of floats, length p Probabilities of each of the p different outcomes. These must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as sum(pvals[:-1]) <= 1). sizeint or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned. Returns outndarray The drawn samples, of shape size, if that was provided. If not, the shape is (N,). In other words, each entry out[i,j,...,:] is an N-dimensional value drawn from the distribution. See also Generator.multinomial which should be used for new code. Examples Throw a die 20 times: >>> np.random.multinomial(20, [1/6.]*6, size=1) array([[4, 1, 7, 5, 2, 1]]) # random It landed 4 times on 1, once on 2, etc. Now, throw the die 20 times, and 20 times again: >>> np.random.multinomial(20, [1/6.]*6, size=2) array([[3, 4, 3, 3, 4, 3], # random [2, 4, 3, 4, 0, 7]]) For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. A loaded die is more likely to land on number 6: >>> np.random.multinomial(100, [1/7.]*5 + [2/7.]) array([11, 16, 14, 17, 16, 26]) # random The probability inputs should be normalized.
As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so: >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT array([38, 62]) # random not like: >>> np.random.multinomial(100, [1.0, 2.0]) # WRONG Traceback (most recent call last): ValueError: pvals < 0, pvals > 1 or pvals contains NaNs
doc_3208
See Migration guide for more details. tf.compat.v1.raw_ops.ApplyAdaMax tf.raw_ops.ApplyAdaMax( var, m, v, beta1_power, lr, beta1, beta2, epsilon, grad, use_locking=False, name=None ) m_t <- beta1 * m_{t-1} + (1 - beta1) * g; v_t <- max(beta2 * v_{t-1}, abs(g)); variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon) Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). m A mutable Tensor. Must have the same type as var. Should be from a Variable(). v A mutable Tensor. Must have the same type as var. Should be from a Variable(). beta1_power A Tensor. Must have the same type as var. Must be a scalar. lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar. beta1 A Tensor. Must have the same type as var. Momentum factor. Must be a scalar. beta2 A Tensor. Must have the same type as var. Momentum factor. Must be a scalar. epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. use_locking An optional bool. Defaults to False. If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
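The three update rules can be sketched for a single scalar parameter (the real op updates whole tensors in place; this is an illustration of the arithmetic only):

```python
def apply_ada_max(var, m, v, beta1_power, lr, beta1, beta2, epsilon, grad):
    # One AdaMax step for a scalar parameter, following the update rules
    # documented above (beta1_power plays the role of beta1^t).
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = max(beta2 * v, abs(grad))               # infinity-norm estimate
    var = var - lr / (1 - beta1_power) * m / (v + epsilon)
    return var, m, v
```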
doc_3209
os.SF_MNOWAIT os.SF_SYNC Parameters to the sendfile() function, if the implementation supports them. Availability: Unix. New in version 3.3.
doc_3210
Return a guess for whether wsgi.url_scheme should be “http” or “https”, by checking for an HTTPS environment variable in the environ dictionary. The return value is a string. This function is useful when creating a gateway that wraps CGI or a CGI-like protocol such as FastCGI. Typically, servers providing such protocols will include an HTTPS variable with a value of “1”, “yes”, or “on” when a request is received via SSL. So, this function returns “https” if such a value is found, and “http” otherwise.
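This function lives at wsgiref.util.guess_scheme in the standard library and can be exercised directly:

```python
from wsgiref.util import guess_scheme

a = guess_scheme({"HTTPS": "on"})   # SSL indicated -> 'https'
b = guess_scheme({"HTTPS": "1"})    # SSL indicated -> 'https'
c = guess_scheme({})                # no HTTPS variable -> 'http'
```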
doc_3211
Return the offsets for the collection.
doc_3212
Stop monitoring the fd file descriptor for write availability.
doc_3213
Returns the item ID of the item at position y.
doc_3214
Return the value (in fractional seconds) of a monotonic clock, i.e. a clock that cannot go backwards. The clock is not affected by system clock updates. The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid. New in version 3.3. Changed in version 3.5: The function is now always available and always system-wide.
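This is the contract of time.monotonic in the standard library; since only differences between readings are meaningful, the typical use is elapsed-time measurement:

```python
import time

# The absolute value of time.monotonic() has an undefined reference point;
# only the difference between two readings is meaningful.
start = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - start   # never negative: the clock cannot go backwards
```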
doc_3215
See Migration guide for more details. tf.compat.v1.raw_ops.MaxPoolGradGrad tf.raw_ops.MaxPoolGradGrad( orig_input, orig_output, grad, ksize, strides, padding, data_format='NHWC', name=None ) Args orig_input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. The original input tensor. orig_output A Tensor. Must have the same type as orig_input. The original output tensor. grad A Tensor. Must have the same type as orig_input. 4-D. Gradients of gradients w.r.t. the input of max_pool. ksize A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding A string from: "SAME", "VALID". The type of padding algorithm to use. data_format An optional string from: "NHWC", "NCHW". Defaults to "NHWC". Specify the data format of the input and output data. With the default format "NHWC", the data is stored in the order of: [batch, in_height, in_width, in_channels]. Alternatively, the format could be "NCHW", the data storage order of: [batch, in_channels, in_height, in_width]. name A name for the operation (optional). Returns A Tensor. Has the same type as orig_input.
doc_3216
Continue the process if it is currently stopped. Availability: Unix.
doc_3217
tf.compat.v1.summary.get_summary_description( node_def ) When a Summary op is instantiated, a SummaryDescription of associated metadata is stored in its NodeDef. This method retrieves the description. Args node_def the node_def_pb2.NodeDef of a TensorSummary op Returns a summary_pb2.SummaryDescription Raises ValueError if the node is not a summary op. Eager Compatibility Not compatible with eager execution. To write TensorBoard summaries under eager execution, use tf.contrib.summary instead.
doc_3218
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
doc_3219
Return the Transform instance used by this artist offset.
doc_3220
Read at least one byte of cooked data unless EOF is hit. Return b'' if EOF is hit. Block if no data is immediately available.
doc_3221
Get Less than of dataframe and other, element-wise (binary operator lt). Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators. Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison. Parameters other:scalar, sequence, Series, or DataFrame Any single or multiple element data structure, or list-like object. axis:{0 or ‘index’, 1 or ‘columns’}, default ‘columns’ Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). level:int or label Broadcast across a level, matching Index values on the passed MultiIndex level. Returns DataFrame of bool Result of the comparison. See also DataFrame.eq Compare DataFrames for equality elementwise. DataFrame.ne Compare DataFrames for inequality elementwise. DataFrame.le Compare DataFrames for less than inequality or equality elementwise. DataFrame.lt Compare DataFrames for strictly less than inequality elementwise. DataFrame.ge Compare DataFrames for greater than inequality or equality elementwise. DataFrame.gt Compare DataFrames for strictly greater than inequality elementwise. Notes Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN). Examples >>> df = pd.DataFrame({'cost': [250, 150, 100], ... 'revenue': [100, 250, 300]}, ... 
index=['A', 'B', 'C']) >>> df cost revenue A 250 100 B 150 250 C 100 300 Comparison with a scalar, using either the operator or method: >>> df == 100 cost revenue A False True B False False C True False >>> df.eq(100) cost revenue A False True B False False C True False When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast: >>> df != pd.Series([100, 250], index=["cost", "revenue"]) cost revenue A True True B True False C False True Use the method to control the broadcast axis: >>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') cost revenue A True False B True True C True True D True True When comparing to an arbitrary sequence, the number of columns must match the number of elements in other: >>> df == [250, 100] cost revenue A True True B False False C False False Use the method to control the axis: >>> df.eq([250, 250, 100], axis='index') cost revenue A True False B False True C True False Compare to a DataFrame of different shape. >>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, ... index=['A', 'B', 'C', 'D']) >>> other revenue A 300 B 250 C 100 D 150 >>> df.gt(other) cost revenue A False False B False False C False True D False False Compare to a MultiIndex by level. >>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225 >>> df.le(df_multindex, level=1) cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
doc_3222
Determine if there is an effective (active) breakpoint at this line of code. Return a tuple of the breakpoint and a boolean that indicates if it is ok to delete a temporary breakpoint. Return (None, None) if there is no matching breakpoint.
doc_3223
Bases: matplotlib.collections._CollectionWithSizes A collection of Paths, as created by e.g. scatter. Parameters pathslist of path.Path The paths that will make up the Collection. sizesarray-like The factor by which to scale each drawn Path. One unit squared in the Path's data space is scaled to be sizes**2 points when rendered. **kwargs Forwarded to Collection. add_callback(func)[source] Add a callback function that will be called whenever one of the Artist's properties changes. Parameters funccallable The callback function. It must have the signature: def func(artist: Artist) -> Any where artist is the calling Artist. Return values may exist but are ignored. Returns int The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback autoscale()[source] Autoscale the scalar limits on the norm instance using the current array autoscale_None()[source] Autoscale the scalar limits on the norm instance using the current array, changing only limits that are None property axes The Axes instance the artist resides in, or None. property callbacksSM[source] changed()[source] Call this whenever the mappable is changed to notify all the callbackSM listeners to the 'changed' signal. colorbar The last colorbar associated with this ScalarMappable. May be None. contains(mouseevent)[source] Test whether the mouse event occurred in the collection. Returns bool, dict(ind=itemlist), where every item in itemlist contains the event. convert_xunits(x)[source] Convert x using the unit type of the xaxis. If the artist is not contained in an Axes or if the xaxis does not have units, x itself is returned. convert_yunits(y)[source] Convert y using the unit type of the yaxis. If the artist is not contained in an Axes or if the yaxis does not have units, y itself is returned. draw(renderer)[source] Draw the Artist (and its children) using the given renderer.
This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters rendererRendererBase subclass. Notes This method is overridden in the Artist subclasses. findobj(match=None, include_self=True)[source] Find artist objects. Recursively find all Artist instances contained in the artist. Parameters match A filter criterion for the matches. This can be None: Return all objects contained in artist. A function with signature def match(artist: Artist) -> bool. The result will only contain artists for which the function returns True. A class instance: e.g., Line2D. The result will only contain artists of this class or its subclasses (isinstance check). include_selfbool Include self in the list to be checked for a match. Returns list of Artist format_cursor_data(data)[source] Return a string representation of data. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets, unless the artist has an associated colorbar, in which case scalar values are formatted using the colorbar's formatter. See also get_cursor_data get_agg_filter()[source] Return filter function to be used for agg filter. get_alpha()[source] Return the alpha value used for blending - not supported on all backends. get_animated()[source] Return whether the artist is animated. get_array()[source] Return the array of values, that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the array. get_capstyle()[source] get_children()[source] Return a list of the child Artists of this Artist. get_clim()[source] Return the values (min, max) that are mapped to the colormap limits. get_clip_box()[source] Return the clipbox. get_clip_on()[source] Return whether the artist uses clipping. 
get_clip_path()[source] Return the clip path. get_cmap()[source] Return the Colormap instance. get_cursor_data(event)[source] Return the cursor data for a given event. Note This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. Cursor data can be used by Artists to provide additional context information for a given event. The default implementation just returns None. Subclasses can override the method and return arbitrary data. However, when doing so, they must ensure that format_cursor_data can convert the data to a string representation. The only current use case is displaying the z-value of an AxesImage in the status bar of a plot window, while moving the mouse. Parameters eventmatplotlib.backend_bases.MouseEvent See also format_cursor_data get_dashes()[source] Alias for get_linestyle. get_datalim(transData)[source] get_ec()[source] Alias for get_edgecolor. get_edgecolor()[source] get_edgecolors()[source] Alias for get_edgecolor. get_facecolor()[source] get_facecolors()[source] Alias for get_facecolor. get_fc()[source] Alias for get_facecolor. get_figure()[source] Return the Figure instance the artist belongs to. get_fill()[source] Return whether face is colored. get_gid()[source] Return the group id. get_hatch()[source] Return the current hatching pattern. get_in_layout()[source] Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). get_joinstyle()[source] get_label()[source] Return the label used for this artist in the legend. get_linestyle()[source] get_linestyles()[source] Alias for get_linestyle. get_linewidth()[source] get_linewidths()[source] Alias for get_linewidth. get_ls()[source] Alias for get_linestyle. get_lw()[source] Alias for get_linewidth. get_offset_transform()[source] Return the Transform instance used by this artist offset. 
get_offsets()[source] Return the offsets for the collection. get_path_effects()[source] get_paths()[source] get_picker()[source] Return the picking behavior of the artist. The possible values are described in set_picker. See also set_picker, pickable, pick get_pickradius()[source] get_rasterized()[source] Return whether the artist is to be rasterized. get_sizes()[source] Return the sizes ('areas') of the elements in the collection. Returns array The 'area' of each element. get_sketch_params()[source] Return the sketch parameters for the artist. Returns tuple or None A 3-tuple with the following elements: scale: The amplitude of the wiggle perpendicular to the source line. length: The length of the wiggle along the line. randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set. get_snap()[source] Return the snap setting. See set_snap for details. get_tightbbox(renderer)[source] Like Artist.get_window_extent, but includes any clipping. Parameters rendererRendererBase subclass renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer()) Returns Bbox The enclosing bounding box (in figure pixel coordinates). get_transform()[source] Return the Transform instance used by this artist. get_transformed_clip_path_and_affine()[source] Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation. get_transforms()[source] get_url()[source] Return the url. get_urls()[source] Return a list of URLs, one for each element of the collection. The list contains None for elements without a URL. See Hyperlinks for an example. get_visible()[source] Return the visibility. get_window_extent(renderer)[source] Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0.
Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. get_zorder()[source] Return the artist's zorder. have_units()[source] Return whether units are set on any axis. is_transform_set()[source] Return whether the Artist has an explicitly set transform. This is True after set_transform has been called. legend_elements(prop='colors', num='auto', fmt=None, func=<function PathCollection.<lambda>>, **kwargs)[source] Create legend handles and labels for a PathCollection. Each legend handle is a Line2D representing the Path that was drawn, and each label is a string describing what each Path represents. This is useful for obtaining a legend for a scatter plot; e.g.: scatter = plt.scatter([1, 2, 3], [4, 5, 6], c=[7, 2, 3]) plt.legend(*scatter.legend_elements()) creates three legend elements, one for each color with the numerical values passed to c as the labels. Also see the Automated legend creation example. Parameters prop{"colors", "sizes"}, default: "colors" If "colors", the legend handles will show the different colors of the collection. If "sizes", the legend will show the different sizes. To set both, use kwargs to directly edit the Line2D properties. numint, None, "auto" (default), array-like, or Locator Target number of elements to create. If None, use all unique elements of the mappable array. If an integer, target to use num elements in the normed range. If "auto", try to determine which option better suits the nature of the data. The number of created elements may slightly deviate from num due to a Locator being used to find useful locations. If a list or array, use exactly those elements for the legend.
Finally, a Locator can be provided. fmtstr, Formatter, or None (default) The format or formatter to use for the labels. If a string must be a valid input for a StrMethodFormatter. If None (the default), use a ScalarFormatter. funcfunction, default: lambda x: x Function to calculate the labels. Often the size (or color) argument to scatter will have been pre-processed by the user using a function s = f(x) to make the markers visible; e.g. size = np.log10(x). Providing the inverse of this function here allows that pre-processing to be inverted, so that the legend labels have the correct values; e.g. func = lambda x: 10**x. **kwargs Allowed keyword arguments are color and size. E.g. it may be useful to set the color of the markers if prop="sizes" is used; similarly to set the size of the markers if prop="colors" is used. Any further parameters are passed onto the Line2D instance. This may be useful to e.g. specify a different markeredgecolor or alpha for the legend handles. Returns handleslist of Line2D Visual representation of each element of the legend. labelslist of str The string labels for elements of the legend. propertymouseover If this property is set to True, the artist will be queried for custom context information when the mouse cursor moves over it. See also get_cursor_data(), ToolCursorPosition and NavigationToolbar2. propertynorm pchanged()[source] Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback remove_callback pick(mouseevent)[source] Process a pick event. Each child artist will fire a pick event if mouseevent is over the artist and the artist has picker set. See also set_picker, get_picker, pickable pickable()[source] Return whether the artist is pickable. See also set_picker, get_picker, pick properties()[source] Return a dictionary of all the properties of the artist. remove()[source] Remove the artist from the figure if possible. 
The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry. remove_callback(oid)[source] Remove a callback based on its observer id. See also add_callback set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, array=<UNSET>, capstyle=<UNSET>, clim=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, cmap=<UNSET>, color=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, gid=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, norm=<UNSET>, offset_transform=<UNSET>, offsets=<UNSET>, path_effects=<UNSET>, paths=<UNSET>, picker=<UNSET>, pickradius=<UNSET>, rasterized=<UNSET>, sizes=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, urls=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. 
Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha array-like or scalar or None animated bool antialiased or aa or antialiaseds bool or list of bools array array-like or None capstyle CapStyle or {'butt', 'projecting', 'round'} clim (vmin: float, vmax: float) clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None cmap Colormap or str or None color color or list of rgba tuples edgecolor or ec or edgecolors color or list of colors or 'face' facecolor or facecolors or fc color or list of colors figure Figure gid str hatch {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} in_layout bool joinstyle JoinStyle or {'miter', 'round', 'bevel'} label object linestyle or dashes or linestyles or ls str or tuple or list thereof linewidth or linewidths or lw float or list of floats norm Normalize or None offset_transform Transform offsets (N, 2) or (2,) array-like path_effects AbstractPathEffect paths unknown picker None or bool or float or callable pickradius float rasterized bool sizes ndarray or None sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str urls list of str or None visible bool zorder float set_aa(aa)[source] Alias for set_antialiased. set_agg_filter(filter_func)[source] Set the agg filter. Parameters filter_funccallable A filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array. set_alpha(alpha)[source] Set the alpha value used for blending - not supported on all backends. Parameters alphaarray-like or scalar or None All values must be within the 0-1 range, inclusive. Masked values and nans are not supported. set_animated(b)[source] Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. 
You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters bbool set_antialiased(aa)[source] Set the antialiasing state for rendering. Parameters aabool or list of bools set_antialiaseds(aa)[source] Alias for set_antialiased. set_array(A)[source] Set the value array from array-like A. Parameters Aarray-like or None The values that are mapped to colors. The base class ScalarMappable does not make any assumptions on the dimensionality and shape of the value array A. set_capstyle(cs)[source] Set the CapStyle for the collection (for all its elements). Parameters csCapStyle or {'butt', 'projecting', 'round'} set_clim(vmin=None, vmax=None)[source] Set the norm limits for image scaling. Parameters vmin, vmaxfloat The limits. The limits may also be passed as a tuple (vmin, vmax) as a single positional argument. set_clip_box(clipbox)[source] Set the artist's clip Bbox. Parameters clipboxBbox set_clip_on(b)[source] Set whether the artist uses clipping. When False, artists will be visible outside of the axes, which can lead to unexpected results. Parameters bbool set_clip_path(path, transform=None)[source] Set the artist's clip path. Parameters pathPatch or Path or TransformedPath or None The clip path. If given a Path, transform must be provided as well. If None, a previously set clip path is removed. transformTransform, optional Only used if path is a Path, in which case the given Path is converted to a TransformedPath using transform. Notes For efficiency, if path is a Rectangle this method will set the clipping box to the corresponding rectangle and set the clipping path to None. For technical reasons (support of set), a tuple (path, transform) is also accepted as a single positional parameter. set_cmap(cmap)[source] Set the colormap for luminance data.
Parameters cmapColormap or str or None set_color(c)[source] Set both the edgecolor and the facecolor. Parameters ccolor or list of rgba tuples See also Collection.set_facecolor, Collection.set_edgecolor For setting the edge or face color individually. set_dashes(ls)[source] Alias for set_linestyle. set_ec(c)[source] Alias for set_edgecolor. set_edgecolor(c)[source] Set the edgecolor(s) of the collection. Parameters ccolor or list of colors or 'face' The collection edgecolor(s). If a sequence, the patches cycle through it. If 'face', match the facecolor. set_edgecolors(c)[source] Alias for set_edgecolor. set_facecolor(c)[source] Set the facecolor(s) of the collection. c can be a color (all patches have same color), or a sequence of colors; if it is a sequence the patches will cycle through the sequence. If c is 'none', the patch will not be filled. Parameters ccolor or list of colors set_facecolors(c)[source] Alias for set_facecolor. set_fc(c)[source] Alias for set_facecolor. set_figure(fig)[source] Set the Figure instance the artist belongs to. Parameters figFigure set_gid(gid)[source] Set the (group) id for the artist. Parameters gidstr set_hatch(hatch)[source] Set the hatching pattern hatch can be one of: / - diagonal hatching \ - back diagonal | - vertical - - horizontal + - crossed x - crossed diagonal o - small circle O - large circle . - dots * - stars Letters can be combined, in which case all the specified hatchings are done. If same letter repeats, it increases the density of hatching of that pattern. Hatching is supported in the PostScript, PDF, SVG and Agg backends only. Unlike other properties such as linewidth and colors, hatching can only be specified for the collection as a whole, not separately for each member. Parameters hatch{'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'} set_in_layout(in_layout)[source] Set if artist is to be included in layout calculations, E.g. 
Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters in_layoutbool set_joinstyle(js)[source] Set the JoinStyle for the collection (for all its elements). Parameters jsJoinStyle or {'miter', 'round', 'bevel'} set_label(s)[source] Set a label that will be displayed in the legend. Parameters sobject s will be converted to a string by calling str. set_linestyle(ls)[source] Set the linestyle(s) for the collection. linestyle description '-' or 'solid' solid line '--' or 'dashed' dashed line '-.' or 'dashdot' dash-dotted line ':' or 'dotted' dotted line Alternatively a dash tuple of the following form can be provided: (offset, onoffseq), where onoffseq is an even length tuple of on and off ink in points. Parameters lsstr or tuple or list thereof Valid values for individual linestyles include {'-', '--', '-.', ':', '', (offset, on-off-seq)}. See Line2D.set_linestyle for a complete description. set_linestyles(ls)[source] Alias for set_linestyle. set_linewidth(lw)[source] Set the linewidth(s) for the collection. lw can be a scalar or a sequence; if it is a sequence the patches will cycle through the sequence Parameters lwfloat or list of floats set_linewidths(lw)[source] Alias for set_linewidth. set_ls(ls)[source] Alias for set_linestyle. set_lw(lw)[source] Alias for set_linewidth. set_norm(norm)[source] Set the normalization instance. Parameters normNormalize or None Notes If there are any colorbars using the mappable for this norm, setting the norm of the mappable will reset the norm, locator, and formatters on the colorbar to default. set_offset_transform(transOffset)[source] Set the artist offset transform. Parameters transOffsetTransform set_offsets(offsets)[source] Set the offsets for the collection. Parameters offsets(N, 2) or (2,) array-like set_path_effects(path_effects)[source] Set the path effects. 
Parameters path_effectsAbstractPathEffect set_paths(paths)[source] set_picker(picker)[source] Define the picking behavior of the artist. Parameters pickerNone or bool or float or callable This can be one of the following: None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent) to determine the hit test. if the mouse event is over the artist, return hit=True and props is a dictionary of properties you want added to the PickEvent attributes. set_pickradius(pr)[source] Set the pick radius used for containment tests. Parameters prfloat Pick radius, in points. set_rasterized(rasterized)[source] Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters rasterizedbool set_sizes(sizes, dpi=72.0)[source] Set the sizes of each member of the collection. Parameters sizesndarray or None The size to set for each element of the collection. The value is the 'area' of the element. dpifloat, default: 72 The dpi of the canvas. set_sketch_params(scale=None, length=None, randomness=None)[source] Set the sketch parameters. 
Parameters scalefloat, optional The amplitude of the wiggle perpendicular to the source line, in pixels. If scale is None, or not provided, no sketch filter will be provided. lengthfloat, optional The length of the wiggle along the line, in pixels (default 128.0) randomnessfloat, optional The scale factor by which the length is shrunken or expanded (default 16.0) The PGF backend uses this argument as an RNG seed and not as described above. Using the same seed yields the same random shape. set_snap(snap)[source] Set the snapping behavior. Snapping aligns positions with the pixel grid, which results in clearer images. For example, if a black line of 1px width was defined at a position in between two pixels, the resulting image would contain the interpolated value of that line in the pixel grid, which would be a grey value on both adjacent pixel positions. In contrast, snapping will move the line to the nearest integer pixel value, so that the resulting image will really contain a 1px wide black line. Snapping is currently only supported by the Agg and MacOSX backends. Parameters snapbool or None Possible values: True: Snap vertices to the nearest pixel center. False: Do not modify vertex positions. None: (auto) If the path contains only rectilinear line segments, round to the nearest pixel center. set_transform(t)[source] Set the artist transform. Parameters tTransform set_url(url)[source] Set the url for the artist. Parameters urlstr set_urls(urls)[source] Parameters urlslist of str or None Notes URLs are currently only implemented by the SVG backend. They are ignored by all other backends. set_visible(b)[source] Set the artist's visibility. Parameters bbool set_zorder(level)[source] Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters levelfloat propertystale Whether the artist is 'stale' and needs to be re-drawn for the output to match the internal state of the artist. 
propertysticky_edges x and y sticky edge lists for autoscaling. When performing autoscaling, if a data limit coincides with a value in the corresponding sticky_edges list, then no margin will be added--the view limit "sticks" to the edge. A typical use case is histograms, where one usually expects no margin on the bottom edge (0) of the histogram. Moreover, margin expansion "bumps" against sticky edges and cannot cross them. For example, if the upper data limit is 1.0, the upper view limit computed by simple margin application is 1.2, but there is a sticky edge at 1.1, then the actual upper view limit will be 1.1. This attribute cannot be assigned to; however, the x and y lists can be modified in place as needed. Examples >>> artist.sticky_edges.x[:] = (xmin, xmax) >>> artist.sticky_edges.y[:] = (ymin, ymax) to_rgba(x, alpha=None, bytes=False, norm=True)[source] Return a normalized rgba array corresponding to x. In the normal case, x is a 1D or 2D sequence of scalars, and the corresponding ndarray of rgba values will be returned, based on the norm and colormap set for this ScalarMappable. There is one special case, for handling images that are already rgb or rgba, such as might have been read from an image file. If x is an ndarray with 3 dimensions, and the last dimension is either 3 or 4, then it will be treated as an rgb or rgba array, and no mapping will be done. The array can be uint8, or it can be floating point with values in the 0-1 range; otherwise a ValueError will be raised. If it is a masked array, the mask will be ignored. If the last dimension is 3, the alpha kwarg (defaulting to 1) will be used to fill in the transparency. If the last dimension is 4, the alpha kwarg is ignored; it does not replace the pre-existing alpha. A ValueError will be raised if the third dimension is other than 3 or 4. 
In either case, if bytes is False (default), the rgba array will be floats in the 0-1 range; if it is True, the returned rgba array will be uint8 in the 0 to 255 range. If norm is False, no normalization of the input data is performed, and it is assumed to be in the range (0-1). update(props)[source] Update this artist's properties from the dict props. Parameters propsdict update_from(other)[source] Copy properties from other to self. update_scalarmappable()[source] Update colors from the scalar mappable array, if any. Assign colors to edges and faces based on the array and/or colors that were directly set, as appropriate. zorder=0
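The setters documented above can be exercised on a concrete collection; a minimal sketch (the segment and property values are arbitrary examples, and no figure or display is required):

```python
from matplotlib.collections import LineCollection

# Build a small collection of two line segments and apply a few of the
# setters described above.
segments = [[(0, 0), (1, 1)], [(0, 1), (1, 0)]]
lc = LineCollection(segments)
lc.set_color("red")      # sets both the edgecolor and the facecolor
lc.set_linewidth(2)      # a scalar is applied to every member
lc.set_linestyle("--")   # dashed lines for the whole collection
lc.set_zorder(3)         # drawn above artists with a lower zorder
```

Per-member variation works the same way: passing a sequence (e.g. `lc.set_linewidth([1, 3])`) makes the members cycle through it.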
doc_3224
tf.metrics.PrecisionAtRecall Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.PrecisionAtRecall tf.keras.metrics.PrecisionAtRecall( recall, num_thresholds=200, name=None, dtype=None ) This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives that are used to compute the precision at the given recall. The threshold for the given recall value is computed and used to evaluate the corresponding precision. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values. Args recall A scalar value in range [0, 1]. num_thresholds (Optional) Defaults to 200. The number of thresholds to use for matching the given recall. name (Optional) string name of the metric instance. dtype (Optional) data type of the metric result. Standalone usage: m = tf.keras.metrics.PrecisionAtRecall(0.5) m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8]) m.result().numpy() 0.5 m.reset_states() m.update_state([0, 0, 0, 1, 1], [0, 0.3, 0.8, 0.3, 0.8], sample_weight=[2, 2, 2, 1, 1]) m.result().numpy() 0.33333333 Usage with compile() API: model.compile( optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.PrecisionAtRecall(recall=0.8)]) Methods reset_states View source reset_states() Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source result() Computes and returns the metric value tensor. Result computation is an idempotent operation that simply calculates the metric value using the state variables. update_state View source update_state( y_true, y_pred, sample_weight=None ) Accumulates confusion matrix statistics. Args y_true The ground truth values. y_pred The predicted values. sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true. Returns Update op.
doc_3225
Set the path effects. Parameters path_effectsAbstractPathEffect
doc_3226
Representation of the object, returns app_label.object_name, e.g. 'polls.Question'.
doc_3227
rotates the vector by a given angle in radians in place. rotate_ip_rad(angle) -> None Rotates the vector counterclockwise by the given angle in radians. The length of the vector is not changed. New in pygame 2.0.0.
doc_3228
See Migration guide for more details. tf.compat.v1.raw_ops.ResourceSparseApplyRMSProp tf.raw_ops.ResourceSparseApplyRMSProp( var, ms, mom, lr, rho, momentum, epsilon, grad, indices, use_locking=False, name=None ) Note that in dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero. mean_square = decay * mean_square + (1-decay) * gradient ** 2 Delta = learning_rate * gradient / sqrt(mean_square + epsilon) ms <- rho * ms{t-1} + (1-rho) * grad * grad mom <- momentum * mom{t-1} + lr * grad / sqrt(ms + epsilon) var <- var - mom Args var A Tensor of type resource. Should be from a Variable(). ms A Tensor of type resource. Should be from a Variable(). mom A Tensor of type resource. Should be from a Variable(). lr A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Scaling factor. Must be a scalar. rho A Tensor. Must have the same type as lr. Decay rate. Must be a scalar. momentum A Tensor. Must have the same type as lr. epsilon A Tensor. Must have the same type as lr. Ridge term. Must be a scalar. grad A Tensor. Must have the same type as lr. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var, ms and mom. use_locking An optional bool. Defaults to False. If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. name A name for the operation (optional). Returns The created Operation.
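The update rules above can be sketched in plain NumPy. This is an illustration of the math only, not the TF kernel, and the helper name is hypothetical; as the docs note, only the rows named by `indices` are touched:

```python
import numpy as np

def sparse_rmsprop_update(var, ms, mom, lr, rho, momentum, epsilon,
                          grad, indices):
    # Apply the per-row RMSProp rules from the documentation above:
    #   ms  <- rho * ms + (1 - rho) * grad**2
    #   mom <- momentum * mom + lr * grad / sqrt(ms + epsilon)
    #   var <- var - mom
    for g, i in zip(grad, indices):
        ms[i] = rho * ms[i] + (1.0 - rho) * g * g
        mom[i] = momentum * mom[i] + lr * g / np.sqrt(ms[i] + epsilon)
        var[i] = var[i] - mom[i]
    return var, ms, mom
```

Rows not listed in `indices` keep their accumulators unchanged, which is exactly the sparse/dense difference the note above describes.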
doc_3229
Return the minimum along a given axis. Refer to numpy.amin for full documentation. See also numpy.amin equivalent function
doc_3230
Value used to identify the event. The interpretation depends on the filter but it’s usually the file descriptor. In the constructor ident can either be an int or an object with a fileno() method. kevent stores the integer internally.
doc_3231
Represents a Range header. All methods support only bytes as the unit. Stores a list of ranges if given, but the methods only work if exactly one range is provided. Raises ValueError – If the ranges provided are invalid. Changelog Changed in version 0.15: The ranges passed in are validated. New in version 0.7. make_content_range(length) Creates a ContentRange object from the current range and given content length. range_for_length(length) If the range is for bytes, the length is not None, and there is exactly one satisfiable range, it returns a (start, stop) tuple; otherwise None. ranges A list of (begin, end) tuples for the range header provided. The ranges are non-inclusive. to_content_range_header(length) Converts the object into a Content-Range HTTP header, based on the given length. to_header() Converts the object back into an HTTP header. units The units of this range. Usually “bytes”.
doc_3232
Scalar method identical to the corresponding array attribute. Please see ndarray.tostring.
doc_3233
Return whether units are set on any axis.
doc_3234
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
doc_3235
Changes the default filesystem encoding and errors mode to ‘mbcs’ and ‘replace’ respectively, for consistency with versions of Python prior to 3.6. This is equivalent to defining the PYTHONLEGACYWINDOWSFSENCODING environment variable before launching Python. Availability: Windows. New in version 3.6: See PEP 529 for more details.
doc_3236
A convenient alias for None, useful for indexing arrays. Examples >>> newaxis is None True >>> x = np.arange(3) >>> x array([0, 1, 2]) >>> x[:, newaxis] array([[0], [1], [2]]) >>> x[:, newaxis, newaxis] array([[[0]], [[1]], [[2]]]) >>> x[:, newaxis] * x array([[0, 0, 0], [0, 1, 2], [0, 2, 4]]) Outer product, same as outer(x, y): >>> y = np.arange(3, 6) >>> x[:, newaxis] * y array([[ 0, 0, 0], [ 3, 4, 5], [ 6, 8, 10]]) x[newaxis, :] is equivalent to x[newaxis] and x[None]: >>> x[newaxis, :].shape (1, 3) >>> x[newaxis].shape (1, 3) >>> x[None].shape (1, 3) >>> x[:, newaxis].shape (3, 1)
doc_3237
class sklearn.gaussian_process.kernels.PairwiseKernel(gamma=1.0, gamma_bounds=(1e-05, 100000.0), metric='linear', pairwise_kernels_kwargs=None) [source] Wrapper for kernels in sklearn.metrics.pairwise. A thin wrapper around the functionality of the kernels in sklearn.metrics.pairwise. Note: Evaluation of eval_gradient is not analytic but numeric and all kernels support only isotropic distances. The parameter gamma is considered to be a hyperparameter and may be optimized. The other kernel parameters are set directly at initialization and are kept fixed. New in version 0.18. Parameters gammafloat, default=1.0 Parameter gamma of the pairwise kernel specified by metric. It should be positive. gamma_boundspair of floats >= 0 or “fixed”, default=(1e-5, 1e5) The lower and upper bound on ‘gamma’. If set to “fixed”, ‘gamma’ cannot be changed during hyperparameter tuning. metric{“linear”, “additive_chi2”, “chi2”, “poly”, “polynomial”, “rbf”, “laplacian”, “sigmoid”, “cosine”} or callable, default=”linear” The metric to use when calculating kernel between instances in a feature array. If metric is a string, it must be one of the metrics in pairwise.PAIRWISE_KERNEL_FUNCTIONS. If metric is “precomputed”, X is assumed to be a kernel matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. pairwise_kernels_kwargsdict, default=None All entries of this dict (if any) are passed as keyword arguments to the pairwise kernel function. Attributes bounds Returns the log-transformed bounds on the theta. hyperparameter_gamma hyperparameters Returns a list of all hyperparameter specifications. n_dims Returns the number of non-fixed hyperparameters of the kernel. requires_vector_input Returns whether the kernel is defined on fixed-length feature vectors or generic objects. 
theta Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import load_iris >>> from sklearn.gaussian_process import GaussianProcessClassifier >>> from sklearn.gaussian_process.kernels import PairwiseKernel >>> X, y = load_iris(return_X_y=True) >>> kernel = PairwiseKernel(metric='rbf') >>> gpc = GaussianProcessClassifier(kernel=kernel, ... random_state=0).fit(X, y) >>> gpc.score(X, y) 0.9733... >>> gpc.predict_proba(X[:2,:]) array([[0.8880..., 0.05663..., 0.05532...], [0.8676..., 0.07073..., 0.06165...]]) Methods __call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient. clone_with_theta(theta) Returns a clone of self with given hyperparameters theta. diag(X) Returns the diagonal of the kernel k(X, X). get_params([deep]) Get parameters of this kernel. is_stationary() Returns whether the kernel is stationary. set_params(**params) Set the parameters of this kernel. __call__(X, Y=None, eval_gradient=False) [source] Return the kernel k(X, Y) and optionally its gradient. Parameters Xndarray of shape (n_samples_X, n_features) Left argument of the returned kernel k(X, Y) Yndarray of shape (n_samples_Y, n_features), default=None Right argument of the returned kernel k(X, Y). If None, k(X, X) if evaluated instead. eval_gradientbool, default=False Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns Kndarray of shape (n_samples_X, n_samples_Y) Kernel k(X, Y) K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True. property bounds Returns the log-transformed bounds on the theta. 
Returns boundsndarray of shape (n_dims, 2) The log-transformed bounds on the kernel’s hyperparameters theta clone_with_theta(theta) [source] Returns a clone of self with given hyperparameters theta. Parameters thetandarray of shape (n_dims,) The hyperparameters diag(X) [source] Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters Xndarray of shape (n_samples_X, n_features) Left argument of the returned kernel k(X, Y) Returns K_diagndarray of shape (n_samples_X,) Diagonal of kernel k(X, X) get_params(deep=True) [source] Get parameters of this kernel. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. property hyperparameters Returns a list of all hyperparameter specifications. is_stationary() [source] Returns whether the kernel is stationary. property n_dims Returns the number of non-fixed hyperparameters of the kernel. property requires_vector_input Returns whether the kernel is defined on fixed-length feature vectors or generic objects. Defaults to True for backward compatibility. set_params(**params) [source] Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns self property theta Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns thetandarray of shape (n_dims,) The non-fixed, log-transformed hyperparameters of the kernel
doc_3238
A namespace for a function or method. This class inherits SymbolTable. get_parameters() Return a tuple containing names of parameters to this function. get_locals() Return a tuple containing names of locals in this function. get_globals() Return a tuple containing names of globals in this function. get_nonlocals() Return a tuple containing names of nonlocals in this function. get_frees() Return a tuple containing names of free variables in this function.
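A minimal sketch of these accessors on a toy function (the function and variable names are illustrative):

```python
import symtable

code = """
def f(a, b):
    x = a + b
    return x
"""

# Compile the source into symbol tables; the module table's children
# include one Function table per function defined at that level.
table = symtable.symtable(code, "<example>", "exec")
func = table.get_children()[0]

params = func.get_parameters()  # names of the parameters of f
locs = func.get_locals()        # local names, including the parameters
```

Here `params` is `('a', 'b')` and `locs` contains `'x'` as well, since parameters count as locals in the function's namespace.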
doc_3239
Return the UTC datetime corresponding to the POSIX timestamp, with tzinfo None. (The resulting object is naive.) This may raise OverflowError, if the timestamp is out of the range of values supported by the platform C gmtime() function, and OSError on gmtime() failure. It’s common for this to be restricted to years in 1970 through 2038. To get an aware datetime object, call fromtimestamp(): datetime.fromtimestamp(timestamp, timezone.utc) On the POSIX compliant platforms, it is equivalent to the following expression: datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=timestamp) except the latter formula always supports the full years range: between MINYEAR and MAXYEAR inclusive. Warning Because naive datetime objects are treated by many datetime methods as local times, it is preferred to use aware datetimes to represent times in UTC. As such, the recommended way to create an object representing a specific timestamp in UTC is by calling datetime.fromtimestamp(timestamp, tz=timezone.utc). Changed in version 3.3: Raise OverflowError instead of ValueError if the timestamp is out of the range of values supported by the platform C gmtime() function. Raise OSError instead of ValueError on gmtime() failure.
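A short sketch contrasting the naive result with the recommended aware form (the timestamp value is an arbitrary example):

```python
from datetime import datetime, timezone, timedelta

ts = 1_000_000_000  # a POSIX timestamp: 2001-09-09 01:46:40 UTC

naive = datetime.utcfromtimestamp(ts)                # tzinfo is None
aware = datetime.fromtimestamp(ts, tz=timezone.utc)  # recommended form

# The POSIX equivalence quoted above holds for the naive result:
assert naive == datetime(1970, 1, 1) + timedelta(seconds=ts)
```

Both objects name the same instant; only `aware` carries the UTC offset, which is why the docs recommend it for representing timestamps in UTC.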
doc_3240
Set the offsets for the collection. Parameters offsets(N, 2) or (2,) array-like
doc_3241
Set the grid for the rectangle boundaries, and the data values. Parameters x, y1D array-like, optional Monotonic arrays of length N+1 and M+1, respectively, specifying rectangle boundaries. If not given, will default to range(N + 1) and range(M + 1), respectively. Aarray-like The data to be color-coded. The interpretation depends on the shape: (M, N) ndarray or masked array: values to be colormapped (M, N, 3): RGB array (M, N, 4): RGBA array
doc_3242
Alias for set_edgecolor.
doc_3243
See Migration guide for more details. tf.compat.v1.random.stateless_truncated_normal tf.random.stateless_truncated_normal( shape, seed, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, name=None ) This is a stateless version of tf.random.truncated_normal: if run twice with the same seeds and shapes, it will produce the same pseudorandom numbers. The output is consistent across multiple runs on the same hardware (and between CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU hardware. The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked. Args shape A 1-D integer Tensor or Python array. The shape of the output tensor. seed A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.) mean A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution. stddev A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution, before truncation. dtype The type of the output. name A name for the operation (optional). Returns A tensor of the specified shape filled with random truncated normal values.
doc_3244
Alias for field number 0
doc_3245
Set whether the artist is intended to be used in an animation. If True, the artist is excluded from regular drawing of the figure. You have to call Figure.draw_artist / Axes.draw_artist explicitly on the artist. This approach is used to speed up animations using blitting. See also matplotlib.animation and Faster rendering by using blitting. Parameters bbool
doc_3246
Returns the number of non-fixed hyperparameters of the kernel.
doc_3247
Set both the edgecolor and the facecolor. Parameters ccolor or list of rgba tuples See also Collection.set_facecolor, Collection.set_edgecolor For setting the edge or face color individually.
doc_3248
assertWarnsRegex(warning, regex, *, msg=None) Like assertWarns() but also tests that regex matches on the message of the triggered warning. regex may be a regular expression object or a string containing a regular expression suitable for use by re.search(). Example: self.assertWarnsRegex(DeprecationWarning, r'legacy_function\(\) is deprecated', legacy_function, 'XYZ') or: with self.assertWarnsRegex(RuntimeWarning, 'unsafe frobnicating'): frobnicate('/etc/passwd') New in version 3.2. Changed in version 3.3: Added the msg keyword argument when used as a context manager.
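The two calling styles above can be combined into one self-contained test case; this sketch uses the context-manager form with a warning raised inline:

```python
import unittest
import warnings

class WarnRegexExample(unittest.TestCase):
    def test_warning_message_matches(self):
        # Passes only if a RuntimeWarning is triggered AND its message
        # matches the regex via re.search().
        with self.assertWarnsRegex(RuntimeWarning, "unsafe"):
            warnings.warn("unsafe frobnicating", RuntimeWarning)
```

If the warning is not triggered, or its message does not match the pattern, the test fails rather than errors.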
doc_3249
Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. Closeness is defined as: |input − other| ≤ atol + rtol × |other| where input and other are finite. Where input and/or other are nonfinite they are close if and only if they are equal, with NaNs being considered equal to each other when equal_nan is True. Parameters input (Tensor) – first tensor to compare other (Tensor) – second tensor to compare atol (float, optional) – absolute tolerance. Default: 1e-08 rtol (float, optional) – relative tolerance. Default: 1e-05 equal_nan (bool, optional) – if True, then two NaN s will be considered equal. Default: False Examples: >>> torch.isclose(torch.tensor((1., 2, 3)), torch.tensor((1 + 1e-10, 3, 4))) tensor([ True, False, False]) >>> torch.isclose(torch.tensor((float('inf'), 4)), torch.tensor((float('inf'), 6)), rtol=.5) tensor([True, True])
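The closeness rule can be sketched for scalars in plain Python; this illustrates the definition only, not torch's elementwise implementation:

```python
import math

def scalar_isclose(a, b, rtol=1e-5, atol=1e-8, equal_nan=False):
    # NaNs compare close only when equal_nan is True.
    if math.isnan(a) or math.isnan(b):
        return equal_nan and math.isnan(a) and math.isnan(b)
    # Other nonfinite values are close only when exactly equal.
    if math.isinf(a) or math.isinf(b):
        return a == b
    # Finite case: |a - b| <= atol + rtol * |b|.
    return abs(a - b) <= atol + rtol * abs(b)
```

Note the asymmetry: the tolerance scales with `|other|` (the second argument), so `scalar_isclose(a, b)` and `scalar_isclose(b, a)` can differ for large rtol.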
doc_3250
Return a new sequence of saved/cached frame information.
doc_3251
all() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. any any() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmax argmax() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argmin argmin() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. argsort argsort() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. astype astype() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. byteswap byteswap() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. 
See also the corresponding attribute of the derived class of interest. choose choose() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. clip clip() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. compress compress() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. conj conj() conjugate conjugate() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. copy copy() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. cumprod cumprod() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. 
cumsum cumsum() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. diagonal diagonal() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dump dump() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. dumps dumps() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. fill fill() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. flatten flatten() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. getfield getfield() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. 
See also the corresponding attribute of the derived class of interest. item item() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. itemset itemset() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. max max() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. mean mean() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. min min() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. newbyteorder newbyteorder() newbyteorder(new_order='S') Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The new_order code can be any from the following: 'S' - swap dtype from current to opposite endian '<', 'L'- little endian '>', 'B'- big endian '=', 'N'- native order '|', 'I'- ignore (no change to byte order) Parameters new_order : str, optional Byte order to force; a value from the byte order specifications above. 
The default value ('S') results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of 'B' or 'b' or 'biggish' are valid to specify big-endian. Returns new_dtype : dtype New dtype object with the given change to the byte order. nonzero nonzero() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. prod prod() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ptp ptp() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. put put() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. ravel ravel() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. repeat repeat() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. 
See also the corresponding attribute of the derived class of interest. reshape reshape() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. resize resize() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. round round() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. searchsorted searchsorted() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setfield setfield() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. setflags setflags() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. 
sort sort() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. squeeze squeeze() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. std std() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. sum sum() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. swapaxes swapaxes() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. take take() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tobytes tobytes() tofile tofile() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. 
See also the corresponding attribute of the derived class of interest. tolist tolist() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. tostring tostring() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. trace trace() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. transpose transpose() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. var var() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. view view() Not implemented (virtual attribute) Class generic exists solely to derive numpy scalars from, and possesses, albeit unimplemented, all the attributes of the ndarray class so as to provide a uniform API. See also the corresponding attribute of the derived class of interest. __abs__ __abs__() abs(self) __add__ __add__( value, / ) Return self+value. __and__ __and__( value, / ) Return self&value. 
__bool__ __bool__() self != 0 __eq__ __eq__( value, / ) Return self==value. __floordiv__ __floordiv__( value, / ) Return self//value. __ge__ __ge__( value, / ) Return self>=value. __getitem__ __getitem__( key, / ) Return self[key]. __gt__ __gt__( value, / ) Return self>value. __invert__ __invert__() ~self __le__ __le__( value, / ) Return self<=value. __lt__ __lt__( value, / ) Return self<value. __mod__ __mod__( value, / ) Return self%value. __mul__ __mul__( value, / ) Return self*value. __ne__ __ne__( value, / ) Return self!=value. __neg__ __neg__() -self __or__ __or__( value, / ) Return self|value. __pos__ __pos__() +self __pow__ __pow__( value, mod, / ) Return pow(self, value, mod). __radd__ __radd__( value, / ) Return value+self. __rand__ __rand__( value, / ) Return value&self. __rfloordiv__ __rfloordiv__( value, / ) Return value//self. __rmod__ __rmod__( value, / ) Return value%self. __rmul__ __rmul__( value, / ) Return value*self. __ror__ __ror__( value, / ) Return value|self. __rpow__ __rpow__( value, mod, / ) Return pow(value, self, mod). __rsub__ __rsub__( value, / ) Return value-self. __rtruediv__ __rtruediv__( value, / ) Return value/self. __rxor__ __rxor__( value, / ) Return value^self. __sub__ __sub__( value, / ) Return self-value. __truediv__ __truediv__( value, / ) Return self/value. __xor__ __xor__( value, / ) Return self^value. Class Variables T base data dtype flags flat imag itemsize nbytes ndim real shape size strides
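Among the entries above, newbyteorder is concretely implemented; a minimal sketch of the byte-order codes it accepts, shown on a dtype (the scalar method uses the same codes):

```python
import numpy as np

# 'S' swaps to the opposite endianness; '<' / '>' force little / big endian.
big = np.dtype('>i4')
little = big.newbyteorder('S')      # swap: big-endian -> little-endian
assert little == np.dtype('<i4')

# Forcing an explicit order on an already-little-endian dtype is a no-op:
assert little.newbyteorder('<') == little
```

The '|' / 'I' codes leave the byte order unchanged, which matters for single-byte types where endianness is meaningless.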
doc_3252
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data. The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor. The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively. One can either give a scale_factor or the target output size to calculate the output size. (You cannot give both, as it is ambiguous) Parameters size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], optional) – output spatial sizes scale_factor (float or Tuple[float] or Tuple[float, float] or Tuple[float, float, float], optional) – multiplier for spatial size. Has to match input size if it is a tuple. mode (str, optional) – the upsampling algorithm: one of 'nearest', 'linear', 'bilinear', 'bicubic' and 'trilinear'. Default: 'nearest' align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, and thus preserving the values at those pixels. This only has effect when mode is 'linear', 'bilinear', or 'trilinear'. 
Default: False Shape: Input: (N,C,Win)(N, C, W_{in}) , (N,C,Hin,Win)(N, C, H_{in}, W_{in}) or (N,C,Din,Hin,Win)(N, C, D_{in}, H_{in}, W_{in}) Output: (N,C,Wout)(N, C, W_{out}) , (N,C,Hout,Wout)(N, C, H_{out}, W_{out}) or (N,C,Dout,Hout,Wout)(N, C, D_{out}, H_{out}, W_{out}) , where Dout=⌊Din×scale_factor⌋D_{out} = \left\lfloor D_{in} \times \text{scale\_factor} \right\rfloor Hout=⌊Hin×scale_factor⌋H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor Wout=⌊Win×scale_factor⌋W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor Warning With align_corners = True, the linearly interpolating modes (linear, bilinear, bicubic, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See below for concrete examples on how this affects the outputs. Note If you want downsampling/general resizing, you should use interpolate(). 
Examples: >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2) >>> input tensor([[[[ 1., 2.], [ 3., 4.]]]]) >>> m = nn.Upsample(scale_factor=2, mode='nearest') >>> m(input) tensor([[[[ 1., 1., 2., 2.], [ 1., 1., 2., 2.], [ 3., 3., 4., 4.], [ 3., 3., 4., 4.]]]]) >>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False >>> m(input) tensor([[[[ 1.0000, 1.2500, 1.7500, 2.0000], [ 1.5000, 1.7500, 2.2500, 2.5000], [ 2.5000, 2.7500, 3.2500, 3.5000], [ 3.0000, 3.2500, 3.7500, 4.0000]]]]) >>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) >>> m(input) tensor([[[[ 1.0000, 1.3333, 1.6667, 2.0000], [ 1.6667, 2.0000, 2.3333, 2.6667], [ 2.3333, 2.6667, 3.0000, 3.3333], [ 3.0000, 3.3333, 3.6667, 4.0000]]]]) >>> # Try scaling the same data in a larger tensor >>> >>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3) >>> input_3x3[:, :, :2, :2].copy_(input) tensor([[[[ 1., 2.], [ 3., 4.]]]]) >>> input_3x3 tensor([[[[ 1., 2., 0.], [ 3., 4., 0.], [ 0., 0., 0.]]]]) >>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False >>> # Notice that values in top left corner are the same with the small input (except at boundary) >>> m(input_3x3) tensor([[[[ 1.0000, 1.2500, 1.7500, 1.5000, 0.5000, 0.0000], [ 1.5000, 1.7500, 2.2500, 1.8750, 0.6250, 0.0000], [ 2.5000, 2.7500, 3.2500, 2.6250, 0.8750, 0.0000], [ 2.2500, 2.4375, 2.8125, 2.2500, 0.7500, 0.0000], [ 0.7500, 0.8125, 0.9375, 0.7500, 0.2500, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]]) >>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) >>> # Notice that values in top left corner are now changed >>> m(input_3x3) tensor([[[[ 1.0000, 1.4000, 1.8000, 1.6000, 0.8000, 0.0000], [ 1.8000, 2.2000, 2.6000, 2.2400, 1.1200, 0.0000], [ 2.6000, 3.0000, 3.4000, 2.8800, 1.4400, 0.0000], [ 2.4000, 2.7200, 3.0400, 2.5600, 1.2800, 0.0000], [ 1.2000, 1.3600, 1.5200, 1.2800, 0.6400, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
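For intuition, mode='nearest' with an integer scale_factor simply repeats each input element along the spatial axes. A small NumPy sketch (an illustration of the semantics, not PyTorch's implementation) reproducing the first example above:

```python
import numpy as np

def nearest_upsample_2d(x, scale):
    # x has shape (N, C, H, W); repeat each pixel `scale` times
    # along both spatial axes, as mode='nearest' does.
    return x.repeat(scale, axis=2).repeat(scale, axis=3)

inp = np.arange(1, 5, dtype=np.float32).reshape(1, 1, 2, 2)
out = nearest_upsample_2d(inp, 2)
# out[0, 0] == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```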
doc_3253
See Migration guide for more details. tf.compat.v1.raw_ops.ParseExampleDataset tf.raw_ops.ParseExampleDataset( input_dataset, num_parallel_calls, dense_defaults, sparse_keys, dense_keys, sparse_types, dense_shapes, output_types, output_shapes, sloppy=False, ragged_keys=[], ragged_value_types=[], ragged_split_types=[], name=None ) Args input_dataset A Tensor of type variant. num_parallel_calls A Tensor of type int64. dense_defaults A list of Tensor objects with types from: float32, int64, string. A dict mapping string keys to Tensors. The keys of the dict must match the dense_keys of the feature. sparse_keys A list of strings. A list of string keys in the examples features. The results for these keys will be returned as SparseTensor objects. dense_keys A list of strings. A list of Ndense string Tensors (scalars). The keys expected in the Examples features associated with dense values. sparse_types A list of tf.DTypes from: tf.float32, tf.int64, tf.string. A list of DTypes of the same length as sparse_keys. Only tf.float32 (FloatList), tf.int64 (Int64List), and tf.string (BytesList) are supported. dense_shapes A list of shapes (each a tf.TensorShape or list of ints). List of tuples with the same length as dense_keys. The shape of the data for each dense feature referenced by dense_keys. Required for any input tensors identified by dense_keys. Must be either fully defined, or may contain an unknown first dimension. An unknown first dimension means the feature is treated as having a variable number of blocks, and the output shape along this dimension is considered unknown at graph build time. Padding is applied for minibatch elements smaller than the maximum number of blocks for the given feature along this dimension. output_types A list of tf.DTypes that has length >= 1. The type list for the return values. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. The list of shapes being produced. sloppy An optional bool. 
Defaults to False. ragged_keys An optional list of strings. Defaults to []. ragged_value_types An optional list of tf.DTypes from: tf.float32, tf.int64, tf.string. Defaults to []. ragged_split_types An optional list of tf.DTypes from: tf.int32, tf.int64. Defaults to []. name A name for the operation (optional). Returns A Tensor of type variant.
doc_3254
See Migration guide for more details. tf.compat.v1.raw_ops.ShuffleDatasetV2 tf.raw_ops.ShuffleDatasetV2( input_dataset, buffer_size, seed_generator, output_types, output_shapes, name=None ) Args input_dataset A Tensor of type variant. buffer_size A Tensor of type int64. seed_generator A Tensor of type resource. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
doc_3255
An abstract base class which inherits from ResourceLoader and ExecutionLoader, providing concrete implementations of ResourceLoader.get_data() and ExecutionLoader.get_filename(). The fullname argument is a fully resolved name of the module the loader is to handle. The path argument is the path to the file for the module. New in version 3.3. name The name of the module the loader can handle. path Path to the file of the module. load_module(fullname) Calls super’s load_module(). Deprecated since version 3.4: Use Loader.exec_module() instead. abstractmethod get_filename(fullname) Returns path. abstractmethod get_data(path) Reads path as a binary file and returns the bytes from it.
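A quick illustration using the stdlib's SourceFileLoader, a concrete FileLoader subclass (the module name and file contents here are made up for the example):

```python
import importlib.machinery
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mymod.py")
    with open(path, "w") as f:
        f.write("VALUE = 42\n")

    loader = importlib.machinery.SourceFileLoader("mymod", path)
    # get_filename() returns the `path` the loader was constructed with;
    # get_data() reads that path as a binary file and returns its bytes.
    assert loader.get_filename("mymod") == path
    assert loader.get_data(path) == b"VALUE = 42\n"
```

The name and path attributes documented above are set by the constructor and survive for the lifetime of the loader object.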
doc_3256
See Migration guide for more details. tf.compat.v1.math.reduce_mean tf.compat.v1.reduce_mean( input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None ) Reduces input_tensor along the dimensions given in axis by computing the mean of elements across the dimensions in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each of the entries in axis, which must be unique. If keepdims is true, the reduced dimensions are retained with length 1. If axis is None, all dimensions are reduced, and a tensor with a single element is returned. For example: x = tf.constant([[1., 1.], [2., 2.]]) tf.reduce_mean(x) <tf.Tensor: shape=(), dtype=float32, numpy=1.5> tf.reduce_mean(x, 0) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.5, 1.5], dtype=float32)> tf.reduce_mean(x, 1) <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)> Args input_tensor The tensor to reduce. Should have numeric type. axis The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)). keepdims If true, retains reduced dimensions with length 1. name A name for the operation (optional). reduction_indices The old (deprecated) name for axis. keep_dims Deprecated alias for keepdims. Returns The reduced tensor. Numpy Compatibility Equivalent to np.mean. Please note that np.mean has a dtype parameter that could be used to specify the output type. By default this is dtype=float64. On the other hand, tf.reduce_mean has an aggressive type inference from input_tensor, for example: x = tf.constant([1, 0, 1, 0]) tf.reduce_mean(x) <tf.Tensor: shape=(), dtype=int32, numpy=0> y = tf.constant([1., 0., 1., 0.]) tf.reduce_mean(y) <tf.Tensor: shape=(), dtype=float32, numpy=0.5>
doc_3257
Covariance estimator with shrinkage. Read more in the User Guide. Parameters store_precision : bool, default=True Specify if the estimated precision is stored. assume_centered : bool, default=False If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data will be centered before computation. shrinkage : float, default=0.1 Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Attributes covariance_ : ndarray of shape (n_features, n_features) Estimated covariance matrix. location_ : ndarray of shape (n_features,) Estimated location, i.e. the estimated mean. precision_ : ndarray of shape (n_features, n_features) Estimated pseudo-inverse matrix (stored only if store_precision is True). Notes The regularized covariance is given by: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features Examples >>> import numpy as np >>> from sklearn.covariance import ShrunkCovariance >>> from sklearn.datasets import make_gaussian_quantiles >>> real_cov = np.array([[.8, .3], ... [.3, .4]]) >>> rng = np.random.RandomState(0) >>> X = rng.multivariate_normal(mean=[0, 0], ... cov=real_cov, ... size=500) >>> cov = ShrunkCovariance().fit(X) >>> cov.covariance_ array([[0.7387..., 0.2536...], [0.2536..., 0.4110...]]) >>> cov.location_ array([0.0622..., 0.0193...]) Methods error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators. fit(X[, y]) Fit the shrunk covariance model according to the given training data and parameters. get_params([deep]) Get parameters for this estimator. get_precision() Getter for the precision matrix. mahalanobis(X) Computes the squared Mahalanobis distances of given observations. score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. 
set_params(**params) Set the parameters of this estimator. error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source] Computes the Mean Squared Error between two covariance estimators, in the sense of the Frobenius norm. Parameters comp_cov : array-like of shape (n_features, n_features) The covariance to compare with. norm : {“frobenius”, “spectral”}, default=”frobenius” The type of norm used to compute the error. Available error types: ‘frobenius’ (default): sqrt(tr(A^t.A)); ‘spectral’: sqrt(max(eigenvalues(A^t.A))), where A is the error (comp_cov - self.covariance_). scaling : bool, default=True If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled. squared : bool, default=True Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns result : float The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators. fit(X, y=None) [source] Fit the shrunk covariance model according to the given training data and parameters. Parameters X : array-like of shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. y : Ignored Not used, present for API consistency by convention. Returns self : object get_params(deep=True) [source] Get parameters for this estimator. Parameters deep : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns params : dict Parameter names mapped to their values. get_precision() [source] Getter for the precision matrix. Returns precision_ : array-like of shape (n_features, n_features) The precision matrix associated to the current covariance object. mahalanobis(X) [source] Computes the squared Mahalanobis distances of given observations. 
Parameters X : array-like of shape (n_samples, n_features) The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns dist : ndarray of shape (n_samples,) Squared Mahalanobis distances of the observations. score(X_test, y=None) [source] Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters X_test : array-like of shape (n_samples, n_features) Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering). y : Ignored Not used, present for API consistency by convention. Returns res : float The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **params : dict Estimator parameters. Returns self : estimator instance Estimator instance.
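The regularized estimate given in the Notes can be computed directly; a small NumPy sketch of that formula (independent of scikit-learn, taking an already-estimated empirical covariance as input):

```python
import numpy as np

def shrunk_covariance(emp_cov, shrinkage=0.1):
    # (1 - shrinkage) * cov + shrinkage * mu * I, with mu = trace(cov) / n_features:
    # off-diagonals shrink toward 0, variances toward their mean.
    n_features = emp_cov.shape[0]
    mu = np.trace(emp_cov) / n_features
    return (1.0 - shrinkage) * emp_cov + shrinkage * mu * np.eye(n_features)

cov = np.array([[0.8, 0.3],
                [0.3, 0.4]])
out = shrunk_covariance(cov)
# out == [[0.78, 0.27], [0.27, 0.42]]  (mu = 0.6, shrinkage = 0.1)
```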
doc_3258
See Migration guide for more details. tf.compat.v1.raw_ops.TFRecordDataset tf.raw_ops.TFRecordDataset( filenames, compression_type, buffer_size, name=None ) Args filenames A Tensor of type string. A scalar or vector containing the name(s) of the file(s) to be read. compression_type A Tensor of type string. A scalar containing either (i) the empty string (no compression), (ii) "ZLIB", or (iii) "GZIP". buffer_size A Tensor of type int64. A scalar representing the number of bytes to buffer. A value of 0 means no buffering will be performed. name A name for the operation (optional). Returns A Tensor of type variant.
doc_3259
Convert an image to 16-bit unsigned integer format. Parameters image : ndarray Input image. force_copy : bool, optional Force a copy of the data, irrespective of its current dtype. Returns out : ndarray of uint16 Output image. Notes Negative input values will be clipped. Positive values are scaled between 0 and 65535.
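A rough NumPy sketch of the conversion described in the Notes, for float input in [0, 1] (the real skimage routine handles many input dtypes and its rounding may differ):

```python
import numpy as np

def float_to_uint16(image):
    # Clip negatives as the Notes describe, then scale [0, 1] onto [0, 65535].
    clipped = np.clip(image, 0.0, 1.0)
    return np.round(clipped * 65535).astype(np.uint16)

img = np.array([-0.5, 0.0, 1.0])
out = float_to_uint16(img)   # negatives clip to 0, 1.0 maps to 65535
```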
doc_3260
tf.compat.v1.nn.crelu( features, name=None, axis=-1 ) Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. W. Shang, et al. Args features A Tensor with type float, double, int32, int64, uint8, int16, or int8. name A name for the operation (optional). axis The axis that the output values are concatenated along. Default is -1. Returns A Tensor with the same type as features. References: Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units: Shang et al., 2016 (pdf)
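The concatenation trick is easy to sketch in NumPy (an illustration of the idea, not TensorFlow's implementation):

```python
import numpy as np

def crelu(x, axis=-1):
    # Concatenate relu(x) with relu(-x): for each element exactly one of the
    # two halves is nonzero, and the size along `axis` doubles.
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=axis)

x = np.array([[-1.0, 2.0, -3.0]])
out = crelu(x)   # [[0., 2., 0., 1., 0., 3.]]
```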
doc_3261
Convert x using the unit type of the xaxis. If the artist is not contained in an Axes or if the xaxis does not have units, x itself is returned.
doc_3262
Return the snap setting. See set_snap for details.
doc_3263
See Migration guide for more details. tf.compat.v1.raw_ops.StatelessIf tf.raw_ops.StatelessIf( cond, input, Tout, then_branch, else_branch, output_shapes=[], name=None ) Args cond A Tensor. A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True. This should only be used when the if then/else body functions do not have stateful ops. input A list of Tensor objects. A list of input tensors. Tout A list of tf.DTypes. A list of output types. then_branch A function decorated with @Defun. A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns. else_branch A function decorated with @Defun. A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns. output_shapes An optional list of shapes (each a tf.TensorShape or list of ints). Defaults to []. name A name for the operation (optional). Returns A list of Tensor objects of type Tout.
doc_3264
The mean of all pixel values of the band (excluding the “no data” value).
doc_3265
Duplicate the file descriptor fd (an integer as returned by a file object’s fileno() method) and build a socket object from the result. Address family, socket type and protocol number are as for the socket() function above. The file descriptor should refer to a socket, but this is not checked — subsequent operations on the object may fail if the file descriptor is invalid. This function is rarely needed, but can be used to get or set socket options on a socket passed to a program as standard input or output (such as a server started by the Unix inet daemon). The socket is assumed to be in blocking mode. The newly created socket is non-inheritable. Changed in version 3.4: The returned socket is now non-inheritable.
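A small self-contained demonstration using a socketpair on Unix; the duplicated descriptor refers to the same underlying connection, so data sent through the duplicate arrives at the original peer:

```python
import socket

a, b = socket.socketpair()                 # connected pair of Unix sockets
dup = socket.fromfd(a.fileno(), socket.AF_UNIX, socket.SOCK_STREAM)

dup.sendall(b"ping")                       # write via the duplicate...
data = b.recv(4)                           # ...and read it on the peer

for s in (a, b, dup):
    s.close()
```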
doc_3266
Uses an incremental encoder to iteratively encode the input provided by iterator. This function is a generator. The errors argument (as well as any other keyword argument) is passed through to the incremental encoder. This function requires that the codec accept text str objects to encode. Therefore it does not support bytes-to-bytes encoders such as base64_codec.
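For example, encoding a stream of text chunks lazily to UTF-8:

```python
import codecs

chunks = ["héllo", " ", "wörld"]
# iterencode yields encoded bytes chunk by chunk, reusing one
# incremental encoder across the whole input iterator.
encoded = b"".join(codecs.iterencode(iter(chunks), "utf-8"))
```

Because the encoder is incremental, stateful codecs can carry state across chunk boundaries instead of encoding each chunk independently.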
doc_3267
Roll provided date backward to next offset only if not on offset. Returns TimeStamp Rolled timestamp if not on offset, otherwise unchanged timestamp.
doc_3268
Accept a connection on the bound socket or named pipe of the listener object and return a Connection object. If authentication is attempted and fails, then AuthenticationError is raised.
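A minimal round-trip over localhost, with accept() served from a background thread (the authkey here is an arbitrary example value):

```python
import threading
from multiprocessing.connection import Client, Listener

listener = Listener(("localhost", 0), authkey=b"secret")  # OS picks a free port

def serve():
    conn = listener.accept()       # blocks until a client connects
    conn.send("hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = Client(listener.address, authkey=b"secret")
msg = client.recv()                # "hello"
client.close()
t.join()
listener.close()
```

Mismatched authkeys between Listener and Client would make accept() raise the AuthenticationError mentioned above.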
doc_3269
Apply a function repeatedly over multiple axes. func is called as res = func(a, axis), where axis is the first element of axes. The result res of the function call must have either the same dimensions as a or one less dimension. If res has one less dimension than a, a dimension is inserted before axis. The call to func is then repeated for each axis in axes, with res as the first argument. Parameters func : function This function must take two arguments, func(a, axis). a : array_like Input array. axes : array_like Axes over which func is applied; the elements must be integers. Returns apply_over_axis : ndarray The output array. The number of dimensions is the same as a, but the shape can be different. This depends on whether func changes the shape of its output with respect to its input. See also apply_along_axis Apply a function to 1-D slices of an array along the given axis. Examples >>> a = np.ma.arange(24).reshape(2,3,4) >>> a[:,0,1] = np.ma.masked >>> a[:,1,:] = np.ma.masked >>> a masked_array( data=[[[0, --, 2, 3], [--, --, --, --], [8, 9, 10, 11]], [[12, --, 14, 15], [--, --, --, --], [20, 21, 22, 23]]], mask=[[[False, True, False, False], [ True, True, True, True], [False, False, False, False]], [[False, True, False, False], [ True, True, True, True], [False, False, False, False]]], fill_value=999999) >>> np.ma.apply_over_axes(np.ma.sum, a, [0,2]) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999) Tuple axis arguments to ufuncs are equivalent: >>> np.ma.sum(a, axis=(0,2)).reshape((1,-1,1)) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999)
doc_3270
Calculate the expanding unbiased skewness. Parameters **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also scipy.stats.skew Third moment of a probability density. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.skew Aggregating skew for Series. pandas.DataFrame.skew Aggregating skew for DataFrame. Notes A minimum of three periods is required for the expanding calculation.
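A quick illustration of the three-period minimum noted above:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
result = s.expanding().skew()

# The first two entries are NaN because skew needs at least three
# observations; from index 2 onward a value is produced.
print(result)
```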
doc_3271
operator.__floordiv__(a, b) Return a // b.
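For example, floor division rounds toward negative infinity, not toward zero:

```python
import operator

print(operator.floordiv(7, 2))    # 3
print(operator.floordiv(-7, 2))   # -4 (floored, not truncated to -3)
print(operator.floordiv(7.5, 2))  # 3.0 (floats floor-divide too)
```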
doc_3272
Bases: matplotlib.ticker.Formatter A Formatter which attempts to figure out the best format to use. This is most useful when used with the AutoDateLocator. AutoDateFormatter has a .scale dictionary that maps tick scales (the interval in days between one major tick) to format strings; this dictionary defaults to self.scaled = { DAYS_PER_YEAR: rcParams['date.autoformat.year'], DAYS_PER_MONTH: rcParams['date.autoformat.month'], 1: rcParams['date.autoformat.day'], 1 / HOURS_PER_DAY: rcParams['date.autoformat.hour'], 1 / MINUTES_PER_DAY: rcParams['date.autoformat.minute'], 1 / SEC_PER_DAY: rcParams['date.autoformat.second'], 1 / MUSECONDS_PER_DAY: rcParams['date.autoformat.microsecond'], } The formatter uses the format string corresponding to the lowest key in the dictionary that is greater or equal to the current scale. Dictionary entries can be customized: locator = AutoDateLocator() formatter = AutoDateFormatter(locator) formatter.scaled[1/(24*60)] = '%M:%S' # only show min and sec Custom callables can also be used instead of format strings. The following example shows how to use a custom format function to strip trailing zeros from decimal seconds and adds the date to the first ticklabel: def my_format_function(x, pos=None): x = matplotlib.dates.num2date(x) if pos == 0: fmt = '%D %H:%M:%S.%f' else: fmt = '%H:%M:%S.%f' label = x.strftime(fmt) label = label.rstrip("0") label = label.rstrip(".") return label formatter.scaled[1/(24*60)] = my_format_function Autoformat the date labels. Parameters locatorticker.Locator Locator that this axis is using. tzstr, optional Passed to dates.date2num. defaultfmtstr The default format to use if none of the values in self.scaled are greater than the unit returned by locator._get_unit(). usetexbool, default: rcParams["text.usetex"] (default: False) To enable/disable the use of TeX's math mode for rendering the results of the formatter. 
If any entries in self.scaled are set as functions, then it is up to the customized function to enable or disable TeX's math mode itself.
doc_3273
Validates that the lower bound of the range is not less than the limit_value.
doc_3274
Load data from a text file, with missing values handled as specified. Each line past the first skip_header lines is split at the delimiter character, and characters following the comments character are discarded. Parameters fnamefile, str, pathlib.Path, list of str, generator File, filename, list, or generator to read. If the filename extension is .gz or .bz2, the file is first decompressed. Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines. dtypedtype, optional Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually. commentsstr, optional The character used to indicate the start of a comment. All the characters occurring on a line after a comment are discarded. delimiterstr, int, or sequence, optional The string used to separate values. By default, any consecutive whitespaces act as delimiter. An integer or sequence of integers can also be provided as width(s) of each field. skiprowsint, optional skiprows was removed in numpy 1.10. Please use skip_header instead. skip_headerint, optional The number of lines to skip at the beginning of the file. skip_footerint, optional The number of lines to skip at the end of the file. convertersvariable, optional The set of functions that convert the data of a column to a value. The converters can also be used to provide a default value for missing data: converters = {3: lambda s: float(s or 0)}. missingvariable, optional missing was removed in numpy 1.10. Please use missing_values instead. missing_valuesvariable, optional The set of strings corresponding to missing data. filling_valuesvariable, optional The set of values to be used as default when the data are missing. usecolssequence, optional Which columns to read, with 0 being the first. For example, usecols = (1, 4, 5) will extract the 2nd, 5th and 6th columns. 
names{None, True, str, sequence}, optional If names is True, the field names are read from the first line after the first skip_header lines. This line can optionally be preceded by a comment delimiter. If names is a sequence or a single-string of comma-separated names, the names will be used to define the field names in a structured dtype. If names is None, the names of the dtype fields will be used, if any. excludelistsequence, optional A list of names to exclude. This list is appended to the default list [‘return’,’file’,’print’]. Excluded names are appended with an underscore: for example, file would become file_. deletecharsstr, optional A string combining invalid characters that must be deleted from the names. defaultfmtstr, optional A format used to define default field names, such as “f%i” or “f_%02i”. autostripbool, optional Whether to automatically strip white spaces from the variables. replace_spacechar, optional Character(s) used in replacement of white spaces in the variable names. By default, use a ‘_’. case_sensitive{True, False, ‘upper’, ‘lower’}, optional If True, field names are case sensitive. If False or ‘upper’, field names are converted to upper case. If ‘lower’, field names are converted to lower case. unpackbool, optional If True, the returned array is transposed, so that arguments may be unpacked using x, y, z = genfromtxt(...). When used with a structured data-type, arrays are returned for each field. Default is False. usemaskbool, optional If True, return a masked array. If False, return a regular array. loosebool, optional If True, do not raise errors for invalid values. invalid_raisebool, optional If True, an exception is raised if an inconsistency is detected in the number of columns. If False, a warning is emitted and the offending lines are skipped. max_rowsint, optional The maximum number of rows to read. Must not be used with skip_footer at the same time. If given, the value must be at least 1. Default is to read the entire file. 
New in version 1.10.0. encodingstr, optional Encoding used to decode the inputfile. Does not apply when fname is a file object. The special value ‘bytes’ enables backward compatibility workarounds that ensure that you receive byte arrays when possible and passes latin1 encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is ‘bytes’. New in version 1.14.0. likearray_like Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns outndarray Data read from the text file. If usemask is True, this is a masked array. See also numpy.loadtxt equivalent function when no data is missing. Notes When spaces are used as delimiters, or when no delimiter has been given as input, there should not be any missing data between two fields. When the variables are named (either by a flexible dtype or with names), there must not be any header in the file (else a ValueError exception is raised). Individual values are not stripped of spaces by default. When using a custom converter, make sure the function does remove spaces. References 1 NumPy User Guide, section I/O with NumPy. Examples >>> from io import StringIO >>> import numpy as np Comma delimited file with mixed dtype >>> s = StringIO(u"1,1.3,abcde") >>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), ... ('mystring','S5')], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')]) Using dtype = None >>> _ = s.seek(0) # needed for StringIO example only >>> data = np.genfromtxt(s, dtype=None, ... 
names = ['myint','myfloat','mystring'], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')]) Specifying dtype and names >>> _ = s.seek(0) >>> data = np.genfromtxt(s, dtype="i8,f8,S5", ... names=['myint','myfloat','mystring'], delimiter=",") >>> data array((1, 1.3, b'abcde'), dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')]) An example with fixed-width columns >>> s = StringIO(u"11.3abcde") >>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'], ... delimiter=[1,3,5]) >>> data array((1, 1.3, b'abcde'), dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', 'S5')]) An example to show comments >>> f = StringIO(''' ... text,# of chars ... hello world,11 ... numpy,5''') >>> np.genfromtxt(f, dtype='S12,S12', delimiter=',') array([(b'text', b''), (b'hello world', b'11'), (b'numpy', b'5')], dtype=[('f0', 'S12'), ('f1', 'S12')])
doc_3275
Print a brief description of how the ArgumentParser should be invoked on the command line. If file is None, sys.stdout is assumed.
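A small sketch capturing the usage line in a buffer instead of stdout (the program name here is illustrative):

```python
import argparse
import io

parser = argparse.ArgumentParser(prog="frobnicate")
parser.add_argument("--verbose", action="store_true")

buf = io.StringIO()
parser.print_usage(file=buf)   # writes to buf instead of sys.stdout
print(buf.getvalue())          # usage: frobnicate [-h] [--verbose]
```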
doc_3276
See Migration guide for more details. tf.compat.v1.raw_ops.StringSplitV2 tf.raw_ops.StringSplitV2( input, sep, maxsplit=-1, name=None ) Let N be the size of source (typically N will be the batch size). Split each element of source based on sep and return a SparseTensor containing the split tokens. Empty tokens are ignored. For example, N = 2, source[0] is 'hello world' and source[1] is 'a b c', then the output will be st.indices = [0, 0; 0, 1; 1, 0; 1, 1; 1, 2] st.shape = [2, 3] st.values = ['hello', 'world', 'a', 'b', 'c'] If sep is given, consecutive delimiters are not grouped together and are deemed to delimit empty strings. For example, source of "1<>2<><>3" and sep of "<>" returns ["1", "2", "", "3"]. If sep is None or an empty string, consecutive whitespace is regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Note that the above-mentioned behavior matches Python's str.split. Args input A Tensor of type string. 1-D string Tensor, the strings to split. sep A Tensor of type string. 0-D string Tensor, the delimiter character. maxsplit An optional int. Defaults to -1. An int. If maxsplit > 0, limit of the split of the result. name A name for the operation (optional). Returns A tuple of Tensor objects (indices, values, shape). indices A Tensor of type int64. values A Tensor of type string. shape A Tensor of type int64.
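Since the entry notes this matches Python's str.split, the two splitting regimes can be illustrated in plain Python:

```python
# With an explicit separator, consecutive delimiters delimit empty strings.
assert "1<>2<><>3".split("<>") == ["1", "2", "", "3"]

# With sep=None, runs of whitespace collapse to a single separator and
# leading/trailing whitespace produces no empty tokens.
assert "  hello   world ".split() == ["hello", "world"]
```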
doc_3277
Return the value of the given socket option (see the Unix man page getsockopt(2)). The needed symbolic constants (SO_* etc.) are defined in this module. If buflen is absent, an integer option is assumed and its integer value is returned by the function. If buflen is present, it specifies the maximum length of the buffer used to receive the option in, and this buffer is returned as a bytes object. It is up to the caller to decode the contents of the buffer (see the optional built-in module struct for a way to decode C structures encoded as byte strings).
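A minimal sketch of the integer-option form (no buflen argument):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# Without buflen, the option value comes back as a plain integer.
flag = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(flag)   # nonzero once the option has been set
s.close()
```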
doc_3278
Get location for a sequence of labels. Parameters seq:label, slice, list, mask or a sequence of such You should use one of the above for each level. If a level should not be used, set it to slice(None). Returns numpy.ndarray NumPy array of integers suitable for passing to iloc. See also MultiIndex.get_loc Get location for a label or a tuple of labels. MultiIndex.slice_locs Get slice location given start label(s) and end label(s). Examples >>> mi = pd.MultiIndex.from_arrays([list('abb'), list('def')]) >>> mi.get_locs('b') array([1, 2], dtype=int64) >>> mi.get_locs([slice(None), ['e', 'f']]) array([1, 2], dtype=int64) >>> mi.get_locs([[True, False, True], slice('e', 'f')]) array([2], dtype=int64)
doc_3279
tf.nn.embedding_lookup( params, ids, max_norm=None, name=None ) This function is used to perform parallel lookups on the list of tensors in params. It is a generalization of tf.gather, where params is interpreted as a partitioning of a large embedding tensor. If len(params) > 1, each element id of ids is partitioned between the elements of params according to the "div" partition strategy, which means we assign ids to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]. If the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id. The results of the lookup are concatenated into a dense tensor. The returned tensor has shape shape(ids) + shape(params)[1:]. Args params A single tensor representing the complete embedding tensor, or a list of tensors all of same shape except for the first dimension, representing sharded embedding tensors following "div" partition strategy. ids A Tensor with type int32 or int64 containing the ids to be looked up in params. max_norm If not None, each embedding is clipped if its l2-norm is larger than this value. name A name for the operation (optional). Returns A Tensor with the same type as the tensors in params. For instance, if params is a 5x2 matrix: [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] or a list of matrices: params[0]: [[1, 2], [3, 4]] params[1]: [[5, 6], [7, 8]] params[2]: [[9, 10]] and ids is: [0, 3, 4] The output will be a 3x2 matrix: [[1, 2], [7, 8], [9, 10]] Raises ValueError If params is empty.
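A plain-Python sketch (not TensorFlow's implementation) of the contiguous "div" assignment described above, reproducing the 13-ids-across-5-partitions example:

```python
# The first (num_ids % num_partitions) partitions receive one extra id,
# and ids are assigned in contiguous runs.
def div_partitions(num_ids, num_partitions):
    base, extra = divmod(num_ids, num_partitions)
    partitions, start = [], 0
    for p in range(num_partitions):
        size = base + (1 if p < extra else 0)
        partitions.append(list(range(start, start + size)))
        start += size
    return partitions

print(div_partitions(13, 5))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
```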
doc_3280
Rethinking the Inception Architecture for Computer Vision (CVPR 2016) Functions InceptionV3(...): Instantiates the Inception v3 architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images.
doc_3281
tf.compat.v1.keras.layers.experimental.preprocessing.IntegerLookup( max_values=None, num_oov_indices=1, mask_value=0, oov_value=-1, vocabulary=None, invert=False, **kwargs ) Methods adapt View source adapt( data, reset_state=True ) Fits the state of the preprocessing layer to the dataset. Overrides the default adapt method to apply relevant preprocessing to the inputs before passing to the combiner. Arguments data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array. reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt. This must be True for this layer, which does not support repeated calls to adapt. get_vocabulary View source get_vocabulary() set_vocabulary View source set_vocabulary( vocab ) Sets vocabulary data for this layer with invert=False. This method sets the vocabulary for this layer directly, instead of analyzing a dataset through 'adapt'. It should be used whenever the vocab information is already known. If vocabulary data is already present in the layer, this method will replace it. Arguments vocab An array of string tokens. Raises ValueError If there are too many inputs, the inputs do not match, or input data is missing. vocab_size View source vocab_size()
doc_3282
Returns the previous Node in the linked list of Nodes. Returns The previous Node in the linked list of Nodes.
doc_3283
os.execle(path, arg0, arg1, ..., env) os.execlp(file, arg0, arg1, ...) os.execlpe(file, arg0, arg1, ..., env) os.execv(path, args) os.execve(path, args, env) os.execvp(file, args) os.execvpe(file, args, env) These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions. The current process is replaced immediately. Open file objects and descriptors are not flushed, so if there may be data buffered on these open files, you should flush them using sys.stdout.flush() or os.fsync() before calling an exec* function. The “l” and “v” variants of the exec* functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the execl*() functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the args parameter. In either case, the arguments to the child process should start with the name of the command being run, but this is not enforced. The variants which include a “p” near the end (execlp(), execlpe(), execvp(), and execvpe()) will use the PATH environment variable to locate the program file. When the environment is being replaced (using one of the exec*e variants, discussed in the next paragraph), the new environment is used as the source of the PATH variable. The other variants, execl(), execle(), execv(), and execve(), will not use the PATH variable to locate the executable; path must contain an appropriate absolute or relative path. 
For execle(), execlpe(), execve(), and execvpe() (note that these all end in “e”), the env parameter must be a mapping which is used to define the environment variables for the new process (these are used instead of the current process’ environment); the functions execl(), execlp(), execv(), and execvp() all cause the new process to inherit the environment of the current process. For execve() on some platforms, path may also be specified as an open file descriptor. This functionality may not be supported on your platform; you can check whether or not it is available using os.supports_fd. If it is unavailable, using it will raise a NotImplementedError. Raises an auditing event os.exec with arguments path, args, env. Availability: Unix, Windows. New in version 3.3: Added support for specifying path as an open file descriptor for execve(). Changed in version 3.6: Accepts a path-like object.
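A minimal Unix-only sketch combining fork with execvp; the child searches PATH for the true utility (assumed present on the system):

```python
import os

pid = os.fork()                        # Unix-only
if pid == 0:
    # Child: replace this process image; the "p" variant searches PATH.
    try:
        os.execvp("true", ["true"])
    except OSError:
        os._exit(127)                  # exec failed; leave the child
else:
    # Parent: reap the child and inspect its exit status.
    _, status = os.waitpid(pid, 0)
    exit_ok = os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
    print(exit_ok)
```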
doc_3284
Retrieves the message header plus howmuch lines of the message after the header of message number which. Result is in form (response, ['line', ...], octets). The POP3 TOP command this method uses, unlike the RETR command, doesn’t set the message’s seen flag; unfortunately, TOP is poorly specified in the RFCs and is frequently broken in off-brand servers. Test this method by hand against the POP3 servers you will use before trusting it.
doc_3285
Pass the RunSQL.noop attribute to sql or reverse_sql when you want the operation not to do anything in the given direction. This is especially useful in making the operation reversible.
doc_3286
Add a second x-axis to this Axes. For example if we want to have a second scale for the data plotted on the x-axis. Parameters location{'top', 'bottom', 'left', 'right'} or float The position to put the secondary axis. Strings can be 'top' or 'bottom' for orientation='x' and 'right' or 'left' for orientation='y'. A float indicates the relative position on the parent axes to put the new axes, 0.0 being the bottom (or left) and 1.0 being the top (or right). functions2-tuple of func, or Transform with an inverse If a 2-tuple of functions, the user specifies the transform function and its inverse. i.e. functions=(lambda x: 2 / x, lambda x: 2 / x) would be a reciprocal transform with a factor of 2. The user can also directly supply a subclass of transforms.Transform so long as it has an inverse. See Secondary Axis for examples of making these conversions. Returns axaxes._secondary_axes.SecondaryAxis Other Parameters **kwargsAxes properties. Other miscellaneous axes parameters. Warning This method is experimental as of 3.1, and the API may change. Examples The main axis shows frequency, and the secondary axis shows period.
doc_3287
Create an anchored inset axes by scaling a parent axes. For usage, also see the examples. Parameters parent_axesmatplotlib.axes.Axes Axes to place the inset axes. zoomfloat Scaling factor of the data axes. zoom > 1 will enlarge the coordinates (i.e., "zoomed in"), while zoom < 1 will shrink the coordinates (i.e., "zoomed out"). locstr, default: 'upper right' Location to place the inset axes. Valid locations are 'upper left', 'upper center', 'upper right', 'center left', 'center', 'center right', 'lower left', 'lower center', 'lower right'. For backward compatibility, numeric values are accepted as well. See the parameter loc of Legend for details. bbox_to_anchortuple or matplotlib.transforms.BboxBase, optional Bbox that the inset axes will be anchored to. If None, parent_axes.bbox is used. If a tuple, can be either [left, bottom, width, height], or [left, bottom]. If the kwargs width and/or height are specified in relative units, the 2-tuple [left, bottom] cannot be used. Note that the units of the bounding box are determined through the transform in use. When using bbox_to_anchor it almost always makes sense to also specify a bbox_transform. This might often be the axes transform parent_axes.transAxes. bbox_transformmatplotlib.transforms.Transform, optional Transformation for the bbox that contains the inset axes. If None, a transforms.IdentityTransform is used (i.e. pixel coordinates). This is useful when not providing any argument to bbox_to_anchor. When using bbox_to_anchor it almost always makes sense to also specify a bbox_transform. This might often be the axes transform parent_axes.transAxes. Inversely, when specifying the axes- or figure-transform here, be aware that not specifying bbox_to_anchor will use parent_axes.bbox, the units of which are in display (pixel) coordinates. axes_classmatplotlib.axes.Axes type, default: HostAxes The type of the newly created inset axes.
axes_kwargsdict, optional Keyword arguments to pass to the constructor of the inset axes. Valid arguments include: Property Description adjustable {'box', 'datalim'} agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} animated bool aspect {'auto', 'equal'} or float autoscale_on bool autoscalex_on bool autoscaley_on bool axes_locator Callable[[Axes, Renderer], Bbox] axisbelow bool or 'line' box_aspect float or None clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None facecolor or fc color figure Figure frame_on bool gid str in_layout bool label object navigate bool navigate_mode unknown path_effects AbstractPathEffect picker None or bool or float or callable position [left, bottom, width, height] or Bbox prop_cycle unknown rasterization_zorder float or None rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None title str transform Transform url str visible bool xbound unknown xlabel str xlim (bottom: float, top: float) xmargin float greater than -0.5 xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase xticklabels unknown xticks unknown ybound unknown ylabel str ylim (bottom: float, top: float) ymargin float greater than -0.5 yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase yticklabels unknown yticks unknown zorder float borderpadfloat, default: 0.5 Padding between inset axes and the bbox_to_anchor. The units are axes font size, i.e. for a default font size of 10 points borderpad = 0.5 is equivalent to a padding of 5 points. Returns inset_axesaxes_class Inset axes object created. Examples using mpl_toolkits.axes_grid1.inset_locator.zoomed_inset_axes Adding a colorbar to inset axes Inset Locator Demo2
doc_3288
Half-precision floating-point number type. Character code 'e' Alias on this platform (Linux x86_64) numpy.float16: 16-bit-precision floating-point number type: sign bit, 5 bits exponent, 10 bits mantissa.
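The layout stated above can be inspected with np.finfo:

```python
import numpy as np

# 'e' is the character code for the half-precision type.
assert np.dtype('e') == np.float16

info = np.finfo(np.float16)
print(info.bits)   # 16
print(info.eps)    # 0.000977, i.e. 2**-10 from the 10-bit mantissa
```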
doc_3289
(Only supported on Solaris and derivatives.) Returns a /dev/poll polling object; see section /dev/poll Polling Objects below for the methods supported by devpoll objects. devpoll() objects are linked to the number of file descriptors allowed at the time of instantiation. If your program reduces this value, devpoll() will fail. If your program increases this value, devpoll() may return an incomplete list of active file descriptors. The new file descriptor is non-inheritable. New in version 3.3. Changed in version 3.4: The new file descriptor is now non-inheritable.
doc_3290
When setting cookies, the ‘host prefix’ must not contain a dot (e.g. www.foo.bar.com can’t set a cookie for .bar.com, because www.foo contains a dot).
doc_3291
Detect existing (non-missing) values. Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values. Returns DataFrame Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value. See also DataFrame.notnull Alias of notna. DataFrame.isna Boolean inverse of notna. DataFrame.dropna Omit axes labels with missing values. notna Top-level notna. Examples Show which entries in a DataFrame are not NA. >>> df = pd.DataFrame(dict(age=[5, 6, np.NaN], ... born=[pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... name=['Alfred', 'Batman', ''], ... toy=[None, 'Batmobile', 'Joker'])) >>> df age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker >>> df.notna() age born name toy 0 True False True False 1 True True True True 2 False True True True Show which entries in a Series are not NA. >>> ser = pd.Series([5, 6, np.NaN]) >>> ser 0 5.0 1 6.0 2 NaN dtype: float64 >>> ser.notna() 0 True 1 True 2 False dtype: bool
doc_3292
When you loop over the list of messages in a template, what you get are instances of the Message class. They have only a few attributes: message: The actual text of the message. level: An integer describing the type of the message (see the message levels section above). tags: A string combining all the message’s tags (extra_tags and level_tag) separated by spaces. extra_tags: A string containing custom tags for this message, separated by spaces. It’s empty by default. level_tag: The string representation of the level. By default, it’s the lowercase version of the name of the associated constant, but this can be changed if you need by using the MESSAGE_TAGS setting.
doc_3293
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
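A minimal, hypothetical sketch of how the <component>__<parameter> convention can route values to nested objects; the class and function names here are illustrative, not scikit-learn internals:

```python
class Estimator:
    """Toy stand-in for an estimator with attribute parameters."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def set_nested_params(obj, **params):
    # 'clf__C' walks to obj.clf and sets its C attribute.
    for key, value in params.items():
        *path, attr = key.split("__")
        target = obj
        for name in path:              # descend through nested components
            target = getattr(target, name)
        setattr(target, attr, value)
    return obj

pipe = Estimator(clf=Estimator(C=1.0))
set_nested_params(pipe, clf__C=10.0)
print(pipe.clf.C)   # 10.0
```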
doc_3294
Shift the time index, using the index’s frequency if available. Deprecated since version 1.1.0: Use shift instead. Parameters periods:int Number of periods to move, can be positive or negative. freq:DateOffset, timedelta, or str, default None Increment to use from the tseries module or time rule expressed as a string (e.g. ‘EOM’). axis:{0 or ‘index’, 1 or ‘columns’, None}, default 0 Corresponds to the axis that contains the Index. Returns shifted:Series/DataFrame Notes If freq is not specified then tries to use the freq or inferred_freq attributes of the index. If neither of those attributes exist, a ValueError is thrown.
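A short sketch of the recommended replacement, shift with a freq argument, which moves the index rather than the data:

```python
import pandas as pd

s = pd.Series([1, 2, 3],
              index=pd.date_range("2021-01-01", periods=3, freq="D"))

# With freq given, the index is shifted and the values stay attached
# to their original positions.
shifted = s.shift(periods=1, freq="D")
print(shifted.index[0])   # 2021-01-02
```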
doc_3295
Change shape and size of array in-place. Parameters new_shapetuple of ints, or n ints Shape of resized array. refcheckbool, optional If False, reference count will not be checked. Default is True. Returns None Raises ValueError If a does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the order keyword argument is specified. This behaviour is a bug in NumPy. See also resize Return a new array with the specified shape. Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False. Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) Referencing an array prevents resizing… >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... Unless refcheck is False: >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]])
doc_3296
sklearn.covariance.ledoit_wolf(X, *, assume_centered=False, block_size=1000) [source] Estimates the shrunk Ledoit-Wolf covariance matrix. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Data from which to compute the covariance estimate assume_centeredbool, default=False If True, data will not be centered before computation. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, data will be centered before computation. block_sizeint, default=1000 Size of blocks into which the covariance matrix will be split. This is purely a memory optimization and does not affect results. Returns shrunk_covndarray of shape (n_features, n_features) Shrunk covariance. shrinkagefloat Coefficient in the convex combination used for the computation of the shrunk estimate. Notes The regularized (shrunk) covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features
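The shrinkage formula from the Notes can be checked directly with NumPy; this recomputes only the documented convex combination for a given shrinkage, not the Ledoit-Wolf estimate of the shrinkage coefficient itself:

```python
import numpy as np

def shrink_covariance(cov, shrinkage):
    # (1 - shrinkage) * cov + shrinkage * mu * I, mu = trace(cov) / n_features
    n_features = cov.shape[0]
    mu = np.trace(cov) / n_features
    return (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features)

cov = np.array([[2.0, 0.5], [0.5, 1.0]])
shrunk = shrink_covariance(cov, shrinkage=0.2)
print(shrunk)
# [[1.9 0.4]
#  [0.4 1.1]]
```

Note that the trace is preserved: shrinking pulls the eigenvalues toward their mean mu without changing their sum.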
doc_3297
Compute the median along the specified axis, while ignoring NaNs. Returns the median of the array elements. New in version 1.9.0. Parameters aarray_like Input array or object that can be converted to an array. axis{int, sequence of int, None}, optional Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0. outndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. overwrite_inputbool, optional If True, then allow use of memory of input array a for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If overwrite_input is True and a is not already an ndarray, an error will be raised. keepdimsbool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original a. If this is anything but the default value it will be passed through (in the special case of an empty array) to the mean function of the underlying array. If the array is a sub-class and mean does not have the kwarg keepdims this will raise a RuntimeError. Returns medianndarray A new array holding the result. If the input contains integers or floats smaller than float64, then the output data-type is np.float64. Otherwise, the data-type of the output is the same as that of the input. If out is specified, that array is returned instead. 
See also mean, median, percentile Notes Given a vector V of length N, the median of V is the middle value of a sorted copy of V, V_sorted - i.e., V_sorted[(N-1)/2], when N is odd and the average of the two middle values of V_sorted when N is even. Examples >>> a = np.array([[10.0, 7, 4], [3, 2, 1]]) >>> a[0, 1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.median(a) nan >>> np.nanmedian(a) 3.0 >>> np.nanmedian(a, axis=0) array([6.5, 2. , 2.5]) >>> np.median(a, axis=1) array([nan, 2.]) >>> b = a.copy() >>> np.nanmedian(b, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) >>> b = a.copy() >>> np.nanmedian(b, axis=None, overwrite_input=True) 3.0 >>> assert not np.all(a==b)
doc_3298
Return the line color. See also set_color.
doc_3299
Base exception class used for all specific DOM exceptions. This exception class cannot be directly instantiated.
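A quick demonstration with the stdlib xml.dom module: concrete subclasses can be raised normally, while instantiating the base class raises RuntimeError:

```python
import xml.dom

# The abstract base refuses direct instantiation.
try:
    xml.dom.DOMException()
    base_blocked = False
except RuntimeError:
    base_blocked = True

# Specific DOM exceptions are subclasses of the base.
assert issubclass(xml.dom.IndexSizeErr, xml.dom.DOMException)
print(base_blocked)   # True
```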