doc_26800
List subscribed mailbox names in directory matching pattern. directory defaults to the top level directory and pattern defaults to match any mailbox. Returned data are tuples of message part envelope and data.
doc_26801
See Migration guide for more details. tf.compat.v1.train.Coordinator tf.train.Coordinator( clean_stop_exception_types=None ) This class implements a simple mechanism to coordinate the termination of a set of threads. Usage: # Create a coordinator. coord = Coordinator() # Start a number of threads, passing the coordinator to each of them. ...start thread 1...(coord, ...) ...start thread N...(coord, ...) # Wait for all the threads to terminate. coord.join(threads) Any of the threads can call coord.request_stop() to ask for all the threads to stop. To cooperate with the requests, each thread must check for coord.should_stop() on a regular basis. coord.should_stop() returns True as soon as coord.request_stop() has been called. A typical thread running with a coordinator will do something like: while not coord.should_stop(): ...do some work... Exception handling: A thread can report an exception to the coordinator as part of the request_stop() call. The exception will be re-raised from the coord.join() call. Thread code: try: while not coord.should_stop(): ...do some work... except Exception as e: coord.request_stop(e) Main code: try: ... coord = Coordinator() # Start a number of threads, passing the coordinator to each of them. ...start thread 1...(coord, ...) ...start thread N...(coord, ...) # Wait for all the threads to terminate. coord.join(threads) except Exception as e: ...exception that was passed to coord.request_stop() To simplify the thread implementation, the Coordinator provides a context handler stop_on_exception() that automatically requests a stop if an exception is raised. Using the context handler the thread code above can be written as: with coord.stop_on_exception(): while not coord.should_stop(): ...do some work... Grace period for stopping: After a thread has called coord.request_stop() the other threads have a fixed time to stop, this is called the 'stop grace period' and defaults to 2 minutes. 
If any of the threads is still alive after the grace period expires, coord.join() raises a RuntimeError reporting the laggards. try: ... coord = Coordinator() # Start a number of threads, passing the coordinator to each of them. ...start thread 1...(coord, ...) ...start thread N...(coord, ...) # Wait for all the threads to terminate, give them 10s grace period coord.join(threads, stop_grace_period_secs=10) except RuntimeError: ...one of the threads took more than 10s to stop after request_stop() ...was called. except Exception: ...exception that was passed to coord.request_stop() Args clean_stop_exception_types Optional tuple of Exception types that should cause a clean stop of the coordinator. If an exception of one of these types is reported to request_stop(ex) the coordinator will behave as if request_stop(None) was called. Defaults to (tf.errors.OutOfRangeError,) which is used by input queues to signal the end of input. When feeding training data from a Python iterator it is common to add StopIteration to this list. Attributes joined Methods clear_stop View source clear_stop() Clears the stop flag. After this is called, calls to should_stop() will return False. join View source join( threads=None, stop_grace_period_secs=120, ignore_live_threads=False ) Wait for threads to terminate. This call blocks until a set of threads have terminated. The set of threads is the union of the threads passed in the threads argument and the list of threads that registered with the coordinator by calling Coordinator.register_thread(). After the threads stop, if an exc_info was passed to request_stop, that exception is re-raised. Grace period handling: When request_stop() is called, threads are given 'stop_grace_period_secs' seconds to terminate. If any of them is still alive after that period expires, a RuntimeError is raised. Note that if an exc_info was passed to request_stop() then it is raised instead of that RuntimeError. Args threads List of threading.Threads. 
The started threads to join in addition to the registered threads. stop_grace_period_secs Number of seconds given to threads to stop after request_stop() has been called. ignore_live_threads If False, raises an error if any of the threads are still alive after stop_grace_period_secs. Raises RuntimeError If any thread is still alive after request_stop() is called and the grace period expires. raise_requested_exception View source raise_requested_exception() If an exception has been passed to request_stop, this raises it. register_thread View source register_thread( thread ) Register a thread to join. Args thread A Python thread to join. request_stop View source request_stop( ex=None ) Request that the threads stop. After this is called, calls to should_stop() will return True. Note: If an exception is being passed in, it must be in the context of handling the exception (i.e. try: ... except Exception as ex: ...) and not a newly created one. Args ex Optional Exception, or Python exc_info tuple as returned by sys.exc_info(). If this is the first call to request_stop() the corresponding exception is recorded and re-raised from join(). should_stop View source should_stop() Check if stop was requested. Returns True if a stop was requested. stop_on_exception View source @contextlib.contextmanager stop_on_exception() Context manager to request stop when an Exception is raised. Code that uses a coordinator must catch exceptions and pass them to the request_stop() method to stop the other threads managed by the coordinator. This context handler simplifies the exception handling. Use it as follows: with coord.stop_on_exception(): # Any exception raised in the body of the with # clause is reported to the coordinator before terminating # the execution of the body. ...body... This is completely equivalent to the slightly longer code: try: ...body... except: coord.request_stop(sys.exc_info()) Yields nothing. 
wait_for_stop View source wait_for_stop( timeout=None ) Wait till the Coordinator is told to stop. Args timeout Float. Sleep for up to that many seconds waiting for should_stop() to become True. Returns True if the Coordinator is told stop, False if the timeout expired.
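The cooperative-shutdown pattern the Coordinator implements does not depend on TensorFlow itself. As a hedged illustration, the following is a minimal pure-Python stand-in (the MiniCoordinator class and worker function are hypothetical names, not part of any library) that mimics the core request_stop()/should_stop()/join() semantics described above using threading.Event:

```python
import threading

# Minimal stand-in for the Coordinator semantics described above
# (hypothetical, for illustration only; not the TensorFlow class).
class MiniCoordinator:
    def __init__(self):
        self._stop_event = threading.Event()
        self._exc = None

    def request_stop(self, ex=None):
        if ex is not None and self._exc is None:
            self._exc = ex          # first reported exception wins
        self._stop_event.set()

    def should_stop(self):
        return self._stop_event.is_set()

    def join(self, threads, stop_grace_period_secs=120):
        for t in threads:
            t.join(stop_grace_period_secs)
        if any(t.is_alive() for t in threads):
            raise RuntimeError("some threads did not stop in time")
        if self._exc is not None:
            raise self._exc

def worker(coord, results, n):
    i = 0
    while not coord.should_stop() and i < n:
        results.append(i)
        i += 1
    coord.request_stop()            # first finisher asks everyone to stop

coord = MiniCoordinator()
results = []
threads = [threading.Thread(target=worker, args=(coord, results, 100))
           for _ in range(3)]
for t in threads:
    t.start()
coord.join(threads, stop_grace_period_secs=5)
```

With the real tf.train.Coordinator the worker loop and join call look the same; only the class (and the exception re-raising plumbing) comes from TensorFlow.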
doc_26802
Helper function to normalize kwarg inputs. Parameters kw : dict or None A dict of keyword arguments. None is explicitly supported and treated as an empty dict, to support functions with an optional parameter of the form props=None. alias_mapping : dict or Artist subclass or Artist instance, optional A mapping from a canonical name to a list of aliases, in order of precedence from lowest to highest. If the canonical value is not in the list it is assumed to have the highest priority. If an Artist subclass or instance is passed, use its properties alias mapping. Raises TypeError To match what Python raises if invalid arguments/keyword arguments are passed to a callable.
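The alias-normalization logic can be sketched in a few lines. This is a hypothetical simplification, not the matplotlib implementation: it takes a plain dict alias mapping (canonical name -> list of aliases) and raises TypeError on any conflict instead of applying the precedence rules described above:

```python
# Hypothetical sketch of kwarg alias normalization (not matplotlib's code).
def normalize_kwargs(kw, alias_mapping=None):
    if kw is None:                    # None is treated as an empty dict
        return {}
    alias_mapping = alias_mapping or {}
    # Invert the mapping: alias -> canonical name.
    to_canonical = {alias: canonical
                    for canonical, aliases in alias_mapping.items()
                    for alias in aliases}
    out = {}
    for key, value in kw.items():
        canonical = to_canonical.get(key, key)
        if canonical in out:
            # Mirrors the TypeError described above for conflicting kwargs.
            raise TypeError(f"got multiple values for property {canonical!r}")
        out[canonical] = value
    return out

print(normalize_kwargs({"c": "red", "lw": 2},
                       {"color": ["c"], "linewidth": ["lw"]}))
# {'color': 'red', 'linewidth': 2}
```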
doc_26803
Calculate the expanding correlation. Parameters other:Series or DataFrame, optional If not supplied then will default to self and produce pairwise output. pairwise:bool, default None If False then only matching columns between self and other will be used and the output will be a DataFrame. If True then all pairwise combinations will be calculated and the output will be a MultiIndexed DataFrame in the case of DataFrame inputs. In the case of missing elements, only complete pairwise observations will be used. **kwargs For NumPy compatibility and will not have an effect on the result. Returns Series or DataFrame Return type is the same as the original object with np.float64 dtype. See also cov Similar method to calculate covariance. numpy.corrcoef NumPy Pearson’s correlation calculation. pandas.Series.expanding Calling expanding with Series data. pandas.DataFrame.expanding Calling expanding with DataFrames. pandas.Series.corr Aggregating corr for Series. pandas.DataFrame.corr Aggregating corr for DataFrame. Notes This function uses Pearson’s definition of correlation (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient). When other is not specified, the output will be self correlation (e.g. all 1’s), except for DataFrame inputs with pairwise set to True. Function will return NaN for correlations of equal valued sequences; this is the result of a 0/0 division error. When pairwise is set to False, only matching columns between self and other will be used. When pairwise is set to True, the output will be a MultiIndex DataFrame with the original index on the first level, and the other DataFrame columns on the second level. In the case of missing elements, only complete pairwise observations will be used.
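A quick example of the behavior described above, using two perfectly correlated Series (the variable names are illustrative):

```python
import pandas as pd

s1 = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
s2 = pd.Series([2.0, 4.0, 6.0, 8.0, 10.0])   # perfectly correlated with s1

# With no `other`, the output is the self-correlation (all 1's once
# enough observations have accumulated).
self_corr = s1.expanding().corr()
# Pairwise against another Series.
pair_corr = s1.expanding().corr(s2)
print(pair_corr)
```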
doc_26804
Format a floating-point scalar as a decimal string in scientific notation. Provides control over rounding, trimming and padding. Uses and assumes IEEE unbiased rounding. Uses the “Dragon4” algorithm. Parameters x : python float or numpy floating scalar Value to format. precision : non-negative integer or None, optional Maximum number of digits to print. May be None if unique is True, but must be an integer if unique is False. unique : boolean, optional If True, use a digit-generation strategy which gives the shortest representation which uniquely identifies the floating-point number from other values of the same type, by judicious rounding. If precision is given, fewer digits than necessary can be printed. If min_digits is given, more can be printed, in which case the last digit is rounded with unbiased rounding. If False, digits are generated as if printing an infinite-precision value and stopping after precision digits, rounding the remaining value with unbiased rounding. trim : one of ‘k’, ‘.’, ‘0’, ‘-’, optional Controls post-processing trimming of trailing digits, as follows: ‘k’ : keep trailing zeros, keep decimal point (no trimming) ‘.’ : trim all trailing zeros, leave decimal point ‘0’ : trim all but the zero before the decimal point. Insert the zero if it is missing. ‘-’ : trim trailing zeros and any trailing decimal point. sign : boolean, optional Whether to show the sign for positive values. pad_left : non-negative integer, optional Pad the left side of the string with whitespace until at least that many characters are to the left of the decimal point. exp_digits : non-negative integer, optional Pad the exponent with zeros until it contains at least this many digits. If omitted, the exponent will be at least 2 digits. min_digits : non-negative integer or None, optional Minimum number of digits to print. This only has an effect for unique=True. In that case more digits than necessary to uniquely identify the value may be printed and rounded unbiased. 
New in version 1.21.0. Returns rep : string The string representation of the floating point value. See also format_float_positional Examples >>> np.format_float_scientific(np.float32(np.pi)) '3.1415927e+00' >>> s = np.float32(1.23e24) >>> np.format_float_scientific(s, unique=False, precision=15) '1.230000071797338e+24' >>> np.format_float_scientific(s, exp_digits=4) '1.23e+0024'
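Two more small examples of the precision and trim parameters described above (a sketch; the exact strings assume the default exp_digits of 2):

```python
import numpy as np

# precision caps the digits printed after the mantissa's decimal point.
print(np.format_float_scientific(123.456, precision=2))   # '1.23e+02'

# trim='-' drops trailing zeros and a trailing decimal point.
print(np.format_float_scientific(1.0, unique=False, precision=3, trim='-'))
```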
doc_26805
Registers a template context processor function. Parameters f (Callable[[], Dict[str, Any]]) – Return type Callable[[], Dict[str, Any]]
doc_26806
Checks if all work currently captured by event has completed. Returns A boolean indicating if all work currently captured by event has completed.
doc_26807
On Unix and Windows, return the argument with an initial component of ~ or ~user replaced by that user’s home directory. On Unix, an initial ~ is replaced by the environment variable HOME if it is set; otherwise the current user’s home directory is looked up in the password directory through the built-in module pwd. An initial ~user is looked up directly in the password directory. On Windows, USERPROFILE will be used if set, otherwise a combination of HOMEPATH and HOMEDRIVE will be used. An initial ~user is handled by stripping the last directory component from the created user path derived above. If the expansion fails or if the path does not begin with a tilde, the path is returned unchanged. Changed in version 3.6: Accepts a path-like object. Changed in version 3.8: No longer uses HOME on Windows.
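For example (the file names are illustrative; the expanded form of ~ depends on the environment):

```python
import os.path

# A path that does not begin with a tilde is returned unchanged.
print(os.path.expanduser("var/log/app.log"))   # 'var/log/app.log'

# A leading ~ expands to the current user's home directory, if it can
# be determined from the environment.
print(os.path.expanduser("~/notes.txt"))
```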
doc_26808
Alias for torch.gt().
doc_26809
An optional string of a field name (with an optional "-" prefix which indicates descending order) or an expression (or a tuple or list of strings and/or expressions) that specifies the ordering of the elements in the result list. Examples: 'some_field' '-some_field' from django.db.models import F F('some_field').desc()
doc_26810
Is True if the Tensor uses sparse storage layout, False otherwise.
doc_26811
Release the underlying buffer exposed by the memoryview object. Many objects take special actions when a view is held on them (for example, a bytearray would temporarily forbid resizing); therefore, calling release() is handy to remove these restrictions (and free any dangling resources) as soon as possible. After this method has been called, any further operation on the view raises a ValueError (except release() itself which can be called multiple times): >>> m = memoryview(b'abc') >>> m.release() >>> m[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operation forbidden on released memoryview object The context management protocol can be used for a similar effect, using the with statement: >>> with memoryview(b'abc') as m: ... m[0] ... 97 >>> m[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operation forbidden on released memoryview object New in version 3.2.
doc_26812
sklearn.metrics.mean_squared_log_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Mean squared logarithmic error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’ Defines aggregating of multiple output values. Array-like value defines weights used to average errors. ‘raw_values’ : Returns a full set of errors when the input is of multioutput format. ‘uniform_average’ : Errors of all outputs are averaged with uniform weight. Returns lossfloat or ndarray of floats A non-negative floating point value (the best value is 0.0), or an array of floating point values, one for each individual target. Examples >>> from sklearn.metrics import mean_squared_log_error >>> y_true = [3, 5, 2.5, 7] >>> y_pred = [2.5, 5, 4, 8] >>> mean_squared_log_error(y_true, y_pred) 0.039... >>> y_true = [[0.5, 1], [1, 2], [7, 6]] >>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]] >>> mean_squared_log_error(y_true, y_pred) 0.044... >>> mean_squared_log_error(y_true, y_pred, multioutput='raw_values') array([0.00462428, 0.08377444]) >>> mean_squared_log_error(y_true, y_pred, multioutput=[0.3, 0.7]) 0.060...
doc_26813
Enable stricter semantics for mixing floats and Decimals. If the signal is not trapped (default), mixing floats and Decimals is permitted in the Decimal constructor, create_decimal() and all comparison operators. Both conversion and comparisons are exact. Any occurrence of a mixed operation is silently recorded by setting FloatOperation in the context flags. Explicit conversions with from_float() or create_decimal_from_float() do not set the flag. Otherwise (the signal is trapped), only equality comparisons and explicit conversions are silent. All other mixed operations raise FloatOperation.
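A short example of both modes described above, using a local context so the global settings are untouched:

```python
from decimal import Decimal, FloatOperation, localcontext

with localcontext() as ctx:
    ctx.traps[FloatOperation] = True
    try:
        Decimal(3.5)                 # mixed float -> Decimal now raises
    except FloatOperation:
        print("mixed float/Decimal construction trapped")
    print(Decimal("3.5") == 3.5)     # equality stays silent: True
    print(Decimal.from_float(3.5))   # explicit conversion stays silent
```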
doc_26814
Destroy the tool. This method is called by ToolManager.remove_tool.
doc_26815
Default widget: EmailInput Empty value: Whatever you’ve given as empty_value. Normalizes to: A string. Uses EmailValidator to validate that the given value is a valid email address, using a moderately complex regular expression. Error message keys: required, invalid Has three optional arguments max_length, min_length, and empty_value which work just as they do for CharField.
doc_26816
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters X : array-like of shape (n_samples, n_features) Test samples. y : array-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns score : float Mean accuracy of self.predict(X) w.r.t. y.
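The multi-label subset accuracy can be sketched directly in NumPy (a hypothetical illustration, not the sklearn implementation): a sample only counts as correct if its entire label set matches.

```python
import numpy as np

y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 1],    # exact match
                   [0, 1, 1],    # one label wrong -> whole sample wrong
                   [1, 1, 0]])   # exact match

# Subset accuracy: fraction of rows whose whole label set matches.
subset_accuracy = np.mean(np.all(y_true == y_pred, axis=1))
print(subset_accuracy)           # 2 of 3 rows match exactly
```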
doc_26817
Bases: mpl_toolkits.axes_grid1.axes_grid.CbarAxesBase, mpl_toolkits.axisartist.axislines.Axes [Deprecated] Notes Deprecated since version 3.5: Build an Axes in a figure. Parameters figFigure The Axes is built in the Figure fig. rect[left, bottom, width, height] The Axes is built in the rectangle rect. rect is in Figure coordinates. sharex, shareyAxes, optional The x or y axis is shared with the x or y axis in the input Axes. frameonbool, default: True Whether the Axes frame is visible. box_aspectfloat, optional Set a fixed aspect for the Axes box, i.e. the ratio of height to width. See set_box_aspect for details. **kwargs Other optional keyword arguments: Property Description adjustable {'box', 'datalim'} agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} animated bool aspect {'auto', 'equal'} or float autoscale_on bool autoscalex_on bool autoscaley_on bool axes_locator Callable[[Axes, Renderer], Bbox] axisbelow bool or 'line' box_aspect float or None clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None facecolor or fc color figure Figure frame_on bool gid str in_layout bool label object navigate bool navigate_mode unknown path_effects AbstractPathEffect picker None or bool or float or callable position [left, bottom, width, height] or Bbox prop_cycle unknown rasterization_zorder float or None rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None title str transform Transform url str visible bool xbound unknown xlabel str xlim (bottom: float, top: float) xmargin float greater than -0.5 xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase xticklabels unknown xticks unknown ybound unknown ylabel str ylim (bottom: float, top: float) ymargin float greater than -0.5 yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase yticklabels unknown yticks 
unknown zorder float Returns Axes The new Axes object. set(*, adjustable=<UNSET>, agg_filter=<UNSET>, alpha=<UNSET>, anchor=<UNSET>, animated=<UNSET>, aspect=<UNSET>, autoscale_on=<UNSET>, autoscalex_on=<UNSET>, autoscaley_on=<UNSET>, axes_locator=<UNSET>, axisbelow=<UNSET>, box_aspect=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, facecolor=<UNSET>, frame_on=<UNSET>, gid=<UNSET>, in_layout=<UNSET>, label=<UNSET>, navigate=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, position=<UNSET>, prop_cycle=<UNSET>, rasterization_zorder=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, title=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, xbound=<UNSET>, xlabel=<UNSET>, xlim=<UNSET>, xmargin=<UNSET>, xscale=<UNSET>, xticklabels=<UNSET>, xticks=<UNSET>, ybound=<UNSET>, ylabel=<UNSET>, ylim=<UNSET>, ymargin=<UNSET>, yscale=<UNSET>, yticklabels=<UNSET>, yticks=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. Supported properties are Property Description adjustable {'box', 'datalim'} agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None anchor (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...} animated bool aspect {'auto', 'equal'} or float autoscale_on bool autoscalex_on bool autoscaley_on bool axes_locator Callable[[Axes, Renderer], Bbox] axisbelow bool or 'line' box_aspect float or None clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None facecolor or fc color figure Figure frame_on bool gid str in_layout bool label object navigate bool navigate_mode unknown path_effects AbstractPathEffect picker None or bool or float or callable position [left, bottom, width, height] or Bbox prop_cycle unknown rasterization_zorder float or None rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None title str transform Transform url str visible bool xbound unknown xlabel str xlim (bottom: 
float, top: float) xmargin float greater than -0.5 xscale {"linear", "log", "symlog", "logit", ...} or ScaleBase xticklabels unknown xticks unknown ybound unknown ylabel str ylim (bottom: float, top: float) ymargin float greater than -0.5 yscale {"linear", "log", "symlog", "logit", ...} or ScaleBase yticklabels unknown yticks unknown zorder float
doc_26818
Set if artist is to be included in layout calculations, e.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). Parameters in_layout : bool
doc_26819
Set the height of the rectangle.
doc_26820
Get parameters of this kernel. Parameters deep : bool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns params : dict Parameter names mapped to their values.
doc_26821
policy is an object implementing the CookiePolicy interface. The CookieJar class stores HTTP cookies. It extracts cookies from HTTP requests, and returns them in HTTP responses. CookieJar instances automatically expire contained cookies when necessary. Subclasses are also responsible for storing and retrieving cookies from a file or database.
doc_26822
Bases: torch.distributions.distribution.Distribution The MixtureSameFamily distribution implements a (batch of) mixture distribution where all components are from different parameterizations of the same distribution type. It is parameterized by a Categorical “selecting distribution” (over k components) and a component distribution, i.e., a Distribution with a rightmost batch shape (equal to [k]) which indexes each (batch of) component. Examples: # Construct Gaussian Mixture Model in 1D consisting of 5 equally # weighted normal distributions >>> mix = D.Categorical(torch.ones(5,)) >>> comp = D.Normal(torch.randn(5,), torch.rand(5,)) >>> gmm = MixtureSameFamily(mix, comp) # Construct Gaussian Mixture Model in 2D consisting of 5 equally # weighted bivariate normal distributions >>> mix = D.Categorical(torch.ones(5,)) >>> comp = D.Independent(D.Normal( torch.randn(5,2), torch.rand(5,2)), 1) >>> gmm = MixtureSameFamily(mix, comp) # Construct a batch of 3 Gaussian Mixture Models in 2D each # consisting of 5 random weighted bivariate normal distributions >>> mix = D.Categorical(torch.rand(3,5)) >>> comp = D.Independent(D.Normal( torch.randn(3,5,2), torch.rand(3,5,2)), 1) >>> gmm = MixtureSameFamily(mix, comp) Parameters mixture_distribution – torch.distributions.Categorical-like instance. Manages the probability of selecting components. The number of categories must match the rightmost batch dimension of the component_distribution. Must have either scalar batch_shape or batch_shape matching component_distribution.batch_shape[:-1] component_distribution – torch.distributions.Distribution-like instance. Right-most batch dimension indexes components. arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {} cdf(x) [source] property component_distribution expand(batch_shape, _instance=None) [source] has_rsample = False log_prob(x) [source] property mean property mixture_distribution sample(sample_shape=torch.Size([])) [source] property support property variance
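The mixture log-density that log_prob computes can be sketched without torch. This is a NumPy illustration (scalar Normal components; the function and variable names are mine, not the torch API) of logsumexp over components of log pi_k + log N(x; mu_k, sigma_k):

```python
import math
import numpy as np

weights = np.array([0.2, 0.3, 0.5])     # mixture (selecting) probabilities
mu = np.array([-1.0, 0.0, 2.0])
sigma = np.array([0.5, 1.0, 1.5])

def mixture_log_prob(x):
    # Per-component Gaussian log-densities, then a stable logsumexp
    # weighted by the mixture probabilities.
    comp_log_pdf = (-0.5 * ((x - mu) / sigma) ** 2
                    - np.log(sigma) - 0.5 * math.log(2 * math.pi))
    a = np.log(weights) + comp_log_pdf
    m = a.max()
    return m + math.log(np.exp(a - m).sum())

# Cross-check against the density computed directly, term by term.
x = 0.7
direct = sum(w * math.exp(-0.5 * ((x - c) / s) ** 2) / (s * math.sqrt(2 * math.pi))
             for w, c, s in zip(weights, mu, sigma))
print(abs(mixture_log_prob(x) - math.log(direct)) < 1e-12)   # True
```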
doc_26823
Return y minus best fit line; 'linear' detrending. Parameters y : 0-D or 1-D array or sequence Array or sequence containing the data. See also detrend_mean Another detrend algorithm. detrend_none Another detrend algorithm. detrend A wrapper around all the detrend algorithms.
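Linear detrending amounts to subtracting the least-squares best-fit line. A minimal NumPy sketch of that behavior (not the matplotlib implementation itself):

```python
import numpy as np

def detrend_linear(y):
    # Fit a degree-1 polynomial over the sample index, then subtract it.
    y = np.asarray(y, dtype=float)
    x = np.arange(y.size)
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

y = 3.0 * np.arange(10) + 2.0          # a perfect line...
print(detrend_linear(y))               # ...detrends to (numerically) zero
```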
doc_26824
A subclass of ConnectionError, raised when a connection is reset by the peer. Corresponds to errno ECONNRESET.
doc_26825
The internal LocalStack that holds AppContext instances. Typically, the current_app and g proxies should be accessed instead of the stack. Extensions can access the contexts on the stack as a namespace to store data. Changelog New in version 0.9.
doc_26826
Create a directory named pkg_dir containing an __init__ file with init_source as its contents.
doc_26827
See Migration guide for more details. tf.compat.v1.raw_ops.LeakyRelu tf.raw_ops.LeakyRelu( features, alpha=0.2, name=None ) Args features A Tensor. Must be one of the following types: half, bfloat16, float32, float64. alpha An optional float. Defaults to 0.2. name A name for the operation (optional). Returns A Tensor. Has the same type as features.
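The element-wise computation LeakyRelu performs is easy to sketch in NumPy (an illustration of the math, not the TensorFlow op): f(x) = x for x > 0, and alpha * x otherwise.

```python
import numpy as np

def leaky_relu(features, alpha=0.2):
    features = np.asarray(features, dtype=float)
    # Pass positives through; scale non-positives by alpha.
    return np.where(features > 0, features, alpha * features)

# -2.0 -> -0.4, -0.5 -> -0.1, 0.0 -> 0.0, 3.0 -> 3.0 with alpha=0.2
print(leaky_relu([-2.0, -0.5, 0.0, 3.0]))
```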
doc_26828
Wait till an object in object_list is ready. Returns the list of those objects in object_list which are ready. If timeout is a float then the call blocks for at most that many seconds. If timeout is None then it will block for an unlimited period. A negative timeout is equivalent to a zero timeout. For both Unix and Windows, an object can appear in object_list if it is a readable Connection object; a connected and readable socket.socket object; or the sentinel attribute of a Process object. A connection or socket object is ready when there is data available to be read from it, or the other end has been closed. Unix: wait(object_list, timeout) is almost equivalent to select.select(object_list, [], [], timeout). The difference is that, if select.select() is interrupted by a signal, it can raise OSError with an error number of EINTR, whereas wait() will not. Windows: An item in object_list must either be an integer handle which is waitable (according to the definition used by the documentation of the Win32 function WaitForMultipleObjects()) or it can be an object with a fileno() method which returns a socket handle or pipe handle. (Note that pipe handles and socket handles are not waitable handles.) New in version 3.3.
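A small single-process example with a Pipe connection, showing both the timeout case and the ready case:

```python
from multiprocessing.connection import Pipe, wait

r, w = Pipe(duplex=False)

# Nothing to read yet: wait() times out and returns an empty list.
print(wait([r], timeout=0.1))      # []

w.send("hello")
ready = wait([r], timeout=5.0)     # r becomes ready once data arrives
for conn in ready:
    print(conn.recv())             # 'hello'
```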
doc_26829
tf.ones_like( input, dtype=None, name=None ) See also tf.ones. Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to 1. Optionally, you can use dtype to specify a new type for the returned tensor. For example: tensor = tf.constant([[1, 2, 3], [4, 5, 6]]) tf.ones_like(tensor) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[1, 1, 1], [1, 1, 1]], dtype=int32)> Args input A Tensor. dtype A type for the returned Tensor. Must be float16, float32, float64, int8, uint8, int16, uint16, int32, int64, complex64, complex128, bool or string. name A name for the operation (optional). Returns A Tensor with all elements set to one.
doc_26830
Compute last of group values. Parameters numeric_only:bool, default False Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. min_count:int, default -1 The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Returns Series or DataFrame Computed last of values within each group.
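A quick example of the NA handling and min_count behavior described above (the column names are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"],
                   "v": [1.0, np.nan, 3.0, 4.0]})

# By default NA values are skipped, so group "a" keeps its last
# *valid* value, 1.0.
print(df.groupby("g")["v"].last())

# min_count=2 requires two valid values; group "a" has only one -> NaN.
print(df.groupby("g")["v"].last(min_count=2))
```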
doc_26831
tf.compat.v1.nn.max_pool_with_argmax( input, ksize, strides, padding, data_format='NHWC', Targmax=None, name=None, output_dtype=None, include_batch_in_index=False ) The indices in argmax are flattened, so that a maximum value at position [b, y, x, c] becomes flattened index: (y * width + x) * channels + c if include_batch_in_index is False; ((b * height + y) * width + x) * channels + c if include_batch_in_index is True. The indices returned are always in [0, height) x [0, width) before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening. Args input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D with shape [batch, height, width, channels]. Input to pool over. ksize A list of ints that has length >= 4. The size of the window for each dimension of the input tensor. strides A list of ints that has length >= 4. The stride of the sliding window for each dimension of the input tensor. padding A string from: "SAME", "VALID". The type of padding algorithm to use. Targmax An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64. include_batch_in_index An optional bool. Defaults to False. Whether to include batch dimension in flattened index of argmax. name A name for the operation (optional). Returns A tuple of Tensor objects (output, argmax). output A Tensor. Has the same type as input. argmax A Tensor of type Targmax.
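The flattened-index formula quoted above can be checked by hand in NumPy (an illustration of the indexing, not the TensorFlow op), for one 2x2 window of a [batch, height, width, channels] input with include_batch_in_index=False:

```python
import numpy as np

batch, height, width, channels = 1, 4, 4, 1
x = np.arange(batch * height * width * channels,
              dtype=np.float32).reshape(batch, height, width, channels)

# The window at (b=0, y=0, x=0): values [[0, 1], [4, 5]].
win = x[0, 0:2, 0:2, 0]
y_off, x_off = np.unravel_index(np.argmax(win), win.shape)

# flat = (y * width + x) * channels + c, per the formula above.
flat = (y_off * width + x_off) * channels + 0
print(flat)   # max of the window sits at (y, x) = (1, 1) -> (1*4 + 1)*1 + 0 = 5
```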
doc_26832
Dimensionality reduction using truncated SVD (aka LSA). This transformer performs linear dimensionality reduction by means of truncated singular value decomposition (SVD). Contrary to PCA, this estimator does not center the data before computing the singular value decomposition. This means it can work with sparse matrices efficiently. In particular, truncated SVD works on term count/tf-idf matrices as returned by the vectorizers in sklearn.feature_extraction.text. In that context, it is known as latent semantic analysis (LSA). This estimator supports two algorithms: a fast randomized SVD solver, and a “naive” algorithm that uses ARPACK as an eigensolver on X * X.T or X.T * X, whichever is more efficient. Read more in the User Guide. Parameters n_componentsint, default=2 Desired dimensionality of output data. Must be strictly less than the number of features. The default value is useful for visualisation. For LSA, a value of 100 is recommended. algorithm{‘arpack’, ‘randomized’}, default=’randomized’ SVD solver to use. Either “arpack” for the ARPACK wrapper in SciPy (scipy.sparse.linalg.svds), or “randomized” for the randomized algorithm due to Halko (2009). n_iterint, default=5 Number of iterations for randomized SVD solver. Not used by ARPACK. The default is larger than the default in randomized_svd to handle sparse matrices that may have large slowly decaying spectrum. random_stateint, RandomState instance or None, default=None Used during randomized svd. Pass an int for reproducible results across multiple function calls. See Glossary. tolfloat, default=0. Tolerance for ARPACK. 0 means machine precision. Ignored by randomized SVD solver. Attributes components_ndarray of shape (n_components, n_features) explained_variance_ndarray of shape (n_components,) The variance of the training samples transformed by a projection to each component. explained_variance_ratio_ndarray of shape (n_components,) Percentage of variance explained by each of the selected components. 
singular_values_ndarray of shape (n_components,) The singular values corresponding to each of the selected components. The singular values are equal to the 2-norms of the n_components variables in the lower-dimensional space. See also PCA Notes SVD suffers from a problem called “sign indeterminacy”, which means the sign of the components_ and the output from transform depend on the algorithm and random state. To work around this, fit instances of this class to data once, then keep the instance around to do transformations. References Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions Halko, et al., 2009 (arXiv:0909.4061) https://arxiv.org/pdf/0909.4061.pdf Examples >>> from sklearn.decomposition import TruncatedSVD >>> from scipy.sparse import random as sparse_random >>> X = sparse_random(100, 100, density=0.01, format='csr', ... random_state=42) >>> svd = TruncatedSVD(n_components=5, n_iter=7, random_state=42) >>> svd.fit(X) TruncatedSVD(n_components=5, n_iter=7, random_state=42) >>> print(svd.explained_variance_ratio_) [0.0646... 0.0633... 0.0639... 0.0535... 0.0406...] >>> print(svd.explained_variance_ratio_.sum()) 0.286... >>> print(svd.singular_values_) [1.553... 1.512... 1.510... 1.370... 1.199...] Methods fit(X[, y]) Fit model on training data X. fit_transform(X[, y]) Fit model to X and perform dimensionality reduction on X. get_params([deep]) Get parameters for this estimator. inverse_transform(X) Transform X back to its original space. set_params(**params) Set the parameters of this estimator. transform(X) Perform dimensionality reduction on X. fit(X, y=None) [source] Fit model on training data X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yIgnored Returns selfobject Returns the transformer object. fit_transform(X, y=None) [source] Fit model to X and perform dimensionality reduction on X. 
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. yIgnored Returns X_newndarray of shape (n_samples, n_components) Reduced version of X. This will always be a dense array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. inverse_transform(X) [source] Transform X back to its original space. Returns an array X_original whose transform would be X. Parameters Xarray-like of shape (n_samples, n_components) New data. Returns X_originalndarray of shape (n_samples, n_features) Note that this is always a dense array. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Perform dimensionality reduction on X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) New data. Returns X_newndarray of shape (n_samples, n_components) Reduced version of X. This will always be a dense array.
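The core operation TruncatedSVD performs — keeping only the top-k singular triplets of the uncentered X, which is the key difference from PCA — can be sketched with plain NumPy. This is an illustrative reimplementation under the exact-SVD path, not scikit-learn's actual solver code:

```python
import numpy as np

# Truncated SVD without centering: keep the top-k singular triplets
# of X itself (PCA would subtract the column means first).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
# Reduced representation: U_k * s_k, equivalently X projected on V_k.
X_reduced = U[:, :k] * s[:k]
assert np.allclose(X_reduced, X @ Vt[:k].T)
print(X_reduced.shape)  # (6, 2)
```

`transform` corresponds to the `X @ Vt[:k].T` projection, with `Vt[:k]` playing the role of `components_`.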
doc_26833
See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyMomentum tf.raw_ops.SparseApplyMomentum( var, accum, lr, grad, indices, momentum, use_locking=False, use_nesterov=False, name=None ) Set use_nesterov = True if you want to use Nesterov momentum. That is for rows we have grad for, we update var and accum as follows: $$accum = accum * momentum + grad$$ $$var -= lr * accum$$ Args var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable(). accum A mutable Tensor. Must have the same type as var. Should be from a Variable(). lr A Tensor. Must have the same type as var. Learning rate. Must be a scalar. grad A Tensor. Must have the same type as var. The gradient. indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum. momentum A Tensor. Must have the same type as var. Momentum. Must be a scalar. use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. use_nesterov An optional bool. Defaults to False. If True, the tensor passed to compute grad will be var - lr * momentum * accum, so in the end, the var you get is actually var - lr * momentum * accum. name A name for the operation (optional). Returns A mutable Tensor. Has the same type as var.
doc_26834
class ast.Or Boolean operator tokens.
doc_26835
Generate an etag for some data. Changed in version 2.0: Use SHA-1. MD5 may not be available in some environments. Parameters data (bytes) – Return type str
doc_26836
class sklearn.linear_model.LassoCV(*, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, copy_X=True, cv=None, verbose=False, n_jobs=None, positive=False, random_state=None, selection='cyclic') [source] Lasso linear model with iterative fitting along a regularization path. See glossary entry for cross-validation estimator. The best model is selected by cross-validation. The optimization objective for Lasso is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1 Read more in the User Guide. Parameters epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). normalizebool, default=False This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. max_iterint, default=1000 The maximum number of iterations. tolfloat, default=1e-4 The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. 
cvint, cross-validation generator or iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation, int, to specify the number of folds. CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, KFold is used. Refer User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. verbosebool or int, default=False Amount of verbosity. n_jobsint, default=None Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. positivebool, default=False If positive, restrict regression coefficients to be positive. random_stateint, RandomState instance, default=None The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary. selection{‘cyclic’, ‘random’}, default=’cyclic’ If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. Attributes alpha_float The amount of penalization chosen by cross validation. coef_ndarray of shape (n_features,) or (n_targets, n_features) Parameter vector (w in the cost function formula). intercept_float or ndarray of shape (n_targets,) Independent term in decision function. mse_path_ndarray of shape (n_alphas, n_folds) Mean square error for the test set on each fold, varying alpha. alphas_ndarray of shape (n_alphas,) The grid of alphas used for fitting. dual_gap_float or ndarray of shape (n_targets,) The dual gap at the end of the optimization for the optimal alpha (alpha_). 
n_iter_int Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha. See also lars_path lasso_path LassoLars Lasso LassoLarsCV Notes For an example, see examples/linear_model/plot_lasso_model_selection.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Examples >>> from sklearn.linear_model import LassoCV >>> from sklearn.datasets import make_regression >>> X, y = make_regression(noise=4, random_state=0) >>> reg = LassoCV(cv=5, random_state=0).fit(X, y) >>> reg.score(X, y) 0.9993... >>> reg.predict(X[:1,]) array([-78.4951...]) Methods fit(X, y) Fit linear model with coordinate descent. get_params([deep]) Get parameters for this estimator. path(*args, **kwargs) Compute Lasso path with coordinate descent predict(X) Predict using the linear model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit linear model with coordinate descent. Fit is on grid of alphas and best alpha estimated by cross-validation. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse. yarray-like of shape (n_samples,) or (n_samples, n_targets) Target values. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. static path(*args, **kwargs) [source] Compute Lasso path with coordinate descent The Lasso optimization function varies for mono and multi-outputs. 
For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3 n_alphasint, default=100 Number of alphas along the regularization path alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). **paramskwargs keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. 
dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also lars_path Lasso LassoLars LassoCV LassoLarsCV sklearn.decomposition.sparse_encode Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases, the Lars solver may be significantly faster to implement this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path Examples Comparing lasso_path and lars_path with interpolation: >>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T >>> y = np.array([1, 2, 3.1]) >>> # Use lasso_path to compute a coefficient path >>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5]) >>> print(coef_path) [[0. 0. 0.46874778] [0.2159048 0.4425765 0.23689075]] >>> # Now use lars_path and 1D linear interpolation to compute the >>> # same path >>> from sklearn.linear_model import lars_path >>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso') >>> from scipy import interpolate >>> coef_path_continuous = interpolate.interp1d(alphas[::-1], ... coef_path_lars[:, ::-1]) >>> print(coef_path_continuous([5., 1., .5])) [[0. 0. 0.46915237] [0.2159048 0.4425765 0.23668876]] predict(X) [source] Predict using the linear model. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape (n_samples,) Returns predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. 
The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. Examples using sklearn.linear_model.LassoCV Combine predictors using stacking Model-based and sequential feature selection Lasso model selection: Cross-Validation / AIC / BIC Common pitfalls in interpretation of coefficients of linear models Cross-validation on diabetes Dataset Exercise
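The optimization objective quoted at the top of this entry can be evaluated directly. The helper below is a hypothetical sketch for checking a candidate coefficient vector against the documented formula, not part of scikit-learn's API:

```python
import numpy as np

def lasso_objective(X, y, w, alpha):
    # The documented Lasso objective:
    # (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
    n_samples = X.shape[0]
    resid = y - X @ w
    return (resid @ resid) / (2 * n_samples) + alpha * np.abs(w).sum()

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0])
# Here Xw == y, so only the L1 penalty contributes: alpha * (1 + 2).
print(lasso_objective(X, y, w, alpha=0.1))
```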
doc_26837
Remove the artist from the figure if possible. The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry.
doc_26838
Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent with nn.quantized.functional.interpolate(..., mode='bilinear', align_corners=True). Note The input quantization parameters propagate to the output. Note Only 2D inputs are supported Parameters input (Tensor) – quantized input size (int or Tuple[int, int]) – output spatial size. scale_factor (int or Tuple[int, int]) – multiplier for spatial size
doc_26839
Decays the learning rate of each parameter group by gamma every epoch. When last_epoch=-1, sets initial lr as lr. Parameters optimizer (Optimizer) – Wrapped optimizer. gamma (float) – Multiplicative factor of learning rate decay. last_epoch (int) – The index of last epoch. Default: -1. verbose (bool) – If True, prints a message to stdout for each update. Default: False.
doc_26840
Get the yaxis' tick labels. Parameters minorbool Whether to return the minor or the major ticklabels. whichNone, ('minor', 'major', 'both') Overrides minor. Selects which ticklabels to return. Returns list of Text Notes The tick label strings are not populated until a draw method has been called. See also: draw. Examples using matplotlib.axes.Axes.get_yticklabels Fill Between and Alpha Programmatically controlling subplot adjustment
doc_26841
Return list of triples describing non-overlapping matching subsequences. Each triple is of the form (i, j, n), and means that a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in i and j. The last triple is a dummy, and has the value (len(a), len(b), 0). It is the only triple with n == 0. If (i, j, n) and (i', j', n') are adjacent triples in the list, and the second is not the last triple in the list, then i+n < i' or j+n < j'; in other words, adjacent triples always describe non-adjacent equal blocks. >>> s = SequenceMatcher(None, "abxcd", "abcd") >>> s.get_matching_blocks() [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]
doc_26842
An enum class of available backends. PyTorch ships with two builtin backends: BackendType.TENSORPIPE and BackendType.PROCESS_GROUP. Additional ones can be registered using the register_backend() function.
doc_26843
Enqueues the result of preparing the LogRecord. Should an exception occur (e.g. because a bounded queue has filled up), the handleError() method is called to handle the error. This can result in the record silently being dropped (if logging.raiseExceptions is False) or a message printed to sys.stderr (if logging.raiseExceptions is True).
doc_26844
Return the process ID. Before the process is spawned, this will be None.
doc_26845
Return a sample data file. fname is a path relative to the mpl-data/sample_data directory. If asfileobj is True return a file object, otherwise just a file path. Sample data files are stored in the 'mpl-data/sample_data' directory within the Matplotlib package. If the filename ends in .gz, the file is implicitly ungzipped. If the filename ends with .npy or .npz, asfileobj is True, and np_load is True, the file is loaded with numpy.load. np_load currently defaults to False but will default to True in a future release.
doc_26846
See Migration guide for more details. tf.compat.v1.raw_ops.FractionalMaxPoolGrad tf.raw_ops.FractionalMaxPoolGrad( orig_input, orig_output, out_backprop, row_pooling_sequence, col_pooling_sequence, overlapping=False, name=None ) Args orig_input A Tensor. Must be one of the following types: float32, float64, int32, int64. Original input for fractional_max_pool orig_output A Tensor. Must have the same type as orig_input. Original output for fractional_max_pool out_backprop A Tensor. Must have the same type as orig_input. 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of fractional_max_pool. row_pooling_sequence A Tensor of type int64. row pooling sequence, form pooling region with col_pooling_sequence. col_pooling_sequence A Tensor of type int64. column pooling sequence, form pooling region with row_pooling sequence. overlapping An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: index 0 1 2 3 4 value 20 5 16 3 7 If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [20, 16] for fractional max pooling. name A name for the operation (optional). Returns A Tensor. Has the same type as orig_input.
doc_26847
Test whether all array elements along a given axis evaluate to True. Parameters aarray_like Input array or object that can be converted to an array. axisNone or int or tuple of ints, optional Axis or axes along which a logical AND reduction is performed. The default (axis=None) is to perform a logical AND over all the dimensions of the input array. axis may be negative, in which case it counts from the last to the first axis. New in version 1.7.0. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. outndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output and its type is preserved (e.g., if dtype(out) is float, the result will consist of 0.0’s and 1.0’s). See Output type determination for more details. keepdimsbool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the all method of sub-classes of ndarray, however any non-default value will be. If the sub-class’ method does not implement keepdims any exceptions will be raised. wherearray_like of bool, optional Elements to include in checking for all True values. See reduce for details. New in version 1.20.0. Returns allndarray, bool A new boolean or array is returned unless out is specified, in which case a reference to out is returned. See also ndarray.all equivalent method any Test whether any element along a given axis evaluates to True. Notes Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal to zero. 
Examples >>> np.all([[True,False],[True,True]]) False >>> np.all([[True,False],[True,True]], axis=0) array([ True, False]) >>> np.all([-1, 4, 5]) True >>> np.all([1.0, np.nan]) True >>> np.all([[True, True], [False, True]], where=[[True], [False]]) True >>> o=np.array(False) >>> z=np.all([-1, 4, 5], out=o) >>> id(z), id(o), z (28293632, 28293632, array(True)) # may vary
doc_26848
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns a dictionary containing a whole state of the module Return type dict Example: >>> module.state_dict().keys() ['bias', 'weight']
doc_26849
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters levelfloat
doc_26850
Convert a Laguerre series to a polynomial. Convert an array representing the coefficients of a Laguerre series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree. Parameters carray_like 1-D array containing the Laguerre series coefficients, ordered from lowest order term to highest. Returns polndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. See also poly2lag Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. Examples >>> from numpy.polynomial.laguerre import lag2poly >>> lag2poly([ 23., -63., 58., -18.]) array([0., 1., 2., 3.])
doc_26851
Bases: matplotlib.offsetbox.AnchoredOffsetbox AnchoredOffsetbox with Text. Parameters sstr Text. locstr Location code. See AnchoredOffsetbox. padfloat, default: 0.4 Padding around the text as fraction of the fontsize. borderpadfloat, default: 0.5 Spacing between the offsetbox frame and the bbox_to_anchor. propdict, optional Dictionary of keyword parameters to be passed to the Text instance contained inside AnchoredText. **kwargs All other parameters are passed to AnchoredOffsetbox. set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, bbox_to_anchor=<UNSET>, child=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source] Set multiple properties at once. Supported properties are Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None animated bool bbox_to_anchor unknown child unknown clip_box Bbox clip_on bool clip_path Patch or (Path, Transform) or None figure Figure gid str height float in_layout bool label object offset (float, float) or callable path_effects AbstractPathEffect picker None or bool or float or callable rasterized bool sketch_params (scale: float, length: float, randomness: float) snap bool or None transform Transform url str visible bool width float zorder float
doc_26852
Create a quantized module from a float module or qparams_dict Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
doc_26853
Raised when attempting an astype operation on an array with NaN to an integer dtype.
doc_26854
Return the harmonic mean of data, a sequence or iterable of real-valued numbers. The harmonic mean, sometimes called the subcontrary mean, is the reciprocal of the arithmetic mean() of the reciprocals of the data. For example, the harmonic mean of three values a, b and c will be equivalent to 3/(1/a + 1/b + 1/c). If one of the values is zero, the result will be zero. The harmonic mean is a type of average, a measure of the central location of the data. It is often appropriate when averaging rates or ratios, for example speeds. Suppose a car travels 10 km at 40 km/hr, then another 10 km at 60 km/hr. What is the average speed? >>> harmonic_mean([40, 60]) 48.0 Suppose an investor purchases an equal value of shares in each of three companies, with P/E (price/earning) ratios of 2.5, 3 and 10. What is the average P/E ratio for the investor’s portfolio? >>> harmonic_mean([2.5, 3, 10]) # For an equal investment portfolio. 3.6 StatisticsError is raised if data is empty, or any element is less than zero. The current algorithm has an early-out when it encounters a zero in the input. This means that the subsequent inputs are not tested for validity. (This behavior may change in the future.) New in version 3.6.
doc_26855
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
doc_26856
See Migration guide for more details. tf.compat.v1.raw_ops.IteratorGetNextAsOptional tf.raw_ops.IteratorGetNextAsOptional( iterator, output_types, output_shapes, name=None ) Args iterator A Tensor of type resource. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type variant.
doc_26857
""" May be applied as a `default=...` value on a serializer field. Returns the current user. """ requires_context = True def __call__(self, serializer_field): return serializer_field.context['request'].user When serializing the instance, default will be used if the object attribute or dictionary key is not present in the instance. Note that setting a default value implies that the field is not required. Including both the default and required keyword arguments is invalid and will raise an error. allow_null Normally an error will be raised if None is passed to a serializer field. Set this keyword argument to True if None should be considered a valid value. Note that, without an explicit default, setting this argument to True will imply a default value of null for serialization output, but does not imply a default for input deserialization. Defaults to False source The name of the attribute that will be used to populate the field. May be a method that only takes a self argument, such as URLField(source='get_absolute_url'), or may use dotted notation to traverse attributes, such as EmailField(source='user.email'). When serializing fields with dotted notation, it may be necessary to provide a default value if any object is not present or is empty during attribute traversal. The value source='*' has a special meaning, and is used to indicate that the entire object should be passed through to the field. This can be useful for creating nested representations, or for fields which require access to the complete object in order to determine the output representation. Defaults to the name of the field. validators A list of validator functions which should be applied to the incoming field input, and which either raise a validation error or simply return. Validator functions should typically raise serializers.ValidationError, but Django's built-in ValidationError is also supported for compatibility with validators defined in the Django codebase or third party Django packages. 
error_messages A dictionary of error codes to error messages. label A short text string that may be used as the name of the field in HTML form fields or other descriptive elements. help_text A text string that may be used as a description of the field in HTML form fields or other descriptive elements. initial A value that should be used for pre-populating the value of HTML form fields. You may pass a callable to it, just as you may do with any regular Django Field: import datetime from rest_framework import serializers class ExampleSerializer(serializers.Serializer): day = serializers.DateField(initial=datetime.date.today) style A dictionary of key-value pairs that can be used to control how renderers should render the field. Two examples here are 'input_type' and 'base_template': # Use <input type="password"> for the input. password = serializers.CharField( style={'input_type': 'password'} ) # Use a radio input instead of a select input. color_channel = serializers.ChoiceField( choices=['red', 'green', 'blue'], style={'base_template': 'radio.html'} ) For more details see the HTML & Forms documentation. Boolean fields BooleanField A boolean representation. When using HTML encoded form input be aware that omitting a value will always be treated as setting a field to False, even if it has a default=True option specified. This is because HTML checkbox inputs represent the unchecked state by omitting the value, so REST framework treats omission as if it is an empty checkbox input. Note that Django 2.1 removed the blank kwarg from models.BooleanField. Prior to Django 2.1 models.BooleanField fields were always blank=True. Thus since Django 2.1 default serializers.BooleanField instances will be generated without the required kwarg (i.e. equivalent to required=True) whereas with previous versions of Django, default BooleanField instances will be generated with a required=False option. 
If you want to control this behaviour manually, explicitly declare the BooleanField on the serializer class, or use the extra_kwargs option to set the required flag. Corresponds to django.db.models.fields.BooleanField. Signature: BooleanField() NullBooleanField A boolean representation that also accepts None as a valid value. Corresponds to django.db.models.fields.NullBooleanField. Signature: NullBooleanField() String fields CharField A text representation. Optionally validates the text to be shorter than max_length and longer than min_length. Corresponds to django.db.models.fields.CharField or django.db.models.fields.TextField. Signature: CharField(max_length=None, min_length=None, allow_blank=False, trim_whitespace=True) max_length - Validates that the input contains no more than this number of characters. min_length - Validates that the input contains no fewer than this number of characters. allow_blank - If set to True then the empty string should be considered a valid value. If set to False then the empty string is considered invalid and will raise a validation error. Defaults to False. trim_whitespace - If set to True then leading and trailing whitespace is trimmed. Defaults to True. The allow_null option is also available for string fields, although its usage is discouraged in favor of allow_blank. It is valid to set both allow_blank=True and allow_null=True, but doing so means that there will be two differing types of empty value permissible for string representations, which can lead to data inconsistencies and subtle application bugs. EmailField A text representation, validates the text to be a valid e-mail address. Corresponds to django.db.models.fields.EmailField Signature: EmailField(max_length=None, min_length=None, allow_blank=False) RegexField A text representation, that validates the given value matches against a certain regular expression. Corresponds to django.forms.fields.RegexField. 
Signature: RegexField(regex, max_length=None, min_length=None, allow_blank=False) The mandatory regex argument may either be a string, or a compiled Python regular expression object. Uses Django's django.core.validators.RegexValidator for validation. SlugField A RegexField that validates the input against the pattern [a-zA-Z0-9_-]+. Corresponds to django.db.models.fields.SlugField. Signature: SlugField(max_length=50, min_length=None, allow_blank=False) URLField A RegexField that validates the input against a URL matching pattern. Expects fully qualified URLs of the form http://<host>/<path>. Corresponds to django.db.models.fields.URLField. Uses Django's django.core.validators.URLValidator for validation. Signature: URLField(max_length=200, min_length=None, allow_blank=False) UUIDField A field that ensures the input is a valid UUID string. The to_internal_value method will return a uuid.UUID instance. On output the field will return a string in the canonical hyphenated format, for example: "de305d54-75b4-431b-adb2-eb6b9e546013" Signature: UUIDField(format='hex_verbose') format: Determines the representation format of the uuid value. 'hex_verbose' - The canonical hex representation, including hyphens: "5ce0e9a5-5ffa-654b-cee0-1238041fb31a" 'hex' - The compact hex representation of the UUID, not including hyphens: "5ce0e9a55ffa654bcee01238041fb31a" 'int' - A 128-bit integer representation of the UUID: "123456789012312313134124512351145145114" 'urn' - RFC 4122 URN representation of the UUID: "urn:uuid:5ce0e9a5-5ffa-654b-cee0-1238041fb31a" Changing the format parameter only affects representation values. All formats are accepted by to_internal_value. FilePathField A field whose choices are limited to the filenames in a certain directory on the filesystem. Corresponds to django.forms.fields.FilePathField. 
Signature: FilePathField(path, match=None, recursive=False, allow_files=True, allow_folders=False, required=None, **kwargs) path - The absolute filesystem path to a directory from which this FilePathField should get its choices. match - A regular expression, as a string, that FilePathField will use to filter filenames. recursive - Specifies whether all subdirectories of path should be included. Default is False. allow_files - Specifies whether files in the specified location should be included. Default is True. Either this or allow_folders must be True. allow_folders - Specifies whether folders in the specified location should be included. Default is False. Either this or allow_files must be True. IPAddressField A field that ensures the input is a valid IPv4 or IPv6 string. Corresponds to django.forms.fields.IPAddressField and django.forms.fields.GenericIPAddressField. Signature: IPAddressField(protocol='both', unpack_ipv4=False, **options) protocol Limits valid inputs to the specified protocol. Accepted values are 'both' (default), 'IPv4' or 'IPv6'. Matching is case insensitive. unpack_ipv4 Unpacks IPv4-mapped addresses like ::ffff:192.0.2.1. If this option is enabled that address would be unpacked to 192.0.2.1. Default is disabled. Can only be used when protocol is set to 'both'. Numeric fields IntegerField An integer representation. Corresponds to django.db.models.fields.IntegerField, django.db.models.fields.SmallIntegerField, django.db.models.fields.PositiveIntegerField and django.db.models.fields.PositiveSmallIntegerField. Signature: IntegerField(max_value=None, min_value=None) max_value Validate that the number provided is no greater than this value. min_value Validate that the number provided is no less than this value. FloatField A floating point representation. Corresponds to django.db.models.fields.FloatField. Signature: FloatField(max_value=None, min_value=None) max_value Validate that the number provided is no greater than this value. 
min_value Validate that the number provided is no less than this value. DecimalField A decimal representation, represented in Python by a Decimal instance. Corresponds to django.db.models.fields.DecimalField. Signature: DecimalField(max_digits, decimal_places, coerce_to_string=None, max_value=None, min_value=None) max_digits The maximum number of digits allowed in the number. It must be either None or an integer greater than or equal to decimal_places. decimal_places The number of decimal places to store with the number. coerce_to_string Set to True if string values should be returned for the representation, or False if Decimal objects should be returned. Defaults to the same value as the COERCE_DECIMAL_TO_STRING settings key, which will be True unless overridden. If Decimal objects are returned by the serializer, then the final output format will be determined by the renderer. Note that setting localize will force the value to True. max_value Validate that the number provided is no greater than this value. min_value Validate that the number provided is no less than this value. localize Set to True to enable localization of input and output based on the current locale. This will also force coerce_to_string to True. Defaults to False. Note that data formatting is enabled if you have set USE_L10N=True in your settings file. rounding Sets the rounding mode used when quantising to the configured precision. Valid values are decimal module rounding modes. Defaults to None. Example usage To validate numbers up to 999 with a resolution of 2 decimal places, you would use: serializers.DecimalField(max_digits=5, decimal_places=2) And to validate numbers up to anything less than one billion with a resolution of 10 decimal places: serializers.DecimalField(max_digits=19, decimal_places=10) This field also takes an optional argument, coerce_to_string. If set to True the representation will be output as a string. 
If set to False the representation will be left as a Decimal instance and the final representation will be determined by the renderer. If unset, this will default to the same value as the COERCE_DECIMAL_TO_STRING setting, which is True unless set otherwise. Date and time fields DateTimeField A date and time representation. Corresponds to django.db.models.fields.DateTimeField. Signature: DateTimeField(format=api_settings.DATETIME_FORMAT, input_formats=None, default_timezone=None) format - A string representing the output format. If not specified, this defaults to the same value as the DATETIME_FORMAT settings key, which will be 'iso-8601' unless set. Setting to a format string indicates that to_representation return values should be coerced to string output. Format strings are described below. Setting this value to None indicates that Python datetime objects should be returned by to_representation. In this case the datetime encoding will be determined by the renderer. input_formats - A list of strings representing the input formats which may be used to parse the date. If not specified, the DATETIME_INPUT_FORMATS setting will be used, which defaults to ['iso-8601']. default_timezone - A pytz.timezone representing the timezone. If not specified and the USE_TZ setting is enabled, this defaults to the current timezone. If USE_TZ is disabled, then datetime objects will be naive. DateTimeField format strings. Format strings may either be Python strftime formats which explicitly specify the format, or the special string 'iso-8601', which indicates that ISO 8601 style datetimes should be used. (eg '2013-01-29T12:34:56.000000Z') When a value of None is used for the format, datetime objects will be returned by to_representation and the final output representation will be determined by the renderer class. auto_now and auto_now_add model fields. 
When using ModelSerializer or HyperlinkedModelSerializer, note that any model fields with auto_now=True or auto_now_add=True will use serializer fields that are read_only=True by default. If you want to override this behavior, you'll need to declare the DateTimeField explicitly on the serializer. For example: class CommentSerializer(serializers.ModelSerializer): created = serializers.DateTimeField() class Meta: model = Comment DateField A date representation. Corresponds to django.db.models.fields.DateField Signature: DateField(format=api_settings.DATE_FORMAT, input_formats=None) format - A string representing the output format. If not specified, this defaults to the same value as the DATE_FORMAT settings key, which will be 'iso-8601' unless set. Setting to a format string indicates that to_representation return values should be coerced to string output. Format strings are described below. Setting this value to None indicates that Python date objects should be returned by to_representation. In this case the date encoding will be determined by the renderer. input_formats - A list of strings representing the input formats which may be used to parse the date. If not specified, the DATE_INPUT_FORMATS setting will be used, which defaults to ['iso-8601']. DateField format strings Format strings may either be Python strftime formats which explicitly specify the format, or the special string 'iso-8601', which indicates that ISO 8601 style dates should be used. (eg '2013-01-29') TimeField A time representation. Corresponds to django.db.models.fields.TimeField Signature: TimeField(format=api_settings.TIME_FORMAT, input_formats=None) format - A string representing the output format. If not specified, this defaults to the same value as the TIME_FORMAT settings key, which will be 'iso-8601' unless set. Setting to a format string indicates that to_representation return values should be coerced to string output. Format strings are described below. 
Setting this value to None indicates that Python time objects should be returned by to_representation. In this case the time encoding will be determined by the renderer. input_formats - A list of strings representing the input formats which may be used to parse the date. If not specified, the TIME_INPUT_FORMATS setting will be used, which defaults to ['iso-8601']. TimeField format strings Format strings may either be Python strftime formats which explicitly specify the format, or the special string 'iso-8601', which indicates that ISO 8601 style times should be used. (eg '12:34:56.000000') DurationField A Duration representation. Corresponds to django.db.models.fields.DurationField The validated_data for these fields will contain a datetime.timedelta instance. The representation is a string following this format '[DD] [HH:[MM:]]ss[.uuuuuu]'. Signature: DurationField(max_value=None, min_value=None) max_value Validate that the duration provided is no greater than this value. min_value Validate that the duration provided is no less than this value. Choice selection fields ChoiceField A field that can accept a value out of a limited set of choices. Used by ModelSerializer to automatically generate fields if the corresponding model field includes a choices=… argument. Signature: ChoiceField(choices) choices - A list of valid values, or a list of (key, display_name) tuples. allow_blank - If set to True then the empty string should be considered a valid value. If set to False then the empty string is considered invalid and will raise a validation error. Defaults to False. html_cutoff - If set this will be the maximum number of choices that will be displayed by a HTML select drop down. Can be used to ensure that automatically generated ChoiceFields with very large possible selections do not prevent a template from rendering. Defaults to None. 
html_cutoff_text - If set this will display a textual indicator if the maximum number of items have been cut off in an HTML select drop down. Defaults to "More than {count} items…" Both the allow_blank and allow_null options are valid on ChoiceField, although it is highly recommended that you only use one and not both. allow_blank should be preferred for textual choices, and allow_null should be preferred for numeric or other non-textual choices. MultipleChoiceField A field that can accept a set of zero, one or many values, chosen from a limited set of choices. Takes a single mandatory argument. to_internal_value returns a set containing the selected values. Signature: MultipleChoiceField(choices) choices - A list of valid values, or a list of (key, display_name) tuples. allow_blank - If set to True then the empty string should be considered a valid value. If set to False then the empty string is considered invalid and will raise a validation error. Defaults to False. html_cutoff - If set this will be the maximum number of choices that will be displayed by an HTML select drop down. Can be used to ensure that automatically generated ChoiceFields with very large possible selections do not prevent a template from rendering. Defaults to None. html_cutoff_text - If set this will display a textual indicator if the maximum number of items have been cut off in an HTML select drop down. Defaults to "More than {count} items…" As with ChoiceField, both the allow_blank and allow_null options are valid, although it is highly recommended that you only use one and not both. allow_blank should be preferred for textual choices, and allow_null should be preferred for numeric or other non-textual choices. File upload fields Parsers and file uploads. The FileField and ImageField classes are only suitable for use with MultiPartParser or FileUploadParser. Most parsers, such as JSON, don't support file uploads. Django's regular FILE_UPLOAD_HANDLERS are used for handling uploaded files. 
FileField A file representation. Performs Django's standard FileField validation. Corresponds to django.forms.fields.FileField. Signature: FileField(max_length=None, allow_empty_file=False, use_url=UPLOADED_FILES_USE_URL) max_length - Designates the maximum length for the file name. allow_empty_file - Designates if empty files are allowed. use_url - If set to True then URL string values will be used for the output representation. If set to False then filename string values will be used for the output representation. Defaults to the value of the UPLOADED_FILES_USE_URL settings key, which is True unless set otherwise. ImageField An image representation. Validates the uploaded file content as matching a known image format. Corresponds to django.forms.fields.ImageField. Signature: ImageField(max_length=None, allow_empty_file=False, use_url=UPLOADED_FILES_USE_URL) max_length - Designates the maximum length for the file name. allow_empty_file - Designates if empty files are allowed. use_url - If set to True then URL string values will be used for the output representation. If set to False then filename string values will be used for the output representation. Defaults to the value of the UPLOADED_FILES_USE_URL settings key, which is True unless set otherwise. Requires either the Pillow package or PIL package. The Pillow package is recommended, as PIL is no longer actively maintained. Composite fields ListField A field class that validates a list of objects. Signature: ListField(child=<A_FIELD_INSTANCE>, allow_empty=True, min_length=None, max_length=None) child - A field instance that should be used for validating the objects in the list. If this argument is not provided then objects in the list will not be validated. allow_empty - Designates if empty lists are allowed. min_length - Validates that the list contains no fewer than this number of elements. max_length - Validates that the list contains no more than this number of elements. 
For example, to validate a list of integers you might use something like the following: scores = serializers.ListField( child=serializers.IntegerField(min_value=0, max_value=100) ) The ListField class also supports a declarative style that allows you to write reusable list field classes. class StringListField(serializers.ListField): child = serializers.CharField() We can now reuse our custom StringListField class throughout our application, without having to provide a child argument to it. DictField A field class that validates a dictionary of objects. The keys in DictField are always assumed to be string values. Signature: DictField(child=<A_FIELD_INSTANCE>, allow_empty=True) child - A field instance that should be used for validating the values in the dictionary. If this argument is not provided then values in the mapping will not be validated. allow_empty - Designates if empty dictionaries are allowed. For example, to create a field that validates a mapping of strings to strings, you would write something like this: document = DictField(child=CharField()) You can also use the declarative style, as with ListField. For example: class DocumentField(DictField): child = CharField() HStoreField A preconfigured DictField that is compatible with Django's postgres HStoreField. Signature: HStoreField(child=<A_FIELD_INSTANCE>, allow_empty=True) child - A field instance that is used for validating the values in the dictionary. The default child field accepts both empty strings and null values. allow_empty - Designates if empty dictionaries are allowed. Note that the child field must be an instance of CharField, as the hstore extension stores values as strings. JSONField A field class that validates that the incoming data structure consists of valid JSON primitives. In its alternate binary mode, it will represent and validate JSON-encoded binary strings. 
Signature: JSONField(binary, encoder) binary - If set to True then the field will output and validate a JSON encoded string, rather than a primitive data structure. Defaults to False. encoder - Use this JSON encoder to serialize input object. Defaults to None. Miscellaneous fields ReadOnlyField A field class that simply returns the value of the field without modification. This field is used by default with ModelSerializer when including field names that relate to an attribute rather than a model field. Signature: ReadOnlyField() For example, if has_expired was a property on the Account model, then the following serializer would automatically generate it as a ReadOnlyField: class AccountSerializer(serializers.ModelSerializer): class Meta: model = Account fields = ['id', 'account_name', 'has_expired'] HiddenField A field class that does not take a value based on user input, but instead takes its value from a default value or callable. Signature: HiddenField() For example, to include a field that always provides the current time as part of the serializer validated data, you would use the following: modified = serializers.HiddenField(default=timezone.now) The HiddenField class is usually only needed if you have some validation that needs to run based on some pre-provided field values, but you do not want to expose all of those fields to the end user. For further examples on HiddenField see the validators documentation. ModelField A generic field that can be tied to any arbitrary model field. The ModelField class delegates the task of serialization/deserialization to its associated model field. This field can be used to create serializer fields for custom model fields, without having to create a new custom serializer field. This field is used by ModelSerializer to correspond to custom model field classes. Signature: ModelField(model_field=<Django ModelField instance>) The ModelField class is generally intended for internal use, but can be used by your API if needed. 
In order to properly instantiate a ModelField, it must be passed a field that is attached to an instantiated model. For example: ModelField(model_field=MyModel()._meta.get_field('custom_field')) SerializerMethodField This is a read-only field. It gets its value by calling a method on the serializer class it is attached to. It can be used to add any sort of data to the serialized representation of your object. Signature: SerializerMethodField(method_name=None) method_name - The name of the method on the serializer to be called. If not included this defaults to get_<field_name>. The serializer method referred to by the method_name argument should accept a single argument (in addition to self), which is the object being serialized. It should return whatever you want to be included in the serialized representation of the object. For example: from django.contrib.auth.models import User from django.utils.timezone import now from rest_framework import serializers class UserSerializer(serializers.ModelSerializer): days_since_joined = serializers.SerializerMethodField() class Meta: model = User fields = '__all__' def get_days_since_joined(self, obj): return (now() - obj.date_joined).days Custom fields If you want to create a custom field, you'll need to subclass Field and then override either one or both of the .to_representation() and .to_internal_value() methods. These two methods are used to convert between the initial datatype, and a primitive, serializable datatype. Primitive datatypes will typically be any of a number, string, boolean, date/time/datetime or None. They may also be any list or dictionary like object that only contains other primitive objects. Other types might be supported, depending on the renderer that you are using. The .to_representation() method is called to convert the initial datatype into a primitive, serializable datatype. The .to_internal_value() method is called to restore a primitive datatype into its internal python representation. 
This method should raise a serializers.ValidationError if the data is invalid. Examples A Basic Custom Field Let's look at an example of serializing a class that represents an RGB color value: class Color: """ A color represented in the RGB colorspace. """ def __init__(self, red, green, blue): assert(red >= 0 and green >= 0 and blue >= 0) assert(red < 256 and green < 256 and blue < 256) self.red, self.green, self.blue = red, green, blue class ColorField(serializers.Field): """ Color objects are serialized into 'rgb(#, #, #)' notation. """ def to_representation(self, value): return "rgb(%d, %d, %d)" % (value.red, value.green, value.blue) def to_internal_value(self, data): data = data.strip('rgb(').rstrip(')') red, green, blue = [int(col) for col in data.split(',')] return Color(red, green, blue) By default field values are treated as mapping to an attribute on the object. If you need to customize how the field value is accessed and set you need to override .get_attribute() and/or .get_value(). As an example, let's create a field that can be used to represent the class name of the object being serialized: class ClassNameField(serializers.Field): def get_attribute(self, instance): # We pass the object instance onto `to_representation`, # not just the field attribute. return instance def to_representation(self, value): """ Serialize the value's class name. """ return value.__class__.__name__ Raising validation errors Our ColorField class above currently does not perform any data validation. To indicate invalid data, we should raise a serializers.ValidationError, like so: def to_internal_value(self, data): if not isinstance(data, str): msg = 'Incorrect type. Expected a string, but got %s' raise ValidationError(msg % type(data).__name__) if not re.match(r'^rgb\([0-9]+,[0-9]+,[0-9]+\)$', data): raise ValidationError('Incorrect format. 
Expected `rgb(#,#,#)`.') data = data.strip('rgb(').rstrip(')') red, green, blue = [int(col) for col in data.split(',')] if any([col > 255 or col < 0 for col in (red, green, blue)]): raise ValidationError('Value out of range. Must be between 0 and 255.') return Color(red, green, blue) The .fail() method is a shortcut for raising ValidationError that takes a message string from the error_messages dictionary. For example: default_error_messages = { 'incorrect_type': 'Incorrect type. Expected a string, but got {input_type}', 'incorrect_format': 'Incorrect format. Expected `rgb(#,#,#)`.', 'out_of_range': 'Value out of range. Must be between 0 and 255.' } def to_internal_value(self, data): if not isinstance(data, str): self.fail('incorrect_type', input_type=type(data).__name__) if not re.match(r'^rgb\([0-9]+,[0-9]+,[0-9]+\)$', data): self.fail('incorrect_format') data = data.strip('rgb(').rstrip(')') red, green, blue = [int(col) for col in data.split(',')] if any([col > 255 or col < 0 for col in (red, green, blue)]): self.fail('out_of_range') return Color(red, green, blue) This style keeps your error messages cleaner and more separated from your code, and should be preferred. Using source='*' Here we'll take an example of a flat DataPoint model with x_coordinate and y_coordinate attributes. 
class DataPoint(models.Model): label = models.CharField(max_length=50) x_coordinate = models.SmallIntegerField() y_coordinate = models.SmallIntegerField() Using a custom field and source='*' we can provide a nested representation of the coordinate pair: class CoordinateField(serializers.Field): def to_representation(self, value): ret = { "x": value.x_coordinate, "y": value.y_coordinate } return ret def to_internal_value(self, data): ret = { "x_coordinate": data["x"], "y_coordinate": data["y"], } return ret class DataPointSerializer(serializers.ModelSerializer): coordinates = CoordinateField(source='*') class Meta: model = DataPoint fields = ['label', 'coordinates'] Note that this example doesn't handle validation. Partly for that reason, in a real project, the coordinate nesting might be better handled with a nested serializer using source='*', with two IntegerField instances, each with their own source pointing to the relevant field. The key points from the example, though, are: to_representation is passed the entire DataPoint object and must map from that to the desired output. >>> instance = DataPoint(label='Example', x_coordinate=1, y_coordinate=2) >>> out_serializer = DataPointSerializer(instance) >>> out_serializer.data ReturnDict([('label', 'Example'), ('coordinates', {'x': 1, 'y': 2})]) Unless our field is to be read-only, to_internal_value must map back to a dict suitable for updating our target object. With source='*', the return from to_internal_value will update the root validated data dictionary, rather than a single key. >>> data = { ... "label": "Second Example", ... "coordinates": { ... "x": 3, ... "y": 4, ... } ... 
} >>> in_serializer = DataPointSerializer(data=data) >>> in_serializer.is_valid() True >>> in_serializer.validated_data OrderedDict([('label', 'Second Example'), ('y_coordinate', 4), ('x_coordinate', 3)]) For completeness let's do the same thing again but with the nested serializer approach suggested above: class NestedCoordinateSerializer(serializers.Serializer): x = serializers.IntegerField(source='x_coordinate') y = serializers.IntegerField(source='y_coordinate') class DataPointSerializer(serializers.ModelSerializer): coordinates = NestedCoordinateSerializer(source='*') class Meta: model = DataPoint fields = ['label', 'coordinates'] Here the mapping between the target and source attribute pairs (x and x_coordinate, y and y_coordinate) is handled in the IntegerField declarations. It's our NestedCoordinateSerializer that takes source='*'. Our new DataPointSerializer exhibits the same behaviour as the custom field approach. Serializing: >>> out_serializer = DataPointSerializer(instance) >>> out_serializer.data ReturnDict([('label', 'testing'), ('coordinates', OrderedDict([('x', 1), ('y', 2)]))]) Deserializing: >>> in_serializer = DataPointSerializer(data=data) >>> in_serializer.is_valid() True >>> in_serializer.validated_data OrderedDict([('label', 'still testing'), ('x_coordinate', 3), ('y_coordinate', 4)]) But we also get the built-in validation for free: >>> invalid_data = { ... "label": "still testing", ... "coordinates": { ... "x": 'a', ... "y": 'b', ... } ... } >>> invalid_serializer = DataPointSerializer(data=invalid_data) >>> invalid_serializer.is_valid() False >>> invalid_serializer.errors ReturnDict([('coordinates', {'x': ['A valid integer is required.'], 'y': ['A valid integer is required.']})]) For this reason, the nested serializer approach would be the first to try. You would use the custom field approach when the nested serializer becomes infeasible or overly complex. Third party packages The following third party packages are also available. 
DRF Compound Fields The drf-compound-fields package provides "compound" serializer fields, such as lists of simple values, which can be described by other fields rather than serializers with the many=True option. Also provided are fields for typed dictionaries and values that can be either a specific type or a list of items of that type. DRF Extra Fields The drf-extra-fields package provides extra serializer fields for REST framework, including Base64ImageField and PointField classes. djangorestframework-recursive The djangorestframework-recursive package provides a RecursiveField for serializing and deserializing recursive structures. django-rest-framework-gis The django-rest-framework-gis package provides geographic addons for Django REST framework like a GeometryField field and a GeoJSON serializer. django-rest-framework-hstore The django-rest-framework-hstore package provides an HStoreField to support the django-hstore DictionaryField model field.
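As a closing illustration of the custom-field section above, the rgb(#,#,#) parsing and validation logic can be exercised as a plain function, independent of any serializer. This is a stdlib-only sketch; parse_rgb is a hypothetical helper, not part of REST framework:

```python
import re

# Same pattern as the ColorField example: digits only, no spaces.
RGB_PATTERN = re.compile(r'^rgb\([0-9]+,[0-9]+,[0-9]+\)$')

def parse_rgb(data):
    """Validate an 'rgb(#,#,#)' string and return an (r, g, b) tuple,
    mirroring the checks in the ColorField.to_internal_value example."""
    if not isinstance(data, str):
        raise ValueError('Incorrect type. Expected a string, but got %s'
                         % type(data).__name__)
    if not RGB_PATTERN.match(data):
        raise ValueError('Incorrect format. Expected `rgb(#,#,#)`.')
    # Slice off the 'rgb(' prefix and ')' suffix, then split on commas.
    red, green, blue = (int(col) for col in data[4:-1].split(','))
    if any(col > 255 for col in (red, green, blue)):
        raise ValueError('Value out of range. Must be between 0 and 255.')
    return (red, green, blue)

parse_rgb('rgb(0,128,255)')  # → (0, 128, 255)
```

Inside a serializer field the same checks would raise serializers.ValidationError (or call self.fail()) rather than ValueError.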
doc_26858
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **params : dict Estimator parameters. Returns self : estimator instance Estimator instance.
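The <component>__<parameter> routing can be sketched without scikit-learn itself. TinyEstimator and set_nested_params below are hypothetical stand-ins that illustrate the convention only:

```python
class TinyEstimator:
    """Minimal stand-in for an estimator that stores parameters as attributes."""
    def __init__(self, **params):
        for name, value in params.items():
            setattr(self, name, value)

def set_nested_params(estimator, **params):
    """Route 'component__parameter' keys to nested sub-estimators,
    mimicking the set_params convention described above."""
    for key, value in params.items():
        if '__' in key:
            # Split off the first component name and recurse into it.
            component, _, sub_key = key.partition('__')
            set_nested_params(getattr(estimator, component), **{sub_key: value})
        else:
            setattr(estimator, key, value)
    return estimator  # like set_params, returns the estimator instance

scaler = TinyEstimator(with_mean=True)
clf = TinyEstimator(alpha=1.0)
pipe = TinyEstimator(scaler=scaler, clf=clf)

set_nested_params(pipe, clf__alpha=0.5, scaler__with_mean=False)
pipe.clf.alpha         # → 0.5
pipe.scaler.with_mean  # → False
```

The real implementation additionally validates that each key names an existing parameter (raising on typos), which this sketch omits.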
doc_26859
Run a test module. Equivalent to calling $ nosetests <argv> <file_to_run> from the command line. Parameters file_to_run : str, optional Path to test module, or None. By default, run the module from which this function is called. argv : list of strings Arguments to be passed to the nose test runner. argv[0] is ignored. All command line arguments accepted by nosetests will work. If it is the default value None, sys.argv is used. New in version 1.9.0. Examples Adding the following: if __name__ == "__main__": run_module_suite(argv=sys.argv) at the end of a test module will run the tests when that module is called in the Python interpreter. Alternatively, calling: >>> run_module_suite(file_to_run="numpy/tests/test_matlib.py") from an interpreter will run all the test routines in ‘test_matlib.py’.
doc_26860
Accessor object for datetimelike properties of the Series values. Examples >>> seconds_series = pd.Series(pd.date_range("2000-01-01", periods=3, freq="s")) >>> seconds_series 0 2000-01-01 00:00:00 1 2000-01-01 00:00:01 2 2000-01-01 00:00:02 dtype: datetime64[ns] >>> seconds_series.dt.second 0 0 1 1 2 2 dtype: int64 >>> hours_series = pd.Series(pd.date_range("2000-01-01", periods=3, freq="h")) >>> hours_series 0 2000-01-01 00:00:00 1 2000-01-01 01:00:00 2 2000-01-01 02:00:00 dtype: datetime64[ns] >>> hours_series.dt.hour 0 0 1 1 2 2 dtype: int64 >>> quarters_series = pd.Series(pd.date_range("2000-01-01", periods=3, freq="q")) >>> quarters_series 0 2000-03-31 1 2000-06-30 2 2000-09-30 dtype: datetime64[ns] >>> quarters_series.dt.quarter 0 1 1 2 2 3 dtype: int64 Returns a Series indexed like the original Series. Raises TypeError if the Series does not contain datetimelike values.
doc_26861
[Deprecated] Notes Deprecated since version 3.4:
doc_26862
Returns a tzinfo instance that represents the default time zone.
doc_26863
Return a copy of the array. Returns ExtensionArray
doc_26864
See Migration guide for more details. tf.compat.v1.raw_ops.ImageProjectiveTransformV2 tf.raw_ops.ImageProjectiveTransformV2( images, transforms, output_shape, interpolation, fill_mode='CONSTANT', name=None ) If one row of transforms is [a0, a1, a2, b0, b1, b2, c0, c1], then it maps the output point (x, y) to a transformed input point (x', y') = ((a0 x + a1 y + a2) / k, (b0 x + b1 y + b2) / k), where k = c0 x + c1 y + 1. If the transformed point lays outside of the input image, the output pixel is set to 0. Args images A Tensor. Must be one of the following types: uint8, int32, int64, half, float32, float64. 4-D with shape [batch, height, width, channels]. transforms A Tensor of type float32. 2-D Tensor, [batch, 8] or [1, 8] matrix, where each row corresponds to a 3 x 3 projective transformation matrix, with the last entry assumed to be 1. If there is one row, the same transformation will be applied to all images. output_shape A Tensor of type int32. 1-D Tensor [new_height, new_width]. interpolation A string. Interpolation method, "NEAREST" or "BILINEAR". fill_mode An optional string. Defaults to "CONSTANT". Fill mode, "REFLECT", "WRAP", or "CONSTANT". name A name for the operation (optional). Returns A Tensor. Has the same type as images.
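The per-point mapping described above can be checked with plain Python. apply_transform is a hypothetical helper for illustration, not a TensorFlow API; it applies one row of the transforms tensor to a single output point:

```python
def apply_transform(transform, x, y):
    """Map an output point (x, y) to the input point (x', y') using one
    row [a0, a1, a2, b0, b1, b2, c0, c1] of the transforms tensor."""
    a0, a1, a2, b0, b1, b2, c0, c1 = transform
    k = c0 * x + c1 * y + 1.0
    return ((a0 * x + a1 * y + a2) / k, (b0 * x + b1 * y + b2) / k)

# The identity transform leaves every point unchanged.
identity = [1, 0, 0, 0, 1, 0, 0, 0]
apply_transform(identity, 3.0, 4.0)   # → (3.0, 4.0)

# An affine row with a2=2, b2=5 samples the input 2 right and 5 down
# of each output pixel (the mapping runs output -> input).
translate = [1, 0, 2, 0, 1, 5, 0, 0]
apply_transform(translate, 3.0, 4.0)  # → (5.0, 9.0)
```

When c0 or c1 is non-zero the denominator k varies per point, which is what makes the transform projective rather than affine.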
doc_26865
Attributes data_class DataClass data_class display_name string display_name plugin_data PluginData plugin_data summary_description string summary_description Child Classes class PluginData
doc_26866
tf.keras.layers.GlobalMaxPooling2D Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.layers.GlobalMaxPool2D, tf.compat.v1.keras.layers.GlobalMaxPooling2D tf.keras.layers.GlobalMaxPool2D( data_format=None, **kwargs ) Examples: input_shape = (2, 4, 5, 3) x = tf.random.normal(input_shape) y = tf.keras.layers.GlobalMaxPool2D()(x) print(y.shape) (2, 3) Arguments data_format A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". Input shape: If data_format='channels_last': 4D tensor with shape (batch_size, rows, cols, channels). If data_format='channels_first': 4D tensor with shape (batch_size, channels, rows, cols). Output shape: 2D tensor with shape (batch_size, channels).
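For `channels_last` input, the layer is equivalent to taking the maximum over the two spatial axes; a NumPy sketch of the same reduction:

```python
import numpy as np

x = np.random.normal(size=(2, 4, 5, 3))  # (batch, rows, cols, channels)
y = x.max(axis=(1, 2))                   # reduce over rows and cols
print(y.shape)  # (2, 3)
```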
doc_26867
self.byte() is equivalent to self.to(torch.uint8). See to(). Parameters memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
doc_26868
tf.feature_column.categorical_column_with_vocabulary_file( key, vocabulary_file, vocabulary_size=None, dtype=tf.dtypes.string, default_value=None, num_oov_buckets=0 ) Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of num_oov_buckets and default_value to specify how to include out-of-vocabulary values. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. Example with num_oov_buckets: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. All inputs with values in that file are assigned an ID 0-49, corresponding to its line number. All other values are hashed and assigned an ID 50-54. states = categorical_column_with_vocabulary_file( key='states', vocabulary_file='/us/states.txt', vocabulary_size=50, num_oov_buckets=5) columns = [states, ...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) linear_prediction = linear_model(features, columns) Example with default_value: File '/us/states.txt' contains 51 lines - the first line is 'XX', and the other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX' in input, and other values missing from the file, will be assigned ID 0. All others are assigned the corresponding line number 1-50. states = categorical_column_with_vocabulary_file( key='states', vocabulary_file='/us/states.txt', vocabulary_size=51, default_value=0) columns = [states, ...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) linear_prediction, _, _ = linear_model(features, columns) And to make an embedding with either: columns = [embedding_column(states, 3),...] 
features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) dense_tensor = input_layer(features, columns) Args key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. vocabulary_file The vocabulary file name. vocabulary_size Number of the elements in the vocabulary. This must be no greater than length of vocabulary_file, if less than length, later values are ignored. If None, it is set to the length of vocabulary_file. dtype The type of features. Only string and integer types are supported. default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets. num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [vocabulary_size, vocabulary_size+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value. Returns A CategoricalColumn with a vocabulary file. Raises ValueError vocabulary_file is missing or cannot be opened. ValueError vocabulary_size is missing or < 1. ValueError num_oov_buckets is a negative integer. ValueError num_oov_buckets and default_value are both specified. ValueError dtype is neither string nor integer.
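The ID-assignment rule above can be sketched in plain Python. TensorFlow uses a specific fingerprint hash for OOV bucketing; the `lookup_ids` helper below is hypothetical and uses Python's built-in `hash` only to illustrate the bucketing logic:

```python
def lookup_ids(values, vocabulary, num_oov_buckets=0, default_value=-1):
    index = {v: i for i, v in enumerate(vocabulary)}  # line number -> ID
    ids = []
    for v in values:
        if v in index:
            ids.append(index[v])
        elif num_oov_buckets > 0:
            # OOV values land in [len(vocabulary), len(vocabulary) + num_oov_buckets).
            ids.append(len(vocabulary) + hash(v) % num_oov_buckets)
        else:
            ids.append(default_value)
    return ids

vocab = ['AL', 'AK', 'AZ']  # stands in for the lines of a vocabulary file
print(lookup_ids(['AK', 'ZZ'], vocab, num_oov_buckets=5))
```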
doc_26869
See Migration guide for more details. tf.compat.v1.estimator.MultiClassHead tf.estimator.MultiClassHead( n_classes, weight_column=None, label_vocabulary=None, loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, loss_fn=None, name=None ) Uses sparse_softmax_cross_entropy loss. The head expects logits with shape [D0, D1, ... DN, n_classes]. In many applications, the shape is [batch_size, n_classes]. labels must be a dense Tensor with shape matching logits, namely [D0, D1, ... DN, 1]. If label_vocabulary given, labels must be a string Tensor with values from the vocabulary. If label_vocabulary is not given, labels must be an integer Tensor with values specifying the class index. If weight_column is specified, weights must be of shape [D0, D1, ... DN], or [D0, D1, ... DN, 1]. The loss is the weighted sum over the input dimensions. Namely, if the input labels have shape [batch_size, 1], the loss is the weighted sum over batch_size. Also supports custom loss_fn. loss_fn takes (labels, logits) or (labels, logits, features, loss_reduction) as arguments and returns unreduced loss with shape [D0, D1, ... DN, 1]. loss_fn must support integer labels with shape [D0, D1, ... DN, 1]. Namely, the head applies label_vocabulary to the input labels before passing them to loss_fn. Usage: n_classes = 3 head = tf.estimator.MultiClassHead(n_classes) logits = np.array(((10, 0, 0), (0, 10, 0),), dtype=np.float32) labels = np.array(((1,), (1,)), dtype=np.int64) features = {'x': np.array(((42,),), dtype=np.int32)} # expected_loss = sum(cross_entropy(labels, logits)) / batch_size # = sum(10, 0) / 2 = 5. 
loss = head.loss(labels, logits, features=features) print('{:.2f}'.format(loss.numpy())) 5.00 eval_metrics = head.metrics() updated_metrics = head.update_metrics( eval_metrics, features, logits, labels) for k in sorted(updated_metrics): print('{} : {:.2f}'.format(k, updated_metrics[k].result().numpy())) accuracy : 0.50 average_loss : 5.00 preds = head.predictions(logits) print(preds['logits']) tf.Tensor( [[10. 0. 0.] [ 0. 10. 0.]], shape=(2, 3), dtype=float32) Usage with a canned estimator: my_head = tf.estimator.MultiClassHead(n_classes=3) my_estimator = tf.estimator.DNNEstimator( head=my_head, hidden_units=..., feature_columns=...) It can also be used with a custom model_fn. Example: def _my_model_fn(features, labels, mode): my_head = tf.estimator.MultiClassHead(n_classes=3) logits = tf.keras.Model(...)(features) return my_head.create_estimator_spec( features=features, mode=mode, labels=labels, optimizer=tf.keras.optimizers.Adagrad(lr=0.1), logits=logits) my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn) Args n_classes Number of classes, must be greater than 2 (for 2 classes, use BinaryClassHead). weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example. label_vocabulary A list or tuple of strings representing possible label values. If it is not given, that means labels are already encoded as an integer within [0, n_classes). If given, labels must be of string type and have any value in label_vocabulary. Note that errors will be raised if label_vocabulary is not provided but labels are strings. If both n_classes and label_vocabulary are provided, label_vocabulary should contain exactly n_classes items. loss_reduction One of tf.losses.Reduction except NONE. Decides how to reduce training loss over batch. 
Defaults to SUM_OVER_BATCH_SIZE, namely weighted sum of losses divided by batch size * label_dimension. loss_fn Optional loss function. name Name of the head. If provided, summary and metrics keys will be suffixed by "/" + name. Also used as name_scope when creating ops. Attributes logits_dimension See base_head.Head for details. loss_reduction See base_head.Head for details. name See base_head.Head for details. Methods create_estimator_spec View source create_estimator_spec( features, mode, logits, labels=None, optimizer=None, trainable_variables=None, train_op_fn=None, update_ops=None, regularization_losses=None ) Returns EstimatorSpec that a model_fn can return. It is recommended to pass all args via name. Args features Input dict mapping string feature names to Tensor or SparseTensor objects containing the values for that feature in a minibatch. Often to be used to fetch example-weight tensor. mode Estimator's ModeKeys. logits Logits Tensor to be used by the head. labels Labels Tensor, or dict mapping string label names to Tensor objects of the label values. optimizer An tf.keras.optimizers.Optimizer instance to optimize the loss in TRAIN mode. Namely, sets train_op = optimizer.get_updates(loss, trainable_variables), which updates variables to minimize loss. trainable_variables A list or tuple of Variable objects to update to minimize loss. In Tensorflow 1.x, by default these are the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES. As Tensorflow 2.x doesn't have collections and GraphKeys, trainable_variables need to be passed explicitly here. train_op_fn Function that takes a scalar loss Tensor and returns an op to optimize the model with the loss in TRAIN mode. Used if optimizer is None. Exactly one of train_op_fn and optimizer must be set in TRAIN mode. By default, it is None in other modes. 
If you want to optimize loss yourself, you can pass lambda _: tf.no_op() and then use EstimatorSpec.loss to compute and apply gradients. update_ops A list or tuple of update ops to be run at training time. For example, layers such as BatchNormalization create mean and variance update ops that need to be run at training time. In Tensorflow 1.x, these are thrown into an UPDATE_OPS collection. As Tensorflow 2.x doesn't have collections, update_ops need to be passed explicitly here. regularization_losses A list of additional scalar losses to be added to the training loss, such as regularization losses. Returns EstimatorSpec. loss View source loss( labels, logits, features=None, mode=None, regularization_losses=None ) Returns regularized training loss. See base_head.Head for details. metrics View source metrics( regularization_losses=None ) Creates metrics. See base_head.Head for details. predictions View source predictions( logits, keys=None ) Return predictions based on keys. See base_head.Head for details. Args logits logits Tensor with shape [D0, D1, ... DN, logits_dimension]. For many applications, the shape is [batch_size, logits_dimension]. keys a list or tuple of prediction keys. Each key can be either the class variable of prediction_keys.PredictionKeys or its string value, such as: prediction_keys.PredictionKeys.CLASSES or 'classes'. If not specified, it will return the predictions for all valid keys. Returns A dict of predictions. update_metrics View source update_metrics( eval_metrics, features, logits, labels, regularization_losses=None ) Updates eval metrics. See base_head.Head for details.
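The `sparse_softmax_cross_entropy` loss this head uses can be sketched in NumPy. This is a simplified, unweighted version (reduction is `SUM_OVER_BATCH_SIZE`), reproducing the `expected_loss = 5.00` from the usage example above:

```python
import numpy as np

def sparse_softmax_xent(labels, logits):
    """Mean cross-entropy for integer labels of shape [batch, 1]."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels).reshape(-1)
    # Numerically stable log-sum-exp over the class axis.
    m = logits.max(axis=-1, keepdims=True)
    lse = np.log(np.exp(logits - m).sum(axis=-1)) + m[:, 0]
    per_example = lse - logits[np.arange(len(labels)), labels]
    return per_example.mean()

logits = np.array([[10., 0., 0.], [0., 10., 0.]])
labels = np.array([[1], [1]])
print(round(sparse_softmax_xent(labels, logits), 2))  # 5.0
```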
doc_26870
Does nothing here, but is called by the get_version method and can be overridden by subclasses. In particular it is redefined in the FCompiler class where more documentation can be found.
doc_26871
Whether to match the peer cert’s hostname in SSLSocket.do_handshake(). The context’s verify_mode must be set to CERT_OPTIONAL or CERT_REQUIRED, and you must pass server_hostname to wrap_socket() in order to match the hostname. Enabling hostname checking automatically sets verify_mode from CERT_NONE to CERT_REQUIRED. It cannot be set back to CERT_NONE as long as hostname checking is enabled. The PROTOCOL_TLS_CLIENT protocol enables hostname checking by default. With other protocols, hostname checking must be enabled explicitly. Example: import socket, ssl context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2) context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_default_certs() s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) ssl_sock = context.wrap_socket(s, server_hostname='www.verisign.com') ssl_sock.connect(('www.verisign.com', 443)) New in version 3.4. Changed in version 3.7: verify_mode is now automatically changed to CERT_REQUIRED when hostname checking is enabled and verify_mode is CERT_NONE. Previously the same operation would have failed with a ValueError. Note This feature requires OpenSSL 0.9.8f or newer.
doc_26872
See Migration guide for more details. tf.compat.v1.data.experimental.map_and_batch tf.data.experimental.map_and_batch( map_func, batch_size, num_parallel_batches=None, drop_remainder=False, num_parallel_calls=None ) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.data.Dataset.map(map_func, num_parallel_calls) followed by tf.data.Dataset.batch(batch_size, drop_remainder). Static tf.data optimizations will take care of using the fused implementation. Maps map_func across batch_size consecutive elements of this dataset and then combines them into a batch. Functionally, it is equivalent to map followed by batch. This API is temporary and deprecated since input pipeline optimization now fuses consecutive map and batch operations automatically. Args map_func A function mapping a nested structure of tensors to another nested structure of tensors. batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch. num_parallel_batches (Optional.) A tf.int64 scalar tf.Tensor, representing the number of batches to create in parallel. On one hand, higher values can help mitigate the effect of stragglers. On the other hand, higher values can increase contention if CPU is scarce. drop_remainder (Optional.) A tf.bool scalar tf.Tensor, representing whether the last batch should be dropped in case its size is smaller than desired; the default behavior is not to drop the smaller batch. num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process in parallel. If not specified, batch_size * num_parallel_batches elements will be processed in parallel. If the value tf.data.AUTOTUNE is used, then the number of parallel calls is set dynamically based on available CPU. Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. 
Raises ValueError If both num_parallel_batches and num_parallel_calls are specified.
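Ignoring parallelism, the fused transformation behaves like `map` followed by `batch`; a plain-Python generator sketch (hypothetical, not the TensorFlow implementation):

```python
def map_and_batch(iterable, map_func, batch_size, drop_remainder=False):
    batch = []
    for elem in iterable:
        batch.append(map_func(elem))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_remainder:
        yield batch  # partial final batch, kept by default

print(list(map_and_batch(range(5), lambda x: x * 2, batch_size=2)))
# [[0, 2], [4, 6], [8]]
```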
doc_26873
The number of arguments. Data attribute containing the number of arguments the ufunc takes, including optional ones. Notes Typically this value will be one more than what you might expect because all ufuncs take the optional “out” argument. Examples >>> np.add.nargs 3 >>> np.multiply.nargs 3 >>> np.power.nargs 3 >>> np.exp.nargs 2
doc_26874
This class implements the actual IMAP4 protocol. The connection is created and protocol version (IMAP4 or IMAP4rev1) is determined when the instance is initialized. If host is not specified, '' (the local host) is used. If port is omitted, the standard IMAP4 port (143) is used. The optional timeout parameter specifies a timeout in seconds for the connection attempt. If timeout is not given or is None, the global default socket timeout is used. The IMAP4 class supports the with statement. When used like this, the IMAP4 LOGOUT command is issued automatically when the with statement exits. E.g.: >>> from imaplib import IMAP4 >>> with IMAP4("domain.org") as M: ... M.noop() ... ('OK', [b'Nothing Accomplished. d25if65hy903weo.87']) Changed in version 3.5: Support for the with statement was added. Changed in version 3.9: The optional timeout parameter was added.
doc_26875
tf.losses.KLDivergence Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.losses.KLDivergence tf.keras.losses.KLDivergence( reduction=losses_utils.ReductionV2.AUTO, name='kl_divergence' ) loss = y_true * log(y_true / y_pred) See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Standalone usage: y_true = [[0, 1], [0, 0]] y_pred = [[0.6, 0.4], [0.4, 0.6]] # Using 'auto'/'sum_over_batch_size' reduction type. kl = tf.keras.losses.KLDivergence() kl(y_true, y_pred).numpy() 0.458 # Calling with 'sample_weight'. kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy() 0.366 # Using 'sum' reduction type. kl = tf.keras.losses.KLDivergence( reduction=tf.keras.losses.Reduction.SUM) kl(y_true, y_pred).numpy() 0.916 # Using 'none' reduction type. kl = tf.keras.losses.KLDivergence( reduction=tf.keras.losses.Reduction.NONE) kl(y_true, y_pred).numpy() array([0.916, -3.08e-06], dtype=float32) Usage with the compile() API: model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence()) Args reduction (Optional) Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial for more details. name Optional name for the op. Defaults to 'kl_divergence'. Methods from_config View source @classmethod from_config( config ) Instantiates a Loss from its config (output of get_config()). Args config Output of get_config(). Returns A Loss instance. get_config View source get_config() Returns the config dictionary for a Loss instance. __call__ View source __call__( y_true, y_pred, sample_weight=None ) Invokes the Loss instance. Args y_true Ground truth values. shape = [batch_size, d0, .. 
dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] y_pred The predicted values. shape = [batch_size, d0, .. dN] sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcasted to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) Returns Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) Raises ValueError If the shape of sample_weight is invalid.
doc_26876
Return a new array of given shape and type, without initializing entries. Parameters shapeint or tuple of int Shape of the empty array, e.g., (2, 3) or 2. dtypedata-type, optional Desired output data-type for the array, e.g, numpy.int8. Default is numpy.float64. order{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. likearray_like Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns outndarray Array of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None. See also empty_like Return an empty array with shape and type of input. ones Return a new array setting values to one. zeros Return a new array setting values to zero. full Return a new array of given shape filled with value. Notes empty, unlike zeros, does not set the array values to zero, and may therefore be marginally faster. On the other hand, it requires the user to manually set all the values in the array, and should be used with caution. Examples >>> np.empty([2, 2]) array([[ -9.74499359e+001, 6.69583040e-309], [ 2.13182611e-314, 3.06959433e-309]]) #uninitialized >>> np.empty([2, 2], dtype=int) array([[-1073741821, -1067949133], [ 496041986, 19249760]]) #uninitialized
doc_26877
See Migration guide for more details. tf.compat.v1.linalg.matrix_transpose, tf.compat.v1.linalg.transpose, tf.compat.v1.matrix_transpose tf.linalg.matrix_transpose( a, name='matrix_transpose', conjugate=False ) For example: x = tf.constant([[1, 2, 3], [4, 5, 6]]) tf.linalg.matrix_transpose(x) # [[1, 4], # [2, 5], # [3, 6]] x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j], [4 + 4j, 5 + 5j, 6 + 6j]]) tf.linalg.matrix_transpose(x, conjugate=True) # [[1 - 1j, 4 - 4j], # [2 - 2j, 5 - 5j], # [3 - 3j, 6 - 6j]] # Matrix with two batch dimensions. # x.shape is [1, 2, 3, 4] # tf.linalg.matrix_transpose(x) is shape [1, 2, 4, 3] Note that tf.matmul provides kwargs allowing for transpose of arguments. This is done with minimal cost, and is preferable to using this function. E.g. # Good! Transpose is taken at minimal additional cost. tf.matmul(matrix, b, transpose_b=True) # Inefficient! tf.matmul(matrix, tf.linalg.matrix_transpose(b)) Args a A Tensor with rank >= 2. name A name for the operation (optional). conjugate Optional bool. Setting it to True is mathematically equivalent to tf.math.conj(tf.linalg.matrix_transpose(input)). Returns A transposed batch matrix Tensor. Raises ValueError If a is determined statically to have rank < 2. Numpy Compatibility In numpy transposes are memory-efficient constant time operations as they simply return a new view of the same data with adjusted strides. TensorFlow does not support strides, linalg.matrix_transpose returns a new tensor with the items permuted.
doc_26878
HTTP protocol version used by server. 10 for HTTP/1.0, 11 for HTTP/1.1.
doc_26879
uninitialize the midi module quit() -> None Uninitializes the pygame.midi module. If pygame.midi.init() was called to initialize the pygame.midi module, then this function will be called automatically when your program exits. It is safe to call this function more than once.
doc_26880
Convert a DataFrame with sparse values to dense. New in version 0.25.0. Returns DataFrame A DataFrame with the same values stored as dense arrays. Examples >>> df = pd.DataFrame({"A": pd.arrays.SparseArray([0, 1, 0])}) >>> df.sparse.to_dense() A 0 0 1 1 2 0
doc_26881
Add text to the Axes. Add the text s to the Axes at location x, y in data coordinates. Parameters x, yfloat The position to place the text. By default, this is in data coordinates. The coordinate system can be changed using the transform parameter. sstr The text. fontdictdict, default: None A dictionary to override the default text properties. If fontdict is None, the defaults are determined by rcParams. Returns Text The created Text instance. Other Parameters **kwargsText properties. Other miscellaneous text parameters. Property Description agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array alpha scalar or None animated bool backgroundcolor color bbox dict with properties for patches.FancyBboxPatch clip_box unknown clip_on unknown clip_path unknown color or c color figure Figure fontfamily or family {FONTNAME, 'serif', 'sans-serif', 'cursive', 'fantasy', 'monospace'} fontproperties or font or font_properties font_manager.FontProperties or str or pathlib.Path fontsize or size float or {'xx-small', 'x-small', 'small', 'medium', 'large', 'x-large', 'xx-large'} fontstretch or stretch {a numeric value in range 0-1000, 'ultra-condensed', 'extra-condensed', 'condensed', 'semi-condensed', 'normal', 'semi-expanded', 'expanded', 'extra-expanded', 'ultra-expanded'} fontstyle or style {'normal', 'italic', 'oblique'} fontvariant or variant {'normal', 'small-caps'} fontweight or weight {a numeric value in range 0-1000, 'ultralight', 'light', 'normal', 'regular', 'book', 'medium', 'roman', 'semibold', 'demibold', 'demi', 'bold', 'heavy', 'extra bold', 'black'} gid str horizontalalignment or ha {'center', 'right', 'left'} in_layout bool label object linespacing float (multiple of font size) math_fontfamily str multialignment or ma {'left', 'right', 'center'} parse_math bool path_effects AbstractPathEffect picker None or bool or float or callable position (float, float) rasterized bool rotation float or {'vertical', 
'horizontal'} rotation_mode {None, 'default', 'anchor'} sketch_params (scale: float, length: float, randomness: float) snap bool or None text object transform Transform transform_rotates_text bool url str usetex bool or None verticalalignment or va {'center', 'top', 'bottom', 'baseline', 'center_baseline'} visible bool wrap bool x float y float zorder float Examples Individual keyword arguments can be used to override any given parameter: >>> text(x, y, s, fontsize=12) The default transform specifies that text is in data coords, alternatively, you can specify text in axis coords ((0, 0) is lower-left and (1, 1) is upper-right). The example below places text in the center of the Axes: >>> text(0.5, 0.5, 'matplotlib', horizontalalignment='center', ... verticalalignment='center', transform=ax.transAxes) You can put a rectangular box around the text instance (e.g., to set a background color) by using the keyword bbox. bbox is a dictionary of Rectangle properties. For example: >>> text(x, y, s, bbox=dict(facecolor='red', alpha=0.5)) Examples using matplotlib.pyplot.text Figure size in different units Auto-wrapping text Styling text boxes Controlling style of text and labels using a dictionary Pyplot Mathtext Pyplot Text Reference for Matplotlib artists Close Event transforms.offset_copy Pyplot tutorial Path effects guide Text properties and layout Annotations
doc_26882
Sets the seed for generating random numbers for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters seed (int) – The desired seed. Warning If you are working with a multi-GPU model, this function is insufficient to get determinism. To seed all GPUs, use manual_seed_all().
doc_26883
import matplotlib.pyplot as plt x = np.arange(0, 5, 0.1) y = np.sin(x) plt.plot(x, y) The explicit (object-oriented) API is recommended for complex plots, though pyplot is still usually used to create the figure and often the axes in the figure. See pyplot.figure, pyplot.subplots, and pyplot.subplot_mosaic to create figures, and Axes API for the plotting methods on an axes: import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 5, 0.1) y = np.sin(x) fig, ax = plt.subplots() ax.plot(x, y) Functions acorr(x, *[, data]) Plot the autocorrelation of x. angle_spectrum(x[, Fs, Fc, window, pad_to, ...]) Plot the angle spectrum. annotate(text, xy, *args, **kwargs) Annotate the point xy with text text. arrow(x, y, dx, dy, **kwargs) Add an arrow to the Axes. autoscale([enable, axis, tight]) Autoscale the axis view to the data (toggle). autumn() Set the colormap to 'autumn'. axes([arg]) Add an axes to the current figure and make it the current axes. axhline([y, xmin, xmax]) Add a horizontal line across the axis. axhspan(ymin, ymax[, xmin, xmax]) Add a horizontal span (rectangle) across the Axes. axis(*args[, emit]) Convenience method to get or set some axis properties. axline(xy1[, xy2, slope]) Add an infinitely long straight line. axvline([x, ymin, ymax]) Add a vertical line across the Axes. axvspan(xmin, xmax[, ymin, ymax]) Add a vertical span (rectangle) across the Axes. bar(x, height[, width, bottom, align, data]) Make a bar plot. bar_label(container[, labels, fmt, ...]) Label a bar plot. barbs(*args[, data]) Plot a 2D field of barbs. barh(y, width[, height, left, align]) Make a horizontal bar plot. bone() Set the colormap to 'bone'. box([on]) Turn the axes box on or off on the current axes. boxplot(x[, notch, sym, vert, whis, ...]) Draw a box and whisker plot. broken_barh(xranges, yrange, *[, data]) Plot a horizontal sequence of rectangles. cla() Clear the current axes. clabel(CS[, levels]) Label a contour plot. clf() Clear the current figure. 
clim([vmin, vmax]) Set the color limits of the current image. close([fig]) Close a figure window. cohere(x, y[, NFFT, Fs, Fc, detrend, ...]) Plot the coherence between x and y. colorbar([mappable, cax, ax]) Add a colorbar to a plot. connect(s, func) Bind function func to event s. contour(*args[, data]) Plot contour lines. contourf(*args[, data]) Plot filled contours. cool() Set the colormap to 'cool'. copper() Set the colormap to 'copper'. csd(x, y[, NFFT, Fs, Fc, detrend, window, ...]) Plot the cross-spectral density. delaxes([ax]) Remove an Axes (defaulting to the current axes) from its figure. disconnect(cid) Disconnect the callback with id cid. draw() Redraw the current figure. draw_if_interactive() Redraw the current figure if in interactive mode. errorbar(x, y[, yerr, xerr, fmt, ecolor, ...]) Plot y versus x as lines and/or markers with attached errorbars. eventplot(positions[, orientation, ...]) Plot identical parallel lines at the given positions. figimage(X[, xo, yo, alpha, norm, cmap, ...]) Add a non-resampled image to the figure. figlegend(*args, **kwargs) Place a legend on the figure. fignum_exists(num) Return whether the figure with the given id exists. figtext(x, y, s[, fontdict]) Add text to figure. figure([num, figsize, dpi, facecolor, ...]) Create a new figure, or activate an existing figure. fill(*args[, data]) Plot filled polygons. fill_between(x, y1[, y2, where, ...]) Fill the area between two horizontal curves. fill_betweenx(y, x1[, x2, where, step, ...]) Fill the area between two vertical curves. findobj([o, match, include_self]) Find artist objects. flag() Set the colormap to 'flag'. gca(**kwargs) Get the current Axes. gcf() Get the current figure. gci() Get the current colorable artist. get(obj, *args, **kwargs) Return the value of an Artist's property, or print all of them. get_current_fig_manager() Return the figure manager of the current figure. get_figlabels() Return a list of existing figure labels. 
get_fignums() Return a list of existing figure numbers. get_plot_commands() Get a sorted list of all of the plotting commands. getp(obj, *args, **kwargs) Return the value of an Artist's property, or print all of them. ginput([n, timeout, show_clicks, mouse_add, ...]) Blocking call to interact with a figure. gray() Set the colormap to 'gray'. grid([visible, which, axis]) Configure the grid lines. hexbin(x, y[, C, gridsize, bins, xscale, ...]) Make a 2D hexagonal binning plot of points x, y. hist(x[, bins, range, density, weights, ...]) Plot a histogram. hist2d(x, y[, bins, range, density, ...]) Make a 2D histogram plot. hlines(y, xmin, xmax[, colors, linestyles, ...]) Plot horizontal lines at each y from xmin to xmax. hot() Set the colormap to 'hot'. hsv() Set the colormap to 'hsv'. imread(fname[, format]) Read an image from a file into an array. imsave(fname, arr, **kwargs) Save an array as an image file. imshow(X[, cmap, norm, aspect, ...]) Display data as an image, i.e., on a 2D regular raster. inferno() Set the colormap to 'inferno'. install_repl_displayhook() Install a repl display hook so that any stale figure are automatically redrawn when control is returned to the repl. ioff() Disable interactive mode. ion() Enable interactive mode. isinteractive() Return whether plots are updated after every plotting command. jet() Set the colormap to 'jet'. legend(*args, **kwargs) Place a legend on the Axes. locator_params([axis, tight]) Control behavior of major tick locators. loglog(*args, **kwargs) Make a plot with log scaling on both the x and y axis. magma() Set the colormap to 'magma'. magnitude_spectrum(x[, Fs, Fc, window, ...]) Plot the magnitude spectrum. margins(*margins[, x, y, tight]) Set or retrieve autoscaling margins. matshow(A[, fignum]) Display an array as a matrix in a new figure window. minorticks_off() Remove minor ticks from the Axes. minorticks_on() Display minor ticks on the Axes. 
new_figure_manager(num, *args, **kwargs): Create a new figure manager instance.
nipy_spectral(): Set the colormap to 'nipy_spectral'.
pause(interval): Run the GUI event loop for interval seconds.
pcolor(*args[, shading, alpha, norm, cmap, ...]): Create a pseudocolor plot with a non-regular rectangular grid.
pcolormesh(*args[, alpha, norm, cmap, vmin, ...]): Create a pseudocolor plot with a non-regular rectangular grid.
phase_spectrum(x[, Fs, Fc, window, pad_to, ...]): Plot the phase spectrum.
pie(x[, explode, labels, colors, autopct, ...]): Plot a pie chart.
pink(): Set the colormap to 'pink'.
plasma(): Set the colormap to 'plasma'.
plot(*args[, scalex, scaley, data]): Plot y versus x as lines and/or markers.
plot_date(x, y[, fmt, tz, xdate, ydate, data]): Plot coercing the axis to treat floats as dates.
polar(*args, **kwargs): Make a polar plot.
prism(): Set the colormap to 'prism'.
psd(x[, NFFT, Fs, Fc, detrend, window, ...]): Plot the power spectral density.
quiver(*args[, data]): Plot a 2D field of arrows.
quiverkey(Q, X, Y, U, label, **kwargs): Add a key to a quiver plot.
rc(group, **kwargs): Set the current rcParams. group is the grouping for the rc, e.g., for lines.linewidth the group is lines; for axes.facecolor, the group is axes; and so on. group may also be a list or tuple of group names, e.g., (xtick, ytick). kwargs is a dictionary of attribute name/value pairs.
rc_context([rc, fname]): Return a context manager for temporarily changing rcParams.
rcdefaults(): Restore the rcParams from Matplotlib's internal default style.
rgrids([radii, labels, angle, fmt]): Get or set the radial gridlines on the current polar plot.
savefig(*args, **kwargs): Save the current figure.
sca(ax): Set the current Axes to ax and the current Figure to the parent of ax.
scatter(x, y[, s, c, marker, cmap, norm, ...]): A scatter plot of y vs. x.
sci(im): Set the current image.
semilogx(*args, **kwargs): Make a plot with log scaling on the x axis.
semilogy(*args, **kwargs): Make a plot with log scaling on the y axis.
set_cmap(cmap): Set the default colormap, and apply it to the current image if any.
set_loglevel(*args, **kwargs): Set Matplotlib's root logger and root logger handler level, creating the handler if it does not exist yet.
setp(obj, *args, **kwargs): Set one or more properties on an Artist, or list allowed values.
show(*[, block]): Display all open figures.
specgram(x[, NFFT, Fs, Fc, detrend, window, ...]): Plot a spectrogram.
spring(): Set the colormap to 'spring'.
spy(Z[, precision, marker, markersize, ...]): Plot the sparsity pattern of a 2D array.
stackplot(x, *args[, labels, colors, ...]): Draw a stacked area plot.
stairs(values[, edges, orientation, ...]): A stepwise constant function as a line with bounding edges or a filled plot.
stem(*args[, linefmt, markerfmt, basefmt, ...]): Create a stem plot.
step(x, y, *args[, where, data]): Make a step plot.
streamplot(x, y, u, v[, density, linewidth, ...]): Draw streamlines of a vector flow.
subplot(*args, **kwargs): Add an Axes to the current figure or retrieve an existing Axes.
subplot2grid(shape, loc[, rowspan, colspan, fig]): Create a subplot at a specific location inside a regular grid.
subplot_mosaic(mosaic, *[, sharex, sharey, ...]): Build a layout of Axes based on ASCII art or nested lists.
subplot_tool([targetfig]): Launch a subplot tool window for a figure.
subplots([nrows, ncols, sharex, sharey, ...]): Create a figure and a set of subplots.
subplots_adjust([left, bottom, right, top, ...]): Adjust the subplot layout parameters.
summer(): Set the colormap to 'summer'.
suptitle(t, **kwargs): Add a centered suptitle to the figure.
switch_backend(newbackend): Close all open figures and set the Matplotlib backend.
table([cellText, cellColours, cellLoc, ...]): Add a table to an Axes.
text(x, y, s[, fontdict]): Add text to the Axes.
thetagrids([angles, labels, fmt]): Get or set the theta gridlines on the current polar plot.
tick_params([axis]): Change the appearance of ticks, tick labels, and gridlines.
ticklabel_format(*[, axis, style, ...]): Configure the ScalarFormatter used by default for linear axes.
tight_layout(*[, pad, h_pad, w_pad, rect]): Adjust the padding between and around subplots.
title(label[, fontdict, loc, pad, y]): Set a title for the Axes.
tricontour(*args, **kwargs): Draw contour lines on an unstructured triangular grid.
tricontourf(*args, **kwargs): Draw contour regions on an unstructured triangular grid.
tripcolor(*args[, alpha, norm, cmap, vmin, ...]): Create a pseudocolor plot of an unstructured triangular grid.
triplot(*args, **kwargs): Draw an unstructured triangular grid as lines and/or markers.
twinx([ax]): Make and return a second axes that shares the x-axis.
twiny([ax]): Make and return a second axes that shares the y-axis.
uninstall_repl_displayhook(): Uninstall the Matplotlib display hook.
violinplot(dataset[, positions, vert, ...]): Make a violin plot.
viridis(): Set the colormap to 'viridis'.
vlines(x, ymin, ymax[, colors, linestyles, ...]): Plot vertical lines at each x from ymin to ymax.
waitforbuttonpress([timeout]): Blocking call to interact with the figure.
winter(): Set the colormap to 'winter'.
xcorr(x, y[, normed, detrend, usevlines, ...]): Plot the cross correlation between x and y.
xkcd([scale, length, randomness]): Turn on xkcd sketch-style drawing mode.
xlabel(xlabel[, fontdict, labelpad, loc]): Set the label for the x-axis.
xlim(*args, **kwargs): Get or set the x limits of the current axes.
xscale(value, **kwargs): Set the x-axis scale.
xticks([ticks, labels]): Get or set the current tick locations and labels of the x-axis.
ylabel(ylabel[, fontdict, labelpad, loc]): Set the label for the y-axis.
ylim(*args, **kwargs): Get or set the y-limits of the current axes.
yscale(value, **kwargs): Set the y-axis scale.
yticks([ticks, labels]): Get or set the current tick locations and labels of the y-axis.
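A minimal sketch of how a handful of these functions combine in practice (assuming Matplotlib is installed; the non-interactive Agg backend is selected so no display is required, and the output filename sketch.png is arbitrary):

```python
import os

import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no window needed
import matplotlib.pyplot as plt

# figure() creates a figure; plot(), xlabel(), title(), grid()
# all act on the current Axes; savefig() writes the current figure.
fig = plt.figure(figsize=(4, 3))
plt.plot([0, 1, 2], [0, 1, 4], marker="o")
plt.xlabel("x")
plt.ylabel("y")
plt.title("pyplot sketch")
plt.grid(True)
plt.savefig("sketch.png")  # instead of plt.show()
plt.close(fig)

saved = os.path.exists("sketch.png")
```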
doc_26884
Solves a dictionary learning matrix factorization problem.

Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:

    (U^*, V^*) = argmin_{U, V} 0.5 * ||X - U V||_2^2 + alpha * ||U||_1
    subject to ||V_k||_2 = 1 for all 0 <= k < n_components

where V is the dictionary and U is the sparse code. Read more in the User Guide.

Parameters
X : ndarray of shape (n_samples, n_features)
    Data matrix.
n_components : int
    Number of dictionary atoms to extract.
alpha : int
    Sparsity controlling parameter.
max_iter : int, default=100
    Maximum number of iterations to perform.
tol : float, default=1e-8
    Tolerance for the stopping condition.
method : {'lars', 'cd'}, default='lars'
    The method used: 'lars' uses the least angle regression method to solve the lasso problem (linear_model.lars_path); 'cd' uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobs : int, default=None
    Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
dict_init : ndarray of shape (n_components, n_features), default=None
    Initial value for the dictionary for warm restart scenarios.
code_init : ndarray of shape (n_samples, n_components), default=None
    Initial value for the sparse code for warm restart scenarios.
callback : callable, default=None
    Callable that gets invoked every five iterations.
verbose : bool, default=False
    To control the verbosity of the procedure.
random_state : int, RandomState instance or None, default=None
    Used for randomly initializing the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
return_n_iter : bool, default=False
    Whether or not to return the number of iterations.
positive_dict : bool, default=False
    Whether to enforce positivity when finding the dictionary. New in version 0.20.
positive_code : bool, default=False
    Whether to enforce positivity when finding the code. New in version 0.20.
method_max_iter : int, default=1000
    Maximum number of iterations to perform. New in version 0.22.

Returns
code : ndarray of shape (n_samples, n_components)
    The sparse code factor in the matrix factorization.
dictionary : ndarray of shape (n_components, n_features)
    The dictionary factor in the matrix factorization.
errors : array
    Vector of errors at each iteration.
n_iter : int
    Number of iterations run. Returned only if return_n_iter is set to True.

See also
dict_learning_online, DictionaryLearning, MiniBatchDictionaryLearning, SparsePCA, MiniBatchSparsePCA
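The objective above can be evaluated directly with NumPy. The sketch below is only an illustration of the quantity being minimized, not scikit-learn's solver; the function name and the random U, V are invented for the example:

```python
import numpy as np

def dict_learning_objective(X, U, V, alpha):
    """Evaluate 0.5 * ||X - U V||_F^2 + alpha * ||U||_1 (illustrative only)."""
    residual = X - U @ V
    return 0.5 * np.sum(residual ** 2) + alpha * np.sum(np.abs(U))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))   # data matrix, (n_samples, n_features)
V = rng.normal(size=(2, 4))   # dictionary, (n_components, n_features)
V /= np.linalg.norm(V, axis=1, keepdims=True)  # enforce ||V_k||_2 = 1
U = rng.normal(size=(6, 2))   # sparse code, (n_samples, n_components)

obj = dict_learning_objective(X, U, V, alpha=1.0)
```

dict_learning searches over U and V for the pair that makes this value smallest, subject to the unit-norm constraint on each dictionary atom.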
doc_26885
Check a password against a given salted and hashed password value. In order to support unsalted legacy passwords this method supports plain text passwords, md5 and sha1 hashes (both salted and unsalted). Returns True if the password matched, False otherwise.

Parameters
pwhash (str) – a hashed string as returned by generate_password_hash().
password (str) – the plaintext password to compare against the hash.

Return type
bool
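The general salted-hash pattern can be sketched with the standard library alone. This is a simplified illustration, not Werkzeug's actual hash format or implementation; the `sha256$salt$digest` layout and both function names are invented for the example:

```python
import hashlib
import hmac
import os

def make_hash(password: str) -> str:
    """Hash a password with a random salt (illustrative format)."""
    salt = os.urandom(8).hex()
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return f"sha256${salt}${digest}"

def check_hash(pwhash: str, password: str) -> bool:
    """Re-hash the candidate with the stored salt and compare."""
    _method, salt, digest = pwhash.split("$")
    candidate = hashlib.sha256((salt + password).encode()).hexdigest()
    # compare_digest avoids timing side channels, as a real
    # implementation would.
    return hmac.compare_digest(candidate, digest)

h = make_hash("secret")
```

The key idea, which check_password_hash shares, is that the salt is stored alongside the digest so the candidate password can be hashed the same way before comparison.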
doc_26886
Raise all references of o to the top of the stack, and return o.

Raises
ValueError
    If o is not in the stack.
doc_26887
Return a boolean array where the index values are in values.

Compute a boolean array of whether each index value is found in the passed set of values. The length of the returned boolean array matches the length of the index.

Parameters
values : set or list-like
    Sought values.
level : str or int, optional
    Name or position of the index level to use (if the index is a MultiIndex).

Returns
np.ndarray[bool]
    NumPy array of boolean values.

See also
Series.isin : Same for Series.
DataFrame.isin : Same method for DataFrames.

Notes
In the case of MultiIndex you must either specify values as a list-like object containing tuples that are the same length as the number of levels, or specify level. Otherwise it will raise a ValueError.

If level is specified: if it is the name of one and only one index level, use that level; otherwise it should be a number indicating level position.

Examples

>>> idx = pd.Index([1, 2, 3])
>>> idx
Int64Index([1, 2, 3], dtype='int64')

Check whether each index value is in a list of values.

>>> idx.isin([1, 4])
array([ True, False, False])

>>> midx = pd.MultiIndex.from_arrays([[1, 2, 3],
...                                   ['red', 'blue', 'green']],
...                                  names=('number', 'color'))
>>> midx
MultiIndex([(1, 'red'), (2, 'blue'), (3, 'green')], names=['number', 'color'])

Check whether the strings in the 'color' level of the MultiIndex are in a list of colors.

>>> midx.isin(['red', 'orange', 'yellow'], level='color')
array([ True, False, False])

To check across the levels of a MultiIndex, pass a list of tuples:

>>> midx.isin([(1, 'red'), (3, 'red')])
array([ True, False, False])

For a DatetimeIndex, string values in values are converted to Timestamps.

>>> dates = ['2000-03-11', '2000-03-12', '2000-03-13']
>>> dti = pd.to_datetime(dates)
>>> dti
DatetimeIndex(['2000-03-11', '2000-03-12', '2000-03-13'], dtype='datetime64[ns]', freq=None)
>>> dti.isin(['2000-03-11'])
array([ True, False, False])
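The element-wise membership semantics of Index.isin can be sketched with NumPy's np.isin alone (assuming NumPy is available); the arrays here mirror the first example above:

```python
import numpy as np

values = np.array([1, 2, 3])
# np.isin tests each element of `values` for membership in the second
# argument, returning a boolean array of the same length, just as
# Index.isin does for each index value.
mask = np.isin(values, [1, 4])
```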
doc_26888
Number of microseconds (>= 0 and less than 1 second).
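For instance, datetime.timedelta normalizes its components so that the microseconds attribute always falls in this range:

```python
from datetime import timedelta

# 1,500,000 microseconds is normalized to 1 second + 500,000 microseconds,
# keeping microseconds >= 0 and below one second.
d = timedelta(microseconds=1_500_000)
```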
doc_26889
The absolute path to the directory whose contents you want listed. This directory must exist.
doc_26890
Returns whether PyTorch’s CUDA state has been initialized.
doc_26891
Bind the socket to address. The socket must not already be bound. (The format of address depends on the address family — see above.) Raises an auditing event socket.bind with arguments self, address.
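A short sketch for the AF_INET case, where the address is a (host, port) tuple; port 0 asks the OS to choose a free port:

```python
import socket

# Create an unbound TCP socket and bind it to the loopback interface.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick an ephemeral port

# getsockname() reveals the address actually bound.
host, port = sock.getsockname()
sock.close()
```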
doc_26892
Associates level level with text levelName in an internal dictionary, which is used to map numeric levels to a textual representation, for example when a Formatter formats a message. This function can also be used to define your own levels. The only constraints are that all levels used must be registered using this function, levels should be positive integers and they should increase in increasing order of severity. Note If you are thinking of defining your own levels, please see the section on Custom Levels.
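A brief sketch of registering a custom level; the numeric value 25 (between INFO at 20 and WARNING at 30) and the name "NOTICEABLE" are arbitrary choices for illustration:

```python
import logging

# Register a textual name for numeric level 25.
logging.addLevelName(25, "NOTICEABLE")

# getLevelName() now maps the number back to the registered text
# (without registration it would return the string "Level 25").
name = logging.getLevelName(25)
```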
doc_26893
Sparse representation of the fitted coef_.
doc_26894
Windows only: return the filename of the VC runtime library used by Python, and by the extension modules. If the name of the library cannot be determined, None is returned. If you need to free memory that was, for example, allocated by an extension module with a call to free(void *), it is important that you use the free function from the same library that allocated the memory.
doc_26895
Plot 2D or 3D data. Parameters xs1D array-like x coordinates of vertices. ys1D array-like y coordinates of vertices. zsfloat or 1D array-like z coordinates of vertices; either one for all points or one for each point. zdir{'x', 'y', 'z'}, default: 'z' When plotting 2D data, the direction to use as z ('x', 'y' or 'z'). **kwargs Other arguments are forwarded to matplotlib.axes.Axes.plot.
doc_26896
Alias for set_facecolor.
doc_26897
A wrapper around Python’s assert which is symbolically traceable.
doc_26898
Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly.
doc_26899
Set the path effects. Parameters path_effectsAbstractPathEffect