doc_30300
Returns a value equal to x (rounded), having the exponent of y.
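This reads like the docstring of decimal.Decimal.quantize from Python's standard library; assuming that is the API in question, a minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

# Round 2.675 to the exponent of 0.01, i.e. two decimal places.
# Decimal represents 2.675 exactly, so ROUND_HALF_UP gives 2.68
# (a plain float would round to 2.67 due to binary representation).
x = Decimal("2.675")
y = Decimal("0.01")
result = x.quantize(y, rounding=ROUND_HALF_UP)
print(result)  # 2.68
```

The result carries the exponent of y: result.as_tuple().exponent is -2.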
doc_30301
Set the url for the artist. Parameters url : str
doc_30302
Set multiple properties at once. Supported properties are:
adjustable : {'box', 'datalim'}
agg_filter : a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha : scalar or None
anchor : (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
animated : bool
aspect : {'auto', 'equal'} or float
autoscale_on : bool
autoscalex_on : bool
autoscaley_on : bool
axes_locator : Callable[[Axes, Renderer], Bbox]
axisbelow : bool or 'line'
box_aspect : float or None
clip_box : Bbox
clip_on : bool
clip_path : Patch or (Path, Transform) or None
facecolor or fc : color
figure : Figure
frame_on : bool
gid : str
in_layout : bool
label : object
navigate : bool
navigate_mode : unknown
path_effects : AbstractPathEffect
picker : None or bool or float or callable
position : [left, bottom, width, height] or Bbox
prop_cycle : unknown
rasterization_zorder : float or None
rasterized : bool
sketch_params : (scale: float, length: float, randomness: float)
snap : bool or None
title : str
transform : Transform
url : str
visible : bool
xbound : unknown
xlabel : str
xlim : (bottom: float, top: float)
xmargin : float greater than -0.5
xscale : {"linear", "log", "symlog", "logit", ...} or ScaleBase
xticklabels : unknown
xticks : unknown
ybound : unknown
ylabel : str
ylim : (bottom: float, top: float)
ymargin : float greater than -0.5
yscale : {"linear", "log", "symlog", "logit", ...} or ScaleBase
yticklabels : unknown
yticks : unknown
zorder : float
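The batch setter described above can be exercised as follows; a minimal sketch assuming a standard matplotlib install (the Agg backend and the property values are illustrative choices):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Set several Axes properties in one call instead of individual set_* calls.
ax.set(xlabel="time [s]", ylabel="amplitude", xlim=(0, 10), title="demo")

assert ax.get_xlabel() == "time [s]"
assert ax.get_xlim() == (0.0, 10.0)
assert ax.get_title() == "demo"
```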
doc_30303
Is True if the Tensor is quantized, False otherwise.
doc_30304
RGB to YCbCr color space conversion.
Parameters
rgb : (…, 3) array_like
The image in RGB format. Final dimension denotes channels.
Returns
out : (…, 3) ndarray
The image in YCbCr format. Same dimensions as input.
Raises
ValueError
If rgb is not at least 2-D with shape (…, 3).
Notes
Y is between 16 and 235. This is the color space commonly used by video codecs; it is sometimes incorrectly called “YUV”.
References
1. https://en.wikipedia.org/wiki/YCbCr
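The Notes above describe the studio-swing (ITU-R BT.601) range; a dependency-free sketch of the per-pixel arithmetic, using the commonly published BT.601 coefficients (an assumption about this library's exact matrix; inputs are taken in [0, 1]):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel (each channel in [0, 1]) to YCbCr (BT.601)."""
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return y, cb, cr

# White maps to the top of the studio-swing luma range (Y ~ 235),
# with neutral chroma (Cb = Cr = 128).
print(rgb_to_ycbcr(1.0, 1.0, 1.0))  # approximately (235.0, 128.0, 128.0)
```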
doc_30305
Return the Figure instance the artist belongs to.
doc_30306
Create a new ArgumentParser object. All parameters should be passed as keyword arguments. Each parameter has its own more detailed description below, but in short they are: prog - The name of the program (default: sys.argv[0]) usage - The string describing the program usage (default: generated from arguments added to parser) description - Text to display before the argument help (default: none) epilog - Text to display after the argument help (default: none) parents - A list of ArgumentParser objects whose arguments should also be included formatter_class - A class for customizing the help output prefix_chars - The set of characters that prefix optional arguments (default: ‘-‘) fromfile_prefix_chars - The set of characters that prefix files from which additional arguments should be read (default: None) argument_default - The global default value for arguments (default: None) conflict_handler - The strategy for resolving conflicting optionals (usually unnecessary) add_help - Add a -h/--help option to the parser (default: True) allow_abbrev - Allows long options to be abbreviated if the abbreviation is unambiguous. (default: True) exit_on_error - Determines whether or not ArgumentParser exits with error info when an error occurs. (default: True) Changed in version 3.5: allow_abbrev parameter was added. Changed in version 3.8: In previous versions, allow_abbrev also disabled grouping of short flags such as -vv to mean -v -v. Changed in version 3.9: exit_on_error parameter was added.
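A minimal usage sketch of the parameters described above (the prog name, argument names, and defaults here are illustrative):

```python
import argparse

# prog overrides the default program name taken from sys.argv[0].
parser = argparse.ArgumentParser(prog="demo", description="Sum some integers.")
parser.add_argument("integers", type=int, nargs="+", help="integers to sum")
parser.add_argument("--scale", type=float, default=1.0, help="multiply the sum")

args = parser.parse_args(["1", "2", "3", "--scale", "2"])
print(args.scale * sum(args.integers))  # 12.0
```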
doc_30307
Reads size bytes from the remote server. You may override this method.
doc_30308
Return the values (min, max) that are mapped to the colormap limits.
doc_30309
linecache.getline(filename, lineno, module_globals=None) Get line lineno from file named filename. This function will never raise an exception — it will return '' on errors (the terminating newline character will be included for lines that are found). If a file named filename is not found, the function first checks for a PEP 302 __loader__ in module_globals. If there is such a loader and it defines a get_source method, then that determines the source lines (if get_source() returns None, then '' is returned). Finally, if filename is a relative filename, it is looked up relative to the entries in the module search path, sys.path. linecache.clearcache() Clear the cache. Use this function if you no longer need lines from files previously read using getline(). linecache.checkcache(filename=None) Check the cache for validity. Use this function if files in the cache may have changed on disk, and you require the updated version. If filename is omitted, it will check all the entries in the cache. linecache.lazycache(filename, module_globals) Capture enough detail about a non-file-based module to permit getting its lines later via getline() even if module_globals is None in the later call. This avoids doing I/O until a line is actually needed, without having to carry the module globals around indefinitely. New in version 3.5. Example: >>> import linecache >>> linecache.getline(linecache.__file__, 8) 'import sys\n'
doc_30310
Bases: matplotlib.collections.LineCollection
Parameters
which : {"major", "minor"}
axis : {"both", "x", "y"}
draw(renderer)[source]
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False).
Parameters
renderer : RendererBase subclass.
Notes
This method is overridden in the Artist subclasses.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, antialiased=<UNSET>, array=<UNSET>, axis=<UNSET>, capstyle=<UNSET>, clim=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, cmap=<UNSET>, color=<UNSET>, colors=<UNSET>, edgecolor=<UNSET>, facecolor=<UNSET>, gid=<UNSET>, grid_helper=<UNSET>, hatch=<UNSET>, in_layout=<UNSET>, joinstyle=<UNSET>, label=<UNSET>, linestyle=<UNSET>, linewidth=<UNSET>, norm=<UNSET>, offset_transform=<UNSET>, offsets=<UNSET>, path_effects=<UNSET>, paths=<UNSET>, picker=<UNSET>, pickradius=<UNSET>, rasterized=<UNSET>, segments=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, urls=<UNSET>, verts=<UNSET>, visible=<UNSET>, which=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once.
Supported properties are:
agg_filter : a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha : array-like or scalar or None
animated : bool
antialiased or aa or antialiaseds : bool or list of bools
array : array-like or None
axis : unknown
capstyle : CapStyle or {'butt', 'projecting', 'round'}
clim : (vmin: float, vmax: float)
clip_box : Bbox
clip_on : bool
clip_path : Patch or (Path, Transform) or None
cmap : Colormap or str or None
color : color or list of colors
colors : color or list of colors
edgecolor or ec or edgecolors : color or list of colors or 'face'
facecolor or facecolors or fc : color or list of colors
figure : Figure
gid : str
grid_helper : unknown
hatch : {'/', '\', '|', '-', '+', 'x', 'o', 'O', '.', '*'}
in_layout : bool
joinstyle : JoinStyle or {'miter', 'round', 'bevel'}
label : object
linestyle or dashes or linestyles or ls : str or tuple or list thereof
linewidth or linewidths or lw : float or list of floats
norm : Normalize or None
offset_transform : Transform
offsets : (N, 2) or (2,) array-like
path_effects : AbstractPathEffect
paths : unknown
picker : None or bool or float or callable
pickradius : float
rasterized : bool
segments : unknown
sketch_params : (scale: float, length: float, randomness: float)
snap : bool or None
transform : Transform
url : str
urls : list of str or None
verts : unknown
visible : bool
which : unknown
zorder : float
set_axis(axis)[source]
set_grid_helper(grid_helper)[source]
set_which(which)[source]
doc_30311
Backward fill the values.
Parameters
limit : int, optional
Limit of how many values to fill.
Returns
Series or DataFrame
Object with missing values filled.
See also
Series.bfill : Backward fill the missing values in the dataset.
DataFrame.bfill : Backward fill the missing values in the dataset.
Series.fillna : Fill NaN values of a Series.
DataFrame.fillna : Fill NaN values of a DataFrame.
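A short sketch of backward filling with a limit, assuming a standard pandas install (the data is illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 2.0, np.nan, np.nan, 5.0])
# Each missing value takes the next valid observation; limit caps how many
# consecutive NaNs are filled per gap, counted backward from the valid value.
filled = s.bfill(limit=1)
print(filled.tolist())  # [2.0, 2.0, nan, 5.0, 5.0]
```

With limit=1 the two-NaN gap is only partially filled: the NaN adjacent to 5.0 is filled, the earlier one is left as-is.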
doc_30312
Plot the autocorrelation of x.
Parameters
x : array-like
detrend : callable, default: mlab.detrend_none (no detrending)
A detrending function applied to x. It must have the signature detrend(x: np.ndarray) -> np.ndarray
normed : bool, default: True
If True, input vectors are normalised to unit length.
usevlines : bool, default: True
Determines the plot style. If True, vertical lines are plotted from 0 to the acorr value using Axes.vlines. Additionally, a horizontal line is plotted at y=0 using Axes.axhline. If False, markers are plotted at the acorr values using Axes.plot.
maxlags : int, default: 10
Number of lags to show. If None, will return all 2 * len(x) - 1 lags.
Returns
lags : array (length 2*maxlags+1)
The lag vector.
c : array (length 2*maxlags+1)
The auto correlation vector.
line : LineCollection or Line2D
Artist added to the Axes of the correlation: LineCollection if usevlines is True, Line2D if usevlines is False.
b : Line2D or None
Horizontal line at 0 if usevlines is True; None if usevlines is False.
Other Parameters
linestyle : Line2D property, optional
The linestyle for plotting the data points. Only used if usevlines is False.
marker : str, default: 'o'
The marker for plotting the data points. Only used if usevlines is False.
data : indexable object, optional
If given, the following parameters also accept a string s, which is interpreted as data[s] (unless this raises an exception): x
**kwargs
Additional parameters are passed to Axes.vlines and Axes.axhline if usevlines is True; otherwise they are passed to Axes.plot.
Notes
The cross correlation is performed with numpy.correlate with mode = "full".
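The return values described above can be checked in a quick sketch, assuming a standard matplotlib/numpy install (headless Agg backend; the random data is illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)

fig, ax = plt.subplots()
lags, c, line, b = ax.acorr(x, maxlags=10, usevlines=True)

# 2*maxlags+1 lags are returned, and with normed=True (the default)
# the autocorrelation at lag 0 (the middle element) is exactly 1.
assert len(lags) == 2 * 10 + 1
assert abs(c[10] - 1.0) < 1e-9
```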
doc_30313
Return a Path for the unit circle wedge from angles theta1 to theta2 (in degrees). theta2 is unwrapped to produce the shortest wedge within 360 degrees. That is, if theta2 > theta1 + 360, the wedge will be from theta1 to theta2 - 360 and not a full circle plus some extra overlap. If n is provided, it is the number of spline segments to make. If n is not provided, the number of spline segments is determined based on the delta between theta1 and theta2. See Path.arc for the reference on the approximation used.
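A small sketch of the wedge path, assuming a standard matplotlib install (the angles and test points are illustrative):

```python
from matplotlib.path import Path

# Quarter wedge of the unit circle, from 0 to 90 degrees.
w = Path.wedge(0, 90)

# The wedge is a closed "pie slice": a point inside the slice tests
# positive, a point in the opposite quadrant does not.
assert w.contains_point((0.5, 0.5))
assert not w.contains_point((-0.5, -0.5))
```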
doc_30314
streamwriter and streamreader: Stream writer and reader classes or factory functions. These have to provide the interface defined by the base classes StreamWriter and StreamReader, respectively. Stream codecs can maintain state.
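The registered stream factories can be obtained with codecs.getwriter and codecs.getreader; a minimal round-trip sketch using the UTF-8 codec:

```python
import codecs
import io

# Wrap a byte buffer with the UTF-8 StreamWriter factory.
buf = io.BytesIO()
writer = codecs.getwriter("utf-8")(buf)
writer.write("héllo, wörld")

# Wrap the encoded bytes with the matching StreamReader factory.
reader = codecs.getreader("utf-8")(io.BytesIO(buf.getvalue()))
assert reader.read() == "héllo, wörld"
```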
doc_30315
Describes an enum value of Parameter.kind. New in version 3.8. Example: print all descriptions of arguments: >>> def foo(a, b, *, c, d=10): ... pass >>> sig = signature(foo) >>> for param in sig.parameters.values(): ... print(param.kind.description) positional or keyword positional or keyword keyword-only keyword-only
doc_30316
tf.estimator.Estimator( model_fn, model_dir=None, config=None, params=None, warm_start_from=None ) The Estimator object wraps a model which is specified by a model_fn, which, given inputs and a number of other parameters, returns the ops necessary to perform training, evaluation, or predictions. All outputs (checkpoints, event files, etc.) are written to model_dir, or a subdirectory thereof. If model_dir is not set, a temporary directory is used. The config argument can be passed a tf.estimator.RunConfig object containing information about the execution environment. It is passed on to the model_fn, if the model_fn has a parameter named "config" (and input functions in the same manner). If the config parameter is not passed, it is instantiated by the Estimator. Not passing config means that defaults useful for local execution are used. Estimator makes config available to the model (for instance, to allow specialization based on the number of workers available), and also uses some of its fields to control internals, especially regarding checkpointing. The params argument contains hyperparameters. It is passed to the model_fn, if the model_fn has a parameter named "params", and to the input functions in the same manner. Estimator only passes params along; it does not inspect them. The structure of params is therefore entirely up to the developer. None of Estimator's methods can be overridden in subclasses (its constructor enforces this). Subclasses should use model_fn to configure the base class, and may add methods implementing specialized functionality. See estimators for more information. To warm-start an Estimator: estimator = tf.estimator.DNNClassifier( feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], hidden_units=[1024, 512, 256], warm_start_from="/path/to/checkpoint/dir") For more details on warm-start configuration, see tf.estimator.WarmStartSettings. Args model_fn Model function.
Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows Estimators to be configured from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started.
If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError if parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Calling methods of Estimator will work while eager execution is enabled. However, the model_fn and input_fn are not executed eagerly; Estimator will switch to graph mode before calling all user-provided functions (incl. hooks), so their code has to be compatible with graph mode execution. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config export_savedmodel model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of the directory containing evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below.
A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. 
Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. 
For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns a list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet.
get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. 
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If the batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, please set max_steps instead. If set, max_steps must be None. max_steps Number of total steps for which to train the model.
If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saving. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
doc_30317
Return a representation of the message corresponding to key and delete the message. If no such message exists, return default. The message is represented as an instance of the appropriate format-specific Message subclass unless a custom message factory was specified when the Mailbox instance was initialized.
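A minimal sketch of popping a message, using the mbox format (the message content and temporary path are illustrative):

```python
import mailbox
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.mbox")
box = mailbox.mbox(path)
key = box.add("From: alice@example.com\nSubject: hi\n\nhello\n")

# pop() returns the message (an mboxMessage here) and removes it.
msg = box.pop(key)
assert msg["Subject"] == "hi"
assert len(box) == 0

# A missing key with a default returns the default instead of raising.
assert box.pop(key, None) is None
```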
doc_30318
Call to set a new value for the context variable in the current context. The required value argument is the new value for the context variable. Returns a Token object that can be used to restore the variable to its previous value via the ContextVar.reset() method.
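A minimal sketch of set() and the returned Token (the variable name and values are illustrative):

```python
from contextvars import ContextVar

request_id = ContextVar("request_id", default="none")

# set() returns a Token capturing the variable's previous state.
token = request_id.set("abc-123")
assert request_id.get() == "abc-123"

# reset() restores the value the variable had before set() was called
# (here, the default).
request_id.reset(token)
assert request_id.get() == "none"
```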
doc_30319
'DEFAULT_PARSER_CLASSES': [ 'rest_framework.parsers.JSONParser', ] } You can also set the parsers used for an individual view, or viewset, using the APIView class-based views. from rest_framework.parsers import JSONParser from rest_framework.response import Response from rest_framework.views import APIView class ExampleView(APIView): """ A view that can accept POST requests with JSON content. """ parser_classes = [JSONParser] def post(self, request, format=None): return Response({'received data': request.data}) Or, if you're using the @api_view decorator with function based views. from rest_framework.decorators import api_view from rest_framework.decorators import parser_classes from rest_framework.parsers import JSONParser @api_view(['POST']) @parser_classes([JSONParser]) def example_view(request, format=None): """ A view that can accept POST requests with JSON content. """ return Response({'received data': request.data}) API Reference JSONParser Parses JSON request content. request.data will be populated with a dictionary of data. .media_type: application/json FormParser Parses HTML form content. request.data will be populated with a QueryDict of data. You will typically want to use both FormParser and MultiPartParser together in order to fully support HTML form data. .media_type: application/x-www-form-urlencoded MultiPartParser Parses multipart HTML form content, which supports file uploads. request.data will be populated with a QueryDict. You will typically want to use both FormParser and MultiPartParser together in order to fully support HTML form data. .media_type: multipart/form-data FileUploadParser Parses raw file upload content. The request.data property will be a dictionary with a single key 'file' containing the uploaded file. If the view used with FileUploadParser is called with a filename URL keyword argument, then that argument will be used as the filename.
If it is called without a filename URL keyword argument, then the client must set the filename in the Content-Disposition HTTP header. For example Content-Disposition: attachment; filename=upload.jpg. .media_type: */* Notes: The FileUploadParser is for usage with native clients that can upload the file as a raw data request. For web-based uploads, or for native clients with multipart upload support, you should use the MultiPartParser instead. Since this parser's media_type matches any content type, FileUploadParser should generally be the only parser set on an API view. FileUploadParser respects Django's standard FILE_UPLOAD_HANDLERS setting, and the request.upload_handlers attribute. See the Django documentation for more details. Basic usage example: # views.py class FileUploadView(views.APIView): parser_classes = [FileUploadParser] def put(self, request, filename, format=None): file_obj = request.data['file'] # ... # do some stuff with uploaded file # ... return Response(status=204) # urls.py urlpatterns = [ # ... re_path(r'^upload/(?P<filename>[^/]+)$', FileUploadView.as_view()) ] Custom parsers To implement a custom parser, you should override BaseParser, set the .media_type property, and implement the .parse(self, stream, media_type, parser_context) method. The method should return the data that will be used to populate the request.data property. The arguments passed to .parse() are: stream A stream-like object representing the body of the request. media_type Optional. If provided, this is the media type of the incoming request content. Depending on the request's Content-Type: header, this may be more specific than the renderer's media_type attribute, and may include media type parameters. For example "text/plain; charset=utf-8". parser_context Optional. If supplied, this argument will be a dictionary containing any additional context that may be required to parse the request content. 
By default this will include the following keys: view, request, args, kwargs. Example The following is an example plaintext parser that will populate the request.data property with a string representing the body of the request. class PlainTextParser(BaseParser): """ Plain text parser. """ media_type = 'text/plain' def parse(self, stream, media_type=None, parser_context=None): """ Simply return a string representing the body of the request. """ return stream.read() Third party packages The following third party packages are also available. YAML REST framework YAML provides YAML parsing and rendering support. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. Installation & configuration Install using pip. $ pip install djangorestframework-yaml Modify your REST framework settings. REST_FRAMEWORK = { 'DEFAULT_PARSER_CLASSES': [ 'rest_framework_yaml.parsers.YAMLParser', ], 'DEFAULT_RENDERER_CLASSES': [ 'rest_framework_yaml.renderers.YAMLRenderer', ], } XML REST Framework XML provides a simple informal XML format. It was previously included directly in the REST framework package, and is now instead supported as a third-party package. Installation & configuration Install using pip. $ pip install djangorestframework-xml Modify your REST framework settings. REST_FRAMEWORK = { 'DEFAULT_PARSER_CLASSES': [ 'rest_framework_xml.parsers.XMLParser', ], 'DEFAULT_RENDERER_CLASSES': [ 'rest_framework_xml.renderers.XMLRenderer', ], } MessagePack MessagePack is a fast, efficient binary serialization format. Juan Riaza maintains the djangorestframework-msgpack package which provides MessagePack renderer and parser support for REST framework. CamelCase JSON djangorestframework-camel-case provides camel case JSON renderers and parsers for REST framework. This allows serializers to use Python-style underscored field names, but be exposed in the API as Javascript-style camel case field names. 
It is maintained by Vitaly Babiy.
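The parse() contract described in the custom-parsers section can be exercised without a running server. The sketch below mimics the PlainTextParser example with a plain class — rest_framework is deliberately not imported, so the class and the BytesIO request body are stand-ins, not the real DRF machinery:

```python
import io

class PlainTextParser:
    """Stand-in with the same interface as the custom parser example;
    rest_framework is not imported, so this runs anywhere."""
    media_type = "text/plain"

    def parse(self, stream, media_type=None, parser_context=None):
        # DRF hands the request body to parse() as a stream-like object;
        # the plain-text parser simply reads it back as the raw payload.
        return stream.read()

body = io.BytesIO(b"hello world")   # plays the role of the request body
data = PlainTextParser().parse(body, media_type="text/plain")
assert data == b"hello world"
```

In a real project the class would subclass rest_framework.parsers.BaseParser and be listed in a view's parser_classes, exactly as shown in the examples above.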
doc_30320
A NamedNodeMap of attribute objects. Only elements have actual values for this; others provide None for this attribute. This is a read-only attribute.
doc_30321
tf.compat.v1.tpu.cross_replica_sum( x, group_assignment=None, name=None ) Args x The local tensor to sum across replicas. group_assignment Optional 2d int32 lists with shape [num_groups, num_replicas_per_group]. group_assignment[i] represents the replica ids in the ith subgroup. name Optional op name. Returns A Tensor which is summed across replicas.
doc_30322
If value is not None, this function prints repr(value) to sys.stdout, and saves value in builtins._. If repr(value) is not encodable to sys.stdout.encoding with sys.stdout.errors error handler (which is probably 'strict'), encode it to sys.stdout.encoding with 'backslashreplace' error handler. sys.displayhook is called on the result of evaluating an expression entered in an interactive Python session. The display of these values can be customized by assigning another one-argument function to sys.displayhook. Pseudo-code: def displayhook(value): if value is None: return # Set '_' to None to avoid recursion builtins._ = None text = repr(value) try: sys.stdout.write(text) except UnicodeEncodeError: bytes = text.encode(sys.stdout.encoding, 'backslashreplace') if hasattr(sys.stdout, 'buffer'): sys.stdout.buffer.write(bytes) else: text = bytes.decode(sys.stdout.encoding, 'strict') sys.stdout.write(text) sys.stdout.write("\n") builtins._ = value Changed in version 3.2: Use 'backslashreplace' error handler on UnicodeEncodeError.
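For illustration, a hypothetical replacement hook that shows integers in hex (hex_displayhook is not a standard name; it follows the same shape as the pseudo-code above):

```python
import builtins
import sys

def hex_displayhook(value):
    # Hypothetical hook: show ints in hex, fall back to repr() otherwise
    if value is None:
        return
    builtins._ = None                 # avoid recursion, as in the default hook
    text = hex(value) if isinstance(value, int) else repr(value)
    sys.stdout.write(text + "\n")
    builtins._ = value

saved = sys.displayhook
sys.displayhook = hex_displayhook     # installed for the interactive session
sys.displayhook(255)                  # the REPL would call this with each result
sys.displayhook = saved               # restore the default behavior
```

At an interactive prompt this would render 255 as 0xff while still saving the value in builtins._.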
doc_30323
Return a time.struct_time such as returned by time.localtime(). d.timetuple() is equivalent to: time.struct_time((d.year, d.month, d.day, d.hour, d.minute, d.second, d.weekday(), yday, dst)) where yday = d.toordinal() - date(d.year, 1, 1).toordinal() + 1 is the day number within the current year starting with 1 for January 1st. The tm_isdst flag of the result is set according to the dst() method: if tzinfo is None or dst() returns None, tm_isdst is set to -1; else if dst() returns a non-zero value, tm_isdst is set to 1; else tm_isdst is set to 0.
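A quick sketch of the yday computation (2024-03-01 is an arbitrary example date):

```python
from datetime import datetime

d = datetime(2024, 3, 1, 12, 30)
t = d.timetuple()
# yday = 31 (January) + 29 (February 2024, a leap year) + 1 = 61
assert t.tm_yday == 61
# a naive datetime has tzinfo None, so tm_isdst is -1
assert t.tm_isdst == -1
assert (t.tm_year, t.tm_mon, t.tm_mday) == (2024, 3, 1)
```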
doc_30324
event type identifier. type -> int Read-only. The event type identifier. For user created event objects, this is the type argument passed to pygame.event.Event(). For example, some predefined event identifiers are QUIT and MOUSEMOTION.
doc_30325
Compute the mean and std to be used for later scaling. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data used to compute the mean and standard deviation used for later scaling along the features axis. yNone Ignored. sample_weightarray-like of shape (n_samples,), default=None Individual weights for each sample. New in version 0.24: parameter sample_weight support to StandardScaler. Returns selfobject Fitted scaler.
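As a concrete check of what fit() computes (a small sketch; the toy array X is illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [0.0], [1.0], [1.0]])
scaler = StandardScaler().fit(X)       # fit() only computes the statistics
# per-feature mean and (population) standard deviation
assert np.allclose(scaler.mean_, [0.5])
assert np.allclose(scaler.scale_, [0.5])
# the learned statistics are applied later by transform()
Xt = scaler.transform(X)
assert np.allclose(Xt.mean(axis=0), [0.0])
```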
doc_30326
Exit code that means that an error occurred while doing I/O on some file. Availability: Unix.
doc_30327
create_user(username_field, password=None, **other_fields) The prototype of create_user() should accept the username field, plus all required fields as arguments. For example, if your user model uses email as the username field, and has date_of_birth as a required field, then create_user should be defined as: def create_user(self, email, date_of_birth, password=None): # create user here ... create_superuser(username_field, password=None, **other_fields) The prototype of create_superuser() should accept the username field, plus all required fields as arguments. For example, if your user model uses email as the username field, and has date_of_birth as a required field, then create_superuser should be defined as: def create_superuser(self, email, date_of_birth, password=None): # create superuser here ...
doc_30328
Set the botframe, stopframe, returnframe and quitting attributes with values ready to start debugging.
doc_30329
The current rendered value of the response content, using the current template and context data.
doc_30330
Return the font stretch as a string or a number. See also font_manager.FontProperties.get_stretch
doc_30331
Return whether the lock is currently held by an owner.
doc_30332
Return a dictionary of all the properties of the artist.
doc_30333
The get_fieldsets method is given the HttpRequest and the obj being edited (or None on an add form) and is expected to return a list of two-tuples, in which each two-tuple represents a <fieldset> on the admin form page, as described above in the ModelAdmin.fieldsets section.
doc_30334
Replace values given in to_replace with value. Values of the DataFrame are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value. Parameters to_replace:str, regex, list, dict, Series, int, float, or None How to find the values that will be replaced. numeric, str or regex: numeric: numeric values equal to to_replace will be replaced with value str: string exactly matching to_replace will be replaced with value regex: regexs matching to_replace will be replaced with value list of str, regex, or numeric: First, if to_replace and value are both lists, they must be the same length. Second, if regex=True then all of the strings in both lists will be interpreted as regexs otherwise they will match directly. This doesn’t matter much for value since there are only a few possible substitution regexes you can use. str, regex and numeric rules apply as above. dict: Dicts can be used to specify different replacement values for different existing values. For example, {'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way the value parameter should be None. For a DataFrame a dict can specify that different values should be replaced in different columns. For example, {'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these values with whatever is specified in value. The value parameter should not be None in this case. You can treat this as a special case of passing two lists except that you are specifying the column to search in. For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}}, are read as follows: look in column ‘a’ for the value ‘b’ and replace it with NaN. The value parameter should be None to use a nested dict in this way. You can nest regular expressions as well. Note that column names (the top-level dictionary keys in a nested dictionary) cannot be regular expressions. 
None: This means that the regex argument must be a string, compiled regular expression, or list, dict, ndarray or Series of such elements. If value is also None then this must be a nested dictionary or Series. See the examples section for examples of each of these. value:scalar, dict, list, str, regex, default None Value to replace any values matching to_replace with. For a DataFrame a dict of values can be used to specify which value to use for each column (columns not in the dict will not be filled). Regular expressions, strings and lists or dicts of such objects are also allowed. inplace:bool, default False If True, performs operation inplace and returns None. limit:int, default None Maximum size gap to forward or backward fill. regex:bool or same types as to_replace, default False Whether to interpret to_replace and/or value as regular expressions. If this is True then to_replace must be a string. Alternatively, this could be a regular expression or a list, dict, or array of regular expressions in which case to_replace must be None. method:{‘pad’, ‘ffill’, ‘bfill’, None} The method to use for replacement, when to_replace is a scalar, list or tuple and value is None. Changed in version 0.23.0: Added to DataFrame. Returns DataFrame Object after replacement. Raises AssertionError If regex is not a bool and to_replace is not None. TypeError If to_replace is not a scalar, array-like, dict, or None If to_replace is a dict and value is not a list, dict, ndarray, or Series If to_replace is None and regex is not compilable into a regular expression or is a list, dict, ndarray, or Series. When replacing multiple bool or datetime64 objects and the arguments to to_replace do not match the type of the value being replaced ValueError If a list or an ndarray is passed to to_replace and value but they are not the same length. See also DataFrame.fillna Fill NA values. DataFrame.where Replace values based on boolean condition. Series.str.replace Simple string replacement.
Notes Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are the same. Regular expressions will only substitute on strings, meaning you cannot provide, for example, a regular expression matching floating point numbers and expect the columns in your frame that have a numeric dtype to be matched. However, if those floating point numbers are strings, then you can do this. This method has a lot of options. You are encouraged to experiment and play with this method to gain intuition about how it works. When dict is used as the to_replace value, it is like key(s) in the dict are the to_replace part and value(s) in the dict are the value parameter. Examples Scalar `to_replace` and `value` >>> s = pd.Series([1, 2, 3, 4, 5]) >>> s.replace(1, 5) 0 5 1 2 2 3 3 4 4 5 dtype: int64 >>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4], ... 'B': [5, 6, 7, 8, 9], ... 'C': ['a', 'b', 'c', 'd', 'e']}) >>> df.replace(0, 5) A B C 0 5 5 a 1 1 6 b 2 2 7 c 3 3 8 d 4 4 9 e List-like `to_replace` >>> df.replace([0, 1, 2, 3], 4) A B C 0 4 5 a 1 4 6 b 2 4 7 c 3 4 8 d 4 4 9 e >>> df.replace([0, 1, 2, 3], [4, 3, 2, 1]) A B C 0 4 5 a 1 3 6 b 2 2 7 c 3 1 8 d 4 4 9 e >>> s.replace([1, 2], method='bfill') 0 3 1 3 2 3 3 4 4 5 dtype: int64 dict-like `to_replace` >>> df.replace({0: 10, 1: 100}) A B C 0 10 5 a 1 100 6 b 2 2 7 c 3 3 8 d 4 4 9 e >>> df.replace({'A': 0, 'B': 5}, 100) A B C 0 100 100 a 1 1 6 b 2 2 7 c 3 3 8 d 4 4 9 e >>> df.replace({'A': {0: 100, 4: 400}}) A B C 0 100 5 a 1 1 6 b 2 2 7 c 3 3 8 d 4 400 9 e Regular expression `to_replace` >>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'], ... 
'B': ['abc', 'bar', 'xyz']}) >>> df.replace(to_replace=r'^ba.$', value='new', regex=True) A B 0 new abc 1 foo new 2 bait xyz >>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True) A B 0 new abc 1 foo bar 2 bait xyz >>> df.replace(regex=r'^ba.$', value='new') A B 0 new abc 1 foo new 2 bait xyz >>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'}) A B 0 new abc 1 xyz new 2 bait xyz >>> df.replace(regex=[r'^ba.$', 'foo'], value='new') A B 0 new abc 1 new new 2 bait xyz Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to understand the peculiarities of the to_replace parameter: >>> s = pd.Series([10, 'a', 'a', 'b', 'a']) When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a': None}, value=None, method=None): >>> s.replace({'a': None}) 0 10 1 None 2 None 3 b 4 None dtype: object When value is not explicitly passed and to_replace is a scalar, list or tuple, replace uses the method parameter (default ‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2 and ‘b’ in row 4 in this case. >>> s.replace('a') 0 10 1 10 2 10 3 b 4 b dtype: object On the other hand, if None is explicitly passed for value, it will be respected: >>> s.replace('a', None) 0 10 1 None 2 None 3 b 4 None dtype: object Changed in version 1.4.0: Previously the explicit None was silently ignored.
doc_30335
Return the clip path with the non-affine part of its transformation applied, and the remaining affine part of its transformation.
doc_30336
Return the period of now’s date.
doc_30337
Set the zorder for the artist. Artists with lower zorder values are drawn first. Parameters levelfloat
doc_30338
An ExtensionDtype for Interval data. This is not an actual numpy dtype, but a duck type. Parameters subtype:str, np.dtype The dtype of the Interval bounds. Examples >>> pd.IntervalDtype(subtype='int64', closed='both') interval[int64, both] Attributes subtype The dtype of the Interval bounds. Methods None
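A short sketch of how the dtype shows up in practice (the breaks used to build the intervals are illustrative):

```python
import pandas as pd

dtype = pd.IntervalDtype(subtype="int64", closed="right")
# IntervalIndex.from_breaks builds right-closed int64 intervals by default
s = pd.Series(pd.IntervalIndex.from_breaks([0, 1, 2, 3]))
assert s.dtype == dtype            # the Series carries this duck dtype
assert dtype.subtype == "int64"    # the dtype of the Interval bounds
```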
doc_30339
Return the current UTC date and time, with tzinfo None. This is like now(), but returns the current UTC date and time, as a naive datetime object. An aware current UTC datetime can be obtained by calling datetime.now(timezone.utc). See also now(). Warning Because naive datetime objects are treated by many datetime methods as local times, it is preferred to use aware datetimes to represent times in UTC. As such, the recommended way to create an object representing the current time in UTC is by calling datetime.now(timezone.utc).
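The recommended pattern from the warning above can be sketched as follows (the replace() call shows how an aware UTC datetime relates to a naive one representing the same instant):

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)   # recommended: aware current UTC datetime
assert aware.tzinfo is timezone.utc

naive = aware.replace(tzinfo=None)   # same wall-clock values, but naive
assert naive.tzinfo is None
assert (naive.year, naive.hour) == (aware.year, aware.hour)
```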
doc_30340
The calculated size of the struct (and hence of the bytes object produced by the pack() method) corresponding to format.
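For example, with an explicit byte order (which disables native alignment), the size is just the sum of the field sizes:

```python
import struct

fmt = "<ihb"                 # int32 + int16 + int8, no padding with "<"
assert struct.calcsize(fmt) == 4 + 2 + 1

s = struct.Struct(fmt)
packed = s.pack(1, 2, 3)
# .size matches both calcsize() and the length of the packed bytes object
assert len(packed) == s.size == 7
```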
doc_30341
Provide data to the compressor object. Returns a chunk of compressed data if possible, or an empty byte string otherwise. When you have finished providing data to the compressor, call the flush() method to finish the compression process.
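The incremental pattern described above can be sketched with bz2.BZ2Compressor, one of the compressor objects exposing this interface (lzma.LZMACompressor works the same way):

```python
import bz2

comp = bz2.BZ2Compressor()
chunks = []
for part in (b"hello ", b"hello ", b"hello "):
    # compress() may buffer internally and return b"" for small inputs
    chunks.append(comp.compress(part))
chunks.append(comp.flush())            # emit whatever is still buffered
data = b"".join(chunks)

# round-trip: the concatenated output decompresses to the original stream
assert bz2.decompress(data) == b"hello hello hello "
```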
doc_30342
Stop the timer, and cancel the execution of the timer’s action. This will only work if the timer is still in its waiting stage.
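A minimal sketch of cancelling a timer during its waiting stage (the 5-second delay is illustrative; the callback never runs):

```python
import threading

fired = []
t = threading.Timer(5.0, lambda: fired.append(True))
t.start()
t.cancel()        # still in its waiting stage, so the action is cancelled
t.join()          # a cancelled timer's thread finishes promptly
assert fired == []
assert not t.is_alive()
```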
doc_30343
Label Propagation classifier Read more in the User Guide. Parameters kernel{‘knn’, ‘rbf’} or callable, default=’rbf’ String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape (n_samples, n_features), and return a (n_samples, n_samples) shaped weight matrix. gammafloat, default=20 Parameter for rbf kernel. n_neighborsint, default=7 Parameter for knn kernel which needs to be strictly positive. max_iterint, default=1000 Change maximum number of iterations allowed. tolfloat, default=1e-3 Convergence tolerance: threshold to consider the system at steady state. n_jobsint, default=None The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes X_ndarray of shape (n_samples, n_features) Input array. classes_ndarray of shape (n_classes,) The distinct labels used in classifying instances. label_distributions_ndarray of shape (n_samples, n_classes) Categorical distribution for each item. transduction_ndarray of shape (n_samples) Label assigned to each item via the transduction. n_iter_int Number of iterations run. See also LabelSpreading Alternate label propagation strategy more robust to noise. References Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002 http://pages.cs.wisc.edu/~jerryzhu/pub/CMU-CALD-02-107.pdf Examples >>> import numpy as np >>> from sklearn import datasets >>> from sklearn.semi_supervised import LabelPropagation >>> label_prop_model = LabelPropagation() >>> iris = datasets.load_iris() >>> rng = np.random.RandomState(42) >>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3 >>> labels = np.copy(iris.target) >>> labels[random_unlabeled_points] = -1 >>> label_prop_model.fit(iris.data, labels) LabelPropagation(...)
Methods fit(X, y) Fit a semi-supervised label propagation model. get_params([deep]) Get parameters for this estimator. predict(X) Performs inductive inference across the model. predict_proba(X) Predict probability for each possible outcome. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit a semi-supervised label propagation model. All the input data is provided as matrix X (labeled and unlabeled) and a corresponding label matrix y with a dedicated marker value for unlabeled samples. Parameters Xarray-like of shape (n_samples, n_features) A matrix of shape (n_samples, n_samples) will be created from this. yarray-like of shape (n_samples,) n_labeled_samples (unlabeled points are marked as -1) All unlabeled samples will be transductively assigned labels. Returns selfobject get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Performs inductive inference across the model. Parameters Xarray-like of shape (n_samples, n_features) The data matrix. Returns yndarray of shape (n_samples,) Predictions for input data. predict_proba(X) [source] Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters Xarray-like of shape (n_samples, n_features) The data matrix. Returns probabilitiesndarray of shape (n_samples, n_classes) Normalized probability distributions across class labels. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
doc_30344
Join lists contained as elements in the Series/Index with passed delimiter. If the elements of a Series are lists themselves, join the content of these lists using the delimiter passed to the function. This function is an equivalent to str.join(). Parameters sep:str Delimiter to use between list entries. Returns Series/Index: object The list entries concatenated by intervening occurrences of the delimiter. Raises AttributeError If the supplied Series contains neither strings nor lists. See also str.join Standard library version of this method. Series.str.split Split strings around given separator/delimiter. Notes If any of the list items is not a string object, the result of the join will be NaN. Examples Example with a list that contains non-string elements. >>> s = pd.Series([['lion', 'elephant', 'zebra'], ... [1.1, 2.2, 3.3], ... ['cat', np.nan, 'dog'], ... ['cow', 4.5, 'goat'], ... ['duck', ['swan', 'fish'], 'guppy']]) >>> s 0 [lion, elephant, zebra] 1 [1.1, 2.2, 3.3] 2 [cat, nan, dog] 3 [cow, 4.5, goat] 4 [duck, [swan, fish], guppy] dtype: object Join all lists using a ‘-’. The lists containing object(s) of types other than str will produce a NaN. >>> s.str.join('-') 0 lion-elephant-zebra 1 NaN 2 NaN 3 NaN 4 NaN dtype: object
doc_30345
Roll provided date forward to next offset only if not on offset. Returns Timestamp Rolled timestamp if not on offset, otherwise unchanged timestamp.
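A sketch with MonthEnd (the dates are illustrative):

```python
import pandas as pd

offset = pd.offsets.MonthEnd()
# not on offset: rolled forward to the next month end
assert offset.rollforward(pd.Timestamp("2024-01-15")) == pd.Timestamp("2024-01-31")
# already on offset: returned unchanged
assert offset.rollforward(pd.Timestamp("2024-01-31")) == pd.Timestamp("2024-01-31")
```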
doc_30346
assertTupleEqual(first, second, msg=None) Tests that two lists or tuples are equal. If not, an error message is constructed that shows only the differences between the two. An error is also raised if either of the parameters are of the wrong type. These methods are used by default when comparing lists or tuples with assertEqual(). New in version 3.1.
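A runnable sketch (the TupleTests class name is illustrative):

```python
import unittest

class TupleTests(unittest.TestCase):
    def test_explicit(self):
        self.assertTupleEqual((1, 2, 3), (1, 2, 3))

    def test_via_assert_equal(self):
        # assertEqual() dispatches to assertTupleEqual for two tuples
        self.assertEqual((1, "a"), (1, "a"))

suite = unittest.TestLoader().loadTestsFromTestCase(TupleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```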
doc_30347
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters Xndarray of shape (n_samples_X, n_features) Left argument of the returned kernel k(X, Y) Returns K_diagndarray of shape (n_samples_X,) Diagonal of kernel k(X, X)
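A sketch of the stated equivalence using an RBF kernel from sklearn.gaussian_process.kernels (the sample data is random but seeded):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

X = np.random.RandomState(0).rand(5, 2)
kernel = RBF(length_scale=1.0)
# diag(X) equals np.diag(kernel(X, X)) but skips the off-diagonal work
assert np.allclose(kernel.diag(X), np.diag(kernel(X)))
# for a stationary kernel such as RBF, k(x, x) is constant (here 1.0)
assert np.allclose(kernel.diag(X), np.ones(5))
```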
doc_30348
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
doc_30349
See Migration guide for more details. tf.compat.v1.raw_ops.IteratorV2 tf.raw_ops.IteratorV2( shared_name, container, output_types, output_shapes, name=None ) Args shared_name A string. container A string. output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type resource.
doc_30350
Return True if it is a block device.
doc_30351
Returns a new equivalent pickle string after eliminating unused PUT opcodes. The optimized pickle is shorter, takes less transmission time, requires less storage space, and unpickles more efficiently.
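A quick round-trip sketch (the sample object is illustrative):

```python
import pickle
import pickletools

obj = {"a": [1, 2, 3], "b": ("x", "y")}
raw = pickle.dumps(obj)
slim = pickletools.optimize(raw)
assert len(slim) <= len(raw)          # unused PUT opcodes removed
assert pickle.loads(slim) == obj      # still unpickles to the same object
```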
doc_30352
Finds blobs in the given grayscale image. Blobs are found using the Difference of Gaussian (DoG) method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel that detected the blob. Parameters image2D or 3D ndarray Input grayscale image, blobs are assumed to be light on dark background (white on black). min_sigmascalar or sequence of scalars, optional The minimum standard deviation for Gaussian kernel. Keep this low to detect smaller blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes. max_sigmascalar or sequence of scalars, optional The maximum standard deviation for Gaussian kernel. Keep this high to detect larger blobs. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes. sigma_ratiofloat, optional The ratio between the standard deviation of Gaussian Kernels used for computing the Difference of Gaussians thresholdfloat, optional. The absolute lower bound for scale space maxima. Local maxima smaller than thresh are ignored. Reduce this to detect blobs with less intensities. overlapfloat, optional A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than threshold, the smaller blob is eliminated. exclude_bordertuple of ints, int, or False, optional If tuple of ints, the length of the tuple must match the input array’s dimensionality. Each element of the tuple will exclude peaks from within exclude_border-pixels of the border of the image along that dimension. If nonzero int, exclude_border excludes peaks from within exclude_border-pixels of the border of the image. If zero or False, peaks are identified regardless of their distance from the border. 
Returns A(n, image.ndim + sigma) ndarray A 2d array with each row representing 2 coordinate values for a 2D image, and 3 coordinate values for a 3D image, plus the sigma(s) used. When a single sigma is passed, outputs are: (r, c, sigma) or (p, r, c, sigma) where (r, c) or (p, r, c) are coordinates of the blob and sigma is the standard deviation of the Gaussian kernel which detected the blob. When an anisotropic gaussian is used (sigmas per dimension), the detected sigma is returned for each dimension. See also skimage.filters.difference_of_gaussians Notes The radius of each blob is approximately \(\sqrt{2}\sigma\) for a 2-D image and \(\sqrt{3}\sigma\) for a 3-D image. References 1 https://en.wikipedia.org/wiki/Blob_detection#The_difference_of_Gaussians_approach Examples >>> from skimage import data, feature >>> feature.blob_dog(data.coins(), threshold=.5, max_sigma=40) array([[120. , 272. , 16.777216], [193. , 213. , 16.777216], [263. , 245. , 16.777216], [185. , 347. , 16.777216], [128. , 154. , 10.48576 ], [198. , 155. , 10.48576 ], [124. , 337. , 10.48576 ], [ 45. , 336. , 16.777216], [195. , 102. , 16.777216], [125. , 45. , 16.777216], [261. , 173. , 16.777216], [194. , 277. , 16.777216], [127. , 102. , 10.48576 ], [125. , 208. , 10.48576 ], [267. , 115. , 10.48576 ], [263. , 302. , 16.777216], [196. , 43. , 10.48576 ], [260. , 46. , 16.777216], [267. , 359. , 16.777216], [ 54. , 276. , 10.48576 ], [ 58. , 100. , 10.48576 ], [ 52. , 155. , 16.777216], [ 52. , 216. , 16.777216], [ 54. , 42. , 16.777216]])
doc_30353
Decorate a view function to register it with the given URL rule and options. Calls add_url_rule(), which has more details about the implementation. @app.route("/") def index(): return "Hello, World!" See URL Route Registrations. The endpoint name for the route defaults to the name of the view function if the endpoint parameter isn’t passed. The methods parameter defaults to ["GET"]. HEAD and OPTIONS are added automatically. Parameters rule (str) – The URL rule string. options (Any) – Extra options passed to the Rule object. Return type Callable
doc_30354
tf.initializers.GlorotNormal, tf.initializers.glorot_normal, tf.keras.initializers.glorot_normal tf.keras.initializers.GlorotNormal( seed=None ) Also available via the shortcut function tf.keras.initializers.glorot_normal. Draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(2 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor. Examples: # Standalone usage: initializer = tf.keras.initializers.GlorotNormal() values = initializer(shape=(2, 2)) # Usage in a Keras layer: initializer = tf.keras.initializers.GlorotNormal() layer = tf.keras.layers.Dense(3, kernel_initializer=initializer) Args seed A Python integer. An initializer created with a given seed will always produce the same random tensor for a given shape and dtype. References: Glorot et al., 2010 (pdf) Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=None, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which default to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)) **kwargs Additional keyword arguments.
doc_30355
The new process has a new console, instead of inheriting its parent’s console (the default).
doc_30356
Autoscale the scalar limits on the norm instance using the current array
doc_30357
Set the artist's clip Bbox. Parameters clipboxBbox
doc_30358
See Migration guide for more details. tf.compat.v1.raw_ops.Xlogy tf.raw_ops.Xlogy( x, y, name=None ) Args x A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128. y A Tensor. Must have the same type as x. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_30359
Bases: skimage.transform._geometric.EuclideanTransform 2D similarity transformation. Has the following form: X = a0 * x - b0 * y + a1 = s * x * cos(rotation) - s * y * sin(rotation) + a1 Y = b0 * x + a0 * y + b1 = s * x * sin(rotation) + s * y * cos(rotation) + b1 where s is a scale factor and the homogeneous transformation matrix is: [[a0 -b0 a1] [b0 a0 b1] [0 0 1]] The similarity transformation extends the Euclidean transformation with a single scaling factor in addition to the rotation and translation parameters. Parameters matrix(3, 3) array, optional Homogeneous transformation matrix. scalefloat, optional Scale factor. rotationfloat, optional Rotation angle in counter-clockwise direction as radians. translation(tx, ty) as array, list or tuple, optional x, y translation parameters. Attributes params(3, 3) array Homogeneous transformation matrix. __init__(matrix=None, scale=None, rotation=None, translation=None) [source] Initialize self. See help(type(self)) for accurate signature. estimate(src, dst) [source] Estimate the transformation from a set of corresponding points. You can determine the over-, well- and under-determined parameters with the total least-squares method. Number of source and destination coordinates must match. Parameters src(N, 2) array Source coordinates. dst(N, 2) array Destination coordinates. Returns successbool True, if model estimation succeeds. property scale
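A sketch of the estimate() round trip (the points and transform parameters are illustrative):

```python
import numpy as np
from skimage.transform import SimilarityTransform

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
forward = SimilarityTransform(scale=2.0, rotation=np.pi / 4, translation=(1, -1))
dst = forward(src)                   # apply the transform to the points

est = SimilarityTransform()
assert est.estimate(src, dst)        # least-squares fit succeeds
# the fitted parameters recover the original transform
assert np.allclose(est.scale, 2.0)
assert np.allclose(est.rotation, np.pi / 4)
assert np.allclose(est.translation, (1, -1))
```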
doc_30360
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters (for the __new__ method; see Notes below) shapetuple of ints Shape of created array. dtypedata-type, optional Any object that can be interpreted as a numpy data type. bufferobject exposing buffer interface, optional Used to fill the array with data. offsetint, optional Offset of array data in buffer. stridestuple of ints, optional Strides of data in memory. order{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also array Construct an array. zeros Create an array, each element of which is zero. empty Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype Create a data-type. numpy.typing.NDArray An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... 
offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes Tndarray Transpose of the array. databuffer The array’s elements, in memory. dtypedtype object Describes the format of the elements in the array. flagsdict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. flatnumpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO). imagndarray Imaginary part of the array. realndarray Real part of the array. sizeint Number of elements in the array. itemsizeint The memory use of each array element in bytes. nbytesint The total number of bytes required to store the array data, i.e., itemsize * size. ndimint The array’s number of dimensions. shapetuple of ints Shape of the array. stridestuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4). ctypesctypes object Class containing properties of the array needed for interaction with ctypes. basendarray If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
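The strides, itemsize, and nbytes relationships in the Attributes table can be checked directly; this reproduces the (3, 4) int16 example:

```python
import numpy as np

# C-ordered (3, 4) int16 array from the strides example above:
# one column step is itemsize = 2 bytes, one row step is 2 * 4 = 8 bytes.
a = np.zeros((3, 4), dtype=np.int16, order='C')
print(a.strides)   # (8, 2)
print(a.itemsize)  # 2
print(a.nbytes)    # itemsize * size = 2 * 12 = 24
```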
doc_30361
Given the location and size of the box, return the path of the box around it. Parameters x0, y0, width, heightfloat Location and size of the box. mutation_sizefloat A reference scale for the mutation. Returns Path
doc_30362
Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case). Prior to 3.0 on POSIX systems, and for all versions on Windows, if block is true and timeout is None, this operation goes into an uninterruptible wait on an underlying lock. This means that no exceptions can occur, and in particular a SIGINT will not trigger a KeyboardInterrupt.
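The calling modes described above can be sketched as follows (the timeout value is arbitrary):

```python
import queue

q = queue.Queue()
q.put('job')

# Blocking get with a timeout: returns immediately here because an
# item is available; raises queue.Empty if nothing arrives in time.
item = q.get(timeout=0.1)

# Non-blocking get: raises queue.Empty right away on an empty queue.
empty_raised = False
try:
    q.get(block=False)
except queue.Empty:
    empty_raised = True
```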
doc_30363
Initialize self. See help(type(self)) for accurate signature.
doc_30364
Creates a null session which acts as a replacement object if the real session support could not be loaded due to a configuration error. This mainly aids the user experience because the job of the null session is to still support lookup without complaining but modifications are answered with a helpful error message of what failed. This creates an instance of null_session_class by default. Parameters app (Flask) – Return type flask.sessions.NullSession
doc_30365
See torch.acosh()
doc_30366
Roll provided date forward to next offset only if not on offset. Returns TimeStamp Rolled timestamp if not on offset, otherwise unchanged timestamp.
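For example, with pandas' MonthEnd offset (any anchored offset behaves the same way):

```python
import pandas as pd

offset = pd.tseries.offsets.MonthEnd()

# Mid-month date: not on offset, so it is rolled forward.
rolled = offset.rollforward(pd.Timestamp('2020-01-15'))     # 2020-01-31

# Month-end date: already on offset, returned unchanged.
unchanged = offset.rollforward(pd.Timestamp('2020-01-31'))  # 2020-01-31
```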
doc_30367
tf.compat.v1.autograph.to_code( entity, recursive=True, arg_values=None, arg_types=None, indentation=' ', experimental_optional_features=None ) Example usage: def f(x): if x < 0: x = -x return x tf.autograph.to_code(f) "...def tf__f(x):..." Also see: tf.autograph.to_graph. Note: If a function has been decorated with tf.function, pass its underlying Python function, rather than the callable that tf.function creates: @tf.function def f(x): if x < 0: x = -x return x tf.autograph.to_code(f.python_function) "...def tf__f(x):..." Args entity Python callable or class. recursive Whether to recursively convert any functions that the converted function may call. arg_values Deprecated. arg_types Deprecated. indentation Deprecated. experimental_optional_features None, a tuple of, or a single tf.autograph.experimental.Feature value. Returns The converted code as string.
doc_30368
Determine if a class is a subclass of a second class. issubclass_ is equivalent to the Python built-in issubclass, except that it returns False instead of raising a TypeError if one of the arguments is not a class. Parameters arg1class Input class. True is returned if arg1 is a subclass of arg2. arg2class or tuple of classes. Input class. If a tuple of classes, True is returned if arg1 is a subclass of any of the tuple elements. Returns outbool Whether arg1 is a subclass of arg2 or not. See also issubsctype, issubdtype, issctype Examples >>> np.issubclass_(np.int32, int) False >>> np.issubclass_(np.int32, float) False >>> np.issubclass_(np.float64, float) True
doc_30369
The number of input dimensions of this transform. Must be overridden (with integers) in the subclass.
doc_30370
Write some data on the stream.
doc_30371
Return a list of the visible child Artists.
doc_30372
Return True if the element has an attribute named by name.
doc_30373
Returns the machine type, e.g. 'i386'. An empty string is returned if the value cannot be determined.
doc_30374
True if the polling object is closed. New in version 3.4.
doc_30375
Same as the standard C memmove library function: copies count bytes from src to dst. dst and src must be integers or ctypes instances that can be converted to pointers.
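A minimal sketch with ctypes string buffers (the buffer names and sizes are illustrative):

```python
import ctypes

src = ctypes.create_string_buffer(b"hello")  # 6 bytes incl. trailing NUL
dst = ctypes.create_string_buffer(8)

# Copy 5 bytes from src to dst; both arguments are ctypes instances
# that can be converted to pointers.
ctypes.memmove(dst, src, 5)
print(dst.value)  # b'hello'
```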
doc_30376
Transform X into a (weighted) graph of k nearest neighbors The transformed data is a sparse graph as returned by kneighbors_graph. Read more in the User Guide. New in version 0.22. Parameters mode{‘distance’, ‘connectivity’}, default=’distance’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric. n_neighborsint, default=5 Number of neighbors for each sample in the transformed sparse graph. For compatibility reasons, as each sample is considered as its own neighbor, one extra neighbor will be computed when mode == ‘distance’. In this case, the sparse graph contains (n_neighbors + 1) neighbors. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstr or callable, default=’minkowski’ metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. 
Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. pint, default=2 Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. metric_paramsdict, default=None Additional keyword arguments for the metric function. n_jobsint, default=1 The number of parallel jobs to run for neighbors search. If -1, then the number of jobs is set to the number of CPU cores. Attributes effective_metric_str or callable The distance metric used. It will be same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter set to ‘minkowski’ and p parameter set to 2. effective_metric_params_dict Additional keyword arguments for the metric function. For most metrics will be same with metric_params parameter, but may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’. n_samples_fit_int Number of samples in the fitted data. Examples >>> from sklearn.manifold import Isomap >>> from sklearn.neighbors import KNeighborsTransformer >>> from sklearn.pipeline import make_pipeline >>> estimator = make_pipeline( ... KNeighborsTransformer(n_neighbors=5, mode='distance'), ... Isomap(neighbors_algorithm='precomputed')) Methods fit(X[, y]) Fit the k-nearest neighbors transformer from the training dataset. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point. kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X set_params(**params) Set the parameters of this estimator. transform(X) Computes the (weighted) graph of Neighbors for points in X fit(X, y=None) [source] Fit the k-nearest neighbors transformer from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. Returns selfKNeighborsTransformer The fitted k-nearest neighbors transformer. fit_transform(X, y=None) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Training set. yignored Returns Xtsparse matrix of shape (n_samples, n_samples) Xt[i, j] is assigned the weight of edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. kneighbors(X=None, n_neighbors=None, return_distance=True) [source] Finds the K-neighbors of a point. Returns indices of and distances to the neighbors of each point. Parameters Xarray-like, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. n_neighborsint, default=None Number of neighbors required for each sample. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. 
Returns neigh_distndarray of shape (n_queries, n_neighbors) Array representing the lengths to points, only present if return_distance=True neigh_indndarray of shape (n_queries, n_neighbors) Indices of the nearest points in the population matrix. Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1] >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(n_neighbors=1) >>> neigh.fit(samples) NearestNeighbors(n_neighbors=1) >>> print(neigh.kneighbors([[1., 1., 1.]])) (array([[0.5]]), array([[2]])) As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points: >>> X = [[0., 1., 0.], [1., 0., 1.]] >>> neigh.kneighbors(X, return_distance=False) array([[1], [2]]...) kneighbors_graph(X=None, n_neighbors=None, mode='connectivity') [source] Computes the (weighted) graph of k-Neighbors for points in X Parameters Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For metric='precomputed' the shape should be (n_queries, n_indexed). Otherwise the shape should be (n_queries, n_features). n_neighborsint, default=None Number of neighbors for each sample. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. 
Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data A[i, j] is assigned the weight of edge that connects i to j. The matrix is of CSR format. See also NearestNeighbors.radius_neighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(n_neighbors=2) >>> neigh.fit(X) NearestNeighbors(n_neighbors=2) >>> A = neigh.kneighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 1.], [1., 0., 1.]]) set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Computes the (weighted) graph of Neighbors for points in X Parameters Xarray-like of shape (n_samples_transform, n_features) Sample data. Returns Xtsparse matrix of shape (n_samples_transform, n_samples_fit) Xt[i, j] is assigned the weight of edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format.
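What the transformer computes in mode='distance' can be sketched with a brute-force NumPy illustration (not the scikit-learn implementation: the real transform returns a CSR sparse matrix and honours the algorithm/metric settings, and knn_distance_graph is a hypothetical helper):

```python
import numpy as np

def knn_distance_graph(X, n_neighbors):
    """Brute-force sketch of KNeighborsTransformer(mode='distance'):
    each row keeps the Euclidean distances to its n_neighbors + 1
    nearest samples (each sample counts as its own neighbor, hence
    the extra one), and zero elsewhere."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :n_neighbors + 1]
    rows = np.arange(len(X))[:, None]
    graph = np.zeros_like(d)
    graph[rows, nearest] = d[rows, nearest]
    return graph

G = knn_distance_graph([[0.0], [1.0], [3.0]], n_neighbors=1)
```

Each row retains n_neighbors + 1 entries, matching the note on mode == 'distance' in the parameter list above.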
doc_30377
See Migration guide for more details. tf.compat.v1.keras.applications.vgg19.preprocess_input tf.keras.applications.vgg19.preprocess_input( x, data_format=None ) Usage example with applications.MobileNet: i = tf.keras.layers.Input([None, None, 3], dtype = tf.uint8) x = tf.cast(i, tf.float32) x = tf.keras.applications.mobilenet.preprocess_input(x) core = tf.keras.applications.MobileNet() x = core(x) model = tf.keras.Model(inputs=[i], outputs=[x]) image = tf.image.decode_png(tf.io.read_file('file.png')) result = model(image) Arguments x A floating point numpy.array or a tf.Tensor, 3D or 4D with 3 color channels, with values in the range [0, 255]. The preprocessed data are written over the input data if the data types are compatible. To avoid this behaviour, numpy.copy(x) can be used. data_format Optional data format of the image tensor/array. Defaults to None, in which case the global setting tf.keras.backend.image_data_format() is used (unless you changed it, it defaults to "channels_last"). Returns Preprocessed numpy.array or a tf.Tensor with type float32. The images are converted from RGB to BGR, then each color channel is zero-centered with respect to the ImageNet dataset, without scaling. Raises ValueError In case of unknown data_format argument.
doc_30378
Convert x using the unit type of the xaxis. If the artist is not in contained in an Axes or if the xaxis does not have units, x itself is returned.
doc_30379
Compute the inertia tensor of the input image. Parameters imagearray The input image. muarray, optional The pre-computed central moments of image. The inertia tensor computation requires the central moments of the image. If an application requires both the central moments and the inertia tensor (for example, skimage.measure.regionprops), then it is more efficient to pre-compute them and pass them to the inertia tensor call. Returns Tarray, shape (image.ndim, image.ndim) The inertia tensor of the input image. \(T_{i, j}\) contains the covariance of image intensity along axes \(i\) and \(j\). References 1 https://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_tensor 2 Bernd Jähne. Spatio-Temporal Image Processing: Theory and Scientific Applications. (Chapter 8: Tensor Methods) Springer, 1993.
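For the 2D case, the tensor can be sketched from the central moments in plain NumPy (an illustration, assumed to follow the moment-of-inertia convention of swapped diagonals and a negated off-diagonal term, rather than the plain covariance matrix):

```python
import numpy as np

def inertia_tensor_2d(image):
    """2D sketch: inertia tensor from the central second moments of
    the intensity.  Diagonals are swapped (mu02, mu20) and the
    off-diagonal product is negated (moment-of-inertia convention)."""
    image = np.asarray(image, dtype=float)
    r, c = np.indices(image.shape)
    m00 = image.sum()
    r0, c0 = (r * image).sum() / m00, (c * image).sum() / m00
    dr, dc = r - r0, c - c0
    mu20 = (dr * dr * image).sum()
    mu02 = (dc * dc * image).sum()
    mu11 = (dr * dc * image).sum()
    return np.array([[mu02, -mu11],
                     [-mu11, mu20]]) / m00

T = inertia_tensor_2d(np.ones((2, 2)))  # uniform square -> 0.25 * identity
```

For a uniform 2x2 image this gives 0.25 times the identity: equal spread along both axes and no cross term.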
doc_30380
Get the matrix for the affine part of this transform.
doc_30381
returns a spherical interpolation to the given vector. slerp(Vector3, float) -> Vector3 Calculates the spherical interpolation from self to the given Vector. The second argument - often called t - must be in the range [-1, 1]. It parametrizes where - in between the two vectors - the result should be. If a negative value is given the interpolation will not take the complement of the shortest path.
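The interpolation can be sketched in plain NumPy (an illustration of the formula, not pygame's implementation; this sketch covers t in [0, 1] along the shortest path and interpolates the magnitude linearly):

```python
import numpy as np

def slerp(a, b, t):
    """Sketch of spherical interpolation for t in [0, 1]: the direction
    travels along the shortest arc between a and b while the magnitude
    is interpolated linearly.  (The pygame version also accepts
    negative t for the complement of the shortest path.)"""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    theta = np.arccos(np.clip(a @ b / (na * nb), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return a.copy()  # parallel vectors: nothing to interpolate
    arc = (np.sin((1 - t) * theta) * a / na
           + np.sin(t * theta) * b / nb) / np.sin(theta)
    return arc * ((1 - t) * na + t * nb)

mid = slerp([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.5)  # halfway along the 90° arc
```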
doc_30382
Return True if the object is a coroutine function (a function defined with an async def syntax). New in version 3.5. Changed in version 3.8: Functions wrapped in functools.partial() now return True if the wrapped function is a coroutine function.
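For example:

```python
import functools
import inspect

async def fetch():
    return 42

def plain():
    return 42

print(inspect.iscoroutinefunction(fetch))  # True
print(inspect.iscoroutinefunction(plain))  # False
# Since 3.8, a functools.partial wrapping a coroutine function also counts.
print(inspect.iscoroutinefunction(functools.partial(fetch)))  # True
```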
doc_30383
The type of methods of user-defined class instances.
doc_30384
The email package calls this method with the name and value currently stored in the Message when that header is requested by the application program, and whatever the method returns is what is passed back to the application as the value of the header being retrieved. Note that there may be more than one header with the same name stored in the Message; the method is passed the specific name and value of the header destined to be returned to the application. value may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the value returned by the method. There is no default implementation
doc_30385
See Migration guide for more details. tf.compat.v1.lite.TargetSpec tf.lite.TargetSpec( supported_ops=None, supported_types=None ) Details about target device. Converter optimizes the generated model for specific device. Attributes supported_ops Experimental flag, subject to change. Set of OpsSet options supported by the device. (default set([OpsSet.TFLITE_BUILTINS])) supported_types List of types for constant values on the target device. Frequently, an optimization choice is driven by the most compact (i.e. smallest) type in this list (default [tf.float32])
doc_30386
The tuple of arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.
doc_30387
Return an integer file descriptor for the socket on which the server is listening. This function is most commonly passed to selectors, to allow monitoring multiple servers in the same process.
doc_30388
Alias of torch.vstack().
doc_30389
Request the size of the file named filename on the server. On success, the size of the file is returned as an integer, otherwise None is returned. Note that the SIZE command is not standardized, but is supported by many common server implementations.
doc_30390
Call self as a function.
doc_30391
Returns the field instance given a name of a field. field_name can be the name of a field on the model, a field on an abstract or inherited model, or a field defined on another model that points to the model. In the latter case, the field_name will be (in order of preference) the related_query_name set by the user, the related_name set by the user, or the name automatically generated by Django. Hidden fields cannot be retrieved by name. If a field with the given name is not found a FieldDoesNotExist exception will be raised. >>> from django.contrib.auth.models import User # A field on the model >>> User._meta.get_field('username') <django.db.models.fields.CharField: username> # A field from another model that has a relation with the current model >>> User._meta.get_field('logentry') <ManyToOneRel: admin.logentry> # A non existent field >>> User._meta.get_field('does_not_exist') Traceback (most recent call last): ... FieldDoesNotExist: User has no field named 'does_not_exist'
doc_30392
Call self as a function.
doc_30393
Outputs the record to the file.
doc_30394
Return a new string which is an unquoted version of str. If str ends and begins with double quotes, they are stripped off. Likewise if str ends and begins with angle brackets, they are stripped off.
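Assuming this describes email.utils.unquote (the behaviour matches it), a quick sketch:

```python
from email.utils import unquote

print(unquote('"hello"'))         # double quotes stripped -> hello
print(unquote('<addr@example>'))  # angle brackets stripped -> addr@example
print(unquote('plain'))           # neither form: returned unchanged
```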
doc_30395
Compute the Haar-like features for a region of interest (ROI) of an integral image. Haar-like features have been successfully used for image classification and object detection [1]. They were used in the real-time face detection algorithm proposed in [2]. Parameters int_image(M, N) ndarray Integral image for which the features need to be computed. rint Row-coordinate of top left corner of the detection window. cint Column-coordinate of top left corner of the detection window. widthint Width of the detection window. heightint Height of the detection window. feature_typestr or list of str or None, optional The type of feature to consider: ‘type-2-x’: 2 rectangles varying along the x axis; ‘type-2-y’: 2 rectangles varying along the y axis; ‘type-3-x’: 3 rectangles varying along the x axis; ‘type-3-y’: 3 rectangles varying along the y axis; ‘type-4’: 4 rectangles varying along x and y axis. By default all features are extracted. If using with feature_coord, it should correspond to the feature type of each associated coordinate feature. feature_coordndarray of list of tuples or None, optional The array of coordinates to be extracted. This is useful when you want to recompute only a subset of features. In this case feature_type needs to be an array containing the type of each feature, as returned by haar_like_feature_coord(). By default, all coordinates are computed. Returns haar_features(n_features,) ndarray of int or float Resulting Haar-like features. Each value is equal to the subtraction of sums of the positive and negative rectangles. The data type depends on the data type of int_image: int when the data type of int_image is uint or int and float when the data type of int_image is float. Notes When extracting those features in parallel, be aware that the choice of the backend (i.e. multiprocessing vs threading) will have an impact on the performance. 
The rule of thumb is as follows: use multiprocessing when extracting features for all possible ROI in an image; use threading when extracting the feature at specific location for a limited number of ROIs. Refer to the example Face classification using Haar-like feature descriptor for more insights. References 1 https://en.wikipedia.org/wiki/Haar-like_feature 2 Oren, M., Papageorgiou, C., Sinha, P., Osuna, E., & Poggio, T. (1997, June). Pedestrian detection using wavelet templates. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (pp. 193-199). IEEE. http://tinyurl.com/y6ulxfta DOI:10.1109/CVPR.1997.609319 3 Viola, Paul, and Michael J. Jones. “Robust real-time face detection.” International journal of computer vision 57.2 (2004): 137-154. https://www.merl.com/publications/docs/TR2004-043.pdf DOI:10.1109/CVPR.2001.990517 Examples >>> import numpy as np >>> from skimage.transform import integral_image >>> from skimage.feature import haar_like_feature >>> img = np.ones((5, 5), dtype=np.uint8) >>> img_ii = integral_image(img) >>> feature = haar_like_feature(img_ii, 0, 0, 5, 5, 'type-3-x') >>> feature array([-1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -4, -1, -2, -3, -1, -2, -3, -1, -2, -3, -1, -2, -1, -2, -1, -2, -1, -1, -1]) You can compute the feature for some pre-computed coordinates. >>> from skimage.feature import haar_like_feature_coord >>> feature_coord, feature_type = zip( ... *[haar_like_feature_coord(5, 5, feat_t) ... for feat_t in ('type-2-x', 'type-3-x')]) >>> # only select one feature over two >>> feature_coord = np.concatenate([x[::2] for x in feature_coord]) >>> feature_type = np.concatenate([x[::2] for x in feature_type]) >>> feature = haar_like_feature(img_ii, 0, 0, 5, 5, ... feature_type=feature_type, ... 
feature_coord=feature_coord) >>> feature array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, -3, -1, -3, -1, -3, -1, -3, -1, -3, -1, -3, -1, -3, -2, -1, -3, -2, -2, -2, -1])
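The integral-image arithmetic these features rely on can be sketched in plain NumPy (integral_image and box_sum here are illustrative helpers; box_sum is not part of the skimage API):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = img[:r+1, :c+1].sum()."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of the source image over rows r0..r1 and cols c0..c1
    (inclusive), in O(1) from the integral image."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

img = np.ones((5, 5), dtype=np.uint8)
ii = integral_image(img)
# A 'type-2-x'-style feature: left rectangle minus right rectangle.
feature = box_sum(ii, 0, 0, 4, 1) - box_sum(ii, 0, 2, 4, 3)
```

On a constant image any two-rectangle feature is zero, consistent with the positive-minus-negative definition above.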
doc_30396
Works like a TypeConversionDict but does not support modifications. Changelog New in version 0.5. copy() Return a shallow mutable copy of this object. Keep in mind that the standard library’s copy() function is a no-op for this class, as it is for any other Python immutable type (e.g. tuple).
doc_30397
This property returns the area of the Geometry.
doc_30398
True if this transform is separable in the x- and y- dimensions.
doc_30399
Removes a named value from a registry key. key is an already open key, or one of the predefined HKEY_* constants. value is a string that identifies the value to remove. Raises an auditing event winreg.DeleteValue with arguments key, value.