doc_27300
See torch.renorm()
doc_27301
A non-callable version of MagicMock. The constructor parameters have the same meaning as for MagicMock, with the exception of return_value and side_effect which have no meaning on a non-callable mock.
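As a quick illustration (not part of the original docs), a NonCallableMagicMock configures attributes exactly like a MagicMock but rejects being called itself:

```python
from unittest.mock import NonCallableMagicMock

# Attribute access and configuration work as on MagicMock...
m = NonCallableMagicMock()
m.method.return_value = 42
result = m.method()

# ...but calling the mock object itself raises TypeError.
try:
    m()
    called = True
except TypeError:
    called = False
```

This is useful when mocking objects that should never be invoked directly, so an accidental call fails loudly instead of silently returning another mock.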
doc_27302
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
doc_27303
Initialize a graph with edges, name, or graph attributes. Parameters incoming_graph_datainput graph (optional, default: None) Data to initialize graph. If None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. If the corresponding optional Python packages are installed the data can also be a NumPy matrix or 2d ndarray, a SciPy sparse matrix, or a PyGraphviz graph. attrkeyword arguments, optional (default= no attributes) Attributes to add to graph as key=value pairs. See also convert Examples >>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc >>> G = nx.Graph(name="my graph") >>> e = [(1, 2), (2, 3), (3, 4)] # list of edges >>> G = nx.Graph(e) Arbitrary graph attribute pairs (key=value) may be assigned >>> G = nx.Graph(e, day="Friday") >>> G.graph {'day': 'Friday'}
doc_27304
Evaluate a 3-D polynomial at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * x^i * y^j * z^k\] The parameters x, y, and z are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either x, y, and z or their elements must support multiplication and addition both with themselves and with the elements of c. If c has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters x, y, zarray_like, compatible object The three dimensional series is evaluated at the points (x, y, z), where x, y, and z must have the same shape. If any of x, y, or z is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. carray_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in c[i,j,k]. If c has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns valuesndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from x, y, and z. See also polyval, polyval2d, polygrid2d, polygrid3d Notes New in version 1.7.0.
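A minimal sketch of the coefficient ordering described above, assuming NumPy is available; the coefficient array and evaluation point are chosen purely for illustration:

```python
import numpy as np
from numpy.polynomial.polynomial import polyval3d

# c[i, j, k] is the coefficient of x**i * y**j * z**k.
c = np.zeros((2, 2, 2))
c[0, 0, 0] = 1.0   # constant term
c[1, 1, 1] = 2.0   # term 2 * x * y * z

# p(2, 3, 4) = 1 + 2 * 2 * 3 * 4 = 49
val = polyval3d(2.0, 3.0, 4.0, c)
```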
doc_27305
See Migration guide for more details. tf.compat.v1.io.decode_proto tf.io.decode_proto( bytes, message_type, field_names, output_types, descriptor_source='local://', message_format='binary', sanitize=False, name=None ) The decode_proto op extracts fields from a serialized protocol buffers message into tensors. The fields in field_names are decoded and converted to the corresponding output_types if possible. A message_type name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or a filename provided by the caller using the descriptor_source attribute. Each output tensor is a dense tensor. This means that it is padded to hold the largest number of repeated elements seen in the input minibatch. (The shape is also padded by one to prevent zero-sized dimensions). The actual repeat counts for each example in the minibatch can be found in the sizes output. In many cases the output of decode_proto is fed immediately into tf.squeeze if missing values are not a concern. When using tf.squeeze, always pass the squeeze dimension explicitly to avoid surprises. For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases: A proto field that contains a submessage or group can only be converted to DT_STRING (the serialized submessage). This is to reduce the complexity of the API. The resulting string can be used as input to another instance of the decode_proto op. TensorFlow lacks support for unsigned integers. The ops represent uint64 types as a DT_INT64 with the same twos-complement bit pattern (the obvious way). Unsigned int32 values can be represented exactly by specifying type DT_INT64, or using twos-complement if the caller specifies DT_INT32 in the output_types attribute. Both binary and text proto serializations are supported, and can be chosen using the format attribute. 
The descriptor_source attribute selects the source of protocol descriptors to consult when looking up message_type. This may be: An empty string or "local://", in which case protocol descriptors are created for C++ (not Python) proto definitions linked to the binary. A file, in which case protocol descriptors are created from the file, which is expected to contain a FileDescriptorSet serialized as a string. NOTE: You can build a descriptor_source file using the --descriptor_set_out and --include_imports options to the protocol compiler protoc. A "bytes://", in which protocol descriptors are created from <bytes>, which is expected to be a FileDescriptorSet serialized as a string. Args bytes A Tensor of type string. Tensor of serialized protos with shape batch_shape. message_type A string. Name of the proto message type to decode. field_names A list of strings. List of strings containing proto field names. An extension field can be decoded by using its full name, e.g. EXT_PACKAGE.EXT_FIELD_NAME. output_types A list of tf.DTypes. List of TF types to use for the respective field in field_names. descriptor_source An optional string. Defaults to "local://". Either the special value local:// or a path to a file containing a serialized FileDescriptorSet. message_format An optional string. Defaults to "binary". Either binary or text. sanitize An optional bool. Defaults to False. Whether to sanitize the result or not. name A name for the operation (optional). Returns A tuple of Tensor objects (sizes, values). sizes A Tensor of type int32. values A list of Tensor objects of type output_types.
doc_27306
repository revision of the build rev = 'a6f89747b551+' The Mercurial node identifier of the repository checkout from which this package was built. If the identifier ends with a plus sign '+' then the package contains uncommitted changes. Please include this revision number in bug reports, especially for non-release pygame builds. Important note: pygame development has moved to GitHub, so this variable is obsolete; since v1.9.5 it has always returned an empty string "". Changed in pygame 1.9.5: Always returns an empty string "".
doc_27307
tf.experimental.numpy.divmod( x1, x2 ) Unsupported arguments: out1, out2, out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.divmod.
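Since the TF op mirrors the semantics of numpy.divmod, the behavior can be illustrated with NumPy itself (assuming NumPy is installed; TensorFlow is not required for this sketch):

```python
import numpy as np

# Elementwise floor-division quotient and remainder in one call.
q, r = np.divmod(np.array([7, -7, 9]), 3)

# The identity q * 3 + r reconstructs the input; note the remainder
# takes the sign of the divisor, so -7 -> quotient -3, remainder 2.
```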
doc_27308
Bases: matplotlib.dates.RRuleLocator Make ticks on occurrences of each hour. Mark every hour in byhour; byhour can be an int or sequence. Default is to tick every hour: byhour=range(24) interval is the interval between each iteration. For example, if interval=2, mark every second occurrence.
doc_27309
Fit a model to data with the RANSAC (random sample consensus) algorithm. RANSAC is an iterative algorithm for the robust estimation of parameters from a subset of inliers from the complete data set. Each iteration performs the following tasks: Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid). Estimate a model to the random subset (model_cls.estimate(*data[random_subset]) and check whether the estimated model is valid (see is_model_valid). Classify all data as inliers or outliers by calculating the residuals to the estimated model (model_cls.residuals(*data)) - all data samples with residuals smaller than the residual_threshold are considered as inliers. Save estimated model as best model if number of inlier samples is maximal. In case the current estimated model has the same number of inliers, it is only considered as the best model if it has less sum of residuals. These steps are performed either a maximum number of times or until one of the special stop criteria are met. The final model is estimated using all inlier samples of the previously determined best model. Parameters data[list, tuple of] (N, …) array Data set to which the model is fitted, where N is the number of data points and the remaining dimension are depending on model requirements. If the model class requires multiple input data arrays (e.g. source and destination coordinates of skimage.transform.AffineTransform), they can be optionally passed as tuple or list. Note, that in this case the functions estimate(*data), residuals(*data), is_model_valid(model, *random_data) and is_data_valid(*random_data) must all take each data array as separate arguments. model_classobject Object with the following object methods: success = estimate(*data) residuals(*data) where success indicates whether the model estimation succeeded (True or None for success, False for failure). 
min_samplesint in range (0, N) The minimum number of data points to fit a model to. residual_thresholdfloat larger than 0 Maximum distance for a data point to be classified as an inlier. is_data_validfunction, optional This function is called with the randomly selected data before the model is fitted to it: is_data_valid(*random_data). is_model_validfunction, optional This function is called with the estimated model and the randomly selected data: is_model_valid(model, *random_data). max_trialsint, optional Maximum number of iterations for random sample selection. stop_sample_numint, optional Stop iteration if at least this number of inliers are found. stop_residuals_sumfloat, optional Stop iteration if sum of residuals is less than or equal to this threshold. stop_probabilityfloat in range [0, 1], optional RANSAC iteration stops if at least one outlier-free set of the training data is sampled with probability >= stop_probability, depending on the current best model’s inlier ratio and the number of trials. This requires generating at least N samples (trials): N >= log(1 - probability) / log(1 - e**m) where the probability (confidence) is typically set to a high value such as 0.99, e is the current fraction of inliers w.r.t. the total number of samples, and m is the min_samples value. random_stateint, RandomState instance or None, optional If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. initial_inliersarray-like of bool, shape (N,), optional Initial sample selection for model estimation Returns modelobject Best model with largest consensus set. inliers(N, ) array Boolean mask of inliers classified as True.
References 1 “RANSAC”, Wikipedia, https://en.wikipedia.org/wiki/RANSAC Examples Generate ellipse data without tilt and add noise: >>> t = np.linspace(0, 2 * np.pi, 50) >>> xc, yc = 20, 30 >>> a, b = 5, 10 >>> x = xc + a * np.cos(t) >>> y = yc + b * np.sin(t) >>> data = np.column_stack([x, y]) >>> np.random.seed(seed=1234) >>> data += np.random.normal(size=data.shape) Add some faulty data: >>> data[0] = (100, 100) >>> data[1] = (110, 120) >>> data[2] = (120, 130) >>> data[3] = (140, 130) Estimate ellipse model using all available data: >>> model = EllipseModel() >>> model.estimate(data) True >>> np.round(model.params) array([ 72., 75., 77., 14., 1.]) Estimate ellipse model using RANSAC: >>> ransac_model, inliers = ransac(data, EllipseModel, 20, 3, max_trials=50) >>> abs(np.round(ransac_model.params)) array([20., 30., 5., 10., 0.]) >>> inliers array([False, False, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True], dtype=bool) >>> sum(inliers) > 40 True RANSAC can be used to robustly estimate a geometric transformation. In this section, we also show how to use a proportion of the total samples, rather than an absolute number. >>> from skimage.transform import SimilarityTransform >>> np.random.seed(0) >>> src = 100 * np.random.rand(50, 2) >>> model0 = SimilarityTransform(scale=0.5, rotation=1, translation=(10, 20)) >>> dst = model0(src) >>> dst[0] = (10000, 10000) >>> dst[1] = (-100, 100) >>> dst[2] = (50, 50) >>> ratio = 0.5 # use half of the samples >>> min_samples = int(ratio * len(src)) >>> model, inliers = ransac((src, dst), SimilarityTransform, min_samples, 10, ... 
initial_inliers=np.ones(len(src), dtype=bool)) >>> inliers array([False, False, False, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True])
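The sample/estimate/classify/keep-best loop described above can be sketched in plain NumPy. `ransac_line` below is a hypothetical helper for fitting y = a*x + b, not the skimage implementation; it omits the validity callbacks, stop criteria, and residual-sum tie-breaking for brevity:

```python
import numpy as np

def ransac_line(x, y, min_samples=2, residual_threshold=0.5,
                max_trials=100, rng=None):
    """Robustly fit y = a*x + b (illustrative sketch, not skimage.ransac)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(max_trials):
        # 1. Draw a random minimal sample.
        idx = rng.choice(len(x), size=min_samples, replace=False)
        # 2. Estimate a model from the sample (least-squares line).
        a, b = np.polyfit(x[idx], y[idx], 1)
        # 3. Classify all points by residual against the candidate model.
        inliers = np.abs(y - (a * x + b)) < residual_threshold
        # 4. Keep the model with the largest consensus set.
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final model: refit using all inliers of the best candidate.
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return (a, b), best_inliers

# Usage: a clean line with a few gross outliers injected.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0
y[:5] += 20.0
(params, inliers) = ransac_line(x, y, rng=0)
```

Despite the outliers dominating an ordinary least-squares fit, the consensus step recovers the underlying slope and intercept from the 45 clean points.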
doc_27310
Check whether an array-like or dtype is of a DatetimeTZDtype dtype. Parameters arr_or_dtype:array-like or dtype The array-like or dtype to check. Returns boolean Whether or not the array-like or dtype is of a DatetimeTZDtype dtype. Examples >>> is_datetime64tz_dtype(object) False >>> is_datetime64tz_dtype([1, 2, 3]) False >>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3])) # tz-naive False >>> is_datetime64tz_dtype(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern")) True >>> dtype = DatetimeTZDtype("ns", tz="US/Eastern") >>> s = pd.Series([], dtype=dtype) >>> is_datetime64tz_dtype(dtype) True >>> is_datetime64tz_dtype(s) True
doc_27311
Escape a string. Calls escape() and ensures that for subclasses the correct type is returned. Parameters s (Any) – Return type markupsafe.Markup
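markupsafe may not be installed everywhere; the standard library's html.escape demonstrates the same escaping idea (this is a stdlib analogue for illustration, not markupsafe's API, and it returns a plain str rather than a Markup subclass):

```python
import html

# Replaces &, <, > and (by default) quote characters with entities.
escaped = html.escape('<b>"quoted" & more</b>')
```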
doc_27312
Set the character stream (a text file) for this input source. If there is a character stream specified, the SAX parser will ignore any byte stream and will not attempt to open a URI connection to the system identifier.
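A small sketch using the standard library's xml.sax; the RootGrabber handler is a hypothetical name introduced here to show that the parser reads from the character stream rather than opening the system identifier:

```python
import io
import xml.sax
from xml.sax.xmlreader import InputSource

class RootGrabber(xml.sax.ContentHandler):
    """Records the name of the first (root) element seen."""
    def __init__(self):
        super().__init__()
        self.root = None

    def startElement(self, name, attrs):
        if self.root is None:
            self.root = name

# Parse directly from an in-memory text stream.
source = InputSource()
source.setCharacterStream(io.StringIO("<doc><item/></doc>"))

handler = RootGrabber()
xml.sax.parse(source, handler)
```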
doc_27313
Find the unique elements of an array. Returns the sorted unique elements of an array. There are three optional outputs in addition to the unique elements: the indices of the input array that give the unique values the indices of the unique array that reconstruct the input array the number of times each unique value comes up in the input array Parameters ararray_like Input array. Unless axis is specified, this will be flattened if it is not already 1-D. return_indexbool, optional If True, also return the indices of ar (along the specified axis, if provided, or in the flattened array) that result in the unique array. return_inversebool, optional If True, also return the indices of the unique array (for the specified axis, if provided) that can be used to reconstruct ar. return_countsbool, optional If True, also return the number of times each unique item appears in ar. New in version 1.9.0. axisint or None, optional The axis to operate on. If None, ar will be flattened. If an integer, the subarrays indexed by the given axis will be flattened and treated as the elements of a 1-D array with the dimension of the given axis, see the notes for more details. Object arrays or structured arrays that contain objects are not supported if the axis kwarg is used. The default is None. New in version 1.13.0. Returns uniquendarray The sorted unique values. unique_indicesndarray, optional The indices of the first occurrences of the unique values in the original array. Only provided if return_index is True. unique_inversendarray, optional The indices to reconstruct the original array from the unique array. Only provided if return_inverse is True. unique_countsndarray, optional The number of times each of the unique values comes up in the original array. Only provided if return_counts is True. New in version 1.9.0. See also numpy.lib.arraysetops Module with a number of other functions for performing set operations on arrays. repeat Repeat elements of an array. 
Notes When an axis is specified the subarrays indexed by the axis are sorted. This is done by making the specified axis the first dimension of the array (move the axis to the first dimension to keep the order of the other axes) and then flattening the subarrays in C order. The flattened subarrays are then viewed as a structured type with each element given a label, with the effect that we end up with a 1-D array of structured types that can be treated in the same way as any other 1-D array. The result is that the flattened subarrays are sorted in lexicographic order starting with the first element. Examples >>> np.unique([1, 1, 2, 2, 3, 3]) array([1, 2, 3]) >>> a = np.array([[1, 1], [2, 3]]) >>> np.unique(a) array([1, 2, 3]) Return the unique rows of a 2D array >>> a = np.array([[1, 0, 0], [1, 0, 0], [2, 3, 4]]) >>> np.unique(a, axis=0) array([[1, 0, 0], [2, 3, 4]]) Return the indices of the original array that give the unique values: >>> a = np.array(['a', 'b', 'b', 'c', 'a']) >>> u, indices = np.unique(a, return_index=True) >>> u array(['a', 'b', 'c'], dtype='<U1') >>> indices array([0, 1, 3]) >>> a[indices] array(['a', 'b', 'c'], dtype='<U1') Reconstruct the input array from the unique values and inverse: >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) >>> u, indices = np.unique(a, return_inverse=True) >>> u array([1, 2, 3, 4, 6]) >>> indices array([0, 1, 4, 3, 1, 2, 1]) >>> u[indices] array([1, 2, 6, 4, 2, 3, 2]) Reconstruct the input values from the unique values and counts: >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) >>> values, counts = np.unique(a, return_counts=True) >>> values array([1, 2, 3, 4, 6]) >>> counts array([1, 3, 1, 1, 1]) >>> np.repeat(values, counts) array([1, 2, 2, 2, 3, 4, 6]) # original order not preserved
doc_27314
Return base to the power exp; if mod is present, return base to the power exp, modulo mod (computed more efficiently than pow(base, exp) % mod). The two-argument form pow(base, exp) is equivalent to using the power operator: base**exp. The arguments must have numeric types. With mixed operand types, the coercion rules for binary arithmetic operators apply. For int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, 10**2 returns 100, but 10**-2 returns 0.01. For int operands base and exp, if mod is present, mod must also be of integer type and mod must be nonzero. If mod is present and exp is negative, base must be relatively prime to mod. In that case, pow(inv_base, -exp, mod) is returned, where inv_base is an inverse to base modulo mod. Here’s an example of computing an inverse for 38 modulo 97: >>> pow(38, -1, mod=97) 23 >>> 23 * 38 % 97 == 1 True Changed in version 3.8: For int operands, the three-argument form of pow now allows the second argument to be negative, permitting computation of modular inverses. Changed in version 3.8: Allow keyword arguments. Formerly, only positional arguments were supported.
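A short sketch of the three-argument form, extending the doc's own modular-inverse example (requires Python 3.8+ for the negative exponent):

```python
# Modular inverse via the three-argument form.
inv = pow(38, -1, 97)

# Efficient modular exponentiation: same value as 7**128 % 1000,
# but computed without materializing the huge intermediate power.
big = pow(7, 128, 1000)
```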
doc_27315
See Migration guide for more details. tf.compat.v1.raw_ops.Abs tf.raw_ops.Abs( x, name=None ) Given a tensor x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\). Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_27316
Does basic configuration for the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger. This function does nothing if the root logger already has handlers configured, unless the keyword argument force is set to True. Note This function should be called from the main thread before other threads are started. In versions of Python prior to 2.7.1 and 3.2, if this function is called from multiple threads, it is possible (in rare circumstances) that a handler will be added to the root logger more than once, leading to unexpected results such as messages being duplicated in the log. The following keyword arguments are supported. Format Description filename Specifies that a FileHandler be created, using the specified filename, rather than a StreamHandler. filemode If filename is specified, open the file in this mode. Defaults to 'a'. format Use the specified format string for the handler. Defaults to attributes levelname, name and message separated by colons. datefmt Use the specified date/time format, as accepted by time.strftime(). style If format is specified, use this style for the format string. One of '%', '{' or '$' for printf-style, str.format() or string.Template respectively. Defaults to '%'. level Set the root logger level to the specified level. stream Use the specified stream to initialize the StreamHandler. Note that this argument is incompatible with filename - if both are present, a ValueError is raised. handlers If specified, this should be an iterable of already created handlers to add to the root logger. Any handlers which don’t already have a formatter set will be assigned the default formatter created in this function. Note that this argument is incompatible with filename or stream - if both are present, a ValueError is raised. 
force If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments. encoding If this keyword argument is specified along with filename, its value is used when the FileHandler is created, and thus used when opening the output file. errors If this keyword argument is specified along with filename, its value is used when the FileHandler is created, and thus used when opening the output file. If not specified, the value ‘backslashreplace’ is used. Note that if None is specified, it will be passed as such to open(), which means that it will be treated the same as passing ‘errors’. Changed in version 3.2: The style argument was added. Changed in version 3.3: The handlers argument was added. Additional checks were added to catch situations where incompatible arguments are specified (e.g. handlers together with stream or filename, or stream together with filename). Changed in version 3.8: The force argument was added. Changed in version 3.9: The encoding and errors arguments were added.
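A minimal sketch routing output to an in-memory stream; force=True (Python 3.8+) is used so the call takes effect even if the root logger was already configured:

```python
import io
import logging

buf = io.StringIO()
logging.basicConfig(stream=buf, level=logging.INFO,
                    format="%(levelname)s:%(name)s:%(message)s",
                    force=True)

# Module-level convenience functions route through the root logger.
logging.warning("disk %s%% full", 93)
log_text = buf.getvalue()
```

Note that stream and filename are mutually exclusive, as described above; passing both raises ValueError.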
doc_27317
Return a new hmac object. key is a bytes or bytearray object giving the secret key. If msg is present, the method call update(msg) is made. digestmod is the digest name, digest constructor or module for the HMAC object to use. It may be any name suitable to hashlib.new(). Despite its argument position, it is required. Changed in version 3.4: Parameter key can be a bytes or bytearray object. Parameter msg can be of any type supported by hashlib. Parameter digestmod can be the name of a hash algorithm. Deprecated since version 3.4, removed in version 3.8: MD5 as implicit default digest for digestmod is deprecated. The digestmod parameter is now required. Pass it as a keyword argument to avoid awkwardness when you do not have an initial msg.
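A minimal sketch passing digestmod as a hash-algorithm name, checked against the widely published HMAC-SHA256 test vector for this key/message pair:

```python
import hmac

# digestmod is required; a hashlib algorithm name is accepted.
mac = hmac.new(b"key",
               b"The quick brown fox jumps over the lazy dog",
               digestmod="sha256")
digest = mac.hexdigest()
```

When comparing digests (e.g. verifying a signature), use hmac.compare_digest rather than == to avoid timing attacks.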
doc_27318
Check whether the provided array or dtype is of the datetime64[ns] dtype. Parameters arr_or_dtype:array-like or dtype The array or dtype to check. Returns bool Whether or not the array or dtype is of the datetime64[ns] dtype. Examples >>> is_datetime64_ns_dtype(str) False >>> is_datetime64_ns_dtype(int) False >>> is_datetime64_ns_dtype(np.datetime64) # no unit False >>> is_datetime64_ns_dtype(DatetimeTZDtype("ns", "US/Eastern")) True >>> is_datetime64_ns_dtype(np.array(['a', 'b'])) False >>> is_datetime64_ns_dtype(np.array([1, 2])) False >>> is_datetime64_ns_dtype(np.array([], dtype="datetime64")) # no unit False >>> is_datetime64_ns_dtype(np.array([], dtype="datetime64[ps]")) # wrong unit False >>> is_datetime64_ns_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]")) True
doc_27319
tf.compat.v1.metrics.precision_at_top_k( labels, predictions_idx, k=None, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None ) Differs from sparse_precision_at_k in that predictions must be in the form of top k class indices, whereas sparse_precision_at_k expects logits. Refer to sparse_precision_at_k for more details. Args labels int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels] or [D1, ... DN], where the latter implies num_labels=1. N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range are ignored. predictions_idx Integer Tensor with shape [D1, ... DN, k] where N >= 1. Commonly, N=1 and predictions has shape [batch size, k]. The final dimension contains the top k predicted class indices. [D1, ... DN] must match labels. k Integer, k for @k metric. Only used for the default op name. class_id Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN. weights Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension). metrics_collections An optional list of collections that values should be added to. updates_collections An optional list of collections that updates should be added to. name Name of new update operation, and namespace for other dependent ops. Returns precision Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_positives. 
update_op Operation that increments true_positives and false_positives variables appropriately, and whose value matches precision. Raises ValueError If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple. RuntimeError If eager execution is enabled.
doc_27320
Create a new Font instance from a supported font file. Font(file, size=0, font_index=0, resolution=0, ucs4=False) -> Font Argument file can be either a string representing the font's filename, a file-like object containing the font, or None; if None, a default, Pygame, font is used. Optionally, a size argument may be specified to set the default size in points, which determines the size of the rendered characters. The size can also be passed explicitly to each method call. Because of the way the caching system works, specifying a default size on the constructor doesn't imply a performance gain over manually passing the size on each function call. If the font is bitmap and no size is given, the default size is set to the first available size for the font. If the font file has more than one font, the font to load can be chosen with the index argument. An exception is raised for an out-of-range font index value. The optional resolution argument sets the pixel size, in dots per inch, for use in scaling glyphs for this Font instance. If 0 then the default module value, set by init(), is used. The Font object's resolution can only be changed by re-initializing the Font instance. The optional ucs4 argument, an integer, sets the default text translation mode: 0 (False) recognize UTF-16 surrogate pairs, any other value (True), to treat Unicode text as UCS-4, with no surrogate pairs. See Font.ucs4. name Proper font name. name -> string Read only. Returns the real (long) name of the font, as recorded in the font file. path Font file path path -> unicode Read only. Returns the path of the loaded font file size The default point size used in rendering size -> float size -> (float, float) Get or set the default size for text metrics and rendering. It can be a single point size, given as a Python int or float, or a font ppem (width, height) tuple. Size values are non-negative. A zero size or width represents an undefined size. 
In this case the size must be given as a method argument, or an exception is raised. A zero width but non-zero height is a ValueError. For a scalable font, a single number value is equivalent to a tuple with width equal height. A font can be stretched vertically with height set greater than width, or horizontally with width set greater than height. For embedded bitmaps, as listed by get_sizes(), use the nominal width and height to select an available size. Font size differs for a non-scalable, bitmap, font. During a method call it must match one of the available sizes returned by method get_sizes(). If not, an exception is raised. If the size is a single number, the size is first matched against the point size value. If no match, then the available size with the same nominal width and height is chosen. get_rect() Return the size and offset of rendered text get_rect(text, style=STYLE_DEFAULT, rotation=0, size=0) -> rect Gets the final dimensions and origin, in pixels, of text using the optional size in points, style, and rotation. For other relevant render properties, and for any optional argument not given, the default values set for the Font instance are used. Returns a Rect instance containing the width and height of the text's bounding box and the position of the text's origin. The origin is useful in aligning separately rendered pieces of text. It gives the baseline position and bearing at the start of the text. See the render_to() method for an example. If text is a char (byte) string, its encoding is assumed to be LATIN1. Optionally, text can be None, which will return the bounding rectangle for the text passed to a previous get_rect(), render(), render_to(), render_raw(), or render_raw_to() call. See render_to() for more details. get_metrics() Return the glyph metrics for the given text get_metrics(text, size=0) -> [(...), ...] Returns the glyph metrics for each character in text. The glyph metrics are returned as a list of tuples. 
Each tuple gives metrics of a single character glyph. The glyph metrics are: (min_x, max_x, min_y, max_y, horizontal_advance_x, horizontal_advance_y) The bounding box min_x, max_x, min_y, and max_y values are returned as grid-fitted pixel coordinates of type int. The advance values are float values. The calculations are done using the font's default size in points. Optionally you may specify another point size with the size argument. The metrics are adjusted for the current rotation, strong, and oblique settings. If text is a char (byte) string, then its encoding is assumed to be LATIN1. height The unscaled height of the font in font units height -> int Read only. Gets the height of the font. This is the average value of all glyphs in the font. ascender The unscaled ascent of the font in font units ascender -> int Read only. Return the number of units from the font's baseline to the top of the bounding box. descender The unscaled descent of the font in font units descender -> int Read only. Return the height in font units for the font descent. The descent is the number of units from the font's baseline to the bottom of the bounding box. get_sized_ascender() The scaled ascent of the font in pixels get_sized_ascender(<size>=0) -> int Return the number of pixels from the font's baseline to the top of the bounding box. It is not adjusted for strong or rotation. get_sized_descender() The scaled descent of the font in pixels get_sized_descender(<size>=0) -> int Return the number of pixels from the font's baseline to the bottom of the bounding box. It is not adjusted for strong or rotation. get_sized_height() The scaled height of the font in pixels get_sized_height(<size>=0) -> int Returns the height of the font. This is the average value of all glyphs in the font. It is not adjusted for strong or rotation. 
get_sized_glyph_height() The scaled bounding box height of the font in pixels get_sized_glyph_height(<size>=0) -> int Return the glyph bounding box height of the font in pixels. This is the average value of all glyphs in the font. It is not adjusted for strong or rotation. get_sizes() return the available sizes of embedded bitmaps get_sizes() -> [(int, int, int, float, float), ...] get_sizes() -> [] Returns a list of tuple records, one for each point size supported. Each tuple contains the point size, the height in pixels, width in pixels, horizontal ppem (nominal width) in fractional pixels, and vertical ppem (nominal height) in fractional pixels. render() Return rendered text as a surface render(text, fgcolor=None, bgcolor=None, style=STYLE_DEFAULT, rotation=0, size=0) -> (Surface, Rect) Returns a new Surface, with the text rendered to it in the color given by 'fgcolor'. If no foreground color is given, the default foreground color fgcolor is used. If bgcolor is given, the surface will be filled with this color. When no background color is given, the surface background is transparent, zero alpha. Normally the returned surface has a 32 bit pixel size. However, if bgcolor is None and anti-aliasing is disabled a monochrome 8 bit colorkey surface, with colorkey set for the background color, is returned. The return value is a tuple: the new surface and the bounding rectangle giving the size and origin of the rendered text. If an empty string is passed for text then the returned Rect is zero width and the height of the font. Optional fgcolor, style, rotation, and size arguments override the default values set for the Font instance. If text is a char (byte) string, then its encoding is assumed to be LATIN1. Optionally, text can be None, which will render the text passed to a previous get_rect(), render(), render_to(), render_raw(), or render_raw_to() call. See render_to() for details. 
render_to() Render text onto an existing surface render_to(surf, dest, text, fgcolor=None, bgcolor=None, style=STYLE_DEFAULT, rotation=0, size=0) -> Rect Renders the string text to the pygame.Surface surf, at position dest, an (x, y) surface coordinate pair. If either x or y is not an integer it is converted to one if possible. Any sequence where the first two items are x and y positional elements is accepted, including a Rect instance. As with render(), optional fgcolor, style, rotation, and size arguments are available. If a background color bgcolor is given, the text bounding box is first filled with that color. The text is blitted next. Both the background fill and text rendering involve full alpha blits. That is, the alpha values of the foreground, background, and destination target surface all affect the blit. The return value is a rectangle giving the size and position of the rendered text within the surface. If an empty string is passed for text then the returned Rect is zero width and the height of the font. The rect will test False. Optionally, text can be set to None, which will re-render the text passed to a previous render_to(), get_rect(), render(), render_raw(), or render_raw_to() call. Primarily, this feature is an aid to using render_to() in combination with get_rect(). 
An example:

def word_wrap(surf, text, font, color=(0, 0, 0)):
    font.origin = True
    words = text.split(' ')
    width, height = surf.get_size()
    line_spacing = font.get_sized_height() + 2
    x, y = 0, line_spacing
    space = font.get_rect(' ')
    for word in words:
        bounds = font.get_rect(word)
        if x + bounds.width + bounds.x >= width:
            x, y = 0, y + line_spacing
        if x + bounds.width + bounds.x >= width:
            raise ValueError("word too wide for the surface")
        if y + bounds.height - bounds.y >= height:
            raise ValueError("text too long for the surface")
        font.render_to(surf, (x, y), None, color)
        x += bounds.width + space.width
    return x, y

When render_to() is called with the same font properties ― size, style, strength, wide, antialiased, vertical, rotation, kerning, and use_bitmap_strikes ― as get_rect(), render_to() will use the layout calculated by get_rect(). Otherwise, render_to() will recalculate the layout if called with a text string or if one of the above properties has changed after the get_rect() call. If text is a char (byte) string, then its encoding is assumed to be LATIN1. render_raw() Return rendered text as a string of bytes render_raw(text, style=STYLE_DEFAULT, rotation=0, size=0, invert=False) -> (bytes, (int, int)) Like render() but with the pixels returned as a byte string of 8-bit gray-scale values. The foreground color is 255, the background 0, useful as an alpha mask for a foreground pattern. render_raw_to() Render text into an array of ints render_raw_to(array, text, dest=None, style=STYLE_DEFAULT, rotation=0, size=0, invert=False) -> Rect Render to an array object exposing an array struct interface. The array must be two dimensional with integer items. The default dest value, None, is equivalent to position (0, 0). See render_to(). As with the other render methods, text can be None to render a text string passed previously to another method. The return value is a pygame.Rect() giving the size and position of the rendered text. 
style The font's style flags style -> int Gets or sets the default style of the Font. This default style will be used for all text rendering and size calculations unless overridden specifically in a render or get_rect() call. The style value may be a bit-wise OR of one or more of the following constants: STYLE_NORMAL STYLE_UNDERLINE STYLE_OBLIQUE STYLE_STRONG STYLE_WIDE STYLE_DEFAULT These constants may be found on the FreeType constants module. Optionally, the default style can be modified or obtained by accessing the individual style attributes (underline, oblique, strong). The STYLE_OBLIQUE and STYLE_STRONG styles are for scalable fonts only. An attempt to set either for a bitmap font raises an AttributeError. An attempt to set either for an inactive font, as returned by Font.__new__(), raises a RuntimeError. Assigning STYLE_DEFAULT to the style property leaves the property unchanged, as this property defines the default. The style property will never return STYLE_DEFAULT. underline The state of the font's underline style flag underline -> bool Gets or sets whether the font will be underlined when drawing text. This default style value will be used for all text rendering and size calculations unless overridden specifically in a render or get_rect() call, via the 'style' parameter. strong The state of the font's strong style flag strong -> bool Gets or sets whether the font will be bold when drawing text. This default style value will be used for all text rendering and size calculations unless overridden specifically in a render or get_rect() call, via the 'style' parameter. oblique The state of the font's oblique style flag oblique -> bool Gets or sets whether the font will be rendered as oblique. This default style value will be used for all text rendering and size calculations unless overridden specifically in a render or get_rect() call, via the style parameter. The oblique style is only supported for scalable (outline) fonts. 
An attempt to set this style on a bitmap font will raise an AttributeError. If the font object is inactive, as returned by Font.__new__(), setting this property raises a RuntimeError. wide The state of the font's wide style flag wide -> bool Gets or sets whether the font will be stretched horizontally when drawing text. It produces a result similar to pygame.font.Font's bold. This style is not available for rotated text. strength The strength associated with the strong or wide font styles strength -> float The amount by which a font glyph's size is enlarged for the strong or wide transformations, as a fraction of the untransformed size. For the wide style only the horizontal dimension is increased. For strong text both the horizontal and vertical dimensions are enlarged. A wide style of strength 0.08333 ( 1/12 ) is equivalent to the pygame.font.Font bold style. The default is 0.02778 ( 1/36 ). The strength style is only supported for scalable (outline) fonts. An attempt to set this property on a bitmap font will raise an AttributeError. If the font object is inactive, as returned by Font.__new__(), assignment to this property raises a RuntimeError. underline_adjustment Adjustment factor for the underline position underline_adjustment -> float Gets or sets a factor which, when positive, is multiplied with the font's underline offset to adjust the underline position. A negative value turns an underline into a strike-through or overline; in that case the factor is multiplied with the ascender instead. Accepted values range between -2.0 and 2.0 inclusive. A value of 0.5 closely matches Tango underlining. A value of 1.0 mimics pygame.font.Font underlining. fixed_width Gets whether the font is fixed-width fixed_width -> bool Read only. Returns True if the font contains fixed-width characters (for example Courier, Bitstream Vera Sans Mono, Andale Mono). fixed_sizes The number of available bitmap sizes for the font fixed_sizes -> int Read only. 
Returns the number of point sizes for which the font contains bitmap character images. If zero then the font is not a bitmap font. A scalable font may contain pre-rendered point sizes as strikes. scalable Gets whether the font is scalable scalable -> bool Read only. Returns True if the font contains outline glyphs. If so, the point size is not limited to available bitmap sizes. use_bitmap_strikes allow the use of embedded bitmaps in an outline font file use_bitmap_strikes -> bool Some scalable fonts include embedded bitmaps for particular point sizes. This property controls whether or not those bitmap strikes are used. Set it False to disable the loading of any bitmap strike. Set it True, the default, to permit bitmap strikes for a non-rotated render with no style other than wide or underline. This property is ignored for bitmap fonts. See also fixed_sizes and get_sizes(). antialiased Font anti-aliasing mode antialiased -> bool Gets or sets the font's anti-aliasing mode. This defaults to True on all fonts, which are rendered with full 8 bit blending. Set to False to do monochrome rendering. This should provide a small speed gain and reduce cache memory size. kerning Character kerning mode kerning -> bool Gets or sets the font's kerning mode. This defaults to False on all fonts, which will be rendered without kerning. Set to True to add kerning between character pairs, if supported by the font, when positioning glyphs. vertical Font vertical mode vertical -> bool Gets or sets whether the characters are laid out vertically rather than horizontally. May be useful when rendering Kanji or some other vertical script. Set to True to switch to a vertical text layout. The default is False, place horizontally. Note that the Font class does not automatically determine script orientation. Vertical layout must be selected explicitly. 
Also note that several font formats (especially bitmap based ones) don't contain the necessary metrics to draw glyphs vertically, so drawing in those cases will give unspecified results. rotation text rotation in degrees counterclockwise rotation -> int Gets or sets the baseline angle of the rendered text. The angle is represented as integer degrees. The default angle is 0, with horizontal text rendered along the X-axis, and vertical text along the Y-axis. A positive value rotates these axes counterclockwise that many degrees. A negative angle corresponds to a clockwise rotation. The rotation value is normalized to a value within the range 0 to 359 inclusive (eg. 390 -> 390 - 360 -> 30, -45 -> 360 + -45 -> 315, 720 -> 720 - (2 * 360) -> 0). Only scalable (outline) fonts can be rotated. An attempt to change the rotation of a bitmap font raises an AttributeError. An attempt to change the rotation of an inactive font instance, as returned by Font.__new__(), raises a RuntimeError. fgcolor default foreground color fgcolor -> Color Gets or sets the default glyph rendering color. It is initially opaque black ― (0, 0, 0, 255). Applies to render() and render_to(). bgcolor default background color bgcolor -> Color Gets or sets the default background rendering color. Initially it is unset and text will render with a transparent background by default. Applies to render() and render_to(). New in pygame 2.0.0. origin Font render to text origin mode origin -> bool If set True, render_to() and render_raw_to() will take the dest position to be that of the text origin, as opposed to the top-left corner of the bounding box. See get_rect() for details. pad padded boundary mode pad -> bool If set True, then the text boundary rectangle will be inflated to match that of font.Font. Otherwise, the boundary rectangle is just large enough for the text. ucs4 Enable UCS-4 mode ucs4 -> bool Gets or sets the decoding of Unicode text. 
By default, the freetype module performs UTF-16 surrogate pair decoding on Unicode text. This allows 32-bit escape sequences ('Uxxxxxxxx') between 0x10000 and 0x10FFFF to represent their corresponding UTF-32 code points on Python interpreters built with a UCS-2 Unicode type (on Windows, for instance). It also means character values within the UTF-16 surrogate area (0xD800 to 0xDFFF) are considered part of a surrogate pair. A malformed surrogate pair will raise a UnicodeEncodeError. Setting ucs4 True turns surrogate pair decoding off, allowing a Python interpreter built with four-byte Unicode character support access to the full UCS-4 character range. resolution Pixel resolution in dots per inch resolution -> int Read only. Gets the pixel size used in scaling font glyphs for this Font instance.
doc_27321
Creates a class object dynamically using the appropriate metaclass. The first three arguments are the components that make up a class definition header: the class name, the base classes (in order), the keyword arguments (such as metaclass). The exec_body argument is a callback that is used to populate the freshly created class namespace. It should accept the class namespace as its sole argument and update the namespace directly with the class contents. If no callback is provided, it has the same effect as passing in lambda ns: ns. New in version 3.3.
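The dynamic class creation described above can be shown with a short stdlib example; the class name and attributes below are arbitrary choices for illustration.

```python
import types

# Build a class dynamically with types.new_class. The exec_body callback
# receives the freshly created class namespace and populates it, exactly
# as a class body statement would.
def body(ns):
    ns['greeting'] = 'hello'
    ns['greet'] = lambda self: self.greeting

Greeter = types.new_class('Greeter', (), exec_body=body)
print(Greeter().greet())  # hello
```

Passing no exec_body callback is equivalent to passing `lambda ns: ns`, which yields an empty class body.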
doc_27322
Return the frequency object if it is set, otherwise None.
doc_27323
Class derived from Error. Contains no additional instance variables.
doc_27324
Apply func(self, *args, **kwargs), and return the result. Parameters func:function Function to apply to the Styler. Alternatively, a (callable, keyword) tuple where keyword is a string indicating the keyword of callable that expects the Styler. *args:optional Arguments passed to func. **kwargs:optional A dictionary of keyword arguments passed into func. Returns object : The value returned by func. See also DataFrame.pipe Analogous method for DataFrame. Styler.apply Apply a CSS-styling function column-wise, row-wise, or table-wise. Notes Like DataFrame.pipe(), this method can simplify the application of several user-defined functions to a styler. Instead of writing: f(g(df.style.set_precision(3), arg1=a), arg2=b, arg3=c) users can write: (df.style.set_precision(3) .pipe(g, arg1=a) .pipe(f, arg2=b, arg3=c)) In particular, this allows users to define functions that take a styler object, along with other parameters, and return the styler after making styling changes (such as calling Styler.apply() or Styler.set_properties()). Using .pipe, these user-defined style “transformations” can be interleaved with calls to the built-in Styler interface. Examples >>> def format_conversion(styler): ... return (styler.set_properties(**{'text-align': 'right'}) ... .format({'conversion': '{:.1%}'})) The user-defined format_conversion function above can be called within a sequence of other style modifications: >>> df = pd.DataFrame({'trial': list(range(5)), ... 'conversion': [0.75, 0.85, np.nan, 0.7, 0.72]}) >>> (df.style ... .highlight_min(subset=['conversion'], color='yellow') ... .pipe(format_conversion) ... .set_caption("Results with minimum conversion highlighted.")) ...
doc_27325
See Migration guide for more details. tf.compat.v1.math.segment_min, tf.compat.v1.segment_min tf.math.segment_min( data, segment_ids, name=None ) Read the section on segmentation for an explanation of segments. Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i. If the min is empty for a given segment ID i, output[i] = 0. For example: c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]]) tf.segment_min(c, tf.constant([0, 0, 1])) # ==> [[1, 2, 2, 1], # [5, 6, 7, 8]] Args data A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. segment_ids A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose size is equal to the size of data's first dimension. Values should be sorted and can be repeated. name A name for the operation (optional). Returns A Tensor. Has the same type as data.
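The segment semantics can be modeled in a few lines of pure Python (this is an illustration of the math only, not TensorFlow code): for each segment id, take the element-wise minimum over the rows assigned to it, and fill empty segments with zeros.

```python
# Pure-Python model of tf.math.segment_min semantics (illustration only).
def segment_min(data, segment_ids):
    out = {}
    for row, sid in zip(data, segment_ids):
        cur = out.get(sid)
        out[sid] = row if cur is None else [min(a, b) for a, b in zip(cur, row)]
    # Ids are sorted 0..k; missing (empty) segments produce rows of zeros.
    n = max(segment_ids) + 1
    width = len(data[0])
    return [out.get(i, [0] * width) for i in range(n)]

c = [[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]]
print(segment_min(c, [0, 0, 1]))  # [[1, 2, 2, 1], [5, 6, 7, 8]]
```

The result matches the documented example: rows 0 and 1 share segment 0 and are reduced element-wise, while row 2 forms segment 1 on its own.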
doc_27326
Assert the mock has been awaited with the specified calls. The await_args_list list is checked for the awaits. If any_order is false then the awaits must be sequential. There can be extra calls before or after the specified awaits. If any_order is true then the awaits can be in any order, but they must all appear in await_args_list. >>> mock = AsyncMock() >>> async def main(*args, **kwargs): ... await mock(*args, **kwargs) ... >>> calls = [call("foo"), call("bar")] >>> mock.assert_has_awaits(calls) Traceback (most recent call last): ... AssertionError: Awaits not found. Expected: [call('foo'), call('bar')] Actual: [] >>> asyncio.run(main('foo')) >>> asyncio.run(main('bar')) >>> mock.assert_has_awaits(calls)
doc_27327
This criterion combines log_softmax and nll_loss in a single function. See CrossEntropyLoss for details. Parameters input (Tensor) – (N, C) where C = number of classes, or (N, C, H, W) in the case of 2D loss, or (N, C, d_1, d_2, ..., d_K) where K >= 1 in the case of K-dimensional loss. target (Tensor) – (N) where each value satisfies 0 <= targets[i] <= C-1, or (N, d_1, d_2, ..., d_K) where K >= 1 for K-dimensional loss. weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100 reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. 
Default: 'mean' Examples: >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.randint(5, (3,), dtype=torch.int64) >>> loss = F.cross_entropy(input, target) >>> loss.backward()
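The log_softmax + nll_loss combination amounts to a simple formula per sample: the loss is the negative log-softmax of the logit at the target class. Here is a stdlib-only sketch of that math (an illustration of the formula, not PyTorch code), using the max-subtraction trick for numerical stability.

```python
import math

# Illustration only: cross entropy for a single sample from raw logits.
# loss = log(sum_j exp(x_j)) - x[target]  (i.e. -log_softmax(x)[target])
def cross_entropy(logits, target):
    m = max(logits)  # subtract the max before exponentiating for stability
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum_exp - logits[target]

loss = cross_entropy([2.0, 1.0, 0.1], target=0)
print(round(loss, 4))
```

For two equal logits and either target, the loss reduces to log(2), which is a handy sanity check.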
doc_27328
Returns a tensor filled with the scalar value 1, with the same size as input. torch.ones_like(input) is equivalent to torch.ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Warning As of 0.4, this function does not support an out keyword. As an alternative, the old torch.ones_like(input, out=output) is equivalent to torch.ones(input.size(), out=output). Parameters input (Tensor) – the size of input will determine size of the output tensor. Keyword Arguments dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input. layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input. device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. Example: >>> input = torch.empty(2, 3) >>> torch.ones_like(input) tensor([[ 1., 1., 1.], [ 1., 1., 1.]])
doc_27329
Raised when encountering a timedelta value that cannot be represented as a timedelta64[ns].
doc_27330
Initializes RPC primitives such as the local RPC agent and distributed autograd, which immediately makes the current process ready to send and receive RPCs. Parameters name (str) – a globally unique name of this node. (e.g., Trainer3, ParameterServer2, Master, Worker1) The name may only contain numbers, letters, underscores, colons, and/or dashes, and must be shorter than 128 characters. backend (BackendType, optional) – The type of RPC backend implementation. Supported values include BackendType.TENSORPIPE (the default) and BackendType.PROCESS_GROUP. See Backends for more information. rank (int) – a globally unique id/rank of this node. world_size (int) – The number of workers in the group. rpc_backend_options (RpcBackendOptions, optional) – The options passed to the RpcAgent constructor. It must be an agent-specific subclass of RpcBackendOptions and contains agent-specific initialization configurations. By default, for all agents, it sets the default timeout to 60 seconds and performs the rendezvous with an underlying process group initialized using init_method = "env://", meaning that environment variables MASTER_ADDR and MASTER_PORT need to be set properly. See Backends for more information and find which options are available.
doc_27331
Applies a 2D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see Conv2d. Note Only zeros is supported for the padding_mode argument. Note Only torch.quint8 is supported for the input data type. Variables ~Conv2d.weight (Tensor) – packed tensor derived from the learnable weight parameter. ~Conv2d.scale (Tensor) – scalar for the output scale ~Conv2d.zero_point (Tensor) – scalar for the output zero point See Conv2d for other attributes. Examples: >>> # With square kernels and equal stride >>> m = nn.quantized.Conv2d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2)) >>> # non-square kernels and unequal stride and with padding and dilation >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1)) >>> input = torch.randn(20, 16, 50, 100) >>> # quantize input to quint8 >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8) >>> output = m(q_input) classmethod from_float(mod) [source] Creates a quantized module from a float module or qparams_dict. Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
doc_27332
The multipart headers as Headers object. This usually contains irrelevant information but in combination with custom multipart requests the raw headers might be interesting. Changelog New in version 0.6.
doc_27333
A context manager that temporarily changes the current working directory to path and yields the directory. If quiet is False, the context manager raises an exception on error. Otherwise, it issues only a warning and keeps the current working directory the same.
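A minimal sketch of the described behavior, written from the description above rather than copied from the helper's actual source: temporarily change the working directory, yield the directory in use, and on failure either raise (quiet=False) or warn and stay put.

```python
import contextlib
import os
import tempfile
import warnings

# Sketch of the described context manager (assumed implementation, not
# the real helper's code).
@contextlib.contextmanager
def change_cwd(path, quiet=False):
    saved = os.getcwd()
    try:
        os.chdir(path)
    except OSError as exc:
        if not quiet:
            raise
        warnings.warn(f'unable to change CWD to {path}: {exc}')
    try:
        yield os.getcwd()
    finally:
        os.chdir(saved)  # always restore the original directory

before = os.getcwd()
with tempfile.TemporaryDirectory() as d:
    with change_cwd(d) as cwd:
        assert os.path.samefile(cwd, d)
assert os.getcwd() == before  # original directory restored
```

With quiet=True, a failed chdir only emits a warning and the body runs in the unchanged current directory.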
doc_27334
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
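The fit/transform contract described above can be illustrated with a tiny hand-rolled transformer (no scikit-learn involved; the class and its mean-centering behavior are invented for the example): fit_transform(X) is the convenience equivalent of fit(X) followed by transform(X).

```python
# Illustration of the fit/transform contract with a toy mean-centering
# transformer (hypothetical class, not part of any library).
class MeanCenterer:
    def fit(self, X, y=None):
        n = len(X)
        self.means_ = [sum(col) / n for col in zip(*X)]
        return self  # fit returns the estimator, enabling chaining

    def transform(self, X):
        return [[v - m for v, m in zip(row, self.means_)] for row in X]

    def fit_transform(self, X, y=None):
        # Equivalent to fit(X, y) followed by transform(X)
        return self.fit(X, y).transform(X)

X = [[1.0, 10.0], [3.0, 30.0]]
print(MeanCenterer().fit_transform(X))  # [[-1.0, -10.0], [1.0, 10.0]]
```

Real estimators follow the same shape conventions: (n_samples, n_features) in, (n_samples, n_features_new) out.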
doc_27335
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters valuesarray The input values as NumPy array of length input_dims or shape (N x input_dims). Returns array The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input.
doc_27336
Returns the variance of all elements in the input tensor. If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used. Parameters input (Tensor) – the input tensor. unbiased (bool) – whether to use the unbiased estimation or not Example: >>> a = torch.randn(1, 3) >>> a tensor([[-0.3425, -1.2636, -0.4864]]) >>> torch.var(a) tensor(0.2455) torch.var(input, dim, unbiased=True, keepdim=False, *, out=None) → Tensor Returns the variance of each row of the input tensor in the given dimension dim. If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s). If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used. Parameters input (Tensor) – the input tensor. dim (int or tuple of python:ints) – the dimension or dimensions to reduce. unbiased (bool) – whether to use the unbiased estimation or not keepdim (bool) – whether the output tensor has dim retained or not. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4, 4) >>> a tensor([[-0.3567, 1.7385, -1.3042, 0.7423], [ 1.3436, -0.1015, -0.9834, -0.8438], [ 0.6056, 0.1089, -0.3112, -1.4085], [-0.7700, 0.6074, -0.1469, 0.7777]]) >>> torch.var(a, 1) tensor([ 1.7444, 1.1363, 0.7356, 0.5112])
doc_27337
Size in bytes.
doc_27338
tkinter.filedialog.askopenfilename(**options) tkinter.filedialog.askopenfilenames(**options) The above two functions create an Open dialog and return the selected filename(s) that correspond to existing file(s).
doc_27339
Array mapping from feature integer indices to feature name. Returns feature_nameslist A list of feature names.
doc_27340
tf.sparse.split( sp_input=None, num_split=None, axis=None, name=None ) If the sp_input.dense_shape[axis] is not an integer multiple of num_split each slice starting from 0:shape[axis] % num_split gets extra one dimension. For example: indices = [[0, 2], [0, 4], [0, 5], [1, 0], [1, 1]] values = [1, 2, 3, 4, 5] t = tf.SparseTensor(indices=indices, values=values, dense_shape=[2, 7]) tf.sparse.to_dense(t) <tf.Tensor: shape=(2, 7), dtype=int32, numpy= array([[0, 0, 1, 0, 2, 3, 0], [4, 5, 0, 0, 0, 0, 0]], dtype=int32)> output = tf.sparse.split(sp_input=t, num_split=2, axis=1) tf.sparse.to_dense(output[0]) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[0, 0, 1, 0], [4, 5, 0, 0]], dtype=int32)> tf.sparse.to_dense(output[1]) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[2, 3, 0], [0, 0, 0]], dtype=int32)> output = tf.sparse.split(sp_input=t, num_split=2, axis=0) tf.sparse.to_dense(output[0]) <tf.Tensor: shape=(1, 7), dtype=int32, numpy=array([[0, 0, 1, 0, 2, 3, 0]], dtype=int32)> tf.sparse.to_dense(output[1]) <tf.Tensor: shape=(1, 7), dtype=int32, numpy=array([[4, 5, 0, 0, 0, 0, 0]], dtype=int32)> output = tf.sparse.split(sp_input=t, num_split=2, axis=-1) tf.sparse.to_dense(output[0]) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[0, 0, 1, 0], [4, 5, 0, 0]], dtype=int32)> tf.sparse.to_dense(output[1]) <tf.Tensor: shape=(2, 3), dtype=int32, numpy= array([[2, 3, 0], [0, 0, 0]], dtype=int32)> Args sp_input The SparseTensor to split. num_split A Python integer. The number of ways to split. axis A 0-D int32 Tensor. The dimension along which to split. Must be in range [-rank, rank), where rank is the number of dimensions in the input SparseTensor. name A name for the operation (optional). Returns num_split SparseTensor objects resulting from splitting value. Raises TypeError If sp_input is not a SparseTensor.
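The slice-sizing rule stated above ("slices starting from 0:shape[axis] % num_split get one extra dimension") can be checked with a few lines of pure Python (illustration of the arithmetic only, not TensorFlow code):

```python
# Pure-Python illustration of how tf.sparse.split sizes its slices: when
# the axis length is not a multiple of num_split, the first
# (length % num_split) slices each receive one extra element.
def split_sizes(length, num_split):
    base, extra = divmod(length, num_split)
    return [base + 1 if i < extra else base for i in range(num_split)]

print(split_sizes(7, 2))  # [4, 3]
print(split_sizes(9, 3))  # [3, 3, 3]
```

This matches the worked example above: splitting the 7-column tensor along axis=1 into 2 pieces yields a 4-column slice followed by a 3-column slice.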
doc_27341
Return whether event should be ignored. This method should be called at the beginning of any event callback.
doc_27342
In-place version of lerp()
doc_27343
Return True if this Future is done. A Future is done if it has a result or an exception.
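Both completion paths (result and exception) can be demonstrated with a short stdlib example:

```python
import asyncio

# A Future is done once it holds a result or an exception; a freshly
# created Future is not done yet.
async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    before = fut.done()            # False: nothing set yet
    fut.set_result(42)
    after = fut.done()             # True: a result was set

    failed = loop.create_future()
    failed.set_exception(RuntimeError('boom'))
    done_with_exc = failed.done()  # True: an exception also counts as done
    failed.exception()             # retrieve it to avoid an unhandled warning
    return before, after, done_with_exc

print(asyncio.run(main()))  # (False, True, True)
```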
doc_27344
New in Django 3.2. Set this attribute to False to prevent Django from selecting a configuration class automatically. This is useful when apps.py defines only one AppConfig subclass but you don’t want Django to use it by default. Set this attribute to True to tell Django to select a configuration class automatically. This is useful when apps.py defines more than one AppConfig subclass and you want Django to use one of them by default. By default, this attribute isn’t set.
doc_27345
See Migration guide for more details. tf.compat.v1.nondifferentiable_batch_function tf.nondifferentiable_batch_function( num_batch_threads, max_batch_size, batch_timeout_micros, allowed_batch_sizes=None, max_enqueued_batches=10, autograph=True, enable_large_batch_splitting=True ) So, for example, in the following code @batch_function(1, 2, 3) def layer(a): return tf.matmul(a, a) b = layer(w) if more than one session.run call is simultaneously trying to compute b the values of w will be gathered, non-deterministically concatenated along the first axis, and only one thread will run the computation. See the documentation of the Batch op for more details. Assumes that all arguments of the decorated function are Tensors which will be batched along their first dimension. SparseTensor is not supported. The return value of the decorated function must be a Tensor or a list/tuple of Tensors. Args num_batch_threads Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel. max_batch_size Batch sizes will never be bigger than this. batch_timeout_micros Maximum number of microseconds to wait before outputting an incomplete batch. allowed_batch_sizes Optional list of allowed batch sizes. If left empty, does nothing. Otherwise, supplies a list of batch sizes, causing the op to pad batches up to one of those sizes. The entries must increase monotonically, and the final entry must equal max_batch_size. max_enqueued_batches The maximum depth of the batch queue. Defaults to 10. autograph Whether to use autograph to compile python and eager style code for efficient graph-mode execution. enable_large_batch_splitting The value of this option doesn't affect processing output given the same input; it affects implementation details as stated below: 1. Improve batching efficiency by eliminating unnecessary padding. 2. max_batch_size specifies the limit of input and allowed_batch_sizes specifies the limit of a task to be processed. 
API user can give an input of size 128 when 'max_execution_batch_size' is 32 -> implementation can split input of 128 into 4 x 32, schedule concurrent processing, and then return concatenated results corresponding to 128. Returns The decorated function will return the unbatched computation output Tensors.
doc_27346
Execute the coroutine coro and return the result. This function runs the passed coroutine, taking care of managing the asyncio event loop, finalizing asynchronous generators, and closing the threadpool. This function cannot be called when another asyncio event loop is running in the same thread. If debug is True, the event loop will be run in debug mode. This function always creates a new event loop and closes it at the end. It should be used as a main entry point for asyncio programs, and should ideally only be called once. Example: async def main(): await asyncio.sleep(1) print('hello') asyncio.run(main()) New in version 3.7. Changed in version 3.9: Updated to use loop.shutdown_default_executor(). Note The source code for asyncio.run() can be found in Lib/asyncio/runners.py.
doc_27347
Returns the unique elements of the input tensor. See torch.unique()
doc_27348
int goal_reached(int index, float cumcost) This method is called each iteration after popping an index from the heap, before examining the neighbours. This method can be overloaded to modify the behavior of the MCP algorithm. An example might be to stop the algorithm when a certain cumulative cost is reached, or when the front is a certain distance away from the seed point. This method should return 1 if the algorithm should not check the current point’s neighbours and 2 if the algorithm is now done.
doc_27349
See Migration guide for more details. tf.compat.v1.tpu.experimental.TPUSystemMetadata tf.tpu.experimental.TPUSystemMetadata( num_cores, num_hosts, num_of_cores_per_host, topology, devices ) Attributes num_cores integer. Total number of TPU cores in the TPU system. num_hosts integer. Total number of hosts (TPU workers) in the TPU system. num_of_cores_per_host integer. Number of TPU cores per host (TPU worker). topology an instance of tf.tpu.experimental.Topology, which describes the physical topology of the TPU system. devices a tuple of strings, which describes all the TPU devices in the system.
doc_27350
Shift the surface image in place scroll(dx=0, dy=0) -> None Move the image by dx pixels right and dy pixels down. dx and dy may be negative for left and up scrolls respectively. Areas of the surface that are not overwritten retain their original pixel values. Scrolling is contained by the Surface clip area. It is safe to have dx and dy values that exceed the surface size. New in pygame 1.9.
doc_27351
remove Sprites from the Group remove(*sprites) -> None Remove any number of Sprites from the Group. This will only remove Sprites that are already members of the Group. Each sprite argument can also be an iterator containing Sprites.
doc_27352
Fit the model from data in X. Parameters Xarray-like of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnored Returns selfobject Returns the instance itself.
doc_27353
Add a centered supylabel to the figure. Parameters tstr The supylabel text. xfloat, default: 0.02 The x location of the text in figure coordinates. yfloat, default: 0.5 The y location of the text in figure coordinates. horizontalalignment, ha{'center', 'left', 'right'}, default: left The horizontal alignment of the text relative to (x, y). verticalalignment, va{'top', 'center', 'bottom', 'baseline'}, default: center The vertical alignment of the text relative to (x, y). fontsize, sizedefault: rcParams["figure.titlesize"] (default: 'large') The font size of the text. See Text.set_size for possible values. fontweight, weightdefault: rcParams["figure.titleweight"] (default: 'normal') The font weight of the text. See Text.set_weight for possible values. Returns text The Text instance of the supylabel. Other Parameters fontpropertiesNone or dict, optional A dict of font properties. If fontproperties is given the default values for font size and weight are taken from the FontProperties defaults. rcParams["figure.titlesize"] (default: 'large') and rcParams["figure.titleweight"] (default: 'normal') are ignored in this case. **kwargs Additional kwargs are matplotlib.text.Text properties.
doc_27354
Return alpha to be applied to all ContourSet artists.
doc_27355
Set the label2 text. Parameters sstr
doc_27356
Return a Path representing a circle of a given radius and center. Parameters center(float, float), default: (0, 0) The center of the circle. radiusfloat, default: 1 The radius of the circle. readonlybool Whether the created path should have the "readonly" argument set when creating the Path instance. Notes The circle is approximated using 8 cubic Bezier curves, as described in Lancaster, Don. Approximating a Circle or an Ellipse Using Four Bezier Cubic Splines.
doc_27357
See Migration guide for more details. tf.compat.v1.raw_ops.TridiagonalMatMul tf.raw_ops.TridiagonalMatMul( superdiag, maindiag, subdiag, rhs, name=None ) Calculates product of two matrices, where left matrix is a tridiagonal matrix. Args superdiag A Tensor. Must be one of the following types: float64, float32, complex64, complex128. Tensor of shape [..., 1, M], representing superdiagonals of tri-diagonal matrices to the left of multiplication. Last element is ignored. maindiag A Tensor. Must have the same type as superdiag. Tensor of shape [..., 1, M], representing main diagonals of tri-diagonal matrices to the left of multiplication. subdiag A Tensor. Must have the same type as superdiag. Tensor of shape [..., 1, M], representing subdiagonals of tri-diagonal matrices to the left of multiplication. First element is ignored. rhs A Tensor. Must have the same type as superdiag. Tensor of shape [..., M, N], representing MxN matrices to the right of multiplication. name A name for the operation (optional). Returns A Tensor. Has the same type as superdiag.
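The diagonal convention above (last superdiagonal element and first subdiagonal element ignored) can be illustrated with a NumPy sketch; the helper name and test values below are my own, not part of the TensorFlow op:

```python
import numpy as np

def tridiagonal_matmul(superdiag, maindiag, subdiag, rhs):
    """Multiply a tridiagonal matrix (given by its three diagonals) by rhs.

    superdiag[-1] and subdiag[0] are ignored, mirroring the op's
    convention for the [..., 1, M]-shaped diagonal inputs.
    """
    out = maindiag[:, None] * rhs
    out[:-1] += superdiag[:-1, None] * rhs[1:]   # superdiagonal terms
    out[1:] += subdiag[1:, None] * rhs[:-1]      # subdiagonal terms
    return out

# Compare against an explicit dense construction for a small M=3, N=2 case.
sup = np.array([1.0, 2.0, 0.0])       # last element ignored
main = np.array([4.0, 5.0, 6.0])
sub = np.array([0.0, 7.0, 8.0])       # first element ignored
rhs = np.arange(6.0).reshape(3, 2)
dense = np.diag(main) + np.diag(sup[:-1], k=1) + np.diag(sub[1:], k=-1)
print(np.allclose(tridiagonal_matmul(sup, main, sub, rhs), dense @ rhs))  # True
```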
doc_27358
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U,S,V), such that input = U diag(S) Vᴴ, where Vᴴ is the transpose of V for the real-valued inputs, or the conjugate transpose of V for the complex-valued inputs. If input is a batch of tensors, then U, S, and V are also batched with the same batch dimensions as input. If some is True (default), the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only min(n, m) orthonormal columns. If compute_uv is False, the returned U and V will be zero-filled matrices of shape (m × m) and (n × n) respectively, on the same device as input. The some argument has no effect when compute_uv is False. Supports input of float, double, cfloat and cdouble data types. The dtypes of U and V are the same as input’s. S will always be real-valued, even if input is complex. Warning torch.svd() is deprecated. Please use torch.linalg.svd() instead, which is similar to NumPy’s numpy.linalg.svd. Note Differences with torch.linalg.svd(): some is the opposite of torch.linalg.svd()’s full_matrices. Note that the default value for both is True, so the default behavior is effectively the opposite. torch.svd() returns V, whereas torch.linalg.svd() returns Vᴴ. If compute_uv=False, torch.svd() returns zero-filled tensors for U and Vh, whereas torch.linalg.svd() returns empty tensors. Note The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. Note The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. 
Analogously, the SVD on GPU uses the cuSOLVER routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later, and uses the MAGMA routine gesdd on earlier versions of CUDA. Note The returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride(). Note Gradients computed using U and V may be unstable if input is not full rank or has non-unique singular values. Note When some = False, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces. Note The S tensor can only be used to compute gradients if compute_uv is True. Note With the complex-valued input the backward operation works correctly only for gauge invariant loss functions. Please look at Gauge problem in AD for more details. Note Since U and V of an SVD are not unique, each vector can be multiplied by an arbitrary phase factor e^{iφ} while the SVD result is still correct. Different platforms, like Numpy, or inputs on different device types, may produce different U and V tensors. Parameters input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of (m × n) matrices. some (bool, optional) – controls whether to compute the reduced or full decomposition, and consequently the shape of returned U and V. Defaults to True. compute_uv (bool, optional) – whether to compute U and V. Defaults to True. 
Keyword Arguments out (tuple, optional) – the output tuple of tensors Example: >>> a = torch.randn(5, 3) >>> a tensor([[ 0.2364, -0.7752, 0.6372], [ 1.7201, 0.7394, -0.0504], [-0.3371, -1.0584, 0.5296], [ 0.3550, -0.4022, 1.5569], [ 0.2445, -0.0158, 1.1414]]) >>> u, s, v = torch.svd(a) >>> u tensor([[ 0.4027, 0.0287, 0.5434], [-0.1946, 0.8833, 0.3679], [ 0.4296, -0.2890, 0.5261], [ 0.6604, 0.2717, -0.2618], [ 0.4234, 0.2481, -0.4733]]) >>> s tensor([2.3289, 2.0315, 0.7806]) >>> v tensor([[-0.0199, 0.8766, 0.4809], [-0.5080, 0.4054, -0.7600], [ 0.8611, 0.2594, -0.4373]]) >>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t())) tensor(8.6531e-07) >>> a_big = torch.randn(7, 5, 3) >>> u, s, v = torch.svd(a_big) >>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.transpose(-2, -1))) tensor(2.6503e-06)
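Since torch.linalg.svd is described above as similar to numpy.linalg.svd, the reduced-decomposition and reconstruction steps can be sketched with NumPy (note that NumPy returns Vᴴ directly, like torch.linalg.svd, not V like torch.svd):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3))

# Reduced SVD: u is (5, 3), s has min(m, n) = 3 entries, vh is (3, 3).
u, s, vh = np.linalg.svd(a, full_matrices=False)

# Reconstruct the input: a = u @ diag(s) @ vh.
reconstruction = u @ np.diag(s) @ vh
print(np.allclose(a, reconstruction))  # True
```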
doc_27359
See Migration guide for more details. tf.compat.v1.data.experimental.DistributeOptions tf.data.experimental.DistributeOptions() You can set the distribution options of a dataset through the experimental_distribute property of tf.data.Options; the property is an instance of tf.data.experimental.DistributeOptions. options = tf.data.Options() options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF dataset = dataset.with_options(options) Attributes auto_shard_policy The type of sharding that auto-shard should attempt. If this is set to FILE, then we will attempt to shard by files (each worker will get a set of files to process). If we cannot find a set of files to shard for at least one file per worker, we will error out. When this option is selected, make sure that you have enough files so that each worker gets at least one file. There will be a runtime error thrown if there are insufficient files. If this is set to DATA, then we will shard by elements produced by the dataset, and each worker will process the whole dataset and discard the portion that is not for itself. If this is set to OFF, then we will not autoshard, and each worker will receive a copy of the full dataset. This option is set to AUTO by default, AUTO will attempt to first shard by FILE, and fall back to sharding by DATA if we cannot find a set of files to shard. num_devices The number of devices attached to this input pipeline. This will be automatically set by MultiDeviceIterator. Methods __eq__ View source __eq__( other ) Return self==value. __ne__ View source __ne__( other ) Return self!=value.
doc_27360
Raised when an attempt is made to use an object that is not defined or is no longer usable.
doc_27361
Drawing function for box and whisker plots. Make a box and whisker plot for each column of x or each vector in sequence x. The box extends from the lower to upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers. Parameters bxpstatslist of dicts A list of dictionaries containing stats for each boxplot. Required keys are: med: Median (scalar). q1, q3: First & third quartiles (scalars). whislo, whishi: Lower & upper whisker positions (scalars). Optional keys are: mean: Mean (scalar). Needed if showmeans=True. fliers: Data beyond the whiskers (array-like). Needed if showfliers=True. cilo, cihi: Lower & upper confidence intervals about the median. Needed if shownotches=True. label: Name of the dataset (str). If available, this will be used as a tick label for the boxplot. positionsarray-like, default: [1, 2, ..., n] The positions of the boxes. The ticks and limits are automatically set to match the positions. widthsfloat or array-like, default: None The widths of the boxes. The default is clip(0.15*(distance between extreme positions), 0.15, 0.5). vertbool, default: True If True (default), makes the boxes vertical. If False, makes horizontal boxes. patch_artistbool, default: False If False produces boxes with the Line2D artist. If True produces boxes with the Patch artist. shownotches, showmeans, showcaps, showbox, showfliersbool Whether to draw the CI notches, the mean value (both default to False), the caps, the box, and the fliers (all three default to True). boxprops, whiskerprops, capprops, flierprops, medianprops, meanpropsdict, optional Artist properties for the boxes, whiskers, caps, fliers, medians, and means. meanlinebool, default: False If True (and showmeans is True), will try to render the mean as a line spanning the full width of the box according to meanprops. Not recommended if shownotches is also True. 
Otherwise, means will be shown as points. manage_ticksbool, default: True If True, the tick locations and labels will be adjusted to match the boxplot positions. zorderfloat, default: Line2D.zorder = 2 The zorder of the resulting boxplot. Returns dict A dictionary mapping each component of the boxplot to a list of the Line2D instances created. That dictionary has the following keys (assuming vertical boxplots): boxes: main bodies of the boxplot showing the quartiles, and the median's confidence intervals if enabled. medians: horizontal lines at the median of each box. whiskers: vertical lines up to the last non-outlier data. caps: horizontal lines at the ends of the whiskers. fliers: points representing data beyond the whiskers (fliers). means: points or lines representing the means.
doc_27362
Draw samples from a standard Cauchy distribution with mode = 0. Also known as the Lorentz distribution. Note New code should use the standard_cauchy method of a default_rng() instance instead; please see the Quick Start. Parameters sizeint or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned. Returns samplesndarray or scalar The drawn samples. See also Generator.standard_cauchy which should be used for new code. Notes The probability density function for the full Cauchy distribution is \[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\] and the Standard Cauchy distribution just sets \(x_0=0\) and \(\gamma=1\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. References 1 NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm 2 Weisstein, Eric W. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. http://mathworld.wolfram.com/CauchyDistribution.html 3 Wikipedia, “Cauchy distribution” https://en.wikipedia.org/wiki/Cauchy_distribution Examples Draw samples and plot the distribution: >>> import matplotlib.pyplot as plt >>> s = np.random.standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show()
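A standard Cauchy variate can also be produced with only the standard library via inverse-CDF sampling: if U ~ Uniform(0, 1), then tan(π(U − ½)) is standard Cauchy. This is an illustrative sketch, not NumPy's internal implementation:

```python
import math
import random

def standard_cauchy_sample(rng: random.Random) -> float:
    # Inverse-CDF sampling: the Cauchy CDF is F(x) = 1/2 + arctan(x)/pi,
    # so F^{-1}(u) = tan(pi * (u - 0.5)).
    u = rng.random()
    return math.tan(math.pi * (u - 0.5))

rng = random.Random(42)
samples = [standard_cauchy_sample(rng) for _ in range(5)]
print(samples)
```

The heavy tails mentioned above show up directly here: values of u near 0 or 1 map to arbitrarily large magnitudes, which is why the sample mean of Cauchy draws never converges.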
doc_27363
Returns list List of segments in the LineCollection. Each list item contains an array of vertices.
doc_27364
Transform X. This is implemented by linking the points X into the graph of geodesic distances of the training data. First the n_neighbors nearest neighbors of X are found in the training data, and from these the shortest geodesic distances from each point in X to each point in the training data are computed in order to construct the kernel. The embedding of X is the projection of this kernel onto the embedding vectors of the training set. Parameters Xarray-like, shape (n_queries, n_features) If neighbors_algorithm=’precomputed’, X is assumed to be a distance matrix or a sparse graph of shape (n_queries, n_samples_fit). Returns X_newarray-like, shape (n_queries, n_components)
doc_27365
This method adapts obj to a ctypes type. It is called with the actual object used in a foreign function call when the type is present in the foreign function’s argtypes tuple; it must return an object that can be used as a function call parameter. All ctypes data types have a default implementation of this classmethod that normally returns obj if that is an instance of the type. Some types accept other objects as well.
doc_27366
Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
doc_27367
Represents an RSS enclosure
doc_27368
tf.compat.v1.gfile.Stat( filename ) Args filename string, path to a file Returns FileStatistics struct that contains information about the path Raises errors.OpError If the operation fails.
doc_27369
Returns a GEOSGeometry combining the points in this geometry not in other, and the points in other not in this geometry.
doc_27370
uu.encode(in_file, out_file, name=None, mode=None, *, backtick=False) Uuencode file in_file into file out_file. The uuencoded file will have the header specifying name and mode as the defaults for the results of decoding the file. The default defaults are taken from in_file, or '-' and 0o666 respectively. If backtick is true, zeros are represented by '`' instead of spaces. Changed in version 3.7: Added the backtick parameter. uu.decode(in_file, out_file=None, mode=None, quiet=False) This call decodes uuencoded file in_file placing the result on file out_file. If out_file is a pathname, mode is used to set the permission bits if the file must be created. Defaults for out_file and mode are taken from the uuencode header. However, if the file specified in the header already exists, a uu.Error is raised. decode() may print a warning to standard error if the input was produced by an incorrect uuencoder and Python could recover from that error. Setting quiet to a true value silences this warning. exception uu.Error Subclass of Exception, this can be raised by uu.decode() under various situations, such as described above, but also including a badly formatted header, or truncated input file. See also Module binascii Support module containing ASCII-to-binary and binary-to-ASCII conversions.
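The binascii module mentioned in the "See also" note exposes the line-level primitives that uu is built on; a quick round trip (my example data) shows the encoding of a single uuencoded line:

```python
import binascii

data = b"hello, uuencode"
# b2a_uu encodes at most 45 bytes into one uuencoded line:
# a length character, the encoded payload, and a trailing newline.
line = binascii.b2a_uu(data)
print(line)
decoded = binascii.a2b_uu(line)
print(decoded == data)  # True
```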
doc_27371
See Migration guide for more details. tf.compat.v1.raw_ops.Floor tf.raw_ops.Floor( x, name=None ) Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_27372
Alias for field number 1
doc_27373
class frozenset([iterable]) Return a new set or frozenset object whose elements are taken from iterable. The elements of a set must be hashable. To represent sets of sets, the inner sets must be frozenset objects. If iterable is not specified, a new empty set is returned. Sets can be created by several means: Use a comma-separated list of elements within braces: {'jack', 'sjoerd'} Use a set comprehension: {c for c in 'abracadabra' if c not in 'abc'} Use the type constructor: set(), set('foobar'), set(['a', 'b', 'foo']) Instances of set and frozenset provide the following operations: len(s) Return the number of elements in set s (cardinality of s). x in s Test x for membership in s. x not in s Test x for non-membership in s. isdisjoint(other) Return True if the set has no elements in common with other. Sets are disjoint if and only if their intersection is the empty set. issubset(other) set <= other Test whether every element in the set is in other. set < other Test whether the set is a proper subset of other, that is, set <= other and set != other. issuperset(other) set >= other Test whether every element in other is in the set. set > other Test whether the set is a proper superset of other, that is, set >= other and set != other. union(*others) set | other | ... Return a new set with elements from the set and all others. intersection(*others) set & other & ... Return a new set with elements common to the set and all others. difference(*others) set - other - ... Return a new set with elements in the set that are not in the others. symmetric_difference(other) set ^ other Return a new set with elements in either the set or other but not both. copy() Return a shallow copy of the set. Note, the non-operator versions of union(), intersection(), difference(), and symmetric_difference(), issubset(), and issuperset() methods will accept any iterable as an argument. In contrast, their operator based counterparts require their arguments to be sets. 
This precludes error-prone constructions like set('abc') & 'cbs' in favor of the more readable set('abc').intersection('cbs'). Both set and frozenset support set to set comparisons. Two sets are equal if and only if every element of each set is contained in the other (each is a subset of the other). A set is less than another set if and only if the first set is a proper subset of the second set (is a subset, but is not equal). A set is greater than another set if and only if the first set is a proper superset of the second set (is a superset, but is not equal). Instances of set are compared to instances of frozenset based on their members. For example, set('abc') == frozenset('abc') returns True and so does set('abc') in set([frozenset('abc')]). The subset and equality comparisons do not generalize to a total ordering function. For example, any two nonempty disjoint sets are not equal and are not subsets of each other, so all of the following return False: a<b, a==b, or a>b. Since sets only define partial ordering (subset relationships), the output of the list.sort() method is undefined for lists of sets. Set elements, like dictionary keys, must be hashable. Binary operations that mix set instances with frozenset return the type of the first operand. For example: frozenset('ab') | set('bc') returns an instance of frozenset. The following table lists operations available for set that do not apply to immutable instances of frozenset: update(*others) set |= other | ... Update the set, adding elements from all others. intersection_update(*others) set &= other & ... Update the set, keeping only elements found in it and all others. difference_update(*others) set -= other | ... Update the set, removing elements found in others. symmetric_difference_update(other) set ^= other Update the set, keeping only elements found in either set, but not in both. add(elem) Add element elem to the set. remove(elem) Remove element elem from the set. 
Raises KeyError if elem is not contained in the set. discard(elem) Remove element elem from the set if it is present. pop() Remove and return an arbitrary element from the set. Raises KeyError if the set is empty. clear() Remove all elements from the set. Note, the non-operator versions of the update(), intersection_update(), difference_update(), and symmetric_difference_update() methods will accept any iterable as an argument. Note, the elem argument to the __contains__(), remove(), and discard() methods may be a set. To support searching for an equivalent frozenset, a temporary one is created from elem.
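A few of the behaviors described above can be checked directly: hashability of frozenset, the first-operand rule for mixed binary operations, iterable arguments to non-operator methods, and member-based equality across the two types:

```python
# frozenset is hashable, so it can be an element of another set;
# a plain set cannot.
inner = frozenset({1, 2})
nested = {inner}
assert inner in nested

s = set("abc")
f = frozenset("bcd")

# Binary operations mixing set and frozenset return the type
# of the first operand.
assert type(s | f) is set
assert type(f | s) is frozenset

# Non-operator methods accept any iterable; operators require sets.
assert s.intersection("cbs") == {"b", "c"}

# Equality compares members regardless of set/frozenset type.
assert set("abc") == frozenset("abc")
```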
doc_27374
See Migration guide for more details. tf.compat.v1.keras.applications.densenet.decode_predictions tf.keras.applications.densenet.decode_predictions( preds, top=5 ) Arguments preds Numpy array encoding a batch of predictions. top Integer, how many top-guesses to return. Defaults to 5. Returns A list of lists of top class prediction tuples (class_name, class_description, score). One list of tuples per sample in batch input. Raises ValueError In case of invalid shape of the pred array (must be 2D).
doc_27375
Build a text report showing the main classification metrics. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. labelsarray-like of shape (n_labels,), default=None Optional list of label indices to include in the report. target_nameslist of str of shape (n_labels,), default=None Optional display names matching the labels (same order). sample_weightarray-like of shape (n_samples,), default=None Sample weights. digitsint, default=2 Number of digits for formatting output floating point values. When output_dict is True, this will be ignored and the returned values will not be rounded. output_dictbool, default=False If True, return output as dict. New in version 0.20. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised. Returns reportstring / dict Text summary of the precision, recall, F1 score for each class. Dictionary returned if output_dict is True. Dictionary has the following structure: {'label 1': {'precision':0.5, 'recall':1.0, 'f1-score':0.67, 'support':1}, 'label 2': { ... }, ... } The reported averages include macro average (averaging the unweighted mean per label), weighted average (averaging the support-weighted mean per label), and sample average (only for multilabel classification). Micro average (averaging the total true positives, false negatives and false positives) is only shown for multi-label or multi-class with a subset of classes, because it corresponds to accuracy otherwise and would be the same for all metrics. See also precision_recall_fscore_support for more details on averages. Note that in binary classification, recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”. 
See also precision_recall_fscore_support, confusion_matrix multilabel_confusion_matrix Examples >>> from sklearn.metrics import classification_report >>> y_true = [0, 1, 2, 2, 2] >>> y_pred = [0, 0, 2, 2, 1] >>> target_names = ['class 0', 'class 1', 'class 2'] >>> print(classification_report(y_true, y_pred, target_names=target_names)) precision recall f1-score support class 0 0.50 1.00 0.67 1 class 1 0.00 0.00 0.00 1 class 2 1.00 0.67 0.80 3 accuracy 0.60 5 macro avg 0.50 0.56 0.49 5 weighted avg 0.70 0.60 0.61 5 >>> y_pred = [1, 1, 0] >>> y_true = [1, 1, 1] >>> print(classification_report(y_true, y_pred, labels=[1, 2, 3])) precision recall f1-score support 1 1.00 0.67 0.80 3 2 0.00 0.00 0.00 0 3 0.00 0.00 0.00 0 micro avg 1.00 0.67 0.80 3 macro avg 0.33 0.22 0.27 3 weighted avg 1.00 0.67 0.80 3
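The per-class rows in the first example above can be derived from first principles. This pure-Python sketch (the helper name per_class_prf is made up; scikit-learn is not needed) reproduces the class 0–2 precision/recall/F1 values:

```python
def per_class_prf(y_true, y_pred, labels):
    """Compute precision, recall, and F1 per class from counts."""
    stats = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)   # predicted positives
        true_c = sum(t == c for t in y_true)   # actual positives (support)
        precision = tp / pred_c if pred_c else 0.0
        recall = tp / true_c if true_c else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        stats[c] = (precision, recall, f1)
    return stats

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
stats = per_class_prf(y_true, y_pred, labels=[0, 1, 2])
print(stats[0])  # class 0: precision 0.5, recall 1.0
```

Averaging the unweighted per-class precisions, (0.5 + 0.0 + 1.0) / 3 = 0.5, recovers the macro-average precision shown in the report.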
doc_27376
Based on IntegerField and translates its input into NumericRange. Default for IntegerRangeField and BigIntegerRangeField.
doc_27377
Raw scoring function of the samples. Parameters Xarray-like of shape (n_samples, n_features) The data matrix. Returns score_samplesndarray of shape (n_samples,) Returns the (unshifted) scoring function of the samples.
doc_27378
Instance of the TestLoader class intended to be shared. If no customization of the TestLoader is needed, this instance can be used instead of repeatedly creating new instances.
doc_27379
New in Django 4.0. Override empty_result_set_value to None since most aggregate functions result in NULL when applied to an empty result set.
doc_27380
operator.invert(obj) operator.__inv__(obj) operator.__invert__(obj) Return the bitwise inverse of the number obj. This is equivalent to ~obj.
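A quick check of the equivalence with ~, which for Python ints is defined as -(x + 1) in two's complement:

```python
import operator

print(operator.invert(5))        # same as ~5
print(operator.__invert__(0))    # same as ~0
assert operator.invert(5) == ~5 == -6
```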
doc_27381
Gets or sets the CMY representation of the Color. cmy -> tuple The CMY representation of the Color. The CMY components are in the ranges C = [0, 1], M = [0, 1], Y = [0, 1]. Note that this will not return the absolutely exact CMY values for the set RGB values in all cases. Due to the RGB mapping from 0-255 and the CMY mapping from 0-1 rounding errors may cause the CMY values to differ slightly from what you might expect.
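The mapping described above can be sketched in plain Python under the standard assumption C = 1 − R/255 (and likewise for M and Y); this is an illustration of the range conversion and its rounding loss, not pygame's exact internal code:

```python
def rgb_to_cmy(r, g, b):
    # Each 0-255 RGB channel maps to a 0-1 CMY channel; integer RGB
    # resolution is why round-tripping can differ slightly.
    return (1 - r / 255, 1 - g / 255, 1 - b / 255)

def cmy_to_rgb(c, m, y):
    return tuple(round((1 - v) * 255) for v in (c, m, y))

cmy = rgb_to_cmy(255, 0, 128)
print(cmy)
print(cmy_to_rgb(*cmy))
```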
doc_27382
Takes a single “compressed” value of a field, for example a DateRangeField, and returns a tuple representing a lower and upper bound.
doc_27383
Check if coefficients match. New in version 1.6.0. Parameters otherclass instance The other class must have the coef attribute. Returns boolboolean True if the coefficients are the same, False otherwise.
doc_27384
Possible value for SSLContext.verify_mode, or the cert_reqs parameter to wrap_socket(). Except for PROTOCOL_TLS_CLIENT, it is the default mode. With client-side sockets, just about any cert is accepted. Validation errors, such as untrusted or expired cert, are ignored and do not abort the TLS/SSL handshake. In server mode, no certificate is requested from the client, so the client does not send any for client cert authentication. See the discussion of Security considerations below.
doc_27385
Alias for set_edgecolor.
doc_27386
>>> cache.get('my_key') 'hello, world!'
doc_27387
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
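The `<component>__<parameter>` routing can be illustrated with a toy object (class and attribute names below are made up; this is a sketch of the naming convention, not scikit-learn's implementation):

```python
class Part:
    def __init__(self):
        self.alpha = 1.0

class Pipelineish:
    """Toy container showing how double-underscore keys route
    parameters to nested components."""
    def __init__(self):
        self.clf = Part()

    def set_params(self, **params):
        for key, value in params.items():
            name, _, sub = key.partition("__")
            if sub:                       # nested key, e.g. "clf__alpha"
                setattr(getattr(self, name), sub, value)
            else:                         # simple key, e.g. "clf"
                setattr(self, name, value)
        return self

model = Pipelineish().set_params(clf__alpha=0.5)
print(model.clf.alpha)
```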
doc_27388
A generic version of collections.abc.AsyncIterable. New in version 3.5.2. Deprecated since version 3.9: collections.abc.AsyncIterable now supports []. See PEP 585 and Generic Alias Type.
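Since the deprecation points at collections.abc.AsyncIterable, a small sketch of what that protocol covers: any async generator is an AsyncIterable and can be consumed with `async for` (helper names here are my own):

```python
import asyncio
from collections.abc import AsyncIterable

async def countdown(n: int):
    # An async generator satisfies AsyncIterable: it supports __aiter__().
    while n > 0:
        yield n
        n -= 1
        await asyncio.sleep(0)

async def collect(aiterable):
    return [item async for item in aiterable]

print(isinstance(countdown(3), AsyncIterable))        # True
print(asyncio.run(collect(countdown(3))))             # [3, 2, 1]
```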
doc_27389
tf.profiler.experimental.Profile( logdir, options=None ) Profiling starts when entering the scope, and stops and saves the results to the logdir when exiting the scope. Open the TensorBoard profile tab to view results. Example usage: with tf.profiler.experimental.Profile("/path/to/logdir"): # do some work Args logdir profile data will be saved to this directory. options An optional tf.profiler.ProfilerOptions can be provided to fine-tune the profiler's behavior. Methods __enter__ View source __enter__() __exit__ View source __exit__( typ, value, tb )
doc_27390
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns self
doc_27391
tf.compat.v1.keras.backend.set_session( session ) Arguments session A TF Session.
doc_27392
Return real and imaginary responses to Gabor filter. The real and imaginary parts of the Gabor filter kernel are applied to the image and the response is returned as a pair of arrays. Gabor filter is a linear filter with a Gaussian kernel which is modulated by a sinusoidal plane wave. Frequency and orientation representations of the Gabor filter are similar to those of the human visual system. Gabor filter banks are commonly used in computer vision and image processing. They are especially suitable for edge detection and texture classification. Parameters image2-D array Input image. frequencyfloat Spatial frequency of the harmonic function. Specified in pixels. thetafloat, optional Orientation in radians. If 0, the harmonic is in the x-direction. bandwidthfloat, optional The bandwidth captured by the filter. For fixed bandwidth, sigma_x and sigma_y will decrease with increasing frequency. This value is ignored if sigma_x and sigma_y are set by the user. sigma_x, sigma_yfloat, optional Standard deviation in x- and y-directions. These directions apply to the kernel before rotation. If theta = pi/2, then the kernel is rotated 90 degrees so that sigma_x controls the vertical direction. n_stdsscalar, optional The linear size of the kernel is n_stds (3 by default) standard deviations. offsetfloat, optional Phase offset of harmonic function in radians. mode{‘constant’, ‘nearest’, ‘reflect’, ‘mirror’, ‘wrap’}, optional Mode used to convolve image with a kernel, passed to ndi.convolve cvalscalar, optional Value to fill past edges of input if mode of convolution is ‘constant’. The parameter is passed to ndi.convolve. Returns real, imagarrays Filtered images using the real and imaginary parts of the Gabor filter kernel. Images are of the same dimensions as the input one. 
References 1 https://en.wikipedia.org/wiki/Gabor_filter 2 https://web.archive.org/web/20180127125930/http://mplab.ucsd.edu/tutorials/gabor.pdf Examples >>> from skimage.filters import gabor >>> from skimage import data, io >>> from matplotlib import pyplot as plt >>> image = data.coins() >>> # detecting edges in a coin image >>> filt_real, filt_imag = gabor(image, frequency=0.6) >>> plt.figure() >>> io.imshow(filt_real) >>> io.show() >>> # less sensitivity to finer details with the lower frequency kernel >>> filt_real, filt_imag = gabor(image, frequency=0.1) >>> plt.figure() >>> io.imshow(filt_real) >>> io.show()
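For intuition about the kernel itself, here is a simplified NumPy sketch of the real part of a Gabor kernel: a cosine along the rotated x-axis under a Gaussian envelope. It assumes a single isotropic sigma instead of the bandwidth-derived sigma_x/sigma_y used by the real implementation, so it is illustrative only:

```python
import numpy as np

def gabor_kernel_real(frequency, theta=0.0, sigma=3.0, n_stds=3):
    """Real part of a simplified, isotropic Gabor kernel."""
    half = int(np.ceil(n_stds * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the harmonic runs along angle theta.
    rotx = x * np.cos(theta) + y * np.sin(theta)
    roty = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope modulating a cosine of the given spatial frequency.
    envelope = np.exp(-(rotx**2 + roty**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * rotx)
```

The kernel is n_stds standard deviations on each side of the center, matching the size convention described above.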
doc_27393
Perform total-variation denoising on n-dimensional images.

Parameters
image : ndarray of ints, uints or floats
Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image.
weight : float, optional
Denoising weight. The greater the weight, the more denoising (at the expense of fidelity to input).
eps : float, optional
Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when: (E_(n-1) - E_n) < eps * E_0
n_iter_max : int, optional
Maximal number of iterations used for the optimization.
multichannel : bool, optional
Apply total-variation denoising separately for each channel. This option should be true for color images, otherwise the denoising is also applied in the channels dimension.

Returns
out : ndarray
Denoised image.

Notes
Make sure to set the multichannel parameter appropriately for color images. The principle of total variation denoising, explained in https://en.wikipedia.org/wiki/Total_variation_denoising, is to minimize the total variation of the image, which can be roughly described as the integral of the norm of the image gradient. Total variation denoising tends to produce "cartoon-like" images, that is, piecewise-constant images. This code is an implementation of the algorithm of Rudin, Fatemi and Osher that was proposed by Chambolle in [1].

References
1 A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, Springer, 2004, 20, 89-97.
Examples 2D example on astronaut image: >>> from skimage import color, data >>> img = color.rgb2gray(data.astronaut())[:50, :50] >>> img += 0.5 * img.std() * np.random.randn(*img.shape) >>> denoised_img = denoise_tv_chambolle(img, weight=60) 3D example on synthetic data: >>> x, y, z = np.ogrid[0:20, 0:20, 0:20] >>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2 >>> mask = mask.astype(float) >>> mask += 0.2*np.random.randn(*mask.shape) >>> res = denoise_tv_chambolle(mask, weight=100)
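For intuition about what is being minimized, here is a toy 1-D sketch using plain gradient descent on a smoothed TV energy. This is not Chambolle's dual algorithm; the function name, step size, and smoothing eps are all illustrative assumptions:

```python
import numpy as np

def tv_denoise_1d(signal, weight=0.1, step=0.05, n_iter=500, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + weight * sum sqrt(du^2 + eps)."""
    u = signal.astype(float).copy()
    for _ in range(n_iter):
        du = np.diff(u)
        g = du / np.sqrt(du**2 + eps)  # derivative of the smoothed |du|
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= g              # each difference touches two samples
        tv_grad[1:] += g
        # Data-fidelity pull toward the input plus the TV smoothing force.
        u -= step * ((u - signal) + weight * tv_grad)
    return u
```

Because the TV penalty charges for every oscillation but only once for a clean jump, the result flattens noise while largely keeping edges, which is the source of the "cartoon-like" look noted above.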
doc_27394
Converts a float tensor to a per-channel quantized tensor with given scales and zero points. Parameters input (Tensor) – float tensor to quantize scales (Tensor) – float 1D tensor of scales to use, size should match input.size(axis) zero_points (Tensor) – integer 1D tensor of offsets to use, size should match input.size(axis) axis (int) – dimension along which to apply per-channel quantization dtype (torch.dtype) – the desired data type of returned tensor. Has to be one of the quantized dtypes: torch.quint8, torch.qint8, torch.qint32 Returns A newly quantized tensor Return type Tensor Example: >>> x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]]) >>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8) tensor([[-1., 0.], [ 1., 2.]], size=(2, 2), dtype=torch.quint8, quantization_scheme=torch.per_channel_affine, scale=tensor([0.1000, 0.0100], dtype=torch.float64), zero_point=tensor([10, 0]), axis=0) >>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr() tensor([[ 0, 10], [100, 200]], dtype=torch.uint8)
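The per-channel affine scheme itself is simple: each slice along axis gets its own (scale, zero_point) pair and q = clamp(round(x / scale) + zero_point, qmin, qmax). A NumPy sketch of that arithmetic (illustrative only, not PyTorch's implementation; the quint8 range 0..255 is assumed):

```python
import numpy as np

def quantize_per_channel(x, scales, zero_points, axis=0, qmin=0, qmax=255):
    """Affine quantization with one (scale, zero_point) pair per slice."""
    shape = [1] * x.ndim
    shape[axis] = -1  # broadcast the per-channel parameters along `axis`
    s = np.asarray(scales, dtype=float).reshape(shape)
    z = np.asarray(zero_points, dtype=int).reshape(shape)
    q = np.round(x / s).astype(int) + z
    return np.clip(q, qmin, qmax).astype(np.uint8)
```

Applied to the tensor in the example above, this reproduces the int_repr() values [[0, 10], [100, 200]].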
doc_27395
Add the coordinates of an event to the list of clicks. Parameters eventMouseEvent
doc_27396
Base class for other signals and a subclass of ArithmeticError.
doc_27397
Encode the bytes-like object b using base85 (as used in e.g. git-style binary diffs) and return the encoded bytes. If pad is true, the input is padded with b'\0' so its length is a multiple of 4 bytes before encoding. New in version 3.4.
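A quick round trip (the payload here is arbitrary):

```python
import base64

data = b"example payload"
encoded = base64.b85encode(data)
# Decoding reverses the encoding exactly.
assert base64.b85decode(encoded) == data

# With pad=True, the input is first padded with b'\0' to a multiple of
# 4 bytes: 1 input byte -> 4 padded bytes -> 5 output characters.
assert len(base64.b85encode(b"a", pad=True)) == 5
```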
doc_27398
Set or remove the completion display function. If function is specified, it will be used as the new completion display function; if omitted or None, any completion display function already installed is removed. This sets or clears the rl_completion_display_matches_hook callback in the underlying library. The completion display function is called as function(substitution, [matches], longest_match_length) once each time matches need to be displayed.
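A minimal sketch of installing and then removing such a hook (assumes a platform where the readline module is available; the formatting done inside the hook is an arbitrary choice):

```python
import readline

def display_matches(substitution, matches, longest_match_length):
    # Called by readline each time completion matches must be displayed.
    print()
    for match in matches:
        print(match)

readline.set_completion_display_matches_hook(display_matches)
# Passing None (or calling with no argument) removes the hook again.
readline.set_completion_display_matches_hook(None)
```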
doc_27399
Assert that the mock was awaited at least once. Note that this is separate from the object having been called; the await keyword must be used: >>> mock = AsyncMock() >>> async def main(coroutine_mock): ... await coroutine_mock ... >>> coroutine_mock = mock() >>> mock.called True >>> mock.assert_awaited() Traceback (most recent call last): ... AssertionError: Expected mock to have been awaited. >>> asyncio.run(main(coroutine_mock)) >>> mock.assert_awaited()