doc_26300
See Migration guide for more details. tf.compat.v1.raw_ops.TensorListLength tf.raw_ops.TensorListLength( input_handle, name=None ) input_handle: the input list length: the number of tensors in the list Args input_handle A Tensor of type variant. name A name for the operation (optional). Returns A Tensor of type int32.
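A hedged sketch of querying a list's length (TensorListReserve is used here only to build a list to measure; the exact raw-op calling conventions are an assumption):
>>> import tensorflow as tf
>>> handle = tf.raw_ops.TensorListReserve(
...     element_shape=[], num_elements=3, element_dtype=tf.float32)
>>> tf.raw_ops.TensorListLength(input_handle=handle)   # scalar int32 tensor: 3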
doc_26301
Returns a human-readable printout of the current memory allocator statistics for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default). abbreviated (bool, optional) – whether to return an abbreviated summary (default: False). Note See Memory management for more details about GPU memory management.
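A short usage sketch (assuming this documents torch.cuda.memory_summary; requires a CUDA device):
>>> import torch
>>> print(torch.cuda.memory_summary(device=0, abbreviated=True))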
doc_26302
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters rendererRendererBase subclass. Notes This method is overridden in the Artist subclasses.
doc_26303
See Migration guide for more details. tf.compat.v1.debugging.is_inf, tf.compat.v1.is_inf, tf.compat.v1.math.is_inf tf.math.is_inf( x, name=None ) Example: x = tf.constant([5.0, np.inf, 6.8, np.inf]) tf.math.is_inf(x) ==> [False, True, False, True] Args x A Tensor. Must be one of the following types: bfloat16, half, float32, float64. name A name for the operation (optional). Returns A Tensor of type bool. Numpy Compatibility Equivalent to np.isinf
doc_26304
Display minor ticks on the Axes. Displaying minor ticks may reduce performance; you may turn them off using minorticks_off() if drawing speed is a problem.
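For example:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.plot([1, 2, 3])
>>> ax.minorticks_on()    # ax.minorticks_off() restores the default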
doc_26305
A form that lets a user change their password without entering the old password.
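A minimal sketch, assuming this documents django.contrib.auth.forms.SetPasswordForm (the form is bound to a user plus the new password data):
>>> from django.contrib.auth.forms import SetPasswordForm
>>> form = SetPasswordForm(user, data={
...     'new_password1': 's3cret-pass', 'new_password2': 's3cret-pass'})
>>> if form.is_valid():
...     form.save()    # sets the new password on `user`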
doc_26306
A dictionary containing extra parameters passed to the content-type header. This is typically provided by services, such as Google App Engine, that intercept and handle file uploads on your behalf. As a result your handler may not receive the uploaded file content, but instead a URL or other pointer to the file (see RFC 2388).
doc_26307
Return a string which is the concatenation of the strings in the sequence seq. Calls str.join element-wise. Parameters separray_like of str or unicode seqarray_like of str or unicode Returns outndarray Output array of str or unicode, depending on input types See also str.join
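For example:
>>> import numpy as np
>>> np.char.join('-', 'osd')
array('o-s-d', dtype='<U5')
>>> np.char.join(['-', '.'], ['ghc', 'osd'])
array(['g-h-c', 'o.s.d'], dtype='<U5')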
doc_26308
Return the path to the resource as an actual file system path. This function returns a context manager for use in a with statement. The context manager provides a pathlib.Path object. Exiting the context manager cleans up any temporary file created when the resource needs to be extracted from e.g. a zip file. package is either a name or a module object which conforms to the Package requirements. resource is the name of the resource to open within package; it may not contain path separators and it may not have sub-resources (i.e. it cannot be a directory).
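A usage sketch (the package and resource names here are hypothetical):
>>> from importlib.resources import path
>>> with path('mypkg', 'data.txt') as p:
...     print(p.read_text())    # p is a pathlib.Path, valid inside the with block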
doc_26309
Returns the data type with the smallest size and smallest scalar kind to which both type1 and type2 may be safely cast. The returned data type is always in native byte order. This function is symmetric, but rarely associative. Parameters type1dtype or dtype specifier First data type. type2dtype or dtype specifier Second data type. Returns outdtype The promoted data type. See also result_type, dtype, can_cast Notes New in version 1.6.0. Starting in NumPy 1.9, promote_types function now returns a valid string length when given an integer or float dtype as one argument and a string dtype as another argument. Previously it always returned the input string dtype, even if it wasn’t long enough to store the max integer/float value converted to a string. Examples >>> np.promote_types('f4', 'f8') dtype('float64') >>> np.promote_types('i8', 'f4') dtype('float64') >>> np.promote_types('>i8', '<c8') dtype('complex128') >>> np.promote_types('i4', 'S8') dtype('S11') An example of a non-associative case: >>> p = np.promote_types >>> p('S', p('i1', 'u1')) dtype('S6') >>> p(p('S', 'i1'), 'u1') dtype('S4')
doc_26310
See Migration guide for more details. tf.compat.v1.io.gfile.isdir tf.io.gfile.isdir( path ) Args path string, path to a potential directory Returns True, if the path is a directory; False otherwise
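For example:
>>> import tensorflow as tf
>>> tf.io.gfile.isdir('/tmp')
True
>>> tf.io.gfile.isdir('/tmp/no-such-dir')
False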
doc_26311
Draw random samples from a normal (Gaussian) distribution. The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [2], is often called the bell curve because of its characteristic shape (see the example below). The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [2]. Parameters locfloat or array_like of floats Mean (“centre”) of the distribution. scalefloat or array_like of floats Standard deviation (spread or “width”) of the distribution. Must be non-negative. sizeint or tuple of ints, optional Output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. If size is None (default), a single value is returned if loc and scale are both scalars. Otherwise, np.broadcast(loc, scale).size samples are drawn. Returns outndarray or scalar Drawn samples from the parameterized normal distribution. See also scipy.stats.norm probability density function, distribution or cumulative density function, etc. Notes The probability density for the Gaussian distribution is \[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\] where \(\mu\) is the mean and \(\sigma\) the standard deviation. The square of the standard deviation, \(\sigma^2\), is called the variance. The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \(x + \sigma\) and \(x - \sigma\) [2]). This implies that normal is more likely to return samples lying close to the mean, rather than those far away. References 1 Wikipedia, “Normal distribution”, https://en.wikipedia.org/wiki/Normal_distribution 2(1,2,3) P. R. Peebles Jr., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. Examples Draw samples from the distribution: >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> s = np.random.default_rng().normal(mu, sigma, 1000) Verify the mean and the variance: >>> abs(mu - np.mean(s)) 0.0 # may vary >>> abs(sigma - np.std(s, ddof=1)) 0.0 # may vary Display the histogram of the samples, along with the probability density function: >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... linewidth=2, color='r') >>> plt.show() Two-by-four array of samples from N(3, 6.25): >>> np.random.default_rng().normal(3, 2.5, size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
doc_26312
Finalize this object, making the underlying file a complete PDF file.
doc_26313
Return the cumulative sum of the array elements over the given axis. Masked values are set to 0 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to numpy.cumsum for full documentation. See also numpy.ndarray.cumsum corresponding function for ndarrays numpy.cumsum equivalent function Notes The mask is lost if out is not a valid ma.MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow. Examples >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) >>> marr.cumsum() masked_array(data=[0, 1, 3, --, --, --, 9, 16, 24, 33], mask=[False, False, False, True, True, True, False, False, False, False], fill_value=999999)
doc_26314
Deprecated since version 3.9: Deprecated in favor of headers.
doc_26315
Call matplotlib.backend_managers.ToolManager.message_event.
doc_26316
The tuple should be (nchannels, sampwidth, framerate, nframes, comptype, compname), with values valid for the set*() methods. Sets all parameters.
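A minimal sketch, assuming this documents wave.Wave_write.setparams (writes one second of 16-bit mono silence):
>>> import wave
>>> with wave.open('out.wav', 'wb') as w:
...     w.setparams((1, 2, 44100, 44100, 'NONE', 'not compressed'))
...     w.writeframes(b'\x00\x00' * 44100)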
doc_26317
Returns a list of all hyperparameter specifications.
doc_26318
Escape special characters in pattern. This is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it. For example: >>> print(re.escape('http://www.python.org')) http://www\.python\.org >>> legal_chars = string.ascii_lowercase + string.digits + "!#$%&'*+-.^_`|~:" >>> print('[%s]+' % re.escape(legal_chars)) [abcdefghijklmnopqrstuvwxyz0123456789!\#\$%\&'\*\+\-\.\^_`\|\~:]+ >>> operators = ['+', '-', '*', '/', '**'] >>> print('|'.join(map(re.escape, sorted(operators, reverse=True)))) /|\-|\+|\*\*|\* This function must not be used for the replacement string in sub() and subn(), only backslashes should be escaped. For example: >>> digits_re = r'\d+' >>> sample = '/usr/sbin/sendmail - 0 errors, 12 warnings' >>> print(re.sub(digits_re, digits_re.replace('\\', r'\\'), sample)) /usr/sbin/sendmail - \d+ errors, \d+ warnings Changed in version 3.3: The '_' character is no longer escaped. Changed in version 3.7: Only characters that can have special meaning in a regular expression are escaped. As a result, '!', '"', '%', "'", ',', '/', ':', ';', '<', '=', '>', '@', and "`" are no longer escaped.
doc_26319
Return the text string.
doc_26320
Constructor for an IncrementalDecoder instance. All incremental decoders must provide this constructor interface. They are free to add additional keyword arguments, but only the ones defined here are used by the Python codec registry. The IncrementalDecoder may implement different error handling schemes by providing the errors keyword argument. See Error Handlers for possible values. The errors argument will be assigned to an attribute of the same name. Assigning to this attribute makes it possible to switch between different error handling strategies during the lifetime of the IncrementalDecoder object. decode(object[, final]) Decodes object (taking the current state of the decoder into account) and returns the resulting decoded object. If this is the last call to decode() final must be true (the default is false). If final is true the decoder must decode the input completely and must flush all buffers. If this isn’t possible (e.g. because of incomplete byte sequences at the end of the input) it must initiate error handling just like in the stateless case (which might raise an exception). reset() Reset the decoder to the initial state. getstate() Return the current state of the decoder. This must be a tuple with two items, the first must be the buffer containing the still undecoded input. The second must be an integer and can be additional state info. (The implementation should make sure that 0 is the most common additional state info.) If this additional state info is 0 it must be possible to set the decoder to the state which has no input buffered and 0 as the additional state info, so that feeding the previously buffered input to the decoder returns it to the previous state without producing any output. (Additional state info that is more complicated than integers can be converted into an integer by marshaling/pickling the info and encoding the bytes of the resulting string into an integer.) setstate(state) Set the state of the decoder to state. state must be a decoder state returned by getstate().
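A minimal subclass sketch (the decoder itself is hypothetical; it only exercises the constructor and decode() interface described above):
>>> import codecs
>>> class Latin1UpperDecoder(codecs.IncrementalDecoder):
...     # hypothetical decoder: latin-1 bytes -> upper-cased text
...     def decode(self, obj, final=False):
...         return obj.decode('latin-1').upper()
>>> d = Latin1UpperDecoder(errors='strict')
>>> d.errors          # the errors argument became an attribute
'strict'
>>> d.decode(b'abc')
'ABC'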
doc_26321
Retrieve the form class to instantiate. If form_class is provided, that class will be used. Otherwise, a ModelForm will be instantiated using the model associated with the queryset, or with the model, depending on which attribute is provided.
doc_26322
Number of elements in the array. Equal to np.prod(a.shape), i.e., the product of the array’s dimensions. Notes a.size returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested np.prod(a.shape), which returns an instance of np.int_), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. Examples >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30
doc_26323
Subset of data from the University of North Carolina Volume Rendering Test Data Set. The full dataset is available at [1]. Returns image(10, 256, 256) uint16 ndarray Notes The 3D volume consists of 10 layers from the larger volume. References 1 https://graphics.stanford.edu/data/voldata/
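If this entry documents skimage.data.brain (which matches the shape and source given here), usage is simply:
>>> from skimage import data
>>> vol = data.brain()        # assumption: this describes data.brain
>>> vol.shape, vol.dtype      # (10, 256, 256), dtype('uint16')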
doc_26324
Which line number in the file the error occurred in. This is 1-indexed: the first line in the file has a lineno of 1.
doc_26325
Bases: matplotlib.tri.triinterpolate.TriInterpolator Linear interpolator on a triangular grid. Each triangle is represented by a plane so that an interpolated value at point (x, y) lies on the plane of the triangle containing (x, y). Interpolated values are therefore continuous across the triangulation, but their first derivatives are discontinuous at edges between triangles. Parameters triangulationTriangulation The triangulation to interpolate over. z(npoints,) array-like Array of values, defined at grid points, to interpolate between. trifinderTriFinder, optional If this is not specified, the Triangulation's default TriFinder will be used by calling Triangulation.get_trifinder. Methods `__call__` (x, y) (Returns interpolated values at (x, y) points.) `gradient` (x, y) (Returns interpolated derivatives at (x, y) points.) gradient(x, y)[source] Returns a list of 2 masked arrays containing interpolated derivatives at the specified (x, y) points. Parameters x, yarray-like x and y coordinates of the same shape and any number of dimensions. Returns dzdx, dzdynp.ma.array 2 masked arrays of the same shape as x and y; values corresponding to (x, y) points outside of the triangulation are masked out. The first returned array contains the values of \(\frac{\partial z}{\partial x}\) and the second those of \(\frac{\partial z}{\partial y}\).
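For example, interpolating on the plane z = x + y:
>>> import numpy as np
>>> import matplotlib.tri as mtri
>>> x = np.array([0., 1., 0., 1.])
>>> y = np.array([0., 0., 1., 1.])
>>> z = x + y
>>> triang = mtri.Triangulation(x, y)
>>> interp = mtri.LinearTriInterpolator(triang, z)
>>> interp(0.5, 0.5)              # masked scalar; the plane gives 1.0 here
>>> interp.gradient(0.5, 0.5)     # dz/dx and dz/dy, both 1.0 for this plane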
doc_26326
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
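This signature is shared across scikit-learn transformers; a short sketch using StandardScaler:
>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> X = np.array([[0., 0.], [1., 1.], [2., 2.]])
>>> StandardScaler().fit_transform(X)     # fit(X).transform(X) in one call
array([[-1.22474487, -1.22474487],
       [ 0.        ,  0.        ],
       [ 1.22474487,  1.22474487]])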
doc_26327
Alias for get_linestyle.
doc_26328
Add batched image data to summary. Note that this requires the pillow package. Parameters tag (string) – Data identifier img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data global_step (int) – Global step value to record walltime (float) – Optional override default walltime (time.time()) seconds after epoch of event dataformats (string) – Image data format specification of the form NCHW, NHWC, CHW, HWC, HW, WH, etc. Shape: img_tensor: Default is (N, 3, H, W). If dataformats is specified, other shapes will be accepted, e.g. NCHW or NHWC. Examples: from torch.utils.tensorboard import SummaryWriter import numpy as np img_batch = np.zeros((16, 3, 100, 100)) for i in range(16): img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i writer = SummaryWriter() writer.add_images('my_image_batch', img_batch, 0) writer.close()
doc_26329
class sklearn.metrics.RocCurveDisplay(*, fpr, tpr, roc_auc=None, estimator_name=None, pos_label=None) [source] ROC Curve visualization. It is recommended to use plot_roc_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. Parameters fprndarray False positive rate. tprndarray True positive rate. roc_aucfloat, default=None Area under ROC curve. If None, the roc_auc score is not shown. estimator_namestr, default=None Name of estimator. If None, the estimator name is not shown. pos_labelstr or int, default=None The class considered as the positive class when computing the roc auc metrics. By default, estimators.classes_[1] is considered as the positive class. New in version 0.24. Attributes line_matplotlib Artist ROC Curve. ax_matplotlib Axes Axes with ROC Curve. figure_matplotlib Figure Figure containing the curve. See also roc_curve Compute Receiver operating characteristic (ROC) curve. plot_roc_curve Plot Receiver operating characteristic (ROC) curve. roc_auc_score Compute the area under the ROC curve. Examples >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from sklearn import metrics >>> y = np.array([0, 0, 1, 1]) >>> pred = np.array([0.1, 0.4, 0.35, 0.8]) >>> fpr, tpr, thresholds = metrics.roc_curve(y, pred) >>> roc_auc = metrics.auc(fpr, tpr) >>> display = metrics.RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc, estimator_name='example estimator') >>> display.plot() >>> plt.show() Methods plot([ax, name]) Plot visualization plot(ax=None, *, name=None, **kwargs) [source] Plot visualization Extra keyword arguments will be passed to matplotlib’s plot. Parameters axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. namestr, default=None Name of ROC Curve for labeling. If None, use the name of the estimator. Returns displayRocCurveDisplay Object that stores computed values. Examples using sklearn.metrics.RocCurveDisplay Visualizations with Display Objects
doc_26330
Return the transform for linear scaling, which is just the IdentityTransform.
doc_26331
tf.experimental.numpy.signbit( x ) Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.signbit.
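For example:
>>> import tensorflow as tf
>>> tf.experimental.numpy.signbit(tf.constant([-1.0, 0.0, 2.0]))   # [True, False, False]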
doc_26332
changes the layer of the sprite change_layer(sprite, new_layer) -> None sprite must have been added to the renderer. It is not checked.
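A hedged sketch, assuming this documents pygame.sprite.LayeredUpdates.change_layer:
>>> import pygame
>>> group = pygame.sprite.LayeredUpdates()
>>> spr = pygame.sprite.Sprite()
>>> group.add(spr, layer=0)
>>> group.change_layer(spr, 5)
>>> group.get_layer_of_sprite(spr)
5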
doc_26333
See Migration guide for more details. tf.compat.v1.VariableSynchronization AUTO: Indicates that the synchronization will be determined by the current DistributionStrategy (eg. With MirroredStrategy this would be ON_WRITE). NONE: Indicates that there will only be one copy of the variable, so there is no need to sync. ON_WRITE: Indicates that the variable will be updated across devices every time it is written. ON_READ: Indicates that the variable will be aggregated across devices when it is read (eg. when checkpointing or when evaluating an op that uses the variable). Class Variables AUTO tf.VariableSynchronization NONE tf.VariableSynchronization ON_READ tf.VariableSynchronization ON_WRITE tf.VariableSynchronization
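A usage sketch (the synchronization mode is passed when constructing a variable):
>>> import tensorflow as tf
>>> v = tf.Variable(
...     1.0,
...     synchronization=tf.VariableSynchronization.ON_READ,
...     aggregation=tf.VariableAggregation.MEAN)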
doc_26334
marker    description
"."    point
","    pixel
"o"    circle
"v"    triangle_down
"^"    triangle_up
"<"    triangle_left
">"    triangle_right
"1"    tri_down
"2"    tri_up
"3"    tri_left
"4"    tri_right
"8"    octagon
"s"    square
"p"    pentagon
"P"    plus (filled)
"*"    star
"h"    hexagon1
"H"    hexagon2
"+"    plus
"x"    x
"X"    x (filled)
"D"    diamond
"d"    thin_diamond
"|"    vline
"_"    hline
0 (TICKLEFT)    tickleft
1 (TICKRIGHT)    tickright
2 (TICKUP)    tickup
3 (TICKDOWN)    tickdown
4 (CARETLEFT)    caretleft
5 (CARETRIGHT)    caretright
6 (CARETUP)    caretup
7 (CARETDOWN)    caretdown
8 (CARETLEFTBASE)    caretleft (centered at base)
9 (CARETRIGHTBASE)    caretright (centered at base)
10 (CARETUPBASE)    caretup (centered at base)
11 (CARETDOWNBASE)    caretdown (centered at base)
"None", " " or ""    nothing
'$...$'    Render the string using mathtext. E.g. "$f$" for a marker showing the letter f.
verts    A list of (x, y) pairs used for Path vertices. The center of the marker is located at (0, 0) and the size is normalized, such that the created path is encapsulated inside the unit cell.
path    A Path instance.
(numsides, 0, angle)    A regular polygon with numsides sides, rotated by angle.
(numsides, 1, angle)    A star-like symbol with numsides sides, rotated by angle.
(numsides, 2, angle)    An asterisk with numsides sides, rotated by angle.
None is the default which means 'nothing', however this table is referred to from other docs for the valid inputs from marker inputs and in those cases None still means 'default'. Note that special symbols can be defined via the STIX math font, e.g. "$\u266B$". For an overview over the STIX font symbols refer to the STIX font table. Also see the STIX Fonts. Integer numbers from 0 to 11 create lines and triangles. Those are equally accessible via capitalized variables, like CARETDOWNBASE. Hence the following are equivalent:
plt.plot([1, 2, 3], marker=11)
plt.plot([1, 2, 3], marker=matplotlib.markers.CARETDOWNBASE)
Examples showing the use of markers: Marker reference Marker examples Classes MarkerStyle([marker, fillstyle]) A class representing marker types.
doc_26335
Special value that can be used as the stdin, stdout or stderr argument to process creation functions. It indicates that the special file os.devnull will be used for the corresponding subprocess stream.
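For example:
>>> import subprocess
>>> subprocess.run(['echo', 'hi'], stdout=subprocess.DEVNULL)   # output is discarded
CompletedProcess(args=['echo', 'hi'], returncode=0)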
doc_26336
Return a Font representation of a tk named font.
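For example (a Tk root must already exist):
>>> import tkinter
>>> import tkinter.font
>>> root = tkinter.Tk()
>>> f = tkinter.font.nametofont('TkDefaultFont')
>>> f.cget('family')    # platform-dependent, e.g. 'DejaVu Sans'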
doc_26337
Conform Series/DataFrame to new index with optional filling logic. Places NA/NaN in locations having no value in the previous index. A new object is produced unless the new index is equivalent to the current one and copy=False. Parameters keywords for axes:array-like, optional New labels / index to conform to, should be specified using keywords. Preferably an Index object to avoid duplicating data. method:{None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’} Method to use for filling holes in reindexed DataFrame. Please note: this is only applicable to DataFrames/Series with a monotonically increasing/decreasing index. None (default): don’t fill gaps pad / ffill: Propagate last valid observation forward to next valid. backfill / bfill: Use next valid observation to fill gap. nearest: Use nearest valid observations to fill gap. copy:bool, default True Return a new object, even if the passed indexes are the same. level:int or name Broadcast across a level, matching Index values on the passed MultiIndex level. fill_value:scalar, default np.NaN Value to use for missing values. Defaults to NaN, but can be any “compatible” value. limit:int, default None Maximum number of consecutive elements to forward or backward fill. tolerance:optional Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like includes list, tuple, array, Series, and must be the same size as the index and its dtype must exactly match the index’s type. Returns Series/DataFrame with changed index. See also DataFrame.set_index Set row labels. DataFrame.reset_index Remove row labels or move them to new columns. DataFrame.reindex_like Change to same indices as other DataFrame. Examples DataFrame.reindex supports two calling conventions (index=index_labels, columns=column_labels, ...) (labels, axis={'index', 'columns'}, ...) We highly recommend using keyword arguments to clarify your intent. Create a dataframe with some fictional data. >>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror'] >>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301], ... 'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]}, ... index=index) >>> df http_status response_time Firefox 200 0.04 Chrome 200 0.02 Safari 404 0.07 IE10 404 0.08 Konqueror 301 1.00 Create a new index and reindex the dataframe. By default values in the new index that do not have corresponding records in the dataframe are assigned NaN. >>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10', ... 'Chrome'] >>> df.reindex(new_index) http_status response_time Safari 404.0 0.07 Iceweasel NaN NaN Comodo Dragon NaN NaN IE10 404.0 0.08 Chrome 200.0 0.02 We can fill in the missing values by passing a value to the keyword fill_value. Because the index is not monotonically increasing or decreasing, we cannot use arguments to the keyword method to fill the NaN values. >>> df.reindex(new_index, fill_value=0) http_status response_time Safari 404 0.07 Iceweasel 0 0.00 Comodo Dragon 0 0.00 IE10 404 0.08 Chrome 200 0.02 >>> df.reindex(new_index, fill_value='missing') http_status response_time Safari 404 0.07 Iceweasel missing missing Comodo Dragon missing missing IE10 404 0.08 Chrome 200 0.02 We can also reindex the columns.
>>> df.reindex(columns=['http_status', 'user_agent']) http_status user_agent Firefox 200 NaN Chrome 200 NaN Safari 404 NaN IE10 404 NaN Konqueror 301 NaN Or we can use “axis-style” keyword arguments >>> df.reindex(['http_status', 'user_agent'], axis="columns") http_status user_agent Firefox 200 NaN Chrome 200 NaN Safari 404 NaN IE10 404 NaN Konqueror 301 NaN To further illustrate the filling functionality in reindex, we will create a dataframe with a monotonically increasing index (for example, a sequence of dates). >>> date_index = pd.date_range('1/1/2010', periods=6, freq='D') >>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]}, ... index=date_index) >>> df2 prices 2010-01-01 100.0 2010-01-02 101.0 2010-01-03 NaN 2010-01-04 100.0 2010-01-05 89.0 2010-01-06 88.0 Suppose we decide to expand the dataframe to cover a wider date range. >>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D') >>> df2.reindex(date_index2) prices 2009-12-29 NaN 2009-12-30 NaN 2009-12-31 NaN 2010-01-01 100.0 2010-01-02 101.0 2010-01-03 NaN 2010-01-04 100.0 2010-01-05 89.0 2010-01-06 88.0 2010-01-07 NaN The index entries that did not have a value in the original data frame (for example, ‘2009-12-29’) are by default filled with NaN. If desired, we can fill in the missing values using one of several options. For example, to back-propagate the last valid value to fill the NaN values, pass bfill as an argument to the method keyword. >>> df2.reindex(date_index2, method='bfill') prices 2009-12-29 100.0 2009-12-30 100.0 2009-12-31 100.0 2010-01-01 100.0 2010-01-02 101.0 2010-01-03 NaN 2010-01-04 100.0 2010-01-05 89.0 2010-01-06 88.0 2010-01-07 NaN Please note that the NaN value present in the original dataframe (at index value 2010-01-03) will not be filled by any of the value propagation schemes. This is because filling while reindexing does not look at dataframe values, but only compares the original and desired indexes. If you do want to fill in the NaN values present in the original dataframe, use the fillna() method. See the user guide for more.
doc_26338
Compute the Haversine distance between samples in X and Y. The Haversine (or great circle) distance is the angular distance between two points on the surface of a sphere. The first coordinate of each point is assumed to be the latitude, the second is the longitude, given in radians. The dimension of the data must be 2. \[D(x, y) = 2\arcsin[\sqrt{\sin^2((x1 - y1) / 2) + \cos(x1)\cos(y1)\sin^2((x2 - y2) / 2)}]\] Parameters Xarray-like of shape (n_samples_X, 2) Yarray-like of shape (n_samples_Y, 2), default=None Returns distancendarray of shape (n_samples_X, n_samples_Y) Notes As the Earth is nearly spherical, the haversine formula provides a good approximation of the distance between two points of the Earth surface, with a less than 1% error on average. Examples We want to calculate the distance between the Ezeiza Airport (Buenos Aires, Argentina) and the Charles de Gaulle Airport (Paris, France). >>> from sklearn.metrics.pairwise import haversine_distances >>> from math import radians >>> bsas = [-34.83333, -58.5166646] >>> paris = [49.0083899664, 2.53844117956] >>> bsas_in_radians = [radians(_) for _ in bsas] >>> paris_in_radians = [radians(_) for _ in paris] >>> result = haversine_distances([bsas_in_radians, paris_in_radians]) >>> result * 6371000/1000 # multiply by Earth radius to get kilometers array([[ 0. , 11099.54035582], [11099.54035582, 0. ]])
doc_26339
Animation subclass for time-based animation. A new frame is drawn every interval milliseconds. Note You must store the created Animation in a variable that lives as long as the animation should run. Otherwise, the Animation object will be garbage-collected and the animation stops. Parameters figFigure The figure object used to get needed events, such as draw or resize. intervalint, default: 200 Delay between frames in milliseconds. repeat_delayint, default: 0 The delay in milliseconds between consecutive animation runs, if repeat is True. repeatbool, default: True Whether the animation repeats when the sequence of frames is completed. blitbool, default: False Whether blitting is used to optimize drawing. __init__(fig, interval=200, repeat_delay=0, repeat=True, event_source=None, *args, **kwargs)[source] Methods __init__(fig[, interval, repeat_delay, ...]) new_frame_seq() Return a new sequence of frame information. new_saved_frame_seq() Return a new sequence of saved/cached frame information. pause() Pause the animation. resume() Resume the animation. save(filename[, writer, fps, dpi, codec, ...]) Save the animation as a movie file by drawing every frame. to_html5_video([embed_limit]) Convert the animation to an HTML5 <video> tag. to_jshtml([fps, embed_frames, default_mode]) Generate HTML representation of the animation.
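The note above matters in practice; a minimal sketch using the FuncAnimation subclass (the reference kept in `anim` prevents garbage collection):
>>> import matplotlib.pyplot as plt
>>> from matplotlib.animation import FuncAnimation
>>> fig, ax = plt.subplots()
>>> line, = ax.plot([], [])
>>> ax.set(xlim=(0, 10), ylim=(0, 10))
>>> def update(frame):
...     line.set_data(range(frame), range(frame))
...     return line,
>>> anim = FuncAnimation(fig, update, frames=10, interval=200)   # keep this reference
>>> plt.show()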
doc_26340
Sets the terminating condition to be recognized on the channel. term may be any of three types of value, corresponding to three different ways to handle incoming protocol data. term Description string Will call found_terminator() when the string is found in the input stream integer Will call found_terminator() when the indicated number of characters have been received None The channel continues to collect data forever Note that any data following the terminator will be available for reading by the channel after found_terminator() is called.
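A minimal subclass sketch (assuming the asynchat module this appears to document; class and attribute names are hypothetical):
>>> import asynchat
>>> class HeaderReader(asynchat.async_chat):
...     def __init__(self, sock):
...         super().__init__(sock)
...         self.set_terminator(b'\r\n\r\n')   # collect until a blank line
...         self.parts = []
...     def collect_incoming_data(self, data):
...         self.parts.append(data)
...     def found_terminator(self):
...         headers = b''.join(self.parts)
...         self.set_terminator(None)          # then collect the body forever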
doc_26341
Cause the process to sleep until a signal is received; the appropriate handler will then be called. Returns nothing. Availability: Unix. See the man page signal(2) for further information. See also sigwait(), sigwaitinfo(), sigtimedwait() and sigpending().
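For example (Unix only; the process blocks until, say, SIGUSR1 arrives):
>>> import signal
>>> signal.signal(signal.SIGUSR1, lambda signum, frame: print('got SIGUSR1'))
>>> signal.pause()    # returns after the handler has run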
doc_26342
Weekly offset. Parameters weekday:int or None, default None Always generate specific day of week. 0 for Monday. Attributes base Returns a copy of the calling offset object with n=1 and all other attributes equal. freqstr kwds n name nanos normalize rule_code weekday Methods __call__(*args, **kwargs) Call self as a function. rollback Roll provided date backward to next offset only if not on offset. rollforward Roll provided date forward to next offset only if not on offset. apply apply_index copy isAnchored is_anchored is_month_end is_month_start is_on_offset is_quarter_end is_quarter_start is_year_end is_year_start onOffset
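For example (2021-01-01 is a Friday):
>>> import pandas as pd
>>> pd.Timestamp('2021-01-01') + pd.offsets.Week(weekday=0)   # roll to next Monday
Timestamp('2021-01-04 00:00:00')
>>> pd.Timestamp('2021-01-01') + pd.offsets.Week()            # plain one-week step
Timestamp('2021-01-08 00:00:00')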
doc_26343
Return the font size as an integer. See also font_manager.FontProperties.get_size_in_points
doc_26344
class CustomerReportRecord(models.Model):
time_raised = models.DateTimeField(default=timezone.now, editable=False) reference = models.CharField(unique=True, max_length=20) description = models.TextField() Here's a basic ModelSerializer that we can use for creating or updating instances of CustomerReportRecord: class CustomerReportSerializer(serializers.ModelSerializer): class Meta: model = CustomerReportRecord If we open up the Django shell using manage.py shell we can now >>> from project.example.serializers import CustomerReportSerializer >>> serializer = CustomerReportSerializer() >>> print(repr(serializer)) CustomerReportSerializer(): id = IntegerField(label='ID', read_only=True) time_raised = DateTimeField(read_only=True) reference = CharField(max_length=20, validators=[<UniqueValidator(queryset=CustomerReportRecord.objects.all())>]) description = CharField(style={'type': 'textarea'}) The interesting bit here is the reference field. We can see that the uniqueness constraint is being explicitly enforced by a validator on the serializer field. Because of this more explicit style REST framework includes a few validator classes that are not available in core Django. These classes are detailed below. UniqueValidator This validator can be used to enforce the unique=True constraint on model fields. It takes a single required argument, and an optional messages argument: queryset required - This is the queryset against which uniqueness should be enforced. message - The error message that should be used when validation fails. lookup - The lookup used to find an existing instance with the value being validated. Defaults to 'exact'. This validator should be applied to serializer fields, like so: from rest_framework.validators import UniqueValidator slug = SlugField( max_length=100, validators=[UniqueValidator(queryset=BlogPost.objects.all())] ) UniqueTogetherValidator This validator can be used to enforce unique_together constraints on model instances. It has two required arguments, and a single optional messages argument: queryset required - This is the queryset against which uniqueness should be enforced. fields required - A list or tuple of field names which should make a unique set. These must exist as fields on the serializer class. message - The error message that should be used when validation fails. The validator should be applied to serializer classes, like so: from rest_framework.validators import UniqueTogetherValidator class ExampleSerializer(serializers.Serializer): # ... class Meta: # ToDo items belong to a parent list, and have an ordering defined # by the 'position' field. No two items in a given list may share # the same position. validators = [ UniqueTogetherValidator( queryset=ToDoItem.objects.all(), fields=['list', 'position'] ) ] Note: The UniqueTogetherValidator class always imposes an implicit constraint that all the fields it applies to are always treated as required. Fields with default values are an exception to this as they always supply a value even when omitted from user input. UniqueForDateValidator UniqueForMonthValidator UniqueForYearValidator These validators can be used to enforce the unique_for_date, unique_for_month and unique_for_year constraints on model instances. They take the following arguments: queryset required - This is the queryset against which uniqueness should be enforced. field required - A field name against which uniqueness in the given date range will be validated. This must exist as a field on the serializer class. 
date_field required - A field name which will be used to determine date range for the uniqueness constraint. This must exist as a field on the serializer class. message - The error message that should be used when validation fails. The validator should be applied to serializer classes, like so: from rest_framework.validators import UniqueForYearValidator class ExampleSerializer(serializers.Serializer): # ... class Meta: # Blog posts should have a slug that is unique for the current year. validators = [ UniqueForYearValidator( queryset=BlogPostItem.objects.all(), field='slug', date_field='published' ) ] The date field that is used for the validation is always required to be present on the serializer class. You can't simply rely on a model class default=..., because the value being used for the default wouldn't be generated until after the validation has run. There are a couple of styles you may want to use for this depending on how you want your API to behave. If you're using ModelSerializer you'll probably simply rely on the defaults that REST framework generates for you, but if you are using Serializer or simply want more explicit control, use one of the styles demonstrated below. Using with a writable date field. If you want the date field to be writable the only thing worth noting is that you should ensure that it is always available in the input data, either by setting a default argument, or by setting required=True. published = serializers.DateTimeField(required=True) Using with a read-only date field. If you want the date field to be visible, but not editable by the user, then set read_only=True and additionally set a default=... argument. published = serializers.DateTimeField(read_only=True, default=timezone.now) Using with a hidden date field. If you want the date field to be entirely hidden from the user, then use HiddenField. This field type does not accept user input, but instead always returns its default value to the validated_data in the serializer. published = serializers.HiddenField(default=timezone.now) Note: The UniqueFor<Range>Validator classes impose an implicit constraint that the fields they are applied to are always treated as required. Fields with default values are an exception to this as they always supply a value even when omitted from user input. Advanced field defaults Validators that are applied across multiple fields in the serializer can sometimes require a field input that should not be provided by the API client, but that is available as input to the validator. Two patterns that you may want to use for this sort of validation include: Using HiddenField. This field will be present in validated_data but will not be used in the serializer output representation. Using a standard field with read_only=True, but that also includes a default=… argument. This field will be used in the serializer output representation, but cannot be set directly by the user. REST framework includes a couple of defaults that may be useful in this context. CurrentUserDefault A default class that can be used to represent the current user. In order to use this, the 'request' must have been provided as part of the context dictionary when instantiating the serializer. owner = serializers.HiddenField( default=serializers.CurrentUserDefault() ) CreateOnlyDefault A default class that can be used to only set a default argument during create operations. During updates the field is omitted.
It takes a single argument, which is the default value or callable that should be used during create operations. created_at = serializers.DateTimeField( default=serializers.CreateOnlyDefault(timezone.now) ) Limitations of validators There are some ambiguous cases where you'll need to instead handle validation explicitly, rather than relying on the default serializer classes that ModelSerializer generates. In these cases you may want to disable the automatically generated validators, by specifying an empty list for the serializer Meta.validators attribute. Optional fields By default "unique together" validation enforces that all fields be required=True. In some cases, you might want to explicitly apply required=False to one of the fields, in which case the desired behaviour of the validation is ambiguous. In this case you will typically need to exclude the validator from the serializer class, and instead write any validation logic explicitly, either in the .validate() method, or else in the view. For example: class BillingRecordSerializer(serializers.ModelSerializer): def validate(self, attrs): # Apply custom validation either here, or in the view. class Meta: fields = ['client', 'date', 'amount'] extra_kwargs = {'client': {'required': False}} validators = [] # Remove a default "unique together" constraint. Updating nested serializers When applying an update to an existing instance, uniqueness validators will exclude the current instance from the uniqueness check. The current instance is available in the context of the uniqueness check, because it exists as an attribute on the serializer, having initially been passed using instance=... when instantiating the serializer. In the case of update operations on nested serializers there's no way of applying this exclusion, because the instance is not available. Again, you'll probably want to explicitly remove the validator from the serializer class, and write the code for the validation constraint explicitly, in a .validate() method, or in the view. Debugging complex cases If you're not sure exactly what behavior a ModelSerializer class will generate it is usually a good idea to run manage.py shell, and print an instance of the serializer, so that you can inspect the fields and validators that it automatically generates for you. >>> serializer = MyComplexModelSerializer() >>> print(serializer) class MyComplexModelSerializer: my_fields = ... Also keep in mind that with complex cases it can often be better to explicitly define your serializer classes, rather than relying on the default ModelSerializer behavior. This involves a little more code, but ensures that the resulting behavior is more transparent. Writing custom validators You can use any of Django's existing validators, or write your own custom validators. Function based A validator may be any callable that raises a serializers.ValidationError on failure. def even_number(value): if value % 2 != 0: raise serializers.ValidationError('This field must be an even number.') Field-level validation You can specify custom field-level validation by adding .validate_<field_name> methods to your Serializer subclass. This is documented in the Serializer docs Class-based To write a class-based validator, use the __call__ method. Class-based validators are useful as they allow you to parameterize and reuse behavior. class MultipleOf: def __init__(self, base): self.base = base def __call__(self, value): if value % self.base != 0: message = 'This field must be a multiple of %d.' % self.base raise serializers.ValidationError(message) Accessing the context In some advanced cases you might want a validator to be passed the serializer field it is being used with as additional context. You can do so by setting a requires_context = True attribute on the validator. The __call__ method will then be called with the serializer_field or serializer as an additional argument. requires_context = True def __call__(self, value, serializer_field): ...
doc_26345
Gets the cuda capability of a device. Parameters device (torch.device or int, optional) – device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default). Returns the major and minor cuda capability of the device Return type tuple(int, int)
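For example (the returned values depend on the installed GPU):
>>> import torch
>>> torch.cuda.get_device_capability(0)   # e.g. (8, 0) on an A100
(8, 0)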
doc_26346
This method generates an error message leader in the format of a Unix C compiler error label; the format is '"%s", line %d: ', where the %s is replaced with the name of the current source file and the %d with the current input line number (the optional arguments can be used to override these). This convenience is provided to encourage shlex users to generate error messages in the standard, parseable format understood by Emacs and other Unix tools.
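For example:
>>> import shlex
>>> lex = shlex.shlex('token stream', 'input.txt')
>>> lex.error_leader()
'"input.txt", line 1: '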
doc_26347
Reverse the transformation operation Parameters Xarray of shape [n_samples, n_selected_features] The input samples. Returns X_rarray of shape [n_samples, n_original_features] X with columns of zeros inserted where features would have been removed by transform.
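This matches the scikit-learn feature-selector interface; a sketch using VarianceThreshold:
>>> import numpy as np
>>> from sklearn.feature_selection import VarianceThreshold
>>> X = np.array([[0, 1, 2], [0, 3, 4], [0, 5, 6]])
>>> sel = VarianceThreshold().fit(X)
>>> X_t = sel.transform(X)          # the constant first column is dropped
>>> sel.inverse_transform(X_t)      # a column of zeros is re-inserted in its place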
doc_26348
col_offset end_lineno end_col_offset Instances of ast.expr and ast.stmt subclasses have lineno, col_offset, end_lineno, and end_col_offset attributes. The lineno and end_lineno are the first and last line numbers of source text span (1-indexed so the first line is line 1) and the col_offset and end_col_offset are the corresponding UTF-8 byte offsets of the first and last tokens that generated the node. The UTF-8 offset is recorded because the parser uses UTF-8 internally. Note that the end positions are not required by the compiler and are therefore optional. The end offset is after the last symbol, for example one can get the source segment of a one-line expression node using source_line[node.col_offset : node.end_col_offset].
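For example:
>>> import ast
>>> node = ast.parse('x + 42').body[0].value
>>> node.lineno, node.col_offset, node.end_lineno, node.end_col_offset
(1, 0, 1, 6)
>>> 'x + 42'[node.col_offset:node.end_col_offset]
'x + 42'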
doc_26349
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
doc_26350
Return the name of the directory used for temporary files. This defines the default value for the dir argument to all functions in this module. Python searches a standard list of directories to find one which the calling user can create files in. The list is: The directory named by the TMPDIR environment variable. The directory named by the TEMP environment variable. The directory named by the TMP environment variable. A platform-specific location: On Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order. On all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order. As a last resort, the current working directory. The result of this search is cached, see the description of tempdir below.
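For example:
>>> import tempfile
>>> tempfile.gettempdir()    # e.g. '/tmp' on most Unix systems
'/tmp'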
doc_26351
Set this bounding box from the "frozen" bounds of another Bbox.
doc_26352
Set the artist's visibility. Parameters bbool
doc_26353
tf.compat.v1.nn.batch_norm_with_global_normalization( t=None, m=None, v=None, beta=None, gamma=None, variance_epsilon=None, scale_after_normalization=None, name=None, input=None, mean=None, variance=None ) This op is deprecated. See tf.nn.batch_normalization. Args t A 4D input Tensor. m A 1D mean Tensor with size matching the last dimension of t. This is the first output from tf.nn.moments, or a saved moving average thereof. v A 1D variance Tensor with size matching the last dimension of t. This is the second output from tf.nn.moments, or a saved moving average thereof. beta A 1D beta Tensor with size matching the last dimension of t. An offset to be added to the normalized tensor. gamma A 1D gamma Tensor with size matching the last dimension of t. If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor. variance_epsilon A small float number to avoid dividing by 0. scale_after_normalization A bool indicating whether the resulted tensor needs to be multiplied with gamma. name A name for this operation (optional). input Alias for t. mean Alias for m. variance Alias for v. Returns A batch-normalized t. References: Batch Normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift: Ioffe et al., 2015 (pdf)
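A short sketch of the recommended replacement, tf.nn.batch_normalization (shapes here are illustrative):
>>> import tensorflow as tf
>>> x = tf.random.normal([8, 4, 4, 3])
>>> mean, variance = tf.nn.moments(x, axes=[0, 1, 2])
>>> y = tf.nn.batch_normalization(x, mean, variance,
...                               offset=None, scale=None,
...                               variance_epsilon=1e-5)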
doc_26354
Call all of the registered callbacks. This function is triggered internally when a property is changed. See also add_callback remove_callback
doc_26355
tf.math.bincount( arr, weights=None, minlength=None, maxlength=None, dtype=tf.dtypes.int32, name=None, axis=None, binary_output=False ) If minlength and maxlength are not given, returns a vector with length tf.reduce_max(arr) + 1 if arr is non-empty, and length 0 otherwise. If weights are non-None, then index i of the output stores the sum of the value in weights at each index where the corresponding value in arr is i. values = tf.constant([1,1,2,3,2,4,4,5]) tf.math.bincount(values) #[0 2 2 1 2 1] The maximum element in values is 5, so the output vector has length 5 + 1 = 6. Each bin value in the output indicates the number of occurrences of the corresponding index. Here, index 1 in the output has a value 2, indicating that the value 1 occurs twice in values. values = tf.constant([1,1,2,3,2,4,4,5]) weights = tf.constant([1,5,0,1,0,5,4,5]) tf.math.bincount(values, weights=weights) #[0 6 0 1 9 5] Each bin is incremented by the corresponding weight instead of 1. Here, index 1 in the output has a value 6, the sum of the weights at the positions where values equals 1. Bin-counting on a certain axis This example takes a 2 dimensional input and returns a Tensor with bincounting on each sample. data = np.array([[1, 2, 3, 0], [0, 0, 1, 2]], dtype=np.int32) tf.math.bincount(data, axis=-1) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[1, 1, 1, 1], [2, 1, 1, 0]], dtype=int32)> Bin-counting with binary_output This example gives binary output instead of counting the occurrence. data = np.array([[1, 2, 3, 0], [0, 0, 1, 2]], dtype=np.int32) tf.math.bincount(data, axis=-1, binary_output=True) <tf.Tensor: shape=(2, 4), dtype=int32, numpy= array([[1, 1, 1, 1], [1, 1, 1, 0]], dtype=int32)> Args arr A Tensor, RaggedTensor, or SparseTensor whose values should be counted. These tensors must have a rank of 2 if axis=-1. weights If non-None, must be the same shape as arr. For each value in arr, the bin will be incremented by the corresponding weight instead of 1. minlength If given, ensures the output has length at least minlength, padding with zeros at the end if necessary. maxlength If given, skips values in arr that are equal or greater than maxlength, ensuring that the output has length at most maxlength. dtype If weights is None, determines the type of the output bins. name A name scope for the associated operations (optional). axis The axis to slice over. Axes at and below axis will be flattened before bin counting. Currently, only 0, and -1 are supported. If None, all axes will be flattened (identical to passing 0). binary_output If True, this op will output 1 instead of the number of times a token appears (equivalent to one_hot + reduce_any instead of one_hot + reduce_add). Defaults to False. Returns A vector with the same dtype as weights or the given dtype. The bin values. Raises InvalidArgumentError if negative values are provided as an input.
doc_26356
Returns True or False based on a case-insensitive check for a header with the given name.
doc_26357
Computes the element-wise minimum of input and other. Note If one of the elements being compared is a NaN, then that element is returned. minimum() is not supported for tensors with complex dtypes. Parameters input (Tensor) – the input tensor. other (Tensor) – the second input tensor Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor((1, 2, -1)) >>> b = torch.tensor((3, 0, 4)) >>> torch.minimum(a, b) tensor([1, 0, -1])
doc_26358
The type object for weak references objects.
doc_26359
An ExtensionDtype for uint8 integer data. Changed in version 1.0.0: Now uses pandas.NA as its missing value, rather than numpy.nan. Attributes None Methods None
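For example:
>>> import pandas as pd
>>> pd.array([1, None, 3], dtype='UInt8')
<IntegerArray>
[1, <NA>, 3]
Length: 3, dtype: UInt8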
doc_26360
An SMTP_SSL instance behaves exactly the same as instances of SMTP. SMTP_SSL should be used for situations where SSL is required from the beginning of the connection and using starttls() is not appropriate. If host is not specified, the local host is used. If port is zero, the standard SMTP-over-SSL port (465) is used. The optional arguments local_hostname, timeout and source_address have the same meaning as they do in the SMTP class. context, also optional, can contain a SSLContext and allows configuring various aspects of the secure connection. Please read Security considerations for best practices. keyfile and certfile are a legacy alternative to context, and can point to a PEM formatted private key and certificate chain file for the SSL connection. Changed in version 3.3: context was added. Changed in version 3.3: source_address argument was added. Changed in version 3.4: The class now supports hostname check with ssl.SSLContext.check_hostname and Server Name Indication (see ssl.HAS_SNI). Deprecated since version 3.6: keyfile and certfile are deprecated in favor of context. Please use ssl.SSLContext.load_cert_chain() instead, or let ssl.create_default_context() select the system’s trusted CA certificates for you. Changed in version 3.9: If the timeout parameter is set to be zero, it will raise a ValueError to prevent the creation of a non-blocking socket
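A usage sketch (host and credentials are hypothetical):
>>> import smtplib, ssl
>>> ctx = ssl.create_default_context()
>>> with smtplib.SMTP_SSL('smtp.example.com', 465, context=ctx) as s:
...     s.login('user', 'app-password')
...     s.sendmail('me@example.com', ['you@example.com'],
...                'Subject: hello\r\n\r\nbody')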
doc_26361
Alias for self._offset.
doc_26362
Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as False and non-zeros are treated as True. Parameters input (Tensor) – the input tensor. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> torch.logical_not(torch.tensor([True, False])) tensor([False, True]) >>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8)) tensor([ True, False, False]) >>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double)) tensor([ True, False, False]) >>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16)) tensor([1, 0, 0], dtype=torch.int16)
doc_26363
Return the indices of the elements that are non-zero. Returns a tuple of arrays, one for each dimension of a, containing the indices of the non-zero elements in that dimension. The values in a are always tested and returned in row-major, C-style order. To group the indices by element, rather than dimension, use argwhere, which returns a row for each non-zero element. Note When called on a zero-d array or scalar, nonzero(a) is treated as nonzero(atleast_1d(a)). Deprecated since version 1.17.0: Use atleast_1d explicitly if this behavior is deliberate. Parameters aarray_like Input array. Returns tuple_of_arraystuple Indices of elements that are non-zero. See also flatnonzero Return indices that are non-zero in the flattened version of the input array. ndarray.nonzero Equivalent ndarray method. count_nonzero Counts the number of non-zero elements in the input array. Notes While the nonzero values can be obtained with a[nonzero(a)], it is recommended to use x[x.astype(bool)] or x[x != 0] instead, which will correctly handle 0-d arrays. Examples >>> x = np.array([[3, 0, 0], [0, 4, 0], [5, 6, 0]]) >>> x array([[3, 0, 0], [0, 4, 0], [5, 6, 0]]) >>> np.nonzero(x) (array([0, 1, 2, 2]), array([0, 1, 0, 1])) >>> x[np.nonzero(x)] array([3, 4, 5, 6]) >>> np.transpose(np.nonzero(x)) array([[0, 0], [1, 1], [2, 0], [2, 1]]) A common use for nonzero is to find the indices of an array, where a condition is True. Given an array a, the condition a > 3 is a boolean array and since False is interpreted as 0, np.nonzero(a > 3) yields the indices of the a where the condition is true. >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> a > 3 array([[False, False, False], [ True, True, True], [ True, True, True]]) >>> np.nonzero(a > 3) (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) Using this result to index a is equivalent to using the mask directly: >>> a[np.nonzero(a > 3)] array([4, 5, 6, 7, 8, 9]) >>> a[a > 3] # prefer this spelling array([4, 5, 6, 7, 8, 9]) nonzero can also be called as a method of the array. >>> (a > 3).nonzero() (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
doc_26364
Call decision_function on the estimator with the best found parameters. Only available if refit=True and the underlying estimator supports decision_function. Parameters Xindexable, length n_samples Must fulfill the input assumptions of the underlying estimator.
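A sketch assuming the GridSearchCV/RandomizedSearchCV interface this appears to document:
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> search = GridSearchCV(SVC(), {'C': [0.1, 1.0]}).fit(X, y)
>>> search.decision_function(X[:2]).shape    # delegated to best_estimator_
(2,)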
doc_26365
Like remove() but ignores errors. Parameters header – the header to be discarded.
doc_26366
Return the Canvas of this TurtleScreen. Useful for insiders who know what to do with a Tkinter Canvas. >>> cv = screen.getcanvas() >>> cv <turtle.ScrolledCanvas object ...>
doc_26367
Computes the discrete Fourier Transform sample frequencies for a signal of size n. Note By convention, fft() returns positive frequency terms first, followed by the negative frequencies in reverse order, so that f[-i] for all 0 < i ≤ n/2 in Python gives the negative frequency terms. For an FFT of length n and with inputs spaced in length unit d, the frequencies are: f = [0, 1, ..., (n - 1) // 2, -(n // 2), ..., -1] / (d * n) Note For even lengths, the Nyquist frequency at f[n/2] can be thought of as either negative or positive. fftfreq() follows NumPy’s convention of taking it to be negative. Parameters n (int) – the FFT length d (float, optional) – The sampling length scale. The spacing between individual samples of the FFT input. The default assumes unit spacing, dividing that result by the actual spacing gives the result in physical frequency units. Keyword Arguments dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided. device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Example >>> torch.fft.fftfreq(5) tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000]) For even input, we can see the Nyquist frequency at f[2] is given as negative: >>> torch.fft.fftfreq(4) tensor([ 0.0000, 0.2500, -0.5000, -0.2500])
doc_26368
See Migration guide for more details. tf.compat.v1.app.flags.mark_flag_as_required tf.compat.v1.flags.mark_flag_as_required( flag_name, flag_values=_flagvalues.FLAGS ) Registers a flag validator, which will follow usual validator rules. Important note: validator will pass for any non-None value, such as False, 0 (zero), '' (empty string) and so on. If your module might be imported by others, and you only wish to make the flag required when the module is directly executed, call this method like this: if __name__ == '__main__': flags.mark_flag_as_required('your_flag_name') app.run() Args flag_name str, name of the flag flag_values flags.FlagValues, optional FlagValues instance where the flag is defined. Raises AttributeError Raised when flag_name is not registered as a valid flag name.
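A minimal sketch of this pattern (the flag name data_dir is illustrative; tf.compat.v1.flags wraps Abseil's flags, so absl is used directly here):
from absl import app, flags

flags.DEFINE_string('data_dir', None, 'Path to the training data.')

def main(argv):
    del argv  # unused
    print(flags.FLAGS.data_dir)

if __name__ == '__main__':
    flags.mark_flag_as_required('data_dir')  # required only when run directly
    app.run(main)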
doc_26369
Returns a decompression object, to be used for decompressing data streams that won’t fit into memory at once. The wbits parameter controls the size of the history buffer (or the “window size”), and what header and trailer format is expected. It has the same meaning as described for decompress(). The zdict parameter specifies a predefined compression dictionary. If provided, this must be the same dictionary as was used by the compressor that produced the data that is to be decompressed. Note If zdict is a mutable object (such as a bytearray), you must not modify its contents between the call to decompressobj() and the first call to the decompressor’s decompress() method. Changed in version 3.3: Added the zdict parameter.
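A small sketch of streaming decompression in fixed-size chunks:
>>> import zlib
>>> compressed = zlib.compress(b'stream me ' * 1000)
>>> d = zlib.decompressobj()
>>> chunks = [compressed[i:i + 64] for i in range(0, len(compressed), 64)]
>>> data = b''.join(d.decompress(chunk) for chunk in chunks) + d.flush()
>>> data == b'stream me ' * 1000
True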
doc_26370
The Age response-header field conveys the sender’s estimate of the amount of time since the response (or its revalidation) was generated at the origin server. Age values are non-negative decimal integers, representing time in seconds.
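For example, a cache that has held a response for one hour would send: Age: 3600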
doc_26371
In-place version of multiply().
doc_26372
Convert c to a (n, 4) array of RGBA colors. Parameters cMatplotlib color or array of colors If c is a masked array, an ndarray is returned with a (0, 0, 0, 0) row for each masked value or row in c. alphafloat or sequence of floats, optional If alpha is given, force the alpha value of the returned RGBA tuple to alpha. If None, the alpha value from c is used. If c does not have an alpha channel, then alpha defaults to 1. alpha is ignored for the color value "none" (case-insensitive), which always maps to (0, 0, 0, 0). If alpha is a sequence and c is a single color, c will be repeated to match the length of alpha. Returns array (n, 4) array of RGBA colors, where each channel (red, green, blue, alpha) can assume values between 0 and 1.
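For example (float formatting may vary slightly by NumPy version):
>>> from matplotlib.colors import to_rgba_array
>>> to_rgba_array(['red', 'none'], alpha=0.5)  # alpha is ignored for 'none'
array([[1. , 0. , 0. , 0.5],
       [0. , 0. , 0. , 0. ]])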
doc_26373
Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter tol; “trailing” means highest order coefficient(s), e.g., in [0, 1, 1, 0, 0] (which represents 0 + x + x**2 + 0*x**3 + 0*x**4) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters carray_like 1-d array of coefficients, ordered from lowest order to highest. tolnumber, optional Trailing (i.e., highest order) elements with absolute value less than or equal to tol (default value is zero) are removed. Returns trimmedndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises ValueError If tol < 0 See also trimseq Examples >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j])
doc_26374
Takes the logging configuration from a dictionary. The contents of this dictionary are described in Configuration dictionary schema below. If an error is encountered during configuration, this function will raise a ValueError, TypeError, AttributeError or ImportError with a suitably descriptive message. The following is a (possibly incomplete) list of conditions which will raise an error: A level which is not a string or which is a string not corresponding to an actual logging level. A propagate value which is not a boolean. An id which does not have a corresponding destination. A non-existent handler id found during an incremental call. An invalid logger name. Inability to resolve to an internal or external object. Parsing is performed by the DictConfigurator class, whose constructor is passed the dictionary used for configuration, and has a configure() method. The logging.config module has a callable attribute dictConfigClass which is initially set to DictConfigurator. You can replace the value of dictConfigClass with a suitable implementation of your own. dictConfig() calls dictConfigClass passing the specified dictionary, and then calls the configure() method on the returned object to put the configuration into effect: def dictConfig(config): dictConfigClass(config).configure() For example, a subclass of DictConfigurator could call DictConfigurator.__init__() in its own __init__(), then set up custom prefixes which would be usable in the subsequent configure() call. dictConfigClass would be bound to this new subclass, and then dictConfig() could be called exactly as in the default, uncustomized state. New in version 3.2.
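A minimal sketch of a valid configuration dictionary (the logger name myapp and the format string are illustrative):
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'plain': {'format': '%(levelname)s %(name)s: %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'plain'},
    },
    'loggers': {
        'myapp': {'handlers': ['console'], 'level': 'INFO'},
    },
}

logging.config.dictConfig(config)
logging.getLogger('myapp').info('configured')  # -> INFO myapp: configured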
doc_26375
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
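A short sketch using StandardScaler as an illustrative transformer:
>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> X = np.array([[1.0], [3.0]])
>>> StandardScaler().fit_transform(X)  # fit() and transform() in one call
array([[-1.],
       [ 1.]])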
doc_26376
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y.
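A short sketch using DummyClassifier for a predictable result (three of the four labels are 1, so predicting the majority class scores 0.75):
>>> import numpy as np
>>> from sklearn.dummy import DummyClassifier
>>> X, y = np.zeros((4, 1)), np.array([1, 1, 1, 0])
>>> DummyClassifier(strategy='most_frequent').fit(X, y).score(X, y)
0.75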
doc_26377
Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). Warning ptp preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. np.int8, np.int16, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than 2**(n-1)-1 will be returned as negative values. An example with a work-around is shown below. Parameters axis{None, int}, optional Axis along which to find the peaks. If None (default) the flattened array is used. out{None, array_like}, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. fill_valuescalar or None, optional Value used to fill in the masked values. keepdimsbool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns ptpndarray. A new array holding the result, unless out was specified, in which case a reference to out is returned. Examples >>> x = np.ma.MaskedArray([[4, 9, 2, 10], ... [6, 9, 7, 12]]) >>> x.ptp(axis=1) masked_array(data=[8, 6], mask=False, fill_value=999999) >>> x.ptp(axis=0) masked_array(data=[2, 0, 5, 2], mask=False, fill_value=999999) >>> x.ptp() 10 This example shows that a negative value can be returned when the input is an array of signed integers. >>> y = np.ma.MaskedArray([[1, 127], ... [0, 127], ... [-1, 127], ... [-2, 127]], dtype=np.int8) >>> y.ptp(axis=1) masked_array(data=[ 126, 127, -128, -127], mask=False, fill_value=999999, dtype=int8) A work-around is to use the view() method to view the result as unsigned integers with the same bit width: >>> y.ptp(axis=1).view(np.uint8) masked_array(data=[126, 127, 128, 129], mask=False, fill_value=999999, dtype=uint8)
doc_26378
class pprint.PrettyPrinter(indent=1, width=80, depth=None, stream=None, *, compact=False, sort_dicts=True) Construct a PrettyPrinter instance. This constructor understands several keyword parameters. An output stream may be set using the stream keyword; the only method used on the stream object is the file protocol’s write() method. If not specified, the PrettyPrinter adopts sys.stdout. The amount of indentation added for each recursive level is specified by indent; the default is one. Other values can cause output to look a little odd, but can make nesting easier to spot. The number of levels which may be printed is controlled by depth; if the data structure being printed is too deep, the next contained level is replaced by .... By default, there is no constraint on the depth of the objects being formatted. The desired output width is constrained using the width parameter; the default is 80 characters. If a structure cannot be formatted within the constrained width, a best effort will be made. If compact is false (the default) each item of a long sequence will be formatted on a separate line. If compact is true, as many items as will fit within the width will be formatted on each output line. If sort_dicts is true (the default), dictionaries will be formatted with their keys sorted, otherwise they will display in insertion order. Changed in version 3.4: Added the compact parameter. Changed in version 3.8: Added the sort_dicts parameter. >>> import pprint >>> stuff = ['spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> stuff.insert(0, stuff[:]) >>> pp = pprint.PrettyPrinter(indent=4) >>> pp.pprint(stuff) [ ['spam', 'eggs', 'lumberjack', 'knights', 'ni'], 'spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> pp = pprint.PrettyPrinter(width=41, compact=True) >>> pp.pprint(stuff) [['spam', 'eggs', 'lumberjack', 'knights', 'ni'], 'spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> tup = ('spam', ('eggs', ('lumberjack', ('knights', ('ni', ('dead', ... ('parrot', ('fresh fruit',)))))))) >>> pp = pprint.PrettyPrinter(depth=6) >>> pp.pprint(tup) ('spam', ('eggs', ('lumberjack', ('knights', ('ni', ('dead', (...))))))) The pprint module also provides several shortcut functions: pprint.pformat(object, indent=1, width=80, depth=None, *, compact=False, sort_dicts=True) Return the formatted representation of object as a string. indent, width, depth, compact and sort_dicts will be passed to the PrettyPrinter constructor as formatting parameters. Changed in version 3.4: Added the compact parameter. Changed in version 3.8: Added the sort_dicts parameter. pprint.pp(object, *args, sort_dicts=False, **kwargs) Prints the formatted representation of object followed by a newline. If sort_dicts is false (the default), dictionaries will be displayed with their keys in insertion order, otherwise the dict keys will be sorted. args and kwargs will be passed to pprint() as formatting parameters. New in version 3.8. pprint.pprint(object, stream=None, indent=1, width=80, depth=None, *, compact=False, sort_dicts=True) Prints the formatted representation of object on stream, followed by a newline. If stream is None, sys.stdout is used. This may be used in the interactive interpreter instead of the print() function for inspecting values (you can even reassign print = pprint.pprint for use within a scope). indent, width, depth, compact and sort_dicts will be passed to the PrettyPrinter constructor as formatting parameters. Changed in version 3.4: Added the compact parameter. 
Changed in version 3.8: Added the sort_dicts parameter. >>> import pprint >>> stuff = ['spam', 'eggs', 'lumberjack', 'knights', 'ni'] >>> stuff.insert(0, stuff) >>> pprint.pprint(stuff) [<Recursion on list with id=...>, 'spam', 'eggs', 'lumberjack', 'knights', 'ni'] pprint.isreadable(object) Determine if the formatted representation of object is “readable”, or can be used to reconstruct the value using eval(). This always returns False for recursive objects. >>> pprint.isreadable(stuff) False pprint.isrecursive(object) Determine if object requires a recursive representation. One more support function is also defined: pprint.saferepr(object) Return a string representation of object, protected against recursive data structures. If the representation of object exposes a recursive entry, the recursive reference will be represented as <Recursion on typename with id=number>. The representation is not otherwise formatted. >>> pprint.saferepr(stuff) "[<Recursion on list with id=...>, 'spam', 'eggs', 'lumberjack', 'knights', 'ni']" PrettyPrinter Objects PrettyPrinter instances have the following methods: PrettyPrinter.pformat(object) Return the formatted representation of object. This takes into account the options passed to the PrettyPrinter constructor. PrettyPrinter.pprint(object) Print the formatted representation of object on the configured stream, followed by a newline. The following methods provide the implementations for the corresponding functions of the same names. Using these methods on an instance is slightly more efficient since new PrettyPrinter objects don’t need to be created. PrettyPrinter.isreadable(object) Determine if the formatted representation of the object is “readable,” or can be used to reconstruct the value using eval(). Note that this returns False for recursive objects. If the depth parameter of the PrettyPrinter is set and the object is deeper than allowed, this returns False. PrettyPrinter.isrecursive(object) Determine if the object requires a recursive representation. This method is provided as a hook to allow subclasses to modify the way objects are converted to strings. The default implementation uses the internals of the saferepr() implementation. PrettyPrinter.format(object, context, maxlevels, level) Returns three values: the formatted version of object as a string, a flag indicating whether the result is readable, and a flag indicating whether recursion was detected. The first argument is the object to be presented. The second is a dictionary which contains the id() of objects that are part of the current presentation context (direct and indirect containers for object that are affecting the presentation) as the keys; if an object needs to be presented which is already represented in context, the third return value should be True. Recursive calls to the format() method should add additional entries for containers to this dictionary. The third argument, maxlevels, gives the requested limit to recursion; this will be 0 if there is no requested limit. This argument should be passed unmodified to recursive calls. The fourth argument, level, gives the current level; recursive calls should be passed a value less than that of the current call. Example To demonstrate several uses of the pprint() function and its parameters, let’s fetch information about a project from PyPI: >>> import json >>> import pprint >>> from urllib.request import urlopen >>> with urlopen('https://pypi.org/pypi/sampleproject/json') as resp: ... 
project_info = json.load(resp)['info'] In its basic form, pprint() shows the whole object: >>> pprint.pprint(project_info) {'author': 'The Python Packaging Authority', 'author_email': 'pypa-dev@googlegroups.com', 'bugtrack_url': None, 'classifiers': ['Development Status :: 3 - Alpha', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.2', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Topic :: Software Development :: Build Tools'], 'description': 'A sample Python project\n' '=======================\n' '\n' 'This is the description file for the project.\n' '\n' 'The file should use UTF-8 encoding and be written using ' 'ReStructured Text. It\n' 'will be used to generate the project webpage on PyPI, and ' 'should be written for\n' 'that purpose.\n' '\n' 'Typical contents for this file would include an overview of ' 'the project, basic\n' 'usage examples, etc. Generally, including the project ' 'changelog in here is not\n' 'a good idea, although a simple "What\'s New" section for the ' 'most recent version\n' 'may be appropriate.', 'description_content_type': None, 'docs_url': None, 'download_url': 'UNKNOWN', 'downloads': {'last_day': -1, 'last_month': -1, 'last_week': -1}, 'home_page': 'https://github.com/pypa/sampleproject', 'keywords': 'sample setuptools development', 'license': 'MIT', 'maintainer': None, 'maintainer_email': None, 'name': 'sampleproject', 'package_url': 'https://pypi.org/project/sampleproject/', 'platform': 'UNKNOWN', 'project_url': 'https://pypi.org/project/sampleproject/', 'project_urls': {'Download': 'UNKNOWN', 'Homepage': 'https://github.com/pypa/sampleproject'}, 'release_url': 'https://pypi.org/project/sampleproject/1.2.0/', 'requires_dist': None, 'requires_python': None, 'summary': 'A sample Python project', 'version': '1.2.0'} The result can be limited to a certain depth (ellipsis is used for deeper contents): >>> pprint.pprint(project_info, depth=1) {'author': 'The Python Packaging Authority', 'author_email': 'pypa-dev@googlegroups.com', 'bugtrack_url': None, 'classifiers': [...], 'description': 'A sample Python project\n' '=======================\n' '\n' 'This is the description file for the project.\n' '\n' 'The file should use UTF-8 encoding and be written using ' 'ReStructured Text. It\n' 'will be used to generate the project webpage on PyPI, and ' 'should be written for\n' 'that purpose.\n' '\n' 'Typical contents for this file would include an overview of ' 'the project, basic\n' 'usage examples, etc. 
Generally, including the project ' 'changelog in here is not\n' 'a good idea, although a simple "What\'s New" section for the ' 'most recent version\n' 'may be appropriate.', 'description_content_type': None, 'docs_url': None, 'download_url': 'UNKNOWN', 'downloads': {...}, 'home_page': 'https://github.com/pypa/sampleproject', 'keywords': 'sample setuptools development', 'license': 'MIT', 'maintainer': None, 'maintainer_email': None, 'name': 'sampleproject', 'package_url': 'https://pypi.org/project/sampleproject/', 'platform': 'UNKNOWN', 'project_url': 'https://pypi.org/project/sampleproject/', 'project_urls': {...}, 'release_url': 'https://pypi.org/project/sampleproject/1.2.0/', 'requires_dist': None, 'requires_python': None, 'summary': 'A sample Python project', 'version': '1.2.0'} Additionally, maximum character width can be suggested. If a long object cannot be split, the specified width will be exceeded: >>> pprint.pprint(project_info, depth=1, width=60) {'author': 'The Python Packaging Authority', 'author_email': 'pypa-dev@googlegroups.com', 'bugtrack_url': None, 'classifiers': [...], 'description': 'A sample Python project\n' '=======================\n' '\n' 'This is the description file for the ' 'project.\n' '\n' 'The file should use UTF-8 encoding and be ' 'written using ReStructured Text. It\n' 'will be used to generate the project ' 'webpage on PyPI, and should be written ' 'for\n' 'that purpose.\n' '\n' 'Typical contents for this file would ' 'include an overview of the project, ' 'basic\n' 'usage examples, etc. Generally, including ' 'the project changelog in here is not\n' 'a good idea, although a simple "What\'s ' 'New" section for the most recent version\n' 'may be appropriate.', 'description_content_type': None, 'docs_url': None, 'download_url': 'UNKNOWN', 'downloads': {...}, 'home_page': 'https://github.com/pypa/sampleproject', 'keywords': 'sample setuptools development', 'license': 'MIT', 'maintainer': None, 'maintainer_email': None, 'name': 'sampleproject', 'package_url': 'https://pypi.org/project/sampleproject/', 'platform': 'UNKNOWN', 'project_url': 'https://pypi.org/project/sampleproject/', 'project_urls': {...}, 'release_url': 'https://pypi.org/project/sampleproject/1.2.0/', 'requires_dist': None, 'requires_python': None, 'summary': 'A sample Python project', 'version': '1.2.0'}
doc_26379
Force rasterized (bitmap) drawing for vector graphics output. Rasterized drawing is not supported by all artists. If you try to enable this on an artist that does not support it, the command has no effect and a warning will be issued. This setting is ignored for pixel-based output. See also Rasterization for vector graphics. Parameters rasterizedbool
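A minimal sketch: rasterizing one dense line keeps a vector file small while the rest of the figure stays vector:
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
line, = ax.plot(np.random.rand(10_000))
line.set_rasterized(True)       # Line2D supports rasterization
fig.savefig('dense_plot.svg')   # vector output; the line is embedded as a bitmap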
doc_26380
Creates a processing instruction with the given target name and text. If insert_pis is true, this will also add it to the tree. New in version 3.8.
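A minimal sketch (the target and text are illustrative):
>>> from xml.etree.ElementTree import TreeBuilder, tostring
>>> tb = TreeBuilder(insert_pis=True)
>>> elem = tb.start('root', {})
>>> pi = tb.pi('my-target', 'my data')  # added under <root> because insert_pis=True
>>> elem = tb.end('root')
>>> tostring(tb.close())
b'<root><?my-target my data?></root>'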
doc_26381
the session interface to use. By default an instance of SecureCookieSessionInterface is used here. Changelog New in version 0.8.
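A sketch of swapping in a custom interface (the logging subclass is hypothetical):
from flask import Flask
from flask.sessions import SecureCookieSessionInterface

class LoggingSessionInterface(SecureCookieSessionInterface):
    # hypothetical subclass: log whenever a session is opened
    def open_session(self, app, request):
        app.logger.debug('opening session for %s', request.path)
        return super().open_session(app, request)

app = Flask(__name__)
app.session_interface = LoggingSessionInterface()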
doc_26382
Bartlett window function. w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\ 2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \end{cases}, where N is the full window size. The input window_length is a positive integer controlling the returned window size. The periodic flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like torch.stft(). Therefore, if periodic is true, the N in the above formula is in fact window_length + 1. Also, we always have torch.bartlett_window(L, periodic=True) equal to torch.bartlett_window(L + 1, periodic=False)[:-1]. Note If window_length = 1, the returned window contains a single value 1. Parameters window_length (int) – the size of returned window periodic (bool, optional) – If True, returns a window to be used as periodic function. If False, returns a symmetric window. Keyword Arguments dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). Only floating point types are supported. layout (torch.layout, optional) – the desired layout of returned window tensor. Only torch.strided (dense layout) is supported. device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. Returns A 1-D tensor of size (window_length,) containing the window Return type Tensor
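For example (values follow from the definition above):
>>> torch.bartlett_window(5, periodic=False)
tensor([0.0000, 0.5000, 1.0000, 0.5000, 0.0000])
>>> torch.bartlett_window(4, periodic=True)  # == bartlett_window(5, periodic=False)[:-1]
tensor([0.0000, 0.5000, 1.0000, 0.5000])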
doc_26383
Initialize self. See help(type(self)) for accurate signature.
doc_26384
Try to match reference as well as possible to a portion of fragment (which should be the longer fragment). This is (conceptually) done by taking slices out of fragment, using findfactor() to compute the best match, and minimizing the result. The fragments should both contain 2-byte samples. Return a tuple (offset, factor) where offset is the (integer) offset into fragment where the optimal match started and factor is the (floating-point) factor as per findfactor().
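A small sketch with synthetic 16-bit samples (values are illustrative; an exact embedded copy should be found at sample offset 2 with a factor of 1.0):
>>> import array, audioop
>>> frag = array.array('h', [0, 0, 10, 20, 30, 40, 0, 0]).tobytes()
>>> ref = array.array('h', [10, 20, 30, 40]).tobytes()
>>> audioop.findfit(frag, ref)
(2, 1.0)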
doc_26385
Checks if a given byte content range is valid for the given length. Changelog New in version 0.7. Parameters start (Optional[int]) – stop (Optional[int]) – length (Optional[int]) – Return type bool
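For example:
>>> from werkzeug.http import is_byte_range_valid
>>> is_byte_range_valid(0, 500, 1000)    # bytes 0-499 of a 1000-byte resource
True
>>> is_byte_range_valid(500, 400, 1000)  # start is past stop
False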
doc_26386
Return whether the artist is pickable. See also set_picker, get_picker, pick
doc_26387
Return the corresponding inverse transformation. It holds x == self.inverted().transform(self.transform(x)). The return value of this method should be treated as temporary. An update to self does not cause a corresponding update to its inverted copy.
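A short sketch with an affine scaling (the round trip recovers the input):
>>> import matplotlib.transforms as mtransforms
>>> t = mtransforms.Affine2D().scale(2.0)
>>> t.inverted().transform(t.transform([[1.0, 3.0]]))
array([[1., 3.]])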
doc_26388
Like Artist.get_window_extent, but includes any clipping. Parameters rendererRendererBase subclass renderer that will be used to draw the figures (i.e. fig.canvas.get_renderer()) Returns Bbox The enclosing bounding box (in figure pixel coordinates).
doc_26389
Compute the outer product of two vectors. Given two vectors, a = [a0, a1, ..., aM] and b = [b0, b1, ..., bN], the outer product [1] is:
[[a0*b0  a0*b1 ... a0*bN ]
 [a1*b0    .
 [ ...          .
 [aM*b0            aM*bN ]]
Parameters a(M,) array_like First input vector. Input is flattened if not already 1-dimensional. b(N,) array_like Second input vector. Input is flattened if not already 1-dimensional. out(M, N) ndarray, optional A location where the result is stored. New in version 1.9.0. Returns out(M, N) ndarray out[i, j] = a[i] * b[j] See also inner einsum einsum('i,j->ij', a.ravel(), b.ravel()) is the equivalent. ufunc.outer A generalization to dimensions other than 1D and other operations. np.multiply.outer(a.ravel(), b.ravel()) is the equivalent. tensordot np.tensordot(a.ravel(), b.ravel(), axes=((), ())) is the equivalent. References 1 : G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. Examples Make a (very coarse) grid for computing a Mandelbrot set: >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) An example using a “vector” of letters: >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object)
doc_26390
See Migration guide for more details. tf.compat.v1.raw_ops.AnonymousIterator tf.raw_ops.AnonymousIterator( output_types, output_shapes, name=None ) Args output_types A list of tf.DTypes that has length >= 1. output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1. name A name for the operation (optional). Returns A Tensor of type resource.
doc_26391
Returns a new tensor with the hyperbolic sine of the elements of input. \text{out}_{i} = \sinh(\text{input}_{i}) Parameters input (Tensor) – the input tensor. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4) >>> a tensor([ 0.5380, -0.8632, -0.1265, 0.9399]) >>> torch.sinh(a) tensor([ 0.5644, -0.9744, -0.1268, 1.0845]) Note When input is on the CPU, the implementation of torch.sinh may use the Sleef library, which rounds very large results to infinity or negative infinity. See here for details.
doc_26392
The widget class to be used for GeometryField. Defaults to OSMWidget.
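A sketch of overriding the default (the form and field names are illustrative; the attrs shown follow the Django GIS form examples):
from django.contrib.gis import forms

class ShopForm(forms.Form):
    location = forms.PointField(
        widget=forms.OSMWidget(attrs={'map_width': 800, 'map_height': 500})
    )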
doc_26393
class sklearn.svm.LinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000) [source] Linear Support Vector Classification. Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. This class supports both dense and sparse input and the multiclass support is handled according to a one-vs-the-rest scheme. Read more in the User Guide. Parameters penalty{‘l1’, ‘l2’}, default=’l2’ Specifies the norm used in the penalization. The ‘l2’ penalty is the standard used in SVC. The ‘l1’ leads to coef_ vectors that are sparse. loss{‘hinge’, ‘squared_hinge’}, default=’squared_hinge’ Specifies the loss function. ‘hinge’ is the standard SVM loss (used e.g. by the SVC class) while ‘squared_hinge’ is the square of the hinge loss. The combination of penalty='l1' and loss='hinge' is not supported. dualbool, default=True Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features. tolfloat, default=1e-4 Tolerance for stopping criteria. Cfloat, default=1.0 Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. multi_class{‘ovr’, ‘crammer_singer’}, default=’ovr’ Determines the multi-class strategy if y contains more than two classes. "ovr" trains n_classes one-vs-rest classifiers, while "crammer_singer" optimizes a joint objective over all classes. While crammer_singer is interesting from a theoretical perspective as it is consistent, it is seldom used in practice as it rarely leads to better accuracy and is more expensive to compute. If "crammer_singer" is chosen, the options loss, penalty and dual will be ignored. fit_interceptbool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered). intercept_scalingfloat, default=1 When self.fit_intercept is True, instance vector x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note: the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased. class_weightdict or ‘balanced’, default=None Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). verboseint, default=0 Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context. random_stateint, RandomState instance or None, default=None Controls the pseudo random number generation for shuffling the data for the dual coordinate descent (if dual=True). When dual=False the underlying implementation of LinearSVC is not random and random_state has no effect on the results. 
Pass an int for reproducible output across multiple function calls. See Glossary. max_iterint, default=1000 The maximum number of iterations to be run. Attributes coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features) Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear. intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,) Constants in decision function. classes_ndarray of shape (n_classes,) The unique class labels. n_iter_int Maximum number of iterations run across all classes. See also SVC Implementation of Support Vector Machine classifier using libsvm: the kernel can be non-linear but its SMO algorithm does not scale to large numbers of samples as LinearSVC does. Furthermore SVC multi-class mode is implemented using one vs one scheme while LinearSVC uses one vs the rest. It is possible to implement one vs the rest with SVC by using the OneVsRestClassifier wrapper. Finally SVC can fit dense data without memory copy if the input is C-contiguous. Sparse data will still incur memory copy though. sklearn.linear_model.SGDClassifier SGDClassifier can optimize the same cost function as LinearSVC by adjusting the penalty and loss parameters. In addition it requires less memory, allows incremental (online) learning, and implements various loss functions and regularization regimes. Notes The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. The underlying implementation, liblinear, uses a sparse internal representation for the data that will incur a memory copy. Predict output may not match that of standalone liblinear in certain cases. See differences from liblinear in the narrative documentation. References LIBLINEAR: A Library for Large Linear Classification Examples >>> from sklearn.svm import LinearSVC >>> from sklearn.pipeline import make_pipeline >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.datasets import make_classification >>> X, y = make_classification(n_features=4, random_state=0) >>> clf = make_pipeline(StandardScaler(), ... LinearSVC(random_state=0, tol=1e-5)) >>> clf.fit(X, y) Pipeline(steps=[('standardscaler', StandardScaler()), ('linearsvc', LinearSVC(random_state=0, tol=1e-05))]) >>> print(clf.named_steps['linearsvc'].coef_) [[0.141... 0.526... 0.679... 0.493...]] >>> print(clf.named_steps['linearsvc'].intercept_) [0.1693...] >>> print(clf.predict([[0, 0, 0, 0]])) [1] Methods decision_function(X) Predict confidence scores for samples. densify() Convert coefficient matrix to dense array format. fit(X, y[, sample_weight]) Fit the model according to the given training data. get_params([deep]) Get parameters for this estimator. predict(X) Predict class labels for samples in X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. sparsify() Convert coefficient matrix to sparse format. decision_function(X) [source] Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. 
Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. densify() [source] Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self Fitted estimator. fit(X, y, sample_weight=None) [source] Fit the model according to the given training data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target vector relative to X. sample_weightarray-like of shape (n_samples,), default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns selfobject An instance of the estimator. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict class labels for samples in X. Parameters Xarray-like or sparse matrix, shape (n_samples, n_features) Samples. Returns Carray, shape [n_samples] Predicted class label per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. sparsify() [source] Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns self Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
doc_26394
Return a bitmask of the audio output formats supported by the soundcard. Some of the formats supported by OSS are:
AFMT_MU_LAW: a logarithmic encoding (used by Sun .au files and /dev/audio)
AFMT_A_LAW: a logarithmic encoding
AFMT_IMA_ADPCM: a 4:1 compressed format defined by the Interactive Multimedia Association
AFMT_U8: Unsigned, 8-bit audio
AFMT_S16_LE: Signed, 16-bit audio, little-endian byte order (as used by Intel processors)
AFMT_S16_BE: Signed, 16-bit audio, big-endian byte order (as used by 68k, PowerPC, Sparc)
AFMT_S8: Signed, 8-bit audio
AFMT_U16_LE: Unsigned, 16-bit little-endian audio
AFMT_U16_BE: Unsigned, 16-bit big-endian audio
Consult the OSS documentation for a full list of audio formats, and note that most devices support only a subset of these formats. Some older devices only support AFMT_U8; the most common format used today is AFMT_S16_LE.
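A sketch of querying the mask before picking a format (Linux/OSS only):
import ossaudiodev

dsp = ossaudiodev.open('w')
mask = dsp.getfmts()
if mask & ossaudiodev.AFMT_S16_LE:
    dsp.setfmt(ossaudiodev.AFMT_S16_LE)  # the most widely supported choice
dsp.close()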
doc_26395
sklearn.linear_model.enet_path(X, y, *, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, check_input=True, **params) [source] Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^Fro_2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of the norms of each row. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. l1_ratiofloat, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso. epsfloat, default=1e-3 Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3. n_alphasint, default=100 Number of alphas along the regularization path. alphasndarray, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’ Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' the choice is made automatically. The Gram matrix can also be passed as argument. Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_Xbool, default=True If True, X will be copied; else, it may be overwritten. coef_initndarray of shape (n_features, ), default=None The initial values of the coefficients. verbosebool or int, default=False Amount of verbosity. return_n_iterbool, default=False Whether to return the number of iterations or not. positivebool, default=False If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1). check_inputbool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **paramskwargs Keyword arguments passed to the coordinate descent solver. Returns alphasndarray of shape (n_alphas,) The alphas along the path where models are computed. coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas) Coefficients along the path. dual_gapsndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iterslist of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Returned only when return_n_iter is set to True.) See also MultiTaskElasticNet MultiTaskElasticNetCV ElasticNet ElasticNetCV Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
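A short usage sketch:
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import enet_path
>>> X, y = make_regression(n_samples=50, n_features=10, random_state=0)
>>> alphas, coefs, dual_gaps = enet_path(X, y, l1_ratio=0.5, n_alphas=5)
>>> coefs.shape  # one coefficient vector per alpha
(10, 5)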
doc_26396
os.O_NOINHERIT os.O_SHORT_LIVED os.O_TEMPORARY os.O_RANDOM os.O_SEQUENTIAL os.O_TEXT The above constants are only available on Windows.
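A minimal sketch using one of these flags (Windows only; the filename is illustrative):
import os

# O_TEMPORARY: the file is deleted as soon as its last descriptor is closed
fd = os.open('scratch.tmp', os.O_RDWR | os.O_CREAT | os.O_TEMPORARY)
os.write(fd, b'intermediate results')
os.close(fd)  # scratch.tmp no longer exists after this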
doc_26397
Set the sizes of each member of the collection. Parameters sizesndarray or None The size to set for each element of the collection. The value is the 'area' of the element. dpifloat, default: 72 The dpi of the canvas.
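A short sketch (scatter returns a PathCollection, which supports set_sizes):
import matplotlib.pyplot as plt

pc = plt.scatter([0, 1, 2], [0, 1, 0])
pc.set_sizes([50, 100, 200])  # per-point areas, in points**2
plt.show()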
doc_26398
Interpret the input as a matrix. Unlike matrix, asmatrix does not make a copy if the input is already a matrix or an ndarray. Equivalent to matrix(data, copy=False). Parameters dataarray_like Input data. dtypedata-type Data-type of the output matrix. Returns matmatrix data interpreted as a matrix. Examples >>> x = np.array([[1, 2], [3, 4]]) >>> m = np.asmatrix(x) >>> x[0,0] = 5 >>> m matrix([[5, 2], [3, 4]])
doc_26399
Compute the inverse FFT of a signal that has Hermitian symmetry. Parameters aarray_like Input array. nint, optional Length of the inverse FFT, the number of points along transformation axis in the input to use. If n is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If n is not given, the length of the input along the axis specified by axis is used. axisint, optional Axis over which to compute the inverse FFT. If not given, the last axis is used. norm{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see numpy.fft). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns outcomplex ndarray The truncated or zero-padded input, transformed along the axis indicated by axis, or the last one if axis is not specified. The length of the transformed axis is n//2 + 1. See also hfft, irfft Notes hfft/ihfft are a pair analogous to rfft/irfft, but for the opposite case: here the signal has Hermitian symmetry in the time domain and is real in the frequency domain. So here it’s hfft for which you must supply the length of the result if it is to be odd: even: ihfft(hfft(a, 2*len(a) - 2)) == a, within roundoff error, odd: ihfft(hfft(a, 2*len(a) - 1)) == a, within roundoff error. Examples >>> spectrum = np.array([ 15, -4, 0, -1, 0, -4]) >>> np.fft.ifft(spectrum) array([1.+0.j, 2.+0.j, 3.+0.j, 4.+0.j, 3.+0.j, 2.+0.j]) # may vary >>> np.fft.ihfft(spectrum) array([ 1.-0.j, 2.-0.j, 3.-0.j, 4.-0.j]) # may vary