doc_2900
pygame module for drawing shapes EXPERIMENTAL!: This API may change or disappear in later pygame releases. If you use this, your code may break with the next pygame release. The pygame package does not import gfxdraw automatically when loaded, so it must be imported explicitly to be used. import pygame import pygame.gfxdraw For all functions the arguments are strictly positional and integers are accepted for coordinates and radii. The color argument can be one of the following formats: a pygame.Color object an (RGB) triplet (tuple/list) an (RGBA) quadruplet (tuple/list) The functions rectangle() and box() will accept any (x, y, w, h) sequence for their rect argument, though pygame.Rect instances are preferred. To draw a filled antialiased shape, first use the antialiased (aa*) version of the function, and then use the filled (filled_*) version. For example: col = (255, 0, 0) surf.fill((255, 255, 255)) pygame.gfxdraw.aacircle(surf, x, y, 30, col) pygame.gfxdraw.filled_circle(surf, x, y, 30, col) Note For threading, each of the functions releases the GIL during the C part of the call. Note See the pygame.draw module for alternative draw methods. The pygame.gfxdraw module differs from the pygame.draw module in the API it uses and the different draw functions available. pygame.gfxdraw wraps the primitives from the library called SDL_gfx, rather than using modified versions. New in pygame 1.9.0. pygame.gfxdraw.pixel() draw a pixel pixel(surface, x, y, color) -> None Draws a single pixel, at position (x, y), on the given surface. Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the pixel y (int) -- y coordinate of the pixel color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.hline() draw a horizontal line hline(surface, x1, x2, y, color) -> None Draws a straight horizontal line ((x1, y) to (x2, y)) on the given surface. 
There are no endcaps. Parameters: surface (Surface) -- surface to draw on x1 (int) -- x coordinate of one end of the line x2 (int) -- x coordinate of the other end of the line y (int) -- y coordinate of the line color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.vline() draw a vertical line vline(surface, x, y1, y2, color) -> None Draws a straight vertical line ((x, y1) to (x, y2)) on the given surface. There are no endcaps. Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the line y1 (int) -- y coordinate of one end of the line y2 (int) -- y coordinate of the other end of the line color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.line() draw a line line(surface, x1, y1, x2, y2, color) -> None Draws a straight line ((x1, y1) to (x2, y2)) on the given surface. There are no endcaps. Parameters: surface (Surface) -- surface to draw on x1 (int) -- x coordinate of one end of the line y1 (int) -- y coordinate of one end of the line x2 (int) -- x coordinate of the other end of the line y2 (int) -- y coordinate of the other end of the line color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.rectangle() draw a rectangle rectangle(surface, rect, color) -> None Draws an unfilled rectangle on the given surface. For a filled rectangle use box(). 
Parameters: surface (Surface) -- surface to draw on rect (Rect) -- rectangle to draw, position and dimensions color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Note The rect.bottom and rect.right attributes of a pygame.Rect always lie one pixel outside of its actual border. Therefore, these values will not be included as part of the drawing. pygame.gfxdraw.box() draw a filled rectangle box(surface, rect, color) -> None Draws a filled rectangle on the given surface. For an unfilled rectangle use rectangle(). Parameters: surface (Surface) -- surface to draw on rect (Rect) -- rectangle to draw, position and dimensions color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Note The rect.bottom and rect.right attributes of a pygame.Rect always lie one pixel outside of its actual border. Therefore, these values will not be included as part of the drawing. Note The pygame.Surface.fill() method works just as well for drawing filled rectangles. In fact pygame.Surface.fill() can be hardware accelerated on some platforms with both software and hardware display modes. pygame.gfxdraw.circle() draw a circle circle(surface, x, y, r, color) -> None Draws an unfilled circle on the given surface. For a filled circle use filled_circle(). Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the circle y (int) -- y coordinate of the center of the circle r (int) -- radius of the circle color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.aacircle() draw an antialiased circle aacircle(surface, x, y, r, color) -> None Draws an unfilled antialiased circle on the given surface. 
Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the circle y (int) -- y coordinate of the center of the circle r (int) -- radius of the circle color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.filled_circle() draw a filled circle filled_circle(surface, x, y, r, color) -> None Draws a filled circle on the given surface. For an unfilled circle use circle(). Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the circle y (int) -- y coordinate of the center of the circle r (int) -- radius of the circle color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.ellipse() draw an ellipse ellipse(surface, x, y, rx, ry, color) -> None Draws an unfilled ellipse on the given surface. For a filled ellipse use filled_ellipse(). Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the ellipse y (int) -- y coordinate of the center of the ellipse rx (int) -- horizontal radius of the ellipse ry (int) -- vertical radius of the ellipse color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.aaellipse() draw an antialiased ellipse aaellipse(surface, x, y, rx, ry, color) -> None Draws an unfilled antialiased ellipse on the given surface. 
Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the ellipse y (int) -- y coordinate of the center of the ellipse rx (int) -- horizontal radius of the ellipse ry (int) -- vertical radius of the ellipse color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.filled_ellipse() draw a filled ellipse filled_ellipse(surface, x, y, rx, ry, color) -> None Draws a filled ellipse on the given surface. For an unfilled ellipse use ellipse(). Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the ellipse y (int) -- y coordinate of the center of the ellipse rx (int) -- horizontal radius of the ellipse ry (int) -- vertical radius of the ellipse color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.arc() draw an arc arc(surface, x, y, r, start_angle, stop_angle, color) -> None Draws an arc on the given surface. For an arc with its endpoints connected to its center use pie(). The two angle arguments are given in degrees and indicate the start and stop positions of the arc. The arc is drawn in a clockwise direction from the start_angle to the stop_angle. If start_angle == stop_angle, nothing will be drawn Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the arc y (int) -- y coordinate of the center of the arc r (int) -- radius of the arc start_angle (int) -- start angle in degrees stop_angle (int) -- stop angle in degrees color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Note This function uses degrees while the pygame.draw.arc() function uses radians. 
pygame.gfxdraw.pie() draw a pie pie(surface, x, y, r, start_angle, stop_angle, color) -> None Draws an unfilled pie on the given surface. A pie is an arc() with its endpoints connected to its center. The two angle arguments are given in degrees and indicate the start and stop positions of the pie. The pie is drawn in a clockwise direction from the start_angle to the stop_angle. If start_angle == stop_angle, a straight line will be drawn from the center position at the given angle, to a length of the radius. Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the pie y (int) -- y coordinate of the center of the pie r (int) -- radius of the pie start_angle (int) -- start angle in degrees stop_angle (int) -- stop angle in degrees color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.trigon() draw a trigon/triangle trigon(surface, x1, y1, x2, y2, x3, y3, color) -> None Draws an unfilled trigon (triangle) on the given surface. For a filled trigon use filled_trigon(). A trigon can also be drawn using polygon() e.g. polygon(surface, ((x1, y1), (x2, y2), (x3, y3)), color) Parameters: surface (Surface) -- surface to draw on x1 (int) -- x coordinate of the first corner of the trigon y1 (int) -- y coordinate of the first corner of the trigon x2 (int) -- x coordinate of the second corner of the trigon y2 (int) -- y coordinate of the second corner of the trigon x3 (int) -- x coordinate of the third corner of the trigon y3 (int) -- y coordinate of the third corner of the trigon color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.aatrigon() draw an antialiased trigon/triangle aatrigon(surface, x1, y1, x2, y2, x3, y3, color) -> None Draws an unfilled antialiased trigon (triangle) on the given surface. 
An aatrigon can also be drawn using aapolygon() e.g. aapolygon(surface, ((x1, y1), (x2, y2), (x3, y3)), color) Parameters: surface (Surface) -- surface to draw on x1 (int) -- x coordinate of the first corner of the trigon y1 (int) -- y coordinate of the first corner of the trigon x2 (int) -- x coordinate of the second corner of the trigon y2 (int) -- y coordinate of the second corner of the trigon x3 (int) -- x coordinate of the third corner of the trigon y3 (int) -- y coordinate of the third corner of the trigon color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.filled_trigon() draw a filled trigon/triangle filled_trigon(surface, x1, y1, x2, y2, x3, y3, color) -> None Draws a filled trigon (triangle) on the given surface. For an unfilled trigon use trigon(). A filled_trigon can also be drawn using filled_polygon() e.g. filled_polygon(surface, ((x1, y1), (x2, y2), (x3, y3)), color) Parameters: surface (Surface) -- surface to draw on x1 (int) -- x coordinate of the first corner of the trigon y1 (int) -- y coordinate of the first corner of the trigon x2 (int) -- x coordinate of the second corner of the trigon y2 (int) -- y coordinate of the second corner of the trigon x3 (int) -- x coordinate of the third corner of the trigon y3 (int) -- y coordinate of the third corner of the trigon color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType pygame.gfxdraw.polygon() draw a polygon polygon(surface, points, color) -> None Draws an unfilled polygon on the given surface. For a filled polygon use filled_polygon(). The adjacent coordinates in the points argument, as well as the first and last points, will be connected by line segments. e.g. 
For the points [(x1, y1), (x2, y2), (x3, y3)] a line segment will be drawn from (x1, y1) to (x2, y2), from (x2, y2) to (x3, y3), and from (x3, y3) to (x1, y1). Parameters: surface (Surface) -- surface to draw on points (tuple(coordinate) or list(coordinate)) -- a sequence of 3 or more (x, y) coordinates, where each coordinate in the sequence must be a tuple/list/pygame.math.Vector2 of 2 ints/floats (float values will be truncated) color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Raises: ValueError -- if len(points) < 3 (must have at least 3 points) IndexError -- if len(coordinate) < 2 (each coordinate must have at least 2 items) pygame.gfxdraw.aapolygon() draw an antialiased polygon aapolygon(surface, points, color) -> None Draws an unfilled antialiased polygon on the given surface. The adjacent coordinates in the points argument, as well as the first and last points, will be connected by line segments. e.g. For the points [(x1, y1), (x2, y2), (x3, y3)] a line segment will be drawn from (x1, y1) to (x2, y2), from (x2, y2) to (x3, y3), and from (x3, y3) to (x1, y1). Parameters: surface (Surface) -- surface to draw on points (tuple(coordinate) or list(coordinate)) -- a sequence of 3 or more (x, y) coordinates, where each coordinate in the sequence must be a tuple/list/pygame.math.Vector2 of 2 ints/floats (float values will be truncated) color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Raises: ValueError -- if len(points) < 3 (must have at least 3 points) IndexError -- if len(coordinate) < 2 (each coordinate must have at least 2 items) pygame.gfxdraw.filled_polygon() draw a filled polygon filled_polygon(surface, points, color) -> None Draws a filled polygon on the given surface. For an unfilled polygon use polygon(). 
The adjacent coordinates in the points argument, as well as the first and last points, will be connected by line segments. e.g. For the points [(x1, y1), (x2, y2), (x3, y3)] a line segment will be drawn from (x1, y1) to (x2, y2), from (x2, y2) to (x3, y3), and from (x3, y3) to (x1, y1). Parameters: surface (Surface) -- surface to draw on points (tuple(coordinate) or list(coordinate)) -- a sequence of 3 or more (x, y) coordinates, where each coordinate in the sequence must be a tuple/list/pygame.math.Vector2 of 2 ints/floats (float values will be truncated) color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Raises: ValueError -- if len(points) < 3 (must have at least 3 points) IndexError -- if len(coordinate) < 2 (each coordinate must have at least 2 items) pygame.gfxdraw.textured_polygon() draw a textured polygon textured_polygon(surface, points, texture, tx, ty) -> None Draws a textured polygon on the given surface. For better performance, the surface and the texture should have the same format. A per-pixel alpha texture blit to a per-pixel alpha surface will differ from a pygame.Surface.blit() blit. Also, a per-pixel alpha texture cannot be used with an 8-bit per pixel destination. The adjacent coordinates in the points argument, as well as the first and last points, will be connected by line segments. e.g. For the points [(x1, y1), (x2, y2), (x3, y3)] a line segment will be drawn from (x1, y1) to (x2, y2), from (x2, y2) to (x3, y3), and from (x3, y3) to (x1, y1). 
Parameters: surface (Surface) -- surface to draw on points (tuple(coordinate) or list(coordinate)) -- a sequence of 3 or more (x, y) coordinates, where each coordinate in the sequence must be a tuple/list/pygame.math.Vector2 of 2 ints/floats (float values will be truncated) texture (Surface) -- texture to draw on the polygon tx (int) -- x offset of the texture ty (int) -- y offset of the texture Returns: None Return type: NoneType Raises: ValueError -- if len(points) < 3 (must have at least 3 points) IndexError -- if len(coordinate) < 2 (each coordinate must have at least 2 items) pygame.gfxdraw.bezier() draw a Bezier curve bezier(surface, points, steps, color) -> None Draws a Bézier curve on the given surface. Parameters: surface (Surface) -- surface to draw on points (tuple(coordinate) or list(coordinate)) -- a sequence of 3 or more (x, y) coordinates used to form a curve, where each coordinate in the sequence must be a tuple/list/pygame.math.Vector2 of 2 ints/floats (float values will be truncated) steps (int) -- number of steps for the interpolation, the minimum is 2 color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType Raises: ValueError -- if steps < 2 ValueError -- if len(points) < 3 (must have at least 3 points) IndexError -- if len(coordinate) < 2 (each coordinate must have at least 2 items)
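A minimal runnable sketch of the aa*/filled_* layering described above, assuming pygame is installed. It draws to an offscreen Surface, so no display is needed; the coordinates and radius are arbitrary illustration values.

```python
# Sketch (assumes pygame is installed): draw a filled antialiased circle by
# layering the aa* outline over the filled_* interior, as the docs suggest.
import pygame
import pygame.gfxdraw  # gfxdraw is not imported automatically

surf = pygame.Surface((100, 100))  # offscreen surface; no display needed
surf.fill((255, 255, 255))         # white background
col = (255, 0, 0)                  # red, as an (RGB) triplet

pygame.gfxdraw.aacircle(surf, 50, 50, 30, col)       # antialiased outline
pygame.gfxdraw.filled_circle(surf, 50, 50, 30, col)  # filled interior

print(surf.get_at((50, 50)))  # the center pixel is now red
```

The same pattern applies to the other shape pairs (aaellipse/filled_ellipse, aatrigon/filled_trigon, aapolygon/filled_polygon).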
doc_2901
The minutes of the datetime. Examples >>> datetime_series = pd.Series( ... pd.date_range("2000-01-01", periods=3, freq="T") ... ) >>> datetime_series 0 2000-01-01 00:00:00 1 2000-01-01 00:01:00 2 2000-01-01 00:02:00 dtype: datetime64[ns] >>> datetime_series.dt.minute 0 0 1 1 2 2 dtype: int64
doc_2902
class sklearn.decomposition.SparsePCA(n_components=None, *, alpha=1, ridge_alpha=0.01, max_iter=1000, tol=1e-08, method='lars', n_jobs=None, U_init=None, V_init=None, verbose=False, random_state=None) [source] Sparse Principal Components Analysis (SparsePCA). Finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Read more in the User Guide. Parameters n_componentsint, default=None Number of sparse atoms to extract. alphafloat, default=1 Sparsity controlling parameter. Higher values lead to sparser components. ridge_alphafloat, default=0.01 Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method. max_iterint, default=1000 Maximum number of iterations to perform. tolfloat, default=1e-8 Tolerance for the stopping condition. method{‘lars’, ‘cd’}, default=’lars’ lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path) cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse. n_jobsint, default=None Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. U_initndarray of shape (n_samples, n_components), default=None Initial values for the loadings for warm restart scenarios. V_initndarray of shape (n_components, n_features), default=None Initial values for the components for warm restart scenarios. verboseint or bool, default=False Controls the verbosity; the higher, the more messages. Defaults to 0. random_stateint, RandomState instance or None, default=None Used during dictionary learning. Pass an int for reproducible results across multiple function calls. See Glossary. 
Attributes components_ndarray of shape (n_components, n_features) Sparse components extracted from the data. error_ndarray Vector of errors at each iteration. n_components_int Estimated number of components. New in version 0.23. n_iter_int Number of iterations run. mean_ndarray of shape (n_features,) Per-feature empirical mean, estimated from the training set. Equal to X.mean(axis=0). See also PCA MiniBatchSparsePCA DictionaryLearning Examples >>> import numpy as np >>> from sklearn.datasets import make_friedman1 >>> from sklearn.decomposition import SparsePCA >>> X, _ = make_friedman1(n_samples=200, n_features=30, random_state=0) >>> transformer = SparsePCA(n_components=5, random_state=0) >>> transformer.fit(X) SparsePCA(...) >>> X_transformed = transformer.transform(X) >>> X_transformed.shape (200, 5) >>> # most values in the components_ are zero (sparsity) >>> np.mean(transformer.components_ == 0) 0.9666... Methods fit(X[, y]) Fit the model from data in X. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. transform(X) Least Squares projection of the data onto the sparse components. fit(X, y=None) [source] Fit the model from data in X. Parameters Xarray-like of shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. yIgnored Returns selfobject Returns the instance itself. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. 
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Least Squares projection of the data onto the sparse components. To avoid instability issues in case the system is under-determined, regularization can be applied (Ridge regression) via the ridge_alpha parameter. Note that Sparse PCA components orthogonality is not enforced as in PCA hence one cannot use a simple linear projection. Parameters Xndarray of shape (n_samples, n_features) Test data to be transformed, must have the same number of features as the data used to train the model. Returns X_newndarray of shape (n_samples, n_components) Transformed data.
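A short sketch of the get_params/set_params round trip described above, assuming scikit-learn is installed; the parameter values are arbitrary illustration choices.

```python
# Sketch (assumes scikit-learn is installed): the get_params / set_params
# round trip on a SparsePCA instance, without fitting any data.
from sklearn.decomposition import SparsePCA

est = SparsePCA(n_components=3, alpha=1)
params = est.get_params()            # dict mapping parameter names to values
assert params["n_components"] == 3

est.set_params(alpha=2, ridge_alpha=0.05)  # updates in place, returns est
assert est.alpha == 2
```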
doc_2903
The bbox of the character (B) as a tuple (llx, lly, urx, ury).
doc_2904
Return the standard horizontal stem width as float, or None if not specified in AFM file.
doc_2905
Return fillstate (True if filling, False else). >>> turtle.begin_fill() >>> if turtle.filling(): ... turtle.pensize(5) ... else: ... turtle.pensize(3)
doc_2906
Return the Patch's axis-aligned extents as a Bbox.
doc_2907
Return the result of rotating the digits of the first operand by an amount specified by the second operand. The second operand must be an integer in the range -precision through precision. The absolute value of the second operand gives the number of places to rotate. If the second operand is positive then rotation is to the left; otherwise rotation is to the right. The coefficient of the first operand is padded on the left with zeros to length precision if necessary. The sign and exponent of the first operand are unchanged.
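The rotation described above can be sketched with the standard-library decimal module. Setting the context precision explicitly to a small value keeps the zero-padding of the coefficient easy to see.

```python
# Sketch: Decimal.rotate with an explicit context precision, so the padding
# of the coefficient to `precision` digits is visible in the result.
from decimal import Decimal, getcontext

getcontext().prec = 9      # coefficient is padded on the left to 9 digits
x = Decimal("34")          # coefficient becomes '000000034' after padding

left = x.rotate(8)         # positive operand: rotate left by 8 places
right = x.rotate(-2)       # negative operand: rotate right by 2 places

print(left)   # 400000003
print(right)  # 340000000
```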
doc_2908
An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array. For more information, refer to the numpy module and examine the methods and attributes of an array. Parameters (for the __new__ method; see Notes below) shapetuple of ints Shape of created array. dtypedata-type, optional Any object that can be interpreted as a numpy data type. bufferobject exposing buffer interface, optional Used to fill the array with data. offsetint, optional Offset of array data in buffer. stridestuple of ints, optional Strides of data in memory. order{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also array Construct an array. zeros Create an array, each element of which is zero. empty Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). dtype Create a data-type. numpy.typing.NDArray An ndarray alias generic w.r.t. its dtype.type. Notes There are two modes of creating an array using __new__: If buffer is None, then only shape, dtype, and order are used. If buffer is an object exposing the buffer interface, then all keywords are interpreted. No __init__ method is needed because the array is fully initialized after the __new__ method. Examples These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray. First mode, buffer is None: >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) Second mode: >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... 
offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) Attributes Tndarray Transpose of the array. databuffer The array’s elements, in memory. dtypedtype object Describes the format of the elements in the array. flagsdict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. flatnumpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO). imagndarray Imaginary part of the array. realndarray Real part of the array. sizeint Number of elements in the array. itemsizeint The memory use of each array element in bytes. nbytesint The total number of bytes required to store the array data, i.e., itemsize * size. ndimint The array’s number of dimensions. shapetuple of ints Shape of the array. stridestuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4). ctypesctypes object Class containing properties of the array needed for interaction with ctypes. basendarray If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.
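The strides arithmetic in the attribute description above can be checked directly on a small array:

```python
# Sketch: a C-ordered (3, 4) int16 array steps 2 bytes between columns
# (one itemsize) and 8 bytes between rows (itemsize * 4 columns).
import numpy as np

a = np.zeros((3, 4), dtype=np.int16, order="C")
assert a.itemsize == 2
assert a.strides == (8, 2)              # (row stride, column stride) in bytes
assert a.nbytes == a.itemsize * a.size  # 2 * 12 = 24 bytes total
```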
doc_2909
Evaluate a 3-D Hermite_e series at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * He_i(x) * He_j(y) * He_k(z)\] The parameters x, y, and z are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either x, y, and z or their elements must support multiplication and addition both with themselves and with the elements of c. If c has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters x, y, zarray_like, compatible object The three dimensional series is evaluated at the points (x, y, z), where x, y, and z must have the same shape. If any of x, y, or z is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. carray_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in c[i,j,k]. If c has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns valuesndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from x, y, and z. See also hermeval, hermeval2d, hermegrid2d, hermegrid3d Notes New in version 1.7.0.
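A small worked evaluation: with all coefficients equal to 1 over degrees {0, 1}, and since He_0(x) = 1 and He_1(x) = x, the series collapses to (1 + x)(1 + y)(1 + z).

```python
# Sketch: hermeval3d with c = ones((2, 2, 2)), which reduces the series
# to (1 + x)(1 + y)(1 + z) because He_0(x) = 1 and He_1(x) = x.
import numpy as np
from numpy.polynomial.hermite_e import hermeval3d

c = np.ones((2, 2, 2))
val = hermeval3d(1.0, 1.0, 1.0, c)  # (1 + 1) * (1 + 1) * (1 + 1)
print(val)  # 8.0
```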
doc_2910
Loop over the format_string and return an iterable of tuples (literal_text, field_name, format_spec, conversion). This is used by vformat() to break the string into either literal text, or replacement fields. The values in the tuple conceptually represent a span of literal text followed by a single replacement field. If there is no literal text (which can happen if two replacement fields occur consecutively), then literal_text will be a zero-length string. If there is no replacement field, then the values of field_name, format_spec and conversion will be None.
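The tuples described above can be seen directly by iterating string.Formatter().parse() over a simple format string:

```python
# Sketch: each yielded tuple is (literal_text, field_name, format_spec,
# conversion); fields without a spec get '' and without a conversion get None.
from string import Formatter

parts = list(Formatter().parse("Hello {name}!"))
print(parts)  # [('Hello ', 'name', '', None), ('!', None, None, None)]
```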
doc_2911
Return a list of the child Artists of this Artist.
doc_2912
draw a filled circle filled_circle(surface, x, y, r, color) -> None Draws a filled circle on the given surface. For an unfilled circle use circle(). Parameters: surface (Surface) -- surface to draw on x (int) -- x coordinate of the center of the circle y (int) -- y coordinate of the center of the circle r (int) -- radius of the circle color (Color or tuple(int, int, int, [int])) -- color to draw with, the alpha value is optional if using a tuple (RGB[A]) Returns: None Return type: NoneType
doc_2913
tf.experimental.numpy.ones( shape, dtype=float ) Unsupported arguments: order. See the NumPy documentation for numpy.ones.
doc_2914
Helper object to deal with Flask applications. This is usually not necessary to interface with as it’s used internally in the dispatching to click. In future versions of Flask this object will most likely play a bigger role. Typically it’s created automatically by the FlaskGroup but you can also manually create it and pass it onwards as a click object. app_import_path Optionally the import path for the Flask application. create_app Optionally a function that is passed the script info to create the instance of the application. data A dictionary with arbitrary data that can be associated with this script info. load_app() Loads the Flask app (if not yet loaded) and returns it. Calling this multiple times will simply return the already loaded app.
doc_2915
Add a callback function that will be called whenever one of the Artist's properties changes. Parameters funccallable The callback function. It must have the signature: def func(artist: Artist) -> Any where artist is the calling Artist. Return values may exist but are ignored. Returns int The observer id associated with the callback. This id can be used for removing the callback with remove_callback later. See also remove_callback
doc_2916
Return the effective group id of the current process. This corresponds to the “set id” bit on the file being executed in the current process. Availability: Unix.
doc_2917
Bases: tornado.web.RequestHandler get()[source]
doc_2918
Keep only the elements where filter_fn applied to function name returns True.
doc_2919
The name of a GET field containing the URL to redirect to after log out. Defaults to 'next'. Overrides the next_page URL if the given GET parameter is passed.
doc_2920
Dictionary mapping filename extensions to non-standard, but commonly found MIME types.
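A short stdlib example showing this dictionary and how guess_type() only consults it when strict=False:

```python
import mimetypes

# common_types maps extensions to widely used but non-standard MIME types.
print('.rtf' in mimetypes.common_types)  # True

# Non-standard entries are ignored unless strict=False is passed.
strict_guess = mimetypes.guess_type('notes.pict', strict=True)
loose_guess = mimetypes.guess_type('notes.pict', strict=False)
print(strict_guess, loose_guess)
```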
doc_2921
Decode the input into a string of unicode symbols. The decoding strategy depends on the vectorizer parameters. Parameters docstr The string to decode. Returns doc: str A string of unicode symbols.
doc_2922
Set whether the artist uses clipping. When False, artists will be visible outside of the axes, which can lead to unexpected results. Parameters b : bool
doc_2923
Return the event loop associated with the server object. New in version 3.7.
doc_2924
The HTTP reason phrase for the response. It uses the HTTP standard’s default reason phrases. Unless explicitly set, reason_phrase is determined by the value of status_code.
doc_2925
Construct an array from a text file, using regular expression parsing. The returned array is always a structured array, and is constructed from all matches of the regular expression in the file. Groups in the regular expression are converted to fields of the structured array. Parameters filepath or file Filename or file object to read. Changed in version 1.22.0: Now accepts os.PathLike implementations. regexpstr or regexp Regular expression used to parse the file. Groups in the regular expression correspond to fields in the dtype. dtypedtype or list of dtypes Dtype for the structured array; must be a structured datatype. encodingstr, optional Encoding used to decode the inputfile. Does not apply to input streams. New in version 1.14.0. Returns outputndarray The output array, containing the part of the content of file that was matched by regexp. output is always a structured array. Raises TypeError When dtype is not a valid dtype for a structured array. See also fromstring, loadtxt Notes Dtypes for structured arrays can be specified in several forms, but all forms specify at least the data type and field name. For details see basics.rec. Examples >>> from io import StringIO >>> text = StringIO("1312 foo\n1534 bar\n444 qux") >>> regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything] >>> output = np.fromregex(text, regexp, ... [('num', np.int64), ('key', 'S3')]) >>> output array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')], dtype=[('num', '<i8'), ('key', 'S3')]) >>> output['num'] array([1312, 1534, 444])
doc_2926
See Migration guide for more details. tf.compat.v1.raw_ops.DatasetToGraph tf.raw_ops.DatasetToGraph( input_dataset, stateful_whitelist=[], allow_stateful=False, strip_device_assignment=False, name=None ) Returns a graph representation for input_dataset. Args input_dataset A Tensor of type variant. A variant tensor representing the dataset to return the graph representation for. stateful_whitelist An optional list of strings. Defaults to []. allow_stateful An optional bool. Defaults to False. strip_device_assignment An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor of type string.
doc_2927
Scalar method identical to the corresponding array attribute. Please see ndarray.ravel.
doc_2928
os.spawnle(mode, path, ..., env) os.spawnlp(mode, file, ...) os.spawnlpe(mode, file, ..., env) os.spawnv(mode, path, args) os.spawnve(mode, path, args, env) os.spawnvp(mode, file, args) os.spawnvpe(mode, file, args, env) Execute the program path in a new process. (Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions. Check especially the Replacing Older Functions with the subprocess Module section.) If mode is P_NOWAIT, this function returns the process id of the new process; if mode is P_WAIT, returns the process’s exit code if it exits normally, or -signal, where signal is the signal that killed the process. On Windows, the process id will actually be the process handle, so can be used with the waitpid() function. Note on VxWorks, this function doesn’t return -signal when the new process is killed. Instead it raises OSError exception. The “l” and “v” variants of the spawn* functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the spawnl*() functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the args parameter. In either case, the arguments to the child process must start with the name of the command being run. The variants which include a second “p” near the end (spawnlp(), spawnlpe(), spawnvp(), and spawnvpe()) will use the PATH environment variable to locate the program file. When the environment is being replaced (using one of the spawn*e variants, discussed in the next paragraph), the new environment is used as the source of the PATH variable. 
The other variants, spawnl(), spawnle(), spawnv(), and spawnve(), will not use the PATH variable to locate the executable; path must contain an appropriate absolute or relative path. For spawnle(), spawnlpe(), spawnve(), and spawnvpe() (note that these all end in “e”), the env parameter must be a mapping which is used to define the environment variables for the new process (they are used instead of the current process’ environment); the functions spawnl(), spawnlp(), spawnv(), and spawnvp() all cause the new process to inherit the environment of the current process. Note that keys and values in the env dictionary must be strings; invalid keys or values will cause the function to fail, with a return value of 127. As an example, the following calls to spawnlp() and spawnvpe() are equivalent: import os os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null') L = ['cp', 'index.html', '/dev/null'] os.spawnvpe(os.P_WAIT, 'cp', L, os.environ) Raises an auditing event os.spawn with arguments mode, path, args, env. Availability: Unix, Windows. spawnlp(), spawnlpe(), spawnvp() and spawnvpe() are not available on Windows. spawnle() and spawnve() are not thread-safe on Windows; we advise you to use the subprocess module instead. Changed in version 3.6: Accepts a path-like object.
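Since the documentation itself recommends subprocess over the spawn* family, here is a sketch of the spawnlp example rewritten with subprocess.run; sys.executable stands in for the cp command so the snippet runs on any platform:

```python
import subprocess
import sys

# Equivalent in spirit to:
#   os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null')
# P_WAIT's "wait and return the exit status" behavior maps to run().returncode.
result = subprocess.run(
    [sys.executable, '-c', 'print("copied")'],  # stand-in for the cp command
    capture_output=True,
    text=True,
)
print(result.returncode)      # 0, like a normal P_WAIT exit status
print(result.stdout.strip())  # copied
```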
doc_2929
tf.gradients( ys, xs, grad_ys=None, name='gradients', gate_gradients=False, aggregation_method=None, stop_gradients=None, unconnected_gradients=tf.UnconnectedGradients.NONE ) tf.gradients is only valid in a graph context. In particular, it is valid in the context of a tf.function wrapper, where code is executing as a graph. ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys. gradients() adds ops to the graph to output the derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs. grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys. When grad_ys is None, we fill in a tensor of '1's of the shape of y for each y in ys. A user can provide their own initial grad_ys to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y). stop_gradients is a Tensor or a list of tensors to be considered constant with respect to all xs. These tensors will not be backpropagated through, as though they had been explicitly disconnected using stop_gradient. Among other things, this allows computation of partial derivatives as opposed to total derivatives. For example: @tf.function def example(): a = tf.constant(0.) b = 2 * a return tf.gradients(a + b, [a, b], stop_gradients=[a, b]) example() [<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>] Here the partial derivatives g evaluate to [1.0, 1.0], compared to the total derivatives tf.gradients(a + b, [a, b]), which take into account the influence of a on b and evaluate to [3.0, 1.0]. 
Note that the above is equivalent to: @tf.function def example(): a = tf.stop_gradient(tf.constant(0.)) b = tf.stop_gradient(2 * a) return tf.gradients(a + b, [a, b]) example() [<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>] stop_gradients provides a way of stopping gradient after the graph has already been constructed, as compared to tf.stop_gradient which is used during graph construction. When the two approaches are combined, backpropagation stops at both tf.stop_gradient nodes and nodes in stop_gradients, whichever is encountered first. All integer tensors are considered constant with respect to all xs, as if they were included in stop_gradients. unconnected_gradients determines the value returned for each x in xs if it is unconnected in the graph to ys. By default this is None to safeguard against errors. Mathematically these gradients are zero which can be requested using the 'zero' option. tf.UnconnectedGradients provides the following options and behaviors: @tf.function def example(use_zero): a = tf.ones([1, 2]) b = tf.ones([3, 1]) if use_zero: return tf.gradients([b], [a], unconnected_gradients='zero') else: return tf.gradients([b], [a], unconnected_gradients='none') example(False) [None] example(True) [<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[0., 0.]], ...)>] Let us take one practical example which comes up during the backpropagation phase. This function is used to evaluate the derivatives of the cost function with respect to the weights Ws and biases bs. The sample implementation below explains what it is actually used for: @tf.function def example(): Ws = tf.constant(0.) bs = 2 * Ws cost = Ws + bs # This is just an example. Please ignore the formulas. 
g = tf.gradients(cost, [Ws, bs]) dCost_dW, dCost_db = g return dCost_dW, dCost_db example() (<tf.Tensor: shape=(), dtype=float32, numpy=3.0>, <tf.Tensor: shape=(), dtype=float32, numpy=1.0>) Args ys A Tensor or list of tensors to be differentiated. xs A Tensor or list of tensors to be used for differentiation. grad_ys Optional. A Tensor or list of tensors the same size as ys and holding the gradients computed for each y in ys. name Optional name to use for grouping all the gradient ops together. Defaults to 'gradients'. gate_gradients If True, add a tuple around the gradients returned for an operation. This avoids some race conditions. aggregation_method Specifies the method used to combine gradient terms. Accepted values are constants defined in the class AggregationMethod. stop_gradients Optional. A Tensor or list of tensors not to differentiate through. unconnected_gradients Optional. Specifies the gradient value returned when the given input tensors are unconnected. Accepted values are constants defined in the class tf.UnconnectedGradients and the default value is none. Returns A list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs. Raises LookupError if one of the operations between x and y does not have a registered gradient function. ValueError if the arguments are invalid. RuntimeError if called in Eager mode.
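TensorFlow is not needed to check the arithmetic of the cost example above; a plain-Python finite-difference sketch (a numerical analogue, not tf.gradients itself) confirms the reported values dCost/dWs = 3 (total derivative, since bs = 2*Ws follows Ws) and dCost/dbs = 1 (partial derivative with Ws held fixed):

```python
def cost(Ws, bs):
    # Same toy formula as in the tf.gradients example above.
    return Ws + bs

eps = 1e-6
Ws0 = 0.0
bs0 = 2 * Ws0

# Total derivative w.r.t. Ws: bs moves with Ws (bs = 2*Ws), so d(cost)/d(Ws) = 3.
dCost_dW = (cost(Ws0 + eps, 2 * (Ws0 + eps)) - cost(Ws0, bs0)) / eps
# Partial derivative w.r.t. bs with Ws held fixed: d(cost)/d(bs) = 1.
dCost_db = (cost(Ws0, bs0 + eps) - cost(Ws0, bs0)) / eps

print(round(dCost_dW), round(dCost_db))  # 3 1
```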
doc_2930
Set all fields of the record to 0, through MsiRecordClearData().
doc_2931
Project the data by using matrix product with the random matrix Parameters X{ndarray, sparse matrix} of shape (n_samples, n_features) The input data to project into a smaller dimensional space. Returns X_new{ndarray, sparse matrix} of shape (n_samples, n_components) Projected array.
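What "matrix product with the random matrix" means can be sketched with plain NumPy; the Gaussian matrix R below is a hypothetical stand-in for the fitted projection matrix, not sklearn's actual components_:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_components = 5, 20, 3

X = rng.normal(size=(n_samples, n_features))
# A Gaussian random matrix standing in for the fitted projection matrix.
R = rng.normal(size=(n_features, n_components)) / np.sqrt(n_components)

X_new = X @ R  # the matrix product the transform performs
print(X_new.shape)  # (5, 3)
```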
doc_2932
Return the picking behavior of the artist. The possible values are described in set_picker. See also set_picker, pickable, pick
doc_2933
Return a copy of the array. Parameters order{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and numpy.copy are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also numpy.copy Similar function with different default behavior numpy.copyto Notes This function is the preferred method for creating an array copy. The function numpy.copy is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. Examples >>> x = np.array([[1,2,3],[4,5,6]], order='F') >>> y = x.copy() >>> x.fill(0) >>> x array([[0, 0, 0], [0, 0, 0]]) >>> y array([[1, 2, 3], [4, 5, 6]]) >>> y.flags['C_CONTIGUOUS'] True
doc_2934
os.O_NOINHERIT os.O_SHORT_LIVED os.O_TEMPORARY os.O_RANDOM os.O_SEQUENTIAL os.O_TEXT The above constants are only available on Windows.
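Because these flags exist only on Windows, cross-platform code has to probe for them before use; a small sketch with getattr/hasattr:

```python
import os

# These open() flags are defined only on Windows; probe before using them.
windows_flags = ['O_NOINHERIT', 'O_SHORT_LIVED', 'O_TEMPORARY',
                 'O_RANDOM', 'O_SEQUENTIAL', 'O_TEXT']
available = {name: getattr(os, name) for name in windows_flags
             if hasattr(os, name)}
print(sorted(available))  # an empty list on POSIX systems
```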
doc_2935
Returns the (complex) conjugate transpose of self. Equivalent to np.transpose(self) if self is real-valued. Parameters None Returns retmatrix object complex conjugate transpose of self Examples >>> x = np.matrix(np.arange(12).reshape((3,4))) >>> z = x - 1j*x; z matrix([[ 0. +0.j, 1. -1.j, 2. -2.j, 3. -3.j], [ 4. -4.j, 5. -5.j, 6. -6.j, 7. -7.j], [ 8. -8.j, 9. -9.j, 10.-10.j, 11.-11.j]]) >>> z.getH() matrix([[ 0. -0.j, 4. +4.j, 8. +8.j], [ 1. +1.j, 5. +5.j, 9. +9.j], [ 2. +2.j, 6. +6.j, 10.+10.j], [ 3. +3.j, 7. +7.j, 11.+11.j]])
doc_2936
ipaddress.ip_address(address) Return an IPv4Address or IPv6Address object depending on the IP address passed as argument. Either IPv4 or IPv6 addresses may be supplied; integers less than 2**32 will be considered to be IPv4 by default. A ValueError is raised if address does not represent a valid IPv4 or IPv6 address. >>> ipaddress.ip_address('192.168.0.1') IPv4Address('192.168.0.1') >>> ipaddress.ip_address('2001:db8::') IPv6Address('2001:db8::') ipaddress.ip_network(address, strict=True) Return an IPv4Network or IPv6Network object depending on the IP address passed as argument. address is a string or integer representing the IP network. Either IPv4 or IPv6 networks may be supplied; integers less than 2**32 will be considered to be IPv4 by default. strict is passed to IPv4Network or IPv6Network constructor. A ValueError is raised if address does not represent a valid IPv4 or IPv6 address, or if the network has host bits set. >>> ipaddress.ip_network('192.168.0.0/28') IPv4Network('192.168.0.0/28') ipaddress.ip_interface(address) Return an IPv4Interface or IPv6Interface object depending on the IP address passed as argument. address is a string or integer representing the IP address. Either IPv4 or IPv6 addresses may be supplied; integers less than 2**32 will be considered to be IPv4 by default. A ValueError is raised if address does not represent a valid IPv4 or IPv6 address. One downside of these convenience functions is that the need to handle both IPv4 and IPv6 formats means that error messages provide minimal information on the precise error, as the functions don’t know whether the IPv4 or IPv6 format was intended. More detailed error reporting can be obtained by calling the appropriate version specific class constructors directly. IP Addresses Address objects The IPv4Address and IPv6Address objects share a lot of common attributes. 
Some attributes that are only meaningful for IPv6 addresses are also implemented by IPv4Address objects, in order to make it easier to write code that handles both IP versions correctly. Address objects are hashable, so they can be used as keys in dictionaries. class ipaddress.IPv4Address(address) Construct an IPv4 address. An AddressValueError is raised if address is not a valid IPv4 address. The following constitutes a valid IPv4 address: A string in decimal-dot notation, consisting of four decimal integers in the inclusive range 0–255, separated by dots (e.g. 192.168.0.1). Each integer represents an octet (byte) in the address. Leading zeroes are tolerated only for values less than 8 (as there is no ambiguity between the decimal and octal interpretations of such strings). An integer that fits into 32 bits. An integer packed into a bytes object of length 4 (most significant octet first). >>> ipaddress.IPv4Address('192.168.0.1') IPv4Address('192.168.0.1') >>> ipaddress.IPv4Address(3232235521) IPv4Address('192.168.0.1') >>> ipaddress.IPv4Address(b'\xC0\xA8\x00\x01') IPv4Address('192.168.0.1') version The appropriate version number: 4 for IPv4, 6 for IPv6. max_prefixlen The total number of bits in the address representation for this version: 32 for IPv4, 128 for IPv6. The prefix defines the number of leading bits in an address that are compared to determine whether or not an address is part of a network. compressed exploded The string representation in dotted decimal notation. Leading zeroes are never included in the representation. As IPv4 does not define a shorthand notation for addresses with octets set to zero, these two attributes are always the same as str(addr) for IPv4 addresses. Exposing these attributes makes it easier to write display code that can handle both IPv4 and IPv6 addresses. packed The binary representation of this address - a bytes object of the appropriate length (most significant octet first). This is 4 bytes for IPv4 and 16 bytes for IPv6. 
reverse_pointer The name of the reverse DNS PTR record for the IP address, e.g.: >>> ipaddress.ip_address("127.0.0.1").reverse_pointer '1.0.0.127.in-addr.arpa' >>> ipaddress.ip_address("2001:db8::1").reverse_pointer '1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa' This is the name that could be used for performing a PTR lookup, not the resolved hostname itself. New in version 3.5. is_multicast True if the address is reserved for multicast use. See RFC 3171 (for IPv4) or RFC 2373 (for IPv6). is_private True if the address is allocated for private networks. See iana-ipv4-special-registry (for IPv4) or iana-ipv6-special-registry (for IPv6). is_global True if the address is allocated for public networks. See iana-ipv4-special-registry (for IPv4) or iana-ipv6-special-registry (for IPv6). New in version 3.4. is_unspecified True if the address is unspecified. See RFC 5735 (for IPv4) or RFC 2373 (for IPv6). is_reserved True if the address is otherwise IETF reserved. is_loopback True if this is a loopback address. See RFC 3330 (for IPv4) or RFC 2373 (for IPv6). is_link_local True if the address is reserved for link-local usage. See RFC 3927. IPv4Address.__format__(fmt) Returns a string representation of the IP address, controlled by an explicit format string. fmt can be one of the following: 's', the default option, equivalent to str(), 'b' for a zero-padded binary string, 'X' or 'x' for an uppercase or lowercase hexadecimal representation, or 'n', which is equivalent to 'b' for IPv4 addresses and 'x' for IPv6. For binary and hexadecimal representations, the form specifier '#' and the grouping option '_' are available. __format__ is used by format, str.format and f-strings. 
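The classification properties above can be checked directly in the stdlib; a few well-known addresses:

```python
from ipaddress import ip_address

print(ip_address('10.0.0.1').is_private)     # True  (RFC 1918 block)
print(ip_address('127.0.0.1').is_loopback)   # True
print(ip_address('224.0.0.1').is_multicast)  # True
print(ip_address('ff02::1').is_multicast)    # True  (IPv6 multicast, ff00::/8)
print(ip_address('8.8.8.8').is_global)       # True
```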
>>> format(ipaddress.IPv4Address('192.168.0.1')) '192.168.0.1' >>> '{:#b}'.format(ipaddress.IPv4Address('192.168.0.1')) '0b11000000101010000000000000000001' >>> f'{ipaddress.IPv6Address("2001:db8::1000"):s}' '2001:db8::1000' >>> format(ipaddress.IPv6Address('2001:db8::1000'), '_X') '2001_0DB8_0000_0000_0000_0000_0000_1000' >>> '{:#_n}'.format(ipaddress.IPv6Address('2001:db8::1000')) '0x2001_0db8_0000_0000_0000_0000_0000_1000' New in version 3.9. class ipaddress.IPv6Address(address) Construct an IPv6 address. An AddressValueError is raised if address is not a valid IPv6 address. The following constitutes a valid IPv6 address: A string consisting of eight groups of four hexadecimal digits, each group representing 16 bits. The groups are separated by colons. This describes an exploded (longhand) notation. The string can also be compressed (shorthand notation) by various means. See RFC 4291 for details. For example, "0000:0000:0000:0000:0000:0abc:0007:0def" can be compressed to "::abc:7:def". Optionally, the string may also have a scope zone ID, expressed with a suffix %scope_id. If present, the scope ID must be non-empty, and may not contain %. See RFC 4007 for details. For example, fe80::1234%1 might identify address fe80::1234 on the first link of the node. An integer that fits into 128 bits. An integer packed into a bytes object of length 16, big-endian. >>> ipaddress.IPv6Address('2001:db8::1000') IPv6Address('2001:db8::1000') >>> ipaddress.IPv6Address('ff02::5678%1') IPv6Address('ff02::5678%1') compressed The short form of the address representation, with leading zeroes in groups omitted and the longest sequence of groups consisting entirely of zeroes collapsed to a single empty group. This is also the value returned by str(addr) for IPv6 addresses. exploded The long form of the address representation, with all leading zeroes and groups consisting entirely of zeroes included. 
For the following attributes and methods, see the corresponding documentation of the IPv4Address class: packed reverse_pointer version max_prefixlen is_multicast is_private is_global is_unspecified is_reserved is_loopback is_link_local New in version 3.4: is_global is_site_local True if the address is reserved for site-local usage. Note that the site-local address space has been deprecated by RFC 3879. Use is_private to test if this address is in the space of unique local addresses as defined by RFC 4193. ipv4_mapped For addresses that appear to be IPv4 mapped addresses (starting with ::FFFF/96), this property will report the embedded IPv4 address. For any other address, this property will be None. scope_id For scoped addresses as defined by RFC 4007, this property identifies the particular zone of the address’s scope that the address belongs to, as a string. When no scope zone is specified, this property will be None. sixtofour For addresses that appear to be 6to4 addresses (starting with 2002::/16) as defined by RFC 3056, this property will report the embedded IPv4 address. For any other address, this property will be None. teredo For addresses that appear to be Teredo addresses (starting with 2001::/32) as defined by RFC 4380, this property will report the embedded (server, client) IP address pair. For any other address, this property will be None. IPv6Address.__format__(fmt) Refer to the corresponding method documentation in IPv4Address. New in version 3.9. Conversion to Strings and Integers To interoperate with networking interfaces such as the socket module, addresses must be converted to strings or integers. This is handled using the str() and int() builtin functions: >>> str(ipaddress.IPv4Address('192.168.0.1')) '192.168.0.1' >>> int(ipaddress.IPv4Address('192.168.0.1')) 3232235521 >>> str(ipaddress.IPv6Address('::1')) '::1' >>> int(ipaddress.IPv6Address('::1')) 1 Note that IPv6 scoped addresses are converted to integers without scope zone ID. 
Operators Address objects support some operators. Unless stated otherwise, operators can only be applied between compatible objects (i.e. IPv4 with IPv4, IPv6 with IPv6). Comparison operators Address objects can be compared with the usual set of comparison operators. Same IPv6 addresses with different scope zone IDs are not equal. Some examples: >>> IPv4Address('127.0.0.2') > IPv4Address('127.0.0.1') True >>> IPv4Address('127.0.0.2') == IPv4Address('127.0.0.1') False >>> IPv4Address('127.0.0.2') != IPv4Address('127.0.0.1') True >>> IPv6Address('fe80::1234') == IPv6Address('fe80::1234%1') False >>> IPv6Address('fe80::1234%1') != IPv6Address('fe80::1234%2') True Arithmetic operators Integers can be added to or subtracted from address objects. Some examples: >>> IPv4Address('127.0.0.2') + 3 IPv4Address('127.0.0.5') >>> IPv4Address('127.0.0.2') - 3 IPv4Address('126.255.255.255') >>> IPv4Address('255.255.255.255') + 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> ipaddress.AddressValueError: 4294967296 (>= 2**32) is not permitted as an IPv4 address IP Network definitions The IPv4Network and IPv6Network objects provide a mechanism for defining and inspecting IP network definitions. A network definition consists of a mask and a network address, and as such defines a range of IP addresses that equal the network address when masked (binary AND) with the mask. For example, a network definition with the mask 255.255.255.0 and the network address 192.168.1.0 consists of IP addresses in the inclusive range 192.168.1.0 to 192.168.1.255. Prefix, net mask and host mask There are several equivalent ways to specify IP network masks. A prefix /<nbits> is a notation that denotes how many high-order bits are set in the network mask. A net mask is an IP address with some number of high-order bits set. Thus the prefix /24 is equivalent to the net mask 255.255.255.0 in IPv4, or ffff:ff00:: in IPv6. 
In addition, a host mask is the logical inverse of a net mask, and is sometimes used (for example in Cisco access control lists) to denote a network mask. The host mask equivalent to /24 in IPv4 is 0.0.0.255. Network objects All attributes implemented by address objects are implemented by network objects as well. In addition, network objects implement additional attributes. All of these are common between IPv4Network and IPv6Network, so to avoid duplication they are only documented for IPv4Network. Network objects are hashable, so they can be used as keys in dictionaries. class ipaddress.IPv4Network(address, strict=True) Construct an IPv4 network definition. address can be one of the following: A string consisting of an IP address and an optional mask, separated by a slash (/). The IP address is the network address, and the mask can be either a single number, which means it’s a prefix, or a string representation of an IPv4 address. If it’s the latter, the mask is interpreted as a net mask if it starts with a non-zero field, or as a host mask if it starts with a zero field, with the single exception of an all-zero mask which is treated as a net mask. If no mask is provided, it’s considered to be /32. For example, the following address specifications are equivalent: 192.168.1.0/24, 192.168.1.0/255.255.255.0 and 192.168.1.0/0.0.0.255. An integer that fits into 32 bits. This is equivalent to a single-address network, with the network address being address and the mask being /32. An integer packed into a bytes object of length 4, big-endian. The interpretation is similar to an integer address. A two-tuple of an address description and a netmask, where the address description is either a string, a 32-bit integer, a 4-byte packed integer, or an existing IPv4Address object; and the netmask is either an integer representing the prefix length (e.g. 24) or a string representing the prefix mask (e.g. 255.255.255.0). 
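The equivalent constructor forms listed above, shown concretely:

```python
from ipaddress import IPv4Address, IPv4Network

# Prefix, net-mask, and host-mask strings all describe the same network.
n1 = IPv4Network('192.168.1.0/24')
n2 = IPv4Network('192.168.1.0/255.255.255.0')
n3 = IPv4Network('192.168.1.0/0.0.0.255')
# Two-tuple form: (address, prefix length); the address may be an
# existing IPv4Address object.
n4 = IPv4Network((IPv4Address('192.168.1.0'), 24))

print(n1 == n2 == n3 == n4)      # True
print(n1.netmask, n1.hostmask)   # 255.255.255.0 0.0.0.255
```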
An AddressValueError is raised if address is not a valid IPv4 address. A NetmaskValueError is raised if the mask is not valid for an IPv4 address. If strict is True and host bits are set in the supplied address, then ValueError is raised. Otherwise, the host bits are masked out to determine the appropriate network address. Unless stated otherwise, all network methods accepting other network/address objects will raise TypeError if the argument’s IP version is incompatible to self. Changed in version 3.5: Added the two-tuple form for the address constructor parameter. version max_prefixlen Refer to the corresponding attribute documentation in IPv4Address. is_multicast is_private is_unspecified is_reserved is_loopback is_link_local These attributes are true for the network as a whole if they are true for both the network address and the broadcast address. network_address The network address for the network. The network address and the prefix length together uniquely define a network. broadcast_address The broadcast address for the network. Packets sent to the broadcast address should be received by every host on the network. hostmask The host mask, as an IPv4Address object. netmask The net mask, as an IPv4Address object. with_prefixlen compressed exploded A string representation of the network, with the mask in prefix notation. with_prefixlen and compressed are always the same as str(network). exploded uses the exploded form of the network address. with_netmask A string representation of the network, with the mask in net mask notation. with_hostmask A string representation of the network, with the mask in host mask notation. num_addresses The total number of addresses in the network. prefixlen Length of the network prefix, in bits. hosts() Returns an iterator over the usable hosts in the network. The usable hosts are all the IP addresses that belong to the network, except the network address itself and the network broadcast address. 
For networks with a mask length of 31, the network address and network broadcast address are also included in the result. Networks with a mask of 32 will return a list containing the single host address. >>> list(ip_network('192.0.2.0/29').hosts()) [IPv4Address('192.0.2.1'), IPv4Address('192.0.2.2'), IPv4Address('192.0.2.3'), IPv4Address('192.0.2.4'), IPv4Address('192.0.2.5'), IPv4Address('192.0.2.6')] >>> list(ip_network('192.0.2.0/31').hosts()) [IPv4Address('192.0.2.0'), IPv4Address('192.0.2.1')] >>> list(ip_network('192.0.2.1/32').hosts()) [IPv4Address('192.0.2.1')] overlaps(other) True if this network is partly or wholly contained in other or other is wholly contained in this network. address_exclude(network) Computes the network definitions resulting from removing the given network from this one. Returns an iterator of network objects. Raises ValueError if network is not completely contained in this network. >>> n1 = ip_network('192.0.2.0/28') >>> n2 = ip_network('192.0.2.1/32') >>> list(n1.address_exclude(n2)) [IPv4Network('192.0.2.8/29'), IPv4Network('192.0.2.4/30'), IPv4Network('192.0.2.2/31'), IPv4Network('192.0.2.0/32')] subnets(prefixlen_diff=1, new_prefix=None) The subnets that join to make the current network definition, depending on the argument values. prefixlen_diff is the amount our prefix length should be increased by. new_prefix is the desired new prefix of the subnets; it must be larger than our prefix. One and only one of prefixlen_diff and new_prefix must be set. Returns an iterator of network objects. 
>>> list(ip_network('192.0.2.0/24').subnets()) [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/25')] >>> list(ip_network('192.0.2.0/24').subnets(prefixlen_diff=2)) [IPv4Network('192.0.2.0/26'), IPv4Network('192.0.2.64/26'), IPv4Network('192.0.2.128/26'), IPv4Network('192.0.2.192/26')] >>> list(ip_network('192.0.2.0/24').subnets(new_prefix=26)) [IPv4Network('192.0.2.0/26'), IPv4Network('192.0.2.64/26'), IPv4Network('192.0.2.128/26'), IPv4Network('192.0.2.192/26')] >>> list(ip_network('192.0.2.0/24').subnets(new_prefix=23)) Traceback (most recent call last): File "<stdin>", line 1, in <module> raise ValueError('new prefix must be longer') ValueError: new prefix must be longer >>> list(ip_network('192.0.2.0/24').subnets(new_prefix=25)) [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/25')] supernet(prefixlen_diff=1, new_prefix=None) The supernet containing this network definition, depending on the argument values. prefixlen_diff is the amount our prefix length should be decreased by. new_prefix is the desired new prefix of the supernet; it must be smaller than our prefix. One and only one of prefixlen_diff and new_prefix must be set. Returns a single network object. >>> ip_network('192.0.2.0/24').supernet() IPv4Network('192.0.2.0/23') >>> ip_network('192.0.2.0/24').supernet(prefixlen_diff=2) IPv4Network('192.0.0.0/22') >>> ip_network('192.0.2.0/24').supernet(new_prefix=20) IPv4Network('192.0.0.0/20') subnet_of(other) Return True if this network is a subnet of other. >>> a = ip_network('192.168.1.0/24') >>> b = ip_network('192.168.1.128/30') >>> b.subnet_of(a) True New in version 3.7. supernet_of(other) Return True if this network is a supernet of other. >>> a = ip_network('192.168.1.0/24') >>> b = ip_network('192.168.1.128/30') >>> a.supernet_of(b) True New in version 3.7. compare_networks(other) Compare this network to other. In this comparison only the network addresses are considered; host bits aren’t. Returns either -1, 0 or 1. 
>>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.2/32')) -1 >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.0/32')) 1 >>> ip_network('192.0.2.1/32').compare_networks(ip_network('192.0.2.1/32')) 0 Deprecated since version 3.7: It uses the same ordering and comparison algorithm as “<”, “==”, and “>”. class ipaddress.IPv6Network(address, strict=True) Construct an IPv6 network definition. address can be one of the following: A string consisting of an IP address and an optional prefix length, separated by a slash (/). The IP address is the network address, and the prefix length must be a single number, the prefix. If no prefix length is provided, it’s considered to be /128. Note that currently expanded netmasks are not supported. That means 2001:db00::0/24 is a valid argument while 2001:db00::0/ffff:ff00:: is not. An integer that fits into 128 bits. This is equivalent to a single-address network, with the network address being address and the mask being /128. An integer packed into a bytes object of length 16, big-endian. The interpretation is similar to an integer address. A two-tuple of an address description and a netmask, where the address description is either a string, a 128-bit integer, a 16-byte packed integer, or an existing IPv6Address object; and the netmask is an integer representing the prefix length. An AddressValueError is raised if address is not a valid IPv6 address. A NetmaskValueError is raised if the mask is not valid for an IPv6 address. If strict is True and host bits are set in the supplied address, then ValueError is raised. Otherwise, the host bits are masked out to determine the appropriate network address. Changed in version 3.5: Added the two-tuple form for the address constructor parameter. 
version max_prefixlen is_multicast is_private is_unspecified is_reserved is_loopback is_link_local network_address broadcast_address hostmask netmask with_prefixlen compressed exploded with_netmask with_hostmask num_addresses prefixlen hosts() Returns an iterator over the usable hosts in the network. The usable hosts are all the IP addresses that belong to the network, except the Subnet-Router anycast address. For networks with a mask length of 127, the Subnet-Router anycast address is also included in the result. Networks with a mask of 128 will return a list containing the single host address. overlaps(other) address_exclude(network) subnets(prefixlen_diff=1, new_prefix=None) supernet(prefixlen_diff=1, new_prefix=None) subnet_of(other) supernet_of(other) compare_networks(other) Refer to the corresponding attribute documentation in IPv4Network. is_site_local This attribute is true for the network as a whole if it is true for both the network address and the broadcast address. Operators Network objects support some operators. Unless stated otherwise, operators can only be applied between compatible objects (i.e. IPv4 with IPv4, IPv6 with IPv6). Logical operators Network objects can be compared with the usual set of logical operators. Network objects are ordered first by network address, then by net mask. Iteration Network objects can be iterated to list all the addresses belonging to the network. For iteration, all hosts are returned, including unusable hosts (for usable hosts, use the hosts() method). An example: >>> for addr in IPv4Network('192.0.2.0/28'): ... addr ... 
IPv4Address('192.0.2.0') IPv4Address('192.0.2.1') IPv4Address('192.0.2.2') IPv4Address('192.0.2.3') IPv4Address('192.0.2.4') IPv4Address('192.0.2.5') IPv4Address('192.0.2.6') IPv4Address('192.0.2.7') IPv4Address('192.0.2.8') IPv4Address('192.0.2.9') IPv4Address('192.0.2.10') IPv4Address('192.0.2.11') IPv4Address('192.0.2.12') IPv4Address('192.0.2.13') IPv4Address('192.0.2.14') IPv4Address('192.0.2.15') Networks as containers of addresses Network objects can act as containers of addresses. Some examples: >>> IPv4Network('192.0.2.0/28')[0] IPv4Address('192.0.2.0') >>> IPv4Network('192.0.2.0/28')[15] IPv4Address('192.0.2.15') >>> IPv4Address('192.0.2.6') in IPv4Network('192.0.2.0/28') True >>> IPv4Address('192.0.3.6') in IPv4Network('192.0.2.0/28') False Interface objects Interface objects are hashable, so they can be used as keys in dictionaries. class ipaddress.IPv4Interface(address) Construct an IPv4 interface. The meaning of address is as in the constructor of IPv4Network, except that arbitrary host addresses are always accepted. IPv4Interface is a subclass of IPv4Address, so it inherits all the attributes from that class. In addition, the following attributes are available: ip The address (IPv4Address) without network information. >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.ip IPv4Address('192.0.2.5') network The network (IPv4Network) this interface belongs to. >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.network IPv4Network('192.0.2.0/24') with_prefixlen A string representation of the interface with the mask in prefix notation. >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.with_prefixlen '192.0.2.5/24' with_netmask A string representation of the interface with the network as a net mask. >>> interface = IPv4Interface('192.0.2.5/24') >>> interface.with_netmask '192.0.2.5/255.255.255.0' with_hostmask A string representation of the interface with the network as a host mask. 
>>> interface = IPv4Interface('192.0.2.5/24') >>> interface.with_hostmask '192.0.2.5/0.0.0.255' class ipaddress.IPv6Interface(address) Construct an IPv6 interface. The meaning of address is as in the constructor of IPv6Network, except that arbitrary host addresses are always accepted. IPv6Interface is a subclass of IPv6Address, so it inherits all the attributes from that class. In addition, the following attributes are available: ip network with_prefixlen with_netmask with_hostmask Refer to the corresponding attribute documentation in IPv4Interface. Operators Interface objects support some operators. Unless stated otherwise, operators can only be applied between compatible objects (i.e. IPv4 with IPv4, IPv6 with IPv6). Logical operators Interface objects can be compared with the usual set of logical operators. For equality comparison (== and !=), both the IP address and network must be the same for the objects to be equal. An interface will not compare equal to any address or network object. For ordering (<, >, etc) the rules are different. Interface and address objects with the same IP version can be compared, and the address objects will always sort before the interface objects. Two interface objects are first compared by their networks and, if those are the same, then by their IP addresses. Other Module Level Functions The module also provides the following module level functions: ipaddress.v4_int_to_packed(address) Represent an address as 4 packed bytes in network (big-endian) order. address is an integer representation of an IPv4 IP address. A ValueError is raised if the integer is negative or too large to be an IPv4 IP address. >>> ipaddress.ip_address(3221225985) IPv4Address('192.0.2.1') >>> ipaddress.v4_int_to_packed(3221225985) b'\xc0\x00\x02\x01' ipaddress.v6_int_to_packed(address) Represent an address as 16 packed bytes in network (big-endian) order. address is an integer representation of an IPv6 IP address. 
A ValueError is raised if the integer is negative or too large to be an IPv6 IP address. ipaddress.summarize_address_range(first, last) Return an iterator of the summarized network range given the first and last IP addresses. first is the first IPv4Address or IPv6Address in the range and last is the last IPv4Address or IPv6Address in the range. A TypeError is raised if first or last are not IP addresses or are not of the same version. A ValueError is raised if last is not greater than first or if first address version is not 4 or 6. >>> [ipaddr for ipaddr in ipaddress.summarize_address_range( ... ipaddress.IPv4Address('192.0.2.0'), ... ipaddress.IPv4Address('192.0.2.130'))] [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/31'), IPv4Network('192.0.2.130/32')] ipaddress.collapse_addresses(addresses) Return an iterator of the collapsed IPv4Network or IPv6Network objects. addresses is an iterator of IPv4Network or IPv6Network objects. A TypeError is raised if addresses contains mixed version objects. >>> [ipaddr for ipaddr in ... ipaddress.collapse_addresses([ipaddress.IPv4Network('192.0.2.0/25'), ... ipaddress.IPv4Network('192.0.2.128/25')])] [IPv4Network('192.0.2.0/24')] ipaddress.get_mixed_type_key(obj) Return a key suitable for sorting between networks and addresses. Address and Network objects are not sortable by default; they’re fundamentally different, so the expression: IPv4Address('192.0.2.0') <= IPv4Network('192.0.2.0/24') doesn’t make sense. There are times, however, when you may wish to have ipaddress sort these anyway. If you need to do this, you can use this function as the key argument to sorted(). obj is either a network or address object. Custom Exceptions To support more specific error reporting from class constructors, the module defines the following exceptions: exception ipaddress.AddressValueError(ValueError) Any value error related to the address. exception ipaddress.NetmaskValueError(ValueError) Any value error related to the net mask.
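The get_mixed_type_key() description above can be made concrete with a short sketch (not from the original docs): sorting a mixed list of address and network objects with the key function.

```python
import ipaddress

# Addresses and networks do not order against each other directly, but
# get_mixed_type_key() produces a key that lets sorted() interleave them.
mixed = [
    ipaddress.ip_network('192.0.2.0/24'),
    ipaddress.ip_address('192.0.2.1'),
    ipaddress.ip_address('192.0.1.255'),
]
ordered = sorted(mixed, key=ipaddress.get_mixed_type_key)
```

The result orders the lone lower address first, then the network, then the address that falls inside it.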
doc_2937
The SMTP server refused to accept the message data.
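This one-line description matches smtplib.SMTPDataError; assuming that is the class being documented, a minimal sketch of what the exception carries (the 554 code and message below are illustrative):

```python
import smtplib

# SMTPDataError is raised when the server rejects the message body after
# the DATA command; it carries the server's reply code and message.
# Constructed directly here purely for illustration:
err = smtplib.SMTPDataError(554, b'Transaction failed')
```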
doc_2938
Alias for get_facecolor.
doc_2939
Calls flush(), sets the target to None and clears the buffer.
doc_2940
class sklearn.ensemble.VotingClassifier(estimators, *, voting='hard', weights=None, n_jobs=None, flatten_transform=True, verbose=False) [source] Soft Voting/Majority Rule classifier for unfitted estimators. Read more in the User Guide. New in version 0.17. Parameters estimatorslist of (str, estimator) tuples Invoking the fit method on the VotingClassifier will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to 'drop' using set_params. Changed in version 0.21: 'drop' is accepted. Using None was deprecated in 0.22 and support was removed in 0.24. voting{‘hard’, ‘soft’}, default=’hard’ If ‘hard’, uses predicted class labels for majority rule voting. Else if ‘soft’, predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers. weightsarray-like of shape (n_classifiers,), default=None Sequence of weights (float or int) to weight the occurrences of predicted class labels (hard voting) or class probabilities before averaging (soft voting). Uses uniform weights if None. n_jobsint, default=None The number of jobs to run in parallel for fit. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.18. flatten_transformbool, default=True Affects shape of transform output only when voting=’soft’ If voting=’soft’ and flatten_transform=True, transform method returns matrix with shape (n_samples, n_classifiers * n_classes). If flatten_transform=False, it returns (n_classifiers, n_samples, n_classes). verbosebool, default=False If True, the time elapsed while fitting will be printed as it is completed. New in version 0.23. Attributes estimators_list of classifiers The collection of fitted sub-estimators as defined in estimators that are not ‘drop’. named_estimators_Bunch Attribute to access any fitted sub-estimators by name. 
New in version 0.20. classes_array-like of shape (n_predictions,) The classes labels. See also VotingRegressor Prediction voting regressor. Examples >>> import numpy as np >>> from sklearn.linear_model import LogisticRegression >>> from sklearn.naive_bayes import GaussianNB >>> from sklearn.ensemble import RandomForestClassifier, VotingClassifier >>> clf1 = LogisticRegression(multi_class='multinomial', random_state=1) >>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1) >>> clf3 = GaussianNB() >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]) >>> y = np.array([1, 1, 1, 2, 2, 2]) >>> eclf1 = VotingClassifier(estimators=[ ... ('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard') >>> eclf1 = eclf1.fit(X, y) >>> print(eclf1.predict(X)) [1 1 1 2 2 2] >>> np.array_equal(eclf1.named_estimators_.lr.predict(X), ... eclf1.named_estimators_['lr'].predict(X)) True >>> eclf2 = VotingClassifier(estimators=[ ... ('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='soft') >>> eclf2 = eclf2.fit(X, y) >>> print(eclf2.predict(X)) [1 1 1 2 2 2] >>> eclf3 = VotingClassifier(estimators=[ ... ('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='soft', weights=[2,1,1], ... flatten_transform=True) >>> eclf3 = eclf3.fit(X, y) >>> print(eclf3.predict(X)) [1 1 1 2 2 2] >>> print(eclf3.transform(X).shape) (6, 6) Methods fit(X, y[, sample_weight]) Fit the estimators. fit_transform(X[, y]) Return class labels or probabilities for each estimator. get_params([deep]) Get the parameters of an estimator from the ensemble. predict(X) Predict class labels for X. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of an estimator from the ensemble. transform(X) Return class labels or probabilities for X for each estimator. fit(X, y, sample_weight=None) [source] Fit the estimators. 
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vectors, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. New in version 0.18. Returns selfobject fit_transform(X, y=None, **fit_params) [source] Return class labels or probabilities for each estimator. Return predictions for X for each estimator. Parameters X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features) Input samples yndarray of shape (n_samples,), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get the parameters of an estimator from the ensemble. Returns the parameters given in the constructor as well as the estimators contained within the estimators parameter. Parameters deepbool, default=True Setting it to True gets the various estimators and the parameters of the estimators as well. predict(X) [source] Predict class labels for X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Returns majarray-like of shape (n_samples,) Predicted class labels. property predict_proba Compute probabilities of possible outcomes for samples in X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Returns avgarray-like of shape (n_samples, n_classes) Weighted average probability for each class per sample. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. 
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of an estimator from the ensemble. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in estimators. Parameters **paramskeyword arguments Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the estimator, the individual estimators of the ensemble can also be set, or can be removed by setting them to ‘drop’. transform(X) [source] Return class labels or probabilities for X for each estimator. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns probabilities_or_labels If voting='soft' and flatten_transform=True: returns ndarray of shape (n_samples, n_classifiers * n_classes), being class probabilities calculated by each classifier. If voting='soft' and flatten_transform=False: ndarray of shape (n_classifiers, n_samples, n_classes) If voting='hard': ndarray of shape (n_samples, n_classifiers), being class labels predicted by each classifier. Examples using sklearn.ensemble.VotingClassifier Plot the decision boundaries of a VotingClassifier Plot class probabilities calculated by the VotingClassifier
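The soft-voting rule described above (argmax of the weighted average of per-classifier probabilities) can be sketched independently of sklearn; the function name soft_vote and the toy probabilities below are illustrative only.

```python
import numpy as np

def soft_vote(probas, weights=None):
    """Argmax of the (weighted) average of per-classifier probabilities.

    probas has shape (n_classifiers, n_samples, n_classes).
    """
    avg = np.average(probas, axis=0, weights=weights)
    return avg.argmax(axis=1)

# Two classifiers, two samples, two classes.
probas = np.array([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.6, 0.4], [0.3, 0.7]],
])
labels = soft_vote(probas, weights=[2, 1])
```

With weights [2, 1] the first classifier dominates the first sample (class 0) while both agree on the second (class 1).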
doc_2941
Returns the Well-Known Text of the geometry (an OGC standard).
doc_2942
Define the picking behavior of the artist. Parameters pickerNone or bool or float or callable This can be one of the following: None: Picking is disabled for this artist (default). A boolean: If True then picking will be enabled and the artist will fire a pick event if the mouse event is over the artist. A float: If picker is a number it is interpreted as an epsilon tolerance in points and the artist will fire off an event if its data is within epsilon of the mouse event. For some artists like lines and patch collections, the artist may provide additional data to the pick event that is generated, e.g., the indices of the data within epsilon of the pick event. A function: If picker is callable, it is a user supplied function which determines whether the artist is hit by the mouse event: hit, props = picker(artist, mouseevent) to determine the hit test. If the mouse event is over the artist, return hit=True, where props is a dictionary of properties you want added to the PickEvent attributes.
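The callable contract can be sketched without matplotlib; the artist attribute x, the tolerance, and the SimpleNamespace stand-ins below are illustrative, not part of the real Artist/MouseEvent API.

```python
from types import SimpleNamespace

def line_picker(artist, mouseevent, xtol=5.0):
    """Hit when the cursor's x data coordinate is within xtol of the artist."""
    if mouseevent.xdata is None:  # event occurred outside the axes
        return False, {}
    hit = abs(mouseevent.xdata - artist.x) < xtol
    # props is merged into the attributes of the generated PickEvent
    props = {'picked_x': artist.x} if hit else {}
    return hit, props

# Stand-ins for a real Artist and MouseEvent:
artist = SimpleNamespace(x=10.0)
hit, props = line_picker(artist, SimpleNamespace(xdata=12.0))
```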
doc_2943
Draw a Path instance using the given affine transform.
doc_2944
Return an image showing the differences between two images. New in version 0.16. Parameters image1, image22-D array Images to process, must be of the same shape. methodstring, optional Method used for the comparison. Valid values are {‘diff’, ‘blend’, ‘checkerboard’}. Details are provided in the note section. n_tilestuple, optional Used only for the checkerboard method. Specifies the number of tiles (row, column) to divide the image. Returns comparison2-D array Image showing the differences. Notes 'diff' computes the absolute difference between the two images. 'blend' computes the mean value. 'checkerboard' makes tiles of dimension n_tiles that display alternatively the first and the second image.
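The 'diff' and 'blend' methods described in the notes are simple enough to sketch with NumPy; this is an illustrative reimplementation, not the skimage source.

```python
import numpy as np

def compare_images_sketch(image1, image2, method='diff'):
    """Sketch of the 'diff' and 'blend' comparison methods."""
    img1 = np.asarray(image1, dtype=np.float64)
    img2 = np.asarray(image2, dtype=np.float64)
    if method == 'diff':
        return np.abs(img1 - img2)      # absolute difference
    if method == 'blend':
        return (img1 + img2) / 2.0      # mean value
    raise ValueError(f'unknown method: {method!r}')

a = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([[1.0, 1.0], [0.0, 0.0]])
```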
doc_2945
Returns an aware datetime that represents the same point in time as value in timezone, value being a naive datetime. If timezone is set to None, it defaults to the current time zone. Deprecated since version 4.0: When using pytz, the pytz.AmbiguousTimeError exception is raised if you try to make value aware during a DST transition where the same time occurs twice (when reverting from DST). Setting is_dst to True or False will avoid the exception by choosing if the time is pre-transition or post-transition respectively. When using pytz, the pytz.NonExistentTimeError exception is raised if you try to make value aware during a DST transition such that the time never occurred. For example, if the 2:00 hour is skipped during a DST transition, trying to make 2:30 aware in that time zone will raise an exception. To avoid that you can use is_dst to specify how make_aware() should interpret such a nonexistent time. If is_dst=True then the above time would be interpreted as 2:30 DST time (equivalent to 1:30 local time). Conversely, if is_dst=False the time would be interpreted as 2:30 standard time (equivalent to 3:30 local time). The is_dst parameter has no effect when using non-pytz timezone implementations. The is_dst parameter is deprecated and will be removed in Django 5.0.
doc_2946
Returns a list of strings ready for printing. Each string in the resulting list corresponds to a single frame from the stack. Each string ends in a newline; the strings may contain internal newlines as well, for those items with source text lines. For long sequences of the same frame and line, the first few repetitions are shown, followed by a summary line stating the exact number of further repetitions. Changed in version 3.6: Long sequences of repeated frames are now abbreviated.
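The abbreviation of long runs of identical frames can be observed via traceback.format_exc(), which uses the same formatting machinery:

```python
import traceback

def recurse(n):
    # Unbounded recursion: every frame is identical, so the formatted
    # traceback collapses the run into a "[Previous line repeated ...]"
    # summary line instead of printing every frame.
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError:
    formatted = traceback.format_exc()
```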
doc_2947
See Migration guide for more details. tf.compat.v1.raw_ops.QuantizedResizeBilinear tf.raw_ops.QuantizedResizeBilinear( images, size, min, max, align_corners=False, half_pixel_centers=False, name=None ) Input images and output images must be quantized types. Args images A Tensor. Must be one of the following types: quint8, qint32, float32. 4-D with shape [batch, height, width, channels]. size A 1-D int32 Tensor of 2 elements: new_height, new_width. The new size for the images. min A Tensor of type float32. max A Tensor of type float32. align_corners An optional bool. Defaults to False. If true, the centers of the 4 corner pixels of the input and output tensors are aligned, preserving the values at the corner pixels. Defaults to false. half_pixel_centers An optional bool. Defaults to False. name A name for the operation (optional). Returns A tuple of Tensor objects (resized_images, out_min, out_max). resized_images A Tensor. Has the same type as images. out_min A Tensor of type float32. out_max A Tensor of type float32.
doc_2948
Return dict of savefig file formats supported by this backend.
doc_2949
Called when the incoming data stream matches the termination condition set by set_terminator(). The default method, which must be overridden, raises a NotImplementedError exception. The buffered input data should be available via an instance attribute.
doc_2950
set the overlay pixel data display((y, u, v)) -> None display() -> None Display the YUV data in SDL's overlay planes. The y, u, and v arguments are strings of binary data. The data must be in the correct format used to create the Overlay. If no argument is passed in, the Overlay will simply be redrawn with the current data. This can be useful when the Overlay is not really hardware accelerated. The strings are not validated, and improperly sized strings could crash the program.
doc_2951
Example: fav_color = request.session.get('fav_color', 'red')
doc_2952
turtle.pos() Return the turtle’s current location (x,y) (as a Vec2D vector). >>> turtle.pos() (440.00,-0.00)
doc_2953
Apply map_fn to all of the function names. This can be used to regularize function names (e.g. stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc.
doc_2954
Set storage-indexed locations to corresponding values. Sets self._data.flat[n] = values[n] for each n in indices. If values is shorter than indices then it will repeat. If values has some masked values, the initial mask is updated in consequence, else the corresponding values are unmasked. Parameters indices1-D array_like Target indices, interpreted as integers. valuesarray_like Values to place in self._data copy at target indices. mode{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. ‘raise’ : raise an error. ‘wrap’ : wrap around. ‘clip’ : clip to the range. Notes values can be a scalar or length 1 array. Examples >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.put([0,4,8],[10,20,30]) >>> x masked_array( data=[[10, --, 3], [--, 20, --], [7, --, 30]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.put(4,999) >>> x masked_array( data=[[10, --, 3], [--, 999, --], [7, --, 30]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999)
doc_2955
Register a custom accessor on Series objects. Parameters name:str Name under which the accessor should be registered. A warning is issued if this name conflicts with a preexisting attribute. Returns callable A class decorator. See also register_dataframe_accessor Register a custom accessor on DataFrame objects. register_series_accessor Register a custom accessor on Series objects. register_index_accessor Register a custom accessor on Index objects. Notes When accessed, your accessor will be initialized with the pandas object the user is interacting with. So the signature must be def __init__(self, pandas_object): # noqa: E999 ... For consistency with pandas methods, you should raise an AttributeError if the data passed to your accessor has an incorrect dtype. >>> pd.Series(['a', 'b']).dt Traceback (most recent call last): ... AttributeError: Can only use .dt accessor with datetimelike values Examples In your library code: import pandas as pd @pd.api.extensions.register_dataframe_accessor("geo") class GeoAccessor: def __init__(self, pandas_obj): self._obj = pandas_obj @property def center(self): # return the geographic center point of this DataFrame lat = self._obj.latitude lon = self._obj.longitude return (float(lon.mean()), float(lat.mean())) def plot(self): # plot this array's data on a map, e.g., using Cartopy pass Back in an interactive IPython session: In [1]: ds = pd.DataFrame({"longitude": np.linspace(0, 10), ...: "latitude": np.linspace(0, 20)}) In [2]: ds.geo.center Out[2]: (5.0, 10.0) In [3]: ds.geo.plot() # plots data on a map
doc_2956
See torch.minimum()
doc_2957
All arguments are optional, and all except for m should be specified in keyword form. Test examples in docstrings in functions and classes reachable from module m (or module __main__ if m is not supplied or is None), starting with m.__doc__. Also test examples reachable from dict m.__test__, if it exists and is not None. m.__test__ maps names (strings) to functions, classes and strings; function and class docstrings are searched for examples; strings are searched directly, as if they were docstrings. Only docstrings attached to objects belonging to module m are searched. Return (failure_count, test_count). Optional argument name gives the name of the module; by default, or if None, m.__name__ is used. Optional argument exclude_empty defaults to false. If true, objects for which no doctests are found are excluded from consideration. The default is a backward compatibility hack, so that code still using doctest.master.summarize() in conjunction with testmod() continues to get output for objects with no tests. The exclude_empty argument to the newer DocTestFinder constructor defaults to true. Optional arguments extraglobs, verbose, report, optionflags, raise_on_error, and globs are the same as for function testfile() above, except that globs defaults to m.__dict__.
doc_2958
Fills the self tensor with elements sampled from the normal distribution parameterized by mean and std.
doc_2959
Leave-P-Out cross-validator Provides train/test indices to split data in train/test sets. This results in testing on all distinct samples of size p, while the remaining n - p samples form the training set in each iteration. Note: LeavePOut(p) is NOT equivalent to KFold(n_splits=n_samples // p) which creates non-overlapping test sets. Due to the high number of iterations which grows combinatorically with the number of samples this cross-validation method can be very costly. For large datasets one should favor KFold, StratifiedKFold or ShuffleSplit. Read more in the User Guide. Parameters pint Size of the test sets. Must be strictly less than the number of samples. Examples >>> import numpy as np >>> from sklearn.model_selection import LeavePOut >>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]) >>> y = np.array([1, 2, 3, 4]) >>> lpo = LeavePOut(2) >>> lpo.get_n_splits(X) 6 >>> print(lpo) LeavePOut(p=2) >>> for train_index, test_index in lpo.split(X): ... print("TRAIN:", train_index, "TEST:", test_index) ... X_train, X_test = X[train_index], X[test_index] ... y_train, y_test = y[train_index], y[test_index] TRAIN: [2 3] TEST: [0 1] TRAIN: [1 3] TEST: [0 2] TRAIN: [1 2] TEST: [0 3] TRAIN: [0 3] TEST: [1 2] TRAIN: [0 2] TEST: [1 3] TRAIN: [0 1] TEST: [2 3] Methods get_n_splits(X[, y, groups]) Returns the number of splitting iterations in the cross-validator split(X[, y, groups]) Generate indices to split data into training and test set. get_n_splits(X, y=None, groups=None) [source] Returns the number of splitting iterations in the cross-validator Parameters Xarray-like of shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. yobject Always ignored, exists for compatibility. groupsobject Always ignored, exists for compatibility. split(X, y=None, groups=None) [source] Generate indices to split data into training and test set. 
Parameters Xarray-like of shape (n_samples, n_features) Training data, where n_samples is the number of samples and n_features is the number of features. yarray-like of shape (n_samples,) The target variable for supervised learning problems. groupsarray-like of shape (n_samples,), default=None Group labels for the samples used while splitting the dataset into train/test set. Yields trainndarray The training set indices for that split. testndarray The testing set indices for that split.
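The split rule itself is just "every size-p combination of sample indices becomes a test set"; a pure-Python sketch (function name illustrative, not the sklearn implementation):

```python
from itertools import combinations

def leave_p_out_indices(n_samples, p):
    """Yield (train, test) index tuples, one per size-p test subset."""
    for test in combinations(range(n_samples), p):
        train = tuple(i for i in range(n_samples) if i not in test)
        yield train, test

# Mirrors the LeavePOut(2) example above with 4 samples: C(4, 2) = 6 splits.
splits = list(leave_p_out_indices(4, 2))
```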
doc_2960
Add a colorbar to a plot. Parameters mappable The matplotlib.cm.ScalarMappable (i.e., AxesImage, ContourSet, etc.) described by this colorbar. This argument is mandatory for the Figure.colorbar method but optional for the pyplot.colorbar function, which sets the default to the current image. Note that one can create a ScalarMappable "on-the-fly" to generate colorbars not attached to a previously drawn artist, e.g. fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax) caxAxes, optional Axes into which the colorbar will be drawn. axAxes, list of Axes, optional One or more parent axes from which space for a new colorbar axes will be stolen, if cax is None. This has no effect if cax is set. use_gridspecbool, optional If cax is None, a new cax is created as an instance of Axes. If ax is an instance of Subplot and use_gridspec is True, cax is created as an instance of Subplot using the gridspec module. Returns colorbarColorbar Notes Additional keyword arguments are of two kinds: axes properties: locationNone or {'left', 'right', 'top', 'bottom'} The location, relative to the parent axes, where the colorbar axes is created. It also determines the orientation of the colorbar (colorbars on the left and right are vertical, colorbars at the top and bottom are horizontal). If None, the location will come from the orientation if it is set (vertical colorbars on the right, horizontal ones at the bottom), or default to 'right' if orientation is unset. orientationNone or {'vertical', 'horizontal'} The orientation of the colorbar. It is preferable to set the location of the colorbar, as that also determines the orientation; passing incompatible values for location and orientation raises an exception. fractionfloat, default: 0.15 Fraction of original axes to use for colorbar. shrinkfloat, default: 1.0 Fraction by which to multiply the size of the colorbar. aspectfloat, default: 20 Ratio of long to short dimensions. 
padfloat, default: 0.05 if vertical, 0.15 if horizontal Fraction of original axes between colorbar and new image axes. anchor(float, float), optional The anchor point of the colorbar axes. Defaults to (0.0, 0.5) if vertical; (0.5, 1.0) if horizontal. panchor(float, float), or False, optional The anchor point of the colorbar parent axes. If False, the parent axes' anchor will be unchanged. Defaults to (1.0, 0.5) if vertical; (0.5, 0.0) if horizontal. colorbar properties: Property Description extend {'neither', 'both', 'min', 'max'} If not 'neither', make pointed end(s) for out-of- range values. These are set for a given colormap using the colormap set_under and set_over methods. extendfrac {None, 'auto', length, lengths} If set to None, both the minimum and maximum triangular colorbar extensions with have a length of 5% of the interior colorbar length (this is the default setting). If set to 'auto', makes the triangular colorbar extensions the same lengths as the interior boxes (when spacing is set to 'uniform') or the same lengths as the respective adjacent interior boxes (when spacing is set to 'proportional'). If a scalar, indicates the length of both the minimum and maximum triangular colorbar extensions as a fraction of the interior colorbar length. A two-element sequence of fractions may also be given, indicating the lengths of the minimum and maximum colorbar extensions respectively as a fraction of the interior colorbar length. extendrect bool If False the minimum and maximum colorbar extensions will be triangular (the default). If True the extensions will be rectangular. spacing {'uniform', 'proportional'} Uniform spacing gives each discrete color the same space; proportional makes the space proportional to the data interval. ticks None or list of ticks or Locator If None, ticks are determined automatically from the input. format None or str or Formatter If None, ScalarFormatter is used. If a format string is given, e.g., '%.3f', that is used. 
An alternative Formatter may be given instead. drawedges bool Whether to draw lines at color boundaries. label str The label on the colorbar's long axis. The following will probably be useful only in the context of indexed colors (that is, when the mappable has norm=NoNorm()), or other unusual circumstances. Property Description boundaries None or a sequence values None or a sequence which must be of length 1 less than the sequence of boundaries. For each region delimited by adjacent entries in boundaries, the color mapped to the corresponding value in values will be used. If mappable is a ContourSet, its extend kwarg is included automatically. The shrink kwarg provides a simple way to scale the colorbar with respect to the axes. Note that if cax is specified, it determines the size of the colorbar and the shrink and aspect kwargs are ignored. For more precise control, you can manually specify the positions of the axes objects in which the mappable and the colorbar are drawn. In this case, do not use any of the axes properties kwargs. It is known that some vector graphics viewers (svg and pdf) render white gaps between segments of the colorbar. This is due to bugs in the viewers, not Matplotlib. As a workaround, the colorbar can be rendered with overlapping segments: cbar = colorbar() cbar.solids.set_edgecolor("face") draw() However this has negative consequences in other circumstances, e.g. with semi-transparent images (alpha < 1) and colorbar extensions; therefore, this workaround is not used by default (see issue #1188).
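A minimal end-to-end sketch of the common case (variable names are illustrative; the Agg backend is selected only so the example runs headlessly):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
im = ax.imshow(np.arange(16).reshape(4, 4))
# `shrink` and `pad` are the axes-property kwargs described above.
cbar = fig.colorbar(im, ax=ax, shrink=0.8, pad=0.05)
cbar.set_label("intensity")
```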
doc_2961
Raise a Hermite series to a power. Returns the Hermite series c raised to the power pow. The argument c is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series P_0 + 2*P_1 + 3*P_2. Parameters carray_like 1-D array of Hermite series coefficients ordered from low to high. powinteger Power to which the series will be raised maxpowerinteger, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16 Returns coefndarray Hermite series of power. See also hermeadd, hermesub, hermemulx, hermemul, hermediv Examples >>> from numpy.polynomial.hermite_e import hermepow >>> hermepow([1, 2, 3], 2) array([23., 28., 46., 12., 9.])
doc_2962
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
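As a sanity check of the definition above, here is a pure-Python sketch of the same formula (the function name `r2_score_simple` is ours, not part of scikit-learn):

```python
def r2_score_simple(y_true, y_pred):
    """Coefficient of determination, R^2 = 1 - u/v."""
    mean = sum(y_true) / len(y_true)
    u = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    v = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1 - u / v

# Perfect prediction scores 1.0; always predicting the mean scores 0.0.
assert r2_score_simple([1, 2, 3], [1, 2, 3]) == 1.0
assert r2_score_simple([1, 2, 3], [2, 2, 2]) == 0.0
```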
doc_2963
Read at most size bytes from the chunk (less if the read hits the end of the chunk before obtaining size bytes). If the size argument is negative or omitted, read all data until the end of the chunk. An empty bytes object is returned when the end of the chunk is encountered immediately.
doc_2964
Set the width of the rectangle.
doc_2965
Return the decision path in the tree. New in version 0.18. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Internally, it will be converted to dtype=np.float32 and if a sparse matrix is provided to a sparse csr_matrix. check_inputbool, default=True Allow to bypass several input checking. Don’t use this parameter unless you know what you do. Returns indicatorsparse matrix of shape (n_samples, n_nodes) Return a node indicator CSR matrix where non-zero elements indicate that the sample passes through the corresponding nodes.
doc_2966
Sets the current object instance (self.object).
doc_2967
Called when this tool gets used. This method is called by ToolManager.trigger_tool. Parameters eventEvent The canvas event that caused this tool to be called. senderobject Object that requested the tool to be triggered. dataobject Extra data.
doc_2968
TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify this layer or implement it in a different way in their application. Parameters d_model – the number of expected features in the input (required). nhead – the number of heads in the multiheadattention models (required). dim_feedforward – the dimension of the feedforward network model (default=2048). dropout – the dropout value (default=0.1). activation – the activation function of the intermediate layer, relu or gelu (default=relu). Examples:: >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8) >>> src = torch.rand(10, 32, 512) >>> out = encoder_layer(src) forward(src, src_mask=None, src_key_padding_mask=None) [source] Pass the input through the encoder layer. Parameters src – the sequence to the encoder layer (required). src_mask – the mask for the src sequence (optional). src_key_padding_mask – the mask for the src keys per batch (optional). Shape: see the docs in Transformer class.
doc_2969
Create a circle level set with binary values. Parameters image_shapetuple of positive integers Shape of the image centertuple of positive integers, optional Coordinates of the center of the circle given in (row, column). If not given, it defaults to the center of the image. radiusfloat, optional Radius of the circle. If not given, it is set to 75% of the smallest image dimension. Returns outarray with shape image_shape Binary level set of the circle with the given radius and center. Warns Deprecated: New in version 0.17: This function is deprecated and will be removed in scikit-image 0.19. Please use the function named disk_level_set instead. See also checkerboard_level_set
doc_2970
See class documentation.
doc_2971
Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities to be used for drawing the binary random number. Hence, all values in input have to be in the range: \(0 \leq \text{input}_i \leq 1\). The \(i^{th}\) element of the output tensor will draw a value \(1\) according to the \(i^{th}\) probability value given in input. \(\text{out}_{i} \sim \mathrm{Bernoulli}(p = \text{input}_{i})\) The returned out tensor only has values 0 or 1 and is of the same shape as input. out can have integral dtype, but input must have floating point dtype. Parameters input (Tensor) – the input tensor of probability values for the Bernoulli distribution Keyword Arguments generator (torch.Generator, optional) – a pseudorandom number generator for sampling out (Tensor, optional) – the output tensor. Example: >>> a = torch.empty(3, 3).uniform_(0, 1) # generate a uniform random matrix with range [0, 1] >>> a tensor([[ 0.1737, 0.0950, 0.3609], [ 0.7148, 0.0289, 0.2676], [ 0.9456, 0.8937, 0.7202]]) >>> torch.bernoulli(a) tensor([[ 1., 0., 0.], [ 0., 0., 0.], [ 1., 1., 1.]]) >>> a = torch.ones(3, 3) # probability of drawing "1" is 1 >>> torch.bernoulli(a) tensor([[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) >>> a = torch.zeros(3, 3) # probability of drawing "1" is 0 >>> torch.bernoulli(a) tensor([[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]])
doc_2972
See Migration guide for more details. tf.compat.v1.raw_ops.SparseMatMul tf.raw_ops.SparseMatMul( a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None ) The inputs must be two-dimensional matrices and the inner dimension of "a" must match the outer dimension of "b". Both "a" and "b" must be Tensors not SparseTensors. This op is optimized for the case where at least one of "a" or "b" is sparse, in the sense that they have a large proportion of zero values. The breakeven for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix. The gradient computation of this operation will only take advantage of sparsity in the input gradient when that gradient comes from a Relu. Args a A Tensor. Must be one of the following types: float32, bfloat16. b A Tensor. Must be one of the following types: float32, bfloat16. transpose_a An optional bool. Defaults to False. transpose_b An optional bool. Defaults to False. a_is_sparse An optional bool. Defaults to False. b_is_sparse An optional bool. Defaults to False. name A name for the operation (optional). Returns A Tensor of type float32.
doc_2973
Process a single command keystroke. Here are the supported special keystrokes: Keystroke Action Control-A Go to left edge of window. Control-B Cursor left, wrapping to previous line if appropriate. Control-D Delete character under cursor. Control-E Go to right edge (stripspaces off) or end of line (stripspaces on). Control-F Cursor right, wrapping to next line when appropriate. Control-G Terminate, returning the window contents. Control-H Delete character backward. Control-J Terminate if the window is 1 line, otherwise insert newline. Control-K If line is blank, delete it, otherwise clear to end of line. Control-L Refresh screen. Control-N Cursor down; move down one line. Control-O Insert a blank line at cursor location. Control-P Cursor up; move up one line. Move operations do nothing if the cursor is at an edge where the movement is not possible. The following synonyms are supported where possible: Constant Keystroke KEY_LEFT Control-B KEY_RIGHT Control-F KEY_UP Control-P KEY_DOWN Control-N KEY_BACKSPACE Control-h All other keystrokes are treated as a command to insert the given character and move right (with line wrapping).
doc_2974
Checks whether there is a global language file for the given language code (e.g. ‘fr’, ‘pt_BR’). This is used to decide whether a user-provided language is available.
doc_2975
closes a midi stream, flushing any pending buffers. close() -> None PortMidi attempts to close open streams when the application exits. Note This is particularly difficult under Windows.
doc_2976
bytearray.lower() Return a copy of the sequence with all the uppercase ASCII characters converted to their corresponding lowercase counterpart. For example: >>> b'Hello World'.lower() b'hello world' Lowercase ASCII characters are those byte values in the sequence b'abcdefghijklmnopqrstuvwxyz'. Uppercase ASCII characters are those byte values in the sequence b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'. Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made.
doc_2977
Apply this transformation on the given array of values. Parameters valuesarray The input values as NumPy array of length input_dims or shape (N x input_dims). Returns array The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input.
doc_2978
A non-recursive lock object: a close analog of threading.Lock. Once a process or thread has acquired a lock, subsequent attempts to acquire it from any process or thread will block until it is released; any process or thread may release it. The concepts and behaviors of threading.Lock as it applies to threads are replicated here in multiprocessing.Lock as it applies to either processes or threads, except as noted. Note that Lock is actually a factory function which returns an instance of multiprocessing.synchronize.Lock initialized with a default context. Lock supports the context manager protocol and thus may be used in with statements. acquire(block=True, timeout=None) Acquire a lock, blocking or non-blocking. With the block argument set to True (the default), the method call will block until the lock is in an unlocked state, then set it to locked and return True. Note that the name of this first argument differs from that in threading.Lock.acquire(). With the block argument set to False, the method call does not block. If the lock is currently in a locked state, return False; otherwise set the lock to a locked state and return True. When invoked with a positive, floating-point value for timeout, block for at most the number of seconds specified by timeout as long as the lock can not be acquired. Invocations with a negative value for timeout are equivalent to a timeout of zero. Invocations with a timeout value of None (the default) set the timeout period to infinite. Note that the treatment of negative or None values for timeout differs from the implemented behavior in threading.Lock.acquire(). The timeout argument has no practical implications if the block argument is set to False and is thus ignored. Returns True if the lock has been acquired or False if the timeout period has elapsed. release() Release a lock. This can be called from any process or thread, not only the process or thread which originally acquired the lock. 
Behavior is the same as in threading.Lock.release() except that when invoked on an unlocked lock, a ValueError is raised.
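A short single-process sketch of the acquire semantics described above (everything runs in one thread, so it is precisely the non-recursive behavior that makes the second acquire fail):

```python
from multiprocessing import Lock

lock = Lock()

# First acquisition succeeds immediately.
assert lock.acquire(block=True) is True

# The lock is non-recursive: a non-blocking re-acquire returns False
# instead of deadlocking.
assert lock.acquire(block=False) is False

# A timed attempt gives up after roughly 0.1 seconds and returns False.
assert lock.acquire(block=True, timeout=0.1) is False

# Any process or thread may release it.
lock.release()
```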
doc_2979
Return the topmost SubplotSpec instance associated with the subplot.
doc_2980
The epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message. As with the preamble, if there is no epilogue text this attribute will be None.
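A small sketch of how the epilogue behaves on a multipart message (the trailer text is arbitrary):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg.attach(MIMEText("body text"))

# With no epilogue set, the attribute is None.
assert msg.epilogue is None

# Assigned text is emitted after the final boundary when serialized.
msg.epilogue = "trailer after the last boundary\n"
assert msg.as_string().rstrip().endswith("trailer after the last boundary")
```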
doc_2981
Base class for creating enumerated constants that can be combined using the bitwise operators without losing their IntFlag membership. IntFlag members are also subclasses of int.
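A minimal sketch (the Perm class and its values are invented for illustration):

```python
from enum import IntFlag

class Perm(IntFlag):  # hypothetical permission flags
    R = 4
    W = 2
    X = 1

rw = Perm.R | Perm.W

# Combining members stays inside the enum, and members are real ints.
assert isinstance(rw, Perm) and isinstance(rw, int)
assert rw == 6
assert Perm.W in rw
```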
doc_2982
Return a string object containing two hexadecimal digits for each byte in the instance. >>> bytearray(b'\xf0\xf1\xf2').hex() 'f0f1f2' New in version 3.5. Changed in version 3.8: Similar to bytes.hex(), bytearray.hex() now supports optional sep and bytes_per_sep parameters to insert separators between bytes in the hex output.
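The sep and bytes_per_sep parameters mentioned in the changelog note can be exercised like this (requires Python 3.8+):

```python
value = bytearray(b'\xf0\xf1\xf2')

assert value.hex() == 'f0f1f2'
# A single-character separator is inserted between every byte...
assert value.hex(':') == 'f0:f1:f2'
# ...and bytes_per_sep groups bytes, counting from the right.
assert value.hex('_', 2) == 'f0_f1f2'
```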
doc_2983
Bases: matplotlib.scale.LogScale Provide an arbitrary scale with a user-supplied function for the axis, then plot on a logarithmic axis. Parameters axismatplotlib.axis.Axis The axis for the scale. functions(callable, callable) two-tuple of the forward and inverse functions for the scale. The forward function must be monotonic. Both functions must have the signature: def forward(values: array-like) -> array-like basefloat, default: 10 Logarithmic base of the scale. propertybase get_transform()[source] Return the Transform associated with this scale. name='functionlog'
doc_2984
The maximum size (in bytes) of the swap space that may be reserved or used by all of this user id’s processes. This limit is enforced only if bit 1 of the vm.overcommit sysctl is set. Please see tuning(7) for a complete description of this sysctl. Availability: FreeBSD 9 or later. New in version 3.4.
doc_2985
Returns a compression object, to be used for compressing data streams that won’t fit into memory at once. level is the compression level – an integer from 0 to 9 or -1. A value of 1 (Z_BEST_SPEED) is fastest and produces the least compression, while a value of 9 (Z_BEST_COMPRESSION) is slowest and produces the most. 0 (Z_NO_COMPRESSION) is no compression. The default value is -1 (Z_DEFAULT_COMPRESSION). Z_DEFAULT_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). method is the compression algorithm. Currently, the only supported value is DEFLATED. The wbits argument controls the size of the history buffer (or the “window size”) used when compressing data, and whether a header and trailer is included in the output. It can take several ranges of values, defaulting to 15 (MAX_WBITS): +9 to +15: The base-two logarithm of the window size, which therefore ranges between 512 and 32768. Larger values produce better compression at the expense of greater memory usage. The resulting output will include a zlib-specific header and trailer. −9 to −15: Uses the absolute value of wbits as the window size logarithm, while producing a raw output stream with no header or trailing checksum. +25 to +31 = 16 + (9 to 15): Uses the low 4 bits of the value as the window size logarithm, while including a basic gzip header and trailing checksum in the output. The memLevel argument controls the amount of memory used for the internal compression state. Valid values range from 1 to 9. Higher values use more memory, but are faster and produce smaller output. strategy is used to tune the compression algorithm. Possible values are Z_DEFAULT_STRATEGY, Z_FILTERED, Z_HUFFMAN_ONLY, Z_RLE (zlib 1.2.0.1) and Z_FIXED (zlib 1.2.2.2). zdict is a predefined compression dictionary. This is a sequence of bytes (such as a bytes object) containing subsequences that are expected to occur frequently in the data that is to be compressed. 
Those subsequences that are expected to be most common should come at the end of the dictionary. Changed in version 3.3: Added the zdict parameter and keyword argument support.
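A short sketch tying several of these parameters together — a raw deflate stream produced with a negative wbits value, then decompressed with a matching decompressobj:

```python
import zlib

payload = b"hello world " * 200

# Negative wbits => raw deflate stream, no zlib header or checksum.
comp = zlib.compressobj(level=9, wbits=-15)
compressed = comp.compress(payload) + comp.flush()
assert len(compressed) < len(payload)

# Decompression must use a matching wbits value.
decomp = zlib.decompressobj(wbits=-15)
assert decomp.decompress(compressed) == payload
```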
doc_2986
Alias for get_markerfacecolor.
doc_2987
bytearray.strip([chars]) Return a copy of the sequence with specified leading and trailing bytes removed. The chars argument is a binary sequence specifying the set of byte values to be removed - the name refers to the fact this method is usually used with ASCII characters. If omitted or None, the chars argument defaults to removing ASCII whitespace. The chars argument is not a prefix or suffix; rather, all combinations of its values are stripped: >>> b' spacious '.strip() b'spacious' >>> b'www.example.com'.strip(b'cmowz.') b'example' The binary sequence of byte values to remove may be any bytes-like object. Note The bytearray version of this method does not operate in place - it always produces a new object, even if no changes were made.
doc_2988
Returns the east asian width assigned to the character chr as a string.
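For example (the sample characters are arbitrary):

```python
import unicodedata

assert unicodedata.east_asian_width('A') == 'Na'       # narrow
assert unicodedata.east_asian_width('漢') == 'W'        # wide
assert unicodedata.east_asian_width('\uff21') == 'F'   # fullwidth 'A'
```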
doc_2989
Return indices that are non-zero in the flattened version of a. This is equivalent to np.nonzero(np.ravel(a))[0]. Parameters aarray_like Input data. Returns resndarray Output array, containing the indices of the elements of a.ravel() that are non-zero. See also nonzero Return the indices of the non-zero elements of the input array. ravel Return a 1-D array containing the elements of the input array. Examples >>> x = np.arange(-2, 3) >>> x array([-2, -1, 0, 1, 2]) >>> np.flatnonzero(x) array([0, 1, 3, 4]) Use the indices of the non-zero elements as an index array to extract these elements: >>> x.ravel()[np.flatnonzero(x)] array([-2, -1, 1, 2])
doc_2990
See Migration guide for more details. tf.compat.v1.raw_ops.ZerosLike tf.raw_ops.ZerosLike( x, name=None ) Args x A Tensor. a tensor of type T. name A name for the operation (optional). Returns A Tensor. Has the same type as x.
doc_2991
display various pixelarray generated effects pixelarray.main() -> None Display various pixelarray generated effects.
doc_2992
def make_variables(k, initializer): return (tf.Variable(initializer(shape=[k], dtype=tf.float32)), tf.Variable(initializer(shape=[k, k], dtype=tf.float32))) v1, v2 = make_variables(3, tf.zeros_initializer()) v1 <tf.Variable ... shape=(3,) ... numpy=array([0., 0., 0.], dtype=float32)> v2 <tf.Variable ... shape=(3, 3) ... numpy= array([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], dtype=float32)> make_variables(4, tf.random_uniform_initializer(minval=-1., maxval=1.)) (<tf.Variable...shape=(4,) dtype=float32...>, <tf.Variable...shape=(4, 4) ... Methods from_config View source @classmethod from_config( config ) Instantiates an initializer from a configuration dictionary. Example: initializer = RandomUniform(-1, 1) config = initializer.get_config() initializer = RandomUniform.from_config(config) Args config A Python dictionary. It will typically be the output of get_config. Returns An Initializer instance. get_config View source get_config() Returns the configuration of the initializer as a JSON-serializable dict. Returns A JSON-serializable Python dict. __call__ View source __call__( shape, dtype=tf.dtypes.float32, **kwargs ) Returns a tensor object initialized as specified by the initializer. Args shape Shape of the tensor. dtype Optional dtype of the tensor. Only numeric or boolean dtypes are supported. **kwargs Additional keyword arguments. Raises ValuesError If the dtype is not numeric or boolean.
doc_2993
Remove artists that are connected to the image viewer.
doc_2994
@abc.abstractmethod @tf_contextlib.contextmanager as_default( step=None ) Returns a context manager that enables summary writing. For convenience, if step is not None, this function also sets a default value for the step parameter used in summary-writing functions elsewhere in the API so that it need not be explicitly passed in every such invocation. The value can be a constant or a variable. Note: when setting step in a @tf.function, the step value will be captured at the time the function is traced, so changes to the step outside the function will not be reflected inside the function unless using a tf.Variable step. For example, step can be used as: with writer_a.as_default(step=10): tf.summary.scalar(tag, value) # Logged to writer_a with step 10 with writer_b.as_default(step=20): tf.summary.scalar(tag, value) # Logged to writer_b with step 20 tf.summary.scalar(tag, value) # Logged to writer_a with step 10 Args step An int64-castable default step value, or None. When not None, the current step is captured, replaced by a given one, and the original one is restored when the context manager exits. When None, the current step is not modified (and not restored when the context manager exits). close View source close() Flushes and closes the summary writer. flush View source flush() Flushes any buffered data. init View source init() Initializes the summary writer. set_as_default View source @abc.abstractmethod set_as_default( step=None ) Enables this summary writer for the current thread. For convenience, if step is not None, this function also sets a default value for the step parameter used in summary-writing functions elsewhere in the API so that it need not be explicitly passed in every such invocation. The value can be a constant or a variable. 
Note: when setting step in a @tf.function, the step value will be captured at the time the function is traced, so changes to the step outside the function will not be reflected inside the function unless using a tf.Variable step. Args step An int64-castable default step value, or None. When not None, the current step is modified to the given value. When None, the current step is not modified.
doc_2995
Add a new action at the beginning. See append() for explanations of the arguments.
doc_2996
Returns a tensor with the same size as input filled with fill_value. torch.full_like(input, fill_value) is equivalent to torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device). Parameters input (Tensor) – the size of input will determine size of the output tensor. fill_value – the number to fill the output tensor with. Keyword Arguments dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input. layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input. device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input. requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False. memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
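A small sketch of the size/dtype inheritance described above (requires PyTorch):

```python
import torch

x = torch.arange(6, dtype=torch.float64).reshape(2, 3)

# The result inherits size, dtype, layout and device from `x`.
y = torch.full_like(x, 7.5)
assert y.shape == x.shape and y.dtype == x.dtype
assert bool((y == 7.5).all())

# Individual properties can still be overridden explicitly.
z = torch.full_like(x, 3, dtype=torch.int64)
assert z.dtype == torch.int64
```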
doc_2997
A Grouper allows the user to specify a groupby instruction for an object. This specification will select a column via the key parameter, or if the level and/or axis parameters are given, a level of the index of the target object. If axis and/or level are passed as keywords to both Grouper and groupby, the values passed to Grouper take precedence. Parameters key:str, defaults to None Groupby key, which selects the grouping column of the target. level:name/number, defaults to None The level for the target index. freq:str / frequency object, defaults to None This will groupby the specified frequency if the target selection (via key or level) is a datetime-like object. For full specification of available frequencies, please see here. axis:str, int, defaults to 0 Number/name of the axis. sort:bool, default to False Whether to sort the resulting labels. closed:{‘left’ or ‘right’} Closed end of interval. Only when freq parameter is passed. label:{‘left’ or ‘right’} Interval boundary to use for labeling. Only when freq parameter is passed. convention:{‘start’, ‘end’, ‘e’, ‘s’} If grouper is PeriodIndex and freq parameter is passed. base:int, default 0 Only when freq parameter is passed. For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for ‘5min’ frequency, base could range from 0 through 4. Defaults to 0. Deprecated since version 1.1.0: The new arguments that you should use are ‘offset’ or ‘origin’. loffset:str, DateOffset, timedelta object Only when freq parameter is passed. Deprecated since version 1.1.0: loffset is only working for .resample(...) and not for Grouper (GH28302). However, loffset is also deprecated for .resample(...) See: DataFrame.resample origin:Timestamp or str, default ‘start_day’ The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. 
If string, must be one of the following: ‘epoch’: origin is 1970-01-01 ‘start’: origin is the first value of the timeseries ‘start_day’: origin is the first day at midnight of the timeseries New in version 1.1.0. ‘end’: origin is the last value of the timeseries ‘end_day’: origin is the ceiling midnight of the last day New in version 1.3.0. offset:Timedelta or str, default is None An offset timedelta added to the origin. New in version 1.1.0. dropna:bool, default True If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups. New in version 1.2.0. Returns A specification for a groupby instruction Examples Syntactic sugar for df.groupby('A') >>> df = pd.DataFrame( ... { ... "Animal": ["Falcon", "Parrot", "Falcon", "Falcon", "Parrot"], ... "Speed": [100, 5, 200, 300, 15], ... } ... ) >>> df Animal Speed 0 Falcon 100 1 Parrot 5 2 Falcon 200 3 Falcon 300 4 Parrot 15 >>> df.groupby(pd.Grouper(key="Animal")).mean() Speed Animal Falcon 200.0 Parrot 10.0 Specify a resample operation on the column ‘Publish date’ >>> df = pd.DataFrame( ... { ... "Publish date": [ ... pd.Timestamp("2000-01-02"), ... pd.Timestamp("2000-01-02"), ... pd.Timestamp("2000-01-09"), ... pd.Timestamp("2000-01-16") ... ], ... "ID": [0, 1, 2, 3], ... "Price": [10, 20, 30, 40] ... } ... 
) >>> df Publish date ID Price 0 2000-01-02 0 10 1 2000-01-02 1 20 2 2000-01-09 2 30 3 2000-01-16 3 40 >>> df.groupby(pd.Grouper(key="Publish date", freq="1W")).mean() ID Price Publish date 2000-01-02 0.5 15.0 2000-01-09 2.0 30.0 2000-01-16 3.0 40.0 If you want to adjust the start of the bins based on a fixed timestamp: >>> start, end = '2000-10-01 23:30:00', '2000-10-02 00:30:00' >>> rng = pd.date_range(start, end, freq='7min') >>> ts = pd.Series(np.arange(len(rng)) * 3, index=rng) >>> ts 2000-10-01 23:30:00 0 2000-10-01 23:37:00 3 2000-10-01 23:44:00 6 2000-10-01 23:51:00 9 2000-10-01 23:58:00 12 2000-10-02 00:05:00 15 2000-10-02 00:12:00 18 2000-10-02 00:19:00 21 2000-10-02 00:26:00 24 Freq: 7T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min')).sum() 2000-10-01 23:14:00 0 2000-10-01 23:31:00 9 2000-10-01 23:48:00 21 2000-10-02 00:05:00 54 2000-10-02 00:22:00 24 Freq: 17T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min', origin='epoch')).sum() 2000-10-01 23:18:00 0 2000-10-01 23:35:00 18 2000-10-01 23:52:00 27 2000-10-02 00:09:00 39 2000-10-02 00:26:00 24 Freq: 17T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min', origin='2000-01-01')).sum() 2000-10-01 23:24:00 3 2000-10-01 23:41:00 15 2000-10-01 23:58:00 45 2000-10-02 00:15:00 45 Freq: 17T, dtype: int64 If you want to adjust the start of the bins with an offset Timedelta, the two following lines are equivalent: >>> ts.groupby(pd.Grouper(freq='17min', origin='start')).sum() 2000-10-01 23:30:00 9 2000-10-01 23:47:00 21 2000-10-02 00:04:00 54 2000-10-02 00:21:00 24 Freq: 17T, dtype: int64 >>> ts.groupby(pd.Grouper(freq='17min', offset='23h30min')).sum() 2000-10-01 23:30:00 9 2000-10-01 23:47:00 21 2000-10-02 00:04:00 54 2000-10-02 00:21:00 24 Freq: 17T, dtype: int64 To replace the use of the deprecated base argument, you can now use offset, in this example it is equivalent to have base=2: >>> ts.groupby(pd.Grouper(freq='17min', offset='2min')).sum() 2000-10-01 23:16:00 0 2000-10-01 23:33:00 9 2000-10-01 
23:50:00 36 2000-10-02 00:07:00 39 2000-10-02 00:24:00 24 Freq: 17T, dtype: int64 Attributes ax groups
doc_2998
Draw a border around the edges of the window. Each parameter specifies the character to use for a specific part of the border; see the table below for more details. Note A 0 value for any parameter will cause the default character to be used for that parameter. Keyword parameters can not be used. The defaults are listed in this table: Parameter Description Default value ls Left side ACS_VLINE rs Right side ACS_VLINE ts Top ACS_HLINE bs Bottom ACS_HLINE tl Upper-left corner ACS_ULCORNER tr Upper-right corner ACS_URCORNER bl Bottom-left corner ACS_LLCORNER br Bottom-right corner ACS_LRCORNER
doc_2999
Bind bye() method to mouse clicks on the Screen. If the value “using_IDLE” in the configuration dictionary is False (default value), also enter mainloop. Remark: If IDLE with the -n switch (no subprocess) is used, this value should be set to True in turtle.cfg. In this case IDLE’s own mainloop is active also for the client script.