_id | text | title
|---|---|---|
doc_500 | The original exception that caused this 500 error. Can be used by frameworks to provide context when handling unexpected errors. | |
doc_501 |
Reconstruct the image from all of its patches. Patches are assumed to overlap and the image is constructed by filling in the patches from left to right, top to bottom, averaging the overlapping regions. Read more in the User Guide. Parameters
patches : ndarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The complete set of patches. If the patches contain colour information, channels are indexed along the last dimension: RGB patches would have n_channels=3.
image_size : tuple of int (image_height, image_width) or (image_height, image_width, n_channels)
The size of the image that will be reconstructed. Returns
image : ndarray of shape image_size
The reconstructed image. | |
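A minimal round-trip sketch, assuming this is sklearn.feature_extraction.image.reconstruct_from_patches_2d (the image and patch sizes are illustrative):
import numpy as np
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

image = np.arange(16.0).reshape(4, 4)
patches = extract_patches_2d(image, (2, 2))            # overlapping 2x2 patches
reconstructed = reconstruct_from_patches_2d(patches, (4, 4))
assert np.allclose(image, reconstructed)               # averaging the overlaps recovers the image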
doc_502 | See Migration guide for more details. tf.compat.v1.raw_ops.ApplyCenteredRMSProp
tf.raw_ops.ApplyCenteredRMSProp(
var, mg, ms, mom, lr, rho, momentum, epsilon, grad, use_locking=False, name=None
)
The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory. Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
mean_grad = decay * mean_grad + (1-decay) * gradient
Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)
mg <- rho * mg_{t-1} + (1-rho) * grad
ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom
Args
var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
mg A mutable Tensor. Must have the same type as var. Should be from a Variable().
ms A mutable Tensor. Must have the same type as var. Should be from a Variable().
mom A mutable Tensor. Must have the same type as var. Should be from a Variable().
lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
rho A Tensor. Must have the same type as var. Decay rate. Must be a scalar.
momentum A Tensor. Must have the same type as var. Momentum Scale. Must be a scalar.
epsilon A Tensor. Must have the same type as var. Ridge term. Must be a scalar.
grad A Tensor. Must have the same type as var. The gradient.
use_locking An optional bool. Defaults to False. If True, updating of the var, mg, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as var. | |
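A plain NumPy sketch of the dense update rule above, for orientation only (this is not the TF op itself; the names mirror the equations):
import numpy as np

def apply_centered_rms_prop(var, mg, ms, mom, lr, rho, momentum, epsilon, grad):
    # mg <- rho * mg_{t-1} + (1-rho) * grad
    mg[:] = rho * mg + (1 - rho) * grad
    # ms <- rho * ms_{t-1} + (1-rho) * grad * grad
    ms[:] = rho * ms + (1 - rho) * grad * grad
    # mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg*mg + epsilon)
    mom[:] = momentum * mom + lr * grad / np.sqrt(ms - mg * mg + epsilon)
    # var <- var - mom
    var[:] = var - mom
    return var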
doc_503 | tf.eig
tf.linalg.eig(
tensor, name=None
)
The eigenvalues and eigenvectors for a non-Hermitian matrix in general are complex. The eigenvectors are not guaranteed to be linearly independent. Computes the eigenvalues and right eigenvectors of the innermost N-by-N matrices in tensor such that tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i], for i=0...N-1.
Args
tensor Tensor of shape [..., N, N]. Only the lower triangular part of each inner matrix is referenced.
name string, optional name of the operation.
Returns
e Eigenvalues. Shape is [..., N]. Sorted in non-decreasing order.
v Eigenvectors. Shape is [..., N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in tensor. | |
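A minimal sketch of tf.linalg.eig on a diagonal matrix, where the eigenvalues are obvious (the input is made explicitly complex; results come back as complex tensors):
import tensorflow as tf

a = tf.constant([[1.0, 0.0], [0.0, 2.0]], dtype=tf.complex64)
e, v = tf.linalg.eig(a)              # e ~ [1+0j, 2+0j]; columns of v are eigenvectors
residual = tf.matmul(a, v) - e * v   # ~ 0, since a @ v[:, i] == e[i] * v[:, i]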
doc_504 |
Return a normalized rgba array corresponding to x. In the normal case, x is a 1D or 2D sequence of scalars, and the corresponding ndarray of rgba values will be returned, based on the norm and colormap set for this ScalarMappable. There is one special case, for handling images that are already rgb or rgba, such as might have been read from an image file. If x is an ndarray with 3 dimensions, and the last dimension is either 3 or 4, then it will be treated as an rgb or rgba array, and no mapping will be done. The array can be uint8, or it can be floating point with values in the 0-1 range; otherwise a ValueError will be raised. If it is a masked array, the mask will be ignored. If the last dimension is 3, the alpha kwarg (defaulting to 1) will be used to fill in the transparency. If the last dimension is 4, the alpha kwarg is ignored; it does not replace the pre-existing alpha. A ValueError will be raised if the third dimension is other than 3 or 4. In either case, if bytes is False (default), the rgba array will be floats in the 0-1 range; if it is True, the returned rgba array will be uint8 in the 0 to 255 range. If norm is False, no normalization of the input data is performed, and it is assumed to be in the range (0-1). | |
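A short sketch, assuming this is matplotlib's ScalarMappable.to_rgba:
import numpy as np
from matplotlib import cm, colors

sm = cm.ScalarMappable(norm=colors.Normalize(vmin=0, vmax=1), cmap='viridis')
rgba = sm.to_rgba(np.array([0.0, 0.5, 1.0]))   # float rgba in the 0-1 range
print(rgba.shape)                              # (3, 4): one rgba row per scalar
as_bytes = sm.to_rgba(np.array([0.0, 0.5, 1.0]), bytes=True)  # uint8 in 0-255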
doc_505 |
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_test : array-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
y : Ignored
Not used, present for API consistency by convention. Returns
res : float
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix. | |
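A minimal sketch, assuming this is the score method of a scikit-learn covariance estimator such as EmpiricalCovariance (the data is illustrative):
import numpy as np
from sklearn.covariance import EmpiricalCovariance

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
X_test = rng.normal(size=(100, 3))

cov = EmpiricalCovariance().fit(X_train)
log_lik = cov.score(X_test)   # log-likelihood of X_test under the fitted Gaussian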
doc_506 |
Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T * X and the product X.T * y. Read more in the User Guide. Parameters
Gram : ndarray of shape (n_features, n_features)
Gram matrix of the input data: X.T * X.
Xy : ndarray of shape (n_features,) or (n_features, n_targets)
Input targets multiplied by X: X.T * y.
n_nonzero_coefs : int, default=None
Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features.
tol : float, default=None
Maximum norm of the residual. If not None, overrides n_nonzero_coefs.
norms_squared : array-like of shape (n_targets,), default=None
Squared L2 norms of the rows of y. Required if tol is not None.
copy_Gram : bool, default=True
Whether the Gram matrix must be copied by the algorithm. A false value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway.
copy_Xy : bool, default=True
Whether the covariance vector Xy must be copied by the algorithm. If False, it may be overwritten.
return_path : bool, default=False
Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation.
return_n_iter : bool, default=False
Whether or not to return the number of iterations. Returns
coef : ndarray of shape (n_features,) or (n_features, n_targets)
Coefficients of the OMP solution. If return_path=True, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis yields coefficients in increasing order of active features.
n_iters : array-like or int
Number of active features across every target. Returned only if return_n_iter is set to True. See also
OrthogonalMatchingPursuit
orthogonal_mp
lars_path
sklearn.decomposition.sparse_encode
Notes Orthogonal matching pursuit was introduced in S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf | |
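A minimal sketch, assuming this is sklearn.linear_model.orthogonal_mp_gram; the data is illustrative:
import numpy as np
from sklearn.linear_model import orthogonal_mp_gram

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_coef = np.zeros(10)
true_coef[[2, 7]] = [1.5, -2.0]     # a 2-sparse signal
y = X @ true_coef

gram = X.T @ X                      # precomputed Gram matrix X.T * X
Xy = X.T @ y                        # precomputed covariance vector X.T * y
coef = orthogonal_mp_gram(gram, Xy, n_nonzero_coefs=2)
# coef recovers the two nonzero entries of true_coef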
doc_507 | Exception raised by the Bdb class for quitting the debugger. | |
doc_508 | Exception raised when a reply is received from the server that does not begin with a digit in the range 1–5. | |
doc_509 | tf.experimental.numpy.true_divide(
x1, x2
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.true_divide. | |
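A tiny sketch of the NumPy-compatible wrapper:
import tensorflow.experimental.numpy as tnp

x = tnp.asarray([1, 2, 3])
y = tnp.true_divide(x, 2)   # elementwise float division: [0.5, 1.0, 1.5]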
doc_510 | A string describing the name of the email field on the User model. This value is returned by get_email_field_name(). | |
doc_511 |
Integer number of levels in this MultiIndex. Examples
>>> mi = pd.MultiIndex.from_arrays([['a'], ['b'], ['c']])
>>> mi
MultiIndex([('a', 'b', 'c')],
)
>>> mi.nlevels
3 | |
doc_512 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseConcat
tf.raw_ops.SparseConcat(
indices, values, shapes, concat_dim, name=None
)
Concatenation is with respect to the dense versions of these sparse tensors. It is assumed that each input is a SparseTensor whose elements are ordered along increasing dimension number. All inputs' shapes must match, except for the concat dimension. The indices, values, and shapes lists must have the same length. The output shape is identical to the inputs', except along the concat dimension, where it is the sum of the inputs' sizes along that dimension. The output elements will be resorted to preserve the sort order along increasing dimension number. This op runs in O(M log M) time, where M is the total number of non-empty values across all inputs. This is due to the need for an internal sort in order to concatenate efficiently across an arbitrary dimension. For example, if concat_dim = 1 and the inputs are sp_inputs[0]: shape = [2, 3]
[0, 2]: "a"
[1, 0]: "b"
[1, 1]: "c"
sp_inputs[1]: shape = [2, 4]
[0, 1]: "d"
[0, 2]: "e"
then the output will be shape = [2, 7]
[0, 2]: "a"
[0, 4]: "d"
[0, 5]: "e"
[1, 0]: "b"
[1, 1]: "c"
Graphically this is equivalent to doing [ a] concat [ d e ] = [ a d e ]
[b c ] [ ] [b c ]
Args
indices A list of at least 2 Tensor objects with type int64. 2-D. Indices of each input SparseTensor.
values A list with the same length as indices of Tensor objects with the same type. 1-D. Non-empty values of each SparseTensor.
shapes A list with the same length as indices of Tensor objects with type int64. 1-D. Shapes of each SparseTensor.
concat_dim An int. Dimension to concatenate along. Must be in range [-rank, rank), where rank is the number of dimensions in each input SparseTensor.
name A name for the operation (optional).
Returns A tuple of Tensor objects (output_indices, output_values, output_shape). output_indices A Tensor of type int64.
output_values A Tensor. Has the same type as values.
output_shape A Tensor of type int64. | |
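A sketch of the raw op mirroring the worked example above (concat_dim=1):
import tensorflow as tf

indices = [tf.constant([[0, 2], [1, 0], [1, 1]], dtype=tf.int64),
           tf.constant([[0, 1], [0, 2]], dtype=tf.int64)]
values = [tf.constant([b"a", b"b", b"c"]),
          tf.constant([b"d", b"e"])]
shapes = [tf.constant([2, 3], dtype=tf.int64),
          tf.constant([2, 4], dtype=tf.int64)]

out_indices, out_values, out_shape = tf.raw_ops.SparseConcat(
    indices=indices, values=values, shapes=shapes, concat_dim=1)
# out_shape == [2, 7]; out_values in row-major order: [a, d, e, b, c]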
doc_513 | Create a “child” parser which can be used to parse an external parsed entity referred to by content parsed by the parent parser. The context parameter should be the string passed to the ExternalEntityRefHandler() handler function, described below. The child parser is created with the ordered_attributes and specified_attributes set to the values of this parser. | |
doc_514 | The system identifier for the external subset of the document type definition. This will be a URI as a string, or None. | |
doc_515 | Line2D(xdata, ydata[, linewidth, linestyle, ...]) A line - the line can have both a solid linestyle connecting all the vertices, and a marker at each vertex.
VertexSelector(line) Manage the callbacks to maintain a list of selected vertices for Line2D. Functions
segment_hits(cx, cy, x, y, radius) Return the indices of the segments in the polyline with coordinates (cx, cy) that are within a distance radius of the point (x, y). | |
doc_516 | Returns a combined list of strings representing all file suffixes for modules recognized by the standard import machinery. This is a helper for code which simply needs to know if a filesystem path potentially refers to a module without needing any details on the kind of module (for example, inspect.getmodulename()). New in version 3.3. | |
doc_517 |
Forward fill the values. Parameters
limit : int, optional
Limit of how many values to fill. Returns
Series or DataFrame
Object with missing values filled. See also Series.ffill
Forward fill missing values in a Series. DataFrame.ffill
Object with missing values filled or None if inplace=True. Series.fillna
Fill NaN values of a Series. DataFrame.fillna
Fill NaN values of a DataFrame. | |
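A doctest-style sketch of forward filling, in the document's own example style:
>>> import numpy as np
>>> import pandas as pd
>>> s = pd.Series([1.0, np.nan, np.nan, 3.0])
>>> s.ffill()
0    1.0
1    1.0
2    1.0
3    3.0
dtype: float64
>>> s.ffill(limit=1)      # fill at most one consecutive missing value
0    1.0
1    1.0
2    NaN
3    3.0
dtype: float64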
doc_518 |
The Python Tkinter Topic Guide provides a great deal of information on using Tk from Python and links to other sources of information on Tk. TKDocs
Extensive tutorial plus friendlier widget pages for some of the widgets. Tkinter 8.5 reference: a GUI for Python
On-line reference material. Tkinter docs from effbot
Online reference for tkinter supported by effbot.org. Programming Python
Book by Mark Lutz, has excellent coverage of Tkinter. Modern Tkinter for Busy Python Developers
Book by Mark Roseman about building attractive and modern graphical user interfaces with Python and Tkinter. Python and Tkinter Programming
Book by John Grayson (ISBN 1-884777-81-3). Tcl/Tk documentation: Tk commands
Most commands are available as tkinter or tkinter.ttk classes. Change ‘8.6’ to match the version of your Tcl/Tk installation. Tcl/Tk recent man pages
Recent Tcl/Tk manuals on www.tcl.tk. ActiveState Tcl Home Page
The Tk/Tcl development is largely taking place at ActiveState. Tcl and the Tk Toolkit
Book by John Ousterhout, the inventor of Tcl. Practical Programming in Tcl and Tk
Brent Welch’s encyclopedic book. Tkinter Modules Most of the time, tkinter is all you really need, but a number of additional modules are available as well. The Tk interface is located in a binary module named _tkinter. This module contains the low-level interface to Tk, and should never be used directly by application programmers. It is usually a shared library (or DLL), but might in some cases be statically linked with the Python interpreter. In addition to the Tk interface module, tkinter includes a number of Python modules, tkinter.constants being one of the most important. Importing tkinter will automatically import tkinter.constants, so, usually, to use Tkinter all you need is a simple import statement: import tkinter
Or, more often: from tkinter import *
class tkinter.Tk(screenName=None, baseName=None, className='Tk', useTk=1)
The Tk class is instantiated without arguments. This creates a toplevel widget of Tk which usually is the main window of an application. Each instance has its own associated Tcl interpreter.
tkinter.Tcl(screenName=None, baseName=None, className='Tk', useTk=0)
The Tcl() function is a factory function which creates an object much like that created by the Tk class, except that it does not initialize the Tk subsystem. This is most often useful when driving the Tcl interpreter in an environment where one doesn't want to create extraneous toplevel windows, or where one cannot (such as Unix/Linux systems without an X server). An object created by the Tcl() function can have a Toplevel window created (and the Tk subsystem initialized) by calling its loadtk() method.
Other modules that provide Tk support include:
tkinter.colorchooser
Dialog to let the user choose a color.
tkinter.commondialog
Base class for the dialogs defined in the other modules listed here.
tkinter.filedialog
Common dialogs to allow the user to specify a file to open or save.
tkinter.font
Utilities to help work with fonts.
tkinter.messagebox
Access to standard Tk dialog boxes.
tkinter.scrolledtext
Text widget with a vertical scroll bar built in.
tkinter.simpledialog
Basic dialogs and convenience functions.
tkinter.dnd
Drag-and-drop support for tkinter. This is experimental and is expected to be deprecated when it is replaced with Tk DND.
turtle
Turtle graphics in a Tk window. Tkinter Life Preserver This section is not designed to be an exhaustive tutorial on either Tk or Tkinter. Rather, it is intended as a stop gap, providing some introductory orientation on the system. Credits: Tk was written by John Ousterhout while at Berkeley. Tkinter was written by Steen Lumholt and Guido van Rossum. This Life Preserver was written by Matt Conway at the University of Virginia. The HTML rendering, and some liberal editing, was produced from a FrameMaker version by Ken Manheimer. Fredrik Lundh elaborated and revised the class interface descriptions, to get them current with Tk 4.2. Mike Clarkson converted the documentation to LaTeX, and compiled the User Interface chapter of the reference manual. How To Use This Section This section is designed in two parts: the first half (roughly) covers background material, while the second half can be taken to the keyboard as a handy reference. When trying to answer questions of the form “how do I do blah”, it is often best to find out how to do “blah” in straight Tk, and then convert this back into the corresponding tkinter call. Python programmers can often guess at the correct Python command by looking at the Tk documentation. This means that in order to use Tkinter, you will have to know a little bit about Tk. This document can’t fulfill that role, so the best we can do is point you to the best documentation that exists. Here are some hints: The authors strongly suggest getting a copy of the Tk man pages. Specifically, the man pages in the manN directory are most useful. The man3 man pages describe the C interface to the Tk library and thus are not especially helpful for script writers. Addison-Wesley publishes a book called Tcl and the Tk Toolkit by John Ousterhout (ISBN 0-201-63337-X) which is a good introduction to Tcl and Tk for the novice. The book is not exhaustive, and for many details it defers to the man pages.
tkinter/__init__.py is a last resort for most, but can be a good place to go when nothing else makes sense.
A Simple Hello World Program
import tkinter as tk

class Application(tk.Frame):
    def __init__(self, master=None):
        super().__init__(master)
        self.master = master
        self.pack()
        self.create_widgets()

    def create_widgets(self):
        self.hi_there = tk.Button(self)
        self.hi_there["text"] = "Hello World\n(click me)"
        self.hi_there["command"] = self.say_hi
        self.hi_there.pack(side="top")

        self.quit = tk.Button(self, text="QUIT", fg="red",
                              command=self.master.destroy)
        self.quit.pack(side="bottom")

    def say_hi(self):
        print("hi there, everyone!")

root = tk.Tk()
app = Application(master=root)
app.mainloop()
A (Very) Quick Look at Tcl/Tk The class hierarchy looks complicated, but in actual practice, application programmers almost always refer to the classes at the very bottom of the hierarchy. Notes: These classes are provided for the purposes of organizing certain functions under one namespace. They aren’t meant to be instantiated independently. The Tk class is meant to be instantiated only once in an application. Application programmers need not instantiate one explicitly, the system creates one whenever any of the other classes are instantiated. The Widget class is not meant to be instantiated, it is meant only for subclassing to make “real” widgets (in C++, this is called an ‘abstract class’). To make use of this reference material, there will be times when you will need to know how to read short passages of Tk and how to identify the various parts of a Tk command. (See section Mapping Basic Tk into Tkinter for the tkinter equivalents of what’s below.) Tk scripts are Tcl programs. Like all Tcl programs, Tk scripts are just lists of tokens separated by spaces. A Tk widget is just its class, the options that help configure it, and the actions that make it do useful things. To make a widget in Tk, the command is always of the form: classCommand newPathname options
classCommand
denotes which kind of widget to make (a button, a label, a menu…) newPathname
is the new name for this widget. All names in Tk must be unique. To help enforce this, widgets in Tk are named with pathnames, just like files in a file system. The top level widget, the root, is called . (period) and children are delimited by more periods. For example, .myApp.controlPanel.okButton might be the name of a widget. options
configure the widget’s appearance and in some cases, its behavior. The options come in the form of a list of flags and values. Flags are preceded by a ‘-‘, like Unix shell command flags, and values are put in quotes if they are more than one word. For example: button .fred -fg red -text "hi there"
^ ^ \______________________/
| | |
class new options
command widget (-opt val -opt val ...)
Once created, the pathname to the widget becomes a new command. This new widget command is the programmer’s handle for getting the new widget to perform some action. In C, you’d express this as someAction(fred, someOptions), in C++, you would express this as fred.someAction(someOptions), and in Tk, you say: .fred someAction someOptions
Note that the object name, .fred, starts with a dot. As you’d expect, the legal values for someAction will depend on the widget’s class: .fred disable works if fred is a button (fred gets greyed out), but does not work if fred is a label (disabling of labels is not supported in Tk). The legal values of someOptions is action dependent. Some actions, like disable, require no arguments, others, like a text-entry box’s delete command, would need arguments to specify what range of text to delete. Mapping Basic Tk into Tkinter Class commands in Tk correspond to class constructors in Tkinter. button .fred =====> fred = Button()
The master of an object is implicit in the new name given to it at creation time. In Tkinter, masters are specified explicitly. button .panel.fred =====> fred = Button(panel)
The configuration options in Tk are given in lists of hyphened tags followed by values. In Tkinter, options are specified as keyword-arguments in the instance constructor, and keyword-args for configure calls or as instance indices, in dictionary style, for established instances. See section Setting Options on setting options. button .fred -fg red =====> fred = Button(panel, fg="red")
.fred configure -fg red =====> fred["fg"] = red
OR ==> fred.config(fg="red")
In Tk, to perform an action on a widget, use the widget name as a command, and follow it with an action name, possibly with arguments (options). In Tkinter, you call methods on the class instance to invoke actions on the widget. The actions (methods) that a given widget can perform are listed in tkinter/__init__.py. .fred invoke =====> fred.invoke()
To give a widget to the packer (geometry manager), you call pack with optional arguments. In Tkinter, the Pack class holds all this functionality, and the various forms of the pack command are implemented as methods. All widgets in tkinter are subclassed from the Packer, and so inherit all the packing methods. See the tkinter.tix module documentation for additional information on the Form geometry manager. pack .fred -side left =====> fred.pack(side="left")
How Tk and Tkinter are Related From the top down: Your App Here (Python)
A Python application makes a tkinter call. tkinter (Python Package)
This call (say, for example, creating a button widget), is implemented in the tkinter package, which is written in Python. This Python function will parse the commands and the arguments and convert them into a form that makes them look as if they had come from a Tk script instead of a Python script. _tkinter (C)
These commands and their arguments will be passed to a C function in the _tkinter - note the underscore - extension module. Tk Widgets (C and Tcl)
This C function is able to make calls into other C modules, including the C functions that make up the Tk library. Tk is implemented in C and some Tcl. The Tcl part of the Tk widgets is used to bind certain default behaviors to widgets, and is executed once at the point where the Python tkinter package is imported. (The user never sees this stage). Tk (C)
The Tk part of the Tk Widgets implement the final mapping to … Xlib (C)
the Xlib library to draw graphics on the screen. Handy Reference Setting Options Options control things like the color and border width of a widget. Options can be set in three ways: At object creation time, using keyword arguments
fred = Button(self, fg="red", bg="blue")
After object creation, treating the option name like a dictionary index
fred["fg"] = "red"
fred["bg"] = "blue"
Use the config() method to update multiple attrs subsequent to object creation
fred.config(fg="red", bg="blue")
For a complete explanation of a given option and its behavior, see the Tk man pages for the widget in question. Note that the man pages list “STANDARD OPTIONS” and “WIDGET SPECIFIC OPTIONS” for each widget. The former is a list of options that are common to many widgets, the latter are the options that are idiosyncratic to that particular widget. The Standard Options are documented on the options(3) man page. No distinction between standard and widget-specific options is made in this document. Some options don’t apply to some kinds of widgets. Whether a given widget responds to a particular option depends on the class of the widget; buttons have a command option, labels do not. The options supported by a given widget are listed in that widget’s man page, or can be queried at runtime by calling the config() method without arguments, or by calling the keys() method on that widget. The return value of these calls is a dictionary whose key is the name of the option as a string (for example, 'relief') and whose values are 5-tuples. Some options, like bg are synonyms for common options with long names (bg is shorthand for “background”). Passing the config() method the name of a shorthand option will return a 2-tuple, not 5-tuple. The 2-tuple passed back will contain the name of the synonym and the “real” option (such as ('bg', 'background')).
Index  Meaning                            Example
0      option name                        'relief'
1      option name for database lookup    'relief'
2      option class for database lookup   'Relief'
3      default value                      'raised'
4      current value                      'groove'
Example:
>>> print(fred.config())
{'relief': ('relief', 'relief', 'Relief', 'raised', 'groove')}
Of course, the dictionary printed will include all the options available and their values. This is meant only as an example. The Packer The packer is one of Tk’s geometry-management mechanisms. Geometry managers are used to specify the relative positioning of widgets within their container - their mutual master. In contrast to the more cumbersome placer (which is used less commonly, and we do not cover here), the packer takes qualitative relationship specification - above, to the left of, filling, etc - and works everything out to determine the exact placement coordinates for you. The size of any master widget is determined by the size of the “slave widgets” inside. The packer is used to control where slave widgets appear inside the master into which they are packed. You can pack widgets into frames, and frames into other frames, in order to achieve the kind of layout you desire. Additionally, the arrangement is dynamically adjusted to accommodate incremental changes to the configuration, once it is packed. Note that widgets do not appear until they have had their geometry specified with a geometry manager. It’s a common early mistake to leave out the geometry specification, and then be surprised when the widget is created but nothing appears. A widget will appear only after it has had, for example, the packer’s pack() method applied to it. The pack() method can be called with keyword-option/value pairs that control where the widget is to appear within its container, and how it is to behave when the main application window is resized. Here are some examples: fred.pack() # defaults to side = "top"
fred.pack(side="left")
fred.pack(expand=1)
Packer Options For more extensive information on the packer and the options that it can take, see the man pages and page 183 of John Ousterhout’s book. anchor
Anchor type. Denotes where the packer is to place each slave in its parcel. expand
Boolean, 0 or 1. fill
Legal values: 'x', 'y', 'both', 'none'. ipadx and ipady
A distance - designating internal padding on each side of the slave widget. padx and pady
A distance - designating external padding on each side of the slave widget. side
Legal values are: 'left', 'right', 'top', 'bottom'.
Coupling Widget Variables
The current-value setting of some widgets (like text entry widgets) can be connected directly to application variables by using special options. These options are variable, textvariable, onvalue, offvalue, and value. This connection works both ways: if the variable changes for any reason, the widget it's connected to will be updated to reflect the new value. Unfortunately, in the current implementation of tkinter it is not possible to hand over an arbitrary Python variable to a widget through a variable or textvariable option. The only kinds of variables for which this works are variables that are subclassed from a class called Variable, defined in tkinter. There are many useful subclasses of Variable already defined: StringVar, IntVar, DoubleVar, and BooleanVar. To read the current value of such a variable, call the get() method on it, and to change its value you call the set() method. If you follow this protocol, the widget will always track the value of the variable, with no further intervention on your part. For example:
import tkinter as tk

class App(tk.Frame):
    def __init__(self, master):
        super().__init__(master)
        self.pack()

        self.entrythingy = tk.Entry()
        self.entrythingy.pack()

        # Create the application variable.
        self.contents = tk.StringVar()
        # Set it to some value.
        self.contents.set("this is a variable")
        # Tell the entry widget to watch this variable.
        self.entrythingy["textvariable"] = self.contents

        # Define a callback for when the user hits return.
        # It prints the current value of the variable.
        self.entrythingy.bind('<Key-Return>',
                              self.print_contents)

    def print_contents(self, event):
        print("Hi. The current entry content is:",
              self.contents.get())
root = tk.Tk()
myapp = App(root)
myapp.mainloop()
The Window Manager
In Tk, there is a utility command, wm, for interacting with the window manager. Options to the wm command allow you to control things like titles, placement, icon bitmaps, and the like. In tkinter, these commands have been implemented as methods on the Wm class. Toplevel widgets are subclassed from the Wm class, and so can call the Wm methods directly. To get at the toplevel window that contains a given widget, you can often just refer to the widget's master. Of course if the widget has been packed inside of a frame, the master won't represent a toplevel window. To get at the toplevel window that contains an arbitrary widget, you can call the _root() method. This method begins with an underscore to denote the fact that this function is part of the implementation, and not an interface to Tk functionality. Here are some examples of typical usage:
import tkinter as tk

class App(tk.Frame):
    def __init__(self, master=None):
        super().__init__(master)
        self.pack()

# create the application
myapp = App()

#
# here are method calls to the window manager class
#
myapp.master.title("My Do-Nothing Application")
myapp.master.maxsize(1000, 400)

# start the program
myapp.mainloop()
Tk Option Data Types anchor
Legal values are points of the compass: "n", "ne", "e", "se", "s", "sw", "w", "nw", and also "center". bitmap
There are eight built-in, named bitmaps: 'error', 'gray25', 'gray50', 'hourglass', 'info', 'questhead', 'question', 'warning'. To specify an X bitmap filename, give the full path to the file, preceded with an @, as in "@/usr/contrib/bitmap/gumby.bit". boolean
You can pass integers 0 or 1 or the strings "yes" or "no". callback
This is any Python function that takes no arguments. For example:
def print_it():
    print("hi there")
fred["command"] = print_it
color
Colors can be given as the names of X colors in the rgb.txt file, or as strings representing RGB values in 4 bit: "#RGB", 8 bit: "#RRGGBB", 12 bit: "#RRRGGGBBB", or 16 bit: "#RRRRGGGGBBBB" ranges, where R,G,B here represent any legal hex digit. See page 160 of Ousterhout's book for details. cursor
The standard X cursor names from cursorfont.h can be used, without the XC_ prefix. For example to get a hand cursor (XC_hand2), use the string "hand2". You can also specify a bitmap and mask file of your own. See page 179 of Ousterhout’s book. distance
Screen distances can be specified in either pixels or absolute distances. Pixels are given as numbers and absolute distances as strings, with the trailing character denoting units: c for centimetres, i for inches, m for millimetres, p for printer’s points. For example, 3.5 inches is expressed as "3.5i". font
Tk uses a list font name format, such as {courier 10 bold}. Font sizes with positive numbers are measured in points; sizes with negative numbers are measured in pixels. geometry
This is a string of the form widthxheight, where width and height are measured in pixels for most widgets (in characters for widgets displaying text). For example: fred["geometry"] = "200x100". justify
Legal values are the strings: "left", "center", "right", and "fill". region
This is a string with four space-delimited elements, each of which is a legal distance (see above). For example: "2 3 4 5" and "3i 2i 4.5i 2i" and "3c 2c 4c 10.43c" are all legal regions. relief
Determines what the border style of a widget will be. Legal values are: "raised", "sunken", "flat", "groove", and "ridge". scrollcommand
This is almost always the set() method of some scrollbar widget, but can be any widget method that takes a single argument. wrap
Must be one of: "none", "char", or "word". Bindings and Events The bind method from the widget command allows you to watch for certain events and to have a callback function trigger when that event type occurs. The form of the bind method is: def bind(self, sequence, func, add=''):
where: sequence
is a string that denotes the target kind of event. (See the bind man page and page 201 of John Ousterhout’s book for details). func
is a Python function, taking one argument, to be invoked when the event occurs. An Event instance will be passed as the argument. (Functions deployed this way are commonly known as callbacks.) add
is optional, either '' or '+'. Passing an empty string denotes that this binding is to replace any other bindings that this event is associated with. Passing a '+' means that this function is to be added to the list of functions bound to this event type. For example:
def turn_red(self, event):
    event.widget["activeforeground"] = "red"

self.button.bind("<Enter>", self.turn_red)
Notice how the widget field of the event is being accessed in the turn_red() callback. This field contains the widget that caught the X event. The following table lists the other event fields you can access, and how they are denoted in Tk, which can be useful when referring to the Tk man pages.
Tk   Tkinter Event Field
%f   focus
%h   height
%k   keycode
%s   state
%t   time
%w   width
%x   x
%y   y
%A   char
%E   send_event
%K   keysym
%N   keysym_num
%T   type
%W   widget
%X   x_root
%Y   y_root
The index Parameter
A number of widgets require "index" parameters to be passed. These are used to point at a specific place in a Text widget, or to particular characters in an Entry widget, or to particular menu items in a Menu widget. Entry widget indexes (index, view index, etc.)
Entry widgets have options that refer to character positions in the text being displayed. You can use these tkinter functions to access these special points in text widgets: Text widget indexes
The index notation for Text widgets is very rich and is best described in the Tk man pages. Menu indexes (menu.invoke(), menu.entryconfig(), etc.)
Some options and methods for menus manipulate specific menu entries. Anytime a menu index is needed for an option or a parameter, you may pass in: an integer which refers to the numeric position of the entry in the widget, counted from the top, starting with 0; the string "active", which refers to the menu position that is currently under the cursor; the string "last" which refers to the last menu item; An integer preceded by @, as in @6, where the integer is interpreted as a y pixel coordinate in the menu’s coordinate system; the string "none", which indicates no menu entry at all, most often used with menu.activate() to deactivate all entries, and finally, a text string that is pattern matched against the label of the menu entry, as scanned from the top of the menu to the bottom. Note that this index type is considered after all the others, which means that matches for menu items labelled last, active, or none may be interpreted as the above literals, instead. Images Images of different formats can be created through the corresponding subclass of tkinter.Image:
BitmapImage for images in XBM format.
PhotoImage for images in PGM, PPM, GIF and PNG formats. The latter is supported starting with Tk 8.6. Either type of image is created through either the file or the data option (other options are available as well). The image object can then be used wherever an image option is supported by some widget (e.g. labels, buttons, menus). In these cases, Tk will not keep a reference to the image. When the last Python reference to the image object is deleted, the image data is deleted as well, and Tk will display an empty box wherever the image was used. See also The Pillow package adds support for formats such as BMP, JPEG, TIFF, and WebP, among others.
File Handlers
Tk allows you to register and unregister a callback function which will be called from the Tk mainloop when I/O is possible on a file descriptor. Only one handler may be registered per file descriptor. Example code:
import tkinter
widget = tkinter.Tk()
mask = tkinter.READABLE | tkinter.WRITABLE
widget.tk.createfilehandler(file, mask, callback)
...
widget.tk.deletefilehandler(file)
This feature is not available on Windows. Since you don’t know how many bytes are available for reading, you may not want to use the BufferedIOBase or TextIOBase read() or readline() methods, since these will insist on reading a predefined number of bytes. For sockets, the recv() or recvfrom() methods will work fine; for other files, use raw reads or os.read(file.fileno(), maxbytecount).
Widget.tk.createfilehandler(file, mask, func)
Registers the file handler callback function func. The file argument may either be an object with a fileno() method (such as a file or socket object), or an integer file descriptor. The mask argument is an ORed combination of any of the three constants below. The callback is called as follows: callback(file, mask)
Widget.tk.deletefilehandler(file)
Unregisters a file handler.
tkinter.READABLE
tkinter.WRITABLE
tkinter.EXCEPTION
Constants used in the mask arguments. | |
doc_519 | See Migration guide for more details. tf.compat.v1.estimator.CheckpointSaverListener, tf.compat.v1.train.CheckpointSaverListener CheckpointSaverListener triggers only in steps when CheckpointSaverHook is triggered, and provides callbacks at the following points: before using the session before each call to Saver.save()
after each call to Saver.save()
at the end of session To use a listener, implement a class and pass the listener to a CheckpointSaverHook, as in this example:
class ExampleCheckpointSaverListener(CheckpointSaverListener):
    def begin(self):
        # You can add ops to the graph here.
        print('Starting the session.')
        self.your_tensor = ...

    def before_save(self, session, global_step_value):
        print('About to write a checkpoint')

    def after_save(self, session, global_step_value):
        print('Done writing checkpoint.')
        if decided_to_stop_training():
            return True

    def end(self, session, global_step_value):
        print('Done with the session.')

...
listener = ExampleCheckpointSaverListener()
saver_hook = tf.estimator.CheckpointSaverHook(
    checkpoint_dir, listeners=[listener])
with tf.compat.v1.train.MonitoredTrainingSession(
        chief_only_hooks=[saver_hook]):
    ...
A CheckpointSaverListener may simply take some action after every checkpoint save. It is also possible for the listener to use its own schedule to act less frequently, e.g. based on global_step_value. In this case, implementors should implement the end() method to handle actions related to the last checkpoint save. But the listener should not act twice if after_save() already handled this last checkpoint save. A CheckpointSaverListener can request training to be stopped by returning True in after_save. Please note that, in a replicated distributed training setting, only the chief should use this behavior. Otherwise each worker will do their own evaluation, which may be wasteful of resources. Methods after_save
after_save(
session, global_step_value
)
before_save
before_save(
session, global_step_value
)
begin
begin()
end
end(
session, global_step_value
) | |
doc_520 | Create and return a SAX XMLReader object. The first parser found will be used. If parser_list is provided, it must be an iterable of strings which name modules that have a function named create_parser(). Modules listed in parser_list will be used before modules in the default list of parsers. Changed in version 3.8: The parser_list argument can be any iterable, not just a list. | |
doc_521 |
Fit the random classifier. Parameters
X : array-like of shape (n_samples, n_features)
Training data.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
self : object | |
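A minimal sketch, assuming "the random classifier" is something like sklearn.dummy.DummyClassifier; the data is illustrative:
import numpy as np
from sklearn.dummy import DummyClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = DummyClassifier(strategy="uniform", random_state=0)
clf.fit(X, y)              # returns self, so calls can be chained
preds = clf.predict(X)     # labels drawn uniformly at random from {0, 1}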
doc_522 |
Build or fetch the effective stop words list. Returns
stop_words : list or None
A list of stop words. | |
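A short doctest-style sketch, assuming this is the get_stop_words method of a scikit-learn text vectorizer such as CountVectorizer:
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> stop = CountVectorizer(stop_words='english').get_stop_words()
>>> 'the' in stop
True
>>> CountVectorizer(stop_words=None).get_stop_words() is None
True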
doc_523 |
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters
values : array
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length input_dims or shape (N x output_dims), depending on the input. | |
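A small sketch with matplotlib transforms; for a purely affine transform, transform_non_affine leaves the input unchanged:
import numpy as np
import matplotlib.transforms as mtransforms

t = mtransforms.Affine2D().scale(2.0)   # an affine transform
pts = np.array([[1.0, 2.0], [3.0, 4.0]])
out = t.transform_non_affine(pts)       # no-op for affine transforms
assert np.allclose(out, pts)
assert np.allclose(t.transform(pts),
                   t.transform_affine(t.transform_non_affine(pts)))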
doc_524 |
Remove the artist from the figure if possible. The effect will not be visible until the figure is redrawn, e.g., with FigureCanvasBase.draw_idle. Call relim to update the axes limits if desired. Note: relim will not see collections even if the collection was added to the axes with autolim = True. Note: there is no support for removing the artist's legend entry. | |
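A short sketch, assuming this is Artist.remove in matplotlib:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 1])
line.remove()              # detach the artist from the axes
ax.relim()                 # recompute limits from the remaining artists, if desired
fig.canvas.draw_idle()     # the change shows up on the next redraw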
doc_525 |
Get the matrix for the affine part of this transform. | |
doc_526 |
Bases: mpl_toolkits.axisartist.axislines.AxisArtistHelper._Base get_line(axes)
get_nth_coord() | |
doc_527 |
Make a step plot. Call signatures: step(x, y, [fmt], *, data=None, where='pre', **kwargs)
step(x, y, [fmt], x2, y2, [fmt2], ..., *, where='pre', **kwargs)
This is just a thin wrapper around plot which changes some formatting options. Most of the concepts and parameters of plot can be used here as well. Note This method uses a standard plot with a step drawstyle: The x values are the reference positions and steps extend left/right/both directions depending on where. For the common case where you know the values and edges of the steps, use stairs instead. Parameters
x : array-like
1D sequence of x positions. It is assumed, but not checked, that it is uniformly increasing.
y : array-like
1D sequence of y levels.
fmt : str, optional
A format string, e.g. 'g' for a green line. See plot for a more detailed description. Note: While full format strings are accepted, it is recommended to only specify the color. Line styles are currently ignored (use the keyword argument linestyle instead). Markers are accepted and plotted on the given positions, however, this is a rarely needed feature for step plots.
where : {'pre', 'post', 'mid'}, default: 'pre'
Define where the steps should be placed: 'pre': The y value is continued constantly to the left from every x position, i.e. the interval (x[i-1], x[i]] has the value y[i]. 'post': The y value is continued constantly to the right from every x position, i.e. the interval [x[i], x[i+1]) has the value y[i]. 'mid': Steps occur half-way between the x positions.
data : indexable object, optional
An object with labelled data. If given, provide the label names to plot in x and y. **kwargs
Additional parameters are the same as those for plot. Returns
list of Line2D
Objects representing the plotted data.
| |
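A minimal sketch of Axes.step:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(8)
y = x ** 2

fig, ax = plt.subplots()
ax.step(x, y, where='mid')   # steps centered on each x position
plt.show()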
doc_528 |
Alias for get_linestyle. | |
doc_529 |
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
X : array-like of shape (n_samples, n_features)
Test samples.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. Returns
score : float
Mean accuracy of self.predict(X) wrt. y. | |
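A minimal sketch, assuming this is the standard score method shared by scikit-learn classifiers (LogisticRegression here is illustrative):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = clf.score(X_test, y_test)   # mean accuracy on held-out data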
doc_530 | Returns a tensor with all the dimensions of input of size 1 removed. For example, if input is of shape: (A×1×B×C×1×D)(A \times 1 \times B \times C \times 1 \times D) then the out tensor will be of shape: (A×B×C×D)(A \times B \times C \times D) . When dim is given, a squeeze operation is done only in the given dimension. If input is of shape: (A×1×B)(A \times 1 \times B) , squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape (A×B)(A \times B) . Note The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other. Warning If the tensor has a batch dimension of size 1, then squeeze(input) will also remove the batch dimension, which can lead to unexpected errors. Parameters
input (Tensor) – the input tensor.
dim (int, optional) – if given, the input will be squeezed only in this dimension Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])
>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2]) | |
doc_531 | skimage.graph.route_through_array(array, …) Simple example of how to use the MCP and MCP_Geometric classes.
skimage.graph.shortest_path(arr[, reach, …]) Find the shortest path through an n-d array from one side to another.
skimage.graph.MCP(costs[, offsets, …]) A class for finding the minimum cost path through a given n-d costs array.
skimage.graph.MCP_Connect(costs[, offsets, …]) Connect source points using the distance-weighted minimum cost function.
skimage.graph.MCP_Flexible(costs[, offsets, …]) Find minimum cost paths through an N-d costs array.
skimage.graph.MCP_Geometric(costs[, …]) Find distance-weighted minimum cost paths through an n-d costs array. route_through_array
skimage.graph.route_through_array(array, start, end, fully_connected=True, geometric=True)
Simple example of how to use the MCP and MCP_Geometric classes. See the MCP and MCP_Geometric class documentation for explanation of the path-finding algorithm. Parameters
array : ndarray
Array of costs.
start : iterable
n-d index into array defining the starting point
end : iterable
n-d index into array defining the end point
fully_connected : bool (optional)
If True, diagonal moves are permitted, if False, only axial moves.
geometric : bool (optional)
If True, the MCP_Geometric class is used to calculate costs, if False, the MCP base class is used. See the class documentation for an explanation of the differences between MCP and MCP_Geometric. Returns
path : list
List of n-d index tuples defining the path from start to end.
cost : float
Cost of the path. If geometric is False, the cost of the path is the sum of the values of array along the path. If geometric is True, a finer computation is made (see the documentation of the MCP_Geometric class). See also
MCP, MCP_Geometric
Examples >>> import numpy as np
>>> from skimage.graph import route_through_array
>>>
>>> image = np.array([[1, 3], [10, 12]])
>>> image
array([[ 1, 3],
[10, 12]])
>>> # Forbid diagonal steps
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False)
([(0, 0), (0, 1), (1, 1)], 9.5)
>>> # Now allow diagonal steps: the path goes directly from start to end
>>> route_through_array(image, [0, 0], [1, 1])
([(0, 0), (1, 1)], 9.19238815542512)
>>> # Cost is the sum of array values along the path (16 = 1 + 3 + 12)
>>> route_through_array(image, [0, 0], [1, 1], fully_connected=False,
... geometric=False)
([(0, 0), (0, 1), (1, 1)], 16.0)
>>> # Larger array where we display the path that is selected
>>> image = np.arange((36)).reshape((6, 6))
>>> image
array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35]])
>>> # Find the path with lowest cost
>>> indices, weight = route_through_array(image, (0, 0), (5, 5))
>>> indices = np.stack(indices, axis=-1)
>>> path = np.zeros_like(image)
>>> path[indices[0], indices[1]] = 1
>>> path
array([[1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1]])
shortest_path
skimage.graph.shortest_path(arr, reach=1, axis=-1, output_indexlist=False)
Find the shortest path through an n-d array from one side to another. Parameters
arr : ndarray of float64
reach : int, optional
By default (reach = 1), the shortest path can only move one row up or down for every step it moves forward (i.e., the path gradient is limited to 1). reach defines the number of elements that can be skipped along each non-axis dimension at each step.
axis : int, optional
The axis along which the path must always move forward (default -1)
output_indexlist : bool, optional
See return value p for explanation. Returns
p : iterable of int
For each step along axis, the coordinate of the shortest path. If output_indexlist is True, then the path is returned as a list of n-d tuples that index into arr. If False, then the path is returned as an array listing the coordinates of the path along the non-axis dimensions for each step along the axis dimension. That is, p.shape == (arr.shape[axis], arr.ndim-1) except that p is squeezed before returning so if arr.ndim == 2, then p.shape == (arr.shape[axis],)
cost : float
Cost of path. This is the absolute sum of all the differences along the path.
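A small sketch of shortest_path on a 2-D array; no outputs are asserted, since the chosen path depends on the cost definition above:
import numpy as np
from skimage.graph import shortest_path

arr = np.array([[1.0, 2.0, 9.0],
                [9.0, 3.0, 2.0]])
p, cost = shortest_path(arr)   # moves forward along the last axis
# p gives the row chosen at each column; cost is the absolute sum of
# the value differences along the chosen path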
MCP
class skimage.graph.MCP(costs, offsets=None, fully_connected=True, sampling=None)
Bases: object A class for finding the minimum cost path through a given n-d costs array. Given an n-d costs array, this class can be used to find the minimum-cost path through that array from any set of points to any other set of points. Basic usage is to initialize the class and call find_costs() with a one or more starting indices (and an optional list of end indices). After that, call traceback() one or more times to find the path from any given end-position to the closest starting index. New paths through the same costs array can be found by calling find_costs() repeatedly. The cost of a path is calculated simply as the sum of the values of the costs array at each point on the path. The class MCP_Geometric, on the other hand, accounts for the fact that diagonal vs. axial moves are of different lengths, and weights the path cost accordingly. Array elements with infinite or negative costs will simply be ignored, as will paths whose cumulative cost overflows to infinite. Parameters
costs : ndarray
offsets : iterable, optional
A list of offset tuples: each offset specifies a valid move from a given n-d position. If not provided, offsets corresponding to a singly- or fully-connected n-d neighborhood will be constructed with make_offsets(), using the fully_connected parameter value.
fully_connected : bool, optional
If no offsets are provided, this determines the connectivity of the generated neighborhood. If true, the path may go along diagonals between elements of the costs array; otherwise only axial moves are permitted.
sampling : tuple, optional
For each dimension, specifies the distance between two cells/voxels. If not given or None, the distance is assumed unit. Attributes
offsets : ndarray
Equivalent to the offsets provided to the constructor, or if none were so provided, the offsets created for the requested n-d neighborhood. These are useful for interpreting the traceback array returned by the find_costs() method.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation.
find_costs()
Find the minimum-cost path from the given starting points. This method finds the minimum-cost path to the specified ending indices from any one of the specified starting indices. If no end positions are given, then the minimum-cost path to every position in the costs array will be found. Parameters
starts : iterable
A list of n-d starting indices (where n is the dimension of the costs array). The minimum cost path to the closest/cheapest starting point will be found.
ends : iterable, optional
A list of n-d ending indices.
find_all_ends : bool, optional
If 'True' (default), the minimum-cost-path to every specified end-position will be found; otherwise the algorithm will stop when a path is found to any end-position. (If no ends were specified, then this parameter has no effect.) Returns
cumulative_costs : ndarray
Same shape as the costs array; this array records the minimum cost path from the nearest/cheapest starting index to each index considered. (If ends were specified, not all elements in the array will necessarily be considered: positions not evaluated will have a cumulative cost of inf. If find_all_ends is 'False', only one of the specified end-positions will have a finite cumulative cost.)
traceback : ndarray
Same shape as the costs array; this array contains the offset to any given index from its predecessor index. The offset indices index into the offsets attribute, which is an array of n-d offsets. In the 2-d case, if offsets[traceback[x, y]] is (-1, -1), that means that the predecessor of [x, y] in the minimum cost path to some start position is [x+1, y+1]. Note that if the offset_index is -1, then the given index was not considered.
goal_reached()
int goal_reached(int index, float cumcost) This method is called each iteration after popping an index from the heap, before examining the neighbours. This method can be overloaded to modify the behavior of the MCP algorithm. An example might be to stop the algorithm when a certain cumulative cost is reached, or when the front is a certain distance away from the seed point. This method should return 1 if the algorithm should not check the current point’s neighbours and 2 if the algorithm is now done.
traceback(end)
Trace a minimum cost path through the pre-calculated traceback array. This convenience function reconstructs the minimum cost path to a given end position from one of the starting indices provided to find_costs(), which must have been called previously. This function can be called as many times as desired after find_costs() has been run. Parameters
end : iterable
An n-d index into the costs array. Returns
traceback : list of n-d tuples
A list of indices into the costs array, starting with one of the start positions passed to find_costs(), and ending with the given end index. These indices specify the minimum-cost path from any given start index to the end index. (The total cost of that path can be read out from the cumulative_costs array returned by find_costs().)
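A short usage sketch of the basic MCP workflow described above:
import numpy as np
from skimage.graph import MCP

costs = np.array([[1.0, 2.0, 2.0],
                  [2.0, 1.0, 2.0],
                  [2.0, 2.0, 1.0]])
mcp = MCP(costs)
cumulative_costs, traceback_array = mcp.find_costs(starts=[(0, 0)])
path = mcp.traceback((2, 2))   # list of index tuples from (0, 0) to (2, 2)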
MCP_Connect
class skimage.graph.MCP_Connect(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Connect source points using the distance-weighted minimum cost function. A front is grown from each seed point simultaneously, while the origin of the front is tracked as well. When two fronts meet, create_connection() is called. This method must be overloaded to deal with the found edges in a way that is appropriate for the application.
__init__(*args, **kwargs)
Initialize self. See help(type(self)) for accurate signature.
create_connection()
create_connection(id1, id2, pos1, pos2, cost1, cost2) Overload this method to keep track of the connections that are found during MCP processing. Note that a connection with the same ids can be found multiple times (but with different positions and costs). At the time that this method is called, both points are "frozen" and will not be visited again by the MCP algorithm. Parameters
id1int
The seed point id where the first neighbor originated from.
id2int
The seed point id where the second neighbor originated from.
pos1tuple
The index of the first neighbour in the connection.
pos2tuple
The index of the second neighbour in the connection.
cost1float
The cumulative cost at pos1.
cost2float
The cumulative cost at pos2.
MCP_Flexible
class skimage.graph.MCP_Flexible(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Find minimum cost paths through an N-d costs array. See the documentation for MCP for full details. This class differs from MCP in that several methods can be overloaded (from pure Python) to modify the behavior of the algorithm and/or create custom algorithms based on MCP. Note that goal_reached can also be overloaded in the MCP class.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation.
examine_neighbor(index, new_index, offset_length)
This method is called once for every pair of neighboring nodes, as soon as both nodes are frozen. This method can be overloaded to obtain information about neighboring nodes, and/or to modify the behavior of the MCP algorithm. One example is the MCP_Connect class, which checks for meeting fronts using this hook.
travel_cost(old_cost, new_cost, offset_length)
This method calculates the travel cost for going from the current node to the next. The default implementation returns new_cost. Overload this method to adapt the behaviour of the algorithm.
update_node(index, new_index, offset_length)
This method is called when a node is updated, right after new_index is pushed onto the heap and the traceback map is updated. This method can be overloaded to keep track of other arrays that are used by a specific implementation of the algorithm. For instance the MCP_Connect class uses it to update an id map.
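As an illustration, a sketch of a travel_cost() overload (the subclass name and the doubling penalty are illustrative only):
>>> import numpy as np
>>> from skimage.graph import MCP_Flexible
>>> class DoubledMCP(MCP_Flexible):  # hypothetical subclass
>>>     def travel_cost(self, old_cost, new_cost, offset_length):
>>>         # The default implementation returns new_cost; here every step is doubled.
>>>         return 2 * new_cost
>>> m = DoubledMCP(np.ones((5, 5)))
>>> cumcosts, trace = m.find_costs(starts=[(0, 0)])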
MCP_Geometric
class skimage.graph.MCP_Geometric(costs, offsets=None, fully_connected=True)
Bases: skimage.graph._mcp.MCP Find distance-weighted minimum cost paths through an n-d costs array. See the documentation for MCP for full details. This class differs from MCP in that the cost of a path is not simply the sum of the costs along that path. This class instead assumes that the costs array contains at each position the “cost” of a unit distance of travel through that position. For example, a move (in 2-d) from (1, 1) to (1, 2) is assumed to originate in the center of the pixel (1, 1) and terminate in the center of (1, 2). The entire move is of distance 1, half through (1, 1) and half through (1, 2); thus the cost of that move is (1/2)*costs[1,1] + (1/2)*costs[1,2]. On the other hand, a move from (1, 1) to (2, 2) is along the diagonal and is sqrt(2) in length. Half of this move is within the pixel (1, 1) and the other half in (2, 2), so the cost of this move is calculated as (sqrt(2)/2)*costs[1,1] + (sqrt(2)/2)*costs[2,2]. These calculations don’t make a lot of sense with offsets of magnitude greater than 1. Use the sampling argument in order to deal with anisotropic data.
__init__(costs, offsets=None, fully_connected=True, sampling=None)
See class documentation. | |
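A brief sketch of the distance weighting on a uniform cost array (the comments restate the per-move arithmetic described above):
>>> import numpy as np
>>> from skimage.graph import MCP_Geometric
>>> m = MCP_Geometric(np.ones((3, 3)))
>>> cumcosts, trace = m.find_costs(starts=[(0, 0)])
>>> # An axial move costs (1/2)*1 + (1/2)*1 = 1.0 here, while a diagonal
>>> # move costs (sqrt(2)/2)*1 + (sqrt(2)/2)*1 = sqrt(2).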
doc_532 |
Connect the callback function func to button click events. Returns a connection id, which can be used to disconnect the callback. | |
doc_533 | Backend          gloo        mpi         nccl
Device            CPU   GPU   CPU   GPU   CPU   GPU
send               ✓     ✘     ✓     ?     ✘     ✘
recv               ✓     ✘     ✓     ?     ✘     ✘
broadcast          ✓     ✓     ✓     ?     ✘     ✓
all_reduce         ✓     ✓     ✓     ?     ✘     ✓
reduce             ✓     ✘     ✓     ?     ✘     ✓
all_gather         ✓     ✘     ✓     ?     ✘     ✓
gather             ✓     ✘     ✓     ?     ✘     ✘
scatter            ✓     ✘     ✓     ?     ✘     ✘
reduce_scatter     ✘     ✘     ✘     ✘     ✘     ✓
all_to_all         ✘     ✘     ✓     ?     ✘     ✘
barrier            ✓     ✘     ✓     ?     ✘     ✓
Backends that come with PyTorch The PyTorch distributed package supports Linux (stable), MacOS (stable), and Windows (prototype). By default for Linux, the Gloo and NCCL backends are built and included in PyTorch distributed (NCCL only when building with CUDA). MPI is an optional backend that can only be included if you build PyTorch from source (e.g., building PyTorch on a host that has MPI installed). Note As of PyTorch v1.8, Windows supports all collective communications backends but NCCL. If the init_method argument of init_process_group() points to a file it must adhere to the following schema: Local file system, init_method="file:///d:/tmp/some_file"
Shared file system, init_method="file://////{machine_name}/{share_folder_name}/some_file"
Same as on Linux platform, you can enable TcpStore by setting environment variables, MASTER_ADDR and MASTER_PORT. Which backend to use? In the past, we were often asked: “which backend should I use?”.
Rule of thumb: use the NCCL backend for distributed GPU training; use the Gloo backend for distributed CPU training.
GPU hosts with InfiniBand interconnect Use NCCL, since it’s the only backend that currently supports InfiniBand and GPUDirect.
GPU hosts with Ethernet interconnect Use NCCL, since it currently provides the best distributed GPU training performance, especially for multiprocess single-node or multi-node distributed training. If you encounter any problem with NCCL, use Gloo as the fallback option. (Note that Gloo currently runs slower than NCCL for GPUs.)
CPU hosts with InfiniBand interconnect If your InfiniBand has enabled IP over IB, use Gloo, otherwise, use MPI instead. We are planning on adding InfiniBand support for Gloo in the upcoming releases.
CPU hosts with Ethernet interconnect Use Gloo, unless you have specific reasons to use MPI. Common environment variables Choosing the network interface to use By default, both the NCCL and Gloo backends will try to find the right network interface to use. If the automatically detected interface is not correct, you can override it using the following environment variables (applicable to the respective backend):
NCCL_SOCKET_IFNAME, for example export NCCL_SOCKET_IFNAME=eth0
GLOO_SOCKET_IFNAME, for example export GLOO_SOCKET_IFNAME=eth0
If you’re using the Gloo backend, you can specify multiple interfaces by separating them by a comma, like this: export GLOO_SOCKET_IFNAME=eth0,eth1,eth2,eth3. The backend will dispatch operations in a round-robin fashion across these interfaces. It is imperative that all processes specify the same number of interfaces in this variable. Other NCCL environment variables NCCL has also provided a number of environment variables for fine-tuning purposes. Commonly used ones include the following for debugging purposes: export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=ALL For the full list of NCCL environment variables, please refer to NVIDIA NCCL’s official documentation Basics The torch.distributed package provides PyTorch support and communication primitives for multiprocess parallelism across several computation nodes running on one or more machines. The class torch.nn.parallel.DistributedDataParallel() builds on this functionality to provide synchronous distributed training as a wrapper around any PyTorch model. This differs from the kinds of parallelism provided by Multiprocessing package - torch.multiprocessing and torch.nn.DataParallel() in that it supports multiple network-connected machines and in that the user must explicitly launch a separate copy of the main training script for each process. In the single-machine synchronous case, torch.distributed or the torch.nn.parallel.DistributedDataParallel() wrapper may still have advantages over other approaches to data-parallelism, including torch.nn.DataParallel(): Each process maintains its own optimizer and performs a complete optimization step with each iteration. While this may appear redundant, since the gradients have already been gathered together and averaged across processes and are thus the same for every process, this means that no parameter broadcast step is needed, reducing time spent transferring tensors between nodes. Each process contains an independent Python interpreter, eliminating the extra interpreter overhead and “GIL-thrashing” that comes from driving several execution threads, model replicas, or GPUs from a single Python process. This is especially important for models that make heavy use of the Python runtime, including models with recurrent layers or many small components. Initialization The package needs to be initialized using the torch.distributed.init_process_group() function before calling any other methods. This blocks until all processes have joined.
torch.distributed.is_available() [source]
Returns True if the distributed package is available. Otherwise, torch.distributed does not expose any other APIs. Currently, torch.distributed is available on Linux, MacOS and Windows. Set USE_DISTRIBUTED=1 to enable it when building PyTorch from source. Currently, the default value is USE_DISTRIBUTED=1 for Linux and Windows, USE_DISTRIBUTED=0 for MacOS.
torch.distributed.init_process_group(backend, init_method=None, timeout=datetime.timedelta(seconds=1800), world_size=-1, rank=-1, store=None, group_name='') [source]
Initializes the default distributed process group, and this will also initialize the distributed package. There are 2 main ways to initialize a process group:
Specify store, rank, and world_size explicitly. Specify init_method (a URL string) which indicates where/how to discover peers. Optionally specify rank and world_size, or encode all required parameters in the URL and omit them. If neither is specified, init_method is assumed to be “env://”. Parameters
backend (str or Backend) – The backend to use. Depending on build-time configurations, valid values include mpi, gloo, and nccl. This field should be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). If using multiple processes per machine with nccl backend, each process must have exclusive access to every GPU it uses, as sharing GPUs between processes can result in deadlocks.
init_method (str, optional) – URL specifying how to initialize the process group. Default is “env://” if no init_method or store is specified. Mutually exclusive with store.
world_size (int, optional) – Number of processes participating in the job. Required if store is specified.
rank (int, optional) – Rank of the current process (it should be a number between 0 and world_size-1). Required if store is specified.
store (Store, optional) – Key/value store accessible to all workers, used to exchange connection/address information. Mutually exclusive with init_method.
timeout (timedelta, optional) – Timeout for operations executed against the process group. Default value equals 30 minutes. This is applicable for the gloo backend. For nccl, this is applicable only if the environment variable NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1. When NCCL_BLOCKING_WAIT is set, this is the duration for which the process will block and wait for collectives to complete before throwing an exception. When NCCL_ASYNC_ERROR_HANDLING is set, this is the duration after which collectives will be aborted asynchronously and the process will crash. NCCL_BLOCKING_WAIT will provide errors to the user which can be caught and handled, but due to its blocking nature, it has a performance overhead. On the other hand, NCCL_ASYNC_ERROR_HANDLING has very little performance overhead, but crashes the process on errors. This is done since CUDA execution is async and it is no longer safe to continue executing user code since failed async NCCL operations might result in subsequent CUDA operations running on corrupted data. Only one of these two environment variables should be set.
group_name (str, optional, deprecated) – Group name. To enable backend == Backend.MPI, PyTorch needs to be built from source on a system that supports MPI.
class torch.distributed.Backend [source]
An enum-like class of available backends: GLOO, NCCL, MPI, and other registered backends. The values of this class are lowercase strings, e.g., "gloo". They can be accessed as attributes, e.g., Backend.NCCL. This class can be directly called to parse the string, e.g., Backend(backend_str) will check if backend_str is valid, and return the parsed lowercase string if so. It also accepts uppercase strings, e.g., Backend("GLOO") returns "gloo". Note The entry Backend.UNDEFINED is present but only used as initial value of some fields. Users should neither use it directly nor assume its existence.
torch.distributed.get_backend(group=None) [source]
Returns the backend of the given process group. Parameters
group (ProcessGroup, optional) – The process group to work on. The default is the general main process group. If another specific group is specified, the calling process must be part of group. Returns
The backend of the given process group as a lower case string.
torch.distributed.get_rank(group=None) [source]
Returns the rank of the current process in the given process group. Rank is a unique identifier assigned to each process within a distributed process group. Ranks are always consecutive integers ranging from 0 to world_size - 1. Parameters
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Returns
The rank within the process group; -1, if not part of the group
torch.distributed.get_world_size(group=None) [source]
Returns the number of processes in the current process group. Parameters
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Returns
The world size of the process group; -1, if not part of the group
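A typical post-initialization pattern (a minimal sketch; assumes the default process group has been initialized as described above): import torch.distributed as dist
rank = dist.get_rank()              # this process's rank in the default group
world_size = dist.get_world_size()  # total number of participating processes
if rank == 0:
    print(f"coordinating {world_size} processes")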
torch.distributed.is_initialized() [source]
Checks whether the default process group has been initialized.
torch.distributed.is_mpi_available() [source]
Checks if the MPI backend is available.
torch.distributed.is_nccl_available() [source]
Checks if the NCCL backend is available.
Currently three initialization methods are supported: TCP initialization There are two ways to initialize using TCP, both requiring a network address reachable from all processes and a desired world_size. The first way requires specifying an address that belongs to the rank 0 process. This initialization method requires that all processes have manually specified ranks. Note that multicast address is not supported anymore in the latest distributed package. group_name is deprecated as well. import torch.distributed as dist
# Use address of one of the machines
dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456',
rank=args.rank, world_size=4)
Shared file-system initialization Another initialization method makes use of a file system that is shared and visible from all machines in a group, along with a desired world_size. The URL should start with file:// and contain a path to a non-existent file (in an existing directory) on a shared file system. File-system initialization will automatically create that file if it doesn’t exist, but will not delete the file. Therefore, it is your responsibility to make sure that the file is cleaned up before the next init_process_group() call on the same file path/name. Note that automatic rank assignment is not supported anymore in the latest distributed package and group_name is deprecated as well. Warning This method assumes that the file system supports locking using fcntl - most local systems and NFS support it. Warning This method will always create the file and try its best to clean up and remove the file at the end of the program. In other words, each initialization with the file init method will need a brand new empty file in order for the initialization to succeed. If the same file used by the previous initialization (which happens not to get cleaned up) is used again, this is unexpected behavior and can often cause deadlocks and failures. Therefore, even though this method will try its best to clean up the file, if the auto-delete happens to be unsuccessful, it is your responsibility to ensure that the file is removed at the end of the training to prevent the same file to be reused again during the next time. This is especially important if you plan to call init_process_group() multiple times on the same file name. In other words, if the file is not removed/cleaned up and you call init_process_group() again on that file, failures are expected. The rule of thumb here is that, make sure that the file is non-existent or empty every time init_process_group() is called. import torch.distributed as dist
# rank should always be specified
dist.init_process_group(backend, init_method='file:///mnt/nfs/sharedfile',
world_size=4, rank=args.rank)
Environment variable initialization This method will read the configuration from environment variables, allowing one to fully customize how the information is obtained. The variables to be set are:
MASTER_PORT - required; has to be a free port on machine with rank 0
MASTER_ADDR - required (except for rank 0); address of rank 0 node
WORLD_SIZE - required; can be set either here, or in a call to init function
RANK - required; can be set either here, or in a call to init function The machine with rank 0 will be used to set up all connections. This is the default method, meaning that init_method does not have to be specified (or can be env://). Distributed Key-Value Store The distributed package comes with a distributed key-value store, which can be used to share information between processes in the group as well as to initialize the distributed pacakge in torch.distributed.init_process_group() (by explicitly creating the store as an alternative to specifying init_method.) There are 3 choices for Key-Value Stores: TCPStore, FileStore, and HashStore.
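For concreteness, a minimal sketch of store-based initialization (the host, port, and the rank/world_size variables are illustrative; rank 0 acts as the TCPStore server): import torch.distributed as dist
from datetime import timedelta

# rank 0 hosts the store; every other rank connects to it as a client
store = dist.TCPStore("127.0.0.1", 29500, world_size, rank == 0,
                      timedelta(seconds=30))
dist.init_process_group("gloo", store=store, rank=rank,
                        world_size=world_size)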
class torch.distributed.Store
Base class for all store implementations, such as the 3 provided by PyTorch distributed: (TCPStore, FileStore, and HashStore).
class torch.distributed.TCPStore
A TCP-based distributed key-value store implementation. The server store holds the data, while the client stores can connect to the server store over TCP and perform actions such as set() to insert a key-value pair, get() to retrieve a key-value pair, etc. Parameters
host_name (str) – The hostname or IP Address the server store should run on.
port (int) – The port on which the server store should listen for incoming requests.
world_size (int) – The total number of store users (number of clients + 1 for the server).
is_master (bool) – True when initializing the server store, False for client stores.
timeout (timedelta) – Timeout used by the store during initialization and for methods such as get() and wait(). Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Run on process 1 (server)
>>> server_store = dist.TCPStore("127.0.0.1", 1234, 2, True, timedelta(seconds=30))
>>> # Run on process 2 (client)
>>> client_store = dist.TCPStore("127.0.0.1", 1234, 2, False)
>>> # Use any of the store methods from either the client or server after initialization
>>> server_store.set("first_key", "first_value")
>>> client_store.get("first_key")
class torch.distributed.HashStore
A thread-safe store implementation based on an underlying hashmap. This store can be used within the same process (for example, by other threads), but cannot be used across processes. Example::
>>> import torch.distributed as dist
>>> store = dist.HashStore()
>>> # store can be used from other threads
>>> # Use any of the store methods after initialization
>>> store.set("first_key", "first_value")
class torch.distributed.FileStore
A store implementation that uses a file to store the underlying key-value pairs. Parameters
file_name (str) – path of the file in which to store the key-value pairs
world_size (int) – The total number of processes using the store Example::
>>> import torch.distributed as dist
>>> store1 = dist.FileStore("/tmp/filestore", 2)
>>> store2 = dist.FileStore("/tmp/filestore", 2)
>>> # Use any of the store methods from either the client or server after initialization
>>> store1.set("first_key", "first_value")
>>> store2.get("first_key")
class torch.distributed.PrefixStore
A wrapper around any of the 3 key-value stores (TCPStore, FileStore, and HashStore) that adds a prefix to each key inserted to the store. Parameters
prefix (str) – The prefix string that is prepended to each key before being inserted into the store.
store (torch.distributed.store) – A store object that forms the underlying key-value store.
torch.distributed.Store.set(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str) → None
Inserts the key-value pair into the store based on the supplied key and value. If key already exists in the store, it will overwrite the old value with the new supplied value. Parameters
key (str) – The key to be added to the store.
value (str) – The value associated with key to be added to the store. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # Should return "first_value"
>>> store.get("first_key")
torch.distributed.Store.get(self: torch._C._distributed_c10d.Store, arg0: str) → bytes
Retrieves the value associated with the given key in the store. If key is not present in the store, the function will wait for timeout, which is defined when initializing the store, before throwing an exception. Parameters
key (str) – The function will return the value associated with this key. Returns
Value associated with key if key is in the store. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # Should return "first_value"
>>> store.get("first_key")
torch.distributed.Store.add(self: torch._C._distributed_c10d.Store, arg0: str, arg1: int) → int
The first call to add for a given key creates a counter associated with key in the store, initialized to amount. Subsequent calls to add with the same key increment the counter by the specified amount. Calling add() with a key that has already been set in the store by set() will result in an exception. Parameters
key (str) – The key in the store whose counter will be incremented.
amount (int) – The quantity by which the counter will be incremented. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.add("first_key", 1)
>>> store.add("first_key", 6)
>>> # Should return 7
>>> store.get("first_key")
torch.distributed.Store.wait(*args, **kwargs)
Overloaded function. wait(self: torch._C._distributed_c10d.Store, arg0: List[str]) -> None Waits for each key in keys to be added to the store. If not all keys are set before the timeout (set during store initialization), then wait will throw an exception. Parameters
keys (list) – List of keys on which to wait until they are set in the store. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> # This will throw an exception after 30 seconds
>>> store.wait(["bad_key"])
wait(self: torch._C._distributed_c10d.Store, arg0: List[str], arg1: datetime.timedelta) -> None Waits for each key in keys to be added to the store, and throws an exception if the keys have not been set by the supplied timeout. Parameters
keys (list) – List of keys on which to wait until they are set in the store.
timeout (timedelta) – Time to wait for the keys to be added before throwing an exception. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> # This will throw an exception after 10 seconds
>>> store.wait(["bad_key"], timedelta(seconds=10))
torch.distributed.Store.num_keys(self: torch._C._distributed_c10d.Store) → int
Returns the number of keys set in the store. Note that this number will typically be one greater than the number of keys added by set() and add() since one key is used to coordinate all the workers using the store. Warning When used with the FileStore, num_keys returns the number of keys written to the underlying file. If the store is destructed and another store is created with the same file, the original keys will be retained. Returns
The number of keys present in the store. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key", "first_value")
>>> # This should return 2
>>> store.num_keys()
torch.distributed.Store.delete_key(self: torch._C._distributed_c10d.Store, arg0: str) → bool
Deletes the key-value pair associated with key from the store. Returns true if the key was successfully deleted, and false if it was not. Warning The delete_key API is only supported by the TCPStore and HashStore. Using this API with the FileStore will result in an exception. Parameters
key (str) – The key to be deleted from the store Returns
True if key was deleted, otherwise False. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, HashStore can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set("first_key")
>>> # This should return true
>>> store.delete_key("first_key")
>>> # This should return false
>>> store.delete_key("bad_key")
torch.distributed.Store.set_timeout(self: torch._C._distributed_c10d.Store, arg0: datetime.timedelta) → None
Sets the store’s default timeout. This timeout is used during initialization and in wait() and get(). Parameters
timeout (timedelta) – timeout to be set in the store. Example::
>>> import torch.distributed as dist
>>> from datetime import timedelta
>>> # Using TCPStore as an example, other store types can also be used
>>> store = dist.TCPStore("127.0.0.1", 0, 1, True, timedelta(seconds=30))
>>> store.set_timeout(timedelta(seconds=10))
>>> # This will throw an exception after 10 seconds
>>> store.wait(["bad_key"])
Groups By default collectives operate on the default group (also called the world) and require all processes to enter the distributed function call. However, some workloads can benefit from more fine-grained communication. This is where distributed groups come into play. new_group() function can be used to create new groups, with arbitrary subsets of all processes. It returns an opaque group handle that can be given as a group argument to all collectives (collectives are distributed functions to exchange information in certain well-known programming patterns).
torch.distributed.new_group(ranks=None, timeout=datetime.timedelta(seconds=1800), backend=None) [source]
Creates a new distributed group. This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members of the group. Additionally, groups should be created in the same order in all processes. Warning Using multiple process groups with the NCCL backend concurrently is not safe and the user should perform explicit synchronization in their application to ensure only one process group is used at a time. This means collectives from one process group should have completed execution on the device (not just enqueued since CUDA execution is async) before collectives from another process group are enqueued. See Using multiple NCCL communicators concurrently for more details. Parameters
ranks (list[int]) – List of ranks of group members. If None, will be set to all ranks. Default is None.
timeout (timedelta, optional) – Timeout for operations executed against the process group. Default value equals 30 minutes. This is only applicable for the gloo backend.
backend (str or Backend, optional) – The backend to use. Depending on build-time configurations, valid values are gloo and nccl. By default uses the same backend as the global group. This field should be given as a lowercase string (e.g., "gloo"), which can also be accessed via Backend attributes (e.g., Backend.GLOO). Returns
A handle of distributed group that can be given to collective calls.
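As a sketch of subgroup usage (assumes a 4-process default group is already initialized; the rank list is arbitrary): import torch
import torch.distributed as dist

# Every process in the job must call new_group(), even ranks 2 and 3,
# which will not be members of the new group.
group = dist.new_group(ranks=[0, 1])
tensor = torch.ones(1)
if dist.get_rank() in (0, 1):
    # This collective only involves ranks 0 and 1.
    dist.all_reduce(tensor, group=group)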
Point-to-point communication
torch.distributed.send(tensor, dst, group=None, tag=0) [source]
Sends a tensor synchronously. Parameters
tensor (Tensor) – Tensor to send.
dst (int) – Destination rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to match send with remote recv
torch.distributed.recv(tensor, src=None, group=None, tag=0) [source]
Receives a tensor synchronously. Parameters
tensor (Tensor) – Tensor to fill with received data.
src (int, optional) – Source rank. Will receive from any process if unspecified.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to match recv with remote send Returns
Sender rank -1, if not part of the group
isend() and irecv() return distributed request objects when used. In general, the type of this object is unspecified as they should never be created manually, but they are guaranteed to support two methods:
is_completed() - returns True if the operation has finished
wait() - will block the process until the operation is finished. is_completed() is guaranteed to return True once it returns.
torch.distributed.isend(tensor, dst, group=None, tag=0) [source]
Sends a tensor asynchronously. Parameters
tensor (Tensor) – Tensor to send.
dst (int) – Destination rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to match send with remote recv Returns
A distributed request object. None, if not part of the group
torch.distributed.irecv(tensor, src=None, group=None, tag=0) [source]
Receives a tensor asynchronously. Parameters
tensor (Tensor) – Tensor to fill with received data.
src (int, optional) – Source rank. Will receive from any process if unspecified.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to match recv with remote send Returns
A distributed request object. None, if not part of the group
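A minimal point-to-point sketch combining these calls (assumes a 2-process group initialized elsewhere): import torch
import torch.distributed as dist

if dist.get_rank() == 0:
    req = dist.isend(torch.ones(4), dst=1)
else:
    buf = torch.zeros(4)
    req = dist.irecv(buf, src=0)
req.wait()  # block until the transfer has completed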
Synchronous and asynchronous collective operations Every collective operation function supports the following two kinds of operations, depending on the setting of the async_op flag passed into the collective: Synchronous operation - the default mode, when async_op is set to False. When the function returns, it is guaranteed that the collective operation is performed. In the case of CUDA operations, it is not guaranteed that the CUDA operation is completed, since CUDA operations are asynchronous. For CPU collectives, any further function calls utilizing the output of the collective call will behave as expected. For CUDA collectives, function calls utilizing the output on the same CUDA stream will behave as expected. Users must take care of synchronization under the scenario of running under different streams. For details on CUDA semantics such as stream synchronization, see CUDA Semantics. See the below script to see examples of differences in these semantics for CPU and CUDA operations. Asynchronous operation - when async_op is set to True. The collective operation function returns a distributed request object. In general, you don’t need to create it manually and it is guaranteed to support two methods:
is_completed() - in the case of CPU collectives, returns True if completed. In the case of CUDA operations, returns True if the operation has been successfully enqueued onto a CUDA stream and the output can be utilized on the default stream without further synchronization.
wait() - in the case of CPU collectives, will block the process until the operation is completed. In the case of CUDA collectives, will block until the operation has been successfully enqueued onto a CUDA stream and the output can be utilized on the default stream without further synchronization. Example The following code can serve as a reference regarding semantics for CUDA operations when using distributed collectives. It shows the explicit need to synchronize when using collective outputs on different CUDA streams: # Code runs on each rank.
dist.init_process_group("nccl", rank=rank, world_size=2)
output = torch.tensor([rank]).cuda(rank)
s = torch.cuda.Stream()
handle = dist.all_reduce(output, async_op=True)
# Wait ensures the operation is enqueued, but not necessarily complete.
handle.wait()
# Using result on non-default stream.
with torch.cuda.stream(s):
s.wait_stream(torch.cuda.default_stream())
output.add_(100)
if rank == 0:
# if the explicit call to wait_stream was omitted, the output below will be
# non-deterministically 1 or 101, depending on whether the allreduce overwrote
# the value after the add completed.
print(output)
Collective functions
torch.distributed.broadcast(tensor, src, group=None, async_op=False) [source]
Broadcasts the tensor to the whole group. tensor must have the same number of elements in all processes participating in the collective. Parameters
tensor (Tensor) – Data to be sent if src is the rank of current process, and tensor to be used to save received data otherwise.
src (int) – Source rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
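For illustration, a sketch in the style of the other collective examples (2 ranks, default group initialized elsewhere):
>>> # Rank 0 holds the data; the other rank provides a same-shaped buffer.
>>> if dist.get_rank() == 0:
>>>     tensor = torch.arange(2)
>>> else:
>>>     tensor = torch.zeros(2, dtype=torch.int64)
>>> dist.broadcast(tensor, src=0)
>>> tensor  # identical on every rank after the call
tensor([0, 1])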
torch.distributed.broadcast_object_list(object_list, src=0, group=None) [source]
Broadcasts picklable objects in object_list to the whole group. Similar to broadcast(), but Python objects can be passed in. Note that all objects in object_list must be picklable in order to be broadcasted. Parameters
object_list (List[Any]) – List of input objects to broadcast. Each object must be picklable. Only objects on the src rank will be broadcast, but each rank must provide lists of equal sizes.
src (int) – Source rank from which to broadcast object_list.
group – (ProcessGroup, optional): The process group to work on. If None, the default process group will be used. Default is None. Returns
None. If rank is part of the group, object_list will contain the broadcasted objects from src rank. Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Note Note that this API differs slightly from the all_gather() collective since it does not provide an async_op handle and thus will be a blocking call. Warning broadcast_object_list() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Example::
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> if dist.get_rank() == 0:
>>> # Assumes world_size of 3.
>>> objects = ["foo", 12, {1: 2}] # any picklable object
>>> else:
>>> objects = [None, None, None]
>>> dist.broadcast_object_list(objects, src=0)
>>> objects
['foo', 12, {1: 2}]
torch.distributed.all_reduce(tensor, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces the tensor data across all machines in such a way that all get the final result. After the call tensor is going to be bitwise identical in all processes. Complex tensors are supported. Parameters
tensor (Tensor) – Input and output of the collective. The function operates in-place.
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Examples >>> # All tensors below are of torch.int64 type.
>>> # We have 2 process groups, 2 ranks.
>>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> tensor
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> dist.all_reduce(tensor, op=ReduceOp.SUM)
>>> tensor
tensor([4, 6]) # Rank 0
tensor([4, 6]) # Rank 1
>>> # All tensors below are of torch.cfloat type.
>>> # We have 2 process groups, 2 ranks.
>>> tensor = torch.tensor([1+1j, 2+2j], dtype=torch.cfloat) + 2 * rank * (1+1j)
>>> tensor
tensor([1.+1.j, 2.+2.j]) # Rank 0
tensor([3.+3.j, 4.+4.j]) # Rank 1
>>> dist.all_reduce(tensor, op=ReduceOp.SUM)
>>> tensor
tensor([4.+4.j, 6.+6.j]) # Rank 0
tensor([4.+4.j, 6.+6.j]) # Rank 1
torch.distributed.reduce(tensor, dst, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces the tensor data across all machines. Only the process with rank dst is going to receive the final result. Parameters
tensor (Tensor) – Input and output of the collective. The function operates in-place.
dst (int) – Destination rank
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
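A brief sketch (2 ranks, default group initialized elsewhere; after the call, only the destination rank is guaranteed to hold the final result, so other ranks should not rely on the tensor's contents):
>>> tensor = torch.ones(2) * (dist.get_rank() + 1)
>>> dist.reduce(tensor, dst=0, op=dist.ReduceOp.SUM)
>>> # On rank 0: tensor([3., 3.])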
torch.distributed.all_gather(tensor_list, tensor, group=None, async_op=False) [source]
Gathers tensors from the whole group in a list. Complex tensors are supported. Parameters
tensor_list (list[Tensor]) – Output list. It should contain correctly-sized tensors to be used for output of the collective.
tensor (Tensor) – Tensor to be broadcast from current process.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group Examples >>> # All tensors below are of torch.int64 dtype.
>>> # We have 2 process groups, 2 ranks.
>>> tensor_list = [torch.zeros(2, dtype=torch.int64) for _ in range(2)]
>>> tensor_list
[tensor([0, 0]), tensor([0, 0])] # Rank 0 and 1
>>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> tensor
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> dist.all_gather(tensor_list, tensor)
>>> tensor_list
[tensor([1, 2]), tensor([3, 4])] # Rank 0
[tensor([1, 2]), tensor([3, 4])] # Rank 1
>>> # All tensors below are of torch.cfloat dtype.
>>> # We have 2 process groups, 2 ranks.
>>> tensor_list = [torch.zeros(2, dtype=torch.cfloat) for _ in range(2)]
>>> tensor_list
[tensor([0.+0.j, 0.+0.j]), tensor([0.+0.j, 0.+0.j])] # Rank 0 and 1
>>> tensor = torch.tensor([1+1j, 2+2j], dtype=torch.cfloat) + 2 * rank * (1+1j)
>>> tensor
tensor([1.+1.j, 2.+2.j]) # Rank 0
tensor([3.+3.j, 4.+4.j]) # Rank 1
>>> dist.all_gather(tensor_list, tensor)
>>> tensor_list
[tensor([1.+1.j, 2.+2.j]), tensor([3.+3.j, 4.+4.j])] # Rank 0
[tensor([1.+1.j, 2.+2.j]), tensor([3.+3.j, 4.+4.j])] # Rank 1
torch.distributed.all_gather_object(object_list, obj, group=None) [source]
Gathers picklable objects from the whole group into a list. Similar to all_gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters
object_list (list[Any]) – Output list. It should be correctly sized as the size of the group for this collective and will contain the output.
obj (Any) – Picklable Python object to be broadcast from current process.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Default is None. Returns
None. If the calling rank is part of this group, the output of the collective will be populated into the input object_list. If the calling rank is not part of the group, the passed in object_list will be unmodified. Note Note that this API differs slightly from the all_gather() collective since it does not provide an async_op handle and thus will be a blocking call. Note For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication takes place. In this case, the device used is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is set so that each rank has an individual GPU, via torch.cuda.set_device(). Warning all_gather_object() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Example::
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> # Assumes world_size of 3.
>>> gather_objects = ["foo", 12, {1: 2}] # any picklable object
>>> output = [None for _ in gather_objects]
>>> dist.all_gather_object(output, gather_objects[dist.get_rank()])
>>> output
['foo', 12, {1: 2}]
torch.distributed.gather(tensor, gather_list=None, dst=0, group=None, async_op=False) [source]
Gathers a list of tensors in a single process. Parameters
tensor (Tensor) – Input tensor.
gather_list (list[Tensor], optional) – List of appropriately-sized tensors to use for gathered data (default is None, must be specified on the destination rank)
dst (int, optional) – Destination rank (default is 0)
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
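A short sketch (2 ranks; the gather_list buffer is only required on the destination rank):
>>> tensor = torch.tensor([dist.get_rank()])
>>> if dist.get_rank() == 0:
>>>     gather_list = [torch.zeros(1, dtype=torch.int64) for _ in range(2)]
>>> else:
>>>     gather_list = None
>>> dist.gather(tensor, gather_list=gather_list, dst=0)
>>> # On rank 0: gather_list == [tensor([0]), tensor([1])]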
torch.distributed.gather_object(obj, object_gather_list=None, dst=0, group=None) [source]
Gathers picklable objects from the whole group in a single process. Similar to gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters
obj (Any) – Input object. Must be picklable.
object_gather_list (list[Any]) – Output list. On the dst rank, it should be correctly sized as the size of the group for this collective and will contain the output. Must be None on non-dst ranks. (default is None)
dst (int, optional) – Destination rank. (default is 0)
group – (ProcessGroup, optional): The process group to work on. If None, the default process group will be used. Default is None. Returns
None. On the dst rank, object_gather_list will contain the output of the collective. Note Note that this API differs slightly from the gather collective since it does not provide an async_op handle and thus will be a blocking call. Note Note that this API is not supported when using the NCCL backend. Warning gather_object() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Example::
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> # Assumes world_size of 3.
>>> gather_objects = ["foo", 12, {1: 2}] # any picklable object
>>> output = [None for _ in gather_objects]
>>> dist.gather_object(
gather_objects[dist.get_rank()],
output if dist.get_rank() == 0 else None,
dst=0
)
>>> # On rank 0
>>> output
['foo', 12, {1: 2}]
torch.distributed.scatter(tensor, scatter_list=None, src=0, group=None, async_op=False) [source]
Scatters a list of tensors to all processes in a group. Each process will receive exactly one tensor and store its data in the tensor argument. Parameters
tensor (Tensor) – Output tensor.
scatter_list (list[Tensor]) – List of tensors to scatter (default is None, must be specified on the source rank)
src (int) – Source rank (default is 0)
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
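A short sketch (2 ranks; scatter_list is only required on the source rank):
>>> tensor = torch.zeros(1)
>>> if dist.get_rank() == 0:
>>>     scatter_list = [torch.tensor([1.0]), torch.tensor([2.0])]
>>> else:
>>>     scatter_list = None
>>> dist.scatter(tensor, scatter_list=scatter_list, src=0)
>>> tensor  # tensor([1.]) on rank 0, tensor([2.]) on rank 1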
torch.distributed.scatter_object_list(scatter_object_output_list, scatter_object_input_list, src=0, group=None) [source]
Scatters picklable objects in scatter_object_input_list to the whole group. Similar to scatter(), but Python objects can be passed in. On each rank, the scattered object will be stored as the first element of scatter_object_output_list. Note that all objects in scatter_object_input_list must be picklable in order to be scattered. Parameters
scatter_object_output_list (List[Any]) – Non-empty list whose first element will store the object scattered to this rank.
scatter_object_input_list (List[Any]) – List of input objects to scatter. Each object must be picklable. Only objects on the src rank will be scattered, and the argument can be None for non-src ranks.
src (int) – Source rank from which to scatter scatter_object_input_list.
group – (ProcessGroup, optional): The process group to work on. If None, the default process group will be used. Default is None. Returns
None. If rank is part of the group, scatter_object_output_list will have its first element set to the scattered object for this rank. Note Note that this API differs slightly from the scatter collective since it does not provide an async_op handle and thus will be a blocking call. Warning scatter_object_list() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Only call this function with data you trust. Example::
>>> # Note: Process group initialization omitted on each rank.
>>> import torch.distributed as dist
>>> if dist.get_rank() == 0:
>>> # Assumes world_size of 3.
>>> objects = ["foo", 12, {1: 2}] # any picklable object
>>> else:
>>> # Can be any list on non-src ranks, elements are not used.
>>> objects = [None, None, None]
>>> output_list = [None]
>>> dist.scatter_object_list(output_list, objects, src=0)
>>> # Rank i gets objects[i]. For example, on rank 2:
>>> output_list
[{1: 2}]
torch.distributed.reduce_scatter(output, input_list, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces, then scatters a list of tensors to all processes in a group. Parameters
output (Tensor) – Output tensor.
input_list (list[Tensor]) – List of tensors to reduce and scatter.
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op. Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group.
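A sketch assuming 2 ranks on the nccl backend (the rank variable is illustrative; each rank contributes a list of chunks and receives one reduced chunk):
>>> input_list = [torch.ones(2).cuda(rank) for _ in range(2)]
>>> output = torch.empty(2).cuda(rank)
>>> dist.reduce_scatter(output, input_list, op=dist.ReduceOp.SUM)
>>> output  # tensor([2., 2.]) on every rank: chunk i is summed across ranks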
torch.distributed.all_to_all(output_tensor_list, input_tensor_list, group=None, async_op=False) [source]
Each process scatters a list of input tensors to all processes in a group and returns the gathered list of tensors in the output list. Parameters
output_tensor_list (list[Tensor]) – List of tensors to be gathered one per rank.
input_tensor_list (list[Tensor]) – List of tensors to scatter one per rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op. Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group. Warning all_to_all is experimental and subject to change. Examples >>> input = torch.arange(4) + rank * 4
>>> input = list(input.chunk(4))
>>> input
[tensor([0]), tensor([1]), tensor([2]), tensor([3])] # Rank 0
[tensor([4]), tensor([5]), tensor([6]), tensor([7])] # Rank 1
[tensor([8]), tensor([9]), tensor([10]), tensor([11])] # Rank 2
[tensor([12]), tensor([13]), tensor([14]), tensor([15])] # Rank 3
>>> output = list(torch.empty([4], dtype=torch.int64).chunk(4))
>>> dist.all_to_all(output, input)
>>> output
[tensor([0]), tensor([4]), tensor([8]), tensor([12])] # Rank 0
[tensor([1]), tensor([5]), tensor([9]), tensor([13])] # Rank 1
[tensor([2]), tensor([6]), tensor([10]), tensor([14])] # Rank 2
[tensor([3]), tensor([7]), tensor([11]), tensor([15])] # Rank 3
>>> # Essentially, it is similar to following operation:
>>> scatter_list = input
>>> gather_list = output
>>> for i in range(world_size):
>>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src = i)
>>> input
tensor([0, 1, 2, 3, 4, 5]) # Rank 0
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1
tensor([20, 21, 22, 23, 24]) # Rank 2
tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3
>>> input_splits
[2, 2, 1, 1] # Rank 0
[3, 2, 2, 2] # Rank 1
[2, 1, 1, 1] # Rank 2
[2, 2, 2, 1] # Rank 3
>>> output_splits
[2, 3, 2, 2] # Rank 0
[2, 2, 1, 2] # Rank 1
[1, 2, 1, 2] # Rank 2
[1, 2, 1, 1] # Rank 3
>>> input = list(input.split(input_splits))
>>> input
[tensor([0, 1]), tensor([2, 3]), tensor([4]), tensor([5])] # Rank 0
[tensor([10, 11, 12]), tensor([13, 14]), tensor([15, 16]), tensor([17, 18])] # Rank 1
[tensor([20, 21]), tensor([22]), tensor([23]), tensor([24])] # Rank 2
[tensor([30, 31]), tensor([32, 33]), tensor([34, 35]), tensor([36])] # Rank 3
>>> output = ...
>>> dist.all_to_all(output, input)
>>> output
[tensor([0, 1]), tensor([10, 11, 12]), tensor([20, 21]), tensor([30, 31])] # Rank 0
[tensor([2, 3]), tensor([13, 14]), tensor([22]), tensor([32, 33])] # Rank 1
[tensor([4]), tensor([15, 16]), tensor([23]), tensor([34, 35])] # Rank 2
[tensor([5]), tensor([17, 18]), tensor([24]), tensor([36])] # Rank 3
torch.distributed.barrier(group=None, async_op=False, device_ids=None) [source]
Synchronizes all processes. This collective blocks processes until the whole group enters this function if async_op is False, or, if async_op is True, until wait() is called on the returned work handle. Parameters
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op
device_ids ([int], optional) – List of device/GPU ids. Valid only for NCCL backend. Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
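For example, a sketch that lets rank 0 finish a setup step before the other ranks proceed (prepare_data is a hypothetical function):
>>> if dist.get_rank() == 0:
>>>     prepare_data()  # hypothetical one-time setup on rank 0
>>> dist.barrier()  # all ranks wait here until everyone has arrived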
class torch.distributed.ReduceOp
An enum-like class for available reduction operations: SUM, PRODUCT, MIN, MAX, BAND, BOR, and BXOR. Note that BAND, BOR, and BXOR reductions are not available when using the NCCL backend. Additionally, MAX, MIN and PRODUCT are not supported for complex tensors. The values of this class can be accessed as attributes, e.g., ReduceOp.SUM. They are used in specifying strategies for reduction collectives, e.g., reduce(), all_reduce_multigpu(), etc. Members: SUM PRODUCT MIN MAX BAND BOR BXOR
class torch.distributed.reduce_op
Deprecated enum-like class for reduction operations: SUM, PRODUCT, MIN, and MAX. Using ReduceOp is recommended instead.
Autograd-enabled communication primitives If you want to use collective communication functions supporting autograd, you can find an implementation of those in the torch.distributed.nn.* module. Functions there are synchronous and will be inserted in the autograd graph, so you need to ensure that every process that participated in the collective operation also performs the backward pass; otherwise the backward communication cannot complete and may deadlock. Please note that currently the only backend where all the functions are guaranteed to work is gloo. The available functions are torch.distributed.nn.broadcast, torch.distributed.nn.gather, torch.distributed.nn.scatter, torch.distributed.nn.reduce, torch.distributed.nn.all_gather, torch.distributed.nn.all_to_all, and torch.distributed.nn.all_reduce. Multi-GPU collective functions If you have more than one GPU on each node, when using the NCCL and Gloo backends, broadcast_multigpu(), all_reduce_multigpu(), reduce_multigpu(), all_gather_multigpu(), and reduce_scatter_multigpu() support distributed collective operations among multiple GPUs within each node. These functions can potentially improve the overall distributed training performance and are easily used by passing a list of tensors. Each Tensor in the passed tensor list needs to be on a separate GPU device of the host where the function is called. Note that the length of the tensor list needs to be identical among all the distributed processes. Also note that currently the multi-GPU collective functions are only supported by the NCCL backend. For example, suppose the system we use for distributed training has 2 nodes, each of which has 8 GPUs. On each of the 16 GPUs, there is a tensor that we would like to all-reduce. The following code can serve as a reference: Code running on Node 0 import torch
import torch.distributed as dist
dist.init_process_group(backend="nccl",
init_method="file:///distributed_test",
world_size=2,
rank=0)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))
dist.all_reduce_multigpu(tensor_list)
Code running on Node 1 import torch
import torch.distributed as dist
dist.init_process_group(backend="nccl",
init_method="file:///distributed_test",
world_size=2,
rank=1)
tensor_list = []
for dev_idx in range(torch.cuda.device_count()):
tensor_list.append(torch.FloatTensor([1]).cuda(dev_idx))
dist.all_reduce_multigpu(tensor_list)
After the call, all 16 tensors on the two nodes will have the all-reduced value of 16.
torch.distributed.broadcast_multigpu(tensor_list, src, group=None, async_op=False, src_tensor=0) [source]
Broadcasts the tensor to the whole group with multiple GPU tensors per node. tensor must have the same number of elements in all the GPUs from all processes participating in the collective. Each tensor in the list must be on a different GPU. Only the nccl and gloo backends are currently supported; tensors should only be GPU tensors. Parameters
tensor_list (List[Tensor]) – Tensors that participate in the collective operation. If src is the rank, then the specified src_tensor element of tensor_list (tensor_list[src_tensor]) will be broadcast to all other tensors (on different GPUs) in the src process and all tensors in tensor_list of other non-src processes. You also need to make sure that len(tensor_list) is the same for all the distributed processes calling this function.
src (int) – Source rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op
src_tensor (int, optional) – Source tensor rank within tensor_list
Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
torch.distributed.all_reduce_multigpu(tensor_list, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces the tensor data across all machines in such a way that all get the final result. This function reduces a number of tensors on every node, while each tensor resides on a different GPU. Therefore, the input tensors in the tensor list need to be GPU tensors. Also, each tensor in the tensor list needs to reside on a different GPU. After the call, every tensor in tensor_list is going to be bitwise identical in all processes. Complex tensors are supported. Only the nccl and gloo backends are currently supported; tensors should only be GPU tensors. Parameters
tensor_list (List[Tensor]) – List of input and output tensors of the collective. The function operates in-place and requires each tensor to be a GPU tensor residing on a different GPU. You also need to make sure that len(tensor_list) is the same for all the distributed processes calling this function.
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
torch.distributed.reduce_multigpu(tensor_list, dst, op=<ReduceOp.SUM: 0>, group=None, async_op=False, dst_tensor=0) [source]
Reduces the tensor data on multiple GPUs across all machines. Each tensor in tensor_list should reside on a separate GPU. Only the GPU of tensor_list[dst_tensor] on the process with rank dst is going to receive the final result. Only the nccl backend is currently supported; tensors should only be GPU tensors. Parameters
tensor_list (List[Tensor]) – Input and output GPU tensors of the collective. The function operates in-place. You also need to make sure that len(tensor_list) is the same for all the distributed processes calling this function.
dst (int) – Destination rank
op (optional) – One of the values from torch.distributed.ReduceOp enum. Specifies an operation used for element-wise reductions.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op
dst_tensor (int, optional) – Destination tensor rank within tensor_list
Returns
Async work handle, if async_op is set to True. None, otherwise
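A minimal sketch, assuming an initialized nccl process group; after the call, only tensor_list[0] (dst_tensor=0) on rank 0 is guaranteed to hold the summed result: import torch
import torch.distributed as dist
tensor_list = [torch.FloatTensor([1]).cuda(i)
               for i in range(torch.cuda.device_count())]
dist.reduce_multigpu(tensor_list, dst=0)  # op defaults to ReduceOp.SUM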
torch.distributed.all_gather_multigpu(output_tensor_lists, input_tensor_list, group=None, async_op=False) [source]
Gathers tensors from the whole group in a list. Each tensor in input_tensor_list should reside on a separate GPU. Only the nccl backend is currently supported; tensors must be GPU tensors. Complex tensors are supported. Parameters
output_tensor_lists (List[List[Tensor]]) –
Output lists. It should contain correctly-sized tensors on each GPU to be used for output of the collective, e.g. output_tensor_lists[i] contains the all_gather result that resides on the GPU of input_tensor_list[i]. Note that each element of output_tensor_lists has the size of world_size * len(input_tensor_list), since the function all-gathers the result from every single GPU in the group. To interpret each element of output_tensor_lists[i], note that input_tensor_list[j] of rank k will appear in output_tensor_lists[i][k * len(input_tensor_list) + j]. Also note that len(output_tensor_lists), and the size of each element in output_tensor_lists (each element is a list, therefore len(output_tensor_lists[i])) need to be the same for all the distributed processes calling this function.
input_tensor_list (List[Tensor]) – List of tensors (on different GPUs) to be broadcast from the current process. Note that len(input_tensor_list) needs to be the same for all the distributed processes calling this function.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op. Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group
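A sketch of the required shapes, assuming an initialized nccl group with world_size processes and the same GPU count on every node (values illustrative): import torch
import torch.distributed as dist
world_size = dist.get_world_size()
num_gpus = torch.cuda.device_count()
input_tensor_list = [torch.FloatTensor([dist.get_rank()]).cuda(i)
                     for i in range(num_gpus)]
# Each output list lives on one GPU and holds world_size * num_gpus slots.
output_tensor_lists = [[torch.zeros(1).cuda(i)
                        for _ in range(world_size * num_gpus)]
                       for i in range(num_gpus)]
dist.all_gather_multigpu(output_tensor_lists, input_tensor_list)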
torch.distributed.reduce_scatter_multigpu(output_tensor_list, input_tensor_lists, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduce and scatter a list of tensors to the whole group. Only the nccl backend is currently supported. Each tensor in output_tensor_list should reside on a separate GPU, as should each list of tensors in input_tensor_lists. Parameters
output_tensor_list (List[Tensor]) –
Output tensors (on different GPUs) to receive the result of the operation. Note that len(output_tensor_list) needs to be the same for all the distributed processes calling this function.
input_tensor_lists (List[List[Tensor]]) –
Input lists. It should contain correctly-sized tensors on each GPU to be used for input of the collective, e.g. input_tensor_lists[i] contains the reduce_scatter input that resides on the GPU of output_tensor_list[i]. Note that each element of input_tensor_lists has the size of world_size * len(output_tensor_list), since the function scatters the result from every single GPU in the group. To interpret each element of input_tensor_lists[i], note that output_tensor_list[j] of rank k receives the reduce-scattered result from input_tensor_lists[i][k * len(output_tensor_list) + j]. Also note that len(input_tensor_lists), and the size of each element in input_tensor_lists (each element is a list, therefore len(input_tensor_lists[i])) need to be the same for all the distributed processes calling this function.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
async_op (bool, optional) – Whether this op should be an async op. Returns
Async work handle, if async_op is set to True. None, if not async_op or if not part of the group.
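A shape sketch under the same assumptions as above (initialized nccl group, identical GPU counts per node; values illustrative): import torch
import torch.distributed as dist
world_size = dist.get_world_size()
num_gpus = torch.cuda.device_count()
output_tensor_list = [torch.zeros(1).cuda(i) for i in range(num_gpus)]
# Each input list lives on one GPU and holds world_size * num_gpus tensors.
input_tensor_lists = [[torch.ones(1).cuda(i)
                       for _ in range(world_size * num_gpus)]
                      for i in range(num_gpus)]
dist.reduce_scatter_multigpu(output_tensor_list, input_tensor_lists)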
Third-party backends Besides the GLOO/MPI/NCCL backends, PyTorch distributed supports third-party backends through a run-time registration mechanism. For references on how to develop a third-party backend through C++ Extension, please refer to Tutorials - Custom C++ and CUDA Extensions and test/cpp_extensions/cpp_c10d_extension.cpp. The capabilities of third-party backends are decided by their own implementations. The new backend derives from c10d.ProcessGroup and registers the backend name and the instantiating interface through torch.distributed.Backend.register_backend() when imported. When manually importing this backend and invoking torch.distributed.init_process_group() with the corresponding backend name, the torch.distributed package runs on the new backend. Warning Support for third-party backends is experimental and subject to change. Launch utility The torch.distributed package also provides a launch utility in torch.distributed.launch. This helper utility can be used to launch multiple processes per node for distributed training. torch.distributed.launch is a module that spawns multiple distributed training processes on each of the training nodes. The utility can be used for single-node distributed training, in which one or more processes per node will be spawned. The utility can be used for either CPU training or GPU training. If the utility is used for GPU training, each distributed process will be operating on a single GPU. This can achieve well-improved single-node training performance. It can also be used in multi-node distributed training, by spawning multiple processes on each node, for well-improved multi-node distributed training performance as well. This will especially be beneficial for systems with multiple InfiniBand interfaces that have direct-GPU support, since all of them can be utilized for aggregated communication bandwidth. In both cases of single-node distributed training or multi-node distributed training, this utility will launch the given number of processes per node (--nproc_per_node). If used for GPU training, this number needs to be less than or equal to the number of GPUs on the current system (nproc_per_node), and each process will be operating on a single GPU from GPU 0 to GPU (nproc_per_node - 1). How to use this module: Single-Node multi-process distributed training >>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other
arguments of your training script)
Multi-Node multi-process distributed training: (e.g. two nodes) Node 1: (IP: 192.168.1.1, and has a free port: 1234) >>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
--nnodes=2 --node_rank=0 --master_addr="192.168.1.1"
--master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
and all other arguments of your training script)
Node 2: >>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
--nnodes=2 --node_rank=1 --master_addr="192.168.1.1"
--master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
and all other arguments of your training script)
To look up what optional arguments this module offers: >>> python -m torch.distributed.launch --help
Important Notices: 1. This utility and multi-process distributed (single-node or multi-node) GPU training currently only achieves the best performance using the NCCL distributed backend. Thus the NCCL backend is the recommended backend to use for GPU training. 2. In your training program, you must parse the command-line argument --local_rank=LOCAL_PROCESS_RANK, which will be provided by this module. If your training program uses GPUs, you should ensure that your code only runs on the GPU device of LOCAL_PROCESS_RANK. This can be done by: Parsing the local_rank argument >>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument("--local_rank", type=int)
>>> args = parser.parse_args()
Set your device to local rank using either >>> torch.cuda.set_device(args.local_rank) # before your code runs
or >>> with torch.cuda.device(args.local_rank):
>>> # your code to run
3. In your training program, you are supposed to call the following function at the beginning to start the distributed backend. You need to make sure that the init_method uses env://, which is the only init_method supported by this module. torch.distributed.init_process_group(backend='YOUR BACKEND',
init_method='env://')
4. In your training program, you can either use regular distributed functions or use torch.nn.parallel.DistributedDataParallel() module. If your training program uses GPUs for training and you would like to use torch.nn.parallel.DistributedDataParallel() module, here is how to configure it. model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.local_rank],
output_device=args.local_rank)
Please ensure that the device_ids argument is set to the only GPU device id that your code will be operating on. This is generally the local rank of the process. In other words, device_ids needs to be [args.local_rank], and output_device needs to be args.local_rank in order to use this utility. 5. Another way to pass local_rank to the subprocesses is via the environment variable LOCAL_RANK. This behavior is enabled when you launch the script with --use_env=True. You must adjust the subprocess example above to replace args.local_rank with os.environ['LOCAL_RANK']; the launcher will not pass --local_rank when you specify this flag. Warning local_rank is NOT globally unique: it is only unique per process on a machine. Thus, don’t use it to decide if you should, e.g., write to a networked filesystem. See https://github.com/pytorch/pytorch/issues/12042 for an example of how things can go wrong if you don’t do this correctly. Spawn utility The Multiprocessing package - torch.multiprocessing also provides a spawn function in torch.multiprocessing.spawn(). This helper function can be used to spawn multiple processes. It works by passing in the function that you want to run and spawns N processes to run it. This can be used for multiprocess distributed training as well. For references on how to use it, please refer to the PyTorch example - ImageNet implementation. Note that this function requires Python 3.4 or higher. | |
doc_534 |
Return the canvas width and height in display coords. | |
doc_535 | Contains the Python system version, in a form usable by the version_string method and the server_version class variable. For example, 'Python/1.4'. | |
doc_536 | See Migration guide for more details. tf.compat.v1.nest.assert_same_structure
tf.nest.assert_same_structure(
nest1, nest2, check_types=True, expand_composites=False
)
Note that namedtuples with identical name and fields are always considered to have the same shallow structure (even with check_types=True). For instance, this code completes without raising: import collections
def nt(a, b):
    return collections.namedtuple('foo', 'a b')(a, b)
assert_same_structure(nt(0, 1), nt(2, 3))
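A short sketch of the failure mode as well; the mismatched nesting below raises ValueError (the values are illustrative): import tensorflow as tf
tf.nest.assert_same_structure({'a': 1, 'b': 2}, {'a': 0, 'b': 0})  # passes silently
try:
    tf.nest.assert_same_structure([1, 2, 3], [1, [2, 3]])
except ValueError as e:
    print(e)  # the two structures are not nested the same way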
Args
nest1 an arbitrarily nested structure.
nest2 an arbitrarily nested structure.
check_types if True (default) types of sequences are checked as well, including the keys of dictionaries. If set to False, for example a list and a tuple of objects will look the same if they have the same size. Note that namedtuples with identical name and fields are always considered to have the same shallow structure. Two types will also be considered the same if they are both list subtypes (which allows "list" and "_ListWrapper" from trackable dependency tracking to compare equal).
expand_composites If true, then composite tensors such as tf.sparse.SparseTensor and tf.RaggedTensor are expanded into their component tensors.
Raises
ValueError If the two structures do not have the same number of elements or if the two structures are not nested in the same way.
TypeError If the two structures differ in the type of sequence in any of their substructures. Only possible if check_types is True. | |
doc_537 | Returns the currently-set application callable. | |
doc_538 |
Bases: matplotlib.axis.YTick A radial-axis tick. This subclass of YTick provides radial ticks with some small modification to their re-positioning such that ticks are rotated based on axes limits. This results in ticks that are correctly perpendicular to the spine. Labels are also rotated to be perpendicular to the spine, when 'auto' rotation is enabled. bbox is the Bound2D bounding box in display coords of the Axes. loc is the tick location in data coords. size is the tick size in points. set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, in_layout=<UNSET>, label=<UNSET>, label1=<UNSET>, label2=<UNSET>, pad=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
in_layout bool
label str
label1 str
label2 str
pad float
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform Transform
url str
visible bool
zorder float
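A hedged usage sketch of set on the radial ticks of a polar Axes (the property values are illustrative): import matplotlib.pyplot as plt
fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
for tick in ax.yaxis.get_major_ticks():  # these are RadialTick instances
    tick.set(pad=8, visible=True)        # set several properties in one call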
update_position(loc)[source]
Set the location of tick in data coords with scalar loc. | |
doc_539 |
Concatenate two or more Series. Parameters
to_append:Series or list/tuple of Series
Series to append with self.
ignore_index:bool, default False
If True, the resulting axis will be labeled 0, 1, …, n - 1.
verify_integrity:bool, default False
If True, raise Exception on creating index with duplicates. Returns
Series
Concatenated Series. See also concat
General function to concatenate DataFrame or Series objects. Notes Iteratively appending to a Series can be more computationally intensive than a single concatenate. A better solution is to append values to a list and then concatenate the list with the original Series all at once. Examples
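A sketch of the list-then-concatenate pattern recommended in the Notes (values are illustrative):
>>> s = pd.Series([1, 2, 3])
>>> chunks = [pd.Series([4, 5]), pd.Series([6])]
>>> pd.concat([s] + chunks, ignore_index=True)  # one concat instead of repeated appends
0    1
1    2
2    3
3    4
4    5
5    6
dtype: int64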
>>> s1 = pd.Series([1, 2, 3])
>>> s2 = pd.Series([4, 5, 6])
>>> s3 = pd.Series([4, 5, 6], index=[3, 4, 5])
>>> s1.append(s2)
0 1
1 2
2 3
0 4
1 5
2 6
dtype: int64
>>> s1.append(s3)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
With ignore_index set to True:
>>> s1.append(s2, ignore_index=True)
0 1
1 2
2 3
3 4
4 5
5 6
dtype: int64
With verify_integrity set to True:
>>> s1.append(s2, verify_integrity=True)
Traceback (most recent call last):
...
ValueError: Indexes have overlapping values: [0, 1, 2] | |
doc_540 | See Migration guide for more details. tf.compat.v1.distribute.RunOptions
tf.distribute.RunOptions(
experimental_enable_dynamic_batch_size=True,
experimental_bucketizing_dynamic_shape=False
)
This can be used to hold some strategy specific configs.
Attributes
experimental_enable_dynamic_batch_size Boolean. Only applies to TPUStrategy. Default to True. If True, TPUStrategy will enable dynamic padder to support dynamic batch size for the inputs. Otherwise only static shape inputs are allowed.
experimental_bucketizing_dynamic_shape Boolean. Only applies to TPUStrategy. Default to False. If True, TPUStrategy will automatic bucketize inputs passed into run if the input shape is dynamic. This is a performance optimization to reduce XLA recompilation, which should not have impact on correctness. | |
doc_541 | Return True if s is a Python keyword. | |
doc_542 | The reset_mock method resets all the call attributes on a mock object: >>> mock = Mock(return_value=None)
>>> mock('hello')
>>> mock.called
True
>>> mock.reset_mock()
>>> mock.called
False
Changed in version 3.6: Added two keyword-only arguments to the reset_mock function. This can be useful where you want to make a series of assertions that reuse the same object. Note that reset_mock() doesn’t clear the return value, side_effect or any child attributes you have set using normal assignment by default. In case you want to reset return_value or side_effect, pass the corresponding parameter as True. Child mocks and the return value mock (if any) are reset as well. Note that return_value and side_effect are keyword-only arguments. | |
doc_543 |
Alias for set_markerfacecolor. | |
doc_544 |
Set the keymap to associate with the specified tool. Parameters
namestr
Name of the Tool.
keystr or list of str
Keys to associate with the tool. | |
doc_545 | tf.metrics.MeanIoU Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.MeanIoU
tf.keras.metrics.MeanIoU(
num_classes, name=None, dtype=None
)
Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows: IOU = true_positive / (true_positive + false_positive + false_negative). The predictions are accumulated in a confusion matrix, weighted by sample_weight and the metric is then calculated from it. If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Args
num_classes The possible number of labels the prediction task can have. This value must be provided, since a confusion matrix of dimension = [num_classes, num_classes] will be allocated.
name (Optional) string name of the metric instance.
dtype (Optional) data type of the metric result. Standalone usage:
# cm = [[1, 1],
# [1, 1]]
# sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]
# iou = true_positives / (sum_row + sum_col - true_positives))
# result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33
m = tf.keras.metrics.MeanIoU(num_classes=2)
m.update_state([0, 0, 1, 1], [0, 1, 0, 1])
m.result().numpy()
0.33333334
m.reset_states()
m.update_state([0, 0, 1, 1], [0, 1, 0, 1],
sample_weight=[0.3, 0.3, 0.3, 0.1])
m.result().numpy()
0.23809525
Usage with compile() API: model.compile(
optimizer='sgd',
loss='mse',
metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])
Methods reset_states View source
reset_states()
Resets all of the metric state variables. This function is called between epochs/steps, when a metric is evaluated during training. result View source
result()
Compute the mean intersection-over-union via the confusion matrix. update_state View source
update_state(
y_true, y_pred, sample_weight=None
)
Accumulates the confusion matrix statistics.
Args
y_true The ground truth values.
y_pred The predicted values.
sample_weight Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true.
Returns Update op. | |
doc_546 | See Migration guide for more details. tf.compat.v1.data.experimental.dense_to_sparse_batch
tf.data.experimental.dense_to_sparse_batch(
batch_size, row_shape
)
Like Dataset.padded_batch(), this transformation combines multiple consecutive elements of the dataset, which might have different shapes, into a single element. The resulting element has three components (indices, values, and dense_shape), which comprise a tf.sparse.SparseTensor that represents the same data. The row_shape represents the dense shape of each row in the resulting tf.sparse.SparseTensor, to which the effective batch size is prepended. For example: # NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
a.apply(tf.data.experimental.dense_to_sparse_batch(
batch_size=2, row_shape=[6])) ==
{
([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices
['a', 'b', 'c', 'a', 'b'], # values
[2, 6]), # dense_shape
([[0, 0], [0, 1], [0, 2], [0, 3]],
['a', 'b', 'c', 'd'],
[1, 6])
}
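A runnable sketch of the same idea (the generator and sample values are illustrative): import tensorflow as tf
ds = tf.data.Dataset.from_generator(
    lambda: iter([[1, 2, 3], [4, 5], [6, 7, 8, 9]]),
    output_signature=tf.TensorSpec(shape=[None], dtype=tf.int32))
sparse_ds = ds.apply(
    tf.data.experimental.dense_to_sparse_batch(batch_size=2, row_shape=[6]))
for st in sparse_ds:
    print(st.dense_shape)  # [2 6] for the first batch, [1 6] for the last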
Args
batch_size A tf.int64 scalar tf.Tensor, representing the number of consecutive elements of this dataset to combine in a single batch.
row_shape A tf.TensorShape or tf.int64 vector tensor-like object representing the equivalent dense shape of a row in the resulting tf.sparse.SparseTensor. Each element of this dataset must have the same rank as row_shape, and must have size less than or equal to row_shape in each dimension.
Returns A Dataset transformation function, which can be passed to tf.data.Dataset.apply. | |
doc_547 | Gets or sets the HSVA representation of the Color. hsva -> tuple The HSVA representation of the Color. The HSVA components are in the ranges H = [0, 360], S = [0, 100], V = [0, 100], A = [0, 100]. Note that this will not return the absolutely exact HSV values for the set RGB values in all cases. Due to the RGB mapping from 0-255 and the HSV mapping from 0-100 and 0-360 rounding errors may cause the HSV values to differ slightly from what you might expect. | |
doc_548 | (Only supported on Linux 2.5.44 and newer.) Return an edge polling object, which can be used as Edge or Level Triggered interface for I/O events. sizehint informs epoll about the expected number of events to be registered. It must be positive, or -1 to use the default. It is only used on older systems where epoll_create1() is not available; otherwise it has no effect (though its value is still checked). flags is deprecated and completely ignored. However, when supplied, its value must be 0 or select.EPOLL_CLOEXEC, otherwise OSError is raised. See the Edge and Level Trigger Polling (epoll) Objects section below for the methods supported by epolling objects. epoll objects support the context management protocol: when used in a with statement, the new file descriptor is automatically closed at the end of the block. The new file descriptor is non-inheritable. Changed in version 3.3: Added the flags parameter. Changed in version 3.4: Support for the with statement was added. The new file descriptor is now non-inheritable. Deprecated since version 3.4: The flags parameter. select.EPOLL_CLOEXEC is used by default now. Use os.set_inheritable() to make the file descriptor inheritable. | |
doc_549 | Returns the average of the dependent variable (sum(y)/N) as a float, or default if there aren’t any matching rows. | |
doc_550 |
Launch a subplot tool window for a figure. Returns
matplotlib.widgets.SubplotTool | |
doc_551 | tf.estimator.BaselineRegressor(
model_dir=None, label_dimension=1, weight_column=None,
optimizer='Ftrl', config=None,
loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE
)
This regressor ignores feature values and will learn to predict the average value of each label. Example:
# Build BaselineRegressor
regressor = tf.estimator.BaselineRegressor()
# Input builders
def input_fn_train():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval():
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
# Fit model.
regressor.train(input_fn=input_fn_train)
# Evaluate squared-loss between the test and train targets.
loss = regressor.evaluate(input_fn=input_fn_eval)["loss"]
# predict outputs the mean value seen during training.
predictions = regressor.predict(new_samples)
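For concreteness, a minimal input_fn sketch compatible with the snippet above (feature names and values are illustrative; BaselineRegressor ignores the feature values anyway): import tensorflow as tf
def input_fn_train():
    features = {'x': tf.constant([[1.0], [2.0], [3.0]])}
    labels = tf.constant([[10.0], [20.0], [30.0]])
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)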
Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
Args
model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
label_dimension Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]).
weight_column A string or a _NumericColumn created by tf.feature_column.numeric_column defining feature column representing weights. It will be multiplied by the loss of the example.
optimizer String, tf.keras.optimizers.* object, or callable that creates the optimizer to use for training. If not specified, will use Ftrl as the default optimizer.
config RunConfig object to configure the runtime settings.
loss_reduction One of tf.losses.Reduction except NONE. Describes how to reduce training loss over batch. Defaults to SUM_OVER_BATCH_SIZE. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods eval_dir View source
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A string which is the path of directory contains evaluation metrics.
evaluate View source
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until:
steps batches are processed, or
input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError If steps <= 0. experimental_export_all_saved_models View source
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source
get_variable_names()
Returns list of all variable names in this model.
Returns List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source
get_variable_value(
name
)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns The full path to the latest checkpoint or None if no checkpoint was found.
predict View source
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.
predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same and yield_single_examples is True.
ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns self, for chaining.
Raises
ValueError If both steps and max_steps are not None.
ValueError If either steps or max_steps <= 0. | |
doc_552 | A variable annotated with C may accept a value of type C. In contrast, a variable annotated with Type[C] may accept values that are classes themselves – specifically, it will accept the class object of C. For example: a = 3 # Has type 'int'
b = int # Has type 'Type[int]'
c = type(a) # Also has type 'Type[int]'
Note that Type[C] is covariant: class User: ...
class BasicUser(User): ...
class ProUser(User): ...
class TeamUser(User): ...
# Accepts User, BasicUser, ProUser, TeamUser, ...
def make_new_user(user_class: Type[User]) -> User:
# ...
return user_class()
The fact that Type[C] is covariant implies that all subclasses of C should implement the same constructor signature and class method signatures as C. The type checker should flag violations of this, but should also allow constructor calls in subclasses that match the constructor calls in the indicated base class. How the type checker is required to handle this particular case may change in future revisions of PEP 484. The only legal parameters for Type are classes, Any, type variables, and unions of any of these types. For example: def new_non_team_user(user_class: Type[Union[BasicUser, ProUser]]): ...
Type[Any] is equivalent to Type which in turn is equivalent to type, which is the root of Python’s metaclass hierarchy. New in version 3.5.2. Deprecated since version 3.9: builtins.type now supports []. See PEP 585 and Generic Alias Type. | |
doc_553 | class sklearn.base.TransformerMixin [source]
Mixin class for all transformers in scikit-learn. Methods
fit_transform(X[, y]) Fit to data, then transform it.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
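A minimal custom transformer sketch showing what the mixin buys you (the class name and data are illustrative): import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MeanCenterer(BaseEstimator, TransformerMixin):
    # Only fit and transform are defined; fit_transform comes from the mixin.
    def fit(self, X, y=None):
        self.mean_ = np.asarray(X).mean(axis=0)
        return self
    def transform(self, X):
        return np.asarray(X) - self.mean_

X_new = MeanCenterer().fit_transform([[1.0, 2.0], [3.0, 4.0]])  # centered copy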
Examples using sklearn.base.TransformerMixin
Approximate nearest neighbors in TSNE | |
doc_554 |
Bases: matplotlib.offsetbox.OffsetBox Offset Box with the aux_transform. Its children will be transformed with the aux_transform first and then offset. The absolute coordinate of the aux_transform is meaningless as it will be automatically adjusted so that the lower-left corner of the bounding box of the children is set to (0, 0) before the offset transform. It is similar to a drawing area, except that the extent of the box is not predetermined but calculated from the window extent of its children. Furthermore, the extent of the children will be calculated in the transformed coordinates. add_artist(a)[source]
Add an Artist to the container box.
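A small sketch of typical usage, drawing an artist through a rotation transform (the placement values are illustrative): import matplotlib.pyplot as plt
from matplotlib.offsetbox import AuxTransformBox, AnnotationBbox
from matplotlib.text import Text
from matplotlib.transforms import Affine2D

fig, ax = plt.subplots()
box = AuxTransformBox(Affine2D().rotate_deg(30))
box.add_artist(Text(0, 0, "rotated"))           # child drawn through the transform
ax.add_artist(AnnotationBbox(box, (0.5, 0.5)))  # place the box in the Axes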
draw(renderer)[source]
Update the location of children if necessary and draw them to the given renderer.
get_extent(renderer)[source]
Return a tuple width, height, xdescent, ydescent of the box.
get_offset()[source]
Return offset of the container.
get_transform()[source]
Return the Transform applied to the children
get_window_extent(renderer)[source]
Get the artist's bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function: the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly.
set(*, agg_filter=<UNSET>, alpha=<UNSET>, animated=<UNSET>, clip_box=<UNSET>, clip_on=<UNSET>, clip_path=<UNSET>, gid=<UNSET>, height=<UNSET>, in_layout=<UNSET>, label=<UNSET>, offset=<UNSET>, path_effects=<UNSET>, picker=<UNSET>, rasterized=<UNSET>, sketch_params=<UNSET>, snap=<UNSET>, transform=<UNSET>, url=<UNSET>, visible=<UNSET>, width=<UNSET>, zorder=<UNSET>)[source]
Set multiple properties at once. Supported properties are
Property Description
agg_filter a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha scalar or None
animated bool
clip_box Bbox
clip_on bool
clip_path Patch or (Path, Transform) or None
figure Figure
gid str
height float
in_layout bool
label object
offset (float, float)
path_effects AbstractPathEffect
picker None or bool or float or callable
rasterized bool
sketch_params (scale: float, length: float, randomness: float)
snap bool or None
transform unknown
url str
visible bool
width float
zorder float
set_offset(xy)[source]
Set the offset of the container. Parameters
xy(float, float)
The (x, y) coordinates of the offset in display units.
set_transform(t)[source]
set_transform is ignored. | |
doc_555 | Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module | |
doc_556 |
Load data from a text file. Each row in the text file must have the same number of values. Parameters
fnamefile, str, pathlib.Path, list of str, generator
File, filename, list, or generator to read. If the filename extension is .gz or .bz2, the file is first decompressed. Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines.
dtypedata-type, optional
Data-type of the resulting array; default: float. If this is a structured data-type, the resulting array will be 1-dimensional, and each row will be interpreted as an element of the array. In this case, the number of columns used must match the number of fields in the data-type.
commentsstr or sequence of str, optional
The characters or list of characters used to indicate the start of a comment. None implies no comments. For backwards compatibility, byte strings will be decoded as ‘latin1’. The default is ‘#’.
delimiterstr, optional
The string used to separate values. For backwards compatibility, byte strings will be decoded as ‘latin1’. The default is whitespace.
convertersdict, optional
A dictionary mapping column number to a function that will parse the column string into the desired value. E.g., if column 0 is a date string: converters = {0: datestr2num}. Converters can also be used to provide a default value for missing data (but see also genfromtxt): converters = {3: lambda s: float(s.strip() or 0)}. Default: None.
skiprowsint, optional
Skip the first skiprows lines, including comments; default: 0.
usecolsint or sequence, optional
Which columns to read, with 0 being the first. For example, usecols = (1,4,5) will extract the 2nd, 5th and 6th columns. The default, None, results in all columns being read. Changed in version 1.11.0: When a single column has to be read it is possible to use an integer instead of a tuple. E.g usecols = 3 reads the fourth column the same way as usecols = (3,) would.
unpackbool, optional
If True, the returned array is transposed, so that arguments may be unpacked using x, y, z = loadtxt(...). When used with a structured data-type, arrays are returned for each field. Default is False.
ndminint, optional
The returned array will have at least ndmin dimensions. Otherwise mono-dimensional axes will be squeezed. Legal values: 0 (default), 1 or 2. New in version 1.6.0.
encodingstr, optional
Encoding used to decode the inputfile. Does not apply to input streams. The special value ‘bytes’ enables backward compatibility workarounds that ensures you receive byte arrays as results if possible and passes ‘latin1’ encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is ‘bytes’. New in version 1.14.0.
max_rowsint, optional
Read max_rows lines of content after skiprows lines. The default is to read all the lines. New in version 1.16.0.
likearray_like
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as like supports the __array_function__ protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns
outndarray
Data read from the text file. See also
load, fromstring, fromregex
genfromtxt
Load data with missing values handled as specified. scipy.io.loadmat
reads MATLAB data files Notes This function aims to be a fast reader for simply formatted files. The genfromtxt function provides more sophisticated handling of, e.g., lines with missing values. New in version 1.10.0. The strings produced by the Python float.hex method can be used as input for floats. Examples >>> from io import StringIO # StringIO behaves like a file object
>>> c = StringIO("0 1\n2 3")
>>> np.loadtxt(c)
array([[0., 1.],
[2., 3.]])
>>> d = StringIO("M 21 72\nF 35 58")
>>> np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'),
... 'formats': ('S1', 'i4', 'f4')})
array([(b'M', 21, 72.), (b'F', 35, 58.)],
dtype=[('gender', 'S1'), ('age', '<i4'), ('weight', '<f4')])
>>> c = StringIO("1,0,2\n3,0,4")
>>> x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True)
>>> x
array([1., 3.])
>>> y
array([2., 4.])
This example shows how converters can be used to convert a field with a trailing minus sign into a negative number. >>> s = StringIO('10.01 31.25-\n19.22 64.31\n17.57- 63.94')
>>> def conv(fld):
... return -float(fld[:-1]) if fld.endswith(b'-') else float(fld)
...
>>> np.loadtxt(s, converters={0: conv, 1: conv})
array([[ 10.01, -31.25],
[ 19.22, 64.31],
[-17.57, 63.94]]) | |
doc_557 |
Create an array. Parameters
data:Sequence of objects
The scalars inside data should be instances of the scalar type for dtype. It’s expected that data represents a 1-dimensional array of data. When data is an Index or Series, the underlying array will be extracted from data.
dtype:str, np.dtype, or ExtensionDtype, optional
The dtype to use for the array. This may be a NumPy dtype or an extension type registered with pandas using pandas.api.extensions.register_extension_dtype(). If not specified, there are two possibilities: When data is a Series, Index, or ExtensionArray, the dtype will be taken from the data. Otherwise, pandas will attempt to infer the dtype from the data. Note that when data is a NumPy array, data.dtype is not used for inferring the array type. This is because NumPy cannot represent all the types of data that can be held in extension arrays. Currently, pandas will infer an extension dtype for sequences of
Scalar Type Array Type
pandas.Interval pandas.arrays.IntervalArray
pandas.Period pandas.arrays.PeriodArray
datetime.datetime pandas.arrays.DatetimeArray
datetime.timedelta pandas.arrays.TimedeltaArray
int pandas.arrays.IntegerArray
float pandas.arrays.FloatingArray
str pandas.arrays.StringArray or pandas.arrays.ArrowStringArray
bool pandas.arrays.BooleanArray The ExtensionArray created when the scalar type is str is determined by pd.options.mode.string_storage if the dtype is not explicitly given. For all other cases, NumPy’s usual inference rules will be used. Changed in version 1.0.0: Pandas infers nullable-integer dtype for integer data, string dtype for string data, and nullable-boolean dtype for boolean data. Changed in version 1.2.0: Pandas now also infers nullable-floating dtype for float-like input data
copy:bool, default True
Whether to copy the data, even if not necessary. Depending on the type of data, creating the new array may require copying data, even if copy=False. Returns
ExtensionArray
The newly created array. Raises
ValueError
When data is not 1-dimensional. See also numpy.array
Construct a NumPy array. Series
Construct a pandas Series. Index
Construct a pandas Index. arrays.PandasArray
ExtensionArray wrapping a NumPy array. Series.array
Extract the array stored within a Series. Notes Omitting the dtype argument means pandas will attempt to infer the best array type from the values in the data. As new array types are added by pandas and 3rd party libraries, the “best” array type may change. We recommend specifying dtype to ensure that the correct array type for the data is returned the returned array type doesn’t change as new extension types are added by pandas and third-party libraries Additionally, if the underlying memory representation of the returned array matters, we recommend specifying the dtype as a concrete object rather than a string alias or allowing it to be inferred. For example, a future version of pandas or a 3rd-party library may include a dedicated ExtensionArray for string data. In this event, the following would no longer return a arrays.PandasArray backed by a NumPy array.
>>> pd.array(['a', 'b'], dtype=str)
<PandasArray>
['a', 'b']
Length: 2, dtype: str32
This would instead return the new ExtensionArray dedicated for string data. If you really need the new array to be backed by a NumPy array, specify that in the dtype.
>>> pd.array(['a', 'b'], dtype=np.dtype("<U1"))
<PandasArray>
['a', 'b']
Length: 2, dtype: str32
Finally, Pandas has arrays that mostly overlap with NumPy
arrays.DatetimeArray arrays.TimedeltaArray
When data with a datetime64[ns] or timedelta64[ns] dtype is passed, pandas will always return a DatetimeArray or TimedeltaArray rather than a PandasArray. This is for symmetry with the case of timezone-aware data, which NumPy does not natively support.
>>> pd.array(['2015', '2016'], dtype='datetime64[ns]')
<DatetimeArray>
['2015-01-01 00:00:00', '2016-01-01 00:00:00']
Length: 2, dtype: datetime64[ns]
>>> pd.array(["1H", "2H"], dtype='timedelta64[ns]')
<TimedeltaArray>
['0 days 01:00:00', '0 days 02:00:00']
Length: 2, dtype: timedelta64[ns]
Examples If a dtype is not specified, pandas will infer the best dtype from the values. See the description of dtype for the types pandas infers for.
>>> pd.array([1, 2])
<IntegerArray>
[1, 2]
Length: 2, dtype: Int64
>>> pd.array([1, 2, np.nan])
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
>>> pd.array([1.1, 2.2])
<FloatingArray>
[1.1, 2.2]
Length: 2, dtype: Float64
>>> pd.array(["a", None, "c"])
<StringArray>
['a', <NA>, 'c']
Length: 3, dtype: string
>>> with pd.option_context("string_storage", "pyarrow"):
... arr = pd.array(["a", None, "c"])
...
>>> arr
<ArrowStringArray>
['a', <NA>, 'c']
Length: 3, dtype: string
>>> pd.array([pd.Period('2000', freq="D"), pd.Period("2000", freq="D")])
<PeriodArray>
['2000-01-01', '2000-01-01']
Length: 2, dtype: period[D]
You can use the string alias for dtype
>>> pd.array(['a', 'b', 'a'], dtype='category')
['a', 'b', 'a']
Categories (2, object): ['a', 'b']
Or specify the actual dtype
>>> pd.array(['a', 'b', 'a'],
... dtype=pd.CategoricalDtype(['a', 'b', 'c'], ordered=True))
['a', 'b', 'a']
Categories (3, object): ['a' < 'b' < 'c']
If pandas does not infer a dedicated extension type a arrays.PandasArray is returned.
>>> pd.array([1 + 1j, 3 + 2j])
<PandasArray>
[(1+1j), (3+2j)]
Length: 2, dtype: complex128
As mentioned in the “Notes” section, new extension types may be added in the future (by pandas or 3rd party libraries), causing the return value to no longer be a arrays.PandasArray. Specify the dtype as a NumPy dtype if you need to ensure there’s no future change in behavior.
>>> pd.array([1, 2], dtype=np.dtype("int32"))
<PandasArray>
[1, 2]
Length: 2, dtype: int32
data must be 1-dimensional. A ValueError is raised when the input has the wrong dimensionality.
>>> pd.array(1)
Traceback (most recent call last):
...
ValueError: Cannot pass scalar '1' to 'pandas.array'. | |
doc_558 |
Return the corresponding inverse transformation. It holds x == self.inverted().transform(self.transform(x)). The return value of this method should be treated as temporary. An update to self does not cause a corresponding update to its inverted copy. | |
doc_559 | Write all items (as machine values) to the file object f. | |
doc_560 | A dictionary mapping endpoint names to view functions. To register a view function, use the route() decorator. This data structure is internal. It should not be modified directly and its format may change at any time. | |
doc_561 | A 32-bit number in big-endian format. | |
doc_562 |
One-dimensional ndarray with axis labels (including time series). Labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index. Statistical methods from ndarray have been overridden to automatically exclude missing data (currently represented as NaN). Operations between Series (+, -, /, *, **) align values based on their associated index values– they need not be the same length. The result index will be the sorted union of the two indexes. Parameters
data:array-like, Iterable, dict, or scalar value
Contains data stored in Series. If data is a dict, argument order is maintained.
index:array-like or Index (1d)
Values must be hashable and have the same length as data. Non-unique index values are allowed. Will default to RangeIndex (0, 1, 2, …, n) if not provided. If data is dict-like and index is None, then the keys in the data are used as the index. If the index is not None, the resulting Series is reindexed with the index values.
dtype:str, numpy.dtype, or ExtensionDtype, optional
Data type for the output Series. If not specified, this will be inferred from data. See the user guide for more usages.
name:str, optional
The name to give to the Series.
copy:bool, default False
Copy input data. Only affects Series or 1d ndarray input. See examples. Examples Constructing Series from a dictionary with an Index specified
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> ser = pd.Series(data=d, index=['a', 'b', 'c'])
>>> ser
a 1
b 2
c 3
dtype: int64
The keys of the dictionary match with the Index values, hence the Index values have no effect.
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> ser = pd.Series(data=d, index=['x', 'y', 'z'])
>>> ser
x NaN
y NaN
z NaN
dtype: float64
Note that the Index is first built with the keys from the dictionary. After this the Series is reindexed with the given Index values, hence we get all NaN as a result. Constructing Series from a list with copy=False.
>>> r = [1, 2]
>>> ser = pd.Series(r, copy=False)
>>> ser.iloc[0] = 999
>>> r
[1, 2]
>>> ser
0 999
1 2
dtype: int64
Due to input data type the Series has a copy of the original data even though copy=False, so the data is unchanged. Constructing Series from a 1d ndarray with copy=False.
>>> r = np.array([1, 2])
>>> ser = pd.Series(r, copy=False)
>>> ser.iloc[0] = 999
>>> r
array([999, 2])
>>> ser
0 999
1 2
dtype: int64
Due to the input data type, the Series has a view on the original data, so the original data is changed as well. Attributes
T Return the transpose, which is by definition self.
array The ExtensionArray of the data backing this Series or Index.
at Access a single value for a row/column label pair.
attrs Dictionary of global attributes of this dataset.
axes Return a list of the row axis labels.
dtype Return the dtype object of the underlying data.
dtypes Return the dtype object of the underlying data.
flags Get the properties associated with this pandas object.
hasnans Return True if there are any NaNs.
iat Access a single value for a row/column pair by integer position.
iloc Purely integer-location based indexing for selection by position.
index The index (axis labels) of the Series.
is_monotonic Return boolean if values in the object are monotonic_increasing.
is_monotonic_decreasing Return boolean if values in the object are monotonic_decreasing.
is_monotonic_increasing Alias for is_monotonic.
is_unique Return boolean if values in the object are unique.
loc Access a group of rows and columns by label(s) or a boolean array.
name Return the name of the Series.
nbytes Return the number of bytes in the underlying data.
ndim Number of dimensions of the underlying data, by definition 1.
shape Return a tuple of the shape of the underlying data.
size Return the number of elements in the underlying data.
values Return Series as ndarray or ndarray-like depending on the dtype.
empty
Methods
abs() Return a Series/DataFrame with absolute numeric value of each element.
add(other[, level, fill_value, axis]) Return Addition of series and other, element-wise (binary operator add).
add_prefix(prefix) Prefix labels with string prefix.
add_suffix(suffix) Suffix labels with string suffix.
agg([func, axis]) Aggregate using one or more operations over the specified axis.
aggregate([func, axis]) Aggregate using one or more operations over the specified axis.
align(other[, join, axis, level, copy, ...]) Align two objects on their axes with the specified join method.
all([axis, bool_only, skipna, level]) Return whether all elements are True, potentially over an axis.
any([axis, bool_only, skipna, level]) Return whether any element is True, potentially over an axis.
append(to_append[, ignore_index, ...]) Concatenate two or more Series.
apply(func[, convert_dtype, args]) Invoke function on values of Series.
argmax([axis, skipna]) Return int position of the largest value in the Series.
argmin([axis, skipna]) Return int position of the smallest value in the Series.
argsort([axis, kind, order]) Return the integer indices that would sort the Series values.
asfreq(freq[, method, how, normalize, ...]) Convert time series to specified frequency.
asof(where[, subset]) Return the last row(s) without any NaNs before where.
astype(dtype[, copy, errors]) Cast a pandas object to a specified dtype dtype.
at_time(time[, asof, axis]) Select values at particular time of day (e.g., 9:30AM).
autocorr([lag]) Compute the lag-N autocorrelation.
backfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'.
between(left, right[, inclusive]) Return boolean Series equivalent to left <= series <= right.
between_time(start_time, end_time[, ...]) Select values between particular times of the day (e.g., 9:00-9:30 AM).
bfill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='bfill'.
bool() Return the bool of a single element Series or DataFrame.
cat alias of pandas.core.arrays.categorical.CategoricalAccessor
clip([lower, upper, axis, inplace]) Trim values at input threshold(s).
combine(other, func[, fill_value]) Combine the Series with a Series or scalar according to func.
combine_first(other) Update null elements with value in the same location in 'other'.
compare(other[, align_axis, keep_shape, ...]) Compare to another Series and show the differences.
convert_dtypes([infer_objects, ...]) Convert columns to best possible dtypes using dtypes supporting pd.NA.
copy([deep]) Make a copy of this object's indices and data.
corr(other[, method, min_periods]) Compute correlation with other Series, excluding missing values.
count([level]) Return number of non-NA/null observations in the Series.
cov(other[, min_periods, ddof]) Compute covariance with Series, excluding missing values.
cummax([axis, skipna]) Return cumulative maximum over a DataFrame or Series axis.
cummin([axis, skipna]) Return cumulative minimum over a DataFrame or Series axis.
cumprod([axis, skipna]) Return cumulative product over a DataFrame or Series axis.
cumsum([axis, skipna]) Return cumulative sum over a DataFrame or Series axis.
describe([percentiles, include, exclude, ...]) Generate descriptive statistics.
diff([periods]) First discrete difference of element.
div(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator truediv).
divide(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator truediv).
divmod(other[, level, fill_value, axis]) Return Integer division and modulo of series and other, element-wise (binary operator divmod).
dot(other) Compute the dot product between the Series and the columns of other.
drop([labels, axis, index, columns, level, ...]) Return Series with specified index labels removed.
drop_duplicates([keep, inplace]) Return Series with duplicate values removed.
droplevel(level[, axis]) Return Series/DataFrame with requested index / column level(s) removed.
dropna([axis, inplace, how]) Return a new Series with missing values removed.
dt alias of pandas.core.indexes.accessors.CombinedDatetimelikeProperties
duplicated([keep]) Indicate duplicate Series values.
eq(other[, level, fill_value, axis]) Return Equal to of series and other, element-wise (binary operator eq).
equals(other) Test whether two objects contain the same elements.
ewm([com, span, halflife, alpha, ...]) Provide exponentially weighted (EW) calculations.
expanding([min_periods, center, axis, method]) Provide expanding window calculations.
explode([ignore_index]) Transform each element of a list-like to a row.
factorize([sort, na_sentinel]) Encode the object as an enumerated type or categorical variable.
ffill([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'.
fillna([value, method, axis, inplace, ...]) Fill NA/NaN values using the specified method.
filter([items, like, regex, axis]) Subset the dataframe rows or columns according to the specified index labels.
first(offset) Select initial periods of time series data based on a date offset.
first_valid_index() Return index for first non-NA value or None, if no NA value is found.
floordiv(other[, level, fill_value, axis]) Return Integer division of series and other, element-wise (binary operator floordiv).
ge(other[, level, fill_value, axis]) Return Greater than or equal to of series and other, element-wise (binary operator ge).
get(key[, default]) Get item from object for given key (ex: DataFrame column).
groupby([by, axis, level, as_index, sort, ...]) Group Series using a mapper or by a Series of columns.
gt(other[, level, fill_value, axis]) Return Greater than of series and other, element-wise (binary operator gt).
head([n]) Return the first n rows.
hist([by, ax, grid, xlabelsize, xrot, ...]) Draw histogram of the input series using matplotlib.
idxmax([axis, skipna]) Return the row label of the maximum value.
idxmin([axis, skipna]) Return the row label of the minimum value.
infer_objects() Attempt to infer better dtypes for object columns.
info([verbose, buf, max_cols, memory_usage, ...]) Print a concise summary of a Series.
interpolate([method, axis, limit, inplace, ...]) Fill NaN values using an interpolation method.
isin(values) Whether elements in Series are contained in values.
isna() Detect missing values.
isnull() Series.isnull is an alias for Series.isna.
item() Return the first element of the underlying data as a Python scalar.
items() Lazily iterate over (index, value) tuples.
iteritems() Lazily iterate over (index, value) tuples.
keys() Return alias for index.
kurt([axis, skipna, level, numeric_only]) Return unbiased kurtosis over requested axis.
kurtosis([axis, skipna, level, numeric_only]) Return unbiased kurtosis over requested axis.
last(offset) Select final periods of time series data based on a date offset.
last_valid_index() Return index for last non-NA value or None, if no NA value is found.
le(other[, level, fill_value, axis]) Return Less than or equal to of series and other, element-wise (binary operator le).
lt(other[, level, fill_value, axis]) Return Less than of series and other, element-wise (binary operator lt).
mad([axis, skipna, level]) Return the mean absolute deviation of the values over the requested axis.
map(arg[, na_action]) Map values of Series according to an input mapping or function.
mask(cond[, other, inplace, axis, level, ...]) Replace values where the condition is True.
max([axis, skipna, level, numeric_only]) Return the maximum of the values over the requested axis.
mean([axis, skipna, level, numeric_only]) Return the mean of the values over the requested axis.
median([axis, skipna, level, numeric_only]) Return the median of the values over the requested axis.
memory_usage([index, deep]) Return the memory usage of the Series.
min([axis, skipna, level, numeric_only]) Return the minimum of the values over the requested axis.
mod(other[, level, fill_value, axis]) Return Modulo of series and other, element-wise (binary operator mod).
mode([dropna]) Return the mode(s) of the Series.
mul(other[, level, fill_value, axis]) Return Multiplication of series and other, element-wise (binary operator mul).
multiply(other[, level, fill_value, axis]) Return Multiplication of series and other, element-wise (binary operator mul).
ne(other[, level, fill_value, axis]) Return Not equal to of series and other, element-wise (binary operator ne).
nlargest([n, keep]) Return the largest n elements.
notna() Detect existing (non-missing) values.
notnull() Series.notnull is an alias for Series.notna.
nsmallest([n, keep]) Return the smallest n elements.
nunique([dropna]) Return number of unique elements in the object.
pad([axis, inplace, limit, downcast]) Synonym for DataFrame.fillna() with method='ffill'.
pct_change([periods, fill_method, limit, freq]) Percentage change between the current and a prior element.
pipe(func, *args, **kwargs) Apply chainable functions that expect Series or DataFrames.
plot alias of pandas.plotting._core.PlotAccessor
pop(item) Return item and drops from series.
pow(other[, level, fill_value, axis]) Return Exponential power of series and other, element-wise (binary operator pow).
prod([axis, skipna, level, numeric_only, ...]) Return the product of the values over the requested axis.
product([axis, skipna, level, numeric_only, ...]) Return the product of the values over the requested axis.
quantile([q, interpolation]) Return value at the given quantile.
radd(other[, level, fill_value, axis]) Return Addition of series and other, element-wise (binary operator radd).
rank([axis, method, numeric_only, ...]) Compute numerical data ranks (1 through n) along axis.
ravel([order]) Return the flattened underlying data as an ndarray.
rdiv(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator rtruediv).
rdivmod(other[, level, fill_value, axis]) Return Integer division and modulo of series and other, element-wise (binary operator rdivmod).
reindex(*args, **kwargs) Conform Series to new index with optional filling logic.
reindex_like(other[, method, copy, limit, ...]) Return an object with matching indices as other object.
rename([index, axis, copy, inplace, level, ...]) Alter Series index labels or name.
rename_axis([mapper, index, columns, axis, ...]) Set the name of the axis for the index or columns.
reorder_levels(order) Rearrange index levels using input order.
repeat(repeats[, axis]) Repeat elements of a Series.
replace([to_replace, value, inplace, limit, ...]) Replace values given in to_replace with value.
resample(rule[, axis, closed, label, ...]) Resample time-series data.
reset_index([level, drop, name, inplace]) Generate a new DataFrame or Series with the index reset.
rfloordiv(other[, level, fill_value, axis]) Return Integer division of series and other, element-wise (binary operator rfloordiv).
rmod(other[, level, fill_value, axis]) Return Modulo of series and other, element-wise (binary operator rmod).
rmul(other[, level, fill_value, axis]) Return Multiplication of series and other, element-wise (binary operator rmul).
rolling(window[, min_periods, center, ...]) Provide rolling window calculations.
round([decimals]) Round each value in a Series to the given number of decimals.
rpow(other[, level, fill_value, axis]) Return Exponential power of series and other, element-wise (binary operator rpow).
rsub(other[, level, fill_value, axis]) Return Subtraction of series and other, element-wise (binary operator rsub).
rtruediv(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator rtruediv).
sample([n, frac, replace, weights, ...]) Return a random sample of items from an axis of object.
searchsorted(value[, side, sorter]) Find indices where elements should be inserted to maintain order.
sem([axis, skipna, level, ddof, numeric_only]) Return unbiased standard error of the mean over requested axis.
set_axis(labels[, axis, inplace]) Assign desired index to given axis.
set_flags(*[, copy, allows_duplicate_labels]) Return a new object with updated flags.
shift([periods, freq, axis, fill_value]) Shift index by desired number of periods with an optional time freq.
skew([axis, skipna, level, numeric_only]) Return unbiased skew over requested axis.
slice_shift([periods, axis]) (DEPRECATED) Equivalent to shift without copying data.
sort_index([axis, level, ascending, ...]) Sort Series by index labels.
sort_values([axis, ascending, inplace, ...]) Sort by the values.
sparse alias of pandas.core.arrays.sparse.accessor.SparseAccessor
squeeze([axis]) Squeeze 1 dimensional axis objects into scalars.
std([axis, skipna, level, ddof, numeric_only]) Return sample standard deviation over requested axis.
str alias of pandas.core.strings.accessor.StringMethods
sub(other[, level, fill_value, axis]) Return Subtraction of series and other, element-wise (binary operator sub).
subtract(other[, level, fill_value, axis]) Return Subtraction of series and other, element-wise (binary operator sub).
sum([axis, skipna, level, numeric_only, ...]) Return the sum of the values over the requested axis.
swapaxes(axis1, axis2[, copy]) Interchange axes and swap values accordingly.
swaplevel([i, j, copy]) Swap levels i and j in a MultiIndex.
tail([n]) Return the last n rows.
take(indices[, axis, is_copy]) Return the elements in the given positional indices along an axis.
to_clipboard([excel, sep]) Copy object to the system clipboard.
to_csv([path_or_buf, sep, na_rep, ...]) Write object to a comma-separated values (csv) file.
to_dict([into]) Convert Series to {label -> value} dict or dict-like object.
to_excel(excel_writer[, sheet_name, na_rep, ...]) Write object to an Excel sheet.
to_frame([name]) Convert Series to DataFrame.
to_hdf(path_or_buf, key[, mode, complevel, ...]) Write the contained data to an HDF5 file using HDFStore.
to_json([path_or_buf, orient, date_format, ...]) Convert the object to a JSON string.
to_latex([buf, columns, col_space, header, ...]) Render object to a LaTeX tabular, longtable, or nested table.
to_list() Return a list of the values.
to_markdown([buf, mode, index, storage_options]) Print Series in Markdown-friendly format.
to_numpy([dtype, copy, na_value]) A NumPy ndarray representing the values in this Series or Index.
to_period([freq, copy]) Convert Series from DatetimeIndex to PeriodIndex.
to_pickle(path[, compression, protocol, ...]) Pickle (serialize) object to file.
to_sql(name, con[, schema, if_exists, ...]) Write records stored in a DataFrame to a SQL database.
to_string([buf, na_rep, float_format, ...]) Render a string representation of the Series.
to_timestamp([freq, how, copy]) Cast to DatetimeIndex of Timestamps, at beginning of period.
to_xarray() Return an xarray object from the pandas object.
tolist() Return a list of the values.
transform(func[, axis]) Call func on self producing a Series with the same axis shape as self.
transpose(*args, **kwargs) Return the transpose, which is by definition self.
truediv(other[, level, fill_value, axis]) Return Floating division of series and other, element-wise (binary operator truediv).
truncate([before, after, axis, copy]) Truncate a Series or DataFrame before and after some index value.
tshift([periods, freq, axis]) (DEPRECATED) Shift the time index, using the index's frequency if available.
tz_convert(tz[, axis, level, copy]) Convert tz-aware axis to target time zone.
tz_localize(tz[, axis, level, copy, ...]) Localize tz-naive index of a Series or DataFrame to target time zone.
unique() Return unique values of Series object.
unstack([level, fill_value]) Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
update(other) Modify Series in place using values from passed Series.
value_counts([normalize, sort, ascending, ...]) Return a Series containing counts of unique values.
var([axis, skipna, level, ddof, numeric_only]) Return unbiased variance over requested axis.
view([dtype]) Create a new view of the Series.
where(cond[, other, inplace, axis, level, ...]) Replace values where the condition is False.
xs(key[, axis, level, drop_level]) Return cross-section from the Series/DataFrame. | |
doc_563 |
The day of the week with Monday=0, Sunday=6. | |
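A doctest-style sketch of the convention, assuming the property is read from a pandas Timestamp (pandas is an assumption here; the text above only defines the numbering):
>>> import pandas as pd
>>> pd.Timestamp('2020-03-14').dayofweek  # a Saturday
5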
doc_564 | A shlex instance or subclass instance is a lexical analyzer object. The initialization argument, if present, specifies where to read characters from. It must be a file-/stream-like object with read() and readline() methods, or a string. If no argument is given, input will be taken from sys.stdin. The second optional argument is a filename string, which sets the initial value of the infile attribute. If the instream argument is omitted or equal to sys.stdin, this second argument defaults to “stdin”. The posix argument defines the operational mode: when posix is not true (default), the shlex instance will operate in compatibility mode. When operating in POSIX mode, shlex will try to be as close as possible to the POSIX shell parsing rules. The punctuation_chars argument provides a way to make the behaviour even closer to how real shells parse. This can take a number of values: the default value, False, preserves the behaviour seen under Python 3.5 and earlier. If set to True, then parsing of the characters ();<>|& is changed: any run of these characters (considered punctuation characters) is returned as a single token. If set to a non-empty string of characters, those characters will be used as the punctuation characters. Any characters in the wordchars attribute that appear in punctuation_chars will be removed from wordchars. See Improved Compatibility with Shells for more information. punctuation_chars can be set only upon shlex instance creation and can’t be modified later. Changed in version 3.6: The punctuation_chars parameter was added. | |
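A minimal sketch of the punctuation_chars behaviour described above, in the default non-POSIX mode:
import shlex
lex = shlex.shlex('ls -l | grep py; echo done', punctuation_chars=True)
print(list(lex))
# Expected to yield something like ['ls', '-l', '|', 'grep', 'py', ';', 'echo', 'done'],
# with runs of ();<>|& returned as single tokens.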
doc_565 | Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores (of same shape as t) used to compute mask for pruning t. The values in this tensor indicate the importance of the corresponding elements of the tensor t being pruned. If unspecified or None, the tensor t will be used in its place.
default_mask (torch.Tensor, optional) – mask from previous pruning iteration, if any. To be considered when determining what portion of the tensor that pruning should act on. If None, default to a mask of ones. Returns
pruned version of tensor t. | |
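A small sketch of this method using one concrete pruning technique (the choice of L1Unstructured is an assumption; any pruning method exposing prune() should behave analogously):
import torch
from torch.nn.utils import prune
method = prune.L1Unstructured(amount=0.5)  # prune 50% of entries by L1 magnitude
t = torch.tensor([0.1, -2.0, 0.3, 4.0])
pruned = method.prune(t)  # the two smallest-magnitude entries (0.1 and 0.3) are zeroed
print(pruned)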
doc_566 | tf.compat.v1.nn.rnn_cell.DeviceWrapper(
*args, **kwargs
)
Args
cell An instance of RNNCell.
device A device string or function, for passing to tf.device.
**kwargs dict of keyword arguments for base layer.
Attributes
graph
output_size
scope_name
state_size
Methods get_initial_state View source
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
zero_state View source
zero_state(
batch_size, dtype
) | |
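A hedged TF1-style usage sketch (the wrapped BasicRNNCell and the device string are assumptions):
import tensorflow as tf
base = tf.compat.v1.nn.rnn_cell.BasicRNNCell(num_units=8)
cell = tf.compat.v1.nn.rnn_cell.DeviceWrapper(base, "/cpu:0")  # run the cell's ops on the CPU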
doc_567 | tf.metrics.deserialize Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.metrics.deserialize
tf.keras.metrics.deserialize(
config, custom_objects=None
)
Arguments
config Metric configuration.
custom_objects Optional dictionary mapping names (strings) to custom objects (classes and functions) to be considered during deserialization.
Returns A Keras Metric instance or a metric function. | |
doc_568 |
Compute the medial axis transform of a binary image Parameters
imagebinary ndarray, shape (M, N)
The image of the shape to be skeletonized.
maskbinary ndarray, shape (M, N), optional
If a mask is given, only those elements in image with a true value in mask are used for computing the medial axis.
return_distancebool, optional
If true, the distance transform is returned as well as the skeleton. Returns
outndarray of bools
Medial axis transform of the image
distndarray of ints, optional
Distance transform of the image (only returned if return_distance is True) See also
skeletonize
Notes This algorithm computes the medial axis transform of an image as the ridges of its distance transform. The different steps of the algorithm are as follows:
A lookup table is used that assigns 0 or 1 to each configuration of the 3x3 binary square, depending on whether the central pixel should be removed or kept. We want a point to be removed if it has more than one neighbor and if removing it does not change the number of connected components.
The distance transform to the background is computed, as well as the cornerness of the pixel.
The foreground (value of 1) points are ordered by the distance transform, then the cornerness.
A cython function is called to reduce the image to its skeleton. It processes pixels in the order determined at the previous step, and removes or maintains a pixel according to the lookup table. Because of the ordering, it is possible to process all pixels in only one pass.
Examples >>> square = np.zeros((7, 7), dtype=np.uint8)
>>> square[1:-1, 2:-2] = 1
>>> square
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> medial_axis(square).astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8) | |
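Continuing the example above, a short sketch of the return_distance option:
>>> skel, dist = medial_axis(square, return_distance=True)
>>> skel.shape == dist.shape
True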
doc_569 | Send normal and ancillary data to the socket, gathering the non-ancillary data from a series of buffers and concatenating it into a single message. The buffers argument specifies the non-ancillary data as an iterable of bytes-like objects (e.g. bytes objects); the operating system may set a limit (sysconf() value SC_IOV_MAX) on the number of buffers that can be used. The ancdata argument specifies the ancillary data (control messages) as an iterable of zero or more tuples (cmsg_level, cmsg_type, cmsg_data), where cmsg_level and cmsg_type are integers specifying the protocol level and protocol-specific type respectively, and cmsg_data is a bytes-like object holding the associated data. Note that some systems (in particular, systems without CMSG_SPACE()) might support sending only one control message per call. The flags argument defaults to 0 and has the same meaning as for send(). If address is supplied and not None, it sets a destination address for the message. The return value is the number of bytes of non-ancillary data sent. The following function sends the list of file descriptors fds over an AF_UNIX socket, on systems which support the SCM_RIGHTS mechanism. See also recvmsg(). import socket, array
def send_fds(sock, msg, fds):
return sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", fds))])
Availability: most Unix platforms, possibly others. Raises an auditing event socket.sendmsg with arguments self, address. New in version 3.3. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the method now retries the system call instead of raising an InterruptedError exception (see PEP 475 for the rationale). | |
doc_570 | See Migration guide for more details. tf.compat.v1.raw_ops.SparseApplyFtrl
tf.raw_ops.SparseApplyFtrl(
var, accum, linear, grad, indices, lr, l1, l2, lr_power, use_locking=False,
multiply_linear_by_lr=False, name=None
)
That is, for the rows for which we have grad, we update var, accum and linear as follows: $$accum_{new} = accum + grad * grad$$ $$linear += grad + (accum_{new}^{-lr_{power}} - accum^{-lr_{power}}) / lr * var$$ $$quadratic = 1.0 / (accum_{new}^{lr_{power}} * lr) + 2 * l2$$ $$var = (sign(linear) * l1 - linear) / quadratic\ if\ |linear| > l1\ else\ 0.0$$ $$accum = accum_{new}$$
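A plain-NumPy sketch of the per-row update implied by the equations above (an illustration under the stated formulas, not the kernel itself; the helper name is hypothetical):
import numpy as np
def ftrl_row_update(var, accum, linear, grad, lr, l1, l2, lr_power):
    # Mirrors the equations above for one row selected by `indices` (sketch only).
    accum_new = accum + grad * grad
    linear = linear + grad + (accum_new ** -lr_power - accum ** -lr_power) / lr * var
    quadratic = 1.0 / (accum_new ** lr_power * lr) + 2.0 * l2
    var = np.where(np.abs(linear) > l1,
                   (np.sign(linear) * l1 - linear) / quadratic,
                   0.0)
    return var, accum_new, linear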
Args
var A mutable Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be from a Variable().
accum A mutable Tensor. Must have the same type as var. Should be from a Variable().
linear A mutable Tensor. Must have the same type as var. Should be from a Variable().
grad A Tensor. Must have the same type as var. The gradient.
indices A Tensor. Must be one of the following types: int32, int64. A vector of indices into the first dimension of var and accum.
lr A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
l1 A Tensor. Must have the same type as var. L1 regularization. Must be a scalar.
l2 A Tensor. Must have the same type as var. L2 regularization. Must be a scalar.
lr_power A Tensor. Must have the same type as var. Scaling factor. Must be a scalar.
use_locking An optional bool. Defaults to False. If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
multiply_linear_by_lr An optional bool. Defaults to False.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as var. | |
doc_571 | Subtypes Real and adds numerator and denominator properties, which should be in lowest terms. With these, it provides a default for float().
numerator
Abstract.
denominator
Abstract. | |
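A concrete sketch using fractions.Fraction, a stdlib implementation of this ABC; the float() default divides numerator by denominator:
from fractions import Fraction
x = Fraction(3, 4)
print(x.numerator, x.denominator)  # 3 4
print(float(x))                    # 0.75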
doc_572 | New in Django 3.2. The database collation name of the field. Note Collation names are not standardized. As such, this will not be portable across multiple database backends. Oracle Oracle does not support collations for a TextField. | |
doc_573 |
Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logical question is “why”. This generally involves looking at what part of the code increased in instruction count. This function automates that process so that one can easily diff counts on both an inclusive and exclusive basis. The subtract_baselines argument allows one to disable baseline correction, though in most cases it shouldn’t matter as the baselines are expected to more or less cancel out. | |
doc_574 | Set prepopulated_fields to a dictionary mapping field names to the fields it should prepopulate from: class ArticleAdmin(admin.ModelAdmin):
prepopulated_fields = {"slug": ("title",)}
When set, the given fields will use a bit of JavaScript to populate from the fields assigned. The main use for this functionality is to automatically generate the value for SlugField fields from one or more other fields. The generated value is produced by concatenating the values of the source fields, and then by transforming that result into a valid slug (e.g. substituting dashes for spaces and lowercasing ASCII letters). Prepopulated fields aren’t modified by JavaScript after a value has been saved. It’s usually undesired that slugs change (which would cause an object’s URL to change if the slug is used in it). prepopulated_fields doesn’t accept DateTimeField, ForeignKey, OneToOneField, and ManyToManyField fields. Changed in Django 3.2: In older versions, various English stop words are removed from generated values. | |
doc_575 | begin sound playback play(loops=0, maxtime=0, fade_ms=0) -> Channel Begin playback of the Sound (i.e., on the computer's speakers) on an available Channel. This will forcibly select a Channel, so playback may cut off a currently playing sound if necessary. The loops argument controls how many times the sample will be repeated after being played the first time. A value of 5 means that the sound will be played once, then repeated five times, and so is played a total of six times. The default value (zero) means the Sound is not repeated, and so is only played once. If loops is set to -1 the Sound will loop indefinitely (though you can still call stop() to stop it). The maxtime argument can be used to stop playback after a given number of milliseconds. The fade_ms argument will make the sound start playing at 0 volume and fade up to full volume over the time given. The sample may end before the fade-in is complete. This returns the Channel object for the channel that was selected. | |
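A short sketch of the loops semantics (the file name is hypothetical):
import pygame
pygame.mixer.init()
sound = pygame.mixer.Sound("beep.wav")  # hypothetical sound file
channel = sound.play(loops=5)           # played once, then repeated 5 times: 6 plays in total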
doc_576 | Return the entire message flattened as a string. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to False. For backward compatibility reasons, maxheaderlen defaults to 0, so if you want a different value you must override it explicitly (the value specified for max_line_length in the policy will be ignored by this method). The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to the Generator. Flattening the message may trigger changes to the Message if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified). Note that this method is provided as a convenience and may not always format the message the way you want. For example, by default it does not do the mangling of lines that begin with From that is required by the unix mbox format. For more flexibility, instantiate a Generator instance and use its flatten() method directly. For example: from io import StringIO
from email.generator import Generator
fp = StringIO()
g = Generator(fp, mangle_from_=True, maxheaderlen=60)
g.flatten(msg)
text = fp.getvalue()
If the message object contains binary data that is not encoded according to RFC standards, the non-compliant data will be replaced by unicode “unknown character” code points. (See also as_bytes() and BytesGenerator.) Changed in version 3.4: the policy keyword argument was added. | |
doc_577 | Alias for torch.le(). | |
doc_578 | See Migration guide for more details. tf.compat.v1.raw_ops.GroupByReducerDataset
tf.raw_ops.GroupByReducerDataset(
input_dataset, key_func_other_arguments, init_func_other_arguments,
reduce_func_other_arguments, finalize_func_other_arguments, key_func, init_func,
reduce_func, finalize_func, output_types, output_shapes, name=None
)
Creates a dataset that computes a group-by on input_dataset.
Args
input_dataset A Tensor of type variant. A variant tensor representing the input dataset.
key_func_other_arguments A list of Tensor objects. A list of tensors, typically values that were captured when building a closure for key_func.
init_func_other_arguments A list of Tensor objects. A list of tensors, typically values that were captured when building a closure for init_func.
reduce_func_other_arguments A list of Tensor objects. A list of tensors, typically values that were captured when building a closure for reduce_func.
finalize_func_other_arguments A list of Tensor objects. A list of tensors, typically values that were captured when building a closure for finalize_func.
key_func A function decorated with @Defun. A function mapping an element of input_dataset, concatenated with key_func_other_arguments to a scalar value of type DT_INT64.
init_func A function decorated with @Defun. A function mapping a key of type DT_INT64, concatenated with init_func_other_arguments to the initial reducer state.
reduce_func A function decorated with @Defun. A function mapping the current reducer state and an element of input_dataset, concatenated with reduce_func_other_arguments to a new reducer state.
finalize_func A function decorated with @Defun. A function mapping the final reducer state to an output element.
output_types A list of tf.DTypes that has length >= 1.
output_shapes A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.
name A name for the operation (optional).
Returns A Tensor of type variant. | |
doc_579 | class sklearn.pipeline.FeatureUnion(transformer_list, *, n_jobs=None, transformer_weights=None, verbose=False) [source]
Concatenates results of multiple transformer objects. This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer. Parameters of the transformers may be set using its name and the parameter name separated by a ‘__’. A transformer may be replaced entirely by setting the parameter with its name to another transformer, or removed by setting to ‘drop’. Read more in the User Guide. New in version 0.13. Parameters
transformer_listlist of (string, transformer) tuples
List of transformer objects to be applied to the data. The first half of each tuple is the name of the transformer. The transformer can be ‘drop’ for it to be ignored. Changed in version 0.22: Deprecated None as a transformer in favor of ‘drop’.
n_jobsint, default=None
Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version v0.20: n_jobs default changed from 1 to None
transformer_weightsdict, default=None
Multiplicative weights for features per transformer. Keys are transformer names, values the weights. Raises ValueError if key not present in transformer_list.
verbosebool, default=False
If True, the time elapsed while fitting each transformer will be printed as it is completed. Attributes
n_features_in_
See also
make_union
Convenience function for simplified feature union construction. Examples >>> from sklearn.pipeline import FeatureUnion
>>> from sklearn.decomposition import PCA, TruncatedSVD
>>> union = FeatureUnion([("pca", PCA(n_components=1)),
... ("svd", TruncatedSVD(n_components=2))])
>>> X = [[0., 1., 3], [2., 2., 5]]
>>> union.fit_transform(X)
array([[ 1.5 , 3.0..., 0.8...],
[-1.5 , 5.7..., -0.4...]])
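Building on the union above, a hedged sketch of the ‘__’-separated parameter syntax and of dropping a transformer:
>>> _ = union.set_params(svd__n_components=1)  # reach into the nested TruncatedSVD
>>> _ = union.set_params(svd='drop')           # remove the svd branch entirely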
Methods
fit(X[, y]) Fit all transformers using X.
fit_transform(X[, y]) Fit all transformers, transform the data and concatenate results.
get_feature_names() Get feature names from all transformers.
get_params([deep]) Get parameters for this estimator.
set_params(**kwargs) Set the parameters of this estimator.
transform(X) Transform X separately by each transformer, concatenate results.
fit(X, y=None, **fit_params) [source]
Fit all transformers using X. Parameters
Xiterable or array-like, depending on transformers
Input data, used to fit transformers.
yarray-like of shape (n_samples, n_outputs), default=None
Targets for supervised learning. Returns
selfFeatureUnion
This estimator.
fit_transform(X, y=None, **fit_params) [source]
Fit all transformers, transform the data and concatenate results. Parameters
Xiterable or array-like, depending on transformers
Input data to be transformed.
yarray-like of shape (n_samples, n_outputs), default=None
Targets for supervised learning. Returns
X_tarray-like or sparse matrix of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.
get_feature_names() [source]
Get feature names from all transformers. Returns
feature_nameslist of strings
Names of the features produced by transform.
get_params(deep=True) [source]
Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the transformer_list of the FeatureUnion. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsmapping of string to any
Parameter names mapped to their values.
set_params(**kwargs) [source]
Set the parameters of this estimator. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in transformer_list. Returns
self
transform(X) [source]
Transform X separately by each transformer, concatenate results. Parameters
Xiterable or array-like, depending on transformers
Input data to be transformed. Returns
X_tarray-like or sparse matrix of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.
Examples using sklearn.pipeline.FeatureUnion
Concatenating multiple feature extraction methods | |
doc_580 |
Return POSIX timestamp as float. Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548')
>>> ts.timestamp()
1584199972.192548 | |
doc_581 | The OptionMenu creates a menu button of options. | |
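A minimal Tk sketch (the option values are assumptions):
import tkinter as tk
root = tk.Tk()
choice = tk.StringVar(root, "one")
menu = tk.OptionMenu(root, choice, "one", "two", "three")
menu.pack()
root.mainloop()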
doc_582 | sklearn.datasets.fetch_covtype(*, data_home=None, download_if_missing=True, random_state=None, shuffle=False, return_X_y=False, as_frame=False) [source]
Load the covertype dataset (classification). Download it if necessary.
Classes 7
Samples total 581012
Dimensionality 54
Features int Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
shufflebool, default=False
Whether to shuffle dataset.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.24. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datandarray of shape (581012, 54)
Each row corresponds to the 54 features in the dataset.
targetndarray of shape (581012,)
Each value corresponds to one of the 7 forest covertypes, with values ranging from 1 to 7.
framedataframe of shape (581012, 53)
Only present when as_frame=True. Contains data and target.
DESCRstr
Description of the forest covertype dataset.
feature_nameslist
The names of the dataset columns.
target_names: list
The names of the target columns.
(data, target)tuple if return_X_y is True
New in version 0.20.
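A hedged usage sketch; note that the first call downloads the full dataset and may take a while:
from sklearn.datasets import fetch_covtype
X, y = fetch_covtype(return_X_y=True)
print(X.shape, y.shape)  # (581012, 54) (581012,)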
Examples using sklearn.datasets.fetch_covtype
Release Highlights for scikit-learn 0.24
Scalable learning with polynomial kernel approximation | |
doc_583 |
Generate Kernel Density Estimate plot using Gaussian kernels. In statistics, kernel density estimation (KDE) is a non-parametric way to estimate the probability density function (PDF) of a random variable. This function uses Gaussian kernels and includes automatic bandwidth determination. Parameters
bw_method:str, scalar or callable, optional
The method used to calculate the estimator bandwidth. This can be ‘scott’, ‘silverman’, a scalar constant or a callable. If None (default), ‘scott’ is used. See scipy.stats.gaussian_kde for more information.
ind:NumPy array or int, optional
Evaluation points for the estimated PDF. If None (default), 1000 equally spaced points are used. If ind is a NumPy array, the KDE is evaluated at the points passed. If ind is an integer, ind number of equally spaced points are used. **kwargs
Additional keyword arguments are documented in pandas.%(this-datatype)s.plot(). Returns
matplotlib.axes.Axes or numpy.ndarray of them
See also scipy.stats.gaussian_kde
Representation of a kernel-density estimate using Gaussian kernels. This is the function used internally to estimate the PDF. Examples Given a Series of points randomly sampled from an unknown distribution, estimate its PDF using KDE with automatic bandwidth determination and plot the results, evaluating them at 1000 equally spaced points (default):
>>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
>>> ax = s.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can lead to over-fitting, while using a large bandwidth value may result in under-fitting:
>>> ax = s.plot.kde(bw_method=0.3)
>>> ax = s.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the plot of the estimated PDF:
>>> ax = s.plot.kde(ind=[1, 2, 3, 4, 5])
For DataFrame, it works in the same way:
>>> df = pd.DataFrame({
... 'x': [1, 2, 2.5, 3, 3.5, 4, 5],
... 'y': [4, 4, 4.5, 5, 5.5, 6, 6],
... })
>>> ax = df.plot.kde()
A scalar bandwidth can be specified. Using a small bandwidth value can lead to over-fitting, while using a large bandwidth value may result in under-fitting:
>>> ax = df.plot.kde(bw_method=0.3)
>>> ax = df.plot.kde(bw_method=3)
Finally, the ind parameter determines the evaluation points for the plot of the estimated PDF:
>>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6]) | |
doc_584 | os.O_DIRECT
os.O_DIRECTORY
os.O_NOFOLLOW
os.O_NOATIME
os.O_PATH
os.O_TMPFILE
os.O_SHLOCK
os.O_EXLOCK
The above constants are extensions and not present if they are not defined by the C library. Changed in version 3.4: Add O_PATH on systems that support it. Add O_TMPFILE, only available on Linux Kernel 3.11 or newer. | |
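A small sketch using one of these flags, assuming a Unix platform that defines it:
import os
fd = os.open("/tmp", os.O_RDONLY | os.O_DIRECTORY)  # fails with ENOTDIR if the path is not a directory
os.close(fd)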
doc_585 | Replace %xx escapes with their single-octet equivalent, and return a bytes object. string may be either a str or a bytes object. If it is a str, unescaped non-ASCII characters in string are encoded into UTF-8 bytes. Example: unquote_to_bytes('a%26%EF') yields b'a&\xef'. | |
doc_586 | If flag is True, curses will try and use hardware line editing facilities. Otherwise, line insertion/deletion are disabled. | |
doc_587 |
Set the outer radial limit. Parameters
rmaxfloat | |
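A minimal sketch on a polar Axes, where this setter applies (matplotlib is an assumption here):
import matplotlib.pyplot as plt
ax = plt.subplot(projection="polar")
ax.set_rmax(2.0)  # nothing beyond r = 2 is drawn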
doc_588 | Round to nearest with ties going away from zero. | |
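A short sketch, assuming this is the decimal module's ROUND_HALF_UP constant (whose documentation matches the wording above):
from decimal import Decimal, ROUND_HALF_UP
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))   # 3
print(Decimal("-2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # -3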
doc_589 | The plural name for the object: verbose_name_plural = "stories"
If this isn’t given, Django will use verbose_name + "s". | |
doc_590 | In [1]: df = pd.DataFrame(
...: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
...: )
...:
In [2]: df
Out[2]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
if-then… An if-then on one column
In [3]: df.loc[df.AAA >= 5, "BBB"] = -1
In [4]: df
Out[4]:
AAA BBB CCC
0 4 10 100
1 5 -1 50
2 6 -1 -30
3 7 -1 -50
An if-then with assignment to 2 columns:
In [5]: df.loc[df.AAA >= 5, ["BBB", "CCC"]] = 555
In [6]: df
Out[6]:
AAA BBB CCC
0 4 10 100
1 5 555 555
2 6 555 555
3 7 555 555
Add another line with different logic, to handle the else case
In [7]: df.loc[df.AAA < 5, ["BBB", "CCC"]] = 2000
In [8]: df
Out[8]:
AAA BBB CCC
0 4 2000 2000
1 5 555 555
2 6 555 555
3 7 555 555
Or use pandas where after you’ve set up a mask
In [9]: df_mask = pd.DataFrame(
...: {"AAA": [True] * 4, "BBB": [False] * 4, "CCC": [True, False] * 2}
...: )
...:
In [10]: df.where(df_mask, -1000)
Out[10]:
AAA BBB CCC
0 4 -1000 2000
1 5 -1000 -1000
2 6 -1000 555
3 7 -1000 -1000
if-then-else using NumPy’s where()
In [11]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [12]: df
Out[12]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [13]: df["logic"] = np.where(df["AAA"] > 5, "high", "low")
In [14]: df
Out[14]:
AAA BBB CCC logic
0 4 10 100 low
1 5 20 50 low
2 6 30 -30 high
3 7 40 -50 high
Splitting Split a frame with a boolean criterion
In [15]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [16]: df
Out[16]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [17]: df[df.AAA <= 5]
Out[17]:
AAA BBB CCC
0 4 10 100
1 5 20 50
In [18]: df[df.AAA > 5]
Out[18]:
AAA BBB CCC
2 6 30 -30
3 7 40 -50
Building criteria Select with multi-column criteria
In [19]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [20]: df
Out[20]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
…and (without assignment returns a Series)
In [21]: df.loc[(df["BBB"] < 25) & (df["CCC"] >= -40), "AAA"]
Out[21]:
0 4
1 5
Name: AAA, dtype: int64
…or (without assignment returns a Series)
In [22]: df.loc[(df["BBB"] > 25) | (df["CCC"] >= -40), "AAA"]
Out[22]:
0 4
1 5
2 6
3 7
Name: AAA, dtype: int64
…or (with assignment modifies the DataFrame.)
In [23]: df.loc[(df["BBB"] > 25) | (df["CCC"] >= 75), "AAA"] = 0.1
In [24]: df
Out[24]:
AAA BBB CCC
0 0.1 10 100
1 5.0 20 50
2 0.1 30 -30
3 0.1 40 -50
Select rows with data closest to certain value using argsort
In [25]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [26]: df
Out[26]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [27]: aValue = 43.0
In [28]: df.loc[(df.CCC - aValue).abs().argsort()]
Out[28]:
AAA BBB CCC
1 5 20 50
0 4 10 100
2 6 30 -30
3 7 40 -50
Dynamically reduce a list of criteria using binary operators
In [29]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [30]: df
Out[30]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [31]: Crit1 = df.AAA <= 5.5
In [32]: Crit2 = df.BBB == 10.0
In [33]: Crit3 = df.CCC > -40.0
One could hard code:
In [34]: AllCrit = Crit1 & Crit2 & Crit3
…Or it can be done with a list of dynamically built criteria
In [35]: import functools
In [36]: CritList = [Crit1, Crit2, Crit3]
In [37]: AllCrit = functools.reduce(lambda x, y: x & y, CritList)
In [38]: df[AllCrit]
Out[38]:
AAA BBB CCC
0 4 10 100
Selection
Dataframes
The indexing docs. Using both row labels and value conditionals
In [39]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [40]: df
Out[40]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [41]: df[(df.AAA <= 6) & (df.index.isin([0, 2, 4]))]
Out[41]:
AAA BBB CCC
0 4 10 100
2 6 30 -30
Use loc for label-oriented slicing and iloc positional slicing GH2904
In [42]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]},
....: index=["foo", "bar", "boo", "kar"],
....: )
....:
There are 2 explicit slicing methods, with a third general case:
Positional-oriented (Python slicing style : exclusive of end)
Label-oriented (Non-Python slicing style : inclusive of end)
General (Either slicing style : depends on if the slice contains labels or positions)
In [43]: df.loc["bar":"kar"] # Label
Out[43]:
AAA BBB CCC
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
# Generic
In [44]: df[0:3]
Out[44]:
AAA BBB CCC
foo 4 10 100
bar 5 20 50
boo 6 30 -30
In [45]: df["bar":"kar"]
Out[45]:
AAA BBB CCC
bar 5 20 50
boo 6 30 -30
kar 7 40 -50
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
In [46]: data = {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
In [47]: df2 = pd.DataFrame(data=data, index=[1, 2, 3, 4]) # Note index starts at 1.
In [48]: df2.iloc[1:3] # Position-oriented
Out[48]:
AAA BBB CCC
2 5 20 50
3 6 30 -30
In [49]: df2.loc[1:3] # Label-oriented
Out[49]:
AAA BBB CCC
1 4 10 100
2 5 20 50
3 6 30 -30
Using inverse operator (~) to take the complement of a mask
In [50]: df = pd.DataFrame(
....: {"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
....: )
....:
In [51]: df
Out[51]:
AAA BBB CCC
0 4 10 100
1 5 20 50
2 6 30 -30
3 7 40 -50
In [52]: df[~((df.AAA <= 6) & (df.index.isin([0, 2, 4])))]
Out[52]:
AAA BBB CCC
1 5 20 50
3 7 40 -50
New columns Efficiently and dynamically creating new columns using applymap
In [53]: df = pd.DataFrame({"AAA": [1, 2, 1, 3], "BBB": [1, 1, 2, 2], "CCC": [2, 1, 3, 1]})
In [54]: df
Out[54]:
AAA BBB CCC
0 1 1 2
1 2 1 1
2 1 2 3
3 3 2 1
In [55]: source_cols = df.columns # Or some subset would work too
In [56]: new_cols = [str(x) + "_cat" for x in source_cols]
In [57]: categories = {1: "Alpha", 2: "Beta", 3: "Charlie"}
In [58]: df[new_cols] = df[source_cols].applymap(categories.get)
In [59]: df
Out[59]:
AAA BBB CCC AAA_cat BBB_cat CCC_cat
0 1 1 2 Alpha Alpha Beta
1 2 1 1 Beta Alpha Alpha
2 1 2 3 Alpha Beta Charlie
3 3 2 1 Charlie Beta Alpha
Keep other columns when using min() with groupby
In [60]: df = pd.DataFrame(
....: {"AAA": [1, 1, 1, 2, 2, 2, 3, 3], "BBB": [2, 1, 3, 4, 5, 1, 2, 3]}
....: )
....:
In [61]: df
Out[61]:
AAA BBB
0 1 2
1 1 1
2 1 3
3 2 4
4 2 5
5 2 1
6 3 2
7 3 3
Method 1 : idxmin() to get the index of the minimums
In [62]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[62]:
AAA BBB
1 1 1
5 2 1
6 3 2
Method 2 : sort then take first of each
In [63]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Out[63]:
AAA BBB
0 1 1
1 2 1
2 3 2
Notice the same results, with the exception of the index.
Multiindexing
The multiindexing docs. Creating a MultiIndex from a labeled frame
In [64]: df = pd.DataFrame(
....: {
....: "row": [0, 1, 2],
....: "One_X": [1.1, 1.1, 1.1],
....: "One_Y": [1.2, 1.2, 1.2],
....: "Two_X": [1.11, 1.11, 1.11],
....: "Two_Y": [1.22, 1.22, 1.22],
....: }
....: )
....:
In [65]: df
Out[65]:
row One_X One_Y Two_X Two_Y
0 0 1.1 1.2 1.11 1.22
1 1 1.1 1.2 1.11 1.22
2 2 1.1 1.2 1.11 1.22
# As Labelled Index
In [66]: df = df.set_index("row")
In [67]: df
Out[67]:
One_X One_Y Two_X Two_Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# With Hierarchical Columns
In [68]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split("_")) for c in df.columns])
In [69]: df
Out[69]:
One Two
X Y X Y
row
0 1.1 1.2 1.11 1.22
1 1.1 1.2 1.11 1.22
2 1.1 1.2 1.11 1.22
# Now stack & Reset
In [70]: df = df.stack(0).reset_index(1)
In [71]: df
Out[71]:
level_1 X Y
row
0 One 1.10 1.20
0 Two 1.11 1.22
1 One 1.10 1.20
1 Two 1.11 1.22
2 One 1.10 1.20
2 Two 1.11 1.22
# And fix the labels (Notice the label 'level_1' got added automatically)
In [72]: df.columns = ["Sample", "All_X", "All_Y"]
In [73]: df
Out[73]:
Sample All_X All_Y
row
0 One 1.10 1.20
0 Two 1.11 1.22
1 One 1.10 1.20
1 Two 1.11 1.22
2 One 1.10 1.20
2 Two 1.11 1.22
Arithmetic Performing arithmetic with a MultiIndex that needs broadcasting
In [74]: cols = pd.MultiIndex.from_tuples(
....: [(x, y) for x in ["A", "B", "C"] for y in ["O", "I"]]
....: )
....:
In [75]: df = pd.DataFrame(np.random.randn(2, 6), index=["n", "m"], columns=cols)
In [76]: df
Out[76]:
A B C
O I O I O I
n 0.469112 -0.282863 -1.509059 -1.135632 1.212112 -0.173215
m 0.119209 -1.044236 -0.861849 -2.104569 -0.494929 1.071804
In [77]: df = df.div(df["C"], level=1)
In [78]: df
Out[78]:
A B C
O I O I O I
n 0.387021 1.633022 -1.244983 6.556214 1.0 1.0
m -0.240860 -0.974279 1.741358 -1.963577 1.0 1.0
Slicing Slicing a MultiIndex with xs
In [79]: coords = [("AA", "one"), ("AA", "six"), ("BB", "one"), ("BB", "two"), ("BB", "six")]
In [80]: index = pd.MultiIndex.from_tuples(coords)
In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ["MyData"])
In [82]: df
Out[82]:
MyData
AA one 11
six 22
BB one 33
two 44
six 55
To take the cross section of the 1st level along the 1st axis (the index):
# Note : level and axis are optional, and default to zero
In [83]: df.xs("BB", level=0, axis=0)
Out[83]:
MyData
one 33
two 44
six 55
…and now the 2nd level of the 1st axis.
In [84]: df.xs("six", level=1, axis=0)
Out[84]:
MyData
AA 22
BB 55
Slicing a MultiIndex with xs, method #2
In [85]: import itertools
In [86]: index = list(itertools.product(["Ada", "Quinn", "Violet"], ["Comp", "Math", "Sci"]))
In [87]: headr = list(itertools.product(["Exams", "Labs"], ["I", "II"]))
In [88]: indx = pd.MultiIndex.from_tuples(index, names=["Student", "Course"])
In [89]: cols = pd.MultiIndex.from_tuples(headr) # Notice these are un-named
In [90]: data = [[70 + x + y + (x * y) % 3 for x in range(4)] for y in range(9)]
In [91]: df = pd.DataFrame(data, indx, cols)
In [92]: df
Out[92]:
Exams Labs
I II I II
Student Course
Ada Comp 70 71 72 73
Math 71 73 75 74
Sci 72 75 75 75
Quinn Comp 73 74 75 76
Math 74 76 78 77
Sci 75 78 78 78
Violet Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [93]: All = slice(None)
In [94]: df.loc["Violet"]
Out[94]:
Exams Labs
I II I II
Course
Comp 76 77 78 79
Math 77 79 81 80
Sci 78 81 81 81
In [95]: df.loc[(All, "Math"), All]
Out[95]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
Violet Math 77 79 81 80
In [96]: df.loc[(slice("Ada", "Quinn"), "Math"), All]
Out[96]:
Exams Labs
I II I II
Student Course
Ada Math 71 73 75 74
Quinn Math 74 76 78 77
In [97]: df.loc[(All, "Math"), ("Exams")]
Out[97]:
I II
Student Course
Ada Math 71 73
Quinn Math 74 76
Violet Math 77 79
In [98]: df.loc[(All, "Math"), (All, "II")]
Out[98]:
Exams Labs
II II
Student Course
Ada Math 73 74
Quinn Math 76 77
Violet Math 79 80
Setting portions of a MultiIndex with xs
Sorting
Sort by specific column or an ordered list of columns, with a MultiIndex
In [99]: df.sort_values(by=("Labs", "II"), ascending=False)
Out[99]:
Exams Labs
I II I II
Student Course
Violet Sci 78 81 81 81
Math 77 79 81 80
Comp 76 77 78 79
Quinn Sci 75 78 78 78
Math 74 76 78 77
Comp 73 74 75 76
Ada Sci 72 75 75 75
Math 71 73 75 74
Comp 70 71 72 73
Partial selection, the need for sortedness GH2995
Levels
Prepending a level to a multiindex
Flatten Hierarchical columns
Missing data
The missing data docs. Fill forward a reversed timeseries
In [100]: df = pd.DataFrame(
.....: np.random.randn(6, 1),
.....: index=pd.date_range("2013-08-01", periods=6, freq="B"),
.....: columns=list("A"),
.....: )
.....:
In [101]: df.loc[df.index[3], "A"] = np.nan
In [102]: df
Out[102]:
A
2013-08-01 0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06 NaN
2013-08-07 -0.424972
2013-08-08 0.567020
In [103]: df.reindex(df.index[::-1]).ffill()
Out[103]:
A
2013-08-08 0.567020
2013-08-07 -0.424972
2013-08-06 -0.424972
2013-08-05 -1.039575
2013-08-02 -0.706771
2013-08-01 0.721555
cumsum reset at NaN values
Replace
Using replace with backrefs
Grouping
The grouping docs. Basic grouping with apply
Unlike agg, apply’s callable is passed a sub-DataFrame which gives you access to all the columns
In [104]: df = pd.DataFrame(
.....: {
.....: "animal": "cat dog cat fish dog cat cat".split(),
.....: "size": list("SSMMMLL"),
.....: "weight": [8, 10, 11, 1, 20, 12, 12],
.....: "adult": [False] * 5 + [True] * 2,
.....: }
.....: )
.....:
In [105]: df
Out[105]:
animal size weight adult
0 cat S 8 False
1 dog S 10 False
2 cat M 11 False
3 fish M 1 False
4 dog M 20 False
5 cat L 12 True
6 cat L 12 True
# List the size of the animals with the highest weight.
In [106]: df.groupby("animal").apply(lambda subf: subf["size"][subf["weight"].idxmax()])
Out[106]:
animal
cat L
dog M
fish M
dtype: object
Using get_group
In [107]: gb = df.groupby(["animal"])
In [108]: gb.get_group("cat")
Out[108]:
animal size weight adult
0 cat S 8 False
2 cat M 11 False
5 cat L 12 True
6 cat L 12 True
Apply to different items in a group
In [109]: def GrowUp(x):
.....: avg_weight = sum(x[x["size"] == "S"].weight * 1.5)
.....: avg_weight += sum(x[x["size"] == "M"].weight * 1.25)
.....: avg_weight += sum(x[x["size"] == "L"].weight)
.....: avg_weight /= len(x)
.....: return pd.Series(["L", avg_weight, True], index=["size", "weight", "adult"])
.....:
In [110]: expected_df = gb.apply(GrowUp)
In [111]: expected_df
Out[111]:
size weight adult
animal
cat L 12.4375 True
dog L 20.0000 True
fish L 1.2500 True
Expanding apply
In [112]: S = pd.Series([i / 100.0 for i in range(1, 11)])
In [113]: def cum_ret(x, y):
.....: return x * (1 + y)
.....:
In [114]: def red(x):
.....: return functools.reduce(cum_ret, x, 1.0)
.....:
In [115]: S.expanding().apply(red, raw=True)
Out[115]:
0 1.010000
1 1.030200
2 1.061106
3 1.103550
4 1.158728
5 1.228251
6 1.314229
7 1.419367
8 1.547110
9 1.701821
dtype: float64
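For this particular reduction (compounding returns), the expanding apply is equivalent to a cumulative product; a sketch of the shortcut:
(1 + S).cumprod()  # matches S.expanding().apply(red, raw=True) above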
Replacing some values with mean of the rest of a group
In [116]: df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]})
In [117]: gb = df.groupby("A")
In [118]: def replace(g):
.....: mask = g < 0
.....: return g.where(mask, g[~mask].mean())
.....:
In [119]: gb.transform(replace)
Out[119]:
B
0 1.0
1 -1.0
2 1.5
3 1.5
Sort groups by aggregated data
In [120]: df = pd.DataFrame(
.....: {
.....: "code": ["foo", "bar", "baz"] * 2,
.....: "data": [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
.....: "flag": [False, True] * 3,
.....: }
.....: )
.....:
In [121]: code_groups = df.groupby("code")
In [122]: agg_n_sort_order = code_groups[["data"]].transform(sum).sort_values(by="data")
In [123]: sorted_df = df.loc[agg_n_sort_order.index]
In [124]: sorted_df
Out[124]:
code data flag
1 bar -0.21 True
4 bar -0.59 False
0 foo 0.16 False
3 foo 0.45 True
2 baz 0.33 False
5 baz 0.62 True
Create multiple aggregated columns
In [125]: rng = pd.date_range(start="2014-10-07", periods=10, freq="2min")
In [126]: ts = pd.Series(data=list(range(10)), index=rng)
In [127]: def MyCust(x):
.....: if len(x) > 2:
.....: return x[1] * 1.234
.....: return pd.NaT
.....:
In [128]: mhc = {"Mean": np.mean, "Max": np.max, "Custom": MyCust}
In [129]: ts.resample("5min").apply(mhc)
Out[129]:
Mean Max Custom
2014-10-07 00:00:00 1.0 2 1.234
2014-10-07 00:05:00 3.5 4 NaT
2014-10-07 00:10:00 6.0 7 7.404
2014-10-07 00:15:00 8.5 9 NaT
In [130]: ts
Out[130]:
2014-10-07 00:00:00 0
2014-10-07 00:02:00 1
2014-10-07 00:04:00 2
2014-10-07 00:06:00 3
2014-10-07 00:08:00 4
2014-10-07 00:10:00 5
2014-10-07 00:12:00 6
2014-10-07 00:14:00 7
2014-10-07 00:16:00 8
2014-10-07 00:18:00 9
Freq: 2T, dtype: int64
Create a value counts column and reassign back to the DataFrame
In [131]: df = pd.DataFrame(
.....: {"Color": "Red Red Red Blue".split(), "Value": [100, 150, 50, 50]}
.....: )
.....:
In [132]: df
Out[132]:
Color Value
0 Red 100
1 Red 150
2 Red 50
3 Blue 50
In [133]: df["Counts"] = df.groupby(["Color"]).transform(len)
In [134]: df
Out[134]:
Color Value Counts
0 Red 100 3
1 Red 150 3
2 Red 50 3
3 Blue 50 1
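An equivalent spelling that aggregates a single column by name rather than passing the builtin len (a sketch):
df["Counts"] = df.groupby("Color")["Value"].transform("count")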
Shift groups of the values in a column based on the index
In [135]: df = pd.DataFrame(
.....: {"line_race": [10, 10, 8, 10, 10, 8], "beyer": [99, 102, 103, 103, 88, 100]},
.....: index=[
.....: "Last Gunfighter",
.....: "Last Gunfighter",
.....: "Last Gunfighter",
.....: "Paynter",
.....: "Paynter",
.....: "Paynter",
.....: ],
.....: )
.....:
In [136]: df
Out[136]:
line_race beyer
Last Gunfighter 10 99
Last Gunfighter 10 102
Last Gunfighter 8 103
Paynter 10 103
Paynter 10 88
Paynter 8 100
In [137]: df["beyer_shifted"] = df.groupby(level=0)["beyer"].shift(1)
In [138]: df
Out[138]:
line_race beyer beyer_shifted
Last Gunfighter 10 99 NaN
Last Gunfighter 10 102 99.0
Last Gunfighter 8 103 102.0
Paynter 10 103 NaN
Paynter 10 88 103.0
Paynter 8 100 88.0
Select row with maximum value from each group
In [139]: df = pd.DataFrame(
.....: {
.....: "host": ["other", "other", "that", "this", "this"],
.....: "service": ["mail", "web", "mail", "mail", "web"],
.....: "no": [1, 2, 1, 2, 1],
.....: }
.....: ).set_index(["host", "service"])
.....:
In [140]: mask = df.groupby(level=0).agg("idxmax")
In [141]: df_count = df.loc[mask["no"]].reset_index()
In [142]: df_count
Out[142]:
host service no
0 other web 2
1 that mail 1
2 this mail 2
Grouping like Python’s itertools.groupby
In [143]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=["A"])
In [144]: df["A"].groupby((df["A"] != df["A"].shift()).cumsum()).groups
Out[144]: {1: [0], 2: [1], 3: [2], 4: [3, 4, 5], 5: [6], 6: [7, 8]}
In [145]: df["A"].groupby((df["A"] != df["A"].shift()).cumsum()).cumsum()
Out[145]:
0 0
1 1
2 0
3 1
4 2
5 3
6 0
7 1
8 2
Name: A, dtype: int64
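For comparison, the standard-library version of the same contiguous-run grouping (a sketch):
import itertools
runs = [list(g) for _, g in itertools.groupby([0, 1, 0, 1, 1, 1, 0, 1, 1])]
# runs == [[0], [1], [0], [1, 1, 1], [0], [1, 1]]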
Expanding data
Alignment and to-date
Rolling Computation window based on values instead of counts
Rolling Mean by Time Interval
Splitting
Splitting a frame
Create a list of DataFrames, split using a delineation based on logic included in rows.
In [146]: df = pd.DataFrame(
.....: data={
.....: "Case": ["A", "A", "A", "B", "A", "A", "B", "A", "A"],
.....: "Data": np.random.randn(9),
.....: }
.....: )
.....:
In [147]: dfs = list(
.....: zip(
.....: *df.groupby(
.....: (1 * (df["Case"] == "B"))
.....: .cumsum()
.....: .rolling(window=3, min_periods=1)
.....: .median()
.....: )
.....: )
.....: )[-1]
.....:
In [148]: dfs[0]
Out[148]:
Case Data
0 A 0.276232
1 A -1.087401
2 A -0.673690
3 B 0.113648
In [149]: dfs[1]
Out[149]:
Case Data
4 A -1.478427
5 A 0.524988
6 B 0.404705
In [150]: dfs[2]
Out[150]:
Case Data
7 A 0.577046
8 A -1.715002
Pivot
The Pivot docs.
Partial sums and subtotals
In [151]: df = pd.DataFrame(
.....: data={
.....: "Province": ["ON", "QC", "BC", "AL", "AL", "MN", "ON"],
.....: "City": [
.....: "Toronto",
.....: "Montreal",
.....: "Vancouver",
.....: "Calgary",
.....: "Edmonton",
.....: "Winnipeg",
.....: "Windsor",
.....: ],
.....: "Sales": [13, 6, 16, 8, 4, 3, 1],
.....: }
.....: )
.....:
In [152]: table = pd.pivot_table(
.....: df,
.....: values=["Sales"],
.....: index=["Province"],
.....: columns=["City"],
.....: aggfunc=np.sum,
.....: margins=True,
.....: )
.....:
In [153]: table.stack("City")
Out[153]:
Sales
Province City
AL All 12.0
Calgary 8.0
Edmonton 4.0
BC All 16.0
Vancouver 16.0
... ...
All Montreal 6.0
Toronto 13.0
Vancouver 16.0
Windsor 1.0
Winnipeg 3.0
[20 rows x 1 columns]
Frequency table like plyr in R
In [154]: grades = [48, 99, 75, 80, 42, 80, 72, 68, 36, 78]
In [155]: df = pd.DataFrame(
.....: {
.....: "ID": ["x%d" % r for r in range(10)],
.....: "Gender": ["F", "M", "F", "M", "F", "M", "F", "M", "M", "M"],
.....: "ExamYear": [
.....: "2007",
.....: "2007",
.....: "2007",
.....: "2008",
.....: "2008",
.....: "2008",
.....: "2008",
.....: "2009",
.....: "2009",
.....: "2009",
.....: ],
.....: "Class": [
.....: "algebra",
.....: "stats",
.....: "bio",
.....: "algebra",
.....: "algebra",
.....: "stats",
.....: "stats",
.....: "algebra",
.....: "bio",
.....: "bio",
.....: ],
.....: "Participated": [
.....: "yes",
.....: "yes",
.....: "yes",
.....: "yes",
.....: "no",
.....: "yes",
.....: "yes",
.....: "yes",
.....: "yes",
.....: "yes",
.....: ],
.....: "Passed": ["yes" if x > 50 else "no" for x in grades],
.....: "Employed": [
.....: True,
.....: True,
.....: True,
.....: False,
.....: False,
.....: False,
.....: False,
.....: True,
.....: True,
.....: False,
.....: ],
.....: "Grade": grades,
.....: }
.....: )
.....:
In [156]: df.groupby("ExamYear").agg(
.....: {
.....: "Participated": lambda x: x.value_counts()["yes"],
.....: "Passed": lambda x: sum(x == "yes"),
.....: "Employed": lambda x: sum(x),
.....: "Grade": lambda x: sum(x) / len(x),
.....: }
.....: )
.....:
Out[156]:
Participated Passed Employed Grade
ExamYear
2007 3 2 3 74.000000
2008 3 3 0 68.500000
2009 3 2 2 60.666667
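Since pandas 0.25, the same table can be built with named aggregation, which avoids the dict-of-lambdas; a sketch:
df.groupby("ExamYear").agg(
    Participated=("Participated", lambda x: (x == "yes").sum()),
    Passed=("Passed", lambda x: (x == "yes").sum()),
    Employed=("Employed", "sum"),
    Grade=("Grade", "mean"),
)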
Plot pandas DataFrame with year over year data
To create a year and month cross tabulation:
In [157]: df = pd.DataFrame(
.....: {"value": np.random.randn(36)},
.....: index=pd.date_range("2011-01-01", freq="M", periods=36),
.....: )
.....:
In [158]: pd.pivot_table(
.....: df, index=df.index.month, columns=df.index.year, values="value", aggfunc="sum"
.....: )
.....:
Out[158]:
2011 2012 2013
1 -1.039268 -0.968914 2.565646
2 -0.370647 -1.294524 1.431256
3 -1.157892 0.413738 1.340309
4 -1.344312 0.276662 -1.170299
5 0.844885 -0.472035 -0.226169
6 1.075770 -0.013960 0.410835
7 -0.109050 -0.362543 0.813850
8 1.643563 -0.006154 0.132003
9 -1.469388 -0.923061 -0.827317
10 0.357021 0.895717 -0.076467
11 -0.674600 0.805244 -1.187678
12 -1.776904 -1.206412 1.130127
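pd.crosstab offers an equivalent spelling for this cross tabulation (a sketch):
pd.crosstab(df.index.month, df.index.year, values=df["value"], aggfunc="sum")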
Apply
Rolling apply to organize - Turning embedded lists into a MultiIndex frame
In [159]: df = pd.DataFrame(
.....: data={
.....: "A": [[2, 4, 8, 16], [100, 200], [10, 20, 30]],
.....: "B": [["a", "b", "c"], ["jj", "kk"], ["ccc"]],
.....: },
.....: index=["I", "II", "III"],
.....: )
.....:
In [160]: def SeriesFromSubList(aList):
.....: return pd.Series(aList)
.....:
In [161]: df_orgz = pd.concat(
.....: {ind: row.apply(SeriesFromSubList) for ind, row in df.iterrows()}
.....: )
.....:
In [162]: df_orgz
Out[162]:
0 1 2 3
I A 2 4 8 16.0
B a b c NaN
II A 100 200 NaN NaN
B jj kk NaN NaN
III A 10 20.0 30.0 NaN
B ccc NaN NaN NaN
Rolling apply with a DataFrame returning a Series
Rolling apply to multiple columns, where the function calculates a Series before a scalar from that Series is returned
In [163]: df = pd.DataFrame(
.....: data=np.random.randn(2000, 2) / 10000,
.....: index=pd.date_range("2001-01-01", periods=2000),
.....: columns=["A", "B"],
.....: )
.....:
In [164]: df
Out[164]:
A B
2001-01-01 -0.000144 -0.000141
2001-01-02 0.000161 0.000102
2001-01-03 0.000057 0.000088
2001-01-04 -0.000221 0.000097
2001-01-05 -0.000201 -0.000041
... ... ...
2006-06-19 0.000040 -0.000235
2006-06-20 -0.000123 -0.000021
2006-06-21 -0.000113 0.000114
2006-06-22 0.000136 0.000109
2006-06-23 0.000027 0.000030
[2000 rows x 2 columns]
In [165]: def gm(df, const):
.....: v = ((((df["A"] + df["B"]) + 1).cumprod()) - 1) * const
.....: return v.iloc[-1]
.....:
In [166]: s = pd.Series(
.....: {
.....: df.index[i]: gm(df.iloc[i: min(i + 51, len(df) - 1)], 5)
.....: for i in range(len(df) - 50)
.....: }
.....: )
.....:
In [167]: s
Out[167]:
2001-01-01 0.000930
2001-01-02 0.002615
2001-01-03 0.001281
2001-01-04 0.001117
2001-01-05 0.002772
...
2006-04-30 0.003296
2006-05-01 0.002629
2006-05-02 0.002081
2006-05-03 0.004247
2006-05-04 0.003928
Length: 1950, dtype: float64
Rolling apply with a DataFrame returning a Scalar
Rolling apply to multiple columns, where the function returns a scalar (Volume Weighted Average Price)
In [168]: rng = pd.date_range(start="2014-01-01", periods=100)
In [169]: df = pd.DataFrame(
.....: {
.....: "Open": np.random.randn(len(rng)),
.....: "Close": np.random.randn(len(rng)),
.....: "Volume": np.random.randint(100, 2000, len(rng)),
.....: },
.....: index=rng,
.....: )
.....:
In [170]: df
Out[170]:
Open Close Volume
2014-01-01 -1.611353 -0.492885 1219
2014-01-02 -3.000951 0.445794 1054
2014-01-03 -0.138359 -0.076081 1381
2014-01-04 0.301568 1.198259 1253
2014-01-05 0.276381 -0.669831 1728
... ... ... ...
2014-04-06 -0.040338 0.937843 1188
2014-04-07 0.359661 -0.285908 1864
2014-04-08 0.060978 1.714814 941
2014-04-09 1.759055 -0.455942 1065
2014-04-10 0.138185 -1.147008 1453
[100 rows x 3 columns]
In [171]: def vwap(bars):
.....: return (bars.Close * bars.Volume).sum() / bars.Volume.sum()
.....:
In [172]: window = 5
In [173]: s = pd.concat(
.....: [
.....: (pd.Series(vwap(df.iloc[i: i + window]), index=[df.index[i + window]]))
.....: for i in range(len(df) - window)
.....: ]
.....: )
.....:
In [174]: s.round(2)
Out[174]:
2014-01-06 0.02
2014-01-07 0.11
2014-01-08 0.10
2014-01-09 0.07
2014-01-10 -0.29
...
2014-04-06 -0.63
2014-04-07 -0.02
2014-04-08 -0.03
2014-04-09 0.34
2014-04-10 0.29
Length: 95, dtype: float64
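The list comprehension above labels each 5-bar window at the bar after the window; a rolling-sum sketch that reproduces the same values and alignment (an equivalence, not from the cookbook):
vwap_s = ((df.Close * df.Volume).rolling(window).sum()
          / df.Volume.rolling(window).sum()).shift(1).dropna()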
Timeseries
Between times
Using indexer between time
Constructing a datetime range that excludes weekends and includes only certain times
Vectorized Lookup
Aggregation and plotting time series
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series. How to rearrange a Python pandas DataFrame?
Dealing with duplicates when reindexing a timeseries to a specified frequency
Calculate the first day of the month for each entry in a DatetimeIndex
In [175]: dates = pd.date_range("2000-01-01", periods=5)
In [176]: dates.to_period(freq="M").to_timestamp()
Out[176]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
'2000-01-01'],
dtype='datetime64[ns]', freq=None)
Resampling
The Resample docs.
Using Grouper instead of TimeGrouper for time grouping of values
Time grouping with some missing values
Valid frequency arguments to Grouper
Timeseries Grouping using a MultiIndex
Using TimeGrouper and another grouping to create subgroups, then apply a custom function GH3791
Resampling with custom periods
Resample intraday frame without adding new days
Resample minute data
Resample with groupby
Merge
The Join docs.
Concatenate two dataframes with overlapping index (emulate R rbind)
In [177]: rng = pd.date_range("2000-01-01", periods=6)
In [178]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=["A", "B", "C"])
In [179]: df2 = df1.copy()
Depending on the df construction, ignore_index may be needed:
In [180]: df = pd.concat([df1, df2], ignore_index=True)
In [181]: df
Out[181]:
A B C
0 -0.870117 -0.479265 -0.790855
1 0.144817 1.726395 -0.464535
2 -0.821906 1.597605 0.187307
3 -0.128342 -1.511638 -0.289858
4 0.399194 -1.430030 -0.639760
5 1.115116 -2.012600 1.810662
6 -0.870117 -0.479265 -0.790855
7 0.144817 1.726395 -0.464535
8 -0.821906 1.597605 0.187307
9 -0.128342 -1.511638 -0.289858
10 0.399194 -1.430030 -0.639760
11 1.115116 -2.012600 1.810662
Self Join of a DataFrame GH2996
In [182]: df = pd.DataFrame(
.....: data={
.....: "Area": ["A"] * 5 + ["C"] * 2,
.....: "Bins": [110] * 2 + [160] * 3 + [40] * 2,
.....: "Test_0": [0, 1, 0, 1, 2, 0, 1],
.....: "Data": np.random.randn(7),
.....: }
.....: )
.....:
In [183]: df
Out[183]:
Area Bins Test_0 Data
0 A 110 0 -0.433937
1 A 110 1 -0.160552
2 A 160 0 0.744434
3 A 160 1 1.754213
4 A 160 2 0.000850
5 C 40 0 0.342243
6 C 40 1 1.070599
In [184]: df["Test_1"] = df["Test_0"] - 1
In [185]: pd.merge(
.....: df,
.....: df,
.....: left_on=["Bins", "Area", "Test_0"],
.....: right_on=["Bins", "Area", "Test_1"],
.....: suffixes=("_L", "_R"),
.....: )
.....:
Out[185]:
Area Bins Test_0_L Data_L Test_1_L Test_0_R Data_R Test_1_R
0 A 110 0 -0.433937 -1 1 -0.160552 0
1 A 160 0 0.744434 -1 1 1.754213 0
2 A 160 1 1.754213 0 2 0.000850 1
3 C 40 0 0.342243 -1 1 1.070599 0
How to set the index and join
KDB like asof join
Join with a criteria based on the values
Using searchsorted to merge based on values inside a range
Plotting
The Plotting docs.
Make Matplotlib look like R
Setting x-axis major and minor labels
Plotting multiple charts in an IPython Jupyter notebook
Creating a multi-line plot
Plotting a heatmap
Annotate a time-series plot
Annotate a time-series plot #2
Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
Boxplot for each quartile of a stratifying variable
In [186]: df = pd.DataFrame(
.....: {
.....: "stratifying_var": np.random.uniform(0, 100, 20),
.....: "price": np.random.normal(100, 5, 20),
.....: }
.....: )
.....:
In [187]: df["quartiles"] = pd.qcut(
.....: df["stratifying_var"], 4, labels=["0-25%", "25-50%", "50-75%", "75-100%"]
.....: )
.....:
In [188]: df.boxplot(column="price", by="quartiles")
Out[188]: <AxesSubplot:title={'center':'price'}, xlabel='quartiles'>
Data in/out
Performance comparison of SQL vs HDF5
CSV
The CSV docs
read_csv in action
Appending to a csv
Reading a csv chunk-by-chunk
Reading only certain rows of a csv chunk-by-chunk
Reading the first few lines of a frame
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here
Inferring dtypes from a file
Dealing with bad lines GH2886
Write a multi-row index CSV without writing duplicates
Reading multiple files to create a single DataFrame
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():
In [189]: for i in range(3):
.....: data = pd.DataFrame(np.random.randn(10, 4))
.....: data.to_csv("file_{}.csv".format(i))
.....:
In [190]: files = ["file_0.csv", "file_1.csv", "file_2.csv"]
In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
You can use the same approach to read all files matching a pattern. Here is an example using glob:
In [192]: import glob
In [193]: import os
In [194]: files = glob.glob("file_*.csv")
In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
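The same list-then-concat pattern works with other readers as well; a minimal sketch reusing the glob import above and assuming hypothetical parquet part files (the file names are made up for illustration):
parts = glob.glob("part_*.parquet")  # hypothetical file names
result = pd.concat([pd.read_parquet(f) for f in parts], ignore_index=True)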
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
Parsing date components in multi-columns
Parsing date components in multi-columns is faster with a format:
In [196]: i = pd.date_range("20000101", periods=10000)
In [197]: df = pd.DataFrame({"year": i.year, "month": i.month, "day": i.day})
In [198]: df.head()
Out[198]:
year month day
0 2000 1 1
1 2000 1 2
2 2000 1 3
3 2000 1 4
4 2000 1 5
In [199]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
.....: ds = df.apply(lambda x: "%04d%02d%02d" % (x["year"], x["month"], x["day"]), axis=1)
.....: ds.head()
.....: %timeit pd.to_datetime(ds)
.....:
8.7 ms +- 765 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
2.1 ms +- 419 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Skip row between header and data
In [200]: data = """;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: ;;;;
.....: date;Param1;Param2;Param4;Param5
.....: ;m²;°C;m²;m
.....: ;;;;
.....: 01.01.1990 00:00;1;1;2;3
.....: 01.01.1990 01:00;5;3;4;5
.....: 01.01.1990 02:00;9;5;6;7
.....: 01.01.1990 03:00;13;7;8;9
.....: 01.01.1990 04:00;17;9;10;11
.....: 01.01.1990 05:00;21;11;12;13
.....: """
.....:
Option 1: pass rows explicitly to skip rows
In [201]: from io import StringIO
In [202]: pd.read_csv(
.....: StringIO(data),
.....: sep=";",
.....: skiprows=[11, 12],
.....: index_col=0,
.....: parse_dates=True,
.....: header=10,
.....: )
.....:
Out[202]:
Param1 Param2 Param4 Param5
date
1990-01-01 00:00:00 1 1 2 3
1990-01-01 01:00:00 5 3 4 5
1990-01-01 02:00:00 9 5 6 7
1990-01-01 03:00:00 13 7 8 9
1990-01-01 04:00:00 17 9 10 11
1990-01-01 05:00:00 21 11 12 13
Option 2: read column names and then data
In [203]: pd.read_csv(StringIO(data), sep=";", header=10, nrows=10).columns
Out[203]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')
In [204]: columns = pd.read_csv(StringIO(data), sep=";", header=10, nrows=10).columns
In [205]: pd.read_csv(
.....: StringIO(data), sep=";", index_col=0, header=12, parse_dates=True, names=columns
.....: )
.....:
Out[205]:
Param1 Param2 Param4 Param5
date
1990-01-01 00:00:00 1 1 2 3
1990-01-01 01:00:00 5 3 4 5
1990-01-01 02:00:00 9 5 6 7
1990-01-01 03:00:00 13 7 8 9
1990-01-01 04:00:00 17 9 10 11
1990-01-01 05:00:00 21 11 12 13
SQL
The SQL docs
Reading from databases with SQL
Excel
The Excel docs
Reading from a filelike handle
Modifying formatting in XlsxWriter output
Loading only visible sheets GH19842#issuecomment-892150745
HTML
Reading HTML tables from a server that cannot handle the default request header
HDFStore
The HDFStores docs
Simple queries with a Timestamp Index
Managing heterogeneous data using a linked multiple table hierarchy GH3032
Merging on-disk tables with millions of rows
Avoiding inconsistencies when writing to a store from multiple processes/threads
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from a csv file and creating a store by chunks, with date parsing as well. See here
Creating a store chunk-by-chunk from a csv file
Appending to a store, while creating a unique index
Large Data work flows
Reading in a sequence of files, then providing a global unique index to a store while appending
Groupby on a HDFStore with low group density
Groupby on a HDFStore with high group density
Hierarchical queries on a HDFStore
Counting with a HDFStore
Troubleshoot HDFStore exceptions
Setting min_itemsize with strings
Using ptrepack to create a completely-sorted-index on a store
Storing Attributes to a group node
In [206]: df = pd.DataFrame(np.random.randn(8, 3))
In [207]: store = pd.HDFStore("test.h5")
In [208]: store.put("df", df)
# you can store an arbitrary Python object via pickle
In [209]: store.get_storer("df").attrs.my_attribute = {"A": 10}
In [210]: store.get_storer("df").attrs.my_attribute
Out[210]: {'A': 10}
You can create or load an HDFStore in-memory by passing the driver parameter to PyTables. Changes are only written to disk when the HDFStore is closed.
In [211]: store = pd.HDFStore("test.h5", "w", driver="H5FD_CORE")
In [212]: df = pd.DataFrame(np.random.randn(8, 3))
In [213]: store["test"] = df
# only after closing the store, data is written to disk:
In [214]: store.close()
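Once the store has been closed (and thus flushed to its backing file), it can be read back like any other HDF5 file; a sketch:
df_roundtrip = pd.read_hdf("test.h5", "test")  # reads the frame written above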
Binary files
pandas readily accepts NumPy record arrays if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c, compiled with gcc main.c -std=gnu99 on a 64-bit machine,
#include <stdio.h>
#include <stdint.h>
typedef struct _Data
{
int32_t count;
double avg;
float scale;
} Data;
int main(int argc, const char *argv[])
{
size_t n = 10;
Data d[n];
for (int i = 0; i < n; ++i)
{
d[i].count = i;
d[i].avg = i + 1.0;
d[i].scale = (float) i + 2.0f;
}
FILE *file = fopen("binary.dat", "wb");
fwrite(&d, sizeof(Data), n, file);
fclose(file);
return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:
names = "count", "avg", "scale"
# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = "i4", "f8", "f4"
dt = np.dtype({"names": names, "offsets": offsets, "formats": formats}, align=True)
df = pd.DataFrame(np.fromfile("binary.dat", dt))
Note: The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross-platform. We recommend either HDF5 or parquet, both of which are supported by pandas’ IO facilities.
Computation
Numerical integration (sample-based) of a time series
Correlation
Often it’s useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:
In [215]: df = pd.DataFrame(np.random.random(size=(100, 5)))
In [216]: corr_mat = df.corr()
In [217]: mask = np.tril(np.ones_like(corr_mat, dtype=np.bool_), k=-1)
In [218]: corr_mat.where(mask)
Out[218]:
0 1 2 3 4
0 NaN NaN NaN NaN NaN
1 -0.079861 NaN NaN NaN NaN
2 -0.236573 0.183801 NaN NaN NaN
3 -0.013795 -0.051975 0.037235 NaN NaN
4 -0.031974 0.118342 -0.073499 -0.02063 NaN
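The upper triangular form is obtained the same way by flipping the mask; a sketch:
mask_upper = np.triu(np.ones_like(corr_mat, dtype=bool), k=1)
corr_mat.where(mask_upper)  # keeps only the entries above the diagonal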
The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.
In [219]: def distcorr(x, y):
.....: n = len(x)
.....: a = np.zeros(shape=(n, n))
.....: b = np.zeros(shape=(n, n))
.....: for i in range(n):
.....: for j in range(i + 1, n):
.....: a[i, j] = abs(x[i] - x[j])
.....: b[i, j] = abs(y[i] - y[j])
.....: a += a.T
.....: b += b.T
.....: a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
.....: b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
.....: A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
.....: B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
.....: cov_ab = np.sqrt(np.nansum(A * B)) / n
.....: std_a = np.sqrt(np.sqrt(np.nansum(A ** 2)) / n)
.....: std_b = np.sqrt(np.sqrt(np.nansum(B ** 2)) / n)
.....: return cov_ab / std_a / std_b
.....:
In [220]: df = pd.DataFrame(np.random.normal(size=(100, 3)))
In [221]: df.corr(method=distcorr)
Out[221]:
0 1 2
0 1.000000 0.197613 0.216328
1 0.197613 1.000000 0.208749
2 0.216328 0.208749 1.000000
Timedeltas
The Timedeltas docs.
Using timedeltas
In [222]: import datetime
In [223]: s = pd.Series(pd.date_range("2012-1-1", periods=3, freq="D"))
In [224]: s - s.max()
Out[224]:
0 -2 days
1 -1 days
2 0 days
dtype: timedelta64[ns]
In [225]: s.max() - s
Out[225]:
0 2 days
1 1 days
2 0 days
dtype: timedelta64[ns]
In [226]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[226]:
0 364 days 20:55:00
1 365 days 20:55:00
2 366 days 20:55:00
dtype: timedelta64[ns]
In [227]: s + datetime.timedelta(minutes=5)
Out[227]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
In [228]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[228]:
0 -365 days +03:05:00
1 -366 days +03:05:00
2 -367 days +03:05:00
dtype: timedelta64[ns]
In [229]: datetime.timedelta(minutes=5) + s
Out[229]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
dtype: datetime64[ns]
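The same arithmetic works with pandas’ own scalar types, which accept string specifications; a sketch:
s + pd.Timedelta("5 minutes")         # same as s + datetime.timedelta(minutes=5)
s - pd.Timestamp("2011-01-01 03:05")  # same as s - datetime.datetime(2011, 1, 1, 3, 5)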
Adding and subtracting deltas and dates
In [230]: deltas = pd.Series([datetime.timedelta(days=i) for i in range(3)])
In [231]: df = pd.DataFrame({"A": s, "B": deltas})
In [232]: df
Out[232]:
A B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days
In [233]: df["New Dates"] = df["A"] + df["B"]
In [234]: df["Delta"] = df["A"] - df["New Dates"]
In [235]: df
Out[235]:
A B New Dates Delta
0 2012-01-01 0 days 2012-01-01 0 days
1 2012-01-02 1 days 2012-01-03 -1 days
2 2012-01-03 2 days 2012-01-05 -2 days
In [236]: df.dtypes
Out[236]:
A datetime64[ns]
B timedelta64[ns]
New Dates datetime64[ns]
Delta timedelta64[ns]
dtype: object
Another example
Values can be set to NaT using np.nan, similar to datetime:
In [237]: y = s - s.shift()
In [238]: y
Out[238]:
0 NaT
1 1 days
2 1 days
dtype: timedelta64[ns]
In [239]: y[1] = np.nan
In [240]: y
Out[240]:
0 NaT
1 NaT
2 1 days
dtype: timedelta64[ns]
Creating example data
To create a DataFrame from every combination of some given values, like R’s expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values (the example below uses itertools.product, so it assumes import itertools has been run):
In [241]: def expand_grid(data_dict):
.....: rows = itertools.product(*data_dict.values())
.....: return pd.DataFrame.from_records(rows, columns=data_dict.keys())
.....:
In [242]: df = expand_grid(
.....: {"height": [60, 70], "weight": [100, 140, 180], "sex": ["Male", "Female"]}
.....: )
.....:
In [243]: df
Out[243]:
height weight sex
0 60 100 Male
1 60 100 Female
2 60 140 Male
3 60 140 Female
4 60 180 Male
5 60 180 Female
6 70 100 Male
7 70 100 Female
8 70 140 Male
9 70 140 Female
10 70 180 Male
11 70 180 Female | |
doc_591 | The browser version, if it could be parsed from the string. | |
doc_592 | Token value for "//". | |
doc_593 |
Call transform on the estimator with the best found parameters. Only available if the underlying estimator supports transform and refit=True. Parameters
Xindexable, length n_samples
Must fulfill the input assumptions of the underlying estimator. | |
doc_594 |
Constant kernel. Can be used as part of a product-kernel where it scales the magnitude of the other factor (kernel) or as part of a sum-kernel, where it modifies the mean of the Gaussian process. \[k(x_1, x_2) = constant\_value \;\forall\; x_1, x_2\] Adding a constant kernel is equivalent to adding a constant: kernel = RBF() + ConstantKernel(constant_value=2)
is the same as: kernel = RBF() + 2
Read more in the User Guide. New in version 0.18. Parameters
constant_valuefloat, default=1.0
The constant value which defines the covariance: k(x_1, x_2) = constant_value
constant_value_boundspair of floats >= 0 or “fixed”, default=(1e-5, 1e5)
The lower and upper bound on constant_value. If set to “fixed”, constant_value cannot be changed during hyperparameter tuning. Attributes
bounds
Returns the log-transformed bounds on the theta. hyperparameter_constant_value
hyperparameters
Returns a list of all hyperparameter specifications.
n_dims
Returns the number of non-fixed hyperparameters of the kernel.
requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Examples >>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import RBF, ConstantKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = RBF() + ConstantKernel(constant_value=2)
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=5,
... random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3696...
>>> gpr.predict(X[:1,:], return_std=True)
(array([606.1...]), array([0.24...]))
Methods
__call__(X[, Y, eval_gradient]) Return the kernel k(X, Y) and optionally its gradient.
clone_with_theta(theta) Returns a clone of self with given hyperparameters theta.
diag(X) Returns the diagonal of the kernel k(X, X).
get_params([deep]) Get parameters of this kernel.
is_stationary() Returns whether the kernel is stationary.
set_params(**params) Set the parameters of this kernel.
__call__(X, Y=None, eval_gradient=False) [source]
Return the kernel k(X, Y) and optionally its gradient. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Left argument of the returned kernel k(X, Y)
Yarray-like of shape (n_samples_X, n_features) or list of object, default=None
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
eval_gradientbool, default=False
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None. Returns
Kndarray of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradientndarray of shape (n_samples_X, n_samples_X, n_dims), optional
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
property bounds
Returns the log-transformed bounds on the theta. Returns
boundsndarray of shape (n_dims, 2)
The log-transformed bounds on the kernel’s hyperparameters theta
clone_with_theta(theta) [source]
Returns a clone of self with given hyperparameters theta. Parameters
thetandarray of shape (n_dims,)
The hyperparameters
diag(X) [source]
Returns the diagonal of the kernel k(X, X). The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated. Parameters
Xarray-like of shape (n_samples_X, n_features) or list of object
Argument to the kernel. Returns
K_diagndarray of shape (n_samples_X,)
Diagonal of kernel k(X, X)
get_params(deep=True) [source]
Get parameters of this kernel. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property hyperparameters
Returns a list of all hyperparameter specifications.
is_stationary() [source]
Returns whether the kernel is stationary.
property n_dims
Returns the number of non-fixed hyperparameters of the kernel.
property requires_vector_input
Whether the kernel works only on fixed-length feature vectors.
set_params(**params) [source]
Set the parameters of this kernel. The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Returns
self
property theta
Returns the (flattened, log-transformed) non-fixed hyperparameters. Note that theta are typically the log-transformed values of the kernel’s hyperparameters as this representation of the search space is more amenable for hyperparameter search, as hyperparameters like length-scales naturally live on a log-scale. Returns
thetandarray of shape (n_dims,)
The non-fixed, log-transformed hyperparameters of the kernel | |
doc_595 |
Immutable Index implementing a monotonic integer range. RangeIndex is a memory-saving special case of Int64Index limited to representing monotonic ranges. Using RangeIndex may in some instances improve computing speed. This is the default index type used by DataFrame and Series when no explicit index is provided by the user. Parameters
start:int (default: 0), range, or other RangeIndex instance
If int and “stop” is not given, interpreted as “stop” instead.
stop:int (default: 0)
step:int (default: 1)
dtype:np.int64
Unused, accepted for homogeneity with other index types.
copy:bool, default False
Unused, accepted for homogeneity with other index types.
name:object, optional
Name to be stored in the index. See also Index
The base pandas Index type. Int64Index
Index of int64 data. Attributes
start The value of the start parameter (0 if this was not supplied).
stop The value of the stop parameter.
step The value of the step parameter (1 if this was not supplied). Methods
from_range(data[, name, dtype]) Create RangeIndex from a range object. | |
doc_596 | Subclass of RawTurtle, has the same interface but draws on a default Screen object created automatically when needed for the first time. | |
doc_597 |
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^Fro_2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. | |
doc_598 |
Compute cluster centers and predict cluster index for each sample. Convenience method; equivalent to calling fit(X) followed by predict(X). Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
New data to transform.
yIgnored
Not used, present here for API consistency by convention.
sample_weightarray-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight. Returns
labelsndarray of shape (n_samples,)
Index of the cluster each sample belongs to. | |
doc_599 |
Transform array or sparse matrix X back to feature mappings. X must have been produced by this DictVectorizer’s transform or fit_transform method; it may only have passed through transformers that preserve the number of features and their order. In the case of one-hot/one-of-K coding, the constructed feature names and values are returned rather than the original ones. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Sample matrix.
dict_typetype, default=dict
Constructor for feature mappings. Must conform to the collections.Mapping API. Returns
Dlist of dict_type objects of shape (n_samples,)
Feature mappings for the samples in X. |