_id | text | title
|---|---|---|
doc_25800 | Run test and return the result. | |
doc_25801 |
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \(1 - \frac{u}{v}\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
Parameters
X : array-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
score : float
\(R^2\) of self.predict(X) with respect to y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all multioutput regressors (except for MultiOutputRegressor).
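For illustration, a minimal sketch (not part of the original docstring) assuming scikit-learn's LinearRegression and r2_score; score(X, y) computes the same quantity as r2_score on the estimator's predictions:
>>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.metrics import r2_score
>>> X = np.array([[1.0], [2.0], [3.0], [4.0]])
>>> y = np.array([1.0, 2.1, 2.9, 4.2])
>>> reg = LinearRegression().fit(X, y)
>>> bool(np.isclose(reg.score(X, y), r2_score(y, reg.predict(X))))
True | |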
doc_25802 | Return a string indicating the HTTP request method. If Request.method is not None, return its value, otherwise return 'GET' if Request.data is None, or 'POST' if it’s not. This is only meaningful for HTTP requests. Changed in version 3.3: get_method now looks at the value of Request.method.
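For illustration, a small interactive sketch (not part of the original documentation; the URL is a placeholder) showing the rules described above:
>>> from urllib.request import Request
>>> Request('http://www.example.com/').get_method()
'GET'
>>> Request('http://www.example.com/', data=b'spam=1').get_method()
'POST'
>>> Request('http://www.example.com/', method='HEAD').get_method()
'HEAD' | |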
doc_25803 |
Return True if the period’s year is in a leap year.
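For illustration, a minimal sketch (not part of the original documentation; this assumes pandas.Period.is_leap_year):
>>> import pandas as pd
>>> pd.Period('2012-02', freq='M').is_leap_year
True
>>> pd.Period('2011-02', freq='M').is_leap_year
False | |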
doc_25804 | Compat alias for migration: tf.compat.v1.nest.map_structure
tf.nest.map_structure(
func, *structure, **kwargs
)
Applies func(x[0], x[1], ...) where x[i] is an entry in structure[i]. All structures in structure must have the same arity, and the return value will contain results with the same structure layout. Examples: A single Python dict:
a = {"hello": 24, "world": 76}
tf.nest.map_structure(lambda p: p * 2, a)
{'hello': 48, 'world': 152}
Multiple Python dictionaries:
d1 = {"hello": 24, "world": 76}
d2 = {"hello": 36, "world": 14}
tf.nest.map_structure(lambda p1, p2: p1 + p2, d1, d2)
{'hello': 60, 'world': 90}
Args
func: A callable that accepts as many arguments as there are structures.
*structure: scalar, or tuple or dict or list of constructed scalars and/or other tuples/lists, or scalars. Note: numpy arrays are considered as scalars.
**kwargs: Valid keyword args are:
check_types: If set to True (default) the types of iterables within the structures have to be the same (e.g. map_structure(func, [1], (1,)) raises a TypeError exception). To allow this, set this argument to False. Note that namedtuples with identical name and fields are always considered to have the same shallow structure.
expand_composites: If set to True, then composite tensors such as tf.sparse.SparseTensor and tf.RaggedTensor are expanded into their component tensors. If False (the default), then composite tensors are not expanded.
Returns
A new structure with the same arity as structure, whose values correspond to func(x[0], x[1], ...) where x[i] is a value in the corresponding location in structure[i]. If there are different sequence types and check_types is False, the sequence types of the first structure will be used.
Raises
TypeError: If func is not callable or if the structures do not match each other by depth tree.
ValueError: If no structure is provided or if the structures do not match each other by type.
ValueError: If wrong keyword arguments are provided.
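A small sketch (not from the original page) of the check_types behavior described above; with the default check_types=True this call would raise TypeError because a list and a tuple are different iterable types, while with check_types=False the result takes the sequence type of the first structure:
a = [1, 2, 3]
b = (10, 20, 30)
tf.nest.map_structure(lambda x, y: x + y, a, b, check_types=False)
[11, 22, 33] | |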
doc_25805 |
Change shape and size of array in-place.
Parameters
new_shape : tuple of ints, or n ints
Shape of resized array.
refcheck : bool, optional
If False, reference count will not be checked. Default is True.
Returns
None
Raises
ValueError
If a does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist.
SystemError
If the order keyword argument is specified. This behaviour is a bug in NumPy.
See also
resize
Return a new array with the specified shape. Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False. Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: >>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
[1]])
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
[2]])
Enlarging an array: as above, but missing entries are filled with zeros: >>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
[3, 0, 0]])
Referencing an array prevents resizing… >>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that references or is referenced ...
Unless refcheck is False: >>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]]) | |
doc_25806 |
Return boolean flag, True if artist is included in layout calculations. E.g. Constrained Layout Guide, Figure.tight_layout(), and fig.savefig(fname, bbox_inches='tight'). | |
doc_25807 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**params : dict
Estimator parameters.
Returns
self : estimator instance
Estimator instance.
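For illustration, a minimal sketch (not part of the original docstring) assuming a scikit-learn Pipeline; the clf__C syntax reaches the C parameter of the nested 'clf' step:
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LogisticRegression
>>> pipe = Pipeline([('scaler', StandardScaler()), ('clf', LogisticRegression())])
>>> _ = pipe.set_params(clf__C=10.0)
>>> pipe.get_params()['clf__C']
10.0 | |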
doc_25808 | Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor. | |
doc_25809 | os.F_TLOCK
os.F_ULOCK
os.F_TEST
Flags that specify what action lockf() will take. Availability: Unix. New in version 3.3.
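For illustration, a minimal Unix-only sketch (not part of the original documentation; the file name is a placeholder) using these flags with os.lockf():
import os

fd = os.open('counter.txt', os.O_WRONLY | os.O_CREAT)
os.lockf(fd, os.F_TLOCK, 0)      # non-blocking attempt; raises OSError if another process holds the lock
try:
    os.write(fd, b'1\n')
finally:
    os.lockf(fd, os.F_ULOCK, 0)  # release the lock
    os.close(fd) | |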
doc_25810 | Return the arc cosine of x, in radians. The result is between 0 and pi. | |
doc_25811 | Set to a non-ASCII name for a temporary file. | |
doc_25812 |
Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. Read more in the User Guide. Parameters
y_true : 1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels.
y_pred : 1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier.
normalize : bool, default=True
If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
score : float
If normalize == True, return the fraction of correctly classified samples (float), else returns the number of correctly classified samples (int). The best performance is 1 with normalize == True and the number of samples with normalize == False.
See also
jaccard_score, hamming_loss, zero_one_loss
Notes In binary and multiclass classification, this function is equal to the jaccard_score function. Examples >>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
In the multilabel case with binary label indicators: >>> import numpy as np
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5 | |
doc_25813 | Applies an affine transformation specified by the parameters given. Compat alias for migration: tf.compat.v1.keras.preprocessing.image.apply_affine_transform
tf.keras.preprocessing.image.apply_affine_transform(
x, theta=0, tx=0, ty=0, shear=0, zx=1, zy=1, row_axis=0, col_axis=1,
channel_axis=2, fill_mode='nearest', cval=0.0, order=1
)
Arguments
x: 2D numpy array, single image.
theta: Rotation angle in degrees.
tx: Width shift.
ty: Height shift.
shear: Shear angle in degrees.
zx: Zoom in x direction.
zy: Zoom in y direction.
row_axis: Index of axis for rows in the input image.
col_axis: Index of axis for columns in the input image.
channel_axis: Index of axis for channels in the input image.
fill_mode: Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'nearest', 'reflect', 'wrap'}).
cval: Value used for points outside the boundaries of the input if mode='constant'.
order: int, order of interpolation.
Returns
The transformed version of the input.
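For illustration, a minimal sketch (not part of the original page; assumes SciPy is installed, which this function needs at runtime) applying a rotation and a shift to a random channels-last image:
import numpy as np
from tensorflow.keras.preprocessing.image import apply_affine_transform

img = np.random.rand(64, 64, 3)                  # single image, channels last (matches the default axes)
rotated = apply_affine_transform(img, theta=30)  # rotate by 30 degrees
shifted = apply_affine_transform(img, tx=5, ty=10, fill_mode='constant', cval=0.0)
print(rotated.shape, shifted.shape)              # the input shape (64, 64, 3) is preserved | |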
doc_25814 |
Performs a single optimization step. Parameters
closure (callable) – A closure that reevaluates the model and returns the loss.
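For illustration, a minimal sketch (not part of the original documentation; LBFGS is chosen here only because it requires a closure) of the closure pattern described above:
import torch

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)
inputs = torch.randn(8, 2)
targets = torch.randn(8, 1)

def closure():
    # reevaluate the model and return the loss, as the parameter description requires
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure) | |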
doc_25815 | Open the given url (which can be a request object or a string), optionally passing the given data. Arguments, return values and exceptions raised are the same as those of urlopen() (which simply calls the open() method on the currently installed global OpenerDirector). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). The timeout feature actually works only for HTTP, HTTPS and FTP connections.
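For illustration, a minimal sketch (not part of the original documentation; the URL is a placeholder and the call needs network access) of open() on an explicitly built OpenerDirector, which is what urlopen() does against the global one:
import urllib.request

opener = urllib.request.build_opener()
with opener.open('http://www.example.com/', timeout=10) as response:
    print(response.status, len(response.read())) | |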
doc_25816 |
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
will print to standard output [1, 4, 9]
The Process class In multiprocessing, processes are spawned by creating a Process object and then calling its start() method. Process follows the API of threading.Thread. A trivial example of a multiprocess program is from multiprocessing import Process
def f(name):
print('hello', name)
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
To show the individual process IDs involved, here is an expanded example: from multiprocessing import Process
import os
def info(title):
print(title)
print('module name:', __name__)
print('parent process:', os.getppid())
print('process id:', os.getpid())
def f(name):
info('function f')
print('hello', name)
if __name__ == '__main__':
info('main line')
p = Process(target=f, args=('bob',))
p.start()
p.join()
For an explanation of why the if __name__ == '__main__' part is necessary, see Programming guidelines. Contexts and start methods Depending on the platform, multiprocessing supports three ways to start a process. These start methods are spawn
The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object’s run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver. Available on Unix and Windows. The default on Windows and macOS. fork
The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic. Available on Unix only. The default on Unix. forkserver
When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited. Available on Unix platforms which support passing file descriptors over Unix pipes. Changed in version 3.8: On macOS, the spawn start method is now the default. The fork start method should be considered unsafe as it can lead to crashes of the subprocess. See bpo-33725. Changed in version 3.4: spawn added on all unix platforms, and forkserver added for some unix platforms. Child processes no longer inherit all of the parents inheritable handles on Windows. On Unix using the spawn or forkserver start methods will also start a resource tracker process which tracks the unlinked named system resources (such as named semaphores or SharedMemory objects) created by processes of the program. When all processes have exited the resource tracker unlinks any remaining tracked object. Usually there should be none, but if a process was killed by a signal there may be some “leaked” resources. (Neither leaked semaphores nor shared memory segments will be automatically unlinked until the next reboot. This is problematic for both objects because the system allows only a limited number of named semaphores, and shared memory segments occupy some space in the main memory.) To select a start method you use the set_start_method() in the if __name__ == '__main__' clause of the main module. For example: import multiprocessing as mp
def foo(q):
q.put('hello')
if __name__ == '__main__':
mp.set_start_method('spawn')
q = mp.Queue()
p = mp.Process(target=foo, args=(q,))
p.start()
print(q.get())
p.join()
set_start_method() should not be used more than once in the program. Alternatively, you can use get_context() to obtain a context object. Context objects have the same API as the multiprocessing module, and allow one to use multiple start methods in the same program. import multiprocessing as mp
def foo(q):
q.put('hello')
if __name__ == '__main__':
ctx = mp.get_context('spawn')
q = ctx.Queue()
p = ctx.Process(target=foo, args=(q,))
p.start()
print(q.get())
p.join()
Note that objects related to one context may not be compatible with processes for a different context. In particular, locks created using the fork context cannot be passed to processes started using the spawn or forkserver start methods. A library which wants to use a particular start method should probably use get_context() to avoid interfering with the choice of the library user. Warning The 'spawn' and 'forkserver' start methods cannot currently be used with “frozen” executables (i.e., binaries produced by packages like PyInstaller and cx_Freeze) on Unix. The 'fork' start method does work. Exchanging objects between processes multiprocessing supports two types of communication channel between processes: Queues The Queue class is a near clone of queue.Queue. For example: from multiprocessing import Process, Queue
def f(q):
q.put([42, None, 'hello'])
if __name__ == '__main__':
q = Queue()
p = Process(target=f, args=(q,))
p.start()
print(q.get()) # prints "[42, None, 'hello']"
p.join()
Queues are thread and process safe. Pipes The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way). For example: from multiprocessing import Process, Pipe
def f(conn):
conn.send([42, None, 'hello'])
conn.close()
if __name__ == '__main__':
parent_conn, child_conn = Pipe()
p = Process(target=f, args=(child_conn,))
p.start()
print(parent_conn.recv()) # prints "[42, None, 'hello']"
p.join()
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time. Synchronization between processes multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time: from multiprocessing import Process, Lock
def f(l, i):
l.acquire()
try:
print('hello world', i)
finally:
l.release()
if __name__ == '__main__':
lock = Lock()
for num in range(10):
Process(target=f, args=(lock, num)).start()
Without using the lock output from the different processes is liable to get all mixed up. Sharing state between processes As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes. However, if you really do need to use some shared data then multiprocessing provides a couple of ways of doing so. Shared memory Data can be stored in a shared memory map using Value or Array. For example, the following code from multiprocessing import Process, Value, Array
def f(n, a):
n.value = 3.1415927
for i in range(len(a)):
a[i] = -a[i]
if __name__ == '__main__':
num = Value('d', 0.0)
arr = Array('i', range(10))
p = Process(target=f, args=(num, arr))
p.start()
p.join()
print(num.value)
print(arr[:])
will print 3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
The 'd' and 'i' arguments used when creating num and arr are typecodes of the kind used by the array module: 'd' indicates a double precision float and 'i' indicates a signed integer. These shared objects will be process and thread-safe. For more flexibility in using shared memory one can use the multiprocessing.sharedctypes module which supports the creation of arbitrary ctypes objects allocated from shared memory. Server process A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies. A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array. For example, from multiprocessing import Process, Manager
def f(d, l):
d[1] = '1'
d['2'] = 2
d[0.25] = None
l.reverse()
if __name__ == '__main__':
with Manager() as manager:
d = manager.dict()
l = manager.list(range(10))
p = Process(target=f, args=(d, l))
p.start()
p.join()
print(d)
print(l)
will print {0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory. Using a pool of workers The Pool class represents a pool of worker processes. It has methods which allows tasks to be offloaded to the worker processes in a few different ways. For example: from multiprocessing import Pool, TimeoutError
import time
import os
def f(x):
return x*x
if __name__ == '__main__':
# start 4 worker processes
with Pool(processes=4) as pool:
# print "[0, 1, 4,..., 81]"
print(pool.map(f, range(10)))
# print same numbers in arbitrary order
for i in pool.imap_unordered(f, range(10)):
print(i)
# evaluate "f(20)" asynchronously
res = pool.apply_async(f, (20,)) # runs in *only* one process
print(res.get(timeout=1)) # prints "400"
# evaluate "os.getpid()" asynchronously
res = pool.apply_async(os.getpid, ()) # runs in *only* one process
print(res.get(timeout=1)) # prints the PID of that process
# launching multiple evaluations asynchronously *may* use more processes
multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
print([res.get(timeout=1) for res in multiple_results])
# make a single worker sleep for 10 secs
res = pool.apply_async(time.sleep, (10,))
try:
print(res.get(timeout=1))
except TimeoutError:
print("We lacked patience and got a multiprocessing.TimeoutError")
print("For the moment, the pool remains available for more work")
# exiting the 'with'-block has stopped the pool
print("Now the pool is closed and no longer available")
Note that the methods of a pool should only ever be used by the process which created it. Note Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the multiprocessing.pool.Pool examples will not work in the interactive interpreter. For example: >>> from multiprocessing import Pool
>>> p = Pool(5)
>>> def f(x):
... return x*x
...
>>> with p:
... p.map(f, [1,2,3])
Process PoolWorker-1:
Process PoolWorker-2:
Process PoolWorker-3:
Traceback (most recent call last):
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
AttributeError: 'module' object has no attribute 'f'
(If you try this it will actually output three full tracebacks interleaved in a semi-random fashion, and then you may have to stop the parent process somehow.) Reference The multiprocessing package mostly replicates the API of the threading module.
Process and exceptions
class multiprocessing.Process(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)
Process objects represent activity that is run in a separate process. The Process class has equivalents of all the methods of threading.Thread. The constructor should always be called with keyword arguments. group should always be None; it exists solely for compatibility with threading.Thread. target is the callable object to be invoked by the run() method. It defaults to None, meaning nothing is called. name is the process name (see name for more details). args is the argument tuple for the target invocation. kwargs is a dictionary of keyword arguments for the target invocation. If provided, the keyword-only daemon argument sets the process daemon flag to True or False. If None (the default), this flag will be inherited from the creating process. By default, no arguments are passed to target. If a subclass overrides the constructor, it must make sure it invokes the base class constructor (Process.__init__()) before doing anything else to the process. Changed in version 3.3: Added the daemon argument.
run()
Method representing the process’s activity. You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
start()
Start the process’s activity. This must be called at most once per process object. It arranges for the object’s run() method to be invoked in a separate process.
join([timeout])
If the optional argument timeout is None (the default), the method blocks until the process whose join() method is called terminates. If timeout is a positive number, it blocks at most timeout seconds. Note that the method returns None if its process terminates or if the method times out. Check the process’s exitcode to determine if it terminated. A process can be joined many times. A process cannot join itself because this would cause a deadlock. It is an error to attempt to join a process before it has been started.
name
The process’s name. The name is a string used for identification purposes only. It has no semantics. Multiple processes may be given the same name. The initial name is set by the constructor. If no explicit name is provided to the constructor, a name of the form ‘Process-N1:N2:…:Nk’ is constructed, where each Nk is the N-th child of its parent.
is_alive()
Return whether the process is alive. Roughly, a process object is alive from the moment the start() method returns until the child process terminates.
daemon
The process’s daemon flag, a Boolean value. This must be set before start() is called. The initial value is inherited from the creating process. When a process exits, it attempts to terminate all of its daemonic child processes. Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
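For example, a short sketch (not part of the original documentation) of setting the flag before start():
from multiprocessing import Process
import time

def worker():
    time.sleep(60)

if __name__ == '__main__':
    p = Process(target=worker, daemon=True)   # must be set before start()
    p.start()
    print(p.daemon)                           # True
    # when this process exits, the daemonic child is terminated rather than joined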
In addition to the threading.Thread API, Process objects also support the following attributes and methods:
pid
Return the process ID. Before the process is spawned, this will be None.
exitcode
The child’s exit code. This will be None if the process has not yet terminated. A negative value -N indicates that the child was terminated by signal N.
authkey
The process’s authentication key (a byte string). When multiprocessing is initialized the main process is assigned a random string using os.urandom(). When a Process object is created, it will inherit the authentication key of its parent process, although this may be changed by setting authkey to another byte string. See Authentication keys.
sentinel
A numeric handle of a system object which will become “ready” when the process ends. You can use this value if you want to wait on several events at once using multiprocessing.connection.wait(). Otherwise calling join() is simpler. On Windows, this is an OS handle usable with the WaitForSingleObject and WaitForMultipleObjects family of API calls. On Unix, this is a file descriptor usable with primitives from the select module. New in version 3.3.
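For example, a short sketch (not part of the original documentation) that waits on several sentinels at once with multiprocessing.connection.wait():
from multiprocessing import Process
from multiprocessing.connection import wait
import time

def worker(n):
    time.sleep(n)

if __name__ == '__main__':
    procs = [Process(target=worker, args=(n,)) for n in (1, 2, 3)]
    for p in procs:
        p.start()
    remaining = {p.sentinel: p for p in procs}
    while remaining:
        for s in wait(list(remaining)):       # blocks until at least one sentinel becomes ready
            remaining.pop(s).join()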
terminate()
Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess() is used. Note that exit handlers and finally clauses, etc., will not be executed. Note that descendant processes of the process will not be terminated – they will simply become orphaned. Warning If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by other process. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
kill()
Same as terminate() but using the SIGKILL signal on Unix. New in version 3.7.
close()
Close the Process object, releasing all resources associated with it. ValueError is raised if the underlying process is still running. Once close() returns successfully, most other methods and attributes of the Process object will raise ValueError. New in version 3.7.
Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object. Example usage of some of the methods of Process: >>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print(p, p.is_alive())
<Process ... initial> False
>>> p.start()
>>> print(p, p.is_alive())
<Process ... started> True
>>> p.terminate()
>>> time.sleep(0.1)
>>> print(p, p.is_alive())
<Process ... stopped exitcode=-SIGTERM> False
>>> p.exitcode == -signal.SIGTERM
True
exception multiprocessing.ProcessError
The base class of all multiprocessing exceptions.
exception multiprocessing.BufferTooShort
Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read. If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.
exception multiprocessing.AuthenticationError
Raised when there is an authentication error.
exception multiprocessing.TimeoutError
Raised by methods with a timeout when the timeout expires.
Pipes and Queues When using multiple processes, one generally uses message passing for communication between processes and avoids having to use any synchronization primitives like locks. For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows multiple producers and consumers). The Queue, SimpleQueue and JoinableQueue types are multi-producer, multi-consumer FIFO queues modelled on the queue.Queue class in the standard library. They differ in that Queue lacks the task_done() and join() methods introduced into Python 2.5’s queue.Queue class. If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception. Note that one can also create a shared queue by using a manager object – see Managers. Note multiprocessing uses the usual queue.Empty and queue.Full exceptions to signal a timeout. They are not available in the multiprocessing namespace so you need to import them from queue. Note When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties – if they really bother you then you can instead use a queue created with a manager. After putting an object on an empty queue there may be an infinitesimal delay before the queue’s empty() method returns False and get_nowait() can return without raising queue.Empty. If multiple processes are enqueuing objects, it is possible for the objects to be received at the other end out-of-order. However, objects enqueued by the same process will always be in the expected order with respect to each other. Warning If a process is killed using Process.terminate() or os.kill() while it is trying to use a Queue, then the data in the queue is likely to become corrupted. This may cause any other process to get an exception when it tries to use the queue later on. Warning As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe. This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children. Note that a queue created using a manager does not have this issue. See Programming guidelines. For an example of the usage of queues for interprocess communication see Examples.
multiprocessing.Pipe([duplex])
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe. If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.
class multiprocessing.Queue([maxsize])
Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe. The usual queue.Empty and queue.Full exceptions from the standard library’s queue module are raised to signal timeouts. Queue implements all the methods of queue.Queue except for task_done() and join().
qsize()
Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable. Note that this may raise NotImplementedError on Unix platforms like Mac OS X where sem_getvalue() is not implemented.
empty()
Return True if the queue is empty, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
full()
Return True if the queue is full, False otherwise. Because of multithreading/multiprocessing semantics, this is not reliable.
put(obj[, block[, timeout]])
Put obj into the queue. If the optional argument block is True (the default) and timeout is None (the default), block if necessary until a free slot is available. If timeout is a positive number, it blocks at most timeout seconds and raises the queue.Full exception if no free slot was available within that time. Otherwise (block is False), put an item on the queue if a free slot is immediately available, else raise the queue.Full exception (timeout is ignored in that case). Changed in version 3.8: If the queue is closed, ValueError is raised instead of AssertionError.
put_nowait(obj)
Equivalent to put(obj, False).
get([block[, timeout]])
Remove and return an item from the queue. If optional args block is True (the default) and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the queue.Empty exception if no item was available within that time. Otherwise (block is False), return an item if one is immediately available, else raise the queue.Empty exception (timeout is ignored in that case). Changed in version 3.8: If the queue is closed, ValueError is raised instead of OSError.
get_nowait()
Equivalent to get(False).
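For example, a short sketch (not part of the original documentation) of the timeout behaviour; note that queue.Full and queue.Empty must be imported from queue, as mentioned above:
from multiprocessing import Queue
import queue

if __name__ == '__main__':
    q = Queue(maxsize=1)
    q.put('task')
    try:
        q.put('overflow', block=True, timeout=0.5)
    except queue.Full:
        print('still full after 0.5 seconds')
    print(q.get())                # 'task'
    try:
        q.get(timeout=0.5)
    except queue.Empty:
        print('nothing left to get')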
multiprocessing.Queue has a few additional methods not found in queue.Queue. These methods are usually unnecessary for most code:
close()
Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.
join_thread()
Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe. By default if a process is not the creator of the queue then on exit it will attempt to join the queue’s background thread. The process can call cancel_join_thread() to make join_thread() do nothing.
cancel_join_thread()
Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits – see join_thread(). A better name for this method might be allow_exit_without_flush(). It is likely to cause enqueued data to be lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don’t care about lost data.
Note This class’s functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the functionality in this class will be disabled, and attempts to instantiate a Queue will result in an ImportError. See bpo-3770 for additional information. The same holds true for any of the specialized queue types listed below.
class multiprocessing.SimpleQueue
It is a simplified Queue type, very close to a locked Pipe.
close()
Close the queue: release internal resources. A queue must not be used anymore after it is closed. For example, get(), put() and empty() methods must no longer be called. New in version 3.9.
empty()
Return True if the queue is empty, False otherwise.
get()
Remove and return an item from the queue.
put(item)
Put item into the queue.
class multiprocessing.JoinableQueue([maxsize])
JoinableQueue, a Queue subclass, is a queue which additionally has task_done() and join() methods.
task_done()
Indicate that a formerly enqueued task is complete. Used by queue consumers. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete. If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue). Raises a ValueError if called more times than there were items placed in the queue.
join()
Block until all items in the queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
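For example, a short sketch (not part of the original documentation) of the task_done()/join() protocol:
from multiprocessing import JoinableQueue, Process

def worker(q):
    while True:
        item = q.get()
        print('processed', item)
        q.task_done()             # one call per item fetched with get()

if __name__ == '__main__':
    q = JoinableQueue()
    p = Process(target=worker, args=(q,), daemon=True)
    p.start()
    for item in range(3):
        q.put(item)
    q.join()                      # blocks until task_done() has been called for every item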
Miscellaneous
multiprocessing.active_children()
Return list of all live children of the current process. Calling this has the side effect of “joining” any processes which have already finished.
multiprocessing.cpu_count()
Return the number of CPUs in the system. This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0)). May raise NotImplementedError. See also os.cpu_count().
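For example (not part of the original documentation; os.sched_getaffinity is Unix-only):
import multiprocessing
import os

print(multiprocessing.cpu_count())        # CPUs in the system
print(len(os.sched_getaffinity(0)))       # CPUs this process is actually allowed to use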
multiprocessing.current_process()
Return the Process object corresponding to the current process. An analogue of threading.current_thread().
multiprocessing.parent_process()
Return the Process object corresponding to the parent process of the current_process(). For the main process, parent_process will be None. New in version 3.8.
multiprocessing.freeze_support()
Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.) One needs to call this function straight after the if __name__ ==
'__main__' line of the main module. For example: from multiprocessing import Process, freeze_support
def f():
print('hello world!')
if __name__ == '__main__':
freeze_support()
Process(target=f).start()
If the freeze_support() line is omitted then trying to run the frozen executable will raise RuntimeError. Calling freeze_support() has no effect when invoked on any operating system other than Windows. In addition, if the module is being run normally by the Python interpreter on Windows (the program has not been frozen), then freeze_support() has no effect.
multiprocessing.get_all_start_methods()
Returns a list of the supported start methods, the first of which is the default. The possible start methods are 'fork', 'spawn' and 'forkserver'. On Windows only 'spawn' is available. On Unix 'fork' and 'spawn' are always supported, with 'fork' being the default. New in version 3.4.
multiprocessing.get_context(method=None)
Return a context object which has the same attributes as the multiprocessing module. If method is None then the default context is returned. Otherwise method should be 'fork', 'spawn', 'forkserver'. ValueError is raised if the specified start method is not available. New in version 3.4.
multiprocessing.get_start_method(allow_none=False)
Return the name of start method used for starting processes. If the start method has not been fixed and allow_none is false, then the start method is fixed to the default and the name is returned. If the start method has not been fixed and allow_none is true then None is returned. The return value can be 'fork', 'spawn', 'forkserver' or None. 'fork' is the default on Unix, while 'spawn' is the default on Windows. New in version 3.4.
multiprocessing.set_executable()
Sets the path of the Python interpreter to use when starting a child process. (By default sys.executable is used). Embedders will probably need to do something like set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
before they can create child processes. Changed in version 3.4: Now supported on Unix when the 'spawn' start method is used.
multiprocessing.set_start_method(method)
Set the method which should be used to start child processes. method can be 'fork', 'spawn' or 'forkserver'. Note that this should be called at most once, and it should be protected inside the if __name__ == '__main__' clause of the main module. New in version 3.4.
Note multiprocessing contains no analogues of threading.active_count(), threading.enumerate(), threading.settrace(), threading.setprofile(), threading.Timer, or threading.local. Connection Objects Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets. Connection objects are usually created using Pipe – see also Listeners and Clients.
class multiprocessing.connection.Connection
send(obj)
Send an object to the other end of the connection which should be read using recv(). The object must be picklable. Very large pickles (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception.
recv()
Return an object sent from the other end of the connection using send(). Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
fileno()
Return the file descriptor or handle used by the connection.
close()
Close the connection. This is called automatically when the connection is garbage collected.
poll([timeout])
Return whether there is any data available to be read. If timeout is not specified then it will return immediately. If timeout is a number then this specifies the maximum time in seconds to block. If timeout is None then an infinite timeout is used. Note that multiple connection objects may be polled at once by using multiprocessing.connection.wait().
send_bytes(buffer[, offset[, size]])
Send byte data from a bytes-like object as a complete message. If offset is given then data is read from that position in buffer. If size is given then that many bytes will be read from buffer. Very large buffers (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception
recv_bytes([maxlength])
Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end has closed. If maxlength is specified and the message is longer than maxlength then OSError is raised and the connection will no longer be readable. Changed in version 3.3: This function used to raise IOError, which is now an alias of OSError.
recv_bytes_into(buffer[, offset])
Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed. buffer must be a writable bytes-like object. If offset is given then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of buffer (in bytes). If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.
Changed in version 3.3: Connection objects themselves can now be transferred between processes using Connection.send() and Connection.recv(). New in version 3.3: Connection objects now support the context management protocol – see Context Manager Types. __enter__() returns the connection object, and __exit__() calls close().
For example: >>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes(b'thank you')
>>> a.recv_bytes()
b'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
Warning The Connection.recv() method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message. Therefore, unless the connection object was produced using Pipe() you should only use the recv() and send() methods after performing some sort of authentication. See Authentication keys. Warning If a process is killed while it is trying to read or write to a pipe then the data in the pipe is likely to become corrupted, because it may become impossible to be sure where the message boundaries lie. Synchronization primitives Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for threading module. Note that one can also create synchronization primitives by using a manager object – see Managers.
class multiprocessing.Barrier(parties[, action[, timeout]])
A barrier object: a clone of threading.Barrier. New in version 3.3.
class multiprocessing.BoundedSemaphore([value])
A bounded semaphore object: a close analog of threading.BoundedSemaphore. A solitary difference from its close analog exists: its acquire method’s first argument is named block, as is consistent with Lock.acquire(). Note On Mac OS X, this is indistinguishable from Semaphore because sem_getvalue() is not implemented on that platform.
class multiprocessing.Condition([lock])
A condition variable: an alias for threading.Condition. If lock is specified then it should be a Lock or RLock object from multiprocessing. Changed in version 3.3: The wait_for() method was added.
class multiprocessing.Event
A clone of threading.Event.
class multiprocessing.Lock
A non-recursive lock object: a close analog of threading.Lock. Once a process or thread has acquired a lock, subsequent attempts to acquire it from any process or thread will block until it is released; any process or thread may release it. The concepts and behaviors of threading.Lock as it applies to threads are replicated here in multiprocessing.Lock as it applies to either processes or threads, except as noted. Note that Lock is actually a factory function which returns an instance of multiprocessing.synchronize.Lock initialized with a default context. Lock supports the context manager protocol and thus may be used in with statements.
acquire(block=True, timeout=None)
Acquire a lock, blocking or non-blocking. With the block argument set to True (the default), the method call will block until the lock is in an unlocked state, then set it to locked and return True. Note that the name of this first argument differs from that in threading.Lock.acquire(). With the block argument set to False, the method call does not block. If the lock is currently in a locked state, return False; otherwise set the lock to a locked state and return True. When invoked with a positive, floating-point value for timeout, block for at most the number of seconds specified by timeout as long as the lock can not be acquired. Invocations with a negative value for timeout are equivalent to a timeout of zero. Invocations with a timeout value of None (the default) set the timeout period to infinite. Note that the treatment of negative or None values for timeout differs from the implemented behavior in threading.Lock.acquire(). The timeout argument has no practical implications if the block argument is set to False and is thus ignored. Returns True if the lock has been acquired or False if the timeout period has elapsed.
release()
Release a lock. This can be called from any process or thread, not only the process or thread which originally acquired the lock. Behavior is the same as in threading.Lock.release() except that when invoked on an unlocked lock, a ValueError is raised.
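For example, a short sketch (not part of the original documentation) of blocking and non-blocking acquisition within a single process:
from multiprocessing import Lock

if __name__ == '__main__':
    lock = Lock()
    print(lock.acquire(timeout=1.0))   # True: the lock was free
    print(lock.acquire(block=False))   # False: the (non-recursive) lock is already held
    lock.release()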
class multiprocessing.RLock
A recursive lock object: a close analog of threading.RLock. A recursive lock must be released by the process or thread that acquired it. Once a process or thread has acquired a recursive lock, the same process or thread may acquire it again without blocking; that process or thread must release it once for each time it has been acquired. Note that RLock is actually a factory function which returns an instance of multiprocessing.synchronize.RLock initialized with a default context. RLock supports the context manager protocol and thus may be used in with statements.
acquire(block=True, timeout=None)
Acquire a lock, blocking or non-blocking. When invoked with the block argument set to True, block until the lock is in an unlocked state (not owned by any process or thread) unless the lock is already owned by the current process or thread. The current process or thread then takes ownership of the lock (if it does not already have ownership) and the recursion level inside the lock increments by one, resulting in a return value of True. Note that there are several differences in this first argument’s behavior compared to the implementation of threading.RLock.acquire(), starting with the name of the argument itself. When invoked with the block argument set to False, do not block. If the lock has already been acquired (and thus is owned) by another process or thread, the current process or thread does not take ownership and the recursion level within the lock is not changed, resulting in a return value of False. If the lock is in an unlocked state, the current process or thread takes ownership and the recursion level is incremented, resulting in a return value of True. Use and behaviors of the timeout argument are the same as in Lock.acquire(). Note that some of these behaviors of timeout differ from the implemented behaviors in threading.RLock.acquire().
release()
Release a lock, decrementing the recursion level. If after the decrement the recursion level is zero, reset the lock to unlocked (not owned by any process or thread) and if any other processes or threads are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed. If after the decrement the recursion level is still nonzero, the lock remains locked and owned by the calling process or thread. Only call this method when the calling process or thread owns the lock. An AssertionError is raised if this method is called by a process or thread other than the owner or if the lock is in an unlocked (unowned) state. Note that the type of exception raised in this situation differs from the implemented behavior in threading.RLock.release().
class multiprocessing.Semaphore([value])
A semaphore object: a close analog of threading.Semaphore. A solitary difference from its close analog exists: its acquire method’s first argument is named block, as is consistent with Lock.acquire().
Note On Mac OS X, sem_timedwait is unsupported, so calling acquire() with a timeout will emulate that function’s behavior using a sleeping loop. Note If the SIGINT signal generated by Ctrl-C arrives while the main thread is blocked by a call to BoundedSemaphore.acquire(), Lock.acquire(), RLock.acquire(), Semaphore.acquire(), Condition.acquire() or Condition.wait() then the call will be immediately interrupted and KeyboardInterrupt will be raised. This differs from the behaviour of threading where SIGINT will be ignored while the equivalent blocking calls are in progress. Note Some of this package’s functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the multiprocessing.synchronize module will be disabled, and attempts to import it will result in an ImportError. See bpo-3770 for additional information. Shared ctypes Objects It is possible to create shared objects using shared memory which can be inherited by child processes.
multiprocessing.Value(typecode_or_type, *args, lock=True)
Return a ctypes object allocated from shared memory. By default the return value is actually a synchronized wrapper for the object. The object itself can be accessed via the value attribute of a Value. typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type. If lock is True (the default) then a new recursive lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”. Operations like += which involve a read and write are not atomic. So if, for instance, you want to atomically increment a shared value it is insufficient to just do counter.value += 1
Assuming the associated lock is recursive (which it is by default) you can instead do with counter.get_lock():
counter.value += 1
Note that lock is a keyword-only argument.
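For example, a short sketch (not part of the original documentation) of the atomic-increment pattern described above:
from multiprocessing import Process, Value

def increment(counter, times):
    for _ in range(times):
        with counter.get_lock():   # += on a shared Value is a read plus a write, so take the lock
            counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)
    procs = [Process(target=increment, args=(counter, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)           # 4000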
multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)
Return a ctypes array allocated from shared memory. By default the return value is actually a synchronized wrapper for the array. typecode_or_type determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the array module. If size_or_initializer is an integer, then it determines the length of the array, and the array will be initially zeroed. Otherwise, size_or_initializer is a sequence which is used to initialize the array and whose length determines the length of the array. If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”. Note that lock is a keyword only argument. Note that an array of ctypes.c_char has value and raw attributes which allow one to use it to store and retrieve strings.
The multiprocessing.sharedctypes module The multiprocessing.sharedctypes module provides functions for allocating ctypes objects from shared memory which can be inherited by child processes. Note Although it is possible to store a pointer in shared memory remember that this will refer to a location in the address space of a specific process. However, the pointer is quite likely to be invalid in the context of a second process and trying to dereference the pointer from the second process may cause a crash.
multiprocessing.sharedctypes.RawArray(typecode_or_type, size_or_initializer)
Return a ctypes array allocated from shared memory. typecode_or_type determines the type of the elements of the returned array: it is either a ctypes type or a one character typecode of the kind used by the array module. If size_or_initializer is an integer then it determines the length of the array, and the array will be initially zeroed. Otherwise size_or_initializer is a sequence which is used to initialize the array and whose length determines the length of the array. Note that setting and getting an element is potentially non-atomic – use Array() instead to make sure that access is automatically synchronized using a lock.
multiprocessing.sharedctypes.RawValue(typecode_or_type, *args)
Return a ctypes object allocated from shared memory. typecode_or_type determines the type of the returned object: it is either a ctypes type or a one character typecode of the kind used by the array module. *args is passed on to the constructor for the type. Note that setting and getting the value is potentially non-atomic – use Value() instead to make sure that access is automatically synchronized using a lock. Note that an array of ctypes.c_char has value and raw attributes which allow one to use it to store and retrieve strings – see documentation for ctypes.
multiprocessing.sharedctypes.Array(typecode_or_type, size_or_initializer, *, lock=True)
The same as RawArray() except that depending on the value of lock a process-safe synchronization wrapper may be returned instead of a raw ctypes array. If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”. Note that lock is a keyword-only argument.
multiprocessing.sharedctypes.Value(typecode_or_type, *args, lock=True)
The same as RawValue() except that depending on the value of lock a process-safe synchronization wrapper may be returned instead of a raw ctypes object. If lock is True (the default) then a new lock object is created to synchronize access to the value. If lock is a Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the returned object will not be automatically protected by a lock, so it will not necessarily be “process-safe”. Note that lock is a keyword-only argument.
multiprocessing.sharedctypes.copy(obj)
Return a ctypes object allocated from shared memory which is a copy of the ctypes object obj.
multiprocessing.sharedctypes.synchronized(obj[, lock])
Return a process-safe wrapper object for a ctypes object which uses lock to synchronize access. If lock is None (the default) then a multiprocessing.RLock object is created automatically. A synchronized wrapper will have two methods in addition to those of the object it wraps: get_obj() returns the wrapped object and get_lock() returns the lock object used for synchronization. Note that accessing the ctypes object through the wrapper can be a lot slower than accessing the raw ctypes object. Changed in version 3.5: Synchronized objects support the context manager protocol.
The table below compares the syntax for creating shared ctypes objects from shared memory with the normal ctypes syntax. (In the table MyStruct is some subclass of ctypes.Structure.)
ctypes                    sharedctypes using type       sharedctypes using typecode
c_double(2.4)             RawValue(c_double, 2.4)       RawValue('d', 2.4)
MyStruct(4, 6)            RawValue(MyStruct, 4, 6)
(c_short * 7)()           RawArray(c_short, 7)          RawArray('h', 7)
(c_int * 3)(9, 2, 8)      RawArray(c_int, (9, 2, 8))    RawArray('i', (9, 2, 8))
Below is an example where a number of ctypes objects are modified by a child process:
from multiprocessing import Process, Lock
from multiprocessing.sharedctypes import Value, Array
from ctypes import Structure, c_double
class Point(Structure):
_fields_ = [('x', c_double), ('y', c_double)]
def modify(n, x, s, A):
n.value **= 2
x.value **= 2
s.value = s.value.upper()
for a in A:
a.x **= 2
a.y **= 2
if __name__ == '__main__':
lock = Lock()
n = Value('i', 7)
x = Value(c_double, 1.0/3.0, lock=False)
s = Array('c', b'hello world', lock=lock)
A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
p = Process(target=modify, args=(n, x, s, A))
p.start()
p.join()
print(n.value)
print(x.value)
print(s.value)
print([(a.x, a.y) for a in A])
The results printed are 49
0.1111111111111111
HELLO WORLD
[(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
Managers Managers provide a way to create data which can be shared between different processes, including sharing over a network between processes running on different machines. A manager object controls a server process which manages shared objects. Other processes can access the shared objects by using proxies.
multiprocessing.Manager()
Returns a started SyncManager object which can be used for sharing objects between processes. The returned manager object corresponds to a spawned child process and has methods which will create shared objects and return corresponding proxies.
Manager processes will be shut down as soon as they are garbage collected or their parent process exits. The manager classes are defined in the multiprocessing.managers module:
class multiprocessing.managers.BaseManager([address[, authkey]])
Create a BaseManager object. Once created one should call start() or get_server().serve_forever() to ensure that the manager object refers to a started manager process. address is the address on which the manager process listens for new connections. If address is None then an arbitrary one is chosen. authkey is the authentication key which will be used to check the validity of incoming connections to the server process. If authkey is None then current_process().authkey is used. Otherwise authkey is used and it must be a byte string.
start([initializer[, initargs]])
Start a subprocess to start the manager. If initializer is not None then the subprocess will call initializer(*initargs) when it starts.
get_server()
Returns a Server object which represents the actual server under the control of the Manager. The Server object supports the serve_forever() method: >>> from multiprocessing.managers import BaseManager
>>> manager = BaseManager(address=('', 50000), authkey=b'abc')
>>> server = manager.get_server()
>>> server.serve_forever()
Server additionally has an address attribute.
connect()
Connect a local manager object to a remote manager process: >>> from multiprocessing.managers import BaseManager
>>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
>>> m.connect()
shutdown()
Stop the process used by the manager. This is only available if start() has been used to start the server process. This can be called multiple times.
register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
A classmethod which can be used for registering a type or callable with the manager class. typeid is a “type identifier” which is used to identify a particular type of shared object. This must be a string. callable is a callable used for creating objects for this type identifier. If a manager instance will be connected to the server using the connect() method, or if the create_method argument is False then this can be left as None. proxytype is a subclass of BaseProxy which is used to create proxies for shared objects with this typeid. If None then a proxy class is created automatically. exposed is used to specify a sequence of method names which proxies for this typeid should be allowed to access using BaseProxy._callmethod(). (If exposed is None then proxytype._exposed_ is used instead if it exists.) In the case where no exposed list is specified, all “public methods” of the shared object will be accessible. (Here a “public method” means any attribute which has a __call__() method and whose name does not begin with '_'.) method_to_typeid is a mapping used to specify the return type of those exposed methods which should return a proxy. It maps method names to typeid strings. (If method_to_typeid is None then proxytype._method_to_typeid_ is used instead if it exists.) If a method’s name is not a key of this mapping or if the mapping is None then the object returned by the method will be copied by value. create_method determines whether a method should be created with name typeid which can be used to tell the server process to create a new shared object and return a proxy for it. By default it is True.
BaseManager instances also have one read-only property:
address
The address used by the manager.
Changed in version 3.3: Manager objects support the context management protocol – see Context Manager Types. __enter__() starts the server process (if it has not already started) and then returns the manager object. __exit__() calls shutdown(). In previous versions __enter__() did not start the manager’s server process if it was not already started.
class multiprocessing.managers.SyncManager
A subclass of BaseManager which can be used for the synchronization of processes. Objects of this type are returned by multiprocessing.Manager(). Its methods create and return Proxy Objects for a number of commonly used data types to be synchronized across processes. This notably includes shared lists and dictionaries.
Barrier(parties[, action[, timeout]])
Create a shared threading.Barrier object and return a proxy for it. New in version 3.3.
BoundedSemaphore([value])
Create a shared threading.BoundedSemaphore object and return a proxy for it.
Condition([lock])
Create a shared threading.Condition object and return a proxy for it. If lock is supplied then it should be a proxy for a threading.Lock or threading.RLock object. Changed in version 3.3: The wait_for() method was added.
Event()
Create a shared threading.Event object and return a proxy for it.
Lock()
Create a shared threading.Lock object and return a proxy for it.
Namespace()
Create a shared Namespace object and return a proxy for it.
Queue([maxsize])
Create a shared queue.Queue object and return a proxy for it.
RLock()
Create a shared threading.RLock object and return a proxy for it.
Semaphore([value])
Create a shared threading.Semaphore object and return a proxy for it.
Array(typecode, sequence)
Create an array and return a proxy for it.
Value(typecode, value)
Create an object with a writable value attribute and return a proxy for it.
dict()
dict(mapping)
dict(sequence)
Create a shared dict object and return a proxy for it.
list()
list(sequence)
Create a shared list object and return a proxy for it.
Changed in version 3.6: Shared objects are capable of being nested. For example, a shared container object such as a shared list can contain other shared objects which will all be managed and synchronized by the SyncManager.
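A minimal sketch of the nested behaviour described above (the function and key names are invented for illustration):
from multiprocessing import Process, Manager

def work(shared):
    shared[0]['hits'] = shared[0].get('hits', 0) + 1   # mutate the nested dict proxy

if __name__ == '__main__':
    with Manager() as manager:
        outer = manager.list([manager.dict()])   # a managed list holding a managed dict (3.6+)
        p = Process(target=work, args=(outer,))
        p.start()
        p.join()
        print(outer[0])   # changes made through the nested proxy are visible here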
class multiprocessing.managers.Namespace
A type that can register with SyncManager. A namespace object has no public methods, but does have writable attributes. Its representation shows the values of its attributes. However, when using a proxy for a namespace object, an attribute beginning with '_' will be an attribute of the proxy and not an attribute of the referent: >>> manager = multiprocessing.Manager()
>>> Global = manager.Namespace()
>>> Global.x = 10
>>> Global.y = 'hello'
>>> Global._z = 12.3 # this is an attribute of the proxy
>>> print(Global)
Namespace(x=10, y='hello')
Customized managers To create one’s own manager, one creates a subclass of BaseManager and uses the register() classmethod to register new types or callables with the manager class. For example: from multiprocessing.managers import BaseManager
class MathsClass:
def add(self, x, y):
return x + y
def mul(self, x, y):
return x * y
class MyManager(BaseManager):
pass
MyManager.register('Maths', MathsClass)
if __name__ == '__main__':
with MyManager() as manager:
maths = manager.Maths()
print(maths.add(4, 3)) # prints 7
print(maths.mul(7, 8)) # prints 56
Using a remote manager It is possible to run a manager server on one machine and have clients use it from other machines (assuming that the firewalls involved allow it). Running the following commands creates a server for a single shared queue which remote clients can access: >>> from multiprocessing.managers import BaseManager
>>> from queue import Queue
>>> queue = Queue()
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue', callable=lambda:queue)
>>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
One client can access the server as follows: >>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.put('hello')
Another client can also use it: >>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
>>> QueueManager.register('get_queue')
>>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
>>> m.connect()
>>> queue = m.get_queue()
>>> queue.get()
'hello'
Local processes can also access that queue, using the code from above on the client to access it remotely: >>> from multiprocessing import Process, Queue
>>> from multiprocessing.managers import BaseManager
>>> class Worker(Process):
... def __init__(self, q):
... self.q = q
... super().__init__()
... def run(self):
... self.q.put('local hello')
...
>>> queue = Queue()
>>> w = Worker(queue)
>>> w.start()
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('get_queue', callable=lambda: queue)
>>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
>>> s = m.get_server()
>>> s.serve_forever()
Proxy Objects A proxy is an object which refers to a shared object which lives (presumably) in a different process. The shared object is said to be the referent of the proxy. Multiple proxy objects may have the same referent. A proxy object has methods which invoke corresponding methods of its referent (although not every method of the referent will necessarily be available through the proxy). In this way, a proxy can be used just like its referent can: >>> from multiprocessing import Manager
>>> manager = Manager()
>>> l = manager.list([i*i for i in range(10)])
>>> print(l)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> print(repr(l))
<ListProxy object, typeid 'list' at 0x...>
>>> l[4]
16
>>> l[2:5]
[4, 9, 16]
Notice that applying str() to a proxy will return the representation of the referent, whereas applying repr() will return the representation of the proxy. An important feature of proxy objects is that they are picklable so they can be passed between processes. As such, a referent can contain Proxy Objects. This permits nesting of these managed lists, dicts, and other Proxy Objects: >>> a = manager.list()
>>> b = manager.list()
>>> a.append(b) # referent of a now contains referent of b
>>> print(a, b)
[<ListProxy object, typeid 'list' at ...>] []
>>> b.append('hello')
>>> print(a[0], b)
['hello'] ['hello']
Similarly, dict and list proxies may be nested inside one another: >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
>>> d_first_inner = l_outer[0]
>>> d_first_inner['a'] = 1
>>> d_first_inner['b'] = 2
>>> l_outer[1]['c'] = 3
>>> l_outer[1]['z'] = 26
>>> print(l_outer[0])
{'a': 1, 'b': 2}
>>> print(l_outer[1])
{'c': 3, 'z': 26}
If standard (non-proxy) list or dict objects are contained in a referent, modifications to those mutable values will not be propagated through the manager because the proxy has no way of knowing when the values contained within are modified. However, storing a value in a container proxy (which triggers a __setitem__ on the proxy object) does propagate through the manager and so to effectively modify such an item, one could re-assign the modified value to the container proxy: # create a list proxy and append a mutable object (a dictionary)
lproxy = manager.list()
lproxy.append({})
# now mutate the dictionary
d = lproxy[0]
d['a'] = 1
d['b'] = 2
# at this point, the changes to d are not yet synced, but by
# updating the dictionary, the proxy is notified of the change
lproxy[0] = d
This approach is perhaps less convenient than employing nested Proxy Objects for most use cases but also demonstrates a level of control over the synchronization. Note The proxy types in multiprocessing do nothing to support comparisons by value. So, for instance, we have: >>> manager.list([1,2,3]) == [1,2,3]
False
One should just use a copy of the referent instead when making comparisons.
class multiprocessing.managers.BaseProxy
Proxy objects are instances of subclasses of BaseProxy.
_callmethod(methodname[, args[, kwds]])
Call and return the result of a method of the proxy’s referent. If proxy is a proxy whose referent is obj then the expression proxy._callmethod(methodname, args, kwds)
will evaluate the expression getattr(obj, methodname)(*args, **kwds)
in the manager’s process. The returned value will be a copy of the result of the call or a proxy to a new shared object – see documentation for the method_to_typeid argument of BaseManager.register(). If an exception is raised by the call, then it is re-raised by _callmethod(). If some other exception is raised in the manager’s process then this is converted into a RemoteError exception and is raised by _callmethod(). Note in particular that an exception will be raised if methodname has not been exposed. An example of the usage of _callmethod(): >>> l = manager.list(range(10))
>>> l._callmethod('__len__')
10
>>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
[2, 3, 4, 5, 6]
>>> l._callmethod('__getitem__', (20,)) # equivalent to l[20]
Traceback (most recent call last):
...
IndexError: list index out of range
_getvalue()
Return a copy of the referent. If the referent is unpicklable then this will raise an exception.
__repr__()
Return a representation of the proxy object.
__str__()
Return the representation of the referent.
Cleanup A proxy object uses a weakref callback so that when it gets garbage collected it deregisters itself from the manager which owns its referent. A shared object gets deleted from the manager process when there are no longer any proxies referring to it. Process Pools One can create a pool of processes which will carry out tasks submitted to it with the Pool class.
class multiprocessing.pool.Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])
A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation. processes is the number of worker processes to use. If processes is None then the number returned by os.cpu_count() is used. If initializer is not None then each worker process will call initializer(*initargs) when it starts. maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool. context can be used to specify the context used for starting the worker processes. Usually a pool is created using the function multiprocessing.Pool() or the Pool() method of a context object. In both cases context is set appropriately. Note that the methods of the pool object should only be called by the process which created the pool.
Warning multiprocessing.pool objects have internal resources that need to be properly managed (like any other resource) by using the pool as a context manager or by calling close() and terminate() manually. Failure to do this can lead to the process hanging on finalization. Note that it is not correct to rely on the garbage collector to destroy the pool as CPython does not assure that the finalizer of the pool will be called (see object.__del__() for more information).
New in version 3.2: maxtasksperchild
New in version 3.4: context
Note Worker processes within a Pool typically live for the complete duration of the Pool’s work queue. A frequent pattern found in other systems (such as Apache, mod_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before exiting, being cleaned up and a new process spawned to replace the old one. The maxtasksperchild argument to the Pool exposes this ability to the end user.
apply(func[, args[, kwds]])
Call func with arguments args and keyword arguments kwds. It blocks until the result is ready. Given this blocks, apply_async() is better suited for performing work in parallel. Additionally, func is only executed in one of the workers of the pool.
apply_async(func[, args[, kwds[, callback[, error_callback]]]])
A variant of the apply() method which returns an AsyncResult object. If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it, that is unless the call failed, in which case the error_callback is applied instead. If error_callback is specified then it should be a callable which accepts a single argument. If the target function fails, then the error_callback is called with the exception instance. Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.
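A hedged sketch of using callback and error_callback (the function names and values are invented for illustration):
from multiprocessing import Pool

def square(x):
    return x * x

def on_result(value):
    print('result:', value)      # runs in the result-handling thread; keep it quick

def on_error(exc):
    print('failed:', exc)

if __name__ == '__main__':
    with Pool(2) as pool:
        r = pool.apply_async(square, (6,), callback=on_result,
                             error_callback=on_error)
        r.wait()                 # callbacks should return quickly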
map(func, iterable[, chunksize])
A parallel equivalent of the map() built-in function (it supports only one iterable argument though, for multiple iterables see starmap()). It blocks until the result is ready. This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. Note that it may cause high memory usage for very long iterables. Consider using imap() or imap_unordered() with explicit chunksize option for better efficiency.
map_async(func, iterable[, chunksize[, callback[, error_callback]]])
A variant of the map() method which returns an AsyncResult object. If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it, that is unless the call failed, in which case the error_callback is applied instead. If error_callback is specified then it should be a callable which accepts a single argument. If the target function fails, then the error_callback is called with the exception instance. Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.
imap(func, iterable[, chunksize])
A lazier version of map(). The chunksize argument is the same as the one used by the map() method. For very long iterables using a large value for chunksize can make the job complete much faster than using the default value of 1. Also if chunksize is 1 then the next() method of the iterator returned by the imap() method has an optional timeout parameter: next(timeout) will raise multiprocessing.TimeoutError if the result cannot be returned within timeout seconds.
imap_unordered(func, iterable[, chunksize])
The same as imap() except that the ordering of the results from the returned iterator should be considered arbitrary. (Only when there is only one worker process is the order guaranteed to be “correct”.)
starmap(func, iterable[, chunksize])
Like map() except that the elements of the iterable are expected to be iterables that are unpacked as arguments. Hence an iterable of [(1,2), (3, 4)] results in [func(1,2), func(3,4)]. New in version 3.3.
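A tiny illustration of starmap() (the pool size and data are arbitrary):
from multiprocessing import Pool

def add(a, b):
    return a + b

if __name__ == '__main__':
    with Pool(2) as pool:
        print(pool.starmap(add, [(1, 2), (3, 4)]))   # [3, 7]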
starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
A combination of starmap() and map_async() that iterates over iterable of iterables and calls func with the iterables unpacked. Returns a result object. New in version 3.3.
close()
Prevents any more tasks from being submitted to the pool. Once all the tasks have been completed the worker processes will exit.
terminate()
Stops the worker processes immediately without completing outstanding work. When the pool object is garbage collected terminate() will be called immediately.
join()
Wait for the worker processes to exit. One must call close() or terminate() before using join().
New in version 3.3: Pool objects now support the context management protocol – see Context Manager Types. __enter__() returns the pool object, and __exit__() calls terminate().
class multiprocessing.pool.AsyncResult
The class of the result returned by Pool.apply_async() and Pool.map_async().
get([timeout])
Return the result when it arrives. If timeout is not None and the result does not arrive within timeout seconds then multiprocessing.TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get().
wait([timeout])
Wait until the result is available or until timeout seconds pass.
ready()
Return whether the call has completed.
successful()
Return whether the call completed without raising an exception. Will raise ValueError if the result is not ready. Changed in version 3.7: If the result is not ready, ValueError is raised instead of AssertionError.
The following example demonstrates the use of a pool: from multiprocessing import Pool
import time
def f(x):
return x*x
if __name__ == '__main__':
with Pool(processes=4) as pool: # start 4 worker processes
result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
print(result.get(timeout=1)) # prints "100" unless your computer is *very* slow
print(pool.map(f, range(10))) # prints "[0, 1, 4,..., 81]"
it = pool.imap(f, range(10))
print(next(it)) # prints "0"
print(next(it)) # prints "1"
print(it.next(timeout=1)) # prints "4" unless your computer is *very* slow
result = pool.apply_async(time.sleep, (10,))
print(result.get(timeout=1)) # raises multiprocessing.TimeoutError
Listeners and Clients Usually message passing between processes is done using queues or by using Connection objects returned by Pipe(). However, the multiprocessing.connection module allows some extra flexibility. It basically gives a high level message oriented API for dealing with sockets or Windows named pipes. It also has support for digest authentication using the hmac module, and for polling multiple connections at the same time.
multiprocessing.connection.deliver_challenge(connection, authkey)
Send a randomly generated message to the other end of the connection and wait for a reply. If the reply matches the digest of the message using authkey as the key then a welcome message is sent to the other end of the connection. Otherwise AuthenticationError is raised.
multiprocessing.connection.answer_challenge(connection, authkey)
Receive a message, calculate the digest of the message using authkey as the key, and then send the digest back. If a welcome message is not received, then AuthenticationError is raised.
multiprocessing.connection.Client(address[, family[, authkey]])
Attempt to set up a connection to the listener which is using address address, returning a Connection. The type of the connection is determined by family argument, but this can generally be omitted since it can usually be inferred from the format of address. (See Address Formats) If authkey is given and not None, it should be a byte string and will be used as the secret key for an HMAC-based authentication challenge. No authentication is done if authkey is None. AuthenticationError is raised if authentication fails. See Authentication keys.
class multiprocessing.connection.Listener([address[, family[, backlog[, authkey]]]])
A wrapper for a bound socket or Windows named pipe which is ‘listening’ for connections. address is the address to be used by the bound socket or named pipe of the listener object. Note If an address of ‘0.0.0.0’ is used, the address will not be a connectable end point on Windows. If you require a connectable end-point, you should use ‘127.0.0.1’. family is the type of socket (or named pipe) to use. This can be one of the strings 'AF_INET' (for a TCP socket), 'AF_UNIX' (for a Unix domain socket) or 'AF_PIPE' (for a Windows named pipe). Of these only the first is guaranteed to be available. If family is None then the family is inferred from the format of address. If address is also None then a default is chosen. This default is the family which is assumed to be the fastest available. See Address Formats. Note that if family is 'AF_UNIX' and address is None then the socket will be created in a private temporary directory created using tempfile.mkstemp(). If the listener object uses a socket then backlog (1 by default) is passed to the listen() method of the socket once it has been bound. If authkey is given and not None, it should be a byte string and will be used as the secret key for an HMAC-based authentication challenge. No authentication is done if authkey is None. AuthenticationError is raised if authentication fails. See Authentication keys.
accept()
Accept a connection on the bound socket or named pipe of the listener object and return a Connection object. If authentication is attempted and fails, then AuthenticationError is raised.
close()
Close the bound socket or named pipe of the listener object. This is called automatically when the listener is garbage collected. However it is advisable to call it explicitly.
Listener objects have the following read-only properties:
address
The address which is being used by the Listener object.
last_accepted
The address from which the last accepted connection came. If this is unavailable then it is None.
New in version 3.3: Listener objects now support the context management protocol – see Context Manager Types. __enter__() returns the listener object, and __exit__() calls close().
multiprocessing.connection.wait(object_list, timeout=None)
Wait till an object in object_list is ready. Returns the list of those objects in object_list which are ready. If timeout is a float then the call blocks for at most that many seconds. If timeout is None then it will block for an unlimited period. A negative timeout is equivalent to a zero timeout. For both Unix and Windows, an object can appear in object_list if it is a readable Connection object; a connected and readable socket.socket object; or the sentinel attribute of a Process object. A connection or socket object is ready when there is data available to be read from it, or the other end has been closed. Unix: wait(object_list, timeout) is almost equivalent to select.select(object_list, [], [], timeout). The difference is that, if select.select() is interrupted by a signal, it can raise OSError with an error number of EINTR, whereas wait() will not. Windows: An item in object_list must either be an integer handle which is waitable (according to the definition used by the documentation of the Win32 function WaitForMultipleObjects()) or it can be an object with a fileno() method which returns a socket handle or pipe handle. (Note that pipe handles and socket handles are not waitable handles.) New in version 3.3.
Examples The following server code creates a listener which uses 'secret password' as an authentication key. It then waits for a connection and sends some data to the client: from multiprocessing.connection import Listener
from array import array
address = ('localhost', 6000) # family is deduced to be 'AF_INET'
with Listener(address, authkey=b'secret password') as listener:
with listener.accept() as conn:
print('connection accepted from', listener.last_accepted)
conn.send([2.25, None, 'junk', float])
conn.send_bytes(b'hello')
conn.send_bytes(array('i', [42, 1729]))
The following code connects to the server and receives some data from the server: from multiprocessing.connection import Client
from array import array
address = ('localhost', 6000)
with Client(address, authkey=b'secret password') as conn:
print(conn.recv()) # => [2.25, None, 'junk', float]
print(conn.recv_bytes()) # => 'hello'
arr = array('i', [0, 0, 0, 0, 0])
print(conn.recv_bytes_into(arr)) # => 8
print(arr) # => array('i', [42, 1729, 0, 0, 0])
The following code uses wait() to wait for messages from multiple processes at once: import time, random
from multiprocessing import Process, Pipe, current_process
from multiprocessing.connection import wait
def foo(w):
for i in range(10):
w.send((i, current_process().name))
w.close()
if __name__ == '__main__':
readers = []
for i in range(4):
r, w = Pipe(duplex=False)
readers.append(r)
p = Process(target=foo, args=(w,))
p.start()
# We close the writable end of the pipe now to be sure that
# p is the only process which owns a handle for it. This
# ensures that when p closes its handle for the writable end,
# wait() will promptly report the readable end as being ready.
w.close()
while readers:
for r in wait(readers):
try:
msg = r.recv()
except EOFError:
readers.remove(r)
else:
print(msg)
Address Formats
An 'AF_INET' address is a tuple of the form (hostname, port) where hostname is a string and port is an integer. An 'AF_UNIX' address is a string representing a filename on the filesystem. An 'AF_PIPE' address is a string of the form r'\\.\pipe\{PipeName}'. To use Client() to connect to a named pipe on a remote computer called ServerName one should use an address of the form r'\\ServerName\pipe\{PipeName}' instead. Note that any string beginning with two backslashes is assumed by default to be an 'AF_PIPE' address rather than an 'AF_UNIX' address.
Authentication keys
When one uses Connection.recv, the data received is automatically unpickled. Unfortunately unpickling data from an untrusted source is a security risk. Therefore Listener and Client() use the hmac module to provide digest authentication. An authentication key is a byte string which can be thought of as a password: once a connection is established both ends will demand proof that the other knows the authentication key. (Demonstrating that both ends are using the same key does not involve sending the key over the connection.) If authentication is requested but no authentication key is specified then the return value of current_process().authkey is used (see Process). This value will be automatically inherited by any Process object that the current process creates. This means that (by default) all processes of a multi-process program will share a single authentication key which can be used when setting up connections between themselves. Suitable authentication keys can also be generated by using os.urandom().
Logging
Some support for logging is available. Note, however, that the logging package does not use process shared locks so it is possible (depending on the handler type) for messages from different processes to get mixed up.
multiprocessing.get_logger()
Returns the logger used by multiprocessing. If necessary, a new one will be created. When first created the logger has level logging.NOTSET and no default handler. Messages sent to this logger will not by default propagate to the root logger. Note that on Windows child processes will only inherit the level of the parent process’s logger – any other customization of the logger will not be inherited.
multiprocessing.log_to_stderr()
This function performs a call to get_logger() but in addition to returning the logger created by get_logger, it adds a handler which sends output to sys.stderr using format '[%(levelname)s/%(processName)s] %(message)s'.
Below is an example session with logging turned on: >>> import multiprocessing, logging
>>> logger = multiprocessing.log_to_stderr()
>>> logger.setLevel(logging.INFO)
>>> logger.warning('doomed')
[WARNING/MainProcess] doomed
>>> m = multiprocessing.Manager()
[INFO/SyncManager-...] child process calling self.run()
[INFO/SyncManager-...] created temp directory /.../pymp-...
[INFO/SyncManager-...] manager serving at '/.../listener-...'
>>> del m
[INFO/MainProcess] sending shutdown message to manager
[INFO/SyncManager-...] manager exiting with exitcode 0
For a full table of logging levels, see the logging module. The multiprocessing.dummy module multiprocessing.dummy replicates the API of multiprocessing but is no more than a wrapper around the threading module. In particular, the Pool function provided by multiprocessing.dummy returns an instance of ThreadPool, which is a subclass of Pool that supports all the same method calls but uses a pool of worker threads rather than worker processes.
class multiprocessing.pool.ThreadPool([processes[, initializer[, initargs]]])
A thread pool object which controls a pool of worker threads to which jobs can be submitted. ThreadPool instances are fully interface compatible with Pool instances, and their resources must also be properly managed, either by using the pool as a context manager or by calling close() and terminate() manually. processes is the number of worker threads to use. If processes is None then the number returned by os.cpu_count() is used. If initializer is not None then each worker process will call initializer(*initargs) when it starts. Unlike Pool, maxtasksperchild and context cannot be provided. Note A ThreadPool shares the same interface as Pool, which is designed around a pool of processes and predates the introduction of the concurrent.futures module. As such, it inherits some operations that don’t make sense for a pool backed by threads, and it has its own type for representing the status of asynchronous jobs, AsyncResult, that is not understood by any other libraries. Users should generally prefer to use concurrent.futures.ThreadPoolExecutor, which has a simpler interface that was designed around threads from the start, and which returns concurrent.futures.Future instances that are compatible with many other libraries, including asyncio.
Programming guidelines
There are certain guidelines and idioms which should be adhered to when using multiprocessing.
All start methods
The following applies to all start methods.
Avoid shared state
As far as possible one should try to avoid shifting large amounts of data between processes. It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives.
Picklability
Ensure that the arguments to the methods of proxies are picklable.
Thread safety of proxies
Do not use a proxy object from more than one thread unless you protect it with a lock. (There is never a problem with different processes using the same proxy.)
Joining zombie processes
On Unix when a process finishes but has not been joined it becomes a zombie. There should never be very many because each time a new process starts (or active_children() is called) all completed processes which have not yet been joined will be joined. Also calling a finished process’s Process.is_alive will join the process. Even so it is probably good practice to explicitly join all the processes that you start.
Better to inherit than pickle/unpickle
When using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
Avoid terminating processes
Using the Process.terminate method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes. Therefore it is probably best to only consider using Process.terminate on processes which never use any shared resources.
Joining processes that use queues
Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the “feeder” thread to the underlying pipe. (The child process can call the Queue.cancel_join_thread method of the queue to avoid this behaviour.) This means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate. Remember also that non-daemonic processes will be joined automatically. An example which will deadlock is the following:
from multiprocessing import Process, Queue
def f(q):
q.put('X' * 1000000)
if __name__ == '__main__':
queue = Queue()
p = Process(target=f, args=(queue,))
p.start()
p.join() # this deadlocks
obj = queue.get()
A fix here would be to swap the last two lines (or simply remove the p.join() line).
Explicitly pass resources to child processes
On Unix using the fork start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process. Apart from making the code (potentially) compatible with Windows and the other start methods this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process. So for instance
from multiprocessing import Process, Lock
def f():
... do something using "lock" ...
if __name__ == '__main__':
lock = Lock()
for i in range(10):
Process(target=f).start()
should be rewritten as from multiprocessing import Process, Lock
def f(l):
... do something using "l" ...
if __name__ == '__main__':
lock = Lock()
for i in range(10):
Process(target=f, args=(lock,)).start()
Beware of replacing sys.stdin with a “file like object”
multiprocessing originally unconditionally called:
os.close(sys.stdin.fileno())
in the multiprocessing.Process._bootstrap() method — this resulted in issues with processes-in-processes. This has been changed to: sys.stdin.close()
sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)
Which solves the fundamental issue of processes colliding with each other resulting in a bad file descriptor error, but introduces a potential danger to applications which replace sys.stdin() with a “file-like object” with output buffering. This danger is that if multiple processes call close() on this file-like object, it could result in the same data being flushed to the object multiple times, resulting in corruption. If you write a file-like object and implement your own caching, you can make it fork-safe by storing the pid whenever you append to the cache, and discarding the cache when the pid changes. For example: @property
def cache(self):
pid = os.getpid()
if pid != self._pid:
self._pid = pid
self._cache = []
return self._cache
For more information, see bpo-5155, bpo-5313 and bpo-5331.
The spawn and forkserver start methods
There are a few extra restrictions which don’t apply to the fork start method.
More picklability
Ensure that all arguments to Process.__init__() are picklable. Also, if you subclass Process then make sure that instances will be picklable when the Process.start method is called.
Global variables
Bear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that Process.start was called. However, global variables which are just module level constants cause no problems.
Safe importing of main module
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process). For example, using the spawn or forkserver start method running the following module would fail with a RuntimeError:
from multiprocessing import Process
def foo():
print('hello')
p = Process(target=foo)
p.start()
Instead one should protect the “entry point” of the program by using if __name__ == '__main__': as follows:
from multiprocessing import Process, freeze_support, set_start_method
def foo():
print('hello')
if __name__ == '__main__':
freeze_support()
set_start_method('spawn')
p = Process(target=foo)
p.start()
(The freeze_support() line can be omitted if the program will be run normally instead of frozen.) This allows the newly spawned Python interpreter to safely import the module and then run the module’s foo() function. Similar restrictions apply if a pool or manager is created in the main module. Examples Demonstration of how to create and use customized managers and proxies: from multiprocessing import freeze_support
from multiprocessing.managers import BaseManager, BaseProxy
import operator
##
class Foo:
def f(self):
print('you called Foo.f()')
def g(self):
print('you called Foo.g()')
def _h(self):
print('you called Foo._h()')
# A simple generator function
def baz():
for i in range(10):
yield i*i
# Proxy type for generator objects
class GeneratorProxy(BaseProxy):
_exposed_ = ['__next__']
def __iter__(self):
return self
def __next__(self):
return self._callmethod('__next__')
# Function to return the operator module
def get_operator_module():
return operator
##
class MyManager(BaseManager):
pass
# register the Foo class; make `f()` and `g()` accessible via proxy
MyManager.register('Foo1', Foo)
# register the Foo class; make `g()` and `_h()` accessible via proxy
MyManager.register('Foo2', Foo, exposed=('g', '_h'))
# register the generator function baz; use `GeneratorProxy` to make proxies
MyManager.register('baz', baz, proxytype=GeneratorProxy)
# register get_operator_module(); make public functions accessible via proxy
MyManager.register('operator', get_operator_module)
##
def test():
manager = MyManager()
manager.start()
print('-' * 20)
f1 = manager.Foo1()
f1.f()
f1.g()
assert not hasattr(f1, '_h')
assert sorted(f1._exposed_) == sorted(['f', 'g'])
print('-' * 20)
f2 = manager.Foo2()
f2.g()
f2._h()
assert not hasattr(f2, 'f')
assert sorted(f2._exposed_) == sorted(['g', '_h'])
print('-' * 20)
it = manager.baz()
for i in it:
print('<%d>' % i, end=' ')
print()
print('-' * 20)
op = manager.operator()
print('op.add(23, 45) =', op.add(23, 45))
print('op.pow(2, 94) =', op.pow(2, 94))
print('op._exposed_ =', op._exposed_)
##
if __name__ == '__main__':
freeze_support()
test()
Using Pool: import multiprocessing
import time
import random
import sys
#
# Functions used by test code
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % (
multiprocessing.current_process().name,
func.__name__, args, result
)
def calculatestar(args):
return calculate(*args)
def mul(a, b):
time.sleep(0.5 * random.random())
return a * b
def plus(a, b):
time.sleep(0.5 * random.random())
return a + b
def f(x):
return 1.0 / (x - 5.0)
def pow3(x):
return x ** 3
def noop(x):
pass
#
# Test code
#
def test():
PROCESSES = 4
print('Creating pool with %d processes\n' % PROCESSES)
with multiprocessing.Pool(PROCESSES) as pool:
#
# Tests
#
TASKS = [(mul, (i, 7)) for i in range(10)] + \
[(plus, (i, 8)) for i in range(10)]
results = [pool.apply_async(calculate, t) for t in TASKS]
imap_it = pool.imap(calculatestar, TASKS)
imap_unordered_it = pool.imap_unordered(calculatestar, TASKS)
print('Ordered results using pool.apply_async():')
for r in results:
print('\t', r.get())
print()
print('Ordered results using pool.imap():')
for x in imap_it:
print('\t', x)
print()
print('Unordered results using pool.imap_unordered():')
for x in imap_unordered_it:
print('\t', x)
print()
print('Ordered results using pool.map() --- will block till complete:')
for x in pool.map(calculatestar, TASKS):
print('\t', x)
print()
#
# Test error handling
#
print('Testing error handling:')
try:
print(pool.apply(f, (5,)))
except ZeroDivisionError:
print('\tGot ZeroDivisionError as expected from pool.apply()')
else:
raise AssertionError('expected ZeroDivisionError')
try:
print(pool.map(f, list(range(10))))
except ZeroDivisionError:
print('\tGot ZeroDivisionError as expected from pool.map()')
else:
raise AssertionError('expected ZeroDivisionError')
try:
print(list(pool.imap(f, list(range(10)))))
except ZeroDivisionError:
print('\tGot ZeroDivisionError as expected from list(pool.imap())')
else:
raise AssertionError('expected ZeroDivisionError')
it = pool.imap(f, list(range(10)))
for i in range(10):
try:
x = next(it)
except ZeroDivisionError:
if i == 5:
pass
except StopIteration:
break
else:
if i == 5:
raise AssertionError('expected ZeroDivisionError')
assert i == 9
print('\tGot ZeroDivisionError as expected from IMapIterator.next()')
print()
#
# Testing timeouts
#
print('Testing ApplyResult.get() with timeout:', end=' ')
res = pool.apply_async(calculate, TASKS[0])
while 1:
sys.stdout.flush()
try:
sys.stdout.write('\n\t%s' % res.get(0.02))
break
except multiprocessing.TimeoutError:
sys.stdout.write('.')
print()
print()
print('Testing IMapIterator.next() with timeout:', end=' ')
it = pool.imap(calculatestar, TASKS)
while 1:
sys.stdout.flush()
try:
sys.stdout.write('\n\t%s' % it.next(0.02))
except StopIteration:
break
except multiprocessing.TimeoutError:
sys.stdout.write('.')
print()
print()
if __name__ == '__main__':
multiprocessing.freeze_support()
test()
An example showing how to use queues to feed tasks to a collection of worker processes and collect the results: import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
for func, args in iter(input.get, 'STOP'):
result = calculate(func, args)
output.put(result)
#
# Function used to calculate result
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % \
(current_process().name, func.__name__, args, result)
#
# Functions referenced by tasks
#
def mul(a, b):
time.sleep(0.5*random.random())
return a * b
def plus(a, b):
time.sleep(0.5*random.random())
return a + b
#
#
#
def test():
NUMBER_OF_PROCESSES = 4
TASKS1 = [(mul, (i, 7)) for i in range(20)]
TASKS2 = [(plus, (i, 8)) for i in range(10)]
# Create queues
task_queue = Queue()
done_queue = Queue()
# Submit tasks
for task in TASKS1:
task_queue.put(task)
# Start worker processes
for i in range(NUMBER_OF_PROCESSES):
Process(target=worker, args=(task_queue, done_queue)).start()
# Get and print results
print('Unordered results:')
for i in range(len(TASKS1)):
print('\t', done_queue.get())
# Add more tasks using `put()`
for task in TASKS2:
task_queue.put(task)
# Get and print some more results
for i in range(len(TASKS2)):
print('\t', done_queue.get())
# Tell child processes to stop
for i in range(NUMBER_OF_PROCESSES):
task_queue.put('STOP')
if __name__ == '__main__':
freeze_support()
test() | |
doc_25817 |
Autoscale the scalar limits on the norm instance using the current array | |
doc_25818 |
Return the alpha value used for blending - not supported on all backends. | |
doc_25819 |
Loads the Torch serialized object at the given URL. If downloaded file is a zip file, it will be automatically decompressed. If the object is already present in model_dir, it’s deserialized and returned. The default value of model_dir is <hub_dir>/checkpoints where hub_dir is the directory returned by get_dir(). Parameters
url (string) – URL of the object to download
model_dir (string, optional) – directory in which to save the object
map_location (optional) – a function or a dict specifying how to remap storage locations (see torch.load)
progress (bool, optional) – whether or not to display a progress bar to stderr. Default: True
check_hash (bool, optional) – If True, the filename part of the URL should follow the naming convention filename-<sha256>.ext where <sha256> is the first eight or more digits of the SHA256 hash of the contents of the file. The hash is used to ensure unique names and to verify the contents of the file. Default: False
file_name (string, optional) – name for the downloaded file. Filename from url will be used if not set. Example >>> state_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth') | |
doc_25820 |
Bases: torch.distributions.exp_family.ExponentialFamily Creates a Bernoulli distribution parameterized by probs or logits (but not both). Samples are binary (0 or 1). They take the value 1 with probability p and 0 with probability 1 - p. Example: >>> m = Bernoulli(torch.tensor([0.3]))
>>> m.sample() # 30% chance 1; 70% chance 0
tensor([ 0.])
Parameters
probs (Number, Tensor) – the probability of sampling 1
logits (Number, Tensor) – the log-odds of sampling 1
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}
entropy() [source]
enumerate_support(expand=True) [source]
expand(batch_shape, _instance=None) [source]
has_enumerate_support = True
log_prob(value) [source]
logits [source]
property mean
property param_shape
probs [source]
sample(sample_shape=torch.Size([])) [source]
support = Boolean()
property variance | |
doc_25821 | A tuple of five integers representing version information about the OpenSSL library: >>> ssl.OPENSSL_VERSION_INFO
(1, 0, 2, 11, 15)
New in version 3.2. | |
doc_25822 | By default, Django’s admin uses a select-box interface (<select>) for fields that are ForeignKey. Sometimes you don’t want to incur the overhead of having to select all the related instances to display in the drop-down. raw_id_fields is a list of fields you would like to change into an Input widget for either a ForeignKey or ManyToManyField: class ArticleAdmin(admin.ModelAdmin):
raw_id_fields = ("newspaper",)
The raw_id_fields Input widget should contain a primary key if the field is a ForeignKey or a comma separated list of values if the field is a ManyToManyField. The raw_id_fields widget shows a magnifying glass button next to the field which allows users to search for and select a value: | |
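A hedged sketch extending the example above with a hypothetical ManyToManyField named reporters (this field name is invented for illustration):
from django.contrib import admin

class ArticleAdmin(admin.ModelAdmin):
    raw_id_fields = ("newspaper", "reporters")  # ForeignKey and ManyToManyField both use the Input widget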
doc_25823 | tf.debugging.assert_all_finite(
x, message, name=None
)
Args
x Tensor to check.
message Message to log on failure.
name A name for this operation (optional).
Returns Same tensor as x. | |
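A short sketch of the failure case in eager execution (the tensor values are illustrative):
import tensorflow as tf

x = tf.constant([1.0, 2.0, float('nan')])
try:
    tf.debugging.assert_all_finite(x, message='x contains NaN or Inf')
except tf.errors.InvalidArgumentError as e:
    print(e.message)   # the error message includes the text passed above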
doc_25824 | No-op in the base class, this method takes file object fp, and reads the data from the file, initializing its message catalog. If you have an unsupported message catalog file format, you should override this method to parse your format. | |
doc_25825 |
Return the standard deviation of the array elements along the given axis. Refer to numpy.std for full documentation. See also numpy.std
Notes This is the same as ndarray.std, except that where an ndarray would be returned, a matrix object is returned instead. Examples >>> x = np.matrix(np.arange(12).reshape((3, 4)))
>>> x
matrix([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.std()
3.4520525295346629 # may vary
>>> x.std(0)
matrix([[ 3.26598632, 3.26598632, 3.26598632, 3.26598632]]) # may vary
>>> x.std(1)
matrix([[ 1.11803399],
[ 1.11803399],
[ 1.11803399]]) | |
doc_25826 | See Migration guide for more details. tf.compat.v1.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug
tf.raw_ops.LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug(
parameters, gradient_accumulators, num_shards, shard_id, table_id=-1,
table_name='', config='', name=None
)
An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.
Args
parameters A Tensor of type float32. Value of parameters used in the stochastic gradient descent optimization algorithm.
gradient_accumulators A Tensor of type float32. Value of gradient_accumulators used in the Adadelta optimization algorithm.
num_shards An int.
shard_id An int.
table_id An optional int. Defaults to -1.
table_name An optional string. Defaults to "".
config An optional string. Defaults to "".
name A name for the operation (optional).
Returns The created Operation. | |
doc_25827 | See Migration guide for more details. tf.compat.v1.app.flags.CsvListSerializer
tf.compat.v1.flags.CsvListSerializer(
list_sep
)
Methods serialize
serialize(
value
)
Serializes a list as a CSV string or unicode. | |
doc_25828 |
True if this transform is separable in the x- and y- dimensions. | |
doc_25829 |
Like rfind, but raises ValueError when the substring sub is not found. See also char.rindex | |
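A small sketch contrasting rindex with rfind (the strings are arbitrary):
import numpy as np

a = np.array(['hello', 'world'])
print(np.char.rindex(a, 'l'))    # [3 3] -- highest index of 'l' in each string
try:
    np.char.rindex(a, 'z')       # unlike rfind, a missing substring raises
except ValueError as e:
    print(e)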
doc_25830 | See Migration guide for more details. tf.compat.v1.distribute.in_cross_replica_context
tf.distribute.in_cross_replica_context()
See tf.distribute.get_replica_context for details. assert not tf.distribute.in_cross_replica_context()
with strategy.scope():
assert tf.distribute.in_cross_replica_context()
def f():
assert not tf.distribute.in_cross_replica_context()
strategy.run(f)
Returns True if in a cross-replica context (get_replica_context() returns None), or False if in a replica context (get_replica_context() returns non-None). | |
doc_25831 | Get information on the specified clock as a namespace object. Supported clock names and the corresponding functions to read their value are:
'monotonic': time.monotonic()
'perf_counter': time.perf_counter()
'process_time': time.process_time()
'thread_time': time.thread_time()
'time': time.time()
The result has the following attributes:
adjustable: True if the clock can be changed automatically (e.g. by a NTP daemon) or manually by the system administrator, False otherwise
implementation: The name of the underlying C function used to get the clock value. Refer to Clock ID Constants for possible values.
monotonic: True if the clock cannot go backward, False otherwise
resolution: The resolution of the clock in seconds (float) New in version 3.3. | |
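A short illustration (the values printed are platform-dependent):
import time

info = time.get_clock_info('monotonic')
print(info.implementation, info.monotonic, info.adjustable, info.resolution)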
doc_25832 |
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
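A hedged sketch of the nested <component>__<parameter> syntax; the pipeline steps and the value are invented for illustration:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([('scale', StandardScaler()), ('clf', SVC())])
pipe.set_params(clf__C=10.0)           # sets C on the nested SVC step
print(pipe.get_params()['clf__C'])     # 10.0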
doc_25833 | See Migration guide for more details. tf.compat.v1.raw_ops.FractionalAvgPoolGrad
tf.raw_ops.FractionalAvgPoolGrad(
orig_input_tensor_shape, out_backprop, row_pooling_sequence,
col_pooling_sequence, overlapping=False, name=None
)
Unlike FractionalMaxPoolGrad, we don't need to find arg_max for FractionalAvgPoolGrad, we just need to evenly back-propagate each element of out_backprop to those indices that form the same pooling cell. Therefore, we just need to know the shape of original input tensor, instead of the whole tensor.
Args
orig_input_tensor_shape A Tensor of type int64. Original input tensor shape for fractional_avg_pool
out_backprop A Tensor. Must be one of the following types: float32, float64, int32, int64. 4-D with shape [batch, height, width, channels]. Gradients w.r.t. the output of fractional_avg_pool.
row_pooling_sequence A Tensor of type int64. row pooling sequence, form pooling region with col_pooling_sequence.
col_pooling_sequence A Tensor of type int64. column pooling sequence, form pooling region with row_pooling sequence.
overlapping An optional bool. Defaults to False. When set to True, it means when pooling, the values at the boundary of adjacent pooling cells are used by both cells. For example: index 0 1 2 3 4 value 20 5 16 3 7 If the pooling sequence is [0, 2, 4], then 16, at index 2 will be used twice. The result would be [41/3, 26/3] for fractional avg pooling.
name A name for the operation (optional).
Returns A Tensor. Has the same type as out_backprop. | |
doc_25834 |
Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
covariance_correctedndarray of shape (n_features, n_features)
Corrected robust covariance estimate. References
RVD
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS | |
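A minimal sketch of applying the correction after fitting (assuming scikit-learn's MinCovDet, on which this method is defined; the data must be the same set used for the raw fit):
>>> import numpy as np
>>> from sklearn.covariance import MinCovDet
>>> X = np.random.RandomState(0).normal(size=(100, 3))
>>> mcd = MinCovDet(random_state=0).fit(X)
>>> mcd.correct_covariance(X).shape
(3, 3)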
doc_25835 | See Migration guide for more details. tf.compat.v1.raw_ops.Cumsum
tf.raw_ops.Cumsum(
x, axis, exclusive=False, reverse=False, name=None
)
By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output: tf.cumsum([a, b, c]) # => [a, a + b, a + b + c]
By setting the exclusive kwarg to True, an exclusive cumsum is performed instead: tf.cumsum([a, b, c], exclusive=True) # => [0, a, a + b]
By setting the reverse kwarg to True, the cumsum is performed in the opposite direction: tf.cumsum([a, b, c], reverse=True) # => [a + b + c, b + c, c]
This is more efficient than using separate tf.reverse ops. The reverse and exclusive kwargs can also be combined: tf.cumsum([a, b, c], exclusive=True, reverse=True) # => [b + c, c, 0]
Args
x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
axis A Tensor. Must be one of the following types: int32, int64. Defaults to 0. Must be in the range [-rank(x), rank(x)).
exclusive An optional bool. Defaults to False. If True, perform exclusive cumsum.
reverse An optional bool. Defaults to False. If True, perform the cumsum in the reverse direction.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | |
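A small numeric sketch of the inclusive, exclusive, and reverse behaviours described above, using the public tf.cumsum wrapper (assuming TensorFlow 2.x eager execution):
import tensorflow as tf

x = tf.constant([2, 4, 6, 8])
print(tf.cumsum(x).numpy())                  # [ 2  6 12 20]
print(tf.cumsum(x, exclusive=True).numpy())  # [ 0  2  6 12]
print(tf.cumsum(x, reverse=True).numpy())    # [20 18 14  8]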
doc_25836 |
Apply trees in the forest to X, return leaf indices. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
X_leavesndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in. | |
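A short sketch of the returned leaf indices (assuming a scikit-learn RandomForestClassifier; the exact index values depend on the fitted trees):
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import RandomForestClassifier
>>> X, y = load_iris(return_X_y=True)
>>> forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)
>>> forest.apply(X).shape
(150, 5)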
doc_25837 | This class stores the message data in a cookie (signed with a secret hash to prevent manipulation) to persist notifications across requests. Old messages are dropped if the cookie data size would exceed 2048 bytes. Changed in Django 3.2: Messages format was changed to the RFC 6265 compliant format. | |
doc_25838 | Default return value for get_redirect_field_name(). Defaults to "next". | |
doc_25839 | Return the number of weak references and proxies which refer to object. | |
doc_25840 | This function is called if the handle() method of a RequestHandlerClass instance raises an exception. The default action is to print the traceback to standard error and continue handling further requests. Changed in version 3.6: Now only called for exceptions derived from the Exception class. | |
doc_25841 |
Update this artist's properties from the dict props. Parameters
propsdict | |
doc_25842 |
Get whether the widget is active. | |
doc_25843 |
Fit the estimators. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights. Returns
selfobject | |
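The entry above does not name a specific ensemble; as a hedged sketch, assuming a scikit-learn VotingClassifier whose underlying estimators all accept sample weights:
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import VotingClassifier
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = load_iris(return_X_y=True)
>>> clf = VotingClassifier([('lr', LogisticRegression(max_iter=1000)),
...                         ('dt', DecisionTreeClassifier())])
>>> clf.fit(X, y, sample_weight=[1.0] * len(y)) is clf
True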
doc_25844 | Given a list of tuples or FrameSummary objects as returned by extract_tb() or extract_stack(), return a list of strings ready for printing. Each string in the resulting list corresponds to the item with the same index in the argument list. Each string ends in a newline; the strings may contain internal newlines as well, for those items whose source text line is not None. | |
doc_25845 | Run awaitable objects in the aws sequence concurrently. If any awaitable in aws is a coroutine, it is automatically scheduled as a Task. If all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in aws. If return_exceptions is False (default), the first raised exception is immediately propagated to the task that awaits on gather(). Other awaitables in the aws sequence won’t be cancelled and will continue to run. If return_exceptions is True, exceptions are treated the same as successful results, and aggregated in the result list. If gather() is cancelled, all submitted awaitables (that have not completed yet) are also cancelled. If any Task or Future from the aws sequence is cancelled, it is treated as if it raised CancelledError – the gather() call is not cancelled in this case. This is to prevent the cancellation of one submitted Task/Future to cause other Tasks/Futures to be cancelled. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Example: import asyncio
async def factorial(name, number):
    f = 1
    for i in range(2, number + 1):
        print(f"Task {name}: Compute factorial({i})...")
        await asyncio.sleep(1)
        f *= i
    print(f"Task {name}: factorial({number}) = {f}")

async def main():
    # Schedule three calls *concurrently*:
    await asyncio.gather(
        factorial("A", 2),
        factorial("B", 3),
        factorial("C", 4),
    )

asyncio.run(main())
# Expected output:
#
# Task A: Compute factorial(2)...
# Task B: Compute factorial(2)...
# Task C: Compute factorial(2)...
# Task A: factorial(2) = 2
# Task B: Compute factorial(3)...
# Task C: Compute factorial(3)...
# Task B: factorial(3) = 6
# Task C: Compute factorial(4)...
# Task C: factorial(4) = 24
Note If return_exceptions is False, cancelling gather() after it has been marked done won’t cancel any submitted awaitables. For instance, gather can be marked done after propagating an exception to the caller, therefore, calling gather.cancel() after catching an exception (raised by one of the awaitables) from gather won’t cancel any other awaitables. Changed in version 3.7: If the gather itself is cancelled, the cancellation is propagated regardless of return_exceptions. | |
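A short sketch of the return_exceptions behaviour described above: with return_exceptions=True a raised exception is collected into the result list instead of propagating.
import asyncio

async def ok():
    return "ok"

async def boom():
    raise ValueError("boom")

async def main():
    results = await asyncio.gather(ok(), boom(), return_exceptions=True)
    print(results)  # ['ok', ValueError('boom')]

asyncio.run(main())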
doc_25846 |
Finish any processing for writing the movie. | |
doc_25847 |
Apply only the non-affine part of this transformation. transform(values) is always equivalent to transform_affine(transform_non_affine(values)). In non-affine transformations, this is generally equivalent to transform(values). In affine transformations, this is always a no-op. Parameters
valuesarray
The input values as NumPy array of length input_dims or shape (N x input_dims). Returns
array
The output values as NumPy array of length output_dims or shape (N x output_dims), depending on the input.
doc_25848 | Set loop as a current event loop for the current OS thread. | |
doc_25849 | class ast.Sub
class ast.Mult
class ast.Div
class ast.FloorDiv
class ast.Mod
class ast.Pow
class ast.LShift
class ast.RShift
class ast.BitOr
class ast.BitXor
class ast.BitAnd
class ast.MatMult
Binary operator tokens. | |
doc_25850 |
Alter Index or MultiIndex name. Able to set new names without level. Defaults to returning new index. Length of names must match number of levels in MultiIndex. Parameters
name:label or list of labels
Name(s) to set.
inplace:bool, default False
Modifies the object directly, instead of creating a new Index or MultiIndex. Returns
Index or None
The same type as the caller or None if inplace=True. See also Index.set_names
Able to set new names partially and by level. Examples
>>> idx = pd.Index(['A', 'C', 'A', 'B'], name='score')
>>> idx.rename('grade')
Index(['A', 'C', 'A', 'B'], dtype='object', name='grade')
>>> idx = pd.MultiIndex.from_product([['python', 'cobra'],
... [2018, 2019]],
... names=['kind', 'year'])
>>> idx
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['kind', 'year'])
>>> idx.rename(['species', 'year'])
MultiIndex([('python', 2018),
('python', 2019),
( 'cobra', 2018),
( 'cobra', 2019)],
names=['species', 'year'])
>>> idx.rename('species')
Traceback (most recent call last):
TypeError: Must pass list-like as `names`. | |
doc_25851 |
Other Members
ABORTED 10
ALREADY_EXISTS 6
CANCELLED 1
DATA_LOSS 15
DEADLINE_EXCEEDED 4
FAILED_PRECONDITION 9
INTERNAL 13
INVALID_ARGUMENT 3
NOT_FOUND 5
OK 0
OUT_OF_RANGE 11
PERMISSION_DENIED 7
RESOURCE_EXHAUSTED 8
UNAUTHENTICATED 16
UNAVAILABLE 14
UNIMPLEMENTED 12
UNKNOWN 2 | |
doc_25852 |
Hyperbolic sine, element-wise. Equivalent to 1/2 * (np.exp(x) - np.exp(-x)) or -1j * np.sin(1j*x). Parameters
xarray_like
Input array.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized.
**kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray
The corresponding hyperbolic sine values. This is a scalar if x is a scalar. Notes If out is provided, the function writes the result into it, and returns a reference to out. (See Examples) References M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. New York, NY: Dover, 1972, pg. 83. Examples >>> np.sinh(0)
0.0
>>> np.sinh(np.pi*1j/2)
1j
>>> np.sinh(np.pi*1j) # (exact value is 0)
1.2246063538223773e-016j
>>> # Discrepancy due to vagaries of floating point arithmetic.
>>> # Example of providing the optional output parameter
>>> out1 = np.array([0], dtype='d')
>>> out2 = np.sinh([0.1], out1)
>>> out2 is out1
True
>>> # Example of ValueError due to provision of shape mis-matched `out`
>>> np.sinh(np.zeros((3,3)),np.zeros((2,2)))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: operands could not be broadcast together with shapes (3,3) (2,2) | |
doc_25853 | Boolean flag that is True if the field has a one-to-many relation, such as a GenericRelation or the reverse of a ForeignKey; False otherwise. | |
doc_25854 | Built-in Functions
abs() delattr() hash() memoryview() set()
all() dict() help() min() setattr()
any() dir() hex() next() slice()
ascii() divmod() id() object() sorted()
bin() enumerate() input() oct() staticmethod()
bool() eval() int() open() str()
breakpoint() exec() isinstance() ord() sum()
bytearray() filter() issubclass() pow() super()
bytes() float() iter() print() tuple()
callable() format() len() property() type()
chr() frozenset() list() range() vars()
classmethod() getattr() locals() repr() zip()
compile() globals() map() reversed() __import__()
complex() hasattr() max() round()
abs(x)
Return the absolute value of a number. The argument may be an integer, a floating point number, or an object implementing __abs__(). If the argument is a complex number, its magnitude is returned.
all(iterable)
Return True if all elements of the iterable are true (or if the iterable is empty). Equivalent to: def all(iterable):
    for element in iterable:
        if not element:
            return False
    return True
any(iterable)
Return True if any element of the iterable is true. If the iterable is empty, return False. Equivalent to: def any(iterable):
    for element in iterable:
        if element:
            return True
    return False
ascii(object)
As repr(), return a string containing a printable representation of an object, but escape the non-ASCII characters in the string returned by repr() using \x, \u or \U escapes. This generates a string similar to that returned by repr() in Python 2.
bin(x)
Convert an integer number to a binary string prefixed with “0b”. The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer. Some examples: >>> bin(3)
'0b11'
>>> bin(-10)
'-0b1010'
If prefix “0b” is desired or not, you can use either of the following ways. >>> format(14, '#b'), format(14, 'b')
('0b1110', '1110')
>>> f'{14:#b}', f'{14:b}'
('0b1110', '1110')
See also format() for more information.
class bool([x])
Return a Boolean value, i.e. one of True or False. x is converted using the standard truth testing procedure. If x is false or omitted, this returns False; otherwise it returns True. The bool class is a subclass of int (see Numeric Types — int, float, complex). It cannot be subclassed further. Its only instances are False and True (see Boolean Values). Changed in version 3.7: x is now a positional-only parameter.
breakpoint(*args, **kws)
This function drops you into the debugger at the call site. Specifically, it calls sys.breakpointhook(), passing args and kws straight through. By default, sys.breakpointhook() calls pdb.set_trace() expecting no arguments. In this case, it is purely a convenience function so you don’t have to explicitly import pdb or type as much code to enter the debugger. However, sys.breakpointhook() can be set to some other function and breakpoint() will automatically call that, allowing you to drop into the debugger of choice. Raises an auditing event builtins.breakpoint with argument breakpointhook. New in version 3.7.
class bytearray([source[, encoding[, errors]]])
Return a new array of bytes. The bytearray class is a mutable sequence of integers in the range 0 <= x < 256. It has most of the usual methods of mutable sequences, described in Mutable Sequence Types, as well as most methods that the bytes type has, see Bytes and Bytearray Operations. The optional source parameter can be used to initialize the array in a few different ways: If it is a string, you must also give the encoding (and optionally, errors) parameters; bytearray() then converts the string to bytes using str.encode(). If it is an integer, the array will have that size and will be initialized with null bytes. If it is an object conforming to the buffer interface, a read-only buffer of the object will be used to initialize the bytes array. If it is an iterable, it must be an iterable of integers in the range 0 <= x < 256, which are used as the initial contents of the array. Without an argument, an array of size 0 is created. See also Binary Sequence Types — bytes, bytearray, memoryview and Bytearray Objects.
class bytes([source[, encoding[, errors]]])
Return a new “bytes” object, which is an immutable sequence of integers in the range 0 <= x < 256. bytes is an immutable version of bytearray – it has the same non-mutating methods and the same indexing and slicing behavior. Accordingly, constructor arguments are interpreted as for bytearray(). Bytes objects can also be created with literals, see String and Bytes literals. See also Binary Sequence Types — bytes, bytearray, memoryview, Bytes Objects, and Bytes and Bytearray Operations.
callable(object)
Return True if the object argument appears callable, False if not. If this returns True, it is still possible that a call fails, but if it is False, calling object will never succeed. Note that classes are callable (calling a class returns a new instance); instances are callable if their class has a __call__() method. New in version 3.2: This function was first removed in Python 3.0 and then brought back in Python 3.2.
chr(i)
Return the string representing a character whose Unicode code point is the integer i. For example, chr(97) returns the string 'a', while chr(8364) returns the string '€'. This is the inverse of ord(). The valid range for the argument is from 0 through 1,114,111 (0x10FFFF in base 16). ValueError will be raised if i is outside that range.
@classmethod
Transform a method into a class method. A class method receives the class as implicit first argument, just like an instance method receives the instance. To declare a class method, use this idiom: class C:
    @classmethod
    def f(cls, arg1, arg2, ...): ...
The @classmethod form is a function decorator – see Function definitions for details. A class method can be called either on the class (such as C.f()) or on an instance (such as C().f()). The instance is ignored except for its class. If a class method is called for a derived class, the derived class object is passed as the implied first argument. Class methods are different than C++ or Java static methods. If you want those, see staticmethod() in this section. For more information on class methods, see The standard type hierarchy. Changed in version 3.9: Class methods can now wrap other descriptors such as property().
compile(source, filename, mode, flags=0, dont_inherit=False, optimize=-1)
Compile the source into a code or AST object. Code objects can be executed by exec() or eval(). source can either be a normal string, a byte string, or an AST object. Refer to the ast module documentation for information on how to work with AST objects. The filename argument should give the file from which the code was read; pass some recognizable value if it wasn’t read from a file ('<string>' is commonly used). The mode argument specifies what kind of code must be compiled; it can be 'exec' if source consists of a sequence of statements, 'eval' if it consists of a single expression, or 'single' if it consists of a single interactive statement (in the latter case, expression statements that evaluate to something other than None will be printed). The optional arguments flags and dont_inherit control which compiler options should be activated and which future features should be allowed. If neither is present (or both are zero) the code is compiled with the same flags that affect the code that is calling compile(). If the flags argument is given and dont_inherit is not (or is zero) then the compiler options and the future statements specified by the flags argument are used in addition to those that would be used anyway. If dont_inherit is a non-zero integer then the flags argument is it – the flags (future features and compiler options) in the surrounding code are ignored. Compiler options and future statements are specified by bits which can be bitwise ORed together to specify multiple options. The bitfield required to specify a given future feature can be found as the compiler_flag attribute on the _Feature instance in the __future__ module. Compiler flags can be found in ast module, with PyCF_ prefix. The argument optimize specifies the optimization level of the compiler; the default value of -1 selects the optimization level of the interpreter as given by -O options. Explicit levels are 0 (no optimization; __debug__ is true), 1 (asserts are removed, __debug__ is false) or 2 (docstrings are removed too). This function raises SyntaxError if the compiled source is invalid, and ValueError if the source contains null bytes. If you want to parse Python code into its AST representation, see ast.parse().
Raises an auditing event compile with arguments source and filename. This event may also be raised by implicit compilation. Note When compiling a string with multi-line code in 'single' or 'eval' mode, input must be terminated by at least one newline character. This is to facilitate detection of incomplete and complete statements in the code module. Warning It is possible to crash the Python interpreter with a sufficiently large/complex string when compiling to an AST object due to stack depth limitations in Python’s AST compiler. Changed in version 3.2: Allowed use of Windows and Mac newlines. Also input in 'exec' mode does not have to end in a newline anymore. Added the optimize parameter. Changed in version 3.5: Previously, TypeError was raised when null bytes were encountered in source. New in version 3.8: ast.PyCF_ALLOW_TOP_LEVEL_AWAIT can now be passed in flags to enable support for top-level await, async for, and async with.
class complex([real[, imag]])
Return a complex number with the value real + imag*1j or convert a string or number to a complex number. If the first parameter is a string, it will be interpreted as a complex number and the function must be called without a second parameter. The second parameter can never be a string. Each argument may be any numeric type (including complex). If imag is omitted, it defaults to zero and the constructor serves as a numeric conversion like int and float. If both arguments are omitted, returns 0j. For a general Python object x, complex(x) delegates to x.__complex__(). If __complex__() is not defined then it falls back to __float__(). If __float__() is not defined then it falls back to __index__(). Note When converting from a string, the string must not contain whitespace around the central + or - operator. For example, complex('1+2j') is fine, but complex('1 + 2j') raises ValueError. The complex type is described in Numeric Types — int, float, complex. Changed in version 3.6: Grouping digits with underscores as in code literals is allowed. Changed in version 3.8: Falls back to __index__() if __complex__() and __float__() are not defined.
delattr(object, name)
This is a relative of setattr(). The arguments are an object and a string. The string must be the name of one of the object’s attributes. The function deletes the named attribute, provided the object allows it. For example, delattr(x, 'foobar') is equivalent to del x.foobar.
class dict(**kwarg)
class dict(mapping, **kwarg)
class dict(iterable, **kwarg)
Create a new dictionary. The dict object is the dictionary class. See dict and Mapping Types — dict for documentation about this class. For other containers see the built-in list, set, and tuple classes, as well as the collections module.
dir([object])
Without arguments, return the list of names in the current local scope. With an argument, attempt to return a list of valid attributes for that object. If the object has a method named __dir__(), this method will be called and must return the list of attributes. This allows objects that implement a custom __getattr__() or __getattribute__() function to customize the way dir() reports their attributes. If the object does not provide __dir__(), the function tries its best to gather information from the object’s __dict__ attribute, if defined, and from its type object. The resulting list is not necessarily complete, and may be inaccurate when the object has a custom __getattr__(). The default dir() mechanism behaves differently with different types of objects, as it attempts to produce the most relevant, rather than complete, information: If the object is a module object, the list contains the names of the module’s attributes. If the object is a type or class object, the list contains the names of its attributes, and recursively of the attributes of its bases. Otherwise, the list contains the object’s attributes’ names, the names of its class’s attributes, and recursively of the attributes of its class’s base classes. The resulting list is sorted alphabetically. For example: >>> import struct
>>> dir() # show the names in the module namespace
['__builtins__', '__name__', 'struct']
>>> dir(struct) # show the names in the struct module
['Struct', '__all__', '__builtins__', '__cached__', '__doc__', '__file__',
'__initializing__', '__loader__', '__name__', '__package__',
'_clearcache', 'calcsize', 'error', 'pack', 'pack_into',
'unpack', 'unpack_from']
>>> class Shape:
...     def __dir__(self):
...         return ['area', 'perimeter', 'location']
>>> s = Shape()
>>> dir(s)
['area', 'location', 'perimeter']
Note Because dir() is supplied primarily as a convenience for use at an interactive prompt, it tries to supply an interesting set of names more than it tries to supply a rigorously or consistently defined set of names, and its detailed behavior may change across releases. For example, metaclass attributes are not in the result list when the argument is a class.
divmod(a, b)
Take two (non complex) numbers as arguments and return a pair of numbers consisting of their quotient and remainder when using integer division. With mixed operand types, the rules for binary arithmetic operators apply. For integers, the result is the same as (a // b, a % b). For floating point numbers the result is (q, a % b), where q is usually math.floor(a / b) but may be 1 less than that. In any case q * b + a % b is very close to a, if a % b is non-zero it has the same sign as b, and 0 <= abs(a % b) < abs(b).
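A couple of concrete cases following the rules above:
>>> divmod(7, 3)
(2, 1)
>>> divmod(-7, 3)
(-3, 2)
>>> divmod(8.5, 3)
(2.0, 2.5)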
enumerate(iterable, start=0)
Return an enumerate object. iterable must be a sequence, an iterator, or some other object which supports iteration. The __next__() method of the iterator returned by enumerate() returns a tuple containing a count (from start which defaults to 0) and the values obtained from iterating over iterable. >>> seasons = ['Spring', 'Summer', 'Fall', 'Winter']
>>> list(enumerate(seasons))
[(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]
>>> list(enumerate(seasons, start=1))
[(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
Equivalent to: def enumerate(sequence, start=0):
    n = start
    for elem in sequence:
        yield n, elem
        n += 1
eval(expression[, globals[, locals]])
The arguments are a string and optional globals and locals. If provided, globals must be a dictionary. If provided, locals can be any mapping object. The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace. If the globals dictionary is present and does not contain a value for the key __builtins__, a reference to the dictionary of the built-in module builtins is inserted under that key before expression is parsed. This means that expression normally has full access to the standard builtins module and restricted environments are propagated. If the locals dictionary is omitted it defaults to the globals dictionary. If both dictionaries are omitted, the expression is executed with the globals and locals in the environment where eval() is called. Note, eval() does not have access to the nested scopes (non-locals) in the enclosing environment. The return value is the result of the evaluated expression. Syntax errors are reported as exceptions. Example: >>> x = 1
>>> eval('x+1')
2
This function can also be used to execute arbitrary code objects (such as those created by compile()). In this case pass a code object instead of a string. If the code object has been compiled with 'exec' as the mode argument, eval()’s return value will be None. Hints: dynamic execution of statements is supported by the exec() function. The globals() and locals() functions returns the current global and local dictionary, respectively, which may be useful to pass around for use by eval() or exec(). See ast.literal_eval() for a function that can safely evaluate strings with expressions containing only literals.
Raises an auditing event exec with the code object as the argument. Code compilation events may also be raised.
exec(object[, globals[, locals]])
This function supports dynamic execution of Python code. object must be either a string or a code object. If it is a string, the string is parsed as a suite of Python statements which is then executed (unless a syntax error occurs). 1 If it is a code object, it is simply executed. In all cases, the code that’s executed is expected to be valid as file input (see the section “File input” in the Reference Manual). Be aware that the nonlocal, yield, and return statements may not be used outside of function definitions even within the context of code passed to the exec() function. The return value is None. In all cases, if the optional parts are omitted, the code is executed in the current scope. If only globals is provided, it must be a dictionary (and not a subclass of dictionary), which will be used for both the global and the local variables. If globals and locals are given, they are used for the global and local variables, respectively. If provided, locals can be any mapping object. Remember that at module level, globals and locals are the same dictionary. If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition. If the globals dictionary does not contain a value for the key __builtins__, a reference to the dictionary of the built-in module builtins is inserted under that key. That way you can control what builtins are available to the executed code by inserting your own __builtins__ dictionary into globals before passing it to exec().
Raises an auditing event exec with the code object as the argument. Code compilation events may also be raised. Note The built-in functions globals() and locals() return the current global and local dictionary, respectively, which may be useful to pass around for use as the second and third argument to exec(). Note The default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function exec() returns.
filter(function, iterable)
Construct an iterator from those elements of iterable for which function returns true. iterable may be either a sequence, a container which supports iteration, or an iterator. If function is None, the identity function is assumed, that is, all elements of iterable that are false are removed. Note that filter(function, iterable) is equivalent to the generator expression (item for item in iterable if function(item)) if function is not None and (item for item in iterable if item) if function is None. See itertools.filterfalse() for the complementary function that returns elements of iterable for which function returns false.
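For example, with a function and with function=None:
>>> list(filter(str.isalpha, ['a', '1', 'b', '2']))
['a', 'b']
>>> list(filter(None, [0, 1, '', 'two']))
[1, 'two']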
class float([x])
Return a floating point number constructed from a number or string x. If the argument is a string, it should contain a decimal number, optionally preceded by a sign, and optionally embedded in whitespace. The optional sign may be '+' or '-'; a '+' sign has no effect on the value produced. The argument may also be a string representing a NaN (not-a-number), or a positive or negative infinity. More precisely, the input must conform to the following grammar after leading and trailing whitespace characters are removed:
sign ::= "+" | "-"
infinity ::= "Infinity" | "inf"
nan ::= "nan"
numeric_value ::= floatnumber | infinity | nan
numeric_string ::= [sign] numeric_value
Here floatnumber is the form of a Python floating-point literal, described in Floating point literals. Case is not significant, so, for example, “inf”, “Inf”, “INFINITY” and “iNfINity” are all acceptable spellings for positive infinity. Otherwise, if the argument is an integer or a floating point number, a floating point number with the same value (within Python’s floating point precision) is returned. If the argument is outside the range of a Python float, an OverflowError will be raised. For a general Python object x, float(x) delegates to x.__float__(). If __float__() is not defined then it falls back to __index__(). If no argument is given, 0.0 is returned. Examples: >>> float('+1.23')
1.23
>>> float(' -12345\n')
-12345.0
>>> float('1e-003')
0.001
>>> float('+1E6')
1000000.0
>>> float('-Infinity')
-inf
The float type is described in Numeric Types — int, float, complex. Changed in version 3.6: Grouping digits with underscores as in code literals is allowed. Changed in version 3.7: x is now a positional-only parameter. Changed in version 3.8: Falls back to __index__() if __float__() is not defined.
format(value[, format_spec])
Convert a value to a “formatted” representation, as controlled by format_spec. The interpretation of format_spec will depend on the type of the value argument, however there is a standard formatting syntax that is used by most built-in types: Format Specification Mini-Language. The default format_spec is an empty string which usually gives the same effect as calling str(value). A call to format(value, format_spec) is translated to type(value).__format__(value, format_spec) which bypasses the instance dictionary when searching for the value’s __format__() method. A TypeError exception is raised if the method search reaches object and the format_spec is non-empty, or if either the format_spec or the return value are not strings. Changed in version 3.4: object().__format__(format_spec) raises TypeError if format_spec is not an empty string.
class frozenset([iterable])
Return a new frozenset object, optionally with elements taken from iterable. frozenset is a built-in class. See frozenset and Set Types — set, frozenset for documentation about this class. For other containers see the built-in set, list, tuple, and dict classes, as well as the collections module.
getattr(object, name[, default])
Return the value of the named attribute of object. name must be a string. If the string is the name of one of the object’s attributes, the result is the value of that attribute. For example, getattr(x, 'foobar') is equivalent to x.foobar. If the named attribute does not exist, default is returned if provided, otherwise AttributeError is raised.
globals()
Return a dictionary representing the current global symbol table. This is always the dictionary of the current module (inside a function or method, this is the module where it is defined, not the module from which it is called).
hasattr(object, name)
The arguments are an object and a string. The result is True if the string is the name of one of the object’s attributes, False if not. (This is implemented by calling getattr(object, name) and seeing whether it raises an AttributeError or not.)
hash(object)
Return the hash value of the object (if it has one). Hash values are integers. They are used to quickly compare dictionary keys during a dictionary lookup. Numeric values that compare equal have the same hash value (even if they are of different types, as is the case for 1 and 1.0). Note For objects with custom __hash__() methods, note that hash() truncates the return value based on the bit width of the host machine. See __hash__() for details.
help([object])
Invoke the built-in help system. (This function is intended for interactive use.) If no argument is given, the interactive help system starts on the interpreter console. If the argument is a string, then the string is looked up as the name of a module, function, class, method, keyword, or documentation topic, and a help page is printed on the console. If the argument is any other kind of object, a help page on the object is generated. Note that if a slash(/) appears in the parameter list of a function, when invoking help(), it means that the parameters prior to the slash are positional-only. For more info, see the FAQ entry on positional-only parameters. This function is added to the built-in namespace by the site module. Changed in version 3.4: Changes to pydoc and inspect mean that the reported signatures for callables are now more comprehensive and consistent.
hex(x)
Convert an integer number to a lowercase hexadecimal string prefixed with “0x”. If x is not a Python int object, it has to define an __index__() method that returns an integer. Some examples: >>> hex(255)
'0xff'
>>> hex(-42)
'-0x2a'
If you want to convert an integer number to an uppercase or lower hexadecimal string with prefix or not, you can use either of the following ways: >>> '%#x' % 255, '%x' % 255, '%X' % 255
('0xff', 'ff', 'FF')
>>> format(255, '#x'), format(255, 'x'), format(255, 'X')
('0xff', 'ff', 'FF')
>>> f'{255:#x}', f'{255:x}', f'{255:X}'
('0xff', 'ff', 'FF')
See also format() for more information. See also int() for converting a hexadecimal string to an integer using a base of 16. Note To obtain a hexadecimal string representation for a float, use the float.hex() method.
id(object)
Return the “identity” of an object. This is an integer which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. CPython implementation detail: This is the address of the object in memory. Raises an auditing event builtins.id with argument id.
input([prompt])
If the prompt argument is present, it is written to standard output without a trailing newline. The function then reads a line from input, converts it to a string (stripping a trailing newline), and returns that. When EOF is read, EOFError is raised. Example: >>> s = input('--> ')
--> Monty Python's Flying Circus
>>> s
"Monty Python's Flying Circus"
If the readline module was loaded, then input() will use it to provide elaborate line editing and history features.
Raises an auditing event builtins.input with argument prompt before reading input
Raises an auditing event builtins.input/result with the result after successfully reading input.
class int([x])
class int(x, base=10)
Return an integer object constructed from a number or string x, or return 0 if no arguments are given. If x defines __int__(), int(x) returns x.__int__(). If x defines __index__(), it returns x.__index__(). If x defines __trunc__(), it returns x.__trunc__(). For floating point numbers, this truncates towards zero. If x is not a number or if base is given, then x must be a string, bytes, or bytearray instance representing an integer literal in radix base. Optionally, the literal can be preceded by + or - (with no space in between) and surrounded by whitespace. A base-n literal consists of the digits 0 to n-1, with a to z (or A to Z) having values 10 to 35. The default base is 10. The allowed values are 0 and 2–36. Base-2, -8, and -16 literals can be optionally prefixed with 0b/0B, 0o/0O, or 0x/0X, as with integer literals in code. Base 0 means to interpret exactly as a code literal, so that the actual base is 2, 8, 10, or 16, and so that int('010', 0) is not legal, while int('010') is, as well as int('010', 8). The integer type is described in Numeric Types — int, float, complex. Changed in version 3.4: If base is not an instance of int and the base object has a base.__index__ method, that method is called to obtain an integer for the base. Previous versions used base.__int__ instead of base.__index__. Changed in version 3.6: Grouping digits with underscores as in code literals is allowed. Changed in version 3.7: x is now a positional-only parameter. Changed in version 3.8: Falls back to __index__() if __int__() is not defined.
isinstance(object, classinfo)
Return True if the object argument is an instance of the classinfo argument, or of a (direct, indirect or virtual) subclass thereof. If object is not an object of the given type, the function always returns False. If classinfo is a tuple of type objects (or recursively, other such tuples), return True if object is an instance of any of the types. If classinfo is not a type or tuple of types and such tuples, a TypeError exception is raised.
issubclass(class, classinfo)
Return True if class is a subclass (direct, indirect or virtual) of classinfo. A class is considered a subclass of itself. classinfo may be a tuple of class objects, in which case every entry in classinfo will be checked. In any other case, a TypeError exception is raised.
iter(object[, sentinel])
Return an iterator object. The first argument is interpreted very differently depending on the presence of the second argument. Without a second argument, object must be a collection object which supports the iteration protocol (the __iter__() method), or it must support the sequence protocol (the __getitem__() method with integer arguments starting at 0). If it does not support either of those protocols, TypeError is raised. If the second argument, sentinel, is given, then object must be a callable object. The iterator created in this case will call object with no arguments for each call to its __next__() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned. See also Iterator Types. One useful application of the second form of iter() is to build a block-reader. For example, reading fixed-width blocks from a binary database file until the end of file is reached: from functools import partial
with open('mydata.db', 'rb') as f:
for block in iter(partial(f.read, 64), b''):
process_block(block)
len(s)
Return the length (the number of items) of an object. The argument may be a sequence (such as a string, bytes, tuple, list, or range) or a collection (such as a dictionary, set, or frozen set). CPython implementation detail: len raises OverflowError on lengths larger than sys.maxsize, such as range(2 ** 100).
class list([iterable])
Rather than being a function, list is actually a mutable sequence type, as documented in Lists and Sequence Types — list, tuple, range.
locals()
Update and return a dictionary representing the current local symbol table. Free variables are returned by locals() when it is called in function blocks, but not in class blocks. Note that at the module level, locals() and globals() are the same dictionary. Note The contents of this dictionary should not be modified; changes may not affect the values of local and free variables used by the interpreter.
map(function, iterable, ...)
Return an iterator that applies function to every item of iterable, yielding the results. If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables in parallel. With multiple iterables, the iterator stops when the shortest iterable is exhausted. For cases where the function inputs are already arranged into argument tuples, see itertools.starmap().
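For example, with a single iterable and with two iterables consumed in parallel:
>>> list(map(len, ['spam', 'eggs']))
[4, 4]
>>> list(map(pow, [2, 3, 10], [5, 2, 3]))
[32, 9, 1000]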
max(iterable, *[, key, default])
max(arg1, arg2, *args[, key])
Return the largest item in an iterable or the largest of two or more arguments. If one positional argument is provided, it should be an iterable. The largest item in the iterable is returned. If two or more positional arguments are provided, the largest of the positional arguments is returned. There are two optional keyword-only arguments. The key argument specifies a one-argument ordering function like that used for list.sort(). The default argument specifies an object to return if the provided iterable is empty. If the iterable is empty and default is not provided, a ValueError is raised. If multiple items are maximal, the function returns the first one encountered. This is consistent with other sort-stability preserving tools such as sorted(iterable, key=keyfunc, reverse=True)[0] and heapq.nlargest(1, iterable, key=keyfunc). New in version 3.4: The default keyword-only argument. Changed in version 3.8: The key can be None.
class memoryview(obj)
Return a “memory view” object created from the given argument. See Memory Views for more information.
min(iterable, *[, key, default])
min(arg1, arg2, *args[, key])
Return the smallest item in an iterable or the smallest of two or more arguments. If one positional argument is provided, it should be an iterable. The smallest item in the iterable is returned. If two or more positional arguments are provided, the smallest of the positional arguments is returned. There are two optional keyword-only arguments. The key argument specifies a one-argument ordering function like that used for list.sort(). The default argument specifies an object to return if the provided iterable is empty. If the iterable is empty and default is not provided, a ValueError is raised. If multiple items are minimal, the function returns the first one encountered. This is consistent with other sort-stability preserving tools such as sorted(iterable, key=keyfunc)[0] and heapq.nsmallest(1, iterable, key=keyfunc). New in version 3.4: The default keyword-only argument. Changed in version 3.8: The key can be None.
next(iterator[, default])
Retrieve the next item from the iterator by calling its __next__() method. If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.
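A small example of the default argument:
>>> it = iter([1, 2])
>>> next(it)
1
>>> next(it)
2
>>> next(it, 'done')
'done'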
class object
Return a new featureless object. object is a base for all classes. It has the methods that are common to all instances of Python classes. This function does not accept any arguments. Note object does not have a __dict__, so you can’t assign arbitrary attributes to an instance of the object class.
oct(x)
Convert an integer number to an octal string prefixed with “0o”. The result is a valid Python expression. If x is not a Python int object, it has to define an __index__() method that returns an integer. For example: >>> oct(8)
'0o10'
>>> oct(-56)
'-0o70'
If you want to convert an integer number to octal string either with prefix “0o” or not, you can use either of the following ways. >>> '%#o' % 10, '%o' % 10
('0o12', '12')
>>> format(10, '#o'), format(10, 'o')
('0o12', '12')
>>> f'{10:#o}', f'{10:o}'
('0o12', '12')
See also format() for more information.
open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
Open file and return a corresponding file object. If the file cannot be opened, an OSError is raised. See Reading and Writing Files for more examples of how to use this function. file is a path-like object giving the pathname (absolute or relative to the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.) mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists), 'x' for exclusive creation and 'a' for appending (which on some Unix systems, means that all writes append to the end of the file regardless of the current seek position). In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.) The available modes are:
Character Meaning
'r' open for reading (default)
'w' open for writing, truncating the file first
'x' open for exclusive creation, failing if the file already exists
'a' open for writing, appending to the end of the file if it exists
'b' binary mode
't' text mode (default)
'+' open for updating (reading and writing) The default mode is 'r' (open for reading text, synonym of 'rt'). Modes 'w+' and 'w+b' open and truncate the file. Modes 'r+' and 'r+b' open the file with no truncation. As mentioned in the Overview, Python distinguishes between binary and text I/O. Files opened in binary mode (including 'b' in the mode argument) return contents as bytes objects without any decoding. In text mode (the default, or when 't' is included in the mode argument), the contents of the file are returned as str, the bytes having been first decoded using a platform-dependent encoding or using the specified encoding if given. There is an additional mode character permitted, 'U', which no longer has any effect, and is considered deprecated. It previously enabled universal newlines in text mode, which became the default behaviour in Python 3.0. Refer to the documentation of the newline parameter for further details. Note Python doesn’t depend on the underlying operating system’s notion of text files; all the processing is done by Python itself, and is therefore platform-independent. buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size in bytes of a fixed-size chunk buffer. When no buffering argument is given, the default buffering policy works as follows: Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device’s “block size” and falling back on io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically be 4096 or 8192 bytes long. “Interactive” text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files. encoding is the name of the encoding used to decode or encode the file. This should only be used in text mode. The default encoding is platform dependent (whatever locale.getpreferredencoding() returns), but any text encoding supported by Python can be used. See the codecs module for the list of supported encodings. errors is an optional string that specifies how encoding and decoding errors are to be handled—this cannot be used in binary mode. A variety of standard error handlers are available (listed under Error Handlers), though any error handling name that has been registered with codecs.register_error() is also valid. The standard names include:
'strict' to raise a ValueError exception if there is an encoding error. The default value of None has the same effect.
'ignore' ignores errors. Note that ignoring encoding errors can lead to data loss.
'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data.
'surrogateescape' will represent any incorrect bytes as code points in the Unicode Private Use Area ranging from U+DC80 to U+DCFF. These private code points will then be turned back into the same bytes when the surrogateescape error handler is used when writing data. This is useful for processing files in an unknown encoding.
'xmlcharrefreplace' is only supported when writing to a file. Characters not supported by the encoding are replaced with the appropriate XML character reference &#nnn;.
'backslashreplace' replaces malformed data by Python’s backslashed escape sequences.
'namereplace' (also only supported when writing) replaces unsupported characters with \N{...} escape sequences. newline controls how universal newlines mode works (it only applies to text mode). It can be None, '', '\n', '\r', and '\r\n'. It works as follows: When reading input from the stream, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newlines mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated. When writing output to the stream, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '' or '\n', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string. If closefd is False and a file descriptor rather than a filename was given, the underlying file descriptor will be kept open when the file is closed. If a filename is given closefd must be True (the default) otherwise an error will be raised. A custom opener can be used by passing a callable as opener. The underlying file descriptor for the file object is then obtained by calling opener with (file, flags). opener must return an open file descriptor (passing os.open as opener results in functionality similar to passing None). The newly created file is non-inheritable. The following example uses the dir_fd parameter of the os.open() function to open a file relative to a given directory: >>> import os
>>> dir_fd = os.open('somedir', os.O_RDONLY)
>>> def opener(path, flags):
...     return os.open(path, flags, dir_fd=dir_fd)
...
>>> with open('spamspam.txt', 'w', opener=opener) as f:
...     print('This will be written to somedir/spamspam.txt', file=f)
...
>>> os.close(dir_fd) # don't leak a file descriptor
The type of file object returned by the open() function depends on the mode. When open() is used to open a file in a text mode ('w', 'r', 'wt', 'rt', etc.), it returns a subclass of io.TextIOBase (specifically io.TextIOWrapper). When used to open a file in a binary mode with buffering, the returned class is a subclass of io.BufferedIOBase. The exact class varies: in read binary mode, it returns an io.BufferedReader; in write binary and append binary modes, it returns an io.BufferedWriter, and in read/write mode, it returns an io.BufferedRandom. When buffering is disabled, the raw stream, a subclass of io.RawIOBase, io.FileIO, is returned. See also the file handling modules, such as, fileinput, io (where open() is declared), os, os.path, tempfile, and shutil. Raises an auditing event open with arguments file, mode, flags. The mode and flags arguments may have been modified or inferred from the original call. Changed in version 3.3: The opener parameter was added. The 'x' mode was added.
IOError used to be raised, it is now an alias of OSError.
FileExistsError is now raised if the file opened in exclusive creation mode ('x') already exists. Changed in version 3.4: The file is now non-inheritable. Deprecated since version 3.4, will be removed in version 3.10: The 'U' mode. Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an InterruptedError exception (see PEP 475 for the rationale). The 'namereplace' error handler was added. Changed in version 3.6: Support added to accept objects implementing os.PathLike. On Windows, opening a console buffer may return a subclass of io.RawIOBase other than io.FileIO.
ord(c)
Given a string representing one Unicode character, return an integer representing the Unicode code point of that character. For example, ord('a') returns the integer 97 and ord('€') (Euro sign) returns 8364. This is the inverse of chr().
pow(base, exp[, mod])
Return base to the power exp; if mod is present, return base to the power exp, modulo mod (computed more efficiently than pow(base, exp) % mod). The two-argument form pow(base, exp) is equivalent to using the power operator: base**exp. The arguments must have numeric types. With mixed operand types, the coercion rules for binary arithmetic operators apply. For int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, 10**2 returns 100, but 10**-2 returns 0.01. For int operands base and exp, if mod is present, mod must also be of integer type and mod must be nonzero. If mod is present and exp is negative, base must be relatively prime to mod. In that case, pow(inv_base, -exp, mod) is returned, where inv_base is an inverse to base modulo mod. Here’s an example of computing an inverse for 38 modulo 97: >>> pow(38, -1, mod=97)
23
>>> 23 * 38 % 97 == 1
True
Changed in version 3.8: For int operands, the three-argument form of pow now allows the second argument to be negative, permitting computation of modular inverses. Changed in version 3.8: Allow keyword arguments. Formerly, only positional arguments were supported.
print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)
Print objects to the text stream file, separated by sep and followed by end. sep, end, file and flush, if present, must be given as keyword arguments. All non-keyword arguments are converted to strings like str() does and written to the stream, separated by sep and followed by end. Both sep and end must be strings; they can also be None, which means to use the default values. If no objects are given, print() will just write end. The file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. Since printed arguments are converted to text strings, print() cannot be used with binary mode file objects. For these, use file.write(...) instead. Whether output is buffered is usually determined by file, but if the flush keyword argument is true, the stream is forcibly flushed. Changed in version 3.3: Added the flush keyword argument.
class property(fget=None, fset=None, fdel=None, doc=None)
Return a property attribute. fget is a function for getting an attribute value. fset is a function for setting an attribute value. fdel is a function for deleting an attribute value. And doc creates a docstring for the attribute. A typical use is to define a managed attribute x: class C:
    def __init__(self):
        self._x = None

    def getx(self):
        return self._x

    def setx(self, value):
        self._x = value

    def delx(self):
        del self._x

    x = property(getx, setx, delx, "I'm the 'x' property.")
If c is an instance of C, c.x will invoke the getter, c.x = value will invoke the setter and del c.x the deleter. If given, doc will be the docstring of the property attribute. Otherwise, the property will copy fget’s docstring (if it exists). This makes it possible to create read-only properties easily using property() as a decorator: class Parrot:
def __init__(self):
self._voltage = 100000
@property
def voltage(self):
"""Get the current voltage."""
return self._voltage
The @property decorator turns the voltage() method into a “getter” for a read-only attribute with the same name, and it sets the docstring for voltage to “Get the current voltage.” A property object has getter, setter, and deleter methods usable as decorators that create a copy of the property with the corresponding accessor function set to the decorated function. This is best explained with an example: class C:
def __init__(self):
self._x = None
@property
def x(self):
"""I'm the 'x' property."""
return self._x
@x.setter
def x(self, value):
self._x = value
@x.deleter
def x(self):
del self._x
This code is exactly equivalent to the first example. Be sure to give the additional functions the same name as the original property (x in this case.) The returned property object also has the attributes fget, fset, and fdel corresponding to the constructor arguments. Changed in version 3.5: The docstrings of property objects are now writeable.
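For instance, with either definition of C above:
>>> c = C()
>>> c.x = 42        # invokes the setter
>>> c.x             # invokes the getter
42
>>> del c.x         # invokes the deleter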
class range(stop)
class range(start, stop[, step])
Rather than being a function, range is actually an immutable sequence type, as documented in Ranges and Sequence Types — list, tuple, range.
repr(object)
Return a string containing a printable representation of an object. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(), otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object. A class can control what this function returns for its instances by defining a __repr__() method.
reversed(seq)
Return a reverse iterator. seq must be an object which has a __reversed__() method or supports the sequence protocol (the __len__() method and the __getitem__() method with integer arguments starting at 0).
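For example:
>>> list(reversed([1, 2, 3]))
[3, 2, 1]
>>> ''.join(reversed('hello'))
'olleh'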
round(number[, ndigits])
Return number rounded to ndigits precision after the decimal point. If ndigits is omitted or is None, it returns the nearest integer to its input. For the built-in types supporting round(), values are rounded to the closest multiple of 10 to the power minus ndigits; if two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2). Any integer value is valid for ndigits (positive, zero, or negative). The return value is an integer if ndigits is omitted or None. Otherwise the return value has the same type as number. For a general Python object number, round delegates to number.__round__. Note The behavior of round() for floats can be surprising: for example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This is not a bug: it’s a result of the fact that most decimal fractions can’t be represented exactly as a float. See Floating Point Arithmetic: Issues and Limitations for more information.
class set([iterable])
Return a new set object, optionally with elements taken from iterable. set is a built-in class. See set and Set Types — set, frozenset for documentation about this class. For other containers see the built-in frozenset, list, tuple, and dict classes, as well as the collections module.
setattr(object, name, value)
This is the counterpart of getattr(). The arguments are an object, a string and an arbitrary value. The string may name an existing attribute or a new attribute. The function assigns the value to the attribute, provided the object allows it. For example, setattr(x, 'foobar', 123) is equivalent to x.foobar = 123.
class slice(stop)
class slice(start, stop[, step])
Return a slice object representing the set of indices specified by range(start, stop, step). The start and step arguments default to None. Slice objects have read-only data attributes start, stop and step which merely return the argument values (or their default). They have no other explicit functionality; however they are used by Numerical Python and other third party extensions. Slice objects are also generated when extended indexing syntax is used. For example: a[start:stop:step] or a[start:stop, i]. See itertools.islice() for an alternate version that returns an iterator.
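For example:
>>> s = slice(1, 10, 2)
>>> list(range(20))[s]          # same as list(range(20))[1:10:2]
[1, 3, 5, 7, 9]
>>> s.start, s.stop, s.step
(1, 10, 2)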
sorted(iterable, *, key=None, reverse=False)
Return a new sorted list from the items in iterable. Has two optional arguments which must be specified as keyword arguments. key specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). The default value is None (compare the elements directly). reverse is a boolean value. If set to True, then the list elements are sorted as if each comparison were reversed. Use functools.cmp_to_key() to convert an old-style cmp function to a key function. The built-in sorted() function is guaranteed to be stable. A sort is stable if it guarantees not to change the relative order of elements that compare equal — this is helpful for sorting in multiple passes (for example, sort by department, then by salary grade). For sorting examples and a brief sorting tutorial, see Sorting HOW TO.
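For example:
>>> sorted(['banana', 'Apple', 'cherry'], key=str.lower)
['Apple', 'banana', 'cherry']
>>> sorted([5, 2, 3, 1, 4], reverse=True)
[5, 4, 3, 2, 1]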
@staticmethod
Transform a method into a static method. A static method does not receive an implicit first argument. To declare a static method, use this idiom: class C:
@staticmethod
def f(arg1, arg2, ...): ...
The @staticmethod form is a function decorator – see Function definitions for details. A static method can be called either on the class (such as C.f()) or on an instance (such as C().f()). Static methods in Python are similar to those found in Java or C++. Also see classmethod() for a variant that is useful for creating alternate class constructors. Like all decorators, it is also possible to call staticmethod as a regular function and do something with its result. This is needed in some cases where you need a reference to a function from a class body and you want to avoid the automatic transformation to instance method. For these cases, use this idiom: class C:
builtin_open = staticmethod(open)
For more information on static methods, see The standard type hierarchy.
class str(object='')
class str(object=b'', encoding='utf-8', errors='strict')
Return a str version of object. See str() for details. str is the built-in string class. For general information about strings, see Text Sequence Type — str.
sum(iterable, /, start=0)
Sums start and the items of an iterable from left to right and returns the total. The iterable’s items are normally numbers, and the start value is not allowed to be a string. For some use cases, there are good alternatives to sum(). The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence). To add floating point values with extended precision, see math.fsum(). To concatenate a series of iterables, consider using itertools.chain(). Changed in version 3.8: The start parameter can be specified as a keyword argument.
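For example:
>>> sum([1, 2, 3])
6
>>> sum([1, 2, 3], start=10)
16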
super([type[, object-or-type]])
Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The object-or-type determines the method resolution order to be searched. The search starts from the class right after the type. For example, if __mro__ of object-or-type is D -> B -> C -> A -> object and the value of type is B, then super() searches C -> A -> object. The __mro__ attribute of the object-or-type lists the method resolution search order used by both getattr() and super(). The attribute is dynamic and can change whenever the inheritance hierarchy is updated. If the second argument is omitted, the super object returned is unbound. If the second argument is an object, isinstance(obj, type) must be true. If the second argument is a type, issubclass(type2, type) must be true (this is useful for classmethods). There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages. The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that such implementations have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime). For both use cases, a typical superclass call looks like this: class C(B):
def method(self, arg):
super().method(arg) # This does the same thing as:
# super(C, self).method(arg)
In addition to method lookups, super() also works for attribute lookups. One possible use case for this is calling descriptors in a parent or sibling class. Note that super() is implemented as part of the binding process for explicit dotted attribute lookups such as super().__getitem__(name). It does so by implementing its own __getattribute__() method for searching classes in a predictable order that supports cooperative multiple inheritance. Accordingly, super() is undefined for implicit lookups using statements or operators such as super()[name]. Also note that, aside from the zero argument form, super() is not limited to use inside methods. The two argument form specifies the arguments exactly and makes the appropriate references. The zero argument form only works inside a class definition, as the compiler fills in the necessary details to correctly retrieve the class being defined, as well as accessing the current instance for ordinary methods. For practical suggestions on how to design cooperative classes using super(), see guide to using super().
class tuple([iterable])
Rather than being a function, tuple is actually an immutable sequence type, as documented in Tuples and Sequence Types — list, tuple, range.
class type(object)
class type(name, bases, dict, **kwds)
With one argument, return the type of an object. The return value is a type object and generally the same object as returned by object.__class__. The isinstance() built-in function is recommended for testing the type of an object, because it takes subclasses into account. With three arguments, return a new type object. This is essentially a dynamic form of the class statement. The name string is the class name and becomes the __name__ attribute. The bases tuple contains the base classes and becomes the __bases__ attribute; if empty, object, the ultimate base of all classes, is added. The dict dictionary contains attribute and method definitions for the class body; it may be copied or wrapped before becoming the __dict__ attribute. The following two statements create identical type objects: >>> class X:
... a = 1
...
>>> X = type('X', (), dict(a=1))
See also Type Objects. Keyword arguments provided to the three argument form are passed to the appropriate metaclass machinery (usually __init_subclass__()) in the same way that keywords in a class definition (besides metaclass) would. See also Customizing class creation. Changed in version 3.6: Subclasses of type which don’t override type.__new__ may no longer use the one-argument form to get the type of an object.
vars([object])
Return the __dict__ attribute for a module, class, instance, or any other object with a __dict__ attribute. Objects such as modules and instances have an updateable __dict__ attribute; however, other objects may have write restrictions on their __dict__ attributes (for example, classes use a types.MappingProxyType to prevent direct dictionary updates). Without an argument, vars() acts like locals(). Note, the locals dictionary is only useful for reads since updates to the locals dictionary are ignored. A TypeError exception is raised if an object is specified but it doesn’t have a __dict__ attribute (for example, if its class defines the __slots__ attribute).
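For instance, on a simple instance with a __dict__:
>>> class Point:
...     def __init__(self, x, y):
...         self.x, self.y = x, y
...
>>> vars(Point(1, 2))
{'x': 1, 'y': 2}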
zip(*iterables)
Make an iterator that aggregates elements from each of the iterables. Returns an iterator of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables. The iterator stops when the shortest input iterable is exhausted. With a single iterable argument, it returns an iterator of 1-tuples. With no arguments, it returns an empty iterator. Equivalent to: def zip(*iterables):
# zip('ABCD', 'xy') --> Ax By
sentinel = object()
iterators = [iter(it) for it in iterables]
while iterators:
result = []
for it in iterators:
elem = next(it, sentinel)
if elem is sentinel:
return
result.append(elem)
yield tuple(result)
The left-to-right evaluation order of the iterables is guaranteed. This makes possible an idiom for clustering a data series into n-length groups using zip(*[iter(s)]*n). This repeats the same iterator n times so that each output tuple has the result of n calls to the iterator. This has the effect of dividing the input into n-length chunks. zip() should only be used with unequal length inputs when you don’t care about trailing, unmatched values from the longer iterables. If those values are important, use itertools.zip_longest() instead. zip() in conjunction with the * operator can be used to unzip a list: >>> x = [1, 2, 3]
>>> y = [4, 5, 6]
>>> zipped = zip(x, y)
>>> list(zipped)
[(1, 4), (2, 5), (3, 6)]
>>> x2, y2 = zip(*zip(x, y))
>>> x == list(x2) and y == list(y2)
True
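The grouping idiom mentioned above, for instance:
>>> s = [1, 2, 3, 4, 5, 6]
>>> list(zip(*[iter(s)] * 2))
[(1, 2), (3, 4), (5, 6)]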
__import__(name, globals=None, locals=None, fromlist=(), level=0)
Note This is an advanced function that is not needed in everyday Python programming, unlike importlib.import_module(). This function is invoked by the import statement. It can be replaced (by importing the builtins module and assigning to builtins.__import__) in order to change semantics of the import statement, but doing so is strongly discouraged as it is usually simpler to use import hooks (see PEP 302) to attain the same goals and does not cause issues with code which assumes the default import implementation is in use. Direct use of __import__() is also discouraged in favor of importlib.import_module(). The function imports the module name, potentially using the given globals and locals to determine how to interpret the name in a package context. The fromlist gives the names of objects or submodules that should be imported from the module given by name. The standard implementation does not use its locals argument at all, and uses its globals only to determine the package context of the import statement. level specifies whether to use absolute or relative imports. 0 (the default) means only perform absolute imports. Positive values for level indicate the number of parent directories to search relative to the directory of the module calling __import__() (see PEP 328 for the details). When the name variable is of the form package.module, normally, the top-level package (the name up till the first dot) is returned, not the module named by name. However, when a non-empty fromlist argument is given, the module named by name is returned. For example, the statement import spam results in bytecode resembling the following code: spam = __import__('spam', globals(), locals(), [], 0)
The statement import spam.ham results in this call: spam = __import__('spam.ham', globals(), locals(), [], 0)
Note how __import__() returns the toplevel module here because this is the object that is bound to a name by the import statement. On the other hand, the statement from spam.ham import eggs, sausage as saus results in _temp = __import__('spam.ham', globals(), locals(), ['eggs', 'sausage'], 0)
eggs = _temp.eggs
saus = _temp.sausage
Here, the spam.ham module is returned from __import__(). From this object, the names to import are retrieved and assigned to their respective names. If you simply want to import a module (potentially within a package) by name, use importlib.import_module(). Changed in version 3.3: Negative values for level are no longer supported (which also changes the default value to 0). Changed in version 3.9: When the command line options -E or -I are being used, the environment variable PYTHONCASEOK is now ignored.
Footnotes
1
Note that the parser only accepts the Unix-style end of line convention. If you are reading the code from a file, make sure to use newline conversion mode to convert Windows or Mac-style newlines. | |
doc_25855 | tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits(
_sentinel=None, labels=None, logits=None, name=None
)
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
Note: For this operation, the probability of a given label is considered exclusive. That is, soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits (each minibatch entry). For soft softmax classification with a probability distribution for each entry, see softmax_cross_entropy_with_logits_v2.
Warning: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results. A common use case is to have logits of shape [batch_size, num_classes] and have labels of shape [batch_size], but higher dimensions are supported, in which case the dim-th dimension is assumed to be of size num_classes. logits must have the dtype of float16, float32, or float64, and labels must have the dtype of int32 or int64. Note that to avoid confusion, it is required to pass only named arguments to this function.
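A minimal usage sketch (assumes TensorFlow is imported as tf; shapes and values are illustrative):
labels = tf.constant([0, 2])                                # shape [batch_size], dtype int32
logits = tf.constant([[2.0, 0.5, 0.3], [0.1, 1.2, 3.0]])    # shape [batch_size, num_classes]
loss = tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)                           # shape [batch_size]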
Args
_sentinel Used to prevent positional parameters. Internal, do not use.
labels Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU.
logits Per-label activations (typically a linear output) of shape [d_0, d_1, ..., d_{r-1}, num_classes] and dtype float16, float32, or float64. These activation energies are interpreted as unnormalized log probabilities.
name A name for the operation (optional).
Returns A Tensor of the same shape as labels and of the same type as logits with the softmax cross entropy loss.
Raises
ValueError If logits are scalars (need to have rank >= 1) or if the rank of the labels is not equal to the rank of the logits minus one. | |
doc_25856 | changes camera settings if supported by the camera set_controls(hflip = bool, vflip = bool, brightness) -> (hflip = bool, vflip = bool, brightness) Allows you to change camera settings if the camera supports it. The return values will be the input values if the camera claims it succeeded or the values previously in use if not. Each argument is optional, and the desired one can be chosen by supplying the keyword, like hflip. Note that the actual settings being used by the camera may not be the same as those returned by set_controls. | |
doc_25857 | The constructor doesn’t take any arguments and no content should be added to this response. Use this to designate that a page hasn’t been modified since the user’s last request (status code 304). | |
doc_25858 | tf.optimizers.Nadam Compat aliases for migration See Migration guide for more details. tf.compat.v1.keras.optimizers.Nadam
tf.keras.optimizers.Nadam(
learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
name='Nadam', **kwargs
)
Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
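A minimal sketch of a single optimization step (assumes TensorFlow is imported as tf; values are illustrative):
opt = tf.keras.optimizers.Nadam(learning_rate=0.1)
var = tf.Variable(10.0)
loss = lambda: (var ** 2) / 2.0        # d(loss)/d(var) == var
opt.minimize(loss, var_list=[var])     # runs one Nadam update on var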
Args
learning_rate A Tensor or a floating point value. The learning rate.
beta_1 A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2 A float value or a constant float tensor. The exponential decay rate for the exponentially weighted infinity norm.
epsilon A small constant for numerical stability.
name Optional name for the operations created when applying gradients. Defaults to "Nadam".
**kwargs Keyword arguments. Allowed to be one of "clipnorm" or "clipvalue". "clipnorm" (float) clips gradients by norm; "clipvalue" (float) clips gradients by value. Reference:
Dozat, 2015.
Raises
ValueError in case of any invalid argument. | |
doc_25859 | | 1.0 MobileNet-224 | 70.6 % | 529 | 4.2 |
| 1.0 MobileNet-192 | 69.1 % | 529 | 4.2 |
| 1.0 MobileNet-160 | 67.2 % | 529 | 4.2 |
| 1.0 MobileNet-128 | 64.4 % | 529 | 4.2 |
Reference: MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications Functions MobileNet(...): Instantiates the MobileNet architecture. decode_predictions(...): Decodes the prediction of an ImageNet model. preprocess_input(...): Preprocesses a tensor or Numpy array encoding a batch of images. | |
doc_25860 |
Return the non-negative square-root of an array, element-wise. Parameters
xarray_like
The values whose square-roots are required.
outndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
wherearray_like, optional
This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. **kwargs
For other keyword-only arguments, see the ufunc docs. Returns
yndarray
An array of the same shape as x, containing the positive square-root of each element in x. If any element in x is complex, a complex array is returned (and the square-roots of negative reals are calculated). If all of the elements in x are real, so is y, with negative elements returning nan. If out was provided, y is a reference to it. This is a scalar if x is a scalar. See also lib.scimath.sqrt
A version which returns complex numbers when given negative reals. Notes Consistent with common convention, sqrt takes the real “interval” [-inf, 0) as its branch cut and is continuous from above on it. A branch cut is a curve in the complex plane across which a given complex function fails to be continuous. Examples >>> np.sqrt([1,4,9])
array([ 1., 2., 3.])
>>> np.sqrt([4, -1, -3+4J])
array([ 2.+0.j, 0.+1.j, 1.+2.j])
>>> np.sqrt([4, -1, np.inf])
array([ 2., nan, inf]) | |
doc_25861 |
Finds blobs in the given grayscale image. Blobs are found using the Determinant of Hessian method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian Kernel used for the Hessian matrix whose determinant detected the blob. Determinant of Hessians is approximated using [2]. Parameters
image2D ndarray
Input grayscale image. Blobs can either be light on dark or vice versa.
min_sigmafloat, optional
The minimum standard deviation for Gaussian Kernel used to compute Hessian matrix. Keep this low to detect smaller blobs.
max_sigmafloat, optional
The maximum standard deviation for Gaussian Kernel used to compute Hessian matrix. Keep this high to detect larger blobs.
num_sigmaint, optional
The number of intermediate values of standard deviations to consider between min_sigma and max_sigma.
thresholdfloat, optional.
The absolute lower bound for scale space maxima. Local maxima smaller than threshold are ignored. Reduce this to detect less prominent blobs.
overlapfloat, optional
A value between 0 and 1. If the area of two blobs overlaps by a fraction greater than threshold, the smaller blob is eliminated.
log_scalebool, optional
If set intermediate values of standard deviations are interpolated using a logarithmic scale to the base 10. If not, linear interpolation is used. Returns
A(n, 3) ndarray
A 2d array with each row representing 3 values, (y,x,sigma) where (y,x) are coordinates of the blob and sigma is the standard deviation of the Gaussian kernel of the Hessian Matrix whose determinant detected the blob. Notes The radius of each blob is approximately sigma. Computation of Determinant of Hessians is independent of the standard deviation. Therefore detecting larger blobs won't take more time. In methods like blob_dog() and blob_log() the computation of Gaussians for larger sigma takes more time. The downside is that this method can't be used for detecting blobs of radius less than 3px due to the box filters used in the approximation of Hessian Determinant. References
1
https://en.wikipedia.org/wiki/Blob_detection#The_determinant_of_the_Hessian
2
Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf Examples >>> from skimage import data, feature
>>> img = data.coins()
>>> feature.blob_doh(img)
array([[197. , 153. , 20.33333333],
[124. , 336. , 20.33333333],
[126. , 153. , 20.33333333],
[195. , 100. , 23.55555556],
[192. , 212. , 23.55555556],
[121. , 271. , 30. ],
[126. , 101. , 20.33333333],
[193. , 275. , 23.55555556],
[123. , 205. , 20.33333333],
[270. , 363. , 30. ],
[265. , 113. , 23.55555556],
[262. , 243. , 23.55555556],
[185. , 348. , 30. ],
[156. , 302. , 30. ],
[123. , 44. , 23.55555556],
[260. , 173. , 30. ],
[197. , 44. , 20.33333333]]) | |
doc_25862 |
Shuffle arrays or sparse matrices in a consistent way. This is a convenience alias to resample(*arrays, replace=False) to do random permutations of the collections. Parameters
*arrayssequence of indexable data-structures
Indexable data-structures can be arrays, lists, dataframes or scipy sparse matrices with consistent first dimension.
random_stateint, RandomState instance or None, default=None
Determines random number generation for shuffling the data. Pass an int for reproducible results across multiple function calls. See Glossary.
n_samplesint, default=None
Number of samples to generate. If left to None this is automatically set to the first dimension of the arrays. It should not be larger than the length of arrays. Returns
shuffled_arrayssequence of indexable data-structures
Sequence of shuffled copies of the collections. The original arrays are not impacted. See also
resample
Examples It is possible to mix sparse and dense arrays in the same run: >>> X = np.array([[1., 0.], [2., 1.], [0., 0.]])
>>> y = np.array([0, 1, 2])
>>> from scipy.sparse import coo_matrix
>>> X_sparse = coo_matrix(X)
>>> from sklearn.utils import shuffle
>>> X, X_sparse, y = shuffle(X, X_sparse, y, random_state=0)
>>> X
array([[0., 0.],
[2., 1.],
[1., 0.]])
>>> X_sparse
<3x2 sparse matrix of type '<... 'numpy.float64'>'
with 3 stored elements in Compressed Sparse Row format>
>>> X_sparse.toarray()
array([[0., 0.],
[2., 1.],
[1., 0.]])
>>> y
array([2, 1, 0])
>>> shuffle(y, n_samples=2, random_state=0)
array([0, 1]) | |
doc_25863 | Numerical underflow with result rounded to zero. Occurs when a subnormal result is pushed to zero by rounding. Inexact and Subnormal are also signaled. | |
doc_25864 |
Subtract one polynomial from another. Returns the difference of two polynomials c1 - c2. The arguments are sequences of coefficients from lowest order term to highest, i.e., [1,2,3] represents the polynomial 1 + 2*x + 3*x**2. Parameters
c1, c2array_like
1-D arrays of polynomial coefficients ordered from low to high. Returns
outndarray
Of coefficients representing their difference. See also
polyadd, polymulx, polymul, polydiv, polypow
Examples >>> from numpy.polynomial import polynomial as P
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> P.polysub(c1,c2)
array([-2., 0., 2.])
>>> P.polysub(c2,c1) # -P.polysub(c1,c2)
array([ 2., 0., -2.]) | |
doc_25865 | tf.experimental.numpy.arctanh(
x
)
Unsupported arguments: out, where, casting, order, dtype, subok, signature, extobj. See the NumPy documentation for numpy.arctanh. | |
doc_25866 | Casts this storage to short type | |
doc_25867 |
Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter tol; “trailing” means highest order coefficient(s), e.g., in [0, 1, 1, 0, 0] (which represents 0 + x + x**2 + 0*x**3 + 0*x**4) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters
carray_like
1-d array of coefficients, ordered from lowest order to highest.
tolnumber, optional
Trailing (i.e., highest order) elements with absolute value less than or equal to tol (default value is zero) are removed. Returns
trimmedndarray
1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises
ValueError
If tol < 0 See also trimseq
Examples >>> from numpy.polynomial import polyutils as pu
>>> pu.trimcoef((0,0,3,0,5,0,0))
array([0., 0., 3., 0., 5.])
>>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed
array([0.])
>>> i = complex(0,1) # works for complex
>>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3)
array([0.0003+0.j , 0.001 -0.001j]) | |
doc_25868 | A Cmd instance or subclass instance is a line-oriented interpreter framework. There is no good reason to instantiate Cmd itself; rather, it’s useful as a superclass of an interpreter class you define yourself in order to inherit Cmd’s methods and encapsulate action methods. The optional argument completekey is the readline name of a completion key; it defaults to Tab. If completekey is not None and readline is available, command completion is done automatically. The optional arguments stdin and stdout specify the input and output file objects that the Cmd instance or subclass instance will use for input and output. If not specified, they will default to sys.stdin and sys.stdout. If you want a given stdin to be used, make sure to set the instance’s use_rawinput attribute to False, otherwise stdin will be ignored. | |
doc_25869 | Returns a new deque object initialized left-to-right (using append()) with data from iterable. If iterable is not specified, the new deque is empty. Deques are a generalization of stacks and queues (the name is pronounced “deck” and is short for “double-ended queue”). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction. Though list objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for pop(0) and insert(0, v) operations which change both the size and position of the underlying data representation. If maxlen is not specified or is None, deques may grow to an arbitrary length. Otherwise, the deque is bounded to the specified maximum length. Once a bounded length deque is full, when new items are added, a corresponding number of items are discarded from the opposite end. Bounded length deques provide functionality similar to the tail filter in Unix. They are also useful for tracking transactions and other pools of data where only the most recent activity is of interest. Deque objects support the following methods:
append(x)
Add x to the right side of the deque.
appendleft(x)
Add x to the left side of the deque.
clear()
Remove all elements from the deque leaving it with length 0.
copy()
Create a shallow copy of the deque. New in version 3.5.
count(x)
Count the number of deque elements equal to x. New in version 3.2.
extend(iterable)
Extend the right side of the deque by appending elements from the iterable argument.
extendleft(iterable)
Extend the left side of the deque by appending elements from iterable. Note, the series of left appends results in reversing the order of elements in the iterable argument.
index(x[, start[, stop]])
Return the position of x in the deque (at or after index start and before index stop). Returns the first match or raises ValueError if not found. New in version 3.5.
insert(i, x)
Insert x into the deque at position i. If the insertion would cause a bounded deque to grow beyond maxlen, an IndexError is raised. New in version 3.5.
pop()
Remove and return an element from the right side of the deque. If no elements are present, raises an IndexError.
popleft()
Remove and return an element from the left side of the deque. If no elements are present, raises an IndexError.
remove(value)
Remove the first occurrence of value. If not found, raises a ValueError.
reverse()
Reverse the elements of the deque in-place and then return None. New in version 3.2.
rotate(n=1)
Rotate the deque n steps to the right. If n is negative, rotate to the left. When the deque is not empty, rotating one step to the right is equivalent to d.appendleft(d.pop()), and rotating one step to the left is equivalent to d.append(d.popleft()).
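For example, several of the methods above in action:
>>> from collections import deque
>>> d = deque('ghi')
>>> d.append('j')
>>> d.appendleft('f')
>>> d
deque(['f', 'g', 'h', 'i', 'j'])
>>> d.rotate(1)
>>> d
deque(['j', 'f', 'g', 'h', 'i'])
>>> d.pop()
'i'
>>> d.popleft()
'j'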
Deque objects also provide one read-only attribute:
maxlen
Maximum size of a deque or None if unbounded. New in version 3.1. | |
doc_25870 |
Return a string representing the given POSIX timestamp controlled by an explicit format string. Parameters
format:str
Format string to convert Timestamp to string. See strftime documentation for more information on the format string: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior. Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
>>> ts.strftime('%Y-%m-%d %X')
'2020-03-14 15:32:52' | |
doc_25871 |
Scalar method identical to the corresponding array attribute. Please see ndarray.argmin. | |
doc_25872 | See Migration guide for more details. tf.compat.v1.config.experimental_connect_to_cluster
tf.config.experimental_connect_to_cluster(
cluster_spec_or_resolver, job_name='localhost', task_index=0,
protocol=None, make_master_device_default=True, cluster_device_filters=None
)
Will make devices on the cluster available to use. Note that calling this more than once will work, but will invalidate any tensor handles on the old remote devices. If the given local job name is not present in the cluster specification, it will be automatically added, using an unused port on the localhost. Device filters can be specified to isolate groups of remote tasks to avoid undesired accesses between workers. Workers accessing resources or launching ops / functions on filtered remote devices will result in errors (unknown devices). For any remote task, if no device filter is present, all cluster devices will be visible; if any device filter is specified, it can only see devices matching at least one filter. Devices on the task itself are always visible. Device filters can be partially specified. For example, for a cluster set up for parameter server training, the following device filters might be specified: cdf = tf.config.experimental.ClusterDeviceFilters()
# For any worker, only the devices on PS nodes and itself are visible
for i in range(num_workers):
cdf.set_device_filters('worker', i, ['/job:ps'])
# Similarly for any ps, only the devices on workers and itself are visible
for i in range(num_ps):
cdf.set_device_filters('ps', i, ['/job:worker'])
tf.config.experimental_connect_to_cluster(cluster_def,
cluster_device_filters=cdf)
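Another common pattern (a sketch; the TPU address shown is illustrative) is to connect through a cluster resolver:
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://10.0.0.1:8470')
tf.config.experimental_connect_to_cluster(resolver)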
Args
cluster_spec_or_resolver A ClusterSpec or ClusterResolver describing the cluster.
job_name The name of the local job.
task_index The local task index.
protocol The communication protocol, such as "grpc". If unspecified, will use the default from python/platform/remote_utils.py.
make_master_device_default If True and a cluster resolver is passed, will automatically enter the master task device scope, which indicates the master becomes the default device to run ops. It won't do anything if a cluster spec is passed. Will throw an error if the caller is currently already in some device scope.
cluster_device_filters an instance of tf.train.experimental.ClusterDeviceFilters that specifies device filters for the remote tasks in the cluster. | |
doc_25873 |
Return the sketch parameters for the artist. Returns
tuple or None
A 3-tuple with the following elements:
scale: The amplitude of the wiggle perpendicular to the source line.
length: The length of the wiggle along the line.
randomness: The scale factor by which the length is shrunken or expanded. Returns None if no sketch parameters were set. | |
doc_25874 | sklearn.manifold.spectral_embedding(adjacency, *, n_components=8, eigen_solver=None, random_state=None, eigen_tol=0.0, norm_laplacian=True, drop_first=True) [source]
Project the sample on the first eigenvectors of the graph Laplacian. The adjacency matrix is used to compute a normalized graph Laplacian whose spectrum (especially the eigenvectors associated to the smallest eigenvalues) has an interpretation in terms of minimal number of cuts necessary to split the graph into comparably sized components. This embedding can also ‘work’ even if the adjacency variable is not strictly the adjacency matrix of a graph but more generally an affinity or similarity matrix between samples (for instance the heat kernel of a Euclidean distance matrix or a k-NN matrix). However, care must be taken to always make the affinity matrix symmetric so that the eigenvector decomposition works as expected. Note: Laplacian Eigenmaps is the actual algorithm implemented here. Read more in the User Guide. Parameters
adjacency{array-like, sparse graph} of shape (n_samples, n_samples)
The adjacency matrix of the graph to embed.
n_componentsint, default=8
The dimension of the projection subspace.
eigen_solver{‘arpack’, ‘lobpcg’, ‘amg’}, default=None
The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then 'arpack' is used.
random_stateint, RandomState instance or None, default=None
Determines the random number generator used for the initialization of the lobpcg eigenvectors decomposition when solver == ‘amg’. Pass an int for reproducible results across multiple function calls. See Glossary.
eigen_tolfloat, default=0.0
Stopping criterion for eigendecomposition of the Laplacian matrix when using arpack eigen_solver.
norm_laplacianbool, default=True
If True, then compute normalized Laplacian.
drop_firstbool, default=True
Whether to drop the first eigenvector. For spectral embedding, this should be True as the first eigenvector should be constant vector for connected graph, but for spectral clustering, this should be kept as False to retain the first eigenvector. Returns
embeddingndarray of shape (n_samples, n_components)
The reduced samples. Notes Spectral Embedding (Laplacian Eigenmaps) is most useful when the graph has one connected component. If the graph has many components, the first few eigenvectors will simply uncover the connected components of the graph. References https://en.wikipedia.org/wiki/LOBPCG Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method Andrew V. Knyazev https://doi.org/10.1137%2FS1064827500366124 | |
doc_25875 | See Migration guide for more details. tf.compat.v1.distribute.experimental.CommunicationOptions
tf.distribute.experimental.CommunicationOptions(
bytes_per_pack=0, timeout_seconds=None,
implementation=tf.distribute.experimental.CollectiveCommunication.AUTO
)
This can be passed to methods like tf.distribute.get_replica_context().all_reduce() to optimize collective operation performance. Note that these are only hints, which may or may not change the actual behavior. Some options only apply to certain strategy and are ignored by others. One common optimization is to break gradients all-reduce into multiple packs so that weight updates can overlap with gradient all-reduce. Examples: options = tf.distribute.experimental.CommunicationOptions(
bytes_per_pack=50 * 1024 * 1024,
timeout_seconds=120,
implementation=tf.distribute.experimental.CommunicationImplementation.NCCL
)
grads = tf.distribute.get_replica_context().all_reduce(
'sum', grads, options=options)
optimizer.apply_gradients(zip(grads, vars),
experimental_aggregate_gradients=False)
Args
bytes_per_pack a non-negative integer. Breaks collective operations into packs of certain size. If it's zero, the value is determined automatically. This only applies to all-reduce with MultiWorkerMirroredStrategy currently.
timeout_seconds a float or None, timeout in seconds. If not None, the collective raises tf.errors.DeadlineExceededError if it takes longer than this timeout. Zero disables timeout. This can be useful when debugging hanging issues. This should only be used for debugging since it creates a new thread for each collective, i.e. an overhead of timeout_seconds * num_collectives_per_second more threads. This only works for tf.distribute.experimental.MultiWorkerMirroredStrategy.
implementation a tf.distribute.experimental.CommunicationImplementation. This is a hint on the preferred communication implementation. Possible values include AUTO, RING, and NCCL. NCCL is generally more performant for GPU, but doesn't work for CPU. This only works for tf.distribute.experimental.MultiWorkerMirroredStrategy.
Raises
ValueError When arguments have invalid value. | |
doc_25876 |
Set or clear the mask array. Parameters
maskNone or bool array of length ntri | |
doc_25877 | enable fake rendering of bold text set_bold(bool) -> None Enables the bold rendering of text. This is a fake stretching of the font that doesn't look good on many font types. If possible load the font from a real bold font file. While bold, the font will have a different width than when normal. This can be mixed with the italic and underline modes. Note This is the same as the bold attribute. | |
doc_25878 |
Draw the Artist (and its children) using the given renderer. This has no effect if the artist is not visible (Artist.get_visible returns False). Parameters
rendererRendererBase subclass.
Notes This method is overridden in the Artist subclasses. | |
doc_25879 | See Migration guide for more details. tf.compat.v1.random.gamma, tf.compat.v1.random_gamma
tf.random.gamma(
shape, alpha, beta=None, dtype=tf.dtypes.float32, seed=None, name=None
)
alpha is the shape parameter describing the distribution(s), and beta is the inverse scale parameter(s).
Note: Because internal calculations are done using float64 and casting has floor semantics, we must manually map zero outcomes to the smallest possible positive floating-point value, i.e., np.finfo(dtype).tiny. This means that np.finfo(dtype).tiny occurs more frequently than it otherwise should. This bias can only happen for small values of alpha, i.e., alpha << 1 or large values of beta, i.e., beta >> 1.
The samples are differentiable w.r.t. alpha and beta. The derivatives are computed using the approach described in (Figurnov et al., 2018). Example: samples = tf.random.gamma([10], [0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution
samples = tf.random.gamma([7, 5], [0.5, 1.5])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions
alpha = tf.constant([[1.],[3.],[5.]])
beta = tf.constant([[3., 4.]])
samples = tf.random.gamma([30], alpha=alpha, beta=beta)
# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.
loss = tf.reduce_mean(tf.square(samples))
dloss_dalpha, dloss_dbeta = tf.gradients(loss, [alpha, beta])
# unbiased stochastic derivatives of the loss function
alpha.shape == dloss_dalpha.shape # True
beta.shape == dloss_dbeta.shape # True
Args
shape A 1-D integer Tensor or Python array. The shape of the output samples to be drawn per alpha/beta-parameterized distribution.
alpha A Tensor or Python value or N-D array of type dtype. alpha provides the shape parameter(s) describing the gamma distribution(s) to sample. Must be broadcastable with beta.
beta A Tensor or Python value or N-D array of type dtype. Defaults to 1. beta provides the inverse scale parameter(s) of the gamma distribution(s) to sample. Must be broadcastable with alpha.
dtype The type of alpha, beta, and the output: float16, float32, or float64.
seed A Python integer. Used to create a random seed for the distributions. See tf.random.set_seed for behavior.
name Optional name for the operation.
Returns
samples a Tensor of shape tf.concat([shape, tf.shape(alpha + beta)], axis=0) with values of type dtype. References: Implicit Reparameterization Gradients: Figurnov et al., 2018 (pdf) | |
doc_25880 | A list containing 2-tuples of TestCase instances and strings holding formatted tracebacks. Each tuple represents a test where a failure was explicitly signalled using the TestCase.assert*() methods. | |
doc_25881 | Returns weekday of first day of the month and number of days in month, for the specified year and month. | |
doc_25882 | Fetches all (remaining) rows of a query result, returning a list. Note that the cursor’s arraysize attribute can affect the performance of this operation. An empty list is returned when no rows are available. | |
doc_25883 | '__main__': block or moved to a separate file. Just make sure it’s not called because this will always start a local WSGI server which we do not want if we deploy that application to uWSGI. Starting your app with uwsgi uwsgi is designed to operate on WSGI callables found in python modules. Given a flask application in myapp.py, use the following command: $ uwsgi -s /tmp/yourapplication.sock --manage-script-name --mount /yourapplication=myapp:app
The --manage-script-name will move the handling of SCRIPT_NAME to uwsgi, since it is smarter about that. It is used together with the --mount directive which will make requests to /yourapplication be directed to myapp:app. If your application is accessible at root level, you can use a single / instead of /yourapplication. myapp refers to the name of the file of your flask application (without extension) or the module which provides app. app is the callable inside of your application (usually the line reads app = Flask(__name__)). If you want to deploy your flask application inside of a virtual environment, you need to also add --virtualenv /path/to/virtual/environment. You might also need to add --plugin python or --plugin python3 depending on which python version you use for your project. Configuring nginx A basic flask nginx configuration looks like this: location = /yourapplication { rewrite ^ /yourapplication/; }
location /yourapplication { try_files $uri @yourapplication; }
location @yourapplication {
include uwsgi_params;
uwsgi_pass unix:/tmp/yourapplication.sock;
}
This configuration binds the application to /yourapplication. If you want to have it in the URL root it's a bit simpler: location / { try_files $uri @yourapplication; }
location @yourapplication {
include uwsgi_params;
uwsgi_pass unix:/tmp/yourapplication.sock;
} | |
doc_25884 |
An extra-trees regressor. This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. Read more in the User Guide. Parameters
n_estimatorsint, default=100
The number of trees in the forest. Changed in version 0.22: The default value of n_estimators changed from 10 to 100 in 0.22.
criterion{“mse”, “mae”}, default=”mse”
The function to measure the quality of a split. Supported criteria are “mse” for the mean squared error, which is equal to variance reduction as feature selection criterion, and “mae” for the mean absolute error. New in version 0.18: Mean Absolute Error (MAE) criterion.
max_depthint, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
min_samples_splitint or float, default=2
The minimum number of samples required to split an internal node: If int, then consider min_samples_split as the minimum number. If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split. Changed in version 0.18: Added float values for fractions.
min_samples_leafint or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression. If int, then consider min_samples_leaf as the minimum number. If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node. Changed in version 0.18: Added float values for fractions.
min_weight_fraction_leaffloat, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features{“auto”, “sqrt”, “log2”}, int or float, default=”auto”
The number of features to consider when looking for the best split: If int, then consider max_features features at each split. If float, then max_features is a fraction and round(max_features * n_features) features are considered at each split. If “auto”, then max_features=n_features. If “sqrt”, then max_features=sqrt(n_features). If “log2”, then max_features=log2(n_features). If None, then max_features=n_features. Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.
max_leaf_nodesint, default=None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
min_impurity_decreasefloat, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following: N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed. New in version 0.19.
min_impurity_splitfloat, default=None
Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf. Deprecated since version 0.19: min_impurity_split has been deprecated in favor of min_impurity_decrease in 0.19. The default value of min_impurity_split has changed from 1e-7 to 0 in 0.23 and it will be removed in 1.0 (renaming of 0.25). Use min_impurity_decrease instead.
bootstrapbool, default=False
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
oob_scorebool, default=False
Whether to use out-of-bag samples to estimate the R^2 on unseen data.
n_jobsint, default=None
The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance or None, default=None
Controls 3 sources of randomness: the bootstrapping of the samples used when building trees (if bootstrap=True) the sampling of the features to consider when looking for the best split at each node (if max_features < n_features) the draw of the splits for each of the max_features
See Glossary for details.
verboseint, default=0
Controls the verbosity when fitting and predicting.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest. See the Glossary.
ccp_alphanon-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed. See Minimal Cost-Complexity Pruning for details. New in version 0.22.
max_samplesint or float, default=None
If bootstrap is True, the number of samples to draw from X to train each base estimator. If None (default), then draw X.shape[0] samples. If int, then draw max_samples samples. If float, then draw max_samples * X.shape[0] samples. Thus, max_samples should be in the interval (0, 1). New in version 0.22. Attributes
base_estimator_ExtraTreeRegressor
The child estimator template used to create the collection of fitted sub-estimators.
estimators_list of DecisionTreeRegressor
The collection of fitted sub-estimators.
feature_importances_ndarray of shape (n_features,)
The impurity-based feature importances.
n_features_int
The number of features.
n_outputs_int
The number of outputs.
oob_score_float
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.
oob_prediction_ndarray of shape (n_samples,)
Prediction computed with out-of-bag estimate on the training set. This attribute exists only when oob_score is True. See also
sklearn.tree.ExtraTreeRegressor
Base estimator for this ensemble.
RandomForestRegressor
Ensemble regressor using trees with optimal splits. Notes The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values. References
1
P. Geurts, D. Ernst., and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006. Examples >>> from sklearn.datasets import load_diabetes
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.ensemble import ExtraTreesRegressor
>>> X, y = load_diabetes(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> reg = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(
... X_train, y_train)
>>> reg.score(X_test, y_test)
0.2708...
Methods
apply(X) Apply trees in the forest to X, return leaf indices.
decision_path(X) Return the decision path in the forest.
fit(X, y[, sample_weight]) Build a forest of trees from the training set (X, y).
get_params([deep]) Get parameters for this estimator.
predict(X) Predict regression target for X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
apply(X) [source]
Apply trees in the forest to X, return leaf indices. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
X_leavesndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
decision_path(X) [source]
Return the decision path in the forest. New in version 0.18. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
indicatorsparse matrix of shape (n_samples, n_nodes)
Return a node indicator matrix where non zero elements indicates that the samples goes through the nodes. The matrix is of CSR format.
n_nodes_ptrndarray of shape (n_estimators + 1,)
The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] gives the indicator value for the i-th estimator.
property feature_importances_
The impurity-based feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative. Returns
feature_importances_ndarray of shape (n_features,)
The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
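A quick check, reusing reg and X_train from the example above; since the trees here are not single-node, the importances should sum to 1:
>>> import numpy as np
>>> imp = reg.feature_importances_
>>> imp.shape == (X_train.shape[1],)
True
>>> np.isclose(imp.sum(), 1.0)
True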
fit(X, y, sample_weight=None) [source]
Build a forest of trees from the training set (X, y). Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict regression target for X. The predicted regression target of an input sample is computed as the mean predicted regression targets of the trees in the forest. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix. Returns
yndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | |
doc_25885 | Returns the boundary as a newly allocated Geometry object. | |
doc_25886 | sklearn.cluster.mean_shift(X, *, bandwidth=None, seeds=None, bin_seeding=False, min_bin_freq=1, cluster_all=True, max_iter=300, n_jobs=None) [source]
Perform mean shift clustering of data using a flat kernel. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Input data.
bandwidthfloat, default=None
Kernel bandwidth. If bandwidth is not given, it is determined using a heuristic based on the median of all pairwise distances. This will take quadratic time in the number of samples. The sklearn.cluster.estimate_bandwidth function can be used to do this more efficiently.
seedsarray-like of shape (n_seeds, n_features) or None
Points used as initial kernel locations. If None and bin_seeding=False, each data point is used as a seed. If None and bin_seeding=True, see bin_seeding.
bin_seedingbool, default=False
If true, initial kernel locations are not locations of all points, but rather the location of the discretized version of points, where points are binned onto a grid whose coarseness corresponds to the bandwidth. Setting this option to True will speed up the algorithm because fewer seeds will be initialized. Ignored if seeds argument is not None.
min_bin_freqint, default=1
To speed up the algorithm, accept only those bins with at least min_bin_freq points as seeds.
cluster_allbool, default=True
If true, then all points are clustered, even those orphans that are not within any kernel. Orphans are assigned to the nearest kernel. If false, then orphans are given cluster label -1.
max_iterint, default=300
Maximum number of iterations per seed point before the clustering operation terminates (for that seed point), if it has not converged yet.
n_jobsint, default=None
The number of jobs to use for the computation. This works by running the search from each seed point in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.17: Parallel Execution using n_jobs. Returns
cluster_centersndarray of shape (n_clusters, n_features)
Coordinates of cluster centers.
labelsndarray of shape (n_samples,)
Cluster labels for each point. Notes For an example, see examples/cluster/plot_mean_shift.py. | |
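A minimal usage sketch of mean_shift on toy data (the data and bandwidth are illustrative; exact labelling may differ between versions):
>>> import numpy as np
>>> from sklearn.cluster import mean_shift
>>> X = np.array([[1, 1], [2, 1], [1, 0], [4, 7], [3, 5], [3, 6]])
>>> centers, labels = mean_shift(X, bandwidth=2)
>>> centers.shape
(2, 2)
>>> labels.shape
(6,)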
doc_25887 |
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | |
doc_25888 | Open a bzip2-compressed file in binary mode. If filename is a str or bytes object, open the named file directly. Otherwise, filename should be a file object, which will be used to read or write the compressed data. The mode argument can be either 'r' for reading (default), 'w' for overwriting, 'x' for exclusive creation, or 'a' for appending. These can equivalently be given as 'rb', 'wb', 'xb' and 'ab' respectively. If filename is a file object (rather than an actual file name), a mode of 'w' does not truncate the file, and is instead equivalent to 'a'. If mode is 'w' or 'a', compresslevel can be an integer between 1 and 9 specifying the level of compression: 1 produces the least compression, and 9 (default) produces the most compression. If mode is 'r', the input file may be the concatenation of multiple compressed streams. BZ2File provides all of the members specified by the io.BufferedIOBase, except for detach() and truncate(). Iteration and the with statement are supported. BZ2File also provides the following method:
peek([n])
Return buffered data without advancing the file position. At least one byte of data will be returned (unless at EOF). The exact number of bytes returned is unspecified. Note While calling peek() does not change the file position of the BZ2File, it may change the position of the underlying file object (e.g. if the BZ2File was constructed by passing a file object for filename). New in version 3.3.
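A small round-trip sketch of peek (the file name and payload are illustrative):
>>> import bz2
>>> with bz2.BZ2File('example.bz2', 'w') as f:
...     _ = f.write(b'hello world')
>>> f = bz2.BZ2File('example.bz2', 'r')
>>> f.peek(1)[:1]   # at least one byte, without advancing the file position
b'h'
>>> f.read()
b'hello world'
>>> f.close()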
Changed in version 3.1: Support for the with statement was added. Changed in version 3.3: The fileno(), readable(), seekable(), writable(), read1() and readinto() methods were added. Changed in version 3.3: Support was added for filename being a file object instead of an actual filename. Changed in version 3.3: The 'a' (append) mode was added, along with support for reading multi-stream files. Changed in version 3.4: The 'x' (exclusive creation) mode was added. Changed in version 3.5: The read() method now accepts an argument of None. Changed in version 3.6: Accepts a path-like object. Changed in version 3.9: The buffering parameter has been removed. It was ignored and deprecated since Python 3.0. Pass an open file object to control how the file is opened. The compresslevel parameter became keyword-only. | |
doc_25889 | A one-character string used by the writer to escape the delimiter if quoting is set to QUOTE_NONE and the quotechar if doublequote is False. On reading, the escapechar removes any special meaning from the following character. It defaults to None, which disables escaping. | |
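A minimal sketch of escapechar with QUOTE_NONE (the field values are illustrative):
>>> import csv, io
>>> buf = io.StringIO()
>>> writer = csv.writer(buf, quoting=csv.QUOTE_NONE, escapechar='\\')
>>> _ = writer.writerow(['a,b', 'c'])   # the delimiter inside 'a,b' is escaped
>>> buf.getvalue()
'a\\,b,c\r\n'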
doc_25890 |
Rename categories. Parameters
new_categories:list-like, dict-like or callable
New categories which will replace old categories. list-like: all items must be unique and the number of items in the new categories must match the existing number of categories. dict-like: specifies a mapping from old categories to new. Categories not contained in the mapping are passed through and extra categories in the mapping are ignored. callable : a callable that is called on all items in the old categories and whose return values comprise the new categories.
inplace:bool, default False
Whether or not to rename the categories inplace or return a copy of this categorical with renamed categories. Deprecated since version 1.3.0. Returns
cat:Categorical or None
Categorical with renamed categories or None if inplace=True. Raises
ValueError
If new categories are list-like and do not have the same number of items as the current categories, or do not validate as categories. See also reorder_categories
Reorder categories. add_categories
Add new categories. remove_categories
Remove the specified categories. remove_unused_categories
Remove categories which are not used. set_categories
Set the categories to the specified ones. Examples
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
For dict-like new_categories, extra keys are ignored and categories not in the dictionary are passed through
>>> c.rename_categories({'a': 'A', 'c': 'C'})
['A', 'A', 'b']
Categories (2, object): ['A', 'b']
You may also provide a callable to create the new categories
>>> c.rename_categories(lambda x: x.upper())
['A', 'A', 'B']
Categories (2, object): ['A', 'B'] | |
doc_25891 | See Migration guide for more details. tf.compat.v1.fake_quant_with_min_max_vars_gradient, tf.compat.v1.quantization.fake_quant_with_min_max_vars_gradient
tf.quantization.fake_quant_with_min_max_vars_gradient(
gradients, inputs, min, max, num_bits=8, narrow_range=False, name=None
)
Args
gradients A Tensor of type float32. Backpropagated gradients above the FakeQuantWithMinMaxVars operation.
inputs A Tensor of type float32. Values passed as inputs to the FakeQuantWithMinMaxVars operation. min, max: Quantization interval, scalar floats.
min A Tensor of type float32.
max A Tensor of type float32.
num_bits An optional int. Defaults to 8. The bitwidth of the quantization; between 2 and 8, inclusive.
narrow_range An optional bool. Defaults to False. Whether to quantize into 2^num_bits - 1 distinct values.
name A name for the operation (optional).
Returns A tuple of Tensor objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max). backprops_wrt_input A Tensor of type float32.
backprop_wrt_min A Tensor of type float32.
backprop_wrt_max A Tensor of type float32. | |
doc_25892 | Maps request attributes to environment variables. This works not only for the Werkzeug request object, but also any other class with an environ attribute: >>> class Test(object):
... environ = {'key': 'value'}
... test = environ_property('key')
>>> var = Test()
>>> var.test
'value'
If you pass it a second value, it’s used as the default if the key does not exist; the third one can be a converter that takes a value and converts it. If the converter raises ValueError or TypeError, the default value is used. If no default value is provided, None is used. By default the property is read only. You have to explicitly enable write access by passing read_only=False to the constructor. | |
doc_25893 | Return immediately if the sound driver is busy. Note This flag is not supported on modern Windows platforms. | |
doc_25894 |
Compute the inverse hyperbolic tangent of x. Return the “principal value” (for a description of this, see numpy.arctanh) of arctanh(x). For real x such that abs(x) < 1, this is a real number. If abs(x) > 1, or if x is complex, the result is complex. Finally, x = 1 returns inf and x = -1 returns -inf. Parameters
xarray_like
The value(s) whose arctanh is (are) required. Returns
outndarray or scalar
The inverse hyperbolic tangent(s) of the x value(s). If x was a scalar so is out, otherwise an array is returned. See also numpy.arctanh
Notes For an arctanh() that returns NAN when real x is not in the interval (-1,1), use numpy.arctanh (this latter, however, does return +/-inf for x = +/-1). Examples >>> np.set_printoptions(precision=4)
>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
... sup.filter(RuntimeWarning)
... np.emath.arctanh(np.eye(2))
array([[inf, 0.],
[ 0., inf]])
>>> np.emath.arctanh([1j])
array([0.+0.7854j]) | |
doc_25895 | codecs.BOM_BE
codecs.BOM_LE
codecs.BOM_UTF8
codecs.BOM_UTF16
codecs.BOM_UTF16_BE
codecs.BOM_UTF16_LE
codecs.BOM_UTF32
codecs.BOM_UTF32_BE
codecs.BOM_UTF32_LE
These constants define various byte sequences, being Unicode byte order marks (BOMs) for several encodings. They are used in UTF-16 and UTF-32 data streams to indicate the byte order used, and in UTF-8 as a Unicode signature. BOM_UTF16 is either BOM_UTF16_BE or BOM_UTF16_LE depending on the platform’s native byte order, BOM is an alias for BOM_UTF16, BOM_LE for BOM_UTF16_LE and BOM_BE for BOM_UTF16_BE. The others represent the BOM in UTF-8 and UTF-32 encodings. | |
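A short sketch showing how these constants can be used to check for a UTF-8 signature (the payload is illustrative):
>>> import codecs
>>> codecs.BOM_UTF8
b'\xef\xbb\xbf'
>>> data = codecs.BOM_UTF8 + 'hi'.encode('utf-8')
>>> data.startswith(codecs.BOM_UTF8)
True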
doc_25896 | See Migration guide for more details. tf.compat.v1.raw_ops.RefIdentity
tf.raw_ops.RefIdentity(
input, name=None
)
Args
input A mutable Tensor.
name A name for the operation (optional).
Returns A mutable Tensor. Has the same type as input. | |
doc_25897 | The offset argument must be specified as a timedelta object representing the difference between the local time and UTC. It must be strictly between -timedelta(hours=24) and timedelta(hours=24), otherwise ValueError is raised. The name argument is optional. If specified it must be a string that will be used as the value returned by the datetime.tzname() method. New in version 3.2. Changed in version 3.7: The UTC offset is not restricted to a whole number of minutes. | |
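A minimal sketch of constructing a fixed-offset timezone (the offset and name are illustrative):
>>> from datetime import timezone, timedelta
>>> tz = timezone(timedelta(hours=-5), 'EST')
>>> tz.tzname(None)
'EST'
>>> tz.utcoffset(None)
datetime.timedelta(days=-1, seconds=68400)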
doc_25898 | Returns a value equal to Emin - prec + 1 which is the minimum exponent value for subnormal results. When underflow occurs, the exponent is set to Etiny. | |
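A quick worked check of the Etiny relationship (the context values are illustrative):
>>> from decimal import Context
>>> ctx = Context(prec=3, Emin=-999, Emax=999)
>>> ctx.Etiny()   # Emin - prec + 1 = -999 - 3 + 1
-1001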
doc_25899 |
Multiply one Hermite series by another. Returns the product of two Hermite series c1 * c2. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series P_0 + 2*P_1 + 3*P_2. Parameters
c1, c2array_like
1-D arrays of Hermite series coefficients ordered from low to high. Returns
outndarray
Of Hermite series coefficients representing their product. See also
hermeadd, hermesub, hermemulx, hermediv, hermepow
Notes In general, the (polynomial) product of two C-series results in terms that are not in the Hermite polynomial basis set. Thus, to express the product as a Hermite series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. Examples >>> from numpy.polynomial.hermite_e import hermemul
>>> hermemul([1, 2, 3], [0, 1, 2])
array([14., 15., 28., 7., 6.]) |